Facing Healthcare’s Future: Designing Facial Expressivity for Robotic Patient Mannequins

Speaker: Laurel Riek, University of Notre Dame
Title: Facing Healthcare’s Future: Designing Facial Expressivity for Robotic Patient Mannequins

Abstract:

In the United States, medical errors cause an estimated 98,000 deaths and $17.1 billion in losses every year. One way to prevent these errors is to have clinical students engage in simulation-based medical education, moving the learning curve away from the patient. This training often takes place on human-sized android robots called high-fidelity patient simulators (HFPS), which are capable of conveying human-like physiological cues (e.g., respiration, heart rate). Training with them can cover anything from diagnostic skills (e.g., recognizing sepsis, a missed diagnosis that recently killed 12-year-old Rory Staunton) to procedural skills (e.g., IV insertion) to communication skills (e.g., breaking bad news). HFPS systems give students a chance to make mistakes safely within a simulation context without harming real patients, with the goal that these skills will ultimately transfer to real patients.

While simulator use is a step in the right direction toward safer healthcare, one major challenge and critical technology gap is that none of the commercially available HFPS systems exhibit facial expressions, gaze, or realistic mouth movements, despite the vital importance of these cues in helping providers assess and treat patients. This is a critical omission, because almost all areas of healthcare involve face-to-face interaction, and there is overwhelming evidence that providers who are skilled at decoding communication cues are better healthcare providers: they achieve improved outcomes, higher compliance, greater safety, and higher satisfaction, and they face fewer malpractice lawsuits. In fact, communication errors are the leading cause of avoidable patient harm in the US: they are the root cause of 70% of sentinel events, 75% of which lead to a patient dying.

In the Robotics, Health, and Communication (RHC) Lab at the University of Notre Dame, we are addressing this problem by leveraging our expertise in android robotics and social signal processing to design and build a new, facially expressive, interactive HFPS system. In this talk, I will discuss our efforts to date, including: in situ observational studies exploring how individuals, teams, and operators interact with existing HFPS technology; design-focused interviews with simulation center directors and educators, in which future HFPS systems are envisioned; and initial software prototyping efforts incorporating novel facial expression synthesis techniques.

Biography:

Dr. Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame. She directs the RHC Lab, and leads research on human-robot interaction, social signal processing, facial expression synthesis, and clinical communication. She received her PhD at the University of Cambridge Computer Laboratory, and prior to that worked for eight years as a Senior Artificial Intelligence Engineer and Roboticist at MITRE.

Event details

  • When: 4th September 2012 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

TayViz – The bi-monthly meeting of the Tayside and Fife network for data visualisation

Talks:

Information Visualization Research in the SACHI group

Speaker: Aaron Quigley

Abstract:

Aaron will provide a quick overview of the emerging information visualisation research and future plans of the SACHI group.

A few examples of visualisation in computational systems biology of anti-inflammatory and anticancer drug actions

Speaker: Alexey Goltsov

Abstract:

Visualisation is a key aspect of computational systems biology, used to analyse the results of in silico modelling and to generate and test hypotheses. Some examples of visualisation in computational systems biology of cellular responses to drug intervention are discussed. First, a method for visualising the complex dynamics of enzyme kinetics is presented and illustrated with a dynamic visualisation of cyclooxygenase enzyme function and its inhibition by the anti-inflammatory drug aspirin. Second, a 3D dynamic visualisation of thrombosis in a blood vessel is demonstrated, based on an agent-based model of blood clotting and the effect of anticoagulation drugs. Third, visualisation in computational systems biology of cancer is discussed and illustrated with visualisation methods for identifying promising drug targets and for analysing the changing sensitivity of tumours to anticancer therapy under different oncogenic mutations.
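As background for the enzyme-kinetics part, here is a minimal sketch of the standard competitive-inhibition rate law that such visualisations typically animate. This is a textbook model, not the speaker's actual one (aspirin in fact inhibits cyclooxygenase irreversibly), and all function names and parameter values are illustrative:

```python
# Textbook Michaelis-Menten kinetics with a competitive inhibitor,
# as a generic backdrop for enzyme-inhibition visualisation.
# NOTE: illustrative only; not the model presented in the talk.

def mm_rate(s: float, vmax: float = 1.0, km: float = 0.5) -> float:
    """Uninhibited Michaelis-Menten rate: v = Vmax*[S] / (Km + [S])."""
    return vmax * s / (km + s)

def mm_rate_competitive(s: float, i: float, ki: float = 0.2,
                        vmax: float = 1.0, km: float = 0.5) -> float:
    """Competitive inhibition raises the apparent Km by (1 + [I]/Ki)."""
    return vmax * s / (km * (1 + i / ki) + s)

# A dynamic visualisation would animate v over ranges of [S] and [I];
# here we only check a few points of the dose-response surface.
for s in (0.1, 0.5, 2.0):
    free, inhibited = mm_rate(s), mm_rate_competitive(s, i=0.4)
    assert inhibited < free  # the inhibitor always slows the reaction
```

At i = 0 the two rates coincide, and as [S] grows large both approach Vmax, which is the signature of competitive (as opposed to non-competitive) inhibition.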

FatFonts: Combining the Symbolic and Visual Aspects of Numbers

Speaker: Miguel Nacenta

Abstract:

In this talk I present a new visualisation technique that makes use of typography. FatFonts is a technique for visualizing quantitative data that bridges the gap between numeric and visual representations. FatFonts are based on Arabic numerals but, unlike regular numeric typefaces, the amount of ink (dark pixels) used for each digit is proportional to its quantitative value. This enables accurate reading of the numerical data while preserving an overall visual context. During the talk, I discuss the challenges of this approach, its possible uses, and how to use it in visualizations.
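The core idea, value-proportional ink, can be illustrated with a small toy. This is a sketch of the principle only; the function names and the 3x3 cell grid are my own illustration, not the actual FatFonts typeface:

```python
# A toy sketch of the FatFonts principle: each digit d is drawn on a
# small grid, and the amount of "ink" (filled cells) equals d. Real
# FatFonts shape this ink into a legible Arabic numeral; here the ink
# is simply packed row by row to keep the example short.

def fatfont_cell(d: int, size: int = 3) -> list[str]:
    """Render digit d (0..size*size) as rows of '#' (ink) and '.' (blank)."""
    if not 0 <= d <= size * size:
        raise ValueError("digit out of range for this grid")
    cells = ['#'] * d + ['.'] * (size * size - d)
    return [''.join(cells[r * size:(r + 1) * size]) for r in range(size)]

def ink(glyph: list[str]) -> int:
    """Count filled cells, i.e. the glyph's ink."""
    return sum(row.count('#') for row in glyph)

# The defining property: reading a digit's value is equivalent to
# measuring its ink, so a grid of such glyphs doubles as a heatmap.
for d in range(10):
    assert ink(fatfont_cell(d)) == d
```

Because darkness and value agree, a table of such glyphs can be read precisely up close and scanned as a density map from a distance, which is the gap FatFonts aims to bridge.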

Bio:

Miguel Nacenta is a Lecturer in the School of Computer Science. He is interested in new interaction form factors (e.g., tabletops, multi-touch, multi-display environments), perception, and information visualisation.

Event details

  • When: 15th May 2012 18:30 - 20:30
  • Where: Cole 1.33a
  • Format: Talk

Helen Purchase on An Exploration of Interface Visual Aesthetics

Speaker: Helen Purchase, University of Glasgow
Title: An Exploration of Interface Visual Aesthetics
Abstract:
The visual design of an interface is not merely an ‘add-on’ to the functionality provided by a system: it is well-known that it can affect user preference, engagement and motivation, but does it have any effect on user performance? Can the efficiency or effectiveness of a system be improved by its visual design? This seminar will report on experiments that investigate whether any such effect can be quantified and tested. Key to this question is the definition of an unambiguous, quantifiable characterisation of an interface’s ‘visual aesthetic’: ways in which this could be determined will be discussed.

About Helen:
Dr Helen Purchase is Senior Lecturer in the School of Computing Science at the University of Glasgow. She has worked in the area of empirical studies of graph layout for several years, and also has research interests in visual aesthetics, task-based empirical design, collaborative learning in higher education, and sketch tools for design. She is currently writing a book on empirical methods for HCI research.

Event details

  • When: 15th May 2012 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

Honourable mentions for two ACM research papers

Per Ola Kristensson has two recent papers published in top ACM conferences that have received honourable mentions:

Crowdsourcing research featured in the New Scientist

The latest issue of New Scientist magazine features Per Ola Kristensson’s work on using crowdsourcing and online web sources to create better statistical language models for AAC devices: Crowdsourcing improves predictive texting.

The research paper was published in the Association for Computational Linguistics’ 2011 Conference on Empirical Methods in Natural Language Processing. It is published using the open access model and can be read here. The language models are publicly released and can be found here.

Special software to trawl thousands of historic archives to uncover Empire trade boom

Professor Aaron Quigley’s research on exploratory visualisation allows historians to trace the flow of a wide range of natural resources around the globe.
By working with world experts in text mining within the Scottish Informatics and Computer Science Alliance and domain experts at York University, Canada, we can bridge the research divide and answer historical questions on trading.

Full news article

Augmentative and Alternative Communication across the Lifespan of Individuals with Complex Communication Needs

Speaker: Annalu Waller, University of Dundee

Abstract:

Augmentative and alternative communication (AAC) attempts to augment natural speech, or to provide alternative ways to communicate, for people with limited or no speech. Technology has played an increasing role in AAC. At the simplest level, people with complex communication needs (CCN) can cause a prestored message to be spoken by activating a single switch. At the most sophisticated level, literate users can generate novel text. Although some individuals with CCN become effective communicators, most do not – they tend to be passive communicators, responding mainly to questions or prompts at a one- or two-word level. Conversational skills such as initiation, elaboration and storytelling are seldom observed.
One reason for these reduced levels of communicative ability is that AAC technology provides the user with a purely physical link to speech output. The user is required to have sufficient language ability and physical stamina to translate what they want to say into the coded sequence of operations needed to produce the desired output. Instead of placing all the cognitive load on the user, AAC devices can be designed to support the cognitive and language needs of individuals with CCN, taking into account the need to scaffold communication as children develop into adulthood. A range of research projects, including systems to support personal narrative and language play, will be used to illustrate the application of Human Computer Interaction (HCI) and Natural Language Generation (NLG) in the design and implementation of electronic AAC devices.

About Annalu:
Dr Annalu Waller is a Senior Lecturer in the School of Computing at the University of Dundee. She has worked in the field of Augmentative and Alternative Communication (AAC) since 1985, designing communication systems for and with nonspeaking individuals. She established the first AAC assessment and training centre in South Africa in 1987 before coming to Dundee in 1989. Her PhD developed narrative technology support for adults with acquired dysphasia following stroke. Her primary research areas are human computer interaction, natural language generation, personal narrative and assistive technology. In particular, she focuses on empowering end users, including disabled adults and children, by involving them in the design and use of technology. She manages a number of interdisciplinary research projects with industry and practitioners from rehabilitation engineering, special education, speech and language therapy, nursing and dentistry. She is on the editorial boards of several academic journals and sits on the boards of a number of national and international organisations representing disabled people.

Event details

  • When: 11th October 2011 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar