Event details
- When: 30th April 2013 13:00 - 14:00
- Where: Cole 1.33a
- Format: Seminar
SACHI seminar
Title: Digital tabletops: in the lab and in the wild
Speaker: Patrick Olivier, Culture Lab, Newcastle University
Abstract:
The purpose of this talk is to introduce Culture Lab’s past and current interaction design research into digital tabletops. The talk will span not only our interaction techniques and technologies research (including pen-based interaction, authentication and actuated tangibles) but also our application domains (education, play therapy and creative practice), by reference to four Culture Lab tabletop studies: (1) Digital Mysteries (Ahmed Kharrufa’s classroom-based higher order thinking skills application); (2) Waves (Jon Hook’s expressive performance environment for VJs); (3) Magic Land (Olga Pykhtina’s tabletop play therapy tool); and (4) StoryCrate (Tom Bartindale’s collaborative TV production tool). I’ll focus on a number of specific challenges for digital tabletop research, including selection of appropriate design approaches, the role and character of evaluation, the importance of appropriate “in the wild” settings, and avoiding the trap of simple remediation when working in multidisciplinary teams.
Bio:
Patrick Olivier is a Professor of Human-Computer Interaction in the School of Computing Science at Newcastle University. He leads the Digital Interaction Group in Culture Lab, Newcastle’s centre for interdisciplinary practice-based research in digital technologies. The group’s main interest is interaction design for everyday life settings, and Patrick is particularly interested in the application of pervasive computing to education, creative practice, and health and wellbeing, as well as the development of new technologies for interaction (such as novel sensing platforms and interaction techniques).
Abstract:
Modern computer workstation setups regularly include multiple displays in various configurations. With such multi-monitor or multi-display setups we have reached a stage where we have more display real estate available than we can comfortably attend to. This talk will present the results of an exploration of techniques for visualising display changes in multi-display environments. Apart from four subtle gaze-dependent techniques for visualising change on unattended displays, it will cover the technology used to enable quick and cost-effective deployment to workstations. An evaluation of the technology and of the techniques themselves will also be presented. The talk will conclude with a brief discussion of the challenges in evaluating subtle interaction techniques.
The Sinhalese language (which falls into the family of Indo-Aryan languages) is spoken, read and written by over 22 million users worldwide (and by almost all the citizens of Sri Lanka). The language itself is very rich and complex, with over 60 base characters and 13 vowel variations for each, and with contextual phrases and idioms that are far more diverse than those of Western languages. Nevertheless, very little work has been done on creating efficient, user-friendly text entry mechanisms for Sinhalese, on both computers and mobile devices. At present, despite attempts to standardize input methods, no single mainstream method of text entry has surfaced.
TBA
NOW RESCHEDULED to March 19, 2013
The relationship between multimodal exhibits and museum visitors’ experience: engagement with a topic, social engagement, and engagement with the exhibit itself.
Congratulations to Per Ola and colleagues Ha Trinh, Annalu Waller, Keith Vertanen and Vicki L. Hanson. Their paper “iSCAN: a phoneme-based predictive communication aid for nonspeaking individuals” received the ACM SIGACCESS Best Student Paper Award at the 14th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2012) earlier this year.
When: Wednesday 12th of September, 9:30am – 5pm (with a one-hour break for lunch)
Where: Sub-honours lab in Jack Cole building (0.35)
As part of this competition, you may be offered an opportunity to participate in a Human-Computer Interaction study on subtle interaction. Participation in this study is completely voluntary.
There will be two competitive categories:
HCI study participants:
1st prize: 7” Samsung Galaxy Tab 2
2nd prize: £50 Amazon voucher
3rd prize: £20 Amazon voucher
Everyone:
1st prize: £50 Amazon voucher
2nd prize: £20 Amazon voucher
3rd prize: £10 Amazon voucher
We will try to include as many programming languages as is reasonable, so if you have any special requests, let us know.
If you have one, bring a laptop in case we run out of lab computers!
If you have any questions, please email Jakub at jd67@st-andrews.ac.uk
Speaker: Laurel Riek, University of Notre Dame
Title: Facing Healthcare’s Future: Designing Facial Expressivity for Robotic Patient Mannequins
Abstract:
In the United States, an estimated 98,000 people are killed and $17.1 billion is lost each year due to medical errors. One way to prevent these errors is to have clinical students engage in simulation-based medical education, to help move the learning curve away from the patient. This training often takes place on human-sized android robots, called high-fidelity patient simulators (HFPS), which are capable of conveying human-like physiological cues (e.g., respiration, heart rate). Training with them can include anything from diagnostic skills (e.g., recognizing sepsis, a failure that recently killed 12-year-old Rory Staunton) to procedural skills (e.g., IV insertion) to communication skills (e.g., breaking bad news). HFPS systems give students a chance to safely make mistakes within a simulation context without harming real patients, with the goal that these skills will ultimately be transferable to real patients.
While simulator use is a step in the right direction toward safer healthcare, one major challenge and critical technology gap is that none of the commercially available HFPS systems exhibit facial expressions, gaze, or realistic mouth movements, despite the vital importance of these cues in helping providers assess and treat patients. This is a critical omission, because almost all areas of health care involve face-to-face interaction, and there is overwhelming evidence that providers who are skilled at decoding communication cues are better healthcare providers – they have improved outcomes, higher compliance, greater safety, higher satisfaction, and they experience fewer malpractice lawsuits. In fact, communication errors are the leading cause of avoidable patient harm in the US: they are the root cause of 70% of sentinel events, 75% of which lead to a patient dying.
In the Robotics, Health, and Communication (RHC) Lab at the University of Notre Dame, we are addressing this problem by leveraging our expertise in android robotics and social signal processing to design and build a new, facially expressive, interactive HFPS system. In this talk, I will discuss our efforts to date, including: in situ observational studies exploring how individuals, teams, and operators interact with existing HFPS technology; design-focused interviews with simulation center directors and educators, in which future HFPS systems are envisioned; and initial software prototyping efforts incorporating novel facial expression synthesis techniques.
Biography:
Dr. Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame. She directs the RHC Lab, and leads research on human-robot interaction, social signal processing, facial expression synthesis, and clinical communication. She received her PhD at the University of Cambridge Computer Laboratory, and prior to that worked for eight years as a Senior Artificial Intelligence Engineer and Roboticist at MITRE.