School Seminar – Barry Brown

Mobility in vivo

Barry Brown, Co-director Mobile Life, University of Stockholm

barbro.tumblr.com
The Mobile VINN Excellence Centre

Abstract
Despite the widespread use of mobile devices, details of mobile technology use ‘in the wild’ have proven difficult to collect. For this study we use video data to gain new insight into the use of mobile computing devices. Screen-captures of smartphone use, combined with video recordings from wearable cameras, allow for the analysis of the detail of device use in a variety of activities and settings. We use this data to describe how mobile device use is threaded into other co-present activities, focusing on the use of maps and internet searches to support users on a day-trip. Close analysis of the video data reveals novel aspects of how gestures are used on touch screens, in that they form a resource for the ongoing coordination of joint action. We go on to describe how the local environment and information in the environment are combined to guide and support action. In conclusion, we argue that the mobility of mobile devices is as much about this interweaving of activity and device use as it is about physical portability.
Event details

  • When: 1st October 2012 15:00 - 16:00
  • Where: Phys Theatre C
  • Format: Seminar

Facing Healthcare’s Future: Designing Facial Expressivity for Robotic Patient Mannequins

Speaker: Laurel Riek, University of Notre Dame
Title: Facing Healthcare’s Future: Designing Facial Expressivity for Robotic Patient Mannequins

Abstract:

In the United States, an estimated 98,000 people are killed and $17.1 billion is lost per year due to medical errors. One way to prevent these errors is to have clinical students engage in simulation-based medical education, to help move the learning curve away from the patient. This training often takes place on human-sized android robots, called high-fidelity patient simulators (HFPS), which are capable of conveying human-like physiological cues (e.g., respiration, heart rate). Training with them can include anything from diagnostic skills (e.g., recognizing sepsis, a failure that recently killed 12-year-old Rory Staunton) to procedural skills (e.g., IV insertion) to communication skills (e.g., breaking bad news). HFPS systems allow students a chance to safely make mistakes within a simulation context without harming real patients, with the goal that these skills will ultimately be transferable to real patients.

While simulator use is a step in the right direction toward safer healthcare, one major challenge and critical technology gap is that none of the commercially available HFPS systems exhibit facial expressions, gaze, or realistic mouth movements, despite the vital importance of these cues in helping providers assess and treat patients. This is a critical omission, because almost all areas of health care involve face-to-face interaction, and there is overwhelming evidence that providers who are skilled at decoding communication cues are better healthcare providers – they have improved outcomes, higher compliance, greater safety, higher satisfaction, and they experience fewer malpractice lawsuits. In fact, communication errors are the leading cause of avoidable patient harm in the US: they are the root cause of 70% of sentinel events, 75% of which lead to a patient dying.

In the Robotics, Health, and Communication (RHC) Lab at the University of Notre Dame, we are addressing this problem by leveraging our expertise in android robotics and social signal processing to design and build a new, facially expressive, interactive HFPS system. In this talk, I will discuss our efforts to date, including: in situ observational studies exploring how individuals, teams, and operators interact with existing HFPS technology; design-focused interviews with simulation center directors and educators in which future HFPS systems are envisioned; and initial software prototyping efforts incorporating novel facial expression synthesis techniques.

Biography:

Dr. Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame. She directs the RHC Lab, and leads research on human-robot interaction, social signal processing, facial expression synthesis, and clinical communication. She received her PhD at the University of Cambridge Computer Laboratory, and prior to that worked for eight years as a Senior Artificial Intelligence Engineer and Roboticist at MITRE.

Event details

  • When: 4th September 2012 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

Forthcoming talk by SICSA Distinguished Visitor

Room 1.33a at 2:00 pm on Friday 7th September 2012

  • Introduction to Grammatical Formalisms for Natural Language Parsing
  • Giorgio Satta, Department of Information Engineering, University of Padua, Italy

Abstract:
In the field of natural language parsing, the syntax of natural languages is modeled by means of formal grammars and automata. Sometimes these formalisms are borrowed from the field of formal language theory and are adapted to the task at hand, as in the case of context-free grammars and their lexicalized versions, where each individual rule is specialized for one or more lexical items. Sometimes these formalisms are newly developed, as in the case of dependency grammars and tree adjoining grammars. In this talk, I will briefly overview several of these models, discussing their mathematical properties and their use in parsing of natural language.
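As background for the context-free grammars mentioned in the abstract, the classic CYK algorithm recognises whether a grammar in Chomsky normal form derives a sentence. The toy grammar and sentences below are an illustration of my own, not material from the talk:

```python
from itertools import product

# Toy grammar in Chomsky normal form (hypothetical, for illustration):
#   S -> NP VP,  VP -> V NP,  NP -> 'she' | 'fish',  V -> 'eats'
rules = {
    ("NP", "VP"): {"S"},
    ("V", "NP"): {"VP"},
}
lexicon = {
    "she": {"NP"},
    "fish": {"NP"},
    "eats": {"V"},
}

def cyk_recognise(words):
    """Return True if the toy grammar derives the word sequence (CYK)."""
    n = len(words)
    # table[i][j] holds the non-terminals that derive words[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexicon.get(w, set()))
    for span in range(2, n + 1):              # length of the span
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):             # split point
                for a, b in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= rules.get((a, b), set())
    return "S" in table[0][n - 1]

print(cyk_recognise("she eats fish".split()))   # True
print(cyk_recognise("eats she fish".split()))   # False
```

Lexicalized and mildly context-sensitive formalisms such as tree adjoining grammars require richer chart items, but the same dynamic-programming idea underlies their parsers.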

Event details

  • When: 7th September 2012 14:00 - 15:00
  • Where: Cole 1.33a
  • Format: Seminar, Talk

Soundcomber: A Stealthy and Context-Aware Sound Trojan for Smartphones

Seminar by Dr Apu Kapadia, Indiana University

We introduce Soundcomber, a “sensory malware” for smartphones that
uses the microphone to steal private information from phone
conversations. Soundcomber is lightweight and stealthy. It uses
targeted profiles to locally analyze portions of speech likely to
contain information such as credit card numbers. It evades known
defenses by transferring small amounts of private data to the malware
server utilizing smartphone-specific covert channels. Additionally, we
present a general defensive architecture that prevents such sensory
malware attacks.

Event details

  • When: 9th August 2012 14:00 - 15:00
  • Where: Cole 1.33
  • Format: Seminar

Helen Purchase on An Exploration of Interface Visual Aesthetics

Speaker: Helen Purchase, University of Glasgow
Title: An Exploration of Interface Visual Aesthetics
Abstract:
The visual design of an interface is not merely an ‘add-on’ to the functionality provided by a system: it is well-known that it can affect user preference, engagement and motivation, but does it have any effect on user performance? Can the efficiency or effectiveness of a system be improved by its visual design? This seminar will report on experiments that investigate whether any such effect can be quantified and tested. Key to this question is the definition of an unambiguous, quantifiable characterisation of an interface’s ‘visual aesthetic’: ways in which this could be determined will be discussed.

About Helen:
Dr Helen Purchase is Senior Lecturer in the School of Computing Science at the University of Glasgow. She has worked in the area of empirical studies of graph layout for several years, and also has research interests in visual aesthetics, task-based empirical design, collaborative learning in higher education, and sketch tools for design. She is currently writing a book on empirical methods for HCI research.

Event details

  • When: 15th May 2012 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

The Results Delusion – Systems Seminar by John Thomson

Systems Seminar – by John Thomson

All welcome.

The Results Delusion

Abstract:

It is often said that any subject which requires the word ‘science’ to be placed somewhere in its name is unlikely to be very scientific. This is unfortunately far too true of systems research in general. At every systems conference, papers are presented which show significant speedups over previous approaches to problem X, yet these improvements are rarely replicated in output from industry. Why? The unpalatable answer is that a significant amount of systems research is the result of self-delusion, bad science and, I suspect occasionally, fraud.

Standards of scientific rigour in CS often fall well below what would be taken for granted in other sciences – particularly with regard to measurement, statistical analysis and replicability of results. I would like to do something about this, and will be presenting the idea for a new CS journal that focuses on this exact problem. Oh, and peer review is gone too! Pitfalls abound. I would love to hear your comments, objections and advice.
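To make the point about statistical analysis concrete, a minimal sketch (with made-up timings, not data from the talk) of how a speedup claim might be reported with a confidence interval rather than a single number:

```python
import statistics

# Hypothetical benchmark timings (seconds) for a baseline and a "new" system.
baseline = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4, 9.7, 10.2]
improved = [8.1, 8.4, 7.9, 8.2, 8.0, 8.3, 8.1, 7.8, 8.2, 8.0]

# Per-run speedup ratios, their mean, and the standard error of the mean
ratios = [b / i for b, i in zip(baseline, improved)]
mean = statistics.mean(ratios)
sem = statistics.stdev(ratios) / len(ratios) ** 0.5
t = 2.262  # two-sided 95% critical value of Student's t, df = 9
print(f"speedup: {mean:.2f}x, 95% CI [{mean - t * sem:.2f}, {mean + t * sem:.2f}]")
```

Reporting the interval makes it visible when a claimed improvement is within measurement noise, which is one of the replicability failures the talk targets.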

Event details

  • When: 27th March 2012 13:00 - 13:45
  • Where: Cole 1.33a
  • Format: Seminar

Autonomy handover and rich interaction on mobile devices by Simon Rogers

Abstract: In this talk I will present some of the work being done in the new Inference, Dynamics, and Interaction group, at the University of Glasgow. In particular, we are interested in using probabilistic inference to improve interaction technology on handheld devices (particularly with touch screens).

I will show how we are using sequential Monte-Carlo techniques to infer distributions over user inputs which can be (1) augmented with applications to provide a smooth handover of control between the human and device and (2) used to extract additional information regarding touch interactions and subsequently improve touch accuracy.
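The sequential Monte Carlo idea can be sketched as a bootstrap particle filter. The one-dimensional touch-tracking toy below is my own illustration (the group's actual models are far richer), assuming a random-walk finger model and Gaussian sensor noise:

```python
import math
import random

def particle_filter_step(particles, observation, process_noise=2.0, obs_noise=8.0):
    """One bootstrap particle-filter update: propagate particles with a
    random walk, weight them against the noisy touch observation, compute
    the posterior mean, then resample."""
    # Propagate: random-walk dynamics for the finger position
    particles = [p + random.gauss(0.0, process_noise) for p in particles]
    # Weight: Gaussian likelihood of the observed touch position
    weights = [math.exp(-((p - observation) ** 2) / (2 * obs_noise ** 2))
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    estimate = sum(p * w for p, w in zip(particles, weights))
    # Resample: multinomial resampling to concentrate on likely positions
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, estimate

random.seed(1)
true_finger_pos = 100.0                          # "true" 1-D touch location
particles = [random.uniform(0.0, 200.0) for _ in range(500)]
for _ in range(20):
    noisy_touch = true_finger_pos + random.gauss(0.0, 8.0)
    particles, estimate = particle_filter_step(particles, noisy_touch)
print(f"estimated touch position: {estimate:.1f}")
```

The particle cloud, rather than a single point estimate, is what makes the smooth handover described above possible: the device can act autonomously while the distribution is tight and defer to the user when it is diffuse.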

There is a short bio on my webpage:
http://www.dcs.gla.ac.uk/~srogers

Event details

  • When: 19th March 2012 14:00 - 15:00
  • Where: Phys Theatre C
  • Series: CS Colloquia Series
  • Format: Colloquium, Seminar

A large-scale study of information needs by Karen Church

In recent years, mobile phones have evolved from simple communication devices to sophisticated personal computers enabling anytime, anywhere access to a wealth of information. Understanding the types of information needs that occur while mobile and how these needs are addressed is crucial in order to design and develop novel services that are tailored to mobile users.

To date, studies exploring information needs, in particular mobile needs, have been relatively small in terms of scope, scale and duration. The goal of this work is to investigate information needs on a much larger scale and to explore, through quantitative analysis, how those needs are addressed. To this end, we conducted one of the most comprehensive studies of information needs to date, spanning a 3-month period and involving over 100 users. The study employed an intelligent experience sampling algorithm, an online diary and SMS technology to gather insights into the types of needs that occur from day to day.

Our results not only complement earlier studies but also shed new light on the differences between mobile and non-mobile information needs, as well as the impact that demographics like gender have on the types of needs that arise and on the means chosen to satisfy those needs. Finally, we point to a number of design implications for enriching the future experiences of mobile users based on our findings.

Event details

  • When: 5th March 2012 14:00 - 15:00
  • Where: Phys Theatre C
  • Series: CS Colloquia Series
  • Format: Colloquium, Seminar

Alan Frisch Seminar Video

From October to December 2011, the School of Computer Science hosted Dr Alan Frisch from the University of York as a SICSA Distinguished Visiting Fellow. While here, Dr Frisch kindly agreed to give a seminar entitled “Decade of Progress in Constraint Modelling & Reformulation: The Quest for Abstraction and Automation”, the video of which can now be found here.

During his Fellowship Dr Frisch also visited, and spoke at, the universities of Dundee, Edinburgh and Glasgow.

Event details

  • When: 3rd October 2011 - 22nd December 2011
  • Format: Seminar

Proactive contextual information retrieval by Samuel Kaski

A talk on “Proactive contextual information retrieval” by Samuel Kaski of Aalto University and University of Helsinki, Finland.

Abstract:

In proactive information retrieval the ultimate goal is to seamlessly access relevant multimodal information in a context-sensitive way. Usually explicit queries are not available or are insufficient, and the alternative is to try to infer users’ interests from implicit feedback signals, such as clickstreams or eye tracking. We have studied how to infer relevance of texts and images to the user from the eye movement patterns. The interests, formulated as an implicit query, can then be used in further searches. I will discuss our new machine learning-based results in this field, including data glasses-based augmented reality interface to contextual information, and timeline browsers for life logs.

Event details

  • When: 23rd January 2012 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: CS Colloquia Series
  • Format: Seminar