Multimodal mobile interaction – making the most of our users’ capabilities by Stephen Brewster, University of Glasgow

Mobile user interfaces are commonly based on techniques developed for desktop computers in the 1970s, often including buttons, sliders, windows and progress bars. These can be hard to use on the move, which limits the way we use our devices and the applications on them. This talk will look at the possibility of moving away from these kinds of interactions towards ones better suited to mobile devices and their dynamic contexts of use, where users need to be able to look where they are going, carry shopping bags and hold on to children. Multimodal (gestural, audio and haptic) interactions provide new ways to use our devices that can be eyes-free and hands-free, allowing users to interact in a ‘head up’ way. These new interactions will facilitate new services, applications and devices that fit better into our daily lives and allow us to do a whole host of new things.


I will discuss some of our work on input using gestures made with the fingers, wrist and head, along with work on output using non-speech audio, 3D sound and tactile displays, in mobile applications such as text entry, camera-phone user interfaces and navigation. I will also discuss some of the issues around the social acceptability of these new interfaces; we have to be careful that the new ways we want people to use devices are socially appropriate and don’t make users feel embarrassed or awkward.


Biography: Stephen is a Professor of Human-Computer Interaction in the Department of Computing Science at the University of Glasgow, UK. His main research interest is multimodal human-computer interaction: sound, haptics and gestures. He has carried out extensive research into earcons, a particular form of non-speech sound. He completed his degree in Computer Science at the University of Hertfordshire in the UK. After a period in industry he did his PhD in the Human-Computer Interaction Group at the University of York, UK, with Dr Alistair Edwards. His thesis, “Providing a structured method for integrating non-speech audio into human-computer interfaces”, is where he developed his interests in earcons and non-speech sound. After finishing his PhD he worked as a European Union research fellow with the European Research Consortium for Informatics and Mathematics (ERCIM), first at VTT Information Technology in Helsinki, Finland (September 1994 to March 1995), and then at SINTEF DELAB in Trondheim, Norway.

Event details

  • When: 20th February 2012, 14:00–15:00
  • Where: Phys Theatre C
  • Series: CS Colloquia Series
  • Format: Colloquium