At the Edge by Alan Dix, University of Birmingham

Abstract:
From buying plane tickets to eGovernment, participation in consumer and civic society is predicated on continuous connectivity and copious computation. And yet for many at the edges of society, the elderly, the poor, the disabled, and those in rural areas, poor access to digital technology makes them more marginalised, potentially cut off from modern citizenship. I spent three and a half months last summer walking over a thousand miles around the margins of Wales in order to experience more directly some of the issues facing those on the physical edges of a modern nation, who are often also at the social and economic margins. I will talk about some of the theoretical and practical issues raised, including how designing software with constrained resources is more challenging, but potentially more rewarding, than assuming everyone lives with Silicon Valley levels of connectivity.

Bio:
Alan is Professor of Computing at the University of Birmingham and Senior Researcher at Talis, based in Birmingham; but, when not in Birmingham or elsewhere, he lives on Tiree, a remote island off the west coast of Scotland.

Alan’s career has included mathematical modelling for agricultural crop sprayers, COBOL programming, submarine design and intelligent lighting. However, he is best known for his work in Human-Computer Interaction over three decades, including his well-known HCI textbook and some of the earliest work in formal methods, mobile interaction, and privacy in HCI. He has worked in posts across the university sector, as well as a period as founder-director of two dotcom companies, aQtive (1998) and vfridge (2000), which between them attracted £850,000 of venture capital funding. He currently works part-time for the University of Birmingham and is on the REF Panel for Computer Science. He also works part-time for Talis, which, inter alia, provides the reading list software used at St Andrews.

His interests and research methods remain, as ever, eclectic, ranging from formal methods to technical creativity and the modelling of regret. At present he is completing a book, TouchIT, about physicality in design, working with musicologists on next-generation digital archives, envisioning how learning analytics can inform and maybe transform university teaching, and working on various projects connected with communication and energy use on Tiree and in rural communities.

Last year he completed a walk around Wales as an exploration of technical issues ‘at the edge’, the topic of his seminar.

Event details

  • When: 6th May 2014 14:00 - 15:00
  • Where: Maths Theatre B
  • Format: Seminar

Highly Deformable Mobile Devices & Future Mobile Phones by Johannes Schöning, Hasselt University


Abstract:
In this talk I will present the concept of highly deformable mobile devices that can be transformed into various special-purpose controls, bringing physical controls to mobile devices (https://www.youtube.com/watch?v=zLe52PFZrtc). I will demonstrate different interaction techniques enabled by this concept and report results from an in-depth study. Our findings show that these physical controls provide several benefits over the traditional touch interaction techniques commonly used on mobile devices. In addition, I will give insights into a large-scale study that logged detailed application usage information from over 4,100 users of Android-powered mobile devices.

Bio:
Johannes Schöning is a professor of computer science with a focus on HCI at Hasselt University, working within the Expertise Centre for Digital Media (EDM), the ICT research institute of Hasselt University. In addition, he is a visiting lecturer at UCL within the Intel Collaborative Research Institute for Sustainable Cities.

His research interests are new methods and novel mobile interfaces for navigating through spatial information. In general, he develops, designs and tests user interfaces that help people solve daily tasks more enjoyably and/or effectively. This includes the development of mobile augmented reality applications, interactive surfaces and tabletops, and other “post-desktop” interfaces. His research has been recognised with several prizes and awards, such as the ACM Eugene Lawler Award and the Vodafone Research Award for his PhD. In addition, Johannes serves as a junior fellow of the Gesellschaft für Informatik.

This seminar is part of our ongoing series from researchers in HCI. See here for our current schedule.

Event details

  • When: 8th April 2014 14:00 - 15:00
  • Where: Maths Theatre B
  • Format: Seminar

Should Technology be more mindful? by Yvonne Rogers, UCL

Abstract:
We are increasingly living in our digital bubbles. Even when physically together – as families and friends in our living rooms, outdoors and in public places – we have our eyes glued to our own phones, tablets and laptops. The new generation of ‘all about me’ health and fitness gadgets, which is becoming more mainstream, is making it worse. Do we really need smart shoes that tell us when we are being lazy and glasses that tell us what we can and cannot eat? Is this what we want from technology – ever more forms of digital narcissism, virtual nagging and data addiction? In contrast, I argue for a radical rethink of our relationship with future digital technologies: one that inspires us, through shared devices, tools and data, to be more creative, playful and thoughtful of each other and our surrounding environments.

Bio:
Yvonne Rogers is a Professor of Interaction Design, the director of UCLIC and a deputy head of the Computer Science department at UCL. Her research interests are in the areas of ubiquitous computing, interaction design and human-computer interaction. A central theme is how to design interactive technologies that can enhance life by augmenting and extending everyday, learning and work activities. This involves informing, building and evaluating novel user experiences through creating and assembling a diversity of pervasive technologies.

This seminar is part of our ongoing series from researchers in HCI. See here for our current schedule.

Event details

  • When: 11th March 2014 14:00 - 15:00
  • Where: Maths Theatre B
  • Format: Seminar

How human-human dialogue research can lead us to understand speech behaviours in human-computer dialogue: The case of lexical alignment by Benjamin Cowan, University of Birmingham.

Abstract:
Dialogue is a dynamic social activity. Research has consistently shown that our dialogue partners influence our speech choices, whereby we converge (or align) on aspects such as lexical choice and syntax. With the development of more natural computer dialogue partners and the increase of speech as an interaction modality in many devices and applications, it is important that we understand what affects how we behave linguistically in such dialogue interactions with computers. My talk will focus on my current work looking at how design choices and computer partner behaviours affect alignment in human-computer dialogue, and how this can inform the theory-based debate over what leads to such behaviour.

Bio:
Dr Benjamin Cowan is a Research Fellow at the University of Birmingham’s Human-Computer Interaction Centre, based in the School of Computer Science. His research sits at the juncture between Psychology and Computer Science, studying how interface design affects user perceptions, emotions and behaviours in human-computer interactions. Specifically, he studies how design and system actions affect users’ linguistic behaviours, as well as the causes and predictors of user anxiety towards social system contributions.

Event details

  • When: 25th February 2014 14:00 - 15:00
  • Where: Maths Theatre B
  • Format: Seminar

Ubicomp, Touch and Gaze by Hans Gellersen

Abstract:
Touch input and two-handed interaction were intensively studied in the mid-1980s, but it has taken 20 years for these ideas to reach the mainstream with the advent of multi-touch interfaces. Gaze has been studied as an interaction modality for almost as long and appears to be on the brink of wider use. This talk will present recent work that reconsiders touch and gaze to address challenges in ubiquitous computing: interaction across personal devices and large displays, and spontaneous interaction with displays using our eyes only.

Bio:
Hans Gellersen is a Professor of Interactive Systems in the School of Computing & Communications at Lancaster University. His research interests are in ubiquitous computing and in systems and technologies for human-computer interaction. He has contributed work on topics including location, context and activity sensing, device association and cross-device interaction, and interfaces that blend physical and digital interaction. In his recent work, he is particularly interested in eye movement analysis: as a source of contextual information on human activity, interest and well-being, and as a resource for interaction beyond the lab. Hans is closely involved with the UbiComp conference series, which he co-founded in 1999, and has served on the Editorial Boards of Personal and Ubiquitous Computing and IEEE Pervasive Computing. He holds a PhD from Karlsruhe University.

Event details

  • When: 11th February 2014 14:00 - 15:00
  • Where: Maths Theatre B
  • Format: Seminar

Dr Per Ola Kristensson: A visionary that will shape the future

Congratulations to our very own Per Ola Kristensson. Earlier this year he was named one of the people most likely to change the world in the prestigious MIT Technology Review list of Innovators Under 35.

Described as a visionary, he appears today at number 11 in IMPACT 100. He is a lecturer in Human Computer Interaction here in the School of Computer Science, where he leads the Intelligent Interactive Systems Group.

IMPACT 100 PANEL VIEW:

People like Per Ola Kristensson are the shapers of the future where social interaction and new technology are concerned. Recognition at this level from an organisation like MIT is hugely impressive.

Honorary Professor John Stasko

Dean Dearle and Professor Quigley with Professor Stasko

Professor John Stasko, Associate Chair of the School of Interactive Computing in the College of Computing at Georgia Tech, has been appointed as an Honorary Professor in the School of Computer Science. This appointment follows a SICSA Distinguished Visiting Fellowship awarded to John, which allowed him to participate in the SACHI/Big Data Lab summer school in Big Data Information Visualisation in St Andrews. This industry-linked summer school has successfully paved the way for a new generation of students to explore Data Science and Information Visualisation.
Professor Stasko at the Big Data Info Vis Summer School 2013
John is a newly elected fellow of the IEEE for his contributions to information visualization, visual analytics and human-computer interaction. Professor Quigley, who has known John for the past 14 years, said, “I’m delighted John will join us as an Honorary Professor here in St Andrews. His world-leading research and experience in Information Visualisation will be of great benefit to our staff, students and colleagues across the University. I first met John when I was a PhD student and organiser of a Software Visualisation conference we held in Sydney. Then, as now, his enthusiasm, breadth of knowledge and desire to engage and work with others marks him out as a true intellectual thought leader. We hope to see John here regularly in the years ahead and we will be working with him on new projects.”

ITS & UIST 2013: “Influential and Ground Breaking”

These are words used by the Co-Chair of UIST 2013, Dr Shahram Izadi of Microsoft Research Cambridge (UK), to describe one of the prestigious conferences taking place in St Andrews this week.

“UIST is the leading conference on new user interface trends and technologies. Some of the most influential and ground breaking work on graphical user interfaces, multi-touch, augmented reality, 3D user interaction and sensing was published at this conference.

It is now in its 26th year, and the first time it has been hosted in the UK. We are very excited to be hosting a packed program at the University of St Andrews. The program includes great papers, demos, posters, a wet and wonderful student innovation competition, and a great keynote on flying robots.”

Ivan Poupyrev, principal research scientist at Disney Research in Pittsburgh, described hosting UIST in St Andrews as “an acknowledgment of some great research in human-computer interaction that is carried out by research groups in Scotland, including the University of St Andrews.”

Two major events taking place this week are the 8th ACM International Conference on Interactive Tabletops and Surfaces (ITS), and the 26th ACM Symposium on User Interface Software and Technology (UIST), hosted by the Human Computer Interaction Group in the School of Computer Science at the University of St Andrews.

Read more about the events in the University News and local media.

Dr Per Ola Kristensson tipped to change the world

Dr Per Ola Kristensson is one of 35 top young innovators named today by the prestigious MIT Technology Review.

For over a decade, the global media company has recognised a list of exceptionally talented technologists whose work has great potential to “transform the world.”

Dr Kristensson (34) joins a stellar list of technological talent. Previous winners include Larry Page and Sergey Brin, the cofounders of Google; Mark Zuckerberg, the cofounder of Facebook; Jonathan Ive, the chief designer of Apple; and David Karp, the creator of Tumblr.

The award recognises Per Ola’s work at the intersection of artificial intelligence and human-computer interaction. He builds intelligent interactive systems that enable people to be more creative, expressive and satisfied in their daily lives, focusing on text entry interfaces and other interaction techniques.

One example is the gesture keyboard, which enables users to quickly and accurately write text on mobile devices by sliding a finger across a touchscreen keyboard. To write “the”, the user touches the T key, slides to the H key, then to the E key, and then lifts the finger. The result is a shorthand gesture for the word “the”, which a recognition algorithm can identify as the user’s intended word. Today, gesture keyboards are found in products such as ShapeWriter, Swype and T9 Trace, and come pre-installed on Android phones. Per Ola’s own ShapeWriter, Inc. iPhone app, ranked the 8th best app by Time Magazine in 2008, had a million downloads in its first few months.

Two factors explain the success of the gesture keyboard: speed and ease of adoption. Gesture keyboards are faster than regular touchscreen keyboards because expert users can quickly gesture a word by direct recall from motor memory. The gesture keyboard is easy to adopt because it enables users to smoothly and unconsciously transition from slow visual tracing to this fast recall directly from motor memory. Novice users spell out words by sliding their finger from letter to letter using visually guided movements. With repetition, the gesture gradually builds up in the user’s motor memory until it can be quickly recalled.

A gesture keyboard works by matching the gesture made on the keyboard against a set of possible words, then deciding which word is intended by looking at both the gesture and the contents of the sentence being entered. Doing this can require checking as many as 60,000 possible words; doing it quickly on a mobile phone required developing new techniques for searching, indexing and caching.
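The SHARK2 paper in the references below describes the real recognition algorithm; purely as an illustration of the idea, here is a minimal Python sketch. Everything in it is hypothetical: the key layout, the toy lexicon, the unigram prior and the weight alpha are invented for the example.

import math

# Hypothetical key centres for a QWERTY layout, one unit per key.
KEY_POS = {ch: (float(x), float(y))
           for y, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
           for x, ch in enumerate(row)}

def word_path(word):
    """Ideal template path: straight lines through the word's key centres."""
    return [KEY_POS[c] for c in word]

def resample(path, n=32):
    """Resample a polyline to n points evenly spaced along its arc length."""
    if len(path) < 2:
        return [path[0]] * n
    dists = [0.0]
    for a, b in zip(path, path[1:]):
        dists.append(dists[-1] + math.dist(a, b))
    total = dists[-1] or 1e-9
    out = []
    for k in range(n):
        target, i = k * total / (n - 1), 1
        while i < len(dists) - 1 and dists[i] < target:
            i += 1
        t = (target - dists[i - 1]) / ((dists[i] - dists[i - 1]) or 1e-9)
        (x0, y0), (x1, y1) = path[i - 1], path[i]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def shape_distance(gesture, word, n=32):
    """Mean pointwise distance between the gesture and the word's template."""
    pairs = zip(resample(gesture, n), resample(word_path(word), n))
    return sum(math.dist(p, q) for p, q in pairs) / n

def decode(gesture, lexicon, prior, alpha=0.5):
    """Choose the word with the lowest combined shape-plus-language cost."""
    return min(lexicon,
               key=lambda w: shape_distance(gesture, w)
                             - alpha * math.log(prior.get(w, 1e-9)))

# "the" and "thee" trace nearly identical shapes, so the language-model
# prior breaks the tie; "tie" is ruled out by its different shape.
lexicon = ["the", "thee", "tie"]
prior = {"the": 0.05, "thee": 0.0001, "tie": 0.002}
gesture = [KEY_POS["t"], KEY_POS["h"], KEY_POS["e"]]
print(decode(gesture, lexicon, prior))  # -> the

The toy example also shows why the sentence context matters: words whose paths pass through the same keys produce nearly identical shapes, and only the language model can tell them apart.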

An example of a gesture recognition algorithm is available here as an interactive Java demo: http://pokristensson.com/increc.html

There are many ways to improve gesture keyboard technology. One way to improve recognition accuracy is to use more sophisticated gesture recognition algorithms to compute the likelihood that a user’s gesture matches the shape of a word; many researchers work on this problem. Another way is to use better language models. These models can be dramatically improved by identifying large bodies of text similar to what users want to write, often by mining the web. Another way to improve language models is to use better estimation algorithms. For example, smoothing is the process of assigning some of the probability mass of the language model to word sequences the language model estimation algorithm has never seen. Smoothing tends to improve the language model’s ability to accurately predict words.
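As a small, hedged illustration of smoothing, here is a toy add-k (Lidstone) smoothed bigram model in Python. This is a textbook technique, not the estimator used in any particular product; the corpus and the constant k are invented for the example, and real systems use far larger corpora and more sophisticated methods.

from collections import Counter

def bigram_lm(tokens, k=0.1):
    """Add-k smoothed bigram model: reserves probability for unseen pairs."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)

    def prob(prev, word):
        # The +k and +k*V terms shift some probability mass away from seen
        # bigrams so that bigrams never seen in the corpus still score > 0.
        return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * vocab_size)

    return prob

p = bigram_lm("the cat sat on the mat".split())
print(p("the", "cat"))  # seen bigram: relatively high probability
print(p("the", "dog"))  # never seen: small but non-zero, thanks to smoothing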

An interesting point about gesture keyboards is how they may disrupt other areas of computer input. Recently we developed a system that enables a user to enter text via speech recognition, a gesture keyboard, or a combination of both. Users can fix speech recognition errors by simply gesturing the intended word. The system automatically realises there is a speech recognition error, locates it, and replaces the erroneous word with the result provided by the gesture keyboard. This is made possible by fusing the probabilistic information provided by the speech recogniser and the gesture keyboard.
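The Interspeech 2011 paper in the references below describes the actual system; the following toy Python sketch only illustrates the general idea of probabilistic fusion. The candidate words and probabilities are invented, and a real system would fuse richer recognition hypotheses than single-word distributions.

def fuse(speech_probs, gesture_probs):
    """Combine two word distributions by multiplying and renormalising.

    Where the speech recogniser is uncertain, a confident gesture
    distribution dominates the fused result, and vice versa.
    """
    words = set(speech_probs) | set(gesture_probs)
    joint = {w: speech_probs.get(w, 1e-6) * gesture_probs.get(w, 1e-6)
             for w in words}
    total = sum(joint.values())
    return {w: p / total for w, p in joint.items()}

# The speech recogniser slightly prefers "there"; the user's gesture
# strongly indicates "their", so the fused distribution corrects the error.
speech  = {"there": 0.55, "their": 0.40, "they're": 0.05}
gesture = {"their": 0.90, "there": 0.08, "thee": 0.02}
fused = fuse(speech, gesture)
print(max(fused, key=fused.get))  # -> their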

Per Ola also works in the areas of multi-display systems, eye-tracking systems, and crowdsourcing and human computation. He takes on undergraduate and postgraduate project students and PhD students. If you are interested in working with him, you are encouraged to read http://pokristensson.com/phdposition.html

References:

Kristensson, P.O. and Zhai, S. 2004. SHARK2: a large vocabulary shorthand writing system for pen-based computers. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST 2004). ACM Press: 43-52.

(http://dx.doi.org/10.1145/1029632.1029640)

Kristensson, P.O. and Vertanen, K. 2011. Asynchronous multimodal text entry using speech and gesture keyboards. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011). ISCA: 581-584.

(http://www.isca-speech.org/archive/interspeech_2011/i11_0581.html)

Full Press Release

SACHI Seminar: Team-buddy: investigating a long-lived robot companion


Speaker: Ruth Aylett, Heriot-Watt University, Edinburgh

Abstract:
In the EU-funded LIREC project, which finished last year, Heriot-Watt University investigated how a long-lived, multi-embodied (robot and graphical) companion might be incorporated into a work environment as a team buddy, running a final continuous three-week study. This talk gives an overview of the technology issues and some of the surprises from various user studies.

Bio:
Ruth Aylett is Professor of Computer Science in the School of Mathematical and Computer Sciences at Heriot-Watt University. She researches intelligent graphical characters, affective agent models, human-robot interaction, and interactive narrative. She was a founder of the International Conference on Intelligent Virtual Agents and was a partner in the large HRI project LIREC (see lirec.eu). She has more than 200 publications, including book chapters, journal articles and refereed conference papers, and coordinates the Autonomous Affective Agents group at Heriot-Watt University; see here.

Event details

  • When: 10th September 2013 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar