Honorary Professor John Stasko

Dean Dearle, Professor Quigley with Professor Stasko

Professor John Stasko, Associate Chair of the School of Interactive Computing in the College of Computing at Georgia Tech, has been appointed as an Honorary Professor in the School of Computer Science. The appointment follows the SICSA Distinguished Visiting Fellowship John was awarded, which allowed him to take part in the SACHI/Big Data Lab summer school on Big Data Information Visualisation in St Andrews. This industry-linked summer school has successfully paved the way for a new generation of students to explore Data Science and Information Visualisation.
Professor Stasko at the Big Data Info Vis Summer School 2013
John is a newly elected Fellow of the IEEE for his contributions to information visualization, visual analytics and human-computer interaction. Professor Quigley, who has known John for the past 14 years, said: “I’m delighted John will join us as an Honorary Professor here in St Andrews. His world-leading research and experience in Information Visualisation will be of great benefit to our staff, students and colleagues across the University. I first met John when I was a PhD student and organiser of a Software Visualisation conference we held in Sydney. Then, as now, his enthusiasm, breadth of knowledge and desire to engage and work with others marks him out as a true intellectual thought leader. We hope to see John here regularly in the years ahead and we will be working with him on new projects.”

ITS & UIST 2013: “Influential and Ground Breaking”

These are words used by the Co-Chair of UIST 2013, Dr Shahram Izadi of Microsoft Research Cambridge (UK), to describe one of the prestigious conferences taking place in St Andrews this week.

“UIST is the leading conference on new user interface trends and technologies. Some of the most influential and ground breaking work on graphical user interfaces, multi-touch, augmented reality, 3D user interaction and sensing was published at this conference.

It is now in its 26th year, and this is the first time it has been hosted in the UK. We are very excited to be hosting a packed program at the University of St Andrews. The program includes great papers, demos, posters, a wet and wonderful student innovation competition, and a great keynote on flying robots.”

Ivan Poupyrev, principal research scientist at Disney Research in Pittsburgh, described hosting UIST in St Andrews as “an acknowledgment of some great research in human-computer interaction that is carried out by research groups in Scotland, including the University of St Andrews.”

Two major events taking place this week are the 8th ACM International Conference on Interactive Tabletops and Surfaces (ITS), and the 26th ACM Symposium on User Interface Software and Technology (UIST), hosted by the Human Computer Interaction Group in the School of Computer Science at the University of St Andrews.

Read more about the events in the University News and local media.

Dr Per Ola Kristensson tipped to change the world

Dr Per Ola Kristensson is one of 35 top young innovators named today by the prestigious MIT Technology Review.

For over a decade, the global media company has recognised a list of exceptionally talented technologists whose work has great potential to “transform the world.”

Dr Kristensson (34) joins a stellar list of technological talent. Previous winners include Larry Page and Sergey Brin, the cofounders of Google; Mark Zuckerberg, the cofounder of Facebook; Jonathan Ive, the chief designer of Apple; and David Karp, the creator of Tumblr.

The award recognises Per Ola’s work at the intersection of artificial intelligence and human-computer interaction. He builds intelligent interactive systems that enable people to be more creative, expressive and satisfied in their daily lives, focusing on text entry interfaces and other interaction techniques.

One example is the gesture keyboard, which enables users to quickly and accurately write text on mobile devices by sliding a finger across a touchscreen keyboard. To write “the”, the user touches the T key, slides to the H key, then to the E key, and then lifts the finger. The result is a shorthand gesture for the word “the”, which a recognition algorithm can identify as the user’s intended word. Today, gesture keyboards are found in products such as ShapeWriter, Swype and T9 Trace, and come pre-installed on Android phones. Per Ola’s own ShapeWriter, Inc. iPhone app, ranked the 8th best app by Time Magazine in 2008, had a million downloads in the first few months.

Two factors explain the success of the gesture keyboard: speed and ease of adoption. Gesture keyboards are faster than regular touchscreen keyboards because expert users can quickly gesture a word by direct recall from motor memory. The gesture keyboard is easy to adopt because it lets users smoothly and unconsciously transition from slow visual tracing to this fast recall directly from motor memory. Novice users spell out words by sliding their finger from letter to letter using visually guided movements. With repetition, the gesture gradually builds up in the user’s motor memory until it can be quickly recalled.

A gesture keyboard works by matching the gesture made on the keyboard to a set of possible words, and then deciding which word is intended by looking at both the gesture and the content of the sentence being entered. Doing this can require checking as many as 60,000 possible words; doing it quickly on a mobile phone required developing new techniques for searching, indexing and caching.
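As a toy illustration of this matching step, the sketch below compares a finger trace against each candidate word’s ideal trace through the key centres and picks the closest match. The key layout, the resampling, the distance metric and the tiny vocabulary are all simplifying assumptions made for illustration; real recognisers such as SHARK2 combine separate shape and location channels with a language model and the indexing and caching techniques mentioned above.

```python
import math

# Approximate centre of each key on a QWERTY layout, as (column, row).
KEYS = {c: (x, y)
        for y, row in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"])
        for x, c in enumerate(row)}

def resample(pts, n=32):
    """Return n points evenly spaced along the polyline pts."""
    if len(pts) < 2:
        return list(pts) * n
    segs = list(zip(pts, pts[1:]))
    lens = [math.dist(a, b) for a, b in segs]
    total = sum(lens) or 1e-9
    out = []
    for k in range(n):
        target = total * k / (n - 1)  # arc length of the k-th sample
        for (a, b), length in zip(segs, lens):
            if target <= length or (a, b) == segs[-1]:
                t = min(target / length, 1.0) if length else 0.0
                out.append((a[0] + t * (b[0] - a[0]),
                            a[1] + t * (b[1] - a[1])))
                break
            target -= length
    return out

def shape_distance(gesture, word):
    """Mean point-wise distance between the gesture and the word's ideal trace."""
    g = resample(gesture)
    w = resample([KEYS[c] for c in word])
    return sum(math.dist(p, q) for p, q in zip(g, w)) / len(g)

def recognise(gesture, vocabulary):
    """Pick the vocabulary word whose ideal trace best matches the gesture."""
    return min(vocabulary, key=lambda word: shape_distance(gesture, word))
```

A perfect trace through T, H and E would thus be recognised as “the” against any small vocabulary containing it.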

An example of a gesture recognition algorithm is available here as an interactive Java demo: http://pokristensson.com/increc.html

There are many ways to improve gesture keyboard technology. One way to improve recognition accuracy is to use more sophisticated gesture recognition algorithms to compute the likelihood that a user’s gesture matches the shape of a word. Many researchers work on this problem. Another way is to use better language models. These models can be dramatically improved by identifying large bodies of text similar to what users want to write. This is often achieved by mining the web. Another way to improve language models is to use better estimation algorithms. For example, smoothing is the process of assigning some of the probability mass of the language model to word sequences the language model estimation algorithm has not seen. Smoothing tends to improve the language model’s ability to accurately predict words.
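To make the smoothing idea concrete, here is a minimal sketch using add-one (Laplace) smoothing of a bigram model; the corpus and vocabulary are invented for illustration, and production systems use stronger estimators such as Kneser-Ney, but the effect is the same: word sequences never seen during estimation still keep some probability mass.

```python
from collections import Counter

def bigram_model(corpus_tokens, vocab):
    """Add-one (Laplace) smoothed bigram probabilities.

    Every count is incremented by one, so no bigram has zero
    probability, at the cost of shifting mass away from observed ones.
    """
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    v = len(vocab)

    def prob(prev, word):
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + v)

    return prob

# A toy corpus, purely for illustration.
corpus = "the cat sat on the mat".split()
vocab = set(corpus)
p = bigram_model(corpus, vocab)
# The unseen bigram "cat on" still gets non-zero probability mass,
# while the observed "the cat" remains more likely than "the sat".
```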

An interesting point about gesture keyboards is how they may disrupt other areas of computer input. Recently we have developed a system that enables a user to enter text via speech recognition, a gesture keyboard, or a combination of both. Users can fix speech recognition errors by simply gesturing the intended word. The system will automatically realize there is a speech recognition error, locate it, and replace the erroneous word with the result provided by the gesture keyboard. This is made possible by fusing the probabilistic information provided by the speech recogniser and the gesture keyboard.
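A minimal sketch of this kind of fusion, under the simplifying assumption that each channel yields a per-word probability distribution (real systems, such as the one described in the Interspeech 2011 paper below, fuse richer hypothesis structures):

```python
def fuse(speech_probs, gesture_probs):
    """Combine two per-word distributions by multiplying and renormalising.

    A toy stand-in for probabilistic fusion: a word favoured by both
    channels dominates, even if one channel alone would pick wrongly.
    """
    words = set(speech_probs) | set(gesture_probs)
    eps = 1e-6  # floor so a word missing from one channel is not ruled out
    scores = {w: speech_probs.get(w, eps) * gesture_probs.get(w, eps)
              for w in words}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

# Illustrative numbers: speech mishears "there" as "their",
# but the user's correction gesture strongly favours "there".
speech = {"their": 0.6, "there": 0.3, "they're": 0.1}
gesture = {"there": 0.8, "their": 0.15, "they're": 0.05}
fused = fuse(speech, gesture)
```

Here the fused distribution overturns the speech recogniser’s top choice, which is exactly how a gesture can repair a speech recognition error.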

Per Ola also works in the areas of multi-display systems, eye-tracking systems, and crowdsourcing and human computation. He takes on undergraduate and postgraduate project students and PhD students. If you are interested in working with him, you are encouraged to read http://pokristensson.com/phdposition.html

References:

Kristensson, P.O. and Zhai, S. 2004. SHARK2: a large vocabulary shorthand writing system for pen-based computers. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST 2004). ACM Press: 43-52.

(http://dx.doi.org/10.1145/1029632.1029640)

Kristensson, P.O. and Vertanen, K. 2011. Asynchronous multimodal text entry using speech and gesture keyboards. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011). ISCA: 581-584.

(http://www.isca-speech.org/archive/interspeech_2011/i11_0581.html)

Full Press Release

SACHI Seminar: Team-buddy: investigating a long-lived robot companion

SACHI seminar

Title: Team-buddy: investigating a long-lived robot companion

Speaker: Ruth Aylett, Heriot-Watt University, Edinburgh

Abstract:
In the EU-funded LIREC project, which finished last year, Heriot-Watt University investigated how a long-lived, multi-embodied (robot, graphical) companion might be incorporated into a work environment as a team buddy, running a final continuous three-week study. This talk gives an overview of the technology issues and some of the surprises from various user studies.

Bio:
Ruth Aylett is Professor of Computer Science in the School of Mathematical and Computer Sciences at Heriot-Watt University. She researches intelligent graphical characters, affective agent models, human-robot interaction, and interactive narrative. She was a founder of the International Conference on Intelligent Virtual Agents and was a partner in the large HRI project LIREC (see lirec.eu). She has more than 200 publications, including book chapters, journal articles and refereed conference papers, and coordinates the Autonomous Affective Agents group at Heriot-Watt University.

Event details

  • When: 10th September 2013 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

Jacob Eisenstein: Interactive Topic Visualization for Exploratory Text Analysis

Abstract:
Large text document collections are increasingly important in a variety of domains; examples of such collections include news articles, streaming social media, scientific research papers, and digitized literary documents. Existing methods for searching and exploring these collections focus on surface-level matches to user queries, ignoring higher-level thematic structure. Probabilistic topic models are a machine learning technique for finding themes that recur across a corpus, but there has been little work on how they can support end users in exploratory analysis. In this talk I will survey the topic modeling literature and describe our ongoing work on using topic models to support digital humanities research. In the second half of the talk, I will describe TopicViz, an interactive environment that combines traditional search and citation-graph exploration with a dust-and-magnet layout that links documents to the latent themes discovered by the topic model.
This work is in collaboration with:
Polo Chau, Jaegul Choo, Niki Kittur, Chang-Hyun Lee, Lauren Klein, Jarek Rossignac, Haesun Park, Eric P. Xing, and Tina Zhou

Bio:
Jacob Eisenstein is an Assistant Professor in the School of Interactive Computing at Georgia Tech. He works on statistical natural language processing, focusing on social media analysis, discourse, and latent variable models. Jacob was a Postdoctoral researcher at Carnegie Mellon and the University of Illinois. He completed his Ph.D. at MIT in 2008, winning the George M. Sprowls dissertation award.

Event details

  • When: 23rd July 2013 13:00 - 14:00
  • Where: Cole 1.33
  • Format: Seminar

Computing Reviews’ Notable Books and Articles 2012

ACM Computing Reviews has selected a recent survey paper written by Per Ola Kristensson and colleagues as one of the Notable Computing Books and Articles of 2012.

The list consists of nominations from Computing Reviews reviewers, Computing Reviews category editors, the editors in chief of journals covered by Computing Reviews, and others in the computing community.

The selected survey paper is entitled “Foundational Issues in Touch-Surface Stroke Gesture Design — An Integrative Review” and was published in the journal Foundations and Trends in Human-Computer Interaction in 2012.

Gesture-based Natural User Interfaces

Research into personalised gestures for user interfaces carried out by Miguel Nacenta, Per Ola Kristensson and two of our recent MSc students, Yemliha Kamber and Yizhou Qiang, featured in the University News last week. You can read more about their research in the MIT Technology Review and Fast Company’s Co.DESIGN. Their results raise the question of whether pre-programmed gestures need a personal touch to make them more effective.

MIT Technology Review – Jakub Dostal

MIT Technology Review has written a comprehensive article about Jakub Dostal’s Diff Displays, which track visual changes on unattended displays. Jakub presented the work two weeks ago at the 18th ACM International Conference on Intelligent User Interfaces in Santa Monica, California, USA. The Diff Displays project is part of Jakub’s PhD thesis on proximity-aware user interfaces. His PhD is supervised by Prof. Aaron Quigley and Dr Per Ola Kristensson.

SACHI Conference: Changing Perspectives at CHI 2013

CHI is the premier international conference on human computer interaction, and this year’s event is looking to be the most exciting yet for the St Andrews Computer Human Interaction (SACHI) research group in the School of Computer Science.

Seven members of SACHI will attend CHI in Paris this April to present three full papers, one note, one work-in-progress paper and five workshop papers. In addition, members of SACHI are involved in organising two workshops and one special interest group meeting. Two academics in SACHI are Associate Chairs for respective sub-committees and two PhD students will be serving as student volunteers at the 2013 conference. A very busy time for all!

For more complete details on these papers, notes etc. please see http://sachi.cs.st-andrews.ac.uk/2013/02/sachi-changing-perspectives-at-chi-2013/

Please note that the School of Computer Science is going to be introducing a new Masters in HCI from September this year.

Event details

  • When: 27th April 2013 - 2nd May 2013
  • Format: Conference