Honorary Professor John Stasko

Dean Dearle and Professor Quigley with Professor Stasko

Professor John Stasko, Associate Chair of the School of Interactive Computing in the College of Computing at Georgia Tech, has been appointed as an Honorary Professor in the School of Computer Science. The appointment follows a SICSA Distinguished Visiting Fellowship awarded to John, which enabled him to take part in the SACHI/Big Data Lab summer school on Big Data Information Visualisation in St Andrews. This industry-linked summer school has successfully paved the way for a new generation of students to explore Data Science and Information Visualisation.
Professor Stasko at the Big Data Info Vis Summer School 2013
John is a newly elected Fellow of the IEEE for his contributions to information visualization, visual analytics and human-computer interaction. Professor Quigley, who has known John for the past 14 years, said, “I’m delighted John will join us as an Honorary Professor here in St Andrews. His world-leading research and experience in Information Visualisation will be of great benefit to our staff, students and colleagues across the University. I first met John when I was a PhD student and organiser of a Software Visualisation conference we held in Sydney. Then, as now, his enthusiasm, breadth of knowledge and desire to engage and work with others marks him out as a true intellectual thought leader. We hope to see John here regularly in the years ahead and we will be working with him on new projects.”

Sapere project comes to a successful end

Over the past three years the School has been involved in the Sapere project, funded by the European Commission. Sapere has been looking at new ways to build large-scale pervasive systems, moving away from traditional client/server approaches to explore biochemically-inspired system models in which services and users can “bond” spontaneously as they encounter each other in the real world. Sapere was co-ordinated by the University of Modena and Reggio Emilia and, alongside St Andrews, involved the University of Bologna, the University of Geneva, and Johannes Kepler University Linz. Sapere had its final review this week and was ranked as “excellent … the project has even exceeded expectations”.

The project achieved considerable visibility by being deployed at the Vienna City Marathon to provide services including runner tracking and guiding spectators to possible viewing opportunities. The deployment had several thousand users downloading and using a smartphone app throughout the event, as well as several large attention-sensing public displays that responded directly to people stopping to look at them.

The scientific highlights of Sapere include developing a formal model of spontaneous interactions; building a middleware platform based on these ideas; developing a catalogue of useful patterns that describe co-ordinated interactions at a high level; and creating several exciting new algorithms for context awareness and situation recognition. This last activity was led from St Andrews by Simon Dobson, Juan Ye, and Graeme Stevenson, and allowed us to recognise activities going on in “busy” spaces where multiple things are happening simultaneously — a problem that has been extremely resistant to solution until now.

Sapere shows that pervasive systems are now “ready for prime time,” and that even research that seems highly speculative and challenging can lead to results that affect people’s lives directly. We’ve thoroughly enjoyed working with our collaborators, and we’ll certainly be looking to take these ideas forward in new projects and directions.

Here’s a non-technical overview video introducing Sapere:

[Video: Sapere project overview]

ITS & UIST 2013: “Influential and Ground Breaking”

These are words used by the Co-Chair of UIST 2013, Dr Shahram Izadi of Microsoft Research Cambridge (UK), to describe one of the prestigious conferences taking place in St Andrews this week.

“UIST is the leading conference on new user interface trends and technologies. Some of the most influential and ground breaking work on graphical user interfaces, multi-touch, augmented reality, 3D user interaction and sensing was published at this conference.

It is now in its 26th year, and this is the first time it has been hosted in the UK. We are very excited to be hosting a packed program at the University of St Andrews. The program includes great papers, demos, posters, a wet and wonderful student innovation competition, and a great keynote on flying robots.”

Ivan Poupyrev, principal research scientist at Disney Research in Pittsburgh, described hosting UIST in St Andrews as “an acknowledgment of some great research in human-computer interaction that is carried out by research groups in Scotland, including the University of St Andrews.”

Two major events taking place this week are the 8th ACM International Conference on Interactive Tabletops and Surfaces (ITS), and the 26th ACM Symposium on User Interface Software and Technology (UIST), hosted by the Human Computer Interaction Group in the School of Computer Science at the University of St Andrews.

Read more about the events in the University News and local media.

Big Data Research Featured in MIT Technology Review

A survey article written by Jonathan Ward and Adam Barker has been featured in the MIT Technology Review.

Undefined By Data: A Survey of Big Data Definitions surveys the various definitions of big data offered by the world’s biggest and most influential high-tech organisations, and then attempts to distill from all this noise a definition that everyone can agree on. The article was picked up by the MIT Technology Review and has fostered a lively discussion around a coherent definition; according to Topsy, a social media analytics service, the article has been retweeted over 400 times.

Creating High Value Cloud Services at ScotSoft Forum

On August 29th Gordon Baxter, Derek Wang and Ian Sommerville (St Andrews), along with Ian Allison (RGU), manned the stand for the SFC-funded project “Creating High Value Cloud Services” at ScotlandIS’s annual ScotSoft Forum. There were over 500 people at the event, which was held in Edinburgh’s Sheraton Grand Hotel. The programme of talks that took place throughout the afternoon included presentations by Larry Cable (Salesforce) and a keynote by Vint Cerf (Google).

Find out more about the project on Services to the Cloud and The Cloudscape blog.

Constraint Modelling Winners

Medal given to prize winning team
At the annual conference on Constraint Programming, CP 2013, Ian Gent and Ian Miguel were members of the winning team in the “First International Lightning Model and Solve” competition. Many thanks to the organisers of the event and especially to Allen van Gelder of UCSC for having the idea of entering a manual team and for inviting us to join in.

This was a quick event – just two hours – and the team’s strategy was to solve problems by hand, using pen and paper. This was reflected in their team name, “Mano”.

Ian Gent has written a much longer blog post about the experience, why the team won, and why it is not bad news for constraint programming.


Dr Per Ola Kristensson tipped to change the world

Dr Per Ola Kristensson is one of 35 top young innovators named today by the prestigious MIT Technology Review.

For over a decade, the global media company has recognised a list of exceptionally talented technologists whose work has great potential to “transform the world.”

Dr Kristensson (34) joins a stellar list of technological talent. Previous winners include Larry Page and Sergey Brin, the cofounders of Google; Mark Zuckerberg, the cofounder of Facebook; Jonathan Ive, the chief designer of Apple; and David Karp, the creator of Tumblr.

The award recognises Per Ola’s work at the intersection of artificial intelligence and human-computer interaction. He builds intelligent interactive systems that enable people to be more creative, expressive and satisfied in their daily lives, focusing on text entry interfaces and other interaction techniques.

One example is the gesture keyboard, which enables users to quickly and accurately write text on mobile devices by sliding a finger across a touchscreen keyboard. To write “the”, the user touches the T key, slides to the H key, then to the E key, and then lifts the finger. The result is a shorthand gesture for the word “the”, which a recognition algorithm can identify as the user’s intended word. Today, gesture keyboards are found in products such as ShapeWriter, Swype and T9 Trace, and come pre-installed on Android phones. Per Ola’s own ShapeWriter, Inc. iPhone app, ranked the 8th best app by Time Magazine in 2008, had a million downloads in the first few months.
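To make the idea concrete, here is a minimal, hypothetical sketch in Python (not the SHARK2 algorithm from the reference below): each dictionary word is turned into the polyline through its key centres on an assumed QWERTY layout, the word template and the user’s trace are both resampled to a fixed number of points, and words are ranked by average point-to-point distance. The layout coordinates, lexicon and trace are invented purely for illustration.

```python
import math

# Hypothetical key centres on a unit-spaced, row-staggered QWERTY grid (an assumption,
# not a real layout specification).
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEYS = {c: (x + 0.5 * r, float(r)) for r, row in enumerate(ROWS) for x, c in enumerate(row)}

def resample(points, n=32):
    """Resample a polyline to n evenly spaced points using linear interpolation."""
    dists = [0.0]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x2 - x1, y2 - y1))
    total = dists[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(points) - 2 and dists[j + 1] < target:
            j += 1
        seg = dists[j + 1] - dists[j] or 1.0
        t = (target - dists[j]) / seg
        (x1, y1), (x2, y2) = points[j], points[j + 1]
        out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def shape_distance(gesture, word):
    """Mean point-to-point distance between the gesture and the word's key-centre template."""
    template = resample([KEYS[c] for c in word])
    g = resample(gesture)
    return sum(math.hypot(gx - wx, gy - wy)
               for (gx, gy), (wx, wy) in zip(g, template)) / len(g)

# A tiny illustrative lexicon; a noisy trace roughly over T -> H -> E should rank "the" first.
lexicon = ["the", "then", "they", "tie", "toe"]
trace = [KEYS["t"], (4.8, 0.4), KEYS["h"], (4.0, 0.5), KEYS["e"]]
print(sorted(lexicon, key=lambda w: shape_distance(trace, w))[:3])
```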

Two factors explain the success of the gesture keyboard: speed, and ease of adoption. Gesture keyboards are faster than regular touchscreen keyboards because expert users can quickly gesture a word by direct recall from motor memory. The gesture keyboard is easy to adopt because it enables users to smoothly and unconsciously transition from slow visual tracing to this fast recall directly from motor memory. Novice users spell out words by sliding their finger from letter to letter using visually guided movements. With repetition, the gesture gradually builds up in the user’s motor memory until it can be quickly recalled.

A gesture keyboard works by matching the gesture made on the keyboard against a set of possible words, and then deciding which word is intended by looking at both the gesture and the contents of the sentence being entered. Doing this can require checking as many as 60,000 possible words, and doing it quickly on a mobile phone required developing new techniques for searching, indexing, and caching.
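Continuing the toy sketch above (it reuses shape_distance, lexicon and trace from that block), one hedged illustration of this decision step is to prune the candidate set by indexing words on their first and last letters, then score the survivors by combining a crude shape likelihood with a bigram language-model probability. The pruning trick and all probabilities are assumptions made for illustration, not the techniques used in the shipped products.

```python
import math
from collections import defaultdict

# Index words by (first letter, last letter) so that only candidates whose endpoints
# match the gesture's start and end keys are scored: a simple pruning assumption.
def build_index(words):
    index = defaultdict(list)
    for w in words:
        index[(w[0], w[-1])].append(w)
    return index

def decode(gesture, start_key, end_key, prev_word, index, bigram_prob):
    """Pick the word maximising log P(shape | word) + log P(word | previous word)."""
    best, best_score = None, float("-inf")
    for word in index.get((start_key, end_key), []):
        shape_ll = -shape_distance(gesture, word)             # crude shape log-likelihood
        lm_ll = math.log(bigram_prob.get((prev_word, word), 1e-6))
        score = shape_ll + lm_ll
        if score > best_score:
            best, best_score = word, score
    return best

# Invented bigram probabilities: after "on", "the" is far more likely than "tie" or "toe".
bigram_prob = {("on", "the"): 0.2, ("on", "tie"): 0.001, ("on", "toe"): 0.001}
index = build_index(lexicon)
print(decode(trace, "t", "e", "on", index, bigram_prob))
```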

An example of a gesture recognition algorithm is available here as an interactive Java demo: http://pokristensson.com/increc.html

There are many ways to improve gesture keyboard technology. One way to improve recognition accuracy is to use more sophisticated gesture recognition algorithms to compute the likelihood that a user’s gesture matches the shape of a word. Many researchers work on this problem. Another way is to use better language models. These models can be dramatically improved by identifying large bodies of text similar to what users want to write. This is often achieved by mining the web. Another way to improve language models is to use better estimation algorithms. For example, smoothing is the process of assigning some of the probability mass of the language model to word sequences the language model estimation algorithm has not seen. Smoothing tends to improve the language model’s ability to accurately predict words.
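As a small worked example of smoothing, the sketch below applies add-one (Laplace) smoothing to a bigram model built from a toy corpus; production systems typically use more refined schemes, but the effect is the same: a word pair never seen in the training text still receives a small non-zero probability.

```python
from collections import Counter

# Toy training text; real language models are estimated from far larger corpora.
corpus = "the cat sat on the mat . the cat ate".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = set(corpus)

def smoothed_bigram(prev, word, k=1.0):
    """Add-k (Laplace) smoothed estimate of P(word | prev)."""
    return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * len(vocab))

print(smoothed_bigram("the", "cat"))   # seen pair: relatively high probability
print(smoothed_bigram("the", "ate"))   # unseen pair: small but non-zero thanks to smoothing
```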

An interesting point about gesture keyboards is how they may disrupt other areas of computer input. Recently we have developed a system that enables a user to enter text via speech recognition, a gesture keyboard, or a combination of both. Users can fix speech recognition errors by simply gesturing the intended word. The system automatically realises there is a speech recognition error, locates it, and replaces the erroneous word with the result provided by the gesture keyboard. This is made possible by fusing the probabilistic information provided by the speech recogniser and the gesture keyboard.
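A minimal sketch of the fusion idea (not the algorithm in the Interspeech paper referenced below): if the speech recogniser offers alternative words with probabilities and the gesture keyboard offers its own ranked guesses, multiplying the two distributions and renormalising picks the word that both sources support. All words and numbers here are invented for illustration.

```python
def fuse(speech_dist, gesture_dist, floor=1e-6):
    """Combine two word distributions by multiplying probabilities and renormalising."""
    words = set(speech_dist) | set(gesture_dist)
    combined = {w: speech_dist.get(w, floor) * gesture_dist.get(w, floor) for w in words}
    total = sum(combined.values())
    return {w: p / total for w, p in combined.items()}

# The speech recogniser misheard "their" as "there"; the user gestures the intended word.
speech_dist  = {"there": 0.6, "their": 0.3, "they're": 0.1}
gesture_dist = {"their": 0.7, "there": 0.2, "then": 0.1}
fused = fuse(speech_dist, gesture_dist)
print(max(fused, key=fused.get))   # "their" wins once both sources are combined
```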

Per Ola also works in the areas of multi-display systems, eye-tracking systems, and crowdsourcing and human computation. He takes on undergraduate and postgraduate project students and PhD students. If you are interested in working with him, you are encouraged to read http://pokristensson.com/phdposition.html

References:

Kristensson, P.O. and Zhai, S. 2004. SHARK2: a large vocabulary shorthand writing system for pen-based computers. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST 2004). ACM Press: 43-52. http://dx.doi.org/10.1145/1029632.1029640

Kristensson, P.O. and Vertanen, K. 2011. Asynchronous multimodal text entry using speech and gesture keyboards. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011). ISCA: 581-584. http://www.isca-speech.org/archive/interspeech_2011/i11_0581.html

Full Press Release

The 11th International Conference on Finite-State Methods and Natural Language Processing (FSMNLP 2013)

The 11th International Conference on Finite-State Methods and Natural Language Processing (FSMNLP 2013) was held in the Gateway in St Andrews on July 15-17, 2013. Seventeen peer-reviewed papers were presented on natural language processing applications, language resources, and theoretical and implementational issues relevant to finite-state methods. In addition, there were two keynote lectures, by Alexander Clark (King’s College London) and Bill Byrne (University of Cambridge), and three tutorials, by Ruth Hoffmann (University of St Andrews), Bevan Keeley Jones (University of Edinburgh) and Kousha Etessami (University of Edinburgh).

The conference was attended by 34 researchers and students from three continents. It also hosted a business meeting of SIGFSM (ACL Special Interest Group on Finite-State Methods). The social programme included a reception on July 14th, and a guided walk, a conference dinner in Lower College Hall and a concert in St Salvator’s Chapel on July 16th.

Accommodation in Agnes Blackadder Hall was arranged for non-local delegates, and lunches were served in the Gateway. Coffee breaks could be used for informal demos in the smaller seminar rooms of the Gateway.

Sponsored student places were available thanks to support from SICSA. Further support was received from VisitScotland and the University of St Andrews.

The full programme, with links to the proceedings, can be found from the website: http://fsmnlp2013.cs.st-andrews.ac.uk/

Images and text courtesy of Mark-Jan Nederhof (conference chair), Anssi Yli-Jyrä and Shyam Reyal.

Services to the Cloud

On June 27th Gordon Baxter and Derek Wang gave a presentation about their work on the SFC-funded project “Creating High Value Cloud Services” at the Edinburgh Chamber of Commerce’s Business Growth Club.

Gordon talked about the lessons that have been learned so far from working closely with several Scottish SMEs who are adopting the cloud. Derek then gave a short demonstration of the web-based toolkit he has developed to analyse the potential costs and revenues associated with delivering a product or service through the cloud.

Find out more about the project on Services to the Cloud and The Cloudscape blog.