Dr Per Ola Kristensson tipped to change the world

Dr Per Ola Kristensson is one of 35 top young innovators named today by the prestigious MIT Technology Review.

For over a decade, the global media company has recognised a list of exceptionally talented technologists whose work has great potential to “transform the world.”

Dr Kristensson (34) joins a stellar list of technological talent. Previous winners include Larry Page and Sergey Brin, the cofounders of Google; Mark Zuckerberg, the cofounder of Facebook; Jonathan Ive, the chief designer of Apple; and David Karp, the creator of Tumblr.

The award recognises Per Ola’s work at the intersection of artificial intelligence and human-computer interaction. He builds intelligent interactive systems that enable people to be more creative, expressive and satisfied in their daily lives, focusing on text entry interfaces and other interaction techniques.

One example is the gesture keyboard, which enables users to quickly and accurately write text on mobile devices by sliding a finger across a touchscreen keyboard. To write “the”, the user touches the T key, slides to the H key, then the E key, and then lifts the finger. The result is a shorthand gesture for the word “the”, which a recognition algorithm can identify as the user’s intended word. Today, gesture keyboards are found in products such as ShapeWriter, Swype and T9 Trace, and come pre-installed on Android phones. Per Ola’s own ShapeWriter, Inc. iPhone app, ranked the 8th best app by Time Magazine in 2008, had a million downloads in its first few months.

Two factors explain the success of the gesture keyboard: speed and ease of adoption. Gesture keyboards are faster than regular touchscreen keyboards because expert users can quickly gesture a word by direct recall from motor memory. The gesture keyboard is easy to adopt because it enables users to smoothly and unconsciously transition from slow visual tracing to this fast recall from motor memory. Novice users spell out words by sliding their finger from letter to letter using visually guided movements. With repetition, the gesture gradually builds up in the user’s motor memory until it can be quickly recalled.

A gesture keyboard works by matching the gesture made on the keyboard against a set of possible words, and then deciding which word is intended by looking at both the gesture and the context of the sentence being entered. This can require checking as many as 60,000 possible words; doing so quickly on a mobile phone required developing new techniques for searching, indexing and caching.
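
To make the idea concrete, the sketch below shows one simple way a gesture could be scored against candidate words: each word is given an “ideal” trace through its key centres, the user’s gesture is compared against every candidate’s trace, and a language-model prior breaks ties. This is only an illustrative toy, not the ShapeWriter/SHARK2 recogniser; the key coordinates, vocabulary and probabilities are made up.

```python
# Toy sketch of gesture-keyboard decoding: shape matching plus a language-model prior.
import math

# Hypothetical key-centre coordinates for a few QWERTY keys (illustrative only).
KEY_CENTRES = {
    't': (4.5, 0), 'h': (5.5, 1), 'e': (2.0, 0),
    'a': (0.5, 1), 'n': (5.5, 2), 'd': (2.5, 1),
}

def ideal_trace(word, points_per_segment=10):
    """Resample the straight-line path through a word's key centres."""
    keys = [KEY_CENTRES[c] for c in word]
    trace = []
    for (x0, y0), (x1, y1) in zip(keys, keys[1:]):
        for i in range(points_per_segment):
            t = i / points_per_segment
            trace.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    trace.append(keys[-1])
    return trace

def shape_distance(gesture, word):
    """Mean point-to-point distance after resampling both traces to equal length."""
    template = ideal_trace(word)
    n = min(len(gesture), len(template))
    g = [gesture[int(i * len(gesture) / n)] for i in range(n)]
    t = [template[int(i * len(template) / n)] for i in range(n)]
    return sum(math.dist(p, q) for p, q in zip(g, t)) / n

def decode(gesture, vocabulary, prior):
    """Pick the word whose shape best matches the gesture, weighted by a language-model prior."""
    return min(vocabulary, key=lambda w: shape_distance(gesture, w) - math.log(prior[w]))

# Usage: a gesture roughly tracing T -> H -> E should decode as "the".
gesture = [(4.5, 0.1), (5.0, 0.6), (5.4, 1.0), (3.8, 0.6), (2.1, 0.1)]
print(decode(gesture, ["the", "then", "than"], {"the": 0.6, "then": 0.3, "than": 0.1}))
```

A real system searches a vocabulary of tens of thousands of words, so it cannot afford to compare every gesture against every template; hence the need for the indexing and caching techniques mentioned above.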

An example of a gesture recognition algorithm is available as an interactive Java demo: http://pokristensson.com/increc.html

There are many ways to improve gesture keyboard technology. One way to improve recognition accuracy is to use more sophisticated gesture recognition algorithms to compute the likelihood that a user’s gesture matches the shape of a word. Many researchers work on this problem. Another way is to use better language models. These models can be dramatically improved by identifying large bodies of text similar to what users want to write. This is often achieved by mining the web. Another way to improve language models is to use better estimation algorithms. For example, smoothing is the process of assigning some of the probability mass of the language model to word sequences the language model estimation algorithm has not seen. Smoothing tends to improve the language model’s ability to accurately predict words.
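
The sketch below illustrates the smoothing idea with add-one (Laplace) estimation for a bigram language model: a little probability mass is reserved for word pairs never seen in the training text, so unseen sequences are not assigned zero probability. The corpus and counts are toy values, not drawn from any real deployment.

```python
# Add-one (Laplace) smoothing for a toy bigram language model.
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
vocab = set(corpus)

bigram_counts = Counter(zip(corpus, corpus[1:]))
unigram_counts = Counter(corpus)

def bigram_prob(prev, word, k=1.0):
    """P(word | prev) with add-k smoothing: unseen bigrams keep a small, non-zero probability."""
    return (bigram_counts[(prev, word)] + k) / (unigram_counts[prev] + k * len(vocab))

print(bigram_prob("the", "cat"))  # seen bigram "the cat": about 0.33
print(bigram_prob("the", "ran"))  # unseen bigram "the ran": about 0.11, not zero
```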

An interesting point about gesture keyboards is how they may disrupt other areas of computer input. Recently we have developed a system that enables a user to enter text via speech recognition, a gesture keyboard, or a combination of both. Users can fix speech recognition errors by simply gesturing the intended word. The system automatically realises there is a speech recognition error, locates it, and replaces the erroneous word with the result provided by the gesture keyboard. This is made possible by fusing the probabilistic information provided by the speech recogniser and the gesture keyboard.
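
As a rough illustration of what such probabilistic fusion can look like, the toy sketch below scores each position in a speech hypothesis by combining the recogniser’s lack of confidence in that word with the gesture keyboard’s posterior for the gestured correction, then replaces the most likely error. This is not the published system (see Kristensson and Vertanen, 2011, in the references); all words and probabilities are invented for illustration.

```python
# Toy illustration of fusing speech-recognition confidences with gesture-keyboard posteriors.

speech_hypothesis = ["recognise", "beach", "using", "gestures"]

# Per-position word confidences from a hypothetical speech recogniser (made-up values).
speech_confidence = {0: 0.95, 1: 0.40, 2: 0.90, 3: 0.85}

# Candidate words and posteriors from the gesture keyboard for the correction gesture (made-up values).
gesture_candidates = {"speech": 0.7, "beach": 0.2, "peach": 0.1}

def best_correction(hypothesis, confidence, candidates):
    """Pick the (position, replacement) pair with the strongest joint evidence that the
    speech word is wrong and the gestured word is the intended one."""
    best = None
    for pos, word in enumerate(hypothesis):
        for cand, p_gesture in candidates.items():
            if cand == word:
                continue  # gesturing the same word is not a correction
            score = (1.0 - confidence[pos]) * p_gesture
            if best is None or score > best[0]:
                best = (score, pos, cand)
    _, pos, cand = best
    corrected = list(hypothesis)
    corrected[pos] = cand
    return corrected

print(best_correction(speech_hypothesis, speech_confidence, gesture_candidates))
# -> ['recognise', 'speech', 'using', 'gestures']
```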

Per Ola also works in the areas of multi-display systems, eye-tracking systems, and crowdsourcing and human computation. He takes on undergraduate and postgraduate project students and PhD students. If you are interested in working with him, you are encouraged to read http://pokristensson.com/phdposition.html

References:

Kristensson, P.O. and Zhai, S. 2004. SHARK2: a large vocabulary shorthand writing system for pen-based computers. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST 2004). ACM Press: 43-52.

(http://dx.doi.org/10.1145/1029632.1029640)

Kristensson, P.O. and Vertanen, K. 2011. Asynchronous multimodal text entry using speech and gesture keyboards. In Proceedings of the 12th Annual Conference of the International Speech Communication Association (Interspeech 2011). ISCA: 581-584.

(http://www.isca-speech.org/archive/interspeech_2011/i11_0581.html)

Full Press Release

On Normalising Disjunctive Intermediate Logics

Speaker:
Prof. Jonathan Seldin, University of Lethbridge, Canada

Abstract:
In this talk it is shown that every intermediate logic obtained from intuitionistic logic by adding a disjunction can be normalised. However, the normalisation procedure is not as complete as that for intuitionistic and minimal logic because some results which usually follow from normalisation fail, including the separation property and the subformula property.
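
For concreteness, one familiar example of an intermediate logic obtained by adding a disjunctive axiom scheme to intuitionistic logic is Gödel–Dummett logic (offered only as an illustration; the abstract does not specify which disjunctions the talk considers):

```latex
% Gödel--Dummett logic LC: intuitionistic propositional logic plus the
% disjunctive linearity axiom scheme.
\[
  \mathbf{LC} \;=\; \mathbf{IPC} \;+\; (A \to B) \lor (B \to A)
\]
```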

Biography:
Jonathan P. Seldin, now Professor Emeritus, is a well-established senior scientist at the University of Lethbridge, Alberta, Canada, with an Amsterdam PhD in combinatory logic supervised by Haskell Curry. This logic, together with lambda-calculus (to which it is equivalent), is a prototype for functional languages, such as Haskell, and typed lambda-calculus is a prototype for the typing discipline in programming languages. His work on lambda-calculus, both pure and typed, has applications in formal verification, the use of formal logics to prove properties of programs (e.g., that they satisfy their specifications). He has co-authored works with Curry and Hindley on combinatory logic and lambda calculus. He is also interested in the history and philosophy of mathematics and in proof normalisation and cut-elimination for various systems of formal logic. His visit to Scotland is as a SICSA Distinguished Visiting Fellow, to work with Prof. Kamareddine at Heriot-Watt University and with Dr Dyckhoff at St Andrews. For details and publications see http://directory.uleth.ca/users/jonathan.seldin

Event details

  • When: 3rd September 2013 11:30 - 12:30
  • Where: Cole 1.33a
  • Format: Seminar

SACHI Seminar: Team-buddy: investigating a long-lived robot companion


Title: Team-buddy: investigating a long-lived robot companion

Speaker: Ruth Aylett, Heriot-Watt University, Edinburgh

Abstract:
In the EU-funded LIREC project, which finished last year, Heriot-Watt University investigated how a long-lived, multi-embodied (robot and graphical) companion might be incorporated into a work environment as a team buddy, culminating in a final continuous three-week study. This talk gives an overview of the technology issues and some of the surprises from various user studies.

Bio:
Ruth Aylett is Professor of Computer Science in the School of Mathematical and Computer Sciences at Heriot-Watt University. She researches intelligent graphical characters, affective agent models, human-robot interaction, and interactive narrative. She was a founder of the International Conference on Intelligent Virtual Agents and was a partner in the large HRI project LIREC – see lirec.eu. She has more than 200 publications – book chapters, journals, and refereed conferences – and coordinates the Autonomous Affective Agents group at Heriot-Watt University.

Event details

  • When: 10th September 2013 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

Ambitious, entrepreneurial, innovative, employable and highflying…

Words we use to describe our alumni, who work in New York, Switzerland, London and Edinburgh amongst other places.

Whether working for established companies such as Adobe and Google, or in their own business start-ups such as AetherWorks LLC. and PlanForCloud (formerly ShopForCloud), our graduates continue to flourish. And rumour has it more of our talented CS graduates will be joining some of them shortly. The suspense! They are all exemplars of why St Andrews is the only Scottish university to feature in the 2013 High Fliers report on the graduate market.


Clockwise from top left:

Rob, Angus and Greg from AetherWorks LLC., who took time out to capture a photo of themselves outside their offices in New York.

Ali at graduation, sporting a colour-co-ordinated Google Glass (who knew!). Listen to Ali discuss his career at the SICSA PhD conference careers panel.

We caught up with Adam, Andrew and James earlier this year when they represented Google at the Tech Talk by Google engineers.

Neil (complete with sunglasses) visited the school last week, on an unusually sunny day, with colleagues from Adobe.

Thanks to:
AetherWorks LLC.: Robert MacInnis, Angus MacDonald, Allan Boyd and Greg Bigwood.
PlanForCloud: Ali Khajeh-Hosseini and Alistair Scott.
Adobe: Neil Moore.
Google: James Smith, Adam Copp and Andrew McCarthy.
Editorial Support: Anne Campbell

The 11th International Conference on Finite-State Methods and Natural Language Processing (FSMNLP 2013)

The 11th International Conference on Finite-State Methods and Natural Language Processing (FSMNLP 2013) was held in the Gateway in St Andrews on July 15–17, 2013. Seventeen peer-reviewed papers were presented on natural language processing applications, language resources, and theoretical and implementational issues relevant to finite-state methods. In addition, there were two keynote lectures, by Alexander Clark (King’s College London) and Bill Byrne (University of Cambridge), and three tutorials, by Ruth Hoffmann (University of St Andrews), Bevan Keeley Jones (University of Edinburgh) and Kousha Etessami (University of Edinburgh).

The conference was attended by 34 researchers and students from three continents. It also hosted a business meeting of SIGFSM (ACL Special Interest Group on Finite-State Methods). The social programme included a reception on July 14th, and a guided walk, a conference dinner in Lower College Hall and a concert in St Salvator’s Chapel on July 16th.

Accommodation in Agnes Blackadder Hall was arranged for non-local delegates, and lunches were served in the Gateway. Coffee breaks could be used for informal demos in the smaller seminar rooms of the Gateway.

Sponsored student places were available thanks to support from SICSA. Further support was received from VisitScotland and the University of St Andrews.

The full programme, with links to the proceedings, can be found on the conference website: http://fsmnlp2013.cs.st-andrews.ac.uk/

Images and text courtesy of Mark-Jan Nederhof (conference chair), Anssi Yli-Jyrä and Shyam Reyal.

Summer Days in Computer Science

Students and staff took advantage of the Scottish weather on Friday and held a BBQ to mark the anniversaries of the following events:

  • The Great Fire of Rome
  • Birth of Computer Scientist Mark Crispin
  • The Opening of the StACS Garden


Organised by Jan de Muijnck-Hughes and David Letham. Cooked by Jan de Muijnck-Hughes and Masih Hajiarabderkani. Salad ingredients from the StACS Garden. Enjoyed by all (including Pippa the dog).

Services to the Cloud

On June 27th, Gordon Baxter and Derek Wang gave a presentation about their work on the SFC-funded project “Creating High Value Cloud Services” at the Edinburgh Chamber of Commerce’s Business Growth Club.

Gordon talked about the lessons that have been learned so far from working closely with several Scottish SMEs who are adopting the cloud. Derek then gave a short demonstration of the web-based toolkit he has developed to analyse the potential costs and revenues associated with delivering a product or service through the cloud.

Find out more about the project on the Services to the Cloud page and The Cloudscape blog.

MSc in Human Computer Interaction starting in September 2013

We have added more details about our new MSc in Human Computer Interaction, which starts in September 2013. This is an intensive one-year programme designed to provide a solid theoretical and practical foundation in HCI, and to enable students from a variety of backgrounds to become HCI practitioners in roles including UX designer, visual analyst, interaction designer and interaction architect. The MSc will also help prepare students for a PhD programme in HCI. In semester 1, students take Human Computer Interaction Principles and Human Computer Interaction Practice, followed by User-Centred Interaction Design and Evaluation Methods in Human Computer Interaction in semester 2. Other modules can be selected from the general MSc portfolio.

You can find more details here on the MSc in Human Computer Interaction.
