Practice talks for papers that Aaron and Daniel are presenting at AVI.

Title: AwToolkit: Attention-Aware User Interface Widgets
Authors: Juan-Enrique Garrido, Victor M. R. Penichet, Maria-Dolores Lozano, Aaron Quigley, Per Ola Kristensson.

Abstract: Increasing screen real-estate allows for the development of applications where a single user can manage a large amount of data and related tasks through a distributed user interface. However, such users can easily become overloaded and become unaware of display changes as they alternate their attention towards different displays. We propose AwToolkit, a novel widget set for developers that supports users in maintaining awareness in multi-display systems. The AwToolkit widgets automatically determine which display a user is looking at and provide users with notifications with different levels of subtlety to make the user aware of any unattended display changes. The toolkit uses four notification levels (unnoticeable, subtle, intrusive and disruptive), ranging from an almost imperceptible visual change to a clear and visually salient change. We describe AwToolkit’s six widgets, which have been designed for C# developers, and the design of a user study with an application oriented towards healthcare environments. The evaluation results reveal a marked increase in user awareness in comparison to the same application implemented without AwToolkit.
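The abstract does not show the widget API itself, but the four-level idea is easy to picture. As a purely illustrative sketch (AwToolkit is a C# widget set; the Python names below, NotificationLevel and choose_level, are invented for this example), selecting a level might look like this:

    from enum import Enum

    class NotificationLevel(Enum):
        # The four levels named in the abstract, from least to most salient.
        UNNOTICEABLE = 1   # almost imperceptible visual change
        SUBTLE = 2
        INTRUSIVE = 3
        DISRUPTIVE = 4     # clear and visually salient change

    def choose_level(user_is_attending: bool, urgency: int) -> NotificationLevel:
        # urgency: 0 = minor, 1 = important, 2 = critical (an invented scale).
        # A display the user is already looking at needs only a barely visible cue;
        # unattended changes escalate with their urgency.
        if user_is_attending:
            return NotificationLevel.UNNOTICEABLE
        return (NotificationLevel.SUBTLE,
                NotificationLevel.INTRUSIVE,
                NotificationLevel.DISRUPTIVE)[urgency]

    # A critical change on a display the user is not looking at:
    print(choose_level(user_is_attending=False, urgency=2))  # NotificationLevel.DISRUPTIVE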

Title: An Evaluation of Dasher with a High-Performance Language Model as a Gaze Communication Method
Authors: Daniel Rough, Keith Vertanen, Per Ola Kristensson

Abstract: Dasher is a promising fast assistive gaze communication method. However, previous evaluations of Dasher have been inconclusive. Either the studies have been too short, involved too few participants, suffered from sampling bias, lacked a control condition, used an inappropriate language model, or a combination of the above. To rectify this, we report results from two new evaluations of Dasher carried out using a Tobii P10 assistive eye-tracker machine. We also present a method of modifying Dasher so that it can use a state-of-the-art long-span statistical language model. Our experimental results show that compared to a baseline eye-typing method, Dasher resulted in significantly faster entry rates (12.6 wpm versus 6.0 wpm in Experiment 1, and 14.2 wpm versus 7.0 wpm in Experiment 2). These faster entry rates were possible while maintaining error rates comparable to the baseline eye-typing method. Participants’ perceived physical demand, mental demand, effort and frustration were all significantly lower for Dasher. Finally, participants rated Dasher as significantly more likeable, requiring less concentration, and being more fun.

Event details

  • When: 20th May 2014 12:00 - 13:00
  • Where: Cole 1.33a
  • Format: Seminar

Honorary degree for Professor Dana Scott

We’re delighted that the University will be awarding the degree of Doctor of Science, honoris causa, to Professor Dana Scott at the graduation ceremony on Wednesday 25th June.

What does it mean to describe a computation? For Turing, it meant designing an ideal machine whose small set of simple operations could perform calculations: the operational view of computing that allows machines to perform tasks previously thought to require humans. Set against this is a view that is independent of mechanisation, where the calculations, rather than the machines that perform them, take centre stage. When we take this view, we are making use of ideas that owe their modern existence to the work of Dana Scott.

Working at Oxford in the 1970s, Scott developed the mathematical structures now known as Scott domains that provide a way of precisely describing how recursive functions make progress towards their final result. This led directly to an approach for describing the meanings of programs and programming languages — the Scott-Strachey approach to denotational semantics — and indirectly both to approaches to proving programs correct, and to the development of the lazy functional programming languages that today form a major strand of computer science research.
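To give a flavour of the idea (this gloss is ours, not part of the announcement): in a Scott domain the meaning of a recursive definition f = F(f) is taken to be the least fixed point of F, reached as the limit of an increasing chain of ever-more-defined approximations:

    % The denotation of a recursive definition f = F(f) as a least fixed point.
    \[
      \bot \;\sqsubseteq\; F(\bot) \;\sqsubseteq\; F(F(\bot)) \;\sqsubseteq\; \cdots
      \qquad\text{with}\qquad
      \llbracket f \rrbracket \;=\; \operatorname{fix} F \;=\; \bigsqcup_{n \ge 0} F^{n}(\bot).
    \]
    % For factorial, for instance, F^n(\bot) is the function that gives the right
    % answer on inputs 0..n-1 and is undefined elsewhere: each step of the chain
    % makes progress towards the final result.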

Dana Scott is a Turing Award recipient (jointly with Michael Rabin), a winner of the International Bolzano Prize, and a supervisor of over 50 PhD students. His contributions to the foundations of computer science have been immense, and we’re very excited to welcome him alongside our graduating class.

What’s so great about compositionality? by Professor Stuart M Shieber, Harvard.

Abstract: Compositionality is the tenet that the meaning of an expression is determined by the meanings of its immediate parts along with their method of combination. The semantics of artificial languages (such as programming languages or logics) are uniformly given compositionally, so that the notion doesn’t even arise in that literature. Linguistic theories, on the other hand, differ as to whether the relationship that they posit between the syntax and semantics of a natural language is structured in a compositional manner. Theories following the tradition of Richard Montague take compositionality to be a Good Thing, whereas theories in the transformational tradition eschew it.

I will look at what compositionality is and isn’t, why it seems desirable, why it seems problematic, and whether its advantages can’t be provided by other means. In particular, I argue that synchronous semantics can provide many of the advantages of compositionality, whether or not it is itself properly viewed as a compositional method, as well as having interesting practical applications.
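As a concrete, if toy, illustration of the tenet (our own, not taken from the talk): a compositional semantics computes the meaning of an expression purely from the meanings of its immediate parts and their method of combination. In code (Python, with an invented expression representation) that discipline looks like this:

    # A minimal compositional semantics for a toy arithmetic language:
    # the meaning of each expression is computed only from the meanings
    # of its immediate parts and the way they are combined.

    def meaning(expr):
        # Expressions are nested tuples, e.g. ('add', 2, ('mul', 3, 4)).
        if isinstance(expr, int):      # atomic expression: its meaning is itself
            return expr
        op, left, right = expr         # complex expression: parts plus a combination method
        left_m, right_m = meaning(left), meaning(right)
        if op == 'add':
            return left_m + right_m
        if op == 'mul':
            return left_m * right_m
        raise ValueError(f"unknown combination method: {op}")

    print(meaning(('add', 2, ('mul', 3, 4))))  # 14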

Event details

  • When: 6th June 2014 14:00 - 15:00
  • Where: Cole 1.33a
  • Format: Seminar

Summer School on Experimental Methodology in Computational Science Research

The purpose of this summer school is to bring together interested computer scientists and other researchers who work in the broadly-defined area of “computational science”, and to explore the state-of-the-art in methods and tools for enabling reproducible and “recomputable” research. Reproducibility is crucial to the scientific process; without it, researchers cannot build on findings or even verify them. The development and emergence of new tools, hardware and processing platforms mean that reproducibility should be easier than ever before. But to realise this, we also need to effect “a culture change that will integrate computational reproducibility into the research process”.

The school will be hands-on, comprising lectures, tutorials and practical sessions on topics including statistical methods, using cloud computing services for conducting and sharing reproducible experiments, methods for publishing code and data, legal issues surrounding the publication and sharing of code and data, and generally the design of experiments with replication in mind. Speakers include academics from mathematics, computer science and law schools, and other researchers and industrial speakers from Figshare, Microsoft Azure, the Software Sustainability Institute and more. Practicals will include the replication of existing experiments and a “hackathon” to improve tools for replication. The aim of the school will be to create a report that will be published on arXiv by the end of the week, and in a suitable journal later on.

For more information and to register please visit our web site at http://blogs.cs.st-andrews.ac.uk/emcsr2014/.

Event details

  • When: 4th August 2014 09:00 - 8th August 2014 17:00
  • Format: Summer School

Computational Social Choice: an Overview by Edith Elkind, University of Oxford

ABSTRACT
In this talk, we will provide a self-contained introduction to the field of computational social choice – an emerging research area that applies tools and techniques of computer science (most notably, algorithms, complexity and artificial intelligence) to problems that arise in voting theory, fair division, and other subfields of social choice theory. We will give a high-level overview of this research area, and mention some open problems that may be of interest to mathematicians and computer scientists.
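As a small taste of the kind of question the field studies (our own illustration, not part of the abstract): even two standard voting rules can disagree on the same ballots, and computing, comparing and manipulating such outcomes is where the algorithms and complexity come in.

    # Illustration: plurality and Borda count can elect different winners
    # from the same preference profile.
    from collections import Counter

    ballots = [  # each ballot ranks candidates from most to least preferred
        ('A', 'B', 'C'), ('A', 'B', 'C'),
        ('B', 'C', 'A'), ('C', 'B', 'A'),
    ]

    # Plurality: count only first choices.
    plurality = Counter(b[0] for b in ballots)

    # Borda: a candidate gets (m - 1 - position) points per ballot, m = number of candidates.
    borda = Counter()
    for b in ballots:
        for position, candidate in enumerate(b):
            borda[candidate] += len(b) - 1 - position

    print(plurality.most_common(1))  # [('A', 2)] -- A wins under plurality
    print(borda.most_common(1))      # [('B', 5)] -- B wins under Borda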

Event details

  • When: 15th April 2014 - 15:00
  • Where: Maths Theatre B
  • Series: School Seminar Series
  • Format: Seminar

A slippery slope — the path to national health data linkage in Australia – John Bass

Abstract: Linkage of health-related data in Australia dates back to the late 1960s, with the first inspiration coming from the United Kingdom. Since then computers have developed at a barely believable rate; technical considerations still exist but do not pose any serious problems. Progress has been slowed by the increasing need for better privacy and confidentiality. Further complications have resulted from living in a large and diverse country ruled by several highly parochial states as well as the federal government. This presentation tells the story from a viewpoint largely based in Perth, Western Australia. In 1984 this city had a population of less than a million, and the nearest city/town of more than 20,000 people was Adelaide, more than 1,650 miles away by road. In our context, this was a benefit as much as a hindrance, and Perth has been very much the epicentre of data linkage.

Bio: After an early career in marine zoology combined with computing, John Bass has been at the leading edge of health-related data linkage in Australia since 1984. Early work on infant mortality in Western Australia resulted in a linked dataset that became the cornerstone of the Telethon Institute for Child Health Research. He then implemented the Australian National Death Index in Canberra before returning to Perth as the founding manager of the Western Australian linked health data project — the first of its kind in the country. He designed and implemented the technical system of this group, which is widely recognised as the foremost data linkage unit in Australia. John stepped aside from his position in 2000 but has continued a close relationship with the project, designing and overseeing the implementation of genealogical links and then spending several years working with state and federal government to implement the first large-scale linkage of national pharmaceutical and general practice information. This involved the development of new best-practice privacy protocols that are now widely adopted across Australia. He was a core participant in developing a detailed plan for the implementation of a second state-based data linkage unit involving New South Wales and the Australian Capital Territory. In 2008 John moved to Tasmania, where he spent four years planning and paving the way for the implementation of a state-wide data linkage unit. He is now semi-retired, but still working on new developments in data linkage technology.

Event details

  • When: 13th May 2014 14:00 - 15:00
  • Where: Cole 1.33a
  • Format: Seminar

The Chomsky-Schützenberger Theorem for Quantitative Context-Free Languages by Heiko Vogler, TU Dresden

ABSTRACT:
Weighted automata model quantitative aspects of systems like the consumption of resources during executions. Traditionally, the weights are assumed to form the algebraic structure of a semiring, but recently other weight computations, such as averages, have also been considered. Here, we investigate quantitative context-free languages over very general weight structures incorporating all semirings, average computations, and lattices. In our main result, we derive the Chomsky-Schützenberger Theorem for such quantitative context-free languages, showing that each arises as the image of the intersection of a Dyck language and a recognizable language under a suitable morphism.
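For context (our gloss, not part of the abstract): the classical, unweighted Chomsky-Schützenberger theorem has the following shape, and the talk's main result lifts a statement of this form to weighted languages over the general weight structures described above.

    % Classical (unweighted) Chomsky-Schützenberger theorem.
    \[
      L \subseteq \Sigma^{*} \ \text{is context-free}
      \iff
      L = h\bigl(D_{k} \cap R\bigr)
    \]
    % for some k, where D_k is the Dyck language over k pairs of brackets,
    % R is a recognizable (regular) language, and h is a homomorphism.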

This is joint work with Manfred Droste (University of Leipzig).

BIOGRAPHY:
Prof. Dr.-Ing. habil. Heiko Vogler received the degree of Doktor in de Technische Wetenschappen from the Technische Hogeschool Twente, The Netherlands, in 1986. He obtained his Habilitation in Computer Science at RWTH Aachen in 1990, was an associate professor at the University of Ulm from 1991 to 1994, and has been a full professor at TU Dresden since 1994. He received the degree of Doktor honoris causa from the University of Szeged, Hungary, in November 2013. His research interests are weighted tree automata and formal models for statistical machine translation of natural languages.

Event details

  • When: 7th April 2014 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

Doing Research in the Wild – Paul Marshall, UCL

Abstract: There has been significant growth in interest in ‘research in the wild’ as an approach to developing and understanding novel technologies in real world contexts. However, the concept remains underdeveloped and it is unclear how it differs from previous technology deployments and in situ studies. In this talk, I will attempt an initial characterisation of research in the wild. I will discuss some of the benefits of studying novel technologies in situ as well as some of the challenges inherent in encouraging and studying sustained use.

Bio: Paul Marshall is a lecturer in interaction design in the UCL Interaction Centre. His research interests focus on understanding how ubiquitous computing technologies are used in everyday contexts such as the home, education and public spaces. Prior to joining UCL he worked as a postdoc at the University of Warwick (2010-11) researching participatory design approaches in healthcare, and at the Open University (2006-10), where he ran ethnographic and laboratory studies of shareable interfaces and sensory extension devices. He completed a PhD project on learning with tangible interfaces as part of the Equator project at the University of Sussex, and prior to that a BSc (Hons) in psychology at the University of Edinburgh.

Event details

  • When: 1st April 2014 14:00 - 15:00
  • Where: Maths Theatre B
  • Series: School Seminar Series

St Andrews Programming Competition 2014


The St Andrews Programming Competition 2014 is a friendly programming contest organised by the School of Computer Science for students at all levels, from any background, with any amount of programming experience. Form a team of up to 3 members, compete for 3 hours by solving a set of programming problems using your favourite programming language, and win £200 worth of prizes.

Programming competitions are generally aimed at the best programmers; this is a first-of-its-kind competition where students at all levels with any amount of programming experience stand a chance to win a prize. Another unique aspect of this competition is that it is also open to members of staff from the School of Computer Science, making this a fun experience and a bonding opportunity for staff and students.

Students can use this opportunity to gain valuable exposure to solving quick algorithmic programming questions – of the style that may come up in job interviews, where candidates are required to solve problems on the fly while being observed. Such interview practices are common among many companies nowadays, including Google.

For more details and registration visit: http://goo.gl/I78Hyf
Facebook: www.facebook.com/stapc14
Twitter: @stapc14

If you have any questions, please email Shyam on smr20@st-andrews.ac.uk

The event, prizes and refreshments will be sponsored by AetherStore.


Event details

  • When: 7th April 2014 14:00 - 17:00
  • Where: Cole 0.35 - Subhons Lab

PhD Admissions Session, Thursday 13 March 2pm

There will be a short session for students (either 4th year or Masters) interested in applying for a PhD in the School of Computer Science.

The deadline for the University’s funded 7th Century scholarships is March 31, so this is a good time to be thinking about it if you are interested and have not already applied.

The session will consist of a short talk and time for Q&A with John Thomson and Ian Gent, who handle PhD admissions in the School.

It will be in Jack Cole 1.33a, from 2pm to 2.30pm on Thursday 13 March 2014.

Event details

  • When: 13th March 2014 14:00 - 14:30
  • Where: Cole 1.33a