Enterprise NoSQL in the BBC

Hear why MarkLogic was chosen as the content store for the BBC's 2012 Olympics website, ingesting, storing and delivering data and content assets to the BBC's mobile app and thousands of web pages.
Speaker: Paul Preuveneers, Director, Sales Engineering, MarkLogic

Paul Preuveneers has more than nine years of development experience with MarkLogic, with expertise in running software teams as well as spearheading the European office of MarkLogic UK. He joined MarkLogic from Elsevier Science, where he led the Agile Development Team, working on leading-edge products including the many CONSULT sites and the main strategic elsevierhealth.com site. Trained in Extreme Programming and agile techniques, Paul has been at the forefront of many of the most innovative applications using MarkLogic in Europe. Prior to Elsevier Science, he held positions at Action Information Management and gained his BSc in Computer Science at Southampton University.

Event details

  • When: 16th September 2014 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Seminar by John Slaney

What is Meyer’s E1 problem?

John Slaney, Australian National University

The E1 problem is a rather specialised question concerning propositional logic. It was posed by R. K. Meyer almost 50 years ago and is still open. In this talk, I undertake to explain the problem, to review progress towards its solution, and possibly even to make it look less eccentric than it might at first seem. The talk is accessible to anyone with an interest in computer science or logic, as it does not presuppose any deep technical background.

John Slaney is Professor of Computer Science at Australian National University, Canberra.

His research has focussed on many aspects of logic and artificial intelligence, sometimes from a very philosophical standpoint but also from a very practical one of building better solvers. He also wrote Logic4Fun, an interactive logic modelling and solving website.

John Slaney has never denied rumours that he was a professional ice hockey player in North America, including scoring the winning goal in a Canada-USSR match. However, if asked he probably will deny them (since he was never a hockey player).

Event details

  • When: 1st September 2014 11:00 - 12:00
  • Where: Cole 1.33a
  • Format: Seminar, Talk

Big data, the Cloud and the future of computing by Dr Kenji Takeda, Microsoft Research

Abstract: We live in an information society, with cloud computing changing the way we live, work and play in a world of devices and services. In this talk we’ll explore what, why and how this new era of computing is changing the way we think about conceiving, developing and delivering software and services. We’ll then look at how the concept of Big Data is transforming science, and the opportunities it presents for the future.

Bio: Dr Kenji Takeda is Solutions Architect and Technical Manager in Microsoft Research. He is currently focussed on Azure for Research and Environmental Science tools and technologies. The Azure for Research programme currently supports over 300 projects worldwide, including two at the University of St Andrews – see

http://www.azure4research.com

Kenji has extensive experience in Cloud Computing, High Performance and High Productivity Computing, Data-intensive Science, Scientific Workflows, Scholarly Communication, Engineering and Educational Outreach. He has a passion for developing novel computational approaches to tackle fundamental and applied problems in science and engineering.

Event details

  • When: 5th August 2014 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Design Frontiers in Parallel Languages: The Role of Determinism

Constraints can be a source of inspiration; their role in creative art forms is well-recognized, with poetry as the quintessential example.  We argue that the requirement of determinism can play the same role in the design of parallel programming languages. This talk describes a series of design explorations that begin with determinism as the constraint, introduce the concept of monotonically-changing concurrent data structures (LVars), and end in some interesting places—flirting with the boundaries to yield quasideterminism, and revealing synergies between parallel effects, such as cancelation and memoization, when used in a deterministic context.

Our goal is for guaranteed-deterministic parallel programming to be practical and efficient for a wide range of applications. One challenge is simply to integrate the known forms of deterministic-by-construction parallelism, which we overview in this talk: Kahn process networks, pure data-parallelism, single-assignment languages, functional programming, and type-and-effect systems that enforce limited access to state by threads. My group, together with many others around the world, is developing libraries such as LVish and Accelerate that add these capabilities to the programming language Haskell. It is early days yet, but it is already possible to build programs that mix concurrent, lock-free data structures, blocking data-flow, callbacks, and GPU-based data-parallelism, without ever compromising determinism or referential transparency.
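The core LVar idea in the abstract (writes that only grow a data structure, combined with blocking "threshold" reads that cannot observe intermediate states) can be illustrated in a few lines. The following is a toy Python sketch of the concept only, not the LVish library itself (which is a Haskell library); all names here are invented for illustration:

```python
import threading

class LVarSet:
    """A toy 'LVar' set: it only ever grows, and the only read is a
    threshold read that blocks until a given element has been inserted.
    Because no read can observe which intermediate state the set was in,
    concurrent insertions commute and observable behaviour stays
    deterministic regardless of thread scheduling."""

    def __init__(self):
        self._items = set()
        self._cond = threading.Condition()

    def put(self, x):
        # Monotonic write: add x; the set never shrinks.
        with self._cond:
            self._items.add(x)
            self._cond.notify_all()

    def get_threshold(self, x):
        # Threshold read: block until x is present, then return it.
        with self._cond:
            self._cond.wait_for(lambda: x in self._items)
            return x

lv = LVarSet()
writers = [threading.Thread(target=lv.put, args=(i,)) for i in range(4)]
for t in writers:
    t.start()
result = lv.get_threshold(3)   # deterministic: always returns 3
for t in writers:
    t.join()
print(result)
```

Whatever order the four writer threads run in, the threshold read can only ever answer 3, which is the sense in which such structures keep a parallel program deterministic by construction.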

Event details

  • When: 12th June 2014 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Teachers Together

The School is welcoming teachers and representatives of Local Education Authorities to a departmental visit as part of the Teachers Together Conference.

Attendees will hear about our first year curriculum and how subjects such as Maths and Physics feed into it. They will also take part in a discussion about subject development in computer science, with particular focus on the Curriculum for Excellence.

Event details

  • When: 20th June 2014 09:30 - 11:15
  • Where: Cole 1.33a
  • Format: Seminar

Practice talks for papers that Aaron and Daniel are presenting at AVI.

Title: AwToolkit: Attention-Aware User Interface Widgets
Authors: Juan-Enrique Garrido, Victor M. R. Penichet, Maria-Dolores Lozano, Aaron Quigley, Per Ola Kristensson.

Abstract: Increasing screen real-estate allows for the development of applications where a single user can manage a large amount of data and related tasks through a distributed user interface. However, such users can easily become overloaded and become unaware of display changes as they alternate their attention towards different displays. We propose AwToolkit, a novel widget set for developers that supports users in maintaining awareness in multi-display systems. The AwToolkit widgets automatically determine which display a user is looking at and provide users with notifications with different levels of subtlety to make the user aware of any unattended display changes. The toolkit uses four notification levels (unnoticeable, subtle, intrusive and disruptive), ranging from an almost imperceptible visual change to a clear and visually salient change. We describe AwToolkit’s six widgets, which have been designed for C# developers, and the design of a user study with an application oriented towards healthcare environments. The evaluation results reveal a marked increase in user awareness in comparison to the same application implemented without AwToolkit.

Title: An Evaluation of Dasher with a High-Performance Language Model as a Gaze Communication Method
Authors: Daniel Rough, Keith Vertanen, Per Ola Kristensson

Abstract: Dasher is a promising fast assistive gaze communication method. However, previous evaluations of Dasher have been inconclusive. Either the studies have been too short, involved too few participants, suffered from sampling bias, lacked a control condition, used an inappropriate language model, or a combination of the above. To rectify this, we report results from two new evaluations of Dasher carried out using a Tobii P10 assistive eye tracker. We also present a method of modifying Dasher so that it can use a state-of-the-art long-span statistical language model. Our experimental results show that, compared to a baseline eye-typing method, Dasher resulted in significantly faster entry rates (12.6 wpm versus 6.0 wpm in Experiment 1, and 14.2 wpm versus 7.0 wpm in Experiment 2). These faster entry rates were possible while maintaining error rates comparable to the baseline eye-typing method. Participants’ perceived physical demand, mental demand, effort and frustration were all significantly lower for Dasher. Finally, participants rated Dasher as significantly more likeable, requiring less concentration and being more fun.

Event details

  • When: 20th May 2014 12:00 - 13:00
  • Where: Cole 1.33a
  • Format: Seminar

What’s so great about compositionality? by Professor Stuart M Shieber, Harvard.

Abstract: Compositionality is the tenet that the meaning of an expression is determined by the meanings of its immediate parts along with their method of combination. The semantics of artificial languages (such as programming languages or logics) are uniformly given compositionally, so that the notion doesn’t even arise in that literature. Linguistic theories, on the other hand, differ as to whether the relationship that they posit between the syntax and semantics of a natural language is structured in a compositional manner. Theories following the tradition of Richard Montague take compositionality to be a Good Thing, whereas theories in the transformational tradition eschew it.

I will look at what compositionality is and isn’t, why it seems desirable, why it seems problematic, and whether its advantages can’t be provided by other means. In particular, I argue that synchronous semantics can provide many of the advantages of compositionality, whether or not it is itself properly viewed as a compositional method, as well as having interesting practical applications.

Event details

  • When: 6th June 2014 14:00 - 15:00
  • Where: Cole 1.33a
  • Format: Seminar

Computational Social Choice: an Overview by Edith Elkind, University of Oxford

ABSTRACT
In this talk, we will provide a self-contained introduction to the field of computational social choice – an emerging research area that applies tools and techniques of computer science (most notably, algorithms, complexity and artificial intelligence) to problems that arise in voting theory, fair division, and other subfields of social choice theory. We will give a high-level overview of this research area, and mention some open problems that may be of interest to mathematicians and computer scientists.
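The algorithmic flavour of the field shows up even in the simplest voting rules. As an illustration (the candidates and ballots below are invented, and Borda count is just one of many rules studied in computational social choice), here is a short Python sketch that computes a Borda-count winner from ranked ballots:

```python
def borda_winner(ballots):
    """Each ballot ranks all m candidates, best first.  A candidate earns
    m-1 points for a first place, m-2 for second, and so on down to 0.
    Returns the candidate with the highest total score."""
    scores = {}
    for ballot in ballots:
        m = len(ballot)
        for rank, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (m - 1 - rank)
    return max(scores, key=scores.get)

# Four voters ranking three candidates, best first.
ballots = [
    ["A", "B", "C"],
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]
print(borda_winner(ballots))   # prints A (scores: A=5, B=4, C=3)
```

Questions such as how hard it is for a voter to manipulate a rule like this, or how to compute winners of more complex rules efficiently, are exactly where the complexity-theoretic and algorithmic tools mentioned in the abstract come into play.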

Event details

  • When: 15th April 2014 - 15:00
  • Where: Maths Theatre B
  • Series: School Seminar Series
  • Format: Seminar

A slippery slope — the path to national health data linkage in Australia – John Bass

Abstract: Linkage of health-related data in Australia dates back to the late 1960s, with the first inspiration coming from the United Kingdom. Since then computers have developed at a barely believable rate, and technical considerations still exist but do not pose any serious problems. Progress has been slowed by the increasing need for better privacy and confidentiality. Further complications have resulted from living in a large and diverse country ruled by several highly parochial states as well as the federal government. This presentation tells the story from a viewpoint largely based in Perth, Western Australia. In 1984 this city had a population of less than a million, and the nearest city/town of more than 20,000 people was Adelaide, more than 1,650 miles away by road. In our context, this was a benefit as much as a hindrance, and Perth has been very much the epicentre of data linkage.

Bio: After an early career in marine zoology combined with computing, John Bass has been at the leading edge of health-related data linkage in Australia since 1984. Early work on infant mortality in Western Australia resulted in a linked dataset that became the cornerstone of the Telethon Institute for Child Health Research. He then implemented the Australian National Death Index in Canberra before returning to Perth as the founding manager of the Western Australian linked health data project — the first of its kind in the country. He designed and implemented the technical system of this group, which is widely recognised as the foremost data linkage unit in Australia. John stepped aside from his position in 2000 but has continued a close relationship with the project, designing and overseeing the implementation of genealogical links and then spending several years working with state and federal government to implement the first large-scale linkage of national pharmaceutical and general practice information. This involved the development of new best-practice privacy protocols that are now widely adopted across Australia. He was a core participant in developing a detailed plan for the implementation of a second state-based data linkage unit involving New South Wales and the Australian Capital Territory. In 2008 John moved to Tasmania, where he spent four years planning and paving the way for the implementation of a state-wide data linkage unit. He is now semi-retired, but still working on new developments in data linkage technology.

Event details

  • When: 13th May 2014 14:00 - 15:00
  • Where: Cole 1.33a
  • Format: Seminar

The Chomsky-Schützenberger Theorem for Quantitative Context-Free Languages by Heiko Vogler, University of Dresden

ABSTRACT:
Weighted automata model quantitative aspects of systems, such as the consumption of resources during executions. Traditionally, the weights are assumed to form the algebraic structure of a semiring, but recently other weight computations, such as average, have also been considered. Here, we investigate quantitative context-free languages over very general weight structures incorporating all semirings, average computations, and lattices. In our main result, we derive the Chomsky-Schützenberger Theorem for such quantitative context-free languages, showing that each arises as the image, under a suitable morphism, of the intersection of a Dyck language and a recognizable language.

This is joint work with Manfred Droste (University of Leipzig).

BIOGRAPHY:
Prof. Dr.-Ing. habil. Heiko Vogler received his doctorate (Doktor in de Technische Wetenschappen) from the Technische Hogeschool Twente, The Netherlands, in 1986. He completed his Habilitation in Computer Science at RWTH Aachen in 1990, was an associate professor at the University of Ulm from 1991 to 1994, and has been a full professor at TU Dresden since 1994. He received an honorary doctorate (Doktor honoris causa) from the University of Szeged, Hungary, in November 2013. His research interests are weighted tree automata and formal models for statistical machine translation of natural languages.

Event details

  • When: 7th April 2014 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar