Max L. Wilson (University of Nottingham): Brain-based HCI – What brain data could tell us about HCI

Please note non-standard date and time for this talk

Abstract:

This talk will describe a range of our projects utilising functional Near Infrared Spectroscopy (fNIRS) in HCI. As a portable alternative to EEG that is more tolerant of motion artefacts, fNIRS measures oxygenation in the brain, which rises as, for example, mental workload creates demand. As opposed to BCI (trying to control systems with our brain), we focus on brain-based HCI, asking what brain data can tell us about our software, our work, our habits, and ourselves. In particular, we are driven by the idea that brain data can become personal data in the future.

Speaker Bio:

Dr Max L. Wilson is an Associate Professor in the Mixed Reality Lab in Computer Science at the University of Nottingham. His research focus is on evaluating mental workload in HCI contexts – as real-world as possible – primarily using functional Near Infrared Spectroscopy (fNIRS). As a highly tolerant form of brain sensor, fNIRS is suitable for use in HCI research into user interface design, work tasks, and everyday experiences. This work emerged from his prior research into the design and evaluation of complex user interfaces for information interfaces. Across these two research areas, Max has over 120 publications, including an Honourable Mention CHI 2019 paper on a brain-controlled movie, The MOMENT.

Event details

  • When: 25th October 2019 14:00 - 15:00
  • Where: Cole 1.33b
  • Series: School Seminar Series
  • Format: Seminar

DLS: Multimodal human-computer interaction: past, present and future

Speaker: Stephen Brewster (University of Glasgow)
Venue: The Byre Theatre

Timetable:

9:30 Lecture 1: The past: what is multimodal interaction?
10:30 Coffee break
11:15 Lecture 2: The present: does it work in practice?
12:15 Lunch (not provided)
14:15 Lecture 3: The future: where next for multimodal interaction?

Speaker Bio:

Professor Brewster is a Professor of Human-Computer Interaction in the Department of Computing Science at the University of Glasgow, UK. His main research interest is in multimodal human-computer interaction, including sound, haptics and gestures. He has carried out extensive research into Earcons, a particular form of non-speech sound.

He did his degree in Computer Science at the University of Hertfordshire in the UK. After a period in industry he did his PhD in the Human-Computer Interaction Group at the University of York in the UK with Dr Alistair Edwards. The title of his thesis was “Providing a structured method for integrating non-speech audio into human-computer interfaces”. That is where he developed his interests in Earcons and non-speech sound.

After finishing his PhD he worked as a research fellow for the European Union as part of the European Research Consortium for Informatics and Mathematics (ERCIM). From September 1994 to March 1995 he worked at VTT Information Technology in Helsinki, Finland. He then worked at SINTEF DELAB in Trondheim, Norway.

Event details

  • When: 8th October 2019 09:30 - 15:15
  • Where: Byre Theatre
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture

Daniel S. Katz (University of Illinois): Parsl: Pervasive Parallel Programming in Python

Please note non-standard date and time for this talk

Abstract: High-level programming languages such as Python are increasingly used to provide intuitive interfaces to libraries written in lower-level languages and for assembling applications from various components. This migration towards orchestration rather than implementation, coupled with the growing need for parallel computing (e.g., due to big data and the end of Moore’s law), necessitates rethinking how parallelism is expressed in programs.

Here, we present Parsl, a parallel scripting library that augments Python with simple, scalable, and flexible constructs for encoding parallelism. These constructs allow Parsl to construct a dynamic dependency graph of components from a Python program enhanced with a small number of decorators that mark the components to be executed asynchronously and in parallel, and then to execute that graph efficiently on one or many processors. Parsl is designed for scalability, with an extensible set of executors tailored to different use cases, such as low-latency, high-throughput, or extreme-scale execution. We show, via experiments on the Blue Waters supercomputer, that Parsl executors can allow Python scripts to execute components with as little as 5 ms of overhead, scale to more than 250,000 workers across more than 8,000 nodes, and process upwards of 1,200 tasks per second.
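
As an illustration of the decorator style described in the abstract, the following is a minimal sketch, assuming Parsl's python_app decorator and its bundled local-threads configuration; it is not material from the talk itself.

```python
# A minimal Parsl sketch (assumes: pip install parsl, and that the bundled
# local-threads configuration is available in this installation).
import parsl
from parsl import python_app
from parsl.configs.local_threads import config

parsl.load(config)

# Each decorated function becomes an "app": calling it returns a future
# immediately, and Parsl schedules the work asynchronously.
@python_app
def square(x):
    return x * x

futures = [square(i) for i in range(10)]   # independent tasks run in parallel
print(sum(f.result() for f in futures))    # .result() blocks; prints 285
```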

Other Parsl features simplify the construction and execution of composite programs by supporting elastic provisioning and scaling of infrastructure, fault-tolerant execution, and integrated wide-area data management. We show that these capabilities satisfy the needs of many-task, interactive, online, and machine learning applications in fields such as biology, cosmology, and materials science.

Slides: see here.

Speaker Bio: Daniel S. Katz is Assistant Director for Scientific Software and Applications at the National Center for Supercomputing Applications (NCSA), and Research Associate Professor in Computer Science; Electrical & Computer Engineering; and the School of Information Sciences at the University of Illinois Urbana-Champaign. For further details, please see his website here.

Event details

  • When: 18th October 2019 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: School Seminar Series
  • Format: Seminar

Ankush Jhalani (Bloomberg): Building Near Real-Time News Search

Abstract:

This talk provides insight into the challenges involved in providing near real-time news search to Bloomberg customers. It starts with a picture of what is involved in building such a backend, then delves into what makes up a search engine. Finally, we discuss the challenges of scaling up for low latency and high load, and how we tackle them.
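
The speaker bio below mentions Lucene/Elasticsearch; as a rough illustration of the core data structure behind such engines, here is a toy inverted-index sketch in Python. It is purely illustrative and omits everything a near real-time system needs (segments, ranking, sharding, replication).

```python
from collections import defaultdict

class ToyNewsIndex:
    """A toy in-memory inverted index: term -> set of document ids."""

    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        # Naive tokenisation; real engines normalise, stem, and build segments.
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        # AND semantics: intersect the posting lists of all query terms.
        terms = query.lower().split()
        if not terms:
            return set()
        hits = set(self.postings[terms[0]])
        for term in terms[1:]:
            hits &= self.postings[term]
        return hits

idx = ToyNewsIndex()
idx.add(1, "Central bank raises interest rates")
idx.add(2, "Markets rally as bank holds rates steady")
print(idx.search("bank rates"))   # {1, 2}
```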

Speaker Bio:

Ankush leads the News Search infrastructure team at the Bloomberg Engineering office in London. After completing his Master's in Computer Science, he joined Bloomberg at their New York office in 2009. Later, working from Washington DC, he led a team building a web application leveraging Lucene/Elasticsearch for businesses to discover government contracting opportunities. In London, his team focuses on search infrastructure and services that allow clients to search news events from all over the globe with near real-time access and sub-second latencies.


Event details

  • When: 15th October 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Software Carpentry Workshop

Registration is open for the next Software Carpentry workshop in St Andrews on September 23–24 in Parliament Hall. We will teach the UNIX shell, version control with Git, and programming with Python. Please see the workshop page for further details and the link to registration via PDMS.

Event details

  • When: 23rd September 2019 - 24th September 2019
  • Where: Parliament Hall
  • Format: Workshop

MIP Modelling Made Manageable

Can a user write a good MIP model without understanding linearisation? Modelling languages such as AMPL and AIMMS are being extended to support more features, with the goal of making MIP modelling easier. A big step is the incorporation of predicates, such as “cycle”, which encapsulate MIP sub-models. This talk explores the impact of such predicates in the MiniZinc modelling language when it is used as a MIP front-end, and reports on the performance of the resulting models and the features of MiniZinc that make this possible.
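
As a hedged illustration of the predicate style discussed above, the sketch below uses MiniZinc's circuit global constraint (a standard global close in spirit to the “cycle” predicate mentioned) via the minizinc Python bindings. The model, solver tag and API usage are assumptions for illustration, not material from the talk.

```python
# Assumes: pip install minizinc, plus a local MiniZinc installation.
import minizinc

model = minizinc.Model()
model.add_string("""
    include "circuit.mzn";
    int: n = 5;
    % next[i] is the successor of node i; circuit forces one Hamiltonian cycle.
    array[1..n] of var 1..n: next;
    constraint circuit(next);
    solve satisfy;
""")

# With a MIP backend, MiniZinc linearises the predicate behind the scenes,
# so the modeller never writes the linearisation by hand.
solver = minizinc.Solver.lookup("coin-bc")   # assumed tag for the bundled CBC MIP solver
result = minizinc.Instance(solver, model).solve()
print(result["next"])
```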

Professor Mark Wallace is Professor of Data Science & AI at Monash University, Australia. We gratefully acknowledge support from a SICSA Distinguished Visiting Fellowship which helped finance his visit.

Professor Wallace graduated from Oxford University in Mathematics and Philosophy. He worked for the UK computer company ICL for 21 years while completing a Master's degree in Artificial Intelligence at the University of London and a PhD sponsored by ICL at Southampton University. For his PhD, Professor Wallace designed a natural language processing system which ICL turned into a product. He moved to Imperial College in 2002, taking a Chair at Monash University in 2004.

His research interests span different techniques and algorithms for optimisation and their integration and application to solving complex resource planning and scheduling problems. He was a co-founder of the hybrid algorithms research area and is a leader in the research areas of Constraint Programming (CP) and hybrid techniques (CPAIOR). The outcomes of his research in these areas include practical applications in transport optimisation.

He is passionate about modelling and optimisation and the benefits they bring. His focus, both in industry and in academia, has been on application-driven research and development, where industry funding is essential both to ensure research impact and to support sufficient research effort to build software systems that are robust enough for application developers to use.

He led the team that developed the ECLiPSe constraint programming platform, which was bought by Cisco Systems in 2004. Moving to Australia, he worked on a novel hybrid optimisation software platform called G12, and founded the company Opturion to commercialise it. He also established the Monash-CTI Centre for optimisation in travel, transport and logistics. He has developed solutions for major companies such as BA, RAC, CFA, and Qantas. He is currently involved in the Alertness CRC, plant design for Woodside planning, optimisation for Melbourne Water, and work allocation for the Alfred hospital.

Event details

  • When: 19th June 2019 11:00 - 12:00
  • Where: Cole 1.33a
  • Series: AI Seminar Series
  • Format: Lecture, Seminar

St Andrews Bioinformatics Workshop 10/06/19

Next Monday is the annual St Andrews Bioinformatics workshop in Seminar Room 1, School of Medicine. Some of the presentations are very relevant to Computer Science, and all should be interesting. More information below:

Agenda:

14:00 – 14:15: Valeria Montano: The Pre-Neolithic evolutionary history of human genetic resistance to Plasmodium falciparum

14:15 – 14:30: Chloe Hequet: Estimation of Polygenic Risk with Machine Learning

14:30 – 14:45: Roopam Gupta: Label-free optical hemogram of granulocytes enhanced by artificial neural networks

15:00 – 15:15: Damilola Oresegun: Nanopore research: then, now and the future

15:15 – 15:30: Xiao Zhang: Functional and population genomics of extremely rapid evolution in Hawaiian crickets

15:30 – 16:00: Networking with refreshments

16:00 – 17:00: Chris Ponting: The power of One: Single variants, single factors, single cells

You can register your interest in attending here.

Event details

  • When: 10th June 2019 14:00 - 17:00
  • Format: Lecture, Talk, Workshop

Graduation Reception: Wednesday 26th June

The School of Computer Science will host a graduation reception on Wednesday 26th June, in the Jack Cole building, between 11.00 and 13.00. Graduating students and their guests are invited to the School to celebrate with a glass of bubbly and a cream cake. Computer Science degrees will be conferred in an afternoon ceremony in the Younger Hall. Family and friends who can’t make it on the day can watch a live broadcast of graduation. Graduation receptions have been held in the School since 2010.

A class photo will be taken at 12.00 outside the Jack Cole building.

Event details

  • When: 26th June 2019 11:00 - 13:00
  • Where: Cole Coffee Area
  • Format: graduation

Juho Rousu: Predicting Drug Interactions with Kernel Methods

Title:
Predicting Drug Interactions with Kernel Methods

Abstract:
Many real-world prediction problems can be formulated as pairwise learning problems, in which one is interested in making predictions for pairs of objects, e.g. drugs and their targets. Kernel-based approaches have emerged as powerful tools for solving problems of this kind, and multiple kernel learning (MKL) in particular offers promising benefits, as it enables integrating various types of complex biomedical information sources in the form of kernels, along with learning their importance for the prediction task. However, the immense size of pairwise kernel spaces remains a major bottleneck, making existing MKL algorithms computationally infeasible even for a small number of input pairs. We introduce pairwiseMKL, the first method for time- and memory-efficient learning with multiple pairwise kernels. pairwiseMKL first determines the mixture weights of the input pairwise kernels, and then learns the pairwise prediction function. Both steps are performed efficiently without explicit computation of the massive pairwise matrices, making the method applicable to large pairwise learning problems. We demonstrate the performance of pairwiseMKL in two related tasks of quantitative drug bioactivity prediction, using up to 167,995 bioactivity measurements and 3,120 pairwise kernels: (i) prediction of the anticancer efficacy of drug compounds across a large panel of cancer cell lines; and (ii) prediction of the target profiles of anticancer compounds across their kinome-wide target spaces. We show that pairwiseMKL provides accurate predictions using solutions that are sparse in terms of selected kernels, and in doing so automatically identifies the data sources relevant to the prediction problem.
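
To make the size problem concrete: for complete grids of (drug, cell line) pairs, the standard pairwise kernel is the Kronecker product of the two object kernels, so its dimensions multiply. The following is a minimal numpy sketch of that construction (the feature matrices and the RBF kernel choice are illustrative assumptions); pairwiseMKL's point is precisely to avoid forming such matrices explicitly.

```python
import numpy as np

def rbf_kernel(X, gamma=0.1):
    # Gram matrix of an RBF kernel over the rows of X.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
drug_features = rng.normal(size=(4, 8))   # e.g. molecular descriptors (made up)
cell_features = rng.normal(size=(3, 5))   # e.g. gene-expression profiles (made up)

K_drug = rbf_kernel(drug_features)        # 4 x 4
K_cell = rbf_kernel(cell_features)        # 3 x 3

# Pairwise kernel over all (drug, cell) pairs:
#   K_pair[(i, j), (k, l)] = K_drug[i, k] * K_cell[j, l]
K_pair = np.kron(K_drug, K_cell)
print(K_pair.shape)                       # (12, 12): the sizes multiply
```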

References:
Anna Cichonska, Tapio Pahikkala, Sandor Szedmak, Heli Julkunen, Antti Airola, Markus Heinonen, Tero Aittokallio, Juho Rousu; Learning with multiple pairwise kernels for drug bioactivity prediction, Bioinformatics, Volume 34, Issue 13, 1 July 2018, Pages i509–i518, https://doi.org/10.1093/bioinformatics/bty277

Short Bio:
Juho Rousu is a Professor of Computer Science at Aalto University, Finland. Rousu obtained his PhD in 2001 from the University of Helsinki, while working at VTT Technical Research Centre of Finland. From 2003 to 2005 he was a Marie Curie Fellow at Royal Holloway, University of London. From 2005 to 2011 he held Lecturer and Professor positions at the University of Helsinki, before moving to Aalto University in 2012, where he leads a research group on Kernel Methods, Pattern Analysis and Computational Metabolomics (KEPACO). Rousu’s main research interest is in learning with multiple and structured targets, multiple views and ensembles, with methodological emphasis on regularised learning, kernels and sparsity, as well as efficient convex and non-convex optimisation methods. His applications of interest include metabolomics, biomedicine, pharmacology and synthetic biology.

Event details

  • When: 30th April 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Format: Seminar

Distinguished Lecture Series: Formal Approaches to Quantitative Evaluation

Biography:
Jane Hillston was appointed Professor of Quantitative Modelling in the School of Informatics at the University of Edinburgh in 2006, having joined the University as a Lecturer in Computer Science in 1995. She is currently Head of the School of Informatics. She is a Fellow of the Royal Society of Edinburgh and a Member of Academia Europaea, and currently chairs the Executive Committee of the UK Computing Research Committee.
Jane Hillston’s research is concerned with formal approaches to modelling dynamic behaviour, particularly the use of stochastic process algebras for performance modelling and stochastic verification. The applications of her modelling techniques have ranged from computer systems to biological processes and transport systems. Her PhD dissertation was awarded the BCS/CPHC Distinguished Dissertation award in 1995, and she was the first recipient of the Roger Needham Award in 2005. She has published over 100 journal and conference papers and held several Research Council and European Commission grants.
She has a strong interest in promoting equality and diversity within Computer Science; she is a member of the Women’s Committee of the BCS Computing Academy, and she chaired the Women in Informatics Research and Education working group of Informatics Europe from 2016 to 2018, during which time she instigated the Minerva Informatics Equality Award.

Formal Approaches to Quantitative Evaluation
Qualitative evaluation of computer systems seeks to ensure that the system does not exhibit bad behaviour and is in some sense “correct”. Whilst this is important, it is often also useful to be able to reason not just about what will happen in the system, but also about the dynamics of that behaviour: how long will it take, what are the probabilities of alternative outcomes, and how much resource is used? Such questions can be answered by quantitative analysis, when information about timing and probability is incorporated into models of system behaviour.

In this short series of lectures I will talk about how we can extend formal methods to support quantitative as well as qualitative evaluation of systems. The first lecture will focus on computer systems and a basic approach based on the stochastic process algebra PEPA. In the second lecture I will introduce the language CARMA, which is designed to support the analysis of collective adaptive systems, in which the structure of the system may change over time. In the third lecture I will consider systems where the exact details of behaviour may not be known, and present the process algebra ProPPA, which combines aspects of machine learning and inference with formal quantitative models.
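
For context, stochastic process algebras such as PEPA are typically given a continuous-time Markov chain semantics, and performance measures are derived from the chain's steady-state distribution. The following is a minimal numpy sketch of that final analysis step; the generator matrix and rates are invented for illustration and are not taken from the lectures.

```python
import numpy as np

# Generator matrix Q of a small continuous-time Markov chain (each row sums
# to zero). States: 0 = idle, 1 = busy, 2 = broken; all rates are invented.
Q = np.array([
    [-2.0,  2.0,  0.0],   # idle   -> busy   at rate 2
    [ 3.0, -3.5,  0.5],   # busy   -> idle   at rate 3, busy -> broken at rate 0.5
    [ 1.0,  0.0, -1.0],   # broken -> idle   (repair) at rate 1
])

# The steady-state distribution pi satisfies pi Q = 0 and sum(pi) = 1.
# Replace one balance equation with the normalisation condition and solve.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)   # long-run proportion of time spent in each state
```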

Timetable:
Lecture 1: 9:30 – 10:30 – Performance Evaluation Process Algebra (PEPA)
Coffee break: 10:30 – 11:15
Lecture 2: 11:15 – 12:15 – Collective Adaptive Resource-sharing Markovian Agents (CARMA)
Lecture 3: 14:15 – 15:15 – Probabilistic Programming for Stochastic Dynamical Systems (ProPPA)

Venue: Upper and Lower College Halls

Event details

  • When: 8th April 2019 09:30 - 15:30
  • Where: Lower College Hall
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture