- When: 16th April 2019 14:00 - 15:00
- Where: Cole 1.33a
- Series: School Seminar Series
- Format: Seminar
Title and Abstract TBD
2018 was a crazy year for privacy. The General Data Protection Regulation came into force in May, and new revelations on the personal data ecosystem were making headlines on a weekly basis. I will give the behind-the-scenes story for many of these events, ask why they didn’t happen earlier, and offer some thoughts on the necessary future of online services. This will include a brief discussion of topics such as semantic alignment, interpretable machine learning, and new privacy-preserving data processing techniques.
Paul-Olivier Dehaye is a mathematician by training. Affiliated with the University of Zurich as an SNSF Assistant Professor until 2016, his career then took a turn towards data protection activism and social entrepreneurship. He was the researcher behind several news articles which reached millions of readers (including many with Carole Cadwalladr), and has testified before the UK and EU Parliaments on multiple occasions. He is on the board of MyData Global, and founded the NGO PersonalData.IO and the project MyData Geneva.
Escape rooms are popular recreational activities whereby players are locked in a room and must solve a series of puzzles in order to ‘escape’. Recent years have seen a large expansion in the technology used in these rooms to provide ever-changing and increasingly immersive experiences. This technology could be used to minimise accessibility issues for users, e.g. those with hearing or visual impairments, so that they can engage in the same way as their peers without disabilities. Escape room designers and players completed an online questionnaire exploring the use of technology and the accessibility of escape rooms. Results show that accessibility remains a key challenge in the design and implementation of escape rooms, despite the inclusion of technology that could be used to improve the experience of users with disabilities. This presentation will explore the lack of accessibility within escape rooms and the potential for technology to bridge this gap.
Dr Rachel Menzies is the Head of Undergraduate Studies for Computing at the University of Dundee and is the current SICSA Director of Education (https://www.sicsa.ac.uk/education/). She co-directs the UX’d research group (https://www.ux-d.co.uk/) and her research interests include user-centred design with marginalised user groups, such as users with disabilities, as well as exploring novel interfaces, data visualisation and CS education. Her most recent work focusses on accessibility in escape rooms, in particular how users with varied disabilities can access and enjoy the experience alongside typical users.
From mapping the spread of disease to monitoring climate change, data holds the key to solving some of the world’s biggest challenges. Dependable decisions rely on understanding the provenance and reliability of data. Historically, only a small fraction of the generated data was shared and re-used, while the majority of data were used once and then erased or archived. At NPL Data Science we are defining best practice in measurement data reuse and traceability by developing metadata standards and data storage structures to locate and interpret datasets and make them available for sharing, publication and data mining.
The talk will shed light on the most pressing issues in scientific data management, illustrated with examples from industrial and academic practice. It will present several NPL Data Science projects that focus on delivering confidence in data obtained from life science imaging, medicine, geosciences and fundamental physics.
Dr Marina Romanchikova joined the NPL Data Science team in 2017 to work on data quality and metadata standards. She obtained an MSc in Medical Informatics at the University of Heidelberg, Germany, where she specialised in medical image processing and in the management of hospital information systems. In 2010 she received a PhD on Monte Carlo dosimetry for targeted radionuclide therapy at the Institute of Cancer Research in Sutton, UK. Marina worked for six years as a radiotherapy research physicist at Cambridge University Hospitals, where she developed methods for the curation and analysis of medical images.
– Quantitative quality assessment of medical images and medical image segmentation
– Harmonisation of medical and healthcare data from heterogeneous sources
– Applications of machine learning in healthcare
– Automated data quality assurance
There has been a dramatic growth in the number and range of Internet of Things (IoT) sensors that generate healthcare data. These sensors stream high-dimensional time series data that must be analysed in order to provide the insights into medical conditions that can improve patient healthcare. This raises both statistical and computational challenges, including where to deploy the streaming data analytics, given that a typical healthcare IoT system will combine a highly diverse set of components with very varied computational characteristics, e.g. sensors, mobile phones and clouds. Different partitionings of the analytics across these components can dramatically affect key factors such as the battery life of the sensors, and the overall performance. In this work we describe a method for automatically partitioning stream processing across a set of components in order to optimise for a range of factors including sensor battery life and communications bandwidth. We illustrate this using our implementation of a statistical model predicting the glucose levels of type II diabetes patients in order to reduce the risk of hyperglycaemia.
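As a toy illustration of the trade-off described above (not the authors’ actual method or cost model), a brute-force search over cut points in a linear sensor → phone → cloud pipeline might look like the following; all stage names, costs, and energy weights are assumed for the example:

```python
# Hypothetical sketch: place a four-stage analytics pipeline across
# sensor, phone, and cloud to minimise a weighted energy cost that
# penalises CPU work and radio traffic on battery-powered devices.
STAGES = ["filter", "window", "feature", "predict"]       # pipeline order
CPU_COST = {"filter": 1, "window": 2, "feature": 5, "predict": 8}
OUT_BYTES = {"filter": 800, "window": 400, "feature": 50, "predict": 4}
ENERGY_PER_CPU = {"sensor": 10.0, "phone": 2.0, "cloud": 0.0}   # battery weight
ENERGY_PER_BYTE = {"sensor": 0.05, "phone": 0.01, "cloud": 0.0} # radio weight

def cost(cuts):
    """Energy cost of a partition: stages[:i] on the sensor,
    stages[i:j] on the phone, stages[j:] in the cloud."""
    i, j = cuts
    placement = [("sensor", STAGES[:i]), ("phone", STAGES[i:j]),
                 ("cloud", STAGES[j:])]
    total = 0.0
    prev_out = 1600  # assumed raw bytes per window emitted by the sensor
    for device, stages in placement:
        for s in stages:
            total += ENERGY_PER_CPU[device] * CPU_COST[s]
            prev_out = OUT_BYTES[s]
        # data crossing the link from this device to the next tier
        total += ENERGY_PER_BYTE[device] * prev_out
    return total

best = min(((i, j) for i in range(len(STAGES) + 1)
            for j in range(i, len(STAGES) + 1)), key=cost)
print("best cuts:", best, "cost:", cost(best))
```

Even this tiny model shows the interaction the abstract describes: running early stages on the sensor shrinks the data before it hits the radio, but costs sensor battery; the optimiser finds the cut where the two effects balance.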
Lauren and Peter are final-year PhD students at the CDT in Cloud Computing for Big Data at Newcastle University. Peter has a background in Computer Engineering from the University of Žilina, Slovakia and a double degree in Computer Software Engineering from JAMK University of Applied Sciences, Jyväskylä, Finland. His research interests are in distributed event processing, edge computing and the Internet of Things, with a special focus on energy and bandwidth constraints. Lauren has an MMath degree from Newcastle University and her research interests lie in statistical modelling of time series data.
Apprenticeship degrees have sprung up so fast that there has been little time for us all to reflect on how this apparently new form of education, to universities at least, could significantly affect our educational offerings. The University of Glasgow has been undertaking some preparatory work for Skills Development Scotland prior to running its apprenticeship degree in software engineering, and this has afforded us some time to see what others nationally and internationally have been doing, to consider relevant aspects of the literature, and to consult with industry. One view that we are developing of these degrees is as a true evolution of typical, largely campus-based, software engineering degrees towards a full-blown professional degree such as in medicine, where universities and hospitals are in real partnership over the training of doctors. In this talk, I will outline our thinking and raise a number of issues for discussion. In suggesting a closer relationship with industry in a talk in St Andrews, I do not of course miss the irony that industry accreditation was never (I believe) something that St Andrews was particularly bothered about – my own BSc (Hons) from 1988 is not accredited!
Tissue ablation is a widely used treatment in both the cosmetic and medical sectors, used to treat various diseases or to improve cosmetic outcomes. We present our tissue ablation model, which can predict the depth of ablation and the surrounding thermal damage caused by the laser during ablation.
“Non-diffracting” beams have a multitude of uses in physics, from optical manipulation to improved microscopy light sources. For the first time, we show that these beams can be modelled using the Monte Carlo radiation transport method, allowing better insight into how these beams propagate in a turbid medium.
Both of these projects use the Monte Carlo radiation transport (MCRT) method to simulate light transport. The MCRT method is a powerful numerical method that can solve light transport through heavily scattering and absorbing media, such as biological tissue. The method is extremely flexible and can model arbitrary geometries and light sources. MCRT can also model the various micro-physics of the simulated medium, such as polarisation, fluorescence, and Raman scattering. This talk will give an overview of our group’s work, with particular focus on simulating tissue ablation and modelling “non-diffracting” beams.
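A minimal sketch of the MCRT idea, assuming a homogeneous slab with illustrative scattering and absorption coefficients and isotropic scattering (production MCRT codes use anisotropic phase functions such as Henyey–Greenstein and full 3D geometry):

```python
# Toy Monte Carlo radiation transport: photons random-walk through a
# homogeneous slab; we tally the fraction reflected, transmitted, and
# absorbed. All coefficients below are assumed for illustration.
import math
import random

MU_A, MU_S = 0.1, 10.0      # absorption / scattering coefficients (per mm)
MU_T = MU_A + MU_S          # total interaction coefficient
ALBEDO = MU_S / MU_T        # probability an interaction is a scatter
THICKNESS = 1.0             # slab thickness in mm

def run_photon(rng):
    """Walk one photon through the slab and return its fate."""
    z, uz = 0.0, 1.0        # depth and direction cosine, launched into slab
    while True:
        # sample a free path length from the exponential distribution
        z += uz * (-math.log(1.0 - rng.random()) / MU_T)
        if z < 0.0:
            return "reflected"
        if z > THICKNESS:
            return "transmitted"
        if rng.random() > ALBEDO:
            return "absorbed"
        uz = rng.uniform(-1.0, 1.0)   # isotropic re-scatter (toy model)

rng = random.Random(42)
n = 20_000
tally = {"reflected": 0, "transmitted": 0, "absorbed": 0}
for _ in range(n):
    tally[run_photon(rng)] += 1
for fate, count in tally.items():
    print(f"{fate}: {count / n:.3f}")
```

Because each photon history is independent, the same skeleton extends naturally to the micro-physics mentioned above (polarisation, fluorescence, Raman scattering) by attaching extra state to the photon and extra outcomes to each interaction.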
Lewis McMillan is a final-year physics PhD student at the University of St Andrews. His research interests are in using the Monte Carlo radiation transport method for various applications within medicine and biophotonics.
This is joint work with Charlie Blake.
The most famous single-player card game is ‘Klondike’, but our ignorance of its winnability percentage has been called “one of the embarrassments of applied mathematics”. Klondike is just one of many single-player card games, generically called ‘solitaire’ or ‘patience’ games, for which players have long wanted to know how likely a particular game is to be winnable for a random deal. A number of different games have been studied empirically in the academic literature and by non-academic enthusiasts.
Here we show that a single general purpose Artificial Intelligence program, called “Solvitaire”, can be used to determine the winnability percentage of approximately 30 different single-player card games with a 95% confidence interval of ± 0.1% or better. For example, we report the winnability of Klondike to within 0.10% (in the ‘thoughtful’ variant where the player knows the location of all cards). This is a 30-fold reduction in confidence interval, and almost all our results are either entirely new or represent significant improvements on previous knowledge.
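For intuition on the quoted ±0.1% interval: under a normal approximation to the binomial, the half-width of a 95% confidence interval on a win proportion p estimated from n deals is 1.96·√(p(1−p)/n). The counts below are illustrative, not Solvitaire’s actual figures:

```python
# Sketch: 95% normal-approximation confidence interval for a winnability
# estimate from n simulated deals. The wins/n figures are made up.
import math

def winnability_ci(wins, n, z=1.96):
    """Return (estimate, half_width) of a 95% CI on the win proportion."""
    p = wins / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p, half

# Roughly a million deals suffice for a half-width under 0.1%
# when the true winnability is somewhere near 80%:
p, half = winnability_ci(wins=818_000, n=1_000_000)
print(f"winnability = {p:.2%} +/- {half:.2%}")
```

This is why the reported precision implies solving on the order of a million random deals per game: the half-width shrinks only as 1/√n.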
Ian Gent is professor of Computer Science at the University of St Andrews. His mother taught him to play patience and herself showed endless patience when he “helped” her by taking complete control of the game. A program to play a patience game was one of the programs he wrote on his 1982 Sinclair Spectrum, which is now on the wall outside his office.
This talk is an overview of the VAMPIRE (Vessel Assessment and Measurement Platform for Images of the REtina) project, an international and interdisciplinary research initiative created and led by the Universities of Dundee and Edinburgh in Scotland, UK, since the early 2000s. VAMPIRE research focuses on the eye as a source of biomarkers for systemic diseases (e.g. cardiovascular disease, diabetes, dementia) and cognitive decline, as well as on eye-specific diseases. VAMPIRE is highly interdisciplinary, bringing together medical image analysis, machine learning and data analysis, medical research, and data governance and management at scale. The talk concisely introduces the aims, structure and current results of VAMPIRE, the current vision for effective translation to society, and the several non-technical factors, complementing technical research, that are needed to achieve effective translation.
Emanuele (Manuel) Trucco, MSc, PhD, FRSA, FIAPR, is the NRP Chair of Computational Vision in Computing, School of Science and Engineering, at the University of Dundee, and an Honorary Clinical Researcher of NHS Tayside. He has been active since 1984 in computer vision, and since 2002 in medical image analysis, publishing more than 270 refereed papers and 2 textbooks, and serving on the organizing or program committee of major international and UK conferences. Manuel is co-director of VAMPIRE (Vessel Assessment and Measurement Platform for Images of the Retina), an international research initiative led by the Universities of Dundee and Edinburgh (co-director Dr Tom MacGillivray), and part of the UK Biobank Eye and Vision Consortium. VAMPIRE develops software tools for efficient data and image analysis with a focus on multi-modal retinal images. VAMPIRE has been used in UK and international biomarker studies on cardiovascular risk, stroke, dementia, diabetes and complications, cognitive performance, neurodegenerative diseases, and genetics.
Venue: The Old Course Hotel (Hall of Champions)
9:30 Lecture 1
10:30 Break with Coffee
11:15 Lecture 2
12:15 Break for Lunch (not provided)
14:15 Lecture 3
Lecture 1: Introduction to Scalable Intelligent Systems
Lecture 2: Foundations for Scalable Intelligent Systems
Lecture 3: Implications of Scalable Intelligent Systems
Professor Carl Hewitt is the creator (together with his students and other colleagues) of the Actor Model of computation, which influenced the development of the Scheme programming language and the π calculus, and inspired several other systems and programming languages. The Actor Model is in widespread industrial use including eBay, Microsoft, and Twitter. For his doctoral thesis, he designed Planner, the first programming language based on pattern-invoked procedural plans.
Professor Hewitt’s recent research centers on the area of Inconsistency Robustness, i.e., system performance in the face of continual, pervasive inconsistencies (a shift from the previously dominant paradigms of inconsistency denial and inconsistency elimination, i.e., to sweep inconsistencies under the rug). ActorScript and the Actor Model on which it is based can play an important role in the implementation of more inconsistency-robust information systems. Hewitt is an advocate in the emerging campaign against mandatory installation of backdoors in the Internet of Things.
Hewitt is Board Chair of iRobust™, an international scientific society for the promotion of the field of Inconsistency Robustness. He is also Board Chair of Standard IoT™, an international standards organization for the Internet of Things, which is using the Actor Model to unify and generalize emerging standards for IoT. He has been a Visiting Professor at Stanford University and Keio University and is Emeritus in the EECS department at MIT.
A project to build the technology stack outlined in these lectures can bring Scalable Intelligent Systems to fruition by 2025. Scalable Intelligent Systems have the following characteristics:
A technology stack for Scalable Intelligent Systems is outlined below:
For example, pain management could greatly benefit from Scalable Intelligent Systems. The complexities of dealing with pain have led to the current opioid crisis. According to Eric Rodgers, PhD, director of the VA’s Office of Evidence-Based Practice:
“The use of opioids has changed tremendously since the 1990s, when we first started formulating a plan for guidelines. The concept then was that opioid therapy was an underused strategy for helping our patients and we were trying to get our providers to use this type of therapy more. But as time went on, we became more aware of the harms of opioid therapy and the development of pill mills. The problems got worse.
It’s now become routine for providers to check the state databases to see if there’s multi-sourcing — getting prescriptions from other providers. Providers are also now supposed to use urine drug screenings and, if there are unusual results, to do a confirmation. [For every death from an opioid overdose] there are 10 people who have a problem with opioid use disorder or addiction. And for every addicted person, we have another 10 who are misusing their medication.”
Pain management requires much more than just prescribing opioids, which are often critical for short-term and less often longer-term use. [Coker 2015; Friedberg 2012; Holt 2017; Marchant 2017; McKinney 2015; Spiegel 2018; Tedesco et al. 2017; White 2017] Organizational aspects play an important role in pain management. [Fagerhaugh and Strauss 1977]