Distinguished Lecture Series 2024

This year's Distinguished Lecture Series was delivered yesterday (Tuesday 12th March) by Professor Neil Lawrence of the University of Cambridge.

In his talk, ‘The Atomic Human: Understanding Ourselves in the Age of AI’, he gave an overview of where we are now with machine learning solutions and what challenges we face in both the near and far future. These include the practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.

Distinguished Lecture Series: The Atomic Human: Understanding Ourselves in the Age of AI

  • Tuesday 12 March
  • Booth Lecture Theatre, Medical Sciences Building.

We look forward to welcoming Prof Neil Lawrence of the University of Cambridge, who will talk about ‘The Atomic Human: Understanding Ourselves in the Age of AI’.

A vital perspective is missing from the discussions we are having about Artificial Intelligence: what does it mean for our identity?

Our fascination with AI stems from the perceived uniqueness of human intelligence. We believe it is what differentiates us. Fears of AI concern not only how it invades our digital lives but also the implied threat of an intelligence that displaces us from our position at the centre of the world.

Atomism, proposed by Democritus, suggested it was impossible to continue dividing matter into ever smaller components: eventually, we reach a point where a cut cannot be made (the Greek for ‘uncuttable’ is ‘atomos’, the root of ‘atom’). In the same way, by slicing away at the facets of human intelligence that can be replaced by machines, AI uncovers what is left: an indivisible core that is the essence of humanity.

By contrasting our own (evolved, locked-in, embodied) intelligence with the capabilities of machine intelligence through history, The Atomic Human reveals the technical origins, capabilities, and limitations of AI systems, and how they should be wielded: not just by the experts, but by ordinary people. Either AI is a tool for us, or we become a tool of AI. Understanding this will enable us to choose the future we want.

This talk is based on Neil's forthcoming book, to be published by Allen Lane in June 2024. Machine learning solutions, in particular those based on deep learning methods, form an underpinning of the current revolution in “artificial intelligence” that has dominated popular press headlines and is having a significant influence on the wider tech agenda.

In this talk, I will give an overview of where we are now with machine learning solutions, and what challenges we face in both the near and far future. These include the practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.

Distinguished Lecture Series: Computer Science and the Environment

Thank you to Professor Gordon Blair for delivering this year's distinguished lecture on Computer Science and the Environment.

The series of talks explained the role of computer science in addressing the massive challenges associated with a changing climate.

Feedback was positive and the series was enjoyed by all!

From Left to Right: Jonathan Lewis, Blesson Varghese, Simon Dobson, Gordon Blair, Ian Miguel & Al Dearle (Back)

Distinguished Lecture Series: Computer Science and the Environment – 14 March 2023

Professor Gordon Blair

Prof. Gordon Blair is Head of Environmental Digital Strategy at UKCEH. He is also Distinguished Professor of Distributed Systems at Lancaster University, where he holds a part-time (20%) post, and Co-Director of the Centre of Excellence in Environmental Data Science (CEEDS), a joint initiative between UKCEH and Lancaster University. His current research interests focus on the role of digital technology in supporting environmental science. This includes new forms of environmental monitoring and data acquisition, including the role of Internet of Things technology; new forms of computational infrastructure to support the storage and processing of such data, specifically using cloud computing; and new forms of analysing and making sense of this data using data science and AI. This all builds on a strong legacy of research in distributed systems: he has been involved since the inception of the field in the early 1980s, including research on the middleware architectures that underpin complex distributed systems applications and services.

Abstract:

Computer Science innovation has revolutionised many areas of society, including the way we work, play, shop and indeed study. Computer science also has enormous potential in environmental science, including supporting scientists in understanding the impacts of climate change and developing mitigation and adaptation policies and approaches. Examples include new forms of environmental monitoring and data acquisition, including the role of Internet of Things technology; new forms of computational infrastructure to support the storage and processing of such data, specifically using cloud computing; and new forms of analysing and making sense of this data using data science and AI. This series of talks will examine the role of computer science in addressing the massive challenges associated with a changing climate. The first talk will examine the opportunities in this area in some depth, also considering for balance the negative impacts of computing technology on the environment, highlighting the need for responsible innovation in this area. The second talk will zoom in on the nature of environmental data and the unique challenges of analysing and making sense of these data sets. The final talk will then look at one grand challenge in the environmental space: what does it mean to build digital twins of aspects of the environment?

Time: 12:00 – 17:00

Date: Tuesday 14th March

Place: Booth Lecture Theatre, Medical Sciences Building

  • 12:00 – Welcome
  • 12:15 – Lecture 1 with Q&A  
  • 13:15 – Uncatered lunch break
  • 14:30 – Reconvening remarks
  • 14:35 – Lecture 2 with Q&A
  • 15:30 – Catered coffee break
  • 16:00 – Lecture 3 with Q&A
  • 16:55 – Concluding remarks

DLS: Multimodal human-computer interaction: past, present and future

Speaker: Stephen Brewster (University of Glasgow)
Venue: The Byre Theatre

Timetable:

9:30: Lecture 1: The past: what is multimodal interaction?
10:30: Coffee break
11:15: Lecture 2: The present: does it work in practice?
12:15: Lunch (not provided)
14:15: Lecture 3: The future: where next for multimodal interaction?

Speaker Bio:

Professor Brewster is a Professor of Human-Computer Interaction in the Department of Computing Science at the University of Glasgow, UK. His main research interest is multimodal human-computer interaction, spanning sound, haptics and gestures. He has carried out extensive research into Earcons, a particular form of non-speech sound.

He did his degree in Computer Science at the University of Hertfordshire in the UK. After a period in industry, he did his PhD in the Human-Computer Interaction Group at the University of York in the UK with Dr Alistair Edwards. The title of his thesis was “Providing a structured method for integrating non-speech audio into human-computer interfaces”. That is where he developed his interests in Earcons and non-speech sound.

After finishing his PhD he worked as a research fellow for the European Union as part of the European Research Consortium for Informatics and Mathematics (ERCIM). From September 1994 to March 1995 he worked at VTT Information Technology in Helsinki, Finland. He then worked at SINTEF DELAB in Trondheim, Norway.

Event details

  • When: 8th October 2019 09:30 - 15:15
  • Where: Byre Theatre
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture

Distinguished Lecture Series: Formal Approaches to Quantitative Evaluation

Biography:
Jane Hillston was appointed Professor of Quantitative Modelling in the School of Informatics at the University of Edinburgh in 2006, having joined the University as a Lecturer in Computer Science in 1995. She is currently Head of the School of Informatics, a Fellow of the Royal Society of Edinburgh and a Member of Academia Europaea, and chairs the Executive Committee of the UK Computing Research Committee.
Jane Hillston’s research is concerned with formal approaches to modelling dynamic behaviour, particularly the use of stochastic process algebras for performance modelling and stochastic verification. The application of her modelling techniques has ranged from computer systems to biological processes and transport systems. Her PhD dissertation was awarded the BCS/CPHC Distinguished Dissertation award in 1995 and she was the first recipient of the Roger Needham Award in 2005. She has published over 100 journal and conference papers and held several Research Council and European Commission grants.
She has a strong interest in promoting equality and diversity within Computer Science: she is a member of the Women’s Committee of the BCS Computing Academy and chaired the Women in Informatics Research and Education working group of Informatics Europe from 2016 to 2018, during which time she instigated the Minerva Informatics Equality Award.

Formal Approaches to Quantitative Evaluation
Qualitative evaluation of computer systems seeks to ensure that the system does not exhibit bad behaviour and is in some sense “correct”. Whilst this is important, it is often also useful to reason not just about what will happen in the system, but about the dynamics of that behaviour: how long will it take, what are the probabilities of alternative outcomes, how much resource is used? Such questions can be answered by quantitative analysis when information about timing and probability is incorporated into models of system behaviour.

In this short series of lectures I will talk about how we can extend formal methods to support quantitative evaluation as well as qualitative evaluation of systems. The first lecture will focus on computer systems and a basic approach based on the stochastic process algebra PEPA. In the second lecture I will introduce the language CARMA, which is designed to support the analysis of collective adaptive systems, in which the structure of the system may change over time. In the third lecture I will consider systems where the exact details of behaviour may not be known, and present the process algebra ProPPA, which combines aspects of machine learning and inference with formal quantitative models.
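To make this concrete, here is a small, hedged sketch (illustrative Python, not PEPA code; the model and rates are invented for this post) of the kind of quantitative question these lectures address. PEPA models compile down to continuous-time Markov chains; the example below builds a two-state chain for a service that fails and is repaired at fixed rates, and computes its long-run availability.

```python
import numpy as np

# Illustrative sketch (not PEPA itself): a two-state continuous-time
# Markov chain for a service that alternates between "serving" (state 0)
# and "failed" (state 1). The rates are invented assumptions.
failure_rate = 0.1  # serving -> failed, events per hour
repair_rate = 2.0   # failed -> serving, events per hour

# Infinitesimal generator Q: off-diagonal entries are transition rates;
# each row sums to zero.
Q = np.array([[-failure_rate, failure_rate],
              [repair_rate, -repair_rate]])

# The steady-state distribution pi satisfies pi Q = 0 with sum(pi) = 1.
# Stack the normalisation row onto Q^T and solve by least squares.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

print(f"long-run availability:   {pi[0]:.4f}")  # ~0.9524
print(f"long-run unavailability: {pi[1]:.4f}")  # ~0.0476
```

The same recipe scales to the far larger state spaces that process-algebra tools generate automatically from compositional model descriptions.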

Timetable:
Lecture 1: 9:30 – 10:30 – Performance Evaluation Process Algebra (PEPA)

Coffee break: 10:30 – 11:15
Lecture 2: 11:15 – 12:15 – Collective Adaptive Resource-sharing Markovian Agents (CARMA)

Lecture 3: 14:15 – 15:15 – Probabilistic Programming for Stochastic Dynamical Systems (ProPPA)


Venue: Upper and Lower College Halls

Event details

  • When: 8th April 2019 09:30 - 15:30
  • Where: Lower College Hall
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture

DLS: Scalable Intelligent Systems by 2025 (Carl Hewitt)

Venue: The Old Course Hotel (Hall of Champions)

Timetable:

9:30 Lecture 1
10:30 Break with Coffee
11:15 Lecture 2
12:15 Break for Lunch (not provided)
14:15 Lecture 3
15:15 Discussion

Lecture 1: Introduction to Scalable Intelligent Systems

Lecture 2: Foundations for Scalable Intelligent Systems

Lecture 3: Implications of Scalable Intelligent Systems

Speaker Bio:

Professor Carl Hewitt is the creator (together with his students and other colleagues) of the Actor Model of computation, which influenced the development of the Scheme programming language and the π calculus, and inspired several other systems and programming languages. The Actor Model is in widespread industrial use including eBay, Microsoft, and Twitter. For his doctoral thesis, he designed Planner, the first programming language based on pattern-invoked procedural plans.
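For readers unfamiliar with the model, the sketch below (an illustrative Python rendering of the general idea, not ActorScript or any production actor framework; all names are invented for the example) shows its essentials: an actor encapsulates private state and a mailbox, and the only way to interact with it is by asynchronous message passing.

```python
import queue
import threading

# Illustrative sketch of the actor idea: an actor owns private state and
# a mailbox, processes one message at a time, and shares no memory with
# other actors.
class Counter:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0  # private state, touched only by this actor's thread
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self.mailbox.put(message)  # asynchronous, non-blocking send

    def _run(self):
        while True:
            msg, reply_to = self.mailbox.get()
            if msg == "increment":
                self.count += 1
            elif msg == "get":
                reply_to.put(self.count)  # reply on the channel provided

counter = Counter()
for _ in range(3):
    counter.send(("increment", None))
reply = queue.Queue()
counter.send(("get", reply))
print(reply.get())  # prints 3
```

Because actors never share memory, they can be spread across cores or machines without locks, which is what makes the model attractive for large-scale systems.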

Professor Hewitt’s recent research centers on the area of Inconsistency Robustness, i.e., system performance in the face of continual, pervasive inconsistencies (a shift from the previously dominant paradigms of inconsistency denial and inconsistency elimination, i.e., to sweep inconsistencies under the rug). ActorScript and the Actor Model on which it is based can play an important role in the implementation of more inconsistency-robust information systems. Hewitt is an advocate in the emerging campaign against mandatory installation of backdoors in the Internet of Things.

Hewitt is Board Chair of iRobust™, an international scientific society for the promotion of the field of Inconsistency Robustness. He is also Board Chair of Standard IoT™, an international standards organization for the Internet of Things, which is using the Actor Model to unify and generalize emerging standards for IoT. He has been a Visiting Professor at Stanford University and Keio University and is Emeritus in the EECS department at MIT.

Abstract:

A project to build the technology stack outlined in these lectures can bring Scalable Intelligent Systems to fruition by 2025. Scalable Intelligent Systems have the following characteristics:

  • Interactively acquire information from video, Web pages, hologlasses, online databases, sensors, articles, human speech and gestures, etc.
  • Real-time integration of massive pervasively inconsistent information
  • Scalability in all important dimensions, meaning that there are no hard barriers to continual improvement in the above areas
  • Close human collaboration with hologlasses for secure mobile interaction
  • Computers alone cannot implement the above capabilities; no closed-form algorithmic solution is possible

The technology stack for Scalable Intelligent Systems is outlined below:

  • Experiences: Hologlasses: Collaboration, Gestures, Animations, Video
  • Matrix: Discourse, Rhetoric, and Narration
  • Citadels: No single point of failure
  • Massive Inconsistency Robust Ontology: Propositions, Goals, Plans, Descriptions, Statistics, Narratives
  • Actor Services: Hardware and Software
  • Actor Many Cores: Non-sequential, Every-word-tagged, Faraday cage Crypto, Stacked Carbon Nanotube

For example, pain management could greatly benefit from Scalable Intelligent Systems. The complexities of dealing with pain have led to the current opioid crisis. According to Eric Rodgers, PhD, director of the VA’s Office of Evidence Based Practice:

“The use of opioids has changed tremendously since the 1990s, when we first started formulating a plan for guidelines. The concept then was that opioid therapy was an underused strategy for helping our patients and we were trying to get our providers to use this type of therapy more. But as time went on, we became more aware of the harms of opioid therapy and the development of pill mills. The problems got worse.

It’s now become routine for providers to check the state databases to see if there’s multi-sourcing — getting prescriptions from other providers. Providers are also now supposed to use urine drug screenings and, if there are unusual results, to do a confirmation. [For every death from an opioid overdose] there are 10 people who have a problem with opioid use disorder or addiction. And for every addicted person, we have another 10 who are misusing their medication.”

Pain management requires much more than just prescribing opioids, which are often critical for short-term use and, less often, for longer-term use. [Coker 2015; Friedberg 2012; Holt 2017; Marchant 2017; McKinney 2015; Spiegel 2018; Tedesco et al. 2017; White 2017] Organizational aspects play an important role in pain management. [Fagerhaugh and Strauss 1977]

Event details

  • When: 13th November 2018 09:30 - 15:30
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture

DLS: Functional Foundations for Operating Systems

Biography: Dr. Anil Madhavapeddy is a University Lecturer at the Cambridge Computer Laboratory, and a Fellow of Pembroke College where he is Director of Studies for Computer Science. He has worked in industry (NetApp, Citrix, Intel), academia (Cambridge, Imperial, UCLA) and startups (XenSource, Unikernel Systems, Docker) over the past two decades. At Cambridge, he directs the OCaml Labs research group which delves into the intersection of functional programming and systems, and is a maintainer on many open source projects such as OpenBSD, OCaml, Xen and Docker.

Timetable
9:30: Introduction by Professor Saleem Bhatti
9:35: Lecture 1
10:35: Break with tea and coffee
11:15: Lecture 2
12:15: Lunch (not provided)
14:00: Lecture 3
15:00: Close by Professor Simon Dobson

Lecture 1: Rebuilding Operating Systems with Functional Principles
The software stacks that we deploy across computing devices in the world are based on shaky foundations. Millions of lines of C code crammed into monolithic operating system kernels, mixed with layers of scheduling logic, wrapped in a hypervisor, and served with a dose of nominal security checking on the side. In this talk, I will describe an alternative approach to constructing reliable, specialised systems with a familiar developer experience. We will use modular functional programming to build several services such as a secure web server that have no reliance on conventional operating systems, and explain how to express their logic in a high level, functional fashion. By the end of it, everyone in the audience should be able to build their own so-called unikernels!
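As a taste of that style, here is a hedged sketch of the principle only (in Python for brevity; MirageOS itself is written in OCaml and compiles the result into a standalone unikernel image, and all names below are invented for the example): the service logic is a pure function from request to response, kept entirely separate from network and operating-system plumbing.

```python
from dataclasses import dataclass

# Illustrative sketch of the functional-systems principle, not MirageOS code.
@dataclass(frozen=True)
class Request:
    path: str

@dataclass(frozen=True)
class Response:
    status: int
    body: str

def handle(request: Request) -> Response:
    """Pure handler: the output depends only on the input request."""
    if request.path == "/":
        return Response(200, "hello from a sketch of a unikernel service")
    return Response(404, "not found")

# Because the logic is pure, it can be tested with no kernel, sockets,
# or filesystem present at all:
assert handle(Request("/")).status == 200
assert handle(Request("/missing")).status == 404
```

The payoff is that the same handler can be unit-tested in isolation and then linked against a minimal library OS for deployment, rather than being welded to a full conventional kernel.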

Lecture 2: The First Billion Real Deployments of Unikernels
Unikernels offer a path to a more sane basis for driving applications on hardware, but will they ever be adopted for real? For the past fifteen years, an intrepid group of adventurers have been developing the MirageOS application stack in the OCaml programming language. Along the way, it has been deployed in many unusual industrial situations that I will describe in this talk, starting with the Docker container stack, then moving on to the Xen hypervisor that drives billions of servers worldwide. I will explain the challenges of using functional programming in industry, but also the rewards of seeing successful deployments quietly working in mission-critical areas of systems software.

Lecture 3: Programming the Next Trillion Embedded Devices
The unikernel approach of compiling highly specialised applications from high-level source code is perfectly suited to programming the trillions of embedded devices that are making their way around the world. However, this raises new challenges from a programming language perspective: how can we run on a spectrum of devices from the very tiny (with just kilobytes of RAM) to specialised hardware? I will describe the new frontier of functional metaprogramming (programs which generate more programs) that we are using to compile a single application to many heterogeneous devices, and a Git-like model to coordinate across thousands of nodes. I will conclude by motivating the need for a next-generation operating system to power exciting new applications such as augmented and virtual reality in our situated environments, and remove the need for constant centralised coordination via the Internet.

Event details

  • When: 13th February 2018 09:30 - 15:15
  • Where: Byre Theatre
  • Series: Distinguished Lectures Series, Systems Seminars Series
  • Format: Distinguished lecture

DLS: What Every Computer Scientist Should Know About Computer History

Prof Ursula Martin

Distinguished Lecture Series, Semester 1, 2017-18

Biography:

Professor Ursula Martin CBE FREng FRSE joined the University of Oxford as Professor of Computer Science in 2014, and is a member of the Mathematical Institute. She holds an EPSRC Established Career Fellowship, and a Senior Research Fellowship at Wadham College. Her research, initially in algebra, logic and the use of computers to create mathematical proofs, now focuses on wider social and cultural approaches to understanding the success and impact of current and historical computer science research.

Before joining Oxford she worked at Queen Mary University of London, where she was Vice-Principal for Science and Engineering (2005-2009) and Director of the impactQM project (2009-2012), an innovative knowledge transfer initiative. She serves on numerous international committees, including the Royal Society’s Diversity Committee and the UK Defence Science Advisory Council. She worked at the University of St Andrews from 1992 to 2002, as only its second female professor, and its first in over 50 years. She holds an MA in Mathematics from Cambridge, and a PhD in Mathematics from Warwick.

Timetable:

9.30 Introduction

9.35 Lecture 1: The early history of computing: Ada Lovelace, Charles Babbage, and the history of programming

10.35 Break with Refreshments Provided

11.15 Lecture 2: Case study: Alan Turing, Grace Hopper, and the history of getting things right

12.15 Lunch (not provided)

14.30 Welcome by the Principal, Prof Sally Mapstone

14.35 Lecture 3: What do historians of computing do, and why is it important for computer scientists today?

15.30 Close

Lecture 1. The early history of computing: Ada Lovelace, Charles Babbage, and the history of programming

In 1843 Ada Lovelace published a remarkable paper in which she explained Charles Babbage’s designs for his Analytical Engine. Had it been built, it would have had in principle the same capabilities as a modern general-purpose computer. Lovelace’s paper is famous for its insights into more general questions, as well as for its detailed account of how the machine performed its calculations – illustrated with a large table which is often called, incorrectly, the “first programme”. I’ll talk about the wider context; why people were interested in computing engines; and some of the other work that was going on at the time, for example Babbage’s remarkable hardware description language. I’ll look at different explanations for why Babbage’s ideas did not take off, and give a quick overview of what did happen over the next 100 years, before the invention of the first digital computers.

Lecture 2. Case study: Alan Turing, Grace Hopper, and the history of getting things right

Getting software right has been a theme of programming from the days of Babbage onwards. I’ll look at the work of pioneers Alan Turing and Grace Hopper, and talk about the long interaction of computer science with logic, which has led to better programming languages, new ways to prove programmes correct, and sophisticated mathematical theories of importance in their own right. I’ll look at the history of the age-old debate about whether computer science needs mathematics to explain its main ideas, or whether practical skills, building things and making things simple for the user are more important.

Lecture 3: What do historians of computing do, and why is it important for computer scientists today?

When people think about computer science, they think about ideas and technologies that are transforming the future – smaller, faster, smarter connected devices, powered by AI and big data – and looking at the past can be seen as a bit of a waste of time. In this lecture I’ll look at what historians do and why it is important; how we get history wrong; and in particular how we often miss the contribution of women. I’ll illustrate my talk with my own work on Ada Lovelace’s papers, to show how detailed historical work is needed to debunk popular myths – it is often claimed that Lovelace’s talent was “poetical science” rather than maths, but I’ve shown that she was a gifted, perceptive and knowledgeable mathematician. I’ll explain how the historian’s techniques of getting it right can help us get to grips with topical problems like “fake news”, and give us new ways of thinking about the future.

Event details

  • When: 10th October 2017 09:30 - 16:00
  • Where: Byre Theatre
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture

DLS: Algorithms for healthcare-related matching problems

Distinguished Lecture Series, Semester 2, 2016-17

David Manlove

School of Computing Science, University of Glasgow

Lower College Hall (with overflow simulcast in Upper College Hall)

Abstract:

Algorithms arise in numerous everyday applications – in this series of lectures I will describe how algorithms can be used to solve matching problems having applications in healthcare settings. I will begin by outlining how algorithms can be designed to cope with computationally hard problems. I will then describe algorithms developed at the University of Glasgow that have been used by the NHS to solve two particular matching problems. These problems correspond to the annual assignment of junior doctors to Scottish hospitals, and finding “kidney exchanges” between kidney patients and their incompatible donors in the UK.
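To give a flavour of such algorithms, here is a minimal sketch of the classic deferred-acceptance (Gale-Shapley) procedure for a simplified one-to-one doctor-to-hospital assignment. The preference data is invented, complete preference lists are assumed, and this is not the Glasgow/NHS implementation, which additionally handles hospital capacities, ties, and, for kidney exchange, cycle-finding in compatibility graphs.

```python
# Deferred acceptance (Gale-Shapley) for one-to-one matching:
# doctors propose, hospitals tentatively hold their best offer so far.
def stable_match(doctor_prefs, hospital_prefs):
    free = list(doctor_prefs)                 # doctors not yet matched
    next_choice = {d: 0 for d in doctor_prefs}
    held = {}                                 # hospital -> doctor it holds
    rank = {h: {d: i for i, d in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    while free:
        d = free.pop()
        h = doctor_prefs[d][next_choice[d]]   # d's best hospital not yet tried
        next_choice[d] += 1
        if h not in held:
            held[h] = d                       # h tentatively accepts d
        elif rank[h][d] < rank[h][held[h]]:   # h prefers d to current holder
            free.append(held[h])
            held[h] = d
        else:
            free.append(d)                    # d rejected; tries next choice
    return {d: h for h, d in held.items()}

# Invented example data: two doctors, two hospitals.
doctor_prefs = {"ann": ["glasgow", "dundee"], "rob": ["glasgow", "dundee"]}
hospital_prefs = {"glasgow": ["rob", "ann"], "dundee": ["ann", "rob"]}
print(stable_match(doctor_prefs, hospital_prefs))
# {'rob': 'glasgow', 'ann': 'dundee'}
```

The output is stable: no doctor and hospital both prefer each other to their allocated partners, which is the property that makes such mechanisms trustworthy in practice.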

Event details

  • When: 31st March 2017 09:15 - 15:30
  • Where: Lower College Hall
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture