Abstract:
A job candidate has been pre-selected for a shortlist by a neural net; an autonomous car has suddenly changed lanes, almost causing an accident; the intelligent fridge has ordered an extra pint of milk. From the life-changing or life-threatening to day-to-day living, decisions are made by computer systems on our behalf. If something goes wrong, or even when the decision appears correct, we may need to ask the question, “why?” In the case of failures we need to know whether it is the result of a bug in the software; a need for more data, sensors or training; or simply one of those things: a decision correct in the context, that happened to turn out badly. Even if the decision appears acceptable, we may wish to understand it for our own curiosity, for peace of mind, or for legal compliance. In this talk I will pick up threads of research dating back to early work in the 1990s on gender and ethnic bias in black-box machine-learning systems, as well as more recent developments such as deep learning and concerns such as those that gave rise to the EPSRC human-like computing programme. In particular I will present nascent work on an AIX Toolkit (AI explainability): a structured collection of techniques designed to help developers of intelligent systems create more comprehensible representations of their reasoning. Crucial to the AIX Toolkit is the understanding that human–human explanations are rarely utterly precise or reproducible, yet they are sufficient to inspire confidence and trust in a collaborative endeavour.
Speaker Bio:
Alan Dix is Director of the Computational Foundry at Swansea University. Previously he spent ten years in a mix of academic and commercial roles, most recently as Professor in the HCI Centre at the University of Birmingham and Senior Researcher at Talis. He has worked in human–computer interaction research since the mid 1980s, and is the author of one of the major international textbooks on HCI, as well as of over 450 research publications ranging from formal methods to design creativity, including some of the earliest papers in the HCI literature on topics such as privacy, mobile interaction, and gender and ethnic bias in intelligent algorithms. Issues of space and time in user interaction have been a long-term interest, from his “Myth of the Infinitely Fast Machine” in 1987 to his co-authored book TouchIT, on physicality in a digital age, due to be published in 2018.

Alan organises a twice-yearly workshop, Tiree Tech Wave, on the small Scottish island where he has lived for ten years and where he has been engaged in a number of community research projects relating to heritage, communications, energy use and open data. In 2013 he walked the complete periphery of Wales, over a thousand miles. This was a personal journey, but also a research expedition exploring the technology needs of the walker and of the people along the way. The data from this journey, including 19,000 images, about 150,000 words of geo-tagged text and many gigabytes of bio-data, are available in the public domain as an ‘open science’ resource.

Alan’s new role at the Computational Foundry has brought him back to his homeland. The Computational Foundry is a £30 million initiative to boost computational research in Wales, with a strong focus on creating social and economic benefit. Digital technology is at a bifurcation point: it could simply reinforce existing structures of industry, government and health, or it could allow us to radically reimagine and transform society.
The Foundry is built on the belief that addressing human needs and human values requires and inspires the deepest forms of fundamental science.
Update: The slides from the talk are available here.
Event details
- When: 16th October 2018, 14:00–15:00
- Where: Cole 1.33a
- Series: School Seminar Series
- Format: Seminar