PGR Seminar with Carla Davesa Sureda

The next PGR seminar is taking place this Friday 22nd November at 2PM in JC 1.33a

Below are the title and abstract for Carla’s talk – please do come along if you are able.

Title:

Towards High-Level Modelling in Automated Planning

Abstract:

Planning is a fundamental activity, arising frequently in many contexts, from daily tasks to industrial processes. The planning task consists of selecting a sequence of actions to achieve a specified goal from specified initial conditions. The Planning Domain Definition Language (PDDL) is the leading language used in the field of automated planning to model planning problems. Previous work has highlighted the limitations of PDDL, particularly in terms of its expressivity. Our interest lies in facilitating the handling of complex problems and enhancing the overall capability of automated planning systems. Unified-Planning (UP) is a Python library offering a high-level API to specify planning problems and to invoke automated planners. In this paper, we present an extension of the UP library aimed at enhancing its expressivity for high-level problem modelling. In particular, we have added an array type, an expression to count booleans, and support for integer parameters in actions. We show how these facilities enable natural high-level models of three classical planning problems.
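For anyone who has not used Unified-Planning before, the snippet below is a minimal sketch of how a toy problem is typically specified with the library’s standard shortcuts API. The robot/location domain and all names in it are illustrative only, and the extensions described above (arrays, counting expressions, integer action parameters) are not shown.

    # A minimal sketch of a Unified-Planning model (standard shortcuts API);
    # the robot/location domain and all names here are illustrative only.
    from unified_planning.shortcuts import *

    Location = UserType("Location")
    robot_at = Fluent("robot_at", BoolType(), position=Location)
    connected = Fluent("connected", BoolType(), l_from=Location, l_to=Location)

    # One action: move the robot between two connected locations.
    move = InstantaneousAction("move", l_from=Location, l_to=Location)
    l_from, l_to = move.parameter("l_from"), move.parameter("l_to")
    move.add_precondition(connected(l_from, l_to))
    move.add_precondition(robot_at(l_from))
    move.add_effect(robot_at(l_from), False)
    move.add_effect(robot_at(l_to), True)

    # Problem instance: a chain of four locations, robot starts at l0, goal is l3.
    problem = Problem("robot")
    problem.add_fluent(robot_at, default_initial_value=False)
    problem.add_fluent(connected, default_initial_value=False)
    problem.add_action(move)
    locations = [Object(f"l{i}", Location) for i in range(4)]
    problem.add_objects(locations)
    problem.set_initial_value(robot_at(locations[0]), True)
    for a, b in zip(locations, locations[1:]):
        problem.set_initial_value(connected(a, b), True)
    problem.add_goal(robot_at(locations[-1]))

    # Ask any installed planner that supports this kind of problem for a plan.
    with OneshotPlanner(problem_kind=problem.kind) as planner:
        print(planner.solve(problem).plan)

The talk’s extensions sit on top of this style of modelling, aiming to make such models more natural for problems that need arrays, counting, or numeric action parameters.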

Doughnuts will be available! 🍩

AI Seminar Tuesday 19th November – Francesco Leofante

The School is hosting an AI seminar on Tuesday 19th November at 11am in JCB1.33A/B

Our speaker is Francesco Leofante from Imperial College London.

Title:

Robustness issues in algorithmic recourse.

Abstract:

Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CEs can be beneficial to affected individuals, recent work has exposed severe issues related to the robustness of state-of-the-art methods for obtaining CEs. Since a lack of robustness may compromise the validity of CEs, techniques to mitigate this risk are in order. In this talk we will begin by introducing the problem of (lack of) robustness, then discuss its implications and present some recent solutions we have developed to compute CEs with robustness guarantees.
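For readers new to counterfactual explanations, the standard formulation from the literature (not necessarily the exact one used in the talk) may help: given a model f, an input x that received an unfavourable outcome, and a desired outcome y', a CE is a nearby input that the model maps to the desired outcome,

    x' ∈ argmin_z d(x, z)   subject to   f(z) = y',

for some distance d. The explanation remains valid only while f(x') = y' continues to hold, so retraining or slightly perturbing the model can silently invalidate the recourse it recommends; this is one sense in which a lack of robustness compromises validity.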

Bio:

Francesco is an Imperial College Research Fellow affiliated with the Centre for Explainable Artificial Intelligence at Imperial College London. His research focuses on safe and explainable AI, with special emphasis on counterfactual explanations and their robustness. Since 2022, he has led the project “ConTrust: Robust Contrastive Explanations for Deep Neural Networks”, a four-year effort devoted to the formal study of robustness issues arising in XAI. More details about Francesco and his research can be found at https://fraleo.github.io/.

PGR Seminar with Daniel Wyeth and Ferdia McKeogh

The next PGR seminar is taking place this Friday 15th November at 2PM in JC 1.33a

Below are the titles and abstracts for Daniel’s and Ferdia’s talks – please do come along if you are able.

Daniel:

Deep Priors: Integrating Domain Knowledge into Deep Neural Networks

Deep neural networks represent the state of the art for learning complex functions purely from data. There are, however, problems, such as medical imaging, where data is limited and effective training of such networks is difficult. Moreover, this requirement for large datasets represents a deficiency compared to human learning, which is able to harness prior understanding to acquire new concepts with very few examples. My work looks at methods for integrating domain knowledge into deep neural networks to guide training so that fewer examples are required. In particular, I explore probabilistic atlases and probabilistic graphical models as representations for this prior information, architectures which enable networks to use them, and the application of these to problems in medical image understanding.
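As a rough, generic illustration of how such a prior can be plugged in (a common pattern in atlas-based segmentation, not necessarily the architecture Daniel uses; the function name is hypothetical): a probabilistic atlas supplying per-pixel class probabilities can be combined with a network’s per-pixel predictions in log space, so that the atlas acts as a spatial prior on the segmentation.

    # Hypothetical sketch: combining a network's per-pixel class scores with a
    # probabilistic atlas prior in log space (a generic Bayes-style combination).
    import torch
    import torch.nn.functional as F

    def atlas_guided_posterior(logits, atlas_prior, eps=1e-8):
        """logits: (B, C, H, W) raw network outputs;
        atlas_prior: (C, H, W) per-pixel class probabilities from a registered atlas."""
        log_likelihood = F.log_softmax(logits, dim=1)          # network's evidence
        log_prior = torch.log(atlas_prior + eps).unsqueeze(0)  # spatial prior
        log_posterior = log_likelihood + log_prior             # unnormalised posterior
        return F.softmax(log_posterior, dim=1)                 # renormalise per pixel

    # Random tensors stand in for a real image batch and a real atlas here.
    logits = torch.randn(1, 4, 64, 64)
    atlas = torch.softmax(torch.randn(4, 64, 64), dim=0)
    posterior = atlas_guided_posterior(logits, atlas)

The appeal of this kind of combination is that the network no longer has to learn gross anatomical layout from scratch, which is exactly where limited data hurts most.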

Ferdia:

Lessons Learned From Emulating Architectures

Automatically generating fast emulators from formal architecture specifications avoids the error-prone and time-consuming effort of manually implementing an emulator. The key challenge is achieving high performance from correctness-focused specifications: extracting the relevant functional semantics and performing aggressive optimisations. In this talk I will present my work thus far and reflect on some of the unsuccessful research paths along the way.

Doughnuts will be available! 🍩

PGR Seminar with Ariane Hine

The PGR seminars for this academic year are beginning this Friday 8th November at 2PM in JC 1.33A/B

Below are the title and abstract for Ariane’s talk – please do come along if you are able.

Title: Enhancing and Personalising Endometriosis Care with Causal Machine Learning

Abstract: Endometriosis poses significant challenges in diagnosis and management due to its wide range of symptoms and systemic implications. Integrating machine learning into healthcare screening processes can significantly improve resource allocation and diagnostic efficiency, and facilitate more tailored, personalised treatment plans. This talk will discuss the potential of leveraging patient-reported symptom data through causal machine learning to advance endometriosis care and reduce the lengthy diagnostic delays associated with this condition.

The goal is to propose a novel, personalised, non-invasive diagnostic approach that captures the underlying causes of patient symptoms and combines health records and other factors to enhance prediction accuracy, so that it can be applied globally.

Fudge doughnuts will be available! 🍩

AI Seminar Friday 18th October – Leonardo Bezerra

The School is hosting an AI seminar on Friday 18th October at 11.30am in JCB1.33A!

Our speaker is Leonardo Bezerra from the University of Stirling.

FAIRTECH by design: assessing and addressing the social impacts of artificial intelligence systems

In a decade, social media and big data have transformed society and enabled groundbreaking artificial intelligence (AI) technologies like deep learning and generative AI. Applications like ChatGPT have impacted the world and outpaced regulatory agencies, which have been rushed from a data-centred to an AI-centred focus. Recent developments in both the United Kingdom (UK) and the United States (US) originated in the executive branch, and the most advanced Western binding legislation is the European Union (EU) AI Act, expected to be implemented over the next three years. In the meantime, the United Nations (UN) has proposed an AI advisory body similar to the Intergovernmental Panel on Climate Change (IPCC), and countries from the Global South like Brazil are following Western proposals. In turn, AI companies have been proactive in the regulation debate, aiming at a scenario of improved accountability and reduced liability. In this talk, we will briefly overview efforts and challenges regarding AI regulation and how major AI players are addressing it. The goal of the talk is to spark future project collaborations from a multidisciplinary perspective, to promote a culture where the development and adoption of AI systems is fair, accountable, inclusive, responsible, transparent, ethical, carbon-efficient, and human-centred (FAIRTECH) by design.

Speaker bio: Leonardo Bezerra joined the University of Stirling as a Lecturer in Artificial Intelligence (AI)/Data Science in 2023, after seven years as a Lecturer in Brazil. He received his Ph.D. from the Université Libre de Bruxelles (Belgium) in 2016, with a thesis on the automated design of multi-objective evolutionary algorithms. His research experience spans applied data science projects with public and private institutions and the supervision of theses on automated and deep machine learning. Recently, his research has concentrated on the social impact of AI applications, as part of the Participatory Harm Auditing Workbenches and Methodologies project funded by Responsible AI UK.

Distinguished Lecture series 2024

This year’s Distinguished Lecture series was delivered yesterday (Tuesday 12th March) by Professor Neil Lawrence, University of Cambridge.

In his talk, ‘The Atomic Human: Understanding Ourselves in the Age of AI’, he gave an overview of where we are now with machine learning solutions, and what challenges we face in both the near and far future. These include the practical application of existing algorithms in the face of the need to explain decision-making, mechanisms for improving the quality and availability of data, and dealing with large unstructured datasets.

Seminar: Tangible User Interfaces 13th March 2024

We have two presentations next week focusing on tangible interfaces, by Laura Pruszko and Anna Carter.

Talk 1: Designing for Modularity – a modular approach to physical user interfaces

Abstract:

Physical user interfaces: future or history? While some of our old physical UIs are progressively being replaced by their graphical counterparts, humans still rely on physicality for eyes-free interaction. Shape-changing user interfaces – i.e. physical devices able to change their shape to accommodate the user, the task, or the environment – are often presented as a way to bridge the gap between the physicality of physical user interfaces and the flexibility of graphical user interfaces, but they come with their fair share of challenges. In this presentation, we will talk about these challenges under the specific scope of modular shape-changing interfaces: how do we design for modularity? What is the impact on the user? As these kinds of interfaces are not commonplace in our everyday lives, they introduce novel usability considerations for the HCI community to explore.

Bio:

Laura Pruszko is a lecturer in the Applied Computer Games department of Glasgow Caledonian University. Her research focuses on interaction with physical user interfaces and modular systems. She obtained her PhD from Grenoble Alpes University in 2023, as part of the multidisciplinary Programmable Matter consortium. This consortium brings together people from different backgrounds, such as artists, entrepreneurs, and HCI and robotics researchers, to collaborate towards enabling the long-term vision of Claytronics.

Talk 2: Sense of Place, Cultural Heritage and Civic Engagement

Abstract:

In this presentation, I will provide an overview of my recent work, where I implemented a range of interactive probes, exploring sense of place and cultural heritage within a regenerating city centre. Through these digital multimodal interactions, citizens actively participated in the sharing of cultural heritage, fostering a sense of belonging and nostalgia. Looking ahead, I’ll discuss how these insights inform my ongoing work at the intersection of the Digital Civics project and the Centre for Digital Citizens project. This presentation will not only offer my personal insights but also open the floor for collaborative discussions on integrating these crucial aspects into future embedded research.

Bio:

Anna Carter is a Research Fellow at Northumbria University. She has extensive experience in designing technologies for local council regeneration programmes, and her work focuses on creating accessible digital experiences in a variety of contexts using human-centred methods and participatory design. She works on building the Digital Civics research capacity of early career researchers as part of the EU-funded DCitizens programme, and on digital civics, outdoor spaces and sense of place as part of the EPSRC-funded Centre for Digital Citizens.

Event details:

  • When: 13th March 2024, 12:00 – 14:00. There’ll be cakes and soft drinks from 12:00 onwards. The talks will run from 12:30 to 13:30.
  • Where: Jack Cole 1.33 (Soft drinks and cake provided by F&D)

SACHI Seminar: Rights-driven Development

Abstract:

Alex will discuss a critique of modern software engineering and outline how it systematically produces systems that have negative social consequences. To help counter this trend, he offers the notion of rights-driven development, which puts the concept of a right at the heart of software engineering practices. Alex’s first step to develop rights-driven practices is to introduce a language for rights in software engineering. He provides an overview of the elements such a language must contain and outlines some ideas for developing a domain-specific language that can be integrated with modern software engineering approaches. 

Bio:

Alex Voss is an Honorary Lecturer here at the School and an external member of our group. He was also a Technology Fellow at the Carr Center for Human Rights Policy at Harvard’s John F. Kennedy School of Government and an Associate in the Department of Philosophy at Harvard.

Alex holds a PhD in Informatics and works at the intersection of the social sciences and computer science. His current research aims to develop new representations, practices and tools for rights-respecting software engineering. He is also working on the role that theories of causation have in making sense of complex socio-technical systems.

His research interests include: causality in computing, specifically in big data and machine learning applications; human-centric co-realization of technologies; responsible innovation; computing and society; computer-based and computer-aided research methods.

More about Alex: https://research-portal.st-andrews.ac.uk/en/persons/alexander-voss

Event details:

  • When: 28th February 2024 12:30 – 13:30
  • Where: Jack Cole 1.19

If you’re interested in attending any of the seminars in room 1.19, please email the SACHI seminar coordinator: aaa8@st-andrews.ac.uk so they can make appropriate arrangements for the seminar based on the number of attendees.

SICSA DVF Seminar – Dr André G. Pereira

We had our first School seminar of the semester today. The speaker was André G. Pereira, visiting Scotland as a SICSA Distinguished Visiting Fellow (DVF). André is working on AI Planning problems, an area that is closely related to the work of our own Constraint Programming research group.

Title: Understanding Neuro-Symbolic Planning

Abstract: In this seminar, we present the area of neuro-symbolic planning, introducing fundamental concepts and applications. We focus on presenting recent research on the problem of learning heuristic functions with machine learning techniques. We discuss the distinctions and particularities between the “model-based” and “model-free” approaches, and the different methods to address the problem. Then, we focus on explaining the behavior of “model-free” approaches. We discuss the generation of the training set, and present sampling algorithms and techniques to improve the quality of the training set. We also discuss how the distribution of samples over the state space of a task, together with the quality of its estimators, is directly related to the quality of the learned heuristic function. Finally, we empirically detail which factors have the greatest impact on the quality of the learned heuristic function.
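As a deliberately simplified illustration of the “model-free” setting (a toy sketch under my own assumptions, not Dr. Pereira’s actual setup): one common way to build the training set is to sample states by random walks backwards from the goal, label each state with the walk length as an estimate of its cost-to-go, and fit a regressor that is then queried as the heuristic inside a search algorithm.

    # Toy sketch: learning a heuristic from backward random walks (illustrative only).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    DIM = 4                      # toy state space: integer vectors, goal = all zeros
    GOAL = np.zeros(DIM)

    def backward_walk(length):
        """Walk randomly away from the goal; the walk length labels the cost-to-go."""
        state = GOAL.copy()
        for _ in range(length):
            state[rng.integers(DIM)] += rng.choice([-1, 1])
        return state

    # Training set: states sampled around the goal, labelled by walk length.
    X, y = [], []
    for _ in range(5000):
        length = rng.integers(1, 20)
        X.append(backward_walk(length))
        y.append(length)

    h = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    h.fit(np.array(X), np.array(y))

    # The learned h(s) would now be queried inside a search algorithm (e.g. GBFS/A*).
    s = np.array([3, -2, 0, 1])
    print("learned h(s):", h.predict(s.reshape(1, -1))[0],
          "| true cost-to-go:", np.abs(s).sum())

Note that the walk length only upper-bounds the true cost-to-go (here the L1 distance to the goal), which is exactly the kind of estimator-quality issue the abstract points to.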

Biography: Dr. André G. Pereira is a professor at the Federal University of Rio Grande do Sul, Brazil. His research aims to develop and explain the behavior of intelligent systems for sequential decision-making problems. Dr. Pereira has authored several papers at top-tier venues such as IJCAI, AAAI, and ICAPS. These papers contribute towards explaining the behavior of heuristic search algorithms, how to use combinatorial optimization-based reasoning to solve planning tasks, and how to use machine learning techniques to produce heuristic functions. Dr. Pereira is a program committee member of IJCAI and AAAI. His doctoral dissertation was awarded second place in the national Doctoral Dissertation Contest on Computer Science (2017) and first place in the national Doctoral Dissertation Contest on Artificial Intelligence (2018). Dr. Pereira has advised three students whose work received awards at national events, including first place and finalist in the Scientific Initiation Work Contest (2018, 2022), and finalist in the Master’s Dissertation Contest on Artificial Intelligence (2020).