Professor Stephen Linton Retirement

Colleagues from the University, past and present, gathered to say a fond farewell to Professor Stephen Linton, who is retiring from the School at the end of 2024. Steve has been a valued and respected member of staff for 31 years, and is a former Head of School and Director of CIRCA.

Steve was fundamental in building our collaboration with colleagues in the School of Mathematics and Statistics, serving as Director of the Centre for Interdisciplinary Research in Computational Algebra (CIRCA) for many years. CIRCA has been the platform for nearly 25 years of fruitful collaboration between the two Schools, producing an internationally recognised body of work spanning both research papers and software. Steve played a central role in building and sustaining the Centre, securing a substantial EPSRC Platform Grant for it in 2010.

As Head of School, Steve was always generous in helping those around him progress, even allowing staff to study for a law degree during research leave!

Below are some tributes from colleagues and friends:

Professor Ian Gent: “Steve is one of those programmers who is 10 times better than other people who are themselves really good programmers. The story of one of the coffee area’s Go boards illustrates this. The School hosted an afternoon programming competition open to teams of three. Steve entered by himself, having been assured that his favourite programming language, GAP, was available. When that turned out to be wrong he just used C instead, and then very comfortably won the competition, solving more problems than any of the teams of three. As Head of School, he didn’t want to benefit personally from the prize, so he bought the School a Go board.
Nobody will argue with the statement that the smartest person in the School is retiring. When I used to research with him, people often asked me how I coped with working with somebody so ridiculously clever. My reply was “because I’m working WITH him”: the advantage being that you weren’t competing! A classic example of this was when we had a paper rejected because reviewers didn’t think the work was novel, despite it depending on amazing algorithms Steve had coded up. In making sure the revision emphasised the novelty, Steve said “But any one of two dozen people could have done it”, and I said “Yes, and if any one of them had done it, it would have been novel!” The revised paper was accepted and has just hit 100 citations on Google Scholar.”

Professor Tom Kelsey: “Steve, Ian Gent and I had a research meeting with Colva Roney-Dougal at which we agreed that there were two distinct coding tasks, one for Steve and the other for Ian and me, both quite challenging. Ian and I started work on the whiteboard outside Steve’s office, planning how we might go about writing and evaluating our code. After 25 minutes’ detailed discussion, we’d made good progress and had the kernel of a plan that would give us plenty of work for the rest of the week. Steve then came out of his office having finished his task in one short attempt; the resulting paper used this code without revision. Ian and I felt like a pair of numbskulls.

On a more personal note, I was Steve’s first PhD student. During my studies one of my daughters became quite unwell, and dealing with her complex treatments and the other four children didn’t leave much time for my Doctoral studies. Steve was incredibly supportive, dealing with the School and University in such a way that all I had to worry about was my daughter’s wellbeing. When I returned I was still very much focussed on family issues, but Steve guided me expertly and kindly through the rest of my studies. For which I am eternally grateful.”

Professor Graham Kirby: “Steve is compassionate and understanding. I was a newish DoT at a time when my wife was seriously ill, and I was responsible for writing the school’s institutional teaching review document. As HoS, Steve found ways to alleviate the real or perceived pressure on me, enabling me to focus on family.”

Professor Chris Jefferson: “Steve also contributed to many major research projects, in particular GAP, a computational algebra system that has been actively developed since 1988. While GAP is maintained by academics from around the world, St Andrews computing and mathematics, led by Steve, took over the leadership of GAP from RWTH Aachen in 1997 and led it until 2022. In that time, many students and academics at St Andrews have been involved with GAP.”

Professor Karen Petrie (University of Dundee and University of St Andrews graduate 2000): “Steve has made a lasting impression on me since my UG days, when he was my tutor. I first met him in first year, in C programming tutorials. I remember having my first ever memory leak in those days, and Steve making all the hours I spent fixing it better when he told me ‘now you are a computer scientist’. He also taught me the difference between P and NP; I remember being amazed to learn that P vs NP was an open problem. The very idea of open problems was new to me, and incredibly exciting but challenging. He was also extremely compassionate: I remember in my fourth year I had tonsillitis, and Steve asked me why I was in the University and sent me home to bed. He told me he did not want to see me until I could speak again! One of the amazing things about Steve is his combination of practical coding ability and theoretical ability; as a student he felt like the rock star of CS, and that impression of him has never changed. This means a kind word from him means a lot, as we all want to emulate him. As an example, when I was a PhD student we were working on a joint research paper and, as research sometimes does, it was not going well, so we were debugging code together. It had been a long, trying day. Steve made it all better by telling me ‘you now debug code as well as I do’. Now I am a professor myself, and I try to do the same for my students as Steve did for me: to challenge them when appropriate, to be compassionate when that is what they need, but most importantly always to treat them as an equal. I feel incredibly fortunate to be able to call Steve not just a mentor, nor a collaborator, but a friend.”

All at CS would like to wish Steve a long and happy retirement 🎉

Alex Bain (School Manager), Professor Steve Linton, Professor Ian Miguel (Head of School) 

Professor Ian Miguel (HoS) and Professor Ron Morrison (former HoS) 

Dr Tristan Henderson, Professor Steve Linton 

PGR Seminar with Mustafa Abdelwahed and Maria Andrei

The next PGR seminar is taking place this Friday 6th December at 2PM in JC 1.33a

Below are the titles and abstracts for Mustafa’s and Maria’s talks – please do come along if you are able.

Mustafa Abdelwahed:

Title: Behaviour Planning: A toolbox for diverse planning

Abstract:

Diverse planning approaches are utilised in real-world applications like risk management, automated streamed data analysis, and malware detection. These approaches aim to create diverse plans through a two-phase process. The first phase generates plans, while the second selects a subset of plans based on a diversity model. A diversity model is a function that quantifies the diversity of a given set of plans based on a provided distance function.

Unfortunately, existing diverse planning approaches do not account for those models when generating plans, struggle to explain why any two plans are different, and are limited to classical planning.

To address such limitations, we introduce Behaviour Planning, a novel toolbox that creates diverse plans based on customisable diversity models and can explain why two plans are different with respect to such models.
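As a minimal sketch of the idea (not the actual Behaviour Planning implementation, and with all names invented for illustration), a diversity model can be expressed as a function that scores a set of plans using a supplied pairwise distance function. Here plans are simple action-name sequences and the distance is the fraction of positions at which two plans differ:

```python
from itertools import combinations

def hamming_distance(plan_a, plan_b):
    """Fraction of positions at which two plans differ (shorter plan padded)."""
    length = max(len(plan_a), len(plan_b))
    padded_a = list(plan_a) + [None] * (length - len(plan_a))
    padded_b = list(plan_b) + [None] * (length - len(plan_b))
    return sum(a != b for a, b in zip(padded_a, padded_b)) / length

def diversity(plans, distance=hamming_distance):
    """Diversity model: average pairwise distance over a set of plans."""
    pairs = list(combinations(plans, 2))
    if not pairs:
        return 0.0
    return sum(distance(a, b) for a, b in pairs) / len(pairs)

plans = [
    ["load", "drive", "unload"],
    ["load", "fly", "unload"],
    ["load", "drive", "unload"],  # duplicate of the first plan
]
score = diversity(plans)  # duplicates contribute zero distance
```

Swapping in a different distance function (say, one over the states a plan visits rather than its action names) yields a different diversity model, which is the customisability the abstract refers to.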

Maria Andrei:

Title: Leveraging Immersive Technology to Enhance Climate Communication, Education & Action

Abstract: Climate change represents one of the most pressing challenges of our time, not only in its environmental impacts, but also as a pivotal science communication problem. Despite widespread scientific consensus on the causes and mitigation strategies for climate change, public understanding remains deeply fragmented and polarized. This disconnect hinders the collective action required from individuals, organizations, and policymakers to combat global warming effectively. My research explores the potential of immersive technologies to bridge the gap between scientific knowledge and public understanding by leveraging experiential learning experiences to inspire the attitudinal and behavioural shifts necessary to address climate change.

PGR Seminar with Zhongliang Guo

The next PGR seminar is taking place this Friday at 2PM in JC 1.33a

Below is the title and abstract for Zhongliang’s talk – please do come along if you are able.

Title: Adversarial Attack as a Defense: Preventing Unauthorized AI Generation in Computer Vision

Abstract: Adversarial attack is a technique that generates adversarial examples by adding imperceptible perturbations to clean images. These adversarial perturbations, though invisible to human eyes, can cause neural networks to produce incorrect outputs, making adversarial examples a significant security concern in deep learning. While previous research has primarily focused on designing powerful attacks to expose neural network vulnerabilities or using them as baselines for robustness evaluation, our work takes a novel perspective by leveraging adversarial examples to counter malicious uses of machine learning. In this seminar, I will present two of our recent works in this direction. First, I will introduce the Locally Adaptive Adversarial Color Attack (LAACA), which enables artists to protect their artwork from unauthorized neural style transfer by embedding imperceptible perturbations that significantly degrade the quality of style transfer results. Second, I will discuss our Posterior Collapse Attack (PCA), a grey-box attack method that disrupts unauthorized image editing based on Stable Diffusion by exploiting the common VAE structure in latent diffusion models. Our research demonstrates how adversarial examples, traditionally viewed as a security threat, can be repurposed as a proactive defense mechanism against the misuse of generative AI, contributing to the responsible development and deployment of these powerful technologies.
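As a rough, self-contained illustration of the underlying mechanism (not LAACA or PCA themselves), the classic fast-gradient-sign step perturbs each pixel by a tiny amount in the direction that increases a model’s loss. For the toy linear scorer below the gradient is simply the weight vector; the model, weights, and image are all invented for the sketch:

```python
def sign(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def linear_score(weights, image):
    """Toy stand-in for a network: a linear scorer over flattened pixels."""
    return sum(w * p for w, p in zip(weights, image))

def fgsm_perturb(weights, image, epsilon=0.01):
    """One fast-gradient-sign step: shift each pixel by +/-epsilon in the
    direction that increases the score (d(score)/d(pixel_i) = weights[i])."""
    return [p + epsilon * sign(w) for w, p in zip(weights, image)]

weights = [0.5, -0.2, 0.1]
image = [0.3, 0.8, 0.5]
adv = fgsm_perturb(weights, image, epsilon=0.01)
# each pixel moves by at most 0.01, yet the score strictly increases
```

The "defensive" uses in the talk invert the usual framing: the perturbation is embedded by the content owner so that a downstream model (a style-transfer network, or a diffusion model’s VAE) misbehaves on the protected image.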

AI Seminar Wednesday 27th November – Lars Kotthoff

We have another exciting AI seminar coming up on Wednesday 27th November at 1pm.

This time our speaker is an alumnus!

When? 27/11/24, 1pm

Where? JCB 1.33B

Who? Lars Kotthoff

Lars Kotthoff is the Templeton Associate Professor of Computer Science, Founding Adjunct Faculty at the School of Computing, and a Presidential Faculty Fellow at the University of Wyoming. His research in foundational AI and Machine Learning, as well as applications of AI in other areas (in particular Materials Science), has been widely published and recognized. Lars is a senior member of the Association for the Advancement of AI and the Association for Computing Machinery.

What?

Title: AI for Materials Science: Tuning Laser-Induced Graphene Production

Abstract: AI and machine learning have advanced the state of the art in many application domains. We present an application to materials science; in particular, we use surrogate models with Bayesian optimization for automated parameter tuning to optimize the fabrication of laser-induced graphene. This process makes it possible to create thin conductive lines in thin layers of insulating material, enabling the development of next-generation nano-circuits, which is of interest, for example, for in-space manufacturing. We are able to achieve improvements of up to a factor of two compared to existing approaches in the literature and to what human experts are able to achieve, in a reproducible manner. Our implementation is based on the open-source mlr and mlrMBO frameworks and generalizes to other applications.
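mlr and mlrMBO are R frameworks; as a language-neutral sketch of the surrogate-assisted tuning loop they implement (not the authors’ actual setup), the Python toy below uses a deliberately crude surrogate, nearest-neighbour prediction plus a distance-based exploration bonus, to decide which parameter value to try next. The objective function and the "laser power" parameter are invented for illustration:

```python
import random

def fabrication_quality(power):
    """Stand-in for a real experiment: quality peaks at power = 0.6."""
    return 1.0 - (power - 0.6) ** 2

def surrogate(power, evaluated):
    """Crude surrogate: value of the nearest tried point, plus a bonus
    for being far from tried points (encourages exploration)."""
    nearest_p, nearest_q = min(evaluated, key=lambda pq: abs(pq[0] - power))
    return nearest_q + 0.5 * abs(power - nearest_p)

def tune(iterations=20, seed=0):
    rng = random.Random(seed)
    evaluated = [(0.0, fabrication_quality(0.0))]  # one initial design point
    for _ in range(iterations):
        # propose candidates, evaluate the most promising one for real
        candidates = [rng.random() for _ in range(50)]
        best = max(candidates, key=lambda p: surrogate(p, evaluated))
        evaluated.append((best, fabrication_quality(best)))
    return max(evaluated, key=lambda pq: pq[1])

best_power, best_quality = tune()
```

The point of the pattern is that the expensive call (`fabrication_quality`, i.e. an actual fabrication run) happens only once per iteration, while the cheap surrogate is queried many times to choose where to spend that run.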

PGR Seminar with Carla Davesa Sureda

The next PGR seminar is taking place this Friday 22nd November at 2PM in JC 1.33a

Below is the title and abstract for Carla’s talk – please do come along if you are able.

Title:

Towards High-Level Modelling in Automated Planning

Abstract:

Planning is a fundamental activity, arising frequently in many contexts, from daily tasks to industrial processes. The planning task consists of selecting a sequence of actions to achieve a specified goal from specified initial conditions. The Planning Domain Definition Language (PDDL) is the leading language used in the field of automated planning to model planning problems. Previous work has highlighted the limitations of PDDL, particularly in terms of its expressivity. Our interest lies in facilitating the handling of complex problems and enhancing the overall capability of automated planning systems. Unified-Planning is a Python library offering a high-level API to specify planning problems and to invoke automated planners. In this work, we present an extension of the UP library aimed at enhancing its expressivity for high-level problem modelling. In particular, we have added an array type, an expression to count booleans, and the allowance for integer parameters in actions. We show how these facilities enable natural high-level models of three classical planning problems.

Doughnuts will be available! 🍩

AI Seminar Tuesday 19th November – Francesco Leofante

The School is hosting an AI seminar on Tuesday 19th November at 11am in JCB1.33A/B

Our speaker is Francesco Leofante from Imperial College London.

Title:

Robustness issues in algorithmic recourse.

Abstract:

Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CEs can be beneficial to affected individuals, recent work has exposed severe issues related to the robustness of state-of-the-art methods for obtaining CEs. Since a lack of robustness may compromise the validity of CEs, techniques to mitigate this risk are in order. In this talk we will begin by introducing the problem of (lack of) robustness, discuss its implications, and present some recent solutions we have developed to compute CEs with robustness guarantees.
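As a minimal illustration of what a counterfactual explanation is (not Francesco’s methods), the sketch below nudges one feature of a rejected applicant until a toy linear classifier flips its decision; the classifier, feature names, and weights are all invented:

```python
def approve(features, weights, bias=-1.0):
    """Toy classifier: approve iff the weighted score is positive."""
    return sum(w * x for w, x in zip(weights, features)) + bias > 0

def counterfactual(features, weights, feature_idx, step=0.05, max_steps=200):
    """Increase one feature in small steps until the decision flips,
    returning the minimally changed (along that feature) counterfactual."""
    cf = list(features)
    for _ in range(max_steps):
        if approve(cf, weights):
            return cf
        cf[feature_idx] += step
    return None  # no recourse found along this feature

weights = [0.8, 0.4]    # e.g. [income, savings], invented
applicant = [0.5, 0.5]  # rejected: 0.4 + 0.2 - 1.0 < 0
cf = counterfactual(applicant, weights, feature_idx=0)
# cf tells the applicant how much 'income' would need to rise for approval
```

The robustness problem the talk addresses arises here too: if the model is retrained and its weights shift slightly, a CE computed against the old weights may no longer lead to approval, invalidating the promised recourse.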

Bio:

Francesco is an Imperial College Research Fellow affiliated with the Centre for Explainable Artificial Intelligence at Imperial College London. His research focuses on safe and explainable AI, with special emphasis on counterfactual explanations and their robustness. Since 2022, he has led the project “ConTrust: Robust Contrastive Explanations for Deep Neural Networks”, a four-year effort devoted to the formal study of robustness issues arising in XAI. More details about Francesco and his research can be found at https://fraleo.github.io/.

PGR Seminar with Daniel Wyeth and Ferdia McKeogh

The next PGR seminar is taking place this Friday 15th November at 2PM in JC 1.33a

Below are the titles and abstracts for Daniel’s and Ferdia’s talks – please do come along if you are able.

Daniel:

Deep Priors: Integrating Domain Knowledge into Deep Neural Networks

Deep neural networks represent the state of the art for learning complex functions purely from data. There are, however, problems, such as medical imaging, where data is limited and effective training of such networks is difficult. Moreover, this requirement for large datasets represents a deficiency compared to human learning, which is able to harness prior understanding to acquire new concepts with very few examples. My work looks at methods for integrating domain knowledge into deep neural networks to guide training so that fewer examples are required. In particular, I explore probabilistic atlases and probabilistic graphical models as representations for this prior information, architectures which enable networks to use these, and the application of these to problems in medical image understanding.

Ferdia:

Lessons Learned From Emulating Architectures

Automatically generating fast emulators from formal architecture specifications avoids the error-prone and time-consuming effort of manually implementing an emulator. The key challenge is achieving high performance from correctness-focused specifications; extracting relevant functional semantics and performing aggressive optimisations. In this talk I will present my work thus far, and reflect on some of the unsuccessful paths of research.

Doughnuts will be available! 🍩

PGR Seminar with Ariane Hine

The PGR seminars for this academic year are beginning this Friday 8th November at 2PM in JC 1.33A/B

Below is the title and abstract for Ariane’s talk – please do come along if you are able.

Title: Enhancing and Personalising Endometriosis Care with Causal Machine Learning

Abstract: Endometriosis poses significant challenges in diagnosis and management due to its wide range of symptoms and systemic implications. Integrating machine learning into healthcare screening processes can significantly enhance resource allocation and diagnostic efficiency, and facilitate more tailored, personalised treatment plans. This talk will discuss the potential of leveraging patient-reported symptom data through causal machine learning to advance endometriosis care and reduce the lengthy diagnostic delays associated with this condition.

The goal is to propose a novel personalised non-invasive diagnostic approach that understands the underlying causes of patient symptoms and combines health records and other factors to enhance prediction accuracy, providing an approach that can be utilised globally.

Fudge donuts will be available! 🍩

AI Seminar Friday 18th October – Leonardo Bezerra

The School is hosting an AI seminar on Friday 18th October at 11.30am in JCB1.33A!

Our speaker is Leonardo Bezerra from the University of Stirling.

FAIRTECH by design: assessing and addressing the social impacts of artificial intelligence systems

In a decade, social media and big data have transformed society and enabled groundbreaking artificial intelligence (AI) technologies like deep learning and generative AI. Applications like ChatGPT have impacted the world and outpaced regulatory agencies, which have been rushed from a data-centred to an AI-centred focus. Recent developments from both the United Kingdom (UK) and the United States (US) originated in the executive branch, and the most advanced Western binding legislation is the European Union (EU) AI Act, expected to be implemented over the next three years. In the meantime, the United Nations (UN) have proposed an AI advisory body similar to the International Panel on Climate Change (IPCC), and countries from the Global South like Brazil are following Western proposals. In turn, AI companies have been proactive in the regulation debate, aiming at a scenario of improved accountability and reduced liability. In this talk, we will briefly overview efforts and challenges regarding AI regulation and how major AI players are addressing it. The goal of the talk is to foster future project collaborations from a multidisciplinary perspective, to promote a culture where the development and adoption of AI systems is fair, accountable, inclusive, responsible, transparent, ethical, carbon-efficient, and human-centred (FAIRTECH) by design.

Speaker bio: Leonardo Bezerra joined the University of Stirling as a Lecturer in Artificial Intelligence (AI)/Data Science in 2023, having previously been a Lecturer in Brazil for 7 years. He received his Ph.D. degree from Université Libre de Bruxelles (Belgium) in 2016, having defended a thesis on the automated design of multi-objective evolutionary algorithms. His research experience spans from applied data science projects with public and private institutions to supervising theses on automated and deep machine learning. Recently, his research has concentrated on the social impact of AI applications, as part of the Participatory Harm Auditing Workbenches and Methodologies project funded by Responsible AI UK.