The School is hosting an AI seminar on Friday 18th October at 11.30am in JCB1.33A!
Our speaker is Leonardo Bezerra from the University of Stirling.
FAIRTECH by design: assessing and addressing the social impacts of artificial intelligence systems
In a decade, social media and big data have transformed society and enabled groundbreaking artificial intelligence (AI) technologies such as deep learning and generative AI. Applications like ChatGPT have had a worldwide impact and outpaced regulatory agencies, which have been forced to shift from a data-centred to an AI-centred focus. Recent developments in both the United Kingdom (UK) and the United States (US) originated in the executive branch, and the most advanced binding Western legislation is the European Union (EU) AI Act, expected to be implemented over the next three years. In the meantime, the United Nations (UN) has proposed an AI advisory body modelled on the Intergovernmental Panel on Climate Change (IPCC), and countries from the Global South, such as Brazil, are following Western proposals. In turn, AI companies have been proactive in the regulation debate, aiming for a scenario of improved accountability and reduced liability. In this talk, we will briefly overview the efforts and challenges of AI regulation and how major AI players are addressing it. The goal of the talk is to spur future project collaborations from a multidisciplinary perspective, promoting a culture in which the development and adoption of AI systems are fair, accountable, inclusive, responsible, transparent, ethical, carbon-efficient, and human-centred (FAIRTECH) by design.
Speaker bio: Leonardo Bezerra joined the University of Stirling as a Lecturer in Artificial Intelligence (AI)/Data Science in 2023, after seven years as a lecturer in Brazil. He received his Ph.D. from the Université Libre de Bruxelles (Belgium) in 2016, with a thesis on the automated design of multi-objective evolutionary algorithms. His research experience ranges from applied data science projects with public and private institutions to supervising theses on automated and deep machine learning. Recently, his research has focused on the social impact of AI applications, as part of the Participatory Harm Auditing Workbenches and Methodologies project funded by Responsible AI UK.