Seminar – Phong Le, Amazon – 3rd March 2021

Can Language Models be Weak Annotators?

We are happy to have Phong Le, from Amazon, talk on Teams on Wed 3 March at 12 noon.

Abstract

Deep language models, e.g. BERT and GPT-3, have been the breakthrough in Natural Language Processing of the last 3 years. Being trained on massive raw text data, they capture useful priors for several tasks such as syntactic parsing, information extraction, and question answering. Moreover, they are capable of answering factual and commonsense cloze questions such as “Dante was born in _____”. In this talk, I will first give an overview of what language models “know”. I will then present our work on exploiting their knowledge as weak supervision for a specific task called relation classification.

Relation classification, the identification of a particular relation type between two entities in text, requires annotated data. Data annotation is either a manual process, for supervised learning, or automated using knowledge bases, for distant learning. However, both methodologies are costly and time-consuming, since they depend on intensive human labour for annotation or for knowledge-base creation. Using language models as annotators, by contrast, is very cheap, but the annotation quality is low. We therefore propose NoelA, an auto-encoder using a noisy channel, to improve accuracy by learning from the low-quality annotated data. NoelA outperforms BERT and a bootstrapping baseline on the TACRED and reWIKI datasets.
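
The cloze-style probing mentioned in the abstract is easy to try for yourself. Below is a minimal sketch using the Hugging Face transformers library (our own illustration, not code from the talk), which asks BERT to fill in the “Dante was born in _____” example:

```python
# Query a masked language model with a cloze question to see what it "knows".
# Requires: pip install transformers torch
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for prediction in unmasker("Dante was born in [MASK]."):
    # Each prediction carries the proposed token and the model's confidence.
    print(prediction["token_str"], round(prediction["score"], 3))
```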

Bio: I’m an applied scientist at Amazon Alexa. Before that, I was a tenure-track research fellow at the University of Manchester. I did a postdoc with Ivan Titov at the University of Edinburgh, and received my PhD from the University of Amsterdam under the supervision of (Jelle) Willem Zuidema. I’m interested in neural networks and deep learning, and my current work employs them to solve natural language processing tasks such as entity linking, coreference resolution, and dependency parsing. I’m also interested in formal semantics, especially learning semantic parsing.

For more details, please visit my homepage https://sites.google.com/site/lephongxyz/


Please note the session will not be recorded, to preserve the like-for-like nature of physical seminars and also avoid any privacy/rights issues.

Event details

  • When: 3rd March 2021 12:00 - 13:00
  • Format: Seminar

Seminar – Richard Connor – 5th November

The second school seminar takes place on 5th November at 2pm, on Teams.  If you do not have the Teams link available, please contact the organiser, Ian Gent.

Dimensionality Reduction in non-Euclidean Spaces
Richard Connor
Deep Learning (i.e. Convolutional Neural Networks) gives astoundingly good classification over many domains, notably images. Less well known, but perhaps more exciting, are similarity models that can be applied to their inner layers, where there lurk data representations that can give a much more generic notion of similarity. The problem is that these data representations are huge, and so searching a very large space for similar objects is inherently intractable.
If we treat the data as high-dimensional vectors in Euclidean space, then a wealth of approximation techniques is available, most notably dimensionality reduction, which can give much smaller forms of the data within acceptable error bounds. However, the data is not inherently Euclidean, and there are better ways of measuring similarity using more sophisticated metrics.
The problem now is that existing dimensionality reduction techniques perform analysis over the coordinate space to achieve the size reduction. The more sophisticated metrics give only relative distances and are not amenable to analysis of the coordinates. In this talk, we show a novel technique which uses only the distances among whole objects to achieve a mapping into a low dimensional Euclidean space. As well as being applicable to non-Euclidean metrics, its performance over Euclidean spaces themselves is also interesting.
This is work in progress; anyone interested is more than welcome to collaborate!
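
To give a feel for the kind of mapping the abstract describes, here is a minimal sketch of classical multidimensional scaling (MDS), a standard technique that likewise embeds objects into a low-dimensional Euclidean space using only their pairwise distances. This is purely illustrative background, not the novel technique from the talk:

```python
# Classical MDS: recover low-dimensional Euclidean coordinates
# from nothing but a matrix of pairwise distances.
import numpy as np

def classical_mds(D, k):
    """D: (n, n) matrix of pairwise distances; k: target dimension."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:k]        # k largest eigenvalues
    scale = np.sqrt(np.maximum(eigvals[idx], 0))  # clamp negatives to zero
    return eigvecs[:, idx] * scale             # (n, k) coordinates

# Four corners of a unit square, recovered in 2D from distances alone.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(classical_mds(D, 2))
```

Note the clamping of negative eigenvalues: for a non-Euclidean distance matrix, B need not be positive semi-definite, which is exactly where this classical, coordinate-based approach starts to strain.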

MIP Modelling Made Manageable

Can a user write a good MIP model without understanding linearization? Modelling languages such as AMPL and AIMMS are being extended to support more features, with the goal of making MIP modelling easier. A big step is the incorporation of predicates, such as “cycle”, which encapsulate MIP sub-models. This talk explores the impact of such predicates in the MiniZinc modelling language when it is used as a MIP front-end. It reports on the performance of the resulting models, and the features of MiniZinc that make this possible.
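
For readers unfamiliar with the linearization that the abstract says users should not need to understand: below is a minimal sketch of the classic big-M trick for encoding a logical implication as linear constraints. It uses the Python PuLP library purely for illustration (the talk itself concerns MiniZinc, not PuLP):

```python
# What "linearization" hides from the modeller: encoding the logical
# constraint  b => (x >= 5)  for a MIP solver via the big-M trick.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary

M = 100                                    # known upper bound on x
prob = LpProblem("bigM_example", LpMinimize)
x = LpVariable("x", lowBound=0, upBound=M)
b = LpVariable("b", cat=LpBinary)

# If b = 1 this forces x >= 5; if b = 0 it reduces to x >= 5 - M,
# which any feasible x satisfies, so the constraint is switched off.
prob += x >= 5 - M * (1 - b)
prob += b == 1                             # turn the implication on
prob += x                                  # objective: minimise x
prob.solve()
print(x.value())                           # -> 5.0
```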

Professor Mark Wallace is Professor of Data Science & AI at Monash University, Australia. We gratefully acknowledge support from a SICSA Distinguished Visiting Fellowship which helped finance his visit.

Professor Wallace graduated from Oxford University in Mathematics and Philosophy. He worked for the UK computer company ICL for 21 years while completing a Masters degree in Artificial Intelligence at the University of London and a PhD sponsored by ICL at Southampton University. For his PhD, Professor Wallace designed a natural language processing system which ICL turned into a product. He moved to Imperial College in 2002, taking a Chair at Monash University in 2004.

His research interests span different techniques and algorithms for optimisation and their integration and application to solving complex resource planning and scheduling problems. He was a co-founder of the hybrid algorithms research area and is a leader in the research areas of Constraint Programming (CP) and hybrid techniques (CPAIOR). The outcomes of his research in these areas include practical applications in transport optimisation.

He is passionate about modelling and optimisation and the benefits they bring.  His focus both in industry and University has been on application-driven research and development, where industry funding is essential both to ensure research impact and to support sufficient research effort to build software systems that are robust enough for application developers to use.

He led the team that developed the ECLiPSe constraint programming platform, which was bought by Cisco Systems in 2004. Moving to Australia, he worked on a novel hybrid optimisation software platform called G12, and founded the company Opturion to commercialise it.  He also established the Monash-CTI Centre for optimisation in travel, transport and logistics.   He has developed solutions for major companies such as BA, RAC, CFA, and Qantas.  He is currently involved in the Alertness CRC, plant design for Woodside planning, optimisation for Melbourne Water, and work allocation for the Alfred hospital.

Event details

  • When: 19th June 2019 11:00 - 12:00
  • Where: Cole 1.33a
  • Series: AI Seminar Series
  • Format: Lecture, Seminar

Fable-based Learning: Seminar by Prof Jimmy Lee

CUHK + UniMelb = Fable-based Learning + A Tale of Two Cities

Prof Jimmy Lee, Chinese University of Hong Kong

This talk reports on the pedagogical innovation and experience of a joint venture by The Chinese University of Hong Kong (CUHK) and the University of Melbourne (UniMelb) in the development of MOOCs on the computer science subject of “Modeling and Solving Discrete Optimization Problems”.  In a nutshell, the MOOCs feature the Fable-based Learning approach, a form of problem-based learning encapsulated in a coherent story plot.  Each video lecture begins with an animation that tells a story based on the Chinese classic “Romance of the Three Kingdoms”, in which the protagonists of the novel encounter a problem requiring technical assistance from two professors from modern times, via a magical tablet bestowed upon them by a fairy god.  The new pedagogy aims at increasing learners’ motivation as well as situating the learners in a coherent learning context.  In addition to scriptwriting, animation production and situating the teaching materials in the story plot, another challenge of the project was the distance and potential cultural gap between the two institutions, as well as the need to produce all teaching materials in both (Mandarin) Chinese and English to cater for different geographical learning needs.  The MOOCs have been running recurrently on Coursera since 2017.  Some learner statistics and feedback will be presented.  The experience and preliminary observations of adopting the online materials in a Flipped Classroom setting at CUHK will also be detailed.

A trailer for the Coursera course is available on YouTube.

Biography:

Jimmy Lee has been on the faculty of The Chinese University of Hong Kong since 1992, where he is currently the Assistant Dean (Education) in the Faculty of Engineering and a Professor in the Department of Computer Science and Engineering.  His major research focuses on constraint satisfaction and optimization with applications in discrete optimization, but he is also involved in investigating ways of improving students’ learning experience via proper use of technologies.  Jimmy is a two-time recipient (2004 and 2015) of the Vice-Chancellor’s Exemplary Teaching Award and most recently the recipient of the University Education Award (2017) at CUHK.

Event details

  • When: 21st August 2018 13:30 - 14:30
  • Where: Cole 1.33b
  • Format: Seminar

School achieves Athena SWAN Bronze Award


We are delighted to announce that the School of Computer Science has achieved an Athena SWAN Bronze Award, in recognition of our commitment to advancing gender equality.

Almost all teaching staff contributed to the application for the award, as well as many other staff in all categories, research students, masters students, and undergraduates. In congratulating staff, Simon Dobson as Head of School said:

It really does have all our fingerprints on it. The award reflects the fact that we’ve identified things that we wanted to change and have planned how to make them happen: from now on they’ll all just be “how things are” rather than part of an external process.

DLS: What Every Computer Scientist Should Know About Computer History

What Every Computer Scientist Should Know About Computer History

Prof Ursula Martin

Update: Lectures will be live streamed at this link.

Distinguished Lecture Series, Semester 1, 2017-18

Biography:

Professor Ursula Martin CBE FREng FRSE joined the University of Oxford as Professor of Computer Science in 2014, and is a member of the Mathematical Institute.  She holds an EPSRC Established Career Fellowship, and a Senior Research Fellowship at Wadham College. Her research, initially in algebra, logic and the use of computers to create mathematical proofs, now focuses on wider social and cultural approaches to understanding the success and impact of current and historical computer science research.


Before joining Oxford she worked at Queen Mary University of London, where she was Vice-Principal for Science and Engineering (2005-2009) and Director of the impactQM project (2009-2012), an innovative knowledge transfer initiative. She serves on numerous international committees, including the Royal Society’s Diversity Committee and the UK Defence Science Advisory Council. She worked at the University of St Andrews from 1992 to 2002, as only its second female professor, and its first in over 50 years. She holds an MA in Mathematics from Cambridge, and a PhD in Mathematics from Warwick.

Timetable:

9.30 Introduction

9.35 Lecture 1:  The early history of computing: Ada Lovelace, Charles Babbage, and the history of programming

10.35 Break with Refreshments Provided

11.15 Lecture 2: Case study, Alan Turing,  Grace Hopper, and the history of getting things right

12.15 Lunch (not provided)

2.30 Welcome by the Principal, Prof Sally Mapstone

2.35 Lecture 3: What do historians of computing do, and why is it  important for computer scientists today

3.30 Close

Lecture 1. The early history of computing: Ada Lovelace, Charles Babbage, and the history of programming

In 1843 Ada Lovelace published a remarkable paper in which she explained Charles Babbage’s designs for his Analytical Engine. Had it been built, it would have had in principle the same capabilities as a modern general-purpose computer. Lovelace’s paper is famous for its insights into more general questions, as well as for its detailed account of how the machine performed its calculations – illustrated with a large table which is often called, incorrectly, the “first programme”. I’ll talk about the wider context; why people were interested in computing engines; and some of the other work that was going on at the time, for example Babbage’s remarkable hardware description language. I’ll look at different explanations for why Babbage’s ideas did not take off, and give a quick overview of what did happen over the next 100 years, before the invention of the first digital computers.

Lecture 2. Case study, Alan Turing,  Grace Hopper, and the history of getting things right

Getting software right has been a theme of programming from the days of Babbage onwards. I’ll look at the work of pioneers Alan Turing and Grace Hopper, and talk about the long interaction of computer science with logic, which has led to better programming languages, new ways to prove programmes correct, and sophisticated mathematical theories of importance in their own right. I’ll look at the history of the age-old debate about whether computer science needs mathematics to explain its main ideas, or whether practical skills, building things and making things simple for the user are more important.

Lecture 3: What do historians of computing do, and why is it  important for computer scientists today

When people think about computer science, they think about ideas and technologies that are transforming the future – smaller, faster, smarter connected devices, powered by AI and big data – and looking at the past can be seen as a bit of a waste of time. In this lecture I’ll look at what historians do and why it is important; how we get history wrong; and in particular how we often miss the contribution of women. I’ll illustrate my talk with my own work on Ada Lovelace’s papers, to show how detailed historical work is needed to debunk popular myths – it is often claimed that Lovelace’s talent was “poetical science” rather than maths, but I’ve shown that she was a gifted, perceptive and knowledgeable mathematician. I’ll explain how the historian’s techniques of getting it right can help us get to grips with topical problems like “fake news”, and give us new ways of thinking about the future.

Event details

  • When: 10th October 2017 09:30 - 16:00
  • Where: Byre Theatre
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture

n-Queens Completion is NP-Complete

Update, 2021

Over the years since we published this research, many people have approached us having solved the n queens puzzle, either for a single n such as 8 or 1000, or having written an algorithm to solve it for different sizes.  Unfortunately this is not a major result in Computer Science and does not make one eligible to claim the $1M Clay prize. Many have been disappointed by this, so we want to clarify why this is the case.

It is true that work on this problem could potentially result in the award, but only if some exceptionally difficult conditions are met:

  • EITHER prove mathematically that NO possible algorithm could solve the n queens completion problem in polynomial time;
  • OR prove that there is an algorithm which is guaranteed to solve every instance of the n queens completion problem in polynomial time. Note that in this case the algorithm has to work on the completion version of the problem studied in our paper, not placing queens on an empty board; the algorithm has to give the correct answer on every possible instance given to it; and there has to be a mathematical proof that the algorithm’s runtime is bounded above by some polynomial in the size of the board.  However fast a given algorithm runs when tested, this is not sufficient because there are an infinite number of possible tests available, so a mathematical proof is required.
  • AND in either case, prove this at a level that is published in a respected academic source and is widely accepted by research experts as correct.

We are delighted that our work has led so many people to take an interest in solving problems like the n queens puzzle, which fascinates us too.  But we also apologise for any impression we gave, unintentionally, that a solution to the n queens puzzle could lead to the award of the prize except under the extremely strenuous conditions listed above.

Ian Gent, 10 May 2021

Original Post from 2017:

Ian Gent, Christopher Jefferson and Peter Nightingale have shown that a classic chess puzzle is NP-Complete. Their paper “Complexity of n-Queens Completion” was published in the Journal of Artificial Intelligence Research on August 30, 2017.

The n-Queens puzzle is a classic chess problem: given a chessboard of size n by n, can you place n queens so that no two queens attack each other?  That is, can you place the queens so that no two queens are on the same row, column, or diagonal?  The n-Queens puzzle has long been known to be simple to solve: you can solve the problem for all n except 2 and 3, and solutions for all other n can be described in a few lines.  This very simplicity has led to repeated controversy in Artificial Intelligence (AI). The n-Queens puzzle is often used as a benchmark problem, but good results on the problem can always be challenged because the problem is so simple to solve without using AI methods.

Peter Nightingale and Ian Gent at Falkland Palace, Wednesday, 17 August 2017.
©Stuart Nicol Photography, 2017

The new work follows a challenge on Facebook on New Year’s Day, 2015, when a friend of Ian’s asked him how hard n-Queens is if some queens have already been placed on the board.  It turns out that this version of the puzzle (dating from 1850) is only two years younger than the more famous n-Queens problem itself. The picture shows Peter (left) and Ian (right) with queens on the board at the positions suggested by Nauck in 1850, the squares b4 and d5.  Can you put another 6 queens on the board so that the entire board is a solution of 8-Queens?  The general version, with some number of queens preplaced on an n by n board, is the n-Queens Completion puzzle.


With queens at b4 and d5, can you place 6 more queens to get a solution to the 8-queens puzzle? ©Stuart Nicol Photography, 2017
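
If you would like to try Nauck’s instance programmatically, here is a minimal backtracking sketch in Python (our own illustration, not code from the paper; note that backtracking can take exponential time in the worst case, which is consistent with the hardness result below):

```python
# Solve an n-Queens Completion instance by simple backtracking:
# given some queens already placed, try to extend to a full solution.

def complete_n_queens(n, preplaced):
    """preplaced: dict mapping row -> column of fixed queens (0-indexed)."""
    cols = set(preplaced.values())
    diag1 = {r + c for r, c in preplaced.items()}   # "/" diagonals
    diag2 = {r - c for r, c in preplaced.items()}   # "\" diagonals
    # If two preplaced queens share a column or diagonal, no completion exists.
    if min(len(cols), len(diag1), len(diag2)) != len(preplaced):
        return None

    board = dict(preplaced)

    def place(row):
        if row == n:
            return True
        if row in board:                  # a fixed queen occupies this row
            return place(row + 1)
        for c in range(n):
            if c not in cols and (row + c) not in diag1 and (row - c) not in diag2:
                board[row] = c
                cols.add(c); diag1.add(row + c); diag2.add(row - c)
                if place(row + 1):
                    return True
                del board[row]
                cols.remove(c); diag1.remove(row + c); diag2.remove(row - c)
        return False

    return board if place(0) else None

# Nauck's 1850 instance: queens on b4 and d5 of an 8x8 board,
# i.e. (row 3, column 1) and (row 4, column 3) counting from zero.
print(complete_n_queens(8, {3: 1, 4: 3}))
```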

Ian, Christopher and Peter have shown that the n-Queens puzzle is in fact hard, not simple.  It belongs to the complexity classes NP-Complete and #P-Complete. Other NP-Complete problems include the “travelling salesperson problem”, finding cliques in graphs, and many other important problems, from scheduling to circuit layout. This puts n-Queens Completion at the centre of the most important theoretical problem in computer science — it has long been known that either all NP-complete problems are easy, or none of them are. Most computer scientists believe that this means there is no efficient algorithm possible for solving this problem, compared to the very simple techniques long known for n-Queens.
The importance of this work is that it provides researchers with a benchmark that can be used for evaluating AI techniques. Moreover, it helps to explain why so many AI techniques have been used on the n-Queens puzzle. Most techniques do most of their work with some queens partially placed, using some form of (hopefully intelligent) trial and error. In fact it turns out that many researchers – in order to solve a simple problem – have solved it by turning the simple problem of n-Queens into the hard problem of n-Queens Completion.
It does seem that AI researchers should not use n-Queens as a benchmark, but the very closely related n-Queens Completion puzzle is a valid benchmark. As well as the theoretical results, the paper shows how example instances can be generated which appear to be hard in practice. Some caution is still needed, though. It does seem to be quite hard to generate hard instances of n-Queens Completion.
The University has also issued an article on the same paper, under the title “Simple” chess puzzle holds key to $1m prize

Seminar: Propagation and Reification: SAT and SMT in Prolog (continued)

Jacob Howe, City University, London

Abstract: This talk will recap how a watched-literal, DPLL-based SAT solver can be succinctly coded in 20 lines of Prolog. The main part of the talk will cover the extension of this solver to an SMT solver, with a particular focus on the case where the theory is that of rational-tree constraints, and on its application to a reverse engineering problem.
[Note change of time from that previously advertised]
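
For readers who have not met DPLL before, here is a minimal Python sketch of the basic algorithm (unit propagation plus branching). It is our own illustration: the solver discussed in the talk is written in Prolog and uses the more refined watched-literal scheme:

```python
# A bare-bones DPLL SAT solver: repeat unit propagation, then branch.
# Clauses use DIMACS-style integer literals, e.g. -3 means "NOT x3".

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    # Unit propagation: a one-literal clause forces that literal true.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        simplified = []
        for clause in clauses:
            if lit in clause:
                continue                      # clause already satisfied
            reduced = [l for l in clause if l != -lit]
            if not reduced:
                return None                   # empty clause: conflict
            simplified.append(reduced)
        clauses = simplified
    if not clauses:
        return assignment                     # every clause satisfied
    # Branch: try the first literal of the first clause both ways.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        result = dpll(clauses + [[choice]], assignment)
        if result is not None:
            return result
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))      # -> {1: True, 3: True, 2: False}
```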

Event details

  • When: 23rd June 2017 13:00 - 14:00
  • Where: Cole 1.33a
  • Series: AI Seminar Series
  • Format: Seminar

Best Final Year Student at Lovelace 2017

We are delighted to congratulate Iveta Dulova, who attended the 10th BCSWomen Lovelace Colloquium, and walked away with the prize for “Best Final Year Student”. Iveta’s poster, titled “SensorCube: An end-to-end framework for conducting research via mobile sensing”, was based on her final year project supervised by Dr Juan Ye.

The event was held at Aberystwyth University on April 12, 2017. Also attending from St Andrews were Chloe Collins, competing in the second year category with the poster “Pedal to the metal – the role of technology in transportation”, and Laura Brewis with her poster “What percentage of solitaire games are actually winnable?”.

It showed great commitment for these three students to undertake the lengthy trip at a busy time of semester. Like St Andrews, Aberystwyth is a beautiful small seaside town with an excellent Computer Science department.  Iveta took a couple of photos showing off the beach and the campus.

DLS: Algorithms for healthcare-related matching problems

Algorithms for healthcare-related matching problems

Distinguished Lecture Series, Semester 2, 2016-7

David Manlove

School of Computing Science, University of Glasgow

Lower College Hall (with overflow simulcast in Upper College Hall)

Abstract:

Algorithms arise in numerous everyday applications – in this series of lectures I will describe how algorithms can be used to solve matching problems having applications in healthcare settings.  I will begin by outlining how algorithms can be designed to cope with computationally hard problems.  I will then describe algorithms developed at the University of Glasgow that have been used by the NHS to solve two particular matching problems.  These problems correspond to the annual assignment of junior doctors to Scottish hospitals, and finding “kidney exchanges” between kidney patients and their incompatible donors in the UK.
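
To give a flavour of the matching algorithms the lectures cover, here is a minimal sketch of the classic Gale-Shapley algorithm for one-to-one stable matching. It is illustrative only: the NHS applications described above involve the more general Hospitals/Residents problem and kidney-exchange models rather than this textbook version:

```python
# Gale-Shapley stable matching: doctors propose, hospitals hold the
# best proposal seen so far. One-to-one version for simplicity.

def gale_shapley(doctor_prefs, hospital_prefs):
    """Both arguments: dicts mapping a name to a full preference list."""
    rank = {h: {d: i for i, d in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    free = list(doctor_prefs)              # doctors not yet matched
    next_choice = {d: 0 for d in doctor_prefs}
    match = {}                             # hospital -> doctor

    while free:
        d = free.pop()
        h = doctor_prefs[d][next_choice[d]]   # best hospital not yet tried
        next_choice[d] += 1
        if h not in match:
            match[h] = d
        elif rank[h][d] < rank[h][match[h]]:  # hospital prefers the newcomer
            free.append(match[h])
            match[h] = d
        else:
            free.append(d)                 # rejected: try the next hospital
    return match

doctors = {"ann": ["city", "royal"], "bob": ["city", "royal"]}
hospitals = {"city": ["bob", "ann"], "royal": ["ann", "bob"]}
print(gale_shapley(doctors, hospitals))    # -> {'city': 'bob', 'royal': 'ann'}
```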

Event details

  • When: 31st March 2017 09:15 - 15:30
  • Where: Lower College Hall
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture