MIP Modelling Made Manageable

Event details

  • When: 19th June 2019 11:00 - 12:00
  • Where: Cole 1.33a
  • Series: AI Seminar Series
  • Format: Lecture, Seminar

Can a user write a good MIP model without understanding linearization? Modelling languages such as AMPL and AIMMS are being extended to support more features, with the goal of making MIP modelling easier. A big step is the incorporation of predicates, such as “cycle”, which encapsulate MIP sub-models. This talk explores the impact of such predicates in the MiniZinc modelling language when it is used as a MIP front-end. It reports on the performance of the resulting models and the features of MiniZinc that make this possible.
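For readers unfamiliar with the linearization such predicates hide, here is a minimal sketch (not from the talk) of the standard linear encoding of the logical constraint y = x1 OR x2 over binary variables, checked exhaustively in plain Python:

```python
from itertools import product

# Standard linearization of y = x1 OR x2 for binary variables:
#   y >= x1,  y >= x2,  y <= x1 + x2
# Exhaustively check that the linear constraints admit exactly the
# logical value of x1 OR x2 for every binary assignment.
for x1, x2 in product((0, 1), repeat=2):
    feasible = [y for y in (0, 1) if y >= x1 and y >= x2 and y <= x1 + x2]
    assert feasible == [max(x1, x2)], (x1, x2, feasible)

print("linearization of OR verified on all binary inputs")
```

Predicates like “cycle” encapsulate far larger encodings of this kind (e.g. subtour-elimination constraints), which is exactly the burden the talk argues modelling languages can lift from the user.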

Professor Mark Wallace is Professor of Data Science & AI at Monash University, Australia. We gratefully acknowledge support from a SICSA Distinguished Visiting Fellowship which helped finance his visit.

Professor Wallace graduated from Oxford University in Mathematics and Philosophy. He worked for the UK computer company ICL for 21 years while completing a Master's degree in Artificial Intelligence at the University of London and a PhD sponsored by ICL at Southampton University. For his PhD, Professor Wallace designed a natural language processing system which ICL turned into a product. He moved to Imperial College in 2002, taking a Chair at Monash University in 2004.

His research interests span different techniques and algorithms for optimisation and their integration and application to solving complex resource planning and scheduling problems. He was a co-founder of the hybrid algorithms research area and is a leader in the research areas of Constraint Programming (CP) and hybrid techniques (CPAIOR). The outcomes of his research in these areas include practical applications in transport optimisation.

He is passionate about modelling and optimisation and the benefits they bring. His focus, in both industry and academia, has been on application-driven research and development, where industry funding is essential both to ensure research impact and to support sufficient research effort to build software systems that are robust enough for application developers to use.

He led the team that developed the ECLiPSe constraint programming platform, which was bought by Cisco Systems in 2004. Moving to Australia, he worked on a novel hybrid optimisation software platform called G12, and founded the company Opturion to commercialise it. He also established the Monash-CTI Centre for optimisation in travel, transport and logistics. He has developed solutions for major companies such as BA, RAC, CFA, and Qantas. He is currently involved in the Alertness CRC, plant design for Woodside planning, optimisation for Melbourne Water, and work allocation for the Alfred hospital.

Juho Rousu: Predicting Drug Interactions with Kernel Methods

Event details

  • When: 30th April 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Format: Seminar

Title:
Predicting Drug Interactions with Kernel Methods

Abstract:
Many real-world prediction problems can be formulated as pairwise learning problems, in which one is interested in making predictions for pairs of objects, e.g. drugs and their targets. Kernel-based approaches have emerged as powerful tools for solving problems of this kind, and multiple kernel learning (MKL) in particular offers promising benefits, as it enables integrating various types of complex biomedical information sources in the form of kernels, along with learning their importance for the prediction task. However, the immense size of pairwise kernel spaces remains a major bottleneck, making existing MKL algorithms computationally infeasible even for a small number of input pairs. We introduce pairwiseMKL, the first method for time- and memory-efficient learning with multiple pairwise kernels. pairwiseMKL first determines the mixture weights of the input pairwise kernels, and then learns the pairwise prediction function. Both steps are performed efficiently, without explicit computation of the massive pairwise matrices, making the method applicable to large pairwise learning problems. We demonstrate the performance of pairwiseMKL in two related tasks of quantitative drug bioactivity prediction, using up to 167,995 bioactivity measurements and 3,120 pairwise kernels: (i) prediction of anticancer efficacy of drug compounds across a large panel of cancer cell lines; and (ii) prediction of target profiles of anticancer compounds across their kinome-wide target spaces. We show that pairwiseMKL provides accurate predictions using solutions that are sparse in terms of selected kernels, and therefore it also automatically identifies the data sources relevant to the prediction problem.
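The computational trick that makes implicit pairwise learning feasible is worth a sketch. A pairwise (Kronecker product) kernel over drug-target pairs factorises as Kt ⊗ Kd, and matrix-vector products with it can be computed without ever materialising the pairwise matrix, via the identity (Kt ⊗ Kd) vec(X) = vec(Kd X Ktᵀ). The snippet below (an illustration with random kernels, not the pairwiseMKL code) verifies this in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_drugs, n_targets = 30, 20

# Random positive semi-definite "drug" and "target" kernels.
Gd = rng.standard_normal((n_drugs, n_drugs))
Gt = rng.standard_normal((n_targets, n_targets))
Kd, Kt = Gd @ Gd.T, Gt @ Gt.T

X = rng.standard_normal((n_drugs, n_targets))
v = X.flatten(order="F")          # column-stacking vectorisation vec(X)

# Naive: materialise the 600 x 600 pairwise kernel explicitly.
naive = np.kron(Kt, Kd) @ v

# Implicit: (Kt o Kd) vec(X) = vec(Kd X Kt^T), avoiding the
# (n_drugs * n_targets)^2 pairwise matrix entirely.
fast = (Kd @ X @ Kt.T).flatten(order="F")

print(np.allclose(naive, fast))   # the two products agree
```

With many candidate kernels, repeating this trick per kernel is what keeps both kernel-weight optimisation and prediction tractable at the problem sizes quoted above.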

References:
Anna Cichonska, Tapio Pahikkala, Sandor Szedmak, Heli Julkunen, Antti Airola, Markus Heinonen, Tero Aittokallio, Juho Rousu; Learning with multiple pairwise kernels for drug bioactivity prediction, Bioinformatics, Volume 34, Issue 13, 1 July 2018, Pages i509–i518, https://doi.org/10.1093/bioinformatics/bty277

Short Bio:
Juho Rousu is a Professor of Computer Science at Aalto University, Finland. Rousu obtained his PhD in 2001 from the University of Helsinki, while working at VTT Technical Research Centre of Finland. In 2003-2005 he was a Marie Curie Fellow at Royal Holloway, University of London. In 2005-2011 he held Lecturer and Professor positions at the University of Helsinki, before moving to Aalto University in 2012, where he leads a research group on Kernel Methods, Pattern Analysis and Computational Metabolomics (KEPACO). Rousu’s main research interest is in learning with multiple and structured targets, multiple views and ensembles, with methodological emphasis on regularised learning, kernels and sparsity, as well as efficient convex/non-convex optimisation methods. His applications of interest include metabolomics, biomedicine, pharmacology and synthetic biology.

Hugh Leather (Edinburgh): Deep Learning for Compilers (School Seminar)

Event details

  • When: 9th April 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Abstract:

Writing optimising compilers is difficult. The range of programs that may be presented to the compiler is huge, and the systems on which they run are complex, heterogeneous, non-deterministic, and constantly changing. Machine learning has been shown to make writing compiler heuristics easier, but many issues remain.

In this talk I will discuss recent advances in using deep learning to solve compiler issues: learning heuristics and testing compiler correctness.

Speaker Bio:

Hugh is a reader (associate professor) at the University of Edinburgh. His research involves all elements of compilers and operating systems, usually targeting performance and energy optimisation, often with a focus on using machine learning for those tasks. After his PhD, also at Edinburgh, he was a Research Fellow of the Royal Academy of Engineering. Before returning to academia, he was an engineer at Microsoft and an architect and team leader at Trilogy, delivering multi-million dollar projects to Fortune 500 companies.

Paul-Olivier Dehaye: From Cambridge Analytica to the future of online services: a personal journey (School Seminar)

Event details

  • When: 19th March 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Abstract:

2018 was a crazy year for privacy. The General Data Protection Regulation came into force in May, and new revelations about the personal data ecosystem were making headlines on a weekly basis. I will give a behind-the-scenes account of many of these events, question why they didn’t happen earlier, and offer some thoughts on the necessary future of online services. This will include a brief discussion of topics such as semantic alignment, interpretable machine learning, and new privacy-preserving data processing techniques.

Speaker Bio:

Paul-Olivier Dehaye is a mathematician by training. Affiliated with the University of Zurich as an SNSF Assistant Professor until 2016, his career then took a turn towards data protection activism and social entrepreneurship. He was the researcher behind several news articles that have reached millions of readers (including many with Carole Cadwalladr), and has testified before the UK and EU Parliaments on multiple occasions. He is on the board of MyData Global, and founded the NGO PersonalData.IO and the MyData Geneva project.

Rachel Menzies (Dundee): Unlocking Accessible Escape Rooms: Is Technology the Key? (School Seminar)

Event details

  • When: 2nd April 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Abstract:

Escape rooms are popular recreational activities whereby players are locked in a room and must solve a series of puzzles in order to ‘escape’. Recent years have seen a large expansion in the technology used in these rooms to provide ever-changing and increasingly immersive experiences. This technology could be used to minimise accessibility issues for users, e.g. those with hearing or visual impairments, so that they can engage in the same way as their peers without disabilities. Escape room designers and players completed an online questionnaire exploring the use of technology and the accessibility of escape rooms. Results show that accessibility remains a key challenge in the design and implementation of escape rooms, despite the inclusion of technology that could be used to improve the experience of users with disabilities. This presentation will explore the lack of accessibility within escape rooms and the potential for technology to bridge this gap.

Speaker Bio:

Dr Rachel Menzies is the Head of Undergraduate Studies for Computing at the University of Dundee and is the current SICSA Director of Education (https://www.sicsa.ac.uk/education/). She co-directs the UX’d research group (https://www.ux-d.co.uk/) and her research interests include user-centred design with marginalised user groups, such as users with disabilities, as well as exploring novel interfaces, data visualisation and CS education. Her most recent work focusses on accessibility in escape rooms, in particular how users with varied disabilities can access and enjoy the experience alongside typical users.

Marina Romanchikova (NPL): How good are our data? Measuring the data quality at National Physical Laboratory (School Seminar)

Event details

  • When: 12th March 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Abstract:

From mapping the spread of disease to monitoring climate change, data holds the key to solving some of the world’s biggest challenges. Dependable decisions rely on understanding the provenance and reliability of data. Historically, only a small fraction of the generated data was shared and re-used, while the majority of data were used once and then erased or archived. At NPL Data Science we are defining best practice in measurement data reuse and traceability by developing metadata standards and data storage structures to locate and interpret datasets and make them available for sharing, publication and data mining.

The talk will shed light on the most pressing issues in scientific data management, illustrated with examples from industrial and academic practice. It will present several NPL Data Science projects that focus on delivering confidence in data obtained from life science imaging, medicine, geosciences and fundamental physics.

Speaker Bio:

Dr Marina Romanchikova joined the NPL Data Science team in 2017 to work on data quality and metadata standards. She obtained an MSc in Medical Informatics at the University of Heidelberg, Germany, where she specialised in medical image processing and in the management of hospital information systems. In 2010 she received a PhD on Monte Carlo dosimetry for targeted radionuclide therapy from the Institute of Cancer Research in Sutton, UK. Marina worked for six years as a radiotherapy research physicist at Cambridge University Hospitals, where she developed methods for the curation and analysis of medical images.

Current interests

– Quantitative quality assessment of medical images and medical image segmentation
– Harmonisation of medical and healthcare data from heterogeneous sources
– Applications of machine learning in healthcare
– Automated data quality assurance

Lauren Roberts & Peter Michalák (Newcastle): Automating the Placement of Time Series Models for IoT Healthcare Applications (School Seminar)

Event details

  • When: 26th February 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Abstract:

There has been a dramatic growth in the number and range of Internet of Things (IoT) sensors that generate healthcare data. These sensors stream high-dimensional time series data that must be analysed in order to provide the insights into medical conditions that can improve patient healthcare. This raises both statistical and computational challenges, including where to deploy the streaming data analytics, given that a typical healthcare IoT system will combine a highly diverse set of components with very varied computational characteristics, e.g. sensors, mobile phones and clouds. Different partitionings of the analytics across these components can dramatically affect key factors such as the battery life of the sensors, and the overall performance. In this work we describe a method for automatically partitioning stream processing across a set of components in order to optimise for a range of factors including sensor battery life and communications bandwidth. We illustrate this using our implementation of a statistical model predicting the glucose levels of type II diabetes patients in order to reduce the risk of hyperglycaemia.
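As a toy illustration of the search space involved (all operators, costs and rates below are hypothetical, not the authors' system), the sketch brute-forces the placement of a three-stage analytics pipeline across sensor, phone and cloud, minimising a combined compute-energy and transmission cost, assuming results must ultimately reach the cloud:

```python
from itertools import combinations_with_replacement

# Hypothetical pipeline: (name, compute units, output bytes/sec after stage).
ops = [("filter", 1.0, 100.0), ("features", 5.0, 20.0), ("model", 20.0, 1.0)]
raw_rate = 500.0                      # bytes/sec produced by the raw sensor

devices = ["sensor", "phone", "cloud"]
cpu_cost = [10.0, 2.0, 0.1]           # energy per compute unit on each device
link_cost = [0.5, 0.05]               # cost/byte: sensor->phone, phone->cloud

def plan_cost(assignment):
    """Cost of running ops[i] on devices[assignment[i]]."""
    cost = sum(c * cpu_cost[d] for (_, c, _), d in zip(ops, assignment))
    for link in range(len(devices) - 1):
        # Data crossing this link = output of the last stage placed at or
        # before the upstream device (or the raw stream if none is).
        size = raw_rate
        for (_, _, out), d in zip(ops, assignment):
            if d <= link:
                size = out
        cost += size * link_cost[link]
    return cost

# Operators may only move downstream, so assignments are non-decreasing.
plans = combinations_with_replacement(range(len(devices)), len(ops))
best = min(plans, key=plan_cost)
print([devices[d] for d in best], plan_cost(best))
```

In this toy instance the cheapest plan filters on the sensor and ships the reduced stream to the cloud; changing the link or energy costs shifts the optimum, which is exactly the trade-off an automatic partitioner must navigate.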

Speaker Bios:

Lauren and Peter are final-year PhD students at the CDT in Cloud Computing for Big Data at Newcastle University. Peter has a background in Computer Engineering from the University of Žilina, Slovakia and a double degree in Computer Software Engineering from JAMK University of Applied Sciences, Jyväskylä, Finland. His research interests are in distributed event processing, edge computing and the Internet of Things, with a special focus on energy and bandwidth constraints. Lauren has an MMath degree from Newcastle University and her research interests lie in statistical modelling of time series data.

Quintin Cutts (Glasgow): Re-imagining software engineering education through the apprenticeship lens (School Seminar)

Event details

  • When: 19th February 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Abstract:

Apprenticeship degrees have sprung up so fast that there has been little time for us all to reflect on how this apparently new form of education, to universities at least, could significantly affect our educational offerings. The University of Glasgow has been undertaking some preparatory work for Skills Development Scotland prior to running its apprenticeship degree in software engineering, and this has afforded us some time to see what others nationally and internationally have been doing, to consider relevant aspects of the literature, and to consult with industry. One view we are developing of these degrees is as a true evolution of typical, largely campus-based, software engineering degrees towards a full-blown professional degree such as in medicine, where universities and hospitals are in real partnership over the training of doctors. In this talk, I will outline our thinking and raise a number of issues for discussion. In suggesting a closer relationship with industry in a talk in St Andrews, I do not of course miss the irony that industry accreditation was never (I believe) something that St Andrews was particularly bothered about – noting that my BSc (Hons) 1988 is not accredited!

Lewis McMillan (St Andrews): Parallel Computer Simulations of Light-Tissue Interactions for Applications in Medicine, Cosmetics Industry and Biophotonics Research (School Seminar)

Event details

  • When: 23rd April 2019 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Abstract:

Tissue ablation is a widely used treatment in both the cosmetic and medical sectors, for treating various diseases or improving cosmetic outcomes. We present our tissue ablation model, which can predict the depth of ablation and the surrounding thermal damage caused by the laser during ablation.

“Non-diffracting” beams have a multitude of uses in physics, from optical manipulation to improved microscopy light sources. For the first time, we show that these beams can be modelled using the Monte Carlo radiation transport method, allowing better insight into how they propagate in a turbid medium.

Both of these projects use the Monte Carlo radiation transport (MCRT) method to simulate light transport. The MCRT method is a powerful numerical method that can solve light transport through heavily scattering and absorbing media, such as biological tissue. The method is extremely flexible and can model arbitrary geometries and light sources. MCRT can also model the various micro-physics of the simulated medium, such as polarisation, fluorescence, and Raman scattering. This talk will give an overview of our group’s work, with particular focus on simulating tissue ablation and modelling “non-diffracting” beams.
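As a flavour of how MCRT works (a minimal sketch, not the group's code; the optical coefficients are made up), the snippet below random-walks photon packets through a scattering slab, sampling exponential free paths and isotropic scattering directions, and estimates the diffuse transmittance:

```python
import numpy as np

rng = np.random.default_rng(1)

mu_s, mu_a = 10.0, 0.1        # scattering / absorption coefficients (mm^-1)
mu_t = mu_s + mu_a            # total interaction coefficient
albedo = mu_s / mu_t          # fraction of the packet surviving an interaction
thickness = 1.0               # slab thickness (mm)
n_photons = 20_000

transmitted = 0.0
for _ in range(n_photons):
    z, uz, w = 0.0, 1.0, 1.0                     # depth, direction cosine, weight
    while True:
        step = -np.log(1.0 - rng.random()) / mu_t  # free path ~ Exponential(mu_t)
        z += uz * step
        if z >= thickness:                       # escaped through the far face
            transmitted += w
            break
        if z <= 0.0:                             # back-reflected out of the slab
            break
        w *= albedo                              # deposit the absorbed fraction
        if w < 1e-4:                             # crude termination of weak packets
            break
        uz = 2.0 * rng.random() - 1.0            # isotropic rescattering (cos uniform)

print(f"diffuse transmittance ~ {transmitted / n_photons:.3f}")
```

A production MCRT code adds full 3-D geometry, anisotropic phase functions (e.g. Henyey-Greenstein) and the micro-physics mentioned above, but the core sample, propagate, scatter loop is exactly this.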

Speaker Bio:

Lewis McMillan is a final-year physics PhD student at the University of St Andrews. His research interests are in using the Monte Carlo radiation transport method for various applications in medicine and biophotonics.