Daniel Sorin (Duke University): Designing Formally Verifiable Cache Coherence Protocols (School Seminar)

Abstract:
The cache coherence protocol is an important but notoriously complicated part of a multicore processor. Typical protocols are far too complicated to verify completely and thus industry relies on extensive testing in hopes of uncovering bugs. In this work, we propose a verification-aware approach to protocol design, in which we design scalable protocols such that they can be completely formally verified. Rather than innovate in verification techniques, we use existing verification techniques and innovate in the design of the protocols. We present two design methodologies that, if followed, facilitate verification of arbitrarily scaled protocols. We discuss the impact of the constraints that must be followed, and we highlight possible future directions in verification-aware microarchitecture.
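The style of complete formal verification the abstract refers to can be illustrated with a toy sketch (this is not the protocols or tools from the talk): exhaustively enumerate every reachable state of a simplified invalidation-based MSI protocol and check the single-writer/multiple-reader invariant in each one.

```python
from collections import deque

def successors(caches):
    """One-step transitions of a toy invalidation-based MSI protocol.
    Each cache is Invalid ('I'), Shared ('S') or Modified ('M')."""
    n = len(caches)
    for i in range(n):
        # Read by cache i: i loads the line in Shared; a Modified copy
        # anywhere is downgraded to Shared.
        yield tuple("S" if j == i or c == "M" else c
                    for j, c in enumerate(caches))
        # Write by cache i: i becomes Modified; every other copy is invalidated.
        yield tuple("M" if j == i else "I" for j in range(n))

def swmr_holds(caches):
    """Single-writer invariant: at most one Modified copy, and no Shared
    copy may coexist with it."""
    if "M" not in caches:
        return True
    return caches.count("M") == 1 and all(c in ("M", "I") for c in caches)

def verify(n_caches):
    """Breadth-first enumeration of the full reachable state space,
    checking the invariant in every state."""
    start = tuple("I" for _ in range(n_caches))
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        if not swmr_holds(state):
            return False, state  # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, len(seen)  # invariant holds everywhere reachable
```

Real protocols add transient states, directories and network reordering, which is what makes this search blow up; the point of the work described above is to design protocols so that exhaustive verification remains tractable at any scale.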

Speaker Bio:
Daniel J. Sorin is the Addy Professor of Electrical and Computer Engineering at Duke University. His research interests are in computer architecture, with a focus on fault tolerance, verification, and memory system design. He is the author of “Fault Tolerant Computer Architecture” and a co-author of “A Primer on Memory Consistency and Cache Coherence.” He is the recipient of a SICSA Distinguished Visiting Fellowship, a National Science Foundation Career Award, and Duke’s Imhoff Distinguished Teaching Award. He received a PhD and MS in electrical and computer engineering from the University of Wisconsin, and he received a BSE in electrical engineering from Duke University.

Event details

  • When: 26th September 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Felipe Meneguzzi (PUCRS): Plan Recognition in the Real World (School Seminar)

Abstract:
Plan and goal recognition is the task of inferring the plan and goal of an agent through the observation of its actions and its environment, and it has a number of applications in computer-human interaction, assistive technologies and surveillance.
Although approaches based on planning domain theories have produced a number of very accurate and effective recognition techniques, they often rely on assumptions of full observability and noise-free observations.
These assumptions are not necessarily true in the real world, regardless of the technique used to translate sensor data into symbolic logic-based observations.
In this work, we develop plan recognition techniques, based on classical planning domain theories, that can cope with observations that are both incomplete and noisy and show how they can be applied to sensor data processed through deep learning techniques.
We evaluate such techniques on a kitchen video dataset, bridging the gap between symbolic goal recognition and real-world data.

Speaker Bio:
Dr. Felipe Meneguzzi is a researcher on multiagent systems, normative reasoning and automated planning. He is currently an associate professor at Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS). Prior to that appointment he was a Project Scientist at the Robotics Institute at Carnegie Mellon University in the US. Felipe got his PhD at King’s College London in the UK and his undergraduate and masters degrees at PUCRS in Brazil. He received a 2016 Google Research Award for Latin America, and was one of four runners-up for the 2013 Microsoft Research Awards. His current research interests include plan recognition, hybrid planning and norm reasoning.

Slides from the talk

Event details

  • When: 19th September 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Mark Olleson (Bloomberg): Super-sized mobile apps: getting the foundations right (School Seminar)

Abstract:
An email client. An instant messenger. A real-time financial market data viewer and news reader. A portfolio viewer. A note taker, file manager, media viewer, flight planner, restaurant finder… All built into one secure mobile application. On 4 different mobile operating systems. Does this sound challenging?
Mark from Bloomberg’s Mobile team will discuss how conventional development tools and techniques scale poorly when faced with this challenge, and how Bloomberg tackles the problem.

Speaker Bio:
Mark Olleson is a software engineer working in Bloomberg’s Mobile Professional team. Mark started developing iOS apps around the time the original iPad launched, and has since worked on projects which share common characteristics: scale and complexity. Today he specialises in large-scale and cross-platform mobile-app technology.

Event details

  • When: 17th October 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Siobhán Clarke (Trinity College Dublin): Exploring Autonomous Behaviour in Open, Complex Systems (School Seminar)

Abstract:
Modern, complex systems are likely to execute in open environments (e.g., applications running over the Internet of Things), where changes are frequent and have the potential to cause significant negative consequences for the application. A better understanding of the dynamics in the environment will enable applications to better automate planning for change and remain resilient in the face of loss of data sources through, for example, mobility or battery loss. This talk explores our recent work on autonomous applications in such open, complex systems. The approaches include a brief look at early work on more static, multi-layer system and change modelling, through to multi-agent systems that learn and adapt to changes in the environment, and finally collaborative models for emergent behaviour detection and for resource sharing. I discuss the work in the context of smart city applications, such as transport, energy and emergency response.

Speaker Bio:
Siobhán Clarke is a Professor in the School of Computer Science and Statistics at Trinity College Dublin. She joined Trinity in 2000, having previously worked for over ten years as a software engineer for IBM. Her current research focus is on software engineering models for the provision of smart and dynamic software services to urban stakeholders, addressing challenges in the engineering of dynamic software in ad hoc, mobile environments. She has published over 170 papers, including in journals such as IEEE/ACM Transactions (TAAS, TSC, TSE, TECS, TMC, TODAES) and conference proceedings including ICSE, OOPSLA, AAMAS, ICSOC, SEAMS and SASO. She is a Science Foundation Ireland (SFI) Principal Investigator, exploring an Internet of Things middleware for adaptable, urban-scale software services.

Prof. Clarke is the founding Director of Future Cities, the Trinity Centre for Smart and Sustainable Cities, with contributors from a range of disciplines, including Computer Science, Statistics, Engineering, Social Science, Geography, Law, Business and the Health Sciences. She is also Director for Enable, a national collaboration between industry and seven Higher Education Institutes funded by both SFI and the industry partners, which is focused on connecting communities to smart urban environments through the Internet of Things. Enable links three SFI Research Centres: Connect, Insight and Lero, bringing together world-class research on future networks, data analytics and software engineering.

Prof. Clarke leads the School’s Distributed Systems Group, and was elected Fellow of Trinity College Dublin in 2006.

Event details

  • When: 29th November 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Stephen McKenna (Dundee): Recognising Interactions with Objects and People (School Seminar)

CANCELLED!

This talk has been postponed, due to the ongoing strike.

Abstract:

This talk describes work in our research group using computer vision along with other sensor modalities to recognise (i) actions in which people manipulate objects, and (ii) social interactions and their participants.

Activities such as those involved in food preparation involve interactions between hands, tools and manipulated objects that affect them in visually complex ways, making recognition of their constituent actions challenging. One approach is to represent properties of local visual features with respect to trajectories of tracked objects. We explore an example in which reference trajectories are provided by visually tracking embedded inertial sensors. Additionally, we propose a vision method using discriminative spatio-temporal superpixel groups, obtaining state-of-the-art results (compared with published results using deep neural networks) whilst employing a compact, interpretable representation.

Continuous analysis of social interactions from wearable sensor data streams has a range of potential applications in domains including healthcare and assistive technology. I will present our recent work on (i) detection of focused social interactions using visual and audio cues, and (ii) identification of interaction partners using face matching. By modifying the output activation function of a deep convolutional neural network during training, we obtain an improved representation for open-set face recognition.

Speaker Bio:

Prof. Stephen McKenna co-leads the Computer Vision and Image Processing (CVIP) group at the University of Dundee, where he is Chair of Computer Vision and Head of Research for Computing. His interests lie primarily in biomedical image analysis, computer vision, and applied machine learning.

Event details

  • When: 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Emma Hart (Edinburgh Napier): Lifelong Learning in Optimisation (School Seminar)

Abstract:

The previous two decades have seen significant advances in optimisation techniques that can quickly find optimal or near-optimal solutions to problem instances in many combinatorial optimisation domains. Despite many successful applications of these approaches, a common weakness exists: if the nature of the problems to be solved changes over time, then algorithms need, at best, to be periodically re-tuned. In the worst case, new algorithms may need to be periodically redeveloped. Furthermore, many approaches are inefficient, starting from a clean slate every time a problem is solved and therefore failing to exploit previously learned knowledge.

In contrast, in the field of machine-learning, a number of recent proposals suggest that learning algorithms should exhibit life-long learning, retaining knowledge and using it to improve learning in the future. I propose that optimisation algorithms should follow the same approach – looking to nature, we observe that the natural immune system exhibits many properties of a life-long learning system that could be exploited computationally in an optimisation framework. I will give a brief overview of the immune system, focusing on highlighting its relevant computational properties and then show how it can be used to construct a lifelong learning optimisation system. The system exploits genetic programming to continually evolve new optimisation algorithms, which form a continually adapting ensemble of optimisers. The system is shown to adapt to new problems, exhibit memory, and produce efficient and effective solutions when tested in both the bin-packing and scheduling domains.
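The actual system described above evolves its optimisers with genetic programming, which is not shown here; as a loose illustration only, the "continually adapting ensemble" idea can be sketched with a pool of hand-written bin-packing heuristics and a simple win-count memory of past performance:

```python
def first_fit(items, capacity):
    """Place each item into the first open bin with enough remaining space."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

def best_fit(items, capacity):
    """Place each item into the fullest open bin that still fits it."""
    bins = []
    for item in items:
        candidates = [b for b in bins if sum(b) + item <= capacity]
        if candidates:
            max(candidates, key=sum).append(item)
        else:
            bins.append([item])
    return bins

class HeuristicEnsemble:
    """A pool of optimisers; the ensemble remembers which member wins
    on each problem it sees (a crude stand-in for lifelong memory)."""
    def __init__(self, heuristics):
        self.heuristics = list(heuristics)
        self.wins = {h.__name__: 0 for h in self.heuristics}

    def solve(self, items, capacity):
        # Run every member and keep the packing that uses the fewest bins.
        results = [(len(h(items, capacity)), h) for h in self.heuristics]
        n_bins, winner = min(results, key=lambda r: r[0])
        self.wins[winner.__name__] += 1
        return n_bins
```

In the system described in the talk, the pool itself is continually regrown by genetic programming as new problem classes arrive, rather than being fixed as it is in this sketch.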

Speaker Bio:

Emma Hart is a Professor in Natural Computation at Edinburgh Napier University in Scotland, where she also directs the Centre for Algorithms, Visualisation and Evolving Systems. Prior to that, she received a degree in Chemistry from the University of Oxford and a PhD in Artificial Immune Systems for Optimisation and Learning from the University of Edinburgh.

Her research focuses on developing novel bio-inspired techniques for solving a range of real-world optimisation and classification problems, particularly through the application of hyper-heuristic approaches and genetic programming. Her recent research explores optimisation techniques which are capable of continuously improving through experience, as well as ensemble approaches to optimisation for solving large classes of problems.

She is Editor-in-Chief of the journal Evolutionary Computation (MIT Press) and an elected member of the ACM SIGEVO Executive Committee. She also edits SIGEVOlution, the magazine of SIGEVO. She was General Chair of PPSN 2016, and regularly acts as Track Chair at GECCO. She has recently given keynotes on lifelong optimisation at EURO 2016 (Poland) and IJCCI 2017 (Madeira).

Her work is funded both by national funding agencies (EPSRC) and by European programmes, where she has recently led projects in Fundamentals of Collective Adaptive Systems (FOCAS) and Self-Aware Systems (AWARE). She has worked with a range of real-world clients, including in the forestry, logistics and personnel-scheduling industries.

Event details

  • When: 14th November 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Jessie Kennedy (Edinburgh Napier): Visualization and Taxonomy (School Seminar)

Abstract:

This talk will consider the relationship between visualization and taxonomy from two perspectives. Firstly, how visualization can aid understanding the process of taxonomy, specifically biological taxonomy and the visualization challenges this poses. Secondly, the role of taxonomy in understanding and making sense of the growing field of visualization will be discussed and the challenges facing the visualization community in making this process more rigorous will be considered.

Speaker Bio:

Jessie joined Edinburgh Napier University in 1986 as a lecturer, and was promoted to Senior Lecturer, Reader, and then Professor in 2000. Thereafter she held the post of Director of the Institute for Informatics and Digital Innovation from 2010-14, and is currently Dean of Research and Innovation for the University.

Jessie has published widely, with over 100 peer-reviewed publications and over £2 million in research funding from a range of bodies, including EPSRC, BBSRC, the National Science Foundation, and KTP, and has had 13 PhD students complete. She has been programme chair, programme committee member and organiser of many international conferences, and a reviewer and panel member for many national and international computer science funding bodies; she became a member of the EPSRC Peer Review College in 1996 and is a Fellow of the British Computer Society.

Jessie has a long-standing record of contribution to inter-disciplinary research, working to further biological research through the application of novel computing technology.

Her research in the areas of user interfaces to databases and data visualisation in biology contributed to the establishment of the field of biological visualisation. She hosted the first biological visualisation workshop at the Royal Society of Edinburgh in 2008, was an invited speaker at a BBSRC workshop on Challenges in Biological Visualisation in 2010, was a founding member of the International Symposium in Biological Visualisation – being Programme Chair in 2011, General Chair in 2012 and 2013 – and steering committee member since 2014.

She has been keynote speaker at related international conferences and workshops, such as VIZBI, the International Visualisation conference and BioIT World, and is currently leading a BBSRC network on biological visualisation.

Her research in collaboration with taxonomists at the Royal Botanic Gardens, Edinburgh, produced a data model for representing differing taxonomic opinions in Linnaean classification. This work led to collaboration on a large USA-funded project with ecologists from six US universities and resulted in a data standard for the exchange of biodiversity data that has been adopted by major global taxonomic and biodiversity organisations.

Event details

  • When: 7th November 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Barnaby Martin (Durham): The Complexity of Quantified Constraints (School Seminar)

Abstract:

We elaborate the complexity of the Quantified Constraint Satisfaction Problem, QCSP(A), where A is a finite idempotent algebra. Such a problem is either in NP or is co-NP-hard, and the borderline is given precisely according to whether A enjoys the polynomially-generated powers (PGP) property. This reduces the complexity classification problem for QCSPs to that of CSPs, modulo that co-NP-hard cases might have complexity rising up to PSPACE-complete. Our result requires infinite languages, but in this realm represents the proof of a slightly weaker form of a conjecture for QCSP complexity made by Hubie Chen in 2012. The result relies heavily on the algebraic dichotomy between PGP and exponentially-generated powers (EGP), proved by Dmitriy Zhuk in 2015, married carefully to previous work of Chen.

Event details

  • When: 24th October 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Maja Popović (Humboldt-Universität zu Berlin): (Dis)similarity Metrics for Texts (School Seminar)

Abstract:
Natural language processing (NLP) is a multidisciplinary field closely related to linguistics, machine learning and artificial intelligence. It comprises a number of different subfields dealing with different kinds of analysis and/or generation of natural language texts. All these methods and approaches need some kind of evaluation, i.e. comparison of the obtained result with a given gold standard. For tasks dealing with text generation (such as speech recognition or machine translation), a comparison between two texts has to be carried out. This is usually done either by counting matched words or word sequences (which produces a similarity score) or by calculating edit distance, i.e. the number of operations needed to transform the generated word sequence into a desired word sequence (which produces a “dissimilarity” score called “error rate”). The talk will give an overview of advantages, disadvantages and challenges related to this type of metric, mainly concentrating on machine translation (MT) but also relating to some other NLP tasks.
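The edit-distance-based "error rate" described above can be made concrete with a short sketch (an illustration only, not any specific tool from the talk): count the substitutions, insertions and deletions needed to turn a generated word sequence into a reference sequence, then normalise by the reference length.

```python
def edit_distance(reference, hypothesis):
    """Levenshtein distance between two sequences, allowing
    substitution, insertion and deletion, each at cost 1."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # distances against an empty reference prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j - 1] + cost,  # match / substitution
                          prev[j] + 1,         # deletion
                          curr[j - 1] + 1)     # insertion
        prev = curr
    return prev[n]

def word_error_rate(reference, hypothesis):
    """'Dissimilarity' score: edit operations needed to transform the
    generated word sequence into the reference, per reference word."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)
```

Matching-based metrics such as BLEU or chrF work the other way around, scoring overlap of word or character sequences to produce a similarity score instead of an error rate.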

Speaker bio:
Maja Popović graduated at the Faculty of Electrical Engineering, University of Belgrade and continued her studies at the RWTH Aachen, Germany, where she obtained her PhD with the thesis “Machine Translation: Statistical Approach with Additional Linguistic Knowledge”. After that, she continued her research at the DFKI Institute and thereafter at the Humboldt University of Berlin, mainly related to various approaches for evaluation of machine translation. She has developed two open source evaluation tools, (i) Hjerson, a tool for automatic translation error classification, and (ii) chrF, an automatic metric for machine translation evaluation based on character sequence matching.

Event details

  • When: 29th September 2017 13:00 - 14:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar