SACHI Seminar – Florian Echtler (Bauhaus-Universität Weimar): Instant Interaction

Title:

Instant Interaction

Abstract:

Although Mark Weiser’s original vision of “ubiquitous computing” has all but arrived thanks to the wide availability of smartphones, tablets and interactive screens, the envisioned ease of use is still mostly lacking. This is particularly apparent when we consider interaction and collaboration between multiple persons and their personal mobile devices. These issues can be partly mitigated by relying on cloud services for data exchange, but this approach opens up several other issues regarding data security and privacy. In this talk, I will present the concept of “instant interaction”, which aims to enable ad-hoc interaction between multiple persons, their individual mobile devices, and fixed infrastructure, without requiring any prior exchange of account data or PINs. The only prerequisite for immediate interaction is physical proximity. Examples from my current research will illustrate this concept.
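To make the idea concrete, here is a minimal Python sketch of proximity-limited discovery, assuming UDP broadcast on a shared local network segment; the port and message format are invented for illustration, and this is not Echtler’s actual protocol:

    import socket

    # Illustrative sketch only: devices on the same local network segment
    # discover each other via UDP broadcast -- no accounts or PINs, just
    # (network-level) proximity. Port and message format are invented.
    PORT = 50000

    def announce(name):
        """Broadcast our presence to every device on the local segment."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(("HELLO " + name).encode(), ("255.255.255.255", PORT))

    def listen(timeout=5.0):
        """Collect announcements from nearby devices for `timeout` seconds."""
        peers = []
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind(("", PORT))
            s.settimeout(timeout)
            try:
                while True:
                    data, addr = s.recvfrom(1024)
                    peers.append((data.decode(), addr[0]))
            except socket.timeout:
                pass
        return peers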

Speaker Biography:

Florian Echtler is junior professor for mobile media at Bauhaus-Universität Weimar. His research interests focus on interaction and collaboration using peer-to-peer communication technologies available in today’s mobile devices. Additional topics covered by his research include computer vision for HCI applications, sensor technology and gesture recognition.

Event details

  • When: 16th November 2017 15:00 - 16:00
  • Where: Cole 1.33b
  • Format: Seminar

“A Decentralised Multimodal Integration of Social Signals: A Bio-Inspired Approach” by Esma Benssassi and “Plug and Play Bench: Simplifying Big Data Benchmarking Using Containers” by Sheriffo Ceesay

Esma’s abstract

The ability to integrate information from different sensory modalities in a social context is crucial for understanding social cues and for meaningful social interaction. Recent research has focused on multi-modal integration of social signals from visual, auditory, haptic or physiological data. Different data fusion techniques have been designed and developed; however, most have not achieved significant accuracy improvements in recognising social cues compared to uni-modal social signal recognition. One possible limitation is that these existing approaches lack sufficient capacity to model the various types of interactions between modalities, and have not been able to leverage the advantages of multi-modal signals by treating each as complementary to the others. We introduce ideas for a decentralised model of social signal integration inspired by computational models of multi-sensory integration in neuroscience and of the perception of social signals in the human brain.
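For orientation, the classical neuroscience account of multi-sensory integration that inspires such work is reliability-weighted cue combination; a minimal Python sketch of that textbook model (an illustration only, not the model proposed in the talk):

    def fuse(estimates, variances):
        """Reliability-weighted (inverse-variance) fusion: each modality
        contributes in proportion to its reliability 1/variance -- the
        classical account of multi-sensory integration in the brain."""
        weights = [1.0 / v for v in variances]
        total = sum(weights)
        fused = sum(w * e for w, e in zip(weights, estimates)) / total
        return fused, 1.0 / total  # fused estimate beats any single cue

    # e.g. a confident visual cue (0.8) and a noisy audio cue (0.2):
    print(fuse([0.8, 0.2], [0.01, 0.09]))  # fused value lies near the visual cue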

Sheriffo’s abstract

The recent boom in big data, coupled with the challenges of processing and storing it, gave rise to distributed data processing and storage paradigms such as MapReduce, Spark, and NoSQL databases. With the advent of cloud computing, processing and storing such massive datasets on clusters of machines is now feasible. However, there are few tools and approaches that users can rely on to gauge and understand the performance of their big data applications deployed locally on clusters or in the cloud. Researchers have started exploring this area by providing benchmarking suites suitable for big data applications. However, many of these tools are fragmented, complex to deploy and manage, and do not provide transparency with respect to the monetary cost of benchmarking an application.

In this talk, I will present Plug and Play Bench (PAPB, https://github.com/sneceesay77/papb): an infrastructure-aware abstraction built to integrate and simplify the process of big data benchmarking. PAPB automates the tedious process of installing, configuring and executing common big data benchmark workloads by containerising the tools and settings based on the underlying cluster deployment framework. Our proof-of-concept implementation uses HiBench as the benchmark suite, HDP as the cluster deployment framework and Azure as the cloud platform. The talk will also illustrate the inclusion of cost metrics based on the underlying Microsoft Azure cloud platform.
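In its simplest form, the cost metric reduces to runtime multiplied by cluster size and the hourly VM price; a back-of-the-envelope Python sketch (the price below is a placeholder, not a real Azure quote):

    def benchmark_cost(runtime_seconds, nodes, price_per_node_hour):
        """Monetary cost of one benchmark run on a pay-per-hour cloud
        cluster: runtime x cluster size x hourly VM price."""
        return (runtime_seconds / 3600.0) * nodes * price_per_node_hour

    # e.g. a 40-minute HiBench run on an 8-node cluster at $0.50/node-hour:
    print(round(benchmark_cost(2400, 8, 0.50), 2))  # -> 2.67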

Event details

  • When: 26th October 2017 13:00 - 14:00
  • Where: Cole 1.33a
  • Series: Systems Seminars Series
  • Format: Seminar

SRG Seminar: “Adaptive Multisite Computation Offloading in Mobile Clouds” by Dawand Sulaiman and “Topological Ranking-Based Resource Scheduling for Multi-Accelerator Systems” by Teng Yu

Dawand’s abstract

Using cloud-hosted infrastructure to overcome the resource constraints of mobile devices is known as Mobile Cloud Computing (MCC): applications run partially on the device and partially on a remote cloud instance, thereby overcoming device-specific resource constraints. However, as smartphones and tablets gain more CPU power and longer battery life, the meaning of MCC is gradually changing. Instead of being fully dependent on the cloud, a number of nearby devices can coordinate and distribute content and resources in a decentralised manner; this is known as Mobile Ad hoc Cloud Computing. Mobile devices with less computational power and shorter battery life can leverage nearby mobile devices to run resource-intensive applications. More efficient and reliable methodologies therefore need to be explored for resource-hungry and real-time applications such as face recognition, data-intensive processing, and augmented reality.
We present a unified framework that allows each mobile device within the shared environment to intelligently offload its computation to other external platforms. For each mobile device, it is important to make the offloading decision based on network conditions, the load of other machines, and the device’s own constraints (e.g., mobility and battery). Moreover, to achieve a globally optimal completion time for tasks from all the mobile devices, a task scheduling solution is needed that schedules offloaded tasks in real time. The offloading decision engine needs to adapt to dynamic changes in both the host device and the connected nearby and remote devices.
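As a toy illustration of such a decision engine (a sketch, not the framework itself), estimated completion times per target can be compared directly, with transfer cost folded into the estimate; all names and numbers below are invented:

    def choose_target(task_cycles, targets):
        """Pick the target with the lowest estimated completion time.
        `targets` maps a name to (available_cpu_hz, transfer_seconds),
        where transfer_seconds is the time to ship the task's data
        there (0 for local execution)."""
        def est_time(name):
            cpu_hz, transfer_seconds = targets[name]
            return task_cycles / cpu_hz + transfer_seconds
        return min(targets, key=est_time)

    targets = {
        "local":  (1.0e9, 0.0),  # slow CPU, no transfer cost
        "nearby": (2.5e9, 0.4),  # peer device reached over Wi-Fi Direct
        "cloud":  (8.0e9, 2.0),  # fast VM, but data must cross the WAN
    }
    print(choose_target(3.0e9, targets))  # -> "nearby" (1.6s beats 3.0s and 2.4s)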

Teng’s abstract

Accelerators are becoming increasingly prevalent in distributed computation. FPGAs have been shown to be fast and power-efficient for particular tasks, yet scheduling on multi-accelerator systems is challenging when workloads vary significantly in granularity, in terms of task size and/or the number of computational units required.
We present a novel approach for dynamically scheduling tasks on networked multi-accelerator systems which maintains high performance, even in the presence of irregular jobs. Our topological ranking-based scheduling allows realistic irregular workloads to be processed while maintaining a significantly higher level of performance than existing schedulers.
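The specific ranking scheme is the subject of the talk; as a generic illustration of rank-based placement on heterogeneous accelerators (invented here, and far simpler than the actual scheduler), devices can be ranked by capacity and each task placed on the smallest device that fits, keeping large devices free for large, irregular jobs:

    def schedule(tasks, fpgas):
        """Best-fit-by-rank sketch: rank FPGAs by capacity (ascending) and
        place each task on the smallest device that can hold it, so large
        devices stay free for large, irregular jobs."""
        ranked = sorted(fpgas, key=fpgas.get)  # device names, smallest first
        free = dict(fpgas)
        placement = {}
        for task, units in sorted(tasks.items(), key=lambda kv: kv[1]):
            for dev in ranked:
                if free[dev] >= units:
                    placement[task] = dev
                    free[dev] -= units
                    break
        return placement

    print(schedule({"small": 2, "huge": 90}, {"fpga0": 10, "fpga1": 100}))
    # -> {'small': 'fpga0', 'huge': 'fpga1'}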

Event details

  • When: 12th October 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar

Semantics for probabilistic programming – Dr Chris Heunen

Statistical models in, for example, machine learning are traditionally expressed as some form of flow chart. Writing sophisticated models succinctly is much easier in a fully fledged programming language, where the programmer can rely on generic inference algorithms instead of having to craft one for each model. Several such higher-order functional probabilistic programming languages exist, but their semantics, and hence their correctness, are not clear. The problem is that the standard semantics of probability theory, given by measurable spaces, does not support function types. I will describe how to get around this.
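The obstruction alluded to here is a classical result, summarised in the following note (my gloss, for orientation only):

    % Aumann's classical result: the category Meas of measurable spaces
    % is not cartesian closed. There is no sigma-algebra on the set
    % Meas(R, R) of measurable functions making the evaluation map
    \[
      \mathrm{ev} \colon \mathrm{Meas}(\mathbb{R},\mathbb{R}) \times \mathbb{R} \to \mathbb{R},
      \qquad \mathrm{ev}(f,x) = f(x),
    \]
    % measurable -- so the function type R -> R has no interpretation
    % as a measurable space.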

Event details

  • When: 6th October 2017 12:00
  • Where: Cole 1.33b

DLS: What Every Computer Scientist Should Know About Computer History

Prof Ursula Martin

Update: Lectures will be live streamed.

Distinguished Lecture Series, Semester 1, 2017-18

Biography:

Professor Ursula Martin CBE FREng FRSE joined the University of Oxford as Professor of Computer Science in 2014, and is a member of the Mathematical Institute.  She holds an EPSRC Established Career Fellowship, and a Senior Research Fellowship at Wadham College. Her research, initially in algebra, logic and the use of computers to create mathematical proofs, now focuses on wider social and cultural approaches to understanding the success and impact of current and historical computer science research.

Before joining Oxford she worked at  Queen Mary University of London, where she was Vice-Principal for Science and Engineering (2005-2009), and Director of the impactQM project (2009-2012), an innovative knowledge transfer initiative. She serves on numerous international committees, including the Royal Society’s Diversity Committee and the UK Defence Science Advisory Council.  She worked  at the University of St Andrews from 1992 – 2002, as only its second female professor, and its first in over 50 years. She holds an MA in Mathematics from Cambridge, and a PhD in Mathematics from Warwick.

Timetable:

09:30 Introduction

09:35 Lecture 1: The early history of computing: Ada Lovelace, Charles Babbage, and the history of programming

10:35 Break (refreshments provided)

11:15 Lecture 2: Case study: Alan Turing, Grace Hopper, and the history of getting things right

12:15 Lunch (not provided)

14:30 Welcome by the Principal, Prof Sally Mapstone

14:35 Lecture 3: What do historians of computing do, and why is it important for computer scientists today

15:30 Close

Lecture 1. The early history of computing: Ada Lovelace, Charles Babbage, and the history of programming

In 1843 Ada Lovelace published a remarkable paper in which she explained Charles Babbage’s designs for his Analytical Engine. Had it been built, it would have had, in principle, the same capabilities as a modern general-purpose computer. Lovelace’s paper is famous for its insights into more general questions, as well as for its detailed account of how the machine performed its calculations, illustrated with a large table which is often called, incorrectly, the “first program”. I’ll talk about the wider context: why people were interested in computing engines, and some of the other work going on at the time, for example Babbage’s remarkable hardware description language. I’ll look at different explanations for why Babbage’s ideas did not take off, and give a quick overview of what did happen over the next 100 years, before the invention of the first digital computers.

Lecture 2. Case study: Alan Turing, Grace Hopper, and the history of getting things right

Getting software right has been a theme of programming from the days of Babbage onwards. I’ll look at the work of pioneers Alan Turing and Grace Hopper, and talk about the long interaction of computer science with logic, which has led to better programming languages, new ways to prove programs correct, and sophisticated mathematical theories of importance in their own right. I’ll look at the history of the age-old debate about whether computer science needs mathematics to explain its main ideas, or whether practical skills, building things and making things simple for the user are more important.

Lecture 3: What do historians of computing do, and why is it important for computer scientists today

When people think about computer science, they think about ideas and technologies that are transforming the future: smaller, faster, smarter connected devices, powered by AI and big data. Looking at the past can then seem like a bit of a waste of time. In this lecture I’ll look at what historians do and why it is important; how we get history wrong; and, in particular, how we often miss the contribution of women. I’ll illustrate my talk with my own work on Ada Lovelace’s papers, to show how detailed historical work is needed to debunk popular myths: it is often claimed that Lovelace’s talent was “poetical science” rather than maths, but I’ve shown that she was a gifted, perceptive and knowledgeable mathematician. I’ll explain how the historian’s techniques of getting it right can help us get to grips with topical problems like “fake news”, and give us new ways of thinking about the future.

Event details

  • When: 10th October 2017 09:30 - 16:00
  • Where: Byre Theatre
  • Series: Distinguished Lectures Series
  • Format: Distinguished lecture

SRG Seminar: “Simulating a pulmonary tuberculosis infection using a network-based metapopulation model” by Michael Pitcher and “A Fake City of People: Modeling the Co-evolution of City and Citizens” by Xue Guo

Michael Pitcher’s abstract

Tuberculosis (TB) is one of the world’s most deadly infectious diseases, claiming over 1.4 million lives every year. TB infections typically affect the lungs and treatment regimens are long and arduous, requiring at least 6 months of daily chemotherapy. Previous investigations have shown TB to have unique localisations within the lung at varying stages of infection. The initial implant and the primary lesion which arises from it can occur anywhere in the lungs, with a greater probability of occurrence in the lower to middle regions of the lung. However, reactivation of a previously latent form of disease always involves cavitation of the tissue at the apical regions. This difference in spatial location of TB infections suggests two important factors: i) bacteria are able to disseminate across the lung in some manner, and ii) the environment at the top of the lung has some properties that make it preferential for TB replication.

In this project, we aim to build a whole-organ model of the lung and surrounding lymphatics which incorporates both bacterial dissemination possibilities and the spatial heterogeneity of lung tissue, in order to understand their impact on TB. We develop ComMeN (Compartmentalised Metapopulation Network), a Python framework designed to allow the easy creation of complex network-based metapopulations with spatial heterogeneity, upon which interaction dynamics can be applied, with discrete event modelling using the Gillespie algorithm. We then extend this framework to create a TB-specific model, PTBComMeN, which models a TB infection occurring over lung tissue divided into patches, each of which has spatial attributes appropriate to its position in the lung, such as ventilation, perfusion and oxygen tension. Events dictate the interactions between cells and bacteria and their interaction with the environment, with dissemination occurring along the edges joining patches in the lung network. The model allows experimentation into the effects that spatial heterogeneities and bacterial dissemination may have on the progression of disease, and is designed to provide insight into the factors that result in long treatment times for TB.
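The simulation engine at the core of such a framework is the Gillespie direct method; a minimal Python sketch of the algorithm (illustrative only, not ComMeN’s actual code, and with invented rates):

    import random

    def gillespie(state, events, t_end):
        """Minimal Gillespie direct method. `events` is a list of
        (rate_fn, apply_fn) pairs: rate_fn(state) returns the current
        propensity and apply_fn(state) mutates the state in place."""
        t = 0.0
        while t < t_end:
            rates = [rate(state) for rate, _ in events]
            total = sum(rates)
            if total == 0.0:
                break                       # no event can fire any more
            t += random.expovariate(total)  # waiting time ~ Exp(sum of rates)
            pick = random.uniform(0.0, total)
            for rate, (_, apply_fn) in zip(rates, events):
                pick -= rate
                if pick <= 0.0:
                    apply_fn(state)
                    break
        return state

    # Toy two-patch dissemination (rates invented): bacteria replicate in
    # the basal patch and occasionally disseminate to the apical patch.
    def replicate(s): s["base"] += 1
    def disseminate(s): s["base"] -= 1; s["apex"] += 1

    state = {"base": 10, "apex": 0}
    events = [(lambda s: 0.5 * s["base"], replicate),
              (lambda s: 0.1 * s["base"], disseminate)]
    print(gillespie(state, events, t_end=2.0))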

Xue Guo’s abstract

By the year 2050, the global urban population will reach 2.5 billion. While the fast pace of urbanisation initially brings improved quality of life, the surging population will inevitably lead to unique urban issues. Emerging research fields, with the aim of creating smarter cities, plan to counteract these problems. To facilitate this research, we need solid models to generate ‘fake cities’, which cannot easily be produced by existing random graph algorithms due to spatial constraints. We therefore propose a new model for the co-evolution of a city and its population, which can show how the street network forms, how the population spreads, and how settlements emerge and diminish. The new model will be a random city generator, which could be used to backtrack the history and predict the future of a city, or to act as a test-case generator for the validation and evaluation of urban optimisation algorithms.
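As a hint of why spatial constraints matter, here is a toy spatial growth sketch in Python (invented for illustration, not the proposed model): new sites settle near existing ones and connect to their nearest neighbour, producing geometry that a non-spatial random graph cannot reproduce:

    import math, random

    def grow_city(steps, spread=1.0):
        """Toy spatial growth: each new site appears near a randomly chosen
        existing site and is joined by a 'street' to its nearest neighbour."""
        nodes = [(0.0, 0.0)]
        edges = []
        for _ in range(steps):
            ox, oy = random.choice(nodes)
            new = (ox + random.gauss(0, spread), oy + random.gauss(0, spread))
            nearest = min(nodes, key=lambda n: math.dist(n, new))
            nodes.append(new)
            edges.append((nearest, new))
        return nodes, edges

    nodes, edges = grow_city(200)
    print(len(nodes), "sites,", len(edges), "street segments")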

Event details

  • When: 28th September 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: School Seminar Series, Systems Seminars Series
  • Format: Seminar

Daniel Sorin (Duke University): Designing Formally Verifiable Cache Coherence Protocols (School Seminar)

Abstract:
The cache coherence protocol is an important but notoriously complicated part of a multicore processor. Typical protocols are far too complicated to verify completely and thus industry relies on extensive testing in hopes of uncovering bugs. In this work, we propose a verification-aware approach to protocol design, in which we design scalable protocols such that they can be completely formally verified. Rather than innovate in verification techniques, we use existing verification techniques and innovate in the design of the protocols. We present two design methodologies that, if followed, facilitate verification of arbitrarily scaled protocols. We discuss the impact of the constraints that must be followed, and we highlight possible future directions in verification-aware microarchitecture.
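The flavour of complete formal verification can be shown at toy scale (this sketch is mine, not the talk’s methodology): exhaustively enumerate every reachable state of a two-cache MSI-style protocol and check the coherence invariant in each state, which is the kind of check model checkers automate:

    from collections import deque

    def step(state, ev):
        """Apply a read ('r') or write ('w') by cache ev[1]; return next state."""
        c = list(state)
        i = int(ev[1]); j = 1 - i
        if ev[0] == "w":               # write: take M, invalidate the other
            c[i], c[j] = "M", "I"
        else:                          # read: downgrade a remote M, then share
            if c[j] == "M":
                c[j] = "S"
            if c[i] == "I":
                c[i] = "S"
        return tuple(c)

    def verify():
        seen, frontier = set(), deque([("I", "I")])
        while frontier:
            s = frontier.popleft()
            if s in seen:
                continue
            seen.add(s)
            assert not (s[0] == "M" and s[1] == "M"), f"two writers in {s}"
            assert not ("M" in s and "S" in s), f"writer beside sharer in {s}"
            for ev in ("r0", "r1", "w0", "w1"):
                frontier.append(step(s, ev))
        return seen

    print(sorted(verify()))  # every reachable state satisfies the invariant

The talk’s methodologies aim at the real challenge this toy hides: designing protocols so that such a complete check stays tractable at arbitrary scale.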

Speaker Bio:
Daniel J. Sorin is the Addy Professor of Electrical and Computer Engineering at Duke University. His research interests are in computer architecture, with a focus on fault tolerance, verification, and memory system design. He is the author of “Fault Tolerant Computer Architecture” and a co-author of “A Primer on Memory Consistency and Cache Coherence.” He is the recipient of a SICSA Distinguished Visiting Fellowship, a National Science Foundation Career Award, and Duke’s Imhoff Distinguished Teaching Award. He received a PhD and MS in electrical and computer engineering from the University of Wisconsin, and he received a BSE in electrical engineering from Duke University.

Event details

  • When: 26th September 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Felipe Meneguzzi (PUCRS): Plan Recognition in the Real World (School Seminar)

Abstract:
Plan and goal recognition is the task of inferring an agent’s plan and goal by observing its actions and its environment, and has a number of applications in human-computer interaction, assistive technologies and surveillance. Although approaches based on planning domain theories have yielded a number of very accurate and effective techniques, they often rely on assumptions of full observability and noise-free observations. These assumptions are not necessarily true in the real world, regardless of the technique used to translate sensor data into symbolic logic-based observations. In this work, we develop plan recognition techniques, based on classical planning domain theories, that can cope with observations that are both incomplete and noisy, and show how they can be applied to sensor data processed through deep learning techniques. We evaluate these techniques on a kitchen video dataset, bridging the gap between symbolic goal recognition and real-world data.
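A minimal goal-recognition sketch in Python (an illustration of the general setting, not the technique from the talk): score each candidate goal by how much of the observation sequence its plan explains, so missing observations degrade scores gracefully instead of breaking recognition outright:

    def rank_goals(observations, plan_library):
        """Score each goal by the fraction of observed actions that its
        plan explains; higher scores mean more likely goals."""
        scores = {}
        for goal, plan in plan_library.items():
            matched = sum(1 for obs in observations if obs in plan)
            scores[goal] = matched / max(len(observations), 1)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    plans = {
        "make_tea":    ["boil_water", "get_cup", "add_teabag", "pour"],
        "make_coffee": ["boil_water", "get_cup", "grind_beans", "pour"],
    }
    # A noisy, incomplete observation sequence, e.g. from a video pipeline:
    print(rank_goals(["boil_water", "grind_beans"], plans))
    # -> make_coffee scores 1.0, make_tea 0.5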

Speaker Bio:
Dr. Felipe Meneguzzi is a researcher on multiagent systems, normative reasoning and automated planning. He is currently an associate professor at Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS). Prior to that appointment he was a Project Scientist at the Robotics Institute at Carnegie Mellon University in the US. Felipe received his PhD from King’s College London in the UK, and his undergraduate and master’s degrees from PUCRS in Brazil. He received a 2016 Google Research Award for Latin America, and was one of four runners-up for the 2013 Microsoft Research Awards. His current research interests include plan recognition, hybrid planning and norm reasoning.

Slides from the talk

Event details

  • When: 19th September 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Mark Olleson (Bloomberg): Super-sized mobile apps: getting the foundations right (School Seminar)

Abstract:
An email client. An instant messenger. A real-time financial market data viewer and news reader. A portfolio viewer. A note taker, file manager, media viewer, flight planner, restaurant finder… All built into one secure mobile application. On 4 different mobile operating systems. Does this sound challenging?
Mark from Bloomberg’s Mobile team will discuss how conventional development tools and techniques scale poorly when faced with this challenge, and how Bloomberg tackles the problem.

Speaker Bio:
Mark Olleson is a software engineer working in Bloomberg’s Mobile Professional team. Mark started developing iOS apps around the time the original iPad launched, and has since worked on projects which share common characteristics: scale and complexity. Today he specialises in large-scale and cross-platform mobile-app technology.

Event details

  • When: 17th October 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Siobhán Clarke (Trinity College Dublin): Exploring Autonomous Behaviour in Open, Complex Systems (School Seminar)

Abstract:
Modern, complex systems are likely to execute in open environments (e.g., applications running over the Internet of Things), where changes are frequent and have the potential to cause significant negative consequences for the application. A better understanding of the dynamics in the environment will enable applications to better automate planning for change and remain resilient in the face of loss of data sources through, for example, mobility or battery loss. This talk explores our recent work on autonomous applications in such open, complex systems. The approaches include a brief look at early work on more static, multi-layer system and change modelling, through to multi-agent systems that learn and adapt to changes in the environment, and finally collaborative models for emergent behaviour detection and for resource sharing. I discuss the work in the context of smart city applications, such as transport, energy and emergency response.

Speaker Bio:
Siobhán Clarke is a Professor in the School of Computer Science and Statistics at Trinity College Dublin. She joined Trinity in 2000, having previously worked for over ten years as a software engineer for IBM. Her current research focus is on software engineering models for the provision of smart and dynamic software services to urban stakeholders, addressing challenges in the engineering of dynamic software in ad hoc, mobile environments. She has published over 170 papers, including in journals such as IEEE/ACM Transactions (TAAS, TSC, TSE, TECS, TMC, TODAES) and in conference proceedings including ICSE, OOPSLA, AAMAS, ICSOC, SEAMS and SASO. She is a Science Foundation Ireland (SFI) Principal Investigator, exploring an Internet of Things middleware for adaptable, urban-scale software services.

Prof. Clarke is the founding Director of Future Cities, the Trinity Centre for Smart and Sustainable Cities, with contributors from a range of disciplines, including Computer Science, Statistics, Engineering, Social Science, Geography, Law, Business and the Health Sciences. She is also Director for Enable, a national collaboration between industry and seven Higher Education Institutes funded by both SFI and the industry partners, which is focused on connecting communities to smart urban environments through the Internet of Things. Enable links three SFI Research Centres: Connect, Insight and Lero, bringing together world-class research on future networks, data analytics and software engineering.

Prof. Clarke leads the School’s Distributed Systems Group, and was elected Fellow of Trinity College Dublin in 2006.

Event details

  • When: 29th November 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar