- When: 1st November 2018 13:00 - 14:00
- Where: Cole 1.33b
- Series: Systems Seminars Series
- Format: Seminar, Talk
Record linkage is the process of identifying records that refer to the same real-world entities in situations where entity identifiers are unavailable. Records are linked on the basis of similarity between common attributes, with every pair classified as a link or non-link depending on its degree of similarity. Record linkage is usually performed in three steps: groups of similar candidate records are first identified using indexing, pairs within the same group are then compared in more detail, and finally the pairs are classified. Even state-of-the-art indexing techniques, such as Locality Sensitive Hashing, have potential drawbacks: they may fail to group together some true matching records with high similarity, and conversely they may group records with low similarity, leading to high computational overhead. We propose using metric space indexing to perform complete record linkage, which yields a parameter-free process combining indexing, comparison and classification into a single step. Our experimental evaluation on real-world datasets from several domains shows that linkage using metric space indexing can yield better quality than current indexing techniques, with similar execution cost, without the need for domain knowledge or trial and error to configure the process.
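As a concrete illustration of the triangle-inequality pruning that metric space indexing relies on, the sketch below links string records using a single pivot and edit distance. It is a toy example of the general principle, not the indexing structure evaluated in the talk; the records, radius and pivot choice are all invented.

```python
# Minimal sketch of pivot-based metric filtering for record linkage.
# Illustrates triangle-inequality pruning in a metric space only; the
# talk's actual index is more sophisticated.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance (a metric)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def link_records(records, radius, pivot=None):
    """Return candidate pairs within `radius` of each other, pruning
    comparisons with the triangle inequality around a single pivot."""
    pivot = pivot or records[0]
    dist_to_pivot = {r: edit_distance(pivot, r) for r in records}
    links = []
    for i, r in enumerate(records):
        for s in records[i + 1:]:
            # |d(p,r) - d(p,s)| is a lower bound on d(r,s); skip if too far.
            if abs(dist_to_pivot[r] - dist_to_pivot[s]) > radius:
                continue
            if edit_distance(r, s) <= radius:
                links.append((r, s))
    return links

print(link_records(["smith", "smyth", "jones", "jonas"], radius=1))
# [('smith', 'smyth'), ('jones', 'jonas')]
```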
Virtualisation is a powerful tool used for the isolation, partitioning, and sharing of physical computing resources. Employed heavily in data centres, becoming increasingly popular in industrial settings, and used by home users for running alternative operating systems, hardware virtualisation has seen a lot of attention from hardware and software developers over the last ten to fifteen years.
From the hardware side, this takes the form of so-called hardware-assisted virtualisation, and appears in technologies such as Intel VT, AMD-V and the ARM Virtualization Extensions. However, most forms of hardware virtualisation are same-architecture virtualisation, where virtual versions of the host physical machine are created, providing very fast, isolated instances of the physical machine in which entire operating systems can be booted. In contrast, there is a distinct lack of hardware support for cross-architecture virtualisation, where the guest machine architecture is different to the host.
I will talk about my research in this area, and describe Captive, a cross-architecture virtualisation hypervisor that can boot unmodified guest operating systems, compiled for one architecture, in the virtual machine of another.
I will also discuss the challenges of full-system simulation (such as memory, instruction, and device emulation), our approaches to these challenges, and how we can efficiently map guest behaviour to host behaviour.
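To give a flavour of what instruction and memory emulation involve, here is a toy fetch-decode-execute loop for an invented three-instruction guest ISA. Captive's actual approach is far more sophisticated, typically translating guest code to host code dynamically; this sketch only shows how guest state can be mapped onto host data structures.

```python
# Toy fetch-decode-execute loop for an invented guest ISA. Guest registers
# and memory are mapped onto host data structures; real hypervisors like
# Captive instead translate guest code to host code for speed.

GUEST_MEM_SIZE = 256

def run(program, steps=100):
    regs = [0] * 4                    # guest registers live in a host list
    mem = bytearray(GUEST_MEM_SIZE)   # guest physical memory as host bytes
    pc = 0
    for _ in range(steps):
        op, a, b = program[pc]        # fetch + decode (pre-parsed for brevity)
        if op == "movi":              # movi rA, imm
            regs[a] = b
        elif op == "add":             # add rA, rB
            regs[a] = (regs[a] + regs[b]) & 0xFF
        elif op == "store":           # store rA -> mem[imm]
            mem[b] = regs[a]
        elif op == "halt":
            break
        pc += 1
    return regs, mem

regs, mem = run([("movi", 0, 40), ("movi", 1, 2),
                 ("add", 0, 1), ("store", 0, 10), ("halt", 0, 0)])
print(regs[0], mem[10])  # 42 42
```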
Finally, I will discuss our plans for open-sourcing the hypervisor, the work we are currently doing and what future work we have planned.
In this talk, I will discuss the possibility of using Bayesian nonparametric clustering, specifically the Dirichlet Process Mixture model, to solve the human activity recognition problem. In particular, I will discuss how the technique can be useful when activity labels are not annotated and/or the activity evolves over time. This initial study builds on existing work on using directional statistical models based on the von Mises-Fisher distribution, called the Hierarchical Mixture of Conditionally Independent von Mises-Fisher distributions (HMCIvMFs), for unknown event detection and learning. A Markov chain Monte Carlo sampling-based learning algorithm will be presented, together with some initial experimental results.
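For readers who want to experiment with the nonparametric clustering idea, the sketch below uses scikit-learn's BayesianGaussianMixture with a Dirichlet process prior. Note that it uses Gaussian components rather than the von Mises-Fisher components discussed in the talk, and the feature data is synthetic; it illustrates only how the model infers the number of clusters rather than fixing it in advance.

```python
# Sketch: Dirichlet-process clustering of unlabelled activity features.
# Gaussian components stand in for the von Mises-Fisher components used
# in the talk; the nonparametric idea is the same, in that the model
# infers how many of the n_components are actually used.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for sensor-derived activity feature vectors.
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(5, 1, (100, 3))])

dpmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

labels = dpmm.predict(X)
print("clusters actually used:", np.unique(labels).size)
```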
We have explored data coordination techniques that permit distributed systems to be constructed by interconnecting services. In such systems, network latency is often a problem: for example, large data volumes might have to be transmitted across the network if computation cannot be co-located close to data sources. One solution to this problem is the ability to deploy services in appropriate geographical locations and compose them together to create distributed ecosystems. Hence we seek to deploy such services rapidly, and to enact and orchestrate them dynamically. However, this goal is hindered by the size of the deployments: currently, virtual machine appliances that host such services on top of monolithic kernels are very large, and are thus potentially slow to deploy as they may need to be transmitted across a network.
Our principles led us to re-engineer the standard software stack to create self-contained applications that are less bloated, and consequently much smaller, based on Unikernels. Unikernels are compact library operating systems that enable a single application to be statically linked against a simple kernel that manages the underlying resources presented by a hypervisor. In this talk I will present Stardust, a specialised Unikernel that aims to support the deployment of application services based on the Java programming language.
The core problem in many sensing applications is that we're trying to infer high-resolution information from low-resolution observations — and keep our trust in this information as the sensors degrade. How can we do this in a principled way? There's an emerging body of work on using topology to manage both sensing and analytics, and in this talk I try to get a handle on how this might work for some of the problems we're interested in. I will present an experiment we did to explore these ideas, which highlights some fascinating problems.
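One very simple topological quantity in this setting is the number of connected components (H0) of a sensor field's coverage. The sketch below, with invented positions, radii and a failure, shows how that count changes as a sensor degrades; it illustrates the flavour of the approach, not the experiment described in the talk.

```python
# Minimal sketch: the 0-dimensional topology (connected components) of a
# sensor field's coverage, recomputed as a sensor fails. Positions, radii
# and the failure are all invented for illustration.

from itertools import combinations

def components(sensors, radius):
    """Union-find over the overlap graph: sensors whose coverage disks
    intersect are merged into one component (H0 of the nerve)."""
    parent = {s: s for s in sensors}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path halving
            s = parent[s]
        return s
    for a, b in combinations(sensors, 2):
        if (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= (2 * radius) ** 2:
            parent[find(a)] = find(b)
    return len({find(s) for s in sensors})

field = [(0, 0), (1, 0), (2, 0), (5, 0), (6, 0)]
print(components(field, radius=0.6))                  # 2 coverage patches
print(components(field[:1] + field[2:], radius=0.6))  # sensor (1,0) fails: 3
```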
Self-organisation and self-governance offer an effective approach to resolving collective action problems in multi-agent systems, such as fair and sustainable resource allocation. Nevertheless, self-governing systems which allow unrestricted and unsupervised self-modification expose themselves to several risks, including Suber's paradox of self-amendment (rules specify their own amendment) and Michels' iron law of oligarchy (that the system will inevitably be taken over by a small clique and be run for its own benefit, rather than in the collective interest). This talk will present an algorithmic approach to resisting both the paradox and the iron law, based on the idea of interactional justice derived from sociology, and legal and organisational theory. The process of interactional justice operationalised in this talk uses opinion formation over a social network, with respect to a shared set of congruent values, to transform a set of individual, subjective self-assessments into a collective, relative, aggregated assessment.
Using multi-agent simulation, we present some experimental results on detecting and resisting cliques. We conclude with a discussion of some implications concerning institutional reformation and stability, ownership of the means of coordination, and knowledge management processes in ‘democratic’ systems.
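As a rough illustration of opinion formation over a social network, the following sketch runs a DeGroot-style averaging process over a trust matrix. This is one classical formalisation; the operationalisation of interactional justice in the talk may differ in detail, and the trust weights and assessments here are invented.

```python
# DeGroot-style opinion formation: each agent repeatedly averages its
# assessment with those of its neighbours, weighted by how much it trusts
# them. One classical way to turn individual, subjective self-assessments
# into a collective, aggregated one; illustrative only.

import numpy as np

# Row-stochastic trust matrix: W[i][j] = weight agent i places on agent j.
W = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3],
              [0.1, 0.1, 0.8]])

x = np.array([0.9, 0.5, 0.1])   # initial subjective self-assessments

for _ in range(50):             # iterate until (near) consensus
    x = W @ x

print(x)  # all agents converge towards a shared collective assessment
```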
Jeremy Pitt is Professor of Intelligent and Self-Organising Systems in the Department of Electrical & Electronic Engineering at Imperial College London, where he is also Deputy Head of the Intelligent Systems & Networks Group. His research interests focus on developing formal models of social processes using computational logic, and their application in self-organising multi-agent systems, for example fair and sustainable common-pool resource management in ad hoc and sensor networks. He also has strong interests in human-computer interaction, socio-technical systems, and the social impact of technology; with regard to the latter he has edited two books, This Pervasive Day (IC Press, 2012) and The Computer After Me (IC Press, 2014). He has been an investigator on more than 30 national and European research projects and has published more than 150 articles in journals and conferences. He is a Senior Member of the ACM, a Fellow of the BCS, and a Fellow of the IET; he is also an Associate Editor of ACM Transactions on Autonomous and Adaptive Systems and an Associate Editor of IEEE Technology and Society Magazine.
Indoor environment quality has a significant effect on worker productivity through a complex interplay of factors such as temperature, humidity and levels of Volatile Organic Compounds (VOCs).
In this talk I will discuss my Masters project, which used off-the-shelf sensors and Raspberry Pis to collect environmental readings at one-minute intervals throughout the Computer Science buildings. The prevalence of erroneous readings due to sensor failure, and the strategy used for the identification and correction of such faults, will be presented. Identifiable correlations between environmental variables, and attempts to model these relationships, will be discussed.
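A minimal version of one such fault-handling strategy, with invented plausibility bounds and readings, might look like the following; the project's actual detection and correction methods may be more involved.

```python
# Sketch of a simple fault-handling strategy: flag physically implausible
# temperature readings, then fill the gaps by time interpolation. The
# bounds and readings are invented for illustration.

import pandas as pd

readings = pd.Series(
    [20.1, 20.3, -100.0, 20.6, 85.0, 20.9],   # two obvious sensor faults
    index=pd.date_range("2018-11-01 13:00", periods=6, freq="1min"),
)

plausible = readings.between(-10, 50)          # hypothetical indoor bounds
corrected = readings.where(plausible).interpolate(method="time")
print(corrected)
```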
Past studies identifying the ideal environmental conditions for human comfort and productivity allow for the objective assessment of indoor environmental conditions. An adaptation of Frešer’s environment rating system will be presented, showing how VOC levels can be incorporated into assessments of environment quality and how this can be communicated to building users.
Cloud computing is becoming an almost ubiquitous part of the computing landscape. For many companies today, moving their entire infrastructure and workloads to the cloud reduces complexity, time to deployment, and saves money. Spot Instances, a subset of Amazon’s cloud computing infrastructure (EC2), expand on this. They allow a user to bid on spare compute capacity in Amazon’s data centres at heavily discounted prices. If demand were ever to increase such that the user’s maximum bid is exceeded, their compute instance is terminated.
In this work, we conduct one of the first detailed analyses of how location affects the overall cost of deployment of a spot instance. We simultaneously examine the reliability of pricing data of a spot instance, and whether a user can be confident that their instance has a low risk of termination.
We analyse spot pricing data across all available Amazon Web Services regions for 60 days on a variety of instance types. We find that location does play a critical role in spot instance pricing and also that pricing differs depending on the granularity of the location – from a more coarse-grained AWS region to a more fine-grained Availability Zone within a region. We relate the pricing differences we find to the price’s stability, confirming whether we can be confident in the bid prices we make.
We conclude by showing that it is very possible to run workloads on Spot Instances, achieving both a very low risk of termination and a very low cost per hour.
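The two quantities at stake for a bidder can be sketched directly from a price history: how often the spot price exceeds the bid (termination risk) and the average price actually paid while running. The history and bid below are invented; a real analysis would use the spot price feed from the AWS API.

```python
# Sketch of the two quantities at stake for a spot instance bid: the risk
# of termination and the average hourly price paid while running. The
# price history is hypothetical.

def bid_outcome(hourly_prices, bid):
    running = [p for p in hourly_prices if p <= bid]
    risk = 1 - len(running) / len(hourly_prices)
    avg_cost = sum(running) / len(running) if running else None
    return risk, avg_cost

history = [0.031, 0.030, 0.033, 0.095, 0.032, 0.031]  # $/hour, invented
risk, cost = bid_outcome(history, bid=0.035)
print(f"termination risk: {risk:.0%}, average price paid: ${cost:.3f}/h")
```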
The ability to integrate information from different sensory modalities in a social context is crucial for understanding social cues and gaining useful social interaction and experience. Recent research has focused on multi-modal integration of social signals from visual, auditory, haptic or physiological data. Different data fusion techniques have been designed and developed; however, the majority have not achieved significant accuracy improvements in recognising social cues compared to uni-modal social signal recognition. One possible limitation is that these existing approaches lack sufficient capacity to model the various types of interactions between different modalities, and have not been able to leverage the advantages of multi-modal signals by considering each of them as complementary to the others. We introduce ideas for creating a decentralised model for social signal integration inspired by computational models of multi-sensory integration in neuroscience and the perception of social signals in the human brain.
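The classical neuroscience model alluded to here is reliability-weighted cue combination, in which each modality's estimate is weighted by its inverse variance. The sketch below shows that model only as the source of inspiration; the estimates and variances are invented, and the proposed decentralised model is more elaborate.

```python
# Classical reliability-weighted cue combination from the multi-sensory
# integration literature: each modality's estimate is weighted by its
# reliability (inverse variance). Illustrates the inspiration only.

def fuse(estimates, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    fused_var = 1.0 / total   # fused estimate is more reliable than either cue
    return fused, fused_var

# Hypothetical "arousal" estimates from a visual and an audio classifier.
print(fuse(estimates=[0.8, 0.5], variances=[0.04, 0.16]))
```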
The recent boom of big data, coupled with the challenges of its processing and storage, has given rise to distributed data processing and storage paradigms like MapReduce, Spark, and NoSQL databases. With the advent of cloud computing, processing and storing such massive datasets on clusters of machines is now feasible with ease. However, there are limited tools and approaches that users can rely on to gauge and comprehend the performance of their big data applications, whether deployed locally on clusters or in the cloud. Researchers have started exploring this area by providing benchmarking suites suitable for big data applications. However, many of these tools are fragmented, complex to deploy and manage, and do not provide transparency with respect to the monetary cost of benchmarking an application.
In this talk, I will present Plug And Play Bench (PAPB, https://github.com/sneceesay77/papb): an infrastructure-aware abstraction built to integrate and simplify the process of big data benchmarking. PAPB automates the tedious process of installing, configuring and executing common big data benchmark workloads by containerising the tools and settings based on the underlying cluster deployment framework. Our proof-of-concept implementation utilises HiBench as the benchmark suite, HDP (Hortonworks Data Platform) as the cluster deployment framework, and Azure as the cloud platform. The talk will further illustrate the inclusion of cost metrics based on the underlying Microsoft Azure cloud platform.
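As a sketch of the kind of cost metric involved, benchmark cost can be derived from wall-clock duration and the per-hour price of each cluster node. The VM sizes and hourly rates below are placeholders rather than current Azure prices, and the function is illustrative rather than PAPB's actual implementation.

```python
# Sketch of a benchmark-run cost metric: wall-clock duration times the
# per-hour price of each cluster node. Rates below are placeholders,
# not current Azure prices.

AZURE_HOURLY_RATE = {"D4s_v3": 0.192, "D8s_v3": 0.384}  # $/h, hypothetical

def run_cost(duration_s, nodes):
    """nodes: list of VM size names making up the benchmark cluster."""
    hours = duration_s / 3600
    return sum(AZURE_HOURLY_RATE[n] for n in nodes) * hours

# e.g. a 40-minute HiBench WordCount run on one master and three workers
print(f"${run_cost(2400, ['D8s_v3'] + ['D4s_v3'] * 3):.3f}")
```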