DLS: Functional Foundations for Operating Systems

Biography: Dr. Anil Madhavapeddy is a University Lecturer at the Cambridge Computer Laboratory, and a Fellow of Pembroke College where he is Director of Studies for Computer Science. He has worked in industry (NetApp, Citrix, Intel), academia (Cambridge, Imperial, UCLA) and startups (XenSource, Unikernel Systems, Docker) over the past two decades. At Cambridge, he directs the OCaml Labs research group which delves into the intersection of functional programming and systems, and is a maintainer on many open source projects such as OpenBSD, OCaml, Xen and Docker.

Timetable
9:30: Introduction by Professor Saleem Bhatti
9:35: Lecture 1
10:35: Break with tea and coffee
11:15: Lecture 2
12:15: Lunch (not provided)
14:00: Lecture 3
15:00: Close by Professor Simon Dobson

Lecture 1: Rebuilding Operating Systems with Functional Principles
The software stacks that we deploy across computing devices in the world are based on shaky foundations. Millions of lines of C code crammed into monolithic operating system kernels, mixed with layers of scheduling logic, wrapped in a hypervisor, and served with a dose of nominal security checking on the side. In this talk, I will describe an alternative approach to constructing reliable, specialised systems with a familiar developer experience. We will use modular functional programming to build several services, such as a secure web server, that have no reliance on conventional operating systems, and explain how to express their logic in a high-level, functional fashion. By the end of it, everyone in the audience should be able to build their own so-called unikernels!
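
To make that concrete, here is a minimal sketch of a MirageOS-style unikernel in OCaml, modelled on the standard MirageOS "hello world" tutorial: the config.ml file declares which devices the application needs, and the unikernel.ml functor contains the logic, which the mirage tool compiles into a standalone image. Exact module names and combinators (e.g. Mirage_time_lwt.S, foreign, default_time) vary between MirageOS releases, so treat this as illustrative rather than a definitive recipe.

```ocaml
(* config.ml -- describes the unikernel and the devices it needs.    *)
(* The `mirage` tool turns this description, plus the implementation *)
(* below, into a standalone image for a chosen target (Unix, Xen,    *)
(* KVM, and so on).                                                   *)
open Mirage

let main = foreign "Unikernel.Hello" (time @-> job)
let () = register "hello" [ main $ default_time ]

(* unikernel.ml -- the application logic, parameterised over a timer *)
(* device rather than calling into a host operating system.          *)
open Lwt.Infix

module Hello (Time : Mirage_time_lwt.S) = struct
  let start _time =
    let rec loop = function
      | 0 -> Lwt.return_unit
      | n ->
        Logs.info (fun f -> f "hello, unikernel world!");
        Time.sleep_ns (Duration.of_sec 1) >>= fun () ->
        loop (n - 1)
    in
    loop 4
end
```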

Lecture 2: The First Billion Real Deployments of Unikernels
Unikernels offer a path to a more sane basis for driving applications on hardware, but will they ever be adopted for real? For the past fifteen years, an intrepid group of adventurers have been developing the MirageOS application stack in the OCaml programming language. Along the way, it has been deployed in many unusual industrial situations that I will describe in this talk, starting with the Docker container stack and then moving on to the Xen hypervisor that drives billions of servers worldwide. I will explain the challenges of using functional programming in industry, but also the rewards of seeing successful deployments quietly working in mission-critical areas of systems software.

Lecture 3: Programming the Next Trillion Embedded Devices
The unikernel approach of compiling highly specialised applications from high-level source code is perfectly suited to programming the trillions of embedded devices that are making their way around the world. However, this raises new challenges from a programming language perspective: how can we run on a spectrum of devices, from the very tiny (with just kilobytes of RAM) to specialised hardware? I will describe the new frontier of functional metaprogramming (programs which generate more programs) that we are using to compile a single application to many heterogeneous devices, and a Git-like model to coordinate across thousands of nodes. I will conclude by motivating the need for a next-generation operating system to power exciting new applications such as augmented and virtual reality in our situated environments, and to remove the need for constant centralised coordination via the Internet.
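
For a flavour of what "programs which generate more programs" means, the classic staged power function below is written in BER MetaOCaml, an extension of OCaml with staging annotations: the brackets .< e >. quote an expression as a code value and .~ splices one code value into another, so specialising to a fixed exponent produces a residual program with the recursion unrolled. This is a generic staging illustration, not the actual MirageOS or embedded toolchain described in the lecture.

```ocaml
(* Requires BER MetaOCaml, which adds staging syntax to OCaml:        *)
(*   .< e >.   quotes the expression e as a code value                *)
(*   .~ e      splices a code value into an enclosing quotation       *)

(* Staged power: given n, generate code that computes x to the n.     *)
let rec spower n x =
  if n = 0 then .< 1 >.
  else .< .~x * .~(spower (n - 1) x) >.

(* Specialising to n = 5 yields the residual code                     *)
(*   fun x -> x * (x * (x * (x * (x * 1))))                           *)
let power5 = .< fun x -> .~(spower 5 .< x >.) >.

(* Runcode.run compiles and executes the generated code.              *)
let () = Printf.printf "%d\n" (Runcode.run power5 2)   (* prints 32 *)
```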

Event details

  • When: 13th February 2018 09:30 - 15:15
  • Where: Byre Theatre
  • Series: Distinguished Lectures Series, Systems Seminars Series
  • Format: Distinguished lecture

Containers for HPC environments

Rethinking High Performance Computing Platforms: Challenges, Opportunities and Recommendations, co-authored by Adam Barker and a team (Ole Weidner, Malcolm Atkinson, Rosa Filgueira Vicente) in the School of Informatics, University of Edinburgh, was recently featured in the Communications of the ACM and HPC Wire.

The paper focuses on container technology and argues that a number of “second generation” high-performance computing applications with heterogeneous, dynamic and data-intensive properties have an extended set of requirements, which are not met by the current production HPC platform models and policies. These applications (and users) require a new approach to supporting infrastructure, which draws on container-like technology and services. The paper then goes on to describe cHPC: an early prototype of an implementation based on Linux Containers (LXC).

Ali Khajeh-Hosseini, co-founder of AbarCloud and former co-founder of ShopForCloud (acquired by RightScale as PlanForCloud), said of this research: “Containers have helped speed up the development and deployment of applications in the heterogeneous environments found in larger enterprises. It’s interesting to investigate their applications in similar types of environments in newer HPC applications.”

SRG Seminar: Cloud scheduling algorithms by Long Thai

“Thanks to cloud computing, access to a virtualised computing cluster has become not only easy but also affordable for organisations, especially small and medium-sized ones. First of all, it does not require an upfront investment in building data centres or the constant expense of managing them. Instead, users pay only for the resources that they actually use. Secondly, cloud providers offer a resource provisioning mechanism which allows users to add or remove resources from their cluster easily and quickly in order to accommodate a workload which changes dynamically in real time. The flexibility of users’ computing clusters is further increased as they are able to select one or a combination of different virtual machine types, each of which has a different hardware specification.

Nevertheless, users of cloud computing face challenges that they have never encountered before. The monetary cost changes dynamically based on the amount of resources used, which means it is no longer cost-effective to adopt a greedy approach that acquires as many resources as possible. Instead, careful consideration is required before making any decision about acquiring resources. Moreover, users of cloud computing face a paradox of choice resulting from the large number of hardware specifications offered by cloud providers. As a result, finding a suitable machine type for an application can be difficult, and it is even more challenging when a user owns many applications, each of which performs differently. Finally, addressing all of the above challenges while ensuring that a user receives the desired performance further increases the difficulty of using cloud computing resources effectively.

In this research, we investigate and propose an approach that aims to optimise the usage of cloud computing resources by constructing a heterogeneous cloud cluster which changes dynamically based on the workload. Our proposed approach consists of two processes. The first, named execution scheduling, determines the number of virtual machines and the allocation of workload to each machine in order to achieve the desired performance at the minimum cost. The second, named execution management, monitors the execution at runtime and detects and handles unexpected events. The proposed research has been thoroughly evaluated by both simulated and real-world experiments. The results show that our approach is able not only to achieve the desired performance while minimising the monetary cost, but also to reduce, or even completely prevent, the negative effects caused by unexpected events at runtime.”
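
As a rough, self-contained illustration of the execution-scheduling idea (not the actual algorithm presented in the talk), the OCaml sketch below enumerates small mixes of hypothetical virtual machine types and keeps the cheapest configuration whose estimated completion time meets a deadline; the machine names, prices and throughput figures are invented for the example.

```ocaml
(* Illustrative only: pick the cheapest mix of VM types that is       *)
(* estimated to finish a batch of jobs before a deadline.              *)

type vm = { name : string; cost_per_hour : float; jobs_per_hour : float }

(* Hypothetical VM types; real prices and throughputs would come from  *)
(* the provider's catalogue and from profiling the application.        *)
let vm_types =
  [ { name = "small";  cost_per_hour = 0.05; jobs_per_hour = 10. };
    { name = "medium"; cost_per_hour = 0.10; jobs_per_hour = 22. };
    { name = "large";  cost_per_hour = 0.20; jobs_per_hour = 48. } ]

(* A mix is a list of (vm, count) pairs; estimate its completion time  *)
(* in hours and its total cost, assuming work is spread proportionally.*)
let evaluate ~jobs mix =
  let throughput =
    List.fold_left (fun acc (vm, n) -> acc +. float n *. vm.jobs_per_hour) 0. mix
  in
  if throughput <= 0. then None
  else
    let hours = jobs /. throughput in
    let cost =
      List.fold_left
        (fun acc (vm, n) -> acc +. float n *. vm.cost_per_hour *. hours)
        0. mix
    in
    Some (hours, cost)

(* Exhaustively try 0..max_per_type instances of each type and keep    *)
(* the cheapest mix that meets the deadline (None if nothing does).     *)
let schedule ~jobs ~deadline_hours ~max_per_type =
  let rec mixes = function
    | [] -> [ [] ]
    | vm :: rest ->
      let tails = mixes rest in
      List.concat_map
        (fun n -> List.map (fun tail -> (vm, n) :: tail) tails)
        (List.init (max_per_type + 1) (fun n -> n))
  in
  List.fold_left
    (fun best mix ->
      match evaluate ~jobs mix with
      | Some (hours, cost) when hours <= deadline_hours ->
        (match best with
         | Some (_, best_cost) when best_cost <= cost -> best
         | _ -> Some (mix, cost))
      | _ -> best)
    None (mixes vm_types)

let () =
  match schedule ~jobs:1000. ~deadline_hours:4. ~max_per_type:5 with
  | None -> print_endline "no feasible cluster within the deadline"
  | Some (mix, cost) ->
    Printf.printf "estimated cost: $%.2f\n" cost;
    List.iter (fun (vm, n) -> if n > 0 then Printf.printf "  %d x %s\n" n vm.name) mix
```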

Event details

  • When: 9th March 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar

Job Vacancy: Research Fellow in Computer Science

WORKANDHOME is an interdisciplinary project between the School of Computer Science and the School of Geography which investigates how home-based businesses are shaping society and space. The research explores how this transformation of work-residence relations has implications for economic activity, economic spaces, city models, the meaning of the home, the role of the neighbourhood and residential choices. Social science research and computer science will be integrated to record and predict the consequences of changing activity/networking patterns for future cities through computational network analysis and agent-based modelling. A component of the WORKANDHOME project involves conducting a survey of households in selected UK cities. Individuals’ activities and social networks – in both physical and virtual space – will be tracked using mobile devices, social media applications and a web-browser plug-in.

You will work as a Researcher and Engineering Lead for a number of software components which will facilitate a UK-wide survey. The first is a mobile phone application (Android and iPhone) which will use GPS data to measure location, distance travelled and so on, as well as to report the purpose of trips (e.g. shopping, bringing children to school) and the types of social contacts involved (e.g. business supplier, friend). The second is a set of apps for tracking activities in social media networks (e.g. Facebook and Twitter); these will record activities through mobile phones and PCs/laptops (location of contact and frequency). Finally, a web-browser plug-in for a popular browser such as Google Chrome or Firefox will be used to report browsing history to a database in St Andrews.

This research will be undertaken in the School of Computer Science at the renowned University of St Andrews. This is a unique opportunity to work at the cutting edge of systems research. Come join us in St Andrews.

For an informal discussion about the post you are welcome to contact Dr Adam Barker.

Fixed term: Full time for 12 months or Part time for 24 months
Salary: £31,342 per annum
Full job listing

PhD Scholarship in Data Science

Potential PhD students with a strong background in Computer Science are encouraged to apply for this three-year studentship funded by the European Research Council (ERC). The student will work within an interdisciplinary team of researchers from Computer Science and Geography in the WORKANDHOME project (ERC Starting Grant 2014), which investigates how home-based businesses are shaping society and space.

The student will examine the Computer Science challenges within this research project. The exact scope of the PhD project is open to discussion but we anticipate that the successful candidate will be working broadly on Data Science topics, potentially covering one or more of the following areas: cloud computing, social network analysis and agent-based modelling. This is a unique opportunity to work at the cutting edge of systems research. Come join us in St Andrews.

Funding Notes: The studentship will cover UK/EU tuition fees and an annual tax-free stipend of approximately £13,000. Funding will be for three years of full-time study, starting as soon as possible.

Applications: Applicants are expected to have, or expect to obtain, a UK first-class Honours degree (or its equivalent from a non-UK institution) in Computer Science; the minimum standard that we will consider is a UK upper-second class Honours degree or its equivalent.

For further information on how to apply, see our postgraduate web pages. All interested candidates should contact Dr Adam Barker in the first instance to discuss their eligibility for the scholarship and a proposal for research.

PhD Scholarship in Data Science

Potential PhD students with a strong background in Computer Science are encouraged to apply for this three-year studentship funded by the European Research Council (ERC). The student will work within an interdisciplinary team of researchers from Computer Science and Geography in the WORKANDHOME project (ERC Starting Grant 2014), which investigates how home-based businesses are shaping society and space.

The student will examine the Computer Science challenges within this research project. The exact scope of the PhD project is open to discussion but we anticipate that the successful candidate will be working broadly on Data Science topics, potentially covering one or more of the following areas: cloud computing, social network analysis and agent-based modelling. This is a unique opportunity to work at the cutting edge of systems research. Come join us in St Andrews.

Funding Notes: The studentship will cover UK/EU tuition fees and an annual tax-free stipend of approximately £13,000. Funding will be for three years of full-time study, starting date ideally in September/October 2015.

Applications: Applicants are expected to have, or expect to obtain, a UK first-class Honours degree (or its equivalent from a non-UK institution) in Computer Science; the minimum standard that we will consider is a UK upper-second class Honours degree or its equivalent.

For further information on how to apply, see our postgraduate web pages. The closing date for applications is June 30th 2015. All interested candidates should contact Dr Adam Barker in the first instance to discuss their eligibility for the scholarship and a proposal for research.

Big Data Research Featured in MIT Technology Review

A survey article written by Jonathan Ward and Adam Barker has been featured in the MIT Technology Review.

Undefined By Data: A Survey of Big Data Definitions surveys the various definitions of big data offered by the world’s biggest and most influential high-tech organisations, and then attempts to distill from all this noise a definition that everyone can agree on. The article was picked up by the MIT Technology Review and has fostered a lively discussion around a coherent definition; according to Topsy (a social media analytics service), the article has been retweeted over 400 times.

News & events

The system for posting news and events has been replaced with a combination of the School blog and an RSS feed to the School web site. For more information, follow the links at the foot of the School home page.