SRG Seminar: nMANET, Named Data Networking (NDN) for Mobile Ad-hoc Networks (MANETs) by Percy Perez Aruni

The aim of this talk is to introduce nMANET, the Named Data Networking (NDN) approach for Mobile Ad-hoc Networks (MANETs). nMANET is an alternative perspective on utilising the characteristics of NDN to overcome the limitations of MANETs, such as mobility and energy consumption. NDN, an instance of Information Centric Networking (ICN), provides an alternative architecture for the future Internet. In contrast with traditional TCP/IP networks, NDN addresses content rather than hosts. NDN secures the content itself instead of securing the communication channel between hosts, so content can be obtained from intermediate caches as well as from the original producer. Although NDN has proven to be an effective design in wired networks, it does not fully address the challenges arising in MANETs, owing to the high mobility of mobile devices and their inherent resource constraints, such as the remaining energy in their batteries.
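
As a toy illustration of this retrieval model (hypothetical types and names, not the NDN or JNFD API), the Java sketch below shows a node's content store answering a request by content name, so that a cached, signed copy can satisfy a consumer without the producer being reachable:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // Hypothetical types for illustration only; this is not the NDN or JNFD API.
    final class ContentStore {
        private final Map<String, byte[]> cache = new HashMap<>();

        // Cache a (signed) Data packet under its content name.
        void insert(String name, byte[] signedData) {
            cache.put(name, signedData);
        }

        // An incoming Interest names content, not a host: any node holding a
        // matching entry can satisfy it from its cache.
        Optional<byte[]> onInterest(String name) {
            return Optional.ofNullable(cache.get(name));
        }
    }

    public class NdnSketch {
        public static void main(String[] args) {
            ContentStore node = new ContentStore();
            node.insert("/example/videos/lecture/seg1", new byte[]{1, 2, 3});

            node.onInterest("/example/videos/lecture/seg1").ifPresentOrElse(
                    data -> System.out.println("served from cache"),
                    () -> System.out.println("no copy here; forward the Interest"));
        }
    }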

The implementation of nMANET, the Java-based NDN Forwarder Daemon (JNFD), aims to fill this gap and provide a mobile name-based ad-hoc network prototype compatible with existing NDN implementations. JNFD is designed for Android mobile devices and offers a set of energy-efficient forwarding strategies for distributing content in a dynamic topology, where consumers, producers and forwarders are highly mobile and may join or leave the network at unpredictable times. nMANET evaluates JNFD through benchmarking to estimate its efficiency, defined as high reliability, throughput and responsiveness combined with low energy consumption.
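
The abstract does not detail the strategies JNFD ships with; the following sketch, built on an invented Neighbour record and battery threshold, only illustrates the general shape of an energy-efficient forwarding decision: avoid nearly depleted nodes, then prefer short paths towards the producer.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    // Invented data model; JNFD's real strategy interface is not shown in the abstract.
    record Neighbour(String id, double batteryLevel, int hopsToProducer) {}

    public class EnergyAwareStrategy {
        // Skip nearly depleted neighbours, then prefer the shortest path towards
        // the producer, breaking ties in favour of the neighbour with more charge.
        static Optional<Neighbour> nextHop(List<Neighbour> neighbours) {
            return neighbours.stream()
                    .filter(n -> n.batteryLevel() > 0.2)
                    .min(Comparator.comparingInt(Neighbour::hopsToProducer)
                            .thenComparing(n -> -n.batteryLevel()));
        }

        public static void main(String[] args) {
            List<Neighbour> nearby = List.of(
                    new Neighbour("a", 0.9, 3),
                    new Neighbour("b", 0.1, 1),   // short path, but almost flat
                    new Neighbour("c", 0.6, 2));
            System.out.println(nextHop(nearby)); // Optional[Neighbour[id=c, ...]]
        }
    }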

Event details

  • When: 6th April 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar

SRG Seminar: Managing Shared Mutable Data in a Distributed Environment (Simone Conte)

Title: Managing Shared Mutable Data in a Distributed Environment

Abstract: Managing data is central to our digital lives. The average user owns multiple devices and uses a large variety of applications, services and tools. In an ideal world, storage is infinite; data is easy to share and version, and is available irrespective of where it is stored; and users can protect and exert control over their data as they see fit.

In the real world, however, achieving such properties is very hard. File systems provide abstractions that no longer satisfy all the needs of our daily lives. Many applications now hide data management from users, but do so within their own silos. Cloud services each provide their own storage abstraction, adding further fragmentation to the overall system.

The work presented in this talk is about engineering a system that usefully approximates this ideal world. We present the Sea Of Stuff, a model in which users can operate over distributed storage as if it were local storage, organise and version data in a distributed manner, and automatically apply policies about how content is stored.
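
As a minimal sketch of two ingredients such a model can combine (invented types, not the Sea Of Stuff API), the code below addresses content by the hash of its bytes and records versions as immutable links to their predecessors, which keeps data and history meaningful on whichever node stores them:

    import java.security.MessageDigest;
    import java.util.HexFormat;

    // Invented types for illustration; this is not the Sea Of Stuff API.
    record Version(String contentHash, String previous) {}

    public class SosSketch {
        // Content-addressing: the identifier is derived from the bytes themselves,
        // so it is stable no matter which node or service stores the content.
        static String hash(byte[] bytes) throws Exception {
            return HexFormat.of().formatHex(
                    MessageDigest.getInstance("SHA-256").digest(bytes));
        }

        public static void main(String[] args) throws Exception {
            String draft = hash("draft".getBytes());
            Version v1 = new Version(draft, null);
            Version v2 = new Version(hash("final".getBytes()), draft);
            // A storage policy (e.g. "keep two replicas of every version") can be
            // applied to these immutable records wherever they happen to live.
            System.out.println(v1 + " -> " + v2);
        }
    }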

Event details

  • When: 23rd March 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar

SRG Seminar: Cloud scheduling algorithms by Long Thai

“Thanks to cloud computing, access to a virtualised computing cluster has become not only easy but also affordable for organisations, especially small and medium-sized ones. First of all, it requires neither an upfront investment in building data centres nor a constant expense for managing them; instead, users pay only for the resources they actually use. Secondly, cloud providers offer a resource provisioning mechanism that allows users to add or remove resources from their cluster easily and quickly, in order to accommodate a workload that changes dynamically in real time. The flexibility of users’ computing clusters is further increased as they are able to select one or a combination of different virtual machine types, each of which has a different hardware specification.

Nevertheless, the users of cloud computing face challenges that they have never encountered before. The monetary cost changes dynamically based on the amount of resources used, which means it is no longer cost-effective to adopt a greedy approach that acquires as many resources as possible; instead, careful consideration is required before any decision to acquire resources. Moreover, users face a paradox of choice resulting from the large number of hardware options offered by cloud providers. As a result, finding a suitable machine type for an application can be difficult, and it is even more challenging when a user owns many applications, each of which performs differently. Finally, addressing all of the above challenges while ensuring that a user receives the desired performance further increases the difficulty of using cloud computing resources effectively.

In this research, we investigate and propose an approach that optimises the usage of cloud computing resources by constructing a heterogeneous cloud cluster which changes dynamically based on the workload. Our proposed approach consists of two processes. The first, named execution scheduling, determines the number of virtual machines and the allocation of workload to each machine in order to achieve the desired performance at minimum cost. The second, named execution management, monitors the execution at runtime and detects and handles unexpected events. The proposed research has been thoroughly evaluated by both simulated and real-world experiments. The results show that our approach not only achieves the desired performance while minimising the monetary cost, but also reduces, or even completely prevents, the negative effects of unexpected events at runtime.”
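
As a deliberately simplified picture of the execution-scheduling step (made-up machine types, prices and workload, and homogeneous clusters only, unlike the heterogeneous clusters the research targets), the sketch below searches small cluster configurations and keeps the cheapest one whose estimated makespan meets a deadline:

    import java.util.List;

    // Made-up machine types, prices and workload; a toy stand-in for the
    // execution-scheduling process described above.
    record VmType(String name, double tasksPerHour, double pricePerHour) {}

    public class SchedulerSketch {
        public static void main(String[] args) {
            List<VmType> catalogue = List.of(
                    new VmType("small", 100, 0.05),
                    new VmType("large", 450, 0.20));
            int tasks = 10_000;
            double deadlineHours = 4;

            double bestCost = Double.MAX_VALUE;
            String bestConfig = "none";
            // Exhaustively try small homogeneous clusters of each type and keep
            // the cheapest configuration whose makespan meets the deadline.
            for (VmType vm : catalogue) {
                for (int count = 1; count <= 32; count++) {
                    double hours = Math.ceil(tasks / (vm.tasksPerHour() * count));
                    double cost = hours * count * vm.pricePerHour();
                    if (hours <= deadlineHours && cost < bestCost) {
                        bestCost = cost;
                        bestConfig = count + " x " + vm.name();
                    }
                }
            }
            System.out.printf("%s for $%.2f%n", bestConfig, bestCost);
        }
    }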

Event details

  • When: 9th March 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar

Research on containers for HPC environments featured in CACM and HPC Wire

Rethinking High Performance Computing Platforms: Challenges, Opportunities and Recommendations, co-authored by Adam Barker and a team (Ole Weidner, Malcolm Atkinson, Rosa Filgueira Vicente) in the School of Informatics, University of Edinburgh, was recently featured in the Communications of the ACM and HPC Wire.

The paper focuses on container technology and argues that a number of “second generation” high-performance computing applications with heterogeneous, dynamic and data-intensive properties have an extended set of requirements, which are not met by current production HPC platform models and policies. These applications (and their users) require a new approach to supporting infrastructure, one which draws on container-like technology and services. The paper then goes on to describe cHPC: an early prototype implementation based on Linux Containers (LXC).

Ali Khajeh-Hosseini, co-founder of AbarCloud and former co-founder of ShopForCloud (acquired by RightScale as PlanForCloud), said of this research: “Containers have helped speed up the development and deployment of applications in the heterogeneous environments found in larger enterprises. It’s interesting to investigate their applications in similar types of environments in newer HPC applications.”