Workshop on Considering Technology through a Philosophical Lens

Technology fundamentally shapes our communication, relationships, and access to information. It also evolves through our interaction with it. Dialoguing across disciplines can facilitate an understanding of these complex and reciprocal relationships and fuel reflection and innovation.

This hands-on, participant-driven and experimental workshop will start a discussion of what can come from considering technology through a philosophical lens. Through discussions and hands-on design activities, it will provide an introduction to and reflection on questions at the intersection of computer science and philosophy, such as:

  • How have philosophy and technology shaped each other in the past?
  • How can philosophical ideas and methods guide research in Computer Science?
  • How can thinking through technology help Humanities researchers discover relevance and articulate impact in their research?

Engaging these questions can provide participants an entry-point into exploring these themes in the context of their own research.

This workshop is aimed at researchers from computer science who are curious about philosophy and how to leverage it to inform technically oriented research questions and designing for innovation. It is also aimed at researchers in the arts & humanities, social sciences, and philosophy who are curious about current research questions and approaches in computer science and how questions of technology can stimulate philosophical thought and research.

Attending the workshop is free, but please register by emailing Nick Daly: nd40[at]st-andrews.ac.uk

Organisers: Nick Daly (School of Modern Languages) and Uta Hinrichs (School of Computer Science)

 

Event details

  • When: 18th May 2017 10:00 - 13:00
  • Where: Cole 1.33a
  • Format: Workshop

SRG Seminar: “Evaluating Data Linkage: Creation and use of synthetic data for comprehensive linkage evaluation” by Tom Dalton and “Container orchestration” by Uchechukwu Awada

The abstract of Tom’s talk:

“Data linkage approaches are often evaluated with only small or few data sets. If a linkage approach is to be used widely, quantifying its performance across varying data sets would be beneficial. In addition, given that a data set needs to be linked, the true links are by definition unknown. The success of a linkage approach is thus difficult to evaluate comprehensively.

This talk focuses on the use of many synthetic data sets for the evaluation of linkage quality achieved by automatic linkage algorithms in the domain of population reconstruction. It presents an evaluation approach which considers linkage quality when characteristics of the population are varied. We envisage a sequence of experiments where a set of populations are generated to consider how linkage quality varies across different populations: with the same characteristics, with differing characteristics, and with differing types and levels of corruption. The performance of an approach at scale is also considered.

The approach to generate synthetic populations with varying characteristics on demand will also be addressed. The use of synthetic populations has the advantage that all the true links are known, thus allowing evaluation as if with real-world ‘gold-standard’ linked data sets.

Given the large number of data sets evaluated against, we also give consideration to how best to present these findings. The ability to assess variations in linkage quality across many data sets will assist in the development of new linkage approaches and in identifying areas where existing linkage approaches may be more widely applied.”
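Because every true link in a synthetic population is known, linkage quality can be summarised with standard pairwise measures such as precision, recall and F-measure. The sketch below is purely illustrative (it is not the evaluation code from the talk, and the record identifiers are made up):

```python
# Illustrative sketch: pairwise linkage-quality metrics when the true links
# are known, as they are for synthetic populations. Record identifiers and
# link sets below are invented for the example.

def linkage_quality(predicted_links, true_links):
    """Return precision, recall and F-measure for a set of predicted links.

    Each link is an unordered pair of record identifiers, stored as a
    frozenset so that (a, b) and (b, a) compare equal.
    """
    predicted = {frozenset(pair) for pair in predicted_links}
    truth = {frozenset(pair) for pair in true_links}

    true_positives = len(predicted & truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure


if __name__ == "__main__":
    true_links = [("birth/1", "marriage/7"), ("birth/2", "marriage/9")]
    predicted_links = [("birth/1", "marriage/7"), ("birth/3", "marriage/9")]
    p, r, f = linkage_quality(predicted_links, true_links)
    print(f"precision={p:.2f} recall={r:.2f} f-measure={f:.2f}")
```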

The abstract of Awada’s talk:

“Over the years, there has been rapid development in the area of software development. A recent innovation in software or application deployment and execution is the use of containers. Containers provide a lightweight, isolated and well-defined execution environment. Application containers like Docker wrap up a piece of software in a complete file system that contains everything it needs to run: code, runtime, system tools, system libraries, etc. To support and simplify large-scale deployment, cloud computing providers (e.g., AWS, Google, Microsoft) have recently introduced Container Service Platforms (CSPs), which support automated and flexible orchestration of containerised applications on container-instances (virtual machines).

Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and the available resources in the cloud. This can result in increased execution times for applications and resource wastage through under-utilised container-instances, but also in a reduction in the number of applications that can be deployed with the available resources. In addition, current CSP frameworks do not support the deployment and scaling of containers across multiple regions at the same time, or the merging of containers into a multi-container unit in order to achieve higher cluster utilisation and reduced execution times.

Our research aims to extend the existing system by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionality for orchestrating containerised applications through the joint optimisation of sets of containerised applications and the resource pool across multiple (geographically distributed) cloud regions. We evaluated CMS on a cloud-based CSP, Amazon EC2 Container Service (ECS), and conducted extensive experiments using sets of CPU- and memory-intensive containerised applications against the custom deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation and up to 70% reduction in execution times.”
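As a rough illustration of the cluster-utilisation argument (not the CMS algorithm itself), the sketch below packs a set of containerised applications onto as few container-instances as possible using first-fit decreasing, and compares the result with scheduling each application onto its own instance. All capacities and resource requirements are invented:

```python
# Toy illustration of why joint placement can raise cluster utilisation:
# first-fit-decreasing packing of containerised apps onto container-instances.
# Capacities and app requirements are invented; this is not the CMS algorithm.

INSTANCE_CPU, INSTANCE_MEM = 4.0, 8.0  # vCPUs, GiB per container-instance

def pack_first_fit(apps):
    """Place (cpu, mem) demands onto instances, opening new ones as needed."""
    instances = []  # each entry is [used_cpu, used_mem]
    for cpu, mem in sorted(apps, reverse=True):
        for inst in instances:
            if inst[0] + cpu <= INSTANCE_CPU and inst[1] + mem <= INSTANCE_MEM:
                inst[0] += cpu
                inst[1] += mem
                break
        else:
            instances.append([cpu, mem])
    return instances

apps = [(2.0, 3.0), (1.0, 2.0), (2.0, 4.0), (1.0, 1.0), (0.5, 1.0)]

packed = pack_first_fit(apps)
individual = len(apps)  # one app per container-instance
cpu_used = sum(cpu for cpu, _ in apps)

print(f"instances needed (packed): {len(packed)} vs individual: {individual}")
print(f"CPU utilisation (packed): {cpu_used / (len(packed) * INSTANCE_CPU):.0%}")
print(f"CPU utilisation (one app per instance): "
      f"{cpu_used / (individual * INSTANCE_CPU):.0%}")
```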

Event details

  • When: 20th April 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar

ACM SIGCHI: Communication Ambassador & Turing Award Celebration News

Congratulations to Hui-Shyong Yeo, who has been selected both as an ACM SIGCHI communication ambassador and to represent SIGCHI at the ACM 50 Years of the A.M. Turing Award Celebration.

Yeo is a second-year PhD student who is particularly interested in exploring and developing novel interaction techniques. Since joining us in SACHI, he has had work accepted at ACM CHI 2016 and 2017, ACM MobileHCI 2016 and 2017, and ACM UIST 2016. His work has been featured at Google I/O 2016 and locally on STV news, and he gave a talk at Google UK in 2016 about his research. His work has also been covered in the media, including by Gizmodo, The Verge, Engadget and TechCrunch; see his personal website for more details.

SRG Seminar: nMANET, the Name-based Data Network (NDN) for Mobile Ad-hoc Networks (MANETs) by Percy Perez Aruni

The aim of this talk is to introduce nMANET, a Name-based Data Network (NDN) approach for Mobile Ad-hoc Networks (MANETs). nMANET is an alternative perspective on utilising the characteristics of NDN to address the limitations of MANETs, such as mobility and energy consumption. NDN, an instance of Information Centric Networking (ICN), provides an alternative architecture for the future Internet. In contrast with traditional TCP/IP networks, NDN enables content addressing instead of host-based communication. NDN secures the content instead of securing the communication channel between hosts, so content can be obtained from intermediate caches as well as from the final producers. Although NDN has proven to be an effective design in wired networks, it does not fully address the challenges arising in MANETs. This shortcoming is due to the high mobility of mobile devices and their inherent resource constraints, such as the remaining energy in their batteries.

The implementation of nMANET, the Java-based NDN Forwarder Daemon (JNFD), aims to fill this gap and provide a Mobile Name-based Ad-hoc Network prototype compatible with NDN implementations. JNFD was designed for Android mobile devices and offers a set of energy-efficient forwarding strategies to distribute content in a dynamic topology where consumers, producers and forwarders have high mobility and may join or leave the network at unpredictable times. nMANET evaluates JNFD through benchmarking to estimate efficiency, which is defined as high reliability, throughput and responsiveness with low energy consumption.
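For readers unfamiliar with NDN forwarding, the sketch below gives a rough picture of the idea JNFD builds on: a node answers an Interest from its Content Store if it can, otherwise it records the Interest in a pending table and forwards it, and an energy-aware strategy might prefer the neighbour with the most battery remaining. The class and attribute names are invented for illustration and do not reflect JNFD's actual design:

```python
# Illustrative sketch of NDN-style forwarding with a simple energy-aware
# neighbour choice. Names are invented and do not reflect JNFD's design.

from dataclasses import dataclass, field

@dataclass
class Neighbour:
    node_id: str
    battery_level: float  # fraction of battery remaining, 0.0 - 1.0

@dataclass
class Forwarder:
    content_store: dict = field(default_factory=dict)      # name -> data
    pending_interests: dict = field(default_factory=dict)  # name -> requesters
    neighbours: list = field(default_factory=list)

    def on_interest(self, name, requester):
        """Satisfy an Interest from the cache or forward it towards producers."""
        if name in self.content_store:
            return ("data", name, self.content_store[name])  # cache hit
        self.pending_interests.setdefault(name, set()).add(requester)
        # Energy-aware strategy: forward via the neighbour with most battery left.
        next_hop = max(self.neighbours, key=lambda n: n.battery_level)
        return ("forward", name, next_hop.node_id)

    def on_data(self, name, data):
        """Cache returned Data and report the consumers waiting for it."""
        self.content_store[name] = data
        return self.pending_interests.pop(name, set())

node = Forwarder(neighbours=[Neighbour("A", 0.3), Neighbour("B", 0.8)])
print(node.on_interest("/video/seg1", requester="consumer-1"))  # forwarded via B
print(node.on_data("/video/seg1", data=b"..."))                 # consumer-1 waiting
print(node.on_interest("/video/seg1", requester="consumer-2"))  # now a cache hit
```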

Event details

  • When: 6th April 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar

SRG Seminar: Managing Shared Mutable Data in a Distributed Environment (Simone Conte)

Title: Managing Shared Mutable Data in a Distributed Environment

Abstract: Managing data is central to our digital lives. The average user owns multiple devices and uses a large variety of applications, services and tools. In an ideal world, storage is infinite, data is easy to share and version and is available irrespective of where it is stored, and users can protect and exert control over their data as they see fit.

In the real world, however, achieving such properties is very hard. File systems provide abstractions that no longer satisfy all the needs of our daily lives. Many applications now abstract data management away from users, but do so within their own silos. Cloud services each provide their own storage abstraction, adding more fragmentation to the overall system.

The work presented in this talk is about engineering a system that usefully approximates this ideal world. We present the Sea Of Stuff, a model in which users can operate over distributed storage as if it were their local storage, organise and version data in a distributed manner, and automatically apply policies about how content is stored.
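One way to picture the kind of abstraction being described (though not the Sea Of Stuff model itself) is a content-addressed store in which each version of an asset records the hash of its content and a pointer to the previous version, so data can live on any node and be verified wherever it is retrieved. The names below are illustrative only:

```python
# Illustrative content-addressed, versioned store: data is addressed by the
# hash of its content and each version records its predecessor, so content
# can be stored anywhere and verified on retrieval. A sketch of the general
# idea only, not the Sea Of Stuff model itself.

import hashlib

class Store:
    def __init__(self):
        self.blobs = {}     # content hash -> bytes
        self.versions = {}  # asset name -> list of (content hash, previous hash)

    def add_version(self, asset, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        self.blobs[digest] = data
        history = self.versions.setdefault(asset, [])
        previous = history[-1][0] if history else None
        history.append((digest, previous))
        return digest

    def get(self, asset, version=-1):
        digest, _ = self.versions[asset][version]
        data = self.blobs[digest]
        # Verify integrity: the data must hash back to its own address.
        assert hashlib.sha256(data).hexdigest() == digest
        return data

store = Store()
store.add_version("notes.txt", b"first draft")
store.add_version("notes.txt", b"second draft")
print(store.get("notes.txt"))        # b'second draft'
print(store.get("notes.txt", 0))     # b'first draft'
```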

Event details

  • When: 23rd March 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar

SRG Seminar: Cloud scheduling algorithms by Long Thai

“Thanks to cloud computing, access to a virtualised computing cluster has become not only easy but also feasible for organisations, especially small and medium-sized ones. First of all, it does not require an upfront investment in building data centres or a constant expense for managing them. Instead, users pay only for the amount of resources that they actually use. Secondly, cloud providers offer a resource provisioning mechanism which allows users to add or remove resources from their cluster easily and quickly in order to accommodate a workload which changes dynamically in real time. The flexibility of users’ computing clusters is further increased as they are able to select one or a combination of different virtual machine types, each of which has a different hardware specification.

Nevertheless, users of cloud computing face challenges that they have never encountered before. The monetary cost changes dynamically based on the amount of resources used, which means it is no longer cost-effective to adopt a greedy approach that acquires as many resources as possible. Instead, careful consideration is required before making any decision about acquiring resources. Moreover, users of cloud computing face a paradox of choice resulting from the large number of hardware specifications offered by cloud providers. As a result, finding a suitable machine type for an application can be difficult. It is even more challenging when a user owns many applications, each of which performs differently. Finally, addressing all of the above challenges while ensuring that a user receives the desired performance further increases the difficulty of using cloud computing resources effectively.

In this research, we investigate and propose an approach that aims to optimise the usage of cloud computing resources by constructing a heterogeneous cloud cluster which changes dynamically based on the workload. Our proposed approach consists of two processes. The first, named execution scheduling, aims to determine the number of virtual machines and the allocation of workload to each machine in order to achieve the desired performance at minimum cost. The second, named execution management, monitors the execution at runtime and detects and handles unexpected events. The proposed research has been thoroughly evaluated in both simulated and real-world experiments. The results show that our approach is able not only to achieve the desired performance while minimising the monetary cost, but also to reduce, or even completely prevent, the negative effects of unexpected events at runtime.”
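The execution-scheduling idea can be illustrated with a toy search over machine types and counts that picks the cheapest configuration whose estimated completion time still meets a deadline. The machine types, prices, throughput figures and hourly billing model below are invented and are not taken from the talk:

```python
# Toy illustration of execution scheduling: choose a VM type and count that
# finish the workload before a deadline at minimum cost. Machine types,
# prices and throughput figures are invented for the example.

import math

MACHINE_TYPES = {
    # name: (hourly price in $, tasks processed per hour per machine)
    "small":  (0.05, 40),
    "medium": (0.10, 90),
    "large":  (0.20, 200),
}

def cheapest_plan(total_tasks, deadline_hours, max_machines=20):
    """Return (type, count, cost) of the cheapest plan meeting the deadline."""
    best = None
    for name, (price, rate) in MACHINE_TYPES.items():
        for count in range(1, max_machines + 1):
            hours = total_tasks / (rate * count)
            if hours > deadline_hours:
                continue  # this configuration misses the deadline
            cost = price * count * math.ceil(hours)  # hourly billing assumed
            if best is None or cost < best[2]:
                best = (name, count, cost)
    return best

machine_type, count, cost = cheapest_plan(total_tasks=3600, deadline_hours=4)
print(f"use {count} x {machine_type} machines, estimated cost ${cost:.2f}")
```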

Event details

  • When: 9th March 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar

Wrist Worn Haptic Feedback Device

One of our PhD students, Esma Mansouri Benssassi, and her supervisor Dr Erica Ye defined a requirement for a wrist-worn device grouping a number of haptic feedback elements for an experiment they wished to carry out. The on-board haptic elements are two eccentric rotating mass micro motors and a linear resonant actuator. Initial circuit schematics and printed circuit board designs were created in KiCad EDA, an open-source electronic design automation suite. The resulting printed circuit board (PCB) was made on the CS CNC router, which produces the PCB by engraving copper-clad fibreglass-epoxy board with a vee cutter.

Bare circular engraved PCB

The case for the PCB was created in Autodesk Inventor and was 3D printed using the CS Makerbot 2X 3D printer.

Blank PCB and 3D Printed Case

Haptic Wristband and Haptic Transducers

The wrist-worn haptic feedback device will be connected via an umbilical cable to the main control board: a Feather M0 embedded ARM and haptic driver breadboard. The Feather M0 is an ARM microcontroller with a wifi module, which can be programmed using the Arduino IDE. Code for the ARM processor will enable stored and custom waveforms to be played on the haptic devices on the wrist.

Haptic Feedback Breadboard Assembly
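The firmware for the Feather M0 will be written in the Arduino IDE, but as a rough picture of what a “custom waveform” might look like, the host-side sketch below describes one as a list of (intensity, duration) segments and packs it into a byte string that a simple command handler on the microcontroller could replay. The packet layout and serial port name are assumptions, not the project’s actual protocol.

```python
# Hypothetical host-side sketch: describe a custom haptic waveform as
# (intensity, duration) segments and pack it into bytes that a simple
# command handler on the microcontroller could replay. The packet layout
# and serial port name are assumptions, not the project's protocol.

import struct

def pack_waveform(segments):
    """Pack (intensity 0-255, duration_ms 0-65535) pairs into bytes."""
    payload = b"".join(struct.pack("<BH", level, ms) for level, ms in segments)
    return struct.pack("<B", len(segments)) + payload

# Three short pulses of increasing strength, separated by 100 ms gaps.
waveform = [(80, 50), (0, 100), (160, 50), (0, 100), (255, 50)]
packet = pack_waveform(waveform)
print(packet.hex())

# To send the packet to the Feather M0 over the umbilical/USB serial link,
# one could use pyserial, e.g.:
#   import serial
#   with serial.Serial("/dev/ttyACM0", 115200) as port:  # port name is a guess
#       port.write(packet)
```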

Research on containers for HPC environments featured in CACM and HPC Wire

Rethinking High Performance Computing Platforms: Challenges, Opportunities and Recommendations, co-authored by Adam Barker and a team (Ole Weidner, Malcolm Atkinson, Rosa Filgueira Vicente) in the School of Informatics, University of Edinburgh, was recently featured in the Communications of the ACM and HPC Wire.

The paper focuses on container technology and argues that a number of “second generation” high-performance computing applications with heterogeneous, dynamic and data-intensive properties have an extended set of requirements, which are not met by the current production HPC platform models and policies. These applications (and users) require a new approach to supporting infrastructure, which draws on container-like technology and services. The paper then goes on to describe cHPC: an early prototype of an implementation based on Linux Containers (LXC).

Ali Khajeh-Hosseini, co-founder of AbarCloud and former co-founder of ShopForCloud (acquired by RightScale as PlanForCloud), said of this research: “Containers have helped speed up the development and deployment of applications in the heterogeneous environments found in larger enterprises. It’s interesting to investigate their applications in similar types of environments in newer HPC applications.”

Computational Models of Tuberculosis

On 10th February, Michael Pitcher gave a talk on his upcoming work for his PhD.

Michael is a first-year PhD student based in the School of Computer Science, whose research also involves close collaboration with the School of Medicine. Michael’s work involves investigation of the use of computational models to simulate the progression and treatment of tuberculosis within individuals.