Adriana Wilde (St Andrews): Rising to challenges in assessment, feedback and encouraging gender diversity in computing (School Seminar)

Abstract

This talk is in two parts: in the first, Adriana will focus on her experiences of assessment and feedback in large classes; in the second, on her work in encouraging gender diversity in computer science.

The focus of the first part will be on her involvement in redesigning an undergraduate module on HCI, where the methods of assessment used were not suitable for increasingly large classes (up to 160 students). Redesign decisions needed to preserve the validity and reliability of the assessment whilst respecting the need for timely feedback. Adriana will specifically talk about the exam and coursework, and how learning activities in the module were aligned to the assessment, through the use of PeerWise for student-authored MCQs and the use of video for assessment to foster creativity and the application of knowledge. During the talk, there will be an opportunity for discussion of the challenges then encountered.

A (shorter) second part of the talk will present her experiences in supporting women in computing, starting with a very small-scale intervention with staff and students at her previous institution, and concluding with her engagement with the Early Career Women’s Network in St Andrews.

Event details

  • When: 23rd January 2018 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

“Sensing and topology: some ideas by other people, and an early experiment” by Simon Dobson

Abstract
The core problem in many sensing applications is that we’re trying to
infer high-resolution information from low-resolution observations —
and keep our trust in this information as the sensors degrade. How can
we do this in a principled way? There’s an emerging body of work on
using topology to manage both sensing and analytics, and in this talk I
try to get a handle on how this might work for some of the problems
we’re interested in. I will present an experiment we did to explore
these ideas, which highlights some fascinating problems.

Event details

  • When: 30th November 2017 13:00 - 14:00
  • Where: Cole 1.33a
  • Series: Systems Seminars Series
  • Format: Seminar

Edgar Chavez (CICESE): The Metric Approach to Reverse Searching (School Seminar)

Abstract:
Searching for complex objects (e.g. images, faces, audio or video) is an everyday problem in computer science, motivated by many applications. Efficient algorithms are demanded for reverse searching, also known as query by content, in large repositories. Current industrial solutions are ad hoc, domain-dependent, hardware intensive and scale poorly. However, those disparate domains can be modelled, for indexing and searching, as a metric space. This model has been championed as a solution to generic proximity searching problems. In practice, however, the metric space approach has been limited by the amount of main memory available.

In this talk we will explore the main ideas behind this technology and present a successful example in audio indexing and retrieval. The application scales well for large amounts of audio because the representation is quite compact and the full audio streams are not needed for indexing and searching.
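The talk itself does not spell out the indexing machinery, but the classic idea behind metric-space searching can be illustrated with pivot-based filtering: precompute each object’s distance to a few fixed pivots, then use the triangle inequality to discard candidates without computing their distance to the query. The sketch below is illustrative only (the function names and the use of Euclidean distance are assumptions, not the speaker’s system):

```python
import math

def euclidean(a, b):
    # Any metric works here; Euclidean distance is used purely for illustration.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_pivot_table(objects, pivots, dist):
    # Precompute the distance from every object to each fixed pivot.
    return {i: [dist(obj, p) for p in pivots] for i, obj in enumerate(objects)}

def range_search(query, radius, objects, pivots, table, dist):
    # Triangle inequality: if |d(q, p) - d(o, p)| > r for any pivot p,
    # then o cannot lie within radius r of the query, so o is pruned
    # without ever computing d(query, o).
    q_to_pivots = [dist(query, p) for p in pivots]
    results = []
    for i, obj in enumerate(objects):
        if any(abs(qp - op) > radius for qp, op in zip(q_to_pivots, table[i])):
            continue  # pruned by the pivot table alone
        if dist(query, obj) <= radius:
            results.append(obj)
    return results
```

The pivot table is exactly the kind of compact representation the abstract alludes to: only per-object distances are stored, not the objects (or audio streams) themselves.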

Speaker Bio:
Edgar Chavez received his PhD from the Center for Mathematical Research in Guanajuato, Mexico in 1999. He founded the information retrieval group at Universidad Michoacana, where he worked until 2012. After a brief period in the Institute of Mathematics at UNAM, he joined the computer science department at CICESE in 2013, where he founded the data science group. His main research interests include access and retrieval of data and data representation, such as fingerprints and point clouds. In 2009 he obtained the Thomson Reuters award for having the most cited paper in computer science in Mexico and Latin America. In 2008 he co-founded, with Gonzalo Navarro, the conference Similarity Search and Applications, which is an international reference in the area. He has published more than 100 scientific contributions, with about 3500 citations on Google Scholar.

Event details

  • When: 5th December 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: School Seminar Series
  • Format: Seminar

Computational Approaches for Accurate, Automated and Safe Cancer Care – HIG Seminar

Modern external beam radiation therapy techniques allow the design of highly conformal radiation treatment plans that permit high doses of ionising radiation to be delivered to the tumour in order to eradicate cancer cells while sparing surrounding normal tissue. However, since it is difficult to avoid irradiation of normal tissue altogether and ionising radiation also damages normal cells, patients may develop radiation-induced toxicity following treatment. Furthermore, the highly conformal nature of the radiation treatment plans makes them particularly susceptible to geometric or targeting uncertainties in treatment delivery. Geometric uncertainties may result in under-dosage of the tumour leading to local tumour recurrence, or unacceptable morbidity from over-dosage of neighbouring healthy tissue.

I will present work in three areas that bear directly on treatment accuracy and safety in radiation oncology. The first area addresses the development of automated image registration algorithms for image-guided radiation therapy with the aim of improving the accuracy and precision of treatment delivery. The registration methods I will present are based on statistical and spectral models of signal and noise in CT and x-ray images. The second part of my talk addresses the identification of predictors of normal tissue toxicity after radiation therapy and the study of the spatial sensitivity of normal tissue to dose. I will address the development of innovative methods to accurately model the spatial characteristics of radiation dose distributions in 3D and results of the analysis of this important, but heretofore lacking, information as a contributing factor in the development of radiation-induced toxicity. Finally, given the increasing complexity of modern radiation treatment plans and a trend towards an escalation in prescribed doses, it is important to implement a safety system to reduce the risk of adverse events arising during treatment and improve clinical efficiency. I will describe ongoing efforts to formalise and automate quality assurance processes in radiation oncology.

Biography
Reshma Munbodh is currently an Assistant Professor in the Department of Diagnostic Imaging and Therapeutics at UConn Health. She received her undergraduate degree in Computer Science and Electronics from the University of Edinburgh and her PhD in medical image processing and analysis applied to cancer from Yale University. Following her PhD, she performed research and underwent clinical training in Therapeutic Medical Physics at the Memorial Sloan-Kettering Cancer Center. She is interested in the development and application of powerful analytical and computational approaches towards improving the diagnosis, understanding and treatment of cancer. Her current projects include the development of image registration algorithms for image-guided radiation therapy, the study of normal tissue toxicity following radiation therapy, longitudinal studies of brain gliomas to monitor tumour progression and treatment response using quantitative MRI analysis and the formalisation and automation of quality assurance processes in radiation oncology.

Event details

  • When: 22nd November 2017 14:00 - 15:00
  • Where: Cole 1.33a
  • Series: HIG Seminar Series
  • Format: Seminar

SRG Seminar: “Interactional Justice vs. The Paradox of Self-Amendment and the Iron Law of Oligarchy” by Jeremy Pitt

Self-organisation and self-governance offer an effective approach to resolving collective action problems in multi-agent systems, such as fair and sustainable resource allocation. Nevertheless, self-governing systems which allow unrestricted and unsupervised self-modification expose themselves to several risks, including Suber’s paradox of self-amendment (rules specify their own amendment) and Michels’ iron law of oligarchy (that the system will inevitably be taken over by a small clique and be run for its own benefit, rather than in the collective interest). This talk will present an algorithmic approach to resisting both the paradox and the iron law, based on the idea of interactional justice derived from sociology, and legal and organisational theory. The process of interactional justice operationalised in this talk uses opinion formation over a social network, with respect to a shared set of congruent values, to transform a set of individual, subjective self-assessments into a collective, relative, aggregated assessment.

Using multi-agent simulation, we present some experimental results about detecting and resisting cliques. We conclude with a discussion of some implications concerning institutional reformation and stability, ownership of the means of coordination, and knowledge management processes in ‘democratic’ systems.

Biography
Photograph of Professor Jeremy Pitt
Jeremy Pitt is Professor of Intelligent and Self-Organising Systems in the Department of Electrical & Electronic Engineering at Imperial College London, where he is also Deputy Head of the Intelligent Systems & Networks Group. His research interests focus on developing formal models of social processes using computational logic, and their application in self-organising multi-agent systems, for example fair and sustainable common-pool resource management in ad hoc and sensor networks. He also has strong interests in human-computer interaction, socio-technical systems, and the social impact of technology; with regard to the latter he has edited two books, This Pervasive Day (IC Press, 2012) and The Computer After Me (IC Press, 2014). He has been an investigator on more than 30 national and European research projects and has published more than 150 articles in journals and conferences. He is a Senior Member of the ACM, a Fellow of the BCS, and a Fellow of the IET; he is also an Associate Editor of ACM Transactions on Autonomous and Adaptive Systems and an Associate Editor of IEEE Technology and Society Magazine.

Event details

  • When: 15th November 2017 13:00 - 14:00
  • Where: Cole 1.33a
  • Series: Systems Seminars Series
  • Format: Seminar

PhD viva success: Adam Barwell

Congratulations to Adam Barwell, who successfully defended his thesis yesterday. Adam’s thesis was supervised by Professor Kevin Hammond. He is pictured with second supervisor Dr Christopher Brown, Internal examiner Dr Susmit Sarkar and external examiner Professor Susan Eisenbach from Imperial College, London.

“Ambient intelligence with sensor networks” by Lucas Amos and “Location, Location, Location: Exploring Amazon EC2 Spot Instance Pricing Across Geographical Regions” by Nnamdi Ekwe-Ekwe

Lucas’s abstract

“Indoor environment quality has a significant effect on worker productivity through a complex interplay of factors such as temperature, humidity and levels of Volatile Organic Compounds (VOCs).

In this talk I will discuss my Masters project, which used off-the-shelf sensors and Raspberry Pis to collect environmental readings at one-minute intervals throughout the Computer Science buildings. The prevalence of erroneous readings due to sensor failure, and the strategy used for the identification and correction of such faults, will be presented. Identifiable correlations between environmental variables, and attempts to model these relationships, will be discussed.

Past studies identifying the ideal environmental conditions for human comfort and productivity allow for the objective assessment of indoor environmental conditions. An adaptation of Frešer’s environment rating system will be presented, showing how VOC levels can be incorporated into assessments of environment quality and how this can be communicated to building users.”
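The abstract does not state which fault-correction strategy was used, but a common baseline for spotting isolated erroneous sensor readings is a sliding-window median filter: a reading that deviates far from the median of its neighbours is flagged and replaced. The sketch below is a hypothetical illustration of that general idea (the function name, window size and threshold are all assumptions, not the project’s actual method):

```python
import statistics

def clean_readings(values, window=5, threshold=10.0):
    # Flag a reading as faulty when it deviates from the median of its
    # neighbouring readings by more than `threshold`, and replace it
    # with that median. Window and threshold are illustrative only;
    # sensible values depend on the sensor and the sampling interval.
    cleaned = list(values)
    half = window // 2
    for i in range(len(values)):
        neighbours = values[max(0, i - half):i] + values[i + 1:i + half + 1]
        if not neighbours:
            continue
        med = statistics.median(neighbours)
        if abs(values[i] - med) > threshold:
            cleaned[i] = med
    return cleaned
```

For example, a one-off spike of 85 °C in an otherwise stable ~20 °C temperature trace would be replaced by the local median, while the surrounding genuine readings pass through unchanged.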

Nnamdi’s abstract

“Cloud computing is becoming an almost ubiquitous part of the computing landscape. For many companies today, moving their entire infrastructure and workloads to the cloud reduces complexity and time to deployment, and saves money. Spot Instances, a subset of Amazon’s cloud computing infrastructure (EC2), expand on this. They allow a user to bid on spare compute capacity in Amazon’s data centres at heavily discounted prices. If demand were ever to increase such that the user’s maximum bid is exceeded, their compute instance is terminated.

In this work, we conduct one of the first detailed analyses of how location affects the overall cost of deployment of a spot instance. We simultaneously examine the reliability of pricing data of a spot instance, and whether a user can be confident that their instance has a low risk of termination.

We analyse spot pricing data across all available Amazon Web Services regions for 60 days on a variety of instance types. We find that location does play a critical role in spot instance pricing and also that pricing differs depending on the granularity of the location – from a more coarse-grained AWS region to a more fine-grained Availability Zone within a region. We relate the pricing differences we find to the price’s stability, confirming whether we can be confident in the bid prices we make.

We conclude by showing that it is very possible to run workloads on Spot Instances, achieving both a very low risk of termination and a very low hourly cost.”
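The relationship the abstract draws between price history, bid level and termination risk can be sketched very simply: the fraction of observed spot prices above a candidate bid is a crude proxy for how often an instance at that bid would have been interrupted. This is an illustrative toy, not the authors’ analysis method (the function names and the 5% risk tolerance are assumptions):

```python
def termination_risk(price_history, bid):
    # Fraction of observed spot prices that exceeded the bid: a crude
    # proxy for how often an instance bidding at this level would have
    # been terminated over the observation window.
    if not price_history:
        raise ValueError("need at least one price observation")
    exceeded = sum(1 for p in price_history if p > bid)
    return exceeded / len(price_history)

def cheapest_safe_bid(price_history, max_risk=0.05):
    # Lowest candidate bid (drawn from the observed prices) whose
    # historical termination risk stays within the tolerance.
    for bid in sorted(set(price_history)):
        if termination_risk(price_history, bid) <= max_risk:
            return bid
    return max(price_history)
```

Run per region (or per Availability Zone), a comparison like this is the kind of location-sensitive trade-off between hourly cost and termination risk that the talk examines at much greater depth.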

Event details

  • When: 9th November 2017 13:00 - 14:00
  • Where: Cole 1.33a
  • Series: Systems Seminars Series
  • Format: Seminar

“A Decentralised Multimodal Integration of Social Signals: A Bio-Inspired Approach” by Esma Benssassi and “Plug and Play Bench: Simplifying Big Data Benchmarking Using Containers” by Sheriffo Ceesay

Esma’s abstract

The ability to integrate information from different sensory modalities in a social context is crucial for achieving an understanding of social cues and gaining useful social interaction and experience. Recent research has focused on multi-modal integration of social signals from visual, auditory, haptic or physiological data. Different data fusion techniques have been designed and developed; however, the majority have not achieved significant accuracy improvements in recognising social cues compared to uni-modal social signal recognition. One possible limitation is that these existing approaches lack sufficient capacity to model the various types of interaction between different modalities, and have not been able to leverage the advantages of multi-modal signals by treating each as complementary to the others. We introduce ideas for creating a decentralised model for social signal integration inspired by computational models of multi-sensory integration in neuroscience and the perception of social signals in the human brain.

Sheriffo’s abstract

The recent boom of big data, coupled with the challenges of its processing and storage, gave rise to the development of distributed data processing and storage paradigms like MapReduce, Spark, and NoSQL databases. With the advent of cloud computing, processing and storing such massive datasets on clusters of machines is now feasible with ease. However, there are few tools and approaches that users can rely on to gauge and comprehend the performance of their big data applications deployed locally on clusters, or in the cloud. Researchers have started exploring this area by providing benchmarking suites suitable for big data applications. However, many of these tools are fragmented, complex to deploy and manage, and do not provide transparency with respect to the monetary cost of benchmarking an application.

In this talk, I will present Plug And Play Bench (PAPB, https://github.com/sneceesay77/papb): an infrastructure-aware abstraction built to integrate and simplify the process of big data benchmarking. PAPB automates the tedious process of installing, configuring and executing common big data benchmark workloads by containerising the tools and settings based on the underlying cluster deployment framework. Our proof-of-concept implementation utilises HiBench as the benchmark suite, HDP as the cluster deployment framework and Azure as the cloud platform. The talk will further illustrate the inclusion of cost metrics based on the underlying Microsoft Azure cloud platform.
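The cost metric mentioned at the end of the abstract can be pictured as a simple product of cluster size, runtime and per-node hourly rate. The sketch below is a minimal illustration of that idea, not PAPB’s actual implementation; the function names are hypothetical and the rates are placeholders rather than real Azure prices:

```python
def benchmark_cost(runtime_seconds, node_count, hourly_rate_per_node):
    # Monetary cost of one benchmark run: nodes x hours x hourly rate.
    # The rate is a placeholder, not an actual Azure price.
    hours = runtime_seconds / 3600.0
    return node_count * hours * hourly_rate_per_node

def cost_per_workload(results, hourly_rate_per_node):
    # results maps a workload name to (runtime_seconds, node_count),
    # e.g. as measured by a benchmark suite such as HiBench.
    return {name: round(benchmark_cost(rt, n, hourly_rate_per_node), 4)
            for name, (rt, n) in results.items()}
```

Attaching a figure like this to every benchmark run is what makes the monetary cost of benchmarking transparent, which the abstract identifies as a gap in existing tools.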

Event details

  • When: 26th October 2017 13:00 - 14:00
  • Where: Cole 1.33a
  • Series: Systems Seminars Series
  • Format: Seminar

Distinguished Lecture Series 2017: Professor Ursula Martin

On October 10th, we were delighted to welcome back Professor Ursula Martin from the University of Oxford, to deliver the semester one distinguished lecture series in the Byre Theatre. Earlier in her career, Prof Martin was a professor of Computer Science here, and was in fact only the second female professor in the history of the University of St Andrews.

The lectures covered numerous aspects of the history of computing. A particular highlight was to hear about Ada Lovelace’s early work, on Ada Lovelace day. As a trained mathematician and computer scientist who has studied her papers in detail, Ursula has discovered new insights about Ada’s education and work with Charles Babbage. She also focussed on aspects of computing history that are often ignored, such as history of computing in countries other than the USA or UK. Another aspect was how, even today, the contribution of women in history is often ignored, which Ursula herself has been able to correct in some cases.

The well-received lectures centred around what every computer scientist should know about computer history. Professor Martin is pictured at various stages throughout the lectures and with Head of School, Prof Simon Dobson, DLS Coordinator, Prof Ian Gent and Principal and Vice-Chancellor, Prof Sally Mapstone. Read more about Professor Martin and the individual lectures in what every computer scientist should know about computer history. Recordings of each lecture can be viewed at the end of this post.

Images courtesy of Ryo Yanagida.

Lecture 1 – The Early History of Computing: Ada Lovelace, Charles Babbage and the early history of programming.

Lecture 2 – Case Study: Alan Turing, Grace Hopper, and the history of programming.

Lecture 3 – What do historians of computing do, and why is it important for computer scientists today?

SRG Seminar: “Adaptive Multisite Computation Offloading in Mobile Clouds” by Dawand Sulaiman and “Topological Ranking-Based Resource Scheduling for Multi-Accelerator Systems” by Teng Yu

Dawand’s abstract

The concept of using cloud hosted infrastructure as a means to overcome the resource constraints of mobile devices is known as Mobile Cloud Computing (MCC), and allows applications to run partially on the device, and partially on a remote cloud instance, thereby overcoming any device-specific resource constraints. However, as smart phones and tablets gain more CPU power and longer battery life, the meaning of MCC gradually changes. Instead of being fully dependent on the cloud, a number of nearby devices can be used to coordinate and distribute content and resources in a decentralised manner; this is known as Mobile Ad hoc Cloud Computing. Mobile devices with less computational power and lower battery life can leverage nearby mobile devices to run resource-intensive applications. Therefore, more efficient and reliable methodologies need to be explored for resource-hungry and real-time applications such as face recognition, data-intensive, and augmented reality mobile applications.
We present a unified framework which allows each mobile device within the shared environment to intelligently offload its computation to other external platforms. For the individual mobile devices, it is important to make the offloading decision based on network conditions, load of other machines, and mobile device’s own constraints (e.g., mobility and battery). Moreover, to achieve a global optimal task completion time for tasks from all the mobile devices, it is necessary to devise a task scheduling solution that schedules offloaded tasks in real time. The offloading decision engine needs to adapt to the dynamic changes in both the host device and connected nearby and remote devices.
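The abstract lists the inputs to the offloading decision (network conditions, remote load, and the device’s own constraints such as battery). A minimal sketch of such a decision rule is shown below; it is a hypothetical illustration, not the presented framework, and the function name, thresholds and battery policy are all assumptions:

```python
def should_offload(local_runtime_s, remote_runtime_s,
                   payload_bytes, bandwidth_bps, battery_level):
    # Offload when remote execution plus transfer time beats local
    # execution, unless the battery is critically low, in which case
    # offloading is preferred to conserve local energy. The 15%
    # battery threshold is purely illustrative.
    transfer_s = payload_bytes * 8 / bandwidth_bps
    if battery_level < 0.15:
        return True  # conserve local energy
    return remote_runtime_s + transfer_s < local_runtime_s
```

A real decision engine, as the abstract notes, must additionally adapt at runtime as the network, the nearby devices and the host’s own state change.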

Teng’s abstract

Accelerators are becoming increasingly prevalent in distributed computation. FPGAs have been shown to be fast and power efficient for particular tasks, yet scheduling on multi-accelerator systems is challenging when workloads vary significantly in granularity, in terms of task size and/or the number of computational units required.
We present a novel approach for dynamically scheduling tasks on networked multi-accelerator systems which maintains high performance, even in the presence of irregular jobs. Our topological ranking-based scheduling allows realistic irregular workloads to be processed while maintaining a significantly higher level of performance than existing schedulers.

Event details

  • When: 12th October 2017 13:00 - 14:00
  • Where: Cole 1.33b
  • Series: Systems Seminars Series
  • Format: Seminar