Clouded | Uncovering The Culture Of Cloud

Congratulations to Blesson Varghese, who features in a newly released documentary produced in collaboration with Hewlett Packard Enterprise and VMware.

The original film uncovers the realities of cloud technology and its effects on both business and society. Many have grown confused about their relationship with a rapidly expanding cloud market, while others are reflecting on their strategies. The question is: when did cloud become so ‘Clouded’?

Clouded confronts some of the uncomfortable truths that exist in today’s cloud culture. This is a journey of discovery that uncovers topics which undoubtedly require further thought by governments, enterprise businesses and technology executives.

The full documentary can be watched at https://www.consciouslyhybrid.com

Research Away Day October 2022

Thanks to everyone who attended the School Research Away Day. The day was very informative and generated lots of ideas.

It was lovely to meet our Ukrainian Visiting Academics Maryna Novozhylova and Olga Chub.

We would also like to thank our guests: Ricky Shek from Careers, Kirsty Ross from RIS, and Adeel Shafi and Jayshree Johnstone from Business Development.

PhD Viva Success: Chawanangwa Lupafya

Please join me in congratulating Chawanangwa Lupafya, who has just passed his PhD viva subject to minor corrections.

Chawanangwa is supervised by Dr Dharini Balasubramaniam.

Special thanks to Dr Ruth Hoffman for serving as internal examiner and Dr Rami Bahsoon from the University of Birmingham for serving as external examiner.


PhD Viva Success: Yasir Alguwaifli

Please join me in congratulating Yasir Alguwaifli, who has just passed his PhD viva subject to minor corrections.

Yasir, who is supervised by Christopher Brown, has provided his thesis abstract below.

Thanks to Özgür Akgün for serving as internal examiner and Prof Christoph Kessler from Linköping University for serving as the external examiner.

Controlling energy consumption has always been a necessity in many computing contexts, as the resources that provide that energy are limited, be it a battery supplying power to a Single Board Computer (SBC)/System-on-a-Chip (SoC), an embedded system, a drone, a phone, or another low/limited-energy device, or a large cluster of machines that process extensive computations requiring multiple resources, such as a Non-Uniform Memory Access (NUMA) system. The ability to accurately predict the energy consumption of such devices is crucial in many fields. Furthermore, different types of languages, e.g. Haskell and C/C++, exhibit different behavioural properties, such as strict vs. lazy evaluation, garbage collection vs. manual memory management, and different parallel runtime behaviours. In addition, most software developers do not write software with energy consumption as a goal, largely due to the lack of generalised tooling to help them optimise and predict the energy consumption of their software. There is therefore a need to predict energy consumption in a generalised way, across different types of languages, without relying on specific program properties.

We construct several statistical models based on parallel benchmarks from two different programming paradigms, namely Haskell and C/C++, using regression modelling techniques such as Non-negative Least Squares (NNLS), Random Forests, and Lasso and Elastic-Net Regularized Generalized Linear Models (GLMNET). The statistical models are assessed over a complete set of benchmarks that behave similarly in both Haskell and C/C++. In addition to assessing the statistical models, we develop meta-heuristic algorithms to predict the energy consumed in parallel benchmarks from Haskell’s Nofib and C/C++’s Princeton Application Repository for Shared-Memory Computers (PARSEC) suites, for a range of implementations in PThreads, OpenMP and Intel’s Threading Building Blocks (TBB).

The results show that benchmarks with high scalability and performance in parallel execution can have their energy consumption predicted, and even optimised, by selecting the best configuration for the desired results. We also observe that, even for benchmarks with degraded performance, high-core-count execution can still be predicted to the nearest configuration that produces the lowest energy sample. Additionally, the meta-heuristic technique can be employed in a language- and architecture-agnostic way to predict energy consumption, rather than requiring hand-tuned models for specific architectures and/or benchmarks. Although meta-heuristic sampling provided acceptable levels of accuracy, combining the statistical models with the meta-heuristic algorithms proved challenging to optimise: except for low to medium accuracy levels for the Genetic algorithm, the combined approach demonstrated limited to poor accuracy.
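For readers curious about the kind of regression modelling mentioned in the abstract, the sketch below is a minimal, hypothetical illustration of non-negative least squares (NNLS) fitting for energy prediction. It is not taken from the thesis: the feature choices (core count and runtime) and all numbers are invented purely for illustration.

```python
# Hypothetical sketch of NNLS-based energy prediction, loosely in the spirit of
# the approach described above. Features and measurements are made up.
import numpy as np
from scipy.optimize import nnls

# Each row: [intercept term, core count, runtime in seconds] for one benchmark run.
X = np.array([
    [1.0,  1, 120.0],
    [1.0,  2,  65.0],
    [1.0,  4,  36.0],
    [1.0,  8,  21.0],
    [1.0, 16,  14.0],
])
# Measured energy in joules for each run (invented values).
y = np.array([900.0, 520.0, 310.0, 205.0, 170.0])

# Fit non-negative coefficients so that energy ≈ X @ coeffs, with coeffs >= 0.
coeffs, residual = nnls(X, y)
print("coefficients:", coeffs)

# Predict energy for an unseen configuration (e.g. 12 cores, 16 s runtime).
x_new = np.array([1.0, 12, 16.0])
print("predicted energy (J):", x_new @ coeffs)
```

In the thesis itself the models are trained on measurements from benchmark suites such as Nofib and PARSEC rather than invented figures, and Random Forest and GLMNET models are fitted alongside NNLS to the same kind of feature data.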

Seminar Talk from a SICSA visitor (Daniel Garijo) Friday 10 June, 11.00am

Accelerating Research Software Understandability Through Knowledge Capture

Daniel Garijo

Summary: Research software is key to understanding, reproducing and reusing existing work in many disciplines, ranging from Geosciences to Astronomy or Artificial Intelligence. However, research software is usually difficult to find, reuse, compare and understand due to its disconnected documentation (dispersed across manuals, readme files, web sites, and code comments) and a lack of structured metadata to describe it. These problems affect not only researchers, but also students who aim to compare published findings and policy makers seeking clarity on a scientific result. In this talk I will present the main research challenges and our recent efforts towards facilitating software understanding by automatically capturing Knowledge Graphs from software documentation and code.

Short bio: Dr. Daniel Garijo Verdejo is a Distinguished Researcher at the Ontology Engineering Group of Universidad Politécnica de Madrid (UPM). Previously, he held a Research Computer Scientist position at the Information Sciences Institute of the University of Southern California, in Los Angeles. Daniel’s research activities focus on e-Science and Knowledge Capture, specifically on how to increase the understandability of research software and scientific workflows by creating Knowledge Graphs from their documentation and provenance (i.e., steps, outputs, inputs, intermediate results).

For this talk we will use a hybrid approach: in person (Jack Cole, 1.33) and online via Teams.

If you wish to attend, it would be helpful if you could register on Eventbrite to let us know whether you intend to attend in person or online.

All Welcome!