- When: 1st November 2018 13:00 - 14:00
- Where: Cole 1.33b
- Series: Systems Seminars Series
- Format: Seminar, Talk
Abstract: Record linkage is the process of identifying records that refer to the same real-world entities in situations where entity identifiers are unavailable. Records are linked on the basis of similarity between common attributes, with every pair classified as a link or non-link depending on its degree of similarity. Record linkage is usually performed as a three-step process: groups of similar candidate records are first identified using indexing, pairs within the same group are then compared in more detail, and finally the pairs are classified. Even state-of-the-art indexing techniques, such as Locality Sensitive Hashing, have potential drawbacks: they may fail to group together some truly matching records with high similarity, or conversely group records with low similarity, leading to high computational overhead. We propose using metric space indexing to perform complete record linkage. This yields a parameter-free process that combines indexing, comparison and classification into a single step, delivering complete and efficient record linkage. Our experimental evaluation on real-world datasets from several domains shows that linkage using metric space indexing can yield better quality than current indexing techniques, with similar execution cost, and without the need for domain knowledge or trial and error to configure the process.
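The three-step pipeline the abstract describes (indexing, comparison, classification) can be sketched as a toy in Python. This is an illustrative example, not the authors' implementation: the records, blocking key, attribute weights and threshold are all invented for demonstration.

```python
from difflib import SequenceMatcher
from itertools import combinations

def sim(a, b):
    """Similarity of two strings in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

records = [
    {"id": 1, "name": "John Smith", "city": "Dundee"},
    {"id": 2, "name": "Jon Smith",  "city": "Dundee"},
    {"id": 3, "name": "Mary Jones", "city": "Perth"},
]

# Step 1: indexing -- group candidate records by a blocking key
# (here, the first letter of the surname) to avoid comparing all pairs.
blocks = {}
for r in records:
    key = r["name"].split()[-1][0]
    blocks.setdefault(key, []).append(r)

# Steps 2 and 3: compare pairs within each block in detail, then
# classify each pair as link / non-link against a similarity threshold.
THRESHOLD = 0.85
links = []
for block in blocks.values():
    for a, b in combinations(block, 2):
        score = 0.7 * sim(a["name"], b["name"]) + 0.3 * sim(a["city"], b["city"])
        if score >= THRESHOLD:
            links.append((a["id"], b["id"]))

print(links)  # records 1 and 2 are classified as a link
```

The indexing step is exactly where the drawbacks mentioned above arise: a poor blocking key can separate true matches (a surname typo changes the block) or lump together many dissimilar records, which is the motivation for replacing it with metric space indexing.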
Venue: The Old Course Hotel (Hall of Champions)
9:30 Lecture 1
10:30 Break with Coffee
11:15 Lecture 2
12:15 Break for Lunch (not provided)
14:15 Lecture 3
Lecture 1: Introduction to Scalable Intelligent Systems
Lecture 2: Foundations for Scalable Intelligent Systems
Lecture 3: Implications of Scalable Intelligent Systems
Professor Carl Hewitt is the creator (together with his students and other colleagues) of the Actor Model of computation, which influenced the development of the Scheme programming language and the π calculus, and inspired several other systems and programming languages. The Actor Model is in widespread industrial use including eBay, Microsoft, and Twitter. For his doctoral thesis, he designed Planner, the first programming language based on pattern-invoked procedural plans.
Professor Hewitt’s recent research centers on the area of Inconsistency Robustness, i.e., system performance in the face of continual, pervasive inconsistencies (a shift from the previously dominant paradigms of inconsistency denial and inconsistency elimination, i.e., to sweep inconsistencies under the rug). ActorScript and the Actor Model on which it is based can play an important role in the implementation of more inconsistency-robust information systems. Hewitt is an advocate in the emerging campaign against mandatory installation of backdoors in the Internet of Things.
Hewitt is Board Chair of iRobust™, an international scientific society for the promotion of the field of Inconsistency Robustness. He is also Board Chair of Standard IoT™, an international standards organization for the Internet of Things, which is using the Actor Model to unify and generalize emerging standards for IoT. He has been a Visiting Professor at Stanford University and Keio University and is Emeritus in the EECS department at MIT.
A project to build the technology stack outlined in these lectures can bring Scalable Intelligent Systems to fruition by 2025. Scalable Intelligent Systems have the following characteristics:
The technology stack for Scalable Intelligent Systems is outlined below:
For example, pain management could greatly benefit from Scalable Intelligent Systems. The complexities of dealing with pain have led to the current opioid crisis. According to Eric Rodgers, PhD, director of the VA’s Office of Evidence Based Practice:
“The use of opioids has changed tremendously since the 1990s, when we first started formulating a plan for guidelines. The concept then was that opioid therapy was an underused strategy for helping our patients and we were trying to get our providers to use this type of therapy more. But as time went on, we became more aware of the harms of opioid therapy and the development of pill mills. The problems got worse.
It’s now become routine for providers to check the state databases to see if there’s multi-sourcing — getting prescriptions from other providers. Providers are also now supposed to use urine drug screenings and, if there are unusual results, to do a confirmation. [For every death from an opioid overdose] there are 10 people who have a problem with opioid use disorder or addiction. And for every addicted person, we have another 10 who are misusing their medication.”
Pain management requires much more than just prescribing opioids, which are often critical for short-term and less often longer-term use. [Coker 2015; Friedberg 2012; Holt 2017; Marchant 2017; McKinney 2015; Spiegel 2018; Tedesco, et al. 2017; White 2017] Organizational aspects play an important role in pain management. [Fagerhaugh and Strauss 1977]
Virtualisation is a powerful tool used for the isolation, partitioning, and sharing of physical computing resources. Employed heavily in data centres, becoming increasingly popular in industrial settings, and used by home users for running alternative operating systems, hardware virtualisation has seen a lot of attention from hardware and software developers over the last ten to fifteen years.
From the hardware side, this takes the form of so-called hardware-assisted virtualisation, which appears in technologies such as Intel VT, AMD-V and the ARM Virtualization Extensions. However, most forms of hardware virtualisation are typically same-architecture virtualisation, where virtual versions of the host physical machine are created, providing very fast isolated instances of the physical machine in which entire operating systems can be booted. But there is a distinct lack of hardware support for cross-architecture virtualisation, where the guest machine architecture is different from that of the host.
I will talk about my research in this area, and describe the cross-architecture virtualisation hypervisor Captive that can boot unmodified guest operating systems, compiled for one architecture in the virtual machine of another.
I will talk about the challenges of full system simulation (such as memory, instruction, and device emulation), our approaches to this, and how we can efficiently map guest behaviour to host behaviour.
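As a toy illustration of mapping guest behaviour to host behaviour (not how Captive actually works, which the talk covers), the sketch below interprets a made-up guest instruction set by dispatching each guest operation onto an equivalent host-level operation; real cross-architecture hypervisors do this far more efficiently via binary translation.

```python
# A tiny guest "program" in an invented three-operand ISA.
GUEST_PROGRAM = [
    ("mov", "r0", 5),
    ("add", "r0", 3),
    ("mov", "r1", 2),
    ("mul", "r0", "r1"),
]

def run(program):
    """Emulate the guest program, mapping each guest op to a host op."""
    regs = {}

    def val(x):
        # An operand is either a register name or an immediate.
        return regs[x] if isinstance(x, str) else x

    for op, dst, src in program:
        if op == "mov":
            regs[dst] = val(src)
        elif op == "add":
            regs[dst] += val(src)
        elif op == "mul":
            regs[dst] *= val(src)
        else:
            raise ValueError(f"unknown guest instruction: {op}")
    return regs

print(run(GUEST_PROGRAM))  # r0 = (5 + 3) * 2 = 16
```

Pure interpretation like this is simple but slow; the efficiency challenge the talk addresses is translating hot guest code into native host code, and handling memory and device accesses, rather than dispatching one guest instruction at a time.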
Finally, I will discuss our plans for open-sourcing the hypervisor, the work we are currently doing and what future work we have planned.
The Carpentries (https://carpentries.org/) is a global community of volunteers who teach foundational coding and data science skills to researchers worldwide through Software Carpentry, Data Carpentry, and Library Carpentry workshops. Having been involved in the Carpentries since 2015, I have organised and taught at several workshops, developed new lessons, and trained new Carpentry instructors. In my talk I will discuss the Carpentries' pedagogical approach and consider its applicability to teaching Computer Science students.
2016 was a weird year for Carron. On the plus side she was one of twelve women in Computing and Mathematics to receive a Suffrage Science Award, recognising both scientific achievement and ability to inspire others. She’s involved in lots of work to promote careers in science for women, having initiated and led the Athena SWAN programme of actions at Stirling for four years, and started building Cygnets: a good practice network of UK computing departments engaged in gender equality work. But 2016 was also one of the worst years of her life, with lots of stress, and consequent depression. She’ll talk about her journey from student to professor, with some thoughts about the people and qualities that lead to success, and how those qualities can also be enemies. This should be relevant for everyone, no matter their career stage, academic or professional services, or discipline. (Spoiler alert: she will probably have more questions than answers in this talk!)
Carron Shankland is a professor in Computing Science at the University of Stirling. Her research is about understanding the behaviour of biological systems through mathematical and computational models. Current projects include using data mining to understand disease dynamics, and modelling cancer therapies to try to understand how the actions of therapies might combine to greater effect. As a senior academic, she believes in participating in governance: she’s had positions on Academic Council and University Court, and was deputy head of Natural Sciences. Carron is passionate about the promotion of careers in science for women, having initiated and led the Athena SWAN programme of actions at Stirling 2012-2016. She chairs the BCS Women in Computing Research Group and is building DiVERct: a good practice network of ICT (computing and electronic and electrical engineering) departments engaged in diversity and inclusion work. In 2017 she won one of the first Scottish Women’s Awards for services to science and technology, and in 2016 she was one of twelve women in Computing and Mathematics to receive a Suffrage Science Award, recognising both scientific achievement and ability to inspire others. When she’s not doing computing science (or admin!) she likes to play classical chamber music (she plays clarinet and viola), chop things down in the garden, or visit galleries and coffee shops with her partner Pat (they’ve had a civil partnership since 2006).
This talk will concentrate on some successful applications of search-based and neural network algorithms in two distinctly different areas of real-time embedded systems development: scheduling and timing analysis, and the Internet of Things. It will then motivate some significant challenges for the artificial intelligence community that surprised a user from another research community. The talk will highlight how seemingly simple-to-use, powerful solutions would benefit from more traditional engineering lifecycles.
Dr Iain Bate is a Reader within the Real-Time Systems (RTS) Research Group at York. His main interests include scheduling and timing analysis, and design assurance to achieve dependable operation even when there are complex failures. His original doctoral work on scheduling and timing analysis was first patented and then adopted by Rolls-Royce for use on current aircraft projects. His work on timing analysis has been used on a large fast jet project.
Recently he has worked on applying the principles of Dependable Real-Time Systems (DRTS) to more complex systems such as automotive systems and Wireless Sensor Networks (WSN) including for environmental monitoring. In particular he has concentrated on producing models of aspects of systems through the building of systematic methods based around multi-variate statistical models. Dr Bate has published over 100 papers and 30 industrial reports, and secured and managed over £5 million worth of grants. He was the Editor-in-Chief for the Journal of Systems Architecture for 10 years.
A job candidate has been pre-selected for shortlist by a neural net; an autonomous car has suddenly changed lanes, almost causing an accident; the intelligent fridge has ordered an extra pint of milk. From the life-changing or life-threatening to day-to-day living, decisions are made by computer systems on our behalf. If something goes wrong, or even when the decision appears correct, we may need to ask the question, “why?” In the case of failures we need to know whether it is the result of a bug in the software; a need for more data, sensors or training; or simply one of those things: a decision correct in the context that happened to turn out badly. Even if the decision appears acceptable, we may wish to understand it for our own curiosity, peace of mind, or for legal compliance. In this talk I will pick up threads of research dating back to early work in the 1990s on gender and ethnic bias in black-box machine-learning systems, as well as more recent developments such as deep learning and concerns such as those that gave rise to the EPSRC human-like computing programme. In particular I will present nascent work on an AIX Toolkit (AI explainability): a structured collection of techniques designed to help developers of intelligent systems create more comprehensible representations of their reasoning. Crucial to the AIX Toolkit is the understanding that human–human explanations are rarely utterly precise or reproducible, but they are sufficient to inspire confidence and trust in a collaborative endeavour.
Alan Dix is Director of the Computational Foundry at Swansea University. Previously he spent 10 years in a mix of academic and commercial roles, most recently as Professor in the HCI Centre at the University of Birmingham and Senior Researcher at Talis. He has worked in human–computer interaction research since the mid 1980s, and is the author of one of the major international textbooks on HCI as well as of over 450 research publications ranging from formal methods to design creativity, including some of the earliest papers in the HCI literature on topics such as privacy, mobile interaction, and gender and ethnic bias in intelligent algorithms. Issues of space and time in user interaction have been a long-term interest, from his “Myth of the Infinitely Fast Machine” in 1987, to his co-authored book, TouchIT, on physicality in a digital age, due to be published in 2018. Alan organises a twice-yearly workshop, Tiree Tech Wave, on the small Scottish island where he has lived for 10 years, and where he has been engaged in a number of community research projects relating to heritage, communications, energy use and open data. In 2013, he walked the complete periphery of Wales, over a thousand miles. This was a personal journey, but also a research expedition exploring the technology needs of the walker and the people along the way. The data from this, including 19,000 images, about 150,000 words of geo-tagged text, and many gigabytes of bio-data, are available in the public domain as an ‘open science’ resource. Alan’s new role at the Computational Foundry has brought him back to his homeland. The Computational Foundry is a £30 million initiative to boost computational research in Wales with a strong focus on creating social and economic benefit. Digital technology is at a bifurcation point: it could simply reinforce existing structures of industry, government and health, or it could allow us to radically reimagine and transform society.
The Foundry is built on the belief that addressing human needs and human values requires and inspires the deepest forms of fundamental science.
Update: The slides from the talk are available here.
Visualization and the Universe: How and why astronomers, doctors, and you need to work together to understand the world around us.
Astronomy has long been a field reliant on visualization. First, it was literal visualization—looking at the Sky. Today, though, astronomers are faced with the daunting task of understanding gigantic digital images from across the electromagnetic spectrum and contextualizing them with hugely complex physics simulations, in order to make more sense of our Universe. In this talk, I will explain how new approaches to simultaneously exploring and explaining vast data sets allow astronomers—and other scientists—to make sense of what the data have to say, and to communicate what they learn to each other, and to the public. In particular, I will talk about the evolution of the multi-dimensional linked-view data visualization environment known as glue (glueviz.org) and the Universe Information System called WorldWide Telescope (worldwidetelescope.org). I will explain how glue is being used in medical and geographic information sciences, and I will discuss its future potential to expand into all fields where diverse, but related, multi-dimensional data sets can be profitably analyzed together. Toward the aim of bringing the insights to be discussed to a broader audience, I will also introduce the new “10 Questions to Ask When Creating a Visualization” website, 10QViz.org.
Alyssa Goodman is the Robert Wheeler Willson Professor of Applied Astronomy at Harvard University, and a Research Associate of the Smithsonian Institution. Goodman’s research and teaching interests span astronomy, data visualization, and online systems for research and education. Goodman received her undergraduate degree in Physics from MIT in 1984 and a Ph.D. in Physics from Harvard in 1989. Goodman was awarded the Newton Lacy Pierce Prize from the American Astronomical Society in 1997, became full professor at Harvard in 1999, was named a Fellow of the American Association for the Advancement of Science in 2009, and was chosen as Scientist of the Year by the Harvard Foundation in 2015. Goodman has served as Chair of the Astronomy Section of the American Association for the Advancement of Science and on the National Academy’s Board on Research Data and Information, and she currently serves on both the IAU and AAS Working Groups on Astroinformatics and Astrostatistics. Goodman’s personal research presently focuses primarily on new ways to visualize and analyze the tremendous data volumes created by large and/or diverse astronomical surveys, and on improving our understanding of the structure of the Milky Way Galaxy. She is working closely with colleagues at the American Astronomical Society, helping to expand the use of the WorldWide Telescope program, in both research and education.
The goals of building software in a professional environment are vastly different from those of a course assignment. In this talk, we’ll cover the differences between the environments, best practices during development and tips from years of experience with troubleshooting production issues.
Becky Plummer is the software engineering team leader responsible for content collaboration applications for the Bloomberg Terminal, and the Global Head of the Engineering Champions Program. Becky made a name for herself as a software engineer by creating the trade confirmation alerting system, fully crash-recoverable, for the Bloomberg Fixed Income Electronic Trading platform. She created the Engineering Champions program in 2011 to empower developers to influence change and collaborate on improving the development environment tools. She has run both small-scale implementation projects and cross-engineering projects involving hundreds of developers. She is a graduate of the University of Maine and Columbia University, with a Master’s degree in Computer Science. She joined Bloomberg LP in New York in 2006 and moved to London in 2014 to gain a global perspective.