Alumnus Paul Dobra will host a short session about Salesforce graduate opportunities on Friday 8th December at 11am in Jack Cole 1.33. The target audience is MSc, MSci and Honours students.
Event details
- When: 8th December 2017 11:00
- Where: Cole 1.33
The School will celebrate more student successes and accomplishments next week, when our recent MSc and PhD students graduate. We look forward to toasting their success at our graduation reception in the School of Computer Science next Thursday afternoon, between 12 noon and 3.30pm. Over the years, graduation has involved cakes, fizz, laughter, changeable weather and lots of reminiscing, as pictured below. For family and friends who can’t make it to graduation ceremonies, the University broadcasts each graduation ceremony live.
Abstract
The core problem in many sensing applications is that we’re trying to infer high-resolution information from low-resolution observations, and to keep our trust in this information as the sensors degrade. How can we do this in a principled way? There’s an emerging body of work on using topology to manage both sensing and analytics, and in this talk I try to get a handle on how this might work for some of the problems we’re interested in. I will present an experiment we did to explore these ideas, which highlights some fascinating problems.
Abstract:
Searching for complex objects (e.g. images, faces, audio or video) is an everyday problem in computer science, motivated by many applications. Efficient algorithms are needed for reverse searching, also known as query by content, in large repositories. Current industrial solutions are ad hoc, domain-dependent, hardware-intensive and scale poorly. However, these disparate domains can all be modelled, for indexing and searching, as a metric space. This model has been championed as a solution to generic proximity searching problems. In practice, however, the metric space approach has been limited by the amount of main memory available.
In this talk we will explore the main ideas behind this technology and present a successful example in audio indexing and retrieval. The application scales well to large amounts of audio because the representation is quite compact and the full audio streams are not needed for indexing and searching.
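To make the metric space model concrete, here is a minimal sketch (in Python, not the speaker's system) of pivot-based range search: a handful of pivot objects and the triangle inequality discard most candidates without ever computing their distance to the query. The `distance` function, the random pivot choice and the in-memory table are illustrative assumptions; a real index would build and compress the table once, offline.

```python
import random

def pivot_search(database, query, radius, distance, num_pivots=8):
    """Range search in a metric space using pivot-based filtering.

    `distance` must be a metric (non-negative, symmetric, and obeying the
    triangle inequality). In a real index the pivot table would be built
    once, offline; it is rebuilt here only to keep the sketch short.
    """
    pivots = random.sample(database, min(num_pivots, len(database)))
    # Precomputed distances from every object to every pivot.
    table = [[distance(x, p) for p in pivots] for x in database]
    # Distances from the query to each pivot: the only mandatory distance calls.
    q_to_p = [distance(query, p) for p in pivots]

    results = []
    for x, x_to_p in zip(database, table):
        # Triangle inequality: |d(q, p) - d(x, p)| <= d(q, x) for every pivot p,
        # so the largest such gap is a lower bound on d(q, x).
        lower_bound = max(abs(qp - xp) for qp, xp in zip(q_to_p, x_to_p))
        if lower_bound > radius:
            continue                      # discarded without touching x
        if distance(query, x) <= radius:
            results.append(x)
    return results

# Toy usage: compact binary audio fingerprints compared with Hamming distance.
fingerprints = [tuple(random.getrandbits(1) for _ in range(64)) for _ in range(1000)]
hamming = lambda a, b: sum(u != v for u, v in zip(a, b))
hits = pivot_search(fingerprints, fingerprints[0], radius=5, distance=hamming)
```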
Speaker Bio:
Edgar Chavez received his PhD from the Center for Mathematical Research in Guanajuato, Mexico in 1999. He founded the information retrieval group at Universidad Michoacana, where he worked until 2012. After a brief period at the Institute of Mathematics at UNAM, he joined the computer science department at CICESE in 2013, where he founded the data science group. His main research interests include access, retrieval and representation of data such as fingerprints and point clouds. In 2009 he received the Thomson Reuters award for the most cited computer science paper in Mexico and Latin America. In 2008 he co-founded, with Gonzalo Navarro, the conference on Similarity Search and Applications, which is an international reference in the area. He has published more than 100 scientific contributions, with about 3,500 citations on Google Scholar.
Type systems are often presented in a declarative style, but with an emphasis on ensuring that there is some sort of type synthesis algorithm. Since Pierce and Turner’s “Local Type Inference” system, however, there has been a small but growing alternative: bidirectional typing, where types are synthesized for variables and elimination forms, but must always be proposed in advance for introduction forms and checked. You can still get away without any type annotations, as long as you write only normal forms. But where’s the fun in that? If you want to write terms that compute, you need to write type annotations at exactly the point where an introduction form collides with its elimination form, showing exactly the type at which computation is happening.
For type systems with some sort of value dependency, the bidirectional approach seriously cuts down on the amount of annotation required in terms: annotations are needed only where they are essential to achieve type synthesis. We have a real opportunity to reduce clutter and also to give a clearer account of the connections between types and computation.
But it doesn’t stop there. A disciplined approach to the construction of bidirectional type systems makes it easier to get their metatheory right. I’ll show this by reconstructing Martin-Löf’s 1971 type theory (the inconsistent one) in a bidirectional style and show why it has type preservation, even without normalization.
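As a small illustration of the discipline described above (a sketch in Python, not the system from the talk), the checker below handles the simply typed lambda calculus bidirectionally: types are synthesised for variables, applications and annotated terms, lambdas are only ever checked against a proposed arrow type, and an annotation is required exactly where an introduction form meets its elimination form.

```python
# Terms: ('var', x), ('lam', x, body), ('app', f, a), ('ann', term, type)
# Types: 'base' or ('arrow', domain, codomain)

def synth(ctx, term):
    """Synthesise a type for variables, elimination forms and annotations."""
    tag = term[0]
    if tag == 'var':
        return ctx[term[1]]
    if tag == 'app':                      # elimination form: synthesise the function
        fty = synth(ctx, term[1])
        if not (isinstance(fty, tuple) and fty[0] == 'arrow'):
            raise TypeError('applying a non-function')
        check(ctx, term[2], fty[1])       # the argument is checked against the domain
        return fty[2]
    if tag == 'ann':                      # the point where checking meets synthesis
        check(ctx, term[1], term[2])
        return term[2]
    raise TypeError(f'cannot synthesise a type for {tag}; add an annotation')

def check(ctx, term, ty):
    """Check introduction forms against a proposed type."""
    if term[0] == 'lam':                  # introduction form: the type is proposed, never guessed
        if not (isinstance(ty, tuple) and ty[0] == 'arrow'):
            raise TypeError('a lambda must be checked against an arrow type')
        check({**ctx, term[1]: ty[1]}, term[2], ty[2])
        return
    if synth(ctx, term) != ty:            # mode switch: synthesise, then compare
        raise TypeError('type mismatch')

# A redex needs an annotation exactly where the lambda meets its argument:
# ((\x. x) : base -> base) y, with y : base.
redex = ('app', ('ann', ('lam', 'x', ('var', 'x')), ('arrow', 'base', 'base')), ('var', 'y'))
assert synth({'y': 'base'}, redex) == 'base'
```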
Modern external beam radiation therapy techniques allow the design of highly conformal radiation treatment plans that permit high doses of ionising radiation to be delivered to the tumour in order to eradicate cancer cells while sparing surrounding normal tissue. However, since it is difficult to avoid irradiating normal tissue altogether and ionising radiation also damages normal cells, patients may develop radiation-induced toxicity following treatment. Furthermore, the highly conformal nature of the radiation treatment plans makes them particularly susceptible to geometric or targeting uncertainties in treatment delivery. Geometric uncertainties may result in under-dosage of the tumour, leading to local tumour recurrence, or in unacceptable morbidity from over-dosage of neighbouring healthy tissue.
I will present work in three areas that bear directly on treatment accuracy and safety in radiation oncology. The first area addresses the development of automated image registration algorithms for image-guided radiation therapy with the aim of improving the accuracy and precision of treatment delivery. The registration methods I will present are based on statistical and spectral models of signal and noise in CT and x-ray images. The second part of my talk addresses the identification of predictors of normal tissue toxicity after radiation therapy and the study of the spatial sensitivity of normal tissue to dose. I will address the development of innovative methods to accurately model the spatial characteristics of radiation dose distributions in 3D and results of the analysis of this important, but heretofore lacking, information as a contributing factor in the development of radiation-induced toxicity. Finally, given the increasing complexity of modern radiation treatment plans and a trend towards an escalation in prescribed doses, it is important to implement a safety system to reduce the risk of adverse events arising during treatment and improve clinical efficiency. I will describe ongoing efforts to formalise and automate quality assurance processes in radiation oncology.
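For a flavour of spectral image registration (a standard textbook technique, shown here for illustration; the talk's registration methods rest on more detailed statistical and spectral models of signal and noise), the Python sketch below estimates the translation between two images by phase correlation: the phase of the cross-power spectrum encodes the relative shift, and the peak of its inverse FFT recovers it.

```python
import numpy as np

def phase_correlation_shift(reference, moving):
    """Estimate the integer translation that maps `moving` onto `reference`.

    The cross-power spectrum of the two images has a phase that encodes
    their relative shift, so the peak of its inverse FFT sits at the
    displacement (up to wrap-around at the image borders).
    """
    F = np.fft.fft2(reference)
    G = np.fft.fft2(moving)
    cross_power = F * np.conj(G)
    cross_power /= np.abs(cross_power) + 1e-12    # keep the phase only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks beyond the midpoint correspond to negative (wrapped-around) shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, correlation.shape))

# Synthetic check: shift a random image by (5, -3) pixels and recover the shift.
rng = np.random.default_rng(0)
image = rng.random((128, 128))
shifted = np.roll(image, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(shifted, image))    # expect (5, -3)
```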
Biography
Reshma Munbodh is currently an Assistant Professor in the Department of Diagnostic Imaging and Therapeutics at UConn Health. She received her undergraduate degree in Computer Science and Electronics from the University of Edinburgh and her PhD in medical image processing and analysis applied to cancer from Yale University. Following her PhD, she performed research and underwent clinical training in Therapeutic Medical Physics at the Memorial Sloan-Kettering Cancer Center. She is interested in the development and application of powerful analytical and computational approaches towards improving the diagnosis, understanding and treatment of cancer. Her current projects include the development of image registration algorithms for image-guided radiation therapy, the study of normal tissue toxicity following radiation therapy, longitudinal studies of brain gliomas to monitor tumour progression and treatment response using quantitative MRI analysis and the formalisation and automation of quality assurance processes in radiation oncology.
Congratulations to David Castro, who successfully defended his thesis today. David’s thesis was supervised by Professor Kevin Hammond. He is pictured with internal examiner Dr Edwin Brady and external examiner Professor Graham Hutton of the University of Nottingham.
Self-organisation and self-governance offer an effective approach to resolving collective action problems in multi-agent systems, such as fair and sustainable resource allocation. Nevertheless, self-governing systems which allow unrestricted and unsupervised self-modification expose themselves to several risks, including Suber’s paradox of self-amendment (rules specify their own amendment) and Michels’ iron law of oligarchy (the system will inevitably be taken over by a small clique and run for its own benefit, rather than in the collective interest). This talk will present an algorithmic approach to resisting both the paradox and the iron law, based on the idea of interactional justice derived from sociology and from legal and organisational theory. The process of interactional justice operationalised in this talk uses opinion formation over a social network, with respect to a shared set of congruent values, to transform a set of individual, subjective self-assessments into a collective, relative, aggregated assessment.
Using multi-agent simulation, we present some experimental results about detecting and resisting cliques. We conclude with a discussion of some implications concerning institutional reformation and stability, ownership of the means of coordination, and knowledge management processes in ‘democratic’ systems.
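To give a flavour of that aggregation step (the operationalisation in the talk is richer; this is a simple DeGroot-style stand-in with invented numbers), the Python sketch below lets each agent start from its own subjective assessment and repeatedly average its opinion with those of the neighbours it trusts, so that the network converges towards a collective value in which a low-trust clique carries little weight.

```python
import numpy as np

def collective_assessment(assessments, trust, iterations=100):
    """DeGroot-style opinion formation over a weighted social network.

    `assessments[i]` is agent i's individual, subjective assessment (e.g.
    how fair it believes the last allocation was, in [0, 1]); `trust[i, j]`
    is the weight agent i places on agent j's opinion. Each round every
    agent replaces its opinion with the trust-weighted average of its
    neighbours' opinions, so the vector converges towards an aggregated,
    collective assessment.
    """
    weights = trust / trust.sum(axis=1, keepdims=True)  # each agent's weights sum to 1
    opinions = np.asarray(assessments, dtype=float)
    for _ in range(iterations):
        opinions = weights @ opinions                   # revise towards neighbours' views
    return opinions          # entries are near-identical after convergence

# A small clique (agents 3 and 4) rates the allocation as very fair for itself,
# but the rest of the network places little trust in its opinion.
assessments = [0.40, 0.50, 0.45, 0.95, 0.90]
trust = np.array([
    [1.0, 0.8, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.8, 0.1, 0.1],
    [0.8, 0.8, 1.0, 0.1, 0.1],
    [0.2, 0.2, 0.2, 1.0, 0.9],
    [0.2, 0.2, 0.2, 0.9, 1.0],
])
print(collective_assessment(assessments, trust))
```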
Biography
Jeremy Pitt is Professor of Intelligent and Self-Organising Systems in the Department of Electrical & Electronic Engineering at Imperial College London, where he is also Deputy Head of the Intelligent Systems & Networks Group. His research interests focus on developing formal models of social processes using computational logic, and their application in self-organising multi-agent systems, for example fair and sustainable common-pool resource management in ad hoc and sensor networks. He also has strong interests in human-computer interaction, socio-technical systems, and the social impact of technology; with regard to the latter he has edited two books, This Pervasive Day (IC Press, 2012) and The Computer After Me (IC Press, 2014). He has been an investigator on more than 30 national and European research projects and has published more than 150 articles in journals and conferences. He is a Senior Member of the ACM, a Fellow of the BCS and a Fellow of the IET; he is also an Associate Editor of ACM Transactions on Autonomous and Adaptive Systems and an Associate Editor of IEEE Technology and Society Magazine.
Congratulations to Adam Barwell, who successfully defended his thesis yesterday. Adam’s thesis was supervised by Professor Kevin Hammond. He is pictured with second supervisor Dr Christopher Brown, internal examiner Dr Susmit Sarkar and external examiner Professor Susan Eisenbach from Imperial College London.