Computer Gaming Industry careers: Aardvark Swift presentation – Get in the Game

Tuesday 27 November, 1400-1500, 1.33a Jack Cole building (Computer Science)

Aardvark Swift, recruitment agents for the gaming industry, will be talking about how to break into the sector. Get advice from those in the know on the key skills you will need, the common pitfalls, and how to maximise your chances. Ideal for programming enthusiasts of all disciplines, and for anyone interested in a gaming career. http://www.aswift.com/index.jsp#holder1-start

Aardvark Swift will also be giving details of how to enter their nationwide programming competition, Search for a Star! SFAS is designed to highlight and reward the UK’s most promising video games developers. The winner will be announced at Eurogamer 2013, with last year’s winner securing a job at Sony Evolution. This year’s competition is being sponsored by Microsoft. http://www.aswift.com/searchforastar/

Event details

  • When: 27th November 2012 14:00 - 15:00
  • Where: Cole 1.33a
  • Format: Talk

Four Geeks and an Entrepreneur

Al Dearle, Monty Widenius, Steve Linton, Ian Gent (left to right), St Andrews, 15 October 2012

We were privileged today to hear three lectures from Monty Widenius, the main author of the MySQL database system. His main focus was on entrepreneurship: how to be an entrepreneur while giving away source code on an open source basis.

Three staff members from St Andrews are pictured with Monty before the first lecture, in St Salvator’s quad at the University of St Andrews.

Event details

  • When: 15th October 2012
  • Series: Distinguished Lectures Series

School Seminar – Mari Ostendorf

Professor Mari Ostendorf of the University of Washington is visiting Edinburgh, Glasgow and St Andrews as part of a SICSA Distinguished Fellowship.

Title: Rich Speech Transcription for Spoken Document Processing

Abstract:
As storage costs drop and bandwidth increases, there has been rapid growth of spoken information available via the web or in online archives — including radio and TV broadcasts, oral histories, legislative proceedings, call center recordings, etc. — raising problems of document retrieval, information extraction, summarization and translation for spoken language. While there is a long tradition of research in these technologies for text, new challenges arise when moving from written to spoken language. In this talk, we look at differences between speech and text, and how we can leverage the information in the speech signal beyond the words to provide a rich, automatically generated transcript that better serves language processing applications. In particular, we look at how prosodic cues can be used to recognize segmentation, emphasis and intent in spoken language, and how this information can impact tasks such as topic detection, information extraction, translation, and social group analysis.

Event details

  • When: 27th November 2012 15:00 - 16:00
  • Where: Phys Theatre C
  • Format: Seminar

Distinguished Lecture Series: MySQL and Open Source Business, by Monty Widenius

Monty Widenius delivered the Semester 1 Distinguished Lecture Series on Monday 15th October 2012, from 10am to 3.30pm, in Upper College Hall.

Monty is CEO & CTO at Monty Program Ab, and is perhaps best known as the founder of MySQL, the world’s most widely used open source database.

Monty delivered three lectures on MySQL and Open Source Business.  He has kindly made the slides available – linked to from the titles.

The lectures were introduced by the Dean of Science, Prof Al Dearle, and refreshments were provided at 11am.

These lectures were open to all.

The detailed programme is available as a pdf: Monty Widenius DLS Programme

Event details

  • When: 15th October 2012 10:00 - 15:30
  • Series: Distinguished Lectures Series
  • Format: Seminar

Professor Aaron Quigley Inaugural Lecture

Professor Aaron Quigley will be giving his Inaugural Lecture in School III on Wednesday 31st October at 5:15 p.m.

Billions of people use interconnected computers and have come to rely on the computational power they afford, to support their lives and to advance our global economy and society. However, how we interact with this computation is often limited to little “windows of interaction” with mobile and desktop devices which aren’t fully suited to their contexts of use. Consider the surgeon operating, the child learning to write or the pedestrian navigating a city, and ask whether the current devices and forms of human-computer interaction are as fluent as they might be. I contend there is a division between the physical world in which we live our lives and the digital space where the power of computation currently resides. Many day-to-day tasks, and even forms of work, are poorly supported by access to appropriate digital information. In this talk I will provide an overview of research I’ve been pursuing to bridge this digital-physical divide, and of my future research plans.

The talk is framed around three interrelated topics: Ubiquitous Computing, Novel Interfaces and Visualisation. Ubiquitous Computing is a model of computing in which computation is everywhere and computer functions are integrated into everything; everyday objects become sites for sensing, input and processing, as well as user output. Novel Interfaces draw the user interface closer to the physical world, in terms of both input to the system and output from it. Visualisation is the use of computer-supported interactive visual representations of data to amplify cognition. I will demonstrate that advances in human-computer interaction require insights and research from across the sciences and humanities if we are to bridge this digital-physical divide.

Event details

  • When: 31st October 2012 17:15 - 18:15
  • Where: Various
  • Format: Lecture

School Seminar – Andy Gordon

Reverend Bayes, meet Countess Lovelace: Probabilistic Programming for Machine Learning

Andrew D. Gordon, Microsoft Research and University of Edinburgh

Abstract: We propose a marriage of probabilistic functional programming with Bayesian reasoning. Infer.NET Fun turns the simple, succinct syntax of F# into an executable modeling language – you can code up the conditional probability distributions of Bayes’ rule using F# array comprehensions with constraints. Write your model in F#. Run it directly to synthesize test datasets and to debug models. Or compile it with Infer.NET for efficient statistical inference. Hence, efficient algorithms for a range of regression, classification, and specialist learning tasks can be derived by probabilistic functional programming.
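
As a rough illustration of the modelling style the abstract describes, here is a minimal sketch in OCaml (not F#, and not the Infer.NET Fun API; all names below are illustrative): the model is an ordinary function that samples from a prior and a likelihood, so running it forward synthesizes a test dataset, and even a naive rejection step recovers an approximate posterior, where Infer.NET would instead compile the same model for efficient inference.

    (* Hypothetical sketch (OCaml, not the Infer.NET Fun API): the model is an
       ordinary function that samples from a prior and a likelihood, so running
       it forward synthesizes data; naive rejection approximates the posterior. *)

    let () = Random.self_init ()

    (* Prior: the unknown bias of a coin, uniform on [0, 1]. *)
    let sample_prior () = Random.float 1.0

    (* Likelihood: n flips of a coin with the given bias. *)
    let flip bias = Random.float 1.0 < bias
    let sample_data bias n = List.init n (fun _ -> flip bias)

    (* Forward run: synthesize a test dataset, as the abstract suggests. *)
    let observed = sample_data 0.7 20
    let heads data = List.length (List.filter (fun h -> h) data)

    (* Naive rejection sampling: keep prior draws whose synthetic data match the
       observed head count.  A system like Infer.NET would compile the same
       model into a far more efficient inference algorithm. *)
    let posterior =
      List.init 100_000 (fun _ -> sample_prior ())
      |> List.filter (fun b -> heads (sample_data b 20) = heads observed)

    let () =
      let n = List.length posterior in
      let mean = List.fold_left ( +. ) 0.0 posterior /. float_of_int n in
      Printf.printf "kept %d samples; posterior mean bias ~ %.2f\n" n mean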

Bio: Andy Gordon is a Principal Researcher at Microsoft Research Cambridge, and is a Professor at the University of Edinburgh. Andy wrote his PhD on input/output in lazy functional programming, and is the proud inventor of Haskell’s “>>=” notation for monads. He’s worked on a range of topics in concurrency, verification, and security, never straying too far from his roots in functional programming. His current passion is deriving machine learning algorithms from F# programs.

Event details

  • When: 8th October 2012 15:00 - 16:00
  • Where: Phys Theatre C
  • Format: Seminar

School Seminar – Barry Brown

Mobility in vivo

Barry Brown, Co-director Mobile Life, University of Stockholm

barbro.tumblr.com
The Mobile VINN Excellence Centre

Abstract
Despite the widespread use of mobile devices, details of mobile technology use ‘in the wild’ have proven difficult to collect. For this study we use video data to gain new insight into the use of mobile computing devices. Screen-captures of smartphone use, combined with video recordings from wearable cameras, allow for detailed analysis of device use in a variety of activities and settings. We use this data to describe how mobile device use is threaded into other co-present activities, focusing on the use of maps and internet searches to support users on a day-trip. Close analysis of the video data reveals novel aspects of how gestures are used on touch screens, in that they form a resource for the ongoing coordination of joint action. We go on to describe how the local environment and information in the environment are combined to guide and support action. In conclusion, we argue that the mobility of mobile devices is as much about this interweaving of activity and device use as it is about physical portability.

Event details

  • When: 1st October 2012 15:00 - 16:00
  • Where: Phys Theatre C
  • Format: Seminar

St Andrews Algorithmic Programming Competition

When: Wednesday 12th of September, 9:30am – 5pm (with a 1-hour break for lunch)
Where: Sub-honours lab in Jack Cole building (0.35)

As part of this competition, you may be offered an opportunity to participate in a Human-Computer Interaction study on subtle interaction. Participation in this study is completely voluntary.

There will be two competitive categories:
HCI study participants:
  • 1st prize: 7” Samsung Galaxy Tab 2
  • 2nd prize: £50 Amazon voucher
  • 3rd prize: £20 Amazon voucher
Everyone:
  • 1st prize: £50 Amazon voucher
  • 2nd prize: £20 Amazon voucher
  • 3rd prize: £10 Amazon voucher

We will try to include as many programming languages as is reasonable, so if you have any special requests, let us know.
If you have one, bring a laptop in case we run out of lab computers!
If you have any questions, please email Jakub at jd67@st-andrews.ac.uk.

Event details

  • When: 12th September 2012 09:30 - 17:00
  • Where: Cole 0.35 - Subhons Lab

Facing Healthcare’s Future: Designing Facial Expressivity for Robotic Patient Mannequins

Speaker: Laurel Riek, University of Notre Dame
Title: Facing Healthcare’s Future: Designing Facial Expressivity for Robotic Patient Mannequins

Abstract:

In the United States, an estimated 98,000 people are killed and $17.1 billion lost each year due to medical errors. One way to prevent these errors is to have clinical students engage in simulation-based medical education, to help move the learning curve away from the patient. This training often takes place on human-sized android robots, called high-fidelity patient simulators (HFPS), which are capable of conveying human-like physiological cues (e.g., respiration, heart rate). Training with them can include anything from diagnostic skills (e.g., recognizing sepsis, a failure that recently killed 12-year-old Rory Staunton) to procedural skills (e.g., IV insertion) to communication skills (e.g., breaking bad news). HFPS systems allow students a chance to safely make mistakes within a simulation context without harming real patients, with the goal that these skills will ultimately be transferable to real patients.

While simulator use is a step in the right direction toward safer healthcare, one major challenge and critical technology gap is that none of the commercially available HFPS systems exhibit facial expressions, gaze, or realistic mouth movements, despite the vital importance of these cues in helping providers assess and treat patients. This is a critical omission, because almost all areas of health care involve face-to-face interaction, and there is overwhelming evidence that providers who are skilled at decoding communication cues are better healthcare providers – they have improved outcomes, higher compliance, greater safety, higher satisfaction, and they experience fewer malpractice lawsuits. In fact, communication errors are the leading cause of avoidable patient harm in the US: they are the root cause of 70% of sentinel events, 75% of which lead to a patient dying.

In the Robotics, Health, and Communication (RHC) Lab at the University of Notre Dame, we are addressing this problem by leveraging our expertise in android robotics and social signal processing to design and build a new, facially expressive, interactive HFPS system. In this talk, I will discuss our efforts to date, including: in situ observational studies exploring how individuals, teams, and operators interact with existing HFPS technology; design-focused interviews with simulation center directors and educators in which future HFPS systems are envisioned; and initial software prototyping efforts incorporating novel facial expression synthesis techniques.

Biography:

Dr. Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame. She directs the RHC Lab, and leads research on human-robot interaction, social signal processing, facial expression synthesis, and clinical communication. She received her PhD at the University of Cambridge Computer Laboratory, and prior to that worked for eight years as a Senior Artificial Intelligence Engineer and Roboticist at MITRE.

Event details

  • When: 4th September 2012 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar