Creating personalized digital human models of perception for visual analytics

Speaker: Aaron Quigley, SACHI, University of St Andrews

Abstract:

Our bodies shape our experience of the world, and our bodies influence what we design. How important are the physical differences between people? Can we model these physiological differences and use the models to adapt and personalize designs, user interfaces and artifacts? Within many disciplines Digital Human Models and Standard Observer Models are widely used and have proven very useful for modeling users and simulating humans. In this paper, we create personalized digital human models of perception (Individual Observer Models), focused particularly on how humans see. Individual Observer Models capture how our bodies shape our perceptions, and are useful for adapting and personalizing user interfaces and artifacts to suit individual users’ bodies and perceptions. We introduce and demonstrate an Individual Observer Model of human eyesight, which we use to simulate 3600 biologically valid human eyes. An evaluation finds that the simulated eyes see eye charts the same way humans do. We also demonstrate the Individual Observer Model successfully predicting how easy or hard it is to see visual information and visual designs. The ability to predict and adapt visual information to maximize its effectiveness is an important problem in visual design and analytics.
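The core predictive task, deciding whether a piece of visual information is easy or hard for a given eye to see, can be illustrated with basic visual-angle arithmetic. This is a generic sketch, not the paper's Individual Observer Model; the 1-arcminute acuity default and the Snellen 5:1 letter-to-stroke ratio are standard optometry figures:

```python
import math

def visual_angle_arcmin(size_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object, in minutes of arc."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m))) * 60

def is_legible(letter_height_m: float, distance_m: float,
               acuity_arcmin: float = 1.0) -> bool:
    """A Snellen-style letter is resolvable when it subtends roughly 5x the
    observer's minimum angle of resolution (letter strokes are 1/5 of its
    height). acuity_arcmin = 1.0 corresponds to 20/20 vision."""
    return visual_angle_arcmin(letter_height_m, distance_m) >= 5 * acuity_arcmin
```

Varying the acuity parameter per simulated eye is one simple way a personalized model could turn a single visual design into per-user legibility predictions.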

About Aaron:

In this talk Professor Aaron Quigley will present a paper he is presenting at the User Modeling, Adaptation and Personalization (UMAP) conference 2011 on July 12th in Barcelona, Spain. This work on Creating Personalized Digital Human Models of Perception for Visual Analytics is joint work with his former PhD student Dr. Mike Bennett, now a postdoctoral fellow in the Department of Psychology at Stanford University.

Professor Aaron Quigley is the Chair of Human Computer Interaction in the School of Computer Science at the University of St Andrews. He is the director of SACHI and his appointment is part of SICSA, the Scottish Informatics and Computer Science Alliance. Aaron’s research interests include surface and multi-display computing, human computer interaction, pervasive and ubiquitous computing and information visualisation.

Event details

  • When: 15th November 2011 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

Interaction and Visualization Approaches for Artistic Applications

Speaker: Sean Lynch, Innovis group/Interactions lab, University of Calgary, Canada

Abstract:

Information visualization and new paradigms of interaction are generally applied to productive processes (i.e., at work) or for personal and entertainment purposes. In my work, I have looked instead at how to apply new technologies and visualization techniques to art. I will present mainly two projects that focus on multi-touch music composition and performance, and the visual analysis of the history and visual features of fine paintings.

About Sean:

Sean Lynch is a Master’s Student in Computer Science at the Interactions Lab at the University of Calgary. Sean’s research interests span interactive technologies (e.g., multi-touch), interactive art, and information visualization.

Event details

  • When: 28th September 2011 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

Measuring the Effectiveness of Abstract Data Visualisations

Speaker: Mark Shovman, University of Abertay, Dundee

Abstract:
In natural and social sciences, novel insights are often derived from visual analysis of data. But what principles underpin the extraction of meaningful content from these visualisations? Abstract data visualisation can be traced at least as far back as 1801; but with the increase in the quantity and complexity of data that require analysis, standard tools and techniques are no longer adequate for the task. The ubiquity of computing power enables novel visualisations that are rich, multimodal and interactive; but what is the most effective way to exploit this power to support analysis of large, complex data sets? Often, the lack of fundamental theory is pointed out as a central ‘missing link’ in the development and assessment of efficient novel visualisation tools and techniques.

In this talk, I will present some first steps towards a theory of visualisation comprehension, drawing heavily on existing research in natural scene perception and reading comprehension. The central inspiration is the Reverse Hierarchy Theory of perceptual organisation, a recent (2002) development of the near-centennial Laws of Gestalt. The proposed theory comes complete with a testing methodology (‘pop-out’ effect testing) that is based on our understanding of the cognitive processes involved in visualisation comprehension.
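The ‘pop-out’ methodology rests on a classic visual-search result: if response time stays flat as the number of distractors grows, the target is processed pre-attentively (it pops out); if response time climbs steeply, search is serial. A minimal sketch of that test follows; the 10 ms-per-item threshold is a common heuristic from the visual-search literature, not Shovman's exact protocol:

```python
def rt_slope(set_sizes, rts_ms):
    """Least-squares slope of response time vs. display set size (ms per item)."""
    n = len(set_sizes)
    mx, my = sum(set_sizes) / n, sum(rts_ms) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts_ms))
    var = sum((x - mx) ** 2 for x in set_sizes)
    return cov / var

def pops_out(set_sizes, rts_ms, threshold_ms_per_item=10.0):
    """A flat search slope suggests the visual feature is pre-attentive."""
    return rt_slope(set_sizes, rts_ms) < threshold_ms_per_item
```

A visualisation encoding whose key differences pop out under this test should, on this theory, be readable without effortful serial scanning.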

About Mark:
Mark Shovman is a SICSA Lecturer in Information Visualisation at the Institute of Arts, Media and Computer Games Technology at the University of Abertay Dundee. He is an interdisciplinary researcher studying the perception and cognition aspects of information visualisations, computer games, and immersive virtual reality. His recent research projects include the application of dynamic 3D link-charts in Systems Biology; alleviating cyber-sickness in VR helmets; and immersive VR as an art medium. Mark was born in Tbilisi, Georgia, and has lived in Jerusalem, Israel since 1990. He can be found at http://www.linkedin.com/pub/mark-shovman/3/a4b/849

Event details

  • When: 13th September 2011 14:00 - 15:00
  • Where: Cole 1.33a
  • Format: Seminar

Energy-efficient location-awareness on mobile devices

Speaker: Petteri Nurmi, Helsinki Institute for Information Technology HIIT

Abstract:
Contemporary mobile phones readily support different positioning techniques. In addition to integrated GPS receivers, GSM and WiFi can be used for position estimation, and other sensors such as accelerometers and digital compasses can support positioning, e.g., through dead reckoning or the detection of stationary periods. Selecting which sensors to use for positioning is, however, a non-trivial task, as the available technologies vary considerably in their energy demand and in the accuracy of their location estimates. To improve the energy-efficiency of mobile devices while providing position estimates that are as accurate as possible, we need novel on-device positioning techniques together with methods that select the optimal sensor modality for a given positioning accuracy requirement. In this talk we first introduce novel GSM and WiFi fingerprinting algorithms that run directly on mobile devices with minimal energy consumption [1]. We also introduce our recent work on minimizing the power consumption of continuous location and trajectory tracking on mobile devices [2].
[1] P. Nurmi, S. Bhattacharya, J. Kukkonen: “A grid-based algorithm for on-device GSM positioning.” Proc. 12th ACM International Conference on Ubiquitous Computing (UbiComp, Copenhagen, Denmark, September 2010). ACM Press, 2010, 227-236.
[2] M. B. Kjaergaard, S. Bhattacharya, H. Blunck, P. Nurmi: “Energy-efficient Trajectory Tracking for Mobile Devices.” Proc. 9th International Conference on Mobile Systems, Applications and Services (MobiSys, June-July 2011).
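The modality-selection idea, picking the cheapest sensor that still meets the positioning accuracy requirement, can be sketched as a simple lookup. The accuracy and power figures below are illustrative placeholders, not measurements from the talk or papers:

```python
# Illustrative (typical error in metres, power draw in mW) -- assumed values.
SENSORS = {
    "gps":  (5.0, 300.0),
    "wifi": (40.0, 100.0),
    "gsm":  (300.0, 30.0),
}

def pick_sensor(required_accuracy_m):
    """Return the lowest-power sensor whose typical error meets the requirement,
    or None if no sensor is accurate enough."""
    candidates = [(power, name) for name, (acc, power) in SENSORS.items()
                  if acc <= required_accuracy_m]
    return min(candidates)[1] if candidates else None
```

A coarse geofencing query can thus run on cheap GSM positioning, falling back to GPS only when an application genuinely needs metre-level accuracy.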

About Petteri:
Dr. Petteri Nurmi is a Senior Researcher at the Helsinki Institute for Information Technology HIIT. He received a PhD in Computer Science from the University of Helsinki in 2009. He currently co-leads the Adaptive Computing research group at HIIT together with Doc. Patrik Floréen. His research focuses on ubiquitous computing, user modeling and interaction, with a view to making the lives of ordinary people easier through easy-to-use mobile services. He regularly serves as a Programme Committee member and reviewer for numerous leading conferences and journals. More information about his research can be found on the research group’s webpage: http://www.hiit.fi/adapc/

Event details

  • When: 29th July 2011 12:00 - 13:00
  • Where: Cole 1.33a
  • Format: Seminar

Sensing, understanding and modelling people using mobile phones

Speaker: Mirco Musolesi, Computer Science, University of St Andrews

Abstract:

Mobile phones are increasingly equipped with sensors, such as accelerometers, GPS receivers, proximity sensors and cameras, that can be used to sense and interpret people’s behaviour in real-time. Novel user-centred sensing applications can be built by exploiting the availability of these technologies in devices that are part of our everyday experience. Moreover, data extracted from the sensors can also be used to model people’s behaviour and movement patterns, providing a very rich set of multi-dimensional data which can be extremely useful for social science, marketing and epidemiological studies.

In this talk I will present some of my recent work in this area, including the design and implementation of the CenceMe platform, a system that supports the inference of activities and other presence information of individuals using off-the-shelf sensor-enabled phones, and of EmotionSense, a system for supporting social psychology research. Finally, I will discuss issues related to the design of energy-efficient social sensing systems.
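As a flavour of how such activity inference works, a minimal accelerometer-based detector might threshold the variability of the acceleration magnitude. The 0.1 g threshold and the two-class output are assumptions for illustration; the actual classifiers in systems like CenceMe are considerably richer:

```python
import math
import statistics

def classify_activity(samples, threshold_g=0.1):
    """samples: list of (x, y, z) accelerometer readings in units of g.
    Low variability in the magnitude suggests the device (and its wearer)
    is roughly still; high variability suggests movement."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    return "moving" if statistics.pstdev(magnitudes) > threshold_g else "stationary"
```

Because this runs entirely on-device over a short sample window, it is also the kind of cheap pre-filter that makes energy-efficient social sensing feasible.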

About Mirco:

Dr. Mirco Musolesi is a SICSA Lecturer in the School of Computer Science at the University of St Andrews. He received a PhD in Computer Science from University College London in 2007 and a Master’s degree in Electronic Engineering from the University of Bologna in 2002. From October 2005 to August 2007 he was a Research Fellow at the Department of Computer Science, University College London. From September 2007 to August 2008 he was an ISTS Postdoctoral Research Fellow at Dartmouth College, NH, USA, and from September 2008 to October 2009 a Postdoctoral Research Associate at the Computer Laboratory, University of Cambridge. His research interests lie in the broad area of mobile systems and networking, with a current focus on intelligent mobile systems, online social networks, the application of complex network theory to networked systems design, mobility modelling and sensing systems based on mobile phones. More information about his research profile can be found at the following URL: http://www.cs.st-andrews.ac.uk/~mirco

Event details

  • When: 26th July 2011 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

Narrative Generation: a case study in assistive technology

Speaker: Nava Tintarev, University of Aberdeen

Abstract:
Story-telling, including personal narrative, is a big part of our personal and social communication. This talk will identify challenges and solutions in the generation of narrative for social communication. We describe a way to “automatically” generate personal stories. The stories, which are a mix of natural language and multimedia, are based on sensor and other data collected with a mobile phone. The talk will place a particular focus on the natural language generation task of document structuring: segmenting this data into meaningful and distinct events.
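The document-structuring step, splitting a sensor stream into distinct events, can be sketched as gap-based segmentation over timestamps. The 10-minute gap threshold is an assumption for illustration; real segmentation for narrative generation would also consider location, activity and other signals:

```python
def segment_events(timestamps_s, gap_s=600):
    """Split a sorted list of sensor timestamps (in seconds) into events,
    starting a new event whenever consecutive readings are separated
    by more than gap_s seconds."""
    if not timestamps_s:
        return []
    events = [[timestamps_s[0]]]
    for t in timestamps_s[1:]:
        if t - events[-1][-1] > gap_s:
            events.append([t])   # long gap: a new event begins
        else:
            events[-1].append(t)  # continuation of the current event
    return events
```

Each resulting segment then becomes a candidate story element for the generator to describe in natural language.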
About Nava:
Nava Tintarev has worked on applied HCI projects with themes such as explanations in recommender systems, recommendations in a mobile travel scenario, and more recently, natural language generation for assistive technology.

Currently, she is working as a Research Fellow at the University of Aberdeen, where she is a member of the Natural Language Generation Group. She has been working on the “How was School today…?” project, which helps children with complex communication needs create and tell a story about their day at school (which will be the applied setting for the talk on the 19th of July). Before that, she was at Telefónica Research, Barcelona, working on user-centred issues in recommender systems. Her doctoral thesis focused on explanations for recommender systems, and one of her papers on the topic won her the James Chen best student paper award at the International Conference on Hypermedia (2008). For the last three years she has also been co-organizing a workshop on explanation-aware computing (ExaCt) (http://exact2011.workshop.hm/).

Event details

  • When: 19th July 2011 13:00 - 14:00
  • Where: Cole 1.33a
  • Format: Seminar

Arduino workshop

The School will hold an all-day Arduino workshop on Sunday the 26th of June, hosted by Dr David McKeown from UCD in Ireland. Thanks also to Ben Arent, an interaction designer based in Dublin, for his help in supporting this.
The Arduino workshop precedes the Summer School on Multimodal Systems for Digital Tourism, which will be held in the School from 27th June to 1st July.

[Image: Arduino and Kinect equipment for the workshop and summer school]

Event details

  • When: 26th June 2011
  • Where: Cole Bldg
  • Format: Workshop

Summer School on Multimodal Systems for Digital Tourism

The focus of this summer school is to introduce a new generation of researchers to the latest research advances in multimodal systems, in the context of applications, services and technologies for tourists (Digital Tourism). Where mobile and desktop applications can rely on eyes-down interaction, the tourist aims to keep their eyes up, focussed on the painting, statue, mountain, ski run, castle, loch or other sight before them. In this school we focus on multimodal input and output interfaces; data fusion techniques and hybrid architectures; vision, speech and conversational interfaces; haptic interaction; mobile, tangible and virtual/augmented multimodal UIs; and tools and system infrastructure issues for designing interfaces and their evaluation.
We have structured this summer school as a blend of theory and practice.

Further information on the summer school is available on the SACHI site.

Event details

  • When: 27th June 2011 - 1st July 2011
  • Where: Honey Bldg
  • Format: Summer School