Virtual Reality Headset, Fun! + NuiLib upgrade.

I’ve been working on combining the Oculus Rift headset with the Kinect to take all our reconstructions to the next level of immersion. If you want to have a play with the viewer, check out its page on the main Open Virtual Worlds blog. To do this we have updated NuiLib quite considerably. The changes aren’t yet in the master branch, as they haven’t been tidied up enough for that, but if you want to play with them they are on the ACE branch of the NuiLib repository.

Posted in Uncategorized | Leave a comment

Papers and Progress and videos

The last few months have been a busy time. The Timespan installation is now up and running full time with an expanded feature list. A write-up of the work that has been done on it has been accepted for publication at the IEEE-sponsored Digital Heritage Congress 2013, which will take place in late October. For those interested in more information on the exhibit, Greta Scott-Larsen has created two videos featuring interviews with those involved in the project and an overview of how the exhibit works.

The first video is a quick introduction to the project:

The second video expands on all aspects of the exhibit and its creation:

As well as the Timespan project, preliminary work has now started on our next exhibit, which will be in the Shetland Museum and Archives. This exhibit will be a virtual-world-based interactive recreation of the way of life in the Fethaland fishing station.

Lastly, there is the little matter of my thesis. The next several months will be focused on that, so there may not be too much content popping up on here.

Posted in Uncategorized | Leave a comment

Caen Township

First post in quite some time! The past 4 months have been spent working away on the Caen Township project and have been extremely productive on the coding front.

The Caen Township was a small collection of longhouses up the strath from the fishing village of Helmsdale, ~1.5 hours north of Inverness. It was cleared as part of the Highland Clearances 200 years ago. Helmsdale is home to Timespan, a community arts centre and museum that is part of the tourist trail as you travel up the north-east coast of Scotland. As this year marks the bicentenary of the Highland Clearances, Timespan has been looking at ways to remember the stories of the people whose lives were so hugely affected by the changes. This is where I come in!

The Open Virtual Worlds group I work in had previously created some virtual content based around an excavation done in collaboration with Timespan in the neighbouring town of Brora. When we heard that they were planning to excavate the Caen Township a partnership was suggested where we would create a Virtual World model of the Township and install it in Timespan.

The exhibit we created launched two weeks ago. It is made up of a 9-region OpenSim grid representing the township, accessed through my Chimera software. Visitors to the museum walk into a small room with a large projector and are able to explore the space via a Kinect mounted under the screen. Using the Kinect they can view a slideshow of the site, step through a pre-recorded flythrough that walks them through the township, or explore freely (either with an avatar or with a free camera).

To give a quick sense of how the exhibit feels, we recorded some videos whilst I was up north. The first is just me talking about how the exhibit works.

The second is a demonstration of the exhibit itself.

Posted in Uncategorized | Leave a comment

Medieval Books in a Fish Tank

Just a quick update on what I have been working on the past week or so. I have been further developing the Chimera system outlined in the previous post. Originally it was designed to manipulate the frustums of multiple viewers to sync up into a CAVE system. A side effect of the maths behind these transformations is that the system can be turned on its head: rather than moving screens around the user, you can move the ‘eye’ position relative to the various screens. With the correct transformations this can give you the impression that the monitor itself becomes a window into the virtual space.
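
The actual transformations live inside Chimera, but the core of the ‘window into the virtual space’ effect is a small piece of geometry: project the edges of the physical screen through the tracked eye position onto the near clipping plane, giving an off-axis (asymmetric) view frustum. Below is a minimal Python sketch of that calculation, assuming a flat screen centred on the origin in the z = 0 plane; the function and parameter names are mine, not Chimera’s.

```python
def off_axis_frustum(screen_w, screen_h, eye, near):
    """Frustum extents at the near plane for an eye at (ex, ey, ez).

    The screen is screen_w x screen_h, centred on the origin in the
    z = 0 plane; ez > 0 is the eye's distance from the screen plane.
    Returns (left, right, bottom, top) suitable for an off-axis
    projection (glFrustum-style parameters).
    """
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: screen edge -> near plane
    left = (-screen_w / 2 - ex) * scale
    right = (screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top = (screen_h / 2 - ey) * scale
    return left, right, bottom, top
```

With the eye centred in front of the screen the frustum is symmetric; move the eye to one side and the frustum skews the other way, which is exactly what makes the monitor read as a window rather than a viewport.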

As Chimera can already do the transformations, I decided to hook the Kinect up and see what happened. The result is the following video. I did it as a screen capture so you can see just the screen reacting, but the camera is being manipulated by me moving my head around in the real world. More videos to come on this.

The video:

How the lines of sight change as the eye moves relative to the screen.

Posted in Uncategorized | Leave a comment


Over the last few weeks I’ve been hard at work on the problem of how to synchronize many screens showing the same virtual content. The solution I have come up with (with many thanks to Sianna Gearz on the #opensl IRC channel) is a proxy system based on the GridProxy library. GridProxy was developed by Ikalif for the libopenmetaverse project. I’ve called my proxy Chimera, after the beast with many heads.

The system is based around a master with many slaves. The master controls the camera position and the slaves all show views based on the master’s camera position/orientation. Different slaves can be given different offsets to show different views.
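
As an illustration of what a per-slave offset means, here is a Python sketch (the names are mine; Chimera itself is a .NET proxy) that derives a slave’s look direction from the master’s camera yaw plus a fixed per-screen offset:

```python
import math

def slave_look(master_yaw, yaw_offset):
    """Unit look vector (in the XY plane) for a slave screen whose view
    is rotated yaw_offset radians away from the master's camera yaw."""
    yaw = master_yaw + yaw_offset
    return (math.cos(yaw), math.sin(yaw), 0.0)
```

For a three-screen surround the offsets might be -pi/2, 0 and +pi/2: all three slaves share the master’s camera position, but each looks out through its own wall.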

The proxy system uses SetFollowCamProperties packets to update the position of the camera on slave viewers. These updated camera positions are broadcast by a master server which every slave connects to. The master can be controlled using another viewer (though this solution proves rather laggy) or by direct mouse/keyboard input into the master GUI. The architecture of the system is shown below.
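
The real traffic is SetFollowCamProperties packets rewritten by GridProxy, but the state that has to travel from the master server to each slave is small: a camera position and a look-at target. The Python sketch below uses a made-up six-float wire format purely to illustrate that; it is not the actual packet layout.

```python
import struct

# Hypothetical message format: camera position (x, y, z) followed by
# the look-at point (x, y, z), as little-endian 32-bit floats.
CAMERA_FMT = "<6f"

def pack_camera(position, look_at):
    """Serialise a camera update for broadcast to the slaves."""
    return struct.pack(CAMERA_FMT, *position, *look_at)

def unpack_camera(data):
    """Decode a camera update on the slave side."""
    vals = struct.unpack(CAMERA_FMT, data)
    return vals[:3], vals[3:]
```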

Architecture of the Proxy mechanism

Currently the system works well, with some caveats. Firstly, using SetFollowCamProperties means that if the camera moves too far away from the location of the slave avatar it ceases to orient correctly. This will be fixed soon by having the slave proxy automatically re-position the slave client to match the camera position. Secondly, there are some issues of perspective with different screens and what they should show. I aim to solve this by creating a modified viewer which can adjust its camera frustum position based on packets it receives or GUI input. The last change I aim to make is to incorporate NUI support directly into the master.
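
The planned fix for the first caveat is easy to sketch: whenever the camera strays beyond some threshold from the slave avatar, snap the avatar onto the camera. In Python, with an illustrative threshold (in the real system this logic would sit in the slave proxy):

```python
def keep_avatar_near_camera(avatar_pos, camera_pos, max_dist=10.0):
    """Return the position the slave avatar should be moved to.

    SetFollowCamProperties stops orienting the camera correctly once it
    is too far from the avatar, so when the gap exceeds max_dist the
    avatar is re-positioned onto the camera. max_dist is illustrative.
    """
    dx, dy, dz = (c - a for c, a in zip(camera_pos, avatar_pos))
    if (dx * dx + dy * dy + dz * dz) ** 0.5 > max_dist:
        return camera_pos
    return avatar_pos
```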

To try out the system clone this git repo (for those who don’t use git there is an option to download a zip file). The source code is available here. If you are using the source you will probably have to fix the project set-up to point at your installation of the libopenmetaverse libraries. I use a slightly modified version of GridProxy.dll which fixes proxy responses to get_grid_info requests. The modified DLL is in the bin repo; alternatively, the .diff file for libopenmetaverse/Programs/GridProxy/GridProxy.cs is in the source repo. If you don’t use the modified GridProxy.dll you need to make sure you log in using the --grid flag (in the GUI, the Grid checkbox under the Proxy tab; in the config, UseGrid in the General section), and that you are using a viewer that supports this flag.
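
Based purely on the description above, the config-file form of that option would look something like this (section and key names as stated; check the *.config.example files for the exact syntax):

```ini
[General]
; Equivalent to the --grid flag / the Grid checkbox under the Proxy tab.
UseGrid = true
```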

Once you have the executables you can run either Chimera[Master,Slave].exe to run single instances of the GUI, or Chimera.exe to run the system all together. How these run can be modified using config files or command line flags; exactly the same options can be put into either. See the *.config.example files for information on how to use the config files. To see help on the command line switches use the -h or --help flags.

Have fun! Any feedback would be greatly appreciated.

Posted in Uncategorized | Leave a comment

Armadillo Update

Just a quick update. I’ve recently done a little bit of work on the Armadillo client. The first change is that I’ve ported it over to the Dolphin viewer (version 3, the last version that supports OpenSim). While I was tweaking Armadillo I also made the controls ambidextrous and fixed the bug where a lack of input left you constantly spinning. Lastly, I added a git repo with all the changes you need to make to a SecondLifeViewer-based viewer to integrate the Kinect.

You can get the Dolphin version here.

Posted in Uncategorized | Leave a comment


For anyone currently working on getting XMRMs integrated into their OpenSim installation, I have uploaded a zip file with a Visual Studio project full of XMRMs I have written for various purposes. I can’t guarantee they work, but they should give some ideas of how to use MRMs and the different things you can do with them. They also provide a working Visual Studio project that is correctly configured to compile MRMs.

To try the scripts, download this zip file and unzip its contents into a sub-folder of the OpenSim bin directory. Once that is done you can open the solution file in Visual Studio and compile; all the references should be in the right place. Once you have compiled the project you can run the scripts in world:

MRM:X -a XMRMUtil/XMRMUtil.dll -c XMRMUtil.<script>, where <script> is the name of the script class from Visual Studio.

Posted in Uncategorized | Leave a comment

Armadillos Everywhere!

This is a quick update on what I am currently working on. The Armadillo project is a Virtual World client designed to support immersive interaction. The first strand of this, hands-free control, is already complete. The second strand is to support immersive displays: multiple displays which can surround the user. Ideally this system needs to be scalable, so that multiple machines can drive the displays, allowing many high-res displays to be driven from a single avatar. This is my current goal.

Today I completed the very first prototype of this functionality. I have created a modified Second Life client which will position its camera relative to the nearest other avatar on the server, rather than relative to the avatar the client is logged in as. This allows for a master/slave setup: the master moves around and the slave moves with it.
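
The selection step in that prototype is simple to state: out of all the avatars the server tells the slave about, track the closest one (the master, in a two-client setup). A Python sketch of just that step, for illustration; the actual client is modified Second Life viewer code:

```python
def nearest_avatar(own_pos, others):
    """Position of the closest other avatar, by squared distance."""
    return min(others, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, own_pos)))
```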

My current implementation is based on grabbing the information the client receives from the server about other avatars’ positions and orientations and adjusting the camera accordingly. This has proved to be rather laggy. My next challenge is to look into the various alternatives and pick the best one to move forward with. To figure out the alternatives I have been chatting with the really helpful folks in the #opensl channel on freenode. Many thanks to SianaGearz and MartinRJ for your help on this. The alternatives we have hashed out are as follows:

  1. Slaves use information received from the server to update the camera to follow the master avatar.
  2. Master is modified to send UDP packets to all slaves containing information on position and rotation along a back channel that doesn’t pass through the server. Slaves use this information in place of information received from the server to modify the view.
  3. GridProxy is used to catch all packets sent by the master with updates about avatar position or orientation. These packets are then broadcast to the slaves which render them in the same way as #2.
  4. GridProxy is used to catch all packets sent by the master with updates about avatar position and orientation. Master then sends these packets to a slave GridProxy. Slave GridProxys are used by every slave client and insert packets which spoof the packets generated by LSL’s camera control functions to update the slaves’ views.
  5. Use LSL. Can control Camera Orientation with LSL scripts and get all relevant information about other avatars in the same way.

The pros and cons of the various alternatives are listed below.

Alternative 1 (follow the master via server updates):

Pros:
  • Master doesn’t have to be altered.
  • No extra information required by the client.
  • Already implemented, so definitely possible.
  • Lots of control over how the slaves render.

Cons:
  • Slave is rather laggy when rotating the view on OpenSim. Either the client doesn’t send regular enough updates to the server or the server doesn’t broadcast the updates quickly enough.
  • No way to handle teleports.

Alternative 2 (UDP back channel from a modified master):

Pros:
  • Should be fast.
  • Half implemented already; just need to add the back channel communication between master and slave.

Cons:
  • Requires a modified master and slave.
  • Have to initialise master and slave to be able to pass packets between each other.

Alternative 3 (GridProxy on the master broadcasts to modified slaves):

Pros:
  • Viewer agnostic on the master.
  • Slave modifications are half done already; just need to add support for receiving and parsing UDP packets to get position/rotation data.
  • Could spoof teleport packets as well.

Cons:
  • Adds an extra piece of software into the mix.
  • Have to be able to isolate the correct packets and process them correctly.
  • If the client doesn’t send orientation packets frequently enough the lag issue will still be in place.
  • Still have to modify the slave client.

Alternative 4 (GridProxy on both master and slaves):

Pros:
  • Viewer agnostic.
  • Avoids having to use a modified client.
  • Could spoof teleport packets as well.

Cons:
  • Adds an extra piece of software into the mix.
  • Have to be able to isolate the correct packets and process them correctly.
  • If the client doesn’t send orientation updates frequently enough the lag issue will still be in place.

Alternative 5 (LSL camera control):

Pros:
  • Simple to do.
  • Viewer agnostic.

Cons:
  • Dependent on LSL.
  • Has worse lag issues than #1 on current OpenSim test rigs.
  • Have to use the Restrained Love Viewer to handle teleports.

Posted in Armadillo | Leave a comment

API / Command line Docs

A first draft of the API and command line documentation has gone up. This includes a browsable API reference for the MRM API, a little guidance on where the important parts of the API are and how to start using it, and a list of all the MRM commands that can be typed into the server console.

Posted in Uncategorized | Leave a comment

Getting Started with XMRMs

The first few pieces of documentation for XMRMs are up: a simple quick-start guide to setting up a server using pre-configured server set-ups, and a guide to taking a pre-existing server and enabling MRM support.

Posted in Teaching, XMRMs | Leave a comment