Laura Pecchioli and Fawzi Mohamed have a good approach to overcoming the dual problems of integrating context information into virtual worlds and connecting those virtual worlds to the real one. They have developed a system, ArchApp, that links context information to 3d environments by means of a ‘view zone’.
In their prototype they show a 3d representation of a city square (the Piazza Napoleone). As you navigate within this environment, an ‘information zone’ beside the 3d display shows what you are currently looking at (I’m not entirely convinced by the separation of view and information – perhaps a simulated HUD approach would work?). The user can also search the textual information and be taken to the area matching those search results. Unfortunately the model does not change in response to the search results – it would be great to be able to select, say, the pre-1700 buildings.
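For readers curious how a view zone might pick out what the user is looking at, here is a minimal sketch: compare the camera’s view direction with the direction to each tagged object and return the one nearest the centre of view. The building names, positions, and notes are entirely illustrative – this is an assumption about the technique, not ArchApp’s actual implementation.

```python
import math

# Hypothetical building records (illustrative, not real ArchApp data):
# name -> ((x, y) position in the model, attached context notes).
BUILDINGS = {
    "Palazzo A": ((10.0, 2.0), "Built 1650; remodelled 1820."),
    "Church B": ((-4.0, 8.0), "Romanesque facade, 12th century."),
}

def visible_building(eye, view_dir, fov_deg=40.0):
    """Return (name, notes) for the building nearest the centre of view,
    or None if nothing lies within the field of view.
    `view_dir` is assumed to be a unit vector."""
    best = None
    best_angle = math.radians(fov_deg) / 2.0  # half the field of view
    for name, (pos, notes) in BUILDINGS.items():
        dx, dy = pos[0] - eye[0], pos[1] - eye[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            continue
        # Angle between the view direction and the direction to the building.
        dot = (dx * view_dir[0] + dy * view_dir[1]) / dist
        angle = math.acos(max(-1.0, min(1.0, dot)))
        if angle <= best_angle:
            best_angle = angle
            best = (name, notes)
    return best
```

Looking east from the origin, `visible_building((0.0, 0.0), (1.0, 0.0))` would surface the notes for the hypothetical “Palazzo A”; turn around and nothing falls in the view zone.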
David Bearman talks about this project as an important one that connects the object to its context: its story, the authority by which it is known, and so on. In the Futures panel he discussed how this project goes some way towards breaking down the barriers between the digital object and the physical world.
They also show an extension of the project in a video of an augmented-experience approach – using geolocation to give information about the buildings you are currently looking at in the real world. Properly implemented, this is the dream application.
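The geolocation idea can be sketched simply: given the device’s GPS position and compass heading, compute the bearing to each landmark and report the one the user is facing. The landmark names and coordinates below are made up for illustration; the bearing formula itself is the standard initial great-circle bearing.

```python
import math

# Illustrative landmark coordinates (lat, lon) – not real survey data.
LANDMARKS = {
    "Tower": (45.8444, 10.3000),
    "Fountain": (45.8420, 10.2950),
}

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def facing(user_lat, user_lon, heading_deg, tolerance=15.0):
    """Return the landmark whose bearing lies within `tolerance`
    degrees of the device compass heading, or None."""
    for name, (lat, lon) in LANDMARKS.items():
        diff = abs((bearing_deg(user_lat, user_lon, lat, lon)
                    - heading_deg + 180.0) % 360.0 - 180.0)
        if diff <= tolerance:
            return name
    return None
```

A real system would also use distance and a visibility model, but heading alone is enough to convey the idea.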
In our project to visualise Dunedin’s heritage, we’re using a laser sight in the 3d environment to target the buildings – the buildings respond to being “shot” in this way by releasing a pop-up sign. I like Laura’s view zone approach better. They’re going to make it open source – I’ll be in the queue.
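The laser-sight targeting boils down to a ray-cast pick: fire a ray from the camera and test it against each building’s bounding box. A minimal sketch of the standard slab-method ray/AABB test (the details of our Dunedin implementation differ; this just shows the underlying technique):

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Slab-method test of a ray against an axis-aligned bounding box.
    Works in any dimension; `direction` need not be normalised.
    Returns True if the ray intersects the box."""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray is parallel to this slab: hit only if origin is inside it.
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        tmin, tmax = max(tmin, t1), min(tmax, t2)
        if tmin > tmax:
            return False
    return True
```

When the test succeeds for a building’s box, the engine releases that building’s pop-up sign.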