Thursday, February 28, 2008

Microsoft Research - WorldWide Telescope

Science educator Roy Gould gives an astonishing sneak preview of Microsoft's new WorldWide Telescope -- a technology that combines feeds from satellites and telescopes all over the world and the heavens, and builds a comprehensive view of our universe. - TED

Official website:

Wednesday, February 27, 2008

VideoTrace: Rapid interactive scene modelling from video

VideoTrace is a system for interactively generating realistic 3D models of objects from video—models that might be inserted into a video game, a simulation environment, or another video sequence. The user interacts with VideoTrace by tracing the shape of the object to be modelled over one or more frames of the video. By interpreting the sketch drawn by the user in light of 3D information obtained from computer vision techniques, a small number of simple 2D interactions can be used to generate a realistic 3D model. Each of the sketching operations in VideoTrace provides an intuitive and powerful means of modelling shape from video, and executes quickly enough to be used interactively. Immediate feedback allows the user to model rapidly those parts of the scene which are of interest and to the level of detail required. The combination of automated and manual reconstruction allows VideoTrace to model parts of the scene not visible, and to succeed in cases where purely automated approaches would fail. - Australian Centre for Visual Technologies


Make3D converts your single picture into a 3-D model, completely automatically.

It takes a two-dimensional image and creates a three-dimensional "fly around" model, giving viewers access to the scene's depth and a range of points of view. After uploading your image, you can "fly" in the 3D scene, or watch a rendered 3D movie.

It uses powerful machine learning techniques to learn the relation between small image patches and their depth and orientation. This allows it to model 3D structures such as slopes of mountains or branches of trees.
- Make3D
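The patch-to-depth idea can be sketched roughly as follows. This is a toy illustration only, not Make3D's actual method (which trains a Markov random field over patch features); here a simple nearest-neighbour regressor on two hypothetical patch features stands in, with fabricated training data:

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_features(patch):
    """Toy features of a 16x16 patch: mean intensity and mean vertical gradient."""
    return np.array([patch.mean(), np.abs(np.diff(patch, axis=0)).mean()])

# Fabricated training set: brighter, smoother patches are assumed farther away.
train_patches = [rng.random((16, 16)) * b for b in np.linspace(0.2, 1.0, 50)]
train_depths = np.linspace(1.0, 20.0, 50)   # synthetic depths in metres
train_X = np.array([patch_features(p) for p in train_patches])

def predict_depth(patch):
    """Predict depth as that of the nearest training patch in feature space."""
    dists = np.linalg.norm(train_X - patch_features(patch), axis=1)
    return float(train_depths[np.argmin(dists)])

query = rng.random((16, 16)) * 0.9          # a fairly bright query patch
print(predict_depth(query))                  # some depth between 1 and 20 m
```

The real system additionally enforces consistency between neighbouring patches, which is what lets it recover coherent surfaces rather than independent per-patch guesses.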

Thursday, February 21, 2008


Reaction to the Hyposurface usually evolves quickly from "What could this be used for?" to "What couldn't this be used for?" It is mesmerizing and full of potential. The Hyposurface allows the participant to connect and interact with a massive, powerful force - it's like controlling a waterfall.

Tele-existence wide-angle immersive stereoscope

3D Display and Interaction

Stanford camera chip can see in 3D

Instead of devoting the entire sensor to one big representation of the image, Fife's 3-megapixel sensor prototype breaks the scene up into many small, slightly overlapping 16x16-pixel patches called subarrays. Each subarray has its own lens to view the world--thus the term multi-aperture.

After a photo is taken, image-processing software analyzes the slight location differences for the same element appearing in different patches--for example, where a spot on a subject's shirt is relative to the wallpaper behind it. These differences from one subarray to the next can be used to deduce the distance of the shirt and the wall.

Tuesday, February 5, 2008

Head Tracking for Desktop VR Displays using the Wii Remote

Using the infrared camera in the Wii remote and a head mounted sensor bar (two IR LEDs), you can accurately track the location of your head and render view dependent images on the screen. This effectively transforms your display into a portal to a virtual environment. The display properly reacts to head and body movement as if it were a real window, creating a realistic illusion of depth and space. - Johnny Chung Lee
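The underlying geometry can be sketched as below. The constants and function are illustrative, not Johnny Lee's actual code: the Wiimote camera reports the 2D image positions of the two head-mounted LEDs, the apparent separation of the dots gives head distance by triangulation, and their midpoint gives the lateral offset:

```python
import math

CAMERA_FOV_RAD = math.radians(45)   # approximate horizontal field of view
CAMERA_WIDTH_PX = 1024              # Wiimote IR camera horizontal resolution
LED_SEPARATION_MM = 200             # assumed spacing of the two head LEDs

def head_position(dot1, dot2):
    """Estimate (x_offset_mm, distance_mm) of the head from two IR dot positions."""
    sep_px = math.hypot(dot2[0] - dot1[0], dot2[1] - dot1[1])
    # Angle subtended by the LED pair, then distance by triangulation.
    angle = sep_px / CAMERA_WIDTH_PX * CAMERA_FOV_RAD
    distance = (LED_SEPARATION_MM / 2) / math.tan(angle / 2)
    # Midpoint offset from the image centre, converted to millimetres.
    mid_x = (dot1[0] + dot2[0]) / 2 - CAMERA_WIDTH_PX / 2
    x_offset = distance * math.tan(mid_x / CAMERA_WIDTH_PX * CAMERA_FOV_RAD)
    return x_offset, distance

# Dots 100 px apart, centred in the image: head is straight ahead,
# roughly 2.6 m away with these assumed constants.
x, z = head_position((462, 384), (562, 384))
print(x, z)
```

The rendered view then uses this head position as an off-axis camera, which is what makes the screen behave like a window.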

360° Light Field Display

We describe a set of rendering techniques for an autostereoscopic light field display able to present interactive 3D graphics to multiple simultaneous viewers 360 degrees around the display. The display consists of a high-speed video projector, a spinning mirror covered by a holographic diffuser, and FPGA circuitry to decode specially rendered DVI video signals. The display uses a standard programmable graphics card to render over 5,000 images per second of interactive 3D graphics, projecting 360-degree views with 1.25 degree separation up to 20 updates per second. We describe the system's projection geometry and its calibration process, and we present a multiple-center-of-projection rendering technique for creating perspective-correct images from arbitrary viewpoints around the display. Our projection technique allows correct vertical perspective and parallax to be rendered for any height and distance when these parameters are known, and we demonstrate this effect with interactive raster graphics using a tracking system to measure the viewer's height and distance. We further apply our projection technique to the display of photographed light fields with accurate horizontal and vertical parallax. We conclude with a discussion of the display's visual accommodation performance and discuss techniques for displaying color imagery. - USC
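The figures quoted in the abstract are internally consistent, which is worth checking as a quick piece of arithmetic (this is just a sanity check on the stated numbers, not part of the USC system):

```python
# 360 degrees of views at 1.25 degree separation, refreshed 20 times per second.
views_per_revolution = 360 / 1.25                # distinct angular views
images_per_second = views_per_revolution * 20    # projector frames at 20 updates/s
print(views_per_revolution, images_per_second)   # → 288.0 5760.0
```

288 views at 20 Hz works out to 5,760 projected images per second, matching the abstract's "over 5,000 images per second" from the high-speed projector and spinning mirror.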

Physics and Augmented Reality

Real-time 3D tracking in action: an example of live video imagery that is digitally processed and "augmented" with computer-generated graphics.

Sunday, February 3, 2008

touchless human / machine user interface for 3D navigation

Elliptic Labs is paving the way for controlling computers and screens without touch, using only a finger or hand in the air. Manipulate images, play computer games, control robotics, or use touch screens without touching them or holding a hardware control unit. - Elliptic Labs