Thursday, February 21, 2008
Stanford camera chip can see in 3D

Instead of devoting the entire sensor to one big representation of the image, Stanford researcher Keith Fife's 3-megapixel prototype sensor breaks the scene up into many small, slightly overlapping 16x16-pixel patches called subarrays. Each subarray has its own lens to view the world--thus the term multi-aperture.
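To picture the layout, here's a minimal sketch of tiling a sensor readout into overlapping 16x16 subarrays. The 4-pixel overlap is an assumed value--the article only says the patches overlap slightly--and the function and variable names are just illustrative:

```python
import numpy as np

def split_into_subarrays(image, patch=16, overlap=4):
    """Tile a 2-D sensor image into slightly overlapping patch x patch
    subarrays, mimicking the multi-aperture layout described above.
    The 4-pixel overlap is an assumption; the article gives no number."""
    step = patch - overlap
    h, w = image.shape
    tiles = {}
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            tiles[(y, x)] = image[y:y + patch, x:x + patch]
    return tiles

# Example: a fake 64x64 sensor readout.
sensor = np.random.rand(64, 64)
tiles = split_into_subarrays(sensor)
print(len(tiles), "subarrays of shape", next(iter(tiles.values())).shape)
```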
After a photo is taken, image-processing software analyzes the slight differences in where the same element appears in different patches--for example, where a spot on a subject's shirt sits relative to the wallpaper behind it. These shifts from one subarray to the next can be used to deduce the distance to the shirt and to the wall. - news.com
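The article doesn't describe Stanford's actual matching software, but the underlying idea is standard depth from disparity: measure how far a feature shifts between neighboring subarrays, then invert the pinhole-stereo relation depth = f * B / d. A rough sketch, using a simple sum-of-squared-differences match; the focal length and baseline values are made up for illustration:

```python
import numpy as np

def estimate_disparity(patch_a, patch_b, max_shift=4):
    """Find the horizontal shift (in pixels) that best aligns patch_b
    with patch_a by minimizing squared differences over the overlapping
    columns--a stand-in for whatever matching the real software uses."""
    best_shift, best_err = 0, np.inf
    w = patch_a.shape[1]
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = patch_a[:, s:], patch_b[:, :w - s]
        else:
            a, b = patch_a[:, :w + s], patch_b[:, -s:]
        err = np.mean((a - b) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def depth_from_disparity(disparity_px, focal_px=500.0, baseline_mm=0.5):
    """Pinhole-stereo relation: depth = f * B / d.
    focal_px and baseline_mm are illustrative, not the chip's specs."""
    if disparity_px == 0:
        return float("inf")  # no parallax -> effectively at infinity
    return focal_px * baseline_mm / abs(disparity_px)

# Example: a 3-pixel shift at these assumed optics puts the feature
# about 83 mm away (500 * 0.5 / 3).
print(depth_from_disparity(3))
```

The key property is the division by disparity: the farther the shirt is from the camera, the smaller its shift between subarrays, so nearby objects produce large parallax and distant ones almost none.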
at 8:24 PM
Categories: image, technology