Friday, March 30, 2012

Apple Invents a Killer 3D Imaging Camera for iOS Devices


Apple has invented a killer 3D imaging camera that will apply to both still photography and video. The new cameras in development will utilize depth-detection sensors such as LIDAR, RADAR and laser to create the stereo disparity maps behind 3D imagery. Additionally, the cameras will use advanced chrominance and luminance sensors for superior color accuracy. And if that wasn't enough, the new cameras will include not only facial recognition but also facial gesturing recognition. Intel discussed the coming 3D revolution back in 2010, and it appears that Apple wants to be one of the first to introduce this killer 3D camera. While others may have beaten Apple to market first, the technology described in today's invention will definitely provide iOS devices with the ability to view killer 3D images that could only be appreciated on Apple's "Resolutionary" Retina Display. Apple's resolutionary experience has only begun. With the ability to view stunning 3D imagery, photos and videos on our new iPad displays, the resolutionary experience is only going to pop our brains even further.
 
Today's Cameras with Limited 3D Capabilities
 
Existing three-dimensional image capture devices, such as digital cameras and video recorders, can derive limited three-dimensional visual information for objects located within a captured area. For example, some imaging devices can extract approximate depth information relating to objects located within the captured area, but are incapable of obtaining detailed geometric information relating to the surfaces of these objects.
 
Such sensors may be able to approximate the distances of objects within the captured area, but cannot accurately reproduce the three-dimensional shape of those objects. Alternatively, other imaging devices can obtain and reproduce surface detail information for objects within the captured area, but are incapable of extracting depth information.
 
Accordingly, these sensors may be incapable of differentiating between a small object positioned close to the sensor and a large object positioned far away from the sensor.
 
Apple's Advanced 3D Camera Solutions
 
Apple's invention relates to systems, apparatuses and methods for capturing a three-dimensional image using one or more dedicated cameras.
 
According to Apple, one embodiment may take the form of a three-dimensional camera configured to capture at least one image including one or more objects, comprising: a first sensor for capturing a polarized image, the first sensor including a camera and a polarized filter associated with the first camera; a second sensor for capturing a first non-polarized image; a third sensor for capturing a second non-polarized image; and at least one processing module for deriving depth information for the one or more objects utilizing at least the first non-polarized image and the second non-polarized image, the processing module further operative to combine the polarized image, the first non-polarized image, and the second non-polarized image to form a composite three-dimensional image.
 
Another embodiment may take the form of a three-dimensional imaging apparatus configured to capture at least one image including one or more objects, comprising: a first sensor for capturing a polarized chrominance image and determining surface information for the one or more objects, the first sensor including a color imaging device and a polarized filter associated with the color imaging device; a second sensor for capturing a first luminance image; a third sensor for capturing a second luminance image; and at least one processing module for deriving depth information for the one or more objects utilizing at least the first luminance image and the second luminance image and combining the polarized chrominance image, the first luminance image, and the second luminance image to form a composite three-dimensional image utilizing the surface information and the depth information.
 
Still another embodiment may take the form of a method for capturing at least one image of an object, comprising: capturing a polarized image of the object; capturing a first non-polarized image of the object; capturing a second non-polarized image of the object; deriving depth information for the object from at least the first non-polarized image and the second non-polarized image; determining a plurality of surface normals for the object, the plurality of surface normals derived from the polarized image; and creating a three-dimensional image from the depth information and the plurality of surface normals.
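
To make the method embodiment above concrete, here's a minimal Python sketch of the final step, assuming a coarse stereo depth map and polarization-derived surface normals are already available. The function name and the naive gradient-integration scheme are our own illustration; the patent filing does not spell out a specific algorithm.

```python
import numpy as np

def fuse_depth_and_normals(coarse_depth, normals):
    """Combine a coarse stereo depth map (H x W) with per-pixel unit
    surface normals (H x W x 3), e.g. derived from a polarized image.
    Naive approach: convert normals to surface gradients, integrate
    them into a fine relief, then anchor that relief to the coarse
    stereo depth, which supplies the low-frequency shape and scale."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    nz = np.clip(nz, 1e-3, None)      # guard against division by zero
    p, q = -nx / nz, -ny / nz         # dz/dx and dz/dy

    # Path integration: down the first column (q), then across rows (p).
    relief = np.cumsum(p, axis=1)
    relief += np.cumsum(q[:, :1], axis=0)
    relief -= relief.mean()           # keep only relative surface detail

    return coarse_depth + relief

# Smoke test: a flat surface (normals all pointing at the camera)
# should add no relief at all.
depth = np.full((4, 4), 2.0)
flat = np.zeros((4, 4, 3)); flat[..., 2] = 1.0
print(np.allclose(fuse_depth_and_normals(depth, flat), depth))  # True
```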
 
Sample image sensing devices include charge-coupled device (CCD) sensors, complementary metal-oxide-semiconductor sensors, infrared sensors, light detection and ranging sensors, and the like. Further, the image sensing devices may be sensitive to a range of colors and/or luminances, and may employ various color separation mechanisms such as Bayer arrays, Foveon X3 configurations, multiple CCD devices, dichroic prisms and the like.
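
As a quick aside on Bayer arrays: each photosite records just one color, so the raw mosaic must first be split into color planes (and later interpolated, i.e. demosaiced). Here's a tiny NumPy sketch of that first step, assuming a hypothetical RGGB layout:

```python
import numpy as np

def split_bayer_rggb(raw):
    """Pull the raw color planes out of an RGGB Bayer mosaic. Each
    pixel on the sensor sees only one color; full-color output would
    need interpolation (demosaicing), omitted here for brevity."""
    r  = raw[0::2, 0::2]          # red photosites
    g1 = raw[0::2, 1::2]          # green photosites on red rows
    g2 = raw[1::2, 0::2]          # green photosites on blue rows
    b  = raw[1::2, 1::2]          # blue photosites
    return r, (g1 + g2) / 2.0, b  # crude average of the two green planes
```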
 
Devices that will use the new 3D Capturing Technology
 
Apple states that in some embodiments, the image sensing device may be configured to convert or facilitate converting the captured image into digital image data. The image sensing device may be hosted in various electronic devices including, but not limited to, digital cameras, personal computers, personal digital assistants (PDAs), mobile telephones, a standalone camera, or any other devices that can be configured to process image data.
 
Components Integrated into a 3D Image Capturing Camera
 
Apple's patent FIG. 1A below is a functional block diagram that illustrates certain components of one embodiment of a three-dimensional camera.
 
[Apple patent FIG. 1A: three-dimensional camera block diagram]

As shown in FIG. 1A above, the three-dimensional imaging apparatus/camera 100 may include a first imaging device 102, a second imaging device 104, and an image processing module 106. The first imaging device 102 may include a first camera, while the second imaging device 104 may include a second camera and a polarizing filter 108 associated with that camera.
 
Generating Stereo Disparity Maps
 
The fields of view 112 and 114 of the first and second imaging devices noted above may be offset so that the received images are slightly different. For example, the field of view 112 of the first imaging device 102 may be vertically, diagonally, or horizontally offset from the field of view 114 of the second imaging device 104, or may be closer to or further from a reference plane or point. Offsetting the fields of view 112 and 114 may provide data useful for generating stereo disparity maps, as well as for extracting depth information.
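
The filing doesn't prescribe how a stereo disparity map is computed, but the classic recipe is easy to sketch. Below is a deliberately slow brute-force block-matching example in Python/NumPy; it assumes a rectified grayscale image pair, and the function names and parameters are ours. Pixels that shift more between the two offset views are closer to the camera, and metric depth follows from depth = focal length × baseline / disparity.

```python
import numpy as np

def disparity_map(left, right, max_disp=32, block=5):
    """Brute-force block matching: for each pixel in the left image,
    find the horizontal shift (disparity) that best aligns a small
    block with the right image. Larger disparity = closer object."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [
                np.sum(np.abs(patch - right[y - half:y + half + 1,
                                            x - d - half:x - d + half + 1]))
                for d in range(max_disp)
            ]
            disp[y, x] = np.argmin(costs)   # best-matching shift
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Classic pinhole relation: depth = f * B / disparity."""
    return focal_px * baseline_m / np.maximum(disp, 1e-6)
```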
 
Depth-Detection Technique Options: LIDAR, RADAR and Laser
 
Apple states that the first imaging device 102 noted in FIG. 1A above may be configured to derive an approximate relative distance of an object 110 by measuring properties of electromagnetic waves as they are reflected off or scattered by the object and captured by the first imaging device.
 
In one embodiment, the first imaging device may be a Light Detection and Ranging (LIDAR) sensor. The LIDAR sensor may emit laser pulses that are reflected off of the surfaces of objects in the image and detect the reflected signal. The LIDAR sensor may then calculate the distance of an object from the sensor by measuring the time delay between transmission of a laser pulse and the detection of the reflected signal. Other embodiments may utilize other types of depth-detection techniques, such as infrared reflection, RADAR, laser detection and ranging, and the like.
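
The time-of-flight arithmetic described above is simple enough to show directly; this is generic LIDAR math, not code from the patent:

```python
# Time-of-flight distance estimate, as used by LIDAR-style sensors.
# The pulse travels to the object and back, hence the division by two.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(time_delay_s: float) -> float:
    """Distance implied by the delay between emitting a laser pulse
    and detecting its reflection off an object's surface."""
    return SPEED_OF_LIGHT * time_delay_s / 2.0

# Example: a 20-nanosecond round trip puts the object ~3 meters away.
print(lidar_distance(20e-9))  # ~2.998
```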
 
Utilizing Microlenses
 
Apple's invention also notes that the 3D capturing camera will utilize microlenses that overlie subfilters focused on polarized light. The microlenses can be formed from any suitable material for transmitting and diffusing light through the light guide, including plastic, acrylic, silica, glass, and so on and so forth. Additionally, the light guide may include combinations of reflective material, highly transparent material, light absorbing material, opaque material, metallic material, optic material, and/or any other functional material to provide extra modification of optical performance.
 
In one embodiment, the microlenses may be convex and have a substantially rounded configuration. Other embodiments may have different configurations. For example, in one embodiment, the microlenses may have a conical configuration, in which the top end of each microlens is pointed.
 
In other embodiments, the microlenses may define truncated cones, in which the tops of the microlenses form a substantially flat surface. Additionally, in some embodiments, the microlenses may be concave surfaces, rather than convex.
 
As is known, the microlenses may be formed using a variety of techniques, including laser-cutting techniques, and/or micro-machining techniques, such as diamond turning. After the microlenses are formed, an electrochemical finishing technique may be used to coat and/or finish the microlenses to increase their longevity and/or enhance or add any desired optical properties.
 
Chrominance and Luminance Sensors
 
[Apple patent figure: chrominance and luminance sensor arrangement]

Other essentials noted in the 3D camera design include the use of a first chrominance sensor (202), a luminance sensor (204) and a second chrominance sensor (206). The luminance sensor may be configured to capture a luminance component of incoming light, while each of the chrominance sensors may be configured to capture color components of incoming light. In one embodiment, the chrominance sensors 202 and 206 may sense the R (Red), G (Green), and B (Blue) components of an image and process these components to derive chrominance information.
 
Other embodiments may be configured to sense other color components, such as yellow, cyan, magenta, and so on. Further, in some embodiments, two luminance sensors and a single chrominance sensor may be used. That is, certain embodiments may employ a first luminance sensor, a first chrominance sensor and a second luminance sensor, such that a stereo disparity (e.g., stereo depth) map may be generated based on the offsets of the two luminance images. Each luminance sensor captures one of the two luminance images in this embodiment.
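
As a rough illustration of how two luminance sensors and a single chrominance sensor might be combined, here's a toy Python sketch: the luminance pair yields a disparity (depth) map, the chrominance sensor supplies Cb/Cr color, and a standard BT.601-style conversion produces the final color image. Sensor alignment and parallax correction are ignored for brevity, and all the names are illustrative:

```python
import numpy as np

def compose_color_3d(luma_left, luma_right, chroma_cb, chroma_cr, disparity):
    """Toy composite in the spirit of the patent: luminance from the
    dedicated luma sensor pair, color from the chrominance sensor,
    plus a per-pixel depth channel from the luminance-pair disparity.
    Inputs are float arrays in the 0-255 range, already co-registered."""
    y = (luma_left + luma_right) / 2.0   # naive merge of the luma pair

    # Standard BT.601 YCbCr -> RGB conversion.
    r = y + 1.402 * (chroma_cr - 128)
    g = y - 0.344136 * (chroma_cb - 128) - 0.714136 * (chroma_cr - 128)
    b = y + 1.772 * (chroma_cb - 128)
    rgb = np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

    return rgb, disparity   # color image plus its depth map
```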
 
Facial and Gesture Recognition
 
In another embodiment, the three-dimensional imaging apparatus may be used for recognizing facial gestures. Facial gestures may include, but are not limited to, smiling, grimacing, frowning, winking, and so on and so forth. In one embodiment, this may be accomplished by detecting the orientation of various facial features, such as the mouth, eyes, nose, forehead and cheeks, using surface geometry data, and correlating the detected orientations with various gestures.
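
In code, the correlation step might reduce to something as simple as the toy rule below; the landmark coordinates and threshold are purely hypothetical and only meant to convey the idea:

```python
def classify_mouth_gesture(left_corner_y, right_corner_y, center_y, tol=0.5):
    """Toy illustration of gesture recognition from 3D facial geometry:
    compare the vertical position of the mouth corners against the
    mouth center. Corners above the center suggest a smile; below,
    a frown. A real system would correlate the orientation of many
    facial regions, as the patent describes."""
    avg_corner = (left_corner_y + right_corner_y) / 2.0
    if avg_corner > center_y + tol:
        return "smile"
    if avg_corner < center_y - tol:
        return "frown"
    return "neutral"

print(classify_mouth_gesture(2.0, 1.8, 0.0))  # "smile"
```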
 
3D Models Created by Rotating Objects
 
In another embodiment, the three-dimensional imaging apparatus may be used to scan an object, for example, to create a three-dimensional model of it. This may be accomplished by taking multiple photographs or video of the object while rotating it. As the object is rotated, the image sensing device may capture more of the surface geometry and use that geometry to create a three-dimensional model of the object.

In another related embodiment, multiple photographs or video may be taken while the image sensing device is moved relative to the object, and then used to construct a three-dimensional model of the objects within the captured image(s). For example, a user may take video of a home while walking through it, and the image sensing device could use the calculated depth and surface detail information to create a three-dimensional model of the home. The depth and surface detail information from multiple photographs or video stills may then be matched to construct a seamless composite three-dimensional model that combines the surface detail and depth from each of the photos or video.
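
Here's a minimal sketch of the turntable case, assuming each captured point cloud comes with a known rotation angle; a real pipeline would also refine the alignment of overlapping surfaces (e.g. with ICP):

```python
import numpy as np

def merge_scans(scans, angles_deg):
    """Toy version of building a 3D model from a rotating object:
    each scan is an (N, 3) point cloud captured at a known turntable
    angle. Rotate every cloud back into a common reference frame
    (about the vertical y-axis) and stack them into one model."""
    merged = []
    for points, angle in zip(scans, angles_deg):
        theta = np.radians(-angle)   # undo the turntable rotation
        rot = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                        [0.0,           1.0, 0.0],
                        [-np.sin(theta), 0.0, np.cos(theta)]])
        merged.append(points @ rot.T)
    return np.vstack(merged)
```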
 
The coming 3D revolution was first discussed in our report titled "Intel's CES Keynote 2010, Apple and iLife 3D." The Intel rep stated that it would take 8 to 16 processors to pull off 3D in simple-to-use consumer applications. Fitting this into a camera would be stunning.
 
Patent Credits
 
Apple's patent application was originally filed in Q3 2011 by inventors Brett Bilbrey, Michael Culbert, David Simon, Rich DeVaul, Mushtaq Sarwar and David Gere, and was published today by the US Patent and Trademark Office.
