Published on: Jan 03, 2014
Capturing 3D scans in low-light or dark conditions was once all but impossible. Microsoft's Kinect, which projects an infrared laser pattern, was the best device on the market for taking 3D scans in dim conditions, yet its sensor could not make out a subject in a dark room and had to be fitted with a night-vision lens to get the best results.
MIT researchers have now tackled this limitation, developing two new 3D scanners capable of imaging subjects in extremely dim light or complete darkness.
The first scanner works in a straightforward way. Dubbed the first-photon imaging system, it resembles the lidar scanners used in land surveying and in autonomous vehicles and equipment.
A lidar scanner gauges the depth of an object by bouncing laser light off it at many discrete points arranged in a grid, and measuring how long the reflected photons take to travel back to the detector; those travel times are then converted into the depths that make up the image. MIT's system instead uses a single laser that fires at one position in the grid until a photon is detected, then moves on to the next spot.

The relationship between laser and target is intuitive: the more reflective the object, the fewer pulses are needed before a photon returns, which makes the system highly light-efficient. Photons can also stray from the target, producing substantial background noise; the system's algorithm filters this out by relating each reflected-photon pixel to its neighbors, stitching the measurements together into a high-resolution 3D image.
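The first-photon idea can be sketched in a few lines of Python. Everything here is illustrative, not MIT's actual photon model: each grid point is assumed to return a photon per pulse with probability proportional to the target's reflectivity, and depth follows from the round-trip travel time of the first detected photon.

```python
import random

C = 3.0e8  # speed of light, m/s


def depth_from_round_trip(t_seconds):
    """Convert a photon's round-trip travel time into distance (metres)."""
    return C * t_seconds / 2.0


def pulses_until_first_photon(reflectivity, rng):
    """Fire pulses at one grid point until a photon comes back.

    Assumption for illustration: each pulse yields a detected photon
    with probability equal to the target's reflectivity.
    """
    pulses = 1
    while rng.random() > reflectivity:
        pulses += 1
    return pulses


rng = random.Random(42)
# A dark target (low reflectivity) needs more pulses on average
# than a bright one before its first photon is detected.
dark = sum(pulses_until_first_photon(0.05, rng) for _ in range(1000)) / 1000
bright = sum(pulses_until_first_photon(0.8, rng) for _ in range(1000)) / 1000
print(dark > bright)  # True: reflective objects need fewer pulses

# A photon whose round trip takes about 66.7 ns came from ~10 m away.
print(depth_from_round_trip(66.7e-9))
```

The averaging over many grid points mirrors why the real system is so light-efficient: a highly reflective surface is resolved after only a pulse or two, so the photon budget concentrates on the dark regions that need it.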
MIT's second 3D scanner, known simply as the nano-camera, is awaiting launch. It could serve several applications, including medical imaging, collision-avoidance detection, and gesture-based interactive gaming, and it is not thrown off by challenging conditions such as darkness, fog, rain, or the translucence of an object. The nano-camera relies on the same time-of-flight principle used by Microsoft's Kinect sensor.
Before developing the nano-camera, MIT researchers built a trillion-frame-per-second "femto-camera" in 2011. That system could capture a single pulse of light sweeping through a scene, but it relied on lab-grade optical equipment to build up an image pulse by pulse.
To create the nano-camera, the researchers instead encode a single laser signal in real time using techniques borrowed from the telecommunications industry, which enabled them to calculate the depth and distance of an object even while it is in constant motion.
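In continuous-wave time-of-flight cameras of this general kind, depth is typically recovered from the phase shift of a modulated light signal rather than from a raw pulse timing. A minimal sketch of that relationship (the 30 MHz modulation frequency and the phase value are illustrative assumptions, not the nano-camera's actual parameters):

```python
import math

C = 3.0e8  # speed of light, m/s


def depth_from_phase(phase_rad, mod_freq_hz):
    """Depth from the phase shift of a modulated light signal.

    The round-trip delay is t = phase / (2 * pi * f), and the light
    covers the distance twice, so depth = C * t / 2.
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)


# At 30 MHz modulation, a phase shift of pi/2 corresponds to:
d = depth_from_phase(math.pi / 2, 30e6)
print(round(d, 3))  # 1.25 m
```

One design consequence visible in the formula: a single modulation frequency can only resolve depth unambiguously within half a modulation wavelength, which is one reason coded or multi-frequency signals help with difficult scenes.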