Saturday, May 15, 2010

First tests

Last Saturday we finally got hold of a projector and started the first experiments with mapping and scanning.

Manual mapping

The goal of this experiment was to project two different images onto two perpendicular walls (a corner of the room). The projector was not aligned with either wall, so any planar projection produced deformed images. In a 3D application we created two textured planes simulating the target walls, and then manually adjusted the corner vertices until the projected images looked undeformed on the walls.
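
What we did by hand is essentially a corner-pin (homography) warp. Below is a minimal sketch of the same idea, assuming OpenCV is available; the file names, corner coordinates and output resolution are placeholders, not values from our test:

import cv2
import numpy as np

# Image intended for one of the walls.
img = cv2.imread("wall_texture.png")
h, w = img.shape[:2]

# Source corners: the undeformed image rectangle.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Destination corners in projector pixel space, tweaked by hand until
# the projection looks undeformed on the physical wall (placeholder values).
dst = np.float32([[40, 60], [900, 20], [940, 700], [30, 740]])

# 3x3 homography mapping the flat image onto the corner-pinned quad.
H = cv2.getPerspectiveTransform(src, dst)

# Render into the projector's output resolution.
out = cv2.warpPerspective(img, H, (1024, 768))
cv2.imwrite("projector_output.png", out)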

Remark: The deformed planes in the 3D software didn't represent the shape of the walls, but a planar, deformed shape that matches the projection. That led us to think that we could accomplish the goal of projecting onto real 3D structures using simplified 2D representations of those 3D objects.


This first mapping was performed manually. Programs like modul8 also map the images onto the geometry by hand.
In others, like vvvv, the geometry has to be defined in order to generate a virtual model, which is then aligned with the real one.
This last, more automatic approach is what we want to achieve.

3D Scanning

The second experiment was scanning objects using the structured light technique.
The goal was to obtain three photos, projecting the three phase-shifted patterns seen in the link above. These photos, together with those patterns, are then processed by the application to generate a cloud of points representing the recovered 3D geometry.
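
As a rough sketch of what that processing involves (not the actual application we used): the three patterns are sinusoids shifted by 120 degrees, and the three photos give a wrapped phase per pixel. The resolution, stripe count and file names below are illustrative assumptions:

import cv2
import numpy as np

# Generate the three sinusoidal patterns, shifted by -120, 0 and +120 degrees
# (projector resolution and number of stripes are arbitrary here).
w, h, periods = 1024, 768, 32
x = np.arange(w) / w
for k in range(3):
    shift = 2.0 * np.pi * (k - 1) / 3.0
    stripe = 0.5 + 0.5 * np.cos(2.0 * np.pi * periods * x + shift)
    pattern = np.tile((stripe * 255).astype(np.uint8), (h, 1))
    cv2.imwrite(f"pattern_{k}.png", pattern)

# Decoding: the three photos taken under these patterns yield a wrapped
# phase per pixel (hypothetical file names photo_0.png ... photo_2.png).
I0, I1, I2 = (cv2.imread(f"photo_{k}.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
              for k in range(3))
wrapped = np.arctan2(np.sqrt(3.0) * (I0 - I2), 2.0 * I1 - I0 - I2)
# 'wrapped' lies in (-pi, pi]; phase unwrapping and triangulation against
# the projector geometry are still needed to turn it into a point cloud.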

Many configurations were tested, using different camera positions and different distances from the projector to the target object. We started with a standard web camera, but the resolution of the captured images was very poor and we had trouble framing the scanned object/person (it ended up too "far away" in the image, with the corresponding loss of detail).

The best results were obtained with a 5-megapixel photo camera with 12x optical zoom, setting the camera's field of view similar to the projector's. The pictures were also cropped with an image editor so that the applet focuses only on the target object.

The calibration of the projector turned out to be very important so that the projected pattern looks as sharp as possible. That, added to the fact that we had to use a high-resolution photo camera instead of a low-resolution web camera, leads us to think that the structured light algorithm requires images of mid-to-high resolution.
Another conclusion, drawn directly from the results obtained, is that the objects to be scanned shouldn't have glossy surfaces, because the patterns get lost in those areas.
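
One way we could detect those unreliable areas (a sketch, not something we have implemented yet): the modulation of the recovered sinusoid drops wherever the pattern washes out, so low-modulation pixels can be masked out before building the point cloud. The 0.1 threshold is an arbitrary illustrative value:

import numpy as np

def reliability_mask(I0, I1, I2, threshold=0.1):
    """Return True where the projected pattern is strong enough to trust.

    I0, I1, I2 are the three grayscale photos (float arrays) taken under
    the -120/0/+120 degree patterns. Glossy or saturated areas wash the
    stripes out, so the modulation drops there.
    """
    amplitude = np.sqrt(3.0 * (I0 - I2) ** 2 + (2.0 * I1 - I0 - I2) ** 2)
    average = I0 + I1 + I2 + 1e-9  # avoid division by zero
    modulation = amplitude / average
    return modulation > threshold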

Taking the conclusions above into consideration, we suspect we will have trouble implementing a real-time 3D reconstruction solution using standard consumer cameras.

See here the pictures obtained of Javier's (jefa) reconstructed head.

Recognized 2.5D shape: