Sunday, May 30, 2010

Structured light - Continuity vs Discontinuity

A third experiment with structured light was performed, trying to scan a simple scene that is very common in the video-mapping field: a group of primitive shapes.
We faced a lot of problems with this apparently simple scene, which made us think that structured light wasn't suitable for discontinuous shapes, so we built a complex but continuous geometry by wrapping the scene with a piece of cloth. The SL applet had no trouble generating the 3D geometry for this second scene.

After getting these results we asked the creator of the applet to confirm our early conclusion, and we got a response. It was true that the experiment performed wasn't suitable for discontinuous shapes, because the applet uses phase-shifting scanning based on the principle of propagating depth values across a surface. So if two surfaces are disconnected, it cannot determine how they are related depth-wise.
However, it isn't true that structured light in general is inappropriate for discontinuous shapes. Other pattern codifications and algorithms, less oriented toward real time, can return better results for these types of scenes.
This is what we'll be working on in the coming weeks.
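
To make the propagation principle described above concrete, here is a minimal sketch of flood-fill phase unwrapping (our own illustration, not the applet's actual code). Wrapped phase is stored in periods, as in the applet's ThreePhase.java, and each pixel is shifted by a whole number of periods to match an already-unwrapped neighbor. The limitation becomes obvious: values only propagate across connected, valid pixels, so a region disconnected from the seed is never related to it depth-wise.

import java.util.ArrayDeque;

public class FloodUnwrap {
    // phase: wrapped phase in periods; valid: mask of usable pixels.
    // The seed pixel is assumed to be valid.
    static void unwrap(double[][] phase, boolean[][] valid, int seedX, int seedY) {
        int h = phase.length, w = phase[0].length;
        boolean[][] done = new boolean[h][w];
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[] { seedX, seedY });
        done[seedY][seedX] = true;
        int[][] dirs = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            for (int[] d : dirs) {
                int x = p[0] + d[0], y = p[1] + d[1];
                if (x < 0 || y < 0 || x >= w || y >= h || done[y][x] || !valid[y][x]) continue;
                // Propagation step: shift the neighbor by a whole number of
                // periods so it lies within half a period of the pixel we
                // came from. This only ever crosses connected surfaces.
                phase[y][x] += Math.round(phase[p[1]][p[0]] - phase[y][x]);
                done[y][x] = true;
                queue.add(new int[] { x, y });
            }
        }
    }
}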

Photos of the scenes - discontinuous and continuous

Thursday, May 20, 2010

Second scanning

In this second experiment with structured light we scanned a simple scene: two perpendicular walls.
A lot of video-mapping applications, like this one, focus on objects with planar faces such as boxes, so the idea of this experiment was to test structured light with this type of object.
The geometry was retrieved fairly well with minor tweaks: the pictures were only cropped around the target zone, and the Z scale and Z skew were adjusted in the SL applet.
Here are the pictures taken and a snapshot of the retrieved cloud of points.



Saturday, May 15, 2010

First tests

Last Saturday we finally got a projector and started making the first experiments with mapping and scanning.

Manual mapping

The goal of this experiment was to project two different images onto two perpendicular walls (a corner of the room). The projector was not aligned with either wall, so any planar projection resulted in deformed images. In a 3D software package, two textured planes were created simulating the target planes. Then the corner vertices were manually adjusted until the projected images looked undeformed on the walls.

Remark: The deformed planes in the 3D software didn't represent the shape of the walls but a deformed planar shape that matches the projection. That led us to think that we could accomplish the goal of projecting over real 3D structures using simplified 2D representations of those 3D objects.


This first mapping was performed manually. Programs like modul8 also map the images to the geometry manually.
In others, like vvvv, a definition of the geometry is needed to generate the virtual model; this virtual model is then aligned with the real one.
This latter, more automatic approach is what we want to achieve.

3D Scanning

The second experiment was scanning objects using the structured light technique.
The goal was to obtain three photos while projecting the three-phase patterns seen in the link above. These photos and the mentioned patterns are then processed by the application to generate a cloud of points that represents the retrieved 3D geometry.
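
For reference, this is roughly how such patterns can be generated. It is a sketch under our own assumptions (vertical sinusoidal stripes, a hand-picked period in projector pixels, 120-degree shifts between the three images); the actual pattern generator may differ.

import java.awt.image.BufferedImage;

public class ThreePhasePatterns {
    // Builds the three phase-shifted fringe images to be projected.
    public static BufferedImage[] make(int width, int height, double period) {
        BufferedImage[] patterns = new BufferedImage[3];
        for (int k = 0; k < 3; k++) {
            BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            double shift = (k - 1) * 2 * Math.PI / 3; // -120, 0, +120 degrees
            for (int x = 0; x < width; x++) {
                // Vertical stripes: intensity depends only on the column.
                int v = (int) Math.round(127.5 + 127.5 * Math.cos(2 * Math.PI * x / period + shift));
                int rgb = (v << 16) | (v << 8) | v;
                for (int y = 0; y < height; y++) img.setRGB(x, y, rgb);
            }
            patterns[k] = img;
        }
        return patterns;
    }
}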

Many configurations were tested, using different camera positions and different distances from the projector to the target object. We started with a standard web camera, but the resolution of the captured images was very poor and we had problems framing the scanned object/person (it appeared too "far away" in the image, with the corresponding loss of detail).

The best results were obtained using a 5-megapixel photo camera with 12x optical zoom, with the camera's field of view similar to the projector's. The pictures were also cropped with an image editor so the applet focuses only on the target object.

The calibration of the projector seemed to be very important, so that the projected pattern looks as sharp as possible. That, added to the fact that we had to use a high-resolution photo camera instead of a low-resolution web camera, led us to think that the structured light algorithm requires images with mid-to-high resolution.
Another conclusion, taken directly from the obtained results, is that the objects to be scanned shouldn't have glossy surfaces, because the patterns are lost in those areas.

Taking the conclusions above into consideration, we suspect we will have problems implementing a real-time 3D reconstruction solution using standard consumer cameras.

See here the pictures obtained of Javier's (jefa) reconstructed head.


Recognized 2.5D shape:




Tuesday, May 11, 2010

Dynamic Projection Environments for Immersive Visualization

This paper presents a system for dynamic projection.
The projection surfaces are large screens (human-body scale), each on wheels so it can be moved easily. When the projection surfaces are moved, the application recalculates the visualization on the fly, in real time.

They use a technique they call projector keyframing to provide continuity on moving surfaces while waiting for simulations to complete.

The system allows multiple users to interact with each other and with the visualization application.

The initial target application for the system was interactive architectural lighting visualization: it gives architects and clients a simulated environment in which to evaluate the natural and artificial lighting of a proposed architectural design.

The distributed system allows for:
- Projector keyframing - a technique to impart slow applications with a dynamic, responsive feel
- Tracking projection surfaces of known geometry with simple IR-based LED markers, and
- A distributed rendering system which can be extended to drive an arbitrary number of projectors.

The system steps:
- The system uses a single camera to obtain images of the scene.
- It determines the projection surface geometry from the camera information.
- It uses a gigabit-Ethernet-connected camera to detect the LED sensors (located on top of each screen).
- Dynamic projection is done using 10 projectors.

Applications:
- architectural visualization
- exploring volumetric data by defining cross-sections
- general-purpose user-interface elements

See a video with examples here.

Thursday, May 6, 2010

vvvv is a toolkit for real-time video synthesis

It is designed to facilitate the handling of large media environments with physical interfaces, real-time motion graphics, audio and video that can interact with many users simultaneously.

vvvv uses a visual programming interface; it provides a graphical programming language for easy prototyping and development.

vvvv is real time: where many other languages have distinct modes for building and running programs, vvvv has only one mode - runtime.

vvvv is free for non-commercial use.
Download --> http://vvvv.org/tiki-index.php?page=Downloads

Here it is explained how to project on 3D geometry:
they describe how vvvv resolves projection on a flat surface and projection on an arbitrary surface.
In the first case vvvv uses a homography to resolve the projection (a sketch is given below).
In the second case they build a virtual replica of the real scene, in three steps:
1- define the origin of your real world's coordinate system
2- create the target projection surface as a 3D model and place it correctly in your virtual scene with respect to the coordinate system's origin (they comment that this can be done with vvvv or another toolkit)
3- measure the position, orientation and lens characteristics of the projector

Everything in these three steps is done manually by the user, using the vvvv toolkit or another one.
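
For the flat-surface case mentioned above, a homography can be estimated from just four corner correspondences. Below is a sketch (class and method names are ours; vvvv's internals surely differ) that solves the standard 8x8 linear system for the 3x3 matrix with its last entry fixed to 1, assuming no three of the four corners are collinear:

public class Homography {
    final double[] h = new double[9]; // row-major 3x3, h[8] fixed to 1

    // src and dst each hold 4 points as {x0,y0, x1,y1, x2,y2, x3,y3}.
    Homography(double[] src, double[] dst) {
        double[][] a = new double[8][9]; // augmented system [A | b]
        for (int i = 0; i < 4; i++) {
            double x = src[2 * i], y = src[2 * i + 1];
            double u = dst[2 * i], v = dst[2 * i + 1];
            a[2 * i]     = new double[] { x, y, 1, 0, 0, 0, -u * x, -u * y, u };
            a[2 * i + 1] = new double[] { 0, 0, 0, x, y, 1, -v * x, -v * y, v };
        }
        // Gauss-Jordan elimination with partial pivoting.
        for (int col = 0; col < 8; col++) {
            int pivot = col;
            for (int r = col + 1; r < 8; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
            double[] tmp = a[col]; a[col] = a[pivot]; a[pivot] = tmp;
            for (int r = 0; r < 8; r++) {
                if (r == col) continue;
                double f = a[r][col] / a[col][col];
                for (int c = col; c < 9; c++) a[r][c] -= f * a[col][c];
            }
        }
        for (int i = 0; i < 8; i++) h[i] = a[i][8] / a[i][i];
        h[8] = 1;
    }

    // Maps (x, y) through the homography; returns {u, v}.
    double[] map(double x, double y) {
        double w = h[6] * x + h[7] * y + h[8];
        return new double[] { (h[0] * x + h[1] * y + h[2]) / w,
                              (h[3] * x + h[4] * y + h[5]) / w };
    }
}

Sending the image's four corners to their measured positions on the wall, and warping every pixel through the resulting transformation, is essentially what we did by hand in the manual mapping experiment above.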

Wednesday, May 5, 2010

Fast 3D scanning methods for laser measurement systems

In this paper, Oliver Wulf and Bernardo Wagner present a laser time-of-flight method that can provide distance measurements at up to 50 meters with an error of centimeters.

They combine a 2D laser scanner with a servo drive, allowing different arrangements of scan planes and rotation axes that lead to different fields of view.

First they define four different combinations of the scanner and servo drive, then discuss the measurement density of each one. The measured points are not placed in a regular grid: the density is minimal for laser beams orthogonal to the rotation axis and maximal for beams parallel to this axis.
Example (figure): in this case there are two regions with high measurement density.

Then they show the differences in an experimental comparison: with the same number of points and the same scanning time the results were different, and the most detailed region corresponds to the region of highest density.

They also show which configuration (combination of scanner and rotation axis) is better for indoor or outdoor scans.

To calculate the 3D point cloud, a transformation is needed whose inputs are the 2D raw scan and the position of the 2D scanner.
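
A sketch of that transformation for one common arrangement (our assumption: the 2D scan plane is vertical and the servo rotates it about the vertical axis; the paper's frame conventions may differ):

public class ScanToCloud {
    // beamAngles[i] and ranges[i] are the polar coordinates of one 2D raw
    // scan; servoAngle is the rotation of the scan plane when it was taken.
    // Returns the points as {x, y, z} triples.
    static double[][] toPoints(double[] beamAngles, double[] ranges, double servoAngle) {
        double cosS = Math.cos(servoAngle), sinS = Math.sin(servoAngle);
        double[][] pts = new double[ranges.length][3];
        for (int i = 0; i < ranges.length; i++) {
            // Point inside the vertical 2D scan plane.
            double px = ranges[i] * Math.cos(beamAngles[i]);
            double pz = ranges[i] * Math.sin(beamAngles[i]);
            // Rotate the scan plane about the vertical (z) axis.
            pts[i][0] = cosS * px;
            pts[i][1] = sinS * px;
            pts[i][2] = pz;
        }
        return pts;
    }
}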

Possible applications for 3D laser scanners are object localization and recognition for automated systems, safety systems, surveillance, navigation, etc.

Fast 3D Scanning with Automatic Motion Compensation (Stereo approach)

An intrinsic problem of phase-shifting methods is their inability to deal with blurred images caused by motion of the scanned object or person. There have been many initiatives and modifications of the original structured light method to support the scanning of objects in motion. Please refer to Zhang's paper "Recent progress on real-time 3D shape measurement ... - Song Zhang" mentioned in the previous post; there Zhang covers some techniques that deal with motion blur, with acceptable results.

In the paper presented below, the authors decided to replace the phase-unwrapping step of the structured light method with a stereo-based approach, solving the same correspondence problem by a different mechanism. They argue that phase unwrapping does not recover the absolute phase, and that if two scanned surfaces have a discontinuity of more than 2π, no method based on phase-shifting will correctly unwrap the two surfaces relative to each other.


* Fast 3D Scanning with Automatic Motion Compensation - Thibaut Weise, Bastian Leibe and Luc Van Gool

Abstract
We present a novel 3D scanning system combining stereo and active illumination based on phase-shift for robust and accurate scene reconstruction. Stereo overcomes the traditional phase discontinuity problem and allows for the reconstruction of complex scenes containing multiple objects. Due to the sequential recording of three patterns, motion will introduce artifacts in the reconstruction. We develop a closed-form expression for the motion error in order to apply motion compensation on a pixel level. The resulting scanning system can capture accurate depth maps of complex dynamic scenes at 17 fps and can cope with both rigid and deformable objects.

Real-time 3D shape measurement

Structured light as a technique for 3D reconstruction has been extensively adopted by industry and has proven to work in controlled environments. On the other hand, there is much concern today about the performance of phase-shifting algorithms, mostly when real-time 3D measurement comes into play.

There are several directions researchers are taking to improve the performance of the algorithms, for example using different techniques when projecting the encoded stripes, delegating some calculations to the GPU, or improving the mathematical model itself.

The following two papers present recent research on how to reduce the computational cost of the phase-shifting algorithm. Both authors tackled the problem by improving the mathematical model: approximating the arctangent in the phase formula with an intensity-ratio calculation, and using a lookup table (LUT) to compensate for the approximation error.

Actually, if you look at the ThreePhase.java class of the Structured Light source code, you'll notice a comment suggesting exactly what these papers propose, instead of using Java's atan2 function:

public void phaseWrap() {
...
// this equation can be found in Song Zhang's
// "Recent progresses on real-time 3D shape measurement..."
// and it is the "bottleneck" of the algorithm
// it can be sped up with a look up table, which has the benefit
// of allowing for simultaneous gamma correction.
phase[y][x] = atan2(sqrt3 * (phase1 - phase3), 2 * phase2 - phase1 - phase3) / TWO_PI;
...
}
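
As a sketch of what the papers propose (our own formulation, not the authors' reference code): within each pi/3 sextant of the period the exact phase depends only on the intensity ratio r = (Imed - Imin) / (Imax - Imin), and equals atan(sqrt(3) * r / (2 - r)) measured from the sextant boundary, so a lookup table indexed by quantized r removes the per-pixel arctangent entirely. Which sextant a pixel falls in follows from which of the three intensities is the maximum and which is the minimum.

public class FastPhaseWrap {
    static final int LUT_SIZE = 2048;
    static final double[] LUT = new double[LUT_SIZE + 1];
    static {
        // Precomputed once, so no trigonometry is needed per pixel.
        for (int i = 0; i <= LUT_SIZE; i++) {
            double r = (double) i / LUT_SIZE;
            LUT[i] = Math.atan(Math.sqrt(3) * r / (2 - r));
        }
    }

    // Wrapped phase in [0, 2*pi) from the three pattern intensities
    // (divide by 2*pi to get the applet's units of periods).
    static double phase(double i1, double i2, double i3) {
        double max = Math.max(i1, Math.max(i2, i3));
        double min = Math.min(i1, Math.min(i2, i3));
        if (max == min) return 0; // flat pixel, no fringe signal
        double med = i1 + i2 + i3 - max - min;
        double f = LUT[(int) Math.round((med - min) / (max - min) * LUT_SIZE)];
        // The max/min pair identifies the pi/3 sextant of the period.
        if (max == i2) return (min == i3) ? f : 2 * Math.PI - f;
        if (max == i1) return (min == i3) ? 2 * Math.PI / 3 - f : 2 * Math.PI / 3 + f;
        return (min == i2) ? 4 * Math.PI / 3 - f : 4 * Math.PI / 3 + f;
    }
}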

* Fast three-step phase-shifting algorithm - Peisen S. Huang and Song Zhang - 2006

Abstract
We propose a new three-step phase-shifting algorithm, which is much faster than the traditional three step algorithm. We achieve the speed advantage by using a simple intensity ratio function to replace the arctangent function in the traditional algorithm. The phase error caused by this new algorithm is compensated for by use of a lookup table. Our experimental results show that both the new algorithm and the traditional algorithm generate similar results, but the new algorithm is 3.4 times faster. By implementing this new algorithm in a high-resolution, real-time three-dimensional shape measurement system, we were able to achieve a measurement speed of 40 frames per second at a resolution of 532x500 pixels, all with an ordinary personal computer.


* Recent progress on real-time 3D shape measurement using digital fringe projection techniques - Song Zhang - 2009

Abstract
Over the past few years, we have been developing techniques for high-speed 3D shape measurement using digital fringe projection and phase-shifting techniques: various algorithms have been developed to improve the phase computation speed, parallel programming has been employed to further increase the processing speed, and advanced hardware techniques have been adopted to boost the speed of coordinate calculations and 3D geometry rendering. We have successfully achieved simultaneous 3D absolute shape acquisition, reconstruction, and display at a speed of 30 frames/s with 300 K points per frame. This paper presents the principles of the real-time 3D shape measurement techniques that we developed, summarizes the most recent progress that have been made in this field, and discusses the challenges for advancing this technology further.

Tuesday, May 4, 2010

Coded structured light as a technique to solve the correspondence problem

This paper covers the motivation, history and different techniques regarding the structured light and coded structured light methodologies for 3D surface reconstruction.

First, a passive stereovision system with two sensors/cameras is explained, and the mathematical equations and geometrical constraints are analyzed in detail. Then, structured light as an active method is covered and presented as an alternative to solve the "correspondence problem"; its mathematical model is explained as well. Then, the purpose of coded structured light is described, analyzing temporal dependence, emitted light dependence and depth surface discontinuity dependence. Finally, several coded structured light techniques are covered, discussed and compared.

Active and passive techniques are covered separately, and then, when the mathematical models are explained, the similarities are pointed out.

Even though this paper is quite old (1998), it covers structured light and vision systems for 3D reconstruction from a historical perspective. That makes it a very useful source of information for understanding structured light as a whole, and it will eventually be included in our State of the Art document.

Abstract
We present a survey of the most significant techniques, used in the last few years, concerning the coded structured light methods employed to get 3D information. In fact, depth perception is one of the most important subjects in computer vision. Stereovision is an attractive and widely used method, but, it is rather limited to make 3D surface maps, due to the correspondence problem. The correspondence problem can be improved using a method based on structured light concept, projecting a given pattern on the measuring surfaces. However, some relations between the projected pattern and the reflected one must be solved. This relationship can be directly found codifying the projected light, so that, each imaged region of the projected pattern carries the needed information to solve the correspondence problem.

Automatic Projector Calibration

Johnny Lee presented in his thesis work an interesting way to automatically calibrate a projector by embedding optical sensors into the projection surface.
The procedure consists of detecting the individual pixels that illuminate the optical sensors.
This is done by projecting a series of Gray-coded binary patterns on the target surface. These patterns are coded in such a way that every pixel projected on the screen can be identified. The number of patterns to project depends on the resolution of the projector: for a 1024x768 resolution, each pixel can be uniquely identified with only twenty patterns.
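
The count follows from coding the two coordinates separately in binary: ceil(log2(1024)) = 10 patterns identify the column and ceil(log2(768)) = 10 more identify the row, twenty in total. A small sketch of the Gray-code bookkeeping (our own illustration, not Lee's code):

public class GrayCode {
    // True if coordinate c is lit in pattern number 'bit' (MSB first).
    static boolean lit(int c, int bit, int numBits) {
        int gray = c ^ (c >> 1); // binary-reflected Gray code
        return ((gray >> (numBits - 1 - bit)) & 1) != 0;
    }

    // Recovers the coordinate from the bits one sensor observed.
    static int decode(boolean[] bits) {
        int gray = 0;
        for (boolean b : bits) gray = (gray << 1) | (b ? 1 : 0);
        int c = 0;
        for (int g = gray; g != 0; g >>= 1) c ^= g; // inverse Gray code
        return c;
    }
}

Gray codes are used instead of plain binary so that adjacent pixels differ in only one pattern, which makes decoding robust at stripe boundaries.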

Once the key pixels have been detected on the target surface, it is possible to find the homography that maps screen pixels to the projected locations. Once this transformation is known, pre-warped images can be sent to the projector.

This project has many applications besides projector calibration.
It can be used for creating a large display using tiled projection. This is known as stitching.
It can also be used to project multiple versions of the same content on a surface to reduce the shadows when one projector is blocked.
Finally, another possible use is to register the orientation of a 3D surface. This requires the geometry of the surface to be known and the optical sensors to be within the projector's visibility range.

A video explaining the usage and other details of the implementation can be seen here.