Wednesday, April 28, 2010

Augmented Reality System

Augmented Reality (AR) is the synthesis of real and virtual imagery. In contrast to Virtual Reality (VR) in which the user is immersed in an entirely artificial world, augmented reality overlays extra information on real scenes.

Interactive Augmented Reality Techniques for Construction at a Distance of 3D Geometry

Here we present an augmented reality system consisting of techniques for construction of 3D geometry at a distance. It uses a mobile augmented reality wearable computer that can be used outdoors; the scale of the world is fixed and the user's presence controls the view. The user interacts with the computer using hand and head gestures.
  • The application developed for this system is Tinmith-Metro.
For video demos and more, visit http://www.tinmith.net/

Details of the software architecture.

Monday, April 19, 2010

Building a three-dimensional model

One point to resolve is building a three-dimensional model from a given cloud of points. Next we present papers that address distinct steps in this process.


Here the problem is presented as a pipeline, where the scanned surface is segmented into patches, each representing a discrete surface region of the physical object:
-Physical Design Model
-Digitize Model
-Cloud Data Set
-Apply Reverse Engineering Software
-Computer-based Design Model

Initially, a laser-based range sensor is used to obtain the cloud data; then a triangulation method and growth rules are presented to build the mesh.

  • The complexity of the sample data causes problems of size (a lot of memory is needed to process all the data) and quality (noise introduced while generating the samples). To overcome these problems we can reduce or simplify the point cloud; this pre-processing step, performed before building the model, can be done by algorithms.


In the paper Efficient Simplification of Point-Sampled Surfaces, the methods presented to solve this pre-processing problem are:

-Clustering methods: split the point cloud into subsets; each subset is replaced by one representative.

-Iterative simplification: successively collapses point pairs in the point cloud according to a quadric error metric.

-Particle simulation: computes new sampling positions by moving particles on the point-sampled surface according to interparticle repelling forces.
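To make the first method concrete, here is a minimal sketch of clustering-based simplification using a uniform grid (an assumption on our part; the paper also covers more elaborate schemes such as hierarchical clustering):

```python
import math
from collections import defaultdict

def simplify_by_clustering(points, cell_size):
    """Uniform-grid clustering: points falling in the same cubic cell
    form one subset and are replaced by their centroid (one
    representative per cluster)."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (math.floor(x / cell_size),
               math.floor(y / cell_size),
               math.floor(z / cell_size))
        cells[key].append((x, y, z))
    simplified = []
    for pts in cells.values():
        n = len(pts)
        simplified.append((sum(p[0] for p in pts) / n,
                           sum(p[1] for p in pts) / n,
                           sum(p[2] for p in pts) / n))
    return simplified

# Two nearby points collapse to one representative; the far point keeps its own.
cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]
reduced = simplify_by_clustering(cloud, 1.0)
```

The choice of `cell_size` trades simplification ratio against surface detail: larger cells mean fewer representatives but a coarser approximation.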

The paper compares and analyzes each algorithm, with emphasis on efficiency and a low memory footprint.



This approach is based on the representation of free-form surfaces: different meshes are built from each view, a curvature measure is computed at every node of the meshes, and it is mapped to a spherical image.

Each mesh is represented as a graph; each node has three neighbors, and its curvature is computed from the relative positions of those neighbors.
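As a rough illustration of a curvature measure derived from three neighbors, here is one simple proxy (our own sketch, not necessarily the measure used in the paper): the signed distance of a node from the plane through its neighbors, normalized by the mean edge length so it is scale-invariant. A locally flat patch yields zero.

```python
import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def norm(a):
    return math.sqrt(dot(a, a))

def node_curvature(p, n1, n2, n3):
    """Curvature proxy at node p with its three neighbors n1, n2, n3:
    signed distance of p from the neighbors' plane, divided by the
    mean distance from p to the neighbors (0 for a flat patch)."""
    normal = cross(sub(n2, n1), sub(n3, n1))
    nl = norm(normal)
    if nl == 0:
        return 0.0  # degenerate (collinear) neighbors
    dist = dot(sub(p, n1), normal) / nl
    scale = (norm(sub(p, n1)) + norm(sub(p, n2)) + norm(sub(p, n3))) / 3
    return dist / scale

flat = node_curvature((0, 0, 0), (1, 0, 0), (0, 1, 0), (-1, -1, 0))
bump = node_curvature((0, 0, 1), (1, 0, 0), (0, 1, 0), (-1, -1, 0))
```

Evaluating this at every node and binning the values over directions gives the kind of per-node scalar that can then be mapped onto a spherical image.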

In summary, in these studies we reviewed some methods and techniques that solve distinct aspects of building a 3D model, as an introduction to the study of the problem.

Wednesday, April 7, 2010

Structured light - First experiment

An approach to the problem of scanning objects into 3D geometry is a technique known as structured light. Kyle McDonald's instructable clearly explains the steps needed to create a scan using this technique. The necessary program is available on Google Code.

Since this is a potential technique for our project, a good start would be to reproduce the experiment under ideal conditions.
These conditions are:
  • Good lighting on the scene (only illuminating the target object)
  • Perfect camera orientation
  • A simple object to be scanned
Although structured light is robust enough to make successful scans under less strict conditions, it seems useful to have an idea of the best results we can expect. This can only be done under ideal conditions.

The experiment consisted of simulating the camera, projector and target object in a 3D software package to generate three images. Then, using these images and the ThreePhase applet found in the Google Code project mentioned above, the 3D geometry is retrieved.
A simple cone was used as the target object. The projector was simulated using a spot light with a projection map looking at the object in the exact same direction as the camera.
The scene was rendered three times to a bitmap using three different patterns as the projector map. Said patterns can be found in the project folder. Finally, the applet was used with these three renderings and the 3D geometry was retrieved as a cloud of points.
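The core of the three-phase technique can be sketched at a single pixel. Assuming the three projected patterns are sinusoids shifted by 120 degrees (the standard three-phase setup; the exact patterns used by the applet may differ in scaling), the wrapped phase at a pixel is recovered from its three observed intensities:

```python
import math

def wrapped_phase(i1, i2, i3):
    """Recover the wrapped phase at one pixel from three intensities
    observed under sinusoidal patterns shifted by 120 degrees:
      I1 = A + B*cos(phi - 2*pi/3)
      I2 = A + B*cos(phi)
      I3 = A + B*cos(phi + 2*pi/3)
    Returns phi in (-pi, pi], independent of ambient light A and
    pattern contrast B."""
    return math.atan2(math.sqrt(3) * (i1 - i3), 2 * i2 - i1 - i3)

# Simulate one pixel with a known phase and check that it is recovered.
phi_true = 1.2
A, B = 0.5, 0.4  # arbitrary ambient level and contrast
samples = [A + B * math.cos(phi_true + shift)
           for shift in (-2 * math.pi / 3, 0.0, 2 * math.pi / 3)]
phi = wrapped_phase(*samples)  # ~1.2
```

The recovered phase is still wrapped to one period; a separate phase-unwrapping pass, plus the camera-projector geometry, converts it into the depth values that form the point cloud.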

Experiment deployment

The first results were not ideal because the projector and the camera shared the same location: the pattern projected on the target object was undeformed, so no depth information could be retrieved.
To correct this, the projector was pulled away from the camera and the experiment was repeated twice using different distances.
The best results were obtained using the experimental setup shown above.

The acquired results can be used in the next stages of the project as a starting point to create 3D meshes from clouds of points.

The collected renderings using the three patterns:

Monday, April 5, 2010

Welcome!

Every blog starts with an introduction so this one won't be an exception.
The project is part of our Computer Engineering degree. The people working on it are Adriana, Javier and Daniel, with Tomas as tutor.
As stated in the description, this blog will document the development process of our project, called "Automatic Modelling & Video Mapping". The original name in Spanish is "Proyecciones sobre superficies irregulares" (projections on irregular surfaces) and the outline can be read here (Spanish only).

We identified 3 major tasks:
  • Scan objects into 3D geometry
  • Allow users to edit the 3D geometry obtained in the previous step
  • Apply the needed transformations to allow video mapping on the 3D geometry and then onto the real life objects
We'll blog about different techniques related to these tasks, then choose one or a few of them to implement. Finally, we'll put all the code together to try to make a useful and user-friendly tool (open source, of course).