The purpose of the in-hand scanner application is to obtain a 3D model of a small object. The user turns the object around in front of the sensor while its geometry is reconstructed gradually. The rest of this tutorial assumes a Kinect sensor, because the application's parameters are tuned for this device.
The application generates an initial surface mesh and gradually integrates new points into a common model. The scanning pipeline consists of several components:

- Input data processing: the incoming cloud is cropped and segmented so that only the object remains.
- Registration: the segmented cloud is aligned to the common model.
- Integration: the aligned points are merged into the model mesh.
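Per frame the pipeline runs these stages in order: segment the input, align it to the model, merge it in. The following toy sketch illustrates that flow; the helper implementations (box crop, identity registration, distance-based merge) are simplified stand-ins, not the application's actual algorithms:

```python
def segment_object(cloud, lo=-0.5, hi=0.5):
    """Input data processing: keep only points inside the cropping volume."""
    return [p for p in cloud if all(lo <= c <= hi for c in p)]

def register(segment, model):
    """Registration: align the segment to the model.
    Identity here; the real application estimates a rigid transform."""
    return segment

def integrate(aligned, model, merge_dist=0.01):
    """Integration: add points that are not already represented in the model."""
    for p in aligned:
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) > merge_dist ** 2
               for q in model):
            model.append(p)

def process_frame(cloud, model):
    integrate(register(segment_object(cloud), model), model)

model = []
process_frame([(0.1, 0.1, 0.1), (2.0, 0.0, 0.0)], model)  # 2nd point lies outside the crop box
```

Only the point inside the cropping volume ends up in the model; the distant point is discarded by the segmentation stage.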
The following image shows the general layout of the application.
The main canvas (1) is used for visualizing the data and for showing general information. The viewpoint can be changed with the mouse.
The various states of the application can be triggered by keyboard shortcuts, which are listed in the help (2) or shown in tooltips when hovering the mouse over the buttons. Please click into the main canvas to make sure that key press events are processed (the canvas loses focus when parameters are changed in the settings).
The buttons (3) above the main canvas change the current state of the application and allow triggering certain processing steps.
The buttons (4) set how the current data is drawn.
The settings of the application are shown in the toolbox on the right (5). The values have been tuned for scanning small objects with the Kinect, so most of them don't have to be changed. The values that have to be adjusted before scanning are the ones in the ‘Input data processing’ tab, as explained in the next section.
The scanned model can be saved from the menu bar (not shown).
In the following section I will go through the steps of scanning in a model of the ‘lion’ object, which is about 15 cm high.
Once the application has connected to the device, it shows the incoming data. The first step is to set up the thresholds for the object segmentation.
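The thresholds define a cropping volume in space and a color range used to reject unwanted points such as the hands. A minimal sketch of such a segmentation, assuming each point carries a position and a hue value (the point representation and the threshold values here are illustrative, not the application's):

```python
def segment(points, crop_min, crop_max, reject_hue):
    """Keep points inside the crop box whose hue lies outside the rejected range.
    Each point is ((x, y, z), hue); reject_hue is (lo, hi) in degrees."""
    lo, hi = reject_hue
    kept = []
    for xyz, hue in points:
        inside = all(a <= c <= b for c, a, b in zip(xyz, crop_min, crop_max))
        rejected = lo <= hue <= hi
        if inside and not rejected:
            kept.append((xyz, hue))
    return kept

pts = [((0.0, 0.0, 0.6), 120.0),   # inside the box, green hue      -> kept
       ((0.0, 0.0, 0.6), 20.0),    # inside the box, skin-like hue  -> rejected by color
       ((0.0, 0.0, 2.0), 120.0)]   # behind the far clipping plane  -> rejected by crop
kept = segment(pts, (-0.2, -0.2, 0.4), (0.2, 0.2, 0.9), (0.0, 50.0))
```

Of the three sample points only the first survives: the second falls into the rejected hue range, the third lies outside the cropping volume.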
Now start the continuous registration (press ‘3’). This automatically switches the coloring to a colormap reflecting the input confidence. The goal is to turn the object around until the whole surface becomes green; for this, each point has to be recorded from as many different directions as possible. In the following image the object has been turned about the vertical axis. The newest points in the front have not yet been recorded from enough directions (red, orange, white), while the points on the right side have been scanned in sufficiently (green).
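The per-point confidence can be thought of as coverage of viewing directions: a point seen from only one direction scores low, one seen from many well-spread directions scores high. A toy stand-in for this idea (the angle threshold, the direction count, and the scoring are illustrative, not the application's exact formula):

```python
import math

def confidence(view_dirs, max_dirs=4, min_angle_deg=30.0):
    """Fraction of sufficiently different viewing directions (unit vectors).
    A value of 1.0 would be drawn green, low values red/orange/white."""
    cos_min = math.cos(math.radians(min_angle_deg))
    distinct = []
    for d in view_dirs:
        # count d only if it differs from every kept direction by >= min_angle
        if all(sum(a * b for a, b in zip(d, e)) < cos_min for e in distinct):
            distinct.append(d)
    return min(len(distinct) / max_dirs, 1.0)

low = confidence([(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)])    # same view twice
high = confidence([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
                   (0.0, 0.0, 1.0), (-1.0, 0.0, 0.0)])  # well-spread views
```

Seeing a point twice from the same direction does not raise its confidence; only genuinely new directions do, which is why the object has to be turned rather than merely held still.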
Avoid occluding the object with the hands, and try to turn the object in such a way that as many geometric features of the shape are visible as possible. For example, the lion object has one flat surface at the bottom (blue circle). It is not good to point this side directly towards the sensor, because the almost planar side has very few geometric features, resulting in a bad alignment. It is therefore best to include other sides while scanning, as shown in the image. This procedure also helps reduce the error accumulation (loop closure problem).
After all sides have been scanned, the registration can be stopped by pressing ‘5’, which shows the current model. Any remaining outliers can be removed by pressing ‘6’ (clean), as shown in the next image.
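The clean step removes points that are not part of the main surface. A common way to achieve this, shown here as a simplified sketch (the radius and minimum cluster size are illustrative parameters, not the application's), is to cluster points by Euclidean distance and drop the small clusters:

```python
from collections import Counter

def remove_small_clusters(points, radius=0.02, min_size=3):
    """Toy 'clean' step: Euclidean clustering via union-find,
    dropping clusters with fewer than min_size points."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    r2 = radius ** 2
    for i in range(n):
        for j in range(i + 1, n):
            if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= r2:
                parent[find(i)] = find(j)

    sizes = Counter(find(i) for i in range(n))
    return [p for i, p in enumerate(points) if sizes[find(i)] >= min_size]

cloud = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.02, 0.0, 0.0),  # main surface
         (1.0, 1.0, 1.0)]                                      # isolated outlier
cleaned = remove_small_clusters(cloud)
```

The three densely spaced points form one cluster and survive; the isolated point forms a cluster of size one and is removed.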
The eyes of the lion could not be scanned in because they were filtered out by the color segmentation. To circumvent this problem, it is possible to resume the scanning procedure with the color segmentation disabled. One now has to be very careful to keep the hands out of the cropping volume. This way it is possible to scan in additional parts, as shown in the next image.
The following image shows the final model, where the eyes have been scanned in as well. However, this resulted in a few more isolated surface patches being integrated into the mesh (light blue). There are still small holes in the mesh, which in theory could be closed by the application, but this would take a long time.
The parameters in the ‘Registration’ and ‘Integration’ settings have not been covered so far. The registration parameters are described in the application’s help, and there is usually no need to make big adjustments. You might want to tweak some of the integration settings.
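Conceptually, the integration merges a new measurement with a nearby model point by a running average and otherwise inserts it as a new point; the merge distance is the kind of quantity these settings control. A toy sketch of that merge rule (the parameter name and value are illustrative, not the application's):

```python
def integrate_point(model, weights, p, max_dist=0.01):
    """Merge measurement p into the model by running average, or add it as new."""
    d2max = max_dist ** 2
    for i, q in enumerate(model):
        if sum((a - b) ** 2 for a, b in zip(p, q)) <= d2max:
            w = weights[i]
            # weighted running average of the existing point q and the new p
            model[i] = tuple((w * a + b) / (w + 1) for a, b in zip(q, p))
            weights[i] = w + 1
            return
    model.append(p)
    weights.append(1)

model, weights = [(0.0, 0.0, 0.0)], [1]
integrate_point(model, weights, (0.004, 0.0, 0.0))  # close -> averaged into existing point
integrate_point(model, weights, (0.5, 0.0, 0.0))    # far   -> added as a new point
```

The first measurement refines an existing point (its weight increases, so later measurements shift it less), while the second is far enough away to become a new model point.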