Depth Stacking in Auto Turntable Mode

No idea if this makes sense, but I had the thought that stacking could be useful, especially when dealing with multiple depths, similar to focus stacking in photography. I made a video about it, and it would be great if the R&D team could take a look. They can decide whether it’s actually beneficial, worth the effort, and how feasible it would be to implement.

(AI summarized)
Concept: Quality-Based Depth Stacking for Point Clouds

Goal:
Combine multiple point cloud frames captured under different lighting conditions into a single high-quality point cloud. For each region, the process prioritizes the frame with the least noise (highest quality).


Steps:

1. Noise Analysis for Each Frame:
Evaluate the quality of each frame by analyzing the noise level in its points. Noise can be quantified using metrics such as:

Point density: Regions with higher density are generally more reliable.

Standard deviation of point distances: Compute the average distance between neighboring points and assess the variation. Lower variation indicates less noise.
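
As a minimal sketch of the second metric, the spread of mean neighbor distances can serve as a per-frame noise score. Everything here (the function name, the choice of `k`, the brute-force distance computation) is illustrative, not an existing scanner API:

```python
import numpy as np

def noise_score(points, k=8):
    """Frame-quality heuristic: std of each point's mean k-NN distance.

    Lower is better: a dense, evenly sampled cloud has consistent
    neighbor distances. `k` is a hypothetical tuning parameter.
    """
    # Brute-force pairwise distances; a KD-tree is needed at real scan sizes.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)                           # column 0 is the self-distance
    knn_mean = d[:, 1:k + 1].mean(axis=1)    # mean distance to k neighbors
    return knn_mean.std()

# A regular 5x5x5 grid scores lower than the same grid plus a stray outlier.
axis = np.linspace(0.0, 1.0, 5)
grid = np.stack(np.meshgrid(axis, axis, axis), -1).reshape(-1, 3)
with_outlier = np.vstack([grid, [[5.0, 5.0, 5.0]]])
assert noise_score(grid) < noise_score(with_outlier)
```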

2. Divide Point Clouds into Regions:
Partition the space of each point cloud into smaller regions, such as voxels or grid cells. This allows localized analysis of quality for each area.
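
Voxel partitioning is just integer bucketing of coordinates; a sketch, assuming points come in as a NumPy array (function names and the voxel size are made up for illustration):

```python
import numpy as np

def group_by_voxel(points, voxel_size=0.05):
    """Return {(i, j, k) voxel key: array of point indices} for one frame."""
    # np.floor keeps negative coordinates in the correct cell
    # (a plain int cast would round toward zero instead).
    keys = np.floor(points / voxel_size).astype(int)
    groups = {}
    for idx, key in enumerate(map(tuple, keys)):
        groups.setdefault(key, []).append(idx)
    return {k: np.array(v) for k, v in groups.items()}

pts = np.array([[0.01, 0.01, 0.01],   # these two points fall in
                [0.02, 0.03, 0.00],   # the same 0.1 m voxel...
                [0.30, 0.00, 0.00]])  # ...this one in another
groups = group_by_voxel(pts, voxel_size=0.1)
assert len(groups) == 2
```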

3. Quality Assessment for Each Region:
For each voxel or grid cell, compare the noise levels across all frames. Identify which frame provides the highest quality data for that specific region.

4. Select or Combine Points:
Selection: Use the points from the frame with the lowest noise in that region.

Weighted Combination (Optional): If multiple frames have similar quality, combine their points using a weighted average, where weights are based on quality scores.
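
Steps 2–4 can be sketched together as one pass: bucket every frame's points by voxel, score each frame's points per voxel, and keep the winner ("Selection" above). All names and the noise heuristic are assumptions for illustration, not the actual pipeline:

```python
import numpy as np

def fuse_by_voxel_quality(frames, voxel_size=0.1, k=4):
    """Per occupied voxel, keep the points from the frame whose local
    sampling is most consistent there (lowest std of k-NN distances)."""
    def local_noise(pts):
        if len(pts) < 2:
            return np.inf  # too sparse to judge; never preferred
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        d.sort(axis=1)
        kk = min(k, len(pts) - 1)
        return d[:, 1:kk + 1].mean(axis=1).std()

    # Bucket each frame's points by voxel key.
    buckets = {}  # voxel key -> {frame index: points in that voxel}
    for f, pts in enumerate(frames):
        keys = np.floor(pts / voxel_size).astype(int)
        for key in map(tuple, np.unique(keys, axis=0)):
            mask = (keys == key).all(axis=1)
            buckets.setdefault(key, {})[f] = pts[mask]

    # Select the best frame per voxel and fuse (steps 4 and 5).
    fused = []
    for per_frame in buckets.values():
        best = min(per_frame, key=lambda f: local_noise(per_frame[f]))
        fused.append(per_frame[best])
    return np.vstack(fused)

# Frame 0 samples the region evenly; frame 1 samples it irregularly.
f0 = np.full((5, 3), 0.05)
f0[:, 0] = 0.05 + 0.002 * np.arange(5)          # even spacing
f1 = np.full((5, 3), 0.05)
f1[:, 0] = [0.01, 0.02, 0.05, 0.055, 0.09]      # uneven spacing
fused = fuse_by_voxel_quality([f0, f1], voxel_size=0.1)
assert np.allclose(fused, f0)  # the cleaner frame wins the shared voxel
```

The weighted-combination variant would replace the `min(...)` selection with a quality-weighted average of the candidate frames' points.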

5. Fuse Selected Points:
Combine the selected points from all regions into a single point cloud. Ensure that the resulting point cloud is smooth and consistent.

6. Filtering and Optimization:
Apply additional filters to the final point cloud to remove outliers and ensure uniform point density. Methods such as Statistical Outlier Removal (SOR) or Radius Outlier Removal can help.
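
Statistical Outlier Removal (as in PCL or Open3D) can be approximated in a few lines; this brute-force NumPy version is a sketch for clarity, with the thresholds chosen arbitrarily:

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Drop points whose mean k-NN distance exceeds
    global mean + std_ratio * global std (the classic SOR rule)."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    d.sort(axis=1)                          # column 0 is the self-distance
    mean_knn = d[:, 1:k + 1].mean(axis=1)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= threshold]

rng = np.random.default_rng(1)
cloud = rng.normal(scale=0.01, size=(200, 3))   # dense cluster
cloud = np.vstack([cloud, [[1.0, 1.0, 1.0]]])   # one far-away outlier
filtered = statistical_outlier_removal(cloud)
assert len(filtered) == 200  # outlier removed, cluster kept
```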

7. Export and Visualization:
Save the resulting high-quality point cloud.


Benefits:

Leverages the best parts of each frame, enhancing the overall quality of the point cloud.

Reduces noise and artifacts caused by variations in lighting or sensor inaccuracies.

Maintains high precision by avoiding overly aggressive smoothing.

This process is ideal for combining data in applications where high-quality 3D models are required, such as reverse engineering, medical imaging, or detailed object scanning.

2 Likes

No matter how feasible or beneficial such a concept is, I love your creative out-of-the-box thinking! 👏

1 Like

Would this be kind of HDR processing and combining of data?

Hmm yes and no…

HDR works with 2D images, focusing on pixel brightness and contrast to balance exposure, while depth stacking processes 3D point clouds to enhance spatial accuracy and reduce noise.

The key difference is that HDR addresses dynamic range in lighting, whereas depth stacking resolves measurement inaccuracies in 3D data and thus allows more of the surface to be captured cleanly.

1 Like

Depth data is computed from 2D images, so if the images were treated with HDR stacking, over- and underexposure could be reduced, leading to less noisy and more accurate 3D data.
It is surely better than scanning the subject at multiple exposure levels, cutting out noisy areas by hand, and then aligning again.

Since there is enough control when scanning with the turntable, there could definitely be an improvement, especially if the object has a mix of bright, dark, and reflective areas.

3 Likes