Feature Request: Pre-Fusion + Keyframe ICP + Final Fusion

I believe it might be possible to get another 0.01mm of precision out of the hardware (of my MINI 2) by re-aligning the individually captured frames before the Fusion into the high-resolution point cloud. I know I am comparing VERY different price classes here ($999 vs $29999 :exploding_head:), but that’s one of the reasons why the FreeScan UE Pro scanning data optimization produces such great results.

In effect, what I would like to do is this:

  1. I capture frames regularly in RevoScan
  2. I do a normal Fusion in RevoScan
  3. (this is the new part) RevoScan will do ICP [1] between my fused point cloud and each individual scanned keyframe. Optimizing the alignment between the fused point cloud and my keyframes will fix tiny tracking errors that might have occurred while I did the scanning.
  4. I now do a normal Fusion again, but using the optimized keyframe data from the previous step.

[1] By ICP I mean the Iterative Closest Point algorithm ( Iterative closest point - Wikipedia ), which is probably what you already use for the alignment when I merge two point clouds inside RevoScan.

I know that I can do this manually by exporting every keyframe out of RevoScan and then using CloudCompare ( ICP - CloudCompareWiki ) for the alignment optimization and the final fusion. But that is A LOT of manual work. And since competing 3D scanners have it integrated into their software, it would be great if Revopoint could integrate it into RevoScan, too. But of course, I can also understand that my $999 MINI 2 should be held against different standards than a $29999 competitor :wink:
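To make the request concrete, here is a rough sketch of steps 2-4 using Open3D as a stand-in for what I currently do by hand in CloudCompare. The file names, the correspondence distance, and the per-keyframe export are assumptions on my side, not RevoScan internals:

```python
# Sketch of the proposed pre-fusion ICP refinement (assuming Open3D).
# File names and the 0.2 mm correspondence distance are hypothetical.
import open3d as o3d

# Step 2: the point cloud from a normal first-pass Fusion
fused = o3d.io.read_point_cloud("fused_first_pass.ply")
fused.estimate_normals()

refined = []
for path in ["keyframe_000.ply", "keyframe_001.ply"]:  # ...one file per exported keyframe
    kf = o3d.io.read_point_cloud(path)
    # Step 3: ICP between the fused cloud and each individual keyframe.
    # Starting from identity because the exported keyframes are assumed
    # to already carry RevoScan's estimated pose.
    result = o3d.pipelines.registration.registration_icp(
        kf, fused, max_correspondence_distance=0.2,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    refined.append(kf.transform(result.transformation))

# Step 4: stand-in for the second Fusion; here the refined keyframes are
# simply merged so they can be re-imported or inspected.
merged = o3d.geometry.PointCloud()
for kf in refined:
    merged += kf
o3d.io.write_point_cloud("keyframes_refined.ply", merged)
```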

BTW, here is someone else on YouTube who also gets keyframe alignment errors, which lead them to a very negative conclusion about the MINI. Since the Fusion step still works in general but only includes a part of the keyframes, what is seen in that YouTube video is exactly the type of problem that pre-fusion ICP optimization can fix.


Hi @fxtentacle

Precision is about how repeatably the scanner captures the same object: two scans of the same object should agree to within 0.01mm. That is the target precision, so there is nothing you can do about it.

Accuracy is the volumetric accuracy of the scanned object versus the original. You can’t adjust anything there, as accuracy is determined by the hardware; this includes the blue laser lights, the projected pattern, the FOV, and the depth sensors capturing the surface. It all depends on the hardware, and no amount of processing can increase the accuracy.

What you can do to make precision and accuracy better is:

  1. Prepare the scanning surface properly, using a very fine, as-thin-as-possible layer of 3D spray.

  2. Keep the distance to the object at the Excellent level at all times and all around the volume.

  3. Remove any parts of the scan that are not necessary, using Raw mode editing just before fusing.

  4. Use single-shot frames instead of continuous mode, to avoid changes in distance while the object rotates.

  5. Capture at least 3 angles of the volume to cover all undercuts.

  6. Mesh according to the Grid/Fusing settings and don’t over-mesh at a higher setting, to prevent firing artificial points, which creates noise.

If you are still thinking about getting better accuracy and precision, please remember that 0.01mm is the size of a white blood cell, which you can’t even see or measure with regular everyday tools. Your naked eye will never see the difference even if your accuracy and precision were 0.04mm.

MINI 2 is not a microscope; even scanners that cost $27K with 0.01mm accuracy can’t capture white blood cells.

Tests like that are made in a lab with special, expensive tools, and the measurements are based on the distance between one point and another, with tools that are calibrated in laboratories before each use.

Remember the raw frames are unorganized frame cells, not exportable in raw form, and they need to be organized into a solid point cloud to be editable in any other program.

In short, you want to improve the existing precision and accuracy? Build a new scanner with better hardware and sensors.

MINI 1 doesn’t have as fine a level as MINI 2; issues with alignment or loss of tracking are user error and a lack of scanning knowledge with this technology.


A blood cell is 0.001mm, so 10x smaller.

But anyway, I didn’t mean that I will reach 0.01mm precision, but that using ICP can probably make things 0.01mm more precise, meaning from 0.08mm down to 0.07mm or something like that.

The raw frames can already be exported as point clouds: Revoscan Flayer (Frame Player)

Well, the hardware is capable of about 0.02mm accuracy in the Z direction, but only 0.1mm in X and Y. (600 px depth map over 64mm at 12cm distance)

The current software is aligning X and Y at about 0.1mm accuracy, which then becomes the accuracy of the resulting scan. But with the correct technique and multiple overlapping scans, exports with 0.02mm accuracy might be possible. Just like how iPhone cameras produce a picture quality that people initially thought impossible with such a small sensor, good software might well 2x or 5x the resolution of the resulting point cloud for a 3D scanner.
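For anyone checking my numbers, the X/Y figure is just the field of view divided by the depth-map resolution (my own rough assumption of a 64mm-wide field of view at the 12cm working distance):

```python
# Back-of-envelope lateral (X/Y) sampling from the figures quoted above.
fov_width_mm = 64.0   # assumed field-of-view width at ~12 cm working distance
depth_map_px = 600    # shorter side of the depth map
print(fov_width_mm / depth_map_px)  # ~0.107 mm per pixel, i.e. roughly 0.1 mm in X/Y
```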

But in any case, what I’m asking for here isn’t anything exotic. The competition already has pre-fusion and ICP processing. But RevoScan doesn’t have it yet.

A white blood cell is 10 microns, which is 0.01mm; a hair is 40-80 microns, which is 0.04mm - 0.08mm.

0.001mm is 1 micron; 10 microns is 0.01mm.

Do your homework before talking the talk, my friend :wink:

You can only reach 0.02mm with this hardware; the unregistered frame cells are produced at 0.02mm. It is impossible to get better results than the hardware is capable of producing in the first place. We are also talking here about the distance of points from each other, which can’t be better than 0.02mm even under perfect conditions.
I know what this device can and can’t do; I was part of creating the MINI, and its capacity already runs at maximum.

It is OK to post your suggestions and what you would like to see, and if this improves anything inside Revo Scan it will certainly be considered by the dev team.
So don’t let me stop you with your ideas.

I just explained to you the basics of the technology and what is happening, and I can assure you that it is impossible to reach a level below 0.02mm = 20 microns.

For this kind of work you need very stable conditions: a secured scanner, a secured turntable, eliminating any possible movement, to be able to capture perfect frames.

A point cloud is only as good as what the scanner generates; any further processing downgrades its precision and accuracy, and that is one of the rules of 3D scanning.

Thanks for sharing.
I hope everyone else finds it interesting as well.

fxtentacle, hi!
In the video that you posted, it seemed to me that the man didn’t quite understand what he was doing, which is why he had such an unsatisfying result. The bust was heavily overlit during the scan, so the scan surface was noisy and had low detail. If the entire part does not fit within the scanner’s field of view, you do not need to use the turntable; if there are markers, then as many of them as possible should be in the scanner’s field of view.
It is very sad that someone draws wrong conclusions without understanding and posts them on YouTube. Those who watch it may get the wrong impression; this is apparent from the comments.

I’ll add a little more)
I own a POP 3 and it is a great scanner; in its price range there is no equal. I can say the same about the MINI 2 even though I don’t own it. I was able to successfully scan a cap that is only 16mm high, it’s amazing! I didn’t expect this when I bought it). I was also able to easily scan my son’s pencil case, which does not fit entirely within the scanner’s field of view. I’m amazed at what it can do.

The geometry of the cap is a little rough, but that was expected for such a small size; the main thing is that it was enough for my work.)

The pencil case turned out perfectly, I scanned it by hand, no tripod.


Oops, yeah, it looks like I mixed up red and white cells.

It says 0.0029mm, but I overlooked that it says “erythrocyte”, meaning red blood cell.

I’ve been working in that field for a while and I still find it super weird that such magic is possible. But actually, you can use a low-res camera combined with some movements and advanced algorithms to generate high-res photos :exploding_head:

Here’s Google’s research for the Pixel 3 (and following) on how they do exactly that:

In my opinion, this illustration (from the linked PDF) nicely shows how they (Google researchers) can fuse 4 low-res images into 1 high-res result:

I understand that with the current software and algorithms, over-scanning harms the accuracy of the final model. But I am also very optimistic that implementing those Google algorithms would make the final scan more precise than what the hardware can deliver. It feels like magic, but it’s actually just mathematics.
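As a toy illustration of that principle (not Google’s actual pipeline, just the basic idea of accumulating sub-pixel-shifted low-res samples onto a finer grid), using synthetic data:

```python
# Toy multi-frame super-resolution: four low-res captures of the same signal,
# each shifted by a known sub-pixel amount, accumulated onto a finer grid.
import numpy as np

fine = np.sin(np.linspace(0, 8 * np.pi, 400))   # "true" high-res signal
shifts = [0, 1, 2, 3]                           # sub-pixel shifts, in fine-grid samples
frames = [fine[s::4] for s in shifts]           # 4x downsampled, shifted captures

recon = np.zeros_like(fine)
count = np.zeros_like(fine)
for s, frame in zip(shifts, frames):
    recon[s::4] += frame                        # place each sample at its true offset
    count[s::4] += 1
recon /= np.maximum(count, 1)

print("max reconstruction error:", np.abs(recon - fine).max())  # ~0 in this noise-free toy
```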

I fully agree with you there. That’s what I’m asking for in Feature Request: Turntable + Single frame:
that the software would help me eliminate movement by pausing the turntable while capturing the frames.


Yes, that has been requested many times already, including by me: an option in Revo Scan 5 to control the turntable at a specific angle. I have been trying to get that setting since the POP2 / Dual Axis release; for that reason @Johnathan created the Android app with my help to get the required settings, and it works great.
Hopefully this new feature will be available soon.

Regarding the accuracy, you can’t change that in MINI 2 or MINI; the hardware is final and at its final capacity.
If it were as easy as it is in theory, Revopoint would already do that with MINI 2. However, to have 0.01mm accuracy you need to stabilize the scanner and its automatic turntable in one box to keep a precise distance, and use automatic single shots only; no portable scanner in your hand will ever capture that accuracy. Imagine holding a microscope in your hand looking for white blood cells on a table; same reason.
Also, a scanner with that accuracy can capture only very small objects, because the data would be so huge that only specific computer systems could process it; this would rule out phones, laptops and tablets.
And of course the price would be at least 3 times that of the scanner or more, which is not really consumer friendly.

It is not a question of whether you can do that; the question is whether it is worth the money and effort to sell it only to the minority of professionals, most of whom already own one.

But coming back to the original question: you can’t improve accuracy with the MINI series hardware the way it was designed, unless you invent new algorithms, change the sensors and projector, and write new software. The available algorithms with this hardware have already reached the maximum possible for the MINI series, which still makes it the most accurate scanner in its price range on the market, as no other company has reached that level yet for portable scanners. Unless, of course, you want to spend $27K to scan objects the size of a ring in your office with a powerful processing system.

Here’s a picture that visualizes the tracking drift of my MINI 2 with RevoScan 5 exceptionally well:

This scan was done with the turntable and a fixed camera position, so it should have resulted in the camera positions located on a perfect circle. But instead, the tracking error accumulated so that by the end of the 360 degree turntable rotation, I started seeing ghosting in the point cloud. The object I scanned is just a large static block of crumpled paper, so it’s matte, white, and has plenty of surface detail everywhere.

In case anyone is wondering how I created this illustration, I extracted the camera positions from the .inf files and then used COLMAP to display the point cloud and the cameras.

My interpretation would be that from frame to frame, the MINI 2 actually does a great job at tracking the scene, because the camera path is exceptionally smooth. Over a larger distance, however, it should use global keypoints to stabilize, but apparently it does not. Still, this can surely be fixed, because if you do a turntable scan with a static camera, then the RevoScan software could just enforce that all camera positions stay on a circle. It could also cache the tracking features from the first few frames in a scan and use them to stabilize the path once those initial features come back into view.
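To make the circle idea concrete, here is a minimal sketch of what I mean (my own illustration, not anything RevoScan does): fit a circle to the estimated camera centers of a turntable scan and snap each center back onto it. The camera centers would come from the poses in the .inf files; everything else here is made up for the example.

```python
# Minimal sketch: fit a circle to the estimated camera centers of a turntable
# scan and project each center back onto that circle (in-plane only).
import numpy as np

def snap_to_circle(centers):                     # centers: (N, 3) camera positions
    mean = centers.mean(axis=0)
    pts = centers - mean
    # Best-fit plane: the last right singular vector is the direction of least variance.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    u, v = vt[0], vt[1]                          # in-plane axes
    x, y = pts @ u, pts @ v                      # 2D coordinates in the plane
    # Kasa circle fit: solve [2x 2y 1] [cx cy c]^T = x^2 + y^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (cx, cy, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    ang = np.arctan2(y - cy, x - cx)
    sx = cx + r * np.cos(ang)                    # snapped 2D coordinates
    sy = cy + r * np.sin(ang)
    return mean + sx[:, None] * u + sy[:, None] * v

# Hypothetical usage: 36 noisy camera centers roughly on a 150 mm circle.
t = np.linspace(0, 2 * np.pi, 36, endpoint=False)
noisy = np.column_stack([150 * np.cos(t), 150 * np.sin(t), np.zeros(36)])
noisy += np.random.default_rng(0).normal(scale=0.5, size=noisy.shape)
print(snap_to_circle(noisy).shape)               # (36, 3) corrected centers
```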

BTW, that solution already exists in other 3D tracking software and it is called SLAM Loop Closure: https://www.youtube.com/watch?v=OV6wNr62nqQ


And here’s the exact same raw scan after 8 hours of global optimization:

One can clearly see that it’s a circle now. And the ghosting in the point cloud is almost completely gone.

Hi @fxtentacle, do you realize that this is exactly what the software algorithms are doing while processing the raw frame cells into an organized point cloud?

If the results were the way you are showing in the first post, not one scan would be complete, and it would be totally unusable for any 3D scanning purpose.
Meaning zero precision or accuracy.

What do you mean by the need for fixing?
And what do you mean by ghosting?

Yup. That is, sadly, correct. In this case, a turntable scan of a jewelry-sized object with my MINI 2 produced completely unusable results. The object was white, matte, and had insane amounts of detail. But the RevoScan software still failed at tracking.

The camera locations that RevoScan stores in the .inf files are wrong. Re-writing the .inf files with improved camera locations leads to improved reconstruction results.

In this area, there wasn’t any object. But the scan shows lots of points. It almost looks like I am seeing a see-through copy of the actual object, but in the wrong location. That’s why I called it ghosting. Just like with a ghost, you are seeing things that aren’t there.

I scan very small objects and I never have issues with tracking when using MINI 2 or any other scanner. This technology needs features on the object to be able to scan and keep tracking; if it lost tracking, it lost track of the object’s features. There are rules that can’t be avoided while scanning; that is why for featureless objects you need to use marker mode or add additional objects to provide the support that is needed.

Besides the issue of not having enough trackable features on the scanned object, there is proper calibration; calibration is crucial also for marker mode, which relies on proper calibration as well to track the points well.

Remember that only 10% of the raw unorganized frame cells will be used to build the final organized point cloud; 90% of it is trash data.
The accuracy of the scanner is already determined by its hardware, not the software, so if you align the raw cells outside Revo Scan and fuse them in another program, you probably won’t get the desired results or increase the accuracy; the point distance will still be the same as what the hardware produces.

You would actually have to recreate the same process the program already performs when processing the fusion and organizing the frames.
So saying the raw frames are not aligned properly is true, but that is not the final result either; those are just raw unorganized cells that need to be fused and processed using the Fusing process.

The raw frames are not usable by 90% of users at all, nor are they meant to be used outside Revo Scan 5.
Users just want to scan, process, and get on with their business.

The question is, why do you propose a frame organizer if it is already part of the fuse process inside Revo Scan 5?
Of course, we have some bugs from time to time that shift the frames or do not align them properly while scanning, and in most cases those bugs are fixed in the first run.

But issues with bad alignments due to a lack of features do not fall under that category; in most situations they are user error and a lack of experience with this scanning technology.

Would it be great to have additional frame alignment before fuse processing? Yes, if you have hours of waiting on your hands… and most of us don’t have that time.

If you think it can be done more efficiently, just make sure you call the sensors properly RGB or Depth sensors and not just a “camera”, as there are 2 types of cameras and the RGB one is not used for scanning surfaces or in reconstruction, aside from color vertex and final texture mapping. Be more specific to make it clear for the tech team, for better understanding.

I suggest you write a DM to @Revopoint-Jane with an explanation of the error, and she will forward your findings to the dev team for evaluation. If you are onto something, it would be better this way.

Of course, I understand, but those were unorganized cell frames that sadly are sometimes all over the place before the fusing process. As long as the tracking was proper, this is not a big issue at all, as the fusing process will eliminate it anyway.
As I said earlier, 90% of the frames/points are not usable in the final point cloud; even if properly aligned, they would still be discarded in the process. Overscanning the same area will not provide more detail or better accuracy, since we are talking about unorganized cell frames, raw data before it actually gets processed.

The raw frames don’t reflect the precision at all, since they need to be organized, cleaned and processed.
The final volume precision is available after meshing; the accuracy can’t be changed or improved, as that relies on the hardware.

This is where the super-resolution that I talked about earlier comes in. You’ll find that the 800x600 pixel depth image that the camera sends to RevoScan comes out at about 0.15mm per pixel. But with a good super-resolution capable ICP alignment, the result of merging 100+ single scans with tiny camera movements in between is a scan with roughly 10x the hardware resolution. :exploding_head: I can see the curvature of a 0.1mm fillet on a 0.2mm chamfer in the point cloud. I’d estimate about 0.05mm accuracy for the final result.

Yes, it took me a while to build my own point cloud alignment algorithms :rofl:

In my case, I can easily let the PC run 8 hours over night. I agree with you that for 90% of users, this is probably an unnecessary level of accuracy. But for my use case, it is very helpful and worth the waiting time.

Agree. So my understanding is that the .inf files contain the estimated transform for rotating the point cloud (encoded as a depth map in the .dph files) such that it overlaps with the reference coordinate system. That means if the camera or RevoScan has tracking drift, it shows up as slightly wrong values in the .inf files. Fixing the inf files by re-aligning the point clouds will then improve the result of RevoScan’s Fusion.
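Written out as a sketch, my mental model is roughly this; the intrinsics, depth units, and the 4x4 pose layout are assumptions on my part, since the .dph/.inf formats aren’t documented:

```python
# Sketch of my mental model: a .dph depth map is unprojected into camera-space
# points, and the 4x4 transform I believe the matching .inf stores moves them
# into the shared reference frame. All constants here are assumptions, not the
# actual Revopoint file formats.
import numpy as np

def unproject(depth_mm, fx, fy, cx, cy):
    """Turn an HxW depth map (assumed in mm) into (N, 3) camera-space points."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                    # drop empty pixels

def to_reference_frame(points, pose_4x4):
    """Apply the per-keyframe pose (as I assume the .inf encodes it)."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ pose_4x4.T)[:, :3]

# Fixing tracking drift would then mean replacing pose_4x4 with a refined pose
# (e.g. from ICP against the fused cloud) before RevoScan's Fusion re-runs.
```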

@Revopoint-Jane If you look at this research report: https://mediatum.ub.tum.de/doc/800632/941254.pdf then on page 50 in chapter 4.4 they introduce Point Feature Histograms (PFH). That, combined with RANSAC, is the kind of alignment procedure that I would like to see in RevoScan for recovering from tracking failures.
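Open3D ships a close relative of that pipeline, FPFH (a faster variant of PFH) with RANSAC-based feature matching, so a rough sketch of the recovery step could look like this; the voxel size and thresholds are guesses:

```python
# Sketch of feature-based global registration (FPFH, a faster variant of the
# PFH descriptor from the linked thesis, plus RANSAC) as a recovery step when
# frame-to-frame tracking has failed. Voxel sizes are guesses, not tuned values.
import open3d as o3d

def global_align(source, target, voxel=0.5):
    def prep(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = prep(source)
    tgt_down, tgt_fpfh = prep(target)
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, mutual_filter=True,
        max_correspondence_distance=voxel * 1.5,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation  # coarse pose, to be refined with ICP afterwards
```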

Actually, it does. Overscanning the same area is what makes super-resolution algorithms possible. It also helps with reducing noise through temporal antialiasing.
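A simple way to see the noise-reduction part: averaging N aligned, overlapping samples of the same surface shrinks random depth noise by roughly the square root of N. A toy NumPy sketch with made-up noise levels, not the RevoScan pipeline:

```python
# Toy illustration: averaging 25 aligned scans of a flat patch cuts random
# depth noise by about sqrt(25) = 5x compared to a single scan.
import numpy as np

rng = np.random.default_rng(1)
true_depth = np.zeros(10000)        # perfectly flat patch at z = 0
sigma = 0.05                        # assumed per-scan depth noise, in mm

single = true_depth + rng.normal(0, sigma, true_depth.shape)
averaged = np.mean([true_depth + rng.normal(0, sigma, true_depth.shape)
                    for _ in range(25)], axis=0)

print("single scan RMS error:", np.sqrt(np.mean(single ** 2)))    # ~0.05 mm
print("25-scan average RMS  :", np.sqrt(np.mean(averaged ** 2)))  # ~0.01 mm
```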

You’re, of course, correct that for 90%+ of users, it will never matter. But for the 10% that actually work with jewelry and operate at the limit of the hardware, going from 0.2mm to 0.05mm in accuracy is a big win.


I do agree with your posts, don’t get me wrong; that would be great for building a new scanner with proper software. Sadly, the current hardware and software will not be adjusted, since the hardware already reached its full potential, and the current software also reached its full potential regarding the algorithms; any changes to the current algorithms corrupted the results and did not deliver better ones.
I was working with the team on the MINI 1 improvements, shifting the accuracy from 0.05mm and a resolution of 0.1mm to where it is today. I was able to literally scan the fingerprints on my fingers, which are usually around 7 microns, so the resolution and accuracy were great; I could scan a single volume down to 0.65mm.

Please remember that the product specifications are rounded for all products, so if MINI 2 shows 0.05mm in the specifications, it is much better than that in practice.

To get anything below 0.02mm accuracy, you would have to enter the microscopic universe, since that dimension and resolution is not visible to the naked eye.

You would need a specially created stabilizer to reduce any movement to a minimum; it can’t be a portable scanner anymore if you want better results. In short, for portable scanners this is already the maximum, which no other company in this price range has reached yet (mostly fake advertising); portable scanners at 0.01mm accuracy are unrealistic.

The way the software is built at this moment, overscanning the object multiple times will produce a mess; not for nothing does a single frame deliver the best accuracy and precision, rather than multiple overlapped frames.

I know where you are going with your theory, and you are not wrong at all, but it is a totally different technology you propose here. Don’t expect Revopoint to hack its own technology, but hopefully it will learn from others and improve over time and with new products.

The issue here is that any change to the software requires a change to the firmware, and from practice we know that is a little too risky.

But feel free to share your findings and suggestions, since I will never say never in this case.

Yes, the current algorithms have already hit the wall and reached their maximum. They were changed back and forth so many times, until I requested including both the Standard and Advanced algorithms, since one of them works better for organic forms and the other for hard-surface scans; so for now the situation is already better and I am getting the results I need.

Yes, of course, but we would need 2 major adjustments here. I would have no issue with waiting, especially if my project is very important and commercial and the best scanning results are the priority.
I have suggested more than once making a second software version for the pros, without downgrading the results or setting limits; professionals have the proper computers to work it out, we don’t rely on fast-food delivery. And I hate to see it, but in most cases my requests run into the algorithms’ issues and limitations and can’t move beyond that point, so I switch to other programs for improvements if needed.

This is already happening inside the scanner hardware while scanning; it is not the software doing it. The software just records the data that is streamed to the project folders; what you see is just a virtual preview of the data as it gets written down.

So again, for changes to take place the firmware needs to be updated.
Other scanners rely mostly on your computer’s power and the software; here the whole heart is already inside the scanner, with processing done before the data gets written down by the software. That allows the scanner to be used with a simple mobile phone instead of a heavy-duty PC with expensive graphics cards and lots of RAM and VRAM, where the scanner would just include the RGB and Depth sensors, as is the case with the Einstar, which relies on specific computer specifications and can’t work with mobile devices.

Now I hear your answer saying: let the software correct the .inf files.
In real time it is actually not possible, and definitely impossible with mobile devices.

Unless Revopoint comes up with the idea to create a completely different scanner type (not for use with mobile devices), it may be an impossible task to update the whole series that is already sold, also due to its hardware capacity.
Just my 5 cents; it is not that Revopoint doesn’t want to do that, it is actually a big task to change not only the software but also the firmware of every existing scanner in the hands of customers. Risky business!

But I hope your input gives the dev team some inspiration to look deeper into it, since we are all here for improvements, smaller or bigger, whether in current products or in future products, as we are all longing for the best results!
