Are there any plans to improve software tracking?


I understand that even the topic’s name is a bit provocative, and to some extent it is, but still… I’ve had some experience with the Revopoint Mini so far and, to be frank, it’s frustrating. Sure, there are useful “you have to know it, it’s not just plug-and-play” tips, like scanning tilted 45 degrees down and placing a lot of “marker objects” around the one you want to scan so the thing tracks successfully… but still :slight_smile: The software itself is not bad at all in my opinion. Yes, it is clumsy, and there are annoying quirks like the camera preamp resetting to default every time you start a new scan (no, really, please fix it; it would be easy to make it remember the last preamp setting), but these are minor, and in general the software works reasonably well.

But there is one major issue. Yup, a MAJOR one: tracking loss and tracking drift (when you move the scanner back toward the origin point with some overlap to start the next “scan line”, but the tracking points clearly misalign with the ones captured at the beginning). It’s horrible. I went to YouTube for comparisons of 3D scanners, and just by typing “revopoint” into the search I ended up watching various reviews where, in seemingly every one of them, the author goes beyond scanning the bundled bust and lands at “I didn’t really get it to work”. No, really, these are the videos in the order I watched them over the last 24 hours, covering the Revopoint Mini and POP 2 scanners (links include timestamps):

Three moments from another video (one tracking the cube, and the others a general surface, on PC and Android):

This one is about the POP 2:

And these are just a few of the reviews and “let’s try to scan something” videos. I agree: “Hey, that guy didn’t use tracking cubes, and that one should have coated it with Attblime first”, and so on. But I think those tips are meant “to improve the scanning experience and maximize the quality of the result”, not “if you do all of that, maybe your scan will be somewhat successful… of a kind” :slight_smile:

No, I really don’t want to look like a troll or go “Arrgh! Give me my money back!” and so on. I’m just pointing out that tracking in the Revopoint software is a real mess and its algorithms need attention and a major overhaul. Really. I don’t know, maybe take the phone’s accelerometers into consideration in the mobile application (since both the scanner and the smartphone might be on the same tripod), and/or limit the maximum shift between neighbouring frames. That’s besides the possibility that it should be rewritten from scratch.

By the way, I tried to find out whether a 3-axis stabilizer would help reduce tracking loss, and bought one locally, but the Mini’s size didn’t allow it to be used: the Mini and a smartphone weigh about the same, but the Mini sticks out too far from the gimbal, so the motors can’t handle it.

What is this post about? To point out to the developers that the tracking part of the software REALLY needs attention. The hardware is really nice, but it is let down by the software’s tracking issues.


The software can’t improve tracking if the object doesn’t have proper features; that is the key to tracking.
No matter whether you use a $10K scanner or anything else, if you work the way the people in those videos do, you are going to have the same issues.

You need to observe how big companies scan with their scanners: they always have additional elements to keep the tracking going, whether it’s cubes, a non-flat surface, or markers placed the proper way.

Another thing is scanning with phones; that is the last thing that will give you fast real-time tracking. A small interruption in the phone’s processing already means a frame or two lost, causing issues.
And there is, for example, a huge difference between sending tracking data over Wi-Fi to a phone and over USB-C to a tablet, 6 vs. 8 GB of RAM, and many other factors.
And of course, practice and proper object preparation for the scanning session matter too.

Structured light 3D scanners rely on object features; they can’t scan or track without them, unlike laser or other scanner technologies.

So the answer is no, no tracking improvement can be made, because that is not how the technology works.
The only things that can improve results are scanning with the proper mode for the object’s size, using a more stable device with sufficient RAM to run Revo Scan, and adding extra objects for tracking that can easily be removed afterwards.

In short, a little practice with various setups will teach you how well it can work.
I have scanned many objects and have virtually never had a tracking issue.

So in most cases this is an end-user issue, a lack of 3D knowledge and practice, rather than the software.

The math is really simple:
No features = No tracking

And that is what the hardware was designed for: project a pattern, scan, and track the differences in that pattern so motion can be followed… No changes in the pattern? No features, no tracking.

I will show you something interesting later today, so you can better understand what is really happening and what causes the loss of tracking on your objects.
I think it may help you improve your workflow.


Eh… The problem is we can end up arguing in the style of “You don’t understand!” - “No, YOU don’t understand!” :slight_smile: The thing is, I understand perfectly well that tracking is based on object features, and if there aren’t enough of them, problems will start. But at the same time, it’s not the first time I’ve seen someone view a specific small world from the inside, treating that world’s restrictions as objective laws of physics. The same thing happened on a flight simulator forum: “It is impossible to make better graphics, because unlike 3D shooters, which have a small 3D world around them, a true flight sim has thousands of square km/miles, so it is objectively impossible…”. And then MFS2020 showed up and said “Hold my beer” :slight_smile:

What I am trying to say is: yes, I agree there is still a lot I should learn to use a Revopoint scanner, but at the same time it will be impossible to convince me that Revopoint built a good tracking system that works as it should. That’s not because I picked up a toy and capriciously declared “It’s a bad toy, it won’t do what I want - I know nothing about it, it should just do it itself!”; it’s because I can visually compare the algorithms used by Revopoint and Creality. Maybe that’s too bold a statement, since I understand them only superficially, but still.

Don’t get me wrong, I’m not saying I don’t need to study 3D scanning methods and so on. I do, and you have already helped me a lot. But the way it works now, it is still a tar pit…

Just a small example of an algorithmic tweak: you register each frame against the existing point cloud continuously, which means the software knows where the scanner is relative to the object. You could then limit the allowed drift between frames to no more than, say, 30% of the scanning field. If it is more, the software stops registering points and enters a “lost tracking” state. Sure, you would get a lot more “lost tracking” events during scanning, but you would also get far fewer of the false “oblivion drifts” it currently produces. It’s just a small example. I know it doesn’t work that simply, and things are more complicated, but still: there are lots of tweaks and algorithms that could improve tracking a lot, and nothing different is needed in the hardware. The problem is, I don’t know whether there’s any point getting that technical with you, or whether you will just ignore all of this.
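Just to make the idea concrete, here is a rough sketch of such a “drift gate” in Python. Everything here is illustrative (the 30% threshold, the field-of-view size, the function names); it is not Revopoint’s actual code, just the shape of the check I have in mind:

```python
def register_frame(prev_pose, new_pose, fov_width_mm=100.0, max_shift=0.30):
    """Accept a new frame only if the implied scanner motion is plausible.

    prev_pose / new_pose: (x, y, z) scanner positions estimated by the
    frame registration step. Returns "track" or "lost".
    """
    dx, dy, dz = (n - p for n, p in zip(new_pose, prev_pose))
    shift = (dx * dx + dy * dy + dz * dz) ** 0.5
    if shift > max_shift * fov_width_mm:
        # Refuse to fuse this frame: better an explicit "lost tracking"
        # state than silently drifting the whole point cloud.
        return "lost"
    return "track"
```

So a 5 mm move between frames would be fused, while a 50 mm jump (half the assumed field of view) would trip the gate and pause registration instead of producing a drifted cloud.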

Again, I don’t mind studying, but at the same time it will be almost impossible to convince me that tracking with Revopoint is just fine; I still trust my eyes more than anyone’s explanations :slight_smile: I really hope a Revopoint programmer will one day take a fresh interest in the tracking system he made.


Tracking is extremely difficult to get right. Even the $10,000 Einscan HX I have access to at work loses tracking. Admittedly, less often than the POP 2 etc., but it scans at 55 fps compared to the 12 fps of the Revopoint solution.
I’m frequently amazed at how good the Revopoint scanners are considering their price point.


Completely agree; I had a Lizard as well. Tracking logic, tracking logic. The Creality software would accurately detect a mis-track and, I guess, stop collecting useless data. Go back over an already-scanned area and it regains tracking, and you can carry on. This pause-and-delete method isn’t optimal, but there should at least be some logic to identify a random, infinitely long path…


@Harh OK, I see where you’re going with that, but I am not sure it would be possible, considering the devices are portable and tracking happens in real time mostly thanks to the chip inside the device, which eliminates the need for heavy GPUs and other resources while scanning. That’s why it can run from a phone and cost less.
12 fps is not much, but it’s enough. The main issue here is the small FOV used on objects that are bigger and generally featureless. A structured light scanner doesn’t see things the way our eyes do: a rotating sphere, for example, is simply a steady disk to a scanner no matter which way you rotate it, so the pattern never really changes enough to be registered by the tracking algorithms.
The pattern looks the same at all times and all angles regardless of its position.
The same goes for a cube at certain angles.
Revopoint devices need at least 3 features visible at the same time while scanning to keep tracking. The same goes for markers: in Marker mode, the software completely ignores the object’s features and relies only on having 3 markers in the FOV at a time.
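To make the “minimum of 3” intuitive: with only two points in view, a rigid pose is ambiguous, because any rotation about the line through those two points leaves both of them exactly where they were. A third, non-collinear point breaks that symmetry. A tiny illustrative sketch (purely a geometry demo, nothing to do with Revopoint’s actual solver):

```python
import numpy as np

def rot_x(theta):
    """Rotation matrix about the x-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# Two markers lying on the x-axis, plus one off-axis marker.
p1, p2 = np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
p3 = np.array([1.0, 1.0, 0.0])

R = rot_x(0.7)  # some rotation about the line through p1 and p2

# p1 and p2 are unchanged, so a tracker seeing only them cannot
# detect this rotation at all:
assert np.allclose(R @ p1, p1) and np.allclose(R @ p2, p2)
# ...but the third, non-collinear marker moves, exposing the rotation:
assert not np.allclose(R @ p3, p3)
```

That is why two visible markers (or two features) leave one rotational degree of freedom unresolved, and a third one pins the pose down.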

Now, in one of the videos you posted, the guy complains about having to put too many markers on the object he’s scanning. But look at how many points it needs to keep tracking without issues, then translate those marker points into features, and you will better understand how structured light scanners really work.

Of course, there are very expensive scanners that use precise instruments to pre-scan the volume before the final scan, creating a virtual mocap of the volume, but they cost a lot, so a system like that requires not only new software but also additional hardware.
What you already own can’t be changed anymore; it would be a request for a totally new scanner.

You see, the new upcoming 3D scanner by Revopoint, called Range, uses a much bigger FOV, where fewer markers are needed for tracking, which means fewer features are needed per square cm. It also offers 20 fps, and tracking issues are virtually nonexistent.

The software is still the same, but newer, better hardware does the job better and more easily. However, there is a catch: it can’t scan single volumes smaller than 50x50x50 mm, although it can scan a car without issues or markers.

So my point is that changing just the software will not do any good without the proper hardware to go with it, because everything the scanning process requires, from projecting a pattern onto your surface, to the depth sensors reading it, to tagging each frame with additional spatial position data, is something beyond what the device offers.

The technology already designed by other companies is patented, and the licenses to use it are very expensive; that’s why more advanced scanners cost so much.
So unless Revopoint comes up with its own new invention that lets them keep costs low, I don’t see it happening.
And I don’t see you willing to pay $7K or more for a device that does what you wish.

For that reason, there is no single structured light 3D scanner that can scan objects from 1 cm to 3 m.
For each object size, the device needs a different sensor angle of view to capture the pattern properly.
That’s why the MINI was made for scanning mini objects only, and the Range for bigger objects only, to produce proper results; the software by itself can’t do it.

So no matter what, you can’t change with software alone how the device operates or how many features it can capture to keep tracking; that is written in stone and can’t be changed.

And if Revopoint comes up with new technology to improve the system, that would be great, but it is nothing we can simply request to be done.

Tracking is based on the same logic as face recognition software: features. If it can’t find the features when the position changes, it can’t keep tracking.
The same applies if you put 3 markers on an object and scan in Marker mode: once you change the object’s position and only 2 of the 3 markers are visible, it can’t track, because it needs a minimum of 3 points for recognition.
It relies heavily on the object’s features and nothing else.
The rest of the scanned surface doesn’t really matter for tracking, only the specific features that distort the pattern and get registered as feature/marker points.
For that reason you can scan featureless objects in Marker mode: the scanned surface doesn’t influence tracking at all, only the markers registered in space. The same holds for object features, where the rest of the surface doesn’t matter while tracking.

So the whole point here is to provide enough tracking features to alter the projected pattern enough to be registered as tracking points.

Structured light 3D technology is not photogrammetry, and you are not taking pictures of an object masked out from the background. You are scanning distortions of a pattern, and if there aren’t enough distortions, there is no tracking, and software can’t make them up for you.

There are great additional methods to trick the scanner into better tracking, like adding extra elements next to the object, but they only work within one scanning session: you can’t flip the object and continue scanning with this method, and the same goes for Marker mode. Separate sessions need to be performed and the scans merged together afterwards.


That’s true. No matter how much you spend on a structured light 3D scanner with the best software, tracking is always based on the object’s features or the number of markers.
Unless you want to throw out something like $24K to $100K for additional tracking technology and hardware.

But trust me, the meshes aren’t better just because you spent $100K.
It’s not for nothing that scanning services are so expensive; it’s not something you just jump into and become an expert at.

The secret is to examine the object and use the best method to provide as many points as possible for proper tracking.

There is no magic button or other solution, just practice and learning how to use the tool properly, as with any other tool.

I’m frequently amazed at how good the Revopoint scanners are considering their price point.

Undoubtedly, if we’re just talking about the hardware: the Revopoint Mini can really capture small features, and even the POP 2 seems to surpass the CR-Scan Lizard.

but it scans at 55fps compared to the 12fps of the Revopoint solution.

BTW, interesting info. It will be interesting to see how much the jump from 12 to 20 FPS improves tracking (and will that be for grayscale/RGB or something different, BTW?).

Agreed. The bigger the scan area, the better the tracking is going to be.

If “3 markers” saved the tracking, that guy in the POP 2 video wouldn’t have sworn that much about it.

That’s why I said above that “someone looks at a system from the inside”. From inside, the limitations of the system start to look like objective physical laws. This is why I don’t really want to argue with you about this stuff; we won’t get anywhere, and you’ll just see me as a stubborn person who ignores arguments. But I do read and understand what you say. We just look at this from different perspectives, and even if we start discussing angles, markers, distances and object sizes, we’ll still run into the “you need different hardware” - “no, we need different software” wall anyway.

The general idea of the post was not really a discussion, but to draw the developers’ attention to the issue if they visit the forum. As for the rest, we can help each other, but that won’t change the software “version”.

The improvements are great, since I am testing the device at this moment, so double the fun. But as I already explained, the frame count is not the only key; the size of the scanned area (the FOV) matters too: the bigger it is, the fewer features the object needs.
The frame rate applies to capturing frames; RGB has nothing to do with it. And what you see in the preview is not what the scanner captures: it captures just a small area, not the whole picture, as this is not photogrammetry. The preview is only for reference, and only the right channel; the full channel is the one previewed in blue.

The video of that guy is painful to watch. The object is huge and should be scanned in body mode with additional elements around it… and he needs more markers to keep tracking, in a non-repeating pattern, as repetition can confuse the tracking system.

I am testing the software for Revopoint, so I know the possibilities it offers and what can or can’t be done. I’m not arguing with you at all, only sharing my own point of view. Any additional feature costs money, needs a license, or needs to be invented, so it’s not like you can just copy an idea from another software; that will never happen, and for big changes you need to change the hardware. So if one day there is a super scanner with the software and all the goodies, the price is going to be 5 times higher or more.
But suggestions are always welcome; I’m just staying realistic, because a lot of the suggestions don’t apply to the structured light 3D scanner technology Revopoint offers with its own designed chip.
Most of them are for laser scanning or photogrammetry.
And suggestions like “do better tracking so I don’t have to think much while scanning” simply don’t compute here: you chose the technology and you need to learn its rules. And a couple of uneducated people on YT having trouble and showing zero skill don’t do it justice; they are showing how bad they are at scanning, and that is all.
I see a lot of users go through a phase: some of them learn and move forward with great results, while some just sell their device because they don’t want to be bothered with learning, or blame the device instead, which is of course a false conclusion, since we have so many great users getting amazing results with the same devices.

Do you mean the 12-20 FPS range as “12 FPS in this mode and 20 FPS in that”? Is it a limit of the different modes, or a user choice to avoid too much scan data?

That’s exactly what I’m talking about: software optimization is always a matter of tricks that either reduce computation (like an FFT versus a full Fourier transform) or narrow the possible modes of operation to avoid modelling mistakes (like not allowing the scanner to “jump” or “rotate” to a degree the operator couldn’t physically perform), as I mentioned before.

“… and you can never get better graphics than we have now in our great flight simulator, since you’d need 100x the processing power you don’t have”. That’s why I gave that example above: there is always an explanation of why this or that cannot be done, for objective reasons.

Sure. But when you need an excessive amount of knowledge just to hand-scan a 40 mm cube, then something is wrong with the software, not with you. Note that you need to know A LOT just to scan something as simple as a box with XYZ-marked faces in free-hand mode.

I understand your point. But knowing the “learning curves” of 3D printing, photogrammetry, 3D modelling and the like, I can judge where those are reasonably optimal and where, as I put it, you go through the tar pit. Again, I understand that all I can do is “point it out”; anything else I write about it would just be my loss, because the only answer will be, as I said from the beginning, “you don’t understand”, that is, “you just don’t learn”, shifting any potential blame from the software onto me in all cases :slight_smile: So let me hope I can still learn by ceasing to argue :slight_smile:

Only the new upcoming Range can work at 20 FPS; the other devices max out at 12 FPS. More frames per second also means better tracking while moving the scanner back and forward, with less chance of losing the tracking point during rotation or shake.
Some of the best portable scanners reach around 80 FPS, true beasts, but they cost accordingly.

The moment you start scanning, the first frame is registered in space and locks the object in position; the rest of the frames follow the position of the first frame. If it jumps, it means the frames are aligned to the original space. Sometimes the frames split, which should not happen, and things like that can be improved, and the improvement never stops: if you compare what the first Handy Scan offered with the latest version, it is a huge jump, and we are talking months, not years.
I am sure things will only get better, so fewer little bugs will interrupt the process.

You see, if you want more frames per second to improve tracking, you can’t do that with older devices, since it is impossible to change the chip. Other scanners generally use the PC for all of their processing, so the software can always be adjusted as more powerful workstations hit the market. With small portable devices like Revopoint’s, all the power is inside the device’s hardware, which is what allows you to use it with a PC or an Android phone… so that is the difference here.

We still have firmware that can be improved and adjusted, but as I said before, it is not unlimited.

Why would you scan a cube when every 3D software in the world includes it as a standard primitive? Who buys a scanner to scan cubes, spheres or planes?

And by the way, a cube scanned at 90 degrees doesn’t have any visual features; you need to scan it at 45 degrees to have a minimum of 3 faces visible for better tracking. You see, simple things, and still you learned something.

And again, structured light 3D scanners are not made to scan primitives, or surfaces with repeating features in general.
We are going around in circles here.

Exactly… there is no excuse to blame the software unless it really is the software, and I know when it is, since I know its cons and pros.

So I think it would be better to put the energy into learning tricks and tips to improve your workflow and show the YT Karens how wrong they were. I think you’ll get more satisfaction from showing off your great results.

Unless you want to wait for AI to do the job for you in the future… but then it would be nothing special anymore.

PopUpTheVolume pretty much explained everything, but there is one last thing that I actually agree could and should be improved on the software side: turntable scanning. There is absolutely no justification for Revopoint to lose tracking while scanning a rotating object on a turntable. In this situation the object (and the entire scanning system) has just a single degree of freedom, so if it ever loses tracking, there is only one variable the algorithm has to wiggle to find the proper alignment. Now, I can’t say I have a lot of experience with 3D scanning algorithms, but I do have half a lifetime of experience with matchmoving and 3D tracking algorithms for VFX, and if there is an unsolvable shot and we can introduce a constraint (a limitation) for the tracking algorithm, such as “hey, by the way, the camera was moving along a perfectly straight line, or was circling the scene following a spiral”, then that shot becomes ridiculously simple to solve. Since 3D feature scanning is basically the same process reversed (the camera is stationary, the object is moving), I seriously can’t imagine why the same approach wouldn’t be possible :confused:
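To illustrate how much the single-degree-of-freedom constraint simplifies things: instead of a full 6-DOF alignment, re-acquiring tracking on a turntable reduces to a 1-D search over the table angle. A deliberately naive, brute-force sketch (the axis, the data, and the search method are all illustrative assumptions, not any vendor’s algorithm):

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis (the assumed turntable axis)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def find_table_angle(reference, observed, steps=3600):
    """Return the z-rotation (radians) that best maps `reference`
    onto `observed`, by scanning candidate angles.

    With the turntable constraint, this 1-D sweep replaces a full
    6-DOF pose search: one unknown instead of six.
    """
    best_angle, best_err = 0.0, np.inf
    for theta in np.linspace(0, 2 * np.pi, steps, endpoint=False):
        err = np.sum((observed - reference @ rot_z(theta).T) ** 2)
        if err < best_err:
            best_angle, best_err = theta, err
    return best_angle

# Synthetic demo: a point cloud and the same cloud rotated by 1.2 rad.
ref = np.random.default_rng(0).normal(size=(50, 3))
obs = ref @ rot_z(1.2).T
```

Running `find_table_angle(ref, obs)` recovers an angle within the sweep’s step size of the true 1.2 radians; a real implementation would refine it with a local solver, but the point is that the constrained search space is one-dimensional.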


Don’t get me wrong, Filip, I would love that, as it would be much easier for everyone, but the algorithms are pretty much limited and already pushed to the edge. With the new, faster hardware and wider field of view of the upcoming Range scanner, though, it was improved greatly, and it is really amazing.

Hi @Harh

I want to show you here how the sensors actually see objects.

On the left is the human eye, on the right the scanner.

The same goes for scanning a cube at 90 degrees: there are no features to be registered in space, and you need a minimum of 3 (XYZ).
But changing the angle to 45 degrees already exposes 3 edges of the cube, allowing better tracking.

Now I want to show you how the sensor actually sees your object. It is not the Depth Camera preview you see as a reference; that is only for visualization. What the scanner can actually see is this:

Left and right sensor: only the areas marked with lighter vertical lines. The pattern shifts quickly from left to right in real time, creating a full frame that the software captures.

But it does not capture everything; it only captures the closest fragment, depending on the object’s shape.

So what is happening here? Nothing. It loses tracking, since there are no features, and you get a duplicated mandala effect into neverland.

Now, when you turn the cube to 45 degrees, the scanner will be able to see the pattern distortions and register the tracking points better.

And when you use objects very rich in features, there will be enough tracking points that you’ll never have to worry about losing them again.
And that is how it works: you scan deformed patterns reflected back to your sensors, which are translated into 3-dimensional frames that wrap themselves into a virtual object frame by frame, like an onion or shingles on a roof.

I hope this helps you better understand the process. And if you really want to scan featureless objects, just add some additional objects to the scene as tracking dummies.

And if you use markers in Marker mode, the object’s surface is completely ignored for tracking, so you can scan any object regardless of its features.

I forgot to add that, as always, only the front edge of the object is scanned, based on the depth of field, so not the whole object you see in the preview.
In this case it will be only this fragment and its tracking points, since 3D objects are not solid, just shells.

I get that, but the point I was making is LOGIC. The Creality scanner is basically the same device with a similarly SMALL FOV. What I’m describing is “slow down” logic: if you aren’t moving and the solved position starts taking off, that should be programmable as either (A) moving too fast or (B) physically impossible, and flagged as “LOST TRACK”. There is no shame in the application flagging this versus just gathering data at a low confidence level. Remember, people want reliable scan data, not just any scan data. All I am saying is: try the Creality CR01 or Lizard, then an EINSTAR, both similar technologies, and while scanning they behave like this: a big pop-up, “Lost tracking”, and data collection stops until you go back over a pre-scanned area. Revo Scan already acts this way in one place, so I know this logic can work… Don’t recall it? (Pause) move the object, (Play): scan data is RED until it finds its place again, then it’s green and carries on. Yes, I know it should work this way during scanning too, but only on clear mis-tracks, not “oh hey, the scan operator is having a stroke… let’s just keep filming and see how it goes…”
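The “physically impossible” part of that logic can be expressed as a simple plausibility bound: at 12 fps, a handheld scanner moving at even 0.5 m/s covers roughly 42 mm between frames, so a solved pose jump far beyond that is far more likely a misregistration than real motion. A hypothetical sketch (all numbers and names are assumptions for illustration, not values from any scanner’s firmware):

```python
MAX_HAND_SPEED_MM_S = 500.0  # assumed plausible handheld speed (0.5 m/s)
FPS = 12.0                   # frame rate of the scanner

def classify_pose_jump(jump_mm):
    """Flag a solved frame-to-frame pose jump as plausible or not.

    Anything beyond 3x the distance a hand could plausibly move in one
    frame interval is treated as a misregistration: stop fusing data and
    wait to re-acquire, instead of recording low-confidence garbage.
    """
    plausible_per_frame = MAX_HAND_SPEED_MM_S / FPS  # ~42 mm per frame
    if jump_mm > 3 * plausible_per_frame:
        return "LOST TRACK"
    return "OK"
```

So a 10 mm jump between frames passes, while a 500 mm jump gets the big “Lost tracking” pop-up rather than being silently fused into the cloud.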

I see, Range - 20 FPS, others 12. Understood, thanks.

I really hope for that too.

BTW, registering the point cloud frame by frame makes it possible to address tracking drift in situations like scanning a full cylinder, where the 2nd frame is “connected” only to the 1st, the 3rd to the 2nd, and so on. When the 300th frame comes around and connects back to the 1st, that allows recalculating the entire chain of frames by repositioning each of them by a fraction proportional to its index. At the very least, it could be implemented in turntable mode.
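What you’re describing is essentially loop closure: when the last frame re-observes the first, whatever misalignment remains is the accumulated drift, and each frame can be corrected by its proportional share of it. A toy sketch of that redistribution, for the rotation angle only (real systems spread a full 6-DOF error; this is illustrative pseudologic, not Revopoint’s pipeline):

```python
def close_loop(frame_angles):
    """Redistribute loop-closure error across a turntable revolution.

    frame_angles: cumulative rotation (degrees) assigned to each frame,
    where a perfect full loop should end at exactly 360. Each frame i is
    corrected by the fraction i/(N-1) of the measured closure error.
    """
    n = len(frame_angles)
    drift = frame_angles[-1] - 360.0  # closure error after the full loop
    return [a - drift * i / (n - 1) for i, a in enumerate(frame_angles)]

# A 5-frame loop that drifted 4 degrees over one revolution:
angles = [0.0, 91.0, 182.0, 273.0, 364.0]
corrected = close_loop(angles)
```

Here the 4-degree closure error is spread evenly, pulling the frames back to 0, 90, 180, 270, 360, so no single frame absorbs the whole misalignment.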

I just used it as a simple example of the tracking issue.

We think differently :slight_smile: Though, as I said above, there is no real point arguing with you; it’s a road to nowhere, since we’ll both just say “you don’t understand” )

As for the sensor images: unless the pattern used by the scanner is circular (as far as I know, they are planar), they seem to be wrong.

Yes, I should make a better example, but that takes time.


Of course we think differently, and there is nothing wrong with that; everyone sees things differently.

The difference here is that I can scan any damn object I wish, the way things are.

And that’s not just me …

But of course, any improvement to the software would be a huge plus for everyone,
and any suggestions are welcome.
And I am sure that if the development team sees interest in one, they may try it, as they always do.

I only disagree that the POP 2 is the worst scanner, because the mesh quality speaks for itself, and we can’t rely on opinions from people who lack the experience or don’t know what they are really doing.

Sure. But for that, one must first become a level-38 adept of 3D scanning to actually do it. I would summarize it this way: with good hardware, good software and a reasonably experienced operator, you’ll get good results. With bad hardware, no matter what you do, you can’t scan anything with a brick. With fairly good hardware but awful software, you can still get good results, but it will be a bit of a torture.

You may not agree with me, but at least 2 of the 3 people in the videos above don’t say “Hey, I just bought a 3D scanner and don’t know how to use it, but I’ll try - oh, it doesn’t work as a ‘just push the red button and get the result’ device”. They say “Hey, I’ve used other scanners, and none of them caused this much trouble”. So yes, you need some knowledge to actually scan things, but needing some knowledge, effort and education is not the same as worshipping the damn thing, making effort after effort to get any positive result. Case in point: the guy in the video above could get a point cloud of the back side of the amplifier with the Lizard, without any additional tracking cubes or Attblime, and got absolutely no results, with instantly lost tracking, with the Mini.

Me too. POP 2 scanning quality isn’t bad at all… if you can get it out of it.

Einscan products stop acquiring point data until tracking has been restored. My POP 2 just kept on recording, creating a cloud of data that was of no use.