I'm also disappointed by the lack of maturity of the software and documentation. The COVID lockdown in China should have given them MORE time to get things done… but…
I think you should hang in there. The biggest thing that I have discovered is that the exposure level changes when you switch from preview mode. If the depth camera level is too low, you will have tracking errors etc. The other settings are important, but only after you get a basic scan with proper tracking.
This is my latest attempt at a quick detailed scan and it is much closer to what I was expecting.
@glassTransition well done!
Yes, I can confirm that the exposure settings change, and sometimes the Depth camera setting flips between Manual and Auto, when moving from preview or when starting a scan. It is a little annoying, but hey, it is a BETA. However, the Depth sensor settings are the main reason people have issues with tracking.
I have tried all depth settings, manual and auto, and it never gets it right. Just a jumble of overlapping clouds, with features missing based on color. It only seems to work on the model bust, and sort of on faces. I think, in my case at least, training videos and manuals are unlikely to solve this.
The software definitely picks its own settings when you switch from preview to scan. Manually twiddling settings while scanning still hasn't been successful. Almost everything is masked red no matter what number I set the Depth slider to during a scan, so by default that info is going to be ignored.
Again, I have been burned by Kickstarters before, so now I pick companies that have already demonstrated they can build the thing I want and are building at least a version 2. My intent was to buy a tool, not an R&D/debug project.
And yet here I am, fighting unfinished software and hardware that may not be working, instead of successfully scanning parts. I spent half of yesterday failing to scan and the other half manually CADing a part I needed, because the scanner is at present worthless.
If I had bought this directly, they have a money-back policy. I assume that since I bought via Kickstarter, it is just a case of tough luck.
@phineasjw, I get your frustration… It is not the device but rather the new software, which is still in beta. Maybe try the old Handy Scan software: it is much simpler, less buggy, and better for practice. That is how I learned how to use the first POP 1.
I bought the POP 1 in the store last year, and I love it. That is why I purchased the POP 2, and I do not regret spending the money twice, at the full price of the premium package.
Make sure that you do not have too much light around the object you are scanning. I tried to scan textured objects yesterday, and it took me a while to adjust the lighting so it would not affect tracking.
Also, the new software sometimes does not allow you to adjust the Depth camera; I have to click Manual and Auto back and forth to unlock it. So it is definitely not the device, as it works just fine with the older apps.
I really believe that everyone having issues scanning is hitting the new software's bugs, 100%, as I have already reproduced the same errors. And you will not find videos showing these errors, so it is hard to learn how to avoid them. Revopoint should make a video showing all the errors and how to avoid them; the best way to learn is by trial and error.
I have been a 3D professional for the last 24 years, but this is the first time I have owned an infrared-based 3D scanner, and the learning curve is different from anything else I have used before. At some point I was as much a newbie to it as you are right now, so do not give up; it is a great device! I just hope they fix the bugs soon, as the new software is so much better… just not ready yet.
Hi @phineasjw ,
So sorry for your bad experience.
We’re planning an online live stream to show how to use the POP 2 (maybe next week).
If you have time, you’re welcome to watch the live stream.
Hi @PopUpTheVolume ,
Thank you so much for sharing your experience.
“Revopoint should make a video showing all the errors and how to avoid them; the best way to learn is by trial and error.”
Yes, I agree. That’s why we made these videos:
Do you think these are what users really need? If no, do you have any suggestions? Thank you in advance for your help.
These are good. How about one that shows adjusting the depth camera exposure to the proper level for different colored surfaces? We don’t need music in the background, IMO.
UPDATE 3/26/2022: My device was completely out of calibration. I used the calibration tool, and that brought the scanner hardware back to usability.
Now I am struggling with the software. I have also tried the older Handy Scan version, but the results seemed pretty much the same. It doesn’t take much for the software to lose orientation and end up with multiple surfaces at random angles. I end up stopping before I get the entire scan done, for fear the loss of orientation will ruin whatever has been scanned up to that point. The other issue (inherent in this type of scanner tech) is that the color/material drastically impacts whether a surface gets scanned at all. I need to research tips and tricks for overcoming surface scannability issues.
One observation: I would not let the software record cloud data when the scanner says “Too near” or “Too far”. That is when the software loses orientation and creates the 1960s psychedelic spinny cloud of mess. Instead, alert the operator (a beep would be nice) and wait for the scanner to move back into the excellent/good range before resuming cloud data capture.
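To make the suggestion concrete, here is a rough sketch of that gating behavior: only keep frames while the scanner reports a good range, pause capture when it drifts out of range, and fire one alert per excursion. The status names and the frame shape are placeholders, since Revo Scan's internals aren't public.

```python
# Sketch only: status strings and frame payloads are made-up placeholders.
from dataclasses import dataclass, field

GOOD_STATUSES = {"excellent", "good"}

@dataclass
class CaptureGate:
    capturing: bool = True
    cloud: list = field(default_factory=list)
    alerts: int = 0

    def on_frame(self, status: str, points: list) -> None:
        if status in GOOD_STATUSES:
            self.capturing = True          # range recovered: resume capture
            self.cloud.extend(points)
        elif self.capturing:
            self.capturing = False         # out of range: pause capture
            self.alerts += 1               # one beep per excursion, not per frame

gate = CaptureGate()
for status, pts in [("good", [1, 2]), ("too_near", [9]),
                    ("too_near", [9]), ("excellent", [3])]:
    gate.on_frame(status, pts)

print(gate.cloud)   # [1, 2, 3] -- the out-of-range frames were dropped
print(gate.alerts)  # 1 -- a single alert for the whole excursion
```

The point of the `alerts` counter is exactly the behavior described above: the operator gets notified once when leaving the good range, and bad frames never enter the cloud.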
Cassie, very good tutorials. However, there are bugs in the software, and I am afraid a tutorial will not help if the person is not aware of the issue. The major issue is that the Depth camera sensor does not always show what is in the preview; it gets stuck, and when that happens the alignment is not right. It does not happen every time, but too often. I need to click Manual and Auto back and forth to restore the actual Depth camera preview and get the proper gain. I tried Handy Scan and it worked fine, so it is definitely a Revo Scan beta bug. Maybe tomorrow I will record it for you so you can see exactly what the problem is.
@phineasjw, too far or too close can mess things up sometimes, but the most important thing to avoid is too close, as you lose the range of the infrared pattern projected onto the model. Too far does not cause it; I checked all the combinations yesterday. There is huge room for improvement, but the programmers need to see it, so I will prepare a report this week on all the issues. The device is fine after calibration, but the software needs more work.
I think that is an excellent option to add to the software.
I would prefer a sliding tone for distance rather than a beep. It would allow a smoother feedback response when scanning manually.
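The sliding-tone idea boils down to a simple linear mapping from distance to pitch. Here is a minimal sketch; the 150–400 mm band and the 220–880 Hz range are illustrative numbers I picked, not POP 2 specs.

```python
# Sketch of a "sliding tone" distance cue: pitch tracks distance instead
# of a single beep.  All numeric limits here are assumptions, not specs.
NEAR_MM, FAR_MM = 150.0, 400.0      # assumed usable working range
HIGH_HZ, LOW_HZ = 880.0, 220.0      # high pitch = too close, low = too far

def tone_hz(distance_mm: float) -> float:
    """Linearly interpolate pitch across the working range (clamped)."""
    d = min(max(distance_mm, NEAR_MM), FAR_MM)
    t = (d - NEAR_MM) / (FAR_MM - NEAR_MM)   # 0.0 at near edge, 1.0 at far edge
    return HIGH_HZ + t * (LOW_HZ - HIGH_HZ)

print(tone_hz(150))  # 880.0 -- near limit, highest pitch
print(tone_hz(275))  # 550.0 -- middle of the band
print(tone_hz(400))  # 220.0 -- far limit, lowest pitch
```

Because the function is continuous, the operator hears the pitch slide as the scanner drifts, which is exactly the smoother feedback being asked for here.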
The beep is there already, and it annoys me, which is why the speaker icon is there: to turn it off if desired.
I would prefer the scan not to stop when it is too far, only when it is too close, because being too far does not really affect anything (besides the color textures). I prefer to scan at a larger distance; there are fewer problems with alignment. Scanning too close leads to bad alignment faster, as the infrared light pattern is not projected across the whole frame but just half of it. I recorded the infrared light frame yesterday, and many things became clearer to me.
Tracking was my biggest problem too until I stuck a huge amount of dots on a model, problem almost eliminated.
@glassTransition, that scan actually looks really good! I can’t even get the POP 2 to stay connected via Wi-Fi long enough for Revo Scan to respond. The program keeps showing the rainbow wheel and the buttons don’t respond.
I’m using a PC, and the POP 2 is connected via USB. I haven’t had much luck with Wi-Fi scanning.
I would also like suggestions on how to deal with objects that are thin / flat. How can you scan the entire object? If I pause and flip it over, the scanning will not regain tracking on the second side because none of the first side is visible. If I place it on the turntable, the scanner loses tracking when it rotates past the edge for the same reason i.e. there is no visible reference.
If I create two scans and try to align them in the Revo Studio, I have the same problem because I can’t pick common points in the two halves.
If you can’t pick any points manually, then it is virtually impossible.
You will need to add some random object next to it for reference, and remove it later, after merging manually in Revo Studio. Not an easy task.
That’s a good suggestion. Adding on an object that smoothly transitions through the full rotation allows tracking to be maintained. Now I just need to figure out how to do that with something as fragile as a crab shell while supporting it by the points on either end.
I saw the crab shell; not easy, as it is symmetrical and round. I had difficulty scanning a bigger shell, kind of like the shape of your crab shell; the repeating pattern on it confused the scanner.
What I finally did was scan using color data and export it as *.ply; later, when merging, I had visual color references on both sides to mark the points manually.
What you can try is putting the POP 2 above the shell, scanning the top in color mode (a quick couple of frames), then scanning the back of it in a second scan and trying to merge both in Revo Studio, following the references you will see in the color data.
With another similar object that did not have any color, I marked it with a couple of dots on the edges, using a simple red marker, for references, and scanned with color so I could find the points later. Remember not to generate any textures before exporting to *.ply, so you keep your color-per-vertex data intact; once you convert to textures, the color-per-vertex data is gone.
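If you want to double-check that an exported *.ply actually kept its color-per-vertex data before relying on it for manual point picking, the header tells you: the vertex element must declare red/green/blue properties. A minimal stdlib-only sketch that inspects an ASCII header (the sample header below is just an illustration):

```python
# Check a PLY header for per-vertex color (red/green/blue properties).
# Parses only the header text, which is ASCII even in binary PLY files.
def has_vertex_color(header_text: str) -> bool:
    in_vertex = False
    props = set()
    for line in header_text.splitlines():
        parts = line.split()
        if parts[:2] == ["element", "vertex"]:
            in_vertex = True                 # entering the vertex element
        elif parts[:1] == ["element"]:
            in_vertex = False                # some other element (e.g. face)
        elif in_vertex and parts[:1] == ["property"]:
            props.add(parts[-1])             # property name is the last token
    return {"red", "green", "blue"} <= props

sample = """ply
format ascii 1.0
element vertex 8
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header"""

print(has_vertex_color(sample))  # True
```

If this returns False on your export, the color data was already lost (for instance by generating textures first), and the visual-reference trick above won’t work.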
Thanks for the color suggestions.
The shell isn’t round, though; it’s kind of a flattened oval. The tracking problem occurs when the thinner profile is facing the scanner, since it can no longer see the previously scanned surface. In this case, I guess I could wrap registration lines around the edge. That would only constrain one degree of freedom, though; I still wouldn’t know how far apart the two flat surfaces need to be.
I think the additional object approach may be the best choice. I am going to try that first. I’ll post results.