Trying to connect to an external positioning system

I am “upgrading” the POP2 to use an external reference frame/coordinate system, which should drastically improve scanning accuracy and, I think, the resolution of the final scan. For this I use a tracker rigidly mounted on the camera. Because there is an offset (both rotation and translation) between this tracker and point-cloud space, I need to know that transform matrix to make this work.

This is made tricky by the fact that there are many coordinate frames associated with this type of camera: RGB camera space, left IR camera space, right IR camera space, structured-light space (the center of the camera), and finally point-cloud space, which could correspond to none of the above depending on specifics.

Inside the calibration parameters saved by “revopoint calibration” you will find some OpenCV calculation logs and other easily deciphered binary files (mostly 4-byte floats). These, together with experiments, lead me to believe that the point-cloud X center (the axis along the length of the device) is aligned with the LEFT infrared camera, not the center of the body. Furthermore, the point-cloud origin appears to sit about 6 mm inside the body of the scanner (though this is an imperfect measurement and could well turn out to be 0; at any rate the cloud origin is not dramatically shifted with respect to the forward surface of the scanner, which faces the positive Z direction). The 2D calibration (stretching and moving the RGB and depth images) appears to be applied by the Revopoint software before the point cloud is saved, so I am ignoring that aspect of image acquisition (though I am not absolutely sure that’s OK).
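
Those binary files are easy to inspect with a few lines of Python. A minimal sketch, assuming the files are packed little-endian 4-byte floats (the filename below is hypothetical, not an actual Revopoint file name):

```python
import struct

def read_f32(path):
    """Read a binary file as a flat list of little-endian 4-byte floats."""
    with open(path, "rb") as f:
        data = f.read()
    n = len(data) // 4  # ignore any trailing partial value
    return list(struct.unpack(f"<{n}f", data[: n * 4]))

# Hypothetical file name; substitute whatever the Revopoint
# calibration tool actually writes on your machine.
# values = read_f32("left_ir_extrinsics.bin")
```

Dumping the floats this way and eyeballing for plausible rotation/translation values is how one would confirm which camera the cloud origin is tied to.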

Viewing this problem from another angle (no pun intended), the problem reduces to finding a matrix X such that multiplying a point cloud by [matrix T of the tracker world pose] * [inverse of X] puts all the points in their proper place in world space. So inv(X) takes you from cloud space to tracker space, and X itself is the transform from tracker space to cloud space.
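
In homogeneous coordinates that chain is a one-liner. A minimal numpy sketch, assuming 4×4 column-vector matrices (T and X here are just placeholders, not the real calibration):

```python
import numpy as np

def cloud_to_world(points, T, X):
    """Map Nx3 cloud-space points into world space via T @ inv(X)."""
    P = np.c_[points, np.ones(len(points))]        # Nx4 homogeneous points
    return (T @ np.linalg.inv(X) @ P.T).T[:, :3]   # back to Nx3

# Toy check: with X = identity and T a pure +1 shift in x,
# the cloud origin lands at (1, 0, 0) in world space.
T = np.eye(4); T[0, 3] = 1.0
X = np.eye(4)
print(cloud_to_world(np.zeros((1, 3)), T, X))  # [[1. 0. 0.]]
```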

X itself is composed of [X1 tracker to scanner body] * [X2 scanner body to left IR image sensor] * [X3 left IR image sensor to point cloud space]

What I tried:

  • Used CloudCompare to manually align two clouds (e.g. c2 to c1) and got a good matrix (c21) for that. This empirical result should be identical to the sequence of steps taking c2 back to world position and then into c1’s frame: for instance c21 = X·inv(t2)·t1·inv(X). However, we were unable to solve for X using all the MATLAB toolboxes known to man.

  • Created a 3D model of the entire assembly and the relative axes, and manually found a matrix that takes me from marker space to left IR camera space. This “sort of” places the clouds together, but there isn’t a perfect match, because the cloud origin is NOT exactly at the left IR camera position on the surface of the device (it could be a bit inside, not sure by how much, and not exactly at the vertical center of the scanner body; there could also be some rotation, as evidenced by the “left camera rotation” in the calibration files being non-zero while the translation offset for the same camera is set to zero).
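
For what it’s worth, the relation in the first bullet rearranges into the classic hand-eye calibration equation A·X = X·B (A being the cloud-to-cloud motion, B the relative tracker motion), which is the form solvers such as OpenCV’s calibrateHandEye expect. A numpy sketch with synthetic transforms, just to show the algebra (X here is a made-up ground truth, not the real offset):

```python
import numpy as np

inv = np.linalg.inv

def random_rigid(rng):
    """Random 4x4 rigid transform: rotation via QR with det forced to +1."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    T = np.eye(4)
    T[:3, :3] = q
    T[:3, 3] = rng.normal(size=3)
    return T

rng = np.random.default_rng(0)
X  = random_rigid(rng)   # synthetic tracker<->cloud offset (the unknown)
t1 = random_rigid(rng)   # tracker world pose when cloud 1 was taken
t2 = random_rigid(rng)   # tracker world pose when cloud 2 was taken

# Per the post: the empirical CloudCompare alignment should equal
# c21 = X * inv(t2) * t1 * inv(X)
c21 = X @ inv(t2) @ t1 @ inv(X)
B = inv(t2) @ t1         # relative tracker motion between the two shots

# Rearranged: c21 @ X == X @ B, i.e. A X = X B. Feeding several such
# (c21, B) pairs to a hand-eye solver recovers X.
assert np.allclose(c21 @ X, X @ B)
```

With several cloud pairs (and hence several A/B pairs from CloudCompare and the tracker log), a hand-eye solver should recover X directly, with no manual tweaking.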

If anyone is good with this type of task, I can provide PLY and TXT files with the acquired point clouds and the corresponding tracker position and rotation for each, and you can see if you can find a way to get an alignment.

A third thing I tried was to use ArUco/OpenCV to output a calibration file for the RGB camera only, and then use the RGB camera to photograph a 3D tracker with an ArUco marker located exactly at its center and aligned with the coordinate system of the tracker.
So now I have a data set with:

  • marker tvec, rvec according to the RGB image
  • world transform of the marker given by the tracker aligned to it
  • world transform of the tracker mounted on the scanner

From these we computed a matrix EC that ought to take one from the camera-mounted tracker to the true RGB camera coordinate system. We were not able to tweak it manually, even with the wisdom contained in the Revopoint calibration files, to account for the fact that the RGB camera origin is not the point-cloud origin. This is despite the fact that we know the offset between the RGB camera and the left IR sensor, on which the point cloud appears to be almost perfectly based.
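
Under standard conventions (rvec/tvec from ArUco/solvePnP give the marker-to-camera transform; M and S are the world poses of the aligned marker and the scanner-mounted tracker), the chain described above could be sketched like this. All the names here are my assumptions, not Revopoint’s:

```python
import numpy as np

inv = np.linalg.inv

def rigid(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_from_tracker(T_cm, M, S):
    """EC: maps scanner-tracker coordinates into RGB-camera coordinates.

    T_cm : marker -> camera (built from the ArUco rvec/tvec)
    M    : world pose of the marker (from the tracker aligned to it)
    S    : world pose of the tracker mounted on the scanner
    """
    C = M @ inv(T_cm)   # world pose of the RGB camera at that instant
    return inv(C) @ S
```

Given several such measurements, EC should come out (nearly) constant; a large spread between measurements would point at a convention mismatch (e.g. rvec/tvec direction, or row- vs column-vector matrices) rather than at the calibration offsets themselves.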

It would be great if Revopoint tweaked their scanning software to accept an outside world coordinate system and did all the adjustments internally, since they know best which coordinate systems and calibration they are using.


Since the POP2 can be mounted to a “CNC camera slider”, it would be great if the POP2 or Revoscan could accept the XYZ position data from the CNC software.

We need an external positioning system like the HTC Vive Tracker, too.

I’ll second this - I wish I could mount the Mini to my CNC router and create a “flatbed scanner” system. It would be even cooler if it integrated with a six-axis robot like MyCoBot, the Igus ReBeL, or the Ufactory Lite 6 and used the rigid kinematics of the robot to assist in the tracking.

If I were to simply mount my 3D scanner onto my CNC router as-is and move the machine in a simple parallel-scan motion, then I think it’s likely that at some point I would lose tracking and it would be impossible to recover without intervening, stopping the G-code, and starting over.

The “DIY slider” driven by a cnc board is pretty much a “CNC Revopoint scanner”.

Hm, it seems like there’s no real communication between the slider and the scanner, so no positional information is available to the scanner to aid in the alignment.