pass3d recognition tool

HASH ID creation

Download pass3d release or build it yourself from the source code.
Requirements: Linux OS

Grid2d is the first recognition algorithm suggested by Michael Co.
Join the discussion here >> Make your suggestions about new algorithms to add and get rewarded through the 3DPass contribution program.

  • The input is a 3D scan/model of the object (.stl or .obj formats required).
    For example, you can download these two:
    pir1.obj and pir2.obj
  • The output is a list of the top 10 hashes inherent to the object shape
      pass3d --algo <ALGO> --grid <GRID> --infile <FILE> --sect <SECT>
     -a, --algo         3D hash algorithm. Supported algorithms: Grid2d
     -g, --grid         Number of cells in the Grid2d algorithm
     -i, --infile       The path to the file to read
     -s, --sect         Number of cross-sections in the Grid2d algorithm

The object shape is considered recognized if there is at least one hash-value match between two different processing results. We have to process two or more different 3D scans of the same object and compare the top 10 results, using exactly the same parameters every time. It's recommended to use the same equipment as well.
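The matching step itself is just a set intersection over the two top-10 lists. A minimal sketch in plain Python (not part of pass3d; `is_recognized` is a hypothetical helper name) of that decision rule:

```python
def is_recognized(top10_a, top10_b):
    """Recognized if at least one hash value appears in both
    top-10 lists produced by two independent scans."""
    matches = sorted(set(top10_a) & set(top10_b))
    return len(matches) > 0, matches

# Truncated top-10 lists; the three hashes are the ones shared by
# the two example runs shown below.
scan1 = [
    "aa4019c8c160da9d2af69edc19589aabd925bc696966b967f92b71947f75f8f0",
    "dd227121b91adcb5beabb0be9412613ebdfde8c5660301eb17583fa644b8793d",
    "543e1c3929ea810f4e8c7cfc27f0b60df21a9374089f2278617dae327e32b034",
]
scan2 = list(scan1)  # in this example the second scan shares all three

ok, matches = is_recognized(scan1, scan2)
print(ok, len(matches))  # True 3
```

A single shared hash is already enough for `ok` to be true; the full top-10 lists only need to overlap, not coincide.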

For example, we have two different 3D scans pir1.obj and pir2.obj of the same real physical object. In order to run the processing and create hashes from the first scan we have to run a command like this:

./pass3d -i pir1.obj -a grid2d -g 8 -s 68      

The output will be like this:

~/Desktop/3dpass$ ./pass3d -i pir1.obj -a grid2d -g 8 -s 68
Select top 10 hashes
* --> "aa4019c8c160da9d2af69edc19589aabd925bc696966b967f92b71947f75f8f0"
** --> "dd227121b91adcb5beabb0be9412613ebdfde8c5660301eb17583fa644b8793d"
*** --> "543e1c3929ea810f4e8c7cfc27f0b60df21a9374089f2278617dae327e32b034"         

The second scan processing outcome gives us this:

~/Desktop/3dpass$ ./pass3d -i pir2.obj -a grid2d -g 8 -s 68
Select top 10 hashes
* --> "aa4019c8c160da9d2af69edc19589aabd925bc696966b967f92b71947f75f8f0"
** --> "dd227121b91adcb5beabb0be9412613ebdfde8c5660301eb17583fa644b8793d"
*** --> "543e1c3929ea810f4e8c7cfc27f0b60df21a9374089f2278617dae327e32b034"

Within the two processing results above, three of the top 10 hash values match (they are marked as *, **, ***). So, we have the object recognized.

If we had three or more different 3D scans of the object processed, we could pick the most stable Hash ID: the one that definitely exists among the top 10 hashes of every 3D scan we have. The more scans you process, the more likely you are to find the best stable Hash ID, but in practice 3-5 scans are enough to choose from. Sometimes you have no choice, because only one hash matches. If your 3D scans are of good quality and you assume the next ones will be similar, you can pick all of the matched hashes. In our example it would be a combination of these three hashes:

* --> "aa4019c8c160da9d2af69edc19589aabd925bc696966b967f92b71947f75f8f0"
** --> "dd227121b91adcb5beabb0be9412613ebdfde8c5660301eb17583fa644b8793d"
*** --> "543e1c3929ea810f4e8c7cfc27f0b60df21a9374089f2278617dae327e32b034"
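Selecting the most stable Hash ID across several scans is a frequency count over the top-10 lists. A sketch in plain Python (hypothetical helper, not pass3d code) under the assumption that each scan yields one top-10 list:

```python
from collections import Counter

def most_stable_hashes(top10_lists):
    """Count in how many scans' top-10 lists each hash appears.
    The stable Hash IDs are those present in every list."""
    counts = Counter(h for top10 in top10_lists for h in set(top10))
    n = len(top10_lists)
    stable = [h for h, c in counts.items() if c == n]
    return stable, counts

# Illustrative placeholder hashes for three scans of one object
scans = [
    ["h1", "h2", "h3"],
    ["h1", "h2", "h4"],
    ["h1", "h5", "h6"],
]
stable, counts = most_stable_hashes(scans)
print(stable)  # ['h1'] -- the only hash present in all three scans
```

Here "h2" appears in two scans out of three and would be a weaker candidate; only a hash present in every list qualifies as the most stable Hash ID.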

Parameters adjustment

There are two key parameters we need to adjust in order to create the best possible Hash ID, depending on the 3D scan quality.

-g, --grid         Number of cells in Grid2d
-s, --sect         Number of cross-sections in Grid2d
Number of cells parameter -g:

Grid (-g) is the parameter that helps us adjust the recognition algorithm to the quality of a particular 3D scan. The higher the scan quality, the higher the number of cells in a row we can set for processing. In the Grid2d algorithm, increasing the number of cells makes us follow a 3D scan's cross-section contour closer to the actual curve, which means we can recognize the object shape more precisely. But, at the same time, we leave less room for error in future scans. It's all about the balance between the accuracy of shape recognition and the ability to get a stable Hash ID.

Low-definition scanners, especially smartphone apps, give us a lot of error between two random scans of the same object, while high-definition professional ones may produce no more than about a 3-micrometer error. So, it is recommended to take several 3D scans with the same equipment and then set the number of cells as high as possible, provided it still yields successful recognition results. That is going to be the best setup. It may take a few attempts to find the optimal (-g) value for the scan quality.

grid 6x6

Parameter -g=6 (6x6 grid) example

grid 20x20

Parameter -g=20 (20x20 grid) example


Notice that we should set the number-of-cells parameter (-g) according to the lowest quality of 3D scans we expect to process in the future. If we set the (-g) value to suit HD scanners (-g=20 or higher) but such scans never arrive, we'll never reach recognition success. -g=6 is recommended for low-quality scans.

We should use exactly the same set of parameters for the same object while processing. Otherwise, we won’t succeed in recognition.
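That trial-and-error tuning of -g can be framed as a simple search: try grid values from high to low on two scans of the object and keep the highest one whose top-10 lists still intersect. A sketch, where `run_pass3d` is a hypothetical stand-in for invoking `./pass3d -i <scan> -a grid2d -g <g> -s <sect>` and parsing its hash output:

```python
def best_grid(run_pass3d, scan_a, scan_b, grid_values):
    """Return the highest -g value at which the two scans' top-10
    hash lists still share at least one hash, or None if none do."""
    for g in sorted(grid_values, reverse=True):
        if set(run_pass3d(scan_a, g)) & set(run_pass3d(scan_b, g)):
            return g
    return None

# Fake runner for illustration only: pretends the scans only
# agree up to a grid of 8 cells.
fake = lambda scan, g: ["shared"] if g <= 8 else [scan + str(g)]
print(best_grid(fake, "pir1.obj", "pir2.obj", [6, 8, 12, 20]))  # 8
```

With real pass3d output, `run_pass3d` would capture the ten hash lines of each run; everything else stays the same.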

Number of cross-sections parameter -s:

The more cross-sections we set, the more hash strength we get. Each cross-section represents a unique contour, which is, basically, the unique seed data the future hash is created from. The more unique distinctions we capture from the object shape, the higher the hash strength. For example, if we set up just one cross-section (-s=1), we leverage only one contour of the object, which is a really small amount of unique data and definitely not enough to describe the entire object shape. It's like trying to describe the whole apple shape from just one slice of it. So, if you're interested in recognizing the entire object rather than a few slices of it, it's recommended to set at least 100 cross-sections (-s=100).
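The "more slices, more seed data" idea can be illustrated with a toy digest. This is not the actual Grid2d math, just a conceptual sketch: each cross-section contour contributes bytes to the seed, so adding a section necessarily changes (and enriches) the resulting hash.

```python
import hashlib

def seed_digest(contours):
    """Toy illustration only (real Grid2d derives its hashes
    differently): concatenate every cross-section contour's point
    data and hash it. Each extra section adds unique seed bytes."""
    seed = b"".join(repr(c).encode() for c in contours)
    return hashlib.sha256(seed).hexdigest()

# Two made-up contours, as lists of 2D points on a cutting plane
slice1 = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
slice2 = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]

d1 = seed_digest([slice1])           # one contour: little seed data
d2 = seed_digest([slice1, slice2])   # two contours: strictly more seed data
print(d1 != d2)  # True
```

With -s=100 the seed draws on a hundred such contours, which is why the resulting Hash ID describes the whole shape rather than a single slice.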

Parameter -s=3 example:

Overall recommendations

  • We should use exactly the same set of parameters for the same object while processing. Otherwise, we will not succeed in recognition;
  • It's recommended to set the grid parameter (-g) value according to the lowest scan definition we expect to process in the future. Values such as -g=6 or -g=7 (6x6 and 7x7 grids) are recommended for smartphones and tablets;
  • It's recommended to set the number of cross-sections to at least 100 (-s=100) in order to leverage the entire object shape instead of just a few slices.