Maximizing tracking and measurement accuracy

0 votes
asked Jul 13, 2021 by rtauziac (130 points)

Hi,

I’m using the Unity SDK with the WorldSensing MapBuilding_Sparse_Dense sample as the base for my project.

What I need to achieve is to draw a virtual fence around a place. I place the points of that fence by casting rays against the map. The device the app runs on is an Android tablet.

An issue arises when closing the shape of the fence.

There are differences in location and elevation between when I start the tracking and when I finish the fence, in a place where the ground should remain the same.

In the picture above I show how I proceeded:

  1. I started the tracking by looking at the center of the zone I wanted to draw.
  2. I drew the fence by walking around the zone following the red arrow (the fence, in white, is rendered within the app), pointing the camera towards the edges of the fence and holding the tablet at around 1.40 m above the ground, until I closed the shape.

As you can see, there are visible steps in the map mesh (I circled them in blue) even though the ground is in fact flat (they look about 5 to 10 cm high).

In the image above, I'm looking at the last corner of my fence; then I stepped back and looked at that same corner again. It jumped to a new position, away from where it should be placed.

The final purpose of the app is to take measurements within that fence. I would like to know what improvements I could make to avoid this situation: this includes changing parameters on the VIOCameraDevice component, but also recommendations on how to better scan the place, such as avoiding loops in the tracking path like the one I created.

In this example the zone is quite small, but in real conditions the zones will ultimately be 3 to 5 times bigger, and we may not want to scan them entirely. So my question is: what are the best practices to get the best accuracy possible in that scenario?

1 Answer

0 votes
answered Jul 13, 2021 by easyarguxin (960 points)

I need to know more.

1. Does VIOCameraDevice run Motion Tracking or ARCore?

2. Device name and model of your Android tablet?

3. Which interface do you call to implement the hit test?

commented Jul 15, 2021 by rtauziac (130 points)

Hi,

  1. I am using the "System VIO first" device strategy (it's using ARCore, if I understand correctly).
  2. The current device is a Samsung Galaxy Tab S6.
  3. I am just using the basic Unity raycasting interface: `Physics.RaycastAll(_mainCamera.ScreenPointToRay(screenPosition));`

Also, I'm instantiating all the objects at the root of the scene, not as children of `WorldRoot` or any other object.
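
For reference, here is roughly what the placement step looks like (a minimal sketch only; the component and prefab names are illustrative and not the actual project code):

```
using System.Linq;
using UnityEngine;

// Minimal sketch of the placement step described above
// ("FencePointPlacer" and "fencePointPrefab" are illustrative names).
public class FencePointPlacer : MonoBehaviour
{
    [SerializeField] private Camera _mainCamera;
    [SerializeField] private GameObject fencePointPrefab;

    // Called with a screen-space touch position.
    public void PlacePoint(Vector2 screenPosition)
    {
        var hits = Physics.RaycastAll(_mainCamera.ScreenPointToRay(screenPosition));
        if (hits.Length == 0) { return; }

        // Use the closest hit on the map mesh.
        var nearest = hits.OrderBy(h => h.distance).First();

        // Currently instantiated at the scene root, not under WorldRoot.
        Instantiate(fencePointPrefab, nearest.point, Quaternion.identity);
    }
}
```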

commented Jul 15, 2021 by easyarguxin (960 points)
It seems that VIO drift affects the accuracy of the mesh. You can try to call the hit test on the SparseSpatialMap, and be careful not to use the DenseSpatialMap. It seems the DenseSpatialMap is unnecessary for you.
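
Something like the following, based on the SpatialMap sample of the EasyAR Sense Unity Plugin (the exact method name `HitTestAgainstPointCloud` and its behaviour should be checked against the plugin version you use):

```
using UnityEngine;
using easyar;

// Hedged sketch of hit testing against the sparse map point cloud;
// "fencePointPrefab" is an illustrative name.
public class SparseMapHitTest : MonoBehaviour
{
    [SerializeField] private SparseSpatialMapController map;
    [SerializeField] private GameObject fencePointPrefab;

    public void PlacePoint(Vector2 screenPosition)
    {
        // The sparse map hit test expects normalized view coordinates (0..1).
        var viewPoint = new Vector2(screenPosition.x / Screen.width,
                                    screenPosition.y / Screen.height);

        foreach (var p in map.HitTestAgainstPointCloud(viewPoint))
        {
            // Points come back in the map's local space; parenting the marker
            // under the map keeps it consistent with later map corrections.
            var worldPoint = map.transform.TransformPoint(p);
            Instantiate(fencePointPrefab, worldPoint, Quaternion.identity, map.transform);
            break; // take the first returned point
        }
    }
}
```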
commented Aug 23, 2021 by ilanbps (120 points)
hello,

After several tries using the SparseSpatialMap we still have the same problems, and on different tablets (Samsung Galaxy Tab S6, Galaxy Tab Active 3). How can we avoid this shift of the points and get better accuracy?
commented Aug 23, 2021 by easyarguxin (960 points)
These devices use ARCore by default and appear to have pose drift.

Maybe you need to design a way to tolerate errors. I can't offer more ideas.
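
For example, one way to tolerate errors (just an illustration, not something the SDK provides) is to sample each corner several times and keep the average, so a single drifted pose has less influence on the stored fence point:

```
using UnityEngine;

// Illustration of a simple error-tolerance scheme: average several
// hit samples per fence corner instead of trusting a single one.
public class AveragedCornerSampler
{
    private Vector3 _sum = Vector3.zero;
    private int _count = 0;

    public void AddSample(Vector3 hitPoint)
    {
        _sum += hitPoint;
        _count++;
    }

    public bool TryGetAverage(out Vector3 corner)
    {
        corner = _count > 0 ? _sum / _count : Vector3.zero;
        return _count > 0;
    }
}
```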
commented Aug 23, 2021 by ilanbps (120 points)

"These devices use ARCore by default and appear to have pose drift." ? you want to say Arkit offer better result ?

commented Aug 23, 2021 by easyarguxin (960 points)

Yes, but ARKit is also not good enough. VIO drift affects the accuracy of the mesh.

If you just need distance measurement, it seems unnecessary to build a mesh. Building a mesh consumes more resources. You can directly use native ARCore or ARKit and call the hit test interface.
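
For example, from Unity you can reach the native ARCore/ARKit raycast through AR Foundation (a sketch; it assumes your scene already has an ARSession and an ARRaycastManager, and the prefab name is illustrative):

```
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch of calling the platform hit test via AR Foundation instead of
// raycasting against a generated mesh.
public class NativeHitTestPlacer : MonoBehaviour
{
    [SerializeField] private ARRaycastManager raycastManager;
    [SerializeField] private GameObject fencePointPrefab;

    private static readonly List<ARRaycastHit> s_hits = new List<ARRaycastHit>();

    public void PlacePoint(Vector2 screenPosition)
    {
        // Raycast against detected planes, limited to their estimated bounds.
        if (raycastManager.Raycast(screenPosition, s_hits, TrackableType.PlaneWithinPolygon))
        {
            var pose = s_hits[0].pose; // closest hit first
            Instantiate(fencePointPrefab, pose.position, pose.rotation);
        }
    }
}
```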

Welcome to EasyAR SDK Q&A, where you can ask questions and receive answers from other members of the community.
...