I’m using the Unity SDK with the WorldSensing MapBuilding_Sparse_Dense sample as the base for my project.
What I need to achieve is drawing a virtual fence around an area. I place the points of that fence by casting rays against the map. The app runs on an Android tablet.
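In case it helps, the placement logic is essentially the following (a simplified sketch, not my exact code: the `FencePointPlacer` name, the `fencePointPrefab` field, and raycasting against a `MeshCollider` generated from the dense map are illustrative assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified sketch: place a fence corner where a ray from the
// screen tap hits the reconstructed map mesh. Assumes the dense
// mesh carries a MeshCollider so Physics.Raycast can hit it.
public class FencePointPlacer : MonoBehaviour
{
    public Camera arCamera;              // the tracked AR camera
    public GameObject fencePointPrefab;  // visual marker for a corner
    public List<Vector3> fencePoints = new List<Vector3>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        Ray ray = arCamera.ScreenPointToRay(touch.position);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            // Store the world-space hit point and spawn a marker there;
            // the white fence is rendered between consecutive points.
            fencePoints.Add(hit.point);
            Instantiate(fencePointPrefab, hit.point, Quaternion.identity);
        }
    }
}
```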
An issue arises when closing the shape of the fence.
There are differences in position and elevation between where I start tracking and where I finish the fence, even though the ground there should remain the same.
The picture above shows how I proceeded:
- I started tracking while looking at the center of the zone I wanted to draw.
- I drew the fence by walking around the zone following the red arrow (the fence, in white, is rendered within the app), pointing the camera at the edges of the fence and holding the tablet about 1.40 m above the ground, until I closed the shape.
As you can see, visible steps appear in the map mesh (I circled them in blue) even though the ground is in fact flat; they look about 5 to 10 cm high.
In the image above, I was looking at the last corner of my fence; I then stepped back and looked at that same corner again. It had jumped to a new position, away from where it should have been placed.
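To put a number on that jump, I compare where the closing corner was first placed with where a fresh raycast puts it after walking the loop (an illustrative helper, not part of the sample; the two positions are assumed to come from my own stored fence points):

```csharp
using UnityEngine;

// Illustrative drift check: report how far the re-observed closing
// corner has moved from its original placement. Horizontal and
// vertical errors are split, since the elevation steps and the
// position jump may have different causes.
public static class LoopClosureCheck
{
    public static float ClosureError(Vector3 firstPlacement, Vector3 reobserved)
    {
        Vector3 delta = reobserved - firstPlacement;
        float horizontal = new Vector2(delta.x, delta.z).magnitude;
        float vertical = Mathf.Abs(delta.y);
        Debug.Log($"closure error: {horizontal:F3} m horizontal, {vertical:F3} m vertical");
        return delta.magnitude;
    }
}
```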
The final purpose of the app is to take measurements within that fence. I would like to know what improvements I should make to avoid this situation. This includes changing parameters on the VIOCameraDevice component, but also recommendations for scanning the place better, such as avoiding loops in the tracking path like the one I made.
In this example the zone is quite small, but in real conditions the zones will be 3 to 5 times bigger, and we may not want to scan them entirely. So my question is: what are the best practices to get the best possible accuracy in this scenario?