How do I get distance from the SurfaceTracking API?

0 votes
asked Jan 20, 2020 by bigbaak (720 points)
edited Jan 20, 2020 by bigbaak

Hello, I have a question. How can I get the distance (Vector3, in cm or meters) between a real object and the camera? Which API provides this capability?

I want to create an application that scans a QR code and places a 3D object on it.

What I did so far:

1. Added the ZXing library and recognized the QR code from the camera frame.

2. When recognition succeeds, I add a new 3D object on the QR code.

But my script does not know how far away the QR code is. I place the object one meter in front of the camera, but if I move to another point, the object starts drifting through space. Have I done this the right way? Is there another way to place the object?

```
// pins the object 1 meter in front of the camera, regardless of where the QR code actually is
carObject.transform.position = Camera.main.transform.position + (Camera.main.transform.forward * 1f);
```

1 Answer

+1 vote
answered Jan 20, 2020 by kenn (18,790 points)
SurfaceTracking is not designed for this. You could try MotionTracking instead.

SurfaceTracking has no real-world scale; it does not keep real-world distance, and that is by design. The feature is meant to put a virtual object on a surface, or on anything in front of the camera, without ever losing it. If you want an object that stays at the same position in virtual space as it does in the real world, you should use MotionTracking.
commented Jan 20, 2020 by bigbaak (720 points)
How can I get a camera frame in MotionTracking? In SurfaceTracking there is 'VideoCameraDevice', from which I can get the frame. Could you show me?
commented Jan 20, 2020 by kenn (18,790 points)
There is no difference. ARSession.FrameUpdate will always work.
commented Jan 21, 2020 by bigbaak (720 points)
Thank you. I got it working. I found hints in the ARSession code.
```
private void getFrameAndSendToQRDecoding(InputFrame inputFrame, OutputFrame outputFrame)
{
    if (outputFrame != null)
    {
        // copy the raw camera image out of the EasyAR buffer
        Image i = inputFrame.image();
        Buffer b = i.buffer();
        byte[] bytes = new byte[b.size()];
        b.copyToByteArray(0, bytes, 0, bytes.Length);
        // use bytes here
        qrCodeReader.decodeQRCode(bytes, i.width(), i.height());
        // release the native resources when done
        b.Dispose();
        i.Dispose();
        inputFrame.Dispose();
        outputFrame.Dispose();
    }
}
```
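For anyone curious, the `decodeQRCode` call above is just my own wrapper around ZXing. A minimal version could look roughly like this (only a sketch, assuming a ZXing.Net build where BarcodeReader accepts a LuminanceSource, and that the buffer starts with the Y plane, as NV21/NV12 frames do on Android):

```
using ZXing;

// Sketch of a ZXing.Net based helper; assumes the frame buffer starts with the
// luminance (Y) plane. Other pixel formats would need different handling.
public class QRDecoder
{
    private readonly BarcodeReader reader = new BarcodeReader();

    public string decodeQRCode(byte[] frameBytes, int width, int height)
    {
        // ZXing only needs luminance, so the Y plane alone is enough for detection.
        var source = new PlanarYUVLuminanceSource(frameBytes, width, height, 0, 0, width, height, false);
        Result result = reader.Decode(source);
        return result != null ? result.Text : null;
    }
}
```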
commented Jan 22, 2020 by bigbaak (720 points)
Hello. I have a question about `InputFrame`. I get frames in YUV_NV21 pixel format on an Android device, but in BGR888 pixel format in the Unity Editor. How can I manually set a single format? I just need an RGB pixel format.
commented Jan 22, 2020 by bigbaak (720 points)
I found the class 'CameraImageMaterial', which returns a Texture2D.
commented Jan 22, 2020 by kenn (18,790 points)
edited Jan 22, 2020 by kenn
In the latest EasyAR Sense Unity Plugin, we designed three ways to get the frame image, for different purposes.

1) Get CPU data (every frame, or when the image data changes, depending on the event) using ARSession.FrameUpdate or ARSession.FrameChange
The image can be acquired from the OutputFrame with a few API calls. It comes directly from the system camera API callback if you are not using a custom camera. We do not convert the color format anywhere in this process, to get the best performance; the CPU memory reference is passed straight down the data flow.
So the image format differs per OS and cannot be changed. If you want a unified data format, color conversion is required after getting the image.
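If you do need a unified RGB format, an NV21 frame can be converted on the CPU with standard YUV math. A rough sketch (check the InputFrame pixel format first, since the layout differs per platform):

```
using UnityEngine;

public static class Nv21Converter
{
    // Converts a YUV_NV21 buffer (Y plane followed by an interleaved VU plane) to packed RGB24.
    public static byte[] ToRgb24(byte[] nv21, int width, int height)
    {
        var rgb = new byte[width * height * 3];
        int frameSize = width * height;
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int yIndex = y * width + x;
                // chroma plane: one VU pair per 2x2 pixel block
                int uvIndex = frameSize + (y >> 1) * width + (x & ~1);
                int Y = nv21[yIndex];
                int V = nv21[uvIndex] - 128;
                int U = nv21[uvIndex + 1] - 128;

                int r = (int)(Y + 1.370705f * V);
                int g = (int)(Y - 0.337633f * U - 0.698001f * V);
                int b = (int)(Y + 1.732446f * U);

                int o = yIndex * 3;
                rgb[o] = (byte)Mathf.Clamp(r, 0, 255);
                rgb[o + 1] = (byte)Mathf.Clamp(g, 0, 255);
                rgb[o + 2] = (byte)Mathf.Clamp(b, 0, 255);
            }
        }
        return rgb;
    }
}
```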

2) Get a Material (every frame) using CameraImageRenderer.OnFrameRenderUpdate
The material (along with the texture size) can be used to manually render into a RenderTexture every frame (for example, using Graphics.Blit). The texture data the shader uses is uploaded whenever the CPU data from 1) changes.
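Something like this (just a sketch; the handler parameters are an assumption based on the description above, so check the exact delegate signature in the plugin source):

```
using UnityEngine;

public class CameraImageToTexture : MonoBehaviour
{
    private RenderTexture target;

    // Wire this up to the CameraImageRenderer frame render update event.
    public void OnCameraImage(Material cameraMaterial, Vector2 imageSize)
    {
        int w = (int)imageSize.x;
        int h = (int)imageSize.y;
        if (target == null || target.width != w || target.height != h)
        {
            if (target != null) { target.Release(); }
            target = new RenderTexture(w, h, 0);
        }
        // Blit with the camera material so "target" holds the current camera image.
        Graphics.Blit(null, target, cameraMaterial);
    }
}
```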

3) Get a RenderTexture using CameraImageRenderer.RequestTargetTexture
The callback is only invoked when the texture has been recreated (for example, when the screen is rotated or resized). The RenderTexture is rendered at the timing of CameraEvent.BeforeForwardOpaque using the Material from 2).
You can reference the Coloring3D sample for this API usage.
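Whichever of 2) or 3) you use, if you eventually need the pixels on the CPU in a single RGB format, you can read the RenderTexture back with plain Unity API. A sketch (ReadPixels is relatively slow, so avoid calling it every frame if possible):

```
using UnityEngine;

public static class RenderTextureReadback
{
    // Copies a RenderTexture into a CPU-side Texture2D in RGB24 format.
    public static Texture2D ToRgbTexture(RenderTexture source)
    {
        var previous = RenderTexture.active;
        RenderTexture.active = source;
        var tex = new Texture2D(source.width, source.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, source.width, source.height), 0, 0);
        tex.Apply();
        RenderTexture.active = previous;
        // tex.GetRawTextureData() now returns packed RGB24 bytes on every platform.
        return tex;
    }
}
```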
commented Jan 22, 2020 by bigbaak (720 points)
Thank you for the answer. I solved all the problems with getting the picture from the camera and can successfully decode the QR code. I have another question: how can I change the camera focus in MotionTracking? The resulting pictures are too blurry.
commented Jan 23, 2020 by kenn (18,790 points)
Focus distance cannot be changed (for now) in MotionTrackerCameraDevice.
We are aware of the problem this brings and will continue to improve the MotionTracking experience.
Thank you for the feedback.