This video shows the completion of work to split the tracking code into 3 threads – video capture, fast analysis and long analysis.
If the LEDs projected from an object's predicted pose don't line up with the blobs we see in the frame, the frame is sent off for more expensive analysis in another thread. That way it doesn't block tracking of other objects, and the fast analysis thread can continue with the next frame.
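To illustrate the hand-off, here's a minimal sketch (not the actual code from the branch; names like `frame_queue`, `long_analysis_thread` and `do_full_search` are hypothetical) of the fast-analysis thread pushing a frame that failed the pose check onto a small queue and moving on, while the long-analysis thread blocks waiting for work:

```c
#include <pthread.h>
#include <stdbool.h>

#define QUEUE_LEN 4

typedef struct { int frame_id; /* + image data, blob list, ... */ } frame_t;

typedef struct {
    frame_t items[QUEUE_LEN];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty;
} frame_queue;

/* Called from the fast-analysis thread. Returns false (and drops the
 * frame) if the long-analysis thread is already backed up, so the fast
 * path is never blocked. */
static bool queue_push(frame_queue *q, const frame_t *f)
{
    bool pushed = false;
    pthread_mutex_lock(&q->lock);
    if (q->count < QUEUE_LEN) {
        q->items[q->tail] = *f;
        q->tail = (q->tail + 1) % QUEUE_LEN;
        q->count++;
        pushed = true;
        pthread_cond_signal(&q->not_empty);
    }
    pthread_mutex_unlock(&q->lock);
    return pushed;
}

/* Long-analysis thread: block until a frame arrives, then run the
 * expensive correspondence search on it. */
static void *long_analysis_thread(void *arg)
{
    frame_queue *q = arg;
    for (;;) {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
            pthread_cond_wait(&q->not_empty, &q->lock);
        frame_t f = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_LEN;
        q->count--;
        pthread_mutex_unlock(&q->lock);

        /* do_full_search(&f); - the expensive LED correspondence search */
        (void)f;
    }
    return NULL;
}
```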
When a new blob is detected in a video frame, it is assigned an ID and tracked between frames using motion flow. When the analysis results become available some time later, the ID lets us find the blobs that still exist in the most recent video frame. If those blobs are still unknown in the new frame, the code labels them with the LED IDs it found, and then hopefully in the next frame the fast analysis locks onto the object again.
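A rough sketch of how the results get applied back (hypothetical structures, not the real OpenHMD ones): each blob carries the ID it was given when first detected, and when the long analysis reports a blob-ID-to-LED-ID match, the blob only inherits the label if it survived into the current frame and hasn't already been labelled in the meantime.

```c
#include <stdint.h>

#define NO_LED -1

typedef struct {
    uint32_t blob_id;   /* assigned when the blob first appears */
    int led_id;         /* NO_LED until a correspondence is found */
    float x, y;         /* centre, carried frame-to-frame by motion flow */
} blob_t;

typedef struct {
    uint32_t blob_id;
    int led_id;
} analysis_result_t;

static void apply_analysis_results(blob_t *blobs, int n_blobs,
                                   const analysis_result_t *results,
                                   int n_results)
{
    for (int r = 0; r < n_results; r++) {
        for (int b = 0; b < n_blobs; b++) {
            /* Only label blobs that still exist and are still unknown -
             * the fast analysis may have locked on again already. */
            if (blobs[b].blob_id == results[r].blob_id &&
                blobs[b].led_id == NO_LED) {
                blobs[b].led_id = results[r].led_id;
                break;
            }
        }
    }
}
```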
There are some obvious next things to work on:
- It’s hard to decide what constitutes a ‘good’ pose match, especially around partially visible LEDs at the edges. More experimentation and refinement are needed.
- The IMU dead-reckoning between frames is poor. The accelerometer bias, especially on the controllers, tends to make them zoom off very quickly and lose tracking. More filtering, bias extraction and investigation should improve that, and help with staying locked onto fast-moving objects.
- The code that decides whether an LED is expected to be visible in a given pose could use some improvement.
- Often the orientation of a device is good but the position is wrong; a matching mode that searches only for translational matches could help.
- Taking the gravity vector of the device into account can help reject invalid poses, as could tests against plausible locations based on the limits of human movement. A rough sketch of the gravity check is below.
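As a hedged sketch of that gravity check (not code from the branch, and the axis convention of world "down" being (0, -1, 0) is an assumption): rotate the IMU's measured gravity vector into world space using the candidate pose orientation, and reject the pose if the result points too far away from straight down.

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } vec3f;
typedef struct { float x, y, z, w; } quatf;

/* Rotate v by quaternion q: v' = 2(u.v)u + (s^2 - u.u)v + 2s(u x v) */
static vec3f quat_rotate(quatf q, vec3f v)
{
    vec3f u = { q.x, q.y, q.z };
    float s = q.w;
    float uv = u.x*v.x + u.y*v.y + u.z*v.z;
    float uu = u.x*u.x + u.y*u.y + u.z*u.z;
    vec3f uxv = { u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x };
    vec3f out = {
        2.0f*uv*u.x + (s*s - uu)*v.x + 2.0f*s*uxv.x,
        2.0f*uv*u.y + (s*s - uu)*v.y + 2.0f*s*uxv.y,
        2.0f*uv*u.z + (s*s - uu)*v.z + 2.0f*s*uxv.z,
    };
    return out;
}

/* Accept the candidate orientation only if the measured gravity vector,
 * rotated into world space, lies within max_angle_deg of world down. */
static bool pose_agrees_with_gravity(quatf candidate_orient,
                                     vec3f measured_gravity,
                                     float max_angle_deg)
{
    vec3f g = quat_rotate(candidate_orient, measured_gravity);
    float len = sqrtf(g.x*g.x + g.y*g.y + g.z*g.z);
    if (len < 1e-6f)
        return false;
    float cos_angle = -g.y / len;   /* dot product with (0, -1, 0) */
    return cos_angle >= cosf(max_angle_deg * (float)M_PI / 180.0f);
}
```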
Code is at https://github.com/thaytan/OpenHMD/tree/rift-correspondence-search