First, this isn’t going to be a long post showing you how to install and use Unity.
In a few words, Unity is one of the most popular game engines. It lets you develop games for PC, Android, iPhone, Wii and almost every other platform. Unity is also the most advanced game engine for VR (as opposed to Unreal Engine, for example).
Here I am just going to share some links and information to help you jumpstart your development of VR applications in Unity.
The most important step is to integrate OpenCV into Unity. Then we need to communicate with the cameras using websockets.
Finally, we have to create a calibration sequence using OpenCV and add the tracking function. All of this was done in the previous version using a gateway, but I now have to translate all the code from Python to C# (and clean it up, of course).
Simply download and install the latest free version of Unity on your PC/Mac.
Communicate with the cameras (Websocket)
The cameras communicate over your Wi-Fi network using websockets (each camera is a websocket server). To receive their 2D data, your PC or phone has to act as a websocket client, so we will implement a websocket client in Unity.
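The wire format the cameras send is firmware-specific, so here is only a sketch: assume each camera pushes one text message per frame containing the tracked blob's pixel coordinates as JSON (the `x`/`y` field names are hypothetical). I'm showing it in Python since that is what the current tracking code is written in:

```python
import json

def parse_camera_frame(message):
    """Turn one websocket text message from a camera into (x, y) pixel coords.

    Assumes a hypothetical JSON payload like {"x": 160.5, "y": 100.25};
    adapt the field names to whatever your camera firmware actually sends.
    """
    data = json.loads(message)
    return (float(data["x"]), float(data["y"]))
```

In the Unity port, the equivalent parsing would live in the websocket library's message callback, with one client instance per camera.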
Here is a websocket library for Unity under the MIT license. You can either buy the Unity asset for $15, or simply download the library, compile it with MonoDevelop and import it into Unity (as you wish). I'll share a sample project with the library included once I'm done coding 😉
Add OpenCV to Unity
OpenCV stands for Open Source Computer Vision. It is the best free computer vision library available. You can do amazing things with it, but we will limit our application to three tasks:
- Finding the camera's intrinsic parameters
- Finding the camera's extrinsic parameters
- Triangulating the tracked position
The intrinsic parameters concern the camera itself: its sensor size, aspect ratio, and lens distortion. OpenCV can help you obtain those values using a chessboard calibration sequence. I will give you the parameters I have already calculated for the CMUCam5.
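To make the intrinsics concrete, here is what they do in the pinhole camera model, in numpy. The camera matrix and distortion coefficients below are made-up illustrative values, not the actual CMUCam5 ones:

```python
import numpy as np

# Hypothetical intrinsics, in the form OpenCV's calibrateCamera returns:
# a 3x3 camera matrix K and radial distortion coefficients k1, k2.
K = np.array([[270.0,   0.0, 160.0],   # fx,  0, cx  (pixels)
              [  0.0, 270.0, 100.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
k1, k2 = -0.1, 0.01                    # example radial distortion

def project(point_cam):
    """Project a 3D point (in camera coordinates) to distorted pixel coords."""
    x, y, z = point_cam
    xn, yn = x / z, y / z                 # normalized image plane
    r2 = xn ** 2 + yn ** 2
    d = 1 + k1 * r2 + k2 * r2 ** 2        # radial distortion factor
    u = K[0, 0] * xn * d + K[0, 2]
    v = K[1, 1] * yn * d + K[1, 2]
    return u, v
```

The chessboard calibration estimates exactly these quantities (K plus the distortion coefficients) by fitting this projection model to the detected corners.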
The extrinsic parameters relate to the camera's position and orientation in the real world. Obtaining them takes a calibration sequence in which we record some 2D points with the camera and associate them with known real-world 3D positions. This calibration outputs the true camera position and orientation in our world, which we need to know for triangulation.
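In OpenCV terms, the extrinsics are the rotation R and translation t that solvePnP recovers from those 2D–3D correspondences. A tiny numpy sketch of what they mean (the camera pose here is a made-up example):

```python
import numpy as np

def world_to_camera(p_world, R, t):
    """Map a world-space 3D point into the camera's coordinate frame.

    R (3x3 rotation) and t (3-vector translation) are the extrinsics,
    as produced by OpenCV's solvePnP (after Rodrigues on the rvec).
    """
    return R @ p_world + t

# Hypothetical example: a camera sitting 2 m along the world z axis,
# aligned with the world axes (no rotation).
R = np.eye(3)
camera_position = np.array([0.0, 0.0, 2.0])
t = -R @ camera_position   # t encodes the camera position as -R @ C
```

A point sitting exactly at the camera's position maps to the origin of the camera frame, which is a quick sanity check on any recovered pose.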
The triangulation consists of combining two 2D positions (one from each camera) into a single 3D position, using the parameters we just talked about.
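As a sketch of that step, here is the standard linear (DLT) triangulation in numpy; OpenCV's triangulatePoints implements the same idea. Each camera's 3×4 projection matrix P = K·[R|t] combines its intrinsic and extrinsic parameters:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2 are the 3x4 projection matrices K @ [R|t] of each camera;
    uv1, uv2 are the matching 2D pixel observations from each camera.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each observation gives two linear constraints on the homogeneous point X.
    A = np.array([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector = homogeneous solution
    return X[:3] / X[3]        # back to Euclidean coordinates
```

With noisy real-world blobs the two rays never intersect exactly; the SVD gives the least-squares compromise, which is what we will feed into Unity as the tracked position.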
Now that you know what we are doing and why, let's add OpenCV to Unity. To do so we will use EmguCV, a cross-platform OpenCV wrapper compatible with PC, Mac OS, iPhone, Android… everything we need for our application.