This is a post that many people have requested recently. I’m going to describe how you can use OpenCV in Unity. Of course, we’ll be using the official OpenCV libraries and not any assets or existing plugins. For those of you who aren’t familiar with the subject, Unity is a very popular game engine that makes it easy to build games, apps and so-called experiences. Unity supports a modified form of JavaScript as well as C# for its scripting. In this example project I’ll use C#, since that’s the language I’m familiar with, but it shouldn’t be hard to adapt this to JavaScript (you’ll have to do that yourself if needed, sorry). So let’s start.
Here’s a screenshot of the project shared in this post. Hopefully, if you follow all the steps correctly, you’ll have it running on your PC and mobile device.
First and foremost, as you might already know, OpenCV (or at least standard OpenCV) uses C++ and not C#, so we’ll need to somehow call OpenCV’s C++ functions from C#. Think about it like this: if you already know how to use OpenCV, then you can create a very simple library file (.dll, .so, .a and so on) in C++ which uses OpenCV the normal way. Then you export those functions from the library and call them from C#. But before that, we need to define a struct that we’ll use to receive images and video frames from the Unity C# code.
// Must match the memory layout of Unity's Color32 type: 4 bytes, in R, G, B, A order
struct Color32
{
    unsigned char r;
    unsigned char g;
    unsigned char b;
    unsigned char a;
};
And here’s how we need to declare our exported function in the library’s header file:
extern "C"
{
__declspec(dllexport) void processImage(Color32* raw, int width, int height);
}
Note that this is the case for Windows. If you use a cross-platform tool like Qt Creator, you will have something like CVTESTSHARED_EXPORT instead of “__declspec(dllexport)”, a macro which automatically adapts to Linux, macOS, iOS and so on.
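If you are not using Qt Creator and want to write such a macro yourself, a minimal sketch could look like the following. (The CVTEST_EXPORT name is only illustrative and is not something the original project defines; Qt Creator generates its own macro based on the project name.)

// Minimal sketch of a cross-platform export macro; CVTEST_EXPORT is an illustrative name
#if defined(_WIN32)
#  define CVTEST_EXPORT __declspec(dllexport)
#else
#  define CVTEST_EXPORT __attribute__((visibility("default")))
#endif

extern "C"
{
    // Color32 is the struct defined earlier in this post
    CVTEST_EXPORT void processImage(Color32* raw, int width, int height);
}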
And in the source file we’ll have the following, which is the actual function and the image processing part of the project:
extern "C"
{
void processImage(Color32* raw, int width, int height)
{
using namespace cv;
using namespace std;
Mat frame(height, width, CV_8UC4, raw);
// Process frame here ...
// Try imshow("frame", frame); for example ...
}
}
Notice how we’re creating an OpenCV Mat using an array of the Color32 struct we created. It’s basically a 32-bit image with R, G, B and alpha channels, because that’s how Unity will send over images and video frames:
Mat frame(height, width, CV_8UC4, raw);
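To make the “Process frame here” part a little more concrete, here is a minimal sketch of one thing you could do at that point. This is only my example and not something the downloadable project does: it converts the frame to grayscale and writes the result back into the very same buffer that Unity passed in, so the processed pixels are visible on the C# side without returning anything.

#include <opencv2/opencv.hpp>

extern "C"
{
    void processImage(Color32* raw, int width, int height)
    {
        // Wrap Unity's pixel buffer (no copy is made)
        cv::Mat frame(height, width, CV_8UC4, raw);

        // Example processing: convert the RGBA frame to grayscale ...
        cv::Mat gray;
        cv::cvtColor(frame, gray, cv::COLOR_RGBA2GRAY);

        // ... and back to four channels. Since frame already has the right size and
        // type, cvtColor writes straight into the buffer owned by Unity, so the
        // grayscale result is what Unity sees after the call returns.
        cv::cvtColor(gray, frame, cv::COLOR_GRAY2RGBA);
    }
}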
Now build your library and get your library file. (That would be a .dll for Windows, a .so for Android and so on.)
You can also grab the source code for the library from here. (It needs OpenCV and was created using Qt Creator, but it doesn’t use any Qt libraries. Let me know if you face any difficulties building/modifying the library to fit your needs.)
Next, for the Unity part of the project, we’re going to use the WebCamTexture class to easily access the camera on almost any platform. (Note that from here on the code is in C#, not C++, though the two look quite similar.)
public WebCamTexture webcam;
Now the most important question is “How do we pass images from Unity to OpenCV?” Well, WebCamTexture provides a very simple method for getting the raw image data, and similar functions exist for single images, videos and so on. As you saw above, we’ll use that raw data to create a Mat in OpenCV. Here’s how we’re going to get frames from the camera and pass them to our C++ code and library:
if (webcam.isPlaying)
{
    // Grab the current frame as a flat Color32 array
    Color32[] rawImg = webcam.GetPixels32();

    // Reverse the pixel array before handing it to native code (Unity's pixel rows start at the bottom)
    System.Array.Reverse(rawImg);

    // Pass the frame over to the native OpenCV library
    processImage(rawImg, webcam.width, webcam.height);
}
Of course, so far we’re only calling the function. We still need to actually import it from our library in the C# code in Unity; here’s how:
[DllImport("CVTest", EntryPoint = "processImage")]
public static extern void processImage(Color32[] raw, int width, int height);
If you get a compiler error complaining that it doesn’t know what DllImport is, you should add this to the top of your C# code:
using System.Runtime.InteropServices;
You can find the whole Unity project, along with all the source code mentioned here, from this link. Here are a few final (and very important) notes about using OpenCV in Unity:
1. You need to copy your C++ library to the Plugins folder in your Unity project.
2. You need to deploy the OpenCV libraries used by your project along with the library itself.
You might want to read about Plugins in Unity, and make sure to comment below if you face any issues with the source code or have any questions.
Would you happen to know where cascade xml files go with these plugins? They don’t seem to be found by the plugin when they’re in the scripting folder, the plugins folder, or the root.
Cascade classifier files are just assets. Add them to your Unity assets; they have nothing to do with the plugins.
I currently have the OpenCV for Unity asset from the Unity Asset Store; however, this asset is a copy of OpenCV for Java and doesn’t support using GPU resources. Would implementing your suggested approach of loading the OpenCV DLLs into Unity also allow the traditional GPU resources to be leveraged?
Jevon,
That’s a very good question, but I don’t have the answer for it, to be honest.
The only way to know for sure is to try; I’d say there’s a high chance that it will, especially if you try with a more recent version of OpenCV.
Feel free to share the results if you try.
Hi Amin
Have you figured out how to pass pixels between Unity and Android? Using the same code that you provided, I tried to convert the image to grayscale and back to RGBA (since we are passing data in RGBA format), and it worked. It also worked when I tried to detect edges using Canny edge detection.
But while trying out color detection using inRange, the app crashes. Even when it does start sometimes, it doesn’t work. I created 2 raw images that show the camera texture input and output. The input works fine, but when inRange is used, the output doesn’t. I haven’t completely figured out how to pass pixels from Unity to Android. Also, could the crashing and everything be happening due to bad memory management?
Have you checked out these posts?
https://amin-ahmadi.com/2019/06/03/how-to-use-opencv-in-unity-for-android/
https://amin-ahmadi.com/2019/06/01/how-to-pass-images-between-opencv-and-unity/
Heya,
Just a small question: when you say deploy the OpenCV libraries and the C++ library, you mean the .so Qt made using your code plus all the libraries we linked to when compiling it, right? I made a .so library for Android but Unity keeps spitting back that it can’t find the correct dll for it. Any ideas?
Please read the final notes carefully:
You must deploy the library that you created, plus any third party libraries used in your library, in this case OpenCV. Does that make sense now?!
Hi Amin, thank you for creating this tutorial. I hope you don’t mind my reviving an older thread, but I would appreciate help with errors I’m running into.
The OpenCV Mat construction seems to be failing, because when I call cvtColor on it, I get an assertion from the rows being negative. For reference, I’m linking OpenCV4Android to a native C++ .so library that I’m calling from an Android build. I’ve followed the format of your example; do you have any ideas what the source of the error might be? Thanks.
Check whether that is the very first line of code that does anything with OpenCV. If that’s the case, then you are not deploying the OpenCV dependency libraries (the runtime libraries, of course).
Hello, I’m a beginner with OpenCV, C++, C# and Unity.
I ran into an EntryPointNotFoundException. I have tried many things over 2 days but I cannot solve it. Could you please tell me how I can solve this problem?
Do you have any suggestions for a beginner like me on how to learn about this?
I have to work on it for my research and I have to create a program within 3 months.
Thank you for your tutorial and thank you for your answer.
Have you built the library and included it in your project?
Check the following in my post:
1. You need to copy your C++ library to Plugins folder in Unity project.
2. You need to deploy OpenCV libraries used by your project along with the library itself.
Here is where you can read about plugins in Unity:
https://docs.unity3d.com/Manual/Plugins.html
I found that I made a mistake.
I built the library in debug mode, which is why I got the EntryPointNotFoundException.
Thank you so much for your reply and sorry for my late reply.
Glad to hear you figured it out 🙂
The OpenCV SDK is imported into Unity on my system; can I now run your project on my system?
Unfortunately, just having OpenCV for Unity is not enough. As described in the tutorial, you must build a library using OpenCV that does what you want, and use that. The simple idea here is that Unity uses your library, which in turn uses OpenCV. Hope that helps.
Well, I certainly did get that I need .dll files, but where am I going to get them? Do I need to install OpenCV? Download the source and then build the .dll myself?
Exactly. I’ve not provided any prebuilt files in the post. It’s all source code that you need to build yourself.
It’s not clear. You just took the frame and did not process it or return it.
You should read about passing variables by reference to functions, passing arrays to functions and passing pointers to functions in C++.
That is what is happening in the provided example, so there is no need to return anything.
Please tell me, how did you return the Mat into Unity again? Please help.
Hello, thank you for this tutorial. I need to pass the frame (after applying Canny edge detection to it) back to Unity and I haven’t found a solution that works. I appreciate any tip you might have (I tried asking on Stack Overflow but my question got flagged).
This is my C++ function (this is my first time using C++ and OpenCV):
extern "C" {
void ProcessFrame(unsigned char* data, int width, int height)
{
cv::Mat resizedMat(height, width, imgOriginal.type());
cv::resize(imgOriginal, resizedMat, resizedMat.size(), cv::INTER_CUBIC);
cv::Mat imgGrayscale; // grayscale of input image
cv::Mat imgBlurred; // intermediate blured image
cv::Mat imgCanny; // Canny edge image
cv::flip(resizedMat, resizedMat, 0);
cv::cvtColor(resizedMat, imgGrayscale, CV_BGR2GRAY); // convert to grayscale
cv::GaussianBlur(imgGrayscale, // input image
imgBlurred, // output image
cv::Size(15, 15), // smoothing window width and height in pixels
1.5); // sigma value, determines how much the image will be blurred
cv::Canny(imgBlurred, // input image
imgCanny, // output image
50, // low threshold
100);
return imgCanny.data;
}
}
Hi Safa,
Did you check the comments made by other users in this post? The one by Juan José, and my answer to it, are especially related to your question.
Let me know if this helps.
Hello,
First of all, this has been a lot of help for my uni project. I am working with computer vision and have to make an app that uses OpenCV. I just have one question, since I am rather new to C++ and Unity and have spent some time trying to figure this out.
In the first comment you explain how to pass a Mat back as Color32 so we can send it back to Unity. I understand passing the Mat data to Color32 for the most part; the actual passing back to Unity is what I am unsure about. I would be very thankful if you could help a girl out, since none of my classmates/colleagues have ever worked with Unity in this manner.
Thanks 😀
Well, in this example the image is not really passed back; rather, it is processed to get a result, which can be anything depending on the processing. If you need to pass it back though, you can use the same (or a similar) method: pass a pointer to the OpenCV side and modify the data it points to, so that you get the result on the Unity side. You need to be careful with memory handling in such code. Good luck.
Thanks for the great tutorial. I tried this approach to make a simple imshow() function, but the result is completely black. I did exactly as per your tutorial…
I would suggest you try the source code which I have put in the post. Download it and see if it works, then we can look into the issue. Or be more specific about the issue you have faced.
Thanks for the reply @Amin,
Yes, that code worked well. The reason I got a completely black image is that I was calling my dll function inside Unity’s Start(){} function, which only executes once. I found that in my case the first image handed to the dll is usually a black one, so I created another function inside the dll that checks (by looking for feature points) whether a real image has been acquired before processing. Now that is perfectly resolved. Thank you very much for this tutorial.
And I’d like to ask another question. Currently I’m trying to write an augmented reality application by processing the image inside a dll while rendering the WebCamTexture frames to a game object at the same time. When I implemented it, it was too slow. Is there any way to feed the WebCamTexture onto a game object without a significant slowdown for such an application?
The way you are doing it should be the fastest given the logic provided in this tutorial and the way OpenCV and Unity interact. I would say first check whether the processing and displaying code is performant or not. Try benchmarking the image processing code separately from Unity; maybe your image processing is what makes the whole thing slow. Try to narrow down the issue first. Let me know if this helps.
Thanks for the reply again @Amin,
I’m actually using the dll to compute the camera pose using feature points. When I run that code in my IDE, it runs at 20 FPS, which is faster. And I’m using the resulting transformation to move the camera in Unity space.
Hi, thank you a lot for your work.
I am having trouble using your solution; Unity crashes when I run it in play mode.
I have this error in the crash report:
Unity Editor [version: Unity 5.3.0f4_2524e04062b4]
opencv_highgui2413d.dll caused an Access Violation (0xc0000005)
in module opencv_highgui2413d.dll at 001b:34f77648.
Error occurred at 2017-11-15_124706.
C:\Program Files\Unity\Editor\Unity.exe, run by Nathan.
74% memory in use.
3326 MB physical memory [845 MB free].
0 MB paging file [2942 MB free].
2048 MB user address space [857 MB free].
Read from location 00220065 caused an access violation.
If anyone has a solution, I will be pleased to have it.
Thank you
Please try to run it as executable instead of play mode.
Let me know if this helps.
Hi,
Actually, I was using some functions implemented within the same cpp file as the final function that I exported. When I rewrote it to use only OpenCV functions inside the function I export, it worked. I think that I should export every function that I write, even if it is just an intermediate one. It was the first time I built my own dll, so I am not sure how it really works. Anyway, it is running now. Again, thank you for making this open.
No, thank you for sharing what you experienced 🙂
I am kind of confused; I downloaded the example from the link you put at the end and nothing works.
Lots of scripts are missing.
Make sure you pay attention to the final steps mentioned in the tutorial. For instance, you need to have the plugin built and put in the right place.
Hi Mr. Ahmadi,
Thank you very much for this useful tutorial,
I want to use the frame exported from OpenCV as the background of the Unity scene on the Android platform. In other words, I want to take a frame from Unity, modify it in OpenCV and then use it in an augmented reality application in the Unity engine.
I guess if I pass the frame via a pointer, all of the OpenCV changes will be reflected in the output background, but I need some more guidance on it.
Do you think there is a better way to achieve a better result?
Hi Mohammad,
There’s really nothing else worth mentioning, except that you need to be careful with memory management, especially when casting data types to each other for use in OpenCV or Unity.
Hi, thanks very much for the tutorial, it is very useful.
Could you tell me how to convert a Mat image to the Color32 struct so I can send an image from OpenCV to Unity?
The answer to your question is actually in the post. I’ll explain it a little bit more.
We have the following:
Mat frame(height, width, CV_8UC4, raw);
This is how we created a Mat using the Color32 struct.
One way of passing any other Mat back as Color32 is to make sure that 1. your Mat has the type CV_8UC4, and 2. the Mat’s data is passed over to Unity again, as described above.
Note that you need to be careful about memory management here, so I would really suggest (if possible) just sending an empty Color32 array (with the required width and height) from Unity to OpenCV and then filling it in your OpenCV code.
Since you have access to its pointer, the operation is not going to be too slow anyway.
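To illustrate what that could look like on the C++ side, here is a minimal sketch. The processAndCopy name, its parameters and the blur step are my own assumptions for the sake of the example and are not part of the downloadable project; the idea is simply that Unity allocates an empty output Color32 array of the same size and the native code fills it.

#include <cstring>
#include <opencv2/opencv.hpp>

extern "C"
{
    // Illustrative sketch only: "input" is the frame coming from Unity, "output" is
    // an empty Color32[] allocated on the Unity side with the same width and height.
    // Color32 is the struct defined in the post.
    void processAndCopy(Color32* input, Color32* output, int width, int height)
    {
        cv::Mat frame(height, width, CV_8UC4, input);

        // Any processing that ends with a CV_8UC4 Mat of the same size will do;
        // here we simply blur the frame as a placeholder.
        cv::Mat result;
        cv::GaussianBlur(frame, result, cv::Size(9, 9), 2.0);

        // Copy the processed pixels into the buffer Unity gave us (4 bytes per pixel)
        std::memcpy(output, result.data, result.total() * result.elemSize());
    }
}

On the Unity side you would declare this with DllImport just like processImage, pass a second Color32[] for the output, and then push that array into a Texture2D with SetPixels32 to display the result.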