Object Detection Using Qt, C++, QML and OpenCV

In this post I’ll describe how to combine the power of Qt and OpenCV to develop a good looking and fun object detector. The method explained here contains quite a few things to learn and use in your current and future projects, so let’s get started.

What you will learn

If you carefully go through the instructions and code samples provided in this post, you’ll learn:

  • How to use OpenCV with Qt (How to include OpenCV libraries in a Qt Project)
  • How to use Qt C++ classes in QML code
  • How to access the camera using the QML Camera type
  • How to subclass the QAbstractVideoFilter and QVideoFilterRunnable classes to process QVideoFrame objects using OpenCV
  • How to convert QVideoFrame to QImage
  • How to convert QImage to OpenCV Mat
  • How to detect objects using cascade classifiers

and many more …

What is needed

Obviously you need to have Qt Framework and OpenCV installed on your computer. You can use the latest versions of both of them, but just as a reference, I’ll be using Qt 5.10 and OpenCV 3.3.1.

How it is done

First of all, you need to create a Qt Quick application project. This project template in Qt Creator allows you to create a QML-based project that can be extended using Qt C++ classes.

Add OpenCV and Qt Multimedia to your project

You must add the Qt Multimedia module and the OpenCV libraries to your project. Since I’m on the Windows operating system, I’m using the following lines in my project’s *.pro file to add the OpenCV libraries:

INCLUDEPATH += C:/path_to_opencv/include
LIBS += -LC:/path_to_opencv/lib

CONFIG(debug, debug|release) {
LIBS += -lopencv_core331d \
-lopencv_imgproc331d \
-lopencv_imgcodecs331d \
-lopencv_videoio331d \
-lopencv_flann331d \
-lopencv_highgui331d \
-lopencv_features2d331d \
-lopencv_photo331d \
-lopencv_video331d \
-lopencv_calib3d331d \
-lopencv_objdetect331d \
-lopencv_videostab331d \
-lopencv_shape331d \
-lopencv_stitching331d \
-lopencv_superres331d \
-lopencv_dnn331d
}

CONFIG(release, debug|release) {
LIBS += -lopencv_core331 \
-lopencv_imgproc331 \
-lopencv_imgcodecs331 \
-lopencv_videoio331 \
-lopencv_flann331 \
-lopencv_highgui331 \
-lopencv_features2d331 \
-lopencv_photo331 \
-lopencv_video331 \
-lopencv_calib3d331 \
-lopencv_objdetect331 \
-lopencv_videostab331 \
-lopencv_shape331 \
-lopencv_stitching331 \
-lopencv_superres331 \
-lopencv_dnn331
}

Needless to say, you need to replace the paths with the ones on your computer. You also need to add the OpenCV bin folder to the PATH environment variable; otherwise the OpenCV DLL files won’t be visible to your app and it will crash as soon as you start it.

As for adding Qt Multimedia module, use the following line in your *.pro file:

QT += multimedia

Processing Video Frames Grabbed by QML Camera

We’ll be using the QML Camera type later on to access and read video frames from the camera. In the Qt framework, this is done by subclassing the QAbstractVideoFilter and QVideoFilterRunnable classes, as seen here:

class QCvDetectFilter : public QAbstractVideoFilter
{
    Q_OBJECT
public:
    QVideoFilterRunnable *createFilterRunnable();

signals:
    void objectDetected(float x, float y, float w, float h);
};

class QCvDetectFilterRunnable : public QVideoFilterRunnable
{
public:
    QCvDetectFilterRunnable(QCvDetectFilter *creator) { filter = creator; }
    QVideoFrame run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags);

private:
    QCvDetectFilter *filter;
};

The implementation code for createFilterRunnable is quite easy, as seen here:

QVideoFilterRunnable* QCvDetectFilter::createFilterRunnable()
{
    return new QCvDetectFilterRunnable(this);
}

For the run function though, we need to take care of quite a few things, starting with converting the QVideoFrame to a QImage and then to an OpenCV Mat, as seen here:

QVideoFrame QCvDetectFilterRunnable::run(QVideoFrame *input, const QVideoSurfaceFormat &surfaceFormat, RunFlags flags)
{
    Q_UNUSED(flags)

    if(surfaceFormat.handleType() == QAbstractVideoBuffer::NoHandle)
    {
        // Map the frame into system memory before touching its bits
        input->map(QAbstractVideoBuffer::ReadOnly);
        QImage image(input->bits(), input->width(), input->height(),
                     QVideoFrame::imageFormatFromPixelFormat(input->pixelFormat()));
        // Deep-copy into RGB888 before unmapping the frame buffer
        image = image.convertToFormat(QImage::Format_RGB888);
        input->unmap();
        cv::Mat mat(image.height(), image.width(), CV_8UC3,
                    image.bits(), image.bytesPerLine());

        // The image processing will happen here …
    }
    else
    {
        qDebug() << "Other surface formats are not supported yet!";
    }

    return *input;
}

The image processing part, and namely the actual detection part of the code, starts with a vertical flip, to make sure the data is not reversed as it is when converting from QVideoFrame to Mat:

cv::flip(mat, mat, 0);

The cascade classifier XML (in this case a face classifier) embedded into the executable is then copied into a temporary file and loaded from there. This is because OpenCV will not be able to load the classifier directly, as it can’t access the Qt resource system by default:

QFile xml(":/faceclassifier.xml");
if(xml.open(QFile::ReadOnly | QFile::Text))
{
    QTemporaryFile temp;
    if(temp.open())
    {
        // Copy the embedded XML into a real file OpenCV can open
        temp.write(xml.readAll());
        temp.close();
        if(classifier.load(temp.fileName().toStdString()))
            qDebug() << "Successfully loaded classifier!";
        else
            qDebug() << "Could not load classifier.";
    }
    else
        qDebug() << "Can't open temp file.";
}
else
    qDebug() << "Can't open XML.";

Note that this doesn’t need to be done for each and every frame, since that would make things quite slow. Detection is a time-consuming process, so do this only for the first frame or, as it is done in our example code, only when the classifier is empty. If the classifier is not empty, proceed with the detection code:

if(classifier.empty())
{
    // Load the classifier (see the resource-to-temporary-file code above)
}
else
{
    std::vector<cv::Rect> detected;

    // Resizing is not mandatory but it can speed things up quite a lot!
    QSize resized = image.size().scaled(320, 240, Qt::KeepAspectRatio);
    cv::resize(mat, mat, cv::Size(resized.width(), resized.height()));

    classifier.detectMultiScale(mat, detected, 1.1);

    // We'll use only the first detection to make sure things are not slow on the QML side
    if(detected.size() > 0)
    {
        // Normalize x, y, w and h to values between 0..1 and send them to the UI
        emit filter->objectDetected(float(detected[0].x) / float(mat.cols),
                                    float(detected[0].y) / float(mat.rows),
                                    float(detected[0].width) / float(mat.cols),
                                    float(detected[0].height) / float(mat.rows));
    }
    else
    {
        emit filter->objectDetected(0.0, 0.0, 0.0, 0.0);
    }
}

Obviously the classifier is defined globally in our source code, like this:

cv::CascadeClassifier classifier;

Using Qt C++ classes in QML code

This is done by first registering the class using qmlRegisterType function and then importing it into our QML code. Something like this in your main.cpp file:

qmlRegisterType<QCvDetectFilter>("com.amin.classes", 1, 0, "CvDetectFilter");

Then you can use the following at the top of your QML file to import CvDetectFilter, which is now a known QML type:

import com.amin.classes 1.0

Following that, we can use the following code in our QML file to define and use CvDetectFilter, and to effectively respond to detection and missed detection:

CvDetectFilter
{
    id: testFilter
    onObjectDetected:
    {
        if((w == 0) || (h == 0)) { /* Not detected */ }
        else { /* Detected */ }
    }
}

Defining the Camera and setting the filters (in VideoOutput) is done as seen here:

Camera
{
    id: camera
}

VideoOutput
{
    id: video
    source: camera
    autoOrientation: false
    anchors.fill: parent
    filters: [testFilter]

    Image
    {
        id: smile
        source: "qrc:/smile.png"
        visible: false
    }
}

So, you can use the following code in the onObjectDetected handler of CvDetectFilter to draw the smiley on the detected face:

onObjectDetected:
{
    if((w == 0) || (h == 0))
        smile.visible = false;
    else
    {
        var r = video.mapNormalizedRectToItem(Qt.rect(x, y, w, h));
        smile.x = r.x;
        smile.y = r.y;
        smile.width = r.width;
        smile.height = r.height;
        smile.visible = true;
    }
}

Here’s how it looked when I was playing around with this example. You can get the source codes for this down below:

Where to get the source codes

You can use the following link to get the source codes for the CuteDetector project, which contains all of what is explained in this post. Just replace the include(path/opencv.pri) line with the OpenCV include lines mentioned here and you’ll be fine:

