Ranarivelo Andre edited this page Jun 14, 2015 · 1 revision

1. Presentation of OpenCV

"OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code." OpenCV Official Website

2. OpenCV Workflow in our project

  • Grab the frame delivered by the drone's embedded camera
  • Process this frame through OpenCV
  • Add UI indications on the frame if needed (highlight the tracked object, number of objects, ...)
  • Display the modified frame on the user's phone screen
  • Control the drone automatically according to the OpenCV feature selected by the user (color detection, object tracking, ...)

3. Grab the Drone Camera's Frames

The OpenCV SDK for Android is designed to work seamlessly with the embedded camera of the Android device, and it provides a class dedicated to camera events (frame change, camera start or stop, ...). However, that class cannot be linked to a video stream that is remote to the phone.

Fortunately, the Vitamio library provides methods equivalent to those in the OpenCV SDK for managing camera events.

In particular, this callback:

public void onBufferingUpdate(MediaPlayer mp, int percent) {
   // Fired whenever the video buffer is updated; grab the latest frame.
   mp.getCurrentFrame();
}

onBufferingUpdate(MediaPlayer mp, int percent) is an event listener that fires each time a new frame arrives in the video buffer, so we can grab the frame from the Vitamio MediaPlayer object with getCurrentFrame(), which returns it as a Bitmap object.
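Before OpenCV can process the grabbed Bitmap, it has to be converted to an OpenCV Mat. A minimal sketch, assuming the OpenCV Android SDK has been initialized before frames arrive; `FrameGrabber` and `processFrame` are hypothetical names standing in for the project's own classes:

```java
import org.opencv.android.Utils;
import org.opencv.core.Mat;

import android.graphics.Bitmap;
import io.vov.vitamio.MediaPlayer;

// Sketch only: registers for buffering updates and hands each frame
// to the OpenCV pipeline as an RGBA Mat.
public class FrameGrabber implements MediaPlayer.OnBufferingUpdateListener {

    @Override
    public void onBufferingUpdate(MediaPlayer mp, int percent) {
        Bitmap frame = mp.getCurrentFrame();  // latest frame as an Android Bitmap
        Mat rgba = new Mat();
        Utils.bitmapToMat(frame, rgba);       // Bitmap -> OpenCV Mat (RGBA)
        processFrame(rgba);                   // hand off to the OpenCV processing
    }

    private void processFrame(Mat rgba) {
        // color detection, tracking, UI overlay, ... (see section 4)
    }
}
```

`Utils.bitmapToMat` is part of OpenCV's Android bindings; the listener would be attached with the MediaPlayer's usual `setOnBufferingUpdateListener` registration.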

4. OpenCV Frame processing

In our project we have enabled color detection and tracking with OpenCV. Here is the workflow of this feature:

  • The user touches, on his device's screen (where the video stream of the drone's camera is shown), the colored object he wants to track.

  • We gather the coordinates of the touch on the screen, then pass them to the OpenCV engine.

  • OpenCV detects the color of the object at the touch coordinates

  • OpenCV searches for this color in the frame and returns an array of point matrices representing the areas of the frame that match this color.

  • We take the first matrix of points in the array, which represents the area of the frame where the selected color is most concentrated.

  • We draw the matrix of points on the frame to highlight the tracked object in the UI.

  • We extract an OpenCV Rect object from the matrix of points.

  • From this OpenCV Rect object we can compute the coordinates of the center of the colored object, and send the corresponding control commands to the drone.
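The last two steps above can be sketched in plain Java. The arithmetic assumes a rect described, as OpenCV's Rect is, by its top-left corner plus width and height (fields `x`, `y`, `width`, `height`); the command names and the dead-zone threshold are our own illustration, not the project's actual API:

```java
// Sketch: derive the tracked object's center from its bounding rect
// and turn the horizontal offset into a (hypothetical) steering command.
public class CenterTracker {

    /** Center x of a rect given its left edge and width. */
    static int centerX(int x, int width) {
        return x + width / 2;
    }

    /** Center y of a rect given its top edge and height. */
    static int centerY(int y, int height) {
        return y + height / 2;
    }

    /**
     * Map the horizontal offset of the object's center, relative to the
     * frame center, to a steering command. The dead zone (in pixels)
     * keeps the drone still while the object is roughly centered.
     */
    static String steer(int objectCenterX, int frameWidth, int deadZone) {
        int offset = objectCenterX - frameWidth / 2;
        if (offset < -deadZone) return "YAW_LEFT";
        if (offset > deadZone)  return "YAW_RIGHT";
        return "HOVER";
    }

    public static void main(String[] args) {
        // Object bounded by rect (x=300, y=100, w=80, h=60) in a 640-px-wide frame.
        int cx = centerX(300, 80);
        System.out.println(cx);                   // prints 340
        System.out.println(steer(cx, 640, 40));   // offset 20 -> prints HOVER
    }
}
```

The same offset-plus-dead-zone idea applies vertically (altitude) with `centerY` and the frame height.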
