How to implement Object Detection in Video with Gstreamer in Python using Tensorflow?
In this tutorial we are going to implement an Object Detection plugin for Gstreamer using pre-trained models from the Tensorflow Models Zoo and inject it into a video streaming pipeline.
- Time to read: 20 min
- Code
- Libraries: OpenCV, Gstreamer, Numpy, Tensorflow
- Plugins: videotestsrc, videoconvert, fakesink, filesrc, fpsdisplaysink, decodebin, gtksink, autovideosink, capsfilter, souphttpsrc
- Goal: Create Gstreamer Plugin that detects objects in each Video Frame using Tensorflow Models Zoo
In previous posts we've already learnt how to create a simple Gstreamer plugin in Python and implemented a simple Blur Filter as a Gstreamer plugin using OpenCV. Now let's take a step forward.
Requirements
- OS: Ubuntu 16.04 / 18.04 LTS
- Python 3
- Install Gstreamer (latest version). Use guide “How to install Gstreamer from sources“
How it works
Setup Environment
- Download a video file (ex: video.mp4). (Note: check the Additional section to stream directly from a Youtube link instead.)
- Check that a simple Gstreamer pipeline works with the downloaded video:
gst-launch-1.0 filesrc location=video.mp4 ! decodebin ! videoconvert ! video/x-raw,format=RGB ! videoconvert ! gtksink sync=False
Note: video/x-raw,format=RGB is a capsfilter: it forces the output format of the previous plugin to match the input format expected by the next plugin.
- Download and extract any model from the Tensorflow Models Zoo (example: ssdlite_mobilenet_v2_coco).
- Get code
git clone https://github.com/jackersson/gst-plugins-tf.git
- Setup environment using the following commands.
Note:
- Install Tensorflow (CPU or GPU version)
pip install tensorflow      # for CPU version
pip install tensorflow-gpu  # for GPU version
- Install Gstreamer (latest version). Use guide “How to install Gstreamer from sources”
- Make sure gst-python is installed (to enable Python support for Gstreamer Plugins)
Note: Install gst-python. It is highly recommended to use the guide "How to install Gstreamer from sources" (Gst-Python section). Before going to the next part, check that the test case runs without printing any errors:
wget https://bit.ly/2Eg4b8s
python3 gstreamer_empty_plugin_test_case.py
Run
1. Edit the config (format explained below):
weights: "ssdlite_mobilenet_v2_coco/frozen_inference_graph.pb"  # path to model
threshold: 0.4
per_process_gpu_memory_fraction: 0.0
device: "/device:GPU:0"
labels: "data/mscoco_label_map.yml"
input_shape: [300, 300]
log_device_placement: false
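The config above is a flat set of key/value pairs. The repo presumably loads it with PyYAML, but to make the format concrete, here is a dependency-free sketch of a loader that handles just this flat subset (comments, numbers, booleans, lists and quoted strings); the function name is mine, not from the repo:

```python
import ast

def load_simple_config(text):
    """Parse flat 'key: value' config lines into a dict.

    Handles the subset used in this tutorial: '#' comments, numbers,
    true/false, lists like [300, 300] and quoted strings. A '#' inside
    a quoted value would break this naive comment stripping.
    """
    cfg = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop trailing comments
        if not line:
            continue
        key, _, value = line.partition(':')
        value = value.strip()
        if value in ('true', 'false'):
            cfg[key.strip()] = (value == 'true')
        else:
            try:
                cfg[key.strip()] = ast.literal_eval(value)
            except (ValueError, SyntaxError):
                cfg[key.strip()] = value  # fall back to raw string
    return cfg
```

In practice you would just `yaml.safe_load` the file; the point is that every plugin parameter lives in one place instead of a dozen element properties.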
2. Launch (with print to console)
cd gst-plugins-tf
export GST_PLUGIN_PATH=$PWD/gst
GST_DEBUG=python:4 gst-launch-1.0 filesrc location=video.mp4 ! decodebin ! videoconvert ! \
video/x-raw,format=RGB ! gst_tf_detection config=data/tf_object_api_cfg.yml ! videoconvert ! gtksink sync=False
Note: GST_DEBUG=python:4 enables printing logs to the console when launching the pipeline (see gstreamer debugging tools).
In the terminal you should see something like this (a list of dicts with each object's class_name, confidence and bounding_box):
[{'confidence': 0.6499642729759216, 'bounding_box': [402, 112, 300, 429], 'class_name': 'giraffe'},
 {'confidence': 0.4659585952758789, 'bounding_box': [761, 544, 67, 79], 'class_name': 'person'}]
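The structure printed above is a plain list of dicts, so it is easy to post-process in Python. As an illustration (the list format is taken from the log above; in the real plugin thresholding happens inside TfObjectDetectionModel via the config's `threshold` key), here is a sketch of filtering detections by confidence:

```python
def filter_detections(objects, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [obj for obj in objects if obj['confidence'] >= threshold]

detections = [
    {'confidence': 0.6499642729759216, 'bounding_box': [402, 112, 300, 429], 'class_name': 'giraffe'},
    {'confidence': 0.4659585952758789, 'bounding_box': [761, 544, 67, 79], 'class_name': 'person'},
]

for obj in filter_detections(detections, threshold=0.5):
    print(obj['class_name'], obj['bounding_box'])
```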
3. Launch (with objects' bounding boxes drawn over the video)
gst-launch-1.0 filesrc location=video.mp4 ! decodebin ! videoconvert ! video/x-raw,format=RGB ! \
gst_tf_detection config=data/tf_object_api_cfg.yml ! videoconvert ! gst_detection_overlay ! videoconvert ! gtksink sync=False
Note: Look at gst_detection_overlay to see how the drawing functions are implemented.
Explanation
The steps to create such a plugin are heavily based on this tutorial.
Let's dive in.
- Define the buffer format for the SRC/SINK pads of the plugin. Since Tensorflow Object Detection models mostly work with RGB, define the following templates (pay attention to video/x-raw,format={RGB}):
_srctemplate = Gst.PadTemplate.new('src', Gst.PadDirection.SRC,
                                   Gst.PadPresence.ALWAYS,
                                   Gst.Caps.from_string("video/x-raw,format={RGB}"))

_sinktemplate = Gst.PadTemplate.new('sink', Gst.PadDirection.SINK,
                                    Gst.PadPresence.ALWAYS,
                                    Gst.Caps.from_string("video/x-raw,format={RGB}"))
2. Define properties for passing arguments to the Gstreamer plugin.
You may have noticed that the plugin has a "config" property. The config file format (YAML) is described here. The main purpose of this config is to pass parameters to our Tensorflow model (ex.: confidence threshold, path to the model's weights, device to place the graph on, labels, etc.)
gst_tf_detection config=tf_object_api_cfg.yml
Note: The labels file is also just a YAML file with the format class_id: class_name. For example:
1: person
2: bicycle
3: car
4: motorcycle
...
90: toothbrush
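Since the labels file is flat `class_id: class_name` YAML, loading it reduces to building an int-to-string mapping. A minimal sketch (the repo presumably uses PyYAML for this too; the helper name is mine):

```python
def load_labels(text):
    """Parse 'class_id: class_name' lines into an int -> str mapping."""
    labels = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('...'):
            continue
        class_id, _, class_name = line.partition(':')
        labels[int(class_id)] = class_name.strip()
    return labels

labels = load_labels("1: person\n2: bicycle\n3: car\n90: toothbrush")
# Unknown class ids (the COCO id space has gaps) can fall back safely:
name = labels.get(42, 'unknown')
```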
Using a config file is also common practice when a Gstreamer plugin needs more than a few parameters, as it is much easier to reuse and extend in the future. For example, look at the DeepStream SDK plugins.
To allow the Gstreamer plugin to get/set properties, define the following:
__gproperties__ = {
    "model": (GObject.TYPE_PYOBJECT,
              "model",
              "Contains model TfObjectDetectionModel",
              GObject.ParamFlags.READWRITE),

    "config": (str,
               "Path to config file",
               "Contains path to config *.yml supported by TfObjectDetectionModel",
               None,  # default
               GObject.ParamFlags.READWRITE),
}
Note: The "model" property is used to pass an existing model to the plugin without creating a new one. This can be useful if you want to share a single model between multiple plugin instances.
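With `__gproperties__` declared, the plugin class also needs `do_get_property`/`do_set_property` methods that dispatch on the property name. The following is a dependency-free sketch of that dispatch pattern (in the real plugin these are methods on the GstBase.BaseTransform subclass and `prop` is a GObject.ParamSpec; here a namedtuple stands in for it, and model creation from the config is elided):

```python
from collections import namedtuple

# Stand-in for GObject.ParamSpec, which also exposes a .name attribute
ParamStub = namedtuple('ParamStub', 'name')

class DetectionPluginSketch:
    """Illustrates get/set property dispatch for 'model' and 'config'."""

    def __init__(self):
        self.model = None
        self.config = None

    def do_get_property(self, prop):
        if prop.name == 'model':
            return self.model
        if prop.name == 'config':
            return self.config
        raise AttributeError('unknown property %s' % prop.name)

    def do_set_property(self, prop, value):
        if prop.name == 'model':
            # An already-created model is injected directly (shared instance)
            self.model = value
        elif prop.name == 'config':
            self.config = value
            # The real plugin would (re)create TfObjectDetectionModel
            # from the config file here
        else:
            raise AttributeError('unknown property %s' % prop.name)
```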
3. Implement chainfunc() with Tensorflow model inference. The main steps are: convert the Gst.Buffer to an np.ndarray, perform inference on the image, and write the detected objects to the Gst.Buffer as metadata (recap: How to add metadata to gstreamer buffer). Check the following code:
def chainfunc(self, pad: Gst.Pad, parent, buffer: Gst.Buffer) -> Gst.FlowReturn:
    """
    :param parent: GstDetectionOverlay
    """
    if self.model is None:
        return self.srcpad.push(buffer)

    try:
        # Convert Gst.Buffer to np.ndarray
        image = gst_buffer_with_pad_to_ndarray(buffer, pad, self._channels)

        # Model inference
        objects = self.model.process_single(image)
        Gst.info(str(objects))

        # Write objects to Gst.Buffer as metadata
        # Explained: http://lifestyletransfer.com/how-to-add-metadata-to-gstreamer-buffer-in-python/
        gst_meta_write(buffer, objects)
    except Exception as e:
        logging.error(e)
        traceback.print_exc()
        return Gst.FlowReturn.ERROR

    return self.srcpad.push(buffer)
We won't dive deep into how the Tensorflow model itself is loaded. Just have a look at the code here. (If you have any questions about those lines of code, leave a comment.)
Additional
# Gstreamer + Youtube
To run a Gstreamer pipeline from a Youtube link (without downloading the video file) we are going to use the youtube-dl library. First, install the dependencies:
source venv/bin/activate
pip install youtube-dl
Now let's edit the command from the previous examples (described here: gstreamer commands cheatsheet):
gst-launch-1.0 souphttpsrc is-live=true location="$(youtube-dl --format mp4 --get-url https://www.youtube.com/watch?v=xjDjIWPwcPU)" ! \
decodebin ! videoconvert ! video/x-raw,format=RGB ! gst_tf_detection config=data/tf_object_api_cfg.yml \
! videoconvert ! gst_detection_overlay ! videoconvert ! gtksink sync=false
Note: The following command prints a direct (https) link to the Youtube video in the specified format (mp4):
youtube-dl --format mp4 --get-url <youtube_link>

And souphttpsrc is a plugin that reads buffers from an HTTP stream at the specified URL.
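The same two steps (resolve the stream URL, then assemble the pipeline description) can be scripted from Python. A sketch under the assumption that youtube-dl is on PATH; both helper names are mine, and the pipeline string matches the gst-launch command above:

```python
import subprocess

def youtube_stream_url(link):
    """Resolve a direct mp4 stream URL via youtube-dl (must be installed)."""
    out = subprocess.check_output(
        ['youtube-dl', '--format', 'mp4', '--get-url', link])
    return out.decode().strip()

def build_detection_pipeline(stream_url, config='data/tf_object_api_cfg.yml'):
    """Assemble the gst-launch pipeline description used in this tutorial."""
    parts = [
        'souphttpsrc is-live=true location="%s"' % stream_url,
        'decodebin',
        'videoconvert',
        'video/x-raw,format=RGB',
        'gst_tf_detection config=%s' % config,
        'videoconvert',
        'gst_detection_overlay',
        'videoconvert',
        'gtksink sync=false',
    ]
    return ' ! '.join(parts)
```

The resulting string can be passed to `gst-launch-1.0` or to `Gst.parse_launch` from a Python script.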
Hope everything works as expected 😉 In case of trouble running the code, leave a comment or open an issue on Github.
No element “gst_tf_detection”
I ran into this error when I tried to implement the example. Do you have any idea that could help me? Thanks a lot.
Hi,
Here I tried it on another PC:
—————-
git clone https://github.com/jackersson/gst-plugins-tf.git
#
# Create environment
#
cd gst-plugins-tf/
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
#
# Make plugins visible
#
export GST_PLUGIN_PATH=$PWD/gst # Fixed path in repository
#
# create symbol link for gi package
#
ln -s /usr/lib/python3/dist-packages/gi venv/lib/python3.6/site-packages
#
# Check plugin works
#
gst-inspect-1.0 gst_tf_detection
> Prints plugin's info
To get the full error message (not just No element "gst_tf_detection"), clear the plugin cache:
rm -rf ~/.cache/gstreamer-1.0
and then:
gst-inspect-1.0 gst_tf_detection
——————-
Hope this works)
To fix this I needed to install libgst_objects_info_meta.so from your https://github.com/jackersson/pygst-utils into venv/lib/python3.6/site-packages/pygst_utils/3rd_party/gstreamer/build/.
Hi, Jose
Thanks for update.
I just checked that this dependency is present in requirements.txt. The following commands should help you avoid the described problem.
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Best regards,
Taras Lishchenko
I am having a similar problem as the above poster- I am working on a Jetson TX2 and am receiving the following error message when I try to run the example plugin (after removing ~/.cache/gstreamer-1.0):
(gst-plugin-scanner:25395): GStreamer-WARNING **: Failed to load plugin ‘/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstclutter-3.0.so’: /usr/lib/aarch64-linux-gnu/libgbm.so.1: undefined symbol: drmGetDevice2
WARNING: erroneous pipeline: no element “gst_tf_detection”
Any help would really be appreciated!
Hi, Caroline McKee
Thank you for your question.
This seems to be an OpenGL problem. Can you share the pipeline itself that you are using? (Maybe you are using some OpenGL plugins for decoding/visualization.)
Make sure that you are using a proper pipeline on the Jetson TX2. For example:
gst-launch-1.0 filesrc location=video.mp4 ! qtdemux name=demuxer_0 demuxer_0.video_0 ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=RGBA ! videoconvert n-threads=0 ! video/x-raw,format=RGB ! gst_detection_tf ! fpsdisplaysink video-sink=fakesink sync=false signal-fps-measurements=True
Best regards,
Taras Lishchenko