RTSP Python client with OpenCV

Read RTSP Stream from UDP sink using Python OpenCV and GStreamer


We’re developing software to stream video from two different cameras over RTSP using GStreamer. To simplify the acquisition process, we’re using OpenCV with Python 3.

The problem is: we want to push the stream to a UDP sink, republish it over our LAN as an RTSP stream, and then read it on another PC. But we can’t get this to work.

This is the Python code that grabs the camera images and starts streaming with udpsink. In this case we’re accessing the local webcam, so the code can be tested by anyone as-is.

import cv2
import time
from multiprocessing import Process

def send():
    # GStreamer pipeline: encode BGR frames to H.264 and push RTP packets over UDP
    video_writer = cv2.VideoWriter(
        'appsrc ! '
        'videoconvert ! '
        'x264enc tune=zerolatency speed-preset=superfast ! '
        'rtph264pay ! '
        'udpsink host=127.0.0.1 port=5000',
        cv2.CAP_GSTREAMER, 0, 1, (640, 480), True)

    video_getter = cv2.VideoCapture(0)

    while True:
        if video_getter.isOpened():
            ret, frame = video_getter.read()
            video_writer.write(frame)
            # cv2.imshow('send', data_to_stream)
            time.sleep(0.1)

if __name__ == '__main__':
    s = Process(target=send)
    s.start()
    s.join()
    cv2.destroyAllWindows()

When running this, we only get one warning:

[ WARN:0] global ../modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1 

Then we try to republish the stream over our LAN as RTSP using examples/test-launch from the GStreamer RTSP server:

./test-launch " udpsrc port=5000 ! h264parse ! rtph264pay name=pay0 pt=96" 

This gives no errors, only the default message:

stream ready at rtsp://127.0.0.1:8554/test 

And then VLC fails to open the stream at this address.

~$ vlc -v rtsp://127.0.0.1:8554/test
VLC media player 3.0.11 Vetinari (revision 3.0.11-0-gdc0c5ced72)
[000055f5dacf3b10] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
Qt: Session management error: None of the authentication protocols specified are supported
[00007f5ea40010f0] live555 demux error: Failed to connect with rtsp://127.0.0.1:8554/test
[00007f5ea4003120] satip stream error: Failed to setup RTSP session

I guess it’s something with our pipelines, but I really don’t get what it could be. Any help will be appreciated. Thanks in advance.
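A useful sanity check is to bypass test-launch and receive the RTP/H.264 stream directly with gst-launch; if this shows video, the sender works and the problem is in the RTSP relay. A minimal sketch, assuming the sender pipeline above (udpsrc cannot infer RTP caps, so they are spelled out; 96 is the default rtph264pay payload type):

gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,payload=(int)96" ! \
    rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false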

You can remove rtph264pay from the VideoWriter pipeline. The Python script then sends raw H.264 data; test-launch receives that H.264 data, does the RTP packetization itself, and serves it over RTSP.

So the VideoWriter should be:

video_writer = cv2.VideoWriter(
    'appsrc ! '
    'videoconvert ! '
    'x264enc tune=zerolatency speed-preset=superfast ! '
    'udpsink host=127.0.0.1 port=5000',
    cv2.CAP_GSTREAMER, 0, 1, (640, 480), True)

I’ve noticed that most questions related to this topic ended up being answered by their own authors, like these two:

How to Stream PC Webcam with RTSP and Gstreamer on Python

Write OpenCV frames into Gstreamer RTSP server pipeline

It shows how specific these questions can be, and that’s why I decided to share here what I’ve come up with. It’s not the perfect solution right now, but it’s “kind of” working.

I’d like to thank kqmh00 for the support, which gave me some insights.

First, I stepped back and tried a simpler example using videotestsrc outside Python. After a few tries, these were the pipelines that worked:

gst-launch-1.0 videotestsrc is-live=true ! video/x-raw ! videoconvert ! x264enc tune=zerolatency speed-preset=superfast ! udpsink host=127.0.0.1 port=30000 

Intercessor

./test-launch "udpsrc port=30000 ! h264parse ! rtph264pay name=pay0 pt=96" 

OK. It only proved that it’s possible to send video over UDP without rtph264pay; test-launch receives it, adds the RTP packaging, and re-serves it as RTSP.

Next, I tried Python again. After searching for similar questions, I found one that served as inspiration for the pipeline I’m now using:

pipeline = (
    "appsrc format=GST_FORMAT_TIME ! "
    "video/x-raw,format=BGR,width=1280,height=720 ! "
    "videoconvert ! "
    "video/x-raw,format=I420 ! "
    "x264enc tune=zerolatency speed-preset=superfast byte-stream=true threads=2 ! "
    "video/x-h264, stream-format=(string)byte-stream ! "
    "mpegtsmux alignment=7 ! "
    "udpsink host=127.0.0.1 port=30000"
)

The rest of the Python code is the same, except for the video_writer, which was updated with the correct frame width and height:

video_writer = cv2.VideoWriter(
    pipeline, cv2.CAP_GSTREAMER, 0, 1, (1280, 720), True)

The other pipelines (Intercessor and Receiver) remain the same. With this I’m able to retrieve some pretty monochromatic images (entirely gray or green), but at least I can retrieve something.

VLC throws lots of errors, such as

[h264 @ 0x7ff8e0001da0] Invalid NAL unit 0, skipping.
[h264 @ 0x7ff8e0001da0] top block unavailable for requested intra mode -1
[h264 @ 0x7ff8e0001da0] error while decoding MB 1 0, bytestream 112420

Probably some setting is missing or some property must be set explicitly. I’ll keep digging, but feel free to add any possible improvements here.
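One hedged guess: the Python pipeline now wraps the H.264 stream in MPEG-TS (mpegtsmux), but the Intercessor above still treats the UDP payload as a bare H.264 byte-stream. If that is the cause, the test-launch pipeline would need a tsdemux in front of h264parse, something like the sketch below (tsdemux has dynamic pads, so this relies on gst-launch-style delayed linking):

./test-launch "udpsrc port=30000 ! tsdemux ! h264parse ! rtph264pay name=pay0 pt=96"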


RPi TCP video streaming with OpenCV and GStreamer using v4l2h264enc

I am trying to stream frames using OpenCV and GStreamer in Python. I’m on a 64-bit Bullseye Raspberry Pi 4. This is the pipeline I am using on the Raspberry:

pipeline = 'appsrc ! "video/x-raw,framerate=25/1,format=BGR,width=640,height=480" ! ' \
           'queue ! v4l2h264enc ! "video/x-h264,level=(string)4" ! h264parse ! ' \
           'rtph264pay ! gdppay ! tcpserversink host=0.0.0.0 port=7000'
cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0, args.fps, (args.width, args.height))

There seems to be some problem with v4l2h264enc. Running with GST_DEBUG=4 gives me:

0x3e39a00 ERROR GST_PIPELINE gst/parse/grammar.y:1007:priv_gst_parse_yyparse: no source element for URI "/x-raw,framerate=25/1,format=BGR,width=640,height=480""
0:00:00.087855767 92892 0x3e39a00 ERROR GST_PIPELINE gst/parse/grammar.y:1007:priv_gst_parse_yyparse: no source element for URI "/x-h264,level=(string)4""

These two errors look most important to me, but you can see the full log here.

Using a similar CLI pipeline the stream connects just fine (except for some encoding grayness, which isn’t the most critical to me right now).

# Stream
gst-launch-1.0 v4l2src device=/dev/video0 ! \
    'video/x-raw,framerate=30/1,format=UYVY,width=1280,height=720' ! \
    v4l2h264enc ! 'video/x-h264,level=(string)4' ! h264parse ! \
    rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=0.0.0.0 port=7000

# Client
sudo gst-launch-1.0 -v tcpclientsrc host=<raspberry ip> port=7000 ! \
    gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! \
    autovideosink sync=false

With appsrc and opencv I also tried writing to a file without success.

The OpenCV library is compiled with GStreamer support. This is what I get from cv2.getBuildInformation():

Video I/O:
  DC1394:          NO
  FFMPEG:          YES
    avcodec:       YES (58.91.100)
    avformat:      YES (58.45.100)
    avutil:        YES (56.51.100)
    swscale:       YES (5.7.100)
    avresample:    NO
  GStreamer:       YES (1.18.4)
  v4l/v4l2:        YES (linux/videodev2.h)

Any help would be most welcome!

Not sure this is the solution for your case, but the following may help:

  1. Don’t use RTP for TCP streaming. AFAIK, RTP mostly relies on UDP packetization (although it is not impossible, as done by RTSP servers when TCP transport is requested). You may just use a container such as flv, matroska or mpegts:

. ! h264parse ! matroskamux ! tcpserversink
. ! h264parse ! flvmux ! tcpserversink
. ! h264parse ! mpegtsmux ! tcpserversink

and adjust the receiver accordingly:

tcpclientsrc ! matroskademux ! h264parse ! .
tcpclientsrc ! flvdemux ! h264parse ! .
tcpclientsrc ! tsdemux ! h264parse ! .
  2. In the gst-launch case you are feeding UYVY frames to the H.264 encoder, while in the OpenCV case you are producing BGR frames, which may not be supported as encoder input. Just add a videoconvert before the encoder.
  3. You may also set the H.264 profile along with the level (see the sketch after this list).
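Putting the three suggestions together, a hedged CLI sketch of the sender; the profile=(string)main value is an assumption, pick whatever your decoder supports:

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! \
    v4l2h264enc ! 'video/x-h264,level=(string)4,profile=(string)main' ! \
    h264parse ! matroskamux ! tcpserversink host=0.0.0.0 port=7000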

As mentioned by @SeB, the BGR frames might not be supported by v4l2h264enc, which leads to this error; adding videoconvert fixes it:

opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (2293) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline 

But the main cause of the no source element for URI errors turned out to be the double quotes around video/x-raw and video/x-h264.

This is the final pipeline that works.

pipeline = 'appsrc ! videoconvert ! v4l2h264enc ! video/x-h264,level=(string)4 ! ' \
           'h264parse ! matroskamux ! tcpserversink host=0.0.0.0 port=7000'

As @SeB suggested, I also used matroskamux instead of rtph264pay ! gdppay, since it gives better stream performance.
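For completeness, a matching receiver for this pipeline might look like the following sketch, adapted from the earlier CLI client (replace <raspberry ip> with the Pi's address):

gst-launch-1.0 tcpclientsrc host=<raspberry ip> port=7000 ! \
    matroskademux ! h264parse ! avdec_h264 ! videoconvert ! \
    autovideosink sync=false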


Streaming OpenCV frames using H.264 encoding

I created a Python program using OpenCV and GStreamer to stream frames to a GStreamer udpsink. Here is the code:

import cv2
import config

def send():
    cap = cv2.VideoCapture(0)  # open the camera
    fourcc = cv2.VideoWriter_fourcc(*'H264')
    # output GStreamer pipeline
    out = cv2.VideoWriter(
        'appsrc ! videoconvert ! '
        'x264enc tune=zerolatency noise-reduction=10000 bitrate=2048 speed-preset=superfast ! '
        'rtph264pay config-interval=1 pt=96 ! '
        'udpsink host=127.0.0.1 port=5000',
        fourcc, config.CAP_PROP_FPS, (800, 600), True)

    if not out.isOpened():
        print('VideoWriter not opened')
        exit(0)

    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            # Write to pipeline
            out.write(frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

    cap.release()
    out.release()

send()

Then, in my terminal, my GStreamer receiver pipeline is:

gst-launch-1.0 -v udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink 

The problem is that the frames I receive look like this: https://drive.google.com/open?id=14PeiGlEfcSuzRjSPENrCjGQIQk-04OHb

I guess it’s all about color space conversion in OpenCV. What do you think? Thank you!
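One hedged way to test the color-space theory is to force an explicit I420 conversion before the encoder, as the mpegtsmux pipeline earlier in this page does; the caps insertion and the 25 fps value here are assumptions:

out = cv2.VideoWriter(
    'appsrc ! videoconvert ! video/x-raw,format=I420 ! '
    'x264enc tune=zerolatency bitrate=2048 speed-preset=superfast ! '
    'rtph264pay config-interval=1 pt=96 ! '
    'udpsink host=127.0.0.1 port=5000',
    cv2.CAP_GSTREAMER, 0, 25.0, (800, 600), True)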

With the 640×480 resolution I can see some improvement, but it’s still far from acceptable. (Image link: https://drive.google.com/open?id=1YBNEKOcC9fK6hS5RatvkO9pjKhcbh6Eu)

But anyway, I found that at 1280×720 it is pretty good! My camera supports other resolutions (like 800×600 or 640×480), but they are not working as expected.

Check which video resolutions your camera supports.

Use the same resolution in your GStreamer pipeline (unless you are doing some rescaling).
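To check this programmatically, a minimal sketch: the driver snaps the request to the nearest mode it actually supports, so the values read back are the ones to put in the VideoWriter size and pipeline caps.

import cv2

cap = cv2.VideoCapture(0)
# Request a mode; the driver falls back to the closest supported one.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print('camera delivers:', (width, height))  # use this size in the pipeline
cap.release()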

