Read, Write and Display a video using OpenCV

In this post, we will learn how to Read, Write and Display a video using OpenCV. Code in C++ and Python is shared for study and practice.

Before we do that, allow me a digression into a bit of history of video capture.

On June 15, 1878, in Palo Alto, California, a remarkable experiment was conducted to determine whether a galloping horse ever had all four feet off the ground at the same time. This historic experiment by photographer Eadweard Muybridge was the first time a motion sequence was captured in real time. It was financed by Leland Stanford of Stanford University fame.

Eadweard placed multiple cameras, 27 inches apart, along the side of the race track. A thread connected to each camera’s shutter ran across the track. When the horse ran down the track, it broke one thread after another, triggering the camera shutters in series and exposing each film for one-thousandth of a second!

This remarkable story almost did not happen. Just a few years before this achievement, Muybridge shot and killed his wife’s lover. The jury acquitted him on the grounds of “justifiable homicide”! But we have digressed a bit too far.

So, first up, what is a video? A video is a sequence of fast-moving images. The obvious question that follows is: how fast are the images moving? The measure of how fast the images are transitioning is given by a metric called frames per second (FPS).


When someone says that a video has an FPS of 40, it means that 40 images are being displayed every second; in other words, a new frame is displayed every 25 milliseconds (1000/40). The other important attributes are the width and height of the frame.
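As a quick sanity check of that arithmetic, here is a tiny Python sketch (the frame rate is just an example value):

# Milliseconds each frame stays on screen, given the frame rate of the video
fps = 40                          # example frame rate
frame_interval_ms = 1000.0 / fps
print(frame_interval_ms)          # 25.0 ms per frame at 40 FPS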


Reading a Video

In OpenCV, a video can be read either by using the feed from a camera connected to a computer or by reading a video file. The first step towards reading a video file is to create a VideoCapture object. Its argument can be either the device index or the name of the video file to be read.

In most cases, only one camera is connected to the system. So, all we do is pass ‘0’ and OpenCV uses the only camera attached to the computer. When more than one camera is connected to the computer, we can select the second camera by passing ‘1’, the third camera by passing ‘2’ and so on.

# Create a VideoCapture object and read from input file
# If the input is taken from the camera, pass 0 instead of the video file name
cap = cv2.VideoCapture('chaplin.mp4')
// Create a VideoCapture object and open the input file
// If the input is taken from the camera, pass 0 instead of the video file name
VideoCapture cap("chaplin.mp4");

After the VideoCapture object is created, we can capture the video frame by frame.
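As a minimal sketch, grabbing a single frame with the Python API looks like this (using the same sample file as above):

import cv2

cap = cv2.VideoCapture('chaplin.mp4')

# Read one frame from the capture; ret is False when no frame could be read
ret, frame = cap.read()

if ret:
    print("Frame size (height, width, channels):", frame.shape)
else:
    print("No frame returned (end of file or capture not available)")

cap.release()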

Displaying a Video

After reading a video file, we can display the video frame by frame. A frame of a video is simply an image and we display each frame the same way we display images, i.e., we use the function imshow().

As in the case of an image, we use waitKey() after imshow() to pause each frame of the video. For an image we pass ‘0’ to waitKey(), but for playing a video we need to pass a number greater than ‘0’. This is because ‘0’ would pause the frame for an infinite amount of time, whereas in a video each frame should be shown only for a finite interval. The number passed to waitKey() is the time in milliseconds we want each frame to be displayed.

While reading the frames from a webcam, using waitKey(1) is appropriate because the display frame rate will be limited by the frame rate of the webcam even if we specify a delay of 1 ms in waitKey.

While reading frames from a video that we are processing, it may still be appropriate to set the time delay to 1 ms so that the thread is freed up to do the processing we want to do.

In rare cases, when the playback needs to be at a certain framerate, we may want the delay to be higher than 1 ms.
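When that is the case, one option is to derive the delay from the FPS reported by the capture itself. The snippet below is a rough sketch of that idea (it ignores the time spent decoding and drawing each frame); the resulting value would replace the hard-coded 25 ms delay used in the full listings further down.

import cv2

# Open the same sample file used elsewhere in this post
cap = cv2.VideoCapture('chaplin.mp4')

# Derive the per-frame delay from the reported frame rate; fall back to 25 ms
fps = cap.get(cv2.CAP_PROP_FPS)
delay = int(1000 / fps) if fps > 0 else 25
print("Using a waitKey delay of", delay, "ms")

cap.release()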

The Python and C++ implementation of reading and displaying a video file follows.


import cv2
import numpy as np

# Create a VideoCapture object and read from input file
# If the input is the camera, pass 0 instead of the video file name
cap = cv2.VideoCapture('chaplin.mp4')

# Check if camera opened successfully
if (cap.isOpened() == False):
    print("Error opening video stream or file")

# Read until video is completed
while(cap.isOpened()):
    # Capture frame-by-frame
    ret, frame = cap.read()
    if ret == True:

        # Display the resulting frame
        cv2.imshow('Frame', frame)

        # Press Q on keyboard to exit
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break

    # Break the loop
    else:
        break

# When everything done, release the video capture object
cap.release()

# Closes all the frames
cv2.destroyAllWindows()
#include "opencv2/opencv.hpp" #include using namespace std; using namespace cv; int main() < // Create a VideoCapture object and open the input file // If the input is the web camera, pass 0 instead of the video file name VideoCapture cap("chaplin.mp4"); // Check if camera opened successfully if(!cap.isOpened())< cout while(1)< Mat frame; // Capture frame-by-frame cap >> frame; // If the frame is empty, break immediately if (frame.empty()) break; // Display the resulting frame imshow( "Frame", frame ); // Press ESC on keyboard to exit char c=(char)waitKey(25); if(c==27) break; > // When everything done, release the video capture object cap.release(); // Closes all the frames destroyAllWindows(); return 0; >

Writing a Video

After we are done capturing and processing the video frame by frame, the next step is to save the video.

For images, it is straightforward. We just need to use cv2.imwrite(). But for videos, we need to toil a bit harder. We need to create a VideoWriter object. First, we should specify the output file name with its format (e.g., output.avi). Then, we should specify the FourCC code and the number of frames per second (FPS). Lastly, the frame size should be passed.

# Define the codec and create VideoWriter object. The output is stored in 'outpy.avi' file.
# Define the fps to be equal to 10. Also frame size is passed.
out = cv2.VideoWriter('outpy.avi', cv2.VideoWriter_fourcc('M','J','P','G'), 10, (frame_width, frame_height))
// Define the codec and create VideoWriter object. The output is stored in 'outcpp.avi' file.
// Define the fps to be equal to 10. Also frame size is passed.
VideoWriter video("outcpp.avi", cv::VideoWriter::fourcc('M','J','P','G'), 10, Size(frame_width, frame_height));

FourCC is a 4-byte code used to specify the video codec. The list of available codes can be found at fourcc.org. There are many FourCC codes available, but in this post we will work only with MJPG.

Note: Only a few of the FourCC codes will work on your system, depending on the availability of the corresponding codecs. Sometimes, even when a specific codec is available, OpenCV may not be able to use it. MJPG is a safe choice.
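One way to catch a missing or unusable codec early is to check the writer right after creating it. Below is a small sketch of that check; the frame size here is only a placeholder, and in practice it is read from the capture as in the listings that follow.

import cv2

# Placeholder frame size; in practice read it from the VideoCapture
frame_width, frame_height = 640, 480

fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')
out = cv2.VideoWriter('outpy.avi', fourcc, 10, (frame_width, frame_height))

# isOpened() is False when OpenCV could not set up the requested codec/container
if not out.isOpened():
    print("Could not open the output file - try a different FourCC or container")

out.release()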

The Python and C++ implementation of capturing live stream from a camera and writing it to a file follows.

import cv2
import numpy as np

# Create a VideoCapture object
cap = cv2.VideoCapture(0)

# Check if camera opened successfully
if (cap.isOpened() == False):
    print("Unable to read camera feed")

# Default resolutions of the frame are obtained. The default resolutions are system dependent.
# We convert the resolutions from float to integer.
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))

# Define the codec and create VideoWriter object. The output is stored in 'outpy.avi' file.
out = cv2.VideoWriter('outpy.avi', cv2.VideoWriter_fourcc('M','J','P','G'), 10, (frame_width, frame_height))

while(True):
    ret, frame = cap.read()

    if ret == True:

        # Write the frame into the file 'outpy.avi'
        out.write(frame)

        # Display the resulting frame
        cv2.imshow('frame', frame)

        # Press Q on keyboard to stop recording
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Break the loop
    else:
        break

# When everything done, release the video capture and video write objects
cap.release()
out.release()

# Closes all the frames
cv2.destroyAllWindows()
#include "opencv2/opencv.hpp" #include using namespace std; using namespace cv; int main() < // Create a VideoCapture object and use camera to capture the video VideoCapture cap(0); // Check if camera opened successfully if(!cap.isOpened())< cout // Default resolutions of the frame are obtained.The default resolutions are system dependent. int frame_width = cap.get(cv::CAP_PROP_FRAME_WIDTH); int frame_height = cap.get(cv::CAP_PROP_FRAME_HEIGHT); // Define the codec and create VideoWriter object.The output is stored in 'outcpp.avi' file. VideoWriter video("outcpp.avi", cv::VideoWriter::fourcc('M','J','P','G'), 10, Size(frame_width,frame_height)); while(1)< Mat frame; // Capture frame-by-frame cap >> frame; // If the frame is empty, break immediately if (frame.empty()) break; // Write the frame into the file 'outcpp.avi' video.write(frame); // Display the resulting frame imshow( "Frame", frame ); // Press ESC on keyboard to exit char c = (char)waitKey(1); if( c == 27 ) break; > // When everything done, release the video capture and write object cap.release(); video.release(); // Closes all the frames destroyAllWindows(); return 0; >

Summary

In this post, we have learned the basic aspects of how to read, write and display a video using OpenCV. These basic steps are the foundation for many interesting Computer Vision and Machine Learning applications, such as video classification and human activity recognition, and they help robots with vision navigate autonomously, grasp different objects, or avoid collisions while moving.

Subscribe & Download Code

If you liked this article and would like to download code (C++ and Python) and example images used in this post, please click here. Alternately, sign up to receive a free Computer Vision Resource Guide. In our newsletter, we share OpenCV tutorials and examples written in C++/Python, and Computer Vision and Machine Learning algorithms and news.

Key takeaways:

  1. A video can be read either by using the feed from a camera connected to a computer or by reading a video file.
  2. Displaying a video is done frame by frame. A frame of a video is simply an image and we display each frame the same way we display images.
  3. To write a video we need to create a VideoWriter object.
    • First, specify the output file name with its format (e.g., output.avi).
    • Then, we should specify the FourCC code and the number of frames per second (FPS).
    • Lastly, the frame size should be passed.

Pitfall: If the video file you are reading is in the same folder as your code, simply specify the correct file name. Otherwise, you will have to specify the complete path to the video file.
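A small sketch of guarding against a wrong path before handing it to OpenCV (the path below is only a placeholder; adjust it to your setup):

import os
import cv2

video_path = '/full/path/to/chaplin.mp4'   # placeholder path

# cv2.VideoCapture does not raise on a bad path; it silently returns an unopened capture
if not os.path.exists(video_path):
    print("File not found:", video_path)
else:
    cap = cv2.VideoCapture(video_path)
    print("Opened:", cap.isOpened())
    cap.release()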


Capturing Video from a Camera

Often, we have to capture a live stream with a camera. OpenCV provides a very simple interface to do this. Let’s capture a video from the camera (I am using the built-in webcam on my laptop), convert it into a grayscale video and display it. Just a simple task to get started.

To capture a video, you need to create a VideoCapture object. Its argument can be either the device index or the name of a video file. A device index is just the number to specify which camera. Normally one camera will be connected (as in my case). So I simply pass 0 (or -1). You can select the second camera by passing 1 and so on. After that, you can capture frame-by-frame. But at the end, don’t forget to release the capture.

cap.read() returns a bool (True/False). If the frame is read correctly, it will be True. So you can check for the end of the video by checking this returned value.

Sometimes, cap may not have initialized the capture. In that case, this code shows an error. You can check whether it is initialized or not by the method cap.isOpened(). If it is True, OK. Otherwise, open it using cap.open().
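Putting these pieces together, here is a minimal sketch that captures from the default camera, converts each frame to grayscale and displays it (press q to quit):

import cv2 as cv

cap = cv.VideoCapture(0)
if not cap.isOpened():
    print("Cannot open camera")
    exit()

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # If the frame is read correctly, ret is True
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break

    # Our operation on the frame: convert it to grayscale
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv.imshow('frame', gray)
    if cv.waitKey(1) == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv.destroyAllWindows()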

You can also access some of the features of this video using cap.get(propId) method where propId is a number from 0 to 18. Each number denotes a property of the video (if it is applicable to that video). Full details can be seen here: cv::VideoCapture::get(). Some of these values can be modified using cap.set(propId, value). Value is the new value you want.

For example, I can check the frame width and height by cap.get(cv.CAP_PROP_FRAME_WIDTH) and cap.get(cv.CAP_PROP_FRAME_HEIGHT). It gives me 640×480 by default. But I want to modify it to 320×240. Just use ret = cap.set(cv.CAP_PROP_FRAME_WIDTH, 320) and ret = cap.set(cv.CAP_PROP_FRAME_HEIGHT, 240).
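A compact sketch of that property check and resize (note that some backends may reject the requested resolution, in which case set() returns False):

import cv2 as cv

cap = cv.VideoCapture(0)

# Default capture resolution (often 640x480, but system dependent)
print(cap.get(cv.CAP_PROP_FRAME_WIDTH), cap.get(cv.CAP_PROP_FRAME_HEIGHT))

# Request 320x240 and check whether the backend accepted it
ok_w = cap.set(cv.CAP_PROP_FRAME_WIDTH, 320)
ok_h = cap.set(cv.CAP_PROP_FRAME_HEIGHT, 240)
print(ok_w, ok_h, cap.get(cv.CAP_PROP_FRAME_WIDTH), cap.get(cv.CAP_PROP_FRAME_HEIGHT))

cap.release()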

Note If you are getting an error, make sure your camera is working fine using any other camera application (like Cheese in Linux).

Playing Video from file

Playing a video from a file is the same as capturing it from a camera: just change the camera index to a video file name. Also, while displaying the frame, use an appropriate time for cv.waitKey(). If it is too low, the video will play very fast, and if it is too high, the video will be slow (well, that is how you can display videos in slow motion). 25 milliseconds will be OK in normal cases.
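For completeness, here is a minimal sketch of playing a file in grayscale with a 25 ms delay per frame ('vtest.avi' is just a placeholder file name; use your own video):

import cv2 as cv

cap = cv.VideoCapture('vtest.avi')   # placeholder file name

while cap.isOpened():
    ret, frame = cap.read()

    # If the frame is read correctly, ret is True
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break

    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    cv.imshow('frame', gray)

    # ~25 ms per frame suits typical videos; raise it for slow motion
    if cv.waitKey(25) == ord('q'):
        break

cap.release()
cv.destroyAllWindows()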

