Raspberry Pi Documentation

Camera software

Introducing the Raspberry Pi cameras

There are now several official Raspberry Pi camera modules. The original 5-megapixel model was released in 2013. It was followed by the 8-megapixel Camera Module 2, released in 2016. The latest model is the 12-megapixel Camera Module 3, released in 2023. The original 5MP device is no longer available from Raspberry Pi.

Additionally, a 12-megapixel High Quality (HQ) Camera for use with external lenses was released in a CS-mount variant in 2020, with an M12-mount variant following in 2023. There is no infrared version of the HQ Camera.

All of the camera modules come in visible light and infrared versions, while the Camera Module 3 also comes as a standard or wide field of view (FoV) model, for a total of four different Camera Module 3 variants.

Further details on the camera modules can be found in the camera hardware page.

All Raspberry Pi cameras are capable of taking high-resolution photographs, along with full HD 1080p video, and can be controlled programmatically. This documentation describes how to use the camera in various scenarios, and how to use the various software tools.

Once you’ve installed your camera module, there are various ways the cameras can be used. The simplest option is to use one of the provided camera applications, such as rpicam-still or rpicam-vid.

libcamera and rpicam-apps


rpicam-apps applications have been renamed from libcamera-* to rpicam-*. Symbolic links are installed to allow users to keep using the old application names, but these will be deprecated in the future. Users are encouraged to adopt the new application names as soon as possible.


libcamera is a new software library aimed at supporting complex camera systems directly from the Linux operating system. It enables us to drive the Raspberry Pi camera system directly from open-source code running on Arm processors. The proprietary code running on the Broadcom GPU, to which users have no access, is almost completely bypassed.

libcamera presents a C++ API to applications. It works at the level of configuring the camera and then allowing an application to request image frames. These image buffers reside in system memory and can be passed directly to still image encoders (such as JPEG) or to video encoders (such as h.264). Ancillary functions such as encoding images or displaying them are beyond the purview of libcamera itself.

For this reason Raspberry Pi supplies a small set of example rpicam-apps. These are simple applications, built on top of libcamera, and are designed to emulate the function of the legacy stack built on Broadcom’s proprietary GPU code (some users will recognise these legacy applications as raspistill and raspivid). The applications we provide are:

  • rpicam-hello A simple "hello world" application which starts a camera preview stream and displays it on the screen.

  • rpicam-jpeg A simple application to run a preview window and then capture high-resolution still images.

  • rpicam-still A more complex still-image capture application, which emulates more of the features of the original raspistill application.

  • rpicam-vid A video-capture application.

  • rpicam-raw A basic application for capturing raw (unprocessed Bayer) frames directly from the sensor.

  • rpicam-detect This application is not built by default, but users can build it if they have TensorFlow Lite installed on their Raspberry Pi. It captures JPEG images when certain objects are detected.

Raspberry Pi’s rpicam-apps are command-line applications that make it easy to capture images and video from the camera. They are also examples of how users can create their own rpicam-based applications with custom functionality to suit their own requirements. The source code for the rpicam-apps is freely available under a BSD 2-Clause licence.

More about libcamera

libcamera is an open source Linux community project. More information is available at the libcamera website.

The libcamera source code can be found and checked out from the official libcamera repository, although we work from a fork that lets us control when we get libcamera updates.

Underneath the libcamera core, Raspberry Pi provides a custom pipeline handler, which is the layer that libcamera uses to drive the sensor and image signal processor (ISP) on the Raspberry Pi. There is also a collection of well-known control algorithms, or image-processing algorithms (IPAs) in libcamera parlance, such as auto exposure/gain control (AEC/AGC), auto white balance (AWB), auto lens-shading correction (ALSC) and so on.

All this code is open source and runs on the Raspberry Pi’s Arm cores. There is only a very thin layer of code on the GPU which translates Raspberry Pi’s own control parameters into register writes for the Broadcom ISP.

Raspberry Pi’s implementation of libcamera supports not only the four standard Raspberry Pi cameras (the OV5647 or V1 camera, the IMX219 or V2 camera, the IMX477 or HQ camera and the IMX708 or Camera Module 3) but also third-party sensors such as the IMX290, IMX327, OV9281 and IMX378. We are always pleased to work with vendors who would like to see their sensors supported directly by libcamera.

We also supply a tuning file for each of these sensors, which can be edited to change the processing performed by the Raspberry Pi hardware on the raw images received from the image sensor, including aspects like the colour processing, the amount of noise suppression or the behaviour of the control algorithms.

For further information on libcamera for the Raspberry Pi, please consult the Tuning guide for the Raspberry Pi cameras and libcamera.

Getting started

Using the camera for the first time

On Raspberry Pi 3 and earlier devices running Bullseye, you need to re-enable Glamor in order to make the X Windows hardware accelerated preview window work. Enter sudo raspi-config at a terminal window and then choose Advanced Options, Glamor and Yes. Quit raspi-config and let it reboot your Raspberry Pi.

When running a recent version of Raspberry Pi OS, the five basic rpicam-apps are already installed. In this case, official Raspberry Pi cameras will also be detected and enabled automatically.

You can check that everything is working by entering:

rpicam-hello

You should see a camera preview window for about five seconds.

Raspberry Pi 3 and older devices running Bullseye may not by default be using the correct display driver. Refer to the /boot/firmware/config.txt file and ensure that either dtoverlay=vc4-fkms-v3d or dtoverlay=vc4-kms-v3d is currently active. Please reboot if you needed to change this.

If you do need to alter the configuration

You may need to alter the camera configuration in your /boot/firmware/config.txt file if:

  • You are using a third-party camera (the manufacturer’s instructions should explain the changes you need to make)

  • You are using an official Raspberry Pi camera but wish to use a non-standard driver/overlay

If you do need to add your own dtoverlay, the following are currently recognised.

Camera Module	In /boot/firmware/config.txt

V1 camera (OV5647)
dtoverlay=ov5647

V2 camera (IMX219)
dtoverlay=imx219

HQ camera (IMX477)
dtoverlay=imx477

GS camera (IMX296)
dtoverlay=imx296

Camera Module 3 (IMX708)
dtoverlay=imx708

IMX290 and IMX327
dtoverlay=imx290,clock-frequency=74250000 or dtoverlay=imx290,clock-frequency=37125000 (both modules share the imx290 kernel driver; please refer to instructions from the module vendor for the correct frequency)





To override the automatic camera detection, you will need to delete the entry camera_auto_detect=1 if present in the config.txt file. Your Raspberry Pi will need to be rebooted after editing this file.

Setting camera_auto_detect=0 disables the boot-time detection completely.
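
Putting the two settings together, a config.txt fragment that disables auto-detection and loads a camera driver manually (using the IMX290 overlay from the table above as the example) might look like this:

```
camera_auto_detect=0
dtoverlay=imx290,clock-frequency=74250000
```

Remember to reboot after editing the file.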


If the Camera Module isn’t working correctly, there are a number of things you can try:

  • Is the flat flexible cable attached to the Camera Serial Interface (CSI), not the Display Serial Interface (DSI)? The connector will fit into either port. The Camera port is located near the HDMI connector.

  • Are the connectors all firmly seated, and are they the right way round? They must be straight in their sockets.

  • Is the Camera Module connector, between the smaller black Camera Module itself and the PCB, firmly attached? Sometimes this connection can come loose during transit or when putting the Camera Module in a case. Using a fingernail, flip up the connector on the PCB, then reconnect it with gentle pressure. It engages with a very slight click. Don’t force it; if it doesn’t engage, it’s probably slightly misaligned.

  • Have sudo apt update and sudo apt full-upgrade been run?

  • Is your power supply sufficient? The Camera Module adds about 200-250mA to the power requirements of your Raspberry Pi.

  • If you’ve checked all the above issues and the Camera Module is still not working, try posting on our forums for more help.


rpicam-hello is the equivalent of a "hello world" application for the camera. It starts the camera, displays a preview window, and does nothing else. To display a preview window for about 5 seconds:

rpicam-hello

The -t <duration> option lets the user select how long the window is displayed, where <duration> is given in milliseconds. To run the preview indefinitely, use:

rpicam-hello -t 0

The preview can be halted either by clicking the window’s close button, or using Ctrl-C in the terminal.


rpicam-apps uses a third-party library to interpret command line options. This includes long-form options where the option name consists of more than one character preceded by --, and short-form options which can only be a single character preceded by a single -. For the most part, option names are chosen to match those used by the legacy raspicam applications with the exception that we can no longer handle multi-character option names with a single -. Any such legacy options have been dropped, and the long form with -- must be used instead.

The options are classified broadly into three groups: those that are common, those that are specific to still images, and those that are for video encoding. They are supported in an identical manner across all the applications where they apply.

Please refer to the command line options documentation for a complete list.

The tuning file

Raspberry Pi’s libcamera implementation includes a tuning file for each different type of camera module. This is a file that describes or tunes the parameters that will be passed to the algorithms and hardware to produce the best image quality. libcamera is only able to determine automatically the image sensor being used, not the module as a whole, even though the whole module affects the "tuning". For this reason it is sometimes necessary to override the default tuning file for a particular sensor.

For example, the no-IR-filter (NoIR) versions of sensors require different AWB settings to the standard versions, so the IMX219 NoIR being used with a Pi 4 or earlier device should be run using:

rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/vc4/imx219_noir.json

Raspberry Pi 5 uses a different tuning file in a different folder, so here you would use:

rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/pisp/imx219_noir.json

If you are using a Soho Enterprises SE327M12 module with a Pi 4 you would use:

rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/vc4/se327m12.json

This also means that users can copy an existing tuning file and alter it according to their own preferences, so long as the --tuning-file parameter is pointed to the new version.

The --tuning-file parameter, in common with other rpicam-hello command line options, applies identically across all the rpicam-apps.

Preview window

Most of the rpicam-apps display a preview image in a window. If there is no active desktop environment, it will draw directly to the display using Linux Direct Rendering Manager (DRM), otherwise it will attempt to use the desktop environment. Both paths use zero-copy buffer sharing with the GPU, and a consequence of this is that X forwarding is not supported.

For this reason there is a third kind of preview window which does support X forwarding, which can be requested with the --qt-preview option. This implementation does not benefit from zero-copy buffer sharing, nor from 3D acceleration, which makes it computationally expensive (especially for large previews), and is not recommended.

Older systems using Gtk2 may, when linked with OpenCV, produce Glib-GObject errors and fail to show the Qt preview window. In this case edit the file /etc/xdg/qt5ct/qt5ct.conf as root and replace the line containing style=gtk2 with style=gtk3.

The preview window can be suppressed entirely with the -n (--nopreview) option.

The --info-text option allows the user to request that certain helpful image information is displayed on the window title bar using "% directives". For example:

rpicam-hello --info-text "red gain %rg, blue gain %bg"

…​will display the current red and blue gain values.

For the HQ camera, use --info-text "%focus" to display the focus measure, which will be helpful for focusing the lens.

A full description of the --info-text parameter is given in the command line options documentation.


rpicam-jpeg is a simple still image capture application. It deliberately avoids some of the additional features of rpicam-still, which attempts to emulate raspistill more fully. This means that the code is significantly easier to understand, while still providing many of the same features.

To capture a full resolution JPEG image use:

rpicam-jpeg -o test.jpg

This will display a preview for about five seconds, and then capture a full resolution JPEG image to the file test.jpg.

The -t <duration> option can be used to alter the length of time the preview shows, and the --width and --height options will change the resolution of the captured still image. For example:

rpicam-jpeg -o test.jpg -t 2000 --width 640 --height 480

…​will capture a VGA sized image.

Exposure control

All the rpicam-apps allow the user to run the camera with fixed shutter-speed and gain.

Capture an image with an exposure of 20ms and a gain of 1.5×:

rpicam-jpeg -o test.jpg -t 2000 --shutter 20000 --gain 1.5

The gain will be applied as analogue gain within the sensor, up until it reaches the maximum analogue gain permitted by the kernel sensor driver, after which the remainder will be applied as digital gain.

Raspberry Pi’s AEC/AGC algorithm allows applications to specify exposure compensation: the ability to make images darker or brighter by a given number of stops.

rpicam-jpeg --ev -0.5 -o darker.jpg
rpicam-jpeg --ev 0 -o normal.jpg
rpicam-jpeg --ev 0.5 -o brighter.jpg
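
Exposure compensation is measured in stops, where each stop represents a doubling (or halving) of the target brightness, so --ev 0.5 aims roughly 1.4× brighter. A quick Python illustration (the function name is ours, not part of rpicam-apps):

```python
def ev_to_brightness_factor(ev: float) -> float:
    """Each exposure compensation stop doubles (positive EV) or
    halves (negative EV) the target image brightness."""
    return 2.0 ** ev

print(ev_to_brightness_factor(-0.5))  # ~0.71x: darker
print(ev_to_brightness_factor(0.5))   # ~1.41x: brighter
```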
Further remarks on digital gain

Digital gain is applied by the ISP, not by the sensor. The digital gain will always be very close to 1.0 unless:

  • The total gain requested (either by the --gain option, or by the exposure profile in the camera tuning) exceeds that which can be applied as analogue gain within the sensor. Only the extra gain required will be applied as digital gain.

  • One of the colour gains is less than 1 (note that colour gains are applied as digital gain too). In this case the advertised digital gain will settle to 1 / min(red_gain, blue_gain). This means that one of the colour channels - just not the green one - is having unity digital gain applied to it.

  • The AEC/AGC is changing. When the AEC/AGC is moving the digital gain will typically vary to some extent to try and smooth out any fluctuations, but it will quickly settle back to its normal value.
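
The first two rules above can be sketched in Python. Both helpers are illustrative only (the names and the maximum analogue gain value are our assumptions, not rpicam-apps code):

```python
def split_gain(total_gain: float, max_analogue_gain: float):
    """Apply as much of the requested gain as possible in the sensor
    (analogue); whatever remains becomes digital gain in the ISP."""
    analogue = min(total_gain, max_analogue_gain)
    return analogue, total_gain / analogue

def settled_digital_gain(red_gain: float, blue_gain: float) -> float:
    """If either colour gain drops below 1, the advertised digital gain
    settles to 1 / min(red_gain, blue_gain); otherwise it stays at 1."""
    return max(1.0, 1.0 / min(red_gain, blue_gain))

# Requesting 16x total gain on a sensor whose driver allows 8x analogue:
print(split_gain(16.0, 8.0))           # (8.0, 2.0): 8x analogue, 2x digital
print(settled_digital_gain(0.8, 1.2))  # 1.25
```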


rpicam-still is very similar to rpicam-jpeg but supports more of the legacy raspistill options. As before, a single image can be captured with:

rpicam-still -o test.jpg


rpicam-still allows files to be saved in a number of different formats. It supports both png and bmp encoding. It also allows files to be saved as a binary dump of RGB or YUV pixels with no encoding or file format at all. In these latter cases the application reading the files will have to understand the pixel arrangement for itself.

rpicam-still -e png -o test.png
rpicam-still -e bmp -o test.bmp
rpicam-still -e rgb -o test.data
rpicam-still -e yuv420 -o test.data

Note that the format in which the image is saved depends on the -e (equivalently --encoding) option and is not selected automatically based on the output file name.
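
As an example of interpreting such a dump, the following Python sketch reads a file written with -e rgb, assuming tightly packed 8-bit interleaved RGB with no row padding (an assumption worth checking for your capture; the width and height must match the capture settings):

```python
import numpy as np

def load_rgb_dump(path: str, width: int, height: int) -> np.ndarray:
    """Read a headerless RGB dump into a (height, width, 3) uint8 array.
    Assumes 8 bits per channel, interleaved, with no row padding."""
    data = np.fromfile(path, dtype=np.uint8)
    expected = width * height * 3
    if data.size != expected:
        raise ValueError(f"expected {expected} bytes, got {data.size}")
    return data.reshape(height, width, 3)
```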

Raw image capture

Raw images are the images produced directly by the image sensor, before any processing is applied to them either by the ISP (Image Signal Processor) or any of the CPU cores. For colour image sensors these are usually Bayer format images. Note that raw images are quite different from the processed but unencoded RGB or YUV images that we saw earlier.

To capture a raw image, use:

rpicam-still -r -o test.jpg

Here, the -r option (also --raw) indicates to capture the raw image as well as the JPEG. In fact, the raw image is the exact image from which the JPEG was produced. Raw images are saved in DNG (Adobe Digital Negative) format and are compatible with many standard applications, such as dcraw or RawTherapee. The raw image is saved to a file with the same name but the extension .dng, thus test.dng in this case.

These DNG files contain metadata pertaining to the image capture, including black levels, white balance information and the colour matrix used by the ISP to produce the JPEG. This makes these DNG files much more convenient for later "by hand" raw conversion with some of the aforementioned tools. Using exiftool shows all the metadata encoded into the DNG file:

File Name                       : test.dng
Directory                       : .
File Size                       : 24 MB
File Modification Date/Time     : 2021:08:17 16:36:18+01:00
File Access Date/Time           : 2021:08:17 16:36:18+01:00
File Inode Change Date/Time     : 2021:08:17 16:36:18+01:00
File Permissions                : rw-r--r--
File Type                       : DNG
File Type Extension             : dng
MIME Type                       : image/x-adobe-dng
Exif Byte Order                 : Little-endian (Intel, II)
Make                            : Raspberry Pi
Camera Model Name               : /base/soc/i2c0mux/i2c@1/imx477@1a
Orientation                     : Horizontal (normal)
Software                        : rpicam-still
Subfile Type                    : Full-resolution Image
Image Width                     : 4056
Image Height                    : 3040
Bits Per Sample                 : 16
Compression                     : Uncompressed
Photometric Interpretation      : Color Filter Array
Samples Per Pixel               : 1
Planar Configuration            : Chunky
CFA Repeat Pattern Dim          : 2 2
CFA Pattern 2                   : 2 1 1 0
Black Level Repeat Dim          : 2 2
Black Level                     : 256 256 256 256
White Level                     : 4095
DNG Version                     :
DNG Backward Version            :
Unique Camera Model             : /base/soc/i2c0mux/i2c@1/imx477@1a
Color Matrix 1                  : 0.8545269369 -0.2382823821 -0.09044229197 -0.1890484985 1.063961506 0.1062747385 -0.01334283455 0.1440163847 0.2593136724
As Shot Neutral                 : 0.4754476844 1 0.413686484
Calibration Illuminant 1        : D65
Strip Offsets                   : 0
Strip Byte Counts               : 0
Exposure Time                   : 1/20
ISO                             : 400
CFA Pattern                     : [Blue,Green][Green,Red]
Image Size                      : 4056x3040
Megapixels                      : 12.3
Shutter Speed                   : 1/20

Note that there is only a single calibrated illuminant (the one determined by the AWB algorithm, although it is always labelled "D65"), and that dividing the ISO number by 100 gives the analogue gain that was being used.
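
These relationships are easy to apply in code. A pair of hypothetical Python helpers (ours, for illustration) converting the metadata back into the units used by the command line options:

```python
def analogue_gain_from_iso(iso: float) -> float:
    """The DNG ISO value divided by 100 gives the analogue gain used."""
    return iso / 100.0

def shutter_us_from_exposure(exposure_s: float) -> int:
    """Convert the DNG exposure time (in seconds) to the microsecond
    units used by the --shutter option."""
    return round(exposure_s * 1_000_000)

# From the metadata above: ISO 400, exposure time 1/20 s
print(analogue_gain_from_iso(400))       # 4.0
print(shutter_us_from_exposure(1 / 20))  # 50000
```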

Very long exposures

To capture very long exposure images, we need to be careful to disable the AEC/AGC and AWB because these algorithms will otherwise force the user to wait for a number of frames while they converge. The way to disable them is to supply explicit values. Additionally, the entire preview phase of the capture can be skipped with the --immediate option.

So, to perform a 100-second exposure capture, use:

rpicam-still -o long_exposure.jpg --shutter 100000000 --gain 1 --awbgains 1,1 --immediate

For reference, the maximum exposure times of the three official Raspberry Pi cameras can be found in this table.


The video capture application for Raspberry Pi is rpicam-vid. It displays a preview window and writes the encoded bitstream to the specified output. On Raspberry Pi models with a hardware H.264 encoder, that encoder is used by default. For example, to write a ten-second video to a file:

rpicam-vid -t 10000 -o test.h264

The resulting file can be played with vlc (among other applications).

vlc test.h264

This is an unpackaged video bitstream, and is not wrapped in any kind of container format (such as an mp4 file). The --save-pts option can be used to output frame timestamps so that the bitstream can subsequently be converted into an appropriate format using a tool like mkvmerge.

rpicam-vid -o test.h264 --save-pts timestamps.txt

If you want an mkv file:

mkvmerge -o test.mkv --timecodes 0:timestamps.txt test.h264
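
Assuming the timestamp file holds one millisecond timestamp per line, with any lines beginning with # being comments, a short Python sketch can estimate the capture's average framerate:

```python
def average_fps(path: str) -> float:
    """Estimate the average framerate from a --save-pts timestamp file,
    assuming one millisecond timestamp per line and '#' comment lines."""
    with open(path) as f:
        ts = [float(line) for line in f
              if line.strip() and not line.lstrip().startswith("#")]
    if len(ts) < 2:
        raise ValueError("need at least two timestamps")
    return (len(ts) - 1) * 1000.0 / (ts[-1] - ts[0])
```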


There is support for motion JPEG, and also for uncompressed and unformatted YUV420:

rpicam-vid -t 10000 --codec mjpeg -o test.mjpeg
rpicam-vid -t 10000 --codec yuv420 -o test.data

In both cases the --codec parameter determines the output format, not the extension of the output file.

The --segment parameter breaks output files up into chunks of the segment size (given in milliseconds). This is handy for breaking a motion JPEG stream up into individual JPEG files by specifying very short (1 millisecond) segments.

rpicam-vid -t 10000 --codec mjpeg --segment 1 -o test%05d.jpeg

Note that the output file name needs to include a counter (%05d, as above) so that each segment is written to a new file instead of overwriting the previous one.

Network streaming

This section describes native streaming from rpicam-vid. It is also possible to use the libav backend for network streaming.

To stream video using UDP, on the Raspberry Pi (server) use:

rpicam-vid -t 0 --inline -o udp://<ip-addr>:<port>

…​where <ip-addr> is the IP address of the client, or multicast address (if appropriately configured to reach the client). On the client use (for example):

vlc udp://@:<port> :demux=h264

Alternatively, using the same <port> value:

ffplay udp://<ip-addr-of-server>:<port> -fflags nobuffer -flags low_delay -framedrop

Video can be streamed using TCP. To use the Raspberry Pi as a server:

rpicam-vid -t 0 --inline --listen -o tcp://<port>

On the client:

vlc tcp/h264://<ip-addr-of-server>:<port>


ffplay tcp://<ip-addr-of-server>:<port> -vf "setpts=N/30" -fflags nobuffer -flags low_delay -framedrop

…​for a 30 frames per second stream with low latency.

The Raspberry Pi will wait until the client connects, and then start streaming video.


To serve an RTSP stream, vlc can be used on the Raspberry Pi (other RTSP servers are also available):

rpicam-vid -t 0 --inline -o - | cvlc stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/stream1}' :demux=h264

This can be played with:

vlc rtsp://<ip-addr-of-server>:8554/stream1


ffplay rtsp://<ip-addr-of-server>:8554/stream1 -vf "setpts=N/30" -fflags nobuffer -flags low_delay -framedrop

In all cases, the preview window on the server (the Raspberry Pi) can be suppressed with the -n (--nopreview) option. Note also the use of the --inline option, which forces the stream header information to be included with every intra (I) frame. This is important so that a client can correctly understand the stream if it missed the very beginning.

Recent versions of VLC seem to have problems with playback of H.264 streams. We recommend using ffplay for playback using the above commands until these issues have been resolved.

High framerate capture

Using rpicam-vid to capture high framerate video (generally anything over 60fps) while minimising frame drops requires a few considerations.

  • The H.264 target level must be set to 4.2 with the --level 4.2 argument.

  • Software colour denoise processing must be turned off with the --denoise cdn_off argument.

  • For rates over 100 fps, disabling the display window with the -n option frees up some additional CPU cycles and helps avoid frame drops.

  • It is advisable to set force_turbo=1 in /boot/firmware/config.txt to ensure the CPU clock does not get throttled during the video capture. See the force_turbo documentation for further details.

  • Adjust the ISP output resolution with --width 1280 --height 720 or something even lower to achieve your framerate target.

  • On a Pi 4, you can overclock the GPU to improve performance by adding gpu_freq=550 or higher in /boot/firmware/config.txt. See the overclocking documentation for further details.

An example command for 1280×720 120fps video encode would be:

rpicam-vid --level 4.2 --framerate 120 --width 1280 --height 720 --save-pts timestamp.pts -o video.264 -t 10000 --denoise cdn_off -n

libav integration with rpicam-vid

rpicam-vid can use the ffmpeg/libav codec backend to encode audio and video streams and either save to a local file or stream over the network. At present, video is encoded through the hardware H.264 encoder on those models of Raspberry Pi that support hardware encoding, and audio is encoded by a number of available software encoders. To list the available output formats, use the ffmpeg -formats command.

To enable the libav backend, use the --codec libav command line option. Once enabled, the following configuration options are available:

    --libav-format,     libav output format to be used <string>

Set the libav output format to use. These output formats can be specified as containers (e.g. mkv, mp4, avi), or stream output (e.g. h264 or mpegts). If this option is not provided, libav tries to deduce the output format from the filename specified by the -o command line argument.

Example: To save a video in an mkv container, the following commands are equivalent:

rpicam-vid --codec libav -o test.mkv
rpicam-vid --codec libav --libav-format mkv -o test.raw
    --libav-audio,     Enable audio recording

Set this option to enable audio encoding together with the video stream. When audio encoding is enabled, an output format that supports audio (e.g. mpegts, mkv, mp4) must be used.

    --audio-codec,     Selects the audio codec <string>

Selects which software audio codec is used for encoding. By default aac is used. To list the available audio codecs, use the ffmpeg -codecs command.

    --audio-bitrate,     Selects the audio bitrate <number>

Sets the audio encoding bitrate in bits per second.

Example: To record audio at 16 kilobits/sec with the mp2 codec, use rpicam-vid --codec libav -o test.mp4 --audio-codec mp2 --audio-bitrate 16384

    --audio-samplerate,     Set the audio sampling rate <number>

Set the audio sampling rate in Hz for encoding. Set to 0 (default) to use the input sample rate.

    --audio-device,     Chooses an audio recording device to use <string>

Selects which ALSA input device to use for audio recording. The audio device string can be obtained with the following command:

$ pactl list | grep -A2 'Source #' | grep 'Name: '
    Name: alsa_output.platform-bcm2835_audio.analog-stereo.monitor
    Name: alsa_output.platform-fef00700.hdmi.hdmi-stereo.monitor
    Name: alsa_output.usb-GN_Netcom_A_S_Jabra_EVOLVE_LINK_000736B1214E0A-00.analog-stereo.monitor
    Name: alsa_input.usb-GN_Netcom_A_S_Jabra_EVOLVE_LINK_000736B1214E0A-00.mono-fallback
    --av-sync,     Audio/Video sync control <number>

This option can be used to shift the audio sample timestamp by a value given in microseconds relative to the video frame. Negative values may also be used.

Network streaming with libav

It is possible to use the libav backend as a network streaming source for audio/video. To do this, the output filename specified by the -o argument must be given as a protocol URL; see the ffmpeg protocols documentation for more details on protocol usage. Some examples:

To stream audio/video using TCP

rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio -o "tcp://"

To stream audio/video using UDP

rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio -o "udp://<ip-addr>:<port>"


rpicam-raw behaves like a video-recording application, except that it records raw Bayer frames directly from the sensor. It does not show a preview window. For a two-second raw clip:

rpicam-raw -t 2000 -o test.raw

The raw frames are dumped with no formatting information at all, one directly after another. The application prints the pixel format and image dimensions to the terminal window so that the user can see how to interpret the pixel data.
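
For example, on a Pi 4 a mode reported as SRGGB10_CSI2P uses the MIPI CSI-2 10-bit packing, in which every group of five bytes holds four pixels: four bytes of high bits followed by one byte containing the four 2-bit remainders. A Python sketch of the unpacking (row stride and padding are ignored here and may need handling for real frames):

```python
def unpack_raw10(packed: bytes) -> list[int]:
    """Unpack MIPI CSI-2 RAW10 data: every 5 bytes hold 4 pixels.
    Bytes 0-3 are the high 8 bits; byte 4 packs the four 2-bit remainders."""
    pixels = []
    for i in range(0, len(packed) - len(packed) % 5, 5):
        b = packed[i:i + 5]
        low = b[4]
        for j in range(4):
            pixels.append((b[j] << 2) | ((low >> (2 * j)) & 0x3))
    return pixels
```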

By default, the raw frames are saved in a single (and potentially very large) file. As we saw previously, the --segment option can be used to direct each frame to a separate file.

rpicam-raw -t 2000 --segment 1 -o test%05d.raw

In good conditions (using a fast SSD), rpicam-raw can get close to writing 12MP HQ camera frames (18MB of data each) to disk at 10fps. It writes the raw frames with no formatting in order to achieve these speeds; it has no capability to save them as DNG files like rpicam-still. If you want to be sure not to drop frames, you can reduce the framerate slightly using the --framerate option.

rpicam-raw -t 5000 --width 4056 --height 3040 -o test.raw --framerate 8

For more information on the raw formats, including how to choose between packed and unpacked versions, as well as the differences between Pi 5 and earlier models, please refer to the --mode option in the camera resolution options section.


rpicam-detect is not supplied by default in any Raspberry Pi OS distribution, but can be built by users who have installed TensorFlow Lite. Please refer to the rpicam-apps build instructions. You will need to run cmake with -DENABLE_TFLITE=1.

This application runs a preview window and monitors the contents using a Google MobileNet v1 SSD (Single Shot Detector) neural network that has been trained to identify about 80 classes of objects using the COCO dataset. It should recognise people, cars, cats and many other objects.

It starts by running a preview window; whenever a target object is detected, it performs a full-resolution JPEG capture, then returns to preview mode to continue monitoring. It provides a couple of additional command line options that do not apply elsewhere:

--object <name>

Detect objects with the given <name>. The name should be taken from the model’s label file.

--gap <number>

Wait at least this many frames after a capture before performing another. This is necessary because the neural network does not run on every frame, so it is best to give it a few frames to run again before considering another capture.

Please refer to the TensorFlow Lite object detector section for more general information on how to obtain and use this model. For example, you might spy secretly on your cats while you are away with:

rpicam-detect -t 0 -o cat%04d.jpg --lores-width 400 --lores-height 300 --post-process-file object_detect_tf.json --object cat

Common command line options

The following options apply across all the rpicam-apps with similar or identical semantics, unless noted otherwise.

	--help,		-h		Print help information for the application

The --help option causes every application to print its full set of command line options with a brief synopsis of each, and then quit.

	--version			Print out a software version number

All rpicam-apps will, when they see the --version option, print out a version string both for libcamera and rpicam-apps and then quit, for example:

rpicam-apps build: ca559f46a97a 27-09-2021 (14:10:24)
libcamera build: v0.0.0+3058-c29143f7
	--list-cameras			List the cameras available for use

The --list-cameras option displays the available cameras attached to the board that can be used by the application. This option also lists the sensor modes supported by each camera. For example:

Available cameras
0 : imx219 [3280x2464] (/base/soc/i2c0mux/i2c@1/imx219@10)
    Modes: 'SRGGB10_CSI2P' : 640x480 [206.65 fps - (1000, 752)/1280x960 crop]
                             1640x1232 [41.85 fps - (0, 0)/3280x2464 crop]
                             1920x1080 [47.57 fps - (680, 692)/1920x1080 crop]
                             3280x2464 [21.19 fps - (0, 0)/3280x2464 crop]
           'SRGGB8' : 640x480 [206.65 fps - (1000, 752)/1280x960 crop]
                      1640x1232 [41.85 fps - (0, 0)/3280x2464 crop]
                      1920x1080 [47.57 fps - (680, 692)/1920x1080 crop]
                      3280x2464 [21.19 fps - (0, 0)/3280x2464 crop]
1 : imx477 [4056x3040] (/base/soc/i2c0mux/i2c@1/imx477@1a)
    Modes: 'SRGGB10_CSI2P' : 1332x990 [120.05 fps - (696, 528)/2664x1980 crop]
           'SRGGB12_CSI2P' : 2028x1080 [50.03 fps - (0, 440)/4056x2160 crop]
                             2028x1520 [40.01 fps - (0, 0)/4056x3040 crop]
                             4056x3040 [10.00 fps - (0, 0)/4056x3040 crop]

In the above example, the IMX219 sensor is available at index 0 and IMX477 at index 1. The sensor mode identifier takes the following form:

S<Bayer order><Bit-depth>_<Optional packing> : <Resolution list>

For the IMX219 in the above example, all modes have an RGGB Bayer ordering and provide either 8-bit or 10-bit CSI2 packed readout at the listed resolutions. The crop is specified as (<x>, <y>)/<Width>x<Height>, where (x, y) is the location of the crop window of size Width x Height in the sensor array. The units remain native sensor pixels, even if the sensor is being used in a binning or skipping mode.
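As an illustrative sketch (not part of rpicam-apps), the mode identifier format described above can be split into its parts like this:

```python
import re

def parse_mode(mode_id):
    """Split a sensor mode identifier such as 'SRGGB10_CSI2P' into its parts.

    Follows the S<Bayer order><Bit-depth>_<Optional packing> form described
    above; this helper is only for illustration.
    """
    m = re.fullmatch(r"S([A-Z]{4})(\d+)(?:_(\w+))?", mode_id)
    if not m:
        raise ValueError(f"unrecognised mode identifier: {mode_id}")
    bayer, depth, packing = m.groups()
    return {"bayer_order": bayer, "bit_depth": int(depth),
            "packing": packing or "unpacked"}

# 'SRGGB10_CSI2P' -> RGGB ordering, 10 bits per pixel, CSI2-packed
print(parse_mode("SRGGB10_CSI2P"))
```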

	--camera			Selects which camera to use <index>

The --camera option will select which camera to use from the supplied <index> value. The <index> value can be obtained from the --list-cameras option.

	--config,	-c		Read options from the given file <filename>

Normally options are read from the command line, but in case multiple options are required it may be more convenient to keep them in a file.

Example: rpicam-hello -c config.txt

This is a text file containing individual lines of key=value pairs, for example:

timeout=99000
verbose=

Note how the = is required even for implicit options, and that the -- used on the command line are omitted. Only long form options are permitted (t=99000 would not be accepted).
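The correspondence between file entries and command line options can be sketched as follows (illustrative only — rpicam-apps parses the file itself; timeout and verbose are simply example long-form option names):

```python
def config_to_args(text):
    """Turn key=value lines into the equivalent long-form command line.

    An implicit option (one taking no value) appears as 'key=' in the file
    and becomes a bare '--key' on the command line.
    """
    args = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key, _, value = line.partition("=")
        args.append("--" + key)
        if value:
            args.append(value)
    return args

print(config_to_args("timeout=99000\nverbose=\n"))
```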

	--timeout,	-t		Delay before application stops automatically <milliseconds>

The --timeout option specifies how long the application runs before it stops, whether it is recording a video or showing a preview. In the case of still image capture, the application will show the preview window for this long before capturing the output image.

If unspecified, the default value is 5000 (5 seconds). The value zero causes the application to run indefinitely.

Example: rpicam-hello -t 0

Preview window

	--preview,	-p		Preview window settings <x,y,w,h>

Sets the size and location of the preview window (both desktop and DRM versions). It does not affect the resolution or aspect ratio of images being requested from the camera. The camera images will be scaled to the size of the preview window for display, and will be pillar/letter-boxed to fit.

Example: rpicam-hello -p 100,100,500,500

Letterboxed preview image
	--fullscreen,	-f		Fullscreen preview mode

Forces the preview window to use the whole screen, and the window will have no border or title bar. Again the image may be pillar/letter-boxed.

Example: rpicam-still -f -o test.jpg

	--qt-preview			Use Qt-based preview window

The preview window is switched to use the Qt-based implementation. This option is not normally recommended because it no longer uses zero-copy buffer sharing or GPU acceleration and is therefore very expensive; however, it does support X forwarding (which the other preview implementations do not).

The Qt preview window does not support the --fullscreen option. Generally it is advised to try and keep the preview window small.

Example: rpicam-hello --qt-preview

	--nopreview,	-n		Do not display a preview window

The preview window is suppressed entirely.

Example: rpicam-still -n -o test.jpg

	--info-text			Set window title bar text <string>

The supplied string is set as the title of the preview window (when running on a desktop environment). Additionally the string may contain a number of % directives which are substituted with information from the image metadata. The permitted directives are

	Directive	Substitution

	%frame		The sequence number of the frame
	%fps		The instantaneous frame rate
	%exp		The shutter speed used to capture the image, in microseconds
	%ag		The analogue gain applied to the image in the sensor
	%dg		The digital gain applied to the image by the ISP
	%rg		The gain applied to the red component of each pixel
	%bg		The gain applied to the blue component of each pixel
	%focus		The focus metric for the image, where a larger value implies a sharper image
	%lp		The current lens position in dioptres (1 / distance in metres)
	%afstate	The autofocus algorithm state (one of idle, scanning, focused or failed)

When not provided, the --info-text string defaults to "#%frame (%fps fps) exp %exp ag %ag dg %dg".

Example: rpicam-hello --info-text "Focus measure: %focus"

Image showing focus measure
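The substitution performed by --info-text can be sketched as a simple template expansion (illustrative only — rpicam-apps does this internally from the image metadata):

```python
def expand_info_text(template, metadata):
    """Substitute %-directives in an --info-text style template.

    Longer directive names are tried first so that, for example, %fps is
    not partially matched before %frame has been considered.
    """
    for name in sorted(metadata, key=len, reverse=True):
        template = template.replace("%" + name, str(metadata[name]))
    return template

meta = {"frame": 42, "fps": 30.0, "exp": 33000, "ag": 1.5, "dg": 1.0}
# Expand the default --info-text string with some sample metadata values
print(expand_info_text("#%frame (%fps fps) exp %exp ag %ag dg %dg", meta))
```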

Camera Resolution and Readout

	--width				Capture image width <width>
	--height			Capture image height <height>

These numbers specify the output resolution of the camera images captured by rpicam-still, rpicam-jpeg and rpicam-vid.

For rpicam-raw, it affects the size of the raw frames captured. Where a camera has a 2x2 binned readout mode, specifying a resolution not larger than this binned mode will result in the capture of 2x2 binned raw frames.

For rpicam-hello these parameters have no effect.


rpicam-vid -o test.h264 --width 1920 --height 1080 will capture 1080p video.

rpicam-still -r -o test.jpg --width 2028 --height 1520 will capture a 2028x1520 resolution JPEG. When using the HQ camera the sensor will be driven in its 2x2 binned mode so the raw file - captured in test.dng - will contain a 2028x1520 raw Bayer image.

	--viewfinder-width		Capture image width <width>
	--viewfinder-height		Capture image height <height>

These options affect only the preview (meaning both rpicam-hello and the preview phase of rpicam-jpeg and rpicam-still), and specify the image size that will be requested from the camera for the preview window. They have no effect on captured still images or videos. Nor do they affect the preview window as the images are resized to fit.

Example: rpicam-hello --viewfinder-width 640 --viewfinder-height 480

	--rawfull			Force sensor to capture in full resolution mode

This option forces the sensor to be driven in its full resolution readout mode for still and video capture, irrespective of the requested output resolution (given by --width and --height). It has no effect for rpicam-hello.

Using this option often incurs a frame rate penalty, as larger resolution frames are slower to read out.

Example: rpicam-raw -t 2000 --segment 1 --rawfull -o test%03d.raw will cause multiple full resolution raw frames to be captured. On the HQ camera each frame will be about 18MB in size. Without the --rawfull option the default video output resolution would have caused the 2x2 binned mode to be selected, resulting in 4.5MB raw frames.
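The arithmetic behind those sizes is straightforward; a quick sketch (approximate, as it ignores any per-line padding in the buffers):

```python
def packed_raw_size(width, height, bit_depth):
    """Approximate size in bytes of one CSI2-packed raw frame."""
    return width * height * bit_depth // 8

# HQ camera full resolution, 12 bits per pixel, packed: about 18MB per frame
full = packed_raw_size(4056, 3040, 12)
# 2x2 binned mode at 12 bits per pixel: about 4.5MB per frame
binned = packed_raw_size(2028, 1520, 12)
print(f"{full / 1e6:.1f} MB, {binned / 1e6:.1f} MB")
```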

	--mode				Specify sensor mode, given as <width>:<height>:<bit-depth>:<packing>

This option is more general than --rawfull and allows the precise selection of one of the camera modes. The mode should be specified by giving its width, height, bit-depth and packing, separated by colons. These numbers do not have to be exact as the system will select the closest it can find. Moreover, the bit-depth and packing are optional (defaulting to 12 and P for "packed" respectively).

On a Pi 4 or earlier device, "packed" modes will return pixels that are packed according to the MIPI CSI-2 standard, meaning:

  • 10 bit camera modes will be packed with 4 pixels in 5 bytes. The first 4 bytes contain the 8 MSBs (most significant bits) of each pixel, and the final byte contains the 4 pairs of LSBs.

  • 12 bit camera modes will be packed with 2 pixels in 3 bytes. The first 2 bytes contain the 8 MSBs, and the final byte contains the 4 LSBs of both pixels.

"Unpacked" modes will use exactly 2 bytes per pixel. The 2-byte words will be zero padded at the most significant end, meaning that, for example, a pixel from a 10-bit camera mode cannot exceed the value 1023.

On a Pi 5 (and any subsequent) devices, raw modes are handled somewhat differently. The "packed" modes will give you pixel values that are compressed with a visually lossless compression scheme into 8 bits, therefore using only 1 byte per pixel.

"Unpacked" modes on a Pi 5 will be interpreted as a request for uncompressed and unpacked pixels, again using 16 bits per pixel. However, in contrast to the Pi 4, these values are zero-padded at the least significant end. Therefore, they will use the full 16 bit dynamic range, whatever pixel depth the sensor was delivering.

In both cases, users wishing to access the pixel values themselves are advised to use the "unpacked" formats as these are much easier to manipulate.

  • 4056:3040:12:P - 4056x3040 resolution, 12 bits per pixel, packed. On a Pi 4 (or earlier) the raw image buffers will be packed so that 2 pixel values occupy only 3 bytes. On a Pi 5 the pixels will be compressed to 1 byte per pixel.

  • 1632:1224:10 - 1632x1224 resolution, 10 bits per pixel. It will default to "packed". A 10-bit packed mode would store 4 pixels in every 5 bytes on a Pi 4, or 1 byte per pixel (compressed) on a Pi 5.

  • 2592:1944:10:U - 2592x1944 resolution, 10 bits per pixel, unpacked. An unpacked format will store every pixel in 2 bytes. On a Pi 4 the top 6 bits of each value will be zero, but on a Pi 5 the bottom 6 bits of each value will be zero.

  • 3264:2448 - 3264x2448 resolution. It will try to select the default 12-bit mode but in the case of the v2 camera there isn’t one, so a 10-bit mode would be chosen instead.
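As an illustrative sketch of the Pi 4 packed layout described above (ignoring line stride and padding), 10-bit CSI-2 data can be unpacked like this:

```python
def unpack_10bit_csi2(data):
    """Unpack MIPI CSI-2 packed 10-bit pixels (4 pixels in 5 bytes).

    The first four bytes hold each pixel's 8 MSBs; the fifth byte holds
    the four 2-bit LSB pairs, lowest pixel in the lowest bits.
    """
    pixels = []
    for i in range(0, len(data), 5):
        group = data[i:i + 5]
        lsbs = group[4]
        for j in range(4):
            pixels.append((group[j] << 2) | ((lsbs >> (2 * j)) & 0x3))
    return pixels

# Four saturated pixels (value 1023) pack to five 0xFF bytes
print(unpack_10bit_csi2(bytes([0xFF] * 5)))
```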

The --mode option affects the mode choice for video recording and stills capture. To control the mode choice during the preview phase prior to stills capture, please use the --viewfinder-mode option.

	--viewfinder-mode		Specify sensor mode, given as <width>:<height>:<bit-depth>:<packing>

This option is identical to the --mode option except that it applies only during the preview phase of stills capture (also used by the rpicam-hello application).

	--lores-width			Low resolution image width <width>
	--lores-height			Low resolution image height <height>

libcamera allows the possibility of delivering a second lower resolution image stream from the camera system to the application. This stream is available in both the preview and the video modes (i.e. rpicam-hello and the preview phase of rpicam-still, and rpicam-vid), and can be used, among other things, for image analysis. For stills captures, the low resolution image stream is not available.

The low resolution stream has the same field of view as the other image streams. If a different aspect ratio is specified for the low resolution stream, then those images will be squashed so that the pixels are no longer square.

During video recording (rpicam-vid), specifying a low resolution stream will disable some extra colour denoise processing that would normally occur.

Example: rpicam-hello --lores-width 224 --lores-height 224

Note that the low resolution stream is not particularly useful unless used in conjunction with image post-processing.

	--hflip				Read out with horizontal mirror
	--vflip				Read out with vertical flip
	--rotation			Use hflip and vflip to create the given rotation <angle>

These options affect the order of read-out from the sensor, and can be used to mirror the image horizontally, and/or flip it vertically. The --rotation option permits only the value 0 or 180, so note that 90 or 270 degree rotations are not supported. Moreover, --rotation 180 is identical to --hflip --vflip.

Example: rpicam-hello --vflip --hflip

	--roi				Select a crop (region of interest) from the camera <x,y,w,h>

The --roi (region of interest) option allows the user to select a particular crop from the full field of view provided by the sensor. The coordinates are specified as a proportion of the available field of view, so that --roi 0,0,1,1 would have no effect at all.

The --roi parameter implements what is commonly referred to as "digital zoom".

Example: rpicam-hello --roi 0.25,0.25,0.5,0.5 will select exactly a quarter of the total number of pixels cropped from the centre of the image.
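The mapping from proportional coordinates to sensor pixels can be sketched as follows (illustrative only; the HQ camera's 4056x3040 array is used as the example sensor):

```python
def roi_to_pixels(roi, sensor_width, sensor_height):
    """Convert a proportional --roi x,y,w,h into a pixel crop rectangle."""
    x, y, w, h = roi
    return (round(x * sensor_width), round(y * sensor_height),
            round(w * sensor_width), round(h * sensor_height))

# --roi 0.25,0.25,0.5,0.5 selects the central quarter of the pixels
print(roi_to_pixels((0.25, 0.25, 0.5, 0.5), 4056, 3040))
```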

	--hdr				Run the camera in HDR mode <mode>

The --hdr option causes the camera to be run in the HDR (High Dynamic Range) mode given by <mode>. On Pi 4 and earlier devices, this option only works for certain supported cameras, including the Raspberry Pi Camera Module 3, and on Pi 5 devices it can be used with all cameras. <mode> may take the following values:

  • off - HDR is disabled. This is the default value if the --hdr option is omitted entirely.

  • auto - If the sensor supports HDR, then the on-sensor HDR mode is enabled. Otherwise, on Pi 5 devices, the Pi 5’s on-chip HDR mode will be enabled. On a Pi 4 or earlier device, HDR will be disabled if the sensor does not support it. This mode will be applied if the --hdr option is supplied without a <mode> value.

  • single-exp - On a Pi 5, the on-chip HDR mode will be enabled, even if the sensor itself supports HDR. On earlier devices, HDR (even on-sensor HDR) will be disabled.

Example: rpicam-still --hdr -o hdr.jpg for capturing a still image, or rpicam-vid --hdr -o hdr.h264 to capture a video.

When a sensor supports on-sensor HDR, use of that option may cause different camera modes to become available. This can be checked by comparing the output of rpicam-hello --list-cameras with rpicam-hello --hdr sensor --list-cameras.

For the Raspberry Pi Camera Module 3, the non-HDR modes include the usual full resolution (12MP) mode as well as its half resolution 2x2 binned (3MP) equivalent. In the case of HDR, only a single half resolution (3MP) mode is available, and it is not possible to switch between HDR and non-HDR modes without restarting the camera application.

Camera Control

The following options affect the image processing and control algorithms that determine the camera image quality.

	--sharpness			Set image sharpness <number>

The given <number> adjusts the image sharpness. The value zero means that no sharpening is applied, the value 1.0 uses the default amount of sharpening, and values greater than 1.0 use extra sharpening.

Example: rpicam-still -o test.jpg --sharpness 2.0

	--contrast			Set image contrast <number>

The given <number> adjusts the image contrast. The value zero produces minimum contrast, the value 1.0 uses the default amount of contrast, and values greater than 1.0 apply extra contrast.

Example: rpicam-still -o test.jpg --contrast 1.5

	--brightness			Set image brightness <number>

The given <number> adjusts the image brightness. The value -1.0 produces an (almost) black image, the value 1.0 produces an almost entirely white image and the value 0.0 produces standard image brightness.

Note that the brightness parameter adds (or subtracts) an offset from all pixels in the output image. The --ev option is often more appropriate.

Example: rpicam-still -o test.jpg --brightness 0.2

	--saturation			Set image colour saturation <number>

The given <number> adjusts the colour saturation. The value zero produces a greyscale image, the value 1.0 uses the default amount of saturation, and values greater than 1.0 apply extra colour saturation.

Example: rpicam-still -o test.jpg --saturation 0.8

	--ev				Set EV compensation <number>

Sets the EV compensation of the image in units of stops, in the range -10 to 10. Default is 0. It works by raising or lowering the target values the AEC/AGC algorithm is attempting to match.

Example: rpicam-still -o test.jpg --ev 0.3

	--shutter			Set the exposure time in microseconds <number>

The shutter time is fixed to the given value. The gain will still be allowed to vary (unless that is also fixed).

Note that this shutter time may not be achieved if the camera is running at a frame rate that is too fast to allow it. In this case the --framerate option may be used to lower the frame rate. The maximum possible shutter times for the officially supported Raspberry Pi cameras can be found in this table.

Using values above these maximums will result in undefined behaviour. Cameras will also have different minimum shutter times, though in practice this is not important as they are all low enough to expose bright scenes appropriately.

Example: rpicam-hello --shutter 30000
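The relationship between frame rate and the longest achievable shutter time can be sketched as follows (an approximation only, since sensor readout overheads reduce the real limit slightly):

```python
def max_shutter_us(framerate_fps):
    """Longest exposure, in microseconds, that fits in one frame period.

    A shutter time longer than the frame interval cannot be achieved, so
    long exposures require --framerate to be lowered.
    """
    return 1_000_000 / framerate_fps

# A 30000us shutter does not fit at 60fps, but does fit at 30fps
print(max_shutter_us(60), max_shutter_us(30))
```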

	--gain				Sets the combined analogue and digital gains <number>
	--analoggain			Synonym for --gain

These two options are actually identical, and set the combined analogue and digital gains that will be used. The --analoggain form is permitted so as to be more compatible with the legacy raspicam applications. Where the requested gain can be supplied by the sensor driver, then only analogue gain will be used. Once the analogue gain reaches the maximum permitted value, then extra gain beyond this will be supplied as digital gain.

Note that there are circumstances where the digital gain can go above 1 even when the analogue gain limit is not exceeded. This can occur when:

  • Either of the colour gains goes below 1.0, which will cause the digital gain to settle to 1.0/min(red_gain,blue_gain). This ensures that the total digital gain applied to any colour channel never goes below 1.0, which would cause discolouration artifacts.

  • The AEC/AGC is changing, during which the digital gain can vary slightly, though this effect should be only transient.
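The analogue/digital split described above can be sketched as follows (illustrative only; the 16x analogue gain limit is a hypothetical sensor maximum, not a value from this document):

```python
def split_gain(total_gain, max_analogue_gain):
    """Split a requested total gain into analogue and digital parts.

    Analogue gain is used first; anything beyond the sensor's maximum
    is supplied as digital gain, as described above.
    """
    analogue = min(total_gain, max_analogue_gain)
    digital = total_gain / analogue if analogue > 0 else 1.0
    return analogue, digital

print(split_gain(8.0, 16.0))   # within the sensor limit: all analogue
print(split_gain(24.0, 16.0))  # 16x analogue, remaining 1.5x digital
```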

	--metering			Set the metering mode <string>

Sets the metering mode of the AEC/AGC algorithm. This may be one of the following values:

  • centre - centre weighted metering (which is the default)

  • spot - spot metering

  • average - average or whole frame metering

  • custom - custom metering mode which would have to be defined in the camera tuning file.

For more information on defining a custom metering mode, and also on how to adjust the region weights in the existing metering modes, please refer to the Tuning guide for the Raspberry Pi cameras and libcamera.

Example: rpicam-still -o test.jpg --metering spot

	--exposure			Set the exposure profile <string>

The exposure profile may be either normal, sport or long. Changing the exposure profile should not affect the overall exposure of an image, but the sport mode will tend to prefer shorter exposure times and larger gains to achieve the same net result.

Exposure profiles can be edited in the camera tuning file. Please refer to the Tuning guide for the Raspberry Pi cameras and libcamera for more information.

Example: rpicam-still -o test.jpg --exposure sport

	--awb				Set the AWB mode <string>

This option sets the AWB algorithm into the named AWB mode. Valid modes are:

	Mode name	Colour temperature

	auto		2500K to 8000K
	incandescent	2500K to 3000K
	tungsten	3000K to 3500K
	fluorescent	4000K to 4700K
	indoor		3000K to 5000K
	daylight	5500K to 6500K
	cloudy		7000K to 8500K
	custom		A custom range would have to be defined in the camera tuning file.

There is no mode that turns the AWB off, instead fixed colour gains should be specified with the --awbgains option.

Note that these values are only approximate; they could vary according to the camera tuning.

For more information on AWB modes and how to define a custom one, please refer to the Tuning guide for the Raspberry Pi cameras and libcamera.

Example: rpicam-still -o test.jpg --awb tungsten

	--awbgains				Set fixed colour gains <number,number>

This option accepts a red and a blue gain value and uses them directly in place of running the AWB algorithm. Setting non-zero values here has the effect of disabling the AWB calculation.

Example: rpicam-still -o test.jpg --awbgains 1.5,2.0

	--denoise				Set the denoising mode <string>

The following denoise modes are supported:

  • auto - This is the default. It always enables standard spatial denoise. It uses extra fast colour denoise for video, and high quality colour denoise for stills capture. Preview does not enable any extra colour denoise at all.

  • off - Disables spatial and colour denoise.

  • cdn_off - Disables colour denoise.

  • cdn_fast - Uses fast colour denoise.

  • cdn_hq - Uses high quality colour denoise. Not appropriate for video/viewfinder due to reduced throughput.

Note that even the use of fast colour denoise can result in lower framerates. The high quality colour denoise will normally result in much lower framerates.

Example: rpicam-vid -o test.h264 --denoise cdn_off

	--tuning-file				Specify the camera tuning to use <string>

This identifies the name of the JSON format tuning file that should be used. The tuning file covers many aspects of the image processing, including the AEC/AGC, AWB, colour shading correction, colour processing, denoising and so forth.

For more information on the camera tuning file, please consult the Tuning guide for the Raspberry Pi cameras and libcamera.

Example: rpicam-hello --tuning-file ~/my-camera-tuning.json

	--autofocus-mode			Specify the autofocus mode <string>

Specifies the autofocus mode to use, which may be one of

  • default (also the default if the option is omitted) - normally puts the camera into continuous autofocus mode, except if either --lens-position or --autofocus-on-capture is given, in which case manual mode is chosen instead

  • manual - do not move the lens at all, but it can be set with the --lens-position option

  • auto - does not move the lens except for an autofocus sweep when the camera starts (and for rpicam-still, just before capture if --autofocus-on-capture is given)

  • continuous - adjusts the lens position automatically as the scene changes.

This option is only supported for certain camera modules (such as the Raspberry Pi Camera Module 3).

	--autofocus-range			Specify the autofocus range <string>

Specifies the autofocus range, which may be one of

  • normal (the default) - focuses from reasonably close to infinity

  • macro - focuses only on close objects, including the closest focal distances supported by the camera

  • full - will focus on the entire range, from the very closest objects to infinity.

This option is only supported for certain camera modules (such as the Raspberry Pi Camera Module 3).

	--autofocus-speed			Specify the autofocus speed <string>

Specifies the autofocus speed, which may be one of

  • normal (the default) - the lens position will change at the normal speed

  • fast - the lens position may change more quickly.

This option is only supported for certain camera modules (such as the Raspberry Pi Camera Module 3).

	--autofocus-window			Specify the autofocus window

Specifies the autofocus window, in the form x,y,width,height where the coordinates are given as a proportion of the entire image. For example, --autofocus-window 0.25,0.25,0.5,0.5 would choose a window that is half the size of the output image in each dimension, and centred in the middle.

The default value causes the algorithm to use the middle third of the output image in both dimensions (so 1/9 of the total image area).

This option is only supported for certain camera modules (such as the Raspberry Pi Camera Module 3).

	--lens-position				Set the lens to a given position <string>

Moves the lens to a fixed focal distance, normally given in dioptres (units of 1 / distance in metres). The accepted values are:

  • 0.0 will move the lens to the "infinity" position

  • Any other number: move the lens to the 1 / number position, so the value 2 would focus at approximately 0.5m

  • default - move the lens to a default position which corresponds to the hyperfocal position of the lens.

It should be noted that lenses can only be expected to be calibrated approximately, and there may well be variation between different camera modules.

This option is only supported for certain camera modules (such as the Raspberry Pi Camera Module 3).
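The dioptre conversion described above can be sketched as follows (illustrative only, and subject to the calibration caveat above):

```python
def lens_position_for(distance_m):
    """Convert a focus distance in metres to a --lens-position value.

    Dioptres are 1 / distance in metres; 0.0 means the infinity position.
    """
    return 0.0 if distance_m == float("inf") else 1.0 / distance_m

print(lens_position_for(0.5))           # 2.0: focus at about half a metre
print(lens_position_for(float("inf")))  # 0.0: the infinity position
```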

Output File Options

	--output,	-o			Output file name <string>

--output sets the name of the output file to which the output image or video is written. Besides regular file names, this may take the following special values:

  • - - write to stdout

  • udp:// - a string starting with this is taken as a network address for streaming

  • tcp:// - a string starting with this is taken as a network address for streaming

  • a string containing a %d directive is taken as a file name where the format directive is replaced with a count that increments for each file that is opened. Standard C format directive modifiers are permitted.


rpicam-vid -t 100000 --segment 10000 -o chunk%04d.h264 records 100 seconds of video in 10 second segments, where each file name includes an incrementing counter in place of the %04d directive. Note that %04d writes the count to the string padded to a total width of at least 4 characters with leading zeroes.

rpicam-vid -t 0 --inline -o udp:// streams H.264 video to a network address on port 5000.
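The %04d counter expansion in file names follows standard C-style formatting, which can be illustrated as:

```python
# How "chunk%04d.h264" expands as the counter increments: the count is
# zero-padded to a width of at least 4 characters.
names = ["chunk%04d.h264" % i for i in range(3)]
print(names)
```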

	--wrap					Wrap output file counter at <number>

When outputting to files with an incrementing counter (e.g. %d in the output file name), wrap the counter back to zero when it reaches this value.

Example: rpicam-vid -t 0 --codec mjpeg --segment 1 --wrap 100 -o image%d.jpg

	--flush					Flush output files immediately

--flush causes output files to be flushed to disk as soon as every frame is written, rather than waiting for the system to do it.

Example: rpicam-vid -t 10000 --flush -o test.h264

Post Processing Options

The --post-process-file option specifies a JSON file that configures the post-processing that the imaging pipeline applies to camera images before they reach the application. It can be thought of as a replacement for the legacy raspicam "image effects".

Post-processing is a large topic and admits the use of 3rd party software like OpenCV and TensorFlowLite to analyse and manipulate images. For more information, please refer to the section on post-processing.

Example: rpicam-hello --post-process-file negate.json

This might apply a "negate" effect to an image, if the file negate.json is appropriately configured.

Still Command Line Options

	--quality,	-q		JPEG quality <number>

Set the JPEG quality. 100 is maximum quality and 93 is the default. Only applies when saving JPEG files.

Example: rpicam-jpeg -o test.jpg -q 80

	--exif,		-x		Add extra EXIF tags <string>

The given extra EXIF tags are saved in the JPEG file. Only applies when saving JPEG files.

EXIF is supported using the libexif library and so there are some associated limitations. In particular, libexif seems to recognise a number of tags but without knowing the correct format for them. The software will currently treat these (incorrectly, in many cases) as ASCII, but will print a warning to the terminal. As we come across these they can be added to the table of known exceptions in the software.

Clearly the application needs to supply EXIF tags that contain specific camera data (like the exposure time). But for other tags that have nothing to do with the camera, a reasonable workaround would simply be to add them post facto, using something like exiftool.

Example: rpicam-still -o test.jpg --exif IDO0.Artist=Someone

	--timelapse			Time interval between timelapse captures <milliseconds>

This puts rpicam-still into timelapse mode where it runs according to the timeout (--timeout or -t) that has been set, and for that period will capture repeated images at the interval specified here. (rpicam-still only.)

Example: rpicam-still -t 100000 -o test%d.jpg --timelapse 10000 captures an image every 10s for about 100s.

	--framestart			The starting value for the frame counter <number>

When writing counter values into the output file name, this specifies the starting value for the counter.

Example: rpicam-still -t 100000 -o test%d.jpg --timelapse 10000 --framestart 1 captures an image every 10s for about 100s, starting at 1 rather than 0. (rpicam-still only.)

	--datetime			Use date format for the output file names

Use the current date and time to construct the output file name, in the form MMDDhhmmss.jpg, where MM = 2-digit month number, DD = 2-digit day number, hh = 2-digit 24-hour hour number, mm = 2-digit minute number, ss = 2-digit second number. (rpicam-still only.)

Example: rpicam-still --datetime
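The MMDDhhmmss form described above corresponds to a standard date format string, which can be illustrated as follows (rpicam-still constructs the name itself; this is only a sketch of the format):

```python
import datetime

def datetime_filename(now):
    """Build an output name in the MMDDhhmmss.jpg form described above."""
    return now.strftime("%m%d%H%M%S") + ".jpg"

# 1 June, 14:30:05 -> "0601143005.jpg"
print(datetime_filename(datetime.datetime(2023, 6, 1, 14, 30, 5)))
```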

	--timestamp			Use system timestamps for the output file names

Uses the current system timestamp (the number of seconds since the start of 1970) as the output file name. (rpicam-still only.)

Example: rpicam-still --timestamp

	--restart			Set the JPEG restart interval <number>

Sets the JPEG restart interval to the given value. Default is zero.

Example: rpicam-still -o test.jpg --restart 20

	--keypress,	-k		Capture image when Enter pressed

This switches rpicam-still into keypress mode. It will capture a still image either when the timeout expires or the Enter key is pressed in the terminal window. Typing x and Enter causes rpicam-still to quit without capturing.

Example: rpicam-still -t 0 -o test.jpg -k

	--signal,	-s		Capture image when SIGUSR1 received

This switches rpicam-still into signal mode. It will capture a still image either when the timeout expires or a SIGUSR1 is received. SIGUSR2 will cause rpicam-still to quit without capturing.


rpicam-still -t 0 -o test.jpg -s &


kill -SIGUSR1 $!

	--thumb				Set thumbnail parameters <w:h:q> or none

Sets the dimensions and quality parameter of the associated thumbnail image. The defaults are size 320x240 and quality 70.

Example: rpicam-still -o test.jpg --thumb 640:480:80

The value none may be given, in which case no thumbnail is saved in the image at all.

	--encoding,	-e		Set the still image codec <string>

Select the still image encoding to be used. Valid encoders are:

  • jpg - JPEG (the default)

  • png - PNG format

  • bmp - BMP format

  • rgb - binary dump of uncompressed RGB pixels

  • yuv420 - binary dump of uncompressed YUV420 pixels.

Note that this option determines the encoding and that the extension of the output file name is ignored for this purpose. However, for the --datetime and --timestamp options, the file extension is taken from the encoder name listed above. (rpicam-still only.)

Example: rpicam-still -e png -o test.png

	--raw,		-r		Save raw file

Save a raw Bayer file in DNG format alongside the usual output image. The file name is given by replacing the output file name extension by .dng. These are standard DNG files, and can be processed with standard tools like dcraw or RawTherapee, among others. (rpicam-still only.)

The image data in the raw file is exactly what came out of the sensor, with no processing whatsoever either by the ISP or anything else. The EXIF data saved in the file, among other things, includes:

  • exposure time

  • analogue gain (the ISO tag is 100 times the analogue gain used)

  • white balance gains (which are the reciprocals of the "as shot neutral" values)

  • the colour matrix used by the ISP.
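The numeric relations in that list can be sketched as follows (illustrative only; the gain values are example numbers, not data from any particular capture):

```python
def dng_exif_relations(analogue_gain, red_gain, blue_gain):
    """Relations between DNG/EXIF fields and camera values, as listed above.

    The ISO tag is 100 times the analogue gain, and the white balance
    gains are the reciprocals of the 'as shot neutral' red/blue entries.
    """
    iso = 100 * analogue_gain
    as_shot_neutral = (1.0 / red_gain, 1.0, 1.0 / blue_gain)
    return iso, as_shot_neutral

# e.g. 2x analogue gain with red/blue gains of 2.0 and 1.6
print(dng_exif_relations(2.0, 2.0, 1.6))
```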

	--latest			Make symbolic link to latest file saved <string>

This causes rpicam-still to make a symbolic link to the most recently saved file, thereby making it easier to identify. (rpicam-still only.)

Example: rpicam-still -t 100000 --timelapse 10000 -o test%d.jpg --latest latest.jpg

	--autofocus-on-capture			Whether to run an autofocus cycle before capture

If set, this will cause an autofocus cycle to be run just before the image is captured.

If --autofocus-mode is not specified, or was set to default or manual, this will be the only autofocus cycle.

If --autofocus-mode was set to auto, there will be an additional autofocus cycle at the start of the preview window.

If --autofocus-mode was set to continuous, this option will be ignored.

You can also use --autofocus-on-capture 1 in place of --autofocus-on-capture, and --autofocus-on-capture 0 as an alternative to omitting the parameter entirely.

Example: rpicam-still --autofocus-on-capture -o test.jpg

This option is only supported for certain camera modules (such as the Raspberry Pi Camera Module 3).

Video Command Line Options

	--quality,	-q		JPEG quality <number>

Set the JPEG quality. 100 is maximum quality and 50 is the default. Only applies when saving in MJPEG format.

Example: rpicam-vid --codec mjpeg -o test.mjpeg -q 80

	--bitrate,	-b		H.264 bitrate <number>

Set the target bitrate for the H.264 encoder, in bits per second. Only applies when encoding in H.264 format.

Example: rpicam-vid -b 10000000 --width 1920 --height 1080 -o test.h264

	--intra,	-g		Intra-frame period (H.264 only) <number>

Sets the frequency of I (Intra) frames in the H.264 bitstream, as a number of frames. The default value is 60.

Example: rpicam-vid --intra 30 --width 1920 --height 1080 -o test.h264

	--profile			H.264 profile <string>

Set the H.264 profile. The value may be baseline, main or high.

Example: rpicam-vid --width 1920 --height 1080 --profile main -o test.h264

	--level				H.264 level <string>

Set the H.264 level. The value may be 4, 4.1 or 4.2.

Example: rpicam-vid --width 1920 --height 1080 --level 4.1 -o test.h264

	--codec				Encoder to be used <string>

This can select how the video frames are encoded. Valid options are:

  • h264 - use H.264 encoder (the default)

  • mjpeg - use MJPEG encoder

  • yuv420 - output uncompressed YUV420 frames.

  • libav - use the libav backend to encode audio and video (see the libav section for further details).


Examples:

rpicam-vid -t 10000 --codec mjpeg -o test.mjpeg

rpicam-vid -t 10000 --codec yuv420 -o test.data

	--keypress,	-k		Toggle between recording and pausing

Pressing Enter will toggle rpicam-vid between recording the video stream and not recording it (i.e. discarding it). The application starts off in the recording state, unless the --initial option specifies otherwise. Typing x and Enter causes rpicam-vid to quit.

Example: rpicam-vid -t 0 -o test.h264 -k

	--signal,	-s		Toggle between recording and pausing when SIGUSR1 received

The SIGUSR1 signal will toggle rpicam-vid between recording the video stream and not recording it (i.e. discarding it). The application starts off in the recording state, unless the --initial option specifies otherwise. SIGUSR2 causes rpicam-vid to quit.


Example: rpicam-vid -t 0 -o test.h264 -s &

Then, from the same shell, toggle recording with:

kill -SIGUSR1 $!

	--initial			Start the application in the recording or paused state <string>

The value passed may be record or pause to start the application in, respectively, the recording or the paused state. This option should be used in conjunction with either --keypress or --signal to toggle between the two states.

Example: rpicam-vid -t 0 -o test.h264 -k --initial pause

	--split				Split multiple recordings into separate files

This option should be used in conjunction with --keypress or --signal and causes each recording session (in between the pauses) to be written to a separate file.

Example: rpicam-vid -t 0 --keypress --split --initial pause -o test%04d.h264

	--segment			Write the video recording into multiple segments <number>

This option causes the video recording to be split across multiple files where the parameter gives the approximate duration of each file in milliseconds.

One convenient little trick is to pass a very small duration parameter (namely, --segment 1) which will result in each frame being written to a separate output file. This makes it easy to do "burst" JPEG capture (using the MJPEG codec), or "burst" raw frame capture (using rpicam-raw).

Example: rpicam-vid -t 100000 --segment 10000 -o test%04d.h264
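The "burst" trick described above might look like the following sketch (the output file name pattern is just an example):

```shell
# Burst JPEG capture (requires a camera; shown for illustration only):
#   rpicam-vid -t 5000 --codec mjpeg --segment 1 -o burst%05d.jpeg
#
# Each frame is written to its own file; the %05d counter in the output
# name expands like printf:
printf 'burst%05d.jpeg\n' 3
```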

	--circular			Write the video recording into a circular buffer of the given <size>

The video recording is written to a circular buffer which is written to disk when the application quits. The size of the circular buffer may be given in units of megabytes, defaulting to 4MB.

Example: rpicam-vid -t 0 --keypress --inline --circular -o test.h264

	--inline			Write sequence header in every I frame (H.264 only)

This option causes the H.264 sequence headers to be written into every I (Intra) frame. This is helpful because it means a client can understand and decode the video sequence from any I frame, not just from the very beginning of the stream. It is recommended to use this option with any output type that breaks the output into pieces (--segment, --split, --circular), or transmits the output over a network.

Example: rpicam-vid -t 0 --keypress --inline --split -o test%04d.h264

	--listen			Wait for an incoming TCP connection

This option is provided for streaming over a network using TCP/IP. Using --listen will cause rpicam-vid to wait for an incoming client connection before starting the video encode process, which will then be forwarded to that client.

Example: rpicam-vid -t 0 --inline --listen -o tcp://

	--frames			Record exactly this many frames <number>

Exactly <number> frames are recorded. Specifying a non-zero value will override any timeout.

Example: rpicam-vid -o test.h264 --frames 1000

Differences compared to Raspicam Apps

Whilst the rpicam-apps attempt to emulate most features of the legacy Raspicam applications, there are some differences. Here we list the principal ones that users are likely to notice.

  • The use of Boost program_options doesn’t allow multi-character short versions of options, so where these were present they have had to be dropped. The long form options are named the same, and any single character short forms are preserved.

  • rpicam-still and rpicam-jpeg do not show the capture image in the preview window.

  • libcamera performs its own camera mode selection, so the --mode option is not supported. It deduces camera modes from the resolutions requested. There is still work ongoing in this area.

  • The following features of the legacy apps are not supported as the code has to run on the ARM now. But note that a number of these effects are now provided by the post-processing mechanism.

    • opacity (--opacity)

    • image effects (--imxfx)

    • colour effects (--colfx)

    • annotation (--annotate, --annotateex)

    • dynamic range compression, or DRC (--drc)

  • Stereo (--stereo, --decimate and --3dswap) is not supported, as libcamera currently has no support for stereo.

  • There is no image stabilisation (--vstab) (though the legacy implementation does not appear to do very much).

  • There are no demo modes (--demo).

  • The transformations supported are those that do not involve a transposition. 180 degree rotations, therefore, are among those permitted but 90 and 270 degree rotations are not.

  • There are some differences in the metering, exposure and AWB options. In particular the legacy apps conflate metering (by which we mean the "metering mode") and the exposure (by which we now mean the "exposure profile"). With regards to AWB, to turn it off you have to set a pair of colour gains (e.g. --awbgains 1.0,1.0).

  • libcamera has no mechanism to set the AWB into "grey world" mode, which is useful for "NOIR" camera modules. However, tuning files are supplied which switch the AWB into the correct mode, so for example, you could use rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/vc4/imx219_noir.json (for Pi 4 and earlier devices) or rpicam-hello --tuning-file /usr/share/libcamera/ipa/rpi/pisp/imx219_noir.json (Pi 5 and later devices).

  • There is support for setting the exposure time (--shutter) and analogue gain (--analoggain or just --gain). There is no explicit control of the digital gain; you get this if the gain requested is larger than the analogue gain can deliver by itself.

  • libcamera has no understanding of ISO, so there is no --ISO option. Users should calculate the gain corresponding to the ISO value required (usually a manufacturer will tell you that, for example, a gain of 1 corresponds to an ISO of 40), and use the --gain parameter instead.
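The ISO-to-gain calculation described above can be sketched as follows (the figure of gain 1.0 corresponding to ISO 40 is the illustrative example from the text; check the figure for your own sensor):

```shell
# Convert a desired ISO to a --gain value, assuming gain 1.0 == ISO 40:
iso=400
gain=$(awk -v iso="$iso" 'BEGIN { printf "%g", iso / 40 }')
echo "use --gain $gain"

# which would then be used as (requires a camera):
#   rpicam-still --gain 10 -o test.jpg
```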

  • There is no support for setting the flicker period yet.

  • rpicam-still does not support burst capture. In fact, because the JPEG encoding is not multi-threaded and pipelined it would produce quite poor framerates. Users are advised to consider using rpicam-vid in MJPEG mode instead (--segment 1 can be used to force each frame into a separate JPEG file).

  • libcamera uses open source drivers for all the image sensors, so the mechanism for enabling or disabling on-sensor DPC (Defective Pixel Correction) is different. The imx477 (HQ cam) driver enables on-sensor DPC by default; to disable it the user should, as root, enter

echo 0 > /sys/module/imx477/parameters/dpc_enable


Post-Processing with rpicam-apps

rpicam-apps share a common post-processing framework. This allows them to pass the images received from the camera system through a number of custom image processing and image analysis routines. Each such routine is known as a post-processing stage and the description of exactly which stages should be run, and what configuration they may have, is supplied in a JSON file. Every stage, along with its source code, is supplied with a short example JSON file showing how to enable it.

For example, the simple negate stage (which "negates" all the pixels in an image, turning light pixels dark and vice versa) is supplied with a negate.json file that configures the post-processing pipeline to run it:

rpicam-hello --post-process-file /path/to/negate.json

Example JSON files can be found in the assets folder of the rpicam-apps repository at https://github.com/raspberrypi/rpicam-apps/tree/main/assets.

The negate stage is particularly trivial and has no configuration parameters of its own, therefore the JSON file merely has to name the stage, with no further information, and it will be run. Thus negate.json contains

{
    "negate" : { }
}
To run multiple post-processing stages, the contents of the example JSON files merely need to be listed together, and the stages will be run in the order given. For example, to run the Sobel stage (which applies a Sobel filter to an image) followed by the negate stage we could create a custom JSON file containing

        "ksize": 5

The Sobel stage is implemented using OpenCV, hence cv in its name. Observe how it has a user-configurable parameter, ksize that specifies the kernel size of the filter to be used. In this case, the Sobel filter will produce bright edges on a black background, and the negate stage will turn this into dark edges on a white background, as shown.

Image with Sobel and negate

Some stages actually alter the image in some way, and this is their primary function (such as negate). Others are primarily for image analysis, and while they may indicate something on the image, all they really do is generate useful information. For this reason we also have a very flexible form of metadata that can be populated by the post-processing stages, and this will get passed all the way through to the application itself.

Image analysis stages often prefer to work on reduced resolution images. rpicam-apps are able to supply applications with a ready-made low resolution image provided directly by the ISP hardware, and this can be helpful in improving performance.

Furthermore, with the post-processing framework being completely open, Raspberry Pi welcomes the contribution of new and interesting stages from the community and would be happy to host them in our rpicam-apps repository. The stages that are currently available are documented below.

The rpicam-apps supplied with the operating system will be built without any optional 3rd party libraries (such as OpenCV or TensorFlow Lite), meaning that certain post-processing stages that rely on them will not be enabled. To use these stages, please follow the instructions for building rpicam-apps for yourself.

negate stage

The negate stage requires no 3rd party libraries.

On a Raspberry Pi 3 device or a Raspberry Pi 4 running a 32-bit OS, it may execute more quickly if recompiled using -DENABLE_COMPILE_FLAGS_FOR_TARGET=armv8-neon. (Please see the build instructions.)

The negate stage has no user-configurable parameters.

Default negate.json file:

{
    "negate" : { }
}

Image with negate

hdr stage

The hdr stage implements both HDR (high dynamic range) imaging and DRC (dynamic range compression). The terminology that we use here regards DRC as operating on single images, and HDR works by accumulating multiple under-exposed images and then performing the same algorithm as DRC.

The hdr stage has no dependencies on 3rd party libraries, but (like some other stages) may execute more quickly on Raspberry Pi 3 or Raspberry Pi 4 devices running a 32-bit OS if recompiled using -DENABLE_COMPILE_FLAGS_FOR_TARGET=armv8-neon (please see the build instructions). Specifically, the image accumulation stage will run quicker and result in fewer frame drops, though the tonemapping part of the process is unchanged.

The basic procedure is that we take the image (which in the case of HDR may be multiple images accumulated together) and apply an edge-preserving smoothing filter to generate a low pass (LP) image. We define the high pass (HP) image to be the difference between the LP image and the original. Next we apply a global tonemap to the LP image and add back the HP image. This procedure, in contrast to applying the tonemap directly to the original image, prevents us from squashing and losing all the local contrast in the resulting image.

It is worth noting that this all happens using fully-processed images, once the ISP has finished with them. HDR normally works better when carried out in the raw (Bayer) domain, as signals are still linear and have greater bit-depth. We expect to implement such functionality once libcamera exports an API for "re-processing" Bayer images that do not come from the sensor, but which application code can pass in.

In summary, the user-configurable parameters fall broadly into three groups: those that define the LP filter, those responsible for the global tonemapping, and those responsible for re-applying the local contrast.


num_frames

The number of frames to accumulate. For DRC (in our terminology) this would take the value 1, but for multi-frame HDR we would suggest a value such as 8.

lp_filter_strength

The coefficient of the low pass IIR filter.

lp_filter_threshold

A piecewise linear function that relates the pixel level to the threshold that is regarded as being "meaningful detail".

global_tonemap_points

A list of points in the input image histogram and targets in the output range where we wish to move them. We define an inter-quantile mean (q and width), a target as a proportion of the full output range (target) and maximum and minimum gains by which we are prepared to move the measured inter-quantile mean (as this prevents us from changing an image too drastically).

global_tonemap_strength

Strength of application of the global tonemap.

local_pos_strength

A piecewise linear function that defines the gain applied to local contrast when added back to the tonemapped LP image, for positive (bright) detail.

local_neg_strength

A piecewise linear function that defines the gain applied to local contrast when added back to the tonemapped LP image, for negative (dark) detail.

local_tonemap_strength

An overall gain applied to all local contrast that is added back.

local_colour_scale

A factor that allows the output colours to be affected more or less strongly.

We note that the overall strength of the processing is best controlled by changing the global_tonemap_strength and local_tonemap_strength parameters.

The full processing takes between 2 and 3 seconds for a 12MP image on a Raspberry Pi 4. The stage runs only on the still image capture, it ignores preview and video images. In particular, when accumulating multiple frames, the stage "swallows" the output images so that the application does not receive them, and finally sends through only the combined and processed image.

Default drc.json file for DRC:

    "hdr" :
	"num_frames" : 1,
	"lp_filter_strength" : 0.2,
	"lp_filter_threshold" : [ 0, 10.0 , 2048, 205.0, 4095, 205.0 ],
	"global_tonemap_points" :
	    { "q": 0.1, "width": 0.05, "target": 0.15, "max_up": 1.5, "max_down": 0.7 },
	    { "q": 0.5, "width": 0.05, "target": 0.5, "max_up": 1.5, "max_down": 0.7 },
	    { "q": 0.8, "width": 0.05, "target": 0.8, "max_up": 1.5, "max_down": 0.7 }
	"global_tonemap_strength" : 1.0,
	"local_pos_strength" : [ 0, 6.0, 1024, 2.0, 4095, 2.0 ],
	"local_neg_strength" : [ 0, 4.0, 1024, 1.5, 4095, 1.5 ],
	"local_tonemap_strength" : 1.0,
	"local_colour_scale" : 0.9


Without DRC:

Image without DRC processing

With full-strength DRC: (use rpicam-still -o test.jpg --post-process-file drc.json)

Image with DRC processing

Default hdr.json file for HDR:

    "hdr" :
	"num_frames" : 8,
	"lp_filter_strength" : 0.2,
	"lp_filter_threshold" : [ 0, 10.0 , 2048, 205.0, 4095, 205.0 ],
	"global_tonemap_points" :
	    { "q": 0.1, "width": 0.05, "target": 0.15, "max_up": 5.0, "max_down": 0.5 },
	    { "q": 0.5, "width": 0.05, "target": 0.45, "max_up": 5.0, "max_down": 0.5 },
	    { "q": 0.8, "width": 0.05, "target": 0.7, "max_up": 5.0, "max_down": 0.5 }
	"global_tonemap_strength" : 1.0,
	"local_pos_strength" : [ 0, 6.0, 1024, 2.0, 4095, 2.0 ],
	"local_neg_strength" : [ 0, 4.0, 1024, 1.5, 4095, 1.5 ],
	"local_tonemap_strength" : 1.0,
	"local_colour_scale" : 0.8


Without HDR:

Image without HDR processing

With HDR: (use rpicam-still -o test.jpg --ev -2 --denoise cdn_off --post-process-file hdr.json)

Image with HDR processing

motion_detect stage

The motion_detect stage works by analysing frames from the low resolution image stream, which must be configured for it to work. It compares a region of interest ("roi") in the frame to the corresponding part of a previous one and if enough pixels are sufficiently different, that will be taken to indicate motion. The result is added to the metadata under "motion_detect.result".

This stage has no dependencies on any 3rd party libraries.

It has the following tunable parameters. The dimensions are always given as a proportion of the low resolution image size.


roi_x

x-offset of the region of interest for the comparison

roi_y

y-offset of the region of interest for the comparison

roi_width

width of the region of interest for the comparison

roi_height

height of the region of interest for the comparison

difference_m

Linear coefficient used to construct the threshold for pixels being different

difference_c

Constant coefficient used to construct the threshold for pixels being different according to threshold = difference_m * pixel_value + difference_c

frame_period

The motion detector will run only this many frames

hskip

The pixel tests are subsampled by this amount horizontally

vskip

The pixel tests are subsampled by this amount vertically

region_threshold

The proportion of pixels (or "regions") which must be categorised as different for them to count as motion

verbose

Print messages to the console, including when the "motion"/"no motion" status changes

Default motion_detect.json configuration file:

    "motion_detect" :
	"roi_x" : 0.1,
	"roi_y" : 0.1,
	"roi_width" : 0.8,
	"roi_height" : 0.8,
	"difference_m" : 0.1,
	"difference_c" : 10,
	"region_threshold" : 0.005,
	"frame_period" : 5,
	"hskip" : 2,
	"vskip" : 2,
	"verbose" : 0

Note that the fields difference_m and difference_c, and the value of region_threshold, can be adjusted to make the algorithm more or less sensitive to motion.

If the amount of computation needs to be reduced (perhaps you have other stages that need a larger low resolution image), use the hskip and vskip parameters to subsample the pixel tests.
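As a worked example of the pixel-difference threshold formula above (threshold = difference_m * pixel_value + difference_c), using the default parameter values:

```shell
# With difference_m = 0.1 and difference_c = 10, a pixel of value 100
# must differ by more than 0.1 * 100 + 10 = 20 to count as "different":
awk 'BEGIN { m = 0.1; c = 10; v = 100; printf "threshold at pixel value %d: %g\n", v, m * v + c }'
```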

To use the motion_detect stage you might enter the following example command:

rpicam-hello --lores-width 128 --lores-height 96 --post-process-file motion_detect.json

Post-Processing with OpenCV

These stages all require OpenCV to be installed on your system. You may also need to rebuild rpicam-apps with OpenCV support - please see the instructions for building rpicam-apps for yourself.

sobel_cv stage

The sobel_cv stage has the following user-configurable parameters:


Kernel size of the Sobel filter

Default sobel_cv.json file:

        "ksize": 5


Image with Sobel filter

face_detect_cv stage

This stage uses the OpenCV Haar classifier to detect faces in an image. It returns the face locations in the metadata (under the key "face_detect.results"), and optionally draws them on the image.

The face_detect_cv stage has the following user-configurable parameters:


cascade_name

Name of the file where the Haar cascade can be found.

scaling_factor

Determines the range of scales at which the image is searched for faces.

min_neighbors

Minimum number of overlapping neighbours required to count as a face.

min_size

Minimum face size.

max_size

Maximum face size.

refresh_rate

How many frames to wait before trying to re-run the face detector.

draw_features

Whether to draw face locations on the returned image.

The face_detect_cv stage runs only during preview and video capture; it ignores still image capture. It runs on the low resolution stream, which would normally be configured to a resolution from about 320x240 to 640x480 pixels.

Default face_detect_cv.json file:

        "cascade_name" : "/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml",
        "scaling_factor" : 1.1,
        "min_neighbors" : 2,
        "min_size" : 32,
        "max_size" : 256,
        "refresh_rate" : 1,
        "draw_features" : 1


Image showing faces

annotate_cv stage

This stage allows text to be written into the top corner of images. It allows the same % substitutions as the --info-text parameter.

In addition to the substitutions of --info-text, you can provide any token that strftime understands to display the current date/time. The --info-text tokens are interpreted first, and any percentage tokens left over are then interpreted by strftime. To achieve a date/time stamp on the video you can use, for example, %F %T %z (%F for the ISO-8601 date such as 2023-03-07, %T for 24-hour local time such as 09:57:12, and %z for the timezone offset from UTC such as -0800).
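The strftime part of an annotation can be previewed with date(1) before using it in the stage (the fixed timestamp below is only there to show the format concretely; GNU date is assumed):

```shell
# What the %F %T %z tokens expand to right now:
date +"%F %T %z"

# The same tokens at a fixed instant (GNU date), showing the exact layout:
date -u -d @1678183032 +"%F %T %z"
```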

The stage does not output any metadata, but if it finds metadata under the key "annotate.text" it will write this text in place of anything in the JSON configuration file. This allows other post-processing stages to pass it text strings to be written onto the top of the images.

The annotate_cv stage has the following user-configurable parameters:


text

The text string to be written.

fg

Foreground colour.

bg

Background colour.

scale

A number proportional to the size of the text.

thickness

A number that determines the thickness of the text.

alpha

The amount of "alpha" to apply when overwriting the background pixels.

Default annotate_cv.json file:

    "annotate_cv" :
	"text" : "Frame %frame exp %exp ag %ag dg %dg",
	"fg" : 255,
	"bg" : 0,
	"scale" : 1.0,
	"thickness" : 2,
	"alpha" : 0.3


Image with text overlay

Post-Processing with TensorFlow Lite

These stages require TensorFlow Lite (TFLite) libraries to be installed that export the C++ API. Unfortunately, the TFLite libraries are not normally distributed in this form; however, one place where they can be downloaded is lindevs.com. Please follow the installation instructions given on that page. Subsequently you may need to recompile rpicam-apps with TensorFlow Lite support - please follow the instructions for building rpicam-apps for yourself.

object_classify_tf stage

object_classify_tf uses a Google MobileNet v1 model to classify objects in the camera image. It can be obtained from https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz, which will need to be uncompressed. You will also need the labels.txt file which can be found in https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_1.0_224_frozen.tgz.

This stage has the following configurable parameters.


top_n_results

How many results to show

refresh_rate

The number of frames that must elapse before the model is re-run

threshold_high

Confidence threshold (between 0 and 1) where objects are considered as being present

threshold_low

Confidence threshold which objects must drop below before being discarded as matches

model_file

Pathname to the tflite model file

labels_file

Pathname to the file containing the object labels

display_labels

Whether to display the object labels on the image. Note that this causes annotate.text metadata to be inserted so that the text can be rendered subsequently by the annotate_cv stage

verbose

Output more information to the console

Example object_classify_tf.json file:

        "top_n_results" : 2,
        "refresh_rate" : 30,
        "threshold_high" : 0.6,
        "threshold_low" : 0.4,
        "model_file" : "/home/pi/models/mobilenet_v1_1.0_224_quant.tflite",
        "labels_file" : "/home/pi/models/labels.txt",
        "display_labels" : 1
    "annotate_cv" :
	"text" : "",
	"fg" : 255,
	"bg" : 0,
	"scale" : 1.0,
	"thickness" : 2,
	"alpha" : 0.3

The stage operates on a low resolution stream image of size 224x224, so it could be used as follows:

rpicam-hello --post-process-file object_classify_tf.json --lores-width 224 --lores-height 224

Image showing object classifier results

pose_estimation_tf stage

pose_estimation_tf uses a Google MobileNet v1 model posenet_mobilenet_v1_100_257x257_multi_kpt_stripped.tflite that can be found at https://github.com/Qengineering/TensorFlow_Lite_Pose_RPi_32-bits.

This stage has the following configurable parameters.


refresh_rate

The number of frames that must elapse before the model is re-run

model_file

Pathname to the tflite model file

verbose

Output more information to the console

Also provided is a separate plot_pose_cv stage which can be included in the JSON configuration file and which will draw the detected pose onto the main image. This stage has the following configuration parameters.


confidence_threshold

A confidence level determining how much is drawn. This number can be less than zero; please refer to the GitHub repository for more information.

Example pose_estimation_tf.json file:

        "refresh_rate" : 5,
        "model_file" : "posenet_mobilenet_v1_100_257x257_multi_kpt_stripped.tflite"
    "plot_pose_cv" :
	"confidence_threshold" : -0.5

The stage operates on a low resolution stream image of size 257x257 (but which must be rounded up to 258x258 for YUV420 images), so it could be used as follows:

rpicam-hello --post-process-file pose_estimation_tf.json --lores-width 258 --lores-height 258
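The rounding from 257 to 258 follows from YUV420 images needing even width and height; the round-up-to-even arithmetic is simply:

```shell
# Round the model's 257-pixel dimension up to the next even number
# for the YUV420 low resolution stream:
echo $(( (257 + 1) / 2 * 2 ))
```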

Image showing pose estimation results

object_detect_tf stage

object_detect_tf uses a Google MobileNet v1 SSD (Single Shot Detector) model. The model and labels files can be downloaded from https://storage.googleapis.com/download.tensorflow.org/models/tflite/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29.zip.

This stage has the following configurable parameters.


refresh_rate

The number of frames that must elapse before the model is re-run

model_file

Pathname to the tflite model file

labels_file

Pathname to the file containing the list of labels

confidence_threshold

Minimum confidence threshold before a match is accepted.

overlap_threshold

Determines the amount of overlap between matches for them to be merged as a single match.

verbose

Output more information to the console

Also provided is a separate object_detect_draw_cv stage which can be included in the JSON configuration file and which will draw the detected objects onto the main image. This stage has the following configuration parameters.


line_thickness

Thickness of the bounding box lines

font_size

Size of the font used for the label

Example object_detect_tf.json file:

	"number_of_threads" : 2,
	"refresh_rate" : 10,
	"confidence_threshold" : 0.5,
	"overlap_threshold" : 0.5,
	"model_file" : "/home/pi/models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/detect.tflite",
	"labels_file" : "/home/pi/models/coco_ssd_mobilenet_v1_1.0_quant_2018_06_29/labelmap.txt",
	"verbose" : 1
	"line_thickness" : 2

The stage operates on a low resolution stream image of size 300x300. The following example would pass a 300x300 crop to the detector from the centre of the 400x300 low resolution image.

rpicam-hello --post-process-file object_detect_tf.json --lores-width 400 --lores-height 300
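The centre crop described above is performed automatically by the stage; as a sketch of the arithmetic only (not a real option), the 300x300 crop from the 400x300 image starts at these offsets:

```shell
# Centre-crop offsets: (lores - crop) / 2 in each dimension
awk 'BEGIN { lw = 400; lh = 300; cw = 300; ch = 300; printf "x=%d y=%d\n", (lw - cw) / 2, (lh - ch) / 2 }'
```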

Image showing detected objects

segmentation_tf stage

segmentation_tf uses a Google MobileNet v1 model. The model file can be downloaded from https://tfhub.dev/tensorflow/lite-model/deeplabv3/1/metadata/2?lite-format=tflite, whilst the labels file can be found in the assets folder, named segmentation_labels.txt.

This stage runs on an image of size 257x257. Because YUV420 images must have even dimensions, the low resolution image should be at least 258 pixels in both width and height. The stage adds a vector of 257x257 values to the image metadata where each value indicates which of the categories (listed in the labels file) that the pixel belongs to. Optionally, a representation of the segmentation can be drawn into the bottom right corner of the image.

This stage has the following configurable parameters.


refresh_rate

The number of frames that must elapse before the model is re-run

model_file

Pathname to the tflite model file

labels_file

Pathname to the file containing the list of labels

threshold

When verbose is set, the stage prints to the console any labels where the number of pixels with that label (in the 257x257 image) exceeds this threshold.

draw

Set this value to draw the segmentation map into the bottom right hand corner of the image.

verbose

Output more information to the console

Example segmentation_tf.json file:

	"number_of_threads" : 2,
	"refresh_rate" : 10,
	"model_file" : "/home/pi/models/lite-model_deeplabv3_1_metadata_2.tflite",
	"labels_file" : "/home/pi/models/segmentation_labels.txt",
	"draw" : 1,
	"verbose" : 1

This example takes a square camera image and reduces it to 258x258 pixels in size. In fact the stage also works well when non-square images are squashed unequally down to 258x258 pixels without cropping. The image below shows the segmentation map in the bottom right hand corner.

rpicam-hello --post-process-file segmentation_tf.json --lores-width 258 --lores-height 258 --viewfinder-width 1024 --viewfinder-height 1024

Image showing segmentation in the bottom right corner

Writing your own Post-Processing Stages

The rpicam-apps post-processing framework is not only very flexible but is meant to make it easy for users to create their own custom post-processing stages. It is easy to include algorithms and routines that are already available both in OpenCV and TensorFlow Lite.

We are keen to accept and distribute interesting post-processing stages contributed by our users.

Basic Post-Processing Stages

Post-processing stages have a simple API, and users can create their own by deriving from the PostProcessingStage class. The member functions that must be implemented are listed below, though note that some may be unnecessary for simple stages.

char const *Name() const

Return the name of the stage. This is used to match against stages listed in the JSON post-processing configuration file.

void Read(boost::property_tree::ptree const &params)

This method will read any of the stage’s configuration parameters from the JSON file.

void AdjustConfig(std::string const &use_case, StreamConfiguration *config)

This method gives stages a chance to influence the configuration of the camera, though it is not often necessary to implement it.

void Configure()

This is called just after the camera has been configured. It is a good moment to check that the stage has access to the streams it needs, and it can also allocate any resources that it may require.

void Start()

Called when the camera starts. This method is often not required.

bool Process(CompletedRequest &completed_request)

This method presents completed camera requests for post-processing and is where the necessary pixel manipulations or image analysis will happen. The function returns true if the post-processing framework is not to deliver this request on to the application.

void Stop()

Called when the camera is stopped. Normally a stage would need to shut down any processing that might be running (for example, if it started any asynchronous threads).

void Teardown()

Called when the camera configuration is torn down. This would typically be used to de-allocate any resources that were set up in the Configure method.

Some helpful hints on writing your own stages:

  • Generally, the Process method should not take too long as it will block the imaging pipeline and may cause stuttering. When time-consuming algorithms need to be run, it may be helpful to delegate them to another asynchronous thread.

  • When delegating work to another thread, the way image buffers are handled currently means that they will need to be copied. For some applications, such as image analysis, it may be viable to use the "low resolution" image stream rather than full resolution images.

  • The post-processing framework adds multi-threading parallelism on a per-frame basis. This is helpful in improving throughput if you want to run on every single frame. Some functions may supply parallelism within each frame (such as OpenCV and TFLite). In these cases it would probably be better to serialise the calls so as to suppress the per-frame parallelism.

  • Most streams, and in particular the low resolution stream, use the YUV420 format. This format is sometimes not ideal for OpenCV or TFLite, so a conversion step may sometimes be needed.

  • When images need to be altered, doing so in place is much the easiest strategy.

  • Implementations of any stage should always include a RegisterStage call. This registers your new stage with the system so that it will be correctly identified when listed in a JSON file. You will need to add it to the post-processing folder’s CMakeLists.txt too, of course.

The easiest example to start with is negate_stage.cpp, which "negates" an image (turning black white, and vice versa). Aside from a small amount of derived class boiler-plate, it contains barely half a dozen lines of code.

Next up in complexity is sobel_cv_stage.cpp. This implements a Sobel filter using just a few lines of OpenCV functions.

TFLite Stages

For stages wanting to analyse images using TensorFlowLite we provide the TfStage base class. This provides a certain amount of boilerplate code and makes it much easier to implement new TFLite-based stages by deriving from this class. In particular, it delegates the execution of the model to another thread, so that the full camera framerate is still maintained - it is just the model that will run at a lower framerate.

The TfStage class implements all the public PostProcessingStage methods that normally have to be redefined, with the exception of the Name method which must still be supplied. It then presents the following virtual methods which derived classes should implement instead.

void readExtras()

The base class reads the named model and certain other parameters like the refresh_rate. This method can be supplied to read any extra parameters for the derived stage. It is also a good place to check that the loaded model looks as expected (i.e. has the right input and output dimensions).

void checkConfiguration()

The base class fetches the low resolution stream which TFLite will operate on, and the full resolution stream in case the derived stage needs it. This method is provided for the derived class to check that the streams it requires are present. In case any required stream is missing, it may elect simply to avoid processing any images, or it may signal a fatal error.

void interpretOutputs()

The TFLite model runs asynchronously so that it can run "every few frames" without holding up the overall framerate. This method gives the derived stage the chance to read and interpret the model’s outputs, running right after the model itself and in that same thread.

void applyResults()

Here we are running once again in the main thread and so this method should run reasonably quickly so as not to hold up the supply of frames to the application. It is provided so that the last results of the model (which might be a few frames ago) can be applied to the current frame. Typically this would involve attaching metadata to the image, or perhaps drawing something onto the main image.

For further information, readers are referred to the supplied example code implementing the ObjectClassifyTfStage and PoseEstimationTfStage classes.

Multiple Cameras Usage

Basic support for multiple cameras is available within rpicam-apps. Multiple cameras may be attached to a Raspberry Pi in the following ways:

  • Two cameras connected directly to the two camera ports of a Raspberry Pi Compute Module; see the Compute Module documentation for further details.

  • Two or more cameras attached to the single camera port of a non-Compute Module board using a third-party video multiplexer board.

In the latter case, only one camera may be used at a time, since all the cameras are attached to a single Unicam port. In the former case, both cameras can run simultaneously.

To list all the cameras available on your platform, use the --list-cameras command line option. To choose which camera to use, use the --camera <index> option, and provide the index value of the requested camera.

libcamera does not yet provide stereoscopic camera support. When running two cameras simultaneously, they must be run in separate processes. This means there is no way to synchronise sensor framing or 3A operation between them. As a workaround, you could synchronise the cameras through an external sync signal for the HQ (IMX477) camera, and switch the 3A to manual mode if necessary.

libcamera and rpicam-apps Packages

A number of apt packages are provided for convenience. In order to access them, we recommend keeping your OS up to date in the usual way.

Binary Packages

There are two rpicam-apps packages available that contain the necessary executables:

  • rpicam-apps contains the full applications with support for previews using a desktop environment. This package is pre-installed in Raspberry Pi OS.

  • rpicam-apps-lite omits desktop environment support and only the DRM preview is available. This package is pre-installed in Raspberry Pi OS Lite.


These applications depend on a number of library packages which are named library-name<n> where <n> is a version number (actually the ABI, or Application Binary Interface, version), and which stands at zero at the time of writing. Thus we have the following:

  • The package libcamera0 contains the libcamera libraries.

  • The package libepoxy0 contains the libepoxy libraries.

These will be installed automatically when needed.

Dev Packages

rpicam-apps can be rebuilt on their own without installing and building libcamera and libepoxy from scratch. To enable this, the following packages should be installed:

  • libcamera-dev contains the necessary libcamera header files and resources.

  • libepoxy-dev contains the necessary libepoxy header files and resources. You will only need this if you want support for the GLES/EGL preview window.

Subsequently rpicam-apps can be checked out from GitHub and rebuilt.

Building libcamera and rpicam-apps

Building libcamera and rpicam-apps for yourself can bring the following benefits.

  • You can pick up the latest enhancements and features.

  • rpicam-apps can be compiled with extra optimisation for Raspberry Pi 3 and Raspberry Pi 4 devices running a 32-bit OS.

  • You can include the various optional OpenCV and/or TFLite post-processing stages (or add your own).

  • You can customise or add your own applications derived from rpicam-apps.

When building on a Raspberry Pi with 1GB or less of RAM, there is a risk that the device may run out of swap and fail. We recommend either increasing the amount of swap, or building with fewer threads (the -j option to ninja and to make).

Building rpicam-apps without rebuilding libcamera

You can rebuild rpicam-apps without first rebuilding the whole of libcamera and libepoxy. If you do not need support for the GLES/EGL preview window then libepoxy can be omitted entirely. Mostly this will include Raspberry Pi OS Lite users, and they must be sure to use -Denable_egl=false when running meson setup later. These users should run:

sudo apt install -y libcamera-dev libjpeg-dev libtiff5-dev libpng-dev

All other users should execute:

sudo apt install -y libcamera-dev libepoxy-dev libjpeg-dev libtiff5-dev libpng-dev

If you want to use the Qt preview window, please also execute

sudo apt install -y qtbase5-dev libqt5core5a libqt5gui5 libqt5widgets5

If you want libav support in rpicam-vid, additional libraries must be installed:

sudo apt install libavcodec-dev libavdevice-dev libavformat-dev libswresample-dev

Now proceed directly to the instructions for building rpicam-apps. Raspberry Pi OS Lite users should check that git is installed first (sudo apt install -y git).

Building libcamera

Rebuilding libcamera from scratch should be necessary only if you need the latest features that may not yet have reached the apt repositories, or if you need to customise its behaviour in some way.

First install all the necessary dependencies for libcamera.

Raspberry Pi OS Lite users will first need to install the following additional packages if they have not done so previously:
sudo apt install -y python3-pip git python3-jinja2

All users should then install the following:

sudo apt install -y libboost-dev
sudo apt install -y libgnutls28-dev openssl libtiff5-dev pybind11-dev
sudo apt install -y qtbase5-dev libqt5core5a libqt5gui5 libqt5widgets5
sudo apt install -y meson cmake
sudo apt install -y python3-yaml python3-ply

In the meson commands below we have enabled the gstreamer plugin. If you do not need this you can set -Dgstreamer=disabled instead and the next pair of dependencies will not be required. But if you do leave gstreamer enabled, then you will need the following:

sudo apt install -y libglib2.0-dev libgstreamer-plugins-base1.0-dev

Now we can check out and build libcamera itself. We check out Raspberry Pi’s fork of libcamera which tracks the official repository but lets us control exactly when we pick up new features.

git clone https://github.com/raspberrypi/libcamera.git
cd libcamera

Next, please run

meson setup build --buildtype=release -Dpipelines=rpi/vc4,rpi/pisp -Dipas=rpi/vc4,rpi/pisp -Dv4l2=true -Dgstreamer=enabled -Dtest=false -Dlc-compliance=disabled -Dcam=disabled -Dqcam=disabled -Ddocumentation=disabled -Dpycamera=enabled

To complete the libcamera build, use

ninja -C build   # use -j 2 on Raspberry Pi 3 or earlier devices
sudo ninja -C build install

At the time of writing libcamera does not yet have a stable binary interface. Therefore, if you have rebuilt libcamera we recommend continuing and rebuilding rpicam-apps from scratch too.

Building libepoxy

Rebuilding libepoxy should not normally be necessary as this library changes only very rarely. If you do want to build it from scratch, however, please follow the instructions below.

Start by installing the necessary dependencies.

sudo apt install -y libegl1-mesa-dev

Next, check out and build libepoxy.

git clone https://github.com/anholt/libepoxy.git
cd libepoxy
mkdir _build
cd _build
meson setup ..
ninja
sudo ninja install

Building rpicam-apps

First fetch the necessary dependencies for rpicam-apps.

sudo apt install -y cmake libboost-program-options-dev libdrm-dev libexif-dev
sudo apt install -y meson ninja-build

The rpicam-apps build process begins with the following:

git clone https://github.com/raspberrypi/rpicam-apps.git
cd rpicam-apps

At this point you will need to run meson setup after deciding what extra flags to pass it. The valid flags are:

  • -Dneon_flags=armv8-neon - you may supply this when building for Raspberry Pi 3 or Raspberry Pi 4 devices running a 32-bit OS. Some post-processing features may run more quickly.

  • -Denable_libav=true or -Denable_libav=false - this enables or disables the libav encoder integration.

  • -Denable_drm=true or -Denable_drm=false - this enables or disables the DRM/KMS preview rendering. This is what implements the preview window when a desktop environment is not running.

  • -Denable_egl=true or -Denable_egl=false - this enables or disables the desktop environment-based preview. You should disable this if your system does not have a desktop environment installed.

  • -Denable_qt=true or -Denable_qt=false - this enables or disables support for the Qt-based implementation of the preview window. You should disable it if you do not have a desktop environment installed, or if you have no intention of using the Qt-based preview window. The Qt-based preview is normally not recommended because it is computationally very expensive, however it does work with X display forwarding.

  • -Denable_opencv=true or -Denable_opencv=false - you may choose one of these to force OpenCV-based post-processing stages to be linked (or not). If you enable them, then OpenCV must be installed on your system. Normally they will be built by default if OpenCV is available.

  • -Denable_tflite=true or -Denable_tflite=false - choose one of these to enable TensorFlow Lite post-processing stages (or not). By default they will not be enabled. If you enable them then TensorFlow Lite must be available on your system. Depending on how you have built and/or installed TFLite, you may need to tweak the meson.build file in the post_processing_stages directory.

For Raspberry Pi OS users we recommend the following meson setup command:

meson setup build -Denable_libav=true -Denable_drm=true -Denable_egl=true -Denable_qt=true -Denable_opencv=false -Denable_tflite=false

and for Raspberry Pi OS Lite users:

meson setup build -Denable_libav=false -Denable_drm=true -Denable_egl=false -Denable_qt=false -Denable_opencv=false -Denable_tflite=false

In both cases, consider -Dneon_flags=armv8-neon if you are using a 32-bit OS on a Raspberry Pi 3 or Raspberry Pi 4. Consider -Denable_opencv=true if you have installed OpenCV and wish to use OpenCV-based post-processing stages. Finally also consider -Denable_tflite=true if you have installed TensorFlow Lite and wish to use it in post-processing stages.

After executing the meson setup command of your choice, the whole process concludes with the following:

meson compile -C build # use -j1 on Raspberry Pi 3 or earlier devices
sudo meson install -C build
sudo ldconfig # this is only necessary on the first build

If you are using an image where rpicam-apps have been previously installed as an apt package, and you want to run the new rpicam-apps executables from the same terminal window where you have just built and installed them, you may need to run hash -r to be sure to pick up the new ones over the system supplied ones.

Finally, if you have not already done so, please be sure to follow the dtoverlay and display driver instructions in the Getting Started section (and rebooting if you changed anything there).

Understanding and Writing your own Apps

rpicam-apps are not supposed to be a full set of all the applications with all the features that anyone could ever need. Instead, they are supposed to be easy to understand, such that users who require slightly different behaviour can implement it for themselves.

All the applications work by having a simple event loop which receives a message with a new set of frames from the camera system. This set of frames is called a CompletedRequest. It contains all the images that have been derived from that single camera frame (so perhaps a low resolution image in addition to the full size output), as well as metadata from the camera system and further metadata from the post-processing system.


rpicam-hello is much the easiest application to understand. The only thing it does with the camera images is extract the CompletedRequestPtr (a shared pointer to the CompletedRequest) from the message:

	CompletedRequestPtr &completed_request = std::get<CompletedRequestPtr>(msg.payload);

and forward it to the preview window:

	app.ShowPreview(completed_request, app.ViewfinderStream());

One important thing to note is that every CompletedRequest must be recycled back to the camera system so that the buffers can be reused, otherwise it will simply run out of buffers in which to receive new camera frames. This recycling process happens automatically when all references to the CompletedRequest are dropped, using C++'s shared pointer and custom deleter mechanisms.

In rpicam-hello therefore, two things must happen for the CompletedRequest to be returned to the camera.

  1. The event loop must go round again so that the message (msg in the code), which is holding a reference to the shared pointer, is dropped.

  2. The preview thread, which takes another reference to the CompletedRequest when ShowPreview is called, must be called again with a new CompletedRequest, causing the previous one to be dropped.
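The automatic recycling relies only on standard C++ machinery. The following self-contained sketch (illustrative only, not actual rpicam-apps code) shows how a custom deleter can return a buffer to a free pool the moment the last shared pointer to it is dropped:

```cpp
#include <cstdint>
#include <memory>
#include <queue>
#include <vector>

// A toy buffer pool: buffers are handed out wrapped in shared_ptrs whose
// custom deleter returns them to the pool instead of freeing them.
class BufferPool
{
public:
	explicit BufferPool(size_t count)
	{
		for (size_t i = 0; i < count; i++)
			free_.push(std::make_unique<std::vector<uint8_t>>(64));
	}
	// Hand out a buffer; when the last reference is dropped the deleter
	// "recycles" it back into the free queue (cf. CompletedRequest).
	std::shared_ptr<std::vector<uint8_t>> Get()
	{
		if (free_.empty())
			return nullptr; // the camera would stall: no buffer for a new frame
		std::vector<uint8_t> *buf = free_.front().release();
		free_.pop();
		return std::shared_ptr<std::vector<uint8_t>>(
			buf, [this](std::vector<uint8_t> *b) { free_.emplace(b); });
	}
	size_t FreeCount() const { return free_.size(); }

private:
	std::queue<std::unique_ptr<std::vector<uint8_t>>> free_;
};
```

Every holder of the shared pointer (the event loop's message, the preview window, an encoder) simply drops its copy; only when the last copy goes does the buffer become available to receive a new frame.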


rpicam-vid is not unlike rpicam-hello, but it adds a codec to the event loop and the preview. Before the event loop starts, we must configure that encoder with a callback which says what happens to the buffer containing the encoded image data.

	app.SetEncodeOutputReadyCallback(std::bind(&Output::OutputReady, output.get(), _1, _2, _3, _4));

Here we send the buffer to the Output object which may write it to a file, or send it over the network, according to our choice when we started the application.

The encoder also takes a new reference to the CompletedRequest, so once the event loop, the preview window and the encoder all drop their references, the CompletedRequest will be recycled automatically back to the camera system.


rpicam-raw is not so very different from rpicam-vid. It too uses an encoder, although this time it is a "dummy" encoder called the NullEncoder. This just treats the input image directly as the output buffer and is careful not to drop its reference to the input until the output callback has dealt with it first.

This time, however, we do not forward anything to the preview window, though we could have displayed the (processed) video stream if we had wanted.

The use of the NullEncoder is possibly overkill in this application, as we could probably just send the image straight to the Output object. However, it serves to underline the general principle that it is normally a bad idea to do too much work directly in the event loop, and time-consuming processes are often better left to other threads.
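That general principle can be sketched with plain standard C++: a minimal producer/consumer arrangement (again illustrative only, not rpicam-apps code) in which the event loop hands copied frames to a worker thread instead of processing them inline:

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A tiny work queue: the "event loop" pushes copied frames, a worker
// thread drains them, so slow processing never blocks frame delivery.
class FrameWorker
{
public:
	FrameWorker() : thread_([this] { Run(); }) {}
	~FrameWorker()
	{
		{
			std::lock_guard<std::mutex> lock(mutex_);
			quit_ = true;
		}
		cond_.notify_one();
		thread_.join();
	}
	// Called from the event loop: cheap, just a copy and a queue push.
	void Submit(std::vector<uint8_t> frame)
	{
		{
			std::lock_guard<std::mutex> lock(mutex_);
			queue_.push(std::move(frame));
		}
		cond_.notify_one();
	}
	size_t Processed()
	{
		std::lock_guard<std::mutex> lock(mutex_);
		return processed_;
	}

private:
	void Run()
	{
		std::unique_lock<std::mutex> lock(mutex_);
		while (true)
		{
			cond_.wait(lock, [this] { return quit_ || !queue_.empty(); });
			if (queue_.empty() && quit_)
				break;
			std::vector<uint8_t> frame = std::move(queue_.front());
			queue_.pop();
			lock.unlock();
			// (time-consuming processing of the frame would happen here)
			lock.lock();
			processed_++;
		}
	}
	std::mutex mutex_;
	std::condition_variable cond_;
	std::queue<std::vector<uint8_t>> queue_;
	size_t processed_ = 0;
	bool quit_ = false;
	std::thread thread_; // started last, after the other members exist
};
```

Note that Submit takes its frame by value: as discussed earlier, images handed to another thread currently need to be copied, which is one reason the low resolution stream is attractive for analysis work.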


We discuss rpicam-jpeg rather than rpicam-still as the basic idea (that of switching the camera from preview into capture mode) is the same, and rpicam-jpeg has far fewer additional options (such as timelapse capture) that serve to distract from the basic function.

rpicam-jpeg starts the camera in preview mode in the usual way, but at the appropriate moment stops it and switches to still capture:

	app.StopCamera();
	app.Teardown();
	app.ConfigureStill();
	app.StartCamera();

Then the event loop will grab the first frame that emerges once it’s no longer in preview mode, and save this as a JPEG.

Python Bindings for libcamera

The Picamera2 library is a libcamera-based replacement for Picamera, which was a Python interface to Raspberry Pi’s legacy camera stack. Picamera2 presents an easy to use Python API.

Documentation about Picamera2 is available on GitHub and in the Picamera2 Manual.


Picamera2 is only supported on Raspberry Pi OS Bullseye (or later) images, both 32- and 64-bit.

As of September 2022, Picamera2 is pre-installed on images downloaded from Raspberry Pi. It works on all Raspberry Pi boards right down to the Pi Zero, although performance in some areas may be worse on less powerful devices.

Picamera2 is not supported on:

  1. Images based on Buster or earlier releases.

  2. Bullseye images where the legacy camera stack has been re-enabled.

On Raspberry Pi OS images, Picamera2 is now installed with all the GUI (Qt and OpenGL) dependencies. On Raspberry Pi OS Lite, it is installed without the GUI dependencies, although preview images can still be displayed using DRM/KMS. If these users wish to use the additional GUI features, they will need to run

$ sudo apt install -y python3-pyqt5 python3-opengl

No changes are required to Picamera2 itself.

If your image did not come pre-installed with Picamera2, apt is the recommended way of installing and updating Picamera2.

$ sudo apt update
$ sudo apt upgrade

Thereafter, you can install Picamera2 with all the GUI (Qt and OpenGL) dependencies using

$ sudo apt install -y python3-picamera2

If you do not want the GUI dependencies, use

$ sudo apt install -y python3-picamera2 --no-install-recommends
If you have installed Picamera2 previously using pip, then you should also uninstall this, using the command pip3 uninstall picamera2.

If Picamera2 is already installed, you can update it with sudo apt install -y python3-picamera2, or as part of a full system update (for example, sudo apt upgrade).

Camera Tuning and supporting 3rd Party Sensors

The Camera Tuning File

Most of the image processing applied to frames from the sensor is done by the hardware ISP (Image Signal Processor). This processing is governed by a set of control algorithms and these in turn must have a wide range of parameters supplied to them. These parameters are tuned specifically for each sensor and are collected together in a JSON file known as the camera tuning file.

This tuning file can be inspected and edited by users. Using the --tuning-file command line option, users can point the system at completely custom camera tuning files.
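As an indication of the format only, a fragment of such a tuning file might look like the following; the exact algorithms, parameters, and values vary by sensor and libcamera version, so consult the tuning file shipped for your sensor rather than treating this as authoritative:

```json
{
    "version": 2.0,
    "target": "bcm2835",
    "algorithms": [
        {
            "rpi.black_level": { "black_level": 4096 }
        },
        {
            "rpi.awb": { "bayes": 1 }
        }
    ]
}
```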

3rd Party Sensors

libcamera makes it possible to support 3rd party sensors (that is, sensors other than Raspberry Pi’s officially supported sensors) on the Raspberry Pi platform. To accomplish this, a working open source sensor driver must be provided, which the authors are happy to submit to the Linux kernel. There are also a couple of extra files that need to be added to libcamera, supplying device-specific information beyond what is available from the kernel drivers, including the previously discussed camera tuning file.

Raspberry Pi also supplies a tuning tool which automates the generation of the tuning file from a few simple calibration images.

Both these topics are rather beyond the scope of the documentation here, however, full information is available in the Tuning Guide for the Raspberry Pi cameras and libcamera.

Known Issues

We are aware of the following issues in libcamera and rpicam-apps.

  • On Raspberry Pi 3 (and earlier devices) the graphics hardware can only support images up to 2048x2048 pixels, which places a limit on the camera images that can be resized into the preview window. In practice this means that video encoding of images wider than 2048 pixels (which would necessarily be using a codec other than h.264) will either show no preview image or a corrupted one. For Raspberry Pi 4 the limit is 4096 pixels. We would recommend using the -n (no preview) option for the time being.

  • The preview window shows some display tearing when using a desktop environment. This is not likely to be fixable.

Getting Help

For further help with libcamera and the rpicam-apps, the first port of call will usually be the Raspberry Pi Camera Forum. Before posting, it’s helpful to:

  • Make a note of your operating system version (uname -a).

  • Make a note of your libcamera and rpicam-apps versions (rpicam-hello --version).

  • Please report the make and model of the camera module you are using. Note that when third party camera module vendors supply their own software then we are normally unable to offer any support and all queries should be directed back to the vendor.

  • Please also provide information on what kind of a Raspberry Pi you have, including memory size.

  • If it seems like it might be relevant, please include any excerpts from the application’s console output.

When it seems likely that there are specific problems in the camera software (such as crashes), it may be more appropriate to create an issue in the rpicam-apps GitHub repository. Again, please include all the helpful details that you can.

Application Notes

Creating Timelapse Video

To create a time-lapse video, you simply configure the Raspberry Pi to take a picture at a regular interval, such as once a minute, then use an application to stitch the pictures together into a video.

Using rpicam-still Timelapse Mode

rpicam-still has a built-in time-lapse mode, using the --timelapse command line switch. The value that follows the switch is the time between shots in milliseconds:

rpicam-still -t 30000 --timelapse 2000 -o image%04d.jpg

Note the %04d in the output filename: this indicates the point in the filename where you want a frame count number to appear. So, for example, the command above will produce a capture every two seconds (2000ms) over a total period of 30 seconds (30000ms), named image0001.jpg, image0002.jpg, and so on, through to image0015.jpg.

The %04d indicates a four-digit number, with leading zeros added to make up the required number of digits. So, for example, %08d would result in an eight-digit number. You can miss out the 0 if you don’t want leading zeros.

If a timelapse value of 0 is entered, the application will take pictures as fast as possible. Note that there’s a minimum enforced pause of approximately 30 milliseconds between captures to ensure that exposure calculations can be made.

Automating using cron Jobs

A good way to automate taking a picture at a regular interval is running a script with cron. First create the script that we’ll be using with your editor of choice, replacing the <username> placeholder below with the name of the user you created during first boot:

#!/bin/bash
DATE=$(date +"%Y-%m-%d_%H%M")
rpicam-still -o /home/<username>/camera/$DATE.jpg

and save it as camera.sh. You’ll need to make the script executable:

$ chmod +x camera.sh

and also create the camera directory into which you’ll be saving the pictures:

$ mkdir camera

Now open the cron table for editing:

$ crontab -e

This will either ask which editor you would like to use, or open in your default editor. Once you have the file open in an editor, add the following line to schedule taking a picture every minute, replacing the <username> placeholder with the username of your primary user account:

* * * * * /home/<username>/camera.sh 2>&1

Save and exit and you should see the message:

crontab: installing new crontab

Make sure that you use e.g. %04d to ensure that each image is written to a new file: if you don’t, then each new image will overwrite the previous file.

Stitching Images Together

Now you’ll need to stitch the photos together into a video. You can do this on the Raspberry Pi using ffmpeg but the processing will be slow. You may prefer to transfer the image files to your desktop computer or laptop and produce the video there.

First you will need to install ffmpeg if it’s not already installed.

sudo apt install ffmpeg

Now you can use the ffmpeg tool to convert your JPEG files into an mp4 video:

ffmpeg -r 10 -f image2 -pattern_type glob -i 'image*.jpg' -s 1280x720 -vcodec libx264 timelapse.mp4

On a Raspberry Pi 3, this can encode a little more than two frames per second. The performance of other Raspberry Pi models will vary. The parameters used are:

  • -r 10 Set frame rate (Hz value) to ten frames per second in the output video.

  • -f image2 Set ffmpeg to read from a list of image files specified by a pattern.

  • -pattern_type glob When importing the image files, use wildcard patterns (globbing) to interpret the filename input by -i, in this case image*.jpg, where * would be the image number.

  • -i 'image*.jpg' The input file specification (to match the files produced during the capture).

  • -s 1280x720 Scale to 720p. You can also use 1920x1080, or lower resolutions, depending on your requirements.

  • -vcodec libx264 Use the software x264 encoder.

  • timelapse.mp4 The name of the output video file.

ffmpeg has a comprehensive parameter set for varying encoding options and other settings. These can be listed using ffmpeg --help.

Using Gstreamer

Gstreamer is a Linux framework for reading, processing and playing multimedia files. There is a lot of information and many tutorials at the gstreamer website. Here we show how rpicam-vid can be used to stream video over a network.

On the server we need rpicam-vid to output an encoded h.264 bitstream to stdout and can use the gstreamer fdsrc element to receive it. Then extra gstreamer elements can send this over the network. As an example we can simply send and receive the stream on the same device over a UDP link. On the server:

rpicam-vid -t 0 -n --inline -o - | gst-launch-1.0 fdsrc fd=0 ! udpsink host=localhost port=5000

For the client (type this into another console window) we can use:

gst-launch-1.0 udpsrc address=localhost port=5000 ! h264parse ! v4l2h264dec ! autovideosink

Using RTP

To stream using the RTP protocol, on the server you could use:

rpicam-vid -t 0 -n --inline -o - | gst-launch-1.0 fdsrc fd=0 ! h264parse ! rtph264pay ! udpsink host=localhost port=5000

And in the client window:

gst-launch-1.0 udpsrc address=localhost port=5000 caps=application/x-rtp ! rtph264depay ! h264parse ! v4l2h264dec ! autovideosink

We conclude with an example that streams from one machine to another. Let us assume that the client machine has the IP address <client-ip>. On the server (a Raspberry Pi) the pipeline is identical, but for the destination address:

rpicam-vid -t 0 -n --inline -o - | gst-launch-1.0 fdsrc fd=0 ! h264parse ! rtph264pay ! udpsink host=<client-ip> port=5000

If the client is not a Raspberry Pi it may have different gstreamer elements available. For a Linux PC we might use:

gst-launch-1.0 udpsrc address=<client-ip> port=5000 caps=application/x-rtp ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink

The libcamerasrc element

libcamera provides a libcamerasrc gstreamer element which can be used directly instead of rpicam-vid. On the server you could use:

gst-launch-1.0 libcamerasrc ! capsfilter caps=video/x-raw,width=1280,height=720,format=NV12 ! v4l2convert ! v4l2h264enc extra-controls="controls,repeat_sequence_header=1" ! h264parse ! rtph264pay ! udpsink host=localhost port=5000

and on the client we use the same playback pipeline as previously.

Using libcamera and Qt together

Qt is a popular application framework and GUI toolkit, and indeed rpicam-apps optionally makes use of it to implement a camera preview window.

However, Qt defines certain symbols as macros in the global namespace (such as slot and emit) and this causes errors when including libcamera files. The problem is common to all platforms trying to use both Qt and libcamera and not specific to Raspberry Pi. Nonetheless we suggest that developers experiencing difficulties try the following workarounds.

  1. libcamera include files, or files that include libcamera files (such as rpicam-apps files), should be listed before any Qt header files where possible.

  2. If you do need to mix your Qt application files with libcamera includes, replace signals: with Q_SIGNALS:, slots: with Q_SLOTS:, emit with Q_EMIT and foreach with Q_FOREACH.

  3. Before any libcamera include files, add

    #undef signals
    #undef slots
    #undef emit
    #undef foreach
  4. If you are using qmake, add CONFIG += no_keywords to the project file. If using cmake, add SET(QT_NO_KEYWORDS ON).

We are not aware of any plans for the underlying library problems to be addressed.

V4L2 Drivers

V4L2 drivers provide a standard Linux interface for accessing camera and codec features. They are loaded automatically when the system is started, though in some non-standard situations you may need to load camera drivers explicitly.

Device nodes when using libcamera

/dev/videoX    Default Action

/dev/video0    Unicam driver for the first CSI-2 receiver.

/dev/video1    Unicam driver for the second CSI-2 receiver.

/dev/video10   Video decode.

/dev/video11   Video encode.

/dev/video12   Simple ISP. Can perform conversion and resizing between RGB/YUV formats, and also Bayer to RGB/YUV conversion.

/dev/video13   Input to the fully programmable ISP.

/dev/video14   High resolution output from the fully programmable ISP.

/dev/video15   Low resolution output from the fully programmable ISP.

/dev/video16   Image statistics from the fully programmable ISP.

/dev/video19   HEVC decode.

Using the Driver

Please see the V4L2 documentation for details on using this driver.

Camera Serial Interface 2 (CSI2) "Unicam"

The SoC’s used on the Raspberry Pi range all have two camera interfaces that support either CSI-2 D-PHY 1.1 or CCP2 (Compact Camera Port 2) sources. This interface is known by the codename "Unicam". The first instance of Unicam supports 2 CSI-2 data lanes, whilst the second supports 4. Each lane can run at up to 1Gbit/s (DDR, so the max link frequency is 500MHz).

However, the normal variants of the Raspberry Pi expose only the second instance, and route out only two of its data lanes to the camera connector. The Compute Module range routes out all lanes from both peripherals.
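To get a feel for what these lane counts mean in practice, the following Python sketch (a back-of-the-envelope estimate of our own, ignoring blanking intervals and CSI-2 protocol overhead, so real limits are somewhat lower) computes an upper bound on the raw frame rate a sensor mode could sustain over the two-lane camera connector:

```python
# Rough upper bound on frame rate over the CSI-2 link, ignoring
# blanking and protocol overhead (illustrative figures only).

LANE_RATE_BPS = 1_000_000_000  # up to 1 Gbit/s per lane, as above

def max_frame_rate(width, height, bits_per_pixel, lanes=2):
    """Best-case frames per second for a raw Bayer stream."""
    bits_per_frame = width * height * bits_per_pixel
    return lanes * LANE_RATE_BPS / bits_per_frame

# Example: a 12MP sensor reading out 10-bit raw Bayer on 2 lanes
print(f"{max_frame_rate(4056, 3040, 10):.1f} fps")  # ≈ 16.2 fps
```

Doubling the lane count (as on a Compute Module using all four lanes) doubles this bound.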

Software Interfaces

The V4L2 software interface is now the only means of communicating with the Unicam peripheral. There were previously also "Firmware" and "MMAL rawcam component" interfaces, but these are no longer supported.


The V4L2 interface for Unicam is available only when using libcamera.

There is a fully open-source kernel driver available for the Unicam block; this is a kernel module called bcm2835-unicam. It interfaces with V4L2 subdevice drivers for the source to deliver the raw frames. The bcm2835-unicam driver controls the sensor and configures the CSI-2 receiver so that the peripheral writes the raw frames to SDRAM for V4L2 to deliver to applications. Except for the ability to unpack the CSI-2 packed Bayer formats to 16 bits/pixel, there is no image processing between the image source (e.g. camera sensor) and bcm2835-unicam placing the image data in SDRAM.

|------------------------|
|     bcm2835-unicam     |
|------------------------|
     ^             |
     |      |-------------|
 img |      |  Subdevice  |
     |      |-------------|
     v   -SW/HW-   |
|---------|   |-----------|
| Unicam  |   | I2C or SPI|
|---------|   |-----------|
csi2/ ^             |
ccp2  |             |
|----------------------|
|        sensor        |
|----------------------|
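The "unpacking" mentioned above can be illustrated in Python. This is a sketch of the CSI-2 packed RAW10 layout (four pixels sharing five bytes), not code taken from the driver, and the exact in-memory format the driver produces may differ:

```python
def unpack_raw10(packed: bytes) -> list[int]:
    """Unpack CSI-2 RAW10: each group of 5 bytes holds 4 pixels.
    Bytes 0-3 carry the 8 high bits of pixels 0-3; byte 4 carries
    the 2 low bits of each pixel (pixel 0 in bits 1:0, and so on)."""
    assert len(packed) % 5 == 0
    pixels = []
    for i in range(0, len(packed), 5):
        group = packed[i:i + 5]
        low_bits = group[4]
        for j in range(4):
            pixels.append((group[j] << 2) | ((low_bits >> (2 * j)) & 0x3))
    return pixels

# Four 10-bit pixel values, one per 16-bit word after unpacking:
print(unpack_raw10(bytes([0xFF, 0x00, 0x80, 0x01, 0b00011011])))
# → [1023, 2, 513, 4]
```

After unpacking, each 10-bit value occupies its own 16-bit word, which is what V4L2 applications then receive.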

Mainline Linux has a range of existing drivers. The Raspberry Pi kernel tree has some additional drivers, and device tree overlays to configure them, all of which have been tested and confirmed to work. They include:

Device                   | Type                           | Notes
-------------------------|--------------------------------|-------------------------------------------
Omnivision OV5647        | 5MP camera                     | Original Raspberry Pi Camera
Sony IMX219              | 8MP camera                     | Revision 2 Raspberry Pi camera
Sony IMX477              | 12MP camera                    | Raspberry Pi HQ camera
Sony IMX708              | 12MP camera                    | Raspberry Pi Camera Module 3
Sony IMX296              | 1.6MP camera                   | Raspberry Pi Global Shutter Camera Module
Toshiba TC358743         | HDMI to CSI-2 bridge           |
Analog Devices ADV728x-M | Analogue video to CSI-2 bridge | No interlaced support
Infineon IRS1125         | Time-of-flight depth sensor    | Supported by a third party

As the subdevice driver is also a kernel driver with a standardised API, third parties are free to write their own for any source of their choosing.

Developing Third-Party Drivers

This is the recommended approach to interfacing via Unicam.

When developing a driver for a new device intended to be used with the bcm2835-unicam module, you need the driver itself and a corresponding device tree overlay. Ideally the driver should be submitted to the linux-media mailing list for code review and merging into mainline, and then moved to the Raspberry Pi kernel tree; however, exceptions may be made for a driver to be reviewed and merged directly into the Raspberry Pi kernel tree.

Please note that all kernel drivers are licensed under the GPLv2 licence, and therefore source code MUST be made available. Shipping binary-only modules is a violation of the GPLv2 licence, under which the Linux kernel is licensed.

The bcm2835-unicam driver has been written to try to accommodate all types of CSI-2 source driver currently found in the mainline Linux kernel. Broadly, these can be split into camera sensors and bridge chips. Bridge chips convert between some other format and CSI-2.

Camera sensors

The sensor driver for a camera sensor is responsible for all configuration of the device, usually via I2C or SPI. Rather than writing a driver from scratch, it is often easier to take an existing driver as a basis and modify it as appropriate.

The IMX219 driver is a good starting point. This driver supports both 8bit and 10bit Bayer readout, so enumerating frame formats and frame sizes is slightly more involved.

Sensors generally support V4L2 user controls. Not all of these controls need to be implemented in a driver. The IMX219 driver implements only the small subset listed below; their implementation is handled by the imx219_set_ctrl function.

  • V4L2_CID_PIXEL_RATE / V4L2_CID_VBLANK / V4L2_CID_HBLANK: allows the application to set the frame rate.

  • V4L2_CID_EXPOSURE: sets the exposure time in lines. The application needs to use V4L2_CID_PIXEL_RATE, V4L2_CID_HBLANK, and the frame width to compute the line time.

  • V4L2_CID_ANALOGUE_GAIN: analogue gain in sensor specific units.

  • V4L2_CID_DIGITAL_GAIN: optional digital gain in sensor specific units.

  • V4L2_CID_HFLIP / V4L2_CID_VFLIP: flips the image either horizontally or vertically. Note that this operation may change the Bayer order of the data in the frame, as is the case on the imx219.

  • V4L2_CID_TEST_PATTERN / V4L2_CID_TEST_PATTERN_*: Enables output of various test patterns from the sensor. Useful for debugging.

In the case of the IMX219, many of these controls map directly onto register writes to the sensor itself.
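The way these controls combine into frame timing can be sketched as follows; the sensor numbers below are hypothetical, chosen only to illustrate the arithmetic an application would perform:

```python
# How V4L2_CID_PIXEL_RATE, V4L2_CID_HBLANK and V4L2_CID_VBLANK combine
# into line time and frame rate. All sensor numbers are hypothetical.

def line_time_us(pixel_rate_hz: int, width: int, hblank: int) -> float:
    """One line lasts (active width + HBLANK) pixel periods."""
    return (width + hblank) * 1_000_000 / pixel_rate_hz

def frame_rate_fps(pixel_rate_hz, width, hblank, height, vblank) -> float:
    """A frame is (active height + VBLANK) lines long."""
    frame_time_us = (height + vblank) * line_time_us(pixel_rate_hz, width, hblank)
    return 1_000_000 / frame_time_us

PIXEL_RATE = 182_400_000   # Hz (hypothetical)
WIDTH, HBLANK = 3280, 360  # pixels
HEIGHT, VBLANK = 2464, 40  # lines

print(f"line time: {line_time_us(PIXEL_RATE, WIDTH, HBLANK):.2f} us")
print(f"frame rate: {frame_rate_fps(PIXEL_RATE, WIDTH, HBLANK, HEIGHT, VBLANK):.2f} fps")

# An exposure of N lines (V4L2_CID_EXPOSURE) therefore lasts
# N * line_time_us microseconds.
```

Increasing VBLANK lengthens the frame without touching the line time, which is why frame rate is controlled through the blanking values rather than a dedicated frame-rate control.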

Further guidance can be found in libcamera’s sensor driver requirements, and also in chapter 3 of the Raspberry Pi Camera Tuning Guide.

Device Tree

Device tree is used to select the sensor driver and configure parameters such as number of CSI-2 lanes, continuous clock lane operation, and link frequency (often only one is supported).

Bridge chips

These are devices that convert an incoming video stream, for example HDMI or composite, into a CSI-2 stream that can be accepted by the Raspberry Pi CSI-2 receiver.

Handling bridge chips is more complicated, as unlike camera sensors they have to respond to the incoming signal and report that to the application.

The mechanisms for handling bridge chips can be broadly split into either analogue or digital.

When using the ioctls in the sections below, an S in the ioctl name indicates a set function, G a get function, and _ENUM a function that enumerates a set of permitted values.

Analogue video sources

Analogue video sources use the standard ioctls for detecting and setting video standards: VIDIOC_G_STD, VIDIOC_S_STD, VIDIOC_ENUMSTD, and VIDIOC_QUERYSTD.

Selecting the wrong standard will generally result in corrupt images. Setting the standard will typically also set the resolution on the V4L2 CAPTURE queue; it cannot be set via VIDIOC_S_FMT. It is generally a good idea to request the detected standard via VIDIOC_QUERYSTD and then set it with VIDIOC_S_STD before streaming.

Digital video sources

For digital video sources, such as HDMI, there is an alternative set of calls that allows all the digital timing parameters to be specified: VIDIOC_G_DV_TIMINGS, VIDIOC_S_DV_TIMINGS, VIDIOC_ENUM_DV_TIMINGS, and VIDIOC_QUERY_DV_TIMINGS.

As with analogue bridges, the timings typically fix the V4L2 CAPTURE queue resolution, and calling VIDIOC_S_DV_TIMINGS with the result of VIDIOC_QUERY_DV_TIMINGS before streaming should ensure the format is correct.

Depending on the bridge chip and the driver, it may be possible for changes in the input source to be reported to the application via VIDIOC_SUBSCRIBE_EVENT and V4L2_EVENT_SOURCE_CHANGE.

Currently supported devices

There are two bridge chips currently supported by the Raspberry Pi Linux kernel: the Analog Devices ADV728x-M for analogue video sources, and the Toshiba TC358743 for HDMI sources.

Analog Devices ADV728x(A)-M Analogue video to CSI2 bridge

These chips convert composite, S-video (Y/C), or component (YPrPb) video into a single lane CSI-2 interface, and are supported by the ADV7180 kernel driver.

Product details for the various versions of this chip can be found on the Analog Devices website.

Because of some missing code in the current core V4L2 implementation, selecting the source fails. The Raspberry Pi kernel version therefore adds a module parameter called dbg_input to the ADV7180 kernel driver, which sets the input source every time VIDIOC_S_STD is called. At some point mainline will fix the underlying issue (a mismatch between the kernel API call s_routing and the userspace call VIDIOC_S_INPUT), and this modification will be removed.

Please note that receiving interlaced video is not supported, so the ADV7281(A)-M version of the chip is of limited use, as it lacks the necessary I2P deinterlacing block. Also ensure that you specify the -M variant when selecting a device; without it you will get a parallel output bus, which cannot be interfaced to the Raspberry Pi.

There are no known commercially available boards using these chips, but this driver has been tested via the Analog Devices EVAL-ADV7282-M evaluation board.

This driver can be loaded using the config.txt dtoverlay adv7282m if you are using the ADV7282-M chip variant, or adv728x-m with a parameter of adv7280m=1, adv7281m=1, or adv7281ma=1 if you are using a different variant. For example:

    dtoverlay=adv728x-m,adv7280m=1

Toshiba TC358743 HDMI to CSI2 bridge

This is a HDMI to CSI-2 bridge chip, capable of converting video data at up to 1080p60.

Information on this bridge chip can be found on the Toshiba website.

The TC358743 bridges an HDMI input to CSI-2 and I2S outputs. It is supported by the TC358743 kernel module.

The chip supports incoming HDMI signals as either RGB888, YUV444, or YUV422, at up to 1080p60. It can forward RGB888, or convert it to YUV444 or YUV422, and convert either way between YUV444 and YUV422. Only RGB888 and YUV422 support has been tested. When using 2 CSI-2 lanes, the maximum rates that can be supported are 1080p30 as RGB888, or 1080p50 as YUV422. When using 4 lanes on a Compute Module, 1080p60 can be received in either format.
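The mode limits quoted above are consistent with simple bandwidth arithmetic. This sketch (our own estimate, ignoring blanking intervals and CSI-2 protocol overhead, so real-world limits are somewhat tighter) compares the raw payload rate of each mode against the roughly 2Gbit/s available on two lanes:

```python
# Back-of-the-envelope payload rates for the TC358743 modes mentioned
# above, ignoring blanking and CSI-2 overhead (illustrative only).

def payload_gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

modes = {
    "1080p30 RGB888": payload_gbps(1920, 1080, 24, 30),  # ~1.49 Gbit/s
    "1080p50 YUV422": payload_gbps(1920, 1080, 16, 50),  # ~1.66 Gbit/s
    "1080p60 RGB888": payload_gbps(1920, 1080, 24, 60),  # ~2.99 Gbit/s
}
for name, rate in modes.items():
    verdict = "fits" if rate < 2.0 else "exceeds"
    print(f"{name}: {rate:.2f} Gbit/s ({verdict} 2 lanes at ~1 Gbit/s each)")
```

The 1080p60 RGB888 case exceeds two lanes but fits comfortably within the roughly 4Gbit/s available on a four-lane Compute Module link.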

HDMI negotiates the resolution by the receiving device advertising an EDID listing all the modes that it can support. The kernel driver has no knowledge of the resolutions, frame rates, or formats that you wish to receive, so it is up to the user to provide a suitable file. This is done via the VIDIOC_S_EDID ioctl, or more easily using v4l2-ctl --fix-edid-checksums --set-edid=file=filename.txt (the --fix-edid-checksums option means that you don’t have to get the checksum values correct in the source file). Generating the required EDID file (a textual hexdump of a binary EDID file) is not too onerous, and there are tools available to generate them, but it is beyond the scope of this page.
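For reference, the checksum that --fix-edid-checksums corrects follows the EDID rule that every 128-byte block must sum to zero modulo 256, with the block's final byte acting as the correction value. A minimal Python sketch:

```python
# EDID checksum rule: each 128-byte block must sum to 0 mod 256,
# with the last byte of the block chosen to make that true.

def fix_edid_checksum(block: bytes) -> bytes:
    """Return a copy of a 128-byte EDID block with a valid checksum."""
    assert len(block) == 128
    body = block[:127]
    checksum = (-sum(body)) % 256
    return body + bytes([checksum])

def checksum_ok(block: bytes) -> bool:
    return len(block) == 128 and sum(block) % 256 == 0

# A zero-filled block carrying the standard 8-byte EDID header:
header = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])
print(checksum_ok(fix_edid_checksum(header + bytes(120))))  # True
```

This is the same correction v4l2-ctl applies for you, which is why hand-edited EDID hexdumps do not need their checksum bytes updated manually.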

As described above, use the DV_TIMINGS ioctls to configure the driver to match the incoming video. The easiest approach is to use the command v4l2-ctl --set-dv-bt-timings query. The driver does support generating SOURCE_CHANGE events, should you wish to write an application that handles a changing source. The output pixel format is changed by setting it via VIDIOC_S_FMT; however, only the pixel format field will be updated, as the resolution is configured by the DV timings.

There are a couple of commercially available boards that connect this chip to the Raspberry Pi. The Auvidea B101 and B102 are the most widely obtainable, but other equivalent boards are available.

This driver is loaded using the config.txt dtoverlay tc358743.

The chip also supports capturing stereo HDMI audio via I2S. The Auvidea boards break the relevant signals out onto a header, which can be connected to the Raspberry Pi’s 40-pin header. The required wiring is:

Signal   | B101 header | 40-pin header | BCM GPIO
---------|-------------|---------------|---------
LRCK/WFS | 7           | 35            | 19
BCK/SCK  | 6           | 12            | 18
DATA/SD  | 5           | 38            | 20
GND      | 8           | 39            | N/A
The tc358743-audio overlay is required in addition to the tc358743 overlay. This should create an ALSA recording device for the HDMI audio. Please note that there is no resampling of the audio. The presence of audio is reflected in the V4L2 control TC358743_CID_AUDIO_PRESENT / "audio-present", and the sample rate of the incoming audio is reflected in the V4L2 control TC358743_CID_AUDIO_SAMPLING_RATE / "Audio sampling-frequency". Recording when no audio is present will generate warnings, as will recording at a sample rate different from that reported.