Software updates for Raspberry Pi AI products

Raspberry Pi Software Engineering Manager Naush Patuck explains how users of our AI products can take advantage of our most recent software updates, including Hailo support for the Trixie release of Raspberry Pi OS and an input tensor injection feature for our AI Camera.

Raspberry Pi AI HAT+ and Raspberry Pi AI Kit

The Raspberry Pi AI HAT+ and the Raspberry Pi AI Kit, both based on Hailo AI accelerators, are now fully supported on the recently released Trixie version of Raspberry Pi OS. All the required software packages are available and ready to install from our apt repo.

This package release does contain one significant change: we have removed the Hailo device driver from our kernel builds and are now using DKMS to build and install the kernel driver as part of the package installation. This decoupling not only enables more flexibility with software releases going forward, but also allows our users to downgrade the device driver without downgrading the kernel itself. Downgrading a driver is only necessary if custom-built models were generated from an older version of the Hailo Dataflow Compiler.
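You can confirm what DKMS has built on your system before deciding whether a downgrade is needed. This is a minimal sketch, assuming the module name reported by `dkms status` contains "hailo":

```shell
# List any Hailo module DKMS has registered for this system;
# print a fallback message if none has been built yet
dkms status 2>/dev/null | grep -i hailo || echo "no hailo DKMS module registered"
```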

The installation instructions are exactly the same as before, with the additional step of installing the DKMS framework needed to compile the kernel device driver:

sudo apt install dkms
sudo apt install hailo-all
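Once the packages are installed, you can sanity-check that the accelerator is visible using `hailortcli`, which is installed as part of the Hailo packages; the fallback message here is just for illustration:

```shell
# Query the Hailo device firmware; prints device details on success,
# or a fallback message if no accelerator is detected
hailortcli fw-control identify 2>/dev/null || echo "Hailo device not detected"
```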

In related news, Hailo have also recently launched their application infrastructure framework on GitHub. This framework provides a foundation for developing your own AI-based applications by using reusable pipelines and components! Head over to the repo to check out the examples and demos.

Raspberry Pi AI Camera

One previously missing but frequently requested feature on the Raspberry Pi AI Camera is the ability to easily debug custom neural networks running on the device. We have now implemented an input tensor injection feature on the AI Camera that fulfils this request. Input tensor injection allows users to validate the quality and/or performance of the network running on the device using an existing image dataset in a repeatable way. These images may come from a standard dataset (e.g. COCO) or an entirely custom dataset tailored to your application.
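The repeatable-validation workflow described above can be sketched as follows. This is illustrative only: `inject_and_run` is a hypothetical stand-in for whatever injection call our example script exposes, not a real API name.

```python
from pathlib import Path


def collect_dataset(root, exts=(".jpg", ".jpeg", ".png")):
    """Gather dataset images in a fixed, sorted order so every run is repeatable."""
    return sorted(p for p in Path(root).rglob("*") if p.suffix.lower() in exts)


def validate_network(root, inject_and_run):
    """Run each dataset image through the on-camera network via tensor injection.

    inject_and_run is a hypothetical callable wrapping the AI Camera's
    injection API: it takes an image path and returns the network output.
    """
    return {img.name: inject_and_run(img) for img in collect_dataset(root)}
```

Because the images are visited in a deterministic order, two runs over the same dataset can be compared output-by-output when validating a retrained or re-quantised model.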

To use this feature, make sure your software is fully up to date:

sudo apt update
sudo apt full-upgrade -y

You can also give our example input tensor injection script a try.

6 comments

AniYolo

Which AI model formats are supported by the Raspberry Pi AI Camera’s onboard processor?

Naush Patuck

You can find a selection of supported models in our model zoo at https://github.com/raspberrypi/imx500-models/.

Note that other model families can also be supported.

Mike

I would like to be able to use the AI products in a system that does not have development tools (e.g. gcc, kernel headers, etc.) installed. Is there a way to cross-compile the driver and applications on a separate development machine, and only deploy, run, and debug them on a Raspberry Pi target?

Mark Tomlin

The thing that matters most to me is being able to use Whisper(.cpp). I do a lot of Software Defined Radio work, and real-time speech-to-text is the number one use case I have. Ideally, I'd be able to use the medium.en model.
