Raspberry Pi AI Kit update: Dataflow Compiler now available
Our recent release of the Raspberry Pi AI Kit got quite a bit of attention from the community. At launch we provided a number of computer vision-based AI demos and examples, built on well-known state-of-the-art neural network models. However, our power users quickly asked for more — in particular, the ability to re-train these models with their own datasets, or even to compile custom models to run on the Hailo AI accelerator. Hailo has been working hard behind the scenes, and we are excited to announce the release of the Hailo Dataflow Compiler (DFC). The DFC allows our users to extend the capabilities of the Raspberry Pi AI Kit and fine-tune its performance for their specific use cases.

Bring Your Own Data (BYOD)
Want to make a wildlife camera that detects certain types of animals? Using the DFC in BYOD mode, you can take some of the most popular neural network models and re-train them on your own custom dataset. Hailo has created an end-to-end tutorial outlining how to re-train an existing neural network model.
Bring Your Own Model (BYOM)
If our existing demos and the neural network models available in Hailo’s model zoo don’t do what you want, you can use the DFC to convert and compile models from ONNX or TensorFlow Lite (TFLite) to Hailo’s HEF format for running on the Hailo AI accelerator. Not for the faint of heart, BYOM requires a deep understanding of the model and the conversion flow — but some will see this as an interesting challenge. Take a look at the DFC tutorials in Hailo’s developer zone.
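For orientation, the BYOM conversion broadly follows a parse, optimize (quantize), compile sequence. The Python-style sketch below illustrates that flow; the `hailo_sdk_client` module, `ClientRunner` class, and method names are drawn from Hailo's published DFC documentation as we understand it, so treat the exact identifiers and signatures as assumptions and consult the DFC tutorials for the authoritative API:

```
# Sketch only — requires Hailo's proprietary DFC package; names are assumptions.
from hailo_sdk_client import ClientRunner

# Parse: translate the ONNX model into Hailo's internal representation.
runner = ClientRunner(hw_arch="hailo8l")  # the AI Kit uses the Hailo-8L
runner.translate_onnx_model("my_model.onnx", "my_model")

# Optimize: quantize the model using a representative calibration dataset.
runner.optimize(calibration_dataset)

# Compile: produce the HEF binary that runs on the accelerator.
hef = runner.compile()
with open("my_model.hef", "wb") as f:
    f.write(hef)
```

The calibration dataset matters: quantization accuracy depends on feeding the optimizer samples representative of your real inputs.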
What’s next?
Users have also asked about running models such as Whisper and Stable Diffusion on the Hailo AI accelerator. These very large models cannot run on it yet, but Hailo is working hard to port some of them.

Also coming soon is Python/Picamera2 integration with the Raspberry Pi AI Kit. We intend to make full support for Python and Picamera2, including demos and examples, available in our next package release.
5 comments
Anders
A grand for an RTX 4080.
Seems like the excuse I needed to get one.
Szaja
Great news! Thank you!
crumble
Why can’t large models run on this device (but can on the one with 8GB local RAM)? Great to see improvements. But on an education system I’d like to get some high-level explanation of why it works or not.
Is it simply too slow to push that much data over PCIe without a big cache on the device?
Or does Hailo use compression like BitNet, which has a huge impact on the result?
PCJR
Can the dual M2 be used with the AI M2? Or does it have to be the hat shown?
T L
A tutorial explaining exactly how to install the Dataflow Compiler on the Raspberry Pi 5 and what to do with it would be great… For us AI noobs.