Roofline’s secret sauce is its retargetable AI compiler. Building on MLIR and IREE, our SDK provides the first truly flexible, and therefore scalable, deployment solution for edge AI.
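To give a flavor of what "retargetable" means in an IREE-based flow, here is a minimal sketch using IREE's open-source Python compiler bindings. The MLIR module (adapted from IREE's own samples), the file names, and the llvm-cpu target choice are illustrative; this is not Roofline's SDK API.

```python
# Illustrative only: IREE's public Python bindings, not Roofline's SDK.
from iree.compiler import compile_str

# A tiny MLIR module, adapted from IREE's samples: elementwise multiply.
MLIR_MODULE = """
func.func @simple_mul(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  %0 = arith.mulf %a, %b : tensor<4xf32>
  return %0 : tensor<4xf32>
}
"""

# Retargeting is a compile-time switch: swapping "llvm-cpu" for another
# registered backend lowers the same module for a different target.
vmfb = compile_str(MLIR_MODULE, target_backends=["llvm-cpu"])

with open("simple_mul.vmfb", "wb") as f:
    f.write(vmfb)  # deployable compiled module artifact
```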
Roofline works seamlessly with PyTorch, TensorFlow Lite, TensorFlow, and ONNX. Our main focus is PyTorch, given its growing importance for developers.
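As a concrete example of moving a PyTorch model into a cross-framework toolchain, the snippet below uses PyTorch's standard ONNX exporter. The model, input shape, and opset are arbitrary picks for illustration; Roofline's own import path may differ.

```python
import torch
import torchvision

# Illustrative: export a pretrained PyTorch model to ONNX, one of the
# interchange formats listed above.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.randn(1, 3, 224, 224)  # NCHW image batch

torch.onnx.export(
    model,
    example_input,
    "mobilenet_v2.onnx",
    opset_version=17,
)
```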
We test daily against a model zoo of hundreds of models across frameworks, including many of the most popular architectures from Hugging Face. A selection of supported models can be found in the Supported Applications section. Reach out if you are interested in support for a specific model.
We support proprietary models. They typically run out of the box, as our compiler covers the majority of common layers, operators, and quantization techniques. Your models stay private at all times and can be deployed using the same simple workflow as any other model.
Models should run without modification, since we already support the majority of common layers, operators, and quantization techniques.
Roofline supports all relevant CPUs and mobile GPUs. For NPUs, we are partnering with leading chip and IP vendors to integrate their hardware. A selection of supported hardware, with links to case studies, is available in the Supported Hardware section.