We are thrilled to be part of this week's edition of the LLVM Developers' Meeting, hosted by the LLVM Foundation.
We will be presenting some of roofline's latest compiler work. We are especially proud to show the joint work of Ege Beysel and Andrzej Warzyński from Arm, focusing on MLIR and data tiling for Arm's Scalable Vector Extension (SVE). In addition, Maximilian Bartel will take a deep dive into critical data layout transformations for tensor computations in MLIR.
Our team will be in Santa Clara all week, so please reach out to Maximilian Bartel, Christopher M., or Ege Beysel if you're around!

Enabling specific devices for AI deployment is one thing; making it developer-friendly is another. Most edge AI toolchains are complex and hard to use.
roofline changes that. In this short demo, our CTO Maximilian Bartel shows how simple deployment can be with our SDK: by changing just one line of code, he compiles and runs Qwen2.5-0.5B on CPU (Arm Cortex-A) and GPU (Arm Immortalis). No manual configuration, no hassle.
This is what retargetable AI deployment truly means. We unlock full SoCs for the edge AI products you dream of.
Edge SoCs are becoming increasingly complex and heterogeneous. Unlocking the entire chip for AI workloads (CPU, GPU, NPU) is key to building performant edge AI products. For example, this paves the way for on-device agentic AI, where multiple models can run in parallel across different compute cores.
roofline's SDK provides one integrated toolchain to easily switch execution targets. Enabled by our MLIR-based compiler, retargeting takes just one line of code.
In this demo, we showcase the migration of Qwen3 from Qualcomm's Oryon™ CPU to its Adreno™ GPU, realizing a speed-up of ~2x, all without switching tools or any manual rewrites.
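To illustrate the pattern behind a one-line retarget (a minimal, self-contained sketch only; the names `compile_model` and `Target` are hypothetical stand-ins, not roofline's actual SDK API):

```python
# Illustrative sketch: a hypothetical interface showing how a single
# changed argument can select the execution target. This is NOT
# roofline's real SDK; names and signatures are invented for clarity.

from dataclasses import dataclass


@dataclass(frozen=True)
class Target:
    """A compilation target (stand-in for a real backend descriptor)."""
    name: str


CPU = Target("oryon-cpu")    # hypothetical CPU backend label
GPU = Target("adreno-gpu")   # hypothetical GPU backend label


def compile_model(model: str, target: Target) -> str:
    # A real MLIR-based compiler would lower the model to the selected
    # backend here; this stub just records the choice.
    return f"{model} compiled for {target.name}"


# Retargeting is the change of a single argument:
artifact = compile_model("Qwen3", target=CPU)  # CPU build
artifact = compile_model("Qwen3", target=GPU)  # GPU build: one line changed
```

The point of the pattern is that the model definition and the rest of the deployment pipeline stay untouched; only the target descriptor changes.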
We are honored to announce that roofline has been selected by the European Innovation Council (EIC) Accelerator, receiving €2.5M in grant funding and a pre-committed equity investment for our next funding round.
The EIC Accelerator is Europe’s most competitive deep tech funding program. In this round, only 40 startups were selected from nearly 1,000 applicants: https://lnkd.in/e4aeD_sc
The recognition of our project "Retargetable AI Compiler Technology for Scalable Edge Deployment of Next-Generation AI Models" not only validates the potential of our technology but also fuels our mission to enable the edge AI products you dream of. Thank you to the European Innovation Council and SMEs Executive Agency (EISMEA) and to everyone who supported us on this journey.

On-device image-to-text with multimodal LLMs
LLMs are getting smaller and now fit on edge systems. But bringing them into products and unlocking disruptive use cases remains a challenge. Common edge AI deployment tools cannot keep up with the pace of AI innovation, especially with cutting-edge models like multimodal LLMs.
Here is a look at what roofline's MLIR-based compiler can do. We run an image-to-text task using Google DeepMind's Gemma-3-4B, fully compiled, on real Qualcomm edge hardware:
🖼️ Input: Camera view of a mobile robot in an aisle.
💬 Output: Natural language reasoning. The mobile robot decides to slow down and adjust its path.
⚡ Performance: ~9x faster than TorchInductor.
Curious? Let's talk
We’re proud to be part of the Edge AI Foundation as a new Strategic Partner. At roofline, we help developers go from idea to deployment without the usual months of onboarding or complexity. Our AI compiler technology enables faster, more flexible, and more efficient deployment of GenAI models on edge SoCs. Through close collaboration with leading chip vendors, we’re unlocking the full potential of edge AI hardware and supporting innovation across the ecosystem.
We’re excited to contribute to the shared mission of advancing the future of edge AI, together with the Edge AI Foundation and its community.

We are excited to be on the ground and give two talks at this year's EuroLLVM Developers' Meeting, organized by the LLVM Foundation.
In their talks, our CTO Maximilian Bartel and Lead Engineer Christopher McGirr will share key insights from our compiler work, focusing on MLIR and how to effectively engage with modern compiler infrastructure and the upstream community.
If you’re working on compilers, AI workloads, or MLIR, let’s meet up! We’d love to exchange ideas and discuss how we are building our AI deployment stack for the next generation of edge AI.
