AI TechSales Blog AKA The Watchtower Brief

The End of “Code First” in Embedded Systems

Written by Simon Bennett | Apr 2, 2026 1:31:10 AM

For decades, embedded systems have been built the same way. You start with requirements. You write code. You integrate against hardware. You debug. You iterate. And if you’re building something complex—robotics, autonomous systems, industrial edge devices—you repeat that loop for months. Sometimes years. This model has survived every wave of innovation:

  • better compilers
  • better SDKs
  • better simulation
  • even better silicon

But it is now running into a hard limit.

The System Has Outgrown the Workflow

Modern embedded systems are no longer just firmware. They are:

  • AI/ML pipelines running on heterogeneous compute
  • firmware interacting with sensors, networks, and real-world variability
  • hardware-specific optimizations across CPUs, GPUs, NPUs, and FPGAs
  • continuously evolving models that degrade and retrain over time

The result is that development workflows can no longer keep up. A perception pipeline for a robot. A multi-camera edge AI deployment. An IoT device expected to adapt in the field. These are not static builds anymore. And yet we still build them as if they were fixed artifacts.

The Rise of “Intent-Driven” Development

A different model is starting to emerge. Instead of beginning with code, engineers begin with intent. What do you want the system to do? What hardware is it targeting? What constraints matter—latency, power, memory, reliability? From there, the system is not manually constructed. It is generated. Not as snippets. Not as drafts. But as a full, deployable system:

  • architecture defined
  • hardware context applied
  • code generated
  • tests created
  • binaries produced

In other words: from prompt to deployable system. This is not coding acceleration; it is a workflow inversion.
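To make that inversion concrete, here is a minimal sketch of what a prompt-to-system pipeline could look like. Everything in it is hypothetical: the `Intent` fields, the stage functions, and the stubbed outputs stand in for real generation agents, not any particular product's API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Intent:
    goal: str         # what the system should do
    target: str       # deployment hardware, e.g. "jetson-orin"
    constraints: dict  # e.g. {"latency_ms": 50.0, "power_w": 10.0}


# Stubbed stages; a real system would invoke dedicated agents here.
def define_architecture(intent):
    # architecture defined, hardware context applied
    return {"pipeline": ["capture", "inference", "actuation"],
            "target": intent.target}


def generate_code(arch):
    # code generated, one source unit per pipeline stage
    return {f"{stage}.c": f"// generated for {arch['target']}"
            for stage in arch["pipeline"]}


def create_tests(arch, intent):
    # tests created against the stated constraints
    return {f"test_{stage}.c": f"// asserts {intent.constraints}"
            for stage in arch["pipeline"]}


def generate_system(intent):
    arch = define_architecture(intent)
    sources = generate_code(arch)
    tests = create_tests(arch, intent)
    return {"architecture": arch, "sources": sources, "tests": tests}


system = generate_system(
    Intent("pick-and-place", "jetson-orin", {"latency_ms": 50.0}))
print(sorted(system["sources"]))
```

The point of the sketch is the shape, not the stubs: intent is the input, and a complete buildable system is the output.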

Why This Didn’t Happen Sooner

At first glance, this sounds like an extension of what teams are already doing with GPT or Claude. It isn’t. Most teams experimenting with LLMs today are seeing modest gains—20 to 30% productivity improvements. Useful, but limited, because those tools operate at the wrong layer. They generate:

  • functions
  • snippets
  • partial implementations

But embedded development is not a code-generation problem. It is a systems orchestration problem. To move beyond incremental gains, three things are required:

  1. Hardware Awareness
    Code must reflect real deployment targets—Jetson, microcontrollers, FPGA fabrics—not abstract environments.
  2. Architectural Translation
    Requirements must be transformed into system-level designs before code is generated.
  3. Deterministic Execution + Validation
    Outputs must compile, run, and pass tests—not just “look correct.”

This requires more than a model. It requires a system of agents, each responsible for a different stage of the workflow:

  • translating intent into requirements
  • defining architecture
  • applying hardware context
  • generating code
  • validating outputs

And critically, combining:

  • LLM-driven reasoning
  • rule-based determinism
  • accumulated domain knowledge

That combination is what turns generation into something deployable.
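One way to picture that combination: an LLM proposes code, deterministic rules accept or reject it, and rule failures feed back as hints for the next attempt. The stand-in model and the two rules below are toy assumptions for the sketch, not a real API.

```python
def generate_with_guardrails(prompt, llm, rules, knowledge, max_attempts=3):
    """LLM-driven reasoning, gated by rule-based determinism,
    seeded with accumulated domain knowledge."""
    hints = knowledge.get(prompt, [])          # prior fixes for this prompt
    for _ in range(max_attempts):
        candidate = llm(prompt, hints)         # LLM proposes an implementation
        violations = [r.__name__ for r in rules if not r(candidate)]
        if not violations:                     # deterministic accept/reject
            return candidate
        hints = hints + violations             # failures become hints
    raise RuntimeError(f"no candidate passed: {violations}")


def fake_llm(prompt, hints):
    # Toy model: emits heap allocation until told not to.
    if "no_malloc" in hints:
        return "static uint8_t buf[64]; /* static allocation */"
    return "uint8_t *buf = malloc(64);"


def no_malloc(code):
    # Rule: no heap allocation in generated embedded code.
    return "malloc" not in code


result = generate_with_guardrails("allocate an ISR buffer",
                                  fake_llm, [no_malloc], {})
print(result)
```

The rules never hallucinate, the model never hand-checks, and every rejection teaches the next attempt: that division of labor is the combination the section describes.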

From Projects to Systems That Evolve

There is a second shift happening underneath this one. In the traditional model, firmware is treated as a project:

  • build it
  • ship it
  • maintain it

In reality, especially in edge AI and intelligent systems, firmware is becoming a continuously evolving layer of the product. New requirements can emerge after deployment:

  • bugs are discovered in the field
  • performance constraints shift
  • models degrade or drift
  • hardware behaviors change

Each of these triggers another development cycle. The difference in the new model is that this cycle is no longer manual.

The system:

  • ingests new requirements
  • regenerates affected components
  • revalidates against constraints
  • redeploys updated binaries

Over time, it accumulates:

  • prior fixes
  • optimized kernels
  • reusable components

What emerges is something closer to a virtual firmware lab than a traditional toolchain.
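A single turn of that cycle can be sketched in a few lines. The component names, requirement shape, and knowledge store below are all hypothetical placeholders for what a real virtual firmware lab would track.

```python
def evolve(components, requirement, knowledge):
    """One turn of the cycle: ingest, regenerate, revalidate, accumulate."""
    # 1. Ingest: find the components the new requirement touches.
    affected = [name for name in components if name in requirement["affects"]]
    # 2. Regenerate: rebuild only those, reusing a prior fix when one exists.
    for name in affected:
        key = (name, requirement["id"])
        components[name] = knowledge.get(key, f"regen:{requirement['id']}")
    # 3. Revalidate: every component must still be present and built.
    assert all(components.values()), "revalidation failed"
    # 4. Accumulate: remember what was applied for the next cycle.
    for name in affected:
        knowledge[(name, requirement["id"])] = components[name]
    return components


components = {"capture": "v1", "inference": "v1", "actuation": "v1"}
knowledge = {}
req = {"id": "fix-drift", "affects": {"inference"}}
evolve(components, req, knowledge)
print(components)
```

Only the affected component is regenerated, and the fix is recorded, so the same drift report next quarter costs one lookup instead of a development cycle.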

What This Means for Engineering Teams

This shift does not remove engineers from the process. It changes where they operate. From:

  • writing and stitching code

To:

  • defining intent
  • evaluating outputs
  • guiding system behavior

It also changes how teams scale.

Today:

  • progress depends on individual expertise
  • knowledge is fragmented across engineers
  • experimentation is informal

Tomorrow:

  • workflows are encoded
  • knowledge is systematized
  • iteration is repeatable

This matters for a simple reason: Most teams building embedded systems today are already experimenting with AI tools. But very few have turned that experimentation into a reliable development system.

The Beginning of a New Default

The “code first” model will not disappear overnight. But it is starting to look like an intermediate step—not the foundation. As systems become more complex and more adaptive, the pressure to move upstream—from code to intent—will only increase. The teams that make that shift early will not just move faster. They will build systems that are:

  • easier to evolve
  • easier to validate
  • and ultimately, harder to compete with

Because they are no longer just writing software. They are generating systems.