
AI Will Reshape Chip Design — But Not Through RTL Generation Alone

Simon Bennett

Anyone working in semiconductors has spent the last two years inside the same conversation. AI is going to reshape chip design. The question is no longer whether — it is where, and how, and what the architecture under it actually has to look like to be deployable on a real tape-out. The loudest answer in the market right now is also the simplest: cloud-based AI that generates RTL from natural language. There are well-funded startups built around that pitch. There are products from the largest EDA vendors targeting the same market. The demos are impressive on small, contained blocks. 

The trouble shows up when you push that motion into production silicon. Engineering leads who put these tools through their paces in 2025 are quietly recalibrating in 2026. The reason is older than any of the tools and is one of the first things anyone with a CS degree learns: Amdahl’s Law. If you accelerate one stage of a multi-stage process, the total time you can save is bounded by the fraction of the overall cycle that stage consumes, no matter how fast that stage becomes. RTL coding is not the dominant cost of a tape-out cycle. Verification is. Architectural exploration is. HW/SW co-design and late-stage integration are. So even if a tool makes the RTL-writing step ten or a hundred times faster, the realistic gain in the total design cycle plateaus somewhere around 30%. That is the ceiling for “AI-assisted RTL” as a product category, and more industry conversations are starting to acknowledge it.
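The arithmetic behind that ceiling is easy to check. A minimal sketch, assuming the article's illustrative figure that RTL coding consumes roughly 30% of the total cycle (the fraction is for illustration, not a measured number):

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the cycle is accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

p = 0.30  # assumed share of the tape-out cycle spent on RTL coding
for s in (2, 10, 100, 1e9):
    total = amdahl_speedup(p, s)
    saved = 1 - 1 / total  # fraction of total cycle time eliminated
    print(f"RTL step {s:>10.0f}x faster -> cycle {total:.2f}x faster ({saved:.0%} of time saved)")
```

Even as the RTL-step speedup goes to infinity, the overall speedup converges to 1/(1 - 0.30) ≈ 1.43x, which is the roughly 30% ceiling the text describes.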

The harder version of the question is the one VCs and boards are asking now. Where in the design flow does AI actually move the needle, and what does it take to put it there? Three observations are converging on the same answer. First, the AI has to operate at the abstraction level where the design’s complexity actually lives — which is not RTL. It is the executable specification. That is the layer where a small number of decisions cascade into millions of lines of downstream artifacts, and where architectural choices can still be evaluated cheaply enough for exploration to be feasible. Second, the AI has to reason from real hardware knowledge — coding standards, IP libraries, QoR patterns, microarchitectural preconditions — and not from probabilistic patterns mined from generic source-code repositories. Third, every AI proposal must be validated using tools the industry already trusts before it is accepted. AI proposes. Synthesis, simulation, and formal verification measure. The grounded tool result, not the model’s statistical confidence, drives the decision.

Some of the more advanced teams in our industry are now calling this new discipline deterministic AI. It is a specific architectural pattern: agentic skills operating at a high level of abstraction, in a closed loop with traditional EDA ("EDA 2.0") tools, where every output is validated against real measurements before advancing. The propose-evaluate-decide cycle is what makes AI’s output reliable enough to sit in a tape-out flow. Without it, AI for chip design is interesting research. With it, AI for chip design is something a serious shop can actually deploy.

Rise Design Automation has been articulating this position most clearly in the market, and the architecture they describe is structurally different from cloud-based RTL generators in ways that matter. Their system runs roughly fifty distinct AI skills — covering architecture, verification, UVM, and other domain areas — that reason at the executable specification layer. A next-generation high-level synthesis platform translates the validated spec into RTL and a matching verification environment. Real synthesis, simulation, and formal results feed measurements back into the loop. The whole flow operates as a closed loop with full visibility for the designer. The output is production RTL, but the design intelligence is applied two layers of abstraction above, where token-by-token RTL generators spend their compute. That is exactly where the Amdahl’s Law arithmetic finally tilts in the buyer’s favor, because the productivity is being captured against the parts of the cycle that dominate it: architectural exploration and verification.

Two further architectural details matter for anyone evaluating this seriously. The system is model-agnostic. It works with frontier foundation models, with internal enterprise models, and with whatever the LLM landscape looks like eighteen months from now. That addresses one of the two enterprise objections that have killed most AI pilots in semiconductors over the past two years: vendor lock-in. And it is deployable into existing flows today — SystemVerilog, C++, and SystemC inputs, with integration into the simulators and formal tools teams already run. That addresses the other objection: integration risk. Neither of these is a marketing flourish. They are the difference between a tool that ships to production and one that runs in a sandbox.

There is also a quieter capability emerging in this space that is worth flagging because of its implications rather than its current performance. Some of the most interesting recent work involves running the abstraction lift in reverse — reading existing RTL, building a structured mental model of it, and reconstructing a higher-level executable specification from there. The implication is that legacy IP can be addressed by the same architectural-exploration flow. A design organization with a 10-year archive of proven RTL does not have to start over to benefit from spec-level reasoning. The technology to accomplish this is in early trials, currently lifting RTL to SystemC with further abstraction work in progress. If the capability matures, it stops being just a feature and becomes a re-platforming on-ramp for the entire installed base of in-house RTL across the industry.

Two SemiWiki pieces are worth reading if you are tracking this seriously. Daniel Nenni’s 2026 outlook interview with Rise CEO Badru Agarwala lays out the architectural argument in the founder’s own words. Bernard Murphy’s piece on architectural exploration in the age of AI makes a similar observation from a different angle: the interesting question is no longer whether AI can generate RTL, but whether AI can be made trustworthy enough to operate above RTL in the parts of the design flow where time actually goes.

For founders, GTM leaders, and design executives looking ahead to the next 90 days, three things are worth watching. The first is vocabulary. Vendors leading with “AI-generated RTL” are pitching against the thirty-percent ceiling, whether they intend to or not. Vendors leading with executable specifications, agentic skill libraries, tool-grounded closed loops, and abstraction lift are pitching the architecture above it. The vocabulary distinction is doing real signaling work right now, and it will only sharpen over the course of this year. The second is the integration story. Anything that requires a serious shop to abandon SystemVerilog, its existing simulators, or its formal flow is not going to land this decade. The deployable-today filter does more work than the demo theater. The third is the demo discipline at the upcoming events. The ESD Alliance Executive Outlook on agentic AI at Cadence on June 10 and DAC 2026 in Long Beach in late July will feature competing claims about agentic AI in semiconductor design. Watch what gets demonstrated. Booths showing closed-loop runs on real customer designs — not handcrafted FIFOs, not slideware, not hypothetical workflows — will be where the next two years of buying signals get made.

AI is going to reshape semiconductor design. That part is no longer in dispute. What is in dispute is the architecture beneath it. The teams that have studied where the design cycle actually spends its time are not building AI-for-RTL tools. They are building closed-loop, spec-level, tool-grounded systems in which AI serves as the orchestrator and traditional EDA as the validator. That is the architecture that breaks past the thirty-percent plateau. That is the architecture that survives the move from a polished demo into a production tape-out. And that is the architecture that, ten years from now, people will look back on as the actual AI-led design flow. The first generation of “AI generates your RTL” tools will be remembered the way the industry now remembers the first generation of cloud IDEs — useful, formative, and ultimately not where the productivity lived.

Rise Design Automation is the clearest example I am tracking of a team that has built around this discipline from the architecture up. There will be others. The category is forming in real time. If you are building, buying, or selling into it, the framing matters more than the feature list — because the framing tells you whether you are looking at a tool, or at the next layer of the flow.
