Chris Waters · 2026-03-31 · ruby, typescript, ai

The architectural trade-offs of AI code generation

We recently introduced Aha! Builder, our new AI-powered product for creating prototypes and business applications. It allows you to describe the app you need, and Elle (our AI assistant) builds it for you.

But Elle is not just a UI trick. When a user asks for a new feature, Elle is in the background writing actual database schemas, controllers, and views to make it work.

Building a product like this forces an engineering team to answer fundamental questions:

  • What stack should the platform itself run on?
  • And what language should the AI generate for the user?

If you look at the current ecosystem of AI coding tools, the default answer to both questions is typically JavaScript or TypeScript. React dominates the front end, so using a unified language across the platform and the generated output may feel like the natural choice.

But picking a framework for an LLM to use is very different from picking one for a human developer to maintain. Humans have persistent memory, use IDEs to jump between hundreds of files, and can infer architectural intent. LLMs have strict memory limits, process text linearly, and get easily confused by too much architectural freedom.

As we debated the architecture for Aha! Builder over the last few months, we had to weigh the popularity of modern defaults against the raw efficiency of older, stricter frameworks. Popularity and suitability turned out to be different questions.

If you are building an AI tool that generates code, here are the primary trade-offs I think you have to consider for both your platform and your output.

Token efficiency vs. type safety

Tokens equal time and money. The more tokens a task requires, the longer the AI takes to generate the code and the more it costs. Managing the context window — the model's working memory, which dictates how big a problem it can process at one time — is equally important: the fewer tokens the code consumes, the bigger the problems the AI can tackle at once.

When code is too verbose, the model cannot hold the entire application's logic in its memory. You pay more, and you get less.

This is where older, terser languages have an advantage. Ruby, for example, is famous for its concision. A recent benchmark of Claude Code across different languages found that Ruby yielded the highest success rate on AI-generated tasks. The author noted that Ruby's terseness lets the LLM express logic in fewer tokens, leaving more room in the context window for actual problem-solving.
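
To make the terseness point concrete, here is a hypothetical sketch (the data and names are invented): aggregating order totals per customer takes a few short lines of Ruby, with no type annotations or interface declarations spending tokens.

```ruby
# Hypothetical data: order totals per customer.
orders = [
  { customer: "acme",   total: 120 },
  { customer: "acme",   total: 80 },
  { customer: "globex", total: 50 }
]

# One chained expression; a statically typed equivalent would
# also spend tokens on interface declarations and annotations.
totals = orders
  .group_by { |o| o[:customer] }
  .transform_values { |group| group.sum { |o| o[:total] } }

totals  # acme: 200, globex: 50
```

Every token saved here is context-window room the model can spend on the actual feature instead of boilerplate.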

TypeScript offers a meaningful counterweight: strict typing. While TypeScript is more verbose — costing more tokens — those explicit types act as a map for the AI. When an LLM knows the exact shape of the data it is passing between functions, it is less likely to make logical errors. The TypeScript compiler can provide compile-time errors, letting the AI find and fix problems without needing to run the code.

It is a real trade-off. But token efficiency is not just a performance concern — it impacts what the AI can actually attempt. That puts Ruby in a stronger starting position.

Convention vs. flexibility

There is satisfaction in working with a framework that has strong opinions.

Frameworks like Ruby on Rails are built on "convention over configuration." They have strict rules on where files go and how they are named. This predictability is a superpower for an AI. When an LLM generates a new feature within a framework with strong conventions, it does not have to guess where the model, view, and controller should live. The guardrails are already in place.
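
A small sketch of what that buys the model: given nothing but a resource name, the location of every file is fully determined. The paths below follow the standard Rails layout; the naive "s" pluralization stands in for Rails' real inflector.

```ruby
# Given only a resource name, convention dictates where every
# file lives — the LLM never has to invent a project layout.
# (Naive "s" pluralization stands in for Rails' inflector.)
def conventional_paths(resource)
  plural = "#{resource}s"
  {
    model:      "app/models/#{resource}.rb",
    controller: "app/controllers/#{plural}_controller.rb",
    views:      "app/views/#{plural}/",
    test:       "test/models/#{resource}_test.rb"
  }
end

conventional_paths("invoice")[:controller]
# => "app/controllers/invoices_controller.rb"
```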

In the JavaScript ecosystem, there is immense freedom. You can structure an application however you like. But for an AI assistant, that freedom is a liability. If there are a dozen ways to organize an application, the AI has to spend time figuring out which pattern to follow — increasing the chance of hallucinations. And choosing poorly can act as a tax on future code that has to work around or within earlier bad choices.

A team that chooses a flexible stack like Node and TypeScript can compensate by building its own conventions and linting rules. Done well, that produces a bespoke framework tuned to their specific use case. But it is upfront work you have to do yourself before the AI can be reliable. With Rails, that structure is just part of the framework.

Mature foundation vs. cutting-edge AI tooling

AI agents are great at writing code, but you rarely want them writing foundational infrastructure from scratch.

If an AI is building a business application, it needs robust solutions for authentication, background jobs, and security. Mature frameworks like Ruby on Rails have a library for almost every business problem. This allows the AI to act as an orchestrator — pulling in proven libraries — rather than inventing custom security logic. That distinction matters more than you might think. An AI that reaches for a well-tested gem is far safer than one improvising an auth flow.
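
For example, instead of asking the model to improvise password hashing and session handling, the generated app can declare its dependence on a battle-tested gem. This sketch uses Devise, a widely deployed Rails authentication library; it is a configuration fragment, only meaningful inside a Rails app.

```ruby
# Gemfile — reach for a proven authentication library
gem "devise"

# app/models/user.rb — one declarative line replaces hundreds
# of lines of hand-rolled (and riskier) security code.
class User < ApplicationRecord
  devise :database_authenticatable, :registerable,
         :recoverable, :validatable
end
```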

The counterargument is the AI tooling ecosystem itself. Right now, the vast majority of cutting-edge AI SDKs, agent frameworks, and vector database libraries are being built for JavaScript and Python first. If you want your platform to integrate seamlessly with the absolute latest LLM technology, the TypeScript ecosystem moves faster.

Which direction to go in depends on the kind of application you are building. The libraries Rails provides have been tested in production for years. That is hard to replicate, and for an AI generating code it cannot fully reason about, starting from proven ground is often a safer bet.

The frontend bridge vs. the unified stack

Of course, the back end is only half the story. Users expect modern applications to feel snappy and reactive. React dominates this space for a reason.

If you optimize your architecture to use a conventional backend language like Ruby, you have to bridge the gap to a modern front end.

Tools like Inertia.js have become incredibly valuable here, allowing teams to build a classic monolith on the backend while using React for the view layer.
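
On the Rails side, that bridge is a one-line render call. This sketch assumes the inertia_rails adapter and an invented Projects/Index React component:

```ruby
# Sketch assuming the inertia_rails gem: a classic monolith
# controller action that renders a React page component.
class ProjectsController < ApplicationController
  def index
    render inertia: "Projects/Index", props: {
      projects: Project.order(:name).as_json(only: [:id, :name])
    }
  end
end
```

The controller stays conventional Rails; only the view layer crosses into React.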

A unified TypeScript stack offers something appealing: shared context. If the AI is writing TypeScript on both the back end and the front end, it can share interfaces and types across the entire stack. When the AI changes a database schema, the compiler flags every frontend component that has to change with it. That full-stack synergy is probably the strongest argument for going all-in on TypeScript.

But shared context only helps if the AI can stay oriented across the whole stack. In practice, the memory constraints that make token efficiency so important also limit how much of a large TypeScript application an LLM can hold at once. A well-structured Rails monolith with a React front end via Inertia gives the AI smaller, clearer units of work. This tends to produce more reliable output than a sprawling unified stack with too many degrees of freedom.

Weighing the trade-offs

Engineering is rarely about finding a flawless solution. It is about choosing the right set of compromises.

When you introduce an LLM into your architecture, you are doing more than adding a new API call. You are accommodating a system with severe memory limits, a tendency to drift, and a deep need for structural boundaries. Those constraints do not disappear because your stack is modern.

The properties that make Ruby on Rails feel old-fashioned to some developers — its conventions, its opinions, its maturity — are exactly the properties that serve AI code generation well. Fewer decisions for the model to make. More reliable libraries to reach for. Smaller, more predictable units of work.

The TypeScript ecosystem is fast-moving and has genuine strengths, particularly around type safety and AI tooling. But speed of movement is less valuable when the thing moving is an LLM with a limited context window and a tendency to hallucinate in unfamiliar territory.

There is no stack that resolves all of these tensions perfectly. It takes discipline to map out the constraints of your specific problem and build a foundation that serves both the human engineers maintaining the platform and the AI generating the code. But doing that work upfront is what leads to software that lasts.


Ready to build? Learn more about Aha! Builder and sign up for the early access program.

Chris Waters

Dr. Chris Waters is the co-founder and CTO of Aha! — the world's #1 product development software. Chris is a serial entrepreneur who loves writing code and building products. He holds 16 patents for innovations in network security, database queries, and wireless devices.