AI code generation tools like Gemini Code Assist are revolutionizing software development. They act as powerful accelerators, enabling both seasoned engineers and domain experts to ship functional prototypes in a single day. But when the clock strikes midnight, that superpower often turns back into a pumpkin.
Once the initial euphoria fades, many teams are left with tangled, unmaintainable code. In my work as a Field Solution Architect at Google Cloud, I’ve encountered this challenge repeatedly—especially in the complex domain of Generative AI applications, where human-in-the-loop interactions and probabilistic logic demand thoughtful structure. AI code-assist tools aren’t the problem—the real issue is that we often use them without the thoughtful scaffolding they deserve.
This blog shares a lightweight methodology inspired by Domain-Driven Design (DDD), adapted for the AI-assisted era. It enables developers to move fast without sacrificing clarity, sustainability, or agility.
Generative AI applications are complex—not because of sheer scale, but because they intertwine human input, nuanced business rules, and inherently probabilistic model behavior. Traditional architectural patterns often fall short here. Domain-Driven Design (DDD) helps tame this kind of complexity by modeling software around the problem domain, not the underlying infrastructure.
It’s not about heavyweight frameworks or ceremony. It’s about shared language, modular boundaries, and keeping your focus on what matters: the logic users and businesses actually care about.
That said, DDD has a reputation for being heavy and academic. If you’ve ever cracked open Eric Evans’ book, you’ve likely seen diagrams with hexagons, aggregates, bounded contexts, and more. It can be overwhelming. The good news? You don’t need to master the entire canon to see real benefits. The methodology outlined below borrows the spirit of DDD—clear domain models, strong separation of concerns, and iterative modeling—without the overhead. It’s a lightweight, practical approach that delivers clarity and speed, especially in fast-moving environments like AI-assisted development.
At the heart of every prototype I build is a well-defined functional core. The functional core is the application’s business logic, freed from every other application concern. Crucially, it does not handle API routing, UI rendering, database access, or external service integrations. Those concerns belong to other layers.
I use Pydantic models and plain Python functions to represent the essential domain logic—particularly the generative AI operations. These often include:
Human-in-the-loop conversations.
Structured parsing of documents.
Detecting inconsistencies between related documents (e.g., contracts vs. invoices).
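To make this concrete, here is a minimal sketch of what such a functional core might look like. The names (Contract, Invoice, find_inconsistencies) are hypothetical illustrations, not code from an actual project: plain Pydantic models plus a pure function, with no web, database, or LLM SDK imports.

```python
# domain.py -- functional core: Pydantic models and pure functions only.
from pydantic import BaseModel


class Contract(BaseModel):
    contract_id: str
    counterparty: str
    total_amount: float


class Invoice(BaseModel):
    invoice_id: str
    contract_id: str
    amount: float


class Inconsistency(BaseModel):
    field: str
    expected: str
    actual: str


def find_inconsistencies(contract: Contract, invoice: Invoice) -> list[Inconsistency]:
    """Compare a contract and an invoice; report mismatches as domain objects."""
    issues: list[Inconsistency] = []
    if invoice.contract_id != contract.contract_id:
        issues.append(Inconsistency(field="contract_id",
                                    expected=contract.contract_id,
                                    actual=invoice.contract_id))
    if invoice.amount > contract.total_amount:
        issues.append(Inconsistency(field="amount",
                                    expected=str(contract.total_amount),
                                    actual=str(invoice.amount)))
    return issues
```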
Consider representing core entities at different levels of abstraction. With AI code generation, you can quickly “play out” different designs and evaluate their trade-offs. Talk with stakeholders and subject matter experts.
Investing time and mental effort in the functional core pays huge dividends. Decoupling the business logic from the rest of the application makes it more testable, more reliable, and easier to extend later.
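One concrete payoff: because the core has no UI or database dependencies, it can be exercised with nothing but plain pytest. Continuing the hypothetical models sketched above:

```python
# test_domain.py -- the core is testable without a server, browser, or warehouse.
from domain import Contract, Invoice, find_inconsistencies


def test_flags_amount_mismatch():
    contract = Contract(contract_id="C-1", counterparty="Acme", total_amount=1000.0)
    invoice = Invoice(invoice_id="I-7", contract_id="C-1", amount=1200.0)

    issues = find_inconsistencies(contract, invoice)

    assert [i.field for i in issues] == ["amount"]
```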
Once the core logic is in place, I add a UI using FastAPI and Jinja2 templates. This stack allows a high degree of customization, while avoiding the need for duplicating the data model—with Jinja2, I can render the Pydantic objects directly in the templates.
Why is this important? In early-stage development, duplicating models introduces friction. Every schema change in the core would have to be mirrored in the UI, increasing maintenance overhead and risk of bugs. By relying on a single language (Python + Jinja2), you maintain consistency and accelerate iteration.
Before the rise of AI code-assist tools, developers tended to gravitate towards lightweight frameworks like Streamlit and Dash for prototypes. In the AI era, these frameworks are obsolete. Why? Developers are better served by letting AI deal with the low-level yuckiness of HTML than by settling for a restrictive framework they will inevitably outgrow.
A typical prompt looks like: “[functional core files] [state flow diagrams] Generate the UI using FastAPI, Jinja2, and Tailwind.”
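The output of a prompt like that might look roughly like the sketch below. It assumes the hypothetical domain models from earlier and a templates/ directory; the route names and sample data are placeholders.

```python
# app.py -- thin UI layer: routes call the functional core and hand the
# resulting Pydantic objects straight to Jinja2, with no duplicate view models.
from fastapi import FastAPI, Request
from fastapi.templating import Jinja2Templates

from domain import Contract, Invoice, find_inconsistencies

app = FastAPI()
templates = Jinja2Templates(directory="templates")


@app.get("/review")
def review(request: Request):
    # In a real app these would come from user input or the persistence layer.
    contract = Contract(contract_id="C-1", counterparty="Acme", total_amount=1000.0)
    invoice = Invoice(invoice_id="I-7", contract_id="C-1", amount=1200.0)
    issues = find_inconsistencies(contract, invoice)
    return templates.TemplateResponse(
        "review.html",
        {"request": request, "contract": contract, "invoice": invoice, "issues": issues},
    )
```

Inside review.html, the objects’ fields are available directly, for example {{ invoice.amount }} or a {% for issue in issues %} loop, so there is no separate view model to keep in sync with the core.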
Finally, I implement persistence using BigQuery. Here again, I avoid introducing another schema or ORM layer. My interfaces for get/put/list operations interact directly with the Pydantic objects.
Why avoid duplicating the model in a separate ORM or schema? This keeps the data layer thin and flexible, ideal for fast-moving prototyping work. However, it's a trade-off:
Pros: Fewer layers to maintain, faster schema evolution, less boilerplate.
Cons: You give up strict schema enforcement and rich type relationships that an ORM might offer. For production systems, some teams may choose to introduce schemas later.
Modern data warehouses like BigQuery can infer schemas automatically when ingesting structured data. This makes it easy to write Pydantic objects directly to BigQuery without defining tables up front.
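As a rough sketch of what those get/put/list helpers can look like, assuming the google-cloud-bigquery client library, Pydantic v2, and the hypothetical Invoice model from earlier (the table and project names are placeholders):

```python
# persistence.py -- thin helpers that serialize Pydantic objects directly,
# letting BigQuery autodetect the schema on first write.
from google.cloud import bigquery

from domain import Invoice

client = bigquery.Client()
TABLE_ID = "my-project.prototype.invoices"  # placeholder


def put_invoice(invoice: Invoice) -> None:
    job_config = bigquery.LoadJobConfig(
        autodetect=True,                   # infer the schema from the JSON rows
        write_disposition="WRITE_APPEND",  # create the table if needed, then append
    )
    # model_dump() is Pydantic v2; use .dict() on v1.
    job = client.load_table_from_json([invoice.model_dump()], TABLE_ID, job_config=job_config)
    job.result()  # wait for the load job to finish


def list_invoices() -> list[Invoice]:
    rows = client.query(f"SELECT * FROM `{TABLE_ID}`").result()
    return [Invoice(**dict(row)) for row in rows]
```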
The end result is an architecture that’s clean, understandable, and ready to evolve—perfectly suited for the rapid prototyping demands of pre-sales work in the AI space.
This approach is designed for speed with structure. By isolating the core logic from infrastructure and UI concerns, you get the flexibility to iterate quickly—without sacrificing long-term maintainability.
Here’s what that looks like in practice:
The functional core holds the domain models and generative AI logic as Pydantic models and plain Python functions, with no knowledge of routing, rendering, or storage.
The UI layer is a thin FastAPI + Jinja2 shell that renders those Pydantic objects directly.
The persistence layer writes the same objects to BigQuery, with no separate ORM or schema.
Dependencies point in one direction: UI and persistence depend on the core, never the other way around.
In short: this methodology strikes a balance. You move fast, but you stay clean. And to reiterate: staying clean is what allows you to keep moving fast after day one.
Just because you can generate a working application in 10 minutes doesn’t mean you should. When AI code generators are tasked with writing an application in one fell swoop, they tangle up the core logic with application details. In doing so, they put the entire application architecture on shaky foundations.
Don’t throw out decades of software development best practices. Invest in clear thinking, rigorous diagramming, and start with the functional core of the application: the domain model.
It’s easy to fall into the trap of “productionizing” too soon—adding layers of abstraction, trying to future-proof everything. For early-stage AI apps, that usually backfires. Keep your code simple and direct. Favor duplication over premature abstraction. You can always refactor later once the design stabilizes.
When using AI tools to generate code, be vigilant about where state and dependencies sneak in. It’s common to see API routes importing from database modules or UI components reaching into core logic. These shortcuts may work at first, but they accumulate complexity fast. Stick to clean, directional boundaries between layers.
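One lightweight way to keep those boundaries directional is to let the core declare the interfaces it needs and have the outer layers implement them, so imports only ever point inward. A minimal sketch, using a hypothetical InvoiceStore protocol:

```python
# domain.py -- the core declares what it needs, without importing any database code.
from typing import Optional, Protocol

from pydantic import BaseModel


class Invoice(BaseModel):
    invoice_id: str
    amount: float


class InvoiceStore(Protocol):
    def put(self, invoice: Invoice) -> None: ...
    def get(self, invoice_id: str) -> Optional[Invoice]: ...


# persistence.py -- the outer layer imports from the core, never the other way around.
class InMemoryInvoiceStore:
    """A stand-in implementation; a BigQuery-backed store would satisfy the same Protocol."""

    def __init__(self) -> None:
        self._rows: dict[str, Invoice] = {}

    def put(self, invoice: Invoice) -> None:
        self._rows[invoice.invoice_id] = invoice

    def get(self, invoice_id: str) -> Optional[Invoice]:
        return self._rows.get(invoice_id)
```

The core never learns which store backs it; swapping the in-memory stand-in for a BigQuery implementation touches only the persistence layer.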
In the AI-assisted era, some old stack criteria—like boilerplate availability or simplicity of syntax—don’t matter as much. AI can generate scaffolding and handle complexity with ease.
What does matter: choose a stack that lets you define a clear functional core and separate concerns cleanly. Avoid low-code tools that hide logic or force you into rigid patterns—they’re fast at first, but brittle over time.
And don’t shy away from lower-level stacks. With AI in your corner, writing clean Go, Rust, or even C++ is more accessible than ever. Choose tools that align with long-term clarity, not just day-one ease.
We’re entering a new era of software development—one shaped not just by what we build, but how we build it alongside AI.
The old lifecycle—prototype, MVP, production—used to mean rewriting the same idea three different ways. But with AI tools accelerating development, those lines are starting to blur. What used to be throwaway code can now evolve directly into scalable software—if it starts from the right foundation.
This methodology is about building that foundation: a lightweight, domain-driven approach that supports rapid iteration, keeps complexity under control, and scales gracefully as needs grow. It’s not anti-speed—it’s pro-sustainability.
In this new paradigm, success isn’t just about writing code faster. It’s about designing systems AI can collaborate on—systems that stay clear, flexible, and human-friendly even as they evolve. We’re not just coding faster. We’re coding smarter.