
Applying AI to the craft of software engineering

liamconnell
A practical method for building fast, flexible, and maintainable applications in the era of AI-assisted development.

AI code generation tools like Gemini Code Assist are revolutionizing software development. They act as powerful accelerators, enabling both seasoned engineers and domain experts to ship functional prototypes in a single day. But when the clock strikes midnight, that superpower often turns back into a pumpkin.

Once the initial euphoria fades, many teams are left with tangled, unmaintainable code. In my work as a Field Solution Architect at Google Cloud, I’ve encountered this challenge repeatedly—especially in the complex domain of Generative AI applications, where human-in-the-loop interactions and probabilistic logic demand thoughtful structure. AI code-assist tools aren’t the problem—the real issue is that we often use them without the thoughtful scaffolding they deserve.

This blog shares a lightweight methodology inspired by Domain-Driven Design (DDD), adapted for the AI-assisted era. It enables developers to move fast without sacrificing clarity, sustainability, or agility.

Why Domain-Driven Design (DDD)?

Generative AI applications are complex—not because of sheer scale, but because they intertwine human input, nuanced business rules, and inherently probabilistic model behavior. Traditional architectural patterns often fall short here. Domain-Driven Design (DDD) helps tame this kind of complexity by modeling software around the problem domain, not the underlying infrastructure.

It’s not about heavyweight frameworks or ceremony. It’s about shared language, modular boundaries, and keeping your focus on what matters: the logic users and businesses actually care about.

That said, DDD has a reputation for being heavy and academic. If you’ve ever cracked open the book, you’ve likely seen diagrams with hexagons, aggregates, bounded contexts, and more. It can be overwhelming. The good news? You don’t need to master the entire canon to see real benefits. The methodology outlined below borrows the spirit of DDD—clear domain models, strong separation of concerns, and iterative modeling—without the overhead. It’s a lightweight, practical approach that delivers clarity and speed, especially in fast-moving environments like AI-assisted development.

A lightweight DDD-inspired approach for generative AI

Functional core with Pydantic

At the heart of every prototype I build is a well-defined functional core: the application's business logic, isolated from everything else. Crucially, it does not handle API routing, UI rendering, database access, or external service integrations. Those concerns belong to other layers.

💡 TIP: How do you know if your application has a functional core? Track dependencies! If your business logic Python files import from database or router files, you’ve broken the pattern.
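One lightweight way to track this is a small import check over your core modules. The sketch below uses Python's standard `ast` module; the layer names (`db`, `routers`, `ui`) are illustrative assumptions, not names from this post:

```python
# Sketch: flag functional-core files that import from infrastructure layers.
# The layer names below are hypothetical; substitute your own package names.
import ast

FORBIDDEN_LAYERS = {"db", "routers", "ui"}  # infrastructure, not business logic

def forbidden_imports(source: str) -> list[str]:
    """Return the names a core module imports from forbidden layers."""
    tree = ast.parse(source)
    bad = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # `import db.bigquery_client` style
            bad += [a.name for a in node.names if a.name.split(".")[0] in FORBIDDEN_LAYERS]
        elif isinstance(node, ast.ImportFrom) and node.module:
            # `from routers.invoices import router` style
            if node.module.split(".")[0] in FORBIDDEN_LAYERS:
                bad.append(node.module)
    return bad

# A core file importing from the database layer breaks the pattern:
print(forbidden_imports("import db.bigquery_client\nfrom pydantic import BaseModel"))
```

Run over every file in the core package (e.g., in CI), this turns the tip into an enforceable rule rather than a convention.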

I use Pydantic models and plain Python functions to represent the essential domain logic—particularly the generative AI operations. These often include:

  • Human-in-the-loop conversations.

  • Structured parsing of documents.

  • Detecting inconsistencies between related documents (e.g., contracts vs. invoices).
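As a rough illustration of the last use case, here is what such a functional core can look like. The entities and the single consistency rule are hypothetical stand-ins, not the author's actual model; the point is that it is all Pydantic models and plain functions, with no routing, rendering, or database code:

```python
from pydantic import BaseModel

class LineItem(BaseModel):
    description: str
    amount: float

class Contract(BaseModel):
    contract_id: str
    agreed_total: float

class Invoice(BaseModel):
    contract_id: str
    line_items: list[LineItem]

class Inconsistency(BaseModel):
    contract_id: str
    detail: str

def check_invoice(contract: Contract, invoice: Invoice) -> list[Inconsistency]:
    """Pure business logic: compare an invoice against its contract."""
    issues = []
    billed = sum(item.amount for item in invoice.line_items)
    if billed != contract.agreed_total:
        issues.append(Inconsistency(
            contract_id=contract.contract_id,
            detail=f"Invoice total {billed} != agreed total {contract.agreed_total}",
        ))
    return issues

contract = Contract(contract_id="C-1", agreed_total=150.0)
invoice = Invoice(contract_id="C-1",
                  line_items=[LineItem(description="consulting", amount=100.0)])
print(check_invoice(contract, invoice))  # one Inconsistency: totals differ
```

Because nothing here depends on a framework, the same function can later back an API route, a batch job, or a test suite unchanged.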

Consider representing core entities at different levels of abstraction. With AI code generation, you can quickly "play out" different designs and evaluate the trade-offs. Talk with stakeholders and subject matter experts.

Investing time and mental effort into the functional core pays huge dividends. Decoupling the business logic from the rest of the application makes it more testable, more reliable, and more open to future extension.

💡 TIP: Use AI tools to generate diagrams as code—like flowcharts or entity-relationship diagrams (ERDs) in PlantUML or Mermaid—to visualize complex data flows and relationships. This helps clarify the model and can even improve prompting when using code generation tools.

UI layer with FastAPI + Jinja2

Once the core logic is in place, I add a UI using FastAPI and Jinja2 templates. This stack allows a high degree of customization while avoiding the need to duplicate the data model: with Jinja2, I can render the Pydantic objects directly in the templates.

Why is this important? In early-stage development, duplicating models introduces friction. Every schema change in the core would have to be mirrored in the UI, increasing maintenance overhead and risk of bugs. By relying on a single language (Python + Jinja2), you maintain consistency and accelerate iteration.
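As a minimal sketch of the idea, the snippet below renders a hypothetical Pydantic `Invoice` directly through Jinja2. In the real stack the template string would live in a `templates/` file served via FastAPI's `Jinja2Templates`, but the key point is the same: the template reads the domain object's attributes, so there is no second schema to keep in sync:

```python
from jinja2 import Template
from pydantic import BaseModel

class Invoice(BaseModel):  # hypothetical domain object from the functional core
    contract_id: str
    total: float

# In a FastAPI app this would be templates/invoice.html, rendered by
# Jinja2Templates; it is inlined here to keep the sketch self-contained.
template = Template("Invoice {{ invoice.contract_id }}: ${{ invoice.total }}")

invoice = Invoice(contract_id="C-42", total=150.0)
# Jinja2 reads the Pydantic attributes directly -- no view model, no serializer.
print(template.render(invoice=invoice))
```

When the core model gains a field, the template can use it immediately; nothing else has to change.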

Before the rise of AI code-assist tools, developers gravitated toward lightweight frameworks like Streamlit and Dash for prototypes. In the AI era, those frameworks are obsolete. Why? Developers are better served by letting AI deal with the low-level yuckiness of HTML than by settling for a restrictive framework they will inevitably outgrow.

💡 TIP: The spectacular one-shot UI — If you've built a solid functional core and created a few diagrams representing the application flow, you can often pass that into a code generation tool like Gemini Code Assist and generate the UI layer with little to no additional prompting. A functional core implies an interface, and AI models are remarkably adept at inferring this accurately.

After investing so much time in the functional core, it's refreshing to breeze through this stage with a single prompt. Here's an example:

[functional core files] [state flow diagrams] Generate the UI using FastAPI, Jinja2, and Tailwind.

Storage layer with BigQuery

Finally, I implement persistence using BigQuery. Here again, I avoid introducing another schema or ORM layer. My interfaces for get/put/list operations interact directly with the Pydantic objects.

Why avoid duplicating the model in a separate ORM or schema? This keeps the data layer thin and flexible, ideal for fast-moving prototyping work. However, it's a trade-off:

  • Pros: Fewer layers to maintain, faster schema evolution, less boilerplate.

  • Cons: You give up strict schema enforcement and rich type relationships that an ORM might offer. For production systems, some teams may choose to introduce schemas later.

Modern data warehouses like BigQuery can infer schemas automatically when ingesting structured data. This makes it easy to write Pydantic objects directly to BigQuery without defining tables up front.

💡 TIP: Don’t create a table for every object in your data model. Use BigQuery’s nested and repeated fields to pack data into core entity tables instead of normalizing everything into separate tables. This denormalized approach simplifies querying and aligns well with your domain model.

The trade-off? It may reduce flexibility in certain types of analytical queries and increase complexity when you need to extract deeply nested fields.
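Putting those pieces together, a thin persistence function might look like the sketch below. The `Invoice` model and table ID are hypothetical; the write relies on BigQuery's schema autodetection, and the nested list of line items lands as a repeated RECORD column rather than a separate table:

```python
from pydantic import BaseModel

class LineItem(BaseModel):
    description: str
    amount: float

class Invoice(BaseModel):
    contract_id: str
    line_items: list[LineItem]  # becomes a nested, repeated field in BigQuery

def put_invoice(table_id: str, invoice: Invoice) -> None:
    """Persist the domain object as-is; no ORM, no table DDL up front."""
    # Deferred import so the domain model itself stays infrastructure-free.
    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        autodetect=True,  # let BigQuery infer the schema, nested fields included
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    # model_dump() yields plain dicts and lists, which load_table_from_json accepts.
    client.load_table_from_json(
        [invoice.model_dump()], table_id, job_config=job_config
    ).result()

invoice = Invoice(
    contract_id="C-42",
    line_items=[LineItem(description="consulting", amount=150.0)],
)
print(invoice.model_dump())  # the exact JSON shape that lands in BigQuery
```

Note that the only translation step is `model_dump()`; the Pydantic model remains the single source of truth for the stored shape.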

The end result is an architecture that’s clean, understandable, and ready to evolve—perfectly suited for the rapid prototyping demands of pre-sales work in the AI space.

Practical benefits of this methodology

This approach is designed for speed with structure. By isolating the core logic from infrastructure and UI concerns, you get the flexibility to iterate quickly—without sacrificing long-term maintainability.

Here’s what that looks like in practice:

  • Rapid iteration with minimal rework
    Start with a clean domain model and evolve it as your understanding deepens. Because your business logic is decoupled, changes remain localized and low-risk.
  • Consistent data flow across the stack
    A single Pydantic model drives your application—from the functional core to the UI to the database. This eliminates schema duplication, reduces bugs, and keeps the entire codebase easier to reason about.
  • AI-friendly architecture
    When your application is well-structured, AI tools like Gemini Code Assist can do more of the heavy lifting. Clear boundaries and consistent types help the model generate accurate, maintainable code with less prompting.
  • Clean handoff between prototype and production
    Unlike throwaway MVPs, this structure supports real evolution. You can add features, swap out infrastructure, or gradually harden components—all without rewriting from scratch.

In short: this methodology strikes a balance. You move fast, but you stay clean. And to reiterate: staying clean is what allows you to keep moving fast after day one.

Lessons learned and common pitfalls to avoid

Don’t skip the domain model

Just because you can generate a working application in 10 minutes doesn't mean you should. When AI code generators are tasked with writing an application in one fell swoop, they tangle the core logic up with application details, putting the entire architecture on a shaky foundation.

Don’t throw out decades of software development best practices. Invest in clear thinking and rigorous diagramming, and start with the functional core of the application: the domain model.

Avoid over-abstraction

It’s easy to fall into the trap of “productionizing” too soon—adding layers of abstraction, trying to future-proof everything. For early-stage AI apps, that usually backfires. Keep your code simple and direct. Favor duplication over premature abstraction. You can always refactor later once the design stabilizes.

Watch for AI-created entanglement

When using AI tools to generate code, be vigilant about where state and dependencies sneak in. It’s common to see API routes importing from database modules or UI components reaching into core logic. These shortcuts may work at first, but they accumulate complexity fast. Stick to clean, directional boundaries between layers.

Rethink your stack choices

In the AI-assisted era, some old stack criteria—like boilerplate availability or simplicity of syntax—don’t matter as much. AI can generate scaffolding and handle complexity with ease.

What does matter: choose a stack that lets you define a clear functional core and separate concerns cleanly. Avoid low-code tools that hide logic or force you into rigid patterns—they’re fast at first, but brittle over time.

And don’t shy away from lower-level stacks. With AI in your corner, writing clean Go, Rust, or even C++ is more accessible than ever. Choose tools that align with long-term clarity, not just day-one ease.

Conclusion: A new paradigm for building software

We’re entering a new era of software development—one shaped not just by what we build, but how we build it alongside AI.

The old lifecycle—prototype, MVP, production—used to mean rewriting the same idea three different ways. But with AI tools accelerating development, those lines are starting to blur. What used to be throwaway code can now evolve directly into scalable software—if it starts from the right foundation.

This methodology is about building that foundation: a lightweight, domain-driven approach that supports rapid iteration, keeps complexity under control, and scales gracefully as needs grow. It’s not anti-speed—it’s pro-sustainability.

In this new paradigm, success isn’t just about writing code faster. It’s about designing systems AI can collaborate on—systems that stay clear, flexible, and human-friendly even as they evolve. We’re not just coding faster. We’re coding smarter.
