Beyond the hype

What AI-assisted coding actually delivers

Published on: May 27, 2025
Last update: May 29, 2025

You’ve probably been hearing about “vibe coding” everywhere. The idea that developers can build entire apps just by prompting AI has sparked a wave of excitement, speculation, and, in many cases, inflated expectations.

Some say developers are on their way out. Others believe anyone, regardless of training, can now build software. For business leaders focused on speed and efficiency, the promise of AI-augmented development is hard to ignore.

But does it actually deliver?

At Modus, we didn’t want to guess. So we put AI-assisted development to the test on a real product, with real constraints. What we found reshaped how we think about speed, scale, and the role of software teams in an AI-enabled future.

What is vibe coding?

“Vibe coding” is a relatively new and evolving term (first coined in early 2025). Broadly, it refers to developers prompting AI tools to generate code rapidly, sometimes with very little upfront planning.

While the name implies a loose or improvisational style, that’s not how we work at Modus. We think of vibe coding differently: as a structured, AI-assisted workflow where skilled engineers prompt intentionally, review rigorously, and guide the output within strong architectural boundaries.

In other words, we don’t trust the vibes; we guide them.

Our experiment: Same app, two teams, real results

To find out what AI-assisted development can actually deliver, we ran a controlled experiment. Two teams. One product. Same scope, same stack, same timeline.

  • Team DIY wrote code manually using traditional workflows.
  • Team AI used AI tools to guide development through prompting and refinement.

The AI team was intentionally leaner (by 30%) and still delivered the same product scope more than 60% faster when adjusted for headcount. Even more telling: they introduced fewer bugs, logged 45% less total development time, and had lower tooling costs across the two-month engagement.

Overall, the code quality held up. After integration, both sets of code went through peer review, automated analysis (including static analysis with Sonar, a code-quality tool), and QA testing. The AI-generated code presented more minor issues, but fewer critical ones, showing no meaningful drop in stability or maintainability.

Our takeaway: AI won’t replace skilled developers, but it will change the way they work, ultimately amplifying and empowering them.

With clear prompts, guardrails, and validation steps combined with strong architecture, AI helped a smaller team move faster and deliver more flexibly. Not by skipping steps but by accelerating the slow ones: setting up foundational code, handling repetitive boilerplate, switching between tools, services, and layers in the stack, and maintaining documentation.


What helps AI coding work, and where it breaks down

AI-assisted development doesn’t succeed because of any specific tool; it succeeds because of how teams use the tools they’re given.

In our experiment, the biggest gains came from how work was structured, not which models were used. Teams that broke tasks into small, incremental steps—build, validate, test—got better results. This aligned closely with an Agile mindset: setting short-term goals, iterating quickly, and validating along the way gave the AI clearer context to work with. Vague prompts? They led to bugs and bloated output.

The AI team approached every suggestion with healthy skepticism, treating AI like a junior collaborator. It can work quickly, but never autonomously. Every output was reviewed, refined, and integrated within existing engineering standards.

The real differentiators weren’t the tools themselves, but the habits around them:

  • Tight scoping to improve prompt accuracy
  • Clean, structured specs that gave AI clear context
  • Automated checks to catch issues early in the process
  • A review-first culture that valued precision over speed

When those pieces weren’t in place, the gains disappeared. 

Our advice is not to think of this as a fully automated process. Think of it as augmentation, with humans firmly in the loop.
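The "automated checks" habit above can be made concrete with a lightweight validation gate. The sketch below is a hypothetical, minimal Python example — the function name and the specific checks are illustrative, not the Sonar/QA pipeline used in our experiment. It rejects AI-generated snippets that don't parse, are bloated, or lack docstrings before they ever reach human review:

```python
import ast

def passes_basic_checks(code: str, max_lines: int = 80) -> list[str]:
    """Return a list of problems found in an AI-generated snippet.

    An empty list means the snippet passes this first gate; any entries
    are reasons to send it back for refinement before human review.
    """
    # Gate 1: the snippet must at least be valid Python.
    try:
        tree = ast.parse(code)
    except SyntaxError as e:
        return [f"syntax error: {e.msg} (line {e.lineno})"]

    problems = []

    # Gate 2: flag oversized output -- verbose generation is a known
    # failure mode of vague prompts.
    if len(code.splitlines()) > max_lines:
        problems.append(f"snippet exceeds {max_lines} lines; ask for a tighter scope")

    # Gate 3: every function should carry a docstring so reviewers have context.
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
            problems.append(f"function '{node.name}' has no docstring")

    return problems

# Example: a typical terse AI generation with no docstring gets flagged.
snippet = "def add(a, b):\n    return a + b\n"
print(passes_basic_checks(snippet))
```

In practice a gate like this would sit in front of heavier tooling — linters, unit tests, and a static analyzer — so that cheap, automatic rejections happen before any reviewer spends time on the output.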

What leaders need to know

If you’re a technology leader, you’re likely asking: How do I roll this out? How do I train my teams? What does a realistic AI-enabled workflow actually look like?

Here’s the catch: formal training for this doesn’t exist—it’s too new. There’s no certified path for prompting well, no universal playbook for working with AI.

So what works? Curiosity. Room to experiment. Permission to get it wrong (with clear goals in place to guide that experimentation).

We found that the most effective developers weren’t necessarily the most senior, but they were the most adaptable.

AI-assisted development isn’t about the specific tools; it’s a behavioral shift. Your teams need to think differently about their work: less about hand-writing code, more about reasoning through problems, structuring solutions, prompting clearly, validating output, and improving through iteration. That shift won’t happen in a sprint cycle. And it won’t happen under delivery pressure.

As a leader, your role isn’t to “train” in the traditional sense. It’s to build the space where learning happens organically.

That means:

  • Giving teams protected time to explore and self-educate
  • Encouraging prompt libraries and internal knowledge sharing
  • Supporting iteration, not just output
  • Treating AI as a tool for creative acceleration, not as full automation or a headcount replacement
  • Framing experimentation with clear goals, so teams stay focused on outcomes that matter

If you want speed later, you need patience now. A culture of curiosity is the foundation for building AI fluency at scale.

Are you ready to build AI-powered products that increase ROI? Let’s find out.

Rewire how you build—start with a 30-minute strategy session.

Risks, tradeoffs, and what AI still can’t do

AI-assisted development delivers real gains, but it’s not without tradeoffs. The most consistent risk? Hallucination. AI tools can confidently generate incorrect, insecure, or misaligned code when given vague or underspecified prompts. And without strong review habits, small errors can snowball into technical debt or production bugs.

In our experiment, prompt quality was the clearest predictor of success. Teams that used structured and specific instructions saw better results. Teams that didn’t:

  • Got insecure or verbose code
  • Dealt with AI hallucinations and coding mistakes
  • Spent more time debugging
  • Risked building the wrong thing, faster
  • Wasted time (and tokens) with poorly structured prompts
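What "structured and specific instructions" look like in practice can be shown with a hypothetical contrast between a vague prompt and a scoped one. Everything in the example below — the Express API, the JWT detail, the schema placeholder — is invented for illustration and is not taken from the experiment:

```python
# A vague prompt: the model must guess the stack, scope, and constraints.
VAGUE_PROMPT = "Add login to the app."

# A structured prompt: goal, context, constraints, and expected output
# are explicit, so the model has far less room to hallucinate.
STRUCTURED_TEMPLATE = """Goal: add email/password login to the existing Express API.
Context: the users table already exists (schema below); sessions use JWT.
Constraints: reuse our error-handling middleware; add no new dependencies.
Output: only the new route handler plus its unit test.
Schema: {schema}
"""

def build_prompt(schema: str) -> str:
    """Fill the structured template with the concrete table schema."""
    return STRUCTURED_TEMPLATE.format(schema=schema)
```

The template itself is trivial; the point is the discipline it encodes. Teams that kept goal, context, constraints, and expected output explicit got usable code on the first pass far more often than teams that prompted loosely.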

AI struggled most with complex systems. Tasks like setting up infrastructure, coordinating multiple services, or handling security often tripped up the models, especially when they didn’t have enough context about how the system was designed.

AI can tell you how to build—but it still takes a human to decide what to build and why. After all, AI can probably generate plans for a skyscraper, but you’d still want a structural engineer in the loop to make sure that the building will stand. AI’s output will only be as good as your inputs and oversight.

This is where leadership plays a critical role. Responsible implementation means:

  • Building in validation at every step
  • Standardizing clear, machine-readable documentation
  • Defining boundaries for where AI is helpful and where it’s not

At its core, AI is a multiplier. If your workflows are strong, it can make them better. If they’re broken, it will scale the mess.


From guesswork to strategy

Understandably, most organizations are still in the early stages: piloting tools, running siloed experiments, and hoping AI leads to faster delivery.

At Modus, we've moved beyond guesswork. We ran the experiment. We tracked the hours. We measured the impact. And what we found is simple:

  • AI works when teams are trained and workflows are structured
  • Smaller teams can move faster, without sacrificing quality
  • Real ROI depends on smart prompting, strong documentation, and constant validation

The leaders who win with AI won’t be the ones who adopt tools the fastest. They’ll be the ones who operationalize the shift by building the habits, systems, and culture to make it scale.


STOP GUESSING AND START SCALING

Embed AI into your product development process today.


Wesley Fuchter

Wesley Fuchter is a Senior Principal Engineer at Modus Create with over 13 years of experience building cloud-native applications for web and mobile. As a tech leader, he spends most of his time working closely with engineering, product, and design to solve customers’ business problems. His experience sits at the intersection of hands-on coding, innovation, and people management, ranging from AWS, Java, and TypeScript to startups and agile and lean practices.