AI Coding Agents: From Advanced Autofill to Valued Team Member

Contributors
Andreas Granmo
CTO, Obvlo

Not long ago, using AI to write production software was a nice idea that didn't hold up in practice. Today, it's how we ship features. Here's what changed, and what it actually means for a small team like ours.

The timeline of progress

A year ago, AI code assistance was glorified autocomplete. It predicted the next few words. Then it started suggesting relevant code blocks. Useful, but limited. It couldn't write a complete function without introducing bugs, odd patterns, or code that just didn't fit the codebase.

Every model release moved the needle: each one a bit more capable, a bit more context-aware. The improvements were incremental but compounding.

Then Opus 4.5 landed. That was the step change. Not incremental. The jump in code quality, context retention, and ability to reason about larger codebases was significant. We restructured our workflows around it.

I was drafting this post when Opus 4.6 and GPT 5.3 Codex came out. Both solid improvements. I updated my notes. Days later, Sonnet 4.6 arrived and I had to update them again. For the cost and speed profile we need on a daily basis, it hit a sweet spot we hadn't seen before.

That's the real story here. The pace of progress is so fast that I can't finish a blog post about it before it needs rewriting.

It's not just the models

Better models matter. But the real shift is in how they're integrated into the development workflow.

For us, the end-to-end GitHub ecosystem is where things got interesting. We can write features interactively in the IDE, with the agent as a pairing partner. We can spin up agents in the background to handle parallel tasks. We can run them in the cloud for longer jobs. All from the same interface, using the same tools, the same structure, the same prompts.

The consistency matters. We're not context-switching between different tools for different modes of work. The agent is just there, in the flow, ready to pick up tasks.

It is almost too easy. Which sounds like a good problem to have, and it is. But it also means you need discipline. The ease of generating code doesn't remove the need to review it, test it, and understand it. If anything, the review step becomes more important when code is produced faster.

Does it replace developers?

No.

These agents are tools. Powerful tools, but tools. They don't understand the product. They don't know why a decision was made three months ago, or what the customer actually needs. They don't prioritise. They don't architect systems with long-term maintainability in mind unless you tell them to.

What they do is remove friction. The backlog items that kept getting pushed to the next sprint? We're getting through them. The prototype we never had time to build? Built and tested in an afternoon. The repetitive scaffolding and boilerplate? Handled.

This frees up our developers to focus on what actually matters: core product innovation. The things that differentiate Obvlo from everything else in the market. The hard problems that require domain knowledge, creative thinking, and judgement.

We're not replacing anyone. We're making the team we have significantly more effective.

Are agents really valued team members?

Yes. Without hesitation.

They don't attend standups or have opinions about the office coffee. But they pick up work reliably, they don't get tired, and they improve with every model release. They handle the breadth of tasks that would otherwise slow us down, and they do it at a pace that lets a small team punch well above its weight.

The ecosystem around them is still maturing. Prompting strategies, context management, code review workflows for agent-generated code — we're all still figuring this out. But the trajectory is clear.

Every new model brings a measurable improvement. Every ecosystem update removes another point of friction. The gap between what an agent can do and what we need it to do is closing fast.

AI has been at the core of everything we do at Obvlo since the earliest commercial LLMs became available. Using AI agents to build a product that itself runs on AI feels like a natural next step.

Looking forward to seeing where this goes next.