
AI Coding Assistants Are Growing Up — Beyond Autocomplete

Osmond van Hemert

It’s been roughly two and a half years since GitHub Copilot became generally available and kicked off the AI coding assistant wave. In that time, we’ve gone from “it’s autocomplete on steroids” to tools that can reason about entire codebases, plan multi-file changes, and execute complex refactoring tasks with increasing reliability. The latest updates from GitHub, Cursor, and several other players this month make it clear: we’re entering a new phase where these tools are less about generating code snippets and more about augmenting the entire development workflow.

From Suggestions to Agents

The most significant shift in AI coding assistants over the past year has been the move from passive suggestion to active agency. GitHub Copilot Workspace, Cursor’s Composer mode, and similar features don’t just suggest the next line of code — they understand the intent behind a task and can propose coordinated changes across multiple files.

I’ve been using Copilot Workspace for several months now, and the experience is qualitatively different from traditional autocomplete. When I describe a bug fix or a feature in natural language, it generates a plan that identifies the relevant files, proposes specific changes, and lets me review and modify the plan before executing it. It’s not perfect — maybe 60-70% of the plans need adjustment — but even imperfect plans save time because they front-load the thinking about which files need to change.

Cursor has taken a slightly different approach with its tight integration of AI into the editor itself. The ability to select a block of code, describe what you want to change about it, and get a contextually aware diff has become part of my daily workflow. For routine refactoring — renaming concepts across a codebase, updating API call patterns, migrating from one library to another — these tools are genuinely faster than doing it manually.
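
To make “routine” concrete, here’s the shape of a library migration these block-level edits handle well. The example below (with a made-up endpoint) moves an HTTP helper from requests to httpx, exactly the kind of mechanical, pattern-following diff I’d rather describe than type by hand:

```python
# Before: a typical requests-based helper.
import requests

def fetch_user(user_id: int) -> dict:
    resp = requests.get(f"https://api.example.com/users/{user_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()

# After: the same helper on httpx, the kind of contextually aware
# diff a "select block, describe the change" workflow produces.
import httpx

def fetch_user_httpx(user_id: int) -> dict:
    with httpx.Client(timeout=10) as client:
        resp = client.get(f"https://api.example.com/users/{user_id}")
        resp.raise_for_status()
        return resp.json()
```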

The Codebase Understanding Problem

The limiting factor for AI coding assistants has always been context. A model that can only see the current file is dramatically less useful than one that understands your entire project. This is where the recent improvements have been most impactful.

RAG (Retrieval-Augmented Generation) over codebases has become standard. Tools now index your project, understand import relationships, and can pull in relevant context from files you haven’t opened. When I ask for help with a function, the assistant knows about the types it depends on, the tests that cover it, and the patterns used elsewhere in the project.
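
For intuition, here’s a minimal sketch of what that indexing and retrieval looks like under the hood. The toy hashed bag-of-words embedding is a stand-in for a real embedding model, and production tools layer on AST-aware chunking, import-graph traversal, and reranking, but the shape is the same:

```python
# Minimal codebase-RAG sketch: chunk source files, embed each chunk,
# then retrieve the chunks closest to a natural-language query.
from pathlib import Path
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    # Toy embedding: a hashed bag-of-words unit vector. A real assistant
    # uses a trained embedding model; this just makes the sketch run.
    vec = np.zeros(dim)
    for token in text.split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def index_codebase(root: str, chunk_lines: int = 40) -> list[tuple[str, str, np.ndarray]]:
    """Split every .py file under root into line chunks and embed them."""
    index = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for i in range(0, len(lines), chunk_lines):
            chunk = "\n".join(lines[i:i + chunk_lines])
            index.append((str(path), chunk, embed(chunk)))
    return index

def retrieve(index: list, query: str, k: int = 5) -> list[tuple[str, str]]:
    """Return the k chunks most similar to the query (dot product of unit vectors)."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: -float(item[2] @ q))
    return [(path, chunk) for path, chunk, _ in ranked[:k]]
```

The retrieved chunks get stuffed into the model’s context window alongside your question, which is how the assistant “knows” about types and tests you never opened.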

The practical impact is significant. Earlier this year, I was onboarding to a large Python codebase — about 200,000 lines across dozens of packages. Being able to ask the AI “how does the authentication flow work in this project?” and get an accurate walkthrough with file references saved me days of manual code archaeology. This is where AI assistants provide the most value — not in writing new code, but in understanding existing code.

What’s Actually Working in Production

After extensive use across several projects, I’ve developed a clear mental model of where AI coding assistants deliver real value and where they fall short:

High value: Boilerplate generation, test writing, documentation, code explanation, regex and SQL writing, API integration code, standard CRUD operations. Anything where the pattern is well-established and the AI has seen thousands of examples.

Medium value: Bug diagnosis (pointing in the right direction), refactoring suggestions, code review assistance, learning new frameworks. Useful but requires active human judgment.

Low value: Complex architectural decisions, novel algorithm design, performance optimization of critical paths, security-sensitive code. These still require deep human expertise and the AI can actually be dangerous here by producing plausible-looking but subtly wrong solutions.

The teams I see getting the most value from AI assistants are the ones that have internalized this mental model. They use AI aggressively for the high-value tasks, critically for the medium-value ones, and sparingly for the low-value ones.
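
To make the high-value bucket concrete, this is the kind of test boilerplate I now generate rather than type. The function and cases below are invented for illustration, but the pattern, a pure function plus a parametrized pytest table, is exactly the well-trodden ground where assistants shine:

```python
# Illustrative only: a small pure function and the parametrized
# test table an assistant will boilerplate in one shot.
import pytest

def slugify(title: str) -> str:
    """Lowercase a title and collapse whitespace runs into single hyphens."""
    return "-".join(title.lower().split())

@pytest.mark.parametrize(
    ("title", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  Leading and trailing  ", "leading-and-trailing"),
        ("Already-slugged", "already-slugged"),
        ("", ""),
    ],
)
def test_slugify(title: str, expected: str) -> None:
    assert slugify(title) == expected
```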

The Productivity Question

The elephant in the room is productivity measurement. GitHub’s internal studies claim 55% faster task completion with Copilot. Various other studies have shown numbers ranging from 20% to 75% improvement depending on the task and the developer’s experience level.

My personal experience: for the tasks where AI assistants excel (boilerplate, tests, documentation), the speedup is easily 2-3x. For complex feature work, the benefit is more modest — maybe 10-20%, mostly from faster context gathering and less time looking up API documentation. Overall, I’d estimate a 25-30% productivity improvement for my typical work mix, which is substantial.
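
Those numbers are consistent if you weight them by where the time actually goes. A rough back-of-the-envelope, with the 30/70 task mix as my assumption rather than anything measured:

```python
# Blend per-task speedups into an overall time saving.
# The 30/70 split is an assumed task mix, not a measurement.
mix = [
    (0.30, 2.5),   # ~30% of time on boilerplate/tests/docs at ~2.5x
    (0.70, 1.15),  # ~70% on complex feature work at ~1.15x
]
time_saved = sum(share * (1 - 1 / speedup) for share, speedup in mix)
print(f"overall time saved: {time_saved:.0%}")  # roughly 27%
```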

But raw speed isn’t the whole story. I’ve noticed that AI assistants subtly change how I work. I write more tests because the marginal cost of writing tests has dropped dramatically. I add more documentation because generating a good docstring takes seconds. I refactor more aggressively because the AI can handle the mechanical parts of a refactoring. These qualitative improvements in code quality might matter more than the raw speed gains.

My Take

We’re past the “is AI coding useful?” debate. It is. The interesting questions now are about integration depth, team workflows, and long-term skill development. I have some concerns about junior developers over-relying on AI for code they don’t fully understand — the learning process of struggling with a problem has real value that you lose when the AI just gives you the answer.

But for experienced developers who can critically evaluate AI output, these tools are a genuine force multiplier. My recommendation: invest time in learning the advanced features of your chosen tool. Most developers I talk to are using maybe 20% of what their AI assistant can do. Explore multi-file editing, codebase Q&A, and the emerging agentic features. The productivity ceiling is much higher than most people realize.

The next frontier is AI assistants that can run code, execute tests, and iterate on their own output. We’re seeing early versions of this already, and it’s going to change the development workflow even more fundamentally than autocomplete did.
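
A skeletal version of that loop looks something like this: propose a change, run the tests, and feed the failure output back in until the suite is green or the iteration budget runs out. Here generate_patch and apply_patch are hypothetical stand-ins for a real tool’s model call and workspace edit:

```python
# Sketch of an agentic fix loop. `generate_patch` and `apply_patch`
# are hypothetical stubs for a real tool's model call and edit step.
import subprocess

def generate_patch(task: str, feedback: str | None) -> str:
    raise NotImplementedError("call your model here")

def apply_patch(patch: str) -> None:
    raise NotImplementedError("write the edit into the working tree")

def fix_until_green(task: str, max_iters: int = 5) -> bool:
    feedback = None
    for _ in range(max_iters):
        apply_patch(generate_patch(task, feedback))
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # suite passes; stop iterating
        # Hand the failures back to the model for the next attempt.
        feedback = result.stdout + result.stderr
    return False
```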
