Two days ago, GitHub officially launched Copilot as a generally available product, moving it out of the free technical preview that’s been running since June 2021 and into a paid offering at $10 per month (or $100 per year). It remains free for verified students and maintainers of popular open-source projects. After a year of using the preview, I have thoughts — and they’re more nuanced than the “AI will replace developers” headlines suggest.
What Copilot Actually Is (and Isn’t)
For those who haven’t used it: GitHub Copilot is a code completion tool powered by OpenAI’s Codex model, which is a descendant of GPT-3 fine-tuned on code. It integrates into your editor (VS Code primarily, with support for JetBrains IDEs, Neovim, and others) and provides inline suggestions that range from completing a single line to generating entire functions.
It’s not a chatbot. It’s not a code reviewer. It’s not going to architect your system. What it does is predict what you’re likely to type next, based on the context of your current file, your comments, function signatures, and surrounding code. Think of it as autocomplete on steroids — really powerful steroids, but autocomplete nonetheless.
During the technical preview, I used Copilot daily across Python, TypeScript, and Go projects. My experience was consistent: it’s remarkably good at boilerplate and pattern-matching tasks, occasionally brilliant at complex logic, and sometimes confidently wrong in ways that could introduce subtle bugs if you’re not paying attention.
Where Copilot Shines
The best use cases I’ve found over the past year:
Test generation. Write a function, start writing a test, and Copilot will often generate reasonable test cases covering happy paths and common edge cases. It won’t replace a thought-through testing strategy, but it can scaffold the repetitive parts of test suites incredibly quickly.
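To make this concrete, here’s a minimal sketch of the workflow (the `slugify` function and the test names are hypothetical, invented for illustration, not captured from a real session): write the utility, type `def test_`, and Copilot will usually scaffold cases along these lines.

```python
import re

def slugify(text: str) -> str:
    """Lowercase the text, collapse runs of non-alphanumerics into hyphens, trim hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# The kind of scaffold Copilot tends to produce from the function above:
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_punctuation():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_empty():
    assert slugify("") == ""
```

The happy path and the obvious edge cases come for free; the testing *strategy* — which behaviors actually matter — is still on you.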
Boilerplate reduction. Setting up Express route handlers, writing Terraform resource blocks, crafting SQL queries from comments — the kind of code where the pattern is well-established and you’re essentially translating intent into syntax. Copilot handles this with near-perfect accuracy.
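A hypothetical instance of that comment-to-syntax translation, using only the Python standard library (the function and its name are invented for illustration): write the comment, and Copilot will typically complete the body verbatim or close to it.

```python
import csv
import io

# Parse a CSV string into a list of dicts keyed by the header row.
# A one-line comment like the one above is usually all the context
# Copilot needs to complete this function correctly.
def parse_csv(text: str) -> list[dict]:
    return list(csv.DictReader(io.StringIO(text)))

# parse_csv("name,role\nalice,admin") -> [{"name": "alice", "role": "admin"}]
```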
Learning new APIs. When I started working with a Go library I hadn’t used before, Copilot’s suggestions taught me idiomatic patterns faster than reading documentation. It’s not a replacement for understanding what the code does, but it’s a remarkably efficient way to see how APIs are typically used.
Docstrings and comments. Writing documentation for functions is the kind of tedious task that Copilot handles well. It reads the function implementation and generates a description that’s usually accurate and well-formatted.
Where It Falls Short
The failure modes are important to understand, because they’re not always obvious:
Subtle logic errors. Copilot might generate a sorting function that looks correct but uses an unstable sort when stability matters, or a date comparison that doesn’t account for timezones. The code compiles, the tests pass for most cases, and the bug hides until production.
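As an illustration of the timezone variant of this failure mode (hypothetical code, not a captured suggestion): the buggy version below is exactly the shape of a plausible completion, and it passes any test suite running on a machine whose local clock is UTC — such as most CI.

```python
from datetime import datetime, timedelta, timezone

def is_expired_buggy(expiry_utc: datetime) -> bool:
    # Plausible completion: compiles, reads fine, passes tests in UTC-clocked CI.
    # But datetime.now() is naive *local* time while the expiry is naive UTC,
    # so anywhere outside UTC the check is off by the full UTC offset.
    return datetime.now() > expiry_utc

def is_expired(expiry_utc: datetime) -> bool:
    # Fix: evaluate "now" on the same clock the expiry was stored in.
    return datetime.now(timezone.utc).replace(tzinfo=None) > expiry_utc
```

(Better still: use timezone-aware datetimes end to end, so the interpreter refuses to compare mismatched clocks at all.)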
Security-sensitive code. I’ve seen Copilot suggest SQL queries without parameterization, crypto implementations with hardcoded IVs, and authentication logic with timing vulnerabilities. It optimizes for “code that looks right” not “code that is secure.” Never accept Copilot suggestions in security-critical paths without thorough review.
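The SQL case is easy to demonstrate (sqlite3 here; the table and names are invented for illustration): the interpolated version is the “looks right” shape Copilot sometimes suggests, and a classic injection payload walks straight through it, while the parameterized version treats the payload as an inert string.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "viewer")])

def find_user_unsafe(name: str) -> list:
    # The "looks right" version: user input interpolated straight into SQL.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user(name: str) -> list:
    # Parameterized version: the driver handles quoting, injection is inert.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
# find_user_unsafe(payload) returns every row; find_user(payload) returns none.
```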
Outdated patterns. The training data has a cutoff, and Copilot sometimes suggests deprecated APIs or patterns that were common in older codebases. If you’re working with rapidly evolving libraries, the suggestions may not reflect current best practices.
Over-reliance risk. This is the one that concerns me most. I’ve caught myself accepting suggestions without fully reading them, especially when under time pressure. The cognitive shortcut of “Copilot suggested it, it’s probably fine” is dangerous and insidious.
The Pricing and Open Source Question
The $10/month pricing is reasonable for professional developers — if Copilot saves you even 30 minutes a month, it’s paid for itself. The free tier for students and open-source maintainers is a smart move that will build loyalty and keep the training pipeline flowing.
But the training pipeline is exactly where the controversy lies. Copilot was trained on public GitHub repositories, including those with copyleft licenses like GPL. The legal and ethical implications are far from settled. If Copilot suggests a block of code that’s substantially similar to GPL-licensed source code, and you incorporate it into a proprietary project, are you violating the license? GitHub’s position is that training on public code constitutes fair use, but this hasn’t been tested in court.
The Software Freedom Conservancy and others have raised serious concerns. I think these concerns are legitimate. The open-source community created the training data, and the fact that a commercial product is being built on that data without clear consent mechanisms is worth scrutinizing — even if you ultimately conclude it’s legally permissible.
The Bigger Picture
Copilot’s general availability is a milestone, but it’s also just the beginning. GitHub is clearly going to expand Copilot’s capabilities — I’d expect deeper IDE integration, more language support, and potentially features that go beyond code completion into code review and refactoring suggestions.
More broadly, Copilot is the most visible example of a trend that’s going to reshape software development: AI as a development tool. Not replacing developers, but changing the nature of the work. Less time typing boilerplate, more time thinking about architecture, reviewing AI suggestions, and solving problems that require genuine understanding.
My Take
After a year with Copilot, I’m cautiously positive. It makes me faster at tasks I was already good at, and it’s a useful learning aid in unfamiliar territory. But I’m under no illusion that it makes me a better engineer — if anything, it requires more discipline to maintain code quality standards when a tool is constantly offering to write the next line for you.
At $10/month, I’ll be subscribing. The productivity gains are real, even if modest. But I’d strongly recommend establishing team guidelines around Copilot usage: always review suggestions before accepting, never use them blindly in security-critical code, and maintain your ability to write code without AI assistance. The tool is most valuable when you’re good enough to know when it’s wrong.
The AI coding assistant era is officially here. It’s not the revolution the hype suggests, but it’s not a gimmick either. It’s a useful tool that requires skilled hands to wield effectively — which, come to think of it, describes most tools worth using.
