**Written by Germán Valencia, Software Engineer**
When AI coding assistants first hit the scene, I wasn’t impressed. To me, they looked like autocomplete on steroids: clever, maybe, but hardly “intelligent.” I stayed away for months, convinced the tools would only get in my way. But when I finally gave them a shot, my perspective shifted. Slowly, AI moved from something I distrusted to something that now lives in my daily toolbox.
Here’s what I learned along the way.
My first real test was with Cursor. At the start, I dismissed it as just a VS Code fork with flashy marketing.
But the more I used it, the more I noticed how well it inferred intent. Small changes in my code would trigger completions that actually lined up with what I was trying to do. It wasn’t guessing; it was reading the signals from my edits and responding with useful suggestions.
That was the first time I felt AI was more than a gimmick.
I installed Warp for cosmetic reasons: better fonts and a cleaner terminal. I didn’t expect it to become a partner in debugging.
One day, Warp offered to fix a broken test. Out of curiosity, I let it try. Not only did it repair the test, but it also explained what might have gone wrong. Even more surprising, it proposed file-level changes that made sense.
When the test passed, I had to laugh. AI had just earned its keep.
The biggest breakthrough came with Claude Code and its CLAUDE.md file. For the first time, I could load a persistent project context into the model. No more re-explaining the codebase every session.
This was transformative. With structured context, the AI respected our team’s coding style and project constraints. Other platforms rushed to copy the idea (Cursor added rules files, Google and OpenAI caught up), but the lesson stuck: Context is what turns AI from novelty into a reliable collaborator.
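To make the idea concrete, here is a minimal, hypothetical CLAUDE.md. The project details and conventions below are invented for illustration; the point is that the file travels with the repo and is loaded into the model's context every session:

```markdown
# CLAUDE.md

## Project overview
A TypeScript monorepo for an internal billing service.

## Conventions
- Use strict TypeScript; no `any`.
- Prefer small, pure functions; avoid shared mutable state.
- All new endpoints need an integration test.

## Commands
- `npm test` runs the test suite.
- `npm run lint` must pass before committing.

## Constraints
- Do not modify files under `legacy/` without asking first.
```

Because the file is plain markdown checked into the repository, the whole team shares one source of truth for what the AI should and shouldn't do.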
Then I tried connecting AI agents through the Model Context Protocol (MCP). By linking agents to Jira, I could feed them ticket descriptions directly, and even push back clarifications or updates as development progressed.
Suddenly, documentation was no longer an afterthought; it became a living part of the workflow.
The potential here is huge: design, planning, and implementation stitched together by context-aware agents.
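For readers curious what the wiring looks like: Claude Code reads MCP server definitions from a `.mcp.json` file at the project root. The sketch below is illustrative only; the server package name and environment variables are hypothetical placeholders, not a real integration you can copy verbatim:

```json
{
  "mcpServers": {
    "jira": {
      "command": "npx",
      "args": ["-y", "example-jira-mcp-server"],
      "env": {
        "JIRA_BASE_URL": "https://your-org.atlassian.net",
        "JIRA_API_TOKEN": "${JIRA_API_TOKEN}"
      }
    }
  }
}
```

Once a server like this is registered, the agent can call its tools (fetching a ticket, posting a comment) as part of a normal coding session.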
If Cursor was my gateway, Zed feels like a glimpse of the future.
The read-only mode quickly became my favorite. It forces the AI to explain its reasoning before it touches a single line of code, a perfect balance between guidance and control.
AI tools can accelerate your workflow, but they can also wreak havoc if left unsupervised. The rule I learned the hard way: supervision is everything. Without it, the same speed that makes AI attractive becomes a liability.
I began this journey skeptical, convinced that AI was hype. Now, I see it differently.
LLMs aren’t replacing engineers. They’re partners: validators, implementers, and sparring partners for ideas. They accelerate learning, improve focus, and handle the repetitive edges of the job.
But like any tool, they’re only as good as the craftsperson guiding them. Healthy skepticism, clear direction, and disciplined oversight are what transform AI from novelty into genuine productivity.
If you’re still on the fence, here’s my advice: give it a try. Use it, supervise it, challenge it. You may find, as I did, that AI can move from a distraction to an indispensable tool.
Because in the end, AI won’t do our jobs for us. But it can help us do them better.