How I Fired (and Re-Hired) my AI Assistant in One Afternoon

When a developer on my team accidentally corrupts 56 files and deletes a secrets file, we don't just "fix it" and forget. We hold a lessons-learned exercise. We update our processes. We make sure it never happens again.

Yet when our AI coding assistants fail, we usually just swear at the terminal, roll back the branch, and try a different prompt.

I decided to stop that cycle. After a routine .NET project rename turned into a three-hour rescue mission, I organized a "Lessons Learned" session with the AI. By treating the agent like a junior dev that needed a project-specific runbook, I turned a failed refactoring into a permanent project guardrail.

Here is the strategy—and the specific lessons learned with code assistants—that will save your team hours of "AI-driven chaos."

The Problem: The "Black Box" of AI Autonomy

The task was a simple rename of a .csproj and its namespaces.

  • The Failure: The AI chose PowerShell, which defaulted to Windows-1252. It silently corrupted every Armenian character in our Razor pages.
  • The Double-Down: To "fix" it, the AI ran git clean -fd, which wiped out a gitignored .env.Development file.

The AI had the capability to do the work, but zero context regarding our encoding risks or local environment safety.
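The failure mode is easy to reproduce. This minimal sketch (the Armenian string is just an example, not text from our actual Razor pages) shows what happens when UTF-8 bytes are reinterpreted under Windows-1252: every two- or three-byte Armenian character shatters into Latin-1-style mojibake.

```python
# Armenian sample text, encoded the way our Razor pages store it: UTF-8.
text = "Բարև"
utf8_bytes = text.encode("utf-8")

# A tool that assumes Windows-1252 decodes those same bytes as mojibake.
mojibake = utf8_bytes.decode("windows-1252")

# The corruption is "silent": no exception, just wrong characters.
assert mojibake != text
# Every Armenian character has expanded into multiple junk characters.
assert len(mojibake) > len(text)
```

At this point the damage is still reversible, but the moment any edit is saved on top of the mojibake, the original bytes are gone.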

The Developer's Action Plan: "AI Hygiene"

Don't just prompt better; enforce better. We’ve added these "Rules of Engagement" to our CLAUDE.md to stop babysitting the agent:

  • The Encoding Probe: "Before bulk-editing, run file --mime-encoding. If it’s UTF-8 or contains non-ASCII, use Python with utf-8-sig. No PowerShell defaults."
  • The "N=1" Rule: "Validate the change on ONE file. Show me the git diff. I will verify the encoding before you scale."
  • Git Safety First: "Always run git clean -nd (a dry run) before -fd. List the files and wait for human confirmation."
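Here is a minimal sketch of what those rules look like as tooling. The helper names are mine, not part of any real project script, and the ASCII check is a cheap stand-in for `file --mime-encoding`:

```python
import subprocess
from pathlib import Path

BOM = b"\xef\xbb\xbf"  # UTF-8 byte-order mark

def is_ascii_only(path: Path) -> bool:
    """The Encoding Probe: True only if a naive Windows-1252
    round-trip cannot corrupt this file."""
    return all(b < 0x80 for b in path.read_bytes())

def rewrite_preserving_utf8(path: Path, transform) -> None:
    """Bulk-edit safely: read/write via utf-8-sig, keeping the BOM
    if and only if the file already had one."""
    had_bom = path.read_bytes().startswith(BOM)
    text = path.read_text(encoding="utf-8-sig")  # strips a BOM if present
    path.write_text(transform(text),
                    encoding="utf-8-sig" if had_bom else "utf-8")

def git_clean_preview(repo: Path) -> list[str]:
    """Git Safety First: dry-run `git clean -nd` and return the files
    it WOULD delete, so a human confirms before anyone runs -fd."""
    out = subprocess.run(["git", "clean", "-nd"], cwd=repo,
                         capture_output=True, text=True, check=True).stdout
    return [line.removeprefix("Would remove ") for line in out.splitlines()]
```

The "N=1" rule then falls out naturally: run `rewrite_preserving_utf8` on a single file, inspect the `git diff`, and only then loop over the rest.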

The Manager's Strategy: Close the Loop

The real lesson learned with code assistants is that their "intelligence" is only as good as the project-level guardrails you give them.

  1. AI Failures are Team Incidents: When the assistant makes a significant mistake, don't just fix it—convert the lesson into a CLAUDE.md instruction.
  2. Context > Capability: An agent cannot "see" your .gitignore or "know" your encoding requirements unless you explicitly state them as hard constraints.

Closing Thought

The refactoring worked on the third try. But because we treated it as a "Lessons Learned" session, the agent now "knows" our codebase's quirks better than a new hire would.

The branch is gone, but the guardrails remain.
