
Ask the Agent

Ask any agent about the code it wrote — powered by Git AI's session history.

Aidan Cunniffe

If you've been using Git AI, annotations like these are all across your repos — marking AI-generated code and storing the original intent, architecture decisions, and everything else the agent was thinking as it wrote each line.


And now — just like you'd ask a colleague on Slack about code they wrote — Git AI lets you ask the original agent why it made the decisions it did. Even if that agent ran on a colleague's machine a few weeks ago.

This works everywhere: Claude Code, Codex, OpenCode, Cursor — as long as you have Git AI installed and are using a coding agent with support for skills.

The problem with asking agents about old code

When you highlight some code and ask your agent "why did we do it this way?", it doesn't actually know. It wasn't there when the code was written. So it reads the code, reasons about it, and gives you a plausible-sounding answer. Sometimes it's right. More often it's unhelpful, and sometimes it's actively misleading.

Here's what that looks like. We selected our custom Sentry client in src/observability/flush.rs and asked Claude Code why we chose not to use the official SDK:

❯ why didn't we use the sentry sdk here?
  ⎿  ⧉ Selected 9 lines from src/observability/flush.rs in Visual Studio Code

⏺ Let me look at the relevant code to understand the approach taken.

⏺ Searched for 1 pattern, read 1 file (ctrl+o to expand)

⏺ The custom SentryClient here is intentional and makes sense for this
  architecture. A few reasons:

  1. Decoupled flush process — Events are written to log files during normal
  execution, then a separate flush-logs process reads and uploads them later. The
   official Sentry SDK is designed for in-process capture and sending — it hooks
  into panics, manages a background worker, etc. None of that applies here since
  you're just POSTing pre-built JSON payloads from log files.
  2. Minimal surface area — The custom client is ~50 lines total: parse DSN,
  construct an auth header, POST JSON via minreq. The official sentry crate pulls
   in a large dependency tree (with reqwest, tokio, etc.) for features you'd
  never use in this "read file, HTTP POST, done" flow.
  3. Dual-destination routing — You send to both an OSS and an enterprise Sentry
  instance in the same pass. The official SDK is designed around a single DSN
  initialized at startup. Supporting two destinations would mean either two SDK
  inits or manual envelope manipulation — more complex than just having two
  lightweight clients.
  4. No runtime needed — This runs in smol::block_on with minreq (blocking HTTP).
   The official Sentry SDK would want tokio or its own async runtime, adding
  unnecessary weight for a short-lived flush process.

  In short: the Sentry store API (/api/{project_id}/store/) is simple enough that
   a hand-rolled client is the right call when you just need to fire-and-forget
  JSON events from a batch process.

It sounds authoritative. It's well-reasoned. But it's reconstructed from just the code and commit messages — the agent is reading the implementation and reverse-engineering a justification. It has no idea what the developer actually asked for, what constraints were discussed, or how the design evolved over multiple iterations.

Asking the agent that actually wrote it

With Git AI's /ask skill, a subagent finds the relevant lines of code, looks up the original session that produced them, and answers as the agent that actually wrote it — with the real intent, architecture decisions, and requirements that shaped the code.
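Git AI's actual storage and lookup are not shown in this post, but the core idea — keeping the "why" in the repo alongside the commits that need it — can be sketched with plain git notes. This is a conceptual stand-in, not Git AI's implementation:

```shell
# Conceptual sketch only: attach session context to a commit with git notes.
# Git AI's real format and lookup are more sophisticated; this just shows
# that per-commit context can live in the repo itself.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "add custom sentry flush client"
sha=$(git rev-parse HEAD)

# Store the intent next to the commit...
git -c user.name=demo -c user.email=demo@example.com \
    notes add -m "intent: avoid SDK flush latency on the git hot path" "$sha"

# ...and any later reader, human or agent, can retrieve it.
git notes show "$sha"
```

Notes travel with the repository (under refs/notes), which is why this kind of context can survive long after the original session has ended.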

Same question, same code — but this time with /ask:

❯ /ask why didn't we use the sentry sdk here in @src/observability/flush.rs

⏺ Task(Search git-ai history for flush.rs)
  ⎿  Done (3 tool uses · 26.3k tokens · 34s)
  (ctrl+o to expand)

⏺ Answer (as the original author):

  I didn't use the Sentry SDK because git-ai sits on the hot path of every git
  command, and the SDK's initialization and flush cycle could add 2-3 seconds of
  latency. Aidan was very explicit: "WE REALLY don't want to swallow the 2-3
  seconds it might take to flush events."

  The architecture evolved away from the SDK over time. Originally I did use the
  sentry crate with sentry::init() and sentry::protocol::Event. But as
  requirements grew — dual DSN support (OSS + Enterprise) and PostHog as a second
  analytics destination — the SDK became more burden than benefit:

  - The SDK's global singleton model (sentry::init, Hub, configure_scope) made
  managing two independent Sentry instances awkward
  - We needed a uniform pattern for all three backends (two Sentry + PostHog) —
  each is just "parse config, POST JSON"
  - The custom SentryClient gives us zero SDK initialization overhead and full
  control over the HTTP lifecycle using minreq
  - Minimal dependencies in the flush command

  Original context: Aidan asked for Sentry integration that would never block git
  operations. The design went through iterations: SDK-only-in-flush → dual DSN
  support → adding PostHog. At each step the custom HTTP approach became more
  justified.

  Date(s): Primary work by Aidan Cunniffe, using Cursor with
  claude-4.5-sonnet-thinking.

While the first answer was pure AI slop, the answer /ask gave was informed by the specific requirements of the engineer who prompted the agent. It could actually answer the question: why?

Why this matters

Across our industry, thousands of AI-written, lightly human-reviewed lines of code are merged into each codebase every week. That code works today, but six months from now, when someone needs to change it, the reasoning behind it is gone. The agent that wrote it has long since exited. The session is closed. The context is lost.

The /ask skill makes that context accessible to any engineer or agent, in any editor, at any time. Intent, architecture decisions, and requirements stay present and top of mind — not just for you, but for everyone who touches the code after you.

A one-liner to make your coding agents smarter

The /ask skill is super useful when reviewing and reading code, but we wanted to go further: how could we use it to make our agents smarter? So we started directing our agents to use /ask during plan mode, and the results have been solid. Our agents are noticeably smarter — they get the why behind our code, they figure out the right touchpoints to integrate with less direction, and they aren't knocking down load-bearing walls like they used to.

We've tried a lot of "context engineering" tools and practices, and frankly, this is the only thing we've found that works and makes our agents noticeably better. It's simple too: install Git AI and add this one line to your Claude.md or AGENTS.md:

In plan mode, always use the /ask skill to read the code and the original transcript that generated it. Understanding intent will help you write a better plan.
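If you want to script this across repos, appending the line is itself a one-liner (assuming an AGENTS.md at the repo root; the file is created if it doesn't exist yet):

```shell
# Append the /ask instruction to AGENTS.md at the repo root.
echo 'In plan mode, always use the /ask skill to read the code and the original transcript that generated it. Understanding intent will help you write a better plan.' >> AGENTS.md
```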

Head over to the GitHub repo and try it today!