Error Signal Fidelity: When the Compiler Speaks to the AI, Not the Human
Most compiler errors are written for human eyes — prose messages, implicit assumptions, single-shot reads. When AI is generating the source, the reader is a language model. RECALL's diagnostic output was designed for that reader: stable codes, structured JSON, per-code hints, and exact location — all the inputs a generation loop needs to close.
The Assumption Hidden in Every Error Message
Every compiler error contains a hidden assumption: that a human will read it.
The message is prose because prose is how humans parse context quickly. The location is a file path and line number because a human will open the file, scroll to the line, read the surrounding code, and figure it out. The hint — if there is one — is written to be understood by someone who already knows the language and just made a small mistake.
These are the right design choices when the reader is human. They are not the only choices when the reader is an AI system generating the source.
A language model generating RECALL source does not open files. It does not scroll. It does not read surrounding code in the sense a human does — it processes a token sequence, and the quality of that processing depends on the structure of what it receives. Prose errors are signal. Structured errors are better signal. The difference compounds.
AI Generation Errors Compound
A human programmer notices invalid syntax immediately and corrects it before the output leaves the editor. An AI system generating invalid syntax at position N continues generating from the invalid state — subsequent tokens are conditioned on the error. The compiler's diagnostic is not the end of the process. It is the input to the next generation step.
Two Errors for the Same Mistake
Consider an AI generating RECALL source that uses a slightly wrong element name. A human writing HTML or Markdown would get something like this:
```
error: unexpected identifier at line 42
unexpected token 'HEADING-TITLE'
expected: valid element name
```

What does this give an AI? A line number, a token it produced, and the information that it was wrong. No category. No suggestion. No indication of what the valid vocabulary looks like. The AI has to infer the fix from partial signal.
The same mistake in a RECALL source file, checked with `recall check --format json`, returns this:
```json
{
  "errors": 1,
  "warnings": 0,
  "diagnostics": [
    {
      "code": "RCL-003",
      "severity": "error",
      "category": "unknown-element",
      "file": "page.rcl",
      "line": 42,
      "col": 11,
      "message": "Unknown element",
      "why": "HEADING-TITLE is not a registered element. Did you mean HEADING-1?",
      "hint": "Run `recall schema` to see all valid elements.",
      "source": "  DISPLAY HEADING-TITLE PAGE-TITLE.",
      "caret": "          ^^^^^^^^^^^^^"
    }
  ]
}
```

Every field is machine-readable. The code is stable — `RCL-003` is always `RCL-003`, not prose that varies by context. The category tells the AI what class of error this is. The hint points to the corrective action. The source line and caret give the AI the exact re-entry point. The why field already contains a "did you mean?" suggestion derived from the closed vocabulary.
The AI does not need to infer the fix. The diagnostic supplies it.
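To make that concrete, here is a minimal sketch of how a generation harness might turn one of these diagnostics into a focused repair instruction. The field names match the JSON above; the `repair_prompt` helper and its wording are hypothetical, not part of RECALL.

```python
import json

# One diagnostic, with the field names shown in the example above.
diagnostic = json.loads("""
{
  "code": "RCL-003",
  "severity": "error",
  "category": "unknown-element",
  "file": "page.rcl",
  "line": 42,
  "col": 11,
  "why": "HEADING-TITLE is not a registered element. Did you mean HEADING-1?",
  "hint": "Run `recall schema` to see all valid elements."
}
""")

def repair_prompt(d: dict) -> str:
    """Collapse a structured diagnostic into one repair instruction.

    Hypothetical helper: every piece of the instruction comes straight
    from a diagnostic field, so nothing has to be inferred.
    """
    return (
        f"Fix {d['code']} ({d['category']}) at {d['file']}:{d['line']}:{d['col']}. "
        f"{d['why']} {d['hint']}"
    )

print(repair_prompt(diagnostic))
```

Because the suggestion lives in the `why` field and the action in the `hint` field, the prompt is assembled by concatenation, not interpretation.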
The Generation Loop
Error Signal Fidelity is about what happens inside the generate-check-repair loop: the AI emits source, the compiler checks it, and the diagnostics become the input to the next generation pass. Every step depends on the quality of what it receives from the previous step.
The loop is not a workflow added on top of RECALL. It is a design assumption baked into the diagnostic system from the start. The JSON output format, the stable codes, the per-code hints — all of it exists because this loop was the intended use case.
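The loop can be sketched in a few lines. Everything below is a simulation: `check` stands in for `recall check --format json` and `repair` stands in for the model's next generation pass, so the control flow is the point, not the stubs.

```python
def check(source: str) -> list[dict]:
    """Stand-in for `recall check --format json`.

    Simulated: flags the unknown element from the article's example and
    returns diagnostics in (a subset of) the shape shown earlier.
    """
    diags = []
    for n, line in enumerate(source.splitlines(), start=1):
        if "HEADING-TITLE" in line:
            diags.append({"code": "RCL-003", "line": n,
                          "why": "Did you mean HEADING-1?"})
    return diags

def repair(source: str, diags: list[dict]) -> str:
    """Stand-in for the model's repair pass, driven entirely by codes."""
    fixed = source
    for d in diags:
        if d["code"] == "RCL-003":
            fixed = fixed.replace("HEADING-TITLE", "HEADING-1")
    return fixed

source = "DISPLAY HEADING-TITLE PAGE-TITLE."
for _ in range(3):            # bounded retries, not an open-ended loop
    diags = check(source)
    if not diags:
        break                 # the loop closes: no errors remain
    source = repair(source, diags)

print(source)  # DISPLAY HEADING-1 PAGE-TITLE.
```

The loop terminates when `check` returns no diagnostics, which is exactly the "inputs a generation loop needs to close" framing from the introduction.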
The Four Properties
Error Signal Fidelity in RECALL rests on four design decisions that were made together. Removing any one of them degrades the loop.
Stable codes
RCL-001 is always RCL-001. Not "type error", not a message string that changes with context. A stable code is a token the AI can match against a known fix strategy. Prose error messages are not stable tokens.
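Stability is what makes code-based dispatch possible. A sketch, assuming hypothetical strategy names (only the codes themselves come from the text):

```python
# Because codes are stable tokens, a harness can keep a fixed table from
# code to repair strategy. The strategy names here are invented examples.
FIX_STRATEGIES = {
    "RCL-001": "retype-field",        # hypothetical mapping
    "RCL-003": "substitute-element",  # apply the did-you-mean suggestion
}

def strategy_for(code: str) -> str:
    # Prose messages vary with context; codes do not, so exact match works.
    return FIX_STRATEGIES.get(code, "regenerate-line")

print(strategy_for("RCL-003"))  # substitute-element
print(strategy_for("RCL-999"))  # regenerate-line (unknown code: fall back)
```

A prose message like "unexpected identifier" cannot key this table reliably; a stable code can.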
Structured JSON output
`recall check --format json` returns every field the AI needs: code, severity, category, file, line, col, message, why, hint, source line, caret. Not a formatted string — a structured object. The AI reads it the same way it reads the schema.
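A harness can triage on the top-level counts before reading individual diagnostics. The payload shape (`errors`, `warnings`, `diagnostics`) is the one shown earlier; the triage logic is illustrative:

```python
import json

# Abbreviated payload in the top-level shape from the earlier example.
payload = json.loads(
    '{"errors": 1, "warnings": 0, "diagnostics": '
    '[{"code": "RCL-003", "severity": "error", "line": 42}]}'
)

# Branch on counts first: no string parsing anywhere in the decision.
action = "accept" if payload["errors"] == 0 else "repair"
print(action, [d["code"] for d in payload["diagnostics"]])
```

Contrast this with grepping a terminal transcript for the word "error": the structured object makes the accept/repair decision a field lookup.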
Per-code hints
Every error code in the registry carries a fix hint. Not "invalid syntax" — "Declare the field as PIC X if it is intended for display as text, or use LABEL which explicitly accepts numeric values." The hint is part of the code definition, not generated at runtime.
Exact re-entry location
file:line:col plus the full source line plus a caret pointing at the offending token span. The AI does not need to search for the error. It re-enters at the exact position. The caret is rendered in the JSON as a string — machine-readable as well as human-readable.
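The caret string is a pure function of the column and the token span. A sketch, assuming (as the example diagnostic implies) that `col` is 1-based:

```python
def render_caret(col: int, token: str) -> str:
    """Build a caret string aligned under the offending token.

    Assumes 1-based columns, matching the col: 11 in the example above.
    """
    return " " * (col - 1) + "^" * len(token)

source_line = "  DISPLAY HEADING-TITLE PAGE-TITLE."
caret = render_caret(11, "HEADING-TITLE")
print(source_line)
print(caret)  # ten spaces, then thirteen carets under HEADING-TITLE
```

Because the caret ships as a string in the JSON, the consumer can either display it or invert it: the run of `^` characters recovers the exact character span to regenerate.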
Five Error Codes in the Registry
The diagnostic registry contains 30+ error and warning codes. Each is defined with a stable code, description, example snippet, fix hint, and related codes.
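The shape of a registry entry can be inferred from the fields the text lists (stable code, description, example snippet, fix hint, related codes, plus the `seeAlso` links mentioned below). This dataclass is a sketch of that shape, not RECALL's actual implementation; the field names beyond those listed are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiagnosticEntry:
    """Hypothetical shape of one registry entry."""
    code: str                       # stable token, e.g. "RCL-003"
    description: str                # what the error means
    example: str                    # minimal snippet that triggers it
    fix: str                        # the per-code hint shipped with the compiler
    see_also: tuple[str, ...] = ()  # related codes

# One entry, populated from the RCL-003 example earlier in the article.
REGISTRY = {
    "RCL-003": DiagnosticEntry(
        code="RCL-003",
        description="Unknown element",
        example="DISPLAY HEADING-TITLE PAGE-TITLE.",
        fix="Run `recall schema` to see all valid elements.",
        see_also=("RCL-001",),  # hypothetical link
    ),
}

print(REGISTRY["RCL-003"].fix)
```

Making entries immutable (`frozen=True`) mirrors the stability guarantee: an entry, once published, is a contract the generation loop can rely on.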
Design, Not Tooling
The distinction matters. A tooling decision can be added later — you can wrap any compiler in a JSON formatter. A design decision cannot be retrofitted easily because it shapes what the system was built to do.
RECALL's diagnostic system was built knowing that the primary consumer of compiler output would be the AI compositor — not the human developer watching the terminal. That assumption is visible in the registry structure: every code carries a description, an example, a fix, and seeAlso links — not as documentation for humans, but as a machine-readable repair manual for the generation loop.
HTML has no error codes. Markdown has no error codes. JSX surfaces type errors from TypeScript, whose diagnostics were designed for human developers. RECALL has a diagnostic registry designed for the AI that generated the source in the first place.
That is the separation. Most notations treat errors as the end of a failed generation. RECALL treats errors as the input to the next one.