# PROMPT RUNTIME FOR MACHINES

Right now, your prompt isn't even a prompt. It's fragments — system instructions, templates, context injections, tool descriptions — scattered across your codebase and assembled at runtime into something you never actually see.

When the output is wrong, you don't know why. Was it the prompt? The model? The variables? The context window? You can't see the compiled input. There's no diff. No trace. No history. The actual behavior of your AI is a black box.

There is no workspace for the behavior.

reprom is that workspace.

...

Define AI behavior in files — prompt.md, memory.md, reprom.json. Call /run. Every execution is recorded with the full pipeline trace: compiled prompt, model response, tool calls, tokens, timing, and the exact commit that produced it.

Change a line. Commit. See the diff. Run again. Compare outputs. Roll back. The behavior has a history now — and it's separate from your application code.

...
```
const response = await fetch('https://api.reprom.run/v1/run', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer rp_live_...',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    input: 'What neighborhoods do you cover?',
    session_id: 'visitor_92f1',
    vars: {
      owner: 'Andrew Gierke',
      area: 'Hayes Valley',
    },
  }),
});

// → response

{
  "output": "We cover Hayes Valley, NoPa, and the Western Addition. Want details on any of those?",
  "steps": [
    { "type": "model", "tokens": 142, "ms": 830 }
  ],
  "version": {
    "commit": "a3f7c2e",
    "message": "tighten neighborhood scoping"
  },
  "meta": {
    "session_id": "visitor_92f1",
    "program": "listing-assistant",
    "model": "claude-sonnet-4-20250514"
  }
}
```
...
# HOW IT WORKS

You think about two things: behavior and runs.

Behavior lives in files — prompt.md, memory.md, reprom.json — version-controlled, diffable, blameable. Change a line, commit, see exactly what changed in every run after.
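As a sketch only — the exact file schema is reprom's, and everything here beyond the three filenames named above is illustrative — a behavior directory might look like:

```
listing-assistant/
├── prompt.md      # system instructions, with {{vars.x}} and {{input}} slots
├── memory.md      # persistent context compiled into every run
└── reprom.json    # program config: model, limits, tool wiring
```

Because it's just files, the usual git workflow applies: `git diff` shows the behavior change, `git blame` shows who made it, `git revert` rolls it back.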

Runs are executions. Every call to /run records the full pipeline — the compiled prompt, model response, tool calls, extraction, tokens, timing. Replay any run. Swap the model. Compare outputs.
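To make the trace concrete, here's a hedged sketch of working with a recorded run on the client side. The field names (`output`, `steps`, `version.commit`) come from the response example above; the summarizing and comparison logic is illustrative, not a reprom API.

```javascript
// Sketch: reduce a recorded run to the facts you compare across commits.
// Field names mirror the /run response shown earlier on this page.
function summarizeRun(run) {
  const modelSteps = run.steps.filter((s) => s.type === 'model');
  return {
    commit: run.version.commit,          // the exact commit that produced it
    output: run.output,
    tokens: modelSteps.reduce((n, s) => n + s.tokens, 0),
    ms: modelSteps.reduce((n, s) => n + s.ms, 0),
  };
}

// Did two runs (say, before and after a prompt commit) behave the same?
function sameBehavior(runA, runB) {
  return summarizeRun(runA).output === summarizeRun(runB).output;
}
```

Because every run carries its commit, a regression report is just a pair of summaries: same input, two commits, two outputs, diffed.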

Sessions persist context across runs. Same session_id, continuous conversation. Flat message log, automatic context management.
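In code, session continuity is just reusing the same `session_id`. The request shape below mirrors the /run example on this page; the small helper functions are illustrative, not part of reprom's SDK.

```javascript
const API = 'https://api.reprom.run/v1/run';

// Build a /run request (shape taken from the example above).
function buildRunRequest(input, sessionId, vars = {}) {
  return {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer rp_live_...',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ input, session_id: sessionId, vars }),
  };
}

async function ask(input, sessionId) {
  const res = await fetch(API, buildRunRequest(input, sessionId));
  return (await res.json()).output;
}

// Same session_id across turns → one continuous conversation:
// await ask('What neighborhoods do you cover?', 'visitor_92f1');
// await ask('Which of those has the most listings?', 'visitor_92f1');
```

The second call sees the first turn's context automatically; the flat message log and context windowing happen server-side.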

...
# WHAT YOU GET
* Structured response format
  (output, steps, version, meta)
* Template interpolation
  ({{vars.x}}, {{input}})
* Session memory with automatic context windowing
* Git-style commit history for prompt files
* Blame, diff, and rollback on every change
* Anthropic today. OpenAI and others next.
* Full execution trace on every run
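
Template interpolation ties the pieces together: `{{vars.x}}` slots in prompt.md are filled from the `vars` you pass to /run, and `{{input}}` is the user's message. The prompt text below is illustrative; the vars are the ones from the /run call above.

```
<!-- prompt.md (template) -->
You are the listing assistant for {{vars.owner}},
covering {{vars.area}}. Answer: {{input}}

<!-- compiled prompt, as recorded in the run trace -->
You are the listing assistant for Andrew Gierke,
covering Hayes Valley. Answer: What neighborhoods do you cover?
```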
...

For access or to request a demonstration, contact: andrew@reprom.run


© reprom, 2026.