
LLM Adapter — Shadow Execution

The adapter keeps your primary LLM provider intact while running a shadow provider in parallel. It records response diffs and anomaly events as JSONL, offering a lightweight measurement foundation for fallbacks and vendor comparisons.
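The pattern above can be sketched in a few lines. This is a minimal illustration, not the adapter's actual API: the names `call_primary`, `call_shadow`, `shadow_run`, and the JSONL fields (`diff_ratio`, `match`) are hypothetical stand-ins, and the providers are stubbed out with fixed strings.

```python
import json
import difflib
from concurrent.futures import ThreadPoolExecutor

def call_primary(prompt):
    # Stand-in for the real primary-provider call (hypothetical).
    return "Paris is the capital of France."

def call_shadow(prompt):
    # Stand-in for the shadow-provider call (hypothetical).
    return "The capital of France is Paris."

def shadow_run(prompt, log_path="runs-metrics.jsonl"):
    """Return the primary response; run the shadow in parallel and log a diff."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        primary_future = pool.submit(call_primary, prompt)
        shadow_future = pool.submit(call_shadow, prompt)
        primary = primary_future.result()
        shadow = shadow_future.result()

    # Record one JSONL event per run: a similarity ratio plus an exact-match flag.
    ratio = difflib.SequenceMatcher(None, primary, shadow).ratio()
    event = {"prompt": prompt, "diff_ratio": round(ratio, 3), "match": primary == shadow}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

    # Callers only ever see the primary result; the shadow is measurement-only.
    return primary
```

The key design property is that the shadow call never affects the returned value: failures or slow responses on the shadow path degrade only the metrics, not the primary request.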

Highlights

Key Artifacts

How to Reproduce

  1. In projects/04-llm-adapter-shadow/, create a virtual environment and run pip install -r requirements.txt.
  2. Execute python demo_shadow.py to observe shadow execution and the resulting artifacts/runs-metrics.jsonl log.
  3. Run pytest -q to verify tests covering shadow diffs and error handling.
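After step 2 produces the JSONL log, each line is one run event that can be inspected with a short script. A minimal sketch, assuming each event carries a `diff_ratio` field as in the illustration above (the real log schema may differ):

```python
import json

def summarize(path, threshold=0.8):
    """Count logged runs and flag those whose diff ratio falls below a threshold."""
    total = anomalies = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)  # one JSON object per line (JSONL)
            total += 1
            if event.get("diff_ratio", 1.0) < threshold:
                anomalies += 1
    return {"total": total, "anomalies": anomalies}
```

Pointing `summarize` at artifacts/runs-metrics.jsonl gives a quick anomaly count without any extra tooling.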

Next Steps