A review is only as trustworthy as your ability to verify the inputs that produced it. Review provenance (ARC-184) is the manifest that ships with every Sigilix review, recording what ran and against what.
## Why this matters
Two scenarios make provenance load-bearing:

- “Was this review against my latest push?” — A reviewer reads a Sigilix comment, accepts the verdict, and merges. If the review was actually against an old SHA, the verdict applies to old code. The stale-marker system catches this.
- “Which model produced this finding?” — When a critical finding is unexpected, the human reviewer needs to know what produced it. The provenance manifest names the actual primary or fallback model each specialist used on this run.
## The manifest
Every review comment includes a hidden manifest at the bottom of its body. The `sigilix-meta` marker lets tooling parse it programmatically.
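The exact payload isn't reproduced on this page, but a consumer could extract it along these lines. A minimal sketch, assuming the manifest is embedded as an HTML comment whose body starts with the `sigilix-meta` marker followed by JSON; the delimiter details are an assumption, not a documented contract:

```python
import json
import re

# Assumed embedding: an HTML comment opened by the "sigilix-meta"
# marker and carrying a JSON payload. The real delimiters may differ;
# treat this as illustrative.
META_RE = re.compile(r"<!--\s*sigilix-meta\s*(\{.*?\})\s*-->", re.DOTALL)

def extract_manifest(comment_body: str) -> dict | None:
    """Pull the hidden provenance manifest out of a review comment body."""
    match = META_RE.search(comment_body)
    if match is None:
        return None  # this comment carries no manifest
    return json.loads(match.group(1))
```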
## Outcomes per specialist
| `outcome` | Meaning |
|---|---|
| `ok` | Primary model succeeded on the first attempt |
| `retry` | Primary model needed a retry (transient 503/429/timeout) but ultimately produced findings |
| `fallback` | Primary failed; the cross-provider fallback succeeded |
| `skipped` | Both primary and fallback failed; this specialist contributed no findings |
| `gated` | The router decided this specialist shouldn't fire on this PR (e.g., docs-only) |
A review with a `skipped` specialist is posted with a footnote in the visible body (e.g., _3 of 4 specialists succeeded_). A review with a `fallback` is posted normally — the fallback is supposed to handle these cases silently.
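As a rough illustration, a consumer could reproduce that footnote from the per-specialist outcomes in the manifest. A sketch, assuming `gated` specialists never ran and so are excluded from the tally (that exclusion is an assumption; the outcome strings come from the table above):

```python
def success_footnote(outcomes: dict[str, str]) -> str | None:
    """Build the 'N of M specialists succeeded' footnote.
    Returns None when nothing was skipped (review posts normally)."""
    # Gated specialists never fired, so (by assumption) they don't
    # count toward the denominator.
    ran = {name: o for name, o in outcomes.items() if o != "gated"}
    succeeded = [n for n, o in ran.items() if o in ("ok", "retry", "fallback")]
    if len(succeeded) == len(ran):
        return None
    return f"{len(succeeded)} of {len(ran)} specialists succeeded"

# Example: one specialist skipped out of four that ran.
print(success_footnote({
    "logic": "ok", "security": "fallback",
    "performance": "retry", "tests": "skipped",
}))  # -> "3 of 4 specialists succeeded"
```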
## Stale-marker detection
When a `pull_request.synchronize` event arrives (a new commit was pushed), the new review's pipeline reads the most recent prior review's `sigilix-meta` block and compares the manifest's `headSha` against the current head.
If they differ — i.e., the prior review was against an older SHA — Sigilix updates the prior comment with a stale marker.
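A minimal sketch of that comparison, reusing the `extract_manifest` helper sketched above (the `headSha` field name is taken from this page; the actual comment-update call is outside its scope and omitted):

```python
def is_superseded(prior_comment_body: str, current_head_sha: str) -> bool:
    """Return True when the prior review was produced against an older
    SHA and should receive a stale marker."""
    manifest = extract_manifest(prior_comment_body)
    if manifest is None:
        return False  # no provenance to compare against
    return manifest.get("headSha") != current_head_sha
```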
## Provenance for findings, not just reviews
Each individual finding also carries provenance metadata:

- Which specialist produced it (`logic`/`security`/`performance`/`tests`)
- Whether it was sourced from an evidence channel (SARIF, depVulns, secrets, deterministicChecks)
- Whether it was the result of agreement (multiple specialists flagged the same `path:line`)
In the visible body, this surfaces as a [Glyph] / [Warden] / [Pulse] / [Weave] prefix on each inline finding, plus an evidence badge ([Trivy via SARIF], [Secret scanner], etc.) when the finding came from an external channel.
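Per-finding provenance could be modeled roughly like this. A sketch only: the container shape and field names are assumptions, while the listed specialist and channel values come from this page:

```python
from dataclasses import dataclass

# Documented values; the dataclass itself is illustrative.
SPECIALISTS = ("logic", "security", "performance", "tests")
EVIDENCE_CHANNELS = ("SARIF", "depVulns", "secrets", "deterministicChecks")

@dataclass
class FindingProvenance:
    specialist: str                # which specialist produced the finding
    evidence_channel: str | None   # e.g. "SARIF"; None for model-only findings
    agreement: bool                # multiple specialists flagged the same path:line
```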
## Tooling integration
The `sigilix-meta` manifest is intended to be parsed by tooling. Two known integrations:
- GitHub Actions consumers that aggregate Sigilix manifests across PRs for trend analysis.
- Internal dashboards that surface “which specialists fell back this week” for capacity planning.
The schema version (the `schema: 1` field) is bumped when the manifest changes in a breaking way. Forward-compatible additions don't bump it.
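Consumers should therefore gate on the schema version before trusting field semantics. A sketch, assuming version 1 is the only version a given tool understands:

```python
SUPPORTED_SCHEMA = 1

def parse_if_supported(manifest: dict) -> dict | None:
    """Reject manifests whose breaking-change version we don't understand.
    Unknown extra fields are fine: forward-compatible additions don't
    bump the schema number."""
    if manifest.get("schema") != SUPPORTED_SCHEMA:
        return None
    return manifest
```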
## What the manifest does not include
- Customer code or diff content (privacy)
- Specialist prompts (those are Sigilix’s IP)
- API keys or internal-service identifiers
- Per-finding internal scores (only severities and counts)
## Read next

- **SARIF Evidence**: Push external scanner output into Sigilix reviews.
- **Review Lifecycle**: Where the manifest is built in the pipeline.

