Four parallel specialists produce a lot of overlap. Without a synthesizer, that overlap shows up as 40 redundant comments on the PR. Core is the synthesizer that turns four streams into one signal.
## The four stages
### 1. Collect
Specialists submit findings in a structured shape.
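
A minimal sketch of what such a finding might contain; the field names and types here are illustrative assumptions, not Sigilix's actual schema:

```ts
// Illustrative finding shape (field names are assumptions, not the real schema).
interface Finding {
  specialist: string;   // which of the four specialists produced it
  path: string;         // file the finding refers to
  line: number;         // line within the diff
  symbol?: string;      // function/variable the finding names, if any
  severity: "info" | "warning" | "critical";
  confidence: number;   // 0..1, the specialist's own confidence
  body: string;         // the comment text Core may render
}
```
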
### 2. Cross-reference

Core performs structural-provenance checks against the source code to suppress hallucinations (a sketch follows the list):

- **Line-validity check.** Does the finding's `path:line` actually exist in the diff? Hallucinated line numbers are dropped.
- **Symbol-resolution check.** Does the function or variable referenced in the finding actually exist in the file? Hallucinated identifiers are dropped.
- **Pattern-match check.** For security findings, does the claimed unsafe pattern (e.g., "passes user input to SQL template") actually match the code? Pattern mismatches are dropped or downgraded.
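
A rough sketch of the first two checks, reusing the `Finding` shape from the Collect sketch; the input maps and the substring symbol check are assumptions, not how Sigilix actually resolves symbols:

```ts
// changedLines maps each file path to the set of line numbers present in the
// diff; fileContents maps paths to full file text (both are assumed inputs).
function passesStructuralChecks(
  finding: Finding,
  changedLines: Map<string, Set<number>>,
  fileContents: Map<string, string>,
): boolean {
  // Line-validity: the cited path:line must actually exist in the diff.
  const lines = changedLines.get(finding.path);
  if (!lines || !lines.has(finding.line)) return false;

  // Symbol-resolution: the referenced identifier must appear in the file.
  // (A real resolver would parse the file, not substring-match.)
  if (finding.symbol) {
    const source = fileContents.get(finding.path) ?? "";
    if (!source.includes(finding.symbol)) return false;
  }

  return true;
}
```
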
### 3. Calibrate
Core then performs deduplication and severity calibration (a sketch follows the table):

- **Deduplication.** Overlapping findings (same `path:line` from multiple specialists) are merged into one. The merged finding's body draws from each specialist's contribution; the severity is the maximum of the inputs.
- **Severity calibration.** Each finding's severity is recalculated:
| Inputs | Final severity |
|---|---|
| 1 specialist, low confidence | Info |
| 1 specialist, high confidence | Warning |
| 2+ specialists, agreement | Warning or Critical |
| Critical-tagged + structural check passed | Critical |
| Critical-tagged + structural check skeptical | Warning (downgraded) |
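
A sketch of how deduplication and the table above could be applied, reusing the `Finding` shape from earlier; the 0.8 confidence threshold and the merge strategy are assumptions, and the critical/structural-check rows are assumed to be handled in stage 2:

```ts
const rank = { info: 0, warning: 1, critical: 2 } as const;
type Severity = keyof typeof rank;

function calibrate(findings: Finding[]): Finding[] {
  // Group overlapping findings by their path:line anchor.
  const byAnchor = new Map<string, Finding[]>();
  for (const f of findings) {
    const key = `${f.path}:${f.line}`;
    const group = byAnchor.get(key) ?? [];
    group.push(f);
    byAnchor.set(key, group);
  }

  const merged: Finding[] = [];
  for (const group of byAnchor.values()) {
    const specialists = new Set(group.map((f) => f.specialist));
    // Severity of the merged finding starts at the maximum of the inputs...
    const maxSeverity = group
      .map((f) => f.severity)
      .reduce((a, b) => (rank[a] >= rank[b] ? a : b));

    // ...then is recalculated roughly per the table above.
    let severity: Severity;
    if (specialists.size >= 2) {
      severity = maxSeverity === "info" ? "warning" : maxSeverity;
    } else if (group[0].confidence >= 0.8) {
      severity = "warning";
    } else {
      severity = "info";
    }

    merged.push({
      ...group[0],
      severity,
      // The merged body draws from each specialist's contribution.
      body: group.map((f) => f.body).join("\n\n"),
    });
  }
  return merged;
}
```
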
### 4. Render
Core writes the final comment. The shape (one finding's rendering is sketched after the list):

- **Synthesizer summary**: what was reviewed, how many findings survived, what verdict.
- **Inline findings**: anchored to a specific `path:line`, tagged by specialist and severity.
- **Suggested patches**: included where Core's structural check confirms a clean fix is in scope.
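
A sketch of how one surviving finding could be rendered as an inline comment, reusing the `Finding` shape; the tag format and the `suggestedPatch` field are assumptions:

```ts
// Render one finding as an inline comment body: specialist and severity tag,
// the finding text, and a GitHub suggestion block when a clean fix is in scope.
function renderFinding(f: Finding & { suggestedPatch?: string }): string {
  const parts = [`**[${f.specialist} / ${f.severity}]** ${f.body}`];
  if (f.suggestedPatch) {
    const fence = "`".repeat(3); // GitHub's ```suggestion fence
    parts.push(`${fence}suggestion\n${f.suggestedPatch}\n${fence}`);
  }
  return parts.join("\n\n");
}
```
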
## Failure modes
### Specialist 503s
If one specialist's model returns a 503 (overloaded), Sigilix's cross-provider fallback kicks in: a different model on a different provider attempts the same prompt. If that fails too, the specialist's findings are skipped, but Core still synthesizes from the remaining specialists. The verdict is still posted, marked with a footnote: _3 of 4 specialists succeeded._
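
A sketch of the cross-provider fallback for one specialist; the `Provider` interface is an assumed abstraction, not Sigilix's actual client code:

```ts
// Hypothetical provider interface; real clients and model names will differ.
interface Provider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Try the primary provider; on failure (e.g. a 503), retry the same prompt on
// a different provider. If both fail, return null so Core can skip this
// specialist and still synthesize from the rest.
async function runSpecialist(
  prompt: string,
  primary: Provider,
  fallback: Provider,
): Promise<string | null> {
  try {
    return await primary.complete(prompt);
  } catch {
    try {
      return await fallback.complete(prompt);
    } catch {
      return null; // skipped; the verdict still posts with a footnote
    }
  }
}
```
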
### Stale-head guards
If the user pushes a new commit while Sigilix is mid-review, the old review would be stale. Sigilix has two stale-head guards (the SHA check itself is sketched after the list):

- **Before fan-out.** If the PR's head SHA changed since the webhook fired, abort.
- **Before posting.** If the head SHA changed during specialist execution, abort and let the new webhook fire its own review.
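
A sketch of the guard itself; `pulls.get` is a real Octokit call, but the surrounding wiring is an assumption:

```ts
import { Octokit } from "@octokit/rest";

// Compare the PR's current head SHA against the SHA the webhook delivered.
// Run once before fan-out and again before posting the review.
async function headIsStale(
  octokit: Octokit,
  owner: string,
  repo: string,
  pull_number: number,
  shaFromWebhook: string,
): Promise<boolean> {
  const { data: pr } = await octokit.pulls.get({ owner, repo, pull_number });
  return pr.head.sha !== shaFromWebhook;
}
```
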
### Submit failures
If GitHub rejects the inline-anchor positions in the review payload (a typical 422 for a bad line number), Sigilix falls back to an anchorless review with all findings rolled into the body. The user sees one coherent review, just without inline anchors. This recovers verdicts that would otherwise be silently lost.
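
A sketch of the fallback, reusing the `Finding` shape; `pulls.createReview` is the real GitHub endpoint, but the error handling shown is an assumption:

```ts
import { Octokit } from "@octokit/rest";

// Post an anchored review; if GitHub rejects the anchors with a 422, repost
// the same findings rolled into a single anchorless review body.
async function submitReview(
  octokit: Octokit,
  owner: string,
  repo: string,
  pull_number: number,
  summary: string,
  findings: Finding[],
) {
  try {
    await octokit.pulls.createReview({
      owner, repo, pull_number,
      body: summary,
      event: "COMMENT",
      comments: findings.map((f) => ({ path: f.path, line: f.line, body: f.body })),
    });
  } catch (err: any) {
    if (err?.status !== 422) throw err;
    const rolledUp = findings
      .map((f) => `- \`${f.path}:${f.line}\` (${f.severity}): ${f.body}`)
      .join("\n");
    await octokit.pulls.createReview({
      owner, repo, pull_number,
      body: `${summary}\n\n${rolledUp}`,
      event: "COMMENT",
    });
  }
}
```
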
## Why this beats single-agent review

A single-agent reviewer has no synthesis stage. It produces raw output and posts it. There is no deduplication, no cross-reference, no calibration. Every false positive ships. Every redundant comment ships. Every hallucinated line number ships. Core is the difference: the four-stage pipeline is what makes Sigilix's reviews readable.

## Read next
- **Confidence Scoring**: the numeric details behind ranking and suppression.
- **Review Lifecycle**: trigger conditions, pipeline stages, what happens on each push.

