SYNAPSE

A three-model pipeline
with zero connector code

Every model in this pipeline uses a completely different schema. SYNAPSE translates between them automatically — through a canonical IR and two adapter functions per model.
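For readers who want to see the shape of that contract, here is a minimal sketch of the two-function adapter pattern the page describes. The class and function names below are illustrative assumptions, not SYNAPSE's actual API; only the ingress/egress split and the canonical-IR-in-the-middle design come from the page.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class CanonicalIR:
    """Hypothetical canonical IR: a shared payload plus a provenance list."""
    payload: dict = field(default_factory=dict)
    provenance: list = field(default_factory=list)

class ModelAdapter:
    def ingress(self, ir: CanonicalIR) -> Any:
        """Translate the canonical IR into this model's native input."""
        raise NotImplementedError

    def egress(self, output: Any, ir: CanonicalIR, latency_ms: float) -> CanonicalIR:
        """Translate native output back into an updated canonical IR."""
        raise NotImplementedError

def run_pipeline(ir: CanonicalIR, hops: list[tuple[ModelAdapter, Callable]]) -> CanonicalIR:
    # Each hop: IR -> native input -> model -> native output -> IR.
    # No model ever sees another model's schema, only the IR.
    for adapter, model_fn in hops:
        native_in = adapter.ingress(ir)
        native_out = model_fn(native_in)
        ir = adapter.egress(native_out, ir, latency_ms=0.0)
    return ir

class _UpperAdapter(ModelAdapter):
    """Toy adapter wrapping str.upper as a 'model', to show the flow."""
    def ingress(self, ir):
        return ir.payload["text"]
    def egress(self, out, ir, latency_ms):
        ir.payload["text"] = out
        ir.provenance.append({"model": "upper"})
        return ir

result = run_pipeline(CanonicalIR(payload={"text": "hi"}), [(_UpperAdapter(), str.upper)])
```

Because every hop reads from and writes to the IR, adding a fourth model means writing two functions, not three new pairwise connectors.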

Pipeline input: a contract clause.
Hop 1 — named entity extraction
NER dslim/bert-base-NER
Wordpiece tokenization. BERT returns "Meridian Capital Partners LLC" as 5 separate tokens. The NER egress adapter reassembles them into full entity spans before writing to the canonical IR. You never handle this in pipeline code.
# NERBertAdapter.egress() — the only code you write
def egress(self, output, original_ir, latency_ms):
    updated = original_ir.copy()
    entities, cur, toks, scores = [], None, [], []

    def flush():
        if cur:
            entities.append({
                # Join with spaces, then stitch "##" continuations back on —
                # "".join would merge separate words together.
                "text": " ".join(toks).replace(" ##", ""),
                "label": cur,
                "confidence": sum(scores) / len(scores),
            })

    for token in output:
        tag, word, sc = token["entity"], token["word"], token["score"]
        if tag.startswith("B-"):
            flush()  # close the previous entity span
            cur, toks, scores = tag[2:], [word], [sc]
        elif tag.startswith("I-"):
            toks.append(word)
            scores.append(sc)
    flush()  # don't drop the final entity

    updated.payload.entities = entities
    updated.provenance.append(self.build_provenance(
        confidence=max((e["confidence"] for e in entities), default=0.0),
        latency_ms=latency_ms,
    ))
    return updated
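The span-reassembly step can be exercised on its own. Below is a standalone sketch of the same BIO merge over illustrative wordpieces for "Meridian Capital Partners LLC" — the exact token spellings and scores are assumptions for the example, not captured model output.

```python
def merge_bio(tokens):
    """Reassemble BIO-tagged wordpieces into full entity spans."""
    entities, cur, toks, scores = [], None, [], []

    def flush():
        if cur:
            entities.append({
                "text": " ".join(toks).replace(" ##", ""),
                "label": cur,
                "confidence": sum(scores) / len(scores),
            })

    for t in tokens:
        if t["entity"].startswith("B-"):
            flush()
            cur, toks, scores = t["entity"][2:], [t["word"]], [t["score"]]
        elif t["entity"].startswith("I-"):
            toks.append(t["word"])
            scores.append(t["score"])
    flush()
    return entities

# Illustrative 5-token wordpiece output (spellings assumed for the example):
raw = [
    {"entity": "B-ORG", "word": "Mer",      "score": 0.999},
    {"entity": "I-ORG", "word": "##idian",  "score": 0.998},
    {"entity": "I-ORG", "word": "Capital",  "score": 0.999},
    {"entity": "I-ORG", "word": "Partners", "score": 0.999},
    {"entity": "I-ORG", "word": "LLC",      "score": 0.999},
]
merged = merge_bio(raw)
```

Five wordpieces in, one entity span out — which is exactly why the downstream classifier never has to know the NER model tokenizes at all.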
Canonical IR — after hop 1 · ir_version: 1.0.0 · provenance: 1 entry
Schema absorbed. The NER model's raw wordpiece output is now a clean, normalized entity list. The classifier downstream does not know or care what format the NER model uses. It reads from the canonical IR — and nothing else.
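As a concrete illustration, the canonical IR after hop 1 might look roughly like this. The field names (payload, entities, provenance, ir_version) are drawn from the adapter code on this page; the nesting and values are assumptions for illustration.

```python
# Hypothetical snapshot of the canonical IR after hop 1.
# Field names come from the adapter code; values are illustrative.
ir_after_hop1 = {
    "ir_version": "1.0.0",
    "payload": {
        "content": "(the original contract clause)",
        "entities": [
            {
                "text": "Meridian Capital Partners LLC",
                "label": "ORG",
                "confidence": 0.999,
            },
        ],
    },
    "provenance": [
        {
            "model_id": "dslim/bert-base-NER",
            "confidence": 0.999,
            "latency_ms": 143,
            "cost_usd": 0.0,
        },
    ],
}
```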
Hop 2 — obligation classification
CLS internal/obligation-classifier-v2
Silent field name translation. The classifier expects entity_type. The NER model produced label. Same concept, completely different names. The ingress adapter resolves this in 4 lines. No connector file. No bespoke bridge code.
# ClassifierAdapter.ingress() — reads from canonical IR
def ingress(self, ir):
    return [{
        "text":         e["text"],
        "entity_type":  e["label"],   # label → entity_type
        "context_window": ir.payload.content[:80],
        "threshold":   ir.task_header.quality_floor or 0.7,
    } for e in (ir.payload.entities or [])]
Canonical IR — after hop 2 · ir_version: 1.0.0 · provenance: 2 entries
Provenance chain building. A second entry is appended — model ID, confidence score, latency, cost. Entries are immutable and append-only: no downstream model can modify what the NER model recorded, and the ordering of the chain preserves exactly which model ran when.
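One simple way to get that immutability in Python is a frozen dataclass, which rejects any write after construction. This is a sketch of how the entries could be enforced, not SYNAPSE's actual implementation; the field names mirror what the provenance table on this page records.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceEntry:
    """Append-only audit record: frozen=True makes every field read-only."""
    model_id: str
    confidence: float
    latency_ms: float
    cost_usd: float

entry = ProvenanceEntry(
    model_id="dslim/bert-base-NER",
    confidence=0.999,
    latency_ms=143,
    cost_usd=0.0,
)

try:
    entry.confidence = 0.5  # any mutation attempt raises FrozenInstanceError
    blocked = None
except Exception as e:
    blocked = type(e).__name__
```

Downstream code can append new entries to the chain, but never rewrite an existing one.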
Hop 3 — compliance scoring
SCR internal/compliance-scorer-v3
Another mismatch, invisibly resolved. The scorer expects party and role. The classifier used text and obligation_type. The ingress adapter handles it. The scorer never knows a classifier was involved.
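By analogy with the hop-2 adapter shown above, the hop-3 ingress mapping might look like the following sketch. Only the field names party, role, text, and obligation_type come from this page; the function shape, the policy parameter, and the sample values are assumptions.

```python
# Hypothetical ComplianceScorerAdapter-style ingress mapping (a sketch —
# only the four field names are taken from the page; the rest is assumed).
def scorer_ingress(obligations, policy="gdpr-commercial-v3"):
    return [{
        "party": o["text"],              # text -> party
        "role":  o["obligation_type"],   # obligation_type -> role
        "policy": policy,
    } for o in obligations]

native_input = scorer_ingress([
    # Illustrative classifier output; obligation_type value is made up.
    {"text": "Meridian Capital Partners LLC", "obligation_type": "data_processor"},
])
```

Two renames, one constant — and the scorer receives exactly the schema it was built for.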
Pipeline complete
Compliance score: 0.87
Policy: gdpr-commercial-v3
Result: Passed
3 models · 0 connectors written
Provenance chain — immutable audit trail
dslim/bert-base-NER · conf: 0.999 · 143ms · $0.00
obligation-classifier-v2 · conf: 0.943 · 867ms · $0.0002
compliance-scorer-v3 · conf: 0.87 · 89ms · $0.0004

Two adapter functions per model. Written once. Compatible with every registered model permanently.