Every model in this pipeline uses a completely different schema. SYNAPSE translates between them automatically — through a canonical IR and two adapter functions per model.
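The canonical IR can be pictured as a small typed record. A minimal sketch follows; the field names `payload`, `provenance`, and `task_header` match the adapter code below, but the class names and everything else here are assumptions, not the SYNAPSE API:

```python
from dataclasses import dataclass, field, replace
from typing import Optional

@dataclass
class TaskHeader:
    quality_floor: Optional[float] = None    # downstream confidence threshold

@dataclass
class Payload:
    content: str = ""                        # raw document text
    entities: list = field(default_factory=list)  # model-agnostic entity dicts

@dataclass
class CanonicalIR:
    task_header: TaskHeader = field(default_factory=TaskHeader)
    payload: Payload = field(default_factory=Payload)
    provenance: list = field(default_factory=list)  # one entry per model hop

    def copy(self) -> "CanonicalIR":
        # Fresh payload and provenance so one hop's writes don't leak
        # back into the IR an upstream adapter is still holding
        return replace(self, payload=replace(self.payload),
                       provenance=list(self.provenance))
```

The `copy()` here is deliberately shallow apart from `payload` and `provenance`: each egress call gets an IR it can write to without mutating its caller's view.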
```python
# NERBertAdapter.egress() — the only code you write
def egress(self, output, original_ir, latency_ms):
    updated = original_ir.copy()
    entities, cur, toks, scores = [], None, [], []
    for token in output:
        tag, word, sc = token["entity"], token["word"], token["score"]
        if tag.startswith("B-"):
            if cur:  # close the previous span before opening a new one
                entities.append({
                    "text": "".join(toks).replace("##", ""),
                    "label": cur,
                    "confidence": sum(scores) / len(scores),
                })
            cur, toks, scores = tag[2:], [word], [sc]
        elif tag.startswith("I-"):
            toks.append(word)
            scores.append(sc)
    if cur:  # flush the final span, which the loop alone never emits
        entities.append({
            "text": "".join(toks).replace("##", ""),
            "label": cur,
            "confidence": sum(scores) / len(scores),
        })
    updated.payload.entities = entities
    updated.provenance.append(self.build_provenance(
        confidence=max(e["confidence"] for e in entities) if entities else 0.0,
        latency_ms=latency_ms,
    ))
    return updated
```
The classifier expects `entity_type`. The NER model produced `label`. Same concept, completely different names. The ingress adapter resolves this in 4 lines. No connector file. No bespoke bridge code.
```python
# ClassifierAdapter.ingress() — reads from canonical IR
def ingress(self, ir):
    return [{
        "text": e["text"],
        "entity_type": e["label"],  # label → entity_type
        "context_window": ir.payload.content[:80],
        "threshold": ir.task_header.quality_floor or 0.7,
    } for e in (ir.payload.entities or [])]
```
The scorer expects `party` and `role`. The classifier used `text` and `obligation_type`. The ingress adapter handles the rename. The scorer never knows a classifier was involved.
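The scorer hop follows the same rename pattern. A sketch, with heavy caveats: `scorer_ingress`, the `payload.obligations` field, and the sample record are all hypothetical; only the field names `party`, `role`, `text`, and `obligation_type` come from the text above.

```python
from types import SimpleNamespace

# Hypothetical scorer-side ingress: rename the classifier's canonical
# output fields to what the scorer expects. Not the real SYNAPSE adapter.
def scorer_ingress(ir):
    return [{
        "party": o["text"],               # text → party
        "role": o["obligation_type"],     # obligation_type → role
    } for o in (ir.payload.obligations or [])]

# Stand-in IR carrying one classifier-produced obligation
ir = SimpleNamespace(payload=SimpleNamespace(obligations=[
    {"text": "Acme Corp", "obligation_type": "indemnifier"},
]))
rows = scorer_ingress(ir)
```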
Two adapter functions per model. Written once. Compatible with every other registered model, permanently.
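One full hop can be exercised end to end with stand-ins: a simplified NER egress writes canonical entities into the IR, and the classifier ingress reads them back out under its own field names. The `SimpleNamespace` IR and function names here are illustrative, not the SYNAPSE API; the BIO-merge and rename logic mirror the two adapters shown above.

```python
from types import SimpleNamespace

def ner_egress(output, ir):
    # Simplified egress: merge WordPiece BIO tags into canonical entity
    # dicts on the IR (no provenance tracking, for brevity)
    entities, cur, toks, scores = [], None, [], []
    for tok in output:
        tag, word, sc = tok["entity"], tok["word"], tok["score"]
        if tag.startswith("B-"):
            if cur:
                entities.append({"text": "".join(toks).replace("##", ""),
                                 "label": cur,
                                 "confidence": sum(scores) / len(scores)})
            cur, toks, scores = tag[2:], [word], [sc]
        elif tag.startswith("I-"):
            toks.append(word)
            scores.append(sc)
    if cur:  # flush the final span
        entities.append({"text": "".join(toks).replace("##", ""),
                         "label": cur,
                         "confidence": sum(scores) / len(scores)})
    ir.payload.entities = entities
    return ir

def classifier_ingress(ir):
    # Canonical `label` becomes the classifier's `entity_type`
    return [{"text": e["text"],
             "entity_type": e["label"],
             "context_window": ir.payload.content[:80],
             "threshold": ir.task_header.quality_floor or 0.7}
            for e in (ir.payload.entities or [])]

ir = SimpleNamespace(
    payload=SimpleNamespace(content="Acme shall indemnify the Licensee.",
                            entities=[]),
    task_header=SimpleNamespace(quality_floor=None),
)
tokens = [{"entity": "B-ORG", "word": "Ac", "score": 0.98},
          {"entity": "I-ORG", "word": "##me", "score": 0.96}]
rows = classifier_ingress(ner_egress(tokens, ir))
```

Neither function imports the other's schema; the IR in the middle is the only contract either side sees.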