Sovel
Methods

How Sovel finds and fixes knowledge risk

The Sovel method is detection-led, reviewer-governed, and outcome-tracked. This page explains the logic, the rules, and the loop — from raw work order data to a placed operational knowledge object.

The Core Loop

Detect → Capture → Govern → Place → Monitor

1. Detect: Gap engine scans WO history; issues are ranked by severity and risk type.

2. Capture: Technicians contribute context via voice or text when a WO ties to a flagged issue.

3. Govern: Reviewers approve, edit, or reject AI-structured drafts. Nothing advances without sign-off.

4. Place: Approved knowledge becomes a versioned Operations Skill bound to the maintenance ontology.

5. Monitor: Placement reduces issue risk; MTTR and response-time improvements close the ROI loop.

The Gap Engine

Six rules that find what your CMMS misses

Sovel does not require any modification to your CMMS. It reads a standard work order export and applies six detection rules. Each rule produces prioritized issues with a severity score, confidence estimate, and linked evidence.

Recurring failures

Work orders for the same asset and failure mode that close without documented root cause or resolution steps. Repeated corrective work with no captured fix signals undocumented expertise.

Signal: failure frequency × absence of structured closure notes
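As an illustration, this signal could be computed along the following lines. A minimal sketch only; field names such as `closure_notes` are hypothetical, not Sovel's actual schema:

```python
from collections import Counter

def recurring_failure_score(work_orders):
    """Score (asset, failure_mode) pairs: frequency x fraction of WOs
    closed without structured notes. Field names are illustrative."""
    freq = Counter()
    undocumented = Counter()
    for wo in work_orders:
        key = (wo["asset_id"], wo["failure_mode"])
        freq[key] += 1
        if not wo.get("closure_notes"):
            undocumented[key] += 1
    return {k: freq[k] * (undocumented[k] / freq[k]) for k in freq}

wos = [
    {"asset_id": "PUMP-01", "failure_mode": "clog", "closure_notes": ""},
    {"asset_id": "PUMP-01", "failure_mode": "clog", "closure_notes": ""},
    {"asset_id": "PUMP-01", "failure_mode": "clog",
     "closure_notes": "cleared impeller"},
]
scores = recurring_failure_score(wos)
```

Three occurrences with two undocumented closures yields a higher score than three well-documented ones, which is the intended ordering.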

Knowledge concentration

Critical assets handled predominantly by one or two technicians, with measurable resolution-time gaps when they are absent. Concentration risk is scored against coverage breadth and the dominant expert's proximity to retirement.

Signal: assignee dominance ratio × resolution-time variance × tenure risk
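A rough sketch of how these three factors could combine. All names and the 20-year tenure ceiling are illustrative assumptions, not Sovel's scoring model:

```python
from collections import Counter
from statistics import pvariance

def concentration_signal(assignments, tenure_years, tenure_risk_after=20):
    """Dominance ratio x resolution-time variance x tenure factor.
    assignments: list of (technician, resolution_hours) tuples."""
    counts = Counter(tech for tech, _ in assignments)
    top_tech, top_n = counts.most_common(1)[0]
    dominance = top_n / len(assignments)                  # assignee dominance
    variance = pvariance([h for _, h in assignments])     # resolution spread
    tenure_factor = min(tenure_years[top_tech] / tenure_risk_after, 1.0)
    return dominance * variance * tenure_factor
```

High variance with one dominant, long-tenured assignee pushes the score up; evenly spread work pushes it toward zero.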

Procedure drift

WO descriptions diverge from the documented procedure for the same task. Drift flags where field practice has evolved away from written SOPs — sometimes a quality risk, sometimes a smarter field adaptation that should update the procedure.

Signal: text similarity delta between WO notes and mapped procedure
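One way to sketch the similarity delta, using Python's stdlib `difflib` character-level ratio as a stand-in for whatever similarity model Sovel actually uses (the 0.6 baseline is an illustrative threshold):

```python
from difflib import SequenceMatcher

def drift_score(wo_notes, procedure_text, baseline=0.6):
    """Positive score when WO notes' similarity to the mapped procedure
    falls below a baseline; zero means no measurable drift."""
    sim = SequenceMatcher(None, wo_notes.lower(), procedure_text.lower()).ratio()
    return max(baseline - sim, 0.0)
```

Identical text scores zero; a field workaround that no longer resembles the written SOP produces a positive drift score for reviewer attention.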

High-cost blind spots

High labor cost or high downtime assets that lack documented troubleshooting coverage. The combination of maintenance spend and knowledge gap represents the highest marginal value of capture.

Signal: total corrective labor cost × absence of structured troubleshooting entries

Retirement risk

Long-tenured experts holding concentrated knowledge on critical assets. When tenure is high and coverage is narrow, the window for capture shrinks every day. Retirement-risk issues are prioritized for immediate capture outreach.

Signal: expert tenure × concentration score × asset criticality
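The prioritization implied by this signal can be sketched as a simple ranking; every field name here is hypothetical:

```python
def retirement_risk(experts, assets):
    """Rank experts for capture outreach: tenure x concentration x
    asset criticality (criticality assumed normalized to 0..1)."""
    ranked = []
    for e in experts:
        crit = assets[e["primary_asset"]]["criticality"]
        score = e["tenure_years"] * e["concentration"] * crit
        ranked.append((e["name"], round(score, 2)))
    return sorted(ranked, key=lambda r: r[1], reverse=True)
```

A 14-year expert concentrated on a critical asset outranks a junior technician with broad coverage, so outreach lands where the window is shortest.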

Shadow work

Undocumented workarounds that show up in WO narratives but never make it into official procedures. Shadow work often represents the real institutional knowledge — the informal fix that works when the manual approach fails.

Signal: unstructured action patterns in WO notes not matched to any procedure node

Governance Architecture

Reviewers decide. Models surface.

Sovel may use language models for structured extraction, similarity scoring, and assistive evidence summarization. The product stance is consistent: reviewers decide. Sign-off, safety culture, and your change process stay with your people and procedures.

  • Structured drafts, not decisions. The AI produces a draft knowledge entry with source citations. The reviewer reads, edits, and approves — or rejects.
  • Immutable audit trail. Every approval, edit, and rejection is logged with timestamp, author, and reason code. The full decision chain is discoverable in any audit.
  • Contradiction detection. When a new entry conflicts with an existing Operations Skill, a propagation alert surfaces the conflict. Reviewers resolve it; the AI does not auto-overwrite.
  • Correction inference. Over time, the reviewer's editing patterns inform which extraction dimensions need improvement — improving future drafts without overriding human authority.
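The immutable audit trail described above can be sketched as a hash-chained, append-only log. This is an illustrative shape, not Sovel's actual log schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_decision(log, action, author, reason_code):
    """Append an immutable decision record; each entry hashes its
    predecessor, so any later mutation breaks the chain."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {
        "action": action,            # APPROVE | EDIT | REJECT
        "author": author,
        "reason_code": reason_code,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev": prev,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry
```

Because each record commits to the hash of the one before it, the full decision chain is verifiable in an audit without trusting the storage layer.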

Reviewer Inbox — Decision Flow (example)

  • Propagation conflict detected: new entry contradicts existing Skill 003; reviewer arbitration required.
  • Draft (pending): RAS pump declog procedure, captured from Ray Delgado; structured by AI, awaiting sign-off.
  • Approved: Clarifier startup sequence → placed as Operations Skill KS-0047 · J. Miller · 2025-11-03.

Reason code log:
  • APPROVE · "Consistent with field verification on P-204A"
  • EDIT · "Adjusted torque spec — OEM guidance superseded by site test"
  • REJECT · "Applies to Asset v1 only — v2 replacement changes procedure"

Federation: pattern shapes, never entity data

Every reviewer-approved decision at one plant sharpens the detection model at every other plant in the network — without any customer name, asset tag, telemetry value, or entry body crossing the boundary. Opt-in, one-way hashed, and gated behind a legal data-sharing agreement.

Cross-Plant Federated Review (opt-in · hash-anonymous)
[Screenshot: Sovel federated pattern review screen with anonymized cross-plant matches and share contract]
What crosses the boundary
  • ✓ Anonymized pattern shapes
  • ✓ Reviewer decisions + rejection reasons
  • ✓ One-way-hashed plant + reviewer IDs
What never leaves your plant
  • ✗ Customer or facility names
  • ✗ Asset identifiers, serial numbers, telemetry
  • ✗ Raw reviewer-entry bodies
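The one-way hashing of plant and reviewer IDs could look roughly like a salted HMAC; a sketch under the assumption of a network-wide secret salt, with the actual scheme governed by the federation contract:

```python
import hashlib
import hmac

def one_way_id(raw_id, network_salt):
    """Derive a stable, non-reversible identifier for a plant or
    reviewer before it crosses the federation boundary."""
    return hmac.new(network_salt, raw_id.encode(), hashlib.sha256).hexdigest()
```

The same ID always hashes to the same token, so patterns can be correlated across submissions, but the raw name never leaves the plant.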
See the full federation contract →
Method Deep-Dives

Category-defining detection surfaces

Four long-form explanations of how Sovel detects, gates, and benchmarks specific knowledge-risk archetypes. Written for reliability engineers, maintenance directors, and plant managers who want the reasoning, not the pitch.

How Sovel Compounds

  • Provenance per inference

    Every suggestion carries cited sources, model version, confidence, and reviewer history.

  • Immutable audit trail

    Every decision — accept, edit, reject — is recorded with reason code and reviewer identity.

  • Reviewer is final governor

    No autonomous commits to governed truth. Humans decide; Sovel remembers.

  • Inferred from your corrections

    The Correction Inference Engine personalizes suggestions based on each reviewer's history.

  • Knowledge-governance-first

    Detection, capture, governance, placement, monitoring — designed as one continuous loop.

State of Practice

Industrial-governance HITL patterns we implement

Five patterns identified as the 2026 state-of-practice for safe LLM use in regulated industrial workflows. All five are scaffolded in Sovel today. The architecture is not novel by accident — it matches what serious operators are converging on.

Structured side-effect blocks

Every governed write is a typed, content-hashed JSON object — never free-text into production. Reviewer sees the structure before placement.
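A minimal sketch of what "typed, content-hashed JSON object" could mean in practice; the `kind`/`payload` shape is an assumption, not Sovel's actual wire format:

```python
import hashlib
import json

def side_effect_block(kind, payload):
    """Wrap a governed write as a typed object whose content hash lets
    the reviewer verify exactly what will be placed."""
    body = json.dumps({"kind": kind, "payload": payload}, sort_keys=True)
    return {
        "kind": kind,
        "payload": payload,
        "content_hash": hashlib.sha256(body.encode()).hexdigest(),
    }
```

Sorting keys before hashing makes the hash deterministic, so two reviewers looking at the same block can confirm they are approving identical content.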

Bounded auto-retry before human handoff

The AI co-reviewer attempts a finite number of retrieval and refinement passes, then yields to the human gate. No infinite loops, no silent failures.
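The bounded-retry pattern is straightforward to sketch; function names and the default of three passes are illustrative:

```python
def bounded_refine(attempt_fn, validate_fn, max_passes=3):
    """Run a finite number of refinement passes, then yield to the
    human gate. Returns (draft, status); never loops forever."""
    draft = None
    for _ in range(max_passes):
        draft = attempt_fn(draft)
        if validate_fn(draft):
            return draft, "validated"
    return draft, "human_handoff"
```

Whether validation succeeds or the budget runs out, the caller always gets an explicit status, so failures surface at the human gate rather than silently.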

Dry-run / approval gates on every governed write

No autonomous commits to governed knowledge. Ever. Reviewer is the final governor of what becomes operational truth.

Append-only audit logs

Model version, confidence, cited sources, reviewer identity, timestamp — captured per inference, never mutated. The regulator-audit-ready substrate.

Drift scanning

Freshness/decay signals + contradiction detection continuously surface entries whose context has shifted, so the governed knowledge base does not silently rot.
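A freshness/decay signal of this kind is often modeled as exponential decay; a sketch assuming a one-year half-life, which is an illustrative parameter only:

```python
from datetime import date

def freshness(last_validated, half_life_days=365, today=None):
    """Confidence decays by half every half_life_days since the last
    validation; entries below a threshold get surfaced for review."""
    today = today or date.today()
    age_days = (today - last_validated).days
    return 0.5 ** (age_days / half_life_days)
```

An entry validated today scores 1.0; one untouched for a year scores 0.5, and the lint loop can queue anything below a chosen cutoff.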

Third-Party Validation

Why our architectural choices look like 2026 best practice

The CIE design did not emerge in a vacuum. It matches what published 2026 research is converging on for safe HITL governance in regulated industrial workflows.

LangGraph contained inside the provider boundary

We deliberately keep LangGraph inside the CorrectionInferenceProvider rather than exposing it as the app-level orchestrator. This sidesteps the LangMem p95 latency trap (59.82s reported in published benchmarks vs Mem0's 0.200s) and keeps the reviewer experience snappy.

Operational metrics over synthetic benchmarks

We measure FirstPassRate, Validation Convergence, and MTTR lift on governed assets — not Codeforces-style scores. Published research (the "SWE-Bench Illusion" finding) documents 23–25% real-world collapse from synthetic benchmark scores.

Plan-then-Execute UX paradigm

The 2026 industry pattern for HITL governance: AI proposes a structured plan, the human pressure-tests and governs. Sovel's reviewer workflow is a Plan-then-Execute system by construction — every governed write goes through human approval.

Causal inference for AI-inferred edges

When the engine proposes a relationship in the knowledge graph, we hold it in a governed-edge proposal queue rather than auto-committing. Published methodology (DoWhy: Model → Identify → Estimate → Refute) provides the validation pattern; reviewer judgment provides the final governance.

Operations Skills

The atomic unit of governed knowledge

An Operations Skill is a governed, versioned knowledge object that pairs a specific failure mode and asset context with structured resolution guidance and provenance. It is the output of the Govern → Place step and the query target during operations.

Operations Skill structure
asset_id: "PUMP-RAS-04"
failure_mode: "Impeller clog — high-solids influent"
resolution_steps: […structured steps…]
author: "Ray Delgado (Sr. Tech, 14 yrs)"
reviewer: "J. Miller · APPROVE · 2025-11-14"
reason_code: "Field-verified over 9 recurrence events"
confidence: "HIGH"
version: "3"
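The record above can be sketched as an immutable typed object; a simplified stand-in for the real versioned, ontology-bound Skill, with the `bump` helper being a hypothetical illustration of versioning:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperationsSkill:
    """Typed sketch of the Operations Skill fields shown above."""
    asset_id: str
    failure_mode: str
    resolution_steps: tuple
    author: str
    reviewer: str
    reason_code: str
    confidence: str
    version: int

    def bump(self, new_steps):
        """Revisions produce a new version; prior versions stay intact."""
        return OperationsSkill(self.asset_id, self.failure_mode,
                               tuple(new_steps), self.author, self.reviewer,
                               self.reason_code, self.confidence,
                               self.version + 1)
```

Freezing the dataclass means an approved Skill cannot be edited in place: any change produces a new version, which matches the governance stance that placed knowledge is never silently overwritten.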
Maintenance Ontology

A typed graph of what you know

Operations Skills are placed into a Maintenance Ontology — a typed graph connecting assets, failure modes, experts, and governed knowledge. The ontology enables relationship queries ("what breaks if Tom retires?") that raw WO history cannot answer.

Asset → Expert → Skill linkage

Every skill placed in the ontology creates a provenance edge. The graph shows which experts hold knowledge for which assets — and how that coverage changes as people retire or are transferred.
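The "what breaks if Tom retires?" query reduces to finding assets whose only provenance edges point at the departing expert. A sketch over a bare edge list; the real ontology is a richer typed graph:

```python
from collections import defaultdict

def coverage_gaps(edges, departing_expert):
    """Return assets covered solely by the departing expert.
    edges: (expert, asset) provenance pairs."""
    experts_by_asset = defaultdict(set)
    for expert, asset in edges:
        experts_by_asset[asset].add(expert)
    return sorted(asset for asset, experts in experts_by_asset.items()
                  if experts == {departing_expert})
```

Assets with at least one other expert on record drop out of the result, leaving only the single-point-of-failure coverage that raw WO history cannot reveal.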

Contradiction detection at placement

When a new skill conflicts with an existing ontology node, the system flags the contradiction before placement — forcing reviewers to adjudicate rather than silently overwriting organizational memory.

Staleness and freshness scoring

Skills decay in confidence over time without revalidation. The lint loop surfaces stale entries for reviewer attention — preventing governed knowledge from drifting out of date silently.

See it run on your data.

Share a 6-month work order export from one asset area. We run the gap engine and return a ranked list of issues specific to your plant within 48 hours.