CHC-0 Technical Overview
CHC-0 is the first, deliberately small version of the Causal Halting Calculus.
Its goal is not to make every halting question decidable. Its goal is to isolate one structural pattern:
prediction about an execution -> result -> control of that same execution
Roles
CHC-0 separates four roles that the classical diagonal argument freely collapses:
Code: inert program description
Exec: live execution event
H: halting observation operator
HaltResult: causal token produced by H
The key restriction:
HaltResult is not ordinary data.
It cannot be passed into opaque code, treated as a normal value, or hidden inside another function. It can only be discarded or eliminated by a dedicated halting branch.
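The restriction can be sketched as a coding discipline. This is an illustrative Python sketch, not the repository's API: the class and function names (`HaltResult`, `halt_branch`) are hypothetical, and Python cannot actually enforce linearity, so the rules live in the docstring.

```python
class HaltResult:
    """Causal token produced by the halting observation operator H.

    Sketch of the CHC-0 restriction: an instance must not be stored,
    passed into opaque code, or wrapped in another value. It may only
    be discarded, or eliminated by the dedicated branch below.
    """
    def __init__(self, halts: bool):
        self._halts = halts

def halt_branch(token: HaltResult, on_halt, on_diverge):
    """The only legal eliminator: a dedicated halting branch."""
    return on_halt() if token._halts else on_diverge()

# Legal use: eliminate the token via the dedicated branch.
outcome = halt_branch(HaltResult(True), lambda: "halts", lambda: "diverges")
```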
Causal Graph
The checker builds a symbolic graph with two node types:
E(p,a): execution of program p on argument a
R(p,a): result of observing E(p,a)
Observation adds:
E(p,a) -> R(p,a)
Branching on the result inside the current execution adds:
R(p,a) -> E(current)
The forbidden shape is:
E(p,a) -> R(p,a) -> E(p,a)
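The two edge-adding rules and the forbidden shape can be sketched directly. The function names here are illustrative, not the checker's actual interface:

```python
# Nodes are tuples: ("E", p, a) for executions, ("R", p, a) for results.
edges = set()

def observe(p, a):
    """H observes E(p,a), producing R(p,a): adds E(p,a) -> R(p,a)."""
    edges.add((("E", p, a), ("R", p, a)))

def branch_on(p, a, current):
    """Branching on R(p,a) inside `current`: adds R(p,a) -> E(current)."""
    edges.add((("R", p, a), ("E",) + current))

def has_forbidden_cycle():
    """Detect the shape E(p,a) -> R(p,a) -> E(p,a)."""
    return any((("E", p, a), ("R", p, a)) in edges and
               (("R", p, a), ("E", p, a)) in edges
               for ((kind, p, a), _) in edges if kind == "E")

# The diagonal pattern: D observes its own run and branches on the result.
observe("D", "D")
branch_on("D", "D", ("D", "D"))
assert has_forbidden_cycle()
```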
Why Unification Matters
The graph may contain symbolic variables rather than concrete program names.
For example:
E(y,y) -> R(y,y)
R(y,y) -> E(D,y)
There is no literal syntactic cycle. But the path:
E(y,y) ->+ E(D,y)
becomes a cycle under:
y = D
So CHC-0 checks whether any reachable pair of E nodes has labels that are first-order unifiable.
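A minimal first-order unification routine over label tuples illustrates the check. This sketch treats lowercase strings as variables and omits the occurs-check and substitution chasing that a full unifier needs; the function name is hypothetical:

```python
def unify(s, t, subst=None):
    """Unify two labels (strings or tuples). Lowercase strings are
    variables, so 'y' unifies with 'D'. Returns a substitution dict,
    or None if the labels do not unify."""
    subst = dict(subst or {})
    stack = [(s, t)]
    while stack:
        a, b = stack.pop()
        a = subst.get(a, a)   # apply any binding already made
        b = subst.get(b, b)
        if a == b:
            continue
        if isinstance(a, str) and a.islower():
            subst[a] = b
        elif isinstance(b, str) and b.islower():
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            stack.extend(zip(a, b))
        else:
            return None       # clash of distinct constants
    return subst

# E(y,y) reaches E(D,y); the two E labels unify under y = D.
assert unify(("E", "y", "y"), ("E", "D", "y")) == {"y": "D"}
```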
Classification
CHC-0 uses two main outcomes:
causal_paradox: a unifiable prediction-feedback cycle exists
unproved: no causal paradox, but halting status is not proven
This distinction is the point of the framework: it separates a structural feedback problem from ordinary semantic difficulty.
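The two outcomes can be sketched as a classifier over reachable E-node pairs. Everything here is illustrative: `classify` is not the checker's real entry point, and `same_or_var` is a crude stand-in for genuine first-order unification:

```python
def classify(reachable_e_pairs, unifiable):
    """CHC-0's two main outcomes, in sketch form.

    reachable_e_pairs: (start, reached) label pairs where the second
    E node is reachable from the first through an R node.
    unifiable: any first-order unification predicate on labels.
    """
    for start, reached in reachable_e_pairs:
        if unifiable(start, reached):
            return "causal_paradox"   # prediction-feedback cycle exists
    return "unproved"                 # no paradox, halting not proven

# Crude unifiability test: components match or one side is a variable
# (lowercase). A real checker would use full unification instead.
same_or_var = lambda a, b: all(x == y or x.islower() or y.islower()
                               for x, y in zip(a, b))

# Diagonal case: E(y,y) reaches E(D,y), unifiable under y = D.
assert classify([(("E", "y", "y"), ("E", "D", "y"))], same_or_var) == "causal_paradox"
# No feedback pairs at all: nothing can unify.
assert classify([], same_or_var) == "unproved"
```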
Diagonalization
The classical diagonal program has this form:
D(y) =
if H(y,y) then loop else halt
Running D(D) creates:
E(D,D) -> R(D,D) -> E(D,D)
CHC-0 rejects this as a causal type error.
That does not refute Turing. It says that the classical diagonal construction requires a prediction-feedback loop, and this calculus refuses that loop as a valid construction.
Semantic Undecidability Still Survives
Now consider an H-free object-language program:
Q_e() =
simulate e(e)
if e(e) halts, halt
else diverge
Q_e produces no CHC-0 halting observation, so its causal graph is empty. It is causally valid.
But deciding Q_e for all e would decide the classical Halting Problem.
So the hard semantic cases remain:
D(D) -> causal_paradox
Q_e -> valid_acyclic but unproved
Checker
The repository includes a small Python checker:
python scripts/chc_check.py examples/diagonal.graph
python scripts/chc_check.py examples/qe-valid-acyclic.chc
In v2.0, the same checker also supports CHC-1 recursive effect summaries and CHC-2 higher-order effect annotations. See CHC-1 and CHC-2.
The portable skill package also includes the checker:
cd skills\causal-halting
python scripts\chc_check.py examples\diagonal.graph
Practical Verification Pipeline
The practical tooling uses CHC-0 as a boundary check for agent and workflow designs.
Natural language is never classified directly by scripts. The LLM first writes an explicit DesignIR v1.0 artifact with these fields:
executions
observations
controls
uncertain
semantic_evidence
The deterministic analyzer then classifies that structured artifact.
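A sketch of what such an analysis might look like. The field names come from the list above, but the dict shape, record keys, and `analyze` function are assumptions for illustration, not the actual DesignIR schema:

```python
# Hypothetical DesignIR artifact: run-1 observes itself, and the
# resulting token r-1 controls run-1.
design_ir = {
    "executions":   ["run-1"],
    "observations": [{"observer": "run-1", "target": "run-1", "result": "r-1"}],
    "controls":     [{"result": "r-1", "controls": "run-1"}],
    "uncertain":    [],
    "semantic_evidence": [],
}

def analyze(ir):
    """Deterministic classification sketch: a result produced by
    observing a run must not control that same run."""
    observed_run = {o["result"]: o["target"] for o in ir["observations"]}
    for c in ir["controls"]:
        if observed_run.get(c["result"]) == c["controls"]:
            return "causal_paradox"
    return "unproved"

assert analyze(design_ir) == "causal_paradox"
```

Observing one run and using the result to control a *different* run is the legal pattern and classifies as unproved.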
For runtime evidence, traces use five event types:
exec_start
observe
consume
control_exec
exec_end
The same rule applies:
observe(run-1) -> result r-1 -> consume by run-1 before exec_end(run-1)
is a causal_paradox.
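The runtime rule can be sketched as a single pass over a flat event list. The event dict shape is illustrative; only the five event types listed above are assumed:

```python
def check_trace(events):
    """Flag a causal_paradox when a result produced by observing a run
    is consumed by that same run before its exec_end."""
    produced_by_observing = {}   # result id -> run id it observed
    ended = set()
    for ev in events:
        if ev["type"] == "observe":
            produced_by_observing[ev["result"]] = ev["run"]
        elif ev["type"] == "consume":
            observed = produced_by_observing.get(ev["result"])
            if observed == ev["run"] and observed not in ended:
                return "causal_paradox"
        elif ev["type"] == "exec_end":
            ended.add(ev["run"])
        # exec_start and control_exec are irrelevant to this one rule.
    return "unproved"

trace = [
    {"type": "exec_start", "run": "run-1"},
    {"type": "observe",    "run": "run-1", "result": "r-1"},
    {"type": "consume",    "run": "run-1", "result": "r-1"},
    {"type": "exec_end",   "run": "run-1"},
]
assert check_trace(trace) == "causal_paradox"
```

Consuming r-1 after exec_end(run-1), or in a different run, does not trip the rule.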
OpenTelemetry, LangGraph, and Temporal/Airflow adapters are available, but
they only consume explicit causal fields. They do not infer meaning from span
names, node names, DAG labels, or prose. Adapter output preserves source IDs
and optional metadata such as event_source, timestamp, span_id,
parent_id, and confidence.
Repairs emit proof obligations. The core obligation is:
result_not_consumed_by_observed_execution_before_end
In plain language: a result produced by observing a run must not be consumed by that same run before it ends.
Full Reference
The full formal note is in the repository: