Runcible

Revolutionary Intelligence for AI

Plain Language Explanation

We’ve built the first system that makes AI computable, auditable, and decidable. Using first principles, operational protocols, and targeted training, it transforms statistical language models into systems that deliver true, false, or undecidable judgments wherever criteria exist. Businesses, governments, and researchers can rely on its outputs with the same confidence as legal or scientific standards, while edge cases are addressed with additional training. This bridges the gap between plausible and provable, creating the foundation for AI you can trust at scale.

For Experts

Runcible Intelligence is not another heuristic or alignment layer. It is a system of first principles, operational protocols, and training modules that imposes closure criteria on language models, constraining the path through the world model to testable and testifiable prose.

Closure here means decidability independent of context: every output must satisfy a testable protocol that returns true, false, or undecidable, with explicit failure modes (a code sketch follows the list below). The system works by:

  1. Narrowing the search space through operational language and formal criteria.
  2. Constraining outputs to satisfy decidability requirements rather than statistical plausibility.
  3. Auditing the reasoning chain so every step can be inspected, replicated, or challenged.
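
To make the three-valued protocol concrete, here is a minimal Python sketch. It is illustrative only: the names Verdict, Criterion, Judgment, and evaluate_claim are assumptions for this example rather than our production protocol, but they show how every output resolves to true, false, or undecidable with an explicit failure mode.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Callable, Optional

    class Verdict(Enum):
        TRUE = "true"
        FALSE = "false"
        UNDECIDABLE = "undecidable"

    @dataclass
    class Criterion:
        name: str
        test: Callable[[str], Optional[bool]]  # None signals "cannot decide"

    @dataclass
    class Judgment:
        verdict: Verdict
        failure_mode: Optional[str] = None  # explicit failure mode when not true

    def evaluate_claim(claim: str, criteria: list[Criterion]) -> Judgment:
        """Apply each criterion in order; an undecided test halts with a named failure mode."""
        for criterion in criteria:
            result = criterion.test(claim)
            if result is None:
                return Judgment(Verdict.UNDECIDABLE, f"criterion '{criterion.name}' could not decide")
            if result is False:
                return Judgment(Verdict.FALSE, f"criterion '{criterion.name}' failed")
        return Judgment(Verdict.TRUE)

    # Hypothetical usage: a single criterion that can only decide when a date is present.
    criteria = [Criterion("mentions-a-date", lambda claim: True if any(ch.isdigit() for ch in claim) else None)]
    print(evaluate_claim("The filing was received on 2024-03-01.", criteria))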

We do not claim omniscience. Edge cases remain, as in law or science, and require additional training modules where knowledge is scarce, undecidable, or absent. But within the scope of our protocols, the system consistently delivers outputs meeting formal closure standards rather than emergent approximation.

In short, we have made the reasoning process computable and auditable, not just the language output. That is why we say it achieves closure: it enforces decidability wherever the criteria exist and surfaces undecidability where they do not.

1. What Our System Actually Achieves

Our methodology + first principles + protocols + Socratic training modules create closure criteria that can:

  • Narrow the search space for answers by enforcing operational language and decidability tests.
  • Constrain outputs to satisfy those closure criteria rather than open-ended speculation.
  • Preserve auditability because each step in the reasoning chain can be inspected for adherence to the protocols.
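
A minimal sketch of what that auditable reasoning chain could look like, assuming the hypothetical names ReasoningStep and audit: each step records the claim asserted, the protocol rule invoked, and the evidence relied on, so any single step can be inspected, replicated, or challenged.

    from dataclasses import dataclass

    @dataclass
    class ReasoningStep:
        claim: str          # the statement asserted at this step
        protocol_rule: str  # the principle or rule invoked
        evidence: str       # what the step rests on, so it can be replicated or challenged

    def audit(chain: list[ReasoningStep]) -> list[str]:
        """Return a human-readable trace; a reviewer can inspect or dispute any single step."""
        return [f"step {i}: '{s.claim}' via {s.protocol_rule} (evidence: {s.evidence})"
                for i, s in enumerate(chain, start=1)]

    # Hypothetical two-step chain.
    chain = [
        ReasoningStep("The contract defines 'delivery' operationally", "operational-language rule", "clause 4.2"),
        ReasoningStep("Delivery occurred before the deadline", "decidability test", "timestamped receipt"),
    ]
    for line in audit(chain):
        print(line)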

This combination works as designed because:

  1. The principles and protocols are operational—they convert vague claims into structured, testable statements (see the sketch after this list).
  2. The closure criteria are decidable—true/false/undecidable judgments with explicit failure modes remove “fuzzy” reasoning.
  3. The training modules handle edge cases—they provide additional procedural constraints where general reasoning would otherwise fail.
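
As a toy illustration of that conversion, the sketch below restates a vague claim as a structured, testable statement; OperationalClaim and the uptime figures are hypothetical, chosen only to show how the decidability test becomes a simple comparison.

    from dataclasses import dataclass

    @dataclass
    class OperationalClaim:
        statement: str    # the structured restatement of the vague claim
        measurement: str  # what is actually measured
        threshold: float  # the decidable cutoff
        unit: str

    vague = "The service is reliable."
    operational = OperationalClaim(
        statement="Monthly uptime meets or exceeds the stated threshold",
        measurement="uptime over the last calendar month",
        threshold=99.9,
        unit="percent",
    )

    observed_uptime = 99.95  # hypothetical observation
    verdict = "true" if observed_uptime >= operational.threshold else "false"
    print(f"'{vague}' -> '{operational.statement}': {verdict}")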

2. Where the Guarantee Holds

  • Within the Scope of Provided Protocols: If the protocols and criteria are explicitly defined (e.g., our ten-step decidability protocol), the system will adhere to them reliably (a scope-checking sketch follows this list).
  • With Sufficient Specificity: Underspecified prompts can allow generalization tendencies to creep back in. Specificity prevents this.
  • When Edge-Case Training Exists: Bespoke training is required for corner cases, for the same reason law relies on precedent to handle rare events.
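
A hedged sketch of the scope check, assuming a hypothetical protocol_registry and within_scope helper: questions that no explicit protocol covers surface as undecidable rather than being answered by generic inference.

    from typing import Optional

    # Assumed, illustrative registry mapping protocols to the topics they cover.
    protocol_registry = {
        "contract-review": ["delivery", "breach", "deadline"],
        "lab-safety": ["reagent", "exposure", "storage"],
    }

    def within_scope(question: str) -> Optional[str]:
        """Return the protocol whose criteria cover this question, or None when none do."""
        q = question.lower()
        for protocol, keywords in protocol_registry.items():
            if any(keyword in q for keyword in keywords):
                return protocol
        return None

    question = "Was the breach cured before the deadline?"
    protocol = within_scope(question)
    if protocol is None:
        print("undecidable: no explicit protocol covers this question; edge-case training required")
    else:
        print(f"evaluating under protocol '{protocol}'")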

The critical point: we are not over-claiming when we say our system enforces closure on any question within the scope of its protocols and training. Our research and protocols cover all fields, though not yet every field at maximum depth. The system learns on its own and can be taught new protocols and new insights until convergence eliminates all gaps except undiscovered, discipline-specific knowledge.


3. Where Guarantees Weaken

  • Underspecified Prompts: If the user fails to invoke closure criteria, the system reverts to generic inference. It will attempt to answer general questions when it cannot tell that you are requesting its analysis; you can still ask it for a synopsis of your favorite show, or what you can cook with what is in the refrigerator.
  • Novel Edge Cases: No system escapes this limit. Law, science, and computation all face incompleteness at the margins. Our training modules mitigate this but cannot eliminate it; our continual learning, the Truth Corpus Layer, ensures these gaps are closed over time.
  • Model Drift: LLMs produce probabilistic outputs, and without explicit protocols responses degrade toward statistical generality. We regularly re-inject constraints and the system resists drift, but it can still drift as it approaches the context limit of the LLM, though it is difficult to make it do so (a re-injection sketch follows this list).
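
The sketch below illustrates the re-injection idea under assumed token counts and thresholds; estimate_tokens and maybe_reinject are hypothetical helpers, and a real deployment would use the model's own tokenizer and window size.

    # Assumed constants for illustration only.
    PROTOCOL_PREAMBLE = "Answer only true, false, or undecidable, and cite the criterion applied."
    CONTEXT_LIMIT_TOKENS = 8000
    REINJECT_THRESHOLD = 0.8  # re-assert constraints once 80% of the window is used

    def estimate_tokens(messages: list[str]) -> int:
        # Crude proxy of roughly 4 characters per token; a real system would use the model's tokenizer.
        return sum(len(m) for m in messages) // 4

    def maybe_reinject(messages: list[str]) -> list[str]:
        """Re-append the protocol constraints when usage nears the window, keeping them in view."""
        if estimate_tokens(messages) > REINJECT_THRESHOLD * CONTEXT_LIMIT_TOKENS:
            messages.append(PROTOCOL_PREAMBLE)
        return messages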

This is why our system insists on explicit criteria and testable outputs rather than relying on emergent alignment, and why it attempts, wherever possible, to convert all user inputs into explicit terms.


4. Truthful Claim We Can Make

We can safely claim the following without risk of overstatement:

“Given its explicit closure criteria in the form of first principles, operational protocols, and trained modules, our system consistently produces outputs that satisfy decidability requirements independent of context, except in edge cases where additional training is required. This is no different from law or science requiring precedent or extended research at the boundaries of knowledge.”

This frames our achievement as guaranteed within scope while acknowledging the universal problem of incompleteness at the margins.


5. Why This Will Stand in Front of Experts

  • We are not claiming omniscience: we claim only closure where criteria exist, clarity where they do not, and an explanation of why.
  • We mirror scientific and legal norms: All systems admit undecidability at the edge; our framework merely makes this explicit.
  • We align with formal methods: Protocols + operational language + decidability tests are recognizable to computer scientists, logicians, and AI researchers as valid mechanisms for constraint and auditability.