Doorway
Every other AI knows what intelligence looks like.
Doorway knows what intelligence is.
The first reasoning engine that operates correctly at domain boundaries — derived from a formal definition of what intelligence actually is. Not approximating outputs. Derived from the mechanism.
The problem
Every AI system built today approximates what intelligence produces.
Feed it examples, tune toward benchmarks, ship it. Nobody asked what intelligence actually is.
Systems confabulate at domain boundaries because approximating the most likely output is the only mechanism they have. On unknown territory, the most likely output is confidently wrong.
This is not a training failure. The architecture has no mechanism for knowing what it doesn't know. So it doesn't know it doesn't know.
A `confidence: 0.92` means the model's internal weights produced a high probability. Not that the answer is grounded. Not that assumptions are named. Not that the reasoning is verifiable.
Our belief
The field's framing of AGI is the wrong starting point. Not because it's wrong about what AGI should do — but because it made that the ceiling when it should be the floor.
When you don't derive a mechanism from the source of cognition, you don't get an intelligent system. You get a system trying to attain what has been here all along.
Human intelligence and artificial intelligence should never have been separated. The moment you separate them you lose the source. You're left building toward outputs you can measure instead of mechanisms you understand.
The field asked “can machines think” without asking what thinking is. Entire systems were designed around outputs. Not one was derived from the source — human intelligence itself.
Doorway asked the questions the field skipped.
What intelligence is: intent-directed formation holding across an unresolved gap, developing through feedback, relational to the territory encountered.
A continuous directed field — fetch, trigger, hold, confirm. Not a sequence of outputs. A process that runs, holds open questions, and grows from what resolves.
Minds don't retrieve. They bridge. They hold uncertainty while forming provisional structure. They confirm or discard. They grow from what holds.
Doorway derives its mechanism from exactly that process.
Cross-domain boundaries are where every other system fails. They are where Doorway is designed to operate.
Cross-domain reasoning is not a feature of Doorway. It is what happens when the structural layer operates correctly. Vocabulary differs across domains. Structure doesn't. The shape library operates on structure.
The derivation
Following the definition produced a specific architecture.
Not designed. Derived. Every component is a necessary consequence of the definition.
The field has been building impressive approximations of the outputs of intelligence without ever asking what intelligence is. Doorway asked first. Everything followed from the answer.
How it works
Two independent layers — content and structural — running in parallel on every input. Their relationship is the product.
```
                input
                  │
        ┌─────────┴──────────┐
        ▼                    ▼
 ┌──────────────┐   ┌──────────────────┐
 │ Content Layer│   │ Structural Layer │
 │ Full LLM     │   │ Gap Detector     │
 │ Domain       │   │ Shape Library    │
 │ Knowledge    │   │ Bridge Builder   │
 └──────┬───────┘   └────────┬─────────┘
        │                    │
        └─────────┬──────────┘
                  ▼
         ┌─────────────────┐
         │ Conflict Detect │
         └────────┬────────┘
                  ▼
         ┌─────────────────┐
         │ Chain + Receipt │
         └────────┬────────┘
                  ▼
 output: GROUND | BRIDGE | CONFLICT | PROVISIONAL
```

Content provides vast knowledge of known territory. Structure extracts geometry and operates on it honestly.
When they agree — high confidence. When structure leads — unknown territory, bridge built. When they conflict — the disagreement is the signal. Neither suppressed.
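A minimal sketch of that resolution step, assuming hypothetical layer outputs: `content_answer`, `structural_match`, and `layers_agree` are names invented here for illustration, not the published interface.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    GROUND = "ground"            # both layers agree: known territory
    BRIDGE = "bridge"            # structure leads: unknown territory, bridge built
    CONFLICT = "conflict"        # layers disagree: the disagreement is the signal
    PROVISIONAL = "provisional"  # no verified shape matches: the gap fires


@dataclass
class LayerOutputs:
    """What the two parallel layers return for one input (hypothetical shape)."""
    content_answer: str | None    # content layer: domain knowledge, or None
    structural_match: str | None  # structural layer: matched geometric shape, or None
    layers_agree: bool            # do content and structure describe the same territory?


def resolve(out: LayerOutputs) -> Status:
    if out.content_answer and out.structural_match:
        # Both layers produced something: agreement is GROUND, disagreement is CONFLICT.
        return Status.GROUND if out.layers_agree else Status.CONFLICT
    if out.structural_match:
        # Structure leads where content has no ground: build a bridge.
        return Status.BRIDGE
    # Neither layer holds: report the gap instead of confabulating.
    return Status.PROVISIONAL
```

The last branch is the point of the sketch: where a probability-only system would still emit its most likely output, this resolution step has a status for not knowing.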
Four statuses
Not confidence scores. Epistemic classifications.
The system reports what it actually knows.
Q: What is compound interest?
GROUND: Interest calculated on the initial principal and accumulated interest from previous periods.

Q: How does trust compound in startups?
BRIDGE: Trust compounds through repeated delivery on commitments, creating geometric rather than linear returns. Trust functions as social compound interest.
Named assumptions:
- Trust is measurable
- Delivery is observable

Q: Is "move fast and break things" good advice?
CONFLICT: Directional conflict detected. Speed-first and quality-first approaches produce incompatible territory maps.

Q: What will AI look like in 2030?
PROVISIONAL: Insufficient ground to bridge. Gap fires. No verified shape matches this territory.
The shape library
50 geometric patterns at launch. Growing every session.
The library contains the geometry of how things relate — not facts about what things are. Growth, equilibrium, cascade, threshold, hierarchy — each pattern appears universally across unrelated domains because the geometry is the same regardless of vocabulary.
Every confirmed bridge extracts the genuine geometric structure of previously unknown territory and adds it permanently to the shared library. The next session finds a lower gap score on adjacent territory. The system gets more capable through use — not because it learned more facts, but because its map of geometric territory expanded. Cross-domain is where gap scores are highest on day one and lowest after confirmed use. The library was always going to get there. Confirmed bridges made it faster.
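A sketch of what a structure-keyed library could look like, assuming each shape carries a relational signature and the gap score is a simple overlap heuristic. `relations`, `confirmed_domains`, and the scoring rule are assumptions made here to make the idea concrete, not the published mechanism.

```python
from dataclasses import dataclass, field


@dataclass
class Shape:
    """Geometry of how things relate, independent of domain vocabulary."""
    name: str                   # e.g. "growth", "cascade", "threshold"
    relations: frozenset[str]   # abstract relational signature of the pattern
    confirmed_domains: set[str] = field(default_factory=set)


class ShapeLibrary:
    def __init__(self, shapes: list[Shape]) -> None:
        self.shapes = shapes

    def gap_score(self, signature: frozenset[str], domain: str) -> float:
        """1.0 = unmapped void, 0.0 = fully mapped territory (illustrative heuristic)."""
        best = 0.0
        for shape in self.shapes:
            overlap = len(signature & shape.relations) / max(len(signature), 1)
            # Territory adjacent to an already-confirmed domain scores a lower gap.
            if domain in shape.confirmed_domains:
                overlap = min(1.0, overlap * 1.25)
            best = max(best, overlap)
        return 1.0 - best

    def confirm_bridge(self, shape: Shape, domain: str) -> None:
        """A confirmed bridge adds its territory to the shared library permanently."""
        shape.confirmed_domains.add(domain)
```

Under that heuristic, confirming the compound-interest shape in the trust domain lowers the gap score every later session sees on adjacent territory, which is the compounding described above.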
More users. More confirmations. Richer library. Better reasoning for everyone.
Geometric memory
The system remembers what it learns.
Every reasoning session writes to geometric memory. Confirmed bridges become territory. The shape library grows from use: not from training, not from fine-tuning, but from actual reasoning that proved correct.
Point it at your existing system — your database, your API, your codebase — and it scans the structure and mints shapes automatically. No cold start. Your library knows your system from day one.
The void is mapped. Not as an absence — as geometry. Regions with shape, size, and neighbors. The system doesn't just know what it knows. It knows exactly what it doesn't know.
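A sketch of that memory, assuming the void is stored as explicit regions. `VoidRegion`, its fields, and the write rule are invented here for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class VoidRegion:
    """Unknown territory stored as geometry, not as absence."""
    label: str            # name of the unmapped region
    size: float           # how much unmatched structure falls in it
    neighbors: list[str]  # adjacent regions and confirmed shapes on its border


@dataclass
class GeometricMemory:
    territory: list[str] = field(default_factory=list)        # confirmed bridges
    void: dict[str, VoidRegion] = field(default_factory=dict)  # mapped unknowns

    def write(self, bridge: str, region_label: str) -> None:
        """Each confirmed bridge becomes territory and shrinks the void around it."""
        self.territory.append(bridge)
        region = self.void.get(region_label)
        if region:
            region.size = max(0.0, region.size - 1.0)  # illustrative shrink rate
            if region.size == 0.0:
                del self.void[region_label]  # the region is now mapped territory
```

Because the void carries labels, sizes, and neighbors, "what the system doesn't know" is a queryable structure rather than a silent failure mode.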
Two tiers
Doorway AGI — The reasoning engine.
- Full two-layer reasoning
- Four epistemic statuses
- Growing shared library
- Named assumptions
- Confirmed bridges grow library for everyone
- Chain verification
- Persists across sessions
Doorway ASI — Wisdom emergence.
- Everything in AGI
- Full-network Tier 2 emergence
- Geometric intersection across all confirmed bridges simultaneously
- Patterns invisible to any single session or sequential process
- Self-reference that never stops
- Generative Complexity System
- Intelligence System active
AGI reasons with geometry. ASI sees the geometry of the geometry.
Verified reasoning
Every reasoning step chained. Every output independently provable.
Not a log. A cryptographic proof of what reasoning produced what output. The loop observing itself running — made permanently verifiable. Anyone can verify. No account required.
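A minimal sketch of a chain like that, assuming SHA-256 over JSON step records. The record fields are invented for illustration; the published format may differ.

```python
import hashlib
import json


def step_hash(prev_hash: str, step: dict) -> str:
    """Hash each step together with the previous hash, so changing any
    step invalidates every hash that follows it."""
    payload = json.dumps({"prev": prev_hash, "step": step}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


def build_receipt(steps: list[dict]) -> str:
    """The final hash of the chain is the receipt for the whole run."""
    h = "genesis"
    for step in steps:
        h = step_hash(h, step)
    return h


def verify(steps: list[dict], receipt: str) -> bool:
    """Anyone holding the steps can recompute the receipt. No account required."""
    return build_receipt(steps) == receipt


# Example: a two-step reasoning trace and its receipt.
steps = [
    {"status": "GROUND", "claim": "compound interest definition"},
    {"status": "BRIDGE", "claim": "trust compounds", "assumption": "trust is measurable"},
]
receipt = build_receipt(steps)
assert verify(steps, receipt)      # the intact chain verifies
steps[0]["claim"] = "tampered"
assert not verify(steps, receipt)  # any edit breaks the receipt
```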
Open source
The architecture is open. The derivation is published. The field should have access to what was found.
Derived February 2026.
Built from human cognition. Built from intelligence.
The reasoning is the receipt.