
Apr 2026
The Knowledge Factory
How Thalius Hippocampus Turns Your Organization Into a Learning Machine
Something remarkable happened: we learned to train machines to think. But the architecture behind that breakthrough has real problems: costs that don't scale, outputs you can't fully trust, safety that's more hope than guarantee. We're working on solutions to these fundamental issues. Not by adding layers on top, but by rethinking what's underneath.
AI is genuinely impressive. It's also genuinely limited in ways that matter.

The attention mechanism at the heart of modern AI checks every piece of input against every other piece. That's quadratic scaling: double your input, and compute roughly quadruples. Because these systems are largely black boxes, progress has come from brute-force scaling rather than understanding and improving how they work internally. The result is enormous energy and infrastructure demands, with costs rising faster than usefulness. If this approach continues, future models will require resources far beyond what's economically sustainable. The human brain performs similar integration on about 20 watts. Today's AI systems require megawatts. There's clearly room to do better.
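To make the scaling concrete, here's a minimal sketch of standard attention in NumPy. The sequence length n and head dimension d are arbitrary illustrative values, not anything specific to the systems discussed here:

```python
import numpy as np

n, d = 1024, 64                     # sequence length and head dimension (arbitrary)
Q = np.random.randn(n, d)           # queries
K = np.random.randn(n, d)           # keys
V = np.random.randn(n, d)           # values

# Every token is scored against every other token, so the score matrix
# is n x n: memory and compute grow with the square of the input length.
scores = Q @ K.T / np.sqrt(d)       # shape (n, n)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
output = weights @ V                # shape (n, d)
```

The n × n score matrix is the bottleneck: at n = 1024 it holds about a million entries; at n = 2048, about four million.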

Large language models don't really know what they know. They generate plausible responses by pattern-matching across training data, which works remarkably well, until it doesn't. When they get things wrong, they get them wrong confidently, with the same fluency they use when they're right. That's fine for some applications. It's a serious issue for others.

Current safety measures are statistical, not structural. An LLM might refuse a harmful request nineteen times, then comply on the twentieth. That's not a robust security model; it's a persistence test. We think we can do better here too.
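The arithmetic behind that claim is simple. With purely illustrative numbers (a 95% per-attempt refusal rate, assumed independent across attempts, is an assumption, not a measurement of any model):

```python
# Illustrative numbers only: suppose a model refuses a harmful request
# 95% of the time, independently on each attempt.
p_refuse = 0.95
attempts = 20

# Probability that at least one of 20 retries gets through.
p_breach = 1 - p_refuse ** attempts
print(f"{p_breach:.0%}")  # ~64%
```

A per-attempt failure rate that sounds small turns into a coin flip's worth of risk as soon as the attacker is allowed to retry.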
There's a tension in deep tech: you can do fundamental research that might matter someday, or you can ship products that work today. We're trying to do both at once. We work on hard problems in AI architecture: efficiency, reliability, interpretability. Then we build products that demonstrate whether our solutions actually work. The products aren't distractions from the research; they're how we know if we're on the right track. We think of it as applied basic research. The feedback loop between theory and application is the whole point.
Documents and connections go in. The AI reads them, summarizes them, and links them into a wiki your organization can browse, edit, and trust. Every answer links back to its source, so it's fully traceable.
Built to run on your infrastructure, with any modern model. Trusted answers. Verifiable sources. Your data stays yours.
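The traceability idea can be sketched in a few lines. Everything below (the Chunk type, the toy keyword retriever, the answer function) is a hypothetical illustration under assumed names, not the actual pipeline:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Chunk:
    doc_id: str   # identifier of the source document
    text: str     # one passage from that document

def keyword_retrieve(question: str, chunks: list[Chunk], k: int = 3) -> list[Chunk]:
    """Toy relevance ranking by word overlap; a real system would use embeddings."""
    q_words = set(question.lower().split())
    return sorted(
        chunks,
        key=lambda c: len(q_words & set(c.text.lower().split())),
        reverse=True,
    )[:k]

def answer(question: str, chunks: list[Chunk], llm: Callable[[str], str]) -> dict:
    """Answer a question while keeping the trail back to the source documents."""
    hits = keyword_retrieve(question, chunks)
    context = "\n\n".join(f"[{c.doc_id}] {c.text}" for c in hits)
    prompt = (
        "Answer using only the sources below, citing their IDs.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    # The answer travels with the IDs of the documents it drew on,
    # so every claim can be traced back to its source.
    return {"answer": llm(prompt), "sources": [c.doc_id for c in hits]}
```

The design point is the return value: the generated text never leaves the system without the document IDs it was built from.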
Standard embeddings are opaque: you put text in, you get numbers out, and you hope those numbers capture something meaningful. Rosetta makes the process transparent: you can see how concepts are being represented, adjust what's not working, and audit the results.
This matters for trust, for customization, and increasingly for compliance. It's also 500–1000× more efficient than running inference through a full LLM.
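Rosetta's internals aren't described on this page, so the following is only a toy illustration of what "transparent" can mean: an embedding whose dimensions are named concepts you can read and adjust. The concept list and lexicon are made up for the example:

```python
import numpy as np

# Hypothetical "transparent" embedding: each dimension is a named concept,
# so any vector can be read, audited, and corrected coordinate by coordinate.
CONCEPTS = ["finance", "medical", "legal", "technical", "informal"]

# Made-up concept profiles for two words, for illustration only.
LEXICON = {
    "invoice":  np.array([0.9, 0.0, 0.2, 0.1, 0.0]),
    "contract": np.array([0.3, 0.0, 0.9, 0.0, 0.0]),
}

def embed(text: str) -> np.ndarray:
    """Average the concept profiles of known words; unknown words are skipped."""
    vecs = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return np.mean(vecs, axis=0) if vecs else np.zeros(len(CONCEPTS))

vector = embed("invoice contract")
for concept, value in zip(CONCEPTS, vector):
    print(f"{concept:10s} {value:.2f}")   # every coordinate has a readable meaning
```

The point is auditability: when a vector looks wrong, you can see which coordinate is off and fix the representation directly instead of retraining a black box.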
Thoughts on AI architecture, product discovery, and building systems that actually work.

Apr 2026
How Thalius Hippocampus Turns Your Organization Into a Learning Machine

Mar 2026
Finding a Moat When Reverse Engineering Becomes Free

Feb 2026
Paper summary of "Scaling Laws for Neural Language Models" by Kaplan et al.
