Building AI that actually works. Research that becomes products. Products that prove the research.
Stockholm, Sweden

© 2026 Thalius AI AB. All rights reserved.

We're building AI that actually works.

Something remarkable happened: we learned to train machines to think. But the architecture behind that breakthrough has real problems: costs that don't scale, outputs you can't fully trust, safety that's more hope than guarantee. We're working on solutions to these fundamental issues. Not by adding layers on top, but by rethinking what's underneath.

Here's what we keep running into

AI is genuinely impressive. It's also genuinely limited in ways that matter.

Efficiency

The efficiency problem

The attention mechanism at the heart of modern AI checks every piece of input against every other piece. That's quadratic scaling: double your input, and the compute roughly quadruples. Because these systems are largely black boxes, progress has come from brute-force scaling rather than understanding and improving how they work internally. The result is enormous energy and infrastructure demands, with costs rising faster than usefulness. If this approach continues, future models would require resources far beyond what's economically sustainable. The human brain performs similar integration on about 20 watts. Today's AI systems require megawatts. There's clearly room to do better.
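A toy sketch makes the scaling concrete. This is illustrative only (not Thalius code): it counts the pairwise comparisons a full self-attention layer performs as a function of input length.

```python
# Toy illustration: why full self-attention is quadratic.
# Every token is compared against every other token, so the
# number of pairwise comparisons grows as n * n.

def attention_comparisons(n_tokens: int) -> int:
    """Pairwise comparisons a full self-attention layer performs."""
    return n_tokens * n_tokens

short = attention_comparisons(1_000)  # 1,000,000 comparisons
long = attention_comparisons(2_000)   # 4,000,000 comparisons

# Doubling the input quadruples the work:
print(long / short)  # 4.0
```

At a 10× longer context, the same arithmetic gives 100× the comparisons, which is why brute-force scaling gets expensive so quickly.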

Reliability

The reliability problem

Large language models don't really know what they know. They generate plausible responses by pattern-matching across training data, which works remarkably well, until it doesn't. When they get things wrong, they get them wrong confidently, with the same fluency they use when they're right. That's fine for some applications. It's a serious issue for others.

Safety

The safety problem

Current safety measures are statistical, not structural. An LLM might refuse a harmful request nineteen times, then comply on the twentieth. That's not a robust security model; it's a persistence test. We think we can do better here too.

There's a tension in deep tech: you can do fundamental research that might matter someday, or you can ship products that work today. We're trying to do both at once. We work on hard problems in AI architecture: efficiency, reliability, interpretability. Then we build products that demonstrate whether our solutions actually work. The products aren't distractions from the research; they're how we know if we're on the right track. We think of it as applied basic research. The feedback loop between theory and application is the whole point.

What we're working on right now

Hippocampus

An external memory layer for organizations, built and maintained by AI.

Documents and connections go in. The AI reads them, summarizes them, and links them into a wiki your organization can browse, edit, and trust. Every answer links back to its source, fully traceable.

Built to run on your infrastructure, with any modern model. Trusted answers. Verifiable sources. Your data stays yours.
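One way to picture "every answer links back to its source" is as a data structure. The names below are purely illustrative, not the Hippocampus API: a wiki entry that keeps a traceable reference from its summary back to the documents it came from.

```python
# Hypothetical sketch (illustrative names, not the Hippocampus API):
# a wiki entry that stays traceable to its source documents.

from dataclasses import dataclass, field

@dataclass
class SourceRef:
    document: str  # the original document a claim came from
    location: str  # e.g. a page, section, or paragraph identifier

@dataclass
class WikiEntry:
    title: str
    summary: str
    sources: list[SourceRef] = field(default_factory=list)

    def is_traceable(self) -> bool:
        # An entry is only trustworthy if it is backed by sources.
        return len(self.sources) > 0

entry = WikiEntry(
    title="Onboarding process",
    summary="New hires complete security training in week one.",
    sources=[SourceRef(document="hr-handbook.pdf", location="section 3.2")],
)
print(entry.is_traceable())  # True
```

The design point is that traceability is a property of the data model, not a feature bolted on afterward: an entry without sources is visibly untrusted.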

Thalius Vibe Search

Semantic search for e-commerce. Our first shipped product.

This is where we prove our research works in practice. It combines keyword matching, semantic understanding, and interactive exploration: search that understands what you mean, not just what you typed.
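The combination of keyword and semantic signals can be sketched as a blended score. This is an illustrative toy, not the Vibe Search implementation; the `semantic` value stands in for an embedding similarity a real system would compute.

```python
# Illustrative sketch (not the Vibe Search implementation): blending a
# keyword score with a semantic-similarity score into one ranking signal.

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query words that literally appear in the document."""
    words = query.lower().split()
    doc_words = set(doc.lower().split())
    return sum(w in doc_words for w in words) / len(words)

def hybrid_score(query: str, doc: str, semantic: float, alpha: float = 0.5) -> float:
    """Blend exact keyword matching with a (precomputed) semantic similarity.

    `semantic` stands in for an embedding cosine similarity in [0, 1];
    `alpha` weights exact matching against meaning.
    """
    return alpha * keyword_score(query, doc) + (1 - alpha) * semantic

# "running shoes" vs. a product titled "trainers for jogging":
# zero keyword overlap, but high semantic similarity still surfaces it.
print(hybrid_score("running shoes", "trainers for jogging", semantic=0.9))
```

Pure keyword search would score that product zero; the semantic term is what lets the result match the meaning of the query rather than its exact words.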

Rosetta Embeddings

Embeddings you can actually understand and control.

Standard embeddings are opaque: you put text in, you get numbers out, and you hope those numbers capture something meaningful. Rosetta makes the process transparent: you can see how concepts are being represented, adjust what's not working, and audit the results.

This matters for trust, for customization, and increasingly for compliance. It's also 500–1000× more efficient than running inference through a full LLM.
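A hypothetical sketch of what "embeddings you can understand" could look like (the concept labels and function below are illustrative, not Rosetta's actual representation): dimensions that carry human-readable labels, so any vector can be inspected and audited.

```python
# Hypothetical sketch (not Rosetta's actual representation): an embedding
# whose dimensions carry human-readable concept labels, so a vector can
# be inspected, audited, and adjusted dimension by dimension.

CONCEPTS = ["footwear", "outdoor", "formal", "waterproof"]

def explain(vector: list[float], top_k: int = 2) -> list[str]:
    """Return the concept labels that dominate this embedding."""
    ranked = sorted(zip(CONCEPTS, vector), key=lambda cv: cv[1], reverse=True)
    return [concept for concept, _ in ranked[:top_k]]

hiking_boot = [0.9, 0.8, 0.1, 0.7]
print(explain(hiking_boot))  # ['footwear', 'outdoor']
```

With an opaque embedding there is nothing to audit; here, a surprising result can be traced to a specific labeled dimension and corrected there.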


Featured articles

Thoughts on AI architecture, product discovery, and building systems that actually work.

The Knowledge Factory
Apr 2026 · Knowledge & Memory
How Thalius Hippocampus Turns Your Organization Into a Learning Machine

Tech Companies in the AI Age
Mar 2026 · Strategy
Finding a Moat When Reverse Engineering Becomes Free

What do lemons have to do with AI scaling?
Feb 2026 · Research
Paper summary of "Scaling Laws for Neural Language Models" by Kaplan et al.