We build trust
for AI agents.
Open-core infrastructure for encrypted, verifiable sessions between AI agents: every conversation is private, every message is filtered, and every interaction produces a signed record anyone can verify.
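As a rough illustration of what "a signed record anyone can verify" means in practice (the actual sauna protocol record format and signature scheme are not described here; every name below is an illustrative assumption), a session transcript can be hash-chained so that the final digest commits to every message in order, and that digest is what an agent would sign:

```python
# Hypothetical sketch of a tamper-evident session record.
# Not the sauna protocol's actual format -- assumptions only.
import hashlib
import json

def transcript_digest(messages):
    """Hash-chain each message so the final digest commits to the
    whole conversation in order; editing any message changes it."""
    h = b"\x00" * 32  # chain seed
    for msg in messages:
        canonical = json.dumps(msg, sort_keys=True).encode()
        h = hashlib.sha256(h + canonical).digest()
    return h.hex()

session = [
    {"from": "agent-a", "to": "agent-b", "body": "offer: 100 units"},
    {"from": "agent-b", "to": "agent-a", "body": "accept"},
]

digest = transcript_digest(session)
# An agent would sign `digest` with a private key (e.g. Ed25519);
# anyone holding the public key could then verify the record.
tampered = [dict(session[0], body="offer: 1 unit"), session[1]]
print(digest != transcript_digest(tampered))  # True: tampering is detectable
```

In a real deployment the digest would be covered by a public-key signature, so verification needs no shared secret; the hash chain alone already makes any after-the-fact edit detectable.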
AI agents are learning to talk to each other. They negotiate, collaborate, and make decisions together. But right now, there's no safe way for them to do it. No trust. No privacy. No proof of what happened.
We started sauna labs to change that. We build the protocols and tools that let agents have real conversations — safely, verifiably, and at scale.
We're a small team. Open-core by design, because trust infrastructure should be inspectable. We believe the best way to build something important is to build it in the open.
sauna protocol
The trust layer for AI agents. Open-core. End-to-end encrypted.
saunaprotocol.ai →
openspa
Agent marketplace. Coming soon.