AI agents will become persistent, autonomous, and deeply integrated into everyday workflows. But once they can act on our behalf, harder questions arise: who controls the data, the execution, and the trust layer?
Today, NEAR AI has offered an answer. Announced live at NEARCON 2026, IronClaw is a new open-source, verifiable AI agent runtime designed for a future where agents run continuously without exposing sensitive data, credentials, or user intent.
A runtime built for autonomous AI — no blind faith
IronClaw builds on the original OpenClaw vision but fundamentally enhances it with cryptographic guarantees. Written in Rust and deployed inside an encrypted trusted execution environment (TEE) on the NEAR AI Cloud, the runtime allows AI agents to access tools, maintain memory, and act on your behalf, all within a tightly controlled security perimeter.
Rather than asking users to trust an opaque platform, IronClaw shifts the trust model to verifiable execution: data and inference remain protected at the hardware level, and agents operate under explicit, enforceable permissions.
Security through architecture, not add-ons
IronClaw is designed with the core principle of defense in depth.
All untrusted and third-party tools run in their own sandbox, limited to only the resources they are explicitly allowed to access. Network calls are restricted to approved destinations. Sensitive credentials are only injected at runtime and are never exposed directly to tools or external services.
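The deny-by-default allowlist model described above can be sketched as a simple policy check. This is an illustrative sketch only; the `ToolPolicy` class and its fields are hypothetical, not IronClaw's actual API:

```python
from dataclasses import dataclass, field
from urllib.parse import urlparse

@dataclass
class ToolPolicy:
    """Hypothetical per-tool sandbox policy: everything is denied by default."""
    allowed_hosts: set = field(default_factory=set)
    allowed_paths: set = field(default_factory=set)

    def may_connect(self, url: str) -> bool:
        # Network calls succeed only for explicitly approved destinations.
        return urlparse(url).hostname in self.allowed_hosts

    def may_read(self, path: str) -> bool:
        # File access is limited to resources the tool was explicitly granted.
        return any(path.startswith(p) for p in self.allowed_paths)

policy = ToolPolicy(allowed_hosts={"api.example.com"},
                    allowed_paths={"/sandbox/tool-a/"})
print(policy.may_connect("https://api.example.com/v1"))  # True
print(policy.may_connect("https://evil.example.net/"))   # False
```

The key design point is that the tool never decides what it can touch; the runtime evaluates every request against a policy the user granted up front.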
Agent activity is continuously monitored to detect exploits, including protection against prompt injection attacks and unauthorized resource consumption. All user data is stored locally in PostgreSQL, encrypted with AES-256-GCM, and never shared externally. Just as important is what IronClaw does not collect: no telemetry, no analytics, ensuring that execution remains completely private.
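Runtime-only credential injection can be illustrated with a placeholder pattern: a tool builds its request with an opaque token, and the real secret is substituted only at the moment the runtime sends the request. The `SecretVault` class and placeholder syntax below are hypothetical, a sketch of the idea rather than IronClaw's implementation:

```python
class SecretVault:
    """Hypothetical sketch: tools see placeholders, never raw credentials."""

    def __init__(self, secrets: dict):
        self._secrets = secrets  # held by the runtime, outside the tool sandbox

    def placeholder(self, name: str) -> str:
        # The only form of the credential a tool is allowed to see.
        return f"{{{{secret:{name}}}}}"

    def resolve(self, template: str) -> str:
        # Substitution happens only when the runtime performs the call itself.
        out = template
        for name, value in self._secrets.items():
            out = out.replace(self.placeholder(name), value)
        return out

vault = SecretVault({"API_KEY": "sk-demo-123"})
header = f"Bearer {vault.placeholder('API_KEY')}"  # what the tool constructs
print(vault.resolve(header))                       # what actually goes on the wire
```

Because the tool only ever handles the placeholder, a compromised or malicious tool cannot exfiltrate the credential itself.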
Complete audit logs give users visibility into every tool interaction, providing transparency without surveillance.
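An audit log like the one described can be made tamper-evident by chaining entries with hashes, so that altering any past record breaks verification. This is a minimal, assumed sketch of the general technique, not IronClaw's actual log format:

```python
import hashlib
import json

class AuditLog:
    """Append-only log; each entry commits to the hash of its predecessor."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "hash": digest})

    def verify(self) -> bool:
        # Recompute every link; a single edited entry breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            expect = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expect:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"tool": "search", "action": "query"})
log.append({"tool": "mail", "action": "send"})
print(log.verify())  # True
log.entries[0]["event"]["action"] = "delete"  # tamper with history
print(log.verify())  # False
```

The same property is what lets a user audit an agent after the fact: the log proves not only what happened, but that the record itself was not rewritten.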
Deploy privacy-first AI now
IronClaw launches with a free starter tier that includes one hosted agent instance running in NEAR AI's secure environment, backed by its inference infrastructure. Developers and organizations can scale up through flexible paid tiers as their needs grow.
The goal is not just to make agents more secure, but to actually deploy them without forcing teams to choose between convenience and control.
Why this matters
As AI systems increasingly serve corporate incentives and rely on opaque data pipelines, IronClaw points in a different direction: local control, verifiable execution, and privacy by default.
Illia Polosukhin, co-founder of NEAR Protocol and founder of NEAR AI, describes IronClaw as an “agent harness designed for security”: a full-stack trust model reaching from the blockchain infrastructure to the AI layer itself.
Rather than bolting security onto agentic AI after the fact, IronClaw builds it into the runtime, combining confidential inference, cryptographic verification, and hardware-backed execution in one system.
The foundation of responsible agent AI
George Zeng, Chief Product Officer and General Manager of NEAR AI, puts the announcement more bluntly:
“AI agents are already entering critical workflows, but security, compliance, and data ownership remain unresolved. IronClaw aims to fill that gap, giving developers and enterprises the confidence to deploy always-on agents without giving up transparency or control.”
IronClaw is available now; the code can be accessed on the NEAR AI GitHub.
As AI moves from tool to actor, IronClaw takes a clear position. Autonomy should not come at the expense of privacy, nor should intelligence require blind trust.

