Nuro AI Labs
Solutions · For governments & regulated enterprise

Frontier capability, inside your perimeter.

Three companies hold the closed frontier. We don't think the systems on the path to general intelligence should be black boxes you rent by the call. AVALON-2B is Apache 2.0. Hypersave runs on-prem. The whole personal-intelligence stack can sit entirely inside your jurisdiction — air-gapped if you need it.

License
Apache 2.0
modify · redistribute · commercial
Parameters
1.88B
Self-RAG · sub-3B
On Apple M3
40 tok/s
faster still on enterprise-class hardware
Deploy
On-prem
K8s · bare metal · air-gap
What you can build

Frontier AI without the dependency.

01

Run frontier models air-gapped

AVALON-2B is Apache 2.0. GGUF quants on disk. Ollama-ready. Runs on a workstation; no outbound calls; no telemetry by default.
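The page doesn't show the Ollama packaging itself, but an air-gapped pull is typically pinned with a Modelfile so the box never needs registry access after the initial mirror. A hedged sketch, assuming the GGUF file has already been copied inside the perimeter (the file name and system prompt are illustrative, not from the source):

```dockerfile
# Modelfile — pin a local AVALON build for air-gapped workstations.
# Hypothetical GGUF path; substitute whatever your internal mirror ships.
FROM ./avalon-2b-q4_k_m.gguf

# Conservative defaults for auditable output.
PARAMETER temperature 0.2
PARAMETER num_ctx 8192

SYSTEM """You are an on-prem assistant. Cite retrieved passages."""
```

Built and run entirely offline with `ollama create avalon-internal -f Modelfile` followed by `ollama run avalon-internal`.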

02

Cognitive memory inside the perimeter

Hypersave deploys on-prem in Kubernetes or bare metal. Same SDK as the cloud, same five sectors, same RRF retrieval. Your data never leaves.
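The page names RRF retrieval without showing it; below is a minimal sketch of reciprocal rank fusion, the standard technique that name refers to, assuming Hypersave merges ranked ID lists from lexical and vector search (the function and document IDs are illustrative, not the SDK's API):

```python
from collections import defaultdict

def rrf_merge(rankings, k=60):
    """Reciprocal rank fusion: score(d) = sum over lists of 1 / (k + rank(d)).

    `rankings` is a list of ranked document-ID lists, best first.
    k=60 is the constant from the original RRF formulation.
    """
    scores = defaultdict(float)
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Lexical and vector search disagree; RRF rewards documents both rank highly.
lexical = ["doc-a", "doc-b", "doc-c"]
vector = ["doc-b", "doc-d", "doc-a"]
merged = rrf_merge([lexical, vector])
# merged -> ["doc-b", "doc-a", "doc-d", "doc-c"]
```

The appeal on-prem is that RRF needs only ranks, not calibrated scores, so lexical and vector backends can be swapped without re-tuning weights.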

03

Self-RAG for verifiable answers

AVALON’s reflection vocabulary tells operators when the model retrieved, when it didn’t, and how confident it was. Auditable by design.
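The page doesn't publish AVALON's actual reflection vocabulary, but Self-RAG style models interleave special tokens with the answer text, which is what makes the audit trail mechanical to extract. A sketch with placeholder token names (`[Retrieve]`, `[IsSup]`, `[Utility]` are assumptions for illustration):

```python
import re

# Hypothetical reflection tokens; AVALON's real vocabulary may differ.
REFLECTION = re.compile(r"\[(Retrieve|NoRetrieve|IsRel|IsSup|Utility):?([^\]]*)\]")

def audit(generation: str):
    """Split a Self-RAG style generation into answer text and an audit trail."""
    events = [(m.group(1), m.group(2).strip()) for m in REFLECTION.finditer(generation)]
    answer = REFLECTION.sub("", generation).strip()
    return answer, events

text = "[Retrieve] The alibi was confirmed on 12 March. [IsSup:fully] [Utility:5]"
answer, events = audit(text)
# answer -> "The alibi was confirmed on 12 March."
# events -> [("Retrieve", ""), ("IsSup", "fully"), ("Utility", "5")]
```

An operator log of `events` per response is what turns "auditable by design" into something a compliance team can actually review.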

04

Modify and re-train

Apache 2.0 means you can fine-tune AVALON on classified corpora, modify the router, ship internal forks. No call-home, no licence revocation surface.

The stack, on your hardware

Pull the model. Run the memory. No call-home.

The same APIs as the managed offering, deployed inside your network. Operations teams keep audit logs locally; security teams keep keys locally; legal keeps the data residency argument simple.

install.sh
# Pull the open-weights model
ollama pull nuroai/avalon-2b

# Hypersave on-prem (Helm)
helm repo add hypersave https://charts.hypersave.io
helm install hypersave hypersave/hypersave \
  --set storage.postgres.host=pg.internal \
  --set vector.backend=pgvector \
  --set telemetry.enabled=false \
  --set network.egress=deny
agent.py
from hypersave import Hypersave

# Points at the on-prem cluster, never leaves the perimeter.
memory = Hypersave(base_url="https://hypersave.gov.internal")

memory.remember(user_id="case-7421", text="Subject A confirmed alibi.", sector="episodic")

result = memory.recall(user_id="case-7421", query="what is the alibi status?")
# result.answer, result.citations, result.confidence
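Since `recall` surfaces `answer`, `citations`, and `confidence`, a common deployment pattern is an operator-side gate that escalates weak answers to a human. A sketch under assumptions: the field names mirror the snippet above, and the 0.8 threshold is illustrative, not a documented default:

```python
from dataclasses import dataclass, field

@dataclass
class RecallResult:
    # Stand-in for the SDK's result object; fields mirror recall()'s output.
    answer: str
    citations: list = field(default_factory=list)
    confidence: float = 0.0

def gate(result: RecallResult, threshold: float = 0.8) -> str:
    """Auto-approve only cited, high-confidence answers; otherwise escalate."""
    if result.confidence >= threshold and result.citations:
        return "auto-approve"
    return "escalate"

decision = gate(RecallResult("Alibi confirmed.", ["case-7421/note-3"], 0.93))
# decision -> "auto-approve"
```

Keeping the gate outside the model means the threshold lives in reviewable operator code, not in a prompt.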
Built on

Two layers, both yours to operate.

AVALON-2B

Open-weights runtime

1.88B parameters, Self-RAG. Apache 2.0. Hugging Face + GGUF + Ollama. 40 tok/s on Apple M3 — comfortable on enterprise hardware. Beats Qwen 3.5 2B, Gemma 4 E2B, SmolLM3 3B.

Read the paper →

Hypersave

On-prem cognitive memory

Same product as the cloud, packaged for self-hosted deployment. Postgres + your vector store of choice. SOC 2 Type II controls available for the operating org.

Read the docs →
The split — open-weights model, on-prem memory, Apache licence — is the only configuration that procurement can sign in a single quarter. Everything else takes a year.
Composite design-partner feedback · Public-sector evaluation · 2026
Get started

Sovereign deployments aren't self-serve. Talk to us first.

The model is open. The memory layer self-hosts. But every sovereign deployment has its own threat model, network constraints and audit requirements. Write to press@nuroailabs.com and we'll route you to the right person within one working day.