How WeavrCore actually runs in your environment.
Three deployment models spanning the spectrum from fastest to deploy to zero data egress. Pick the one that matches your security posture today; change later as your needs evolve.
Fastest to deploy. Most private. You choose.
Each option uses the same WeavrCore platform. Configuration differs; capabilities don’t.
Managed Cloud.
The fastest path from contract to production AI. WeavrCore runs on AWS Bedrock or Azure AI Foundry in EU regions, fully managed by our team. Your data sits in the EU region of your chosen cloud provider, processed by frontier-tier LLMs through Bedrock or Foundry’s native integrations.
This is the right starting point for organizations deploying enterprise AI for the first time, or for workloads where the data classification doesn’t require sovereign-level controls. Nothing prevents you from migrating to Private + Enrichment or Pure Sovereign later — the platform is the same.
Private + Enrichment.
Your private data stays on your infrastructure. WeavrCore’s classification engine sorts every piece of information your AI systems touch into Private, Enrichable, or Public tiers. Private data is processed locally and never leaves your perimeter. Enrichable data — the subset you’ve explicitly approved for external processing — flows to a frontier LLM under egress controls you define.
Every classification decision is logged. Every egress event is auditable. This is the most common deployment mode for European enterprises in regulated industries: the economic benefits of cloud AI without the data exposure of cloud AI.
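The tier-and-egress flow described above can be sketched as a small routing function. The three tier names come from the text; everything else here (the rule table, field names, and default-deny fallback) is an illustrative assumption, not WeavrCore's actual API.

```python
from enum import Enum

class Tier(Enum):
    PRIVATE = "private"
    ENRICHABLE = "enrichable"
    PUBLIC = "public"

# Hypothetical rule table mapping data fields to tiers. In a real deployment
# the classification engine applies configured policy, not a static map.
RULES = {
    "customer_name": Tier.PRIVATE,
    "support_ticket_text": Tier.ENRICHABLE,
    "product_docs": Tier.PUBLIC,
}

def route(field: str) -> str:
    """Return the processing destination for a field and log the decision."""
    # Default-deny: anything unclassified is treated as Private.
    tier = RULES.get(field, Tier.PRIVATE)
    destination = "local" if tier is Tier.PRIVATE else "external-llm"
    print(f"audit: field={field} tier={tier.value} destination={destination}")
    return destination

route("customer_name")        # stays inside your perimeter
route("support_ticket_text")  # explicitly approved for external enrichment
```

The key design point the text implies: the safe path is the default, and every external-processing decision is an explicit, logged exception.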
Pure Sovereign.
Maximum control. Everything runs on your infrastructure — on-premise, private cloud, or air-gapped environments. WeavrCore deploys to your existing Kubernetes or VM infrastructure. LLM inference runs locally on NVIDIA-accelerated hardware, typically using open-source models you select.
Zero data egress. No information leaves your perimeter under any circumstance. This is the deployment mode for defense, intelligence, healthcare, and any organization where data egress is not a configuration choice — it’s a hard requirement.
What every deployment includes, regardless of mode.
Four capabilities. Built into the platform. Not optional add-ons.
EU AI Act documentation, generated automatically
Risk classification, technical documentation, audit trails — produced as a byproduct of how WeavrCore operates, in every deployment mode. Not bolted on at the end.
Role-based access control via your existing IAM
SAML 2.0 or OIDC. Active Directory, Azure AD, Okta, Google Workspace. Permissions scoped to data classification tier and use case. Your permission model, enforced on every AI interaction.
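The permission model described above (IdP groups scoped to classification tiers) can be sketched as follows. The group names and tier grants are hypothetical examples; in practice the groups would come from the SAML or OIDC assertion issued by your IdP.

```python
# Hypothetical mapping from IdP groups to the classification tiers
# a member of that group may query. Names are illustrative only.
GROUP_TIERS = {
    "ai-analysts": {"public"},
    "compliance": {"public", "enrichable"},
    "security-admins": {"public", "enrichable", "private"},
}

def allowed_tiers(groups: list[str]) -> set[str]:
    """Union of tiers granted by any of the user's groups."""
    tiers: set[str] = set()
    for group in groups:
        tiers |= GROUP_TIERS.get(group, set())
    return tiers

def can_query(groups: list[str], tier: str) -> bool:
    """Enforced on every AI interaction, per the permission model."""
    return tier in allowed_tiers(groups)

can_query(["ai-analysts"], "private")              # denied
can_query(["ai-analysts", "compliance"], "enrichable")  # allowed
```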
Model-agnostic by architecture
Provider abstraction layer. Switch between Anthropic, OpenAI, open-source, or self-hosted models per use case. No application changes required.
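One way to picture a provider abstraction layer is an interface that applications call while per-use-case routing lives in configuration. This is a minimal sketch of the pattern, not WeavrCore's implementation; all class and use-case names are assumptions.

```python
from typing import Protocol

class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedProvider:
    """Stand-in for a managed frontier model behind a cloud API."""
    def __init__(self, model: str):
        self.model = model
    def complete(self, prompt: str) -> str:
        return f"[{self.model}] response to: {prompt}"

class LocalProvider:
    """Stand-in for self-hosted open-source inference."""
    def complete(self, prompt: str) -> str:
        return f"[local] response to: {prompt}"

# Per-use-case routing table: swapping a provider is a config change,
# not an application change.
PROVIDERS: dict[str, LLMProvider] = {
    "summarization": HostedProvider("frontier-model"),
    "contract-review": LocalProvider(),
}

def ask(use_case: str, prompt: str) -> str:
    return PROVIDERS[use_case].complete(prompt)
```

Because callers only ever see `ask`, moving a workload from a hosted to a local model is a one-line change to the routing table.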
Audit trail by default
Append-only logging of every query, response, data access, and model invocation. Queryable, exportable, retained per your data retention policy.
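The append-only, exportable log described above can be sketched with an in-memory structure. A real deployment would use durable, tamper-evident storage with retention enforcement; the event names and fields below are illustrative assumptions.

```python
import json
import time

class AuditLog:
    """Append-only audit log sketch: entries are serialized once,
    never mutated, and exportable as structured records."""

    def __init__(self) -> None:
        self._entries: list[str] = []

    def record(self, event: str, **fields) -> None:
        entry = {"ts": time.time(), "event": event, **fields}
        self._entries.append(json.dumps(entry))  # write-once

    def export(self) -> list[dict]:
        """Export every entry, e.g. for a SIEM or an auditor."""
        return [json.loads(e) for e in self._entries]

log = AuditLog()
log.record("model_invocation", user="alice", model="local-llm", tier="private")
log.record("egress", user="bob", destination="external-llm", tier="enrichable")
```

Keeping entries serialized at write time makes later tampering easier to detect and keeps export a pure read.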
Which model fits?
| If you… | Consider |
|---|---|
| Need to ship a first production AI system in under a month | Managed Cloud |
| Process personal data of EU residents | Private + Enrichment or Pure Sovereign |
| Have existing cloud infrastructure you want to use | Private + Enrichment |
| Operate in defense, intelligence, or critical infrastructure | Pure Sovereign |
| Are prototyping and plan to tighten security later | Managed Cloud, then migrate |
| Have strict data egress prohibitions | Pure Sovereign |
| Want mixed workloads across sensitivity levels | Private + Enrichment |
| Are unsure | Take the Readiness Assessment first |
Setup timing varies by deployment mode and integration complexity. Single-use-case deployments are typically fastest; complex environments take longer.
Want to discuss which mode fits your environment?
30-minute call with our technical team. No prepared pitch — we look at your specific environment and tell you which deployment mode fits.