The Deployment · Three Ways, One Platform

How WeavrCore actually runs in your environment.

Three deployment models cover the full spectrum, from fastest to deploy to zero data egress. Pick the one that matches your security posture today; change later when your needs evolve.

Timeline Under 5 weeks (typical)
Regions EU only
Models Claude, GPT, open-source
Egress Configurable per tier
01 / 05

Fastest to deploy. Most private. You choose.

Managed Cloud (fastest)
Private + Enrichment (most common)
Pure Sovereign (maximum control)

Each option uses the same WeavrCore platform. Configuration differs; capabilities don’t.

02 / 05

Managed Cloud.

The fastest path from contract to production AI. WeavrCore runs on AWS Bedrock or Azure AI Foundry in EU regions, fully managed by our team. Your data sits in the EU region of your chosen cloud provider, processed by frontier-tier LLMs through Bedrock or Foundry’s native integrations.

This is the right starting point for organizations deploying enterprise AI for the first time, or for workloads where the data classification doesn’t require sovereign-level controls. Nothing prevents you from migrating to Private + Enrichment or Pure Sovereign later — the platform is the same.

Runs on AWS Bedrock (eu-west-1 or eu-central-1) or Azure AI Foundry (EU regions)
Managed by DataWeavrs team
Data residency EU only
LLM access Frontier models via Bedrock or Foundry
Your data Stored in EU region of your chosen cloud
Best for First enterprise AI deployment, non-sensitive workloads
Typical setup 2–4 weeks, depending on data ingestion complexity
03 / 05

Private + Enrichment.

Your private data stays on your infrastructure. WeavrCore’s classification engine sorts every piece of information your AI systems touch into Private, Enrichable, or Public tiers. Private data is processed locally and never leaves your perimeter. Enrichable data — the subset you’ve explicitly approved for external processing — flows to a frontier LLM under egress controls you define.

Every classification decision is logged. Every egress event is auditable. This is the most common deployment mode for European enterprises in regulated industries: the economic benefits of cloud AI without the data exposure of cloud AI.

Runs on Your infrastructure (on-prem or your cloud tenant) + DataWeavrs-managed enrichment layer
Managed by Co-managed (DataWeavrs + your IT)
Classification 3-tier (Private / Enrichable / Public)
LLM access Per-tier policy enforced at egress
Private data Never leaves your perimeter
Enrichable data Leaves only under approved egress rules
Audit trail Every classification decision logged
Best for Regulated industries, mixed-sensitivity data
Typical setup 3–5 weeks, depending on classification policy depth and integration scope
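The tier routing described above can be pictured as a policy check at the egress boundary. This is an illustrative sketch, not WeavrCore's actual engine: the three tier names come from the text, while the endpoint identifiers, rule table, and function names are hypothetical.

```python
from enum import Enum

class Tier(Enum):
    PRIVATE = "private"
    ENRICHABLE = "enrichable"
    PUBLIC = "public"

# Hypothetical egress policy: which tiers may leave the perimeter,
# and only to explicitly approved endpoints.
EGRESS_RULES = {
    Tier.PRIVATE: [],                                       # never leaves the perimeter
    Tier.ENRICHABLE: ["bedrock.eu-west-1"],                 # approved frontier endpoint
    Tier.PUBLIC: ["bedrock.eu-west-1", "foundry.westeurope"],
}

def may_egress(tier: Tier, endpoint: str) -> bool:
    """True only if the tier's policy lists the target endpoint."""
    return endpoint in EGRESS_RULES[tier]

audit_log = []

def route(tier: Tier, endpoint: str) -> str:
    """Route a request locally or externally; log every decision."""
    allowed = may_egress(tier, endpoint)
    audit_log.append({"tier": tier.value, "endpoint": endpoint, "allowed": allowed})
    return "external" if allowed else "local"
```

The key property is that Private data can never match an egress rule: its rule list is empty by construction, so the only possible route is local.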
04 / 05

Pure Sovereign.

Maximum control. Everything runs on your infrastructure — on-premise, private cloud, or air-gapped environments. WeavrCore deploys to your existing Kubernetes or VM infrastructure. LLM inference runs locally on NVIDIA-accelerated hardware, typically using open-source models you select.

Zero data egress. No information leaves your perimeter under any circumstance. This is the deployment mode for defense, intelligence, healthcare, and any organization where data egress is not a configuration choice — it’s a hard requirement.

Runs on Your infrastructure (on-prem, private cloud, air-gapped options available)
Managed by Your team, with DataWeavrs support
LLM inference Local, NVIDIA-accelerated
Default models Open-source (Llama 3.x, Mistral, or your chosen model)
Data egress Zero
Identity Integrated with your existing IAM (Active Directory, Okta, Azure AD, Google Workspace)
Workflow Guardrails enforced at every step
Audit trail Local, append-only, queryable
Best for Defense, intelligence, healthcare, critical infrastructure, government agencies, organizations with strict data egress prohibitions
Setup timeline Varies. Pure Sovereign deployments depend on infrastructure readiness, air-gap requirements, and existing stack maturity. We scope this in the first conversation.
05 / 05

What every deployment includes, regardless of mode.

Four capabilities. Built into the platform. Not optional add-ons.

01

EU AI Act documentation, generated automatically

Risk classification, technical documentation, audit trails — produced as a byproduct of how WeavrCore operates, in every deployment mode. Not bolted on at the end.

02

Role-based access control via your existing IAM

SAML 2.0 or OIDC. Active Directory, Azure AD, Okta, Google Workspace. Permissions scoped to data classification tier and use case. Your permission model, enforced on every AI interaction.
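The permission model can be sketched as a mapping from IAM group claims (as delivered in an OIDC token or SAML assertion) to the classification tiers a user may query. The group names and functions below are hypothetical, not WeavrCore's schema.

```python
# Hypothetical mapping from IAM group claims to permitted data tiers.
GROUP_TIERS = {
    "ai-analysts": {"public"},
    "ai-engineers": {"public", "enrichable"},
    "ai-admins": {"public", "enrichable", "private"},
}

def allowed_tiers(groups: list[str]) -> set[str]:
    """Union of tiers granted by the user's IAM group memberships."""
    tiers: set[str] = set()
    for g in groups:
        tiers |= GROUP_TIERS.get(g, set())
    return tiers

def can_query(groups: list[str], tier: str) -> bool:
    """Enforce the permission check on every AI interaction."""
    return tier in allowed_tiers(groups)
```

Because the check keys off group claims the identity provider already issues, the existing permission model carries over without a parallel user directory.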

03

Model-agnostic by architecture

Provider abstraction layer. Switch between Anthropic, OpenAI, open-source, or self-hosted models per use case. No application changes required.
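A provider abstraction layer of this kind can be sketched as a registry of interchangeable completion functions, with per-use-case routing held in configuration. Everything here (provider names, routing table, signatures) is a hypothetical illustration of the pattern, not the platform's API.

```python
from typing import Callable

# Every provider exposes the same prompt -> completion signature,
# so application code is unchanged when the model behind it is swapped.
Provider = Callable[[str], str]

PROVIDERS: dict[str, Provider] = {}

def register(name: str):
    """Decorator that adds a provider to the registry under `name`."""
    def wrap(fn: Provider) -> Provider:
        PROVIDERS[name] = fn
        return fn
    return wrap

@register("local-llama")
def local_llama(prompt: str) -> str:
    # In production this would call a local, NVIDIA-accelerated endpoint.
    return f"[local] {prompt}"

@register("bedrock-claude")
def bedrock_claude(prompt: str) -> str:
    # In production this would call AWS Bedrock in an EU region.
    return f"[bedrock] {prompt}"

# Per-use-case routing lives in configuration, not application code.
USE_CASE_MODEL = {"contract-review": "bedrock-claude", "hr-queries": "local-llama"}

def complete(use_case: str, prompt: str) -> str:
    return PROVIDERS[USE_CASE_MODEL[use_case]](prompt)
```

Switching a use case from a hosted frontier model to a self-hosted one is then a one-line change to the routing table.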

04

Audit trail by default

Append-only logging of every query, response, data access, and model invocation. Queryable, exportable, retained per your data retention policy.
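One common way to make such a log append-only and tamper-evident is to hash-chain its entries: each record carries a digest of its own contents plus the previous record's digest, so altering any past entry breaks the chain. A minimal sketch of that idea; the class and field names are hypothetical, not WeavrCore's implementation.

```python
import hashlib
import json
import time

class AuditLog:
    """Hash-chained, append-only log of events (hypothetical sketch)."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> None:
        record = {"ts": time.time(), "prev": self._last_hash, **event}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._last_hash = digest
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry fails."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because verification needs only the log itself, an auditor can check integrity offline, which also suits the air-gapped Pure Sovereign mode.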

Which deployment model fits?

If you… → Consider
Need to ship a first production AI system in under a month → Managed Cloud
Process personal data of EU residents → Private + Enrichment or Pure Sovereign
Have existing cloud infrastructure you want to use → Private + Enrichment
Operate in defense, intelligence, or critical infrastructure → Pure Sovereign
Are prototyping and plan to tighten security later → Managed Cloud, then migrate
Have strict data egress prohibitions → Pure Sovereign
Want mixed workloads across sensitivity levels → Private + Enrichment
Are unsure → Take the Readiness Assessment first

Setup timing varies by deployment mode and integration complexity. The figures above are typical for single-use-case deployments; complex environments take longer.

Want to discuss which mode fits your environment?

30-minute call with our technical team. No prepared pitch — we look at your specific environment and tell you which deployment mode fits.