VyriAI | AI Security Platform for Startups

Protect Your AI Apps from Data Leaks & Prompt Injection

VyriAI is an AI security platform for startups. Secure your AI systems with real-time protection against prompt injection, data leaks, and model abuse — with AI-driven SOC2 automation and autonomous security operations for secure LLM usage in the enterprise.

13ms
Avg governance overhead
52+
Detection patterns
6/6
Threat types blocked
API · IDE · Network
Three deployment modes
Real incidents

AI Model Data Privacy Risks: Your AI Calls Are Leaking Data Right Now

Samsung
Engineers pasted proprietary source code into ChatGPT. Code left the company.
Result: ChatGPT banned company-wide. Months of recovery.
Healthcare SaaS
Patient records in AI prompts. PHI sent to third-party LLM APIs.
Result: HIPAA breach, $250K fine, mandatory audit.
FinTech Startup
Customer SSNs, account numbers in AI assistant context windows.
Result: PCI-DSS violation, customer notification, audit.
Your Company?
Your developers are building AI features today. Do you know how to prevent data leaks in AI?
Without VyriAI: you find out when it's too late.
Drop-in proxy

AI Pentesting Automation: One Line to Secure LLM Usage in Enterprise

Change your API base URL from api.openai.com to your VyriAI proxy. That's it. Every LLM call now runs through governance — scanning, blocking, redacting, and auditing — without changing a single line of application code: automated AI pentesting for secure LLM usage in the enterprise.

  1. Intercept: all prompts pass through your VyriAI instance
  2. Scan: 52 patterns check for PII, source code, secrets, PHI
  3. Enforce: block, redact, or allow based on your policy
  4. Audit: every decision written to a tamper-evident hash chain
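The four steps above can be sketched in miniature. This is an illustrative Python sketch, not the production pipeline: the two regexes stand in for the full 52-pattern set, and the record shape and redaction placeholder are assumptions:

```python
import hashlib
import json
import re

# Two illustrative patterns — stand-ins for the full 52-pattern set.
PATTERNS = {
    "ssn_pattern": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key_pattern": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def govern(prompt: str, prev_hash: str = "0" * 64):
    """Scan -> enforce (redact) -> audit a single prompt."""
    matched = [name for name, rx in PATTERNS.items() if rx.search(prompt)]
    decision = "redact" if matched else "allow"
    clean = prompt
    for name in matched:
        clean = PATTERNS[name].sub("[PII_REDACTED]", clean)
    # The audit record incorporates the previous record's hash — a chain.
    record = {"decision": decision, "patterns": matched, "prev": prev_hash}
    rec_hash = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return clean, decision, rec_hash

clean, decision, rec_hash = govern("My SSN is 123-45-6789")
print(decision, clean)  # redact My SSN is [PII_REDACTED]
```

Each call returns the cleaned prompt, the decision, and the hash of an audit record that incorporates the previous record's hash — the same chaining idea the audit trail section relies on.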
Live governance pipeline
📱
Your app sends prompt
incoming
🔍
Content scan (52 patterns)
~3ms
🚫
SSN / source code detected
blocked
✏️
PII redacted from prompt
redacted
🤖
Clean prompt → LLM API
forwarded
📋
Decision written to audit chain
sha-256
2-minute integration

Change one URL. Protect everything.

No SDK. No agent. No code changes beyond the base URL. Works with any LLM library.

before_vyriai.py
import openai

# ❌ Prompt goes directly to OpenAI
# No visibility. No control. No audit.

client = openai.OpenAI(
  api_key="sk-..."
)

response = client.chat.completions.create(
  model="gpt-4o",
  messages=[{
    "role": "user",
    "content": prompt # could contain SSN, code...
  }]
)
after_vyriai.py
import openai

# ✅ One line change. Full governance.

client = openai.OpenAI(
  api_key="sk-...",
  base_url="https://proxy.vyriai.com/v1" # ← add this
)

# Everything else stays identical.
# VyriAI intercepts, scans, enforces,
# and audits every prompt automatically.

response = client.chat.completions.create(
  model="gpt-4o",
  messages=[{
    "role": "user",
    "content": prompt # scanned before it leaves
  }]
)
🛡️
Instant protection
Every prompt scanned for SSNs, emails, phone numbers, credit cards, PHI, source code, API keys, and passwords before reaching the LLM.
⚡
Imperceptible overhead
Governance adds ~13ms avg (P95: 26ms) to your LLM calls. Your OpenAI call takes 1–3 seconds — users never notice. Measured: 300 requests, 10 concurrent.
🔒
Fail-closed by default
If VyriAI goes down, requests are denied — not passed through. Your data never bypasses governance, even during incidents.
🔄
Hot-reload policies
Change what's blocked or redacted via API — no restart, no deployment. Policy takes effect within milliseconds.
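Conceptually, hot-reload is an atomic policy swap. A simplified in-process sketch — the `PolicyStore` class is a hypothetical stand-in, and in the product the patch arrives through its HTTP API:

```python
import threading

class PolicyStore:
    """Atomically swappable policy — readers never see a half-applied patch."""

    def __init__(self, policy: dict):
        self._lock = threading.Lock()
        self._policy = policy

    def get(self) -> dict:
        with self._lock:
            return self._policy

    def patch(self, updates: dict) -> None:
        # Build the new policy aside, then swap the reference under the lock:
        # in-flight requests finish on the old policy; new ones see the patch.
        with self._lock:
            self._policy = {**self._policy, **updates}

store = PolicyStore({"source_code": "block", "pii": "redact"})
print(store.get()["source_code"])      # block
store.patch({"source_code": "allow"})  # no restart, no redeploy
print(store.get()["source_code"])      # allow
```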
Full governance stack

Everything your team needs to ship AI safely

Not just a filter. A complete governance layer — policy engine, audit trail, memory isolation, and compliance exports built in.

🔍
52-pattern content scanner
Detects SSNs, emails, credit cards, PHI, HIPAA data, source code in 11 languages, API keys, passwords, SQL injection, and meeting notes. Add your own patterns via API.
📜
Policy engine with hot-reload
Define per-tenant policies: block source code, redact PII, allow everything else. Patch policies live via API with no downtime. Stealth mode keeps enforcement invisible to end users.
🔗
Tamper-evident audit trail
Every governance decision is written to a SHA-256 hash chain. Auditors can verify that no decisions were deleted or modified — even by your own engineers.
📦
One-click compliance exports
SOC2 and GDPR evidence packages as ZIP/JSON/CSV with full audit trail, integrity manifest, and hash verification. Ready for your next audit in 30 seconds.
🏢
Multi-tenant isolation
PostgreSQL Row-Level Security enforced at the database layer. One tenant can never see another's data — even if there's a bug in application code.
🌐
Multi-LLM support
Works with OpenAI, Anthropic Claude, and Google Gemini. Switch providers without changing governance configuration. Automatic fallback if primary provider is unavailable.
🔐
OIDC/SSO ready
Connects to Auth0, Okta, Google Workspace, and Azure AD via standard OIDC. Tenant isolation from JWT claims — no custom integration code needed.
📊
SLO monitoring + fail-closed
Tracks P95 latency and error rate in real time. When governance degrades beyond your SLO, fail-closed activates: requests are denied, never passed through unguarded.
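The trigger can be sketched as a rolling P95 check. Illustrative only — the 26ms budget matches the published P95, but the window size and minimum-sample rule here are assumptions, not product defaults:

```python
from collections import deque
from statistics import quantiles

class SLOGuard:
    """Deny requests once rolling P95 latency breaches the SLO budget."""

    def __init__(self, p95_budget_ms: float, window: int = 100):
        self.p95_budget_ms = p95_budget_ms
        self.samples = deque(maxlen=window)  # rolling latency window

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def allow_request(self) -> bool:
        if len(self.samples) < 20:  # too little data to judge: stay open
            return True
        p95 = quantiles(self.samples, n=20)[-1]  # 95th percentile
        return p95 <= self.p95_budget_ms  # breach -> fail closed

guard = SLOGuard(p95_budget_ms=26.0)
for ms in [12.0] * 95 + [400.0] * 25:  # simulate a latency spike
    guard.record(ms)
print(guard.allow_request())  # False — governance degraded, requests denied
```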
💡
Explainability API
Every blocked or redacted request has a full timeline: why it was blocked, which pattern matched, risk score, and which policy triggered. No black boxes.
Cryptographic audit trail

Prove to auditors you never leaked a byte

Every governance decision is SHA-256 chained. Each record contains the hash of the previous one. If any audit event is deleted, modified, or backdated — the chain breaks and you'll know instantly.

  • SOC2 Type 1 audit prep: 70% complete, firm engagement in progress
  • GDPR right-to-erasure: delete subject data across all audit records
  • HIPAA de-identification: PHI scrubbed before any LLM sees it
time       decision   reason                                hash
09:14:02   deny       python_function detected              a3f2b1...
09:14:09   redact     ssn_pattern → [PII_REDACTED]          c7d91e...
09:14:15   allow      No patterns matched                   88fa3c...
09:14:22   deny       sql_drop_attack detected              1bc04a...
09:14:30   redact     api_key_pattern → [SECRET_REDACTED]   f29d77...
Chain integrity: VALID ✓ — 5 audits verified
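A minimal sketch of how such a chain can be built and verified — the field names are illustrative, not the product's actual record schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # hash seed for the first record

def record_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    prev = GENESIS
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False  # deleted, modified, or backdated — chain breaks
        prev = entry["hash"]
    return True

chain = []
append(chain, {"time": "09:14:02", "decision": "deny"})
append(chain, {"time": "09:14:09", "decision": "redact"})
print(verify(chain))  # True
chain[0]["record"]["decision"] = "allow"  # tamper with history...
print(verify(chain))  # False — the stored hash no longer matches
```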
Why VyriAI

Not a filter. A governance layer.

Prompt Shield, Bedrock Guardrails, and Lakera are stateless filters. VyriAI is a stateful control plane with memory, audit trail, and multi-tenant policy isolation.

Stateless filters (Prompt Shield, Lakera, Guardrails)
  • Single request, no memory of context
  • No per-tenant policy isolation
  • No cryptographic audit trail
  • No compliance export (SOC2/GDPR evidence)
  • No hot-reload — deploy to change policy
  • No explainability API (why was it blocked?)
  • Fail-open by default
VyriAI — stateful AI DLP
  • Full stateful context: memory, session, tenant
  • Per-tenant policies with RLS isolation at DB layer
  • SHA-256 hash chain — tamper-evident, verifiable
  • One-click SOC2/GDPR evidence packages
  • Hot-reload policies via API, zero downtime
  • Explainability: full decision timeline per request
  • Fail-closed: requests denied if governance degrades
Integrations

Works with your existing stack

VyriAI is an HTTP proxy — it integrates with any language, framework, LLM library, or IDE.

LLM Providers
🤖 OpenAI
🧠 Anthropic Claude
✨ Google Gemini
🔗 LangChain
🦙 LlamaIndex
IDEs & Developer Tools
💻 VS Code
🧠 JetBrains
🖱️ Cursor
🔄 Continue.dev
🐙 GitHub Copilot
⚙️ GitHub Actions
Identity & Auth
🔐 Auth0
🔒 Okta
☁️ Azure AD
🏢 Google Workspace
Infrastructure & Languages
🐳 Docker
☸️ Kubernetes
📊 Kafka
🗄️ PostgreSQL
⚡ Redis
🐍 Python
📦 Node.js
☕ Java
🦀 Go
💎 Ruby
IDE & Developer Integration

Governance where developers actually work

API proxy secures your production apps. IDE extensions secure your developers' machines. Network integration secures everything in between.

🔌
VS Code Extension
Phase 4 — Q3 2026
Intercepts AI calls from Copilot, Continue, Codeium, and any OpenAI-compatible extension via a local proxy. Shows real-time notifications when source code or secrets are blocked. Zero code changes — install and configure your tenant ID.
🧠
JetBrains Plugin
Phase 4 — Q3 2026
Same governance for IntelliJ, PyCharm, GoLand, WebStorm, and Rider. Plugin injects proxy config into IDE network settings. Works with JetBrains AI Assistant and all third-party AI plugins.
🖱️
Cursor & Custom LLM Backends
Available now
Cursor, Continue, and OpenAI-compatible IDEs support custom base_url. Point it to your VyriAI endpoint — every completion request is governed. No extension needed.
⚙️
GitHub Actions / CI Scanner
Phase 3 — Q2 2026
Scan AI-generated code before it merges. A pipeline step calls the VyriAI scan API on PR diffs — blocks merges containing API keys, credentials, or PHI patterns. Integrates with GitHub Actions, GitLab CI, Jenkins.
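Roughly, the pipeline step works like this. In this sketch the diff is scanned locally with two illustrative regexes; the real step would POST the diff to the VyriAI scan API instead:

```python
import re

# Two illustrative secret patterns — stand-ins for the full pattern set.
SECRET_PATTERNS = {
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_diff(diff_text: str) -> list:
    """Return names of patterns found in the added lines of a PR diff."""
    added = [line[1:] for line in diff_text.splitlines()
             if line.startswith("+") and not line.startswith("+++")]
    return [name for name, rx in SECRET_PATTERNS.items()
            if any(rx.search(line) for line in added)]

def ci_gate(diff_text: str) -> int:
    """Exit code for the CI step: non-zero fails the job, blocking the merge."""
    findings = scan_diff(diff_text)
    if findings:
        print("Blocked:", ", ".join(findings))
        return 1
    return 0

diff = '+++ b/config.py\n+AWS_KEY = "AKIAABCDEFGHIJKLMNOP"\n'
print(ci_gate(diff))  # 1 — merge blocked
```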
🐙
GitHub Copilot (Network Route)
Phase 5 — Q4 2026
GitHub Copilot Business doesn't support custom API endpoints — but it does respect corporate proxies. The network integration path (PAC file or DNS) intercepts Copilot traffic transparently without touching any developer configuration.
🔑
How IDE Interception Works
Architecture ready
A lightweight local agent (Docker or native binary) runs on the developer machine. It acts as an HTTPS MITM proxy using a VyriAI-issued root CA cert trusted by the OS. All AI traffic flows through it before reaching the provider — governed, logged, audited.
Enterprise Network Integration

Govern AI traffic across your entire company

No software on developer machines required. One network-layer deployment covers every AI call — from every laptop, server, and CI runner on your network.

📋
PAC File / Transparent Proxy
Deploy a Proxy Auto-Config file via MDM (Jamf, Intune) to all company machines. All traffic to *.openai.com, *.anthropic.com, and *.googleapis.com is automatically routed through VyriAI — no per-developer setup, and nothing for developers to misconfigure.
🌐
DNS-Based Routing
Your internal DNS resolver returns VyriAI's proxy IP for AI provider domains. Works with Active Directory, BIND9, or corporate VPN split-tunnel DNS. Smaller teams can use a Pi-hole-style setup. Zero client config changes.
🛡️
Zscaler / Netskope Integration
Already running a CASB? VyriAI complements it. Zscaler or Netskope routes AI traffic through VyriAI's scan API before passing it upstream. VyriAI provides the AI-specific policy engine (PII, source code, secrets) that generic CASB tools lack.
🌍
Browser Extension (Web AI Tools)
Chrome and Firefox extensions intercept fetch/XHR calls to ChatGPT, Claude.ai, and Gemini web interfaces — the ones your team uses directly in the browser. Deploy via Chrome Enterprise Policy or Firefox Enterprise. Phase 5 roadmap.
📱
MDM Deployment Guide
Full deployment runbooks for Jamf Pro (macOS), Microsoft Intune (Windows), and BYOD profiles. Covers: root CA cert trust, proxy policy, VPN split-tunnel, and compliance reporting integration. Ship in a weekend.
🏢
On-Premises / Air-Gapped
Full VyriAI stack deployed inside your network. Traffic never leaves your perimeter — not even metadata. Single Docker Compose for small teams; Kubernetes Helm chart for enterprise. BYOK encryption, audit logs stored on your infra.
Product Roadmap

From API proxy to enterprise AI firewall

Transparent, dated, honest. We publish what's done, what's being built, and what's next.

Phase 1 & 2 — Done · April 2026
API-Layer DLP Proxy
Full governance pipeline in production-quality Go: 52-pattern content scanner (PII, PHI, source code, secrets, SQL injection), SHA-256 hash chain audit trail, OIDC RS256 auth (Auth0/Okta/Azure AD), per-tenant rate limiting, hot-reload policy engine, memory CRUD with PostgreSQL RLS, one-click SOC2/GDPR evidence export. Benchmark: avg 13ms overhead, P95 26ms, 6/6 threat types blocked.
Go 1.24 + Gin 52 patterns OIDC RS256 Rate limiting Hash chain audit Evidence export Docker + k8s
Phase 3 — Now · Q2 2026
First Revenue & Showcase
First paying customer. POC support for SaaS, fintech, and healthtech teams. Prometheus metrics endpoint. Distributed SLO tracking (multi-pod). SOC2 Type 1 audit firm engagement. CI/CD scanner step (GitHub Actions, GitLab CI). Cursor / Continue IDE integration guide published.
First customer Prometheus /metrics SOC2 audit start CI scanner Cursor integration
Phase 4 — Q3 2026
IDE Extensions
VS Code extension and JetBrains plugin. Local agent (Docker/native) runs an HTTPS MITM proxy using a trusted root CA cert. Real-time block notifications in the IDE. Audit dashboard side-panel. Connects to your cloud or on-prem VyriAI instance for policy sync and audit storage. SOC2 Type 1 certification.
VS Code extension JetBrains plugin Local MITM agent SOC2 Type 1 cert SAML SSO
Phase 5 — Q4 2026
Enterprise Network Control
PAC file / transparent proxy via MDM (Jamf, Intune). DNS-based routing for zero-client-config coverage. Zscaler and Netskope API integration. Browser extension for ChatGPT, Claude.ai, and Gemini web. BYOK per-tenant encryption. Penetration test. HIPAA BAA. GitHub Copilot Business coverage via network route.
PAC file / MDM DNS routing Zscaler integration Browser extension BYOK HIPAA BAA Pen test
Phase 6 — 2027
AI Firewall for the Enterprise
Full Zscaler-for-AI positioning: every AI call on your network governed, logged, and auditable — regardless of tool, provider, or device. Air-gapped deployment option. Custom ML classifiers per customer. Real-time DLP alerts with PagerDuty/Opsgenie. SIEM integration (Splunk, Datadog, Elastic). SOC2 Type 2.
AI firewall Air-gapped Custom ML classifiers SIEM integration SOC2 Type 2

Your AI calls are unprotected right now.

Book a 30-minute demo. We'll scan your own prompts live — show you what's leaking, block it, and produce a compliance export in under 60 seconds.

Docker Compose up in 5 minutes. Works with OpenAI, Anthropic, and Gemini out of the box.