Security architecture for serious AI work.

ContinueVault is designed to make your AI conversations portable, searchable, and usable across AI sessions without creating a standing pool of readable user data on our servers. This page documents how that works, what the system can and cannot do with your data, and where the design boundaries are.

This page is intended for security researchers, procurement teams, and anyone who needs more detail than the homepage provides. If you find a claim here that you cannot reconcile with our actual behavior, please report it.

How your data is stored

Every ContinueVault user has a per-user encrypted vault. Your conversations, threads, and extracted knowledge live in that vault. Each user vault is encrypted at rest using industry-standard database encryption. There is no shared database holding readable conversation contents across users.

The encryption key for your vault is derived for each worker/session assignment from your live authenticated identity material. The identity material used to authorize access is not stored as a reusable server-side credential.
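The derivation pattern described above can be sketched with an HKDF-style construction. This is a minimal illustration, not ContinueVault's actual implementation: the function names and inputs (`identity_material`, `session_id`) are assumptions, and a production system would use a vetted HKDF library rather than hand-rolling one.

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract (RFC 5869): condense input keying material into a PRK.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand: stretch the PRK into a key bound to a context string.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def derive_session_key(identity_material: bytes, user_id: str, session_id: str) -> bytes:
    # The key exists only for this worker/session assignment; neither the
    # identity material nor the derived key is persisted server-side.
    prk = hkdf_extract(salt=user_id.encode(), ikm=identity_material)
    return hkdf_expand(prk, info=b"vault-key:" + session_id.encode())
```

Because the key is a pure function of live identity material plus the session assignment, there is nothing to store: a new session re-derives it, and an idle session forgets it.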

How your data is decrypted

Your encrypted vault is decrypted only by a server process that is actively handling your authenticated requests.

  1. An authenticated request reaches ContinueVault.

  2. A short-lived server process decrypts only what it needs to handle your active work.

  3. When you go idle and your in-flight work drains, that process shuts down and the keys go with it — typically within minutes.

Server processes are assigned to one user for their lifetime. A process that has served one user is never reused for a different user; when it finishes, it is destroyed.

The practical consequence: when you stop using ContinueVault, the only server-side state that holds the keys to your conversations is gone within minutes. It can take longer if heavy background work (such as knowledge extraction) is still running when you go idle, but the destruction is automatic — no human action required.
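The lifecycle above can be sketched as a per-user worker object. All names here are hypothetical (including the idle timeout value); the point is the shape: the key lives only in process memory, the worker is pinned to one user, and shutdown destroys both.

```python
import time

class VaultWorker:
    """Hypothetical per-user worker: holds the vault key only in memory
    and is destroyed, key and all, once the user goes idle."""

    IDLE_TIMEOUT = 300  # seconds; illustrative, not the real value

    def __init__(self, user_id: str, session_key: bytes):
        self.user_id = user_id           # pinned for the worker's lifetime
        self._key = session_key          # never written to disk
        self._last_active = time.monotonic()

    def handle_request(self, request) -> None:
        self._last_active = time.monotonic()
        # ... decrypt only what this request needs, using self._key ...

    def should_shut_down(self) -> bool:
        # True once the user has gone idle and in-flight work has drained.
        return time.monotonic() - self._last_active > self.IDLE_TIMEOUT

    def shut_down(self) -> None:
        # Best-effort key destruction; process termination is the real boundary.
        self._key = None
```

A worker that has served one user is never handed a different user's key; it is simply destroyed and a fresh worker is started for the next session.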

What the system cannot do

The current architecture forecloses several categories of action that other AI-related products routinely perform. We document them explicitly because they are part of the trust model — things the system cannot do under its current design, not things we have not built yet.

No scheduled jobs on user data.

The current architecture cannot run timed actions on your conversations. Those would require holding decryption keys for users who are not currently active, which the architecture does not permit.

No analytics over decrypted user contents.

The system cannot run queries that read across users' conversation contents. Encryption is per-user; there is no global view of decrypted data.

No standing support access to your conversations.

Support and engineering staff have no path to decrypt your data unless you explicitly enable temporary access (see "Temporary support access" below). There is no administrator override.

No model training on your data.

Your conversations are not used to train any model, ours or anyone else's.

These are architectural constraints, not marketing promises. Adding any of these capabilities would require building and deploying a separate service with its own credentials and key-derivation path. That service does not exist. If it ever does, the change will be visible here and in our changelog.

Temporary support access

If you need support that requires looking at data inside your vault, you must enable that access yourself. The toggle is in your dashboard settings.

When enabled

  • Access is granted for a maximum of 24 hours.
  • The grant expires automatically; no manual revocation is required.
  • Grant and access events are visible to you in your dashboard.

When off (default)

  • No support or engineering staff can read your conversations.
  • There is no internal escalation path that bypasses the toggle.
  • This applies to every account on the system without exception.
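The grant semantics above can be modeled in a few lines. This is an illustrative model, not ContinueVault's code: the class name and fields are invented, but it captures the two rules that matter — the 24-hour cap and automatic expiry with no revocation step.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

MAX_GRANT = timedelta(hours=24)  # hard cap, regardless of what is requested

class SupportGrant:
    """Hypothetical model of the user-enabled support access grant."""

    def __init__(self, requested: timedelta, now: Optional[datetime] = None):
        start = now or datetime.now(timezone.utc)
        # Requests longer than 24 hours are clamped, never honored.
        self.expires_at = start + min(requested, MAX_GRANT)

    def is_active(self, now: Optional[datetime] = None) -> bool:
        # Expiry is a pure time comparison: no one has to remember to revoke.
        return (now or datetime.now(timezone.utc)) < self.expires_at
```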

What we don't capture from CLI sessions

For Claude Code and Codex CLI sessions, ContinueVault's default behavior is conversation-only capture. The session files contain two kinds of content:

  • Conversational content: your prompts, the AI's reasoning and responses, decisions, dead-ends, design discussion, and any images you deliberately attached.
  • Tool inputs and outputs: file reads, file writes, diff blocks, shell command inputs and outputs, contents of files the AI touched, and tool-generated images.

ContinueVault's local sync agent filters tool inputs and outputs out before any data leaves your machine. The cv-sync agent runs locally on your computer, reads the session files Claude Code and Codex already write to disk, applies the filter rules, and uploads only the filtered output to your vault. In default conversation-only mode, your code never touches the network.
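The filtering step can be sketched as below. The entry-type labels (`user`, `assistant`, `tool_use`, `tool_result`) are assumptions about the session-file schema for illustration only; the real cv-sync agent's rules may differ. The key property is where it runs: on your machine, before upload.

```python
import json

# Entry types treated as conversational vs. tool traffic.
# These labels are illustrative, not the documented schema.
CONVERSATIONAL = {"user", "assistant"}

def filter_session(jsonl_text: str, full_capture: bool = False) -> list:
    """Apply conversation-only filtering before anything leaves the machine."""
    kept = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        if full_capture or entry.get("type") in CONVERSATIONAL:
            kept.append(entry)
        # Tool inputs/outputs (file contents, diffs, shell output) are dropped
        # here, locally — they never reach the network in the default mode.
    return kept
```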

Full session capture is available only through an explicit user-enabled setting. It is off by default and shows a clear confirmation before activation. The setting can be controlled globally in your dashboard and overridden per-project, so a project under client NDA can stay in conversation-only mode even if the global default is full sync.
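The global-plus-override resolution is a simple precedence rule. A minimal sketch, with invented names (`effective_mode`, the mode strings) standing in for whatever the real settings store uses:

```python
def effective_mode(global_default: str, overrides: dict, project: str) -> str:
    # A per-project override beats the global default, so one project can
    # stay in conversation-only mode while the account default is full sync.
    return overrides.get(project, global_default)
```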

A codebase is a different trust class

ContinueVault is not a code indexer. It does not scan repos, watch your IDE, or upload your source tree. It does not monitor arbitrary file changes on your system. The CLI capture surface is bounded to the AI session files that Claude Code and Codex already write locally — and even there, the default is conversation-only.

This is a deliberate scope boundary. Adding repo indexing, IDE monitoring, or terminal capture would require building separate services with separate credentials and separate trust models. None of those services exist. ContinueVault's category is continuity infrastructure for AI work, not developer workflow tooling.

Not purely local-first — by design

ContinueVault is not a local-only product. It uses server-side encrypted storage so your conversation context can move across browsers, devices, accounts, and AI platforms. That portability is a core feature, not a workaround.

This is a deliberate tradeoff. A purely local-first product makes cross-device continuity a different problem, usually requiring explicit sync or a separate trust model. We chose server-side encrypted storage with the constraints described above as the right balance for the use case (continuity across AI platforms) while keeping the trust posture honest.

If a purely local product is what you want, ContinueVault may not be the right tool — and that is a legitimate position. We document it here rather than soft-pedaling it.

Encryption summary

Data at rest: Per-user encrypted vaults using industry-standard database encryption.
Data in transit: TLS 1.2 or higher on all customer-facing endpoints.
Export archives: Encrypted; retrieval uses HMAC-signed download tokens that are one-time and time-limited.
Key handling: Per-user keys derived per worker/session assignment from authenticated identity material. The identity material used to authorize access is not stored as a reusable server-side credential.
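A time-limited HMAC download token of the kind described above can be sketched with the standard library. This is an illustrative shape, not ContinueVault's token format; in particular, one-time-use enforcement (server-side redemption tracking) is omitted here.

```python
import hashlib
import hmac
import time

def sign_download_token(secret: bytes, archive_id: str, ttl: int = 3600) -> str:
    # Bind the archive id to an expiry timestamp and sign both together,
    # so neither can be altered without invalidating the signature.
    expires = str(int(time.time()) + ttl)
    payload = f"{archive_id}:{expires}"
    sig = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_download_token(secret: bytes, token: str) -> bool:
    archive_id, expires, sig = token.rsplit(":", 2)
    payload = f"{archive_id}:{expires}"
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison first, then the expiry check.
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()
```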

Account deletion and data export

Both are available on every tier, including Free.

  • Export: downloads the entire contents of your vault as encrypted archives, in a documented format for portability.
  • Account deletion: permanently removes your per-user encrypted vault and scrubs personally identifying information from global records. Deletion is processed within 30 days.

Neither is gated behind upgrades or retention rules. Portability is treated as a baseline trust property, not a paid feature.

Reporting and disclosure

Found a claim on this page that does not match the system's actual behavior? Please report it to security@continuevault.com.