Privacy & Security

Learn how Verida APIs enforce privacy and security of user data within a decentralized environment

Verida APIs are built on the first iteration of the Verida Confidential Compute infrastructure. They are designed to strike the optimal balance between decentralization, security, privacy, and performance.

Confidential Compute

Verida APIs run within a confidential computation environment. This means that no one, not even the underlying infrastructure provider running the API server, can access user data or view the computation occurring on the node.

The first iteration of Verida's Confidential Compute nodes runs inside Marlin Oyster Trusted Execution Environments (TEEs). These nodes provide numerous security guarantees and capabilities:

  • Computation occurs within a secure enclave where the node operator has zero visibility

  • SSL terminates within the secure enclave, eliminating man-in-the-middle attacks

  • Server code can be verified, via remote attestation, to match the expected build (see the sketch below)

  • No data is stored on external disks; all data is secured in memory

The Verida Foundation operates the first cohort of Confidential Compute nodes and will open participation to external node operators in the future.
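To illustrate the attestation guarantee above, a client can fetch an attestation document from a node and compare the enclave's code measurement against a known-good value before trusting it. The following is a minimal sketch only; the `/attestation` endpoint, response shape, and `EXPECTED_MEASUREMENT` value are hypothetical placeholders, not the actual Marlin Oyster API.

```typescript
// Minimal sketch: verify a node's enclave measurement before trusting it.
// NOTE: the endpoint path, response shape, and expected measurement below
// are hypothetical placeholders, not the actual Marlin Oyster API.

const EXPECTED_MEASUREMENT = "9f2b...a41c"; // hypothetical known-good code hash

interface AttestationDocument {
  measurement: string; // hash of the code running in the enclave
  publicKey: string;   // key generated inside the enclave
}

async function verifyNode(nodeUrl: string): Promise<AttestationDocument> {
  const res = await fetch(`${nodeUrl}/attestation`); // hypothetical endpoint
  if (!res.ok) throw new Error(`Attestation request failed: ${res.status}`);

  const doc = (await res.json()) as AttestationDocument;

  // Reject the node if it is not running the expected, audited server code.
  if (doc.measurement !== EXPECTED_MEASUREMENT) {
    throw new Error("Enclave is not running the expected code");
  }
  return doc;
}

// Usage: only send requests to nodes that pass attestation.
verifyNode("https://node1.example.com")
  .then(() => console.log("Node attested, safe to use"))
  .catch((err) => console.error(err));
```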

Confidential Storage

Verida APIs integrate the Verida Client SDK within the secure enclave on each Confidential Compute node. User data is synchronized from the Verida Network, decrypted, and then loaded into memory for rapid access via API endpoints.

As such, user data retains all the security and privacy benefits of the Verida Network, and it never leaves the secure enclave except via user-authorized API requests.
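For example, a user-authorized request to query data held in enclave memory might look like the following. This is a sketch only; the base URL, endpoint path, response shape, and schema URL are illustrative placeholders rather than the actual Private Data API surface.

```typescript
// Minimal sketch of a user-authorized API request to a Confidential Compute
// node. The base URL, path, and response shape are illustrative placeholders.

const API_BASE = "https://api.verida.example"; // placeholder, not the real host
const AUTH_TOKEN = process.env.VERIDA_AUTH_TOKEN!; // token authorized by the user

async function queryUserData(schema: string): Promise<unknown[]> {
  // SSL terminates inside the enclave, so the request body is only ever
  // visible to the attested server code, not the node operator.
  const res = await fetch(`${API_BASE}/api/v1/query`, { // placeholder path
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${AUTH_TOKEN}`,
    },
    body: JSON.stringify({ schema, limit: 10 }),
  });

  if (!res.ok) throw new Error(`Query failed: ${res.status}`);
  const { results } = (await res.json()) as { results: unknown[] };
  return results;
}

// Example: fetch recent records for a (placeholder) schema URL.
queryUserData("https://common.schemas.verida.io/social/email/v0.1.0/schema.json")
  .then((rows) => console.log(rows.length, "records"))
  .catch(console.error);
```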

LLM Privacy [alpha]

Important privacy notice for the alpha release

The large language models (LLMs) currently used in the Verida APIs are not running in a Verida Confidential Compute secure enclave. Secure enclaves do not yet support the GPU access necessary for performant LLM operations.

The alpha release provides the option of using Amazon Web Services Bedrock, Groq or your own LLM.

This is a temporary solution while we collaborate with partners to enable LLMs to run efficiently and cost effectively within secure enclaves. While this is not perfect, we believe the AWS Bedrock privacy architecture and security model and the Groq privacy policy provide adequate protections for this alpha release. Those with highly sensitive requirements can still provide their own custom LLM.

Custom LLM

You can provide your own OpenAI-compatible LLM endpoint and API key through the Private Data APIs and the Private Data Bridge developer interface.
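For reference, "OpenAI-compatible" means the endpoint accepts the standard chat completions request format. A minimal sketch follows, assuming a self-hosted endpoint; the base URL, API key, and model name are placeholders you would supply yourself.

```typescript
// Minimal sketch of calling a self-hosted, OpenAI-compatible LLM endpoint.
// The base URL, API key, and model name are placeholders you supply yourself.

const LLM_BASE_URL = "https://llm.internal.example/v1"; // your own endpoint
const LLM_API_KEY = process.env.CUSTOM_LLM_API_KEY!;

async function chat(prompt: string): Promise<string> {
  // Standard OpenAI-style chat completions request.
  const res = await fetch(`${LLM_BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${LLM_API_KEY}`,
    },
    body: JSON.stringify({
      model: "llama-3.1-70b", // whatever model your endpoint serves
      messages: [{ role: "user", content: prompt }],
    }),
  });

  if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Summarize my unread emails.").then(console.log).catch(console.error);
```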

Source Code

The source code for the User Data APIs is open source and is contained within the Data Connector Server GitHub repo.
