Project Veraison (VERificAtIon of atteStatiON) builds software components that can be used to build an Attestation Verification Service.
Background
Attestation is the system by which an entity produces evidence about itself that another party can use to evaluate the trustworthiness of that entity. A critical part of establishing trust based on that evidence is Verifying what is presented to ensure it meets a policy established to prove trustworthiness. The act of Verification will often require comparing evidence against reference values or checking cryptographic signatures from a known trust anchor. This is a complex task for an arbitrary user of a system to perform, and it is often delegated to a trusted Verification Service instead. The nature of the Service will depend upon the deployment: it may be a local software component, or it may reside as a network- or internet-accessible component. The Verification Service will need up-to-date data for the process of appraising the evidence presented to it. This must be obtained from authoritative sources, which normally implies establishing business relationships between the Verification Service and those sources.
What part does Veraison play in this?
Given the above, it can be challenging to build just one Verification Service solution which can address all deployments for a technology that needs to produce Attestation reports to prove its trustworthiness. If that then implies that each deployment needs a custom service, there is a significant software barrier, and hence cost of entry, to establishing a system that can be used in a secure manner. Veraison aims to bring consistency and convenience to solving this problem by building the software components that can be used to build Attestation Verification Services. The components encompass a core structure of verification and provisioning pipelines that can be extended to support specific attestation technologies by the use of plugins. The core components relate to the deployment environment via abstractions. The Veraison project will build some reference implementations in order to prove the integration principles of those components. The reference implementations will not be production quality, but may provide a convenient basis for substantive deployments.
Why "Veraison"?
Veraison (pronunciation "ver-ayy-sjon") is a term used in winemaking to indicate the point at which grapes start to ripen. It was chosen for this project mainly because it could fit into the project intent (verification of attestation) and because it didn't clash with any other project name. There is also an allegorical aspect about good/bad verifications being reflected by ripe / unripe states or even common uses of green / red colours, but musing on such interpretations is left to the reader!
Feedback
Source code for this book is located on GitHub. Please submit any feedback via issues or updates via pull requests there.
Overview
This is an overview of the major components that are part of Veraison, and the related git repositories.
Remote attestation verification service
The main part of the project is the remote attestation verification service implemented in the services repo. This service provides a REST API for verifying attestation evidence based on provisioned endorsements and policies.
Client-side libraries for the services API are inside apiclient (for Go), c-apiclient (for C), and rust-apiclient (for Rust) repos.
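For example, a verification client can drive the challenge-response flow via the Go apiclient. The sketch below follows the apiclient's documented usage, but the exact names may differ between versions; the endpoint URL and the evidence builder are placeholders.
// Sketch of remote verification via the Go apiclient; type and field names
// follow the apiclient README and may differ across versions.
package main

import (
	"fmt"

	"github.com/veraison/apiclient/verification"
)

// myEvidenceBuilder wraps the session nonce into scheme-specific evidence.
type myEvidenceBuilder struct{}

func (myEvidenceBuilder) BuildEvidence(nonce []byte, accept []string) ([]byte, string, error) {
	// Scheme-specific: assemble and sign a token over the nonce here.
	return []byte("evidence"), "application/psa-attestation-token", nil
}

func main() {
	cfg := verification.ChallengeResponseConfig{
		NonceSz:         32,
		EvidenceBuilder: myEvidenceBuilder{},
		NewSessionURI:   "https://veraison.example/challenge-response/v1/newSession", // placeholder
	}
	attestationResult, err := cfg.Run() // returns a signed EAR on success
	if err != nil {
		panic(err)
	}
	fmt.Println(string(attestationResult))
}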
There are also command line interface applications for interacting with the service:
- cocli for compiling and provisioning PSA and CCA endorsements and trust anchors.
- evcli for verifying PSA and CCA evidence.
- pocli for managing verification policies.
Standards driven work
The Veraison Project supports Attestation related working groups in standards bodies, in particular the IETF & TCG. This set of repos provides test bed implementations for some of the standards work, as related to Veraison services.
EAT
eat: Golang library for manipulating Entity Attestation Tokens (draft-ietf-rats-eat).
EAR
These libraries provide functions for working with EAR (EAT Attestation Results), an EAT/JWT serialisation of the Attestation Result for Secure Interactions (AR4SI) information model -- see draft-fv-rats-ear.
ear: Golang implementation of EAT Attestation Results
rust-ear: Rust implementation of EAT Attestation Results
c-ear: C implementation of EAT Attestation Results
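As a sketch, an EAR claims-set can be parsed with the Go ear library; the claims-set below is illustrative (made-up values), and the exact accessors and validation behaviour vary between library versions.
// Minimal sketch (assumed API): parse an EAR claims-set with the Go ear
// library and dump it back out.
package main

import (
	"encoding/json"
	"fmt"

	"github.com/veraison/ear"
)

func main() {
	// An illustrative EAR claims-set in its JSON form.
	raw := []byte(`{
	  "eat_profile": "tag:github.com,2023:veraison/ear",
	  "iat": 1666091373,
	  "ear.verifier-id": {"developer": "Veraison Project", "build": "N/A"},
	  "submods": {"PSA": {"ear.status": "affirming"}}
	}`)

	var ar ear.AttestationResult
	if err := json.Unmarshal(raw, &ar); err != nil {
		panic(err)
	}
	out, _ := json.MarshalIndent(&ar, "", "  ")
	fmt.Println(string(out))
}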
cmw: implementation for RATS WG Conceptual Message Wrappers (draft-ftbs-rats-msg-wrap) for attestation related data encapsulation.
Verifier Provisioning
These libraries provide support for the standard information models used to convey data to a Verifier.
corim: manipulation of Concise Reference Integrity Manifest (CoRIM) and Concise Module Identifier (CoMID) tags. Also includes the cocli CLI tool, which assists users in creating CoRIM & CoMID tags.
swid: SWID and CoSWID manipulation library.
COSE
go-cose: Go library for CBOR Object Signing and Encryption (COSE).
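For example, a COSE_Sign1 message can be signed and verified with go-cose along these lines (a minimal sketch using the v1 API):
// Sketch: COSE Sign1 round-trip with go-cose.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"

	"github.com/veraison/go-cose"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

	// Sign.
	signer, _ := cose.NewSigner(cose.AlgorithmES256, key)
	msg := cose.NewSign1Message()
	msg.Headers.Protected.SetAlgorithm(cose.AlgorithmES256)
	msg.Payload = []byte(`{"hello": "COSE"}`)
	if err := msg.Sign(rand.Reader, nil, signer); err != nil {
		panic(err)
	}
	wire, _ := msg.MarshalCBOR()

	// Verify on the receiving side.
	verifier, _ := cose.NewVerifier(cose.AlgorithmES256, key.Public())
	var rcvd cose.Sign1Message
	if err := rcvd.UnmarshalCBOR(wire); err != nil {
		panic(err)
	}
	fmt.Println("signature valid:", rcvd.Verify(nil, verifier) == nil)
}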
Architecture specific libs
This collection of libraries provides manipulation and verification functionality for Attestation formats of various architectures.
psatoken: Platform Security Abstraction (PSA) Attestation Token manipulation library.
ccatoken: a library for the Arm Confidential Computing Architecture (CCA) Attestation Token.
dice: library providing support functions for manipulating various profiles of DICE.
parsec: Library support for handling the Parsec Key Attestation formats used in the attested TLS PoC.
enact-demo: EnactTrust TPM/Veraison interop demo and related docs
See also: repo guide inside docs repo.
Remote Attestation Verification Services
Veraison provides a reference implementation of remote attestation verification services. This implementation provides a REST API for:
- provisioning of trust anchors and endorsements/reference values
- verification of evidence based on a pre-defined attestation scheme and (optionally) deployment-specific policies
- management of deployment-specific policies
The implementation broadly adheres to the RATS architecture defined by RFC 9334. The core functionality is agnostic of attestation scheme details; scheme-specific logic is provided via plugins. OPA-based policies can optionally be used to augment the plugin-defined process.
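As an illustration of the plugin split, a scheme plugin's evidence-handling surface can be thought of as something like the following (a simplified, hypothetical sketch; the real interfaces live in the services repo and differ in detail):
// Hypothetical, simplified sketch of a scheme plugin's surface.
package main

type (
	Evidence  []byte
	ClaimsSet map[string]interface{}
)

// SchemeHandler captures the scheme-specific steps of the pipeline.
type SchemeHandler interface {
	// Media types of evidence this scheme can appraise.
	SupportedMediaTypes() []string
	// Check the cryptographic envelope using a provisioned trust anchor.
	ValidateIntegrity(ev Evidence, trustAnchor []byte) error
	// Pull the claims out of the (validated) evidence.
	ExtractClaims(ev Evidence) (ClaimsSet, error)
	// Compare the claims against endorsements / reference values.
	Appraise(claims ClaimsSet, endorsements [][]byte) error
}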
This implementation can be found on GitHub.
For installation instructions please see Deployments.
Architecture
Veraison services consist of the following:
- a provisioning service that provides provisioning REST API
- a verification service that provides challenge/response REST API
- a VTS (Veraison Trusted Services) service that serves as a trusted backend for the above
- a management service that provides policy management API
- a number of attestation scheme plugins
Optionally, a Keycloak server can be used for authentication, and a PostgreSQL or MySQL/MariaDB DBMS can be used for the endorsement, trust anchor, and policy stores.

Veraison broadly adheres to the RATS architecture, with the following exceptions:
- RATS separates provisioned artefacts into Endorsements and Reference Values. Veraison separates them into Endorsements and Trust Anchors. An "endorsement" in Veraison broadly includes what RATS terms "reference values", as well as "endorsements". A trust anchor in Veraison is something that can be used to verify the integrity of provisioned evidence (e.g. a CA certificate, or a public key).
- What RATS calls "policy" is divided in Veraison into policies and attestation schemes. A scheme in Veraison is implemented via a plugin and contains the common part of the RATS "policy" that is likely to be stable across deployments (how evidence and provisioned values should be appraised). Veraison also allows specifying additional policies, written in the Rego language, that can provide deployment-specific overrides of the results produced by a scheme.
Attestation results are generated in EAR format.
Device and Supply Chain modelling
Introduction
This document sets out the accumulated knowledge and/or assumptions about:
- the evidence available within a device which can be used for attestation
- models for firmware release lifecycles
- observations on how multiple products may be released from a single source and the implications for provisioning data
Why are these significant? The models here have driven the design of the Veraison system, in particular the way supply chain data is provisioned into the system and the Endorsement store is constructed to allow verification queries to access reference values to be used for evidence appraisal. Those design conclusions can be found in other documents.
The Veraison project welcomes feedback from real world examples as to the accuracy of the models and assumptions used here. The project will be seeking confirmation on these models from industry partners. If you have views on these topics, please provide relevant commentary via a GitHub issue or start a conversation on the Zulip channel.
Device evidence classification
The evidence within an attestation report can be categorised into the following classes. These classes are not an exhaustive set of all types of evidence that may be found within a report but are those necessary for correlating evidence with endorsement / reference value queries.
Trust Anchor
All devices are ASSUMED to contain a Trust Anchor (TA) or a descendant of a Trust Anchor. This is normally provisioned into the device at manufacturing time but may also be provisioned to the device on enrolling into a management structure. The TA takes the form of a key, or a seed from which an Attestation signing key can be derived. The attestation report will contain information identifying the Trust Anchor, referred to here as a Trust Anchor ID (TA-ID). A Verifier can use the TA-ID to locate an endorsement providing the public value of the relevant key to use to validate the signature on a report. The Trust Anchor may have unique-to-device or group scope (i.e. multiple devices contain the same trust anchor). The latter case has been used as one solution for privacy protection use cases.
Hardware Group
A device MAY contain a Hardware Group ID that is presented in attestation evidence. This is an identifier, provisioned during manufacturing, which can be used to classify the SoC used within the device and will be the same for any devices built using that SoC. Although termed a hardware ID, this identifier will in fact classify the immutable Root of Trust -- the unmodifiable and unmeasured first boot component -- as well as the specific hardware within the SoC. The identifier is a public value and has no security value, but it aids classification for a verifier. If no Hardware Group ID is included within the attestation evidence, then an equivalent group identifier can be provisioned into the verifier as a correlation to the Trust Anchor ID.
Firmware
A device is ASSUMED to produce evidence allowing the set of loaded firmware components to be identified. In some cases the device may summarise all loaded components into a single version number. In other cases a full set of measurements and metadata may be made available to identify the set of firmware components loaded.
Workload
An attestation report may include a series of measurements for the purpose of describing the workload running on a device. While a Veraison based attestation service can be capable of verifying such a series, there may not be a standard model for the reference values needed for this work. The workload is normally also divorced from the TCB and may require verification data to be obtained from a different supply chain from that used for device verification. This classification is therefore excluded from the device modelling assumptions.
Other Evidence
Other evidence may be contained within the Attestation Report that does not fall into the above categories. This evidence may need to be referenced as part of an evidence appraisal policy, but no additional classification for a group of evidence is known at the moment to affect the modelling of Endorsements for the project.
Note: attestation reports may contain additional complexities to modelling evidence, such as having multiple attesters within a device or receiving an encrypted report. However, it is expected that these can be handled as precursors to report verification by deconstruction into core parts in a pre-verification pass.
Diagram: relevant groupings for evidence
Diagram: Known Tokens
Firmware Lifecycles
Where an attestation report contains only a firmware version, the reference or "known good" value(s), specifying what the policy should compare the version with in order to make a determination of validity, may need to be provisioned from the supply chain. Alternatively, this value may be handled by alterations to the evidence appraisal policy.
If the attestation report contains a set of measurements describing the loaded firmware components, the situation is more complex but can offer the opportunity for a policy to make more nuanced appraisal choices. This case requires that the supply chain provisions information to the verifier about released firmware items for use by the appraisal policy.
At the same time, correlations can be established between firmware components and other evidence categories. These can be used to express policy appraisals such as 'this hardware requires a certain set of firmware components to be trustworthy'.
Correlations can also be established between different released versions of firmware as a component is upgraded or patched. This allows a policy to make determinations such as whether a device is running up-to-date firmware or whether a firmware component is marked as having some vulnerability. Note that a distinction is made between an update, which is more likely to involve multiple fixes or components released at one time, and a patch, which is more likely to be a single component change to address a specific issue.
The project has considered multiple models for likely firmware component lifecycles over revised releases. These can be classified into the following categories:
- Model 1:
- Updates: Release all components, whether changed or not (unchanged components have the same file released as on the previous occasion)
- Patches: Release only the patched (changed) component
- Model 2:
- Updates: As model 1
- Patches: Release all components; only patched components will be changed, unpatched components have the same file released as on the previous occasion
- Model 3:
- Updates & Patches: Always release all components; always rebuild all components whether changed or not.
- Note that if a vendor is following practices such as those from Reproducible Builds, then Models 2 & 3 are equivalent.
Metadata can be added at any time to any released component via the provisioning system.
Diagram: Firmware lifecycle models
Beware clashes from multiple products
The above classifications can be used to build a data model which will allow the relevant and related reference values / endorsements to be available for evidence appraisal. However, if a manufacturer releases multiple products using the same components, and tokens from these products are handled by the same verification service, the creation of the data model must take these distinct products into account to avoid false lifecycle graphs. This is necessary to avoid, say, the implication that a firmware component is not up to date because a new release is available which in fact applies only to a different product. A product identity can either be created from a combination of evidence fields or be a created entity that is correlated to the TA-ID.
Diagram: Multi Product Models
Queries
Overview
This document contains definitions of "standard" endorsement queries. Queries are the interface by which endorsements are exposed to the rest of the system. A query definition consists of the following parts.
- Name: the name is how a query is referenced. It must be unique in a deployment.
- Parameters: the descriptions of the parameters the query expects, i.e. the input against which the query is run.
- Result: the description of the matches returned by the query. (note: this documents the format of a single match -- the query result is a slice of such matches).
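As an illustration only (hypothetical Go shapes, not the actual store API), a query definition can be thought of as follows:
// Hypothetical sketch of the query interface described above.
package main

import "fmt"

type QueryArgs map[string]interface{}

// Match is one element of a query result; a result is a slice of matches.
type Match interface{}

// Query runs against the endorsement store with the given arguments.
type Query func(args QueryArgs) ([]Match, error)

func example(queries map[string]Query) {
	// Look up the hardware identifier for a platform ("acme-0123" is made up).
	matches, err := queries["hardware_id"](QueryArgs{"platform_id": "acme-0123"})
	if err != nil {
		panic(err)
	}
	fmt.Println(matches) // for hardware_id, one string match per result
}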
Standard Queries
hardware_id
Parameters
platform_id: a string representation of the platform identifier extracted from the evidence.
Result
A string containing the hardware identifier for the platform from which the evidence was extracted.
software_components
Parameters
platform_id: a string representation of the platform identifier extracted from the evidence.
Result
A slice of SoftwareEndorsement instances containing the measurements of endorsed software component versions associated with a particular platform.
Attestation Token use cases & assumptions
The following captures what is assumed about the nature of the attestation tokens received for verification.
A token is presented as a single 'blob' (byte stream).
Tokens are presented individually. If a batch mode submission is required, it is the responsibility of an API tier to pass them for verification one by one.
Encoding in transit is the responsibility of the API tier, the verification pipeline works on a byte stream.
The verification pipeline will receive some context information which will indicate or help to deduce the nature of the token and hence which plugins will be needed in the verification pipeline. In a dedicated deployment this may be hardwired, in an environment supporting multiple token types, this is likely to come from the API tier.
A token may have an encrypted body. Assuming the correct decryption key is available, the body will be decrypted and then passed into the pipeline in the same manner as an unencrypted token.
A presented token blob may actually contain multiple tokens, each of which will be handled -- e.g., deserialised and have its signing checked -- independently, such that an aggregation of all the evidence contained is extracted and presented for appraisal.
Bundled token models include both concatenation (e.g., a chain of certs packaged serially) and nesting (e.g., the value of a claim within one token may be a serialised independent token).
The set of endorsements / reference values required to appraise the evidence in a token is assumed to have been provisioned to the verifier service prior to the token being presented. The only exception to this is if an endorsement is included inline within the token and independently verifiable.
The exception to the above might be where the endorsement / reference values are held in an external service which the verifier must consult. In this case, the prior assumption applies to that external service.
Verification Machinery
Table of Contents
Verification From Ten Thousand Feet
The picture below is yet another way to slice the RATS architecture; it makes the (important) distinction between "identity endorsements" (e.g., verification keys) and "endorsed values" (e.g., certification status of the gadget, color, etc.), a distinction that is not explicit in RFC 9334.
This distinction becomes relevant when zooming in to the appraisal process where one kind or the other is used depending on the verification "stage."
flowchart TD
  R[(reference<br/>values)]
  Ev[(endorsed<br/>values)]
  Ei[(identity<br/>endorsement)]
  P[(appraisal policy<br/>for evidence)]
  VO([verifier<br/>owner])
  RFP([reference<br/>value<br/>provider])
  E([endorser])
  RP([relying<br/>party])
  A([attester])
  e((Evidence))
  a((Attestation<br/>Result))
  V([verifier])
  subgraph " "
    Ev
    Ei
  end
  subgraph " "
    R
  end
  subgraph " "
    P
  end
  VO --> P
  RFP --> R
  E --> Ev
  E --> Ei
  P --> V
  R --> V
  RP --> e
  A --> e
  Ev --> V
  Ei --> V
  V --> a
  e --> V
  A -.-> RP
A Closer Look at the Appraisal Process
The picture below provides a closer look into the appraisal process, describing what a typical verification flow could look like. The flow is broken down into four separate stages:
1. Crypto Verification
2. Reference Values match
3. Endorsed Values Decoration
4. Attestation Result assembly and signing
Notes:
- The appraisal policy is an input to all the processing stages
- All non-final processing stages can output a "failure" signal that short circuits the appraisal
flowchart TD
  v1[crypto<br/>verification]
  v2[refval<br/>match]
  v3[endorsed values<br/>decoration]
  v4[AR assembly<br/>and signing]
  Ei[(identity<br/>endorsement)]
  e((Evidence))
  P[(appraisal policy<br/>for evidence)]
  skV>verifier signing key]
  a((Attestation<br/>Result))
  R[(reference<br/>values)]
  Ev[(endorsed<br/>values)]
  subgraph " "
    Ev
    Ei
  end
  subgraph " "
    R
  end
  subgraph " "
    P
  end
  subgraph stages["verifier stages"]
    v1
    v2
    v3
    v4
  end
  subgraph " "
    skV
  end
  e --> v1
  Ei --> v1
  v1 -- evidence claims-set --> v2
  P --> stages
  v2 -- evidence claims-set --> v3
  v3 -- evidence claims-set<br/>derived claims-set --> v4
  skV --> v4
  v4 --> a
  R --> v2
  Ev --> v3
  v1 -- failure --> v4
  v2 -- failure --> v4
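The composition of the four stages, including the failure short-circuit into the final stage, can be sketched as follows (hypothetical helper names; the actual services code is structured differently):
// Hypothetical sketch of the four-stage appraisal flow described above;
// the helper names are illustrative and do not come from the services repo.
package main

type (
	Policy       struct{}
	ClaimsSet    map[string]interface{}
	SignedResult []byte
)

// Stage bodies are elided; each stage consults the appraisal policy.
func cryptoVerify(evidence []byte, p Policy) (ClaimsSet, error) { return ClaimsSet{}, nil }
func matchRefValues(c ClaimsSet, p Policy) error                { return nil }
func decorate(c ClaimsSet, p Policy) ClaimsSet                  { return ClaimsSet{} }
func assembleAndSign(c, d ClaimsSet, failure error, p Policy) SignedResult {
	return nil
}

func appraise(evidence []byte, p Policy) SignedResult {
	claims, err := cryptoVerify(evidence, p) // stage 1: crypto verification
	if err != nil {
		return assembleAndSign(nil, nil, err, p) // failure short-circuits to stage 4
	}
	if err := matchRefValues(claims, p); err != nil { // stage 2: refval match
		return assembleAndSign(claims, nil, err, p)
	}
	derived := decorate(claims, p)                  // stage 3: endorsed values decoration
	return assembleAndSign(claims, derived, nil, p) // stage 4: AR assembly and signing
}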
Crypto Verification
Input:
- Evidence to be verified
- Identity endorsements
- Look up the matching identity endorsement / trust anchor (e.g., a raw public key, an x509 cert, etc.)
- Use the key material to verify the Evidence's cryptographic envelope
Output:
- the (cryptographically verified) Evidence's claims-set
flowchart TD
  v1[crypto<br/>verification]
  Ei[(identity<br/>endorsement)]
  e((Evidence))
  P[(appraisal policy<br/>for evidence)]
  Ecs((evidence<br/>claims-set))
  FAIL((("failure")))
  v1 --> FAIL
  Ei --> v1
  e --> v1
  v1 --> Ecs
  P --> v1
Reference Values match
Input:
- Evidence claims-set
- Reference values
- Look up any applicable reference values
- Match reference values against the claims-set
Output:
- (validated) Evidence claims-set
flowchart TD
  v2[refval<br/>match]
  P[(appraisal policy<br/>for evidence)]
  R[(reference<br/>values)]
  Ecs1((evidence<br/>claims-set))
  Ecs2((evidence<br/>claims-set))
  FAIL((("failure")))
  v2 --> FAIL
  v2 --> Ecs2
  R --> v2
  Ecs1 --> v2
  P --> v2
Endorsed Values Decoration
Input:
- The Evidence claims-set
- Endorsed values
- Look up applicable endorsed values
- decide (based on policy) which endorsed values can be added to the claims-set
Output:
- Evidence claims-set
- Derived claims-set
flowchart TD
  v3[endorsed values<br/>decoration]
  P[(appraisal policy<br/>for evidence)]
  Ev[(endorsed<br/>values)]
  Ecs1((evidence<br/>claims-set))
  Ecs2((evidence<br/>claims-set))
  Dcs((derived<br/>claims-set))
  FAIL((("failure")))
  Ev --> v3
  Ecs1 --> v3
  v3 --> FAIL
  v3 --> Dcs
  v3 --> Ecs2
  P --> v3
Attestation Result assembly and signing
Input:
- Evidence and Derived claims-set, or a failure signal
- the Verifier private key that signs the attestation result
- Determine the attestation result claims-set based on the input claims-set and policy
- Sign the final attestation result statement
Output:
- Signed attestation results
flowchart TD
  v4[AR assembly<br/>and signing]
  P[(appraisal policy<br/>for evidence)]
  skV>verifier signing key]
  a((Attestation<br/>Result))
  Ecs((evidence<br/>claims-set))
  Dcs((derived<br/>claims-set))
  FAIL((("failure")))
  FAIL --> v4
  Ecs --> v4
  Dcs --> v4
  P --> v4
  skV --> v4
  v4 --> a
Veraison in a TEE
Abstract Model
A Veraison services workload runs inside a TEE.
Keys
The TEE has its own attestation key pair. At start-up, the workload generates its signing key-pair, which it uses to sign attestation results.
Appraisal
We assume a pre-existing trust relationship between the relying party and the TEE platform, i.e. the relying party trusts the platform's attestation key to be associated with a genuine TEE.
When appraisal is requested by a relying party (or an attester, if in passport mode), the service replies with an attestation result that, along with the appraisal result for the submitted evidence, contains the most recently obtained workload and key attestation.
The relying party (or an auditor in its stead) can verify that:
- The key used to sign the attestation result matches the attested key (which is either published at a known location, or inlined in the attestation result);
- The security state of platform and workload is as expected, and that their measurements are fresh enough.
This binding scheme guarantees that the workload's identities (both as a cryptographic signer, and as a piece of running code & configuration) cannot be separated. This prevents a forwarding attack in which a rogue verifier uses platform evidence obtained in some way from a genuine verifier, but not correlated to the outer signing key, to trick the relying party into thinking that platform and workload security state is as expected.
Note: if key attestations are refreshed less frequently than the expected rate of appraisals from the same relying party, a hash of the key attestation can be used instead of inlining it in full, which reduces bandwidth requirements.
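A hypothetical sketch of the relying party's checks follows (the names below are illustrative; the Nitro-specific instantiation of these elements is described in the next section):
// Hypothetical sketch of the relying party's checks described above.
package main

import (
	"bytes"
	"errors"
	"time"
)

// KeyAttestation is the platform-signed binding of workload and key.
type KeyAttestation struct {
	Measurements [][]byte  // workload measurements (e.g. PCRs)
	AttestedKey  []byte    // public key bound into the platform evidence
	IssuedAt     time.Time // freshness timestamp
}

func checkBinding(ka KeyAttestation, arSigningKey []byte, expected [][]byte, maxAge time.Duration) error {
	// 1. The key that signed the attestation result must match the attested key.
	if !bytes.Equal(ka.AttestedKey, arSigningKey) {
		return errors.New("attestation result signer does not match attested key")
	}
	// 2. The security state of platform and workload must be as expected.
	if len(ka.Measurements) != len(expected) {
		return errors.New("unexpected number of measurements")
	}
	for i, m := range ka.Measurements {
		if !bytes.Equal(m, expected[i]) {
			return errors.New("unexpected workload measurement")
		}
	}
	// 3. The measurements must be fresh enough.
	if time.Since(ka.IssuedAt) > maxAge {
		return errors.New("stale key attestation")
	}
	return nil
}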
Nitro Instantiation
When run in a Nitro Enclave, the abstract protocol elements are instantiated using the following claims in the `AttestationDocument` payload:
- Platform trust anchor: there is no such thing in Nitro; one has to trust AWS via the AWS Nitro Attestation PKI, which includes a root certificate for the commercial AWS partitions.
- Workload measurements: `pcrs["PCR0"]` (Enclave image file), `pcrs["PCR1"]` (Linux kernel and bootstrap), and `pcrs["PCR2"]` (Application).
- Workload signing public key: `public_key`
- Freshness timestamp: `timestamp`
- Attestation key certificate: `certificate`
The complete `AttestationDocument` is provided below:
AttestationDocument = {
module_id: text, ; issuing Nitro hypervisor module ID
timestamp: uint .size 8, ; UTC time when document was created, in
; milliseconds since UNIX epoch
digest: digest, ; the digest function used for calculating the
; register values
pcrs: { + index => pcr }, ; map of all locked PCRs at the moment the
; attestation document was generated
certificate: cert, ; the infrastructure certificate used to sign this
; document, DER encoded
cabundle: [* cert], ; issuing CA bundle for infrastructure certificate
? public_key: user_data, ; an optional DER-encoded key the attestation
; consumer can use to encrypt data with
? user_data: user_data, ; additional signed user data, defined by protocol
? nonce: user_data, ; an optional cryptographic nonce provided by the
; attestation consumer as a proof of authenticity
}
cert = bytes .size (1..1024) ; DER encoded certificate
user_data = bytes .size (0..1024)
pcr = bytes .size (32/48/64) ; PCR content
index = 0..31
digest = "SHA384"
Notes
- The signed `AttestationDocument` can be stored in the discovery API endpoint of the verification frontend in an `ear-verification-key-attestation` claim.
- We could use `user_data` in case we want to stash Veraison-specific data in the `AttestationDocument`, for example as a bstr-wrapped CBOR/JSON map.
- The `nonce` field is conveniently optional, which plays well with our timestamp-based freshness model.
Deployments
A deployment contains the Veraison services executables along with configuration and other artefacts that are needed for them to work together. Essentially, it is an installation of Veraison services.
Currently, the following deployments are supported:
- `aws`: installs Veraison on AWS (requires an AWS account)
- `debian`: creates a .deb package
- `docker`: builds and installs Veraison inside Docker containers
- `rpm`: creates a .rpm package
- `native`: builds and installs Veraison on the current system, using the existing Go toolchain
What deployment should I use?
The `native` deployment is the quickest to build and easiest to work with. If you want to try out Veraison, this is usually the way to go.
The `docker` deployment has fewer prerequisites and runs inside a more controlled environment; however, it is much slower to build. If you have trouble building or running Veraison natively, the `docker` deployment is worth trying instead.
The `debian` and `rpm` deployments create packages that may be installed on other systems.
The `aws` deployment runs Veraison on AWS. This serves as a reference for a more "realistic" deployment.
AWS Deployment
This deployment runs Veraison services and a Keycloak instance on EC2 instances behind load balancers. An RDS instance running Postgres is used for key-value stores, and a memcached instance is used for session management.
Dependencies
This deployment depends on the `debian` deployment (which, in turn, depends on the `native` deployment). Please see its README for the dependencies list.
Additionally, the following dependencies are required specifically for AWS deployment:
- `curl`: used to transfer the Debian package to the EC2 node.
- `openssl`: used to generate TLS certs (note: unlike with the `native` deployment, where pre-generated certs may optionally be used, cert generation is mandatory for this deployment, as the certs must be specific to the created EC2 instance).
- `packer`: used to build AMI images using temporary EC2 instances.
- `psql`: Postgres client used to initialise the stores (may be packaged on its own or as part of `postgres`, depending on the platform).
- A number of Python packages used by the deployment script. Please see requirements.txt for details.
`curl` and `openssl` should be available from your OS's package manager. Python dependencies are installable via `pip`/PyPI. For `packer`, please see its documentation.
Bootstrap
To simplify dependency installation, the deployment script implements bootstrap for Arch, Ubuntu, and MacOSX (using homebrew).
git clone https://github.com/veraison/services.git
cd services/deployments/aws
make bootstrap
(this will only work on the above-mentioned platforms).
AWS account
You need an existing AWS account. Please see the `boto3` documentation for how to configure the `aws` CLI to access this account.
The AWS credentials will also need to be available to the `packer` tool. Where `packer` looks for them is described in the `packer` docs.
The simplest way to configure credentials is probably via Single Sign On (SSO).
1. Run `aws configure sso` and follow the interactive wizard to specify configuration.
2. Make sure the configuration works by doing `aws sso login`. You will need to periodically re-do this step when your current login expires (usually around once a day).
3. Create an `AWS_PROFILE` environment variable pointing to the profile created in step (1): `export AWS_PROFILE=<YOUR PROFILE NAME>`
The steps above are not the only way to establish AWS credentials. Please refer to the documentation linked above for alternatives.
Domain and certificate
Finally, you need a domain registered in Route53, with a corresponding certificate created in ACM. If you already have a domain with a different registrar, you will need to transfer it to Route53.
The certificate MUST cover subdomains as well. For example, if you have registered `my-domain.com` in Route53, the certificate should have `*.my-domain.com` in its Subject Alternative Names.
For creating a new domain, please see Register a new domain or Transferring registration for a domain. For information on setting up a certificate in ACM, please see AWS Certificate Manager.
Quickstart
Before creating a deployment, you need to provide account-specific configuration that specifies the IDs of the VPC and subnets that will be used for the deployment as well as the CIDR that will be granted access to the deployment. Please use misc/arm.cfg for an example.
Once the account-specific config file is created, set the `AWS_ACCOUNT_CFG` environment variable to point to it, and execute `make deploy` to create the deployment.
export AWS_ACCOUNT_CFG=misc/arm.cfg # replace with path to your config
make deploy
note
This could take a while -- in the order of 45 minutes, depending on how responsive AWS is.
The deployment can be accessed via the CLI front end:
source env/env.bash # for bash, or alternatively, env/env.zsh for Zsh users
veraison status
This should display the DNS name and IP address of the instance and show Veraison services as active and running.
To make sure the deployment works, you can run through the end-to-end flow. For example:
# env/env.bash must be sourced
../../end-to-end/end-to-end-aws provision
# followed by
../../end-to-end/end-to-end-aws verify rp
Finally, to remove the deployment, you can run
make really-clean
Deployment overview
Veraison is deployed into a dedicated VPC created within the AWS region specified by `VERAISON_AWS_REGION` inside `deployment.cfg` (the region must contain at least two availability zones). The address space for the VPC is determined by `VERAISON_VPC_CIDR`. It must be large enough to accommodate four subnets of equal size, with a subnet being large enough to hold the expected number of instances (at least a `/16` CIDR is recommended).
The VPC spans two availability zones, each containing a public and a private subnet. The public subnets host load balancers for the Veraison services and Keycloak. The first public subnet also hosts the sentinel EC2 instance, which is used to configure the RDS database. The private subnets host the auto-scaled service instances, the Keycloak instance, and the RDS instance.
The deployment adds three subdomains to the domain registered in Route 53 -- one each for the services load balancer, the Keycloak load balancer, and the sentinel instance.
Deployment bring up and tear down
The `make` calls shown above just invoke `deployment.sh` with an appropriate command: `make bootstrap` calls `deployment.sh bootstrap`, `make deploy` calls `deployment.sh bringup`, and `make really-clean` calls `deployment.sh teardown`.
`deployment.sh` loads the environment from `deployment.cfg` and the additional file specified by `AWS_ACCOUNT_CFG`, and executes the function associated with the specified command, which, in turn, calls through to the `bin/veraison` script.
The first time the `veraison` script is invoked, `--deployment-name` should be specified before the subcommand to indicate which deployment the script's subcommand should apply to. The script will remember it for future invocations, so there is no need to specify it again unless you want to switch the current deployment. `deployment.sh` takes the value for this flag from the `VERAISON_AWS_DEPLOYMENT` environment variable (specified inside `deployment.cfg`).
Bring up
The bring up function creates the deployment. The deployment is created in stages, with each stage corresponding to a subcommand of the `veraison` script.
The bring up stages are:
veraison configure --init [...]
veraison create-deb
veraison create-key-pair
veraison create-vpc-stack
veraison create-sentinel-image
veraison create-rds-stack
veraison update-security-groups
veraison setup-rds
veraison create-signing-key
veraison create-services-image
veraison create-keycloak-image
veraison create-services-stack
configure
The `configure` command is used to configure the deployment. It has optional arguments to specify individual configuration points for the deployment (use `veraison configure --help` to list them). Configuration will be cached locally and used by subsequent commands.
The `--init` flag indicates that this is an initial configuration for the deployment. Any arguments not specified by this command will be set to default values (or an error will be raised if no defaults exist). Without the `--init` flag, only the specified parameters will be configured (this can be used to update an existing configuration).
create-deb
This creates a Debian package for Veraison services using the debian deployment. The package is used to create the services image later.
create-key-pair
This creates an AWS key pair that will be used to configure access to EC2 instances.
create-sentinel-image
Creates an AMI image for the sentinel instance. The instance will be provisioned with its own version of the `veraison` script, which can be found here. The sentinel will be used to mediate access to the RDS instance, which will not be directly accessible outside the VPC.
create-rds-stack
This creates the RDS and sentinel instances via a CloudFormation stack.
update-security-groups
This updates the sentinel instance's security group with your current public IP address, which will enable SSH access to the instance. If your current address is covered by `VERAISON_AWS_ADMIN_CIDR`, then this can be skipped.
If your ISP periodically changes your IP address, you may need to re-run this command in the future.
setup-rds
This initializes the RDS instance for use by Veraison services.
create-signing-key
Create a new signing key in JWK format and upload it to AWS Secrets Manager.
create-services-image and create-keycloak-image
These create the AMI images that will be used by the various EC2 instances in the deployment. Both of these need configuration for connecting to the RDS instance, which is why they must be created after `setup-rds`.
create-services-stack
Finally, this creates the load balancers, auto-scaling groups, etc to complete the deployment. This also creates CNAME records for subdomains under the Route 53 domain associated with the deployment.
Teardown
Teardown, like bring up, invokes a number of `veraison` subcommands:
veraison delete-stack services
veraison delete-stack rds
veraison delete-stack vpc
veraison delete-image keycloak
veraison delete-image services
veraison delete-image sentinel
veraison delete-signing-key
veraison delete-key-pair
veraison delete-deb
This is more-or-less a reverse of what was done during bring up and should be self-explanatory.
Managing the deployment
In addition to containing the commands used during deployment bring up and teardown, the `veraison` script also acts as a CLI front-end for the deployment.
veraison status
This shows a brief overview of the current state of the deployment -- which images have been created, which stacks have been deployed, and the currently running instances.
veraison cache
This dumps the local cache associated with the deployment. This includes the
settings configured earlier using veraison configure
, as well as various
outputs from bring up stages. Notably this contains keycloak_admin_password
,
which will allow you to log into the Keycloak web interface.
veraison stores
This shows the contents of the K-V stores used by the services. This includes endorsements and trust anchors provisioned via the provisioning API, and policies uploaded via the management API.
veraison clear-store
This will empty the stores of all existing values. This can be useful during testing.
veraison shell
This opens an interactive shell on the sentinel instance. This can be used to examine and debug the internal state of the deployment (aside from the API endpoints, the sentinel is the only thing accessible outside the VPC).
veraison dbshell
This opens an interactive Postgres shell on the RDS instance (via the sentinel). The instance is used both for the Veraison services' K-V stores (the `veraison` database -- should be the default) and for Keycloak (the `keycloak` database).
veraison follow [-s FROM] SERVICE_NAME
This outputs the last 10 minutes (by default) of logs from the specified service and continues to print out new log entries in (near) real-time until stopped with CTRL-C. How far back event history is printed can be changed via the `-s/--start` option, which can take either an absolute date (in one of the common formats) or a relative time. E.g.
# follow VTS service going back 2 hours
veraison follow -s 2h vts
# follow verification events since yesterday
veraison follow --start yesterday verification
# follow provisioning events since 20th July 2024 (note the US date ordering)
veraison follow --start 7/20/2024 provisioning
veraison follow --start 2024-07-20 provisioning
# follow VTS service going back 3 days
veraison follow --start "3 days ago" provisioning
The maximum unit of time for relative time is weeks (e.g. `2w` or `2 weeks`). If you want a longer time period, you have to convert it into days/weeks, or specify an absolute starting date.
veraison get-logs [-S SERVICE] [-s FROM] [-e TO] OUTPUT_DIRECTORY
This retrieves logs from CloudWatch and writes them to files in the specified output directory. The files are called `{service}-stdout.log`. If the output directory doesn't exist, it will be created (along with intermediate directories, if necessary). It's possible to specify a service using the `-S` option, in which case only logs for those services will be obtained; the option may be specified multiple times. It is possible to restrict the time period for which the logs will be obtained using the `-s/--start` and `-e/--end` options. Parameters for these options should be in the same format as for the `--start` option of the `follow` command (see above). There is also a shorter alias, `logs`, for this command. E.g.
# Obtain logs for all services writing them into files in "veraison-logs"
# subdirectory under current directory (creating it if necessary).
veraison get-logs ./veraison-logs
# Obtain VTS log for today writing it into ./veraison-logs/vts-stdout.log
veraison get-logs -S vts -s today veraison-logs
# Obtain logs for provisioning and verification services spanning time
# period from 10th December 2024 until yesterday
veraison logs -S provisioning -S verification -s 2024-12-10 \
-e yesterday veraison-logs
As with the `follow` command, the maximum time unit is weeks.
veraison delete-logs
Completely deletes logs from CloudWatch. This is mostly useful for debugging. This command is also aliased as `clear-logs`.
veraison restart-cwagent
Restarts the CloudWatch agent running on the service EC2 instances. This is necessary if something happens to the log groups in CloudWatch -- the agent will not upload events until restarted. (Note: the `delete-logs` command does this automatically, so there is normally no need to use this command unless you, e.g., manually delete the log groups in the AWS web console.)
Debian deployment
This creates a .deb package that may be installed on Linux distributions that use `dpkg` as the package manager (e.g. Debian, Ubuntu, Mint). The package installs Veraison into `/opt/veraison` and sets up systemd units for the services.
services.
Dependencies
In addition to the dependencies for the `native` deployment, `dpkg` must be installed. If you are on a Debian or Ubuntu system, `dpkg` will already be present, as it is the package manager for your system. If you are on Arch, you can install it via
# on Arch
pacman -S dpkg
If you are on another system, you will need to find out how to install `dpkg` on your own (first check that it is not the package manager for the system, then search the system's standard packages; if all else fails -- duckduckgo/brave is your friend).
Building the package
The location where the package will be built is specified with the `PACKAGE_DEST` environment variable. It defaults to `/tmp` if not set. To build the package, simply do
git clone https://github.com/veraison/services.git
cd services/deployments/debian
make deb
This will create `${PACKAGE_DEST}/veraison_deb_package/veraison_VERSION_ARCH.deb`, where `VERSION` is the Veraison version as reported by the `get-veraison-version` script, and `ARCH` is the architecture of your system as reported by `dpkg --print-architecture`.
Alongside the package, there will be a subdirectory with the same name but without the .deb suffix that contains the "sources" used to build the package.
Docker Deployment
This deploys Veraison as a series of Docker containers on a dedicated Docker network on the local machine.
Dependencies
You will need to make sure you have `make`, `git`, `docker` (and its builder), and `jq` installed on your system. Optionally, `tmux` can be used to conveniently monitor output from running services. The CLI front-end relies on the `bash` shell, which should already be installed on most systems, but if not, it would also need to be installed.
# on Ubuntu
sudo apt install bash make git docker.io docker-buildx jq tmux
# on Arch
sudo pacman -S bash make git docker docker-buildx jq tmux
# On MacOSX with Homebrew
brew install step coreutils gettext openssl sqlite3 protobuf jq
brew link --force gettext
important
Ubuntu users: make sure you have Docker installed natively (via `sudo apt install docker.io`), and not via `snap`, as the latter won't have access to the default build contexts under `/tmp`. You could, of course, change those locations via the config; however, we cannot guarantee that there won't be other issues -- if you decide to stick with `snap`, you're on your own.
Docker set up
If you're not already set up to use Docker on your system, you will need to make sure that it is enabled and running, and that you can access it by adding your account to the `docker` group:
sudo systemctl enable --now docker
sudo usermod -a -G docker $USER
The user group modification won't take effect until you log out and log back in; alternatively, you can force it in a specific shell with
Creating the Deployment
Once you have the pre-requisites, you can create the deployment on your local system:
git clone https://github.com/veraison/services.git
cd services/deployments/docker
make
Once the deployment is created, you can set up the front end by executing
source env.bash
inside a bash shell. There is an equivalent `env.zsh` for zsh (other shells are currently not supported). You can interact with the deployment via the `veraison` command. E.g.:
veraison status
You should see output similar to
vts: stopped
provisioning: stopped
verification: stopped
management: stopped
keycloak: stopped
To start Veraison services run:
veraison start
The provisioning service is now listening on port 9443, and verification service on port 8443 (these can be changed via configuration -- see below).
Use the `-h` option to see the full list of available commands.
Deployment Composition
The deployment exists on its own Docker network. Each of the services runs in its own container. Service logs and VTS stores reside inside docker volumes, independent of any container.
On the diagram above, the network is shown in orange, the volumes in purple, the containers in blue, and the images the containers are based on in green.
In addition to the service container images, there are also two images without permanent containers. They are used to spawn ephemeral containers via `docker run --rm`.
The `builder` image contains the Go toolchain and other tools necessary to build Veraison. It is used to compile the service binaries and their plugins, which are then used to construct the service containers. This image is also used to spawn debug shells.
The `manager` image is used by the `veraison` frontend to, among other things, access the volumes where the logs and provisioned values are stored.
Configuration
The deployment may be configured by changing the settings inside deployment.cfg. See the comments inside that file for the explanation of the individual configuration values.
note
The individual services rely on configuration inside `config.yaml.template`. This currently uses the `sqlite3` driver for the store backend, which limits request throughput and makes the deployment unsuitable for production or performance testing (for the latter, one can modify the template to use the `memory` backend for the stores).
`make` Options
You can set the following variables to influence the behaviour while executing make targets:
- `DEBUG_HOST`: sets the host name for the debug container. This is useful if you want to substitute the debug container in place of one of the service containers.
- `DEBUG_PORT`: if set with a port number, that port on the debug container will be forwarded to the host (the debug container is run with `-p $(DEBUG_PORT):$(DEBUG_PORT)`).
- `DOCKER_BUILD_FLAGS`: additional flags to be passed to Docker when building various images. This is passed to all image build invocations, so should only be used for globally-applicable flags such as `--no-cache`.
Debugging
Running `make debug` will pop up a "debug" shell inside the builder image. The image has the build toolchain and the debugger installed, and the `/veraison/build/` directory is mapped onto the root of the Veraison repo on your host. This means you can make modifications to the source using your preferred editor/IDE on the host, and then build and debug them in the debug shell.
Example: debugging scheme verification plugin
For example, say you are implementing a new attestation scheme, and are running into issues when attempting to verify your token (e.g. via an integration test case). There is not enough output in the logs to identify the issue, and so you'd like to set a breakpoint inside your code.
This needs to be done inside `vts-service`, as this is the component that loads the evidence handler plugin used on the verification path.
First, assuming you have the frontend set up and the services running (if not, you can do so with `source env.bash; veraison start`), you will need to stop the "production" VTS service with:
veraison stop vts
this will stop `vts-service` but should leave the other services running (you can verify that by running `veraison status`).
Next, enter the debug shell:
make DEBUG_HOST=vts-service debug
this will pop open a `bash` shell inside the builder. The `DEBUG_HOST` argument will set the hostname of the builder container. Here, we're setting it to the hostname of the VTS service we just stopped, so that the provisioning and verification services that are still running will now talk to our debug container instead.
(Note: if you were replacing a REST service, rather than VTS, you would also set the `DEBUG_PORT` variable to the value appropriate to the service; see deployment.cfg.)
Next, navigate to the location of the VTS service executable (keeping in mind that the root of the repo is mapped to `/veraison/build` inside the container), and start the debugger:
# inside the debug shell:
cd /veraison/build/vts/cmd/vts-service
debug
The `debug` command is an alias for `dlv debug` that will make sure that the debug executable is built with the evidence handling plugins compiled in (the same as if you ran `make SCHEME_LOADER=builtin`). This means you can set the breakpoint inside your code directly from here, and you don't need to worry about attaching the debugger to a separate plugin process.
Note: the config.yaml files in the service directories -- such as `/veraison/build/vts/cmd/vts-service/config.yaml` -- that are loaded by default are configured for running the service directly on the build host, outside docker. For running inside a docker container, the `config-docker.yaml` in the same location should be used. The `debug` command demonstrated above does that, so you don't need to worry about it. However, if you want to run the service executable directly, then you must remember to specify the appropriate config, e.g.
# inside the debug shell:
cd /veraison/build/vts/cmd/vts-service
./vts-service --config config-docker.yaml
Native deployment
The native deployment builds and installs Veraison services on the current system using its native tools.
Dependencies
To build Veraison services, you will need a Go toolchain, at least v1.22. If Go is already installed, you can check the version:
go version
If not, the instructions for downloading and installing Go on your platform can be found here.
You will also need the protobuf compiler, `protoc`, with plugins for Go. The plugins may be installed via `go install`:
go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.28
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.2
go install github.com/mitchellh/protoc-gen-go-json@latest
You will need GNU `make` (at least version 3.81) to drive the build process.
The deployment script is written for the `bash` shell, and relies on the `envsubst` utility (which comes as part of the `gettext` package on most systems), as well as some common UNIX utilities that are often already present on a system but may sometimes need to be installed: `find`, `grep`, `sed`.
important
MacOSX users: to enable `envsubst` and force it to link properly:
brew install gettext
brew link --force gettext
For a full deployment, the following tools will also be needed:
- `sqlite3` to initialize and manipulate the stores
- `jq` to display the stores' contents
- `openssl` (at least version 3.0) to generate the TLS certificates (not needed if you plan to use the included example certs)
- `step` to generate the signing key (not needed if you plan to use the included example key)
- `tmux` can optionally be used to run the deployment (not required)
All except `step` should be available from the package managers of most distributions. For installing `step`, please see its documentation.
Bootstrap scripts
To simplify dependency installation, we have bootstrap scripts available for Arch, Ubuntu, and MacOSX (using homebrew); see the `bootstrap/` subdirectory. Running
git clone https://github.com/veraison/services.git
cd services/deployments/native
make bootstrap
will try to automatically select the appropriate script to run.
Checking dependencies
You can check whether the required dependencies are present by running
make check
This will report an error if a mandatory dependency is missing, or warnings if one or more optional dependencies are missing.
Quick deployment
To get Veraison running quickly you can run
make quick-deploy
in this file's directory, or `make native-deploy` in the repo's top-level directory.
This will create a deployment under `~/veraison-deployment` using the included example key and certificates.
The deployment is controlled via a CLI frontend script, `~/veraison-deployment/bin/veraison`. You can create an alias for it, so that it may be invoked simply as `veraison`, by sourcing the deployment's env file:
source ~/veraison-deployment/env/env.bash
(the command above assumes you're using the `bash` shell; there is also an equivalent file for `zsh`).
Running Veraison
You have three options for running the services:
# option 1
veraison start-term
will spawn a virtual terminal for each service.
# option 2
export SESSION_NAME=veraison
veraison start-tmux $SESSION_NAME
will create a tmux session with the specified name (if no name is specified, the default is "veraison"), and will start services inside panes within that session. You can then attach to the session with
tmux attach -t $SESSION_NAME
Note that this requires that `tmux` is installed on the system.
Finally,
# option 3
veraison start-services
will install and start systemd/launchd services for the current user.
(Note: if you've deployed by running `make native-deploy` at top level, that will automatically try to start services via systemd/launchd on systems that use them.)
Testing the deployment
You can check that Veraison is running properly by running through the end-to-end flow.
Step-by-step deployment
There are multiple steps to creating a functioning Veraison deployment. `make quick-deploy`, described above, executes these steps in sequence with default options for minimal hassle.
This section describes how to create a deployment by executing each step individually using the `deployment.sh` script, and the options available for each step.
Deployment destination
The destination for the deployment is specified by the `VERAISON_ROOT` environment variable. If the variable does not exist, it defaults to `~/veraison-deployment` (as seen above).
export VERAISON_ROOT=~/alternate-deployment
If the specified location does not exist, it will be created by the script.
Step 1: build Veraison
Veraison services can be built with
./deployment.sh build
This is equivalent to running `make COMBINED_PLUGINS=1` at the top level. Note that `COMBINED_PLUGINS` must be set, as the deployment will not use split `-handler` plugins.
Step 2: create and populate deployment directory
./deployment.sh deploy
This creates the deployment directory structure under `VERAISON_ROOT` (including `VERAISON_ROOT` itself if it doesn't already exist) and copies the service executables, (combined) plugins, the CLI frontend, and configuration into the directory structure.
Alternatively, if you specify the `-s` option:
./deployment.sh -s deploy
then the service executables and plugins will be symbolically linked rather than copied. This is useful for development, so that you don't have to keep re-copying the executables after rebuilding them.
Step 3: set up TLS certificates
Veraison services communicate with clients and with each other over TLS. Each service will use its own certificate and associated key, which it will pick up from `${VERAISON_ROOT}/certs/` based on its name (e.g. the VTS service will use `${VERAISON_ROOT}/certs/vts.crt`).
These certificates can be generated for a deployment by providing a root certificate that will be used to sign them. The root certificate may be obtained from a known Certificate Authority (such as Let's Encrypt), or created locally (in which case, it will be self-signed).
Certificate generation relies on `openssl`, which must be installed on the system.
Create root certificate
A root certificate may be created by running
./deployment.sh create-root-cert "/C=US/O=Acme Inc."
This will create `${VERAISON_ROOT}/certs/rootCA.crt` (and the associated `${VERAISON_ROOT}/certs/rootCA.key`) with the specified subject. The subject may be omitted from the invocation, in which case `/O=Veraison` will be used.
Create service certificates
Once you have a root certificate (either generated in the sub-section above, or obtained via some other means), you can use it to generate certificates for the individual services.
In addition to the root certificate and key, you will also need to specify a comma-separated list of names that will be used to populate the Common Name field and Subject Alternative Name extension in the generated certificates.
./deployment.sh init-certificates $(cat /etc/hostname),localhost \
${VERAISON_ROOT}/certs/rootCA.crt ${VERAISON_ROOT}/certs/rootCA.key
If you only plan to use the services locally, it is sufficient to only provide "localhost" as the first argument.
The names in the list may contain the placeholder @@ that will be replaced with the name of the service.
For example, if you specify the first argument as
@@.my-domain.com,@@-service.alt-domain.com,localhost
then the certificate generated for the VTS service will contain the Subject /CN=vts.my-domain.com and the SAN extension with DNS:vts.my-domain.com, DNS:vts-service.alt-domain.com, DNS:localhost.
note
If you're using your own root certificate rather than generating one as in the sub-section above, you do not need to rename or copy it into ${VERAISON_ROOT}/certs/ -- this will be done as part of the init-certificates command (the root cert key will not be copied, as it won't be needed after service certificates are generated).
Using example certificates
It is also possible to just use the included example certificates rather than generating new ones for the deployment (these will, obviously, only work for local use, as they won't be configured with your host's name).
./deployment.sh -e init-certificates
When doing this, you do not need to create/provide a root certificate.
Step 4: set up signing key
The signing key should be provided as ${VERAISON_ROOT}/signing/skey.jwk. It will be used by the verification service to sign attestation results. The key must be in JWK format.
You can generate a new key for the deployment with
./deployment.sh init-signing-key
Signing key generation relies on step, which must be installed on the system (see the dependencies section above).
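For reference, an EC P-256 signing key in JWK format has roughly the following shape (this is an illustrative skeleton, not a usable key; the coordinate and private-key values are placeholders):

{
  "kty": "EC",
  "crv": "P-256",
  "x": "<base64url X coordinate>",
  "y": "<base64url Y coordinate>",
  "d": "<base64url private key>"
}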
Using example signing key
As with certificates, it is possible to use the example key instead of generating a new one:
./deployment.sh -e init-signing-key
Step 5: initialize stores
Veraison uses sqlite3 databases to store endorsements, trust anchors, and policies. When started, services will expect these databases to be initialized with the appropriate tables. This can be done by running
./deployment.sh init-stores
Step 6: set up clients
Veraison services can be interacted with via CLI applications: cocli for provisioning, evcli for verification, and pocli for policy management.
These clients can be installed, along with the appropriate configuration, with
./deployment.sh init-clients
Using existing clients with the deployment
If you already have cocli or other clients installed, you can use them rather than the ones bundled with the deployment. Use the --config option to point to the deployment's config file for the client, e.g.
cocli --config ${VERAISON_ROOT}/config/cocli/config.yaml OTHER_ARGS...
To avoid having to specify the config file each time, you can copy the client-specific sub-directory into your XDG_CONFIG_HOME (usually ~/.config/):
cp -r ${VERAISON_ROOT}/config/cocli ~/.config/
cocli OTHER_ARGS...
(The examples above use cocli, but the same goes for evcli and pocli as well.)
Deployment directory structure
Once the deployment has been created, ${VERAISON_ROOT} has the following structure:
.
├── bin
├── certs
├── config
├── env
├── logs
├── plugins
├── signing
├── stores
└── systemd (or launchd)
bin
This directory contains all executables associated with deployment, including the CLI frontend, services, and clients.
(Note: if the deployment was created with the -s option, the service executables will in fact be symlinks to their source locations.)
config
This directory has a number of sub-directories: one for the services, and one for each client. Each subdirectory contains a config.yaml with the associated configuration.
env
This directory contains env files that may be sourced by various shells in order to set up the shell for use with the deployment (e.g. allowing the frontend to be invoked without specifying its full path).
Currently, only bash and zsh are supported.
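For example, in a bash shell you might activate the deployment and then invoke the frontend directly (the exact file name under env/ is an assumption):

source ${VERAISON_ROOT}/env/env.bash
veraison start-services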
logs
This directory contains service logs.
plugins
This directory contains attestation scheme plugins.
(Note: if the deployment was created with the -s option, the plugins will in fact be symlinks to their source locations.)
signing
This directory contains skey.jwk, the key that will be used by the verification service to sign attestation results.
stores
This directory contains the sqlite3 databases for the endorsements, trust anchors, and policies stores.
systemd
(Linux only)
This directory contains systemd unit files for the Veraison services. It is split into two sub-directories: system and user. The latter contains units meant to be installed into the user-specific service manager (i.e. using systemctl --user).
launchd
(MacOSX only)
This directory contains launchd user agent files for the Veraison services.
Setting up authentication with Keycloak
Authentication is disabled by default in the native deployment. To enable it, you first need to have the Keycloak authentication service running.
Installing and configuring Keycloak
note
Proper installation and setup of Keycloak is outside the scope of this README. Please refer to the Keycloak documentation for complete installation instructions. The instructions below show how to get something running with minimal effort for local development and testing only.
Keycloak requires OpenJDK 21. This is not part of Veraison dependencies and is not installed as part of bootstrap. If you don't already have it on your system, you will need to install it.
# On Arch
sudo pacman -S jdk21-openjdk
# On Ubuntu
sudo apt install openjdk-21-jdk
You will then need to make sure that JAVA_HOME points to it:
export JAVA_HOME=/usr/lib/jvm/java-21-openjdk
Pick an install destination (must exist), and download and extract Keycloak into it:
export INSTALL_DEST=${HOME}
wget -O- https://github.com/keycloak/keycloak/releases/download/25.0.2/keycloak-25.0.2.tar.gz | \
tar xzf - -C ${INSTALL_DEST}
export KEYCLOAK_ROOT=${INSTALL_DEST}/keycloak-25.0.2
Setup the new install to work with the deployment:
./deployment.sh setup-keycloak ${KEYCLOAK_ROOT} localhost \
${VERAISON_ROOT}/certs/rootCA.{crt,key} 11111
The first argument to the setup-keycloak command is the root directory of the Keycloak installation. The second argument is a comma-separated list of names that will be used in the certificate (in the listing above it's just "localhost"). The third and fourth arguments are the root certificate and its key that will be used to sign Keycloak's cert. This must be the same root cert that was used for the Veraison deployment (the listing above assumes that you've generated the root with the create-root-cert command as described above; if not, adjust the paths as appropriate). The final argument is the port Keycloak will be listening on.
The setup command will also generate a new key for the server, and copy the example Veraison realm into its data path.
Finally, re-build and start the server:
${KEYCLOAK_ROOT}/bin/kc.sh build
${KEYCLOAK_ROOT}/bin/kc.sh start --import-realm
The --import-realm option will cause the server to import the example veraison realm.
Create Veraison realm
If you imported the example realm when running the server, you do not need to do this step.
- Veraison will expect the realm containing its configuration to be called "veraison".
- This realm must define "manager" and "provisioner" roles.
- There must be at least one user associated with each role.
- There must be at least one client set up with the Authentication Flow capabilities "Standard Flow" and "Direct Access Grants" enabled.
Please refer to the Keycloak docs for how to go about setting up the above.
Enable authentication in the deployment
Finally, enable authentication in the config files for services, cocli, and pocli inside the deployment. This just means uncommenting the relevant config. If you have set up a new veraison realm instead of using the example one, the config values will also need to be updated.
Using an alternative DBMS
By default, the deployment will use sqlite3 databases located under ${VERAISON_ROOT}/stores/ for key-value stores. It is possible to use either MySQL/MariaDB or Postgres instead.
- Ensure the relevant DBMS is installed and running. Please refer to the DBMS' and/or your OS' documentation for instructions on how to set that up.
- Create a user and a database for the deployment, and initialize k-v store tables for endorsements, trust anchors, and policies. Subdirectory kvstore-backends contains scripts to automate this step.
- Edit the *-store entries inside ${VERAISON_ROOT}/config/services/config.yaml to comment out the sqlite3 config, and uncomment the config for the relevant driver (pgx for Postgres, and mysql for MySQL/MariaDB), as sketched below. You may also need to edit the uncommented settings to match your setup from the previous steps.
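As an illustration, after switching the endorsements store to Postgres, the relevant entry might look like the following (the connection details, and the commented-out sqlite3 lines including the file name, are placeholders; compare the complete example at the end of the Configuration chapter):

en-store:
  backend: sql
  sql:
    # driver: sqlite3
    # datasource: ${VERAISON_ROOT}/stores/en-store.sql
    driver: pgx
    datasource: postgresql://veraison:p4ssw0rd@localhost:5432/veraison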
RPM Deployment
This deployment creates a .rpm package that can be installed on the Red Hat family of distributions (e.g. Fedora, RHEL, Oracle Linux). The package installs Veraison into /opt/veraison and sets up systemd units for the services.
Dependencies
In addition to the dependencies for the native deployment, rpm-build must be installed. To install all dependencies needed to build an rpm, run
git clone https://github.com/veraison/services.git
cd services/deployments/rpm
make bootstrap
Building the package
The location where the package will be built is specified with the PACKAGE_DEST environment variable. It defaults to /tmp if not set. To build the package, simply do
make rpm
This will create the following RPM package
${PACKAGE_DEST}/veraison_VERSION_ARCH/rpmbuild/RPMS/ARCH/veraison-VERSION.FLA.ARCH.rpm
where VERSION is the Veraison version as reported by the get-veraison-version script, ARCH is the architecture of your system as reported by arch, and FLA is the distro flavor, such as el8 or el9.
Configuration
Configuration for Veraison services is specified in YAML format. By default, each executable will try to read configuration from a file called config.yaml in that executable's current working directory (i.e. the directory it was launched from -- not the directory the executable itself is located in). An alternative configuration file may be specified using the --config option when launching the executable.
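For example (the executable name here is purely illustrative):

./vts-service --config /opt/veraison/config/services/config.yaml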
Deployment configuration
Services configured to run as part of a deployment (e.g. via systemd units) will typically look for configuration in config/services/config.yaml under the deployment's location. For example, services installed via deb or rpm packages will have their configuration inside /opt/veraison/config/services/config.yaml.
For docker and aws deployments, you normally shouldn't be modifying the configuration inside the containers/EC2 instances directly. Please refer to the deployments' documentation for information on how they may be configured.
Top-level entries
The following top-level entries will be read from the config file:
- auth: authentication configuration (used by management and provisioning services)
- ear-signer: EAR signer configuration (used by vts service)
- en-store: endorsements key-value store configuration (used by vts service)
- logging: logging configuration (used by all services)
- management: management REST API service configuration (used by management service)
- plugin: plugin loader configuration (used by vts and management services)
- provisioning: provisioning REST API service configuration (used by provisioning service)
- po-store: policy key-value store configuration (used by vts and management services)
- sessionmanager: verification service's session manager configuration (used by verification service)
- ta-store: trust anchors key-value store configuration (used by vts service)
- verification: verification REST API service configuration (used by verification service)
- vts: Veraison Trusted Services configuration (used by vts service)
Each service executable will only look for the top-level entries it expects and will ignore the rest.
auth configuration
See Authentication configuration below.
ear-signer configuration
- alg: the JWS algorithm used for signing, e.g. ES256, RS512.
- key: URL with the location of the private key to be used with alg. The following URL schemes are supported:
  - file: URL path is the path to a local file
  - aws: URL path is in the form <region>:<secret-name>, where <region> is an AWS region, and <secret-name> is the name under which the key is stored in AWS Secrets Manager.
  If a scheme is not specified, file is assumed. The key is in JWK format.
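For example, the two supported key locations might be configured as follows (the AWS region and secret name are illustrative):

ear-signer:
  alg: ES256
  # a local file (the "file" scheme is assumed when none is given):
  key: /opt/veraison/signing/skey.jwk
  # ...or, alternatively, a key held in AWS Secrets Manager:
  # key: aws:eu-west-1:veraison-signing-key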
en-store configuration
See KV Store configuration below.
logging configuration
- level (optional): specify the minimum enabled logging level. Supported values (from lowest to highest) are: debug, info, warn, error. Defaults to info.
- format (optional): specify the format of the log records (e.g. which entries appear in it). Currently supported formats:
  - production: default zap production config.
  - development: default zap development config.
  - bare: a relatively bare format, featuring only log level (colored), logger name, and its message, suffixed with any fields.
- development (optional): set to true to put the logger into development mode. This changes the behavior of DPanic and takes stacktraces more liberally.
- disable-stacktrace (optional): set to true to completely disable automatic stacktrace capturing.
- sampling (optional): set the sampling policy. There are two possible sub-settings, initial and thereafter, each an integer value. initial specifies how many messages, of a given level, will be logged as-is every second. After that, only every thereafter'th message will be logged within that second. E.g.
  sampling:
    initial: 10
    thereafter: 5
  means, for every level (debug, info, etc.), log the first 10 messages that occur in 1 second. If more messages occur within the second, log every 5th message after the first 10.
- encoding (optional): specifies logger encoding. Supported values: console, json.
- output-paths (optional): a list of URLs or file paths to write logging output to. By default, output is written to stdout (which may also be specified as a "path" alongside other locations). In case the same configuration is used by multiple services, you can insert "{{ .service }}" somewhere in the paths (apart from stdout/stderr) to have different services log into different files.
- err-output-paths (optional): a list of URLs or file paths to write internal logger errors to (note: not the error-level logging output). Defaults to stderr. In case the same configuration is used by multiple services, you can insert "{{ .service }}" somewhere in the paths (apart from stdout/stderr) to have different services log into different files.
- initial-fields (optional): a map of key-value fields to add to log records (in addition to those added by specific logging sites).
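Pulling a few of these settings together, a logging entry for a development set-up might look like this (the values are illustrative):

logging:
  level: debug
  format: bare
  output-paths:
    - stdout
    - /opt/veraison/logs/{{ .service }}-stdout.log
  initial-fields:
    deployment: dev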
management configuration
- listen-addr (optional): the address, in the form <host>:<port>, the management server will be listening on. If not specified, this defaults to localhost:8088.
- protocol (optional): the protocol that will be used. Must be either "http" or "https". Defaults to "https" if not specified.
- cert: path to the x509 certificate to be used. Must be specified if protocol is "https".
- cert-key: path to the key associated with the certificate specified in cert. Must be specified if protocol is "https".
plugin configuration
The following directives are currently supported:
- backend: specifies which plugin mechanism will be used. Currently, the only supported plugin backend is go-plugin.
- <backend name> (in practice, just go-plugin): configuration specific to the backend.
- builtin: configuration used when plugins are disabled and scheme-specific functionality is compiled into the VTS service executable. (Currently just a place-holder, as there is no configuration for the built-in loading mechanism.) Note: enabling or disabling of plugins is a build-time option. It is not possible to do so via configuration.
go-plugin backend configuration
- dir: path to the directory that will be scanned for plugin executables.
po-store configuration
See KV Store configuration below.
po-agent configuration
The following policy agent configuration directives are currently supported:
- backend: specifies which policy backend will be used. Currently supported backends: opa.
- <backend name>: an entry with the name of a backend is used to specify configuration for that backend. Multiple such entries may exist in a single config, but only the one for the backend specified by the backend directive will be used.
opa backend configuration
Currently, the opa backend does not support any configuration.
provisioning configuration
- listen-addr (optional): the address, in the form <host>:<port>, the provisioning server will be listening on. If not specified, this defaults to localhost:8443.
- protocol (optional): the protocol that will be used. Must be either "http" or "https". Defaults to "https" if not specified.
- cert: path to the x509 certificate to be used. Must be specified if protocol is "https".
- cert-key: path to the key associated with the certificate specified in cert. Must be specified if protocol is "https".
sessionmanager configuration
The session manager has a single configuration point: backend. This specifies which ISessionManager implementation will be used. The following backends are supported:
- ttlcache: the default; this creates the session cache in memory of the verification-service process.
- memcached: uses an external memcached server.
All other entries under sessionmanager must be backend names (i.e. ttlcache or memcached), providing backend-specific configuration. Only configuration for the backend selected by the backend entry will actually be used.
ttlcache backend
The ttlcache backend does not have any configuration points.
memcached backend
The memcached backend has the following configuration points:
- servers (optional): a list of servers, each in "<host>:<port>" format, that will be used by the backend. The servers will be used with equal weight. Adding the same entry multiple times increases its weight. If this is not specified, it will default to ["localhost:11211"].
ta-store configuration
See KV Store configuration below.
verification configuration
- listen-addr (optional): the address, in the form <host>:<port>, the verification server will be listening on. If not specified, this defaults to localhost:8080.
- protocol (optional): the protocol that will be used. Must be either "http" or "https". Defaults to "https" if not specified.
- cert: path to the x509 certificate to be used. Must be specified if protocol is "https".
- cert-key: path to the key associated with the certificate specified in cert. Must be specified if protocol is "https".
vts configuration
- server-addr (optional): address of the VTS server in the form <host>:<port>. If not specified, this defaults to 127.0.0.1:50051. Unless listen-addr is specified (see below), the VTS server will extract the port to listen on from this setting (but will listen on all local interfaces).
- listen-addr (optional): the address the VTS server will listen on, in the form <host>:<port>. Only specify this if you want to restrict the server to listen on a particular interface; otherwise, the server will listen on all interfaces on the port specified in server-addr.
- tls (optional): specifies whether TLS should be used for client connections. Defaults to true.
- cert: path to the file containing the certificate that should be used by the server if tls (see above) is true.
- cert-key: path to the file containing the key associated with the certificate specified by cert (see above).
- ca-certs (optional): a list of paths to certificates that will be used in addition to system certs during mutual validation with the client when tls (see above) is true.
Authentication configuration
Authentication configuration is specified under the auth top-level entry.
- backend: specifies which auth backend will be used by the service. The valid options are:
  - passthrough: a backend that does not perform any authentication, allowing all requests.
  - none: alias for passthrough.
  - basic: uses the Basic HTTP authentication scheme. See RFC 7617 for details. This is not intended for production.
  - keycloak: uses the OpenID Connect protocol as implemented by the Keycloak authentication server.
The rest of the expected entries are defined by the value of backend. See below for details of how to configure individual backends.
Passthrough
No additional configuration is required. passthrough will allow all requests. This is the default if auth configuration is not provided.
Basic
- users: this is a mapping of user names onto their bcrypt password hashes and roles. The key of the mapping is the user name; the value is a further mapping for the details, with the following fields:
  - password: the bcrypt hash of the user's password.
  - roles: either a single role or a list of roles associated with the user. API authorization will be performed based on the user's roles.
On Linux, bcrypt hashes can be generated on the command line using the mkpasswd utility, e.g.:
mkpasswd -m bcrypt --stdin <<< Passw0rd!
For example:
auth:
backend: basic
users:
user1:
password: "$2b$05$XgVBveh6QPrRHXI.8S/J9uobBR7Wv9z4CL8yACHEmKIQmYSSyKAqC" # Passw0rd!
roles: provisioner
user2:
password: "$2b$05$x5fvAV5WPkX0KXzqf5FMKODz0uyi2ioew1lOrF2Czp2aNH1LQmhki" # @s3cr3t
roles: [manager, provisioner]
Keycloak
- host (optional): host name of the Keycloak service. Defaults to localhost.
- port (optional): the port on which the Keycloak service is listening. Defaults to 11111.
- realm (optional): the Keycloak realm used by Veraison. A realm contains the configuration for clients, users, roles, etc. It is roughly analogous to a "tenant id". Defaults to veraison.
- ca-cert: the path to a PEM-encoded x509 cert that will be added to CA certs when establishing a connection to the Keycloak server. This should be specified if the server has HTTPS enabled and the root CA for its cert is not installed in the system.
For example:
auth:
backend: keycloak
host: keycloak.example.com
port: 11111
realm: veraison
KV Store configuration
kvstore expects the following entries in configuration:
- backend: the name of the backend to use for the store. Currently supported backends: memory, sql.
- <backend name>: an entry with the name of a backend is used to specify the configuration for that backend. There may be multiple such entries for different backends. Only the entry matching the active backend specified by the backend directive will actually be used. The contents of each entry are specific to the backend.
Note: in a config file, kvstore configuration will typically be namespaced under the name of a particular store instance, e.g.
ta-store:
backend: sql
sql:
driver: sqlite3
memory backend configuration
Currently, the memory backend does not support any configuration.
sql backend configuration
note
sqlite3, the default driver for the backend, is unsuitable for production or performance testing.
- driver: the name of the golang SQL driver. Veraison currently includes the following drivers: sqlite3, mysql (MySQL and MariaDB), and pgx (Postgres).
- datasource: this points to the database the driver will access. The format is driver-dependent:
  - sqlite3: the path to the sqlite3 database file
  - pgx: a URL in the form postgresql://<user>:<passwd>@<host>:<port>/<database>
  - mysql: a string in the form <user>:<password>@<protocol>(<address>)/<database>
  (Please see the drivers' documentation for more details.)
- tablename (optional): the name of the table within the SQL database that will be used by the store. If this is not specified, it will default to "kvstore".
- max_connections (optional): the maximum number of parallel connections to the DBMS that will be opened by the SQL driver. Once this number is reached, further transactions will block until a connection becomes available. Defaults to 10.
note
max_connections, like all settings above, is specific to the store for which it is being configured. This means that the total maximum number of connections Veraison will try to open is the sum of the max_connections values for all three stores (30 by default).
Example
This is an example of a complete configuration for all Veraison services.
auth:
backend: keycloak
host: keycloak.example.com
port: 11111
ear-signer:
alg: ES256
key: /opt/veraison/signing/skey.jwk
en-store:
backend: sql
sql:
max_connections: 8
driver: pgx
datasource: postgres://veraison:p4ssw0rd@postgres.example.com:5432/veraison
tablename: endorsements
logging:
level: info
output-paths:
- stdout
- /opt/veraison/logs/{{ .service }}-stdout.log
management:
listen-addr: 0.0.0.0:8088
protocol: https
cert: /opt/veraison/certs/management.crt
cert-key: /opt/veraison/certs/management.key
plugin:
backend: go-plugin
go-plugin:
dir: /opt/veraison/plugins/
po-agent:
backend: opa
po-store:
backend: sql
sql:
max_connections: 8
driver: pgx
datasource: postgres://veraison:p4ssw0rd@postgres.example.com:5432/veraison
tablename: policies
provisioning:
listen-addr: 0.0.0.0:8888
protocol: https
cert: /opt/veraison/certs/provisioning.crt
cert-key: /opt/veraison/certs/provisioning.key
sessionmanager:
backend: memcached
memcached:
servers:
- memcached1.example.com:11211
- memcached2.example.com:11211
ta-store:
backend: sql
sql:
max_connections: 8
driver: pgx
datasource: postgres://veraison:p4ssw0rd@postgres.example.com:5432/veraison
tablename: trust_anchors
verification:
listen-addr: 0.0.0.0:8080
protocol: https
cert: /opt/veraison/certs/verification.crt
cert-key: /opt/veraison/certs/verification.key
vts:
server-addr: localhost:50051
tls: true
cert: /opt/veraison/certs/vts.crt
cert-key: /opt/veraison/certs/vts.key
ca-certs: /opt/veraison/certs/rootCA.crt
Attestation Schemes
An attestation scheme describes how an attestation should be verified. Specifically, it defines:
- The structure of attestation evidence submitted for verification
- The endorsements and trust anchors needed
- How all of the above is evaluated to produce an attestation result
Veraison supports different attestation schemes via plugins.
Current Schemes
Currently, the following schemes are implemented:
- arm-cca: Arm Confidential Compute Architecture attestation.
- psa-iot: Arm Platform Security Architecture attestation.
- riot: RIoT based DICE-compatible attestation (note: this does not implement any specific DICE architecture).
- tpm-enacttrust: TPM-based attestation for EnactTrust security cloud.
- parsec-tpm: Parsec TPM based hardware-backed attestation, details here.
- parsec-cca: Parsec CCA based hardware-backed attestation, details here.
Implementing Attestation Scheme Support
note
If you already have attestation scheme plugins implemented for an earlier version of Veraison, please see the migration guide for how to convert them to the new framework.
Supporting a new attestation scheme requires defining how to provision endorsements (if any) by implementing IEndorsementHandler, how to process evidence tokens by implementing IEvidenceHandler, and how to create and obtain scheme-specific keys used to store and retrieve endorsements and trust anchors by implementing IStoreHandler.
Finally, an executable should be created that registers and serves them.
package main
import (
	"github.com/veraison/services/handler"
	"github.com/veraison/services/plugin"
)
type MyEvidenceHandler struct {}
// ...
// Implementation of IEvidenceHandler for MyEvidenceHandler
// ...
type MyEndorsementHandler struct {}
// ...
// Implementation of IEndorsementHandler for MyEndorsementHandler
// ...
type MyStoreHandler struct {}
// ...
// Implementation of IStoreHandler for MyStoreHandler
// ...
func main() {
handler.RegisterEndorsementHandler(&MyEndorsementHandler{})
handler.RegisterEvidenceHandler(&MyEvidenceHandler{})
handler.RegisterStoreHandler(&MyStoreHandler{})
plugin.Serve()
}
Arm PSA and CCA Attestation
Evaluation of Arm Platform Security Architecture and Confidential Compute Architecture attestation evidence is implemented via the PSA_IOT and ARM_CCA schemes, respectively.
Endorsements and Trust Anchors
Endorsements and trust anchors for both schemes can be created and provisioned using the cocli command line tool. The tool has sample templates for describing endorsements and trust anchors for both attestation schemes.
Endorsements can also be generated from evidence using the gen-corim command line tool.
Attestation Evidence
Attestation evidence can be submitted for verification using the evcli command line tool.
Evidence formats for both schemes are defined as Entity Attestation Token (EAT) profiles. See draft-tschofenig-rats-psa-token and draft-ffm-rats-cca-token.
Library support exists for PSA tokens in Go.
Library support exists for CCA tokens in Go and Rust.
DICE
Standard Documents
Device Identifier Composition Engine (DICE) is an attestation standard developed by the Trusted Computing Group (TCG). DICE builds on the device identity and device attestation concepts introduced in Robust Internet-of-Things (RIoT) from Microsoft Research.
The DICE specification is split across the following core documents:
- Hardware Requirements, describing the DICE Root of Trust (RoT) and the construction of the (unique) Compound Device Identifier (CDI).
- Layering Architecture, describing the construction of a device's layered identity rooted in the CDI and obtained via the recursive invocation of a simple primitive based on measurements and per-device secret seeding.
- Attestation Architecture, describing how DICE-based attestation fits in the wider RATS architecture.
Other relevant documents include:
- Certificate Profiles, describing how X.509 certificates that carry public keys derived from DICE identities look like.
- Symmetric Identity Based Device Attestation, describing how to derive symmetric key material from DICE identities and how that could be used in authentication and attestation protocols.
A further "DICE Endorsements Architecture" document, describing the information model (IM) for endorsed and reference values, is currently under revision in the TCG. The official document is not yet published, but a data model based on the IM is available in the CoRIM repository of which Veraison provides a Golang implementation.
Google's Open Profile for DICE
Google has released the specification of an Open Profile for DICE based on TCG's "Hardware Requirements" and "Layering Architecture" documents, along with a C99 implementation as part of the Pigweed project.
While the core concepts are unchanged, there are a few notable differences from TCG DICE, mainly around the encoding of payloads. See the Open DICE Formats section for the details.
DICE Concepts
Glossary
- DICE Hardware RoT - The HW implementation of the DICE processing engine, acting as the terminal trust anchor in a DICE device.
- Unique Device Secret (UDS) - A permanent, (statistically) unique, secret value. Every instance of a DICE device has its own UDS stashed away in a location only accessible to the DICE engine. The UDS can be injected at manufacturing time, or generated by the device itself using a suitable PRNG.
- Trusted Computing Base (TCB) - All the security relevant HW and SW in a DICE device. DICE organizes the TCB in multiple "components", which are grouped in "layers" stacked one on top of the other, starting with the DICE engine. A layer boundary represents an explicit handover point from one component to one or more components in the "next" stage of the boot chain.
- TCB Component Identifier (TCI) - A (statistically) unique identifier for a TCB component. This value is computed by digesting the component's measurement along with other component-specific attributes, e.g., vendor name, version, etc.
- TCB Info - The information model of a TCB component. It gathers a standard set of attributes (and associated types) that describe a TCB, for example: its model, vendor, firmware digests, version, etc.
- Compound Device Identifier (CDI) - This value represents the (statistically) unique identity of each layer in the bootstrap sequence. The CDI for a given layer is obtained by hashing together the predecessor CDI and the TCI for the current layer. This recursive construction is such that the resulting CDIs are inextricably chained: when one CDI changes (e.g., because the measured software component is updated, and the TCI mutates accordingly), all its successor CDIs change as well. Crucially, the CDI is directly used as a seed for deriving cryptographic credentials (e.g., an ECDSA key pair) for the corresponding TCB component. (See the sketch after this list.)
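The following is a minimal sketch of that recursive construction, assuming a plain SHA-256 hash as the combining function (real implementations use a profile-defined one-way derivation, e.g. HKDF in Open DICE; the UDS and TCI values here are placeholders):

package main

import (
	"crypto/sha256"
	"fmt"
)

// nextCDI derives the CDI for the next layer by hashing together the
// current layer's CDI and the next layer's TCB Component Identifier (TCI).
func nextCDI(cdi, tci []byte) []byte {
	h := sha256.New()
	h.Write(cdi)
	h.Write(tci)
	return h.Sum(nil)
}

func main() {
	// The layer 0 CDI is seeded from the Unique Device Secret (UDS) and the
	// measurement of the First Mutable Code.
	uds := []byte("unique-device-secret")
	fmcTCI := []byte("tci-of-first-mutable-code")
	cdi := nextCDI(uds, fmcTCI)

	// Each subsequent layer chains in its own TCI; changing any TCI
	// changes every successor CDI.
	for _, tci := range [][]byte{[]byte("tci-layer-1"), []byte("tci-layer-2")} {
		cdi = nextCDI(cdi, tci)
	}
	fmt.Printf("final CDI: %x\n", cdi)
}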
NOTE: At first it might seem like the scalability properties of DICE verification vis-a-vis software & configuration updates of TCB components are terrible. The verifier would have to be re-provisioned with all the updated identities of all the devices of the same "class" if a bit was flipped in their TCB. Luckily, the situation is not so gloomy: as long as the lowest layer in the DICE chain is stable (as in "never updated") and the DICE compound identity is provided as a certification chain rooted either in the lowest layer or in an external CA that issues the cert for the lowest layer, an identity update of the i-th layer can be re-computed dynamically by a verifier without need to further interact with the supply chain -- except for re-provisioning the verifier with the updated measurement. The key requirement is the stability of the so-called "Layer 0" / "First Mutable Code" (FMC), and therefore of its cryptographic identity. The latter is a pretty important bit in DICE and it's given the proper name of "Device Id". The Device Id key pair is associated with an IEEE 802.1AR IDevID cert.
Evidence
DICE models attestation evidence as a layered & chained set of claims. The claims in each layer are of two kinds:
- The cryptographic identity associated with the layer (typically, a public key),
- The layer's measurements used as input in the identity derivation function.
The recommended way to encode evidence is using standard X.509 scaffolding:
- A certificate for the layer's public key signed by the previous layer's private key,
- A set of custom certificate extensions to encode the TCB measurements.
Theoretically, other formats are acceptable; for example, Open-DICE envisages the use of chained CWTs.
TCG DICE Formats
tcg OBJECT IDENTIFIER ::= { 2 23 133 }
tcg-dice OBJECT IDENTIFIER ::= { tcg platformClass(5) 4 }
TCB Info
TCB Info represents the data and metadata associated with the component identity. It can be roughly split into two logically separate parts: naming / identification attributes on one side (i.e., vendor, vendorInfo, model, version, svn, layer and index), measurement values on the other (i.e., fwids, flags and type).
See Section 6.1.1 of the Attestation Architecture document for a detailed description of the semantics associated with individual fields.
tcg-dice-TcbInfo OBJECT IDENTIFIER ::= {tcg-dice 1}
DiceTcbInfo ::= SEQUENCE {
vendor [0] IMPLICIT UTF8String OPTIONAL,
model [1] IMPLICIT UTF8String OPTIONAL,
version [2] IMPLICIT UTF8String OPTIONAL,
svn [3] IMPLICIT INTEGER OPTIONAL,
layer [4] IMPLICIT INTEGER OPTIONAL,
index [5] IMPLICIT INTEGER OPTIONAL,
fwids [6] IMPLICIT FWIDLIST OPTIONAL,
flags [7] IMPLICIT OperationalFlags OPTIONAL,
vendorInfo [8] IMPLICIT OCTET STRING OPTIONAL,
type [9] IMPLICIT OCTET STRING OPTIONAL
}
FWIDLIST ::= SEQUENCE SIZE (1..MAX) OF FWID
FWID ::= SEQUENCE {
hashAlg OBJECT IDENTIFIER,
digest OCTET STRING
}
OperationalFlags ::= BIT STRING {
notConfigured (0),
notSecure (1),
recovery (2),
debug (3)
}
Multiple TCB Info payloads, each associated with a different component in the layer, can be put together into the same certificate using a DiceTcbInfoSeq:
tcg-dice-MultiTcbInfo OBJECT IDENTIFIER ::= { tcg-dice 5 }
DiceTcbInfoSeq ::= SEQUENCE SIZE (1..MAX) OF DiceTcbInfo
Even though further extensions have been defined (for the full list see Section 6.1 of "Attestation Architecture") -- for example, one that allows unsigned EAT claims to be embedded in an X.509 extension -- DiceTcbInfo is the (currently) preferred way.
Open DICE Formats
In Open DICE, the equivalent of the TCB Info is called OpenDiceInput and has a very regular layout, with data and metadata for code, configuration and authority respectively:
Mode ::= INTEGER (0..3)
OpenDiceInput ::= SEQUENCE {
codeHash [0] EXPLICIT OCTET STRING OPTIONAL,
codeDescriptor [1] EXPLICIT OCTET STRING OPTIONAL,
configurationHash [2] EXPLICIT OCTET STRING OPTIONAL,
configurationDescriptor [3] EXPLICIT OCTET STRING OPTIONAL,
authorityHash [4] EXPLICIT OCTET STRING OPTIONAL,
authorityDescriptor [5] EXPLICIT OCTET STRING OPTIONAL,
mode [6] EXPLICIT Mode OPTIONAL,
}
For further details regarding the Open DICE X.509 profile, see the Certificate Details section of the spec.
Open DICE also allows encoding UDS and CDI certificates in CBOR Web Tokens (CWT) similar to EAT. It also allows for hybrid cert chains, with co-existing X.509 and CWT certs in the same logical chain, to support legacy component integrations.
DICE from Veraison's Perspective
Miscellaneous implementation & deployment considerations.
Asymmetric Crypto Identities
Typically, DICE identities are associated with asymmetric key pairs.
NOTE: This mode is, from a security perspective, the most desirable. In fact, when using public keys for verification, the verifier only needs to take care of their availability and integrity.
Effectively, a verifier only stores the device trust anchor, which could be an external root CA -- for example the ODM's -- or the self-signed certificate of the device in case there is no external CA to anchor the cert chain. The former provides a much more scalable solution, with individual device revocation handled via standard CRLs.
Symmetric Crypto Identities
"Symmetric Identity Based Device Attestation" describes how to associate DICE identities with symmetric crypto keys.
In this case, the verifier must have a-priori knowledge of the device secret as well as a detailed understanding of the identity derivation algorithm in order to verify the device's authenticated assertions. This assumes a level of coupling between attesters and verifiers that, I reckon, is not achievable using only the available TCG standards, which tend to be quite liberal around the implementation details. The Open-DICE specification only allows identities based on asymmetric keys.
NOTE: This mode shifts all the costs on the verification infra. It is an operational nightmare with a vast attack surface and very limited scalability. It should be deemed acceptable only when the cost of a key compromise / disclosure is (very) low, and the device fleet is roughly static with very infrequent software updates.
Verification
We expect that, typically, DICE evidence submitted for verification is not bound to an explicit challenge-response session. This is likely to require defining a new Veraison API alongside the existing challenge-response endpoint.
Given the fact that TCG DICE has a few loose ends, mainly in terms of evidence encoding formats and the nature of cryptographic identities, Veraison will have to be as flexible as possible while clearly documenting the supported feature set.
Open-DICE defines a stricter (and smaller) set of requirements and would therefore probably be easier to implement fully.
The Veraison DICE implementation will need to be extended with APIs for claims extraction. The existing certificate chain verification API should be OK as-is.
Endorsements Provisioning
By definition, native CoRIM fully supports TCG DICE, so there should be no extra work needed to profile the corim API, which already implements the entirety of CoRIM.
Open-DICE has a slightly different description of the TCB which will need to be reflected in a CoRIM profile. It should be a pretty straightforward operation.
Policies
Policies allow tenants to perform additional evaluation of attestation evidence that is not covered by a particular attestation scheme. The policy can be used to override the overall attestation status and/or the trust vector values in the result (e.g. rejecting a token considered valid by the scheme if the more stringent constraints described in the policy are not met).
note
The policy administration framework is to be determined in the future. The short-term plan is to make this a part of the deployment flow, but a more complete policy admin flow may follow.
The syntax of the policy depends on the agent used to evaluate it. At the moment, the following policy agents are supported:
"opa" -- Open Policy Agent is a flexible, generic Open Source policy agent that utilizes its own policy language called Rego. See README.opa.md.
Configuration
Configuration for the policy agent is specified under the top-level entry po-agent.
po-agent configuration
The following policy agent configuration directives are currently supported:
- backend: specifies which policy backend will be used. Currently supported backends: opa.
- <backend name>: an entry with the name of a backend is used to specify configuration for that backend. Multiple such entries may exist in a single config, but only the one for the backend specified by the backend directive will be used.
opa backend configuration
Currently, the opa backend does not support any configuration.
Policy Identification
There are three different ways of identifying a policy:
appraisal policy ID
This is an identifier used in the attestation result as defined by the EAR Internet draft. Note that in this case, "policy" is used in the sense of "Appraisal Policy for Evidence" as per RATS architecture. In Veraison, this encompasses both the scheme and the policy applied from the policy store by a policy engine.
An appraisal policy ID is a URI with the scheme policy followed by a rootless path indicating the (RATS) policy using which the appraisal has been generated. The first segment of the path is the name of the scheme used to create the appraisal. The second segment, if present, is the individual policy ID (see below) of the policy that has been applied to the appraisal created by the scheme.
For example:
- policy:TPM_ENACTTRUST: the appraisal has been created using the "TPM_ENACTTRUST" scheme, with no additional policy applied.
- policy:PSA_IOT/340d22f7-9eda-499f-9aa2-5af295d6d812: the appraisal has been created using the "PSA_IOT" scheme and has subsequently been updated by the policy with unique policy ID "340d22f7-9eda-499f-9aa2-5af295d6d812".
Potential future extensions
These indicate potential future enhancements, and are not supported by the current implementation.
Cascading policies
In the future we may support applying multiple individual policies to a single appraisal. In that case, each path segment after the first (the scheme) is the individual policy ID of a policy that has been applied. The ordering of the segments matches the order in which the policies were applied.
For example:
- policy:PSA_IOT/340d22f7-9eda-499f-9aa2-5af295d6d812/ae19cc27-a449-1fb8-6c10-00f47ad1c55c: the appraisal has been created using the "PSA_IOT" scheme; it was then updated by the policy with individual policy ID 340d22f7-9eda-499f-9aa2-5af295d6d812, followed by the policy with individual policy ID ae19cc27-a449-1fb8-6c10-00f47ad1c55c.
policy store key
Policies are stored, retrieved from, and updated in the policy store using a key. The key is a string consisting of the tenant id, scheme, and policy name delimited by colons.
For example:
- 0:PSA_IOT:opa: the key for tenant "0"'s policy for scheme "PSA_IOT" with name "opa".
policy name
The name exists to support cascading policies in the future (see above). At the moment, as only one active policy is allowed per appraisal, the name is not necessary and is always set to the name of the policy engine ("opa"). While this unnecessarily increases the key size, the number of policies in a typical deployment is expected to be at most in the hundreds, and the overhead is negligible compared to the size of the policies themselves, so this is not deemed a major concern.
individual policy ID
The individual policy ID identifies the specific policy that was applied to an appraisal. It forms a component of the appraisal policy ID (which also includes the scheme, and possibly, in the future, individual IDs from multiple policies). It differs from the policy store key in that it also incorporates versioning information.
The individual policy id is the UUID of the specific policy instance.
For example: 340d22f7-9eda-499f-9aa2-5af295d6d812
Writing Policies
Please see the OPA official documentation for information on how to write Rego policies. This section describes what is necessary to implement a valid Veraison policy, and assumes a general familiarity with Rego.
Package declaration
Your policy should specify policy as the package:
package policy
Evaluation Data
Data to be evaluated is defined as the following variables in the policy package:
- evidence: contains the JSON representation of the proto.EvidenceContext.Evidence structure. These are the scheme-specific values extracted from the attestation token.
- endorsements: an array of endorsement JSON objects. Their structure is scheme-specific.
- result: a JSON object representing the proto.AttestationResult that was generated by the scheme.
- scheme: the name of the attestation scheme.
Rules
You can update the attestation result by defining one or more of the following rules:
- status
- hw_authenticity
- sw_integrity
- sw_up_to_dateness
- config_integrity
- runtime_integrity
- certification_status
The first overwrites the overall attestation result status. The subsequent rules overwrite the corresponding entries in the trust vector.
The rules must be defined such that they produce the string "SUCCESS" if the conditions have been met, and the string "FAILURE" otherwise.
As various versions are often represented using Semantic Versioning, the policy package provides a utility function, semver_cmp, to aid in their comparison (note: currently, pre-release suffixes are not supported).
For example:
sw_integrity = "SUCCESS" {
# note: This is an example. The evidence entries do not correspond to any
# actual scheme.
evidence["application-hash"] == "h0KPxSKAPTEGXnvOPPA/5HUJZjHl4Hu9eg/eYMTPJcc="
evidence["firmware-hash"] == "h0KPxSKAPTEGXnvOPPA/5HUJZjHl4Hu9eg/eYMTPJcc="
semver_cmp(evidence["firmware-version"], "1.2.0") >= 0
} else = "FAILURE"
Dealing With Multiple Attestation Schemes
If you expect your policy to be applied to inputs from multiple attestation schemes, you can differentiate between them using the format variable defined by the policy package. You can, for example, define rules for each scheme, and then alias the generic rules above to one of them based on the value of the format variable:
sw_up_to_dateness = psa_sw_up_to_dateness { format == "PSA_IOT" }
else = enacttrust_sw_up_to_dateness { format == "TPM_ENACTTRUST" }
psa_sw_up_to_dateness = "SUCCESS" {
# ...
} else = "FAILURE"
enacttrust_sw_up_to_dateness = "SUCCESS" {
# ...
} else = "FAILURE"
Example Policy
This is an example of a policy that sets the SoftwareUpToDateness value in the trust vector for the PSA_IOT and TPM_ENACTTRUST schemes (neither scheme has a built-in definition for software up-to-dateness, so that entry is undefined in the results generated by the schemes).
package policy
# Use the psa_sw_up_to_dateness rules iff the attestation format is PSA_IOT,
# and enacttrust_sw_up_to_dateness iff the format is TPM_ENACTTRUST; otherwise,
# sw_up_to_dateness will remain undefined.
sw_up_to_dateness = psa_sw_up_to_dateness { format == "PSA_IOT" }
else = enacttrust_sw_up_to_dateness { format == "TPM_ENACTTRUST" }
# This sets sw-up-to-dateness trust vector value to success iff BL version is
# 3.5 or greater, and to failure otherwise.
psa_sw_up_to_dateness = "SUCCESS" {
# there exists some i such that...
some i
# ...the i'th software component has type "BL", and...
evidence["psa-software-components"][i]["measurement-type"] == "BL"
# ...the version of this component is greater or equal to 3.5.
# (semver_cmp is defined by the policy package. It returns 1 if the first
# parameter is greater than the second, -1 if it is less than the second,
# and 0 if they are equal.)
semver_cmp(evidence["psa-software-components"][i].version, "3.5") >= 0
} else = "FAILURE" # unless the above condition is met, return "FAILURE"
# Unlike the PSA token, the EnactTrust token does not include information about
# multiple software components and instead has a single "firmware" entry.
enacttrust_sw_up_to_dateness = "SUCCESS" {
evidence["firmware"] >= 8
} else = "FAILURE"
Managing Policies
Policies can be added to, and later managed in, the service using the pocli command line tool.
API
This chapter documents the REST APIs exposed by Veraison services. These APIs are grouped into three categories:
- Provisioning of endorsements and trust anchors.
- Challenge/response for verifying attestation evidence.
- Management of verification policies.
Endorsement Provisioning Interface
The APIs described here can be used to provision Endorsements into a Verifier. They are agnostic with regard to the specific data model used to transport Endorsements. HTTP content negotiation is used to determine the precise message structure and format of the information exchanged between the clients and the server. One specific example of information exchange using a Concise Reference Integrity Manifest (CoRIM) is given below.
Provisioning API
The provisioning API allows authorized supply chain actors to communicate reference and endorsed values, verification and identity key material, as well as other kinds of endorsements to Veraison. The supported format is CoRIM.
- To initiate a provisioning session, a client POSTs the CoRIM containing the endorsements to be provisioned to the /submit URL.
- If the transaction completes synchronously, a 200 response is returned to the client to indicate the submission of the posted CoRIM has been fully processed. The response body contains a "session" resource whose status field encodes the outcome of the submission (see below).
- The provisioning processing may also happen asynchronously, for example when submitting a large CoRIM. In this case, the server returns a 201 response, with a Location header pointing to a session resource that the client can regularly poll to monitor any change in the status of its request.
- A session starts in the processing state and ends up in one of success or failed. When in the failed state, the failure-reason field of the session resource contains details about the error condition.
- The session resource has a defined time to live: upon its expiry, the resource is garbage collected. Alternatively, the client can dispose of the session resource by issuing a DELETE to the resource URI.
Synchronous submission
o (1) POST .-------------.
/|\ ------------------------>| /submit |
/ \ <------------------------| |
200 (OK) '-------------'
{ "status": "success|failed" }
- Client submits the endorsement provisioning request.
- Server responds with response code 200, indicating processing is complete. The response body contains a session resource with a status indicating the outcome of the submission operation.
Example of a successful submission
>> Request:
POST /endorsement-provisioning/v1/submit
Host: veraison.example
Content-Type: application/rim+cbor
...CoRIM as binary data...
<< Response:
HTTP/1.1 200 OK
Content-Type: application/vnd.veraison.provisioning-session+json
{
"status": "success",
"expiry": "1970-01-01T00:00:00Z"
}
Example of a failed submission
>> Request:
POST /endorsement-provisioning/v1/submit
Host: veraison.example
Content-Type: application/rim+cbor
...CoRIM as binary data...
<< Response:
HTTP/1.1 200 OK
Content-Type: application/vnd.veraison.provisioning-session+json
{
"status": "failed",
"failure-reason": "invalid signature",
"expiry": "1970-01-01T00:00:00Z"
}
Asynchronous submission
o (1) POST .-------------.
/|\ ------------------------>| /submit |
/ \ \ <----------------------| |
\ \ 201 Created '-------------'
\ \ Location: /session/01 |
\ \ V
\ \ (2) GET .-------------.
\ '------------------>| /session/01 |
`<-------------------| |
200 OK '-------------'
{ "status": "processing|success|failed" }
- Client submits the endorsement provisioning request.
- Server responds with response code 201, indicating that the request has been accepted and will be processed asynchronously.
- Server returns the URI of a time-bound session resource in the Location header. The resource can be polled at regular intervals to check the progress of the submission, until the processing is complete (either successfully or with a failure).
Example
>> Request:
POST /endorsement-provisioning/v1/submit
Host: veraison.example
Content-Type: application/rim+cbor
...CoRIM as binary data...
<< Response:
HTTP/1.1 201 Created
Content-Type: application/vnd.veraison.provisioning-session+json
Location: /endorsement-provisioning/v1/session/1234567890
{
"status": "processing",
"expiry": "2030-10-12T07:20:50.52Z"
}
>> Request:
GET /endorsement-provisioning/v1/session/1234567890
Host: veraison.example
Accept: application/vnd.veraison.provisioning-session+json
<< Response:
HTTP/1.1 200 OK
Content-Type: application/vnd.veraison.provisioning-session+json
{
"status": "success",
"expiry": "2030-10-12T07:20:50.52Z"
}
Challenge/Response
Interaction models
The APIs described here are meant to be instantiations of the abstract protocols described in Reference Interaction Models for Remote Attestation Procedures.
Session flow
Each Challenge-Response session is associated with its own resource, with the following attributes:
- The session nonce;
- An expiry date after which the session is garbage collected;
- The accepted MIME types for Evidence to submit;
- The session state (waiting -> processing -> complete | failed);
- The submitted Evidence;
- The produced Attestation Result.
The resource is created in response to a client POST (1). Subsequently, the client interacts with its session resource by POSTing Evidence to verify (2), and possibly polling it until the Attestation Result pops up (3). In (2), the server may decide to reply synchronously by including the Attestation Result directly in the response. In such a case, (3) is not necessary. The optional cleanup step in (4) allows a client to explicitly destroy the session resource. In any case, the resource is garbage collected at any point in time after the session expiry has elapsed.
o (1) POST .-------------.
/|\ ------------------------>| /newSession |
/ \ \ '------+------'
\ \ \ |
\ \ \ V
\ \ \ (2) POST Evidence .-------------.
\ \ '------------------->| /session/01 |
\ \ '-------------'
\ \ ^ ^
\ \ (3) GET | |
\ '----------------------' |
\ (4) DELETE |
'-------------------------'
Session setup
- To start a new session and obtain a time-bounded nonce value:
>> Request:
POST /challenge-response/v1/newSession?nonceSize=32
Host: veraison.example
Accept: application/vnd.veraison.challenge-response-session+json
<< Response:
HTTP/1.1 201 Created
Content-Type: application/vnd.veraison.challenge-response-session+json
Location: https://veraison.example/challenge-response/v1/session/1234567890
{
"nonce": "MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=",
"expiry": "2030-10-12T07:20:50.52Z",
"accept": [
"application/psa-attestation-token"
],
"state": "waiting"
}
nonceSize must be between 8 and 64. Analogously, nonce must be between 8 and 64 bytes when decoded (which means the specified base64 string must be no longer than 88 bytes).
Asynchronous verification
- Submit evidence for this session:
>> Request:
POST /challenge-response/v1/session/1234567890
Host: veraison.example
Accept: application/vnd.veraison.challenge-response-session+json
Content-Type: application/psa-attestation-token
.....
<< Response:
HTTP/1.1 202 Accepted
Content-format: application/vnd.veraison.challenge-response-session+json
{
"nonce": "MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=",
"expiry": "2031-10-12T07:20:50.52Z",
"accept": [
"application/psa-attestation-token"
],
"state": "processing",
"evidence": {
"type": "application/psa-attestation-token",
"value": "eyJhbGciO...RfrKmTWk"
}
}
- Since we got back a 202, we need to poll for result:
>> Request:
GET /challenge-response/v1/session/1234567890
Host: veraison.example
Accept: application/vnd.veraison.challenge-response-session+json
<< Response:
HTTP/1.1 200 OK
Content-format: application/vnd.veraison.challenge-response-session+json
{
"nonce": "MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=",
"expiry": "2031-10-12T07:20:50.52Z",
"accept": [
"application/psa-attestation-token"
],
"state": "complete",
"evidence": {
"type": "application/psa-attestation-token",
"value": "eyJhbGciO...RfrKmTWk"
},
"result": {
"is_valid": true,
"claims": {
// ...
}
}
}
Synchronous verification
- Submit evidence for this session and obtain the attestation result right away (200):
>> Request:
POST /challenge-response/v1/session/1234567890
Host: veraison.example
Accept: application/vnd.veraison.challenge-response-session+json
Content-Type: application/psa-attestation-token
.....
<< Response:
HTTP/1.1 200 OK
Content-format: application/vnd.veraison.challenge-response-session+json
{
"nonce": "MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=",
"expiry": "2030-10-12T07:20:50.52Z",
"accept": [
"application/psa-attestation-token"
],
"state": "complete",
"evidence": {
"type": "application/psa-attestation-token",
"value": "eyJhbGciO...RfrKmTWk"
},
"result": {
"is_valid": true,
"claims": {
// ...
}
}
}
Policy management
Policies allow augmenting the Attestation Results generated by an attestation scheme on a per-deployment basis, by specifying additional rules to be applied to those results. This may result in changes to the trust vector values (and, correspondingly, the overall result status), and/or in the addition of supplementary claims under the `policy-added-claims` EAR extension.
A policy is defined by a text document describing the rules to be applied by the policy engine. Currently, Veraison only supports the OPA policy engine; correspondingly, policies must be written in the Rego language used by OPA. See the OPA policy README in the services repo for details.
During verification, the active policy for the attestation scheme/tenant combination is retrieved. Currently, only one policy per scheme/tenant combination may be active at a time.
The policy management API exposes each scheme's policy via a unique resource path in the form `/management/v1/policy/{scheme}`. It supports the basic CRUD operations typical of resource management, in the following way:
- create: A new policy is added to a scheme via a POST request to its path.
- read: Policy can be retrieved via a GET request to its path.
- update: A direct update is not possible; however, a new policy can be added and activated. Activating the new policy deactivates the old one, effectively replacing it.
- delete: True deletion is not possible. This allows retrieval of the specific policy version that was used to generate an attestation result. However, all policies may be deactivated, resulting in no policy being used during verification.
REST API
Adding a new policy
- A new policy is added for a scheme (with the tenant implicit via auth).
- Optionally, a name for the policy may be specified via query arguments. If a name is not specified, the default name "default" will be used.
- On success, the new policy object is returned, with the Location header set to its unique URI.
- The newly added policy is inactive. It must be activated via a separate request (see below).
Request:
POST /management/v1/policy/PSA_IOT?name=base HTTP/1.1
Host: veraison.example
Content-Type: application/vnd.veraison.policy.opa
Accept: application/vnd.veraison.policy+json
hardware = "AFFIRMING" {
// secured state
state := bits.rsh(bits.and(evidence["psa-security-lifecycle"], 0xFFFF0000), 8)
state >= 0x3000
state <= 0x30FF
} else = "CONTRAINDICATED"
Response:
HTTP/1.1 201 Created
Content-Type: application/vnd.veraison.policy+json
Location: /management/v1/policy/PSA_IOT/340d22f7-9eda-499f-9aa2-5af295d6d812
{
"type": "opa",
"name": "base",
"uuid": "340d22f7-9eda-499f-9aa2-5af295d6d812",
"active": false,
"ctime": "2023-07-06 00:00:00",
"rules": "hardware = \"AFFIRMING\" {\n // secured state\n state := bits.rsh(bits.and(evidence[\"psa-security-lifecycle\"], 0xFFFF0000), 8)\n state >= 0x3000\n state <= 0x30FF\n } else = \"CONTRAINDICATED\"\n }"
}
Retrieving a specific policy object
A policy may be retrieved via its unique URI.
Request:
GET /management/v1/policy/PSA_IOT/340d22f7-9eda-499f-9aa2-5af295d6d812 HTTP/1.1
Host: veraison.example
Accept: application/vnd.veraison.policy+json
Response:
HTTP/1.1 200 OK
Content-Type: application/vnd.veraison.policy+json
{
"type": "opa",
"name": "base",
"uuid": "340d22f7-9eda-499f-9aa2-5af295d6d812",
"active": false,
"ctime": "2023-07-06 00:00:00",
"rules": "hardware = \"AFFIRMING\" {\n // secured state\n state := bits.rsh(bits.and(evidence[\"psa-security-lifecycle\"], 0xFFFF0000), 8)\n state >= 0x3000\n state <= 0x30FF\n } else = \"CONTRAINDICATED\"\n }"
}
Retrieving the active policy object
The active policy object for a scheme may be retrieved by omitting the UUID.
Request:
GET /management/v1/policy/PSA_IOT HTTP/1.1
Host: veraison.example
Accept: application/vnd.veraison.policy+json
Response:
HTTP/1.1 200 OK
Content-Type: application/vnd.veraison.policy+json
{
"type": "opa",
"name": "base",
"uuid": "ae19cc27-a449-1fb8-6c10-00f47ad1c55c",
"active": true,
"ctime": "2023-07-06 00:00:00",
"rules": "hardware = \"AFFIRMING\" // always affirm"
}
Retrieving all policy objects for a scheme
Optionally, a name query parameter may be used to filter the results, returning only the scheme's policies with the specified name.
Request:
GET /management/v1/policies/PSA_IOT?name=base HTTP/1.1
Host: veraison.example
Accept: application/vnd.veraison.policy+json
Response:
HTTP/1.1 200 OK
Content-Type: application/vnd.veraison.policies+json
[
{
"type": "opa",
"name": "base",
"uuid": "ae19cc27-a449-1fb8-6c10-00f47ad1c55c",
"active": true,
"ctime": "2023-07-06 00:00:00",
"rules": "hardware = \"AFFIRMING\" // always affirm"
},
{
"type": "opa",
"name": "base",
"uuid": "340d22f7-9eda-499f-9aa2-5af295d6d812",
"active": false,
"ctime": "2023-07-06 00:00:00",
"rules": "hardware = \"AFFIRMING\" {\n // secured state\n state := bits.rsh(bits.and(evidence[\"psa-security-lifecycle\"], 0xFFFF0000), 8)\n state >= 0x3000\n state <= 0x30FF\n } else = \"CONTRAINDICATED\"\n }"
}
]
Activate a policy
Set a given policy to become the active one. This operation atomically deactivates any previously active policy for the scheme.
Request:
POST /management/v1/policy/PSA_IOT/340d22f7-9eda-499f-9aa2-5af295d6d812/activate HTTP/1.1
Host: veraison.example
Response:
HTTP/1.1 200 OK
Deactivate all policies
Request:
POST /management/v1/policies/PSA_IOT/deactivate HTTP/1.1
Host: veraison.example
Response:
HTTP/1.1 200 OK
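Putting these operations together, the typical workflow is to add a policy and then activate it by UUID. The following is a minimal illustrative sketch in Go (standard library only; the host, scheme, and file name are assumptions carried over from the examples above):

package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"os"
)

// policy mirrors the policy object returned by the management API
// (only the field used by this sketch).
type policy struct {
	UUID string `json:"uuid"`
}

func main() {
	base := "https://veraison.example/management/v1"

	rules, err := os.ReadFile("policy.rego")
	if err != nil {
		log.Fatal(err)
	}

	// 1. add the policy; it starts out inactive
	req, err := http.NewRequest(http.MethodPost,
		base+"/policy/PSA_IOT?name=base", bytes.NewReader(rules))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/vnd.veraison.policy.opa")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	var p policy
	if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()

	// 2. activate it, atomically deactivating any previously active policy
	req, err = http.NewRequest(http.MethodPost,
		base+"/policy/PSA_IOT/"+p.UUID+"/activate", nil)
	if err != nil {
		log.Fatal(err)
	}
	if resp, err = http.DefaultClient.Do(req); err != nil {
		log.Fatal(err)
	}
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("activation failed: %s", resp.Status)
	}
}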
Endorsement and Reference Value Provisioning
Concise Reference Integrity Manifest
The Concise Reference Integrity Manifest (CoRIM) is a format for representing Endorsements and Reference Values as defined by the RATS architecture. See this IETF Draft for the detailed specification.
A Go implementation exists on GitHub.
CoRIM is the format used for provisioning Endorsements and Reference Values by all attestation schemes currently supported by Veraison Services.
Concise Software Identification Tags
Concise Software Identification Tags (CoSWID) is a format for describing software components (e.g., a specific release of software, a patch, or an installation bundle). The full specification is defined by RFC9393.
A Go implementation exists on GitHub.
Extending CoRIM
The CoRIM specification may be extended by CoRIM Profiles, documented in other specifications, at well-defined points identified in the base CoRIM spec using CDDL extension sockets. CoRIM profiles may:
- Introduce new data fields to certain objects.
- Further constrain allowed values for existing fields.
- Introduce new type choices for fields whose values can be one of several types.
This implementation, likewise, allows dependent code to register extension types. This is done via three distinct extension mechanisms:
- Structures that allow extensions embed an `Extensions` object, which allows registering a user-provided struct. The user-provided struct can extend its containing structure in two ways:
  - the fields of the user-provided struct become additional fields of the containing structure;
  - the user-provided struct may define additional constraints on the containing structure by defining an appropriate validation method for it.
  This corresponds to the map-extension sockets in the spec.
- Some fields may have values chosen from a set of pre-determined types (e.g., an instance ID may be either a UUID or a UEID). These (mostly) correspond to type-choice sockets in the CoRIM spec. The set of allowed types for a field may be extended by registering a factory function for a new type, using the registration function associated with the type choice.
- A few type-choice sockets (`$tag-rel-type-choice`, `$corim-role-type-choice`, and `$comid-role-type-choice`) define what are, in effect, extensible enums. They allow providing additional values, rather than types. This implementation provides registration functions for new values for those types.
note
CoRIM also "imports" CDDL from the CoSWID spec. Some of these CoSWID CDDL definitions also feature extension sockets. However, as they are defined in a different spec and are implemented in the `veraison/swid` library, they cannot be extended using the extension feature provided by the CoRIM library. The extension support in the CoRIM library applies ONLY to CoRIM and CoMID maps and type choices.
Map Extensions
Map extensions allow extending CoRIM and CoMID maps with additional keys, effectively defining new fields for the corresponding structures. In the code base, these can be identified by the embedded `Extensions` struct. Each extensible type has a corresponding `extensions.Point`. These are:
Extended Type | Extension Point(s) | Parent Structure | Where to Call RegisterExtensions() |
---|---|---|---|
comid.Comid | comid.ExtComid | comid.Comid (the top-level CoMID) | On a comid.Comid instance (e.g. myComid.RegisterExtensions(extMap)) |
comid.Entity | comid.ExtEntity | comid.Entity | Usually indirect via myComid.RegisterExtensions(...) (the Comid sees it). |
comid.Triples | comid.ExtTriples | comid.Triples | Typically indirect via myComid.RegisterExtensions(...). |
comid.Mval | comid.ExtReferenceValue, comid.ExtEndorsedValue, comid.ExtMval | comid.Mval (measurement-value in reference/endorsed) | Usually indirect via myComid.RegisterExtensions(...). |
comid.FlagsMap | comid.ExtReferenceValueFlags, comid.ExtEndorsedValueFlags, comid.ExtFlags | comid.FlagsMap | Typically indirect via myComid.RegisterExtensions(...). |
corim.UnsignedCorim | corim.ExtUnsignedCorim | corim.UnsignedCorim (the top-level CoRIM) | On a corim.UnsignedCorim instance (e.g. myCorim.RegisterExtensions(...)) |
corim.Entity | corim.ExtEntity | corim.Entity | Usually indirect via myCorim.RegisterExtensions(...). |
corim.Signer | corim.ExtSigner | corim.Signer | Usually indirect via myCorim.RegisterExtensions(...). |
Note that `comid.Mval` and `comid.FlagsMap` are used for both reference values and endorsed values, which may be extended separately. This is why there are two extension points associated with each. Additionally, `comid.ExtMval` and `comid.ExtFlags` also exist for when you want to register extensions with a `comid.Mval` or `comid.Measurement` directly (and so don't have the context of whether it will be going into a reference or an endorsed value).
The diagram below shows a visual representation of where these extension points originate in the struct hierarchy, and which CoRIM objects are "aware" of which extension points.
(Note: the diagram shows the logical relationships between structures and is not meant to accurately reflect the code -- e.g., it omits the container types, representing them as slices.)
To extend the above types, you need to define a struct containing your extensions for each extension point that you want to extend. You then need to create an `extensions.Map` that maps the extension points to a pointer to an instance of the corresponding struct. Finally, you need to pass the map to the `RegisterExtensions()` method of an instance of the top-most type that is being extended. For example, if you want to extend the comid, reference values, and entities, you would create a struct defining the extensions for each, call `extensions.NewMap()` to create a new map, call `Add()` on the map to add mappings from the three extension points to pointers to empty instances of your structs, and, finally, pass the map to `comid.Comid.RegisterExtensions()`. This should be done as early as possible, before any marshaling is performed (see the example below).
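For instance, the registration flow just described might look like the following sketch (the three extension structs here are empty placeholders standing in for your own definitions):

package main

import (
	"github.com/veraison/corim/comid"
	"github.com/veraison/corim/extensions"
)

// placeholder extension structs; in real code these would define the
// additional fields and/or constraint methods described below
type ComidExtensions struct{}
type RefValExtensions struct{}
type EntityExtensions struct{}

func main() {
	// map each extension point to a pointer to an empty instance
	extMap := extensions.NewMap().
		Add(comid.ExtComid, &ComidExtensions{}).
		Add(comid.ExtReferenceValue, &RefValExtensions{}).
		Add(comid.ExtEntity, &EntityExtensions{})

	// register on the top-most type being extended, before any
	// marshaling or unmarshaling takes place
	var myComid comid.Comid
	myComid.RegisterExtensions(extMap)
}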
These types can be extended in two ways: by adding additional fields, and by introducing additional constraints over existing fields.
Adding new fields
To add new fields, simply add them to your extensions struct, ensuring that the `cbor` and `json` tags on those fields are set correctly. As CoRIM mandates integer keys, you must use the `keyasint` option in the `cbor` tag.
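For instance, a hypothetical extensions struct adding two new fields might look like this (the field names and the private-use keys -1 and -2 are placeholders):

// a hypothetical extensions struct; note the keyasint option on the
// cbor tags, as CoRIM mandates integer map keys
type MvalExtensions struct {
	Timestamp *int    `cbor:"-1,keyasint,omitempty" json:"timestamp,omitempty"`
	Vendor    *string `cbor:"-2,keyasint,omitempty" json:"vendor,omitempty"`
}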
To access the values of those fields, you can call the extended type instance's `Extensions.Get()` method, passing in the name of the field you want to access. The name can be either the Go struct's field name, the name specified in the `json` tag, or (a string containing) the integer specified in the `cbor` tag.
`Get()` returns an `interface{}`. There are equivalent `GetInt()`, `GetString()`, etc. methods that perform the required conversions and return the value of the indicated type, along with possible errors. ("Must" versions of these also exist, e.g. `MustGetString()`, that do not return an error and instead call `panic()`.)
You can also get the pointer to your extensions instance itself by calling the extended type instance's `GetExtensions()`. This returns an `interface{}`, so you will need a type assertion to be able to access the fields directly.
Introducing additional constraints
To introduce new constraints, add a method called `Constrain<TYPE>(v *<TYPE>)` to your extensions struct, where `<TYPE>` is the name of the type being extended (one of those listed above) -- e.g. `ConstrainComid(v *comid.Comid)` when extending `comid.Comid`. This method, if it exists, will be invoked inside the extended type instance's `Valid()` method, with the instance passed in as the parameter.
You do not need to define this method unless you actually want to enforce some constraints; if you just want to define additional fields, it can be omitted.
Example
The following example illustrates how to implement a map extension by extending `comid.Entity` with the following features:
- an optional "email" field
- an additional constraint on the existing "name" field, requiring that it contain a valid UUID (note: since `NameEntry` is type-choice extensible, this could also be done by defining a new value type for `NameEntry` -- see the following section).
package main
import (
"encoding/json"
"fmt"
"log"
"github.com/google/uuid"
"github.com/veraison/corim/comid"
"github.com/veraison/corim/extensions"
)
// the struct containing the extensions
type EntityExtensions struct {
// a string field extension
Email string `cbor:"-1,keyasint,omitempty" json:"email,omitempty"`
}
// custom constraints for Entity
func (o EntityExtensions) ConstrainEntity(val *comid.Entity) error {
_, err := uuid.Parse(val.EntityName.String())
if err != nil {
return fmt.Errorf("invalid UUID: %w", err)
}
return nil
}
var sampleText = `
{
"name": "31fb5abf-023e-4992-aa4e-95f9c1503bfa",
"regid": "https://acme.example",
"email": "info@acme.com",
"roles": [
"tagCreator",
"creator",
"maintainer"
]
}
`
func main() {
var entity comid.Entity
extMap := extensions.NewMap().Add(comid.ExtEntity, &EntityExtensions{})
entity.RegisterExtensions(extMap)
if err := json.Unmarshal([]byte(sampleText), &entity); err != nil {
log.Fatalf("ERROR: %s", err.Error())
}
if err := entity.Valid(); err != nil {
log.Fatalf("failed to validate: %s", err.Error())
} else {
fmt.Println("validation succeeded")
}
// obtain the extension field value via a generic getter
email := entity.Extensions.MustGetString("email")
fmt.Printf("entity email: %s\n", email)
// retrieve the extensions struct and get the value via its field.
exts := entity.GetExtensions().(*EntityExtensions)
fmt.Printf("also entity email: %s\n", exts.Email)
}
Profiles
Map extensions may be grouped into profiles. A profile is registered, associating an `eat.Profile` with an `extensions.Map`. A registered profile can be obtained by calling `corim.GetProfileManifest()`, which returns a `corim.ProfileManifest` object that can be used to obtain `corim.UnsignedCorim`, `corim.SignedCorim`, and `comid.Comid` instances that have the associated extensions registered. `corim.UnmarshalUnsignedCorimFromCBOR()` will automatically look up a registered profile based on the `eat.Profile` in the provided data (there are corresponding functions for JSON and signed CoRIM).
package main

import (
	"log"
	"os"

	"github.com/veraison/corim/comid"
	"github.com/veraison/corim/corim"
	"github.com/veraison/corim/extensions"
	"github.com/veraison/eat"
)

// define extensions
type EntityExtensions struct {
	Address *string `cbor:"-1,keyasint,omitempty" json:"address,omitempty"`
}

type RefValExtensions struct {
	Timestamp *int `cbor:"-1,keyasint,omitempty" json:"timestamp,omitempty"`
}

// register the profile
func init() {
	profileID, err := eat.NewProfile("http://example.com/example-profile")
	if err != nil {
		panic(err) // will not error, as the hard-coded string above is valid
	}
	extMap := extensions.NewMap().
		Add(comid.ExtEntity, &EntityExtensions{}).
		Add(comid.ExtReferenceValue, &RefValExtensions{})
	if err := corim.RegisterProfile(profileID, extMap); err != nil {
		// will not error, assuming our profile ID is unique, and we've
		// correctly set up the extensions Map above
		panic(err)
	}
}

// use the profile
func main() {
	buf, err := os.ReadFile("testcases/unsigned-example-corim.cbor")
	if err != nil {
		log.Fatalf("could not read corim file: %v", err)
	}
	// UnmarshalUnsignedCorimFromCBOR will detect the profile and ensure
	// the correct extensions are loaded before unmarshaling
	extractedCorim, err := corim.UnmarshalUnsignedCorimFromCBOR(buf)
	if err != nil {
		log.Fatalf("could not unmarshal corim: %v", err)
	}
	_ = extractedCorim
	// ...
}
Please see example_profile_test.go for the complete example of creating and using CoRIM profiles.
note
Currently, only map extensions can be associated with profiles. Type choice and enum value extensions (described below) can only be registered globally.
Type Choice Extensions
Type Choice extensions allow specifying alternative types for existing CoRIM fields by defining a type that implements an appropriate interface and registering it with a CBOR tag.
A type choice struct contains a single field, `Value`, that holds the actual object represented by the type choice. The `Value` implements an interface that is specific to the type choice and is derived from `ITypeChoiceValue`:
type ITypeChoiceValue interface {
// String returns the string representation of the ITypeChoiceValue.
String() string
// Valid returns an error if validation of the ITypeChoiceValue fails,
// or nil if it succeeds.
Valid() error
// Type returns the type name of this ITypeChoiceValue implementation.
Type() string
}
The following is the full list of type choice structs:
comid.ClassID
comid.CryptoKey
comid.EntityName
comid.Group
comid.Instance
comid.Mkey
comid.SVN
corim.EntityName
To provide a new value type, the following is required:
- Define a type that implements the value interface for the type choice you want to extend. This interface is called `I<NAME>Value`, where `<NAME>` is the name of the type choice type (e.g. `IClassIDValue`). These interfaces always embed `ITypeChoiceValue` and possibly define additional methods.
- Create a factory function for your type, with the signature `func (any) (*<NAME>, error)`, where `<NAME>` is the name of the type choice type that will contain your value. (Note that the function must return a pointer to the containing type choice struct, not to the value type you define.) This function should create an instance of your value type from the provided input and return a new type choice struct instance containing it. The range of valid inputs is up to you; however, it must handle `null`, returning the zero value for your type in that case.
- Register your factory function with the CBOR tag for your new type by passing it to the registration function corresponding to the type choice struct. It will have the name `Register<NAME>Type`, where `<NAME>` is the name of the type choice struct that will contain your value (e.g. `RegisterClassIDType`).
Example
The following example illustrates how to add a new type choice value
implementation by extending the CryptoKey
type to support DER values.
package main
import (
"crypto"
"crypto/x509"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"log"
"github.com/veraison/corim/comid"
)
// the CBOR tag to be used for the new type
var DerKeyTag = uint64(9999)
// new implementation of ICryptoKeyValue type
type TaggedDerKey []byte
// The factory function for the new type
func NewTaggedDerKey(k any) (*comid.CryptoKey, error) {
var b []byte
var err error
if k == nil {
k = *new([]byte)
}
switch t := k.(type) {
case []byte:
b = t
case string:
b, err = base64.StdEncoding.DecodeString(t)
if err != nil {
return nil, err
}
default:
return nil, fmt.Errorf("value must be a []byte; found %T", k)
}
key := TaggedDerKey(b)
return &comid.CryptoKey{Value: key}, nil
}
func (o TaggedDerKey) String() string {
return base64.StdEncoding.EncodeToString(o)
}
func (o TaggedDerKey) Valid() error {
_, err := o.PublicKey()
return err
}
func (o TaggedDerKey) Type() string {
return "pkix-der-key"
}
func (o TaggedDerKey) PublicKey() (crypto.PublicKey, error) {
if len(o) == 0 {
return nil, errors.New("key value not set")
}
key, err := x509.ParsePKIXPublicKey(o)
if err != nil {
return nil, fmt.Errorf("unable to parse public key: %w", err)
}
return key, nil
}
var testKeyJSON = `
{
"type": "pkix-der-key",
"value": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEW1BvqF+/ry8BWa7ZEMU1xYYHEQ8BlLT4MFHOaO+ICTtIvrEeEpr/sfTAP66H2hCHdb5HEXKtRKod6QLcOLPA1Q=="
}
`
func main() {
// register the factory function under the CBOR tag.
if err := comid.RegisterCryptoKeyType(DerKeyTag, NewTaggedDerKey); err != nil {
log.Fatal(err)
}
var key comid.CryptoKey
if err := json.Unmarshal([]byte(testKeyJSON), &key); err != nil {
log.Fatal(err)
}
fmt.Printf("Decoded DER key: %x\n", key)
}
Enum extensions
The following enum types may be extended with additional values:
comid.Rel
comid.Role
corim.Role
This can be done by calling `RegisterRel` or `RegisterRole`, as appropriate, providing the new `uint64` value and its corresponding `string` name.
Example
package main
import (
"encoding/json"
"fmt"
"log"
"github.com/veraison/corim/comid"
)
var sampleText = `
{
"name": "Acme Ltd.",
"regid": "https://acme.example",
"roles": [
"tagCreator",
"owner"
]
}
`
func main() {
// associate role value 4 with the name "owner"
comid.RegisterRole(4, "owner")
var entity comid.Entity
if err := json.Unmarshal([]byte(sampleText), &entity); err != nil {
log.Fatalf("ERROR: %s", err.Error())
}
if err := entity.Valid(); err != nil {
log.Fatalf("failed to validate: %s", err.Error())
} else {
fmt.Println("validation succeeded")
}
fmt.Println("roles:")
for _, role := range entity.Roles {
fmt.Printf("\t%s\n", role.String())
}
}
Attestation Results
Attestation results produced by Remote Attestation Services are in the EAT Attestation Result (EAR) format.
Library support for working with EARs is available for Go, Rust, C, and Python.
At the core of an EAR is a Trust Vector, defined by the AR4SI IETF draft, which breaks down the appraisal of attestation evidence into eight categories (a particular Attestation Scheme may not utilize all of the categories).
EARs may be serialised as either JWT or COSE messages (Remote Attestation Services produces results as JWT only).
Example attestation result
This is the JSON serialization of a sample attestation result, as produced by Remote Attestation Services for Arm PSA evidence.
{
"ear.status": "affirming",
"ear.trustworthiness-vector": {
"instance-identity": 2,
"configuration": 0,
"executables": 2,
"file-system": 0,
"hardware": 2,
"runtime-opaque": 2,
"storage-opaque": 2,
"sourced-data": 0
},
"ear.appraisal-policy-id": "policy:PSA_IOT",
"ear.veraison.annotated-evidence": {
"eat-profile": "http://arm.com/psa/2.0.0",
"psa-client-id": 1,
"psa-security-lifecycle": 12288,
"psa-implementation-id": "YWNtZS1pbXBsZW1lbnRhdGlvbi1pZC0wMDAwMDAwMDE=",
"psa-boot-seed": "3q2+796tvu/erb7v3q2+796tvu/erb7v3q2+796tvu8=",
"psa-software-components": [
{
"measurement-type": "BL",
"measurement-value": "h0KPxSKAPTEGXnvOPPA/5HUJZjHl4Hu9eg/eYMTPJcc=",
"signer-id": "rLsRx+TaIXIFUjzkzhokWuGiOa48a/2eeHH35di66Gs=",
"version": "2.1.0"
},
{
"measurement-type": "PRoT",
"measurement-value": "AmOCmYm2/ZVPcrqvL8ZLwuLwHWktTecphuqAj26ZgT8=",
"signer-id": "rLsRx+TaIXIFUjzkzhokWuGiOa48a/2eeHH35di66Gs=",
"version": "1.3.5"
},
{
"measurement-type": "ARoT",
"measurement-value": "o6XnFfDMV0pzw/m+u2vCTzL/1bZ7OHJEwskJ2neaFHg=",
"signer-id": "rLsRx+TaIXIFUjzkzhokWuGiOa48a/2eeHH35di66Gs=",
"version": "0.1.4"
}
],
"psa-instance-id": "Ac7rrnuJJ6MiflMDz14PH3s0u1Qq1yUKwD+83jbsLxUI",
"psa-verification-service-indicator": "https://psa-verifier.org",
"psa-nonce": "QUp8F0FBs9DpodKK8xUg8NQimf6sQAfe2J1ormzZLxk="
}
}
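As an illustration, claims like those above can be inspected from the JWT serialisation with nothing more than the Go standard library. The sketch below decodes the payload without verifying the signature, so it is for inspection only; a real relying party should use the veraison/ear library to verify the result:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"strings"
)

// decodeEARClaims extracts the claim set from an EAR serialised as a JWT.
// NOTE: the signature is NOT verified here; use the veraison/ear library
// for proper verification before trusting an attestation result.
func decodeEARClaims(token string) (map[string]interface{}, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return nil, fmt.Errorf("not a JWT: expected 3 parts, got %d", len(parts))
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return nil, fmt.Errorf("decoding payload: %w", err)
	}
	var claims map[string]interface{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return nil, fmt.Errorf("unmarshaling claims: %w", err)
	}
	return claims, nil
}

func main() {
	token := "<EAR JWT returned by the verification API>"
	claims, err := decodeEARClaims(token)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("status:", claims["ear.status"])
}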
Endorsement Distribution API
This musing discusses the addition of a new Veraison service that provides an Endorsement Distribution API endpoint.
In the remainder of this musing, we will use the shorthand "EAPI" to refer to this new service.
Simplifying Assumptions
At this stage, this work is intended as a proof of concept, so we will make a few simplifying assumptions:
- Arm CCA only
- Simplified RBAC model with fixed tenant
- Transactional (request-response) API
  - All responses will be "full" in the PoC: we won't try to exemplify any kind of fine-grained partial patching of data.
  - The client is expected to perform a periodic fetch to refresh its local cache.
- API output in rust-ccatoken format
  - The output format will generally depend on the attestation scheme. In the prototype, the Arm CCA scheme identifier will be included during the API negotiation process. Any other scheme identifier will result in a failure.
These choices minimise the amount of new client code required while still creating many of the necessary building blocks on the server side.
New bits and pieces
EAPI will be added as a new service alongside the existing verification, management and provisioning components.
Strictly speaking, EAPI depends only on VTS (Veraison Trusted Services), which is the exclusive owner of the stores from which the endorsements are fetched. EAPI is likely to execute in combination with the provisioning service, which allows ingesting up-to-date endorsements from "upstream" endorsers and reference value providers via its push API. In an EAPI-driven deployment, the verification service can be omitted. (Full-blown RBAC support will need integration with Keycloak; this can be deferred to a later stage.)
EAPI exposes a REST API endpoint over HTTPS that allows a client to retrieve "endorsements" (i.e., reference values and trust anchors) pertaining to a given attester ((A) in the figure below).
Since the endorsement stores are owned by the VTS component, one or more gRPC services (B) need to be defined to allow EAPI to query the stores (C) through VTS. Their exact shape will be driven by the interface offered by the external REST API.
VTS will be extended to receive the query via the new gRPC service, create the corresponding query (or queries) to the stores and forward the result set back to EAPI. EAPI will repackage the results according to the REST spec and send them in the HTTP response to the client.
If needed, scheme-specific query adapters will be encapsulated in an "EAPI query plugin" (D).
.--------.          .------.       .-----.       .--------.
| Vfying |          |      |       |     |       |'------'|
|  RP    +----o )---+ EAPI +-------+ VTS +-------+ Stores |
|        |   (A)    |      |  (B)  |     |  (C)  |        |
'--------'          '------'       '-----'       '--------'
                                      | (D)
                                  .---+---.
                                  | query |
                                  | plugin|
                                  '-------'
Attester Identification
When requesting endorsements for a specific target or attesting environment, the client must provide an identifier for that environment. These identifiers are specific to the attestation scheme being used.
A suitable data model for encapsulating different identifiers is the `environment-map` from CoRIM. This prototype will utilize the `environment-map`, along with an attestation scheme identifier, to supply the necessary query parameters:
- The attestation scheme identifier is used to locate the right "query plugin".
- The `environment-map` is treated as opaque data until it reaches the intended plugin, where it is interpreted and mapped to a suitable KV-store key for lookup.
The schemes supported by the prototype (Arm CCA only) and the attester identification data model (CoRIM's `environment-map`) will be exposed via a discovery resource at `.well-known/veraison/endorsement-distribution`.
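To make the shape of a query concrete, a purely hypothetical sketch is shown below as a Go type; every name here is illustrative, since the PoC API shape is still to be defined:

package eapi

// EndorsementQuery is a hypothetical EAPI request body; the field names
// are illustrative only, as the actual API is not yet pinned down.
type EndorsementQuery struct {
	// Attestation scheme identifier, used to select the query plugin;
	// in the PoC, anything other than Arm CCA is rejected.
	Scheme string `json:"scheme"`
	// CBOR-encoded CoRIM environment-map identifying the attester,
	// treated as opaque data until it reaches the scheme's plugin.
	Environment []byte `json:"environment"`
}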
RATS Conceptual Message Collection Daemon
See GitHub.