The signer service provides a REST API for creating Verifiable Credentials (VCs) and Verifiable Presentations (VPs) in the W3C 1.0 credential format. It also provides more generic endpoints for signing arbitrary data, for adding cryptographic proofs to existing VCs/VPs, and for fetching the public keys needed for signature verification.
It is developed using the Goa v3 framework.
A helper script named `goagen.sh` can be found in the root directory of the service. It can be used to generate the transport layer code from the Goa DSL definitions in the `design` directory. The script should be executed every time the design definitions are updated. It also regenerates the OpenAPI documentation from the DSL.
In the local docker-compose environment a live Swagger UI is exposed at http://localhost:8085/swagger-ui/.
```mermaid
flowchart LR
    A([client]) -- HTTP --> B[crypto service API]
    subgraph crypto service
    B --GRPC--> C[GRPC Crypto Engine]
    C --> D[Vault\nTransit API]
    D --> E[(Vault)]
    C --> F[Local Crypto API]
    C --> G[HSM API]
    G --> H[(HSM)]
    end
```
The signer supports linked data proofs and, optionally, SD-JWT (experimental). SD-JWT is not yet finally standardized, so the implementation is just an experimental demonstration of how it could be done. The SD-JWT support uses the sd-jwt service (https://gitlab.eclipse.org/eclipse/xfsc/common-services/sd-jwt-service) and currently supports only did:jwk as proof signature.
An engine can be set in the Helm chart like this:
```yaml
engine:
  image: node-654e3bca7fbeeed18f81d7c7.ps-xaas.io/tsa/crypto-provider-hashicorp-vault-plugin:v2.0.3
  address: 0.0.0.0:50051
  pullPolicy: Always
  env:
    VAULT_ADRESS: http://vault.vault.svc.cluster.local:8200
  secretEnv:
    VAULT_TOKEN:
      name: vault
      key: token
```
In general, the crypto engine is configured by setting the CRYPTO_GRPC_ADDR environment variable, for instance to 127.0.0.1:50051.
A credential with a linked data proof (`ldp_vc`) can be created by using the credential proof route:
```json
{
  "namespace": "xyz",
  "group": "xyz",
  "key": "test",
  "format": "ldp_vc",
  "credential": {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://w3id.org/security/suites/jws-2020/v1",
      "https://schema.org"
    ],
    "credentialSubject": {
      "id": "did:jwk:eyJhbGciOiJFUzI1NiIsImNydiI6IlAtMjU2Iiwia3R5IjoiRUMiLCJ4IjoibE01NDNya2xwUUV3T2oyMFowRmdnMUhjMHlkZlhJRU05ckEzRzNSNXdFVSIsInkiOiJvVERYTVNuOWxlMGhrMC1pemFHRUF5OFBxT2pncWUtNWpVMldzbEZBcUw0In0",
      "testdata": {"hello": "world", "testXY": "1234"}
    },
    "issuanceDate": "2022-06-02T17:24:05.032533+03:00",
    "issuer": "https://example.com",
    "type": "VerifiableCredential"
  }
}
```
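To illustrate how a client could assemble such a call, the sketch below builds the request in Go. The struct, the endpoint path `/v1/credential`, and the base URL are assumptions for illustration; the real route and types are in the generated OpenAPI documentation.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// CredentialRequest mirrors the JSON body shown above. The struct is
// illustrative, not the service's actual Go types.
type CredentialRequest struct {
	Namespace  string                 `json:"namespace"`
	Group      string                 `json:"group"`
	Key        string                 `json:"key"`
	Format     string                 `json:"format"`
	Credential map[string]interface{} `json:"credential"`
}

// buildRequest creates the HTTP request for the credential route.
// The path /v1/credential is an assumption; check the OpenAPI spec.
func buildRequest(base string, req CredentialRequest) (*http.Request, error) {
	body, err := json.Marshal(req)
	if err != nil {
		return nil, err
	}
	httpReq, err := http.NewRequest(http.MethodPost, base+"/v1/credential", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	httpReq.Header.Set("Content-Type", "application/json")
	return httpReq, nil
}

func main() {
	req := CredentialRequest{
		Namespace: "xyz",
		Group:     "xyz",
		Key:       "test",
		Format:    "ldp_vc",
		Credential: map[string]interface{}{
			"issuer": "https://example.com",
			"type":   "VerifiableCredential",
		},
	}
	httpReq, err := buildRequest("http://localhost:8085", req)
	if err != nil {
		panic(err)
	}
	fmt.Println(httpReq.Method, httpReq.URL.String())
}
```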
An SD-JWT credential can be created with the following structure. `DisclosureFrame` describes which fields are to be selectively disclosed.
```json
{
  "namespace": "xyz",
  "group": "xyz",
  "key": "test",
  "format": "vc+sd-jwt",
  "credential": {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://w3id.org/security/suites/jws-2020/v1",
      "https://schema.org"
    ],
    "credentialSubject": {
      "id": "did:jwk:eyJhbGciOiJFUzI1NiIsImNydiI6IlAtMjU2Iiwia3R5IjoiRUMiLCJ4IjoibE01NDNya2xwUUV3T2oyMFowRmdnMUhjMHlkZlhJRU05ckEzRzNSNXdFVSIsInkiOiJvVERYTVNuOWxlMGhrMC1pemFHRUF5OFBxT2pncWUtNWpVMldzbEZBcUw0In0",
      "testdata": {"hello": "world", "testXY": "1234"}
    },
    "issuanceDate": "2022-06-02T17:24:05.032533+03:00",
    "issuer": "https://example.com",
    "type": "VerifiableCredential"
  },
  "DisclosureFrame": ["testdata"]
}
```
The current signer supports multiple crypto engines, which can be loaded from the internal image by setting the variable ENGINE_PATH to the available engine.
The signer service is configured via environment variables, as specified in the configuration file. The envconfig library is used to read the configuration data from the ENV variables.
The service uses Hashicorp Vault for crypto operations and storage of key material. The Vault client is defined as a Go interface, so an implementer can provide different crypto engine implementations for crypto operations and key storage.
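The real interface lives in the service source; the sketch below only illustrates the pattern of swapping engines behind a Go interface. The method names and the toy "engine" are hypothetical.

```go
package main

import "fmt"

// CryptoEngine is a hypothetical stand-in for the interface the
// service defines for its Vault client; the actual method set is
// in the service code.
type CryptoEngine interface {
	Sign(keyName string, data []byte) ([]byte, error)
	PublicKey(keyName string) ([]byte, error)
}

// reverseEngine is a toy implementation standing in for Vault,
// an HSM, or a local crypto provider. It "signs" by reversing
// the input, purely for demonstration.
type reverseEngine struct{}

func (reverseEngine) Sign(keyName string, data []byte) ([]byte, error) {
	out := make([]byte, len(data))
	for i, b := range data {
		out[len(data)-1-i] = b
	}
	return out, nil
}

func (reverseEngine) PublicKey(keyName string) ([]byte, error) {
	return []byte("pub-" + keyName), nil
}

func main() {
	// Any implementation satisfying the interface can be swapped in.
	var e CryptoEngine = reverseEngine{}
	sig, _ := e.Sign("key1", []byte("abc"))
	fmt.Printf("%s\n", sig)
}
```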
Vault setup is described here. When it's up and running, the Vault Transit Engine must be enabled and at least one asymmetric key should be created inside it for use by the Signer service. Vault provides a web UI and a terminal CLI application which can be used to manage Vault engines and keys.
When a client requests a proof or creates a VC, it must specify the name of the transit engine (optionally the engine type if there is no transit engine, e.g. kv or transit;kv) and the signing key to be used. The key must be of a supported type, as not all key types can be used for generating signatures.
Check the Hashicorp Vault docs for all key types supported by the Vault Transit Engine. Keep in mind that not all Vault key types are supported by the Aries framework signature suites and the Signer service (for example RSA). The keys we have tested with are ECDSA and ED25519 for VC/VP; for signing arbitrary data (used for signing policy bundles), all asymmetric key types are supported, including RSA.
The service exposes two endpoints for getting public keys - one for getting a single key by name and the other for getting all possible public keys of the signer service.
The keys are returned in JWK format and are wrapped in a DID Verification Method envelope, so that the response can be used more easily during DID proofs verification process. Example key response:
```json
{
  "id": "key1",
  "publicKeyJwk": {
    "crv": "P-256",
    "kid": "key1",
    "kty": "EC",
    "x": "RTx_2cyYcGVSIRP_826S32BiZxSgnzyXgRYmKP8N2l0",
    "y": "unnPzMAnbByBMq2l9WWKsDFE-MDvX6hYhrESsjAaT50"
  },
  "type": "JsonWebKey2020"
}
```
Terms of Use can be appended via a policy. Under the variable TERMSOFUSE_POLICY a policy can be configured whose result is inserted as Terms of Use during generation. The policy is called via POST, expects the fields tenant and group, and must return the following structure:
```json
{
  "result": [
    {
      "id": "http://example.com/policies/credential/4",
      "profile": "http://example.com/profiles/credential",
      "type": "IssuerPolicy"
      ...
    }
  ]
}
```
The service is now able to generate DID documents for a dedicated engine. When the method is called, it lists all keys of an engine in the verification methods (see the OpenAPI spec). The method is designed to be used behind a load balancer by passing the X-headers. Headers:
| Header | Purpose |
|---|---|
| X-namespace | Namespace of the Keys |
| X-group | Group of the keys, can be empty |
| X-engine | Type of the engine; can be kv and/or transit, separated by ; |
| X-did | DID which shall be used inside of the document as basis id for referencing the keys |
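A load balancer would normally inject these headers, but for testing they can be set by hand. The sketch below builds such a request in Go; the endpoint path `/v1/did/document` and the header values are assumptions for illustration, while the header names are the ones documented above.

```go
package main

import (
	"fmt"
	"net/http"
)

// didDocRequest builds the GET request that would be forwarded to
// the DID document endpoint. The path /v1/did/document is an
// assumption; check the OpenAPI spec for the real route.
func didDocRequest(base, namespace, group, engine, did string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, base+"/v1/did/document", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-namespace", namespace)
	if group != "" { // X-group can be empty per the table above
		req.Header.Set("X-group", group)
	}
	req.Header.Set("X-engine", engine)
	req.Header.Set("X-did", did)
	return req, nil
}

func main() {
	req, err := didDocRequest("http://localhost:8085", "xyz", "", "transit;kv", "did:web:example.com")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("X-engine"))
}
```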
Service Endpoint Policy Usage: under the variable SERVICE_ENDPOINT a policy can be configured which returns service endpoints that are inserted into the DID document during generation. The policy is called via POST, expects a field did, and must return the following structure:
```json
{
  "result": [
    {
      "id": "xxxx",
      "type": "xxxx",
      "serviceEndpoint": "xxxx"
    }
  ]
}
```
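The expected response shape maps directly onto Go structs; the sketch below decodes it. The struct names and the example values (`LinkedDomains`, `ep1`) are illustrative, only the JSON field names come from the structure above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ServiceEndpoint matches one entry of the policy result shown above.
type ServiceEndpoint struct {
	ID              string `json:"id"`
	Type            string `json:"type"`
	ServiceEndpoint string `json:"serviceEndpoint"`
}

// policyResult is the envelope the SERVICE_ENDPOINT policy must return.
type policyResult struct {
	Result []ServiceEndpoint `json:"result"`
}

// parseEndpoints decodes a policy response body.
func parseEndpoints(raw []byte) ([]ServiceEndpoint, error) {
	var r policyResult
	if err := json.Unmarshal(raw, &r); err != nil {
		return nil, err
	}
	return r.Result, nil
}

func main() {
	raw := []byte(`{"result":[{"id":"ep1","type":"LinkedDomains","serviceEndpoint":"https://example.com"}]}`)
	eps, err := parseEndpoints(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(eps[0].Type)
}
```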
The DID configuration is generated according to the W3C DID Configuration specification and is designed to be used behind load balancers by passing the X-headers. Headers:
| Header | Purpose |
|---|---|
| X-namespace | Namespace of the Keys |
| X-group | Group of the keys, can be empty |
| X-did | DID which shall be used inside of the document as basis id for referencing the keys |
| X-origin | Origin which shall be proven by the did config e.g. https://example |
| X-nonce | Nonce for creating the proof on the did config. |
The jwks endpoint generates a standard JWK key set which can be used for OpenID or other key set purposes. It is also designed to be used behind load balancers via the X-headers. Headers:
| Header | Purpose |
|---|---|
| X-namespace | Namespace of the Keys |
| X-group | Group of the keys, can be empty |
| X-engine | Type of the engine; can be kv and/or transit, separated by ; |
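A standard key set is the familiar `{"keys": [...]}` envelope, so a consumer can decode it with a few structs. The sketch below is illustrative; only the JWK fields already shown in this document's key example are used.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jwk holds the JWK fields used in this document's examples.
type jwk struct {
	Kid string `json:"kid"`
	Kty string `json:"kty"`
	Crv string `json:"crv"`
}

// jwks is the standard {"keys":[...]} key-set envelope the
// endpoint returns.
type jwks struct {
	Keys []jwk `json:"keys"`
}

// parseJWKS decodes a key set response body.
func parseJWKS(raw []byte) (jwks, error) {
	var set jwks
	err := json.Unmarshal(raw, &set)
	return set, err
}

func main() {
	raw := []byte(`{"keys":[{"kid":"key1","kty":"EC","crv":"P-256"}]}`)
	set, err := parseJWKS(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(set.Keys[0].Kid)
}
```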
```shell
goa gen github.com/eclipse-xfsc/crypto-provider-service/design
```
To build the service binary locally, run the following command from the root directory (you must have Go installed):
```shell
go build -o signer ./cmd/signer/...
```
You can find the Dockerfiles of the service under the deployment directory. There is one Dockerfile for use during local development with docker-compose and one for building an optimized production image: deployment/docker/Dockerfile.
There is one global exported variable named Version in main.go
This variable is set to the latest tag or commit hash during the build process. You can
look in the production Dockerfile to see how the Version is set during build. The version
is printed in the service log on startup and can be checked to verify which specific version
of the code is deployed.
Version should not be set or modified manually in the source code.
When given a VC/VP for verification, the service checks the validity of the JSON structure
against the included schemas (context), and verifies all proofs inside. Additional custom
verifiers could be written to extend the verification process. Currently, there is one such
extended verification component named train. It can be used as an example for how to write
more verifications if needed.
Extended verification modules are enabled by a configuration variable and the corresponding
implementation. The ENV variable that specifies extended verifiers is named CREDENTIAL_VERIFIERS
and holds comma-separated strings which denote a particular verifier implementation. Inside the
service all listed verifiers are constructed and used during the VC verification process.
```shell
CREDENTIAL_VERIFIERS="train,mynewverifier"
```
Of course, for `mynewverifier` to be a usable option, it must be implemented inside the service and constructed when its name is given in the config.
All extended verifiers implement a common interface and provide two methods.
```go
type Verifier interface {
	VerifyCredential(ctx context.Context, vc *verifiable.Credential) error
	VerifyPresentation(ctx context.Context, vp *verifiable.Presentation) error
}
```
The service outputs all logs to stdout, as defined by the best practices of the Cloud Native community; see the 12 Factor App for more details.
From there, logs can be processed as needed in the specific running environment.
The standard log levels are debug, info, warn, error and fatal; info is the default level.
To set another log level, use the ENV configuration variable LOG_LEVEL.
The project uses Go modules for managing dependencies and we commit the vendor directory. When you add/change dependencies, be sure to clean and update the vendor directory before submitting your Merge Request for review.
```shell
go mod tidy
go mod vendor
```
To execute the unit tests for the service, go to the root project directory and run:
```shell
go test -race $(go list ./... | grep -v /integration)
```
To run the linters, go to the root project directory and run:
```shell
golangci-lint run
```
Integration tests are inside the integration directory.
The only configuration option they need is the base URL of the signer service.
It must be specified in the SIGNER_ADDR environment variable.
The tests can be executed against different environments by setting the
value for SIGNER_ADDR.
```shell
SIGNER_ADDR=https://{{SIGNER_ADDRESS}} go test
```
Note: these tests are not executed in the CI pipeline currently.