Add did:webplus Universal Resolver Driver. #534

Open
vdods wants to merge 7 commits into decentralized-identity:main from LedgerDomain:add-did-webplus

Conversation

@vdods

@vdods vdods commented Feb 20, 2026

No description provided.

@bumblefudge

[screenshot]

I was able to get it running on a little Hetzner Ubuntu x86_64/AMD64 VPS, seems to work fine?

@BernhardFuchs
Member

I tested the driver and it starts fine, but when trying to resolve the example DIDs it throws a 500 Storage error:

2026-03-14T15:00:53.178268Z  INFO main ThreadId(01) did_webplus_urd::listen: did-webplus/urd/src/listen.rs:65: Creating DIDResolverFull with vdg_host_o: None
2026-03-14T15:00:53.179584Z  INFO main ThreadId(01) did_webplus_urd_lib::spawn_urd: did-webplus/urd-lib/src/spawn_urd.rs:20: did:webplus URD (Universal Resolver Driver) listening on port 80
2026-03-14T15:11:26.947039Z ERROR tokio-runtime-worker ThreadId(04) did_webplus_urd_lib::spawn_urd: did-webplus/urd-lib/src/spawn_urd.rs:55: error=(500, "Storage error: error returned from database: (code: 1) no such table: did_document_records")

@vdods
Author

vdods commented Mar 16, 2026

@BernhardFuchs Thanks for the heads up. I haven't been able to repro that -- if you're willing, could you edit your .env file and change uniresolver_driver_did_webplus_RUST_LOG=did_webplus=info to uniresolver_driver_did_webplus_RUST_LOG=debug, and restart the driver? For me, it shows it running the necessary DB migrations that create the tables, so I'm curious what it will show for you.
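
Concretely, the suggested change is a one-line edit to the driver's .env file (the exact file location depends on your deployment):

```shell
# .env for the did:webplus driver -- switch from module-scoped info logging
# to global debug logging so the sqlx migration statements become visible.
# Before: uniresolver_driver_did_webplus_RUST_LOG=did_webplus=info
uniresolver_driver_did_webplus_RUST_LOG=debug
```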

@bumblefudge

Wait, is this running on AWS Linux perchance? uname -m is the bane of my existence when it comes to Rust...

@BernhardFuchs
Member

The infrastructure we use is AWS EKS, so I guess they are using Amazon Linux for the EC2 instances.

Here is the log with debug switched on:

2026-03-18T12:50:27.456326Z  INFO main ThreadId(01) did_webplus_urd::listen: did-webplus/urd/src/listen.rs:65: Creating DIDResolverFull with vdg_host_o: None
2026-03-18T12:50:27.456854Z DEBUG sqlx-sqlite-worker-0 ThreadId(06) sqlx::query: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/sqlx-core-0.8.6/src/logger.rs:143: summary="PRAGMA foreign_keys = ON; …" db.statement="\n\nPRAGMA foreign_keys = ON; \n" rows_affected=0 rows_returned=0 elapsed=36.94µs elapsed_secs=3.694e-5
2026-03-18T12:50:27.457181Z DEBUG sqlx-sqlite-worker-0 ThreadId(06) sqlx::query: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/sqlx-core-0.8.6/src/logger.rs:143: summary="CREATE TABLE IF NOT …" db.statement="\n\n\nCREATE TABLE IF NOT EXISTS _sqlx_migrations (\n    version BIGINT PRIMARY KEY,\n    description TEXT NOT NULL,\n    installed_on TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,\n    success BOOLEAN NOT NULL,\n    checksum BLOB NOT NULL,\n    execution_time BIGINT NOT NULL\n);\n                \n" rows_affected=0 rows_returned=0 elapsed=207.353µs elapsed_secs=0.000207353
2026-03-18T12:50:27.457285Z DEBUG sqlx-sqlite-worker-0 ThreadId(06) sqlx::query: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/sqlx-core-0.8.6/src/logger.rs:143: summary="SELECT version FROM _sqlx_migrations …" db.statement="\n\nSELECT version FROM _sqlx_migrations WHERE success = false ORDER BY version LIMIT 1\n" rows_affected=0 rows_returned=0 elapsed=53.931µs elapsed_secs=5.3931e-5
2026-03-18T12:50:27.457377Z DEBUG sqlx-sqlite-worker-0 ThreadId(06) sqlx::query: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/sqlx-core-0.8.6/src/logger.rs:143: summary="SELECT version, checksum FROM …" db.statement="\n\nSELECT version, checksum FROM _sqlx_migrations ORDER BY version\n" rows_affected=0 rows_returned=0 elapsed=25.04µs elapsed_secs=2.504e-5
2026-03-18T12:50:27.457596Z DEBUG sqlx-sqlite-worker-0 ThreadId(06) sqlx::query: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/sqlx-core-0.8.6/src/logger.rs:143: summary="CREATE TABLE did_document_records ( …" db.statement="\n\nCREATE TABLE did_document_records (\n    self_hash TEXT NOT NULL PRIMARY KEY,\n    did TEXT NOT NULL,\n    version_id BIGINT NOT NULL,\n    valid_from DATETIME NOT NULL,\n    -- This is the size (in bytes) of the did-documents.jsonl file that ends with this DID document, including\n    -- the trailing newline.  This must be equal to the did_documents_jsonl_octet_length field of the previous DID document\n    -- row + OCTET_LENGTH(did_document_jcs) + 1.\n    did_documents_jsonl_octet_length BIGINT NOT NULL,\n    -- This must be exactly the JCS of the DID document, not including the trailing newline.\n    did_document_jcs TEXT NOT NULL,\n\n    CONSTRAINT did_version_idx UNIQUE (did, version_id),\n    CONSTRAINT did_valid_from_idx UNIQUE (did, valid_from),\n    CONSTRAINT did_did_documents_jsonl_octet_length_idx UNIQUE (did, did_documents_jsonl_octet_length)\n);\n\n" rows_affected=0 rows_returned=0 elapsed=125.651µs elapsed_secs=0.000125651
2026-03-18T12:50:27.457703Z DEBUG sqlx-sqlite-worker-0 ThreadId(06) sqlx::query: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/sqlx-core-0.8.6/src/logger.rs:143: summary="INSERT INTO _sqlx_migrations ( …" db.statement="\n\n\n    INSERT INTO _sqlx_migrations ( version, description, success, checksum, execution_time )\n    VALUES ( ?1, ?2, TRUE, ?3, -1 )\n                \n" rows_affected=1 rows_returned=0 elapsed=44.45µs elapsed_secs=4.445e-5
2026-03-18T12:50:27.457815Z DEBUG sqlx-sqlite-worker-0 ThreadId(06) sqlx::query: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/sqlx-core-0.8.6/src/logger.rs:143: summary="UPDATE _sqlx_migrations SET execution_time …" db.statement="\n\n\n    UPDATE _sqlx_migrations\n    SET execution_time = ?1\n    WHERE version = ?2\n                \n" rows_affected=1 rows_returned=0 elapsed=24.44µs elapsed_secs=2.444e-5
2026-03-18T12:50:27.458018Z  INFO                 main ThreadId(01) did_webplus_urd_lib::spawn_urd: did-webplus/urd-lib/src/spawn_urd.rs:20: did:webplus URD (Universal Resolver Driver) listening on port 80
2026-03-18T15:39:26.602740Z  INFO tokio-runtime-worker ThreadId(02) request: tower_http::trace::make_span: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tower-http-0.6.6/src/trace/make_span.rs:108: new method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1
2026-03-18T15:39:26.602771Z DEBUG tokio-runtime-worker ThreadId(02) request: tower_http::trace::on_request: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tower-http-0.6.6/src/trace/on_request.rs:80: started processing request method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1
2026-03-18T15:39:26.602829Z DEBUG tokio-runtime-worker ThreadId(02) request:resolve_did: did_webplus_urd_lib::spawn_urd: did-webplus/urd-lib/src/spawn_urd.rs:55: new method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1 request_header_map={"accept": "application/did-resolution,application/did", "host": "driver-did-webplus:80", "connection": "Keep-Alive", "user-agent": "Apache-HttpClient/4.5.14 (Java/21.0.10)", "accept-encoding": "gzip,deflate"} query="did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow"
2026-03-18T15:39:26.602842Z DEBUG tokio-runtime-worker ThreadId(02) request:resolve_did: did_webplus_urd_lib::spawn_urd: did-webplus/urd-lib/src/spawn_urd.rs:61: Resolving DID query: did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow with Accept: Some("application/did-resolution,application/did") method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1 request_header_map={"accept": "application/did-resolution,application/did", "host": "driver-did-webplus:80", "connection": "Keep-Alive", "user-agent": "Apache-HttpClient/4.5.14 (Java/21.0.10)", "accept-encoding": "gzip,deflate"} query="did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow"
2026-03-18T15:39:26.602871Z DEBUG tokio-runtime-worker ThreadId(02) request:resolve_did: did_webplus_resolver::did_resolver_full: did-webplus/resolver/src/did_resolver_full.rs:669: DIDResolverFull::resolve_did_document_string; did_query: did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow; did_resolution_options: DIDResolutionOptions { accept_o: None, request_creation: true, request_next: true, request_latest: true, request_deactivated: true, local_resolution_only: false } method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1 request_header_map={"accept": "application/did-resolution,application/did", "host": "driver-did-webplus:80", "connection": "Keep-Alive", "user-agent": "Apache-HttpClient/4.5.14 (Java/21.0.10)", "accept-encoding": "gzip,deflate"} query="did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow"
2026-03-18T15:39:26.603262Z DEBUG sqlx-sqlite-worker-1 ThreadId(07) request:resolve_did: sqlx::query: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/sqlx-core-0.8.6/src/logger.rs:143: summary="PRAGMA foreign_keys = ON; …" db.statement="\n\nPRAGMA foreign_keys = ON; \n" rows_affected=0 rows_returned=0 elapsed=26.12µs elapsed_secs=2.612e-5 method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1 request_header_map={"accept": "application/did-resolution,application/did", "host": "driver-did-webplus:80", "connection": "Keep-Alive", "user-agent": "Apache-HttpClient/4.5.14 (Java/21.0.10)", "accept-encoding": "gzip,deflate"} query="did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow"
2026-03-18T15:39:26.603502Z DEBUG sqlx-sqlite-worker-1 ThreadId(07) request:resolve_did: sqlx::query: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/sqlx-core-0.8.6/src/logger.rs:143: summary="SELECT did, version_id, valid_from, …" db.statement="\n\n\n                SELECT did, version_id, valid_from, self_hash, did_documents_jsonl_octet_length, did_document_jcs\n                FROM did_document_records\n                WHERE did = $1 AND version_id = $2\n            \n" rows_affected=0 rows_returned=0 elapsed=103.181µs elapsed_secs=0.000103181 method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1 request_header_map={"accept": "application/did-resolution,application/did", "host": "driver-did-webplus:80", "connection": "Keep-Alive", "user-agent": "Apache-HttpClient/4.5.14 (Java/21.0.10)", "accept-encoding": "gzip,deflate"} query="did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow"
2026-03-18T15:39:26.603549Z ERROR tokio-runtime-worker ThreadId(05) request:resolve_did: did_webplus_urd_lib::spawn_urd: did-webplus/urd-lib/src/spawn_urd.rs:55: error=(500, "Storage error: error returned from database: (code: 1) no such table: did_document_records") method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1 request_header_map={"accept": "application/did-resolution,application/did", "host": "driver-did-webplus:80", "connection": "Keep-Alive", "user-agent": "Apache-HttpClient/4.5.14 (Java/21.0.10)", "accept-encoding": "gzip,deflate"} query="did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow"
2026-03-18T15:39:26.603577Z DEBUG tokio-runtime-worker ThreadId(05) request:resolve_did: did_webplus_urd_lib::spawn_urd: did-webplus/urd-lib/src/spawn_urd.rs:55: close time.busy=531µs time.idle=220µs method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1 request_header_map={"accept": "application/did-resolution,application/did", "host": "driver-did-webplus:80", "connection": "Keep-Alive", "user-agent": "Apache-HttpClient/4.5.14 (Java/21.0.10)", "accept-encoding": "gzip,deflate"} query="did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow"
2026-03-18T15:39:26.603765Z  INFO tokio-runtime-worker ThreadId(05) request: tower_http::trace::on_response: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tower-http-0.6.6/src/trace/on_response.rs:114: finished processing request latency=1 ms status=500 method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1
2026-03-18T15:39:26.603779Z ERROR tokio-runtime-worker ThreadId(05) request: tower_http::trace::on_failure: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tower-http-0.6.6/src/trace/on_failure.rs:93: response failed classification=Status code: 500 Internal Server Error latency=1 ms method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1
2026-03-18T15:39:26.603887Z  INFO tokio-runtime-worker ThreadId(05) request: tower_http::trace::make_span: /usr/local/cargo/registry/src/index.crates.io-1949cf8c6b5b557f/tower-http-0.6.6/src/trace/make_span.rs:108: close time.busy=738µs time.idle=411µs method=GET uri=/1.0/identifiers/did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow version=HTTP/1.1

…imeouts resulting in wiping the in-memory DB.
@vdods
Author

vdods commented Mar 18, 2026

This should be fixed.

This bug was the result of an architectural limitation of universal resolver drivers: each driver is limited to a single Docker container, but for the did:webplus driver to function as intended, it should have a Postgres DB backing its storage. The compromise was to use an in-container SQLite database. The bug stemmed from using an in-memory SQLite database (intended to minimize latency and reduce concurrency problems): such a DB is wiped when all connections to it time out or are dropped, taking the DB migrations with it. The fix is to use a file-backed SQLite database within the container so that the migrations that create the tables persist. Still not ideal.
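
The failure mode is easy to reproduce with plain SQLite. A minimal sketch, using Python's stdlib sqlite3 module rather than the driver's actual Rust/sqlx stack: an in-memory database lives only as long as its connection, so a migrated schema silently disappears with connection churn, while a file-backed database keeps it.

```python
import os
import sqlite3
import tempfile

# In-memory SQLite: the database exists only for the lifetime of its
# connection. When the last connection is dropped (e.g. a pool times out),
# every table the migrations created vanishes with it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE did_document_records (self_hash TEXT PRIMARY KEY)")
conn.close()

conn = sqlite3.connect(":memory:")  # a brand-new, empty database
tables_after_reopen_memory = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables_after_reopen_memory)  # [] -- the table is gone
conn.close()

# File-backed SQLite: the schema survives connection churn.
path = os.path.join(tempfile.mkdtemp(), "urd.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE did_document_records (self_hash TEXT PRIMARY KEY)")
conn.close()

conn = sqlite3.connect(path)  # reopen: the migrated schema persists
tables_after_reopen_file = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables_after_reopen_file)  # [('did_document_records',)]
conn.close()
```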

@peacekeeper I believe we talked about this before, maybe on a Slack channel, but I can't find the communication -- would it be permissible for a driver to use a second docker container (e.g. running a Postgres server)? I just feel bad that the default configuration for the did:webplus driver is a rather improper configuration.

@BernhardFuchs
Member

In general I don't see an issue with having multiple containers per driver. If you are using a DB like Postgres, there are some constraints:

  • Containers in this application are ephemeral. We have not and don't plan on adding persistent volumes. If permanent storage is needed that has to be done outside of the cluster.
  • We don't offer any secret management for drivers. If you need secrets for connecting to a DB outside of the cluster, they have to be shipped within the container.

I tested the current state of the PR. The first example DID did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiANVlMledNFUBJNiZPuvfgzxvJlGGDBIpDFpM4DXW6Bow works fine, but for the second one did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiDBw4xANa8sR_Fd8-pv-X9A5XIJNS3tC_bRNB3HUYiKug we get a 410 error even though the DID document is retrieved.

DID document:

{
  "id": "did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiDBw4xANa8sR_Fd8-pv-X9A5XIJNS3tC_bRNB3HUYiKug",
  "selfHash": "uFiB22brlXeP5TPc7qqOxeOJsxuixRv2jE9rmFCRLVBizHw",
  "prevDIDDocumentSelfHash": "uFiCCY8US1SG4VLelUh4IXDZ8V8We1djyolblOJ675tQotg",
  "updateRules": {},
  "proofs": [
    "eyJhbGciOiJFZDI1NTE5Iiwia2lkIjoidTdRRnRtNnFzUnNrNDdDYlhJWUhoLWttMmVncmJneWxLbWV5cTFuakVPS0tlWkEiLCJjcml0IjpbImI2NCJdLCJiNjQiOmZhbHNlfQ..HpjXgqWQj70j0VA8-godJfIdop4RsSqEQBUJieJi_MhFxgM_sIlX8Yj1Wf_kRHCWQC0Ps2HlZaKe2H5SuBGzAA"
  ],
  "validFrom": "2026-02-11T06:49:42.596Z",
  "versionId": 2,
  "verificationMethod": [],
  "authentication": [],
  "assertionMethod": [],
  "keyAgreement": [],
  "capabilityInvocation": [],
  "capabilityDelegation": []
}

Resolution Metadata:

{
  "driverDuration": 61,
  "contentType": "application/did",
  "fetchedUpdatesFromVDR": true,
  "didDocumentResolvedLocally": false,
  "didDocumentMetadataResolvedLocally": false,
  "pattern": "^(did:webplus:.+)$",
  "driverUrl": "http://driver-did-webplus:80/1.0/identifiers/$1",
  "duration": 61,
  "did": {
    "didString": "did:webplus:ledgerdomain.github.io:did-webplus-spec:uFiDBw4xANa8sR_Fd8-pv-X9A5XIJNS3tC_bRNB3HUYiKug",
    "methodSpecificId": "ledgerdomain.github.io:did-webplus-spec:uFiDBw4xANa8sR_Fd8-pv-X9A5XIJNS3tC_bRNB3HUYiKug",
    "method": "webplus"
  }
}

Document Metadata:

{
  "created": "2026-02-11T06:49:42Z",
  "createdMilliseconds": "2026-02-11T06:49:42.557Z",
  "updated": "2026-02-11T06:49:42Z",
  "updatedMilliseconds": "2026-02-11T06:49:42.596Z",
  "versionId": "2",
  "deactivated": true
}

@peacekeeper
Member

would it be permissible for a driver to use a second docker container (e.g. running a Postgres server)

@BernhardFuchs knows this better, but personally I feel like this could create a lot of complications. As far as I know, we have various scripts in place for building, configuring, and deploying the Universal Resolver, and I'm pretty sure something would break if two containers were required.

Why would a resolver require a database? Is this just some sort of cache, or is it necessary for the DID method to function properly?

@peacekeeper
Member

we get a 410 error even though the DID document is retrieved.

I think this is not an error, but actually normal behavior for a DID that has been deactivated, according to the HTTP(S) binding of the DID Resolution spec: https://www.w3.org/TR/did-resolution/#bindings-https
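
For illustration, a minimal sketch of that status mapping in Python. The function name and the error codes handled are assumptions based on the spec's HTTPS binding, not the driver's actual code; the point is that a deactivated DID resolves successfully but is served with 410 Gone rather than 200 OK.

```python
# Hypothetical sketch of the DID Resolution HTTPS binding's status mapping.
# Field names follow the resolution/document metadata shown above.
def http_status_for_resolution(resolution_metadata: dict,
                               did_document_metadata: dict) -> int:
    error = resolution_metadata.get("error")
    if error == "notFound":
        return 404
    if error == "invalidDid":
        return 400
    if error is not None:
        return 500  # other resolution errors
    if did_document_metadata.get("deactivated"):
        return 410  # Gone: resolved successfully, but the DID is deactivated
    return 200

# The second example DID's document metadata carries "deactivated": true, so:
print(http_status_for_resolution({}, {"deactivated": True}))  # 410
```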

@vdods
Author

vdods commented Mar 25, 2026

Why would a resolver require a database? Is this just some sort of cache, or is it necessary for the DID method to function properly?

Good question. It is to cache DID documents that have been fetched and verified. This makes repeated resolution a constant-time operation. It's not strictly necessary, but omitting it would degrade the performance of DID resolution unacceptably. The driver for did:webplus runs a "Full" DID resolver, which stores all fetched and verified DID documents.
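
The idea can be sketched in a few lines (illustrative Python, not the driver's actual code; the fetch_and_verify callback is hypothetical): once a (did, versionId) pair has been fetched and verified it is immutable, so it can be memoized, and every later resolution of that pair is a lookup rather than a network fetch plus verification.

```python
# Sketch of why a "Full" resolver keeps a store of verified DID documents.
class VerifiedDocumentCache:
    def __init__(self, fetch_and_verify):
        # fetch_and_verify(did, version_id) -> document (hypothetical callback
        # standing in for the real fetch-from-VDR-and-verify logic)
        self._fetch_and_verify = fetch_and_verify
        self._store = {}  # (did, version_id) -> verified DID document

    def resolve(self, did: str, version_id: int) -> dict:
        key = (did, version_id)
        if key not in self._store:
            self._store[key] = self._fetch_and_verify(did, version_id)
        return self._store[key]

fetches = []
def fake_fetch(did, version_id):
    fetches.append(did)  # record each simulated network fetch
    return {"id": did, "versionId": version_id}

cache = VerifiedDocumentCache(fake_fetch)
cache.resolve("did:webplus:example", 2)
cache.resolve("did:webplus:example", 2)  # served from the cache
print(len(fetches))  # 1 -- the "network" was hit only once
```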

The did:webplus Verifiable Data Gateway (VDG) is a web service (that anyone could deploy) that plays a role similar to the universal resolver driver, in that it fetches, verifies, and resolves/serves DID documents, but is meant to be shared by many parties so that they may all partake in the same "scope of agreement" for DID updates. For did:webplus specifically, the VDG is preferred to the universal resolver driver, because it (along with the other did:webplus components) is architected to provide low resolution latency, resolution robustness+redundancy, and the "scope of agreement" property.

@vdods
Author

vdods commented Apr 8, 2026

I'm assuming that a second container running a Postgres server isn't considered "go", so this PR is complete as-is (it uses an in-container SQLite database for its storage, with the understanding that it is not permanent storage), and is ready to be merged.

4 participants