Actionable comments posted: 9
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/outlayer/service/src/env.rs`:
- Around line 30-31: The call method currently returns ready(Ok(ProjectEnv)) in
fn call(&mut self, _project_id: AccountId) -> Self::Future which hides
unimplemented fetch/decrypt logic; change it to explicitly return an error until
the real fetch/decrypt is implemented (e.g., return
ready(Err(your_error_variant)) or use a clear NotImplemented/Unimplemented
error) so callers of call/ProjectEnv must handle failure; ensure the returned
error matches the Self::Future result type and use an existing error
enum/variant from this module or crate rather than panicking.
In `@crates/outlayer/service/src/on_chain.rs`:
- Around line 85-87: The code risks TOCTOU by calling
client.oa_code_hash().await? and client.oa_code_url().await? separately; instead
fetch both values in a single atomic contract view or pin both reads to the same
block reference so they cannot diverge. Update the client API or add a new
helper (e.g., client.oa_code_info() or client.view_code_at_block(block)) that
returns (url, hash) in one RPC/contract view, or perform the first read to
obtain the block/height and pass that same block/height into the second read to
ensure both values are from the same chain state; replace the two calls in
on_chain.rs with that single-call or block-pinned variant.
In `@crates/outlayer/service/src/outlayer_app_client.rs`:
- Line 35: The hex decoding failure for the local variable hash_hex is being
mapped to OutlayerAppClientError::ViewCall; change the mapping so that
hex::decode(&hash_hex) maps decoding errors to
OutlayerAppClientError::InvalidHash (preserving the original error details via
e.into() or equivalent) instead of OutlayerAppClientError::ViewCall so that
invalid local hash formats are classified as InvalidHash.
In `@crates/outlayer/service/src/resolver/inline.rs`:
- Around line 34-38: The code currently base64-decodes an unbounded `data`
string (using `STANDARD.decode(&data)`), which can cause excessive memory use;
before calling `STANDARD.decode` (and after stripping whitespace with
`data.chars().filter(...).collect()`), check the byte/char length of the
sanitized `data` and enforce a hard maximum size (e.g., fail fast if > N
bytes/chars such as 1MB) and return a clear error (e.g., a new
`ResolveError::InlineTooLarge` or reuse `ResolveError::InlineDecode` with a
distinct message); only proceed to call `STANDARD.decode(&data)` and construct
`Arc::new(Bytes::from(b))` if the size is within the limit.
In `@crates/outlayer/service/src/resolver/mod.rs`:
- Around line 64-67: The tracing span currently logs the raw URL via the
attribute fields url = tracing::field::display(format_args!("{url:.50}")) in the
resolve instrumentation, which can leak query or fragment secrets; modify the
resolver to sanitize or redact the URL before logging (e.g., create a
sanitized_url by parsing the input URL and stripping query and fragment or
replacing them with "<redacted>") and change the tracing field to log that
sanitized value instead while leaving the existing hash field
(hex_encode(&expected_hash)) intact; update the code paths that call resolve so
they pass or compute the sanitized_url used in the tracing::instrument fields.
In `@crates/outlayer/service/src/service.rs`:
- Around line 71-76: The poll_ready implementation for the service is missing
readiness check for the on_chain Service; inside fn poll_ready(&mut self, cx:
&mut Context<'_>) add a readiness check like
ready!(self.on_chain.poll_ready(cx)) (and map or convert the error to the
appropriate ExecutionStackError variant consistent with the other checks) before
returning Poll::Ready(Ok(())) so the on_chain Service contract is preserved.
In `@crates/outlayer/service/src/signing_service.rs`:
- Around line 36-38: The sign_response function currently calls
serde_json::to_vec(...).expect(...) which can panic; change sign_response<T:
Serialize>(key: &InMemorySigner, response: T) -> SignedExecutionResponse<T> to
return a Result<SignedExecutionResponse<T>, ExecutionStackError> (or the crate's
existing error type), replace the expect with
serde_json::to_vec(&response).map_err(|e|
ExecutionStackError::from_serialization(e) /* or appropriate variant */)?, and
propagate that Result through the function so any serialization failure is
returned as an ExecutionStackError instead of aborting; update call sites
accordingly and preserve use of InMemorySigner and SignedExecutionResponse
types.
In `@crates/outlayer/service/src/storage.rs`:
- Around line 35-37: The current call(&mut self, _project_id: AccountId) ->
Self::Future unconditionally returns Ok(ProjectStorage) which masks that RPC
fetch is not implemented; change it to return an explicit error until fetching
is implemented (e.g., ready(Err(...))) using the crate's storage error type
(create or use an existing variant such as StorageError::UnimplementedFetch or
StorageError::RpcNotImplemented) so callers observe a failure instead of a
spurious successful ProjectStorage; keep the signature and types (AccountId,
ProjectStorage, Self::Future) the same and adjust the returned Result to Err.
In `@crates/outlayer/service/src/utils/cache.rs`:
- Around line 95-106: The cloned service (`let mut inner = self.inner.clone()`)
is used for .call(...) without performing readiness on that same clone, which
can break Tower backpressure semantics; fix by awaiting readiness on the same
cloned service before calling it (e.g., make `inner` mutable, call the
ServiceExt::ready/ready_ready on `inner` and await it), then invoke
`inner.call(key).await` and store the result in cache; reference
`self.inner.clone()`, `inner`, `call`, and `cache_key` to locate and update the
readiness/check-before-call sequence.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6b6335f3-9d08-4bcd-bbba-ad2ea500c14a
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `res/wasm_stub.wasm` is excluded by `!**/*.wasm`
📒 Files selected for processing (24)
- Cargo.toml
- crates/outlayer/host/src/lib.rs
- crates/outlayer/service/Cargo.toml
- crates/outlayer/service/examples/run.rs
- crates/outlayer/service/src/config.rs
- crates/outlayer/service/src/env.rs
- crates/outlayer/service/src/error.rs
- crates/outlayer/service/src/executor.rs
- crates/outlayer/service/src/lib.rs
- crates/outlayer/service/src/on_chain.rs
- crates/outlayer/service/src/outlayer_app_client.rs
- crates/outlayer/service/src/resolver/http.rs
- crates/outlayer/service/src/resolver/inline.rs
- crates/outlayer/service/src/resolver/mod.rs
- crates/outlayer/service/src/service.rs
- crates/outlayer/service/src/signing_service.rs
- crates/outlayer/service/src/storage.rs
- crates/outlayer/service/src/types.rs
- crates/outlayer/service/src/utils/cache.rs
- crates/outlayer/service/src/utils/mod.rs
- crates/outlayer/service/src/utils/retry.rs
- crates/outlayer/service/tests/on_chain_fetch.rs
- crates/outlayer/vm-runner/src/lib.rs
- crates/testing/utils/src/wasms.rs
```rust
fn call(&mut self, _project_id: AccountId) -> Self::Future {
    ready(Ok(ProjectEnv))
```
Avoid placeholder success in env fetch path.
Line 30/Line 31 currently return success even though fetch/decryption are unimplemented. This can hide real failures and produce invalid execution context. Fail explicitly until implemented.
Suggested minimal guard

```diff
 fn call(&mut self, _project_id: AccountId) -> Self::Future {
-    ready(Ok(ProjectEnv))
+    ready(Err(EnvFetchError::Rpc(
+        "env fetch is not implemented yet".to_string(),
+    )))
 }
```
```rust
let code_hash = client.oa_code_hash().await?;
let code_url = client.oa_code_url().await?;
Ok((code_url, code_hash))
```
TOCTOU risk: hash and URL are fetched in separate chain reads.
Line 85 and Line 86 can observe different on-chain states if the project updates between calls, producing an inconsistent (wasm_url, wasm_hash) pair and downstream hash-mismatch failures. Prefer a single atomic contract view for both fields, or pin both reads to the same block reference.
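For illustration, a self-contained sketch of the block-pinned variant. All names here (`BlockRef`, `Snapshot`, `code_info`) are hypothetical stand-ins, not the real client API:

```rust
/// Hypothetical block reference; the real client would use a chain
/// block height or hash here.
#[derive(Clone, Copy)]
struct BlockRef(usize);

struct Snapshot {
    code_url: &'static str,
    code_hash: &'static str,
}

/// Toy client whose history of states stands in for the chain.
struct Client {
    history: Vec<Snapshot>,
}

impl Client {
    fn latest_block(&self) -> BlockRef {
        BlockRef(self.history.len() - 1)
    }
    fn code_url_at(&self, block: BlockRef) -> &'static str {
        self.history[block.0].code_url
    }
    fn code_hash_at(&self, block: BlockRef) -> &'static str {
        self.history[block.0].code_hash
    }

    /// Single helper returning both values pinned to one block, so the
    /// (url, hash) pair can never mix two different chain states.
    fn code_info(&self) -> (&'static str, &'static str) {
        let block = self.latest_block();
        (self.code_url_at(block), self.code_hash_at(block))
    }
}

fn main() {
    let client = Client {
        history: vec![
            Snapshot { code_url: "https://old/app.wasm", code_hash: "aa" },
            Snapshot { code_url: "https://new/app.wasm", code_hash: "bb" },
        ],
    };
    // Both values come from the same snapshot, even if the chain
    // advances between the two field accesses.
    let (url, hash) = client.code_info();
    assert_eq!((url, hash), ("https://new/app.wasm", "bb"));
    println!("{url} {hash}");
}
```

The same shape works whether the client exposes a combined view method or a `view_at_block(block)` variant: the key property is that one block reference is resolved once and reused for both reads.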
```rust
.map(|d| d.data)
.map_err(|e| OutlayerAppClientError::ViewCall(e.into()))?;
let bytes =
    hex::decode(&hash_hex).map_err(|e| OutlayerAppClientError::ViewCall(e.into()))?;
```
Map hash decode failures to InvalidHash, not ViewCall.
Line 35 currently classifies local decoding/format failures as RPC call failures, which blurs error telemetry and handling paths.
Suggested fix

```diff
-let bytes =
-    hex::decode(&hash_hex).map_err(|e| OutlayerAppClientError::ViewCall(e.into()))?;
+let bytes = hex::decode(&hash_hex).map_err(|_| OutlayerAppClientError::InvalidHash)?;
```
```rust
let data: String = data.chars().filter(|c| !c.is_ascii_whitespace()).collect();
let bytes = STANDARD
    .decode(&data)
    .map(|b| Arc::new(Bytes::from(b)))
    .map_err(|e| ResolveError::InlineDecode(e.to_string()))?;
```
Add a hard size limit before decoding inline WASM.
Line 34 through Line 38 decode unbounded input. A large data: URL can force high memory allocation and degrade service availability.
Suggested shape of fix

```diff
+const MAX_INLINE_WASM_BYTES: u64 = 100 * 1024 * 1024; // keep in sync with fetch limits
+
 let data: String = data.chars().filter(|c| !c.is_ascii_whitespace()).collect();
+let approx_decoded = (data.len() as u64 / 4) * 3;
+if approx_decoded > MAX_INLINE_WASM_BYTES {
+    return Err(ResolveError::TooLarge {
+        size: approx_decoded,
+        limit: MAX_INLINE_WASM_BYTES,
+    });
+}
 let bytes = STANDARD
     .decode(&data)
     .map(|b| Arc::new(Bytes::from(b)))
     .map_err(|e| ResolveError::InlineDecode(e.to_string()))?;
+if bytes.len() as u64 > MAX_INLINE_WASM_BYTES {
+    return Err(ResolveError::TooLarge {
+        size: bytes.len() as u64,
+        limit: MAX_INLINE_WASM_BYTES,
+    });
+}
```
```rust
#[tracing::instrument(level = "debug", name = "resolve", skip_all, fields(
    url = tracing::field::display(format_args!("{url:.50}")),
    hash = hex_encode(&expected_hash),
))]
```
Potential secret leakage in resolver URL logging.
Line 65 logs the URL prefix directly; this can leak query credentials (e.g., pre-signed tokens). Log a sanitized form (scheme/host/path only) or redact query/fragment before recording.
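A stdlib-only sketch of the redaction step (the real fix might instead parse with the `url` crate and clear the query/fragment there; `sanitize_url_for_logging` is a hypothetical helper name):

```rust
/// Redact query and fragment before logging a URL, so pre-signed
/// tokens or credentials never end up in traces.
fn sanitize_url_for_logging(url: &str) -> String {
    // Everything from the first '?' or '#' onward may carry secrets.
    match url.find(|c| c == '?' || c == '#') {
        Some(end) => format!("{}<redacted>", &url[..end]),
        None => url.to_string(),
    }
}

fn main() {
    let raw = "https://example.com/app.wasm?X-Amz-Signature=secret#frag";
    assert_eq!(
        sanitize_url_for_logging(raw),
        "https://example.com/app.wasm<redacted>"
    );
    // URLs without a query/fragment pass through unchanged.
    assert_eq!(
        sanitize_url_for_logging("https://example.com/app.wasm"),
        "https://example.com/app.wasm"
    );
    println!("{}", sanitize_url_for_logging(raw));
}
```

The sanitized string can then replace the raw `url` in the `tracing::instrument` fields while the hash field stays as-is.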
```rust
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), ExecutionStackError>> {
    ready!(self.wasm.poll_ready(cx))?;
    ready!(self.env.poll_ready(cx))?;
    ready!(self.storage.poll_ready(cx))?;
    ready!(self.executor.poll_ready(cx)).map_err(ExecutionStackError::Executor)?;
    Poll::Ready(Ok(()))
```
🧩 Analysis chain (verification scripts run against the `near/intents` repository):

```bash
#!/bin/bash
# Verify that the OffChain path calls on_chain and that poll_ready currently omits it.
rg -n -C3 'fn poll_ready|on_chain\.call|Request::OffChain' crates/outlayer/service/src/service.rs

# Get the full poll_ready implementation and struct definition.
cat -n crates/outlayer/service/src/service.rs | sed -n '1,120p'

# Check the struct fields to see all services, including on_chain.
rg -n 'struct|on_chain|wasm|env|storage|executor' crates/outlayer/service/src/service.rs | head -20

# Verify the type of the on_chain field.
rg -n 'on_chain:' crates/outlayer/service/src/service.rs
```
on_chain service readiness is not polled before use.
The on_chain field is a Tower Service (line 63) that gets called for Request::OffChain (line 102), but poll_ready never checks its readiness. Add on_chain.poll_ready(cx) to maintain the Service contract.
Suggested patch

```diff
 fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), ExecutionStackError>> {
+    ready!(self.on_chain.poll_ready(cx)).map_err(ExecutionStackError::from)?;
     ready!(self.wasm.poll_ready(cx))?;
     ready!(self.env.poll_ready(cx))?;
     ready!(self.storage.poll_ready(cx))?;
     ready!(self.executor.poll_ready(cx)).map_err(ExecutionStackError::Executor)?;
     Poll::Ready(Ok(()))
 }
```
```rust
fn sign_response<T: Serialize>(key: &InMemorySigner, response: T) -> SignedExecutionResponse<T> {
    let json = serde_json::to_vec(&response).expect("response serialization is infallible");
    let signature =
```
Avoid panicking on signing-path serialization failures.
Line 37 uses expect, which can panic the request path. Convert this to a regular ExecutionStackError and propagate it instead of aborting the task.
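A hedged sketch of the non-panicking shape. `ExecutionStackError::Serialization` and the `Signed` wrapper are illustrative stand-ins for the crate's real types, and the closure parameter stands in for `serde_json::to_vec`:

```rust
#[derive(Debug)]
enum ExecutionStackError {
    Serialization(String),
}

#[allow(dead_code)]
struct Signed<T> {
    payload: T,
    bytes: Vec<u8>, // bytes that the signature would cover
}

/// Fallible variant: a serialization failure becomes a regular error
/// the caller can handle instead of aborting the request task.
fn sign_response<T>(
    serialize: impl Fn(&T) -> Result<Vec<u8>, String>, // stand-in for serde_json::to_vec
    response: T,
) -> Result<Signed<T>, ExecutionStackError> {
    let bytes = serialize(&response).map_err(ExecutionStackError::Serialization)?;
    // ...signing over `bytes` would happen here...
    Ok(Signed { payload: response, bytes })
}

fn main() {
    // Happy path still produces a signed response.
    let ok = sign_response(|s: &String| Ok(s.clone().into_bytes()), "hi".to_string());
    assert!(ok.is_ok());
    // Failure path propagates instead of panicking.
    let err = sign_response(|_: &String| Err("boom".to_string()), "hi".to_string());
    assert!(matches!(err, Err(ExecutionStackError::Serialization(_))));
}
```

Call sites then use `?` on the returned `Result` rather than relying on serialization being infallible.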
```rust
fn call(&mut self, _project_id: AccountId) -> Self::Future {
    ready(Ok(ProjectStorage))
}
```
Do not return success for unimplemented storage fetch.
Line 35 currently ignores the input and Line 36 always returns placeholder storage, which can make execution/signing appear successful with incorrect state. Return an explicit error until RPC fetching is implemented.
Suggested minimal guard

```diff
 fn call(&mut self, _project_id: AccountId) -> Self::Future {
-    ready(Ok(ProjectStorage))
+    ready(Err(StorageFetchError::Rpc(
+        "storage fetch is not implemented yet".to_string(),
+    )))
 }
```
```rust
let mut inner = self.inner.clone();
let hash = cache_key(&key);
Box::pin(
    async move {
        let cached = cache.lock().unwrap().get(&hash).cloned();
        if let Some(cached) = cached {
            tracing::debug!("cache hit");
            return Ok(cached);
        }
        tracing::debug!("cache miss, fetching");
        let value = inner.call(key).await?;
        cache.lock().unwrap().put(hash, value.clone());
```
🧩 Analysis chain
🌐 Web query:
Does Tower's Service contract require poll_ready and call to be coordinated on the same service instance, and is cloning a service after poll_ready then calling the clone without ready/oneshot considered unsafe?
💡 Result:
Yes, Tower's Service contract requires poll_ready and call to be coordinated on the same service instance. The official documentation states that before dispatching a request, poll_ready must be called and return Poll::Ready(Ok()), and implementations are permitted to panic if call is invoked without this. Once poll_ready returns Ready, a request may be dispatched using call on that instance. Yes, cloning a service after poll_ready and then calling the clone without calling poll_ready/ready on the clone is considered unsafe. The docs explicitly warn: "You should therefore be careful when cloning services for example to move them into boxed futures. Even though the original service is ready, the clone might not be." A PR specifically clarified the subtlety around cloning and readiness, and examples show incorrect code cloning into futures without re-checking readiness on the clone, which might panic, versus correct usage with std::mem::replace to use the ready instance.
Citations:
- 1: https://docs.rs/tower/latest/tower/trait.Service.html
- 2: https://docs.rs/tower-service/latest/tower_service/trait.Service.html
- 3: service: Clarify subtly around cloning and readiness tower-rs/tower#548
🏁 Scripts executed:

```bash
# First, find the cache.rs file and examine the actual implementation.
find . -path "*/outlayer/service/src/utils/cache.rs" -type f

# Read the cache.rs file to verify the code structure.
cat -n crates/outlayer/service/src/utils/cache.rs | head -120

# Check imports to see if ServiceExt is already imported.
rg "use tower" crates/outlayer/service/src/utils/cache.rs -A 2 -B 2
```
Readiness is checked on one service instance but call uses another clone.
Line 95 clones inner, then Line 105 calls it without an explicit readiness step for that clone. For Tower services with backpressure, this can violate service contract expectations and cause flaky runtime behavior.
Suggested fix

```diff
-use tower::{Layer, Service};
+use tower::{Layer, Service, ServiceExt};
 // ...
-            let value = inner.call(key).await?;
+            let value = inner.oneshot(key).await?;
```
mitinarseny left a comment:
Great job, but we need to refactor it massively:
- remove/reuse existing abstractions
- remove everything connected to WASM storage and other environment
- etc...
```rust
/// SHA-256 of the WASM binary (64 hex chars).
#[arg(long)]
wasm_hash: String,
```
Maybe calculate hash from downloaded file by default?
```rust
/// echo 'hello world' | run 'data:application/wasm;base64,AGFzbQ...' --wasm-hash <64 hex chars>
/// run 'https://example.com/component.wasm' --wasm-hash <64 hex chars> < input.bin
#[derive(Parser)]
struct Args {
```
```rust
#[derive(Parser)]
struct Args {
    /// WASM URL — inline (`data:application/wasm;base64,…`) or remote (`http(s)://…`).
    url: String,
```
```diff
-    url: String,
+    url: Url,
```
Can we accept file:// URLs as well?
```rust
/// Opaque identifier for this request, used for logging only.
pub request_id: String,
pub project_id: AccountId,
pub wasm_url: String,
```
```diff
-    pub wasm_url: String,
+    pub wasm_url: Url,
```
```rust
pub struct ExecutionRequest {
    /// Opaque identifier for this request, used for logging only.
    pub request_id: String,
    pub project_id: AccountId,
```
We call this id `app_id: AppId`.
```rust
}

#[derive(Clone)]
pub struct CacheLayer<K, V> {
```
```diff
-pub struct CacheLayer<K, V> {
+pub struct InMemoryLruCache<K, V> {
```
```rust
}

fn cache_key<K: Hash>(key: &K) -> [u8; 32] {
    struct Sha256Hasher(Sha256);
```
Let's make it reusable and generic over `Digest` and put it in a separate hash-utils crate?
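A sketch of what a reusable, digest-generic adapter could look like. `ByteDigest` here is a stand-in for the real `digest::Digest` trait, and `XorDigest` is a toy (non-cryptographic) digest used only so the demo runs without external crates:

```rust
use std::hash::{Hash, Hasher};

/// Minimal stand-in for `digest::Digest` (sketch only).
trait ByteDigest: Default {
    fn update(&mut self, data: &[u8]);
    fn finalize(self) -> Vec<u8>;
}

/// Adapter feeding everything a `Hash` impl writes into the digest.
struct DigestHasher<D: ByteDigest>(D);

impl<D: ByteDigest> Hasher for DigestHasher<D> {
    // Required by the trait; callers take the digest out and call
    // `finalize` instead of using this 64-bit value.
    fn finish(&self) -> u64 {
        0
    }
    fn write(&mut self, bytes: &[u8]) {
        self.0.update(bytes);
    }
}

/// Generic cache key: with `D = Sha256` this matches the original intent.
fn cache_key<K: Hash, D: ByteDigest>(key: &K) -> Vec<u8> {
    let mut hasher = DigestHasher(D::default());
    key.hash(&mut hasher);
    hasher.0.finalize()
}

/// Toy digest for the demo only (NOT cryptographic).
#[derive(Default)]
struct XorDigest(u8);

impl ByteDigest for XorDigest {
    fn update(&mut self, data: &[u8]) {
        for b in data {
            self.0 ^= b;
        }
    }
    fn finalize(self) -> Vec<u8> {
        vec![self.0]
    }
}

fn main() {
    let k1 = cache_key::<_, XorDigest>(&vec![1u8, 2, 3]);
    let k2 = cache_key::<_, XorDigest>(&vec![1u8, 2, 3]);
    assert_eq!(k1, k2); // deterministic for equal keys
    println!("{k1:?}");
}
```

In a hash-utils crate the same adapter would simply be `impl<D: digest::Digest> Hasher for DigestHasher<D>`, keeping `cache_key` independent of SHA-256.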
```rust
    }
}

fn cache_key<K: Hash>(key: &K) -> [u8; 32] {
```
Are you sure that `Vec` in its `Hash` impl doesn't prepend its length?
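It does: std's `Hash` for slices and `Vec` writes a length prefix before the element bytes. This can be checked directly with a hasher that records everything written to it:

```rust
use std::hash::{Hash, Hasher};

/// Toy hasher that records every byte fed to it, so we can inspect
/// exactly what `Vec<u8>`'s `Hash` impl writes.
struct Recorder(Vec<u8>);

impl Hasher for Recorder {
    fn finish(&self) -> u64 {
        0
    }
    fn write(&mut self, bytes: &[u8]) {
        self.0.extend_from_slice(bytes);
    }
}

fn main() {
    let mut rec = Recorder(Vec::new());
    vec![1u8, 2, 3].hash(&mut rec);
    // More than the 3 payload bytes were written: a usize length
    // prefix precedes the contents (8 extra bytes on 64-bit targets).
    assert!(rec.0.len() > 3);
    println!("{} bytes written for a 3-byte Vec", rec.0.len());
}
```

So a digest built through the `Hash` impl covers the length prefix too, which matters if the key must match a hash computed over the raw bytes alone.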
```rust
fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
    Poll::Ready(Ok(()))
}
```
We need to have configurable limits on how many concurrent executions we can make
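In a Tower stack this is typically done with `tower::limit::ConcurrencyLimitLayer`; the underlying idea is a counting semaphore, sketched here with stdlib primitives only (a real async service would use the layer or `tokio::sync::Semaphore` rather than blocking threads):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// Counting semaphore capping concurrent executions.
struct ExecutionLimit {
    in_flight: Mutex<usize>,
    cv: Condvar,
    max: usize,
}

impl ExecutionLimit {
    fn new(max: usize) -> Self {
        Self { in_flight: Mutex::new(0), cv: Condvar::new(), max }
    }
    /// Block until a slot is free, then claim it.
    fn acquire(&self) {
        let mut n = self.in_flight.lock().unwrap();
        while *n >= self.max {
            n = self.cv.wait(n).unwrap();
        }
        *n += 1;
    }
    /// Free a slot and wake one waiter.
    fn release(&self) {
        *self.in_flight.lock().unwrap() -= 1;
        self.cv.notify_one();
    }
}

fn main() {
    let limit = Arc::new(ExecutionLimit::new(2));
    let peak = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..8 {
        let (limit, peak) = (limit.clone(), peak.clone());
        handles.push(thread::spawn(move || {
            limit.acquire();
            {
                // Track the highest number of concurrent executions seen.
                let cur = *limit.in_flight.lock().unwrap();
                let mut p = peak.lock().unwrap();
                *p = (*p).max(cur);
            }
            thread::sleep(std::time::Duration::from_millis(10));
            limit.release();
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    // Never more than 2 executions in flight at once.
    assert!(*peak.lock().unwrap() <= 2);
}
```

Wiring `ConcurrencyLimitLayer::new(max)` into the service builder gives the same behavior with proper `poll_ready`-based backpressure instead of blocking.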
```rust
use tower::retry::Policy;

#[derive(Clone)]
pub struct Attempts(pub usize);
```
This, and backoff impls, already exist in `tower::retry`.
For now targeting `feat/outlayer-vm-runner`; will switch to `main` after #253 is merged. Still need to do a bit of cleanup.