
feat(proplet): enable proplet run encrypted workloads #15

Open
rodneyosodo wants to merge 19 commits into main from encrypted-workload

Conversation


@rodneyosodo rodneyosodo commented Jan 13, 2026

Summary by Sourcery

Add confidential-computing support to proplet, including TEE detection, attestation-based key retrieval, and decryption of encrypted workloads before execution.

New Features:

  • Introduce TEE environment detection and expose helper APIs for querying the current TEE type and environment description.
  • Add attestation configuration and service that integrate with a Key Broker Service to retrieve encryption keys based on workload identity.
  • Extend task and start request types with a confidential flag and related attestation fields to mark workloads as confidential.
  • Enable encrypted WebAssembly workloads to be decrypted at runtime when running in a TEE with attestation enabled.

Enhancements:

  • Refine configuration handling and tests, including env-var driven attestation settings.
  • Tidy various tests and assertions for MQTT and configuration parsing to use clearer patterns.

Build:

  • Restructure the proplet crate into a library plus binary target and add dependencies for attestation and AES-GCM encryption.

Tests:

  • Add integration tests covering the confidential-computing decision matrix across confidential/non-confidential and TEE/non-TEE scenarios.
  • Add unit tests for new TEE detection, crypto helpers, and attestation configuration and service.
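The decision matrix those integration tests cover can be condensed into a pure function. This is a minimal sketch under stated assumptions: the names (`Decision`, `gate`) are hypothetical and not the crate's actual API, but the outcomes match the four confidential/non-confidential x TEE/non-TEE scenarios described above.

```rust
/// Outcome of the confidential-computing gate (hypothetical names,
/// mirroring the decision matrix described in this PR).
#[derive(Debug, PartialEq)]
enum Decision {
    RunPlain,           // non-confidential: always allowed
    RunWithAttestation, // confidential + TEE + attestation enabled
    Reject(&'static str),
}

fn gate(confidential: bool, in_tee: bool, attestation_enabled: bool) -> Decision {
    if !confidential {
        // Non-confidential workloads are unaffected by TEE state.
        return Decision::RunPlain;
    }
    if !in_tee {
        // Confidential workloads must never run on non-TEE hardware.
        return Decision::Reject("SECURITY ERROR: confidential workload requires a TEE");
    }
    if !attestation_enabled {
        // Without attestation there is no way to fetch the decryption key.
        return Decision::Reject("attestation disabled: cannot retrieve workload key");
    }
    Decision::RunWithAttestation
}

fn main() {
    assert_eq!(gate(false, false, false), Decision::RunPlain);
    assert_eq!(gate(true, true, true), Decision::RunWithAttestation);
    assert!(matches!(gate(true, false, true), Decision::Reject(_)));
    println!("decision matrix ok");
}
```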

Summary by CodeRabbit

  • New Features

    • Added TEE support with multiple attestation options and a TEE-capable runtime; service chooses TEE or fallback runtime at startup.
    • Integrated Key Broker Service (KBS) support and encrypted WASM workload handling (new encrypted/kbs_resource fields).
  • Documentation

    • Expanded README and platform guides with TEE/KBS setup, encrypted workload steps, and runtime/build instructions.
  • Other

    • Updated VM setup script and defaults for local testing (ports, services, and cloud-init).

✏️ Tip: You can customize this high-level summary in your review settings.


sourcery-ai bot commented Jan 13, 2026

Reviewer's Guide

Adds confidential computing support to Proplet by detecting TEE environments, wiring in an attestation service and KBS client, enforcing a decision matrix for confidential workloads, and decrypting encrypted Wasm workloads with AES‑256‑GCM, along with smaller config, test, and API shape cleanups.

Sequence diagram for starting a confidential workload with attestation

sequenceDiagram
    actor Controller
    participant PropletService
    participant TeeDetector
    participant AttestationService
    participant KbsClient as KBS
    participant Crypto as CryptoModule

    Controller->>PropletService: new(PropletConfig, PubSub, Runtime)
    PropletService->>TeeDetector: detect()
    TeeDetector-->>PropletService: TeeType
    alt attestation_enabled
        PropletService->>AttestationService: new(AttestationConfig)
    end
    PropletService-->>Controller: PropletService

    Controller->>PropletService: handle_start(StartRequest{confidential:true})
    PropletService->>PropletService: validate_request()
    PropletService->>PropletService: check tee_type.is_tee()
    alt confidential_and_no_tee
        PropletService-->>Controller: error("SECURITY ERROR")
    else confidential_and_tee_present
        PropletService->>AttestationService: get_resource("default/keys/{task_id}", workload_id)
        AttestationService->>KbsClient: build_with_evidence_provider()
        KbsClient-->>AttestationService: client
        AttestationService->>KbsClient: get_resource(ResourceUri)
        KbsClient-->>AttestationService: key_bytes
        AttestationService-->>PropletService: key_bytes
        PropletService->>PropletService: download_or_decode_wasm()
        PropletService->>Crypto: decrypt_workload(wasm_encrypted, key_bytes)
        Crypto-->>PropletService: wasm_decrypted
        PropletService-->>Controller: task_started
    end

Class diagram for new confidential computing and attestation types

classDiagram
    class PropletService {
        +PropletConfig config
        +Arc~Mutex~Proplet~~ proplet
        +PubSub pubsub
        +Arc~dyn Runtime~ runtime
        +Arc~Mutex~HashMap~String, ChunkAssemblyState~~~ chunk_assembly
        +Arc~Mutex~HashMap~String, TaskState~~~ running_tasks
        +Option~Arc~AttestationService~~ attestation
        +TeeType tee_type
        +new(config: PropletConfig, pubsub: PubSub, runtime: Arc~dyn Runtime~) PropletService
        +start_chunk_expiry_task() void
        +handle_start(req: StartRequest) Result~()~
    }

    class PropletConfig {
        +u8 mqtt_qos
        +u64 mqtt_timeout
        +u64 liveliness_interval
        +String client_key
        +Option~String~ k8s_namespace
        +Option~String~ external_wasm_runtime
        +AttestationConfig attestation
        +qos() QoS
        +mqtt_timeout() Duration
        +liveliness_interval() Duration
        +from_env() PropletConfig
    }

    class AttestationConfig {
        +bool enabled
        +String kbs_url
        +String evidence_type
        +u64 timeout_secs
        +default() AttestationConfig
    }

    class AttestationService {
        -AttestationConfig config
        +new(config: AttestationConfig) AttestationService
        +get_resource(resource_path: &str, workload_id: &str) Result~Vec~u8~~
        +health_check() Result~bool~
        +decrypt_workload(encrypted_binary: &[u8], resource_path: &str, workload_id: &str) Result~Vec~u8~~
    }

    class TeeType {
        <<enum>>
        Tdx
        Sgx
        Snp
        Sev
        Se
        Cca
        None
        +as_str() &str
        +is_tee() bool
    }

    class TeeDetector {
        +detect() TeeType
        +is_in_tee() bool
        +describe_environment() String
        -check_tdx() bool
        -check_sgx() bool
        -check_snp() bool
        -check_sev() bool
        -check_se() bool
        -check_cca() bool
    }

    class EncryptedWorkload {
        +[u8; NONCE_SIZE] nonce
        +Vec~u8~ ciphertext
        +from_bytes(data: &[u8]) Result~EncryptedWorkload~
        +to_bytes() Vec~u8~
    }

    class CryptoModule {
        <<module>>
        +KEY_SIZE: usize
        +NONCE_SIZE: usize
        +decrypt_workload(encrypted_data: &[u8], key: &[u8]) Result~Vec~u8~~
        +encrypt_workload(plaintext: &[u8], key: &[u8]) Result~Vec~u8~~
    }

    class StartRequest {
        +String id
        +String name
        +String image_url
        +String file
        +Vec~u64~ inputs
        +bool daemon
        +Option~HashMap~String, String~~ env
        +bool confidential
        +validate() Result~()~
    }

    class Task {
        +String id
        +String name
        +State state
        +String image_url
        +[]byte file
        +[]string cli_args
        +[]uint64 inputs
        +map~string, string~ env
        +bool daemon
        +string proplet_id
        +any results
        +string error
        +time.Time start_time
        +time.Time finish_time
        +time.Time created_at
        +time.Time updated_at
        +bool confidential
        +bool require_attestation
        +string attestation_evidence_type
    }

    PropletService --> PropletConfig
    PropletService --> AttestationService : uses
    PropletService --> TeeType
    PropletService ..> TeeDetector : calls
    PropletService ..> CryptoModule : decrypt_workload
    PropletService --> StartRequest : handles

    PropletConfig --> AttestationConfig

    AttestationService --> AttestationConfig

    CryptoModule --> EncryptedWorkload

    StartRequest ..> Task : maps_to
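The `EncryptedWorkload` layout in the class diagram (a fixed-size nonce followed by ciphertext) can be sketched with plain byte handling. A 12-byte nonce is the AES-GCM convention; the crate's actual `NONCE_SIZE` and error type may differ, so treat this as an illustration of the framing, not the real implementation.

```rust
const NONCE_SIZE: usize = 12; // standard 96-bit AES-GCM nonce (assumed)

struct EncryptedWorkload {
    nonce: [u8; NONCE_SIZE],
    ciphertext: Vec<u8>,
}

impl EncryptedWorkload {
    /// Parse `nonce || ciphertext`; fails if the input is shorter than the nonce.
    fn from_bytes(data: &[u8]) -> Result<Self, String> {
        if data.len() < NONCE_SIZE {
            return Err(format!("payload shorter than {}-byte nonce", NONCE_SIZE));
        }
        let mut nonce = [0u8; NONCE_SIZE];
        nonce.copy_from_slice(&data[..NONCE_SIZE]);
        Ok(Self { nonce, ciphertext: data[NONCE_SIZE..].to_vec() })
    }

    /// Serialize back to the `nonce || ciphertext` wire format.
    fn to_bytes(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(NONCE_SIZE + self.ciphertext.len());
        out.extend_from_slice(&self.nonce);
        out.extend_from_slice(&self.ciphertext);
        out
    }
}

fn main() {
    let payload: Vec<u8> = (0u8..40).collect();
    let w = EncryptedWorkload::from_bytes(&payload).unwrap();
    assert_eq!(w.to_bytes(), payload); // lossless round-trip
    assert!(EncryptedWorkload::from_bytes(&[1, 2, 3]).is_err());
    println!("framing round-trip ok");
}
```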

File-Level Changes

Change Details Files
Introduce TEE detection, attestation plumbing, and confidential workload enforcement in the Proplet service, including decryption of encrypted Wasm workloads.
  • Extend PropletService to hold an optional AttestationService and detected TeeType, initializing them at startup via TeeDetector and attestation config.
  • Implement a decision matrix in start() handling to gate confidential workloads on the presence of a TEE and enabled attestation, perform key retrieval via AttestationService, and bail out with structured error reporting when requirements are not met.
  • Integrate AES‑256‑GCM decryption of Wasm binaries when a key is returned from attestation, failing tasks and publishing errors on decryption failure while leaving non‑confidential workloads unchanged.
proplet/src/service.rs
proplet/src/crypto.rs
proplet/src/attestation/mod.rs
proplet/src/attestation/service.rs
proplet/src/tee/mod.rs
proplet/src/tee/detector.rs
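TEE detection of this kind usually probes well-known guest device nodes. The sketch below injects the probe so the decision logic is testable on non-TEE hosts; the device paths are common Linux conventions (TDX, SEV-SNP, SGX guest devices), not necessarily the exact ones `TeeDetector` checks.

```rust
use std::path::Path;

#[derive(Debug, PartialEq)]
enum TeeType { Tdx, Snp, Sgx, None }

/// First matching probe decides the TEE type. The paths are assumed
/// conventions: /dev/tdx_guest (Intel TDX), /dev/sev-guest (AMD SEV-SNP),
/// /dev/sgx_enclave (Intel SGX) -- verify against the actual detector.
fn detect(probe: impl Fn(&str) -> bool) -> TeeType {
    if probe("/dev/tdx_guest") {
        TeeType::Tdx
    } else if probe("/dev/sev-guest") {
        TeeType::Snp
    } else if probe("/dev/sgx_enclave") {
        TeeType::Sgx
    } else {
        TeeType::None
    }
}

fn main() {
    // Real probe: does the device node exist on this host?
    let detected = detect(|p| Path::new(p).exists());
    println!("detected: {:?}", detected);

    // Injected probes exercise each branch without TEE hardware.
    assert_eq!(detect(|p| p == "/dev/sev-guest"), TeeType::Snp);
    assert_eq!(detect(|_| false), TeeType::None);
}
```

Injecting the probe is what makes the integration tests' four scenarios reproducible on ordinary CI machines.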
Add configuration, wiring, and library exports to support attestation, TEE utilities, and confidential workloads throughout the Proplet crate.
  • Extend PropletConfig to include AttestationConfig with env‑var based overrides (enabled flag, KBS URL, evidence type, timeout) and update defaults and tests accordingly.
  • Define AttestationConfig and AttestationService types, plus re‑exports from lib.rs, and add new dependencies for guest attestation components and AES‑GCM in Cargo.toml (including split lib/bin targets).
  • Expose TEE helper functions and types (detect_tee_type, describe_environment, TeeType) at the crate root for use by integration tests and external callers.
proplet/src/config.rs
proplet/src/attestation/config.rs
proplet/src/lib.rs
proplet/Cargo.toml
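Env-var driven attestation settings of the kind described above can be read with `std::env`. The variable names (`PROPLET_ATTESTATION_*`) and defaults here are illustrative guesses, not taken from the crate:

```rust
use std::env;
use std::time::Duration;

#[derive(Debug)]
struct AttestationConfig {
    enabled: bool,
    kbs_url: String,
    timeout: Duration,
}

impl AttestationConfig {
    /// Variable names and defaults are assumptions for illustration.
    fn from_env() -> Self {
        let enabled = env::var("PROPLET_ATTESTATION_ENABLED")
            .map(|v| v == "true" || v == "1")
            .unwrap_or(false);
        let kbs_url = env::var("PROPLET_KBS_URL")
            .unwrap_or_else(|_| "http://localhost:8080".to_string());
        let timeout_secs = env::var("PROPLET_ATTESTATION_TIMEOUT_SECS")
            .ok()
            .and_then(|v| v.parse::<u64>().ok())
            .unwrap_or(30);
        Self { enabled, kbs_url, timeout: Duration::from_secs(timeout_secs) }
    }
}

fn main() {
    env::set_var("PROPLET_ATTESTATION_ENABLED", "true");
    env::set_var("PROPLET_ATTESTATION_TIMEOUT_SECS", "5");
    let cfg = AttestationConfig::from_env();
    assert!(cfg.enabled);
    assert_eq!(cfg.timeout, Duration::from_secs(5));
    println!("kbs_url = {}", cfg.kbs_url);
}
```

Parsing failures fall back to defaults rather than erroring, which matches the "env-var based overrides" pattern the change describes.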
Extend task and request types, and add integration tests to validate the confidential computing decision matrix and TEE behavior.
  • Add confidential flag to StartRequest (with serde default) and update unit tests to populate it, ensuring backward compatibility.
  • Extend Go Task struct with confidential and attestation‑related fields (RequireAttestation, AttestationEvidenceType) for API-level control.
  • Add integration tests covering the four confidential/non‑confidential × TEE/non‑TEE scenarios, TEE detection, environment description, and decision logic utilities, using the newly exported TEE helpers.
proplet/src/types.rs
task/task.go
proplet/tests/integration_test.rs
Minor test and style cleanups unrelated to core confidential computing behavior.
  • Replace assert_eq!(.., false/true) with assert!/assert!(!..) in various tests for clearer expectations.
  • Simplify address_without_scheme parsing in MQTT tests by avoiding redundant borrowing semantics.
proplet/src/types.rs
proplet/src/mqtt.rs

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help


coderabbitai bot commented Jan 13, 2026

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds optional TEE support: runtime, detection, attestation/KBS integration, encrypted WASM retrieval/decryption, config/env extensions, service routing for encrypted workloads, and deployment/docs updates for TEE-enabled environments.

Changes

  • Build & Feature Flags (proplet/Cargo.toml): Added optional TEE/attestation dependencies and a [features] section defining tee and multiple attester feature flags.
  • TEE Detection (proplet/src/tee_detection.rs): New module with TeeType, TeeDetection, detection helpers and detect_tee() to identify TDX/SEV/SGX at startup.
  • Config (proplet/src/config.rs): PropletConfig extended with TEE fields (tee_enabled, kbs_uri, aa_config_path, layer_store_path, pull_concurrent_limit); load/from_env augmented to detect and enforce KBS settings under a feature gate.
  • Runtime Surface & Impl (proplet/src/runtime/mod.rs, proplet/src/runtime/tee_runtime.rs): New, feature-gated tee_runtime module and TeeWasmRuntime implementing Runtime with attestation-agent integration, ocicrypt/OCI pull+decrypt, and embedded and external WASM execution paths.
  • Service & Startup (proplet/src/main.rs, proplet/src/service.rs): Conditional instantiation of TeeWasmRuntime; new with_tee_runtime constructor, tee_runtime field, set_tee_runtime, and routing logic to use the TEE runtime for encrypted workloads.
  • API Types & Validation (proplet/src/types.rs): StartRequest gains encrypted: bool and kbs_resource_path: Option<String>; validation updated to require image_url and reject file uploads for encrypted workloads.
  • Go Task/Manager Payloads (task/task.go, manager/service.go): Task struct gains Encrypted and KBSResourcePath fields; StartTask() payload includes these fields.
  • Docs & Deployment Scripts (proplet/README.md, hal/ubuntu/README.md, hal/ubuntu/qemu.sh): Documented TEE/KBS/attestation setup and the encrypted workload flow; updated qemu cloud-init and scripts to add attestation-agent and CoCo keyprovider services, ports (50010/50011), and provisioning changes.

Sequence Diagram

sequenceDiagram
    participant Client as Client
    participant Service as PropletService
    participant Detect as TeeDetection
    participant TeeRT as TeeWasmRuntime
    participant AA as AttestationAgent
    participant KBS as KeyBrokerService
    participant OCI as OCI_Registry

    Client->>Service: StartRequest (encrypted=true, image_url, kbs_resource_path)
    Service->>Detect: detect_tee()
    Detect-->>Service: TeeDetection (type, method)

    alt TEE detected
        Service->>TeeRT: init/new(config)
        TeeRT->>AA: connect()/init()
        AA-->>TeeRT: initialized
    end

    Service->>TeeRT: start_app(encrypted, image_url, kbs_resource_path)

    alt Encrypted workload
        TeeRT->>OCI: pull encrypted manifest/layers (image_url)
        OCI-->>TeeRT: encrypted layers
        TeeRT->>AA: collect_evidence()
        AA-->>TeeRT: evidence
        TeeRT->>KBS: request_decryption_key(evidence, kbs_resource_path)
        KBS-->>TeeRT: decryption key
        TeeRT->>TeeRT: decrypt layers (ocicrypt) -> wasm_blob
    else Non-encrypted
        TeeRT->>OCI: pull plain wasm or use embedded bytes
    end

    TeeRT->>TeeRT: run_wasm(wasm_blob) (embedded or external)
    TeeRT-->>Service: stdout/stderr + result
    Service-->>Client: execution result

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

"🐰
Keys and kernels hum beneath the sod,
I nibble logs where attestations nod.
Encrypted hops and layers unwrap with glee,
KBS gifts secrets, Attestation sings to me,
Proplet hops secure — a rabbit's jubilee!"

🚥 Pre-merge checks: ✅ 3 passed

  • Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly and specifically describes the main change: adding encrypted-workload support to proplet, which aligns with the substantial changes across configuration, runtime, and infrastructure files.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 80.75%, which is sufficient; the required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 5 issues and left some high-level feedback:

  • There are now two describe_environment functions (tee::describe_environment and TeeDetector::describe_environment) with different semantics and formats; consider consolidating them or clearly separating their roles to avoid confusion for callers and tests.
  • The doc comment for AttestationService::get_resource says it returns a base64-encoded key, but the function returns raw Vec<u8> bytes from the KBS; update either the implementation or the comment so the encoding contract is explicit and accurate.
  • The new task fields RequireAttestation and AttestationEvidenceType in task.Task are not currently wired into the Rust StartRequest/attestation flow, so either hook them into the decision logic or defer adding them until they are used to avoid drift between client and server semantics.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- There are now two `describe_environment` functions (`tee::describe_environment` and `TeeDetector::describe_environment`) with different semantics and formats; consider consolidating them or clearly separating their roles to avoid confusion for callers and tests.
- The doc comment for `AttestationService::get_resource` says it returns a base64-encoded key, but the function returns raw `Vec<u8>` bytes from the KBS; update either the implementation or the comment so the encoding contract is explicit and accurate.
- The new task fields `RequireAttestation` and `AttestationEvidenceType` in `task.Task` are not currently wired into the Rust `StartRequest`/attestation flow, so either hook them into the decision logic or defer adding them until they are used to avoid drift between client and server semantics.

## Individual Comments

### Comment 1
<location> `proplet/src/service.rs:331-340` </location>
<code_context>
+                        );
+                        Some(key)
+                    }
+                    Err(e) => {
+                        let error_msg = format!(
+                            "Attestation failed for confidential task '{}': {}. \
</code_context>

<issue_to_address>
**issue (bug_risk):** On decryption failure the task is never removed from `running_tasks`, leaving stale task state.

Other confidential-workload error paths (no TEE, attestation failure, attestation disabled) remove the entry from `running_tasks` before returning. In the decryption failure branch you publish the result and return an error but never clear `running_tasks`, so the task remains marked as running even though it failed pre-execution. Please also remove it from `running_tasks` here to prevent stale state and downstream monitoring/housekeeping issues.
</issue_to_address>
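The suggested fix amounts to clearing the `running_tasks` entry on every pre-execution failure path, the decryption branch included. A schematic sketch with simplified types; `fail_task` is a hypothetical helper, not the service's actual code:

```rust
use std::collections::HashMap;

/// On any pre-execution failure (no TEE, attestation error, decryption
/// error) the task must be removed from the running set so it is not
/// reported as running after it has already failed.
fn fail_task(
    running: &mut HashMap<String, &'static str>,
    id: &str,
    reason: &str,
) -> Result<(), String> {
    running.remove(id); // clear stale state before surfacing the error
    Err(format!("task {} failed before execution: {}", id, reason))
}

fn main() {
    let mut running = HashMap::new();
    running.insert("t1".to_string(), "running");

    // Decryption-failure branch now behaves like the other error paths.
    assert!(fail_task(&mut running, "t1", "decryption failed").is_err());
    assert!(!running.contains_key("t1"));
    println!("stale state cleared");
}
```

Routing every error branch through one helper like this also makes it harder for a future branch to forget the cleanup.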

### Comment 2
<location> `proplet/src/attestation/service.rs:35-36` </location>
<code_context>
+            return Err(anyhow::anyhow!("Attestation is disabled"));
+        }
+
+        info!(
+            "Requesting resource from KBS: {} (evidence type: {})",
+            resource_path, self.config.evidence_type
+        );
</code_context>

<issue_to_address>
**suggestion:** `AttestationConfig` fields like `evidence_type` and `timeout_secs` are not reflected in the KBS client configuration.

`self.config.evidence_type` is only logged, not passed into `NativeEvidenceProvider`/`KbsClientBuilder`, and `timeout_secs` is also unused. If these are meant to be operator-controlled settings, they should be wired through to the attester/KBS client (e.g., via feature selection, provider options, or HTTP client timeouts). Otherwise, consider removing them from `AttestationConfig` for now to avoid suggesting they have an effect.

Suggested implementation:

```rust
        info!("Requesting resource from KBS: {}", resource_path);

```

To fully implement your review comment, you should also:

1. Update the `AttestationConfig` definition (likely in `proplet/src/attestation/mod.rs` or a nearby config module):
   - Remove the `evidence_type` and `timeout_secs` fields entirely **if** they are not actually used to configure any component (attester, KBS client, HTTP client).
   - Alternatively, if you want to keep them for future use:
     - Add clear documentation comments to each field indicating that they are currently unused and do not affect behavior.
     - Avoid logging them in a way that suggests they influence the KBS request.

2. Remove or adjust all remaining references to `evidence_type` and `timeout_secs`:
   - Search for `AttestationConfig` usages (e.g., `self.config.evidence_type`, `self.config.timeout_secs`) and:
     - Delete those references if the fields are removed, or
     - Update logs and comments to avoid implying that they are active configuration knobs.

3. If you later decide to wire these settings through instead of removing them:
   - Extend the KBS/attestation client construction to accept:
     - `evidence_type` as a parameter/option for `NativeEvidenceProvider` or `KbsClientBuilder`.
     - `timeout_secs` as an HTTP client timeout (e.g., via a `reqwest::ClientBuilder` or a timeout option on the KBS client).
   - In that case, you can re-introduce relevant logging reflecting the actual configuration passed to the client.
</issue_to_address>

### Comment 3
<location> `proplet/src/attestation/service.rs:27-30` </location>
<code_context>
+    /// * `workload_id` - Identifier for the workload requesting the key
+    ///
+    /// # Returns
+    /// * Base64-encoded encryption key
+    pub async fn get_resource(&self, resource_path: &str, _workload_id: &str) -> Result<Vec<u8>> {
+        if !self.config.enabled {
+            return Err(anyhow::anyhow!("Attestation is disabled"));
</code_context>

<issue_to_address>
**suggestion:** The documented return format for `get_resource` doesn’t match how the value is used.

The function returns the raw `Vec<u8>` from `client.get_resource`, and callers use it directly as the AES key in `decrypt_workload`. If the KBS is meant to return raw key bytes (as current usage suggests), please update the return description to reflect this and avoid future confusion or accidental double-decoding.

```suggestion
    ///
    /// # Returns
    /// * Raw encryption key bytes as returned by the KBS
    pub async fn get_resource(&self, resource_path: &str, _workload_id: &str) -> Result<Vec<u8>> {
```
</issue_to_address>

### Comment 4
<location> `proplet/tests/integration_test.rs:7-16` </location>
<code_context>
+/// Test scenario 1: Confidential workload + No TEE
</code_context>

<issue_to_address>
**suggestion (testing):** Integration tests exercise only high-level decision comments and prints, but do not actually drive `PropletService` or assert the new confidential/attestation behavior.

Right now these tests mostly branch on `detect_tee_type()`, print log messages, and in some paths don't assert any behavior. They also never spin up a real `PropletService` or send a `StartRequest`, so we don't actually verify that: (1) confidential workloads are rejected without a TEE, (2) attestation runs when required, and (3) non-confidential workloads still execute. Please consider an integration harness that starts a `PropletService` with mocked `Runtime`/`AttestationService`, issues `StartRequest`s for the four decision-matrix scenarios, and asserts on the resulting result/error. That would ensure the tests validate the feature behavior, not just the control-flow narrative.

Suggested implementation:

```rust
// Copyright (c) Abstract Machines
// Integration tests for confidential computing scenarios

use proplet::{
    detect_tee_type,
    describe_environment,
    TeeType,
    // Adjust these paths/names to match the actual crate structure:
    service::PropletService,
    runtime::{Runtime, StartRequest, StartResult},
    attestation::{AttestationService, AttestationConfig, AttestationResult},
};
use std::sync::{Arc, Mutex};

/// Simple mock runtime that records whether `start()` was invoked and with what
#[derive(Default, Clone)]
struct MockRuntimeState {
    pub last_request: Option<StartRequest>,
    pub start_called: bool,
}

#[derive(Clone)]
struct MockRuntime {
    state: Arc<Mutex<MockRuntimeState>>,
    // configurable behavior, e.g., force failure, etc., if needed later
}

impl MockRuntime {
    fn new() -> Self {
        Self {
            state: Arc::new(Mutex::new(MockRuntimeState::default())),
        }
    }

    fn state(&self) -> MockRuntimeState {
        self.state.lock().unwrap().clone()
    }
}

impl Runtime for MockRuntime {
    fn start(&self, req: StartRequest) -> StartResult {
        let mut s = self.state.lock().unwrap();
        s.start_called = true;
        s.last_request = Some(req);
        // In these tests we only care that `start` is invoked or not, so we
        // treat all starts as success by default.
        Ok(())
    }
}

/// Simple mock attestation service that records invocations
#[derive(Default, Clone)]
struct MockAttestationState {
    pub last_config: Option<AttestationConfig>,
    pub attest_called: bool,
}

#[derive(Clone)]
struct MockAttestationService {
    state: Arc<Mutex<MockAttestationState>>,
    // Allow tests to force attestation failure if needed
    pub should_fail: bool,
}

impl MockAttestationService {
    fn new() -> Self {
        Self {
            state: Arc::new(Mutex::new(MockAttestationState::default())),
            should_fail: false,
        }
    }

    fn with_failure() -> Self {
        Self {
            should_fail: true,
            ..Self::new()
        }
    }

    fn state(&self) -> MockAttestationState {
        self.state.lock().unwrap().clone()
    }
}

impl AttestationService for MockAttestationService {
    fn attest(&self, cfg: AttestationConfig) -> AttestationResult {
        let mut s = self.state.lock().unwrap();
        s.attest_called = true;
        s.last_config = Some(cfg);

        if self.should_fail {
            // Adjust the error type/variant to match your real API
            Err("attestation failed".into())
        } else {
            Ok(())
        }
    }
}

/// Helper to construct a `StartRequest` for tests
fn make_start_request(confidential: bool) -> StartRequest {
    StartRequest {
        // Fill in the fields your real `StartRequest` requires:
        confidential,
        // ..Default::default()
    }
}

/// Test scenario 1: Confidential workload + No TEE
/// Expected: Should fail with security error and never start the runtime
#[test]
fn test_scenario_1_confidential_no_tee() {
    // This test validates the security model:
    // Confidential workloads MUST NOT run on non-TEE hardware
    let tee_type = TeeType::None; // override to explicitly test "no TEE" behavior

    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::new();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(true);

    let result = service.start(req);

    // Adjust these assertions to your actual error type/variant
    assert!(result.is_err(), "confidential workload should be rejected on non-TEE");
    let runtime_state = runtime.state();
    let att_state = attestation.state();

    // Runtime must not be started
    assert!(!runtime_state.start_called, "runtime must NOT be invoked for confidential workload on non-TEE");
    // Attestation should not run because we short-circuit on policy
    assert!(!att_state.attest_called, "attestation should not run when there is no TEE");
}

/// Test scenario 2: Confidential workload + TEE
/// Expected: Attestation runs, runtime starts on success
#[test]
fn test_scenario_2_confidential_with_tee_attestation_runs() {
    let tee_type = TeeType::SevSnp; // or whatever represents a supported TEE type

    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::new();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(true);

    let result = service.start(req);

    assert!(result.is_ok(), "confidential workload on TEE with successful attestation should start");

    let runtime_state = runtime.state();
    let att_state = attestation.state();

    assert!(att_state.attest_called, "attestation must run for confidential workload on TEE");
    assert!(runtime_state.start_called, "runtime must be started after successful attestation");
}

/// Test scenario 3: Confidential workload + TEE but attestation fails
/// Expected: Attestation runs, runtime does NOT start, error returned
#[test]
fn test_scenario_3_confidential_with_tee_attestation_fails() {
    let tee_type = TeeType::SevSnp;

    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::with_failure();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(true);

    let result = service.start(req);

    assert!(result.is_err(), "confidential workload must fail when attestation fails");

    let runtime_state = runtime.state();
    let att_state = attestation.state();

    assert!(att_state.attest_called, "attestation must run for confidential workload on TEE");
    assert!(
        !runtime_state.start_called,
        "runtime must NOT start when attestation fails"
    );
}

/// Test scenario 4: Non-confidential workload + No TEE
/// Expected: No attestation, runtime starts successfully
#[test]
fn test_scenario_4_non_confidential_no_tee() {
    let tee_type = TeeType::None;

    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::new();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(false);

    let result = service.start(req);

    assert!(result.is_ok(), "non-confidential workload should start even without TEE");

    let runtime_state = runtime.state();
    let att_state = attestation.state();

    assert!(
        runtime_state.start_called,
        "runtime must start for non-confidential workload on non-TEE"
    );
    assert!(
        !att_state.attest_called,
        "attestation must NOT run for non-confidential workload"
    );
}

/// Test scenario 5: Non-confidential workload + TEE
/// Expected: No attestation required, runtime starts successfully
#[test]
fn test_scenario_5_non_confidential_with_tee() {
    let tee_type = detect_tee_type();

    // If the host has no TEE, this scenario is equivalent to scenario 4.
    // If the host has a TEE, we still don't require attestation for non-confidential workloads.
    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::new();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(false);

    let result = service.start(req);

    assert!(
        result.is_ok(),
        "non-confidential workload should start regardless of TEE presence"
    );

    let runtime_state = runtime.state();
    let att_state = attestation.state();

    assert!(runtime_state.start_called, "runtime must be started");
    assert!(
        !att_state.attest_called,
        "attestation must not be required for non-confidential workload"
    );
}

```

1. **Adjust imports and paths**:
   - Replace `service::PropletService`, `runtime::{Runtime, StartRequest, StartResult}`, and `attestation::{AttestationService, AttestationConfig, AttestationResult}` with the correct module paths and type names used in your `proplet` crate.
2. **Align trait signatures**:
   - Update the `impl Runtime for MockRuntime` and `impl AttestationService for MockAttestationService` blocks to exactly match your real trait method signatures (parameter names, return types, error types).
   - Replace `Ok(())`, `Err("attestation failed".into())`, and the `StartResult`/`AttestationResult` aliases with the actual result types your traits use.
3. **Match `StartRequest` fields**:
   - The `make_start_request` helper currently assumes a `confidential: bool` field. Replace this with the real fields/constructors on your `StartRequest` (e.g., `confidentiality: ConfidentialityLevel`, workload identifiers, etc.).
4. **Constructing `PropletService`**:
   - Replace `PropletService::new(runtime.clone(), attestation.clone(), tee_type)` with the actual constructor or builder pattern used by your service, passing the runtime, attestation service, and TEE type as appropriate.
5. **Error assertions**:
   - The tests currently use generic `result.is_err()` checks. For stronger guarantees, refine these to assert the specific error variants that your service returns for:
     - “confidential workload on non-TEE” (e.g., `Error::SecurityPolicy`),
     - “attestation failure” (e.g., `Error::AttestationFailed`).
   - Update pattern matches and imports accordingly.
6. **Optional: use `describe_environment` for logging**:
   - If you want to keep the original narrative logging, you can log `describe_environment(&tee_type)` inside tests, but avoid branching behavior exclusively on the host’s `detect_tee_type()` to keep tests deterministic.
</issue_to_address>

### Comment 5
<location> `proplet/tests/integration_test.rs:241-250` </location>
<code_context>
+
+/// Test that confidential flag is properly parsed
+#[test]
+fn test_confidential_flag_parsing() {
+    // This would test the StartRequest deserialization
+    // For now, we just verify the logic exists
+    
+    let test_cases = vec![
+        (true, "confidential workload"),
+        (false, "normal workload"),
+    ];
+    
+    for (confidential, description) in test_cases {
+        println!("Testing: {} (confidential={})", description, confidential);
+        assert_eq!(confidential, confidential); // Tautology, but demonstrates the test
+    }
+}
</code_context>

<issue_to_address>
**issue (testing):** The `test_confidential_flag_parsing` test is tautological and does not validate JSON parsing or `StartRequest` behavior.

This test only checks `confidential == confidential`, so it doesn’t validate any actual behavior. Instead, deserialize JSON into `StartRequest` and assert that `confidential` is parsed and defaulted correctly—for example, parse a payload without `"confidential"` and confirm it defaults to `false` via `#[serde(default)]`, and then payloads with `"confidential": true` and `"confidential": false` to verify those values are honored.
</issue_to_address>
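The deserialization test this comment asks for can be sketched as follows. To keep the sketch self-contained it uses a toy string-matching extractor instead of serde_json; in the real test you would derive `Deserialize` on `StartRequest` and call `serde_json::from_str`, with the `else` branch below standing in for the `#[serde(default)]` fallback. All names are illustrative.

```rust
#[derive(Debug, Default, PartialEq)]
struct StartRequest {
    confidential: bool,
}

// Toy stand-in for serde deserialization: extracts the "confidential"
// field if present, otherwise falls back to Default (false), mirroring
// what #[serde(default)] does.
fn parse_start_request(json: &str) -> StartRequest {
    let confidential = if json.contains("\"confidential\": true") {
        true
    } else if json.contains("\"confidential\": false") {
        false
    } else {
        bool::default() // missing field -> false, i.e. backward compatible
    };
    StartRequest { confidential }
}

fn main() {
    // Missing field defaults to false.
    assert!(!parse_start_request(r#"{"id": "t1"}"#).confidential);
    // Explicit values are honored.
    assert!(parse_start_request(r#"{"id": "t1", "confidential": true}"#).confidential);
    assert!(!parse_start_request(r#"{"id": "t1", "confidential": false}"#).confidential);
}
```

In the actual test the three assertions would each go through the real serde path, so a regression in the `#[serde(default)]` attribute or a field rename would fail the test instead of passing tautologically.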


Comment on lines 7 to 16
/// Test scenario 1: Confidential workload + No TEE
/// Expected: Should fail with security error
#[test]
fn test_scenario_1_confidential_no_tee() {
// This test validates the security model:
// Confidential workloads MUST NOT run on non-TEE hardware

let tee_type = detect_tee_type();
let is_confidential = true;


suggestion (testing): Integration tests exercise only high-level decision logic through comments and print statements; they do not actually drive PropletService or assert the new confidential/attestation behavior.

Right now these tests mostly branch on detect_tee_type(), log messages, and in some paths assert nothing at all. They also never spin up a real PropletService or send a StartRequest, so we don't actually verify that: (1) confidential workloads are rejected without a TEE, (2) attestation runs when required, and (3) non-confidential workloads still execute. Please consider an integration harness that starts a PropletService with mocked Runtime/AttestationService, issues StartRequests for the four decision-matrix scenarios, and asserts on the resulting result/error. That would ensure the tests validate the feature behavior, not just the control-flow narrative.

Suggested implementation:

// Copyright (c) Abstract Machines
// Integration tests for confidential computing scenarios

use proplet::{
    detect_tee_type,
    describe_environment,
    TeeType,
    // Adjust these paths/names to match the actual crate structure:
    service::PropletService,
    runtime::{Runtime, StartRequest, StartResult},
    attestation::{AttestationService, AttestationConfig, AttestationResult},
};
use std::sync::{Arc, Mutex};

/// Simple mock runtime that records whether `start()` was invoked and with what
#[derive(Default, Clone)]
struct MockRuntimeState {
    pub last_request: Option<StartRequest>,
    pub start_called: bool,
}

#[derive(Clone)]
struct MockRuntime {
    state: Arc<Mutex<MockRuntimeState>>,
    // configurable behavior, e.g., force failure, etc., if needed later
}

impl MockRuntime {
    fn new() -> Self {
        Self {
            state: Arc::new(Mutex::new(MockRuntimeState::default())),
        }
    }

    fn state(&self) -> MockRuntimeState {
        self.state.lock().unwrap().clone()
    }
}

impl Runtime for MockRuntime {
    fn start(&self, req: StartRequest) -> StartResult {
        let mut s = self.state.lock().unwrap();
        s.start_called = true;
        s.last_request = Some(req);
        // In these tests we only care that `start` is invoked or not, so we
        // treat all starts as success by default.
        Ok(())
    }
}

/// Simple mock attestation service that records invocations
#[derive(Default, Clone)]
struct MockAttestationState {
    pub last_config: Option<AttestationConfig>,
    pub attest_called: bool,
}

#[derive(Clone)]
struct MockAttestationService {
    state: Arc<Mutex<MockAttestationState>>,
    // Allow tests to force attestation failure if needed
    pub should_fail: bool,
}

impl MockAttestationService {
    fn new() -> Self {
        Self {
            state: Arc::new(Mutex::new(MockAttestationState::default())),
            should_fail: false,
        }
    }

    fn with_failure() -> Self {
        Self {
            should_fail: true,
            ..Self::new()
        }
    }

    fn state(&self) -> MockAttestationState {
        self.state.lock().unwrap().clone()
    }
}

impl AttestationService for MockAttestationService {
    fn attest(&self, cfg: AttestationConfig) -> AttestationResult {
        let mut s = self.state.lock().unwrap();
        s.attest_called = true;
        s.last_config = Some(cfg);

        if self.should_fail {
            // Adjust the error type/variant to match your real API
            Err("attestation failed".into())
        } else {
            Ok(())
        }
    }
}

/// Helper to construct a `StartRequest` for tests
fn make_start_request(confidential: bool) -> StartRequest {
    StartRequest {
        // Fill in the fields your real `StartRequest` requires:
        confidential,
        // ..Default::default()
    }
}

/// Test scenario 1: Confidential workload + No TEE
/// Expected: Should fail with security error and never start the runtime
#[test]
fn test_scenario_1_confidential_no_tee() {
    // This test validates the security model:
    // Confidential workloads MUST NOT run on non-TEE hardware
    let tee_type = TeeType::None; // override to explicitly test "no TEE" behavior

    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::new();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(true);

    let result = service.start(req);

    // Adjust these assertions to your actual error type/variant
    assert!(result.is_err(), "confidential workload should be rejected on non-TEE");
    let runtime_state = runtime.state();
    let att_state = attestation.state();

    // Runtime must not be started
    assert!(!runtime_state.start_called, "runtime must NOT be invoked for confidential workload on non-TEE");
    // Attestation should not run because we short-circuit on policy
    assert!(!att_state.attest_called, "attestation should not run when there is no TEE");
}

/// Test scenario 2: Confidential workload + TEE
/// Expected: Attestation runs, runtime starts on success
#[test]
fn test_scenario_2_confidential_with_tee_attestation_runs() {
    let tee_type = TeeType::SevSnp; // or whatever represents a supported TEE type

    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::new();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(true);

    let result = service.start(req);

    assert!(result.is_ok(), "confidential workload on TEE with successful attestation should start");

    let runtime_state = runtime.state();
    let att_state = attestation.state();

    assert!(att_state.attest_called, "attestation must run for confidential workload on TEE");
    assert!(runtime_state.start_called, "runtime must be started after successful attestation");
}

/// Test scenario 3: Confidential workload + TEE but attestation fails
/// Expected: Attestation runs, runtime does NOT start, error returned
#[test]
fn test_scenario_3_confidential_with_tee_attestation_fails() {
    let tee_type = TeeType::SevSnp;

    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::with_failure();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(true);

    let result = service.start(req);

    assert!(result.is_err(), "confidential workload must fail when attestation fails");

    let runtime_state = runtime.state();
    let att_state = attestation.state();

    assert!(att_state.attest_called, "attestation must run for confidential workload on TEE");
    assert!(
        !runtime_state.start_called,
        "runtime must NOT start when attestation fails"
    );
}

/// Test scenario 4: Non-confidential workload + No TEE
/// Expected: No attestation, runtime starts successfully
#[test]
fn test_scenario_4_non_confidential_no_tee() {
    let tee_type = TeeType::None;

    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::new();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(false);

    let result = service.start(req);

    assert!(result.is_ok(), "non-confidential workload should start even without TEE");

    let runtime_state = runtime.state();
    let att_state = attestation.state();

    assert!(
        runtime_state.start_called,
        "runtime must start for non-confidential workload on non-TEE"
    );
    assert!(
        !att_state.attest_called,
        "attestation must NOT run for non-confidential workload"
    );
}

/// Test scenario 5: Non-confidential workload + TEE
/// Expected: No attestation required, runtime starts successfully
#[test]
fn test_scenario_5_non_confidential_with_tee() {
    let tee_type = detect_tee_type();

    // If the host has no TEE, this scenario is equivalent to scenario 4.
    // If the host has a TEE, we still don't require attestation for non-confidential workloads.
    let runtime = MockRuntime::new();
    let attestation = MockAttestationService::new();
    let service = PropletService::new(runtime.clone(), attestation.clone(), tee_type);

    let req = make_start_request(false);

    let result = service.start(req);

    assert!(
        result.is_ok(),
        "non-confidential workload should start regardless of TEE presence"
    );

    let runtime_state = runtime.state();
    let att_state = attestation.state();

    assert!(runtime_state.start_called, "runtime must be started");
    assert!(
        !att_state.attest_called,
        "attestation must not be required for non-confidential workload"
    );
}



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Fix all issues with AI agents
In @proplet/Cargo.toml:
- Around line 32-39: The Cargo.toml currently uses path-based dependencies for
kbs_protocol, attestation-agent, and attester pointing at
../guest-components/attestation-agent/* which prevents building because the
guest-components directory is missing; fix this by either (A) converting those
three path dependencies into published crate versions (set kbs_protocol,
attestation-agent, attester to the appropriate crate versions on crates.io), or
(B) add the missing guest-components as a git submodule and ensure the paths
match, or (C) vendor the guest-components into the repo or add them as workspace
members; whichever route you pick, update Cargo.toml accordingly (touching the
dependency entries for kbs_protocol, attestation-agent, attester) and document
the chosen provisioning step in the README so builders know how to obtain the
guest-components.

In @proplet/src/service.rs:
- Around line 305-309: The early error paths are calling
self.running_tasks.lock().await.remove(&req.id) before the task is ever
inserted, which is misleading; remove those premature removals and leave the
publish_result(...) and Err(...) handling intact. Specifically, delete the
remove(&req.id) call in the error branch that logs error_msg (the block that
calls publish_result(&req.id, Vec::new(), Some(error_msg.clone())).await? and
returns Err(...)), and apply the same removal of the premature cleanup to the
two other analogous error branches that reference self.running_tasks and req.id
(the occurrences near the publish_result calls).

In @proplet/tests/integration_test.rs:
- Around line 250-253: The loop currently contains a tautological assertion
assert_eq!(confidential, confidential) which never validates behavior; replace
it with an assertion that compares the expected `confidential` value to the
actual result of the parsing/logic under test (for example, call the parser or
function that consumes `description` and assert_eq!(parsed_confidential,
confidential)). Locate the loop using the variables `confidential` and
`description` in the test and change the assertion to use the actual output
(e.g., `parsed_confidential`) and include a small context message to aid
debugging if the assertion fails.
🧹 Nitpick comments (9)
proplet/src/config.rs (1)

209-226: Environment variable parsing looks good, with a minor consideration.

The boolean parsing for PROPLET_ATTESTATION_ENABLED handles "true" (case-insensitive) and "1". This covers the common cases.

For consistency with common configuration patterns, you might also consider accepting "yes" and "on":

♻️ Optional enhancement
         if let Ok(val) = env::var("PROPLET_ATTESTATION_ENABLED") {
-            config.attestation.enabled = val.to_lowercase() == "true" || val == "1";
+            let val_lower = val.to_lowercase();
+            config.attestation.enabled = matches!(val_lower.as_str(), "true" | "1" | "yes" | "on");
         }
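The lenient parsing suggested above can be factored into a small helper. This is a minimal sketch; the accepted truthy spellings ("true", "1", "yes", "on") are the reviewer's suggestion, not the crate's current behavior.

```rust
// Lenient boolean-flag parsing: trims whitespace, ignores case, and
// accepts the common truthy spellings. Anything else is false.
fn parse_bool_flag(val: &str) -> bool {
    matches!(
        val.trim().to_ascii_lowercase().as_str(),
        "true" | "1" | "yes" | "on"
    )
}

fn main() {
    assert!(parse_bool_flag("TRUE"));
    assert!(parse_bool_flag(" 1 "));
    assert!(parse_bool_flag("Yes"));
    assert!(parse_bool_flag("on"));
    assert!(!parse_bool_flag("0"));
    assert!(!parse_bool_flag("off"));
    assert!(!parse_bool_flag(""));
}
```

It could then be wired up as, e.g., `config.attestation.enabled = env::var("PROPLET_ATTESTATION_ENABLED").map(|v| parse_bool_flag(&v)).unwrap_or(false);`.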
proplet/src/tee/mod.rs (1)

10-17: Inconsistent describe_environment() output format.

This function produces a different output format than TeeDetector::describe_environment() in detector.rs (lines 248-261). The module-level function outputs "TEE Detection: {tee_type} - Running in trusted execution environment", while TeeDetector::describe_environment() outputs "Running in {TEE_TYPE} Trusted Execution Environment".

Consider delegating to the existing implementation to avoid format inconsistency:

Suggested fix
 pub fn describe_environment() -> String {
-    let tee_type = detect_tee_type();
-    format!("TEE Detection: {} - {}", 
-        tee_type.as_str(),
-        if tee_type.is_tee() { "Running in trusted execution environment" } else { "Running on regular hardware" }
-    )
+    TeeDetector::describe_environment()
 }
proplet/src/tee/detector.rs (2)

164-170: Unreliable dmesg check location.

/var/log/dmesg is not a standard location on many Linux distributions. The kernel ring buffer is typically accessed via /dev/kmsg or the dmesg command. This check will likely always fail silently, which is safe but ineffective.

Consider removing or updating this check
-        // Check dmesg for SNP messages (requires root)
-        if let Ok(dmesg) = fs::read_to_string("/var/log/dmesg") {
-            if dmesg.contains("SEV-SNP") {
-                debug!("Found SEV-SNP in dmesg");
-                return true;
-            }
-        }

269-276: Missing test coverage for Se and Cca variants.

The test_tee_type_as_str test skips TeeType::Se and TeeType::Cca variants.

Add missing assertions
     fn test_tee_type_as_str() {
         assert_eq!(TeeType::Tdx.as_str(), "tdx");
         assert_eq!(TeeType::Sgx.as_str(), "sgx");
         assert_eq!(TeeType::Snp.as_str(), "snp");
         assert_eq!(TeeType::Sev.as_str(), "sev");
+        assert_eq!(TeeType::Se.as_str(), "se");
+        assert_eq!(TeeType::Cca.as_str(), "cca");
         assert_eq!(TeeType::None.as_str(), "none");
     }
proplet/tests/integration_test.rs (1)

182-187: Test assertion coupled to specific describe_environment() format.

This assertion expects "TEE Detection:" in the output, which matches the tee/mod.rs implementation but not TeeDetector::describe_environment(). If the format inconsistency is resolved by delegating to TeeDetector, this test will fail.

Consider making the assertion format-agnostic or updating it to match the intended behavior.

Format-agnostic alternative
     // Verify it contains key information
-    assert!(description.contains("TEE Detection:"));
+    // Verify the description mentions the environment type
+    let tee_type = detect_tee_type();
+    assert!(description.contains(tee_type.as_str()) || description.to_lowercase().contains("tee") || description.contains("hardware"));
proplet/src/attestation/service.rs (3)

79-94: Health check does not verify actual KBS connectivity.

The current implementation always returns Ok(true) regardless of whether the KBS is reachable. This could mask connectivity issues until actual resource retrieval is attempted. Consider implementing an actual connectivity check if KBS supports a health/status endpoint.


96-124: Consider removing or completing the decrypt_workload placeholder method.

This method is marked #[allow(dead_code)] and returns encrypted data unchanged, with a warning that "Workload decryption not yet implemented." The actual decryption is correctly implemented in crate::crypto::decrypt_workload (used at line 425 of service.rs) which properly handles AES-256-GCM decryption. Since the AttestationService method is unused and explicitly marked dead code, the practical security risk is minimal. However, keeping an obviously-unimplemented placeholder method may create confusion. Either implement it as part of a future API, or remove it if the placeholder design pattern is no longer needed.


40-53: Consider caching the KBS client to reuse TLS sessions.

Creating a new NativeEvidenceProvider and KbsClientBuilder on each call may incur repeated TLS setup. The KBS protocol supports long-lived secure channels via TLS session resumption, but this requires explicit configuration (e.g., rustls Resumption::in_memory_sessions). Consider either caching the client instance in the AttestationService struct or enabling session resumption in KbsClientBuilder to avoid redundant handshakes on repeated key requests.

proplet/src/crypto.rs (1)

32-46: Consider adding minimum ciphertext size validation.

The current check ensures data.len() >= NONCE_SIZE, but valid AES-GCM ciphertext must include at least the 16-byte authentication tag. Exactly 12 bytes would parse successfully but fail at decryption. Adding stricter validation could provide clearer error messages earlier.

💡 Optional enhancement for earlier validation
+const TAG_SIZE: usize = 16;
+
 pub fn from_bytes(data: &[u8]) -> Result<Self> {
-    if data.len() < NONCE_SIZE {
+    // Minimum: nonce (12) + authentication tag (16)
+    const MIN_SIZE: usize = NONCE_SIZE + TAG_SIZE;
+    if data.len() < MIN_SIZE {
         return Err(anyhow!(
-            "Encrypted workload too short: {} bytes (minimum {})",
+            "Encrypted workload too short: {} bytes (minimum {} for nonce + tag)",
             data.len(),
-            NONCE_SIZE
+            MIN_SIZE
         ));
     }
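The stricter check can be sketched stdlib-only as below. `EncryptedWorkload` and `from_bytes` mirror the review's description of the nonce-prefixed layout (12-byte nonce, then ciphertext carrying a trailing 16-byte GCM tag); they are illustrative, not the crate's exact API.

```rust
const NONCE_SIZE: usize = 12; // 96-bit GCM nonce
const TAG_SIZE: usize = 16;   // 128-bit GCM authentication tag

#[allow(dead_code)]
struct EncryptedWorkload {
    nonce: [u8; NONCE_SIZE],
    ciphertext: Vec<u8>, // includes the GCM tag at the end
}

// Rejects payloads that cannot possibly hold a nonce plus a tag,
// giving a clear error before decryption is even attempted.
fn from_bytes(data: &[u8]) -> Result<EncryptedWorkload, String> {
    const MIN_SIZE: usize = NONCE_SIZE + TAG_SIZE;
    if data.len() < MIN_SIZE {
        return Err(format!(
            "Encrypted workload too short: {} bytes (minimum {} for nonce + tag)",
            data.len(),
            MIN_SIZE
        ));
    }
    let mut nonce = [0u8; NONCE_SIZE];
    nonce.copy_from_slice(&data[..NONCE_SIZE]);
    Ok(EncryptedWorkload {
        nonce,
        ciphertext: data[NONCE_SIZE..].to_vec(),
    })
}

fn main() {
    assert!(from_bytes(&[0u8; 12]).is_err()); // nonce only: rejected early
    assert!(from_bytes(&[0u8; 27]).is_err()); // one byte short of nonce + tag
    assert!(from_bytes(&[0u8; 28]).is_ok());  // exactly nonce + tag (empty plaintext)
}
```

With the original `data.len() >= NONCE_SIZE` check, the 12- and 27-byte inputs would parse here and only fail later inside the AES-GCM tag verification, with a less helpful error.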
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1e8ea05 and 4051f3b.

⛔ Files ignored due to path filters (1)
  • proplet/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (15)
  • proplet/Cargo.toml
  • proplet/src/attestation/config.rs
  • proplet/src/attestation/mod.rs
  • proplet/src/attestation/service.rs
  • proplet/src/config.rs
  • proplet/src/crypto.rs
  • proplet/src/lib.rs
  • proplet/src/main.rs
  • proplet/src/mqtt.rs
  • proplet/src/service.rs
  • proplet/src/tee/detector.rs
  • proplet/src/tee/mod.rs
  • proplet/src/types.rs
  • proplet/tests/integration_test.rs
  • task/task.go
🧰 Additional context used
🧬 Code graph analysis (4)
proplet/src/attestation/config.rs (1)
proplet/src/config.rs (1)
  • default (45-65)
proplet/src/tee/detector.rs (1)
proplet/src/tee/mod.rs (1)
  • describe_environment (11-17)
proplet/tests/integration_test.rs (2)
proplet/src/tee/mod.rs (2)
  • detect_tee_type (6-8)
  • describe_environment (11-17)
proplet/src/tee/detector.rs (3)
  • describe_environment (249-262)
  • as_str (26-36)
  • is_tee (39-41)
proplet/src/lib.rs (1)
config.go (1)
  • PropletConfig (23-28)
🔇 Additional comments (25)
proplet/src/mqtt.rs (2)

207-207: LGTM! Idiomatic boolean assertions.

The change from assert!(tls == false) to assert!(!tls) and assert!(tls == true) to assert!(tls) follows Rust's idiomatic style and aligns with clippy's bool_comparison lint.

Also applies to: 215-215, 223-223, 231-231, 239-239, 247-247, 255-255, 263-263, 271-271, 279-279, 287-287, 295-295


369-369: LGTM! Correct ownership semantics.

The change from unwrap_or(&address) to unwrap_or(address) is correct. Since address is already &str and nth(1) returns Option<&str>, passing address directly provides the correct fallback type.

Also applies to: 388-388, 407-407, 426-426, 445-445, 464-464, 483-483

task/task.go (1)

49-51: LGTM! New attestation-related fields extend the Task model appropriately.

The new fields (Confidential, RequireAttestation, AttestationEvidenceType) align well with the attestation infrastructure being added. Using omitempty ensures backward compatibility with existing Task consumers.

Consider documenting the relationship between these fields. Should RequireAttestation: true imply Confidential: true? Verify that consuming code handles the case where RequireAttestation is set without Confidential (or vice versa).

proplet/src/attestation/config.rs (2)

1-25: LGTM! Well-structured attestation configuration.

The configuration struct is appropriately designed with sensible defaults. The enabled: false default ensures attestation is opt-in.

Two considerations for production use:

  1. The default kbs_url uses http:// - ensure documentation guides users to configure HTTPS in production environments.
  2. Consider adding validation to prevent timeout_secs: 0 which could cause immediate timeouts.

27-39: LGTM!

The test properly validates all default configuration values.

proplet/src/config.rs (2)

1-1: LGTM! Proper integration of AttestationConfig.

The attestation configuration is correctly integrated into PropletConfig with appropriate default initialization.

Also applies to: 41-41, 63-63


276-329: LGTM! Cleaner test initialization.

Using the struct update syntax (..PropletConfig::default()) is more maintainable as it automatically handles new fields added to the struct.

proplet/Cargo.toml (1)

6-12: LGTM! Proper dual-target crate configuration.

The [lib] and [[bin]] sections correctly configure the crate as both a library and binary, enabling the proplet module to be used as a library in tests while also producing the binary.

proplet/src/main.rs (1)

1-8: LGTM! New module declarations for attestation infrastructure.

The new attestation, crypto, and tee modules are correctly declared. These modules provide the infrastructure for confidential workload handling as described in the PR objectives.

proplet/src/types.rs (3)

60-61: LGTM! New confidential field for StartRequest.

The confidential field is correctly added with #[serde(default)], ensuring backward compatibility with existing API consumers that don't provide this field.

Consider whether validate() should include checks related to the confidential flag, such as verifying that confidential requests have the required encryption metadata or that the runtime environment supports confidential workloads.


157-157: LGTM! Idiomatic boolean assertions.

The changes from assert_eq!(value, false) to assert!(!value) follow Rust's idiomatic style.

Also applies to: 317-317


196-293: LGTM! Comprehensive test updates.

All test cases have been properly updated to include the new confidential: false field in StartRequest instantiations.

proplet/src/tee/mod.rs (1)

1-8: LGTM!

The module structure, re-exports, and detect_tee_type() convenience function are clean and properly delegate to the underlying implementation.

proplet/src/attestation/mod.rs (1)

1-5: LGTM!

Clean module organization with appropriate visibility. The submodules are kept private while re-exporting only the necessary public types.

proplet/src/lib.rs (1)

1-22: LGTM!

Well-organized library root with clear documentation describing the purpose of the crate. The module structure and re-exports are appropriate for the library's public API surface.

proplet/src/tee/detector.rs (1)

5-42: LGTM!

The TeeType enum is well-designed with appropriate derives and clean implementations of as_str() and is_tee().

proplet/tests/integration_test.rs (1)

189-223: LGTM!

The should_perform_attestation helper and its test provide good coverage of the attestation decision logic, including error cases for security violations.
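For reference, the decision matrix these tests cover reduces to a small total function over (confidential, TEE-present). The signature and error type below are assumptions and may differ from the helper in the test file.

```rust
// Attestation decision matrix:
//   confidential + no TEE  -> security error (never start)
//   confidential + TEE     -> attest, then start
//   non-confidential + any -> start without attestation
fn should_perform_attestation(confidential: bool, is_tee: bool) -> Result<bool, String> {
    match (confidential, is_tee) {
        (true, false) => Err("confidential workload requires a TEE".to_string()),
        (true, true) => Ok(true),
        (false, _) => Ok(false),
    }
}

fn main() {
    assert!(should_perform_attestation(true, false).is_err());
    assert_eq!(should_perform_attestation(true, true), Ok(true));
    assert_eq!(should_perform_attestation(false, false), Ok(false));
    assert_eq!(should_perform_attestation(false, true), Ok(false));
}
```

Keeping the policy in a pure function like this is what makes the decision matrix unit-testable without spinning up a runtime or attestation service.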

proplet/src/attestation/service.rs (1)

127-165: LGTM!

Tests appropriately cover the basic scenarios for disabled attestation. Testing enabled attestation would require mocking the KBS, which is outside the scope of unit tests.

proplet/src/service.rs (3)

62-83: LGTM!

Good initialization flow with appropriate warning when attestation is enabled without TEE detection. Caching the TEE type at startup is efficient.


318-320: Resource path uses task ID for key lookup.

The key is fetched from default/keys/{req.id}, meaning each task requires a pre-provisioned key in the KBS with a path matching its task ID. Verify this aligns with your key provisioning strategy—an alternative would be using a shared key or deriving the path from the workload type/name.


416-442: LGTM!

The decryption step has proper error handling, result publishing on failure, and informative logging.

proplet/src/crypto.rs (4)

8-18: LGTM!

The imports are appropriate for AES-256-GCM operations, and the constants are correctly sized (32 bytes for AES-256 key, 12 bytes/96 bits for GCM nonce as per NIST recommendations).


66-92: LGTM!

The decryption implementation is correct:

  • Validates key size upfront
  • Uses the aes-gcm library's authenticated decryption which verifies the tag
  • Provides contextual error messages without leaking sensitive information

105-134: LGTM!

The encryption implementation is secure:

  • Uses OsRng (cryptographically secure RNG) for nonce generation, preventing nonce reuse
  • Key validation matches the decryption function
  • Properly serializes the output for later decryption

The #[allow(dead_code)] annotation is appropriate given its documented purpose for development/testing.


140-205: Good test coverage.

The tests cover the essential scenarios:

  • Roundtrip encryption/decryption
  • Authentication failure with wrong key
  • Input validation for key size and data length
  • Serialization/deserialization of EncryptedWorkload

Comment on lines 305 to 309
                error!("{}", error_msg);
                self.running_tasks.lock().await.remove(&req.id);
                self.publish_result(&req.id, Vec::new(), Some(error_msg.clone()))
                    .await?;
                return Err(anyhow::anyhow!("{}", error_msg));

⚠️ Potential issue | 🟡 Minor

Premature removal from running_tasks.

The task is removed from running_tasks (line 306), but it hasn't been inserted yet—insertion happens at line 457. The same issue appears at lines 338 and 351. While this is a no-op (removing a non-existent key), it's misleading code.

Consider removing the premature cleanup calls
                 error!("{}", error_msg);
-                self.running_tasks.lock().await.remove(&req.id);
                 self.publish_result(&req.id, Vec::new(), Some(error_msg.clone()))
                     .await?;
                 return Err(anyhow::anyhow!("{}", error_msg));

Apply similar changes to lines 338 and 351.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-                error!("{}", error_msg);
-                self.running_tasks.lock().await.remove(&req.id);
-                self.publish_result(&req.id, Vec::new(), Some(error_msg.clone()))
-                    .await?;
-                return Err(anyhow::anyhow!("{}", error_msg));
+                error!("{}", error_msg);
+                self.publish_result(&req.id, Vec::new(), Some(error_msg.clone()))
+                    .await?;
+                return Err(anyhow::anyhow!("{}", error_msg));
🤖 Prompt for AI Agents
In @proplet/src/service.rs around lines 305 - 309, The early error paths are
calling self.running_tasks.lock().await.remove(&req.id) before the task is ever
inserted, which is misleading; remove those premature removals and leave the
publish_result(...) and Err(...) handling intact. Specifically, delete the
remove(&req.id) call in the error branch that logs error_msg (the block that
calls publish_result(&req.id, Vec::new(), Some(error_msg.clone())).await? and
returns Err(...)), and apply the same removal of the premature cleanup to the
two other analogous error branches that reference self.running_tasks and req.id
(the occurrences near the publish_result calls).

Comment on lines 137 to 143
    // Check cpuinfo for SGX features
    if let Ok(cpuinfo) = fs::read_to_string("/proc/cpuinfo") {
        if cpuinfo.contains("sgx") {
            debug!("Found sgx in /proc/cpuinfo");
            return true;
        }
    }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd detector.rs

Repository: rodneyosodo/propeller

Length of output: 92


🏁 Script executed:

cat -n proplet/src/tee/detector.rs | head -200

Repository: rodneyosodo/propeller

Length of output: 7257


🌐 Web query:

How to detect if code is running inside an Intel SGX enclave vs just having SGX capability

💡 Result:

Short answer — use different checks in untrusted vs trusted code:

  • From the untrusted host/process (detect capability, not running inside an enclave):

    • Use the SGX platform APIs (Intel SGX SDK) such as sgx_is_capable() / sgx_cap_get_status() or the OS driver/device node (e.g. /dev/isgx or /dev/sgx) to determine whether the CPU/driver/BIOS have SGX enabled. [1]
    • You can also query CPUID: CPUID.(EAX=07H,ECX=0):EBX.SGX=1 indicates the processor supports SGX; CPUID.(EAX=12H,ECX=…) enumerates SGX resources (EPC/SGX1/SGX2). (CPUID is the canonical platform-capability check.) [2]
  • From inside code running in an enclave (detect “I am inside an enclave”):

    • Enclave-only instructions / services exist (EENTER/EEXIT/EGETKEY/EREPORT etc.). Calling enclave-only functionality (or using SDK runtime APIs that only succeed inside an enclave) is the reliable way to prove you’re inside an enclave — these will fail/are unavailable to untrusted code. The SDM lists those enclave instructions. [2]
    • If you use a runtime like Open Enclave there are helpers (e.g. oe_is_within_enclave / memory-range checks) and attestation/report APIs to prove and attest that code is running inside an enclave. Use enclave attestation (reports/evidence) if you need an external party to verify the enclave identity. [3]

Notes / security tips:

  • CPUID cannot be trusted from inside an enclave without care: the SGX SDK’s sgx_cpuid() performs an OCALL and can be spoofed; Intel provides a trusted CPU-feature-detection library for enclaves if you need feature info without leaving the enclave. For decisions that affect security, prefer enclave instructions/attestation rather than host-side capability checks. [1][3]

Sources:

  • Intel SGX support / SDK feature-detection functions (sgx_is_capable, sgx_cap_get_status). [1]
  • Intel Architectures SDM: CPUID bits for SGX and enclave instructions (EGETKEY/EREPORT/EENTER etc.). [2]
  • Open Enclave / Intel trusted CPU feature-detection library and enclave helpers (oe_is_within_enclave, sgx-cpu-feature-detection). [3]

Cpuinfo-based TEE detection is unreliable for enclave vs. host distinction.

Checking for "sgx" in /proc/cpuinfo indicates the CPU supports SGX, not that code is running inside an SGX enclave. This causes false positives on hosts with SGX-capable CPUs but no active enclave, potentially leading the application to attempt enclave-specific operations.

The device file checks (/dev/sgx_enclave, /dev/sgx/enclave) in lines 127–135 are more reliable and are checked first. However, the cpuinfo fallback (lines 137–143) undermines this by returning true for any SGX-capable host, even if not inside an enclave.

Same issue applies to check_sev() at line 185: checking for "sev" or "sme" in cpuinfo indicates CPU capability, not guest VM status.

Consider removing the cpuinfo checks or adding additional validation (e.g., memory encryption status for SEV via /sys/kernel/mm/mem_encrypt/active).
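
A hedged sketch of the suggested stronger check. The parsing helper is kept pure so it can be exercised without SEV hardware; whether `/sys/kernel/mm/mem_encrypt/active` is exposed varies by kernel version, so treat the path as an assumption:

```rust
use std::fs;

/// Interpret the contents of /sys/kernel/mm/mem_encrypt/active.
/// The file reports "Y" (or "1" on some kernels) when memory
/// encryption is actually active in the running guest.
fn mem_encrypt_active(contents: &str) -> bool {
    matches!(contents.trim(), "Y" | "y" | "1")
}

/// Returns true only when the kernel reports active memory encryption,
/// not merely an SEV-capable CPU (which /proc/cpuinfo would report).
fn sev_guest_active() -> bool {
    fs::read_to_string("/sys/kernel/mm/mem_encrypt/active")
        .map(|s| mem_encrypt_active(&s))
        .unwrap_or(false)
}

fn main() {
    println!("SEV guest active: {}", sev_guest_active());
}
```

This distinguishes "running inside a SEV-protected guest" from "running on a SEV-capable host", which is exactly the false positive the cpuinfo check cannot avoid.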

Comment on lines 183 to 189
    // Check for SEV in cpuinfo
    if let Ok(cpuinfo) = fs::read_to_string("/proc/cpuinfo") {
        if cpuinfo.contains("sev") || cpuinfo.contains("sme") {
            debug!("Found sev/sme in /proc/cpuinfo");
            return true;
        }
    }

⚠️ Potential issue | 🟠 Major

Same false positive risk for SEV detection.

Checking for "sev" or "sme" in /proc/cpuinfo indicates CPU capability, not that the code is running inside a SEV-protected VM. The /sys/kernel/mm/mem_encrypt/active check (lines 192-199) is a better indicator for guest VMs.

Comment on lines 250 to 253
for (confidential, description) in test_cases {
println!("Testing: {} (confidential={})", description, confidential);
assert_eq!(confidential, confidential); // Tautology, but demonstrates the test
}

⚠️ Potential issue | 🟡 Minor

Tautological assertion provides no test value.

assert_eq!(confidential, confidential) will always pass regardless of the value. This test doesn't validate any actual behavior.

Consider testing actual parsing logic
     for (confidential, description) in test_cases {
         println!("Testing: {} (confidential={})", description, confidential);
-        assert_eq!(confidential, confidential); // Tautology, but demonstrates the test
+        // TODO: Test actual StartRequest deserialization
+        // e.g., let req: StartRequest = serde_json::from_str(...)?;
+        // assert_eq!(req.confidential, confidential);
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    for (confidential, description) in test_cases {
-        println!("Testing: {} (confidential={})", description, confidential);
-        assert_eq!(confidential, confidential); // Tautology, but demonstrates the test
-    }
+    for (confidential, description) in test_cases {
+        println!("Testing: {} (confidential={})", description, confidential);
+        // TODO: Test actual StartRequest deserialization
+        // e.g., let req: StartRequest = serde_json::from_str(...)?;
+        // assert_eq!(req.confidential, confidential);
+    }
🤖 Prompt for AI Agents
In @proplet/tests/integration_test.rs around lines 250 - 253, The loop currently
contains a tautological assertion assert_eq!(confidential, confidential) which
never validates behavior; replace it with an assertion that compares the
expected `confidential` value to the actual result of the parsing/logic under
test (for example, call the parser or function that consumes `description` and
assert_eq!(parsed_confidential, confidential)). Locate the loop using the
variables `confidential` and `description` in the test and change the assertion
to use the actual output (e.g., `parsed_confidential`) and include a small
context message to aid debugging if the assertion fails.

@llamapreview llamapreview bot left a comment

AI Code Review by LlamaPReview

🎯 TL;DR & Recommendation

Recommendation: Request Changes
This PR introduces confidential computing support but has a critical schema mismatch between Rust and Go layers that could lead to data loss and inconsistent security policies, alongside several security and maintainability improvements needed.

📄 Documentation Diagram

This diagram documents the new confidential computing workflow for encrypted workload execution.

sequenceDiagram
    participant GM as Go Manager
    participant PS as PropletService
    participant TD as TEE Detector
    participant AS as AttestationService
    participant KBS as Key Broker Service
    participant CM as Crypto Module
    participant RT as Runtime
    
    GM->>PS: StartRequest (confidential: true)
    PS->>TD: Detect TEE
    TD-->>PS: TeeType
    alt No TEE detected
        PS-->>GM: Publish Error (Security violation)
    else TEE detected
        PS->>AS: Get Resource Key
        AS->>KBS: Attest & Retrieve Key
        KBS-->>AS: Encryption Key
        AS-->>PS: Key
        PS->>CM: Decrypt Workload
        CM-->>PS: Decrypted Binary
        PS->>RT: Execute Task
        RT-->>PS: Results
        PS-->>GM: Publish Results
    end

🌟 Strengths

  • Adds comprehensive confidential computing features with TEE detection and attestation integration.
  • Implements proper security enforcement to prevent confidential workloads from running without TEE.

📊 Findings Summary

Priority File Category Impact Summary (≤12 words) Anchors
P1 proplet/src/types.rs Architecture Schema mismatch with Go struct risks data loss and policy inconsistencies. path:task/task.go
P2 proplet/src/attestation/config.rs Security Insecure default KBS URL risks key exposure in production deployments.
P2 proplet/src/service.rs Architecture Enforces TEE requirement, preventing confidential workload execution without secure environment.
P2 proplet/Cargo.toml Architecture Path-based dependencies limit portability and build robustness.
P2 proplet/src/attestation/service.rs Maintainability Placeholder decrypt method misleads and creates a gap in core functionality. method:decrypt_workload
P2 proplet/src/service.rs Maintainability Base64 decoding for encrypted workloads needs clear documentation to avoid confusion.

🔍 Notable Themes

  • Cross-Component Synchronization: The schema mismatch highlights a need for tighter coordination between Rust and Go layers to maintain API consistency and prevent silent failures.
  • Security Validation: Multiple findings emphasize the importance of secure defaults and clear enforcement mechanisms in confidential computing contexts.

📈 Risk Diagram

This diagram illustrates the schema mismatch risk leading to data loss in cross-component communication.

sequenceDiagram
    participant GM as Go Manager
    participant PS as PropletService
    
    GM->>PS: Task with {Confidential, RequireAttestation, AttestationEvidenceType}
    note over GM,PS: R1(P1): Schema mismatch; PS ignores RequireAttestation and AttestationEvidenceType
    PS-->>PS: Deserialize to StartRequest (confidential only)
    PS->>PS: Enforce security based on confidential field
    PS-->>GM: Response (potential policy inconsistency)
⚠️ **Unanchored Suggestions (Manual Review Recommended)**

The following suggestions could not be precisely anchored to a specific line in the diff. This can happen if the code is outside the changed lines, has been significantly refactored, or if the suggestion is a general observation. Please review them carefully in the context of the full file.


📁 File: proplet/src/types.rs

The Rust StartRequest type has been extended with a confidential: bool field. However, the related_context shows the corresponding Go Task struct (task/task.go) has been extended with three new fields: Confidential, RequireAttestation, and AttestationEvidenceType. This creates a schema mismatch between the two services/layers. The Rust service will ignore RequireAttestation and AttestationEvidenceType when deserializing a StartRequest from an MQTT message. This can lead to:

  1. Data Loss: The Go side may set RequireAttestation or AttestationEvidenceType expecting them to be honored, but they will be silently dropped.
  2. Inconsistent State: The security policy (defined by the Go manager) and the enforcement logic (in Rust proplet) may diverge, potentially leading to workloads running without intended attestation or failing unexpectedly.

The change to StartRequest is a public API modification; its schema must be synchronized with all producers (like the Go manager) to avoid runtime serialization/deserialization errors or logic bugs.

Suggestion:

Align the `StartRequest` schema with the Go `Task` struct. Add the missing fields with appropriate defaults.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StartRequest {
    // ... existing fields
    #[serde(default)]
    pub confidential: bool,
    #[serde(default)]
    pub require_attestation: bool,
    #[serde(default)]
    pub attestation_evidence_type: Option<String>, // Use global config if None
}

Related Code:

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StartRequest {
    ...
    #[serde(default)]
    pub confidential: bool,
}

📁 File: proplet/src/attestation/service.rs

The decrypt_workload method is marked as a "placeholder" and does not perform decryption; it returns the encrypted input unchanged. This is misleading and creates a significant gap in the confidential computing feature. The crypto module already provides a fully implemented decrypt_workload function, but it is not integrated here. This means the core promise of the PR ("decryption of encrypted workloads before execution") is not fulfilled in this service method. While the main execution flow in service.rs correctly uses crypto::decrypt_workload, this dead-code method should either be removed, implemented, or its placeholder status made very clear (e.g., unimplemented!()).

Suggestion:

Implement the method using the existing `crypto` module, or remove it if not needed.
pub async fn decrypt_workload(
    &self,
    encrypted_binary: &[u8],
    resource_path: &str,
    workload_id: &str,
) -> Result<Vec<u8>> {
    let key = self.get_resource(resource_path, workload_id).await?;
    crate::crypto::decrypt_workload(encrypted_binary, &key)
        .context("Failed to decrypt workload with retrieved key")
}

Related Code:

pub async fn get_resource(&self, resource_path: &str, _workload_id: &str) -> Result<Vec<u8>> {
    if !self.config.enabled {
        return Err(anyhow::anyhow!("Attestation is disabled"));
    }
    // ... builds client and calls get_resource
}

#[allow(dead_code)]
pub async fn decrypt_workload(
    &self,
    encrypted_binary: &[u8],
    resource_path: &str,
    workload_id: &str,
) -> Result<Vec<u8>> {
    // Get the encryption key from KBS
    let key = self.get_resource(resource_path, workload_id).await?;
    // TODO: Implement actual decryption using the key
    warn!("Workload decryption not yet implemented, returning encrypted data");
    Ok(encrypted_binary.to_vec())
}


💡 Have feedback? We'd love to hear it in our GitHub Discussions.
✨ This review was generated by LlamaPReview Advanced, which is free for all open-source projects. Learn more.

Comment on lines 16 to 25
impl Default for AttestationConfig {
    fn default() -> Self {
        Self {
            enabled: false,
            kbs_url: "http://localhost:8080".to_string(),
            evidence_type: "sample".to_string(),
            timeout_secs: 30,
        }
    }
}

P2 | Confidence: High

The default KBS (Key Broker Service) URL uses an insecure HTTP scheme (http://localhost:8080). While this may be suitable for local development or testing, deploying this configuration in any environment with real attestation and key retrieval would expose sensitive cryptographic keys to network eavesdropping and man-in-the-middle attacks. The KBS is a security-critical component; its communication channel must be protected with TLS in production.

Suggested change
-impl Default for AttestationConfig {
-    fn default() -> Self {
-        Self {
-            enabled: false,
-            kbs_url: "http://localhost:8080".to_string(),
-            evidence_type: "sample".to_string(),
-            timeout_secs: 30,
-        }
-    }
-}
+impl Default for AttestationConfig {
+    fn default() -> Self {
+        Self {
+            enabled: false,
+            // Default to HTTPS for production security, or a clear placeholder.
+            kbs_url: "https://localhost:8443".to_string(),
+            evidence_type: "sample".to_string(),
+            timeout_secs: 30,
+        }
+    }
+}

        // Scenario 4: Non-confidential workload + TEE = Normal execution (TEE unused)

        let encryption_key = if req.confidential {
            // CONFIDENTIAL WORKLOAD - Requires attestation

P2 | Confidence: High

(Auto-downgraded from P0/P1: missing non-empty code snippet evidence for a non-speculative finding)
The security enforcement logic for confidential workloads is correctly placed in the critical path (handle_start_command), ensuring workloads marked as confidential cannot execute without a TEE. This is a direct, deterministic, and observable change to the system's behavior. Previously, all workloads would execute if the binary was available; now, a new class of workloads will be rejected with a clear error, which constitutes a breaking change for any existing caller attempting to run a confidential: true workload on a non-TEE node. The error is published back via MQTT, preventing silent failures.

Code Suggestion:

N/A (Implementation is correct). Ensure the error message is documented as part of the public API contract.

            None // No encryption key needed
        };

        // Step 2: Download Wasm binary from OCI or use provided file

P2 | Confidence: High

The confidential workload decryption step (Step 3) occurs after the Wasm binary has been base64-decoded or downloaded from OCI. The decrypt_workload function expects the raw, encrypted bytes (nonce + ciphertext). However, the req.file field is documented as containing a base64-encoded string. The current flow is: base64_decode(req.file) -> (encrypted bytes) -> decrypt. This is correct. However, there is a risk of confusion if the req.file is expected to contain base64-encoded plaintext in non-confidential cases but base64-encoded ciphertext in confidential cases. The code does not differentiate; it always treats req.file as base64-encoded data, which is consistent. No change is required, but this subtlety should be clearly documented in the StartRequest struct or the API specification to avoid caller errors.

Code Suggestion:

Add a doc comment to the `file` field in `StartRequest`:
/// Base64-encoded WebAssembly binary.
/// For confidential workloads, this must be the base64-encoded encrypted binary (nonce + ciphertext).
pub file: String,

Comment on lines 32 to 35
# Guest components for attestation
kbs_protocol = { path = "../guest-components/attestation-agent/kbs_protocol", default-features = false, features = ["rust-crypto", "background_check"] }
attestation-agent = { path = "../guest-components/attestation-agent/attestation-agent", default-features = false, features = ["rust-crypto", "kbs"] }
attester = { path = "../guest-components/attestation-agent/attester", default-features = false }

P2 | Confidence: Medium

The PR introduces dependencies on local, path-based crates (../guest-components/...). This creates a strong coupling to the specific directory structure of the monorepo and assumes these components are always built and available at those relative paths. This impacts:

  1. Portability: The crate cannot be built outside the original repository context (e.g., if published to crates.io or used as a git dependency).
  2. Build Robustness: Changes to the directory layout or independent versioning of the attestation-agent crate will break the build.

This is a common pattern for workspace development but should be evaluated for long-term maintenance. If these are internal, unstable components, the path dependency is acceptable. However, consider documenting this build requirement clearly.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🤖 Fix all issues with AI agents
In `@manager/service.go`:
- Around line 168-174: The update logic unconditionally assigns dbT.Encrypted =
t.Encrypted which will overwrite the stored flag on partial updates; change the
update DTO or Task parameter so Encrypted is a pointer (*bool) and only assign
when non-nil (e.g., if t.Encrypted != nil { dbT.Encrypted = *t.Encrypted }), or
alternatively document/ensure callers always supply the current Encrypted value;
update the Task/update handler and any callers to use the pointer semantics so
partial updates don’t inadvertently clear the flag.

In `@proplet/src/cdh/core.rs`:
- Line 229: The assertion assert!(hub.is_ok() || hub.is_err()) is tautological
and should be replaced with a concrete expectation: depending on the test setup,
assert that hub is Ok (e.g., assert!(hub.is_ok()) and then inspect the Ok value
with hub.unwrap() to assert specific fields), or assert that hub is Err and
validate the error (e.g., let err = hub.unwrap_err();
assert_eq!(err.to_string(), "<expected message>") or match on the error
variant). Locate the test where the variable hub is produced and replace the
tautological check with an assertion that verifies the expected success/failure
and any relevant properties of the result.

In `@proplet/src/cdh/image/mod.rs`:
- Around line 61-63: The is_encrypted_image function currently uses URL
heuristics which are incorrect; change is_encrypted_image(&self, image_url:
&str) to inspect the image manifest/layer descriptors instead: look for any
layer Descriptor.media_type (or mediaType) having the "+encrypted" suffix (e.g.,
"application/...+encrypted") and/or manifest/index annotations that indicate
encryption (using the existing AnnotationPacket structure), and return true only
if such encrypted mediaType or annotation is found; integrate this check where
manifests are available (use the same code paths that call decrypt_layer())
rather than relying on image URL patterns.

In `@proplet/src/cdh/resource/mod.rs`:
- Around line 27-35: The constructor ResourceHandler::new can panic because it
unconditionally calls LruCache::new with cache_size.try_into().unwrap(); change
this to validate cache_size is non-zero and handle zero safely: either return an
Err (use NonZeroUsize::new(cache_size).ok_or_else(|| anyhow!("cache_size must be
> 0"))?) or convert zero to a safe default (create a NonZeroUsize via
NonZeroUsize::new(cache_size).unwrap_or(NonZeroUsize::new(1).unwrap()) before
calling LruCache::new); update the creation of the cache (the cache:
Arc::new(Mutex::new(LruCache::new(...)))) accordingly and adjust error handling
to propagate a clear message if validation fails.

In `@proplet/src/cdh/secret/mod.rs`:
- Around line 126-146: The unseal implementation reuses the same IV for both key
and data decryption which is insecure; update the EnvelopeSecret type to carry
separate IVs (e.g., iv_wrapping / iv_data or iv_key / iv_data), decode both
base64 IV fields (instead of decoding only the single iv) and pass the wrapping
IV to decrypt_key(&kek, &encrypted_key, &iv_wrapping) and the data IV to
decrypt_data(&dek, &encrypted_data, &iv_data) inside the unseal method,
adjusting field names and calls consistently (EnvelopeSecret, unseal,
encrypted_key, encrypted_data, decrypt_key, decrypt_data, kek, dek).

In `@proplet/src/cdh/tee/mod.rs`:
- Around line 93-120: In detect_tee(), the Azure-specific checks are placed
after the generic TDX/SNP checks so is_azure_tdx_vtpm() and is_azure_snp_vtpm()
never run; reorder the conditional checks so that is_azure_tdx_vtpm() and
is_azure_snp_vtpm() are evaluated before is_tdx() and is_snp(), respectively
(keep the rest of the function and return values the same).
🧹 Nitpick comments (15)
proplet/src/service.rs (2)

377-432: Consider simplifying the pattern match and reducing code duplication.

  1. Line 381 uses .unwrap() after an is_some() check. While safe, this could be cleaner with if let.
  2. Lines 417-419 log internal state (cdh_hub.is_some(), req.encrypted) which is useful for debugging but may be verbose in production. Consider using debug! level instead.
  3. The registry fallback logic (lines 417-431) duplicates the non-CDH path (lines 434-449).
♻️ Suggested simplification using if-let
-            #[cfg(feature = "cdh")]
-            if self.cdh_hub.is_some() && req.encrypted {
-                info!("Processing encrypted image with CDH: {}", req.image_url);
-
-                let cdh_hub = self.cdh_hub.as_ref().unwrap().clone();
+            #[cfg(feature = "cdh")]
+            if let Some(cdh_hub) = self.cdh_hub.as_ref().filter(|_| req.encrypted) {
+                info!("Processing encrypted image with CDH: {}", req.image_url);
+                let cdh_hub = cdh_hub.clone();

434-449: Code duplication between CDH and non-CDH paths.

The registry binary fetching logic is duplicated between the #[cfg(feature = "cdh")] else-branch (lines 416-432) and the #[cfg(not(feature = "cdh"))] block (lines 434-449). Consider extracting this into a helper method to reduce duplication and maintenance burden.

♻️ Suggested helper extraction
async fn fetch_binary_from_registry(&self, task_id: &str, image_url: &str) -> Result<Vec<u8>> {
    info!("Requesting binary from registry: {}", image_url);
    self.request_binary_from_registry(image_url).await?;
    
    match self.wait_for_binary(image_url).await {
        Ok(binary) => Ok(binary),
        Err(e) => {
            error!("Failed to get binary for task {}: {}", task_id, e);
            self.running_tasks.lock().await.remove(task_id);
            self.publish_result(task_id, Vec::new(), Some(e.to_string())).await?;
            Err(e)
        }
    }
}
proplet/src/cdh/resource/mod.rs (2)

106-113: Fire-and-forget cache clear may cause unexpected behavior.

clear_cache spawns a background task but provides no way for the caller to await completion or detect failure. If immediate cache invalidation is needed, callers may be surprised.

♻️ Consider making this async for controllable behavior
-    fn clear_cache(&self) {
-        let cache = self.cache.clone();
-        tokio::spawn(async move {
-            let mut cache = cache.lock().await;
-            cache.clear();
-            info!("Resource cache cleared");
-        });
+    pub async fn clear_cache(&self) {
+        let mut cache = self.cache.lock().await;
+        cache.clear();
+        info!("Resource cache cleared");
     }

115-131: Code duplication with attest_and_get_resource.

The KBS client creation and resource fetching logic is duplicated between fetch_from_kbs and attest_and_get_resource. Consider extracting the common logic into a shared helper.

♻️ Suggested helper extraction
async fn create_kbs_client(&self) -> Result<impl KbsClientCapabilities> {
    let evidence_provider = Box::new(NativeEvidenceProvider::new()
        .map_err(|e| Error::KbsClient(format!("Failed to create evidence provider: {}", e)))?);

    KbsClientBuilder::with_evidence_provider(evidence_provider, &self.kbs_config.url)
        .build()
        .map_err(|e| Error::KbsClient(format!("Failed to create KBS client: {}", e)))
}

async fn fetch_from_kbs(&self, uri: &str) -> Result<Vec<u8>> {
    let mut client = self.create_kbs_client().await?;
    let resource_uri: resource_uri::ResourceUri = uri.try_into()
        .map_err(|e| Error::KbsClient(format!("Invalid resource URI: {}", e)))?;
    client.get_resource(resource_uri)
        .await
        .map_err(|e| Error::KbsClient(format!("Failed to get resource from KBS: {}", e)))
}
proplet/src/cdh/tee/mod.rs (3)

54-79: Consider adding se and Azure variants to the override options.

The override mechanism handles tdx, snp, sev, none, and mock, but omits se, azuretdxvtpm, and azuresnpvtpm. For consistency and comprehensive testing coverage, consider supporting all TEE types.

♻️ Suggested addition
                 "mock" => {
                     warn!("TEE override: Using Mock TEE (FOR TESTING ONLY)");
                     return TeeType::Mock;
                 }
+                "se" => {
+                    warn!("TEE override: Using IBM Secure Execution (FOR TESTING ONLY)");
+                    return TeeType::Se;
+                }
+                "azuretdxvtpm" => {
+                    warn!("TEE override: Using Azure TDX with vTPM (FOR TESTING ONLY)");
+                    return TeeType::AzureTdxVtpm;
+                }
+                "azuresnpvtpm" => {
+                    warn!("TEE override: Using Azure SNP with vTPM (FOR TESTING ONLY)");
+                    return TeeType::AzureSnpVtpm;
+                }
                 _ => {
                     warn!("Unknown TEE override value: {}, using auto-detection", override_tee);
                 }

128-137: SNP detection may overlap with SEV detection.

Both is_snp() and is_sev() check for the sev CPU flag—is_snp() checks for sev_snp which contains sev. Since cpuinfo.to_lowercase().contains(flag) is used, a system with sev_snp flag will also match sev, potentially causing ambiguity if detection order changes. Consider using word-boundary matching or exact flag matching for robustness.


156-170: Substring matching on /proc/cpuinfo and /proc/cmdline may produce false positives.

Using contains(flag) can match unintended substrings. For example, has_cpu_flag("sev") would match "sev_snp" or any string containing "sev". Consider using word-boundary regex or splitting on whitespace/delimiters for more precise matching.
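
A sketch of the whitespace-delimited alternative, assuming the usual /proc/cpuinfo layout (`flags\t\t: fpu vme ... sev sev_es sev_snp`); the helper name is illustrative:

```rust
/// Match a CPU flag exactly as a whitespace-delimited token on the
/// flags/Features line, so "sev" does not also match "sev_snp" or "sev_es".
fn has_cpu_flag_exact(cpuinfo: &str, flag: &str) -> bool {
    cpuinfo
        .lines()
        // x86 uses "flags", arm64 uses "Features".
        .filter(|l| l.starts_with("flags") || l.starts_with("Features"))
        .flat_map(|l| l.split(':').nth(1))
        .flat_map(|flags| flags.split_whitespace())
        .any(|f| f == flag)
}

fn main() {
    let cpuinfo = "processor\t: 0\nflags\t\t: fpu sev_snp sme\n";
    println!("sev: {}", has_cpu_flag_exact(cpuinfo, "sev"));
    println!("sev_snp: {}", has_cpu_flag_exact(cpuinfo, "sev_snp"));
}
```

Token-exact matching makes the detection order of `is_snp()` and `is_sev()` irrelevant, since each checks its own distinct flag.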

proplet/src/cdh/core.rs (1)

75-82: Async wrappers around synchronous validation are unnecessary.

validate_encrypted_workload and validate_non_encrypted_workload are marked async but call synchronous WorkloadValidator::validate(). If these methods are intended to remain async for future expansion, this is acceptable; otherwise, consider making them synchronous.

proplet/src/cdh/attestation/mod.rs (2)

45-76: Asymmetric error handling is intentional but consider documenting the design choice.

get_evidence returns an error when AA is unavailable, while extend_runtime_measurement silently succeeds. This appears intentional (attestation evidence is critical, measurement extension is best-effort), but a doc comment would clarify this contract for future maintainers.


167-177: Unused field _grpc_client and dead code.

The _grpc_client field is Option<()> which provides no value. If this is a placeholder for future gRPC integration, consider using a proper type or removing it until needed.

proplet/src/cdh/config.rs (1)

29-88: Environment variable reading is comprehensive but consider documenting the precedence.

The from_proplet_config method reads from multiple environment variables with various fallback patterns. Consider adding a doc comment listing all supported environment variables and their precedence for easier configuration management.

proplet/src/cdh/image/mod.rs (4)

30-39: Consider caching the ImageClient if it's stateless between operations.

Creating a new ImageClient on every call to get_client() may add unnecessary overhead. If the client can be reused, consider storing it as an Option<ImageClient> field and lazily initializing it.
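The lazy-initialization pattern can be sketched with std's OnceLock; here `Client` is a stand-in for ImageClient (which has an async constructor in the real code, where tokio::sync::OnceCell::get_or_init would be the equivalent):

```rust
use std::sync::OnceLock;

// Stand-in for the real ImageClient type.
struct Client {
    id: u32,
}

struct ImageHandler {
    client: OnceLock<Client>,
}

impl ImageHandler {
    fn new() -> Self {
        Self { client: OnceLock::new() }
    }

    fn get_client(&self) -> &Client {
        // Construction runs at most once; later calls return the cached value.
        self.client.get_or_init(|| Client { id: 42 })
    }
}

fn main() {
    let handler = ImageHandler::new();
    let first = handler.get_client() as *const Client;
    let second = handler.get_client() as *const Client;
    // Both calls hand back the same cached instance.
    assert_eq!(first, second);
    println!("client cached");
}
```

This only applies if ImageClient holds no per-operation state; otherwise per-call construction is the safer default.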


77-105: Code duplication with AnnotationPacket::decrypt_key.

The AES-256-GCM decryption logic here (lines 83-102) is nearly identical to decrypt_key (lines 153-174). Consider extracting a shared helper function to reduce duplication.

♻️ Suggested helper extraction
fn aes_gcm_decrypt(key: &[u8], iv: &[u8], ciphertext: &[u8]) -> Result<Vec<u8>> {
    use aes_gcm::{Aes256Gcm, KeyInit};
    use aes_gcm::aead::{Aead, Payload};

    let cipher = Aes256Gcm::new(
        key.try_into()
            .map_err(|_| Error::Crypto("Key must be 32 bytes".to_string()))?
    );

    let payload = Payload {
        msg: ciphertext,
        aad: b"",
    };

    cipher
        .decrypt(iv.try_into().map_err(|_| {
            Error::Crypto("IV must be 12 bytes".to_string())
        })?, payload)
        .map_err(|e| Error::Crypto(format!("AES-GCM decryption failed: {}", e)))
}

119-132: Consider validating WrapType::Unknown before attempting decryption.

The Unknown variant silently deserializes but will fail at decryption time since only AES-256-GCM is implemented. Adding an early validation in unwrap_key would provide a clearer error message.

♻️ Suggested validation
 pub async fn unwrap_key(&self, resource_handler: &ResourceHandler) -> Result<Vec<u8>> {
+    if self.wrap_type == WrapType::Unknown {
+        return Err(Error::KeyUnwrap("Unsupported wrap type".to_string()));
+    }
+
     debug!("Unwrapping key from KBS: {}", self.kid);

200-241: Tests cover basic serialization but could use more coverage.

Consider adding tests for:

  • is_encrypted_image with various URL formats (including digest-pinned URLs)
  • WrapType::Unknown deserialization and handling
  • Invalid base64 in annotation packets
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4051f3b and c79edaf.

⛔ Files ignored due to path filters (1)
  • proplet/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
  • manager/service.go
  • proplet/Cargo.toml
  • proplet/README.md
  • proplet/src/cdh/attestation/mod.rs
  • proplet/src/cdh/config.rs
  • proplet/src/cdh/core.rs
  • proplet/src/cdh/error.rs
  • proplet/src/cdh/image/mod.rs
  • proplet/src/cdh/mod.rs
  • proplet/src/cdh/resource/mod.rs
  • proplet/src/cdh/secret/mod.rs
  • proplet/src/cdh/tee/mod.rs
  • proplet/src/config.rs
  • proplet/src/main.rs
  • proplet/src/service.rs
  • proplet/src/types.rs
  • task/task.go
✅ Files skipped from review due to trivial changes (1)
  • proplet/README.md
🚧 Files skipped from review as they are similar to previous changes (3)
  • proplet/src/config.rs
  • proplet/src/types.rs
  • proplet/Cargo.toml
🧰 Additional context used
🧬 Code graph analysis (10)
proplet/src/main.rs (1)
proplet/src/service.rs (2)
  • new (26-32)
  • new (65-116)
proplet/src/cdh/image/mod.rs (3)
proplet/src/cdh/core.rs (3)
  • new (31-73)
  • new (139-146)
  • pull_encrypted_image (115-128)
proplet/src/cdh/resource/mod.rs (1)
  • new (27-35)
proplet/src/cdh/mod.rs (1)
  • pull_encrypted_image (42-42)
proplet/src/cdh/resource/mod.rs (7)
proplet/src/cdh/attestation/mod.rs (2)
  • new (17-43)
  • new (136-142)
proplet/src/cdh/core.rs (3)
  • new (31-73)
  • new (139-146)
  • get_resource (111-113)
proplet/src/cdh/image/mod.rs (1)
  • new (18-28)
proplet/src/cdh/secret/mod.rs (1)
  • new (15-17)
proplet/src/cdh/tee/mod.rs (1)
  • new (190-192)
proplet/src/types.rs (1)
  • new (23-31)
proplet/src/cdh/mod.rs (1)
  • get_resource (39-39)
proplet/src/cdh/core.rs (7)
proplet/src/cdh/attestation/mod.rs (4)
  • new (17-43)
  • new (136-142)
  • for_image_pull (144-153)
  • for_wasm_execution (155-164)
proplet/src/cdh/image/mod.rs (4)
  • new (18-28)
  • unwrap_key (136-150)
  • pull_encrypted_image (41-59)
  • default (129-131)
proplet/src/cdh/resource/mod.rs (4)
  • new (27-35)
  • get_resource (37-52)
  • get_resource (136-136)
  • get_resource (142-144)
proplet/src/cdh/secret/mod.rs (3)
  • new (15-17)
  • unseal_secret (19-28)
  • default (80-82)
proplet/src/cdh/tee/mod.rs (3)
  • new (190-192)
  • detect (51-85)
  • default (229-231)
proplet/src/cdh/mod.rs (4)
  • unseal_secret (33-33)
  • unwrap_key (36-36)
  • get_resource (39-39)
  • pull_encrypted_image (42-42)
proplet/src/cdh/config.rs (3)
  • default (160-165)
  • default (186-191)
  • from_proplet_config (30-88)
proplet/src/cdh/mod.rs (4)
proplet/src/cdh/core.rs (4)
  • unseal_secret (103-105)
  • unwrap_key (107-109)
  • get_resource (111-113)
  • pull_encrypted_image (115-128)
proplet/src/cdh/secret/mod.rs (1)
  • unseal_secret (19-28)
proplet/src/cdh/image/mod.rs (2)
  • unwrap_key (136-150)
  • pull_encrypted_image (41-59)
proplet/src/cdh/resource/mod.rs (3)
  • get_resource (37-52)
  • get_resource (136-136)
  • get_resource (142-144)
proplet/src/cdh/secret/mod.rs (3)
proplet/src/cdh/core.rs (3)
  • new (31-73)
  • new (139-146)
  • unseal_secret (103-105)
proplet/src/cdh/resource/mod.rs (1)
  • new (27-35)
proplet/src/cdh/mod.rs (1)
  • unseal_secret (33-33)
proplet/src/cdh/config.rs (2)
proplet/src/cdh/core.rs (2)
  • new (31-73)
  • new (139-146)
proplet/src/cdh/resource/mod.rs (1)
  • new (27-35)
proplet/src/cdh/tee/mod.rs (1)
proplet/src/cdh/core.rs (2)
  • new (31-73)
  • new (139-146)
proplet/src/service.rs (4)
proplet/src/cdh/attestation/mod.rs (2)
  • new (17-43)
  • new (136-142)
proplet/src/cdh/core.rs (3)
  • new (31-73)
  • new (139-146)
  • default (193-195)
proplet/src/cdh/image/mod.rs (2)
  • new (18-28)
  • default (129-131)
proplet/src/cdh/config.rs (3)
  • default (160-165)
  • default (186-191)
  • from_proplet_config (30-88)
proplet/src/cdh/attestation/mod.rs (2)
proplet/src/cdh/core.rs (2)
  • new (31-73)
  • new (139-146)
proplet/src/cdh/tee/mod.rs (2)
  • new (190-192)
  • name (28-39)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Rust Lint and Build
  • GitHub Check: Go Lint and Build
🔇 Additional comments (42)
task/task.go (1)

46-46: LGTM!

The new Encrypted field is well-placed and follows the existing field naming conventions. The default false value ensures backwards compatibility for non-encrypted workloads.

manager/service.go (1)

192-204: LGTM!

The encrypted field is correctly added to the task start payload, enabling proplets to determine whether decryption is required before execution.

proplet/src/main.rs (2)

9-10: LGTM!

The conditional module import is correctly guarded by the cdh feature flag, ensuring the CDH module is only compiled when the feature is enabled.


84-84: LGTM!

The .await correctly reflects the change to an async constructor in PropletService::new, which is necessary for the async CDH hub initialization.

proplet/src/service.rs (3)

15-16: LGTM!

Conditional imports are correctly guarded by the cdh feature flag.


60-61: LGTM!

The optional cdh_hub field is correctly feature-gated and uses Arc<CdhHub> for safe shared access.


70-98: LGTM on graceful degradation.

The CDH initialization properly handles failures by logging warnings and continuing without CDH support, which is a good resilience pattern for optional functionality.

proplet/src/cdh/error.rs (2)

1-56: LGTM!

The error enum is well-designed with semantic variants covering the full range of CDH failure modes. The #[from] derives for standard error types provide good ergonomics.


58-63: LGTM!

The From<kbs_protocol::Error> implementation is correctly feature-gated to match the conditional availability of the kbs_protocol crate.

proplet/src/cdh/mod.rs (2)

1-1: Good design: error module is unconditionally available.

This allows error types to be referenced even when the CDH feature is disabled, which is useful for consistent error handling across the codebase.


29-43: LGTM on trait definition.

The ConfidentialDataHub trait provides a clean abstraction for CDH operations. The Send + Sync bounds are appropriate for async contexts. The #[allow(dead_code)] annotations are acceptable for methods that may not be called yet but are part of the public API contract.

proplet/src/cdh/resource/mod.rs (2)

54-74: Unused _attestation_report parameter.

The attestation_report parameter is prefixed with _ indicating it's intentionally unused, but the method name suggests it should use attestation. Currently, this method behaves identically to fetch_from_kbs. If attestation is not yet implemented, consider documenting this or adding a TODO.

Is the attestation report intended to be used in the future? If so, consider adding a TODO comment or returning an error if attestation is expected but not implemented.


151-170: LGTM on test coverage.

The test validates basic cache put/get operations. Consider adding tests for TTL expiration and edge cases (e.g., cache miss, zero TTL).

proplet/src/cdh/tee/mod.rs (3)

9-20: LGTM!

The TeeType enum is well-structured with appropriate serde attributes for serialization. The variants cover the major TEE platforms including Intel TDX, AMD SEV/SNP, IBM SE, and Azure vTPM variants.


22-46: LGTM!

The TeeType implementation provides useful helper methods. The is_tee() method correctly identifies both None and Mock as non-TEE environments, and name() provides human-readable descriptions.


173-233: LGTM!

The WorkloadValidator correctly enforces security constraints:

  • Rejects encrypted workloads in Mock TEE when strict mode is enabled
  • Rejects encrypted workloads outside any TEE
  • Allows non-encrypted workloads in any environment

The logging at appropriate levels (error for violations, warn for testing scenarios, info for normal operations) is well-structured.

proplet/src/cdh/core.rs (4)

18-28: LGTM!

The CdhHub struct is well-organized with appropriate use of Arc for shared ownership of handlers across async contexts.


30-73: LGTM!

The initialization sequence is correct—environment variables are set before TEE detection, and all handlers are properly constructed with their dependencies.


115-128: LGTM!

The pull_encrypted_image implementation correctly sequences validation, image pull, and event recording. The attestation measurement extension provides an audit trail for confidential workloads.


131-196: LGTM!

The builder pattern implementation is clean and follows Rust conventions. The Default implementation delegates to new() as expected.

proplet/src/cdh/attestation/mod.rs (5)

10-43: LGTM!

The AttestationClient construction handles multiple endpoint types (Unix socket, HTTP/HTTPS) with appropriate fallback when the AA endpoint is unavailable. The logging provides clear visibility into the initialization state.


101-126: LGTM!

The AttestationPlatform mapping from TeeType is complete and correct. Mapping TeeType::Mock to AttestationPlatform::None is appropriate since mock environments shouldn't claim real attestation capabilities.


135-165: LGTM!

The RuntimeMeasurementEvent factory methods provide a clean API for creating domain-specific events with proper JSON structure.


242-267: LGTM!

Tests adequately cover the platform mapping and event construction. Good coverage of the public API surface.


210-225: The get_evidence method is dead code and is never called—downstream issue concerns are unfounded.

The public get_evidence method at line 45 is not invoked anywhere in the codebase. The AttestationClient is only used for extend_runtime_measurement and is_available checks. The concern about callers failing due to empty attestation data is invalid since no callers exist. If this method is intentionally stubbed pending future implementation, that's fine—but the concern raised about downstream failures does not apply.

Likely an incorrect or invalid review comment.

proplet/src/cdh/secret/mod.rs (6)

10-29: LGTM!

The SecretHandler provides a clean interface for unsealing secrets, with proper UTF-8 validation before parsing.


31-83: LGTM!

The secret data model with SealedSecret, SecretContent, EnvelopeSecret, and VaultSecret is well-structured. Using serde's #[serde(tag = "type")] for discriminated unions is idiomatic.


87-116: Header and signature sections are parsed but not validated.

The JWS-like format requires sealed.header.payload.signature, but only the payload is decoded and used. The header and signature are extracted but never validated. This means:

  1. Any header value is accepted (even invalid base64)
  2. The signature is never verified

If signature verification is intentionally deferred (e.g., trusted source), document this. Otherwise, consider validating or removing these expectations.
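Even with signature verification deferred, the structural expectations can be enforced cheaply. A hypothetical sketch (the function name `validate_sealed_format` is made up, and real code would additionally base64-decode the header):

```rust
// Fail fast on malformed sealed secrets by checking the four-part
// "sealed.header.payload.signature" shape before any decoding.
fn validate_sealed_format(token: &str) -> Result<(), String> {
    let parts: Vec<&str> = token.split('.').collect();
    if parts.len() != 4 {
        return Err(format!("expected 4 segments, found {}", parts.len()));
    }
    if parts[0] != "sealed" {
        return Err("token must start with the literal 'sealed' segment".into());
    }
    if parts[1..].iter().any(|p| p.is_empty()) {
        return Err("header, payload, and signature must be non-empty".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_sealed_format("sealed.eyJh.eyJw.c2ln").is_ok());
    // Missing signature segment is rejected instead of silently ignored.
    assert!(validate_sealed_format("sealed.eyJh.eyJw").is_err());
    println!("format checks ok");
}
```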


148-184: LGTM on the AES-GCM implementation mechanics.

The key and data decryption functions correctly handle key/IV size validation and use the aes_gcm crate appropriately. The error messages are descriptive.


187-195: LGTM!

The VaultSecret implementation is straightforward—delegating to the resource handler for secret retrieval.


197-258: LGTM!

Tests cover parsing of valid vault/envelope secrets and invalid formats. Good coverage of the parsing logic.

proplet/src/cdh/config.rs (5)

8-27: LGTM!

The CdhConfig struct is well-organized with appropriate serde attributes. Using skip_serializing for ImageConfig (which comes from kernel cmdline) makes sense since it's runtime-derived.


121-131: LGTM!

The TeeConfig struct with optional override and strict mode default is appropriate.


150-174: LGTM!

The AttestationConfig with sensible defaults for socket path and timeout is well-structured.


176-200: LGTM!

The ResourceConfig with defaults for cache size and TTL is appropriate.


202-235: LGTM!

The test verifies TOML round-trip serialization/deserialization, ensuring config persistence works correctly.

proplet/src/cdh/image/mod.rs (6)

1-11: LGTM!

Imports are appropriate for the module's functionality. The module-level #![allow(dead_code)] is acceptable for new code under active development.


12-28: LGTM!

Clean struct definition and constructor. The async constructor maintains consistency with other handlers in the codebase (e.g., ResourceHandler::new in core.rs).


41-59: LGTM!

The image pulling logic is straightforward. The method appropriately maps errors and constructs the ImageInfo response.


65-75: LGTM!

Clean delegation pattern with appropriate error handling for JSON parsing failures.


134-175: LGTM!

The key unwrapping flow is well-structured: base64 decode → fetch LEK from KBS → decrypt. Error handling is appropriate for each step.


177-198: LGTM!

Well-structured data types for image metadata. The separation between ImageInfo and EncryptedImageInfo provides clear semantics for encrypted vs. decrypted image handling.


Comment on lines 168 to 174
if t.ImageURL != "" {
dbT.ImageURL = t.ImageURL
}
if t.Env != nil {
dbT.Env = t.Env
}
dbT.Encrypted = t.Encrypted

⚠️ Potential issue | 🟠 Major

Unconditional Encrypted assignment may unintentionally clear the flag on partial updates.

Unlike ImageURL and Env which use conditional checks, Encrypted is always overwritten. If a caller performs a partial update (e.g., only changing Name), the Encrypted flag will be reset to false if not explicitly provided.

Consider using a *bool pointer type for Encrypted in update requests, or document that callers must always provide the current Encrypted value.

🛠️ Option: Use pointer to allow nil-check

In the update request DTO (if separate from Task), use *bool:

// If using a separate update DTO:
type UpdateTaskRequest struct {
    // ...
    Encrypted *bool `json:"encrypted,omitempty"`
}

// Then in UpdateTask:
if t.Encrypted != nil {
    dbT.Encrypted = *t.Encrypted
}

Alternatively, if keeping the current Task struct for updates, document the behavior explicitly and ensure all callers pass the current value.

🤖 Prompt for AI Agents
In `@manager/service.go` around lines 168 - 174, The update logic unconditionally
assigns dbT.Encrypted = t.Encrypted which will overwrite the stored flag on
partial updates; change the update DTO or Task parameter so Encrypted is a
pointer (*bool) and only assign when non-nil (e.g., if t.Encrypted != nil {
dbT.Encrypted = *t.Encrypted }), or alternatively document/ensure callers always
supply the current Encrypted value; update the Task/update handler and any
callers to use the pointer semantics so partial updates don’t inadvertently
clear the flag.

Comment on lines 112 to 118
pub fn set_env_vars(&self) {
std::env::set_var("AA_KBC_PARAMS", format!("{}::{}", self.kbc.name, self.kbc.url));

if let Some(kbs_cert) = &self.kbc.kbs_cert {
std::env::set_var("KBS_CERT", kbs_cert);
}
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

std::env::set_var thread-safe Rust documentation

💡 Result:

Short answer: std::env::set_var is NOT generally thread-safe on Unix-like platforms. The function is marked unsafe for multi‑threaded use except on Windows; on POSIX you must ensure no other thread reads or writes the environment (or avoid using it and use Command::env to pass vars to children). [1][2][3]

Sources:

  • std::env::set_var documentation (safety notes). [1]
  • Rust edition guide (set_var marked unsafe). [2]
  • Rust internals discussion about synchronization and the ENV lock. [3]

🏁 Script executed:

# First, let's examine the code context around the set_env_vars function
cd /tmp && find . -name "config.rs" -type f 2>/dev/null | head -5

Repository: rodneyosodo/propeller

Length of output: 186


🏁 Script executed:

# Search for set_env_vars usage in the codebase
rg -n "set_env_vars" --type rust

Repository: rodneyosodo/propeller

Length of output: 185


🏁 Script executed:

# Search for CdhHub and its usage
rg -n "CdhHub" --type rust -A 3 -B 1

Repository: rodneyosodo/propeller

Length of output: 4479


🏁 Script executed:

# Look for async runtime setup or tokio usage
rg -n "tokio|async fn|#\[tokio::main\]" --type rust | head -20

Repository: rodneyosodo/propeller

Length of output: 1509


std::env::set_var is not thread-safe and creates a race condition in multi-threaded contexts.

Per Rust documentation, set_var is unsafe on Unix-like platforms because it can race with getenv calls in other threads (including linked C libraries). This code runs within a #[tokio::main] multi-threaded runtime, and set_env_vars() is called during CdhHub::new() (an async function), which means environment variables are being set while other threads may be running or starting.

Consider alternatives:

  1. Pass KBC params explicitly to components that need them instead of via environment variables
  2. Set environment variables before the async runtime is created (in synchronous main code)
  3. Use Command::env() to pass variables only to child processes that need them
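Option 2 can be sketched with plain threads standing in for the tokio worker pool (the AA_KBC_PARAMS value is a made-up example; note set_var became an `unsafe fn` in Rust 2024 precisely because of this race, so the call is wrapped in an unsafe block):

```rust
use std::thread;

fn main() {
    // Sound point to mutate the environment: the process is still
    // single-threaded, so no concurrent getenv can race with this write.
    unsafe { std::env::set_var("AA_KBC_PARAMS", "cc_kbc::https://kbs.example:8080") };

    // A multi-threaded runtime (e.g. tokio) would be built only after this
    // point; a plain spawned thread stands in for its workers here and
    // only reads the environment.
    let handle = thread::spawn(|| std::env::var("AA_KBC_PARAMS").unwrap());
    assert_eq!(handle.join().unwrap(), "cc_kbc::https://kbs.example:8080");
    println!("env set before threads spawned");
}
```

Passing the KBC parameters explicitly through the CdhHub builder (option 1) avoids the global mutation entirely and is easier to test.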

.build()
.await;

assert!(hub.is_ok() || hub.is_err());

⚠️ Potential issue | 🟡 Minor

Test assertion is always true and provides no verification.

assert!(hub.is_ok() || hub.is_err()) will always pass since a Result is always either Ok or Err. This test doesn't verify any actual behavior.

Either assert on expected success/failure based on test conditions, or verify specific properties of the result.

💚 Suggested fix
-        assert!(hub.is_ok() || hub.is_err());
+        // In a test environment without actual CDH infrastructure,
+        // we expect this to either succeed with mock defaults or fail gracefully.
+        // For a meaningful test, check specific expected outcomes:
+        if let Ok(hub) = hub {
+            // Verify the hub was configured correctly
+            assert!(!hub.is_attestation_available()); // No AA in test env
+        }
🤖 Prompt for AI Agents
In `@proplet/src/cdh/core.rs` at line 229, The assertion assert!(hub.is_ok() ||
hub.is_err()) is tautological and should be replaced with a concrete
expectation: depending on the test setup, assert that hub is Ok (e.g.,
assert!(hub.is_ok()) and then inspect the Ok value with hub.unwrap() to assert
specific fields), or assert that hub is Err and validate the error (e.g., let
err = hub.unwrap_err(); assert_eq!(err.to_string(), "<expected message>") or
match on the error variant). Locate the test where the variable hub is produced
and replace the tautological check with an assertion that verifies the expected
success/failure and any relevant properties of the result.

Comment on lines 61 to 63
pub fn is_encrypted_image(&self, image_url: &str) -> bool {
image_url.contains("@") || image_url.contains("encrypted")
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OCI image encryption detection ocicrypt standard

💡 Result:

Short answer — look at the manifest/index descriptors:

  • Check each layer descriptor's mediaType for the ocicrypt encrypted media-types (they end with +encrypted), e.g.:

    • application/vnd.oci.image.layer.v1.tar+encrypted
    • application/vnd.oci.image.layer.v1.tar+gzip+encrypted
    • application/vnd.oci.image.layer.v1.tar+zstd+encrypted
      If any layer uses one of those mediaTypes, the layer is ocicrypt-style encrypted. [1][2]
  • Also inspect manifest/index annotations and ocicrypt metadata (key-wrapper / JWE metadata) that ocicrypt producers add (ocicrypt libraries/tools like skopeo/imgcrypt place encryption-related annotations and recipients info). Tools that implement ocicrypt (ocicrypt, imgcrypt, skopeo) recognize these mediaTypes+annotations and perform decryption if configured. [1][3]

How to check practically:

  • Use image inspect (skopeo/ctr/nerdctl/docker-registry API) and read the manifest/index and layer descriptors’ mediaType and annotations; unknown mediaTypes with +encrypted indicate encrypted layers. [3][2]

Sources:

  • ocicrypt project (implementation/library). [1]
  • containerd/imgcrypt (tools that register +encrypted media types). [2]
  • OCI image spec / media-type discussion (mediaType-based extension for encrypted layers). [3]

🏁 Script executed:

# Explore the file and surrounding context
cd proplet && find . -name "mod.rs" -path "*/image/*" -type f | head -5

Repository: rodneyosodo/propeller

Length of output: 87


🏁 Script executed:

# Look at the full image/mod.rs file to understand context
wc -l src/cdh/image/mod.rs

Repository: rodneyosodo/propeller

Length of output: 116


🏁 Script executed:

# Search for uses of is_encrypted_image function
rg "is_encrypted_image" proplet/

Repository: rodneyosodo/propeller

Length of output: 157


🏁 Script executed:

# Look for any image encryption or annotation handling
rg -i "encrypt" proplet/src/cdh/image/ -A 2 -B 2

Repository: rodneyosodo/propeller

Length of output: 3501


🏁 Script executed:

# See where is_encrypted_image is called
rg "is_encrypted_image" proplet/ -B 3 -A 3

Repository: rodneyosodo/propeller

Length of output: 423


🏁 Script executed:

# Get more context around the function and nearby code
sed -n '50,100p' proplet/src/cdh/image/mod.rs | cat -n

Repository: rodneyosodo/propeller

Length of output: 2024


🏁 Script executed:

# Check if there's any manifest inspection happening elsewhere
rg "mediaType\|annotation\|manifest" proplet/src/cdh/image/ -i

Repository: rodneyosodo/propeller

Length of output: 47


🏁 Script executed:

# Search entire repo for is_encrypted_image usage, not just proplet/
find . -type f -name "*.rs" | xargs rg "is_encrypted_image" 2>/dev/null

Repository: rodneyosodo/propeller

Length of output: 159


🏁 Script executed:

# Check if there's any actual manifest inspection or layer media type checking anywhere
rg "application/vnd.oci" . -t rust

Repository: rodneyosodo/propeller

Length of output: 47


🏁 Script executed:

# Check the full test to understand intent
rg "test_encrypted_image_info" proplet/src/cdh/image/mod.rs -A 15

Repository: rodneyosodo/propeller

Length of output: 816


🏁 Script executed:

# Check if is_encrypted_image is imported or exported from the module
rg "pub fn is_encrypted_image\|use.*is_encrypted_image" proplet/ -B 5 -A 5

Repository: rodneyosodo/propeller

Length of output: 47


🏁 Script executed:

# Look at the full ImageHandler struct to understand the context
sed -n '1,100p' proplet/src/cdh/image/mod.rs | head -60

Repository: rodneyosodo/propeller

Length of output: 1742


🏁 Script executed:

# Check if is_encrypted_image appears in any tests or documentation
rg "is_encrypted_image" proplet/ -r

Repository: rodneyosodo/propeller

Length of output: 478


🏁 Script executed:

# Check git history or comments around this function
sed -n '55,70p' proplet/src/cdh/image/mod.rs | cat -n

Repository: rodneyosodo/propeller

Length of output: 662


🏁 Script executed:

# Look for any pull requests or TODOs related to this
rg "TODO\|FIXME\|XXX" proplet/src/cdh/image/mod.rs -B 2 -A 2

Repository: rodneyosodo/propeller

Length of output: 47


🏁 Script executed:

# Check if is_encrypted_image is part of a public API or exported
sed -n '1,150p' proplet/src/cdh/image/mod.rs | grep -E "pub|impl" | head -20

Repository: rodneyosodo/propeller

Length of output: 636


🏁 Script executed:

# Look for any references to this module from outside
rg "ImageHandler\|image/mod" proplet/src --glob="*.rs" | grep -v "mod.rs:"

Repository: rodneyosodo/propeller

Length of output: 47


🏁 Script executed:

# Check what this module actually exports
grep -E "^pub " proplet/src/cdh/image/mod.rs | head -10

Repository: rodneyosodo/propeller

Length of output: 223


Encryption detection logic is fundamentally flawed and will produce false positives.

The current implementation has critical flaws:

  1. @ check incorrectly identifies digest-pinned images as encrypted. The @ syntax is standard OCI image reference format for digest-pinned images (e.g., ghcr.io/org/app@sha256:abc123) and does not indicate encryption. This will cause all digest-referenced images to be treated as encrypted.

  2. "encrypted" substring check is unreliable. This is not a standard convention and real encrypted images won't necessarily contain "encrypted" in their URL.

Proper OCI image encryption detection requires inspecting:

  • Layer descriptor mediaType values for the +encrypted suffix (e.g., application/vnd.oci.image.layer.v1.tar+encrypted)
  • Manifest/index annotations containing encryption metadata (available through layer inspection)

The codebase already has the necessary infrastructure (AnnotationPacket, decrypt_layer()) to handle encrypted images correctly, but is_encrypted_image() should leverage manifest inspection rather than URL pattern matching.

🤖 Prompt for AI Agents
In `@proplet/src/cdh/image/mod.rs` around lines 61 - 63, The is_encrypted_image
function currently uses URL heuristics which are incorrect; change
is_encrypted_image(&self, image_url: &str) to inspect the image manifest/layer
descriptors instead: look for any layer Descriptor.media_type (or mediaType)
having the "+encrypted" suffix (e.g., "application/...+encrypted") and/or
manifest/index annotations that indicate encryption (using the existing
AnnotationPacket structure), and return true only if such encrypted mediaType or
annotation is found; integrate this check where manifests are available (use the
same code paths that call decrypt_layer()) rather than relying on image URL
patterns.
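The media-type check can be sketched as follows; `Layer` is a stand-in for whatever layer-descriptor type the image client exposes, so this is illustrative rather than drop-in:

```rust
// A layer is ocicrypt-encrypted when its media type carries the
// "+encrypted" suffix, per the ocicrypt convention
// (e.g. "application/vnd.oci.image.layer.v1.tar+gzip+encrypted").
struct Layer {
    media_type: String,
}

fn has_encrypted_layers(layers: &[Layer]) -> bool {
    layers.iter().any(|l| l.media_type.ends_with("+encrypted"))
}

fn main() {
    let encrypted = Layer {
        media_type: "application/vnd.oci.image.layer.v1.tar+gzip+encrypted".into(),
    };
    let plain = Layer {
        media_type: "application/vnd.oci.image.layer.v1.tar+gzip".into(),
    };
    assert!(has_encrypted_layers(&[plain, encrypted]));
    // A digest-pinned but unencrypted image no longer triggers detection.
    assert!(!has_encrypted_layers(&[Layer {
        media_type: "application/vnd.oci.image.layer.v1.tar".into(),
    }]));
    println!("media-type detection ok");
}
```

Unlike the URL heuristic, this returns true only when the manifest actually contains encrypted layers, regardless of how the image is referenced.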

Comment on lines 27 to 35
pub async fn new(kbs_config: KbcConfig, cache_size: usize, cache_ttl_seconds: u64) -> Result<Self> {
info!("Initializing ResourceHandler with cache size: {}, TTL: {}s", cache_size, cache_ttl_seconds);

Ok(Self {
kbs_config,
cache: Arc::new(Mutex::new(LruCache::new(cache_size.try_into().unwrap()))),
cache_ttl: Duration::from_secs(cache_ttl_seconds),
})
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Potential panic if cache_size is zero.

LruCache::new requires a NonZeroUsize, and .try_into().unwrap() will panic if cache_size is 0. Consider validating the input or using a fallback.

🛠️ Suggested fix with validation
     pub async fn new(kbs_config: KbcConfig, cache_size: usize, cache_ttl_seconds: u64) -> Result<Self> {
         info!("Initializing ResourceHandler with cache size: {}, TTL: {}s", cache_size, cache_ttl_seconds);
 
+        let cache_size = std::num::NonZeroUsize::new(cache_size)
+            .ok_or_else(|| Error::Configuration("cache_size must be greater than 0".to_string()))?;
+
         Ok(Self {
             kbs_config,
-            cache: Arc::new(Mutex::new(LruCache::new(cache_size.try_into().unwrap()))),
+            cache: Arc::new(Mutex::new(LruCache::new(cache_size))),
             cache_ttl: Duration::from_secs(cache_ttl_seconds),
         })
     }
🤖 Prompt for AI Agents
In `@proplet/src/cdh/resource/mod.rs` around lines 27 - 35, The constructor
ResourceHandler::new can panic because it unconditionally calls LruCache::new
with cache_size.try_into().unwrap(); change this to validate cache_size is
non-zero and handle zero safely: either return an Err (use
NonZeroUsize::new(cache_size).ok_or_else(|| anyhow!("cache_size must be > 0"))?)
or convert zero to a safe default (create a NonZeroUsize via
NonZeroUsize::new(cache_size).unwrap_or(NonZeroUsize::new(1).unwrap()) before
calling LruCache::new); update the creation of the cache (the cache:
Arc::new(Mutex::new(LruCache::new(...)))) accordingly and adjust error handling
to propagate a clear message if validation fails.

Comment on lines 126 to 146
impl EnvelopeSecret {
    async fn unseal(&self, resource_handler: &ResourceHandler) -> Result<Vec<u8>> {
        debug!("Unsealing envelope secret");

        let kek = resource_handler.get_resource(&self.key_id).await?;

        let encrypted_key = b64.decode(&self.encrypted_key)
            .map_err(|e| Error::SecretUnseal(format!("Failed to decode encrypted_key: {}", e)))?;

        let encrypted_data = b64.decode(&self.encrypted_data)
            .map_err(|e| Error::SecretUnseal(format!("Failed to decode encrypted_data: {}", e)))?;

        let iv = b64.decode(&self.iv)
            .map_err(|e| Error::SecretUnseal(format!("Failed to decode IV: {}", e)))?;

        let dek = self.decrypt_key(&kek, &encrypted_key, &iv)?;

        let plaintext = self.decrypt_data(&dek, &encrypted_data, &iv)?;

        Ok(plaintext)
    }

⚠️ Potential issue | 🟠 Major

Reusing the same IV for both KEK and DEK decryption is a cryptographic vulnerability.

AES-GCM security requires unique nonce/IV per encryption operation. Using the same iv for decrypting both the key (Line 141) and the data (Line 143) violates this requirement. If the same KEK were used to encrypt multiple DEKs with identical IVs, this could leak information.

The envelope secret format should include separate IVs for key wrapping and data encryption.

🔒 Recommended fix

The EnvelopeSecret struct should have separate IV fields:

 pub struct EnvelopeSecret {
     pub provider: String,
     pub key_id: String,
     pub encrypted_key: String,
     pub encrypted_data: String,
     #[serde(default)]
     pub wrap_type: WrapType,
-    pub iv: String,
+    pub key_iv: String,
+    pub data_iv: String,
     #[serde(default)]
     pub provider_settings: serde_json::Value,
     #[serde(default)]
     pub annotations: serde_json::Value,
 }

Then use the appropriate IV for each decryption:

-        let dek = self.decrypt_key(&kek, &encrypted_key, &iv)?;
-        let plaintext = self.decrypt_data(&dek, &encrypted_data, &iv)?;
+        let key_iv = b64.decode(&self.key_iv)
+            .map_err(|e| Error::SecretUnseal(format!("Failed to decode key_iv: {}", e)))?;
+        let data_iv = b64.decode(&self.data_iv)
+            .map_err(|e| Error::SecretUnseal(format!("Failed to decode data_iv: {}", e)))?;
+        
+        let dek = self.decrypt_key(&kek, &encrypted_key, &key_iv)?;
+        let plaintext = self.decrypt_data(&dek, &encrypted_data, &data_iv)?;
🤖 Prompt for AI Agents
In `@proplet/src/cdh/secret/mod.rs` around lines 126 - 146, The unseal
implementation reuses the same IV for both key and data decryption which is
insecure; update the EnvelopeSecret type to carry separate IVs (e.g.,
iv_wrapping / iv_data or iv_key / iv_data), decode both base64 IV fields
(instead of decoding only the single iv) and pass the wrapping IV to
decrypt_key(&kek, &encrypted_key, &iv_wrapping) and the data IV to
decrypt_data(&dek, &encrypted_data, &iv_data) inside the unseal method,
adjusting field names and calls consistently (EnvelopeSecret, unseal,
encrypted_key, encrypted_data, decrypt_key, decrypt_data, kek, dek).
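A minimal, std-only sketch of the separate-IV envelope. The field names follow the suggested fix above and are hypothetical, and the uniqueness check stands in for real AES-GCM nonce handling (which the `aes-gcm` crate would perform):

```rust
// Hypothetical envelope layout carrying distinct IVs for key wrapping
// and data encryption, mirroring the suggested struct change.
struct EnvelopeSecret {
    key_iv: Vec<u8>,
    data_iv: Vec<u8>,
}

// AES-GCM is only safe when every encryption under a given key uses a
// fresh nonce; a format with a single shared IV makes reuse under the
// KEK likely, so reject envelopes whose two IVs coincide.
fn check_nonce_uniqueness(s: &EnvelopeSecret) -> Result<(), String> {
    if s.key_iv == s.data_iv {
        return Err("key_iv and data_iv must differ".into());
    }
    Ok(())
}

fn main() {
    let bad = EnvelopeSecret { key_iv: vec![0u8; 12], data_iv: vec![0u8; 12] };
    assert!(check_nonce_uniqueness(&bad).is_err());

    let good = EnvelopeSecret { key_iv: vec![0u8; 12], data_iv: vec![1u8; 12] };
    assert!(check_nonce_uniqueness(&good).is_ok());
}
```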

Comment on lines 93 to 120
pub fn detect_tee() -> TeeType {
    if is_tdx() {
        return TeeType::Tdx;
    }

    if is_snp() {
        return TeeType::Snp;
    }

    if is_sev() {
        return TeeType::Sev;
    }

    if is_azure_tdx_vtpm() {
        return TeeType::AzureTdxVtpm;
    }

    if is_azure_snp_vtpm() {
        return TeeType::AzureSnpVtpm;
    }

    if is_se() {
        return TeeType::Se;
    }

    debug!("No TEE detected");
    TeeType::None
}

⚠️ Potential issue | 🟠 Major

Detection order prevents Azure vTPM variants from being detected.

The Azure-specific checks (is_azure_tdx_vtpm, is_azure_snp_vtpm) are evaluated after the base is_tdx and is_snp checks. Since is_tdx() returns true when /dev/tdx-guest exists, and is_azure_tdx_vtpm() also requires /dev/tdx-guest, the Azure variant will never be detected—the function returns early with TeeType::Tdx.

Move Azure variant checks before the base TDX/SNP checks.

🐛 Proposed fix
 pub fn detect_tee() -> TeeType {
-    if is_tdx() {
-        return TeeType::Tdx;
-    }
-    
-    if is_snp() {
-        return TeeType::Snp;
-    }
-    
-    if is_sev() {
-        return TeeType::Sev;
-    }
-    
     if is_azure_tdx_vtpm() {
         return TeeType::AzureTdxVtpm;
     }
     
     if is_azure_snp_vtpm() {
         return TeeType::AzureSnpVtpm;
     }
     
+    if is_tdx() {
+        return TeeType::Tdx;
+    }
+    
+    if is_snp() {
+        return TeeType::Snp;
+    }
+    
+    if is_sev() {
+        return TeeType::Sev;
+    }
+    
     if is_se() {
         return TeeType::Se;
     }
     
     debug!("No TEE detected");
     TeeType::None
 }
🤖 Prompt for AI Agents
In `@proplet/src/cdh/tee/mod.rs` around lines 93 - 120, In detect_tee(), the
Azure-specific checks are placed after the generic TDX/SNP checks so
is_azure_tdx_vtpm() and is_azure_snp_vtpm() never run; reorder the conditional
checks so that is_azure_tdx_vtpm() and is_azure_snp_vtpm() are evaluated before
is_tdx() and is_snp(), respectively (keep the rest of the function and return
values the same).

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@proplet/src/cdh/attestation/mod.rs`:
- Around line 38-44: The log message incorrectly says "will attempt to connect
via gRPC" for HTTP/HTTPS endpoints; update the log to clearly state that an
HTTP(S) endpoint was detected and that the client will use the HTTP transport
variant (i.e., AttestationAgentClient::new_grpc_http) to connect to
config.aa_socket, so the message reflects the actual transport in use.

In `@proplet/src/cdh/config.rs`:
- Around line 41-53: The AA_KBC_PARAMS parsing uses aa_kbc_params.split("::")
which splits on every occurrence and breaks when the URL contains :: (e.g.,
IPv6), so change the split call to only split on the first occurrence using
splitn(2, "::") when building KbcConfig in the code that reads AA_KBC_PARAMS;
keep the existing parts.len() == 2 validation and KbcConfig construction (name
from parts[0], url from parts[1], kbs_cert from KBS_CERT) so URLs containing ::
are preserved.

In `@proplet/src/cdh/secret/mod.rs`:
- Around line 87-119: SealedSecret::parse currently ignores and never validates
the JWS header (sections[1]) and signature (sections[3]); update parse to
validate the header and signature before trusting the decoded payload: parse and
base64-decode sections[1] into a header struct (or at least verify expected
alg/typ fields), verify sections[3] is a valid signature for the concatenated
header.payload using your signing key/verification routine (or return
Error::SecretParse on mismatch), and only then decode sections[2] and
deserialize into SealedSecret (still checking secret.version against
SECRET_VERSION); refer to SealedSecret::parse, sections[1], sections[2],
sections[3], and SECRET_VERSION to locate where to add header parsing and
signature verification.
♻️ Duplicate comments (8)
proplet/src/cdh/tee/mod.rs (1)

96-123: Detection order prevents Azure vTPM variants from being detected.

The Azure-specific checks (is_azure_tdx_vtpm, is_azure_snp_vtpm) are evaluated after the base is_tdx and is_snp checks. Since is_tdx() returns true when /dev/tdx-guest exists, and is_azure_tdx_vtpm() also requires /dev/tdx-guest, the Azure variant will never be detected—the function returns early with TeeType::Tdx.

Move Azure variant checks before the base TDX/SNP checks.

🐛 Proposed fix
 pub fn detect_tee() -> TeeType {
+    if is_azure_tdx_vtpm() {
+        return TeeType::AzureTdxVtpm;
+    }
+
+    if is_azure_snp_vtpm() {
+        return TeeType::AzureSnpVtpm;
+    }
+
     if is_tdx() {
         return TeeType::Tdx;
     }

     if is_snp() {
         return TeeType::Snp;
     }

     if is_sev() {
         return TeeType::Sev;
     }

-    if is_azure_tdx_vtpm() {
-        return TeeType::AzureTdxVtpm;
-    }
-
-    if is_azure_snp_vtpm() {
-        return TeeType::AzureSnpVtpm;
-    }
-
     if is_se() {
         return TeeType::Se;
     }

     debug!("No TEE detected");
     TeeType::None
 }
proplet/src/cdh/resource/mod.rs (1)

37-42: Potential panic if cache_size is zero.

LruCache::new requires a NonZeroUsize, and .try_into().unwrap() will panic if cache_size is 0. Validate the input or use a fallback.

🛠️ Suggested fix
     pub async fn new(
         kbs_config: KbcConfig,
         cache_size: usize,
         cache_ttl_seconds: u64,
     ) -> Result<Self> {
         info!(
             "Initializing ResourceHandler with cache size: {}, TTL: {}s",
             cache_size, cache_ttl_seconds
         );

+        let cache_size = std::num::NonZeroUsize::new(cache_size)
+            .ok_or_else(|| Error::Configuration("cache_size must be greater than 0".to_string()))?;
+
         Ok(Self {
             kbs_config,
-            cache: Arc::new(Mutex::new(LruCache::new(cache_size.try_into().unwrap()))),
+            cache: Arc::new(Mutex::new(LruCache::new(cache_size))),
             cache_ttl: Duration::from_secs(cache_ttl_seconds),
         })
     }
proplet/src/cdh/config.rs (1)

124-133: std::env::set_var is not thread-safe in multi-threaded async contexts.

Per Rust documentation, set_var is unsafe on Unix-like platforms because it can race with getenv calls in other threads. This is called from CdhHub::new() within a Tokio runtime where multiple threads may be active.

Consider alternatives:

  1. Pass KBC params explicitly to components instead of via environment variables
  2. Set environment variables before the async runtime starts
  3. Use Command::env() to pass variables only to child processes that need them
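Option 3 can be sketched as follows (std-only; the key and value strings are illustrative). `Command::env` scopes the variable to the child process, so the parent's environment is never mutated:

```rust
use std::process::Command;

// Pass KBC parameters only to the child that needs them, instead of
// mutating the parent's environment with std::env::set_var (which can
// race with getenv once the Tokio runtime has spawned worker threads).
fn run_with_scoped_env(key: &str, value: &str) -> String {
    let out = Command::new("sh")
        .arg("-c")
        .arg(format!("printf %s \"${}\"", key))
        .env(key, value) // visible to this child process only
        .output()
        .expect("failed to spawn sh");
    String::from_utf8_lossy(&out.stdout).into_owned()
}

fn main() {
    let v = run_with_scoped_env("AA_KBC_PARAMS", "cc_kbc::https://kbs.example:8080");
    assert_eq!(v, "cc_kbc::https://kbs.example:8080");
    // std::env::var("AA_KBC_PARAMS") in the parent remains untouched here.
}
```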
proplet/src/cdh/secret/mod.rs (2)

46-59: IV reuse vulnerability - struct should have separate IV fields.

The EnvelopeSecret struct has a single iv field that gets reused for both key decryption (Line 147) and data decryption (Line 149). This is a cryptographic vulnerability as AES-GCM requires unique nonces per encryption operation.


129-152: IV reuse in decryption operations.

Lines 147 and 149 use the same iv for both decrypt_key and decrypt_data. This has already been flagged in previous reviews as a cryptographic vulnerability.

proplet/src/cdh/image/mod.rs (1)

58-60: Encryption detection logic is flawed.

This has been flagged in a previous review. The current heuristics will incorrectly identify all digest-pinned images (containing @) as encrypted.

proplet/src/cdh/core.rs (1)

225-225: Tautological assertion provides no verification.

assert!(hub.is_ok() || hub.is_err()) always passes since a Result is either Ok or Err. This test doesn't verify meaningful behavior.

♻️ Suggested fix: assert expected outcome
-        assert!(hub.is_ok() || hub.is_err());
+        // In test environment without CDH infrastructure, expect graceful failure
+        // or success with mock/disabled attestation
+        match hub {
+            Ok(h) => {
+                // Verify hub was configured with overridden values
+                // (if accessible) or just confirm construction succeeded
+            }
+            Err(e) => {
+                // Expected in environments without attestation-agent socket
+                println!("CdhHub construction failed as expected: {}", e);
+            }
+        }
proplet/src/service.rs (1)

383-415: Temporary file not cleaned up on error paths.

When tokio::fs::read(&output_path) fails (lines 399-405), the temp file created at line 384 is not removed. Similarly, if pull_encrypted_image fails (lines 408-414), any partially written temp file may remain.

♻️ Suggested fix: ensure cleanup on all error paths
                     Err(e) => {
                         error!("Failed to read decrypted wasm binary: {}", e);
+                        tokio::fs::remove_file(&output_path).await.ok();
                         self.running_tasks.lock().await.remove(&req.id);
                         self.publish_result(&req.id, Vec::new(), Some(e.to_string()))
                             .await?;
                         return Err(e.into());
                     }
                 Err(e) => {
                     error!("Failed to pull encrypted image: {}", e);
+                    tokio::fs::remove_file(&output_path).await.ok();
                     self.running_tasks.lock().await.remove(&req.id);
                     self.publish_result(&req.id, Vec::new(), Some(e.to_string()))
                         .await?;
                     return Err(e.into());
                 }
🧹 Nitpick comments (7)
proplet/src/cdh/resource/mod.rs (1)

122-129: Fire-and-forget clear_cache doesn't guarantee completion.

The method spawns an async task but doesn't return a handle or await it. Callers cannot confirm when the cache is actually cleared. Consider making this async fn and awaiting the clear operation, or returning a JoinHandle if non-blocking behavior is desired.

♻️ Suggested refactor
-    fn clear_cache(&self) {
-        let cache = self.cache.clone();
-        tokio::spawn(async move {
-            let mut cache = cache.lock().await;
-            cache.clear();
-            info!("Resource cache cleared");
-        });
+    async fn clear_cache(&self) {
+        let mut cache = self.cache.lock().await;
+        cache.clear();
+        info!("Resource cache cleared");
     }
proplet/src/cdh/attestation/mod.rs (1)

234-274: Placeholder implementations should be tracked.

The get_evidence and extend_runtime_measurement methods are stubbed with placeholder implementations. Consider adding a TODO comment or tracking this in an issue to ensure the actual gRPC integration is completed.

Would you like me to open an issue to track the completion of the Attestation Agent gRPC integration?

proplet/src/config.rs (1)

252-272: Missing test coverage for cdh_enabled field.

The existing test_proplet_config_default test doesn't verify cdh_enabled, and there's no test for parsing PROPLET_CDH_ENABLED from the environment. Consider adding tests to verify both the default value and environment variable parsing.

🧪 Suggested test additions
#[test]
fn test_proplet_config_default_cdh_enabled() {
    let config = PropletConfig::default();
    assert!(config.cdh_enabled.is_none());
}

#[test]
fn test_proplet_config_from_env_cdh_enabled_true() {
    env::set_var("PROPLET_CDH_ENABLED", "true");
    let config = PropletConfig::from_env();
    env::remove_var("PROPLET_CDH_ENABLED");

    assert_eq!(config.cdh_enabled, Some(true));
}

#[test]
fn test_proplet_config_from_env_cdh_enabled_false() {
    env::set_var("PROPLET_CDH_ENABLED", "false");
    let config = PropletConfig::from_env();
    env::remove_var("PROPLET_CDH_ENABLED");

    assert_eq!(config.cdh_enabled, Some(false));
}
proplet/src/service.rs (2)

417-419: Consider using debug! level for internal state logging.

Lines 418-419 log internal state (cdh_hub.is_some(), req.encrypted) at info! level. These are debugging aids and would be better suited to debug! level to reduce log noise in production.

♻️ Suggested fix
                 info!("Requesting binary from registry: {}", req.image_url);
-                info!("cdh_hub.is_some() = {}", self.cdh_hub.is_some());
-                info!("req.encrypted = {}", req.encrypted);
+                debug!("cdh_hub.is_some() = {}", self.cdh_hub.is_some());
+                debug!("req.encrypted = {}", req.encrypted);
                 self.request_binary_from_registry(&req.image_url).await?;

420-449: Code duplication between CDH and non-CDH paths.

The registry binary fetching logic (lines 420-431 and 436-448) is duplicated between the #[cfg(feature = "cdh")] and #[cfg(not(feature = "cdh"))] blocks. Consider extracting this into a helper method to reduce duplication.

proplet/src/cdh/core.rs (1)

1-1: Consider scoped #[allow(dead_code)] instead of module-level.

The module-level #![allow(dead_code)] suppresses warnings for the entire file. The individual fields already have #[allow(dead_code)] attributes (lines 19, 21, 24). Consider removing the module-level directive to catch unintentional dead code in future additions.

proplet/Cargo.toml (1)

27-32: Pin git dependencies to stable releases for reproducible builds.

The six guest-components dependencies (lines 27-32) lack a pinned rev, tag, or branch. Since confidential-containers/guest-components has stable releases (v0.16.0 and earlier), unpinned dependencies risk build inconsistency if upstream changes unexpectedly. This is inconsistent with the rest of the dependencies in the file, which use semantic versioning.

Consider pinning to a specific commit SHA or the latest stable tag to ensure build reproducibility:

♻️ Pin to a stable release or commit
-attestation-agent = { git = "https://github.com/confidential-containers/guest-components", features = ["all-attesters"], optional = true }
-kbs_protocol = { git = "https://github.com/confidential-containers/guest-components", package = "kbs_protocol", optional = true }
-kbc = { git = "https://github.com/confidential-containers/guest-components", package = "kbc", optional = true }
-image-rs = { git = "https://github.com/confidential-containers/guest-components", package = "image-rs", optional = true }
-crypto = { git = "https://github.com/confidential-containers/guest-components", package = "crypto", optional = true }
-resource_uri = { git = "https://github.com/confidential-containers/guest-components", package = "resource_uri", optional = true }
+attestation-agent = { git = "https://github.com/confidential-containers/guest-components", tag = "v0.16.0", features = ["all-attesters"], optional = true }
+kbs_protocol = { git = "https://github.com/confidential-containers/guest-components", tag = "v0.16.0", package = "kbs_protocol", optional = true }
+kbc = { git = "https://github.com/confidential-containers/guest-components", tag = "v0.16.0", package = "kbc", optional = true }
+image-rs = { git = "https://github.com/confidential-containers/guest-components", tag = "v0.16.0", package = "image-rs", optional = true }
+crypto = { git = "https://github.com/confidential-containers/guest-components", tag = "v0.16.0", package = "crypto", optional = true }
+resource_uri = { git = "https://github.com/confidential-containers/guest-components", tag = "v0.16.0", package = "resource_uri", optional = true }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c79edaf and 53b41b8.

⛔ Files ignored due to path filters (1)
  • proplet/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (17)
  • manager/service.go
  • proplet/Cargo.toml
  • proplet/README.md
  • proplet/src/cdh/attestation/mod.rs
  • proplet/src/cdh/config.rs
  • proplet/src/cdh/core.rs
  • proplet/src/cdh/error.rs
  • proplet/src/cdh/image/mod.rs
  • proplet/src/cdh/mod.rs
  • proplet/src/cdh/resource/mod.rs
  • proplet/src/cdh/secret/mod.rs
  • proplet/src/cdh/tee/mod.rs
  • proplet/src/config.rs
  • proplet/src/main.rs
  • proplet/src/service.rs
  • proplet/src/types.rs
  • task/task.go
🚧 Files skipped from review as they are similar to previous changes (4)
  • manager/service.go
  • proplet/src/types.rs
  • proplet/src/cdh/mod.rs
  • task/task.go
🧰 Additional context used
🧬 Code graph analysis (7)
proplet/src/main.rs (1)
proplet/src/service.rs (2)
  • new (26-32)
  • new (65-116)
proplet/src/cdh/resource/mod.rs (4)
proplet/src/cdh/core.rs (3)
  • new (31-63)
  • new (134-141)
  • get_resource (105-107)
proplet/src/cdh/image/mod.rs (1)
  • new (18-25)
proplet/src/cdh/secret/mod.rs (1)
  • new (15-17)
proplet/src/cdh/mod.rs (1)
  • get_resource (39-39)
proplet/src/cdh/core.rs (6)
proplet/src/cdh/attestation/mod.rs (4)
  • new (17-58)
  • new (156-166)
  • for_image_pull (168-177)
  • for_wasm_execution (179-188)
proplet/src/cdh/image/mod.rs (4)
  • new (18-25)
  • unwrap_key (132-146)
  • pull_encrypted_image (38-56)
  • default (125-127)
proplet/src/cdh/secret/mod.rs (3)
  • new (15-17)
  • unseal_secret (19-28)
  • default (80-82)
proplet/src/cdh/tee/mod.rs (3)
  • new (187-192)
  • detect (51-88)
  • default (230-232)
proplet/src/cdh/mod.rs (4)
  • unseal_secret (33-33)
  • unwrap_key (36-36)
  • get_resource (39-39)
  • pull_encrypted_image (42-42)
proplet/src/cdh/config.rs (3)
  • default (175-180)
  • default (201-206)
  • from_proplet_config (30-97)
proplet/src/cdh/tee/mod.rs (1)
proplet/src/cdh/core.rs (2)
  • new (31-63)
  • new (134-141)
proplet/src/cdh/image/mod.rs (2)
proplet/src/cdh/core.rs (4)
  • new (31-63)
  • new (134-141)
  • pull_encrypted_image (109-123)
  • unwrap_key (101-103)
proplet/src/cdh/mod.rs (2)
  • pull_encrypted_image (42-42)
  • unwrap_key (36-36)
proplet/src/cdh/attestation/mod.rs (2)
proplet/src/cdh/core.rs (2)
  • new (31-63)
  • new (134-141)
proplet/src/cdh/tee/mod.rs (2)
  • new (187-192)
  • name (28-39)
proplet/src/cdh/config.rs (4)
config.go (1)
  • PropletConfig (23-28)
proplet/src/cdh/attestation/mod.rs (3)
  • new (17-58)
  • new (156-166)
  • name (135-145)
proplet/src/cdh/core.rs (2)
  • new (31-63)
  • new (134-141)
proplet/src/cdh/resource/mod.rs (1)
  • new (27-42)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Go Lint and Build
  • GitHub Check: Rust Lint and Build
🔇 Additional comments (33)
proplet/README.md (1)

60-60: LGTM!

The documentation update to use /usr/bin/wasmtime aligns with the standard system installation path.

proplet/src/cdh/error.rs (1)

1-63: Well-structured error handling module.

The error types are comprehensive, covering all CDH subsystems. The use of thiserror with #[from] for automatic conversions is idiomatic Rust. The feature-gated From<kbs_protocol::Error> implementation correctly isolates KBS-specific code behind the cdh feature.

proplet/src/cdh/tee/mod.rs (1)

170-234: LGTM!

The workload validator correctly enforces security requirements: encrypted workloads must run inside a TEE, with appropriate handling for mock TEE environments in non-strict mode. The default strict mode ensures secure defaults.

proplet/src/cdh/resource/mod.rs (2)

61-90: Unused attestation_report parameter.

The _attestation_report parameter is never used—the method creates a NativeEvidenceProvider that generates its own attestation evidence. If this parameter is intended for non-native attestation flows, consider documenting this or implementing the alternative path. Otherwise, this may indicate incomplete functionality.


155-178: LGTM!

The KeyBrokerClient trait provides a clean abstraction for KBS access, and the implementation correctly delegates to the inherent methods.

proplet/src/cdh/config.rs (2)

16-17: image field behavior differs between file and runtime loading.

With skip_serializing, the image field isn't saved to TOML, so load_from_file will use ImageConfig::default() while from_proplet_config uses ImageConfig::from_kernel_cmdline(). If this is intentional (image config should always come from kernel cmdline at runtime), consider adding a comment to clarify.


136-215: LGTM!

The configuration sub-structs are well-organized with consistent default handling between Default implementations and serde attributes.

proplet/src/cdh/attestation/mod.rs (5)

1-8: LGTM!

Module imports are appropriate for the attestation client functionality.


60-96: LGTM!

The asymmetric error handling is appropriate: get_evidence fails when attestation is unavailable (critical operation), while extend_runtime_measurement degrades gracefully (non-critical logging operation).


98-146: LGTM!

Data types are well-defined. The AttestationPlatform::from_tee_type mapping correctly handles all TeeType variants, including the Mock variant mapping to None.


148-189: LGTM!

The RuntimeMeasurementEvent provides clear factory methods for common use cases with consistent domain naming.


277-301: LGTM!

Tests cover the essential mapping and construction logic.

proplet/src/cdh/secret/mod.rs (3)

1-29: LGTM!

The SecretHandler provides a clean interface for unsealing secrets with appropriate error handling for UTF-8 conversion.


154-200: LGTM!

The decryption implementations correctly validate key and IV sizes and provide clear error messages. The cryptographic operations are properly implemented.


203-275: LGTM!

The VaultSecret implementation is straightforward, and the tests adequately cover the parsing logic for both secret types.

proplet/src/cdh/image/mod.rs (5)

12-36: LGTM!

The ImageHandler struct and initialization logic are clean. The client creation is properly deferred to when needed.


38-56: LGTM!

The image pull implementation correctly delegates to the image-rs client with proper error wrapping.


62-101: LGTM!

The key unwrapping and layer decryption implementations are correct with proper error handling and size validation.


175-237: LGTM!

Data structures and tests are well-defined for the encrypted image handling use case.


130-146: No action needed. The base64 encoding inconsistency between modules is not a problem.

The secret and image modules are independent and operate on separate data:

  • secret/mod.rs uses URL_SAFE_NO_PAD to encode internal secret data
  • image/mod.rs uses STANDARD to decode annotation packet metadata received from external sources (container image specs)

These modules do not share encoded data, and different encoding schemes are appropriate for their respective contexts.

Likely an incorrect or invalid review comment.

proplet/src/config.rs (3)

42-42: LGTM!

The cdh_enabled field is appropriately typed as Option<bool> to distinguish between "not configured" and "explicitly disabled".


66-66: LGTM!

Default to None is correct, allowing the CDH feature to be explicitly opted-in.


220-222: LGTM!

Environment variable parsing follows the established pattern used for other boolean options (Line 217).

proplet/src/main.rs (2)

9-10: LGTM!

Feature-gated module inclusion follows Rust conventions for conditional compilation.


84-84: LGTM!

The service construction now correctly awaits the async constructor, which enables CDH initialization when the feature is enabled. The integration with PropletService::new (per proplet/src/service.rs lines 64-115) properly handles CDH initialization with appropriate error handling and fallback behavior.

proplet/src/service.rs (3)

15-16: LGTM!

Conditional import for CDH components is properly gated behind the cdh feature flag.


60-62: LGTM!

The optional cdh_hub field is properly gated behind the feature flag and wrapped in Arc for thread-safe sharing.


65-98: Breaking API change: PropletService::new is now async.

All call sites are properly updated to await this constructor (verified in proplet/src/main.rs:84).

The graceful degradation on CDH initialization failure is correct—the service continues without CDH support rather than failing entirely.

proplet/Cargo.toml (1)

33-49: LGTM!

The versioned dependencies (aes-gcm, lru, zeroize) and the cdh feature flag aggregation are well-structured.

proplet/src/cdh/core.rs (4)

30-63: LGTM!

The CdhHub::new constructor properly initializes all components in sequence: environment setup, TEE detection, then handler creation. The use of Arc for handlers enables safe sharing across async tasks.


65-92: LGTM!

The validation and event recording methods provide clean delegation to underlying handlers. The attestation event recording creates a useful audit trail for confidential workloads.


109-123: LGTM!

The pull_encrypted_image implementation correctly orchestrates the flow: validate workload → pull image → record event. The fail-fast validation ensures resources aren't wasted on invalid configurations.


126-191: LGTM!

The builder pattern implementation is clean and provides flexibility for configuration overrides while requiring the essential CdhConfig.


Comment on lines 38 to 44
        } else if config.aa_socket.starts_with("http://")
            || config.aa_socket.starts_with("https://")
        {
            info!("AA HTTP endpoint found, will attempt to connect via gRPC");
            Some(Arc::new(AttestationAgentClient::new_grpc_http(
                config.aa_socket.clone(),
            )?))

⚠️ Potential issue | 🟡 Minor

Misleading log message for HTTP endpoint.

Line 41 states "will attempt to connect via gRPC" for HTTP/HTTPS endpoints, which is confusing. The message should reflect the actual transport being used.

🔧 Suggested fix
         } else if config.aa_socket.starts_with("http://")
             || config.aa_socket.starts_with("https://")
         {
-            info!("AA HTTP endpoint found, will attempt to connect via gRPC");
+            info!("AA HTTP endpoint found, will attempt to connect via HTTP");
             Some(Arc::new(AttestationAgentClient::new_grpc_http(
                 config.aa_socket.clone(),
             )?))
🤖 Prompt for AI Agents
In `@proplet/src/cdh/attestation/mod.rs` around lines 38 - 44, The log message
incorrectly says "will attempt to connect via gRPC" for HTTP/HTTPS endpoints;
update the log to clearly state that an HTTP(S) endpoint was detected and that
the client will use the HTTP transport variant (i.e.,
AttestationAgentClient::new_grpc_http) to connect to config.aa_socket, so the
message reflects the actual transport in use.

Comment on lines 41 to 53
        let kbc_config = if let Ok(aa_kbc_params) = std::env::var("AA_KBC_PARAMS") {
            let parts: Vec<&str> = aa_kbc_params.split("::").collect();
            if parts.len() == 2 {
                KbcConfig {
                    name: parts[0].to_string(),
                    url: parts[1].to_string(),
                    kbs_cert: std::env::var("KBS_CERT").ok(),
                }
            } else {
                return Err(Error::Configuration(
                    "Invalid AA_KBC_PARAMS format. Expected: kbc_type::url".to_string(),
                ));
            }

⚠️ Potential issue | 🟡 Minor

AA_KBC_PARAMS parsing breaks with URLs containing ::.

Using split("::") without a limit causes issues with URLs containing :: (e.g., IPv6 addresses like http://[::1]:8080). Use splitn(2, "::") to split only on the first occurrence.

🐛 Proposed fix
         let kbc_config = if let Ok(aa_kbc_params) = std::env::var("AA_KBC_PARAMS") {
-            let parts: Vec<&str> = aa_kbc_params.split("::").collect();
-            if parts.len() == 2 {
+            let parts: Vec<&str> = aa_kbc_params.splitn(2, "::").collect();
+            if parts.len() == 2 && !parts[0].is_empty() && !parts[1].is_empty() {
                 KbcConfig {
                     name: parts[0].to_string(),
                     url: parts[1].to_string(),
                     kbs_cert: std::env::var("KBS_CERT").ok(),
                 }
             } else {
                 return Err(Error::Configuration(
                     "Invalid AA_KBC_PARAMS format. Expected: kbc_type::url".to_string(),
                 ));
             }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let kbc_config = if let Ok(aa_kbc_params) = std::env::var("AA_KBC_PARAMS") {
let parts: Vec<&str> = aa_kbc_params.split("::").collect();
if parts.len() == 2 {
KbcConfig {
name: parts[0].to_string(),
url: parts[1].to_string(),
kbs_cert: std::env::var("KBS_CERT").ok(),
}
} else {
return Err(Error::Configuration(
"Invalid AA_KBC_PARAMS format. Expected: kbc_type::url".to_string(),
));
}
let kbc_config = if let Ok(aa_kbc_params) = std::env::var("AA_KBC_PARAMS") {
let parts: Vec<&str> = aa_kbc_params.splitn(2, "::").collect();
if parts.len() == 2 && !parts[0].is_empty() && !parts[1].is_empty() {
KbcConfig {
name: parts[0].to_string(),
url: parts[1].to_string(),
kbs_cert: std::env::var("KBS_CERT").ok(),
}
} else {
return Err(Error::Configuration(
"Invalid AA_KBC_PARAMS format. Expected: kbc_type::url".to_string(),
));
}
🤖 Prompt for AI Agents
In `@proplet/src/cdh/config.rs` around lines 41 - 53, The AA_KBC_PARAMS parsing
uses aa_kbc_params.split("::") which splits on every occurrence and breaks when
the URL contains :: (e.g., IPv6), so change the split call to only split on the
first occurrence using splitn(2, "::") when building KbcConfig in the code that
reads AA_KBC_PARAMS; keep the existing parts.len() == 2 validation and KbcConfig
construction (name from parts[0], url from parts[1], kbs_cert from KBS_CERT) so
URLs containing :: are preserved.
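The failure mode is easy to reproduce. A minimal standalone sketch (plain Rust, independent of the proplet config code) showing why `split("::")` breaks an IPv6 KBS URL while `splitn(2, "::")` preserves it:

```rust
// Parse "kbc_type::url" by splitting on the FIRST "::" only, so URLs
// that themselves contain "::" (e.g., IPv6 literals) stay intact.
fn parse_kbc_params(params: &str) -> Option<(String, String)> {
    let mut parts = params.splitn(2, "::");
    let name = parts.next()?.to_string();
    let url = parts.next()?.to_string();
    if name.is_empty() || url.is_empty() {
        return None;
    }
    Some((name, url))
}

fn main() {
    let params = "cc_kbc::http://[::1]:8080";

    // Naive split("::") fragments the IPv6 URL into three pieces,
    // so the `parts.len() == 2` check rejects a valid value.
    let naive: Vec<&str> = params.split("::").collect();
    assert_eq!(naive.len(), 3);

    // splitn(2, "::") keeps the URL whole.
    let (name, url) = parse_kbc_params(params).expect("valid AA_KBC_PARAMS");
    assert_eq!(name, "cc_kbc");
    assert_eq!(url, "http://[::1]:8080");
}
```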

Comment on lines 87 to 119
impl SealedSecret {
pub fn parse(secret: &str) -> Result<Self> {
let sections: Vec<&str> = secret.split('.').collect();

if sections.len() != 4 {
return Err(Error::SecretParse(
"Invalid sealed secret format. Expected: sealed.header.payload.signature"
.to_string(),
));
}

if sections[0] != "sealed" {
return Err(Error::SecretParse(
"Sealed secret must start with 'sealed' prefix".to_string(),
));
}

let payload_json = b64
.decode(sections[2])
.map_err(|e| Error::SecretParse(format!("Failed to decode payload: {}", e)))?;

let secret: SealedSecret = serde_json::from_slice(&payload_json)
.map_err(|e| Error::SecretParse(format!("Failed to parse secret JSON: {}", e)))?;

if secret.version != SECRET_VERSION {
return Err(Error::SecretParse(format!(
"Unsupported secret version: {}. Expected: {}",
secret.version, SECRET_VERSION
)));
}

Ok(secret)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for signature handling in the codebase
rg -n "signature" proplet/src/cdh/ -A 3 -B 3

Repository: rodneyosodo/propeller

Length of output: 1494


🏁 Script executed:

#!/bin/bash
# Find and examine the full SealedSecret struct definition and all its methods
rg -n "impl SealedSecret|struct SealedSecret" proplet/src/cdh/secret/ -A 50 | head -200

Repository: rodneyosodo/propeller

Length of output: 6499


🏁 Script executed:

#!/bin/bash
# Check the entire secret/mod.rs file to understand the full implementation
wc -l proplet/src/cdh/secret/mod.rs

Repository: rodneyosodo/propeller

Length of output: 98


🏁 Script executed:

#!/bin/bash
# Search for JWT, verify, signature validation patterns
rg -i "verify|jwt|jws" proplet/src/cdh/ -A 2 -B 2

Repository: rodneyosodo/propeller

Length of output: 810


🏁 Script executed:

#!/bin/bash
# Check if the header section (sections[1]) is used anywhere for signature verification
cat proplet/src/cdh/secret/mod.rs | head -200

Repository: rodneyosodo/propeller

Length of output: 6091


🏁 Script executed:

#!/bin/bash
# Look for any crypto/signing related imports or functions
rg -n "use|import" proplet/src/cdh/secret/mod.rs | head -30

Repository: rodneyosodo/propeller

Length of output: 527


Signature and header sections are not validated.

The parse method extracts the signature (sections[3]) and header (sections[1]) but never validates them. If the sealed secret format includes these for integrity or additional metadata, they should be validated before trusting the payload. The test cases using "fakesignature" and "fakejwsheader" further confirm this is currently a no-op.

🤖 Prompt for AI Agents
In `@proplet/src/cdh/secret/mod.rs` around lines 87 - 119, SealedSecret::parse
currently ignores and never validates the JWS header (sections[1]) and signature
(sections[3]); update parse to validate the header and signature before trusting
the decoded payload: parse and base64-decode sections[1] into a header struct
(or at least verify expected alg/typ fields), verify sections[3] is a valid
signature for the concatenated header.payload using your signing
key/verification routine (or return Error::SecretParse on mismatch), and only
then decode sections[2] and deserialize into SealedSecret (still checking
secret.version against SECRET_VERSION); refer to SealedSecret::parse,
sections[1], sections[2], sections[3], and SECRET_VERSION to locate where to add
header parsing and signature verification.
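As a sketch of the intended order of operations (structural checks, then signature verification, then payload trust), using a caller-supplied `verify_signature` hook as a hypothetical stand-in for real JWS verification over `header.payload` with the trusted key:

```rust
// Validate the sealed-secret envelope BEFORE trusting the payload.
fn parse_sealed(
    secret: &str,
    verify_signature: impl Fn(&str, &str, &str) -> bool,
) -> Result<String, String> {
    let sections: Vec<&str> = secret.split('.').collect();
    if sections.len() != 4 {
        return Err("expected sealed.header.payload.signature".to_string());
    }
    if sections[0] != "sealed" {
        return Err("sealed secret must start with 'sealed'".to_string());
    }
    // Verify header + signature first; only then decode the payload.
    if !verify_signature(sections[1], sections[2], sections[3]) {
        return Err("signature verification failed".to_string());
    }
    // Payload is now safe to base64-decode and deserialize.
    Ok(sections[2].to_string())
}

// Hypothetical verifier; it rejects the placeholder signature the
// current tests use, demonstrating that "fakesignature" must not pass.
fn demo_verify(_header: &str, _payload: &str, signature: &str) -> bool {
    signature != "fakesignature"
}

fn main() {
    assert!(parse_sealed("sealed.hdr.payload.realsig", demo_verify).is_ok());
    assert!(parse_sealed("sealed.hdr.payload.fakesignature", demo_verify).is_err());
    assert!(parse_sealed("notsealed.hdr.payload.realsig", demo_verify).is_err());
}
```

The real implementation would select the algorithm from the decoded header and verify with the KBS-provisioned key rather than a closure.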

@sourcery-ai sourcery-ai bot left a comment

New security issues found

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@hal/ubuntu/qemu.sh`:
- Around line 156-189: The config template for /etc/proplet/config.toml
currently contains hardcoded UUID credentials in sections [manager], [proplet1],
[proplet2], [proplet3], and [proxy]; replace those literal UUIDs with clear
placeholders (e.g., 00000000-0000-0000-0000-000000000000 or
MANAGER_DOMAIN_ID_PLACEHOLDER etc.) and add substitution logic so real values
are injected at deploy time (add environment variables like MANAGER_DOMAIN_ID,
PROPLET1_CLIENT_ID, PROXY_CLIENT_KEY and perform runtime substitutions using
your existing templating/sed mechanism). Ensure each unique key name from the
diff (domain_id, client_id, client_key, channel_id) is mapped to an env var and
substituted for all sections to avoid committing secrets.
- Around line 528-581: The current QEMU command is built into a single
space-delimited string (QEMU_OPTS) which will break complex arguments containing
quotes/JSON (e.g., the tdx-guest object string) during expansion; convert
QEMU_OPTS into a Bash array (e.g., QEMU_OPTS=() and use QEMU_OPTS+=("<opt>"
"<value>") for each option including the tdx-guest JSON and memory-backend-memfd
entries, avoid string-manipulating sed on the combined string (instead push the
modified name/flags directly when TDX_AVAILABLE is true), and change execution
to call the binary with separate arguments ("$QEMU_CMD" "${QEMU_OPTS[@]}") so
quoted JSON and options are preserved intact.

In `@proplet/src/cdh/resource/mod.rs`:
- Around line 61-88: The function attest_and_get_resource currently accepts an
attestation_report parameter but ignores it—NativeEvidenceProvider::new()
generates fresh evidence and KbsClientBuilder::with_evidence_provider is fed
that provider—so either remove the unused attestation_report from the
attest_and_get_resource signature and the corresponding trait, or wire the
provided report into the evidence flow (e.g., replace
NativeEvidenceProvider::new() with an evidence provider constructor that accepts
the attestation_report and pass that provider into
KbsClientBuilder::with_evidence_provider); update attest_and_get_resource, its
trait declaration, and any callers accordingly to keep signatures and behavior
consistent.

In `@proplet/src/service.rs`:
- Around line 376-446: Detect the case where req.encrypted is true but CDH
handling is unavailable (cdh_hub.is_none() or the "cdh" feature is not enabled)
and fail fast instead of falling back to the registry: inside the branch
handling req.image_url, add a guard that when req.encrypted &&
(self.cdh_hub.is_none() || cfg!(not(feature = "cdh"))) you remove the task from
self.running_tasks, call self.publish_result(&req.id, Vec::new(),
Some("encrypted image received but CDH unavailable".into())).await? and return
an appropriate Err early; do this before calling request_binary_from_registry or
wait_for_binary so encrypted bytes are never processed by
request_binary_from_registry or executed. Ensure you reference the same symbols:
req.encrypted, self.cdh_hub, request_binary_from_registry, wait_for_binary,
publish_result, and running_tasks.
♻️ Duplicate comments (8)
proplet/src/cdh/tee/mod.rs (1)

96-123: Detection order still prevents Azure vTPM variants from being detected.

The Azure-specific checks (is_azure_tdx_vtpm, is_azure_snp_vtpm) are evaluated after the base is_tdx and is_snp checks. Since is_tdx() returns true when /dev/tdx-guest exists, the Azure TDX variant will never be detected—the function returns early at line 98.

🐛 Proposed fix - reorder checks
 pub fn detect_tee() -> TeeType {
+    // Check Azure variants first (more specific)
+    if is_azure_tdx_vtpm() {
+        return TeeType::AzureTdxVtpm;
+    }
+
+    if is_azure_snp_vtpm() {
+        return TeeType::AzureSnpVtpm;
+    }
+
+    // Then check base TEE types
     if is_tdx() {
         return TeeType::Tdx;
     }

     if is_snp() {
         return TeeType::Snp;
     }

     if is_sev() {
         return TeeType::Sev;
     }

-    if is_azure_tdx_vtpm() {
-        return TeeType::AzureTdxVtpm;
-    }
-
-    if is_azure_snp_vtpm() {
-        return TeeType::AzureSnpVtpm;
-    }
-
     if is_se() {
         return TeeType::Se;
     }

     debug!("No TEE detected");
     TeeType::None
 }
proplet/src/cdh/secret/mod.rs (2)

143-149: Same IV reused for both KEK and DEK decryption - cryptographic vulnerability.

AES-GCM security requires a unique nonce/IV per encryption operation. Lines 147-149 use the same iv for both decrypt_key (KEK → DEK) and decrypt_data (DEK → plaintext). If the same KEK is used to encrypt multiple DEKs with identical IVs, this violates AES-GCM security guarantees.

🔒 Recommended fix - use separate IVs

Update EnvelopeSecret to have separate IV fields:

 pub struct EnvelopeSecret {
     // ...
-    pub iv: String,
+    pub key_iv: String,
+    pub data_iv: String,
     // ...
 }

Then decode and use them separately:

-    let iv = b64.decode(&self.iv)...
-    let dek = self.decrypt_key(&kek, &encrypted_key, &iv)?;
-    let plaintext = self.decrypt_data(&dek, &encrypted_data, &iv)?;
+    let key_iv = b64.decode(&self.key_iv)...
+    let data_iv = b64.decode(&self.data_iv)...
+    let dek = self.decrypt_key(&kek, &encrypted_key, &key_iv)?;
+    let plaintext = self.decrypt_data(&dek, &encrypted_data, &data_iv)?;

87-119: Signature and header sections are parsed but never validated.

The parse method extracts the JWS-like structure (sealed.header.payload.signature) but only validates the sealed prefix and decodes the payload. The header (sections[1]) and signature (sections[3]) are completely ignored, allowing any value (including "fakejwsheader" and "fakesignature" as seen in tests).

If integrity verification is intended, the signature should be validated against the payload.

proplet/src/cdh/resource/mod.rs (1)

27-40: Guard against zero cache size to avoid panic.

LruCache::new requires a non‑zero capacity; cache_size.try_into().unwrap() will panic when cache_size == 0. Validate and return a configuration error (or default to a safe minimum) before constructing the cache.

🛠️ Suggested fix
+use std::num::NonZeroUsize;
@@
-        Ok(Self {
+        let cache_size = NonZeroUsize::new(cache_size).ok_or_else(|| {
+            Error::Configuration("cache_size must be greater than 0".to_string())
+        })?;
+
+        Ok(Self {
             kbs_config,
-            cache: Arc::new(Mutex::new(LruCache::new(cache_size.try_into().unwrap()))),
+            cache: Arc::new(Mutex::new(LruCache::new(cache_size))),
             cache_ttl: Duration::from_secs(cache_ttl_seconds),
         })
Does lru 0.12.0 require NonZeroUsize for LruCache::new, and what happens with a zero capacity?
proplet/src/cdh/config.rs (2)

41-52: AA_KBC_PARAMS parsing breaks on URLs containing ::.

split("::") splits on every occurrence, so IPv6 addresses or other URLs containing :: are broken apart. Use splitn(2, "::") and validate non-empty parts.

🛠️ Suggested fix
-            let parts: Vec<&str> = aa_kbc_params.split("::").collect();
-            if parts.len() == 2 {
+            let parts: Vec<&str> = aa_kbc_params.splitn(2, "::").collect();
+            if parts.len() == 2 && !parts[0].is_empty() && !parts[1].is_empty() {

124-132: Avoid std::env::set_var after the async runtime starts.

set_env_vars() mutates process environment during runtime; on Unix this is not thread‑safe when other threads may read the environment. Consider passing KBC params explicitly, or set env vars before the multi‑threaded runtime starts.

Is std::env::set_var thread-safe on Unix, and what are the recommended alternatives in multi-threaded Rust programs?
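One alternative, sketched here with a simplified stand-in for the crate's KbcConfig, is to parse the environment once on the main thread before the async runtime starts and pass the resulting value explicitly, so nothing calls set_var (or re-reads the environment) at runtime:

```rust
// Simplified stand-in for the crate's KbcConfig.
#[derive(Debug, Clone, PartialEq)]
struct KbcConfig {
    name: String,
    url: String,
    kbs_cert: Option<String>,
}

// Pure parser: takes the AA_KBC_PARAMS string as an argument instead of
// reading (or writing) the process environment itself.
fn kbc_config_from(params: &str, kbs_cert: Option<String>) -> Option<KbcConfig> {
    let (name, url) = params.split_once("::")?;
    if name.is_empty() || url.is_empty() {
        return None;
    }
    Some(KbcConfig {
        name: name.to_string(),
        url: url.to_string(),
        kbs_cert,
    })
}

fn main() {
    // In the real binary: read AA_KBC_PARAMS / KBS_CERT here, on the main
    // thread, before the tokio runtime starts, then hand `cfg` to the
    // service by value.
    let cfg = kbc_config_from("cc_kbc::https://kbs.example:8080", None)
        .expect("valid AA_KBC_PARAMS");
    assert_eq!(cfg.name, "cc_kbc");
    assert_eq!(cfg.url, "https://kbs.example:8080");
}
```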
proplet/src/cdh/image/mod.rs (1)

58-60: URL heuristic misclassifies encrypted images.

An @ in an image reference indicates a digest, not encryption, and matching an "encrypted" substring is equally unreliable. Proper detection should inspect the manifest/layer mediaType (e.g., a +encrypted suffix) and/or ocicrypt annotations. This likely requires an async manifest fetch and a different API surface.

What are the standard OCI mediaType/annotation signals for encrypted layers (ocicrypt), and how should encrypted images be detected?
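For reference, ocicrypt-encrypted layers are conventionally marked by a +encrypted suffix on the layer mediaType (e.g., application/vnd.oci.image.layer.v1.tar+gzip+encrypted) and by org.opencontainers.image.enc.* annotations. A sketch of a mediaType-based check, assuming the manifest's layer descriptors have already been fetched (the fetch itself is out of scope here):

```rust
// Decide "encrypted" from layer metadata instead of URL heuristics.
// `annotation_keys` stands in for the keys of a layer descriptor's
// annotations map.
fn layer_is_encrypted(media_type: &str, annotation_keys: &[&str]) -> bool {
    media_type.ends_with("+encrypted")
        || annotation_keys
            .iter()
            .any(|k| k.starts_with("org.opencontainers.image.enc."))
}

fn main() {
    // Encrypted layer mediaType.
    assert!(layer_is_encrypted(
        "application/vnd.oci.image.layer.v1.tar+gzip+encrypted",
        &[],
    ));
    // ocicrypt key-wrapping annotation on an otherwise ordinary mediaType.
    assert!(layer_is_encrypted(
        "application/vnd.oci.image.layer.v1.tar+gzip",
        &["org.opencontainers.image.enc.keys.provider.attestation-agent"],
    ));
    // Plain layer: neither signal present.
    assert!(!layer_is_encrypted(
        "application/vnd.oci.image.layer.v1.tar+gzip",
        &[],
    ));
}
```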
proplet/src/cdh/attestation/mod.rs (1)

38-43: HTTP endpoint log message is misleading.

The log says “via gRPC” for HTTP/HTTPS endpoints. Update it to reflect the actual transport.

🛠️ Suggested fix
-            info!("AA HTTP endpoint found, will attempt to connect via gRPC");
+            info!("AA HTTP endpoint found, will attempt to connect via HTTP");
🧹 Nitpick comments (4)
task/task.go (1)

45-46: Consider omitting encrypted when false to preserve payload compatibility.

Line 46 always serializes encrypted:false; adding omitempty keeps non-confidential payloads identical to earlier versions and avoids strict-decoder issues. Please confirm whether downstream consumers require explicit false before adopting.

💡 Suggested change
-	Encrypted         bool                       `json:"encrypted"`
+	Encrypted         bool                       `json:"encrypted,omitempty"`
proplet/src/types.rs (1)

61-62: Handle encrypted: null consistently with other nullable fields.

Line 61 uses #[serde(default)], which won’t accept JSON null. If any client can emit encrypted: null (common in optional-bool serializers), deserialization will fail. Consider reusing deserialize_null_default for parity with other fields. Please verify whether upstream serializers can emit null.

💡 Suggested change
-    #[serde(default)]
-    pub encrypted: bool,
+    #[serde(default, deserialize_with = "deserialize_null_default")]
+    pub encrypted: bool,
hal/ubuntu/qemu.sh (1)

33-33: Unused PASSWORD variable.

The PASSWORD variable is defined but never used. The cloud-init configuration directly uses plain_text_passwd: propeller instead of referencing this variable.

♻️ Proposed fix

Either remove the unused variable or use it in the cloud-init template:

-PASSWORD="propeller"

Or substitute it in the cloud-init:

-    plain_text_passwd: propeller
+    plain_text_passwd: PASSWORD_PLACEHOLDER

And add to the sed substitutions:

+  sed -i "s|PASSWORD_PLACEHOLDER|${PASSWORD}|g" $USER_DATA
proplet/Cargo.toml (1)

27-35: Pin git dependencies to a specific version for reproducible builds.

The guest-components git dependencies lack explicit rev, tag, or branch specifications, causing Cargo to pull the latest HEAD of the default branch on each build. This leads to:

  • Non-reproducible builds across environments and time
  • Potential unexpected breakages from upstream API changes

Pin to a stable release—the latest is v0.16.0:

♻️ Example pinning
-attestation-agent = { git = "https://github.com/confidential-containers/guest-components", features = ["all-attesters"], optional = true }
+attestation-agent = { git = "https://github.com/confidential-containers/guest-components", tag = "v0.16.0", features = ["all-attesters"], optional = true }

Apply similar pinning to kbs_protocol, kbc, image-rs, crypto, and resource_uri.

Comment on lines 156 to 189
- path: /etc/proplet/config.toml
content: |
# SuperMQ Configuration

[manager]
domain_id = "4bae1a76-afc4-4054-976c-5427c49fbbf3"
client_id = "cdaccb11-7209-4fb9-8df1-3c52e9d64284"
client_key = "507d687d-51f8-4c71-8599-4273a5d75429"
channel_id = "34a616c3-8817-4995-aade-a383e64766a8"

[proplet1]
domain_id = "4bae1a76-afc4-4054-976c-5427c49fbbf3"
client_id = "0deb859f-973d-4e2e-93cf-ec756f4fc3c8"
client_key = "17c03d05-b55d-4a05-88ec-cadecb2130c4"
channel_id = "34a616c3-8817-4995-aade-a383e64766a8"

[proplet2]
domain_id = "4bae1a76-afc4-4054-976c-5427c49fbbf3"
client_id = "3dfb6fa7-8e8f-4a2b-a462-8afc59898686"
client_key = "06244015-8286-4dd6-89bd-e2ba7d7a9637"
channel_id = "34a616c3-8817-4995-aade-a383e64766a8"

[proplet3]
domain_id = "4bae1a76-afc4-4054-976c-5427c49fbbf3"
client_id = "f869bde7-8b1a-483e-9837-b621309af55a"
client_key = "316ba339-76f5-4149-acd7-8d6f3f7c9276"
channel_id = "34a616c3-8817-4995-aade-a383e64766a8"

[proxy]
domain_id = "4bae1a76-afc4-4054-976c-5427c49fbbf3"
client_id = "0deb859f-973d-4e2e-93cf-ec756f4fc3c8"
client_key = "17c03d05-b55d-4a05-88ec-cadecb2130c4"
channel_id = "34a616c3-8817-4995-aade-a383e64766a8"
permissions: '0644'

⚠️ Potential issue | 🟠 Major

Hardcoded credentials in config.toml template should use placeholders.

The /etc/proplet/config.toml template contains hardcoded UUIDs for domain_id, client_id, client_key, and channel_id across multiple sections. Unlike /etc/default/proplet which uses *_PLACEHOLDER values that get substituted, these remain static.

If these are example values, consider using obvious placeholders (e.g., 00000000-0000-0000-0000-000000000000) or adding substitution logic. If these are real credentials, they should not be committed to source control.

🔒 Recommended approach

Use placeholders consistently and add substitution:

       [manager]
-      domain_id = "4bae1a76-afc4-4054-976c-5427c49fbbf3"
-      client_id = "cdaccb11-7209-4fb9-8df1-3c52e9d64284"
-      client_key = "507d687d-51f8-4c71-8599-4273a5d75429"
-      channel_id = "34a616c3-8817-4995-aade-a383e64766a8"
+      domain_id = "MANAGER_DOMAIN_ID_PLACEHOLDER"
+      client_id = "MANAGER_CLIENT_ID_PLACEHOLDER"
+      client_key = "MANAGER_CLIENT_KEY_PLACEHOLDER"
+      channel_id = "MANAGER_CHANNEL_ID_PLACEHOLDER"

Then add environment variables and sed substitutions for each section.

🧰 Tools
🪛 Gitleaks (8.30.0)

[high] 163-163: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 169-169: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 175-175: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 181-181: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 187-187: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🤖 Prompt for AI Agents
In `@hal/ubuntu/qemu.sh` around lines 156 - 189, The config template for
/etc/proplet/config.toml currently contains hardcoded UUID credentials in
sections [manager], [proplet1], [proplet2], [proplet3], and [proxy]; replace
those literal UUIDs with clear placeholders (e.g.,
00000000-0000-0000-0000-000000000000 or MANAGER_DOMAIN_ID_PLACEHOLDER etc.) and
add substitution logic so real values are injected at deploy time (add
environment variables like MANAGER_DOMAIN_ID, PROPLET1_CLIENT_ID,
PROXY_CLIENT_KEY and perform runtime substitutions using your existing
templating/sed mechanism). Ensure each unique key name from the diff (domain_id,
client_id, client_key, channel_id) is mapped to an env var and substituted for
all sections to avoid committing secrets.

Comment on lines 528 to 581
# Build QEMU command
QEMU_CMD="$QEMU_BINARY"
QEMU_OPTS="-name $VM_NAME"
QEMU_OPTS="$QEMU_OPTS -m $RAM"
QEMU_OPTS="$QEMU_OPTS -smp $CPU"
QEMU_OPTS="$QEMU_OPTS -enable-kvm"
QEMU_OPTS="$QEMU_OPTS -boot d"
QEMU_OPTS="$QEMU_OPTS -netdev user,id=vmnic,hostfwd=tcp::2222-:22,hostfwd=tcp::50002-:50002"
QEMU_OPTS="$QEMU_OPTS -nographic"
QEMU_OPTS="$QEMU_OPTS -no-reboot"
QEMU_OPTS="$QEMU_OPTS -drive file=$SEED_IMAGE,media=cdrom"
QEMU_OPTS="$QEMU_OPTS -drive file=$CUSTOM_IMAGE,if=none,id=disk0,format=qcow2"
QEMU_OPTS="$QEMU_OPTS -device virtio-scsi-pci,id=scsi,disable-legacy=on"
QEMU_OPTS="$QEMU_OPTS -device scsi-hd,drive=disk0"

if [ "$TDX_AVAILABLE" = true ]; then
echo "Starting QEMU VM with Intel TDX (Confidential VM)..."
QEMU_OPTS=$(echo "$QEMU_OPTS" | sed "s/-name $VM_NAME/-name $VM_NAME,process=$VM_NAME,debug-threads=on/")
QEMU_OPTS=$(echo "$QEMU_OPTS" | sed "s/-m $RAM//")
QEMU_OPTS="$QEMU_OPTS -object memory-backend-memfd,id=ram1,size=$RAM,share=true,prealloc=false"
QEMU_OPTS="$QEMU_OPTS -m $RAM"
QEMU_OPTS="$QEMU_OPTS -cpu host,pmu=off"
QEMU_OPTS="$QEMU_OPTS -object {\"qom-type\":\"tdx-guest\",\"id\":\"tdx0\",\"quote-generation-socket\":{\"type\":\"vsock\",\"cid\":\"2\",\"port\":\"4050\"}}"
QEMU_OPTS="$QEMU_OPTS -machine q35,confidential-guest-support=tdx0,memory-backend=ram1,kernel-irqchip=split,hpet=off"
QEMU_OPTS="$QEMU_OPTS -bios /usr/share/ovmf/OVMF.fd"
QEMU_OPTS="$QEMU_OPTS -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile="
QEMU_OPTS="$QEMU_OPTS -nodefaults"
QEMU_OPTS="$QEMU_OPTS -nographic"
QEMU_OPTS="$QEMU_OPTS -serial mon:stdio"
QEMU_OPTS="$QEMU_OPTS -monitor pty"
elif [ "$SEV_AVAILABLE" = true ]; then
echo "Starting QEMU VM with AMD SEV (Confidential VM)..."
QEMU_OPTS="$QEMU_OPTS -machine q35"
QEMU_OPTS="$QEMU_OPTS -cpu EPYC"
QEMU_OPTS="$QEMU_OPTS -object sev-guest,id=sev0,cbitpos=47,reduced-phys-bits=1"
QEMU_OPTS="$QEMU_OPTS -machine memory-encryption=sev0"
QEMU_OPTS="$QEMU_OPTS -drive if=pflash,format=raw,unit=0,file=$OVMF_CODE,readonly=on"
QEMU_OPTS="$QEMU_OPTS -drive if=pflash,format=raw,unit=1,file=$OVMF_VARS_COPY"
QEMU_OPTS="$QEMU_OPTS -device virtio-net-pci,netdev=vmnic,romfile="
else
echo "Starting QEMU VM in regular mode (no CVM)..."
QEMU_OPTS="$QEMU_OPTS -drive if=pflash,format=raw,unit=0,file=$OVMF_CODE,readonly=on"
QEMU_OPTS="$QEMU_OPTS -drive if=pflash,format=raw,unit=1,file=$OVMF_VARS_COPY"
QEMU_OPTS="$QEMU_OPTS -cpu host"
QEMU_OPTS="$QEMU_OPTS -machine q35"
QEMU_OPTS="$QEMU_OPTS -device virtio-net-pci,netdev=vmnic,romfile="
fi

# Execute QEMU
echo "VM will be accessible via:"
echo " SSH: ssh -p 2222 propeller@localhost"
echo " Attestation Agent: localhost:50002"
echo ""
$QEMU_CMD $QEMU_OPTS

⚠️ Potential issue | 🟡 Minor

QEMU command construction may break with complex JSON arguments.

Building QEMU_OPTS as a space-delimited string and executing it directly (line 581) can cause issues with the JSON object at line 550. The nested quotes in the tdx-guest configuration may not be preserved correctly during shell expansion.

🛠️ Recommended fix using an array
-  # Build QEMU command
-  QEMU_CMD="$QEMU_BINARY"
-  QEMU_OPTS="-name $VM_NAME"
-  QEMU_OPTS="$QEMU_OPTS -m $RAM"
+  # Build QEMU command as array
+  QEMU_OPTS=()
+  QEMU_OPTS+=(-name "$VM_NAME")
+  QEMU_OPTS+=(-m "$RAM")
   # ... continue with array syntax ...
   
   # For TDX JSON object:
+  QEMU_OPTS+=(-object '{"qom-type":"tdx-guest","id":"tdx0","quote-generation-socket":{"type":"vsock","cid":"2","port":"4050"}}')
   
   # Execute with proper quoting:
-  $QEMU_CMD $QEMU_OPTS
+  "$QEMU_BINARY" "${QEMU_OPTS[@]}"
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 550-550: Quotes/backslashes will be treated literally. Use an array.

(SC2089)


[warning] 581-581: Quotes/backslashes in this variable will not be respected.

(SC2090)

🤖 Prompt for AI Agents
In `@hal/ubuntu/qemu.sh` around lines 528 - 581, The current QEMU command is built
into a single space-delimited string (QEMU_OPTS) which will break complex
arguments containing quotes/JSON (e.g., the tdx-guest object string) during
expansion; convert QEMU_OPTS into a Bash array (e.g., QEMU_OPTS=() and use
QEMU_OPTS+=("<opt>" "<value>") for each option including the tdx-guest JSON and
memory-backend-memfd entries, avoid string-manipulating sed on the combined
string (instead push the modified name/flags directly when TDX_AVAILABLE is
true), and change execution to call the binary with separate arguments
("$QEMU_CMD" "${QEMU_OPTS[@]}") so quoted JSON and options are preserved intact.

Comment on lines 61 to 88
pub async fn attest_and_get_resource(
&self,
uri: &str,
_attestation_report: Vec<u8>,
) -> Result<Vec<u8>> {
info!("Getting resource from KBS with attestation: {}", uri);

let evidence_provider =
Box::new(NativeEvidenceProvider::new().map_err(|e| {
Error::KbsClient(format!("Failed to create evidence provider: {}", e))
})?);

let mut client =
KbsClientBuilder::with_evidence_provider(evidence_provider, &self.kbs_config.url)
.build()
.map_err(|e| Error::KbsClient(format!("Failed to create KBS client: {}", e)))?;

let resource_uri: resource_uri::ResourceUri = uri
.try_into()
.map_err(|e| Error::KbsClient(format!("Invalid resource URI: {}", e)))?;

let data = client
.get_resource(resource_uri)
.await
.map_err(|e| Error::KbsClient(format!("Failed to get resource from KBS: {}", e)))?;

self.cache_resource(uri, &data).await;


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the file context and the full function implementation
wc -l proplet/src/cdh/resource/mod.rs

Repository: rodneyosodo/propeller

Length of output: 100


🏁 Script executed:

# Read the relevant section with more context
sed -n '50,100p' proplet/src/cdh/resource/mod.rs

Repository: rodneyosodo/propeller

Length of output: 1675


🏁 Script executed:

# Search for any usage of attestation_report within this function or nearby
rg "attestation_report" proplet/src/cdh/resource/mod.rs -A 3 -B 3

Repository: rodneyosodo/propeller

Length of output: 633


🏁 Script executed:

# Look for callers of attest_and_get_resource to understand usage patterns
rg "attest_and_get_resource" --type rust

Repository: rodneyosodo/propeller

Length of output: 378


🏁 Script executed:

# Check if there are other similar functions that might show the pattern
rg "get_resource" proplet/src/cdh/resource/mod.rs -B 5 -A 10

Repository: rodneyosodo/propeller

Length of output: 2820


🏁 Script executed:

# Search for KbsClientBuilder in the codebase to understand how it works
rg "KbsClientBuilder" --type rust -A 5

Repository: rodneyosodo/propeller

Length of output: 1433


🏁 Script executed:

# Look for evidence provider usage and how attestation is handled
rg "NativeEvidenceProvider\|evidence_provider" --type rust -B 2 -A 5

Repository: rodneyosodo/propeller

Length of output: 47


🏁 Script executed:

# Search for NativeEvidenceProvider to understand how it works
rg "NativeEvidenceProvider" --type rust -B 3 -A 10

Repository: rodneyosodo/propeller

Length of output: 3076


🏁 Script executed:

# Look for evidence provider trait or implementations
rg "EvidenceProvider|trait.*Evidence" --type rust -B 2 -A 8

Repository: rodneyosodo/propeller

Length of output: 2434


🏁 Script executed:

# Check if there's any documentation or usage of passing attestation to evidence provider
rg "attestation" --type rust -i | head -30

Repository: rodneyosodo/propeller

Length of output: 2324


🏁 Script executed:

# Search for TODOs, FIXMEs, or comments about the attestation_report parameter
rg "TODO|FIXME|XXX" proplet/src/cdh/resource/mod.rs -B 2 -A 2

Repository: rodneyosodo/propeller

Length of output: 47


🏁 Script executed:

# Look for test cases to understand intended behavior
rg "attest_and_get_resource" --type rust -B 5 -A 15 | head -50

Repository: rodneyosodo/propeller

Length of output: 3016


🏁 Script executed:

# Check if there's any issue or comment explaining why parameter is unused
rg "_attestation_report\|attestation_report" proplet/src/cdh/resource/mod.rs -B 5 -A 2

Repository: rodneyosodo/propeller

Length of output: 47


🏁 Script executed:

# Search for any documentation or design docs
fd -e md | xargs rg "attestation" 2>/dev/null | head -20

Repository: rodneyosodo/propeller

Length of output: 250


🏁 Script executed:

# Check if the parameter is ever passed as None or with meaningful data in tests
rg "attest_and_get_resource" --type rust -B 10 -A 10 | grep -A 20 "test_"

Repository: rodneyosodo/propeller

Length of output: 2907


Remove unused attestation_report parameter or implement its use.

The attestation_report parameter is accepted but never used; NativeEvidenceProvider::new() generates fresh evidence regardless of the provided report. This creates a misleading interface suggesting attestation control while silently discarding caller-provided attestation. Either pass the attestation report into the KBS API (if supported by the evidence provider or KbsClientBuilder), or remove the parameter from both the method and trait definition to clarify the actual behavior.

🤖 Prompt for AI Agents
In `@proplet/src/cdh/resource/mod.rs` around lines 61 - 88, The function
attest_and_get_resource currently accepts an attestation_report parameter but
ignores it—NativeEvidenceProvider::new() generates fresh evidence and
KbsClientBuilder::with_evidence_provider is fed that provider—so either remove
the unused attestation_report from the attest_and_get_resource signature and the
corresponding trait, or wire the provided report into the evidence flow (e.g.,
replace NativeEvidenceProvider::new() with an evidence provider constructor that
accepts the attestation_report and pass that provider into
KbsClientBuilder::with_evidence_provider); update attest_and_get_resource, its
trait declaration, and any callers accordingly to keep signatures and behavior
consistent.

Comment on lines 376 to 446
        } else if !req.image_url.is_empty() {
            #[cfg(feature = "cdh")]
            if self.cdh_hub.is_some() && req.encrypted {
                info!("Processing encrypted image with CDH: {}", req.image_url);

                let cdh_hub = self.cdh_hub.as_ref().unwrap().clone();

                let temp_dir = std::env::temp_dir();
                let output_path = temp_dir.join(format!("task-{}.wasm", req.id));

                match cdh_hub
                    .pull_encrypted_image(&req.image_url, &output_path)
                    .await
                {
                    Ok(digest) => {
                        info!("Successfully pulled encrypted image, digest: {}", digest);

                        match tokio::fs::read(&output_path).await {
                            Ok(binary) => {
                                info!("Read decrypted wasm binary, size: {} bytes", binary.len());
                                tokio::fs::remove_file(&output_path).await.ok();
                                binary
                            }
                            Err(e) => {
                                error!("Failed to read decrypted wasm binary: {}", e);
                                self.running_tasks.lock().await.remove(&req.id);
                                self.publish_result(&req.id, Vec::new(), Some(e.to_string()))
                                    .await?;
                                return Err(e.into());
                            }
                        }
                    }
                    Err(e) => {
                        error!("Failed to pull encrypted image: {}", e);
                        self.running_tasks.lock().await.remove(&req.id);
                        self.publish_result(&req.id, Vec::new(), Some(e.to_string()))
                            .await?;
                        return Err(e.into());
                    }
                }
            } else {
                info!("Requesting binary from registry: {}", req.image_url);
                info!("cdh_hub.is_some() = {}", self.cdh_hub.is_some());
                info!("req.encrypted = {}", req.encrypted);
                self.request_binary_from_registry(&req.image_url).await?;

                match self.wait_for_binary(&req.image_url).await {
                    Ok(binary) => binary,
                    Err(e) => {
                        error!("Failed to get binary for task {}: {}", req.id, e);
                        self.running_tasks.lock().await.remove(&req.id);
                        self.publish_result(&req.id, Vec::new(), Some(e.to_string()))
                            .await?;
                        return Err(e);
                    }
                }
            }

            #[cfg(not(feature = "cdh"))]
            {
                info!("Requesting binary from registry: {}", req.image_url);
                self.request_binary_from_registry(&req.image_url).await?;

                match self.wait_for_binary(&req.image_url).await {
                    Ok(binary) => binary,
                    Err(e) => {
                        error!("Failed to get binary for task {}: {}", req.id, e);
                        self.running_tasks.lock().await.remove(&req.id);
                        self.publish_result(&req.id, Vec::new(), Some(e.to_string()))
                            .await?;
                        return Err(e);

⚠️ Potential issue | 🟠 Major

Fail fast when encrypted workloads arrive without CDH.

If req.encrypted is true and CDH is unavailable (or the feature isn’t enabled), the code falls back to the registry path. That can lead to attempting to execute encrypted bytes as plaintext WASM, or to confusing downstream errors. Consider returning a clear error and publishing the failure result instead of falling back.

🛠️ Suggested fix
         } else if !req.image_url.is_empty() {
+            if req.encrypted {
+                #[cfg(feature = "cdh")]
+                if self.cdh_hub.is_none() {
+                    let err = anyhow::anyhow!(
+                        "Encrypted image requires CDH, but CDH is not initialized"
+                    );
+                    error!("Validation error for task {}: {}", req.id, err);
+                    self.running_tasks.lock().await.remove(&req.id);
+                    self.publish_result(&req.id, Vec::new(), Some(err.to_string()))
+                        .await?;
+                    return Err(err);
+                }
+
+                #[cfg(not(feature = "cdh"))]
+                {
+                    let err = anyhow::anyhow!(
+                        "Encrypted image requires the 'cdh' feature to be enabled"
+                    );
+                    error!("Validation error for task {}: {}", req.id, err);
+                    self.running_tasks.lock().await.remove(&req.id);
+                    self.publish_result(&req.id, Vec::new(), Some(err.to_string()))
+                        .await?;
+                    return Err(err);
+                }
+            }
+
             #[cfg(feature = "cdh")]
             if self.cdh_hub.is_some() && req.encrypted {
🤖 Prompt for AI Agents
In `@proplet/src/service.rs` around lines 376 - 446, Detect the case where
req.encrypted is true but CDH handling is unavailable (cdh_hub.is_none() or the
"cdh" feature is not enabled) and fail fast instead of falling back to the
registry: inside the branch handling req.image_url, add a guard that when
req.encrypted && (self.cdh_hub.is_none() || cfg!(not(feature = "cdh"))) you
remove the task from self.running_tasks, call self.publish_result(&req.id,
Vec::new(), Some("encrypted image received but CDH unavailable".into())).await?
and return an appropriate Err early; do this before calling
request_binary_from_registry or wait_for_binary so encrypted bytes are never
processed by request_binary_from_registry or executed. Ensure you reference the
same symbols: req.encrypted, self.cdh_hub, request_binary_from_registry,
wait_for_binary, publish_result, and running_tasks.

@sourcery-ai sourcery-ai bot left a comment

New security issues found

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🤖 Fix all issues with AI agents
In `@proplet/Cargo.toml`:
- Around line 27-32: The listed git dependencies (attestation-agent,
kbs_protocol, kbc, image-rs, crypto, resource_uri) are missing pinned revisions;
update each dependency entry in Cargo.toml to include a fixed rev or tag (e.g.,
rev = "..." or tag = "..." / versioned tag) so the git= ... lines become
reproducible and immutable—edit the dependency lines for attestation-agent,
kbs_protocol (package = "kbs_protocol"), kbc (package = "kbc"), image-rs
(package = "image-rs"), crypto (package = "crypto"), and resource_uri (package =
"resource_uri") to add an appropriate rev or tag value.

In `@proplet/docs/scripts/test-addition.sh`:
- Around line 158-161: The script currently calls scp with $WASM_LOCAL without
verifying the file exists; add a pre-check before the "Copying WASM module to
VM..." step that verifies the path in WASM_LOCAL is a readable file (or invoke
the existing download logic that creates WASM_LOCAL) and if missing log an
explicit error via log_error and exit non‑zero, otherwise proceed to scp;
reference the WASM_LOCAL variable and the scp invocation so you can insert the
guard immediately before that scp command.
- Around line 207-216: The probe is hitting the gRPC-only port 50002 with curl,
so aa_response will be "ERROR" even when healthy; update the ssh curl command
that sets aa_response to target the Attestation Agent REST port
(http://localhost:8006/aa/evidence?runtime_data=test) or replace the curl call
with a gRPC probe (e.g., grpcurl) against port 50002; modify the command that
assigns aa_response accordingly so the check uses the REST endpoint or a proper
gRPC tool.

In `@proplet/docs/TEE-WORKLOAD-TESTING.md`:
- Around line 11-15: The example env exports expose real-looking credentials
(PROPLET_DOMAIN_ID, PROPLET_CHANNEL_ID, PROPLET_CLIENT_ID, PROPLET_CLIENT_KEY
and PROPLET_MQTT_ADDRESS); replace those concrete values with clearly marked
placeholders (e.g., PROPLET_DOMAIN_ID="<YOUR_DOMAIN_ID>",
PROPLET_CHANNEL_ID="<YOUR_CHANNEL_ID>", PROPLET_CLIENT_ID="<YOUR_CLIENT_ID>",
PROPLET_CLIENT_KEY="<YOUR_CLIENT_KEY>",
PROPLET_MQTT_ADDRESS="<YOUR_MQTT_ADDRESS>") and add a short comment that these
are examples only so no real secrets are committed.

In `@proplet/README.md`:
- Around line 54-57: The README currently exposes real-looking secrets
(PROPLET_DOMAIN_ID, PROPLET_CHANNEL_ID, PROPLET_CLIENT_ID, PROPLET_CLIENT_KEY);
replace those concrete values with non-sensitive placeholders (e.g.,
<DOMAIN_ID>, <CHANNEL_ID>, <CLIENT_ID>, <CLIENT_KEY> or instructions like "set
your value here") and add a short note to not commit real credentials, updating
any example env export lines that reference PROPLET_DOMAIN_ID,
PROPLET_CHANNEL_ID, PROPLET_CLIENT_ID, and PROPLET_CLIENT_KEY accordingly.
- Around line 48-59: The README contains two conflicting examples for the same
env var PROPLET_MQTT_ADDRESS which can confuse readers; pick a single example or
present alternatives clearly (e.g., PROPLET_MQTT_ADDRESS_HTTP and
PROPLET_MQTT_ADDRESS_TCP or label them "example A" / "example B"). Update the
README so only one PROPLET_MQTT_ADDRESS example remains or add distinct
names/labels for alternative addresses, and ensure any duplicated export for
PROPLET_MQTT_ADDRESS is removed or renamed consistently throughout the snippet.

In `@proplet/src/cdh/attestation/mod.rs`:
- Around line 217-263: The current stub in get_evidence returns an
AttestationReport with hardcoded AttestationPlatform::TDX which is misleading;
change get_evidence (the async fn get_evidence) to avoid returning a fabricated
platform: instead return an Err(Error::Attestation("attestation not implemented"
or similar)) or set AttestationReport.platform to AttestationPlatform::None and
clearly log/info that this is a stub; update the return path where
Ok(AttestationReport { ... AttestationPlatform::TDX }) is created to use
Error::Attestation or AttestationPlatform::None and adjust logs/messages so
callers can detect the unimplemented stub (refer to AttestationReport,
AttestationPlatform::TDX, and Error::Attestation).
♻️ Duplicate comments (2)
proplet/src/cdh/attestation/mod.rs (1)

38-44: Misleading log message for HTTP endpoint.

Line 41 states "will attempt to connect via gRPC" for HTTP/HTTPS endpoints, which is confusing since the actual transport is HTTP.

         } else if config.aa_socket.starts_with("http://")
             || config.aa_socket.starts_with("https://")
         {
-            info!("AA HTTP endpoint found, will attempt to connect via gRPC");
+            info!("AA HTTP endpoint found, will attempt to connect via HTTP");
             Some(Arc::new(AttestationAgentClient::new_grpc_http(
                 config.aa_socket.clone(),
             )?))
proplet/src/service.rs (1)

377-437: Encrypted workloads silently fall back to registry when CDH is unavailable.

When req.encrypted is true but cdh_hub is None (CDH initialization failed or disabled), the code falls through to the registry path (lines 421-436). This will attempt to process encrypted bytes as plaintext WASM, leading to runtime failures or undefined behavior.

Suggested fix: fail fast when encrypted image requires CDH
         } else if !req.image_url.is_empty() {
+            #[cfg(feature = "cdh")]
+            if req.encrypted && self.cdh_hub.is_none() {
+                let err = anyhow::anyhow!(
+                    "Encrypted image requires CDH, but CDH is not initialized"
+                );
+                error!("Validation error for task {}: {}", req.id, err);
+                self.running_tasks.lock().await.remove(&req.id);
+                self.publish_result(&req.id, Vec::new(), Some(err.to_string()))
+                    .await?;
+                return Err(err);
+            }
+
+            #[cfg(not(feature = "cdh"))]
+            if req.encrypted {
+                let err = anyhow::anyhow!(
+                    "Encrypted image requires the 'cdh' feature to be enabled"
+                );
+                error!("Validation error for task {}: {}", req.id, err);
+                self.running_tasks.lock().await.remove(&req.id);
+                self.publish_result(&req.id, Vec::new(), Some(err.to_string()))
+                    .await?;
+                return Err(err);
+            }
+
             #[cfg(feature = "cdh")]
             if self.cdh_hub.is_some() && req.encrypted {
🧹 Nitpick comments (7)
proplet/docs/scripts/setup-api-server-rest.sh (2)

61-66: Prefer direct file argument over piping from cat.

Minor style nit: cat file | tail can be simplified to tail file.

♻️ Suggested simplification
 		else
 			log_error "Installation failed"
-			cat /tmp/api-server-build.log | tail -20
+			tail -20 /tmp/api-server-build.log
 			return 1
 		fi
 	else
 		log_error "Build failed"
-		cat /tmp/api-server-build.log | tail -20
+		tail -20 /tmp/api-server-build.log
 		return 1

135-165: Add timeout to curl commands to prevent indefinite hangs.

The curl calls lack timeout options. If the service hangs or responds very slowly, the script could block indefinitely. Consider adding --connect-timeout and --max-time for bounded execution.

♻️ Suggested fix
 	while [ $retry -lt $max_retries ]; do
-		if curl -s http://localhost:8006/aa/evidence?runtime_data=test >/dev/null 2>&1; then
+		if curl -s --connect-timeout 5 --max-time 10 http://localhost:8006/aa/evidence?runtime_data=test >/dev/null 2>&1; then
 			log_info "✓ HTTP endpoint is responding"
 			break
 		else
 	# Test evidence endpoint
 	log_info "Testing /aa/evidence endpoint..."
-	if curl -s "http://localhost:8006/aa/evidence?runtime_data=test" | grep -q "svn\|report_data\|evidence"; then
+	if curl -s --connect-timeout 5 --max-time 10 "http://localhost:8006/aa/evidence?runtime_data=test" | grep -q "svn\|report_data\|evidence"; then
 		log_info "✓ Evidence endpoint working"
 	# Test token endpoint
 	log_info "Testing /aa/token endpoint..."
-	if curl -s "http://localhost:8006/aa/token?token_type=kbs" | grep -q "token\|error"; then
+	if curl -s --connect-timeout 5 --max-time 10 "http://localhost:8006/aa/token?token_type=kbs" | grep -q "token\|error"; then
 		log_info "✓ Token endpoint working"
proplet/src/service.rs (2)

70-98: CDH initialization silently degrades on failure.

When cdh_enabled is true but CDH initialization fails (lines 84-87, 90-93), the service continues without CDH. This could allow encrypted workloads to be submitted and then fail later in the request path rather than at startup. Consider whether a hard failure might be more appropriate when CDH is explicitly enabled but cannot initialize.
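A hard-failure variant of that startup logic can be sketched as below. `Config`, `CdhHub`, and `init_cdh` are illustrative stand-ins for the service's real types, not the PR's actual code:

```rust
// Hedged sketch: when CDH is explicitly enabled but cannot initialize,
// surface the error at startup instead of silently degrading.
struct Config {
    cdh_enabled: bool,
}

struct CdhHub;

impl CdhHub {
    // Stand-in constructor; the real one would dial the AA/CDH sockets.
    fn new() -> Result<Self, String> {
        Err("aa socket unreachable".into())
    }
}

fn init_cdh(cfg: &Config) -> Result<Option<CdhHub>, String> {
    if !cfg.cdh_enabled {
        return Ok(None); // CDH off: running without it is by design.
    }
    // CDH on: propagate the error so the operator sees it at startup,
    // rather than failing later when an encrypted task arrives.
    CdhHub::new()
        .map(Some)
        .map_err(|e| format!("CDH enabled but failed to initialize: {e}"))
}

fn main() {
    assert!(init_cdh(&Config { cdh_enabled: false }).unwrap().is_none());
    assert!(init_cdh(&Config { cdh_enabled: true }).is_err());
    println!("ok");
}
```

Whether hard failure is right depends on deployment policy; the point is that the decision should be explicit rather than an accidental fallthrough.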


383-401: Temporary file cleanup may leave artifacts on partial failures.

Line 401 uses .ok() to ignore cleanup errors, which is acceptable. However, if the code panics or returns early between file creation (line 384) and cleanup (line 401), the temp file persists. Consider using a guard/RAII pattern or scopeguard for more robust cleanup.
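A minimal RAII guard using only the standard library (no `scopeguard` crate) would look like this; the file name is illustrative:

```rust
use std::fs;
use std::path::PathBuf;

// Best-effort temp-file cleanup: the file is removed when the guard drops,
// including on early return or unwinding panic.
struct TempFileGuard(PathBuf);

impl Drop for TempFileGuard {
    fn drop(&mut self) {
        let _ = fs::remove_file(&self.0); // ignore errors, like `.ok()`
    }
}

fn main() {
    let path = std::env::temp_dir().join("task-demo.wasm");
    {
        fs::write(&path, b"fake wasm").unwrap();
        let _guard = TempFileGuard(path.clone());
        // ... reading/decrypting would happen here; any early return
        // out of this scope still triggers the cleanup in Drop.
    } // guard drops here, file is removed
    assert!(!path.exists());
    println!("ok");
}
```

Constructing the guard immediately after creating the file closes the window in which an early `return Err(...)` leaves the artifact behind.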

proplet/src/cdh/mod.rs (1)

31-45: Consider consolidating #[allow(dead_code)] at the trait level.

Each method has its own #[allow(dead_code)] annotation. If all methods may be unused in some configurations, consider a single annotation on the trait definition for cleaner code.

 #[cfg(feature = "cdh")]
 #[async_trait::async_trait]
+#[allow(dead_code)]
 pub trait ConfidentialDataHub: Send + Sync {
-    #[allow(dead_code)]
     async fn unseal_secret(&self, secret: &[u8]) -> Result<Vec<u8>>;
 
-    #[allow(dead_code)]
     async fn unwrap_key(&self, annotation_packet: &[u8]) -> Result<Vec<u8>>;
 
-    #[allow(dead_code)]
     async fn get_resource(&self, uri: &str) -> Result<Vec<u8>>;
 
-    #[allow(dead_code)]
     async fn pull_encrypted_image(&self, image_url: &str, output_path: &Path) -> Result<String>;
 }
proplet/src/cdh/attestation/mod.rs (2)

265-300: Runtime measurement extension is a logging-only stub.

extend_runtime_measurement only logs the event (line 285) without actually extending the runtime measurement in the attestation agent. While the comments explain this is pending full implementation, callers may assume measurements are being recorded for attestation purposes.

Consider adding a warn! or debug! log explicitly stating measurements are not being persisted to the attestation log until the full implementation is complete.


191-200: endpoint_type field is set but never read.

AttestationAgentClient stores endpoint_type (lines 193-194) but never uses it to differentiate behavior. The get_evidence method re-parses the endpoint string (lines 236-247) instead of using this field.

Either use endpoint_type to avoid re-parsing:

let url = match self.endpoint_type {
    EndpointType::Http => format!("{}/aa/evidence?...", self.endpoint.trim_end_matches('/')),
    EndpointType::UnixSocket => format!("http://127.0.0.1:50002/aa/evidence?..."),
};

Or remove the unused field to reduce dead code.

Comment on lines 27 to 32
attestation-agent = { git = "https://github.com/confidential-containers/guest-components", features = ["all-attesters"], optional = true }
kbs_protocol = { git = "https://github.com/confidential-containers/guest-components", package = "kbs_protocol", optional = true }
kbc = { git = "https://github.com/confidential-containers/guest-components", package = "kbc", optional = true }
image-rs = { git = "https://github.com/confidential-containers/guest-components", package = "image-rs", optional = true }
crypto = { git = "https://github.com/confidential-containers/guest-components", package = "crypto", optional = true }
resource_uri = { git = "https://github.com/confidential-containers/guest-components", package = "resource_uri", optional = true }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "Cargo.toml" -path "*/proplet/*" | head -5

Repository: rodneyosodo/propeller

Length of output: 85


🏁 Script executed:

cat -n proplet/Cargo.toml | sed -n '20,40p'

Repository: rodneyosodo/propeller

Length of output: 1437


🏁 Script executed:

# Check the entire dependencies section to understand context
rg "attestation-agent|kbs_protocol|kbc|image-rs|crypto|resource_uri" proplet/Cargo.toml -A 1 -B 1

Repository: rodneyosodo/propeller

Length of output: 996


🏁 Script executed:

rg "git = \"" proplet/Cargo.toml

Repository: rodneyosodo/propeller

Length of output: 815


🏁 Script executed:

# Check if project has dev-dependencies or other sections with git deps
rg "\[.*dependencies\]" proplet/Cargo.toml -A 30 | grep "git ="

Repository: rodneyosodo/propeller

Length of output: 815


🏁 Script executed:

# Check root Cargo.toml for comparison
rg "git = \"" Cargo.toml 2>/dev/null | head -10

Repository: rodneyosodo/propeller

Length of output: 47


Pin git dependencies to a specific revision.

All six guest-components dependencies in lines 27–32 lack rev or tag specifications, making builds non-reproducible and vulnerable to upstream breaking changes. Add a revision pin to each dependency.

🤖 Prompt for AI Agents
In `@proplet/Cargo.toml` around lines 27 - 32, The listed git dependencies
(attestation-agent, kbs_protocol, kbc, image-rs, crypto, resource_uri) are
missing pinned revisions; update each dependency entry in Cargo.toml to include
a fixed rev or tag (e.g., rev = "..." or tag = "..." / versioned tag) so the
git= ... lines become reproducible and immutable—edit the dependency lines for
attestation-agent, kbs_protocol (package = "kbs_protocol"), kbc (package =
"kbc"), image-rs (package = "image-rs"), crypto (package = "crypto"), and
resource_uri (package = "resource_uri") to add an appropriate rev or tag value.

Comment on lines 158 to 161
# Copy WASM to VM
log_info "Copying WASM module to VM..."
scp -P 2222 $WASM_LOCAL propeller@localhost:/tmp/addition.wasm


⚠️ Potential issue | 🟡 Minor

Guard against missing local WASM before scp.

Line 160 assumes $WASM_LOCAL exists; if the user runs qemu mode first, the script fails with a cryptic scp error. Add a pre‑check (or reuse the download logic) before copying.

🔧 Suggested guard
 	# Copy WASM to VM
 	log_info "Copying WASM module to VM..."
+	if [ ! -f "$WASM_LOCAL" ]; then
+		log_error "WASM file not found: $WASM_LOCAL"
+		log_info "Run './test-addition.sh local' or set WASM_LOCAL before qemu mode."
+		exit 1
+	fi
 	scp -P 2222 $WASM_LOCAL propeller@localhost:/tmp/addition.wasm
🤖 Prompt for AI Agents
In `@proplet/docs/scripts/test-addition.sh` around lines 158 - 161, The script
currently calls scp with $WASM_LOCAL without verifying the file exists; add a
pre-check before the "Copying WASM module to VM..." step that verifies the path
in WASM_LOCAL is a readable file (or invoke the existing download logic that
creates WASM_LOCAL) and if missing log an explicit error via log_error and exit
non‑zero, otherwise proceed to scp; reference the WASM_LOCAL variable and the
scp invocation so you can insert the guard immediately before that scp command.

Comment on lines 207 to 216
# Check Attestation Agent
log_info "Testing Attestation Agent..."
local aa_response=$(ssh -p 2222 propeller@localhost "curl -s http://localhost:50002/aa/evidence?runtime_data=test" || echo "ERROR")

if [ "$aa_response" = "ERROR" ]; then
log_error "Attestation Agent is not responding"
return 1
else
log_info "✓ Attestation Agent is responding"
fi

⚠️ Potential issue | 🟠 Major

Encrypted workflow probes the gRPC port with HTTP.

Line 209 uses curl against port 50002, which is gRPC‑only; the check will fail even if the Attestation Agent is healthy. Use the REST port (8006) or switch to gRPC tooling.

🔧 Suggested fix
-	local aa_response=$(ssh -p 2222 propeller@localhost "curl -s http://localhost:50002/aa/evidence?runtime_data=test" || echo "ERROR")
+	local aa_response=$(ssh -p 2222 propeller@localhost "curl -s http://localhost:8006/aa/evidence?runtime_data=test" || echo "ERROR")
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 209-209: Declare and assign separately to avoid masking return values.

(SC2155)

🤖 Prompt for AI Agents
In `@proplet/docs/scripts/test-addition.sh` around lines 207 - 216, The probe is
hitting the gRPC-only port 50002 with curl, so aa_response will be "ERROR" even
when healthy; update the ssh curl command that sets aa_response to target the
Attestation Agent REST port
(http://localhost:8006/aa/evidence?runtime_data=test) or replace the curl call
with a gRPC probe (e.g., grpcurl) against port 50002; modify the command that
assigns aa_response accordingly so the check uses the REST endpoint or a proper
gRPC tool.

Comment on lines 11 to 15
export PROPLET_DOMAIN_ID="35ab3e93-a483-498d-80f2-8bb060dcfc63"
export PROPLET_CHANNEL_ID="3d24ba2d-c80b-4648-8dd0-213a394d0d4b"
export PROPLET_CLIENT_ID="82b870d6-9589-4a75-93a0-72cc7d2a212f"
export PROPLET_CLIENT_KEY="1aa5c33a-04ff-427e-92d0-24dbc1271c0d"
export PROPLET_MQTT_ADDRESS="tcp://0.tcp.in.ngrok.io:16498"

⚠️ Potential issue | 🟠 Major

Sanitize example credentials in docs.

Lines 11‑15 include concrete domain/channel/client IDs and a client key. If any are real, this is a credential leak. Please replace with placeholders.

🔧 Suggested doc sanitization
-export PROPLET_DOMAIN_ID="35ab3e93-a483-498d-80f2-8bb060dcfc63"
-export PROPLET_CHANNEL_ID="3d24ba2d-c80b-4648-8dd0-213a394d0d4b"
-export PROPLET_CLIENT_ID="82b870d6-9589-4a75-93a0-72cc7d2a212f"
-export PROPLET_CLIENT_KEY="1aa5c33a-04ff-427e-92d0-24dbc1271c0d"
+export PROPLET_DOMAIN_ID="your-domain-id"
+export PROPLET_CHANNEL_ID="your-channel-id"
+export PROPLET_CLIENT_ID="your-client-id"
+export PROPLET_CLIENT_KEY="your-client-key"
🧰 Tools
🪛 Gitleaks (8.30.0)

[high] 14-14: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🤖 Prompt for AI Agents
In `@proplet/docs/TEE-WORKLOAD-TESTING.md` around lines 11 - 15, The example env
exports expose real-looking credentials (PROPLET_DOMAIN_ID, PROPLET_CHANNEL_ID,
PROPLET_CLIENT_ID, PROPLET_CLIENT_KEY and PROPLET_MQTT_ADDRESS); replace those
concrete values with clearly marked placeholders (e.g.,
PROPLET_DOMAIN_ID="<YOUR_DOMAIN_ID>", PROPLET_CHANNEL_ID="<YOUR_CHANNEL_ID>",
PROPLET_CLIENT_ID="<YOUR_CLIENT_ID>", PROPLET_CLIENT_KEY="<YOUR_CLIENT_KEY>",
PROPLET_MQTT_ADDRESS="<YOUR_MQTT_ADDRESS>") and add a short comment that these
are examples only so no real secrets are committed.

Comment on lines 48 to 59
# MQTT configuration
export PROPLET_MQTT_ADDRESS=tcp://localhost:1883
export PROPLET_MQTT_ADDRESS=tcp://0.tcp.in.ngrok.io:18269
export PROPLET_MQTT_TIMEOUT=30
export PROPLET_MQTT_QOS=2

# SuperMQ credentials
export PROPLET_DOMAIN_ID=your-domain-id
export PROPLET_CHANNEL_ID=your-channel-id
export PROPLET_CLIENT_ID=your-client-id
export PROPLET_CLIENT_KEY=your-client-key
export PROPLET_DOMAIN_ID="35ab3e93-a483-498d-80f2-8bb060dcfc63"
export PROPLET_CHANNEL_ID="3d24ba2d-c80b-4648-8dd0-213a394d0d4b"
export PROPLET_CLIENT_ID="82b870d6-9589-4a75-93a0-72cc7d2a212f"
export PROPLET_CLIENT_KEY="1aa5c33a-04ff-427e-92d0-24dbc1271c0d"
export PROPLET_MQTT_ADDRESS="tcp://0.tcp.in.ngrok.io:16498"


⚠️ Potential issue | 🟡 Minor

Avoid conflicting PROPLET_MQTT_ADDRESS examples.

Line 49 and Line 58 set PROPLET_MQTT_ADDRESS twice; the second overwrites the first, and readers won’t know which address to use. Keep a single example or clearly label alternatives.

💡 One possible cleanup
-export PROPLET_MQTT_ADDRESS=tcp://0.tcp.in.ngrok.io:18269
+export PROPLET_MQTT_ADDRESS="tcp://broker.example.com:1883"
 ...
-export PROPLET_MQTT_ADDRESS="tcp://0.tcp.in.ngrok.io:16498"
🧰 Tools
🪛 Gitleaks (8.30.0)

[high] 57-57: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🤖 Prompt for AI Agents
In `@proplet/README.md` around lines 48 - 59, The README contains two conflicting
examples for the same env var PROPLET_MQTT_ADDRESS which can confuse readers;
pick a single example or present alternatives clearly (e.g.,
PROPLET_MQTT_ADDRESS_HTTP and PROPLET_MQTT_ADDRESS_TCP or label them "example A"
/ "example B"). Update the README so only one PROPLET_MQTT_ADDRESS example
remains or add distinct names/labels for alternative addresses, and ensure any
duplicated export for PROPLET_MQTT_ADDRESS is removed or renamed consistently
throughout the snippet.

Comment on lines 54 to 57
export PROPLET_DOMAIN_ID="35ab3e93-a483-498d-80f2-8bb060dcfc63"
export PROPLET_CHANNEL_ID="3d24ba2d-c80b-4648-8dd0-213a394d0d4b"
export PROPLET_CLIENT_ID="82b870d6-9589-4a75-93a0-72cc7d2a212f"
export PROPLET_CLIENT_KEY="1aa5c33a-04ff-427e-92d0-24dbc1271c0d"

⚠️ Potential issue | 🟠 Major

Replace real-looking credentials with placeholders.

Line 54-57 contains concrete domain/channel/client IDs and a client key; if any of these are real, this is a credential leak and encourages copy‑paste of secrets into public docs. Please switch to placeholders.

🔧 Suggested doc sanitization
-export PROPLET_DOMAIN_ID="35ab3e93-a483-498d-80f2-8bb060dcfc63"
-export PROPLET_CHANNEL_ID="3d24ba2d-c80b-4648-8dd0-213a394d0d4b"
-export PROPLET_CLIENT_ID="82b870d6-9589-4a75-93a0-72cc7d2a212f"
-export PROPLET_CLIENT_KEY="1aa5c33a-04ff-427e-92d0-24dbc1271c0d"
+export PROPLET_DOMAIN_ID="your-domain-id"
+export PROPLET_CHANNEL_ID="your-channel-id"
+export PROPLET_CLIENT_ID="your-client-id"
+export PROPLET_CLIENT_KEY="your-client-key"
🧰 Tools
🪛 Gitleaks (8.30.0)

[high] 57-57: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🤖 Prompt for AI Agents
In `@proplet/README.md` around lines 54 - 57, The README currently exposes
real-looking secrets (PROPLET_DOMAIN_ID, PROPLET_CHANNEL_ID, PROPLET_CLIENT_ID,
PROPLET_CLIENT_KEY); replace those concrete values with non-sensitive
placeholders (e.g., <DOMAIN_ID>, <CHANNEL_ID>, <CLIENT_ID>, <CLIENT_KEY> or
instructions like "set your value here") and add a short note to not commit real
credentials, updating any example env export lines that reference
PROPLET_DOMAIN_ID, PROPLET_CHANNEL_ID, PROPLET_CLIENT_ID, and PROPLET_CLIENT_KEY
accordingly.

Comment on lines 217 to 263
async fn get_evidence(
&self,
nonce: Option<Vec<u8>>,
_timeout: u64,
) -> Result<AttestationReport> {
use base64::Engine;

info!("Requesting attestation evidence via gRPC: {}", self.endpoint);

let runtime_data = nonce.unwrap_or_default();

// Prepare gRPC request payload
let _request_json = serde_json::json!({
"runtime_data": base64::engine::general_purpose::STANDARD.encode(&runtime_data)
});

// Make HTTP request to attestation agent REST API
// In real implementation, this would use tonic gRPC client
// For now, we'll use a hybrid approach that works with the actual AA
let url = if self.endpoint.starts_with("http://") || self.endpoint.starts_with("https://") {
format!("{}/aa/evidence?runtime_data={}",
self.endpoint.trim_end_matches('/'),
base64::engine::general_purpose::STANDARD.encode(&runtime_data))
} else if self.endpoint.starts_with("unix://") {
// For Unix sockets, we need to convert to HTTP endpoint
// Attestation Agent typically runs on localhost:50002
format!("http://127.0.0.1:50002/aa/evidence?runtime_data={}",
base64::engine::general_purpose::STANDARD.encode(&runtime_data))
} else {
return Err(Error::Attestation(format!("Invalid endpoint format: {}", self.endpoint)));
};

debug!("Requesting evidence from: {}", url);

// Note: This is a simplified implementation
// A production implementation should use the proper gRPC client with tonic
// and the attestation-agent protobuf definitions

info!("Evidence request prepared for endpoint: {}", self.endpoint);

// Return placeholder for now - will be fully implemented with tonic
// The actual attestation happens via NativeEvidenceProvider in KBS client
Ok(AttestationReport {
report: runtime_data,
platform: AttestationPlatform::TDX, // Will be detected properly in full implementation
})
}
⚠️ Potential issue | 🟠 Major

Stub implementation returns misleading hardcoded attestation data.

get_evidence returns a placeholder AttestationReport with hardcoded AttestationPlatform::TDX (line 261) regardless of the actual TEE environment. If callers rely on this report for security decisions, they'll receive incorrect information.

Consider either:

  1. Returning AttestationPlatform::None to indicate the report is not real
  2. Returning an error indicating the full implementation is pending
  3. Adding clear documentation that this is a stub
```diff
         // Return placeholder for now - will be fully implemented with tonic
         // The actual attestation happens via NativeEvidenceProvider in KBS client
         Ok(AttestationReport {
             report: runtime_data,
-            platform: AttestationPlatform::TDX, // Will be detected properly in full implementation
+            platform: AttestationPlatform::None, // Placeholder - actual platform detection pending
         })
```
🤖 Prompt for AI Agents
In `@proplet/src/cdh/attestation/mod.rs` around lines 217 - 263, The current stub
in get_evidence returns an AttestationReport with hardcoded
AttestationPlatform::TDX which is misleading; change get_evidence (the async fn
get_evidence) to avoid returning a fabricated platform: instead return an
Err(Error::Attestation("attestation not implemented" or similar)) or set
AttestationReport.platform to AttestationPlatform::None and clearly log/info
that this is a stub; update the return path where Ok(AttestationReport { ...
AttestationPlatform::TDX }) is created to use Error::Attestation or
AttestationPlatform::None and adjust logs/messages so callers can detect the
unimplemented stub (refer to AttestationReport, AttestationPlatform::TDX, and
Error::Attestation).

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 16

🤖 Fix all issues with AI agents
In `@ATTESTATION_ARCHITECTURE_CLARIFICATION.md`:
- Around line 93-117: The fenced code blocks under "The Correct Architecture"
(the ASCII diagram and the subsequent checklist) lack language identifiers
causing MD040 failures; update the opening triple-backtick fences to include
explicit languages (e.g., replace ``` with ```text for the ASCII diagram block
and ```bash for the checklist block) and ensure all corresponding closing ```
remain intact; apply the same change to the other affected blocks around lines
referenced (the similar blocks at 191-219) so every fenced block has a language
tag.

In `@hal/ubuntu/qemu.sh`:
- Around line 191-212: The attestation agent port is inconsistent: the env var
AA_ATTESTATION_SOCK and QEMU forwarding use 50002 but the systemd unit ExecStart
passes --attestation_sock 127.0.0.1:50000; update the service to use
127.0.0.1:50002 so it matches AA_ATTESTATION_SOCK and the QEMU forward, i.e.,
change the ExecStart in attestation-agent.service to use port 50002 (also scan
for other occurrences of 50000 in the diff and replace them with 50002 to keep
all references aligned).
- Around line 32-34: Remove the hardcoded default credentials and avoid enabling
password SSH auth in the VM image: eliminate the unused PASSWORD variable and
any code paths that set plain_text_passwd or enable PasswordAuthentication;
instead provision an SSH public key for the USER (or require a user-supplied
password input) and ensure the VM image's SSH config disables
PasswordAuthentication by default. Update the symbols USER and any blocks that
reference plain_text_passwd (and the sections around the earlier referenced
ranges) to accept/consume an SSH public key or interactive password input,
remove reliance on forwarded-password auth, and verify QEMU launch/config
generation (QEMU_BINARY-related invocation) no longer injects plain-text
credentials.
- Around line 136-154: The KBS_URL_PLACEHOLDER and KBS_PORT_PLACEHOLDER
substitutions never make it into the VM because /etc/default/proplet (the
write_files entry) does not include corresponding variables; update the
write_files block that creates /etc/default/proplet to include placeholders
(e.g., add PROPLET_KBS_URL=KBS_URL_PLACEHOLDER and
PROPLET_KBS_PORT=KBS_PORT_PLACEHOLDER) so the substitutions are injected into
the VM, or alternatively remove the unused KBS_* substitutions from the
template; locate the write_files entry that writes /etc/default/proplet and
either add the two PROPLET_KBS_* lines to its content or drop the unused
substitutions from the provisioning flow.

In `@hal/ubuntu/README.md`:
- Around line 144-151: The README's Network Ports section conflicts with
qemu.sh: README.md claims "No external attestation ports needed" while qemu.sh
forwards port 50002; either document that port in hal/ubuntu/README.md or remove
the forward in qemu.sh. Update the README's Network Ports block to list **Port
50002** with a short note about its use for attestation/TEEs (or if you choose
removal, delete the "hostfwd=tcp::50002-:50002" entry in qemu.sh), and mention
NativeEvidenceProvider only if no external port remains.
- Around line 91-99: README’s RAM default (8192M) and the qemu.sh variable
RAM="18192M" are inconsistent; pick the intended default and make both sources
match: either change the README table entry for `RAM` to `18192M` to reflect
qemu.sh, or edit qemu.sh to set RAM="8192M" so it matches the README—ensure you
update the `RAM` value in the README table and the `RAM` assignment in qemu.sh
accordingly.
- Line 41: Replace the bold line "**Example workflow:**" with a proper Markdown
heading (e.g., "## Example workflow" or an appropriate heading level) in
hal/ubuntu/README.md to satisfy markdownlint MD036; ensure you use the heading
syntax (hashes), remove the surrounding bold markup, and leave a blank line
after the heading so the following content renders correctly.

In `@PROPLET_TEE_FIX_SUMMARY.md`:
- Around line 320-327: The markdown has multiple fenced code blocks without
language tags (e.g., the "Expected log output" block and the ASCII diagrams)
which triggers MD040; update each triple-backtick block in the affected ranges
to include a language (use "text" for logs/diagrams) — specifically modify the
blocks that contain the log lines referencing
AttestationClient.extend_runtime_measurement(),
CdhHub.record_image_pull_event(), CdhHub.record_wasm_execution_event(),
Runtime.start_app(), and the ASCII Attestation Architecture/Encrypted Workload
Execution Flow diagrams so each block starts with ```text and ends with ```.
- Around line 592-600: Replace the bold "Status: ✅ PRODUCTION READY" with a
proper heading to satisfy MD036; locate the "### Result" section and change the
final status line from bold text to a Markdown heading (e.g., "#### Status: ✅
PRODUCTION READY") so the document uses a heading element instead of bold
formatting.

In `@PROPLET_TEE_IMPLEMENTATION_ANALYSIS.md`:
- Around line 3-12: Change the top lines to use a proper Markdown heading and
fix the date punctuation: replace the bolded date line "**Date:** January 19
2026" with a heading (e.g., "# January 19, 2026") to satisfy MD036 and add the
missing comma after the day, and convert the bold "Executive Summary" to a
heading (e.g., "## Executive Summary") so headings are used instead of bold
text; update any similar bolded section titles in
PROPLET_TEE_IMPLEMENTATION_ANALYSIS.md to proper Markdown headings for
consistency.

In `@proplet/docs/TEE-WORKLOAD-TESTING.md`:
- Line 1000: Update the service status command that currently reads "sudo
systemctl status attestation-agent proplet" by removing the unnecessary
attestation-agent reference so it only checks the proplet service (e.g., "sudo
systemctl status proplet"); locate the line containing the exact string "sudo
systemctl status attestation-agent proplet" in TEE-WORKLOAD-TESTING.md and
replace it, and also search nearby examples or outputs in the same section for
any other uses of "attestation-agent" to remove or annotate per the architecture
note that attestation-agent is not needed.
- Line 193: Replace the hard-coded developer path string
"/home/rodneyosodo/code/absmach/propeller" used in the `cd
/home/rodneyosodo/code/absmach/propeller` command with a generic placeholder or
relative path; for example use "./propeller" or an environment variable
placeholder like "$PROP_PROJECT_DIR" so the instruction works across
environments, and update the line in TEE-WORKLOAD-TESTING.md where that exact
path appears.
- Line 10: The markdown contains an incomplete git clone command ("git clone")
that will fail; update the line in TEE-WORKLOAD-TESTING.md to include the
repository URL or a placeholder/comment indicating it's intentionally omitted
(e.g., replace the lone "git clone" with a full "git clone <repo-url>" or a
clarifying comment), ensuring readers can actually clone the repository or
understand it's a placeholder.
- Around line 155-181: Add a short clarifying note linking the "Service
Architecture" diagram (ports 50002 and 8006) to the "Exposed ports" section by
explaining that the diagram shows internal service-to-service ports used by the
runtime (Attestation Agent gRPC on 50002 and API Server REST on 8006) while the
"Exposed ports" list (1883, 8080, 2222) documents only the externally
reachable/testable ports for the test environment; place this sentence either
directly below the diagram or at the top of the Exposed ports section and use
the exact port identifiers (50002, 8006, 1883, 8080, 2222) so readers understand
which ports are internal vs externally exposed.
- Around line 300-303: The doc currently contradicts itself by declaring
attestation-agent "NOT needed" while later showing service management commands
for it; update the "NOT needed" statement about
attestation-agent/NativeEvidenceProvider to clearly state whether
attestation-agent is optional, context-specific, or legacy (e.g., "optional for
non-native providers / legacy setups") and add a short rationale referencing
NativeEvidenceProvider; then either remove the attestation-agent service
management commands or mark them as conditional examples and prepend a one-line
note explaining when to run them (e.g., for non-native or legacy attestation
flows); ensure all references to attestation-agent and NativeEvidenceProvider
are consistent across the document and update any example blocks that assume
attestation-agent is present.

In `@WHY_API_SERVER_REST.md`:
- Around line 312-324: The fenced code blocks in WHY_API_SERVER_REST.md lack
language identifiers and violate MD040; update each triple-backtick fence around
the Rust snippets to use ```rust and the shell snippet to use ```bash
(specifically the blocks containing NativeEvidenceProvider, KbsClientBuilder,
kbs_client.get_resource and the prerequisites list) so markdownlint passes and
syntax highlighting works.
♻️ Duplicate comments (2)
hal/ubuntu/qemu.sh (2)

160-188: Replace hardcoded SuperMQ credentials with placeholders.

These UUIDs look like real credentials and are embedded directly in the VM image config. Please replace them with placeholders and inject at runtime.


528-581: QEMU options should be built as an array to preserve JSON quoting.

Building QEMU_OPTS as a single string will split the TDX JSON object and break the command line. Please switch to a Bash array and exec with "${QEMU_OPTS[@]}".
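The quoting problem can be seen with a minimal sketch; the `-object` JSON below is an illustrative stand-in, not the exact TDX object qemu.sh builds:

```bash
#!/usr/bin/env bash
# Illustrative only: shows why QEMU options containing JSON must be kept in a
# Bash array. The -object payload is a stand-in, not qemu.sh's real TDX object.
set -euo pipefail

count_args() { echo "$#"; }

TDX_OBJECT='{"qom-type": "tdx-guest", "id": "tdx0"}'

# Flat string: unquoted expansion word-splits the JSON into fragments.
QEMU_OPTS_STR="-object $TDX_OBJECT -m 8192M"
# shellcheck disable=SC2086
count_args $QEMU_OPTS_STR          # prints 7 - the JSON is broken apart

# Array: each option and its JSON value stay intact as single arguments.
QEMU_OPTS=(-object "$TDX_OBJECT" -m 8192M)
count_args "${QEMU_OPTS[@]}"       # prints 4 - JSON survives as one argument
```

With the array form, the launch becomes `exec "$QEMU_BINARY" "${QEMU_OPTS[@]}"` rather than interpolating a single string.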

Comment on lines 93 to 117
## The Correct Architecture

### Minimal Setup (Basic Encrypted Workloads)

```
┌────────────────────┐
│ Proplet │
│ (uses │
│ NativeEvidence │
│ Provider) │
└─────────┬──────────┘
│ Direct /dev/tdx-guest access
│ for KBS attestation
┌────────────────────┐
│ KBS │
│ (verifies and │
│ releases keys) │
└────────────────────┘

✅ This works for basic encrypted image decryption
❌ But runtime measurements are NOT included in attestation
```
⚠️ Potential issue | 🟡 Minor

Specify languages on fenced blocks to satisfy MD040.

Proposed doc fix
````diff
-```
+```text
 ┌────────────────────┐
 │   Proplet          │
 │   (uses            │
 │   NativeEvidence   │
 │   Provider)        │
 └─────────┬──────────┘
           │
           │ Direct /dev/tdx-guest access
           │ for KBS attestation
           │
           ▼
 ┌────────────────────┐
 │   KBS              │
 │   (verifies and    │
 │    releases keys)  │
 └────────────────────┘
 
 ✅ This works for basic encrypted image decryption
 ❌ But runtime measurements are NOT included in attestation
-```
+```

-```bash
+```bash
 ✅ Proplet (with CDH enabled)
 ✅ KBS
 ❌ Attestation Agent - measurements only logged locally
 ❌ API Server REST - not needed
-```
+```
````

Also applies to: 191-219

🤖 Prompt for AI Agents
In `@ATTESTATION_ARCHITECTURE_CLARIFICATION.md` around lines 93 - 117, The fenced
code blocks under "The Correct Architecture" (the ASCII diagram and the
subsequent checklist) lack language identifiers causing MD040 failures; update
the opening triple-backtick fences to include explicit languages (e.g., replace
``` with ```text for the ASCII diagram block and ```bash for the checklist
block) and ensure all corresponding closing ``` remain intact; apply the same
change to the other affected blocks around lines referenced (the similar blocks
at 191-219) so every fenced block has a language tag.

Comment on lines 32 to 34
USER="propeller"
PASSWORD="propeller"
QEMU_BINARY="qemu-system-x86_64"
⚠️ Potential issue | 🟠 Major

Avoid default credentials and password SSH auth in the VM image.

The combination of `plain_text_passwd: propeller`, `PasswordAuthentication yes`, and forwarded SSH makes the VM trivially accessible. Also, `PASSWORD` is defined but unused. Prefer SSH keys (or require a user-provided password and disable password auth by default).

🔒 Suggested hardening (SSH keys)
```diff
-    plain_text_passwd: propeller
-    lock_passwd: false
+    lock_passwd: true
+    ssh_authorized_keys:
+      - SSH_AUTHORIZED_KEY_PLACEHOLDER
...
-ssh_pwauth: true
+ssh_pwauth: false
...
-  - echo 'propeller:propeller' | chpasswd
+  # password auth disabled; rely on SSH keys
```

Also applies to: 100-109, 254-263
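One way to wire the suggested hardening into provisioning, sketched with a dummy key. `SSH_AUTHORIZED_KEY_PLACEHOLDER` is an assumed placeholder name following qemu.sh's existing scheme, not something the script currently defines:

```bash
#!/usr/bin/env bash
# Sketch: inject a user-supplied public key into cloud-init user-data instead
# of shipping a default password. The key below is a dummy stand-in; in real
# use, read it from ~/.ssh/id_ed25519.pub or a script argument.
set -euo pipefail

pubkey="ssh-ed25519 AAAAC3EXAMPLEKEYONLY user@example"  # dummy, not a real key

workdir=$(mktemp -d)
cat > "$workdir/user-data.tmpl" <<'EOF'
users:
  - name: propeller
    lock_passwd: true
    ssh_authorized_keys:
      - SSH_AUTHORIZED_KEY_PLACEHOLDER
ssh_pwauth: false
EOF

# Substitute the placeholder the same way qemu.sh fills its other templates.
sed "s|SSH_AUTHORIZED_KEY_PLACEHOLDER|${pubkey}|" \
  "$workdir/user-data.tmpl" > "$workdir/user-data"

cat "$workdir/user-data"
```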

🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 33-33: PASSWORD appears unused. Verify use (or export if used externally).

(SC2034)

🤖 Prompt for AI Agents
In `@hal/ubuntu/qemu.sh` around lines 32 - 34, Remove the hardcoded default
credentials and avoid enabling password SSH auth in the VM image: eliminate the
unused PASSWORD variable and any code paths that set plain_text_passwd or enable
PasswordAuthentication; instead provision an SSH public key for the USER (or
require a user-supplied password input) and ensure the VM image's SSH config
disables PasswordAuthentication by default. Update the symbols USER and any
blocks that reference plain_text_passwd (and the sections around the earlier
referenced ranges) to accept/consume an SSH public key or interactive password
input, remove reliance on forwarded-password auth, and verify QEMU launch/config
generation (QEMU_BINARY-related invocation) no longer injects plain-text
credentials.

Comment on lines 136 to 154
write_files:
- path: /etc/default/proplet
content: |
# Proplet Environment Variables
PROPLET_LOG_LEVEL=info
PROPLET_INSTANCE_ID=INSTANCE_ID_PLACEHOLDER
PROPLET_DOMAIN_ID=DOMAIN_ID_PLACEHOLDER
PROPLET_CLIENT_ID=CLIENT_ID_PLACEHOLDER
PROPLET_CLIENT_KEY=CLIENT_KEY_PLACEHOLDER
PROPLET_CHANNEL_ID=CHANNEL_ID_PLACEHOLDER
PROPLET_MQTT_ADDRESS=MQTT_ADDRESS_PLACEHOLDER
PROPLET_MQTT_TIMEOUT=30
PROPLET_MQTT_QOS=2
PROPLET_EXTERNAL_WASM_RUNTIME=/usr/local/bin/wasmtime
PROPLET_LIVELINESS_INTERVAL=10
PROPLET_MANAGER_K8S_NAMESPACE=default
PROPLET_CONFIG_FILE=/etc/proplet/config.toml
PROPLET_CONFIG_SECTION=proplet1
permissions: '0644'
⚠️ Potential issue | 🟠 Major

KBS_URL/KBS_PORT substitutions are no-ops.

You substitute KBS_URL_PLACEHOLDER and KBS_PORT_PLACEHOLDER, but those placeholders never appear in user-data, so the values never reach the VM. Add placeholders to the config that Proplet actually reads (e.g., /etc/default/proplet) or remove the vars.

🛠️ One possible fix (inject into /etc/default/proplet)
```diff
       PROPLET_MQTT_ADDRESS=MQTT_ADDRESS_PLACEHOLDER
+      KBS_URL=KBS_URL_PLACEHOLDER
+      KBS_PORT=KBS_PORT_PLACEHOLDER
```

Also applies to: 455-463

🤖 Prompt for AI Agents
In `@hal/ubuntu/qemu.sh` around lines 136 - 154, The KBS_URL_PLACEHOLDER and
KBS_PORT_PLACEHOLDER substitutions never make it into the VM because
/etc/default/proplet (the write_files entry) does not include corresponding
variables; update the write_files block that creates /etc/default/proplet to
include placeholders (e.g., add PROPLET_KBS_URL=KBS_URL_PLACEHOLDER and
PROPLET_KBS_PORT=KBS_PORT_PLACEHOLDER) so the substitutions are injected into
the VM, or alternatively remove the unused KBS_* substitutions from the
template; locate the write_files entry that writes /etc/default/proplet and
either add the two PROPLET_KBS_* lines to its content or drop the unused
substitutions from the provisioning flow.
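A minimal sketch of the injection option, assuming `PROPLET_KBS_URL`/`PROPLET_KBS_PORT` are acceptable key names for proplet's env file (they are not confirmed proplet settings):

```bash
#!/usr/bin/env bash
# Sketch: thread the KBS values through the same placeholder scheme qemu.sh
# uses for the other /etc/default/proplet entries. PROPLET_KBS_URL and
# PROPLET_KBS_PORT are assumed names, not verified proplet config keys.
set -euo pipefail

KBS_URL="https://kbs.example.com"   # example value
KBS_PORT="8080"                     # example value

tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
PROPLET_MQTT_ADDRESS=MQTT_ADDRESS_PLACEHOLDER
PROPLET_KBS_URL=KBS_URL_PLACEHOLDER
PROPLET_KBS_PORT=KBS_PORT_PLACEHOLDER
EOF

# Same sed-style substitution the script applies to its other placeholders.
sed -e "s|KBS_URL_PLACEHOLDER|${KBS_URL}|" \
    -e "s|KBS_PORT_PLACEHOLDER|${KBS_PORT}|" \
    "$tmpl" > "${tmpl}.rendered"

cat "${tmpl}.rendered"
```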

sudo ./qemu.sh run
```

**Example workflow:**
⚠️ Potential issue | 🟡 Minor

Use a heading instead of bold text for “Example workflow”.

This is flagged by markdownlint (MD036). Consider a heading for consistency.

✏️ Suggested change
```diff
-**Example workflow:**
+### Example workflow
```
🤖 Prompt for AI Agents
In `@hal/ubuntu/README.md` at line 41, Replace the bold line "**Example
workflow:**" with a proper Markdown heading (e.g., "## Example workflow" or an
appropriate heading level) in hal/ubuntu/README.md to satisfy markdownlint
MD036; ensure you use the heading syntax (hashes), remove the surrounding bold
markup, and leave a blank line after the heading so the following content
renders correctly.

Comment on lines 155 to 181

```
┌─────────────────────────────────────────────┐
│ Service Architecture │
├─────────────────────────────────────────────┤
│ │
│ Port 50002: Attestation Agent (gRPC) │
│ ┌────────────────────────────────┐ │
│ │ - GetEvidence() │ │
│ │ - GetToken() │ │
│ │ - ExtendRuntimeMeasurement() │ │
│ └────────────────────────────────┘ │
│ ▲ │
│ │ (gRPC calls) │
│ │ │
│ Port 8006: API Server REST (HTTP) │
│ ┌────────────────────────────────┐ │
│ │ - GET /aa/evidence │ │
│ │ - GET /aa/token │ │
│ │ - GET /cdh/resource │ │
│ └────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────┘

⚠️ IMPORTANT: Use port 8006 for curl/HTTP
Port 50002 is gRPC only (causes HTTP/0.9 error)
```
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
# First, let's check the file size and read the relevant sections
wc -l proplet/docs/TEE-WORKLOAD-TESTING.md
```

Repository: rodneyosodo/propeller

Length of output: 106


🏁 Script executed:

```bash
# Read lines 155-181 (Service Architecture section)
sed -n '155,181p' proplet/docs/TEE-WORKLOAD-TESTING.md
```

Repository: rodneyosodo/propeller

Length of output: 1180


🏁 Script executed:

```bash
# Read lines 889-897 (Port reference table section)
sed -n '889,897p' proplet/docs/TEE-WORKLOAD-TESTING.md
```

Repository: rodneyosodo/propeller

Length of output: 498


🏁 Script executed:

```bash
# Search for all port references throughout the document
rg -n "port|Port|PORT" proplet/docs/TEE-WORKLOAD-TESTING.md | head -40
```

Repository: rodneyosodo/propeller

Length of output: 1957


Clarify the relationship between Service Architecture ports and Exposed ports sections.

Lines 155-181 document a Service Architecture with ports 50002 (Attestation Agent gRPC) and 8006 (API Server REST), but the Exposed ports section (lines 889-897) lists only MQTT (1883), KBS (8080), and SSH (2222), with a note stating "No attestation agent ports needed." This creates confusion about whether ports 50002 and 8006 are internal-only, optional for certain scenarios, or part of a different deployment model. Add a brief explanation clarifying the relationship between these two sections (e.g., "The Service Architecture describes internal communication patterns; for testing environments, only the Exposed ports above are required").

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

156-156: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@proplet/docs/TEE-WORKLOAD-TESTING.md` around lines 155 - 181, Add a short
clarifying note linking the "Service Architecture" diagram (ports 50002 and
8006) to the "Exposed ports" section by explaining that the diagram shows
internal service-to-service ports used by the runtime (Attestation Agent gRPC on
50002 and API Server REST on 8006) while the "Exposed ports" list (1883, 8080,
2222) documents only the externally reachable/testable ports for the test
environment; place this sentence either directly below the diagram or at the top
of the Exposed ports section and use the exact port identifiers (50002, 8006,
1883, 8080, 2222) so readers understand which ports are internal vs externally
exposed.


```bash
# Get the WASM module
cd /home/rodneyosodo/code/absmach/propeller
```
⚠️ Potential issue | 🟡 Minor

Replace hard-coded developer path with a generic placeholder.

The path /home/rodneyosodo/code/absmach/propeller is specific to a developer's environment and should be replaced with a generic placeholder or relative path.

📝 Proposed fix
```diff
-cd /home/rodneyosodo/code/absmach/propeller
+cd /path/to/propeller  # Replace with your propeller repository path
```
🤖 Prompt for AI Agents
In `@proplet/docs/TEE-WORKLOAD-TESTING.md` at line 193, Replace the hard-coded
developer path string "/home/rodneyosodo/code/absmach/propeller" used in the `cd
/home/rodneyosodo/code/absmach/propeller` command with a generic placeholder or
relative path; for example use "./propeller" or an environment variable
placeholder like "$PROP_PROJECT_DIR" so the instruction works across
environments, and update the line in TEE-WORKLOAD-TESTING.md where that exact
path appears.

Comment on lines 300 to 303
**NOT needed:**

- ❌ Attestation Agent - Proplet uses NativeEvidenceProvider (direct hardware access)
- ❌ API Server REST - Proplet doesn't make HTTP calls to attestation services
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
# First, locate the file and check its size
find . -name "TEE-WORKLOAD-TESTING.md" -type f 2>/dev/null
```

Repository: rodneyosodo/propeller

Length of output: 103


🏁 Script executed:

```bash
# Get line count to understand file size
wc -l proplet/docs/TEE-WORKLOAD-TESTING.md 2>/dev/null || echo "File not found or access denied"
```

Repository: rodneyosodo/propeller

Length of output: 106


🏁 Script executed:

```bash
# Read lines 300-303 to verify the "NOT needed" claim
sed -n '295,310p' proplet/docs/TEE-WORKLOAD-TESTING.md
```

Repository: rodneyosodo/propeller

Length of output: 571


🏁 Script executed:

```bash
# Read lines around 464, 700, 705, 1000 to verify attestation-agent references
echo "=== Lines 460-470 ===" && sed -n '460,470p' proplet/docs/TEE-WORKLOAD-TESTING.md
echo "=== Lines 695-710 ===" && sed -n '695,710p' proplet/docs/TEE-WORKLOAD-TESTING.md
echo "=== Lines 995-1005 ===" && sed -n '995,1005p' proplet/docs/TEE-WORKLOAD-TESTING.md
```

Repository: rodneyosodo/propeller

Length of output: 905


Clarify whether attestation-agent is required for encrypted workload testing.

Lines 300-303 explicitly state that Attestation Agent is "NOT needed" because Proplet uses NativeEvidenceProvider, yet the same document later references attestation-agent in service management commands (lines 464, 700, 705, 1000).

Either update lines 300-303 to accurately reflect its role in the setup, or remove the service management commands that reference attestation-agent. Clarify if attestation-agent is optional, context-specific, or a legacy reference.

🤖 Prompt for AI Agents
In `@proplet/docs/TEE-WORKLOAD-TESTING.md` around lines 300 - 303, The doc
currently contradicts itself by declaring attestation-agent "NOT needed" while
later showing service management commands for it; update the "NOT needed"
statement about attestation-agent/NativeEvidenceProvider to clearly state
whether attestation-agent is optional, context-specific, or legacy (e.g.,
"optional for non-native providers / legacy setups") and add a short rationale
referencing NativeEvidenceProvider; then either remove the attestation-agent
service management commands or mark them as conditional examples and prepend a
one-line note explaining when to run them (e.g., for non-native or legacy
attestation flows); ensure all references to attestation-agent and
NativeEvidenceProvider are consistent across the document and update any example
blocks that assume attestation-agent is present.

```bash
# 4. In VM, verify configuration
ssh -p 2222 propeller@localhost
grep PROPLET_CDH /etc/default/proplet
sudo systemctl status attestation-agent proplet
```

⚠️ Potential issue | 🟡 Minor

Remove attestation-agent reference from status check.

Consistent with the architecture that states attestation-agent is not needed (line 302), remove it from the service status check.

🔧 Proposed fix
-sudo systemctl status attestation-agent proplet
+sudo systemctl status proplet
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
sudo systemctl status attestation-agent proplet
sudo systemctl status proplet
🤖 Prompt for AI Agents
In `@proplet/docs/TEE-WORKLOAD-TESTING.md` at line 1000, Update the service status
command that currently reads "sudo systemctl status attestation-agent proplet"
by removing the unnecessary attestation-agent reference so it only checks the
proplet service (e.g., "sudo systemctl status proplet"); locate the line
containing the exact string "sudo systemctl status attestation-agent proplet" in
TEE-WORKLOAD-TESTING.md and replace it, and also search nearby examples or
outputs in the same section for any other uses of "attestation-agent" to remove
or annotate per the architecture note that attestation-agent is not needed.

Comment on lines 312 to 324
```rust
// This code is INSIDE proplet
// No external services needed!

let evidence_provider = Box::new(NativeEvidenceProvider::new()?);
let mut kbs_client = KbsClientBuilder::with_evidence_provider(
evidence_provider,
&kbs_url
).build()?;

// Fetches keys from KBS with automatic attestation
let key = kbs_client.get_resource("kbs:///default/keys/key1").await?;
```

⚠️ Potential issue | 🟡 Minor

Add languages to fenced code blocks to satisfy MD040.
This will fix markdownlint and improve syntax highlighting.

Proposed doc fix
-```
+```rust
 // This code is INSIDE proplet
 // No external services needed!
 
 let evidence_provider = Box::new(NativeEvidenceProvider::new()?);
 let mut kbs_client = KbsClientBuilder::with_evidence_provider(
     evidence_provider,
     &kbs_url
 ).build()?;
 
 // Fetches keys from KBS with automatic attestation
 let key = kbs_client.get_resource("kbs:///default/keys/key1").await?;
-```
+```

-```
+```rust
 use kbs_protocol::evidence_provider::NativeEvidenceProvider;
 use kbs_protocol::KbsClientBuilder;
 
 // Direct hardware access, no services needed
 let provider = Box::new(NativeEvidenceProvider::new()?);
 let client = KbsClientBuilder::with_evidence_provider(provider, kbs_url).build()?;
-```
+```

-```
+```bash
 # Prerequisites for encrypted workloads
 
 REQUIRED:
 ✅ KBS (Key Broker Service) - Manages encryption keys
 ✅ CDH-enabled Proplet - Handles decryption
 
 OPTIONAL (for testing/debugging only):
 ⚠️  Attestation Agent - Only if you need gRPC endpoint
 ⚠️  API Server REST - Only if you need HTTP endpoint for curl/scripts
 
 NOT NEEDED:
 ❌ Attestation Agent - Proplet uses NativeEvidenceProvider
 ❌ API Server REST - Proplet doesn't make HTTP calls
-```
+```

Also applies to: 332-340, 357-371

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

321-321: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@WHY_API_SERVER_REST.md` around lines 312 - 324, The fenced code blocks in
WHY_API_SERVER_REST.md lack language identifiers and violate MD040; update each
triple-backtick fence around the Rust snippets to use ```rust and the shell
snippet to use ```bash (specifically the blocks containing
NativeEvidenceProvider, KbsClientBuilder, kbs_client.get_resource and the
prerequisites list) so markdownlint passes and syntax highlighting works.

Signed-off-by: Rodney Osodo <socials@rodneyosodo.com>
…r type matching

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🤖 Fix all issues with AI agents
In `@proplet/Cargo.toml`:
- Around line 28-30: The git dependencies attestation-agent, image-rs,
kbs_protocol (and resource_uri if present) in Cargo.toml currently point to a
branch ("enable-wasm-workloads") only; pin each to a specific commit by adding a
rev = "<commit-sha>" (or tag) to the dependency table (e.g. attestation-agent,
image-rs, kbs_protocol) so Cargo fetches a reproducible revision instead of the
moving branch; update each dependency entry to include rev = "..." and remove
reliance on the branch alone.

In `@proplet/README.md`:
- Line 206: The README contains a user-specific hardcoded path
"/home/rodneyosodo/code/absmach/propeller/proplet"; replace that literal with a
generic placeholder (e.g. /path/to/project/proplet) or a relative path (e.g.
./proplet) so other users can follow the instructions; update the single
occurrence of the path string in README.md accordingly.

In `@proplet/src/config.rs`:
- Around line 262-264: The code sets config.layer_store_path from
env::var("PROPLET_LAYER_STORE_PATH") but does not treat the empty string as None
like other fields; update the assignment so that when env::var returns Ok(val)
you check if val.trim().is_empty() and only set config.layer_store_path to
Some/non-empty value (or leave it None) if non-empty, mirroring the handling
used for kbs_uri and aa_config_path; reference the env::var call and the
config.layer_store_path field to locate and change the logic.
- Around line 321-328: Add an assertion that verifies the default
pull_concurrent_limit (field pull_concurrent_limit on the config variable)
equals 4 inside the same cfg(feature = "tee") test block; i.e., update the test
around the config variable (the block containing #[cfg(feature = "tee")] and
assertions for tee_enabled, kbs_uri, aa_config_path, layer_store_path) to
include an assert_eq!(config.pull_concurrent_limit, 4) so the default is
explicitly checked.

In `@proplet/src/main.rs`:
- Around line 86-97: CI failed due to rustfmt style violations; run `cargo fmt`
to reformat proplet/src/main.rs (particularly the block around
config.tee_enabled, the match on TeeWasmRuntime::new, and the
PropletService::with_tee_runtime construction) and commit the resulting changes
so the match arms and indentation conform to rustfmt rules.
- Around line 86-103: When config.tee_enabled is true and
TeeWasmRuntime::new(&config).await returns Err, don't silently fall back to
PropletService::new; instead fail fast: in the Err(_) arm of the match log a
clear, contextual error message including the underlying error (from
TeeWasmRuntime::new) and then propagate the error or terminate startup (e.g.,
return Err(...) from main or call process::exit(1)). Update the match around
TeeWasmRuntime::new so the Err branch no longer constructs PropletService::new
but instead fails startup; keep successful path creating
Arc::new(PropletService::with_tee_runtime(...)) unchanged.

In `@proplet/src/runtime/tee_runtime.rs`:
- Around line 350-361: The heuristic using oci_ref.contains("/") ||
oci_ref.contains(":") is too permissive; replace it with a proper validator
(e.g., add a helper like is_valid_oci_ref(reference: &str) -> bool) and call
that instead of the current check in the block that reads config.cli_args and
before calling pull_and_decrypt_wasm and run_wasm. Implement is_valid_oci_ref to
reject relative/filesystem-like values (e.g., starting with "./" or "/"),
require a registry-style host component with a dot or port (e.g., host.some.tld
or host:port) followed by a '/' and a repo name, and optionally accept
tag/digest syntax after the repo (':' or '@'); use this function to gate
pull_and_decrypt_wasm so strings like "arg:value" or "./path/to/file" no longer
trigger an OCI pull.
- Line 317: The call currently passes an empty argument slice to the WASM
function; update the invocation to construct and pass the actual arguments from
config.args instead of &[]: map config.args into the expected wasmtime values
(matching the function signature param types) and create a &[Val] (or Vec<Val>)
to pass into func.call(&mut store, &arg_vals, &mut results), validating the
number and types against the function’s params and return types and returning a
clear error if there is a mismatch. Ensure you reference config.args, func.call,
store and results when locating where to convert and pass the arguments.
- Line 87: Remove the global side-effect call
std::env::set_var("OCICRYPT_KEYPROVIDER_CONFIG", config_path) from
TeeWasmRuntime::new() to avoid undefined behavior in multi-threaded async
contexts; instead propagate config_path explicitly to the OCICRYPT consumer(s)
(the code that relies on OCICRYPT_KEYPROVIDER_CONFIG) by adding a parameter or
API call that accepts the path (e.g., extend the constructor or the helper that
initializes ocicrypt to take config_path) and use that value when initializing
or calling the ocicrypt-related functions, ensuring no process-wide environment
mutation occurs.
♻️ Duplicate comments (2)
hal/ubuntu/qemu.sh (2)

198-205: KBS_URL override is currently a no-op.

COCO_KP_KBS_URL is hardcoded while the script replaces KBS_URL_PLACEHOLDER, so KBS_URL never reaches the VM. Use a placeholder (or drop the substitution).

🛠️ Proposed fix
-      COCO_KP_KBS_URL=http://10.0.2.2:8082
+      COCO_KP_KBS_URL=KBS_URL_PLACEHOLDER

Also applies to: 515-522


587-641: Avoid string-built QEMU_OPTS; JSON args will be mangled.

The -object {...} JSON is embedded in a single string, so shell expansion strips quotes (Shellcheck SC2089/SC2090). Build the command as an array and exec with "${QEMU_OPTS[@]}".

🛠️ Proposed fix (array-based QEMU args)
-    QEMU_CMD="$QEMU_BINARY"
-    QEMU_OPTS="-name $VM_NAME"
-    QEMU_OPTS="$QEMU_OPTS -m $RAM"
+    QEMU_CMD="$QEMU_BINARY"
+    QEMU_OPTS=()
+    QEMU_OPTS+=(-name "$VM_NAME")
+    QEMU_OPTS+=(-m "$RAM")
@@
-        QEMU_OPTS="$QEMU_OPTS -object {\"qom-type\":\"tdx-guest\",\"id\":\"tdx0\",\"quote-generation-socket\":{\"type\":\"vsock\",\"cid\":\"2\",\"port\":\"4050\"}}"
+        QEMU_OPTS+=(-object '{"qom-type":"tdx-guest","id":"tdx0","quote-generation-socket":{"type":"vsock","cid":"2","port":"4050"}}')
@@
-    $QEMU_CMD $QEMU_OPTS
+    "$QEMU_CMD" "${QEMU_OPTS[@]}"
🧹 Nitpick comments (3)
proplet/src/service.rs (2)

85-111: Consider reducing code duplication between constructors.

The with_tee_runtime constructor duplicates most of new(). Consider extracting common initialization into a helper or using a builder pattern. This is optional since the duplication is limited and feature-gated.


460-478: Image URL passed via cli_args creates an implicit contract.

The OCI reference for encrypted workloads is passed to the TEE runtime by prepending it to cli_args. The TeeWasmRuntime::start_app then extracts it from cli_args[0]. This implicit contract works but is fragile.

Consider adding a dedicated field to StartConfig for the OCI reference to make the contract explicit.

proplet/src/runtime/tee_runtime.rs (1)

366-372: stop_app and get_pid are no-ops.

The TEE runtime spawns external WASM processes via Command::new() but doesn't track their PIDs. This means:

  • stop_app cannot terminate running tasks
  • get_pid always returns None, disabling process monitoring

If task lifecycle management is required for TEE workloads, consider tracking spawned process handles.
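One way to sketch the suggested tracking (the `PidRegistry` type and its method names are hypothetical, not part of the existing crate):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical registry mapping task IDs to spawned process PIDs.
// A real implementation would store Child handles so stop_app can
// terminate the process, not merely forget its PID.
#[derive(Default, Clone)]
struct PidRegistry {
    pids: Arc<Mutex<HashMap<String, u32>>>,
}

impl PidRegistry {
    fn register(&self, id: &str, pid: u32) {
        self.pids.lock().unwrap().insert(id.to_string(), pid);
    }

    fn get_pid(&self, id: &str) -> Option<u32> {
        self.pids.lock().unwrap().get(id).copied()
    }

    // stop_app would look the PID up, signal the process, then remove it.
    fn remove(&self, id: &str) -> Option<u32> {
        self.pids.lock().unwrap().remove(id)
    }
}

fn main() {
    let registry = PidRegistry::default();
    registry.register("task-1", 4242);
    assert_eq!(registry.get_pid("task-1"), Some(4242));
    assert_eq!(registry.remove("task-1"), Some(4242));
    assert_eq!(registry.get_pid("task-1"), None);
}
```

The `Arc<Mutex<...>>` matches the shared async-safe map the review prompt describes; a `DashMap` would serve equally well.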

Comment on lines +28 to +30
```toml
attestation-agent = { git = "https://github.com/rodneyosodo/guest-components", branch = "enable-wasm-workloads", optional = true, default-features = false, features = ["rust-crypto", "kbs"] }
image-rs = { git = "https://github.com/rodneyosodo/guest-components", branch = "enable-wasm-workloads", optional = true, default-features = false, features = ["encryption-ring", "keywrap-grpc", "oci-client-rustls", "signature-cosign-rustls", "futures"] }
kbs_protocol = { git = "https://github.com/rodneyosodo/guest-components", branch = "enable-wasm-workloads", features = ["background_check", "rust-crypto"], default-features = false }
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
cat -n proplet/Cargo.toml | grep -A 30 "^\s*28\s"
```

Repository: rodneyosodo/propeller

Length of output: 1922


Git dependencies lack revision pins for reproducible builds.

The git dependencies for attestation-agent, image-rs, kbs_protocol, and resource_uri specify only a branch (enable-wasm-workloads) without a rev or tag. This makes builds non-reproducible as upstream changes to the branch could introduce breaking changes unexpectedly. Consider pinning each dependency to a specific commit revision for stability and consistency.

🤖 Prompt for AI Agents
In `@proplet/Cargo.toml` around lines 28 - 30, The git dependencies
attestation-agent, image-rs, kbs_protocol (and resource_uri if present) in
Cargo.toml currently point to a branch ("enable-wasm-workloads") only; pin each
to a specific commit by adding a rev = "<commit-sha>" (or tag) to the dependency
table (e.g. attestation-agent, image-rs, kbs_protocol) so Cargo fetches a
reproducible revision instead of the moving branch; update each dependency entry
to include rev = "..." and remove reliance on the branch alone.
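A pinned entry could look like the following; the `rev` value here is a placeholder, so substitute the actual commit SHA from the `enable-wasm-workloads` branch:

```toml
# Keeping the branch is fine for readability, but rev is what Cargo resolves.
attestation-agent = { git = "https://github.com/rodneyosodo/guest-components", branch = "enable-wasm-workloads", rev = "<commit-sha>", optional = true, default-features = false, features = ["rust-crypto", "kbs"] }
```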

5. **Build and run Proplet**:

```bash
cd /home/rodneyosodo/code/absmach/propeller/proplet
```

⚠️ Potential issue | 🟡 Minor

Replace user-specific hardcoded path with a generic placeholder.

The path /home/rodneyosodo/code/absmach/propeller/proplet is user-specific and won't work for other users. Use a generic placeholder or relative path.

Suggested fix
-cd /home/rodneyosodo/code/absmach/propeller/proplet
+cd /path/to/propeller/proplet
🤖 Prompt for AI Agents
In `@proplet/README.md` at line 206, The README contains a user-specific hardcoded
path "/home/rodneyosodo/code/absmach/propeller/proplet"; replace that literal
with a generic placeholder (e.g. /path/to/project/proplet) or a relative path
(e.g. ./proplet) so other users can follow the instructions; update the single
occurrence of the path string in README.md accordingly.

Comment on lines +262 to +264
```rust
if let Ok(val) = env::var("PROPLET_LAYER_STORE_PATH") {
    config.layer_store_path = val;
}
```

⚠️ Potential issue | 🟡 Minor

Empty string handling inconsistent for layer_store_path.

Unlike kbs_uri and aa_config_path which convert empty strings to None, layer_store_path accepts any value including empty strings. An empty PROPLET_LAYER_STORE_PATH="" could cause path resolution issues.

Suggested fix
             if let Ok(val) = env::var("PROPLET_LAYER_STORE_PATH") {
-                config.layer_store_path = val;
+                if !val.is_empty() {
+                    config.layer_store_path = val;
+                }
             }
🤖 Prompt for AI Agents
In `@proplet/src/config.rs` around lines 262 - 264, The code sets
config.layer_store_path from env::var("PROPLET_LAYER_STORE_PATH") but does not
treat the empty string as None like other fields; update the assignment so that
when env::var returns Ok(val) you check if val.trim().is_empty() and only set
config.layer_store_path to Some/non-empty value (or leave it None) if non-empty,
mirroring the handling used for kbs_uri and aa_config_path; reference the
env::var call and the config.layer_store_path field to locate and change the
logic.

Comment on lines 86 to 103
```rust
#[cfg(feature = "tee")]
let service = if config.tee_enabled {
    match TeeWasmRuntime::new(&config).await {
        Ok(tee_runtime) => {
            Arc::new(PropletService::with_tee_runtime(
                config.clone(),
                pubsub,
                runtime,
                Arc::new(tee_runtime),
            ))
        }
        Err(_) => {
            Arc::new(PropletService::new(config.clone(), pubsub, runtime))
        }
    }
} else {
    Arc::new(PropletService::new(config.clone(), pubsub, runtime))
};
```

⚠️ Potential issue | 🟠 Major

Don’t silently fall back when TEE init fails.

If tee_enabled is true and TeeWasmRuntime::new fails, the current fallback silently runs without TEE. That can violate confidentiality expectations. Prefer failing fast (or explicitly disabling tee with a clear log).

🛠️ Proposed fix (fail fast on TEE init failure)
-    let service = if config.tee_enabled {
-        match TeeWasmRuntime::new(&config).await {
-            Ok(tee_runtime) => {
-                Arc::new(PropletService::with_tee_runtime(
-                    config.clone(),
-                    pubsub,
-                    runtime,
-                    Arc::new(tee_runtime),
-                ))
-            }
-            Err(_) => {
-                Arc::new(PropletService::new(config.clone(), pubsub, runtime))
-            }
-        }
-    } else {
-        Arc::new(PropletService::new(config.clone(), pubsub, runtime))
-    };
+    let service = if config.tee_enabled {
+        let tee_runtime = TeeWasmRuntime::new(&config)
+            .await
+            .map_err(|e| anyhow::anyhow!("TEE runtime init failed: {e}"))?;
+        Arc::new(PropletService::with_tee_runtime(
+            config.clone(),
+            pubsub,
+            runtime,
+            Arc::new(tee_runtime),
+        ))
+    } else {
+        Arc::new(PropletService::new(config.clone(), pubsub, runtime))
+    };
🧰 Tools
🪛 GitHub Actions: Rust Continuous Integration Pipeline

[error] 86-97: Rustfmt formatting check failed. Run 'cargo fmt' to fix code style issues in this file. Command exited with code 1.

🤖 Prompt for AI Agents
In `@proplet/src/main.rs` around lines 86 - 103, When config.tee_enabled is true
and TeeWasmRuntime::new(&config).await returns Err, don't silently fall back to
PropletService::new; instead fail fast: in the Err(_) arm of the match log a
clear, contextual error message including the underlying error (from
TeeWasmRuntime::new) and then propagate the error or terminate startup (e.g.,
return Err(...) from main or call process::exit(1)). Update the match around
TeeWasmRuntime::new so the Err branch no longer constructs PropletService::new
but instead fails startup; keep successful path creating
Arc::new(PropletService::with_tee_runtime(...)) unchanged.

```rust
std::fs::write(&config_path, config_content)
    .context("Failed to write keyprovider config")?;

std::env::set_var("OCICRYPT_KEYPROVIDER_CONFIG", config_path);
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

std::env::set_var Rust multi-threaded unsafe soundness

💡 Result:

std::env::set_var (and remove_var) is unsound in multi-threaded programs on non-Windows platforms because the process environment is not reliably thread-safe there: if any other thread concurrently reads or writes the environment (often indirectly via libc / system APIs), mutating it can lead to memory unsafety (including crashes/UB). Rust can’t practically guarantee that “no other thread reads the environment”, because many functions/libraries may consult it without going through Rust’s std::env APIs (e.g., things like address resolution). (doc.rust-lang.org)

Because this violates Rust’s “no UB in safe code” rule, Rust 2024 marks std::env::set_var/remove_var as unsafe fn (while older editions may still allow calling them without an unsafe block for compatibility). (doc.rust-lang.org)

Practical guidance:

  • Windows: calling is always safe (even multi-threaded). (doc.rust-lang.org)
  • Non-Windows + multi-threaded: the only generally safe option is don’t call set_var/remove_var at all. (doc.rust-lang.org)
  • If your goal is configuring a child process, prefer std::process::Command::env(...) / env_remove(...) instead of mutating the global process environment. (doc.rust-lang.org)

Citations:


🏁 Script executed:

```bash
# Find the file and examine the context around line 87 and the function call site
fd tee_runtime.rs --type f
```

Repository: rodneyosodo/propeller

Length of output: 186


🏁 Script executed:

```bash
# Get the full context of the setup_keyprovider_config function and its call sites
cat -n proplet/src/runtime/tee_runtime.rs | head -100
```

Repository: rodneyosodo/propeller

Length of output: 3825


🏁 Script executed:

```bash
# Search for where setup_keyprovider_config is called
rg "setup_keyprovider_config" proplet/src/ -B 3 -A 3
```

Repository: rodneyosodo/propeller

Length of output: 1041


Remove std::env::set_var to avoid undefined behavior in multi-threaded contexts.

std::env::set_var is unsound on non-Windows platforms when called in multi-threaded code. Since TeeWasmRuntime::new() is async and may run on a tokio executor with multiple threads, this call at line 87 can cause undefined behavior—the process environment is not thread-safe, and other threads or libraries may concurrently read environment variables.

Pass the config path explicitly to the libraries that need it instead of mutating the global process environment. Alternatively, ensure this code only runs in a guaranteed single-threaded context before the async runtime spawns multiple threads.

🤖 Prompt for AI Agents
In `@proplet/src/runtime/tee_runtime.rs` at line 87, Remove the global side-effect
call std::env::set_var("OCICRYPT_KEYPROVIDER_CONFIG", config_path) from
TeeWasmRuntime::new() to avoid undefined behavior in multi-threaded async
contexts; instead propagate config_path explicitly to the OCICRYPT consumer(s)
(the code that relies on OCICRYPT_KEYPROVIDER_CONFIG) by adding a parameter or
API call that accepts the path (e.g., extend the constructor or the helper that
initializes ocicrypt to take config_path) and use that value when initializing
or calling the ocicrypt-related functions, ensuring no process-wide environment
mutation occurs.
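A minimal sketch of the explicit-propagation alternative, assuming the ocicrypt initialization can be extended to accept the path; the `OcicryptSettings` name and API are illustrative, not the crate's real interface:

```rust
use std::path::{Path, PathBuf};

// Carry the keyprovider config path in a value instead of mutating the
// process-wide environment, which is unsound under a multi-threaded
// tokio runtime on non-Windows platforms.
struct OcicryptSettings {
    keyprovider_config_path: PathBuf,
}

impl OcicryptSettings {
    fn new(path: impl Into<PathBuf>) -> Self {
        Self {
            keyprovider_config_path: path.into(),
        }
    }

    // Consumers read the path from here rather than from the
    // OCICRYPT_KEYPROVIDER_CONFIG environment variable.
    fn keyprovider_config(&self) -> &Path {
        &self.keyprovider_config_path
    }
}

fn main() {
    let settings = OcicryptSettings::new("/tmp/proplet/ocicrypt.json");
    assert_eq!(
        settings.keyprovider_config(),
        Path::new("/tmp/proplet/ocicrypt.json")
    );
}
```

If the underlying library only reads the environment variable, another safe option is setting it before the tokio runtime starts, while the process is still single-threaded.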

```rust
    })
    .collect();

func.call(&mut store, &[], &mut results)?;
```

⚠️ Potential issue | 🟠 Major

Function arguments from config.args are not passed to the WASM function.

The func.call(&mut store, &[], &mut results) always passes an empty slice for arguments, ignoring config.args which contains the input values (e.g., [10, 20] from req.inputs). This means WASM functions expecting arguments will receive none.

Suggested fix
+        // Convert config.args to WASM values
+        let func_params: Vec<_> = func.ty(&store).params().collect();
+        let args: Vec<Val> = config
+            .args
+            .iter()
+            .zip(func_params.iter())
+            .map(|(arg, param_type)| match param_type {
+                wasmtime::ValType::I32 => Val::I32(*arg as i32),
+                wasmtime::ValType::I64 => Val::I64(*arg as i64),
+                _ => Val::I32(*arg as i32), // fallback
+            })
+            .collect();
+
-        func.call(&mut store, &[], &mut results)?;
+        func.call(&mut store, &args, &mut results)?;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func.call(&mut store, &[], &mut results)?;
// Convert config.args to WASM values
let func_params: Vec<_> = func.ty(&store).params().collect();
let args: Vec<Val> = config
.args
.iter()
.zip(func_params.iter())
.map(|(arg, param_type)| match param_type {
wasmtime::ValType::I32 => Val::I32(*arg as i32),
wasmtime::ValType::I64 => Val::I64(*arg as i64),
_ => Val::I32(*arg as i32), // fallback
})
.collect();
func.call(&mut store, &args, &mut results)?;
🤖 Prompt for AI Agents
In `@proplet/src/runtime/tee_runtime.rs` at line 317, The call currently passes an
empty argument slice to the WASM function; update the invocation to construct
and pass the actual arguments from config.args instead of &[]: map config.args
into the expected wasmtime values (matching the function signature param types)
and create a &[Val] (or Vec<Val>) to pass into func.call(&mut store, &arg_vals,
&mut results), validating the number and types against the function’s params and
return types and returning a clear error if there is a mismatch. Ensure you
reference config.args, func.call, store and results when locating where to
convert and pass the arguments.

Comment on lines +350 to +361
```rust
if !config.cli_args.is_empty() {
    if let Some(oci_ref) = config.cli_args.first() {
        if oci_ref.contains("/") || oci_ref.contains(":") {
            let wasm_path = self.pull_and_decrypt_wasm(oci_ref).await?;

            let mut exec_config = config.clone();
            exec_config.cli_args.remove(0);

            return self.run_wasm(&wasm_path, &exec_config);
        }
    }
}
```
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

OCI reference detection heuristic is fragile.

The check oci_ref.contains("/") || oci_ref.contains(":") at line 352 may false-positive on non-OCI strings like "arg:value" or relative paths like "./path/to/file". Consider using a more robust validation such as checking for a registry pattern (e.g., domain.com/repo:tag).

🤖 Prompt for AI Agents
In `@proplet/src/runtime/tee_runtime.rs` around lines 350 - 361, The heuristic
using oci_ref.contains("/") || oci_ref.contains(":") is too permissive; replace
it with a proper validator (e.g., add a helper like is_valid_oci_ref(reference:
&str) -> bool) and call that instead of the current check in the block that
reads config.cli_args and before calling pull_and_decrypt_wasm and run_wasm.
Implement is_valid_oci_ref to reject relative/filesystem-like values (e.g.,
starting with "./" or "/"), require a registry-style host component with a dot
or port (e.g., host.some.tld or host:port) followed by a '/' and a repo name,
and optionally accept tag/digest syntax after the repo (':' or '@'); use this
function to gate pull_and_decrypt_wasm so strings like "arg:value" or
"./path/to/file" no longer trigger an OCI pull.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@proplet/src/runtime/tee_runtime.rs`:
- Around line 238-277: run_wasm_embedded currently ignores StartConfig.args
while run_wasm forwards them; update run_wasm_embedded to forward config.args to
the embedded runtime invocation instead of the empty slice. Locate the call in
run_wasm_embedded where an empty args slice is passed (the call analogous to
run_wasm) and replace it with the config.args converted to the expected type
(e.g., a slice of &str or CString as required by the embedded runtime API) so
embedded execution receives the same CLI arguments as run_wasm.
- Around line 366-372: stop_app and get_pid are currently no-ops; implement real
process tracking and termination: maintain a shared async-safe map (e.g.
HashMap<String,u32> or HashMap<String, Child/ProcessHandle> behind an
Arc<Mutex/RwLock> or DashMap) that records PIDs/handles when launching TEE
workloads, update that map in the launcher, have get_pid(&self, id: &str) look
up and return the PID Option from that map, and have stop_app(&self, id: String)
look up the PID/handle, send a graceful terminate (or SIGKILL fallback) to the
process using the stored handle (or std/tokio process APIs), remove the entry
from the map, and return an appropriate Result; add error handling for missing
entries and failed kills and consider adding a TODO comment if full signal
semantics are deferred.
- Around line 336-344: The attestation call in tee_runtime.rs currently discards
the result from self.attestation_agent.get_evidence(b"wasm-runner") into
_evidence, leaving the evidence unused and undocumented; update the block around
attestation_agent / get_evidence to either (a) explicitly validate or forward
the returned evidence (e.g., call a validate_evidence or send_to_kbs function)
or (b) at minimum replace the underscore with a named variable and add a clear
comment explaining that the evidence is intentionally not used beyond
attestation (or why validation happens elsewhere), and consider logging the
evidence metadata via processLogger/debug for diagnostics; reference the
attestation_agent, get_evidence, and the local _evidence variable when making
this change.

In `@proplet/src/tee_detection.rs`:
- Around line 127-133: The current SEV detection returns
TeeDetection::with_details(TeeType::Sev, "device_file", "/dev/sev exists") when
Path::new("/dev/sev").exists(), which can produce false positives on hosts;
update the detection in the same function to first look for guest-specific
indicators (for example check Path::new("/dev/sev-guest").exists()) and then
fall back to verifying SEV guest EFI/attestation variables (e.g. inspect efivars
under /sys/firmware/efi/efivars for SEV guest-related variable names) before
returning TeeDetection::with_details; only return SEV if one of these
guest-specific checks succeeds, otherwise skip or mark as uncertain.
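A sketch of the guest-specific check described above; the efivars fallback and the `SevGuest` substring are illustrative assumptions, not an exhaustive list of SEV guest EFI variable names:

```rust
use std::path::Path;

// /dev/sev also exists on SEV-capable hosts, so check the guest-only
// device node /dev/sev-guest first, then fall back to looking for
// SEV-guest-related EFI variables that only appear inside a guest.
fn looks_like_sev_guest() -> bool {
    if Path::new("/dev/sev-guest").exists() {
        return true;
    }
    std::fs::read_dir("/sys/firmware/efi/efivars")
        .map(|entries| {
            entries
                .filter_map(Result::ok)
                .any(|e| e.file_name().to_string_lossy().contains("SevGuest"))
        })
        .unwrap_or(false)
}

fn main() {
    // On an ordinary build machine neither indicator is present.
    println!("sev guest: {}", looks_like_sev_guest());
}
```

The detection function would then only return `TeeType::Sev` when this guest-specific check succeeds, instead of on the bare existence of `/dev/sev`.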
♻️ Duplicate comments (7)
proplet/src/config.rs (2)

262-264: Empty string handling inconsistent for layer_store_path.

Unlike kbs_uri and aa_config_path which convert empty strings to None, layer_store_path accepts any value including empty strings. An empty PROPLET_LAYER_STORE_PATH="" could cause path resolution issues.

Suggested fix
             if let Ok(val) = env::var("PROPLET_LAYER_STORE_PATH") {
-                config.layer_store_path = val;
+                if !val.is_empty() {
+                    config.layer_store_path = val;
+                }
             }

321-328: Test missing assertion for pull_concurrent_limit default.

The test verifies defaults for tee_enabled, kbs_uri, aa_config_path, and layer_store_path, but omits pull_concurrent_limit which defaults to 4.

Add missing assertion
             assert!(!config.tee_enabled);
             assert!(config.kbs_uri.is_none());
             assert!(config.aa_config_path.is_none());
             assert_eq!(config.layer_store_path, "/tmp/proplet/layers");
+            assert_eq!(config.pull_concurrent_limit, 4);
         }
proplet/src/runtime/tee_runtime.rs (3)

84-90: std::env::set_var is unsound in multi-threaded contexts.

std::env::set_var is unsafe on non-Windows platforms when called in multi-threaded code. Since TeeWasmRuntime::new() is async and may run on a tokio executor with multiple threads, this can cause undefined behavior.

Pass the config path explicitly to libraries that need it, or ensure this runs before the async runtime spawns multiple threads.


315-318: Function arguments from config.args are not passed to the WASM function.

The func.call(&mut store, &[], &mut results) always passes an empty slice for arguments, ignoring config.args which contains the input values. WASM functions expecting arguments will receive none.
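One way to close the gap, sketched with a stand-in Val enum (wasmtime's real wasmtime::Val has more variants); parsing string args as i32 is an assumption about the workload ABI:

```rust
// Stand-in for wasmtime::Val; the real enum has more variants (I64, F32, ...).
#[derive(Debug, PartialEq)]
enum Val {
    I32(i32),
}

// Convert StartConfig.args into call parameters instead of passing &[].
fn to_params(args: &[String]) -> Result<Vec<Val>, std::num::ParseIntError> {
    args.iter().map(|a| a.parse::<i32>().map(Val::I32)).collect()
}

fn main() {
    let args = vec!["2".to_string(), "40".to_string()];
    let params = to_params(&args).expect("numeric args");
    assert_eq!(params, vec![Val::I32(2), Val::I32(40)]);
    // With wasmtime this becomes: func.call(&mut store, &params, &mut results)
    println!("ok");
}
```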


350-361: OCI reference detection heuristic is fragile.

The check oci_ref.contains("/") || oci_ref.contains(":") can produce false positives on non-OCI strings such as "arg:value" or relative paths like "./path/to/file".
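A hypothetical stricter check, requiring an explicit registry host in the first path component; this intentionally rejects Docker Hub shorthand like "org/app", which is a trade-off, not the only possible policy:

```rust
// Hypothetical stricter check: require a registry-like first path component,
// so "arg:value" and "./path/to/file" are rejected.
fn looks_like_oci_ref(s: &str) -> bool {
    if s.starts_with("./") || s.starts_with("../") || s.starts_with('/') {
        return false; // obvious local path
    }
    let mut parts = s.splitn(2, '/');
    let host = parts.next().unwrap_or("");
    match parts.next() {
        // Registry hosts contain a dot or a port, or are "localhost".
        Some(_) => host.contains('.') || host.contains(':') || host == "localhost",
        // No '/' at all: "arg:value" or a bare name; not treated as remote.
        None => false,
    }
}

fn main() {
    assert!(looks_like_oci_ref("ghcr.io/org/app:v1"));
    assert!(looks_like_oci_ref("localhost:5000/app"));
    assert!(!looks_like_oci_ref("./path/to/file"));
    assert!(!looks_like_oci_ref("arg:value"));
    println!("ok");
}
```

Alternatively, parsing the string with a real reference parser (e.g. the oci-distribution crate's Reference type) and treating a parse failure as "not an OCI ref" avoids hand-rolling the grammar.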

proplet/src/main.rs (1)

86-99: Silent fallback when TEE initialization fails violates confidentiality expectations.

If config.tee_enabled is true (meaning a TEE was detected) but TeeWasmRuntime::new fails, the code silently falls back to PropletService::new without TEE support. This means encrypted workloads could arrive but fail in unexpected ways, or worse, the system might process workloads without intended TEE protections.

Fail fast on TEE init failure
     #[cfg(feature = "tee")]
     let service = if config.tee_enabled {
-        match TeeWasmRuntime::new(&config).await {
-            Ok(tee_runtime) => Arc::new(PropletService::with_tee_runtime(
-                config.clone(),
-                pubsub,
-                runtime,
-                Arc::new(tee_runtime),
-            )),
-            Err(_) => Arc::new(PropletService::new(config.clone(), pubsub, runtime)),
-        }
+        let tee_runtime = TeeWasmRuntime::new(&config)
+            .await
+            .map_err(|e| anyhow::anyhow!("TEE runtime initialization failed: {e}"))?;
+        Arc::new(PropletService::with_tee_runtime(
+            config.clone(),
+            pubsub,
+            runtime,
+            Arc::new(tee_runtime),
+        ))
     } else {
         Arc::new(PropletService::new(config.clone(), pubsub, runtime))
     };
proplet/README.md (1)

206-206: User-specific hardcoded path should use a placeholder.

The path /home/rodneyosodo/code/absmach/propeller/proplet is user-specific and won't work for other users.

Suggested fix
-cd /home/rodneyosodo/code/absmach/propeller/proplet
+cd /path/to/propeller/proplet
🧹 Nitpick comments (5)
proplet/src/tee_detection.rs (2)

111-121: dmesg calls may fail in restricted environments.

The dmesg command typically requires elevated privileges (root or CAP_SYSLOG). In containerized or sandboxed environments, this detection method will silently fail. The fallback behavior is correct, but consider logging at debug level when dmesg fails to aid troubleshooting.
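A minimal sketch of the debug-logging fallback; the real code would use a log/tracing debug macro rather than eprintln!:

```rust
use std::process::Command;

// Sketch: surface dmesg failures at debug level instead of silently skipping
// the check (eprintln! stands in for log::debug! here).
fn read_dmesg() -> Option<String> {
    match Command::new("dmesg").output() {
        Ok(out) if out.status.success() => {
            Some(String::from_utf8_lossy(&out.stdout).into_owned())
        }
        Ok(out) => {
            eprintln!(
                "debug: dmesg exited with {:?}; skipping TEE check",
                out.status.code()
            );
            None
        }
        Err(e) => {
            eprintln!("debug: dmesg unavailable ({e}); skipping TEE check");
            None
        }
    }
}

fn main() {
    // Succeeds or logs-and-returns-None depending on privileges; never panics.
    let _ = read_dmesg();
    println!("ok");
}
```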


274-299: Tests with assert!(true) are no-op assertions.

The assert!(true) statements in the None branches don't validate any behavior—they always pass. These tests essentially just verify the functions don't panic, which is useful but could be more explicit.

More explicit test pattern
     #[test]
     fn test_detect_tdx_no_tee() {
         let result = detect_tdx();
-        match result {
-            Some(detection) => assert_eq!(detection.tee_type, TeeType::Tdx),
-            None => assert!(true),
-        }
+        // Either detects TDX or returns None; both are valid depending on environment
+        if let Some(detection) = result {
+            assert_eq!(detection.tee_type, TeeType::Tdx);
+        }
+        // No assertion needed for None - the test validates no panic occurs
     }
proplet/src/service.rs (2)

85-111: Constructor duplication between new() and with_tee_runtime().

Both constructors duplicate the same initialization logic. Consider extracting common setup or using a builder pattern to reduce duplication.

Refactored approach
+    fn create_service_internals(config: &PropletConfig) -> (Arc<Mutex<Proplet>>, Arc<SystemMonitor>, Arc<Mutex<MetricsCollector>>) {
+        let proplet = Proplet::new(config.instance_id.clone(), "proplet".to_string());
+        let monitor = Arc::new(SystemMonitor::new(MonitoringProfile::default()));
+        let metrics_collector = Arc::new(Mutex::new(MetricsCollector::new()));
+        (Arc::new(Mutex::new(proplet)), monitor, metrics_collector)
+    }
+
     #[cfg(feature = "tee")]
     pub fn with_tee_runtime(
         config: PropletConfig,
         pubsub: PubSub,
         runtime: Arc<dyn Runtime>,
         tee_runtime: Arc<dyn Runtime>,
     ) -> Self {
-        let proplet = Proplet::new(config.instance_id.clone(), "proplet".to_string());
-        let monitor = Arc::new(SystemMonitor::new(MonitoringProfile::default()));
-        let metrics_collector = Arc::new(Mutex::new(MetricsCollector::new()));
+        let (proplet, monitor, metrics_collector) = Self::create_service_internals(&config);
         // ... rest of initialization

410-428: Implicit contract: encrypted workloads pass image_url via cli_args.

The empty Vec::new() for wasm_binary when encrypted is intentional—the TEE runtime fetches and decrypts the image. However, this creates an implicit coupling where the service must prepend image_url to cli_args (lines 476-478), and the TEE runtime must extract it as the first argument.

Consider making this contract explicit with documentation or a dedicated field in StartConfig.

Add documentation
         } else if !req.image_url.is_empty() {
             if req.encrypted {
+                // For encrypted workloads, the TEE runtime handles image pulling and decryption.
+                // The image_url is passed as the first CLI argument (see lines 476-478).
+                // wasm_binary is intentionally empty as the runtime fetches it.
                 info!("Encrypted workload with image_url: {}", req.image_url);
                 Vec::new()
             } else {
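A sketch of the dedicated-field alternative; encrypted_image_url is a hypothetical name, not an existing field of StartConfig:

```rust
// Hypothetical sketch: carry the encrypted image reference in its own field
// instead of smuggling it through cli_args[0].
#[derive(Default)]
struct StartConfig {
    wasm_binary: Vec<u8>,
    cli_args: Vec<String>,
    /// Set only for encrypted workloads; the TEE runtime pulls and decrypts it.
    encrypted_image_url: Option<String>,
}

fn main() {
    let cfg = StartConfig {
        wasm_binary: Vec::new(), // intentionally empty: runtime fetches the image
        cli_args: vec!["--verbose".to_string()],
        encrypted_image_url: Some("ghcr.io/org/app:v1".to_string()),
    };
    // The runtime now branches on the explicit field, not on cli_args ordering.
    assert!(cfg.wasm_binary.is_empty());
    assert_eq!(cfg.encrypted_image_url.as_deref(), Some("ghcr.io/org/app:v1"));
    assert_eq!(cfg.cli_args.len(), 1);
    println!("ok");
}
```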
proplet/src/runtime/tee_runtime.rs (1)

159-165: Parsing failure for ImageConfiguration falls back to default silently.

When ImageConfiguration::from_reader fails, the code logs a warning and uses a default config. This may hide issues with malformed images. Consider whether this should be an error for encrypted workloads where proper config parsing is critical.
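The branching could look like the sketch below; ImageConfiguration is a stand-in for the oci-spec type and the "parser" is simulated, so only the control flow is the point:

```rust
// Sketch: fail fast on a malformed config for encrypted workloads, fall back
// to a default only for plaintext ones.
#[derive(Debug, Default, PartialEq)]
struct ImageConfiguration; // stand-in for oci_spec::image::ImageConfiguration

fn load_image_config(raw: &[u8], encrypted: bool) -> Result<ImageConfiguration, String> {
    // Stand-in for ImageConfiguration::from_reader.
    let parsed: Result<ImageConfiguration, String> = if raw.starts_with(b"{") {
        Ok(ImageConfiguration)
    } else {
        Err("malformed image config".to_string())
    };
    match parsed {
        Ok(c) => Ok(c),
        // Decryption metadata may live in the config; do not guess.
        Err(e) if encrypted => Err(e),
        // Best effort for plaintext workloads, as the current code does.
        Err(_) => Ok(ImageConfiguration::default()),
    }
}

fn main() {
    assert!(load_image_config(b"{}", true).is_ok());
    assert!(load_image_config(b"garbage", true).is_err());
    assert!(load_image_config(b"garbage", false).is_ok());
    println!("ok");
}
```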

Comment on lines +238 to +277
fn run_wasm(&self, wasm_path: &PathBuf, config: &StartConfig) -> Result<Vec<u8>> {
    let runtime_path = self
        .config
        .external_wasm_runtime
        .as_ref()
        .map(|s| s.as_str())
        .unwrap_or("wasmtime");

    let mut cmd = Command::new(runtime_path);

    cmd.arg("--dir").arg("/tmp");

    for arg in &config.cli_args {
        cmd.arg(arg);
    }

    cmd.arg(wasm_path);

    for arg in &config.args {
        cmd.arg(arg.to_string());
    }

    cmd.stdout(std::process::Stdio::piped());
    cmd.stderr(std::process::Stdio::piped());

    let output = cmd.output().context("Failed to execute WASM runtime")?;

    if !output.stderr.is_empty() {
        warn!("WASM stderr:\n{}", String::from_utf8_lossy(&output.stderr));
    }

    if !output.status.success() {
        return Err(anyhow::anyhow!(
            "WASM execution failed with exit code: {:?}",
            output.status.code()
        ));
    }

    Ok(output.stdout)
}

⚠️ Potential issue | 🟠 Major

External runtime correctly passes config.args, but embedded mode ignores them.

run_wasm (external) properly passes config.args as command-line arguments (lines 256-258), but run_wasm_embedded ignores them entirely (line 317 passes empty slice). This creates inconsistent behavior depending on which execution path is taken.

🤖 Prompt for AI Agents
In `@proplet/src/runtime/tee_runtime.rs` around lines 238 - 277, run_wasm_embedded
currently ignores StartConfig.args while run_wasm forwards them; update
run_wasm_embedded to forward config.args to the embedded runtime invocation
instead of the empty slice. Locate the call in run_wasm_embedded where an empty
args slice is passed (the call analogous to run_wasm) and replace it with the
config.args converted to the expected type (e.g., a slice of &str or CString as
required by the embedded runtime API) so embedded execution receives the same
CLI arguments as run_wasm.

Comment on lines +336 to +344
#[cfg(feature = "tee")]
{
    if let Some(ref agent) = self.attestation_agent {
        let _evidence = agent
            .get_evidence(b"wasm-runner")
            .await
            .context("Failed to get TEE evidence")?;
    }
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Scripts executed (repository: rodneyosodo/propeller; outputs collapsed in the review UI):

rg -n "get_evidence" --type rust
wc -l proplet/src/runtime/tee_runtime.rs
cat -n proplet/src/runtime/tee_runtime.rs | head -350 | tail -100
rg -n "evidence" proplet/src/runtime/tee_runtime.rs -A 2 -B 2
cat -n proplet/src/runtime/tee_runtime.rs | head -50
grep -n "attestation_agent\|KBS\|TODO\|FIXME" proplet/src/runtime/tee_runtime.rs
cat -n proplet/src/runtime/tee_runtime.rs | sed -n '30,70p'
grep -n "setup_keyprovider_config" proplet/src/runtime/tee_runtime.rs -A 20
cat -n proplet/src/runtime/tee_runtime.rs | sed -n '70,150p'
rg -n "KBS\|Key.*Broker\|validate.*evidence" proplet/src/runtime/tee_runtime.rs
rg -n "_evidence" proplet/src/runtime/tee_runtime.rs
rg -n "TODO\|FIXME\|XXX" proplet/src/runtime/tee_runtime.rs
git log --oneline proplet/src/runtime/tee_runtime.rs | head -20
rg -n "get_evidence" . --type rust -B 5 -A 10


TEE evidence collection lacks context or explicit use.

The code calls agent.get_evidence(b"wasm-runner") and discards the result with an underscore prefix. While this ensures attestation succeeds (or fails early), the evidence itself is not validated, logged, or used for further processing. Consider adding a comment explaining the intent—whether this is intended solely for attestation verification, or if evidence should be explicitly validated or sent to KBS for encrypted workload handling.

🤖 Prompt for AI Agents
In `@proplet/src/runtime/tee_runtime.rs` around lines 336 - 344, The attestation
call in tee_runtime.rs currently discards the result from
self.attestation_agent.get_evidence(b"wasm-runner") into _evidence, leaving the
evidence unused and undocumented; update the block around attestation_agent /
get_evidence to either (a) explicitly validate or forward the returned evidence
(e.g., call a validate_evidence or send_to_kbs function) or (b) at minimum
replace the underscore with a named variable and add a clear comment explaining
that the evidence is intentionally not used beyond attestation (or why
validation happens elsewhere), and consider logging the evidence metadata via
processLogger/debug for diagnostics; reference the attestation_agent,
get_evidence, and the local _evidence variable when making this change.

Comment on lines +366 to +372
async fn stop_app(&self, _id: String) -> Result<()> {
    Ok(())
}

async fn get_pid(&self, _id: &str) -> Result<Option<u32>> {
    Ok(None)
}

⚠️ Potential issue | 🟡 Minor

stop_app and get_pid are no-ops.

These methods return success/None without actually stopping processes or tracking PIDs. This means long-running TEE workloads cannot be stopped via the normal task management flow.

Consider adding a TODO or tracking issue
     async fn stop_app(&self, _id: String) -> Result<()> {
+        // TODO: Implement process termination for TEE workloads
         Ok(())
     }

     async fn get_pid(&self, _id: &str) -> Result<Option<u32>> {
+        // TODO: Implement PID tracking for TEE workloads
         Ok(None)
     }
🤖 Prompt for AI Agents
In `@proplet/src/runtime/tee_runtime.rs` around lines 366 - 372, stop_app and
get_pid are currently no-ops; implement real process tracking and termination:
maintain a shared async-safe map (e.g. HashMap<String,u32> or HashMap<String,
Child/ProcessHandle> behind an Arc<Mutex/RwLock> or DashMap) that records
PIDs/handles when launching TEE workloads, update that map in the launcher, have
get_pid(&self, id: &str) look up and return the PID Option from that map, and
have stop_app(&self, id: String) look up the PID/handle, send a graceful
terminate (or SIGKILL fallback) to the process using the stored handle (or
std/tokio process APIs), remove the entry from the map, and return an
appropriate Result; add error handling for missing entries and failed kills and
consider adding a TODO comment if full signal semantics are deferred.
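A synchronous sketch of the registry the prompt describes, using std::process (the real runtime would more likely use tokio::process and an async lock):

```rust
use std::collections::HashMap;
use std::process::{Child, Command};
use std::sync::Mutex;

// Sketch: track spawned workload processes so stop_app/get_pid can act on them.
struct ProcessRegistry {
    procs: Mutex<HashMap<String, Child>>,
}

impl ProcessRegistry {
    fn new() -> Self {
        Self { procs: Mutex::new(HashMap::new()) }
    }

    fn register(&self, id: String, child: Child) {
        self.procs.lock().unwrap().insert(id, child);
    }

    fn get_pid(&self, id: &str) -> Option<u32> {
        self.procs.lock().unwrap().get(id).map(|c| c.id())
    }

    fn stop_app(&self, id: &str) -> std::io::Result<()> {
        if let Some(mut child) = self.procs.lock().unwrap().remove(id) {
            child.kill()?; // SIGKILL; graceful SIGTERM needs e.g. the nix crate
            child.wait()?; // reap to avoid a zombie
        }
        Ok(())
    }
}

fn main() -> std::io::Result<()> {
    let reg = ProcessRegistry::new();
    let child = Command::new("sleep").arg("30").spawn()?;
    reg.register("task-1".to_string(), child);
    assert!(reg.get_pid("task-1").is_some());
    reg.stop_app("task-1")?;
    assert!(reg.get_pid("task-1").is_none());
    println!("ok");
    Ok(())
}
```

Stopping an unknown id is a no-op here; the review's suggestion to return an error for missing entries is a stricter alternative.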

Comment on lines +127 to +133
if Path::new("/dev/sev").exists() {
    return Some(TeeDetection::with_details(
        TeeType::Sev,
        "device_file",
        "/dev/sev exists",
    ));
}

⚠️ Potential issue | 🟠 Major

False positive risk: /dev/sev detection may match host systems.

The /dev/sev device file exists on AMD host systems with SEV support enabled in BIOS, not just inside SEV-protected VMs. This can cause false positives where a non-confidential host system is incorrectly detected as running inside a TEE.

Consider checking for guest-specific indicators like /dev/sev-guest or the SEV EFI variables that indicate actual guest attestation capability.

Suggested fix
 fn detect_sev() -> Option<TeeDetection> {
-    if Path::new("/dev/sev").exists() {
+    // /dev/sev-guest indicates running inside an SEV VM (guest attestation)
+    // /dev/sev alone exists on host systems with SEV support
+    if Path::new("/dev/sev-guest").exists() {
         return Some(TeeDetection::with_details(
             TeeType::Sev,
             "device_file",
-            "/dev/sev exists",
+            "/dev/sev-guest exists",
         ));
     }
🤖 Prompt for AI Agents
In `@proplet/src/tee_detection.rs` around lines 127 - 133, The current SEV
detection returns TeeDetection::with_details(TeeType::Sev, "device_file",
"/dev/sev exists") when Path::new("/dev/sev").exists(), which can produce false
positives on hosts; update the detection in the same function to first look for
guest-specific indicators (for example check
Path::new("/dev/sev-guest").exists()) and then fall back to verifying SEV guest
EFI/attestation variables (e.g. inspect efivars under /sys/firmware/efi/efivars
for SEV guest-related variable names) before returning
TeeDetection::with_details; only return SEV if one of these guest-specific
checks succeeds, otherwise skip or mark as uncertain.
