A Rust-based WebAssembly template for building a Standout connector. The connector communicates with the Standout platform via the Standout App Bridge, and boilerplate code for actions and triggers can be generated automatically from OpenAPI specs.
This connector uses the WebAssembly Interface Type (WIT) specification to define the interface (App Bridge) between the connector and the Standout platform. The `wit/standout-app.wit` file defines:
- Actions interface: Methods for executing actions (`execute`, `input_schema`, `output_schema`, `action_ids`)
- Triggers interface: Methods for fetching trigger events (`fetch_events`, `input_schema`, `output_schema`, `trigger_ids`)
- Types: Shared data structures like `ActionContext`, `TriggerContext`, `Connection`, etc.
The WIT file is used to generate Rust bindings that allow the connector to communicate with the Standout App Bridge runtime. This file should not be modified.
Build actions and triggers (see below). Change the name of the connector from `base-connector` to an appropriate name in `Cargo.toml`, for example `github-connector`.
```bash
# Build the WebAssembly component
cargo build --target wasm32-wasip2 --release
```

The compiled WebAssembly file will be at `target/wasm32-wasip2/release/github_connector.wasm`. Actions and triggers located in the folders described below are automatically included in the built file.
- Rust 1.84.0+ with the `wasm32-wasip2` target: `rustup target add wasm32-wasip2`
- Ruby 3.4.8+ (optional, for RSpec tests)
Actions and triggers can be generated from OpenAPI specifications using `base-connector-tools`, or created manually. The generation tools are optional; you can build a connector without an OpenAPI schema.
First, install the tools from GitHub (one-time setup):
```bash
cargo install --git https://github.com/standout/base-connector-tools.git --bin endpoints
cargo install --git https://github.com/standout/base-connector-tools.git --bin generate_action
cargo install --git https://github.com/standout/base-connector-tools.git --bin generate_trigger
```

List all operations from an OpenAPI spec:
```bash
endpoints <openapi_url_or_file>
```

- `openapi_url_or_file` - URL or path to an OpenAPI specification file (local file paths can be relative or absolute)
Examples:
```bash
# From URL
endpoints https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/api.github.com/api.github.com.json

# From local file (relative to current directory)
endpoints ./openapi.yaml
endpoints openapi.json

# From local file (absolute path)
endpoints /path/to/openapi.yaml
```

The output shows each operation with a round emoji (🔵) followed by the `operation_id`. Use the `operation_id` shown in this output when generating actions or triggers.
Generate schemas and executor code for a specific action operation:
```bash
generate_action <openapi_url_or_file> <operation_id> [name]
```

- `openapi_url_or_file` - URL or path to an OpenAPI specification file (local file paths can be relative or absolute)
- `operation_id` - The operation ID from the OpenAPI spec (shown after the round emoji in the `endpoints` output)
- `name` (optional) - Custom name for the action. If not provided, the operation ID will be used
Examples:
```bash
# From URL with default name (uses operation_id)
generate_action https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/api.github.com/api.github.com.json repos/update

# From URL with custom name
generate_action https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/api.github.com/api.github.com.json repos/update update_repository

# From local file (relative to current directory)
generate_action ./openapi.yaml repos/update update_repository
generate_action openapi.json repos/update

# From local file (absolute path)
generate_action /path/to/openapi.yaml repos/update
```

Generate schemas and executor code for a specific trigger operation:
```bash
generate_trigger <openapi_url_or_file> <operation_id> [name]
```

- `openapi_url_or_file` - URL or path to an OpenAPI specification file (local file paths can be relative or absolute)
- `operation_id` - The operation ID from the OpenAPI spec (shown after the round emoji in the `endpoints` output)
- `name` (optional) - Custom name for the trigger. If not provided, the operation ID will be used
Examples:
```bash
# From URL with default name (uses operation_id)
generate_trigger https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/api.github.com/api.github.com.json issues/list-for-repo

# From URL with custom name
generate_trigger https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/api.github.com/api.github.com.json issues/list-for-repo list_issues

# From local file (relative to current directory) with custom name
generate_trigger ./openapi.yaml issues/list-for-repo list_issues
generate_trigger openapi.json issues/list-for-repo

# From local file (absolute path)
generate_trigger /path/to/openapi.yaml issues/list-for-repo
```

Generated actions will be placed in `src/actions/{action_name}/` with:
- `action.rs` - Action executor code
- `base_input_schema.json` - Input validation schema
- `base_output_schema.json` - Output schema
Generated triggers will be placed in `src/triggers/{trigger_name}/` with:
- `fetch_events.rs` - Trigger executor code
- `input_schema.json` - Input validation schema
- `output_schema.json` - Output schema
Important: The generated folder name is used as the action/trigger name in Standout. You can specify a custom name when generating (using the optional name argument), or rename the folder after generation to a clear, descriptive name that represents what the action or trigger does. For example:
- `repos_create` → `create_repository`
- `issues_list-for-repo` → `list_issues`
- `repos_get` → `get_repository`
- `repos_update` → `update_repository`
After generation, rebuild to include the new action or trigger:
```bash
cargo build --target wasm32-wasip2 --release
```

Generated actions may need customization for your specific use case:
The generated input schema (`base_input_schema.json`) includes all parameters from the OpenAPI spec, but you may not need all of them. Remove any fields that aren't relevant to your use case (a sketch follows below).

The generated schema is based on the OpenAPI specification, which may not always match the actual API behavior. Review and test the schema against the real API to ensure accuracy, and add titles and descriptions to fields where helpful.

The generated output schema (`base_output_schema.json`) is based on the API response structure in the spec. Adjust it if the actual API response differs from the OpenAPI spec.
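As a sketch of this kind of customization (the field names are illustrative placeholders, not taken from any particular spec), a trimmed `base_input_schema.json` with titles and descriptions added might look like this:

```json
{
  "type": "object",
  "properties": {
    "repository": {
      "type": "string",
      "title": "Repository",
      "description": "Repository to update, in owner/repo format"
    },
    "description": {
      "type": "string",
      "title": "Description",
      "description": "New description for the repository"
    }
  },
  "required": ["repository"]
}
```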
Actions and triggers can return file data that will be automatically processed by the platform. Use the `file::normalize` function to handle files from URLs, data URIs, or base64 strings.

Import and use the `normalize` function in your action:
```rust
use crate::standout::app::file::normalize;
use crate::standout::app::types::{AppError, ErrorCode, ActionContext};
use serde_json::Value;

pub fn execute(context: ActionContext) -> Result<Value, AppError> {
    let input_data = input_data(&context)?;

    // Get file source from input (could be URL, data URI, or base64)
    let file_source = input_data
        .get("file_url")
        .and_then(|v| v.as_str())
        .ok_or_else(|| AppError {
            code: ErrorCode::InvalidInput,
            message: "file_url is required".to_string(),
        })?;

    let api_client = client(&context)?;
    let headers: Vec<(String, String)> = api_client
        .headers
        .iter()
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect();

    let file_data_to_send = normalize(
        file_source,
        None,                // Optional headers for authorized requests
        Some("invoice.pdf"), // Optional filename override
    )
    .map_err(|e| AppError {
        code: ErrorCode::Other,
        message: format!("Failed to normalize file: {:?}", e),
    })?;

    // Upload the file to your API
    // The API will return a URL for the uploaded file
    let upload_response = api_client.post(
        "/api/files/upload",
        &serde_json::json!({
            "file": {
                "base64": file_data_to_send.base64,
                "content_type": file_data_to_send.content_type,
                "filename": file_data_to_send.filename,
            }
        }),
    )?;

    // Extract the file URL from the API response
    let file_received_url = upload_response
        .get("url")
        .and_then(|v| v.as_str())
        .ok_or_else(|| AppError {
            code: ErrorCode::MalformedResponse,
            message: "API response missing 'url' field".to_string(),
        })?;

    // Normalize the file from the API response URL
    let file_data_received = normalize(
        file_received_url,
        Some(&headers),      // Use context header data for authorized request
        Some("invoice.pdf"), // Optional filename override
    )
    .map_err(|e| AppError {
        code: ErrorCode::Other,
        message: format!("Failed to normalize file from API URL: {:?}", e),
    })?;

    // Return the normalized file data in the same format as the normalize output
    // The platform will process it if marked with format: "file-output"
    Ok(serde_json::json!({
        "document": {
            "base64": file_data_received.base64,
            "content_type": file_data_received.content_type,
            "filename": file_data_received.filename,
        }
    }))
}
```

The `normalize` function automatically detects the input format:
- URL: `"https://example.com/file.pdf"` - fetched with optional headers
- Data URI: `"data:application/pdf;base64,JVBERi0..."` - parsed and extracted
- Base64: any other string is treated as raw base64 and decoded to detect the type
Mark file fields in your output schema with `format: "file-output"` so the platform knows to process them:
```json
{
  "type": "object",
  "properties": {
    "document": {
      "type": "string",
      "format": "file-output"
    }
  }
}
```

The platform will:
- Identify fields with `format: "file-output"` in the output schema (the `type` should be `string`)
- Expect a normalized file object at the corresponding location in the Action or Trigger response (`serialized_output`)
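For illustration, a response that matches the schema above carries a normalized file object under `document`, using the same fields that `normalize` returns in the example earlier (the values below are placeholders):

```json
{
  "document": {
    "base64": "JVBERi0xLjQK...",
    "content_type": "application/pdf",
    "filename": "invoice.pdf"
  }
}
```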
Generated triggers may need customization for your specific use case:
The store is a JSON string that persists between trigger runs. Use it to track state like timestamps, pagination cursors, or last processed IDs. In `fetch_events.rs`:
```rust
// Parse existing store data
let store_data: Value = if context.store.is_empty() {
    serde_json::json!({})
} else {
    serde_json::from_str(&context.store)?
};

// Read state from store
let since = store_data
    .get("since")
    .and_then(|v| v.as_str())
    .map(|s| s.to_string());

// Update store with new state
let updated_store = serde_json::json!({
    "since": chrono::Utc::now().to_rfc3339(),
    "last_id": last_processed_id,
});
let store_string = serde_json::to_string(&updated_store)?;
```

The input schema (`input_schema.json`) is typically empty (`{}`) by default, but you can add fields in valid JSON Schema format:
```json
{
  "type": "object",
  "properties": {
    "filter": {
      "type": "string",
      "description": "Filter criteria"
    }
  }
}
```

The output schema (`output_schema.json`) should describe one of the objects (an event) returned by the API endpoint. The generated schema is based on the API response, but you may need to adjust it to match your specific event structure.
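As a sketch (the event fields are hypothetical placeholders, not tied to any specific API), an `output_schema.json` describing a single event could look like this:

```json
{
  "type": "object",
  "properties": {
    "id": {
      "type": "integer",
      "description": "Unique identifier of the event object"
    },
    "title": {
      "type": "string",
      "description": "Short summary of the event"
    },
    "created_at": {
      "type": "string",
      "description": "Creation timestamp in RFC 3339 format"
    }
  }
}
```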
If you don't have an OpenAPI specification or prefer to create actions and triggers manually, you can create them directly following the required structure:
Create a directory `src/actions/{action_name}/` with:
- `action.rs` - Must export three functions:

  ```rust
  use crate::client::ApiClient;
  use crate::standout::app::types::{AppError, ActionContext};
  use serde_json::Value;

  /// Get the ApiClient from context
  fn client(context: &ActionContext) -> Result<ApiClient, AppError> {
      let connection_data: Value = serde_json::from_str(&context.connection.serialized_data)?;
      ApiClient::new(&connection_data)
  }

  /// Get the input data from context
  fn input_data(context: &ActionContext) -> Result<Value, AppError> {
      Ok(serde_json::from_str(&context.serialized_input)?)
  }

  /// Execute the action
  pub fn execute(context: ActionContext) -> Result<Value, AppError> {
      let client = client(&context)?;
      let input_data = input_data(&context)?;

      // Your action implementation
      // Make API calls using client.get(), client.post(), etc.
      // Return the response as Value
  }

  /// Get input schema
  pub fn input_schema(_context: &ActionContext) -> Result<Value, AppError> {
      // Load schema from file or return inline
      // If the schema should be fetched dynamically, use the ApiClient
      let schema = include_str!("base_input_schema.json");
      Ok(serde_json::from_str(schema)?)
  }

  /// Get output schema
  pub fn output_schema(_context: &ActionContext) -> Result<Value, AppError> {
      // Load schema from file or return inline
      // If the schema should be fetched dynamically, use the ApiClient
      let schema = include_str!("base_output_schema.json");
      Ok(serde_json::from_str(schema)?)
  }
  ```

- `base_input_schema.json` - JSON Schema for action input (JSON Schema Draft 2020-12 format)
- `base_output_schema.json` - JSON Schema for action output (JSON Schema Draft 2020-12 format)
Create a directory `src/triggers/{trigger_name}/` with:
- `fetch_events.rs` - Must export three functions:

  ```rust
  use crate::client::ApiClient;
  use crate::standout::app::types::{AppError, TriggerContext, TriggerResponse, TriggerEvent};
  use serde_json::Value;

  /// Get the ApiClient from context
  fn client(context: &TriggerContext) -> Result<ApiClient, AppError> {
      let connection_data: Value = serde_json::from_str(&context.connection.serialized_data)?;
      ApiClient::new(&connection_data)
  }

  /// Get the input data from context
  fn input_data(context: &TriggerContext) -> Result<Value, AppError> {
      if context.serialized_input.is_empty() {
          Ok(serde_json::json!({}))
      } else {
          Ok(serde_json::from_str(&context.serialized_input)?)
      }
  }

  /// Get the store data from context
  fn store_data(context: &TriggerContext) -> Result<Value, AppError> {
      if context.store.is_empty() {
          Ok(serde_json::json!({}))
      } else {
          Ok(serde_json::from_str(&context.store)?)
      }
  }

  /// Fetch events for the trigger
  pub fn fetch_events(context: TriggerContext) -> Result<TriggerResponse, AppError> {
      let api_client = client(&context)?;
      let input_data = input_data(&context)?;
      let store_data = store_data(&context)?;

      // Fetch data from API, process into events
      let events: Vec<TriggerEvent> = vec![]; // Your events here

      // Update store with new state
      let updated_store = serde_json::json!({});
      let store_string = serde_json::to_string(&updated_store)?;

      Ok(TriggerResponse {
          events,
          store: store_string,
      })
  }

  /// Get input schema
  pub fn input_schema(_context: &TriggerContext) -> Result<Value, AppError> {
      let schema = include_str!("input_schema.json");
      Ok(serde_json::from_str(schema)?)
  }

  /// Get output schema
  pub fn output_schema(_context: &TriggerContext) -> Result<Value, AppError> {
      let schema = include_str!("output_schema.json");
      Ok(serde_json::from_str(schema)?)
  }
  ```

- `input_schema.json` - JSON Schema for trigger input (typically empty)
- `output_schema.json` - JSON Schema for trigger output (represents one event object)
After creating the files, rebuild to include them:
```bash
cargo build --target wasm32-wasip2 --release
```

The connector expects connection data at runtime. By default, it expects the following structure:
```json
{
  "base_url": "https://api.example.com",
  "headers": {
    "Authorization": "Bearer your-token",
    "Content-Type": "application/json"
  }
}
```

If your API's connection data uses a different structure (e.g., different field names, nested objects, or missing `base_url`/`headers`), you'll need to customize the `ApiClient::new()` method in `src/client.rs`.
Example: If your connection data looks like this:
```json
{
  "api_endpoint": "https://api.example.com",
  "auth": {
    "token": "your-token"
  }
}
```

You would modify `src/client.rs` to extract these fields:
```rust
pub fn new(connection_data: &Value) -> Result<Self, AppError> {
    let base_url = connection_data
        .get("api_endpoint") // Changed from "base_url"
        .and_then(|v| v.as_str())
        .ok_or_else(|| AppError {
            code: ErrorCode::Misconfigured,
            message: "api_endpoint not found in connection data".to_string(),
        })?
        .to_string();

    let auth_obj = connection_data
        .get("auth") // Changed from "headers"
        .and_then(|v| v.as_object())
        .ok_or_else(|| AppError {
            code: ErrorCode::Misconfigured,
            message: "auth not found in connection data".to_string(),
        })?;

    let mut headers = HashMap::new();
    if let Some(token) = auth_obj.get("token").and_then(|v| v.as_str()) {
        headers.insert("Authorization".to_string(), format!("Bearer {}", token));
    }
    headers.insert("Content-Type".to_string(), "application/json".to_string());

    Ok(ApiClient { base_url, headers })
}
```

See `src/client.rs` for the current implementation.
The connector uses RSpec for integration testing with WireMock to mock API responses.
- Install Ruby dependencies:

  ```bash
  bundle install
  ```

- Build the WASM module:

  ```bash
  cargo build --release --target wasm32-wasip2
  ```

- Ensure Docker is running (required for WireMock):

  ```bash
  docker --version
  ```
The test suite automatically starts and stops WireMock using Docker Compose:
```bash
# Run all tests
bundle exec rspec

# Run tests for a specific file
bundle exec rspec spec/triggers/example_spec.rb

# Run a specific test
bundle exec rspec spec/triggers/example_spec.rb:16
```

Tests use WireMock running in Docker to mock external API responses. The mock server is automatically managed by the test suite:
- Automatic startup: WireMock starts before tests run
- Automatic cleanup: WireMock stops after tests complete
- Manual control (optional):

  ```bash
  ./scripts/test-setup.sh start   # Start WireMock
  ./scripts/test-setup.sh stop    # Stop WireMock
  ./scripts/test-setup.sh status  # Check status
  ```
WireMock runs on `http://localhost:8080` by default. The `TestHelper` module in `spec/test_helper.rb` provides utilities for configuring mock endpoints and creating test contexts.
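The exact helper methods live in `spec/test_helper.rb`; under the hood, WireMock stubs are ordinary JSON mappings. As a rough, hypothetical illustration (the endpoint and body below are placeholders, not from this repository), a stub registered for a trigger test could look like this:

```json
{
  "request": {
    "method": "GET",
    "url": "/api/issues"
  },
  "response": {
    "status": 200,
    "headers": {
      "Content-Type": "application/json"
    },
    "jsonBody": [
      { "id": 1, "title": "Example issue" }
    ]
  }
}
```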
```bash
# Format code
cargo fmt

# Lint code
cargo clippy --target wasm32-wasip2
```

The repository includes a GitHub Actions workflow (`.github/workflows/release.yml`) that automatically builds and releases the connector when a GitHub release is published.
What it does:
- Builds the WASM module with optimizations (`opt-level=z`, `lto=true`, `strip=true`)
- Extracts the package name from `Cargo.toml` (converts hyphens to underscores for the WASM filename)
- Creates a release archive (ZIP) containing the WASM file and README
- Attaches the WASM file and archive to the GitHub release
To create a release:
- Create a new release in GitHub (with a tag, e.g., `v1.0.0`)
- The workflow will automatically build and attach the assets
The workflow uses the package name from `Cargo.toml` to determine the WASM filename, so it works automatically when you rename the connector.