
add-support-for-s3-express-zone-directory-buckets#168

Open
Gargi-thakur01 wants to merge 1 commit into master from add-support-for-s3-express-zone-directory-buckets

Conversation


@Gargi-thakur01 Gargi-thakur01 commented Mar 19, 2026

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Proposed Changes

This PR adds support for AWS S3 Express One Zone directory buckets to the drone-cache plugin.

Description

AWS S3 Express One Zone is a high-performance, single-AZ storage class that uses directory buckets with specific API limitations. This implementation:

Automatic Directory Bucket Detection

  • Uses HeadBucket API to detect S3 Express directory buckets via BucketLocationType (AvailabilityZone/LocalZone)
  • Cross-validates detection using bucket ARN (contains "s3express")
  • Gracefully falls back to general-purpose bucket behavior for MinIO and non-AWS S3-compatible services
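The detection described above can be sketched as a pure classification helper. This is an illustrative sketch, not the PR's actual code: the function name and the idea of passing the HeadBucket response's `BucketLocationType` and the bucket ARN in as plain strings are assumptions for readability.

```go
package main

import (
	"fmt"
	"strings"
)

// isDirectoryBucket reports whether HeadBucket metadata identifies an S3
// Express directory bucket. locationType is the HeadBucket response's
// BucketLocationType ("AvailabilityZone" or "LocalZone" for directory
// buckets); arn is the bucket ARN, which contains "s3express" for them.
// Non-AWS endpoints such as MinIO return neither field, so both inputs
// arrive empty and the function falls back to false (general-purpose path).
func isDirectoryBucket(locationType, arn string) bool {
	switch locationType {
	case "AvailabilityZone", "LocalZone":
		return true
	}
	return strings.Contains(arn, "s3express")
}

func main() {
	fmt.Println(isDirectoryBucket("AvailabilityZone", "")) // directory bucket
	fmt.Println(isDirectoryBucket("", ""))                 // MinIO / non-AWS fallback
}
```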

S3 Express API Compatibility

  • ACL Handling: Automatically skips ACL settings for directory buckets (not supported) with warning logs
  • Encryption Handling: Skips DSSE-KMS encryption for directory buckets (not supported), while allowing AES256 and aws:kms

  • Preserves existing behavior for general-purpose buckets

Enhanced Logging

  • Logs bucket type detection for all buckets (isDirectoryBucket: true/false)
  • Provides upfront notification when a directory bucket is detected, explaining the ACL/DSSE-KMS behavior
  • Warns when unsupported settings (ACL, DSSE-KMS) are configured for directory buckets
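The ACL and DSSE-KMS checks described above could look roughly like the following. This is a hypothetical sketch: the helper names `allowACL` and `allowedEncryption` are illustrative, not taken from the PR diff.

```go
package main

import "fmt"

// allowACL reports whether a canned ACL may be set on the object.
// Directory buckets reject object ACLs entirely, so the caller logs a
// warning and omits the ACL field when this returns false.
func allowACL(directoryBucket bool) bool {
	return !directoryBucket
}

// allowedEncryption returns the server-side encryption value to send, or
// "" when it must be skipped. Directory buckets accept SSE-S3 (AES256)
// and SSE-KMS (aws:kms) but reject DSSE-KMS (aws:kms:dsse).
func allowedEncryption(enc string, directoryBucket bool) string {
	if directoryBucket && enc == "aws:kms:dsse" {
		return "" // caller logs a warning and omits the header
	}
	return enc
}

func main() {
	fmt.Println(allowACL(true))                          // false: skip ACL
	fmt.Println(allowedEncryption("aws:kms:dsse", true)) // "": skip DSSE-KMS
	fmt.Println(allowedEncryption("AES256", true))       // AES256: allowed
}
```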

Technical Changes

  • Added isDirectoryBucket field to Backend struct
  • Implemented detectDirectoryBucket() function with dual validation
  • Modified Put() method to conditionally skip ACL and DSSE-KMS based on bucket type
  • Added informative logging throughout the flow
  • No changes required to Get(), Exists(), or List() methods (work identically for both bucket types)
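Putting the struct change together, the shape is roughly the following. Field names other than `isDirectoryBucket` are assumed for illustration; the real Backend struct in drone-cache has more fields.

```go
package main

import "fmt"

// Backend sketches the struct extension described above: the bucket type
// is detected once at construction time and then consulted by Put(),
// so Get(), Exists(), and List() are unaffected.
type Backend struct {
	bucket            string
	acl               string
	encryption        string
	isDirectoryBucket bool // set once in New() via detection, read by Put()
}

func main() {
	b := Backend{bucket: "cache--use1-az4--x-s3", isDirectoryBucket: true}
	fmt.Println(b.isDirectoryBucket)
}
```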

Testing

  • Tested with S3 Express directory bucket (gargi-express-test--use1-az4--x-s3)
  • Verified cache save and restore operations work correctly
  • Confirmed ACL and DSSE-KMS are properly skipped with appropriate warnings
  • Validated backwards compatibility with general-purpose buckets

Checklist

  • Read the CONTRIBUTING document.
  • Read the CODE OF CONDUCT document.
  • Add tests to cover changes. (Manual testing completed; unit tests pending)
  • Ensure your code follows the code style of this project.
  • Ensure CI and all other PR checks are green OR
  • Code compiles correctly.
  • Created tests which fail without the change (if possible).
  • All new and existing tests passed. (Pending test creation)
  • Add your changes to Unreleased section of CHANGELOG.
  • Improve and update the README (if necessary). (May need to document S3 Express support)
  • Ensure documentation is up-to-date (https://github.com/drone/drone-plugin-index/blob/master/content/meltwater/drone-cache/index.md) when your PR is accepted, so it will be available for end-users at http://plugins.drone.io.


smjt-h commented Mar 19, 2026

Question-

  1. Would this also be supported in the cache-intelligence restoreCachetoHarness and saveCachetoHarness steps if someone uses S3 Express buckets in the default settings?
  2. Does it need additional roles/ARNs to be configured and set up via AWS connectors, or is the bucket name enough?

// detectDirectoryBucket determines whether the bucket is an S3 Express
// directory bucket. It returns false on any error so that MinIO and
// non-AWS endpoints fall back gracefully to the existing general-purpose
// bucket code path.
func detectDirectoryBucket(ctx context.Context, client *s3.Client, bucket string) bool {
Member

this detection mechanism (HeadBucket) is overkill and introduces an extra API call on every New() invocation for all users (not just S3 Express users), and requires an extra IAM permission s3:HeadBucket that wasn't needed before. although it silently falls back for regular buckets for backward compatibility, it's still overkill

the entire detectDirectoryBucket() function can be deleted and the isDirectoryBucket field on Backend struct is not needed either

pr's goal is right but the implementation needs to be significantly simplified

just use strings.HasSuffix(bucket, "--x-s3") like how the AWS SDK itself does (https://github.com/aws/aws-sdk-go-v2/blob/main/service/s3/express_user_agent.go#L27)
to detect the bucket type and inline the ACL and DSSE-KMS checks directly in Put() using the suffix check

this works universally because S3 Express One Zone is a proprietary AWS feature and it's not part of the S3 API specification that other providers implement (MinIO, R2, etc.), so no user of those providers would ever have a bucket ending in --x-s3. the suffix is AWS-enforced at bucket creation time, making it a zero-cost and reliable signal
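The reviewer's suggested suffix check is small enough to show in full; this sketch assumes only the `--x-s3` suffix convention referenced in the linked AWS SDK source.

```go
package main

import (
	"fmt"
	"strings"
)

// isExpressBucket applies the suffix check suggested above: AWS enforces
// the --x-s3 suffix for directory buckets at creation time, so no
// HeadBucket API call (and no extra IAM permission) is needed.
func isExpressBucket(bucket string) bool {
	return strings.HasSuffix(bucket, "--x-s3")
}

func main() {
	fmt.Println(isExpressBucket("gargi-express-test--use1-az4--x-s3")) // directory bucket
	fmt.Println(isExpressBucket("my-regular-bucket"))                  // general-purpose
}
```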

Collaborator Author

I originally planned the same approach, but Hemanth, Satya, and Maanav suggested using bucket metadata for detection instead - Tech-Spec-Link

Member

thanks! I had considered the suffix approach earlier but understand the preference for bucket metadata detection. that said at the very least the HeadBucket call should be gated behind the suffix check to avoid the extra API call for all regular bucket users. right now it calls HeadBucket for every single bucket regardless of type which adds unnecessary latency

this is also how AWS's own k8s s3 controller handles it with pure suffix check, no API call: https://github.com/aws-controllers-k8s/s3-controller/blob/main/pkg/resource/bucket/hook.go#L38

HeadBucket on every New() is worth fixing regardless of which detection approach we go with

@Gargi-thakur01
Collaborator Author

Question-

  1. Would this also be supported in the cache-intelligence restoreCachetoHarness and saveCachetoHarness steps if someone uses S3 Express buckets in the default settings?

Ans - Yes, it works transparently with S3 Express buckets specified in account/project-level cache settings. I manually tested both cache intelligence and explicit save/restore steps.

  2. Does it need additional roles/ARNs to be configured and set up via AWS connectors, or is the bucket name enough?

Ans - Yes, some additional IAM permissions need to be configured:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3express:CreateSession"],
          "Resource": ["arn:aws:s3express:us-east-1:ACCOUNT_ID:bucket/BUCKET_NAME"]
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": ["arn:aws:s3express:us-east-1:ACCOUNT_ID:bucket/BUCKET_NAME"]
        }
      ]
    }

Connector setup: the same AWS connector works - just ensure the IAM user/role has these two permissions. No special ARN configuration is needed beyond standard access keys or role assumption.


smjt-h commented Mar 19, 2026


We should update the documentation to cover this requirement.

Also, why are we not using the commit format `type: [JIRA-TICKET]: Description`?

