Conversation
- Refactored rollout status checking to include authentication error handling and kubeconfig refresh logic, improving robustness.
- Added `cluster-name` and `aws-region` parameters to the `argo-deploy-application` and `argo-rollout-status` commands.
- Introduced the `update_kubeconfig_common.sh` script to manage kubeconfig updates for EKS, enhancing authentication handling.
Walkthrough
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (2)
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/jobs/argo-deploy.yml (1)
237-238: Minor: Parameter naming inconsistency (region → aws-region).
The argo-deploy.yml job uses a parameter named `region` but passes it as `aws-region` to argo-deploy-application. While this works correctly, consider renaming the job-level parameter to `aws-region` for clarity and consistency across the pipeline.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- src/commands/argo-deploy-application.yml (2 hunks)
- src/commands/argo-rollout-status.yml (2 hunks)
- src/jobs/argo-deploy.yml (2 hunks)
- src/scripts/argo_rollout_status_common.sh (6 hunks)
- src/scripts/update_kubeconfig_common.sh (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(27-154)
🔇 Additional comments (9)
src/commands/argo-deploy-application.yml (1)
5-8: LGTM: Clean parameter propagation.
The cluster-name and aws-region parameters are properly defined as top-level inputs and correctly passed through to the argo-rollout-status step. The changes maintain backward compatibility and follow the existing parameter pattern.
Also applies to: 247-248
src/jobs/argo-deploy.yml (1)
260-261: LGTM: Environment setup for migration workflow.
CLUSTER_NAME, AWS_REGION, and UPDATE_KUBECONFIG_COMMON_SCRIPT are properly exposed to the migration trigger, enabling kubeconfig refresh during the workflow execution.
Also applies to: 265-265
src/commands/argo-rollout-status.yml (1)
4-7: LGTM: Clean parameter and script wiring.
The cluster-name and aws-region parameters are properly received and exported as environment variables for use by the rollout status script. The UPDATE_KUBECONFIG_COMMON_SCRIPT is correctly included.
Also applies to: 23-24, 27-27
src/scripts/update_kubeconfig_common.sh (2)
27-76: LGTM: Robust flag parsing and auto-detection.
The parameter parsing is thorough with proper error handling, and the auto-detection fallback chain (explicit arg → environment variables → kubeconfig extraction) is well-designed for flexibility.
111-115: LGTM: Good validation and command construction.
Cluster name validation ensures a required parameter is present before execution. The command argument building using an array is safer than string concatenation and handles all optional parameters correctly.
Also applies to: 117-153
src/scripts/argo_rollout_status_common.sh (4)
122-135: LGTM: Comprehensive auth error detection.
The pattern matching covers common kubectl authentication failures (Unauthorized, token expiration, 401/403 errors, access denied). Case-insensitive matching ensures robustness.
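The matcher described above can be sketched as follows. This is a minimal illustration, not the orb's actual implementation; the exact pattern list in argo_rollout_status_common.sh may differ.

```shell
#!/bin/bash
# Hedged sketch of a case-insensitive auth-error matcher. The patterns
# below are illustrative stand-ins for the ones under review.
is_auth_error() {
  local output="$1"
  # -q: quiet (exit status only), -i: case-insensitive, -E: extended regex
  echo "$output" | grep -qiE 'unauthorized|token.*expired|401|403|access denied'
}
```

Because the function communicates via exit status, callers can use it directly in `if` conditions, mirroring the retry logic's structure.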
137-148: LGTM: Clean kubeconfig refresh wrapper.
The refresh_kubeconfig function properly wraps update_kubeconfig and provides clear feedback. Calling update_kubeconfig with no arguments relies on environment-based auto-detection (CLUSTER_NAME), which aligns with the export on line 275.
150-194: LGTM: Well-designed retry logic with auth-aware refresh.
The `get_kubectl_argo_rollout` function properly:
- Captures both stdout and stderr (2>&1)
- Retries up to 3 times with kubeconfig refresh on auth errors
- Returns exit codes 0 (success) or 2 (auth/non-auth failures)
- Avoids unnecessary refresh attempts after exhausting retries
- Propagates failures correctly to the caller
The retry strategy (kubeconfig refresh on attempts 1-2, but not 3) is reasonable and prevents infinite refresh loops.
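The retry shape described above can be sketched as a self-contained demo. All function names here are illustrative stubs, not the orb's real code; `run_cmd` simulates a command that hits an auth error twice before succeeding.

```shell
#!/bin/bash
# Hedged sketch: up to 3 attempts, kubeconfig refresh between attempts on
# auth errors, but never after the last one. Stubs only.
count_file=$(mktemp)
echo 0 > "$count_file"

run_cmd() {  # fails with an auth error twice, then succeeds
  local n=$(( $(cat "$count_file") + 1 ))
  echo "$n" > "$count_file"
  if [[ $n -lt 3 ]]; then
    echo "error: Unauthorized"
    return 1
  fi
  echo "rollout is Healthy"
}
is_auth_error()      { echo "$1" | grep -qi "unauthorized"; }
refresh_kubeconfig() { echo "refreshing kubeconfig" >&2; }

get_with_retry() {
  local max_retries=3 attempt=1 output rc
  while [[ $attempt -le $max_retries ]]; do
    rc=0                                  # reset each iteration
    output=$(run_cmd 2>&1) || rc=$?
    if [[ $rc -eq 0 ]]; then
      printf '%s\n' "$output"
      return 0
    fi
    if is_auth_error "$output" && [[ $attempt -lt $max_retries ]]; then
      refresh_kubeconfig                  # only when a retry remains
    fi
    attempt=$((attempt + 1))
  done
  return 2                                # exhausted retries
}

result=$(get_with_retry)
echo "$result"
rm -f "$count_file"
```

Note the `rc=0` reset at the top of each iteration: without it, a stale failure value from a previous attempt would survive into a successful one.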
259-265: LGTM: Proper environment variable export for subshell execution.
CLUSTER_NAME and AWS_REGION are correctly exported before the timeout subshell (line 275), making them available to the update_kubeconfig auto-detection logic. The subshell receives all required functions (is_auth_error, refresh_kubeconfig, get_kubectl_argo_rollout, update_kubeconfig) via `declare -f` declarations, enabling the full retry flow.
Also applies to: 275-286
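The `declare -f` technique mentioned above can be sketched in isolation: function definitions are serialized into the command string of a timeout-wrapped subshell so the subshell can call them. The function name here is illustrative, not one of the orb's real helpers.

```shell
#!/bin/bash
# Hedged sketch of passing a function into a `timeout bash -c` subshell.
check_status() { echo "status: Healthy"; }

# `declare -f` prints the function's source; the subshell re-defines it
# from that text and then calls it.
out=$(timeout 5 bash -c "$(declare -f check_status); check_status")
echo "$out"
```

This works because the subshell is a fresh bash process: exported variables cross the boundary automatically, but functions must be re-declared in its command string.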
# Conflicts: # src/scripts/argo_rollout_status_common.sh
…lout_status

This commit introduces the is_not_found_error function to check for specific "not found" error patterns in the output of kubectl commands. The exec_rollout_status function is updated to utilize this new error handling, allowing it to return successful output when a "not found" error is detected, while maintaining existing authentication error checks.
Refine error messages in argo_rollout_status_common.sh
This commit updates the `update-kubeconfig-with-authenticator.yml` to utilize an external script for kubeconfig updates, improving maintainability and readability. The environment variables are now set directly, and the command execution is streamlined through the inclusion of `update_kubeconfig.sh`, which handles the update logic and error checking.
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/scripts/update_kubeconfig.sh (1)
3-14: Define color variables for consistency.
The script uses the `RED` and `NC` color variables on lines 4 and 12, but they are not defined locally. Following the pattern in src/scripts/argo_rollout_status_common.sh (lines 17–22), define these at the top of the script to maintain consistency across the codebase and avoid reliance on inherited or sourced definitions.

```diff
 #!/bin/bash
+
+# Colors for output
+RED="\033[0;31m"
+NC="\033[0m" # No Color
+
 if [[ -z "${UPDATE_KUBECONFIG_COMMON_SCRIPT:-}" ]]; then
```

src/scripts/argo_rollout_status_common.sh (1)
304-320: Consider validating CLUSTER_NAME and AWS_REGION at function entry.
The `print_header` function uses `CLUSTER_NAME` and `AWS_REGION` without validation. While these are expected to be set by the caller (via wiring in argo-rollout-status.yml), adding explicit validation at the start of `exec_rollout_status` (similar to the check on lines 54–57 for `APPLICATION_NAMESPACE`) would catch missing configuration early and provide clearer error messages.

```diff
 if [[ -z "${APPLICATION_NAMESPACE}" ]]; then
   echo -e "${RED}❌ Error: APPLICATION_NAMESPACE environment variable is required.${NC}"
   return 2
 fi
+
+if [[ -z "${CLUSTER_NAME}" ]]; then
+  echo -e "${RED}❌ Error: CLUSTER_NAME environment variable is required.${NC}"
+  return 2
+fi
+
+if [[ -z "${AWS_REGION}" ]]; then
+  echo -e "${RED}❌ Error: AWS_REGION environment variable is required.${NC}"
+  return 2
+fi
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- src/commands/update-kubeconfig-with-authenticator.yml (1 hunks)
- src/scripts/argo_rollout_status_common.sh (8 hunks)
- src/scripts/update_kubeconfig.sh (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/scripts/update_kubeconfig.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(27-154)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(27-154)
🔇 Additional comments (4)
src/commands/update-kubeconfig-with-authenticator.yml (1)
60-73: LGTM!
The command cleanly delegates kubeconfig updates to the included script via environment variables, properly mapping all parameters to their corresponding flags. The approach is consistent with the broader PR architecture.
src/scripts/argo_rollout_status_common.sh (3)
32-48: Past review feedback incorporated.
The error message on line 46 has been corrected to accurately reflect the condition (now reads "is not defined" instead of the previous misleading message). The validation pattern for both ARGO_CLI_COMMON_SCRIPT and UPDATE_KUBECONFIG_COMMON_SCRIPT is consistent and properly structured.
147-232: Well-designed auth retry logic with clear error handling.
The new helper functions properly detect authentication and not-found errors, and the retry mechanism correctly distinguishes between transient auth failures (retry after refresh) and permanent failures. The max retry limit prevents infinite loops while allowing recovery from token expiration.
323-330: LGTM!
The function declarations for the subshell timeout context are complete and correct. All new helpers (`is_auth_error`, `is_not_found_error`, `refresh_kubeconfig`, `get_kubectl_argo_rollout`) and the `update_kubeconfig` function are properly included to support the enhanced auth and retry flow.
…ript

This commit modifies the `update-kubeconfig-with-authenticator.yml` file to change the syntax of the AWS_REGION environment variable from a quoted string to a direct parameter reference.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/commands/update-kubeconfig-with-authenticator.yml (1)
11-20: Resolve inconsistency: cluster-name is defined as required but treated as optional.
The `cluster-name` parameter (lines 11-14) has no `default:` key, making it a required parameter. However, line 67 exports it without validation, which means CLUSTER_NAME could be empty if not provided. Either add a default value or add validation to ensure the parameter is present.

Apply one of the following fixes:

Option 1: Mark cluster-name as required explicitly (no default)

```yaml
cluster-name:
  description: |
    The name of the cluster for which to create a kubeconfig entry.
  type: string
```

Then add validation in the first step:

```diff
+ if [ -z "<< parameters.cluster-name >>" ]; then
+   echo "Error: cluster-name parameter is required"
+   exit 1
+ fi
```

Option 2: Add a default value to cluster-name (if appropriate for your use case)

```diff
 cluster-name:
   description: |
     The name of the cluster for which to create a kubeconfig entry.
   type: string
+  default: ""
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .circleci/test-deploy.yml (2 hunks)
- src/commands/update-kubeconfig-with-authenticator.yml (1 hunks)
🔇 Additional comments (2)
.circleci/test-deploy.yml (1)
38-40: Verify the filter logic for production deployment jobs.
The `production-argo-deploy-approval` (line 40) and `production-argo-deploy` (line 95) jobs are now configured to use `*filter-staging`, which triggers on non-master branches. This inverts the typical CI/CD pattern where production deployments should gate on the master branch only.

Is this change intentional? If these are intended to be production deployments, they should typically use `*filter-production` (master branch only) rather than `*filter-staging` (non-master branches). If this is a test-specific configuration to prevent accidental production deployments, please clarify the intent.

Also applies to: 94-95
src/commands/update-kubeconfig-with-authenticator.yml (1)
75-79: The original review comment is based on incorrect assumptions about the pattern.
The evidence shows:

The pattern is intentional and established, not unconventional. It's used consistently across the codebase in at least 10 instances across multiple YAML files (argo-deploy.yml, argo-rollout-status.yml, update-kubeconfig-with-authenticator.yml, etc.).

The mechanism is explicit and validated, not unclear. update_kubeconfig.sh contains:
- Lines 3-6: an explicit check that UPDATE_KUBECONFIG_COMMON_SCRIPT is set, with an error message
- Line 9: sourcing via `source <(echo "${UPDATE_KUBECONFIG_COMMON_SCRIPT}")` using process substitution
- Lines 11-14: validation that the sourced function is defined

The same pattern appears in argo_rollout_status_common.sh with identical validation.

Size is not a concern. The script is 154 lines—well within typical CircleCI environment variable limits (which are in the megabyte range).
The pattern works as intended, as evidenced by error handling, validation code, and consistent use across the codebase.
The suggested alternative (sourcing directly from filesystem) may not be viable if the scripts aren't available in the working directory during CircleCI execution, which is likely why this pattern was adopted.
Likely an incorrect or invalid review comment.
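The check-source-validate pattern described in this thread can be sketched end to end. The variable's content below is a toy stub; the real scripts ship the full helper body in the environment variable.

```shell
#!/bin/bash
# Hedged sketch: a helper script's body arrives in an env var, is sourced
# via process substitution, and the expected function is then validated.
UPDATE_KUBECONFIG_COMMON_SCRIPT='update_kubeconfig() { echo "updated kubeconfig for $1"; }'

if [[ -z "${UPDATE_KUBECONFIG_COMMON_SCRIPT:-}" ]]; then
  echo "Error: UPDATE_KUBECONFIG_COMMON_SCRIPT is not set" >&2
  exit 1
fi

# Process substitution lets bash source the variable as if it were a file.
source <(echo "${UPDATE_KUBECONFIG_COMMON_SCRIPT}")

if ! declare -F update_kubeconfig >/dev/null; then
  echo "Error: update_kubeconfig is not defined after sourcing" >&2
  exit 1
fi

update_kubeconfig demo-cluster
```

The `declare -F` check is what makes the pattern safe: a truncated or corrupted variable fails loudly at sourcing time rather than at first call.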
```shell
echo "------------------------------------------------------"
echo "📝 Computing Environment Variables"
echo "------------------------------------------------------"
echo "export CLUSTER_NAME=<< parameters.cluster-name >>" | tee -a $BASH_ENV
echo "export AWS_REGION=<< parameters.aws-region >>" | tee -a $BASH_ENV
echo "export AWS_PROFILE=<< parameters.aws-profile >>" | tee -a $BASH_ENV
echo "export KUBECONFIG_FILE_PATH=<< parameters.kubeconfig-file-path >>" | tee -a $BASH_ENV
echo "export ROLE_ARN=<< parameters.cluster-authentication-role-arn >>" | tee -a $BASH_ENV
echo "export CLUSTER_CONTEXT_ALIAS=<< parameters.cluster-context-alias >>" | tee -a $BASH_ENV
echo "export DRY_RUN=<< parameters.dry-run >>" | tee -a $BASH_ENV
echo "export VERBOSE=<< parameters.verbose >>" | tee -a $BASH_ENV
```
Add shell escaping and error handling to environment variable exports.
Parameters are echoed directly to BASH_ENV without escaping. If any parameter contains special characters, newlines, or quotes, this could break the environment variable syntax or introduce injection vulnerabilities. Additionally, there is no error handling if any step fails.
Apply this diff to add proper escaping and error handling:
- run:
name: Computing Environment Variables
command: |
+ set -euo pipefail
echo "------------------------------------------------------"
echo "📝 Computing Environment Variables"
echo "------------------------------------------------------"
- echo "export CLUSTER_NAME=<< parameters.cluster-name >>" | tee -a $BASH_ENV
- echo "export AWS_REGION=<< parameters.aws-region >>" | tee -a $BASH_ENV
- echo "export AWS_PROFILE=<< parameters.aws-profile >>" | tee -a $BASH_ENV
- echo "export KUBECONFIG_FILE_PATH=<< parameters.kubeconfig-file-path >>" | tee -a $BASH_ENV
- echo "export ROLE_ARN=<< parameters.cluster-authentication-role-arn >>" | tee -a $BASH_ENV
- echo "export CLUSTER_CONTEXT_ALIAS=<< parameters.cluster-context-alias >>" | tee -a $BASH_ENV
- echo "export DRY_RUN=<< parameters.dry-run >>" | tee -a $BASH_ENV
- echo "export VERBOSE=<< parameters.verbose >>" | tee -a $BASH_ENV
+ printf '%s\n' "export CLUSTER_NAME='<< parameters.cluster-name >>'" >> $BASH_ENV
+ printf '%s\n' "export AWS_REGION='<< parameters.aws-region >>'" >> $BASH_ENV
+ printf '%s\n' "export AWS_PROFILE='<< parameters.aws-profile >>'" >> $BASH_ENV
+ printf '%s\n' "export KUBECONFIG_FILE_PATH='<< parameters.kubeconfig-file-path >>'" >> $BASH_ENV
+ printf '%s\n' "export ROLE_ARN='<< parameters.cluster-authentication-role-arn >>'" >> $BASH_ENV
+ printf '%s\n' "export CLUSTER_CONTEXT_ALIAS='<< parameters.cluster-context-alias >>'" >> $BASH_ENV
+ printf '%s\n' "export DRY_RUN='<< parameters.dry-run >>'" >> $BASH_ENV
+ printf '%s\n' "export VERBOSE='<< parameters.verbose >>'" >> $BASH_ENV

Key improvements:
- `set -euo pipefail`: Exit on errors, undefined variables, and pipe failures
- `printf` over `echo`: More portable and predictable output
- Single quotes: Prevents shell expansion of special characters in parameter values
- `>>` instead of `tee -a`: Simpler and avoids redundant stdout
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/commands/update-kubeconfig-with-authenticator.yml around lines 64-74, the
script appends raw parameter values to BASH_ENV using echo/tee which can break
with special characters, quotes or newlines and has no robust error handling;
change the top of the script to enable strict failure handling (set -euo
pipefail) and replace each echo | tee -a $BASH_ENV with a safe append using
printf to $BASH_ENV (use >>) while wrapping values in single quotes and escaping
any embedded single quotes (replace ' with '\''), so each line written is of the
form export VAR='escaped-value'; rely on set -euo pipefail for error handling
rather than continuing on failure.
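The escaping rule the prompt describes (wrap in single quotes, turn each embedded `'` into `'\''`) can be sketched as a small helper. `shell_quote` is a hypothetical name, not part of the orb; the sed-based implementation is one possible way to apply the rule.

```shell
#!/bin/bash
# Hedged sketch: quote a value so it can be appended to $BASH_ENV safely.
# Each embedded single quote becomes '\'' (close, escaped quote, reopen).
shell_quote() {
  printf "'%s'" "$(printf '%s' "$1" | sed "s/'/'\\\\''/g")"
}

value="it's a test"
quoted=$(shell_quote "$value")
echo "export DEMO_VAR=$quoted"

# Round-trip: evaluating the quoted form restores the original value.
eval "DEMO_VAR=$quoted"
echo "$DEMO_VAR"
```

One caveat of the command-substitution form: trailing newlines in the value are stripped, which is usually acceptable for single-line parameters like those here.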
Actionable comments posted: 0
🧹 Nitpick comments (2)
src/scripts/update_kubeconfig_common.sh (2)
120-122: Redundant condition check on validated parameter.
At this point in the function, `cluster_name` is guaranteed to be non-empty due to the validation check at lines 112–115, which returns early if `cluster_name` is missing. The condition at line 120 will always be true.

Remove the redundant check:

```diff
- if [[ -n "$cluster_name" ]]; then
-   cmd_args+=("--name" "$cluster_name")
- fi
+ cmd_args+=("--name" "$cluster_name")
```

152-152: Unconditional debug output should respect the --verbose flag.
Line 152 always prints the command arguments, regardless of whether `--verbose` is set. This contradicts the typical design where debug-level output is gated by a verbosity flag. Clarify intent: should this always print (and be documented as such), or should it be conditional?

If this should only print in verbose mode, apply this diff:

```diff
- echo "cmd_args: ${cmd_args[*]}"
+ if [[ "$verbose" == "true" ]]; then
+   echo "cmd_args: ${cmd_args[*]}"
+ fi
```

Alternatively, if unconditional output is intentional for debugging, add a comment explaining why it bypasses the `--verbose` flag.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/commands/update-kubeconfig-with-authenticator.yml (1 hunks)
- src/scripts/update_kubeconfig_common.sh (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/commands/update-kubeconfig-with-authenticator.yml
🔇 Additional comments (1)
src/scripts/update_kubeconfig_common.sh (1)
26-156: Solid implementation with good error handling and auto-detection strategy.
The function demonstrates strong shell scripting practices:
- Robust flag parsing with proper shift logic and unknown-flag error handling (lines 33–76)
- Multi-tier auto-detection fallback (environment variables → kubeconfig context extraction), enabling flexible deployment scenarios (lines 78–109)
- Proper validation with informative error messages to stderr (lines 111–115)
- Safe regex fallbacks with `|| echo ""` for silent degradation (lines 89, 106)
- Implicit exit-code propagation from the `aws` command (line 155)
- Consistent quoting discipline throughout

The script is well-positioned for reuse across Argo workflows and kubeconfig refresh scenarios mentioned in the PR objectives.
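The multi-tier fallback noted above can be sketched as a small resolver. The function and variable names here are illustrative, not the script's real ones, and the kubeconfig-derived branch is only a stand-in for the real extraction logic.

```shell
#!/bin/bash
# Hedged sketch of the fallback chain: explicit argument wins, then an
# environment variable, then the active kubeconfig context as last resort.
resolve_cluster_name() {
  local from_flag="${1:-}"
  if [[ -n "$from_flag" ]]; then                    # 1. explicit argument
    echo "$from_flag"; return 0
  fi
  if [[ -n "${KUBECONFIG_CLUSTER_NAME:-}" ]]; then  # 2. environment variable
    echo "$KUBECONFIG_CLUSTER_NAME"; return 0
  fi
  # 3. derive a name from the current kubeconfig context (strip ARN prefix)
  kubectl config current-context 2>/dev/null | sed -E 's|.*/||'
}

echo "$(resolve_cluster_name explicit-cluster)"
KUBECONFIG_CLUSTER_NAME=env-cluster
echo "$(resolve_cluster_name)"
```

Ordering the tiers this way keeps command-line arguments authoritative while still allowing fully environment-driven invocation in CI.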
…ironment variable exports

This commit modifies the `update-kubeconfig-with-authenticator.yml` file to include conditional checks before exporting environment variables. This ensures that only non-empty parameters are exported, enhancing the script's robustness and preventing unnecessary exports.
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/commands/update-kubeconfig-with-authenticator.yml (1)
67-82: ⚠️ Critical security issue remains unresolved: shell injection and missing error handling.
The same shell escaping and error handling gaps flagged in the past review still persist. Parameters are echoed directly to BASH_ENV without escaping, making the code vulnerable to injection if parameters contain special characters, newlines, or quotes. Additionally, there is no `set -euo pipefail` to enforce strict error handling.

Apply this diff to fix shell escaping and add error handling:

```diff
 - run:
     name: Computing kubectl Environment Variables
     command: |
+      set -euo pipefail
       echo "------------------------------------------------------"
       echo "📝 Computing kubectl Environment Variables"
       echo "------------------------------------------------------"
-      echo "export CLUSTER_NAME=<< parameters.cluster-name >>" | tee -a $BASH_ENV
-      echo "export AWS_REGION=<< parameters.aws-region >>" | tee -a $BASH_ENV
+      printf '%s\n' "export CLUSTER_NAME='<< parameters.cluster-name >>'" >> $BASH_ENV
+      printf '%s\n' "export AWS_REGION='<< parameters.aws-region >>'" >> $BASH_ENV
       if [[ -n "<< parameters.aws-profile >>" ]]; then
-        echo "export AWS_PROFILE=<< parameters.aws-profile >>" | tee -a $BASH_ENV
+        printf '%s\n' "export AWS_PROFILE='<< parameters.aws-profile >>'" >> $BASH_ENV
       fi
       if [[ -n "<< parameters.kubeconfig-file-path >>" ]]; then
-        echo "export KUBECONFIG_FILE_PATH=<< parameters.kubeconfig-file-path >>" | tee -a $BASH_ENV
+        printf '%s\n' "export KUBECONFIG_FILE_PATH='<< parameters.kubeconfig-file-path >>'" >> $BASH_ENV
       fi
       if [[ -n "<< parameters.cluster-authentication-role-arn >>" ]]; then
-        echo "export ROLE_ARN=<< parameters.cluster-authentication-role-arn >>" | tee -a $BASH_ENV
+        printf '%s\n' "export ROLE_ARN='<< parameters.cluster-authentication-role-arn >>'" >> $BASH_ENV
       fi
       if [[ -n "<< parameters.cluster-context-alias >>" ]]; then
-        echo "export CLUSTER_CONTEXT_ALIAS=<< parameters.cluster-context-alias >>" | tee -a $BASH_ENV
+        printf '%s\n' "export CLUSTER_CONTEXT_ALIAS='<< parameters.cluster-context-alias >>'" >> $BASH_ENV
       fi
-      echo "export DRY_RUN=<< parameters.dry-run >>" | tee -a $BASH_ENV
-      echo "export VERBOSE=<< parameters.verbose >>" | tee -a $BASH_ENV
+      printf '%s\n' "export DRY_RUN='<< parameters.dry-run >>'" >> $BASH_ENV
+      printf '%s\n' "export VERBOSE='<< parameters.verbose >>'" >> $BASH_ENV
```

Key improvements:
- `set -euo pipefail`: Exit on errors, undefined variables, and pipe failures
- `printf` over `echo`: More portable and predictable output
- Single quotes: Prevent shell expansion and injection of special characters
- `>>` instead of `tee -a`: Simpler and avoids redundant stdout
This commit updates the environment variable names in both `update-kubeconfig-with-authenticator.yml` and `update_kubeconfig_common.sh` to use a consistent `KUBECONFIG_` prefix. This change narrows the scope of the declared env vars, fixing compatibility issues with further usages of kubectl commands.
…r env vars

This update enhances the `update_kubeconfig_common.sh` script by implementing a new structure that allows command-line arguments to take precedence over environment variables for configuration settings. The previous auto-detection logic for parameters has been removed, streamlining the script's functionality and improving clarity.
…ue in CircleCI config
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/scripts/argo_rollout_status_common.sh (2)
261-267: kubectl_exit_code retains stale failure state across loop iterations.
The variable `kubectl_exit_code` is initialized once before the loop (line 257), but the `|| kubectl_exit_code=$?` pattern only executes on command failure. If iteration 1 fails and iteration 2 succeeds, the exit code remains stuck at the old failure value, causing the success check on line 264 to fail incorrectly.

Reset `kubectl_exit_code=0` at the start of each loop iteration:

```diff
 while [[ $attempt -le $max_retries ]]; do
+  kubectl_exit_code=0
   kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
```
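The hazard is easy to reproduce in isolation. This tiny repro shows why the `|| rc=$?` idiom needs a per-iteration reset: the assignment only runs when the command fails, so a later success never clears the old value.

```shell
#!/bin/bash
# Repro of the stale-exit-code pattern flagged above.
rc=0
false || rc=$?   # rc becomes 1
true  || rc=$?   # true succeeds, the || branch is skipped, rc stays 1
echo "$rc"
```

Running this prints `1`, not `0`, which is exactly the state the review's `kubectl_exit_code=0` reset guards against.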
308-312: Remove or guard debug AWS credential expiration output.
Lines 308–312 print AWS credential expiration on every status check iteration. This clutters logs and appears to be temporary debug code. Remove these lines or guard them behind a clearly named debug flag.

If this is temporary debug code, remove it:

```diff
- echo "** DEBUG AWS EXPIRATION *************************************"
- # Use AWS CLI v1 compatible function instead of v2's export-credentials
- # aws configure export-credentials --format env-no-export
- get_aws_credential_expiration_v1
- echo "*************************************************************"
```

If this is intentional for observability, guard it behind a debug flag:

```diff
+ if [[ "${DEBUG:-false}" == "true" ]]; then
+   echo "** DEBUG AWS EXPIRATION *************************************"
+   get_aws_credential_expiration_v1
+   echo "*************************************************************"
+ fi
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- src/scripts/argo_rollout_status_common.sh (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(35-173)
🔇 Additional comments (1)
src/scripts/argo_rollout_status_common.sh (1)
32-35: Error messages have been corrected from past review.
The error messages at lines 33 and 46 now accurately report when functions are "not defined" rather than the previous misleading phrasing. Good catch on that fix.
Also applies to: 45-48
```shell
#shellcheck disable=SC2329
function refresh_kubeconfig() {
  echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"

  if update_kubeconfig; then
    echo -e "${GREEN}✅ Kubeconfig refreshed successfully${NC}"
    return 0
  else
    echo -e "${RED}❌ Failed to refresh kubeconfig${NC}" >&2
    return 1
  fi
}
```
Environment variable naming mismatch prevents kubeconfig refresh.
Line 175 calls update_kubeconfig with no arguments, expecting it to read from environment variables. However, line 392 exports CLUSTER_NAME and AWS_REGION, while update_kubeconfig (from update_kubeconfig_common.sh) expects KUBECONFIG_CLUSTER_NAME and KUBECONFIG_AWS_REGION. The function will not find the variables and kubeconfig refresh will fail.
Option 1: Export variables with the correct names expected by update_kubeconfig():
- export CLUSTER_NAME AWS_REGION
+ export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"

Option 2: Pass cluster name and region as arguments to update_kubeconfig in the refresh_kubeconfig() function:
function refresh_kubeconfig() {
echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"
- if update_kubeconfig; then
+ if update_kubeconfig --cluster-name "${CLUSTER_NAME}" --aws-region "${AWS_REGION}"; then
echo -e "${GREEN}✅ Kubeconfig refreshed successfully${NC}"
return 0
else
echo -e "${RED}❌ Failed to refresh kubeconfig${NC}" >&2
return 1
fi
}Also applies to: 376-392
🤖 Prompt for AI Agents
In src/scripts/argo_rollout_status_common.sh around lines 171-182 (and also
affecting lines ~376-392), refresh_kubeconfig calls update_kubeconfig but the
environment variable names exported later are CLUSTER_NAME and AWS_REGION while
update_kubeconfig expects KUBECONFIG_CLUSTER_NAME and KUBECONFIG_AWS_REGION,
causing refresh to fail; fix by either (A) changing the exports at ~392 to
export KUBECONFIG_CLUSTER_NAME="$CLUSTER_NAME" and
KUBECONFIG_AWS_REGION="$AWS_REGION" so the function finds the expected env vars,
or (B) change the call at line 175 to pass the values explicitly like
update_kubeconfig "$CLUSTER_NAME" "$AWS_REGION" (ensure the argument order
matches update_kubeconfig's parameter order).
This commit adds detailed debug messages to the `exec_rollout_status` function, improving visibility into the AWS profile search process and the retrieval of credential expiration. It includes messages for searching the credentials file and CLI cache, as well as handling cases where the cache directory is not found. These enhancements aim to facilitate troubleshooting and provide clearer insights during rollout status checks.
Actionable comments posted: 1
♻️ Duplicate comments (3)
src/scripts/argo_rollout_status_common.sh (3)
268-313: Fix exit code variable not reset on retry loop iterations.
The `kubectl_exit_code` variable is initialized once (line 271) but never reset at the start of each loop iteration. If attempt 1 fails (sets `kubectl_exit_code` to non-zero) and attempt 2 succeeds (command returns 0), the `||` clause at line 276 does not execute, leaving `kubectl_exit_code` with the stale failure value from attempt 1. The check at line 278 then incorrectly treats the successful attempt as a failure.

This issue was flagged in the prior review and must be fixed.

Apply this diff to reset the exit code at the start of each loop iteration:

```diff
 while [[ $attempt -le $max_retries ]]; do
+  kubectl_exit_code=0
   kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?

   if [[ $kubectl_exit_code -eq 0 ]] || is_not_found_error "$kubectl_output"; then
```
172-182: Fix environment variable naming mismatch that prevents kubeconfig refresh.
Line 175 calls `update_kubeconfig` with no arguments, expecting it to read from environment variables. However, line 406 exports CLUSTER_NAME and AWS_REGION, while `update_kubeconfig()` from update_kubeconfig_common.sh expects KUBECONFIG_CLUSTER_NAME and KUBECONFIG_AWS_REGION (or accepts these as CLI arguments). The mismatch prevents kubeconfig refresh from finding the required parameters and will fail.

This issue was flagged in prior reviews and remains unresolved.

Choose one of these solutions:

Option 1 (preferred): Pass cluster name and region as CLI arguments to update_kubeconfig:

```diff
 function refresh_kubeconfig() {
   echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"
-  if update_kubeconfig; then
+  if update_kubeconfig --cluster-name "${CLUSTER_NAME}" --aws-region "${AWS_REGION}"; then
     echo -e "${GREEN}✅ Kubeconfig refreshed successfully${NC}"
     return 0
   else
     echo -e "${RED}❌ Failed to refresh kubeconfig${NC}" >&2
     return 1
   fi
 }
```

Option 2: Export variables with the names expected by update_kubeconfig() (line 406):

```diff
- export CLUSTER_NAME AWS_REGION
+ export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"
```

Both options require verification against the actual signature of `update_kubeconfig()` in update_kubeconfig_common.sh.
322-326: Remove or guard debug AWS credential expiration output.
Lines 322–326 print debug output on every status check iteration, including AWS credential expiration information. This debug output was flagged in the prior review but remains unresolved. It clutters logs during normal operation and may contain sensitive timing information.

Either remove this debug block entirely (if temporary) or guard it behind a clearly named debug flag and document its purpose.

If this is temporary debug code, remove it:

```diff
 while true; do
-  echo "** DEBUG AWS EXPIRATION *************************************"
-  # Use AWS CLI v1 compatible function instead of v2's export-credentials
-  # aws configure export-credentials --format env-no-export
-  get_aws_credential_expiration_v1
-  echo "*************************************************************"
   echo "============================================================="
```

If this is intentional for observability, wrap it behind a debug flag:

```diff
 while true; do
+  if [[ "${DEBUG_AWS_EXPIRATION:-false}" == "true" ]]; then
+    echo "** DEBUG AWS EXPIRATION *************************************"
+    get_aws_credential_expiration_v1
+    echo "*************************************************************"
+  fi
   echo "============================================================="
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- src/scripts/argo_rollout_status_common.sh (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(35-173)
This commit expands the `exec_rollout_status` function to include additional methods for retrieving AWS credential expiration. It introduces checks for environment variables, reads from the credentials file using `awk`, and searches the CLI cache and SSO cache. The script now provides detailed debug output for each method, improving visibility into the credential retrieval process and error handling for missing credentials. This enhancement aims to facilitate better management of AWS credentials during rollout status checks.
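One of the lookups this commit describes, reading a per-profile value out of the credentials file with `awk`, can be sketched as follows. The key name `aws_session_expiration` and the file layout are assumptions for illustration; real credentials files vary.

```shell
#!/bin/bash
# Hedged sketch: extract a session expiration for a named profile from an
# INI-style AWS credentials file using awk.
creds_file=$(mktemp)
cat > "$creds_file" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLE
aws_session_expiration = 2024-01-01T00:00:00Z

[deploy]
aws_session_expiration = 2025-06-30T12:00:00Z
EOF

expiration_for_profile() {
  # $1 = credentials file, $2 = profile name
  awk -v section="[$2]" '
    $0 == section { inside = 1; next }   # entered the wanted profile block
    /^\[/         { inside = 0 }         # any other header ends the block
    inside && $1 == "aws_session_expiration" { print $3; exit }
  ' "$1"
}

expiration=$(expiration_for_profile "$creds_file" deploy)
echo "$expiration"
rm -f "$creds_file"
```

The section-tracking flag keeps the match scoped to one profile, so an identically named key in another profile (as with `[default]` above) is never returned by mistake.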
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/scripts/argo_rollout_status_common.sh (2)
171-182: Environment variable naming mismatch prevents kubeconfig refresh (duplicate).

Line 175 calls `update_kubeconfig` with no arguments, expecting it to read from environment variables. However, per the `update_kubeconfig_common.sh` documentation, it expects `KUBECONFIG_CLUSTER_NAME` and `KUBECONFIG_AWS_REGION`, but line 588 exports `CLUSTER_NAME` and `AWS_REGION`. The function will fail to find the variables and kubeconfig refresh will not work.

Fix via Option 1 (preferred): Export variables with the correct names:

```diff
-export CLUSTER_NAME AWS_REGION
+export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"
```

Fix via Option 2: Pass cluster name and region as arguments in `refresh_kubeconfig`:

```diff
 function refresh_kubeconfig() {
   echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"
-  if update_kubeconfig; then
+  if update_kubeconfig --cluster-name "${CLUSTER_NAME}" --aws-region "${AWS_REGION}"; then
     echo -e "${GREEN}✅ Kubeconfig refreshed successfully${NC}"
     return 0
   else
     echo -e "${RED}❌ Failed to refresh kubeconfig${NC}" >&2
     return 1
   fi
 }
```
449-495: Exit code not reset in retry loop causes retries to fail (duplicate).

The retry loop does not reset `kubectl_exit_code` at the start of each iteration. If the first kubectl command fails and sets `kubectl_exit_code` to a non-zero value, the second iteration's successful command will not clear it. Line 460's check will then fail even though the command succeeded, breaking the retry recovery.

Initialize `kubectl_exit_code=0` at the start of each loop iteration:

```diff
 while [[ $attempt -le $max_retries ]]; do
+  kubectl_exit_code=0
   kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/scripts/argo_rollout_status_common.sh (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(35-173)
Actionable comments posted: 1
♻️ Duplicate comments (4)
src/scripts/argo_rollout_status_common.sh (3)
504-508: 🟠 Major: Unconditional debug output clutters logs on every iteration.

Lines 504–508 invoke `get_aws_credential_expiration_v1` on every status check iteration, producing extensive multi-method debug output (Methods 0–4, colored headers, cache searches, etc.). In production, this will overwhelm logs during normal operation.

Guard this behind a debug flag:

```diff
-echo "** DEBUG AWS EXPIRATION *************************************"
-# Use AWS CLI v1 compatible function instead of v2's export-credentials
-# aws configure export-credentials --format env-no-export
-get_aws_credential_expiration_v1
-echo "*************************************************************"
+if [[ "${DEBUG_AWS_EXPIRATION:-false}" == "true" ]]; then
+  echo "** DEBUG AWS EXPIRATION *************************************"
+  get_aws_credential_expiration_v1
+  echo "*************************************************************"
+fi
```
450-495: 🔴 Critical: Retry loop does not reset exit code on each iteration.

The variable `kubectl_exit_code` is initialized once at line 453, but the while loop (line 457) does not reset it at the start of each iteration. If the first kubectl attempt fails with a non-zero exit code and a subsequent attempt succeeds, `kubectl_exit_code` will retain the old failure value. This causes the condition at line 460 (`[[ $kubectl_exit_code -eq 0 ]]`) to evaluate incorrectly, misreporting the retry outcome.

Trace: On iteration 2, if kubectl succeeds, the `|| kubectl_exit_code=$?` branch doesn't execute (the command succeeded), leaving `kubectl_exit_code` with its previous iteration's failure value. The subsequent checks then process stale state.

Initialize `kubectl_exit_code=0` at the start of each loop iteration:

```diff
 while [[ $attempt -le $max_retries ]]; do
+  kubectl_exit_code=0
   kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
```
171-182: 🔴 Critical: Environment variable naming mismatch prevents kubeconfig refresh.

Line 175 calls `update_kubeconfig` with no arguments, expecting it to read environment variables. However, line 588 exports `CLUSTER_NAME` and `AWS_REGION`, while the `update_kubeconfig()` function from `update_kubeconfig_common.sh` (per the relevant code snippets) expects `KUBECONFIG_CLUSTER_NAME` and `KUBECONFIG_AWS_REGION`. The function will fail to find the variables, causing kubeconfig refresh to fail even if inputs are available.

Fix by either:

Option 1 — Export variables with the correct names expected by `update_kubeconfig()`:

```diff
-export CLUSTER_NAME AWS_REGION
+export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"
```

Option 2 — Pass cluster name and region as explicit arguments to `update_kubeconfig()`:

```diff
 function refresh_kubeconfig() {
   echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"
-  if update_kubeconfig; then
+  if update_kubeconfig --cluster-name "${CLUSTER_NAME}" --aws-region "${AWS_REGION}"; then
     echo -e "${GREEN}✅ Kubeconfig refreshed successfully${NC}"
     return 0
```

.circleci/test-deploy.yml (1)
93-93: Remove `DUMMY_ENV_VAR` from production and staging deployments (duplicate).

This issue was flagged in a previous review and remains unresolved. Lines 93 and 148 inject `DUMMY_ENV_VAR` into production and staging Argo deployments respectively. The variable name strongly indicates a test artifact that should not be propagated to actual Kubernetes clusters. Remove both occurrences.

Also applies to: 148-148
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.circleci/test-deploy.yml (2 hunks)
src/scripts/argo_rollout_status_common.sh (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(35-173)
🔇 Additional comments (4)
src/scripts/argo_rollout_status_common.sh (4)
32-48: ✅ Error message validation has been corrected.

The error message at line 46 now correctly states "is not defined" rather than the misleading "is defined in subshell" from the previous review. This issue appears to have been addressed.
147-156: Helper functions for error detection are well-structured.

The `is_not_found_error()` function cleanly detects not-found patterns across various error message formats. This is a solid foundation for handling missing resources gracefully.
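As a sketch of what such a helper can look like — the match patterns below are illustrative assumptions, not the actual patterns in `is_not_found_error()`:

```shell
#!/usr/bin/env bash
# Illustrative not-found detector; the real function may match
# different strings than the patterns assumed here.
is_not_found_error_demo() {
  case "$1" in
    *"NotFound"* | *"not found"* | *"No resources found"*) return 0 ;;
    *) return 1 ;;
  esac
}

msg='Error from server (NotFound): rollouts.argoproj.io "web" not found'
if is_not_found_error_demo "$msg"; then
  echo "treat as missing resource"
else
  echo "treat as other failure"
fi
```

Returning an exit status (rather than printing a value) lets callers use the helper directly in `if` conditions, which is the idiomatic shell shape for a predicate.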
158-169: Proactive auth check is a good defensive pattern.

The `check_kubectl_auth()` function tests kubectl connectivity before attempting rollout operations, which helps surface auth issues early rather than discovering them during command execution.
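A generic sketch of that pattern — probe with a cheap command and classify the failure. The classification strings and probe functions here are assumptions for illustration; in the real script the probe would be a kubectl call:

```shell
#!/usr/bin/env bash
# Run any cheap probe command and classify the result.
check_cli_auth_demo() {
  local out
  if out=$("$@" 2>&1); then
    echo "ok"
    return 0
  fi
  case "$out" in
    *Unauthorized* | *expired* | *credentials*)
      echo "auth-error"; return 1 ;;
    *)
      echo "other-error"; return 2 ;;
  esac
}

probe_ok()     { echo "pods listed"; }
probe_denied() { echo "error: Unauthorized" >&2; return 1; }

r1=$(check_cli_auth_demo probe_ok)
r2=$(check_cli_auth_demo probe_denied) || true
echo "$r1 / $r2"
```

Distinguishing "auth-error" from "other-error" via distinct return codes is what lets the retry logic decide between refreshing credentials and simply retrying.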
572-579: Verify that CLUSTER_NAME and AWS_REGION are provided by the caller.

Lines 572–573 reference `${CLUSTER_NAME}` and `${AWS_REGION}` in the status header. Per the PR summary, these are new parameters added to argo-rollout-status commands. Ensure the calling layer (e.g., `src/commands/argo-rollout-status.yml`) passes these as environment variables to `exec_rollout_status()`.
```diff
   cluster-name: staging
   image-tag: stg-60cbb90
-  region: ${AWS_REGION_STAGING}
+  region: us-west-2
```
🧩 Analysis chain
Replace hardcoded us-west-2 with a parameterized region variable.
Line 114 hardcodes the staging region to us-west-2, which contradicts the PR objective of adding parameterized aws-region support. The enriched summary indicates this was previously ${AWS_REGION_STAGING}. Hardcoding reduces reusability and deviates from the pattern used in production (line 60: ${AWS_REGION}). Restore the parameterized region variable to align with the PR goals.
🏁 Script executed:
#!/bin/bash
# Verify what region variable was previously used in staging and check all region usages
echo "=== Staging region assignments in current file ==="
rg -n "region:" .circleci/test-deploy.yml | grep -A2 -B2 "staging"
echo -e "\n=== Git log: check prior staging region value ==="
git log --oneline -p .circleci/test-deploy.yml | grep -A3 -B3 "staging.*region" | head -30

Length of output: 101
I need to refine the verification approach since the initial searches returned empty results. Let me inspect the file directly and check all region assignments.
🏁 Script executed:
#!/bin/bash
# Direct inspection of the file and region assignments
echo "=== File exists and line count ==="
wc -l .circleci/test-deploy.yml
echo -e "\n=== Line 114 and surrounding context ==="
sed -n '110,120p' .circleci/test-deploy.yml
echo -e "\n=== All 'region:' assignments in the file ==="
rg -n "region:" .circleci/test-deploy.yml
echo -e "\n=== Check for staging-argo-deploy section ==="
rg -n "staging-argo-deploy" .circleci/test-deploy.yml
echo -e "\n=== Git history of the file ==="
git log --oneline .circleci/test-deploy.yml | head -10
echo -e "\n=== Check if file was modified in this PR ==="
git diff HEAD~1 .circleci/test-deploy.yml | head -50

Length of output: 2216
Replace hardcoded us-west-2 with parameterized region variable at line 114.
Line 114 hardcodes the staging region to us-west-2, which is inconsistent with the rest of the codebase. All other staging sections (lines 106, 161, 169, 216, 224) use ${AWS_REGION_STAGING}, and production (line 60) uses ${AWS_REGION}. The git diff confirms this PR changed line 114 from ${AWS_REGION_STAGING} to us-west-2—a regression that contradicts the PR objective of adding parameterized aws-region support. Restore to ${AWS_REGION_STAGING}.
🤖 Prompt for AI Agents
In .circleci/test-deploy.yml around line 114, the staging region was
accidentally hardcoded to "us-west-2" (regression from this PR); replace that
value with the parameterized variable ${AWS_REGION_STAGING} to match the other
staging sections and restore consistent parameterized region usage.
Test for new image with tools and binaries already installed.
Remove AWS CLI v1 related code
This commit updates the .gitignore file to include auto-generated rules for the Snyk Security Extension, ensuring that these files are not tracked in the repository.
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/scripts/argo_rollout_status_common.sh (1)
185-230: Fix `kubectl_exit_code` not being reset in retry loop.

Line 188 initializes `kubectl_exit_code=0`, but the retry loop does not reset it at the start of each iteration. When a kubectl command fails on attempt N (setting `kubectl_exit_code` to non-zero), and then succeeds on attempt N+1, the `||` operator does not execute (because the command succeeds), leaving `kubectl_exit_code` with the old failure value. Line 195 then incorrectly evaluates the exit code as non-zero even though the retry succeeded.

This causes the retry logic to misreport successful retries as failures, breaking the entire fault-recovery mechanism.

Apply this diff to reset `kubectl_exit_code` at the start of each iteration:

```diff
 while [[ $attempt -le $max_retries ]]; do
+  kubectl_exit_code=0
   kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
```
🧹 Nitpick comments (1)
.gitignore (1)
10-11: Minor: Redundant ignore rule due to parent directory already excluded.

Line 5 already ignores `/.cursor/`, so the new entry `/.cursor/rules/snyk_rules.mdc` is redundant. While explicit documentation of auto-generated files is valuable for clarity, this specific path is already covered by the parent directory exclusion.

Consider removing line 11 and relying on the parent directory exclusion, or adding a comment to line 5 explaining what auto-generated files live there.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
.gitignore (1 hunks)
src/executors/argo.yml (1 hunks)
src/jobs/argo-deploy.yml (2 hunks)
src/scripts/argo_rollout_status_common.sh (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(35-173)
🔇 Additional comments (4)
src/executors/argo.yml (1)
3-3: Verify the RC image version is appropriate for production use.

The CircleCI EKS convenience image is pinned to `v2.3.0-rc1`, which is a release candidate. RC versions are typically unstable and not recommended for production deployments. Confirm whether this is intentional or if a stable release version (e.g., `v2.3.0`) should be used instead.

src/jobs/argo-deploy.yml (1)
10-15: Clean parameter propagation and environment wiring.

The new `cluster-name` and `region` parameters are properly defined and consistently propagated through the job steps. The `UPDATE_KUBECONFIG_COMMON_SCRIPT` inclusion aligns well with the refactored kubeconfig management approach.

Also applies to: 180-181, 203-208
src/scripts/argo_rollout_status_common.sh (2)
37-48: Environment script validation now correctly handles missing `update_kubeconfig` function.

The validation logic properly checks that `update_kubeconfig` is available before proceeding, with a corrected error message that accurately reflects the failure condition.
147-156: New helper functions support robust error detection and authentication checks.

The `is_not_found_error()` and `check_kubectl_auth()` functions provide good defensive checks for distinguishing authentication failures from not-found errors, which improves the retry logic's ability to make informed decisions.

Also applies to: 158-169
```bash
function refresh_kubeconfig() {
  echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"

  if update_kubeconfig; then
    echo -e "${GREEN}✅ Kubeconfig refreshed successfully${NC}"
    return 0
  else
    echo -e "${RED}❌ Failed to refresh kubeconfig${NC}" >&2
    return 1
  fi
}
```
Fix environment variable naming mismatch between exports and update_kubeconfig expectations.
Line 175 calls update_kubeconfig() with no arguments, expecting it to read from environment variables. However, based on the update_kubeconfig function definition (in update_kubeconfig_common.sh), it expects variables named KUBECONFIG_CLUSTER_NAME and KUBECONFIG_AWS_REGION.
Line 321 exports CLUSTER_NAME and AWS_REGION instead. When the subshell runs refresh_kubeconfig() → update_kubeconfig(), the function will not find the required environment variables and will fail.
Option 1 (Recommended): Export variables with the correct names expected by `update_kubeconfig()`:

```diff
-export CLUSTER_NAME AWS_REGION
+export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"
```

Option 2: Pass cluster name and region as explicit arguments to `update_kubeconfig` in the `refresh_kubeconfig()` function:

```diff
 function refresh_kubeconfig() {
   echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"
-  if update_kubeconfig; then
+  if update_kubeconfig --cluster-name "${CLUSTER_NAME}" --aws-region "${AWS_REGION}"; then
     echo -e "${GREEN}✅ Kubeconfig refreshed successfully${NC}"
     return 0
   else
```

Also applies to: 321-321
🤖 Prompt for AI Agents
In src/scripts/argo_rollout_status_common.sh around lines 172-182 (and related
export at line 321), refresh_kubeconfig() calls update_kubeconfig() which
expects environment variables named KUBECONFIG_CLUSTER_NAME and
KUBECONFIG_AWS_REGION but the script currently exports CLUSTER_NAME and
AWS_REGION; change the export(s) at/near line 321 to export
KUBECONFIG_CLUSTER_NAME="$CLUSTER_NAME" and KUBECONFIG_AWS_REGION="$AWS_REGION"
(or rename the variables to those names) so update_kubeconfig finds them, or
alternatively modify refresh_kubeconfig() to call update_kubeconfig
"$CLUSTER_NAME" "$AWS_REGION" and ensure update_kubeconfig's parameter handling
matches.
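The naming contract behind this fix can be demonstrated with a stand-in for the real function. This is a sketch: `update_kubeconfig_demo` only mimics the documented behavior of `update_kubeconfig()` from `update_kubeconfig_common.sh` (it reads `KUBECONFIG_CLUSTER_NAME` / `KUBECONFIG_AWS_REGION` and fails otherwise).

```shell
#!/usr/bin/env bash
# Stand-in mimicking the documented contract: it reads
# KUBECONFIG_CLUSTER_NAME / KUBECONFIG_AWS_REGION, not CLUSTER_NAME / AWS_REGION.
update_kubeconfig_demo() {
  if [[ -z "${KUBECONFIG_CLUSTER_NAME:-}" || -z "${KUBECONFIG_AWS_REGION:-}" ]]; then
    echo "missing KUBECONFIG_* variables" >&2
    return 1
  fi
  echo "aws eks update-kubeconfig --name ${KUBECONFIG_CLUSTER_NAME} --region ${KUBECONFIG_AWS_REGION}"
}

CLUSTER_NAME="staging"
AWS_REGION="us-west-2"

# Exporting the caller's names is not enough...
export CLUSTER_NAME AWS_REGION
update_kubeconfig_demo || echo "refresh failed"

# ...mapping them onto the expected names makes the call succeed.
export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"
update_kubeconfig_demo
```

The first call prints "refresh failed" despite the exports; only the renamed exports satisfy the function, which is exactly why Option 1 above works.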
```bash
echo "** DEBUG AWS EXPIRATION *************************************"
aws configure export-credentials --format env-no-export
echo "*************************************************************"
```
Remove or guard unconditional debug AWS credentials output.
Lines 239–241 unconditionally call aws configure export-credentials --format env-no-export, which may expose sensitive AWS credentials in logs depending on configuration. This appears to be temporary debugging code that should either be removed or guarded behind a debug flag.
Guard this behind a debug flag or remove if no longer needed:
```diff
-echo "** DEBUG AWS EXPIRATION *************************************"
-aws configure export-credentials --format env-no-export
-echo "*************************************************************"
+if [[ "${DEBUG_AWS_EXPIRATION:-false}" == "true" ]]; then
+  echo "** DEBUG AWS EXPIRATION *************************************"
+  aws configure export-credentials --format env-no-export
+  echo "*************************************************************"
+fi
```

🤖 Prompt for AI Agents
In src/scripts/argo_rollout_status_common.sh around lines 239 to 241, there is
an unconditional debug block that prints and exports AWS credentials (aws
configure export-credentials --format env-no-export), which can leak sensitive
credentials into logs; remove these three lines if they are no longer needed, or
wrap them in a conditional debug check (e.g., test a DEBUG or VERBOSE env var)
so the aws export-credentials call only runs when debugging is explicitly
enabled and never in normal/CI runs; ensure any debug path requires an explicit
opt-in and document the env flag in the script header.
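The opt-in flag suggested above behaves like this in isolation — a self-contained sketch (flag name as proposed in the review; the debug body is a stand-in):

```shell
#!/usr/bin/env bash
# Debug output only when explicitly enabled via an environment flag.
debug_aws_expiration() {
  [[ "${DEBUG_AWS_EXPIRATION:-false}" == "true" ]] || return 0
  echo "** DEBUG AWS EXPIRATION **"
  # ...credential inspection (e.g. aws configure export-credentials) here...
  echo "**************************"
}

quiet=$(debug_aws_expiration)
verbose=$(DEBUG_AWS_EXPIRATION=true debug_aws_expiration)
[ -z "$quiet" ] && echo "silent by default"
[ -n "$verbose" ] && echo "verbose when enabled"
```

The `${VAR:-false}` default keeps the function safe under `set -u`, and the early `return 0` means the guard never changes the caller's exit status in normal runs.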
Test for new image with tools and binaries already installed.
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/scripts/argo_rollout_status_common.sh (2)
172-182: Environment variable naming mismatch prevents kubeconfig refresh.

Line 175 calls `update_kubeconfig` with no arguments, expecting it to read from environment variables. However, line 340 exports `CLUSTER_NAME` and `AWS_REGION`, while the `update_kubeconfig` function (from the relevant code snippet) expects `KUBECONFIG_CLUSTER_NAME` and `KUBECONFIG_AWS_REGION`. The function will not find the variables and kubeconfig refresh will fail silently or exit with an error.

Recommended fix: Export variables with the correct names expected by `update_kubeconfig()`:

```diff
-export CLUSTER_NAME AWS_REGION
+export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"
```

Alternatively, pass cluster name and region as arguments in the `refresh_kubeconfig()` function call:

```diff
 function refresh_kubeconfig() {
   echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"
-  if update_kubeconfig; then
+  if update_kubeconfig --cluster-name "${CLUSTER_NAME}" --aws-region "${AWS_REGION}"; then
```
188-225: Reset `kubectl_exit_code` on each retry loop iteration.

The retry loop initializes `kubectl_exit_code=0` at line 188, but does not reset it at the start of each iteration. If the first `kubectl` command fails with a non-zero exit code and a subsequent attempt succeeds, `kubectl_exit_code` will retain the old failure value. This causes the function to incorrectly report success/failure based on stale state.

Initialize `kubectl_exit_code=0` at the start of each loop iteration (before the `kubectl` invocation) to ensure clean state:

```diff
 while [[ $attempt -le $max_retries ]]; do
+  kubectl_exit_code=0
   kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/scripts/argo_rollout_status_common.sh (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(35-173)
🔇 Additional comments (3)
src/scripts/argo_rollout_status_common.sh (3)
45-48: Good catch on the error message.

The error message at line 46 correctly states "is not defined" now, matching the actual condition. ✅
324-325: Verify CLUSTER_NAME and AWS_REGION availability at print time.

The print header references `${CLUSTER_NAME}` and `${AWS_REGION}` at lines 324–325, but these variables are populated from command-line parameters. Confirm that these parameters are being passed through the Argo commands and set before `print_header` is called. If they are empty, the header output will be misleading.
340-340: I'll help you verify this review comment. Let me start by examining the relevant code section and understanding the context around the subshell exports.
<function_calls>
cat -n src/scripts/argo_rollout_status_common.sh | sed -n '330,360p'
</function_calls>Let me check the file size first:
<function_calls>
wc -l src/scripts/argo_rollout_status_common.sh
</function_calls>Now let me examine the context around lines 340-348:
<function_calls>
cat -n src/scripts/argo_rollout_status_common.sh | sed -n '320,370p'
</function_calls>Now I need to identify all function calls within
get_kubectl_argo_rolloutand find where all the functions are declared:
<function_calls>
cat -n src/scripts/argo_rollout_status_common.sh | grep -A 50 "^[[:space:]]*get_kubectl_argo_rollout()" | head -60
</function_calls>Let me get a better view of the function definitions:
<function_calls>
grep -n "^[a-zA-Z_][a-zA-Z0-9_]*() {" src/scripts/argo_rollout_status_common.sh
</function_calls>Now let me search for function calls within the subshell context (around line 345):
<function_calls>
sed -n '330,400p' src/scripts/argo_rollout_status_common.sh | cat -n
</function_calls>Let me get a cleaner view by reading the entire function more carefully:
<function_calls>
awk '/^get_kubectl_argo_rollout() {/,/^}/' src/scripts/argo_rollout_status_common.sh | head -100
</function_calls>Now let me check what's exported and what functions are called:
<function_calls>
python3 << 'EOF'
import rewith open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
content = f.read()
lines = content.split('\n')Find get_kubectl_argo_rollout function
start_idx = None
end_idx = None
for i, line in enumerate(lines):
if re.match(r'^get_kubectl_argo_rollout() {', line):
start_idx = i
breakif start_idx is not None:
depth = 1
for i in range(start_idx + 1, len(lines)):
if '{' in lines[i]:
depth += lines[i].count('{')
if '}' in lines[i]:
depth -= lines[i].count('}')
if depth == 0:
end_idx = i
breakprint("get_kubectl_argo_rollout function:") print("=" * 60) for i in range(start_idx, min(end_idx + 1, start_idx + 100)): print(f"{i+1:4d}: {lines[i]}")EOF
</function_calls>Now let me search for the function declarations to verify they exist:
<function_calls>
for func in "is_not_found_error" "check_kubectl_auth" "refresh_kubeconfig" "update_kubeconfig"; do
echo "=== Searching for $func ==="
grep -n "^$func()" src/scripts/argo_rollout_status_common.sh || echo "Not found at function definition level"
done
</function_calls>Let me check if these functions might be sourced or defined elsewhere:
<function_calls>
rg -n "is_not_found_error|check_kubectl_auth|refresh_kubeconfig|update_kubeconfig" src/scripts/argo_rollout_status_common.sh | head -30
</function_calls>Let me look at the beginning of the file to see what's sourced:
<function_calls>
head -50 src/scripts/argo_rollout_status_common.sh
</function_calls>Now let me check the full context around line 340-348 more carefully:
<function_calls>
sed -n '335,355p' src/scripts/argo_rollout_status_common.sh | cat -n
</function_calls>Let me look at the structure more carefully around lines 340-348:
<function_calls>
cat -n src/scripts/argo_rollout_status_common.sh | sed -n '338,355p'
</function_calls>Let me get a Python script to understand the structure better:
<function_calls>
python3 << 'EOF'
with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
lines = f.readlines()Show lines 335-360 with line numbers and highlighting context
for i in range(334, min(360, len(lines))):
line = lines[i]
print(f"{i+1:4d}: {line.rstrip()}")
EOF
</function_calls>Now let me understand what's in the subshell by looking at line 345:
<function_calls>
sed -n '345p' src/scripts/argo_rollout_status_common.sh
</function_calls>Let me see a broader context:
<function_calls>
sed -n '340,360p' src/scripts/argo_rollout_status_common.sh
</function_calls>Let me check if there are function declarations in the file using a proper bash-aware method:
<function_calls>
grep -n "^[[:space:]]*(is_not_found_error|check_kubectl_auth|refresh_kubeconfig|update_kubeconfig|get_kubectl_argo_rollout)() {" src/scripts/argo_rollout_status_common.sh
</function_calls>Let me search more broadly:
<function_calls>
rg -n "^\s*(is_not_found_error|check_kubectl_auth|refresh_kubeconfig|update_kubeconfig)\s*()" src/scripts/argo_rollout_status_common.sh
</function_calls>Let me check the export statements around line 340 and 347-348:
<function_calls>
sed -n '335,352p' src/scripts/argo_rollout_status_common.sh | grep -E "(export|declare -f)"
</function_calls>Let me get the exact context:
<function_calls>
sed -n '330,360p' src/scripts/argo_rollout_status_common.sh | nl -v 330
</function_calls>Now let me search for where these functions might be defined:
<function_calls>
rg -n "is_not_found_error()|check_kubectl_auth()|refresh_kubeconfig()|update_kubeconfig()" src/scripts/ | head -20
</function_calls>Let me check if these functions are in other files in the same directory:
<function_calls>
ls -la src/scripts/ | head -20
</function_calls>Let me search across the entire src/scripts directory:
<function_calls>
grep -r "^is_not_found_error()" src/scripts/ 2>/dev/null | head -5
</function_calls>Let me look for function definitions more carefully:
<function_calls>
grep -n "is_not_found_error" src/scripts/*.sh
</function_calls>Let me check all shell scripts in src/scripts:
<function_calls>
fd -e sh src/scripts/ -exec wc -l {} + | sort -rn | head -20
</function_calls>Let me look at the full structure of the file to find where functions are defined:
<function_calls>
grep -n "^[a-zA-Z_][a-zA-Z0-9_]*() {" src/scripts/argo_rollout_status_common.sh | head -30
</function_calls>Let me check the entire file structure:
<function_calls>
cat src/scripts/argo_rollout_status_common.sh | head -200 | tail -100
</function_calls>Let me search for function declarations in a different way:
<function_calls>
python3 << 'EOF'
import rewith open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
lines = f.readlines()Find all function definitions
functions = []
for i, line in enumerate(lines):
if re.match(r'^[a-zA-Z_][a-zA-Z0-9_]()\s{', line):
func_name = line.split('(')[0].strip()
functions.append((i+1, func_name))print("All functions defined in the file:")
for line_num, func_name in functions:
print(f"Line {line_num}: {func_name}()")Now find get_kubectl_argo_rollout and analyze it
print("\n" + "="*60)
print("Looking for get_kubectl_argo_rollout function:")
for line_num, func_name in functions:
if func_name == 'get_kubectl_argo_rollout':
print(f"Found at line {line_num}")
# Show the function
start_idx = line_num - 1
depth = 0
for i in range(start_idx, min(start_idx + 150, len(lines))):
line = lines[i]
depth += line.count('{') - line.count('}')
print(f"{i+1:4d}: {line.rstrip()}")
if depth == 0 and i > start_idx:
break
EOF
</function_calls>Perfect! Now let me check what functions are called within
get_kubectl_argo_rollout:
<function_calls>
python3 << 'EOF'
import rewith open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
lines = f.readlines()Find get_kubectl_argo_rollout function
start_idx = None
for i, line in enumerate(lines):
if re.match(r'^get_kubectl_argo_rollout()\s*{', line):
start_idx = i
breakif start_idx:
depth = 0
end_idx = start_idx
func_body = []
for i in range(start_idx, len(lines)):
line = lines[i]
func_body.append(line)
depth += line.count('{') - line.count('}')
if depth == 0 and i > start_idx:
end_idx = i
break# Extract function calls func_text = ''.join(func_body) # Find function calls (pattern: word followed by parentheses) calls = re.findall(r'\b([a-zA-Z_][a-zA-Z0-9_]*)\s*\(', func_text) print("Function calls within get_kubectl_argo_rollout:") print("=" * 60) for call in sorted(set(calls)): if call not in ['if', 'then', 'else', 'for', 'while', 'do', 'done', 'function', 'local']: print(f" {call}()") print("\n" + "=" * 60) print("Function body:") for i, line in enumerate(func_body): print(f"{start_idx + i + 1:4d}: {line.rstrip()}")EOF
</function_calls>Now let me check what's exported on lines 340 and 347-348:
<function_calls>
python3 << 'EOF'
with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
lines = f.readlines()Check lines around 340 and 347-348
print("Exports and declarations around lines 340 and 347-348:")
print("=" * 60)
for i in [339, 346, 347]: # 0-indexed
if i < len(lines):
print(f"Line {i+1}: {lines[i].rstrip()}")Also show broader context
print("\n" + "=" * 60)
print("Context around lines 330-360:")
for i in range(329, min(360, len(lines))):
print(f"{i+1:4d}: {lines[i].rstrip()}")
EOF
</function_calls>Now let me verify if the functions mentioned are actually called and if they need to be exported:
<function_calls>
python3 << 'EOF'
import rewith open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
content = f.read()
lines = content.split('\n')Find all function definitions
all_functions = set()
for line in lines:
match = re.match(r'^([a-zA-Z_][a-zA-Z0-9_])()\s{', line)
if match:
all_functions.add(match.group(1))print("All functions defined in file:")
for func in sorted(all_functions):
print(f" {func}")print("\n" + "=" * 60)
print("Checking for required functions mentioned in review:")
required = ['is_not_found_error', 'check_kubectl_auth', 'refresh_kubeconfig', 'update_kubeconfig']
for func in required:
if func in all_functions:
print(f" ✓ {func} is defined")
else:
# Check if it's called
if re.search(rf'\b{func}\s*(', content):
print(f" ✗ {func} is called but NOT defined in this file")
else:
print(f" ? {func} is not found (neither defined nor called)")
EOF
</function_calls>Now let me check what variables are used within the
get_kubectl_argo_rolloutfunction, particularly around the subshell:
<function_calls>
sed -n '320,360p' src/scripts/argo_rollout_status_common.sh | cat -n
</function_calls>Let me look at lines 340 and 347-348 more precisely with context on what's happening:
<function_calls>
python3 << 'EOF'
with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
lines = f.readlines()Show lines around 340 and 347-348 with detailed context
print("Exact lines mentioned in review comment:")
print("=" * 70)Line 340
if 340 <= len(lines):
print(f"Line 340: {lines[339].rstrip()}")Lines 347-348
if 348 <= len(lines):
print(f"Line 347: {lines[346].rstrip()}")
print(f"Line 348: {lines[347].rstrip()}")print("\n" + "=" * 70)
print("Full context (lines 335-360):")
print("=" * 70)
for i in range(334, min(360, len(lines))):
print(f"{i+1:4d}: {lines[i].rstrip()}")
EOF
</function_calls>Let me check if there are source or dot statements at the beginning of the file:
<function_calls>
head -100 src/scripts/argo_rollout_status_common.sh | grep -E "^\s*(.|source)\s"
</function_calls>Let me check the beginning more carefully:
<function_calls>
head -60 src/scripts/argo_rollout_status_common.sh
</function_calls>Now let me search all shell scripts for these function definitions:
<function_calls>
grep -r "^is_not_found_error()|^check_kubectl_auth()|^refresh_kubeconfig()|^update_kubeconfig()" src/ 2>/dev/null | head -20
</function_calls>Let me search more broadly:
<function_calls>
find src -name "*.sh" -exec grep -l "is_not_found_error|check_kubectl_auth|refresh_kubeconfig|update_kubeconfig" {} ;
</function_calls>Now let me examine what those other files contain:
<function_calls>
for file in $(find src -name "*.sh" -exec grep -l "is_not_found_error|check_kubectl_auth" {} ;); do
echo "File: $file"
grep -n "is_not_found_error|check_kubectl_auth" "$file" | head -5
done
</function_calls>Let me look at the full picture by examining what's in the subshell context more carefully:
<function_calls>
python3 << 'EOF'
import re

with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
    lines = f.readlines()

# Find the subshell context starting around line 345
print("Searching for subshell context (timeout ...)")
print("=" * 70)

for i in range(len(lines)):
    if 'timeout' in lines[i] and '(' in lines[i]:
        print(f"Found timeout at line {i+1}:")
        # Show context
        for j in range(max(0, i-5), min(len(lines), i+30)):
            print(f"{j+1:4d}: {lines[j].rstrip()}")
        print()
EOF
</function_calls>

Perfect! Now I can see the structure. Let me verify what's actually exported and what functions are called:

<function_calls>
python3 << 'EOF'
import re

with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
    lines = f.readlines()

# Find the timeout line and analyze exports
print("Analysis of subshell exports and function calls:")
print("=" * 70)

# Look for timeout with subshell
for i in range(len(lines)):
    line = lines[i]
    if 'timeout' in line and '(' in line:
        print(f"Line {i+1}: {line.rstrip()}")

        # Look for the subshell opening
        j = i
        while j < len(lines) and '$(' not in lines[j]:
            j += 1
        if j < len(lines):
            print(f"\nSubshell starts at line {j+1}: {lines[j].rstrip()}")
            # Find the matching closing paren
            depth = 0
            in_subshell = False
            subshell_lines = []
            for k in range(j, min(j+50, len(lines))):
                line_k = lines[k]
                if '$(bash' in line_k or '$(sh' in line_k:
                    in_subshell = True
                    depth = 1
                if in_subshell:
                    subshell_lines.append((k+1, line_k.rstrip()))
                    if ')' in line_k:
                        depth -= 1
                        if depth == 0:
                            break
            print("\nSubshell content:")
            for line_num, content in subshell_lines:
                print(f"{line_num:4d}: {content}")
            # Extract exports
            print("\n" + "=" * 70)
            print("Exports before subshell:")
            for k in range(max(0, j-10), j):
                if 'export' in lines[k]:
                    print(f"Line {k+1}: {lines[k].rstrip()}")
EOF
</function_calls>

Now let me look at the complete picture around lines 340-360:

<function_calls>
sed -n '325,365p' src/scripts/argo_rollout_status_common.sh | cat -n -v
</function_calls>

Let me get a cleaner view:
<function_calls>
python3 << 'EOF'
with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
    lines = f.readlines()

# Show lines 325-365
print("Lines 325-365:")
print("=" * 70)
for i in range(324, min(365, len(lines))):
    # Show line with visible spaces/tabs
    line = lines[i].rstrip()
    print(f"{i+1:4d}: {line}")
EOF
</function_calls>

Now let me verify the specific questions in the review comment:
<function_calls>
python3 << 'EOF'
import re

with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
    content = f.read()
lines = content.split('\n')

# Find all function definitions
all_functions = {}
for i, line in enumerate(lines):
    match = re.match(r'^([a-zA-Z_][a-zA-Z0-9_]*)\(\)\s*{', line)
    if match:
        all_functions[match.group(1)] = i + 1

print("=" * 70)
print("VERIFICATION CHECKLIST:")
print("=" * 70)

# Question 1: Functions called within get_kubectl_argo_rollout
print("\n1. Functions called within get_kubectl_argo_rollout:")
print("-" * 70)

# Find get_kubectl_argo_rollout
start = None
end = None
for i, line in enumerate(lines):
    if re.match(r'^get_kubectl_argo_rollout\(\)\s*{', line):
        start = i
        break

if start:
    depth = 0
    for i in range(start, len(lines)):
        line = lines[i]
        depth += line.count('{') - line.count('}')
        if depth == 0 and i > start:
            end = i
            break

    func_text = '\n'.join(lines[start:end+1])
    # Extract function calls
    calls = set(re.findall(r'\b([a-zA-Z_][a-zA-Z0-9_]*)\s*\(', func_text))
    # Filter out shell keywords
    keywords = {'if', 'then', 'else', 'for', 'while', 'do', 'done', 'function', 'local', 'case', 'in', 'esac'}
    func_calls = sorted(calls - keywords)
    required_functions = ['is_not_found_error', 'check_kubectl_auth', 'refresh_kubeconfig']
    for func in required_functions:
        status = "FOUND in calls" if func in func_calls else "NOT FOUND"
        defined = f"Defined at line {all_functions[func]}" if func in all_functions else "NOT DEFINED in this file"
        print(f"  {func}(): {status} - {defined}")

print("\n2. Exports on lines 340 and 347-348:")
print("-" * 70)
print(f"  Line 340: {lines[339]}")
print(f"  Line 347: {lines[346]}")
print(f"  Line 348: {lines[347]}")

print("\n3. Subshell context and what's being called within it:")
print("-" * 70)

# Find the subshell starting around line 345
for i in range(340, 360):
    if i < len(lines) and ('$(bash' in lines[i] or '$(sh' in lines[i] or 'timeout' in lines[i]):
        print(f"  Subshell context found around line {i+1}")
        for j in range(max(0, i-2), min(len(lines), i+15)):
            print(f"    Line {j+1}: {lines[j]}")
        break

print("\n" + "=" * 70)
print("CONCLUSION:")
print("=" * 70)
print("The review comment's assumption about 'declare -f' is problematic.")
print("The verification command grep -n 'declare -f' is not the correct way")
print("to find function declarations in bash. Functions are defined with:")
print("  function_name() { ... }")
print("Not with 'declare -f'.")
EOF
</function_calls>

Perfect! Now I have enough information. Let me verify the core concern: whether the functions being called in the subshell are properly exported:

<function_calls>
python3 << 'EOF'
import re

with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
    lines = f.readlines()

print("=" * 70)
print("FINAL ANALYSIS:")
print("=" * 70)

# Show the exact lines from the review
print("\nReview mentions these lines:")
print(f"Line 340: {lines[339].rstrip()}")
print(f"Line 347: {lines[346].rstrip()}")
print(f"Line 348: {lines[347].rstrip()}")

# Check if these are actual export statements
print("\nVerifying exports:")
if 'export' in lines[339]:
    print("  Line 340 contains export: YES")
    parts = lines[339].split()
    print(f"  Exports: {', '.join([p for p in parts if p != 'export'])}")
else:
    print("  Line 340 contains export: NO")

if 'export' in lines[346]:
    print("  Line 347 contains export: YES")
elif 'export' in lines[347]:
    print("  Line 348 contains export: YES")

# Check what functions are in the subshell
print("\nFunctions that need to be exported to subshell:")
print("  - update_kubeconfig (called in subshell on line 349)")
print("  - Any other functions called within the subshell")

# Find what's actually exported on lines 347-348
print("\nActual export statements around 347-348:")
print(f"Line 346: {lines[345].rstrip()}")
print(f"Line 347: {lines[346].rstrip()}")
print(f"Line 348: {lines[347].rstrip()}")
if len(lines) > 348:
    print(f"Line 349: {lines[348].rstrip()}")
EOF
</function_calls>

Let me also verify the verification command suggested in the review comment:

<function_calls>
grep -n "declare -f" src/scripts/argo_rollout_status_common.sh | head -20
</function_calls>

Now let me look at the actual complete subshell to understand what needs to be exported:

<function_calls>
sed -n '340,360p' src/scripts/argo_rollout_status_common.sh
</function_calls>

Let me trace through this carefully with the actual bash semantics:
<function_calls>
python3 << 'EOF'
with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
    lines = f.readlines()

print("Detailed Analysis of Subshell Context")
print("=" * 70)

# Lines 340-360 show the full structure
for i in range(339, min(360, len(lines))):
    line_num = i + 1
    line_content = lines[i].rstrip()

    # Annotate key lines
    if line_num == 340:
        print(f"{line_num:3d}: {line_content}")
        print("     ^ Exports CLUSTER_NAME and AWS_REGION to environment")
    elif line_num == 345:
        print(f"{line_num:3d}: {line_content}")
        print("     ^ Start of timeout subshell")
    elif line_num in [347, 348]:
        print(f"{line_num:3d}: {line_content}")
        if line_num == 347:
            print("     ^ Export update_kubeconfig and get_aws_credential_expiration_v1 functions")
    elif line_num == 349:
        print(f"{line_num:3d}: {line_content}")
        print("     ^ bash -c command starts")
    else:
        print(f"{line_num:3d}: {line_content}")
EOF
</function_calls>

Now let me examine what functions are actually called inside the bash -c command:

<function_calls>
python3 << 'EOF'
import re

with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
    lines = f.readlines()

# Find the bash -c command and extract what's inside
print("Content of bash -c command (within the subshell):")
print("=" * 70)

# Start from the line containing bash -c and find the end quote
bash_c_start = None
for i in range(len(lines)):
    if 'bash -c' in lines[i]:
        bash_c_start = i
        break

if bash_c_start:
    # The command is wrapped in single quotes, need to find where it ends
    line = lines[bash_c_start]

    # Extract the string after bash -c '
    if "'" in line:
        quote_start = line.index("'") + 1
        # Accumulate lines until the closing quote (the command may span multiple lines)
        accumulated = ""
        for i in range(bash_c_start, min(bash_c_start + 50, len(lines))):
            current_line = lines[i]
            if i == bash_c_start:
                accumulated += current_line[quote_start:]
            else:
                accumulated += current_line
            # Check if we've found the closing quote
            if "'" in accumulated:
                close_quote = accumulated.index("'")
                accumulated = accumulated[:close_quote]
                break

        print(accumulated)
        print("\n" + "=" * 70)

        # Extract function calls from this
        calls = set(re.findall(r'\b([a-zA-Z_][a-zA-Z0-9_]*)\s*\(', accumulated))
        keywords = {'if', 'then', 'else', 'elif', 'fi', 'for', 'while', 'do', 'done', 'case', 'in', 'esac'}
        func_calls = sorted(calls - keywords)
        print("Functions called in bash -c command:")
        for func in func_calls:
            print(f"  - {func}()")
EOF
</function_calls>

Excellent! Now I have all the information I need. Let me create a comprehensive summary:

<function_calls>
python3 << 'EOF'
import re

with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
    lines = f.readlines()

print("COMPREHENSIVE VERIFICATION REPORT")
print("=" * 70)

# 1. Check what's exported on line 340
print("\n1. Line 340 - Variable Exports:")
print("-" * 70)
print(f"Content: {lines[339].rstrip()}")
print("Exports variables: CLUSTER_NAME, AWS_REGION")

# 2. Check what's exported on lines 347-348
print("\n2. Lines 347-348 - Function Exports:")
print("-" * 70)
print(f"Line 347: {lines[346].rstrip()}")
print(f"Line 348: {lines[347].rstrip()}")

# Extract what's exported
export_line = lines[346] + " " + lines[347]
exported_functions = re.findall(r"export\s+([A-Za-z_][A-Za-z0-9_]*)", export_line)
print(f"\nFunctions/variables exported: {', '.join(exported_functions)}")

# 3. Check what functions are defined in the file
print("\n3. Functions Defined in This File:")
print("-" * 70)
all_functions = {}
for i, line in enumerate(lines):
    match = re.match(r'^([a-zA-Z_][a-zA-Z0-9_]*)\(\)\s*{', line)
    if match:
        all_functions[match.group(1)] = i + 1

for func in sorted(all_functions.keys()):
    print(f"  {func}() - defined at line {all_functions[func]}")

# 4. Check what's called inside the bash -c subshell
print("\n4. Functions Called Inside bash -c Subshell:")
print("-" * 70)

# Find bash -c line
for i in range(len(lines)):
    if 'bash -c' in lines[i]:
        # Extract the command
        start = lines[i].index("'") + 1 if "'" in lines[i] else 0

        # Accumulate until closing quote
        accumulated = ""
        for j in range(i, min(i+50, len(lines))):
            if j == i:
                accumulated += lines[j][start:]
            else:
                accumulated += lines[j]
            if "'" in accumulated:
                end = accumulated.index("'")
                accumulated = accumulated[:end]
                break

        calls = set(re.findall(r'\b([a-zA-Z_][a-zA-Z0-9_]*)\s*\(', accumulated))
        keywords = {'if', 'then', 'else', 'elif', 'fi', 'for', 'while', 'do', 'done', 'case', 'in', 'esac', 'return'}
        func_calls = sorted(calls - keywords)
        for call in func_calls:
            defined = f"defined in this file at line {all_functions[call]}" if call in all_functions else "NOT defined in this file"
            exported = "exported" if call in exported_functions else "NOT exported"
            print(f"  {call}(): {defined} - {exported}")
        break

print("\n5. Verification of Review Comment Suggestion:")
print("-" * 70)

# The review suggests: grep -n "declare -f"
result = []
for i, line in enumerate(lines):
    if 'declare -f' in line:
        result.append((i+1, line.rstrip()))

if result:
    print(f"'declare -f' appears {len(result)} time(s) in the file:")
    for line_num, content in result:
        print(f"  Line {line_num}: {content}")
else:
    print("'declare -f' does NOT appear in the file")
    print("This suggests the proposed verification method is INCORRECT")
    print("Bash functions are typically defined as: function_name() { ... }")
    print("Not with 'declare -f' (which is used to display function definitions)")

print("\n" + "=" * 70)
print("CONCLUSION:")
print("=" * 70)
print("The verification command suggested in the review comment")
print("(grep -n 'declare -f') will return NO results because:")
print("- 'declare -f' is NOT how functions are defined in bash")
print("- Functions are defined using syntax: function_name() { ... }")
print("- 'declare -f' is used to DISPLAY function definitions at runtime")
EOF
</function_calls>

Perfect! Now I have a complete picture. Let me verify one more critical detail about whether all necessary functions are exported:

<function_calls>
python3 << 'EOF'
import re

with open('src/scripts/argo_rollout_status_common.sh', 'r') as f:
    lines = f.readlines()

print("CRITICAL VERIFICATION: Are all required functions exported?")
print("=" * 70)

# Functions called in bash -c that need to be exported
bash_c_line = None
for i, line in enumerate(lines):
    if 'bash -c' in line:
        bash_c_line = i
        break

if bash_c_line:
    # Get content of bash -c
    accumulated = ""
    for j in range(bash_c_line, min(bash_c_line+50, len(lines))):
        if j == bash_c_line:
            start = lines[j].index("'") + 1 if "'" in lines[j] else 0
            accumulated += lines[j][start:]
        else:
            accumulated += lines[j]
        if "'" in accumulated:
            end = accumulated.index("'")
            accumulated = accumulated[:end]
            break

    calls = set(re.findall(r'\b([a-zA-Z_][a-zA-Z0-9_]*)\s*\(', accumulated))
    keywords = {'if', 'then', 'else', 'elif', 'fi', 'for', 'while', 'do', 'done', 'case', 'in', 'esac', 'return'}
    func_calls = sorted(calls - keywords)

# Get exported functions
exported_lines = ""
for i in range(346, 349):  # Lines 347-348
    exported_lines += lines[i]
exported_functions = set(re.findall(r"export\s+([A-Za-z_][A-Za-z0-9_]*)", exported_lines))

print("\nFunctions called in bash -c subshell:")
for call in func_calls:
    print(f"  {call}")

print("\nFunctions exported to subshell (lines 347-348):")
for exp in sorted(exported_functions):
    print(f"  {exp}")

print("\nVerification:")
for call in func_calls:
    if call in exported_functions:
        print(f"  ✓ {call} is exported")
    else:
        print(f"  ✗ {call} is NOT exported (POTENTIAL ISSUE)")

# Check for environment variables too
print("\nEnvironment variables exported (line 340):")
env_exports = lines[339].split()[1:]  # Skip 'export'
for var in env_exports:
    print(f"  {var}")
EOF
</function_calls>

Perfect! Now I have all the information I need to rewrite the review comment. Let me create the final output.
The verification method suggested in the review comment (`grep -n "declare -f"`) is incorrect and needs correction.

The proposed command will find no results because Bash functions are defined using the syntax `function_name() { ... }`, not with `declare -f`. The `declare -f` command is used to display function definitions at runtime, not to define them.

Actual issues identified:

1. Lines 347-348 do not export `is_not_found_error`, `check_kubectl_auth`, or `refresh_kubeconfig`. These functions are called within the `bash -c` subshell but are neither defined in this file nor exported to the subshell. This will cause runtime errors if these functions are not available in the calling environment or sourced before this function is invoked.

2. The verification command is incorrect. Replace the suggested `grep -n "declare -f"` with a proper check for function definitions: `grep -n "^[a-zA-Z_].*() {"` finds bash function declarations.

Required verification:

- Confirm whether `is_not_found_error`, `check_kubectl_auth`, and `refresh_kubeconfig` are expected to be sourced/available from the calling environment or if they should be defined and exported in this file.
- If they should be available from outside, document this dependency clearly. If they should be in this file, add their definitions.
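For reference, the difference between defining, exporting, and inspecting a bash function can be shown in a few lines. This is a minimal sketch; the `is_not_found_error` body here is a hypothetical stand-in, not the orb's actual helper:

```shell
# Hypothetical stand-in for the helper named in the review.
is_not_found_error() {
  case "$1" in
    *"not found"*) return 0 ;;
    *) return 1 ;;
  esac
}

# `declare -f` prints an existing definition; it does not create one.
declare -f is_not_found_error > /dev/null && echo "defined in this shell"

# A bash -c child only sees the function once it is exported with `export -f`.
export -f is_not_found_error
bash -c 'is_not_found_error "pod not found" && echo "visible in subshell"'
```

`export -f` is also the mechanism lines 347-348 would need for these helpers if they are meant to run inside the `bash -c` subshell.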
```shell
echo "-------------------------------- DEBUG AWS EXPIRATION -------"
aws_credentials=$(aws configure export-credentials --format env-no-export 2>/dev/null)
echo "$aws_credentials" | grep EXPIRATION
echo "---"
# Mask AWS_ACCESS_KEY_ID to show only last 4 characters
access_key_line=$(echo "$aws_credentials" | grep AWS_ACCESS_KEY_ID)
if [[ -n "$access_key_line" ]]; then
  access_key_value="${access_key_line#*=}"
  access_key_last4="${access_key_value: -4}"
  echo "AWS_ACCESS_KEY_ID=****${access_key_last4}"
fi
echo "---"
# Mask AWS_SECRET_ACCESS_KEY to show only last 4 characters
secret_key_line=$(echo "$aws_credentials" | grep AWS_SECRET_ACCESS_KEY)
if [[ -n "$secret_key_line" ]]; then
  secret_key_value="${secret_key_line#*=}"
  secret_key_last4="${secret_key_value: -4}"
  echo "AWS_SECRET_ACCESS_KEY=****${secret_key_last4}"
fi
echo "---"
echo "$aws_credentials" | grep AWS_SESSION_TOKEN
echo "-------------------------------------------------------------"
```
🧩 Analysis chain
🏁 Script executed:
# Check the context around lines 239-260
sed -n '220,280p' src/scripts/argo_rollout_status_common.sh

Repository: TiendaNube/eks-orb
Length of output: 3215
🏁 Script executed:
# Also check if there are any conditionals immediately before the AWS credentials block
sed -n '235,245p' src/scripts/argo_rollout_status_common.sh | cat -A

Repository: TiendaNube/eks-orb
Length of output: 631
🏁 Script executed:
# Check the entire file for any conditional logic or environment variable checks
rg -n "if.*DEBUG|if.*VERBOSE|if.*LOG|if.*\{" src/scripts/argo_rollout_status_common.sh | head -20

Repository: TiendaNube/eks-orb
Length of output: 385
🏁 Script executed:
# Also check for recent git history comments around this file
git log --oneline -n 20 -- src/scripts/argo_rollout_status_common.sh 2>/dev/null || echo "Git history not available"

Repository: TiendaNube/eks-orb
Length of output: 88
Verify the AWS credentials debug output and security implications.
The review comment is accurate and well-founded. Verification confirms:
- **Code is unconditionally executed:** No `DEBUG_AWS_EXPIRATION` flag or conditional guard exists anywhere in the file. The AWS credentials block runs on every iteration of the `while true` loop without any protection.
- **Session token is unmasked:** Line 259 (`echo "$aws_credentials" | grep AWS_SESSION_TOKEN`) outputs the session token in plaintext to logs, directly contradicting the masking applied to the access key and secret key.
- **Confirmed as debug code:** Git history shows commit `df633d3 Debug AWS env vars`, confirming this was added for temporary debugging purposes and is not intended for production use.
- **Repeated credential exposure:** The credentials are logged on every loop iteration, creating potential exposure in log retention systems depending on environment and policy.
The original review's recommendation to either remove this code entirely or guard it behind a DEBUG_AWS_EXPIRATION flag is appropriate and necessary.
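If the block is kept behind a flag rather than deleted, one way to make the masking consistent is a single helper applied to every value, including the session token. This is a sketch only; the helper name and guard variable are illustrative, not part of the orb:

```shell
# Illustrative helper: always mask, keeping at most the last 4 characters.
mask_last4() {
  value="$1"
  if [ "${#value}" -le 4 ]; then
    echo "****"
  else
    echo "****$(printf '%s' "$value" | tail -c 4)"
  fi
}

# Default-off guard: nothing is printed unless explicitly enabled.
if [ "${DEBUG_AWS_EXPIRATION:-false}" = "true" ]; then
  echo "AWS_ACCESS_KEY_ID=$(mask_last4 "${AWS_ACCESS_KEY_ID:-}")"
  echo "AWS_SECRET_ACCESS_KEY=$(mask_last4 "${AWS_SECRET_ACCESS_KEY:-}")"
  echo "AWS_SESSION_TOKEN=$(mask_last4 "${AWS_SESSION_TOKEN:-}")"
fi
```

Routing every value through one helper removes the class of bug where a newly added secret is printed unmasked.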
🤖 Prompt for AI Agents
In src/scripts/argo_rollout_status_common.sh around lines 239 to 260, the debug
block currently runs unconditionally inside the loop and prints sensitive
credentials (notably the AWS_SESSION_TOKEN) in plaintext; remove this
unconditional debug block or wrap it behind a guarded flag (e.g., check
DEBUG_AWS_EXPIRATION is set and true) so it only runs when explicitly enabled,
and when enabled ensure all secrets are masked consistently (mask
AWS_SESSION_TOKEN the same as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY or
avoid printing it entirely); make the default behavior safe (do nothing) so
credentials are never logged in normal operation.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/scripts/argo_rollout_status_common.sh (1)
59-78: Missing parameter parsing for `--cluster-name` and `--aws-region`.

The function references `${CLUSTER_NAME}` and `${AWS_REGION}` on lines 339–340 and exports them on line 355, but these parameters are not parsed from the function arguments. The flag-parsing switch (lines 60–70) lacks handlers for `--cluster-name` and `--aws-region`, so these variables remain undefined when referenced.

Per the PR objectives, these parameters should be added. Add parsing logic to lines 60–70:
```diff
 while [[ "$#" -gt 0 ]]; do
   case "$1" in
     --rollout-name) rollout_name="$2"; shift 2 ;;
     --namespace) namespace="$2"; shift 2 ;;
     --project-repo-name) project_repo_name="$2"; shift 2 ;;
+    --cluster-name) CLUSTER_NAME="$2"; shift 2 ;;
+    --aws-region) AWS_REGION="$2"; shift 2 ;;
     --timeout) rollout_status_timeout="$2"; shift 2 ;;
     --interval) rollout_status_check_interval="$2"; shift 2 ;;
     --) shift; break ;;
     *) echo -e "${RED}Unknown flag: $1${NC}"; return 2 ;;
   esac
 done
```

And update the required-fields check on lines 73–78 to include validation for these new parameters:
```diff
-  if [[ -z "$rollout_name" ]] || [[ -z "$namespace" ]] || [[ -z "$project_repo_name" ]] ||
+  if [[ -z "$rollout_name" ]] || [[ -z "$namespace" ]] || [[ -z "$project_repo_name" ]] || [[ -z "$CLUSTER_NAME" ]] || [[ -z "$AWS_REGION" ]] ||
```
♻️ Duplicate comments (3)
src/scripts/argo_rollout_status_common.sh (3)
238-255: Guard debug AWS credentials output behind a debug flag.

Lines 238–255 contain an unconditional debug block that runs on every loop iteration and calls `aws configure export-credentials`, exposing AWS credentials in logs. This is a security and compliance risk, especially since it runs repeatedly without any opt-in mechanism.

Guard this behind an explicit debug flag, or remove it if no longer needed:

```diff
-      echo "-------------------------------- DEBUG AWS EXPIRATION -------"
-      aws_credentials=$(aws configure export-credentials --format env-no-export 2>/dev/null)
-      echo "$aws_credentials" | grep EXPIRATION
-      # Mask AWS_ACCESS_KEY_ID to show only last 4 characters
-      access_key_line=$(echo "$aws_credentials" | grep AWS_ACCESS_KEY_ID)
-      if [[ -n "$access_key_line" ]]; then
-        access_key_value="${access_key_line#*=}"
-        access_key_last4="${access_key_value: -4}"
-        echo "AWS_ACCESS_KEY_ID=****${access_key_last4}"
-      fi
-      # Mask AWS_SECRET_ACCESS_KEY to show only last 4 characters
-      secret_key_line=$(echo "$aws_credentials" | grep AWS_SECRET_ACCESS_KEY)
-      if [[ -n "$secret_key_line" ]]; then
-        secret_key_value="${secret_key_line#*=}"
-        secret_key_last4="${secret_key_value: -4}"
-        echo "AWS_SECRET_ACCESS_KEY=****${secret_key_last4}"
-      fi
-      echo "-------------------------------------------------------------"
+      if [[ "${DEBUG_AWS_EXPIRATION:-false}" == "true" ]]; then
+        echo "-------------------------------- DEBUG AWS EXPIRATION -------"
+        aws_credentials=$(aws configure export-credentials --format env-no-export 2>/dev/null)
+        echo "$aws_credentials" | grep EXPIRATION
+        # Mask AWS_ACCESS_KEY_ID to show only last 4 characters
+        access_key_line=$(echo "$aws_credentials" | grep AWS_ACCESS_KEY_ID)
+        if [[ -n "$access_key_line" ]]; then
+          access_key_value="${access_key_line#*=}"
+          access_key_last4="${access_key_value: -4}"
+          echo "AWS_ACCESS_KEY_ID=****${access_key_last4}"
+        fi
+        # Mask AWS_SECRET_ACCESS_KEY to show only last 4 characters
+        secret_key_line=$(echo "$aws_credentials" | grep AWS_SECRET_ACCESS_KEY)
+        if [[ -n "$secret_key_line" ]]; then
+          secret_key_value="${secret_key_line#*=}"
+          secret_key_last4="${secret_key_value: -4}"
+          echo "AWS_SECRET_ACCESS_KEY=****${secret_key_last4}"
+        fi
+        echo "-------------------------------------------------------------"
+      fi
```
187-229: Reset `kubectl_exit_code` at the start of each retry iteration.

The retry loop does not reset `kubectl_exit_code` on each iteration. If the first kubectl attempt fails (setting `kubectl_exit_code` to non-zero) and a subsequent attempt succeeds, `kubectl_exit_code` retains the old failure value from the first attempt. The check on line 194, `if [[ $kubectl_exit_code -eq 0 ]]`, would then incorrectly evaluate to false even though the current attempt succeeded, causing the function to proceed to authentication checks instead of returning success.

Initialize `kubectl_exit_code=0` at the start of each loop iteration:

```diff
 while [[ $attempt -le $max_retries ]]; do
+  kubectl_exit_code=0
   kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
```
170-181: Fix environment variable naming mismatch between exports and `update_kubeconfig` expectations.

Line 174 calls `update_kubeconfig()` with no arguments, expecting it to read from environment variables. However, line 355 exports `CLUSTER_NAME` and `AWS_REGION`, while the `update_kubeconfig` function (from `update_kubeconfig_common.sh`, lines 34–44 of the relevant snippets) expects `KUBECONFIG_CLUSTER_NAME` and `KUBECONFIG_AWS_REGION`. When the subshell runs, the function will fail because the required environment variables are not found.

Option 1 (Recommended): Export variables with the correct names:

```diff
-  export CLUSTER_NAME AWS_REGION
+  export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"
```

Option 2: Pass cluster name and region as explicit arguments in `refresh_kubeconfig()`:

```diff
 function refresh_kubeconfig() {
   echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"
-  if update_kubeconfig; then
+  if update_kubeconfig --cluster-name "${CLUSTER_NAME}" --aws-region "${AWS_REGION}"; then
```
🧹 Nitpick comments (1)
src/scripts/argo_rollout_status_common.sh (1)
184-229: Remove dead `get_kubectl_argo_rollout` function or consolidate duplicated retry logic.

The `get_kubectl_argo_rollout` function (lines 184–229) implements comprehensive retry logic with kubeconfig refresh and not-found handling, but it is not used in the main flow. Instead, line 259 comments out the call to this function, and lines 260–279 re-implement similar logic inline within the main loop. This creates code duplication and makes maintenance harder.

Option 1: Use the function (recommended for cleaner separation of concerns):

```diff
-  # kubectl_output=$(get_kubectl_argo_rollout "${rollout_name}" "${namespace}") || return $?
-  kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
-
-  if [[ $kubectl_exit_code -ne 0 ]] && ! is_not_found_error "$kubectl_output"; then
-    # Check for authentication errors and retry with token refresh
-    if ! check_kubectl_auth "$namespace" >/dev/null 2>&1; then
-      echo -e "${BLUE}🔄 Authentication error detected. Refreshing kubeconfig...${NC}"
-      if refresh_kubeconfig; then
-        echo -e "${BLUE}🔄 Retrying kubectl command after kubeconfig refresh...${NC}"
-        kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1)
-      else
-        echo -e "${RED}❌ Failed to refresh kubeconfig. Cannot continue.${NC}" >&2
-        exit 2
-      fi
-    else
-      # Any other error (not "not found" and not auth error) should fail
-      echo -e "${RED}❌ kubectl command failed (unexpected error):${NC}" >&2
-      echo "$kubectl_output" >&2
-      exit 2
-    fi
-  fi
+  kubectl_output=$(get_kubectl_argo_rollout "${rollout_name}" "${namespace}") || exit $?
```

Option 2: Remove the function and keep the inline implementation if you prefer it.

Also applies to: 259-279
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/scripts/argo_rollout_status_common.sh (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(35-173)
Actionable comments posted: 1
♻️ Duplicate comments (3)
src/scripts/argo_rollout_status_common.sh (3)
171-181: Fix environment variable naming mismatch that prevents kubeconfig refresh.

Line 174 calls `update_kubeconfig()` with no arguments, expecting it to read from environment variables. However, the `update_kubeconfig()` function expects `KUBECONFIG_CLUSTER_NAME` and `KUBECONFIG_AWS_REGION` (per update_kubeconfig_common.sh), but line 363 exports `CLUSTER_NAME` and `AWS_REGION`. This naming mismatch will cause the refresh to fail silently.

Option 1 (Recommended): Export with correct variable names:

```diff
-  export CLUSTER_NAME AWS_REGION
+  export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"
```

Option 2: Pass arguments explicitly to `update_kubeconfig`:

```diff
 function refresh_kubeconfig() {
   echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"
-  if update_kubeconfig; then
+  if update_kubeconfig --cluster-name "${CLUSTER_NAME}" --aws-region "${AWS_REGION}"; then
     echo -e "${GREEN}✅ Kubeconfig refreshed successfully${NC}"
     return 0
```
184-229: Reset exit code on each retry iteration to avoid false negatives.

The retry loop initializes `kubectl_exit_code=0` before the loop (line 187) but does not reset it at the start of each iteration. If an earlier attempt fails and a later attempt succeeds, the function will still report the old failure code.

```diff
 while [[ $attempt -le $max_retries ]]; do
+  kubectl_exit_code=0
   kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
```
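The stale exit code is easy to reproduce in isolation; this sketch has nothing kubectl-specific in it:

```shell
rc=0
out=$(false) || rc=$?   # first attempt fails: rc becomes 1
out=$(true) || rc=$?    # second attempt succeeds, but the || branch never runs
echo "without reset: rc=$rc"   # rc keeps the stale value 1

rc=0                    # resetting at the top of the iteration fixes it
out=$(true) || rc=$?
echo "with reset: rc=$rc"
```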
240-257: Remove or guard AWS credentials debug output: an active security vulnerability.

This debug block unconditionally executes on every loop iteration and exposes sensitive AWS credentials (access keys, secret key, and session token) in logs. Even with partial masking of the access/secret keys, the session token is printed in plaintext (line 242). This was flagged in a prior review as temporary debugging code.

Remove entirely if no longer needed for debugging:

```diff
-      echo "-------------------------------- DEBUG AWS EXPIRATION -------"
-      aws_credentials=$(aws configure export-credentials --format env-no-export 2>/dev/null)
-      echo "$aws_credentials" | grep EXPIRATION
-      # Mask AWS_ACCESS_KEY_ID to show only last 4 characters
-      access_key_line=$(echo "$aws_credentials" | grep AWS_ACCESS_KEY_ID)
-      if [[ -n "$access_key_line" ]]; then
-        access_key_value="${access_key_line#*=}"
-        access_key_last4="${access_key_value: -4}"
-        echo "AWS_ACCESS_KEY_ID=****${access_key_last4}"
-      fi
-      # Mask AWS_SECRET_ACCESS_KEY to show only last 4 characters
-      secret_key_line=$(echo "$aws_credentials" | grep AWS_SECRET_ACCESS_KEY)
-      if [[ -n "$secret_key_line" ]]; then
-        secret_key_value="${secret_key_line#*=}"
-        secret_key_last4="${secret_key_value: -4}"
-        echo "AWS_SECRET_ACCESS_KEY=****${secret_key_last4}"
-      fi
-      echo "-------------------------------------------------------------"
```

Or guard behind an explicit opt-in flag:

```diff
+  if [[ "${DEBUG_AWS_EXPIRATION:-false}" == "true" ]]; then
+    echo "-------------------------------- DEBUG AWS EXPIRATION -------"
+    # Only log non-sensitive debug info (e.g., file existence, profile)
+    echo "AWS credentials available"
+    echo "-------------------------------------------------------------"
+  fi
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/scripts/argo_rollout_status_common.sh (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(35-173)
🔇 Additional comments (4)
src/scripts/argo_rollout_status_common.sh (4)
37-48: Validation structure is sound.

The checks for UPDATE_KUBECONFIG_COMMON_SCRIPT and the update_kubeconfig function declaration follow good patterns. However, ensure the environment variable mismatch at line 363 is resolved so the function can actually execute when called.
148-156: Error pattern detection is well structured.

The helper function correctly identifies "not found" errors with flexible regex patterns. The function integrates cleanly into the error handling flow.
159-168: Authentication check implementation is appropriate.

Uses a standard kubectl pattern to verify credentials without side effects.
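For context, a common shape for such a side-effect-free check is `kubectl auth can-i`, which asks the API server whether the current credentials permit an action without performing it. This is a sketch; the orb's actual implementation may differ:

```shell
# Hypothetical auth probe: exit 0 if current credentials can read pods
# in the namespace, non-zero otherwise (including on auth failures).
check_kubectl_auth() {
  namespace="$1"
  kubectl auth can-i get pods --namespace "$namespace" > /dev/null 2>&1
}
```

Because the probe is read-only, it is safe to run before every retry without mutating cluster state.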
343-363: I'll help you verify the concerns about `CLUSTER_NAME` and `AWS_REGION` initialization. Let me search the codebase to understand where these variables originate and how they're used.
<function_calls>
#!/bin/bashSearch for CLUSTER_NAME and AWS_REGION usage in shell files
rg -n 'CLUSTER_NAME|AWS_REGION' src/scripts/ --type sh -B2 -A2
</function_calls>

Now let me check the function definition and understand the context better:
<function_calls>
#!/bin/bash
# Read the file to understand the context around line 51 and the function parameters
head -60 src/scripts/argo_rollout_status_common.sh | tail -30
</function_calls>

Let me also check how the script is called and where these variables might be set:
<function_calls>
#!/bin/bash
# Look for function definition and check where variables come from
sed -n '1,70p' src/scripts/argo_rollout_status_common.sh | cat -n
</function_calls>

<function_calls>
#!/bin/bash
# Check around line 343-370 to see the full context
sed -n '340,370p' src/scripts/argo_rollout_status_common.sh | cat -n
</function_calls>

<function_calls>
#!/bin/bash
# Search for where check_argo_rollout_status is called to see how variables are passed
rg -n 'check_argo_rollout_status' . --type sh -B5 -A5
</function_calls>

<function_calls>
#!/bin/bash
# Look for all files that source or call this script
fd -e sh -type f | xargs rg -l 'argo_rollout_status_common' 2>/dev/null
</function_calls>
```bash
    # kubectl_output=$(get_kubectl_argo_rollout "${rollout_name}" "${namespace}") || return $?
    kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?

    echo "kubectl_exit_code: $kubectl_exit_code"

    if [[ $kubectl_exit_code -ne 0 ]] && ! is_not_found_error "$kubectl_output"; then
      echo "kubectl command failed"
      # Check for authentication errors and retry with token refresh
      if ! check_kubectl_auth "$namespace" >/dev/null 2>&1; then
        echo -e "${BLUE}🔄 Authentication error detected. Refreshing kubeconfig...${NC}"
        if refresh_kubeconfig; then
          echo -e "${BLUE}🔄 Retrying kubectl command after kubeconfig refresh...${NC}"
          kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1)
        else
          echo -e "${RED}❌ Failed to refresh kubeconfig. Cannot continue.${NC}" >&2
          exit 2
        fi
      else
        # Any other error (not "not found" and not auth error) should fail
        echo -e "${RED}❌ kubectl command failed (unexpected error):${NC}" >&2
        echo "$kubectl_output" >&2
        exit 2
      fi
    else
      echo "kubectl command succeeded"
    fi
```
Activate the new get_kubectl_argo_rollout() function instead of duplicating error handling inline.
Line 261 comments out a call to the new get_kubectl_argo_rollout() function that was defined to handle retries and error handling (lines 184–229). Instead, error handling is duplicated inline at lines 262–286. This creates maintenance burden and inconsistency: the inline code only retries once (line 273), while the defined function enforces max_retries=3.
Uncomment line 261 to use the dedicated function:
```diff
-    # Get kubectl rollout status (handles errors and retries internally)
-    # kubectl_output=$(get_kubectl_argo_rollout "${rollout_name}" "${namespace}") || return $?
-    kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
-
-    echo "kubectl_exit_code: $kubectl_exit_code"
-
-    if [[ $kubectl_exit_code -ne 0 ]] && ! is_not_found_error "$kubectl_output"; then
-      echo "kubectl command failed"
-      # Check for authentication errors and retry with token refresh
-      if ! check_kubectl_auth "$namespace" >/dev/null 2>&1; then
-        echo -e "${BLUE}🔄 Authentication error detected. Refreshing kubeconfig...${NC}"
-        if refresh_kubeconfig; then
-          echo -e "${BLUE}🔄 Retrying kubectl command after kubeconfig refresh...${NC}"
-          kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1)
-        else
-          echo -e "${RED}❌ Failed to refresh kubeconfig. Cannot continue.${NC}" >&2
-          exit 2
-        fi
-      else
-        # Any other error (not "not found" and not auth error) should fail
-        echo -e "${RED}❌ kubectl command failed (unexpected error):${NC}" >&2
-        echo "$kubectl_output" >&2
-        exit 2
-      fi
-    else
-      echo "kubectl command succeeded"
-    fi
+    # Get kubectl rollout status (handles errors and retries internally)
+    kubectl_output=$(get_kubectl_argo_rollout "${rollout_name}" "${namespace}") || {
+      local exit_code=$?
+      if [[ $exit_code -ne 0 ]]; then
+        exit $exit_code
+      fi
+    }
```

🤖 Prompt for AI Agents
In src/scripts/argo_rollout_status_common.sh around lines 261 to 286, the inline
kubectl error/retry logic duplicates the existing get_kubectl_argo_rollout()
function and only retries once; revert to calling the function instead:
uncomment the call to get_kubectl_argo_rollout "${rollout_name}" "${namespace}"
at line 261, remove the duplicated kubectl invocation and related auth/retry
block (lines 262–286), and ensure the function's output is captured into
kubectl_output and its exit code into kubectl_exit_code so subsequent logic uses
those values (preserve any necessary variable names and error handling behavior
already implemented by get_kubectl_argo_rollout).
Actionable comments posted: 2
♻️ Duplicate comments (3)
src/scripts/argo_rollout_status_common.sh (3)
191-192: Reset `kubectl_exit_code` on each loop iteration to avoid stale state. The `kubectl_exit_code` variable is initialized outside the while loop (line 188) but never reset on each iteration. If an attempt fails and the next attempt succeeds, the variable will retain the old failure code, causing incorrect error handling downstream. Apply this diff to reset the exit code at the start of each iteration:
```diff
 while [[ $attempt -le $max_retries ]]; do
+    kubectl_exit_code=0
     kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
```
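A tiny self-contained illustration of the stale-state pitfall (generic `true`/`false` commands stand in for the kubectl call):

```bash
#!/usr/bin/env bash
# `var=$(cmd) || rc=$?` only assigns rc when cmd fails, so a later
# success silently keeps the previous failure code.
kubectl_exit_code=0
out=$(false) || kubectl_exit_code=$?
echo "after failing attempt: $kubectl_exit_code"      # 1

out=$(true) || kubectl_exit_code=$?
echo "after succeeding attempt: $kubectl_exit_code"   # still 1

kubectl_exit_code=0   # the proposed per-iteration reset
out=$(true) || kubectl_exit_code=$?
echo "after reset + success: $kubectl_exit_code"      # 0
```

Without the reset, the second attempt's success is invisible to any later check of `kubectl_exit_code`.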
261-291: Use the dedicated `get_kubectl_argo_rollout()` function instead of duplicating inline error handling. Lines 261–291 duplicate the retry and error-handling logic that was refactored into `get_kubectl_argo_rollout()` (lines 184–229). This duplication creates a maintenance burden and inconsistency: the function enforces 3 retries, while the inline code retries only once. Uncomment and use the dedicated function instead of inline error handling:

```diff
-    # Get kubectl rollout status (handles errors and retries internally)
-    # kubectl_output=$(get_kubectl_argo_rollout "${rollout_name}" "${namespace}") || return $?
-
-    echo "Checking kubectl auth"
-    kubectl auth can-i get pods --namespace "${namespace}"
-
-    echo "Getting kubectl rollout status"
-    kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$?
-
-    echo "kubectl_exit_code: $kubectl_exit_code"
-
-    if [[ $kubectl_exit_code -ne 0 ]] && ! is_not_found_error "$kubectl_output"; then
-      echo "kubectl command failed"
-      # Check for authentication errors and retry with token refresh
-      if ! check_kubectl_auth "$namespace" >/dev/null 2>&1; then
-        echo -e "${BLUE}🔄 Authentication error detected. Refreshing kubeconfig...${NC}"
-        if refresh_kubeconfig; then
-          echo -e "${BLUE}🔄 Retrying kubectl command after kubeconfig refresh...${NC}"
-          kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1)
-        else
-          echo -e "${RED}❌ Failed to refresh kubeconfig. Cannot continue.${NC}" >&2
-          exit 2
-        fi
-      else
-        # Any other error (not "not found" and not auth error) should fail
-        echo -e "${RED}❌ kubectl command failed (unexpected error):${NC}" >&2
-        echo "$kubectl_output" >&2
-        exit 2
-      fi
-    else
-      echo "kubectl command succeeded"
-    fi
-    echo "kubectl_output:"
+    # Get kubectl rollout status (handles errors and retries internally)
+    kubectl_output=$(get_kubectl_argo_rollout "${rollout_name}" "${namespace}") || return $?
+    echo "kubectl_output:"
```
- # Get kubectl rollout status (handles errors and retries internally) - # kubectl_output=$(get_kubectl_argo_rollout "${rollout_name}" "${namespace}") || return $? - - echo "Checking kubectl auth" - kubectl auth can-i get pods --namespace "${namespace}" - - echo "Getting kubectl rollout status" - kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) || kubectl_exit_code=$? - - echo "kubectl_exit_code: $kubectl_exit_code" - - if [[ $kubectl_exit_code -ne 0 ]] && ! is_not_found_error "$kubectl_output"; then - echo "kubectl command failed" - # Check for authentication errors and retry with token refresh - if ! check_kubectl_auth "$namespace" >/dev/null 2>&1; then - echo -e "${BLUE}🔄 Authentication error detected. Refreshing kubeconfig...${NC}" - if refresh_kubeconfig; then - echo -e "${BLUE}🔄 Retrying kubectl command after kubeconfig refresh...${NC}" - kubectl_output=$(kubectl argo rollouts get rollout "${rollout_name}" --namespace "${namespace}" 2>&1) - else - echo -e "${RED}❌ Failed to refresh kubeconfig. Cannot continue.${NC}" >&2 - exit 2 - fi - else - # Any other error (not "not found" and not auth error) should fail - echo -e "${RED}❌ kubectl command failed (unexpected error):${NC}" >&2 - echo "$kubectl_output" >&2 - exit 2 - fi - else - echo "kubectl command succeeded" - fi - echo "kubectl_output:" + # Get kubectl rollout status (handles errors and retries internally) + kubectl_output=$(get_kubectl_argo_rollout "${rollout_name}" "${namespace}") || return $? + echo "kubectl_output:"This delegates all retry logic and authentication handling to the dedicated function, ensuring consistent behavior and reducing code duplication.
240-257: Remove or guard the unconditional debug AWS credentials block. This block executes on every loop iteration without any guard, adding unnecessary overhead and log verbosity. While credentials are partially masked, the code is clearly temporary debugging that should either be removed entirely or explicitly guarded behind a debug flag. Option 1 (Recommended): Remove the debug block entirely:
Option 1 (Recommended): Remove the debug block entirely:
- echo "-------------------------------- DEBUG AWS EXPIRATION -------" - aws_credentials=$(aws configure export-credentials --format env-no-export 2>/dev/null) - echo "$aws_credentials" | grep EXPIRATION - # Mask AWS_ACCESS_KEY_ID to show only last 4 characters - access_key_line=$(echo "$aws_credentials" | grep AWS_ACCESS_KEY_ID) - if [[ -n "$access_key_line" ]]; then - access_key_value="${access_key_line#*=}" - access_key_last4="${access_key_value: -4}" - echo "AWS_ACCESS_KEY_ID=****${access_key_last4}" - fi - # Mask AWS_SECRET_ACCESS_KEY to show only last 4 characters - secret_key_line=$(echo "$aws_credentials" | grep AWS_SECRET_ACCESS_KEY) - if [[ -n "$secret_key_line" ]]; then - secret_key_value="${secret_key_line#*=}" - secret_key_last4="${secret_key_value: -4}" - echo "AWS_SECRET_ACCESS_KEY=****${secret_key_last4}" - fi - echo "-------------------------------------------------------------" echo "============================================================="Option 2: Guard behind a debug flag:
```diff
+        if [[ "${DEBUG_AWS_CREDENTIALS:-false}" == "true" ]]; then
         echo "-------------------------------- DEBUG AWS EXPIRATION -------"
         aws_credentials=$(aws configure export-credentials --format env-no-export 2>/dev/null)
         echo "$aws_credentials" | grep EXPIRATION
         # Mask AWS_ACCESS_KEY_ID to show only last 4 characters
         access_key_line=$(echo "$aws_credentials" | grep AWS_ACCESS_KEY_ID)
         if [[ -n "$access_key_line" ]]; then
           access_key_value="${access_key_line#*=}"
           access_key_last4="${access_key_value: -4}"
           echo "AWS_ACCESS_KEY_ID=****${access_key_last4}"
         fi
         # Mask AWS_SECRET_ACCESS_KEY to show only last 4 characters
         secret_key_line=$(echo "$aws_credentials" | grep AWS_SECRET_ACCESS_KEY)
         if [[ -n "$secret_key_line" ]]; then
           secret_key_value="${secret_key_line#*=}"
           secret_key_last4="${secret_key_value: -4}"
           echo "AWS_SECRET_ACCESS_KEY=****${secret_key_last4}"
         fi
         echo "-------------------------------------------------------------"
+        fi
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/scripts/argo_rollout_status_common.sh (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/scripts/argo_rollout_status_common.sh (1)
src/scripts/update_kubeconfig_common.sh (1)
update_kubeconfig(35-173)
🔇 Additional comments (1)
src/scripts/argo_rollout_status_common.sh (1)
37-48: ✅ Solid validation logic for `UPDATE_KUBECONFIG_COMMON_SCRIPT`. The validation mirrors the pattern for `ARGO_CLI_COMMON_SCRIPT` and correctly verifies that the script is non-empty and the function is defined.
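The pattern boils down to `declare -F`, which succeeds only when a function of that name is currently defined — a minimal sketch with a stand-in definition (the real one comes from the sourced script):

```bash
#!/usr/bin/env bash
# Stand-in for the definition normally provided by update_kubeconfig_common.sh.
update_kubeconfig() { :; }

# Validation: fail fast if the sourced script did not define the function.
if ! declare -F update_kubeconfig >/dev/null; then
  echo "update_kubeconfig is not defined" >&2
  exit 1
fi
echo "update_kubeconfig is defined"
```

Failing at validation time gives a clear error instead of a confusing "command not found" deep inside the rollout loop.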
```bash
function refresh_kubeconfig() {
  echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"

  if update_kubeconfig; then
```
Environment variable naming mismatch prevents kubeconfig refresh from working.
Line 174 calls update_kubeconfig with no arguments, expecting it to read from environment variables. However, the update_kubeconfig function (from update_kubeconfig_common.sh) expects variables named KUBECONFIG_CLUSTER_NAME and KUBECONFIG_AWS_REGION, but line 368 exports CLUSTER_NAME and AWS_REGION instead. This mismatch will cause the refresh to silently fail inside the subshell.
Option 1 (Recommended): Update line 368 to export variables with the correct names:
- export CLUSTER_NAME AWS_REGION
+ export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}" KUBECONFIG_AWS_REGION="${AWS_REGION}"

Option 2: Modify line 174 to pass the values explicitly as arguments:
function refresh_kubeconfig() {
echo -e "${BLUE}🔄 Attempting to refresh kubeconfig...${NC}"
- if update_kubeconfig; then
+ if update_kubeconfig --cluster-name "${CLUSTER_NAME}" --aws-region "${AWS_REGION}"; then
echo -e "${GREEN}✅ Kubeconfig refreshed successfully${NC}"
return 0

Verify that update_kubeconfig parameter handling accepts the --cluster-name and --aws-region flags in Option 2.
Also applies to: 368-368
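The mismatch is easy to reproduce with a stub `update_kubeconfig` (the cluster name below is a made-up value; only the variable names matter):

```bash
#!/usr/bin/env bash
# The callee reads KUBECONFIG_CLUSTER_NAME; exporting the value under a
# different name (CLUSTER_NAME) leaves it empty inside the function.
update_kubeconfig() { echo "cluster='${KUBECONFIG_CLUSTER_NAME:-}'"; }

export CLUSTER_NAME="demo-eks"
update_kubeconfig                                  # cluster=''

export KUBECONFIG_CLUSTER_NAME="${CLUSTER_NAME}"   # the Option 1 fix
update_kubeconfig                                  # cluster='demo-eks'
```

Because the function swallows the empty value without complaint, the failure is silent — exactly the behavior described above.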
```bash
$(declare -f is_not_found_error check_kubectl_auth refresh_kubeconfig get_kubectl_argo_rollout)
$(declare -f with_argocd_cli set_argocd_cli unset_argocd_cli is_argocd_logged_in is_kubectl_namespace_set)
$(declare -f update_kubeconfig)
$(declare -f get_aws_credential_expiration_v1)
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for function definition of get_aws_credential_expiration_v1
rg -n "function get_aws_credential_expiration_v1|^get_aws_credential_expiration_v1\s*\(" --type bash

Repository: TiendaNube/eks-orb
Length of output: 90
🏁 Script executed:
#!/bin/bash
# Also search for any references to this function
rg -n "get_aws_credential_expiration_v1" --type bash -C 2

Repository: TiendaNube/eks-orb
Length of output: 90
🏁 Script executed:
#!/bin/bash
# Search for function definition of get_aws_credential_expiration_v1
rg -n "get_aws_credential_expiration_v1" -C 2

Repository: TiendaNube/eks-orb
Length of output: 543
🏁 Script executed:
#!/bin/bash
# Check the file size and read it
wc -l src/scripts/argo_rollout_status_common.sh

Repository: TiendaNube/eks-orb
Length of output: 107
🏁 Script executed:
#!/bin/bash
# Search entire repository for the function definition
rg "function get_aws_credential_expiration_v1|^get_aws_credential_expiration_v1" -C 3

Repository: TiendaNube/eks-orb
Length of output: 44
🏁 Script executed:
#!/bin/bash
# Read the relevant section of the file around line 376 to see context
sed -n '1,100p' src/scripts/argo_rollout_status_common.sh | tail -50

Repository: TiendaNube/eks-orb
Length of output: 1923
🏁 Script executed:
#!/bin/bash
# Read the entire file and search for the function definition
cat -n src/scripts/argo_rollout_status_common.sh | grep -i "get_aws_credential"

Repository: TiendaNube/eks-orb
Length of output: 117
🏁 Script executed:
#!/bin/bash
# Also check if there are any other files that might define this function
fd . --type f --name "*.sh" | head -20

Repository: TiendaNube/eks-orb
Length of output: 312
Perfect. I've completed the verification. The search results confirm that:
Line 376 is the ONLY reference to `get_aws_credential_expiration_v1` in the entire file
The function is never defined anywhere in the repository (repository-wide search returned no results)
The function is referenced only in a `declare -f` export statement, which attempts to export a function that doesn't exist
This confirms the original review comment is accurate. The function will need to be either defined or removed from the export list.
get_aws_credential_expiration_v1 is not defined anywhere in the codebase but is declared for export at line 376.
Line 376 references get_aws_credential_expiration_v1 in a declare -f statement for export to the subshell. A repository-wide search confirms this function is never defined. This will cause a runtime error when the subshell attempts to use it.
Either define the function in this script or remove line 376 from the export list.
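The failure mode is easy to see in isolation: `declare -f` on an undefined name expands to nothing, so the generated subshell script simply lacks the function and errors only at call time (the helper names below are illustrative):

```bash
#!/usr/bin/env bash
# Build a subshell body the way the script does, "exporting" one real
# function and one name that was never defined.
defined_helper() { echo "ok"; }

body="$(declare -f defined_helper)
$(declare -f get_aws_credential_expiration_v1)"   # expands to nothing

bash -c "$body; defined_helper"                   # prints: ok
bash -c "$body; get_aws_credential_expiration_v1" 2>/dev/null \
  || echo "command not found at runtime"
```

This is why the bug is silent until the subshell actually invokes the missing function, rather than failing when the export list is built.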
🤖 Prompt for AI Agents
In src/scripts/argo_rollout_status_common.sh around line 376, the script exports
a function named get_aws_credential_expiration_v1 via "declare -f" but that
function is not defined anywhere in the repository; remove the export reference
on line 376 if the function is unused, or alternatively add the missing function
definition (implementing the expected behavior and unit-tested logic) elsewhere
in the file before the export so the declare -f refers to a real function.
What? 📃
- Introduced `update_kubeconfig_common.sh` script to manage kubeconfig updates for EKS, enhancing authentication handling.
- Added `cluster-name` and `aws-region` parameters to `argo-deploy-application` and `argo-rollout-status` commands
BAC-468
Summary by CodeRabbit
New Features
Improvements
Chores
✏️ Tip: You can customize this high-level summary in your review settings.