
feat: configure automated deployment with Ansible #15

Merged
JakubDurkac merged 22 commits into main from ISV-6364 on Oct 22, 2025
Conversation

@JakubDurkac (Contributor)

Created the ansible/ folder:

  1. Configured vaults/ containing secrets.
  2. Configured inventories/ with the basic deployment configuration.
  3. Configured templates/ defining all the main components, including the database.
  4. Configured roles/ including tasks/ for the deployment of project setup, api, web and worker.
  5. Configured playbooks/ with deploy.yml, which initializes the deployment based on a given env (stage/prod) and deployment tags (always, api, web, worker).
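The deploy.yml entry point described in item 5 might look roughly like this; the role names, host pattern, and tag wiring are illustrative assumptions, not taken from the actual repository:

```yaml
# Hedged sketch of playbooks/deploy.yml; names are assumptions.
- name: Deploy Pullsar components
  hosts: "pullsar_{{ env }}"   # env is passed in as stage or prod
  connection: local            # deployment talks to the OpenShift API, not remote hosts
  gather_facts: false
  roles:
    - role: project_setup
      tags: [always]           # secrets/config are always (re)applied
    - role: api
      tags: [api]
    - role: web
      tags: [web]
    - role: worker
      tags: [worker]
```

Invoked e.g. as `ansible-playbook playbooks/deploy.yml -e env=stage --tags api --tags always`.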

Configured .github/workflows/ci-cd.yaml:

  1. Added a set-image-suffix job to generate a unique image tag suffix accessible to other jobs.
  2. Made the build-and-push job create an output listing the services we've built images for.
  3. Added a set-ansible-tags job to generate a string like "--tags api --tags web --tags always" based on the build-and-push output; it also checks whether the Ansible config was modified, in which case at least project-setup is deployed to update secrets/config.
  4. Added deploy-stage and deploy-prod jobs to create a deployment on PR and on push to main, respectively.
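Item 3 could be sketched as follows; the job layout and the output name `built_services` are assumptions based on the description above:

```yaml
# Hypothetical sketch of the set-ansible-tags job.
set-ansible-tags:
  runs-on: ubuntu-latest
  needs: build-and-push
  outputs:
    ansible_tags: ${{ steps.tags.outputs.ansible_tags }}
  steps:
    - id: tags
      run: |
        TAGS="--tags always"
        # built_services is an assumed space-separated output of build-and-push
        for svc in ${{ needs.build-and-push.outputs.built_services }}; do
          TAGS="$TAGS --tags $svc"
        done
        echo "ansible_tags=$TAGS" >> "$GITHUB_OUTPUT"
```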

1. Added the official OpenShift template postgresql-persistent.yml.
2. Added a task to project_setup that creates/updates the database based on the template.
3. Added a template postgresql-migration-job.yml that uses files/migrations/V1__initial_setup.sql to create the base table schema and set the db_start_date if they are not already set.
4. Added a task to run the migration job at the end of project_setup.
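The migration-job task in item 4 might look something like this sketch; the module options follow kubernetes.core, while the job name and retry timings are illustrative:

```yaml
# Hedged sketch: apply the migration job template, then wait for it to succeed.
- name: Run database migration job
  kubernetes.core.k8s:
    state: present
    namespace: "{{ project_namespace }}"
    template: postgresql-migration-job.yml

- name: Wait for the migration job to finish
  kubernetes.core.k8s_info:
    api_version: batch/v1
    kind: Job
    name: postgresql-migration   # assumed job name
    namespace: "{{ project_namespace }}"
  register: migration_job
  until: (migration_job.resources[0].status.succeeded | default(0) | int) >= 1
  retries: 30
  delay: 10
```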
@JakubDurkac force-pushed the ISV-6364 branch 5 times, most recently from 5735e6f to 995a4ce on October 9, 2025 13:14
When using image resolution within the template, it seems OpenShift needs to create some temporary objects in the openshift namespace. To avoid having to set up write permissions, the image is now resolved simply by reading it from the API and passing it down to the template.

I have also simplified the template a bit and fixed the database migration job behavior: indentation issues, time to live, and removal of finished jobs.
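The read-and-pass-down approach described above could look roughly like this; the register and variable names are illustrative:

```yaml
# Sketch: resolve the image once via the API, then hand it to the template.
- name: Look up the PostgreSQL image
  kubernetes.core.k8s_info:
    api_version: image.openshift.io/v1
    kind: ImageStreamTag
    name: "postgresql:{{ postgres_version }}"
    namespace: openshift
  register: postgres_istag

- name: Deploy the database template with the resolved image
  kubernetes.core.k8s:
    state: present
    namespace: "{{ project_namespace }}"
    template: postgresql-persistent.yml
  vars:
    # the ImageStreamTag carries the fully qualified image reference
    postgres_image: "{{ postgres_istag.resources[0].image.dockerImageReference }}"
```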
@JakubDurkac force-pushed the ISV-6364 branch 2 times, most recently from 9c46465 to 38e77d9 on October 10, 2025 07:53
@JakubDurkac JakubDurkac requested a review from Allda October 10, 2025 08:45
ansible/**
.github/workflows/**

- name: Generate Ansible deployment tags
Contributor
I am not sure why we need this. We should always run the playbook and (re-)deploy the whole application, including all subcomponents. For local testing it is good to have tags around, but they should not be used when deploying the app.

Contributor Author
@JakubDurkac Oct 17, 2025

The idea behind this is the following: currently we build and push images only for the services that changed in the PR/push to main. So if we only build images for api and web but then deploy all three apps (api, web and worker), the worker deployment would point to a new image tag that was never built and doesn't exist. That's why currently only the changed services are deployed.

I can refactor this, but I'm not sure what would be preferred. If we should always deploy all apps, should we also always build and push images of all apps, regardless of whether their code changed? Or something else?

Contributor Author

I went forward with it. Now all parts are always built and deployed, which made things much simpler. I tried to address all the other comments as well. Thanks for the feedback; feel free to check and let me know if you would still do anything differently.

- name: Install required Python libraries
run: pip install kubernetes openshift

- name: Run Ansible deployment for stage
Contributor

Consider using the dawidd6/action-ansible-playbook GitHub action here. You can find our use case here: https://github.com/redhat-openshift-ecosystem/operator-pipelines/blob/main/.github/workflows/deploy.yml#L40C15-L40C46
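A step using that action might look like this; the inputs follow dawidd6/action-ansible-playbook's documented interface, but the paths and version tag are assumptions:

```yaml
- name: Run Ansible deployment for stage
  uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: playbooks/deploy.yml
    directory: ./ansible
    vault_password: ${{ secrets.ANSIBLE_VAULT_PASSWORD }}
    options: |
      -e env=stage
```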

--vault-password-file <(echo "$ANSIBLE_VAULT_PASSWORD") \
${{ needs.set-ansible-tags.outputs.ansible_tags }} \
-e "env=stage" \
-e "openshift_server=${{ secrets.OPENSHIFT_SERVER_STAGE }}" \
Contributor

The server should be configured as part of the Ansible inventory.

${{ needs.set-ansible-tags.outputs.ansible_tags }} \
-e "env=stage" \
-e "openshift_server=${{ secrets.OPENSHIFT_SERVER_STAGE }}" \
-e "openshift_token=${{ secrets.OPENSHIFT_TOKEN_STAGE }}" \
Contributor

And the token should also be part of the Ansible vault.
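A common pattern is to keep the secret in an encrypted vault file and reference it from a plain vars file; the file paths and variable names here are illustrative:

```yaml
# group_vars/stage/vault.yml -- encrypted with `ansible-vault encrypt`
vault_openshift_token: "sha256~placeholder-token"

# group_vars/stage/vars.yml -- plain, so grep still finds the variable name
openshift_token: "{{ vault_openshift_token }}"
```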

@@ -0,0 +1,21 @@
pullsar_prod:
Contributor

This is one way to define an inventory, but it is not really scalable or user-friendly. Instead, you should create an inventory file where you describe your hosts and their hierarchy, and then create group_vars for each group. This will allow you to share common variables and bring many more features.

You can find an example here: https://github.com/redhat-openshift-ecosystem/operator-pipelines/tree/main/ansible/inventory
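A minimal YAML inventory along those lines might look like this; the group and host names are illustrative:

```yaml
# ansible/inventory/hosts.yml (sketch)
all:
  children:
    stage:
      hosts:
        pullsar_stage:
    prod:
      hosts:
        pullsar_prod:
```

Shared variables would then live in inventory/group_vars/all.yml, with per-environment values in group_vars/stage.yml and group_vars/prod.yml.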

namespace: "{{ project_namespace }}"
definition:
apiVersion: v1
kind: Secret
Contributor

For tasks that work with secrets, please use no_log: true, since Ansible can leak secrets when using verbose output.
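Applied to a secret-creating task like the one quoted, that would be a sketch like the following (the secret name and variable are illustrative):

```yaml
# Sketch: no_log keeps secret contents out of verbose/debug output.
- name: Create application secret
  kubernetes.core.k8s:
    state: present
    namespace: "{{ project_namespace }}"
    definition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: pullsar-secrets            # illustrative name
      stringData: "{{ app_secrets }}"    # assumed dict of secret values
  no_log: true
```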

kubernetes.core.k8s_info:
api_version: image.openshift.io/v1
kind: ImageStreamTag
name: "postgresql:{{ postgres_version }}"
Contributor

Where does this image come from? You should use the image from the official Red Hat registry.

@@ -0,0 +1,51 @@
apiVersion: batch/v1
Contributor

The template definition should be stored under the role that uses it.

K8S_AUTH_API_KEY: "{{ openshift_token }}"
K8S_AUTH_HOST: "{{ openshift_server }}"

roles:
Contributor

I think there might be a bit of a misunderstanding about what roles are used for. A role is an independent unit that contains all the resources needed for a deployment. I see you define a role as a set of tasks, but all your templates are outside of it.

I also think you don't need to create a role for each component; that is unnecessarily complicated. Better to split the components into independent task files inside one role.

You can use ansible-galaxy to bootstrap the role directory structure and see how a role should look. Or check our other projects where roles are used.
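One way to follow this suggestion is a single role whose main.yml imports one task file per component; the role and file names here are illustrative:

```yaml
# roles/pullsar/tasks/main.yml (sketch)
- name: Project setup
  ansible.builtin.import_tasks: project_setup.yml
  tags: [always]

- name: Deploy API
  ansible.builtin.import_tasks: api.yml
  tags: [api]

- name: Deploy web
  ansible.builtin.import_tasks: web.yml
  tags: [web]

- name: Deploy worker
  ansible.builtin.import_tasks: worker.yml
  tags: [worker]
```

Tags on import_tasks propagate to the imported tasks, so tag-based filtering still works for local testing.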

name: Deploy to stage
runs-on: ubuntu-latest
needs: [set-image-suffix, build-and-push]
if: github.event_name == 'pull_request'
Contributor

The stage instance won't be deployable after a merge is done?

Contributor Author
@JakubDurkac Oct 20, 2025

On what events should I be deploying to stage and to prod? What would be acceptable in my case?

Contributor

Usually, we deploy both stage and prod after a PR is merged. Since you don't have dev and qa environments, I would maybe make the stage deployment optional for PRs for testing purposes, and after a PR is merged I would deploy both stage and prod.
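One way to express this in the workflow, using the deploy-stage label that was later added to this PR (the exact conditions are a sketch, not the merged implementation):

```yaml
deploy-stage:
  if: >
    github.event_name == 'push' ||
    (github.event_name == 'pull_request' &&
     contains(github.event.pull_request.labels.*.name, 'deploy-stage'))

deploy-prod:
  if: github.event_name == 'push' && github.ref == 'refs/heads/main'
```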

Contributor
@Allda left a comment

The PR looks OK to me; let's just resolve the question of when to run the stage and prod deployments, and merge this. Also, before merging, please run ansible-lint to fix any potential issues with the Ansible configs. It would also be a good idea to run ansible-lint in the GitHub CI.
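A minimal lint job for the GitHub CI could be sketched as follows; the job name is illustrative:

```yaml
ansible-lint:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Run ansible-lint
      run: |
        pip install ansible-lint
        ansible-lint ansible/
```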

@JakubDurkac JakubDurkac added the deploy-stage Optional label to mark PR changes ready for deployment to stage. label Oct 21, 2025
@JakubDurkac JakubDurkac removed the deploy-stage Optional label to mark PR changes ready for deployment to stage. label Oct 21, 2025
@JakubDurkac JakubDurkac added the deploy-stage Optional label to mark PR changes ready for deployment to stage. label Oct 22, 2025
@JakubDurkac JakubDurkac merged commit 831e7cc into main Oct 22, 2025
19 of 21 checks passed
@JakubDurkac JakubDurkac deleted the ISV-6364 branch October 22, 2025 10:24

Labels

deploy-stage Optional label to mark PR changes ready for deployment to stage.

2 participants