feat: configure automated deployment with Ansible #15
Conversation
1. Added a set-image-suffix job to generate a unique image tag suffix accessible to other jobs.
2. Made the build-and-push job emit an output listing the services we built images for.
3. Added a set-ansible-tags job to generate a string like "--tags api --tags web --tags always" based on the build-and-push output; it also checks whether the Ansible config was modified, in which case at least project-setup is deployed to update secrets/config.
4. Added deploy-stage and deploy-prod jobs to run a deployment on pull requests and on pushes to main, respectively.
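A minimal sketch of what the set-ansible-tags job described above could look like; the job, step, and output names here are assumptions for illustration, not copied from the actual workflow:

```yaml
# Hypothetical sketch -- job/output names are assumed, not taken from the PR.
set-ansible-tags:
  runs-on: ubuntu-latest
  needs: build-and-push
  outputs:
    ansible_tags: ${{ steps.tags.outputs.ansible_tags }}
  steps:
    - id: tags
      run: |
        TAGS=""
        # build-and-push is assumed to expose a space-separated list of built services
        for svc in ${{ needs.build-and-push.outputs.built_services }}; do
          TAGS="$TAGS --tags $svc"
        done
        echo "ansible_tags=$TAGS --tags always" >> "$GITHUB_OUTPUT"
```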
1. Added the official OpenShift template postgresql-persistent.yml.
2. Added a task to project_setup for creating/updating the database from the template.
3. Added a template postgresql-migration-job.yml, which uses files/migrations/V1__initial_setup.sql to create the base table schema and set db_start_date if they are not already set.
4. Added a task that runs the migration job at the end of project_setup.
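As a hedged sketch, the migration Job could look roughly like the following; the image variable, volume, and ConfigMap names are assumptions, and only the SQL file path comes from the description above:

```yaml
# Illustrative only -- not the actual postgresql-migration-job.yml.
apiVersion: batch/v1
kind: Job
metadata:
  name: postgresql-migration
spec:
  ttlSecondsAfterFinished: 600   # auto-remove finished jobs
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "{{ postgres_image }}"   # assumed variable
          command: ["psql", "-f", "/migrations/V1__initial_setup.sql"]
          volumeMounts:
            - name: migrations
              mountPath: /migrations
      volumes:
        - name: migrations
          configMap:
            name: db-migrations           # assumed ConfigMap name
```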
…put for set-ansible-tags
Force-pushed from 5735e6f to 995a4ce
When resolving the image inside the template, OpenShift appears to create some temporary objects in the openshift namespace. To avoid having to set up write permissions there, the image is now resolved by reading it from the API and passing it down to the template. I have also simplified the template a bit and fixed the database migration job's behavior: indentation issues, time to live, and removal of finished jobs.
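A sketch of the read-and-pass-down approach described above, assuming the kubernetes.core collection; the registered variable and template file name are illustrative:

```yaml
- name: Resolve the postgresql image from the API
  kubernetes.core.k8s_info:
    api_version: image.openshift.io/v1
    kind: ImageStreamTag
    namespace: openshift
    name: "postgresql:{{ postgres_version }}"
  register: pg_istag

- name: Apply the template with the resolved image
  kubernetes.core.k8s:
    state: present
    namespace: "{{ project_namespace }}"
    template: postgresql-persistent.yml   # path is an assumption
  vars:
    postgres_image: "{{ pg_istag.resources[0].image.dockerImageReference }}"
```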
Force-pushed from 9c46465 to 38e77d9
.github/workflows/ci-cd.yaml
Outdated
ansible/**
.github/workflows/**
- name: Generate Ansible deployment tags
I am not sure why we need this. We should always run the playbook and (re-)deploy the whole application, including all subcomponents. For local testing it is good to have tags around, but they should not be used when deploying the app.
The idea behind this is the following: currently we build and push images only for the services that changed in the PR/push to main. So if we only build images for api and web but then deploy all three apps (api, web and worker), I think the worker would also be switched to a new image tag, even though that image was never built and doesn't exist. That's why currently only the changed services are deployed.
I can refactor this, but I'm not sure what would be preferred. If we should always deploy all apps, should we also always build and push images of all apps, regardless of whether their code changed? Or something else?
I went ahead with it. Now all parts are always built and deployed, which made things much simpler. I tried to address all the other comments as well. Thanks for the feedback; feel free to check and let me know if you would still do something differently.
.github/workflows/ci-cd.yaml
Outdated
- name: Install required Python libraries
  run: pip install kubernetes openshift
- name: Run Ansible deployment for stage
Consider using the dawidd6/action-ansible-playbook GitHub Action here. You can find our use case here: https://github.com/redhat-openshift-ecosystem/operator-pipelines/blob/main/.github/workflows/deploy.yml#L40C15-L40C46
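A possible invocation of that action, based on its documented inputs; the playbook and inventory paths are assumptions for this repo:

```yaml
- name: Run Ansible deployment for stage
  uses: dawidd6/action-ansible-playbook@v2
  with:
    playbook: playbook.yml          # assumed playbook name
    directory: ./ansible
    vault_password: ${{ secrets.ANSIBLE_VAULT_PASSWORD }}
    options: |
      --inventory inventory/hosts.yml
      -e env=stage
```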
.github/workflows/ci-cd.yaml
Outdated
--vault-password-file <(echo "$ANSIBLE_VAULT_PASSWORD") \
${{ needs.set-ansible-tags.outputs.ansible_tags }} \
-e "env=stage" \
-e "openshift_server=${{ secrets.OPENSHIFT_SERVER_STAGE }}" \
The server should be configured as part of the Ansible inventories.
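In practice that could mean moving the URL out of the workflow flags and into a group_vars file, for example (path and URL are placeholders):

```yaml
# ansible/inventory/group_vars/stage.yml  -- illustrative path
openshift_server: https://api.stage.example.com:6443
```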
.github/workflows/ci-cd.yaml
Outdated
${{ needs.set-ansible-tags.outputs.ansible_tags }} \
-e "env=stage" \
-e "openshift_server=${{ secrets.OPENSHIFT_SERVER_STAGE }}" \
-e "openshift_token=${{ secrets.OPENSHIFT_TOKEN_STAGE }}" \
And the token should also be part of the Ansible vault.
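A common pattern, sketched here with assumed file paths, is an encrypted vault file plus an unencrypted vars file that references it:

```yaml
# ansible/inventory/group_vars/stage/vars.yml
openshift_token: "{{ vault_openshift_token }}"

# ansible/inventory/group_vars/stage/vault.yml -- encrypted with:
#   ansible-vault encrypt vault.yml
vault_openshift_token: "sha256~placeholder"   # placeholder value
```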
ansible/inventories/prod.yml
Outdated
@@ -0,0 +1,21 @@
pullsar_prod:
This is one way to define an inventory, but it is not really scalable or user-friendly. Instead, you should create an inventory file where you describe your hosts and their hierarchy, and then create group_vars for each group. This lets you share common variables and brings many more features.
You can find an example here: https://github.com/redhat-openshift-ecosystem/operator-pipelines/tree/main/ansible/inventory
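For illustration, an inventory file with a host hierarchy could look like this (group and host names are assumptions):

```yaml
# ansible/inventory/hosts.yml  -- illustrative
all:
  children:
    stage:
      hosts:
        localhost:
          ansible_connection: local
    prod:
      hosts:
        localhost:
          ansible_connection: local
```

Shared variables then go in group_vars/all.yml, with per-environment values in group_vars/stage.yml and group_vars/prod.yml.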
namespace: "{{ project_namespace }}"
definition:
  apiVersion: v1
  kind: Secret
For tasks that work with secrets, please use no_log: true - Ansible can leak secrets when using verbose output.
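For example, a secret-creating task could be guarded like this (resource names are illustrative):

```yaml
- name: Create application secret
  kubernetes.core.k8s:
    state: present
    namespace: "{{ project_namespace }}"
    definition:
      apiVersion: v1
      kind: Secret
      metadata:
        name: app-secrets              # assumed name
      stringData:
        database-password: "{{ db_password }}"
  no_log: true   # keep the rendered Secret out of verbose logs
```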
kubernetes.core.k8s_info:
  api_version: image.openshift.io/v1
  kind: ImageStreamTag
  name: "postgresql:{{ postgres_version }}"
Where does this image come from? You should use the image from the official Red Hat registry.
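That would mean referencing the registry image directly rather than a namespace-local ImageStream, e.g. (image tag is illustrative):

```yaml
postgres_image: registry.redhat.io/rhel9/postgresql-15:latest   # example tag
```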
@@ -0,0 +1,51 @@
apiVersion: batch/v1
The template definition should be stored under the role that uses it.
K8S_AUTH_API_KEY: "{{ openshift_token }}"
K8S_AUTH_HOST: "{{ openshift_server }}"
roles:
I think there might be a bit of a misunderstanding about what roles are for. A role is an independent unit that contains all the resources needed for a deployment. I see you define a role as a set of tasks, but all your templates live outside of it.
I also think you don't need a role for each component; that is unnecessarily complicated. Better to split the components into independent task files inside one role.
You can use ansible-galaxy to bootstrap the role directory structure and see how a role should look, or check our other projects where roles are used.
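For reference, ansible-galaxy init <role-name> scaffolds a layout like the following, with templates and files living inside the role:

```
roles/app/
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
└── vars/main.yml
```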
1. Merged the api, web and worker roles into one app role.
2. Moved templates and files under the roles.
3. Refactored the inventory with group_vars and host_vars.
.github/workflows/ci-cd.yaml
Outdated
name: Deploy to stage
runs-on: ubuntu-latest
needs: [set-image-suffix, build-and-push]
if: github.event_name == 'pull_request'
The stage instance won't be deployable after a merge is done?
On what events should I deploy to stage and to prod? What would be acceptable in my case?
Usually, we deploy both stage and prod after a PR is merged. Since you don't have dev and qa environments, I would maybe make the stage deployment optional for PRs for testing purposes, and after a PR is merged deploy both stage and prod.
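One way to encode that, sketched with assumed job names and a hypothetical PR label as the opt-in mechanism:

```yaml
deploy-stage:
  # run on merge, or on PRs explicitly labeled for a test deployment
  if: >
    (github.event_name == 'push' && github.ref == 'refs/heads/main') ||
    (github.event_name == 'pull_request' &&
     contains(github.event.pull_request.labels.*.name, 'deploy-stage'))

deploy-prod:
  if: github.event_name == 'push' && github.ref == 'refs/heads/main'
```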
Allda left a comment
The PR looks OK to me; let's just resolve the question of when to run the stage and prod deployments and merge this. Also, before merging, please run ansible-lint to fix any potential issues with the Ansible configs. It would also be a good idea to run ansible-lint in the GitHub CI.
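A minimal CI step for the suggested lint check might look like this (the path to the Ansible configs is an assumption):

```yaml
- name: Run ansible-lint
  run: |
    pip install ansible-lint
    ansible-lint ansible/
```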
Created ansible folder:
Configured .github/workflows/ci-cd.yaml: