This deployment will modify your root domain's A record.

This repository uses the following plan: we deploy Amazon EKS with serverless Fargate, and at the front we use Route 53 to point our DNS name at the cluster. Read the Route 53 documentation on migrating your own domain.

Before we move forward, the packages below are required to be installed and configured.

If you are using Windows, I recommend using Chocolatey (`choco`) to install all the required packages.
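As an example, the toolchain this guide relies on could be installed in one command; the Chocolatey package names below are assumptions based on the community repository and may differ for your setup:

```shell
# Hypothetical one-shot install of the tools used in this guide.
# Package names are Chocolatey community package names and may change.
choco install -y terraform awscli kubernetes-cli kubernetes-helm docker-desktop
```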
Edit the Dockerfile inside the NestJS project directory to:
```dockerfile
FROM node:lts-alpine3.13
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
```

Then run:

```shell
cd <YOUR PROJECT DIRECTORY>
docker login
docker build -t <USERNAME>/<YOUR ARTIFACT NAME> .
docker push <USERNAME>/<YOUR ARTIFACT NAME>
```

Only after you have pushed this initial image, edit the Dockerfile inside the NestJS project directory to build from it instead:

```dockerfile
FROM <USERNAME>/<YOUR ARTIFACT NAME>:latest
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
```
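Once the image exists on Docker Hub, you may want each push to carry a version tag as well, so a bad deployment can be rolled back; this is a sketch, not part of the original pipeline:

```shell
# Hypothetical: tag each build with the git short SHA in addition to :latest,
# so a known-good tag is always available for rollback.
TAG=$(git rev-parse --short HEAD)
docker build -t <USERNAME>/<YOUR ARTIFACT NAME>:latest \
             -t <USERNAME>/<YOUR ARTIFACT NAME>:"$TAG" .
docker push --all-tags <USERNAME>/<YOUR ARTIFACT NAME>
```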
Before running Terraform, create a new tfvars file inside the infrastructure folder using this template:
```hcl
region =
vpc_name =
vpc_cidr = "10.2.0.0/16"
eks_cluster_name =
cidr_block_igw = "0.0.0.0/0"
node_group_name =
ng_instance_types = [ "t2.small" ]
disk_size = 10
desired_nodes = 2
max_nodes = 2
min_nodes = 1
fargate_profile_name =
kubernetes_namespace =
deployment_name =
deployment_replicas =
domain_name =
grafana_password = "admin"
docker_image = "symefa/datasaur-symefa:latest"
app_domain =
app_labels = {
  "app" =
  "tier" =
}
```

| Name | Description | Recommended |
|---|---|---|
| region | Your AWS region | |
| vpc_name | VPC instance name | |
| vpc_cidr | VPC Classless Inter-Domain Routing block | `"10.2.0.0/16"` |
| eks_cluster_name | Kubernetes cluster name | |
| cidr_block_igw | CIDR block for the internet gateway | `"0.0.0.0/0"` |
| node_group_name | Name for the node group | |
| ng_instance_types | EC2 instance types | `["t2.small"]` |
| disk_size | Size of allocated storage | |
| desired_nodes | Desired number of nodes | |
| max_nodes | Maximum number of nodes | |
| min_nodes | Minimum number of nodes | |
| fargate_profile_name | Fargate profile name | |
| kubernetes_namespace | Namespace the application is deployed to | |
| deployment_name | Name of the deployment | |
| deployment_replicas | Number of deployment replicas | |
| domain_name | Your domain name | |
| grafana_password | Password for Grafana | |
| app_labels | Name and tier labels for the app | |
| docker_image | Docker Hub image to be deployed | |
| app_domain | Full application domain name | |
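As an illustration, a filled-in tfvars file might look like this; every value here is a made-up example, not a recommendation:

```hcl
# Hypothetical example values -- replace all of them with your own.
region               = "ap-southeast-1"
vpc_name             = "demo-vpc"
vpc_cidr             = "10.2.0.0/16"
eks_cluster_name     = "demo-eks"
cidr_block_igw       = "0.0.0.0/0"
node_group_name      = "demo-ng"
ng_instance_types    = ["t2.small"]
disk_size            = 10
desired_nodes        = 2
max_nodes            = 2
min_nodes            = 1
fargate_profile_name = "demo-fargate"
kubernetes_namespace = "demo"
deployment_name      = "demo"
deployment_replicas  = 2
domain_name          = "example.com"
grafana_password     = "admin"
docker_image         = "symefa/datasaur-symefa:latest"
app_domain           = "app.example.com"
app_labels = {
  "app"  = "demo"
  "tier" = "frontend"
}
```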
After you create the configuration file, run:

```shell
cd infrastructure
terraform init
terraform plan -var-file=<your tfvars file>
terraform apply -var-file=<your tfvars file>
```

Your application will be accessible at `<your app_domain>`. To change the configuration later, edit the .tfvars file inside the infrastructure folder and re-apply.
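After `terraform apply` succeeds, you can sanity-check that the Route 53 record resolves before opening the app in a browser; `app.example.com` below is a placeholder for your `app_domain`:

```shell
# Hypothetical check: the app domain should resolve to the load balancer
# created for the ingress.
dig +short app.example.com

# Confirm the endpoint answers over HTTP (headers only).
curl -I http://app.example.com
```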
Edit .github/workflows/main.yml using the configuration from your Terraform setup:

```yaml
name: CI-CD
env:
  NAMESPACE: <namespace name>
  DEPLOYMENT: <deployment name>-datasaur
  REGION: <region name>
  CLUSTER_NAME: <cluster name>
# omitted
```

Note that this implementation uses a rollout strategy; changes will be applied after 1-2 minutes.
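The omitted part of the workflow typically rebuilds the image and restarts the rollout. A minimal sketch of what it could contain follows; the job layout, action versions, and secret names are all assumptions, not the repository's actual file:

```yaml
# Hypothetical continuation of .github/workflows/main.yml
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and push image
        run: |
          docker login -u "${{ secrets.DOCKER_USERNAME }}" -p "${{ secrets.DOCKER_PASSWORD }}"
          docker build -t "${{ secrets.DOCKER_USERNAME }}/datasaur:latest" .
          docker push "${{ secrets.DOCKER_USERNAME }}/datasaur:latest"
      - name: Restart rollout
        run: |
          aws eks --region "$REGION" update-kubeconfig --name "$CLUSTER_NAME"
          kubectl rollout restart deployment "$DEPLOYMENT" -n "$NAMESPACE"
```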
Use these commands:

```shell
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
kubectl get svc
```

If the second command returns your list of services, then kubectl was configured successfully.
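If `kubectl get svc` fails, it can help to confirm which context your kubeconfig is currently pointing at before digging further:

```shell
# Verify kubectl is talking to the new EKS cluster, not an older context.
kubectl config current-context
kubectl get nodes
```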
Run this command:

```shell
kubectl port-forward -n monitoring svc/grafana 9000:80
```

Then open Grafana at http://localhost:9000. Log in using:

```
USERNAME=admin
PASSWORD=<your grafana password>
```

To view the default dashboard, import the file "grafana-dashboard.js" from the infrastructure folder.
Run this command:

```shell
kubectl port-forward -n monitoring svc/prometheus-server 8081:80
```

Then open Prometheus at http://localhost:8081.
You can destroy this cluster and VPC entirely by running:

```shell
terraform destroy -var-file=<Your tfvars file>
```

Directory structure:

```
infrastructure
├───modules
│   ├───dns
│   ├───eks
│   │   ├───eks_cluster
│   │   ├───eks_node_group
│   │   └───fargate
│   ├───kubernetes
│   ├───monitoring
│   │   └───data
│   └───network
...
```
- If destroying the internet gateway gets stuck, try again after some time.
- If you are stuck because of a namespace conflict, try deleting the namespace manually using helm and/or kubectl.
- If apply fails, try again after some time; this happens because the DNS records require the ingress to be set up properly first, and Terraform does not seem to catch that dependency.
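For the namespace-conflict case, a manual cleanup could look like this; the release and namespace names are placeholders:

```shell
# Hypothetical cleanup: remove the helm release, then the namespace itself.
helm uninstall <release name> -n <namespace>
kubectl delete namespace <namespace>

# If the namespace hangs in "Terminating", inspect what is blocking it.
kubectl get namespace <namespace> -o json
```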
Common commands to debug Terraform:

```shell
terraform refresh -var-file=<Your tfvars file>
terraform state list
terraform state show <state>
```

Common commands to debug Kubernetes:

```shell
kubectl get <type> -n <namespace>
kubectl describe <type> <name> -n <namespace>
helm status <name> -n <namespace>
```

Wild kawaii komodo dragon appears!!

