Real-World, Production-Ready GitOps Project for DevOps Practitioners

Let's learn together and serve the society, Make India Proud.
Hello, this is Rakesh Kumar — your DevOps project practice trainer and your friend.
I’m back again with a brilliant DevOps project that is not only production-ready but also highly valuable for real-world learning.
This project is designed to take you deep inside the core of DevOps practices. We will not just run commands — we will understand why things work the way they do in real production environments.
By the end of this project, you won’t be the same person. You’ll gain real-world, production-level insights that most beginners miss.
Tech Stack Used
In this project, we will work with the following tools and technologies:
Linux
LAMP Server (WordPress)
Docker + Kind (to create a mini Kubernetes practice cluster)
Kubernetes (for microservices-based application deployment)
Kubernetes Dashboard (for pod-level monitoring)
HPA (for pod auto-scaling based on metrics data)
ArgoCD (GitOps tool to tightly sync Git & GitHub for continuous deployment)
Helm (your special package manager for Kubernetes)
AWS EC2 (for Project Infra, t2.large instance)
Git & GitHub
Core Skills You Should Have
Before starting this project, you should be comfortable with:
Basic Linux commands
Git & GitHub (basic usage)
Docker and Kubernetes fundamentals
AWS EC2 instance creation
Basic Linux web server knowledge
Don’t worry — you don’t need to be an expert.
If your basics are clear, this project will sharpen your mindset and confidence.
Let’s Get Started
I am using AWS cloud for the project infrastructure, so we will use:
OS: Ubuntu 22.04
Configuration: t2.large (because we have a lot of work to do here, so we need more power)
Storage Volume: 24 GB minimum
Step-1: Create an AWS t2.large Instance

We will use the AWS console directly to access the terminal.

Install some required packages and tools:
apt-get update
apt install -y vim git docker.io
systemctl enable --now docker
Step-2: Now we have to install KIND so we can create a mini k8s cluster. For this we will create a small script and then run it after giving it execute permission.
# vim install_kind.sh
#!/bin/bash
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo cp ./kind /usr/local/bin/kind
rm -rf kind
chmod +x install_kind.sh
bash install_kind.sh
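The script above only handles x86_64. If you are on an ARM machine, kind publishes a separate binary for the same version, following the same URL pattern with an arm64 suffix. A small sketch of the architecture mapping (the URL pattern matches the kind v0.20.0 release used above):

```shell
#!/bin/bash
# Map `uname -m` to the suffix used in kind's download URLs.
case "$(uname -m)" in
  x86_64)         ARCH=amd64 ;;
  aarch64|arm64)  ARCH=arm64 ;;
  *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac
echo "https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-${ARCH}"
```

You can merge this into install_kind.sh so the same script works on both architectures.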
Install Docker on Ubuntu (skip this if you already installed it in Step-1)
apt-get update
apt install docker.io
systemctl enable --now docker
systemctl status docker
docker ps -a
Install Docker on a Fedora-Based System
sudo dnf remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl enable --now docker
sudo docker ps -a
Your Docker is installed successfully. However, to access the Kubernetes cluster, you need a command-line tool: kubectl.
Step-3: Install the kubectl command to manage the KIND cluster.
# vim install_kubectl.sh
#!/bin/bash
# Variables
VERSION="v1.30.0"
URL="https://dl.k8s.io/release/${VERSION}/bin/linux/amd64/kubectl"
INSTALL_DIR="/usr/local/bin"
# Download and install kubectl
curl -LO "$URL"
chmod +x kubectl
sudo mv kubectl $INSTALL_DIR/
kubectl version --client
echo "kubectl installation complete."
chmod +x install_kubectl.sh
bash install_kubectl.sh
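The official kubectl install guide also recommends verifying the downloaded binary against its published SHA-256 checksum (available as kubectl.sha256 at the same dl.k8s.io path). Here is the verification mechanic, sketched on a local demo file so you can see exactly what sha256sum --check does; the file names are made up for illustration:

```shell
# Stand-in file (in real use this would be the downloaded kubectl binary)
printf 'demo-binary-contents\n' > kubectl-demo
# Record its SHA-256, then verify the file against that record.
sha256sum kubectl-demo > kubectl-demo.sha256
sha256sum --check kubectl-demo.sha256   # prints: kubectl-demo: OK
```

In install_kubectl.sh you would download "${URL}.sha256" and run the same check against the real binary before moving it into /usr/local/bin.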
Step-4: Create the KIND cluster. For this we have to create a KIND config file that specifies the core details of your KIND cluster nodes.
# vim config.yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.30.0
- role: worker
  image: kindest/node:v1.30.0
- role: worker
  image: kindest/node:v1.30.0
Create the KIND cluster, pointing it at the config.yml file:
kind create cluster --config=config.yml
and you will get a screen like this:

Let's check whether your KIND Kubernetes cluster is working:
kubectl get no -o wide

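One optional note on the config file before we move on: a KIND node is itself a Docker container, so ports inside the cluster are not automatically reachable from outside the EC2 host. In this project we get around that with kubectl port-forward and --address=0.0.0.0, but kind can also publish host ports at cluster-creation time via extraPortMappings in this same config file. A hedged sketch, with illustrative port numbers that this project does not require:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.30.0
  extraPortMappings:
  - containerPort: 30080   # a NodePort inside the kind node
    hostPort: 8080         # exposed on the EC2 host
    protocol: TCP
```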
Step-5: The GitHub Repository
Your application will be deployed on this KIND cluster. But what about the project application files? As you know, we are working on a LAMP WordPress application, and we have a dedicated GitHub repository for this, where you will get all the required files and folders. Because this is a GitOps project, GitHub is the core artifact server.
Go to this link: devrakaops/projectwala
This is the larger repository where several of my core DevOps learning projects are kept; this project is one of them.

The files and folders related to this project are kept in this repository.
Go to this link: projectwala/Basic-Level/Project-2 at main · devrakaops/projectwala

The Kubernetes folder contains all the YAML manifests for our project, which we will use.

Step-6: Installing Argo CD
We have multiple options to install ArgoCD.
Install ArgoCD using Helm
To install ArgoCD using Helm, you first have to install the Helm package manager in your Kubernetes cluster, and then install ArgoCD from its chart. I am here providing you a link:
Install ArgoCD directly from the command line
To install ArgoCD using the direct command-line method, follow these commands.
Create a namespace for Argo CD:
kubectl create namespace argocd
Apply the Argo CD manifest:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Check the services in the argocd namespace:
kubectl get svc -n argocd
Expose the Argo CD server using NodePort:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
Forward ports to access the Argo CD server:
kubectl port-forward -n argocd service/argocd-server 443:443 --address=0.0.0.0 &
Access ArgoCD using the public IP of your AWS instance on port 443. It is not fixed that you must use port 443; you can use any free port, because we are using port forwarding, so the request will reach the service port in the end.
- Don't forget to open port 443 in your AWS security group inbound rules.
At the start you will see an insecure page warning.

But click on the "Continue to 13.201.28.87 (unsafe)" link, and you will get the actual ArgoCD UI page.

You can cross-check in Kubernetes cluster as well.

Now it's time to access ArgoCD. You need a username and password:
The username is "admin," but the password can only be obtained by decoding a base64-encoded Kubernetes secret.
Run this command to retrieve the Argo CD admin password:
kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
jr20RBeyfzwvB1mW
Paste the password into the UI dashboard:
Username: admin
Password: jr20RBeyfzwvB1mW
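If you are curious what that command's pipeline does: Kubernetes stores secret values base64-encoded, and base64 -d simply reverses that encoding. A tiny self-contained demonstration (the sample string below is arbitrary, not your real password):

```shell
# Encode a sample string the way Kubernetes stores secret values,
# then decode it back with `base64 -d`.
echo -n 'hello' | base64          # prints: aGVsbG8=
echo -n 'aGVsbG8=' | base64 -d    # prints: hello
```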

This is the simple interface of the ArgoCD dashboard.
Step-7: Setup the application in ArgoCD
Click on the + NEW APP icon, then click the EDIT AS YAML icon at the top left, and paste the code below:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  source:
    path: Basic-Level/Project-2/Kubernetes
    repoURL: https://github.com/devrakaops/projectwala.git
    targetRevision: main
  sources: []
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      enabled: true
Save this and click on CREATE.
Your application is created, and within a few seconds it will sync with the YAML in the Kubernetes folder of your GitHub repository.

Just click on this.
And you will see the magic of GitOps.
All the Kubernetes YAML resources are shown in a tree structure; all are healthy and working properly.


Making changes either in the GitHub repository folder or directly here will impact the resources.
You can check at the Kubernetes cluster level using:
kubectl get all -n default

This means ArgoCD is working properly.
Now it's time to expose the WordPress deployment using port forwarding.
kubectl port-forward svc/wordpress 8080:80 --address=0.0.0.0 &
I have used 8080 for the mapping; it is mapped to port 80 inside and is accessible from everywhere.
Now you can access your WordPress application on port 8080 with the public IP of your AWS instance. As I told you earlier, it is not fixed that you can only expose the application on 8080. You can use any open port that is not currently in use, i.e. one that is free. So don't forget to open port 8080 in your AWS instance's security group inbound rules.

Fill in the WordPress details.

And now log in with your login credentials.

Set up the WordPress theme or customize your application.
Finally, you will see the finished application.


Step-8: Install the Kubernetes Dashboard for pod-level monitoring
Deploy the Kubernetes dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
To access the dashboard we need a token. So create a ServiceAccount named "admin-user" and bind it to the cluster-admin ClusterRole, so we can access the kubernetes-dashboard using a token issued for that ServiceAccount.
# vim dashboard_admin.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Set up the port mapping for kubernetes-dashboard so you can access the dashboard. First, check the service for kubernetes-dashboard:
kubectl get svc -n kubernetes-dashboard
You will see that the kubernetes-dashboard service is a ClusterIP service, accessible only on port 443 from inside the cluster, not from outside.

So let's set up a port mapping to access kubernetes-dashboard via the service. Port 9090 is opened here to access kubernetes-dashboard and is mapped to port 443 inside.
kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 9090:443 --address=0.0.0.0 &
Access the dashboard on port 9090

You can see that the Kubernetes Dashboard is accessible only when we provide a token.
This is because Kubernetes follows a security-first approach by default.
Now, let’s understand this in easy language.
Kubernetes Dashboard is nothing but a pod running inside the Kubernetes cluster, right?
But Kubernetes does not allow any pod or user to access cluster information by default.
So if we try to open the dashboard without permission, Kubernetes will block the access.
That’s why we need to create a ServiceAccount.
A ServiceAccount tells Kubernetes:
“This dashboard pod is trusted, and it is allowed to view cluster resources.”
Using this ServiceAccount, Kubernetes generates a token.
When we log in to the dashboard using this token, Kubernetes verifies the permissions and then allows access.
So in this step, we are creating a Service Account (and assigning proper role) so the Kubernetes Dashboard can securely access the cluster resources.
kubectl create -f dashboard_admin.yml
If you check, you will see a service account named "admin-user":
kubectl get sa -n kubernetes-dashboard

Now create the token and paste it into the token field in the dashboard UI.
kubectl -n kubernetes-dashboard create token admin-user
You will get something like this

eyJhbGciOiJSUzI1NiIsImtpZCI6Ik8wRnE3NDRTcXVHUHdsMHB3eUw1bmRNX3lwUDZjNWlkYUJ4aThNM01KQkkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY5NTI4NzE5LCJpYXQiOjE3Njk1MjUxMTksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNThhYzI5ZDYtZGQxNi00YjA4LTg4ZGYtYTAxM2IyZmNkOWVjIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiYWE1M2MyYzctNmU4Ni00ZTY3LThkYmItYTI1MmE1ZTAzOTA0In19LCJuYmYiOjE3Njk1MjUxMTksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.HPJbXiKqjUBXQxYV47F2_kQ1fcgkHitAqRBdqmVQqn4tyDP2zQOAjPuqJWYDKu8t75KjjkmPu8bmOrznuEFJq3d42tyzPa62dSx8CuMpSa_GJ2h9SqwjpYJ46-PtdBy4kczEn4fidvMRLhI1wDYn8ne16if2YZqL--mYpAUR1LG2IEikidTDpFwZY5FkW8Am09SMIESV4u_JfrwxpTgJvB_9l1iXNCNXApujYnBEWEjcc479heqmzwdvQy6pBUq3sn5KgfenzbjhLzJZUI6nwIeKOOS3j_UUcyYsIrDxUwtkbzjQ0VLKZkMmcOVrV41tzdEIcIkRqbX6oLIlV-asQQ
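As an aside, this token is a JWT: three base64-style segments (header, payload, signature) joined by dots, where the payload carries the ServiceAccount identity that the dashboard verifies. A toy sketch of decoding a payload segment, with a made-up claim for illustration (real tokens use the URL-safe base64 alphabet without padding, so some tokens need extra handling):

```shell
# Build a toy token with a sample payload, then decode its middle
# segment -- the same idea as inspecting a real ServiceAccount token.
PAYLOAD='{"sub":"system:serviceaccount:kubernetes-dashboard:admin-user"}'
TOKEN="header.$(printf '%s' "$PAYLOAD" | base64 -w0).signature"
printf '%s' "$TOKEN" | cut -d. -f2 | base64 -d && echo
```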


You will get the complete resources, namespace-wise.

Change the namespace and you can access the related resources of that namespace.
There are many possibilities in this project:
We can add monitoring pods for metrics and log-based monitoring using Prometheus, Grafana, Loki, and exporters.
We can set up Kubernetes pod autoscaling and more.
Step-9: Set up Pod Autoscaler for load-based deployment management
To set up a Pod Autoscaler, we need the official YAML. See the link: HorizontalPodAutoscaler Walkthrough | Kubernetes
Create an HPA YAML and customize it. I have created the complete YAML for you.
So simply copy-paste it and create the resource.
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
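Under the hood, the HPA computes its desired replica count with the formula from the Kubernetes docs: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue), clamped to minReplicas and maxReplicas. A quick sketch in shell arithmetic with made-up numbers:

```shell
# Example: 2 replicas currently averaging 90% CPU, against our 50% target.
current=2; usage=90; target=50
# Integer ceiling of (current * usage / target)
desired=$(( (current * usage + target - 1) / target ))
echo "$desired"   # prints: 4 -- i.e. ceil(3.6), then clamped to min/maxReplicas
```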

If you look carefully, the HPA is waiting for metrics; it needs them to do its work.
Without the metrics-server, HPA won't work at all, so check whether metrics-server is running. If it is not, install it:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Just edit one thing
kubectl edit deployment metrics-server -n kube-system
Add the flag "--kubelet-insecure-tls" to the container's "args":
spec:
  containers:
  - args:
    - --kubelet-insecure-tls
    - --cert-dir=/tmp
    - --secure-port=10250
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --kubelet-use-node-status-port
    - --metric-resolution=15s

Now check the HPA status again:
kubectl get hpa
NAME         REFERENCE              TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/wordpress   cpu: 0%/50%   1         5         1          6m40s

So that was the project practical documentation! I hope you gained some real-world insights into DevOps tools and technologies and got hands-on experience with a real-world project!
Don't forget to share this with your colleagues, friends, and your group!
See you soon in the next amazing project!
Thank you,
Rakesh Kumar Jangid.





