<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Rakesh Kumar Jangid]]></title><description><![CDATA[Hi,
I’m Rakesh Kumar, an Author and DevOps Engineer. I simplify DevOps through practical projects, clear documentation, and real-world learning to help beginner]]></description><link>https://projectwala.site</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1768227999142/e328d143-4058-480f-a07d-5a697a7a649a.png</url><title>Rakesh Kumar Jangid</title><link>https://projectwala.site</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 01 May 2026 07:58:04 GMT</lastBuildDate><atom:link href="https://projectwala.site/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Kubernetes Deployment Strategies Explained: Recreate, RollingUpdate, Blue–Green & Canary]]></title><description><![CDATA[Hello Geeks,
This is Rakesh, and once again, I am here with another deep, honest, and real-world learning related to Kubernetes. Today’s topic is Kubernetes Deployment Strategy.
I want you to read this documentation slowly, like personal notes writte...]]></description><link>https://projectwala.site/kubernetes-deployment-strategies-explained-recreate-rollingupdate-bluegreen-and-canary</link><guid isPermaLink="true">https://projectwala.site/kubernetes-deployment-strategies-explained-recreate-rollingupdate-bluegreen-and-canary</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[deployment strategies]]></category><category><![CDATA[RollingUpdates]]></category><category><![CDATA[Recreate]]></category><category><![CDATA[Canary deployment]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Tue, 10 Feb 2026 07:24:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1770707943642/6de4de51-119b-4e8f-855f-7bd178c67048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Geeks,</p>
<p>This is <strong>Rakesh</strong>, and once again, I am here with another <strong>deep, honest, and real-world learning</strong> related to <strong>Kubernetes</strong>. Today’s topic is <strong>Kubernetes Deployment Strategy</strong>.</p>
<p>I want you to read this documentation <strong>slowly</strong>, like personal notes written in a notebook. This is not copied content, not interview-only material, and not something you just skim. This is written so that <strong>you feel how deployment actually happens inside a company</strong>, why certain decisions are taken, and how Kubernetes behaves behind the scenes.</p>
<p>My goal is simple: after reading this, you should be able to <strong>explain deployment strategies confidently to anyone</strong>, and also <strong>implement them practically</strong>.</p>
<hr />
<p>Let us first understand what we really mean by the term <strong>deployment strategy</strong>.</p>
<p>In real life, deployment strategy is not a Kubernetes keyword first. It is a <strong>business and engineering problem</strong>. Whenever a company is running an application that users are actively using, the company cannot just stop everything and start again. Users may be making payments, booking tickets, logging in, or consuming services. Even a few seconds of downtime can break user trust, create financial loss, or damage reputation.</p>
<p>So whenever a company says, “We want to upgrade our application from an old version to a new version,” the next question is never <em>how to deploy</em>, but always <em>how to deploy safely</em>.</p>
<p>That safe and planned way of upgrading an application from one version to another is called a <strong>deployment strategy</strong>.</p>
<p>In very simple language: Deployment strategy means <strong>the method we choose to replace the old running application with a new version, without breaking the system</strong>.</p>
<p>Kubernetes comes into the picture as the system that executes this plan for us.</p>
<hr />
<p>Now comes one of the most important truths that every Kubernetes learner must clearly understand.</p>
<p>As per <strong>official Kubernetes documentation</strong>, Kubernetes supports <strong>only two deployment strategies by default</strong>:</p>
<p>Recreate and RollingUpdate.</p>
<p>That’s it. Nothing more.</p>
<p>Many people believe that Blue-Green and Canary are also default Kubernetes deployment strategies, but that belief is <strong>incomplete and slightly incorrect</strong>. Blue-Green and Canary are <strong>deployment patterns</strong>, not native strategies. Kubernetes does not give you a direct field called <code>type: BlueGreen</code>. Instead, Kubernetes gives you building blocks like Deployments, Services, labels, selectors, and controllers. Using these building blocks, engineers design advanced patterns like Blue-Green and Canary.</p>
<p>These patterns exist because the default strategies have <strong>real limitations</strong>, especially in production environments.</p>
<p>So to truly understand Blue-Green, we must start from the beginning — from the simplest strategy.</p>
<hr />
<ol>
<li><h2 id="heading-let-us-start-with-the-recreate-deployment-strategy"><strong>Let us start with the Recreate Deployment Strategy.</strong></h2>
</li>
</ol>
<p>Recreate is the most basic and straightforward strategy. Its behavior is very simple. When a new version of the application needs to be deployed, Kubernetes first <strong>terminates all existing Pods</strong> of the application. Only after all old Pods are terminated does Kubernetes start creating new Pods with the updated version.</p>
<p>This means there is a clear gap between stopping the old application and starting the new one.</p>
<p>That gap is called <strong>downtime</strong>.</p>
<p>Downtime is not an accidental side effect here. Downtime is the <strong>natural and unavoidable result</strong> of how Recreate works.</p>
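<p>In manifest terms, choosing Recreate is a single field on the Deployment spec. A minimal fragment (the full manifest appears in the practical section below):</p>
<pre><code class="lang-yaml"># Fragment of a Deployment spec selecting the Recreate strategy
spec:
  strategy:
    type: Recreate   # terminate ALL old Pods first, only then create new ones
</code></pre>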
<p>Now let us imagine a real company scenario.</p>
<p>Suppose there is a payment-related company. The application is already running smoothly. The frontend is serving users, the backend is connected to the database, APIs are responding correctly, and users are happily making transactions. Behind the scenes, Kubernetes is managing Deployments, Services, Secrets, ConfigMaps, and storage. Everything is stable.</p>
<p>Now the manager comes to the tech team and says: “Can we upgrade the frontend UI to a newer version?”</p>
<p>If the DevOps team uses the Recreate strategy, Kubernetes will first stop all existing frontend Pods. During this time, the Service has no Pods to send traffic to. Users will see errors, blank pages, or timeouts. Only after the new Pods start and become ready will the application be available again.</p>
<p>So even if the downtime is small, downtime <strong>will definitely happen</strong>.</p>
<p>This is why the Recreate strategy is not trusted for production user-facing applications.</p>
<h2 id="heading-now-let-us-make-this-learning-real-practical-part-for-each-deployment-strategy">Now, let us make this learning REAL — Practical part for each deployment strategy</h2>
<p>Till now, we understood the <em>why</em> and <em>what</em>. Now we will move to the <em>how</em>. This section is written so that <strong>you can sit in front of a laptop and actually do it</strong>, not just imagine it.</p>
<p>I will explain the practical part <strong>slowly</strong> and honestly, without rushing, assuming you already know the basics of Kubernetes.</p>
<hr />
<h2 id="heading-practical-recreate-deployment-strategy">Practical: Recreate Deployment Strategy</h2>
<p>First, we intentionally use Recreate so that you <strong>personally observe why it is risky</strong>.</p>
<p>Create a simple Deployment using nginx. This will act as our frontend application.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># vim recreate_deployment.yml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">recreate-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">policy:</span> <span class="hljs-string">recreate</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">10</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">recreate</span> 
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span> 
        <span class="hljs-attr">app:</span> <span class="hljs-string">recreate</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:1.14.2</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<pre><code class="lang-bash">kubectl create -f recreate_deployment.yml -n strategy
</code></pre>
<p>Note: For safe practice, I am using a dedicated namespace so that all the resources for this exercise stay in one place.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770290857919/9d53e3d6-5450-4b94-87c3-a38e5433ef44.png" alt class="image--center mx-auto" /></p>
<p>Now create a NodePort Service for this Deployment, so users can reach the application publicly through the Service.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># vim recreate_Service.yml</span>

<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">recreate-service</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">NodePort</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">recreate</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-comment"># By default and for convenience, the `targetPort` is set to</span>
      <span class="hljs-comment"># the same value as the `port` field.</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
      <span class="hljs-comment"># Optional field</span>
      <span class="hljs-comment"># By default and for convenience, the Kubernetes control plane</span>
      <span class="hljs-comment"># will allocate a port from a range (default: 30000-32767)</span>
      <span class="hljs-attr">nodePort:</span> <span class="hljs-number">30007</span>
</code></pre>
<p>Now create the Service as well.</p>
<pre><code class="lang-bash">kubectl create -f recreate_Service.yml -n strategy
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770291205353/2ffef890-3695-47cf-90f9-cb965d6e2c61.png" alt class="image--center mx-auto" /></p>
<p>With both resources created, access the application through the NodePort, for example <code>http://&lt;node-ip&gt;:30007</code>.</p>
<p>At this stage, everything works normally.</p>
<p>Now we are going to update the existing Deployment's image to another version, for example <code>nginx:1.25.0</code>, and apply the change. You can edit a running Deployment using any of the following methods:</p>
<ol>
<li><p>Direct EDIT method <code>kubectl edit deployment</code></p>
</li>
<li><p>Using <code>kubectl patch</code></p>
</li>
<li><p>Using <code>kubectl set image</code></p>
</li>
<li><p>Edit the manifest file and re-apply the changes using <code>kubectl apply -f</code></p>
</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">set</span> <span class="hljs-string">image</span> <span class="hljs-string">deployment/recreate-deployment</span> <span class="hljs-string">nginx=nginx:1.25.0</span> <span class="hljs-string">-n</span> <span class="hljs-string">strategy</span>
</code></pre>
<p>You will observe that 100% of the Pods go down first, and new Pods spin up only after a few seconds. That gap is the downtime, and that is the problem with Recreate.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770291842119/dca1bb9e-8369-4e00-b1e1-a8c2bf6c830b.png" alt class="image--center mx-auto" /></p>
<p>Later, check again and you will see that all the new Pods are running. Notice that the Deployment now owns two ReplicaSets: one for the old version <code>nginx:1.14.2</code> (scaled down to zero) and one for the new version <code>nginx:1.25.0</code>. Kubernetes keeps the old ReplicaSet in the rollout history so you can roll back; this happens for every image update, regardless of the strategy.</p>
<pre><code class="lang-bash">kubectl get pods -n strategy
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770291878472/ff13558b-11fd-46a8-a706-6af72b97e17e.png" alt class="image--center mx-auto" /></p>
<p>Check ReplicaSet status.</p>
<pre><code class="lang-bash">kubectl get all -n strategy
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770292231511/fde20869-e9ce-4c26-b0ec-ee963c43e502.png" alt class="image--center mx-auto" /></p>
<p>See, the Deployment has two <code>ReplicaSet</code>s with different versions; only the new one has running Pods.</p>
<hr />
<h3 id="heading-conclusion-of-the-recreate-deployment-strategy">Conclusion of the Recreate Deployment Strategy</h3>
<p>So, the Recreate strategy still has its place. It is useful for proof-of-concept environments, internal testing, development setups, and batch-style workloads where downtime does not matter. But for real production systems, Recreate is almost never acceptable.</p>
<p><strong>This limitation of Recreate naturally led engineers to ask a better question:</strong></p>
<p>“Can we update the application without stopping everything at once?”</p>
<p>That question gave rise to the <strong>RollingUpdate strategy</strong>.</p>
<hr />
<ol start="2">
<li><h2 id="heading-rollingupdate-is-the-default-strategy-in-kubernetes-and-it-is-much-smarter-than-recreate">RollingUpdate is the default strategy in Kubernetes, and it is much smarter than Recreate.</h2>
</li>
</ol>
<p>Instead of stopping all Pods together, RollingUpdate replaces Pods <strong>gradually</strong>. Some old Pods continue running while new Pods are being created. This way, the application usually remains available during the update process. Keep in mind that a newly created Pod takes some time to become Ready before it can serve traffic. Read this for more: <a target="_blank" href="https://kubernetes.io/docs/tutorials/services/pods-and-endpoint-termination-flow/">Pods Termination Behavior</a>.</p>
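<p>One detail that decides how smooth a RollingUpdate feels is the readiness probe: the controller counts a new Pod as available only after its probe passes. The manifest below does not define one, so here is an illustrative fragment for the nginx container; the path and timing values are my assumptions, not part of the original setup:</p>
<pre><code class="lang-yaml">containers:
- name: nginx
  # ...image and ports as in the Deployment below...
  readinessProbe:          # RollingUpdate waits for this probe to pass
    httpGet:               # before terminating an old Pod
      path: /
      port: 80
    initialDelaySeconds: 3
    periodSeconds: 5
</code></pre>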
<pre><code class="lang-bash">vim rolling_update_deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-update-deployment
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: rolling-update
  strategy:
   <span class="hljs-built_in">type</span>: RollingUpdate
   rollingUpdate:
     maxUnavailable: 0           <span class="hljs-comment"># High Availability </span>
     maxSurge: 2                  <span class="hljs-comment"># Two extra pods will create in advance</span>
  template:
    metadata:
      labels:
        app: rolling-update
    spec:
      terminationGracePeriodSeconds: 10 <span class="hljs-comment"># extra long grace period</span>
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
</code></pre>
<p>And expose the Deployment with a NodePort Service.</p>
<pre><code class="lang-bash">vim rolling_update_service.yml

<span class="hljs-comment"># vim recreate_Service.yml</span>

apiVersion: v1
kind: Service
metadata:
  name: rolling-update-service
spec:
  <span class="hljs-built_in">type</span>: NodePort
  selector:
    app: rolling-update
  ports:
    - port: 80
      <span class="hljs-comment"># By default and for convenience, the `targetPort` is set to</span>
      <span class="hljs-comment"># the same value as the `port` field.</span>
      targetPort: 80
      <span class="hljs-comment"># Optional field</span>
      <span class="hljs-comment"># By default and for convenience, the Kubernetes control plane</span>
      <span class="hljs-comment"># will allocate a port from a range (default: 30000-32767)</span>
      nodePort: 30010
</code></pre>
<p>Now create resources using commands.</p>
<pre><code class="lang-bash">kubectl create -f rolling_update_deployment.yml -n strategy
kubectl create -f rolling_update_deployment.yml -n strategy
kubectl get all -n strategy
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770294714560/10014a1f-568e-4c0e-b674-4d0246ea9c85.png" alt class="image--center mx-auto" /></p>
<p>Now change the image version of the Deployment's Pods from the old version to the new one, and apply the change to the existing Deployment.</p>
<pre><code class="lang-bash">kubectl <span class="hljs-built_in">set</span> image deployment/rolling-update-deployment nginx=nginx:1.25.0 -n strategy
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770294908716/95d64d8b-35a2-498e-984d-84573870e36b.png" alt class="image--center mx-auto" /></p>
<p>At first glance, the RollingUpdate strategy seems ideal. It offers no downtime, a smooth transition, and a controlled rollout. With <code>maxSurge: 2</code>, two extra pods are created first, and then old pods are deleted until all pods are updated to the new version. What else do we need?</p>
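<p>To make the arithmetic concrete, here is how the 10-replica rollout above proceeds under these settings (a sketch of the controller's behaviour; the exact wave timing depends on readiness):</p>
<pre><code class="lang-yaml">strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # ready Pods never drop below replicas (10)
    maxSurge: 2         # total Pods never exceed replicas + 2 (12)
# Wave: create 2 new Pods (12 total) -> wait until they are Ready
#       -> terminate 2 old Pods (back to 10)
# Repeated until all 10 Pods run the new image (about 5 waves)
</code></pre>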
<p>Right?</p>
<p>While the RollingUpdate strategy ensures high availability, availability is not the only concern. The real problem is that, <strong>for a brief window, the application runs two versions simultaneously.</strong> Some users may be served by the older version while others hit the newer one. This can create compatibility issues between the backend and frontend, causing broken requests. For that reason, this approach is not always reliable enough for production-level work.</p>
<p>Running two versions at the same time, even for a few seconds, can be risky. For a business, such inconsistencies can cost money, time, and user trust.</p>
<p>Now let us go back to the real-world company example.</p>
<blockquote>
<p>Suppose the frontend version v1 is fully compatible with the backend APIs and database. Everything works fine. Now a new frontend version v2 is built. The UI is improved, performance is enhanced, but v2 expects some new API responses or slightly different behavior from the backend.</p>
<p>During a RollingUpdate, both frontend v1 and frontend v2 are running together. Some users hit v1 and everything works. Some users hit v2, and suddenly they see errors because the backend does not yet support those expectations.</p>
<p>This creates an inconsistent user experience.</p>
</blockquote>
<p>It is very important to understand that Kubernetes is not wrong here. Kubernetes is doing exactly what RollingUpdate promises. The real problem is that RollingUpdate <strong>allows mixed-version traffic</strong>.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>“This mixed state is the core limitation of RollingUpdate.”</strong></div>
</div>

<p>RollingUpdate cannot guarantee that all users will see only one version at a time.</p>
<p>And for some businesses, especially high-risk systems like payments, banking, authentication, or compliance-heavy applications, this risk is unacceptable.</p>
<p>That is why companies needed something even safer.</p>
<hr />
<ol start="3">
<li><h2 id="heading-this-is-where-the-blue-green-deployment-pattern-comes-into-the-picture">This is where the <strong>Blue-Green Deployment pattern</strong> comes into the picture.</h2>
</li>
</ol>
<p>Blue-Green deployment is not about replacing Pods. It is about <strong>switching traffic</strong>.</p>
<p>The idea is very simple but extremely powerful.</p>
<p>Instead of upgrading the running application directly, the company runs two environments side by side.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770360856284/cecbd6c4-82f5-46dd-9477-6c9683dc52cc.png" alt class="image--center mx-auto" /></p>
<p>The Blue environment represents the current live application that users are using.</p>
<p>The Green environment represents the new version of the application. It is fully deployed, fully tested, and fully ready — but hidden from users.</p>
<p>Only one environment receives traffic at a time.</p>
<p>When the company is confident that the Green version is stable, traffic is switched from Blue to Green in one clean step.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770361170074/7ddf5f40-7e2f-4dfb-8c6e-349efb1da5af.png" alt class="image--center mx-auto" /></p>
<p>No gradual replacement. No mixed versions. No confusion.</p>
<p>If something goes wrong, traffic can be switched back to Blue immediately.</p>
<p><strong>This is why companies love Blue-Green deployment.</strong></p>
<p>In real companies, this traffic switch is usually <strong>done using a Kubernetes Service</strong> or <strong>an Ingress controller</strong>. The Service selector is updated to point to the new version. The Pods themselves are not restarted or modified during the switch. Only traffic routing changes.</p>
<p>This makes rollback extremely fast and safe.</p>
<p>For this practical, we have to create two deployments and one service.</p>
<p>In this scenario, the Service forwards public client requests to whichever Deployment's Pods carry labels matching the Service's selector.</p>
<pre><code class="lang-yaml"><span class="hljs-string">vim</span> <span class="hljs-string">blue-green-deployment.yml</span>
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-comment"># -------------------------------</span>
<span class="hljs-comment"># BLUE DEPLOYMENT</span>
<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">blue-deploy</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">4</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">env:</span> <span class="hljs-string">blue</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">env:</span> <span class="hljs-string">blue</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:1.25</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>

<span class="hljs-meta">---</span>
<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-comment"># BLUE SERVICE</span>
<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">strategy-svc</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">env:</span> <span class="hljs-string">blue</span>
  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">ClusterIP</span>

<span class="hljs-comment">## This service will pick blue deployment pods,  because of selector env: blue</span>
<span class="hljs-meta">---</span>
<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-comment"># GREEN DEPLOYMENT</span>
<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">green-deploy</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">4</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">env:</span> <span class="hljs-string">green</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">env:</span> <span class="hljs-string">green</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:1.25</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">create</span> <span class="hljs-string">-f</span> <span class="hljs-string">blue-green-deployment.yml</span> <span class="hljs-string">-n</span> <span class="hljs-string">strategy</span>
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">get</span> <span class="hljs-string">pods</span>  <span class="hljs-string">--show-labels</span> <span class="hljs-string">-n</span> <span class="hljs-string">strategy</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770362100639/b0f9b8df-8997-4b48-8ff4-a1d13edf4d8b.png" alt class="image--center mx-auto" /></p>
<p>Both Deployments are ready, but only one serves public requests at a time: the one whose Pod labels match the Service selector.</p>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">get</span> <span class="hljs-string">svc</span> <span class="hljs-string">-n</span> <span class="hljs-string">strategy</span>
<span class="hljs-string">kubectl</span> <span class="hljs-string">describe</span> <span class="hljs-string">svc/strategy-svc</span> <span class="hljs-string">-n</span> <span class="hljs-string">strategy</span>
</code></pre>
<p>Deployment <code>blue-deploy</code> has Pod labels = <code>env: blue</code></p>
<p>Deployment <code>green-deploy</code> has Pod labels = <code>env: green</code></p>
<p>The Service <code>strategy-svc</code> currently has selector = <code>env: blue</code></p>
<p>Which means Deployment <code>blue-deploy</code> is currently serving the traffic, because its Pods' labels match the Service selector.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770362364062/9737d3a6-6716-470b-86bb-d5e7b44d34b4.png" alt class="image--center mx-auto" /></p>
<p>Now suppose the company decides to move to the new version. A single patch to the Service selector diverts traffic to the new Deployment.</p>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">patch</span> <span class="hljs-string">service</span> <span class="hljs-string">strategy-svc</span> <span class="hljs-string">\</span>
  <span class="hljs-string">-p</span> <span class="hljs-string">'{"spec":{"selector":{"env":"green"}}}'</span> <span class="hljs-string">-n</span> <span class="hljs-string">strategy</span>
</code></pre>
<p>This patch switches traffic from the <code>env: blue</code> Deployment's Pods to the newer <code>env: green</code> Deployment's Pods.</p>
<p>Check the Service description again and look at the selector. It has now changed to <code>env: green</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">describe</span> <span class="hljs-string">svc/strategy-svc</span> <span class="hljs-string">-n</span> <span class="hljs-string">strategy</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770362767659/411df339-dd7f-4126-9f59-627804e92bc9.png" alt class="image--center mx-auto" /></p>
<p>Although both Deployments are still running in the background, the Deployment with labels <code>env: green</code> is now the one serving public traffic.</p>
<p>If the company decides to go back to the previous version, it only needs to change the Service selector back.</p>
<pre><code class="lang-yaml"> <span class="hljs-string">kubectl</span> <span class="hljs-string">patch</span> <span class="hljs-string">service</span> <span class="hljs-string">strategy-svc</span> <span class="hljs-string">\</span>
  <span class="hljs-string">-p</span> <span class="hljs-string">'{"spec":{"selector":{"env":"blue"}}}'</span> <span class="hljs-string">-n</span> <span class="hljs-string">strategy</span>
</code></pre>
<p>However, Blue-Green is not magic. It does not fix bad database design or incompatible APIs. Companies still need backwards-compatible database migrations and well-designed APIs. Blue-Green simply ensures <strong>that users are never exposed to two application versions at the same time.</strong></p>
<hr />
<p>Now let us come to the Canary deployment strategy and see what is special about it.</p>
<ol start="4">
<li><h2 id="heading-canary-deployment-strategy">Canary Deployment Strategy</h2>
</li>
</ol>
<p>Let us first come back to the real-world problem.</p>
<p>A company is running an application that users are actively using. Everything is stable. Payments are working, logins are fine, APIs are responding, dashboards look green. Now the company wants to release a <strong>new version</strong>.</p>
<p>At this point, the company already knows about RollingUpdate and Blue-Green.</p>
<p>RollingUpdate is good, but it allows <strong>mixed versions</strong> to serve traffic at the same time.</p>
<p>Blue-Green is safer, but it switches <strong>100% of users</strong> to the new version at once.</p>
<p><strong>Now imagine this situation.</strong></p>
<p>The company is not fully confident about the new version, because in this release:</p>
<ul>
<li><p>A new feature is introduced</p>
</li>
<li><p>A new UI flow is added</p>
</li>
<li><p>A new algorithm is used</p>
</li>
<li><p>A performance optimization is done</p>
</li>
</ul>
<p>The code has passed testing, but <strong>still the team is not comfortable</strong> exposing <strong>all users</strong> to it <strong>at once</strong>.</p>
<p>So the real question becomes:</p>
<p>“What if we expose the new version to <strong>only a small set of users</strong>, observe it in production, and then decide?”</p>
<p>That thinking is the birth of <strong>Canary Deployment</strong>.</p>
<hr />
<p><strong>The term <em>Canary</em> comes from an old practice in coal mines.</strong></p>
<p>Miners used to carry a canary bird with them. If the air was toxic, the canary would show signs first, warning the miners before humans were affected.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770623902550/72e6c726-40bc-4ea4-8690-bc7296c2eaac.png" alt="Canary Deployment strategy" class="image--center mx-auto" /></p>
<p>In software terms, the new version of the application is the <strong>canary</strong>.</p>
<p>It is exposed to danger first, while the majority of users stay safe on the stable version.</p>
<hr />
<p>Now let us define Canary deployment in very simple, honest language.</p>
<p>Canary deployment means:</p>
<p>Releasing a new version of the application to <strong>a small percentage of users</strong>, monitoring its behaviour in real production conditions, and gradually increasing traffic only if everything looks healthy.</p>
<p><strong>Unlike Blue-Green, Canary is not about instant switching.</strong></p>
<p><strong>Unlike RollingUpdate, Canary is not about replacing Pods.</strong></p>
<p>Canary is about <strong>risk control</strong>.</p>
<hr />
<p>Now it is very important to understand one thing clearly.</p>
<p>Just like RollingUpdate, Canary is <strong>not a default Kubernetes deployment strategy</strong>.</p>
<p>Kubernetes does not have a <code>type: Canary</code> field.</p>
<p>Canary is a <strong>deployment pattern</strong> built using Kubernetes primitives such as:</p>
<ul>
<li><p>Deployments</p>
</li>
<li><p>Labels</p>
</li>
<li><p>Services</p>
</li>
<li><p>Ingress or traffic controllers</p>
</li>
</ul>
<p>Kubernetes gives you the tools; Canary gives you the idea.</p>
<p>Canary deployment is usually done gradually.</p>
<p>First, only a very small percentage of traffic goes to the new version.</p>
<p>So engineers observe the application and environment calmly:</p>
<ul>
<li><p>Error rates</p>
</li>
<li><p>Latency</p>
</li>
<li><p>Service logs</p>
</li>
<li><p>User complaints</p>
</li>
</ul>
<p>If everything looks good, traffic is increased slowly.</p>
<p>If anything looks wrong, Canary is stopped immediately, and users continue using the stable version.</p>
<p>This gives teams <strong>confidence with control</strong>.</p>
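<p>One practical note: with the replica-based approach shown later in this article, the traffic ratio is only as fine-grained as your Pod counts allow. For precise percentage splits, teams usually rely on an ingress controller. A minimal sketch, assuming the NGINX Ingress Controller (the canary annotations are specific to that controller, and the host and Service names here are hypothetical):</p>
<pre><code class="lang-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chatbot-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # send ~10% of traffic to the canary
spec:
  rules:
  - host: chatbot.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary-svc   # hypothetical Service selecting only the canary Pods
            port:
              number: 80
</code></pre>
<p>Raising the weight gradually promotes the canary without touching replica counts at all.</p>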
<hr />
<p>It is also important to understand that Canary deployment is <strong>more complex</strong> than RollingUpdate and Blue-Green.</p>
<p>It requires, or rather depends on:</p>
<ul>
<li><p>Better monitoring</p>
</li>
<li><p>Clear rollback strategy</p>
</li>
<li><p>Discipline in release management</p>
</li>
</ul>
<p>That is why small teams may avoid Canary initially, while mature teams adopt it as they scale.</p>
<p>Companies choose Canary when they feel in control and confident in the application. If you truly understand Canary deployment at this theoretical level, you are already thinking like a production engineer, not just a Kubernetes learner.</p>
<hr />
<h2 id="heading-now-its-time-for-canary-deployment-practical-learning">Now it’s time for Canary Deployment practical learning.</h2>
<p>The company is running a website application named “chatbot”. For it, a Deployment is running in production with this specification YAML:</p>
<pre><code class="lang-yaml"><span class="hljs-string">vim</span> <span class="hljs-string">pre-deployment.yml</span>
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">pre-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">4</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">env:</span> <span class="hljs-string">prod</span>
      <span class="hljs-attr">version:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">env:</span> <span class="hljs-string">prod</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">v1</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:1.14.2</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
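<p>Save the file and create the Deployment, then verify the rollout finished:</p>
<pre><code class="lang-yaml">kubectl create -f pre-deployment.yml
kubectl rollout status deployment/pre-deployment
kubectl get pods -l env=prod,version=v1
</code></pre>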
<p>In this Deployment we have 4 replicas using the nginx image.</p>
<p>The labels, as you can see, are:</p>
<p>env: prod</p>
<p>version: v1</p>
<p>The Deployment is exposed using a Service (a NodePort in our practice).</p>
<p>100% of service requests go to the Deployment's Pods, which carry both matching labels.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">pre-svc</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">NodePort</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">env:</span> <span class="hljs-string">prod</span>
    <span class="hljs-attr">version:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">nodePort:</span> <span class="hljs-number">31745</span>
</code></pre>
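<p>Save this manifest (assumed here as pre-svc.yml), create it, and confirm the Service has picked up the four v1 Pods as endpoints:</p>
<pre><code class="lang-yaml">kubectl create -f pre-svc.yml
kubectl get endpoints pre-svc
</code></pre>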
<p>Now, as a client, send a request to the application through the Service:</p>
<pre><code class="lang-yaml"><span class="hljs-string">curl</span> <span class="hljs-string">http://node-public-ip:31745</span>
</code></pre>
<p>You will receive a response from one of the 4 nginx replica Pods of the Deployment.</p>
<pre><code class="lang-yaml"><span class="hljs-string">&lt;!DOCTYPE</span> <span class="hljs-string">html&gt;</span>
<span class="hljs-string">&lt;html&gt;</span>
<span class="hljs-string">&lt;head&gt;</span>
<span class="hljs-string">&lt;title&gt;Welcome</span> <span class="hljs-string">to</span> <span class="hljs-string">nginx!&lt;/title&gt;</span>
<span class="hljs-string">&lt;style&gt;</span>
    <span class="hljs-string">body</span> {
        <span class="hljs-attr">width:</span> <span class="hljs-string">35em;</span>
        <span class="hljs-attr">margin:</span> <span class="hljs-number">0</span> <span class="hljs-string">auto;</span>
        <span class="hljs-attr">font-family:</span> <span class="hljs-string">Tahoma</span>, <span class="hljs-string">Verdana</span>, <span class="hljs-string">Arial</span>, <span class="hljs-string">sans-serif;</span>
    }
<span class="hljs-string">&lt;/style&gt;</span>
<span class="hljs-string">&lt;/head&gt;</span>
<span class="hljs-string">&lt;body&gt;</span>
<span class="hljs-string">&lt;h1&gt;Welcome</span> <span class="hljs-string">to</span> <span class="hljs-string">nginx!&lt;/h1&gt;</span>
<span class="hljs-string">&lt;p&gt;If</span> <span class="hljs-string">you</span> <span class="hljs-string">see</span> <span class="hljs-string">this</span> <span class="hljs-string">page,</span> <span class="hljs-string">the</span> <span class="hljs-string">nginx</span> <span class="hljs-string">web</span> <span class="hljs-string">server</span> <span class="hljs-string">is</span> <span class="hljs-string">successfully</span> <span class="hljs-string">installed</span> <span class="hljs-string">and</span>
<span class="hljs-string">working.</span> <span class="hljs-string">Further</span> <span class="hljs-string">configuration</span> <span class="hljs-string">is</span> <span class="hljs-string">required.&lt;/p&gt;</span>

<span class="hljs-string">&lt;p&gt;For</span> <span class="hljs-string">online</span> <span class="hljs-string">documentation</span> <span class="hljs-string">and</span> <span class="hljs-string">support</span> <span class="hljs-string">please</span> <span class="hljs-string">refer</span> <span class="hljs-string">to</span>
<span class="hljs-string">&lt;a</span> <span class="hljs-string">href="http://nginx.org/"&gt;nginx.org&lt;/a&gt;.&lt;br/&gt;</span>
<span class="hljs-string">Commercial</span> <span class="hljs-string">support</span> <span class="hljs-string">is</span> <span class="hljs-string">available</span> <span class="hljs-string">at</span>
<span class="hljs-string">&lt;a</span> <span class="hljs-string">href="http://nginx.com/"&gt;nginx.com&lt;/a&gt;.&lt;/p&gt;</span>

<span class="hljs-string">&lt;p&gt;&lt;em&gt;Thank</span> <span class="hljs-string">you</span> <span class="hljs-string">for</span> <span class="hljs-string">using</span> <span class="hljs-string">nginx.&lt;/em&gt;&lt;/p&gt;</span>
<span class="hljs-string">&lt;/body&gt;</span>
<span class="hljs-string">&lt;/html&gt;</span>
</code></pre>
<p>The flow is illustrated in the image below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770628147106/1c66db84-5afe-4870-8089-f72d8ba350f8.png" alt class="image--center mx-auto" /></p>
<p>Now Canary comes into the game.</p>
<p>Create one more Deployment YAML:</p>
<pre><code class="lang-yaml"><span class="hljs-string">vim</span> <span class="hljs-string">canary-deployment.yml</span>
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">canary-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">httpd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">env:</span> <span class="hljs-string">prod</span>
      <span class="hljs-attr">version:</span> <span class="hljs-string">v2</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">env:</span> <span class="hljs-string">prod</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">v2</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">httpd</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">httpd</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">create</span> <span class="hljs-string">-f</span> <span class="hljs-string">canary-deployment.yml</span>
</code></pre>
<p>This is a new Deployment that has 2 replicas, the two labels <code>env: prod</code> &amp; <code>version: v2</code>, and the httpd image.</p>
<p>Now we will see the magic of canary strategy.</p>
<p>Instead of switching traffic over fully, we send only a small share of it to the new version.</p>
<p>For this, edit the existing Service and remove the <code>version: v1</code> label from its selector, leaving only <code>env: prod</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-string">kubectl</span> <span class="hljs-string">edit</span> <span class="hljs-string">svc/pre-svc</span>
</code></pre>
<pre><code class="lang-yaml">  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">nodePort:</span> <span class="hljs-number">31745</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
    <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">env:</span> <span class="hljs-string">prod</span>
</code></pre>
<p>This time the Service sends client requests to every Deployment Pod that carries the label</p>
<p>env: prod.</p>
<p>And we know both Deployments have the label <code>env: prod</code>.</p>
<p>The older Deployment <code>pre-deployment</code> has 4 replicas.</p>
<p>The new Deployment <code>canary-deployment</code> has 2 replicas.</p>
<p>So in total there are 6 Pod replicas receiving client requests via the Service: roughly two-thirds on v1 and one-third on v2.</p>
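<p>You can confirm this by listing the Service endpoints; six Pod IPs should appear:</p>
<pre><code class="lang-yaml">kubectl get endpoints pre-svc
</code></pre>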
<pre><code class="lang-yaml">kubectl get deployment --show-labels
NAME                READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
pre-deployment      4/4     4            4           53m   app=nginx
canary-deployment   2/2     2            2           46m   app=httpd
</code></pre>
<pre><code class="lang-yaml">kubectl get pods --show-labels
NAME                                 READY   STATUS    RESTARTS   AGE   LABELS
pre-deployment-688c4f4d54-ftdt8      1/1     Running   0          53m   env=prod,pod-template-hash=688c4f4d54,version=v1
pre-deployment-688c4f4d54-lzfnz      1/1     Running   0          53m   env=prod,pod-template-hash=688c4f4d54,version=v1
pre-deployment-688c4f4d54-rz4dh      1/1     Running   0          53m   env=prod,pod-template-hash=688c4f4d54,version=v1
pre-deployment-688c4f4d54-zsgjn      1/1     Running   0          53m   env=prod,pod-template-hash=688c4f4d54,version=v1
canary-deployment-5474844f99-66dc5   1/1     Running   0          46m   env=prod,pod-template-hash=5474844f99,version=v2
canary-deployment-5474844f99-qhsb6   1/1     Running   0          46m   env=prod,pod-template-hash=5474844f99,version=v2
</code></pre>
<pre><code class="lang-yaml">curl http://node-public-ip:31745
</code></pre>
<p>Refresh it again and again.<br />Sometimes the nginx image responds and sometimes the httpd image.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770631875946/d9cb4175-a5c1-4dba-ae2e-bb69f3a5c09c.png" alt class="image--center mx-auto" /></p>
<p>Refresh two or three times if needed; because the older nginx Deployment has 4 replicas compared to the newer Deployment's 2, most responses will come from nginx.</p>
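<p>Instead of refreshing by hand, a small loop can approximate the traffic split. A sketch, assuming the node IP and port from above, and that the default httpd page contains "It works!":</p>
<pre><code class="lang-yaml">for i in $(seq 1 30); do
  curl -s http://node-public-ip:31745 | grep -m1 -o -e "nginx" -e "It works"
done | sort | uniq -c
</code></pre>
<p>With 4 nginx replicas and 2 httpd replicas, roughly two-thirds of the responses should come from nginx.</p>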
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770631809167/1da42375-ef1d-4359-9076-459db24e0a11.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1770631338219/e621f6ab-6ea2-4ade-8019-a9a37fa32644.png" alt class="image--center mx-auto" /></p>
<p>This means some clients are already using the newer version of the application at the same time, but only a small percentage of them.</p>
<p>If the company finds the newer version healthy, it can increase the canary Deployment's replicas, so its share of client requests grows; later, the older Deployment's Pods can be removed entirely.</p>
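<p>Promoting or abandoning the canary is then just a matter of scaling. A sketch of both directions, using the Deployment names from this exercise:</p>
<pre><code class="lang-yaml"># Promote: give v2 the full share, then retire v1
kubectl scale deployment canary-deployment --replicas=4
kubectl scale deployment pre-deployment --replicas=0

# Abandon: something looks wrong, remove the canary instantly
kubectl scale deployment canary-deployment --replicas=0
</code></pre>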
<p>Thank you</p>
<p>~Rakesh Kumar Jangid</p>
<hr />
<p>So, how was your experience with this blog post? If you want to read more about Kubernetes and DevOps, we have awesome posts and projects on this website.</p>
<p>Practice Kubernetes GitOps project: <a target="_blank" href="https://projectwala.site/real-world-production-ready-gitops-project-for-devops-practitioners">https://projectwala.site/real-world-production-ready-gitops-project-for-devops-practitioners</a></p>
<p>Read other Kubernetes blogs: <a target="_blank" href="https://projectwala.site/series/kubernetes">Kubernetes-Series</a></p>
<p>Read Linux blogs: <a target="_blank" href="https://projectwala.site/series/linux-series">https://projectwala.site/series/linux-series</a></p>
<p>Read Ansible blogs: <a target="_blank" href="https://projectwala.site/series/ansible-playbook">Ansible-Series</a></p>
<p>Learn from DevOps Practice: <a target="_blank" href="https://projectwala.site/series/devops">DevOps</a></p>
<p>Read Docker Container: <a target="_blank" href="https://projectwala.site/series/docker">Docker-Series</a></p>
]]></content:encoded></item><item><title><![CDATA[Real-World, Production-Ready GitOps Project for DevOps Practitioners]]></title><description><![CDATA[Hello, this is Rakesh Kumar — your DevOps project practice trainer and your friend.I’m back again with a brilliant DevOps project that is not only production-ready but also highly valuable for real-world learning.
This project is designed to take you...]]></description><link>https://projectwala.site/real-world-production-ready-gitops-project-for-devops-practitioners</link><guid isPermaLink="true">https://projectwala.site/real-world-production-ready-gitops-project-for-devops-practitioners</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[cicd]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><category><![CDATA[Linux]]></category><category><![CDATA[projects]]></category><category><![CDATA[jobs]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Tue, 27 Jan 2026 15:35:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769612553760/0041f9d9-fad0-4656-9761-31b7c8e9e6fe.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, this is <strong>Rakesh Kumar</strong> — your DevOps project practice trainer and your friend.<br />I’m back again with a <strong>brilliant DevOps project</strong> that is not only <strong>production-ready</strong> but also <strong>highly valuable for real-world learning</strong>.</p>
<p>This project is designed to take you <strong>deep inside the core of DevOps practices</strong>. We will not just run commands — we will understand <strong>why things work the way they do in real production environments</strong>.<br />By the end of this project, you won’t be the same person. You’ll gain <strong>real-world, production-level insights</strong> that most beginners miss.</p>
<hr />
<h2 id="heading-tech-stack-used">Tech Stack Used</h2>
<p>In this project, we will work with the following tools and technologies:</p>
<ul>
<li><p><strong>Linux</strong></p>
</li>
<li><p><strong>LAMP Server (WordPress)</strong></p>
</li>
<li><p><strong>Docker + Kind</strong> (to create a mini Kubernetes practice cluster)</p>
</li>
<li><p><strong>Kubernetes</strong> (for microservices-based application deployment)</p>
</li>
<li><p><strong>Kubernetes Dashboard</strong> (for pod-level monitoring)</p>
</li>
<li><p><strong>HPA</strong> (for Pod auto-scaling based on metrics data)</p>
</li>
<li><p><strong>ArgoCD</strong> (GitOps tool to tightly sync Git &amp; GitHub for continuous deployment)</p>
</li>
<li><p><strong>Helm</strong> (your special package manager for Kubernetes)</p>
</li>
<li><p><strong>AWS EC2 (for Project Infra, t2.large instance)</strong></p>
</li>
<li><p><strong>Git &amp; GitHub</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-core-skills-you-should-have">Core Skills You Should Have</h2>
<p>Before starting this project, you should be comfortable with:</p>
<ul>
<li><p>Basic <strong>Linux</strong> commands</p>
</li>
<li><p><strong>Git &amp; GitHub</strong> (basic usage)</p>
</li>
<li><p><strong>Docker and Kubernetes</strong> fundamentals</p>
</li>
<li><p><strong>AWS EC2 instance</strong> creation</p>
</li>
<li><p>Basic <strong>Linux web server</strong> knowledge</p>
</li>
</ul>
<p>Don’t worry — you don’t need to be an expert.<br />If your basics are clear, this project will <strong>sharpen your mindset and confidence</strong>.</p>
<hr />
<h3 id="heading-lets-get-started">Let’s Get Started</h3>
<p>I am using AWS cloud for the project infrastructure. So we will use<br />OS: Ubuntu 22.04<br />Instance type: t2.large (because we have a lot of work to do here, we need more power)<br />Storage volume: 24 GB minimum</p>
<h2 id="heading-step-1-create-aws-t2large-instance"><strong>Step-1: Create AWS T2.Large Instance</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769517801215/905f659e-24b1-4d06-83c0-fdef761f99d2.png" alt class="image--center mx-auto" /></p>
<p>We will use the AWS console directly to access the terminal.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769517869175/97cb1a4f-adac-4f31-a579-afe635158395.png" alt class="image--center mx-auto" /></p>
<p>Install some required packages and tools:</p>
<pre><code class="lang-plaintext">apt-get update
apt install -y vim git docker.io 
systemctl enable --now docker
</code></pre>
<hr />
<h2 id="heading-step-2-now-we-have-to-install-kind-so-we-can-create-k8s-mini-cluster-so-for-this-we-have-to-create-some-script-and-then-run-the-script-after-give-the-execute-permission"><strong>Step-2: Install KIND so we can create a mini Kubernetes cluster. For this, create a small script, give it execute permission, and run it.</strong></h2>
<pre><code class="lang-plaintext"># vim install_kind.sh

#!/bin/bash
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] &amp;&amp; curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo cp ./kind /usr/local/bin/kind
rm -rf kind
</code></pre>
<pre><code class="lang-plaintext">chmod +x install_kind.sh
bash install_kind.sh
</code></pre>
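<p>Confirm that the KIND binary installed correctly:</p>
<pre><code class="lang-plaintext">kind version
</code></pre>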
<p>On Ubuntu, install Docker (skip this if you already installed it in Step-1):</p>
<pre><code class="lang-plaintext">apt-get update
apt install docker.io
systemctl enable --now docker
systemctl status docker
docker ps -a
</code></pre>
<p>On a Fedora-based system, install Docker:</p>
<pre><code class="lang-plaintext">sudo dnf remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
</code></pre>
<pre><code class="lang-plaintext">sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
</code></pre>
<pre><code class="lang-plaintext">sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
</code></pre>
<pre><code class="lang-plaintext">sudo systemctl enable --now docker
sudo docker ps -a
</code></pre>
<p>Docker is now installed successfully. However, to access the Kubernetes cluster, you need a command-line tool: "kubectl."</p>
<hr />
<h2 id="heading-step-3-install-kubectl-command-in-kind-cluster"><strong>Step-3: Install the kubectl command-line tool on the host.</strong></h2>
<pre><code class="lang-plaintext"># vim install_kubectl.sh
#!/bin/bash

# Variables
VERSION="v1.30.0"
URL="https://dl.k8s.io/release/${VERSION}/bin/linux/amd64/kubectl"
INSTALL_DIR="/usr/local/bin"

# Download and install kubectl
curl -LO "$URL"
chmod +x kubectl
sudo mv kubectl $INSTALL_DIR/
kubectl version --client

echo "kubectl installation complete."
</code></pre>
<pre><code class="lang-plaintext">chmod +x install_kubectl.sh
bash install_kubectl.sh
</code></pre>
<hr />
<h2 id="heading-step-4-install-the-kind-cluster-so-we-have-to-create-a-kind-config-file-that-specify-the-core-specification-details-regarding-your-kind-cluster-nodes"><strong>Step-4: Create the KIND cluster.</strong> For this, we create a KIND config file that specifies the core details of your KIND cluster nodes.</h2>
<pre><code class="lang-plaintext"># vim config.yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4

nodes:
- role: control-plane
  image: kindest/node:v1.30.0
- role: worker
  image: kindest/node:v1.30.0
- role: worker
  image: kindest/node:v1.30.0
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">This project uses the Kubernetes v1.30.0 node image for KIND, because at the time of writing this is a recent and stable release for application deployment practice.</div>
</div>

<p>Create the KIND cluster, specifying the config.yml file for reference:</p>
<pre><code class="lang-plaintext">kind create cluster --config=config.yml
</code></pre>
<p>and you will get a screen like this.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769519180355/3b6fa285-d881-45b6-bd5a-3101d3705bea.png" alt class="image--center mx-auto" /></p>
<p>Let us check whether your KIND Kubernetes cluster is working:</p>
<pre><code class="lang-plaintext">kubectl get no -o wide
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769519350514/68e5f494-ed2f-49b0-86b0-c92100604326.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-5-the-github-repository-server"><strong>Step-5: The GitHUB Repository Server</strong></h2>
<p>Your application will be deployed on this KIND cluster. But what about the project application files? As you know, we are working on a LAMP WordPress application, and we have a dedicated GitHub repository for it, where you will find all the required files and folders. Because this is a GitOps project, GitHub is the core artifact server.</p>
<p>Go to this link: <a target="_blank" href="https://github.com/devrakaops/projectwala">devrakaops/projectwala</a></p>
<p>This is the main repository where some of my core DevOps learning projects are kept; this project is one of them.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769519846192/4e27d099-edc5-414e-801f-026aba32002c.png" alt class="image--center mx-auto" /></p>
<p>The files and folders related to this project are kept in this repository.</p>
<p>Go to this link: <a target="_blank" href="https://github.com/devrakaops/projectwala/tree/main/Basic-Level/Project-2">projectwala/Basic-Level/Project-2 at main · devrakaops/projectwala</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769519936430/9f03b4b6-f9f9-4b71-83fb-7a51b802dc61.png" alt class="image--center mx-auto" /></p>
<p>The Kubernetes folder has all the YAML for our project, which we will use.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769520011702/6d1cde76-031f-422a-9c2d-dc2ec7797efc.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-6-installing-argo-cd"><strong>Step-6. Installing Argo CD</strong></h2>
<p>We have multiple options to install ArgoCD</p>
<ol>
<li><p><strong>Install ArgoCD using Helm</strong></p>
<p> To install ArgoCD using Helm, you first have to install the Helm package manager, and then install Argo CD through its dedicated Helm chart. In this project, however, we will use the direct command-line method below.</p>
</li>
<li><p><strong>Install ArgoCD using direct commands</strong></p>
<p> To install ArgoCD using the direct command-line method, follow these commands.</p>
<p> Create a namespace for Argo CD:</p>
<pre><code class="lang-plaintext"> kubectl create namespace argocd
</code></pre>
<p> Apply the Argo CD manifest:</p>
<pre><code class="lang-plaintext"> kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p> Check services in Argo CD namespace:</p>
<pre><code class="lang-plaintext"> kubectl get svc -n argocd
</code></pre>
<p> Expose Argo CD server using NodePort:</p>
<pre><code class="lang-plaintext"> kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
</code></pre>
<p> Forward ports to access Argo CD server:</p>
<pre><code class="lang-plaintext"> kubectl port-forward -n argocd service/argocd-server 443:443 --address 0.0.0.0 &amp;
</code></pre>
</li>
</ol>
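<p>Before opening the UI, make sure all Argo CD Pods are ready; a convenient check:</p>
<pre><code class="lang-plaintext">kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s
kubectl get pods -n argocd
</code></pre>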
<p>Access the Argo CD UI using the public IP of your AWS instance on port 443. You are not fixed to port 443 only; you can use any free local port, because with port-forwarding the request ends up at the Service port anyway.</p>
<ul>
<li>Don’t forget to open port 443 in your AWS security group inbound rules.</li>
</ul>
<p>At first you will see an insecure-page warning.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769521174651/392f30fa-be55-40f1-be33-3bc46b0c2a56.png" alt class="image--center mx-auto" /></p>
<p>Click on the “Continue to 13.201.28.87 (unsafe)” link, and you will get the actual ArgoCD UI page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769521255427/5dffbbb6-5104-43ee-9608-6969efc88fbb.png" alt class="image--center mx-auto" /></p>
<p>You can cross-check in Kubernetes cluster as well.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769523657069/e35ee2a3-5d4c-4d2d-aba2-5fce106558d4.png" alt class="image--center mx-auto" /></p>
<p>Now it’s time to log in to Argo CD, for which you need a username and password.</p>
<p>The username is "admin", but the password is stored in a Kubernetes secret and has to be base64-decoded before you can use it.</p>
<p>Run this command to retrieve the Argo CD admin password:</p>
<pre><code class="lang-plaintext">kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d &amp;&amp; echo
</code></pre>
<pre><code class="lang-plaintext">jr20RBeyfzwvB1mW
</code></pre>
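<p>Under the hood, this works because Kubernetes stores Secret values base64-encoded, and <code>base64 -d</code> simply reverses that encoding. Here is a standalone illustration with a made-up password (<code>s3cretPass</code> is just an example, not the real secret):</p>
<pre><code class="lang-plaintext"># Encode a sample password the way Kubernetes stores Secret data
echo -n 's3cretPass' | base64
# czNjcmV0UGFzcw==

# Decode it back - the same thing the jsonpath | base64 -d pipeline does
echo -n 'czNjcmV0UGFzcw==' | base64 -d &amp;&amp; echo
# s3cretPass
</code></pre>
<p>The <code>-n</code> flag matters here: without it, <code>echo</code> appends a newline that would become part of the encoded value.</p>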
<p>Paste the password to the UI dashboard<br /><strong>Username:</strong> admin<br /><strong>Password:</strong> jr20RBeyfzwvB1mW</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769521615107/dbd74630-72f3-4591-b55e-4e2baca8bd75.png" alt class="image--center mx-auto" /></p>
<p>This is the simple interface of the Argo CD dashboard.</p>
<hr />
<h2 id="heading-step-7-setup-the-application-in-argocd"><strong>Step-7: Setup the application in ArgoCD</strong></h2>
<p>Click on the <code>+New APP</code> icon, then click the <code>EDIT AS YAML</code> icon at the top left, and paste the code below:</p>
<pre><code class="lang-plaintext">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  source:
    path: Basic-Level/Project-2/Kubernetes
    repoURL: https://github.com/devrakaops/projectwala.git
    targetRevision: main
  sources: []
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      enabled: true
</code></pre>
<p>Save this and click on <code>CREATE</code>.</p>
<p>Your application is now running, and within a few seconds it will sync with the application YAML in the Kubernetes folder of your GitHub repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769522590789/34ab07c8-d1ea-418d-bf5a-a42fc16a7602.png" alt class="image--center mx-auto" /></p>
<p>Just click on it.<br />And you will see the magic of GitOps.</p>
<p>All the Kubernetes YAML resources are shown in a tree structure; all are healthy and working properly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769522644937/a0c1d912-35c2-482a-84de-25fedf97cc00.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you want to see a resource’s specification, just click on that resource’s icon in the GUI.</div>
</div>

<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769522822668/b29496e8-5e8f-484e-af00-bdced5c0a329.png" alt class="image--center mx-auto" /></p>
<p>Making a change either in the GitHub repository folder or directly here will affect the resource.</p>
<p>If you check on the Kubernetes cluster using:</p>
<pre><code class="lang-plaintext">kubectl get all -n default
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769522971441/63c64738-1ee9-434e-96bc-8c97285eee03.png" alt class="image--center mx-auto" /></p>
<p>You will see the same resources there, which means Argo CD is working properly.</p>
<p>Now it’s time to expose the WordPress deployment using port forwarding.</p>
<pre><code class="lang-plaintext">kubectl port-forward svc/wordpress 8080:80 --address=0.0.0.0 &amp;
</code></pre>
<p>I have used port 8080 for the mapping; it maps to port 80 inside the service and, with <code>--address=0.0.0.0</code>, is accessible from anywhere.</p>
<p>Now you can access your WordPress application on port 8080 of your AWS instance’s public IP. As I said earlier, you are not fixed to exposing the application on 8080: you can use any open port that is currently free. Just don’t forget to open that port (8080 here) in your AWS instance’s security group inbound rules.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769523404927/db3acd8f-b063-4781-8640-d57930f51417.png" alt class="image--center mx-auto" /></p>
<p>Fill in the WordPress setup details.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769523460513/b876564b-aedc-4660-8b14-efcc4f540aa4.png" alt class="image--center mx-auto" /></p>
<p>Now log in with those credentials.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769523509115/0a612eb3-95fe-4518-bc54-37df9bf29a12.png" alt class="image--center mx-auto" /></p>
<p>Set up your WordPress theme or customize your application.</p>
<p>Finally, you will have the application up and running.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769523725304/caee6e68-12bc-478a-ad1c-b7e363831431.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769523787712/b88fed7a-5e3e-45e5-9f0e-767c77d39da3.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-8-install-kubernetes-dashboard-for-pod-level-monitoring"><strong>Step-8: Install Kubernetes Dashboard for pod level monitoring</strong></h2>
<ul>
<li><p>Deploy Kubernetes dashboard:</p>
<pre><code class="lang-plaintext">  kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
</code></pre>
</li>
<li><p>Create a token for dashboard access:</p>
<pre><code class="lang-plaintext">  kubectl -n kubernetes-dashboard create token admin-user
</code></pre>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">You will get an error here, because the “admin-user” service account does not exist yet</div>
</div>

<p>So create a service account named “admin-user” and bind it to the <code>cluster-admin</code> role, so we can access the kubernetes-dashboard using the service account’s token.</p>
<pre><code class="lang-plaintext"># vim dashboard_admin.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
</code></pre>
<p>Set up port forwarding for the kubernetes-dashboard service so you can access the dashboard. First, check the service:</p>
<pre><code class="lang-plaintext">kubectl get svc -n kubernetes-dashboard
</code></pre>
<p>You will see that the kubernetes-dashboard service is a ClusterIP service, which is accessible on port 443 only from inside the cluster, not from outside.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769524481722/c5005a5a-1e94-4fe9-921a-0ae84a79395e.png" alt class="image--center mx-auto" /></p>
<p>So let’s set up a port mapping to access the kubernetes-dashboard via the service: port 9090 is opened here and mapped to port 443 inside.</p>
<pre><code class="lang-plaintext">kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 9090:443 --address=0.0.0.0 &amp;
</code></pre>
<p>Access the dashboard on port 9090</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769524633421/71e28616-76f4-4ba7-814a-8e1ddd20eeac.png" alt class="image--center mx-auto" /></p>
<p>You can see that the <strong>Kubernetes Dashboard is accessible only when we provide a token</strong>.<br />This is because Kubernetes follows a <strong>security-first approach by default</strong>.</p>
<p>Now, let’s understand this in easy language.</p>
<p>Kubernetes Dashboard is nothing but a <strong>pod running inside the Kubernetes cluster</strong>, right?<br />But Kubernetes does not allow <strong>any pod or user</strong> to access cluster information by default.</p>
<p>So if we try to open the dashboard <strong>without permission</strong>, Kubernetes will block the access.</p>
<p>That’s why we need to create a <strong>ServiceAccount</strong>.</p>
<p>A <strong>ServiceAccount</strong> tells Kubernetes:</p>
<blockquote>
<p>“This dashboard pod is trusted, and it is allowed to view cluster resources.”</p>
</blockquote>
<p>Using this ServiceAccount, Kubernetes generates a <strong>token</strong>.<br />When we log in to the dashboard using this token, Kubernetes verifies the permissions and then allows access.</p>
<p>So in this step, we are creating a <strong>Service Account (and assigning proper role)</strong> so the Kubernetes Dashboard can securely access the cluster resources.</p>
<pre><code class="lang-plaintext">kubectl create -f dashboard_admin.yml
</code></pre>
<p>If you check, you will see a service account named “admin-user”:</p>
<pre><code class="lang-plaintext">kubectl get sa -n kubernetes-dashboard
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769524303109/c67b9197-cafd-411f-bccd-bda7973d76a1.png" alt class="image--center mx-auto" /></p>
<p>Now create the token and paste it into the token field in the dashboard UI.</p>
<pre><code class="lang-plaintext">kubectl -n kubernetes-dashboard create token admin-user
</code></pre>
<p>You will get something like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769525148329/59533900-d7ee-4a2b-9d12-25b93bfa479b.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-plaintext">eyJhbGciOiJSUzI1NiIsImtpZCI6Ik8wRnE3NDRTcXVHUHdsMHB3eUw1bmRNX3lwUDZjNWlkYUJ4aThNM01KQkkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY5NTI4NzE5LCJpYXQiOjE3Njk1MjUxMTksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNThhYzI5ZDYtZGQxNi00YjA4LTg4ZGYtYTAxM2IyZmNkOWVjIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiYWE1M2MyYzctNmU4Ni00ZTY3LThkYmItYTI1MmE1ZTAzOTA0In19LCJuYmYiOjE3Njk1MjUxMTksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.HPJbXiKqjUBXQxYV47F2_kQ1fcgkHitAqRBdqmVQqn4tyDP2zQOAjPuqJWYDKu8t75KjjkmPu8bmOrznuEFJq3d42tyzPa62dSx8CuMpSa_GJ2h9SqwjpYJ46-PtdBy4kczEn4fidvMRLhI1wDYn8ne16if2YZqL--mYpAUR1LG2IEikidTDpFwZY5FkW8Am09SMIESV4u_JfrwxpTgJvB_9l1iXNCNXApujYnBEWEjcc479heqmzwdvQy6pBUq3sn5KgfenzbjhLzJZUI6nwIeKOOS3j_UUcyYsIrDxUwtkbzjQ0VLKZkMmcOVrV41tzdEIcIkRqbX6oLIlV-asQQ
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769525199042/3d431de6-0b7a-4d5a-9c07-673a8359446d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769525228671/eb8eba01-e27c-4bc0-a77d-deda11eaa110.png" alt class="image--center mx-auto" /></p>
<p>You will get the complete set of resources, namespace-wise.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769525282411/4f14b867-8100-4258-97a0-a10884114657.png" alt class="image--center mx-auto" /></p>
<p>Change the namespace, and you can access that namespace’s resources.</p>
<p>There are many possibilities in this project:</p>
<ul>
<li><p>We can add monitoring pods for metrics and log-based monitoring using Prometheus, Grafana, Loki, and exporters.</p>
</li>
<li><p>We can set up Kubernetes pod autoscaling and more.</p>
</li>
</ul>
<hr />
<h2 id="heading-step-9-set-up-pod-autoscaler-for-load-based-deployment-management"><strong>Step 9: Set up Pod Autoscaler for load-based deployment management</strong></h2>
<p>To set up a Pod Autoscaler, we need the official YAML. See the link: <a target="_blank" href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/">HorizontalPodAutoscaler Walkthrough | Kubernetes</a></p>
<p>Create an HPA YAML and customize it. I have created the complete YAML for you,<br />so simply copy-paste it and create the resource.</p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">autoscaling/v2</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">HorizontalPodAutoscaler</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">php-apache</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">behavior:</span>
    <span class="hljs-attr">scaleDown:</span>
     <span class="hljs-attr">stabilizationWindowSeconds:</span> <span class="hljs-number">300</span>
  <span class="hljs-attr">scaleTargetRef:</span>
    <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">wordpress</span>
  <span class="hljs-attr">minReplicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">maxReplicas:</span> <span class="hljs-number">5</span>
  <span class="hljs-attr">metrics:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">Resource</span>
    <span class="hljs-attr">resource:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">cpu</span>
      <span class="hljs-attr">target:</span>
        <span class="hljs-attr">type:</span> <span class="hljs-string">Utilization</span>
        <span class="hljs-attr">averageUtilization:</span> <span class="hljs-number">50</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769526424294/9f6162c6-0e3e-427c-a740-6e37bf86224a.png" alt class="image--center mx-auto" /></p>
<p>If you look carefully, the HPA is waiting for metrics; it needs them in order to work.<br />Without the metrics-server, the HPA won’t work at all, so check whether the metrics-server is running. If it isn’t, install it:</p>
<pre><code class="lang-plaintext">kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
</code></pre>
<p>Just edit one thing</p>
<pre><code class="lang-plaintext">kubectl edit deployment metrics-server -n kube-system
</code></pre>
<p>Add the <code>--kubelet-insecure-tls</code> flag to the container’s <code>args</code> list (it lets metrics-server talk to kubelets that use self-signed certificates):</p>
<pre><code class="lang-yaml"><span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">args:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">--kubelet-insecure-tls</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">--cert-dir=/tmp</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">--secure-port=10250</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">--kubelet-use-node-status-port</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">--metric-resolution=15s</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769527119975/dcb7aa74-5150-46f1-8611-4009fb819250.png" alt class="image--center mx-auto" /></p>
<p>Now check again the HPA status</p>
<pre><code class="lang-plaintext">kubectl get hpa
</code></pre>
<pre><code class="lang-plaintext">NAME         REFERENCE              TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/wordpress   cpu: 0%/50%   1         5         1          6m40s
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769527940042/e827e617-1926-43af-8a1c-ecb3944676e4.png" alt class="image--center mx-auto" /></p>
<hr />
<p>So that was the project’s practical documentation! I hope you gained some real-world insight into DevOps tools and technologies, and got hands-on experience with a real-world project!</p>
<p>Don't forget to share this with your colleagues, friends, and your group!<br />See you soon in the next amazing project!  </p>
<p>Thank you,<br />Rakesh Kumar Jangid.</p>
]]></content:encoded></item><item><title><![CDATA[Python in DevOps Series: 
Why Use Python When We Already Have Bash?]]></title><description><![CDATA[Introduction
If you are learning DevOps, you may be asking yourself:
👉 “I already know Bash scripting. Do I really need Python? Can’t I do the same work with Bash?”
This is a common question, and the truth is:

No, you don’t always need Python – Bas...]]></description><link>https://projectwala.site/python-in-devops-series-why-use-python-when-we-already-have-bash</link><guid isPermaLink="true">https://projectwala.site/python-in-devops-series-why-use-python-when-we-already-have-bash</guid><category><![CDATA[Devops]]></category><category><![CDATA[Python]]></category><category><![CDATA[projects]]></category><category><![CDATA[automation]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Sat, 30 Aug 2025 18:30:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756476969855/6c5afbb0-ae94-4b57-a354-c1e81a937750.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>If you are learning DevOps, you may be asking yourself:</p>
<p>👉 <em>“I already know Bash scripting. Do I really need Python? Can’t I do the same work with Bash?”</em></p>
<p>This is a common question, and the truth is:</p>
<ul>
<li><p><strong>No, you don’t always need Python</strong> – Bash can handle many tasks.</p>
</li>
<li><p><strong>But Python makes automation easier, smarter, and more powerful</strong>, especially when tasks grow bigger.</p>
</li>
</ul>
<p>Let’s go step by step to understand this in simple terms.</p>
<hr />
<h2 id="heading-what-is-bash">What is Bash?</h2>
<p>Bash (Bourne Again Shell) is a command-line shell used in Linux and Unix systems.</p>
<p>You can use Bash to:</p>
<ul>
<li><p>Run Linux commands (<code>ls</code>, <code>pwd</code>, <code>cat</code>, etc.)</p>
</li>
<li><p>Automate small tasks like backups or log cleanups</p>
</li>
<li><p>Write simple scripts using loops and conditions</p>
</li>
</ul>
<p><strong>Example – Bash script to process log files</strong></p>
<pre><code class="lang-bash">#!/bin/bash
for file in *.log; do
  echo "Processing $file"
done
</code></pre>
<p><strong>Use Case of Bash in DevOps</strong></p>
<ul>
<li><p>Copying files to servers</p>
</li>
<li><p>Running cron jobs (daily/weekly tasks)</p>
</li>
<li><p>Checking disk space with <code>df -h</code></p>
</li>
<li><p>Quick one-liners to fix issues</p>
</li>
</ul>
<p>Bash is simple, fast, and already installed in every Linux machine. That’s why DevOps engineers love it for day-to-day operations.</p>
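<p>A cron-style cleanup like the ones above can be sketched in just a few lines. This is a minimal example (the <code>/tmp/demo-logs</code> path is only a placeholder; point it at your real log directory):</p>
<pre><code class="lang-bash">#!/bin/bash
# Delete .log files older than 7 days in the given directory
LOG_DIR="${1:-/tmp/demo-logs}"
find "$LOG_DIR" -name '*.log' -type f -mtime +7 -print -delete
</code></pre>
<p>Drop it into a cron entry such as <code>0 2 * * * /usr/local/bin/clean-logs.sh /var/log/myapp</code>, and old logs get cleaned up automatically every night at 2 AM.</p>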
<hr />
<h2 id="heading-what-is-python">What is Python?</h2>
<p>Python is a general-purpose programming language. Unlike Bash, it is not limited to Linux commands.</p>
<p>You can use Python for:</p>
<ul>
<li><p>Automating bigger tasks</p>
</li>
<li><p>Working with APIs and JSON data</p>
</li>
<li><p>System monitoring (CPU, memory, disk)</p>
</li>
<li><p>Cloud automation (AWS, GCP, Azure)</p>
</li>
<li><p>Container &amp; Kubernetes automation</p>
</li>
</ul>
<p><strong>Example – Python script to check CPU usage</strong></p>
<pre><code class="lang-python"># requires the third-party psutil library: pip install psutil
import psutil
print("CPU Usage:", psutil.cpu_percent(), "%")
</code></pre>
<p><strong>Use Case of Python in DevOps</strong></p>
<ul>
<li><p>Writing monitoring scripts (CPU, memory, services)</p>
</li>
<li><p>Managing servers across multiple clouds</p>
</li>
<li><p>Automating Docker builds and Kubernetes deployments</p>
</li>
<li><p>Parsing and analyzing log files</p>
</li>
<li><p>Creating CI/CD automation scripts in Jenkins or GitHub Actions</p>
</li>
</ul>
<hr />
<h2 id="heading-key-differences-simple-view">Key Differences (Simple View)</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Bash 🖥️ (Good for Small Tasks)</strong></td><td><strong>Python 🐍 (Good for Bigger Tasks)</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Platform</td><td>Mostly Linux/Unix</td><td>Works on Linux, Windows, Mac</td></tr>
<tr>
<td>Complexity Handling</td><td>Limited (good for simple loops)</td><td>Easy for complex logic (JSON, APIs, errors)</td></tr>
<tr>
<td>Libraries</td><td>Very few (mostly Linux commands)</td><td>Thousands (monitoring, cloud, Docker, etc.)</td></tr>
<tr>
<td>Best Use Case</td><td>System setup, quick automation</td><td>Cloud automation, monitoring, DevOps tools integration</td></tr>
<tr>
<td>Example Task</td><td>Copying logs, checking disk usage</td><td>Deploying containers, monitoring servers, parsing logs</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-when-to-use-bash">When to Use Bash?</h2>
<p>Choose Bash when your task is:</p>
<ul>
<li><p>Small and simple</p>
</li>
<li><p>Only needs Linux commands</p>
</li>
<li><p>Example:</p>
<ul>
<li><p>Move files between folders</p>
</li>
<li><p>Clean old log files</p>
</li>
<li><p>Create a cron job to take backups</p>
</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-when-to-use-python">When to Use Python?</h2>
<p>Choose Python when your task is:</p>
<ul>
<li><p>Large and complex</p>
</li>
<li><p>Needs cloud or DevOps tool integration</p>
</li>
<li><p>Example:</p>
<ul>
<li><p>Monitor CPU, memory, and disk across servers</p>
</li>
<li><p>Automate AWS tasks (EC2, S3, Lambda)</p>
</li>
<li><p>Interact with Kubernetes clusters</p>
</li>
<li><p>Write a deployment script in CI/CD pipeline</p>
</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-final-thought">Final Thought</h2>
<p><em>“If your task is small, Bash is enough. But if your task is big, needs more features, or involves DevOps tools, Python is the better choice.”</em></p>
<p>Both Bash and Python are <strong>important for a DevOps engineer</strong>.</p>
<ul>
<li><p>Start with <strong>Bash</strong> for Linux basics and small automations.</p>
</li>
<li><p>Learn <strong>Python</strong> to build scalable and smarter projects.</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>As a beginner in DevOps:</p>
<ol>
<li><p>Don’t skip Bash — it is your day-to-day companion for Linux tasks.</p>
</li>
<li><p>But also learn Python — because it gives you power, libraries, and the ability to work with modern DevOps tools.</p>
</li>
</ol>
<p>With Bash, you can <strong>start</strong>. With Python, you can <strong>grow</strong>.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Python in DevOps Series: Beginner-Level]]></title><description><![CDATA[🔹 Introduction
In this series, we are here with a Python project from our Beginner-Level Project Ideas.
If you’re just starting your DevOps journey and want to understand how Python can be used for automation, system interaction, and command executi...]]></description><link>https://projectwala.site/python-in-devops-series-beginner-level</link><guid isPermaLink="true">https://projectwala.site/python-in-devops-series-beginner-level</guid><category><![CDATA[Devops]]></category><category><![CDATA[Linux]]></category><category><![CDATA[projects]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Fri, 29 Aug 2025 18:30:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756473962152/b1061e38-abf5-4d05-a3e1-76107f7372e4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">🔹 Introduction</h2>
<p>In this series, we are here with a Python project from our <strong>Beginner-Level Project Ideas</strong>.</p>
<p>If you’re just starting your DevOps journey and want to understand how Python can be used for automation, system interaction, and command execution, this project is a perfect starting point.</p>
<p>We’ll build a <strong>Python-based mini shell</strong> that allows you to run Linux commands directly from a Python script. This simple project demonstrates how Python can interact with the underlying operating system—a core skill for any DevOps engineer.</p>
<hr />
<h2 id="heading-use-case-why-this-project">🔹 Use Case – Why This Project?</h2>
<p>In DevOps, engineers often work extensively with the Linux command line for tasks such as:</p>
<ul>
<li><p>Checking system health</p>
</li>
<li><p>Navigating directories</p>
</li>
<li><p>Running deployment or monitoring commands</p>
</li>
</ul>
<p>Instead of typing directly into the terminal, Python can act as a <strong>wrapper around the shell</strong>, executing commands programmatically.</p>
<p>This project helps you:</p>
<ul>
<li><p>Understand how Python executes shell commands.</p>
</li>
<li><p>Build a foundation for <strong>automation scripts</strong>.</p>
</li>
<li><p>Think about extending it into advanced DevOps tools (like deployment scripts, monitoring dashboards, or log analyzers).</p>
</li>
</ul>
<hr />
<h2 id="heading-the-code-python-mini-shell">🔹 The Code – Python Mini Shell</h2>
<p>Here’s the complete Python script:</p>
<pre><code class="lang-python">import subprocess

command = ""
while command != "exit":
    command = input("bash:# ")
    if command == "exit":
        break
    result = subprocess.getoutput(command)
    print(result)
</code></pre>
<h3 id="heading-how-it-works">✅ How It Works</h3>
<ul>
<li><p><code>import subprocess</code> → Imports the Python module that interacts with system commands.</p>
</li>
<li><p><code>input("bash:# ")</code> → Gives the user a shell-like prompt.</p>
</li>
<li><p><strong>Exit condition</strong> → Typing <code>exit</code> will end the program.</p>
</li>
<li><p><code>subprocess.getoutput(command)</code> → Executes the entered command and returns the output.</p>
</li>
<li><p><code>print(result)</code> → Displays the result of the command.</p>
</li>
</ul>
<hr />
<h2 id="heading-example-run">🔹 Example Run</h2>
<p>Here’s what it looks like when you run the script:</p>
<pre><code class="lang-plaintext">bash:# ls
file1.txt file2.py
bash:# pwd
/home/devops
bash:# exit
</code></pre>
<p>Simple, yet powerful 🚀</p>
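<p>You can also drive the mini shell non-interactively by piping commands into it, which makes a quick smoke test easy (this assumes you saved the script as <code>mini_shell.py</code>):</p>
<pre><code class="lang-bash"># Feed two commands to the mini shell via stdin
printf 'echo hello-from-mini-shell\nexit\n' | python3 mini_shell.py
</code></pre>
<p>Because <code>input()</code> writes its prompt to stdout even when stdin is a pipe, the output shows the <code>bash:#</code> prompts interleaved with the command output.</p>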
<hr />
<h2 id="heading-tools-required-to-get-started">🔹 Tools Required to Get Started</h2>
<p>You don’t need a complicated setup to try this project. Just:</p>
<ul>
<li><p><strong>Python 3</strong> installed on your machine</p>
</li>
<li><p><strong>VS Code</strong> (or any text editor of your choice)</p>
</li>
<li><p><strong>Linux Environment</strong> (Ubuntu, CentOS, or WSL if you’re on Windows)</p>
</li>
</ul>
<p>👉 That’s it! Open VS Code, paste the code, and run it from your terminal.</p>
<hr />
<h2 id="heading-conclusion">🔹 Conclusion</h2>
<p>This project may look small, but it teaches an important DevOps concept: <strong>using Python to interact with the system</strong>.</p>
<p>As you move forward, you can enhance this project by adding:</p>
<ul>
<li><p>Error handling for invalid commands</p>
</li>
<li><p>Logging output into files</p>
</li>
<li><p>Restricting certain commands for security</p>
</li>
<li><p>Integrating with tools like Docker or Kubernetes</p>
</li>
</ul>
<p>This is just the beginning of using Python for DevOps automation—and in the upcoming articles of this series, we’ll explore more exciting beginner and intermediate project ideas.</p>
<p>Stay tuned for the next project in the <strong>Python in DevOps Series</strong>! 🎯</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Why Python is Super Important in DevOps]]></title><description><![CDATA[When we talk about DevOps, most people quickly think about tools like Jenkins, Docker, Kubernetes, or Terraform. But here’s the thing — behind all these tools, what really makes a DevOps engineer powerful is automation. And one language that plays a ...]]></description><link>https://projectwala.site/why-python-is-super-important-in-devops</link><guid isPermaLink="true">https://projectwala.site/why-python-is-super-important-in-devops</guid><category><![CDATA[Python]]></category><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[programming]]></category><category><![CDATA[beginner]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Fri, 29 Aug 2025 13:03:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756472288457/655c6053-99f1-49de-9ab9-22497ff08afa.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When we talk about DevOps, most people quickly think about tools like Jenkins, Docker, Kubernetes, or Terraform. But here’s the thing — behind all these tools, what really makes a DevOps engineer powerful is <strong>automation</strong>. And one language that plays a huge role in making automation easy is <strong>Python</strong>.</p>
<p>Python is simple, human-readable, and comes with thousands of ready-to-use libraries. For DevOps folks, it’s like having a Swiss Army knife that can help with almost everything — from writing small scripts to building full-fledged automation systems.</p>
<hr />
<h2 id="heading-why-python-matters-in-devops">Why Python Matters in DevOps</h2>
<ol>
<li><p><strong>Easy to Learn &amp; Use</strong></p>
<ul>
<li>Python’s syntax is clean and easy to read. You don’t have to be a hardcore programmer to write useful Python scripts.</li>
</ul>
</li>
<li><p><strong>Great for Automation</strong></p>
<ul>
<li>Whether it’s automating server setup, cleaning logs, or managing cloud resources, Python scripts save hours of manual work.</li>
</ul>
</li>
<li><p><strong>Huge Library Support</strong></p>
<ul>
<li>Libraries like <code>boto3</code> (for AWS), <code>paramiko</code> (for SSH), and <code>docker-py</code> (for Docker) make DevOps tasks very smooth.</li>
</ul>
</li>
<li><p><strong>Cross-Platform</strong></p>
<ul>
<li>Python runs everywhere — Linux, Windows, Mac — so you don’t have to worry about compatibility.</li>
</ul>
</li>
<li><p><strong>Integration Power</strong></p>
<ul>
<li>Almost every DevOps tool (Jenkins, Ansible, Kubernetes, AWS, etc.) has Python support or APIs that can be called using Python.</li>
</ul>
</li>
</ol>
<hr />
<h2 id="heading-final-thought">Final Thought</h2>
<p>If you’re starting your DevOps journey, Python is not optional anymore — it’s a must-have skill. Think of it as your <strong>magic wand</strong> that makes your life easier, reduces repetitive manual work, and lets you focus on solving bigger challenges.</p>
<p>So in the next parts of this series, we’ll dive into real examples and projects where Python shines in DevOps. Get ready, because this is going to be fun, practical, and super useful!</p>
]]></content:encoded></item><item><title><![CDATA[GitLab CICD Learning Project-1]]></title><description><![CDATA[In this blog we are going to create a GitLab project, that will showcase the skillset regarding GitLab tool.

Create a Group and a project within
Open your GitLab account and follow these steps one by one. I will guide you if you are starting from ba...]]></description><link>https://projectwala.site/gitlab-cicd-learning-project-1</link><guid isPermaLink="true">https://projectwala.site/gitlab-cicd-learning-project-1</guid><category><![CDATA[GitLab]]></category><category><![CDATA[gitlab-runner]]></category><category><![CDATA[GitLab-CI]]></category><category><![CDATA[GitLab SSH]]></category><category><![CDATA[gitlab-cicd]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[DevOps trends]]></category><category><![CDATA[AWS]]></category><category><![CDATA[projects]]></category><category><![CDATA[Freelancing]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Fri, 22 Aug 2025 16:02:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755878317124/249e6202-32ab-43bd-bf2a-a0bd1b30d667.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>In this blog, we are going to create a GitLab project that showcases a practical, end-to-end CI/CD skillset with the GitLab tool.</p>
</blockquote>
<h1 id="heading-create-a-group-and-a-project-within">Create a Group and a project within</h1>
<p>Open your GitLab account and follow these steps one by one; I will guide you from the basics. First, you have to create a group. Go to your GitLab account and click the <strong>“New group”</strong> button. You will see two ways to create a group; choose <strong>“Create group”</strong>. Give it a name and you’re done: you have successfully created a group. In GitLab, a group is like your user account on GitHub, e.g. <strong>GitHub-username</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755861555972/27535335-1216-4dac-8c65-b9a0f01a4268.png" alt class="image--center mx-auto" /></p>
<p>Now, inside this group, create a project by clicking the <strong>“New project”</strong> button, in exactly the same way you created the group. This is like a repository in your GitHub account.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755861581906/44502f55-a92f-4441-9675-3743201eadf1.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">We are using <strong>“GitLab-project1“ </strong>project this time.</div>
</div>

<h1 id="heading-push-your-application-data-to-project">Push your application data to project</h1>
<p>As a developer, you need to push all the project code from your system to the GitLab project using Git.</p>
<p>Here I have pushed all the project code to the <strong>master branch</strong>.</p>
<p>Link: <a target="_blank" href="https://gitlab.com/devrakaops1/java-todo-app-cicd/-/tree/master?ref_type=heads">https://gitlab.com/devrakaops1/java-todo-app-cicd/-/tree/master?ref_type=heads</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755862246079/bc383308-c85c-41c5-8952-e22b8272f1e7.png" alt class="image--center mx-auto" /></p>
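<p>If you are starting from a fresh local folder, the Git commands typically look like this (the remote URL below is this project’s URL; adjust it for your own group and project):</p>
<pre><code class="lang-bash"># initialize the local repo and point it at the GitLab project
git init
git remote add origin https://gitlab.com/devrakaops1/java-todo-app-cicd.git

# stage, commit, and push everything to the master branch
git add .
git commit -m "Initial commit: todo app source"
git push -u origin master
</code></pre>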
<h1 id="heading-add-a-runner">Add a Runner</h1>
<p>Now your next task is to create an EC2 instance that we can use as a custom GitLab Runner to run our project.</p>
<p>Step-1: Go to your AWS account and create an EC2 instance with the t2.medium instance type and RHEL 9 OS.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755862925720/5397bcc0-c0ea-4c0d-b7a2-3c9b1205852c.png" alt class="image--center mx-auto" /></p>
<p>Step-2: SSH into the instance and run the provided commands to register it as a GitLab Runner.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755863213985/a9e22ea0-ce0f-4770-9436-aee157d6c386.png" alt class="image--center mx-auto" /></p>
<p>Now run the commands that register this machine as a GitLab Runner.</p>
<p>Go to Settings &gt; CI/CD &gt; Runners &gt; Create project runner</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755864605786/55eb9eb5-3614-42a2-bbf0-e1122ca8cd6b.png" alt class="image--center mx-auto" /></p>
<p>And click on “Create runner”.</p>
<p>Now select your OS, “Red Hat Linux” in my case.<br />Next you have to run some commands, but GitLab Runner must be installed before you can register a runner. For this, run the commands shown behind the link “<a target="_blank" href="https://gitlab.com/devrakaops1/java-todo-app-cicd/-/runners/49575617/register#">How do I install GitLab Runner?</a>”</p>
<p>Select your system’s architecture and run the following commands.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755864678249/9ef8abcc-2321-4a4e-aeb3-dacb06c99852.png" alt class="image--center mx-auto" /></p>
<p>But don’t forget to run “<strong>yum update</strong>” first. This will save you time and a lot of confusion.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755864780447/ec778d83-2713-4d66-9366-982d6fc07ac3.png" alt class="image--center mx-auto" /></p>
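<p>For reference, on a RHEL-based system the GitLab Runner installation usually comes down to two commands, taken from GitLab’s official package repository script (double-check the install page linked above for your distribution):</p>
<pre><code class="lang-bash"># add the official GitLab Runner yum repository
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh" | sudo bash

# install the runner package
sudo yum install -y gitlab-runner
</code></pre>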
<p>Then run the registration command.</p>
<pre><code class="lang-yaml"> <span class="hljs-string">gitlab-runner</span> <span class="hljs-string">register</span> <span class="hljs-string">--url</span> <span class="hljs-string">https://gitlab.com</span> <span class="hljs-string">--token</span> <span class="hljs-string">glrt-oxwG1lduWEnIh3DJiwHCqm86MQpwOjE3ZmVkeQp0OjMKdTpobDhpbRg.01.1j0181m9p</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755865369666/99500a40-e61a-447c-a3d1-96726c489519.png" alt class="image--center mx-auto" /></p>
<p>Now check that your runner is active and working.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755865538378/ae3c0d53-7093-4834-b106-d8d327618e9c.png" alt class="image--center mx-auto" /></p>
<p>make sure you have turn off GitLab default runner</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755865577159/6925dda9-5162-4371-afe9-58fc299c2cf1.png" alt class="image--center mx-auto" /></p>
<p>Now your runner is active and in a working state. Later you can target this runner by referencing its tag, “dev”, in the main CI/CD file <code>.gitlab-ci.yml</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755865703719/5a45f314-c070-4e65-8342-5729dc725c29.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-add-docker-credentials-variables">Add Docker Credentials Variables</h1>
<p>To add a variable in GitLab, go to Settings &gt; CI/CD &gt; Variables &gt; Add variable.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755865819597/7677858e-cb14-4189-9da9-f9bf3a9a1df8.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755865866717/2fe0238f-b08c-4d2d-ae73-28bd831ae952.png" alt class="image--center mx-auto" /></p>
<p>We have to create two variables:</p>
<pre><code class="lang-yaml"><span class="hljs-string">DOCKERHUB_NAME</span> <span class="hljs-string">=</span> <span class="hljs-string">********</span>
<span class="hljs-string">DOCKERHUB_PASS</span> <span class="hljs-string">=</span> <span class="hljs-string">*****************************</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755865783709/4d77182b-1725-4879-b02d-14c0f2d568e0.png" alt class="image--center mx-auto" /></p>
<p>Now it’s time to create the CI/CD file. It must be named “<code>.gitlab-ci.yml</code>”.</p>
<h1 id="heading-create-a-gitlab-cicd-file">Create a GitLab CICD file</h1>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">build</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">test</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">push_to_dockerhub</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">deploy</span>
<span class="hljs-attr">variables:</span>
        <span class="hljs-attr">NAME:</span> <span class="hljs-string">"Rakesh"</span>
        <span class="hljs-attr">CITY:</span> <span class="hljs-string">"Jaipur"</span>

<span class="hljs-attr">build_job:</span>
        <span class="hljs-attr">stage:</span> <span class="hljs-string">build</span>
        <span class="hljs-attr">script:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">"$CI_JOB_STAGE is working in $CI_COMMIT_BRANCH branch"</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">-t</span> <span class="hljs-string">todo-app:latest</span> <span class="hljs-string">.</span>
        <span class="hljs-attr">tags:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">dev</span>

<span class="hljs-attr">test_job:</span>
        <span class="hljs-attr">stage:</span> <span class="hljs-string">test</span>
        <span class="hljs-attr">script:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">"$CI_JOB_STAGE is working in $CI_COMMIT_BRANCH branch"</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">image</span> <span class="hljs-string">ls</span>
        <span class="hljs-attr">tags:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">dev</span>

<span class="hljs-attr">push_job:</span>
        <span class="hljs-attr">stage:</span> <span class="hljs-string">push_to_dockerhub</span>
        <span class="hljs-attr">before_script:</span> 
                <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">login</span> <span class="hljs-string">-u</span> <span class="hljs-string">$DOCKERHUB_NAME</span> <span class="hljs-string">-p</span> <span class="hljs-string">$DOCKERHUB_PASS</span>  
        <span class="hljs-attr">script:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">"$CI_JOB_STAGE is working in $CI_COMMIT_BRANCH branch"</span>              
                <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">image</span> <span class="hljs-string">tag</span> <span class="hljs-string">todo-app:latest</span>      <span class="hljs-string">$DOCKERHUB_NAME/todo:latest</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-string">$DOCKERHUB_NAME/todo:latest</span>
        <span class="hljs-attr">tags:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">dev</span>

<span class="hljs-attr">deploy_job:</span>
        <span class="hljs-attr">stage:</span> <span class="hljs-string">deploy</span>
        <span class="hljs-attr">script:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">"$CI_JOB_STAGE is working in $CI_COMMIT_BRANCH branch"</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">compose</span> <span class="hljs-string">up</span> <span class="hljs-string">-d</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">"Working Successful"</span>
        <span class="hljs-attr">tags:</span>
                <span class="hljs-bullet">-</span> <span class="hljs-string">dev</span>
</code></pre>
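<p>The deploy stage above runs <code>docker compose up -d</code>, so the repository also needs a compose file. A minimal sketch of such a <code>docker-compose.yaml</code> (the service name and port mapping here are assumptions; match them to your own app) could look like:</p>
<pre><code class="lang-yaml">services:
  web:
    image: todo-app:latest   # image built in the build stage
    ports:
      - "8000:8000"          # host:container port for the todo app
    restart: unless-stopped
</code></pre>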
<h1 id="heading-install-docker-in-ec2-instance">Install Docker on the EC2 Instance</h1>
<p>Install docker: <a target="_blank" href="https://docs.docker.com/engine/install/centos/">https://docs.docker.com/engine/install/centos/</a></p>
<pre><code class="lang-bash">systemctl <span class="hljs-built_in">enable</span> --now docker
</code></pre>
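<p>The linked CentOS instructions boil down to roughly these commands (verify against the current Docker docs, as package names and repo URLs can change):</p>
<pre><code class="lang-bash"># add Docker's official repository
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# install the engine, CLI, and compose plugin
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
</code></pre>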
<h1 id="heading-configuration-work">Configuration Work</h1>
<pre><code class="lang-bash">yum install -y git
<span class="hljs-built_in">echo</span> <span class="hljs-string">"gitlab-runner  ALL=(ALL)   NOPASSWD:ALL"</span> &gt; /etc/sudoers.d/gitlab-runner
</code></pre>
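<p>Since the pipeline jobs call <code>docker</code> directly, the <code>gitlab-runner</code> user also needs access to the Docker daemon. One common way to grant it (an assumption about your setup; the sudoers entry above is an alternative) is to add the user to the <code>docker</code> group:</p>
<pre><code class="lang-bash"># let the gitlab-runner user talk to the Docker daemon
sudo usermod -aG docker gitlab-runner

# restart the runner so the new group membership takes effect
sudo systemctl restart gitlab-runner
</code></pre>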
<h1 id="heading-check-the-cicd-pipeline">Check the CICD pipeline</h1>
<p>When you push code to the master branch of the repository, the CI/CD pipeline triggers automatically.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755876186416/832e9e9f-a917-43a8-a054-ce092bfe0fa1.png" alt class="image--center mx-auto" /></p>
<p>Go to Build &gt; Pipelines.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755876388643/76a58763-ef68-4edc-82c0-4ceda13739b3.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755876665408/2ae7e9e6-fd62-4819-ac70-8359d424897c.png" alt class="image--center mx-auto" /></p>
<p>If you check on the instance, you will see a Docker container running on port 8000.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755876731152/c6b44f1d-d22f-46fd-95bb-26510ef132f2.png" alt class="image--center mx-auto" /></p>
<p>So we have to add an inbound rule for port 8000 to the EC2 instance’s security group.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755876836634/25b49cb6-0d61-482a-a80b-3f86b5c237a0.png" alt class="image--center mx-auto" /></p>
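<p>If you prefer the CLI over the console, the same inbound rule can be added with the AWS CLI (the security group ID below is a placeholder; substitute your instance’s group):</p>
<pre><code class="lang-bash"># open TCP port 8000 to the world on the instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8000 --cidr 0.0.0.0/0
</code></pre>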
<h1 id="heading-access-the-application">Access the application</h1>
<p>Because we deployed the application on an EC2 instance, we can see the output in a web browser using the instance’s public IP with port 8000 appended.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755877118878/50c92c42-a637-4d82-8e61-9767788a5c52.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-notification-alerts">Notification Alerts</h1>
<p>GitLab provides pipeline notification alerts by default.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755879468429/b1938b79-564c-43fe-8388-f4e9877456c8.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-output-logs">Output Logs</h1>
<ul>
<li><p>Build_job</p>
<pre><code class="lang-json">  Running with gitlab-runner <span class="hljs-number">18.3</span><span class="hljs-number">.0</span> (<span class="hljs-number">9</span>ba718cd)
    on gitlab-runner1 jHBjwSbnw, system ID: s_5da8ab1e406d
  Preparing the <span class="hljs-string">"shell"</span> executor
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Using Shell (bash) executor...
  Preparing environment
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Running on ip<span class="hljs-number">-172</span><span class="hljs-number">-31</span><span class="hljs-number">-95</span><span class="hljs-number">-28.</span>ec2.internal...
  Getting source from Git repository
  <span class="hljs-number">00</span>:<span class="hljs-number">01</span>
  Gitaly correlation ID: <span class="hljs-number">089</span>ad318cb6440518c7c06d9f9afc0ee
  Fetching changes with git depth set to <span class="hljs-number">20.</span>..
  Reinitialized existing Git repository in /builds/jHBjwSbnw/<span class="hljs-number">0</span>/devrakaops1/java-todo-app-cicd/.git/
  Checking out <span class="hljs-number">65</span>f03dd9 as detached HEAD (ref is master)...
  Skipping Git submodules setup
  Executing <span class="hljs-string">"step_script"</span> stage of the job script
  <span class="hljs-number">00</span>:<span class="hljs-number">08</span>
  $ echo <span class="hljs-string">"Hello my name is $NAME and i lived in $CITY"</span>
  Hello my name is Rakesh and i lived in Jaipur
  $ echo <span class="hljs-string">"$CI_JOB_STAGE is working in $CI_COMMIT_BRANCH branch"</span>
  build is working in master branch
  $ echo pwd
  pwd
  $ echo whoami
  whoami
  $ echo ls -al
  ls -al
  $ docker build -t todo-app:latest .
  #<span class="hljs-number">0</span> building with <span class="hljs-string">"default"</span> instance using docker driver
  #<span class="hljs-number">1</span> [internal] load build definition from Dockerfile
  #<span class="hljs-number">1</span> transferring dockerfile: <span class="hljs-number">297</span>B done
  #<span class="hljs-number">1</span> DONE <span class="hljs-number">0.0</span>s
  #<span class="hljs-number">2</span> [auth] library/node:pull token for registry<span class="hljs-number">-1.</span>docker.io
  #<span class="hljs-number">2</span> DONE <span class="hljs-number">0.0</span>s
  #<span class="hljs-number">3</span> [internal] load metadata for docker.io/library/node:<span class="hljs-number">12.2</span><span class="hljs-number">.0</span>-alpine
  #<span class="hljs-number">3</span> DONE <span class="hljs-number">0.2</span>s
  #<span class="hljs-number">4</span> [internal] load .dockerignore
  #<span class="hljs-number">4</span> transferring context: <span class="hljs-number">2</span>B done
  #<span class="hljs-number">4</span> DONE <span class="hljs-number">0.0</span>s
  #<span class="hljs-number">5</span> [<span class="hljs-number">1</span>/<span class="hljs-number">5</span>] FROM docker.io/library/node:<span class="hljs-number">12.2</span><span class="hljs-number">.0</span>-alpine@sha256:<span class="hljs-number">2</span>ab3d9a1bac67c9b4202b774664adaa94d2f1e426d8d28e07bf8979df61c8694
  #<span class="hljs-number">5</span> DONE <span class="hljs-number">0.0</span>s
  #<span class="hljs-number">6</span> [internal] load build context
  #<span class="hljs-number">6</span> transferring context: <span class="hljs-number">15.71</span>kB done
  #<span class="hljs-number">6</span> DONE <span class="hljs-number">0.0</span>s
  #<span class="hljs-number">7</span> [<span class="hljs-number">2</span>/<span class="hljs-number">5</span>] WORKDIR /node
  #<span class="hljs-number">7</span> CACHED
  #<span class="hljs-number">8</span> [<span class="hljs-number">3</span>/<span class="hljs-number">5</span>] COPY . .
  #<span class="hljs-number">8</span> DONE <span class="hljs-number">0.0</span>s
  #<span class="hljs-number">9</span> [<span class="hljs-number">4</span>/<span class="hljs-number">5</span>] RUN npm install
  #<span class="hljs-number">9</span> <span class="hljs-number">0.710</span> npm WARN read-shrinkwrap This version of npm is compatible with lockfileVersion@<span class="hljs-number">1</span>, but package-lock.json was generated for lockfileVersion@<span class="hljs-number">2.</span> I'll try to do my best with it!
  #<span class="hljs-number">9</span> <span class="hljs-number">5.604</span> 
  #<span class="hljs-number">9</span> <span class="hljs-number">5.604</span> &gt; ejs@<span class="hljs-number">2.7</span><span class="hljs-number">.4</span> postinstall /node/node_modules/ejs
  #<span class="hljs-number">9</span> <span class="hljs-number">5.604</span> &gt; node ./postinstall.js
  #<span class="hljs-number">9</span> <span class="hljs-number">5.604</span> 
  #<span class="hljs-number">9</span> <span class="hljs-number">5.653</span> Thank you for installing EJS: built with the Jake JavaScript build tool (https:<span class="hljs-comment">//jakejs.com/)</span>
  #<span class="hljs-number">9</span> <span class="hljs-number">5.653</span> 
  #<span class="hljs-number">9</span> <span class="hljs-number">5.909</span> npm WARN my-todolist@<span class="hljs-number">0.1</span><span class="hljs-number">.0</span> No repository field.
  #<span class="hljs-number">9</span> <span class="hljs-number">5.910</span> npm WARN my-todolist@<span class="hljs-number">0.1</span><span class="hljs-number">.0</span> No license field.
  #<span class="hljs-number">9</span> <span class="hljs-number">5.910</span> 
  #<span class="hljs-number">9</span> <span class="hljs-number">5.913</span> added <span class="hljs-number">291</span> packages from <span class="hljs-number">653</span> contributors and audited <span class="hljs-number">291</span> packages in <span class="hljs-number">5.249</span>s
  #<span class="hljs-number">9</span> <span class="hljs-number">5.914</span> found <span class="hljs-number">33</span> vulnerabilities (<span class="hljs-number">10</span> low, <span class="hljs-number">3</span> moderate, <span class="hljs-number">16</span> high, <span class="hljs-number">4</span> critical)
  #<span class="hljs-number">9</span> <span class="hljs-number">5.914</span>   run `npm audit fix` to fix them, or `npm audit` for details
  #<span class="hljs-number">9</span> DONE <span class="hljs-number">6.1</span>s
  #<span class="hljs-number">10</span> [<span class="hljs-number">5</span>/<span class="hljs-number">5</span>] RUN npm run test
  #<span class="hljs-number">10</span> <span class="hljs-number">0.428</span> 
  #<span class="hljs-number">10</span> <span class="hljs-number">0.428</span> &gt; my-todolist@<span class="hljs-number">0.1</span><span class="hljs-number">.0</span> test /node
  #<span class="hljs-number">10</span> <span class="hljs-number">0.428</span> &gt; mocha --recursive --exit
  #<span class="hljs-number">10</span> <span class="hljs-number">0.428</span> 
  #<span class="hljs-number">10</span> <span class="hljs-number">0.624</span> 
  #<span class="hljs-number">10</span> <span class="hljs-number">0.625</span> 
  #<span class="hljs-number">10</span> <span class="hljs-number">0.627</span>   Simple Calculations
  #<span class="hljs-number">10</span> <span class="hljs-number">0.628</span> This part executes once before all tests
  #<span class="hljs-number">10</span> <span class="hljs-number">0.628</span>     Test1
  #<span class="hljs-number">10</span> <span class="hljs-number">0.628</span> executes before every test
  #<span class="hljs-number">10</span> <span class="hljs-number">0.629</span>       ✓ Is returning <span class="hljs-number">5</span> when adding <span class="hljs-number">2</span> + <span class="hljs-number">3</span>
  #<span class="hljs-number">10</span> <span class="hljs-number">0.629</span> executes before every test
  #<span class="hljs-number">10</span> <span class="hljs-number">0.630</span>       ✓ Is returning <span class="hljs-number">6</span> when multiplying <span class="hljs-number">2</span> * <span class="hljs-number">3</span>
  #<span class="hljs-number">10</span> <span class="hljs-number">0.630</span>     Test2
  #<span class="hljs-number">10</span> <span class="hljs-number">0.630</span> executes before every test
  #<span class="hljs-number">10</span> <span class="hljs-number">0.630</span>       ✓ Is returning <span class="hljs-number">4</span> when adding <span class="hljs-number">2</span> + <span class="hljs-number">3</span>
  #<span class="hljs-number">10</span> <span class="hljs-number">0.630</span> executes before every test
  #<span class="hljs-number">10</span> <span class="hljs-number">0.630</span>       ✓ Is returning <span class="hljs-number">8</span> when multiplying <span class="hljs-number">2</span> * <span class="hljs-number">4</span>
  #<span class="hljs-number">10</span> <span class="hljs-number">0.631</span> This part executes once after all tests
  #<span class="hljs-number">10</span> <span class="hljs-number">0.631</span> 
  #<span class="hljs-number">10</span> <span class="hljs-number">0.631</span> 
  #<span class="hljs-number">10</span> <span class="hljs-number">0.631</span>   <span class="hljs-number">4</span> passing (<span class="hljs-number">7</span>ms)
  #<span class="hljs-number">10</span> <span class="hljs-number">0.631</span> 
  #<span class="hljs-number">10</span> DONE <span class="hljs-number">0.6</span>s
  #<span class="hljs-number">11</span> exporting to image
  #<span class="hljs-number">11</span> exporting layers
  #<span class="hljs-number">11</span> exporting layers <span class="hljs-number">1.1</span>s done
  #<span class="hljs-number">11</span> writing image sha256:c210dbc115207b92008265a12fe9e3059e2b259a59627ce44805f917bb2c49dd done
  #<span class="hljs-number">11</span> naming to docker.io/library/todo-app:latest done
  #<span class="hljs-number">11</span> DONE <span class="hljs-number">1.1</span>s
  Cleaning up project directory and file based variables
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Job succeeded
</code></pre>
</li>
<li><p>test_job</p>
<pre><code class="lang-json">  Running with gitlab-runner <span class="hljs-number">18.3</span><span class="hljs-number">.0</span> (<span class="hljs-number">9</span>ba718cd)
    on gitlab-runner1 jHBjwSbnw, system ID: s_5da8ab1e406d
  Preparing the <span class="hljs-string">"shell"</span> executor
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Using Shell (bash) executor...
  Preparing environment
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Running on ip<span class="hljs-number">-172</span><span class="hljs-number">-31</span><span class="hljs-number">-95</span><span class="hljs-number">-28.</span>ec2.internal...
  Getting source from Git repository
  <span class="hljs-number">00</span>:<span class="hljs-number">01</span>
  Gitaly correlation ID: <span class="hljs-number">14</span>a70c8d29bf4a60bb71afd99d959466
  Fetching changes with git depth set to <span class="hljs-number">20.</span>..
  Reinitialized existing Git repository in /builds/jHBjwSbnw/<span class="hljs-number">0</span>/devrakaops1/java-todo-app-cicd/.git/
  Checking out <span class="hljs-number">65</span>f03dd9 as detached HEAD (ref is master)...
  Skipping Git submodules setup
  Executing <span class="hljs-string">"step_script"</span> stage of the job script
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  $ echo <span class="hljs-string">"$CI_JOB_STAGE is working in $CI_COMMIT_BRANCH branch"</span>
  test is working in master branch
  $ docker image ls
  REPOSITORY         TAG       IMAGE ID       CREATED         SIZE
  todo-app           latest    c210dbc11520   <span class="hljs-number">5</span> seconds ago   <span class="hljs-number">104</span>MB
  [MASKED]/todo   latest    <span class="hljs-number">5970356</span>deaa0   <span class="hljs-number">3</span> minutes ago   <span class="hljs-number">104</span>MB
  Cleaning up project directory and file based variables
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Job succeeded
</code></pre>
</li>
<li><p>push_to_dockerhub_job</p>
<pre><code class="lang-json">  Running with gitlab-runner <span class="hljs-number">18.3</span><span class="hljs-number">.0</span> (<span class="hljs-number">9</span>ba718cd)
    on gitlab-runner1 jHBjwSbnw, system ID: s_5da8ab1e406d
  Preparing the <span class="hljs-string">"shell"</span> executor
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Using Shell (bash) executor...
  Preparing environment
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Running on ip<span class="hljs-number">-172</span><span class="hljs-number">-31</span><span class="hljs-number">-95</span><span class="hljs-number">-28.</span>ec2.internal...
  Getting source from Git repository
  <span class="hljs-number">00</span>:<span class="hljs-number">01</span>
  Gitaly correlation ID: <span class="hljs-number">4</span>cfb3d55ef244d9dad1a96d22cdd7691
  Fetching changes with git depth set to <span class="hljs-number">20.</span>..
  Reinitialized existing Git repository in /builds/jHBjwSbnw/<span class="hljs-number">0</span>/devrakaops1/java-todo-app-cicd/.git/
  Checking out <span class="hljs-number">65</span>f03dd9 as detached HEAD (ref is master)...
  Skipping Git submodules setup
  Executing <span class="hljs-string">"step_script"</span> stage of the job script
  <span class="hljs-number">00</span>:<span class="hljs-number">04</span>
  $ echo <span class="hljs-string">"$CI_JOB_STAGE is working in $CI_COMMIT_BRANCH branch"</span>
  push_to_dockerhub is working in master branch
  $ docker login -u $DOCKERHUB_NAME -p $DOCKERHUB_PASS
  WARNING! Using --password via the CLI is insecure. Use --password-stdin.
  Login Succeeded
  $ docker image tag todo-app:latest      $DOCKERHUB_NAME/todo:latest
  $ docker push $DOCKERHUB_NAME/todo:latest
  The push refers to repository [docker.io/[MASKED]/todo]
  b4a20abeed57: Preparing
  b5a342a444fe: Preparing
  <span class="hljs-number">2630</span>b0641421: Preparing
  <span class="hljs-number">48568</span>d6a9c95: Preparing
  <span class="hljs-number">917</span>da41f96aa: Preparing
  <span class="hljs-number">7</span>d6e2801765d: Preparing
  f1b5933fe4b5: Preparing
  <span class="hljs-number">7</span>d6e2801765d: Waiting
  f1b5933fe4b5: Waiting
  <span class="hljs-number">48568</span>d6a9c95: Layer already exists
  <span class="hljs-number">917</span>da41f96aa: Layer already exists
  <span class="hljs-number">7</span>d6e2801765d: Layer already exists
  f1b5933fe4b5: Layer already exists
  b4a20abeed57: Pushed
  <span class="hljs-number">2630</span>b0641421: Pushed
  b5a342a444fe: Pushed
  latest: digest: sha256:c289ae82475448c92ba2db4d00584f9664fdc9bc300208a44dbc7c878c6623c0 size: <span class="hljs-number">1785</span>
  Cleaning up project directory and file based variables
  <span class="hljs-number">00</span>:<span class="hljs-number">01</span>
  Job succeeded
</code></pre>
</li>
<li><p>deploy_job</p>
<pre><code class="lang-json">  Running with gitlab-runner <span class="hljs-number">18.3</span><span class="hljs-number">.0</span> (<span class="hljs-number">9</span>ba718cd)
    on gitlab-runner1 jHBjwSbnw, system ID: s_5da8ab1e406d
  Preparing the <span class="hljs-string">"shell"</span> executor
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Using Shell (bash) executor...
  Preparing environment
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Running on ip<span class="hljs-number">-172</span><span class="hljs-number">-31</span><span class="hljs-number">-95</span><span class="hljs-number">-28.</span>ec2.internal...
  Getting source from Git repository
  <span class="hljs-number">00</span>:<span class="hljs-number">01</span>
  Gitaly correlation ID: <span class="hljs-number">45752</span>bebf3ae4b6582bb25bdff2e7ac4
  Fetching changes with git depth set to <span class="hljs-number">20.</span>..
  Reinitialized existing Git repository in /builds/jHBjwSbnw/<span class="hljs-number">0</span>/devrakaops1/java-todo-app-cicd/.git/
  Checking out <span class="hljs-number">65</span>f03dd9 as detached HEAD (ref is master)...
  Skipping Git submodules setup
  Executing <span class="hljs-string">"step_script"</span> stage of the job script
  <span class="hljs-number">00</span>:<span class="hljs-number">01</span>
  $ echo <span class="hljs-string">"$CI_JOB_STAGE is working in $CI_COMMIT_BRANCH branch"</span>
  deploy is working in master branch
  $ docker compose up -d
  time=<span class="hljs-string">"2025-08-22T15:30:00Z"</span> level=warning msg=<span class="hljs-string">"/builds/jHBjwSbnw/0/devrakaops1/java-todo-app-cicd/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion"</span>
   Network java-todo-app-cicd_default  Creating
   Network java-todo-app-cicd_default  Created
   Container java-todo-app-cicd-web<span class="hljs-number">-1</span>  Creating
   Container java-todo-app-cicd-web<span class="hljs-number">-1</span>  Created
   Container java-todo-app-cicd-web<span class="hljs-number">-1</span>  Starting
   Container java-todo-app-cicd-web<span class="hljs-number">-1</span>  Started
  $ echo <span class="hljs-string">"Working Successful"</span>
  Working Successful
  Cleaning up project directory and file based variables
  <span class="hljs-number">00</span>:<span class="hljs-number">00</span>
  Job succeeded
</code></pre>
</li>
</ul>
<hr />
<p>Thank you</p>
<p>GitLab repo: <a target="_blank" href="https://gitlab.com/devrakaops1/java-todo-app-cicd/-/tree/master?ref_type=heads">https://gitlab.com/devrakaops1/java-todo-app-cicd/-/tree/master?ref_type=heads</a></p>
<p>Linkedin: <a target="_blank" href="https://www.linkedin.com/in/rakeshkumarjangid/">https://www.linkedin.com/in/rakeshkumarjangid/</a></p>
]]></content:encoded></item><item><title><![CDATA[Solve: Break Linux Administrative Password on Fedora based Linux Machines]]></title><description><![CDATA[You know how it feels when you forget your Linux password and suddenly can’t do anything on your own machine? It's super frustrating, especially when you’re locked out of something important. Whether you’re using Fedora, Ubuntu, or any other Linux di...]]></description><link>https://projectwala.site/solve-break-linux-administrative-password-on-fedora-based-linux-machines</link><guid isPermaLink="true">https://projectwala.site/solve-break-linux-administrative-password-on-fedora-based-linux-machines</guid><category><![CDATA[Linux]]></category><category><![CDATA[passwords]]></category><category><![CDATA[hacking]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Tue, 13 Aug 2024 09:31:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723538697242/9088c6d9-0505-4a4c-9cf9-6a6b565f7bb8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p><strong>Y</strong>ou know how it feels when you forget your Linux password and suddenly can’t do anything on your own machine? It's super frustrating, especially when you’re locked out of something important. Whether you’re using Fedora, Ubuntu, or any other Linux distro, you need that admin access to keep things running smoothly—installing software, updating the system, all that stuff. Without it, you’re just stuck.</p>
<p>But here’s the good news: Linux has a safety net. You can actually reset or break the password and get back in control. It might sound a bit intimidating at first, but trust me, with a little guidance, it’s totally doable.</p>
<hr />
<p><strong>S</strong>o, imagine this: you’re the system admin, and suddenly you can’t remember your root password. It’s one of those moments where everything grinds to a halt because, without that password, you’re locked out of doing just about anything important on your system. Maybe you were in the middle of something critical, and now you’re stuck. But don’t worry—Linux has got your back.</p>
<h3 id="heading-resetting-the-root-password-from-the-boot-loader"><strong>Resetting the Root Password from the Boot Loader</strong></h3>
<p>If you ever find yourself in this situation, knowing how to reset a lost root password is a skill every system admin needs. If you’re already logged in with sudo access or as root, it’s no big deal. But if you’re not logged in, things get a bit trickier.</p>
<blockquote>
<p>You’ve got a few options here. Some might suggest booting from a Live CD, mounting your root file system, and then editing <code>/etc/shadow</code> to fix the problem. But let’s face it, not everyone wants to fiddle with external media. So, let’s explore a method that doesn’t require any of that.</p>
</blockquote>
<p>On older Red Hat systems, you could just boot into runlevel 1 to get a root prompt. In the newer versions, like Red Hat Enterprise Linux 8 and beyond, it’s a bit different. You’ll need to use either the rescue or emergency targets, but here’s the catch—they still require the root password. If your system was deployed from a Red Hat cloud image, you might not have a rescue kernel in your boot menu, but your default kernel has a trick up its sleeve—it lets you enter maintenance mode without needing the root password.</p>
<h3 id="heading-how-to-do-it"><strong>How to Do It:</strong></h3>
<h3 id="heading-approach-1">Approach-1</h3>
<ol>
<li><p><strong>Reboot Your System:</strong> Start by rebooting your machine.</p>
<p> <strong>Interrupt the Boot:</strong> When the boot-loader countdown starts, hit any key (except Enter) to stop it.</p>
<blockquote>
<p><strong>You will see the kernel images here. Select the "rescue" kernel image, using the ↑ &amp; ↓ arrow keys to change the selection.</strong></p>
</blockquote>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723540174841/a5b69a16-0fe3-4e36-88ed-d2fd2ecffba8.png" alt class="image--center mx-auto" /></p>
<p> <strong>Select the Rescue Kernel:</strong> Use the arrow keys to move the cursor to the entry with the word "rescue" in its name.</p>
</li>
<li><p><strong>Edit the Boot Parameters:</strong> Press <code>e</code> to edit the selected entry.</p>
</li>
<li><p><strong>Modify the Kernel Command Line:</strong> Find the line that begins with <code>linux</code>, add a single space at the end, and append <code>rd.break</code>. This tells the system to pause just before handing control from the initramfs to the actual system.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723540357882/0e59b6f6-25fb-4a7d-8cda-da5664ea0358.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723540495481/88a6cd23-9725-4d76-9294-1afc82dcc9f4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Boot with the Changes:</strong> Press <code>Ctrl+x</code> to boot with the modified parameters.</p>
</li>
<li><p><strong>Maintenance Mode:</strong> When prompted, press Enter to enter maintenance mode.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723540551196/fe06b0b8-f337-4ce2-a17d-2ddf9e4adbaa.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Now, you’ll have access to a root shell, but there’s a catch: the root file system is mounted as read-only. You’ll need to remount it as read/write to make changes.</div>
</div>

<ol start="7">
<li><p><strong>Remount the File System:</strong> Run this command to remount the root file system as read/write:</p>
<pre><code class="lang-plaintext"> switch_root:/# mount -o remount,rw /sysroot
</code></pre>
</li>
<li><p><strong>Enter a Chroot Jail:</strong> This makes <code>/sysroot</code> the root of your file-system tree:</p>
<pre><code class="lang-plaintext"> switch_root:/# chroot /sysroot
</code></pre>
</li>
<li><p><strong>Reset the Password:</strong> Set a new root password with this command:</p>
<pre><code class="lang-plaintext"> sh-5.1# passwd root
 Changing password for user root.
 New password: **********
 Retype new password: **********
 passwd: all authentication tokens updated successfully.
</code></pre>
</li>
<li><p><strong>Ensure Files Get Relabeled:</strong> This step is crucial to avoid SELinux issues later. Run:</p>
<pre><code class="lang-plaintext">sh-5.1# touch /.autorelabel
</code></pre>
</li>
<li><p><strong>Exit and Reboot:</strong> Type <code>exit</code> twice: once to leave the chroot jail and once to exit the initramfs shell. The system will continue booting, perform a full SELinux relabel, and then reboot again. The relabel takes some time; once the system is back up, log in as root with the new password.</p>
<pre><code class="lang-plaintext">sh-5.1# exit
switch_root:/# exit
</code></pre>
</li>
</ol>
<hr />
<h3 id="heading-approach-2">Approach-2</h3>
<ol>
<li><p><strong>Reboot Your System:</strong> Start by rebooting your machine.</p>
<p> <strong>Interrupt the Boot:</strong> When the boot-loader countdown starts, hit any key (except Enter) to stop it.</p>
<blockquote>
<p><strong>You will see the kernel images here. Select the "rescue" kernel image, using the ↑ &amp; ↓ arrow keys to change the selection.</strong></p>
</blockquote>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723540174841/a5b69a16-0fe3-4e36-88ed-d2fd2ecffba8.png" alt class="image--center mx-auto" /></p>
<p> <strong>Select the Rescue Kernel:</strong> Use the arrow keys to move the cursor to the entry with the word "rescue" in its name.</p>
</li>
<li><p><strong>Edit the Boot Parameters:</strong> Press <code>e</code> to edit the selected entry.</p>
</li>
<li><p><strong>Modify the Kernel Command Line:</strong> Find the line that begins with <code>linux</code>, add a single space at the end, and append <code>rw init=/bin/bash</code>. This tells the kernel to mount the root file system read/write and start a Bash shell directly, instead of the normal init system.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723556552752/08cd55d1-169c-4466-bc2b-dbe5c6b9e71a.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Boot with the Changes:</strong> Press <code>Ctrl+x</code> to boot with the modified parameters.</p>
</li>
<li><p><strong>Reset the Password:</strong> With <code>init=/bin/bash</code>, the system boots straight into a root Bash shell with no password prompt. Set a new root password:</p>
<pre><code class="lang-plaintext"> bash# passwd root
 Changing password for user root.
 New password: **********
 Retype new password: **********
 passwd: all authentication tokens updated successfully.
</code></pre>
</li>
<li><p><strong>Ensure Files Get Relabeled:</strong> This step is crucial to avoid SELinux issues later.</p>
<pre><code class="lang-plaintext"> bash# touch /.autorelabel
</code></pre>
</li>
<li><p><strong>Exit and Reboot:</strong> Type <code>/sbin/reboot -f</code> and hit Enter. The system will continue booting, perform a full SELinux relabel, and then reboot again. The relabel takes some time; once the system is back up, log in as root with the new password.</p>
<pre><code class="lang-plaintext"> bash# /sbin/reboot -f
</code></pre>
</li>
</ol>
<p>And that’s it! You’re back in business with a new root password, and you didn’t even need to mess around with external media. Whether you’re troubleshooting or just making sure you’re prepared for a rainy day, knowing this trick can save you a lot of headaches.</p>
]]></content:encoded></item><item><title><![CDATA[Linux Networking: A Detailed Guide for the Curious Mind]]></title><description><![CDATA[What is a Network & Networking?
A network is a collection of computing devices (Ex:- Mobile, Computers, IoT Devices, Servers) connected via a communication medium (Wired, Wireless) to exchange information and resources while networking is the practic...]]></description><link>https://projectwala.site/linux-networking-a-detailed-guide-for-the-curious-mind</link><guid isPermaLink="true">https://projectwala.site/linux-networking-a-detailed-guide-for-the-curious-mind</guid><category><![CDATA[Linux]]></category><category><![CDATA[networking]]></category><category><![CDATA[Devops]]></category><category><![CDATA[networking for beginners]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Wed, 21 Feb 2024 11:14:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1708439683209/23857688-63e1-4f14-bf0c-9863faeb1132.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h3 id="heading-what-is-a-network-amp-networking">What is a Network &amp; Networking?</h3>
<p>A network is a collection of computing devices (e.g., mobiles, computers, IoT devices, servers) connected via a communication medium (wired or wireless) to exchange information and resources, while networking is the practice of creating, maintaining, securing, and troubleshooting such a network.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><code>What is computer networking?</code> Computer networking is like a web of computers that can talk to each other and share stuff. They use a set of rules, known as protocols, to send information using wires or wireless technologies.</div>
</div>

<hr />
<h3 id="heading-how-does-a-computer-network-work"><strong>How does a computer network work?</strong></h3>
<p>A computer network is a system that allows multiple computing devices (known as nodes) to connect and communicate with each other. This communication is facilitated through various mediums such as wires, optical fibers, or wireless links.</p>
<p>The basic building blocks of a computer network are nodes and links. A node can be a data communication device such as a modem or router, or a data terminal such as a computer. A link is the medium that connects nodes: a wire or cable, or free space in the case of wireless networks.</p>
<p>Each device in a network has a unique IP address that identifies it. Protocols, which are sets of rules, govern how data is sent and received over the links that connect the network. We will talk about IP addresses in more detail later.</p>
<hr />
<h3 id="heading-what-are-two-types-of-computer-network-architecture"><strong>What are the two types of computer network architecture?</strong></h3>
<p>There are two main types:</p>
<ol>
<li><p><code>Client-server architecture:</code> Here, some nodes (servers) provide resources to other nodes (clients). Clients can talk to each other but don’t share resources.</p>
</li>
<li><p><code>Peer-to-Peer (P2P) architecture:</code> Here, all computers have the same powers. There’s no central server. Each device can act as a client or server and share resources with the network.</p>
</li>
</ol>
<hr />
<h3 id="heading-what-is-network-topology"><strong>What is network topology?</strong></h3>
<p><strong>Network Topology</strong> is like a blueprint of a network. It shows how different devices such as computers, routers, and servers are connected to each other.</p>
<p>Here are the different types of network topologies:</p>
<ol>
<li><p><code>Point-to-Point Topology:</code> This is the simplest form where two devices are directly connected. It’s like a direct phone call between two people. For example, a satellite dish receiving a signal from a satellite is a point-to-point connection.</p>
</li>
<li><p><code>Mesh Topology:</code> In this type, every device is connected to every other device. It’s like a group where everyone is friends with everyone else. This is often used in wireless networks where each device can communicate with every other device directly.</p>
</li>
<li><p><code>Star Topology:</code> Here, all devices are connected to a central hub. It’s like a wheel where the hub is the center and the devices are the spokes. This is commonly used in home networks where all devices connect to a central router.</p>
</li>
<li><p><code>Bus Topology:</code> All devices are connected through a single cable, known as the backbone. It’s like a bus route where all stops (devices) are located along the route. This was commonly used in old Ethernet networks.</p>
</li>
<li><p><code>Ring Topology:</code> Devices are connected in a circle, and data moves in one direction from one device to another. It’s like a circular conveyor belt where packages (data) can hop on at one station and hop off at another. This is used in some types of office networks.</p>
</li>
<li><p><code>Tree Topology:</code> This is a combination of star and bus topologies and looks like a tree with branches. This is often used in wide area networks (WANs) that span large distances, like a network of a large company with many branches.</p>
</li>
<li><p><code>Hybrid Topology:</code> This is a combination of two or more different types of topologies. This is often used in large businesses where different departments have different needs.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708424631297/9a730f36-87fc-4ba0-b63c-f061f2b7e5fc.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-what-are-the-types-of-enterprise-computer-networks"><strong>What are the types of enterprise computer networks?</strong></h3>
<p>Computer networks can vary in size and complexity:</p>
<ul>
<li><p><code>Personal Area Network (PAN):</code> A network that connects devices over a very short distance, often within a range of 10 meters. For example, a smartphone connected to a wearable device.</p>
</li>
<li><p><code>Local Area Network (LAN):</code> A network that connects devices over a short distance, such as within a building or a campus.</p>
</li>
<li><p><code>Metropolitan Area Network (MAN):</code> A network that covers a larger geographic area, such as a city.</p>
</li>
<li><p><code>Wide Area Network (WAN):</code> A network that covers a large geographic area, such as a country or the entire world. The Internet is an example of a WAN.</p>
</li>
<li><p><code>Service provider networks:</code> An Internet Service Provider (ISP) is an example of a service provider network. ISPs are organizations that provide services for accessing, using, managing, or participating in the Internet. They can be organized in various forms, such as commercial, community-owned, non-profit, or privately owned. ISPs provide internet access, internet transit, domain name registration, web hosting, and colocation.</p>
</li>
<li><p><code>Cloud networks:</code> A Virtual Private Cloud (VPC) is an example of a cloud network. A VPC is a virtual network dedicated to your AWS account, logically isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the VPC, add subnets (a subnet is a range of IP addresses within your VPC), add gateways, and associate security groups. Amazon VPC lets you define and launch AWS resources in a virtual network that you control: you choose your own IP address range, create subnets, configure route tables, and secure and monitor your connections with security groups and firewalls.</p>
</li>
</ul>
<hr />
<h3 id="heading-the-main-components-of-a-computer-network">The main components of a computer network</h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">What do you mean by computer network components?</div>
</div>

<p>Computer network components are the major parts that are needed to install a network according to network topology architectural setup. The design of a network for any organization is indeed influenced by the choice of network components and network topologies. These components are crucial in the design of a computer network architecture as they dictate the layout, communication protocols, and connectivity patterns of network systems.</p>
<p><code>1. NIC (Network Interface Card):</code></p>
<p>This is a piece of hardware that connects a computer to a network. Typical cards handle data transfer rates from 10 Mb/s to 1 Gb/s or more. There are two types:</p>
<ul>
<li><p><strong>Wired NIC</strong>: This is inside the computer’s motherboard and uses cables to transfer data.</p>
</li>
<li><p><strong>Wireless NIC</strong>: This has an antenna for wireless connections. For example, laptops have wireless NICs.</p>
</li>
</ul>
<p><code>2. HUB:</code></p>
<ul>
<li><p>A Hub is a connector that connects wires coming from different sides.</p>
</li>
<li><p>It operates only on the physical layer (Layer 1) of the OSI model.</p>
</li>
<li><p>It is also known as a repeater as it transmits signals to every port except the port from where the signal is received.</p>
</li>
<li><p>Hubs have no intelligence at Layer 2 or Layer 3; they cannot inspect or process the traffic they repeat.</p>
</li>
</ul>
<p><code>3. Switch:</code></p>
<ul>
<li><p>A Switch is a point-to-point communication device that operates on both the physical and data link layers (Layer 1 and Layer 2) of the OSI model.</p>
</li>
<li><p>It can inspect data packets as they are received, determine the source and destination device of each packet, and forward them appropriately.</p>
</li>
<li><p>By delivering messages only to the intended connected device, a network switch conserves network bandwidth and generally offers better performance than a hub.</p>
</li>
</ul>
<p><code>4. Router:</code></p>
<ul>
<li><p>A Router is a device that forwards data packets between computer networks, creating an overlay internetwork. It operates on the third layer (Network Layer) of the OSI model.</p>
</li>
<li><p>Routers use packet headers and forwarding tables to determine the best path for forwarding packets, and they use routing protocols (such as OSPF and BGP) to exchange route information with each other.</p>
</li>
<li><p>Unlike access points and modems, routers have the ability to connect two or more logical subnets, which do not necessarily map one-to-one to the physical interfaces of the router.</p>
</li>
</ul>
<p><code>5. Access Point:</code></p>
<ul>
<li><p>An Access Point, on the other hand, is a device that allows wireless devices to connect to a network. It serves as a central transmitter and receiver of wireless radio signals.</p>
</li>
<li><p>Access points are used for extending the wireless coverage of an existing network and for increasing the number of users that can connect to it. Unlike a router, it does not handle traffic routing across multiple networks.</p>
</li>
</ul>
<p><code>6. Modem:</code></p>
<ul>
<li><p>A Modem is a device that connects your home, usually through a coax cable connection, to your Internet Service Provider (ISP), like Jio, Airtel, Idea, or others.</p>
</li>
<li><p>The modem takes signals from your ISP and translates them into signals your local devices can use, and vice versa. Unlike routers and access points, modems do not manage network traffic - they simply provide a pathway for it.</p>
</li>
</ul>
<p><code>7. Cables and Connectors:</code></p>
<p>These are used for transmitting signals. The three types of cables used are twisted pair cable, coaxial cable, and fiber-optic cables.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708424269993/3d803daa-a299-46ea-871a-6261d5f5fa3b.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Let's design our first network architecture.</div>
</div>

<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708425197966/47e54f2c-2a8e-412e-8485-37f1e2104406.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you want to simulate and design your personal network architecture, please use the ‘Cisco Packet Tracer Network Tool’. The student version is available free of cost. Download link: <a target="_blank" href="https://www.packettracernetwork.com/download/download-packet-tracer.html">Click Here</a></div>
</div>

<hr />
<h3 id="heading-the-logical-terminology-of-networking">The Logical Terminology of Networking</h3>
<ol>
<li><p><code>IP Address</code></p>
<p> Any digital device that is connected to the internet is assigned an address known as an IP address, which can be obtained in two ways: statically or dynamically. An IP (Internet Protocol) address is a unique numerical label assigned to each device participating in a computer network that uses the Internet Protocol for communication. It’s like a house address in the digital world for your system, allowing computers to send and receive information over the internet.</p>
<pre><code class="lang-plaintext"> C:\Users\RakaModify-PC&gt; ipconfig
</code></pre>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708430286247/13bf4b10-b2a3-4a0e-b3a1-0c81103d7088.png" alt class="image--center mx-auto" /></p>
<ol start="2">
<li><code>IP address versions:</code></li>
</ol>
<ul>
<li><p><code>IPv4:</code> This is the most commonly used version. An IPv4 address consists of four numbers separated by dots, each ranging from 0 to 255, for example 192.168.1.1. Each number is one octet of 8 binary digits, so a complete IPv4 address is 32 bits long.</p>
</li>
<li><p><code>IPv6:</code> This version, standardized in 1998, was introduced to deal with the exhaustion of IPv4 addresses. An IPv6 address uses 128 bits, which allows a vastly larger number of unique addresses.</p>
</li>
</ul>
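<p>The size difference between the two versions is easy to see with Python's standard <code>ipaddress</code> module. A quick sketch:</p>

```python
import ipaddress

# An IPv4 address is 32 bits; an IPv6 address is 128 bits.
v4 = ipaddress.ip_address("192.168.1.1")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.version, v4.max_prefixlen)  # 4 32
print(v6.version, v6.max_prefixlen)  # 6 128

# The dotted-quad form is just a 32-bit number written one octet at a time.
print(int(v4))  # 3232235777
```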
<ol start="3">
<li><code>Static or dynamic IP:</code></li>
</ol>
<ul>
<li><p><code>Static:</code> These are permanent IP addresses typically assigned manually by an administrator. They are fixed or permanent and do not change over time.</p>
</li>
<li><p><code>Dynamic:</code> These are temporary addresses leased to a device by a DHCP <code>(Dynamic Host Configuration Protocol)</code> server each time it joins the network. Your internet activity goes through your service provider, which routes it back to you using your IP address. A dynamic IP address can change; for example, turning your router off and on can change it.</p>
</li>
</ul>
<ol start="4">
<li><p><code>Private IP (Virtual IP) vs. public IP (Real IP)</code></p>
<ul>
<li><p><code>Private IP Address:</code> This is the IP address that is used to communicate within the same network. It is assigned by the router to each device on the network. Private IP addresses are more secure as they can only be traced within the local network and are not visible online.</p>
</li>
<li><p><code>Public IP Address:</code> This is the IP address that is used to communicate outside the network. It is assigned by the Internet Service Provider (ISP). Public IP addresses can be traced back to the ISP, revealing the geographical location. They are visible online and are unique on the internet.</p>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708432536326/8769afce-e2fb-4dd8-8031-9d44b1c3e9ce.png" alt class="image--center mx-auto" /></p>
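<p>If you want to check which category a given address falls into, Python's standard <code>ipaddress</code> module can tell you. A small sketch (8.8.8.8 is used here only because it is a well-known public address):</p>

```python
import ipaddress

# RFC 1918 private ranges include 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16.
print(ipaddress.ip_address("192.168.1.10").is_private)  # True
print(ipaddress.ip_address("10.5.5.5").is_private)      # True
print(ipaddress.ip_address("8.8.8.8").is_global)        # True  (public address)
```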
<ol start="5">
<li><p><code>Logical Address VS Physical Address</code></p>
<ul>
<li><p><code>Physical Address:</code> This is also known as the MAC (Media Access Control) address or link address. It is the address of a node as defined by its LAN or WAN. It’s used by the data link layer and is the lowest level of addresses. Physical addresses are unique to each device and are used to identify devices in the same network. However, they can only be used in local networks and not between different networks.</p>
</li>
<li><p><code>Logical Address:</code> Also referred to as the IP (Internet Protocol) address, it is used at the Network layer. This address facilitates universal communication that is not dependent on the underlying physical networks. There are two versions of IP addresses: IPv4 and IPv6. Logical addresses provide a layer of abstraction, allowing devices on different physical networks to communicate without depending on hardware addresses.</p>
</li>
</ul>
</li>
</ol>
<p>6. <code>What is MAC (Physical Address, Hardware Address) &amp; NIC (LAN Card, Network Adapter)</code></p>
<ul>
<li><p><code>NIC (Network Interface Card):</code> It’s a hardware component, also known as a LAN card or network adapter, that enables a computer to connect to a network. Each NIC is assigned an IP address when connected to a network, allowing the system to be identified and communicate with other systems on the network.</p>
</li>
<li><p><code>MAC (Media Access Control) Address:</code> This is a unique 48-bit hardware number embedded &amp; assigned to a NIC by the manufacturer. It’s used for network communication within a network segment. Unlike an IP address, which can change based on network assignment, a MAC address is a unique, physical address embedded into the device. It helps in uniquely identifying a system in the world.</p>
  <div data-node-type="callout">
  <div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">In Windows, try this command to see the MAC address: <code>ipconfig /all</code></div>
  </div>

<pre><code class="lang-plaintext">  C:\Users\Rakesh-PC&gt;  ipconfig /all
</code></pre>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708434312997/37ac7167-20ef-4c37-9cf1-be8bb1f59a5c.png" alt class="image--center mx-auto" /></p>
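<p>Since a MAC address is just a 48-bit number, you can also read and format it programmatically. A small sketch using Python's standard <code>uuid</code> module (note: <code>uuid.getnode()</code> may return a random 48-bit value if the machine's MAC address cannot be determined):</p>

```python
import uuid

# This machine's MAC address as a plain integer (48 bits).
mac = uuid.getnode()
assert 0 <= mac < 2 ** 48  # always fits in 48 bits

# Format it the usual way: six hex octets separated by colons.
formatted = ":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8))
print(formatted)  # e.g. "a4:5e:60:d2:3b:1c" (machine-specific)
```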
<ol start="7">
<li><p><code>ISP (Internet Service Provider)</code></p>
<p> ISP, or Internet Service Provider, is a company that gives people access to the internet. Customers pay a fee to the ISP, which can change based on how much data they use or the data plan they choose. ISPs are also called Internet Access Providers or online service providers. If you want to connect to the internet, you need an ISP. Ex: Idea, Airtel, Jio etc.</p>
</li>
<li><p><a target="_blank" href="https://www.rakamodify.online/understanding-dns-servers-the-internets-phonebook-made-simple"><code>DNS Server</code></a><code> (Domain Name System)</code></p>
<p> The Domain Name System (DNS) is like the phonebook of the Internet. When users type domain names such as ‘<a target="_blank" href="http://google.com">google.com</a>’ or ‘<a target="_blank" href="http://nytimes.com">nytimes.com</a>’ into web browsers, DNS is responsible for finding the correct IP address for those sites. DNS servers are machines dedicated to answering DNS queries.</p>
</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you want to know more about "DNS Server" Try this article on <a target="_blank" href="https://www.rakamodify.online">RakaModify</a>. Article Link: <a target="_blank" href="https://www.rakamodify.online/understanding-dns-servers-the-internets-phonebook-made-simple">What is DNS Server?</a></div>
</div>

<p>9. <code>NAT (Network Address Translation) &amp; how does NAT work?</code></p>
<ul>
<li><p>NAT, which stands for Network Address Translation, is like a translator for internet addresses. It allows multiple devices on a local network (like the devices connected to your home Wi-Fi) to connect to the internet using just one public IP address. When a device on your network wants to access the internet, NAT changes the device’s private IP address to a public one. This is because private IP addresses can’t be used on the internet.</p>
</li>
<li><p>When the internet sends data back, NAT changes the public IP address back to the private one. This way, all devices on your network can share the same public IP address but still have their own unique private addresses inside the network. This process is crucial for keeping your network secure and efficient.</p>
</li>
</ul>
<ol start="10">
<li><code>Client Machine &amp; Server Machine</code></li>
</ol>
<ul>
<li><p><code>A client machine</code> is like a regular computer that we use every day, such as a smartphone or a laptop. It’s designed for simple tasks and has basic hardware and software.</p>
</li>
<li><p><code>A server machine</code>, on the other hand, is a more powerful version of a client machine. It has high-end hardware and software configurations. Its main job is to provide services like web hosting (HTTP), secure shell access (SSH), database management, and storage to client machines over a network.</p>
  <div data-node-type="callout">
  <div data-node-type="callout-emoji">💡</div>
  <div data-node-type="callout-text">A <strong>machine</strong> can be a <strong>client</strong> or a <strong>server</strong>. A client machine uses services, and a server machine provides services.</div>
  </div>


</li>
</ul>
<hr />
<h3 id="heading-ip-address-classification-amp-subnetting-through-netmasking">IP Address Classification &amp; Subnetting through Netmasking.</h3>
<p><code>Classes of IPv4</code></p>
<p>IPv4 addresses are divided into five classes: A, B, C, D, and E. Each class has a range of IP addresses and is used for different types of networks:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708441273394/ba91ff4f-34d9-4da7-9702-a7b0806b12a9.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Please note that Class D and E are reserved for special uses. Class D is used for multicasting, while Class E is reserved for experimental use. Therefore, they do not have a private IP range, subnet mask, or a specific number of networks or hosts per network.</div>
</div>

<ul>
<li><p><code>Class A:</code> This class is used for very large networks, such as multinational corporations. The IP addresses in Class A range from 1.0.0.0 to 126.0.0.0. The first octet (the first set of numbers before the dot) is used for the network ID, and the remaining three octets are used for the host ID. This means there can be 126 networks (2^7 - 2) and 16,777,214 hosts (2^24 - 2) in each network. The 127.0.0.0/8 range (e.g., 127.0.0.1) is reserved for loopback (localhost) addresses.</p>
</li>
<li><p><code>Class B:</code> This class is used for medium-sized networks, like large universities. The IP addresses in Class B range from 128.0.0.0 to 191.255.0.0. The first two octets are used for the network ID, and the remaining two octets are used for the host ID. This allows for 16,384 networks (2^14) and 65,534 hosts (2^16 - 2) in each network.</p>
</li>
<li><p><code>Class C:</code> This class is used for small networks, like small businesses. The IP addresses in Class C range from 192.0.0.0 to 223.255.255.0. The first three octets are used for the network ID, and the last octet is used for the host ID. This allows for 2,097,152 networks (2^21) and 254 hosts (2^8 - 2) in each network.</p>
</li>
<li><p><code>Class D and E:</code> These classes are reserved for special uses, such as multicasting and research.</p>
</li>
</ul>
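<p>The host and network counts above follow directly from how the 32 bits are split between network ID and host ID; you can verify them with a few lines of arithmetic:</p>

```python
def usable_hosts(host_bits):
    # Two addresses per network are reserved: the network and broadcast addresses.
    return 2 ** host_bits - 2

print(usable_hosts(24))  # Class A: 16777214 hosts per network
print(usable_hosts(16))  # Class B: 65534
print(usable_hosts(8))   # Class C: 254
print(2 ** 14)           # Class B networks: 16384
print(2 ** 21)           # Class C networks: 2097152
```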
<hr />
<h3 id="heading-what-is-subnettingnetmasking">What is Subnetting/Netmasking?</h3>
<p>Subnetting is a method for dividing a network into smaller, more manageable pieces. This is done by taking bits from the host portion of the IP address and using them to create a subnet. A subnet mask is used to determine which part of the IP address is the network section and which part is the host section.</p>
<p>For example, consider an IP address of <code>192.168.1.0</code> with a subnet mask of <code>255.255.255.0</code>. The <code>255</code> in the subnet mask tells us that the corresponding octet in the IP address is all part of the network address. So in this case, <code>192.168.1</code> is the network address, and the last part (represented by <code>0</code>) is left for hosts on that network.</p>
<p><code>Importance of Subnetting/Netmasking</code></p>
<p>Subnetting is important for several reasons:</p>
<ul>
<li><p><code>Efficiency:</code> It allows for more efficient use of IP addresses. By dividing a network into subnets, you can ensure that IP addresses are not wasted.</p>
</li>
<li><p><code>Performance:</code> Subnetting can reduce network congestion. By confining network traffic to a single subnet, you can prevent that traffic from affecting other subnets.</p>
</li>
<li><p><code>Security:</code> Subnetting can improve network security. By isolating different parts of the network into different subnets, you can limit the impact of a security breach. If one subnet is compromised, the others remain secure.</p>
</li>
</ul>
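<p>The host math behind a netmask is simple: a /prefix leaves 32 − prefix bits for hosts, minus the network and broadcast addresses. It can be checked with shell arithmetic (a quick sketch; <code>usable_hosts</code> is an illustrative helper, not a real utility):</p>

```shell
#!/bin/sh
# Usable hosts behind a given CIDR prefix length:
# 2^(32 - prefix), minus the network and broadcast addresses.
usable_hosts() {
  echo $(( (1 << (32 - $1)) - 2 ))
}

usable_hosts 24   # mask 255.255.255.0 -> 254
usable_hosts 16   # mask 255.255.0.0   -> 65534
usable_hosts 29   #                    -> 6
```
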
<hr />
<h3 id="heading-case-study-to-understand-network-amp-netmask"><strong>Case Study: To Understand Network &amp; Netmask</strong></h3>
<blockquote>
<p>TechCo is a small company that is divided into five key departments: HR, Tech, Accounts, Sales, and Others. Each department is equipped with a different number of computers to meet its specific needs. The HR department has 5 computers, Accounts has 15, Tech has 60, Sales has 30, and Others has 11.</p>
<p>To ensure smooth inter-departmental communication, TechCo purchased a “Type-C” network range. The challenge was to divide this network range efficiently among the five departments, a process known as subnetting. The goal was to provide each department with enough network addresses for its computers, while minimizing the number of unused addresses.</p>
<p>This case study will explore how TechCo successfully implemented subnetting to optimize its network division across the five departments in a simple and understandable manner.</p>
<p><strong><mark>Answer:</mark></strong> TechCo, a small company, is divided into five sub-departments: HR, Tech, Accounts, Sales, and Others. Each department has a different number of computers: HR has 5, Accounts has 15, Tech has 60, Sales has 30, and Others has 11.</p>
<p>To establish an efficient network among these departments, TechCo purchased a Type-C network with a range of <code>192.168.25.0 - 192.168.25.255</code> and divided it into subnets. The division of the network among the departments is as follows:</p>
<ul>
<li><p>The <strong>Network Address</strong> is the first IP in the range, which is <strong>192.168.25.0</strong>.</p>
</li>
<li><p>The <strong>Broadcast Address</strong> is the last IP in the range, which is <strong>192.168.25.255</strong>.</p>
</li>
<li><p>The loopback (localhost) address, <strong>127.0.0.1</strong>, is reserved system-wide and is not part of this range.</p>
</li>
</ul>
</blockquote>
<p>Each subnet is sized to the smallest power of two that covers its computers plus that subnet's own network and broadcast addresses, and subnets are allocated largest-first so the blocks align on valid boundaries:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Department</td><td>No. of Computers</td><td>Subnet Size</td><td>Usable Hosts</td><td>Unused Host Addresses</td><td>Subnet (CIDR)</td></tr>
</thead>
<tbody>
<tr>
<td>Tech</td><td>60</td><td>64</td><td>62</td><td>2</td><td>192.168.25.0/26 (hosts .1–.62)</td></tr>
<tr>
<td>Sales</td><td>30</td><td>32</td><td>30</td><td>0</td><td>192.168.25.64/27 (hosts .65–.94)</td></tr>
<tr>
<td>Accounts</td><td>15</td><td>32</td><td>30</td><td>15</td><td>192.168.25.96/27 (hosts .97–.126)</td></tr>
<tr>
<td>Others</td><td>11</td><td>16</td><td>14</td><td>3</td><td>192.168.25.128/28 (hosts .129–.142)</td></tr>
<tr>
<td>HR</td><td>5</td><td>8</td><td>6</td><td>1</td><td>192.168.25.144/29 (hosts .145–.150)</td></tr>
</tbody>
</table>
</div><blockquote>
<p>This case study demonstrates how TechCo efficiently divided its network among its departments, ensuring optimal use of resources.</p>
</blockquote>
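<p>The sizing rule used above — pick the smallest power of two that fits the computers plus the network and broadcast addresses — can be expressed as a short shell sketch (<code>prefix_for_hosts</code> is a made-up name for illustration):</p>

```shell
#!/bin/sh
# Smallest subnet (largest prefix) that fits N hosts:
# find the smallest power of two >= N + 2 (network + broadcast).
prefix_for_hosts() {
  need=$(( $1 + 2 ))
  size=2
  prefix=31
  while [ "$size" -lt "$need" ]; do
    size=$(( size * 2 ))
    prefix=$(( prefix - 1 ))
  done
  echo "$prefix"
}

prefix_for_hosts 60   # Tech  -> /26
prefix_for_hosts 30   # Sales -> /27
prefix_for_hosts 5    # HR    -> /29
```

<p>Note that Accounts, with 15 computers, lands on a /27 rather than a /28: a /28 offers only 14 usable hosts once its network and broadcast addresses are excluded.</p>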
<hr />
<h3 id="heading-what-is-tcpip-networking-model">What is the TCP/IP Networking Model?</h3>
<p>The TCP/IP network model is a simplified, four-layered set of communication protocols that describes how data communications are packetized, addressed, transmitted, routed, and received between computers over a network. Here’s a detailed explanation of each layer:</p>
<ol>
<li><p><code>Application Layer:</code> Each application has specifications for communication so that clients and servers can communicate across platforms. Common protocols include SSH, HTTPS (secure web), FTP (file sharing), and SMTP (electronic mail delivery).</p>
</li>
<li><p><code>Transport Layer:</code> This layer uses TCP and UDP as transport protocols. TCP is a reliable connection-oriented communication, while UDP is a connectionless datagram protocol. Application protocols can use either TCP or UDP ports. When a packet is sent on the network, the combination of the service port and IP address forms a socket. Each packet has a source socket and a destination socket. This information can be used when monitoring and filtering network traffic.</p>
</li>
<li><p><code>Internet Layer:</code> The Internet, or network layer, carries data from the source host to the destination host. The IPv4 and IPv6 protocols are Internet layer protocols. Each host has an IP address and a prefix to determine network addresses. Routers are used to connect networks.</p>
</li>
<li><p><code>Link Layer:</code> The link, or media access, layer provides the connection to physical media. The most common types of networks are wired Ethernet (802.3) and wireless Wi-Fi (802.11). Each physical device has a Media Access Control (MAC) address, also known as a hardware address, to identify the destination of packets on the local network segment.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708508136187/e39671f2-f446-4b46-85b4-cfd70241ca5a.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<hr />
<h3 id="heading-what-is-ports-amp-sockets"><strong>What are Ports &amp; Sockets?</strong></h3>
<ul>
<li><p>Port = Running service address on the server</p>
</li>
<li><p>IP Address = Your system's logical address on the internet</p>
</li>
<li><p>Socket = IP address + Service Port</p>
</li>
</ul>
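<p>Since a socket is nothing more than the pairing of an IP address and a service port, splitting one apart is plain string work. A minimal sketch using POSIX parameter expansion:</p>

```shell
#!/bin/sh
# A socket written as "IP:port"; split it into its two components.
socket="192.168.10.25:3306"
ip=${socket%:*}       # strip the shortest ":..." suffix -> the IP address
port=${socket##*:}    # strip the longest "...:" prefix -> the port number
echo "IP=$ip PORT=$port"
```
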
<p>Why are sockets important and useful?</p>
<ul>
<li><p>In a multitasking system, multiple services can run concurrently. Each device on the internet is identified by an IP address, which can be either static or dynamic. On a system level, while all services share the same IP address, they run individually on specific ports. For instance, in a network, your system might be assigned an IP address like 192.168.10.25/24. On this system, multiple servers could be running concurrently, each on its own port. Examples include a database server on port 3306, SSH on port 22, a web server on ports 443 and 80, and NFS on port 2049.</p>
</li>
<li><p>A socket is one endpoint of a two-way communication link between two programs running on the network. When user “A” wants to send data to user “B” at a different location, the data is broken down into smaller data packets. These packets follow the TCP/IP protocol for transmission. Each packet contains data and the destination socket information (IP address and port number). The socket itself is an interface for sending and receiving data.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708510470114/91021ca9-ccfa-4e50-a02d-212b13f6284e.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-plaintext"># vim /etc/services  # view the well-known port assignments
# netstat -tulpn     # show open, listening service ports
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708510367678/b8f4c92f-9771-4819-83ef-1e3b967a108c.png" alt class="image--center mx-auto" /></p>
<ul>
<li>A <code>socket</code> is the combination of an <code>IP address</code> and a <code>port number</code>. When a packet is sent on the network, it has a source socket and a destination socket. The socket helps in identifying the source and destination of the packet, which is crucial for routing and delivering the packet correctly. This concept is used in the transport layer of the TCP/IP model for monitoring and filtering network traffic</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">A system has a total of 65,536 ports (0–65535). Well-known port assignments are listed in <code>/etc/services</code>.</div>
</div>

<hr />
<h3 id="heading-networkmanager-amp-configuration-setup">NetworkManager &amp; Configuration Setup</h3>
<ol>
<li><p><code>What is NetworkManager?</code> NetworkManager is a service that monitors and manages a system’s network settings. It is designed to simplify and automate the control of network connections.</p>
<pre><code class="lang-plaintext"> # systemctl status NetworkManager.service
</code></pre>
</li>
<li><p><code>Purpose of NetworkManager:</code> The purpose of NetworkManager is to keep track of network devices and connections, and to ensure that network access is available when needed and not used when not needed.</p>
</li>
<li><p><code>Interaction with NetworkManager:</code> Users can interact with the NetworkManager service via the command line <code>(nmcli)</code> or with graphical tools <code>(nmtui)</code>. In the GNOME graphical environment, a Notification Area applet displays network configuration and status information that is received from the NetworkManager daemon.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708666634390/d918424e-1c80-4a05-9995-d941fd2ee696.jpeg" alt class="image--center mx-auto" /></p>
</li>
<li><p><code>Configuration Files:</code> The configuration files for the service are stored in the <code>/etc/NetworkManager/system-connections/</code> directory.</p>
</li>
<li><p><code>Network Devices and Connections:</code> A network device is a physical or virtual network interface that provides for network traffic. A connection is a collection of related configuration settings for a single network device, also known as a network profile. Each connection must have a unique name or ID, which can match the device name that it configures.</p>
</li>
<li><p><code>Multiple Connection Configurations:</code> A single device can have multiple connection configurations and switch between them, but only one connection can be active per device. For example, a laptop wireless device might configure a fixed IP address for use at a secure work site in one connection, but might configure a second connection with an automated address and a virtual private network (VPN) to access the same company network from home.</p>
</li>
<li><p><code>Changes in Red Hat Enterprise Linux 8:</code> Starting in Red Hat Enterprise Linux 8, ifcfg format configuration files and the <code>/etc/sysconfig/network-scripts/</code> directory are deprecated. NetworkManager now uses an INI-style key file format, which is a key-value pair structure to organize properties. NetworkManager stores network profiles in the <code>/etc/NetworkManager/system-connections/</code> directory. For compatibility with earlier versions, ifcfg format connections in the <code>/etc/sysconfig/network-scripts/</code> directory are still recognized and loaded.</p>
</li>
<li><p><code>How to View Network Information?</code></p>
<pre><code class="lang-plaintext"> # nmcli connection show
 # nmcli connection show &lt;connection-name&gt;
 # nmcli connection show --active
 # nmcli device show
 # nmcli device show &lt;device-name&gt;
 # nmcli device status
</code></pre>
</li>
<li><p><code>How to Add a Network Connection?</code></p>
<ul>
<li><p>Build a static (manual) IPv4 connection (here set to autoconnect on startup):</p>
<pre><code class="lang-plaintext">  # nmcli con add con-name test type ethernet ifname ens160 \
  ipv4.addresses 192.168.199.130/24 ipv4.dns 8.8.8.8 \
  ipv4.gateway 192.168.199.254 connection.autoconnect yes ipv4.method manual
</code></pre>
</li>
<li><p>Build a connection that uses a DHCP service and has the device autoconnect on startup:</p>
<pre><code class="lang-plaintext">  # nmcli con add con-name dynamo type ethernet \
  ifname ens160 connection.autoconnect yes ipv4.method auto
</code></pre>
</li>
</ul>
</li>
<li><p><code>How to Modify an Existing Network Connection?</code></p>
<ul>
<li><p>Modify through Command Line interface</p>
<pre><code class="lang-plaintext">  # nmcli con mod test ipv4.addresses 192.0.2.2/24 \
  ipv4.gateway 192.0.2.254 connection.autoconnect yes
</code></pre>
</li>
<li><p>Modify through direct profile configuration file</p>
<pre><code class="lang-plaintext">  # vim /etc/NetworkManager/system-connections/test.nmconnection
  # systemctl restart NetworkManager
</code></pre>
  <div data-node-type="callout">
  <div data-node-type="callout-emoji">💡</div>
  <div data-node-type="callout-text">Some settings can have multiple values. A specific value can be added to the list or deleted from the connection settings by adding a plus (+) or minus (-) symbol to the start of the setting name. If a plus or minus is not included, then the specified value replaces the setting's current list. The following example adds the 8.8.4.4 DNS server to the test connection.</div>
  </div>

<pre><code class="lang-plaintext">  # nmcli con mod test +ipv4.dns 8.8.4.4
</code></pre>
</li>
</ul>
</li>
<li><p><code>How to work with connection profiles?</code></p>
<pre><code class="lang-plaintext"># nmcli connection show
# nmcli connection up &lt;connection-name&gt;
# nmcli connection down &lt;connection-name&gt;
# nmcli device disconnect &lt;device-IF-name&gt;
# nmcli connection reload
</code></pre>
</li>
<li><p><code>How to delete a Network Connection?</code></p>
<pre><code class="lang-plaintext"># nmcli con del &lt;connection-name&gt;
</code></pre>
</li>
</ol>
<hr />
<h3 id="heading-useful-network-management-commands">Useful Network Management Commands</h3>
<ol>
<li><p><strong>ip</strong>: This command provides information about every network interface. The correct usage is:</p>
<pre><code class="lang-bash"> ip a
 ip addr
</code></pre>
</li>
<li><p><strong>tracepath</strong>: This command is used to find network delays. It does not require root privileges. The correct usage is:</p>
<pre><code class="lang-bash"> tracepath www.example.com
</code></pre>
</li>
<li><p><strong>ss</strong>: This command is a replacement for the netstat command. It fetches information directly from kernel space, which makes it faster than netstat. The correct usage is:</p>
<pre><code class="lang-bash"> ss
</code></pre>
</li>
<li><p><strong>host</strong>: This command shows the IP address for a hostname and the domain name for an IP address. It is also used for DNS lookups. The correct usage is:</p>
<pre><code class="lang-bash"> host -t A www.example.com
</code></pre>
</li>
<li><p><strong>hostname</strong>: This command is used to view and set the system’s hostname. The correct usage is:</p>
<pre><code class="lang-bash"> hostname
</code></pre>
</li>
<li><p><strong>curl and wget</strong>: These commands are used to download files from the internet using the command line interface (CLI). The correct usage is:</p>
<pre><code class="lang-bash"> curl -O https://example.com/path/to/file
 wget https://example.com/path/to/file
</code></pre>
</li>
<li><p><strong>mtr</strong>: This command combines traceroute and ping commands. It regularly shows information related to the packets transferred using the ping time of all hops. The correct usage is:</p>
<pre><code class="lang-bash"> mtr www.example.com
</code></pre>
</li>
<li><p><strong>whois</strong>: This command fetches all website-related information. The correct usage is:</p>
<pre><code class="lang-bash"> whois www.example.com
</code></pre>
</li>
<li><p><strong>tcpdump</strong>: This command is widely used in network analysis. It analyses the traffic passing from the network interface and displays it. The correct usage is:</p>
<pre><code class="lang-bash"> tcpdump -i eth0
</code></pre>
</li>
<li><p><strong>tracepath</strong>: This command traces the path that packets take from your computer to the destination address. The correct usage is:</p>
<pre><code class="lang-bash">tracepath www.example.com
</code></pre>
</li>
<li><p><strong>tracepath6</strong>: This command is similar to tracepath, but it is used for IPv6 addresses. The correct usage is:</p>
<pre><code class="lang-bash">tracepath6 2001:db8:0:2::451
</code></pre>
</li>
<li><p><strong>dig</strong>: This command is used to query DNS name servers for information about host addresses, mail exchanges, name servers, and related information. The correct usage is:</p>
<pre><code class="lang-bash">dig www.example.com
</code></pre>
</li>
<li><p><strong>getent hosts</strong>: This command retrieves entries from the specified administrative database. The correct usage is:</p>
<pre><code class="lang-bash">getent hosts www.example.com
</code></pre>
</li>
<li><p><strong>ss -tulpn</strong>: This command is used to dump socket statistics and displays listening sockets. The correct usage is:</p>
<pre><code class="lang-bash">ss -tulpn
</code></pre>
</li>
<li><p><strong>ss -ta</strong>: This command displays all non-listening sockets (that’s established connections). The correct usage is:</p>
<pre><code class="lang-bash">ss -ta
</code></pre>
</li>
<li><p><strong>ss -lt</strong>: This command displays only listening sockets. The correct usage is:</p>
<pre><code class="lang-bash">ss -lt
</code></pre>
</li>
<li><p><strong>ip -br addr</strong>: This command displays brief information about the IP addresses of all network interfaces. The correct usage is:</p>
<pre><code class="lang-bash">ip -br addr
</code></pre>
</li>
<li><p><strong>netstat -tulpn</strong>: This command displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. The correct usage is:</p>
<pre><code class="lang-bash">netstat -tulpn
</code></pre>
</li>
<li><p><strong>ip route</strong>: This command displays the IP routing table. The correct usage is:</p>
<pre><code class="lang-bash">ip route
</code></pre>
</li>
<li><p><strong>ip -6 route</strong>: This command displays the IPv6 routing table. The correct usage is:</p>
<pre><code class="lang-bash">ip -6 route
</code></pre>
</li>
<li><p><strong>ping</strong>: This command sends ICMP ECHO_REQUEST packets to network hosts. The correct usage is:</p>
<pre><code class="lang-bash">ping www.example.com
</code></pre>
</li>
<li><p><strong>ping6</strong>: This command is similar to ping, but it is used for IPv6 addresses. The correct usage is:</p>
<pre><code class="lang-bash">ping6 ::1
</code></pre>
</li>
<li><p><strong>ping -c3 192.0.2.254</strong>: This command sends exactly 3 ICMP ECHO_REQUEST packets to the host with the IP address 192.0.2.254. The correct usage is:</p>
<pre><code class="lang-bash">ping -c3 192.0.2.254
</code></pre>
</li>
<li><p><strong>ping6 2001:db8:0:1::1</strong>: This command sends ICMP ECHO_REQUEST packets to the IPv6 address 2001:db8:0:1::1. The correct usage is:</p>
<pre><code class="lang-bash">ping6 2001:db8:0:1::1
</code></pre>
</li>
<li><p><strong>whois</strong>: This command is used to retrieve domain name information from WHOIS servers. The correct usage is:</p>
<pre><code class="lang-bash">whois example.com
</code></pre>
</li>
<li><p><strong>nslookup</strong>: This command is used to query Internet domain name servers. The correct usage is:</p>
<pre><code class="lang-bash">nslookup www.example.com
</code></pre>
</li>
<li><p><strong>ifconfig</strong>: This command is used to display or configure a network interface. The correct usage is:</p>
<pre><code class="lang-bash">ifconfig
</code></pre>
</li>
</ol>
<hr />
<h3 id="heading-reference-links">Reference Links:</h3>
<ul>
<li><a target="_blank" href="https://aws.amazon.com/what-is/computer-networking/">https://aws.amazon.com/what-is/computer-networking/</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Linux Services & Daemons: The Hidden Hero's of your Linux Server]]></title><description><![CDATA[Introductions & Importance
In a Linux server, services and daemons are like the unsung heroes working behind the scenes to ensure everything runs smoothly. They are essentially programs that run in the background, performing various tasks necessary f...]]></description><link>https://projectwala.site/linux-services-daemons-the-hidden-heros-of-your-linux-server</link><guid isPermaLink="true">https://projectwala.site/linux-services-daemons-the-hidden-heros-of-your-linux-server</guid><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[linux-basics]]></category><category><![CDATA[System Architecture]]></category><category><![CDATA[systemd]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Thu, 15 Feb 2024 08:57:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707975506523/226c0a54-a330-4693-b340-6866c384d355.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h3 id="heading-introductions-amp-importance">Introductions &amp; Importance</h3>
<p>In a Linux server, services and daemons are like the unsung heroes working behind the scenes to ensure everything runs smoothly. They are essentially programs that run in the background, performing various tasks necessary for the system to function properly.</p>
<p>Here are some examples:</p>
<ol>
<li><code>Web Server</code> <strong>(e.g., Apache or Nginx)</strong>: This service is responsible for serving web pages. When you type a URL into your browser, the request goes to the web server, which then sends back the requested page. Without this service, your website wouldn’t be accessible to users.</li>
<li><code>NTP Server</code> <strong>(Network Time Protocol)</strong>: This daemon synchronizes the system’s clock with global time servers. Accurate timekeeping is crucial for many server tasks and processes. For example, it helps in scheduling tasks, logging events accurately, and ensuring secure communication.</li>
<li><code>SSH Server:</code> This service allows secure remote access to the server. Administrators use SSH to log in and manage the server from any location. Without the SSH service, remote management would be much more difficult and less secure.</li>
</ol>
<p>Imagine a busy airport - the services and daemons are like the air traffic control, baggage handling, and maintenance crews. You might not see them, but without them, planes wouldn’t be able to land or take off safely, passengers wouldn’t get their luggage, and the whole operation would come to a standstill.</p>
<p>So, in a nutshell, services and daemons are vital for the continuous and correct operation of a Linux server. They handle essential tasks and enable the server to perform its intended functions seamlessly. The next important question is: what are daemons and services in Linux?</p>
<hr />
<h3 id="heading-what-is-daemons">What is daemon(s)</h3>
<p>In Linux, a <code>daemon</code> is a background process that operates autonomously, performing tasks without user intervention. They are utility programs that run silently in the background to monitor and take care of certain subsystems to ensure that the operating system runs properly. Almost all daemons have names that end with the letter <code>“d”</code>. For example, <code>httpd</code> is the daemon that handles the Apache server, and <code>sshd</code> handles SSH remote access connections.</p>
<p>Daemons are important in a Linux OS server for several reasons:</p>
<ol>
<li><p><code>Handling Network Requests:</code> Daemons enable your system to correctly respond to network requests by associating each request with a compatible network port.</p>
</li>
<li><p><code>Executing Scheduled System Tasks:</code> Daemons make it possible to run or execute scheduled system tasks. The daemon responsible for this specific task is called <code>cron</code>.</p>
</li>
<li><p><code>Monitoring System Performance:</code> Daemons also offer a priceless contribution in monitoring the performance of your system.</p>
</li>
<li><p><code>Managing Essential Services:</code> They act as the backbone of many essential computing services, operating silently and efficiently. From managing system logs with the <code>syslogd</code> daemon to handling mail services with the <code>postfix</code> daemon, these background processes are integral to the functionality and efficiency of our systems.</p>
</li>
</ol>
<p>Now, let’s understand the difference between <code>a Process</code>, <code>a Daemon</code>, and <code>a Service</code>:</p>
<ul>
<li><p><code>A Process</code> is a running instance of executing program code. At a particular instant of time, it can be either running, sleeping, or zombie (completed process, but waiting for its parent process to pick up the return value).</p>
</li>
<li><p><code>A Daemon</code> is a specific type of process that runs in the background and is not interactive. They have no controlling terminal. They perform certain actions at predefined times or in response to certain events. In Unix-like systems, the names of daemons end in ‘d’.</p>
</li>
<li><p><code>A Service</code> is a program that responds to requests from other programs over some inter-process communication mechanism (usually over a network). <code>A service doesn’t have to be a daemon, but usually is</code>. A user application with a GUI could have a service built into it. For example, a file-sharing application. In Windows, daemons are called services. In essence, daemons are the silent heroes of computing, managing tasks and services that ensure our systems run smoothly. They are effectively invisible but essential.</p>
</li>
</ul>
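<p>The defining traits of a daemon — running in the background with no controlling terminal — can be imitated with a throwaway shell worker. This is a sketch only; real daemons also detach their session, change their working directory, and redirect all file descriptors:</p>

```shell
#!/bin/sh
# Start a background worker detached from the terminal's input/output,
# verify that it is alive, then clean it up.
nohup sh -c 'while :; do sleep 1; done' >/dev/null 2>&1 &
worker=$!

# kill -0 sends no signal; it only checks that the process exists.
status=$(kill -0 "$worker" 2>/dev/null && echo running)
echo "worker $worker is $status"

kill "$worker"        # a real daemon would keep running indefinitely
```
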
<hr />
<h3 id="heading-what-is-systemd">What is systemd?</h3>
<p><code>systemd</code> is also a daemon. It is a system and service manager for Linux operating systems. It is designed to be backward compatible with SysV init scripts and provides several features such as</p>
<ul>
<li><p>parallel startup of system services at boot time,</p>
</li>
<li><p>on-demand activation of daemons,</p>
</li>
<li><p>dependency-based service control logic.</p>
</li>
</ul>
<p>Its primary component is a <code>“system and service manager”</code> – an init system used to bootstrap user space and manage user processes. It also provides replacements for various daemons and utilities, including device management, login management, network connection management, and event logging.</p>
<pre><code class="lang-sql"><span class="hljs-comment"># pstree | less</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707895128938/ea6049ea-9819-4e9e-9992-d7df5620b06a.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-what-are-service-units">What are Service Units?</h3>
<p>In RHEL Linux, systemd uses something called “units” to manage different types of tasks. Here are the three types of units mentioned:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707982696103/88e7ce11-61cd-4f95-9ae0-476062777119.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Service Units (.service)</strong>: Service units have a <code>.service</code> extension and represent system services. You can use service units to start frequently accessed daemons, such as a web server.</p>
</li>
<li><p><strong>Socket Units (.socket)</strong>: These are like the telephone operators of your system. They monitor communication points (sockets) between different processes. If a process wants to talk to another (connects to the socket), systemd starts the service (like a worker) needed for that communication.</p>
</li>
<li><p><strong>Path Units (.path)</strong>: These are like the watchmen of your system. They keep an eye on specific locations in your filesystem. If they notice a change (like a new file being added), they can start a service.</p>
</li>
<li><p><strong>Daemons</strong>: In the context of systemd, a daemon is usually managed through a <strong>service unit</strong>. When a service unit is started, it typically launches a daemon: a program that runs in the background, waiting for events to occur and offering services. A good example is a web server daemon, which waits for a request to come in and then responds to that request.</p>
</li>
</ol>
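<p>A service unit is just a small INI-style text file. Here is a minimal hypothetical example — the name <code>myapp.service</code> and the <code>ExecStart</code> path are made up for illustration:</p>

```ini
# /etc/systemd/system/myapp.service  (hypothetical example)
[Unit]
Description=Example background daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

<p>After creating such a file, <code>systemctl daemon-reload</code> makes systemd pick it up, and <code>systemctl enable --now myapp.service</code> starts the service and enables it at boot.</p>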
<p>To manage these units, you use the <code>systemctl</code> command. It’s like the manager of all these workers, operators, and watchmen. You can use it to start, stop, restart, and check the status of these units. Display the available unit types with the <code>systemctl -t help</code> command:</p>
<pre><code class="lang-sql"><span class="hljs-comment"># systemctl -t help</span>

Available unit types:
service
mount
swap
socket
target
device
automount
timer
path
slice
scope
</code></pre>
<hr />
<h3 id="heading-how-to-manage-daemons">How to Manage Daemons</h3>
<p>To manage daemons in Linux, you can use the <code>systemctl</code> command. Here are some common operations you can perform:</p>
<ul>
<li><strong>Check the</strong> <code>STATUS</code> <strong>of Service and</strong> <code>START, STOP, RESTART, RELOAD, ENABLE, DISABLE, MASK, UNMASK</code> <strong>a service</strong>:</li>
</ul>
<pre><code class="lang-sql"><span class="hljs-comment"># systemctl status  &lt;SERVICE-NAME&gt;</span>
<span class="hljs-comment"># systemctl start   &lt;SERVICE-NAME&gt;</span>
<span class="hljs-comment"># systemctl stop    &lt;SERVICE-NAME&gt;</span>
<span class="hljs-comment"># systemctl restart &lt;SERVICE-NAME&gt;</span>
<span class="hljs-comment"># systemctl reload  &lt;SERVICE-NAME&gt;</span>
<span class="hljs-comment"># systemctl enable  &lt;SERVICE-NAME&gt;</span>
<span class="hljs-comment"># systemctl disable &lt;SERVICE-NAME&gt;</span>
<span class="hljs-comment"># systemctl mask    &lt;SERVICE-NAME&gt;</span>
<span class="hljs-comment"># systemctl unmask  &lt;SERVICE-NAME&gt;</span>
</code></pre>
<p>These commands are used to manage services (or daemons) in Linux using <code>systemd</code>. Here’s what each command does:</p>
<ul>
<li><p><code>systemctl status &lt;SERVICE-NAME&gt;</code>: This command displays the current status of a service. It shows whether the service is running, stopped, or in any other state.</p>
</li>
<li><p><code>systemctl start &lt;SERVICE-NAME&gt;</code>: This command starts a service. If the service is already running, it does nothing.</p>
</li>
<li><p><code>systemctl stop &lt;SERVICE-NAME&gt;</code>: This command stops a running service.</p>
</li>
<li><p><code>systemctl restart &lt;SERVICE-NAME&gt;</code>: This command first stops and then starts a service. It’s useful when you’ve made configuration changes that need to be picked up by the service.</p>
</li>
<li><p><code>systemctl reload &lt;SERVICE-NAME&gt;</code>: This command asks a service to reload its configuration. Not all services support this, but for those that do, it’s a way to pick up configuration changes without the disruption of stopping and starting the service.</p>
</li>
<li><p><code>systemctl enable &lt;SERVICE-NAME&gt;</code>: This command enables a service to start at boot time.</p>
</li>
<li><p><code>systemctl disable &lt;SERVICE-NAME&gt;</code>: This command disables a service from starting at boot time.</p>
</li>
<li><p><code>systemctl mask &lt;SERVICE-NAME&gt;</code>: This command completely disables a service - it cannot be started manually or at boot time.</p>
</li>
<li><p><code>systemctl unmask &lt;SERVICE-NAME&gt;</code>: This command removes the mask, allowing a service to be started manually or at boot time. It’s used when you want to re-enable a service that was previously masked.</p>
</li>
</ul>
<hr />
<h3 id="heading-difference-between-restart-vs-reload-amp-mask-vs-unmask-the-services-in-rhel-linux">Difference between "restart vs reload" &amp; "mask vs unmask" for services in RHEL Linux</h3>
<p><mark>RESTART</mark></p>
<p>When you <strong>restart</strong> a service, it first stops the current running instance of the service. The system sends a <code>SIGTERM (A polite way to stop a process)</code> signal to the service to terminate it. If the service does not stop within a certain time frame, a <code>SIGKILL (A forceful way to stop the process)</code> signal is sent to force the termination. After the service is stopped, it is then started again, hence the term “restart”. Each running process on a system is assigned a unique process ID (PID). When the service is stopped, it no longer has a PID. Then, the service starts again and is assigned a new PID. If any changes have been made to the service’s configuration files, these are read and applied when the service starts again. For example:</p>
<pre><code class="lang-sql"><span class="hljs-comment"># systemctl restart httpd</span>
</code></pre>
<p><mark>RELOAD</mark></p>
<p>On the other hand, <strong>reload</strong> tells the running service to check its configuration files again and apply any changes without actually stopping and starting the service. When you <strong>reload</strong> a service, the system sends a <code>SIGHUP (Reload process configurations files)</code> signal to the service. This signal instructs the service to reload its configuration files while continuing to run. This allows the service to apply any changes made in its configuration files without interrupting its operation. For example:</p>
<pre><code class="lang-sql"><span class="hljs-comment"># systemctl reload httpd</span>
</code></pre>
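Outside of systemd, this reload mechanism is plain POSIX signal handling. Below is a minimal, self-contained sketch (the /tmp file paths and the reload function name are invented for illustration) of a process that re-reads its configuration file when it receives SIGHUP, the same signal systemctl reload delivers to a service's main process:

```shell
# Minimal sketch: re-read a config file on SIGHUP, the signal
# that 'systemctl reload' delivers to a service's main process.
CONF=/tmp/demo.conf
LOG=/tmp/demo.log
: > "$LOG"
echo "color=red" > "$CONF"

reload() {
    CONFIG=$(cat "$CONF")
    echo "reloaded: $CONFIG" >> "$LOG"
}
trap reload HUP               # register the SIGHUP handler

reload                        # initial configuration load
echo "color=blue" > "$CONF"   # an admin edits the config file
kill -HUP $$                  # deliver SIGHUP to ourselves
cat "$LOG"                    # shows both the old and the new value
```

The process never stops, yet it picks up the new value, which is exactly the disruption-free behavior reload offers over restart.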
<p><mark>MASK</mark></p>
<p><strong>Masking</strong> a service links its unit file to <code>/dev/null</code>, making it impossible to start the service, whether manually or by another unit. This is used when you don’t want a certain service to run at all, even if another service tries to start it.</p>
<ul>
<li><p>Let’s consider a case study involving the <code>firewalld</code> and <code>iptables</code> services in CentOS/RHEL, which is a common scenario in real-world Linux system administration.</p>
</li>
<li><p>CentOS/RHEL 7 has both <code>firewalld</code> and <code>iptables</code> services for firewall management. However, it is recommended to use only one at a time to prevent conflicts. Let’s say you decide to use <code>firewalld</code> and want to prevent <code>iptables</code> from running even accidentally. Here’s how you can do it:</p>
</li>
<li><p><strong>Mask the</strong> <code>iptables</code> service:</p>
<pre><code class="lang-sql">  <span class="hljs-comment"># systemctl mask iptables</span>
</code></pre>
<p>  This command prevents the <code>iptables</code> service from being started, even by other services. Now check the status of the <code>iptables</code> service:</p>
<pre><code class="lang-sql">   <span class="hljs-comment"># systemctl status iptables</span>
</code></pre>
<p>  You should see a message indicating that the service is masked. Now, suppose you later decide that you want to use <code>iptables</code> instead of <code>firewalld</code>. Here’s how you can unmask the <code>iptables</code> service:</p>
</li>
</ul>
<p><mark>UNMASK</mark></p>
<p><strong>Unmasking</strong> a service undoes the masking. If you’ve previously masked a service but now you want it to be able to start, you would unmask it.</p>
<ul>
<li><p><strong>Unmask the</strong> <code>iptables</code> service:</p>
<pre><code class="lang-sql">  <span class="hljs-comment"># systemctl unmask iptables</span>
</code></pre>
</li>
<li><p>This command removes the mask from the <code>iptables</code> service, allowing it to be started. Check the status of the <code>iptables</code> service:</p>
</li>
<li><pre><code class="lang-sql">  <span class="hljs-comment"># systemctl status iptables</span>
</code></pre>
<p>  You should see a message indicating that the service is loaded and inactive (dead), meaning it can now be started.</p>
</li>
</ul>
<p>This case study illustrates how the <code>mask</code> and <code>unmask</code> commands can be used to control which services are allowed to run on a CentOS/RHEL system, providing a powerful tool for system administrators. It’s important to note that while this example uses <code>firewalld</code> and <code>iptables</code>, the same principles apply to any services managed by <code>systemd</code>.</p>
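Under the hood, masking is nothing more than a symlink to /dev/null. The sketch below simulates it in a temporary directory (a stand-in for /etc/systemd/system, so it is safe to run on any machine, with or without systemd):

```shell
# Simulate what 'systemctl mask iptables' does under the hood.
# We use a temporary directory as a stand-in for /etc/systemd/system.
UNIT_DIR=/tmp/etc-systemd-system
mkdir -p "$UNIT_DIR"

# mask: link the unit name to /dev/null so systemd can never load it
ln -sf /dev/null "$UNIT_DIR/iptables.service"
readlink "$UNIT_DIR/iptables.service" > /tmp/mask-target.txt
cat /tmp/mask-target.txt      # prints: /dev/null

# unmask: just remove the symlink again
rm "$UNIT_DIR/iptables.service"
```

Because the unit file resolves to /dev/null, systemd finds an empty unit definition and refuses to start it; removing the symlink (unmask) restores normal behavior.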
<hr />
<h3 id="heading-details-in-systemctl-status-service">Details in "systemctl status SERVICE"</h3>
<pre><code class="lang-sql"><span class="hljs-comment"># systemctl status sshd</span>
● sshd.service - OpenSSH server daemon
     Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2024-02-15 10:47:44 IST; 2h 55min ago
       Docs: man:sshd(8)
             man:sshd_config(5)
   Main PID: 5746 (sshd)
      Tasks: 1 (limit: 22833)
     Memory: 1.7M
        CPU: 17ms
     CGroup: /system.slice/sshd.service
             └─5746 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"

Feb 15 10:47:43 server1.example.com systemd[1]: Starting OpenSSH server daemon...
Feb 15 10:47:44 server1.example.com sshd[5746]: Server listening on 0.0.0.0 port 22.
Feb 15 10:47:44 server1.example.com sshd[5746]: Server listening on :: port 22.
Feb 15 10:47:44 server1.example.com systemd[1]: Started OpenSSH server daemon.
</code></pre>
<p>The <code>systemctl status SERVICE</code> command provides a detailed report about the specified service. Here’s what each line in the output means:</p>
<ul>
<li><p><code>● sshd.service - OpenSSH server daemon</code>: This line shows the name of the service and a brief description of it.</p>
</li>
<li><p><code>Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)</code>: This line shows the path to the service’s unit file, whether the service is enabled to start at boot, and the vendor preset status.</p>
</li>
<li><p><code>Active: active (running) since Thu 2024-02-15 10:47:44 IST; 2h 55min ago</code>: This line shows the current state of the service, when it entered this state, and how long it has been in this state.</p>
</li>
<li><p><code>Docs: man:sshd(8) man:sshd_config(5)</code>: This line provides references to the man pages related to the service.</p>
</li>
<li><p><code>Main PID: 5746 (sshd)</code>: This line shows the main Process ID (PID) for the service.</p>
</li>
<li><p><code>Tasks: 1 (limit: 22833)</code>: This line shows the current number of tasks or threads the service is running and the limit set on them.</p>
</li>
<li><p><code>Memory: 1.7M</code>: This line shows the current memory consumption of the service.</p>
</li>
<li><p><code>CPU: 17ms</code>: This line shows the CPU time consumed by the service.</p>
</li>
<li><p><code>CGroup: /system.slice/sshd.service └─5746 "sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups"</code>: This line shows the control group hierarchy of the service and the commands being run under this service.</p>
</li>
<li><p><code>log entries from the service:</code> The lines starting with the date (e.g., <code>Feb 15 10:47:43</code> <a target="_blank" href="http://server1.example.com"><code>server1.example.com</code></a> <code>systemd[1]: Starting OpenSSH server daemon...</code>) are log entries from the service. They provide a chronological account of what the service has been doing. In this case, it shows when the OpenSSH server daemon started and that it’s listening on port 22 for both IPv4 and IPv6.</p>
</li>
</ul>
<hr />
<h3 id="heading-other-important-systemd-management-command">Other Important "systemd management" Commands</h3>
<p>Here are some important <code>systemctl</code> commands and their uses:</p>
<ol>
<li><strong>Change the default system target (run-level)</strong>: <code>systemctl set-default</code> changes the target the system boots into, while <code>systemctl isolate</code> switches to a target immediately, without a reboot.</li>
</ol>
<pre><code class="lang-sql"><span class="hljs-comment"># systemctl set-default [target]</span>
</code></pre>
<pre><code class="lang-sql"><span class="hljs-comment"># systemctl set-default graphical.target</span>
<span class="hljs-comment"># systemctl isolate graphical.target</span>
<span class="hljs-comment"># systemctl set-default multi-user.target</span>
<span class="hljs-comment"># systemctl isolate multi-user.target</span>
</code></pre>
<ol>
<li><p><strong>List all installed unit files and their states</strong>: This command is used to list all installed unit files and their states.</p>
<pre><code class="lang-sql"> <span class="hljs-comment"># systemctl list-unit-files</span>
</code></pre>
</li>
<li><p><strong>List the dependencies of a specific unit</strong>: This command is used to list the dependencies of a specific unit.</p>
<pre><code class="lang-sql"> <span class="hljs-comment"># systemctl list-dependencies [unit]</span>
</code></pre>
</li>
<li><p><strong>List all active sockets</strong>: This command is used to list all active sockets.</p>
<pre><code class="lang-sql"> <span class="hljs-comment"># systemctl list-sockets</span>
</code></pre>
</li>
<li><p><strong>List all active systemd jobs</strong>: This command is used to list all active systemd jobs.</p>
<pre><code class="lang-sql"> <span class="hljs-comment"># systemctl list-jobs</span>
</code></pre>
</li>
<li><p><strong>Show the status of all loaded and active systemd units</strong>: This command is used to show the status of all loaded and active systemd units.</p>
<pre><code class="lang-sql"> <span class="hljs-comment"># systemctl list-units</span>
</code></pre>
</li>
</ol>
<hr />
<h3 id="heading-interview-questions-amp-answers-related-to-systemd-management">Interview Questions &amp; Answers Related to "systemd management"</h3>
<ol>
<li><p><strong>What is systemd?</strong></p>
<p> <strong>Answer:</strong> Systemd is a system and service manager for Linux operating systems. It initializes and manages/maintains system processes after the Linux kernel has booted up.</p>
</li>
<li><p><strong>What is the role of a service in a systemd context?</strong></p>
<p> <strong>Answer:</strong> A service is a program that runs as a background process. In a systemd context, services are defined by service unit files, and systemd starts them and manages their state according to these files.</p>
</li>
<li><p><strong>How do you check the status of a service using systemd?</strong></p>
<p> <strong>Answer:</strong> You can check the status of a service using the command <code>systemctl status service_name</code>.</p>
</li>
<li><p><strong>How do you start, stop, or restart a service?</strong></p>
<p> <strong>Answer:</strong> You can use the commands <code>systemctl start service_name</code>, <code>systemctl stop service_name</code>, and <code>systemctl restart service_name</code> respectively.</p>
</li>
<li><p><strong>How do you enable or disable a service to start at boot?</strong></p>
<p> <strong>Answer:</strong> You can use the commands <code>systemctl enable service_name</code> and <code>systemctl disable service_name</code> respectively.</p>
</li>
<li><p><strong>What is a unit file in systemd?</strong></p>
<p> <strong>Answer:</strong> A unit file is a configuration file that defines the properties of system resources managed by systemd, such as services, sockets, devices, etc.</p>
</li>
<li><p><strong>What is the difference between</strong> <code>systemctl reload</code> and <code>systemctl restart</code>?</p>
<p> <strong>Answer:</strong> <code>systemctl reload</code> only reloads the configuration file of the service, while <code>systemctl restart</code> will stop and then start the service again.</p>
</li>
<li><p><strong>What is a socket in systemd?</strong></p>
<p> <strong>Answer:</strong> A socket is a special file used for inter-process communication, which can also be managed by systemd. Socket units in systemd have a <code>.socket</code> extension.</p>
</li>
<li><p><strong>What is the target in systemd and how is it used?</strong></p>
<p> <strong>Answer:</strong> A target is a unit that groups together other units, similar to how runlevels group services in SysVinit. They are used to create a system state or mode defined by the services they group.</p>
</li>
<li><p><strong>How do you list all active (running) services in systemd?</strong></p>
<p><strong>Answer:</strong> You can use the command <code>systemctl --type=service --state=running</code>.</p>
</li>
<li><p><strong>What is</strong> <code>journald</code> in the context of systemd?</p>
<p><strong>Answer:</strong> <code>journald</code> is a part of systemd that provides a centralized management of system logs.</p>
</li>
<li><p><strong>How do you see the log of a particular service using systemd?</strong></p>
<p><strong>Answer:</strong> You can use the command <code>journalctl -u service_name</code>.</p>
</li>
<li><p><strong>What is the cgroup in systemd?</strong></p>
<p><strong>Answer:</strong> Cgroup is a Linux kernel feature to limit, police and account the resource usage for a set of processes. In systemd, it’s used to track the processes that systemd starts.</p>
</li>
<li><p><strong>How do you mask a service in systemd?</strong></p>
<p><strong>Answer:</strong> You can use the command <code>systemctl mask service_name</code>. Masking a service makes it impossible to start it.</p>
</li>
<li><p><strong>What is the difference between</strong> <code>systemctl mask</code> and <code>systemctl disable</code>?</p>
<p><strong>Answer:</strong> <code>systemctl disable</code> prevents the service from starting at boot, but allows manual starting, while <code>systemctl mask</code> prevents the service from starting altogether, both at boot and manually.</p>
</li>
<li><p><strong>What is</strong> <code>systemctl daemon-reload</code> used for?</p>
<p><strong>Answer:</strong> <code>systemctl daemon-reload</code> is used to reload the systemd manager configuration. This includes reloading all unit files and recreating the entire dependency tree. This is typically done after creating or modifying a unit file.</p>
</li>
<li><p><strong>How do you check which version of systemd you are running?</strong></p>
<p><strong>Answer:</strong> You can use the command <code>systemctl --version</code>.</p>
</li>
<li><p><strong>Question:</strong> What is a daemon in the context of operating systems?</p>
<p><strong>Answer:</strong> A daemon is a type of program on Unix-like operating systems that runs silently in the background, without direct user interaction, waiting to be activated by the occurrence of a specific event or condition.</p>
</li>
<li><p><strong>Question:</strong> How does a service differ from a daemon?</p>
<p><strong>Answer:</strong> A service is a program or a set of programs that perform system-level operations in an operating system. A daemon is a type of service that runs in the background and is not directly interacted with by the user.</p>
</li>
<li><p><strong>Question:</strong> What is the role of <code>systemd</code> in managing services and daemons?</p>
<p><strong>Answer:</strong> <code>systemd</code> is a system and service manager for Linux operating systems. It initializes and manages/maintains/tracks system services and daemons, both during startup and while the system is running.</p>
</li>
<li><p><strong>Question:</strong> Explain the difference between <code>service</code> and <code>systemctl</code> commands.</p>
<p><strong>Answer:</strong> Both <code>service</code> and <code>systemctl</code> are used to interact with system services. However, <code>service</code> is a more legacy command and is being replaced by <code>systemctl</code>, which provides more detailed status information and is more consistent in its syntax.</p>
</li>
<li><p><strong>Question:</strong> How can you enable or disable a service to start on boot?</p>
<p><strong>Answer:</strong> You can use <code>systemctl enable serviceName</code> to have a service start at boot, and <code>systemctl disable serviceName</code> to prevent a service from starting at boot.</p>
</li>
<li><p><strong>Question:</strong> How would you check the status of a service in Windows?</p>
<p><strong>Answer:</strong> In Windows, you can check the status of a service through the Services management console (<code>services.msc</code>), or by using the <code>sc query</code> command in the command prompt.</p>
</li>
<li><p><strong>Question:</strong> How can you create a custom service in Linux?</p>
<p><strong>Answer:</strong> In Linux, you can create a custom service by writing a service unit file and placing it in the <code>/etc/systemd/system</code> directory. The unit file will specify how the service should start, stop, and otherwise operate.</p>
</li>
<li><p><strong>Question:</strong> What are the potential security implications of running a service/daemon as root?</p>
<p><strong>Answer:</strong> Running a daemon as root can be a security risk because if the daemon is compromised, it could give an attacker full control over the system. It’s generally recommended to run daemons under non-privileged user accounts when possible.</p>
</li>
<li><p><strong>Question:</strong> How does <code>systemd</code> handle dependencies between services?</p>
<p><strong>Answer:</strong> <code>systemd</code> handles ordering through the <code>After</code> and <code>Before</code> directives in the unit file, which ensure that one unit starts after or before another. For dependencies it uses <code>Wants</code> (a weak dependency: the unit keeps running even if the wanted unit fails) and <code>Requires</code> (a hard dependency: the unit cannot run without the required unit).</p>
</li>
<li><p><strong>Question:</strong> What is the role of the <code>journalctl</code> command in relation to services and daemons?</p>
<p><strong>Answer:</strong> <code>journalctl</code> is used to query and display messages from the systemd journal, which includes logs from services and daemons.</p>
</li>
<li><p><strong>Question:</strong> How can you configure a service to automatically restart if it crashes?</p>
<p><strong>Answer:</strong> In the service’s unit file, you can set the <code>Restart</code> directive to <code>always</code> or <code>on-failure</code> to have <code>systemd</code> automatically restart the service if it crashes.</p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Process Management in Linux: A User’s Manual]]></title><description><![CDATA[Understand the process & How it works?.

Process: A process is like a task your computer is performing. It’s an instance of a program that’s currently running. When a process is created. For example, when you create an executable program and run this...]]></description><link>https://projectwala.site/process-management-in-linux-a-users-manual</link><guid isPermaLink="true">https://projectwala.site/process-management-in-linux-a-users-manual</guid><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[linux-basics]]></category><category><![CDATA[process]]></category><category><![CDATA[process management]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[articles]]></category><category><![CDATA[rakamodify]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Sat, 10 Feb 2024 15:42:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707579427582/ad9a5fd3-719e-4d35-9b99-d85dadc3d49b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h3 id="heading-understand-the-process-amp-how-it-works">Understanding a Process &amp; How It Works</h3>
<ol>
<li><p><strong>Process</strong>: A process is like a task your computer is performing. It is an instance of a program that is currently running. For example, when you build an executable program and run it, it becomes a process while it is running.</p>
<p> Inside a process, some important attributes are available:</p>
<ul>
<li><p><code>PID:</code> "Process ID"</p>
</li>
<li><p><code>PPID:</code> "Parent Process ID"</p>
</li>
<li><p><code>Owner:</code> "The user who runs the process"</p>
</li>
<li><p><code>Executable Program:</code> "The program code the process is executing"</p>
</li>
<li><p><code>Memory:</code> "Memory used by the process"</p>
</li>
<li><p><code>CPU Time:</code> "CPU time consumed by the process"</p>
</li>
<li><p><code>Security Context</code>: "Linux Security Context"</p>
</li>
</ul>
</li>
<li><p>Life Cycle of Process</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707565553912/5b9cdefd-56f5-4713-bc19-4772a95e1a3b.png" alt class="image--center mx-auto" /></p>
<p>Let's understand the process...</p>
<ol>
<li><p><strong>Program to Process</strong>: A program is a set of instructions that are loaded into memory. When these instructions are executed, the program becomes a process. Each process has a unique process ID <code>(PID)</code> for tracking and security purposes.</p>
</li>
<li><p><strong>Creating a Child Process</strong>: A parent process can create a child process through a mechanism known as <code>“forking”.</code> In this process, the parent duplicates its own address space to create a new process structure for the child. The child process inherits various attributes from the parent, such as security identities, file descriptors, resource privileges, environment variables, and program code.</p>
</li>
<li><p><strong>Execution of Child Process</strong>: Once created, the child process can execute its own program code. During this time, the parent process usually sleeps, setting a wait request to be signaled when the child completes.</p>
</li>
<li><p><strong>Zombie Process</strong>: A zombie process, also known as a <strong>defunct process</strong>, is a peculiar state in which a process has completed its execution (via the <code>exit</code> system call), but its entry still lingers in the process table. Essentially, it’s a process that has finished its job but hasn’t been properly cleaned up by its parent process.</p>
</li>
</ol>
<p><strong><mark>Example</mark></strong><mark>:</mark> Let’s consider a real-world example. Imagine you’re a chef <code>(parent process)</code> in a restaurant. You get an order <code>(program)</code> to prepare a dish. You start preparing the dish <code>(process)</code>. Now, you need to chop some vegetables, so you ask your assistant <code>(child process)</code> to do it. You wait <code>(sleep)</code> until your assistant finishes chopping <code>(child process execution)</code>. Once the assistant is done, they inform you and go on to their next task <code>(child process exits, leaving a zombie)</code>. You then continue preparing the dish (the parent process continues execution after cleaning up the zombie) and serve the order (process completed).</p>
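You can observe the fork relationship described above directly from the shell (the /tmp file names below are arbitrary): <code>$$</code> is the current shell's PID, and any command the shell launches is a forked child that sees that same number as its PPID.

```shell
# The running shell is the parent; any command it launches is a
# forked child whose PPID equals the shell's own PID ($$).
echo "$$" > /tmp/parent_pid
sh -c 'echo "$PPID"' > /tmp/child_ppid

echo "parent PID : $(cat /tmp/parent_pid)"
echo "child PPID : $(cat /tmp/child_ppid)"   # same number as above
```

Both lines print the same number, which is exactly the parent/child inheritance the lifecycle diagram describes.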
<hr />
<h3 id="heading-what-is-the-background-amp-foreground-process">What are Background &amp; Foreground Processes?</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707566848749/f8134d22-18ea-4eb0-9dd7-6c6c8f4102c2.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Foreground Process</strong>:</p>
<ul>
<li><p>A <strong>foreground process</strong> is one that <strong>requires user interaction</strong>. When a process runs directly in the terminal shell, it occupies the terminal session, and you interact with it directly.</p>
</li>
<li><p>For example, if you execute a command that performs a task and waits for your input or displays output in the terminal, it is a foreground process.</p>
</li>
<li><p>While a foreground process is running, you cannot use the terminal for other commands until the process completes or you interrupt it.</p>
</li>
</ul>
</li>
<li><p><strong>Background Process</strong>:</p>
<ul>
<li><p>A <strong>background process</strong> operates independently of user interaction. It runs <strong>behind the scenes</strong>, allowing you to continue using the terminal for other tasks.</p>
</li>
<li><p>When you start a process in the background, it doesn’t hold the terminal session hostage. You can execute other commands or even disconnect from an SSH session without affecting the background process.</p>
</li>
<li><p>Background processes are useful for long-running tasks, such as monitoring events or performing lengthy computations.</p>
</li>
</ul>
</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">How to check the background process in Linux?</div>
</div>

<pre><code class="lang-plaintext"># jobs
</code></pre>
<p>The <code>jobs</code> command is used to check whether your process is running in the background. If there are any such processes, the output looks like this:</p>
<pre><code class="lang-sql"><span class="hljs-comment"># jobs</span>

[1]+  Stopped                 sleep 100
[2]   Running                 sleep 100 &amp;
[3]-  Running                 firefox &amp;
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">How to run a new fresh program/process in the background.</div>
</div>

<pre><code class="lang-sql"><span class="hljs-comment"># sleep &amp;</span>
</code></pre>
<p>Suppose you run a command (for example, <code>firefox</code>) in the foreground. To move it to the background, first suspend it with <code>Ctrl+Z</code>, then resume it in the background with <code>bg %job_id</code>. Use the <code>jobs</code> command to see the job IDs:</p>
<pre><code class="lang-sql">[root@web ~]<span class="hljs-comment"># firefox</span>
^Z
[1]+  Stopped                 firefox
[root@web ~]<span class="hljs-comment"># jobs</span>
[1]+  Stopped                 firefox
[root@web ~]<span class="hljs-comment"># bg %1</span>
[1]+ firefox &amp;
[root@web ~]<span class="hljs-comment"># jobs</span>
[1]+  Running                 firefox &amp;
[root@web ~]<span class="hljs-comment">#</span>
</code></pre>
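The same background-job flow can be reproduced non-interactively with <code>&amp;</code>, <code>$!</code>, and <code>wait</code>. A minimal sketch (the /tmp log file is just for illustration):

```shell
# Start a command in the background with '&'; $! holds its PID.
sleep 2 &
BG_PID=$!

# kill -0 sends no signal; it only checks that the process exists.
if kill -0 "$BG_PID" 2>/dev/null; then
    echo "job $BG_PID is running" > /tmp/bg.log
fi

wait "$BG_PID"                       # block until the job finishes
echo "job $BG_PID finished" >> /tmp/bg.log
cat /tmp/bg.log
```

While the job runs, the shell remains free for other commands; <code>wait</code> is only needed when you want to synchronize on the job's completion.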
<p>Let's assume I am running this command: <code>firefox</code></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><mark>Program/Command</mark></td><td><mark>Signal</mark></td><td><mark>Run in the Background</mark></td><td><mark>Run on the Foreground</mark></td></tr>
</thead>
<tbody>
<tr>
<td>New Fresh command</td><td></td><td>firefox &amp;</td><td>firefox</td></tr>
<tr>
<td>Existing command</td><td>ctrl + z</td><td>bg %jobs-id</td><td>fg %jobs-id</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-what-is-process-states">What are Process States?</h3>
<p>In a multitasking operating system, each CPU (or CPU core) can be working on one process at a time. As a process runs, its immediate requirements for CPU time and resource allocation change. Processes are assigned a state, which changes as circumstances dictate. A process, during its lifecycle, goes through several stages. These stages or states are:</p>
<ol>
<li><p><code>New:</code> The process is being created but has not yet been admitted into the system.</p>
</li>
<li><p><code>Ready:</code> After the creation of a process, the process enters the ready state i.e., the process is loaded into the main memory and is waiting to get the CPU time for its execution.</p>
</li>
<li><p><code>Running:</code> The process is chosen from the ready queue by the CPU for execution.</p>
</li>
<li><p><code>Sleeping or Wait:</code> Whenever the process requests access to I/O or needs input from the user or needs access to a critical region, it enters the blocked or waiting state.</p>
</li>
<li><p><code>Terminated or Stopped:</code> The process is killed, and the resources allocated to the process are released or deallocated.</p>
</li>
</ol>
<p><strong>Process State Transitions</strong></p>
<p>A process can move between different states in an operating system based on its execution status and resource availability. Here are some examples of how a process can move between different states:</p>
<ul>
<li><p><code>New to Ready:</code> When a process is created, it is in a new state. It moves to the ready state when the operating system has allocated resources to it and it is ready to be executed.</p>
</li>
<li><p><code>Ready to Running:</code> When the CPU becomes available, the operating system selects a process from the ready queue depending on various scheduling algorithms and moves it to the running state.</p>
</li>
<li><p><code>Running to Waiting:</code> If a process requests access to I/O, needs input from the user, or needs access to a critical region, it enters the blocked or waiting state.</p>
</li>
<li><p><code>Waiting to Ready:</code> Once the I/O operation is completed the process goes to the ready state.</p>
</li>
<li><p><code>Running to Stopped:</code> If a process has completed execution, it moves to the terminated/stopped state.</p>
</li>
</ul>
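These state flags can be observed live with ps. The sketch below (assuming a Linux system with procps ps; the /tmp output files are arbitrary) shows two of them: a backgrounded sleep sitting in interruptible sleep (S), and a child whose parent never calls wait(), which lingers as a zombie (Z):

```shell
# 1) Interruptible sleep (S): a background 'sleep' spends its time
#    blocked in the kernel, waiting for a timer.
sleep 5 &
SLEEP_PID=$!
sleep 1                                  # give it time to settle into S
ps -o stat= -p "$SLEEP_PID" > /tmp/state_s.txt

# 2) Zombie (Z): the inner 'sleep 0.2' exits, but its parent (the
#    exec'd 'sleep 3') never calls wait(), so its table entry lingers.
sh -c 'sleep 0.2 & exec sleep 3' &
PARENT_PID=$!
sleep 1
ps -o stat= --ppid "$PARENT_PID" > /tmp/state_z.txt

cat /tmp/state_s.txt                     # typically prints: S
cat /tmp/state_z.txt                     # typically prints: Z

kill "$SLEEP_PID" "$PARENT_PID" 2>/dev/null
wait 2>/dev/null
```

Once the parent exits, init (PID 1) adopts the zombie and reaps it, which is why zombies normally disappear on their own.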
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707566120920/747cfbb8-4031-464a-8329-8e21e3528804.png" alt class="image--center mx-auto" /></p>
<p>Mainly we need to understand the following <code>5 types</code> of processes.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><mark>Process State</mark></td><td><mark>Flags</mark></td><td><mark>Description</mark></td></tr>
</thead>
<tbody>
<tr>
<td>Running</td><td>R</td><td>The process is either executing on a CPU or waiting to run.</td></tr>
<tr>
<td>Sleeping Interruptible</td><td>S</td><td>The process is waiting for some condition such as a hardware request, system resource access, or signal. It can be awakened by a signal.</td></tr>
<tr>
<td>Sleeping Uninterruptable</td><td>D</td><td>This process is also sleeping, but unlike S state, it does not respond to signals. It’s used when process interruption might cause an unpredictable device state.</td></tr>
<tr>
<td>Stopped</td><td>T</td><td>The process is stopped (suspended), usually by being signaled by a user or another process. It can be resumed by another signal.</td></tr>
<tr>
<td>Zombie</td><td>Z</td><td>A child process that has completed execution but still has an entry in the process table to report to its parent process. All resources except for the process identity (PID) are released.</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-importance-of-process-states">Importance of Process States</h3>
<p>We have substantial reasons to understand process states, especially when we are responsible for monitoring applications in production.</p>
<ol>
<li><p><code>Performance Analysis:</code> It helps us figure out if our computer is running smoothly or if something is slowing it down.</p>
</li>
<li><p><code>Resource Management:</code> It shows us how our computer’s resources (like memory and processing power) are being used.</p>
</li>
<li><p><code>System Troubleshooting:</code> If our computer is having problems, understanding process states can help us find out what’s going wrong.</p>
</li>
<li><p><code>Process Optimization:</code> We can make changes to improve how efficiently our computer runs.</p>
</li>
<li><p><code>Predicting System Behavior:</code> It can give us an idea of how our computer might behave under different conditions.</p>
</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Important Linux commands for understanding process states and parameters include <code>top</code> and <code>ps</code>, each with numerous options to modify output behavior. Examples include <code>ps</code>, <code>ps -aux</code>, <code>ps lax</code>, and <code>top</code>.</div>
</div>

<p><code>ps</code>: Reports a snapshot of the processes running in the current shell terminal.</p>
<pre><code class="lang-sql"><span class="hljs-comment"># ps</span>
    PID TTY          TIME CMD
  37078 pts/0    00:00:00 bash
  40340 pts/0    00:00:00 ps
</code></pre>
<p><code>ps -aux</code>: Shows all processes for all users.</p>
<pre><code class="lang-sql"><span class="hljs-comment"># ps -aux</span>
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT <span class="hljs-keyword">START</span>   <span class="hljs-built_in">TIME</span> COMMAND
root           <span class="hljs-number">1</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.6</span> <span class="hljs-number">181792</span> <span class="hljs-number">11992</span> ?        Ss   Feb08   <span class="hljs-number">0</span>:<span class="hljs-number">18</span> /usr/lib/systemd/systemd rhgb <span class="hljs-comment">--switched-root --system --deserialize 31</span>
root           <span class="hljs-number">2</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        S    Feb08   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [kthreadd]
root           <span class="hljs-number">3</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        I&lt;   Feb08   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [rcu_gp]
root           <span class="hljs-number">4</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        I&lt;   Feb08   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [rcu_par_gp]
root           <span class="hljs-number">5</span>  <span class="hljs-number">0.0</span>  <span class="hljs-number">0.0</span>      <span class="hljs-number">0</span>     <span class="hljs-number">0</span> ?        I&lt;   Feb08   <span class="hljs-number">0</span>:<span class="hljs-number">00</span> [netns]
</code></pre>
<p><code>ps lax</code>: Provides very detailed, technical information about all processes (BSD long format).</p>
<pre><code class="lang-sql"><span class="hljs-comment"># ps lax</span>
F   UID     PID    PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
4     0       1       0  20   0 181792 11992 ep_pol Ss   ?          0:18 /usr/lib/systemd/systemd rhgb <span class="hljs-comment">--switched-root --system --deserialize 31</span>
1     0       2       0  20   0      0     0 kthrea S    ?          0:00 [kthreadd]
1     0       3       2   0 -20      0     0 rescue I&lt;   ?          0:00 [rcu_gp]
</code></pre>
<p><code>uptime</code>: Shows the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.</p>
<pre><code class="lang-sql"><span class="hljs-comment"># uptime</span>
19:29:13 up 1 day, 19:50,  5 users,  <span class="hljs-keyword">load</span> average: <span class="hljs-number">0.01</span>, <span class="hljs-number">0.02</span>, <span class="hljs-number">0.05</span>
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">How do you calculate the actual (per-CPU) load average of the system over 1, 5, and 15 minutes?</div>
</div>

<p>To calculate the per-CPU load average of the system, you need two values:</p>
<ol>
<li><p>The load averages (1, 5, and 15 minutes), which you can get from the <code>uptime</code> command.</p>
</li>
<li><p>The number of CPUs, which you can get from the <code>lscpu</code> (or <code>nproc</code>) command.</p>
</li>
</ol>
<p><strong>Calculation formula: Load Average / Number of CPUs</strong></p>
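<p>The formula can be applied straight from the shell; this sketch reads the 1-minute load from <code>/proc/loadavg</code> and the CPU count from <code>nproc</code> (both standard on Linux):</p>

```shell
# Normalized (per-CPU) load = load average / number of CPUs.
load=$(awk '{print $1}' /proc/loadavg)   # 1-minute load average
cpus=$(nproc)                            # number of online CPUs
awk -v l="$load" -v c="$cpus" 'BEGIN { printf "%.2f\n", l / c }'
```

<p>For example, a 1-minute load of 0.50 on 2 CPUs gives a normalized value of 0.25; anything above 1.0 means runnable tasks are queuing for CPU time.</p>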
<p><code>top</code>: Provides a dynamic real-time view of the running system. It can display system summary information and a list of processes currently being managed by the kernel.</p>
<pre><code class="lang-sql"><span class="hljs-comment"># top</span>
</code></pre>
<p><code>top</code> is a dynamic, real-time process monitoring tool with many options, whereas <code>ps</code> only shows a static snapshot of the processes at a single point in time.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707577124243/566c4f8d-1b26-4b02-8ac3-bc5eef32c0f8.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-sql"><span class="hljs-comment"># htop</span>
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">The ‘htop’ command is similar to ‘top’, but it provides a more user-friendly and colorful display. It also supports mouse operations and has options for customization.</div>
</div>

<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707728924342/31d50018-4bd7-4219-b766-5f32b5ac3ef6.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">What do these parameters mean?</div>
</div>

<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Options</strong></td><td><strong>Meaning</strong></td></tr>
</thead>
<tbody>
<tr>
<td><code>F</code></td><td>Process flags (e.g., forked, traced, etc.).</td></tr>
<tr>
<td><code>UID</code></td><td>User ID of the process owner.</td></tr>
<tr>
<td><code>PID</code></td><td>Process ID (unique identifier).</td></tr>
<tr>
<td><code>PPID</code></td><td>Parent process ID (ID of the process that spawned this one).</td></tr>
<tr>
<td><code>PRI</code></td><td>Priority of the process (scheduling priority).</td></tr>
<tr>
<td><code>NI</code></td><td>Nice value (user-defined priority adjustment).</td></tr>
<tr>
<td><code>VSZ</code></td><td>Virtual memory size (total memory used by the process).</td></tr>
<tr>
<td><code>RSS</code></td><td>Resident set size (actual physical memory used by the process).</td></tr>
<tr>
<td><code>WCHAN</code></td><td>Waiting channel (function where the process is waiting).</td></tr>
<tr>
<td><code>STAT</code></td><td>Process status (e.g., running, sleeping, zombie, etc.).</td></tr>
<tr>
<td><code>TTY</code></td><td>Controlling terminal (if any).</td></tr>
<tr>
<td><code>TIME</code></td><td>Cumulative CPU time used by the process.</td></tr>
<tr>
<td><code>COMMAND</code></td><td>Command or program associated with the process.</td></tr>
<tr>
<td><code>USER</code></td><td>User who owns the process.</td></tr>
<tr>
<td><code>%CPU</code></td><td>Percentage of CPU usage by the process.</td></tr>
<tr>
<td><code>%MEM</code></td><td>Percentage of memory usage by the process.</td></tr>
<tr>
<td><code>START</code></td><td>Start time of the process.</td></tr>
</tbody>
</table>
</div><p>Remember to use <code>man &lt;command&gt;</code> (replace <code>&lt;command&gt;</code> with the command name) to get more details about each command and its options.</p>
<pre><code class="lang-plaintext"># man top
# man ps
</code></pre>
<h3 id="heading-important-keyboard-options-with-top-command">Important keyboard options with <code>top</code> command</h3>
<p>Here are the interactive keyboard options available inside the <code>top</code> command:</p>
<pre><code class="lang-sql"><span class="hljs-comment"># top</span>
</code></pre>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Options</td><td>Uses</td></tr>
</thead>
<tbody>
<tr>
<td>Z</td><td>Global: colors</td></tr>
<tr>
<td>B</td><td>Global: bold</td></tr>
<tr>
<td>E,e</td><td>Global: summary/task memory scale</td></tr>
<tr>
<td>l</td><td>Toggle: load avg</td></tr>
<tr>
<td>t</td><td>Toggle: task/cpu</td></tr>
<tr>
<td>m</td><td>Toggle: memory</td></tr>
<tr>
<td>I</td><td>Toggle: Irix mode</td></tr>
<tr>
<td>0</td><td>Toggle: zeros</td></tr>
<tr>
<td>1,2,3</td><td>Toggle: cpu/numa views</td></tr>
<tr>
<td>4</td><td>Toggle: cpus two abreast</td></tr>
<tr>
<td>f,F</td><td>Fields: add/remove/order/sort</td></tr>
<tr>
<td>X</td><td>Fields: increase fixed-width</td></tr>
<tr>
<td>L,&amp;</td><td>Locate: find/again</td></tr>
<tr>
<td>&lt;,&gt;</td><td>Move sort column: left/right</td></tr>
<tr>
<td>R</td><td>Toggle: Sort</td></tr>
<tr>
<td>H</td><td>Toggle: Threads</td></tr>
<tr>
<td>J</td><td>Toggle: Num justify</td></tr>
<tr>
<td>C</td><td>Toggle: Coordinates</td></tr>
<tr>
<td>c</td><td>Toggle: Cmd name/line</td></tr>
<tr>
<td>i</td><td>Toggle: Idle</td></tr>
<tr>
<td>S</td><td>Toggle: Time</td></tr>
<tr>
<td>j</td><td>Toggle: Str justify</td></tr>
<tr>
<td>x,y</td><td>Toggle highlights: sort field; running tasks</td></tr>
<tr>
<td>z</td><td>Toggle: color/mono</td></tr>
<tr>
<td>b</td><td>Toggle: bold/reverse (only if ‘x’ or ‘y’)</td></tr>
<tr>
<td>u,U</td><td>Filter by: effective/any user</td></tr>
<tr>
<td>o,O</td><td>Filter by: other criteria</td></tr>
<tr>
<td>n,#</td><td>Set: max tasks displayed</td></tr>
<tr>
<td>^O</td><td>Show: other filter(s)</td></tr>
<tr>
<td>V</td><td>Toggle: forest view</td></tr>
<tr>
<td>v</td><td>Toggle: hide/show forest view children</td></tr>
<tr>
<td>k</td><td>Manipulate tasks: kill</td></tr>
<tr>
<td>r</td><td>Manipulate tasks: renice</td></tr>
<tr>
<td>d or s</td><td>Set update interval</td></tr>
<tr>
<td>W,Y,!</td><td>Write config file; Inspect other output; Combine Cpus</td></tr>
<tr>
<td>q</td><td>Quit</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-what-is-process-signals-how-signals-works">What Are Process Signals, and How Do They Work?</h3>
<p>Process signaling is a method used in operating systems to communicate between processes. It’s like a notification system where one process sends a signal, and another process receives it.</p>
<p>Here are some uses of process signaling:</p>
<ol>
<li><p><strong>Interrupt a Process</strong>: If a process is running, a signal can be sent to stop it immediately.</p>
</li>
<li><p><strong>Resume a Process</strong>: A stopped process can be resumed using a signal.</p>
</li>
<li><p><strong>Terminate a Process</strong>: If a process needs to be ended, a signal can be sent to terminate it.</p>
</li>
<li><p><strong>Handle Errors</strong>: If a process encounters an error, it can send a signal to indicate that something went wrong.</p>
</li>
</ol>
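<p>The lifecycle above can be walked through safely on a throwaway <code>sleep</code> process (a minimal sketch; the process name and timings are arbitrary):</p>

```shell
sleep 300 &                       # start a disposable background process
pid=$!

kill -STOP "$pid"                 # signal 19: pause the process (cannot be blocked)
ps -o stat= -p "$pid"             # STAT column shows 'T' (stopped)

kill -CONT "$pid"                 # signal 18: resume it
kill -TERM "$pid"                 # signal 15: polite termination request
wait "$pid" 2>/dev/null || true   # reap the child so no zombie entry is left
```

<p>Watching the STAT column change to <code>T</code> and back is a good way to connect these signals to the process states covered earlier.</p>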
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">As we discussed earlier, a process can be in either an interruptible or non-interruptible sleeping state. Applications can internally send signals to manage the process cycle for work completion. Alternatively, a system administrator can also directly send signals to a specific process to manage it based on certain events using the <code>kill</code> and <code>pkill</code> commands.</div>
</div>

<p>Fundamental process management signals</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Signal Number</td><td>Signal Name</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td>1</td><td>HUP (Hangup)</td><td>Sent when a terminal’s controlling process disconnects. It is also commonly used to ask a process to re-initialize (re-read its configuration) without ending it.</td></tr>
<tr>
<td>2</td><td>INT (Keyboard interrupt)</td><td>Stops the program. It can be blocked or handled. Triggered by pressing Ctrl+c.</td></tr>
<tr>
<td>3</td><td>QUIT (Keyboard quit)</td><td>Similar to INT, but also creates a process dump at termination. Triggered by pressing Ctrl+\.</td></tr>
<tr>
<td>9</td><td>KILL (Kill, unblockable)</td><td>Abruptly stops the program. It cannot be blocked, ignored, or handled.</td></tr>
<tr>
<td>15</td><td>TERM (Terminate)</td><td>Stops the program. Unlike KILL, it can be blocked, ignored, or handled. It’s a polite way to ask a program to end, allowing it to finish important tasks and clean up. By default, <code>kill</code> sends a <strong>SIGTERM</strong> signal, which politely asks the process to terminate</td></tr>
<tr>
<td>18</td><td>CONT (Continue)</td><td>Sent to a process to resume if stopped. It cannot be blocked. Even if handled, it always resumes the process.</td></tr>
<tr>
<td>19</td><td>STOP (Stop, unblockable)</td><td>Pauses the process. It cannot be blocked or handled.</td></tr>
<tr>
<td>20</td><td>TSTP (Keyboard stop)</td><td>Pauses the process. Unlike STOP, it can be blocked, ignored, or handled. Triggered by pressing Ctrl+z.</td></tr>
</tbody>
</table>
</div><div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Depending on the need, a system administrator can send process signals using the <code>kill</code> &amp; <code>pkill</code> commands; the process ID can be obtained from Linux commands such as <code>pgrep</code>, <code>ps</code>, and <code>top</code>.</div>
</div>

<ol>
<li><p><strong>Scenario</strong>: You want to stop a program nicely (Not forcefully).</p>
<ul>
<li><p><strong>Question</strong>: What signal do you use to ask a program to stop, but let it finish up important tasks first?</p>
</li>
<li><p><strong>Answer</strong>: Use the <code>SIGTERM</code> signal.</p>
</li>
</ul>
</li>
</ol>
<pre><code class="lang-sql"><span class="hljs-comment"># pgrep firefox</span>
<span class="hljs-comment"># pgrep httpd</span>
<span class="hljs-comment"># pgrep mysql</span>
<span class="hljs-comment"># ps -aux | grep -e firefox -e httpd -e mysql -e username</span>
<span class="hljs-comment"># top</span>
</code></pre>
<pre><code class="lang-sql"><span class="hljs-comment"># kill -l</span>
<span class="hljs-comment"># kill -15 &lt;process-id&gt;</span>
<span class="hljs-comment"># kill -SIGTERM &lt;process-id&gt;</span>
</code></pre>
<ol>
<li><p><strong>Scenario</strong>: You need to stop a program right away because it’s causing problems.</p>
<ul>
<li><p><strong>Question</strong>: What signal do you use to force a program to stop immediately?</p>
</li>
<li><p><strong>Answer</strong>: Use the <code>SIGKILL</code> signal.</p>
</li>
</ul>
</li>
</ol>
<pre><code class="lang-sql"><span class="hljs-comment"># kill -l</span>
<span class="hljs-comment"># kill -9 &lt;process-id&gt;</span>
<span class="hljs-comment"># kill -SIGKILL &lt;process-id&gt;</span>
</code></pre>
<ol>
<li><p><strong>Scenario</strong>: You want to pause a program for a while.</p>
<ul>
<li><p><strong>Question</strong>: What signal do you use to pause a program and then start it again later?</p>
</li>
<li><p><strong>Answer</strong>: Use the <code>SIGTSTP</code> signal to pause and the <code>SIGCONT</code> signal to start again.</p>
</li>
</ul>
</li>
</ol>
<pre><code class="lang-sql"><span class="hljs-comment"># kill -l</span>
<span class="hljs-comment"># kill -20 &lt;process-id&gt;</span>
<span class="hljs-comment"># kill -18 &lt;process-id&gt;</span>
<span class="hljs-comment"># kill -SIGTSTP &lt;process-id&gt;</span>
<span class="hljs-comment"># kill -SIGCONT &lt;process-id&gt;</span>
</code></pre>
<ol>
<li><p><strong>Scenario</strong>: You want a program to read its configuration settings again without stopping it.</p>
<ul>
<li><p><strong>Question</strong>: What signal do you use to ask a program to read its settings again?</p>
</li>
<li><p><strong>Answer</strong>: Use the <code>SIGHUP</code> signal.</p>
</li>
</ul>
</li>
</ol>
<pre><code class="lang-sql"><span class="hljs-comment"># kill -l</span>
<span class="hljs-comment"># kill -1 &lt;process-id&gt;</span>
<span class="hljs-comment"># kill -SIGHUP &lt;process-id&gt;</span>
</code></pre>
<ol>
<li><p>Scenario: You want to cut off an unauthorized user's shell session immediately.</p>
<ul>
<li><p>Question: What can you do to quickly disable shell access for that user?</p>
</li>
<li><p>Answer: Send the <code>SIGKILL</code> signal to the user's shell process.</p>
</li>
</ul>
</li>
</ol>
<pre><code class="lang-sql"><span class="hljs-comment"># kill -l</span>
<span class="hljs-comment"># kill -9 &lt;process-id&gt;</span>
<span class="hljs-comment"># kill -SIGKILL &lt;process-id&gt;</span>
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Improve process management with the <code>pkill</code> command.</div>
</div>

<p>Here are some real-world scenarios where you can use the <code>pkill</code> command effectively:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><mark>Scenario</mark></td><td><mark>Command</mark></td><td><mark>Description</mark></td></tr>
</thead>
<tbody>
<tr>
<td>Kill a process by name</td><td><code>pkill firefox</code></td><td>This command will kill all running processes named ‘firefox’.</td></tr>
<tr>
<td>Send a different signal</td><td><code>pkill --signal SIGKILL gedit</code></td><td>This command sends the SIGKILL signal to all ‘gedit’ processes.</td></tr>
<tr>
<td>Match full command line</td><td><code>pkill -f "ping google.com"</code></td><td>This command kills the ‘ping google.com’ command. The <code>-f</code> option matches against the complete command line.</td></tr>
<tr>
<td>Case insensitive match</td><td><code>pkill -i firefox</code></td><td>This command will kill all running processes named ‘firefox’, ignoring case.</td></tr>
<tr>
<td>Kill processes by user</td><td><code>pkill -u mark</code></td><td>This command kills all processes being run by the user ‘mark’.</td></tr>
<tr>
<td>Kill oldest process</td><td><code>pkill -o firefox</code></td><td>This command kills the oldest ‘firefox’ process.</td></tr>
<tr>
<td>Kill newest process</td><td><code>pkill -n firefox</code></td><td>This command kills the newest ‘firefox’ process.</td></tr>
<tr>
<td>Kill processes by group</td><td><code>pkill -g 1000</code></td><td>This command kills all processes in the group with ID ‘1000’.</td></tr>
<tr>
<td>Kill processes by session</td><td><code>pkill -s 1</code></td><td>This command kills all processes in the session with ID ‘1’.</td></tr>
<tr>
<td>Kill processes by terminal</td><td><code>pkill -t pts/1</code></td><td>This command kills all processes on the terminal ‘pts/1’.</td></tr>
</tbody>
</table>
</div><pre><code class="lang-sql"><span class="hljs-comment"># pkill --help</span>
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>When should we prefer ‘kill’, and when ‘pkill’?</strong></div>
</div>

<ul>
<li><p>Use <code>kill</code> when you know the PID.</p>
</li>
<li><p>Use <code>pkill</code> when you want to terminate processes by their names and patterns.</p>
</li>
</ul>
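<p>A quick side-by-side sketch of the two approaches, using disposable <code>sleep</code> processes rather than real services:</p>

```shell
# kill: you must already know the PID.
sleep 300 &
kill -TERM "$!"               # $! holds the PID of the last background job

# pkill: match the target by its command line instead of a PID.
sleep 301 &
pkill -TERM -f "sleep 301"    # -f matches against the full command line
```
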
<hr />
<h3 id="heading-what-is-process-priority-how-to-modify-processes-priority">What Is Process Priority, and How Do You Modify It?</h3>
<pre><code class="lang-sql"><span class="hljs-comment"># ps lax</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707726231866/05adb405-ba01-409e-bef7-a025d0121419.png" alt class="image--center mx-auto" /></p>
<p><strong>Process Priority</strong> is a characteristic of a process that determines how much CPU time it is allocated for execution. The <code>NI</code> column displays the niceness of processes, indicating their priority. Priority matters because it helps the operating system manage resources efficiently. If all processes had equal priority, a long-running or resource-intensive process could monopolize the CPU, causing other processes to slow down or even halt. By assigning different priorities, the operating system ensures that important processes get the resources they need while less important processes wait.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707721313332/a5747324-e4c6-4d07-9a2b-14121eea86af.png" alt class="image--center mx-auto" /></p>
<p>The <code>nice</code> <strong>value</strong> is a way to influence process priority in Unix-like operating systems. It’s a value that can be assigned to a process to either increase or decrease its priority. The <code>nice</code> value ranges <code>from -20 (highest priority) to +19 (lowest priority)</code>. By default, the nice value is <code>zero</code>, which gives the process a neutral priority.</p>
<ul>
<li><p>The <code>nice</code> command in Unix-like systems is used to start a process with a certain nice value.</p>
</li>
<li><p>while the <code>renice</code> command is used to change the nice value of an already running process.</p>
</li>
<li><p>The lower the nice value (i.e., more negative), the higher the priority of the process.</p>
</li>
<li><p>The higher nice value (i.e., more positive) gives the process a lower priority.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1707718720189/fd3a88de-bac8-47c1-965f-0afbda4cca73.png" alt class="image--center mx-auto" /></p>
<p>So, the Nice and renice values are directly connected with process priority because they are tools that allow users to influence the scheduling priority of processes. This can be useful in a variety of situations, such as ensuring that a critical process gets the CPU time it needs or preventing a resource-intensive process from monopolizing the CPU.</p>
<h3 id="heading-what-is-nice-amp-renice-value-in-the-process-priority">What Are the NICE &amp; RENICE Values in Process Priority?</h3>
<p>In Linux, the <code>nice</code> and <code>renice</code> commands are used to influence the scheduling priority of processes. Here’s a simple explanation:</p>
<ul>
<li><code>nice</code>: This command is used <code>when you’re starting a new process and you want to set its priority</code>. The <code>nice</code> value can range <code>from -20 (highest priority) to 19 (lowest priority)</code>.</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">For example, if you want to start a process with a lower &amp; higher priority, you can use the <code>nice</code> command like this:</div>
</div>

<pre><code class="lang-sql"><span class="hljs-comment"># nice -n 10 command</span>
<span class="hljs-comment"># nice -n -20 firefox</span>
<span class="hljs-comment"># nice -n 1 command</span>
</code></pre>
<p>The commands above start the given command with <code>nice</code> values of 10 (lower priority), -20 (highest priority), and 1 (slightly lower priority), respectively.</p>
<ul>
<li><code>renice</code>: This command is used when you want to change the priority of an already running process. For example, if you have a process with process ID (PID) 15784 and you want to lower its priority, you can use the <code>renice</code> command like this:</li>
</ul>
<pre><code class="lang-sql"><span class="hljs-comment"># renice -n 15 -p 15784</span>
<span class="hljs-comment"># renice -n -19 -p 45125</span>
</code></pre>
<p>This changes the <code>nice</code> value of the process with PID 15784 to 15 (lower priority) and of PID 45125 to -19 (higher priority).</p>
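<p>You can confirm the effect of <code>nice</code> and <code>renice</code> with <code>ps</code>. A minimal non-root sketch on a disposable <code>sleep</code> job (only zero or positive nice values, since raising priority requires root):</p>

```shell
nice -n 10 sleep 300 &        # start a disposable job with nice value 10
pid=$!
ps -o pid,ni,comm -p "$pid"   # NI column shows 10

renice -n 15 -p "$pid"        # raise the nice value (i.e., lower the priority)
ps -o ni= -p "$pid"           # now prints 15

kill "$pid"                   # clean up the test process
```
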
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Remember, only the superuser (root) can raise priority (set a negative nice value). Normal users can only lower the priority (set a positive nice value) or keep it the same, and they can only affect processes they <strong>own</strong> or have permission to modify.</div>
</div>

<hr />
<p>Thank you!</p>
]]></content:encoded></item><item><title><![CDATA[Easy Guide to Prometheus: Installation and Configuration]]></title><description><![CDATA[Step 1 (Pre-Setup requirements)
Before You Begin:

Make sure you have the ‘sudo’ access on your Linux server. You’ll need it because this guide uses commands that need Administrative permissions.

Your server needs to be able to connect to the intern...]]></description><link>https://projectwala.site/easy-guide-to-prometheus-installation-and-configuration</link><guid isPermaLink="true">https://projectwala.site/easy-guide-to-prometheus-installation-and-configuration</guid><category><![CDATA[#prometheus]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[monitoring tool]]></category><category><![CDATA[Linux]]></category><category><![CDATA[server]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[rakamodify]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Grafana]]></category><category><![CDATA[Grafana Monitoring]]></category><category><![CDATA[dashboard]]></category><category><![CDATA[dashboard data analytics]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Thu, 01 Feb 2024 13:27:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706793632792/0daedf36-88d2-4c34-9500-32f84df5ee15.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h1 id="heading-step-1-pre-setup-requirements"><strong>Step 1 (Pre-Setup requirements)</strong></h1>
<h3 id="heading-before-you-begin"><strong>Before You Begin:</strong></h3>
<ol>
<li><p>Make sure you have the <code>‘sudo’</code> access on your Linux server. You’ll need it because this guide uses commands that need <code>Administrative</code> permissions.</p>
</li>
<li><p>Your server needs to be able to connect to the internet. This is necessary to download the Prometheus software. So ping <code>8.8.8.8</code> to check connectivity.</p>
</li>
<li><p>Don’t forget to adjust your firewall settings. You need to allow access to port <code>9090</code> on your server to use Prometheus.</p>
</li>
<li><p>Yum client repository should be configured properly.</p>
</li>
</ol>
<hr />
<h1 id="heading-step-2-setup-prometheus"><strong>Step 2 (Setup Prometheus)</strong></h1>
<p><code>Step 2.1:</code> Update the yum package repositories.</p>
<pre><code class="lang-plaintext">$ sudo yum update -y
</code></pre>
<p><code>Step 2.2:</code> Go to the official Prometheus downloads page and get the latest <a target="_blank" href="https://prometheus.io/download/">download link</a> for the Linux binary.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706788133890/390ff68c-887c-4824-bdfd-5553a4c104d8.png" alt class="image--center mx-auto" /></p>
<p><code>Step 2.3:</code> Download the release tarball using <code>wget</code>, extract it, create a new directory named ‘<code>prometheus-files</code>’, and move the extracted directory into it.</p>
<pre><code class="lang-plaintext">$ sudo  wget https://github.com/prometheus/prometheus/releases/download/v2.49.1/prometheus-2.49.1.linux-amd64.tar.gz
$ sudo tar -xvf prometheus-2.49.1.linux-amd64.tar.gz
$ sudo mkdir prometheus-files
$ sudo mv prometheus-2.49.1.linux-amd64  prometheus-files/
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706791597554/e4850ce4-af0d-428f-95af-eab1daa5c95f.png" alt class="image--center mx-auto" /></p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">You can use the MobaXterm remote access tool for Windows. Try this <a target="_blank" href="https://mobaxterm.mobatek.net/download.html">LInk</a></div>
</div>

<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706788846274/6bd2db8a-1fc9-4edf-902a-ef74fbeaed78.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-step-3-user-creation-and-ownership-change"><strong>Step 3 (User Creation And Ownership Change)</strong></h1>
<p><code>Step 3.1:</code> Create a Prometheus user, establish the necessary directories, and assign ownership of these directories to the Prometheus user.</p>
<pre><code class="lang-bash">$ sudo useradd prometheus --no-create-home -s /bin/<span class="hljs-literal">false</span> 
$ sudo mkdir /etc/prometheus
$ sudo mkdir /var/lib/prometheus
$ sudo chown prometheus:prometheus /etc/prometheus
$ sudo chown prometheus:prometheus /var/lib/prometheus
</code></pre>
<p><code>Step 3.2:</code> Copy the ‘<code>prometheus</code>’ and ‘<code>promtool</code>’ binaries from the ‘prometheus-files’ folder into <code>/usr/local/bin</code>, then change their ownership to the ‘prometheus’ user.</p>
<pre><code class="lang-plaintext">$ sudo cp prometheus-files/prometheus /usr/local/bin/
$ sudo cp prometheus-files/promtool /usr/local/bin/
$ sudo chown prometheus:prometheus /usr/local/bin/prometheus
$ sudo chown prometheus:prometheus /usr/local/bin/promtool
</code></pre>
<p><code>Step 3.3:</code> Move the consoles and console_libraries directories from prometheus-files to /etc/prometheus folder and change the ownership to prometheus user.</p>
<pre><code class="lang-plaintext">$ sudo cp -r prometheus-files/consoles /etc/prometheus
$ sudo cp -r prometheus-files/console_libraries /etc/prometheus
$ sudo chown -R prometheus:prometheus /etc/prometheus/consoles
$ sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries
</code></pre>
<hr />
<h1 id="heading-step-4-prometheus-configuration-process">Step 4 (Prometheus Configuration Process)</h1>
<p>All the Prometheus configuration should be present in the <code>/etc/prometheus/prometheus.yml</code> file.</p>
<p><code>Step 4.1:</code> Create the prometheus.yml file &amp; Copy the following contents to the prometheus.yml file.</p>
<pre><code class="lang-plaintext">$ sudo vi /etc/prometheus/prometheus.yml
</code></pre>
<pre><code class="lang-plaintext">global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><mark>NOTE: Make sure you have added </mark><code>9090</code><mark> port in firewall.</mark></div>
</div>

<pre><code class="lang-plaintext">$ sudo firewall-cmd --permanent --add-port=9090/tcp
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-all
</code></pre>
<p><code>Step 4.2:</code> Change the ownership of the file to prometheus user.</p>
<pre><code class="lang-plaintext">$ sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml
</code></pre>
<p><code>Step 4.3:</code> Create a prometheus service file &amp; Copy the following content to the file, for Setup Prometheus Service File.</p>
<pre><code class="lang-plaintext">$ sudo vi /etc/systemd/system/prometheus.service
</code></pre>
<pre><code class="lang-plaintext">[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
</code></pre>
<p><code>Step 4.4:</code> Reload systemd to register the new unit, then start the Prometheus service.</p>
<pre><code class="lang-plaintext">$ sudo systemctl daemon-reload
$ sudo systemctl start prometheus
</code></pre>
<p>Check the prometheus service status using the following command.</p>
<pre><code class="lang-plaintext">$ sudo systemctl status prometheus
</code></pre>
<p>The status should show the active state as shown below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706791534981/9473b486-6e8e-4c37-91ea-a510bfefcb0f.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-step-5-access-prometheus-web-ui"><strong>Step 5 (Access Prometheus Web UI)</strong></h1>
<p>Now you will be able to access the prometheus UI on <code>9090</code> port of the prometheus server.</p>
<pre><code class="lang-plaintext">http://&lt;Instance-IP&gt;:9090/graph
</code></pre>
<p>You should see the UI shown below. Congratulations!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706790680007/34e467df-a9e9-4e43-807e-9cb724ac22be.png" alt class="image--center mx-auto" /></p>
<p>Now search for <code>"process_cpu_seconds_total"</code> in the search bar, choose the Graph tab, and press <code>Execute</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706792331299/5e581ee8-0e5f-456c-9404-238dac0fe5b1.png" alt class="image--center mx-auto" /></p>
<p>This is the graphical representation of <code>"process_cpu_seconds_total"</code> for your instance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706792344905/b8938720-f5fc-426b-ac5f-e5d3ea77b029.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-step-6-what-next">Step 6 (What Next.....?)</h1>
<p><code>Step 6.1:</code> At this point, we’ve set up the Prometheus server. But to start collecting data, we need to specify where to get it from. This is done by registering a ‘target’ in the <code>prometheus.yml</code> file.</p>
<ol>
<li><p>A ‘target’ is essentially a source system from which Prometheus can collect metrics. If you have multiple systems (like ten servers) that you want to monitor, you would list the IP addresses of these servers as targets in the Prometheus configuration.</p>
</li>
<li><p>However, before Prometheus can collect any data, each server needs to have a program called ‘Node Exporter’ installed. This program collects system metrics and makes them available for Prometheus to scrape.</p>
</li>
</ol>
<p>In simpler terms, think of it like this: Prometheus is a data collector, the ‘targets’ are the places it collects data from, and ‘Node Exporter’ is the tool that gathers the data and puts it in a place where Prometheus can find it.</p>
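<p>The collector/target relationship described above can be sketched in a few lines of Python. This is purely an illustration, not Prometheus code; the job name and IP addresses are made-up placeholders:</p>

```python
# Toy illustration (not Prometheus internals): how scrape_configs map
# jobs to the target endpoints Prometheus will pull metrics from.
# The job name and IPs below are invented placeholders.
scrape_configs = [
    {
        "job_name": "node",
        "static_configs": [
            {"targets": ["10.0.0.1:9100", "10.0.0.2:9100"]},
        ],
    },
]

def list_targets(configs):
    """Return (job_name, target) pairs, similar to what the
    Prometheus /targets page displays."""
    pairs = []
    for job in configs:
        for sc in job.get("static_configs", []):
            for target in sc.get("targets", []):
                pairs.append((job["job_name"], target))
    return pairs

print(list_targets(scrape_configs))
# [('node', '10.0.0.1:9100'), ('node', '10.0.0.2:9100')]
```

<p>Prometheus itself reads this mapping from <code>prometheus.yml</code>; the sketch only shows the job-to-targets relationship that the configuration expresses.</p>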
<p><code>Step 6.2:</code> How do we do this in practice?</p>
<p>Here are the practical steps to get data from 10 servers using Prometheus and Node Exporter:</p>
<ol>
<li><strong>Install Node Exporter on Each Server</strong>: Node Exporter is a program that collects system metrics and makes them available for Prometheus to scrape. You need to install Node Exporter on each of the 10 servers that you want to monitor.</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">What is Node Exporter ?</div>
</div>

<blockquote>
<p>Node Exporter is like a reporter for your server. It collects information about your server’s performance, such as how much memory it’s using, how much data it’s reading and writing to the disk, and how much work the CPU is doing. It then makes this information available in a format that Prometheus, a monitoring tool, can understand. This allows you to keep track of your server’s health and performance over time.</p>
</blockquote>
<ol start="2">
<li><strong>Configure Prometheus to Monitor These Servers</strong>: After installing Node Exporter, you need to tell Prometheus to scrape metrics from these servers. This is done by adding the IP addresses of these servers as targets in the <code>prometheus.yml</code> configuration file.</li>
</ol>
<p>Here’s an example of what the configuration might look like:</p>
<pre><code class="lang-plaintext">$ sudo vim /etc/prometheus/prometheus.yml
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-attr">scrape_configs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">job_name:</span> <span class="hljs-string">'node'</span>
    <span class="hljs-attr">static_configs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">targets:</span> [<span class="hljs-string">'&lt;server-1-ip&gt;:9100'</span>, <span class="hljs-string">'&lt;server-2-ip&gt;:9100'</span>, <span class="hljs-string">...</span>, <span class="hljs-string">'&lt;server-10-ip&gt;:9100'</span>]
</code></pre>
<p>Replace <code>&lt;server-1-ip&gt;</code>, <code>&lt;server-2-ip&gt;</code>, …, <code>&lt;server-10-ip&gt;</code> with the actual IP addresses of your servers.</p>
<ol start="3">
<li><strong>Start Prometheus</strong>: Finally, start the Prometheus server. It will now begin collecting metrics from the specified targets at regular intervals.</li>
</ol>
<hr />
<p>References:</p>
<ul>
<li><p><a target="_blank" href="https://prometheus.io/docs/visualization/grafana/">https://prometheus.io/docs/visualization/grafana/</a></p>
</li>
<li><p><a target="_blank" href="https://mobaxterm.mobatek.net/download.html">https://mobaxterm.mobatek.net/download.html</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Understanding DNS Servers: The Internet’s Phonebook Made Simple]]></title><description><![CDATA[What is DNS?
The Domain Name System (DNS) is a critical component of the Internet infrastructure that translates user-friendly domain names into numerical IP addresses. It’s often compared to a phone book for the Internet.
Here’s why DNS is important...]]></description><link>https://projectwala.site/understanding-dns-servers-the-internets-phonebook-made-simple</link><guid isPermaLink="true">https://projectwala.site/understanding-dns-servers-the-internets-phonebook-made-simple</guid><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[dns]]></category><category><![CDATA[networking]]></category><category><![CDATA[Computer Science]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[rakamodify]]></category><category><![CDATA[isp]]></category><category><![CDATA[projects]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Sun, 21 Jan 2024 09:29:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705822356864/23c65603-566e-41d0-a78e-3b0affd58475.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h1 id="heading-what-is-dns">What is DNS?</h1>
<p>The Domain Name System (DNS) is a critical component of the Internet infrastructure that translates user-friendly domain names into numerical IP addresses. It’s often compared to a phone book for the Internet.</p>
<p>Here’s why DNS is important:</p>
<ol>
<li><p>User-friendly: It allows users to interact with devices on the Internet using easy-to-remember domain names instead of having to remember long strings of numbers.</p>
</li>
<li><p>Smooth operation: DNS ensures the Internet works smoothly, loading the content we ask for quickly and efficiently.</p>
</li>
<li><p>Connectivity: If a DNS is not responding, you won’t be able to connect to other websites on the Internet.</p>
</li>
</ol>
<p>For example, when you type <a target="_blank" href="http://www.google.com"><code>www.google.com</code></a> into your web browser, the browser sends a request to a DNS server. The DNS server finds the corresponding IP address (like <code>74.125.68.102</code>) and returns it to your browser, which then connects to the hosting server so the webpage can be displayed. Without DNS, you would have to type the numerical IP address directly to access the website.</p>
<ul>
<li><h3 id="heading-what-is-the-difference-between-ip-amp-dns">What is the difference between IP &amp; DNS?</h3>
</li>
</ul>
<p>The <strong>Domain Name System (DNS)</strong> and <strong>IP addresses</strong> are both crucial components of the internet, but they serve different functions:</p>
<ul>
<li><p><strong>IP Address</strong>: An IP (Internet Protocol) address is a unique numerical identifier assigned to every device connected to a network. It’s like a phone number for your computer or any device connected to a network. There are two main types of IP addresses: IPv4 (e.g., 192.168.1.1) and IPv6 (e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334). Just like how you dial a phone number to call someone, devices use IP addresses to reach each other.</p>
</li>
<li><p><strong>DNS</strong>: The Domain Name System (DNS) is like your phone’s contact list but for websites. Instead of remembering all the numbers, you just need to know the name. When you want to use Google, you type the domain name (<a target="_blank" href="http://google.com">google.com</a>) in your web browser, not the numeric IP address of the server hosting Google. The main purpose of DNS is to translate human-friendly domain names like “<a target="_blank" href="http://google.com">google.com</a>” into the machine-readable IP addresses.</p>
</li>
</ul>
<p>In summary, the main difference between IP addresses and DNS is that an IP address is a unique numerical identifier assigned to every device connected to a network, while DNS (Domain Name System) translates human-friendly domain names into IP addresses. DNS and IP addresses work together to connect you to websites. Think of it as using your contact list to dial a friend’s phone number.</p>
<hr />
<h1 id="heading-dns-architecture-how-does-dns-work">DNS Architecture (How does DNS work?)</h1>
<p>The Domain Name System (DNS) works in a series of steps:</p>
<ol>
<li><p><strong>User Request</strong>: When you type a URL like <a target="_blank" href="http://www.example.com"><code>www.example.com</code></a> into your web browser, a DNS query is initiated.</p>
</li>
<li><p><strong>Contacting ISP DNS Recursor</strong>: The query first reaches the DNS recursor, a server designed to receive queries from client machines. This server can be thought of as a librarian who is asked to find a particular book in a library. In practice, this is usually run by the ISP (for example Jio, Airtel, or Idea) that provides your computer’s internet connection.</p>
<p> When your computer connects to the internet through an Internet Service Provider (ISP) like JIO, it typically uses the DNS servers provided by the ISP. These servers act as DNS recursors.</p>
<p> A DNS recursor is a server designed to receive queries from client machines through applications such as web browsers. The recursor is responsible for making additional requests in order to satisfy the client’s DNS query.</p>
</li>
<li><p><strong>Querying the Root Nameserver</strong>: The recursor then queries a root nameserver, which can be thought of as an index in a library that points to different racks of books. It serves as a reference to other more specific locations.</p>
</li>
<li><p><strong>Accessing the TLD Nameserver</strong>: The next step is to access the Top Level Domain (TLD) nameserver. This server can be thought of as a specific rack of books in a library. For <a target="_blank" href="http://www.example.com"><code>www.example.com</code></a>, the TLD is <code>.com</code>.</p>
</li>
<li><p><strong>Reaching the Authoritative Nameserver</strong>: Finally, the query reaches the authoritative nameserver, which can be thought of as a dictionary on a rack of books. This server provides the final translation of the domain name into its corresponding IP address. Providers such as GoDaddy, BigRock, Namecheap, and Bluehost host authoritative nameservers for the domains registered with them.</p>
</li>
<li><p><strong>Retrieving the IP Address</strong>: The IP address is then returned to the DNS recursor, which in turn sends it back to your computer.</p>
</li>
<li><p><strong>Connecting to the Website</strong>: Your computer uses this IP address to connect to the website, and the website content is returned to your browser.</p>
</li>
</ol>
<p>This entire process happens behind the scenes and requires no interaction from the user apart from the initial request. It’s a complex but essential system that allows us to navigate the internet using easy-to-remember domain names instead of numerical IP addresses.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705826589834/8d604738-bb37-4352-a0ac-7d1a34e38087.png" alt="DNS Architecture and How the DNS work actually" class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-understanding-recursive-and-iterative-queries-in-dns-resolution">Understanding Recursive and Iterative Queries in DNS Resolution</h1>
<ul>
<li><strong><mark>Recursive Query</mark></strong></li>
</ul>
<p>When you enter a URL (like <a target="_blank" href="http://www.rakamodify.online">www.rakamodify.online</a>) in your web browser, the browser sends a request to the Internet Service Provider’s (ISP) DNS server to resolve the domain name into an IP address. This is known as a <strong>recursive query</strong>. The ISP’s DNS server is responsible for providing a definitive answer, either the IP address or an error message.</p>
<ul>
<li><strong><mark>Iterative Query</mark></strong></li>
</ul>
<p>If the ISP’s DNS server doesn’t know the IP address for the domain, it will perform an <strong>iterative query</strong> to find it. This involves the following steps:</p>
<ol>
<li><p>The ISP’s DNS server queries a <strong>Root DNS server</strong>. The Root DNS server doesn’t know the IP address, but it knows where to find the DNS server for top-level domains (TLDs) like <code>.com</code>, <code>.online</code>, <code>.org</code>, etc.</p>
</li>
<li><p>The ISP’s DNS server then queries the <strong>TLD DNS server</strong>. The TLD DNS server doesn’t know the IP address, but it knows where to find the DNS server for the specific domain (like <a target="_blank" href="http://rakamodify.online">rakamodify.online</a>).</p>
</li>
<li><p>Finally, the ISP’s DNS server queries the <strong>Authoritative DNS server</strong> (which could be hosted by providers like Namecheap, BigRock, Bluehost, GoDaddy, etc.). The Authoritative DNS server knows the IP address for the specific domain and returns it to the ISP’s DNS server.</p>
</li>
</ol>
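<p>The three iterative steps above can be sketched as a toy simulation in Python. The data is hard-coded and stands in for real DNS servers; the IP address matches the A record used in the zone-file example later in this article:</p>

```python
# Toy simulation of the iterative lookup (hard-coded data,
# not a real DNS client). Each dict plays one server's role.
ROOT = {"online": "tld-online-ns"}                          # root: TLD -> TLD nameserver
TLD = {"tld-online-ns": {"rakamodify.online": "auth-ns"}}   # TLD: domain -> authoritative NS
AUTH = {"auth-ns": {"rakamodify.online": "72.46.86.21"}}    # authoritative: domain -> IP

def resolve(domain):
    """Walk root -> TLD -> authoritative, like the ISP's recursor does."""
    tld = domain.rsplit(".", 1)[-1]          # 'online'
    tld_server = ROOT[tld]                   # step 1: ask a root server
    auth_server = TLD[tld_server][domain]    # step 2: ask the TLD server
    return AUTH[auth_server][domain]         # step 3: ask the authoritative server

print(resolve("rakamodify.online"))  # 72.46.86.21
```

<p>A real recursor does the same walk over the network (and caches each answer), but the chain of referrals is exactly this shape.</p>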
<ul>
<li><strong><mark>Non-Recursive Query</mark></strong></li>
</ul>
<p>A Non-Recursive query in DNS is a type of query where the DNS resolver already knows the answer. It either immediately returns a DNS record because it already stores it in local cache, or queries a DNS Name Server which is authoritative for the record, meaning it definitely holds the correct IP for that hostname.</p>
<p>This process of the ISP’s DNS server querying the Root DNS server, TLD DNS server, and Authoritative DNS server to find the IP address for a domain is known as an iterative query. From the client’s point of view, however, the ISP’s DNS server does all the “hard work” of finding the IP address, hence the term “recursive resolver”.</p>
<p>Once the IP address is found, the ISP’s DNS server returns it to your web browser, which can then request the webpage from the web server at that IP address.</p>
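<p>Caching is what makes most real-world answers non-recursive. A minimal Python sketch of the idea (hard-coded data, not a real resolver):</p>

```python
# Toy illustration of a non-recursive (cached) answer: the resolver
# returns a stored record immediately instead of walking the
# root/TLD/authoritative chain again. Data is invented for illustration.
cache = {}

def lookup(domain, upstream):
    """Answer from cache when possible; otherwise ask upstream and cache it."""
    if domain in cache:
        return cache[domain], "cache (non-recursive)"
    ip = upstream[domain]            # stands in for the full iterative walk
    cache[domain] = ip
    return ip, "iterative lookup"

upstream = {"rakamodify.online": "72.46.86.21"}
print(lookup("rakamodify.online", upstream))  # first call: iterative lookup
print(lookup("rakamodify.online", upstream))  # second call: answered from cache
```

<p>Real resolvers expire cached records according to the zone’s TTL, which is why DNS changes take time to propagate.</p>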
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705828644578/9875c2c0-6a46-4299-b5df-69f90fe0bc7e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705828231918/d5256818-04da-4b43-827b-ecf1ea3ca783.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705829067442/91eb99bf-6d85-4321-9d2c-5167a7436106.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-how-to-configure-a-dns-server">How to configure a DNS server?</h1>
<p>We’re utilizing a Cloud Service for our setup, which includes two instances. The first instance is a web server, and the second one is a DNS server.</p>
<p>Here’s a brief overview of the configuration:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>AWS Cloud Instance</td><td>Private IP</td><td>Public IP</td><td>Configured For</td></tr>
</thead>
<tbody>
<tr>
<td>WEB-SERVER</td><td>162.25.32.11</td><td>72.46.86.21</td><td>web</td></tr>
<tr>
<td>DNS-SERVER</td><td>162.25.32.11</td><td>72.46.25.41</td><td>dns</td></tr>
</tbody>
</table>
</div><p>Now, let’s dive into the configuration of each instance.</p>
<ol>
<li><h3 id="heading-wb-server-on-your-web-server-instance">WEB-SERVER (<code>On your web-server instance</code>)</h3>
</li>
</ol>
<p>Follow these article links for web-server setup:</p>
<ul>
<li>For Dynamic MySQL + WordPress Setup:-</li>
</ul>
<p><a target="_blank" href="https://www.rakamodify.online/deploy-wordpress-mariadb-php-apache-web-server-on-rhel9-with-easy-steps"><code>https://www.rakamodify.online/deploy-wordpress-mariadb-php-apache-web-server-on-rhel9-with-easy-steps</code></a></p>
<ul>
<li>For Dynamic Web Setup on Kubernetes Cluster:-</li>
</ul>
<p><a target="_blank" href="https://www.rakamodify.online/session-1"><code>https://www.rakamodify.online/session-1</code></a></p>
<ul>
<li>For a simple static web setup</li>
</ul>
<pre><code class="lang-plaintext"># yum install -y httpd
# firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --reload
# systemctl enable --now httpd
# echo "Congratulations! web service is running." &gt; /var/www/html/index.html
# systemctl restart httpd
</code></pre>
<pre><code class="lang-plaintext"># curl http://localhost
Congratulations! web service is running.
</code></pre>
<hr />
<ol start="2">
<li><h3 id="heading-dns-server-on-your-dns-server-instance">DNS-SERVER (<code>On your dns-server instance</code>)</h3>
</li>
</ol>
<pre><code class="lang-plaintext"># yum install -y bind bind-utils
# systemctl enable --now named
# firewall-cmd --permanent --add-port=53/udp
# firewall-cmd --permanent --add-port=53/tcp
# firewall-cmd --reload
</code></pre>
<ul>
<li>Configuration in <code>/etc/named.conf</code></li>
</ul>
<pre><code class="lang-plaintext">[root@dns ~]# vim /etc/named.conf
options {
        directory "/var/named";
        recursion no;
};
zone "rakamodify.online" IN {
        type master;
        file "test";
};
</code></pre>
<ul>
<li>Entry of DNS targets in <code>/var/named/test</code></li>
</ul>
<pre><code class="lang-plaintext"># cp -p /var/named/named.empty /var/named/test
# vim /var/named/test

$TTL 1M
@       IN SOA  @ rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum

rakamodify.online.             IN      NS          ns1.rakamodify.online.
rakamodify.online.             IN      NS          ns2.rakamodify.online.
ns1                            IN      A           72.46.25.41
ns2                            IN      A           72.46.25.41
rakamodify.online.             IN      A           72.46.86.21
photos                         IN      A           72.46.86.21
www                            IN      CNAME       rakamodify.online.
</code></pre>
<ul>
<li>Open your Domain Name registrar account and change the nameserver and child nameserver entry.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705891491826/a65e33d0-9c67-43bc-9f5a-ed2ee6fe8633.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Change nameserver to <code>custom DNS</code></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705891695079/39974441-0ab0-439d-aa7b-9ca0812b5f4a.png" alt class="image--center mx-auto" /></p>
<ul>
<li>And make the following entry:</li>
</ul>
<p><mark>ns1.rakamodify.online</mark></p>
<p><mark>ns2.rakamodify.online</mark></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705891766401/677a5532-49a5-4b55-a042-329107a3c49e.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Now add the IPv4 address for the newly added nameservers (the child nameserver, or “glue”, records) so the registrar knows where to send queries.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705892291361/9b6daa4e-1bb1-4c66-9984-a02c2ccba3bd.png" alt class="image--center mx-auto" /></p>
<p>Here is how the entire setup works:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705894029248/283d52a4-b894-4512-a0b6-f80dec1c32bf.png" alt class="image--center mx-auto" /></p>
<p>Wait some time for the updated DNS configuration to propagate and for caches to refresh. Then verify domain-to-IP name resolution with the <code>nslookup</code> networking command.</p>
<p>First, check locally on your DNS server instance:</p>
<pre><code class="lang-plaintext"># nslookup rakamodify.online localhost
# nslookup www.rakamodify.online localhost
# nslookup www.rakamodify.online 8.8.8.8
# nslookup www.rakamodify.online 8.8.4.4
</code></pre>
<p>The output should look like this:</p>
<pre><code class="lang-plaintext">Server:         localhost
Address:        ::1#53

Name:   rakamodify.online
Address: 72.46.86.21
</code></pre>
<p>Congratulations!</p>
]]></content:encoded></item><item><title><![CDATA[Relational Database Server implementation on Rhel Linux-9]]></title><description><![CDATA[Introduction:
MySQL is like a big, digital filing cabinet where you can store, organize, and retrieve your data. It’s important because it’s fast, reliable, and easy to use. It’s different from other databases because it’s open-source, which means an...]]></description><link>https://projectwala.site/relational-database-server-implementation-on-rhel-linux-9</link><guid isPermaLink="true">https://projectwala.site/relational-database-server-implementation-on-rhel-linux-9</guid><category><![CDATA[MySQL]]></category><category><![CDATA[mysqldump]]></category><category><![CDATA[Databases]]></category><category><![CDATA[Relational Database]]></category><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[linux-basics]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[serverless]]></category><category><![CDATA[interview]]></category><category><![CDATA[beginner]]></category><category><![CDATA[rakamodify]]></category><category><![CDATA[articles]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Fri, 19 Jan 2024 12:10:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705684219961/48ff02e0-367c-45dd-831d-756aa17757aa.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h1 id="heading-introduction">Introduction:</h1>
<p>MySQL is like a big, digital filing cabinet where you can store, organize, and retrieve your data. It’s important because it’s fast, reliable, and easy to use. It’s different from other databases because it’s open-source, which means anyone can use or modify it for free. It uses a language called SQL to manage the data, which is widely used and powerful. It’s like the grammar rules for talking to the database. MySQL is also good at handling lots of data and can be used by big websites or apps. It’s a popular choice for web development because it can work well with various programming languages and tools. It’s also part of the popular LAMP stack for web development, which stands for Linux, Apache, MySQL, and PHP.</p>
<p>MySQL is a popular open-source relational database management system (RDBMS). Here’s what it does:</p>
<ul>
<li><p><code>Organizes and manages data:</code> MySQL organizes data into tables, rows, and columns, which can be related to each other. This structure allows for efficient data management and retrieval.</p>
</li>
<li><p><code>Relational database:</code> Unlike other types of databases, MySQL is a relational database. This means it stores data in separate tables rather than putting all the data in one big storeroom. This makes it easier to maintain and access specific data.</p>
</li>
<li><p><code>SQL:</code> The “SQL” in MySQL stands for “Structured Query Language”, which is a standardized language used to access databases. Depending on your programming environment, you might enter SQL directly or use a language-specific API that hides the SQL syntax.</p>
</li>
<li><p><code>Open source:</code> Being open source means anyone can use and modify MySQL software for free. This has led to its widespread popularity and use in many applications, from small-scale projects to large-scale websites and enterprise-level solutions.</p>
</li>
<li><p><code>A popular choice for developers:</code> MySQL consistently ranks as the most popular database for developers. It supports various programming languages and platforms, and offers high performance, reliability, scalability, security, and flexibility.</p>
</li>
</ul>
<p>In short, MySQL is a powerful tool for managing and manipulating structured data, making it a key component in many web applications and services. It’s different from other databases because of its open-source nature, its use of SQL, and its strong performance and reliability characteristics.</p>
<hr />
<h3 id="heading-overview-database-common-understanding">Overview database common understanding:</h3>
<ol>
<li><p>Why not use a storage volume instead of a database?</p>
<p> While storage devices like hard drives, SSDs, and memory cards can store data, databases offer several advantages that make them more suitable for storing application data.</p>
<p> <code>Storage Volumes</code> are like a big box where you can put anything, but if you need to find something specific quickly, it can be difficult and time-consuming. They are great for storing things like photos, videos, or documents, but not so good when you need to find, update, or analyze specific pieces of data quickly.</p>
<p> <code>On the other hand, Databases</code> are like a well-organized filing cabinet with labels and categories, making it easy to find, update, or analyze specific data. They are designed to handle complex operations on data, like finding all customers who bought a specific product last month.</p>
<p> Here are some reasons why we use databases instead of storage volumes for applications:</p>
<ul>
<li><p><strong>Speed</strong>: Databases are designed to handle complex queries, which allows applications to retrieve and update data much faster than they could if the data were stored in a storage volume.</p>
</li>
<li><p><strong>Concurrent Access</strong>: Databases allow multiple users or applications to access and modify data at the same time without conflicts. This is crucial for applications where many users need to access and update data simultaneously.</p>
</li>
<li><p><strong>Data Integrity</strong>: Databases have built-in mechanisms to ensure data integrity. They can enforce rules to prevent duplicate, missing, or incorrect data. This helps maintain the accuracy and consistency of data.</p>
</li>
<li><p><strong>Security</strong>: Databases provide robust security features, including access control and encryption, to protect sensitive data from unauthorized access or modification.</p>
</li>
<li><p><strong>Scalability</strong>: Databases can handle large amounts of data and can be scaled up or down to meet the needs of the application.</p>
</li>
</ul>
</li>
<li><p><strong>What is a database?</strong> A database is an organized collection of structured information, or data, typically stored electronically in a computer system.</p>
<p> Databases help us store information in such a way that we can easily manipulate it. Each row in a table is called a record, and each cell is a field.</p>
</li>
<li><p><strong>Uses of a database in an application:</strong> Databases are used in applications for storing, retrieving, and managing data. They are essential for data management, integration, privacy, collaboration, analysis, and reporting.</p>
</li>
<li><p><strong>Types of Databases:</strong> There are several types of databases, including:</p>
<ul>
<li><p>Relational databases</p>
</li>
<li><p>Non-relational (NoSQL) databases</p>
</li>
<li><p>Object-oriented databases</p>
</li>
<li><p>Centralized databases</p>
</li>
<li><p>Distributed databases</p>
</li>
<li><p>Cloud databases.</p>
</li>
</ul>
</li>
<li><p><strong>Why different types of databases are made and their purpose:</strong> Different types of databases are designed to meet specific requirements and to handle different types of data. For example, relational databases are best for structured data and complex queries, while non-relational databases are more flexible and can handle unstructured data.</p>
</li>
<li><p><strong>What is a relational database?</strong> A relational database organizes data into tables (rows and columns), which can be joined together via a primary key or a foreign key. These unique identifiers demonstrate the different relationships that exist between tables.</p>
</li>
<li><p><strong>What is a non-relational database?</strong> A non-relational database, also known as a NoSQL database, stores data in a non-tabular form, and tends to be more flexible than the traditional, SQL-based, relational database structures. It does not follow the relational model provided by traditional relational database management systems.</p>
</li>
<li><p><strong>What is a database language?</strong> Database languages are a type of programming language used to define and manipulate a database. They are classified into four types: Data Definition Language (DDL), Data Manipulation Language (DML), Data Control Language (DCL), and Transaction Control Language (TCL).</p>
</li>
<li><p><strong>Why are Databases Important?</strong> Databases are crucial for many reasons. They allow us to store, retrieve, and manipulate data in an efficient manner. They help in organizing data, ensuring data integrity, and providing a high level of data security.</p>
</li>
<li><p><strong>What are the Uses of Databases?</strong> Databases are used in various ways. They are used for storing data, searching for specific information within the data, allowing multiple people to look at and change the data at the same time, managing who is allowed to see the data and who can change it, and managing rules about the data.</p>
</li>
<li><p><strong>What is a Cloud Database?</strong> A cloud database is a database service built and accessed through a cloud computing platform. It serves many of the same functions as a traditional database with the added flexibility of cloud computing.</p>
</li>
<li><p><strong>Why Use a Cloud Database?</strong> Cloud databases provide flexibility, reliability, security, affordability, and more. They can rapidly adapt to changing workloads and demands without increasing the workload of already overburdened teams.</p>
</li>
</ol>
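<p>The “well-organized filing cabinet” point above can be sketched with SQLite, the small relational database built into Python. The table and rows are invented for illustration:</p>

```python
# Sketch: a relational database answers "who bought product X in month Y?"
# with one query, instead of scanning files on a storage volume.
# Table name and rows are made up for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        customer    TEXT,
        product     TEXT,
        order_month TEXT
    )
""")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("alice", "laptop", "2024-01"),
        ("bob",   "phone",  "2024-01"),
        ("carol", "laptop", "2023-12"),
    ],
)

# "All customers who bought a specific product last month" as a single query.
rows = conn.execute(
    "SELECT customer FROM orders WHERE product = ? AND order_month = ?",
    ("laptop", "2024-01"),
).fetchall()
print(rows)  # [('alice',)]
```

<p>With raw files on a storage volume, answering the same question would mean reading and parsing every file yourself; here it is one declarative SQL query, and the database handles indexing, concurrency, and integrity.</p>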
<p>Now that we have enough background, let’s move on to the hands-on setup.</p>
<hr />
<ol>
<li><h1 id="heading-install-amp-configure-mysql-in-linux">Install &amp; Configure MySQL in Linux</h1>
</li>
</ol>
<blockquote>
<p>Note: Make sure you have</p>
<ul>
<li><p>Proper Internet connection and connectivity for package installation.</p>
</li>
<li><p>Yum client configure at <code>/etc/yum.repos.d</code></p>
</li>
</ul>
</blockquote>
<ul>
<li>Install MySQL package in Linux Rhel-9</li>
</ul>
<pre><code class="lang-plaintext"># yum install -y mysql-server
</code></pre>
<ul>
<li>Restart and enable the <code>mysqld.service</code></li>
</ul>
<pre><code class="lang-plaintext"># systemctl enable --now mysqld.service
# systemctl status mysqld
</code></pre>
<ul>
<li>Run the <code>mysql_secure_installation</code> script, which hardens the default setup after a fresh MySQL installation.</li>
</ul>
<pre><code class="lang-plaintext"># mysql_secure_installation
</code></pre>
<blockquote>
<p>The simplified explanation of <code>mysql_secure_installation</code>:</p>
<ol>
<li><p><code>Sets root password:</code> If there’s no existing password for the MySQL root user, it prompts you to set one.</p>
</li>
<li><p><code>Removes anonymous users:</code> MySQL includes an anonymous user account by default. This script removes such accounts.</p>
</li>
<li><p><code>Disables remote root login:</code> It prevents root accounts from being accessed from outside the local host.</p>
</li>
<li><p><code>Removes test database:</code> MySQL includes a test database by default. This script removes this database.</p>
</li>
<li><p><code>Reloads privilege tables:</code> Finally, it reloads the privilege tables to make sure all changes take effect immediately.</p>
</li>
</ol>
</blockquote>
<ul>
<li>Login to MySQL with default user <code>root</code> and newly set password through <code>mysql_secure_installation</code> script.</li>
</ul>
<pre><code class="lang-plaintext"># mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.32 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql&gt;
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">How to log in to MySQL without typing the username and password every time.</div>
</div>

<ul>
<li>Create a new file <code>~/.my.cnf</code> in your home directory with the following lines:</li>
</ul>
<pre><code class="lang-plaintext"># vim ~/.my.cnf

[client]
user=rakamodify
password=password

:wq!
</code></pre>
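<p>Because <code>~/.my.cnf</code> stores the password in plain text, it is good practice to make the file readable only by its owner. A minimal sketch (it runs against a scratch directory here so it is self-contained; in practice the target is <code>~/.my.cnf</code>, the credentials are your own, and the GNU coreutils <code>stat</code> flag is assumed):</p>

```shell
# Create the client option file and lock down its permissions.
# The user/password values are placeholders from the example above.
tmp=$(mktemp -d)
cat > "$tmp/.my.cnf" <<'EOF'
[client]
user=rakamodify
password=password
EOF
chmod 600 "$tmp/.my.cnf"      # owner read/write only
stat -c '%a' "$tmp/.my.cnf"   # prints: 600
```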
<p>Now you can log in to MySQL without entering a username and password.</p>
<pre><code class="lang-plaintext"># mysql

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 8.0.32 Source distribution
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql&gt;
</code></pre>
<hr />
<ol>
<li><h1 id="heading-submit-sql-query-for-mysql-database-interaction">Submit SQL queries to interact with the MySQL database</h1>
</li>
</ol>
<ul>
<li>Create a new Database.</li>
</ul>
<pre><code class="lang-sql">mysql&gt; <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">DATABASE</span> wordpress;
mysql&gt; <span class="hljs-keyword">SHOW</span> <span class="hljs-keyword">DATABASES</span>;
</code></pre>
<ul>
<li>Switch to the newly created database and create some tables.</li>
</ul>
<pre><code class="lang-sql">mysql&gt; <span class="hljs-keyword">USE</span> wordpress;
mysql&gt; <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> table1 (
    <span class="hljs-keyword">id</span> <span class="hljs-built_in">INT</span> AUTO_INCREMENT,
    <span class="hljs-keyword">name</span> <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">100</span>),
    email <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">100</span>),
    PRIMARY <span class="hljs-keyword">KEY</span>(<span class="hljs-keyword">id</span>)
);

mysql&gt; <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">TABLE</span> table2 (
    <span class="hljs-keyword">id</span> <span class="hljs-built_in">INT</span> AUTO_INCREMENT,
    post_title <span class="hljs-built_in">VARCHAR</span>(<span class="hljs-number">200</span>),
    post_content <span class="hljs-built_in">TEXT</span>,
    PRIMARY <span class="hljs-keyword">KEY</span>(<span class="hljs-keyword">id</span>)
);
</code></pre>
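<p>If you wanted posts to be linked to their authors, a table could reference <code>table1</code> through a foreign key. A hypothetical sketch (the <code>table3</code> name and <code>author_id</code> column are illustrative assumptions, not part of the original setup):</p>

```sql
mysql> CREATE TABLE table3 (
    id INT AUTO_INCREMENT,
    author_id INT,
    post_title VARCHAR(200),
    PRIMARY KEY(id),
    FOREIGN KEY (author_id) REFERENCES table1(id)
);
```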
<ul>
<li>Insert some records into the tables.</li>
</ul>
<pre><code class="lang-sql">mysql&gt; <span class="hljs-keyword">INSERT</span> <span class="hljs-keyword">INTO</span> table1 (<span class="hljs-keyword">name</span>, email) <span class="hljs-keyword">VALUES</span> (<span class="hljs-string">'John Doe'</span>, <span class="hljs-string">'john@example.com'</span>), (<span class="hljs-string">'Jane Doe'</span>, <span class="hljs-string">'jane@example.com'</span>);
mysql&gt; <span class="hljs-keyword">INSERT</span> <span class="hljs-keyword">INTO</span> table2 (post_title, post_content) <span class="hljs-keyword">VALUES</span> (<span class="hljs-string">'First Post'</span>, <span class="hljs-string">'This is the content of the first post'</span>), (<span class="hljs-string">'Second Post'</span>, <span class="hljs-string">'This is the content of the second post'</span>);
</code></pre>
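<p>To complete the basic CRUD operations, existing rows can be modified with <code>UPDATE</code> and removed with <code>DELETE</code>. A sketch using the sample rows inserted above (always include a <code>WHERE</code> clause, or every row in the table is affected):</p>

```sql
mysql> UPDATE table1 SET email = 'john.doe@example.com' WHERE id = 1;
mysql> DELETE FROM table2 WHERE id = 2;
```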
<ul>
<li>List the tables and describe one to see its columns/attributes.</li>
</ul>
<pre><code class="lang-sql">mysql&gt; <span class="hljs-keyword">SHOW</span> <span class="hljs-keyword">tables</span>;
+<span class="hljs-comment">---------------------+</span>
| Tables_in_wordpress |
+<span class="hljs-comment">---------------------+</span>
| table1              |
| table2              |
+<span class="hljs-comment">---------------------+</span>
2 rows in <span class="hljs-keyword">set</span> (<span class="hljs-number">0.00</span> sec)


mysql&gt; <span class="hljs-keyword">DESCRIBE</span> table1;
+<span class="hljs-comment">-------+--------------+------+-----+---------+----------------+</span>
| Field | Type         | Null | Key | Default | Extra          |
+<span class="hljs-comment">-------+--------------+------+-----+---------+----------------+</span>
| id    | int          | NO   | PRI | NULL    | auto_increment |
| name  | varchar(100) | YES  |     | NULL    |                |
| email | varchar(100) | YES  |     | NULL    |                |
+<span class="hljs-comment">-------+--------------+------+-----+---------+----------------+</span>
3 rows in <span class="hljs-keyword">set</span> (<span class="hljs-number">0.00</span> sec)
</code></pre>
<ul>
<li>Show the table records.</li>
</ul>
<pre><code class="lang-sql">mysql&gt; 
mysql&gt; <span class="hljs-keyword">SELECT</span> * <span class="hljs-keyword">FROM</span> table1;
+<span class="hljs-comment">----+----------+------------------+</span>
| id | name     | email            |
+<span class="hljs-comment">----+----------+------------------+</span>
|  1 | John Doe | john@example.com |
|  2 | Jane Doe | jane@example.com |
+<span class="hljs-comment">----+----------+------------------+</span>
2 rows in <span class="hljs-keyword">set</span> (<span class="hljs-number">0.00</span> sec)

mysql&gt; <span class="hljs-keyword">SELECT</span> * <span class="hljs-keyword">FROM</span> table2;
+<span class="hljs-comment">----+-------------+----------------------------------------+</span>
| id | post_title  | post_content                           |
+<span class="hljs-comment">----+-------------+----------------------------------------+</span>
|  1 | First Post  | This is the content of the first post  |
|  2 | Second Post | This is the content of the second post |
+<span class="hljs-comment">----+-------------+----------------------------------------+</span>
2 rows in <span class="hljs-keyword">set</span> (<span class="hljs-number">0.00</span> sec)
</code></pre>
<ul>
<li>Create a new user for security.</li>
</ul>
<pre><code class="lang-sql">mysql&gt; <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">USER</span> <span class="hljs-string">'username'</span>@<span class="hljs-string">'host'</span> <span class="hljs-keyword">IDENTIFIED</span> <span class="hljs-keyword">WITH</span> authentication_plugin <span class="hljs-keyword">BY</span> <span class="hljs-string">'password'</span>;
mysql&gt; <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">USER</span> <span class="hljs-string">'sammy'</span>@<span class="hljs-string">'localhost'</span> <span class="hljs-keyword">IDENTIFIED</span> <span class="hljs-keyword">BY</span> <span class="hljs-string">'password'</span>;
mysql&gt; <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">USER</span> <span class="hljs-string">'sammy'</span>@<span class="hljs-string">'localhost'</span> <span class="hljs-keyword">IDENTIFIED</span> <span class="hljs-keyword">WITH</span> mysql_native_password <span class="hljs-keyword">BY</span> <span class="hljs-string">'password'</span>;
</code></pre>
<ul>
<li>Alter/edit an existing user&rsquo;s authentication settings.</li>
</ul>
<pre><code class="lang-sql">mysql&gt; <span class="hljs-keyword">ALTER</span> <span class="hljs-keyword">USER</span> <span class="hljs-string">'sammy'</span>@<span class="hljs-string">'localhost'</span> <span class="hljs-keyword">IDENTIFIED</span> <span class="hljs-keyword">WITH</span> mysql_native_password <span class="hljs-keyword">BY</span> <span class="hljs-string">'password'</span>;
</code></pre>
<blockquote>
<p><strong>Important: about creating a new </strong><code>MySQL user</code><strong>.</strong></p>
<p>Creating a new user for a MySQL database is important for several reasons:</p>
<ol>
<li><p><code>Security:</code> Each user in MySQL has a set of permissions that determine what actions they can perform. By creating a new user, you can limit their permissions, reducing the risk of unauthorized access or changes to your database.</p>
</li>
<li><p><code>Access Control:</code> You can grant different users different levels of access to your databases. For example, you might have a user that can only read data from a database, but not modify it.</p>
</li>
<li><p><code>Accountability:</code> By having separate users, you can track who is making changes to your database. This can be useful for auditing purposes.</p>
</li>
<li><p><code>Resource Management:</code> MySQL allows you to set resource limits on a per-user basis. This can help prevent a single user from consuming too many resources.</p>
</li>
</ol>
<p>Remember, it’s always a good practice to follow the principle of least privilege, i.e., users should be given the minimum levels of access necessary to perform their tasks. This helps to maintain the integrity and security of your database.</p>
</blockquote>
<ul>
<li>Grant privileges on a database to the newly created user.</li>
</ul>
<pre><code class="lang-sql">mysql&gt; <span class="hljs-keyword">GRANT</span> PRIVILEGE <span class="hljs-keyword">ON</span> database.table(s) <span class="hljs-keyword">TO</span> <span class="hljs-string">'username'</span>@<span class="hljs-string">'host'</span>;
mysql&gt; <span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">CREATE</span>, <span class="hljs-keyword">ALTER</span>, <span class="hljs-keyword">DROP</span>, <span class="hljs-keyword">INSERT</span>, <span class="hljs-keyword">UPDATE</span>, <span class="hljs-keyword">DELETE</span>, <span class="hljs-keyword">SELECT</span>, <span class="hljs-keyword">REFERENCES</span>, RELOAD <span class="hljs-keyword">on</span> *.* <span class="hljs-keyword">TO</span> <span class="hljs-string">'sammy'</span>@<span class="hljs-string">'localhost'</span> <span class="hljs-keyword">WITH</span> <span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">OPTION</span>;   
mysql&gt; <span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">ALL</span> <span class="hljs-keyword">PRIVILEGES</span> <span class="hljs-keyword">ON</span> *.* <span class="hljs-keyword">TO</span> <span class="hljs-string">'username'</span>@<span class="hljs-string">'localhost'</span> <span class="hljs-keyword">WITH</span> <span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">OPTION</span>;
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">During MySQL login, make sure the hostname is exactly the one you specified when creating the user grant. For example: <code>'username'@'localhost'</code></div>
</div>

<ul>
<li>Reload the privilege tables so the changes take effect immediately.</li>
</ul>
<pre><code class="lang-sql">mysql&gt; <span class="hljs-keyword">FLUSH</span> <span class="hljs-keyword">PRIVILEGES</span>;
</code></pre>
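<p>To verify what a user can actually do, you can list their privileges. A sketch using the <code>sammy</code> user created above:</p>

```sql
mysql> SHOW GRANTS FOR 'sammy'@'localhost';
```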
<hr />
<ol>
<li><h1 id="heading-database-backup-amp-restore-safely">Database Backup &amp; Restore safely</h1>
</li>
</ol>
<ul>
<li><p>To backup a MySQL database</p>
  <div data-node-type="callout">
  <div data-node-type="callout-emoji">💡</div>
  <div data-node-type="callout-text">First, exit the MySQL command shell. Then, execute the following command in the Linux terminal.</div>
  </div>


</li>
</ul>
<pre><code class="lang-plaintext"># mysqldump -u username -p database-name &gt; /var/lib/mysql/mysql_backup.sql
Enter Password:
</code></pre>
<p>Replace <code>username</code> and <code>database-name</code> with your MySQL username and the name of the database you want to back up, and adjust the output path and filename as needed.</p>
<p>If you pass the password inline on the command line (e.g. <code>-pMyPassword</code>), remember there is no space between <code>-p</code> and the password, and quote the password if it contains special characters. Also, make sure you have the necessary permissions to perform the backup.</p>
<ul>
<li><strong>Restore a backup of the MySQL database</strong></li>
</ul>
<pre><code class="lang-plaintext"># mysql -u username -p database-name &lt; /var/lib/mysql/mysql_backup.sql
Enter Password:
</code></pre>
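<p>For routine backups it can help to timestamp and compress the dump file. A hypothetical helper sketch (the filename pattern is an illustration, not a convention from this guide; the actual dump assumes <code>mysqldump</code> is installed and credentials come from <code>~/.my.cnf</code>):</p>

```shell
# Build a dated, compressed backup filename, e.g. wordpress-2026-02-10.sql.gz
backup_file="wordpress-$(date +%F).sql.gz"
echo "$backup_file"
# On a server with MySQL running, the actual dump would be:
# mysqldump wordpress | gzip > "$backup_file"
```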
<hr />
<h1 id="heading-important-questions">Important questions</h1>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Basic Intermediate Level Questions:-</div>
</div>

<ol>
<li><p><strong>Q: What is SQL?</strong></p>
<p> A: SQL stands for Structured Query Language. It’s a standard language for managing and manipulating databases.</p>
</li>
<li><p><strong>Q: What is a primary key?</strong></p>
<p> A: A primary key is a unique identifier for a record in a database table. No two records in a table can have the same primary key value.</p>
</li>
<li><p><strong>Q: What is a foreign key?</strong></p>
<p> A: A foreign key is a column or set of columns in a table that is used to establish a link between the data in two tables.</p>
</li>
<li><p><strong>Q: What is a database schema?</strong></p>
<p> A: A database schema is the structure of a database system, described in a formal language supported by the database management system.</p>
</li>
<li><p><strong>Q: What is a database transaction?</strong></p>
<p> A: A database transaction is a unit of work that is performed against a database. It’s the propagation of one or more changes to the database.</p>
</li>
<li><p><strong>Q: What is database normalization?</strong></p>
<p> A: Database normalization is the process of organizing data in a database in the most efficient way possible. It involves dividing a database into two or more tables and defining relationships between the tables.</p>
</li>
<li><p><strong>Q: What is a data model?</strong></p>
<p> A: A data model is a conceptual representation of data structures required for a database and is used as a blueprint for designing databases.</p>
</li>
<li><p><strong>Q: What is a query?</strong></p>
<p> A: A query is a request for data or information from a database table or combination of tables.</p>
</li>
<li><p><strong>Q: What is a database view?</strong></p>
<p> A: A database view is a searchable object in a database that is defined by a query.</p>
</li>
<li><p><strong>Q: What is a database index?</strong></p>
<p>A: A database index is a data structure that improves the speed of data retrieval operations on a database table.</p>
</li>
<li><p><strong>Q: What is a stored procedure?</strong></p>
<p>A: A stored procedure is a prepared SQL code that you can save, so the code can be reused over and over again.</p>
</li>
<li><p><strong>Q: What is a trigger in a database?</strong></p>
<p>A: A trigger is a stored procedure in a database that automatically reacts to an event like insertions, updates, or deletions.</p>
</li>
<li><p><strong>Q: What is a cursor in a database?</strong></p>
<p>A: A cursor in a database is a control structure that enables traversal over the records in a database.</p>
</li>
<li><p><strong>Q: What is a database constraint?</strong></p>
<p>A: A database constraint is a rule that is applied to a field or set of fields in a table, which limits the data that can be stored in fields.</p>
</li>
<li><p><strong>Q: What is a database join?</strong></p>
<p>A: A database join is a method of combining rows from two or more tables based on a related column between them.</p>
</li>
<li><p><strong>Q: What is a database lock?</strong></p>
<p>A: A database lock is a mechanism used by a DBMS to control read/write access to a database or to a portion of it.</p>
</li>
<li><p><strong>Q: What is a database transaction log?</strong></p>
<p>A: A database transaction log is a history of all actions executed by a database management system to ensure data integrity and to facilitate data recovery.</p>
</li>
<li><p><strong>Q: What is a database shard?</strong></p>
<p>A: A database shard is a horizontal partition of data in a database, where each shard is held on a separate database server instance.</p>
</li>
<li><p><strong>Q: What is a database replica?</strong></p>
<p>A: A database replica is a copy of a database on a different server or on the same server as the primary database.</p>
</li>
<li><p><strong>Q: What is a database rollback?</strong></p>
<p>A: A database rollback is an operation which returns the database to some previous state.</p>
</li>
<li><p><strong>Q: What is a database commit?</strong></p>
<p>A: A database commit is an operation that gives a green signal to the database management system to finalize the changes, and after this operation, no change can be reverted back.</p>
</li>
<li><p><strong>Q: What is a database deadlock?</strong></p>
<p>A: A database deadlock is a situation where two transactions wait for each other to give up locks.</p>
</li>
<li><p><strong>Q: What is a database backup?</strong></p>
<p>A: A database backup is a copy of the data from a database that can be used to reconstruct data.</p>
</li>
<li><p><strong>Q: What is a database recovery?</strong></p>
<p>A: Database recovery is the process of restoring the database back to the correct state at a given point of time in case of a failure.</p>
</li>
<li><p><strong>Q: What is a database schema?</strong></p>
<p>A: A database schema is the skeleton structure that represents the logical view of the entire database.</p>
</li>
<li><p><strong>Q: What is a database cluster?</strong></p>
<p>A: A database cluster is a group of databases that work together to maintain high availability and data redundancy.</p>
</li>
<li><p><strong>Q: What is AWS?</strong></p>
<p>A: AWS stands for Amazon Web Services. It’s a platform by Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments.</p>
</li>
<li><p><strong>Q: What is cloud computing?</strong></p>
<p>A: Cloud computing is the delivery of computing services over the internet (“the cloud”) including servers, storage, databases, networking, software, analytics, and intelligence.</p>
</li>
<li><p><strong>Q: What is a cloud database?</strong></p>
<p>A: A cloud database is a database service built and accessed through a cloud platform. It serves many of the same functions as a traditional database with the added flexibility of cloud computing.</p>
</li>
<li><p><strong>Q: What is the importance of cloud databases?</strong></p>
<p>A: Cloud databases provide scalability, high availability, multi-regional distribution, and disaster recovery capabilities. They can be accessed from anywhere in the world, at any time, making data management more convenient.</p>
</li>
<li><p><strong>Q: What is AWS RDS?</strong></p>
<p>A: Amazon RDS (Relational Database Service) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud.</p>
</li>
<li><p><strong>Q: What is the importance of database backup?</strong></p>
<p>A: Database backups are crucial for protecting data against loss due to hardware failures, user errors, and natural disasters. They allow you to restore your data and continue business operations.</p>
</li>
<li><p><strong>Q: What is database restore?</strong></p>
<p>A: Database restore is the process of bringing back a database from a backup to its original place or to a new place.</p>
</li>
<li><p><strong>Q: How does AWS handle database backup and restore?</strong></p>
<p>A: AWS provides services like Amazon RDS which automatically backs up your data and allows you to perform a point-in-time restore.</p>
</li>
<li><p><strong>Q: What is AWS S3?</strong></p>
<p>A: Amazon S3 (Simple Storage Service) is an object storage service that offers industry-leading scalability, data availability, security, and performance.</p>
</li>
<li><p><strong>Q: How is AWS S3 used in database backup?</strong></p>
<p>A: Amazon S3 is often used to store database backups in a secure and scalable environment. Database backup files can be moved to S3 and restored when needed.</p>
</li>
<li><p><strong>Q: What is AWS DynamoDB?</strong></p>
<p>A: Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale.</p>
</li>
<li><p><strong>Q: What is the difference between SQL and NoSQL databases?</strong></p>
<p>A: SQL databases are relational databases (RDBMS), whereas NoSQL databases are non-relational or distributed databases.</p>
</li>
<li><p><strong>Q: What is AWS EC2?</strong></p>
<p>A: Amazon EC2 (Elastic Compute Cloud) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.</p>
</li>
<li><p><strong>Q: What is the role of AWS EC2 in database hosting?</strong></p>
<p>A: Amazon EC2 instances can be used to host databases. The service provides flexible, resizable capacity in the AWS Cloud which can be used to install and run any third-party database software.</p>
</li>
<li><p><strong>Q: What is AWS Lambda?</strong></p>
<p>A: AWS Lambda is a serverless compute service that lets you run your code without provisioning or managing servers.</p>
</li>
<li><p><strong>Q: How does AWS Lambda interact with databases?</strong></p>
<p>A: AWS Lambda can interact with databases by executing SQL statements, reading and writing data, and performing other database operations in response to events.</p>
</li>
<li><p><strong>Q: What is the importance of AWS in database management?</strong></p>
<p>A: AWS provides a broad and deep set of database services that provide innovative and scalable solutions for database management. These services are fully managed, relieving the burden of server maintenance and setup.</p>
</li>
<li><p><strong>Q: What is database migration?</strong></p>
<p>A: Database migration is the process of moving your data from one database to another.</p>
</li>
<li><p><strong>Q: What is AWS DMS?</strong></p>
<p>A: AWS DMS (Database Migration Service) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.</p>
</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Now let’s move to the Intermediate &amp; Advanced Level questions:-</strong></div>
</div>

<ol>
<li><p><strong><mark>Question-1:</mark> What does the </strong><code>GRANT</code> statement do in MySQL?</p>
<p> Answer-1: The <code>GRANT</code> statement in MySQL is used to give privileges to a user account. It allows the database administrator to define the kinds of operations that a user can perform.</p>
</li>
<li><p><strong><mark>Question-2: </mark> Can you explain what each of the permissions in the </strong><code>GRANT</code> statement (<code>CREATE</code>, <code>ALTER</code>, <code>DROP</code>, <code>INSERT</code>, <code>UPDATE</code>, <code>DELETE</code>, <code>SELECT</code>, <code>REFERENCES</code>, <code>RELOAD</code>) allows a user to do?</p>
<p> Answer-2: The permissions in the <code>GRANT</code> statement allow a user to perform various operations:</p>
<ul>
<li><p><code>CREATE</code>: Allows the user to create databases and tables.</p>
</li>
<li><p><code>ALTER</code>: Allows the user to modify existing databases and tables.</p>
</li>
<li><p><code>DROP</code>: Allows the user to delete databases, tables, and views.</p>
</li>
<li><p><code>INSERT</code>: Allows the user to insert data into tables.</p>
</li>
<li><p><code>UPDATE</code>: Allows the user to update existing data in tables.</p>
</li>
<li><p><code>DELETE</code>: Allows the user to delete data from tables.</p>
</li>
<li><p><code>SELECT</code>: Allows the user to read data using the SELECT statement.</p>
</li>
<li><p><code>REFERENCES</code>: Allows the user to create a foreign key constraint.</p>
</li>
<li><p><code>RELOAD</code>: Allows the user to reload the server settings and flush the server logs.</p>
</li>
</ul>
</li>
<li><p><strong><mark>Question-3</mark>: In the query, what does </strong><code>*.*</code> represent?</p>
<p> Answer-3: In a MySQL query, <code>*.*</code> represents all databases and all tables within those databases. It’s a wildcard notation where the first <code>*</code> stands for all databases and the second <code>*</code> stands for all tables.</p>
</li>
<li><p><strong><mark>Question-4</mark>: What does the </strong><code>WITH GRANT OPTION</code> clause do in a <code>GRANT</code> statement?</p>
<p> Answer-4: The <code>WITH GRANT OPTION</code> clause in a <code>GRANT</code> statement allows the user to grant any privileges they have to other users. It’s a way to delegate authority within the database system.</p>
</li>
<li><p><strong><mark>Question-5</mark>: Why might you want to create a new user in MySQL, like ‘sammy’ in the given query?</strong></p>
<p> Answer-5: Creating a new user in MySQL, like ‘sammy’, allows you to control access to your databases. Each user can be given specific privileges, ensuring they can only perform the actions necessary for their role. This helps maintain the security and integrity of your databases.</p>
</li>
<li><p><strong><mark>Question-6</mark>: What are some potential security implications of granting a user too many permissions in MySQL?</strong></p>
<p> Answer-6: Granting a user too many permissions in MySQL can lead to several security issues. The user could accidentally or maliciously modify or delete important data. They could also potentially access sensitive information. It’s best to follow the principle of least privilege, granting only the permissions necessary for the user’s role.</p>
</li>
<li><p><strong><mark>Question-7</mark>: How would you modify the permissions of an existing user in MySQL?</strong></p>
<p> Answer-7: To modify the permissions of an existing user in MySQL, you can use the <code>GRANT</code> statement to add new permissions and the <code>REVOKE</code> statement to remove permissions. After modifying permissions, you should use the <code>FLUSH PRIVILEGES</code> command to ensure the changes take effect immediately.</p>
</li>
<li><p><strong><mark>Question-8</mark>: How can you revoke permissions from a user in MySQL?</strong></p>
<p> Answer-8: You can revoke permissions from a user in MySQL using the <code>REVOKE</code> statement, followed by the privileges you want to remove, and then the <code>FROM</code> keyword followed by the username. For example, <code>REVOKE SELECT, INSERT ON database.table FROM 'username'@'</code><a target="_blank" href="http://localhost"><code>localhost</code></a><code>';</code>.</p>
</li>
<li><p><strong><mark>Question-9</mark>: Can you explain the principle of least privilege and why it’s important in the context of database access control?</strong></p>
<p> Answer-9: The principle of least privilege is a computer security concept in which a user is given the minimum levels of access necessary to complete their job functions. In the context of database access control, this principle helps to protect data from accidental or malicious modification, deletion, or exposure.</p>
</li>
<li><p><mark>Question-10:</mark> <strong>What happens if you change your hostname? Can you still access your MySQL database?</strong></p>
<p>Answer: Changing the hostname of your machine does not directly affect your ability to access your MySQL database. However, if your MySQL users or permissions are set up to only allow connections from a specific hostname, you may need to update these settings. Also, any applications that connect to MySQL using the old hostname will need to be updated.</p>
</li>
<li><p><mark>Question-11</mark>: <strong>How do you grant permissions on a single table of a database to a user?</strong></p>
<p>Answer: You can grant specific table permissions to a user in MySQL using the <code>GRANT</code> command. Here’s an example:</p>
<pre><code class="lang-sql"><span class="hljs-keyword">GRANT</span> <span class="hljs-keyword">SELECT</span>, <span class="hljs-keyword">INSERT</span>, <span class="hljs-keyword">UPDATE</span> <span class="hljs-keyword">ON</span> database_name.table_name <span class="hljs-keyword">TO</span> <span class="hljs-string">'username'</span>@<span class="hljs-string">'localhost'</span>;
</code></pre>
<p>Note:- This command gives <code>SELECT</code>, <code>INSERT</code>, and <code>UPDATE</code> permissions on <code>table_name</code> in <code>database_name</code> to the user <code>username</code> connecting from <a target="_blank" href="http://localhost"><code>localhost</code></a>.</p>
</li>
<li><p><strong><mark>Question-12</mark>: What is the MySQL configuration file?</strong></p>
<p>Answer: The MySQL configuration file is a setup file for the MySQL server. It’s named <code>my.cnf</code> on Unix-based systems and <code>my.ini</code> on Windows. This file is used to configure various server settings like the number of concurrent connections, table cache size, memory settings, and more.</p>
</li>
<li><p><mark>Question-13</mark>: <strong>Can you access MySQL from another machine on the LAN?</strong></p>
<p>Answer: Yes, you can access MySQL from another machine on the same Local Area Network (LAN), provided that the MySQL server is configured to accept network connections and the necessary firewall rules allow it. The user must also have the necessary permissions to connect from the client machine’s IP address.</p>
</li>
<li><p><strong><mark>Question-14:</mark> Can you access a MySQL database on a different port like 3307?</strong></p>
<p>Answer: Yes, you can access a MySQL database on a different port like 3307. By default, MySQL listens on port 3306, but this can be changed in the MySQL configuration file. When connecting, you would specify the port number along with the hostname, like <code>mysql -h hostname -P 3307 -u username -p</code>.</p>
</li>
</ol>
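<p>As a follow-up to the last question, changing MySQL&rsquo;s listening port is done in the server configuration file. A hypothetical fragment (the file location varies by distribution; <code>/etc/my.cnf</code> is common on RHEL-like systems):</p>

```ini
# /etc/my.cnf (sketch): make mysqld listen on 3307 instead of the default 3306
[mysqld]
port=3307
```

<p>After editing, restart the server (e.g. <code>systemctl restart mysqld</code>) and connect with <code>mysql -h hostname -P 3307 -u username -p</code>.</p>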
<hr />
<h1 id="heading-references-links">References Links:</h1>
<ul>
<li><p><code>How to create a new user in MySQL and grants permission:</code><a target="_blank" href="https://www.digitalocean.com/community/tutorials/how-to-create-a-new-user-and-grant-permissions-in-mysql">https://www.digitalocean.com/community/tutorials/how-to-create-a-new-user-and-grant-permissions-in-mysql</a></p>
</li>
<li><p><code>Learn MySQL SQL query:-</code><a target="_blank" href="https://www.w3schools.com/MySQL/default.asp">https://www.w3schools.com/MySQL/default.asp</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Deploy WordPress + MariaDB + PHP + Apache web server on rhel9 with easy steps]]></title><description><![CDATA[Pre-Setup:
I am assuming that you have

Configured the yum client and EPEL repository on the machine.

Hostname

Proper Network Internet connection for package installation




Install Apache Web Server



httpd packages for the web-server

# yum ins...]]></description><link>https://projectwala.site/deploy-wordpress-mariadb-php-apache-web-server-on-rhel9-with-easy-steps</link><guid isPermaLink="true">https://projectwala.site/deploy-wordpress-mariadb-php-apache-web-server-on-rhel9-with-easy-steps</guid><category><![CDATA[rakamodify]]></category><category><![CDATA[Devops]]></category><category><![CDATA[WordPress]]></category><category><![CDATA[Linux]]></category><category><![CDATA[RHEL9]]></category><category><![CDATA[#rocky-linux]]></category><category><![CDATA[MariaDB]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[PHP]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[linux-basics]]></category><category><![CDATA[linux-server]]></category><category><![CDATA[step by step]]></category><category><![CDATA[projects]]></category><category><![CDATA[project]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Thu, 18 Jan 2024 13:02:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705582695822/5d770ccd-ba82-4f40-bf87-04e55614634d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h1 id="heading-pre-setup">Pre-Setup:</h1>
<p>I am assuming that you have</p>
<ul>
<li><p>Configured the yum client and EPEL repository on the machine.</p>
</li>
<li><p>Hostname</p>
</li>
<li><p>Proper Network Internet connection for package installation</p>
</li>
</ul>
<hr />
<ol>
<li><h1 id="heading-install-apache-web-server">Install Apache Web Server</h1>
</li>
</ol>
<ul>
<li>Install the <code>httpd</code> package for the web server</li>
</ul>
<pre><code class="lang-plaintext"># yum install -y httpd
</code></pre>
<ul>
<li>Start the <code>httpd.service</code> and enable it at boot</li>
</ul>
<pre><code class="lang-plaintext"># systemctl enable --now httpd
</code></pre>
<ul>
<li>Allow the HTTP &amp; HTTPS ports through the firewall &amp; reload the firewall service</li>
</ul>
<pre><code class="lang-plaintext"># firewall-cmd --permanent --add-port=80/tcp
# firewall-cmd --permanent --add-port=443/tcp
# firewall-cmd --reload
# firewall-cmd --list-all
</code></pre>
<ul>
<li>Check the SELinux label <code>object_r:httpd_sys_content_t</code> on <code>/var/www/html/</code></li>
</ul>
<pre><code class="lang-plaintext"># ls -lZ /var/www/html
-rw-r--r--.  1 root root unconfined_u:object_r:httpd_sys_content_t:s0      405 Feb  6  2020 index.html
</code></pre>
<ul>
<li>Check the HTTP web server page on the browser</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705579805659/1da575e2-71be-4576-be41-79c37bc61ee6.png" alt class="image--center mx-auto" /></p>
<p>Congratulations!</p>
<p>Your Apache web server is working perfectly. Let's move on to the MariaDB/MySQL + WordPress configuration.</p>
<hr />
<ol>
<li><h1 id="heading-install-mariadbmysql-and-wordpress">Install MariaDB/MySQL and WordPress</h1>
</li>
</ol>
<ul>
<li>First, we have to set up the <a target="_blank" href="https://repo.extreme-ix.org/remi/"><code>"REMI"</code></a> repository as a yum repository for the <code>PHP</code> packages.</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Check your OS version <code>/etc/os-release</code> before configuring the REMI repository.</div>
</div>

<pre><code class="lang-plaintext"># cat /etc/os-release

NAME="Red Hat Enterprise Linux"
VERSION="9.1 (Plow)"
REDHAT_SUPPORT_PRODUCT_VERSION="9.1"
</code></pre>
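<p>The REMI release RPM must match your major OS version. As a small self-contained sketch (it parses an inline sample string rather than the real <code>/etc/os-release</code>), you can extract the major version and build the matching package name like this:</p>

```shell
# Sketch: pull the major version out of an os-release style VERSION_ID line,
# then build the matching remi-release package name. The inline sample stands
# in for the real /etc/os-release so this snippet is self-contained.
sample='VERSION_ID="9.1"'
major=$(printf '%s\n' "$sample" | sed -n 's/^VERSION_ID="\([0-9]*\).*/\1/p')
echo "remi-release-${major}.rpm"   # remi-release-9.rpm
```

<p>On a real host, replace the sample string with the output of <code>grep VERSION_ID /etc/os-release</code>.</p>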
<ul>
<li>Set up the REMI repository for RHEL 9.1</li>
</ul>
<pre><code class="lang-plaintext"># dnf install https://repo.extreme-ix.org/remi/enterprise/remi-release-9.rpm
# yum repolist all
</code></pre>
<ul>
<li>Install PHP packages <code>(php, php-mysqlnd, php-pdo, php-gd, php-mbstring)</code></li>
</ul>
<pre><code class="lang-plaintext"># yum install php php-mysqlnd php-pdo php-gd php-mbstring
</code></pre>
<ul>
<li>Install MariaDB/MySQL &amp; start and enable the service</li>
</ul>
<pre><code class="lang-plaintext"># yum install -y mariadb-server mariadb
# systemctl enable --now mariadb.service
</code></pre>
<ul>
<li>Secure your fresh MariaDB installation</li>
</ul>
<pre><code class="lang-plaintext">#  mariadb-secure-installation
</code></pre>
<ul>
<li>Log in to MariaDB with the new credentials</li>
</ul>
<pre><code class="lang-plaintext"># mysql -u root -p
password:

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 13
Server version: 10.5.22-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]&gt;
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Create a <code>~/.my.cnf</code> file with your credentials so the client can log in automatically</div>
</div>

<pre><code class="lang-plaintext"># vim ~/.my.cnf
[client]
user=root
password=password

:wq!
</code></pre>
<p>Now you don't need to use <code>-u</code> &amp; <code>-p</code> option to work with the MariaDB database.</p>
<pre><code class="lang-plaintext"># mysql
</code></pre>
<ul>
<li>Create a new database, user and grant permission on the database.</li>
</ul>
<pre><code class="lang-plaintext"># mysql

MariaDB [(none)]&gt; CREATE DATABASE wordpress;
MariaDB [(none)]&gt; CREATE USER 'admin'@'localhost' IDENTIFIED BY 'password';     
MariaDB [(none)]&gt; GRANT ALL PRIVILEGES ON wordpress.* TO 'admin'@'localhost' WITH GRANT OPTION;
MariaDB [(none)]&gt; FLUSH PRIVILEGES;
</code></pre>
<hr />
<ol>
<li><h1 id="heading-install-wordpress-and-configuration">Install WordPress and configuration</h1>
</li>
</ol>
<ul>
<li>Go to <code>wordpress.org</code> and download the latest WordPress package using <code>wget</code>.</li>
</ul>
<pre><code class="lang-plaintext"># cd /var/www/html/
# wget https://wordpress.org/latest.zip
# unzip latest.zip
# rm -rf latest.zip
</code></pre>
<ul>
<li>Copy <code>/var/www/html/wordpress/wp-config-sample.php</code> to <code>wp-config.php</code> and edit the database settings in it</li>
</ul>
<pre><code class="lang-plaintext"># cp /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php
# vim /var/www/html/wordpress/wp-config.php

// ** Database settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define( 'DB_NAME', 'wordpress' );

/** Database username */
define( 'DB_USER', 'admin' );

/** Database password */
define( 'DB_PASSWORD', 'password' );

/** Database hostname */
define( 'DB_HOST', 'localhost' );
</code></pre>
<ul>
<li>Open the <code>/etc/httpd/conf/httpd.conf</code> file and point the <code>DocumentRoot</code> to the WordPress directory.</li>
</ul>
<pre><code class="lang-plaintext"># vim /etc/httpd/conf/httpd.conf

Listen 80
User apache
Group apache
ServerAdmin admin@localhost
DocumentRoot "/var/www/html/wordpress"
</code></pre>
<ul>
<li>Change <code>user:group</code> ownership to <code>apache</code> user on DocumentRoot directory</li>
</ul>
<pre><code class="lang-plaintext"># chown -R apache:apache /var/www/html/wordpress
# ls -lZ /var/www/html/wordpress
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">NOTE: The SELinux label should be <code>httpd_sys_content_t</code> and the user/group ownership should be <code>apache apache</code>, like this:</div>
</div>

<pre><code class="lang-plaintext">[root@mh1 conf]# ls -lZ /var/www/html/wordpress/
total 228
-rw-r--r--.  1 apache apache unconfined_u:object_r:httpd_sys_content_t:s0      405 Feb  6  2020 filename.extension
</code></pre>
<ul>
<li>Restart <code>httpd.service</code> and <code>mariadb.service</code> one more time to pick up the new changes.</li>
</ul>
<pre><code class="lang-plaintext"># systemctl restart httpd.service  mariadb.service
# systemctl enable httpd.service mariadb.service
</code></pre>
<ul>
<li><p>Open your browser and go to <code>http://machine-ip</code>.</p>
</li>
<li><p>Fill in your information and log in to the WordPress page.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705582627251/917bad02-a6d4-4a8f-b4fe-df88fe9ad6ff.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705582064228/16f7813a-ac66-4638-9a47-7acdc43e01d2.png" alt class="image--right mx-auto mr-0" /></p>
<h3 id="heading-lots-of-congratulations"><em>Lots of Congratulations!</em></h3>
<hr />
]]></content:encoded></item><item><title><![CDATA[LVM Made Easy: Your Guide to Stress-Free Storage Management]]></title><description><![CDATA[Hello everyone, In this article, we’re going to explore the complete landscape of Logical Volume Management (LVM), from the basics to the advanced concepts. We’ll journey from the very beginning to the end, ensuring a comprehensive understanding of L...]]></description><link>https://projectwala.site/lvm-made-easy-your-guide-to-stress-free-storage-management</link><guid isPermaLink="true">https://projectwala.site/lvm-made-easy-your-guide-to-stress-free-storage-management</guid><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Wed, 17 Jan 2024 15:17:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705492293219/9419ecec-22c0-4cbb-bf9f-8cc0a4f60fb2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p><strong>Hello everyone,</strong> In this article, we’re going to explore the complete landscape of Logical Volume Management (LVM), from the basics to the advanced concepts. We’ll journey from the very beginning to the end, ensuring a comprehensive understanding of LVM. The topics we’ll cover throughout this article are listed below. So, let’s dive in and learn together.</p>
<ol>
<li><p><strong>Introduction to LVM</strong></p>
<ul>
<li><p>Definition of LVM</p>
</li>
<li><p>Why is LVM popular?</p>
</li>
</ul>
</li>
<li><p><strong>LVM Architecture &amp; their components</strong></p>
<ul>
<li>How LVM works</li>
</ul>
</li>
<li><p><strong>Practical Guide ( Working with LVM )</strong></p>
<ul>
<li><p>Creating a Physical Volume (PV)</p>
</li>
<li><p>Creating a Volume Group (VG)</p>
</li>
<li><p>Creating a Logical Volume (LV)</p>
</li>
<li><p>File system formation</p>
</li>
<li><p>Access LVM partition through mount access point</p>
</li>
<li><p>Resizing LVM options: lvreduce and lvextend</p>
</li>
<li><p>Taking a snapshot of LVM and its uses</p>
</li>
<li><p>Restoring from a snapshot state</p>
</li>
<li><p>How to remove active PV from the LVM</p>
</li>
</ul>
</li>
</ol>
<hr />
<ol>
<li><h1 id="heading-introduction-to-lvm"><strong>Introduction to LVM</strong></h1>
</li>
</ol>
<ul>
<li><strong>Definition of LVM</strong></li>
</ul>
<p>LVM is like a magic box for managing storage on your computer. Imagine you have several small boxes (these are your physical storage devices like hard drives). Each box can hold a certain amount of stuff (data). But it’s hard to keep track of what’s in each box and moving stuff around if you need more space in one box is a hassle.</p>
<p>This is where LVM comes in. It allows you to put all these small boxes into one big box (this is called a Volume Group). Now, instead of seeing several small boxes, you see one big box. The best part is, you can create smaller, flexible partitions inside this big box (these are called Logical Volumes). You can name these partitions anything you like, such as “photos”, “documents”, etc.</p>
<p>The beauty of LVM is that these partitions are flexible. If you find that your “photos” partition is running out of space, you can easily make it bigger by taking some space from another partition. You can do all this while your computer is running, without needing to stop and restart it.</p>
<p>Let’s take an example. Suppose you have two hard drives of 500GB each. Without LVM, if you have partitioned one drive to hold 300GB of movies and it’s running out of space, you’d have to manually move some movies to the other drive. With LVM, you can simply add more space to your “movies” partition from the unused space of the second drive, all without moving any files around.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705481961818/cd1f414c-4077-4b02-9329-6413a98db3df.png" alt class="image--center mx-auto" /></p>
<ul>
<li><strong>Why is LVM more popular than traditional disk partitioning schemes?</strong></li>
</ul>
<blockquote>
<p>Logical Volume Management (LVM) is a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes. In particular, an LVM system can:</p>
</blockquote>
<ul>
<li><p><strong>Resize disk partitions</strong>: With traditional disk partitions, resizing is a complex task and sometimes not possible without data loss. However, LVM allows you to resize disk partitions easily and without data loss.</p>
</li>
<li><p><strong>Manage storage more effectively</strong>: LVM allows you to create a logical volume that spans multiple physical disks. This is useful when you need more storage than a single disk can provide.</p>
</li>
<li><p><strong>Snapshot capability</strong>: LVM provides snapshot capability. You can create a temporary copy of a logical volume and use it for backups. This is not possible with traditional disk partitions.</p>
</li>
<li><p><strong>Dynamic allocation of storage</strong>: With LVM, you can allocate additional storage to a logical volume on the fly as your needs grow, without any downtime.</p>
</li>
</ul>
<p>Let’s say you have a 500GB hard drive with a single partition that is running out of space. You add a new 500GB hard drive to your system. With traditional partitions, you would have to create a new partition on the new drive and then figure out how to distribute your data between the two partitions.</p>
<p>With LVM, you can add the new drive to a volume group, extend the logical volume to include the new drive, and then resize the file system, all without any downtime or data loss. The operating system sees this as one large 1TB drive, and you don’t have to worry about managing multiple partitions.</p>
<p>In short, while traditional disk partitions still have their uses, LVM provides a more flexible and powerful way to manage disk space. It’s particularly useful in enterprise environments where storage needs can change rapidly and downtime is costly.</p>
<hr />
<ol>
<li><h1 id="heading-lvm-architecture-amp-their-components"><strong>LVM Architecture &amp; their components</strong></h1>
</li>
</ol>
<ul>
<li><strong>How does the LVM architecture work?</strong></li>
</ul>
<p>In LVM (Logical Volume Management) architecture, data is stored in a layered manner:</p>
<ol>
<li><p><strong>Physical Volumes (PVs)</strong>: Data is first stored on physical storage devices, such as hard disks or SSDs. These devices, or specific partitions on these devices, are designated as PVs.</p>
</li>
<li><p><strong>Volume Group (VG)</strong>: The PVs are then grouped together to form a VG. The VG acts as a single large storage pool, combining the capacities of all the PVs it contains. Data isn’t directly stored on the VG, but the VG keeps track of which parts of the PVs are allocated to which LVs.</p>
</li>
<li><p><strong>Logical Volumes (LVs)</strong>: Within the VG, you create LVs. These LVs are where your data is actually stored. You can think of an LV as a ‘virtual partition’. It’s given a portion of the total VG space and can be formatted with a file system (like ext4 or XFS) where you can store your files and directories.</p>
</li>
</ol>
<p>This layered approach allows for flexibility. For example, if an LV is running out of space, you can add a new PV to the VG and then extend the LV to use this additional space. Similarly, if you have unused space in your VG, you can create new LVs to utilize it. This is how data is stored and managed in LVM architecture.</p>
<p>The image below shows the LVM architecture:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705488714549/662e5277-4784-4948-8b32-8195ac8e2606.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-practical-guide-working-with-lvm"><strong>Practical Guide (Working with LVM)</strong></h1>
<ul>
<li><h3 id="heading-crating-a-physical-volume-pv"><strong>Creating a Physical Volume (PV)</strong></h3>
</li>
</ul>
<p>I have three disks, <code>sda</code>, <code>sdb</code>, and <code>sdc</code>, with 100 GB of storage each for this practical.</p>
<ol>
<li><strong>List the block devices</strong>: You can use the <code>lsblk</code> command to list all block devices, which should now include your new disks <code>sda</code>, <code>sdb</code>, and <code>sdc</code>.</li>
</ol>
<pre><code class="lang-bash">$ lsblk
</code></pre>
<p>The output looks something like this:</p>
<pre><code class="lang-bash">NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   100G  0 disk 
sdb      8:16   0   100G  0 disk 
sdc      8:32   0   100G  0 disk
</code></pre>
<ol>
<li><strong>Check the file system disk space usage</strong>: You can use the <code>df -h</code> command to check the file system disk space usage.</li>
</ol>
<pre><code class="lang-bash">$ df -h
</code></pre>
<ol>
<li><strong>Create a Physical Volume (PV)</strong>: You can use the <code>pvcreate</code> command to create a new physical volume on each of your new disks.</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you are building the volume from partitions rather than whole disks, first create some empty partitions and set their type to "Linux LVM" during partition creation. You can verify the type with the <code>"p"</code> (print) option in <code>fdisk</code>.</div>
</div>

<p>Create physical volumes on the three individual disks <code>/dev/sda, /dev/sdb, /dev/sdc</code></p>
<pre><code class="lang-bash">$ sudo pvcreate /dev/sda
$ sudo pvcreate /dev/sdb
$ sudo pvcreate /dev/sdc
</code></pre>
<ol>
<li><strong>List the Physical Volumes</strong>: You can use the <code>pvs</code> or <code>pvdisplay</code> command for long detailed output to list all physical volumes.</li>
</ol>
<pre><code class="lang-bash">$ sudo pvs
$ sudo pvdisplay
</code></pre>
<p>The output might look something like this:</p>
<pre><code class="lang-bash">  PV         VG        Fmt  Attr PSize   PFree
  /dev/sda            lvm2 ---  100.00g 100.00g
  /dev/sdb            lvm2 ---  100.00g 100.00g
  /dev/sdc            lvm2 ---  100.00g 100.00g
</code></pre>
<hr />
<ul>
<li><h3 id="heading-creating-a-volume-group-vg">Creating a Volume Group (VG)</h3>
</li>
</ul>
<p>Now create a volume group (VG) named <code>vg_vol</code> from these physical volumes (PVs). A VG is a single combined pool of storage that is backed by the space of its PVs.</p>
<ol>
<li><strong>Create a Volume Group (VG)</strong>: You can use the <code>vgcreate</code> command to create a new volume group. Here’s how you can do it:</li>
</ol>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">At this point, you can decide the size of each small section, called a Physical Extent, for your Volume Group using the <code>-s</code> option. This breaks the entire volume down into smaller chunks, each the size of your chosen Physical Extent. You can see this information with <code>vgdisplay</code> after VG creation.</div>
</div>

<pre><code class="lang-bash">$ sudo vgcreate -s 2M vg_vol /dev/sda /dev/sdb /dev/sdc
Volume group <span class="hljs-string">"vg_vol"</span> successfully created
</code></pre>
<p>This command creates a new volume group named <code>vg_vol</code> using the physical volumes <code>/dev/sda</code>, <code>/dev/sdb</code>, and <code>/dev/sdc</code>, with a Physical Extent size of 2 MiB.</p>
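<p>As a quick sanity check on the extent math (a sketch using plain shell arithmetic, not an LVM command): a 300 GiB volume group divided into 2 MiB extents gives the Total PE count that <code>vgdisplay</code> reports.</p>

```shell
# Sketch: verify the expected Total PE count with shell arithmetic.
# 300 GiB of VG space / 2 MiB per Physical Extent = 153600 extents.
vg_size_gib=300
pe_size_mib=2
total_pe=$(( vg_size_gib * 1024 / pe_size_mib ))
echo "$total_pe"   # 153600
```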
<ol>
<li><strong>List the Volume Groups</strong>: You can use the <code>vgs</code> or <code>vgdisplay</code> command for long detailed output to list all volume groups.</li>
</ol>
<pre><code class="lang-bash">$ sudo vgs

VG     <span class="hljs-comment">#PV #LV #SN Attr   VSize   VFree</span>
vg_vol   3   0   0 wz--n- 300.00g 300.00g
</code></pre>
<pre><code class="lang-plaintext">$ sudo vgdisplay  
--- Volume group ---
  VG Name               vg_vol
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               &lt;300 GiB
  PE Size               2.00 MiB
  Total PE              153600
  Alloc PE / Size       0 / 0   
  Free  PE / Size       153600 / &lt;300.00 GiB
  VG UUID               vQqtJ2-cMLU-hHc2-AU1e-Uw1R-alWb-nOtxBi
</code></pre>
<p>This shows that the volume group <code>vg_vol</code> has been successfully created with 3 physical volumes, and it has a size of 300GB with 300GB free.</p>
<hr />
<ul>
<li><h3 id="heading-creating-a-logical-volume-lv">Creating a Logical Volume (LV)</h3>
</li>
</ul>
<p>Now you can create logical volumes (LVs) named <code>lv_vol1</code>, <code>lv_vol2</code>, and <code>lv_vol3</code> from the volume group (VG) <code>vg_vol</code>:</p>
<ol>
<li><strong>Create Logical Volumes (LVs)</strong>: You can use the <code>lvcreate</code> command to create new logical volumes. Here’s how you can do it:</li>
</ol>
<blockquote>
<p>NOTE: In the <code>lvcreate</code> command in Linux, <code>-L</code> and <code>-l</code> options are used to specify the size of the logical volume:</p>
<ul>
<li><p><code>-L</code> option: This is used to specify the size in units such as kilobytes (K), megabytes (M), gigabytes (G), terabytes (T), etc. For example, <code>-L 10G</code> would create a logical volume of 10 gigabytes.</p>
</li>
<li><p><code>-l</code> option: This is used to specify the size in extents. An extent is a block of space in the volume group. The size of an extent is defined when the volume group is created. For example, <code>-l 100%FREE</code> would use 100% of the free space in the volume group.</p>
</li>
</ul>
<p>So, the basic difference is that <code>-L</code> specifies the size in units of bytes, while <code>-l</code> specifies the size in extents.</p>
</blockquote>
<pre><code class="lang-bash">$ sudo lvcreate -n lv_vol1 -L 50G vg_vol
$ sudo lvcreate -n lv_vol2 -L 50G vg_vol
$ sudo lvcreate -n lv_vol3 -L 50G vg_vol

Logical volume <span class="hljs-string">"lv_vol1"</span> created.
Logical volume <span class="hljs-string">"lv_vol2"</span> created.
Logical volume <span class="hljs-string">"lv_vol3"</span> created.
</code></pre>
<p>Or you can do this (both options create LVs of the same size):</p>
<p><code>EX:</code> Here 1 Physical Extent is 2 MiB, so a 50 GiB LV needs (50 * 1024) / 2 = 25600 Physical Extents</p>
<pre><code class="lang-plaintext">$ sudo lvcreate -n lv_vol1 -l 25600 vg_vol
$ sudo lvcreate -n lv_vol2 -l 25600 vg_vol
$ sudo lvcreate -n lv_vol3 -l 25600 vg_vol

Logical volume "lv_vol1" created.
</code></pre>
<p>These commands create three new logical volumes named <code>lv_vol1</code>, <code>lv_vol2</code>, and <code>lv_vol3</code> each with a size of 50GB in the volume group <code>vg_vol</code>.</p>
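<p>To pick the right number for <code>-l</code>, you can compute it from the desired size and the PE size. Here is a small sketch with shell arithmetic (it assumes the 2 MiB PE size chosen earlier with <code>vgcreate -s</code>):</p>

```shell
# Sketch: convert a desired LV size in GiB into a physical-extent count
# for lvcreate -l, given the VG's PE size in MiB (2 MiB in this guide).
lv_size_gib=50
pe_size_mib=2
extents=$(( lv_size_gib * 1024 / pe_size_mib ))
echo "lvcreate -n lv_vol1 -l $extents vg_vol"
```

<p>Changing <code>lv_size_gib</code> or <code>pe_size_mib</code> gives the extent count for any other combination.</p>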
<ol>
<li><strong>List the Logical Volumes</strong>: You can use the <code>lvs</code> or <code>lvdisplay</code> command for detailed output to list all logical volumes.</li>
</ol>
<pre><code class="lang-bash">$ sudo lvs

  LV     VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_vol1 vg_vol -wi-a-----  50.00g                                                    
  lv_vol2 vg_vol -wi-a-----  50.00g                                                    
  lv_vol3 vg_vol -wi-a-----  50.00g
</code></pre>
<pre><code class="lang-bash">$ sudo lvdisplay /dev/vg_vol/lv_vol1

  LV Path                /dev/vg_vol/lv_vol1
  LV Name                lv_vol1
  VG Name                vg_vol
  LV UUID                rdEpAG-7egK-08e4-SSWJ-1Ywx-hVvD-XbFlp0
  LV Write Access        <span class="hljs-built_in">read</span>/write
  LV Creation host, time mh1.example.com, 2024-01-17 20:16:19 +0530
  LV Status              available
  <span class="hljs-comment"># open                 0</span>
  LV Size                50.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently <span class="hljs-built_in">set</span> to     256
  Block device           253:2
</code></pre>
<p>This shows that the logical volumes <code>lv_vol1</code>, <code>lv_vol2</code>, and <code>lv_vol3</code> have been successfully created in the volume group <code>vg_vol</code> each with a size of 50GB.</p>
<hr />
<ul>
<li><h3 id="heading-file-system-formation">File system formation</h3>
</li>
</ul>
<p>Now you can create <code>ext4</code> file systems on these logical volumes (LVs) and check their details:</p>
<ol>
<li><strong>Create ext4 File Systems</strong>: You can use the <code>mkfs.ext4</code> command to create an <code>ext4</code> file system on each of your logical volumes.</li>
</ol>
<pre><code class="lang-bash">$ sudo mkfs.ext4 /dev/vg_vol/lv_vol1
$ sudo mkfs.ext4 /dev/vg_vol/lv_vol2
$ sudo mkfs.ext4 /dev/vg_vol/lv_vol3
</code></pre>
<ol>
<li><strong>List the Block Devices</strong>: You can use the <code>lsblk</code> command to list all block devices.</li>
</ol>
<pre><code class="lang-bash">$ lsblk
</code></pre>
<p>The output might look something like this:</p>
<pre><code class="lang-bash">NAME           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda              8:0    0   100G  0 disk 
sdb              8:16   0   100G  0 disk 
sdc              8:32   0   100G  0 disk 
└─vg_vol       253:0    0   300G  0 lvm  
  ├─lv_vol1    253:1    0    50G  0 lvm  
  ├─lv_vol2    253:2    0    50G  0 lvm  
  └─lv_vol3    253:3    0    50G  0 lvm
</code></pre>
<ol>
<li><strong>Print Block Device Attributes</strong>: You can use the <code>blkid</code> command to print the block device attributes.</li>
</ol>
<pre><code class="lang-bash">$ sudo blkid

/dev/sda: UUID=<span class="hljs-string">"3ef4-gh7j"</span> TYPE=<span class="hljs-string">"LVM2_member"</span>
/dev/sdb: UUID=<span class="hljs-string">"4gh5-jk8l"</span> TYPE=<span class="hljs-string">"LVM2_member"</span>
/dev/sdc: UUID=<span class="hljs-string">"5hj6-kl9m"</span> TYPE=<span class="hljs-string">"LVM2_member"</span>
/dev/vg_vol/lv_vol1: UUID=<span class="hljs-string">"6jk73ekl9f4-gh7j-lm0n3ekl9f4-gh7j-"</span> TYPE=<span class="hljs-string">"ext4"</span>
/dev/vg_vol/lv_vol2: UUID=<span class="hljs-string">"7kl8kl9-mn3ef4-gh7j03ekl9f4-gh7j-o"</span> TYPE=<span class="hljs-string">"ext4"</span>
/dev/vg_vol/lv_vol3: UUID=<span class="hljs-string">"8lkl3ekl9f4-gh7j-93ef4-gh7jm9-no0p"</span> TYPE=<span class="hljs-string">"ext4"</span>
</code></pre>
<ol>
<li><strong>Check the File System Disk Space Usage</strong>: Once the volumes are mounted (covered in the next section), you can use the <code>df -hT</code> command to check the file system disk space usage.</li>
</ol>
<pre><code class="lang-bash">$ df -hT

Filesystem          Type  Size  Used Avail Use% Mounted on
/dev/vg_vol/lv_vol1 ext4   50G  5.0G   45G  10% /mnt/lv_mount1
/dev/vg_vol/lv_vol2 ext4   50G   10G   40G  20% /mnt/lv_mount2
/dev/vg_vol/lv_vol3 ext4   50G   15G   35G  30% /mnt/lv_mount3
</code></pre>
<hr />
<ul>
<li><h3 id="heading-access-lvm-partition-through-mount-access-point">Access LVM partition through mount access point</h3>
</li>
</ul>
<p>We are going to create three mount points, namely <code>lv_mount1</code>, <code>lv_mount2</code>, and <code>lv_mount3</code>, in the <code>/mnt</code> location. After that, we will mount the logical volumes to these mount points. Finally, we will make entries in the <code>fstab</code> file for these mounts.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Create mount points</span>
$ mkdir /mnt/lv_mount1
$ mkdir /mnt/lv_mount2
$ mkdir /mnt/lv_mount3

<span class="hljs-comment"># Mount the logical volumes</span>
$ mount /dev/vg_vol/lv_vol1 /mnt/lv_mount1
$ mount /dev/vg_vol/lv_vol2 /mnt/lv_mount2
$ mount /dev/vg_vol/lv_vol3 /mnt/lv_mount3

<span class="hljs-comment"># Make fstab entries for permanent after boot</span>
$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"/dev/vg_vol/lv_vol1 /mnt/lv_mount1 ext4 defaults 0 0"</span> &gt;&gt; /etc/fstab
$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"/dev/vg_vol/lv_vol2 /mnt/lv_mount2 ext4 defaults 0 0"</span> &gt;&gt; /etc/fstab
$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"/dev/vg_vol/lv_vol3 /mnt/lv_mount3 ext4 defaults 0 0"</span> &gt;&gt; /etc/fstab
</code></pre>
<p>And here is the output of the <code>/etc/fstab</code> file after the above operations:</p>
<pre><code class="lang-bash">$ sudo vim /etc/fstab

<span class="hljs-comment"># &lt;file system&gt; &lt;mount point&gt;   &lt;type&gt;  &lt;options&gt;       &lt;dump&gt;  &lt;pass&gt;</span>
/dev/vg_vol/lv_vol1       /mnt/lv_mount1     ext4   defaults    0 0
/dev/vg_vol/lv_vol2       /mnt/lv_mount2     ext4   defaults    0 0
/dev/vg_vol/lv_vol3       /mnt/lv_mount3     ext4   defaults    0 0
</code></pre>
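<p>Each <code>fstab</code> entry has six whitespace-separated fields. As a self-contained sketch (it parses a sample line rather than the real <code>/etc/fstab</code>), <code>awk</code> makes the field layout explicit:</p>

```shell
# Sketch: label the six fstab fields of one of the lines added above.
# Parses a sample string so it does not depend on the real /etc/fstab.
line="/dev/vg_vol/lv_vol1 /mnt/lv_mount1 ext4 defaults 0 0"
printf '%s\n' "$line" |
  awk '{print "device:", $1; print "mountpoint:", $2; print "fstype:", $3;
        print "options:", $4; print "dump:", $5; print "fsck-pass:", $6}'
```

<p>The last field (fsck pass) is <code>0</code> here, meaning the volume is skipped by the boot-time file system check.</p>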
<hr />
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Add some data to the logical volume.</div>
</div>

<pre><code class="lang-plaintext">$ sudo cp -arvf /var/* /mnt/lv_mount1/
$ sudo cp -arvf /boot/* /mnt/lv_mount1/
</code></pre>
<p>Check the used space on the logical volume.</p>
<pre><code class="lang-plaintext">$ sudo du -sh /mnt/lv_mount1

4.0G    /mnt/lv_mount1/
</code></pre>
<hr />
<ul>
<li><h3 id="heading-resizing-lvm-options-lvreduce-and-lvextend">Resizing LVM options: lvreduce and lvextend</h3>
</li>
</ul>
<p>You can use the <code>lvreduce</code> and <code>lvextend</code> commands to resize LVM partitions:</p>
<ol>
<li><strong>Reducing the size of an LVM partition (</strong><code>lvreduce</code>):</li>
</ol>
<p>Before reducing the size of a logical volume, it’s important to ensure that the file system within the volume can also be reduced. For an ext4 file system, you can use the <code>resize2fs</code> command:</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">NOTE: When you reduce the size of a logical volume with <code>lvreduce</code>, you’re reducing the amount of disk space allocated to that volume. If you do this without first reducing the size of the file system, you risk losing data because <code>lvreduce</code> does not care about the data stored within the volume. If you’re extending a logical volume, the order is reversed. You should use <code>lvextend</code> to extend the size of the logical volume first, and then use <code>resize2fs</code> to extend the file system to fill the newly-allocated space.</div>
</div>

<p>On the other hand, <code>resize2fs</code> is a tool that knows how to safely shrink an ext4 file system. It will move any data residing on the blocks that are being removed to other parts of the file system.</p>
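<p>To keep that order straight, here is a small dry-run sketch (a hypothetical helper, not an LVM tool) that only echoes the commands in the safe shrink sequence, so you can review them before running anything for real:</p>

```shell
# Hypothetical helper: print the safe ext4 shrink sequence without executing it.
# Arguments: logical volume path, mount point, new size.
shrink_lv_dryrun() {
  lv=$1; mnt=$2; newsize=$3
  echo "umount $mnt"                 # 1. unmount the file system
  echo "e2fsck -f $lv"               # 2. check the file system first
  echo "resize2fs $lv $newsize"      # 3. shrink the file system
  echo "lvreduce -L $newsize $lv"    # 4. only then shrink the logical volume
  echo "mount $lv $mnt"              # 5. remount
}
shrink_lv_dryrun /dev/vg_vol/lv_vol1 /mnt/lv_mount1 40G
```

<p>The key point the function encodes: <code>resize2fs</code> must run before <code>lvreduce</code> when shrinking (and the order reverses when extending).</p>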
<pre><code class="lang-bash"><span class="hljs-comment"># Unmount the logical volume</span>
$ umount /mnt/lv_mount1

<span class="hljs-comment"># Check the file system before resizing</span>
$ e2fsck -f /dev/vg_vol/lv_vol1

e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg_vol/lv_vol1: 11/360448 files (0.0% non-contiguous), 44646/1441792 blocks
</code></pre>
<pre><code class="lang-plaintext">#Resize the file system
$ resize2fs /dev/vg_vol/lv_vol1 40G
</code></pre>
<pre><code class="lang-plaintext">#Reduce the size of the logical volume
$ lvreduce -L 40G /dev/vg_vol/lv_vol1

WARNING: Reducing active logical volume to 40.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg_vol/lv_vol1? [y/n]: y
Size of logical volume vg_vol/lv_vol1 changed from 50.00 GiB (25600 extents) to 40.00 GiB (20480 extents).
Logical volume vg_vol/lv_vol1 successfully resized.
</code></pre>
<pre><code class="lang-plaintext">#Remount the logical volume
$ mount /dev/vg_vol/lv_vol1 /mnt/lv_mount1
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Now check the LVM size status using <code>lsblk</code>. You can see that <code>/dev/vg_vol/lv_vol1</code> is now 40G.</div>
</div>

<pre><code class="lang-plaintext">$ lsblk

NAME           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda              8:0    0   100G  0 disk 
sdb              8:16   0   100G  0 disk 
sdc              8:32   0   100G  0 disk 
└─vg_vol       253:0    0   300G  0 lvm  
  ├─lv_vol1    253:1    0    40G  0 lvm  /mnt/lv_mount1
  ├─lv_vol2    253:2    0    50G  0 lvm  /mnt/lv_mount2
  └─lv_vol3    253:3    0    50G  0 lvm  /mnt/lv_mount3
</code></pre>
<p>In this example, the size of <code>lv_vol1</code> is reduced from 50 GB to 40 GB, returning 10 GB of free space to the volume group.</p>
<ol>
<li><strong>Increasing the size of an LVM partition (</strong><code>lvextend</code>):</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-comment"># Extend the size of the logical volume </span>
$ sudo lvextend -L +10G /dev/vg_vol/lv_vol1
$ sudo lvextend -r -L +10G /dev/vg_vol/lv_vol1 <span class="hljs-comment">#(-r resizes the file system at the same time)</span>

<span class="hljs-comment"># Resize the file system</span>
$ resize2fs /dev/vg_vol/lv_vol1   <span class="hljs-comment">#(not needed if you used the -r option with lvextend)</span>
</code></pre>
<p>In this example, the size of <code>lv_vol1</code> is increased by 10GB.</p>
<pre><code class="lang-plaintext">$ lsblk

NAME           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda              8:0    0   100G  0 disk 
sdb              8:16   0   100G  0 disk 
sdc              8:32   0   100G  0 disk 
└─vg_vol       253:0    0   300G  0 lvm  
  ├─lv_vol1    253:1    0    20G  0 lvm  /mnt/lv_mount1
  ├─lv_vol2    253:2    0    50G  0 lvm  /mnt/lv_mount2
  └─lv_vol3    253:3    0    50G  0 lvm  /mnt/lv_mount3
</code></pre>
<ol start="2">
<li><p><strong>Increasing the size of a VG pool (</strong><code>vgextend</code>):</p>
<p> Extending a Volume Group (VG) in LVM means adding more disks to give the group extra storage space. This lets you handle more data without shutting down your system, and it keeps storage easy to manage as your needs grow.</p>
<p> To extend the VG, we need a newly created PV. For example, we have one more disk “<strong>/dev/sdd</strong>“, so add this PV into <code>vg_vol</code>:</p>
<pre><code class="lang-plaintext"> $ vgextend vg_vol /dev/sdd
 Volume group "vg_vol" successfully extended
</code></pre>
</li>
</ol>
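<p>After extending, it is worth verifying that the new disk actually joined the pool. Below is a quick sketch using the standard LVM reporting commands (device names follow the example above):</p>
<pre><code class="lang-plaintext">#Verify the new PV joined the VG and the pool capacity increased
$ sudo pvs          #lists every PV with its VG membership and free space
$ sudo vgs vg_vol   #VSize should now include the capacity of /dev/sdd
</code></pre>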
<hr />
<ul>
<li><h3 id="heading-taking-a-snapshot-of-lvm-and-its-uses">Taking a snapshot of LVM and its uses</h3>
</li>
</ul>
<ol>
<li><p>Create a logical volume named <em>origin</em> from the volume group <em>vg001</em>:</p>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">You can create the snapshot right after creating the logical volume, as shown below.</div>
 </div>

<pre><code class="lang-plaintext"> $ sudo lvcreate -n lv_vol1 -L 50G vg_vol
 $ sudo lvcreate --size 1G --name lv_vol1_snap --snapshot /dev/vg_vol/lv_vol1   

 Logical volume "lv_vol1_snap" created.
</code></pre>
</li>
<li><p>Create a snapshot logical volume named <em>snap</em> of <code>/dev/vg_vol/lv_vol1</code> that is <em>1GB</em> in size:</p>
<pre><code class="lang-plaintext"> #lvcreate --size 1G --name snap --snapshot /dev/vg_vol/lv_vol1

 Logical volume "snap" created.
</code></pre>
<pre><code class="lang-plaintext"> $ lsblk

 NAME           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
 sda              8:0    0   100G  0 disk 
 sdb              8:16   0   100G  0 disk 
 sdc              8:32   0   100G  0 disk 
 └─vg_vol       253:0    0   300G  0 lvm  
   ├─lv_vol1    253:1    0    20G  0 lvm  
   ├─lv_vol1_snap 253:4   0     1G  0 lvm
   ├─lv_vol2    253:2    0    50G  0 lvm  /mnt/lv_mount2
   └─lv_vol3    253:3    0    50G  0 lvm  /mnt/lv_mount3
</code></pre>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">You can also use the <code>-L</code> argument instead of <code>--size</code>, <code>-n</code> instead of <code>--name</code>, and <code>-s</code> instead of <code>--snapshot</code> to create a snapshot. Separately, LVM archives the volume group metadata automatically on every change; you can find these archives under <code>/etc/lvm/archive/</code>.</div>
 </div>

<pre><code class="lang-plaintext"> $ cd /etc/lvm/archive/
 $ ls -al

 total 60
 drwx------. 2 root root 4096 Jan 17 17:28 .
 drwxr-xr-x. 7 root root  115 Jan 17 13:02 ..
 -rw-------. 1 root root 1887 Jan 17 12:26 vg_vol_snap_00000-733987744.vg
 -rw-------. 1 root root 2484 Jan 17 12:34 vg_vol_snap_00001-1752724781.vg
 -rw-------. 1 root root 3074 Jan 17 12:37 vg_vol_snap_00002-1278740074.vg
 -rw-------. 1 root root 3239 Jan 17 12:52 vg_vol_snap_00003-1110221396.vg
 -rw-------. 1 root root 3552 Jan 17 12:52 vg_vol_snap_00004-786463575.vg
 -rw-------. 1 root root 4308 Jan 17 12:53 vg_vol_snap_00005-1268100483.vg
 -rw-------. 1 root root 3532 Jan 17 12:53 vg_vol_snap_00006-1388604217.vg
 -rw-------. 1 root root 3550 Jan 17 12:54 vg_vol_snap_00007-730868741.vg
 -rw-------. 1 root root 3284 Jan 17 13:04 vg_vol_snap_00008-316161094.vg
 -rw-------. 1 root root 3688 Jan 17 13:04 vg_vol_snap_00009-1832259422.vg
 -rw-------. 1 root root 3661 Jan 17 17:28 vg_vol_snap_00010-1424043212.vg
</code></pre>
</li>
</ol>
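<p>A common use of a snapshot is rolling the origin volume back to the captured state. Below is a minimal sketch, assuming the <code>lv_vol1_snap</code> snapshot from the example above; <code>lvconvert --merge</code> is the standard LVM command for this:</p>
<pre><code class="lang-plaintext">#Optionally inspect the snapshot first by mounting it read-only
$ sudo mkdir -p /mnt/snap-check
$ sudo mount -o ro /dev/vg_vol/lv_vol1_snap /mnt/snap-check

#Unmount both volumes, then merge the snapshot back into the origin
$ sudo umount /mnt/snap-check
$ sudo umount /dev/vg_vol/lv_vol1
$ sudo lvconvert --merge /dev/vg_vol/lv_vol1_snap
</code></pre>
<p>Note that merging consumes the snapshot: once the rollback finishes, <code>lv_vol1_snap</code> is removed automatically. If the origin is still in use, the merge is deferred until its next activation.</p>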
<hr />
<ol start="3">
<li><p>How to remove active PV from the LVM</p>
<p> At times in a production environment, you may encounter issues with an active PV (Physical Volume). When this happens, the solution is often to replace it with a new disk. However, as an administrator, you must consider the data stored on the PV. So, what’s the correct approach?</p>
<ol>
<li><p><strong>Move the data from the removable PV to another available PV.</strong></p>
<p> For example we are removing a PV named <strong>“/dev/sdb“</strong></p>
<pre><code class="lang-plaintext"> $ pvmove /dev/sdb /dev/sda
   /dev/sdb: Moved: 0.39%
   /dev/sdb: Moved: 100.00%
</code></pre>
</li>
<li><p><strong>Reduce the VG (Volume Group) size.</strong></p>
<pre><code class="lang-plaintext"> $ vgreduce vg_vol /dev/sdb
   Removed "/dev/sdb" from volume group "vg_vol"
</code></pre>
</li>
<li><p><strong>Safely remove the PV from the Volume Group.</strong></p>
<pre><code class="lang-plaintext"> $ pvremove /dev/sdb
   Labels on physical volume "/dev/sdb" successfully wiped.
</code></pre>
</li>
</ol>
</li>
</ol>
<hr />
]]></content:encoded></item><item><title><![CDATA[Automating MySQL Database Configuration: A Deep Dive into Ansible Playbooks]]></title><description><![CDATA[Definition:
MySQL is a widely used open-source relational database management system (RDBMS) that uses Structured Query Language (SQL), the most popular language for adding, accessing, and managing content in a database. It’s known for its quick proc...]]></description><link>https://projectwala.site/automating-mysql-database-configuration-a-deep-dive-into-ansible-playbooks</link><guid isPermaLink="true">https://projectwala.site/automating-mysql-database-configuration-a-deep-dive-into-ansible-playbooks</guid><category><![CDATA[ansible]]></category><category><![CDATA[ansible-playbook]]></category><category><![CDATA[ansible-module]]></category><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[linux-basics]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[DevOps trends]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[Databases]]></category><category><![CDATA[automation]]></category><category><![CDATA[step by step]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Sat, 06 Jan 2024 18:00:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704563275431/60e4d409-ec91-47d2-aca6-bdab2f9b7982.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h3 id="heading-definition">Definition:</h3>
<p>MySQL is a widely used open-source relational database management system (RDBMS) that uses Structured Query Language (SQL), the most popular language for adding, accessing, and managing content in a database. It’s known for its quick processing, proven reliability, ease and flexibility of use.</p>
<p>Ansible, on the other hand, is an open-source software provisioning, configuration management, and application-deployment tool. It provides large productivity gains to a wide variety of automation challenges, including setting up databases.</p>
<p>When it comes to automating MySQL database setup and management, Ansible shines in several ways:</p>
<ol>
<li><p><strong>Simplicity</strong>: Ansible uses a simple syntax written in YAML called playbooks. These playbooks are easy to write, read, and understand, even for those not familiar with the Ansible tool.</p>
</li>
<li><p><strong>Efficiency</strong>: Ansible allows you to automate the process of setting up and configuring MySQL databases, which can be a time-consuming and error-prone task if done manually.</p>
</li>
<li><p><strong>Consistency</strong>: By using Ansible, you ensure that your MySQL databases are set up and configured consistently across all your environments.</p>
</li>
<li><p><strong>Scalability</strong>: Ansible can easily scale to manage many databases, making it a great choice for large-scale deployments.</p>
</li>
</ol>
<p>The combination of MySQL and Ansible provides a powerful toolset for managing databases efficiently and consistently. Whether you’re a database administrator looking to automate routine tasks or a developer wanting to ensure consistent database setup across multiple environments, Ansible and MySQL can make your life much easier. So let's create an Ansible playbook to configure and automate a MySQL database on your host machine.</p>
<hr />
<h3 id="heading-create-an-ansible-playbook-for-configure-mysql-database-on-localhost">Create an ansible playbook for configure MySQL database on localhost.</h3>
<pre><code class="lang-plaintext">[ansible@master ~]$ vim mysql.yml

---
- name: Configure MySQL database using Ansible automation
  hosts: localhost
  vars:
    db_username: ram
    db_name: wordpress
    mysql_root_password: password
  tasks:
    - name: Install pip3
      ansible.builtin.yum:
        name: python3-pip
        state: present
        update_cache: yes

    - name: Install Python MySQL module
      ansible.builtin.pip:
        name: PyMySQL
        state: present

    - name: Install MySQL
      ansible.builtin.yum:
        name: mysql-server
        state: latest
      notify: restart mysqld

    - name: Create database user with password and all database privileges and 'WITH GRANT OPTION'
      community.mysql.mysql_user:
        name: "{{ db_username }}"
        password: "{{ mysql_root_password }}"
        priv: '*.*:ALL,GRANT'
        state: present

    - name: Create a new database "{{ db_name }}"
      community.mysql.mysql_db:
        login_user: "{{ db_username }}"
        login_password: "{{ mysql_root_password }}"
        name: "{{ db_name }}"
        state: present

    - name: Add sample database to the database
      copy:
        src: ./dump.sql
        dest: /tmp/dump.sql

    - name: insert sample database into database
      community.mysql.mysql_db:
        name: "{{ db_name }}"
        state: import
        target: /tmp/dump.sql
        login_user: "{{ db_username }}"
        login_password: "{{ mysql_root_password }}"

  handlers:
    - name: restart mysqld
      ansible.builtin.service:
        name: mysqld
        state: restarted
        enabled: yes
</code></pre>
<h3 id="heading-create-a-new-table-for-the-newly-created-database-and-insert-some-records">Create a new table for the newly created database and insert some records.</h3>
<pre><code class="lang-plaintext">[ansible@master ~]$ vim dump.sql

CREATE TABLE IF NOT EXISTS test (
  message varchar(255) NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

INSERT INTO test(message) VALUES('Ansible To Do List');
INSERT INTO test(message) VALUES('Get ready');
INSERT INTO test(message) VALUES('Ansible is fun');
INSERT INTO test(message) VALUES('Learn Ansible');
INSERT INTO test(message) VALUES('Setup Ansible');
INSERT INTO test(message) VALUES('Test Ansible setup');
</code></pre>
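<p>With <code>mysql.yml</code> and <code>dump.sql</code> saved in the same directory, the playbook can be run as sketched below (<code>--syntax-check</code> is optional but catches YAML mistakes early):</p>
<pre><code class="lang-plaintext">#Validate the playbook syntax, then execute it against localhost
$ ansible-playbook mysql.yml --syntax-check
$ ansible-playbook mysql.yml

#Variables can also be overridden at run time, for example a different database name
$ ansible-playbook mysql.yml -e "db_name=testdb"
</code></pre>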
<p>Let's check whether our MySQL database is configured correctly. This is the output:</p>
<pre><code class="lang-plaintext">[ansible@master ~]$ mysql -u ram -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.32 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql&gt; show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| wordpress          |
+--------------------+
5 rows in set (0.00 sec)

mysql&gt; use wordpress;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql&gt; show tables;
+---------------------+
| Tables_in_wordpress |
+---------------------+
| test                |
+---------------------+
1 row in set (0.00 sec)

mysql&gt; describe test
    -&gt; ;
+---------+--------------+------+-----+---------+-------+
| Field   | Type         | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+-------+
| message | varchar(255) | NO   |     | NULL    |       |
+---------+--------------+------+-----+---------+-------+
1 row in set (0.00 sec)

mysql&gt; select * from test;
+--------------------+
| message            |
+--------------------+
| Ansible To Do List |
| Get ready          |
| Ansible is fun     |
| Learn Ansible      |
| Setup Ansible      |
| Test Ansible setup |
+--------------------+
6 rows in set (0.00 sec)

mysql&gt;
</code></pre>
<hr />
<h3 id="heading-in-short-description-about-playbook">In-Short Description about playbook</h3>
<p>Let's walk through this playbook step by step. It is designed to configure a MySQL database on <a target="_blank" href="http://localhost">localhost</a>. Here’s a breakdown of what each part does:</p>
<ol>
<li><p><strong>Variables</strong>: The playbook defines three variables: <code>db_username</code>, <code>db_name</code>, and <code>mysql_root_password</code>. These are used later in the playbook to set up the MySQL database.</p>
</li>
<li><p><strong>Tasks</strong>: The playbook then performs a series of tasks:</p>
<ul>
<li><p><strong>Install pip3</strong>: This task installs pip3, a package manager for Python, using the <code>ansible.builtin.yum</code> module.</p>
</li>
<li><p><strong>Install Python MySQL module</strong>: This task uses pip3 to install the PyMySQL module, which is a Python interface for MySQL.</p>
</li>
<li><p><strong>Install MySQL</strong>: This task installs the latest version of MySQL using the <code>ansible.builtin.yum</code> module. If the installation is successful, it triggers a handler to restart the MySQL service.</p>
</li>
<li><p><strong>Create database user</strong>: This task creates a new MySQL user with the username and password specified by the <code>db_username</code> and <code>mysql_root_password</code> variables. The user is granted all privileges on all databases.</p>
</li>
<li><p><strong>Create a new database</strong>: This task creates a new MySQL database with the name specified by the <code>db_name</code> variable.</p>
</li>
<li><p><strong>Add sample database to the database</strong>: This task copies a sample database file (<code>dump.sql</code>) to the <code>/tmp</code> directory on the target host.</p>
</li>
<li><p><strong>Insert sample database into database</strong>: This task imports the sample database into the newly created database.</p>
</li>
</ul>
</li>
<li><p><strong>Handlers</strong>: The playbook includes a handler to restart the MySQL service. This handler is triggered if the MySQL installation task is successful.</p>
</li>
</ol>
<p>This playbook provides a comprehensive example of how to automate the process of setting up a MySQL database using Ansible. It demonstrates the use of variables, tasks, and handlers in an Ansible playbook. It also shows how to use different Ansible modules to perform tasks such as installing packages, creating users and databases, and importing data.</p>
<hr />
<p>If any issues are found in this code, please comment, so we can fix them as soon as possible. Thank you.</p>
<p>#Ansible #MySQL #DatabaseConfiguration #DevOps #Automation #InfrastructureAsCode #IAC #Playbook #RelationalDatabase #RDBMS #ConfigurationManagement #ITAutomation #CloudComputing #OpenSource #Tech</p>
]]></content:encoded></item><item><title><![CDATA[Ansible Adventures: Explore a Range of Projects Suited to Your Skill Level!]]></title><description><![CDATA[Beginner Level:

Setup a Web Server: Use Ansible to automate the setup of a web server, like Apache or Nginx. This includes installing the necessary packages, starting the service, and ensuring it runs at startup.

Software configuration: Use Ansible...]]></description><link>https://projectwala.site/ansible-adventures-explore-a-range-of-projects-suited-to-your-skill-level</link><guid isPermaLink="true">https://projectwala.site/ansible-adventures-explore-a-range-of-projects-suited-to-your-skill-level</guid><category><![CDATA[ansible]]></category><category><![CDATA[ansible-playbook]]></category><category><![CDATA[ansible-module]]></category><category><![CDATA[ideas]]></category><category><![CDATA[projects]]></category><category><![CDATA[Linux]]></category><category><![CDATA[beginner]]></category><category><![CDATA[interview]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[DevOps trends]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Sat, 06 Jan 2024 06:01:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704520776216/ce5923c7-9420-4889-b58a-84547676a11c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h3 id="heading-beginner-level"><strong>Beginner Level:</strong></h3>
<ol>
<li><p><a target="_blank" href="https://www.rakamodify.online/linux-ansible-playbook-projects"><strong>Setup a Web Server</strong></a><strong>:</strong> Use Ansible to automate the setup of a web server, like Apache or Nginx. This includes installing the necessary packages, starting the service, and ensuring it runs at startup.</p>
</li>
<li><p><a target="_blank" href="https://www.rakamodify.online/start-to-finish-ansible-setup-easy-playbook-configuration">Software configuration</a>: Use Ansible to automate software package installation and configuration.</p>
</li>
<li><p><strong>User Management:</strong> Write an Ansible playbook to create, delete, and manage users on a Linux system. This can include setting up SSH keys for remote login.</p>
</li>
<li><p><a target="_blank" href="https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693"><strong>Automate Package Updates:</strong> Write a playbook to automate the process of updating all packages on your system</a>.</p>
</li>
<li><p><a target="_blank" href="https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693"><strong>Setup a Database Server:</strong> Use Ansible to automate the setup of a database server, like MySQL or PostgreSQL</a>.</p>
</li>
<li><p><a target="_blank" href="https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693"><strong>Automate System Updates:</strong> Write a playbook to automate system updates on all your managed nodes</a>.</p>
</li>
<li><p><a target="_blank" href="https://www.rakamodify.online/vsftp-your-secure-digital-vault-for-effortless-file-sharing">Setup VSFTP Server</a>: Use Ansible to automate the VSFTP file server for cross-system files transfer.</p>
</li>
<li><p><a target="_blank" href="https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693"><strong>Setup a File Server:</strong> Use Ansible to set up a file server, like Samba or NFS</a>.</p>
</li>
<li><p><a target="_blank" href="https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693"><strong>Setup a DNS Server:</strong> Use Ansible to automate the setup of a DNS server, like BIND</a>.</p>
</li>
<li><p><a target="_blank" href="https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693"><strong>Automate System Audits:</strong> Write a playbook to automate system audits and generate reports</a>.</p>
</li>
<li><p><a target="_blank" href="https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693"><strong>Setup a Mail Server:</strong> Use Ansible to set up a mail server, like Postfix</a>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704519645887/a94767b8-0b90-4bb1-b773-8075d7e27645.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-intermediate-level"><strong>Intermediate Level:</strong></h3>
<ol>
<li><p><strong>Multi-tier Application Deployment:</strong> Use Ansible to deploy a multi-tier application. This could be a simple web application with a separate database server.</p>
</li>
<li><p><strong>Automate Security Updates:</strong> Write a playbook to automate the process of updating your system packages for security patches.</p>
</li>
<li><p><a target="_blank" href="https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693"><strong>Automate Network Configuration:</strong> Use Ansible to automate the configuration of network devices</a>.</p>
</li>
<li><p><a target="_blank" href="https://www.libhunt.com/topic/ansible"><strong>DevOps Exercises:</strong> Contribute to open-source projects that use Ansible, such as the DevOps Exercises project</a>.</p>
</li>
<li><p><a target="_blank" href="https://thomascfoulds.com/2021/09/29/25-tips-for-using-ansible-in-large-projects.html"><strong>Automate Database Backups:</strong> Write a playbook to automate database backups</a>.</p>
</li>
<li><p><a target="_blank" href="https://thomascfoulds.com/2021/09/29/25-tips-for-using-ansible-in-large-projects.html"><strong>Automate Log Rotation:</strong> Use Ansible to automate log rotation on your servers</a>.</p>
</li>
<li><p><a target="_blank" href="https://thomascfoulds.com/2021/09/29/25-tips-for-using-ansible-in-large-projects.html"><strong>Automate SSL Certificate Renewals:</strong> Write a playbook to automate SSL certificate renewals</a>.</p>
</li>
<li><p><a target="_blank" href="https://thomascfoulds.com/2021/09/29/25-tips-for-using-ansible-in-large-projects.html"><strong>Automate System Monitoring:</strong> Use Ansible to automate the setup of system monitoring tools, like Nagios</a>.</p>
</li>
<li><p><a target="_blank" href="https://thomascfoulds.com/2021/09/29/25-tips-for-using-ansible-in-large-projects.html"><strong>Automate Firewall Configuration:</strong> Write a playbook to automate firewall configuration</a>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704519864102/34ae18da-7299-433d-bb38-39378c37d9e6.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-expert-level"><strong>Expert Level:</strong></h3>
<ol>
<li><p><strong>Continuous Integration/Continuous Deployment (CI/CD) Pipeline:</strong> Use Ansible in a CI/CD pipeline to automate the testing and deployment of code.</p>
</li>
<li><p><strong>Infrastructure Monitoring:</strong> Integrate Ansible with monitoring tools like Nagios or Prometheus to automate the setup and configuration of infrastructure monitoring.</p>
</li>
<li><p><a target="_blank" href="https://www.libhunt.com/topic/ansible"><strong>AWX Project:</strong> Contribute to the AWX project, which provides a web-based user interface, REST API, and task engine built on top of Ansible</a>.</p>
</li>
<li><p><a target="_blank" href="https://www.libhunt.com/topic/ansible"><strong>Kubespray:</strong> Contribute to the Kubespray project, which uses Ansible to deploy a production-ready Kubernetes cluster</a>.</p>
</li>
<li><p><a target="_blank" href="https://www.libhunt.com/topic/ansible"><strong>Automate Multi-Cloud Deployments:</strong> Use Ansible to automate deployments across multiple cloud providers</a>.</p>
</li>
<li><p><a target="_blank" href="https://www.libhunt.com/topic/ansible"><strong>Automate Big Data Deployments:</strong> Write a playbook to automate the deployment of big data tools, like Hadoop</a>.</p>
</li>
<li><p><a target="_blank" href="https://www.libhunt.com/topic/ansible"><strong>Automate Machine Learning Workflows:</strong> Use Ansible to automate machine learning workflows</a>.</p>
</li>
<li><p><a target="_blank" href="https://www.libhunt.com/topic/ansible"><strong>Automate Microservices Deployments:</strong> Write a playbook to automate the deployment of microservices</a>.</p>
</li>
<li><p><a target="_blank" href="https://www.libhunt.com/topic/ansible"><strong>Automate Zero Downtime Deployments:</strong> Use Ansible to automate zero downtime deployments</a>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1704519695031/f868dc34-b5a7-4ae1-8c76-75d55ed1dca0.jpeg" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-references">References:</h3>
<ul>
<li><p><a target="_blank" href="https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693">https://faun.pub/mini-project-simple-automation-using-ansible-3b0ee607a693</a></p>
</li>
<li><p><a target="_blank" href="https://www.libhunt.com/topic/ansible">https://www.libhunt.com/topic/ansible</a></p>
</li>
<li><p><a target="_blank" href="https://thomascfoulds.com/2021/09/29/25-tips-for-using-ansible-in-large-projects.html">https://thomascfoulds.com/2021/09/29/25-tips-for-using-ansible-in-large-projects.html</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Start-to-Finish Ansible Setup: Easy Playbook Configuration]]></title><description><![CDATA[Note: Write your managed host's IP address and their hostnames in the ./hosts-ip.yml file and don't forget to include it in your playbook.

vim ./hosts-ip.yml 
hosts-entries:   
  - ip: 192.168.10.1     
    hostname: mh1.example.com
  - ip: 192.168....]]></description><link>https://projectwala.site/start-to-finish-ansible-setup-easy-playbook-configuration</link><guid isPermaLink="true">https://projectwala.site/start-to-finish-ansible-setup-easy-playbook-configuration</guid><category><![CDATA[ansible-playbook]]></category><category><![CDATA[ansible]]></category><category><![CDATA[ansible-module]]></category><category><![CDATA[modules]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[DevOps trends]]></category><category><![CDATA[services]]></category><category><![CDATA[Beginner Developers]]></category><category><![CDATA[beginner]]></category><category><![CDATA[2024]]></category><category><![CDATA[playbook]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Fri, 05 Jan 2024 15:23:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1704468022129/23df849b-74a5-4c53-ab74-07855abed13f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Note: Write your managed hosts' IP addresses and their hostnames in the <code>./hosts-ip.yml</code> file, and don't forget to include it in your playbook.</p>
</blockquote>
<pre><code class="lang-plaintext">vim ./hosts-ip.yml

hosts_entries:
  - ip: 192.168.10.1
    hostname: mh1.example.com
  - ip: 192.168.10.2
    hostname: mh2.example.com
  - ip: 192.168.10.3
    hostname: mh3.example.com
</code></pre>
<blockquote>
<p>Make sure to create an inventory file in the same place where your ansible.cfg is located, and don't forget to include it in your playbook.</p>
</blockquote>
<pre><code class="lang-plaintext">vim ./inventory

[stage]
mh1.example.com

[test]
mh2.example.com

[prod]
mh3.example.com
</code></pre>
<blockquote>
<p>This is your ansible configuration playbook</p>
</blockquote>
<pre><code class="lang-plaintext">---
- name: Install Ansible from scratch
  hosts: cn.example.com
  become: yes
  become_user: root
  gather_facts: yes
  vars:
    username: admin
    password: password
    inventory_file: path/to/inventory/file
  vars_files:
    - ./hosts-ip.yml
  tasks:
    - name: "Yum repository configuration on server"
      ansible.builtin.yum_repository:
        name: "{{ item.name }}"
        description: YUM repo
        file: external
        baseurl: "{{ item.baseurl }}"
        gpgcheck: 0
        enabled: 1
      loop:
        - { name: online_one, baseurl: "https://mirror.stream.centos.org/9-stream/AppStream/x86_64/os/" }
        - { name: online_two, baseurl: "https://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/" }
      when: ansible_facts.distribution in ['RedHat','Fedora'] and ansible_facts.distribution_major_version | int &gt;= 9

    - name: "Setup EPEL REPOSITORY"
      ansible.builtin.dnf:
        name: https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
        state: present
      when: ansible_facts.distribution in ['RedHat','Fedora'] and ansible_facts.distribution_major_version | int &gt;= 9

    - name: "Install ansible in the machine"
      ansible.builtin.dnf:
        name: ansible
        state: latest
      register: output

    - name: show the result
      debug:
        var: output

    - name: "create ip hostname entry in /etc/hosts file of control-node machine"
      lineinfile:
        path: /etc/hosts
        line: "{{ item.ip }}  {{ item.hostname }}"
        state: present
      loop: "{{ hosts_entries }}"

    - name: "create a user {{ username }}"
      user:
        name:  "{{ username }}"
        state: present
        password: "{{ password | password_hash('sha512') }}"

    - name: Enable PermitRootLogin, PubkeyAuthentication, and PasswordAuthentication
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
        state: present
        backup: yes
      loop:
        - { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin yes' }
        - { regexp: '^#?PubkeyAuthentication', line: 'PubkeyAuthentication yes' }
        - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication yes' }
      notify: restart sshd service

    - name: "create ansible directory in {{ username }} home directory"
      file:
        path: "/home/{{ username }}/.ansible"
        state: directory

    - name: "Create ansible.cfg file in {{ username }} home directory under .ansible directory"
      file:
        path: "/home/{{ username }}/.ansible/{{ item }}"
        state: touch
      loop:
        - ansible.cfg
        - inventory

    - name: "Insert details in ansible.cfg file"
      blockinfile:
        path: "/home/{{ username }}/.ansible/ansible.cfg"
        block: |
          [defaults]
          inventory = /home/{{ username }}/.ansible/inventory
          remote_user = {{ username }}
          ask_pass = false

          [privilege_escalation]
          become = true
          become_method = sudo
          become_user = root
          become_ask_pass = false

    - name: "create inventory file"
      ansible.builtin.copy:
        src: "{{ inventory_file }}"
        dest: "/home/{{ username }}/.ansible/inventory"
  handlers:
    - name: restart sshd service
      service:
        name: sshd
        state: restarted



- name: Install Ansible from scratch
  hosts: stage
  become: yes
  vars:
    username: ansible
  become_user: root
  gather_facts: yes
  tasks:
    - name: "Configure ssh to make password-less connection between machines"
      ansible.posix.authorized_key:
        user: '{{ username }}'
        state: present
        key: "{{ lookup('file', '/home/' + username + '/.ssh/id_rsa.pub') }}"
      notify: restart sshd service
      register: connection_output

    - name: show the connection output
      debug:
        var: connection_output

  handlers:
    - name: restart sshd service
      service:
        name: sshd
        state: restarted
</code></pre>
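<p>One prerequisite worth calling out: the <code>authorized_key</code> task reads the public key from <code>/home/&lt;username&gt;/.ssh/id_rsa.pub</code> on the control node, so that key pair must already exist. A minimal sketch, run once as the <code>ansible</code> user before the second play:</p>
<pre><code class="lang-plaintext">#Generate an RSA key pair with no passphrase if one does not exist yet
$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
</code></pre>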
<hr />
<h3 id="heading-description-of-this-playbook-in-short">Description of this playbook in short</h3>
<p>This Ansible playbook is designed to set up Ansible on a managed host. Here’s a simplified explanation of what each part does:</p>
<ol>
<li><p><strong>Yum repository configuration on server</strong>: This task sets up the Yum repositories on the server. It uses a loop to add two repositories, one for AppStream and one for BaseOS. This task only runs if the server’s operating system is RedHat or Fedora and the major version is 9 or above.</p>
</li>
<li><p><strong>Setup EPEL REPOSITORY</strong>: This task installs the EPEL repository on the server. This task also only runs if the server’s operating system is RedHat or Fedora and the major version is 9 or above.</p>
</li>
<li><p><strong>Install ansible in the machine</strong>: This task installs the latest version of Ansible on the server.</p>
</li>
<li><p><strong>Show the result</strong>: This task displays the result of the Ansible installation.</p>
</li>
<li><p><strong>Create IP hostname entry in /etc/hosts file of control-node machine</strong>: This task adds entries to the /etc/hosts file on the control node machine. The entries are defined in the <code>hosts-entries</code> variable.</p>
</li>
<li><p><strong>Create a user</strong>: This task creates a new user on the server. The username and password are defined in the <code>username</code> and <code>password</code> variables.</p>
</li>
<li><p><strong>Enable PermitRootLogin, PubkeyAuthentication, and PasswordAuthentication</strong>: This task modifies the SSH configuration to enable root login, public key authentication, and password authentication.</p>
</li>
<li><p><strong>Configure ssh to make password-less connection between machines</strong>: This task sets up SSH keys for the new user to allow password-less connections between machines.</p>
</li>
<li><p><strong>Create ansible directory in user home directory</strong>: This task creates a new directory for Ansible in the home directory of the new user.</p>
</li>
<li><p><strong>Create ansible.cfg file in user home directory under .ansible directory</strong>: This task creates an Ansible configuration file and an inventory file in the new Ansible directory.</p>
</li>
<li><p><strong>Insert details in ansible.cfg file</strong>: This task adds configuration details to the Ansible configuration file.</p>
</li>
<li><p><strong>Create inventory file</strong>: This task copies an inventory file to the new Ansible directory.</p>
</li>
</ol>
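<p>If you want to eyeball the configuration the playbook produces without logging into the managed host, the short shell sketch below rebuilds the expected <code>ansible.cfg</code> locally (the <code>ansible</code> username and the <code>/tmp</code> path are illustrative, not part of the playbook) and greps for the two settings that matter most:</p>
<pre><code class="lang-bash"># Rebuild the ansible.cfg the blockinfile task generates, with sample values
username=ansible

cat &gt; /tmp/ansible.cfg &lt;&lt;EOF
[defaults]
inventory = /home/${username}/.ansible/inventory
ask_pass = false

[privilege_escalation]
become = true
EOF

# Confirm the key settings Ansible will pick up
grep -E '^(inventory|become) ' /tmp/ansible.cfg
</code></pre>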
<hr />
<p>As we reach the end of this journey, remember that technology is not just about understanding the new, but about transforming the old. It’s about taking the world as we know it and daring to envision it better. Here at <a target="_blank" href="https://www.rakamodify.online"><strong>rakamodify.online</strong></a>, we don’t just write about technology, we live it. We breathe it. And we share that passion with you, our readers.</p>
<p>So, keep exploring, keep innovating, and keep modifying. The future is a blank canvas, teeming with possibilities. And with every line of code, every circuit built, and every system debugged, you’re painting your masterpiece.</p>
<p>Thank you for joining us on this journey. Until next time, keep modifying your world, one byte at a time. 😊</p>
<p><strong>#Technology</strong> <strong>#Innovation #Coding #Blogging #Learning #Inspiration #RakaModify</strong> #Ansible #DevOps #Automation #ConfigurationManagement #InfrastructureAsCode #AnsiblePlaybook #OpenSource #CloudComputing #ITAutomation #Tech</p>
]]></content:encoded></item><item><title><![CDATA[Linux-Project (MANAGING LOCAL USERS AND GROUPS)]]></title><description><![CDATA[Introduction:
In the dynamic landscape of corporate infrastructure, maintaining robust security measures while managing user access becomes pivotal. This article navigates through the process of establishing a secure Linux-based server system in the ...]]></description><link>https://projectwala.site/linux-project-managing-local-users-and-groups</link><guid isPermaLink="true">https://projectwala.site/linux-project-managing-local-users-and-groups</guid><category><![CDATA[Linux]]></category><category><![CDATA[projects]]></category><category><![CDATA[server]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Wed, 03 Jan 2024 12:55:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/jLwVAUtLOAQ/upload/7bb4a90a0738391af5e15b280356cb9e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h3 id="heading-introduction"><strong>Introduction:</strong></h3>
<p>In the dynamic landscape of corporate infrastructure, maintaining robust security measures while managing user access becomes pivotal. This article navigates through the process of establishing a secure Linux-based server system in the midst of organizational restructuring, focusing on user and group management and stringent password policy implementation.</p>
<h3 id="heading-understanding-the-scenario"><strong>Understanding the Scenario:</strong></h3>
<p>As an IT administrator at GlobalTech Inc., overseeing a Linux-based server system, the recent restructuring necessitates the creation of new departments and user accounts. The primary objective is to fortify the security posture by setting up user accounts and groups effectively, limiting superuser access, enforcing strict directory access controls, and implementing stringent password policies.</p>
<hr />
<ol>
<li><h3 id="heading-user-and-group-setup"><strong>User and Group Setup:</strong></h3>
</li>
</ol>
<p>Creating user accounts and groups for the newly formed departments – R&amp;D, Marketing, and Customer Support – is the initial step. This involves generating user accounts for each department and assigning appropriate privileges, facilitating streamlined access control by associating users with their respective departmental groups.</p>
<ul>
<li>Creating Users and Groups:</li>
</ul>
<pre><code class="lang-plaintext"># Create a user for each department
sudo useradd RnDUser
sudo useradd MarketingUser
sudo useradd CustomerSupportUser

# Create a group for each department
sudo groupadd RnDUserGroup
sudo groupadd MarketingUserGroup
sudo groupadd CustomerSupportUserGroup

# Verify the new entries (both files are world-readable)
cat /etc/passwd
cat /etc/group
</code></pre>
<ul>
<li>Assigning Users to Groups:</li>
</ul>
<pre><code class="lang-plaintext"># Assign users to their respective departmental groups
sudo usermod -aG RnDUserGroup RnDUser
sudo usermod -aG MarketingUserGroup MarketingUser
sudo usermod -aG CustomerSupportUserGroup CustomerSupportUser

grep -e RnDUserGroup -e MarketingUserGroup -e CustomerSupportUserGroup /etc/group
</code></pre>
<hr />
<ol>
<li><h3 id="heading-superuser-access-and-privileges"><strong>Superuser Access and Privileges:</strong></h3>
</li>
</ol>
<p>Limiting superuser access is crucial for maintaining system integrity. Configuring sudo access for department heads empowers them to perform essential administrative tasks without full root access. Educating users on responsible elevated privileges usage ensures the importance of limited access and minimizes potential risks.</p>
<ul>
<li><h4 id="heading-configuring-sudo-access">Configuring Sudo Access:</h4>
<p>  Edit sudoers file using visudo (<code>sudo visudo</code>) and add lines for department heads:</p>
</li>
</ul>
<pre><code class="lang-plaintext">RnDUser ALL=(ALL) /path/to/RnD_commands
MarketingUser ALL=(ALL) /path/to/Marketing_commands
CustomerSupportUser ALL=(ALL) /path/to/CustomerSupport_commands
</code></pre>
<hr />
<ol>
<li><h3 id="heading-directory-access-control"><strong>Directory Access Control:</strong></h3>
</li>
</ol>
<ul>
<li><h4 id="heading-creating-directories">Creating Directories:</h4>
</li>
</ul>
<pre><code class="lang-plaintext"># Create directories for each department
sudo mkdir /RnD_Data
sudo mkdir /Marketing_Content
sudo mkdir /CustomerSupport_Reports
</code></pre>
<ul>
<li><h4 id="heading-setting-directory-permissions">Setting Directory Permissions:</h4>
</li>
</ul>
<pre><code class="lang-plaintext"># Set permissions to restrict access
sudo chown -R :RnDUserGroup /RnD_Data
sudo chmod -R 770 /RnD_Data

sudo chown -R :MarketingUserGroup /Marketing_Content
sudo chmod -R 770 /Marketing_Content

sudo chown -R :CustomerSupportUserGroup /CustomerSupport_Reports
sudo chmod -R 770 /CustomerSupport_Reports
</code></pre>
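<p>To see exactly what mode <code>770</code> grants, here is a quick local experiment (no root needed, using a throwaway directory; GNU <code>stat</code> assumed) that prints the resulting permission bits:</p>
<pre><code class="lang-bash"># Apply the same mode to a scratch directory and inspect it
d=$(mktemp -d)
chmod 770 "$d"

stat -c '%a' "$d"   # octal view: 770
stat -c '%A' "$d"   # symbolic view, as ls shows it: drwxrwx---
</code></pre>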
<p>Establishing restricted access to the department-specific directories created above ('RnD_Data', 'Marketing_Content', and 'CustomerSupport_Reports') is imperative for data security. The <code>770</code> mode grants members of each department full access while denying users from other departments any access, fortifying data confidentiality.</p>
<ol>
<li><h3 id="heading-password-policy-implementation"><strong>Password Policy Implementation:</strong></h3>
</li>
</ol>
<p>Implementing stringent password policies across the network adds an additional layer of security. Enforcing complexity, length, and expiration rules for all user accounts enhances system resilience. Regularly prompting users to update passwords aligns with policy compliance and reinforces security measures.</p>
<ul>
<li><h4 id="heading-password-policy-implementation-enforcing-password-policies">Password Policy Implementation (Enforcing Password Policies):</h4>
</li>
</ul>
<pre><code class="lang-plaintext"># Open the login.defs file in a text editor (such as nano or vi)
sudo nano /etc/login.defs
</code></pre>
<p>Within the '/etc/login.defs' file, locate or add the following lines and adjust the values to meet your desired password policy:</p>
<pre><code class="lang-plaintext"># Set password complexity rules (example values)
PASS_MAX_DAYS   90  # Maximum password age
PASS_MIN_DAYS   7   # Minimum password age
PASS_MIN_LEN    10  # Minimum password length
FAIL_DELAY      3    # Delay in seconds after a failed login attempt
FAILLOG_ENAB    yes  # Enable recording of failed login attempts
LOGIN_RETRIES   3    # Maximum number of login retries before account is locked
UID_MIN         1000 # Minimum value for user IDs (UIDs)
UID_MAX         60000 # Maximum value for user IDs (UIDs)
GID_MIN         1000 # Minimum value for group IDs (GIDs)
GID_MAX         60000 # Maximum value for group IDs (GIDs)
ENV_PATH        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV_SUPATH      PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
UMASK           077  # Default umask value for new files
CREATE_HOME     yes  # Create home directories for new users by default
</code></pre>
<hr />
<ol>
<li><h3 id="heading-different-scenarios-along-with-the-corresponding-chage-command-options-and-their-explanations">Different scenarios along with the corresponding <code>chage</code> command options and their explanations:</h3>
</li>
</ol>
<ul>
<li><h3 id="heading-scenario-1-setting-maximum-password-age">Scenario 1: Setting Maximum Password Age</h3>
<p>  <strong>Scenario:</strong> You want to ensure that user accounts require password changes every 90 days.</p>
<p>  <strong>Command:</strong></p>
</li>
</ul>
<pre><code class="lang-plaintext"># View the current password-aging settings for a user
sudo chage -l RnDUser

# Interactively edit the aging settings for a user
sudo chage RnDUser
</code></pre>
<pre><code class="lang-bash">sudo chage -M 90 username
</code></pre>
<p><strong>Explanation:</strong> This command sets the maximum password age (<code>-M</code>) for the specified user (<code>username</code>) to 90 days. Users will be prompted to change their passwords after 90 days for security reasons.</p>
<hr />
<ul>
<li><h3 id="heading-scenario-2-specifying-password-expiry-date">Scenario 2: Specifying Password Expiry Date</h3>
<p>  <strong>Scenario:</strong> You need to set a specific date (e.g., December 31, 2024) for a user's password to expire.</p>
<p>  <strong>Command:</strong></p>
</li>
</ul>
<pre><code class="lang-bash">sudo chage -E <span class="hljs-string">"2024-12-31"</span> username
</code></pre>
<p><strong>Explanation:</strong> Using the <code>-E</code> flag allows setting the exact password expiration date for the specified user (<code>username</code>). After the specified date, the user will be prompted to change their password upon login.</p>
<hr />
<ul>
<li><h3 id="heading-scenario-3-setting-minimum-days-between-password-changes">Scenario 3: Setting Minimum Days Between Password Changes</h3>
<p>  <strong>Scenario:</strong> Ensure users cannot change their passwords too frequently by setting a minimum of 7 days between password changes.</p>
<p>  <strong>Command:</strong></p>
</li>
</ul>
<pre><code class="lang-bash">sudo chage -m 7 username
</code></pre>
<p><strong>Explanation:</strong> This command sets the minimum number of days (<code>-m</code>) required between password changes for the user (<code>username</code>). Users will need to wait at least 7 days before changing their passwords again.</p>
<hr />
<ul>
<li><h3 id="heading-scenario-4-account-inactivity-and-disabling">Scenario 4: Account Inactivity and Disabling</h3>
<p>  <strong>Scenario:</strong> Automatically disable a user account after 30 days of inactivity.</p>
<p>  <strong>Command:</strong></p>
</li>
</ul>
<pre><code class="lang-bash">sudo chage -I 30 username
</code></pre>
<p><strong>Explanation:</strong> Using <code>-I</code> specifies the number of days of inactivity (<code>30</code> in this case) after which the account will be automatically disabled if the user doesn't log in within that period.</p>
<hr />
<ul>
<li><h3 id="heading-scenario-5-disabling-password-expiration">Scenario 5: Disabling Password Expiration</h3>
<p>  <strong>Scenario:</strong> Temporarily disable password expiration for a user account.</p>
<p>  <strong>Command:</strong></p>
</li>
</ul>
<pre><code class="lang-bash">sudo chage -M -1 username
</code></pre>
<p><strong>Explanation:</strong> Setting the maximum password age (<code>-M</code>) to <code>-1</code> effectively disables password expiration for the specified user (<code>username</code>). They won't be prompted to change their password based on age.</p>
<hr />
<ul>
<li><h3 id="heading-scenario-6-forcing-password-change-on-next-login">Scenario 6: Forcing Password Change on Next Login</h3>
<p>  <strong>Scenario:</strong> Require a user to change their password immediately upon next login.</p>
<p>  <strong>Command:</strong></p>
</li>
</ul>
<pre><code class="lang-bash">sudo chage -d 0 username
</code></pre>
<p><strong>Explanation:</strong> Using <code>-d</code> with a value of <code>0</code> sets the date of the user's last password change to day 0 (the Unix epoch), which forces them to change their password at the next login.</p>
<hr />
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">NOTE: Important files:</div>
</div>

<pre><code class="lang-plaintext"># The man pages document most of these files and their defaults
man useradd
</code></pre>
<ol>
<li><p><code>/etc/passwd:</code> Contains user account information like usernames, user IDs (UIDs), group IDs (GIDs), home directories, and default shells.</p>
</li>
<li><p><code>/etc/shadow:</code> Stores encrypted user passwords and password aging details, providing enhanced security by safeguarding password hashes.</p>
</li>
<li><p><code>/etc/group:</code> Lists all system groups, associating users with their respective groups, along with group IDs (GIDs) and membership details.</p>
</li>
<li><p><code>/etc/sudoers:</code> Manages user privileges, determining who can execute commands as root or other users with elevated permissions using <code>sudo</code>.</p>
</li>
<li><p><code>/etc/login.defs:</code> Defines system-wide defaults for user accounts and password policies, specifying aging parameters and other policies.</p>
</li>
<li><p><code>/etc/security/pwquality.conf:</code> Configures password quality requirements like complexity, length, and expiration, affecting password creation and modification rules.</p>
</li>
<li><p><code>/etc/pam.d/:</code> Contains Pluggable Authentication Module configuration files for various services, allowing flexible authentication policies.</p>
</li>
<li><p><code>/etc/default/useradd:</code> Specifies default values for creating new user accounts, setting parameters for home directories, shells, etc.</p>
</li>
<li><p><code>/etc/default/userdel:</code> Defines default behaviors for deleting user accounts, deciding whether to remove home directories or mail spools.</p>
</li>
<li><p><code>/etc/default/usermod:</code> Sets default behaviors and options for modifying existing user accounts, facilitating changes to user attributes.</p>
</li>
</ol>
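<p>A handy way to read these records is <code>getent</code>, which resolves an account through NSS, so it works the same whether the user is local in <code>/etc/passwd</code> or comes from a directory service like LDAP. A small sketch using the always-present <code>root</code> account:</p>
<pre><code class="lang-bash"># Fields are colon-separated: name:password:UID:GID:GECOS:home:shell
getent passwd root | awk -F: '{print "user=" $1 "  uid=" $3 "  home=" $6}'
# prints: user=root  uid=0  home=/root
</code></pre>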
<h3 id="heading-conclusion"><strong>Conclusion:</strong></h3>
<p>In the ever-evolving corporate landscape, the stability and security of Linux-based server systems remain paramount. By effectively managing user accounts and groups, limiting superuser access, implementing stringent directory access controls, and enforcing robust password policies, GlobalTech Inc. fortifies its defenses against potential threats while fostering a secure environment for critical data and operations.</p>
]]></content:encoded></item><item><title><![CDATA[VSFTP: Your Secure Digital Vault for Effortless File Sharing]]></title><description><![CDATA[Definition and Use Case of VSFTP?



FTP (File Transfer Protocol)


FTP stands for File Transfer Protocol. It's like a highway system that allows you to move files between your computer and a server on the internet. It's been around for a long time a...]]></description><link>https://projectwala.site/vsftp-your-secure-digital-vault-for-effortless-file-sharing</link><guid isPermaLink="true">https://projectwala.site/vsftp-your-secure-digital-vault-for-effortless-file-sharing</guid><category><![CDATA[vsftp]]></category><category><![CDATA[ftp]]></category><category><![CDATA[files]]></category><category><![CDATA[Linux]]></category><category><![CDATA[ansible-playbook]]></category><category><![CDATA[learning]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[DevOps trends]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[linux-basics]]></category><category><![CDATA[linux-commands]]></category><category><![CDATA[server]]></category><category><![CDATA[ansible]]></category><dc:creator><![CDATA[Rakesh Kumar Jangid]]></dc:creator><pubDate>Sun, 31 Dec 2023 03:33:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1703994496168/04463d87-167e-4eab-b60a-73193ea2c31f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<ol>
<li><h3 id="heading-definition-and-use-case-of-vsftp">Definition and Use Case of VSFTP?</h3>
</li>
</ol>
<ul>
<li><h3 id="heading-ftp-file-transfer-protocol">FTP (File Transfer Protocol)</h3>
</li>
</ul>
<p>FTP stands for File Transfer Protocol. It's like a highway system that allows you to move files between your computer and a server on the internet. It's been around for a long time and is one of the earliest ways to share files between computers.</p>
<ul>
<li><h3 id="heading-vsftp-very-secure-ftp">VSFTP (Very Secure FTP)</h3>
</li>
</ul>
<p>VSFTP is a specific type of FTP server software. Think of it as a super safe and organized garage where you can store and exchange files with others securely. It's known for being very secure, hence the name, and it's efficient at managing files for sharing.</p>
<ul>
<li><h3 id="heading-uses-and-practical-example">Uses and Practical Example:</h3>
</li>
</ul>
<p>Imagine you're a photographer, and you need to send high-resolution images to your clients. Using VSFTP, you set up a secure space on your website where each client has their own folder. When a client needs their pictures, they log in securely and access only their folder to download the images. It's like having a locked drawer in a file cabinet where each client can access only their documents.</p>
<p>VSFTP ensures that only authorized users can access these folders, and it's really fast at transferring large image files without losing any quality. Plus, it keeps everything organized and separate for each client, just like how you'd organize folders for different clients in your workspace.</p>
<p>In essence, FTP and VSFTP are like the trusted and secure delivery systems for files on the internet. They allow you to share, access, and manage files between computers or users in a structured and secure way, making them essential tools for businesses and individuals who need to exchange files regularly.</p>
<hr />
<ol>
<li><h3 id="heading-write-an-ansible-playbook-for-vsftp-server">Write an Ansible playbook for VSFTP server</h3>
</li>
</ol>
<pre><code class="lang-plaintext">[ansible@cn ~]$ vim ucredent.yml

groupname: sftpusers
username: thift
password: thift

:wq!
</code></pre>
<pre><code class="lang-plaintext">[ansible@cn ~]$ vim sftp.yml

---
- name: Configure SFTP on the stage server
  hosts: stage
  vars_files:
    - ./ucredent.yml  
  tasks:
    - name: "Yum repository configuration on stage server"
      ansible.builtin.yum_repository:
        name: "{{ item.name }}"
        description: YUM repo
        file: external
        baseurl: "{{ item.baseurl }}"
        gpgcheck: 0
        enabled: 1
      loop:
        - { name: online_one, baseurl: "https://mirror.stream.centos.org/9-stream/AppStream/x86_64/os/" }
        - { name: online_two, baseurl: "https://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/" }
      when: ansible_facts.distribution in ['RedHat','Fedora'] and ansible_facts.distribution_major_version | int &gt;= 9


    - name: Install Packages
      yum:
        name: "{{ item }}"
        state: latest
      loop:
        - vsftpd
        - openssh-server
        - openssh-clients
      notify: restart sshd service

    - name: groupadd "{{ groupname }}"
      group:
        name: "{{ groupname }}"
        state: present

    - name: useradd "{{ username }}"
      user:
        name: "{{ username }}"
        state: present
        password: "{{ password | password_hash('sha512') }}"
        shell: /sbin/nologin
        groups: "{{ groupname }}"
        append: yes

    - name: Create Directory "/var/sftp"
      file:
        path: /var/sftp/
        group: "{{ groupname }}"
        state: directory

    - name: Create Directory "/var/sftp/upload"
      file:
        path: /var/sftp/upload
        owner: "{{ username }}"
        group: "{{ groupname }}"
        state: directory
        mode: u=rwx,g=rx,o-rwx

    - name: Change sshd configuration file content
      blockinfile:
        path: /etc/ssh/sshd_config
        block: |
         Match User {{ username }}
         ForceCommand internal-sftp
         PasswordAuthentication yes
         ChrootDirectory /var/sftp
         AllowTcpForwarding no
         X11Forwarding no
      notify: restart sshd service

  handlers:
    - name: restart sshd service
      service:
        name: "{{ item }}"
        state: restarted
      loop:
        - vsftpd
        - sshd
</code></pre>
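<p>The <code>password_hash('sha512')</code> filter above produces a standard crypt-style SHA-512 hash. If you ever want to generate the same kind of hash outside Ansible (for comparison or for other tooling), <code>openssl passwd -6</code> is one equivalent; this assumes OpenSSL 1.1.1 or newer:</p>
<pre><code class="lang-bash"># Generate a SHA-512 crypt hash like Ansible's password_hash('sha512').
# A random salt is picked each run, so the output differs every time,
# but it always begins with the $6$ marker for SHA-512.
openssl passwd -6 'thift'
</code></pre>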
<hr />
<ol>
<li><h3 id="heading-lets-break-amp-understand-this-playbook-step-by-step">Let's break &amp; understand this playbook step by step</h3>
</li>
</ol>
<ul>
<li><h3 id="heading-task-1-setting-up-yum-repositories"><strong>Task 1: Setting up Yum Repositories</strong></h3>
</li>
<li><p><strong>What it Does:</strong> Configures locations where the system can find software packages.</p>
</li>
<li><p><strong>Comparison:</strong> Imagine making a list of shops and their addresses where you can buy specific items.</p>
</li>
<li><p><strong>Details:</strong> This task specifies two locations (repositories) where the server can get software packages required for the VSFTP setup. It checks if the server is running RedHat or Fedora version 9 or higher and configures repository URLs accordingly.</p>
</li>
<li><h3 id="heading-task-2-installing-necessary-software"><strong>Task 2: Installing Necessary Software</strong></h3>
</li>
<li><p><strong>What it Does:</strong> Installs required software packages onto the system.</p>
</li>
<li><p><strong>Comparison:</strong> Think of this as getting the tools and materials needed for building a specific project.</p>
</li>
<li><p><strong>Details:</strong> This task installs three software packages - VSFTP (for file transfer), OpenSSH server (for secure connections), and OpenSSH clients (for interacting with SSH server) using the Yum package manager.</p>
</li>
<li><h3 id="heading-task-3-creating-a-user-group"><strong>Task 3: Creating a User Group</strong></h3>
</li>
<li><p><strong>What it Does:</strong> Establishes a group for users with shared permissions.</p>
</li>
<li><p><strong>Comparison:</strong> Like creating a club or a group where people with similar interests can gather.</p>
</li>
<li><p><strong>Details:</strong> It creates a user group, enabling multiple users to have similar access permissions and settings within the VSFTP server environment.</p>
</li>
<li><h3 id="heading-task-4-adding-a-user-to-the-group"><strong>Task 4: Adding a User to the Group</strong></h3>
</li>
<li><p><strong>What it Does:</strong> Creates a new user and assigns them to the previously established group.</p>
</li>
<li><p><strong>Comparison:</strong> Similar to giving someone membership to a club or a group.</p>
</li>
<li><p><strong>Details:</strong> This task adds a specific user to the previously created group, providing them access to the file-sharing capabilities within the VSFTP server.</p>
</li>
<li><h3 id="heading-task-5-setting-up-storage-directories"><strong>Task 5: Setting Up Storage Directories</strong></h3>
</li>
<li><p><strong>What it Does:</strong> Creates specific folders (directories) for storing and uploading files.</p>
</li>
<li><p><strong>Comparison:</strong> Like setting up different rooms or spaces for various purposes in a building.</p>
</li>
<li><p><strong>Details:</strong> It establishes two directories within the server - one for general storage (/var/sftp/) and another for file uploads (/var/sftp/upload). It assigns ownership and permissions to ensure the user and group have appropriate access rights.</p>
</li>
<li><h3 id="heading-task-6-configuring-server-settings"><strong>Task 6: Configuring Server Settings</strong></h3>
</li>
<li><p><strong>What it Does:</strong> Alters the server's configuration for secure file transfer.</p>
</li>
<li><p><strong>Comparison:</strong> Adjusting rules or settings in a building to ensure safety and organization.</p>
</li>
<li><p><strong>Details:</strong> This task modifies the server's SSH configuration file to enforce secure file transfer settings for a specific user. It restricts the user to use only the internal-sftp method, enhances password authentication, and sets a specific directory for their access.</p>
</li>
<li><h3 id="heading-final-action-restarting-services"><strong>Final Action: Restarting Services</strong></h3>
</li>
<li><p><strong>What it Does:</strong> Restarts essential services to apply changes made by the playbook.</p>
</li>
<li><p><strong>Comparison:</strong> Similar to turning off and then on again to make sure all changes take effect.</p>
</li>
<li><p><strong>Details:</strong> It restarts the VSFTP and SSHD (SSH server) services to apply all the configurations and settings implemented by the playbook.</p>
</li>
</ul>
<p>Each task contributes to setting up a secure and functional VSFTP server, ensuring the right software is installed, users are added with appropriate permissions, directories are created for storage, and the server is configured for safe file sharing.</p>
<hr />
<ol>
<li><h3 id="heading-how-to-access-ftp-service-from-a-remote-machine">How to Access FTP Service from a Remote Machine</h3>
</li>
</ol>
<p>To access an FTP service from a remote machine, you typically use an FTP client. Here's a general step-by-step guide:</p>
<ul>
<li><p><strong>Install an FTP Client:</strong> First, ensure you have an FTP client installed on your remote machine. Popular FTP clients include FileZilla, WinSCP, MobaXterm (for Windows), Cyberduck (for macOS), or the command-line FTP client. Here, on a Windows machine, I am using MobaXterm &amp; FileZilla.</p>
</li>
<li><h3 id="heading-mobaxterm-command-line-interface">MobaXterm (Command Line Interface)</h3>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703995705396/f34b0189-bf1c-46c1-a9ec-cfb969b9e6fc.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703995770314/8c83b598-c571-4b12-aac7-98255176068f.png" alt class="image--center mx-auto" /></p>
<p>You can see the <code>upload</code> directory here.</p>
<h3 id="heading-basic-commands-to-use-vsftp-command-line"><strong>Basic commands at the FTP command line:</strong></h3>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Note: While connected to the FTP server, prefix a command with <code>l</code> to run it on your <em>local</em> machine instead of the server. For example, <code>ftp&gt; lpwd</code> runs <code>pwd</code> locally, and <code>lcd</code> changes your local directory. To run an arbitrary shell command locally, prefix it with <code>!</code>, e.g. <code>ftp&gt; !ls</code>.</div>
</div>

<ul>
<li><p><code>ls</code> or <code>dir</code>: Lists files and directories on the remote server.</p>
</li>
<li><p><code>get &lt;filename&gt;</code>: Downloads a file from the remote server.</p>
</li>
<li><p><code>put &lt;filename&gt;</code>: Uploads a file to the remote server.</p>
</li>
<li><p><code>cd &lt;directory&gt;</code>: Changes the current directory on the remote server.</p>
</li>
<li><p><code>mkdir &lt;directory&gt;</code>: Creates a directory on the remote server.</p>
</li>
<li><p><code>delete &lt;filename&gt;</code>: Deletes a file on the remote server.</p>
</li>
<li><p><code>bye</code> or <code>exit</code>: Quits the FTP session and disconnects from the server.</p>
</li>
</ul>
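<p>For scripted transfers the interactive prompt isn't ideal; <code>sftp -b</code> runs the same commands from a batch file instead. A sketch (the host name <code>server</code>, the <code>thift</code> user from the playbook, and the file name are placeholders):</p>
<pre><code class="lang-bash"># Write a batch file containing the commands to run on the server
cat &gt; upload.batch &lt;&lt;'EOF'
cd upload
put photo1.jpg
bye
EOF

# Non-interactive run (requires the SFTP server from the playbook):
# sftp -b upload.batch thift@server

cat upload.batch
</code></pre>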
<hr />
<ul>
<li><h3 id="heading-fllezilla-graphically-drop-your-file-here">FileZilla (Graphically | Drop your file here)</h3>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1703996275944/2fe50b00-af0e-4e0d-a507-05939727ffb3.png" alt class="image--center mx-auto" /></p>
<hr />
<p>Thank you for your time.</p>
<p>If you run into any issue with the setup, commands, theory, or definitions in this article, please leave a comment in the comment box.</p>
]]></content:encoded></item></channel></rss>