<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Avik Kundu]]></title><description><![CDATA[Software Engineer @RedHat 👨‍💻 | AWS Community Builder

Full Stack Developer • DevOps & Cloud • Open Source Contributor 🚀]]></description><link>https://blog.avikkundu.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 08 Apr 2026 12:59:49 GMT</lastBuildDate><atom:link href="https://blog.avikkundu.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[From Data to Decisions: Leveraging Amazon Personalize for Recommendation Systems]]></title><description><![CDATA[Introduction
In an era dominated by personalized experiences, recommendation systems have become essential for businesses seeking to engage and retain customers. Amazon Personalize is a leading solution, offering advanced machine learning capabilitie...]]></description><link>https://blog.avikkundu.com/recommendation-systems-with-amazon-personalize</link><guid isPermaLink="true">https://blog.avikkundu.com/recommendation-systems-with-amazon-personalize</guid><category><![CDATA[recommender-systems]]></category><category><![CDATA[AI]]></category><category><![CDATA[Amazon Web Services]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Mon, 20 Nov 2023 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/VWcPlbHglYc/upload/b13481ff511ebe7f8dedab30d7a57818.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>In an era dominated by personalized experiences, recommendation systems have become essential for businesses seeking to engage and retain customers. Amazon Personalize is a leading solution, offering advanced machine learning capabilities to tailor recommendations based on individual preferences. This article explores the transformative power of Amazon Personalize, delving into its ability to drive user engagement, boost conversions, and foster long-term customer loyalty. Join us as we uncover the key principles and benefits of building recommendation systems with Amazon Personalize.</p>
<p>In this article, you will learn how to use the Amazon Personalize service to create Recommendation systems for your applications. For the demonstration, we are going to use the popular MovieLens dataset. You can download the dataset from this <a target="_blank" href="https://files.grouplens.org/datasets/movielens/ml-latest-small.zip">link</a>.</p>
<h3 id="heading-creating-dataset-groups">Creating Dataset Groups</h3>
<p>A dataset group is a container for Amazon Personalize resources, including datasets, domain recommenders, and custom resources.</p>
<p>To create a Dataset Group, click on the "Create Dataset Group" button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710135267321/0363e909-b147-438e-b43f-fbe30e7d968c.png" alt class="image--center mx-auto" /></p>
<p>Mention the name of the Dataset Group and click "Next". This will create the new Dataset Group.</p>
<p>Once the Dataset Group creation is complete, select it to see the dashboard:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710219883200/89674de9-3ebf-439d-953f-bad0d5d22a3b.png" alt class="image--center mx-auto" /></p>
<p>First, we are going to focus on the "Upload Datasets" section. Before continuing, please download the dataset from the link attached above.</p>
<h3 id="heading-uploading-the-datasets">Uploading the Datasets</h3>
<p>We will start with uploading the User-item interaction data. Click on the "Import" button adjacent to it. This will open the "Configure Schema" form.</p>
<p>Mention the name of the dataset. Then move to the "Schema Details" section.</p>
<p>Select the "Use Existing Schema" option, as we will start from the existing "movie-ratings" schema for this project and modify it to create a new one. The column names need to be updated to match our dataset, which we can do in the code editor below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710220589242/e9d650ef-93cd-45e3-b610-7c45854ce112.png" alt class="image--center mx-auto" /></p>
<p>The dataset you downloaded contains four <code>.csv</code> files. To determine the column names for the interaction data, open the <code>ratings.csv</code> file and update the schema accordingly. Then select the "Create New Schema" radio button, give the schema a new name, and paste the updated JSON into the code editor. Hit the "Next" button.</p>
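<p>For reference, the edited schema should end up looking roughly like the sketch below. The <code>RATING</code> field and its type are assumptions based on the MovieLens ratings columns; Amazon Personalize interaction schemas are Avro records that require at least <code>USER_ID</code>, <code>ITEM_ID</code>, and <code>TIMESTAMP</code>:</p>
<pre><code class="lang-python">import json

# Minimal Avro schema for the user-item interactions dataset.
# Field names beyond the three required ones are assumptions;
# adjust them to match your .csv header.
interactions_schema = {
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {"name": "USER_ID", "type": "string"},
        {"name": "ITEM_ID", "type": "string"},
        {"name": "RATING", "type": "float"},
        {"name": "TIMESTAMP", "type": "long"},
    ],
    "version": "1.0",
}

# The console's code editor expects this structure as JSON text.
print(json.dumps(interactions_schema, indent=2))
</code></pre>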
<p>On the next page, you need to upload the <code>.csv</code> file to an S3 bucket, then provide the file's S3 location along with an IAM role that allows Amazon Personalize to access the bucket.</p>
<p>Once the import finishes, the upload of the user-item interaction dataset is complete.</p>
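<p>If you prefer the AWS SDK over the console, the same import can be started with the CreateDatasetImportJob API. In this boto3 sketch the job name, ARNs, and S3 path are placeholders you would replace with your own values:</p>
<pre><code class="lang-python">def import_job_params(job_name, dataset_arn, s3_uri, role_arn):
    """Build the request for the CreateDatasetImportJob API."""
    return {
        "jobName": job_name,
        "datasetArn": dataset_arn,
        "dataSource": {"dataLocation": s3_uri},
        "roleArn": role_arn,
    }

def start_import_job():
    """Kick off the import. Requires AWS credentials; all values are placeholders."""
    import boto3  # imported here so the helper above stays dependency-free

    personalize = boto3.client("personalize")
    return personalize.create_dataset_import_job(**import_job_params(
        "movielens-interactions-import",
        "arn:aws:personalize:REGION:ACCOUNT_ID:dataset/movielens/INTERACTIONS",
        "s3://YOUR_BUCKET/ratings.csv",
        "arn:aws:iam::ACCOUNT_ID:role/PersonalizeS3AccessRole",
    ))

# start_import_job()  # uncomment to run against your own account
</code></pre>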
<p>In the same way, upload <code>users.csv</code> and <code>movies.csv</code> as the User and Item datasets, respectively.</p>
<p>This completes the upload of all the datasets required for the tutorial.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710272096702/71ea3ebc-a876-404d-a2c3-ca3c1a48e590.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-solutions-and-recipes">Solutions and Recipes</h3>
<p>In the next step, we need to create a solution and attach a recipe to it. The service provides many recipes, and we can create multiple solutions with different recipes to compare their performance.</p>
<p>You can create a solution using the "Create Solution" form. Just give the solution a unique name and attach a recipe to it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710272572756/8cdc8605-879f-4390-a1be-ca73a5baf3d6.png" alt class="image--center mx-auto" /></p>
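<p>If you are scripting the setup, the same step maps to the CreateSolution API. In this boto3 sketch the dataset group ARN is a placeholder, and the recipe is assumed to be <code>aws-personalized-ranking</code>, which matches the ranking test performed later in this tutorial:</p>
<pre><code class="lang-python">RANKING_RECIPE = "arn:aws:personalize:::recipe/aws-personalized-ranking"

def solution_params(name, dataset_group_arn, recipe_arn=RANKING_RECIPE):
    """Build the request for the CreateSolution API."""
    return {
        "name": name,
        "datasetGroupArn": dataset_group_arn,
        "recipeArn": recipe_arn,
    }

def create_solution_and_version():
    """Create the solution, then train a solution version. Requires AWS credentials."""
    import boto3  # imported here so the helper above stays dependency-free

    personalize = boto3.client("personalize")
    solution = personalize.create_solution(**solution_params(
        "movielens-ranking-solution",
        "arn:aws:personalize:REGION:ACCOUNT_ID:dataset-group/movielens",  # placeholder
    ))
    # The trained model (a "solution version") is created as a separate step.
    return personalize.create_solution_version(solutionArn=solution["solutionArn"])

# create_solution_and_version()  # uncomment to run against your own account
</code></pre>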
<h3 id="heading-creating-a-campaign">Creating a Campaign</h3>
<p>Our tutorial's final step is creating a campaign to test the recommendations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710272903942/3f1fdca0-2e54-4f0f-9a29-7db6695c2e94.png" alt class="image--center mx-auto" /></p>
<p>Enter a name for the new campaign and select a solution you created in the earlier step. Click on the "Create Campaign" button.</p>
<p>This will create the campaign. Once created, click on the newly created Campaign name and go to the "Personalization API" tab. To test the recommendation, go to the "Test Campaign Results" section.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710273109212/9ad45da5-de27-4e59-94df-725dda1dcf88.png" alt class="image--center mx-auto" /></p>
<p>Open the <code>users.csv</code> file, select a USER_ID and four MOVIE_IDs, and paste them into the User ID and Item IDs fields, respectively. After that, click the "Get Personalized Rankings" button.</p>
<p>You should see the Item ranking table.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1710273444188/7922455a-f5b5-4684-a9e3-9d17b7e6124e.png" alt class="image--center mx-auto" /></p>
<p>Thus, we have successfully retrieved a personalized ranking of movies for a user using the Amazon Personalize service.</p>
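<p>The same test can also be run programmatically against the campaign endpoint through the Personalize runtime API. In this boto3 sketch the campaign ARN, user ID, and movie IDs are placeholders:</p>
<pre><code class="lang-python">def ranked_item_ids(personalized_ranking):
    """Extract item IDs, in ranked order, from the response's personalizedRanking list."""
    return [item["itemId"] for item in personalized_ranking]

def rerank():
    """Query the campaign. Requires AWS credentials; ARN and IDs are placeholders."""
    import boto3  # imported here so the helper above stays dependency-free

    runtime = boto3.client("personalize-runtime")
    response = runtime.get_personalized_ranking(
        campaignArn="arn:aws:personalize:REGION:ACCOUNT_ID:campaign/movielens-ranking",
        userId="1",
        inputList=["31", "1029", "1061", "1129"],  # the four MOVIE_IDs to re-rank
    )
    return ranked_item_ids(response["personalizedRanking"])

# print(rerank())  # uncomment to run against your own account
</code></pre>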
<h3 id="heading-conclusion">Conclusion</h3>
<p>Amazon Personalize stands as a formidable tool for businesses looking to stay ahead in the competitive landscape of personalized experiences. By harnessing its advanced machine learning algorithms, companies can not only deliver tailored recommendations but also foster deeper connections with their audience. As we wrap up our exploration, it's evident that the implementation of recommendation systems powered by Amazon Personalize not only enhances user engagement and satisfaction but also drives tangible business growth. Embracing this technology enables businesses to stay agile, adaptive, and responsive to the evolving needs and preferences of their customers, ultimately securing a strong position in the digital marketplace.</p>
]]></content:encoded></item><item><title><![CDATA[Empower Your Business: Safeguard Against Online Frauds with Amazon Fraud Detection Service]]></title><description><![CDATA[Introduction
In today's digital landscape, the proliferation of online transactions has brought immense convenience but also heightened risks of fraud. Recognizing the critical need for businesses to protect themselves and their customers, Amazon int...]]></description><link>https://blog.avikkundu.com/online-fraud-detection-with-amazon-fraud-detection-service</link><guid isPermaLink="true">https://blog.avikkundu.com/online-fraud-detection-with-amazon-fraud-detection-service</guid><category><![CDATA[AI]]></category><category><![CDATA[Amazon Web Services]]></category><category><![CDATA[fraud detection]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Tue, 11 Jul 2023 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/9SoCnyQmkzI/upload/623e5020db60ef4ef4c0b4d18ccf63e4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>In today's digital landscape, the proliferation of online transactions has brought immense convenience but also heightened risks of fraud. Recognizing the critical need for businesses to protect themselves and their customers, Amazon introduced the Fraud Detection Service (AFD). This innovative solution leverages advanced machine learning algorithms and real-time monitoring to detect and prevent fraudulent activities swiftly and effectively.</p>
<p>In this article, you will learn how to use the Amazon Fraud Detection service to detect fraudulent entries during registration. We will use the example dataset provided by Amazon to train our model. You can download the training dataset from this <a target="_blank" href="https://github.com/aws-samples/aws-fraud-detector-samples/blob/master/data/registration_data_20K_full.csv">link</a>.</p>
<h3 id="heading-creating-an-event">Creating an event</h3>
<p>First, let's start by creating an event. Open your AWS console and search for Amazon Fraud Detection service. Once on the page, click on the "Create event" button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709927997773/4fd29512-3a9f-496a-ad16-5cb90e399d8e.png" alt class="image--center mx-auto" /></p>
<p>This will open the "Create Event Type" form. An event type defines the structure for an event sent to the service. In the Event Type Details section, enter a name and a description of the event. Next, we need to create an entity for the event. An entity represents who is performing the event. In the Entity creation form, enter the entity type name and description and create the entity. Select the newly created entity in the main form.</p>
<p>In the next section of the form, we need to provide the training data. First, we select how we want to define the event variables; in our case, we will take them from the training dataset. An important point to note for this demo: the training dataset file must be uploaded to an S3 bucket created in the same region as the Fraud Detection service. In this form, we will create an IAM role that grants the service permission to access the S3 bucket, and then specify the location of the file. Before doing this, please create an S3 bucket and upload the dataset into it.</p>
<p>Once you enter the correct location of the dataset file, you will see the Variables and Variable Type table. In this table, you need to map each variable to a known variable type.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709930215511/fda60952-27f8-4469-8e8c-203f3f41c98f.png" alt class="image--center mx-auto" /></p>
<p>Next, in the Labels section, add at least two labels: <code>Fraud</code> and <code>Legit</code>.</p>
<p>Finally, you can submit the form.</p>
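<p>For automation, the console steps above map to a handful of API calls. The boto3 sketch below illustrates this; the event, entity, variable, and label names are assumptions for illustration, and each event variable must already exist (via the CreateVariable API) before the event type can reference it:</p>
<pre><code class="lang-python">def event_type_params(name, variables, labels, entity_types):
    """Build the request body for the PutEventType API."""
    return {
        "name": name,
        "eventVariables": variables,
        "labels": labels,
        "entityTypes": entity_types,
    }

def create_event_type():
    """Create entity type, labels, and event type. Requires AWS credentials."""
    import boto3  # imported here so the helper above stays dependency-free

    fd = boto3.client("frauddetector")
    fd.put_entity_type(name="customer", description="who performs the event")
    fd.put_label(name="fraud", description="fraudulent registration")
    fd.put_label(name="legit", description="legitimate registration")
    # Variable names here are assumptions; create them first with CreateVariable.
    return fd.put_event_type(**event_type_params(
        "sample_registration",
        ["email_address", "ip_address"],
        ["fraud", "legit"],
        ["customer"],
    ))

# create_event_type()  # uncomment to run against your own account
</code></pre>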
<h3 id="heading-specifying-model-details">Specifying Model Details</h3>
<p>Once the event is created, let's move to the Model section, where we will select a suitable model for our use case and train it with the training dataset.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709930812098/2adfe256-d9d4-41e2-931a-a9170c61b41e.png" alt class="image--center mx-auto" /></p>
<p>Click on the "Add your first model" button. In the "Define Model Details" section, enter the model name and select the "Online Fraud Insights" model type. Finally, select the event you created previously in this form.</p>
<p>This will open the "Historical event data" section. Here, select S3 as the event data source. When creating the IAM role, mention the same bucket name if you want to keep the model output files in the same bucket. Then specify the location of the training dataset again and click "Next".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709932032397/cc5e20ff-add4-441b-bfc0-801af8e77e40.png" alt class="image--center mx-auto" /></p>
<p>On the next page, you need to select which label represents fraudulent events and which represents legitimate ones.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709933911072/b6c82417-9672-45f5-bd86-6c915e1161e5.png" alt class="image--center mx-auto" /></p>
<p>Clicking "Next" will take you to the review page. Finally, click the "Create and Train Model" button to start the training process.</p>
<p>Now we need to wait for the model training process to complete. Once it has finished, you can see the model's performance metrics.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709935111974/497c68e3-4151-4be4-91dc-00ba5cf2ff6e.png" alt class="image--center mx-auto" /></p>
<p>Finally, you can deploy the model version.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709935843580/826d94ba-aa53-4418-b59d-b5c406eedcde.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-create-the-detector">Create the Detector</h3>
<p>A detector is composed of models and rules that evaluate events for fraud.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709938359321/a34b8acd-8e15-4b14-8dd1-2d2a0fe9dc54.png" alt class="image--center mx-auto" /></p>
<p>Click on the "Create Detector" button to open the form. Fill in the name, description, and event type for the detector, and click "Next".</p>
<p>On the following page, you need to add the model you created in the earlier section.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709938641096/6112cdce-7754-4954-874b-978ca9bc2e86.png" alt class="image--center mx-auto" /></p>
<p>After that, you need to add rules to the detector. We are going to create three rules: <code>fraud_rule</code>, <code>legit_rule</code>, and <code>review_rule</code>.</p>
<p>In the "Add Rules" page, enter the name and description of each rule. Then add its expression using the service's expression language. An expression generally references event variables or model output scores; in this scenario, we will use the model output score.</p>
<p>For the <code>fraud_rule</code>, the output score should be greater than 900. The outcome should be <code>risk_high</code>.</p>
<p>For the <code>legit_rule</code>, the output score should be less than 700. The outcome should be <code>risk_low</code>.</p>
<p>For the <code>review_rule</code>, the output score should be greater than 700 but less than 900. The outcome should be <code>risk_medium</code>.</p>
<p>Finally, you will see the three rules listed on the page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709939854534/20930643-8771-4c4d-b003-ab4ecc31e4fd.png" alt class="image--center mx-auto" /></p>
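<p>Together, the three rules implement a simple threshold policy over the model's insight score (0 to 1000). Here is a sketch of the equivalent logic in plain Python; it assumes the two boundary scores (exactly 700 or 900) fall into the review bucket, which you should align with your own rule expressions:</p>
<pre><code class="lang-python">def outcome_for(score):
    """Map a model insight score (0 to 1000) to one of the three outcomes."""
    if score > 900:
        return "risk_high"    # fraud_rule
    if score >= 700:
        return "risk_medium"  # review_rule (boundary values assumed inclusive)
    return "risk_low"         # legit_rule
</code></pre>
<p>With the "First Matched" execution mode selected in the next step, the detector returns the outcome of the first rule whose expression evaluates to true, just as this if/elif chain does.</p>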
<p>In the next step, you need to configure the order of the rule execution. For the Execution mode, select the "First Matched" option and continue.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709940043951/918887f5-ab13-4ffb-90e9-faf212217f46.png" alt class="image--center mx-auto" /></p>
<p>Finally review the information and submit the form to create the detector.</p>
<p>This completes all the steps required to set up the Fraud Detection service for our application.</p>
<h3 id="heading-testing-the-model">Testing the Model</h3>
<p>To test the model, you can take some data from the dataset and paste it in the fields of the "Run Tests" section. Once entered, you can see the prediction at the bottom.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709940463453/a4e457e8-dd26-4044-bf38-d8bec2964cd7.png" alt class="image--center mx-auto" /></p>
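<p>The same test can be performed programmatically with the GetEventPrediction API. In this boto3 sketch the detector ID, event type name, entity type, and variable values are placeholders you would replace with your own:</p>
<pre><code class="lang-python">from datetime import datetime, timezone

def outcomes(rule_results):
    """Flatten the outcome names from a GetEventPrediction ruleResults list."""
    return [o for r in rule_results for o in r.get("outcomes", [])]

def predict():
    """Request a prediction. Requires AWS credentials and a deployed detector."""
    import boto3  # imported here so the helper above stays dependency-free

    fd = boto3.client("frauddetector")
    response = fd.get_event_prediction(
        detectorId="sample_registration_detector",
        eventId="test-event-001",
        eventTypeName="sample_registration",
        eventTimestamp=datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        entities=[{"entityType": "customer", "entityId": "unknown"}],
        eventVariables={
            "email_address": "user@example.com",
            "ip_address": "192.0.2.100",
        },
    )
    print(response["modelScores"])  # raw insight scores
    return outcomes(response["ruleResults"])

# print(predict())  # uncomment to run against your own account
</code></pre>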
<h3 id="heading-conclusion">Conclusion</h3>
<p>As businesses navigate the complex and ever-evolving landscape of online transactions, the importance of robust fraud detection and prevention measures cannot be overstated. Amazon's Fraud Detection Service emerges as a powerful ally in this endeavor, offering advanced technology and real-time insights to safeguard against fraudulent activities. By leveraging AFD, businesses can not only protect themselves and their customers but also enhance their overall operational efficiency and reputation. As we conclude our exploration of AFD, it's clear that investing in such cutting-edge solutions is not just a proactive step—it's a fundamental necessity in today's digital economy.</p>
]]></content:encoded></item><item><title><![CDATA[Using CI/CD to deploy web applications on Kubernetes with ArgoCD]]></title><description><![CDATA[Originally published in CircleCI Blog: Using CI/CD to deploy web applications on Kubernetes with ArgoCD by Avik Kundu
GitOps modernizes software management and operations by allowing developers to declaratively manage infrastructure and code using a ...]]></description><link>https://blog.avikkundu.com/deploy-to-kubernetes-with-argocd</link><guid isPermaLink="true">https://blog.avikkundu.com/deploy-to-kubernetes-with-argocd</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[CircleCI]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[CI/CD]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Fri, 09 Dec 2022 14:54:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1670597046526/vdny5poRb.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Originally published in</em> <a target="_blank" href="https://circleci.com/blog/"><strong><em>CircleCI</em></strong></a> <em>Blog:</em> <a target="_blank" href="https://circleci.com/blog/deploy-to-kubernetes-with-argocd/"><em>Using CI/CD to deploy web applications on Kubernetes with ArgoCD</em></a> <em>by</em> <a target="_blank" href="https://circleci.com/blog/author/avik-kundu/"><strong><em>Avik Kundu</em></strong></a></p>
<p><a target="_blank" href="https://www.weave.works/technologies/gitops/">GitOps</a> modernizes software management and operations by allowing developers to declaratively manage infrastructure and code using a single source of truth, usually a Git repository. Many development teams and organizations have adopted GitOps procedures to improve the creation and delivery of software applications.</p>
<p>For a GitOps initiative to work, an orchestration system like <a target="_blank" href="https://kubernetes.io/">Kubernetes</a> is crucial. The number of incompatible technologies needed to develop software makes Kubernetes a key tool for managing infrastructure. Without Kubernetes, implementing infrastructure-as-code (IaC) procedures is inefficient or even impossible. Fortunately, the wide adoption of Kubernetes has enabled the creation of tools for implementing GitOps.</p>
<p>One of these tools, <a target="_blank" href="https://argoproj.github.io/cd/">ArgoCD</a>, is a Kubernetes-native continuous deployment (CD) tool. It can deploy code changes directly to Kubernetes resources by pulling it from Git repositories instead of an external CD solution. Many of these solutions support only push-based deployments. Using ArgoCD gives developers the ability to control application updates and infrastructure setup from a unified platform. It handles the latter stages of the GitOps process, ensuring that new configurations are correctly deployed to a Kubernetes cluster.</p>
<p>In this tutorial, you will learn how to deploy a Node.js application on Azure Kubernetes Service (AKS) using a CI/CD pipeline and ArgoCD.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>To follow along with this tutorial, you will need a few things first.</p>
<p>Accounts for:</p>
<ul>
<li><p><a target="_blank" href="https://hub.docker.com/">Docker Hub</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/">GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://azure.microsoft.com/en-in/features/azure-portal/">Microsoft Azure</a></p>
</li>
<li><p><a target="_blank" href="https://circleci.com/signup/">CircleCI</a></p>
</li>
</ul>
<p>These tools installed on your system:</p>
<ul>
<li><p><a target="_blank" href="https://kubernetes.io/docs/tasks/tools/">Kubectl</a></p>
</li>
<li><p><a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/cli_installation/">ArgoCD CLI</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Azure/azure-cli">Azure CLI</a></p>
</li>
<li><p><a target="_blank" href="https://nodejs.org/">Node JS</a></p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/get-docker/">Docker Engine</a></p>
</li>
</ul>
<p>After you have all the prerequisites in place, you are ready to move on to the next section.</p>
<h2 id="heading-cloning-the-nodejs-application"><strong>Cloning the Node.js application</strong></h2>
<p>In this tutorial, the main focus is on deploying the application on Kubernetes. You can directly <a target="_blank" href="https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository">clone</a> the <a target="_blank" href="https://github.com/CIRCLECI-GWP/aks-nodejs-argocd">Node.js application</a> to your GitHub and continue with the rest of the process.</p>
<p>To clone the project, run:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/CIRCLECI-GWP/aks-nodejs-argocd.git
</code></pre>
<p>There are 2 branches in this repository:</p>
<ul>
<li><p><code>main</code> branch contains only the Node.js Application code</p>
</li>
<li><p><code>circleci-project-setup</code> branch contains the application codes along with all YAML files that you will create</p>
</li>
</ul>
<p>Check out the <code>main</code> branch.</p>
<p>The Node.js application lives in the <code>app.js</code> file and contains:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
<span class="hljs-keyword">const</span> path = <span class="hljs-built_in">require</span>(<span class="hljs-string">"path"</span>);
<span class="hljs-keyword">const</span> morgan = <span class="hljs-built_in">require</span>(<span class="hljs-string">"morgan"</span>);
<span class="hljs-keyword">const</span> bodyParser = <span class="hljs-built_in">require</span>(<span class="hljs-string">"body-parser"</span>);
<span class="hljs-comment">/* eslint-disable no-console */</span>
<span class="hljs-keyword">const</span> port = process.env.PORT || <span class="hljs-number">1337</span>;
<span class="hljs-keyword">const</span> app = express();
app.use(morgan(<span class="hljs-string">"dev"</span>));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ <span class="hljs-attr">extended</span>: <span class="hljs-string">"true"</span> }));
app.use(bodyParser.json({ <span class="hljs-attr">type</span>: <span class="hljs-string">"application/vnd.api+json"</span> }));
app.use(express.static(path.join(__dirname, <span class="hljs-string">"./"</span>)));
app.get(<span class="hljs-string">"*"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  res.sendFile(path.join(__dirname, <span class="hljs-string">"./index.html"</span>));
});
app.listen(port, <span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> {
  <span class="hljs-keyword">if</span> (err) {
    <span class="hljs-built_in">console</span>.log(err);
  } <span class="hljs-keyword">else</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`App at: http://localhost:<span class="hljs-subst">${port}</span>`</span>);
  }
});
<span class="hljs-built_in">module</span>.exports = app;
</code></pre>
<p>The key takeaway from this code is the port number. This is where the application will be running, which is <code>1337</code> for this tutorial.</p>
<p>You can run the application locally by first installing the dependencies. In the project’s root, type:</p>
<pre><code class="lang-bash">npm install
</code></pre>
<p>Then run the application with the command:</p>
<pre><code class="lang-bash">node app.js
</code></pre>
<p>The application should now be up and running at the address <code>http://localhost:1337</code>.</p>
<h2 id="heading-containerizing-the-nodejs-application"><strong>Containerizing the Node.js application</strong></h2>
<p>To deploy the application on Kubernetes, you need to containerize it. To containerize applications using Docker as the container runtime tool, you will create a <a target="_blank" href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a>. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.</p>
<p>Create a new file in the root directory of the project and name it <code>Dockerfile</code>. Copy the following content in the file:</p>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># Set the base image to use for subsequent instructions</span>
<span class="hljs-string">FROM</span> <span class="hljs-string">node:alpine</span>
<span class="hljs-comment"># Set the working directory for any subsequent ADD, COPY, CMD, ENTRYPOINT,</span>
<span class="hljs-comment"># or RUN instructions that follow it in the Dockerfile</span>
<span class="hljs-string">WORKDIR</span> <span class="hljs-string">/usr/src/app</span>
<span class="hljs-comment"># Copy files or folders from source to the dest path in the image's filesystem.</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">package.json</span> <span class="hljs-string">/usr/src/app/</span>
<span class="hljs-string">COPY</span> <span class="hljs-string">.</span> <span class="hljs-string">/usr/src/app/</span>
<span class="hljs-comment"># Execute any commands on top of the current image as a new layer and commit the results.</span>
<span class="hljs-string">RUN</span> <span class="hljs-string">npm</span> <span class="hljs-string">install</span> <span class="hljs-string">--production</span>
<span class="hljs-comment"># Define the network ports that this container will listen to at runtime.</span>
<span class="hljs-string">EXPOSE</span> <span class="hljs-number">1337</span>
<span class="hljs-comment"># Configure the container to be run as an executable.</span>
<span class="hljs-string">ENTRYPOINT</span> [<span class="hljs-string">"npm"</span>, <span class="hljs-string">"start"</span>]
</code></pre>
<p>If you have <a target="_blank" href="https://docs.docker.com/get-docker/">Docker</a> installed, you can build and run the container locally for testing. Later on in this tutorial, you will learn how to automate this process with CircleCI orbs.</p>
<p>To build and tag the container, enter:</p>
<pre><code class="lang-bash">docker build -t aks-nodejs-argocd:latest .
</code></pre>
<p>Confirm that the image was successfully created by running this command from your terminal:</p>
<pre><code class="lang-bash">docker images
</code></pre>
<p>Then run the container using the command:</p>
<pre><code class="lang-bash">docker run -it -p 1337:1337 aks-nodejs-argocd:latest
</code></pre>
<p>The application should now be up and running at the address <code>http://127.0.0.1:1337</code>.</p>
<p>Commit and <a target="_blank" href="https://circleci.com/blog/pushing-a-project-to-github/">push</a> the changes to your GitHub repository.</p>
<h2 id="heading-configuring-kubernetes-manifests-for-deployment"><strong>Configuring Kubernetes manifests for deployment</strong></h2>
<p>To deploy containers on Kubernetes, you will have to configure Kubernetes to incorporate all the settings required to run your application. Kubernetes uses <a target="_blank" href="https://yaml.org/">YAML</a> for configuration.</p>
<p>Create a directory named <code>manifests</code> in the root directory of the project. Then create these files in the newly created folder:</p>
<ul>
<li><p><code>namespace.yaml</code></p>
</li>
<li><p><code>deployment.yaml</code></p>
</li>
<li><p><code>service.yaml</code></p>
</li>
<li><p><code>kustomization.yaml</code></p>
</li>
</ul>
<p>In Kubernetes, <a target="_blank" href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/">namespaces</a> provide a mechanism for isolating groups of resources within a single cluster. The contents of the <code>namespace.yaml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Namespace</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs</span>
</code></pre>
<p>This file creates a namespace named <code>nodejs</code> inside the Kubernetes cluster. All the resources will be created in this namespace.</p>
<p><a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Kubernetes Deployments</a> manage stateless services running on your cluster. Their purpose is to keep a set of identical pods running and upgrade them in a controlled way – performing a rolling update by default. The contents of the <code>deployment.yaml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">nodejs</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nodejs</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">nodejs</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">nodejs</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">nodeSelector:</span>
        <span class="hljs-attr">"beta.kubernetes.io/os":</span> <span class="hljs-string">linux</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">aks-nodejs-argocd</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">aks-nodejs-argocd</span>
          <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
              <span class="hljs-attr">containerPort:</span> <span class="hljs-number">1337</span>
</code></pre>
<p>The key takeaway from this code is the <code>containerPort</code>. This is the port the application listens on inside the container. The image named in the <code>image</code> field is pulled and deployed into the <code>nodejs</code> namespace on the Kubernetes cluster.</p>
<p>A <a target="_blank" href="https://kubernetes.io/docs/concepts/services-networking/service/">Kubernetes Service</a> is an abstraction that defines a logical set of pods and a policy for accessing them. You need the Service type <code>LoadBalancer</code> to make the deployment accessible from outside the cluster.<br />The contents of the <code>service.yaml</code> are:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">nodejs</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nodejs</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">LoadBalancer</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">1337</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nodejs</span>
</code></pre>
<p>The key takeaways from this code are <code>targetPort</code>, <code>port</code>, and <code>type</code>:</p>
<ul>
<li><p><code>targetPort</code> is the container port that traffic is forwarded to (1337 here, matching the <code>containerPort</code> in the Deployment)</p>
</li>
<li><p><code>port</code> is the port the Service itself exposes (80)</p>
</li>
<li><p><code>type</code> is the type of Service; <code>LoadBalancer</code> provisions an external IP</p>
</li>
</ul>
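<p>Once these manifests are applied, a quick sanity check (assuming <code>kubectl</code> is already pointed at your cluster) is to confirm that the Service has picked up the Deployment's pods:</p>
<pre><code class="lang-bash"># List the Service and its assigned IPs
kubectl get service nodejs --namespace nodejs

# Show the pod IPs the Service routes to
kubectl get endpoints nodejs --namespace nodejs
</code></pre>
<p>If the <code>ENDPOINTS</code> column is empty, the Service selector does not match the pod template labels.</p>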
<p>To deploy the latest version of the application on the Kubernetes cluster, the resources have to be customized to maintain the updated information. You can use <a target="_blank" href="https://kustomize.io/">Kustomize</a>, which is a tool for customizing Kubernetes configurations.</p>
<p>The contents of the <code>kustomization.yaml</code> are:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">kustomize.config.k8s.io/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Kustomization</span>
<span class="hljs-attr">resources:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">deployment.yaml</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">service.yaml</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">namespace.yaml</span>
<span class="hljs-attr">namespace:</span> <span class="hljs-string">nodejs</span>
<span class="hljs-attr">images:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">aks-nodejs-argocd</span>
    <span class="hljs-attr">newName:</span> <span class="hljs-string">aks-nodejs-argocd</span>
    <span class="hljs-attr">newTag:</span> <span class="hljs-string">v1</span>
</code></pre>
<p>The key takeaway from this code is <code>newName</code> and <code>newTag</code>, which will be updated with the latest Docker image information as part of the continuous integration process.</p>
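<p>You do not need to edit these fields by hand. Running <code>kustomize edit set image</code> inside the <code>manifests</code> directory rewrites <code>newName</code> and <code>newTag</code> in place; the CI pipeline later in this tutorial runs the same command. For example, with <code>&lt;dockerhub-user&gt;</code> as a placeholder for your Docker Hub username:</p>
<pre><code class="lang-bash">cd manifests
# Rewrites newName and newTag in kustomization.yaml
kustomize edit set image aks-nodejs-argocd=&lt;dockerhub-user&gt;/aks-nodejs-argocd:v2
</code></pre>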
<p>Commit and <a target="_blank" href="https://circleci.com/blog/pushing-a-project-to-github/">push</a> these files into the <code>main</code> branch of the GitHub repository you had cloned earlier.</p>
<h2 id="heading-launching-the-azure-kubernetes-service-aks-cluster"><strong>Launching the Azure Kubernetes Service (AKS) cluster</strong></h2>
<p>In this tutorial, you will be deploying the application on the <a target="_blank" href="https://azure.microsoft.com/en-us/services/kubernetes-service/#overview">AKS</a> cluster. To create the AKS cluster, the Azure CLI should be <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli">connected to your Azure account</a>.</p>
<p>To launch an AKS cluster using the Azure CLI, create a Resource Group with this command:</p>
<pre><code class="lang-bash">az group create --name NodeRG --location eastus
</code></pre>
<p>Launch a two-node cluster:</p>
<pre><code class="lang-bash">az aks create --resource-group NodeRG --name NodeCluster --node-count 2 --enable-addons http_application_routing
</code></pre>
<p><strong>Note:</strong> <em>If you have not generated any SSH keys on your system before, add the optional</em> <code>--generate-ssh-keys</code> parameter to this command. This auto-generates SSH public and private key files if they are missing. The keys are stored in the <code>~/.ssh</code> directory.</p>
<p>The AKS cluster will take 10 to 15 minutes to launch.</p>
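<p>You can check on the provisioning from another terminal. This query should print <code>Succeeded</code> once the cluster is ready:</p>
<pre><code class="lang-bash"># Print only the cluster's provisioning state
az aks show --resource-group NodeRG --name NodeCluster --query provisioningState --output tsv
</code></pre>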
<h2 id="heading-installing-argocd-in-the-aks-cluster"><strong>Installing ArgoCD in the AKS Cluster</strong></h2>
<p>Once the cluster is up and running, you can install ArgoCD inside the cluster. You will use ArgoCD for deploying your application.</p>
<p>To install the application, use the Azure CLI. Configure <code>kubectl</code> to connect to AKS using this command:</p>
<pre><code class="lang-bash">az aks get-credentials --resource-group NodeRG --name NodeCluster
</code></pre>
<p>To install ArgoCD, use these commands:</p>
<pre><code class="lang-bash">kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>ArgoCD will be installed in the <code>argocd</code> namespace. To list all the resources in the namespace, enter:</p>
<pre><code class="lang-bash">kubectl get all --namespace argocd
</code></pre>
<h3 id="heading-exposing-the-argocd-api-server"><strong>Exposing the ArgoCD API server</strong></h3>
<p>By default, the ArgoCD API server is not exposed with an external IP. Because you will access the application from the internet during this tutorial, you need to expose the ArgoCD server with an external IP via a Service of type <code>LoadBalancer</code>.</p>
<p>Change the argocd-server service type to LoadBalancer:</p>
<pre><code class="lang-bash">kubectl patch svc argocd-server -n argocd -p <span class="hljs-string">'{"spec": {"type": "LoadBalancer"}}'</span>
</code></pre>
<p><strong>Note:</strong> <em>You can also use kubectl port forwarding to connect to the API server without exposing the service:</em> <code>kubectl port-forward svc/argocd-server -n argocd 8080:443</code>. <em>While the port forward is active, the API server is available at</em> <code>https://localhost:8080</code>.</p>
<h2 id="heading-accessing-the-argocd-web-portal"><strong>Accessing the ArgoCD Web Portal</strong></h2>
<p>Once the ArgoCD API server is exposed with an external IP, you can access the web portal at that address.</p>
<p>ArgoCD is installed in the <code>argocd</code> namespace. Use this command to get all the resources in the namespace:</p>
<pre><code class="lang-bash">kubectl get all --namespace argocd
</code></pre>
<p>Copy the <code>External-IP</code> corresponding to <code>service/argocd-server</code>.</p>
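<p>If you prefer the command line over scanning the table, this <code>jsonpath</code> query prints just the external IP of the ArgoCD server:</p>
<pre><code class="lang-bash"># Extract the LoadBalancer external IP for the argocd-server Service
kubectl get service argocd-server --namespace argocd -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
</code></pre>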
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-external-ip.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-external-ip.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="External-IP" /></a></p>
<p>You can access the application at <code>http://&lt;EXTERNAL-IP&gt;</code>.<br />In my case, that was <code>http://20.237.108.112/</code></p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-argocd-application.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-argocd-application.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="ArgoCD application" /></a></p>
<p>To log into the portal, you will need the username and password. The username is set as <code>admin</code> by default.</p>
<p>To fetch the password, execute this command:</p>
<pre><code class="lang-bash">kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=<span class="hljs-string">"{.data.password}"</span> | base64 -d; <span class="hljs-built_in">echo</span>
</code></pre>
<p>Use this username/password combination to log into the ArgoCD portal.</p>
<h2 id="heading-configuring-kubernetes-manifests-for-argocd"><strong>Configuring Kubernetes manifests for ArgoCD</strong></h2>
<p>To configure ArgoCD to deploy your application on Kubernetes, you will have to set up ArgoCD to connect the Git Repository and Kubernetes in a declarative way using <a target="_blank" href="https://yaml.org/">YAML</a> for configuration.</p>
<p>Apart from this method, you can also set up ArgoCD from the Web Portal or use the ArgoCD CLI. Because this tutorial is following GitOps principles, we are using the Git repository as the sole source of truth. Therefore the declarative method using YAML files works best.</p>
<p>One of ArgoCD's key capabilities is syncing application deployments to a Kubernetes cluster using either a manual or an automated policy.</p>
<p>To get started, create a directory named <code>argocd</code> in the root directory of the project. Create a new file in the new directory and name it <code>config.yaml</code>.</p>
<h3 id="heading-manual-sync-policy"><strong>Manual Sync Policy</strong></h3>
<p>Use this policy to synchronize your application manually from your CI/CD pipelines. Whenever a code change is made, the pipeline is triggered and calls the ArgoCD server APIs to start the sync process based on the changes you commit. To communicate with the ArgoCD server APIs, you can use the ArgoCD CLI or one of the SDKs available for various programming languages.</p>
<p>For setting up the Manual Sync policy for ArgoCD, paste this in the <code>config.yaml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">aks-nodejs-argocd</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">destination:</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">nodejs</span>
    <span class="hljs-attr">server:</span> <span class="hljs-string">"https://kubernetes.default.svc"</span>
  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">manifests</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">"https://github.com/Lucifergene/aks-nodejs-argocd"</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-string">circleci-project-setup</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>
</code></pre>
<h3 id="heading-automated-sync-policy"><strong>Automated Sync policy</strong></h3>
<p>ArgoCD can automatically sync an application when it detects differences between the desired manifests in Git and the live state in the cluster.</p>
<p>A benefit of automatic sync is that CI/CD pipelines no longer need direct access to the ArgoCD API server to perform the deployment. Instead, the pipeline makes a commit and pushes to the Git repository with the changes to the manifests in the tracking Git repo.</p>
<p>If you want to use the Automated Sync policy instead, paste this into the <code>config.yaml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">aks-nodejs-argocd</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">destination:</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">nodejs</span>
    <span class="hljs-attr">server:</span> <span class="hljs-string">"https://kubernetes.default.svc"</span>
  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">manifests</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">"https://github.com/Lucifergene/aks-nodejs-argocd"</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-string">circleci-project-setup</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">syncPolicy:</span>
    <span class="hljs-attr">automated:</span>
      <span class="hljs-attr">prune:</span> <span class="hljs-literal">false</span>
      <span class="hljs-attr">selfHeal:</span> <span class="hljs-literal">false</span>
</code></pre>
<p>Commit and <a target="_blank" href="https://circleci.com/blog/pushing-a-project-to-github/">push</a> these files into the <code>main</code> branch of the GitHub repository you cloned earlier.</p>
<h2 id="heading-creating-the-continuous-integration-pipeline"><strong>Creating the continuous integration pipeline</strong></h2>
<p>The objective of this tutorial is to show how you can deploy applications on Kubernetes through <a target="_blank" href="https://circleci.com/continuous-integration/">continuous integration</a> (CI) using CircleCI and <a target="_blank" href="https://circleci.com/blog/a-brief-history-of-devops-part-iv-continuous-delivery-and-continuous-deployment/">continuous deployment</a> (CD) via ArgoCD. The CI pipeline should trigger the process of building the container and pushing it to Docker Hub, and the CD should deploy the application on Kubernetes.</p>
<p>To create the CI pipeline, you will be using CircleCI integrated with your GitHub account. CircleCI configuration is named <code>config.yml</code> and lives in the <code>.circleci</code> directory in the project’s root folder. The path to the configuration is <code>.circleci/config.yml</code>.</p>
<p>The content of <code>config.yml</code> is:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">2.1</span>

<span class="hljs-attr">orbs:</span>
  <span class="hljs-attr">docker:</span> <span class="hljs-string">circleci/docker@2.1.1</span>
  <span class="hljs-attr">azure-aks:</span> <span class="hljs-string">circleci/azure-aks@0.3.0</span>
  <span class="hljs-attr">kubernetes:</span> <span class="hljs-string">circleci/kubernetes@1.3.0</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">argocd-manual-sync:</span>
    <span class="hljs-attr">docker:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">cimg/base:stable</span>
    <span class="hljs-attr">parameters:</span>
      <span class="hljs-attr">server:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Server IP of ArgoCD
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
      <span class="hljs-attr">username:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Username for ArgoCD
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
      <span class="hljs-attr">password:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Password for ArgoCD
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">ArgoCD</span> <span class="hljs-string">CLI</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            URL=https://&lt;&lt; parameters.server &gt;&gt;/download/argocd-linux-amd64
            [ -w /usr/local/bin ] &amp;&amp; SUDO="" || SUDO=sudo
            $SUDO curl --insecure -sSL -o /usr/local/bin/argocd $URL
            $SUDO chmod +x /usr/local/bin/argocd
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">ArgoCD</span> <span class="hljs-string">CLI</span> <span class="hljs-string">login</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">argocd</span> <span class="hljs-string">login</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.server</span> <span class="hljs-string">&gt;&gt;</span> <span class="hljs-string">--insecure</span> <span class="hljs-string">--username</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.username</span> <span class="hljs-string">&gt;&gt;</span> <span class="hljs-string">--password</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.password</span> <span class="hljs-string">&gt;&gt;</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Manual</span> <span class="hljs-string">sync</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">argocd</span> <span class="hljs-string">app</span> <span class="hljs-string">sync</span> <span class="hljs-string">$APP_NAME</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Wait</span> <span class="hljs-string">for</span> <span class="hljs-string">application</span> <span class="hljs-string">to</span> <span class="hljs-string">reach</span> <span class="hljs-string">a</span> <span class="hljs-string">synced</span> <span class="hljs-string">and</span> <span class="hljs-string">healthy</span> <span class="hljs-string">state</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">argocd</span> <span class="hljs-string">app</span> <span class="hljs-string">wait</span> <span class="hljs-string">$APP_NAME</span>

  <span class="hljs-attr">argocd-configure:</span>
    <span class="hljs-attr">executor:</span> <span class="hljs-string">azure-aks/default</span>
    <span class="hljs-attr">parameters:</span>
      <span class="hljs-attr">cluster-name:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Name of the AKS cluster
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
      <span class="hljs-attr">resource-group:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Resource group that the cluster is in
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Pull</span> <span class="hljs-string">Updated</span> <span class="hljs-string">code</span> <span class="hljs-string">from</span> <span class="hljs-string">repo</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">git</span> <span class="hljs-string">pull</span> <span class="hljs-string">origin</span> <span class="hljs-string">$CIRCLE_BRANCH</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">azure-aks/update-kubeconfig-with-credentials:</span>
          <span class="hljs-attr">cluster-name:</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.cluster-name</span> <span class="hljs-string">&gt;&gt;</span>
          <span class="hljs-attr">install-kubectl:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">perform-login:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">resource-group:</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.resource-group</span> <span class="hljs-string">&gt;&gt;</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">kubernetes/create-or-update-resource:</span>
          <span class="hljs-attr">resource-file-path:</span> <span class="hljs-string">argocd/config.yaml</span>

  <span class="hljs-attr">bump-docker-tag-kustomize:</span>
    <span class="hljs-attr">docker:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">cimg/base:stable</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">kustomize</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            URL=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v4.5.2/kustomize_v4.5.2_linux_amd64.tar.gz
            curl -L $URL | tar zx
            [ -w /usr/local/bin ] &amp;&amp; SUDO="" || SUDO=sudo
            $SUDO chmod +x ./kustomize
            $SUDO mv ./kustomize /usr/local/bin
</span>      <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Bump</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Tag</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            cd manifests
            kustomize edit set image $APP_NAME=$DOCKER_LOGIN/$APP_NAME:$CIRCLE_SHA1
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">add_ssh_keys:</span>
          <span class="hljs-attr">fingerprints:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">"$SSH_FINGERPRINT"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Commit</span> <span class="hljs-string">&amp;</span> <span class="hljs-string">Push</span> <span class="hljs-string">to</span> <span class="hljs-string">GitHub</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            git config user.email "$GITHUB_EMAIL"
            git config user.name "CircleCI User"
            git checkout $CIRCLE_BRANCH           
            git add manifests/kustomization.yaml
            git commit -am "Bumps docker tag [skip ci]"
            git push origin $CIRCLE_BRANCH
</span>
<span class="hljs-attr">workflows:</span>
  <span class="hljs-attr">Deploy-App-on-AKS:</span>
    <span class="hljs-attr">jobs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">docker/publish:</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">$DOCKER_LOGIN/$APP_NAME</span>
          <span class="hljs-attr">tag:</span> <span class="hljs-string">$CIRCLE_SHA1,latest</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">bump-docker-tag-kustomize:</span>
          <span class="hljs-attr">requires:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">docker/publish</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">argocd-configure:</span>
          <span class="hljs-attr">cluster-name:</span> <span class="hljs-string">$CLUSTER_NAME</span>
          <span class="hljs-attr">resource-group:</span> <span class="hljs-string">$RESOURCE_GROUP</span>
          <span class="hljs-attr">requires:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">bump-docker-tag-kustomize</span>
      <span class="hljs-comment"># Paste the following only when you opt for the ArgoCD manual-sync-policy:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">argocd-manual-sync:</span>
          <span class="hljs-attr">server:</span> <span class="hljs-string">$ARGOCD_SERVER</span>
          <span class="hljs-attr">username:</span> <span class="hljs-string">$ARGOCD_USERNAME</span>
          <span class="hljs-attr">password:</span> <span class="hljs-string">$ARGOCD_PASSWORD</span>
          <span class="hljs-attr">requires:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">argocd-configure</span>
</code></pre>
<p>The CI workflow consists of these jobs:</p>
<ul>
<li><p><code>docker/publish</code> builds and pushes the container to Docker Hub.</p>
</li>
<li><p><code>bump-docker-tag-kustomize</code> updates the Docker Image Tag and generates a consolidated Kubernetes configuration file.</p>
</li>
<li><p><code>argocd-configure</code> applies the ArgoCD Configuration on the AKS cluster.</p>
</li>
<li><p><code>argocd-manual-sync</code> is needed only when you will be opting for the manual sync policy. For automatic sync, you can omit this job from the file.</p>
</li>
</ul>
<p>In this workflow, we have made extensive use of <a target="_blank" href="https://circleci.com/orbs/">CircleCI orbs</a>: open-source, shareable packages of parameterizable, reusable configuration elements, including jobs, commands, and executors. Here, orbs are used both directly and as building blocks for the custom jobs.</p>
<p>Commit and <a target="_blank" href="https://circleci.com/blog/pushing-a-project-to-github/">push</a> the changes to your GitHub repository.</p>
<h2 id="heading-setting-up-the-project-on-circleci"><strong>Setting up the project on CircleCI</strong></h2>
<p>The next step to deploying your application to AKS is connecting the application in your GitHub repository to CircleCI.</p>
<p>Go to your <a target="_blank" href="https://app.circleci.com/">CircleCI dashboard</a> and select the Projects tab on the left panel. Click the <strong>Set Up Project</strong> button for the GitHub repository containing the code (<code>aks-nodejs-argocd</code>).</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-circleci-dashboard.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-circleci-dashboard.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="CircleCI Dashboard" /></a></p>
<p>When prompted to select your config.yml file, click the <strong>Fastest</strong> option and type <code>main</code> as the branch name. CircleCI will automatically locate the <code>config.yml</code> file. Click <strong>Set Up Project</strong>.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-yaml-select.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-yaml-select.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="Select your config.yml file" /></a></p>
<p>The workflow will run, but will soon display a status of <code>Failed</code>. This is because you need to set up a user key and configure the environment variables.</p>
<p>To set up the user key, go to Project Settings and click <strong>SSH Keys</strong> from the left panel. In the User Key section, click <strong>Authorize with GitHub</strong>. The user key is needed by CircleCI to push changes to your GitHub account on behalf of the repository owner during the execution of the workflow.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-user-key.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-user-key.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="User key" /></a></p>
<p>To configure the environment variables, click <strong>Environment Variables</strong>. Select the <strong>Add Environment Variable</strong> option. On the next screen, type the environment variable and the value you want to assign to it.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-env-vars.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-env-vars.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="Environment Variables" /></a></p>
<p>The environment variables used in the file are:</p>
<ul>
<li><p><code>APP_NAME</code> : Container Image Name (aks-nodejs-argocd)</p>
</li>
<li><p><code>ARGOCD_PASSWORD</code> : ArgoCD portal password</p>
</li>
<li><p><code>ARGOCD_SERVER</code> : ArgoCD Server IP Address</p>
</li>
<li><p><code>ARGOCD_USERNAME</code> : ArgoCD portal username (admin)</p>
</li>
<li><p><code>AZURE_PASSWORD</code> : Azure Account Password</p>
</li>
<li><p><code>AZURE_USERNAME</code> : Azure Account Username</p>
</li>
<li><p><code>CLUSTER_NAME</code> : AKS Cluster Name (NodeCluster)</p>
</li>
<li><p><code>DOCKER_LOGIN</code> : Docker Hub Username</p>
</li>
<li><p><code>DOCKER_PASSWORD</code> : Docker Hub Password</p>
</li>
<li><p><code>GITHUB_EMAIL</code> : GitHub Account Email Address</p>
</li>
<li><p><code>RESOURCE_GROUP</code> : AKS Resource Group (NodeRG)</p>
</li>
<li><p><code>SSH_FINGERPRINT</code> : SSH Fingerprint of User Key used for pushing the updated Docker tag to GitHub</p>
</li>
</ul>
<p>To locate the <strong>SSH Fingerprint</strong>, go to <strong>Project Settings</strong> and select <strong>SSH Keys</strong> from the sidebar. Scroll down to the <strong>User Key</strong> section and copy the key.</p>
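<p>If you would rather script this setup than click through the UI, CircleCI's v2 API can create project environment variables. A sketch, assuming you have generated a personal API token (stored here in <code>CIRCLECI_TOKEN</code>) and substituting your GitHub username for <code>&lt;username&gt;</code>:</p>
<pre><code class="lang-bash"># Create one project environment variable via the CircleCI v2 API
curl -X POST "https://circleci.com/api/v2/project/gh/&lt;username&gt;/aks-nodejs-argocd/envvar" \
  --header "Circle-Token: $CIRCLECI_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"name": "APP_NAME", "value": "aks-nodejs-argocd"}'
</code></pre>
<p>Repeat the call for each variable in the list above.</p>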
<p>Re-run the workflow. This time the <code>status</code> will show <code>Success</code>.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-passed-workflow.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-passed-workflow.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="Success workflow" /></a></p>
<p>You will also find another pipeline with a status of <code>Not Run</code>. That is because you explicitly instructed CircleCI to skip it by including <code>[skip ci]</code> in the commit message. When CircleCI commits the updated configuration files to GitHub, <code>[skip ci]</code> prevents a self-triggering loop of the workflow.</p>
<h2 id="heading-monitoring-the-application-on-argocd-dashboard"><strong>Monitoring the application on ArgoCD Dashboard</strong></h2>
<p>A <code>status</code> that shows <code>Success</code> when the workflow is re-run means that the application has been deployed on the AKS cluster.</p>
<p>To observe and monitor the resources that are currently running on the AKS Cluster, log in to the ArgoCD Web Portal.</p>
<p>Earlier in this tutorial, you learned how to fetch the ArgoCD Server IP, username, and password for logging in to the portal. After logging in, you will be on the Applications page.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-argocd-application-page.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-argocd-application-page.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="ArgoCD Application" /></a></p>
<p>Click the application name. You will be redirected to a page with the tree view of all resources running on the AKS Cluster and their real-time status.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-argocd-app-tree-view.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-argocd-app-tree-view.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="ArgoCD App Tree view" /></a></p>
<h2 id="heading-accessing-the-application-on-aks"><strong>Accessing the application on AKS</strong></h2>
<p>To access the application, you need the external IP address of the cluster. You can use the Azure CLI to find the <code>External-IP</code>.</p>
<p>Configure <code>kubectl</code> to connect to AKS using this command:</p>
<pre><code class="lang-bash">az aks get-credentials --resource-group NodeRG --name NodeCluster
</code></pre>
<p>You created all the resources in the <code>nodejs</code> namespace. To get all the resources in that namespace, use this command:</p>
<pre><code class="lang-bash">kubectl get all --namespace nodejs
</code></pre>
<p>Copy the <code>External-IP</code> corresponding to <code>service/nodejs</code>.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-external-app-ip.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-external-app-ip.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="External-IP" /></a></p>
<p>You can access the application at <code>http://&lt;EXTERNAL-IP&gt;</code>. In my case, that is <code>http://20.121.253.220/</code>.</p>
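<p>If you prefer not to read the table by eye, the IP can also be pulled with a <code>jsonpath</code> query (a sketch; the query path is a standard Kubernetes field, and the fallback value below is only the sample IP from this tutorial so the snippet runs without a cluster):</p>

```shell
# Sketch: extract the LoadBalancer IP of service/nodejs non-interactively.
EXTERNAL_IP=$(kubectl get service nodejs --namespace nodejs \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null || true)
# Fall back to the tutorial's sample IP when no cluster is reachable.
EXTERNAL_IP="${EXTERNAL_IP:-20.121.253.220}"
echo "http://${EXTERNAL_IP}"
```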
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-21-final-application.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-21-final-application.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="Final application" /></a></p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>In this tutorial, you learned how to deploy your applications continuously on a Kubernetes cluster following GitOps practices with ArgoCD. This included configuring an automated CI pipeline. With the pipeline properly configured, any changes made to the application code are instantly reflected at the application URL. Say goodbye to manually configuring and deploying applications on Kubernetes.</p>
<p>As a bonus, you can change the values of the environment variables to use the CircleCI configuration file for similar applications.</p>
<p>The complete source code for this tutorial can also be found <a target="_blank" href="https://github.com/CIRCLECI-GWP/aks-nodejs-argocd">here on GitHub</a>.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Deploy a serverless workload on Kubernetes using Knative and ArgoCD]]></title><description><![CDATA[Originally published in CircleCI Blog: Deploy a serverless workload on Kubernetes using Knative and ArgoCD by Avik Kundu
Containers and microservices have revolutionized the way applications are deployed on the cloud. Since its launch in 2014, Kubern...]]></description><link>https://blog.avikkundu.com/deploy-serverless-workload-with-knative</link><guid isPermaLink="true">https://blog.avikkundu.com/deploy-serverless-workload-with-knative</guid><category><![CDATA[knative]]></category><category><![CDATA[CircleCI]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[CI/CD]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Tue, 04 Oct 2022 15:09:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1670598240284/v5o1USVFw.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Originally published in</em> <a target="_blank" href="https://circleci.com/blog/"><em>CircleCI</em></a> <em>Blog:</em> <a target="_blank" href="https://circleci.com/blog/deploy-serverless-workload-with-knative/"><em>Deploy a serverless workload on Kubernetes using Knative and ArgoCD</em></a> <em>by</em> <a target="_blank" href="https://circleci.com/blog/author/avik-kundu/"><em>Avik Kundu</em></a></p>
<p>Containers and microservices have revolutionized the way applications are deployed on the cloud. Since its launch in 2014, <a target="_blank" href="https://kubernetes.io/">Kubernetes</a> has become a standard tool for container orchestration. It provides a set of primitives to run resilient, distributed applications.</p>
<p>One of the key difficulties that developers face is being able to focus more on the details of the code than the infrastructure for it. The serverless approach to computing can be an effective way to solve this problem.</p>
<p>Serverless allows running event-driven functions by abstracting the underlying infrastructure. Compared to traditional Platform as a Service (PaaS), serverless allows your dev team to focus on the functionality of the service. Infrastructure issues, such as scaling and fault tolerance, are no longer a roadblock.</p>
<p><a target="_blank" href="https://knative.dev/">Knative</a> is an open-source enterprise-level solution for building serverless and event-driven applications. Its components can be used to build and deploy serverless applications on Kubernetes. Originally developed by Google, Knative now has contributors from IBM, Red Hat, and VMware.</p>
<p><a target="_blank" href="https://argoproj.github.io/cd/">ArgoCD</a> is a Kubernetes-native continuous deployment (CD) tool. It deploys code changes directly to Kubernetes resources by pulling them from Git repositories. ArgoCD follows the GitOps pattern, unlike some external CD solutions, which can support only push-based deployments. This tool gives developers the ability to control application updates and infrastructure setup from a unified platform.</p>
<p>In this tutorial, you will learn how to deploy a <a target="_blank" href="https://nodejs.org/">Node.js</a> application as a serverless workload with <a target="_blank" href="https://knative.dev/">Knative</a> on <a target="_blank" href="https://azure.microsoft.com/en-us/services/kubernetes-service/#overview">Azure Kubernetes Service</a> (AKS) using <a target="_blank" href="https://circleci.com/signup">CircleCI</a> and <a target="_blank" href="https://argoproj.github.io/cd/">ArgoCD</a>. You will be creating a continuous integration pipeline with <a target="_blank" href="https://circleci.com/orbs/">CircleCI orbs</a>, which are reusable packages of YAML configuration that condense repeated pieces of config into a single line of code. The pipeline is triggered when you push code to the <a target="_blank" href="https://github.com/">GitHub</a> repository. The result is an automated pipeline that triggers ArgoCD to deploy the latest version of the application on the Kubernetes cluster.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>To follow along with this tutorial, you will need:</p>
<ul>
<li><p><a target="_blank" href="https://hub.docker.com/">Docker Hub</a> account</p>
</li>
<li><p><a target="_blank" href="https://github.com/">GitHub</a> account</p>
</li>
<li><p><a target="_blank" href="https://azure.microsoft.com/en-in/features/azure-portal/">Microsoft Azure</a> account</p>
</li>
<li><p><a target="_blank" href="https://circleci.com/signup">CircleCI</a> account</p>
</li>
<li><p><a target="_blank" href="https://kubernetes.io/docs/tasks/tools/">Kubectl</a> installed on your system</p>
</li>
<li><p><a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/cli_installation/">ArgoCD CLI</a> installed on your system</p>
</li>
<li><p><a target="_blank" href="https://github.com/Azure/azure-cli">Azure CLI</a> installed on your system</p>
</li>
<li><p><a target="_blank" href="https://nodejs.org/">Node.js</a> installed on your system</p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/get-docker/">Docker Engine</a> installed on your system</p>
</li>
</ul>
<p>After you have these items in place, you are ready to move on to the next section.</p>
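<p>A quick way to confirm the command-line prerequisites are installed is to check that each binary is on your <code>PATH</code> (a sketch, not part of the tutorial itself):</p>

```shell
# Sketch: report which prerequisite CLIs are available on this machine.
REPORT=""
for tool in kubectl argocd az node docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    REPORT="${REPORT}found: ${tool}
"
  else
    REPORT="${REPORT}MISSING: ${tool}
"
  fi
done
printf '%s' "$REPORT"
```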
<h2 id="heading-cloning-the-nodejs-application"><strong>Cloning the Node.js application</strong></h2>
<p>In this tutorial, our main focus is on deploying the application on Kubernetes. To save time, you can directly <a target="_blank" href="https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository">clone</a> the <a target="_blank" href="https://github.com/Lucifergene/nodejs-knative-argocd">Node.js application</a> to your GitHub account and continue with the rest of the process.</p>
<p>To clone the project, run:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/CIRCLECI-GWP/nodejs-knative-argocd.git
</code></pre>
<p>There are 2 branches in this repository:</p>
<ul>
<li><p>The <code>main</code> branch contains only the Node.js application code.</p>
</li>
<li><p>The <code>circleci-project-setup</code> branch contains the application code, along with all the YAML files that you will create in this tutorial.</p>
</li>
</ul>
<p>Check out the <code>main</code> branch.</p>
<p>The Node.js application lives in the <code>app.js</code> file:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
<span class="hljs-keyword">const</span> path = <span class="hljs-built_in">require</span>(<span class="hljs-string">"path"</span>);
<span class="hljs-keyword">const</span> morgan = <span class="hljs-built_in">require</span>(<span class="hljs-string">"morgan"</span>);
<span class="hljs-keyword">const</span> bodyParser = <span class="hljs-built_in">require</span>(<span class="hljs-string">"body-parser"</span>);
<span class="hljs-comment">/* eslint-disable no-console */</span>
<span class="hljs-keyword">const</span> port = process.env.PORT || <span class="hljs-number">1337</span>;
<span class="hljs-keyword">const</span> app = express();
app.use(morgan(<span class="hljs-string">"dev"</span>));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ <span class="hljs-attr">extended</span>: <span class="hljs-string">"true"</span> }));
app.use(bodyParser.json({ <span class="hljs-attr">type</span>: <span class="hljs-string">"application/vnd.api+json"</span> }));
app.use(express.static(path.join(__dirname, <span class="hljs-string">"./"</span>)));
app.get(<span class="hljs-string">"*"</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  res.sendFile(path.join(__dirname, <span class="hljs-string">"./index.html"</span>));
});
app.listen(port, <span class="hljs-function">(<span class="hljs-params">err</span>) =&gt;</span> {
  <span class="hljs-keyword">if</span> (err) {
    <span class="hljs-built_in">console</span>.log(err);
  } <span class="hljs-keyword">else</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`App at: http://localhost:<span class="hljs-subst">${port}</span>`</span>);
  }
});
<span class="hljs-built_in">module</span>.exports = app;
</code></pre>
<p>The key takeaway from this code is the port number where the application will be running: in this case, <code>1337</code>.</p>
<p>You can run the application locally by first installing the dependencies. In the project’s root, type:</p>
<pre><code class="lang-bash">npm install
</code></pre>
<p>Then run the application with the command:</p>
<pre><code class="lang-bash">node app.js
</code></pre>
<p>The application should now be up and running at the address <code>http://localhost:1337</code>.</p>
<h2 id="heading-containerizing-the-nodejs-application"><strong>Containerizing the Node.js application</strong></h2>
<p>The first step for deploying an application to Kubernetes is containerizing it. Containerizing applications that use Docker as the container runtime tool requires you to create a <a target="_blank" href="https://docs.docker.com/engine/reference/builder/">Dockerfile</a>. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.</p>
<p>Create a new file in the root directory of the project and name it <code>Dockerfile</code>. Copy this content to the file:</p>
<pre><code class="lang-plaintext"># Set the base image to use for subsequent instructions
FROM node:alpine
# Set the working directory for any subsequent ADD, COPY, CMD, ENTRYPOINT,
# or RUN instructions that follow it in the Dockerfile
WORKDIR /usr/src/app
# Copy files or folders from source to the dest path in the image's filesystem.
COPY package.json /usr/src/app/
COPY . /usr/src/app/
# Execute any commands on top of the current image as a new layer and commit the results.
RUN npm install --production
# Define the network ports that this container will listen to at runtime.
EXPOSE 1337
# Configure the container to be run as an executable.
ENTRYPOINT ["npm", "start"]
</code></pre>
<p>If you have <a target="_blank" href="https://docs.docker.com/get-docker/">Docker</a> installed, you can build and run the container locally for testing. Later in this tutorial, you will learn how to automate this process using CircleCI orbs.</p>
<p>To build and tag the container, you can type:</p>
<pre><code class="lang-bash">docker build -t nodejs-knative-argocd:latest .
</code></pre>
<p>Confirm that the image was successfully created by running this command from your terminal:</p>
<pre><code class="lang-bash">docker images
</code></pre>
<p>Then run the container with the command:</p>
<pre><code class="lang-bash">docker run -it -p 1337:1337 nodejs-knative-argocd:latest
</code></pre>
<p>The application should now be up and running at the address <code>http://127.0.0.1:1337</code>.</p>
<p>Commit and <a target="_blank" href="https://circleci.com/blog/pushing-a-project-to-github/">push</a> the changes to your GitHub repository.</p>
<h2 id="heading-configuring-knative-service-manifests"><strong>Configuring Knative Service manifests</strong></h2>
<p>In Knative, <a target="_blank" href="https://knative.dev/docs/serving/services/">Services</a> are used to deploy an application. To create an application using Knative, you must create a <a target="_blank" href="https://yaml.org/">YAML</a> file that defines a Service. This YAML file specifies metadata about the application, points to the hosted image of the app and allows the Service to be configured.</p>
<p>Create a directory named <code>knative</code> in the root directory of the project. Then, create a new file in the new directory and name it <code>service.yaml</code>.</p>
<p>The contents of the <code>service.yaml</code> are:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">serving.knative.dev/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-literal">null</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs-knative-argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-literal">null</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs-knative-argocd</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containerConcurrency:</span> <span class="hljs-number">0</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">docker.io/avik6028/nodejs-knative-argocd:latest</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">user-container</span>
          <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">1337</span>
              <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
          <span class="hljs-attr">readinessProbe:</span>
            <span class="hljs-attr">successThreshold:</span> <span class="hljs-number">1</span>
            <span class="hljs-attr">tcpSocket:</span>
              <span class="hljs-attr">port:</span> <span class="hljs-number">0</span>
          <span class="hljs-attr">resources:</span> {}
      <span class="hljs-attr">enableServiceLinks:</span> <span class="hljs-literal">false</span>
      <span class="hljs-attr">timeoutSeconds:</span> <span class="hljs-number">300</span>
<span class="hljs-attr">status:</span> {}
</code></pre>
<p>The key takeaway from this code block is the <code>spec.template.metadata.name</code> and <code>spec.template.spec.containers[0].image</code>. These denote the name of the template and the container image that will be pulled and deployed with Knative on the Kubernetes cluster. These values will be updated with the latest container image information during the continuous integration process.</p>
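<p>For illustration, here is one way a CI step could stamp a new tag into the manifest (a sketch using <code>sed</code> on a throwaway copy; the tag value is made up, and the tutorial's actual pipeline performs this update with its own tooling):</p>

```shell
# Sketch: replace the image tag on a copy of the Knative manifest's image line.
mkdir -p /tmp/knative-demo
cat > /tmp/knative-demo/service.yaml <<'EOF'
        - image: docker.io/avik6028/nodejs-knative-argocd:latest
EOF
TAG="0.1.42"   # in CI, this might come from a build number (assumption)
sed -i "s|\(nodejs-knative-argocd\):[^ ]*|\1:${TAG}|" /tmp/knative-demo/service.yaml
cat /tmp/knative-demo/service.yaml
# prints:         - image: docker.io/avik6028/nodejs-knative-argocd:0.1.42
```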
<p>Commit and push these files into the <code>main</code> branch of the GitHub repository you cloned earlier.</p>
<h2 id="heading-launching-the-azure-kubernetes-service-aks-cluster"><strong>Launching the Azure Kubernetes Service (AKS) cluster</strong></h2>
<p>In this tutorial, you will be learning to deploy the application on the <a target="_blank" href="https://azure.microsoft.com/en-us/services/kubernetes-service/#overview">AKS</a> cluster. To create the AKS cluster, you should have a Microsoft Azure account and the Azure CLI installed on your computer. <a target="_blank" href="https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli">Connect the CLI to your Azure account</a>.</p>
<p>Now you can launch an AKS cluster with the help of Azure CLI.</p>
<p>Create a Resource Group using this command:</p>
<pre><code class="lang-bash">az group create --name NodeRG --location eastus
</code></pre>
<p>Launch a two-node cluster:</p>
<pre><code class="lang-bash">az aks create --resource-group NodeRG --name NodeCluster --node-count 2 --enable-addons http_application_routing
</code></pre>
<p><strong>Note:</strong> <em>If you generated any SSH keys in your system previously, you need to add the optional parameter</em> <code>--generate-ssh-keys</code> to the command. This parameter will auto-generate SSH public and private key files if they are missing. The keys will be stored in the <code>~/.ssh</code> directory.</p>
<p>The AKS cluster will take 10 to 15 minutes to launch.</p>
<h2 id="heading-installing-knative-in-the-kubernetes-cluster"><strong>Installing Knative in the Kubernetes cluster</strong></h2>
<p>Once the cluster is up and running, you need to install Knative inside the cluster to use it for deploying your serverless workload.</p>
<p>To install the application, use the Azure CLI once again.</p>
<p>Configure <code>kubectl</code> to connect to AKS using this command:</p>
<pre><code class="lang-bash">az aks get-credentials --resource-group NodeRG --name NodeCluster
</code></pre>
<p>To install the Knative core components and custom resources, execute these commands:</p>
<pre><code class="lang-bash">kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-core.yaml
</code></pre>
<p>Knative also requires a networking layer for exposing its services externally. You need to install <a target="_blank" href="https://github.com/knative-sandbox/net-kourier">Kourier</a>, a lightweight Knative Serving ingress.</p>
<pre><code class="lang-bash">kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.7.0/kourier.yaml
</code></pre>
<p>Configure Knative Serving to use Kourier by default by running:</p>
<pre><code class="lang-bash">kubectl patch configmap/config-network \
  --namespace knative-serving \
  --<span class="hljs-built_in">type</span> merge \
  --patch <span class="hljs-string">'{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'</span>
</code></pre>
<p>You can configure DNS so you do not need to run curl commands with a host header. Knative provides a Kubernetes Job called <code>default-domain</code> that configures Knative Serving to use <code>sslip.io</code> as the default DNS suffix.</p>
<pre><code class="lang-bash">kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.7.1/serving-default-domain.yaml
</code></pre>
<p>Once you execute these commands, Knative will be installed in the <code>knative-serving</code> namespace. To list all the resources in that namespace:</p>
<pre><code class="lang-bash">kubectl get all --namespace knative-serving
</code></pre>
<h2 id="heading-installing-argocd-in-the-aks-cluster"><strong>Installing ArgoCD in the AKS cluster</strong></h2>
<p>Once the cluster is up and running, you need to install ArgoCD inside the cluster to use it for deploying your application.</p>
<p>To install ArgoCD, enter:</p>
<pre><code class="lang-bash">kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>When you execute these commands, ArgoCD will be installed in the <code>argocd</code> namespace. To list all the resources in that namespace:</p>
<pre><code class="lang-bash">kubectl get all --namespace argocd
</code></pre>
<h3 id="heading-exposing-the-argocd-api-server"><strong>Exposing the ArgoCD API server</strong></h3>
<p>By default, the ArgoCD API server is not exposed to an external IP. Because you will access the application from the internet during this tutorial, you need to expose the ArgoCD server with an external IP via Service Type Load Balancer.</p>
<p>Change the argocd-server service type to LoadBalancer:</p>
<pre><code class="lang-bash">kubectl patch svc argocd-server -n argocd -p <span class="hljs-string">'{"spec": {"type": "LoadBalancer"}}'</span>
</code></pre>
<p><strong>Note:</strong> <em>You can also use Kubectl port forwarding to connect to the API server without exposing the service. Use this command:</em> <code>kubectl port-forward svc/argocd-server -n argocd 8080:443</code></p>
<p>You can now access the API server using <code>https://localhost:8080</code>.</p>
<h2 id="heading-accessing-the-argocd-web-portal"><strong>Accessing the ArgoCD web portal</strong></h2>
<p>Once you have exposed the ArgoCD API server with an external IP, you can access the portal with the external IP Address that was generated.</p>
<p>Because you installed ArgoCD in the <code>argocd</code> namespace, use this command to get all the resources for the namespace:</p>
<pre><code class="lang-bash">kubectl get all --namespace argocd
</code></pre>
<p>Copy the <code>External-IP</code> corresponding to <code>service/argocd-server</code>.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-external-ip.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-external-ip.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="External-IP" /></a></p>
<p>You can access the application at <code>http://&lt;EXTERNAL-IP&gt;</code>.<br />I used <code>http://52.146.29.61/</code>.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-argocd-application.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-argocd-application.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="ArgoCD Application" /></a></p>
<p>To log into the portal, you will need the username and password. The username is set as <code>admin</code> by default.</p>
<p>To fetch the password, execute this command:</p>
<pre><code class="lang-bash">kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=<span class="hljs-string">"{.data.password}"</span> | base64 -d; <span class="hljs-built_in">echo</span>
</code></pre>
<p>Use this username-password combination to log into the ArgoCD portal.</p>
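<p>The one-liner works because the secret's <code>password</code> field is stored base64-encoded; <code>base64 -d</code> simply decodes it. With a sample value (not a real secret):</p>

```shell
# Sketch: decoding a base64-encoded secret value, as the kubectl one-liner does.
ENCODED="c3VwZXItc2VjcmV0"                  # base64 of the sample string below
PASSWORD=$(printf '%s' "$ENCODED" | base64 -d)
echo "$PASSWORD"                            # prints: super-secret
```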
<h2 id="heading-configuring-kubernetes-manifests-for-argocd"><strong>Configuring Kubernetes manifests for ArgoCD</strong></h2>
<p>To configure ArgoCD to deploy your application on Kubernetes, you will have to set up ArgoCD to connect the Git Repository and Kubernetes in a declarative way using <a target="_blank" href="https://yaml.org/">YAML</a> for configuration.</p>
<p>Apart from this method, you can also set up ArgoCD from the web portal or with the ArgoCD CLI. However, because this article follows the GitOps principle that the Git repository should act as the sole source of truth, the declarative method using YAML files serves best.</p>
<p>One of the key features and capabilities of ArgoCD is to sync via manual or automatic policy for the deployment of applications to a Kubernetes cluster.</p>
<p>To get started, create a directory named <code>argocd</code> in the root directory of the project. Create a new file in the new directory and name it as <code>config.yaml</code>.</p>
<h3 id="heading-manual-sync-policy"><strong>Manual Sync Policy</strong></h3>
<p>As the name suggests, with this policy you synchronize your application manually via the CI/CD pipeline. Whenever a code change is made, the pipeline is triggered and calls the ArgoCD server APIs to start the sync process based on the changes you commit. To communicate with the ArgoCD server APIs, you can use the ArgoCD CLI or one of the SDKs available for various programming languages.</p>
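<p>Concretely, the pipeline's sync step could be a small script like the following (a sketch written to a file rather than executed here, since it needs a live ArgoCD server; the application name matches this tutorial's configuration, and the flags are standard ArgoCD CLI options):</p>

```shell
# Sketch: a manual-sync step a CI job could run against the ArgoCD API server.
cat > /tmp/argocd-sync.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Log in with the server IP and credentials stored as pipeline variables.
argocd login "$ARGOCD_SERVER" --username "$ARGOCD_USERNAME" \
  --password "$ARGOCD_PASSWORD" --insecure
# Trigger the sync and wait for the application to become healthy.
argocd app sync nodejs-knative-argocd
argocd app wait nodejs-knative-argocd --timeout 300
EOF
chmod +x /tmp/argocd-sync.sh
echo "wrote /tmp/argocd-sync.sh"
```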
<p>For setting up the Manual Sync policy for ArgoCD, paste this in the <code>config.yaml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs-knative-argocd</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">destination:</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">nodejs</span>
    <span class="hljs-attr">server:</span> <span class="hljs-string">'https://kubernetes.default.svc'</span>
  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">knative</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">'https://github.com/CIRCLECI-GWP/nodejs-knative-argocd'</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-string">circleci-project-setup</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>
</code></pre>
<h3 id="heading-automated-sync-policy"><strong>Automated Sync policy</strong></h3>
<p>ArgoCD can automatically sync an application when it detects differences between the desired manifests in Git and the live state in the cluster.</p>
<p>A benefit of automatic sync is that CI/CD pipelines no longer need direct access to the ArgoCD API server to perform the deployment. Instead, the pipeline makes a commit and pushes to the Git repository with the changes to the manifests in the tracking Git repo.</p>
<p>If you want to use the automated sync policy instead, paste this into the <code>config.yaml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nodejs-knative-argocd</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">destination:</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">nodejs</span>
    <span class="hljs-attr">server:</span> <span class="hljs-string">'https://kubernetes.default.svc'</span>
  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">knative</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">'https://github.com/CIRCLECI-GWP/nodejs-knative-argocd'</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-string">circleci-project-setup</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">syncPolicy:</span>
    <span class="hljs-attr">automated:</span>
      <span class="hljs-attr">prune:</span> <span class="hljs-literal">false</span>
      <span class="hljs-attr">selfHeal:</span> <span class="hljs-literal">false</span>
</code></pre>
<p>Commit and <a target="_blank" href="https://circleci.com/blog/pushing-a-project-to-github/">push</a> these files into the <code>main</code> branch of the GitHub repository you had cloned earlier.</p>
<h2 id="heading-creating-the-continuous-integration-pipeline"><strong>Creating the continuous integration pipeline</strong></h2>
<p>The objective of this tutorial is to show how you can deploy a serverless workload with Knative on Kubernetes through <a target="_blank" href="https://circleci.com/continuous-integration/">continuous integration</a> (CI) via CircleCI and <a target="_blank" href="https://circleci.com/blog/a-brief-history-of-devops-part-iv-continuous-delivery-and-continuous-deployment/">continuous deployment</a> (CD) via ArgoCD.</p>
<p>To create the CI pipeline, we will be using CircleCI integrated with your GitHub account. The CircleCI configuration lives in a <code>config.yml</code> file inside the <code>.circleci</code> directory at the project root; that is, the path to the configuration is <code>.circleci/config.yml</code>.</p>
<p>The contents of <code>config.yml</code> are:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">2.1</span>

<span class="hljs-attr">orbs:</span>
  <span class="hljs-attr">docker:</span> <span class="hljs-string">circleci/docker@2.1.1</span>
  <span class="hljs-attr">azure-aks:</span> <span class="hljs-string">circleci/azure-aks@0.3.0</span>
  <span class="hljs-attr">kubernetes:</span> <span class="hljs-string">circleci/kubernetes@1.3.0</span>

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">argocd-manual-sync:</span>
    <span class="hljs-attr">docker:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">cimg/base:stable</span>
    <span class="hljs-attr">parameters:</span>
      <span class="hljs-attr">server:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Server IP of ArgoCD
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
      <span class="hljs-attr">username:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Username for ArgoCD
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
      <span class="hljs-attr">password:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Password for ArgoCD
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">ArgoCD</span> <span class="hljs-string">CLI</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            URL=https://&lt;&lt; parameters.server &gt;&gt;/download/argocd-linux-amd64
            [ -w /usr/local/bin ] &amp;&amp; SUDO="" || SUDO=sudo
            $SUDO curl --insecure -sSL -o /usr/local/bin/argocd $URL
            $SUDO chmod +x /usr/local/bin/argocd
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">ArgoCD</span> <span class="hljs-string">CLI</span> <span class="hljs-string">login</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">argocd</span> <span class="hljs-string">login</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.server</span> <span class="hljs-string">&gt;&gt;</span> <span class="hljs-string">--insecure</span> <span class="hljs-string">--username</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.username</span> <span class="hljs-string">&gt;&gt;</span> <span class="hljs-string">--password</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.password</span> <span class="hljs-string">&gt;&gt;</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Manual</span> <span class="hljs-string">sync</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">argocd</span> <span class="hljs-string">app</span> <span class="hljs-string">sync</span> <span class="hljs-string">$APP_NAME</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Wait</span> <span class="hljs-string">for</span> <span class="hljs-string">application</span> <span class="hljs-string">to</span> <span class="hljs-string">reach</span> <span class="hljs-string">a</span> <span class="hljs-string">synced</span> <span class="hljs-string">and</span> <span class="hljs-string">healthy</span> <span class="hljs-string">state</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">argocd</span> <span class="hljs-string">app</span> <span class="hljs-string">wait</span> <span class="hljs-string">$APP_NAME</span>

  <span class="hljs-attr">argocd-configure:</span>
    <span class="hljs-attr">docker:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">cimg/base:stable</span>
    <span class="hljs-attr">parameters:</span>
      <span class="hljs-attr">cluster-name:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Name of the AKS cluster
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
      <span class="hljs-attr">resource-group:</span>
        <span class="hljs-attr">description:</span> <span class="hljs-string">|
          Resource group that the cluster is in
</span>        <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Pull</span> <span class="hljs-string">Updated</span> <span class="hljs-string">code</span> <span class="hljs-string">from</span> <span class="hljs-string">repo</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">git</span> <span class="hljs-string">pull</span> <span class="hljs-string">origin</span> <span class="hljs-string">$CIRCLE_BRANCH</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">azure-aks/update-kubeconfig-with-credentials:</span>
          <span class="hljs-attr">cluster-name:</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.cluster-name</span> <span class="hljs-string">&gt;&gt;</span>
          <span class="hljs-attr">install-kubectl:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">perform-login:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">resource-group:</span> <span class="hljs-string">&lt;&lt;</span> <span class="hljs-string">parameters.resource-group</span> <span class="hljs-string">&gt;&gt;</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">kubernetes/create-or-update-resource:</span>
          <span class="hljs-attr">resource-file-path:</span> <span class="hljs-string">argocd/config.yaml</span>

  <span class="hljs-attr">bump-docker-tag:</span>
    <span class="hljs-attr">docker:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">cimg/base:stable</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Install</span> <span class="hljs-string">yq</span> <span class="hljs-bullet">-</span> <span class="hljs-string">portable</span> <span class="hljs-string">yaml</span> <span class="hljs-string">processor</span> 
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            URL=https://github.com/mikefarah/yq/releases/download/3.3.4/yq_linux_amd64
            [ -w /usr/local/bin ] &amp;&amp; SUDO="" || SUDO=sudo
            $SUDO wget $URL
            $SUDO mv ./yq_linux_amd64 /usr/local/bin/yq
            $SUDO chmod +x /usr/local/bin/yq
</span>      <span class="hljs-bullet">-</span> <span class="hljs-string">checkout</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Update</span> <span class="hljs-string">Knative</span> <span class="hljs-string">Service</span> <span class="hljs-string">manifest</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            yq w -i knative/service.yaml spec.template.metadata.name "$APP_NAME-$CIRCLE_BUILD_NUM"
            yq w -i knative/service.yaml spec.template.spec.containers[0].image "docker.io/$DOCKER_LOGIN/$APP_NAME:$CIRCLE_SHA1"
</span>      <span class="hljs-bullet">-</span> <span class="hljs-attr">add_ssh_keys:</span>
          <span class="hljs-attr">fingerprints:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">"$SSH_FINGERPRINT"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">run:</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">Commit</span> <span class="hljs-string">&amp;</span> <span class="hljs-string">Push</span> <span class="hljs-string">to</span> <span class="hljs-string">GitHub</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">|
            git config user.email "$GITHUB_EMAIL"
            git config user.name "CircleCI User"
            git checkout $CIRCLE_BRANCH           
            git commit -am "Bumps docker tag [skip ci]"
            git push origin $CIRCLE_BRANCH
</span>
<span class="hljs-attr">workflows:</span>
  <span class="hljs-attr">Deploy-App-on-AKS:</span>
    <span class="hljs-attr">jobs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">docker/publish:</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">$DOCKER_LOGIN/$APP_NAME</span>
          <span class="hljs-attr">tag:</span> <span class="hljs-string">$CIRCLE_SHA1,latest</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">bump-docker-tag:</span>
          <span class="hljs-attr">requires:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">docker/publish</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">argocd-configure:</span>
          <span class="hljs-attr">cluster-name:</span> <span class="hljs-string">$CLUSTER_NAME</span>
          <span class="hljs-attr">resource-group:</span> <span class="hljs-string">$RESOURCE_GROUP</span>
          <span class="hljs-attr">requires:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">bump-docker-tag</span>
<span class="hljs-comment"># Paste the following only when you opt for the ArgoCD manual-sync-policy:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">argocd-manual-sync:</span>
          <span class="hljs-attr">server:</span> <span class="hljs-string">$ARGOCD_SERVER</span>
          <span class="hljs-attr">username:</span> <span class="hljs-string">$ARGOCD_USERNAME</span>
          <span class="hljs-attr">password:</span> <span class="hljs-string">$ARGOCD_PASSWORD</span>
          <span class="hljs-attr">requires:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">argocd-configure</span>
</code></pre>
<p>The CI workflow consists of the following jobs:</p>
<ul>
<li><p>The <code>docker/publish</code> job builds the container image and pushes it to Docker Hub</p>
</li>
<li><p>The <code>bump-docker-tag</code> job updates the Knative Service YAML with the latest container image tag</p>
</li>
<li><p>The <code>argocd-configure</code> job applies the ArgoCD Configuration on the AKS cluster</p>
</li>
<li><p>The <code>argocd-manual-sync</code> job is needed only if you opt for the <code>manual-sync-policy</code>. If you use <code>automatic-sync</code>, you can omit this job from the file.</p>
</li>
</ul>
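<p>For reference, the <code>argocd-manual-sync</code> job above boils down to three ArgoCD CLI calls (after installing the binary). A minimal sketch of the commands it produces, with parameter names following the config above:</p>

```python
def argocd_manual_sync_commands(app_name, server, username, password):
    # The three steps of the argocd-manual-sync job, expressed as the
    # CLI invocations they run: login, sync the app, then wait for it
    # to reach a synced and healthy state.
    return [
        ["argocd", "login", server, "--insecure",
         "--username", username, "--password", password],
        ["argocd", "app", "sync", app_name],
        ["argocd", "app", "wait", app_name],
    ]

cmds = argocd_manual_sync_commands("nodejs-knative-argocd",
                                   "203.0.113.10", "admin", "secret")
print(cmds[1])  # ['argocd', 'app', 'sync', 'nodejs-knative-argocd']
```

The server address, username, and password here are placeholders; in the pipeline they come from the <code>$ARGOCD_SERVER</code>, <code>$ARGOCD_USERNAME</code>, and <code>$ARGOCD_PASSWORD</code> environment variables.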
<p>In this workflow, we have extensively used <a target="_blank" href="https://circleci.com/orbs/">CircleCI orbs</a>, which are open-source, shareable packages of reusable, parameterizable configuration elements, including jobs, commands, and executors. The orbs are either used directly or serve as building blocks for custom jobs.</p>
<p>Commit and <a target="_blank" href="https://circleci.com/blog/pushing-a-project-to-github/">push</a> the changes to your GitHub repository.</p>
<h2 id="heading-setting-up-the-project-on-circleci"><strong>Setting up the project on CircleCI</strong></h2>
<p>The next step to deploying your application to AKS is connecting the application in your GitHub repository to CircleCI.</p>
<p>Go to your <a target="_blank" href="https://app.circleci.com/">CircleCI dashboard</a> and select the Projects tab on the left panel. Click the <code>Set Up Project</code> button next to the GitHub repository that contains the code (nodejs-knative-argocd).</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-circleci-dashboard.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-circleci-dashboard.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="CircleCI dashboard" /></a></p>
<p>On the <strong>Select your config.yml file</strong> screen, select the <strong>Fastest</strong> option and type <code>main</code> as the branch name. CircleCI will automatically locate the <code>config.yml</code> file. Click <strong>Set Up Project</strong>.</p>
<p>The workflow will start running automatically, but after some time it will display the <code>status</code> as <code>Failed</code>. This is because you still need to set up a <strong>User Key</strong> and configure the <strong>Environment Variables</strong> in <strong>Project Settings</strong> in CircleCI.</p>
<p>To set up the User Key, select the <strong>SSH Keys</strong> option from the left panel of the <strong>Project Settings</strong>. Under the <strong>User Key</strong> section, click <strong>Authorize with GitHub</strong>. The User Key is needed by CircleCI to push changes to your GitHub account on behalf of the repository owner, during the execution of the workflow.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-user-key.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-user-key.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="User Key" /></a></p>
<p>To configure the Environment Variables, select the <strong>Environment Variables</strong> option from the left panel of the <strong>Project Settings</strong>. Select the <strong>Add Environment Variable</strong> option. On the next screen, type the environment variable and the value you want it to be assigned to.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-env-vars.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-env-vars.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="Environment Variables" /></a></p>
<p>The Environment Variables used in the file are listed below:</p>
<ul>
<li><p><code>APP_NAME</code> : Container Image Name (nodejs-knative-argocd)</p>
</li>
<li><p><code>ARGOCD_PASSWORD</code> : ArgoCD portal password</p>
</li>
<li><p><code>ARGOCD_SERVER</code> : ArgoCD Server IP Address</p>
</li>
<li><p><code>ARGOCD_USERNAME</code> : ArgoCD portal username (admin)</p>
</li>
<li><p><code>AZURE_PASSWORD</code> : Azure Account Password</p>
</li>
<li><p><code>AZURE_USERNAME</code> : Azure Account Username</p>
</li>
<li><p><code>CLUSTER_NAME</code> : AKS Cluster Name (NodeCluster)</p>
</li>
<li><p><code>DOCKER_LOGIN</code> : Dockerhub Username</p>
</li>
<li><p><code>DOCKER_PASSWORD</code> : Dockerhub Password</p>
</li>
<li><p><code>GITHUB_EMAIL</code> : GitHub Account Email Address</p>
</li>
<li><p><code>RESOURCE_GROUP</code> : AKS Resource Group (NodeRG)</p>
</li>
<li><p><code>SSH_FINGERPRINT</code> : SSH Fingerprint of User Key used for pushing the updated Docker tag to GitHub</p>
</li>
</ul>
<p>To locate the <strong>SSH Fingerprint</strong>, go to <strong>Project Settings</strong> and select <strong>SSH Keys</strong> from the sidebar. Scroll down to the <strong>User Key</strong> section and copy the key.</p>
<p>Re-run the workflow. This time the <code>status</code> will show <code>Success</code>.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-passed-workflow.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-passed-workflow.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="Success Workflow" /></a></p>
<p>You will also find another pipeline with the <code>status</code> as <code>Not Run</code>. That is because you have explicitly instructed CircleCI to skip the pipeline by including <code>[skip ci]</code> in the commit message. When CircleCI commits the updated configuration files to GitHub, <code>[skip ci]</code> prevents a self-triggering loop of the workflow.</p>
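<p>The loop-prevention logic can be sketched in a few lines. This is an illustrative check, not CircleCI's actual implementation:</p>

```python
def should_skip_pipeline(commit_message):
    # A commit message containing the skip marker tells the CI system
    # not to trigger a pipeline for that commit, which prevents the
    # commit-push-build loop described above. CircleCI accepts both
    # "[skip ci]" and "[ci skip]".
    return "[skip ci]" in commit_message or "[ci skip]" in commit_message

print(should_skip_pipeline("Bumps docker tag [skip ci]"))  # True
print(should_skip_pipeline("Add new feature"))             # False
```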
<h2 id="heading-monitoring-the-application-on-argocd-dashboard"><strong>Monitoring the application on ArgoCD Dashboard</strong></h2>
<p>A <code>status</code> that shows <code>Success</code> when the workflow is re-run means that the application has been deployed on the AKS cluster.</p>
<p>To observe and monitor the resources that are currently running on the AKS Cluster, log in to the ArgoCD Web Portal.</p>
<p>Earlier in this tutorial, you learned how to fetch the ArgoCD Server IP, username, and password for logging in to the portal. After logging in, you will land on the Applications page.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-argocd-application-page.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-argocd-application-page.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="ArgoCD Application" /></a></p>
<p>Click the application name. You will be redirected to a page with the tree view of all resources running on the AKS Cluster and their real-time status.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-argocd-app-tree-view.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-argocd-app-tree-view.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="ArgoCD App Tree View" /></a></p>
<h2 id="heading-accessing-the-application-on-aks"><strong>Accessing the application on AKS</strong></h2>
<p>To access the application, you will need the DNS name of the <code>route</code> created by the Knative Service.</p>
<p>You created all the resources in the <code>nodejs</code> namespace. To get all the resources in that namespace, use this command:</p>
<pre><code class="lang-bash">kubectl get all --namespace nodejs
</code></pre>
<p>Copy the <code>URL</code> for <code>service.serving.knative.dev/nodejs-knative-argocd</code>.</p>
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-external-app-ip.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-external-app-ip.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="DNS Name" /></a></p>
<p>Use this URL to access the application. For me, the URL is <code>http://nodejs-knative-argocd.nodejs.52.146.24.47.sslip.io/</code>.</p>
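<p>The URL follows Knative's default domain template <code>{service}.{namespace}.{domain}</code>, where sslip.io turns the ingress IP into a resolvable domain. A small sketch of how the URL is composed:</p>

```python
def knative_url(service, namespace, ingress_ip):
    # Knative's default domain template is {name}.{namespace}.{domain};
    # with sslip.io, the domain is "<ingress-ip>.sslip.io".
    return f"http://{service}.{namespace}.{ingress_ip}.sslip.io/"

print(knative_url("nodejs-knative-argocd", "nodejs", "52.146.24.47"))
# → http://nodejs-knative-argocd.nodejs.52.146.24.47.sslip.io/
```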
<p><a target="_blank" href="https://production-cci-com.imgix.net/blog/media/2022-10-14-final-application.png?ixlib=rb-3.2.1&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data&amp;fm=jpg"><img src="https://production-cci-com.imgix.net/blog/media/2022-10-14-final-application.png?ixlib=rb-3.2.1&amp;w=2000&amp;auto=format&amp;fit=max&amp;q=60&amp;ch=DPR%2CWidth%2CViewport-Width%2CSave-Data" alt="Final Application" /></a></p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>And that is the end of the tutorial. By following this guide, you learned how to build an automated CI pipeline for continuously deploying a serverless workload to a Kubernetes cluster, following GitOps practices with Knative and ArgoCD. Once the pipeline is properly configured, any changes made to the application code will be reflected on the workload URL. There is no longer any need to configure and deploy applications on Kubernetes manually. By changing the environment variable values, you can reuse the CircleCI configuration for similar applications.</p>
<p>The complete source code for this tutorial can also be found <a target="_blank" href="https://github.com/CIRCLECI-GWP/nodejs-knative-argocd">here on GitHub</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Building Serverless URL Shortener Service on AWS]]></title><description><![CDATA[The AWS Serverless Ecosystem

Serverless is a way to describe the services, practices, and strategies that enable you to build more agile applications so you can innovate and respond to change faster. With serverless computing, infrastructure managem...]]></description><link>https://blog.avikkundu.com/serverless-url-shortener-on-aws</link><guid isPermaLink="true">https://blog.avikkundu.com/serverless-url-shortener-on-aws</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Sat, 02 Apr 2022 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998603692/kq7ceT0i9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-the-aws-serverless-ecosystem">The AWS Serverless Ecosystem</h2>
<blockquote>
<p>Serverless is a way to describe the services, practices, and strategies that enable you to build more agile applications so you can innovate and respond to change faster. With serverless computing, infrastructure management tasks like capacity provisioning and patching are handled by AWS, so you can focus on only writing code that serves your customers.</p>
</blockquote>
<p><strong><em>Serverless Computing = FaaS [Functions as a Service] + BaaS [Backend as a Service]</em></strong></p>
<h3 id="heading-serverless-services-of-aws">Serverless Services of AWS:</h3>
<ul>
<li><p><strong>Compute</strong>: AWS Lambda, AWS Fargate</p>
</li>
<li><p><strong>Storage</strong>: Amazon DynamoDB, Amazon S3, etc.</p>
</li>
<li><p><strong>Application Integration</strong>: Amazon API Gateway, etc.</p>
</li>
</ul>
<hr />
<h2 id="heading-introduction">Introduction</h2>
<p>In this walkthrough, we are going to develop a URL shortener service using various services of the AWS Serverless Ecosystem. We are going to focus mainly on the backend of the application.</p>
<p>To implement our project in a simplified way, we will use only the 2 most important services: the <strong>API Gateway</strong> and <strong>DynamoDB</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998584346/uaFjX1xjs.jpeg" alt="Architecture Diagram" /></p>
<h3 id="heading-aws-lambda">AWS Lambda</h3>
<p>AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. With Lambda, you can run code for virtually any type of application or backend service, all with zero administration.</p>
<p>In this application, we are not going to use AWS Lambda, since our application relies on simpler logic: storing short URLs in the database and redirecting to the long URL whenever the short URL is requested.</p>
<h3 id="heading-amazon-api-gateway">Amazon API Gateway</h3>
<p>Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services.</p>
<p>In our application, we are going to use the API Gateway to transform data in transit between the API Gateway and DynamoDB.</p>
<h3 id="heading-amazon-dynamodb">Amazon DynamoDB</h3>
<p>Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.</p>
<p>DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.</p>
<hr />
<h2 id="heading-getting-started">Getting Started</h2>
<p>To follow this project, you only need to have an AWS account. The entire application would be developed within the AWS Web Console.</p>
<p>Let's understand the workflow of the application. We will focus on the two main features common to all URL shorteners.</p>
<h3 id="heading-1-storing-the-short-url-long-url-and-owner-inside-dynamodb">1. Storing the Short-URL, Long-URL and owner inside DynamoDB</h3>
<p>In the backend, once a user sends a POST request to the route with all the required parameters, the API Gateway receives the data, transforms it, and pushes it into DynamoDB.</p>
<h3 id="heading-2-redirecting-to-the-long-url-once-the-short-url-is-hit">2. Redirecting to the Long-URL once the short-URL is hit</h3>
<p>Once a user requests the short URL, the API Gateway receives the request, processes it, and searches DynamoDB for the entry. When the corresponding long URL is found, the API Gateway redirects to it.</p>
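<p>The two features above can be sketched with a dictionary standing in for the DynamoDB table, keyed by <code>shortId</code>. This is only a conceptual model of the flow, not the actual AWS integration:</p>

```python
# In-memory stand-in for the DynamoDB table, keyed by shortId.
table = {}

def save_short_url(short_id, long_url, owner):
    # Feature 1: the POST request stores the mapping in the table.
    table[short_id] = {"longURL": long_url, "owner": owner}

def resolve_short_url(short_id):
    # Feature 2: the GET request looks up the long URL for redirection.
    return table[short_id]["longURL"]

save_short_url("Google", "https://www.google.co.in", "Avik")
print(resolve_short_url("Google"))  # https://www.google.co.in
```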
<hr />
<h2 id="heading-setting-up-dynamodb-database">Setting Up DynamoDB Database</h2>
<p>First of all, we need to configure our Database. For that, we need to create a Table.</p>
<p>The table consists of three attributes: <code>longURL</code>, <strong><code>shortId</code></strong>, and <code>owner</code>. We will use the <code>shortId</code> attribute as the <strong>primary key</strong> of the table.</p>
<p>In the configuration, please use the exact names mentioned.</p>
<ul>
<li><p><strong>Table Name:</strong> URL-Shortener</p>
</li>
<li><p><strong>Primary Key:</strong> shortId</p>
</li>
<li><p><strong>Table settings:</strong> Use Default Settings</p>
</li>
</ul>
<p>Once you create the table, you will land on the following page showing all the table details:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998585720/42zOXkxFq.png" alt /></p>
<hr />
<h2 id="heading-setting-up-the-api-gateway">Setting up the API Gateway</h2>
<p>This is the most important service in our architecture. Through this service, we are going to perform multiple operations.</p>
<ol>
<li><p>Create API Endpoints for <strong>GET</strong> and <strong>POST</strong> requests.</p>
</li>
<li><p>Transform request parameters received from the API into a format DynamoDB understands.</p>
</li>
<li><p>Convert the response received from DynamoDB into a format browsers understand for redirection.</p>
</li>
</ol>
<p>We need to create an API of the type REST API from the API Gateway console.</p>
<p>After selecting the <strong>REST-API Gateway type</strong>, we need to select the following configurations.</p>
<ul>
<li><p><strong>Protocol</strong>: REST</p>
</li>
<li><p><strong>Create a new API:</strong> New API</p>
</li>
<li><p><strong>API name:</strong> URLShortener</p>
</li>
</ul>
<p><strong><em>This would create a new API.</em></strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998587217/6AhcK1gIh.png" alt="API Gateway Console" /><em>API Gateway Console</em></p>
<p>Now, we need to create a new resource under the <code>/</code> route, named <code>url-shortener</code>.</p>
<p>Under this resource, we need to create methods for <strong>GET</strong> and <strong>POST</strong> requests.</p>
<hr />
<h2 id="heading-setting-up-post-request">Setting Up POST Request</h2>
<p>Under the /url-shortener resource, we need to create a method named <strong>POST</strong>. In this method, we are going to configure our POST request.</p>
<p>Once the POST Method is selected, we have to use the following information during its setup:</p>
<ul>
<li><p><strong>Integration type:</strong> AWS Service</p>
</li>
<li><p><strong>AWS Region:</strong> ap-south-1 [the region where the DynamoDB table is located]</p>
</li>
<li><p><strong>AWS Service:</strong> DynamoDB</p>
</li>
<li><p><strong>HTTP method:</strong> POST</p>
</li>
<li><p><strong>Action:</strong> UpdateItem</p>
</li>
<li><p><strong>Execution role:</strong> [an IAM role that grants DynamoDB write permissions]</p>
</li>
</ul>
<p>Once the setup is completed, we land on the following page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998588618/c-s39UFF4.png" alt /></p>
<p>Now, we need to transform the request parameters received from the client into something that would be understood by DynamoDB.</p>
<p>For this, we are going to utilize the Integration Request feature of the API Gateway. Through this feature, we are going to add a <strong>Mapping Template</strong> based on which the transformation would take place.</p>
<p>On clicking the <strong>Integration Request</strong> from the above page, we would be landing on the following page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998590413/rH2rMQAfmT.png" alt /></p>
<p>Under the <strong>Mapping Templates section</strong>, we need to add the following code:</p>
<p>{% gist https://gist.github.com/Lucifergene/180738ec994ce28d4b4d8fa7c71bbab7 %}</p>
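<p>The gist above contains the actual mapping template. Conceptually, it converts the plain request JSON into DynamoDB's typed attribute format, where every string value is wrapped as <code>{"S": value}</code>. An illustrative sketch of that conversion (not the template itself):</p>

```python
def to_dynamodb_attributes(fields):
    # DynamoDB's low-level API expects typed attributes: each string
    # value is wrapped as {"S": value}. The mapping template performs
    # this kind of conversion on the incoming request body.
    return {key: {"S": value} for key, value in fields.items()}

print(to_dynamodb_attributes({"longURL": "https://www.google.co.in",
                              "owner": "Avik"}))
# {'longURL': {'S': 'https://www.google.co.in'}, 'owner': {'S': 'Avik'}}
```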
<p>Now we have set up the process through which data is saved into DynamoDB. We also have to convert the response DynamoDB sends into a format the client can understand.</p>
<p>For this, we need to set up another <strong>Mapping Template</strong> in the Integration Response section.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998591912/WZ05u0uY7K.png" alt /></p>
<p>Under the <strong>Mapping Templates section</strong>, we need to add the following code:</p>
<p>{% gist https://gist.github.com/Lucifergene/39bf8dc169f4ca6d1f1d5eaa591f5e9e %}</p>
<p>Once this is set up, the response from DynamoDB will be converted into a form the client can understand.</p>
<p>Thus, we have set up our POST request, which saves the request parameters in DynamoDB and sends the response back to the client. To test it, click the <strong>TEST</strong> option in the console.</p>
<p>In the Request Body, we need to type the following:</p>
<pre><code>{
  "longURL": "https://www.google.co.in",
  "owner": "Avik",
  "shortURL": "Google"
}
</code></pre><p>On submitting the above JSON, we should receive a 200 status code and a response body similar to the following. Thus, we have successfully saved the contents to DynamoDB.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998593494/JGlOqPCGT-.png" alt /></p>
<hr />
<h2 id="heading-setting-up-get-request">Setting Up GET Request</h2>
<p>The GET request is somewhat different from the POST request. Here, the user appends the short URL to the API endpoint. The API Gateway sends this short URL to DynamoDB to perform the search operation. Once the associated long URL is found, the API Gateway automatically redirects to it.</p>
<p>Under the /url-shortener resource, <strong>we create another resource named {shortURL}</strong>, which has a dynamic resource path, since this is where the short URLs are appended.</p>
<p>Inside the newly created sub-resource, we create the GET request with the following settings:</p>
<ul>
<li><p><strong>Integration type:</strong> AWS Service</p>
</li>
<li><p><strong>AWS Region:</strong> ap-south-1 [region where the DynamoDB Instance would be running]</p>
</li>
<li><p><strong>AWS Service:</strong> DynamoDB</p>
</li>
<li><p><strong>HTTP method:</strong> POST</p>
</li>
<li><p><strong>Action:</strong> GetItem</p>
</li>
<li><p><strong>Execution role:</strong> [ IAM role with DynamoDB read permissions ]</p>
</li>
</ul>
<p>Once the setup is completed, we land on the following page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998595000/SDeNNFiFq.png" alt /></p>
<p>Now, we have to perform 3 transformations as data is transferred back and forth between the API Gateway and DynamoDB.</p>
<h3 id="heading-a-integration-request">A. Integration Request</h3>
<p>First, we need to transform the request parameters received from the client into something DynamoDB understands. For this, we use the <strong>Integration Request</strong> feature of the API Gateway, where we add a <strong>Mapping Template</strong> that defines the transformation.</p>
<p>On clicking <strong>Integration Request</strong> on the above page, we land on the following page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998596441/bRMmhdk-xm.png" alt /></p>
<p>Under the <strong>Mapping Templates section</strong>, we need to add the following code:</p>
<p><a target="_blank" href="https://gist.github.com/Lucifergene/ca5e96a9744d4eeecd478ba6d097600b">https://gist.github.com/Lucifergene/ca5e96a9744d4eeecd478ba6d097600b</a></p>
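<p>The author’s actual template is in the gist above. As a rough, hypothetical illustration (the table name and key attribute are assumptions), a DynamoDB <strong>GetItem</strong> request template generally maps the path parameter into the key, like so:</p>
<pre><code>{
  "TableName": "url-shortener",
  "Key": {
    "shortURL": { "S": "$input.params('shortURL')" }
  }
}
</code></pre>
<p>Here <code>$input.params('shortURL')</code> is the API Gateway mapping-template expression that reads the {shortURL} path parameter from the incoming request.</p>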
<h3 id="heading-b-method-response">B. Method Response</h3>
<p>We know that the <strong>302 HTTP status code</strong> is used for URL redirections. Therefore, we need to set the appropriate status code in the response header, since <strong>200</strong> is set by default.</p>
<p>In the <strong>Method Response</strong> section, we need to <strong>delete the 200 status code association</strong> and <strong>add the 302 HTTP status code</strong>. To instruct the API Gateway to redirect to the URL set in the <strong>Location</strong> key of the response header, we need to add that key to the corresponding 302 response headers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998598238/vNQ9zThiv.png" alt /></p>
<h3 id="heading-c-integration-response">C. Integration Response</h3>
<p>After setting up the <strong>Method Response</strong>, we also have to transform the response that DynamoDB sends into a format the client understands.</p>
<p>For this, we need to set up another <strong>Mapping Template</strong> in the Integration Response section.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998600084/kOja6C8qA.png" alt /></p>
<p>Under the <strong>Mapping Templates section</strong>, we need to add the following code:</p>
<p><a target="_blank" href="https://gist.github.com/Lucifergene/b2be264e74bf4cb5c38bf57bd5f710af">https://gist.github.com/Lucifergene/b2be264e74bf4cb5c38bf57bd5f710af</a></p>
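<p>As a hedged sketch of what this step usually involves (the attribute names are assumptions): the 302 response needs its <strong>Location</strong> header mapped to the long URL returned by DynamoDB, which API Gateway expresses with a header mapping such as:</p>
<pre><code>Location: integration.response.body.Item.longURL.S
</code></pre>
<p>With the header mapped, the body template itself can simply be empty, since the client only needs the redirect.</p>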
<p>Thus, we have set up our GET request, which redirects the short URL to the actual long URL after fetching it from DynamoDB.</p>
<p>To test, we need to click on the <strong>TEST</strong> option in the above console. In the <strong>{shortURL}</strong> field, we need to enter the short URL alias and click on the <strong>TEST</strong> button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998602099/H33gHk_xZ.png" alt /></p>
<p>Thus we have received a <strong>302</strong> response code and, upon inspecting the response header, we see a <strong>Location</strong> key that contains the actual long URL.</p>
<hr />
<p><strong>And we reached the end of the solution!!!</strong></p>
<p>You can visit the repository from below:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Lucifergene/Serverless-URL-Shortener">https://github.com/Lucifergene/Serverless-URL-Shortener</a></div>
<hr />
<p>You can reach out on my <a target="_blank" href="https://twitter.com/avik6028">Twitter</a>, <a target="_blank" href="https://instagram.com/avik6028">Instagram</a>, or <a target="_blank" href="https://linkedin.com/in/avik-kundu-0b837715b">LinkedIn</a> if you need more help. I would be more than happy to help.</p>
]]></content:encoded></item><item><title><![CDATA[Cloudflare URL Shortener]]></title><description><![CDATA[You can optionally add your own aliases. If not, an random string would be assigned.

https://gist.github.com/Lucifergene/9d3d805c7a6db23be58d4a4d11831b87
You need to paste this script in the Index.JS file of your Cloudflare Worker
The Service: https...]]></description><link>https://blog.avikkundu.com/cloudflare-url-shortener</link><guid isPermaLink="true">https://blog.avikkundu.com/cloudflare-url-shortener</guid><category><![CDATA[cloudflare]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Thu, 31 Mar 2022 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998576702/5xvWzb9Hr.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998575398/Whp2z0bwi7.png" alt="Alt Text" />
<em>You can optionally add your own aliases. If not, a random string will be assigned.</em>
</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="9d3d805c7a6db23be58d4a4d11831b87"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/Lucifergene/9d3d805c7a6db23be58d4a4d11831b87" class="embed-card">https://gist.github.com/Lucifergene/9d3d805c7a6db23be58d4a4d11831b87</a></div><hr />
<p><strong>You need to paste this script in the <code>index.js</code> file of your Cloudflare Worker</strong></p>
<p>The Service: <a target="_blank" href="https://short-linker.lucifergene.workers.dev/">https://short-linker.lucifergene.workers.dev/</a></p>
<p>You can visit the repository below:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Lucifergene/Cloudflare-URL-Shortener">https://github.com/Lucifergene/Cloudflare-URL-Shortener</a></div>
<hr />
<p>You can reach out on my <a target="_blank" href="https://twitter.com/avik6028">Twitter</a>, <a target="_blank" href="https://instagram.com/avik6028">Instagram</a>, or <a target="_blank" href="https://linkedin.com/in/avik-kundu-0b837715b">LinkedIn</a> if you need more help. I would be more than happy to help.</p>
]]></content:encoded></item><item><title><![CDATA[Deploying WordPress with MySQL on Top of Amazon EKS]]></title><description><![CDATA[Whenever it comes to creating a website for any freelancing or business, WordPress was and is the first choice for millions of developers. Although the backend service stills use PHP even when advanced tools like Node.js, Django have revolutionized t...]]></description><link>https://blog.avikkundu.com/deploying-wordpress-with-mysql-on-top-of-amazon-eks</link><guid isPermaLink="true">https://blog.avikkundu.com/deploying-wordpress-with-mysql-on-top-of-amazon-eks</guid><category><![CDATA[WordPress]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[aws-eks]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Mon, 31 Jan 2022 18:39:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001059044/FqpE99Xvg.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Whenever it comes to creating a website for any freelancing or business, <strong>WordPress</strong> was and is the first choice for millions of developers. Although the backend service stills use <strong>PHP</strong> even when advanced tools like Node.js, Django have revolutionized the ways for creating interactive and dynamic Web Applications.</p>
<p>The main reason, as far as I understand, is the simplicity and fast development process. You do not have to be a full-stack developer to create a website for a local grocery shop or a small business. WordPress is built in such a way that with a few button clicks, you are ready with a basic website. All these benefits show how it remains the market leader among <strong>content management systems</strong>, even after nearly two decades since its launch.</p>
<h3 id="heading-a-move-towards-containerized-deployment">A Move Towards Containerized Deployment</h3>
<p><strong>Containerization</strong>, or the ability to build containers around any application, has changed the way deployment takes place in the age of <strong>Cloud-Ops</strong>. Building containers has solved several problems faced in the traditional approach of deploying applications, most notably <strong>security</strong> and <strong>portability</strong>. <a target="_blank" href="https://www.docker.com/">Docker</a>, <a target="_blank" href="https://podman.io/">Podman</a> and <a target="_blank" href="https://cri-o.io/">CRI-O</a> are some of the tools which manage the procedure of creating containers for applications. Once containers are created, they can run on any system, irrespective of its configuration, apart from the hardware requirements. <strong>Running Dockerized applications requires just a single line of code to start the containers!</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001031251/0LUd33poM.png" alt /></p>
<p>So far, it seems nice! But now imagine a situation where you are building a large application, say a web application with a dedicated frontend and backend. You wish to follow a microservice-based architecture, dockerizing the frontend and backend separately in different containers and then connecting them with <strong>REST APIs</strong>. Now you have to handle 2 containers simultaneously: you have to make sure both containers are running all the time, and if any container crashes, you have to manually restart the services. The process gets more complicated as you create more microservices to add features to your application. Eventually, you will find it very difficult to handle and manage all the containers together.</p>
<h3 id="heading-the-need-for-container-orchestration-services">The need for Container Orchestration Services</h3>
<p>Wouldn’t it be better if there were a service continuously running in the background and managing all the containers together? Whenever a container crashes, it would automatically re-launch it, and when traffic to the website increases, it would instantly scale up the infrastructure by deploying more containers to balance the load.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001033021/SuY7XoAeb.png" alt /></p>
<p>In Kubernetes, the atomic unit of scheduling is the Pod. Pods are slightly different from containers: we can run multiple containers in a single pod, but not vice-versa. This comes in very handy when we have containers that are fully dependent on each other, e.g. WordPress and MySQL, since all data from WordPress is stored in MySQL.</p>
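<p>As an illustrative sketch (the image tags and the plain-text password are placeholders, not the manifests used later in this article), a pod running two tightly coupled containers could look like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: wordpress-pod        # example name
spec:
  containers:
    - name: wordpress
      image: wordpress:5.8   # example tag
      ports:
        - containerPort: 80
    - name: mysql
      image: mysql:5.7       # example tag
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: example     # placeholder; use a Secret in practice
</code></pre>
<p>Both containers share the pod’s network namespace, so WordPress can reach MySQL on localhost.</p>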
<h3 id="heading-understanding-the-architecture-of-kubernetes">Understanding the Architecture of Kubernetes</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001034696/39VHczgbA.png" alt /></p>
<p>Whenever we work with K8s, we at first have to create a cluster. A cluster contains a Master node and several Worker nodes or slaves.</p>
<p>The job of the Master Node is to schedule the containers running inside the pods on the Worker Nodes and to monitor their logs. Whenever a client requests a Pod to be launched, the client connects to the API server on port 6443 of the master. The Master then passes the request to the <strong>Kubelet</strong> program on the Worker Nodes. Based on the request, the Kubelet program communicates with the internal Docker engine to perform the required job.</p>
<p><strong>Some of the other services running across the cluster:</strong></p>
<h4 id="heading-on-master-nodes">On Master Nodes:</h4>
<ol>
<li><strong>etcd</strong>: It stores configuration information which can be used by each of the nodes in the cluster. Here the Master stores permanent data like secrets, i.e. key-value information, config files, etc.</li>
<li><strong>API Server</strong>: The API server exposes all operations on the cluster through a REST API.</li>
<li><strong>Controller Manager</strong>: This component runs the controllers that regulate the state of the cluster and drive it toward the desired state.</li>
<li><strong>Scheduler</strong>: Responsible for allocating pods to nodes based on workload utilization.</li>
</ol>
<h4 id="heading-on-worker-nodes">On Worker Nodes:</h4>
<ol>
<li><strong>Docker:</strong> The first requirement of each node is Docker which helps in running the encapsulated application containers in a relatively isolated but lightweight operating environment.</li>
<li><strong>Kubelet Service:</strong> This is a small service on each node responsible for relaying information to and from the control plane. It interacts with the <strong>etcd</strong> store to read configuration details and write values.</li>
<li><strong>Kubernetes Proxy Service:</strong> This is a proxy service that runs on each node and helps in making services available to the external host.</li>
</ol>
<p><a target="_blank" href="https://www.tutorialspoint.com/kubernetes/kubernetes_architecture.htm">Kubernetes - Architecture</a></p>
<p>When running a Kubernetes cluster, one of the foremost challenges is deciding which cloud or datacenter it’s going to be deployed to. After that, you still need to filter your options when selecting the right network, user, storage, and logging integrations for your use cases.</p>
<h3 id="heading-benefits-of-amazon-eks-why-use-eks">Benefits of Amazon EKS: Why use EKS?</h3>
<p>Through EKS, the normally cumbersome steps are done for you, like creating the Kubernetes master cluster, as well as configuring service discovery, Kubernetes primitives, and networking. Existing tools will more than likely work through EKS with minimal modifications, if any.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001036333/pVn5exBWL.png" alt /></p>
<p>With Amazon EKS, the Kubernetes <strong>Control plane</strong> — including the backend persistence layer and the API servers — is provisioned and scaled across various <strong>AWS availability zones</strong>, resulting in high availability and eliminating a single point of failure. Unhealthy control plane nodes are detected and replaced, and patching is provided for the control plane. <strong>The result is a resilient AWS-managed Kubernetes cluster that can withstand even the loss of an availability zone.</strong></p>
<p>And of course, as part of the AWS landscape, EKS is integrated with various AWS services, making it easy for organizations to scale and secure applications seamlessly. From <strong>AWS Identity Access Management (IAM)</strong> for authentication to <strong>Elastic Load Balancing</strong> for load distribution, the straightforwardness and convenience factor of using EKS can’t be understated.</p>
<p><a target="_blank" href="https://www.sumologic.com/blog/eks/">What is Amazon Elastic Kubernetes Service (EKS)? | Sumo Logic</a></p>
<h3 id="heading-getting-started">Getting Started</h3>
<h4 id="heading-some-pre-requisites">Some Pre-Requisites:</h4>
<p>You need to have an AWS account. It cannot be the Starter Program, since EKS is not supported there. Secondly, you must have basic knowledge of AWS and Kubernetes. Third, you must have the AWS CLI set up on your system with a dedicated profile allowing admin access, so that it can use EKS directly.</p>
<p>Although the AWS CLI provides commands to manage EKS, they are not efficient enough to perform complex tasks. Therefore, we are going to use another CLI built especially for EKS. You can download it from the GitHub link given below.</p>
<p><a target="_blank" href="https://github.com/weaveworks/eksctl">GitHub - Weaveworks/excel: The official CLI for Amazon EKS</a></p>
<p>Apart from that, we need to have <strong>kubectl</strong> installed on our system too, for communicating with the Pods running on EKS. EKS is a managed service, so everything is managed for us except the kubectl command, which is a client program that helps us connect to the pods.</p>
<p><a target="_blank" href="https://kubernetes.io/docs/tasks/tools/install-kubectl/">Install Tools</a></p>
<h4 id="heading-starting-the-eks-cluster">Starting the EKS Cluster</h4>
<p>To start the EKS cluster, we need to set up a YAML file containing the infrastructure of the cluster. Information like the number of Worker Nodes, allowed EC2 instances, AWS key for connecting the instances with our local terminal and many more, are mentioned in this file.</p>
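<p>As a minimal sketch (the cluster name, region, instance type and key name are assumptions), such a cluster file could look like:</p>
<pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster1            # assumed cluster name
  region: ap-south-1        # assumed region
nodeGroups:
  - name: ng-1
    instanceType: t2.micro  # assumed instance type
    desiredCapacity: 2      # number of worker nodes
    ssh:
      publicKeyName: my-aws-key   # assumed EC2 key pair for SSH access
</code></pre>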
<p>After we write the desired infrastructure in our YAML file, we will have to execute the file with the EKSCTL CLI we have installed.</p>
<p><code>eksctl create cluster -f cluster.yaml</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001038677/i6w4P7MTO.png" alt /></p>
<p>This command will create the entire cluster in one shot. The creation of the cluster takes a certain amount of time.</p>
<h4 id="heading-setting-up-the-kubectl-cli">Setting up the kubectl CLI</h4>
<p>After the cluster is launched, we need to connect our system with the pods so that we can work on the cluster. Kubernetes has been installed in the instances already by EKS. Therefore to connect our kubectl with the Kubernetes on the instances, we need to update the KubeConfiguration file first. For this, we use the following command:</p>
<p><code>aws eks update-kubeconfig --name cluster1</code></p>
<p>We can check the connectivity with the command: <code>kubectl cluster-info</code></p>
<p>For finding the number of nodes: <code>kubectl get nodes</code></p>
<p>For finding the number of pods: <code>kubectl get pods</code></p>
<p>To get detailed information of the instances on which the pods are running: <code>kubectl get pods -o wide</code></p>
<p>Before we work, we need to create a namespace for our application in K8s.</p>
<p>For that we use the following command: <code>kubectl create namespace wp-msql</code></p>
<p>Now we have to set it to be the default Namespace:</p>
<p><code>kubectl config set-context --current --namespace=wp-msql</code></p>
<p>For checking how many pods are running inside the namespace ‘<strong>kube-system’</strong> we have to execute: <code>kubectl get pods -n kube-system</code></p>
<h4 id="heading-installing-wordpress-and-mysql">Installing WordPress and MySQL</h4>
<p>Now, we are ready to install WordPress and MySQL in our cluster. For that, we need to place 3 files in a folder.</p>
<p>The first file, the MySQL deployment manifest, contains the settings to be applied to our MySQL pod.</p>
<p>Similarly, the second file, the WordPress deployment manifest, contains the settings to be applied to our WordPress pod.</p>
<p>At last, we create a <strong>Kustomization file</strong> to specify the order of execution of the files along with the secret keys.</p>
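<p>Following the pattern of the standard Kubernetes WordPress tutorial (the file names and secret literal below are assumptions, not the author’s exact files), the Kustomization file could look like:</p>
<pre><code>secretGenerator:
  - name: mysql-pass
    literals:
      - password=YOUR_PASSWORD   # placeholder; choose a strong password
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml
</code></pre>
<p>The <code>secretGenerator</code> creates the MySQL password Secret, and the <code>resources</code> list fixes the order in which the manifests are applied.</p>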
<p>After putting the above scripts in a folder, we can build the infrastructure using the following command: <code>kubectl create -k .</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001040527/cFuMQ-2O7.png" alt /></p>
<p><strong>Our WordPress server along with MySQL is now launched in the EKS!</strong></p>
<p>To customize the site, we need a URL to visit. For that, we will be using the <strong>Public DNS</strong> provided by the <strong>External Load Balancer (ELB)</strong>.</p>
<p><strong>On visiting the URL of the LB, we will reach this page.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001042494/w_AjecVL0.png" alt /></p>
<p>After configuring the site and publishing our first post, we reach the following page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001045448/BzWg4Jqck.png" alt /></p>
<h4 id="heading-shutting-down-eks-cluster">Shutting down EKS cluster</h4>
<p>Since EKS is not a free service, it is better to shut the cluster down whenever it is not required. For that we use a simple command: <code>eksctl delete cluster -f cluster.yaml</code>.<br />This will delete the entire cluster along with the EC2 instances, Load Balancers, etc. which were automatically created by EKS.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001047443/iQl2DwWf0.png" alt /></p>
<p>So, this was a basic walkthrough about the process of deploying applications on EKS easily.</p>
<h3 id="heading-monitoring-our-deployment">Monitoring our Deployment</h3>
<p>We can use <strong>Helm</strong> to install monitoring tools like Prometheus and Grafana.</p>
<p>Helm is a tool for managing <strong>Charts</strong>. Charts are packages of pre-configured Kubernetes resources, and Helm streamlines installing and managing Kubernetes applications. Helm itself can be installed by following the official documentation linked below. In Helm v2, a server-side component called <strong>Tiller</strong> is also required.</p>
<p>Through Helm, it becomes easier to manage applications in <strong>Kubernetes</strong> on EKS. It has a huge library of pre-configured charts for various software packages, which can be installed with a single command.</p>
<p><a target="_blank" href="https://helm.sh/">Helm</a></p>
<p><strong>For monitoring our Application, we need to install Prometheus and Grafana.</strong></p>
<h4 id="heading-prometheus">Prometheus</h4>
<p>To install Prometheus through Helm, we use the following commands:</p>
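<p>As a hedged sketch (the chart repository, release name and namespace are assumptions; the author’s original commands may differ), the installation plus a local port-forward to 8888 could be:</p>
<pre><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus -n prometheus --create-namespace
kubectl port-forward -n prometheus svc/prometheus-server 8888:80
</code></pre>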
<p>Since we have forwarded the port, visiting port 8888 will show the Prometheus screen.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001049151/8jEiEtotq.png" alt /></p>
<h4 id="heading-grafana">Grafana</h4>
<p>To install <strong>Grafana</strong> through Helm, we use the following commands:</p>
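<p>As a hedged sketch (the chart repository and release name are assumptions; the author’s original commands may differ), Grafana can be installed into its own namespace with a LoadBalancer service:</p>
<pre><code>helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana -n grafana --create-namespace --set service.type=LoadBalancer
</code></pre>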
<p>To get the IP address where Grafana is running, we type the following: <code>kubectl get svc -n grafana</code>. By visiting the URL of the LoadBalancer, we see the Grafana screen.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001051215/eVbbAGWUk.png" alt /></p>
<h3 id="heading-other-modifications">Other Modifications</h3>
<h4 id="heading-amazon-efs">Amazon EFS</h4>
<p>There are mainly 3 different types of storage available: file, block and object storage. By default, EKS sets up the storage system for the clusters as block storage, using EBS for that.</p>
<p>Instead of using EBS as the storage service for our data, we can use <strong>Amazon EFS, or Elastic File System</strong>. EFS is highly preferred because it allows <strong>connecting to multiple instances at once</strong>, which EBS does not. For implementing EFS, we need a provisioner, since it is not provided by default.</p>
<p><a target="_blank" href="https://cloud.netapp.com/blog/aws-efs-is-it-the-right-storage-solution-for-you">AWS EFS: Is It the Right Storage Solution for You?</a></p>
<h4 id="heading-amazon-fargate">Amazon Fargate</h4>
<p>This is what AWS Fargate is about: it completely abstracts the underlying infrastructure, letting us see each of our containers as a single machine.</p>
<p>We just have to specify what resource we need for every container and it will do the heavy lifting for us. We don’t have to manage multi-layered access rules anymore. We can fine-tune the permissions between our containers like we would do between single EC2 instances.</p>
<p>We can launch a Fargate cluster similar to the EKS cluster. The configuration has to be set as the following:</p>
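<p>As a minimal sketch (the cluster name, region and selected namespaces are assumptions), an eksctl Fargate configuration could look like:</p>
<pre><code>apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: fargate-cluster     # assumed name
  region: ap-south-1        # assumed region
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default
      - namespace: kube-system
</code></pre>
<p>Pods in the selected namespaces are scheduled onto Fargate automatically, with no node groups to manage.</p>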
<p>We can launch the above cluster by executing the command: <code>eksctl create cluster -f fargate_cluster.yaml</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001054579/29KViJah1.png" alt /></p>
<p><strong>Since Fargate abstracts away even the worker nodes, its charges are higher than those of a standard EKS node group.</strong> We can shut down the entire Fargate cluster with the command: <code>eksctl delete cluster -f fargate_cluster.yaml</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649001056302/Ij0Apgq47.png" alt /></p>
<p><a target="_blank" href="https://www.freecodecamp.org/news/amazon-fargate-goodbye-infrastructure-3b66c7e3e413/">An intro to Amazon Fargate: what it is, why it's awesome (and not), and when to use it.</a></p>
<p><strong>All codes for the walkthrough are published in the Github account.</strong></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Lucifergene/Amazon-EKS-Training">https://github.com/Lucifergene/Amazon-EKS-Training</a></div>
<p>You can reach out on my <a target="_blank" href="https://twitter.com/avik6028">Twitter</a>, <a target="_blank" href="https://instagram.com/avik6028">Instagram</a>, or <a target="_blank" href="https://linkedin.com/in/avik-kundu-0b837715b">LinkedIn</a> if you need more help. I would be more than happy to help.</p>
<p>If you have come up to this, <strong>do drop an 👏 if you liked this article.</strong></p>
<p><strong>Good Luck</strong> 😎 and <strong>happy coding</strong> 👨‍💻</p>
]]></content:encoded></item><item><title><![CDATA[Keeping the Lights on !! — Automation with Ansible Tower]]></title><description><![CDATA[Overview
There is unprecedented demand for resource provisioning due to COVID-19, these days. Various open-source technologies are being used to respond to the challenges faced by the business. Automation is the key requirement among various firms an...]]></description><link>https://blog.avikkundu.com/automation-with-ansible-tower</link><guid isPermaLink="true">https://blog.avikkundu.com/automation-with-ansible-tower</guid><category><![CDATA[ansible]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Sat, 01 Jan 2022 11:25:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1649000728906/BS3bzQvtD.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-overview">Overview</h3>
<p>These days, there is unprecedented demand for resource provisioning due to COVID-19. Various <strong>open-source technologies</strong> are being used to respond to the challenges faced by businesses. <strong>Automation</strong> is the key requirement among various firms and companies to keep their business running.</p>
<blockquote>
<p>Can you roll out fixes at scale? Can you automate repeatable IT tasks without compromising compliance?</p>
<p>Remote workers are demanding self-service, can you give it to them? Without breaking the bank?</p>
</blockquote>
<h4 id="heading-ansible-automation-can-be-the-answer-to-all-the-questions"><strong>Ansible automation can be the answer to all the questions.</strong></h4>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649000718961/uRzyP1qH3.png" alt /></p>
<p>Ansible is the most popular automation tool on GitHub today, with more than a quarter-million downloads per month. There are more than 3,500 contributors submitting new modules all the time.</p>
<p>Ansible can help keep the lights on by automating the remediation of problems before they affect the systems, and by helping keep IT systems secure.</p>
<p>The basic use-cases of Ansible are <strong>Provisioning</strong>, <strong>Configuration Management(CM)</strong>, <strong>Security Remediation</strong>, <strong>Application deployment</strong>, etc.</p>
<h3 id="heading-getting-started">Getting Started</h3>
<h4 id="heading-why-ansible"><strong>Why Ansible?</strong></h4>
<ul>
<li><strong>Agentless:</strong> Unlike Puppet, Chef, Salt, etc. Ansible operates only over SSH</li>
<li><strong>Built with Python:</strong> It's a universal language these days.</li>
<li><strong>Self-documenting:</strong> Simple YAML files describing the playbooks and roles.</li>
<li><strong>Feature-rich:</strong> Some call these “<strong>batteries included</strong>”, but there are over 150 modules provided out of the box, and new ones are pretty easy to write.</li>
</ul>
<h4 id="heading-important-terms"><strong>Important Terms</strong></h4>
<ol>
<li><strong>PlayBook</strong>: Playbooks are the bread and butter of Ansible. They represent collections of ‘<strong>plays</strong>’, configuration policies which get applied to defined groups of hosts. Basically, it contains all the instructions we provide to the Ansible to perform the desired tasks. It is written in Declarative languages like <strong>YAML(preferred)</strong> or <strong>JSON</strong>. By constructing proper Playbooks, there’s almost no limit to what you can do with Ansible.</li>
<li><strong>Collections:</strong> Playbooks, however, can get very complex. <a target="_blank" href="https://www.ansible.com/blog/getting-started-with-ansible-collections">Ansible Collections</a> are pre-packaged content that can be used as-is or modified to meet your needs. The content found in Ansible Collections includes content for specific purposes, tools, and even demos to help you learn the ins and outs of Ansible. They are certified by <strong>Red Hat</strong>.</li>
</ol>
<blockquote>
<p><em>This is an intermediate level workshop where it is understood that the basics of the</em> <strong><em>Ansible</em></strong> <em>technology is known. Here we are going to focus more on</em> <strong><em>Ansible Tower</em></strong><em>, which is an advanced tool.</em></p>
</blockquote>
<h4 id="heading-the-main-question-that-brings-us-to-the-topic-is"><strong>The main question that brings us to the topic is:</strong></h4>
<blockquote>
<p>What will happen, if the main server on which we are running Ansible, goes down, i.e. my control node goes down?</p>
</blockquote>
<p>That's where <a target="_blank" href="https://www.ansible.com/products/tower"><strong>Ansible Tower</strong></a> comes into the picture. <strong>It gives Clustering features. We can have multiple towers deployed and share a common database so that if one server goes down, others can continue the management.</strong></p>
<h3 id="heading-ansible-tower">Ansible Tower</h3>
<p>One of the major gripes from Ansible users was that it didn’t have a proper GUI. This was an especially critical issue because a good UI is important for occasional and new users to get comfortable and familiar with an application before diving into the complexities of the CLI and playbook creation. Ansible itself was (and still is) rather new, so most of its users were, by definition, new users.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649000721585/SOPj9Va0U.png" alt /></p>
<p>Ansible Tower, previously called the <strong>AWX</strong> project, is the fix to this problem. It is a comprehensive web-based UI for Ansible, containing the most important Ansible features, especially those that render better as graphical rather than text-based output, such as real-time node monitoring.</p>
<p><a target="_blank" href="https://www.upguard.com/blog/ansible-vs-ansible-tower">Ansible vs Ansible Tower: What are The Key Differences | UpGuard</a></p>
<p><strong>Some of the important features of Ansible Tower are listed below. The</strong> <a target="_blank" href="https://www.ansible.com/products/tower#towerfeatures"><strong>full feature list</strong></a> <strong>is available off the Ansible website.</strong></p>
<ol>
<li><strong>Role-based access control:</strong> set up teams and users in various roles; these can integrate with your existing LDAP or AD environment.</li>
<li><strong>Job scheduling:</strong> schedule your jobs and set repetition options.</li>
<li><strong>Portal mode:</strong> a simplified view of automation jobs for newcomers and less experienced Ansible users. This is an excellent feature, as it truly lowers the barrier to entry for starting to use Ansible.</li>
<li><strong>Fully documented REST API:</strong> allows you to integrate Ansible into your existing toolset and environment.</li>
<li><strong>Tower Dashboard:</strong> use this to quickly view a summary of your entire environment; it simplifies things for sysadmins.</li>
<li><strong>Cloud integration:</strong> Tower is compatible with the major cloud environments: Amazon EC2, Rackspace, and Azure.</li>
</ol>
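<p>As a quick taste of the REST API mentioned in the list above, a job template can be launched with a single authenticated POST. Below is a minimal standard-library sketch; the host, credentials, and template id are hypothetical, while the endpoint path follows Tower's v2 API, which exposes templates under <code>/api/v2/job_templates/</code>:</p>

```python
import base64
import urllib.request

TOWER = "https://tower.example.com"  # hypothetical Tower host

def launch_request(template_id: int, user: str, password: str) -> urllib.request.Request:
    """Build a POST request that launches a Tower job template via the v2 REST API."""
    req = urllib.request.Request(
        f"{TOWER}/api/v2/job_templates/{template_id}/launch/", method="POST")
    # Basic auth header; Tower also supports OAuth2 tokens.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = launch_request(9, "admin", "secret")  # illustrative values
print(req.full_url)                         # the endpoint being hit
# urllib.request.urlopen(req) would actually launch the job
```

<p>The same endpoints back everything the Tower GUI does, which is what makes the API integration point so useful.</p>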
<h3 id="heading-use-cases">Use-Cases</h3>
<h4 id="heading-1-provisioning">1. Provisioning</h4>
<p>We can create an AWS instance using Ansible Tower using a pre-created Job Template.</p>
<p>First, we need to create the Playbook containing the infrastructure as code. We are going to create a VPC and an Internet Gateway. Inside the VPC, we create a Subnet and finally provision an RHEL 8 EC2 instance in it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649000723236/cFbj9plWys.png" alt /></p>
<p>This use-case is an example of <strong>Continuous Deployment (CD)</strong>, a core part of <strong>DevOps</strong>. In big firms these days, code changes are deployed almost as soon as they are made. Deploying to a testing environment first makes it easier to catch potential errors and security flaws.</p>
<p>From the Ansible Tower GUI, we can build a job template to perform the desired task. In the template, we add the playbook, credentials, and so on; Tower has a database to store the credentials securely. After that, we click the Launch button, and the entire infrastructure is built on AWS automatically.</p>
<h4 id="heading-2-chatops">2. ChatOps</h4>
<p>We can integrate <strong>Slack</strong> with <strong>Ansible Tower</strong> so that any change related to production immediately notifies all developers connected to the Slack channel.</p>
<p>In Ansible Tower, we can create Workflows, which help run multiple playbooks together. We can visualize a workflow in the Visualizer and update it there.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649000724942/XiX6USlIE.png" alt /></p>
<p>Most companies set up a continuous delivery pipeline for testing the application in a development environment, but deploying to the production environment still needs manual intervention. We can use Ansible Tower's workflow approvals to gate the promotion of the application to production.</p>
<h4 id="heading-3-extending-ansible-automations-to-3rd-party-tools">3. Extending Ansible Automations to 3rd-Party Tools</h4>
<p>Since Ansible runs on Python, we can leverage various Python features in Ansible. For example, we can create a Python virtual environment, install specific packages into it, and later point Tower at that particular virtual environment.</p>
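<p>As a minimal sketch of that step, the environment can even be created from Python's standard library. The directory name and package list are illustrative, and the <code>pip install</code> line is left commented out since it needs network access:</p>

```python
import sys
import venv
from pathlib import Path

# Create an isolated environment that Tower can later be pointed at.
env_dir = Path("tower-custom-venv")  # illustrative path
venv.EnvBuilder(with_pip=True).create(env_dir)

# pip lives under bin/ (Scripts/ on Windows) inside the new environment.
pip = env_dir / ("Scripts" if sys.platform == "win32" else "bin") / "pip"
# subprocess.run([str(pip), "install", "ansible", "oci"], check=True)
print(pip.exists())
```
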
<p><a target="_blank" href="https://www.oracle.com/cloud/"><strong>Oracle Cloud</strong></a> integration is not provided by Ansible Tower out of the box, but we can connect Ansible Tower to Oracle Cloud through the <code>oci</code> Python package provided by Oracle. The package can be installed in the virtual environment, and Tower can be instructed to use that environment. Also, since Tower does not support storing these cloud credentials by default, we can create custom <strong>Credential Types</strong> to store them.</p>
<p>Thus we can set up any cloud provider, import our existing tools and infrastructure into Ansible Tower, and perform the automation and orchestration securely.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649000726339/jsBwdz3WW.png" alt /></p>
<p>You can reach out on my <a target="_blank" href="https://twitter.com/avik6028">Twitter</a>, <a target="_blank" href="https://instagram.com/avik6028">Instagram</a>, or on <a target="_blank" href="https://linkedin.com/in/avik-kundu-0b837715b">LinkedIn</a> if you need more help. I would be more than happy to help.</p>
<p>If you have read this far, <strong>do drop a 👏 if you liked this article.</strong></p>
<p><strong>Good Luck</strong> 😎 and <strong>happy coding</strong> 👨‍💻</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1649000727907/JBhNoTW4w.gif" alt /></p>
]]></content:encoded></item><item><title><![CDATA[My Experience at Civo Hackathon 2021]]></title><description><![CDATA[https://www.youtube.com/watch?v=lhdiBAoL80s
Learning Kubernetes has been one of my top priorities this year. I spent quite a lot of time finding good resources to learn and have hands-on experience with the technology.
Finally, I came across Civo Kub...]]></description><link>https://blog.avikkundu.com/my-experience-at-civo-hackathon-2021</link><guid isPermaLink="true">https://blog.avikkundu.com/my-experience-at-civo-hackathon-2021</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Sun, 05 Dec 2021 00:58:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998567871/K7RgeeAol.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=lhdiBAoL80s">https://www.youtube.com/watch?v=lhdiBAoL80s</a></div>
<p>Learning Kubernetes has been one of my top priorities this year. I spent quite a lot of time finding good resources to learn and have hands-on experience with the technology.</p>
<p><strong>Finally, I came across Civo Kubernetes when one of my seniors recommended the platform to me. Yeah, it was the one I was looking for!</strong></p>
<p>The Civo Kubernetes Platform provided me with fully managed K3s clusters as well as high-quality learning videos about Kubernetes from the platform developers themselves. I instantly got a $250 credit in my Civo account once I signed up with my credit card.
<br /></p>
<h1 id="heading-introduction">Introduction</h1>
<p>I am currently in the <strong>final year</strong> of my Bachelor's degree in Computer Engineering, from <strong>KIIT University, Bhubaneswar, India</strong>. </p>
<p>In fact, this is my <strong>second podium finish in a nationwide hackathon</strong> this year. Earlier this year, I finished as the first runner-up at the <strong>TCS InfraMinds Hackathon</strong>. Apart from that, I am currently a <strong>DevOps intern</strong> at <strong>HighRadius Technologies</strong> and an enthusiastic <strong>open-source contributor</strong>. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998554611/v0Hu33cWv.png" alt="Civo hackathon" /></p>
<p>When I got to know about the <strong>Civo Hackathon</strong>, I planned to take part in it, as I needed hands-on experience with <strong>Kubernetes</strong>. The speaker line-up before the commencement of the hackathon was also interesting: I got to learn about the platform, as well as about monitoring and profiling, from the developer advocates of Civo. </p>
<p>The hackathon spanned the second weekend of November 2021, from the speaker sessions on Friday through Sunday evening. The results were announced the very next Monday.</p>
<h3 id="heading-much-to-my-surprise-i-finished-up-2nd"><strong>Much to my surprise, I finished 2nd!!!</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998557206/MMPy5gJaY.gif" alt="I was dumbstruck and excited!" /></p>
<h1 id="heading-my-project">My Project</h1>
<p>The project I built is a <strong>Computer-Aided Diagnostic System</strong> that is used to predict whether a person has been infected with COVID-19. </p>
<p>The prediction is made possible by integrating the COVID-19 X-ray classifier into a web application. By uploading frontal chest X-rays, users let the model classify them as COVID or non-COVID using modified DenseNet architectures.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998559678/RsMNguHes.jpeg" alt="My Hackathon Project" /></p>
<p>Users are given the option to save their results to the database to receive further advice from doctors; the data is stored securely in MongoDB. Apart from that, a REST API is provided so that developers can access the deep learning model and get prediction data in their own applications. I have also enabled monitoring for the application. </p>
<p><strong>The entire project was hosted on the Civo Kubernetes Platform.</strong>
<br /></p>
<h1 id="heading-how-i-built-it">How I built it</h1>
<p>The project kicked off with the development of the web application. The UI was finalized, and then the application was built. Several open-source styles, libraries and toolkits were used while developing the frontend with HTML, CSS and JavaScript.</p>
<p>After completion, the backend of the application was developed with the <strong>Python &amp; Flask framework</strong>. The routes were created and mapped to the frontend, and the deep learning model was integrated with the backend REST APIs. Various libraries such as NumPy, Pillow and TensorFlow were used to manage the model. Finally, MongoDB was integrated with the backend to save the form data.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998561260/Q5jHJpUf9.jpeg" alt="Web Application" /></p>
<p><strong>This completed the Web Application Development.</strong></p>
<p>The next stage involved deploying the application on a Civo K3s cluster by developing an automated DevOps CI/CD pipeline. First, the entire application code was pushed to a GitHub repository. This ensures version control of the code, and any change to it automatically triggers the entire pipeline.</p>
<p>To deploy the application on Kubernetes, it needed to be containerized. The Docker image should be rebuilt automatically whenever the code changes, then pushed to a Docker registry, here Docker Hub. The old Docker image tag referenced in the code also needs to be replaced by the new one. To automate all of this, a Continuous Integration pipeline was created with GitHub Actions as the CI tool.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998562847/qFKTWveIH.jpeg" alt="Github Actions" /></p>
<p>A workflow file was written to sequence the jobs that needed to be performed once the code changed in the repository. The jobs involved building the Docker container and pushing it to Docker Hub.
After the push, the new container tag automatically replaced the older one referenced in the kustomization file, with the help of Kustomize. The Deployment, Service and Ingress YAML files were pushed to the repository, as Kubernetes needs these files during deployment.</p>
<p><code>Github Actions Workflow</code> file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">CIVO</span> <span class="hljs-string">HACKATHON</span> <span class="hljs-string">WORKFLOW</span>

<span class="hljs-attr">on:</span> 
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span> [ <span class="hljs-string">master</span> ]

<span class="hljs-attr">jobs:</span>

  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>    
    <span class="hljs-attr">steps:</span>      
      <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">push</span> <span class="hljs-string">Docker</span> <span class="hljs-string">image</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/build-push-action@v1.1.0</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">username:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKER_USER</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">password:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKER_PASSWORD</span> <span class="hljs-string">}}</span>
          <span class="hljs-attr">repository:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKER_USER</span> <span class="hljs-string">}}/civo-hackathon</span>
          <span class="hljs-attr">tags:</span> <span class="hljs-string">${{</span> <span class="hljs-string">github.sha</span> <span class="hljs-string">}},</span> <span class="hljs-string">latest</span>


  <span class="hljs-attr">deploy:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">Deploy</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">needs:</span> <span class="hljs-string">build</span>

    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Check</span> <span class="hljs-string">out</span> <span class="hljs-string">code</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Setup</span> <span class="hljs-string">Kustomize</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">imranismail/setup-kustomize@v1</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">kustomize-version:</span> <span class="hljs-string">"3.6.1"</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Update</span> <span class="hljs-string">Kubernetes</span> <span class="hljs-string">resources</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-attr">DOCKER_USERNAME:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKER_USER</span> <span class="hljs-string">}}</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
       cd kustomize/base
       kustomize edit set image civo-hackathon=$DOCKER_USERNAME/civo-hackathon:$GITHUB_SHA
       cat kustomization.yaml
</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Commit</span> <span class="hljs-string">files</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        git config --local user.email "action@github.com"
        git config --local user.name "GitHub Action"
        git commit -am "Bump docker tag"
</span>    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Push</span> <span class="hljs-string">changes</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">ad-m/github-push-action@master</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">github_token:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.GITHUB_TOKEN</span> <span class="hljs-string">}}</span>
</code></pre>
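<p>The <code>kustomize edit set image</code> step in the workflow above simply rewrites the image tag inside <code>kustomization.yaml</code>. A rough stdlib-only imitation of that tag bump (the file contents, repository name, and tags are illustrative; the field names follow Kustomize's images transformer):</p>

```python
import re
import tempfile
from pathlib import Path

# Illustrative kustomization.yaml, as Kustomize's images transformer expects it.
kustomization = Path(tempfile.mkdtemp()) / "kustomization.yaml"
kustomization.write_text(
    "images:\n"
    "- name: civo-hackathon\n"
    "  newName: user/civo-hackathon\n"
    "  newTag: oldsha\n"
)

def bump_tag(path: Path, new_tag: str) -> None:
    """Rewrite the newTag field, as `kustomize edit set image` does for us in CI."""
    path.write_text(re.sub(r"newTag: \S+", f"newTag: {new_tag}", path.read_text()))

bump_tag(kustomization, "abc123")  # $GITHUB_SHA in the real workflow
print("newTag: abc123" in kustomization.read_text())
```

<p>Committing this one-line change back to the repository is what lets ArgoCD notice that a new image should be deployed.</p>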
<p><strong>This completed the Continuous Integration process.</strong></p>
<p>The final stage was to deploy the Docker image pushed to Docker Hub onto a Civo K3s cluster. For this, a K3s cluster was created on Civo; due to the CPU-intensive nature of the application, the largest node configuration was selected. Then, through the Civo CLI, the kubeconfig file was merged into the local kubectl configuration.</p>
<p>Through kubectl, a namespace was created and ArgoCD was installed in it. Inside ArgoCD, the configuration was provided to continuously track the GitHub repository for changes to the kustomization file. </p>
<p>Since the CI stage updates the kustomization file after each code change, that update triggers ArgoCD to re-deploy the application with the newer Docker image tag. Thus, after an initial manual sync, ArgoCD completes the Continuous Deployment process.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998564346/R3oBaZxEY.jpeg" alt="Argo CD" /></p>
<p><strong>The CI/CD Pipeline was successfully created which helped to automatically deploy code changes to production.</strong></p>
<p>After the application was working properly, I proceeded to install Prometheus and Grafana in a separate namespace in the cluster to fetch and visualize the metrics. For that, I edited the Flask application to expose metrics for Prometheus to scrape. </p>
<p>Then I created a ServiceMonitor to expose the metrics endpoint of the application, which in turn was automatically added to the Prometheus target list. With metrics flowing from the web application into Prometheus, I set up a Grafana dashboard to visualize them.</p>
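<p>To make the scraping step concrete, here is a minimal sketch of a <code>/metrics</code> endpoint using only the Python standard library. The real project used Flask, and the metric name below is illustrative; what matters is the Prometheus text exposition format that the endpoint serves:</p>

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy counter standing in for the app's real metrics.
REQUESTS = {"count": 0}

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        REQUESTS["count"] += 1
        # Prometheus text exposition format: HELP/TYPE lines, then samples.
        body = (
            "# HELP app_requests_total Total HTTP requests seen.\n"
            "# TYPE app_requests_total counter\n"
            f"app_requests_total {REQUESTS['count']}\n"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging in the demo
        pass

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Scrape the endpoint once, the way Prometheus would.
port = server.server_address[1]
text = urllib.request.urlopen(f"http://127.0.0.1:{port}/metrics").read().decode()
server.shutdown()
print(text.splitlines()[-1])  # the counter sample line
```

<p>A ServiceMonitor then just tells the Prometheus Operator which Service and port expose this endpoint, so no scrape configuration has to be written by hand.</p>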
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1648998565847/Vn3APv57u.png" alt="Grafana Dashboard" /></p>
<p><strong>This finally concluded the project.</strong>
<br /></p>
<h1 id="heading-my-experience">My Experience</h1>
<p>Overall, I had a great experience learning and executing new things within a short span of time. The deployment process could not have been smoother, thanks to the Civo platform. The fact that we can launch a cluster within a few minutes, along with a marketplace from which we can pick services to preinstall in the cluster, really simplified the process for newbie Kubernetes developers like me. </p>
<p>Apart from that, the Kubernetes Academy, integrated into the platform and containing beginner-friendly videos about all the different features of K8s, helped me quickly navigate and get my doubts cleared before applying things on my cluster.</p>
<p>And of course, we had the option of directly contacting the Civo team via Slack to get our queries resolved. Special thanks to <strong>Saiyam Pathak</strong> for his monitoring video, which really helped me set up the monitoring stack easily.
<br /></p>
<h1 id="heading-whats-next-for-the-project">What's next for the project</h1>
<p>Although I tried my best to incorporate all the domains of DevOps into my application, there are still some areas that need attention. </p>
<p>First and foremost, I tried to follow the GitOps principle as much as possible, which included pushing the application code, Kubernetes manifests and Terraform scripts to Git. Still, some settings had to be applied manually inside the cluster, such as setting up ArgoCD itself. Since ArgoCD supports GitOps, I plan to declare those settings from Git as well.</p>
<p>Apart from that, I would like to incorporate some logging and profiling tools in the cluster, which would give a better picture of the application deployment.</p>
<p>Last but not least, the deployed model can currently perform classification only. Recent research has shown that, through instance segmentation on the X-rays, we can actually measure the severity of the infection by precisely identifying the locations of the GGOs (ground-glass opacities). In the future, I want to integrate such a model into the application, so that users can also measure the severity of the infection instantly.</p>
<hr />
<p>You can visit the repository from below:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/Lucifergene/civo-hackathon">https://github.com/Lucifergene/civo-hackathon</a></div>
<h4 id="heading-demo-httpscovid-predictionedherokuappcom">Demo: <a target="_blank" href="https://covid-predictioned.herokuapp.com/">https://covid-predictioned.herokuapp.com/</a></h4>
<h4 id="heading-devpost-httpsdevpostcomsoftwarecovid-19-prognosis">DevPost: <a target="_blank" href="https://devpost.com/software/covid-19-prognosis">https://devpost.com/software/covid-19-prognosis</a></h4>
<h4 id="heading-civo-httpswwwcivocom">Civo: <a target="_blank" href="https://www.civo.com/">https://www.civo.com/</a></h4>
<hr />
<p>You can reach out on my <a target="_blank" href="https://twitter.com/avik6028">Twitter</a>, <a target="_blank" href="https://instagram.com/avik6028">Instagram</a>, or <a target="_blank" href="https://linkedin.com/in/avik-kundu-0b837715b">LinkedIn</a> if you need more help. I would be more than happy to help.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Deploying React Application on Web using AWS Amplify]]></title><description><![CDATA[Today, the majority of businesses have switched to cloud computing and are willing to take a chance. Amazon Web Services (AWS) cloud platform is often regarded as the best among the several cloud adoption services currently offered. Over 90 services a...]]></description><link>https://blog.avikkundu.com/deploying-react-application-on-web-using-aws-amplify</link><guid isPermaLink="true">https://blog.avikkundu.com/deploying-react-application-on-web-using-aws-amplify</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws-amplify]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Tue, 09 Nov 2021 21:45:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1659048584803/ZM5-9Qglr.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, the majority of businesses have switched to cloud computing, and many more are willing to take the leap. The Amazon Web Services (AWS) cloud platform is often regarded as the best among the several cloud adoption services currently offered. Over 90 services and products intended to assist developers in building fast, dependable, serverless, and secure web and mobile applications are now part of Amazon's constantly expanding portfolio.</p>
<h2 id="heading-introduction">Introduction</h2>
<p>Introduced in 2017, AWS Amplify is a full suite of tools and services created to make it simple for developers to build and release apps. It also provides ready-to-use components, code libraries, and a built-in command-line interface (CLI). Its most important asset is the ability to swiftly and securely integrate a variety of capabilities, from APIs to AI.</p>
<p>The user experience is another reason for the introduction of AWS Amplify. User experience is the most crucial factor to consider when creating any application, and AWS Amplify was intended to unify the user experience across platforms, including web and mobile.</p>
<p>Amplify scales effortlessly with your business, from thousands of users to tens of millions, and covers the entire mobile application development workflow, from version control and code testing to production deployment.</p>
<p>The open-source Amplify libraries and CLI, which are components of the Amplify framework, provide a pluggable interface that you can customize and extend with your own plugins.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659045376887/4xLJi4AfZ.png" alt="image.png" /></p>
<h3 id="heading-benefits">Benefits</h3>
<ul>
<li><p>Don't reinvent the wheel; instead, concentrate on your business. Why create your own authentication system when there are already ones out there that can handle MFA, social providers, and all business features?</p>
</li>
<li><p>Use Amplify to prototype cool ideas and discard features one at a time if necessary. Why not take advantage of the high-quality proof-of-concept work Amplify makes possible? As an illustration, you could construct your API using generated AppSync resolvers and then gradually replace them with custom resolvers as needed (similarly to Create React App).</p>
</li>
<li><p>Solid DevOps requires good multi-environment design. Amplify can bootstrap new environments on the fly, and Amplify environments can follow the same Git workflow.</p>
</li>
</ul>
<h2 id="heading-getting-started">Getting Started</h2>
<p>In this article, you will see how easily we can deploy a web application on AWS Amplify. As an example, we are going to deploy a React application.
The source code can be found in the following <a target="_blank" href="https://github.com/aditya-sridhar/simple-reactjs-app">GitHub</a> repository.</p>
<h3 id="heading-pre-requisites">Pre-requisites</h3>
<ul>
<li>an AWS account (of course 🤪)</li>
<li>Node.js installed in your system</li>
<li>a GitHub account, with Git and the GitHub CLI (<code>gh</code>) installed in your system</li>
<li>Code Editor of your choice (VSCode preferred)</li>
</ul>
<h3 id="heading-cloning-and-testing-the-application-locally">Cloning and Testing the Application Locally</h3>
<p>First, we are going to clone the repository and run the application locally in our system.
Create a directory in your system and open it with VSCode. You can use the integrated terminal to execute the code snippets.  </p>
<p>To start, clone the repository with the following command:</p>
<pre><code>git <span class="hljs-keyword">clone</span> https:<span class="hljs-comment">//github.com/aditya-sridhar/simple-reactjs-app</span>
</code></pre><p>Once the repository is cloned, you will see a new directory inside the folder. Navigate inside the directory and type the following commands:</p>
<pre><code><span class="hljs-built_in">npm</span> install
</code></pre><p>This will install all the dependencies required to run the application, in your system.</p>
<p>Once completed, you can run the application with the following command:</p>
<pre><code>npm <span class="hljs-keyword">start</span>
</code></pre><p>The web browser will pop up and you will be seeing the following page:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659047112637/Bp4mvS294.png" alt="image.png" /></p>
<p>Thus, we have successfully cloned and tested the application in our local system.</p>
<h3 id="heading-pushing-the-application-code-to-your-github-account">Pushing the application code to your Github Account</h3>
<p>Now, we have to upload the code to your GitHub account so that you can use it with AWS Amplify. For creating a repository, we are going to use the GitHub CLI.  </p>
<p>First, delete the <code>.git</code> folder present in the root directory.</p>
<p>Then, initialize the directory with the following command:</p>
<pre><code>git <span class="hljs-keyword">init</span>
</code></pre><p>After that, let's create a repository with the GitHub CLI. Type the following command:</p>
<pre><code>gh repo create aws<span class="hljs-operator">-</span>react<span class="hljs-operator">-</span>hosting
</code></pre><p>This will create a repository in your GitHub account. Finally, we will push the code to this repository.</p>
<pre><code>git add .
git commit <span class="hljs-operator">-</span>m <span class="hljs-string">"First Commit"</span>
git remote add origin <span class="hljs-operator">&lt;</span>link<span class="hljs-operator">-</span>to<span class="hljs-operator">-</span>your<span class="hljs-operator">-</span>github<span class="hljs-operator">-</span>repo<span class="hljs-operator">&gt;</span>
git push <span class="hljs-operator">-</span>u origin main
</code></pre><h3 id="heading-deploying-on-aws-amplify">Deploying on AWS Amplify</h3>
<p>Now let's move on to deploy the application to AWS Amplify.</p>
<ol>
<li><p>Open the AWS HomePage.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659047019643/Y2-hQtuIW.png" alt="Hong Ly - How to Deploy React App to AWS Amplify for FREE [gTxEZrsDk3w - 1280x720 - 5m03s].png" /></p>
</li>
<li><p>Login and visit the Console.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659047341284/N0IT_1s8h.png" alt="Hong Ly - How to Deploy React App to AWS Amplify for FREE [gTxEZrsDk3w - 1280x720 - 5m08s].png" /></p>
</li>
<li><p>Select AWS Amplify from the List.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659047383761/1bdZuzjL-.png" alt="Hong Ly - How to Deploy React App to AWS Amplify for FREE [gTxEZrsDk3w - 1280x720 - 5m28s].png" /></p>
</li>
<li><p>You will reach the following page:
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659047428687/3b5tS7Sls.png" alt="Hong Ly - How to Deploy React App to AWS Amplify for FREE [gTxEZrsDk3w - 1280x720 - 5m38s].png" /></p>
</li>
<li><p>Select GitHub from the Options available.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659047484225/snF_-VR0p.png" alt="Hong Ly - How to Deploy React App to AWS Amplify for FREE [gTxEZrsDk3w - 1280x720 - 5m58s].png" /></p>
</li>
<li><p>Authorize the connection and select the repository you had created earlier.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659048194516/3JANkQft1.png" alt="Hong Ly - How to Deploy React App to AWS Amplify for FREE [gTxEZrsDk3w - 1280x720 - 6m28s].png" /></p>
</li>
<li><p>Click Next and Save and Deploy.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659048262433/egDFtdSiG.png" alt="Hong Ly - How to Deploy React App to AWS Amplify for FREE [gTxEZrsDk3w - 1280x720 - 6m48s].png" /></p>
</li>
<li><p>Finally, you will come to the following page:
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659048330864/lq7tFpHRp.png" alt="Hong Ly - How to Deploy React App to AWS Amplify for FREE [gTxEZrsDk3w - 1280x720 - 7m08s].png" /></p>
</li>
<li><p>Once all the stages are completed, click on the link to see the live website.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659047112637/Bp4mvS294.png" alt="image.png" /></p>
</li>
</ol>
<p><strong>Thus, we have successfully deployed a React application on AWS Amplify!!!</strong></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>AWS Amplify can be quite helpful for web and mobile developers. Built-in authentication, notifications, and APIs for full-stack apps can all be implemented with minimal effort. Instead of wasting time managing application infrastructure, you can concentrate on your top priority: giving your clients the best value.</p>
]]></content:encoded></item><item><title><![CDATA[Performing Instance Segmentation on X-Ray Images with Mask R-CNN]]></title><description><![CDATA[COVID-19 or novel coronavirus disease, which has already been declared as a Worldwide pandemic, at first had an outbreak in a small town of China, named Wuhan. More than two hundred countries around the world have already been affected by this severe...]]></description><link>https://blog.avikkundu.com/mask-rcnn-covid-segment</link><guid isPermaLink="true">https://blog.avikkundu.com/mask-rcnn-covid-segment</guid><category><![CDATA[Deep Learning]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Sat, 18 Jul 2020 14:57:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1601548180933/odDU3PPIi.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>COVID-19 or novel coronavirus disease</strong>, which has already been declared as a <strong>Worldwide pandemic</strong>, at first had an outbreak in a small town of China, named <strong>Wuhan</strong>. More than two hundred countries around the world have already been affected by this severe virus as it spreads by human interaction.</p>
<p>Moreover, the symptoms of the novel coronavirus are quite similar to those of the general flu. Screening infected patients is considered a critical step in the fight against COVID-19, so it is highly relevant to recognize positive cases as early as possible to avoid further spreading of the epidemic. There are several methods to detect COVID-19-positive patients, typically performed on respiratory samples; among them, one critical approach is radiological imaging, or X-ray imaging. <strong>Recent findings from X-ray imaging techniques suggest that such images contain relevant information about the SARS-CoV-2 virus.</strong></p>
<hr />
<h1 id="introduction">Introduction</h1>
<p><strong>Deep learning</strong> is a popular area of research in the field of artificial intelligence. It enables end-to-end modelling that delivers promising results from input data without the need for manual feature extraction. The use of <strong>Machine Learning methods</strong> for diagnostics in the medical field has recently gained popularity as a complementary tool for doctors. As a result, radiological images have been used extensively in recent times to detect confirmed COVID-19 cases.</p>
<img alt="Image for post" src="https://miro.medium.com/max/1200/1*QyW8eCwpRzw5rDpW75-YfA.gif" />

<p><strong>Mask RCNN</strong> is a conceptually <strong>simple, flexible, and general framework</strong> for <strong>object instance segmentation</strong>. The approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. This method extends <strong>Faster R-CNN</strong> by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, it is easy to generalize to other tasks.</p>
<hr />
<h1 id="understanding-image-segmentation">Understanding Image Segmentation</h1>
<p><strong>Image segmentation</strong> is the process of partitioning a digital image into multiple segments. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.</p>
<p><strong>Image segmentation</strong> is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, Image Segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.</p>
<p>There are two types of Image Segmentation: <strong>Instance Segmentation</strong> and <strong>Semantic Segmentation</strong>.</p>
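<p>The distinction can be sketched in a few lines of plain Python (a toy example on a binary mask, not part of the original article): semantic segmentation gives every foreground pixel the same class label, while instance segmentation assigns a separate label to each connected object.</p>

```python
# Toy illustration: semantic vs instance labels on a tiny binary mask.
# Instances are found by flood-filling each connected foreground region.

def instance_labels(mask):
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                next_label += 1            # a new, separate object
                stack = [(i, j)]
                while stack:               # flood fill, 4-connectivity
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not labels[y][x]:
                        labels[y][x] = next_label
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]

semantic = [[1 if v else 0 for v in row] for row in mask]  # one class everywhere
instances = instance_labels(mask)                          # each object numbered
```

Mask R-CNN performs the instance variant: every detected object gets its own mask, even when the objects belong to the same class.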
<img alt="Image for post" src="https://miro.medium.com/max/2066/1*NPdEj1NMY3120E9yM9egTw.png" />

<hr />
<h1 id="related-works">Related Works</h1>
<p><strong>Sethy</strong> classified the features obtained from different CNN models with an <strong>SVM classifier</strong> using X-Ray images. <strong>Wang</strong> suggested a deep model for COVID-19 patient recognition and achieved an accuracy of <strong>92.4%</strong> in classifying normal, non-COVID, and COVID-19 pneumonia classes. In another study, a <strong>ResNet-50</strong> model was proposed by <strong>Narin</strong>, and it achieved a COVID-19 detection accuracy of <strong>98%</strong>. For COVID-19 patient detection using X-Ray images, the deep model of <strong>Ioannis</strong> reached a success rate of <strong>98.75%</strong> for two classes and <strong>93.48%</strong> for three classes. By combining multiple CNN models, <strong>Hemdan</strong> proposed a <strong>COVIDX-Net model</strong> capable of detecting confirmed cases of COVID-19. A transfer-learning-based framework has been advanced by <strong>Kermany</strong> to identify medical diagnoses and treatable diseases using image-based deep learning.</p>
<hr />
<h1 id="understanding-ground-glass-opacity-in-x-rays">Understanding Ground Glass Opacity in X-Rays</h1>
<p>The COVID-19 pandemic has brought radiologists’ penchant for descriptive terms front-and-centre, with frequent references to one feature in particular: <strong>ground-glass opacities.</strong></p>
<img alt="Image for post" src="https://miro.medium.com/max/900/1*0Ij0cllOksaV6I6uMbfuEA.jpeg" />

<p>The term refers to the <strong>hazy, white-flecked pattern</strong> seen on lung CT scans, indicative of increased density. It’s not quite as dense as the “<a target="_blank" href="https://radiopaedia.org/articles/crazy-paving?lang=us">crazy-paving</a>” pattern, which looks like a mosaic or pavers, and less confounding than the “<a target="_blank" href="https://radiopaedia.org/articles/head-cheese-sign-lungs?lang=us">head cheese sign</a>,” a juxtaposition of three or more densities present in the same lung.</p>
<p>Ground-glass opacities aren’t likely to be found in healthy lungs, though, and wouldn’t result from exposures like air pollution or smoking. There are a lot of diseases that can cause ground-glass opacities, but in COVID-19, there’s a distinct distribution, a preference for certain parts of the lung. COVID-related ground-glass opacities also have a very round shape that’s unusual compared with other ground-glass opacities.</p>
<hr />
<h1 id="technologies-used">Technologies Used</h1>
<h2 id="supervisely">Supervisely</h2>
<blockquote>
<p>“Supervisely is a powerful platform for computer vision development, where individual researchers and large teams can annotate and experiment with datasets and neural networks.”</p>
</blockquote>
<img alt="Image for post" src="https://miro.medium.com/max/2100/1*2h6vB1aFpFaQjK8ITW360g.png" />

<p>Supervisely provides the following advantages:</p>
<ul>
<li><strong>Get from idea to a toy prototype in several minutes.</strong> It will take you about 5 minutes to manually label 10 images, run the data preparation script, and train and apply the model.</li>
<li><strong>Leverage the largest Deep Learning models collection available.</strong> You can use Deep Learning models in a unified, framework-independent way, so experiments are fast and cheap, and it is easy to compare the performance of different models on your task.</li>
<li><strong>Fast iterations.</strong> Active learning to improve your models continuously is a huge benefit of the platform.</li>
<li><strong>Get ready-to-use ecosystems.</strong> Organizing the workflow of data annotators, reviewers, data scientists and domain experts so that results are shareable and available, with an emphasis on fast iterations, usually implies building a complex front-end/back-end infrastructure, which the platform provides out of the box.</li>
</ul>
<hr />
<h1 id="getting-started">Getting Started</h1>
<h2 id="requisites">Requisites</h2>
<p>We need an active <strong>AWS</strong> account to connect our Supervisely account to an instance for training. We should know how to launch an <strong>Amazon Linux instance</strong> there and install software in it.</p>
<hr />
<h1 id="working-with-supervisely">Working with Supervisely</h1>
<p>First, we need to create an account in <strong>Supervisely</strong>. After creating the account, we need to create a <strong>Workspace</strong> and a <strong>team</strong>.</p>
<h2 id="1-uploading-the-dataset-of-images">1. Uploading the dataset of Images</h2>
<p>Then we need to create a <strong>Project</strong>. Inside the Project, we upload the dataset of images.</p>
<img alt="Image for post" src="https://miro.medium.com/max/3840/1*Anlq5fPkP6rJyv0S_Jzlaw.png" />

<img alt="Image for post" src="https://miro.medium.com/max/3796/1*q8iALhWRwR10OiUnACujAA.png" />

<h2 id="2-annotating-all-uploaded-images">2. Annotating all Uploaded Images</h2>
<p>After creating the project and uploading the images, we need to <strong>annotate</strong> the images, <strong>so that our model knows what exactly to look for in them</strong>.</p>
<img alt="Image for post" src="https://miro.medium.com/max/3840/1*M6PpZTRW8bKfkxDDA7amLQ.png" />

<h2 id="3-performing-data-augmentation">3. Performing Data Augmentation</h2>
<p>After annotation, we need to increase the number of images available in our dataset to get accurate results. For this, we use <strong>DTL code</strong> that performs some transformations on our images to create new versions of them. Some of the techniques used are rotating the images and increasing or decreasing their contrast or brightness.</p>
<img alt="Image for post" src="https://miro.medium.com/max/3840/1*-Ypu4VDLv23k8AFXOEkKZQ.png" />

<p>For this, we need to upload a DTL (Data Transformation Language) config.</p>
<p>Once this completes, we will find another folder, automatically created, which contains at least 4 times the number of images we originally provided.</p>
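<p>The kind of transformations the DTL config applies can be sketched in plain Python (an illustrative stand-in, not Supervisely's actual DTL): each source image yields flipped and brightness-shifted copies, multiplying the dataset size.</p>

```python
# Toy augmentation on a grayscale image stored as a 2D list of 0-255 values.

def hflip(img):
    # mirror each row left-to-right
    return [row[::-1] for row in img]

def brightness(img, factor):
    # scale every pixel, clamping to the valid 0-255 range
    return [[min(255, max(0, int(p * factor))) for p in row] for row in img]

def augment(img):
    # one original -> four images, matching the "at least 4x" growth above
    return [img, hflip(img), brightness(img, 1.2), brightness(hflip(img), 0.8)]

xray = [[10, 200],
        [30, 120]]
dataset = augment(xray)
```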
<h2 id="5-connecting-to-ec2-instance-to-train-the-model">4. Connecting to an EC2 Instance to train the model</h2>
<p>Now we need to select a <strong>Neural Network</strong> model from the list for training. In our case, we are going to use the <strong>Mask RCNN model</strong>.</p>
<img alt="Image for post" src="https://miro.medium.com/max/3840/1*fdkzl6q52eE2VIJun6pbAA.png" />

<p>Now is the time when we need to create an instance in AWS and connect it with the <strong>Supervisely</strong> to perform the training operations.</p>
<p>By default, Supervisely expects the training instance to have a GPU. But since GPU instances are costly and require requesting a limit increase from AWS, we will just train our model and download the weights file. After that, we will manually run the weights file on our local machine to view the output.</p>
<p>In AWS, we run an <strong>Amazon Linux</strong> instance and connect it with our local machine via <strong>ssh</strong>. After that, we install Docker inside the instance since Supervisely needs <strong>Docker</strong> as it will automatically download a Docker image of the program which will perform the training.</p>
<img alt="Image for post" src="https://miro.medium.com/max/2706/1*q73UYOIJNz4-0To158TcpQ.png" />

<p>After we install Docker in the instance, we need to connect Supervisely with the instance, using the highlighted Bash Script.</p>
<img alt="Image for post" src="https://miro.medium.com/max/3840/1*wue39Pm8oYQfZ0UDBMb46g.png" />

<p>This will download the Supervisely Docker image to our instance. All the dependencies required for training our model are packaged in this Docker image.</p>
<p>After this, from the Neural Networks tab, we start the Training Process of our model.</p>
<img alt="Image for post" src="https://miro.medium.com/max/3840/1*dZl0b5ckRsFPellNp0N_dg.png" />

<p>Since we don't have any GPU in our instance, we would find this error:</p>
<img alt="Image for post" src="https://miro.medium.com/max/3840/1*bmIxfmRPeYl4W1lcVRYzuQ.png" />

<p>But, we can still download the weights file in the following way:</p>
<img alt="Image for post" src="https://miro.medium.com/max/3840/1*7gzinLUm7-Fj629_3ZxGRA.png" />

<h2 id="6-finding-the-output">5. Finding the Output</h2>
<p>After downloading the weights file, we update the Mask R-CNN demo code available in the Matterport repository accordingly, so that it accepts this weights file.</p>
<img alt="Image for post" src="https://miro.medium.com/max/2230/1*03wKf8W6hEZW2Wj-Vi2d6g.png" />

<img alt="Image for post" src="https://miro.medium.com/max/2886/1*20eMV2cKvMiVgAtdiOFNpw.png" />

<hr />
<h1 id="conclusion">Conclusion</h1>
<p>Thus, by the above process, we were able to perform instance segmentation on COVID chest X-Rays. Our model confirmed that the X-Ray provided contained <strong>Ground Glass Opacities,</strong> which in turn predicted that the associated person might be infected.</p>
<p>With more precise annotations on the training images, we can increase the accuracy of the model so that it masks the exact area of the GGOs in the future. Moreover, we can use a powerful GPU-equipped remote instance, which can automate the entire process remotely rather than testing the weights manually.</p>
<hr />
<p>You can reach out on my <a target="_blank" href="https://github.com/Lucifergene/">GitHub</a>, <a target="_blank" href="https://twitter.com/avik6028">Twitter</a>, <a target="_blank" href="https://instagram.com/avik6028">Instagram</a>, or on <a target="_blank" href="https://linkedin.com/in/avik-kundu-0b837715b">LinkedIn</a> if you need more help. I would be more than happy.</p>
<p>If you have come up to this, <strong>do drop an 👏 if you liked this article.</strong></p>
<p><strong>Good Luck</strong> 😎 and <strong>happy coding</strong> 👨‍💻</p>
]]></content:encoded></item><item><title><![CDATA[Building the Perfect Face Recognition Model with the Integration of ML-Ops]]></title><description><![CDATA[Face recognition models have been in the markets for decades now. It all started with the Eigenface Approach in the late ’80s to early ’90s. The Eigenface method is today used as a basis of many deep learning algorithms, paving way for modern facial ...]]></description><link>https://blog.avikkundu.com/mlops-face-recognition</link><guid isPermaLink="true">https://blog.avikkundu.com/mlops-face-recognition</guid><category><![CDATA[Deep Learning]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Avik Kundu]]></dc:creator><pubDate>Sun, 24 May 2020 04:52:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1601614712168/sX4iPRgrDx.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p> <strong>Face recognition models</strong> have been in the markets for decades now. It all started with the <strong>Eigenface Approach</strong> in the late ’80s to early ’90s. The Eigenface method is today used as a basis of many deep learning algorithms, paving way for modern facial recognition solutions.</p>
<p>The modern-day <strong>game-changers</strong> were spurred on by the <strong>annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC)</strong>.</p>
<img alt="Image for post" src="https://miro.medium.com/max/1786/1*yP7fM_kTkQTQi3NHE3arMQ.png" />

<p>Source: <a target="_blank" href="https://anyconnect.com/blog/the-history-of-facial-recognition-technologies">anyconnect.com</a></p>
<p>In 2012, <strong>AlexNet</strong>, a <strong>deep convolutional neural network (CNN)</strong>, bested previous entries with an error rate of <strong>15.3%</strong>. This was a game-changer, because it was the first time that such results had been achieved.</p>
<p>Subsequent image processing solutions in the following years improved on the results of <strong>AlexNet</strong>. In 2014, <strong>GoogLeNet/Inception</strong> achieved an error rate of <strong>6.67%</strong>. In 2015, <strong>ResNet</strong> further brought the error rate down to <strong>3.6%</strong>.</p>
<hr />
<h1 id="introduction">Introduction</h1>
<h2 id="1-what-do-we-mean-by-ml-ops">1. What do we mean by ML-Ops?</h2>
<p>The fundamental concept behind <strong>ML-Ops</strong> is integrating the tools and concepts of <strong>DevOps</strong> to solve the problems faced while <strong>training Machine Learning models</strong>, through <strong>automatic adjustment of the hyper-parameters</strong>, leading to <strong>increased accuracy</strong>.</p>
<p>The most important practice from <strong>DevOps</strong>, a focus on <strong>Continuous Integration/Continuous Delivery (CI/CD)</strong>, is applied directly to <strong>model generation</strong>, while <strong>regular deployment</strong>, <strong>diagnostics</strong> and <strong>further training</strong> can also happen frequently, rather than <strong>waiting for one large upload at much slower intervals</strong>.</p>
<h3 id="check-out-this-great-article-to-know-more">Check out this great Article to know more</h3>
<p><a target="_blank" href="https://neptune.ai/blog/mlops-what-it-is-why-it-matters-and-how-to-implement-it-from-a-data-scientist-perspective">MLOps: What It Is, Why It Matters, and How to Implement It</a></p>
<h2 id="2-transfer-learning-in-the-world-of-cnn">2. Transfer-Learning in the world of CNN</h2>
<p>It is a machine learning method where a model developed for <strong>a task is reused as the starting point for a model on a second task</strong>.</p>
<p>It is a <strong>popular approach in deep learning</strong> where <strong>pre-trained models are used as the starting point</strong> for <strong>Computer Vision (CV)</strong> and <strong>Natural Language Processing (NLP)</strong> tasks, given the <strong>vast compute and time resources required to develop neural network models</strong> for these problems.</p>
<img alt="Difference between Traditional ML vs Transfer Learning" src="https://miro.medium.com/max/1838/1*b4GiiiIgxhfd3pUd86ZUuw.png" />

<p>Source: <a target="_blank" href="https://towardsdatascience.com/what-is-transfer-learning-8b1a0fa42b4">towardsdatascience.com</a></p>
<p><strong>This method can give high accuracy with limited images and resources.</strong></p>
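<p>The idea can be sketched with a toy model in plain Python (an illustration, not the VGG16 model used later): a "pre-trained" feature extractor is kept frozen, and only a small new head is trained on top of it.</p>

```python
# Toy transfer learning: a frozen "pre-trained" feature extractor feeds a
# small trainable head; only the head's weights are ever updated.

def frozen_base(x):
    # Stand-in for a pre-trained network's features; never updated.
    return [x, x * x]

def train_head(data, lr=0.1, epochs=500):
    w = [0.0, 0.0]  # weights of the new fully connected "head"
    for _ in range(epochs):
        for x, y in data:
            feats = frozen_base(x)
            pred = sum(wi * f for wi, f in zip(w, feats))
            err = pred - y
            w = [wi - lr * err * f for wi, f in zip(w, feats)]
    return w

# Fit the head on a toy task (target y = 2x); the base never changes.
data = [(k / 10, 2 * k / 10) for k in range(-5, 6)]
w = train_head(data)
```

This is why transfer learning is cheap: only the tiny head is optimized, while the expensive feature extractor is reused as-is.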
<hr />
<h1 id="synopsis">Synopsis</h1>
<p>This project explains the process of automating the task of adjusting the Hyper-parameters of our Face Recognition Model, for attaining the perfect accuracy using Docker, Jenkins and Git/GitHub.</p>
<h1 id="briefing-about-the-face-recognition-model">Briefing about the Face Recognition model</h1>
<p>The Face Recognition model is built using the method of <strong>Transfer Learning</strong>. <strong>VGG16</strong> pre-trained model is used for the purpose.</p>
<p>The possible <strong>Hyper-parameter Tunings</strong> here:</p>
<p>★ Adjusting the number of <code>FC layers</code></p>
<p>★ Adjusting the <code>Learning Rate</code></p>
<p>★ Choosing an <code>optimizer</code> and a <code>loss function</code></p>
<p>★ Deciding on the <code>batch size</code> and <code>number of epochs</code></p>
<p>The Code for the Face Recognition model can be downloaded from <a target="_blank" href="https://github.com/Lucifergene/Face-Recognition-with-Transfer-Learning">here</a>.</p>
<p>Video Demonstration of the Model</p>
<h1 id="pre-requisites">Pre-requisites</h1>
<p>First of all, we are assuming that <strong>Docker</strong>, <strong>Git</strong>, and <strong>Jenkins with the Git Plugin</strong> are installed in the system.</p>
<p>In this article, we are directly beginning with integrating our Face Recognition Model with DevOps tools.</p>
<p>In this article, we are going to use <strong>RHEL 8.2</strong> as our Host OS.</p>
<hr />
<h1 id="getting-started">Getting Started</h1>
<p>We are going to use <strong>Docker containers</strong> to build and <strong>run our Machine Learning Models</strong>. Different <strong>custom Docker containers</strong> will be built using the <strong>DockerFile</strong> to support different architectures of the ML models.</p>
<p>Through <strong>Jenkins</strong>, we are going to create multiple jobs as follows:</p>
<p><strong>JOB#1:</strong> Pulling the Github repository automatically when some developers push the repository to Github.</p>
<p><strong>JOB#2:</strong> By looking at the code or program file, Jenkins will automatically start the respective image containers to deploy code and start training.</p>
<p>(e.g. if the code uses a <strong>CNN</strong>, then Jenkins should start the container that already has all the software required for <strong>CNN</strong> processing installed)</p>
<p><strong>JOB#3:</strong> Training the model and predicting the accuracy or metrics.</p>
<p><strong>JOB#4:</strong> If the Metrics Accuracy is less than <strong>90%</strong>, then tweaking the machine learning model architecture.</p>
<p><strong>JOB#5:</strong> Retraining the model and notifying that the best model is being created.</p>
<p><strong>JOB#6 (Monitoring):</strong> If the container where the app is running fails for any reason, this job will automatically start the container again from the last trained model.</p>
<h1 id="1-setting-up-the-docker-containers">1. Setting up the Docker Containers</h1>
<p>We are going to create 3 Docker containers for serving different ML models.</p>
<p>✓ CNN model</p>
<p>✓ ANN model</p>
<p>✓ Linear/Logistic Regression Models</p>
<h2 id="11-setting-up-the-docker-container-for-cnn-and-ann-models">1.1. Setting up the Docker container for CNN and ANN models</h2>
<p>For setting up this container, we are going to use the <code>tensorflow/tensorflow</code> image from DockerHub.</p>
<img alt="Image for post" src="https://miro.medium.com/max/3784/1*h6Px08SzF7SHAZMWYLtGWQ.png" />

<p>To download the image to our local machine, we have to run the following command from the Command Line:</p>
<p><code>docker pull tensorflow/tensorflow</code></p>
<p>After installing the image, we need to modify the image so that we can run our CNN &amp; ANN models in the container. We will be using Dockerfile to build our custom image.</p>
<p>To create the custom image, we need to create an empty file named <code>Dockerfile</code> anywhere in our host machine.</p>
<p>The contents of the Dockerfile will be as follows:</p>
<pre><code>FROM tensorflow/tensorflow:latest

RUN pip3 install keras numpy pandas pillow scikit-learn
</code></pre><p>After making the <code>Dockerfile</code> we need to build it to create our image using:</p>
<p><code>docker build -t cnn_image:v1 .</code> (from the same folder)</p>
<p>This would create our custom <code>cnn_image</code> image in the local machine.</p>
<h2 id="12-docker-container-for-linearlogistic-regression-models">1.2. Docker container for Linear/Logistic Regression Models</h2>
<p>For setting up this container, we are going to use the <code>centos</code> image from DockerHub.</p>
<img alt="Image for post" src="https://miro.medium.com/max/3788/1*JBYnTULrwwHl9V8_Evkeqg.png" />

<p>To download the image to our local machine, we have to run the following command from the Command Line:</p>
<p><code>docker pull centos</code></p>
<p>After installing the image, we need to modify the image so that we can run our Linear/Logistic Regression Models in the container. We will be using Dockerfile to build our custom image.</p>
<p>To create the custom image, we need to create an empty file named <code>Dockerfile</code> anywhere in our host machine.</p>
<p>The contents of the Dockerfile will be as follows:</p>
<pre><code>FROM centos:latest

RUN yum install epel-release -y &amp;&amp; \
    yum update -y &amp;&amp; \
    yum install python36 -y &amp;&amp; \
    pip3 install scikit-learn numpy pandas matplotlib pillow
</code></pre><p>After making the <code>Dockerfile</code> we need to build it to create our image using:</p>
<p><code>docker build -t regression_image:v1 .</code> (from the same folder)</p>
<p>This would create our custom <code>regression_image</code> image in the local machine.</p>
<p>★ Thus we have successfully set up the Docker containers in our system. ★</p>
<p><strong>To verify the installation of Docker images we can check the list of all images installed in our local machine</strong>: <code>docker image ls</code></p>
<hr />
<h1 id="2-building-the-jenkins-pipeline">2. Building the Jenkins Pipeline</h1>
<h2 id="21-job-1-automatic-code-download">2.1. Job-1: Automatic Code Download</h2>
<p>Before downloading the code, we need to create some folders on our local machine which would act as <strong>volumes</strong> for the <strong>Docker containers</strong>.</p>
<p>To create the folders in our local machine:</p>
<pre><code>mkdir /root/Desktop/ml_models
cd /root/Desktop/ml_models/
mkdir cnn ann reg
</code></pre><p>First, the downloaded code will be copied into the <code>ml_models</code> directory.</p>
<p>For creating the Job for downloading codes:</p>
<ol>
<li>Select <code>new item</code> option from the Jenkins menu.</li>
<li>Assign a name to the Job (e.g. <strong>model_download</strong>) and select it to be a <code>Freestyle</code> project.</li>
<li>From the <code>Configure Job</code> option, we set the configurations.</li>
<li>From the <strong>Source Code Management</strong> section, we select Git and mention the URL of our GitHub Repository and select the branch as <code>master</code>.</li>
<li>In the <strong>Build Triggers</strong> section, we select <code>Poll SCM</code> and set the value to <code>* * * * *</code>.
<strong>This means that the Job would check any code change from GitHub every minute.</strong></li>
<li>In the Build Section, we type the following script: <code>sudo cp -v -r -f * /root/Desktop/ml_models</code>. <strong>This command copies all the content downloaded from the GitHub master branch to the specified folder for deployment.</strong></li>
<li>On clicking the <strong>Save</strong> option, we add the Job to our Job List.</li>
</ol>
<p>On coming back to the Job List page, we can see the <strong>Job</strong> is being built. If the colour of the ball turns <strong>blue</strong>, it means the Job has been successfully executed. If the colour changes to <strong>red</strong>, it means there has been some error in between. We can see the console output to check the error.</p>
<p><strong>Till now, we have successfully downloaded the codes from GitHub to our Server System automatically.</strong></p>
<h2 id="22-job-2-classifying-the-files-based-on-the-architecture-of-the-model">2.2. Job-2: Classifying the files based on the architecture of the model</h2>
<p>Once the files have been downloaded, we need to copy the files to their respective folders automatically.</p>
<p>For creating the Job for classifying the files:</p>
<ol>
<li>Select <code>new item</code> option from the Jenkins menu.</li>
<li>Assign a name to the Job (e.g. <strong>model_classification</strong>) and select it to be a <code>Freestyle</code> project.</li>
<li>From the <code>Configure Job</code> option, we set the configurations.</li>
<li>From the <strong>Build Triggers</strong> section, we select <code>Build after other projects are built</code> and mention <code>model_download</code> as the project to watch. This is called a <strong>DownStreaming Job</strong>.</li>
<li>In the <strong>Build</strong> Section, we type the following script:</li>
</ol>
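<p>The article's original classification script is not reproduced here; a minimal sketch of the idea (hypothetical keyword rules, plain Python) looks at the imports in the pushed code to decide which container's folder it belongs to:</p>

```python
# Decide which model folder (cnn / ann / reg) a pushed source file belongs to,
# based on simple keyword rules (hypothetical, for illustration only).

def classify_model(source_code):
    text = source_code.lower()
    if "conv2d" in text or "maxpooling" in text:
        return "cnn"   # convolutional layers -> CNN container
    if "dense" in text or "keras" in text:
        return "ann"   # plain fully connected network -> ANN container
    if "sklearn" in text or "linearregression" in text:
        return "reg"   # scikit-learn model -> regression container
    return "unknown"

folder = classify_model("from keras.layers import Conv2D, Dense")
# the Jenkins job would then copy the file to /root/Desktop/ml_models/<folder>/
```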
<p>6. On clicking the <strong>Save</strong> option, we add the Job to our Job List.</p>
<p><strong>Thus, we have successfully transferred the files to their respective folders. Also, we have set these folders as volumes of the Docker Containers and started the service.</strong></p>
<h2 id="23-job-3-training-the-model-and-predicting-the-accuracy-or-metrics">2.3. Job-3: Training the model and predicting the accuracy or metrics</h2>
<p>Now, we have to start training the model already loaded to the respective Docker container.</p>
<p>For creating the Job for Training the model:</p>
<ol>
<li>Select <code>new item</code> option from the Jenkins menu.</li>
<li>Assign a name to the Job (e.g. <strong>model_training</strong>) and select it to be a <code>Freestyle</code> project.</li>
<li>From the <code>Configure Job</code> option, we set the configurations.</li>
<li>From the <strong>Build Triggers</strong> section, we select <code>Build after other projects are built</code> and mention <code>model_classification</code> as the project to watch. This is called a <strong>DownStreaming Job</strong>.</li>
<li>In the <strong>Build</strong> Section, we type the following script:</li>
</ol>
<p>6. On clicking the <strong>Save</strong> option, we add the Job to our Job List.</p>
<p><strong>By the end of this job, we have downloaded, classified and trained the model. We have also found out the accuracy of the model after training.</strong></p>
<p><strong>For this project, we are setting 90% as the needed Accuracy for the project.</strong></p>
<p>Now, <strong>if we find the accuracy obtained is not sufficient</strong>, we have to do the Hyper-parameter tuning. <strong>This would start the Job-4</strong>.</p>
<p>Otherwise, <strong>a mail would be sent to the user stating the Desired accuracy</strong> has been reached. <strong>This would be done by Job-5</strong>.</p>
<h2 id="24-job-4-retraining-the-model-to-increase-the-accuracy">2.4. Job-4: Retraining the model to increase the Accuracy</h2>
<p>Suppose, after training the model, we find out the accuracy is below the desired amount. Thus, we have to adjust the <strong>hyper-parameters</strong> for <strong>increasing the accuracy of the models</strong>.</p>
<p><strong>This is where DevOps steps in</strong>. With the help of <strong>Continuous Integration Pipeline (CI Pipeline)</strong>, we can automate the <strong>process of Hyper-parameter tuning</strong>. Thus the work which would require a lot of days if done manually can be <strong>finished within a few hours</strong> without much human intervention.</p>
<p><strong>Note:</strong> After training and testing our Face Recognition model locally, it has been found out that adding some <strong>extra Fully Connected Layers (FC Layer)</strong>, increases the accuracy beyond our desired mark.</p>
<p><strong>Thus we are focussing on adjusting a single hyperparameter for this article.</strong> Later on, we can easily add the function of checking other hyperparameters, <strong>if a specific model demands one</strong>.</p>
<p>For creating the Job for Retraining the model:</p>
<ol>
<li>Select <code>new item</code> option from the Jenkins menu.</li>
<li>Assign a name to the Job (e.g. <strong>model_retrain</strong>) and select it to be a <code>Freestyle</code> project.</li>
<li>From the <strong>Configure Job</strong> option, we set the configurations.</li>
<li>From the <strong>Build Triggers</strong> section, we select the <strong>Trigger builds remotely</strong> option.</li>
<li>Provide an <strong>Authentication Token</strong>.</li>
<li>In the <strong>Build</strong> Section, we type the following script:</li>
</ol>
<p>7. On clicking the <strong>Save</strong> option, we add the Job to our Job List.</p>
<p>Thus, the Job has been setup. To Trigger the Build, the following command would run the job:
<code>curl --user "&lt;username&gt;:&lt;password&gt;" JENKINS_URL/view/Mlops-project-1/job/model_retrain/build?token=TOKEN_NAME</code></p>
<p>e.g. <code>curl --user "admin:admin" http://192.123.32.2932:8080/view/Mlops-project-1/job/model_retrain/build?token=retraining_model</code></p>
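<p>The same remote trigger can also be issued from Python instead of <code>curl</code>. A minimal sketch (the host, credentials and token are placeholders, exactly as in the commands above):</p>

```python
import base64
from urllib import request

def build_trigger_request(base_url, job, token, user, password):
    """Prepare (but do not send) an authenticated Jenkins build-trigger request."""
    url = f"{base_url}/job/{job}/build?token={token}"
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return request.Request(url, headers={"Authorization": "Basic " + auth})

req = build_trigger_request(
    "http://jenkins.example.com:8080/view/Mlops-project-1",
    "model_retrain", "retraining_model", "admin", "admin")
# request.urlopen(req) would fire the job, equivalent to the curl command above
```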
<h2 id="25-job-5-notifying-that-the-best-model-is-being-created">2.5. Job-5: Notifying that the best model is being created</h2>
<p>If the trained model gives the desired accuracy at the beginning or by Hyper-parameter tuning, a mail is automatically sent to the user confirming the action.</p>
<p>For creating the Job for Notifying that the best model is being created:</p>
<ol>
<li>Select <code>new item</code> option from the Jenkins menu.</li>
<li>Assign a name to the Job (e.g. <strong>model_notify</strong>) and select it to be a <code>Freestyle</code> project.</li>
<li>From the <strong>Configure Job</strong> option, we set the configurations.</li>
<li>From the <strong>Build Triggers</strong> section, we select the <strong>Trigger builds remotely</strong> option.</li>
<li>Provide an <strong>Authentication Token</strong>.</li>
<li>In the <strong>Build</strong> Section, we type the following script:</li>
</ol>
<pre><code>sudo cp -rf /root/Desktop/ml_models/* .
sudo python3 sendmail.py
</code></pre><p>7. On clicking the <strong>Save</strong> option, we add the Job to our Job List.</p>
<p>Thus, the Job has been setup. To Trigger the Build, the following command would run the job:
<code>curl --user "&lt;username&gt;:&lt;password&gt;" JENKINS_URL/view/Mlops-project-1/job/model_notify/build?token=TOKEN_NAME</code></p>
<p>e.g. <code>curl --user "admin:admin" http://192.123.32.2932:8080/view/Mlops-project-1/job/model_notify/build?token=model_notification</code></p>
<hr />
<p>Now, we have to <strong>introduce these remote triggers</strong> we have created in our Model file. For that, at the end of the code, <strong>we add a conditional statement</strong>:</p>
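The conditional appeared only as an image in the original post. As a sketch (the accuracy threshold, credentials, and Jenkins address below are assumptions based on the earlier curl examples), the end of the training script could decide which remote trigger to fire like this:

```python
import base64
import urllib.request

# Hypothetical values; substitute your own Jenkins address and credentials.
JENKINS = "http://192.123.32.29:8080/view/Mlops-project-1"
THRESHOLD = 0.90  # desired accuracy

def job_to_trigger(accuracy, threshold=THRESHOLD):
    """Return the remote-trigger URL of the job that should run next."""
    if accuracy < threshold:
        # Accuracy too low: fire Job-4 to retrain with new hyper-parameters.
        return f"{JENKINS}/job/model_retrain/build?token=retraining_model"
    # Desired accuracy reached: fire Job-5 to notify the developer.
    return f"{JENKINS}/job/model_notify/build?token=model_notification"

def trigger(url, user="admin", password="admin"):
    """Equivalent of `curl --user admin:admin <url>`."""
    req = urllib.request.Request(url)
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {auth}")
    urllib.request.urlopen(req)

# At the very end of the model training code:
# trigger(job_to_trigger(model_accuracy))
```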
<h2 id="25-job-6-additional-monitoring-job">2.6. Job-6: Additional <strong>Monitoring Job</strong></h2>
<p>If the container where the app is running fails for any reason, this job will automatically restart it with the last trained model.</p>
<p>For monitoring the Jobs created:</p>
<ol>
<li>Select the <strong>new item</strong> option from the Jenkins menu.</li>
<li>Assign a name to the Job (e.g. <strong>monitor_job</strong>) and select it to be a <strong>Freestyle</strong> project.</li>
<li>From the <strong>Configure Job</strong> option, we set the configurations.</li>
<li>From the <strong>Build Triggers</strong> section, we select <code>Build after other projects are built</code> and mention <code>model_train</code> &amp; <code>model_retrain</code> as the projects to watch.</li>
</ol>
<p>It is important to <strong>select the “Trigger even if the build fails” option</strong> from the drop-down list.</p>
<p>5. In the <strong>Build</strong> Section, we type the following script:</p>
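This script also appeared only as an image in the original post. A sketch of the idea in Python (the container name is a hypothetical placeholder, and the Docker calls are only attempted inside an actual Jenkins build) might be:

```python
import os
import shutil
import subprocess

CONTAINER = "mlops_task"  # hypothetical name of the training container

def should_restart(ps_names, name=CONTAINER):
    """True if `name` is not among the running container names."""
    return name not in ps_names.splitlines()

# Only touch Docker when actually running inside a Jenkins build.
if shutil.which("docker") and os.environ.get("JENKINS_HOME"):
    # List the names of all currently running containers.
    out = subprocess.run(
        ["sudo", "docker", "ps", "--format", "{{.Names}}"],
        capture_output=True, text=True,
    ).stdout
    if should_restart(out):
        # The container crashed: bring it back up from its last state.
        subprocess.run(["sudo", "docker", "start", CONTAINER])
```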
<p>6. From the <strong>Post Build Actions</strong> dropdown, we select “<strong>Build Other Projects</strong>” and mention <code>model_train</code> as the project to build.</p>
<p>7. On clicking the <strong>Save</strong> option, we add the Job to our Job List.</p>
<p><strong>Therefore, whenever a container stops due to some problem during training, Job#3 or Job#4 would fail. That would trigger Job#6 to restart the containers and start Job#3 again.</strong></p>
<hr />
<h1 id="understanding-the-complete-workflow-properly">Understanding the Complete Workflow properly</h1>
<p>When a user adds a new model in the connected GitHub account,</p>
<p>★ Jenkins would download the code into the local system.</p>
<p>★ Once the code is received, <strong>Job#2</strong> would <strong>classify the model</strong>, add it to the <strong>respective folder</strong>, and <strong>attach the folder as the volume</strong> of the <strong>Docker Container</strong>.</p>
<p><strong>★ Job#3</strong> would execute the file inside the <strong>Docker container</strong> and <strong>train</strong> the model and <strong>predict the accuracy or metrics</strong>.</p>
<p>★ Now, if the accuracy is below the desired value, <strong>Job#4</strong> would run. It would <strong>retrain the model by changing the hyper-parameters</strong>.</p>
<p>★ Once the accuracy becomes greater than the desired, <strong>Job#5</strong> will be fired, resulting in the <strong>automatic sending of an e-mail</strong> to the Developer.</p>
<p>★ At last, <strong>Job#6</strong> is set as a <strong>Monitoring Job</strong>. It would continuously check whether the container crashes during training and would restart them.</p>
<hr />
<h1 id="conclusion">Conclusion</h1>
<p>Previously, we had an additional <strong>3 Dense layers</strong> attached to the pre-trained <strong>VGG16</strong> model and reached an accuracy of <strong>86%</strong>.</p>
<p>After <strong>running this Pipeline</strong>, <strong>2 more layers were added</strong> at the end by these automation tools, and the <strong>accuracy rose to 92%</strong>.</p>
<p>This method of <strong>Automated Hyperparameter Tuning</strong> helps adjust the accuracy of Machine Learning models faster and more efficiently. That is the main reason for using the <strong>power of ML-Ops</strong> to solve these real-life situations.</p>
<hr />
<p>You can reach out on my <a target="_blank" href="https://twitter.com/avik6028">Twitter</a>, <a target="_blank" href="https://instagram.com/avik6028">Instagram</a>, or <a target="_blank" href="https://linkedin.com/in/avik-kundu-0b837715b">LinkedIn</a> if you need more help. I would be more than happy to help.</p>
<p><strong>Good Luck</strong> 😎 and <strong>happy coding</strong> 👨‍💻</p>
]]></content:encoded></item></channel></rss>