<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Terraform - Architect the cloud</title>
	<atom:link href="https://blog.slepcevic.net/category/terraform/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.slepcevic.net</link>
	<description></description>
	<lastBuildDate>Mon, 06 Oct 2025 09:23:48 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9</generator>
	<item>
		<title>Your own AI coding assistant running on Akamai cloud!</title>
		<link>https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/</link>
					<comments>https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Tue, 18 Feb 2025 18:17:12 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI coding]]></category>
		<category><![CDATA[bolt.diy]]></category>
		<category><![CDATA[Linode]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://blog.slepcevic.net/?p=760</guid>

					<description><![CDATA[<p>What? You want some AI to write my code? AI-powered coding assistants have been the main talk in the developer world for a while now, there&#8217;s no denying that. I can&#8217;t count the times I&#8217;ve read somewhere that AI will replace developers...</p>
<p>The post <a href="https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/">Your own AI coding assistant running on Akamai cloud!</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<h3 class="wp-block-heading">What? You want some AI to write my code?</h3>



<p>AI-powered coding assistants have been the main talk in the developer world for a while now, there&#8217;s no denying that. I can&#8217;t count the times I&#8217;ve read somewhere that AI will replace developers in the next X years. You’ve probably seen tools like <strong>GitHub Copilot, ChatGPT, or Tabnine</strong> popping up everywhere.</p>



<p>They promise to boost productivity, help with debugging, and even teach you new coding techniques. Sounds amazing, right? But like anything, AI-powered coding assistants have their downsides too. So, let’s talk about what makes them great—and where they might fall short.</p>



<h3 class="wp-block-heading">Why AI Coding Assistants Are a Game-Changer</h3>



<p>Obviously, one of the biggest advantages of using an AI assistant is the time it saves. Instead of writing the same repetitive boilerplate code over and over and over again, you can generate it in seconds. Need a quick function to parse JSON? AI has you covered.  Easy peasy! Stuck on how to structure your SQL query? Just ask. This means less time spent on the boring stuff and more time on actual problem-solving.</p>



<p>AI is also a fantastic debugging tool. It can analyze your code, catch potential issues, and suggest fixes before you even run it. Instead of spending hours combing through error messages and Stack Overflow threads, you get quick, relevant suggestions that help you move forward faster.</p>



<p>And let’s not forget about learning. If you’re picking up a new language or framework, an AI assistant can guide you with real-time examples, explain unfamiliar syntax, and even generate sample projects. It’s like having a 24/7 coding mentor who doesn’t judge your questions.</p>



<p>Beyond just speed and learning, AI can actually help improve code quality. It can suggest best practices, help format your code, and even recommend refactoring when your code gets messy. Plus, if you’re working in a team, it can assist with keeping code style consistent and even generate useful commit messages or documentation. Wouldn&#8217;t it be cool if we could plug the AI into our pipeline and make sure that all rules are being followed?</p>



<h3 class="wp-block-heading">The downsides no one (everyone) talks about</h3>



<p>As cool as AI coding assistants are, you don&#8217;t need to be a genius to see that they&#8217;re far from perfect. One of the biggest concerns I personally see is over-reliance. If you’re constantly relying on AI to write your code, do you really understand what’s happening under the hood? This can be a problem when something breaks, and you don’t know how to fix it because you never really wrote the thing in the first place <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> I&#8217;m sure you love reading someone else&#8217;s codebase and debugging that &lt;3</p>



<p>Another issue is that AI-generated code isn’t always optimized or even correct! It might suggest something that works but isn’t efficient, secure, or maintainable. If you blindly accept AI suggestions without reviewing them, you could end up with a mess of inefficient or buggy code.</p>



<p>Then there’s the question of security. AI assistants are trained on huge datasets, and sometimes they can generate code that includes security vulnerabilities. If you’re working on sensitive stuff, you have to be extra careful about what code you’re using and where it’s coming from.</p>



<p>Let&#8217;s talk privacy! Many AI coding tools rely on cloud-based processing, meaning your code might be sent to external servers for analysis. If you’re working on proprietary or confidential code, you need to be aware of the risks and check the privacy policies of the tools you’re using.</p>



<p>And finally, while AI can make you more productive, it can also be a bit of a crutch. Some developers might start relying too much on AI for even basic things, which can slow down their growth and problem-solving skills in the long run. </p>



<h3 class="wp-block-heading">So, Should You Use One?</h3>



<p>AI coding assistants are undeniably powerful tools, but they work best when used wisely. They’re great for boosting productivity, helping with debugging, and learning new technologies—but they shouldn’t replace actual coding knowledge and problem-solving skills. Think of them as a really smart assistant, not a replacement for your own expertise.</p>



<p>If you use AI responsibly—review its suggestions, stay mindful of security risks, and make sure you’re still learning and improving as a developer it can be a fantastic addition to your workflow, just don’t let it do all the thinking for you <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<h3 class="wp-block-heading">Still interested and want to start using AI in your daily work? Enter bolt.diy <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></h3>



<p><strong>bolt.diy</strong> is the open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<div class="responsive-embed widescreen"><iframe title="I Forked Bolt.new and Made it WAY Better" width="1000" height="563" src="https://www.youtube.com/embed/3PFcAu_oU80?list=PLyrg3m7Ei-MpOPKdenkQNcx8ueI36RNrA" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
</div></figure>



<p><strong>bolt.diy</strong> was originally started by&nbsp;<a href="https://www.youtube.com/@ColeMedin">Cole Medin</a>&nbsp;but has quickly grown into a massive community effort to build one of the best open source AI coding assistants out there.</p>



<h2 class="wp-block-heading">What do I need to get this deployed?</h2>



<p>Well, just Terraform and a Linode account. <br>In the backend, we will deploy a VM with a GPU attached, install bolt.diy and ollama, and ask it to write some code! Maybe a simple Tic-Tac-Toe game? <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>Ideally, you would run your bolt.diy deployment on a separate machine from the one running the model, but for our use case, the current deployment model is more than enough.</p>



<p>Like most of the things on this blog, guess what we&#8217;re gonna use? Yes! IaC!!! <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>Here&#8217;s a link to the <a href="https://github.com/aslepcev/linode-bolt.diy" target="_blank" rel="noopener" title="">GitHub repository</a> containing the Terraform code. </p>



<p>The code will do the following:</p>



<ol class="wp-block-list">
<li>Deploy a GPU-based instance in Akamai Connected Cloud</li>



<li>Use cloud-init to install the following:
<ul class="wp-block-list">
<li>curl</li>



<li>wget</li>



<li>nodejs</li>



<li>npm</li>



<li>nvtop &#8211; a great tool to monitor your GPU usage</li>



<li>Nvidia drivers</li>
</ul>
</li>



<li>Deploy and configure a firewall which will allow SSH and <strong>bolt.diy</strong> access from your IP. </li>



<li>Configure <strong>bolt </strong>and <strong>ollama </strong>to run as Linux services. For the <strong>ollama </strong>service, we make sure a model is always downloaded and created with a 32K context size.</li>
</ol>
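<p>For reference, the heart of the deployment is a single GPU instance plus a firewall. Here&#8217;s a minimal sketch; the plan ID, labels, and file name are illustrative placeholders, so check the repository for the real code:</p>



<pre class="wp-block-code"><code># Sketch only - plan ID and paths are placeholders, see the GitHub repo for the full code
resource "linode_instance" "bolt" {
  label  = "bolt-diy"
  region = var.region
  type   = "g2-gpu-rtx4000a1-s" # RTX 4000 Ada plan; verify the ID via the Linode API
  image  = "linode/ubuntu22.04"

  metadata {
    # cloud-init installs curl, wget, nodejs, npm, nvtop and the Nvidia drivers
    user_data = base64encode(file("${path.module}/cloud-init.yaml"))
  }
}

resource "linode_firewall" "bolt" {
  label           = "bolt-firewall"
  linodes         = [linode_instance.bolt.id]
  inbound_policy  = "DROP"
  outbound_policy = "ACCEPT"

  inbound {
    label    = "ssh-and-bolt"
    action   = "ACCEPT"
    protocol = "TCP"
    ports    = "22,5173" # SSH plus the bolt.diy UI
    ipv4     = [var.my_ip]
  }
}</code></pre>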



<h2 class="wp-block-heading">How do you deploy it? </h2>



<p>Just fill in your Linode API token, the desired region, and your IP address in the <strong>variables.tf </strong>file and run the following commands:</p>



<pre class="wp-block-code"><code>git clone https://github.com/aslepcev/linode-bolt.diy
cd linode-bolt.diy
#Fill in the variables.tf file now
terraform init
terraform plan
terraform apply</code></pre>
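<p>If you&#8217;re wondering what goes into <strong>variables.tf</strong>, it boils down to a few declarations along these lines; the variable names here are illustrative, use the ones from the repository:</p>



<pre class="wp-block-code"><code>variable "linode_token" {
  description = "Linode API token with read/write access"
  type        = string
  sensitive   = true
}

variable "region" {
  description = "Region to deploy into, e.g. nl-ams"
  type        = string
}

variable "my_ip" {
  description = "Your public IP in CIDR form (e.g. 203.0.113.7/32) allowed through the firewall"
  type        = string
}</code></pre>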



<p>After a short 5-6 minute wait, everything should be deployed and ready to use. Go ahead and visit the IP address of your VM on port 5173. </p>



<p><strong>Example URL</strong>: http://172.233.246.209:5173</p>



<p>Make sure that Ollama is selected as a provider and you&#8217;re off to the races!</p>



<h2 class="wp-block-heading">What can it do? </h2>



<p>Well, it really depends on the model we are running. With the <strong><a href="https://www.linode.com/pricing/#compute-gpu" target="_blank" rel="noopener" title="">RTX 4000 Ada GPU</a></strong>, we can comfortably run a <strong>14B parameter model with 32K context size</strong> which is &#8220;ok&#8221; for smaller and simpler stuff. </p>



<p>I tested it out with a simple task of creating a Tic-Tac-Toe game in NodeJS. It got the functionality right the first time, but it looked like something only a mother could love <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="505" src="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-1024x505.png" alt="" class="wp-image-770" srcset="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-1024x505.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-300x148.png 300w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-768x379.png 768w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-1536x758.png 1536w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1.png 1615w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>I just told it to make it a bit prettier and add some color; these were the results I got:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="694" height="589" src="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-3.png" alt="" class="wp-image-771" srcset="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-3.png 694w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-3-300x255.png 300w" sizes="(max-width: 694px) 100vw, 694px" /></figure>



<p>Interestingly, during the coding process, it made a mistake which it managed to identify and fix all on its own! All I did was press the &#8220;<strong>Ask Bolt</strong>&#8221; button. </p>






<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="616" height="251" src="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-fix.png" alt="" class="wp-image-772" style="width:838px;height:auto" srcset="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-fix.png 616w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-fix-300x122.png 300w" sizes="auto, (max-width: 616px) 100vw, 616px" /></figure>



<p>Also, here&#8217;s a fully functioning Space Invaders-like game, which it also wrote:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="975" height="665" src="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-space.png" alt="" class="wp-image-774" srcset="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-space.png 975w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-space-300x205.png 300w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-space-768x524.png 768w" sizes="auto, (max-width: 975px) 100vw, 975px" /></figure>



<h2 class="wp-block-heading">What if I want to run a larger model? 32B parameters or even larger? </h2>



<p>That&#8217;s very easy! Since Ollama can use multiple GPUs, all we need to do is scale up the VM to one that includes two or more GPUs. Akamai offers a maximum of 4 GPUs per VM, which brings us up to 80 GB of VRAM to run our model. I won&#8217;t experiment with larger models in this blog post; that&#8217;s something we will benchmark and try out in the future. </p>
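<p>In Terraform terms, scaling up is a one-attribute change on the instance resource. The plan IDs below are illustrative only, so verify the current GPU plans via the Linode API before applying:</p>



<pre class="wp-block-code"><code>resource "linode_instance" "bolt" {
  # Before: a single-GPU plan, e.g. type = "g2-gpu-rtx4000a1-s"
  # After:  a multi-GPU plan (illustrative ID); terraform apply will resize the VM
  type = "g2-gpu-rtx4000a2-m"
  # ...rest of the configuration stays the same
}</code></pre>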



<p>Cheers! Alex.</p>



<p>P.S &#8211; parts of this post were written by bolt.diy <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p></p><p>The post <a href="https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/">Your own AI coding assistant running on Akamai cloud!</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Akamai Application Platform on LKE</title>
		<link>https://blog.slepcevic.net/akamai-application-platform-on-lke/</link>
					<comments>https://blog.slepcevic.net/akamai-application-platform-on-lke/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Thu, 24 Oct 2024 00:06:40 +0000</pubDate>
				<category><![CDATA[Akamai APL]]></category>
		<category><![CDATA[Akamai Application Platform]]></category>
		<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[APL]]></category>
		<category><![CDATA[Otomi]]></category>
		<category><![CDATA[Terraform]]></category>
		<guid isPermaLink="false">https://blog.slepcevic.net/?p=674</guid>

					<description><![CDATA[<p>The Akamai Application Platform is a Kubernetes-centric solution that supports the full application lifecycle from development to deployment. It leverages tools from the Cloud Native Computing Foundation (CNCF) landscape, providing an integrated environment for managing containerized workloads. Optimized for Linode...</p>
<p>The post <a href="https://blog.slepcevic.net/akamai-application-platform-on-lke/">Akamai Application Platform on LKE</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>The <strong>Akamai Application Platform</strong> is a Kubernetes-centric solution that supports the full application lifecycle from development to deployment. It leverages tools from the Cloud Native Computing Foundation (CNCF) landscape, providing an integrated environment for managing containerized workloads. </p>



<p>Optimized for Linode Kubernetes Engine (LKE), the platform enhances automation and self-service, streamlining application management without requiring custom-built Kubernetes solutions.</p>



<p>Let&#8217;s get it up and running; like everything I do, we will avoid clickops and use IaC. </p>



<p>I&#8217;ve put the entire codebase on GitHub, so you can simply clone it, configure your Linode token, and run terraform apply.</p>



<p>The code will deploy an LKE cluster in Amsterdam with 3 x <strong>Linode 8 GB</strong> instances and then use the Helm provider to deploy the Akamai Application Platform for you. </p>



<p>It will also create a DNS zone with the domain specified in the code and configure APL with full DNS integration, meaning you&#8217;ll be pretty much ready to go and deploy apps. </p>



<p><strong>Please note that the cluster might scale up in case you decide to deploy a lot of applications. </strong></p>
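<p>Under the hood, the Terraform code boils down to an LKE cluster resource plus a Helm release. A rough sketch; the chart name and repository URL are assumptions here, the repository pins the real ones:</p>



<pre class="wp-block-code"><code>resource "linode_lke_cluster" "apl" {
  label       = "apl-cluster"
  k8s_version = var.k8s_version
  region      = var.region

  pool {
    type  = "g6-standard-4" # the Linode 8 GB plan
    count = 3
    autoscaler {
      min = 3
      max = 6 # allows the scale-up mentioned above
    }
  }
}

resource "helm_release" "apl" {
  name       = "apl"
  chart      = "apl"                               # chart name - assumption
  repository = "https://linode.github.io/apl-core" # repo URL - assumption
  # domain, SOA email and DNS integration values are passed in via set {} blocks
}</code></pre>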



<p><strong>Step 1 </strong>&#8211; clone the repository</p>



<pre class="wp-block-code"><code>git clone https://github.com/slepix/ApplicationPlatform-Linode.git</code></pre>






<p><strong>Step 2</strong> &#8211; fill out apl.tfvars file</p>



<pre class="wp-block-code"><code>linode_token = "YourLinodeToken"
domain = "yourdomain.com"
region = "nl-ams"
k8s_version = "1.30"
soaemail = "SoaEmail@yourdomain.com"</code></pre>






<p><strong>Step 3</strong> &#8211; run terraform plan to verify what will be deployed</p>



<pre class="wp-block-code"><code>terraform plan --var-file="apl.tfvars"</code></pre>






<p><strong>Step 4 </strong>&#8211; run Terraform apply and grab some coffee</p>



<pre class="wp-block-code"><code>terraform apply --var-file="apl.tfvars"</code></pre>






<p>After the cluster has been deployed, you can go to the Linode console and open the Kubernetes dashboard (you will need to download the kubeconfig file as well). </p>



<p>If all went smoothly (and it will, because IaC :D), you should see a bunch of containers popping up while APL configures itself. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="411" src="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl1-1024x411.png" alt="" class="wp-image-679" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl1-1024x411.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl1-300x120.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl1-768x308.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl1-1536x616.png 1536w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl1.png 1968w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Click on Jobs on the left-hand side and then on the three dots next to the &#8220;apl&#8221; job. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="426" src="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl3-1024x426.png" alt="" class="wp-image-680" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl3-1024x426.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl3-300x125.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl3-768x320.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl3-1536x639.png 1536w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl3.png 1656w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Click on Logs, which will bring up a new window with the logs. Once the installation has completed, you will see the username, password, and the URL where you can log in. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="641" src="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl2-1024x641.png" alt="" class="wp-image-681" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl2-1024x641.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl2-300x188.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl2-768x480.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl2.png 1159w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Keep in mind that the setup will take a good 20-30 minutes; there are a lot of things that need to be configured in the background for you <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>After everything has been deployed, the console website should become accessible. Log in with the credentials listed in the logs of your job and you should be presented with the console. </p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="990" height="694" src="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl5.png" alt="" class="wp-image-682" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl5.png 990w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl5-300x210.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl5-768x538.png 768w" sizes="auto, (max-width: 990px) 100vw, 990px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="519" src="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl6-1024x519.png" alt="" class="wp-image-684" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/10/apl6-1024x519.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl6-300x152.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl6-768x389.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl6-1536x779.png 1536w, https://blog.slepcevic.net/wp-content/uploads/2024/10/apl6.png 1619w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>Make sure to follow post-installation steps mentioned <a href="https://apl-docs.net/docs/get-started/installation/post-installation-steps" target="_blank" rel="noopener" title="">here</a> in order to finish the configuration.</strong></p>



<p>Cheers, </p>



<p>Alex!</p><p>The post <a href="https://blog.slepcevic.net/akamai-application-platform-on-lke/">Akamai Application Platform on LKE</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/akamai-application-platform-on-lke/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deploying a GLOBAL sentiment Analysis service using DeepSparse and Akamai Connected Cloud</title>
		<link>https://blog.slepcevic.net/deploying-a-global-sentiment-analysis-service-using-deepsparse-and-akamai-connected-cloud/</link>
					<comments>https://blog.slepcevic.net/deploying-a-global-sentiment-analysis-service-using-deepsparse-and-akamai-connected-cloud/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Wed, 12 Jun 2024 22:31:06 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Akamai Gecko]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linode Gecko]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[Akamai CLB]]></category>
		<category><![CDATA[Akamai Cloud Load Balancer]]></category>
		<category><![CDATA[DeepSparse]]></category>
		<category><![CDATA[EdgeAI]]></category>
		<category><![CDATA[Global computing]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[Linode VM]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Neural Magic]]></category>
		<guid isPermaLink="false">https://blog.slepcevic.net/?p=441</guid>

					<description><![CDATA[<p>In the previous post, we explored how to deploy a sentiment analysis application using Neural Magic’s DeepSparse on Akamai Connected Cloud (Linode). We leveraged just two dual-core VMs and a Nodebalancer to process a pretty impressive number (40K) of movie reviews...</p>
<p>The post <a href="https://blog.slepcevic.net/deploying-a-global-sentiment-analysis-service-using-deepsparse-and-akamai-connected-cloud/">Deploying a GLOBAL sentiment Analysis service using DeepSparse and Akamai Connected Cloud</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>In the <a href="https://blog.slepcevic.net/sentiment-analysis-of-40-thousand-movie-reviews-in-20-minutes-using-neural-magics-deepsparse-inference-runtime-and-linode/" target="_blank" rel="noopener" title="previous post">previous post</a>, we explored how to deploy a sentiment analysis application using Neural Magic’s DeepSparse on Akamai Connected Cloud (Linode). </p>



<p>We leveraged just two dual-core VMs and a Nodebalancer to process a pretty impressive number (40K) of movie reviews in just 20 minutes. However, deploying in a single region can lead to latency and availability issues and doesn&#8217;t fully utilize the global reach of modern cloud infrastructure that Akamai Connected Cloud offers. </p>



<p>Also, single region deployments are kinda boring <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>In this post, we&#8217;ll expand our deployment by setting up virtual machines in <strong>all available</strong> Linode regions and replacing the current <a href="https://www.linode.com/products/nodebalancers/" title="Nodebalancer ">Nodebalancer </a>with Akamai’s new <a href="https://www.linode.com/green-light/" title="Cloud Load Balancer">Cloud Load Balancer</a> (currently in beta access). </p>



<p><strong>What is Akamai&#8217;s new Cloud Load Balancer, you may ask? It&#8217;s a really cool piece of tech.</strong></p>



<p>Think of it like an umbrella over the internet; it gives you the ability to load balance your workloads across ANY location; it can be on-prem, Akamai&#8217;s cloud, some other hyperscaler, heck, it can even be your home IP address if you want to <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> </p>



<p>As long as it can be reached over the internet, Cloud Load Balancer can use it to deliver the request. </p>



<p>Joking aside, here&#8217;s a more official description of the service:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>The Akamai Cloud Load Balancer (ACLB) (formerly referred to as Akamai Global Load Balancer) is a layer 4 and 7 load balancer that distributes traffic based on performance, weight, and content (HTTP headers, query strings, etc.). The ACLB is multi-region, independent of Akamai Delivery, and built for East-West and North-South traffic. Key features include multi-region/multi-cloud load balancing and method selection.</p>
</blockquote>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="259" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/clb3-1024x259.png" alt="" class="wp-image-447" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/clb3-1024x259.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/clb3-300x76.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/clb3-768x195.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/clb3.png 1303w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h3 class="wp-block-heading">Why Scale Globally?</h3>



<p>Scaling out our application across multiple regions has several benefits:</p>



<ol class="wp-block-list">
<li><strong>Reduced Latency</strong>: By having servers closer to our users, we can significantly reduce the time it takes for requests to travel back and forth.</li>



<li><strong>High Availability</strong>: Distributing the load across multiple regions ensures that if one region goes down, well, we kinda don&#8217;t care, our app stays online <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></li>



<li><strong>Better Performance</strong>: I mean, you can&#8217;t beat physics; simply being able to do compute closer to the user improves performance and user experience.</li>
</ol>



<h3 class="wp-block-heading">Step-by-Step Deployment Guide</h3>



<p>Ok, let&#8217;s get into the meaty part; the codebase from our previous post hasn&#8217;t changed significantly. The only thing we changed is that we&#8217;re no longer hardcoding our region; instead, we fetch the list of available regions from the Linode API and deploy an instance in each region. </p>
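<p>The region-fetching and per-region deployment can be sketched like this; attribute names are simplified, and the real code also handles the optional authentication mentioned below:</p>



<pre class="wp-block-code"><code>data "http" "regions" {
  url = "https://api.linode.com/v4/regions"
}

locals {
  # Every region ID returned by the API, e.g. ["nl-ams", "us-east", ...]
  region_ids = [for r in jsondecode(data.http.regions.response_body).data : r.id]
}

resource "linode_instance" "deepsparse" {
  for_each = toset(local.region_ids)

  label  = "deepsparse-${each.key}"
  region = each.key
  type   = "g6-standard-2" # the dual-core plan from the previous post
  image  = "linode/ubuntu22.04"
}</code></pre>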



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>In the code which fetches the regions, you will notice that I commented out the authentication part. </p>



<p>Some regions are available only to authenticated users; if you&#8217;re one of those, just uncomment those few lines and the full region list will be returned to you. </p>
</blockquote>



<p>Let&#8217;s start with terraform plan and see what we will create. </p>



<pre class="wp-block-code"><code>terraform plan</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="942" height="690" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/1.png" alt="" class="wp-image-449" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/1.png 942w, https://blog.slepcevic.net/wp-content/uploads/2024/06/1-300x220.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/1-768x563.png 768w" sizes="auto, (max-width: 942px) 100vw, 942px" /></figure>



<p>Ok, 25 instances, just what we expect since Akamai currently has 25 publicly available compute regions. </p>



<p>Let&#8217;s proceed with terraform apply </p>



<pre class="wp-block-code"><code>terraform apply</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="690" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/2-1024x690.png" alt="" class="wp-image-450" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/2-1024x690.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/2-300x202.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/2-768x518.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/2.png 1300w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>I don&#8217;t know about you, but I always nerd out on seeing a bunch of servers popping up in the console <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
</blockquote>



<p>In a minute or two all instances should be deployed. Once they are, <strong>cloud-init</strong> will kick off and install the DeepSparse server with an Nginx proxy in front (check out the previous post or the YAML file in the repo for more details).</p>



<p><strong>Awesome</strong>! Now that we&#8217;ve got the infrastructure up and running, the last step is to add our nodes to the Cloud Load Balancer pool; for the moment we will need to do some ClickOps <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f641.png" alt="🙁" class="wp-smiley" style="height: 1em; max-height: 1em;" /> since the CLB service is currently in beta and IaC support isn&#8217;t out yet. </p>



<p>The first step is to create a Cloud Load Balancer by clicking the &#8220;<strong>Create Cloud Load Balancer</strong>&#8221; button and giving it a name.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="410" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/4-1024x410.png" alt="" class="wp-image-452" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/4-1024x410.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/4-300x120.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/4-768x307.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/4.png 1322w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>During the beta period, Cloud Load Balancer is deployed only in 5 locations. This number will grow drastically once the service goes GA. </p>
</blockquote>



<p>With Akamai&#8217;s Cloud Load Balancer, everything starts with a &#8220;<strong>Configuration</strong>&#8221;. Let&#8217;s create one by pressing the &#8220;<strong>Add Configuration</strong>&#8221; button. </p>



<p>We will configure our load balancer to &#8220;listen&#8221; on both HTTP and HTTPS. Once we&#8217;ve selected HTTPS as our protocol, we need to add a certificate. </p>



<p>In order to do that, we need to prepare our <strong>certificate</strong> and <strong>private key</strong> which we will paste into the configuration field.<em> In this case I will use a self-signed certificate. </em></p>
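<p>If you also want to test with a self-signed certificate, one way to generate a throwaway certificate and key is with OpenSSL (the hostname below is illustrative; substitute your own):</p>

```shell
# Generate a self-signed certificate (clb-cert.pem) and private key
# (clb-key.pem), valid for one year, for an illustrative hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout clb-key.pem -out clb-cert.pem -days 365 \
  -subj "/CN=clbtest.example.com"
```

<p><em>You can then paste the contents of the two files into the certificate and key fields of the configuration.</em></p>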



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1298" height="578" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/6-1024x456.png" alt="" class="wp-image-453" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/6-1024x456.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/6-300x134.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/6-768x342.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/6.png 1298w" sizes="auto, (max-width: 1298px) 100vw, 1298px" /></figure>



<p>At this stage we will only cover the configuration for the HTTPS protocol; HTTP is really easy, so I won&#8217;t waste your time on it. </p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="485" height="886" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/7.png" alt="" class="wp-image-454" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/7.png 485w, https://blog.slepcevic.net/wp-content/uploads/2024/06/7-164x300.png 164w" sizes="auto, (max-width: 485px) 100vw, 485px" /></figure>
</div>


<p>We need to paste in the certificate &amp; the key, enter the SNI hostname and press the &#8220;<strong>Create and Add</strong>&#8221; button. </p>



<p>After we&#8217;ve got the configuration and the certificate added, we need to create a &#8220;<strong>Route</strong>&#8221;. Let&#8217;s click on the &#8220;<strong>Create a New HTTP Route</strong>&#8221; button and give it a name. </p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="485" height="483" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/9.png" alt="" class="wp-image-455" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/9.png 485w, https://blog.slepcevic.net/wp-content/uploads/2024/06/9-300x300.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/9-150x150.png 150w" sizes="auto, (max-width: 485px) 100vw, 485px" /></figure>
</div>


<p>Great, we&#8217;ve created a route, but the route is currently empty and it doesn&#8217;t route anything. We will come back to this a bit later. </p>



<p>The next step is to save our configuration and click on the &#8220;<strong>Service Targets</strong>&#8221; tab. </p>



<p>This is the place where we will define our target groups and origin servers. Click the &#8220;<strong>New Service Target</strong>&#8221; button.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="349" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/10-1024x349.png" alt="" class="wp-image-456" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/10-1024x349.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/10-300x102.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/10-768x262.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/10.png 1301w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The next steps are self-explanatory: we need to give it a name and add the nodes we want to load balance across. </p>



<p><strong>Remember, this can be one of your existing Linode instances or ANY IP address reachable via the internet. </strong></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="440" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/11-1024x440.png" alt="" class="wp-image-457" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/11-1024x440.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/11-300x129.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/11-768x330.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/11-1536x660.png 1536w, https://blog.slepcevic.net/wp-content/uploads/2024/06/11.png 1801w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>This is the step where I could really use IaC support; we need to add all 25 servers to our &#8220;<strong>Endpoints</strong>&#8221; list using ClickOps. <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>This is also the place where you can select the <strong>load balancing algorithm</strong> which will be used to balance requests between the nodes. At the moment there are 5 of them available:</p>



<ul class="wp-block-list">
<li><strong>Round Robin</strong></li>



<li><strong>Least Request</strong></li>



<li><strong>Ring Hash</strong></li>



<li><strong>Random</strong></li>



<li><strong>Maglev</strong></li>
</ul>
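<p>To give a feel for the simplest of these, here is a minimal Python sketch of how Round Robin cycles requests across a pool of endpoints (the IP addresses are placeholders, not the actual instances):</p>

```python
from itertools import cycle

# Placeholder endpoints standing in for the load-balanced instances.
endpoints = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

# Round Robin: each incoming request is handed to the next endpoint
# in order, wrapping around when the list is exhausted.
rr = cycle(endpoints)
picks = [next(rr) for _ in range(6)]
print(picks)  # each endpoint receives exactly two of the six requests
```

<p>The other algorithms trade this strict rotation for other properties, e.g. Least Request favours the node with the fewest in-flight requests, while Ring Hash and Maglev keep a given client pinned to the same node.</p>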



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="547" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/15-1024x547.png" alt="" class="wp-image-460" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/15-1024x547.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/15-300x160.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/15-768x410.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/15-1536x821.png 1536w, https://blog.slepcevic.net/wp-content/uploads/2024/06/15.png 1783w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The last step in the &#8220;<strong>Service Target</strong>&#8221; configuration is to set the path and host header we will use for the health checks on the nodes, then click the &#8220;<strong>Save Service Target</strong>&#8221; button. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="508" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/12-1024x508.png" alt="" class="wp-image-458" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/12-1024x508.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/12-300x149.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/12-768x381.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/12-1536x762.png 1536w, https://blog.slepcevic.net/wp-content/uploads/2024/06/12.png 1807w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>We&#8217;re almost there, I promise <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" /> </strong></p>



<p>The final step is to go back to the &#8220;<strong>Routes</strong>&#8221; tab, click on the route we created earlier and click the Edit button.</p>



<p>In the rule configuration we will enter the hostname we want to match on and select our &#8220;<strong>Service target</strong>&#8221; from the dropdown. </p>



<p>We can also do advanced request matching based on <strong>path, header, method, regex or query string</strong> but for now we will use path prefix. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="535" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/13-1-1024x535.png" alt="" class="wp-image-461" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/13-1-1024x535.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/13-1-300x157.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/13-1-768x401.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/13-1-1536x803.png 1536w, https://blog.slepcevic.net/wp-content/uploads/2024/06/13-1.png 1814w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Ok, we have configured our Cloud Load Balancer; the final step to get our application running is to create a CNAME record for my subdomain &#8220;<strong>clbtest.slepcevic.net</strong>&#8221; and point it to &#8220;<strong>MDEE053110.mesh.akadns.net</strong>&#8221; (<em>visible on the Summary page of the load balancer</em>). </p>



<p>Let&#8217;s go ahead and visit our website. We see that our DeepSparse API server is happily responding and ready to receive requests! Woohoo!</p>



<p><strong>Yes, it&#8217;s that easy. </strong>In less than 10 minutes we have deployed a globally distributed application on Akamai Connected Cloud. Once IaC support for Cloud Load Balancer is rolled out, we can bring this time down to 5 minutes without any problems. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="649" src="https://blog.slepcevic.net/wp-content/uploads/2024/06/16-1024x649.png" alt="" class="wp-image-462" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/16-1024x649.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/06/16-300x190.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/16-768x487.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/06/16.png 1264w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">Ok, 25 regions is cool, but that isn&#8217;t truly global, is it? <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></h2>



<p>Yes, you&#8217;re right; with 25 regions we have covered the large majority of the global (internet) population. Can we do better? For sure! Welcome Gecko!</p>



<h3 class="wp-block-heading">Gecko?</h3>



<p>Akamai’s new initiative, code-named Gecko, is set to revolutionize cloud computing by integrating cloud capabilities directly into Akamai&#8217;s extensive edge network. This move aligns perfectly with Akamai’s strategy to provide high-performance, low-latency, and globally scalable solutions. By embedding compute capabilities at or VERY near the edge, Gecko aims to deliver workloads closer to users, devices, and data sources than ever before.</p>



<h3 class="wp-block-heading">What Does Gecko Mean for Our Deployment in the Future?</h3>



<p>Gecko will enable us to deploy our sentiment analysis application in hundreds of new locations worldwide, <strong>including traditionally hard-to-reach areas</strong>. This means drastically reduced latency, improved performance, and enhanced availability for users across the world. </p>



<h3 class="wp-block-heading">The Benefits of Deploying on Gecko</h3>



<ol class="wp-block-list">
<li><strong>Ultra-Low Latency</strong>: By running our workloads even closer to end-users, we can drastically reduce the time it takes to process and respond to requests.</li>



<li><strong>Global Reach</strong>: With Gecko, we can deploy in cities and regions where traditional cloud providers struggle to reach, ensuring a truly global presence. </li>



<li><strong>Scalability and Flexibility</strong>: With Akamai&#8217;s large compute footprint, we can scale out our application to tens of thousands of nodes across hundreds of locations. </li>



<li><strong>Consistent Experience</strong>: Let&#8217;s be real, if you&#8217;re running a global application, you&#8217;re most probably dealing with multiple providers; with Gecko you can consolidate all of your workloads and location coverage with a single provider. The operational benefits alone should be enough to &#8220;tickle&#8221; your brain into considering it for your application. </li>
</ol>



<h2 class="wp-block-heading">Want to try it yourself?</h2>



<p>Have fun <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Go ahead and clone the repo from <a href="https://github.com/slepix/neuralmagic-globalLinode" target="_blank" rel="noopener" title="">https://github.com/slepix/neuralmagic-globalLinode</a> and sign up to receive beta access to the new Cloud Load balancer on <a href="https://www.linode.com/green-light/" target="_blank" rel="noopener" title="">https://www.linode.com/green-light/</a> .  </p>



<p>NeuralMagic running across all Linode (including Gecko) regions in the next post? Perhaps <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>Cheers! Alex</p><p>The post <a href="https://blog.slepcevic.net/deploying-a-global-sentiment-analysis-service-using-deepsparse-and-akamai-connected-cloud/">Deploying a GLOBAL sentiment Analysis service using DeepSparse and Akamai Connected Cloud</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/deploying-a-global-sentiment-analysis-service-using-deepsparse-and-akamai-connected-cloud/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Easy way to deploy Linode instances across all (core) regions using Terraform</title>
		<link>https://blog.slepcevic.net/easy-way-to-deploy-linode-instances-across-all-core-regions-using-terraform/</link>
					<comments>https://blog.slepcevic.net/easy-way-to-deploy-linode-instances-across-all-core-regions-using-terraform/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Mon, 03 Jun 2024 22:57:35 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[Linode regions]]></category>
		<guid isPermaLink="false">https://blog.slepcevic.net/?p=426</guid>

					<description><![CDATA[<p>In preparation for an upcoming blog post related to running AI workloads on Akamai Connected Cloud, I needed to deploy instances in all available Linode regions. Since there&#8217;s no way that I&#8217;ll manually create a list of regions, I thought...</p>
<p>The post <a href="https://blog.slepcevic.net/easy-way-to-deploy-linode-instances-across-all-core-regions-using-terraform/">Easy way to deploy Linode instances across all (core) regions using Terraform</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>In preparation for an upcoming blog post related to running AI workloads on Akamai Connected Cloud, I needed to deploy instances in all available Linode regions. Since there&#8217;s no way that I&#8217;ll manually create a list of regions, I thought it would be cool to use Terraform&#8217;s HTTP provider to dynamically fetch the list of regions and deploy an instance to each one. </p>






<h3 class="wp-block-heading">What’s the Plan?</h3>






<ol class="wp-block-list">
<li><strong>Fetch the list of regions</strong>: Use Terraform&#8217;s HTTP provider to retrieve the list of Linode regions via the Linode API.</li>



<li><strong>Deploy Instances</strong>: Use Terraform’s Linode provider to deploy instances in each of these regions.</li>



<li><strong>Grab some coffee</strong>: Sit back and relax while Terraform does the heavy lifting.</li>
</ol>



<h3 class="wp-block-heading">Pre-requisites</h3>



<p>Before we begin, make sure you have:</p>



<ul class="wp-block-list">
<li>A Linode account (If not, sign up <a href="https://www.linode.com/" target="_blank" rel="noopener" title="">here</a>).</li>



<li>A Linode API token (Grab it from your <a href="https://cloud.linode.com/profile/tokens" target="_blank" rel="noopener" title="">Linode dashboard</a>).</li>



<li>Terraform installed on your machine (Download it from <a href="https://developer.hashicorp.com/terraform/install" target="_blank" rel="noopener" title="">here</a>).</li>
</ul>



<h3 class="wp-block-heading">Step-by-Step Guide</h3>



<h4 class="wp-block-heading">Step 1: Define Providers and Variables</h4>



<p>We’ll start by defining our providers and variables. The HTTP provider will fetch the regions, and the Linode provider will deploy our instances.</p>






<pre class="wp-block-code"><code>provider "http" {}

variable "token" {
  type    = string
  default = "mytoken" # Replace with your actual Linode API token
}</code></pre>



<h4 class="wp-block-heading">Step 2: Fetch Regions from the Linode API</h4>



<p>Next, let’s fetch the list of regions using the HTTP provider. We’ll decode the JSON response to extract the region IDs. </p>



<p><strong>Please note that if you uncomment the &#8220;request_headers&#8221; part, you will get the list of regions available to your specific user/account.</strong> By default you will get the list of all public regions. </p>



<pre class="wp-block-code"><code>data "http" "regions" {
  url = "https://api.linode.com/v4/regions"

#   request_headers = {
#     Authorization = "Bearer ${var.token}"
#   }
}

locals {
  regions = &#91;for region in jsondecode(data.http.regions.response_body).data : region.id]
}</code></pre>
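<p>If you want to sanity-check what that <code>locals</code> block does, here is the same extraction in Python (the payload is a truncated, illustrative sample of the <code>/v4/regions</code> response shape, not the real region list):</p>

```python
import json

# Truncated, illustrative sample of the Linode /v4/regions response shape.
sample = '{"data": [{"id": "us-east"}, {"id": "eu-west"}], "results": 2}'

# Mirrors the Terraform expression:
#   [for region in jsondecode(...).data : region.id]
region_ids = [region["id"] for region in json.loads(sample)["data"]]
print(region_ids)
```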



<h4 class="wp-block-heading">Step 3: Deploy Linode Instances</h4>



<p>Now comes the fun part! We&#8217;ll use Terraform&#8217;s <code>for_each</code> feature to loop through the regions and deploy a virtual machine in each one. </p>



<pre class="wp-block-code"><code>provider "linode" {
  token = var.token
}

resource "linode_instance" "instances" {
  for_each = toset(local.regions)

  label     = "linode-${each.key}"
  region    = each.key
  type      = "g6-standard-1"  # Example Linode type, adjust as needed
  image     = "linode/ubuntu20.04"  # Example image, adjust as needed
  root_pass = "your_secure_password"  # Replace with a secure password
  authorized_keys = &#91;file("~/.ssh/id_rsa.pub")]  # Adjust path to your public key
}</code></pre>



<h4 class="wp-block-heading">Step 4: Outputs</h4>



<p>Finally, let’s define some outputs to see the regions and instances we’ve created.</p>



<pre class="wp-block-code"><code>output "regions" {
  value = local.regions
}

output "instances" {
  value = { for instance in linode_instance.instances : instance.id => instance }
}</code></pre>



<h3 class="wp-block-heading">Wrapping Up</h3>



<p>And there you have it! With just a few lines of code, we&#8217;ve automated the deployment of Linode instances across all regions. Terraform takes care of the heavy lifting, allowing you to focus on what matters most – enjoying a cup of coffee while your infrastructure magically sets itself up :D</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="974" height="441" src="http://blog.slepcevic.net/wp-content/uploads/2024/06/Screenshot-2024-06-04-004302.png" alt="" class="wp-image-434" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/06/Screenshot-2024-06-04-004302.png 974w, https://blog.slepcevic.net/wp-content/uploads/2024/06/Screenshot-2024-06-04-004302-300x136.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/06/Screenshot-2024-06-04-004302-768x348.png 768w" sizes="auto, (max-width: 974px) 100vw, 974px" /></figure>



<p>Until next time, Alex!</p><p>The post <a href="https://blog.slepcevic.net/easy-way-to-deploy-linode-instances-across-all-core-regions-using-terraform/">Easy way to deploy Linode instances across all (core) regions using Terraform</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/easy-way-to-deploy-linode-instances-across-all-core-regions-using-terraform/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Sentiment analysis of 40 thousand movie reviews in 20 minutes using Neural Magic&#8217;s DeepSparse inference runtime and Linode virtual machines.</title>
		<link>https://blog.slepcevic.net/sentiment-analysis-of-40-thousand-movie-reviews-in-20-minutes-using-neural-magics-deepsparse-inference-runtime-and-linode/</link>
					<comments>https://blog.slepcevic.net/sentiment-analysis-of-40-thousand-movie-reviews-in-20-minutes-using-neural-magics-deepsparse-inference-runtime-and-linode/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Sat, 30 Mar 2024 22:54:00 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[DeepSparse]]></category>
		<category><![CDATA[Linode]]></category>
		<category><![CDATA[Neural Magic]]></category>
		<guid isPermaLink="false">https://blog.slepcevic.net/?p=287</guid>

					<description><![CDATA[<p>First, let me start with a word or two about DeepSparse. DeepSparse is a sparsity-aware inference runtime that delivers GPU-class performance on commodity CPUs, purely in software, anywhere. GPUs Are Not Optimal &#8211; Machine learning inference has evolved over the...</p>
<p>The post <a href="https://blog.slepcevic.net/sentiment-analysis-of-40-thousand-movie-reviews-in-20-minutes-using-neural-magics-deepsparse-inference-runtime-and-linode/">Sentiment analysis of 40 thousand movie reviews in 20 minutes using Neural Magic’s DeepSparse inference runtime and Linode virtual machines.</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>First, let me start with a word or two about DeepSparse. </p>



<p><strong>DeepSparse is a sparsity-aware inference runtime that delivers GPU-class performance on commodity CPUs, purely in software, anywhere.</strong></p>



<p><strong>GPUs Are Not Optimal</strong> &#8211; Machine learning inference has evolved over the years led by GPU advancements. GPUs are fast and powerful, but they can be expensive, have shorter life spans, and require a lot of electricity and cooling.</p>



<p>Other major problems with GPUs, especially if you&#8217;re thinking in the context of <a href="https://www.akamai.com/newsroom/press-release/akamai-takes-cloud-computing-to-the-edge" title="">Edge computing</a>, are that they can&#8217;t be packed as densely and are less power efficient than CPUs; not to mention their availability these days.</p>



<p>Since Akamai recently partnered up with Neural Magic, I&#8217;ve decided to write a quick tutorial on how to easily get started with running a simple <strong>DeepSparse sentiment analysis workload</strong>. </p>



<p>In case you want to learn more about Akamai and Neural Magic&#8217;s partnership, make sure to watch this excellent video from TFiR. It will also give you a great summary of Akamai&#8217;s Project Gecko.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<div class="responsive-embed widescreen"><iframe loading="lazy" title="Akamai partners with Neural Magic to bring AI to edge use cases" width="1000" height="563" src="https://www.youtube.com/embed/MG45UM4SlbQ?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
</div></figure>






<h3 class="wp-block-heading">What is Sentiment analysis?</h3>



<p><strong>Sentiment analysis</strong>&nbsp;(also known as&nbsp;<strong>opinion mining</strong>&nbsp;or&nbsp;<strong>emotion AI</strong>) is the use of&nbsp;<a href="https://en.wikipedia.org/wiki/Natural_language_processing">natural language processing</a>,&nbsp;<a href="https://en.wikipedia.org/wiki/Text_analytics">text analysis</a>,&nbsp;<a href="https://en.wikipedia.org/wiki/Computational_linguistics">computational linguistics</a>, and&nbsp;<a href="https://en.wikipedia.org/wiki/Biometrics">biometrics</a>&nbsp;to systematically identify, extract, quantify, and study affective states and subjective information. Sentiment analysis is widely applied to&nbsp;<a href="https://en.wikipedia.org/wiki/Voice_of_the_customer">voice of the customer</a>&nbsp;materials such as reviews and survey responses, online and social media, and healthcare materials for applications that range from&nbsp;<a href="https://en.wikipedia.org/wiki/Marketing">marketing</a>&nbsp;to&nbsp;<a href="https://en.wikipedia.org/wiki/Customer_relationship_management">customer service</a>&nbsp;to clinical medicine.&nbsp;</p>



<p>Why is DeepSparse cool? Because I&#8217;m doing analysis of 40 thousand movie reviews in 20 minutes using only <strong>TWO DUAL-CORE Linode VMs. Mind officially blown. </strong></p>






<p>Let&#8217;s do some math here; rounding it up to 120 thousand processed reviews an hour, with 2 instances and a load balancer we can process over <strong>86 million requests a month</strong>, which will cost you a <strong>staggering $82 <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" /></strong>. </p>
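<p>For the skeptics, the arithmetic behind that number, spelled out (assuming a 30-day month):</p>

```python
reviews = 40_000   # reviews processed in one benchmark run
minutes = 20       # duration of the run

per_hour = reviews // minutes * 60   # 2,000/min -> 120,000 reviews per hour
per_month = per_hour * 24 * 30       # sustained over a 30-day month
print(per_month)  # 86,400,000 -> "over 86 million requests a month"
```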



<p><strong>If you&#8217;re doing that on other cloud providers, you&#8217;re paying a five-digit monthly bill for that pleasure. </strong></p>






<h2 class="wp-block-heading">Want to try it yourself? It&#8217;s easy!</h2>



<p>If you want to try it out on Linode, follow the instructions below. </p>



<p>If you want to check out Neural Magic DeepSparse repo, head out <a href="https://github.com/neuralmagic/deepsparse" target="_blank" rel="noopener" title="">here</a>.</p>



<p><strong>Step 1.  Clone the Repository</strong>.</p>



<p>Open your terminal or command prompt and run the following command: </p>



<pre class="wp-block-code"><code>git clone https://github.com/slepix/neuralmagic-linode</code></pre>



<p>This code will deploy <strong>2 x Dedicated 4 GB</strong> virtual machines and a <strong>Nodebalancer</strong>. It will also install Neural Magic&#8217;s DeepSparse runtime as a Linux service and install &amp; configure Nginx to proxy requests to the DeepSparse server listening on 127.0.0.1:5543. </p>



<p class="has-vivid-red-color has-text-color has-link-color wp-elements-f0a0be9819cd2bc070b1912e9e812e46"><strong>WARNING: THIS IS NOT PRODUCTION GRADE SERVER CONFIGURATION!</strong></p>



<p class="has-vivid-red-color has-text-color has-link-color wp-elements-42cceec5227b00bdbbe795cf30c2587a">It&#8217;s just a POC! Secure your servers and consult Neural Magic documentation if you want to go to production. </p>



<p><strong>Step 2. </strong>&#8211; <strong>Terraform init</strong></p>



<p>Navigate to the repo using the following command: </p>



<pre class="wp-block-code"><code>cd neuralmagic-linode</code></pre>



<p>If you haven&#8217;t already installed Terraform on your machine, you can download it from the <a href="https://developer.hashicorp.com/terraform/install?product_intent=terraform" target="_blank" rel="noopener" title="">official Terraform website</a> and follow the installation instructions for your operating system.</p>



<p><strong>Step 3. </strong></p>



<p>Initialize Terraform by running:</p>



<pre class="wp-block-code"><code>terraform init</code></pre>



<p><strong>Step 4. </strong>&#8211; <strong>Configure your Linode token</strong></p>



<p>Open the <strong>variables.tf </strong>file and paste in your Linode token. If you don&#8217;t know how to create a Linode PAT, check the article <strong><a href="https://www.linode.com/docs/products/tools/api/guides/manage-api-tokens/" target="_blank" rel="noopener" title="">here</a></strong>. It should look similar to the picture. You can also adjust the region while you&#8217;re here <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<pre class="wp-block-preformatted">Token in the picture is not valid. It's just an example. </pre>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="772" height="275" src="https://blog.slepcevic.net/wp-content/uploads/2024/03/tokendemo.png" alt="" class="wp-image-293" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/03/tokendemo.png 772w, https://blog.slepcevic.net/wp-content/uploads/2024/03/tokendemo-300x107.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/03/tokendemo-768x274.png 768w" sizes="auto, (max-width: 772px) 100vw, 772px" /></figure>



<p><strong>Step 5</strong> &#8211; <strong>Run Terraform apply</strong></p>



<p>After configuring your variables, you can apply the Terraform configuration by running:</p>



<pre class="wp-block-code"><code>terraform apply</code></pre>



<p>Terraform will show you a plan of the changes it intends to make. </p>



<p>Review the plan carefully, and if everything looks good, type <code><strong>yes</strong></code> and press Enter to apply the changes. Give it 5-6 minutes to finish everything, and by visiting your Nodebalancer IP you should be presented with a landing page for the DeepSparse server API. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="518" src="https://blog.slepcevic.net/wp-content/uploads/2024/03/Screenshot-2024-03-31-233941-1024x518.png" alt="" class="wp-image-294" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/03/Screenshot-2024-03-31-233941-1024x518.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/03/Screenshot-2024-03-31-233941-300x152.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/03/Screenshot-2024-03-31-233941-768x388.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/03/Screenshot-2024-03-31-233941.png 1457w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>Step 6</strong> &#8211; <strong>Test the API</strong></p>



<p>After the installation is done, it&#8217;s finally time to send some data to our API and see how it performs. </p>



<p>We can do that by using <strong>curl</strong>, or <strong>Invoke-WebRequest</strong> if you&#8217;re on Windows and using PowerShell. </p>



<p><strong>CURL: </strong></p>



<pre class="wp-block-code"><code>sentence="Neural Magic &amp; Akamai are cool!"
nodebalancer="172.233.34.110" #PUT YOUR NODEBALANCER IP HERE
curl -X POST http://$nodebalancer/v2/models/sentiment_analysis/infer -H "Content-Type: application/json" -d "{\"sequences\": \"$sentence\"}"</code></pre>



<p><strong>PowerShell:</strong></p>






<pre class="wp-block-code"><code>$sentence = "Neural Magic &amp; Akamai are cool!"
$nodebalancer = "172.233.34.110"

$path = "v2/models/sentiment_analysis/infer"
$api = "http://$nodebalancer/$path"
$body = @{
   sequences = $sentence
} | ConvertTo-Json

(Invoke-WebRequest -Uri $api -Method Post -ContentType "application/json" -Body $body -ErrorAction Stop).content</code></pre>



<p>In both cases, make sure to paste in the <strong>IP address of the NodeBalancer</strong> you deployed, and modify the sentence as you wish. </p>
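<p>If you prefer Python, the same request can be sketched with just the standard library. This mirrors the curl example above; the function names are mine and the IP is a placeholder:</p>

```python
import json
from urllib import request

def build_payload(sentence: str) -> bytes:
    # Same JSON body the curl example sends: {"sequences": "<sentence>"}
    return json.dumps({"sequences": sentence}).encode("utf-8")

def infer(nodebalancer_ip: str, sentence: str) -> dict:
    # POST to the DeepSparse sentiment-analysis endpoint and parse the reply.
    req = request.Request(
        f"http://{nodebalancer_ip}/v2/models/sentiment_analysis/infer",
        data=build_payload(sentence),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Requires a running deployment; put your NodeBalancer IP here:
# print(infer("172.233.34.110", "Neural Magic & Akamai are cool!"))
```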



<h2 class="wp-block-heading">Benchmark time!</h2>



<p>In the repository, I&#8217;ve included a zipped dataset (<strong>movies.zip</strong>, containing movies.csv) and three benchmark scripts: two PowerShell files and one Python file.</p>



<p><strong>movies.zip</strong> &#8211; unzip this one in the same folder where your benchmark scripts are. </p>



<p><strong>analyze.ps1</strong> &#8211; PowerShell-based benchmark, sends requests serially &#8211; not performant. </p>



<p><strong>panalyze.ps1</strong> &#8211; PowerShell-based benchmark, sends requests in parallel &#8211; performs better.</p>



<p><strong>pypanalyze.py</strong> &#8211; Python-based benchmark, sends requests in parallel &#8211; <strong>best performer (doh!) &lt;- use this</strong></p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="581" height="153" src="http://blog.slepcevic.net/wp-content/uploads/2024/04/Screenshot-2024-04-01-014021.png" alt="" class="wp-image-327" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/04/Screenshot-2024-04-01-014021.png 581w, https://blog.slepcevic.net/wp-content/uploads/2024/04/Screenshot-2024-04-01-014021-300x79.png 300w" sizes="auto, (max-width: 581px) 100vw, 581px" /></figure>



<p>All you need to do in order to kick off a benchmark is to update the URL variable with your NodeBalancer IP, and you&#8217;re off to the races. </p>
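<p>The core of the parallel approach can be sketched in a few lines of Python. This is a simplified illustration of the pattern, not the actual pypanalyze.py &#8211; the function names are mine:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_all(reviews, infer_fn, workers=16):
    """Send every review through infer_fn concurrently, keeping input order.

    infer_fn is whatever callable performs one API request, e.g. a wrapper
    around the curl-style POST shown earlier. Threads work well here because
    the workload is network-bound, not CPU-bound.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map fans the requests out across the worker threads and
        # returns results in the same order as the input.
        return list(pool.map(infer_fn, reviews))

# With a dummy infer_fn, just to show the shape of the call:
# analyze_all(["great movie", "awful movie"], lambda s: len(s), workers=4)
```

<p>Bumping the <code>workers</code> value is the same knob the post mentions playing with in the Python file.</p>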



<h2 class="wp-block-heading">Does it scale?</h2>



<p><strong>Yes!</strong> For kicks I&#8217;ve added a third node and the same job finished in 825 seconds. Feel free to add as many nodes as you like and see what numbers you can get. Additionally, you can play with the number of workers in the Python file. </p>



<pre class="wp-block-preformatted">Note 1: The Python script was written with the help of ChatGPT :) Its results matched my PowerShell version on a verified smaller sample size (see note 2), so I'm gonna call it good :)

Note 2: The PowerShell versions don't handle some comments as they should and end up sending garbage to the API. This happens in about 3% of cases - most probably an encoding/character issue which I couldn't be bothered to fix :)

Note 3: The movies.csv file was generated using data from https://kaggle.com/
</pre>



<p>Cheers, </p>



<p>Alex. </p>



<p></p><p>The post <a href="https://blog.slepcevic.net/sentiment-analysis-of-40-thousand-movie-reviews-in-20-minutes-using-neural-magics-deepsparse-inference-runtime-and-linode/">Sentiment analysis of 40 thousand movie reviews in 20 minutes using Neural Magic’s DeepSparse inference runtime and Linode virtual machines.</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/sentiment-analysis-of-40-thousand-movie-reviews-in-20-minutes-using-neural-magics-deepsparse-inference-runtime-and-linode/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deploying HarperDB Docker container on Linode VM using Terraform and cloud-init</title>
		<link>https://blog.slepcevic.net/deploying-harperdb-docker-container-on-linode-vm-using-terraform-and-cloud-init/</link>
					<comments>https://blog.slepcevic.net/deploying-harperdb-docker-container-on-linode-vm-using-terraform-and-cloud-init/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Tue, 27 Feb 2024 12:45:04 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[HarperDB]]></category>
		<category><![CDATA[Linode]]></category>
		<guid isPermaLink="false">http://172.233.40.105/blog.slepcevic.net/?p=227</guid>

					<description><![CDATA[<p>This blog post is a small update to the excellent post the folks at HarperDB wrote &#8211; https://www.harperdb.io/development/tutorials/deploying-harperdb-on-digital-ocean-linode-with-terraform Since Linode now supports cloud-init and the metadata service, I decided to extend their example by using cloud-init to do the installation of Docker...</p>
<p>The post <a href="https://blog.slepcevic.net/deploying-harperdb-docker-container-on-linode-vm-using-terraform-and-cloud-init/">Deploying HarperDB Docker container on Linode VM using Terraform and cloud-init</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>This blog post is a small update to the excellent post the folks at HarperDB wrote &#8211; <a href="https://www.harperdb.io/development/tutorials/deploying-harperdb-on-digital-ocean-linode-with-terraform" target="_blank" rel="noreferrer noopener">https://www.harperdb.io/development/tutorials/deploying-harperdb-on-digital-ocean-linode-with-terraform</a></p>



<p>Since Linode now supports <a href="https://www.linode.com/docs/guides/applications/configuration-management/cloud-init/">cloud-init</a> and the <a href="https://www.linode.com/docs/guides/using-metadata-cloud-init-on-any-distribution/">metadata service</a>, I decided to extend their example by using cloud-init to install the Docker Engine and the HarperDB container. </p>



<p>All you need to do to get HarperDB running is to copy all of these files into the same folder, making sure to keep the filenames the same (<strong>harperdb.yaml</strong> at least). </p>



<p>After that, simply run <code>terraform init</code>, then <code>terraform apply</code>, and in 2-3 minutes you should have your HarperDB instance up and running. </p>



<figure class="wp-block-image"><img loading="lazy" decoding="async" width="1083" height="540" src="https://blog.slepcevic.net/wp-content/uploads/2024/02/harpervscode.png" alt="This is how your folder structure should look like when you copy all the files. " class="wp-image-253" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/02/harpervscode.png 1083w, https://blog.slepcevic.net/wp-content/uploads/2024/02/harpervscode-300x150.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/02/harpervscode-1024x511.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/02/harpervscode-768x383.png 768w" sizes="auto, (max-width: 1083px) 100vw, 1083px" /></figure>



<figure class="wp-block-pullquote has-small-font-size" style="font-style:normal;font-weight:700"><blockquote><p><strong>Please change the passwords for your Linode VM and HarperDB in the compute.tf &amp; harperdb.yaml files &lt;3</strong><br><strong> Don&#8217;t be that guy! <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" /></strong></p><cite>Alex, 2024</cite></blockquote></figure>



<pre class="wp-block-code"><code>terraform init
terraform apply -var="token=YourLinodeToken"</code></pre>



<p><strong>harperdb.yaml</strong></p>



<pre class="wp-block-code"><code>#cloud-config
runcmd:
  - sudo apt-get update
  - sudo apt-get install -yq ca-certificates curl gnupg lsb-release
  - sudo install -m 0755 -d /etc/apt/keyrings
  - for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  - sudo chmod a+r /etc/apt/keyrings/docker.gpg
  - echo "deb &#091;arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release &amp;&amp; echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
  - sudo apt-get update
  - sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  - sudo mkdir /home/harperdb
  - sudo chmod 777 /home/harperdb
  - sudo docker run -d -v /home/harperdb:/home/harperdb/hdb -e HDB_ADMIN_USERNAME=HDB_ADMIN -e HDB_ADMIN_PASSWORD=<strong>password </strong>-p 9925:9925 -p 9926:9926 harperdb/harperdb</code></pre>



<p><strong>compute.tf</strong></p>



<pre class="wp-block-code"><code>resource "linode_instance" "harperdb" {
  image = "linode/ubuntu22.04"
  region = "nl-ams" #Pick The region you want
  type = "g6-standard-1"
  root_pass = "YourRootPassword!" #Change this :D
   metadata {
    user_data = base64encode(file("${path.module}/harperdb.yaml"))
  }
}

resource "linode_firewall" "harperdb_firewall" {
  label = "harperdb"

  inbound {
    label    = "ssh"
    action   = "ACCEPT"
    protocol = "TCP"
    ports    = "22"
    ipv4     = &#091;"0.0.0.0/0"]
    ipv6     = &#091;"::/0"]
  }

  inbound {
    label    = "harperdb"
    action   = "ACCEPT"
    protocol = "TCP"
    ports    = "9925-9926"
    ipv4     = &#091;"0.0.0.0/0"]
    ipv6     = &#091;"::/0"]
  }

  inbound_policy = "DROP"
  outbound_policy = "ACCEPT"
  linodes = &#091;linode_instance.harperdb.id]
}

terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }
}

provider "linode" {
  token = var.token
}

variable "token" {
    default = ""
    type = string
}</code></pre>



<p>Alex. </p><p>The post <a href="https://blog.slepcevic.net/deploying-harperdb-docker-container-on-linode-vm-using-terraform-and-cloud-init/">Deploying HarperDB Docker container on Linode VM using Terraform and cloud-init</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/deploying-harperdb-docker-container-on-linode-vm-using-terraform-and-cloud-init/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Assigning a firewall to Linode NodeBalancer using Terraform and Cloud Manager.</title>
		<link>https://blog.slepcevic.net/assigning-a-firewall-to-linode-nodebalancer-using-terraform-and-cloud-manager/</link>
					<comments>https://blog.slepcevic.net/assigning-a-firewall-to-linode-nodebalancer-using-terraform-and-cloud-manager/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Tue, 19 Dec 2023 11:25:08 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[Firewall]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[Linode]]></category>
		<category><![CDATA[Nodebalancer]]></category>
		<guid isPermaLink="false">http://172.233.40.105/blog.slepcevic.net/?p=220</guid>

					<description><![CDATA[<p>Linode has finally rolled out support to assign a firewall device to a NodeBalancer (Linode&#8217;s managed load balancing service)! Assigning a firewall via CloudManager is quite easy and self-explanatory. Create a firewall, add the rules you need and simply select...</p>
<p>The post <a href="https://blog.slepcevic.net/assigning-a-firewall-to-linode-nodebalancer-using-terraform-and-cloud-manager/">Assigning a firewall to Linode NodeBalancer using Terraform and Cloud Manager.</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><a href="https://www.linode.com">Linode</a> has<strong> finally rolled out support to assign a firewall device to a NodeBalancer</strong> (Linode&#8217;s managed load balancing service)!</p>



<p>Assigning a firewall via CloudManager is quite easy and self-explanatory. Create a firewall, add the rules you need and simply select that firewall when you create a load balancer. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="522" src="https://blog.slepcevic.net/wp-content/uploads/2023/12/Firewall-Nodebalancer-Screenshot-1064x542-1-1024x522.png" alt="" class="wp-image-221" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/12/Firewall-Nodebalancer-Screenshot-1064x542-1-1024x522.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2023/12/Firewall-Nodebalancer-Screenshot-1064x542-1-300x153.png 300w, https://blog.slepcevic.net/wp-content/uploads/2023/12/Firewall-Nodebalancer-Screenshot-1064x542-1-768x391.png 768w, https://blog.slepcevic.net/wp-content/uploads/2023/12/Firewall-Nodebalancer-Screenshot-1064x542-1.png 1064w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The more fun part is to use IaC, namely Terraform. </p>



<p>Below is example code which will create a NodeBalancer and a Firewall device, and assign the firewall to the load balancer. </p>



<p>NodeBalancer Terraform code: </p>



<pre class="wp-block-code"><code>resource "linode_nodebalancer" "primaryregion-lb" {
    label = "nodebalancer-web-${var.primary_region}"
    region = var.primary_region
    client_conn_throttle = 0
}

resource "linode_nodebalancer_config" "primaryregion-lb-config" {
    nodebalancer_id = linode_nodebalancer.primaryregion-lb.id
    port = 80
    protocol = "http"
    check = "http"
    check_path = "/"
    check_attempts = 3
    check_timeout = 30
    stickiness = "none"
    algorithm = "leastconn"
}

resource "linode_nodebalancer_node" "primary" {
    count = "2"
    nodebalancer_id = linode_nodebalancer.primaryregion-lb.id
    config_id = linode_nodebalancer_config.primaryregion-lb-config.id
    address = "${element(linode_instance.web-primary.*.private_ip_address, count.index)}:80"
    label = "nodebalancer-web-${var.primary_region}"
    weight = 50
}</code></pre>



<p>Terraform code to create a Firewall and assign it to the said load balancer.</p>



<pre class="wp-block-code"><code>
resource "linode_firewall" "lb-fw" {
  label = "lb-pub"

  inbound {
    label    = "allow-http"
    action   = "ACCEPT"
    protocol = "TCP"
    ports    = "80"
    ipv4     = &#091;"0.0.0.0/0"]
    ipv6     = &#091;"::/0"]
  }

  inbound {
    label    = "allow-https"
    action   = "ACCEPT"
    protocol = "TCP"
    ports    = "443"
    ipv4     = &#091;"0.0.0.0/0"]
    ipv6     = &#091;"::/0"]
  }

  inbound_policy = "DROP"

  outbound_policy = "ACCEPT"

  nodebalancers = &#091;linode_nodebalancer.primaryregion-lb.id]
}</code></pre>



<p>If you&#8217;ve ever used Terraform and Linode, you will notice it&#8217;s exactly the same approach we use when we want to assign a firewall device to a Linode (virtual machine); the only difference is that we reference &#8220;nodebalancers&#8221; instead of &#8220;linodes&#8221;. </p>
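<p>Side by side, the only attribute of the <code>linode_firewall</code> resource that changes is the device list (the instance reference below is hypothetical):</p>

```hcl
# Attach the firewall to a Linode (virtual machine):
linodes       = [linode_instance.web.id]

# Attach the firewall to a NodeBalancer:
nodebalancers = [linode_nodebalancer.primaryregion-lb.id]
```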






<p>Cheers, </p>



<p>Alex. </p><p>The post <a href="https://blog.slepcevic.net/assigning-a-firewall-to-linode-nodebalancer-using-terraform-and-cloud-manager/">Assigning a firewall to Linode NodeBalancer using Terraform and Cloud Manager.</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/assigning-a-firewall-to-linode-nodebalancer-using-terraform-and-cloud-manager/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
