<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI coding - Architect the cloud</title>
	<atom:link href="https://blog.slepcevic.net/tag/ai-coding/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.slepcevic.net</link>
	<description></description>
	<lastBuildDate>Mon, 06 Oct 2025 09:23:48 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9</generator>
	<item>
		<title>Your own AI coding assistant running on Akamai cloud!</title>
		<link>https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/</link>
					<comments>https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Tue, 18 Feb 2025 18:17:12 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI coding]]></category>
		<category><![CDATA[bolt.diy]]></category>
		<category><![CDATA[Linode]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://blog.slepcevic.net/?p=760</guid>

					<description><![CDATA[<p>What? You want some AI to write my code? AI-powered coding assistants have been the main talk in the developer world for a while, there&#8217;s no denying that. I can&#8217;t count the times I&#8217;ve read somewhere that AI will replace developers...</p>
<p>The post <a href="https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/">Your own AI coding assistant running on Akamai cloud!</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<h3 class="wp-block-heading">What? You want some AI to write my code?</h3>



<p>AI-powered coding assistants have been the main talk in the developer world for a while, there&#8217;s no denying that. I can&#8217;t count the times I&#8217;ve read somewhere that AI will replace developers in the next X years. You’ve probably seen tools like <strong>GitHub Copilot, ChatGPT, or Tabnine</strong> popping up everywhere.</p>



<p>They promise to boost productivity, help with debugging, and even teach you new coding techniques. Sounds amazing, right? But like anything, AI-powered coding assistants have their downsides too. So, let’s talk about what makes them great—and where they might fall short.</p>



<h3 class="wp-block-heading">Why AI Coding Assistants Are a Game-Changer</h3>



<p>Obviously, one of the biggest advantages of using an AI assistant is the time it saves. Instead of writing the same repetitive boilerplate code over and over and over again, you can generate it in seconds. Need a quick function to parse JSON? AI has you covered.  Easy peasy! Stuck on how to structure your SQL query? Just ask. This means less time spent on the boring stuff and more time on actual problem-solving.</p>



<p>AI is also a fantastic debugging tool. It can analyze your code, catch potential issues, and suggest fixes before you even run it. Instead of spending hours combing through error messages and Stack Overflow threads, you get quick, relevant suggestions that help you move forward faster.</p>



<p>And let’s not forget about learning. If you’re picking up a new language or framework, an AI assistant can guide you with real-time examples, explain unfamiliar syntax, and even generate sample projects. It’s like having a 24/7 coding mentor who doesn’t judge your questions.</p>



<p>Beyond just speed and learning, AI can actually help improve code quality. It can suggest best practices, help format your code, and even recommend refactoring when your code gets messy. Plus, if you’re working in a team, it can assist with keeping code style consistent and even generate useful commit messages or documentation. Wouldn&#8217;t it be cool if we could plug the AI into our pipeline and make sure that all rules are being followed?</p>



<h3 class="wp-block-heading">The downsides no one (everyone) talks about</h3>



<p>As cool as AI coding assistants are, you don&#8217;t need to be a genius to see that they&#8217;re far from perfect. One of the biggest concerns I personally see is over-reliance. If you’re constantly relying on AI to write your code, do you really understand what’s happening under the hood? This can be a problem when something breaks, and you don’t know how to fix it because you never really wrote the thing in the first place <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> I&#8217;m sure you love reading someone else&#8217;s codebase and debugging that &lt;3</p>



<p>Another issue is that AI-generated code isn’t always optimized or even correct! It might suggest something that works but isn’t efficient, secure, or maintainable. If you blindly accept AI suggestions without reviewing them, you could end up with a mess of inefficient or buggy code.</p>



<p>Then there’s the question of security. AI assistants are trained on huge datasets, and sometimes they can generate code that includes security vulnerabilities. If you’re working on sensitive stuff, you have to be extra careful about what code you’re using and where it’s coming from.</p>



<p>Let&#8217;s talk privacy! Many AI coding tools rely on cloud-based processing, meaning your code might be sent to external servers for analysis. If you’re working on proprietary or confidential code, you need to be aware of the risks and check the privacy policies of the tools you’re using.</p>



<p>And finally, while AI can make you more productive, it can also be a bit of a crutch. Some developers might start relying too much on AI for even basic things, which can slow down their growth and problem-solving skills in the long run. </p>



<h3 class="wp-block-heading">So, Should You Use One?</h3>



<p>AI coding assistants are undeniably powerful tools, but they work best when used wisely. They’re great for boosting productivity, helping with debugging, and learning new technologies—but they shouldn’t replace actual coding knowledge and problem-solving skills. Think of them as a really smart assistant, not a replacement for your own expertise.</p>



<p>If you use AI responsibly—review its suggestions, stay mindful of security risks, and make sure you’re still learning and improving as a developer it can be a fantastic addition to your workflow, just don’t let it do all the thinking for you <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<h3 class="wp-block-heading">Still interested and want to start using AI in your daily work? Enter bolt.diy <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></h3>



<p><strong>bolt.diy</strong> is the open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models.</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<div class="responsive-embed widescreen"><iframe title="I Forked Bolt.new and Made it WAY Better" width="1000" height="563" src="https://www.youtube.com/embed/3PFcAu_oU80?list=PLyrg3m7Ei-MpOPKdenkQNcx8ueI36RNrA" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></div>
</div></figure>



<p><strong>bolt.diy</strong> was originally started by&nbsp;<a href="https://www.youtube.com/@ColeMedin">Cole Medin</a>&nbsp;but has quickly grown into a massive community effort to build one of the best open source AI coding assistants out there.</p>



<h2 class="wp-block-heading">What do I need to get this deployed?</h2>



<p>Well, just Terraform and a Linode account. <br>On the backend, we will deploy a VM with a GPU attached, install bolt.diy and ollama, and ask it to write some code! Maybe a simple Tic-Tac-Toe game? <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>Ideally, you would run your bolt.diy deployment on a separate machine from the one running the model, but for our use case, the current deployment model is more than enough.</p>



<p>Like most of the things on this blog, guess what we&#8217;re gonna use? Yes! IaC!!! <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>Here&#8217;s a link to the <a href="https://github.com/aslepcev/linode-bolt.diy" target="_blank" rel="noopener" title="">Github repository</a> containing the Terraform code. </p>



<p>The code will do the following:</p>



<ol class="wp-block-list">
<li>Deploy a GPU based instance in Akamai Connected Cloud</li>



<li>Use cloud-init to install the following:
<ul class="wp-block-list">
<li>curl</li>



<li>wget</li>



<li>nodejs</li>



<li>npm</li>



<li>nvtop &#8211; a great tool to monitor your GPU usage</li>



<li>Nvidia drivers</li>
</ul>
</li>



<li>Deploy and configure a firewall which will allow SSH and <strong>bolt.diy</strong> access from your IP. </li>



<li>Configure <strong>bolt </strong>and <strong>ollama </strong>to run as Linux services. For the <strong>ollama </strong>service, we always make sure a model is downloaded and created with a 32K context size. </li>
</ol>
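<p>As a quick sketch of that last step: the 32K-context model can be created from a short Modelfile. The base model tag (<strong>qwen2.5-coder:14b</strong>) and the file path below are illustrative examples, not necessarily what the repository uses:</p>



<pre class="wp-block-code"><code># Illustrative sketch -- the base model tag is an example, check the repo for the real one.
printf 'FROM qwen2.5-coder:14b\nPARAMETER num_ctx 32768\n' > /opt/ollama/Modelfile

ollama pull qwen2.5-coder:14b                      # download the base model
ollama create coder-32k -f /opt/ollama/Modelfile   # register the 32K-context variant</code></pre>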



<h2 class="wp-block-heading">How do you deploy it? </h2>



<p>Just fill in your Linode API token, the desired region, and your IP address in the <strong>variables.tf </strong>file and run the following commands:</p>



<pre class="wp-block-code"><code>git clone https://github.com/aslepcev/linode-bolt.diy
cd linode-bolt.diy
#Fill in the variables.tf file now
terraform init
terraform plan
terraform apply</code></pre>
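<p>As an alternative to editing <strong>variables.tf</strong>, Terraform also reads any input variable from a <strong>TF_VAR_</strong>-prefixed environment variable. The variable names below are my assumptions; check what <strong>variables.tf</strong> in the repository actually declares:</p>



<pre class="wp-block-code"><code># Hypothetical variable names -- match them to what variables.tf actually declares.
export TF_VAR_linode_token="your-linode-api-token"
export TF_VAR_region="eu-central"
export TF_VAR_my_ip="203.0.113.10/32"

terraform plan   # picks the values up without touching variables.tf</code></pre>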



<p>After a short 5&#8211;6 minute wait, everything should be deployed and ready to use. Go ahead and visit the IP address of your VM on port 5173. </p>



<p><strong>Example url</strong>: http://172.233.246.209:5173</p>



<p>Make sure that Ollama is selected as a provider and you&#8217;re off to the races!</p>
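<p>If the page doesn&#8217;t load, you can sanity-check the deployment from your terminal first. The service names here are assumptions based on the cloud-init setup described above:</p>



<pre class="wp-block-code"><code># Replace with the IP address Terraform printed for your VM.
VM_IP=172.233.246.209

# Should print 200 once bolt.diy is up.
curl -s "http://${VM_IP}:5173" -o /dev/null -w "bolt.diy HTTP status: %{http_code}\n"

# On the VM itself (service names are assumptions):
ssh root@"${VM_IP}" 'systemctl is-active ollama bolt; ollama list'</code></pre>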



<h2 class="wp-block-heading">What can it do? </h2>



<p>Well, it really depends on the model we are running. With the <strong><a href="https://www.linode.com/pricing/#compute-gpu" target="_blank" rel="noopener" title="">RTX 4000 Ada GPU</a></strong>, we can comfortably run a <strong>14B parameter model with 32K context size</strong> which is &#8220;ok&#8221; for smaller and simpler stuff. </p>
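<p>To check whether a given model actually fits in the RTX 4000 Ada&#8217;s 20 GB of VRAM, you can load it and watch the memory usage. The model tag is again just an example:</p>



<pre class="wp-block-code"><code>ollama run qwen2.5-coder:14b "hello" > /dev/null   # force the model to load
ollama ps                                          # shows how much of the model sits in GPU memory
nvidia-smi --query-gpu=memory.used,memory.total --format=csv</code></pre>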



<p>I tested it out with a simple task of creating a Tic-Tac-Toe game in NodeJS. It got the functionality right the first time, but it looked like something only a mother could love <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="505" src="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-1024x505.png" alt="" class="wp-image-770" srcset="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-1024x505.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-300x148.png 300w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-768x379.png 768w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1-1536x758.png 1536w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt1.png 1615w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>I just told it to make it a bit prettier and add some color; these were the results I got:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="694" height="589" src="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-3.png" alt="" class="wp-image-771" srcset="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-3.png 694w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-3-300x255.png 300w" sizes="(max-width: 694px) 100vw, 694px" /></figure>



<p>Interestingly, during the coding process, it made a mistake which it managed to identify and fix all on its own! All I did was press the &#8220;<strong>Ask Bolt</strong>&#8221; button. </p>






<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="616" height="251" src="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-fix.png" alt="" class="wp-image-772" style="width:838px;height:auto" srcset="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-fix.png 616w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-fix-300x122.png 300w" sizes="auto, (max-width: 616px) 100vw, 616px" /></figure>



<p>Also, here&#8217;s a fully functioning Space Invaders-like game, which it also wrote:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="975" height="665" src="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-space.png" alt="" class="wp-image-774" srcset="https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-space.png 975w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-space-300x205.png 300w, https://blog.slepcevic.net/wp-content/uploads/2025/02/bolt-space-768x524.png 768w" sizes="auto, (max-width: 975px) 100vw, 975px" /></figure>



<h2 class="wp-block-heading">What if I want to run a larger model? 32B parameters or even larger? </h2>



<p>That&#8217;s very easy! Since Ollama can use multiple GPUs, all we need to do is scale up the VM to one that includes two or more GPUs. Akamai offers a maximum of four GPUs per VM, which brings up to 80 GB of VRAM we can use to run our model. I will not experiment with larger models in this blog post; this is something we will benchmark and try out in the future. </p>
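<p>Scaling up can be as simple as changing the instance type and re-applying the Terraform code. The variable name below is hypothetical; check the repository for the real one, and <strong>linode-cli</strong> for the real GPU plan slugs:</p>



<pre class="wp-block-code"><code># List the available GPU plans first (requires the linode-cli tool).
linode-cli linodes types --json | grep -i gpu

# Then point Terraform at a bigger plan and re-apply.
export TF_VAR_instance_type="a-gpu-plan-slug"   # hypothetical variable name and value
terraform apply</code></pre>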



<p>Cheers! Alex.</p>



<p>P.S &#8211; parts of this post were written by bolt.diy <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p>The post <a href="https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/">Your own AI coding assistant running on Akamai cloud!</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/your-own-ai-coding-assistant-running-on-akamai-cloud/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
