<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Architecture - Architect the cloud</title>
	<atom:link href="https://blog.slepcevic.net/category/architecture/feed/" rel="self" type="application/rss+xml" />
	<link>https://blog.slepcevic.net</link>
	<description></description>
	<lastBuildDate>Wed, 23 Oct 2024 23:23:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9</generator>
	<item>
		<title>Get your own GitHub runner deployed and configured on Linode in less than 5 minutes.</title>
		<link>https://blog.slepcevic.net/get-your-own-own-github-runner-deployed-and-configured-on-linode-in-less-than-5-minutes/</link>
					<comments>https://blog.slepcevic.net/get-your-own-own-github-runner-deployed-and-configured-on-linode-in-less-than-5-minutes/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Wed, 23 Oct 2024 23:19:20 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[GitHub runner]]></category>
		<category><![CDATA[Linux]]></category>
		<guid isPermaLink="false">https://blog.slepcevic.net/?p=685</guid>

					<description><![CDATA[<p>Why would you run your own runner anyway? GitHub Actions (along with Azure DevOps) has emerged as a powerful managed tool that allows developers to automate workflows directly within their GitHub repositories. While GitHub provides hosted runners to execute these...</p>
<p>The post <a href="https://blog.slepcevic.net/get-your-own-own-github-runner-deployed-and-configured-on-linode-in-less-than-5-minutes/">Get your own GitHub runner deployed and configured on Linode in less than 5 minutes.</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><strong>Why would you run your own runner anyway?</strong></p>



<p>GitHub Actions (along with Azure DevOps) has emerged as a powerful managed tool that allows developers to automate workflows directly within their GitHub repositories. While GitHub provides hosted runners to execute these workflows, running your own GitHub runner can offer several advantages. </p>



<p><strong>1. Money talks</strong></p>



<p>One of the primary benefits of running your own GitHub runner is cost efficiency. GitHub Actions provides a certain number of free minutes for public and private repositories, but once you exceed these limits, costs can add up quickly, especially if your workflows take a long time to run <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f641.png" alt="🙁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<p><strong>2. Customization and Control</strong></p>



<p>When you run your own GitHub runner, you gain full control over the environment in which your workflows execute. This means you can customize the runner&#8217;s operating system, software, and dependencies to match your project&#8217;s specific needs. Whether you need a particular version of a programming language, specialized libraries, or specific system configurations, your self-hosted runner can be tailored to your requirements.</p>



<p><strong>3. Performance and Speed</strong></p>



<p>Self-hosted runners can significantly enhance the performance of your CI/CD pipelines. Since these runners are dedicated to your projects, you can optimize them for speed and efficiency. You can run builds on beefy machines, use faster storage, or even set up parallel execution across multiple runners to speed up your workflows. This is especially beneficial for larger projects or teams with multiple repositories or many members working in parallel.</p>



<p><strong>4. Security and Compliance</strong></p>



<p>For organizations handling sensitive data or operating in regulated industries, security is the top priority. Running your own GitHub runner allows you to maintain control over your CI/CD environment. You can implement your own security measures, restrict network access, and ensure that sensitive information does not leave your secured infrastructure. Additionally, you can regularly update and audit your runner to comply with internal policies or external regulations.</p>



<p><strong>5. Reduced Queue Times</strong></p>



<p>Using GitHub&#8217;s hosted runners means you may encounter queue times, especially during peak usage periods. By setting up your own runners, you can mitigate these delays, ensuring that your workflows kick off as soon as possible. </p>



<p><strong>How do I get it running? </strong></p>



<p><strong>Step 1</strong> &#8211; Clone the repository using the following command. </p>



<pre class="wp-block-code"><code>git clone https://github.com/slepix/GitHubRunner-Linode.git</code></pre>






<h5 class="wp-block-heading">You will need to prepare 5 things; it&#8217;s not hard, I promise. </h5>



<ol class="wp-block-list">
<li><strong>Linode API token</strong> with permission to deploy virtual machines &#8211; <a href="https://techdocs.akamai.com/cloud-computing/docs/manage-personal-access-tokens" target="_blank" rel="noopener" title="">more info</a></li>



<li><strong>GitHub PAT</strong> limited only to the repository you want to connect &#8211;<a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens" target="_blank" rel="noopener" title=""> more details</a></li>



<li><strong>GitHub repository</strong> name you want to connect your runner to. </li>



<li><strong>GitHub username</strong> which owns the repository. </li>



<li><strong>Root password</strong> for the runner VM &#8211; can be anything, as long as it&#8217;s long and complex. </li>
</ol>



<p><strong>Step 2</strong> &#8211; Fill in the details in the <strong>linoderunner.tfvars</strong> file; it should look something like this. </p>



<p>*These are random values, so make sure to replace them with your own. </p>



<pre class="wp-block-code"><code>linode_api_token = "eee44387b0030bd6bb051452bg65gz56z465fba5d77c5a238ea8e12f"
github_pat = "github_pat_1HG534e67d3K52IUSL_D2vM1pzjDGjX5sCiUEXWD6TRKDut4jnJty"
root_password = "Rand0mSecurePassword.123!" # Root password for your VM
github_repo = "myawesomeapp" # Your Github repo name
github_username = "slepix" # Your GitHub username</code></pre>



<p><strong>Step 3</strong> &#8211; run the following commands (on a fresh clone, <code>terraform init</code> downloads the required provider first):</p>



<pre class="wp-block-code"><code>terraform init
terraform apply -var-file="linoderunner.tfvars"</code></pre>



<p>The entire codebase is available at <a href="https://github.com/slepix/GithubRunner-Linode" target="_blank" rel="noopener" title="">https://github.com/slepix/GithubRunner-Linode</a></p>



<p>OK, let&#8217;s take a look at some code. Once again, we&#8217;ll go with Terraform and cloud-init to deploy and configure our server. Ideally, you would use a configuration management tool like Puppet, Ansible, Chef, or similar, but for this use case we can keep it simple. </p>



<p>Using cloud-init, we create a new user called &#8220;<strong>gitrunner</strong>&#8221; which will be used to run the agent, update all the packages, install jq (needed by the agent configuration script), and kick off the installation of the runner as a service. </p>



<p><strong>compute.tf file</strong> &#8211; this is where you can adjust the region, OS and instance type you want to run. </p>



<pre class="wp-block-code"><code>resource "linode_instance" "github_runner" {
  image     = "linode/ubuntu22.04"
  region    = "nl-ams"
  type      = "g6-nanode-1"
  label     = "github-runner"
  root_pass = var.root_password  # Set the root password

  metadata {
    user_data = base64encode(templatefile("./linode.yaml.tpl", {
      githubpat = var.github_pat
      githubuser = var.github_username
      githubrepo = var.github_repo
    }))
  }
}</code></pre>



<p><strong>linode.yaml.tpl file</strong></p>



<pre class="wp-block-code"><code>#cloud-config
package_update: true
packages:
  - jq

users:
  - name: gitrunner
    shell: /bin/bash
    groups:
      - sudo
    sudo:
      - ALL=(ALL) NOPASSWD:ALL

runcmd:
  - export RUNNER_CFG_PAT=${githubpat}
  - su gitrunner -c "cd /home/gitrunner &amp;&amp; curl -s https://raw.githubusercontent.com/actions/runner/main/scripts/create-latest-svc.sh | bash -s ${githubuser}/${githubrepo}" 
</code></pre>



<p>If all went well, you should see the new GitHub runner appear in your runner overview within a few minutes. </p>



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="316" src="https://blog.slepcevic.net/wp-content/uploads/2024/10/runner-1024x316.png" alt="" class="wp-image-686" srcset="https://blog.slepcevic.net/wp-content/uploads/2024/10/runner-1024x316.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2024/10/runner-300x92.png 300w, https://blog.slepcevic.net/wp-content/uploads/2024/10/runner-768x237.png 768w, https://blog.slepcevic.net/wp-content/uploads/2024/10/runner.png 1152w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
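<p>To actually use the new runner, point a workflow at it with <code>runs-on: self-hosted</code>. A minimal sketch (the workflow name and the echo step are just placeholders):</p>

```yaml
name: build
on: push
jobs:
  build:
    # Route this job to your self-hosted runner instead of a GitHub-hosted one
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: echo "running on my Linode runner"
```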



<p>Happy building and deploying!</p>



<p>Alex!</p><p>The post <a href="https://blog.slepcevic.net/get-your-own-own-github-runner-deployed-and-configured-on-linode-in-less-than-5-minutes/">Get your own GitHub runner deployed and configured on Linode in less than 5 minutes.</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/get-your-own-own-github-runner-deployed-and-configured-on-linode-in-less-than-5-minutes/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Building your IT landscape on Akamai Connected Cloud – Part 3 – Writing down requirements</title>
		<link>https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-3-writing-down-requirements/</link>
					<comments>https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-3-writing-down-requirements/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Thu, 28 Sep 2023 12:38:00 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">http://172.233.40.105/blog.slepcevic.net/?p=115</guid>

					<description><![CDATA[<p>Part 1 –&#160;Intro Part 2 –&#160;Defining the project Part 3 &#8211; this post Welcome to the third post of the &#8220;Building your IT landscape on Akamai Connected Cloud&#8221; series. In this post we will start writing and breaking down our FRs...</p>
<p>The post <a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-3-writing-down-requirements/">Building your IT landscape on Akamai Connected Cloud – Part 3 – Writing down requirements</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Part 1 –&nbsp;<a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/" target="_blank" rel="noreferrer noopener">Intro</a></p>



<p>Part 2 –&nbsp;<a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-2-defining-the-project/">Defining the project</a></p>



<p>Part 3 &#8211; this post</p>






<p>Welcome to the third post of the &#8220;Building your IT landscape on Akamai Connected Cloud&#8221; series.</p>



<p>In this post, we will start writing and breaking down our FRs and NFRs. </p>



<p>What are those, you may ask? </p>



<h2 class="wp-block-heading"><strong>Non-Functional requirement</strong> <strong>(NFR)</strong></h2>



<p>In&nbsp;<a href="https://en.wikipedia.org/wiki/Systems_engineering">systems engineering</a>&nbsp;and&nbsp;<a href="https://en.wikipedia.org/wiki/Requirements_engineering">requirements engineering</a>, a&nbsp;<strong>non-functional requirement</strong>&nbsp;(<strong>NFR</strong>) is a&nbsp;<a href="https://en.wikipedia.org/wiki/Requirement">requirement</a>&nbsp;that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. They are contrasted with&nbsp;<a href="https://en.wikipedia.org/wiki/Functional_requirement">functional requirements</a>&nbsp;that define specific behavior or functions. The plan for implementing&nbsp;<em>functional</em>&nbsp;requirements is detailed in the&nbsp;<a href="https://en.wikipedia.org/wiki/Systems_design">system&nbsp;<em>design</em></a>. The plan for implementing&nbsp;<em>non-functional</em>&nbsp;requirements is detailed in the&nbsp;<a href="https://en.wikipedia.org/wiki/Systems_architecture">system&nbsp;<em>architecture</em></a>, because they are usually&nbsp;<a href="https://en.wikipedia.org/wiki/Architecturally_significant_requirements">architecturally significant requirements</a>.<sup><a href="https://en.wikipedia.org/wiki/Non-functional_requirement#cite_note-ASR_Chen-1">[1]</a></sup></p>



<p>Broadly, functional requirements define what a system is supposed to&nbsp;<em>do</em>&nbsp;and non-functional requirements define how a system is supposed to&nbsp;<em>be</em>.&nbsp;<a href="https://en.wikipedia.org/wiki/Functional_requirement">Functional requirements</a>&nbsp;are usually in the form of &#8220;system shall do &lt;requirement&gt;&#8221;, an individual action or part of the system, perhaps explicitly in the sense of a&nbsp;<a href="https://en.wikipedia.org/wiki/Function_(mathematics)">mathematical function</a>, a&nbsp;<a href="https://en.wikipedia.org/wiki/Black_box">black box</a>&nbsp;description input, output, process and control&nbsp;<a href="https://en.wikipedia.org/wiki/Functional_model">functional model</a>&nbsp;or&nbsp;<a href="https://en.wikipedia.org/wiki/IPO_Model">IPO Model</a>. In contrast, non-functional requirements are in the form of &#8220;system shall be &lt;requirement&gt;&#8221;, an overall property of the system as a whole or of a particular aspect and not a specific function. The system&#8217;s overall properties commonly mark the difference between whether the development project has succeeded or failed.</p>



<p>Non-functional requirements are often called the &#8220;<a href="https://en.wikipedia.org/wiki/List_of_system_quality_attributes">quality attributes</a>&#8221; of a system. Other terms for non-functional requirements are &#8220;qualities&#8221;, &#8220;quality goals&#8221;, &#8220;quality of service requirements&#8221;, &#8220;constraints&#8221;, &#8220;non-behavioral requirements&#8221;,<sup><a href="https://en.wikipedia.org/wiki/Non-functional_requirement#cite_note-2">[2]</a></sup>&nbsp;or &#8220;technical requirements&#8221;.<sup><a href="https://en.wikipedia.org/wiki/Non-functional_requirement#cite_note-3">[3]</a></sup>&nbsp;Informally these are sometimes called the &#8220;<a href="https://en.wiktionary.org/wiki/ility">ilities</a>&#8220;, from attributes like stability and portability. Qualities—that is non-functional requirements—can be divided into two main categories:</p>



<ol class="wp-block-list">
<li>Execution qualities, such as safety, security and usability, which are observable during operation (at run time).</li>



<li>Evolution qualities, such as&nbsp;<a href="https://en.wikipedia.org/wiki/Software_testability">testability</a>, maintainability, extensibility and scalability, which are embodied in the static structure of the system.<sup><a href="https://en.wikipedia.org/wiki/Non-functional_requirement#cite_note-Wiegers13-4">[4]</a></sup><sup><a href="https://en.wikipedia.org/wiki/Non-functional_requirement#cite_note-Young01-5">[5]</a></sup></li>
</ol>



<p>It is important to specify non-functional requirements in a specific and measurable way.<sup><a href="https://en.wikipedia.org/wiki/Non-functional_requirement#cite_note-6">[6]</a></sup><sup><a href="https://en.wikipedia.org/wiki/Non-functional_requirement#cite_note-MGUZH-7">[7]</a></sup></p>



<h2 class="wp-block-heading"><strong>Functional requirement (FR)</strong></h2>



<p>In&nbsp;<a href="https://en.wikipedia.org/wiki/Software_engineering">software engineering</a>&nbsp;and&nbsp;<a href="https://en.wikipedia.org/wiki/Systems_engineering">systems engineering</a>, a&nbsp;<strong>functional requirement</strong>&nbsp;defines a function of a&nbsp;<a href="https://en.wikipedia.org/wiki/System">system</a>&nbsp;or its component, where a function is described as a specification of behavior between inputs and outputs.<sup><a href="https://en.wikipedia.org/wiki/Functional_requirement#cite_note-FultonAirborne17-1">[1]</a></sup></p>



<p>Functional requirements may involve calculations, technical details, data manipulation and processing, and other specific functionality that define what a system is supposed to accomplish.<sup><a href="https://en.wikipedia.org/wiki/Functional_requirement#cite_note-2">[2]</a></sup>&nbsp;Behavioral requirements describe all the cases where the system uses the functional requirements, these are captured in&nbsp;<a href="https://en.wikipedia.org/wiki/Use_case">use cases</a>. Functional requirements are supported by&nbsp;<a href="https://en.wikipedia.org/wiki/Non-functional_requirement">non-functional requirements</a>&nbsp;(also known as &#8220;quality requirements&#8221;), which impose constraints on the design or implementation (such as performance requirements, security, or reliability). Generally, functional requirements are expressed in the form &#8220;system must do &lt;requirement&gt;,&#8221; while non-functional requirements take the form &#8220;system shall be &lt;requirement&gt;.&#8221;<sup><a href="https://en.wikipedia.org/wiki/Functional_requirement#cite_note-LoucopoulosRequire05-3">[3]</a></sup>&nbsp;The plan for implementing functional requirements is detailed in the system design, whereas&nbsp;<em>non-functional</em>&nbsp;requirements are detailed in the&nbsp;<a href="https://en.wikipedia.org/wiki/System_architecture">system architecture</a>.<sup><a href="https://en.wikipedia.org/wiki/Functional_requirement#cite_note-AdamsNon15-4">[4]</a></sup><sup><a href="https://en.wikipedia.org/wiki/Functional_requirement#cite_note-J%C3%B6nssonImpact06-5">[5]</a></sup></p>



<p>As defined in&nbsp;<a href="https://en.wikipedia.org/wiki/Requirements_engineering">requirements engineering</a>, functional requirements specify particular results of a system. This should be contrasted with non-functional requirements, which specify overall characteristics such as cost and&nbsp;<a href="https://en.wikipedia.org/wiki/Reliability_engineering">reliability</a>. Functional requirements drive the application architecture of a system, while non-functional requirements drive the technical architecture of a system.<sup><a href="https://en.wikipedia.org/wiki/Functional_requirement#cite_note-AdamsNon15-4">[4]</a></sup></p>



<h2 class="wp-block-heading">Non-Functional requirements</h2>






<p><strong>Scalability</strong>: The architecture must support horizontal scaling by dynamically adding or removing servers based on demand. It should utilize auto-scaling groups and container orchestration platforms to efficiently manage resource allocation.</p>



<p><strong>High Availability</strong>: The system must achieve a minimum uptime of 99.999%, employing fault-tolerant design patterns such as redundant server clusters, load balancers, and automatic failover mechanisms. It should utilize multi-region deployment to ensure high availability across geographically distributed data centers.</p>



<p><strong>Performance</strong>: The system should maintain a maximum network latency of 50 milliseconds and aim for an average response time of under 100 milliseconds for all critical operations. It should optimize database queries, utilize in-memory caching, and employ content delivery networks (CDNs) for efficient content distribution.</p>



<p><strong>Security</strong>: The architecture must enforce end-to-end encryption for data transmission, employing industry-standard cryptographic protocols (e.g., TLS) and secure key management practices. It should implement fine-grained access controls, two-factor authentication, and intrusion detection and prevention systems (IDS/IPS) to safeguard user data.</p>



<p><strong>Data Backup and Recovery</strong>: The architecture should perform regular backups of user profiles, game progress, and configuration data, leveraging incremental backups and differential techniques for efficient storage utilization. It must implement backup redundancy across multiple geographically dispersed data centers and employ automated backup integrity verification.</p>



<p><strong>Geographic Distribution</strong>: The system should utilize globally distributed data centers strategically placed to reduce network latency, leveraging content delivery networks (CDNs) and edge computing technologies. It should employ geo-routing techniques to direct users to the nearest data center, minimizing network hops.</p>



<p><strong>Load Balancing</strong>: The architecture should employ dynamic load balancing algorithms based on factors like server capacity, network latency, and current workload. It should utilize intelligent load balancers that distribute traffic evenly across available servers while considering resource utilization metrics.</p>



<p><strong>Elasticity</strong>: The system should dynamically scale compute and storage resources based on real-time demand, leveraging auto-scaling policies, and cloud provider-specific scaling features. It should employ predictive scaling algorithms based on historical usage patterns and anticipated traffic spikes.</p>



<p><strong>Network Performance</strong>: The architecture must ensure low packet loss rates (&lt;0.5%) and maintain a minimum network bandwidth of 100 Gbps for optimal delivery experiences. It should utilize advanced network protocols and traffic optimization techniques (e.g., congestion control, Quality of Service) to minimize latency and packet jitter.</p>



<p><strong>Cross-Platform Compatibility</strong>: The system must provide consistent gameplay experiences across various platforms (Windows, macOS, Linux, Xbox, PlayStation, iOS, Android). It should support platform-specific optimizations (e.g., DirectX, Vulkan) and employ adaptive streaming technologies to adjust game quality based on device capabilities.</p>



<p><strong>Content Delivery</strong>: The architecture should leverage a globally distributed content delivery network (CDN) with edge caching to accelerate game content delivery. It should utilize HTTP/2 or QUIC protocols for efficient content transmission and employ delta compression and differential updates to minimize bandwidth usage during game updates and patches.</p>



<p><strong>Integration with Third-Party Services</strong>: The system should provide well-documented APIs and SDKs for seamless integration with payment gateways, social media platforms, and analytics tools. It should support OAuth 2.0 for secure user authentication and authorization, and employ asynchronous messaging protocols (e.g., AMQP) for reliable inter-service communication.</p>



<p><strong>Compliance and Legal Requirements</strong>: The architecture must comply with relevant data protection regulations (e.g., GDPR, CCPA), ensuring user consent management, anonymization of personal data, and secure storage practices.</p>



<p><strong>Monitoring and Analytics</strong>: The architecture should include a comprehensive monitoring and analytics framework that collects real-time performance metrics, system logs, and user behavior data. It should leverage centralized logging systems, distributed tracing, and log aggregation tools for efficient monitoring and troubleshooting. Additionally, it should employ machine learning algorithms and anomaly detection techniques to identify potential security threats and performance bottlenecks.</p>



<p><strong>Modularity and Extensibility</strong>: The architecture should be designed with a modular and extensible approach, using microservices and service-oriented architecture (SOA) principles. It should allow for the seamless integration of new game titles, features, and external services by leveraging containerization technologies (e.g., Docker) and orchestration platforms (e.g., Kubernetes). The system should facilitate independent deployment and scalability of individual components without impacting the overall system performance or user experience.</p>



<h2 class="wp-block-heading"><strong>Functional requirements</strong></h2>



<p><strong>Game Storage and Management</strong>: The infrastructure should provide a scalable and efficient storage solution for hosting game files, patches, updates, and downloadable content. It should support large file storage and provide mechanisms for organizing and managing game assets.</p>



<p><strong>Content Distribution</strong>: The infrastructure should incorporate a content delivery network (CDN) or a similar mechanism to distribute game files and updates to users globally. It should ensure low-latency and high-bandwidth content delivery for optimal user experience.</p>



<p><strong>File Compression and Optimization</strong>: The infrastructure should include tools and processes to compress and optimize game files, reducing their size without compromising quality. This helps minimize bandwidth requirements and improves download and installation times.</p>



<p><strong>Bandwidth Management</strong>: The infrastructure should monitor and manage bandwidth usage effectively to ensure fair allocation and prevent congestion during peak usage periods. It should implement traffic shaping or rate limiting mechanisms to optimize network performance.</p>



<p><strong>Dynamic Scaling</strong>: The infrastructure should support dynamic scaling to accommodate fluctuations in user demand for game downloads and updates. It should automatically scale resources, such as storage capacity and network bandwidth, to handle increased traffic and ensure fast and reliable file delivery.</p>



<p><strong>Redundancy and Data Replication</strong>: The infrastructure should implement redundancy and data replication mechanisms to ensure high availability and data durability. It should replicate game files across multiple storage nodes or data centers to mitigate the risk of data loss and minimize downtime.</p>



<p><strong>Parallel Processing</strong>: The infrastructure should leverage parallel processing techniques to optimize the distribution of large game files. It should split files into smaller chunks and distribute them concurrently to speed up downloads and ensure efficient utilization of network resources.</p>



<p><strong>File Integrity Verification</strong>: The infrastructure should incorporate mechanisms to verify the integrity of game files during storage and transmission. It should use checksums or other hashing algorithms to detect and handle corrupted or tampered files, ensuring users receive error-free game content.</p>
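<p>As a small illustration of checksum-based integrity verification (a sketch using standard coreutils; the asset file and its contents are hypothetical):</p>

```shell
# Create a sample "game asset" and record its SHA-256 checksum.
printf 'demo game asset payload' > asset.bin
sha256sum asset.bin > asset.bin.sha256

# Later (e.g. after download or transfer), verify the file against the
# recorded checksum; any corruption or tampering changes the hash.
if sha256sum -c asset.bin.sha256; then
  echo "integrity OK"
else
  echo "file corrupted or tampered" >&2
fi
```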



<p><strong>Metadata Management</strong>: The infrastructure should provide a robust metadata management system to store and retrieve information related to game files. It should support efficient indexing and searching of game metadata, enabling users to discover and access relevant game content easily.</p>



<p><strong>Version Control and Rollbacks</strong>: The infrastructure should facilitate version control and rollbacks for game files and updates. It should enable users to access previous versions of games and easily revert to a stable version in case of issues with new releases.</p>



<p><strong>Secure File Transfer</strong>: The infrastructure should ensure secure transmission of game files and updates over the network. It should utilize encryption protocols (e.g., SSL/TLS) to protect data during transit, preventing unauthorized access or tampering.</p>



<p><strong>Geo-replication and Regional Caching</strong>: The infrastructure should support geo-replication and regional caching of game files to reduce latency and improve download speeds for users in different geographical locations. It should strategically place storage nodes or caching servers in proximity to users for efficient content delivery.</p>



<p><strong>User Account Storage</strong>: The infrastructure should provide secure and scalable storage for user account data, including profiles, preferences, and game libraries. It should ensure fast retrieval of user-specific information to enable personalized experiences across different devices.</p>



<p><strong>API for Content Management</strong>: The infrastructure should offer APIs for seamless integration with game developers and content providers. It should provide functionality to upload, manage, and distribute game files programmatically, allowing for automated content ingestion and updates.</p>



<p><strong>Usage Analytics and Reporting</strong>: The infrastructure should include analytics and reporting capabilities to track usage statistics, such as download counts, bandwidth consumption, and user engagement with game content. It should provide insights to improve content delivery strategies and optimize resource allocation.</p>






<p>In the next post, we will finally get to the &#8220;meaty&#8221; part and start drawing some architecture diagrams and writing some code. </p>






<p>Cheers, Alex. </p><p>The post <a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-3-writing-down-requirements/">Building your IT landscape on Akamai Connected Cloud – Part 3 – Writing down requirements</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-3-writing-down-requirements/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Linode&#8217;s Metadata service is live!</title>
		<link>https://blog.slepcevic.net/linode-metadata-service/</link>
					<comments>https://blog.slepcevic.net/linode-metadata-service/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Wed, 13 Sep 2023 14:04:00 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">http://172.233.40.105/blog.slepcevic.net/?p=177</guid>

					<description><![CDATA[<p>Linode finally introduced an internal metadata service! &#60;3 I&#8217;m sure all of you have been in a situation where after deploying Compute Instances, it’s almost always needed to perform additional configuration before your server is ready to do any real...</p>
<p>The post <a href="https://blog.slepcevic.net/linode-metadata-service/">Linode’s Metadata service is live!</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><strong>Linode finally introduced an internal metadata service! &lt;3</strong></p>






<p>I&#8217;m sure all of you have been in this situation: after deploying a Compute Instance, you almost always need to perform additional configuration before your server is ready to do any real work. </p>



<p>This configuration might include creating a new user, adding an SSH key, installing software, etc. It could also include more complicated tasks like configuring a web server or other software that runs on the instance. Performing these tasks manually is error-prone, slow, and not scalable. </p>



<p>To automate this process, Linode offers two provisioning automation tools: Metadata/cloud-init and <a href="https://www.linode.com/docs/products/tools/stackscripts/">StackScripts</a>.</p>






<p><strong>Cool, but what is Linode&#8217;s Metadata service? Well, in a nutshell, it&#8217;s an API which is accessible only from within your instance, nothing more <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></strong></p>



<p>While the Metadata service is designed to be consumed by <a href="https://cloudinit.readthedocs.io/en/latest/" target="_blank" rel="noreferrer noopener">cloud-init</a>, there is nothing stopping you from using it without cloud-init, with any other software or script you can think of. If your software can send and read HTTP requests, it can use Linode&#8217;s Metadata service. </p>



<p>This allows you to use the same tools across multiple cloud providers, giving you real flexibility in where and how you run your workloads. </p>






<p>One of the great use-cases for metadata is to dynamically configure your environment based on instance tags or some other parameters. </p>



<p>Let&#8217;s take this example: we have dev and production environments, and we want the VMs we deploy to be configured identically, just with different user data (a test database vs. a production database). Your deployment pipeline can query the instance&#8217;s metadata service, read the tags, and instantly know that it&#8217;s deploying to a test environment and which database to use. </p>
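<p>A minimal shell sketch of that tag lookup (the canned sample below mirrors the /v1/instance response shown later in this post; the database hostnames are made up for illustration):</p>

```shell
#!/bin/sh
# On a real instance you would populate this with:
#   instance_info=$(curl -s -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/instance)
# Here we use a canned sample so the logic is easy to follow.
instance_info='tags: app:jumphost
tags: region:us-iad
tags: stage:dev'

# Extract the value of the "stage:" tag (e.g. dev or prod).
stage=$(printf '%s\n' "$instance_info" | awk '/^tags: stage:/ {sub(/^tags: stage:/, ""); print}')

# Pick the right database for the environment (hostnames are illustrative).
if [ "$stage" = "dev" ]; then
  DB_HOST="test-db.internal"
else
  DB_HOST="prod-db.internal"
fi
echo "Deploying against $DB_HOST"
```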



<p>Another use case is bootstrapping your servers and installing configuration management agents such as Chef. </p>



<p>The Metadata service provides both <em>instance data</em> and optional <em>user data</em>, both of which are explained below:</p>



<ul class="wp-block-list">
<li><strong>Instance data:</strong> The instance data includes information about the Compute Instance, including its label, plan size, region, host identifier, tags and more.</li>



<li><strong>User data:</strong>&nbsp;User data is one of the most powerful features of the Metadata service and allows you to define your desired system configuration, including creating users, installing software, configuring settings, and more. User data is supplied by the user when deploying, rebuilding, or cloning a Compute Instance. This user data can be written as a cloud-config file, or it can be any script that can be executed on the target distribution image, such as a bash script. User data can be submitted directly in the Cloud Manager, the Linode CLI, or the Linode API. It’s also often programmatically provided through IaC (Infrastructure as Code) provisioning tools like&nbsp;<a href="https://www.linode.com/docs/guides/how-to-build-your-infrastructure-using-terraform-and-linode/">Terraform</a>.</li>
</ul>
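<p>As an illustration, a minimal cloud-config file could look like this (the username, key placeholder, and package are purely examples, not values from this post):</p>

<pre class="wp-block-code"><code>#cloud-config
# Illustrative values only; adjust the user, key, and packages to your needs.
users:
  - name: deploy
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... your-key-comment
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx</code></pre>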



<p>Ok, how do I use the Metadata service? It&#8217;s really quite simple:</p>



<ol class="wp-block-list">
<li>Log into an instance running an OS image that supports the Metadata service</li>



<li>Generate your API token using the following command:</li>
</ol>



<pre class="wp-block-code"><code>export TOKEN=$(curl -X PUT -H "Metadata-Token-Expiry-Seconds: 3600" http://169.254.169.254/v1/token)</code></pre>



<p>This stores the authentication token in the TOKEN variable. </p>



<p>3. Fetch the data</p>



<pre class="wp-block-code"><code>#Instance info
curl -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/instance
#Network info
curl -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/network
#User data
curl -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/user-data | base64 --decode</code></pre>



<p>What data do I get back? </p>



<pre class="wp-block-code"><code>#Instance info
root@localhost:~# curl -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/instance
backups.enabled: false
host_uuid: 85815b0f16cfa8b12aa12a36530476e701111111
id: 51048111
label: jumphost-us-iad
region: us-iad
specs.disk: 25600
specs.gpus: 0
specs.memory: 1024
specs.transfer: 1000
specs.vcpus: 1
tags: app:jumphost
tags: region:us-iad
tags: stage:dev
type: g6-nanode-1

#Network info
root@localhost:~# curl -H "Metadata-Token: $TOKEN" http://169.254.169.254/v1/network
ipv4.public: 172.233.197.127/32
ipv6.link_local: fe80::f03c:93ff:fe56:3512/128
ipv6.slaac: 2600:3c05::f03c:93ff:fe56:3512/128
</code></pre>



<p>Cheers, Alex!</p><p>The post <a href="https://blog.slepcevic.net/linode-metadata-service/">Linode’s Metadata service is live!</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/linode-metadata-service/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to deploy a DHCP server in a Linode VLAN</title>
		<link>https://blog.slepcevic.net/how-to-deploy-a-dhcp-server-in-a-linode-vlan/</link>
					<comments>https://blog.slepcevic.net/how-to-deploy-a-dhcp-server-in-a-linode-vlan/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Fri, 08 Sep 2023 09:43:00 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[Linode]]></category>
		<category><![CDATA[Terraform]]></category>
		<guid isPermaLink="false">http://172.233.40.105/blog.slepcevic.net/?p=171</guid>

					<description><![CDATA[<p>When a VLAN is assigned to a network interface and given an IPAM address, the Compute Instance should automatically be able to communicate over that private network. This is due to Network Helper, which is enabled by default on most instances....</p>
<p>The post <a href="https://blog.slepcevic.net/how-to-deploy-a-dhcp-server-in-a-linode-vlan/">How to deploy a DHCP server in a Linode VLAN</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>When a VLAN is assigned to a network interface and given an IPAM address, the Compute Instance should automatically be able to communicate over that private network. This is due to <a href="https://www.linode.com/docs/products/compute/compute-instances/guides/network-helper/">Network Helper</a>, which is enabled by default on most instances. For compatible distributions, Network Helper adjusts the internal network configuration files. Any network interfaces defined in the Compute Instance’s selected <a href="https://www.linode.com/docs/products/compute/compute-instances/guides/configuration-profiles/">Configuration Profile</a> (including those with VLANs attached) are automatically configured.</p>



<p>While Network Helper is really useful, it becomes a problem when you deploy an instance serving as an internet gateway in your VLAN. In order to assign IP addresses, DNS servers, and a gateway, we need to deploy a DHCP server; Network Helper cannot set DNS and gateway settings. </p>






<p>Please keep in mind that you can also install the DHCP server on your IGW (internet gateway) instance, but for the purposes of this post, we will assume that we have a standalone DHCP server. </p>






<p>The first thing we need to do is create a Linode StackScript which we will use to automatically install and configure our DHCP server. </p>



<pre class="wp-block-code"><code>#!/bin/bash
#&lt;UDF name="DEFLEASETIME" Label="Default lease time" example="3600" /&gt;
#&lt;UDF name="MAXLEASETIME" Label="Max lease time" example="14400" /&gt;
#&lt;UDF name="SUBNET" Label="Subnet" example="10.10.10.0" /&gt;
#&lt;UDF name="NETMASK" Label="Netmask" example="255.255.255.0" /&gt;
#&lt;UDF name="STARTRANGE" Label="Start range" example="10.10.10.10" /&gt;
#&lt;UDF name="ENDRANGE" Label="End range" example="10.10.10.250"/&gt;
#&lt;UDF name="ROUTER" Label="Router" example="10.10.10.1" /&gt;
#&lt;UDF name="DNS1" Label="DNS1" example="10.10.10.2" /&gt;
#&lt;UDF name="DNS2" Label="DNS2" example="10.10.10.3" /&gt;
apt update -y
apt install -y isc-dhcp-server
mv /etc/dhcp/dhcpd.conf{,.backup}
cat &gt; /etc/dhcp/dhcpd.conf &lt;&lt; EOF
default-lease-time $DEFLEASETIME;
max-lease-time $MAXLEASETIME;
authoritative;
 
subnet $SUBNET netmask $NETMASK {
 range $STARTRANGE $ENDRANGE;
 option routers $ROUTER;
 option domain-name-servers $DNS1, $DNS2;
}
EOF
sed -i 's/INTERFACESv4=""/INTERFACESv4="eth1"/' /etc/default/isc-dhcp-server
systemctl restart isc-dhcp-server.service
</code></pre>



<p>After we&#8217;ve got our StackScript <a href="https://www.linode.com/docs/products/tools/stackscripts/guides/create/">created</a>, we can deploy a VM from it by clicking the &#8220;Deploy New Linode&#8221; button and filling out all the required details. Make sure that you assign the virtual machine to the VLAN you are using in your region. </p>



<figure class="wp-block-image size-full"><img decoding="async" width="578" height="863" src="https://blog.slepcevic.net/wp-content/uploads/2023/10/dhcpblog.png" alt="" class="wp-image-172" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/10/dhcpblog.png 578w, https://blog.slepcevic.net/wp-content/uploads/2023/10/dhcpblog-201x300.png 201w" sizes="(max-width: 578px) 100vw, 578px" /></figure>



<p>You can use the default settings for &#8220;<strong>Default lease time</strong>&#8221; and &#8220;<strong>Max lease time</strong>&#8221;, just make sure to adjust the &#8220;<strong>Subnet</strong>&#8221; and netmask to match the ones you&#8217;re using in your VLAN. </p>



<p>We can also use Terraform to create our StackScript by using the &#8220;<strong>linode_stackscript</strong>&#8221; resource &#8211; <a href="https://registry.terraform.io/providers/linode/linode/latest/docs/resources/stackscript">https://registry.terraform.io/providers/linode/linode/latest/docs/resources/stackscript</a></p>
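<p>As a sketch (the label, image list and file name are illustrative), the DHCP StackScript above could be managed like this, with the bash script saved alongside your Terraform code:</p>

<pre class="wp-block-code"><code>resource "linode_stackscript" "dhcp" {
  label       = "dhcp-server"
  description = "Installs and configures isc-dhcp-server"
  # The script body is the bash StackScript shown above, loaded from a local file.
  script      = file("${path.module}/dhcp-stackscript.sh")
  images      = &#091;"linode/ubuntu22.04"]
  rev_note    = "initial version"
}</code></pre>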






<p>For details on how to use Terraform to deploy an instance from a StackScript, check out this blog post on how to use the &#8220;<strong>stackscript_data</strong>&#8221; block to provide StackScript parameters &#8211; <a href="https://blog.slepcevic.net/deploying-linode-marketplace-stackscripts-with-terraform/">https://blog.slepcevic.net/deploying-linode-marketplace-stackscripts-with-terraform/</a></p>






<p>Cheers, Alex. </p>



<p></p><p>The post <a href="https://blog.slepcevic.net/how-to-deploy-a-dhcp-server-in-a-linode-vlan/">How to deploy a DHCP server in a Linode VLAN</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/how-to-deploy-a-dhcp-server-in-a-linode-vlan/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deploying Linode Marketplace StackScripts with Terraform</title>
		<link>https://blog.slepcevic.net/deploying-linode-marketplace-stackscripts-with-terraform/</link>
					<comments>https://blog.slepcevic.net/deploying-linode-marketplace-stackscripts-with-terraform/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Thu, 29 Jun 2023 08:22:00 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[Linode]]></category>
		<category><![CDATA[linode-cli]]></category>
		<category><![CDATA[Terraform]]></category>
		<guid isPermaLink="false">http://172.233.40.105/blog.slepcevic.net/?p=146</guid>

					<description><![CDATA[<p>Recently, I needed to deploy a 3 node Redis Cluster from Linode&#8217;s Marketplace. Quick glance at Linode&#8217;s Terraform provider shows that we can deploy instances using StackScripts, but in order to do that, we need to specify the StackScript ID....</p>
<p>The post <a href="https://blog.slepcevic.net/deploying-linode-marketplace-stackscripts-with-terraform/">Deploying Linode Marketplace StackScripts with Terraform</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Recently, I needed to deploy a 3 node Redis Cluster from Linode&#8217;s Marketplace. </p>



<p>A quick glance at Linode&#8217;s Terraform provider shows that we can deploy instances using StackScripts, but in order to do that, we need to specify the StackScript ID. If we are deploying from our own StackScript, it&#8217;s easy: you just reference it like this and you&#8217;re good to go.</p>



<pre class="wp-block-code"><code>resource "linode_stackscript" "foo" {
  label = "foo"
  description = "Installs a Package"
  script = &lt;&lt;EOF
#!/bin/bash
# &lt;UDF name="package" label="System Package to Install" example="nginx" default=""&gt;
apt-get -q update &amp;&amp; apt-get -q -y install $PACKAGE
EOF
  images = &#091;"linode/ubuntu18.04", "linode/ubuntu16.04lts"]
  rev_note = "initial version"
}

resource "linode_instance" "foo" {
  image  = "linode/ubuntu18.04"
  label  = "foo"
  region = "us-east"
  type   = "g6-nanode-1"
  authorized_keys    = &#091;"..."]
  root_pass      = "..."

  <strong>stackscript_id = linode_stackscript.foo.id</strong>
  stackscript_data = {
    "package" = "nginx"
  }
}</code></pre>



<p>The &#8220;problem&#8221; comes when we want to deploy a product from the Marketplace; in order to do that, we need to collect a few things first: </p>



<ol class="wp-block-list">
<li>The ID of the StackScript you want to deploy</li>



<li>The list of supported OS images</li>



<li>The variables we need to provide to the StackScript</li>
</ol>



<p>Step 1 &#8211; Fetching the StackScript ID. In this case, I&#8217;ll do it using linode-cli, but you&#8217;re free to use any other method:</p>



<pre class="wp-block-code"><code>linode-cli stackscripts list --label "Redis Sentinel Cluster One-Click"
</code></pre>



<p>The output of this command looks like this:</p>



<p><img decoding="async" width="958" height="161" class="wp-image-150" style="width: 1000px" src="https://blog.slepcevic.net/wp-content/uploads/2023/07/Screenshot-2023-07-03-100015.png" alt="" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/07/Screenshot-2023-07-03-100015.png 958w, https://blog.slepcevic.net/wp-content/uploads/2023/07/Screenshot-2023-07-03-100015-300x50.png 300w, https://blog.slepcevic.net/wp-content/uploads/2023/07/Screenshot-2023-07-03-100015-768x129.png 768w" sizes="(max-width: 958px) 100vw, 958px" /></p>



<p>From here, we need to write down the ID of the script and check the list of supported images. In this case, I&#8217;ll deploy the StackScript with ID 1132204, which is owned by Linode. </p>



<p>The next step is to check which variables we need to provide to the StackScript. We can easily see that by running the following PowerShell command:</p>



<pre class="wp-block-code"><code>$stackscripts = ((wget https://cloud.linode.com/api/v4/linode/stackscripts?page_size=500).Content | convertfrom-json)
$stackscripts.data | where {$_.id -like "1132204"} | select -expand user_defined_fields
name                   label
----                   -----
token_password         Your Linode API token
sudo_username          The limited sudo user to be created in the cluster
sslheader              SSL Information
country_name           Details for self-signed SSL certificates: Country or Region
state_or_province_name State or Province
locality_name          Locality
organization_name      Organization
email_address          Email Address
ca_common_name         CA Common Name
common_name            Common Name
clusterheader          Cluster Settings
add_ssh_keys           Add Account SSH Keys to All Nodes?
cluster_size           Redis cluster size</code></pre>



<p>And finally, we put all of those variables into our Terraform code, which in the end looks something like this: </p>



<pre class="wp-block-code"><code>resource "linode_instance" "redis" {
        image = "linode/ubuntu22.04"
        label = "RedisCluster"
        group = "Redis"
        tags = &#091; "stage:dev" ]
        region = "eu-central"
        type = "g6-standard-1"
        authorized_users = &#091; "yourUsername" ]
        root_pass = "SuperRandomPassW0rd.123!"
<strong>        stackscript_id = <em>1132204</em>
        stackscript_data = {
            "token_password" = "yourLinodeToken"
            "sudo_username" = "myuser"
            "sslheader" = "Yes" #Default YES
            "country_name" = "NL"  #Two letter country code
            "state_or_province_name" = "South Holland" 
            "locality_name" = "Rotterdam" 
            "organization_name" = "Org Name" 
            "email_address" = "yourEmail@here.com" 
            "ca_common_name" = "Redis CA" 
            "clusterheader" = "Yes" 
            "add_ssh_keys" = "no" #yes or no
            "cluster_size" = "3" #3 or 5</strong>
        }
}
</code></pre>



<p>Notice we are required to provide our Linode token in order to deploy this cluster; this can be the same token you&#8217;re using for your existing Terraform code, so you can simply reference an environment variable in your code and off you go <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f600.png" alt="😀" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
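<p>For example, a common pattern (the variable name here is arbitrary) is to declare the token as an input variable and let Terraform read it from a TF_VAR_-prefixed environment variable:</p>

<pre class="wp-block-code"><code>variable "linode_token" {
  type      = string
  sensitive = true
}

# export TF_VAR_linode_token="yourLinodeToken"
# Terraform picks this up automatically; in stackscript_data you can then use:
#   "token_password" = var.linode_token</code></pre>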



<p>Cheers, Alex. </p><p>The post <a href="https://blog.slepcevic.net/deploying-linode-marketplace-stackscripts-with-terraform/">Deploying Linode Marketplace StackScripts with Terraform</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/deploying-linode-marketplace-stackscripts-with-terraform/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Installing and configuring 2FA enabled web managed access solution (jumphost) to any infrastructure deployed on Akamai Connected Cloud Compute – part 2</title>
		<link>https://blog.slepcevic.net/installing-and-configuring-2fa-enabled-web-managed-access-solution-jumphost-to-any-infrastructure-deployed-on-akamai-connected-cloud-compute-part-2/</link>
					<comments>https://blog.slepcevic.net/installing-and-configuring-2fa-enabled-web-managed-access-solution-jumphost-to-any-infrastructure-deployed-on-akamai-connected-cloud-compute-part-2/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Wed, 21 Jun 2023 11:15:44 +0000</pubDate>
				<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[Access]]></category>
		<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Cockpit]]></category>
		<category><![CDATA[Jumphost]]></category>
		<category><![CDATA[Linode]]></category>
		<guid isPermaLink="false">http://172.233.40.105/blog.slepcevic.net/?p=28</guid>

					<description><![CDATA[<p>In previous post, we deployed our Cockpit server which we will use as a jump host/tunnel. For us to actually start using the solution, we need to do a few things: Creating users Creating users is extremely easy doing Cockpit;...</p>
<p>The post <a href="https://blog.slepcevic.net/installing-and-configuring-2fa-enabled-web-managed-access-solution-jumphost-to-any-infrastructure-deployed-on-akamai-connected-cloud-compute-part-2/">Installing and configuring 2FA enabled web managed access solution (jumphost) to any infrastructure deployed on Akamai Connected Cloud Compute – part 2</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
<content:encoded><![CDATA[<p>In the previous <a href="https://blog.slepcevic.net/installing-and-configuring-2fa-enabled-web-managed-access-solution-jumphost-to-any-infrastructure-deployed-on-akamai-connected-cloud-compute-part-1/" target="_blank" rel="noreferrer noopener">post</a>, we deployed our Cockpit server, which we will use as a jump host/tunnel. </p>



<p>For us to actually start using the solution, we need to do a few things: </p>



<ol class="wp-block-list">
<li>Create the users</li>



<li>Create or import SSH keys &amp; configure 2FA using Google Authenticator</li>



<li>Configure network and firewalls around our jumphost and rest of the infrastructure</li>



<li>Connect</li>
</ol>



<h2 class="wp-block-heading">Creating users</h2>



<p>Creating users is extremely easy using Cockpit; on top of that, it lets users self-manage their keys and 2FA configuration. </p>



<ol class="wp-block-list">
<li>Go to the URL or IP of the Cockpit server you just deployed and log in using the credentials you&#8217;ve configured in your StackScript. </li>



<li>On the left-hand side, click the &#8220;Accounts&#8221; menu option and then the &#8220;Create new account&#8221; button</li>



<li>Enter the user details and press the &#8220;Create&#8221; button.</li>
</ol>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="976" height="615" src="https://blog.slepcevic.net/wp-content/uploads/2023/06/Screenshot-2023-06-21-115101.png" alt="" class="wp-image-126" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/06/Screenshot-2023-06-21-115101.png 976w, https://blog.slepcevic.net/wp-content/uploads/2023/06/Screenshot-2023-06-21-115101-300x189.png 300w, https://blog.slepcevic.net/wp-content/uploads/2023/06/Screenshot-2023-06-21-115101-768x484.png 768w" sizes="auto, (max-width: 976px) 100vw, 976px" /></figure>






<p>After the user(s) have been successfully created, let&#8217;s switch to the user&#8217;s point of view and see what the rest of the onboarding process looks like. </p>



<ol class="wp-block-list">
<li>Log in with the credentials of the user we&#8217;ve just created and navigate to the &#8220;Terminal&#8221; option. </li>



<li>Once there, type in google-authenticator and press Enter. This will start the process of configuring your 2FA authentication.</li>



<li>You are free to modify the authentication behavior; just make sure to save the authenticator file when the wizard asks you to do so. </li>
</ol>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="851" height="638" src="https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115834.png" alt="" class="wp-image-135" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115834.png 851w, https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115834-300x225.png 300w, https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115834-768x576.png 768w" sizes="auto, (max-width: 851px) 100vw, 851px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="797" src="https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115900-1024x797.png" alt="" class="wp-image-136" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115900-1024x797.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115900-300x233.png 300w, https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115900-768x597.png 768w, https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115900.png 1220w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>As a final step, we just need to upload our SSH public key and we&#8217;re all set to start securely connecting to our infrastructure. </p>



<p>The entire procedure is really straightforward: go to &#8220;Accounts&#8221;, click on your username, and on the right-hand side, under the &#8220;SSH Keys&#8221; section, click the &#8220;+&#8221; button. Paste your PUBLIC SSH key and click the &#8220;Add Key&#8221; button. </p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="514" src="https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115551-1024x514.png" alt="" class="wp-image-134" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115551-1024x514.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115551-300x151.png 300w, https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115551-768x386.png 768w, https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115551.png 1219w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>






<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="665" height="425" src="https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115459.png" alt="" class="wp-image-133" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115459.png 665w, https://blog.slepcevic.net/wp-content/uploads/2023/03/Screenshot-2023-06-21-115459-300x192.png 300w" sizes="auto, (max-width: 665px) 100vw, 665px" /></figure>



<p>That&#8217;s it! You&#8217;re done! In the upcoming blog posts, I&#8217;ll talk about how to configure your local machine to connect to your infrastructure, how to secure the infrastructure you&#8217;re connecting to, and finally, how we can make this solution highly available. </p>






<p>Cheers, Alex. </p><p>The post <a href="https://blog.slepcevic.net/installing-and-configuring-2fa-enabled-web-managed-access-solution-jumphost-to-any-infrastructure-deployed-on-akamai-connected-cloud-compute-part-2/">Installing and configuring 2FA enabled web managed access solution (jumphost) to any infrastructure deployed on Akamai Connected Cloud Compute – part 2</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/installing-and-configuring-2fa-enabled-web-managed-access-solution-jumphost-to-any-infrastructure-deployed-on-akamai-connected-cloud-compute-part-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Building your IT landscape on Akamai Connected Cloud &#8211; Part 2 &#8211; Defining the project</title>
		<link>https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-2-defining-the-project/</link>
					<comments>https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-2-defining-the-project/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Sun, 23 Apr 2023 23:49:50 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Open Source]]></category>
		<category><![CDATA[Containers]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[Infrastructure as code]]></category>
		<category><![CDATA[Linode]]></category>
		<category><![CDATA[Migration]]></category>
		<category><![CDATA[Terraform]]></category>
		<guid isPermaLink="false">http://172.233.40.105/blog.slepcevic.net/?p=53</guid>

					<description><![CDATA[<p>Related posts Part 1 &#8211; Intro Part 2 &#8211; this post Welcome to the second part of the &#8220;Building your IT landscape on Akamai Connected Cloud&#8221; series. In this post we will define the basic scenario and requirements for our...</p>
<p>The post <a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-2-defining-the-project/">Building your IT landscape on Akamai Connected Cloud – Part 2 – Defining the project</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p></p>



<p><strong><em>Related posts</em></strong></p>



<p>Part 1 &#8211; <a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/" data-type="URL" data-id="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/" target="_blank" rel="noreferrer noopener">Intro</a></p>



<p>Part 2 &#8211; <a rel="noreferrer noopener" href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-2-defining-the-project/" target="_blank">this post</a></p>



<p>Welcome to the second part of the &#8220;Building your IT landscape on Akamai Connected Cloud&#8221; series. </p>



<p>In this post we will define the basic scenario and requirements for our &#8220;<strong>demo</strong>&#8221; project which we will build on Akamai Connected Cloud. </p>



<h4 class="wp-block-heading"><strong>SCENARIO</strong></h4>



<p>We have a company called &#8220;<strong>Vapor</strong>&#8221;, an online-only company selling games and offering them for download. </p>



<p>Having had bad experiences with an MSP, they turned to <a href="https://www.akamai.com" target="_blank" rel="noreferrer noopener">Akamai</a> for help. </p>



<p>Besides aiming for better performance, <strong>Vapor&#8217;s</strong> DevOps team wanted to be more hands-on and have the ability to tweak things &#8220;<strong>under the hood</strong>&#8221; so the entire solution can be rearchitected down the line to fit the new cloud provider. </p>



<p>The project has two major phases. The first and critical phase is to do a <strong>lift and shift</strong> of the current platform and improve it with a minimum amount of effort: introduce monitoring and management software and create pipelines. The current solution experiences a lot of downtime, which directly correlates to lost revenue. </p>



<p>The second phase is to containerize and modernize the entire stack by utilizing Akamai&#8217;s managed Kubernetes service (LKE) and compute capabilities. Additionally, Vapor plans to establish a presence in the APAC, EMEA and AMER regions in the near future, so that requirement should be taken into consideration while designing the infrastructure. </p>






<h4 class="wp-block-heading">Current Tech stack</h4>



<p>A custom-made application based on PHP and MySQL with a few thousand articles. </p>



<p>The application is running on eight web servers behind a load balancer. A standalone host is running the MySQL database engine, while game data is stored in object storage, 200 TB in size. </p>






<p>Hardware specifications</p>



<ul class="wp-block-list">
<li>8 x Web servers: 16 CPU cores, 64 GB RAM, 200 GB SSD
<ul class="wp-block-list">
<li>Average CPU usage is 70%</li>



<li>80% memory usage</li>



<li>400 IOPS on average</li>
</ul>
</li>



<li>Database server: 32 CPU cores, 128 GB RAM, 500 GB SSD
<ul class="wp-block-list">
<li>DB is around 300 GB and growing 10% every quarter</li>



<li>90% RAM usage</li>



<li>5500 IOPS on average, with occasional spikes to 10000 IOPS</li>
</ul>
</li>



<li>Load balancer
<ul class="wp-block-list">
<li>1000 requests/s on average, with occasional spikes to 2500 req/s. </li>
</ul>
</li>



<li>Object storage
<ul class="wp-block-list">
<li>Currently hosted with a different cloud provider</li>
</ul>
</li>
</ul>
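<p>As a quick sanity check on database sizing, the figures above (around 300 GB today, growing roughly 10% every quarter) can be projected forward. A minimal sketch; the one- to three-year horizon is an assumption for illustration:</p>

```python
# Back-of-the-envelope capacity projection for the database server.
# Figures come from the spec list above: ~300 GB today, ~10% growth per quarter.

def projected_size_gb(current_gb: float, quarterly_growth: float, quarters: int) -> float:
    """Compound the quarterly growth rate over the given number of quarters."""
    return current_gb * (1 + quarterly_growth) ** quarters

for years in (1, 2, 3):
    size = projected_size_gb(300, 0.10, quarters=4 * years)
    print(f"after {years} year(s): ~{size:.0f} GB")
```

<p>At that rate the database outgrows the current 500 GB disk within roughly a year and a half, which argues for storage that can be resized independently of the compute instance.</p>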



<p>Monitoring and management is handled by the current MSP, so we have an open playing field when it comes to choosing the technologies we will use for monitoring, access, security, and so on.</p>



<p><strong>Simplified overview of the current infrastructure</strong></p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="669" height="374" src="https://blog.slepcevic.net/wp-content/uploads/2023/03/mclovin.png" alt="" class="wp-image-64" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/03/mclovin.png 669w, https://blog.slepcevic.net/wp-content/uploads/2023/03/mclovin-300x168.png 300w" sizes="auto, (max-width: 669px) 100vw, 669px" /></figure>






<h4 class="wp-block-heading"><strong>Where do we want to be in the future? </strong></h4>



<p>Before we start writing any code (yes, everything we will do will be done in code), we need to define our project goals, <a href="https://en.wikipedia.org/wiki/Functional_requirement" target="_blank" rel="noreferrer noopener">functional</a> and <a href="https://en.wikipedia.org/wiki/Non-functional_requirement" target="_blank" rel="noreferrer noopener">non-functional</a> requirements and try to predict the future a bit <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>



<h4 class="wp-block-heading"><strong>Let&#8217;s start with the main project goals:</strong></h4>



<ul class="wp-block-list">
<li>Improve availability</li>



<li>Lower hosting costs compared to the current provider</li>



<li>Improve performance</li>



<li>Build for scale</li>



<li>Accommodate new modernization efforts which are in the pipeline</li>



<li>Support world wide presence with the ability to do edge computing in the future</li>



<li>Make it secure</li>



<li>Make it easy to run</li>



<li>Make it automated</li>



<li>Make developers happy &lt;3</li>
</ul>



<h4 class="wp-block-heading">Overview of the new infrastructure layout</h4>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="456" src="https://blog.slepcevic.net/wp-content/uploads/2023/04/Screenshot-2023-04-16-204916-1024x456.png" alt="" class="wp-image-77" srcset="https://blog.slepcevic.net/wp-content/uploads/2023/04/Screenshot-2023-04-16-204916-1024x456.png 1024w, https://blog.slepcevic.net/wp-content/uploads/2023/04/Screenshot-2023-04-16-204916-300x134.png 300w, https://blog.slepcevic.net/wp-content/uploads/2023/04/Screenshot-2023-04-16-204916-768x342.png 768w, https://blog.slepcevic.net/wp-content/uploads/2023/04/Screenshot-2023-04-16-204916.png 1442w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>As you can see, our infrastructure layout can be broken down into the classic <strong>DT(A)P</strong> approach, alongside a management account and a backup account. </p>



<p>Our management account will run all &#8220;operational&#8221; services we need to run our infrastructure, like monitoring, build and deployment pipelines, security tooling and secure access services, while the Development, Test and Production accounts will host our dev/test and production workloads. </p>



<p>The goal is to make our development, test and production accounts identical in every aspect besides scale. That ensures that all infrastructure and application tests we run give realistic results. </p>



<p>Finally, we will have a dedicated Akamai Connected Cloud account in a different region where we will run our backup software and DR infrastructure. </p>



<p>Additionally, any <a href="https://www.linode.com/" target="_blank" rel="noreferrer noopener">Akamai Connected Cloud</a> service we might need (like <a href="https://www.linode.com/products/dedicated-cpu/" target="_blank" rel="noreferrer noopener">virtual machines</a>, <a rel="noreferrer noopener" href="https://www.linode.com/products/object-storage/" target="_blank">object storage</a>, <a rel="noreferrer noopener" href="https://www.linode.com/products/kubernetes/" target="_blank">LKE clusters</a>, etc.) will also be deployed in its own corresponding (DTAP) account. </p>



<p>Our intention is to have all environments physically and logically separated as much as possible. </p>



<p>The entire infrastructure will be built as code: primarily Terraform and Ansible for the first phase, while for the second phase we will look into Kubernetes and application management tooling and pipelines. TBD. </p>
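<p>To make the &#8220;everything in code&#8221; point concrete, here is a minimal, hypothetical Terraform sketch of what a single web node in one of the DTAP accounts could look like. The label, region and instance type are illustrative assumptions, not the final design:</p>

```hcl
terraform {
  required_providers {
    linode = {
      source = "linode/linode"
    }
  }
}

# Each DTAP account gets its own API token (and its own state backend).
provider "linode" {
  token = var.linode_token
}

variable "linode_token" {
  type      = string
  sensitive = true
}

variable "ssh_public_key" {
  type = string
}

# Illustrative web node; label, region and plan are placeholder choices.
resource "linode_instance" "web" {
  label           = "dev-web-01"
  region          = "eu-central"
  type            = "g6-dedicated-8" # dedicated CPU plan
  image           = "linode/ubuntu22.04"
  authorized_keys = [var.ssh_public_key]
  tags            = ["dev", "web"]
}
```

<p>Running the same module with different variables per account keeps the environments identical besides scale, as intended.</p>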



<p>In the next post we will start defining our functional and non-functional requirements for the entire project, drilling down into each category and starting to define our roadmaps. </p>






<p>Cheers, Alex. </p>



<p></p><p>The post <a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-2-defining-the-project/">Building your IT landscape on Akamai Connected Cloud – Part 2 – Defining the project</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-2-defining-the-project/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Building your IT landscape on Akamai Connected Cloud &#8211; Part 1 &#8211; INTRO</title>
		<link>https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/</link>
					<comments>https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/#respond</comments>
		
		<dc:creator><![CDATA[Alesandro Slepčević]]></dc:creator>
		<pubDate>Tue, 28 Mar 2023 09:42:18 +0000</pubDate>
				<category><![CDATA[Akamai Connected Cloud]]></category>
		<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Open Source]]></category>
		<guid isPermaLink="false">http://172.233.40.105/blog.slepcevic.net/?p=37</guid>

					<description><![CDATA[<p>Related posts Part 1 &#8211; Intro Part 2 &#8211; Defining the project Intro These blog post series will serve as an example on how to utilize Akamai Connected Cloud to build and run your company&#8217;s IT landscape. In the series...</p>
<p>The post <a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/">Building your IT landscape on Akamai Connected Cloud – Part 1 – INTRO</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><strong><em>Related posts</em></strong></p>



<p>Part 1 &#8211; <a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/" data-type="URL" data-id="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/" target="_blank" rel="noreferrer noopener">Intro</a></p>



<p>Part 2 &#8211; <a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-part-2-defining-the-project/" target="_blank" rel="noreferrer noopener">Defining the project</a></p>



<h4 class="wp-block-heading">Intro</h4>



<p>This blog post series will serve as an example of how to utilize Akamai Connected Cloud to build and run your company&#8217;s IT landscape. </p>



<p>In the series we will cover everything from infrastructure design and building the infrastructure using modern IaC tools, to management and monitoring. </p>



<p>After going through the infrastructure part, we will shift focus to building our application&#8217;s build and deployment pipelines. </p>



<p>As a final part of the series, we will round it out with security tooling. </p>



<h4 class="wp-block-heading">Plan:</h4>



<ul class="wp-block-list">
<li>Define the project</li>



<li>Write down requirements</li>



<li>Build infrastructure diagrams</li>



<li>Build the infrastructure
<ul class="wp-block-list">
<li>Terraform for IaC</li>



<li>Ansible/Puppet/Chef for configuration management (TBD)</li>



<li>Core infrastructure</li>



<li>Management infrastructure</li>



<li>Monitoring infrastructure</li>



<li>Build/deployment infrastructure</li>
</ul>
</li>



<li>Build pipelines</li>



<li>Security tooling</li>



<li>Management lifecycle</li>



<li>Modernize</li>
</ul>






<h4 class="wp-block-heading has-medium-font-size"><strong>What is Akamai Connected Cloud?</strong></h4>



<p>Akamai Connected Cloud delivers enterprise-grade, cloud-based solutions at scale around the globe. Akamai’s cloud computing services place compute, storage, database, and other services closer to your customers, key industries, and IT centers.&nbsp;</p>



<p>This means developers can focus on coding while Akamai handles the infrastructure, placing your workloads according to your needs and delivering apps where and when your users want them. Our aggressive egress pricing is designed to bring CDN-like economics to cloud data transfer, offering significantly discounted rates compared with other cloud providers, helping you maximize your cloud ROI.</p>



<h4 class="wp-block-heading">Build, deploy, and secure performant workloads</h4>



<p>Akamai Connected Cloud is a continuum of compute from core to edge, paired with our security, CDN, and 24/7/365 support. This will enable you to more efficiently build, deploy, and secure performant workloads that require single-digit millisecond latency and global reach.&nbsp;</p>



<p>And with Linode’s developer-friendly DNA, you’ll find it simpler to deploy distributed applications. Developers will be able to use these capabilities for applications, workloads, and use cases that haven’t been imagined yet.</p>






<h4 class="wp-block-heading">Which cloud services are available in Akamai Connected Cloud?</h4>



<ul class="wp-block-list">
<li>Compute
<ul class="wp-block-list">
<li>Shared CPU virtual machines</li>



<li>Dedicated CPU virtual machines</li>



<li>LKE &#8211; Managed Kubernetes</li>
</ul>
</li>



<li>Storage
<ul class="wp-block-list">
<li>Block storage</li>



<li>Object Storage</li>
</ul>
</li>



<li>Managed services &amp; one click clusters
<ul class="wp-block-list">
<li>MySQL</li>



<li>PostgreSQL</li>



<li>Redis</li>



<li>MongoDB</li>
</ul>
</li>



<li>Networking
<ul class="wp-block-list">
<li>Load balancer</li>



<li>Cloud firewall</li>



<li>VLAN</li>



<li>DNS Manager</li>



<li>DDoS Protection</li>
</ul>
</li>



<li>Delivery
<ul class="wp-block-list">
<li>Adaptive Media</li>



<li>Download Delivery</li>



<li>Ion</li>



<li>Global Traffic</li>
</ul>
</li>



<li>Security
<ul class="wp-block-list">
<li>Guardicore</li>



<li>Kona Site Defender</li>



<li>App &amp; API Protector</li>



<li>Bot Manager</li>



<li>Account Protector</li>



<li>EAA</li>
</ul>
</li>
</ul>



<p>Stay tuned for the next post where we will start defining our project and scope down the requirements. </p><p>The post <a href="https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/">Building your IT landscape on Akamai Connected Cloud – Part 1 – INTRO</a> first appeared on <a href="https://blog.slepcevic.net">Architect the cloud</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://blog.slepcevic.net/building-your-it-landscape-on-akamai-connected-cloud-intro/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
