https://www.michaelburch.net/ Michael Burch's Blog me@michaelburch.net Copyright © 2023 2023-09-24T22:58:24Z https://www.michaelburch.net/favicon-32x32.png Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. https://www.michaelburch.net/blog/last-day-at-microsoft.html Last Day at Microsoft Michael Burch 2023-09-22T00:00:00Z <p>Today was my last day at Microsoft. Working here was a great opportunity for me, and I've met and worked with some really wonderful folks. After trying for more than a year to pivot from consulting to another role within the company, I decided now was the time to leave.</p> <p>Microsoft is, in many ways, a great company to work for. They offer a lot in terms of employee development, and generally have highly skilled people who work with integrity. Benefits are very good and I was in a remote position that offered real flexibility for me and my family. As workplaces go, this one was pretty great. I started as a contractor at a time when Microsoft Consulting (aka ISD, Industry Solutions Delivery) needed Kubernetes expertise. I've been in consulting roles a few times during my career, most recently having led the Azure consulting practice at IBM so I was no stranger to working on short term customer engagements. I was, however, ready to get out of consulting altogether and focus on doing one job for one company.</p> <h2 id="lack-of-internal-opportunities">Lack of Internal Opportunities</h2> <p>Most of my career I have been building and scaling distributed systems along with developing software that runs on them. I thrive on solving puzzles with technology and I find that consulting roles offer less of this type of work and, maybe necessarily, a stronger focus on developing good patterns and practices for consulting engagements. I didn't want to be a better consultant, I wanted to sharpen my technical skills and build exciting and complex systems. I hoped that taking the job as a full-time employee at Microsoft would give me a foot in the door and the chance at another role doing just that. Unfortunately, despite all the encouragement and assurances of help to move into another role I couldn't even get an interview for any other internal position. This was discouraging, and affirmed for me that what I wanted to do (systems and software development) and what Microsoft wanted me to do (consulting) were entirely different. This was reason enough, but there was more.</p> <h2 id="direction-of-the-company">Direction of the company</h2> <p>I do not share the enthusiasm for generative A.I. that pervades every product and service today. For what it's worth, I wasn't excited or interested in NFTs either. I'm no better at predicting the success or failure of a given technology than anyone else so maybe it really will be as &quot;significant as the PC&quot; as some have suggested - time will tell. Don't get me wrong, I use ChatGPT, GitHub Copilot and Midjourney regularly and they are helpful tools and I am fascinated by models that are being trained on religious texts. I do not like the &quot;spray and pray&quot; method of shoving this technology into every product and waiting to see where it succeeds. It's careless and desperate and will not be good for consumers. Haste makes waste as the saying goes. 
This has already led to <a href="https://fortune.com/2023/09/19/microsoft-ai-researchers-accidental-data-leak/" rel="noopener" target="_blank">at least one incident</a> and I'm sure there are more to come.</p> <h2 id="compensation">Compensation</h2> <p>Compensation at Microsoft is good. I was well paid and the benefits package was generous. I wasn't sitting around comparing my salary to others, or to other available roles, and wishing things were different. I didn't start looking for a new role because I needed or wanted more money. Somewhere along the way, the link between my individual performance, the success of the business, and my compensation was broken. This last year was an <a href="https://www.microsoft.com/en-us/Investor/earnings/FY-2023-Q4/press-release-webcast" rel="noopener" target="_blank">undeniable success for Microsoft</a> as a company, and for my division specifically; my personal job metrics (or &quot;Connect&quot; in MS terms) were higher than last year, and yet the variable component of my pay was down significantly. It really doesn't matter how well compensated you are at work; a double-digit percentage drop in compensation has a massive impact (both financial and psychological) on an employee.</p> <p>I am really grateful for the opportunity at Microsoft. I've been a fan of their products and services for a long time. I've fought hard to implement some of their best products for other companies that I've worked for. I've owned many of their (mostly abandoned) consumer products, including the Microsoft Band and Windows Phone. I still believe the Azure cloud is the best option for companies to build and scale applications without operating their own datacenters. Through my work there I have met some amazingly talented folks and seen first-hand how some of the largest companies are using the cloud. I'm also happy to be moving on and excited to focus on what comes next.</p> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. https://www.michaelburch.net/blog/add-gpu-drivers-to-azure-images.html Add GPU drivers to Azure Images Michael Burch 2023-01-31T00:00:00Z <p>Cloud-based GPUs provide a flexible, scalable, and cost-effective solution for training complex machine learning and deep learning models. NVIDIA is the vendor to beat in this space, providing high-performance GPUs and the CUDA programming model used by many A.I. workloads, including ChatGPT. Despite the popularity of NVIDIA GPUs and wide support for GPU-equipped virtual machines in the cloud, the CUDA drivers are not included with many stock VM images. Installing these drivers on your own custom images enables you to spin up more GPUs faster, whether on virtual machines or scale sets.</p> <h2 id="packer-configuration">Packer Configuration</h2> <p>For this post, I'll be using the ARM Builder for <a href="https://developer.hashicorp.com/packer/plugins/builders/azure/arm" rel="noopener" target="_blank">Packer</a> to install the CUDA drivers in RHEL 8.6. This will use the existing Azure RHEL 8.6 image as a baseline, and then run the installation scripts and commands needed to add the CUDA drivers from NVIDIA. 
You can find the official CUDA install instructions on the <a href="https://developer.nvidia.com/cuda-downloads?target_os=Linux&amp;target_arch=x86_64&amp;Distribution=RHEL&amp;target_version=8&amp;target_type=rpm_network" rel="noopener" target="_blank">NVIDIA download page</a>.</p> <blockquote> <p>I recommend trying out any customizations on a VM before attempting to capture an image with Packer.</p> </blockquote> <p>I started by creating a Packer configuration file, rhel8-nvidia-nc.pkr.hcl. Note that you will need a service principal that Packer can use to connect to Azure and create virtual machines and images. Replace the tenant_id, subscription_id, client_id, and client_secret values below with your own Azure details. The <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/build-image-with-packer#create-azure-credentials" rel="noopener" target="_blank">Microsoft documentation</a> for this process details exactly what you need.</p> <pre><code>source "azure-arm" "rhel8_nvidia_t4" { azure_tags = { OS = "RHEL8" task = "Image deployment" } client_id = "&lt;your-sp-client-id-here&gt;" client_secret = "&lt;your-sp-client-secret-here&gt;" image_offer = "RHEL" image_publisher = "RedHat" image_sku = "86-gen2" location = "West US 3" managed_image_name = "rhel8-nvidia-t4" managed_image_resource_group_name = "rg-hub-wus3-demo" os_type = "Linux" os_disk_size_gb = "128" subscription_id = "&lt;your-azure-subscription-id-here&gt;" tenant_id = "&lt;your-azure-tenant-id-here&gt;" vm_size = "Standard_NC4as_T4_v3" virtual_network_name = "vnet-hub-wus3-demo" virtual_network_subnet_name = "vnets-srv-hub-wus3-demo" virtual_network_resource_group_name = "rg-hub-wus3-demo" } build { sources = ["source.azure-arm.rhel8_nvidia_t4"] provisioner "shell" { execute_command = "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'" inline = [ "dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm", "dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo", "dnf clean all", "dnf -y module install nvidia-driver:latest-dkms", "dnf install -y kernel kernel-tools kernel-headers kernel-devel", "/usr/sbin/waagent -force -deprovision+user &amp;&amp; export HISTSIZE=0 &amp;&amp; sync" ] inline_shebang = "/bin/sh -x" } } </code></pre> <p>Some of the options here are self-explanatory, but here are a few that might not be. These three identify the base image that Packer will start with:</p> <ul> <li>image_offer</li> <li>image_publisher</li> <li>image_sku</li> </ul> <p>You can find the values for these with the Azure CLI, for example to list all the images published by RedHat with the associated offer and sku values:</p> <pre><code>az vm image list --publisher RedHat --all </code></pre> <blockquote> <p>Note that the SKU value also indicates which VM generation the image is for. In my case, I am deploying a <em>Standard_NC4as_T4_v3</em> which supports both Generation 1 and 2 on Azure. Check your VM size to see what Generations are supported and select an appropriate image.</p> </blockquote> <p>There are two settings that control how you can find the customized image later in the Azure portal. You can control the name of the image and the resource group where it will be stored. I named my image <em>rhel8-nvidia-t4</em> because it's a RHEL image with NVIDIA drivers using the Tesla T4 GPU. Packer will create it's own resource group for the temporary VM it builds to capture the image and then delete it when complete. 
The resulting image will be stored in the resource group I specified, <em>rg-hub-wus3-demo</em>, which is my West US 3 regional hub. I've also specified the os_disk_size_gb parameter - the CUDA drivers (and toolkit) are fairly large and won't fit on the 64GB default disk.</p> <p>I've also configured Packer to connect the temporary build VM to an existing virtual network. This isn't required, but if you don't connect to an existing VNET, Packer will create a public IP address for the temp VM, and I don't want that in this case. I have an existing hub-and-spoke deployment and would rather use that to handle routing my egress traffic, even for a temporary VM.</p> <p>You can also take advantage of <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/spot-vms" rel="noopener" target="_blank">spot pricing</a> in Azure for Packer builds. If you're using a spot-eligible size, you can configure images for which you might otherwise not have sufficient quota. Spot pricing can be configured by adding the following snippet to the source properties:</p> <pre><code> source "azure-arm" "rhel8_nvidia_t4" { ... virtual_network_resource_group_name = "rg-hub-wus3-demo" spot { eviction_policy = "Deallocate" } } ... </code></pre> <p>You may also need to append the following plugin configuration to the end of your Packer configuration file:</p> <pre><code>packer { required_plugins { azure = { version = "&gt;= 1.4.0" source = "github.com/hashicorp/azure" } } } </code></pre> <p>The remainder of the file is the script that is run to install the GPU drivers. Note that there is an additional command at the end that deprovisions the Azure agent, removes user accounts, and cleans the history. This ensures that VMs created from this image later are clean and will have a functional Azure agent. This should be the last command run, so if you need to perform additional customizations you should add them above this line. Now that the configuration is complete, let's run Packer and see if the build succeeds.</p> <h2 id="create-an-image-with-packer">Create an image with Packer</h2> <p>I was disappointed that Packer isn't available via <a href="https://learn.microsoft.com/en-us/windows/package-manager/winget/" rel="noopener" target="_blank">winget</a>, but thankfully the install was as easy as downloading one executable and dropping it into a folder in my path. After that, there are just two commands to run to kick off the build:</p> <pre><code>michaelburch ❯ packer init rhel8-nvidia-nc.pkr.hcl michaelburch ❯ packer build rhel8-nvidia-nc.pkr.hcl azure-arm.rhel8_nvidia_t4: output will be in this color. ==&gt; azure-arm.rhel8_nvidia_t4: Running builder ... ==&gt; azure-arm.rhel8_nvidia_t4: Getting tokens using client secret ==&gt; azure-arm.rhel8_nvidia_t4: Getting tokens using client secret azure-arm.rhel8_nvidia_t4: Creating Azure Resource Manager (ARM) client ... </code></pre> <p>After about 10 minutes, the build is complete (cleanup and all!) and Packer provides the ID of the new image. Overall it took about a minute to provision the temporary VM, 6 minutes to run my customizations, and 3 minutes to capture the image and clean up. 6 minutes doesn't seem like a long time, unless you need to provision a lot of these very quickly and then it seems like an eternity. 
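</p> <p>For a sense of what that saves, this is roughly what you'd otherwise have to push to every new VM at deploy time - for example with the Custom Script Extension. The resource group and VM name below are just placeholders, and the commands are the same ones from the Packer provisioner above:</p> <pre><code># Hypothetical example - run the same driver install on an existing VM at deploy time
az vm extension set --resource-group rg-packer-demo --vm-name vm-nvt4-demo-1 --publisher Microsoft.Azure.Extensions --name CustomScript --settings '{"commandToExecute": "dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm; dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo; dnf clean all; dnf -y module install nvidia-driver:latest-dkms"}'
</code></pre> <p>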
Performing this step once when we create the image saves us 6 minutes each time we start a new instance, since we don't have to wait for these steps to be completed by cloud-init or some configuration management tool.</p> <pre><code>==&gt; azure-arm.rhel8_nvidia_t4: Resource group has been deleted. Build 'azure-arm.rhel8_nvidia_t4' finished after 10 minutes 18 seconds. ==&gt; Wait completed after 10 minutes 18 seconds ==&gt; Builds finished. The artifacts of successful builds are: --&gt; azure-arm.rhel8_nvidia_t4: Azure.ResourceManagement.VMImage: OSType: Linux ManagedImageResourceGroupName: rg-hub-wus3-demo ManagedImageName: rhel8-nvidia-t4 ManagedImageId: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-hub-wus3-demo/providers/Microsoft.Compute/images/rhel8-nvidia-t4 ManagedImageLocation: West US 3 michaelburch ❯ </code></pre> <p>Now the image is complete and I can find in in my resource group. The Packer output shows the "ManagedImageId" and I can copy that value directly into a bicep or ARM template to deploy a new VM, I can also find it in the Azure portal later if I forget (or never saw this output because it was part of a build pipeline).</p> <p><img src="https://www.michaelburch.net/images/packer-image-props.png" alt="alt text" title="azure portal screenshot"></p> <p></p> <h2 id="deploy-a-vm-with-the-new-image">Deploy a VM with the new image</h2> <p>The ManagedImageId can be used to reference the image when deploying a VM using Bicep, ARM or Azure CLI. I usually deploy with bicep templates so I just need to update the image reference property on the VM. All that's needed is to find the platform image reference in the template, which looks something like this:</p> <p><img src="https://www.michaelburch.net/images/bicep-image-ref-rhel.png" alt="alt text" title="bicep image reference"></p> <p>and replace it with the ManagedImageId:</p> <p><img src="https://www.michaelburch.net/images/bicep-image-ref-custom.png" alt="alt text" title="bicep image reference with custom image"></p> <p>With that change made, I can deploy the bicep template and have a new VM up and running with my custom image. The same can be achieved with the following Azure CLI command in powershell (replace the ` with \ for bash):</p> <blockquote> <p>I'm deploying this into an existing vnet with no public IP and using an existing SSH key and NSG. The most relevant parameter is <em>--image</em></p> </blockquote> <pre><code class="language-pwsh">az vm create --resource-group rg-packer-demo ` --name vm-nvt4-demo-1 ` --image /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-hub-wus3-demo/providers/Microsoft.Compute/images/rhel8-nvidia-t4 ` --admin-username michael ` --ssh-key-values c:\users\michaelburch\.ssh\id_rsa.pub ` --vnet-name vnet-packer-demo ` --subnet server-subnet ` --nsg nsg-packer-default ` --size Standard_NC4as_T4_v3 ` --public-ip-address '""' It is recommended to use parameter "--public-ip-sku Standard" to create new VM with Standard public IP. Please note that the default public IP used for VM creation will be changed from Basic to Standard in the future. 
{ "fqdns": "", "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-packer-demo/providers/Microsoft.Compute/virtualMachines/vm-nvt4-demo-1", "location": "westus3", "macAddress": "60-45-BD-CC-55-47", "powerState": "VM running", "privateIpAddress": "192.168.49.133", "publicIpAddress": "", "resourceGroup": "rg-packer-demo", "zones": "" } michaelburch ❯ $command = Get-History -Count 1 michaelburch ❯ $($command.EndExecutionTime - $command.StartExecutionTime).TotalSeconds 99.5999679 </code></pre> <p>It's easy to see that deploying a VM using a custom image is significantly faster, just 99.5 seconds! The real question is, does it actually <em>work</em> ? We can SSH to the VM and find out:</p> <pre><code>michaelburch ❯ ssh michael@192.168.49.133 The authenticity of host '192.168.49.133 (192.168.49.133)' can't be established. ED25519 key fingerprint is SHA256:W1CknBZ2sGSCNLELhmIx9F2fXXyZsrKTfSPMAuseFlw. This key is not known by any other names Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '192.168.49.133' (ED25519) to the list of known hosts. Activate the web console with: systemctl enable --now cockpit.socket [michael@vm-nvt4-demo-1 ~]$ nvidia-smi +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla T4 Off | 00000001:00:00.0 Off | 0 | | N/A 51C P0 26W / 70W | 2MiB / 15360MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ [michael@vm-nvt4-demo-1 ~]$ exit </code></pre> <p>Success! Running <code>nvidia-smi</code> shows us that the drivers are loaded and the Tesla T4 GPU is successfully detected.</p> <h2 id="dive-deeper">Dive Deeper</h2> <p>This is a fairly minimal example of creating and using custom images in Azure. More advanced scenarios, such as versioning, sharing images across tenants, or deploying at greater scale are enabled with the use of <a href="https://learn.microsoft.com/en-us/cli/azure/sig/image-version?view=azure-cli-latest#az-sig-image-version-create-examples" rel="noopener" target="_blank">Shared Image Galleries</a>. You can also take advantage of <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/image-builder-overview?tabs=azure-powershell" rel="noopener" target="_blank">Azure VM Image Builder</a> to deploy your existing HCL or JSON Packer configurations without needing to install anything. 
Below are links to reference documentation that I used or found helpful while writing this post:</p> <ul> <li><a href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/build-image-with-packer" rel="noopener" target="_blank">Microsoft Documentation for Packer</a></li> <li><a href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/n-series-driver-setup" rel="noopener" target="_blank">Azure N-Series Driver Setup</a></li> <li><a href="https://developer.nvidia.com/cuda-downloads?target_os=Linux&amp;target_arch=x86_64&amp;Distribution=RHEL&amp;target_version=8&amp;target_type=rpm_local" rel="noopener" target="_blank">NVIDIA CUDA Driver Install</a></li> <li><a href="https://developer.hashicorp.com/packer/plugins/builders/azure/arm" rel="noopener" target="_blank">Packer Reference Guide</a></li> </ul> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. https://www.michaelburch.net/blog/run-apps-on-kubernetes-without-managing-kubernetes.html Run apps on Kubernetes without managing Kubernetes Michael Burch 2023-01-20T00:00:00Z <p>Kubernetes is a great platform for running scalable apps and is available pretty much everywhere. Many who own or deploy apps have enough to manage already and don't have the cycles to operate their own Kubernetes clusters. Azure Container Apps (ACA) is the fastest way to get started running apps on Kubernetes in Azure, with service mesh, ingress, and secure defaults. Best of all, it shifts the operational burden to the cloud provider. This post reviews the advantages of ACA as well as some of the key features that aren't available to help determine if ACA is a good fit for your application.</p> <h2 id="powered-by-kubernetes">Powered by Kubernetes</h2> <p>Microsoft launched Azure Container Apps in 2022 and describes the service as being <a href="https://learn.microsoft.com/en-us/azure/container-apps/compare-options#azure-container-apps" rel="noopener" target="_blank">&quot;powered by Kubernetes and open-source technologies&quot;</a>. Practically, this means that Azure runs a Kubernetes cluster for you and exposes a subset of the platform's features for you to deploy and manage your app. When compared to other Kubernetes deployment options, like running your own bare-metal cluster or building one in AKS this method significantly cuts back on the stuff you have to manage yourself.</p> <div class="container-right container-row"> <?# CaptionImage Src="/images/aca-compare.png" AltText="a table comparing features of AKS and ACA" Style="container-left"?>Shared Responsibility table<?#/CaptionImage ?> </div> <p>The table I show here may not match other shared responsibility models out there. I currently run a Mastodon instance on an AKS cluster, and even though I am using Azure-provided Ubuntu images for the nodes and security updates are automatically installed I still consider that I am responsible for maintaining these items. After all, the node images won't be updated unless I trigger or configure that to happen and security patches won't be applied unless I reboot the node (manually or via kured). In short, if a classic shared responsibility model identifies that I (as the app owner) share the responsibility then I am ultimately responsible.</p> <p>Cloud providers like Azure have made a lot of progress toward helping customers automate the management and maintenance of Kubernetes clusters. 
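</p> <p>For example, on AKS much of that version management can now be delegated to the platform; a minimal sketch with the Azure CLI (cluster and resource group names are placeholders) might look like this:</p> <pre><code class="language-bash"># See which Kubernetes versions the cluster can be upgraded to
az aks get-upgrades --resource-group rg-demo --name aks-demo --output table

# Let AKS apply upgrades from a release channel automatically
az aks update --resource-group rg-demo --name aks-demo --auto-upgrade-channel stable
</code></pre> <p>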
A common challenge that I see is that Kubernetes itself is updated frequently, and often will introduce breaking changes. Even when operating an AKS or EKS cluster, teams still need to stay current on all of the changes and ensure their clusters are on appropriate versions. ACA and services like it eliminate this challenge - with a few trade-offs.</p> <p></p> <h2 id="optimized-for-microservices">Optimized for microservices</h2> <p>The target use case for Azure Container Apps is microservices and general-purpose containers. This makes it ideal for long-running background tasks, web apps, and API endpoints. Some of the main Kubernetes features that are available when using ACA include:</p> <ul> <li>Event-driven auto scaling</li> <li>Traffic splitting</li> <li>HTTP/S Ingress</li> </ul> <p>KEDA and HTTP scaling triggers are supported, and greatly simplified. ACA also includes a managed service mesh, which enables you to deploy multiple revisions of an app and split traffic between them. This service mesh also secures traffic between apps, which means that unless DAPR is used, the only communication that is allowed between container apps is HTTP/S. Additionally, the only <em>ingress</em> traffic that can be allowed is HTTP/S.</p> <p>Ingress on Kubernetes is a large, complex topic. ACA is a highly opinionated service, which reduces this complexity. This can be very limiting, since you aren't able to allow any other traffic into your app except HTTP/S (and only on ports 80 and 443). There's a significant reward for this - only HTTP/S traffic on well-known ports is allowed to reach your app <em>out of the box</em>.</p> <h2 id="limitations">Limitations</h2> <p>Things change quickly in the world of Kubernetes and containers. As of this writing, there are some significant limitations that might make you consider deploying to AKS or another service instead of ACA. I'll save you the trouble of digging through the documentation to find the limitations that I have found to be the most significant:</p> <ul> <li><p>No <a href="https://learn.microsoft.com/en-us/azure/container-apps/firewall-integration" rel="noopener" target="_blank">Internet Egress routing</a> <br/> If you are an enterprise and want to route all of the traffic leaving your containers and going to the Internet through your own central firewall - ACA is not for you. The docs word it this way: &quot;Using custom user-defined routes (UDRs) or ExpressRoutes, other than with UDRs of selected destinations that you own, are not yet supported for Container App Environments with VNETs. Therefore, securing outbound traffic with a firewall is not yet supported.&quot;</p> </li> <li><p>Only <a href="https://learn.microsoft.com/en-us/azure/container-apps/containers#limitations" rel="noopener" target="_blank">linux/amd64 container images</a> are supported <br/> I'm fairly sure this is the most commonly deployed platform, but with the rise of ARM64 and Microsoft's own efforts with Windows containers this could be a deal breaker for some.</p> </li> <li><p>No privileged containers <br/> Generally, even if you can run privileged containers, you shouldn't. There are some edge cases though, so if you need them, consider AKS instead.</p> </li> <li><p><a href="https://learn.microsoft.com/en-us/azure/container-apps/containers#configuration" rel="noopener" target="_blank">Resource Limits</a> <br/> Apps running in ACA are limited to 2 CPU cores and 4GB of memory. 
This is a cumulative limit, so if you've got multiple containers in one app (like a DAPR sidecar with a MongoDB container) their combined CPU cannot exceed 2 cores.</p> </li> <li><p>Port restrictions <br/> Unless you are using DAPR, apps running in ACA can only communicate <em>with each other</em> over HTTP/S on ports 80 and 443. Egress traffic to external services is not restricted by ACA. It seems likely that for the target use case this will be the most common desired configuration, but if you are hoping to migrate an existing app and it requires communicating between two services on other ports, you'll need to consider AKS or another service.</p> </li> </ul> <p>Overall, ACA delivers on its goal to enable teams to easily deploy containerized apps in the cloud. While similar to Azure Kubernetes Service, Azure Container Apps is generally easier to use and requires less prior knowledge of Kubernetes. There are some meaningful limits that may make it a non-starter for some projects, although it is actively developed and I'm sure it will continue to grow and evolve with input from customers.</p> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. https://www.michaelburch.net/blog/building-a-pc-in-2022.html Building a PC in 2022 Michael Burch 2022-09-13T00:00:00Z <p>I built the first PC I ever used from some spare parts my dad had laying around. That was in 1989; I was 9 years old and instantly became a PC enthusiast. I had just built a system with a 286 CPU in an AT form factor tower case with a 20 Megabyte (not a typo) HDD and was already looking forward to upgrading to a 386. I built every PC I owned from then on, using the old parts to make a PC for a friend or family member. Then in 2011 I bought a laptop and gave away my last desktop build. There simply weren't any options for building a desktop-class machine in a laptop form factor. Until now.</p> <h2 id="consumer-electronics-is-broken">Consumer Electronics is broken</h2> <p><a href="https://frame.work/" rel="noopener" target="_blank">Framework</a> makes a DIY Laptop kit with modern power and capabilities in an incredible 13-inch thin and light form factor.</p> <p>According to Framework, the consumer electronics industry is broken. The typical business model in the space promotes cranking out new devices every year that can be disposed of when the next model comes out. There's certainly a glut of &quot;disposable&quot; hardware out there, and that's the problem that Framework is trying to address by sustainably producing laptops that can easily be repaired and upgraded by the end user. It's an ambitious goal, and as a natural skeptic myself I doubted they would get anywhere.</p> <p>However, they have recently released the first upgrade: a user-replaceable mainboard that fits in the same chassis, which makes it possible to reuse most of the existing hardware while upgrading the core components like the CPU. After the upgrade, you don't just have to throw away the old board; you can <em><strong>3D print your own case and make a mini PC out of it</strong></em>. This is so similar to the experience I've had upgrading desktop machines over the years that I just had to try it out.</p> <h2 id="it-takes-a-community">It takes a community</h2> <p>Framework the company is still young, and while they do fairly well at planning orders they certainly aren't sitting on piles of inventory ready to ship. This means you could be waiting awhile. 
I received mine in just over a month, although others have been waiting 100+ days, YMMV.</p> <p>Thankfully, there's a great community of Framework customers who are active on <a href="https://community.frame.work/" rel="noopener" target="_blank">the official forums</a>, Reddit, Discord, and even IRC. This is a great resource for everything from <a href="https://community.frame.work/t/batch-2-intel-12th-guild/19490/427" rel="noopener" target="_blank">&quot;waiting therapy&quot;</a> to technical support.</p> <p>Even little things, like the fact that the first boot after installing new memory can take awhile are sometimes more easily found in the community than in the official docs. You can RTFM for stuff like that but you can also RYAF (Remind Your Anxious Friends) with a helpful post.</p> <p>Participating in the forums gave me an experience much like what I had going to brick an mortar computer stores long ago and talking with the employees about various upgrades. We were even able to collaborate on a public spreadsheet to track our collective order progress without anyone messing it up. Turns out the Internet can still be a nice place to hang out.</p> <h2 id="assembly">Assembly</h2> <p>It's been years since I built a PC for myself, so thankfully I had an 8 year old around to help me. This was a fun project to share with my daughter, and a chance to teach her something about what's inside these things we play Minecraft on.</p> <div class="container-right container-row"> <?# CaptionImage Src="/images/fwbox1.jpg" AltText="a photo of the Framework Laptop shipping box" Style="container-left"?>Laptop<?#/CaptionImage ?> <?# CaptionImage Src="/images/fwbox2.jpg" AltText="a photo of the second layer of the Framework Laptop shipping box" Style="container-right"?>Components<?#/CaptionImage ?> </div> <p></p> I purchased the DIY Edition of the Framework Laptop, so all of my components arrived neatly organized in one two-layered box. I ordered the NVMe drive, memory, and power adapter from Framework although all of these are optional and can be purchased elsewhere if you like. <p></p> <div class="container container-row"> <?# CaptionImage Src="/images/fwunboxed.jpg" Style="container-right"?>Framework Components<?#/CaptionImage ?> <p>This is also a great opportunity to reuse hardware you might have laying around as NVMe drives are becoming more common in desktops and the laptop can be charged with almost any USB-C charger.</p> <p>I layed out all of the components next to the laptop, along with the included screwdriver before getting started.</p> </div> <p></p> <div class="container container-row"> <?# CaptionImage Src="/images/fwopen.jpg" Style="container-left"?>Keyboard removed<?#/CaptionImage ?> <p>Assembly was simple. The laptop has 5 captive screws on the bottom which can be loosened with the included screwdriver. With that done, all we had to do was flip it over and lift the keyboard off to expose the internals.</p> <p>The keyboard is attached with a cable that pulls up easily to disconnect. All components are clearly labeled and the QR codes helpfully open a browser to the instructions if needed.</p> </div> <p></p> <div class="container container-row"> <?# CaptionImage Src="/images/fwinstall.jpg" Style="container-right"?>Installing NVMe drive<?#/CaptionImage ?> <p>We took a brief instructional tour of all the components and then set to work adding the memory and NVMe drive. The included screwdriver is also used to secure the drive, making this a very simple single tool process. 
With all the internal components installed we reattached the keyboard cable and set the keyboard back in place. It snaps into place magnetically which is a nice touch.</p> <p>There's also something different about the way the wrist rest area feels on this compared to other laptops. I prefer the feel of the Framework to my work PC (a Surface Book) for sure.</p> </div> <p></p> <p>After all the internals were installed we turned the laptop back over and tightened up the screws. The last step was to install the expansion cards. This is something unique to the Framework: there are four expansion slots, which can each hold a card that provides connectivity such as USB, USB-C, HDMI, MicroSD, Ethernet, or additional storage. I've been slowly moving to USB-C over the past several years and most of my accessories use that connector now, so I ordered 4 USB-C expansion cards which will be installed most of the time. I also picked up HDMI and MicroSD cards to throw in my bag in case I ever do need them.</p> <div class="container-right container-row"> <?# CaptionImage Src="/images/fwcards.jpg" AltText="a photo showing installation of Framework Laptop expansion cards" Style="container-left"?>Expansion card<?#/CaptionImage ?> <?# CaptionImage Src="/images/fwboot.jpg" AltText="a photo of the framework boot screen" Style="container-right"?>First boot!<?#/CaptionImage ?> </div> <p></p> <p>Once all the components were done, it was time to (cross fingers) boot up and install an operating system. I'm a Windows fan, and also an Ubuntu fan but I wanted to try something new this time so I booted to Fedora 36. I was pleasantly surprised at how quickly I was able to get to work. The install was fast, everything worked out of the box and configuration was simple. Overall, this was a great experience.</p> <p>Framework is really focused on creating repairable, sustainable products. It's all over their website and promotional materials and it's a good goal. They should stay focused on this because I'm sure it's an uphill battle.</p> <p>Maybe unintentionally, they will succeed in reviving the PC enthusiast market. People like me who enjoy building their own computers can now build some pretty exciting laptops. I would personally like to see an effort to standardize on mainboard form factors, something laptops have never had. There's a great potential here and I'm excited to see how it develops and I hope others will support companies like Framework.</p> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. https://www.michaelburch.net/blog/serverless-python-apps-on-azure-functions.html Serverless Python Apps on Azure Functions Michael Burch 2020-10-30T00:00:00Z <p>Azure Functions makes it easy to run apps written in Python (or Java/dotnet/JS/TS/etc) in a scalable, fully managed environment. I like to see my code come to life and make it available for others but don't want to think about High Availability, scaling, or OS updates. Azure Functions takes care of all of that and has a very generous free tier. In this post I'll cover how to get Python code for a Todo API running in Azure Functions with a Svelte (JS) front-end and a serverless database with Azure CosmosDB</p> <h2 id="the-todo-application">The Todo Application</h2> <p>I've blogged about <a href="https://www.michaelburch.net/deploying-an-app-on-openshift.html">the app we'll be deploying before</a>. This is a very basic Todo list app that can add, update, and delete todo items from a list. 
The frontend is written in <a href="https://svelte.dev/" rel="noopener" target="_blank">Svelte</a>, a Javascript framework known for it's speed and simplicity. I'll be reusing the same frontend code from that post, but I'll replace the API with Azure Functions written in Python.</p> <p>I wanted to make this a true serverless app across all tiers - frontend, app, and database so I'll be using Azure CosmosDB. CosmosDB is a document database that can support multiple APIs including SQL and Mongo. I'm using the SQL API because I prefer the query syntax. Cosmos also offers a generous free tier that will be more than enough for this small app.</p> <p>The code for both the web app and API is available in my <a href="https://github.com/michaelburch/todo" rel="noopener" target="_blank">GitHub todo repo</a>. I also have deployed a working example of this code so you can see the app in action here: <a href="https://todo.trailworks.io/" rel="noopener" target="_blank">https://todo.trailworks.io</a></p> <h2 id="todo-item-data-model">Todo Item Data Model</h2> <p>I like to start a project by defining what my data model will look like. In this case, I'm defining my Todo Item as follows:</p> <pre><code class="language-json">{ "tenantId": "d1119361-0ff2-4aa5-93d9-439f31afbbcf", "name": "get coffee", "isComplete": false, "id": "a68024ff-34d8-4bfb-a8c7-0b3cbb66efda" } </code></pre> <p>The 'tenantId' field is a GUID that uniquely identifies the user, so that each user has their own unique Todo list. In the Svelte frontend, I'm using a cookie to populate this field. The other fields defined here give the Todo item a name, a boolean value that tells us if this Todo has been completed, and a unique identifier for the item itself. There are many other fields you might want on a Todo list (category, priority, assignee, etc.) but this will do for a basic example.</p> <p>I'll define my TodoItem as a Python class and give it a helper function to deserialize it from JSON.</p> <pre><code class="language-python">import uuid class TodoItem(dict): def __init__(self, tenantId, name, isComplete, itemId): dict.__init__(self, tenantId=tenantId, name=name, isComplete=isComplete, id=itemId) def from_json(dct): complete = dct.get('isComplete', False) tenantId = dct.get('tenantId', str(uuid.uuid4())) itemId = dct.get('id', str(uuid.uuid4())) return TodoItem(tenantId, dct['name'], complete, itemId) </code></pre> <p>I decided to use Azure CosmosDB to store the Todo Items. Using a document database like this, I can easily store objects in JSON format. Cosmos has a free tier and with that, you'll get the first 400 RU/s (per month)and 5 GB of storage in the account free for the lifetime of the account. As with any "free" offering, there are plenty of caveats. It's worth reviewing the <a href="https://docs.microsoft.com/en-us/azure/cosmos-db/optimize-dev-test#azure-cosmos-db-free-tier" rel="noopener" target="_blank">documentation</a> to understand them all. I have been running this particular account with a couple of databases for a few months now and it's cost me nothing.</p> <h2 id="azure-function-setup">Azure Function Setup</h2> <p>Azure Functions is another service with a generous <a href="https://azure.microsoft.com/en-us/pricing/details/functions/" rel="noopener" target="_blank">free tier</a>. Again, my cost for running these functions over the last 3 months has been very low at $0.01. 
This is less than what electricity would cost me to run this on a server at my house.</p> <p>I've defined four functions for this project:</p> <ol> <li>get-todos</li> <li>create-todo</li> <li>update-todo</li> <li>delete-todo</li> </ol> <p>Each function uses an HTTP trigger, since it will be accessed by the frontend app over HTTP. I'm using route parameters to pass the tenantId and itemId values. The function.json file for the get-todos function looks like this:</p> <pre><code class="language-json">{ "scriptFile": "__init__.py", "bindings": [ { "authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "get" ], "route": "{tenantId}/todos/" }, { "type": "http", "direction": "out", "name": "$return" } ] } </code></pre> <h2 id="using-cosmosdb-with-python">Using CosmosDB with Python</h2> <p>Microsoft has some great <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-input?tabs=python#http-trigger-look-up-id-from-route-data-5" rel="noopener" target="_blank">examples</a> of how to make use of the CosmosDB bindings in multiple langauges, including Python. These are a good starting point and I found them very helpful when getting started. However, my original C# version of this app uses an Entity Framework code first database and I wanted similar functionality here. I found I could achieve something similar with the <a href="https://docs.microsoft.com/en-us/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient?view=azure-python" rel="noopener" target="_blank">Cosmos Client</a> and the following code:</p> <pre><code class="language-python">import logging import json import os import azure.functions as func from ..shared_code import TodoItem from azure.cosmos import exceptions, CosmosClient, PartitionKey def main(req: func.HttpRequest) -&gt; func.HttpResponse: logging.info('Listing todo items') headers = {"Content-Type": "application/json"} try: # Read client settings from environment database_name = os.environ['DB_NAME'] collection_name = os.environ['COLLECTION_NAME'] # Read tenantId from route param tenantId = req.route_params.get('tenantId') logging.info(f'tenant {tenantId} ') # Create an empty documentlist todos = func.DocumentList() # Create database and collection if not already existing client = CosmosClient.from_connection_string(os.environ['DB_CSTR']) client.create_database_if_not_exists(database_name,False,0) </code></pre> <p>Once deployed, I can configure the 'os.environ' settings like DB_NAME and COLLECTION_NAME in the function app:</p> <p><img src="https://www.michaelburch.net/images/todo-function-config.png" style="max-height:350px" alt="screenshot of function app config" title="screenshot of function app config"></p> <p>During development and testing, these same settings can be defined in a local.settings.json file using the following format:</p> <pre><code class="language-json">// local.settings.json { "IsEncrypted": false, "Values": { "DB_NAME": "freetierdb", "COLLECTION_NAME":"todos", "DB_CSTR": "&lt;azure-cosmos-connection-string&gt;" } } </code></pre> <p>Working with the CosmosDB SDK in Python is fun, even if the documentation is a bit sparse. There are some good general examples but finding specific documentation was frustrating at times. 
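</p> <p>To round out the get-todos function above, here is a rough sketch of how the remainder might query the container and return the results using the same Cosmos client. The query and variable names are my own illustration rather than the exact code in the repo:</p> <pre><code class="language-python">    # Continuing inside main(), after the database client setup shown above
    database = client.get_database_client(database_name)
    container = database.get_container_client(collection_name)

    # Return only this tenant's todo items
    items = container.query_items(
        query="SELECT * FROM c WHERE c.tenantId = @tenantId",
        parameters=[{"name": "@tenantId", "value": tenantId}],
        enable_cross_partition_query=True)

    return func.HttpResponse(json.dumps(list(items)), headers=headers)
</code></pre> <p>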
One thing I really appreciate about Python is how simple it is to accept a JSON payload and store it in the database:</p> <pre><code class="language-python"> # Create item using JSON from request body req_body = req.get_json() todoItem = TodoItem.from_json(req_body) todoItem["tenantId"] = f'{tenantId}' # Create item in database doc.set(func.Document.from_dict(todoItem)) </code></pre> <h2 id="deploying-the-frontend">Deploying the frontend</h2> <p>There are pletny of ways that I <strong>could</strong> deploy the frontend application, and in this case I decided to deploy it as a static website in Azure Storage (just like I do with <a href="https://www.michaelburch.net/blog/hosting-a-static-site-in-azure.html">this blog</a>). Static Web Apps is an Azure feature that is currently in Preview that looks really promising, but since I'm already familiar with static websites in Azure Storage that's how I've deployed it.</p> <p><img src="https://www.michaelburch.net/images/todo-storage.png" style="max-height:350px" alt="screenshot of storage config" title="screenshot of storage config"></p> <p>I like this option because I can quickly create a storage account in Azure, enable static web hosting, enable Azure CDN and define a custom domain for HTTPS and have a very robust web host that costs me next to nothing. The storage account cost and CDN costs over the last 3 months total $0.02. All together for this project, albeit with very little traffic, I spent less than 5 cents over the last 3 months. This isn't representative of actual production costs for a commercial project, but for simple proof of concept type work or hobby projects this is a great way to go.</p> <p>Azure Functions could also be a good low-cost option for teaching cloud based development to people who are new to technology and looking to get right into the code instead of spending a lot of time on setup and hardware configuration.</p> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. https://www.michaelburch.net/blog/time-travel-with-circuit-playground-express.html Time Travel with Circuit Playground Express Michael Burch 2020-05-28T00:00:00Z <p>My 6 year old daughter has been using her imagination to help make homeschool history more fun. She came up with the idea for a time travel helmet that would transport her to early Egypt, Greece and beyond. This past weekend we used a Circuit Playground Express, Microsoft MakeCode and a cardboard box to bring her time travel helmet design to life! This was a fun project for both of us and also a great introduction to coding for my kiddo.</p> <h2 id="the-design">The Design</h2> <div class="container-right container-row"> <?# CaptionImage Src="/images/helmet-design.jpg" AltText="a child's drawing of a time travel helmet" Style="container-left"?>Design<?#/CaptionImage ?> <?# CaptionImage Src="/images/helmet-complete.jpg" AltText="the completed helmet" Style="container-right"?>Finished Product<?#/CaptionImage ?> </div> My daughter loves to draw, so when I asked her what a time travel helmet would look like she was prepared. She came up with a design, drew it out and explained all of the components. Engineering doesn't always create what the design team has in mind but in this case I think we got pretty close. <p>We used an old Amazon box and some tape to assemble the helmet, with a length of cardboard run through the middle that serves as the top of the helmet and a shelf for the battery holder. 
A time travel helmet wouldn't be complete without some noisy, flashy, technical looking time circuits.</p> <p>Clearly the <em>Back to the Future</em> movies would have failed if not for the flux capacitor (&quot;which makes time travel possible&quot;). That's where the <a href="https://smile.amazon.com/Adafruit-Circuit-Playground-Express/dp/B0764NQ1WW/?ref=smi_se_dshb_sn_smi&amp;ein=22-3886094&amp;ref_=smi_chpf_redirect&amp;ref_=smi_ext_ch_22-3886094_cl" rel="noopener" target="_blank">Circuit Playground Express</a> comes in.</p> <h2 id="the-robot-computer">The &quot;Robot Computer&quot;</h2> <p>Since Flux Capacitor was taken, my daughter named our time travel electronics &quot;the robot computer&quot;. I've had a Circuit Playground Express in my laptop bag since I attended Microsoft Ignite back in November (it was handed out for free). The amount of functionality packed into this small device is really amazing, check out the link above for all the details.</p> <p>Here is the small subset of features used in our time travel helmet:</p> <div class="container container-row"> <?# CaptionImage Src="/images/cpx.jpg" Style="container-left"?>Circuit Playground Express<?#/CaptionImage ?> <ul> <li><p>10 x mini NeoPixels, (for colorful indicator lights)</p> </li> <li><p>1 x Motion sensor (LIS3DH triple-axis accelerometer with tap detection, free-fall detection, we used it to stop the time travel process with a shake of the head)</p> </li> <li><p>1 x Mini speaker with class D amplifier (to make the time travel sound, of course)</p> </li> <li><p>7 pads can act as capacitive touch inputs (we're using pad A3 to turn the time circuits on)</p> </li> <li><p>2 MB of SPI Flash storage (we copied code to the device and stored it here)</p> </li> <li><p>MicroUSB port for programming and debugging (used this to transfer code)</p> </li> </ul> </div> <p></p> <h2 id="supplies">Supplies</h2> <div class="container container-row"> <?# CaptionImage Src="/images/cpx-battery.jpg" Style="container-right"?>Battery holder<?#/CaptionImage ?> Aside from the cardboard box and some tape, here are the supplies used in the project: <p></p> <ul> <li><p><a href="https://smile.amazon.com/Adafruit-Circuit-Playground-Express/dp/B0764NQ1WW/?ref=smi_se_dshb_sn_smi&amp;ein=22-3886094&amp;ref_=smi_chpf_redirect&amp;ref_=smi_ext_ch_22-3886094_cl" rel="noopener" target="_blank">Circuit Playground Express</a></p> </li> <li><p><a href="https://smile.amazon.com/Low-Voltage-Power-Solutions-Decorations/dp/B07M7Q4GXN/?ref=smi_se_dshb_sn_smi&amp;ein=22-3886094&amp;ref_=smi_chpf_redirect&amp;ref_=smi_ext_ch_22-3886094_cl" rel="noopener" target="_blank">3xAAA Battery Holder</a></p> </li> <li><p><a href="https://smile.amazon.com/Energizer-Rechargeable-Batteries-Pre-Charged-Recharge/dp/B000BESLQK/?ref=smi_se_dshb_sn_smi&amp;ein=22-3886094&amp;ref_=smi_chpf_redirect&amp;ref_=smi_ext_ch_22-3886094_cl" rel="noopener" target="_blank">Rechargeable AAA Batteries</a></p> </li> </ul> <p>The battery holder we used has both the JST type connector for connecting to the Circuit Playground and an on/off switch which is very useful if you accidentally play an annoying sound on an infinite loop. The Circuit Playground itself is attached to the helmet with some spare CAT6 (my kiddo was happy to have a choice of colors for the wire!)</p> </div> <p></p> <blockquote> <p>These aren't affiliate links, and I don't make anything off of them. 
They ARE Amazon Smile links and if you use them Amazon donates a small amount to charities like <a href="https://sheissafe.org/" rel="noopener" target="_blank">She is Safe</a>, an organization that prevents, rescues and restores women and girls from abuse and exploitation.</p> </blockquote> <h2 id="block-based-coding-with-microsoft-makecode">Block Based Coding with Microsoft MakeCode</h2> <p><a href="https://makecode.com/" rel="noopener" target="_blank">Microsoft MakeCode</a> is a (free!) web based code editor. You can use it to write Javascript or Python code for devices from Lego, Cue, Adafruit and others. You can also build using a block based code editor, similar to <a href="https://scratch.mit.edu/" rel="noopener" target="_blank">Scratch</a>, which is great for anyone just learning to code or those of us with little patience.</p> <p>We opened the MakeCode site in a browser and started a new project for the Circuit Playground Express. MakeCode shows a picture of the device we are coding for - but it's not just a picture, it's a device simulator! We can simulate pushing buttons and shaking the device as we write code and instantly see how it will work on the device.</p> <div class="container container-row"> <?# CaptionImage Src="/images/makecode-mavis.jpg" AltText="a child writing code for the first time" Style="container-left" ?>First time coder!<?#/CaptionImage ?> <?# CaptionImage Src="/images/makecode-blocks.png" AltText="screenshot of the Microsoft MakeCode interface" Style="container-left" ?>MakeCode Interface<?#/CaptionImage ?> </div> <p>We added code blocks to do the following:</p> <ul> <li>Play a sound when powered on</li> <li>Start &quot;time traveling&quot; when a button is pushed (she picked the A3 button)</li> <li>Play a sound and strobe the lights until time traveling is stopped (she found a cool animation called &quot;comet&quot; for this)</li> <li>Stop time travel when the helmet is shaken</li> </ul> <p>MakeCode made this project so easy and very approachable for both of us! My daughter caught on very quickly and it was easy and intuitive for me to answer any questions she had. Actually I think I asked more questions than she did - questions like &quot;what color lights do you want?&quot;.</p> <p>Once we had it working the way we wanted, we tried it out on the device simulator and then clicked download. All we had to do was plug the Circuit Playground into the computer with the included USB cable and drag and drop our code file over to the device. At this point it instantly rebooted and started running the code we created!</p> <p>Another great feature of MakeCode is it allows you to see the code you created with blocks as Javascript. Here's the code we created:</p> <pre><code>input.touchA3.onEvent(ButtonEvent.Click, function () { traveling = 1 while (traveling) { light.showAnimation(light.cometAnimation, 100) music.playMelody(&quot;C5 B A G F E D C &quot;, 900) } }) input.onGesture(Gesture.Shake, function () { traveling = 0 light.clear() }) let traveling = 0 music.playMelody(&quot;C5 B A G F E D C &quot;, 400) traveling = 0 </code></pre> <h2 id="an-excellent-adventure">An Excellent Adventure</h2> <p>Once the code was completed, we attached the Circuit Playground and battery holder to the helmet, switched it on and it was ready to go. I'm really impressed with the Circuit Playground Express. It has a wealth of features, is totally approachable for beginners, and can also be programmed with Circuit Python and Javascript for more advanced use. 
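</p> <p>For anyone curious what that looks like, here is a rough (untested) CircuitPython sketch of the same behavior using the adafruit_circuitplayground library - just to show the idea, not exactly what MakeCode generates:</p> <pre><code class="language-python">import time
from adafruit_circuitplayground import cp

traveling = False
cp.play_tone(523, 0.4)  # startup beep (roughly C5)

while True:
    if cp.touch_A3:                   # touching pad A3 starts time travel
        traveling = True
    if traveling:
        cp.pixels.fill((0, 0, 255))   # flash the NeoPixels
        cp.play_tone(523, 0.2)
        time.sleep(0.1)
        cp.pixels.fill((0, 0, 0))
        time.sleep(0.1)
    if cp.shake(shake_threshold=20):  # shaking the helmet stops it
        traveling = False
        cp.pixels.fill((0, 0, 0))
</code></pre> <p>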
My daughter will surely come up with some new feature requests for her time travel helmet soon - and I'm looking forward to it! In the mean time, her homeschool history lessons should be much more entertaining.</p> <div class="container "> <?# CaptionImage Src="/images/mavis-helmet.jpg" AltText="a happy child wearing the time travel helmet described above" Style="container-left"?>It's time traveler time!<?#/CaptionImage ?> </div> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. https://www.michaelburch.net/blog/deploying-an-app-on-openshift.html Deploying an App on OpenShift Michael Burch 2020-05-12T00:00:00Z <p>Deploying an app to OpenShift is the easiest method I've used to get application code running in containers. I've spent hours writing Docker files and building YAML deployments for Kubernetes, and even more time troubleshooting ingress resources and overlooked dependencies. With OpenShift, I can check my code into GitHub and then point and click (or script) my way to a working deployment in just minutes. No need to write a dockerfile, no manual YAML writing, no developing interim build images or working with bespoke DevOps tooling. Just write code and then deploy it with one simple interface. This post covers deploying a classic three tier application (web app, API, database) using a Todo list application I developed with Svelte and dotnet core and a Microsoft SQL database running in a Linux container.</p> <h2 id="the-todo-application">The Todo Application</h2> <p>First, a little about the app we'll be deploying. This is a very basic Todo list app that can add, update, and delete todo items from a list. Since this post is mostly focused on the simplicity of deploying an app on OpenShift, I tried to keep the application code as minimal and readable as possible. The frontend is written in <a href="https://svelte.dev/" rel="noopener" target="_blank">Svelte</a>, a Javascript framework known for it's speed and simplicity. This is my first attempt at a Svelte app, so be kind.</p> <p>The Todo API is written in C#, and uses the excellent <a href="https://github.com/featherhttp/framework" rel="noopener" target="_blank">featherhttp framework</a> from David Fowler. I used one of David's <a href="https://github.com/davidfowl/Todos" rel="noopener" target="_blank">todo examples</a> and added EF support for SQL server, some minimal CORS settings and a few additional methods.</p> <p>I've chosen Microsoft SQL for the database backend just because I wanted to try SQL Server in a Linux container. An enterprise environment that has OpenShift probably also has MS SQL deployed somewhere and could benefit from migrating to containers so it seems like a good fit for this exercise. Ultimately the API is using Entity Framework Core, so it could easily connect to any other RDBMS or NoSQL provider and aside from just being curious about SQL on Linux I would probably have chosen MongoDB.</p> <p>The code for both the web app and API is available in my <a href="https://github.com/michaelburch/todo" rel="noopener" target="_blank">GitHub todo repo</a>.</p> <h2 id="starting-a-project">Starting a project</h2> <p>I'll begin with a new project in OpenShift. As I've mentioned before, OpenShift is an Enterprise Distribution of Kubernetes much like RHEL is an Enterprise Distribution of Linux. OpenShift introduces some custom resources on top of vanilla Kubernetes, and the first one is a Kubernetes namespace with additional annotations. 
<p>I created a project named todo-demo in the UI by clicking the drop-down next to '<em>Projects</em>', typing the name and clicking '<em>Create</em>'.</p> <p><img src="https://www.michaelburch.net/images/openshift-create-proj.png" style="max-height:350px" alt="screenshot of creating new project in OpenShift" title="screenshot of creating new project in OpenShift"></p> <blockquote> <p>I'll also note the command-line equivalent of each step like so:</p> </blockquote> <pre><code class="language-bash">oc new-project todo-demo </code></pre> <h2 id="deploying-the-database">Deploying the database</h2> <p>The first component I want to deploy in my new project is the database. I've chosen Microsoft SQL Server, so I will create a deployment from the MSSQL container image, using the latest version of SQL 2019. This image requires two environment variables, "ACCEPT_EULA" (for accepting the SQL license terms) and "SA_PASSWORD" (for setting the password for the SQL sa login). I'll use the same application name for all components, "<em>todo</em>".</p> <blockquote> <p>I named this component "database". This is important to remember as it is also the DNS name that SQL will be known by within the cluster, so later when connecting the API to this database I will use this name.</p> </blockquote> <table> <tbody><tr> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-deploy-image.png"?-->1. Select image<!--?#/Figure ?--> </td> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-deploy-image-2.png"?-->2. Provide names and click create<!--?#/Figure ?--> </td> </tr> </tbody></table> <blockquote> <p>Note that I'm using a public Docker image from the Microsoft Container Registry, mcr.microsoft.com/mssql/server:2019-latest. You can browse available images and find required variables and usage instructions on <a href="https://hub.docker.com/_/microsoft-mssql-server" rel="noopener" target="_blank">Docker Hub</a>.</p> </blockquote> <p>When creating the component in the web interface, it's deployed as soon as you click <em>Create</em>. The two required environment variables can then be provided by editing the database deployment and setting the values as follows:</p> <table> <tbody><tr> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-edit-deploy.png"?-->3. Right-click, select Edit Deployment<!--?#/Figure ?--> </td> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-edit-deploy-2.png"?-->4. Select Environment, set values, click save<!--?#/Figure ?--> </td> </tr> </tbody></table> <p>Conveniently, this can be accomplished in a single step by using the command-line:</p> <pre><code class="language-bash">oc new-app --docker-image=mcr.microsoft.com/mssql/server:2019-latest --name=database -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=BHP2gE5#+" -l app.kubernetes.io/part-of=todo </code></pre> <p>Now that the database is up and running, we can move on to deploying the API. That was BY FAR the easiest install of Microsoft SQL Server I've ever done.</p> <h2 id="deploying-the-api">Deploying the API</h2> <p>This is where it really gets exciting. I haven't created a dockerfile for the API - I've really just barely finished the code and pushed it to GitHub. Now, I'll tell OpenShift to deploy that code next to my SQL database.</p> 
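<p>Before wiring the API up to it, it's worth a quick check that SQL Server actually finished starting. This is just a sketch - it assumes oc new-app applied the default 'app=database' label from the component name above, and the grep string is the usual SQL Server startup message:</p> <pre><code class="language-bash"># wait for the pod to show Running
oc get pods -l app=database
# then look for the startup message in its log (pod name comes from the previous command)
oc logs $(oc get pods -l app=database -o name | head -1) | grep -i "ready for client connections"
</code></pre> 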
<p>First, I'll click '<em>Add</em>' and select the '<em>From Git</em>' option. On the following screen I'll provide my Git repo URL, <a href="https://github.com/michaelburch/todo" rel="noopener" target="_blank">https://github.com/michaelburch/todo</a>, and since I have both the web app and API in a single repo I will use advanced options to specify a path so that only the API is built.</p> <table> <tbody><tr> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-add-git.png"?-->1. Add from Git<!--?#/Figure ?--> </td> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-config-repo.png"?-->2. Provide repository details<!--?#/Figure ?--> </td> </tr> </tbody></table> <p>Next, I'll select a builder image, name my component and click create. OpenShift will then grab my application code, build a Docker image, push it to an internal (to OpenShift) container registry and deploy the container for me! I want the API to connect to the SQL server I just deployed, so I will provide the connection string in an environment variable:</p> <pre><code class="language-bash">DB_CSTR="Server=database;Database=TodoItems;User Id=sa;Password=BHP2gE5#+;" </code></pre> <p>The API itself is built on dotnet core, so I'll select the latest dotnet core builder image.</p> <table> <tbody><tr> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-select-builder.png"?-->3. Select builder image<!--?#/Figure ?--> </td> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-name-api.png"?-->4. Name and create<!--?#/Figure ?--> </td> </tr> </tbody></table> <table> <tbody><tr> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-edit-api.png"?-->5. Right-click, select Edit DeploymentConfig<!--?#/Figure ?--> </td> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-edit-api-2.png"?-->6. Select Environment, set values, click save<!--?#/Figure ?--> </td> </tr> </tbody></table> <p>Again, the command-line saves a few steps. This time, though, the API needs to be exposed (meaning accessible outside of the OpenShift cluster) so that when I load the app in my browser it can communicate with the API. The API is exposed with just one extra line:</p> <pre><code class="language-bash">oc new-app https://github.com/michaelburch/todo --context-dir=/api --name=api -e "DB_CSTR=Server=database;Database=TodoItems;User Id=sa;Password=BHP2gE5#+;" -l app.kubernetes.io/part-of=todo
oc expose service/api
</code></pre> <p>I'll need to know the URL for the API before I deploy the app. I can see that through the topology view by clicking on my API component and looking at the route value:</p> <p><img src="https://www.michaelburch.net/images/openshift-show-route.png" style="max-height:350px" alt="screenshot showing OpenShift route" title="screenshot showing OpenShift route"></p> <p>or via this command:</p> <pre><code class="language-bash">oc get route/api
NAME   HOST/PORT                        PATH   SERVICES   PORT       TERMINATION   WILDCARD
api    api-todo-demo.apps-crc.testing          api        8080-tcp                 None
</code></pre> <blockquote> <p>This command shows the hostname I can use to access the API. The full URL, defined by routes in my API code, will be <a href="http://api-todo-demo.apps-crc.testing/api">http://api-todo-demo.apps-crc.testing/api</a>. The API is using an Entity Framework 'code-first' database, so the database will be created on the first request if it doesn't already exist.</p> </blockquote> <h2 id="deploying-the-app">Deploying the app</h2> <p>Deploying the web app follows the same pattern as the API deployment. 
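The command-line equivalent looks roughly like this. Treat it as a sketch: it assumes the Node builder is selected with the 'nodejs~' shorthand and that the build-time variables described below are passed with --build-env (the HOST and PORT values are just examples):</p> <pre><code class="language-bash">oc new-app nodejs~https://github.com/michaelburch/todo --context-dir=/app --name=app \
  --build-env="API_URL=http://api-todo-demo.apps-crc.testing/api" \
  --build-env="HOST=0.0.0.0" --build-env="PORT=8080" \
  -l app.kubernetes.io/part-of=todo
oc expose service/app
</code></pre> 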
<p>In the web console, I used the same repo, specifying '<em>/app</em>' for the context folder this time and selected the NodeJS Builder image.</p> <p>Since this is a JavaScript app that runs in the browser, there is no 'environment' to pull variables from. This means that any configuration variables need to be passed to the app at build time. It's easy to provide these values when creating the component and specifying the build image:</p> <table> <tbody><tr> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-edit-buildconfig.png"?-->1. Click build configuration under advanced options<!--?#/Figure ?--> </td> <td v-align="middle" align="center"> <!--?# Figure Src="/images/openshift-edit-buildconfig-2.png"?-->2. Enter values, click create<!--?#/Figure ?--> </td> </tr> </tbody></table> <blockquote> <p>I've set the API_URL to the route URL created in the previous step. I've also set some common Node build variables, HOST and PORT, to configure the app to listen on a default port and all IP addresses.</p> </blockquote> <p>Clicking on the arrow next to the web app in the topology view will open the app in a new tab.</p> <h2 id="too-easy">Too easy!</h2> <p>Despite having a less-than-awesome experience <a href="https://www.michaelburch.net/blog/Getting-Started-with-Openshift.html" rel="noopener" target="_blank">setting up my OpenShift development environment</a>, I think this is where the product really shines. I was able to go from nothing to a fully deployed three-tier application in under 10 minutes, which is even more impressive considering that one tier is Microsoft SQL Server.</p> <p>I'm always suspicious when I hear that something complex (like Kubernetes) can be made simpler by adding another complex thing to it (OpenShift). In this case I think it's true. I originally set out to answer the question "Why bother with OpenShift?" and with this little test I think the answer is "because it shortens the time between writing code and having it up and running". I didn't <strong>want</strong> to like OpenShift, but I really enjoyed this and hope I'll get a chance to use it on a real project soon.</p> <h2 id="just-for-fun">Just for fun</h2> <p>I like to verify that everything is really working as expected, so I exposed my SQL deployment with a NodePort service, fired up Azure Data Studio (think cross-platform SSMS) and ran a simple query a few times while adding and updating items:</p> <p><img src="https://www.michaelburch.net/images/todo-demo.gif" alt="screenshot showing Todo app in action" title="screenshot showing Todo app in action"></p> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. https://www.michaelburch.net/blog/getting-started-with-openshift.html Getting Started with OpenShift 4.4 Michael Burch 2020-05-05T00:00:00Z <p>"You down with OCP? Yeah you know me!" OCP in this case is the <strong>O</strong>penShift <strong>C</strong>ontainer <strong>P</strong>latform. I think it's best described as an Enterprise Distribution of Kubernetes, much like RHEL is an Enterprise Distribution of Linux. If you're like me and have been working with Kubernetes for a while, you may be wondering why you would need an Enterprise Distribution like OpenShift. I decided to answer that question by trying out OpenShift for the first time this week, setting up a single-node development cluster on my laptop - read along and share your thoughts in the comments!</p> <h2 id="why-bother">Why bother?</h2> <p>Full disclosure - I work for IBM (who now owns Red Hat). 
I have no involvement with OpenShift in my daily work, and any opinions expressed here are my own and do not represent my employer in any way. I will say that while I have known <strong>of</strong> OpenShift for as many years as I have been working with Kubernetes (K8s), I have, to date, deliberately avoided it. I've always thought of it as a simplified bundle of K8s and some proprietary stuff and since I already knew K8s, why bother? I've certainly heard more about it recently, and I am finally curious enough to take a look.</p> <h2 id="an-openshift-by-another-name">An OpenShift by another name</h2> <p>The first thing I noticed is the confusing array of names - OpenShift, OCP, CRC, OpenShift Origin, OpenShift Dedicated, OpenShift Kubernetes Engine, etc. This, coupled with the dizzying number of (mostly deprecated) options for running a local instance, makes it really difficult to get started. Seriously, do a quick search online and you'll find plenty of references to old versions and complicated setups involving VMware Workstation or VirtualBox. This is just one of those areas of tech that is growing and changing rapidly, and unfortunately that makes it difficult to know where to begin. By comparison, in Docker Desktop you only have to select 'Enable Kubernetes' from the menu and you're up and running with a local K8s cluster.</p> <p>For this post, I've decided to use the following:</p> <ol> <li>OpenShift Container Platform (OCP) version 4.4</li> <li>A local, single-node cluster using Red Hat CodeReady Containers (CRC) version 1.10</li> <li>Windows 10 with Hyper-V</li> </ol> <h2 id="prereqs">Prereqs</h2> <p>Before getting started, you should have Hyper-V enabled on Windows 10. If you don't, go ahead and enable it and reboot. I'll wait.</p> <p>From PowerShell, run:</p> <pre><code class="language-powershell">Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All </code></pre> <p>Next, you'll need to download a copy of Red Hat CodeReady Containers. You can download from a mirror link on the <a href="https://github.com/code-ready/crc/releases" rel="noopener" target="_blank">GitHub releases page</a>, <strong>however</strong> you will still need a free Red Hat account to get everything working. This is the sort of thing I usually avoid.</p> <blockquote> <p>In fact, I spent years <a href="https://github.com/docker/docker.github.io/issues/6910#issuecomment-403502065" rel="noopener" target="_blank">avoiding the sign-up form</a> to download Docker Desktop because I dislike registration-protected downloads so much. In this case, a Red Hat account is used to generate a set of Docker image registry credentials. I suppose you can't really call yourself an Enterprise product if your registry doesn't require authentication...</p> </blockquote> <p>I suggest starting <a href="https://cloud.redhat.com/openshift/install/crc/installer-provisioned" rel="noopener" target="_blank">here</a> to sign up (or sign in to an existing account) and access the downloads. On this page, download two things:</p> <ol> <li>The correct archive for your OS. It's approximately 2GB, so it may take some time.</li> <li>The pull secret. Download this to a file. 
It's the Docker registry credentials you'll need to get started.</li> </ol> <p><img src="https://www.michaelburch.net/images/openshift-downloads.png" alt="screenshot of OpenShift download page" title="screenshot of OpenShift download page"></p> <h2 id="setup-and-start">Setup and start</h2> <p>What you have now is a 2GB self-extracting exe that will create a Hyper-V virtual machine on your computer with OCP installed. Start by unzipping the downloaded archive. Then open a command prompt (<strong>NOT</strong> as administrator) and run:</p> <pre><code>crc setup </code></pre> <p>You should see output like the following: <img src="https://www.michaelburch.net/images/crc-setup.png" alt="screenshot of CRC setup output" title="screenshot of CRC setup output"></p> <p>Next, you need to configure the location of your pull secret file. I downloaded mine to my default downloads directory, so I ran:</p> <pre><code>crc config set pull-secret-file c:\Users\MichaelBurch\Downloads\pull-secret </code></pre> <p>followed by the start command.</p> <blockquote> <p>I have specified a nameserver for the VM to use for external DNS. This seems to be a known issue, and I have had trouble without it, so I recommend specifying one of your choice.</p> </blockquote> <pre><code>crc start -n 8.8.8.8 </code></pre> <p>Unfortunately, after about 5 minutes I got this error:</p> <pre><code>INFO Verifying validity of the cluster certificates ...
INFO Adding 8.8.8.8 as nameserver to the instance ...
INFO Will run as admin: add dns server address to interface vEthernet (Default Switch)
INFO Check internal and public DNS query ...
WARN Failed public DNS query from the cluster: ssh command error:
command : host -R 3 quay.io
err : Process exited with status 1
output : quay.io has address 54.152.57.199
quay.io has address 34.225.79.222
INFO Check DNS query from host ...
ERRO Failed to query DNS from host: lookup api.crc.testing: no such host
</code></pre> <h2 id="troubleshooting">Troubleshooting</h2> <p>It turns out that OpenShift uses two domains, .crc.testing and .apps-crc.testing. The first is for the OpenShift API server and the second is a convenient name for exposed applications running in the cluster. OpenShift calls these "Routes", which are similar to Ingress resources in Kubernetes. This is a nice feature, but it requires that the host computer (my laptop in this case) can resolve names in those domains. The OpenShift VM is the DNS server for the domains, and I can see that this DNS server was added to an interface on my laptop. Unfortunately, it has a much higher interface metric and will never actually be consulted for name resolution.</p> <p>Windows exposes the Name Resolution Policy Table (NRPT) for just this reason. What I decided to do is add an entry into the NRPT for each of these domains, telling my laptop to resolve names in these domains using the DNS server running in the OpenShift VM.</p> <pre><code class="language-powershell">#Remove any NRPT rules for testing domains
Get-DnsClientNrptRule | ? {$_.namespace -like '*.testing'} | Remove-DnsClientNrptRule -Force
#Add rule, using first available IP of crc vm
Add-DnsClientNrptRule -Namespace ".crc.testing" -NameServers (get-vm -Name crc).NetworkAdapters[0].IPAddresses[0]
#Add rule, using first available IP of crc vm
Add-DnsClientNrptRule -Namespace ".apps-crc.testing" -NameServers (get-vm -Name crc).NetworkAdapters[0].IPAddresses[0]
</code></pre> 
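<p>With those rules in place, a quick check from the host shows whether the cluster names now resolve. This is just a sanity check - ping replies may well be blocked; what matters is that the name resolves to the CRC VM's address:</p> <pre><code>ping api.crc.testing </code></pre> 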
<p>Another problem is that when the 'crc start' command failed, it exited without cleaning up. Now I have a broken OpenShift cluster and need to start fresh. Thankfully, they've made this very easy:</p> <pre><code>C:\Users\MichaelBurch\Downloads\crc-windows-amd64\crc-windows-1.10.0-amd64&gt;crc delete
Do you want to delete the OpenShift cluster? [y/N]: y
(crc) Waiting for host to stop...
Deleted the OpenShift cluster
</code></pre> <blockquote> <p>I did report this problem; hopefully the OpenShift/CRC team will provide an official fix rather than my quick workaround. You can follow the status of the issue here: <a href="https://github.com/code-ready/crc/issues/1193" rel="noopener" target="_blank">https://github.com/code-ready/crc/issues/1193</a></p> </blockquote> <h2 id="if-at-first-you-dont-succeed">If at first you don't succeed...</h2> <p>Now I can start it back up. Here's what a good result looks like:</p> <pre><code>C:\Users\MichaelBurch\Downloads\crc-windows-amd64\crc-windows-1.10.0-amd64&gt;crc start -n 8.8.8.8
...
INFO Adding 8.8.8.8 as nameserver to the instance ...
INFO Will run as admin: add dns server address to interface vEthernet (Default Switch)
INFO Check internal and public DNS query ...
WARN Failed public DNS query from the cluster: ssh command error:
command : host -R 3 quay.io
err : Process exited with status 1
output : quay.io has address 23.20.49.22
INFO Check DNS query from host ...
INFO Generating new SSH key
INFO Copying kubeconfig file to instance dir ...
INFO Starting OpenShift kubelet service
INFO Configuring cluster for first start
INFO Adding user's pull secret ...
INFO Updating cluster ID ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, run 'oc login -u kubeadmin -p jh3kL-Te6cD-BKDG7-3rvSu https://api.crc.testing:6443'
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
Started the OpenShift cluster
</code></pre> <p>And sure enough - 'crc console' opens my default browser (Firefox) and the provided kubeadmin credentials allow me to log in.</p> <p><img src="https://www.michaelburch.net/images/openshift-dashboard.png" alt="screenshot of OpenShift Dashboard" title="screenshot of OpenShift Dashboard"></p> <h2 id="wrapping-up">Wrapping up</h2> <p>I wouldn't consider this a smooth, seamless experience. It feels like this particular deployment option is an afterthought (and possibly not even tested on Windows). Another compelling option for getting started with OpenShift development is the <a href="https://www.openshift.com/products/online/" rel="noopener" target="_blank">free starter plan</a>, which I may look into next. For some reason, I always default to the local installation option when trying out new development tools, and mostly it works well. My laptop setup isn't <strong>that</strong> complex, so I doubt that I'm some odd corner case. I suspect that the setup experience is better on a laptop running RHEL, or maybe even macOS, but I'm not going to spend any more time testing that.</p> <p>I'll continue this experiment in a future post - hopefully I'll get an actual app deployed! In the meantime, post in the comments here if you've had any experience with a local OpenShift environment.</p> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. 
https://www.michaelburch.net/blog/multi-language-multi-cloud-deployments-with-pulumi.html Multi-language, multi-cloud deployments with Pulumi Michael Burch 2020-04-07T00:00:00Z <p>Pulumi is an SDK that can be used to describe an entire application stack using modern programming languages and deploy that stack to multiple cloud providers. This is an exciting new approach to infrastructure as code that can help development teams collaborate more effectively. If you've ever used a product like Terraform, Packer or Cloud Foundry you may appreciate being able to use the same language as your application code to describe and deploy the infrastructure. In today's post, I'll detail an example of deploying common components for a web application using Pulumi, C# and TypeScript.</p> <h2 id="modern-infrastructure-as-code">Modern Infrastructure as Code</h2> <p>Infrastructure as Code (IaC) is widely regarded as essential to any DevOps practice. It brings the promise of repeatable, predictable deployments for new projects and streamlined scaling for existing apps. Seven years ago, I can remember racking and stacking a server, manually installing software on it, applying OS and security configurations and then spending weeks going back and forth with the application team to get an app deployed and functioning. When it was all said and done, some steps were automated, fewer were accurately documented, and pretty much all were repeated next time. Times have changed.</p> <p>For this post, I'll be deploying a set of servers in Azure that scales up and down based on load, a layer-7 load balancer that distributes traffic to them, and separate public and private subnets. The complete source for the stack is available on <a href="https://github.com/michaelburch/pulumi-examples" rel="noopener" target="_blank">GitHub</a>. I will also automate the installation of some basic OS components to get the web servers up and running. Here's a crude diagram of the environment to be created:</p> <p><img src="https://www.michaelburch.net/images/vmss-appgw.jpeg" alt="diagram of VMSS and AppGateway" title="diagram of Azure VMSS and AppGateway"></p> <p>This is a common setup that you might see for an ASP.NET application. Even the IaC approach itself is fairly common - in fact, I've deployed a similar stack using Terraform with relative ease. Terraform code for this type of stack would look something like this:</p> <pre><code># Create a virtual network within the resource group
resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  resource_group_name = azurerm_resource_group.example.name
  location            = var.location
  address_space       = [var.addressSpace]
}
</code></pre> <p>This is simple enough, but it does require an understanding of <a href="https://www.terraform.io/docs/configuration/syntax.html" rel="noopener" target="_blank">Terraform's domain-specific language</a>. Not a problem for a DevOps team, but as we start to see DevOps staff become integrated into other teams there are some serious productivity gains to be had from everyone working in the same language. Just think of all the extra PR reviewers that could be available to you! 
That's where <a href="https://www.pulumi.com/" rel="noopener" target="_blank">Pulumi</a> comes in - as an SDK, it can be used from a number of languages:</p> <pre><table> <tbody><tr> <td style="text-align: center;">C#</td><td style="text-align: center">TypeScript</td> </tr> <tr> <td><pre><code class="hljs csharp">// Create Networking components
var vnet = new VirtualNetwork($"{stackId}-vnet",
    new VirtualNetworkArgs
    {
        ResourceGroupName = resourceGroup.Name,
        AddressSpaces = addressSpace
    });
</code></pre></td> <td><pre><code class="hljs ts">// Create Networking components
const network = new azure.network.VirtualNetwork(`${stackId}-vnet`, {
    resourceGroupName,
    addressSpaces: addressSpace
});
</code></pre> </td> </tr> </tbody></table> </pre> <h3 id="starting-a-pulumi-project">Starting a Pulumi project</h3> <p>Installing Pulumi is easy - I'm using Ubuntu on Windows with WSL, so I just open a terminal and run:</p> <pre><code class="language-bash">curl -fsSL https://get.pulumi.com | sh </code></pre> <p>I'll be deploying to Azure, and I'm not (yet) using any CI/CD tools, so I followed the excellent <a href="https://www.pulumi.com/docs/intro/cloud-providers/azure/setup/" rel="noopener" target="_blank">Azure setup instructions</a> on the Pulumi website to configure my project for Service Principal Authentication.</p> <p>I'll start a new C# project with 'pulumi new azure-csharp' and give it some basic details: <img src="https://www.michaelburch.net/images/pulumi-new-az-cs-1.png" alt="screenshot of 'pulumi new' command output" title="screenshot 'pulumi new' command output"></p> <p>Now that my project is created and configured to access my Azure subscription, I can start defining resources.</p> <h3 id="defining-config-values">Defining config values</h3> <p>The above Azure setup instructions also provide a great introduction to providing configuration values to the project. This is similar to what you might do with Terraform variables - provide a way to reuse this code as a template for future deployments by supplying different values at runtime. The example above sets these values for the specific Azure environment:</p> <pre><code>pulumi config set azure:clientId &lt;clientID&gt; &amp;&amp;
pulumi config set azure:clientSecret &lt;clientSecret&gt; --secret &amp;&amp;
pulumi config set azure:tenantId &lt;tenantID&gt; &amp;&amp;
pulumi config set azure:subscriptionId &lt;subscriptionId&gt;
</code></pre> <p>I want to add more configuration settings for things like the region, address ranges, DNS name, and credentials that I will use in my deployment:</p> <pre><code>pulumi config set region CentralUS
pulumi config set adminUser michael
... 
</code></pre> <h3 id="defining-resources">Defining resources</h3> <p>Rather than post all of the code for the project here, I'll highlight some of the more important steps and encourage you to review the complete stack in my <a href="https://github.com/michaelburch/pulumi-examples" rel="noopener" target="_blank">GitHub repo</a>, and also review the much more complete examples provided by the Pulumi team.</p> <p>First, I'll configure my Virtual Network, subnets, and app Gateway referencing config settings that I created earlier:</p> <pre><code>// Create Networking components var vnet = new VirtualNetwork($"{stackId}-vnet", new VirtualNetworkArgs { ResourceGroupName = resourceGroup.Name, AddressSpaces = addressSpace }); // Create a private subnet for the VMSS var privateSubnet = new Subnet($"{stackId}-privateSubnet", new SubnetArgs { ResourceGroupName = resourceGroup.Name, AddressPrefix = privateSubnetPrefix, VirtualNetworkName = vnet.Name }); // Create a public subnet for the Application Gateway var publicSubnet = new Subnet($"{stackId}-publicSubnet", new SubnetArgs { ResourceGroupName = resourceGroup.Name, AddressPrefix = publicSubnetPrefix, VirtualNetworkName = vnet.Name }); // Create a public IP and App Gateway var publicIp = new PublicIp($"{stackId}-pip", new PublicIpArgs { ResourceGroupName = resourceGroup.Name, Sku = "Basic", AllocationMethod = "Dynamic", DomainNameLabel = dnsPrefix }, new CustomResourceOptions { DeleteBeforeReplace = true }); var appGw = new ApplicationGateway($"{stackId}-appgw", new ApplicationGatewayArgs { ResourceGroupName = resourceGroup.Name, Sku = new ApplicationGatewaySkuArgs { Tier = "Standard", Name = "Standard_Small", Capacity = 1 }...} </code></pre> <p>Next, I'll create the VM Scale Set for my web servers. I'm using the Azure VM CustomScript Extension to run a very short command to install IIS. In a typical environment, this would be a much larger script that would be stored elsewhere and downloaded by the extension before running.</p> <pre><code>// Enable VM agent and script extension UpgradePolicyMode = "Automatic", OsProfileWindowsConfig = new ScaleSetOsProfileWindowsConfigArgs { ProvisionVmAgent = true }, Extensions = new InputList&lt;ScaleSetExtensionsArgs&gt; { new ScaleSetExtensionsArgs { Publisher = "Microsoft.Compute", Name = "IIS-Script-Extension", Type = "CustomScriptExtension", TypeHandlerVersion = "1.4", // Settings is a JSON string // This command uses powershell to install windows webserver features Settings = "{\"commandToExecute\":\"powershell Add-WindowsFeature Web-Server,Web-Asp-Net45,NET-Framework-Features\"}" } } </code></pre> <h3 id="deploying-and-validating">Deploying and Validating</h3> <p>Finally, I'll deploy this stack and make sure that everything worked with the following command:</p> <pre><code class="language-bash">pulumi up </code></pre> <p>Pulumi will evaluate the project, determine which actions will be taken and then prompt for approval. When the deployment is complete, I get a nice summary screen with confirmation:</p> <p><img src="https://www.michaelburch.net/images/pulumi-up-az-cs-complete.png" alt="screenshot 'pulumi up' command output" title="screenshot 'pulumi up' command output"></p> <p>From this output I can see the URL given to my application gateway, 'aspnettodo.centralus.cloudapp.azure.com'. 
Browsing to that confirms that IIS is installed and responding to requests.</p> <p><img src="https://www.michaelburch.net/images/iis-welcome.png" alt="screenshot of browser loading content from this project" title="screenshot of browser loading content from this project"></p> <p>Now that it's complete, I can tear down the entire stack with 'pulumi destroy'. And in just a few minutes I've built, deployed and destroyed a complete web server environment using Pulumi and C#! The IIS Welcome page isn't very interesting though - maybe next time I'll deploy an actual application and try out more capabilities of Pulumi.</p> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more. https://www.michaelburch.net/blog/kubernetes-on-raspberry-pi-with-k3s.html Kubernetes on Raspberry Pi with K3s Michael Burch 2020-03-10T00:00:00Z <p>Kubernetes makes it possible to describe an application and deploy it to the cloud or to on-premise infrastructure using the same code and deployment tools. Using K3s, that on-premise infrastructure can even be a Raspberry Pi (or a cluster of them!). This post describes deploying MongoDB to Kubernetes running on a Raspberry Pi 3.</p> <h2 id="preparing-the-raspberry-pi">Preparing the Raspberry Pi</h2> <p>The first step is to install an operating system image on the Pi. There are plenty of tutorials out there for this, so I won't cover it here. The <a href="https://www.raspberrypi.org/documentation/installation/installing-images/README.md" rel="noopener" target="_blank">official instructions</a> work just fine.</p> <p>I'll be deploying MongoDB, which is 64-bit only, so I need an OS image with at least a 64-bit kernel. The <a href="https://www.raspberrypi.org/downloads/raspbian/" rel="noopener" target="_blank">latest Raspbian image</a> (Buster Lite, 2020-02-13) can support this with a simple option change.</p> <p>I'm starting fresh, so I'll apply the image and then set a couple of config options before booting up the Pi:</p> <ul> <li>Enable 64-bit support</li> <li>Enable SSH</li> </ul> <h3 id="enable-64-bit-kernel">Enable 64-bit kernel</h3> <p>All that's necessary for this is to tell the Pi to load a 64-bit kernel. I'll do that by opening config.txt in the root of the boot partition and adding this line at the bottom:</p> <pre><code>arm_64bit=1 </code></pre> <p><img src="https://www.michaelburch.net/images/raspi-config-txt-arm64.png" alt="screenshot of config.txt enabling arm_64bit" title="screenshot of config.txt enabling arm_64bit"></p> <blockquote> <p>If this isn't your first boot and you've already got a recent Raspbian image, you can make this change quickly with <code>echo 'arm_64bit=1' | sudo tee -a /boot/config.txt</code> and then rebooting.</p> </blockquote> <h3 id="enable-ssh-at-first-boot">Enable SSH at first boot</h3> <p>This isn't strictly necessary, but I don't have a spare monitor around, so SSH is a must for me, and I want it to be enabled from first boot. Adding an empty file named 'ssh' (ssh.txt works too) to the root of the boot partition will accomplish this:</p> <p><img src="https://www.michaelburch.net/images/ssh-raspi.png" alt="screenshot of ssh.txt in boot partition" title="screenshot of ssh.txt in boot partition"></p> <p>That's it! At this point I have a fresh Raspbian image that will boot up with a 64-bit kernel and have ssh enabled. 
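If you're prepping the card from a Linux machine, both changes can be made in one go before the first boot - a quick sketch, with a mount point that will differ on your system:</p> <pre><code class="language-bash"># adjust BOOT to wherever the SD card's boot partition is mounted
BOOT=/media/$USER/boot
echo 'arm_64bit=1' | sudo tee -a "$BOOT/config.txt"   # load the 64-bit kernel
sudo touch "$BOOT/ssh"                                # enable SSH on first boot
</code></pre> 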
<p>All I have to do is plug in the SD card and boot it up.</p> <h3 id="installing-kubernetes">Installing Kubernetes</h3> <p>Kubernetes on its own is too much for my little Raspberry Pi 3. I'm going to use a minimal Kubernetes distribution from Rancher called K3s. The great thing about K3s is that it's designed for small embedded systems but also scales to large clusters. The default configuration replaces etcd with a SQLite database and uses containerd as the container runtime rather than a full install of Docker. This all adds up to a smaller footprint and simpler installation.</p> <blockquote> <p>The script below will install the version of k3s appropriate for your current kernel. If you haven't switched to a 64-bit kernel yet, now is the time. See the steps above for details, and confirm your current kernel architecture with <code>uname -a</code>, looking for 'aarch64 GNU/Linux'.</p> </blockquote> <p>Installing K3s is easy. I'll just log in to the Pi and run:</p> <pre><code class="language-bash">curl -sfL https://get.k3s.io | sh - </code></pre> <p>In a couple of minutes, I'm up and running with a single-node Kubernetes cluster.</p> <p>K3s, unlike a standard Kubernetes install, writes the kubeconfig file to /etc/rancher/k3s/k3s.yaml. Kubectl installed by K3s will automatically use this configuration.</p> <blockquote> <p>By default, only root will be able to access this kubeconfig, meaning you would have to prefix all of your kubectl commands with 'sudo'. You can permit other users to access the file with <code>sudo chmod 644 /etc/rancher/k3s/k3s.yaml</code></p> </blockquote> <p>You can also copy /etc/rancher/k3s/k3s.yaml to your local PC, and replace '127.0.0.1' with the IP address of the Pi to manage Kubernetes remotely.</p> <h3 id="deploying-mongodb">Deploying MongoDB</h3> <p>I've defined a minimal configuration for mongodb and saved it as mongo.yaml.</p> <pre><code class="language-yaml">apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
        selector: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: password
---
</code></pre> <p>This will deploy a single replica of the latest official mongo image from Docker Hub, with some <em>very</em> insecure credentials and no persistent storage. It's really the bare minimum to get the application up and running, which I can do with this command:</p> <pre><code class="language-bash">pi@raspberrypi:~ $ kubectl apply -f mongo.yaml
statefulset.apps/mongodb created
</code></pre> <p>I can verify that the pod is up and running with this:</p> <pre><code class="language-bash">pi@raspberrypi:~ $ kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
mongodb-0   1/1     Running   0          3m48s
</code></pre> <p>This doesn't expose mongodb on my local network, only to other pods running in the cluster. I could add a simple web app to the cluster and use mongo as the data store. Or I could add a service definition and expose it to apps running elsewhere on the network (there's a quick sketch of that below).</p> <p>This is a good starting point for a basic deployment in Kubernetes, and K3s is the easiest and fastest Kubernetes install I've seen so far.</p> 
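<p>For reference, that service definition might look something like this. It's a sketch only: the selector matches the pod label from mongo.yaml above, and the port numbers are arbitrary choices within the default NodePort range:</p> <pre><code class="language-bash"># expose mongodb on port 30017 of the Pi (run from the Pi, or anywhere kubectl is configured)
kubectl apply -f - &lt;&lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: mongodb-external
spec:
  type: NodePort
  selector:
    app: database
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 30017
EOF
# then, from another machine on the LAN: mongodb://admin:password@&lt;pi-address&gt;:30017
</code></pre> 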
<h3 id="bonus-adding-persistent-storage">BONUS: Adding persistent storage</h3> <p>A database isn't very useful without persistent storage. I have a Synology NAS on my network that serves storage over iSCSI, so I'll quickly add that to my deployment by adding a volume and mounting it in the pod:</p> <pre><code class="language-yaml">        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: admin
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: iscsipd-rw
          mountPath: /data/db
      volumes:
      - name: iscsipd-rw
        iscsi:
          targetPortal: 192.168.0.226:3260
          portals: ['192.168.0.226:3260']
          iqn: iqn.2000-01.com.synology:media.Target-1.2818f865af
          lun: 1
          fsType: xfs
          readOnly: false
---
</code></pre> Michael Burch is a technologist, cloud enthusiast, programmer, runner, hiker, husband, father and more.