Introduction
Kubernetes is a powerful tool for managing applications and services. However, it can be difficult to estimate the cost of your workloads when you're new to Kubernetes. This article will walk through some common use cases that are relevant to estimating costs in Kubernetes as well as provide some tips on how to use Goldilocks when calculating these numbers.
Goldilocks
Goldilocks is an open-source tool from Fairwinds that helps you right-size your Kubernetes workloads. It runs the Vertical Pod Autoscaler (VPA) in recommendation mode and suggests resource requests and limits for each workload, and those numbers are the foundation of any credible cost estimate. The name comes from the Goldilocks principle: a workload's resource allocation should be "just right," neither over- nor under-provisioned.
With accurate requests in hand, you can estimate what your workloads would cost to run at different scales, including the extra capacity needed to meet high-availability requirements.
CapEx vs. OpEx
Kubernetes can run on hardware you own or on cloud instances you rent, so before estimating costs it's important to understand the difference between CapEx and OpEx.
CapEx is capital expenditure: money spent up front on assets such as hardware. If you want to run Kubernetes on your own servers, at home or in an office, you may need to purchase machines with enough CPU and memory to handle all of your workloads (especially if they run multiple virtual machines). This can be expensive! And the purchase price is only part of the picture: owning hardware also commits you to ongoing work such as maintenance, operating-system upgrades, and reboots after software updates, and those recurring costs belong in the OpEx column.
OpEx refers to operational expenses: the recurring costs of keeping systems running continuously, 24 hours a day, 365 days a year, such as power, cooling, network bandwidth, staff time, and (in the cloud) the hourly or per-second charges for the instances themselves. These costs accrue from the day a system goes into service until it is decommissioned.
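To compare the two models you typically amortize the up-front CapEx over the hardware's useful life and add the monthly OpEx. Here is a minimal sketch of that arithmetic; the dollar figures are hypothetical, chosen purely for illustration:

```python
# Rough CapEx-vs-OpEx comparison. All numbers below are
# hypothetical examples, not real pricing.

def total_cost_of_ownership(capex, monthly_opex, months):
    """Total spend: purchase price plus operating cost over `months`."""
    return capex + monthly_opex * months

def monthly_amortized_cost(capex, monthly_opex, months):
    """CapEx spread evenly over the lifetime, plus the monthly OpEx."""
    return capex / months + monthly_opex

# Assumed figures: a $6,000 server, $150/month for power and
# maintenance, amortized over a 3-year (36-month) life.
tco = total_cost_of_ownership(6000, 150, 36)       # 6000 + 150*36 = 11400
per_month = monthly_amortized_cost(6000, 150, 36)  # 6000/36 + 150 ≈ 316.67
```

Comparing `per_month` against the monthly rental price of an equivalent cloud instance is the simplest way to frame the buy-vs-rent decision.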
Kubernetes 101
Kubernetes is a platform for managing containerized application deployments. Kubernetes abstracts away the details of individual containers and allows you to manage entire applications as single entities.
The control plane runs on one or more control-plane (master) nodes, while your workloads run as pods on worker nodes. Each pod contains one or more containers and gets its own cluster-internal IP address; pod IPs are not routable from outside the cluster. This means that when you deploy your application onto Kubernetes, you need a Service (and often an Ingress) to make it reachable from outside the cluster.
What is a container?
Containers are a unit of deployment. They're lightweight, isolated and portable. Containers can be used for application development, infrastructure orchestration and more.
Containers are an important part of the DevOps toolchain because they enable fast feedback loops between developers and QA testers. It's also much easier to run multiple containers on one host than it is to run multiple virtual machines (VMs).
Kubernetes abstractions
The term Kubernetes abstraction refers to the objects exposed by the Kubernetes API and managed through tools such as kubectl. The most fundamental are Pods and the controllers built on top of them: a Pod is one or more containers scheduled together, whereas a Deployment represents a logical unit of work, a set of identical Pods that can be scaled up or down and rolled out in new versions.
Pods are the smallest deployable unit; each Pod runs in its entirety on a single node within the cluster. Controllers manage them on your behalf: a ReplicaSet keeps a desired number of Pod replicas running, a Deployment in turn manages ReplicaSets, and so on up the stack. The scheduler decides which node each Pod lands on, so you normally don't need to think about individual nodes when reasoning about these units.
Pods
A pod is a group of one or more containers. These containers can be microservices, and they are often ephemeral: they live for a while and are removed when you no longer need them. In Kubernetes, the Pod is the smallest unit of scheduling: the scheduler places whole Pods, never individual containers, onto nodes.
Pods are created by Kubernetes when you create higher-level resources such as Deployments in the cluster; they're also destroyed when those resources are removed (for example, when an application is scaled down or deleted). Pods are therefore similar to operating-system processes: they exist only as long as they are needed, and the cluster reclaims their resources once they terminate so other workloads can use them.
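A minimal Pod manifest looks like the sketch below; the name and image are hypothetical examples:

```yaml
# A bare Pod: one nginx container listening on port 80.
# You would rarely create Pods directly; controllers do it for you.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25  # example image
      ports:
        - containerPort: 80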
Controllers/ReplicaSets
ReplicaSets are responsible for maintaining the desired number of Pod replicas: if a Pod dies, the ReplicaSet replaces it, and changing the replica count scales the set up or down.
You rarely create ReplicaSets directly, though. Rolling updates (replacing Pods gradually so there is no downtime) are performed by Deployments, which create and manage ReplicaSets for you.
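For completeness, a minimal ReplicaSet manifest is sketched below (names, labels, and image are hypothetical). The `selector` tells the controller which Pods it owns, and `replicas` is the count it maintains:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs           # hypothetical name
spec:
  replicas: 3            # keep exactly three Pods running
  selector:
    matchLabels:
      app: web           # Pods with this label belong to the set
  template:              # Pod template used to create replacements
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25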
Deployments/StatefulSets
Deployments and StatefulSets are the two Kubernetes controllers most teams use to run long-lived services.
Deployments are for stateless applications: they create the underlying ReplicaSets and Pods, update them in place (for example, adding or removing replicas, or rolling out a new image), and keep a revision history. StatefulSets are for workloads that need state: each Pod gets a stable network identity and its own persistent volume that survives rescheduling.
Why would you want these? If your application has many replicas that must be updated in a controlled way, a Deployment rolls the change out across all of them automatically instead of you redeploying each one by hand, and if the rollout goes wrong you can roll back to the previous revision. With a StatefulSet, each replica's persistent data stays attached to it across those updates.
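A minimal Deployment manifest is sketched below (names and image are hypothetical); changing `image` in this spec and re-applying it triggers a rolling update:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy       # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate  # replace Pods gradually, no downtime
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # bump this tag to roll out a new version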
Services/Ingress
You can use Services and Ingress to define how users access your applications. A Service gives a set of Pods a single stable address inside the cluster; an Ingress sits in front of Services and defines the rules for routing external HTTP(S) traffic (by hostname or path) to them, letting you expose endpoints for public consumption.
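A sketch of an Ingress routing traffic for one hostname to a backing Service; the hostname and Service name are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com        # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc  # a Service fronting the Pods
                port:
                  number: 80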
Jobs/CronJobs
CronJobs run a task on a schedule, much like cron on a Unix system. The schedule uses standard cron syntax, so a job can run hourly, daily, or at any interval you like; pick a frequency that matches how long the work actually takes (a job that needs three hours shouldn't be scheduled to start every hour). A concurrency policy controls whether a new run may start while the previous one is still going.
A failed run is not silently lost: the Job controller can retry failed Pods up to a configurable backoff limit, and CronJobs keep a history of recent successful and failed runs so you can inspect what went wrong. Still, it's worth adding your own alerting around business-critical jobs rather than relying on someone noticing a failure by hand.
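A sketch of a CronJob with retries and history enabled; the name, image, and command are hypothetical:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report         # hypothetical name
spec:
  schedule: "0 2 * * *"        # every day at 02:00
  concurrencyPolicy: Forbid    # don't start a new run while one is active
  failedJobsHistoryLimit: 3    # keep failed runs around for debugging
  jobTemplate:
    spec:
      backoffLimit: 2          # retry a failed Pod up to twice
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: busybox:1.36
              command: ["sh", "-c", "echo generating report"]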
Take Goldilocks for a test drive with your own workload to see how it works!
Now that you have a basic understanding of how the tool works, let's take it for a test drive with your own workload.
Start by creating an evaluation environment by following these steps:
Create a cluster on Google Cloud Platform (GCP) or another supported cloud provider. On GCP this is done with the gcloud command-line interface (CLI); once the cluster exists, you interact with it using kubectl.
Install Goldilocks into the cluster and enable it for the namespaces you want it to analyze, as described in [How To Import Data To Kubernetes](docs.googleusercontent.com/document/d/1YsZa..).
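Assuming the Fairwinds convention, you opt a namespace in by labeling it; Goldilocks then creates VPA objects for the workloads in that namespace and surfaces the recommendations in its dashboard. A sketch (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                               # hypothetical namespace
  labels:
    goldilocks.fairwinds.com/enabled: "true" # opt this namespace in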
Conclusion
We hope you’ve enjoyed today’s lesson on Kubernetes and how to estimate costs for your own workloads, but we’re not done yet! We have many more articles in our Learn Kubernetes series that will help you with deployment strategies and best practices. If you want to learn more about how Kubernetes works under the hood, check out our other blog posts!