The Use Case for Kubernetes at the Edge
This blog was originally published on The New Stack and is republished here.
Kubernetes is an integral part of enterprises transforming into digital-first businesses. According to VMware’s “State of Kubernetes 2020” report, Kubernetes is currently being used in 59% of data centers to improve resource utilization and bring agility to the software development cycle. Importantly, Kubernetes can manage and orchestrate containerized applications alongside legacy virtual machines in a distributed cloud environment. Beyond conventional applications, Kubernetes is also useful for running AI/ML and GPU workloads.
In addition to the data center, the Linux Foundation’s “State of the Edge” report clearly shows that Kubernetes is the go-to platform for the edge cloud, at least for those edges that require dynamic orchestration of applications and centralized management of workloads.
We can say that Kubernetes is extending to the edge the benefits of the cloud native software development model, with flexible and automated management of applications that span the disaggregated cloud environment.
Here are a few key developments showing how Kubernetes has evolved to manage applications and workloads at the edge.
- One of the most exciting Kubernetes-based open-source projects is Akri, released by Microsoft to connect smaller “leaf devices,” such as microcontrollers, cameras, and sensors, to the Kubernetes cluster. By discovering these leaf devices and collecting their output, Akri essentially extends the Kubernetes device plugin framework to the edge.
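As an illustrative sketch, Akri is driven by a Configuration custom resource that describes how to discover leaf devices and which broker pod to run against each one. The udev rule and broker image below are hypothetical placeholders, and exact field names may vary between Akri versions:

```yaml
# Hypothetical Akri Configuration: discover camera devices via udev
# and run a broker pod against each discovered device. Field names
# follow Akri's published examples but may differ across versions.
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: udev-camera
spec:
  discoveryHandler:
    name: udev
    discoveryDetails: |
      udevRules:
      - 'KERNEL=="video[0-9]*"'            # match Video4Linux camera nodes
  brokerSpec:
    brokerPodSpec:
      containers:
        - name: camera-broker
          image: example.com/camera-broker:latest   # placeholder image
  capacity: 1   # how many broker pods may use one device at a time
```

Applying a resource like this lets the Akri agent on each node advertise matching devices as schedulable Kubernetes resources.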
- Microsoft has released another open-source project, called Krustlet, that runs WebAssembly (WASM) modules alongside containers in Kubernetes. Developers can compile code from familiar languages (C, C++, C#, Rust, Go, etc.) to WASM. This is important for edge computing because the same codebase can then be executed on a wide range of devices.
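To sketch how this fits into Kubernetes scheduling: Krustlet nodes register themselves with a `wasm32-wasi` architecture label and taints, so a pod built from a WASM module is steered to them with a node selector and matching tolerations. The module image below is a placeholder; the toleration keys follow Krustlet’s demo manifests:

```yaml
# Sketch of a pod spec targeting a Krustlet (WASM) node.
# The image reference is a placeholder OCI artifact holding the module.
apiVersion: v1
kind: Pod
metadata:
  name: hello-wasm
spec:
  containers:
    - name: hello-wasm
      image: example.com/hello-wasm:v1.0.0   # placeholder WASM module image
  nodeSelector:
    kubernetes.io/arch: wasm32-wasi          # schedule onto WASM-capable nodes
  tolerations:
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoExecute"
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoSchedule"
```

Regular container nodes reject the pod via the node selector, while the tolerations let it land on the tainted Krustlet node.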
- Hyperconverged infrastructure (HCI) systems have emerged as a go-to solution for hosting virtualized and containerized applications that require a smaller compute footprint at the edge. The State of the Edge report revealed that hyperscale cloud providers are integrating HCI systems into the hybrid cloud environment. These HCI systems are also integrated with public cloud Kubernetes services to deploy and manage containerized and virtualized resources that require lifecycle management from a central cloud.
- In 2020, cloud vendors such as Google, Amazon Web Services, IBM and Microsoft launched solutions targeted at building edge clouds. Each vendor’s own Kubernetes service is used to orchestrate the resources at the edge.
- The Linux Foundation Edge project Baetyl was updated with native support for Kubernetes. Baetyl runs on Kubernetes at the edge to manage and deploy applications, and it also supports the lightweight K3s Kubernetes distribution. Furthermore, all Baetyl functions can work as Kubernetes services, with a self-update capability.
- Linux Foundation Edge’s Project EVE enables lifecycle management and remote orchestration of any application and hardware at the edge.
- The Open Infrastructure Foundation’s StarlingX project, aimed at edge and IoT deployments, has evolved to run its infrastructure services in containers at the edge. StarlingX combines OpenStack and Kubernetes to run virtual machines and containers together. It also makes it possible to run container-only deployments at the edge using lightweight Kubernetes components.
Enterprises and telecom operators are focusing on achieving a high degree of flexibility, observability and dynamic orchestration by deploying and testing Kubernetes at the edge. Many vendors, such as Canonical, Mirantis and SUSE, have their own open-source platforms built on top of Kubernetes to support edge use cases, while hyperscale cloud providers are integrating their edge solutions with their native Kubernetes services. Such trends show that Kubernetes is the key platform for edge adopters.