Working with OKE Virtual Nodes

Oracle University Podcast

Jul 2 2024 • 14 mins

Want to gain insights into how virtual nodes provide a serverless Kubernetes experience?
Join hosts Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, as they compare managed nodes and virtual nodes. Continuing from the previous episode, they explore how virtual nodes enhance Kubernetes deployments in Oracle Cloud Infrastructure.
Oracle University Learning Community: https://education.oracle.com/ou-community
X (formerly Twitter): https://twitter.com/Oracle_Edu
Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.
--------------------------------------------------------
Episode Transcript:

00:00

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

00:25

Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

Nikita: Hey everyone! In our last episode, we examined OCI Container Engine for Kubernetes, including its key features and benefits.

Lois: Yeah, that was an interesting one. Today, we’re going to discuss virtual nodes and their role in enhancing Kubernetes deployments in Oracle Cloud Infrastructure.

Nikita: We’re going to compare virtual nodes and managed nodes, and look at their differences and advantages. To take us through all this, we have Mahendra Mehra with us. Mahendra is a senior OCI instructor with Oracle University.

01:09

Lois: Hi Mahendra! From our discussion last week, we know that when creating a node pool with Container Engine for Kubernetes, we have the option of specifying the type of Oracle nodes as either managed nodes or virtual nodes. But I’m sure there are some key differences in the features supported by each type, right?

Mahendra: The primary point of differentiation between virtual nodes and managed nodes is in their management approach. When it comes to managed nodes, users are responsible for managing the nodes. They have the flexibility to configure them to meet their specific requirements.

Users are also responsible for upgrading Kubernetes on managed nodes and for managing cluster capacity. You can create managed nodes and node pools in both basic clusters and enhanced clusters. Virtual nodes, on the other hand, provide a serverless Kubernetes experience, enabling users to run containerized applications at scale. The Kubernetes software is upgraded and security patches are applied while respecting your applications' availability requirements.
You can only create virtual nodes and virtual node pools in enhanced clusters.

02:17

Nikita: What about differences in terms of resource allocation? Are there any differences we should be aware of?

Mahendra: When it comes to managed nodes, resource allocation is at the node pool level, and users specify CPU and memory resource requirements for a given node pool. With virtual nodes, resource allocation is done at the pod level, where you again specify CPU and memory resource requirements, but this time as requests and limits in the pod specification.
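
To make the pod-level model concrete, here is a minimal sketch using the Kubernetes Python client that sets CPU and memory requests and limits per container. The image, names, and resource values are illustrative assumptions, not OKE defaults.

# Minimal sketch: per-container requests and limits in a pod specification,
# built with the Kubernetes Python client (pip install kubernetes).
# With virtual nodes, OKE sizes the pod from these pod-level values.
from kubernetes import client

container = client.V1Container(
    name="app",
    image="nginx:latest",  # illustrative image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "1Gi"},
        limits={"cpu": "1", "memory": "2Gi"},
    ),
)

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="demo-pod"),
    spec=client.V1PodSpec(containers=[container]),
)

print(pod)  # inspect the generated pod specification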

02:45

Lois: What about differences in the approach to load balancing?

Mahendra: When it comes to managed nodes, load balancing is between the worker nodes, whereas in virtual nodes, load balancing is between pods.

Also, load balancer security list management is never enabled, and you must always configure security rules manually. When using virtual nodes, load balancers distribute traffic among pods' IP addresses and then assign a node port.
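
A minimal sketch of the Kubernetes side of this, again with the Kubernetes Python client: a Service of type LoadBalancer selecting the application's pods. The names, selector, and ports are illustrative; with virtual nodes you still have to configure the load balancer's security rules manually.

# Minimal sketch: a LoadBalancer Service fronting pods that run on virtual nodes.
# With virtual nodes, traffic is distributed among the pods' IP addresses, and
# security rules for the load balancer must be configured manually.
from kubernetes import client

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="demo-lb"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "demo"},  # illustrative selector
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

print(service)  # inspect the generated service definition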

03:12

Lois: And when it comes to pod networking?

Mahendra: Under managed nodes, both the VCN-Native Pod Networking CNI plugin and the flannel CNI plugin are supported. When it comes to virtual nodes, only VCN-Native Pod Networking is supported.

Also, only one VNIC is attached to each virtual node. Remember, IP addresses are not pre-allocated before pods are created. And the VCN-Native Pod Networking CNI plugin is not shown as running in the kube-system namespace. Pod subnet route tables must have route rules defined for a NAT gateway and a service gateway.
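
As a rough sketch of the route rules a pod subnet route table needs, here is one way to set them with the OCI Python SDK. The OCIDs and the service CIDR label are placeholders, and note that this update call replaces the route table's existing rule list, so include any rules you want to keep.

# Rough sketch: NAT gateway and service gateway route rules for a pod subnet
# route table, using the OCI Python SDK (pip install oci). OCIDs and the
# service CIDR label below are placeholders, not real values.
import oci

config = oci.config.from_file()  # assumes a standard ~/.oci/config profile
vcn_client = oci.core.VirtualNetworkClient(config)

nat_rule = oci.core.models.RouteRule(
    destination="0.0.0.0/0",
    destination_type="CIDR_BLOCK",
    network_entity_id="ocid1.natgateway.oc1..example",
)

service_rule = oci.core.models.RouteRule(
    destination="all-<region>-services-in-oracle-services-network",  # region-specific label
    destination_type="SERVICE_CIDR_BLOCK",
    network_entity_id="ocid1.servicegateway.oc1..example",
)

# Caution: this call replaces the route table's entire rule list.
vcn_client.update_route_table(
    rt_id="ocid1.routetable.oc1..example",
    update_route_table_details=oci.core.models.UpdateRouteTableDetails(
        route_rules=[nat_rule, service_rule]
    ),
)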

03:48

Nikita: OK… I have a question, Mahendra. When it comes to scaling Kubernetes clusters and node pools, can users adjust the cluster capacity in response to their changing requirements?

Mahendra: When it comes to managed nodes, customers can scale the cluster and node pool up and down by changing the number of managed node pools and nodes respectively. They also have an option to enable autoscaling to automatically scale managed node pools and pods.

When it comes to virtual nodes, operational overhead of cluster capacity management is handled for you by OCI. A virtual node pool scales automatically and can support up to 1000 pods per virtual node. Users also have an option to increase the number of virtual node pools or virtual nodes to scale up the cluster or node pool respectively.

04:37

Lois: And what about the pricing for each?

Mahendra: Under managed nodes, you pay for the compute instances that execute applications, whereas under virtual nodes, you pay for the exact compute resources consumed by each Kubernetes pod.

04:55

Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free! So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai.

05:34

Nikita: Welcome back! We were just discussing how when you have to choose between virtual nodes and managed nodes for your Kubernetes cluster, you need to consider several key points of differentiation, like the management approach, resource allocation, load balancing, pod networking, scaling, and pricing.

Lois: Yeah, it’s important to understand the benefits and drawbacks of each approach to make informed decisions. Mahendra, now let’s talk about the prerequisites to configure clusters with virtual nodes and the IAM policies that are required to use virtual nodes.

Mahendra: Before you can use virtual nodes, you always have to set up at least one IAM policy. It is required in all circumstances, whether clusters are created and used by tenancy administrators or by non-administrator users.

This basically means, to create and use clusters with virtual nodes and virtual node pools, you must endorse the Container Engine for Kubernetes service to allow virtual nodes to create container instances in the Container Engine for Kubernetes service tenancy, with a VNIC connected to a subnet of a VCN in your tenancy.

All you need to do is create a policy in the root compartment with policy statements from the official documentation page. You will find them under the Working with Virtual Nodes section within the Container Engine topic.

06:55

Lois: Mahendra, how do you create and configure virtual nodes and virtual node pools?

Mahendra: Creating virtual nodes is a pivotal step and it involves setting up a virtual node pool in a new cluster. This is exclusively applicable to enhanced clusters. You can initiate this process using the console, the CLI, or the API.

Configuring your virtual node pools involves defining critical parameters. Firstly, we have the node count. This represents the number of virtual nodes you wish to create within your virtual node pool. These nodes will be strategically placed in the availability domains that you specify.

Now, it's important to carefully consider the placement of these nodes. You can distribute them across different availability domains, ensuring high availability for your applications. Additionally, you have the option to place these nodes in a regional subnet, which is the recommended approach for optimal performance.

07:53

Nikita: Isn’t the pod shape another important parameter? Can you tell us a bit about it?

Mahendra: Pod shape refers to the type of shape you want for pods running on your virtual nodes within the virtual node pool. The pod shape is crucial as it determines the processor type on which you want your pods to run. It is important to note that only shapes available in your tenancy and supported by Container Engine for Kubernetes will be shown. So choose a shape that aligns with the requirements of your applications and services.
A noteworthy point is that you explicitly specify the CPU and memory resource requirements for virtual nodes in the pod specification file. This ensures that your virtual nodes have the necessary resources to handle the workloads of your applications. Precision in specifying these requirements is key to achieving optimal performance.

08:49

Lois: What is the network setup for virtual nodes?

Mahendra: Pods running on virtual nodes utilize VCN-native pod networking, and it's crucial to specify how the pods in the node pool communicate with each other. This involves setting up a pod subnet, which is a regional subnet configured specifically to host pods.

The pod subnet you specify for virtual nodes must be private. Oracle recommends that the pod subnet and the virtual node subnet are the same. In addition to subnet configuration, you have the option to use security rules in network security groups to control access to the pod subnet. This involves defining security rules within one or more NSGs that you specify, up to a maximum of five network security groups.

Also, it is worth noting that using network security groups is recommended over using security lists. Now, let's shift our focus to virtual node communication. For this, you will configure a virtual node subnet. This subnet can be either a regional subnet, which is recommended, or an availability domain-specific subnet, and it's designed to host your virtual nodes.
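
Pulling these settings together, here is a schematic summary in Python of the virtual node pool configuration just described. This is a plain dictionary for illustration only, not an actual API payload or SDK model; all OCIDs, names, shapes, and values are placeholders.

# Schematic summary (plain Python, not an SDK call) of a virtual node pool
# configuration. All OCIDs, names, and values are placeholders.
virtual_node_pool_config = {
    "display_name": "virtual-pool-1",
    "node_count": 3,                                   # number of virtual nodes
    "placement": {
        "availability_domains": ["AD-1", "AD-2", "AD-3"],
        "virtual_node_subnet": "ocid1.subnet.oc1..node-subnet-example",  # regional and private (recommended)
    },
    "pod_shape": "Pod.Standard.E4.Flex",               # must be available in your tenancy
    "pod_networking": {
        "cni": "VCN-Native Pod Networking",            # the only option for virtual nodes
        "pod_subnet": "ocid1.subnet.oc1..pod-subnet-example",  # private; same as node subnet recommended
        "nsg_ids": ["ocid1.networksecuritygroup.oc1..example"],  # up to five NSGs
    },
}

print(virtual_node_pool_config)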

10:02

Nikita: What are some key considerations for virtual node subnets?

Mahendra: If you've specified load balancer subnets, ensure that the virtual node subnets are different. As with pod communication, Oracle recommends that the pod subnet and the virtual node subnet are the same, with the added condition that the virtual node subnet must be private.

10:23

Lois: Mahendra, can you take us through the fundamental tasks involved in managing virtual nodes and virtual node pools?

Mahendra: Whether you're creating a new enhanced cluster using the Console, or looking to scale up an existing one, the creation process is versatile.

Creating virtual nodes involves establishing a virtual node pool. Virtual nodes can only be created within enhanced clusters. The task of listing virtual nodes offers visibility into the virtual nodes within a virtual node pool. Whether you prefer the Console, the CLI, or the API, you have the flexibility to choose the method that suits your workflow best.

For a comprehensive understanding of your virtual node pools, navigate to the Cluster List page, and click on the name of the cluster. This will unveil the specifics of the virtual node pool you are interested in.

Now let's talk about updating virtual node pools. Whether you're initiating a new enhanced cluster or expanding an existing one, the update process ensures your cluster aligns with your evolving requirements. You can easily update the virtual node pool's name for clarity. You can also dynamically change the number of virtual nodes to meet workload demands, and you can fine-tune node placement using options like availability domain and fault domain settings.

Moving on to an essential aspect of node pool management, that is deletion. It's crucial to understand that deleting a node pool is a permanent action. Once deleted, the node pool cannot be recovered.
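
For readers who prefer automation over the Console, here is a rough sketch of these day-2 operations with the OCI Python SDK. The virtual node pool method and model names used here are assumptions based on the SDK's usual naming pattern; verify them against the OCI Python SDK reference before relying on them.

# Rough sketch of listing, updating, and deleting virtual node pools with the
# OCI Python SDK. Method and model names are assumptions to be verified against
# the SDK reference; OCIDs are placeholders.
import oci

config = oci.config.from_file()
ce_client = oci.container_engine.ContainerEngineClient(config)

# List the virtual node pools in a cluster.
pools = ce_client.list_virtual_node_pools(
    compartment_id="ocid1.compartment.oc1..example",
    cluster_id="ocid1.cluster.oc1..example",
).data

# Scale an existing pool by updating its name and node count.
ce_client.update_virtual_node_pool(
    virtual_node_pool_id=pools[0].id,
    update_virtual_node_pool_details=oci.container_engine.models.UpdateVirtualNodePoolDetails(
        display_name="virtual-pool-1",
        size=5,
    ),
)

# Deletion is permanent and cannot be undone.
# ce_client.delete_virtual_node_pool(virtual_node_pool_id=pools[0].id)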

12:04

Lois: Before we wrap up, Mahendra, can you talk about the critical factors when allocating CPU, memory, and storage resources to pods provisioned by virtual nodes within your OKE cluster?

Mahendra: To ensure optimal performance, OKE calculates CPU and memory allocations at the pod level, a distinctive feature when using virtual nodes. This approach stands in contrast to the traditional worker node-level allocation.

The allocation process takes into account several factors. The first one is the CPU and memory requests and limits, which are specified for each container in the pod spec file, if present. The second is the number of containers in the pod, since the total number of containers impacts the overall resource requirements. And the third is the kube-proxy and container runtime requirements, a small but essential consideration that takes up 0.25 GB of memory and negligible CPU. Pod CPU and memory requests must meet a minimum of 0.125 OCPUs and 0.5 GB of memory.
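
To make that arithmetic concrete, here is a small worked example in Python that follows the rules just described: sum the per-container requests, add 0.25 GB of memory for kube-proxy and the container runtime, and enforce the minimums of 0.125 OCPUs and 0.5 GB. Treat it as an illustration of the calculation, not an official sizing tool.

# Worked example of the pod-level sizing arithmetic described above.
# Illustrative only -- not an official OKE sizing tool.

def pod_allocation(container_requests):
    """container_requests: list of (ocpu, memory_gb) tuples, one per container."""
    cpu = sum(c for c, _ in container_requests)
    mem = sum(m for _, m in container_requests)
    mem += 0.25            # kube-proxy and container runtime overhead
    cpu = max(cpu, 0.125)  # minimum pod CPU request
    mem = max(mem, 0.5)    # minimum pod memory request
    return cpu, mem

# A pod with two containers, each requesting 0.25 OCPUs and 0.5 GB of memory:
print(pod_allocation([(0.25, 0.5), (0.25, 0.5)]))  # -> (0.5, 1.25)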

13:12

Nikita: Thank you, Mahendra, for this really insightful session. If you’re interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course.

Lois: You’ll find demos that you can watch as well as skill checks that you can attempt to better your understanding. In our next episode, we’ll journey into the world of self-managed nodes and discuss how to manage Kubernetes deployments. Until then, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

13:45

That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.