containerLogMaxFiles
Settings: Value must be an integer between 2 and 10, inclusive.
Default: 5
Description: This setting controls the maximum number of container log files allowed for each container. The total log size (container_log_max_size*container_log_max_files) per container cannot exceed 1 percent of the total storage of the node.

cpuManagerPolicy
Settings: Value must be none or static.
Default: none
Description: This setting controls the kubelet's CPU Manager Policy. The default value is none, which is the default CPU affinity scheme, providing no affinity beyond what the OS scheduler does automatically. The static value allows Pods in the Guaranteed QoS class with integer CPU requests to be assigned exclusive use of CPUs.

cpuCFSQuota
Settings: Value must be true or false.
Default: true
Description: This setting enforces the Pod's CPU limit. Setting this value to false means that the CPU limits for Pods are ignored. The risk of disabling cpuCFSQuota is that a rogue Pod can consume more CPU resources than intended.

cpuCFSQuotaPeriod
Settings: Value must be a duration of time.
Default: "100ms"
Description: This setting sets the CPU CFS quota period value, cpu.cfs_period_us, which specifies the period of how often a cgroup's access to CPU resources should be reallocated. This option lets you tune the CPU throttling behavior.
imageGcLowThresholdPercent
Settings: Value must be an integer and less than imageGcHighThresholdPercent.
Default: 80
Description: imageGcLowThresholdPercent is the percent of disk usage before which image garbage collection is never run: the lowest disk usage to garbage collect to. The percent is calculated by dividing this field value by 100. When specified, the value must be less than imageGcHighThresholdPercent.

imageGcHighThresholdPercent
Settings: Value must be an integer and greater than imageGcLowThresholdPercent.
Default: 85
Description: imageGcHighThresholdPercent is the percent of disk usage above which image garbage collection is run: the highest disk usage to garbage collect to. The percent is calculated by dividing this field value by 100. When specified, the value must be greater than imageGcLowThresholdPercent.

imageMinimumGcAge
Settings: Value must be a duration of time; valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Default: 2m
Description: imageMinimumGcAge is the minimum age for an unused image before it is garbage collected.

imageMaximumGcAge
Settings: Value must be a duration of time and greater than imageMinimumGcAge.
Default: 0s
Description: imageMaximumGcAge is the maximum age an image can be unused before it is garbage collected. The default of "0s" disables this field, meaning images won't be garbage collected based on being unused for too long. When specified, the value must be greater than imageMinimumGcAge. imageMaximumGcAge is available on GKE versions 1.30.7-gke.1076000, 1.31.3-gke.1023000 or later.
insecureKubeletReadonlyPortEnabled
Settings: Value must be a boolean (true or false).
Default: true
Description: This setting disables the insecure kubelet read-only port 10255 on every new node pool in your cluster. If you configure this setting in this file, you can't use a GKE API client to change the setting at the cluster level.
allowedUnsafeSysctls
Settings: Comma-separated list of unsafe sysctl names or sysctl groups. Allowed sysctl groups: kernel.shm*, kernel.msg*, kernel.sem, fs.mqueue.*, and net.*. Example: [kernel.msg*, net.ipv4.route.min_pmtu].
Default: none
Description: This setting is a comma-separated allowlist of unsafe sysctl names or sysctl groups, which can be set on the Pods.
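For example, a node system configuration file that sets several of the kubelet options in the preceding table might look like the following sketch. The field names come from the table above; the values are illustrative, not recommendations:

kubeletConfig:
  cpuManagerPolicy: static
  cpuCFSQuota: true
  cpuCFSQuotaPeriod: '100ms'
  imageGcLowThresholdPercent: 70
  imageGcHighThresholdPercent: 80
  insecureKubeletReadonlyPortEnabled: false
  allowedUnsafeSysctls:
  - 'kernel.msg*'
  - 'net.ipv4.route.min_pmtu'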
To tune the performance of your system, you can modify the following kernel attributes:

kernel.shmmni
kernel.shmmax
kernel.shmall
net.core.busy_poll
net.core.busy_read
net.core.netdev_max_backlog
net.core.rmem_max
net.core.rmem_default
net.core.wmem_default
net.core.wmem_max
net.core.optmem_max
net.core.somaxconn
net.ipv4.tcp_rmem
net.ipv4.tcp_wmem
net.ipv4.tcp_tw_reuse
net.ipv6.conf.all.disable_ipv6
net.ipv6.conf.default.disable_ipv6
net.netfilter.nf_conntrack_acct - Available on GKE versions 1.32.0-gke.1448000 or later.
net.netfilter.nf_conntrack_max - Available on GKE versions 1.32.0-gke.1448000 or later.
net.netfilter.nf_conntrack_buckets - Available on GKE versions 1.32.0-gke.1448000 or later.
net.netfilter.nf_conntrack_tcp_timeout_close_wait - Available on GKE versions 1.32.0-gke.1448000 or later.
net.netfilter.nf_conntrack_tcp_timeout_established - Available on GKE versions 1.32.0-gke.1448000 or later.
net.netfilter.nf_conntrack_tcp_timeout_time_wait - Available on GKE versions 1.32.0-gke.1448000 or later.
vm.max_map_count
Different Linux namespaces might have unique values for a given sysctl, while others are global for the entire node. Updating sysctl options by using a node system configuration ensures that the sysctl is applied globally on the node and in each namespace, resulting in each Pod having identical sysctl values in each Linux namespace.
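For example, a node system configuration that tunes two of these kernel attributes might look like the following sketch (the sysctl values shown are illustrative, not recommendations):

linuxConfig:
  sysctl:
    net.core.somaxconn: '2048'
    net.ipv4.tcp_tw_reuse: '1'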
The kubelet and the container runtime use Linux kernel cgroups for resource management, such as limiting how much CPU or memory each container in a Pod can access. There are two versions of the cgroup subsystem in the kernel: cgroupv1 and cgroupv2. Kubernetes support for cgroupv2 was introduced as alpha in Kubernetes version 1.18, beta in 1.22, and GA in 1.25. For more details, refer to the Kubernetes cgroups v2 documentation.
Node system configuration lets you customize the cgroup configuration of your node pools. You can use cgroupv1 or cgroupv2. GKE uses cgroupv2 for new Standard node pools running version 1.26 and later, and cgroupv1 for versions earlier than 1.26. For node pools created with node auto-provisioning, the cgroup configuration depends on the initial cluster version, not the node pool version. cgroupv1 is not supported on Arm machines.
You can use node system configuration to change the setting for a node pool to use cgroupv1 or cgroupv2 explicitly. Upgrading an existing node pool to 1.26 doesn't change the setting to cgroupv2: existing node pools created on a version earlier than 1.26, without a customized cgroup configuration, continue to use cgroupv1 unless you explicitly specify otherwise.
For example, to configure your node pool to use cgroupv2, use a node system configuration file such as:

linuxConfig:
  cgroupMode: 'CGROUP_MODE_V2'
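To apply the configuration, pass the file when creating a node pool with the gcloud CLI's --system-config-from-file flag. For example, assuming the file is saved as node-config.yaml (a hypothetical filename):

gcloud container node-pools create POOL_NAME \
    --cluster=CLUSTER_NAME \
    --system-config-from-file=node-config.yaml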
The supported cgroupMode options are:

CGROUP_MODE_V1: Use cgroupv1 on the node pool.
CGROUP_MODE_V2: Use cgroupv2 on the node pool.
CGROUP_MODE_UNSPECIFIED: Use the default GKE cgroup configuration.

To use cgroupv2, the following requirements and limitations apply:
If you have any third-party monitoring or security agents that depend on the cgroup filesystem (/sys/fs/cgroup/...), ensure that they are compatible with the cgroupv2 API.
If you use JDK (Java workloads), use versions that fully support cgroupv2, including JDK 8u372, JDK 11.0.16 or later, or JDK 15 or later.

When you add a node system configuration, GKE must recreate the nodes to implement the changes. After you've added the configuration to a node pool and the nodes have been recreated, you can verify the new configuration.
You can verify the cgroup configuration for nodes in a node pool with the gcloud CLI or the kubectl command-line tool.

Check the cgroup configuration for a node pool:

gcloud container node-pools describe POOL_NAME \
    --format='value(Config.effectiveCgroupMode)'

Replace POOL_NAME with the name of your node pool.
The potential output is one of the following:

EFFECTIVE_CGROUP_MODE_V1: the nodes use cgroupv1
EFFECTIVE_CGROUP_MODE_V2: the nodes use cgroupv2

The output only shows the new cgroup configuration after the nodes in the node pool have been recreated. The output is empty for Windows Server node pools, which don't support cgroup.
To verify the cgroup configuration for nodes in this node pool with kubectl, pick a node and connect to it using the following instructions. Replace mynode in the command with the name of any node in the node pool.
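A minimal way to do this, assuming the kubectl debug node feature is available in your cluster version, is to start a debug container on the node and check which cgroup filesystem it exposes:

# Start an interactive debug pod on the node (replace mynode):
kubectl debug node/mynode -it --image=busybox

# In the debug shell, inspect the cgroup filesystem type:
stat -fc %T /sys/fs/cgroup/
# "cgroup2fs" indicates cgroupv2; "tmpfs" indicates cgroupv1.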
You can use the node system configuration file to use the Linux kernel feature huge pages.

Kubernetes supports huge pages on nodes as a type of resource, similar to CPU or memory. Use the following parameters to instruct your Kubernetes nodes to pre-allocate huge pages for consumption by Pods. To manage your Pods' consumption of huge pages, see Manage HugePages.
To pre-allocate huge pages for your nodes, specify the amounts and sizes. For example, to configure your nodes to allocate three 1-gigabyte huge pages and 1024 2-megabyte huge pages, use a node system configuration such as the following:

linuxConfig:
  hugepageConfig:
    hugepage_size2m: 1024
    hugepage_size1g: 3
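After the nodes pre-allocate huge pages, Pods can consume them through the hugepages-<size> resource and an emptyDir volume with a huge pages medium, as in this minimal sketch (the Pod and container names are hypothetical; see Manage HugePages for details):

apiVersion: v1
kind: Pod
metadata:
  name: hugepages-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    # Mount the huge pages volume where the application expects it.
    - mountPath: /hugepages-2Mi
      name: hugepage-2mi
    resources:
      limits:
        # Huge pages requests must equal limits.
        hugepages-2Mi: 100Mi
        memory: 100Mi
      requests:
        memory: 100Mi
  volumes:
  - name: hugepage-2mi
    emptyDir:
      medium: HugePages-2Mi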
To use huge pages, certain limitations and requirements apply.