Kubernetes objects can be created, updated, and deleted by storing multiple object configuration files in a directory and using kubectl apply to recursively create and update those objects as needed. This method retains writes made to live objects without merging the changes back into the object configuration files. kubectl diff also gives you a preview of what changes apply will make.
Install kubectl.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
To check the version, enter kubectl version.
The kubectl tool supports three kinds of object management:

- Imperative commands
- Imperative object configuration
- Declarative object configuration

See Kubernetes Object Management for a discussion of the advantages and disadvantages of each kind of object management.
Declarative object configuration requires a firm understanding of the Kubernetes object definitions and configuration. Read and complete the following documents if you have not already:
Following are definitions for terms used in this document:
- object configuration file / configuration file: A file that defines the configuration for a Kubernetes object. This topic shows how to pass configuration files to kubectl apply. Configuration files are typically stored in source control, such as Git.
- live object configuration / live configuration: The live configuration values of an object, as observed by the cluster.
- declarative configuration writer / declarative writer: A person or software component that makes updates to a live object. The writers referred to in this topic make changes to object configuration files and run kubectl apply to write the changes.

Use kubectl apply to create all objects, except those that already exist, defined by configuration files in a specified directory:
kubectl apply -f <directory>
This sets the kubectl.kubernetes.io/last-applied-configuration: '{...}' annotation on each object. The annotation contains the contents of the object configuration file that was used to create the object.
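To inspect that annotation later, kubectl provides a helper subcommand; a usage sketch, assuming the nginx-deployment name from the example below:

kubectl apply view-last-applied deployment/nginx-deployment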
Add the -R flag to recursively process directories.

Here's an example of an object configuration file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Run kubectl diff to print the object that will be created:
kubectl diff -f https://k8s.io/examples/application/simple_deployment.yaml
kubectl diff uses server-side dry-run, which needs to be enabled on kube-apiserver.
Since diff performs a server-side apply request in dry-run mode, it requires granting PATCH, CREATE, and UPDATE permissions. See Dry-Run Authorization for details.
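As an illustration, a minimal Role sketch that would grant these verbs for Deployments in one namespace (the role name and scope here are assumptions, not taken from this page):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-dry-run # hypothetical name
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  # the verbs a dry-run apply request from kubectl diff needs
  verbs: ["get", "list", "patch", "create", "update"]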
Create the object using kubectl apply:
kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml
Print the live configuration using kubectl get:
kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml
The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:
kind: Deployment
metadata:
  annotations:
    # ...
    # This is the json representation of simple_deployment.yaml
    # It was written by kubectl apply when the object was created
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
      "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
      "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx",
      "ports":[{"containerPort":80}]}]}}}}
  # ...
spec:
  # ...
  minReadySeconds: 5
  selector:
    matchLabels:
      # ...
      app: nginx
  template:
    metadata:
      # ...
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        # ...
        name: nginx
        ports:
        - containerPort: 80
        # ...
      # ...
    # ...
  # ...
You can also use kubectl apply to update all objects defined in a directory, even if those objects already exist. This approach accomplishes the following:

1. Sets fields that appear in the configuration file in the live configuration.
2. Clears fields removed from the configuration file from the live configuration.
kubectl diff -f <directory>
kubectl apply -f <directory>
Add the -R flag to recursively process directories.

Here's an example configuration file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Create the object using kubectl apply:
kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml
Print the live configuration using kubectl get:
kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml
The output shows that the kubectl.kubernetes.io/last-applied-configuration annotation was written to the live configuration, and it matches the configuration file:
kind: Deployment
metadata:
  annotations:
    # ...
    # This is the json representation of simple_deployment.yaml
    # It was written by kubectl apply when the object was created
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
      "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
      "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx",
      "ports":[{"containerPort":80}]}]}}}}
  # ...
spec:
  # ...
  minReadySeconds: 5
  selector:
    matchLabels:
      # ...
      app: nginx
  template:
    metadata:
      # ...
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        # ...
        name: nginx
        ports:
        - containerPort: 80
        # ...
      # ...
    # ...
  # ...
Directly update the replicas field in the live configuration by using kubectl scale. This does not use kubectl apply:
kubectl scale deployment/nginx-deployment --replicas=2
Print the live configuration using kubectl get:
kubectl get deployment nginx-deployment -o yaml
The output shows that the replicas field has been set to 2, and the last-applied-configuration annotation does not contain a replicas field:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    # ...
    # note that the annotation does not contain replicas
    # because it was not updated through apply
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
      "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
      "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx",
      "ports":[{"containerPort":80}]}]}}}}
  # ...
spec:
  replicas: 2 # written by scale
  # ...
  minReadySeconds: 5
  selector:
    matchLabels:
      # ...
      app: nginx
  template:
    metadata:
      # ...
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        # ...
        name: nginx
        ports:
        - containerPort: 80
        # ...
Update the simple_deployment.yaml configuration file to change the image from nginx:1.14.2 to nginx:1.16.1, and delete the minReadySeconds field:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1 # update the image
        ports:
        - containerPort: 80
Apply the changes made to the configuration file:
kubectl diff -f https://k8s.io/examples/application/update_deployment.yaml
kubectl apply -f https://k8s.io/examples/application/update_deployment.yaml
Print the live configuration using kubectl get:
kubectl get -f https://k8s.io/examples/application/update_deployment.yaml -o yaml
The output shows the following changes to the live configuration:
- The replicas field retains the value of 2 set by kubectl scale. This is possible because it is omitted from the configuration file.
- The image field has been updated to nginx:1.16.1 from nginx:1.14.2.
- The last-applied-configuration annotation has been updated with the new image.
- The minReadySeconds field has been cleared.
- The last-applied-configuration annotation no longer contains the minReadySeconds field.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    # ...
    # The annotation contains the updated image to nginx 1.16.1,
    # but does not contain the updated replicas to 2
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
      "spec":{"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
      "spec":{"containers":[{"image":"nginx:1.16.1","name":"nginx",
      "ports":[{"containerPort":80}]}]}}}}
  # ...
spec:
  replicas: 2 # Set by `kubectl scale`. Ignored by `kubectl apply`.
  # minReadySeconds cleared by `kubectl apply`
  # ...
  selector:
    matchLabels:
      # ...
      app: nginx
  template:
    metadata:
      # ...
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.16.1 # Set by `kubectl apply`
        # ...
        name: nginx
        ports:
        - containerPort: 80
        # ...
      # ...
    # ...
  # ...
Mixing kubectl apply with the imperative object configuration commands create and replace is not supported. This is because create and replace do not retain the kubectl.kubernetes.io/last-applied-configuration annotation that kubectl apply uses to compute updates.

There are two approaches to delete objects managed by kubectl apply.
Recommended: kubectl delete -f <filename>
Manually deleting objects using the imperative command is the recommended approach, as it is more explicit about what is being deleted, and less likely to result in the user deleting something unintentionally:
kubectl delete -f <filename>
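For example, to delete the Deployment created earlier from its manifest:

kubectl delete -f https://k8s.io/examples/application/simple_deployment.yaml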
Alternative: kubectl apply -f <directory> --prune
As an alternative to kubectl delete, you can use kubectl apply to identify objects to be deleted after their manifests have been removed from a directory in the local filesystem.
In Kubernetes 1.33, there are two pruning modes available in kubectl apply:

- Allowlist-based pruning
- ApplySet-based pruning
Kubernetes v1.5 [alpha]
Take care when using --prune with kubectl apply in allow list mode. Which objects are pruned depends on the values of the --prune-allowlist, --selector and --namespace flags, and relies on dynamic discovery of the objects in scope. Especially if flag values are changed between invocations, this can lead to objects being unexpectedly deleted or retained.

To use allowlist-based pruning, add the following flags to your kubectl apply invocation:
- --prune: Delete previously applied objects that are not in the set passed to the current invocation.
- --prune-allowlist: A list of group-version-kinds (GVKs) to consider for pruning. This flag is optional but strongly encouraged, as its default value is a partial list of both namespaced and cluster-scoped types, which can lead to surprising results.
- --selector/-l: Use a label selector to constrain the set of objects selected for pruning. This flag is optional but strongly encouraged.
- --all: Use instead of --selector/-l to explicitly select all previously applied objects of the allowlisted types.

Allowlist-based pruning queries the API server for all objects of the allowlisted GVKs that match the given labels (if any), and attempts to match the returned live object configurations against the object manifest files. If an object matches the query, and it does not have a manifest in the directory, and it has a kubectl.kubernetes.io/last-applied-configuration annotation, it is deleted.
kubectl apply -f <directory> --prune -l <labels> --prune-allowlist=<gvk-list>
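For instance, a hedged invocation that prunes only Deployments and ConfigMaps labeled app=nginx (the directory, label, and GVK choices are illustrative):

kubectl apply -f manifests/ --prune -l app=nginx \
  --prune-allowlist=apps/v1/Deployment \
  --prune-allowlist=core/v1/ConfigMap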
Kubernetes v1.27 [alpha]
kubectl apply --prune --applyset is in alpha, and backwards incompatible changes might be introduced in subsequent releases.

To use ApplySet-based pruning, set the KUBECTL_APPLYSET=true environment variable, and add the following flags to your kubectl apply invocation:
- --prune: Delete previously applied objects that are not in the set passed to the current invocation.
- --applyset: The name of an object that kubectl can use to accurately and efficiently track set membership across apply operations.

KUBECTL_APPLYSET=true kubectl apply -f <directory> --prune --applyset=<name>
By default, the type of the ApplySet parent object used is a Secret. However, ConfigMaps can also be used, in the format --applyset=configmaps/<name>. When using a Secret or ConfigMap, kubectl will create the object if it does not already exist.
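For instance, a sketch that tracks the set with a ConfigMap parent (the directory, namespace, and set name are illustrative):

KUBECTL_APPLYSET=true kubectl apply -f manifests/ -n my-namespace --prune --applyset=configmaps/my-set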
It is also possible to use custom resources as ApplySet parent objects. To enable this, label the Custom Resource Definition (CRD) that defines the resource you want to use with the following: applyset.kubernetes.io/is-parent-type: true. Then, create the object you want to use as an ApplySet parent (kubectl does not do this automatically for custom resources). Finally, refer to that object in the applyset flag as follows: --applyset=<resource>.<group>/<name> (for example, widgets.custom.example.com/widget-name).
With ApplySet-based pruning, kubectl adds the applyset.kubernetes.io/part-of=<parentID> label to each object in the set before they are sent to the server. For performance reasons, it also collects the list of resource types and namespaces that the set contains and adds these in annotations on the live parent object. Finally, at the end of the apply operation, it queries the API server for objects of those types in those namespaces (or in the cluster scope, as applicable) that belong to the set, as defined by the applyset.kubernetes.io/part-of=<parentID> label.
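Because membership is recorded in that label, you can also list the live members of a set yourself; a sketch, with the resource types and namespace assumed for illustration:

kubectl get deployments,configmaps -n my-namespace -l applyset.kubernetes.io/part-of=<parentID>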
Caveats and restrictions:
- The --namespace flag is required when using any namespaced parent, including the default Secret. This means that ApplySets spanning multiple namespaces must use a cluster-scoped custom resource as the parent object.

You can use kubectl get with -o yaml to view the configuration of a live object:
kubectl get -f <filename|url> -o yaml
When kubectl apply updates the live configuration for an object, it does so by sending a patch request to the API server. The patch defines updates scoped to specific fields of the live object configuration. The kubectl apply command calculates this patch request using the configuration file, the live configuration, and the last-applied-configuration annotation stored in the live configuration.
The kubectl apply command writes the contents of the configuration file to the kubectl.kubernetes.io/last-applied-configuration annotation. This is used to identify fields that have been removed from the configuration file and need to be cleared from the live configuration. Here are the steps used to calculate which fields should be deleted or set:

1. Calculate the fields to delete. These are the fields present in last-applied-configuration and missing from the configuration file.
2. Calculate the fields to add or set. These are the fields present in the configuration file whose values don't match the live configuration.

Here's an example. Suppose this is the configuration file for a Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1 # update the image
        ports:
        - containerPort: 80
Also, suppose this is the live configuration for the same Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    # ...
    # note that the annotation does not contain replicas
    # because it was not updated through apply
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
      "spec":{"minReadySeconds":5,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
      "spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx",
      "ports":[{"containerPort":80}]}]}}}}
  # ...
spec:
  replicas: 2 # written by scale
  # ...
  minReadySeconds: 5
  selector:
    matchLabels:
      # ...
      app: nginx
  template:
    metadata:
      # ...
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        # ...
        name: nginx
        ports:
        - containerPort: 80
        # ...
Here are the merge calculations that would be performed by kubectl apply:
1. Calculate the fields to delete by reading values from last-applied-configuration and comparing them to values in the configuration file. Clear fields explicitly set to null in the local object configuration file regardless of whether they appear in the last-applied-configuration. In this example, minReadySeconds appears in the last-applied-configuration annotation, but does not appear in the configuration file. Action: Clear minReadySeconds from the live configuration.
2. Calculate the fields to set by reading values from the configuration file and comparing them to values in the live configuration. In this example, the value of image in the configuration file does not match the value in the live configuration. Action: Set the value of image in the live configuration.
3. Set the last-applied-configuration annotation to match the value of the configuration file.
4. Merge the results from 1, 2, and 3 into a single patch request to the API server.

Here is the live configuration that is the result of the merge:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    # ...
    # The annotation contains the updated image to nginx 1.16.1,
    # but does not contain the updated replicas to 2
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment",
      "metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
      "spec":{"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},
      "spec":{"containers":[{"image":"nginx:1.16.1","name":"nginx",
      "ports":[{"containerPort":80}]}]}}}}
  # ...
spec:
  selector:
    matchLabels:
      # ...
      app: nginx
  replicas: 2 # Set by `kubectl scale`. Ignored by `kubectl apply`.
  # minReadySeconds cleared by `kubectl apply`
  # ...
  template:
    metadata:
      # ...
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.16.1 # Set by `kubectl apply`
        # ...
        name: nginx
        ports:
        - containerPort: 80
        # ...
      # ...
    # ...
  # ...
How a particular field in a configuration file is merged with the live configuration depends on the type of the field. There are several types of fields:
- primitive: A field of type string, integer, or boolean. For example, image and replicas are primitive fields. Action: Replace.
- map, also called object: A field of type map or a complex type that contains subfields. For example, labels, annotations, spec and metadata are all maps. Action: Merge elements or subfields.
- list: A field containing a list of items that can be either primitive types or maps. For example, containers, ports, and args are lists. Action: Varies.
When kubectl apply updates a map or list field, it typically does not replace the entire field, but instead updates the individual subelements. For instance, when merging the spec on a Deployment, the entire spec is not replaced. Instead the subfields of spec, such as replicas, are compared and merged.
Primitive fields are replaced or cleared.
'-' is used for "not applicable" because the value is not used.

Field in object configuration file | Field in live object configuration | Field in last-applied-configuration | Action |
---|---|---|---|
Yes | Yes | - | Set live to configuration file value. |
Yes | No | - | Set live to local configuration. |
No | - | Yes | Clear from live configuration. |
No | - | No | Do nothing. Keep live value. |
Fields that represent maps are merged by comparing each of the subfields or elements of the map:
'-' is used for "not applicable" because the value is not used.

Key in object configuration file | Key in live object configuration | Field in last-applied-configuration | Action |
---|---|---|---|
Yes | Yes | - | Compare subfield values. |
Yes | No | - | Set live to local configuration. |
No | - | Yes | Delete from live configuration. |
No | - | No | Do nothing. Keep live value. |
Merging changes to a list uses one of three strategies, each described below:

- Replace the list if all its elements are primitives.
- Merge individual elements in a list of complex elements.
- Merge a list of primitive elements.

The choice of strategy is made on a per-field basis.
Treat the list the same as a primitive field. Replace or delete the entire list. This preserves ordering.
Example: Use kubectl apply to update the args field of a Container in a Pod. This sets the value of args in the live configuration to the value in the configuration file. Any args elements that had previously been added to the live configuration are lost. The order of the args elements defined in the configuration file is retained in the live configuration.
# last-applied-configuration value
args: ["a", "b"]

# configuration file value
args: ["a", "c"]

# live configuration
args: ["a", "b", "d"]

# result after merge
args: ["a", "c"]
Explanation: The merge used the configuration file value as the new list value.
Treat the list as a map, and treat a specific field of each element as a key. Add, delete, or update individual elements. This does not preserve ordering.
This merge strategy uses a special tag on each field called a patchMergeKey. The patchMergeKey is defined for each field in the Kubernetes source code: types.go. When merging a list of maps, the field specified as the patchMergeKey for a given element is used like a map key for that element.
Example: Use kubectl apply to update the containers field of a PodSpec. This merges the list as though it was a map where each element is keyed by name.
# last-applied-configuration value
containers:
- name: nginx
  image: nginx:1.16
- name: nginx-helper-a # key: nginx-helper-a; will be deleted in result
  image: helper:1.3
- name: nginx-helper-b # key: nginx-helper-b; will be retained
  image: helper:1.3

# configuration file value
containers:
- name: nginx
  image: nginx:1.16
- name: nginx-helper-b
  image: helper:1.3
- name: nginx-helper-c # key: nginx-helper-c; will be added in result
  image: helper:1.3

# live configuration
containers:
- name: nginx
  image: nginx:1.16
- name: nginx-helper-a
  image: helper:1.3
- name: nginx-helper-b
  image: helper:1.3
  args: ["run"] # Field will be retained
- name: nginx-helper-d # key: nginx-helper-d; will be retained
  image: helper:1.3

# result after merge
containers:
- name: nginx
  image: nginx:1.16
# Element nginx-helper-a was deleted
- name: nginx-helper-b
  image: helper:1.3
  args: ["run"] # Field was retained
- name: nginx-helper-c # Element was added
  image: helper:1.3
- name: nginx-helper-d # Element was ignored
  image: helper:1.3
Explanation:

- The container named nginx-helper-a was deleted because no container of that name appeared in the configuration file.
- The container named nginx-helper-b retained the changes to args in the live configuration. kubectl apply was able to identify that "nginx-helper-b" in the live configuration was the same "nginx-helper-b" as in the configuration file, even though their fields had different values (no args in the configuration file). This is because the patchMergeKey field value (name) was identical in both.
- The container named nginx-helper-c was added because no container of that name appeared in the live configuration, but one of that name appeared in the configuration file.
- The container named nginx-helper-d was retained because no element of that name appeared in the last-applied-configuration.

As of Kubernetes 1.5, merging lists of primitive elements is not supported.
Which of the above strategies is used for a given list is controlled by the patchStrategy tag in types.go. If no patchStrategy is specified for a field of type list, then the list is replaced.

The API server sets certain fields to default values in the live configuration if they are not specified when the object is created.
Here's a configuration file for a Deployment. The file does not specify strategy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Create the object using kubectl apply:
kubectl apply -f https://k8s.io/examples/application/simple_deployment.yaml
Print the live configuration using kubectl get:
kubectl get -f https://k8s.io/examples/application/simple_deployment.yaml -o yaml
The output shows that the API server set several fields to default values in the live configuration. These fields were not specified in the configuration file.
apiVersion: apps/v1
kind: Deployment
# ...
spec:
  selector:
    matchLabels:
      app: nginx
  minReadySeconds: 5
  replicas: 1 # defaulted by apiserver
  strategy:
    rollingUpdate: # defaulted by apiserver - derived from strategy.type
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate # defaulted by apiserver
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        imagePullPolicy: IfNotPresent # defaulted by apiserver
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP # defaulted by apiserver
        resources: {} # defaulted by apiserver
        terminationMessagePath: /dev/termination-log # defaulted by apiserver
      dnsPolicy: ClusterFirst # defaulted by apiserver
      restartPolicy: Always # defaulted by apiserver
      securityContext: {} # defaulted by apiserver
      terminationGracePeriodSeconds: 30 # defaulted by apiserver
# ...
In a patch request, defaulted fields are not re-defaulted unless they are explicitly cleared as part of a patch request. This can cause unexpected behavior for fields that are defaulted based on the values of other fields. When the other fields are later changed, the values defaulted from them will not be updated unless they are explicitly cleared.
For this reason, it is recommended that certain fields defaulted by the server are explicitly defined in the configuration file, even if the desired values match the server defaults. This makes it easier to recognize conflicting values that will not be re-defaulted by the server.
Example:
# last-applied-configuration
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

# configuration file
spec:
  strategy:
    type: Recreate # updated value
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

# live configuration
spec:
  strategy:
    type: RollingUpdate # defaulted value
    rollingUpdate: # defaulted value derived from type
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

# result after merge - ERROR!
spec:
  strategy:
    type: Recreate # updated value: incompatible with rollingUpdate
    rollingUpdate: # defaulted value: incompatible with "type: Recreate"
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Explanation:

1. The user creates a Deployment without defining strategy.type.
2. The server defaults strategy.type to RollingUpdate and defaults the strategy.rollingUpdate values.
3. The user changes strategy.type to Recreate. The strategy.rollingUpdate values remain at their defaulted values, though the server expects them to be cleared. If the strategy.rollingUpdate values had been defined initially in the configuration file, it would have been more clear that they needed to be deleted.
4. Apply fails because strategy.rollingUpdate is not cleared. The strategy.rollingUpdate field cannot be defined with a strategy.type of Recreate.

Recommendation: Fields that the server defaults from the values of other fields, such as the Deployment rollout strategy, should be explicitly defined in the object configuration file; a minimal sketch follows below.
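A minimal sketch of that recommendation, pinning the rollout strategy in the configuration file even though the values match the server defaults:

spec:
  strategy:
    type: RollingUpdate # explicit, even though it matches the default
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1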
Fields that do not appear in the configuration file can be cleared by setting their values to null and then applying the configuration file. For fields defaulted by the server, this triggers re-defaulting the values.
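A minimal sketch, continuing the Deployment example (clearing minReadySeconds is illustrative):

# in the configuration file passed to kubectl apply
spec:
  minReadySeconds: null # explicitly cleared; the server then re-defaults it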
These are the only methods you should use to change an individual object field:

- Use kubectl apply.
- Write directly to the live configuration without modifying the configuration file: for example, use kubectl scale.

To change ownership of a field from a direct imperative writer to a configuration file, add the field to the configuration file. For the field, discontinue direct updates to the live configuration that do not go through kubectl apply.
As of Kubernetes 1.5, changing ownership of a field from a configuration file to an imperative writer requires manual steps:

- Remove the field from the configuration file.
- Remove the field from the kubectl.kubernetes.io/last-applied-configuration annotation on the live object.

Kubernetes objects should be managed using only one method at a time. Switching from one method to another is possible, but is a manual process.
Migrating from imperative command management to declarative object configuration involves several manual steps:
Export the live object to a local configuration file:
kubectl get <kind>/<name> -o yaml > <kind>_<name>.yaml
Manually remove the status field from the configuration file. (kubectl apply does not update the status field even if it is present in the configuration file.)

Set the kubectl.kubernetes.io/last-applied-configuration annotation on the object:
kubectl replace --save-config -f <kind>_<name>.yaml
Change processes to use kubectl apply for managing the object exclusively.
To migrate from imperative object configuration to declarative object configuration:

Set the kubectl.kubernetes.io/last-applied-configuration annotation on the object:
kubectl replace --save-config -f <kind>_<name>.yaml
Change processes to use kubectl apply for managing the object exclusively.
The recommended approach is to define a single, immutable PodTemplate label used only by the controller selector with no other semantic meaning.
Example:
selector:
  matchLabels:
    controller-selector: "apps/v1/deployment/nginx"
template:
  metadata:
    labels:
      controller-selector: "apps/v1/deployment/nginx"
Kustomize is a standalone tool to customize Kubernetes objects through a kustomization file.
Since 1.14, kubectl also supports the management of Kubernetes objects using a kustomization file. To view resources found in a directory containing a kustomization file, run the following command:
kubectl kustomize <kustomization_directory>
To apply those resources, run kubectl apply with the --kustomize or -k flag:
kubectl apply -k <kustomization_directory>
Install kubectl.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
To check the version, enter kubectl version.
Kustomize is a tool for customizing Kubernetes configurations. It has the following features to manage application configuration files:

- generating resources from other sources
- setting cross-cutting fields for resources
- composing and customizing collections of resources
ConfigMaps and Secrets hold configuration or sensitive data that are used by other Kubernetes objects, such as Pods. The source of truth of ConfigMaps or Secrets is usually external to a cluster, such as a .properties file or an SSH keyfile. Kustomize has secretGenerator and configMapGenerator, which generate Secret and ConfigMap from files or literals.
To generate a ConfigMap from a file, add an entry to the files list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a .properties file:
# Create an application.properties file
cat <<EOF >application.properties
FOO=Bar
EOF

cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: example-configmap-1
  files:
  - application.properties
EOF
The generated ConfigMap can be examined with the following command:
kubectl kustomize ./
The generated ConfigMap is:
apiVersion: v1
data:
  application.properties: |
    FOO=Bar
kind: ConfigMap
metadata:
  name: example-configmap-1-8mbdf7882g
To generate a ConfigMap from an env file, add an entry to the envs list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a .env file:
# Create a .env file
cat <<EOF >.env
FOO=Bar
EOF

cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: example-configmap-1
  envs:
  - .env
EOF
The generated ConfigMap can be examined with the following command:
kubectl kustomize ./
The generated ConfigMap is:
apiVersion: v1
data:
  FOO: Bar
kind: ConfigMap
metadata:
  name: example-configmap-1-42cfbf598f
Each variable in the .env file becomes a separate key in the ConfigMap that you generate. This is different from the previous example which embeds a file named application.properties (and all its entries) as the value for a single key.

ConfigMaps can also be generated from literal key-value pairs. To generate a ConfigMap from a literal key-value pair, add an entry to the literals list in configMapGenerator. Here is an example of generating a ConfigMap with a data item from a key-value pair:
cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: example-configmap-2
  literals:
  - FOO=Bar
EOF
The generated ConfigMap can be checked by the following command:
kubectl kustomize ./
The generated ConfigMap is:
apiVersion: v1
data:
  FOO: Bar
kind: ConfigMap
metadata:
  name: example-configmap-2-g2hdhfc6tk
To use a generated ConfigMap in a Deployment, reference it by the name of the configMapGenerator. Kustomize will automatically replace this name with the generated name.
This is an example deployment that uses a generated ConfigMap:
# Create an application.properties file
cat <<EOF >application.properties
FOO=Bar
EOF

cat <<EOF >deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        configMap:
          name: example-configmap-1
EOF

cat <<EOF >./kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: example-configmap-1
  files:
  - application.properties
EOF
Generate the ConfigMap and Deployment:
kubectl kustomize ./
The generated Deployment will refer to the generated ConfigMap by name:
apiVersion: v1
data:
  application.properties: |
    FOO=Bar
kind: ConfigMap
metadata:
  name: example-configmap-1-g4hk9g2ff8
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: my-app
        name: app
        volumeMounts:
        - mountPath: /config
          name: config
      volumes:
      - configMap:
          name: example-configmap-1-g4hk9g2ff8
        name: config
You can generate Secrets from files or literal key-value pairs. To generate a Secret from a file, add an entry to the files list in secretGenerator. Here is an example of generating a Secret with a data item from a file:
# Create a password.txt file
cat <<EOF >./password.txt
username=admin
password=secret
EOF

cat <<EOF >./kustomization.yaml
secretGenerator:
- name: example-secret-1
  files:
  - password.txt
EOF
The generated Secret is as follows:
apiVersion: v1
data:
  password.txt: dXNlcm5hbWU9YWRtaW4KcGFzc3dvcmQ9c2VjcmV0Cg==
kind: Secret
metadata:
  name: example-secret-1-t2kt65hgtb
type: Opaque
To generate a Secret from a literal key-value pair, add an entry to the literals list in secretGenerator. Here is an example of generating a Secret with a data item from a key-value pair:
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: example-secret-2
  literals:
  - username=admin
  - password=secret
EOF
The generated Secret is as follows:
apiVersion: v1
data:
  password: c2VjcmV0
  username: YWRtaW4=
kind: Secret
metadata:
  name: example-secret-2-t52t6g96d8
type: Opaque
Like ConfigMaps, generated Secrets can be used in Deployments by referring to the name of the secretGenerator:
# Create a password.txt file
cat <<EOF >./password.txt
username=admin
password=secret
EOF

cat <<EOF >deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app
        volumeMounts:
        - name: password
          mountPath: /secrets
      volumes:
      - name: password
        secret:
          secretName: example-secret-1
EOF

cat <<EOF >./kustomization.yaml
resources:
- deployment.yaml
secretGenerator:
- name: example-secret-1
  files:
  - password.txt
EOF
The generated ConfigMaps and Secrets have a content hash suffix appended. This ensures that a new ConfigMap or Secret is generated when the contents are changed. To disable the behavior of appending a suffix, one can use generatorOptions. Besides that, it is also possible to specify cross-cutting options for generated ConfigMaps and Secrets.
cat <<EOF >./kustomization.yaml
configMapGenerator:
- name: example-configmap-3
  literals:
  - FOO=Bar
generatorOptions:
  disableNameSuffixHash: true
  labels:
    type: generated
  annotations:
    note: generated
EOF
Run kubectl kustomize ./ to view the generated ConfigMap:
apiVersion: v1
data:
  FOO: Bar
kind: ConfigMap
metadata:
  annotations:
    note: generated
  labels:
    type: generated
  name: example-configmap-3
It is quite common to set cross-cutting fields for all Kubernetes resources in a project. Some use cases for setting cross-cutting fields:

- setting the same namespace for all resources
- adding the same name prefix or suffix
- adding the same set of labels
- adding the same set of annotations
Here is an example:
# Create a deployment.yaml
cat <<EOF >./deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF

cat <<EOF >./kustomization.yaml
namespace: my-namespace
namePrefix: dev-
nameSuffix: "-001"
labels:
- pairs:
    app: bingo
  includeSelectors: true
commonAnnotations:
  oncallPager: 800-555-1212
resources:
- deployment.yaml
EOF
Run kubectl kustomize ./ to see that those fields are all set in the Deployment resource:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    oncallPager: 800-555-1212
  labels:
    app: bingo
  name: dev-nginx-deployment-001
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: bingo
  template:
    metadata:
      annotations:
        oncallPager: 800-555-1212
      labels:
        app: bingo
    spec:
      containers:
      - image: nginx
        name: nginx
It is common to compose a set of resources in a project and manage them inside the same file or directory. Kustomize offers composing resources from different files and applying patches or other customization to them.
Kustomize supports composition of different resources. The resources field, in the kustomization.yaml file, defines the list of resources to include in a configuration. Set the path to a resource's configuration file in the resources list. Here is an example of an NGINX application comprised of a Deployment and a Service:
# Create a deployment.yaml file
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

# Create a service.yaml file
cat <<EOF > service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
EOF

# Create a kustomization.yaml composing them
cat <<EOF >./kustomization.yaml
resources:
- deployment.yaml
- service.yaml
EOF
The resources from kubectl kustomize ./ contain both the Deployment and the Service objects.
Patches can be used to apply different customizations to resources. Kustomize supports different patching mechanisms through StrategicMerge and Json6902 using the patches field. patches may be a file or an inline string, targeting a single or multiple resources.

The patches field contains a list of patches applied in the order they are specified. The patch target selects resources by group, version, kind, name, namespace, labelSelector and annotationSelector.
Small patches that do one thing are recommended. For example, create one patch for increasing the deployment replica number and another patch for setting the memory limit. The target resource is matched using the group, version, kind, and name fields from the patch file.
# Create a deployment.yaml file
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

# Create a patch increase_replicas.yaml
cat <<EOF > increase_replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
EOF

# Create another patch set_memory.yaml
cat <<EOF > set_memory.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  template:
    spec:
      containers:
      - name: my-nginx
        resources:
          limits:
            memory: 512Mi
EOF

cat <<EOF >./kustomization.yaml
resources:
- deployment.yaml
patches:
- path: increase_replicas.yaml
- path: set_memory.yaml
EOF
Run kubectl kustomize ./ to view the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - image: nginx
        name: my-nginx
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: 512Mi
Not all resources or fields support strategicMerge patches. To support modifying arbitrary fields in arbitrary resources, Kustomize offers applying a JSON patch through Json6902. To find the correct resource for a Json6902 patch, it is mandatory to specify the target field in kustomization.yaml.
For example, increasing the replica number of a Deployment object can also be done through a Json6902 patch. The target resource is matched using group, version, kind, and name from the target field.
# Create a deployment.yaml file
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

# Create a json patch
cat <<EOF > patch.yaml
- op: replace
  path: /spec/replicas
  value: 3
EOF

# Create a kustomization.yaml
cat <<EOF >./kustomization.yaml
resources:
- deployment.yaml
patches:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: my-nginx
  path: patch.yaml
EOF
Run kubectl kustomize ./ to see that the replicas field is updated:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - image: nginx
        name: my-nginx
        ports:
        - containerPort: 80
In addition to patches, Kustomize also offers customizing container images or injecting field values from other objects into containers without creating patches. For example, you can change the image used inside containers by specifying the new image in the images field in kustomization.yaml.
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

cat <<EOF >./kustomization.yaml
resources:
- deployment.yaml
images:
- name: nginx
  newName: my.image.registry/nginx
  newTag: 1.4.0
EOF
Run kubectl kustomize ./ to see that the image being used is updated:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - image: my.image.registry/nginx:1.4.0
        name: my-nginx
        ports:
        - containerPort: 80
Sometimes, the application running in a Pod may need to use configuration values from other objects. For example, a Pod from a Deployment object may need to read the corresponding Service name from an environment variable or as a command argument. Since the Service name may change as namePrefix or nameSuffix is added in the kustomization.yaml file, it is not recommended to hardcode the Service name in the command argument. For this usage, Kustomize can inject the Service name into containers through replacements.
# Create a deployment.yaml file (quoting the here doc delimiter)
cat <<'EOF' > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        command: ["start", "--host", "MY_SERVICE_NAME_PLACEHOLDER"]
EOF

# Create a service.yaml file
cat <<EOF > service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
EOF

cat <<EOF >./kustomization.yaml
namePrefix: dev-
nameSuffix: "-001"
resources:
- deployment.yaml
- service.yaml
replacements:
- source:
    kind: Service
    name: my-nginx
    fieldPath: metadata.name
  targets:
  - select:
      kind: Deployment
      name: my-nginx
    fieldPaths:
    - spec.template.spec.containers.0.command.2
EOF
Run kubectl kustomize ./ to see that the Service name injected into containers is dev-my-nginx-001:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-my-nginx-001
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-nginx
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - command:
        - start
        - --host
        - dev-my-nginx-001
        image: nginx
        name: my-nginx
Kustomize has the concepts of bases and overlays. A base is a directory with a kustomization.yaml, which contains a set of resources and associated customization. A base could be either a local directory or a directory from a remote repo, as long as a kustomization.yaml is present inside. An overlay is a directory with a kustomization.yaml that refers to other kustomization directories as its bases. A base has no knowledge of an overlay and can be used in multiple overlays.
The kustomization.yaml in an overlay directory may refer to multiple bases, combining all the resources defined in these bases into a unified configuration. Additionally, it can apply customizations on top of these resources to meet specific requirements.
Here is an example of a base:
# Create a directory to hold the base
mkdir base

# Create a base/deployment.yaml
cat <<EOF > base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
EOF

# Create a base/service.yaml file
cat <<EOF > base/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
EOF

# Create a base/kustomization.yaml
cat <<EOF > base/kustomization.yaml
resources:
- deployment.yaml
- service.yaml
EOF
This base can be used in multiple overlays. You can add different namePrefix or other cross-cutting fields in different overlays. Here are two overlays using the same base.
mkdir dev
cat <<EOF > dev/kustomization.yaml
resources:
- ../base
namePrefix: dev-
EOF

mkdir prod
cat <<EOF > prod/kustomization.yaml
resources:
- ../base
namePrefix: prod-
EOF
Use --kustomize or -k in kubectl commands to recognize resources managed by kustomization.yaml. Note that -k should point to a kustomization directory, such as:
kubectl apply -k <kustomization directory>/
Given the following kustomization.yaml,
# Create a deployment.yaml file
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

# Create a kustomization.yaml
cat <<EOF >./kustomization.yaml
namePrefix: dev-
labels:
- pairs:
    app: my-nginx
  includeSelectors: true
resources:
- deployment.yaml
EOF
Run the following command to apply the Deployment object dev-my-nginx:
> kubectl apply -k ./
deployment.apps/dev-my-nginx created
Run one of the following commands to view the Deployment object dev-my-nginx:
kubectl get -k ./
kubectl describe -k ./
Run the following command to compare the Deployment object dev-my-nginx against the state that the cluster would be in if the manifest was applied:
kubectl diff -k ./
Run the following command to delete the Deployment object dev-my-nginx:
> kubectl delete -k ./
deployment.apps "dev-my-nginx" deleted
Field | Type | Explanation |
---|---|---|
bases | []string | Each entry in this list should resolve to a directory containing a kustomization.yaml file |
commonAnnotations | map[string]string | annotations to add to all resources |
commonLabels | map[string]string | labels to add to all resources and selectors |
configMapGenerator | []ConfigMapArgs | Each entry in this list generates a ConfigMap |
configurations | []string | Each entry in this list should resolve to a file containing Kustomize transformer configurations |
crds | []string | Each entry in this list should resolve to an OpenAPI definition file for Kubernetes types |
generatorOptions | GeneratorOptions | Modify behaviors of all ConfigMap and Secret generators |
images | []Image | Each entry is to modify the name, tags and/or digest for one image without creating patches |
labels | map[string]string | Add labels without automatically injecting corresponding selectors |
namePrefix | string | value of this field is prepended to the names of all resources |
nameSuffix | string | value of this field is appended to the names of all resources |
patchesJson6902 | []Patch | Each entry in this list should resolve to a Kubernetes object and a Json Patch |
patchesStrategicMerge | []string | Each entry in this list should resolve a strategic merge patch of a Kubernetes object |
replacements | []Replacements | copy the value from a resource's field into any number of specified targets. |
resources | []string | Each entry in this list must resolve to an existing resource configuration file |
secretGenerator | []SecretArgs | Each entry in this list generates a Secret |
vars | []Var | Each entry is to capture text from one resource's field |
Kubernetes objects can quickly be created, updated, and deleted directly using imperative commands built into the kubectl command-line tool. This document explains how those commands are organized and how to use them to manage live objects.
Install kubectl.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
To check the version, enter kubectl version.
The kubectl tool supports three kinds of object management:

- Imperative commands
- Imperative object configuration
- Declarative object configuration

See Kubernetes Object Management for a discussion of the advantages and disadvantages of each kind of object management.
The kubectl tool supports verb-driven commands for creating some of the most common object types. The commands are named to be recognizable to users unfamiliar with the Kubernetes object types.

- run: Create a new Pod to run a Container.
- expose: Create a new Service object to load balance traffic across Pods.
- autoscale: Create a new Autoscaler object to automatically horizontally scale a controller, such as a Deployment.

The kubectl tool also supports creation commands driven by object type. These commands support more object types and are more explicit about their intent, but require users to know the type of objects they intend to create.
create <objecttype> [<subtype>] <instancename>
Some object types have subtypes that you can specify in the create command. For example, the Service object has several subtypes including ClusterIP, LoadBalancer, and NodePort. Here's an example that creates a Service with subtype NodePort:
kubectl create service nodeport <myservicename>
In the preceding example, the create service nodeport command is called a subcommand of the create service command.
You can use the -h flag to find the arguments and flags supported by a subcommand:
kubectl create service nodeport -h
The kubectl command supports verb-driven commands for some common update operations. These commands are named to enable users unfamiliar with Kubernetes objects to perform updates without knowing the specific fields that must be set:

- scale: Horizontally scale a controller to add or remove Pods by updating the replica count of the controller.
- annotate: Add or remove an annotation from an object.
- label: Add or remove a label from an object.

The kubectl command also supports update commands driven by an aspect of the object. Setting this aspect may set different fields for different object types:

- set <field>: Set an aspect of an object.
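For example, a brief usage sketch of set (the Deployment and image names here are illustrative):

kubectl set image deployment/nginx nginx=nginx:1.16.1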
tool supports these additional ways to update a live object directly, however they require a better understanding of the Kubernetes object schema.
edit
: Directly edit the raw configuration of a live object by opening its configuration in an editor.patch
: Directly modify specific fields of a live object by using a patch string. For more details on patch strings, see the patch section in API Conventions.You can use the delete
command to delete an object from a cluster:
delete <type>/<name>
kubectl delete
for both imperative commands and imperative object configuration. The difference is in the arguments passed to the command. To use kubectl delete
as an imperative command, pass the object to be deleted as an argument. Here's an example that passes a Deployment object named nginx:kubectl delete deployment/nginx
There are several commands for printing information about an object:
- get: Prints basic information about matching objects. Use get -h to see a list of options.
- describe: Prints aggregated detailed information about matching objects.
- logs: Prints the stdout and stderr for a container running in a Pod.

Using set commands to modify objects before creation

There are some object fields that don't have a flag you can use in a create command. In some of those cases, you can use a combination of set and create to specify a value for the field before object creation. This is done by piping the output of the create command to the set command, and then back to the create command. Here's an example:
kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client | kubectl set selector --local -f - 'environment=qa' -o yaml | kubectl create -f -
1. The kubectl create service -o yaml --dry-run=client command creates the configuration for the Service, but prints it to stdout as YAML instead of sending it to the Kubernetes API server.
2. The kubectl set selector --local -f - -o yaml command reads the configuration from stdin, and writes the updated configuration to stdout as YAML.
3. The kubectl create -f - command creates the object using the configuration provided via stdin.

Using --edit to modify objects before creation

You can use kubectl create --edit to make arbitrary changes to an object before it is created. Here's an example:
kubectl create service clusterip my-svc --clusterip="None" -o yaml --dry-run=client > /tmp/srv.yaml
kubectl create --edit -f /tmp/srv.yaml
1. The kubectl create service command creates the configuration for the Service and saves it to /tmp/srv.yaml.
2. The kubectl create --edit command opens the configuration file for editing before it creates the object.

Kubernetes objects can be created, updated, and deleted by using the kubectl command-line tool along with an object configuration file written in YAML or JSON. This document explains how to define and manage objects using configuration files.
Install kubectl.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
To check the version, enter kubectl version.
The kubectl tool supports three kinds of object management:

- Imperative commands
- Imperative object configuration
- Declarative object configuration

See Kubernetes Object Management for a discussion of the advantages and disadvantages of each kind of object management.
You can use kubectl create -f to create an object from a configuration file. Refer to the Kubernetes API reference for details.
kubectl create -f <filename|url>
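For example, using the sample manifest referenced elsewhere on this page:

kubectl create -f https://k8s.io/examples/application/simple_deployment.yaml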
The replace command drops all parts of the spec not specified in the configuration file. This should not be used with objects whose specs are partially managed by the cluster, such as Services of type LoadBalancer, where the externalIPs field is managed independently from the configuration file. Independently managed fields must be copied to the configuration file to prevent replace from dropping them.

You can use kubectl replace -f to update a live object according to a configuration file.
kubectl replace -f <filename|url>
You can use kubectl delete -f to delete an object that is described in a configuration file.
kubectl delete -f <filename|url>
If the configuration file has specified the generateName field in the metadata section instead of the name field, you cannot delete the object using kubectl delete -f <filename|url>. You will have to use other flags for deleting the object. For example:
kubectl delete <type> <name>
kubectl delete <type> -l <label>
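For instance, a sketch that deletes by label instead of by file (the label is illustrative):

kubectl delete deployment -l app=nginx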
You can use kubectl get -f to view information about an object that is described in a configuration file.
kubectl get -f <filename|url> -o yaml
The -o yaml flag specifies that the full object configuration is printed. Use kubectl get -h to see a list of options.
The create, replace, and delete commands work well when each object's configuration is fully defined and recorded in its configuration file. However, when a live object is updated and the updates are not merged into its configuration file, the updates will be lost the next time a replace is executed. This can happen if a controller, such as a HorizontalPodAutoscaler, makes updates directly to a live object. Here's an example:

1. You create an object from a configuration file.
2. Another writer, such as a HorizontalPodAutoscaler, updates a field directly on the live object.
3. You run replace from the same configuration file, and the change made by the other writer in step 2 is lost.
If you need to support multiple writers to the same object, you can use kubectl apply to manage the object.
Suppose you have the URL of an object configuration file. You can use kubectl create --edit to make changes to the configuration before the object is created. This is particularly useful for tutorials and tasks that point to a configuration file that could be modified by the reader.
kubectl create -f <url> --edit
Migrating from imperative commands to imperative object configuration involves several manual steps.
Export the live object to a local object configuration file:
kubectl get <kind>/<name> -o yaml > <kind>_<name>.yaml
Manually remove the status field from the object configuration file.
For subsequent object management, use replace exclusively.
kubectl replace -f <kind>_<name>.yaml
The recommended approach is to define a single, immutable PodTemplate label used only by the controller selector with no other semantic meaning.
Example label:
selector:
  matchLabels:
    controller-selector: "apps/v1/deployment/nginx"
template:
  metadata:
    labels:
      controller-selector: "apps/v1/deployment/nginx"
This task shows how to use kubectl patch to update an API object in place. The exercises in this task demonstrate a strategic merge patch and a JSON merge patch.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
To check the version, enter kubectl version.
Here's the configuration file for a Deployment that has two replicas. Each replica is a Pod that has one container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patch-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: patch-demo-ctr
        image: nginx
      tolerations:
      - effect: NoSchedule
        key: dedicated
        value: test-team
Create the Deployment:
kubectl apply -f https://k8s.io/examples/application/deployment-patch.yaml
View the Pods associated with your Deployment:
kubectl get pods
The output shows that the Deployment has two Pods. The 1/1
indicates that each Pod has one container:
NAME                        READY   STATUS    RESTARTS   AGE
patch-demo-28633765-670qr   1/1     Running   0          23s
patch-demo-28633765-j5qs3   1/1     Running   0          23s
Make a note of the names of the running Pods. Later, you will see that these Pods get terminated and replaced by new ones.
At this point, each Pod has one Container that runs the nginx image. Now suppose you want each Pod to have two containers: one that runs nginx and one that runs redis.
Create a file named patch-file.yaml
that has this content:
spec:
  template:
    spec:
      containers:
      - name: patch-demo-ctr-2
        image: redis
Patch your Deployment:
kubectl patch deployment patch-demo --patch-file patch-file.yaml
View the patched Deployment:
kubectl get deployment patch-demo --output yaml
The output shows that the PodSpec in the Deployment has two Containers:
containers:
- image: redis
  imagePullPolicy: Always
  name: patch-demo-ctr-2
  ...
- image: nginx
  imagePullPolicy: Always
  name: patch-demo-ctr
  ...
View the Pods associated with your patched Deployment:
kubectl get pods
The output shows that the running Pods have different names from the Pods that were running previously. The Deployment terminated the old Pods and created two new Pods that comply with the updated Deployment spec. The 2/2
indicates that each Pod has two Containers:
NAME                          READY   STATUS    RESTARTS   AGE
patch-demo-1081991389-2wrn5   2/2     Running   0          1m
patch-demo-1081991389-jmg7b   2/2     Running   0          1m
Take a closer look at one of the patch-demo Pods:
kubectl get pod <your-pod-name> --output yaml
The output shows that the Pod has two Containers: one running nginx and one running redis:
containers:
- image: redis
  ...
- image: nginx
  ...
The patch you did in the preceding exercise is called a strategic merge patch. Notice that the patch did not replace the containers
list. Instead it added a new Container to the list. In other words, the list in the patch was merged with the existing list. This is not always what happens when you use a strategic merge patch on a list. In some cases, the list is replaced, not merged.
With a strategic merge patch, a list is either replaced or merged depending on its patch strategy. The patch strategy is specified by the value of the patchStrategy
key in a field tag in the Kubernetes source code. For example, the Containers
field of PodSpec
struct has a patchStrategy
of merge
:
type PodSpec struct {
  ...
  Containers []Container `json:"containers" patchStrategy:"merge" patchMergeKey:"name" ...`
  ...
}
You can also see the patch strategy in the OpenAPI spec:
"io.k8s.api.core.v1.PodSpec": {...,"containers": {"description": "List of containers belonging to the pod. ...."},"x-kubernetes-patch-merge-key": "name","x-kubernetes-patch-strategy": "merge"}
And you can see the patch strategy in the Kubernetes API documentation.
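If you want to check a field's patch strategy without reading the source, you can query the cluster's OpenAPI document directly. A minimal sketch, assuming jq is installed:

kubectl get --raw /openapi/v2 \
  | jq '.definitions["io.k8s.api.core.v1.PodSpec"].properties.containers
        | {strategy: ."x-kubernetes-patch-strategy", mergeKey: ."x-kubernetes-patch-merge-key"}'

This should print the merge strategy and merge key shown above.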
Create a file named patch-file-tolerations.yaml
that has this content:
spec:
  template:
    spec:
      tolerations:
      - effect: NoSchedule
        key: disktype
        value: ssd
Patch your Deployment:
kubectl patch deployment patch-demo --patch-file patch-file-tolerations.yaml
View the patched Deployment:
kubectl get deployment patch-demo --output yaml
The output shows that the PodSpec in the Deployment has only one Toleration:
tolerations:
- effect: NoSchedule
  key: disktype
  value: ssd
Notice that the tolerations
list in the PodSpec was replaced, not merged. This is because the Tolerations field of PodSpec does not have a patchStrategy
key in its field tag. So the strategic merge patch uses the default patch strategy, which is replace
.
type PodSpec struct {
  ...
  Tolerations []Toleration `json:"tolerations,omitempty" protobuf:"bytes,22,opt,name=tolerations"`
  ...
}
A strategic merge patch is different from a JSON merge patch. With a JSON merge patch, if you want to update a list, you have to specify the entire new list. And the new list completely replaces the existing list.
The kubectl patch
command has a type
parameter that you can set to one of these values:
Parameter value | Merge type
--- | ---
json | JSON Patch, RFC 6902
merge | JSON Merge Patch, RFC 7386
strategic | Strategic merge patch
For a comparison of JSON patch and JSON merge patch, see JSON Patch and JSON Merge Patch.
The default value for the type
parameter is strategic
. So in the preceding exercise, you did a strategic merge patch.
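The json type is not demonstrated elsewhere in this task. As a hedged sketch, a JSON Patch (RFC 6902) expresses the change as a list of operations on paths; for example, replacing the image of the first container in the Pod template (the image tag here is illustrative):

kubectl patch deployment patch-demo --type json \
  -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "nginx:1.16.1"}]'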
Next, do a JSON merge patch on your same Deployment. Create a file named patch-file-2.yaml
that has this content:
spec:
  template:
    spec:
      containers:
      - name: patch-demo-ctr-3
        image: gcr.io/google-samples/hello-app:2.0
In your patch command, set type
to merge
:
kubectl patch deployment patch-demo --type merge --patch-file patch-file-2.yaml
View the patched Deployment:
kubectl get deployment patch-demo --output yaml
The containers
list that you specified in the patch has only one Container. The output shows that your list of one Container replaced the existing containers
list.
spec:
  containers:
  - image: gcr.io/google-samples/hello-app:2.0
    ...
    name: patch-demo-ctr-3
List the running Pods:
kubectl get pods
In the output, you can see that the existing Pods were terminated, and new Pods were created. The 1/1
indicates that each new Pod is running only one Container.
NAME                          READY   STATUS    RESTARTS   AGE
patch-demo-1307768864-69308   1/1     Running   0          1m
patch-demo-1307768864-c86dc   1/1     Running   0          1m
Here's the configuration file for a Deployment that uses the RollingUpdate
strategy:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: retainkeys-demo
spec:
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 30%
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: retainkeys-demo-ctr
        image: nginx
Create the deployment:
kubectl apply -f https://k8s.io/examples/application/deployment-retainkeys.yaml
At this point, the deployment is created and is using the RollingUpdate
strategy.
Create a file named patch-file-no-retainkeys.yaml
that has this content:
spec:
  strategy:
    type: Recreate
Patch your Deployment:
kubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-no-retainkeys.yaml
In the output, you can see that it is not possible to set type
as Recreate
when a value is defined for spec.strategy.rollingUpdate
:
The Deployment "retainkeys-demo" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
The way to remove the value for spec.strategy.rollingUpdate
when updating the value for type
is to use the retainKeys
strategy for the strategic merge.
Create another file named patch-file-retainkeys.yaml
that has this content:
spec:
  strategy:
    $retainKeys:
    - type
    type: Recreate
With this patch, we indicate that we want to retain only the type
key of the strategy
object. Thus, the rollingUpdate
will be removed during the patch operation.
Patch your Deployment again with this new patch:
kubectl patch deployment retainkeys-demo --type strategic --patch-file patch-file-retainkeys.yaml
Examine the content of the Deployment:
kubectl get deployment retainkeys-demo --output yaml
The output shows that the strategy object in the Deployment does not contain the rollingUpdate
key anymore:
spec:
  strategy:
    type: Recreate
  template:
The patch you did in the preceding exercise is called a strategic merge patch with retainKeys strategy. This method introduces a new directive $retainKeys that has the following behavior:

- It contains a list of strings.
- All fields needing to be preserved must be present in the $retainKeys list.
- The fields that are present will be merged with the live object.
- All of the missing fields will be cleared when patching.
- All fields in the $retainKeys list must be a superset of, or the same as, the fields present in the patch.

The retainKeys strategy does not work for all objects. It only works when the value of the patchStrategy key in a field tag in the Kubernetes source code contains retainKeys. For example, the Strategy field of the DeploymentSpec struct has a patchStrategy of retainKeys:
type DeploymentSpec struct {
  ...
  // +patchStrategy=retainKeys
  Strategy DeploymentStrategy `json:"strategy,omitempty" patchStrategy:"retainKeys" ...`
  ...
}
You can also see the retainKeys strategy in the OpenAPI spec:
"io.k8s.api.apps.v1.DeploymentSpec": {...,"strategy": {"$ref": "#/definitions/io.k8s.api.apps.v1.DeploymentStrategy","description": "The deployment strategy to use to replace existing pods with new ones.","x-kubernetes-patch-strategy": "retainKeys"},....}
And you can see the retainKeys
strategy in the Kubernetes API documentation.
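Note that retainKeys is not the only way to clear a field. As an alternative sketch, a JSON merge patch achieves the same result here by explicitly setting rollingUpdate to null, which deletes the field:

kubectl patch deployment retainkeys-demo --type merge \
  -p '{"spec": {"strategy": {"rollingUpdate": null, "type": "Recreate"}}}'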
The kubectl patch
command takes YAML or JSON. It can take the patch as a file or directly on the command line.
Create a file named patch-file.json
that has this content:
{ "spec": { "template": { "spec": { "containers": [ { "name": "patch-demo-ctr-2", "image": "redis" } ] } } } }
The following commands are equivalent:
kubectl patch deployment patch-demo --patch-file patch-file.yaml

kubectl patch deployment patch-demo --patch 'spec:
  template:
    spec:
      containers:
      - name: patch-demo-ctr-2
        image: redis'

kubectl patch deployment patch-demo --patch-file patch-file.json

kubectl patch deployment patch-demo --patch '{"spec": {"template": {"spec": {"containers": [{"name": "patch-demo-ctr-2","image": "redis"}]}}}}'
kubectl patch with --subresource

The --subresource=[subresource-name] flag is used with kubectl commands like get, patch, edit, apply, and replace to fetch and update the status, scale, and resize subresources of the resources you specify. You can specify a subresource for any of the Kubernetes API resources (built-in and CRs) that have a status, scale, or resize subresource.
For example, a Deployment has a status
subresource and a scale
subresource, so you can use kubectl
to get or modify just the status
subresource of a Deployment.
Here's a manifest for a Deployment that has two replicas:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Create the Deployment:
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
View the Pods associated with your Deployment:
kubectl get pods -l app=nginx
In the output, you can see that Deployment has two Pods. For example:
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7fb96c846b-22567   1/1     Running   0          47s
nginx-deployment-7fb96c846b-mlgns   1/1     Running   0          47s
Now, patch that Deployment with --subresource=[subresource-name]
flag:
kubectl patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":3}}'
The output is:
scale.autoscaling/nginx-deployment patched
View the Pods associated with your patched Deployment:
kubectl get pods -l app=nginx
In the output, you can see one new pod is created, so now you have 3 running pods.
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7fb96c846b-22567   1/1     Running   0          107s
nginx-deployment-7fb96c846b-lxfr2   1/1     Running   0          14s
nginx-deployment-7fb96c846b-mlgns   1/1     Running   0          107s
View the patched Deployment:
kubectl get deployment nginx-deployment -o yaml
...
spec:
  replicas: 3
  ...
status:
  ...
  availableReplicas: 3
  readyReplicas: 3
  replicas: 3
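You can also fetch a single subresource instead of the whole object. A sketch, assuming your kubectl version supports --subresource on get:

kubectl get deployment nginx-deployment --subresource=status -o yaml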
If you run kubectl patch and specify the --subresource flag for a resource that doesn't support that particular subresource, the API server returns a 404 Not Found error.

In this exercise, you used kubectl patch to change the live configuration of a Deployment object. You did not change the configuration file that you originally used to create the Deployment object. Other commands for updating API objects include kubectl annotate, kubectl edit, kubectl replace, kubectl scale, and kubectl apply.
Kubernetes v1.30 [alpha] (enabled by default: false)

Kubernetes relies on API data being actively re-written to support some maintenance activities related to at-rest storage. Two prominent examples are the versioned schema of stored resources (that is, the preferred storage schema changing from v1 to v2 for a given resource) and encryption at rest (that is, rewriting stale data based on a change in how the data should be encrypted).
Install kubectl
.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Your Kubernetes server must be at or later than version v1.30. To check the version, enter kubectl version
.
Ensure that your cluster has the StorageVersionMigrator
and InformerResourceVersion
feature gates enabled. You will need control plane administrator access to make that change.
Enable the storage version migration REST API by setting the runtime config storagemigration.k8s.io/v1alpha1 to true for the API server. For more information on how to do that, read enable or disable a Kubernetes API.
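How you set these depends on how your control plane is deployed. As a sketch, for a kube-apiserver started directly (or via a static Pod manifest), the relevant flags would look like:

kube-apiserver \
  --feature-gates=StorageVersionMigrator=true,InformerResourceVersion=true \
  --runtime-config=storagemigration.k8s.io/v1alpha1=true \
  ... # other flags unchanged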
To begin with, configure encryption of data at rest in etcd using the following encryption configuration.
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: c2VjcmV0IGlzIHNlY3VyZQ==
Make sure to enable automatic reload of the encryption configuration file by setting --encryption-provider-config-automatic-reload to true.
Create a Secret using kubectl.
kubectl create secret generic my-secret --from-literal=key1=supersecret
Verify the serialized data for that Secret object is prefixed with k8s:enc:aescbc:v1:key1
.
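A sketch of that verification, assuming direct etcdctl access to the cluster's etcd and the default /registry prefix; [...] stands for the usual connection flags (endpoints and certificates):

ETCDCTL_API=3 etcdctl get /registry/secrets/default/my-secret [...] | hexdump -C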
Update the encryption configuration file as follows to rotate the encryption key.
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key2
        secret: c2VjcmV0IGlzIHNlY3VyZSwgaXMgaXQ/
  - aescbc:
      keys:
      - name: key1
        secret: c2VjcmV0IGlzIHNlY3VyZQ==
To ensure that the previously created Secret my-secret is re-encrypted with the new key key2, you will use Storage Version Migration.
Create a StorageVersionMigration manifest named migrate-secret.yaml
as follows:
kind: StorageVersionMigration
apiVersion: storagemigration.k8s.io/v1alpha1
metadata:
  name: secrets-migration
spec:
  resource:
    group: ""
    version: v1
    resource: secrets
Create the object using kubectl as follows:
kubectl apply -f migrate-secret.yaml
Monitor migration of Secrets by checking the .status
of the StorageVersionMigration. A successful migration should have its Succeeded
condition set to true. Get the StorageVersionMigration object as follows:
kubectl get storageversionmigration.storagemigration.k8s.io/secrets-migration -o yaml
The output is similar to:
kind: StorageVersionMigration
apiVersion: storagemigration.k8s.io/v1alpha1
metadata:
  name: secrets-migration
  uid: 628f6922-a9cb-4514-b076-12d3c178967c
  resourceVersion: "90"
  creationTimestamp: "2024-03-12T20:29:45Z"
spec:
  resource:
    group: ""
    version: v1
    resource: secrets
status:
  conditions:
  - type: Running
    status: "False"
    lastUpdateTime: "2024-03-12T20:29:46Z"
    reason: StorageVersionMigrationInProgress
  - type: Succeeded
    status: "True"
    lastUpdateTime: "2024-03-12T20:29:46Z"
    reason: StorageVersionMigrationSucceeded
  resourceVersion: "84"
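Instead of polling, you can block until the condition is reported. A sketch using kubectl wait:

kubectl wait storageversionmigration/secrets-migration \
  --for=condition=Succeeded --timeout=120s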
Verify the stored secret is now prefixed with k8s:enc:aescbc:v1:key2
.
Consider a scenario where a CustomResourceDefinition (CRD) is created to serve custom resources (CRs) and is set as the preferred storage schema. When it's time to introduce v2 of the CRD, it can be added for serving only with a conversion webhook. This enables a smoother transition where users can create CRs using either the v1 or v2 schema, with the webhook in place to perform the necessary schema conversion between them. Before setting v2 as the preferred storage schema version, it's important to ensure that all existing CRs stored as v1 are migrated to v2. This migration can be achieved through Storage Version Migration to migrate all CRs from v1 to v2.
Create a manifest for the CRD, named test-crd.yaml
, as follows:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: selfierequests.stable.example.com
spec:
  group: stable.example.com
  names:
    plural: selfierequests
    singular: selfierequest
    kind: SelfieRequest
    listKind: SelfieRequestList
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          hostPort:
            type: string
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        url: "https://127.0.0.1:9443/crdconvert"
        caBundle: <CABundle info>
      conversionReviewVersions:
      - v1
      - v2
Create CRD using kubectl:
kubectl apply -f test-crd.yaml
Create a manifest for an example custom resource. Name the manifest cr1.yaml
and use these contents:
apiVersion: stable.example.com/v1
kind: SelfieRequest
metadata:
  name: cr1
  namespace: default
Create CR using kubectl:
kubectl apply -f cr1.yaml
Verify that the CR is written and stored as v1 by getting the object from etcd.

ETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/selfierequests/default/cr1 [...] | hexdump -C
where [...]
contains the additional arguments for connecting to the etcd server.
Update the CRD test-crd.yaml
to include v2 version for serving and storage and v1 as serving only, as follows:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: selfierequests.stable.example.com
spec:
  group: stable.example.com
  names:
    plural: selfierequests
    singular: selfierequest
    kind: SelfieRequest
    listKind: SelfieRequestList
  scope: Namespaced
  versions:
  - name: v2
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          host:
            type: string
          port:
            type: string
  - name: v1
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        properties:
          hostPort:
            type: string
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        url: "https://127.0.0.1:9443/crdconvert"
        caBundle: <CABundle info>
      conversionReviewVersions:
      - v1
      - v2
Update CRD using kubectl:
kubectl apply -f test-crd.yaml
Create CR resource file with name cr2.yaml
as follows:
apiVersion: stable.example.com/v2
kind: SelfieRequest
metadata:
  name: cr2
  namespace: default
Create CR using kubectl:
kubectl apply -f cr2.yaml
Verify that the CR is written and stored as v2 by getting the object from etcd.

ETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/selfierequests/default/cr2 [...] | hexdump -C
where [...]
contains the additional arguments for connecting to the etcd server.
Create a StorageVersionMigration manifest named migrate-crd.yaml
, with the contents as follows:
kind: StorageVersionMigration
apiVersion: storagemigration.k8s.io/v1alpha1
metadata:
  name: crdsvm
spec:
  resource:
    group: stable.example.com
    version: v1
    resource: selfierequests
Create the object using kubectl as follows:
kubectl apply -f migrate-crd.yaml
Monitor migration of the custom resources by checking the .status of the StorageVersionMigration. A successful migration should have its Succeeded condition set to "True" in the status field. Get the migration resource as follows:
kubectl get storageversionmigration.storagemigration.k8s.io/crdsvm -o yaml
The output is similar to:
kind: StorageVersionMigration
apiVersion: storagemigration.k8s.io/v1alpha1
metadata:
  name: crdsvm
  uid: 13062fe4-32d7-47cc-9528-5067fa0c6ac8
  resourceVersion: "111"
  creationTimestamp: "2024-03-12T22:40:01Z"
spec:
  resource:
    group: stable.example.com
    version: v1
    resource: selfierequests
status:
  conditions:
  - type: Running
    status: "False"
    lastUpdateTime: "2024-03-12T22:40:03Z"
    reason: StorageVersionMigrationInProgress
  - type: Succeeded
    status: "True"
    lastUpdateTime: "2024-03-12T22:40:03Z"
    reason: StorageVersionMigrationSucceeded
  resourceVersion: "106"
Verify that the previously created cr1 is now written and stored as v2 by getting the object from etcd.

ETCDCTL_API=3 etcdctl get /kubernetes.io/stable.example.com/selfierequests/default/cr1 [...] | hexdump -C
where [...]
contains the additional arguments for connecting to the etcd server.
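As a final check from the API side (a sketch, using kubectl's fully-qualified resource.version.group syntax), you can read the CR back at v2 without going through etcd; this confirms the object is servable at v2, while the etcd check above remains the authoritative way to confirm the stored version:

kubectl get selfierequests.v2.stable.example.com cr1 -n default -o yaml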