Play with Kubernetes

A kubeadm cluster-bootstrap session on play-with-k8s.com.

# You can bootstrap a cluster as follows:

# 1. Initialize the cluster master node:

 kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16

Initializing machine ID from random generator.
W0319 11:57:24.371550     723 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/docker/containerd/containerd.sock". Please update your configuration!
I0319 11:57:24.687295     723 version.go:256] remote version is much newer: v1.29.3; falling back to: stable-1.27
[init] Using Kubernetes version: v1.27.12
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-210-generic
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0319 11:57:25.234503     723 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.12, falling back to the nearest etcd version (3.5.7-0)
W0319 11:57:33.868317     723 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 192.168.0.13]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.0.13 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.0.13 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0319 11:57:46.567944     723 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.12, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.003492 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: bp8rgo.0w4o2r4dsllbf13x
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.13:6443 --token bp8rgo.0w4o2r4dsllbf13x \
        --discovery-token-ca-cert-hash sha256:f6d519cd62c54280b29dbaec5e46c6ad3aa21f214c34f456211af8ebb7cde7e5
Waiting for api server to startup
Warning: resource daemonsets/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
daemonset.apps/kube-proxy configured
No resources found
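
At this point the control plane is serving the API, but node1 will report NotReady until a pod network add-on is installed in step 2, and the CoreDNS pods stay Pending for the same reason. A quick check, as a sketch (run on node1 with KUBECONFIG set as the init output suggests):

 kubectl get nodes
 kubectl get pods -n kube-system
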
# 2. Initialize cluster networking:

 kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
configmap/kube-router-cfg created
daemonset.apps/kube-router created
serviceaccount/kube-router created
clusterrole.rbac.authorization.k8s.io/kube-router created
clusterrolebinding.rbac.authorization.k8s.io/kube-router created
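
kube-router here provides the pod network for the 10.5.0.0/16 CIDR passed to kubeadm init, along with NetworkPolicy enforcement; kube-proxy, applied as an addon in step 1, keeps handling Services. To confirm the daemonset rolled out and that node1 flips to Ready, a sketch:

 kubectl -n kube-system rollout status daemonset/kube-router
 kubectl get nodes

The kubeadm join below is the command printed at the end of step 1, pasted on a second playground node as root: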

kubeadm join 192.168.0.13:6443 --token bp8rgo.0w4o2r4dsllbf13x \
>         --discovery-token-ca-cert-hash sha256:f6d519cd62c54280b29dbaec5e46c6ad3aa21f214c34f456211af8ebb7cde7e5
W0319 12:07:39.003998    1929 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/docker/containerd/containerd.sock". Please update your configuration!
[preflight] Running pre-flight checks
        [WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-210-generic
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
        [WARNING Port-10250]: Port 10250 is in use
        [WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
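
Back on node1, the new worker shows up with ROLES <none>, since kubeadm does not label worker nodes. You can check and, optionally, label it yourself (a sketch; node2 is assumed to be the worker's hostname in this session):

 kubectl get nodes -o wide
 kubectl label node node2 node-role.kubernetes.io/worker=
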
# 3. (Optional) Create an nginx deployment:

 kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml
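
At the time of writing, nginx-app.yaml defines a Deployment and a Service, both named my-nginx, with the Service of type LoadBalancer. To see what was created, a sketch (the names come from that manifest):

 kubectl get deployment,service my-nginx

On the playground there is no cloud provider, so the LoadBalancer Service's EXTERNAL-IP stays <pending>; the pods remain reachable through the Service's cluster IP.
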
cat .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJME1ETXhPVEV4TlRjME1Wb1hEVE0wTURNeE56RXhOVGMwTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlB2CnM3RnFqaGNBSGgxY3lRT3BKcEhpbmNqNE1YdnlLNDJkT0lzaUJTQUFmeEdxdUlGbUVpNXRxYVNBQk1MQ25QN2gKcnNvUWVBY3MyaHYxUkEwY2trVDFxSWwwdml3aVhaUElkSFJzZ1JiYk4xaVBOUTBiN1RpeTdqQXpTK1JPZEdLTwp0RmRNSG9tc3VkTHdpTjEzK211cVp1eWxsTGQ2M1FQY0xLZFNKVk5oOHlwZHVuYWNMbUlkVlB4SkhrQ1YrMnF5CnYxWXBCZU9KVU1lSmFERDZTSGZ2YlZLZkIyaDdncks5UkIvZmkwTEF0V1RaL0FXV1V0RGFsQ09lakRaL2h0MHQKTmk1VVRDRVB4NDduV2tSWVAxUjJDbUVOQXVFVGg0eEFyeTQzZnBZK2ZhWEpLWklGQUF0dzJkamQ0U1dDLzRrZQo0aEdnL1VidGl3dEJvczBBb3BrQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZOUFJGWU1zang2c3NMcWkrNk9hdWt6UU5kMlFNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSGJ1bUNpeVBNYTZIQ2U5cURjKwoxY2x1K2cyVzk2Rm0rVWZEczY3SFp6aXgyUHJPaDd4Mk5SRXliVEQ0Tk1rM2pPcGd0WnZBNUt2WkNscDVRR0ZsClA2Tjc3Mm1Ua2lNOWR1UmZqZ0ZqTHMrMWxuT0Z0TkVzR1BKVTVvOVp1QmYzY0N4RmEvS2ZEcW00dTFGd21iblMKVW4rSmlXMElXZEtrRkF6bjNhcktTWTdUWWZIdkNjMzg1RmpFY0JjVkhKMWJPamQ3T0YzQThGWVErSnV1RnZYagpvN0RHTmcvUjB0NGpDL3pkZDNFWThyZWtsMUdFeWczckNuKzNyZXRNVWlKaEUzenBzbzlOR0hEbnRVcDVlMG01Cm8rdUpua3NVd09waFBPRi9waXo3eHI4YmVNQWptK2psK211b1k2Z0YwdDcwTy9QcGJ3NWJuRVBBUHdyRFB1RlAKYmgwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.0.13:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJQ0FzQU40K0Nsa2N3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBek1Ua3hNVFUzTkRGYUZ3MHlOVEF6TVRreE1UVTNORFZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXhTMXFMWDlHSDZabHVBUUMKVVgvaUtpNk00eDRvY0FtVjlCL0gyVzIraENZRnVjRFZodHBzVEZDT1hPMWxaQXZ6ZFM2QmpMeEd5dVJ1SjI4OQpKR243MmQvQkZzQkRiWVV3RkxLOXl6eUJmOVUvN3o4MzN6bzJ5YUY2RUluWGEraG5rWmV0UjdxYXE1QUJSWUlZCjZqOHM3UnJ0SjVEZEJTN1VnUnlhRUpRMTR2MWprUkdUbm1SeGJ1aU9LTTlLemxSVGw3Q0kxQnhPLysrY0pjam4KTk1vamlLVHdQK1JwanB6V3NCalRRMFZvZDlRZTJ0cnF0U0xEeGhxNWNia1hUR3RkaC94RzMwR2VLNUgvSFduZwpKWENKTnlSUG1qMmEzaTN1NisxZDd0dCtrUVpXNWFWQjRnc2Y0bHNJeGVsWTB5VUlvMHpFaDJRRTVFbjhxZ3ovCm50SUxId0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JUVDBSV0RMSThlckxDNm92dWptcnBNMERYZAprREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBUHNzNVlWbW04aTdZUjJBQ3BjRXFjWWF6OWJKTlF2QTc5L1pUClRJQ3VnTnpwd3diQm1IdmViZjFPQ3VadTRkVmJOMm1rSW9QVFdlTCtWTXZJMlZvY3p4ZEp1WWwwSnA1VUc1QisKOTVML1NPR2h6NmlxTm1mdkMxanVCQjlSSWlLSXh3VjdLNUZjN1picFhMeUNVNFhwV1Njb1VaQ2JNK2VWR2gzMQoxYWYzdlhPWGY2MjRBN0YzcFU2KzFoalo3Z0NjSnpOMHRpTGZkV1JCYUZaQ3gycnBiNnFiQVBVNno4eWxnL1VjClMxMzZISEQ1bysrZUhPOVVOemRXcDl6RXFWVTZFZ1EyeGJZWDRwMnluZWo4Zzd3UHBlQ3JMZnMxVHU4WFByK2oKLytJdmlEbGMyRXVFN2duajhxNG1yYVd5K1NuazFFcmZnL3ZzWHkyK0V6N25yaXcyc2c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeFMxcUxYOUdINlpsdUFRQ1VYL2lLaTZNNHg0b2NBbVY5Qi9IMlcyK2hDWUZ1Y0RWCmh0cHNURkNPWE8xbFpBdnpkUzZCakx4R3l1UnVKMjg5SkduNzJkL0JGc0JEYllVd0ZMSzl5enlCZjlVLzd6ODMKM3pvMnlhRjZFSW5YYStobmtaZXRSN3FhcTVBQlJZSVk2ajhzN1JydEo1RGRCUzdVZ1J5YUVKUTE0djFqa1JHVApubVJ4YnVpT0tNOUt6bFJUbDdDSTFCeE8vKytjSmNqbk5Nb2ppS1R3UCtScGpweldzQmpUUTBWb2Q5UWUydHJxCnRTTER4aHE1Y2JrWFRHdGRoL3hHMzBHZUs1SC9IV25nSlhDSk55UlBtajJhM2kzdTYrMWQ3dHQra1FaVzVhVkIKNGdzZjRsc0l4ZWxZMHlVSW8wekVoMlFFNUVuOHFnei9udElMSHdJREFRQUJBb0lCQUNJcHd6ck0wWDZNV3hWdQpCR1RRam9RV2VxeWpQZ2hqY01yU2N0TDJVOHNidDJRK3lBQk1lZlVqQS9lUDNrQmVYYmxRN0h0UTU4Y2htd0JVCklyamJjQnFJelRDKzhTL1pvc0lEVWlVVGY3Q0JaMGx4bjZHYXVZRm42L2xQbUxhR2x1TS83M0w5SHUxWXp3K1gKQWZBY01CR0kxOHhDS2psS3F1RVA3cFd1eUVOaXEyK3dKci96SVFlZTM5MmZCTHkwTGt6OFFCZ3BNajFheTN3ZgpBQloyREM0bFQvcTlnYTIvdlhiU2d1bC82SEswNHJ6VTF3M245YUEvd2pnN0FLODUwaGtOUjdZOEx4cnNIbDVSClpTRVpmTE5seG9PV0JZbmQrajcwRjlNVG9WangweWFoNEVDUy9NVUtRQ0JhSnVjaGZMa3lkeFRhNmRUczA5U0QKc0hscG5Wa0NnWUVBeUd4MFhuQVE4UjQ4QXNwM01zTG1qN1RjenVZc3d3RVJsUCsyd3JGSHRXWHFmYXpGNFpCcgp5aXBZT21jUmNUWktrOHhYVFdzSUZkY2F1TDVyOWY5dUxSZlhTWUtZNnAwMnU0QlNTOE01S2JiM3ZucTRETDRrCmhJcysyWG9tdTRJcmdsaVdtb3RCVnFmQ1FFQ3FKU0tQWkpYQXBVVGZRMlZDTzdTMkhSand6ek1DZ1lFQSs5cUUKZlFldndhSlA1VDZydW5HSm03cTZ1TkRCZW1YZ0Q0UE5keWRkVG5TVUhqV29vaDN5d0h6VE0remo5RSt3SnlwagpzVzJSUVQwaC9YL05uZUZIanUxMjRRTFhiZ1pFM3NndVlwMktpK3JOUlVYdE1UM2YrWnZEemhXZmZnalBEeitMCjVQMWtNSjdKZEZpT1grcTkvL24vR0pSS05wT2s2K0JuNGxjZWhHVUNnWUJ0ZTVFMTVWSGI1UUF6SmhabkRFQ1cKVDk0dXgxMjhTR0VxVzJXaWhPVC9HbVUxc2FPR3pEV1ZnZndnS3gxRUVydzZjRzFnUlE5dG5zdGlEK001eGdhUgphMnlYSTFnVkVUeE13SlF6L3JqREtNZThyWnpNbVRHcGxjY0hWY3JDc3lEQlcxTXBxTmhRVmVPdTVhUU1GUXp2CmpUNW5DWEJNaUl2ZHdhR1owMzM0TVFLQmdIYisvenhvQmxYeUQ5ZmI3WjNSQ1Zpb09KTWNKMTVpaGlRdWZVVUEKTjJqYlVpU1g2ODUxWWY0cXZFdTdjTlU5VlppYndiRFNlU0FlOTFGa01rMlhaSTBXaStXeXh3RDRPMUFidXpiagpBdFFySThQSVQxTEZ6bTZNZDA2SER1Mm8wZFI5ak9hc0JzdW1LcjhySEZJYmdweFFqWVFhaEpvVzFvU1FhZVVhCmpwTzFBb0dCQUk2RDlNL01JMGNjRThORk1Uak9QSVk0ZlZkUFZrTUhtQ04xQnRiS2xnbnNLWTJpd093UndEdkoKWjhrMnpJampsZDJWRXpKRHk1WXRwRjIzWWcxc0U2Znpmakd2VzhOTDU3N1JWMm50MVR3QkYzNnNJeUNxU1BBUgpEbzBWSmx2cHJSS0w1WldEWHd3MGJKMjhwbGRmZ0VPVEhLcW5WeTdQRm9uOXNWVHY5dk1qCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

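The file above is the admin kubeconfig that the post-init steps copy to $HOME/.kube/config: the cluster CA certificate plus a client certificate and key for the kubernetes-admin user, embedded as base64. To inspect it without dumping the credentials (kubectl redacts the embedded data):

 kubectl config view --minify
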
kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.12", GitCommit:"12031002905c0410706974560cbdf2dad9278919", GitTreeState:"clean", BuildDate:"2024-03-15T02:06:14Z", GoVersion:"go1.21.8", Compiler:"gc", Platform:"linux/amd64"}
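
As the deprecation warning suggests, the structured form is the forward-compatible way to read these versions:

 kubectl version --output=yaml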

export KUBECONFIG=/etc/kubernetes/admin.conf
ls /etc/kubernetes/
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
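
Everything listed here was produced by the init phases logged above: admin.conf is the cluster-admin kubeconfig used throughout this session; kubelet.conf, controller-manager.conf, and scheduler.conf are the per-component kubeconfigs from the [kubeconfig] phase; manifests holds the static Pod definitions from the [control-plane] and [etcd] phases; and pki is the certificate directory from the [certs] phase.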
