1. Clean OpenEuler 24.03 system with KubeKey 3.1.7: deployed Kubernetes 1.31.2 successfully and it runs normally.
2. Build the offline package:
export KKZONE=cn
kk create manifest
Export the artifact:
kk artifact export -m manifest-sample.yaml
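Before copying the package to the offline host, it may help to confirm the artifact was actually produced; the file name below is the one used by the commands in step 3:
ls -lh kubekey-artifact.tar.gz
sha256sum kubekey-artifact.tar.gz   # record the checksum and compare it again after copying to the offline host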
3. Deploying on a clean OpenEuler 24.03 host with this artifact then fails with errors. How should this be handled?
kk init registry -f config-sample.yaml -a kubekey-artifact.tar.gz
kk artifact image push -f config-sample.yaml -a kubekey-artifact.tar.gz
kk create cluster -f config-sample.yaml -a kubekey-artifact.tar.gz --with-packages
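For reference, the registry-related part of config-sample.yaml in this setup should look roughly like the sketch below. This is an assumption based on the KubeKey offline workflow; the auths entry (certsPath, credentials, skipTLSVerify) must match what kk init registry actually generated:
registry:
  privateRegistry: "dockerhub.kubekey.local"
  namespaceOverride: "kubesphereio"
  auths:
    "dockerhub.kubekey.local":
      skipTLSVerify: false                                        # assumption: verification stays on, so the nodes must trust the registry CA
      certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local"    # assumption: directory holding the registry CA/cert on each node
  registryMirrors: []
  insecureRegistries: []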
Execution gets to
09:39:41 CST [InitKubernetesModule] Init cluster using kubeadm
09:43:44 CST stdout: [node1]
and then the errors start:
W1105 09:39:41.684068 28646 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “ClusterConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:39:41.684872 28646 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “InitConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:39:41.686411 28646 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using ‘kubeadm config images pull’
W1105 09:39:41.789157 28646 checks.go:846] detected that the sandbox image “registry.k8s.io/pause:3.8” of the container runtime is inconsistent with that used by kubeadm.It is recommended to use “dockerhub.kubekey.local/kubesphereio/pause:3.10” as the CRI sandbox image.
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-apiserver/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-controller-manager/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-proxy/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/coredns:1.9.3": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/coredns:1.9.3": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/coredns/manifests/1.9.3": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.10: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.10: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/pause:3.10": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/pause:3.10": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/pause/manifests/3.10": tls: failed to verify certificate: x509: certificate signed by unknown authority
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost node1 node1.cluster.local] and IPs [10.233.0.1 192.168.1.16 127.0.0.1]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “super-admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.065246ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.000569986s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with –v=5 or higher
09:43:44 CST stdout: [node1]
[reset] Reading configuration from the cluster…
[reset] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W1105 09:43:44.941617 32864 reset.go:123] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get “https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s”: dial tcp 192.168.1.16:6443: connect: connection refused
[preflight] Running pre-flight checks
W1105 09:43:44.941779 32864 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in “/var/lib/kubelet”
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the “iptables” command.
If your cluster was setup to utilize IPVS, run ipvsadm –clear (or similar)
to reset your system’s IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
09:43:44 CST message: [node1]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1105 09:39:41.684068 28646 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “ClusterConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:39:41.684872 28646 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “InitConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:39:41.686411 28646 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using ‘kubeadm config images pull’
W1105 09:39:41.789157 28646 checks.go:846] detected that the sandbox image “registry.k8s.io/pause:3.8” of the container runtime is inconsistent with that used by kubeadm.It is recommended to use “dockerhub.kubekey.local/kubesphereio/pause:3.10” as the CRI sandbox image.
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-apiserver/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-controller-manager/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-proxy/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/coredns:1.9.3": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/coredns:1.9.3": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/coredns/manifests/1.9.3": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.10: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.10: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/pause:3.10": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/pause:3.10": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/pause/manifests/3.10": tls: failed to verify certificate: x509: certificate signed by unknown authority
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost node1 node1.cluster.local] and IPs [10.233.0.1 192.168.1.16 127.0.0.1]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “super-admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.065246ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.000569986s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with –v=5 or higher: Process exited with status 1
09:43:44 CST retry: [node1]
09:47:52 CST stdout: [node1]
W1105 09:43:50.129525 33036 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “ClusterConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:43:50.130277 33036 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “InitConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:43:50.131747 33036 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using ‘kubeadm config images pull’
W1105 09:43:50.227063 33036 checks.go:846] detected that the sandbox image “registry.k8s.io/pause:3.8” of the container runtime is inconsistent with that used by kubeadm.It is recommended to use “dockerhub.kubekey.local/kubesphereio/pause:3.10” as the CRI sandbox image.
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-apiserver/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-controller-manager/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-proxy/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/coredns:1.9.3": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/coredns:1.9.3": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/coredns/manifests/1.9.3": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.10: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.10: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/pause:3.10": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/pause:3.10": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/pause/manifests/3.10": tls: failed to verify certificate: x509: certificate signed by unknown authority
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost node1 node1.cluster.local] and IPs [10.233.0.1 192.168.1.16 127.0.0.1]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “super-admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.120482ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.000558568s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with –v=5 or higher
09:47:53 CST stdout: [node1]
[reset] Reading configuration from the cluster…
[reset] FYI: You can look at this config file with ‘kubectl -n kube-system get cm kubeadm-config -o yaml’
W1105 09:47:53.519987 36731 reset.go:123] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get “https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s”: dial tcp 192.168.1.16:6443: connect: connection refused
[preflight] Running pre-flight checks
W1105 09:47:53.520170 36731 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in “/var/lib/kubelet”
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the “iptables” command.
If your cluster was setup to utilize IPVS, run ipvsadm –clear (or similar)
to reset your system’s IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
09:47:53 CST message: [node1]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1105 09:43:50.129525 33036 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “ClusterConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:43:50.130277 33036 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “InitConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:43:50.131747 33036 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using ‘kubeadm config images pull’
W1105 09:43:50.227063 33036 checks.go:846] detected that the sandbox image “registry.k8s.io/pause:3.8” of the container runtime is inconsistent with that used by kubeadm.It is recommended to use “dockerhub.kubekey.local/kubesphereio/pause:3.10” as the CRI sandbox image.
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-apiserver/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-controller-manager/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.2": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/kube-proxy/manifests/v1.31.2": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/coredns:1.9.3": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/coredns:1.9.3": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/coredns/manifests/1.9.3": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.10: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.10: failed to pull and unpack image "dockerhub.kubekey.local/kubesphereio/pause:3.10": failed to resolve reference "dockerhub.kubekey.local/kubesphereio/pause:3.10": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphereio/pause/manifests/3.10": tls: failed to verify certificate: x509: certificate signed by unknown authority
[certs] Using certificateDir folder “/etc/kubernetes/pki”
[certs] Generating “ca” certificate and key
[certs] Generating “apiserver” certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost node1 node1.cluster.local] and IPs [10.233.0.1 192.168.1.16 127.0.0.1]
[certs] Generating “apiserver-kubelet-client” certificate and key
[certs] Generating “front-proxy-ca” certificate and key
[certs] Generating “front-proxy-client” certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating “sa” key and public key
[kubeconfig] Using kubeconfig folder “/etc/kubernetes”
[kubeconfig] Writing “admin.conf” kubeconfig file
[kubeconfig] Writing “super-admin.conf” kubeconfig file
[kubeconfig] Writing “kubelet.conf” kubeconfig file
[kubeconfig] Writing “controller-manager.conf” kubeconfig file
[kubeconfig] Writing “scheduler.conf” kubeconfig file
[control-plane] Using manifest folder “/etc/kubernetes/manifests”
[control-plane] Creating static Pod manifest for “kube-apiserver”
[control-plane] Creating static Pod manifest for “kube-controller-manager”
[control-plane] Creating static Pod manifest for “kube-scheduler”
[kubelet-start] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet-start] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory “/etc/kubernetes/manifests”
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.120482ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.000558568s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with –v=5 or higher: Process exited with status 1
09:47:53 CST retry: [node1]
09:48:13 CST stdout: [node1]
W1105 09:47:58.730321 36959 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “ClusterConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:47:58.731240 36959 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “InitConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:47:58.732934 36959 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://192.168.1.16:2379/version": dial tcp 192.168.1.16:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with `–ignore-preflight-errors=…`
To see the stack trace of this error execute with –v=5 or higher
09:48:13 CST stdout: [node1]
[preflight] Running pre-flight checks
W1105 09:48:13.979614 37222 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in “/var/lib/kubelet”
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the “iptables” command.
If your cluster was setup to utilize IPVS, run ipvsadm –clear (or similar)
to reset your system’s IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
09:48:13 CST message: [node1]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1105 09:47:58.730321 36959 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “ClusterConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:47:58.731240 36959 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “InitConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:47:58.732934 36959 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://192.168.1.16:2379/version": dial tcp 192.168.1.16:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with `–ignore-preflight-errors=…`
To see the stack trace of this error execute with –v=5 or higher: Process exited with status 1
09:48:13 CST failed: [node1]
error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed:
failed: [node1] [KubeadmInit] exec failed after 3 retries: init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1105 09:47:58.730321 36959 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “ClusterConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:47:58.731240 36959 common.go:101] your configuration file uses a deprecated API spec: “kubeadm.k8s.io/v1beta3” (kind: “InitConfiguration”). Please use ‘kubeadm config migrate –old-config old.yaml –new-config new.yaml’, which will write the new, similar spec using a newer API version.
W1105 09:47:58.732934 36959 utils.go:69] The recommended value for “clusterDNS” in “KubeletConfiguration” is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://192.168.1.16:2379/version": dial tcp 192.168.1.16:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with `–ignore-preflight-errors=…`
To see the stack trace of this error execute with –v=5 or higher: Process exited with status 1
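Every retry shows the same underlying symptom: image pulls from dockerhub.kubekey.local fail with "tls: failed to verify certificate: x509: certificate signed by unknown authority", i.e. containerd on node1 does not trust the self-signed CA of the local registry. Below is a minimal diagnostic and remediation sketch; it assumes kk init registry placed the registry certificates under /etc/ssl/registry/ssl/ on the registry node, so the paths are assumptions to adapt:
# On node1, check that the registry is reachable and inspect the certificate it presents
curl -kv https://dockerhub.kubekey.local/v2/
# Copy the registry CA from the registry node into the OS trust store (RHEL/OpenEuler layout) and refresh it
scp <registry-node>:/etc/ssl/registry/ssl/ca.crt /etc/pki/ca-trust/source/anchors/dockerhub.kubekey.local-ca.crt
update-ca-trust
# Restart the container runtime so it picks up the updated trust store, then retry a pull before rerunning kk
systemctl restart containerd
crictl pull dockerhub.kubekey.local/kubesphereio/pause:3.10
# The last retry also reports ExternalEtcdVersion: connection refused on 192.168.1.16:2379,
# so also verify etcd is still running (assuming KubeKey installed it as a systemd service named etcd)
systemctl status etcd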