The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
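Before retrying, it may help to inspect and clear the stale kubeconfig this message refers to; a minimal sketch, assuming the default $HOME/.kube/config location mentioned above:

  cat $HOME/.kube/config    # inspect what the previous install left behind
  rm -f $HOME/.kube/config  # remove it so the next init writes a fresh config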
13:50:39 CST message: [master]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1211 13:46:33.866866 31746 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.26.12
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12: output: E1211 13:46:34.465341 31899 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-apiserver/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12"
time="2024-12-11T13:46:34+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-apiserver/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12: output: E1211 13:46:34.801358 31989 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-controller-manager/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12"
time="2024-12-11T13:46:34+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-controller-manager/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12: output: E1211 13:46:35.112048 32092 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12"
time="2024-12-11T13:46:35+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-scheduler/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12: output: E1211 13:46:35.393414 32177 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-proxy/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12"
time="2024-12-11T13:46:35+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.26.12\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/kube-proxy/manifests/v1.26.12\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/pause:3.9: output: E1211 13:46:35.701777 32277 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/pause:3.9\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/pause:3.9\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/pause/manifests/3.9\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/pause:3.9"
time="2024-12-11T13:46:35+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/pause:3.9\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/pause:3.9\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/pause/manifests/3.9\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphereio/coredns:1.9.3: output: E1211 13:46:36.012321 32371 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/coredns:1.9.3\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/coredns:1.9.3\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/coredns/manifests/1.9.3\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="dockerhub.kubekey.local/kubesphereio/coredns:1.9.3"
time="2024-12-11T13:46:36+08:00" level=fatal msg="pulling image: failed to pull and unpack image \"dockerhub.kubekey.local/kubesphereio/coredns:1.9.3\": failed to resolve reference \"dockerhub.kubekey.local/kubesphereio/coredns:1.9.3\": failed to do request: Head \"https://dockerhub.kubekey.local/v2/kubesphereio/coredns/manifests/1.9.3\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [codebase codebase.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master master.cluster.local] and IPs [10.233.0.1 172.16.21.35 127.0.0.1 172.16.20.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
13:50:39 CST retry: [master]
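Every ImagePull warning above fails with the same "x509: certificate signed by unknown authority" error: containerd on the node does not trust the certificate served by the private registry dockerhub.kubekey.local, which likely also explains the later wait-control-plane timeout, since even the pause sandbox image cannot be pulled. A minimal sketch of one way to make the runtime trust that CA; the CA path /etc/ssl/registry/ssl/ca.crt is an assumption and should be replaced with wherever your registry's CA certificate actually lives:

  # Assumed CA path; add the registry CA to the OS trust store (Debian/Ubuntu shown;
  # on RHEL/CentOS use /etc/pki/ca-trust/source/anchors/ plus update-ca-trust)
  sudo cp /etc/ssl/registry/ssl/ca.crt /usr/local/share/ca-certificates/dockerhub.kubekey.local.crt
  sudo update-ca-certificates

  # Restart the runtime so it reloads the trust store, then verify a pull by hand
  sudo systemctl restart containerd
  sudo crictl pull dockerhub.kubekey.local/kubesphereio/pause:3.9

Alternatively, containerd's CRI registry configuration accepts a per-registry ca_file, but updating the OS trust store is usually the simpler fix. If the manual crictl pull succeeds on every node, re-running the installer should get past the preflight image pulls.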