Operating system information
CentOS Stream 9, 4 CPU cores / 8 GB RAM
KubeKey version information:
[root@luban-node01 kubekey]# ./kk version
kk version: &version.Info{Major:"3", Minor:"1", GitVersion:"v3.1.6", GitCommit:"5cad5b5357e80fee211faed743c8f9d452c13b5b", GitTreeState:"clean", BuildDate:"2024-09-03T07:23:37Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
[root@luban-node01 kubekey]#
Kubernetes version information
Installing v1.31.0
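As a quick sanity check (a minimal sketch, assuming the --show-supported-k8s flag is available in this kk 3.x build; output not captured from this environment), the Kubernetes versions this kk binary can install can be listed before starting:

# Print the Kubernetes versions supported by this kk binary and filter for the 1.31 line
./kk version --show-supported-k8s | grep v1.31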
What is the problem?
Installing Kubernetes 1.31.0 offline with kk fails with the following error:
[root@luban-node01 kubekey]# ./kk create cluster -f config-sample.yaml -a kubekey-artifact.tar.gz
 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/
09:59:12 CST [GreetingsModule] Greetings
09:59:12 CST message: [luban-worker3]
Greetings, KubeKey!
09:59:13 CST message: [luban-controlplane3]
Greetings, KubeKey!
09:59:13 CST message: [luban-controlplane1]
Greetings, KubeKey!
09:59:14 CST message: [luban-worker2]
Greetings, KubeKey!
09:59:14 CST message: [luban-worker1]
Greetings, KubeKey!
09:59:14 CST message: [luban-registry]
Greetings, KubeKey!
09:59:14 CST message: [luban-controlplane2]
Greetings, KubeKey!
09:59:14 CST success: [luban-worker3]
09:59:14 CST success: [luban-controlplane3]
09:59:14 CST success: [luban-controlplane1]
09:59:14 CST success: [luban-worker2]
09:59:14 CST success: [luban-worker1]
09:59:14 CST success: [luban-registry]
09:59:14 CST success: [luban-controlplane2]
09:59:14 CST [NodePreCheckModule] A pre-check on nodes
09:59:16 CST success: [luban-registry]
09:59:16 CST success: [luban-worker3]
09:59:16 CST success: [luban-controlplane1]
09:59:16 CST success: [luban-worker1]
09:59:16 CST success: [luban-worker2]
09:59:16 CST success: [luban-controlplane2]
09:59:16 CST success: [luban-controlplane3]
09:59:16 CST [ConfirmModule] Display confirmation form
+---------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
+---------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| luban-controlplane1 | y | y | y | y | y | y | | y | y | | | | | | CST 09:59:16 |
| luban-controlplane2 | y | y | y | y | y | y | | y | y | | | | | | CST 09:59:16 |
| luban-controlplane3 | y | y | y | y | y | y | | y | y | | | | | | CST 09:59:16 |
| luban-registry | y | y | y | y | y | y | | y | y | | | | | | CST 09:59:18 |
| luban-worker1 | y | y | y | y | y | y | | y | y | | | | | | CST 09:59:16 |
| luban-worker2 | y | y | y | y | y | y | | y | y | | | | | | CST 09:59:16 |
| luban-worker3 | y | y | y | y | y | y | | y | y | | | | | | CST 09:59:18 |
+---------------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Install k8s with specify version: v1.31.0
Continue this installation? [yes/no]: yes
09:59:19 CST success: [LocalHost]
09:59:19 CST [UnArchiveArtifactModule] Check the KubeKey artifact md5 value
09:59:34 CST success: [LocalHost]
09:59:34 CST [UnArchiveArtifactModule] UnArchive the KubeKey artifact
09:59:34 CST skipped: [LocalHost]
09:59:34 CST [UnArchiveArtifactModule] Create the KubeKey artifact Md5 file
09:59:34 CST skipped: [LocalHost]
09:59:34 CST [NodeBinariesModule] Download installation binaries
09:59:34 CST message: [localhost]
downloading amd64 kubeadm v1.31.0 ...
09:59:35 CST message: [localhost]
kubeadm exists
09:59:35 CST message: [localhost]
downloading amd64 kubelet v1.31.0 ...
09:59:37 CST message: [localhost]
kubelet exists
09:59:37 CST message: [localhost]
downloading amd64 kubectl v1.31.0 ...
09:59:39 CST message: [localhost]
kubectl exists
09:59:39 CST message: [localhost]
downloading amd64 helm v3.14.3 ...
09:59:40 CST message: [localhost]
helm exists
09:59:40 CST message: [localhost]
downloading amd64 kubecni v1.2.0 ...
09:59:41 CST message: [localhost]
kubecni exists
09:59:41 CST message: [localhost]
downloading amd64 crictl v1.29.0 ...
09:59:42 CST message: [localhost]
crictl exists
09:59:42 CST message: [localhost]
downloading amd64 etcd v3.5.13 ...
09:59:43 CST message: [localhost]
etcd exists
09:59:43 CST message: [localhost]
downloading amd64 containerd 1.7.13 ...
09:59:44 CST message: [localhost]
containerd exists
09:59:44 CST message: [localhost]
downloading amd64 runc v1.1.12 ...
09:59:44 CST message: [localhost]
runc exists
09:59:44 CST message: [localhost]
downloading amd64 calicoctl v3.27.4 ...
09:59:46 CST message: [localhost]
calicoctl exists
09:59:46 CST success: [LocalHost]
09:59:46 CST [ConfigureOSModule] Get OS release
09:59:46 CST success: [luban-worker3]
09:59:46 CST success: [luban-registry]
09:59:46 CST success: [luban-controlplane1]
09:59:46 CST success: [luban-worker2]
09:59:46 CST success: [luban-controlplane2]
09:59:46 CST success: [luban-controlplane3]
09:59:46 CST success: [luban-worker1]
09:59:46 CST [ConfigureOSModule] Prepare to init OS
09:59:49 CST success: [luban-registry]
09:59:49 CST success: [luban-worker3]
09:59:49 CST success: [luban-worker2]
09:59:49 CST success: [luban-controlplane1]
09:59:49 CST success: [luban-worker1]
09:59:49 CST success: [luban-controlplane3]
09:59:49 CST success: [luban-controlplane2]
09:59:49 CST [ConfigureOSModule] Generate init os script
09:59:49 CST success: [luban-worker3]
09:59:49 CST success: [luban-registry]
09:59:49 CST success: [luban-controlplane1]
09:59:49 CST success: [luban-controlplane2]
09:59:49 CST success: [luban-worker1]
09:59:49 CST success: [luban-worker2]
09:59:49 CST success: [luban-controlplane3]
09:59:49 CST [ConfigureOSModule] Exec init os script
09:59:51 CST stdout: [luban-registry]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
09:59:52 CST stdout: [luban-worker1]
Permissive
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
09:59:52 CST stdout: [luban-worker3]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
09:59:52 CST stdout: [luban-worker2]
Permissive
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
09:59:52 CST stdout: [luban-controlplane1]
Permissive
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
09:59:52 CST stdout: [luban-controlplane3]
Permissive
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
09:59:52 CST stdout: [luban-controlplane2]
Permissive
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 1048576
net.ipv4.neigh.default.gc_thresh1 = 512
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_max_orphans = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.udp_rmem_min = 131072
net.ipv4.udp_wmem_min = 131072
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.arp_accept = 1
net.ipv4.conf.default.arp_accept = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.default.arp_ignore = 1
vm.max_map_count = 262144
vm.swappiness = 0
vm.overcommit_memory = 0
fs.inotify.max_user_instances = 524288
fs.inotify.max_user_watches = 524288
fs.pipe-max-size = 4194304
fs.aio-max-nr = 262144
kernel.pid_max = 65535
kernel.watchdog_thresh = 5
kernel.hung_task_timeout_secs = 5
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
09:59:52 CST success: [luban-registry]
09:59:52 CST success: [luban-worker1]
09:59:52 CST success: [luban-worker3]
09:59:52 CST success: [luban-worker2]
09:59:52 CST success: [luban-controlplane1]
09:59:52 CST success: [luban-controlplane3]
09:59:52 CST success: [luban-controlplane2]
09:59:52 CST [ConfigureOSModule] configure the ntp server for each node
ntpserver: luban-registry, current host: luban-controlplane3
ntpserver: luban-registry, current host: luban-controlplane2
ntpserver: luban-registry, current host: luban-worker2
ntpserver: luban-registry, current host: luban-registry
ntpserver: luban-registry, current host: luban-worker1
ntpserver: luban-registry, current host: luban-worker3
ntpserver: luban-registry, current host: luban-controlplane1
09:59:56 CST stdout: [luban-controlplane3]
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? luban-registry.cluster.l> 0 6 0 - +0ns[ +0ns] +/- 0ns
09:59:57 CST stdout: [luban-controlplane2]
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? luban-registry.cluster.l> 0 6 0 - +0ns[ +0ns] +/- 0ns
09:59:57 CST stdout: [luban-worker2]
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? luban-registry.cluster.l> 0 6 0 - +0ns[ +0ns] +/- 0ns
09:59:57 CST stdout: [luban-worker3]
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? luban-registry.cluster.l> 0 6 0 - +0ns[ +0ns] +/- 0ns
09:59:58 CST stdout: [luban-worker1]
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? luban-registry.cluster.l> 0 6 0 - +0ns[ +0ns] +/- 0ns
09:59:58 CST stdout: [luban-registry]
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? luban-registry.cluster.l> 0 6 1 - +0ns[ +0ns] +/- 0ns
09:59:59 CST stdout: [luban-controlplane1]
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? luban-registry.cluster.l> 0 6 0 - +0ns[ +0ns] +/- 0ns
09:59:59 CST success: [luban-controlplane3]
09:59:59 CST success: [luban-controlplane2]
09:59:59 CST success: [luban-worker2]
09:59:59 CST success: [luban-worker3]
09:59:59 CST success: [luban-worker1]
09:59:59 CST success: [luban-registry]
09:59:59 CST success: [luban-controlplane1]
09:59:59 CST [KubernetesStatusModule] Get kubernetes cluster status
09:59:59 CST success: [luban-controlplane1]
09:59:59 CST success: [luban-controlplane2]
09:59:59 CST success: [luban-controlplane3]
09:59:59 CST [InstallContainerModule] Sync containerd binaries
10:00:06 CST success: [luban-controlplane2]
10:00:06 CST success: [luban-controlplane1]
10:00:06 CST success: [luban-controlplane3]
10:00:06 CST success: [luban-worker3]
10:00:06 CST success: [luban-worker2]
10:00:06 CST success: [luban-worker1]
10:00:06 CST [InstallContainerModule] Generate containerd service
10:00:10 CST success: [luban-worker3]
10:00:10 CST success: [luban-worker1]
10:00:10 CST success: [luban-worker2]
10:00:10 CST success: [luban-controlplane2]
10:00:10 CST success: [luban-controlplane3]
10:00:10 CST success: [luban-controlplane1]
10:00:10 CST [InstallContainerModule] Generate containerd config
10:00:11 CST success: [luban-worker3]
10:00:11 CST success: [luban-controlplane1]
10:00:11 CST success: [luban-worker1]
10:00:11 CST success: [luban-controlplane3]
10:00:11 CST success: [luban-worker2]
10:00:11 CST success: [luban-controlplane2]
10:00:11 CST [InstallContainerModule] Enable containerd
10:00:20 CST success: [luban-controlplane2]
10:00:20 CST success: [luban-controlplane1]
10:00:20 CST success: [luban-worker2]
10:00:20 CST success: [luban-controlplane3]
10:00:20 CST success: [luban-worker1]
10:00:20 CST success: [luban-worker3]
10:00:20 CST [InstallContainerModule] Sync crictl binaries
10:00:23 CST success: [luban-worker3]
10:00:23 CST success: [luban-worker1]
10:00:23 CST success: [luban-worker2]
10:00:23 CST success: [luban-controlplane2]
10:00:23 CST success: [luban-controlplane1]
10:00:23 CST success: [luban-controlplane3]
10:00:23 CST [InstallContainerModule] Generate crictl config
10:00:24 CST success: [luban-worker3]
10:00:24 CST success: [luban-controlplane1]
10:00:24 CST success: [luban-worker1]
10:00:24 CST success: [luban-controlplane2]
10:00:24 CST success: [luban-worker2]
10:00:24 CST success: [luban-controlplane3]
10:00:24 CST [CopyImagesToRegistryModule] Copy images to a private registry from an artifact OCI Path
10:00:24 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.27.4-amd64
10:00:24 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/cni:v3.27.4-amd64
Getting image source signatures
Copying blob ad5042aba4ea skipped: already exists
Copying blob ad5afd8b9110 skipped: already exists
Copying blob 4912cdb2d88f skipped: already exists
Copying blob 836c0ab60609 skipped: already exists
Copying blob fb1dccfdab01 skipped: already exists
Copying blob 48559947bc83 skipped: already exists
Copying blob 10bfba271dd6 skipped: already exists
Copying blob b937ea73bd2f skipped: already exists
Copying blob 43c634c0d348 skipped: already exists
Copying blob b498e09d7286 skipped: already exists
Copying blob 4c50ad0b0819 skipped: already exists
Copying blob 4f4fb700ef54 skipped: already exists
Copying config 38d7b2ae27 done
Writing manifest to image destination
Storing signatures
10:00:29 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.9.3-amd64
10:00:29 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/coredns:1.9.3-amd64
Getting image source signatures
Copying blob d92bdee79785 skipped: already exists
Copying blob f2401d57212f skipped: already exists
Copying config a2fe663586 done
Writing manifest to image destination
Storing signatures
10:00:29 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.22.20-amd64
10:00:29 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20-amd64
Getting image source signatures
Copying blob d6460eb5ced5 skipped: already exists
Copying blob 9088d8860fc5 skipped: already exists
Copying blob cf529ee3fa7a skipped: already exists
Copying blob 87c9185cc69e skipped: already exists
Copying blob d312b7a6b045 skipped: already exists
Copying config 0d5dc31130 done
Writing manifest to image destination
Storing signatures
10:00:31 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.31.0-amd64
10:00:31 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.0-amd64
Getting image source signatures
Copying blob b2ce0e066077 skipped: already exists
Copying blob 2bdf44d7aa71 skipped: already exists
Copying blob 058cf3d8c2ba skipped: already exists
Copying blob b6824ed73363 skipped: already exists
Copying blob 7c12895b777b skipped: already exists
Copying blob 33e068de2649 skipped: already exists
Copying blob 5664b15f108b skipped: already exists
Copying blob 27be814a09eb skipped: already exists
Copying blob 4aa0ea1413d3 skipped: already exists
Copying blob 3f4e2c586348 skipped: already exists
Copying blob 9aee425378d2 skipped: already exists
Copying blob 8801f2e6d7f8 skipped: already exists
Copying blob edf631c66644 skipped: already exists
Copying config f96f166b76 done
Writing manifest to image destination
Storing signatures
10:00:33 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.31.0-amd64
10:00:33 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.0-amd64
Getting image source signatures
Copying blob b2ce0e066077 skipped: already exists
Copying blob 2bdf44d7aa71 skipped: already exists
Copying blob 058cf3d8c2ba skipped: already exists
Copying blob b6824ed73363 skipped: already exists
Copying blob 7c12895b777b skipped: already exists
Copying blob 33e068de2649 skipped: already exists
Copying blob 5664b15f108b skipped: already exists
Copying blob 27be814a09eb skipped: already exists
Copying blob 4aa0ea1413d3 skipped: already exists
Copying blob 3f4e2c586348 skipped: already exists
Copying blob 9aee425378d2 skipped: already exists
Copying blob 8801f2e6d7f8 skipped: already exists
Copying blob cc5896ab1d4b skipped: already exists
Copying config 25c4c99e45 done
Writing manifest to image destination
Storing signatures
10:00:36 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.27.4-amd64
10:00:36 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.4-amd64
Getting image source signatures
Copying blob 10a91b291da7 skipped: already exists
Copying blob faf49d0505ff skipped: already exists
Copying blob 94dd3e4aad5e skipped: already exists
Copying blob 773cd02ddd65 skipped: already exists
Copying blob 63c33a538dbc skipped: already exists
Copying blob 55991ce51cac skipped: already exists
Copying blob 8be438b17140 skipped: already exists
Copying blob 63af43f699dd skipped: already exists
Copying blob 5e4ffd087882 skipped: already exists
Copying blob 4ddc208db1fd skipped: already exists
Copying blob 5929896e7bd1 skipped: already exists
Copying blob 14cd3b0420b9 skipped: already exists
Copying blob d381ef6b9923 skipped: already exists
Copying blob dea0625e1a7c skipped: already exists
Copying config 913a31a98a done
Writing manifest to image destination
Storing signatures
10:00:39 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.31.0-amd64
10:00:39 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.0-amd64
Getting image source signatures
Copying blob 6ad376dad8cd skipped: already exists
Copying blob ee3d1355fd5b skipped: already exists
Copying config 97c0316b55 done
Writing manifest to image destination
Storing signatures
10:00:39 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.31.0-amd64
10:00:39 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.0-amd64
Getting image source signatures
Copying blob b2ce0e066077 skipped: already exists
Copying blob 2bdf44d7aa71 skipped: already exists
Copying blob 058cf3d8c2ba skipped: already exists
Copying blob b6824ed73363 skipped: already exists
Copying blob 7c12895b777b skipped: already exists
Copying blob 33e068de2649 skipped: already exists
Copying blob 5664b15f108b skipped: already exists
Copying blob 27be814a09eb skipped: already exists
Copying blob 4aa0ea1413d3 skipped: already exists
Copying blob 3f4e2c586348 skipped: already exists
Copying blob 9aee425378d2 skipped: already exists
Copying blob 8801f2e6d7f8 skipped: already exists
Copying blob 52e1437459d6 skipped: already exists
Copying config b16cde7611 done
Writing manifest to image destination
Storing signatures
10:00:43 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.27.4-amd64
10:00:43 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/node:v3.27.4-amd64
Getting image source signatures
Copying blob 90f0e8a0ee36 skipped: already exists
Copying blob cdf2843bc0f1 skipped: already exists
Copying blob 7402091d8d69 skipped: already exists
Copying config 1a29b08306 done
Writing manifest to image destination
Storing signatures
10:00:44 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.10-amd64
10:00:44 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/pause:3.10-amd64
Getting image source signatures
Copying blob 61d9e957431b skipped: already exists
Copying config 55c4b248e8 done
Writing manifest to image destination
Storing signatures
10:00:44 CST Source: oci:/root/kubekey/kubekey/images:registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.9-amd64
10:00:44 CST Destination: docker://dockerhub.kubekey.local/kubesphereio/pause:3.9-amd64
Getting image source signatures
Copying blob 61fec91190a0 skipped: already exists
Copying config ada54d1fe6 done
Writing manifest to image destination
Storing signatures
10:00:46 CST success: [LocalHost]
10:00:46 CST [CopyImagesToRegistryModule] Push multi-arch manifest to private registry
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.31.0
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:498eaee7ea261c0ded2b05da839b446c8c9e1a4b1fb6d19faae50051b36a27fe Length: 393
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/kube-proxy:v1.31.0
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:c0099715d3708334f636a49e5306d9446e7970bba5bc1651693c837df198afce Length: 392
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/node:v3.27.4
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:4e67c5040ed0a7a60ae8b844dc7b9db54d5c6106b9cdcfc42ad56f527dbc1ef0 Length: 392
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/pause:3.10
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:0c2557d61d929463657a99a9f83da52420c68016f3c309b3cd5ee3946fe4f5e0 Length: 392
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/pause:3.9
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:7315ae9eecc0fce77d454f77cf85e4437836b77542ea1f38de59ac71df32869d Length: 392
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/cni:v3.27.4
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:d7d784e34f8ff023d38961bb8e3ceca6f02e4973554f80cf5d217a9d4f91d4fa Length: 393
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/coredns:1.9.3
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:67c5f759dc05ed4408037dcb3dfc4d16b6b8de5bc1e7a9880ccfd3156737a422 Length: 392
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/k8s-dns-node-cache:1.22.20
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:d24c38635e172fc9432f56a23ad8c42ff1d12387a379654af1c5ddba1d0f2497 Length: 393
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/kube-controller-manager:v1.31.0
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:9096079e04d3170731b476a10f984826a0048c8d3417fe0991f59f63929a91c2 Length: 393
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/kube-controllers:v3.27.4
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:319d1ff311cde987df54268fbbe07247094d9fceb0878af1d331555b04dd19d3 Length: 393
10:00:46 CST Push multi-arch manifest list: dockerhub.kubekey.local/kubesphereio/kube-scheduler:v1.31.0
INFO[0092] Retrieving digests of member images
10:00:46 CST Digest: sha256:4a687e3b240e4a75ee15dceceebac2b1179978cf57b69c3a195f600308901e20 Length: 393
10:00:46 CST success: [LocalHost]
10:00:46 CST [PullModule] Start to pull images on all nodes
10:00:46 CST message: [luban-worker3]
downloading image: dockerhub.kubekey.local/kubesphere/pause:3.9
10:00:46 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/kubesphere/pause:3.9
10:00:46 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/kubesphere/pause:3.9
10:00:46 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/kubesphere/pause:3.9
10:00:46 CST message: [luban-worker2]
downloading image: dockerhub.kubekey.local/kubesphere/pause:3.9
10:00:46 CST message: [luban-worker1]
downloading image: dockerhub.kubekey.local/kubesphere/pause:3.9
10:00:46 CST message: [luban-worker3]
downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0
10:00:46 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0
10:00:46 CST message: [luban-worker1]
downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0
10:00:46 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0
10:00:46 CST message: [luban-worker2]
downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0
10:00:46 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0
10:00:46 CST message: [luban-worker3]
downloading image: dockerhub.kubekey.local/coredns/coredns:1.9.3
10:00:47 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0
10:00:47 CST message: [luban-worker1]
downloading image: dockerhub.kubekey.local/coredns/coredns:1.9.3
10:00:47 CST message: [luban-worker2]
downloading image: dockerhub.kubekey.local/coredns/coredns:1.9.3
10:00:47 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0
10:00:47 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0
10:00:47 CST message: [luban-worker3]
downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.22.20
10:00:47 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0
10:00:47 CST message: [luban-worker1]
downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.22.20
10:00:47 CST message: [luban-worker3]
downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.27.4
10:00:47 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0
10:00:47 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0
10:00:47 CST message: [luban-worker2]
downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.22.20
10:00:47 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0
10:00:47 CST message: [luban-worker3]
downloading image: dockerhub.kubekey.local/calico/cni:v3.27.4
10:00:47 CST message: [luban-worker1]
downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.27.4
10:00:47 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/coredns/coredns:1.9.3
10:00:47 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0
10:00:47 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0
10:00:47 CST message: [luban-worker2]
downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.27.4
10:00:47 CST message: [luban-worker3]
downloading image: dockerhub.kubekey.local/calico/node:v3.27.4
10:00:47 CST message: [luban-worker1]
downloading image: dockerhub.kubekey.local/calico/cni:v3.27.4
10:00:47 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.22.20
10:00:47 CST message: [luban-worker3]
downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.27.4
10:00:47 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/coredns/coredns:1.9.3
10:00:47 CST message: [luban-worker2]
downloading image: dockerhub.kubekey.local/calico/cni:v3.27.4
10:00:47 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/coredns/coredns:1.9.3
10:00:47 CST message: [luban-worker1]
downloading image: dockerhub.kubekey.local/calico/node:v3.27.4
10:00:47 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.27.4
10:00:47 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.22.20
10:00:47 CST message: [luban-worker2]
downloading image: dockerhub.kubekey.local/calico/node:v3.27.4
10:00:47 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.22.20
10:00:47 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/calico/cni:v3.27.4
10:00:47 CST message: [luban-worker1]
downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.27.4
10:00:47 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.27.4
10:00:47 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/calico/node:v3.27.4
10:00:47 CST message: [luban-worker2]
downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.27.4
10:00:47 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.27.4
10:00:47 CST message: [luban-controlplane1]
downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.27.4
10:00:47 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/calico/cni:v3.27.4
10:00:48 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/calico/cni:v3.27.4
10:00:48 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/calico/node:v3.27.4
10:00:48 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/calico/node:v3.27.4
10:00:48 CST message: [luban-controlplane3]
downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.27.4
10:00:48 CST message: [luban-controlplane2]
downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.27.4
10:00:48 CST success: [luban-registry]
10:00:48 CST success: [luban-worker3]
10:00:48 CST success: [luban-worker1]
10:00:48 CST success: [luban-worker2]
10:00:48 CST success: [luban-controlplane1]
10:00:48 CST success: [luban-controlplane3]
10:00:48 CST success: [luban-controlplane2]
10:00:48 CST [ETCDPreCheckModule] Get etcd status
10:00:48 CST success: [luban-controlplane1]
10:00:48 CST success: [luban-controlplane2]
10:00:48 CST success: [luban-controlplane3]
10:00:48 CST [CertsModule] Fetch etcd certs
10:00:48 CST success: [luban-controlplane1]
10:00:48 CST skipped: [luban-controlplane2]
10:00:48 CST skipped: [luban-controlplane3]
10:00:48 CST [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-luban-controlplane1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane2 luban-controlplane3 luban-registry luban-worker1 luban-worker2 luban-worker3] and IPs [127.0.0.1 ::1 192.168.10.202 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] member-luban-controlplane1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane2 luban-controlplane3 luban-registry luban-worker1 luban-worker2 luban-worker3] and IPs [127.0.0.1 ::1 192.168.10.202 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] node-luban-controlplane1 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane2 luban-controlplane3 luban-registry luban-worker1 luban-worker2 luban-worker3] and IPs [127.0.0.1 ::1 192.168.10.202 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] admin-luban-controlplane2 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane2 luban-controlplane3 luban-registry luban-worker1 luban-worker2 luban-worker3] and IPs [127.0.0.1 ::1 192.168.10.202 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] member-luban-controlplane2 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane2 luban-controlplane3 luban-registry luban-worker1 luban-worker2 luban-worker3] and IPs [127.0.0.1 ::1 192.168.10.202 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] node-luban-controlplane2 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane2 luban-controlplane3 luban-registry luban-worker1 luban-worker2 luban-worker3] and IPs [127.0.0.1 ::1 192.168.10.202 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] admin-luban-controlplane3 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane2 luban-controlplane3 luban-registry luban-worker1 luban-worker2 luban-worker3] and IPs [127.0.0.1 ::1 192.168.10.202 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] member-luban-controlplane3 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane2 luban-controlplane3 luban-registry luban-worker1 luban-worker2 luban-worker3] and IPs [127.0.0.1 ::1 192.168.10.202 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] node-luban-controlplane3 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane2 luban-controlplane3 luban-registry luban-worker1 luban-worker2 luban-worker3] and IPs [127.0.0.1 ::1 192.168.10.202 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
10:00:51 CST success: [LocalHost]
10:00:51 CST [CertsModule] Synchronize certs file
10:01:03 CST success: [luban-controlplane1]
10:01:03 CST success: [luban-controlplane3]
10:01:03 CST success: [luban-controlplane2]
10:01:03 CST [CertsModule] Synchronize certs file to master
10:01:03 CST skipped: [luban-controlplane3]
10:01:03 CST skipped: [luban-controlplane1]
10:01:03 CST skipped: [luban-controlplane2]
10:01:03 CST [InstallETCDBinaryModule] Install etcd using binary
10:01:08 CST success: [luban-controlplane1]
10:01:08 CST success: [luban-controlplane2]
10:01:08 CST success: [luban-controlplane3]
10:01:08 CST [InstallETCDBinaryModule] Generate etcd service
10:01:09 CST success: [luban-controlplane1]
10:01:09 CST success: [luban-controlplane3]
10:01:09 CST success: [luban-controlplane2]
10:01:09 CST [InstallETCDBinaryModule] Generate access address
10:01:09 CST skipped: [luban-controlplane3]
10:01:09 CST skipped: [luban-controlplane2]
10:01:09 CST success: [luban-controlplane1]
10:01:09 CST [ETCDConfigureModule] Health check on exist etcd
10:01:09 CST skipped: [luban-controlplane3]
10:01:09 CST skipped: [luban-controlplane1]
10:01:09 CST skipped: [luban-controlplane2]
10:01:09 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
10:01:10 CST success: [luban-controlplane1]
10:01:10 CST success: [luban-controlplane2]
10:01:10 CST success: [luban-controlplane3]
10:01:10 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
10:01:12 CST success: [luban-controlplane1]
10:01:12 CST success: [luban-controlplane2]
10:01:12 CST success: [luban-controlplane3]
10:01:12 CST [ETCDConfigureModule] Restart etcd
10:01:18 CST success: [luban-controlplane2]
10:01:18 CST success: [luban-controlplane3]
10:01:18 CST success: [luban-controlplane1]
10:01:18 CST [ETCDConfigureModule] Health check on all etcd
10:01:20 CST success: [luban-controlplane3]
10:01:20 CST success: [luban-controlplane1]
10:01:20 CST success: [luban-controlplane2]
10:01:20 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
10:01:21 CST success: [luban-controlplane1]
10:01:21 CST success: [luban-controlplane2]
10:01:21 CST success: [luban-controlplane3]
10:01:21 CST [ETCDConfigureModule] Health check on all etcd
10:01:22 CST success: [luban-controlplane1]
10:01:22 CST success: [luban-controlplane3]
10:01:22 CST success: [luban-controlplane2]
10:01:22 CST [ETCDBackupModule] Backup etcd data regularly
10:01:22 CST success: [luban-controlplane1]
10:01:22 CST success: [luban-controlplane3]
10:01:22 CST success: [luban-controlplane2]
10:01:22 CST [ETCDBackupModule] Generate backup ETCD service
10:01:23 CST success: [luban-controlplane1]
10:01:23 CST success: [luban-controlplane3]
10:01:23 CST success: [luban-controlplane2]
10:01:23 CST [ETCDBackupModule] Generate backup ETCD timer
10:01:23 CST success: [luban-controlplane1]
10:01:23 CST success: [luban-controlplane3]
10:01:23 CST success: [luban-controlplane2]
10:01:23 CST [ETCDBackupModule] Enable backup etcd service
10:01:24 CST success: [luban-controlplane3]
10:01:24 CST success: [luban-controlplane1]
10:01:24 CST success: [luban-controlplane2]
10:01:24 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
10:01:54 CST success: [luban-controlplane1]
10:01:54 CST success: [luban-worker3]
10:01:54 CST success: [luban-controlplane2]
10:01:54 CST success: [luban-controlplane3]
10:01:54 CST success: [luban-worker1]
10:01:54 CST success: [luban-worker2]
10:01:54 CST [InstallKubeBinariesModule] Change kubelet mode
10:01:55 CST success: [luban-worker3]
10:01:55 CST success: [luban-controlplane1]
10:01:55 CST success: [luban-worker1]
10:01:55 CST success: [luban-controlplane3]
10:01:55 CST success: [luban-worker2]
10:01:55 CST success: [luban-controlplane2]
10:01:55 CST [InstallKubeBinariesModule] Generate kubelet service
10:01:55 CST success: [luban-worker3]
10:01:55 CST success: [luban-controlplane1]
10:01:55 CST success: [luban-worker1]
10:01:55 CST success: [luban-controlplane2]
10:01:55 CST success: [luban-controlplane3]
10:01:55 CST success: [luban-worker2]
10:01:55 CST [InstallKubeBinariesModule] Enable kubelet service
10:01:56 CST success: [luban-worker3]
10:01:56 CST success: [luban-worker2]
10:01:56 CST success: [luban-controlplane2]
10:01:56 CST success: [luban-worker1]
10:01:56 CST success: [luban-controlplane1]
10:01:56 CST success: [luban-controlplane3]
10:01:56 CST [InstallKubeBinariesModule] Generate kubelet env
10:01:57 CST success: [luban-worker3]
10:01:57 CST success: [luban-controlplane1]
10:01:57 CST success: [luban-worker1]
10:01:57 CST success: [luban-controlplane2]
10:01:57 CST success: [luban-worker2]
10:01:57 CST success: [luban-controlplane3]
10:01:57 CST [InitKubernetesModule] Generate kubeadm config
10:01:58 CST skipped: [luban-controlplane3]
10:01:58 CST skipped: [luban-controlplane2]
10:01:58 CST success: [luban-controlplane1]
10:01:58 CST [InitKubernetesModule] Generate audit policy
10:01:58 CST skipped: [luban-controlplane3]
10:01:58 CST skipped: [luban-controlplane1]
10:01:58 CST skipped: [luban-controlplane2]
10:01:58 CST [InitKubernetesModule] Generate audit webhook
10:01:58 CST skipped: [luban-controlplane3]
10:01:58 CST skipped: [luban-controlplane1]
10:01:58 CST skipped: [luban-controlplane2]
10:01:58 CST [InitKubernetesModule] Init cluster using kubeadm
10:06:28 CST stdout: [luban-controlplane1]
W1029 10:01:58.463756 24462 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:01:58.465127 24462 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:01:58.467593 24462 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1029 10:01:58.830081 24462 checks.go:846] detected that the sandbox image "dockerhub.kubekey.local/kubesphere/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "dockerhub.kubekey.local/kubesphere/pause:3.10" as the CRI sandbox image.
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-apiserver/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-controller-manager/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-scheduler/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-proxy/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/coredns/coredns:1.9.3: failed to pull image dockerhub.kubekey.local/coredns/coredns:1.9.3: failed to pull and unpack image "dockerhub.kubekey.local/coredns/coredns:1.9.3": failed to resolve reference "dockerhub.kubekey.local/coredns/coredns:1.9.3": failed to do request: Head "https://dockerhub.kubekey.local/v2/coredns/coredns/manifests/1.9.3": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/pause:3.10: failed to pull image dockerhub.kubekey.local/kubesphere/pause:3.10: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/pause:3.10": failed to resolve reference "dockerhub.kubekey.local/kubesphere/pause:3.10": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/pause/manifests/3.10": tls: failed to verify certificate: x509: certificate signed by unknown authority
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane1.cluster.local luban-controlplane2 luban-controlplane2.cluster.local luban-controlplane3 luban-controlplane3.cluster.local luban-registry luban-registry.cluster.local luban-worker1 luban-worker1.cluster.local luban-worker2 luban-worker2.cluster.local luban-worker3 luban-worker3.cluster.local] and IPs [10.233.0.1 192.168.10.202 127.0.0.1 192.168.10.199 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 500.995771ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.451798477s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
10:06:39 CST stdout: [luban-controlplane1]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1029 10:06:39.041323 24583 reset.go:123] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
[preflight] Running pre-flight checks
W1029 10:06:39.041702 24583 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
10:06:39 CST message: [luban-controlplane1]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1029 10:01:58.463756 24462 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:01:58.465127 24462 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:01:58.467593 24462 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1029 10:01:58.830081 24462 checks.go:846] detected that the sandbox image "dockerhub.kubekey.local/kubesphere/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "dockerhub.kubekey.local/kubesphere/pause:3.10" as the CRI sandbox image.
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-apiserver/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-controller-manager/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-scheduler/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-proxy/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/coredns/coredns:1.9.3: failed to pull image dockerhub.kubekey.local/coredns/coredns:1.9.3: failed to pull and unpack image "dockerhub.kubekey.local/coredns/coredns:1.9.3": failed to resolve reference "dockerhub.kubekey.local/coredns/coredns:1.9.3": failed to do request: Head "https://dockerhub.kubekey.local/v2/coredns/coredns/manifests/1.9.3": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/pause:3.10: failed to pull image dockerhub.kubekey.local/kubesphere/pause:3.10: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/pause:3.10": failed to resolve reference "dockerhub.kubekey.local/kubesphere/pause:3.10": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/pause/manifests/3.10": tls: failed to verify certificate: x509: certificate signed by unknown authority
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane1.cluster.local luban-controlplane2 luban-controlplane2.cluster.local luban-controlplane3 luban-controlplane3.cluster.local luban-registry luban-registry.cluster.local luban-worker1 luban-worker1.cluster.local luban-worker2 luban-worker2.cluster.local luban-worker3 luban-worker3.cluster.local] and IPs [10.233.0.1 192.168.10.202 127.0.0.1 192.168.10.199 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 500.995771ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.451798477s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
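
Every [WARNING ImagePull] line above is the same underlying failure: containerd on the nodes does not trust the self-signed CA presented by the local registry dockerhub.kubekey.local, so neither the control-plane images nor the pause sandbox image can be pulled, and the API server can never become healthy. A minimal sketch of the usual fix on CentOS Stream 9, assuming the CA that kk generated for the registry is available as ca.crt on luban-registry (the /etc/ssl/registry/ssl/ca.crt path below is an assumption; adjust it to wherever the file actually lives), repeated on every node:

# Add the registry CA to the system trust store
scp root@luban-registry:/etc/ssl/registry/ssl/ca.crt /etc/pki/ca-trust/source/anchors/dockerhub.kubekey.local.crt
update-ca-trust extract

# containerd only picks up the trust store at startup
systemctl restart containerd

# Confirm that the pull now works
crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
       --image-endpoint unix:///run/containerd/containerd.sock \
       pull dockerhub.kubekey.local/kubesphere/pause:3.10

Alternatively, containerd can be given the CA for this registry only, via a hosts.toml under /etc/containerd/certs.d/dockerhub.kubekey.local/, if touching the system trust store is not desirable.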
10:06:39 CST retry: [luban-controlplane1]
10:10:56 CST stdout: [luban-controlplane1]
W1029 10:06:44.226268 24627 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:06:44.227543 24627 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:06:44.229546 24627 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1029 10:06:44.403454 24627 checks.go:846] detected that the sandbox image "dockerhub.kubekey.local/kubesphere/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "dockerhub.kubekey.local/kubesphere/pause:3.10" as the CRI sandbox image.
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-apiserver/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-controller-manager/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-scheduler/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-proxy/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/coredns/coredns:1.9.3: failed to pull image dockerhub.kubekey.local/coredns/coredns:1.9.3: failed to pull and unpack image "dockerhub.kubekey.local/coredns/coredns:1.9.3": failed to resolve reference "dockerhub.kubekey.local/coredns/coredns:1.9.3": failed to do request: Head "https://dockerhub.kubekey.local/v2/coredns/coredns/manifests/1.9.3": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/pause:3.10: failed to pull image dockerhub.kubekey.local/kubesphere/pause:3.10: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/pause:3.10": failed to resolve reference "dockerhub.kubekey.local/kubesphere/pause:3.10": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/pause/manifests/3.10": tls: failed to verify certificate: x509: certificate signed by unknown authority
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane1.cluster.local luban-controlplane2 luban-controlplane2.cluster.local luban-controlplane3 luban-controlplane3.cluster.local luban-registry luban-registry.cluster.local luban-worker1 luban-worker1.cluster.local luban-worker2 luban-worker2.cluster.local luban-worker3 luban-worker3.cluster.local] and IPs [10.233.0.1 192.168.10.202 127.0.0.1 192.168.10.199 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001888116s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.441455343s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
10:11:06 CST stdout: [luban-controlplane1]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1029 10:11:06.689521 24912 reset.go:123] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
[preflight] Running pre-flight checks
W1029 10:11:06.689788 24912 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
10:11:06 CST message: [luban-controlplane1]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1029 10:06:44.226268 24627 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:06:44.227543 24627 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:06:44.229546 24627 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W1029 10:06:44.403454 24627 checks.go:846] detected that the sandbox image "dockerhub.kubekey.local/kubesphere/pause:3.9" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "dockerhub.kubekey.local/kubesphere/pause:3.10" as the CRI sandbox image.
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-apiserver/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-controller-manager/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-scheduler/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0: failed to pull image dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0": failed to resolve reference "dockerhub.kubekey.local/kubesphere/kube-proxy:v1.31.0": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/kube-proxy/manifests/v1.31.0": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/coredns/coredns:1.9.3: failed to pull image dockerhub.kubekey.local/coredns/coredns:1.9.3: failed to pull and unpack image "dockerhub.kubekey.local/coredns/coredns:1.9.3": failed to resolve reference "dockerhub.kubekey.local/coredns/coredns:1.9.3": failed to do request: Head "https://dockerhub.kubekey.local/v2/coredns/coredns/manifests/1.9.3": tls: failed to verify certificate: x509: certificate signed by unknown authority
[WARNING ImagePull]: failed to pull image dockerhub.kubekey.local/kubesphere/pause:3.10: failed to pull image dockerhub.kubekey.local/kubesphere/pause:3.10: failed to pull and unpack image "dockerhub.kubekey.local/kubesphere/pause:3.10": failed to resolve reference "dockerhub.kubekey.local/kubesphere/pause:3.10": failed to do request: Head "https://dockerhub.kubekey.local/v2/kubesphere/pause/manifests/3.10": tls: failed to verify certificate: x509: certificate signed by unknown authority
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost luban-controlplane1 luban-controlplane1.cluster.local luban-controlplane2 luban-controlplane2.cluster.local luban-controlplane3 luban-controlplane3.cluster.local luban-registry luban-registry.cluster.local luban-worker1 luban-worker1.cluster.local luban-worker2 luban-worker2.cluster.local luban-worker3 luban-worker3.cluster.local] and IPs [10.233.0.1 192.168.10.202 127.0.0.1 192.168.10.199 192.168.10.203 192.168.10.204 192.168.10.205 192.168.10.206 192.168.10.207 192.168.10.198]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001888116s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is not healthy after 4m0.441455343s
Unfortunately, an error has occurred:
context deadline exceeded
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: could not initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
10:11:06 CST retry: [luban-controlplane1]
10:11:27 CST stdout: [luban-controlplane1]
W1029 10:11:11.894313 24968 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:11:11.895459 24968 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:11:11.897492 24968 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://192.168.10.202:2379/version": dial tcp 192.168.10.202:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
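
The third attempt fails earlier, in preflight: the external etcd endpoint https://192.168.10.202:2379 that kk set up now refuses connections, whereas the first two attempts got past this check. A minimal health check on the etcd node, assuming kk's binary etcd runs as a systemd unit named etcd and keeps its certificates under /etc/ssl/etcd/ssl/ (both are assumptions, and the certificate file names below are placeholders):

systemctl status etcd --no-pager
journalctl -u etcd --no-pager | tail -n 50
ss -lntp | grep 2379

# Health check against the endpoint kubeadm probes
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.10.202:2379 \
  --cacert=/etc/ssl/etcd/ssl/ca.pem \
  --cert=/etc/ssl/etcd/ssl/admin-luban-controlplane1.pem \
  --key=/etc/ssl/etcd/ssl/admin-luban-controlplane1-key.pem \
  endpoint health

If etcd stopped somewhere between the attempts, journalctl -u etcd should show when and why; it has to be healthy again before kubeadm init can pass this preflight check.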
10:11:27 CST stdout: [luban-controlplane1]
[preflight] Running pre-flight checks
W1029 10:11:27.180913 25029 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
10:11:27 CST message: [luban-controlplane1]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1029 10:11:11.894313 24968 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:11:11.895459 24968 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:11:11.897492 24968 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://192.168.10.202:2379/version": dial tcp 192.168.10.202:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
10:11:27 CST skipped: [luban-controlplane3]
10:11:27 CST skipped: [luban-controlplane2]
10:11:27 CST failed: [luban-controlplane1]
error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed:
failed: [luban-controlplane1] [KubeadmInit] exec failed after 3 retries: init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl,ImagePull"
W1029 10:11:11.894313 24968 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:11:11.895459 24968 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1029 10:11:11.897492 24968 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.31.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ExternalEtcdVersion]: Get "https://192.168.10.202:2379/version": dial tcp 192.168.10.202:2379: connect: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
[root@luban-node01 kubekey]#
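
The two deprecation warnings about "kubeadm.k8s.io/v1beta3" and the pause:3.9 vs pause:3.10 sandbox-image notice are only warnings and are not what keeps the API server from coming up. Once the install is unblocked, the configuration can be migrated the way the log suggests; a sketch, with the output file name being an arbitrary placeholder:

kubeadm config migrate \
  --old-config /etc/kubernetes/kubeadm-config.yaml \
  --new-config /etc/kubernetes/kubeadm-config-v1beta4.yaml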