The install ran for over an hour and then failed with the error below — where is the configuration wrong? This is a new machine with a clean environment; I had run the uninstall twice beforehand.
FAILED - RETRYING: kubeadm | Initialize first master (3 retries left).
FAILED - RETRYING: kubeadm | Initialize first master (2 retries left).
FAILED - RETRYING: kubeadm | Initialize first master (1 retries left).
fatal: [master1]: FAILED! => {
"attempts": 3,
"changed": true,
"cmd": [
"timeout",
"-k",
"300s",
"300s",
"/usr/local/bin/kubeadm",
"init",
"--config=/etc/kubernetes/kubeadm-config.yaml",
"--ignore-preflight-errors=all",
"--skip-phases=addon/coredns",
"--upload-certs"
],
"delta": "0:05:00.037168",
"end": "2020-08-07 14:50:01.396185",
"failed_when_result": true,
"rc": 124,
"start": "2020-08-07 14:45:01.359017"
}
STDOUT:
[init] Using Kubernetes version: v1.16.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/ssl"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[controlplane] Adding extra host path mount "etc-pki-tls" to "kube-apiserver"
[controlplane] Adding extra host path mount "etc-pki-ca-trust" to "kube-apiserver"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 5m0s
[kubelet-check] Initial timeout of 40s passed.
STDERR:
[WARNING Port-6443]: Port 6443 is in use
[WARNING Port-10251]: Port 10251 is in use
[WARNING Port-10252]: Port 10252 is in use
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Port-10250]: Port 10250 is in use
MSG:
non-zero return code
NO MORE HOSTS LEFT *******************************************************************************************************
PLAY RECAP ***************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
master1 : ok=417 changed=76 unreachable=0 failed=1
master2 : ok=406 changed=78 unreachable=0 failed=0
master3 : ok=406 changed=78 unreachable=0 failed=0
node1 : ok=330 changed=61 unreachable=0 failed=0
Friday 07 August 2020 14:50:01 +0800 (0:20:16.326) 0:30:25.065 *********
kubernetes/master : kubeadm | Initialize first master ---------------------------------------------------------- 1216.33s
container-engine/docker : ensure docker packages are installed --------------------------------------------------- 36.87s
kubernetes/preinstall : Install packages requirements ------------------------------------------------------------ 26.10s
etcd : Gen_certs | Write etcd master certs ----------------------------------------------------------------------- 23.58s
container-engine/docker : Docker | reload docker ----------------------------------------------------------------- 15.88s
bootstrap-os : Install libselinux python package ------------------------------------------------------------------ 8.91s
etcd : Gen_certs | Gather etcd master certs ----------------------------------------------------------------------- 6.01s
etcd : Configure | Check if etcd cluster is healthy --------------------------------------------------------------- 4.87s
etcd : Install | Copy etcdctl binary from docker container -------------------------------------------------------- 4.78s
download : download | Download files / images --------------------------------------------------------------------- 4.45s
download : download_file | Download item -------------------------------------------------------------------------- 4.26s
download : download_file | Download item -------------------------------------------------------------------------- 4.09s
etcd : wait for etcd up ------------------------------------------------------------------------------------------- 3.50s
etcd : reload etcd ------------------------------------------------------------------------------------------------ 3.20s
kubernetes/node : install | Copy kubelet binary from download dir ------------------------------------------------- 3.13s
download : download | Sync files / images from ansible host to nodes ---------------------------------------------- 3.08s
etcd : Gen_certs | run cert generation script --------------------------------------------------------------------- 2.77s
download : download_file | Download item -------------------------------------------------------------------------- 2.62s
chrony : start chrony server -------------------------------------------------------------------------------------- 2.47s
container-engine/docker : ensure service is started if docker packages are already present ------------------------ 2.45s
failed!
please refer to https://kubesphere.io/docs/v2.1/zh-CN/faq/faq-install/
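For context on the failure itself: `rc: 124` is the exit code of the `timeout -k 300s 300s` wrapper, meaning `kubeadm init` never finished within five minutes. The pre-flight warnings (ports 6443/10250/10251/10252 in use, existing manifests under `/etc/kubernetes/manifests`) suggest the "clean" node still has state left over from the earlier uninstalls. A minimal cleanup sketch, assuming `kubeadm`, `systemctl`, and Docker are present on the node — this script and its `DRY_RUN` guard are hypothetical, not part of the installer; with the default `DRY_RUN=1` it only prints the commands so they can be reviewed first:

```shell
#!/usr/bin/env bash
# Hypothetical cleanup sketch for leftover control-plane state.
# DRY_RUN=1 (the default) prints each command instead of executing it.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run systemctl stop kubelet
run kubeadm reset -f                  # tears down the partial control plane
run rm -rf /etc/kubernetes/manifests  # the stale static Pod manifests from the warnings
run systemctl restart docker          # frees the containers still holding ports 6443/1025x
```

After the cleanup, re-run the installer; if it still times out after five minutes, `journalctl -u kubelet` on master1 usually shows the real reason the static Pods never came up.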
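The `IsDockerSystemdCheck` warning is only a warning, but a cgroup-driver mismatch between Docker and the kubelet is another common reason control-plane Pods fail to stay up. A sketch of the switch to the `systemd` driver that the warning recommends; the real target path is `/etc/docker/daemon.json` (here the `DAEMON_JSON` variable and the `/tmp` default are my own additions so the snippet can be tried without touching the host config):

```shell
# Write Docker's daemon config with the systemd cgroup driver.
# Real path is /etc/docker/daemon.json; DAEMON_JSON overrides it for a safe trial run.
TARGET="${DAEMON_JSON:-/tmp/daemon.json}"
cat > "$TARGET" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Then apply it with: systemctl daemon-reload && systemctl restart docker kubelet
```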