Kubernetes v1.17.3 / KubeSphere v3.1.1: default user login fails
Operating system information
For example: VM/physical machine, CentOS 7.5/Ubuntu 18.04, 4C/8G
CentOS 7
Kubernetes version information
Paste the output of kubectl version below.
[root@k8s-node1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Container runtime
Paste the output of docker version / crictl version / nerdctl version below.
[root@k8s-node1 ~]# docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:17:57 2021
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.15
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:16:33 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.22
GitCommit: 8165feabfdfe38c65b599c4993d227328c231fca
runc:
Version: 1.1.8
GitCommit: v1.1.8-0-g82f18fe
docker-init:
Version: 0.18.0
GitCommit: fec3683
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# crictl version
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] validate service connection: CRI v1 runtime API is not implemented for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService
ERRO[0000] validate service connection: CRI v1 runtime API is not implemented for endpoint "unix:///run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService
E0910 23:12:19.800480 30123 remote_runtime.go:145] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/crio/crio.sock: connect: no such file or directory\""
FATA[0000] getting the runtime version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /run/crio/crio.sock: connect: no such file or directory"
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# nerdctl version
-bash: nerdctl: command not found
[root@k8s-node1 ~]#
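The crictl errors above are most likely a version mismatch rather than a broken runtime: recent cri-tools builds only speak the CRI v1 API, which dockershim (the runtime on this Docker 19.03 / Kubernetes v1.17 node) never implemented. A sketch of the usual fix, assuming dockershim is in use: pin the endpoint in /etc/crictl.yaml so crictl stops probing the deprecated default endpoint list, and install a cri-tools release matching the cluster version (v1.17.x here).

```yaml
# /etc/crictl.yaml — assumed location; points crictl at the dockershim
# socket instead of probing the deprecated default endpoints.
runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
timeout: 10
```

This config fragment alone does not fix the "CRI v1 runtime API is not implemented" error; that requires a crictl binary old enough to speak the v1alpha2 API that dockershim exposes.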
KubeSphere version information
For example: v2.1.1/v3.0.0. Offline or online installation. Installed on an existing Kubernetes cluster, or with kk.
Online installation, v3.1.1, on an existing Kubernetes cluster
[root@k8s-node1 ~]# kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml
clusterconfiguration.installer.kubesphere.io/ks-installer configured
[root@k8s-node1 ~]#
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
2023-09-10T21:04:12Z INFO : shell-operator latest
What is the problem?
Problem: login with the default user fails.
What are the error logs? Screenshots preferred.
[root@k8s-node1 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default tomcat6-5f7ccf4cb9-4nx9r 1/1 Running 0 71m
default tomcat6-5f7ccf4cb9-dgc9s 1/1 Running 0 71m
default tomcat6-5f7ccf4cb9-svht8 1/1 Running 2 16h
ingress-nginx nginx-ingress-controller-9nxcr 1/1 Running 6 15h
ingress-nginx nginx-ingress-controller-c5znk 1/1 Running 3 5h52m
ingress-nginx nginx-ingress-controller-mgxhs 0/1 CrashLoopBackOff 106 15h
kube-flannel kube-flannel-ds-2lwj5 1/1 Running 4 44h
kube-flannel kube-flannel-ds-l8x48 1/1 Running 5 44h
kube-flannel kube-flannel-ds-pcz79 1/1 Running 6 44h
kube-system coredns-7f9c544f75-2vvgt 1/1 Running 2 45h
kube-system coredns-7f9c544f75-t77g6 1/1 Running 2 45h
kube-system etcd-k8s-node1 1/1 Running 7 45h
kube-system kube-apiserver-k8s-node1 1/1 Running 7 45h
kube-system kube-controller-manager-k8s-node1 1/1 Running 7 45h
kube-system kube-flannel-ds-amd64-b7cnx 0/1 CrashLoopBackOff 97 44h
kube-system kube-flannel-ds-amd64-g8rkd 0/1 CrashLoopBackOff 66 44h
kube-system kube-flannel-ds-amd64-r2nsv 0/1 CrashLoopBackOff 239 44h
kube-system kube-proxy-2hdnc 1/1 Running 6 44h
kube-system kube-proxy-m8z84 1/1 Running 5 44h
kube-system kube-proxy-wvf2p 1/1 Running 6 45h
kube-system kube-scheduler-k8s-node1 1/1 Running 7 45h
kube-system snapshot-controller-0 1/1 Running 0 37m
kube-system tiller-deploy-6d68b98c95-s4cn2 1/1 Running 2 15h
kubesphere-controls-system default-http-backend-5d464dd566-g5tft 1/1 Running 0 67m
kubesphere-controls-system kubectl-admin-6c9bd5b454-pmbdt 1/1 Running 0 63m
kubesphere-monitoring-system alertmanager-main-0 2/2 Running 0 30m
kubesphere-monitoring-system alertmanager-main-1 2/2 Running 0 31m
kubesphere-monitoring-system alertmanager-main-2 2/2 Running 0 31m
kubesphere-monitoring-system kube-state-metrics-7bc59d8bdd-m6sgd 3/3 Running 0 32m
kubesphere-monitoring-system node-exporter-27f7g 2/2 Running 0 30m
kubesphere-monitoring-system node-exporter-4vfjx 2/2 Running 2 32m
kubesphere-monitoring-system node-exporter-7qd77 2/2 Running 0 31m
kubesphere-monitoring-system notification-manager-deployment-7f7f4656f9-w6f9c 1/1 Running 0 28m
kubesphere-monitoring-system notification-manager-deployment-7f7f4656f9-w8mmh 1/1 Running 0 28m
kubesphere-monitoring-system notification-manager-operator-75cff7dc87-6rs2x 2/2 Running 4 29m
kubesphere-monitoring-system prometheus-k8s-0 3/3 Running 1 30m
kubesphere-monitoring-system prometheus-k8s-1 3/3 Running 1 31m
kubesphere-monitoring-system prometheus-operator-677cc4c58b-gb4tv 2/2 Running 0 32m
kubesphere-system ks-apiserver-65bd7db448-bnpd9 1/1 Running 0 28m
kubesphere-system ks-console-6647d64857-7v42d 1/1 Running 0 36m
kubesphere-system ks-controller-manager-558c9b95d-kjzcg 1/1 Running 4 28m
kubesphere-system ks-installer-79c7b6cccf-scpq9 1/1 Running 0 40m
kubesphere-system openldap-0 1/1 Running 0 68m
kubesphere-system redis-77f4bf4455-kmkkz 1/1 Running 0 37m
openebs maya-apiserver-77f486b6b9-6mkww 1/1 Running 0 71m
openebs openebs-admission-server-5d87f9f5d7-zq2hx 0/1 CrashLoopBackOff 29 5h50m
openebs openebs-localpv-provisioner-cfbd956fc-5tw78 1/1 Running 4 71m
openebs openebs-ndm-fxwxc 1/1 Running 1 5h50m
openebs openebs-ndm-operator-674d75bd7c-2rh74 1/1 Running 1 5h50m
openebs openebs-ndm-v7ktn 1/1 Running 5 5h50m
openebs openebs-ndm-z2b2p 0/1 CrashLoopBackOff 27 5h50m
openebs openebs-provisioner-b5dc795f-q7nc6 1/1 Running 9 5h50m
openebs openebs-snapshot-operator-db88f5744-dcl65 2/2 Running 4 71m
[root@k8s-node1 ~]#
[root@k8s-node1 ~]#
[root@k8s-node1 ~]#
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# kubectl get svc/ks-console -n kubesphere-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ks-console NodePort 10.96.65.106 <none> 80:30880/TCP 71m
[root@k8s-node1 ~]# kubectl -n kubesphere-system logs -l app=ks-controller-manager
E0910 21:46:42.914123 1 user_controller.go:239] Timeout: request did not complete within requested timeout 34s
E0910 21:46:42.914224 1 basecontroller.go:132] error syncing 'admin' in user-controller: Timeout: request did not complete within requested timeout 34s, requeuing
E0910 21:47:17.068614 1 user_controller.go:239] Internal error occurred: failed calling webhook "users.iam.kubesphere.io": Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=4s: context deadline exceeded
E0910 21:47:17.068721 1 basecontroller.go:132] error syncing 'admin' in user-controller: Internal error occurred: failed calling webhook "users.iam.kubesphere.io": Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=4s: context deadline exceeded, requeuing
E0910 21:47:51.298285 1 user_controller.go:239] Internal error occurred: failed calling webhook "users.iam.kubesphere.io": Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=4s: context deadline exceeded
E0910 21:47:51.298422 1 basecontroller.go:132] error syncing 'admin' in user-controller: Internal error occurred: failed calling webhook "users.iam.kubesphere.io": Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=4s: context deadline exceeded, requeuing
E0910 21:48:25.581361 1 user_controller.go:239] Timeout: request did not complete within requested timeout 34s
E0910 21:48:25.581591 1 basecontroller.go:132] error syncing 'admin' in user-controller: Timeout: request did not complete within requested timeout 34s, requeuing
E0910 21:49:00.083397 1 user_controller.go:239] Internal error occurred: failed calling webhook "users.iam.kubesphere.io": Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=4s: context deadline exceeded
E0910 21:49:00.083635 1 basecontroller.go:132] error syncing 'admin' in user-controller: Internal error occurred: failed calling webhook "users.iam.kubesphere.io": Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2?timeout=4s: context deadline exceeded, requeuing
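The webhook timeouts above usually mean the API server cannot reach the ks-controller-manager webhook over the pod network, and the pod listing shows a likely cause: the kube-system kube-flannel-ds-amd64 pods are in CrashLoopBackOff alongside a second, Running flannel DaemonSet in the kube-flannel namespace, which suggests two conflicting CNI deployments. A minimal diagnostic sketch, assuming a working kubeconfig (these are generic checks, not steps taken from the logs above):

```shell
# Does the webhook service have ready endpoints?
kubectl -n kubesphere-system get endpoints ks-controller-manager

# Why are the legacy flannel pods crash-looping? A broken CNI is the
# usual reason in-cluster webhook calls time out.
kubectl -n kube-system logs -l app=flannel --tail=20

# Hypothetically, after removing the duplicate flannel deployment and the
# CNI recovers, restarting ks-controller-manager clears the stale state.
# Verify which flannel manifest was actually installed before deleting one.
kubectl -n kubesphere-system rollout restart deployment/ks-controller-manager
```

Once the admin user syncs without webhook errors, the console at NodePort 30880 (the ks-console service above) should accept the default credentials again.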