Teacher Cauchy, this is the log from my installation run. I'm a beginner and haven't yet sorted out the overall flow — could you please take a look?
[root@master ~]# mkdir /backup
[root@master ~]# cd /backup/
[root@master backup]# export KKZONE=cn
[root@master backup]# curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
Downloading kubekey v2.3.0 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v2.3.0/kubekey-v2.3.0-linux-amd64.tar.gz …
Kubekey v2.3.0 Download Complete!
[root@master backup]# ./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.1
 _   __      _          _   __
| | / /     | |        | | / /
| |/ /  _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/
16:36:20 CST [GreetingsModule] Greetings
16:36:20 CST message: [master]
Greetings, KubeKey!
16:36:20 CST success: [master]
16:36:20 CST [NodePreCheckModule] A pre-check on nodes
16:36:21 CST success: [master]
16:36:21 CST [ConfirmModule] Display confirmation form
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker  | containerd | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
| master | y    | y    | y       | y        | y     | y     |         | y         | y      | 20.10.8 | 1.6.9      |            |             |                  | CST 16:36:21 |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+---------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
16:36:41 CST success: [LocalHost]
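(A note on the pre-check table above: the blank cells — ipvsadm, nfs client, ceph client, glusterfs client — are optional dependencies, so the install proceeds, but kube-proxy's ipvs mode needs ipvsadm and NFS-backed volumes need the nfs client. A small sketch of pre-installing the two most common ones, assuming CentOS/RHEL, which this log suggests; it only prints the command rather than running it:)

```shell
# Optional dependencies the pre-check table left blank.
# Package names assume CentOS/RHEL; adjust for your distro.
pkgs="ipvsadm nfs-utils"
echo "yum install -y $pkgs"   # printed only; run this on the node as root if needed
```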
16:36:41 CST [NodeBinariesModule] Download installation binaries
16:36:41 CST message: [localhost]
downloading amd64 kubeadm v1.22.12 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 43.7M 100 43.7M 0 0 1016k 0 0:00:44 0:00:44 --:--:-- 1015k
16:37:26 CST message: [localhost]
downloading amd64 kubelet v1.22.12 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 115M 100 115M 0 0 1009k 0 0:01:56 0:01:56 --:--:-- 1023k
16:39:23 CST message: [localhost]
downloading amd64 kubectl v1.22.12 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 44.7M 100 44.7M 0 0 1016k 0 0:00:45 0:00:45 --:--:-- 1032k
16:40:09 CST message: [localhost]
downloading amd64 helm v3.9.0 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 44.0M 100 44.0M 0 0 1009k 0 0:00:44 0:00:44 --:--:-- 965k
16:40:54 CST message: [localhost]
downloading amd64 kubecni v0.9.1 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 37.9M 100 37.9M 0 0 1013k 0 0:00:38 0:00:38 --:--:-- 1014k
16:41:32 CST message: [localhost]
downloading amd64 crictl v1.24.0 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 13.8M 100 13.8M 0 0 1010k 0 0:00:14 0:00:14 --:--:-- 1039k
16:41:46 CST message: [localhost]
downloading amd64 etcd v3.4.13 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16.5M 100 16.5M 0 0 1002k 0 0:00:16 0:00:16 --:--:-- 1033k
16:42:03 CST message: [localhost]
downloading amd64 docker 20.10.8 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 58.1M 100 58.1M 0 0 81931 0 0:12:23 0:12:23 --:--:-- 69096
16:54:28 CST success: [LocalHost]
16:54:28 CST [ConfigureOSModule] Get OS release
16:54:28 CST success: [master]
16:54:28 CST [ConfigureOSModule] Prepare to init OS
16:54:28 CST success: [master]
16:54:28 CST [ConfigureOSModule] Generate init os script
16:54:28 CST success: [master]
16:54:28 CST [ConfigureOSModule] Exec init os script
16:54:29 CST stdout: [master]
setenforce: SELinux is disabled
Disabled
vm.swappiness = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
16:54:29 CST success: [master]
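(The [Exec init os script] step above is where KubeKey writes the sysctl values listed. If a node misbehaves later, a read-only spot check could look like this — key names are taken straight from the log, and nothing here modifies the system:)

```shell
# Print the current values of a few parameters KubeKey set above (read-only).
for key in net.ipv4.ip_forward vm.swappiness vm.max_map_count kernel.pid_max; do
  printf '%s = %s\n' "$key" "$(sysctl -n "$key" 2>/dev/null || echo 'n/a')"
done
```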
16:54:29 CST [ConfigureOSModule] configure the ntp server for each node
16:54:29 CST skipped: [master]
16:54:29 CST [KubernetesStatusModule] Get kubernetes cluster status
16:54:29 CST success: [master]
16:54:29 CST [InstallContainerModule] Sync docker binaries
16:54:29 CST skipped: [master]
16:54:29 CST [InstallContainerModule] Generate docker service
16:54:29 CST skipped: [master]
16:54:29 CST [InstallContainerModule] Generate docker config
16:54:29 CST skipped: [master]
16:54:29 CST [InstallContainerModule] Enable docker
16:54:29 CST skipped: [master]
16:54:29 CST [InstallContainerModule] Add auths to container runtime
16:54:29 CST skipped: [master]
16:54:29 CST [PullModule] Start to pull images on all nodes
16:54:29 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
16:54:32 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
16:55:03 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
16:55:34 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
16:55:52 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
16:56:28 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
16:56:42 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
16:57:15 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
16:57:56 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
16:59:37 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
17:00:54 CST message: [master]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
17:01:01 CST success: [master]
17:01:01 CST [ETCDPreCheckModule] Get etcd status
17:01:01 CST success: [master]
17:01:01 CST [CertsModule] Fetch etcd certs
17:01:01 CST success: [master]
17:01:01 CST [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-master serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost master] and IPs [127.0.0.1 ::1 192.168.0.4]
[certs] member-master serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost master] and IPs [127.0.0.1 ::1 192.168.0.4]
[certs] node-master serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost master] and IPs [127.0.0.1 ::1 192.168.0.4]
17:01:02 CST success: [LocalHost]
17:01:02 CST [CertsModule] Synchronize certs file
17:01:03 CST success: [master]
17:01:03 CST [CertsModule] Synchronize certs file to master
17:01:03 CST skipped: [master]
17:01:03 CST [InstallETCDBinaryModule] Install etcd using binary
17:01:04 CST success: [master]
17:01:04 CST [InstallETCDBinaryModule] Generate etcd service
17:01:04 CST success: [master]
17:01:04 CST [InstallETCDBinaryModule] Generate access address
17:01:04 CST success: [master]
17:01:04 CST [ETCDConfigureModule] Health check on exist etcd
17:01:04 CST skipped: [master]
17:01:04 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
17:01:04 CST success: [master]
17:01:04 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
17:01:04 CST success: [master]
17:01:04 CST [ETCDConfigureModule] Restart etcd
17:01:05 CST stdout: [master]
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
17:01:05 CST success: [master]
17:01:05 CST [ETCDConfigureModule] Health check on all etcd
17:01:05 CST success: [master]
17:01:05 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
17:01:06 CST success: [master]
17:01:06 CST [ETCDConfigureModule] Health check on all etcd
17:01:06 CST success: [master]
17:01:06 CST [ETCDBackupModule] Backup etcd data regularly
17:01:06 CST success: [master]
17:01:06 CST [ETCDBackupModule] Generate backup ETCD service
17:01:06 CST success: [master]
17:01:06 CST [ETCDBackupModule] Generate backup ETCD timer
17:01:06 CST success: [master]
17:01:06 CST [ETCDBackupModule] Enable backup etcd service
17:01:06 CST success: [master]
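(The ETCD modules above installed etcd as a plain systemd service, not a static pod, and set up a periodic backup timer. If you ever want to check its health by hand, a helper like the one below should work — the endpoint and certificate paths are my assumption of KubeKey's usual defaults under /etc/ssl/etcd/ssl, matching the admin-master cert generated earlier, so verify them on your host:)

```shell
# Hedged sketch: health-check the etcd member KubeKey installed.
# Endpoint and cert paths are assumed KubeKey defaults; adjust to your host.
etcd_health() {
  ETCDCTL_API=3 etcdctl \
    --endpoints=https://192.168.0.4:2379 \
    --cacert=/etc/ssl/etcd/ssl/ca.pem \
    --cert=/etc/ssl/etcd/ssl/admin-master.pem \
    --key=/etc/ssl/etcd/ssl/admin-master-key.pem \
    endpoint health
}
# On the master:  etcd_health
```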
17:01:06 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
17:01:13 CST success: [master]
17:01:13 CST [InstallKubeBinariesModule] Synchronize kubelet
17:01:13 CST success: [master]
17:01:13 CST [InstallKubeBinariesModule] Generate kubelet service
17:01:13 CST success: [master]
17:01:13 CST [InstallKubeBinariesModule] Enable kubelet service
17:01:13 CST success: [master]
17:01:13 CST [InstallKubeBinariesModule] Generate kubelet env
17:01:14 CST success: [master]
17:01:14 CST [InitKubernetesModule] Generate kubeadm config
17:01:15 CST success: [master]
17:01:15 CST [InitKubernetesModule] Init cluster using kubeadm
17:01:26 CST stdout: [master]
W1115 17:01:15.278872 8387 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.12
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost master master.cluster.local] and IPs [10.233.0.1 192.168.0.4 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.004869 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 0yrmre.wc97pxnqjuvao3l6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token 0yrmre.wc97pxnqjuvao3l6 \
--discovery-token-ca-cert-hash sha256:5953b8a64f7014175c5e480377384a32050078d8035b1964a47e95463c7d9f47 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token 0yrmre.wc97pxnqjuvao3l6 \
--discovery-token-ca-cert-hash sha256:5953b8a64f7014175c5e480377384a32050078d8035b1964a47e95463c7d9f47
17:01:26 CST success: [master]
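(One thing worth knowing about the kubeadm join commands above: the bootstrap token expires after 24 hours by default. When you add nodes later you can mint a fresh command on the master with `kubeadm token create --print-join-command`, or recompute the --discovery-token-ca-cert-hash yourself with the standard openssl pipeline from the kubeadm documentation:)

```shell
# Recompute the sha256 discovery hash from a CA certificate
# (the standard pipeline from the kubeadm docs).
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On the master:  ca_cert_hash /etc/kubernetes/pki/ca.crt
```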
17:01:26 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
17:01:27 CST success: [master]
17:01:27 CST [InitKubernetesModule] Remove master taint
17:01:27 CST stdout: [master]
node/master untainted
17:01:27 CST stdout: [master]
error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found
17:01:27 CST [WARN] Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes master node-role.kubernetes.io/control-plane=:NoSchedule-"
error: taint "node-role.kubernetes.io/control-plane:NoSchedule" not found: Process exited with status 1
17:01:27 CST success: [master]
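(The [WARN] above is harmless: on Kubernetes v1.22, kubeadm taints the control plane only with node-role.kubernetes.io/master:NoSchedule, while KubeKey tries to remove both the master and the newer control-plane variants, so the second removal finds nothing. On a single-node cluster the master must be schedulable; you can confirm no taints remain with a small helper like this:)

```shell
# Print any taints left on a node (empty output = schedulable).
check_taints() {
  kubectl get node "$1" -o jsonpath='{.spec.taints}'
}
# On the master:  check_taints master
```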
17:01:27 CST [InitKubernetesModule] Add worker label
17:01:27 CST stdout: [master]
node/master labeled
17:01:27 CST success: [master]
17:01:27 CST [ClusterDNSModule] Generate coredns service
17:01:27 CST success: [master]
17:01:27 CST [ClusterDNSModule] Override coredns service
17:01:27 CST stdout: [master]
service "kube-dns" deleted
17:01:28 CST stdout: [master]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
17:01:28 CST success: [master]
17:01:28 CST [ClusterDNSModule] Generate nodelocaldns
17:01:29 CST success: [master]
17:01:29 CST [ClusterDNSModule] Deploy nodelocaldns
17:01:29 CST stdout: [master]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
17:01:29 CST success: [master]
17:01:29 CST [ClusterDNSModule] Generate nodelocaldns configmap
17:01:29 CST success: [master]
17:01:29 CST [ClusterDNSModule] Apply nodelocaldns configmap
17:01:29 CST stdout: [master]
configmap/nodelocaldns created
17:01:29 CST success: [master]
17:01:29 CST [KubernetesStatusModule] Get kubernetes cluster status
17:01:29 CST stdout: [master]
v1.22.12
17:01:30 CST stdout: [master]
master v1.22.12 [map[address:192.168.0.4 type:InternalIP] map[address:master type:Hostname]]
17:01:32 CST stdout: [master]
I1115 17:01:31.862076 10473 version.go:255] remote version is much newer: v1.25.4; falling back to: stable-1.22
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
7e71d682f408b2f02546c8d8d6d40e8a45159ee09ddd93562e350fa2bfd4a3a5
17:01:33 CST stdout: [master]
secret/kubeadm-certs patched
17:01:33 CST stdout: [master]
secret/kubeadm-certs patched
17:01:33 CST stdout: [master]
secret/kubeadm-certs patched
17:01:33 CST stdout: [master]
gr591l.6hhc32518nc3yo82
17:01:33 CST success: [master]
17:01:33 CST [JoinNodesModule] Generate kubeadm config
17:01:33 CST skipped: [master]
17:01:33 CST [JoinNodesModule] Join control-plane node
17:01:33 CST skipped: [master]
17:01:33 CST [JoinNodesModule] Join worker node
17:01:33 CST skipped: [master]
17:01:33 CST [JoinNodesModule] Copy admin.conf to ~/.kube/config
17:01:33 CST skipped: [master]
17:01:33 CST [JoinNodesModule] Remove master taint
17:01:33 CST skipped: [master]
17:01:33 CST [JoinNodesModule] Add worker label to master
17:01:33 CST skipped: [master]
17:01:33 CST [JoinNodesModule] Synchronize kube config to worker
17:01:33 CST skipped: [master]
17:01:33 CST [JoinNodesModule] Add worker label to worker
17:01:33 CST skipped: [master]
17:01:33 CST [DeployNetworkPluginModule] Generate calico
17:01:33 CST success: [master]
17:01:33 CST [DeployNetworkPluginModule] Deploy calico
17:01:34 CST stdout: [master]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
17:01:34 CST success: [master]
17:01:34 CST [ConfigureKubernetesModule] Configure kubernetes
17:01:34 CST success: [master]
17:01:34 CST [ChownModule] Chown user $HOME/.kube dir
17:01:34 CST success: [master]
17:01:34 CST [AutoRenewCertsModule] Generate k8s certs renew script
17:01:34 CST success: [master]
17:01:34 CST [AutoRenewCertsModule] Generate k8s certs renew service
17:01:34 CST success: [master]
17:01:34 CST [AutoRenewCertsModule] Generate k8s certs renew timer
17:01:35 CST success: [master]
17:01:35 CST [AutoRenewCertsModule] Enable k8s certs renew service
17:01:35 CST success: [master]
17:01:35 CST [SaveKubeConfigModule] Save kube config as a configmap
17:01:35 CST success: [LocalHost]
17:01:35 CST [AddonsModule] Install addons
17:01:35 CST success: [LocalHost]
17:01:35 CST [DeployStorageClassModule] Generate OpenEBS manifest
17:01:35 CST success: [master]
17:01:35 CST [DeployStorageClassModule] Deploy OpenEBS as cluster default StorageClass
17:01:36 CST success: [master]
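(The DeployStorageClassModule above installs OpenEBS LocalPV as the cluster's default StorageClass, which is what lets the KubeSphere components bind their PVCs on this single node. To confirm once the install finishes — the class name KubeKey creates is, I believe, `local`, but check on your own cluster:)

```shell
# Show which StorageClass is marked as the cluster default.
check_default_sc() {
  kubectl get sc | grep -F '(default)'
}
# On the master:  check_default_sc
```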
17:01:36 CST [DeployKubeSphereModule] Generate KubeSphere ks-installer crd manifests
17:01:36 CST success: [master]
17:01:36 CST [DeployKubeSphereModule] Apply ks-installer
17:01:37 CST stdout: [master]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
17:01:37 CST success: [master]
17:01:37 CST [DeployKubeSphereModule] Add config to ks-installer manifests
17:01:37 CST success: [master]
17:01:37 CST [DeployKubeSphereModule] Create the kubesphere namespace
17:01:37 CST success: [master]
17:01:37 CST [DeployKubeSphereModule] Setup ks-installer config
17:01:37 CST stdout: [master]
secret/kube-etcd-client-certs created
17:01:37 CST success: [master]
17:01:37 CST [DeployKubeSphereModule] Apply ks-installer
17:01:40 CST stdout: [master]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
17:01:40 CST success: [master]
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.4:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
"Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.
#####################################################
https://kubesphere.io 2022-11-15 17:15:07
#####################################################
17:15:10 CST success: [master]
17:15:10 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.
Please check the result using the command:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
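(The jsonpath one-liner above just looks up the installer pod's name. Since ks-installer runs as a Deployment, an equivalent and easier-to-type form — assuming the default deployment name shown earlier in this log — is:)

```shell
# Follow the KubeSphere installer logs by deployment name instead of pod name.
follow_ks_installer_logs() {
  kubectl logs -n kubesphere-system deploy/ks-installer -f
}
# On the master:  follow_ks_installer_logs
```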