
Topic: A Hands-on Guide to Setting Up Local Development Environments for the KubeSphere Frontend and Backend

Speakers

  • Jeff Zhang - KubeSphere Backend Development Lead
  • Leo Liu - KubeSphere Frontend Development Lead

Agenda

  • 1. Overview of the KubeSphere 3.x frontend/backend architecture and tech stack
  • 2. How to set up a local KubeSphere 3.x backend development environment
  • 3. How to set up a local KubeSphere 3.x frontend development environment
  • 4. How to contribute to KubeSphere open source, and an overview of the development workflow
  • 5. FAQ: live questions and answers

How to Join

Tonight at 8 PM, in the Bilibili live stream room:
http://live.bilibili.com/22580654

Live Stream Replay

https://www.bilibili.com/video/bv1bp4y1r77B

Prerequisites

Environment

Reference

Clone repo

export working_dir=$GOPATH/src/kubesphere.io
export user={your github profile name}

$ mkdir -p $working_dir
$ cd $working_dir
$ git clone https://github.com/$user/kubesphere.git
$ cd $working_dir/kubesphere
$ git remote add upstream https://github.com/kubesphere/kubesphere.git

# Never push to upstream master
$ git remote set-url --push upstream no_push

# Confirm your remotes make sense:
$ git remote -v
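
If the remotes are set up as above, the output should look roughly like this (origin points at your fork, and pushes to upstream are disabled):

origin    https://github.com/$user/kubesphere.git (fetch)
origin    https://github.com/$user/kubesphere.git (push)
upstream  https://github.com/kubesphere/kubesphere.git (fetch)
upstream  no_push (push)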

Code structure

╭─ ~/go/src/kubesphere.io/kubesphere
╰─ tree -L 2
.
├── CONTRIBUTING.md
├── LICENSE
├── Makefile
├── OWNERS
├── PROJECT
├── README.md
├── README_zh.md
├── api                 // generated api swagger doc
│   ├── api-rules
│   ├── ks-openapi-spec
│   └── openapi-spec
├── build              // dockerfile 
│   ├── ks-apiserver
│   └── ks-controller-manager
├── cmd                // command-line entry points
│   ├── controller-manager
│   └── ks-apiserver
├── config             // used by code-generator
│   ├── crd
│   ├── crds
│   ├── default
│   ├── manager
│   ├── rbac
│   ├── samples
│   └── webhook
├── doc.go
├── docs
│   ├── images
│   ├── powered-by-kubesphere.md
│   └── roadmap.md
├── go.mod
├── go.sum
├── hack               // scripts to help build ks 
│   ├── boilerplate.go.txt
│   ├── custom-boilerplate.go.txt
│   ├── docker_build.sh
│   ├── generate_certs.sh
│   ├── generate_client.sh
│   ├── generate_group.sh
│   ├── gobuild.sh
│   ├── install_kubebuilder.sh
│   ├── lib
│   ├── lint-dependencies.sh
│   ├── pin-dependency.sh
│   ├── update-vendor-licenses.sh
│   └── update-vendor.sh
├── install         // deprecated
│   ├── ingress-controller
│   ├── scripts
│   └── swagger-ui
├── pkg
│   ├── api          
│   ├── apis         // CRD package
│   ├── apiserver    
│   ├── client       // used by code-generator, informer/lister/clientset
│   ├── constants
│   ├── controller   // controllers
│   ├── db           // deprecated
│   ├── informers    
│   ├── kapis        // KubeSphere specific apis, api path starts with /kapis
│   ├── models       // real business logic       
│   ├── server
│   ├── simple       // client interface with other services, redis/ldap/es/p8s
│   ├── test         
│   ├── tools.go
│   ├── utils
│   ├── version
│   └── webhook
├── test
│   ├── e2e
│   ├── network
│   └── testdata 
├── tools          // used to generate api doc
│   ├── cmd
│   ├── lib
│   └── tools.go

Build KubeSphere

$ make test    # takes a really long time
$ make all     # builds ks-apiserver and ks-controller-manager

# build only ks-apiserver, or build it directly with go:
$ make ks-apiserver
$ go build -o bin/cmd/ks-apiserver cmd/ks-apiserver/apiserver.go

╭─ ~/go/src/kubesphere.io/kubesphere
╰─ bin/cmd/ks-apiserver  --kubeconfig ~/.kube/config

W1124 20:27:04.397558   75520 options.go:169] ks-apiserver starts without redis provided, it will use in memory cache. This may cause inconsistencies when running ks-apiserver with multiple replicas.
W1124 20:27:04.400550   75520 routers.go:173] open /etc/kubesphere/ingress-controller: no such file or directory
E1124 20:27:04.400569   75520 routers.go:68] error happened during loading external yamls, open /etc/kubesphere/ingress-controller: no such file or directory
I1124 20:27:04.404213   75520 apiserver.go:308] Start cache objects
I1124 20:27:05.039295   75520 apiserver.go:514] Finished caching objects
I1124 20:27:05.039348   75520 apiserver.go:240] Start listening on :9090 

Add your own code

git fetch upstream
git checkout master
git rebase upstream/master  # don't forget to rebase

git checkout -b myfeature

Test

For testing, you can use AlwaysAllow mode to skip authorization. Set it in your kubesphere.yaml:

authorization:
  mode: "AlwaysAllow"

Then call the API directly:

curl -v http://[ks-apiserver.kubesphere-system.svc]/kapis/[apiGroup]/[apiVersion]
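
For example, against a ks-apiserver running locally on :9090 as started above, you can hit one of the resource APIs that also appears later in this thread (adjust the group/version/resource to whatever you are working on):

curl -v http://127.0.0.1:9090/kapis/resources.kubesphere.io/v1alpha3/deployments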

Test with ks-console

Swap the in-cluster ks-apiserver with your locally running server; this makes debugging much easier.

sudo telepresence --namespace kubesphere-system --swap-deployment ks-apiserver

For 3.0 and older versions:

sudo telepresence --namespace kubesphere-system --swap-deployment ks-apiserver --also-proxy redis.kubesphere-system.svc --also-proxy openldap.kubesphere-system.svc
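
After the swap, in-cluster traffic to the ks-apiserver Service is forwarded to your local process on :9090. A quick sanity check from the telepresence shell, using an endpoint that shows up later in this thread (adjust the path to the API you are testing):

curl -v http://ks-apiserver.kubesphere-system.svc/kapis/config.kubesphere.io/v1alpha2/configs/configz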

An alternative to Telepresence is kt-connect.

Don’t forget to quit telepresence after debugging.

Commit your changes and open a PR

It's good practice to create an issue first and assign it to yourself before you start coding.

git add .
git commit -s -m "awesome changes"
git push

Use the proper labels when creating the PR, then reference the linked issue.

Signed-off-by: yuswift <yuswiftli@yunify.com>

What type of PR is this?

/kind feature

What this PR does / why we need it:

To make member cluster installation more lightweight, this PR allows ks-controller-manager to run without the LDAP option.

Which issue(s) this PR fixes:

Fixes #3056

Special notes for reviewers:

This PR only covers LDAP. I will create another one for Redis.

Additional documentation, usage docs, etc.:

7 days later

Hi, I'd like to add a question: when I run ks-apiserver locally, it cannot connect to the Redis address configured in kubesphere.yaml.
Here is the full configuration file:
authorization:
  mode: "AlwaysAllow"
authentication:
  authenticateRateLimiterMaxTries: 10
  authenticateRateLimiterDuration: 10m0s
  loginHistoryRetentionPeriod: 168h
  maximumClockSkew: 10s
  multipleLogin: True
  kubectlImage: kubesphere/kubectl:v1.0.0
  jwtSecret: "w9D3e92lGnXLBhLdvKmwj9s2L0hlYuhe"
ldap:
  host: openldap.kubesphere-system.svc:389
  managerDN: cn=admin,dc=kubesphere,dc=io
  managerPassword: admin
  userSearchBase: ou=Users,dc=kubesphere,dc=io
  groupSearchBase: ou=Groups,dc=kubesphere,dc=io
redis:
  host: redis.kubesphere-system.svc
  port: 6379
  password: ""
  db: 0
s3:
  endpoint: http://minio.kubesphere-system.svc:9000
  region: us-east-1
  disableSSL: true
  forcePathStyle: true
  accessKeyID: openpitrixminioaccesskey
  secretAccessKey: openpitrixminiosecretkey
  bucket: s2i-binaries
mysql:
  host: mysql.kubesphere-system.svc:3306
  username: root
  password: password
  maxIdleConnections: 100
  maxOpenConnections: 100
  maxConnectionLifeTime: 10s
openpitrix:
  runtimeManagerEndpoint: "hyperpitrix.openpitrix-system.svc:9103"
  clusterManagerEndpoint: "hyperpitrix.openpitrix-system.svc:9104"
  repoManagerEndpoint: "hyperpitrix.openpitrix-system.svc:9101"
  appManagerEndpoint: "hyperpitrix.openpitrix-system.svc:9102"
  categoryManagerEndpoint: "hyperpitrix.openpitrix-system.svc:9113"
  attachmentManagerEndpoint: "hyperpitrix.openpitrix-system.svc:9122"
  repoIndexerEndpoint: "hyperpitrix.openpitrix-system.svc:9108"
monitoring:
  endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090

Here is the kubeconfig:
apiVersion: v1
clusters:

  • lgy replied to this post

    @lgy The k8s cluster runs on a server; I set up a tunnel with Xshell and mapped the ports.

    • Jeff replied to this post

      @lgy Watch the video and follow along with it.

      • lgy replied to this post

        @Jeff I did follow the video, but the video only covers the simplest k8s cluster; I also need to connect to the other component services.

        • Jeff replied to this post

          @lgy Isn't this already covered in the docs?

          For 3.0 and older version

          sudo telepresence --namespace kubesphere-system --swap-deployment ks-apiserver --also-proxy redis.kubesphere-system.svc --also-proxy openldap.kubesphere-system.svc

            @Jeff It seems telepresence isn't supported on Windows...

              1 month later

              @Jeff
              Why does accessing the console return 404 after enabling telepresence?

              Environment:

              KubeSphere v3.0.0; the services start normally, I can reach <IP>:30880, log in with admin/P@88w0rd, and the basic features work.
              OS: macOS
              k8s deployment: minikube start --cpus=4 --memory=8096mb --kubernetes-version=v1.18.8 --driver=virtualbox --image-mirror-country cn --registry-mirror="https://docker.mirrors.ustc.edu.cn"
              k8s version: v1.18.8

              console logs:

                <-- GET / 2021/01/02T15:11:39.062
              { code: 404,
                statusText: 'Not Found',
                message: '404 page not found\n' }
                --> GET / 404 11ms - 2021/01/02T15:11:39.073
              { code: 404,
                statusText: 'Not Found',
                message: '404 page not found\n' }
                <-- GET /favicon.ico 2021/01/02T15:11:39.451
              { code: 404,
                statusText: 'Not Found',
                message: '404 page not found\n' }
                --> GET /favicon.ico 404 9ms - 2021/01/02T15:11:39.460
              { code: 404,
                statusText: 'Not Found',
                message: '404 page not found\n' }
                <-- GET / 2021/01/02T15:11:43.256
              { UnauthorizedError: Not Login
                  at Object.throw (/opt/kubesphere/console/server/server.js:31701:11)
                  at getCurrentUser (/opt/kubesphere/console/server/server.js:9037:14)
                  at renderView (/opt/kubesphere/console/server/server.js:23231:46)
                  at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
                  at next (/opt/kubesphere/console/server/server.js:6871:18)
                  at /opt/kubesphere/console/server/server.js:70183:16
                  at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
                  at next (/opt/kubesphere/console/server/server.js:6871:18)
                  at /opt/kubesphere/console/server/server.js:77986:37
                  at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
                  at next (/opt/kubesphere/console/server/server.js:6871:18)
                  at /opt/kubesphere/console/server/server.js:70183:16
                  at dispatch (/opt/kubesphere/console/server/server.js:6870:32)
                  at next (/opt/kubesphere/console/server/server.js:6871:18)
                  at /opt/kubesphere/console/server/server.js:77986:37
                  at dispatch (/opt/kubesphere/console/server/server.js:6870:32) message: 'Not Login' }
                --> GET / 302 2ms 43b 2021/01/02T15:11:43.258
                <-- GET /login 2021/01/02T15:11:43.259
              { code: 404,
                statusText: 'Not Found',
                message: '404 page not found\n' }
                --> GET /login 200 18ms 14.82kb 2021/01/02T15:11:43.277

              Pods start up normally:

              ➜  ~ kubectl get pod -n kubesphere-system
              NAME                                            READY   STATUS    RESTARTS   AGE
              ks-apiserver-937d549a97b040c4a30b291204025919   1/1     Running   0          16m
              ks-console-b4df86d6f-hjj8c                      1/1     Running   0          147m
              ks-controller-manager-7fd596f5f6-nkc8t          1/1     Running   0          145m
              ks-installer-7cb866bd-d549d                     1/1     Running   0          149m
              openldap-0                                      1/1     Running   0          147m
              redis-644bc597b9-vb9k8                          1/1     Running   0          147m

              ➜  ~ curl -v http://192.168.99.104:30880/kapis/config.kubesphere.io/v1alpha2/configs/configz
              *   Trying 192.168.99.104...
              * TCP_NODELAY set
              * Connected to 192.168.99.104 (192.168.99.104) port 30880 (#0)
              > GET /kapis/config.kubesphere.io/v1alpha2/configs/configz HTTP/1.1
              > Host: 192.168.99.104:30880
              > User-Agent: curl/7.64.1
              > Accept: */*
              >
              < HTTP/1.1 404 Not Found
              < vary: Origin
              < content-type: text/plain; charset=utf-8
              < x-content-type-options: nosniff
              < date: Sat, 02 Jan 2021 15:20:51 GMT
              < content-length: 19
              < connection: close
              <
              404 page not found
              * Closing connection 0

              (screenshots attached)

              5 months later

              I cloned master yesterday; after setting everything up it starts fine, but after clicking around a few times I get Not Found.

              Is this a Windows compatibility issue?

                Feynman changed the title to "[Live Share] A Hands-on Guide to Setting Up Local Development Environments for the KubeSphere Frontend and Backend"
                2 months later

                I also started ks-apiserver locally and used telepresence to swap the remote pod, after which the web UI became inaccessible.
                Output of ~/go/src/kubesphere.io/kubesphere/bin/cmd/ks-apiserver --kubeconfig ~/.kube/config:

                W0812 11:26:05.645335   16149 options.go:178] ks-apiserver starts without redis provided, it will use in memory cache. This may cause inconsistencies when running ks-apiserver with multiple replicas.
                W0812 11:26:05.678203   16149 routers.go:174] open /etc/kubesphere/ingress-controller: no such file or directory
                E0812 11:26:05.678244   16149 routers.go:69] error happened during loading external yamls, open /etc/kubesphere/ingress-controller: no such file or directory
                I0812 11:26:05.685234   16149 apiserver.go:359] Start cache objects
                W0812 11:26:05.803593   16149 apiserver.go:490] resource iam.kubesphere.io/v1alpha2, Resource=groups not exists in the cluster
                W0812 11:26:05.803652   16149 apiserver.go:490] resource iam.kubesphere.io/v1alpha2, Resource=groupbindings not exists in the cluster
                W0812 11:26:05.803720   16149 apiserver.go:490] resource iam.kubesphere.io/v1alpha2, Resource=groups not exists in the cluster
                W0812 11:26:05.803741   16149 apiserver.go:490] resource iam.kubesphere.io/v1alpha2, Resource=groupbindings not exists in the cluster
                W0812 11:26:05.803781   16149 apiserver.go:490] resource network.kubesphere.io/v1alpha1, Resource=ippools not exists in the cluster
                I0812 11:26:06.305088   16149 apiserver.go:563] Finished caching objects
                I0812 11:26:06.305163   16149 apiserver.go:286] Start listening on :9090

                telepresence --namespace kubesphere-system --swap-deployment ks-apiserver --also-proxy redis.kubesphere-system.svc --also-proxy openldap.kubesphere-system.svc
                Output:

                T: Using a Pod instead of a Deployment for the Telepresence proxy. If you experience problems, please file an issue!
                T: Set the environment variable TELEPRESENCE_USE_DEPLOYMENT to any non-empty value to force the old behavior, e.g.,
                T:     env TELEPRESENCE_USE_DEPLOYMENT=1 telepresence --run curl hello
                
                T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and headless services with --also-proxy. For a full list 
                T: of method limitations see https://telepresence.io/reference/methods.html
                T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
                T: Starting network proxy to cluster by swapping out Deployment ks-apiserver with a proxy Pod
                T: Forwarding remote port 9090 to local port 9090.
                
                T: Setup complete. Launching your command.
                @kubernetes-admin@cluster.local|bash-4.3# 

                kubesphere.yaml:

                authentication:
                  authenticateRateLimiterMaxTries: 10
                  authenticateRateLimiterDuration: 10m0s
                  loginHistoryRetentionPeriod: 168h
                  maximumClockSkew: 10s
                  multipleLogin: False
                  kubectlImage: kubesphere/kubectl:v1.0.0
                  jwtSecret: "OYBwwbPevij4SfbRXaolQSxCEyx84gEk"
                authorization:
                  mode: "AlwaysAllow"
                ldap:
                  host: openldap.kubesphere-system.svc:389
                  managerDN: cn=admin,dc=kubesphere,dc=io
                  managerPassword: admin
                  userSearchBase: ou=Users,dc=kubesphere,dc=io
                  groupSearchBase: ou=Groups,dc=kubesphere,dc=io
                monitoring:
                  endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090

                The telepresence log output is:

                [19] SSH port forward (socks and proxy poll): exit 0
                 185.2  22 |  s: SW#594:10.233.0.3:53: deleting (14 remain)
                 185.2  22 |  s: SW'unknown':Mux#613: deleting (13 remain)
                 185.2  22 |  s: SW#602:10.233.0.3:53: deleting (12 remain)
                 185.2  22 |  s: SW'unknown':Mux#623: deleting (11 remain)
                 185.2  22 |  s: SW#612:10.233.0.3:53: deleting (10 remain)
                 185.2  22 |  s: SW'unknown':Mux#648: deleting (9 remain)
                 185.2  22 |  s: SW#637:10.233.0.3:53: deleting (8 remain)
                 185.2  22 |  s: SW'unknown':Mux#649: deleting (7 remain)
                 185.2  22 |  s: SW#638:10.233.0.3:53: deleting (6 remain)
                 185.2  22 |  s: SW#-1:10.233.78.64:389: deleting (5 remain)
                 185.2  22 |  s: SW'unknown':Mux#940: deleting (4 remain)
                 185.2  22 |  s: SW#-1:10.233.78.64:389: deleting (3 remain)
                 185.2  22 |  s: SW'unknown':Mux#941: deleting (2 remain)
                 185.2  22 |  s: SW'unknown':Mux#939: deleting (1 remain)
                 185.2  22 |  s: SW#17:10.233.0.3:53: deleting (0 remain)
                 185.3 TEL | [22] sshuttle: exit -15

                curl works locally:
                curl -v http://10.233.78.223:9090/kapis/resources.kubesphere.io/v1alpha3/deployments
                but none of the APIs are reachable from the web UI.

                @kubernetes-admin@cluster.local|bash-4.3# curl -v http://10.12.75.55:30880/kapis/config.kubesphere.io/v1alpha2/configs/configz
                *   Trying 10.12.75.55...
                * Connected to 10.12.75.55 (10.12.75.55) port 30880 (#0)
                > GET /kapis/config.kubesphere.io/v1alpha2/configs/configz HTTP/1.1
                > Host: 10.12.75.55:30880
                > User-Agent: curl/7.47.0
                > Accept: */*
                > 
                * Empty reply from server
                * Connection #0 to host 10.12.75.55 left intact
                curl: (52) Empty reply from server

                  @zwkdhm Try accessing the API from inside the cluster to see whether it works, and check the ks-console logs.
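
                  A minimal sketch of those checks; the throwaway curl image and the ks-console label selector are assumptions and may need adjusting for your cluster:

                  # run a one-off curl pod inside the cluster against the ks-apiserver Service
                  kubectl -n kubesphere-system run curl-check --rm -it --restart=Never --image=curlimages/curl --command -- \
                    curl -sv http://ks-apiserver.kubesphere-system.svc/kapis/config.kubesphere.io/v1alpha2/configs/configz
                  # then check the ks-console logs (label selector is an assumption)
                  kubectl -n kubesphere-system logs -l app=ks-console --tail=50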

                    @Jeff I added the --method inject-tcp flag to telepresence and now it seems to work. What could be the reason? Is it because of the headless svc?
                    root@k8s-01:~# kubectl get svc -A

                    NAMESPACE                      NAME                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
                    default                        kubernetes                                ClusterIP   10.233.0.1      <none>        443/TCP                        190d
                    kube-system                    coredns                                   ClusterIP   10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP         190d
                    kube-system                    etcd                                      ClusterIP   None            <none>        2379/TCP                       190d
                    kube-system                    kube-controller-manager-svc               ClusterIP   None            <none>        10252/TCP                      190d
                    kube-system                    kube-scheduler-svc                        ClusterIP   None            <none>        10251/TCP                      190d
                    kube-system                    kubelet                                   ClusterIP   None            <none>        10250/TCP,10255/TCP,4194/TCP   190d
                    kube-system                    metrics-server                            ClusterIP   10.233.24.198   <none>        443/TCP                        190d
                    kubesphere-controls-system     default-http-backend                      ClusterIP   10.233.47.72    <none>        80/TCP                         190d
                    kubesphere-monitoring-system   alertmanager-main                         ClusterIP   10.233.44.235   <none>        9093/TCP                       190d
                    kubesphere-monitoring-system   alertmanager-operated                     ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP     190d
                    kubesphere-monitoring-system   kube-state-metrics                        ClusterIP   None            <none>        8443/TCP,9443/TCP              190d
                    kubesphere-monitoring-system   node-exporter                             ClusterIP   None            <none>        9100/TCP                       190d
                    kubesphere-monitoring-system   notification-manager-controller-metrics   ClusterIP   10.233.53.154   <none>        8443/TCP                       190d
                    kubesphere-monitoring-system   notification-manager-svc                  ClusterIP   10.233.9.248    <none>        19093/TCP                      190d
                    kubesphere-monitoring-system   prometheus-k8s                            ClusterIP   10.233.48.130   <none>        9090/TCP                       190d
                    kubesphere-monitoring-system   prometheus-operated                       ClusterIP   None            <none>        9090/TCP                       190d
                    kubesphere-monitoring-system   prometheus-operator                       ClusterIP   None            <none>        8443/TCP                       190d
                    kubesphere-system              ks-apiserver                              ClusterIP   10.233.21.82    <none>        80/TCP                         190d
                    kubesphere-system              ks-console                                NodePort    10.233.24.152   <none>        80:30880/TCP                   190d
                    kubesphere-system              ks-controller-manager                     ClusterIP   10.233.35.26    <none>        443/TCP                        190d
                    kubesphere-system              openldap                                  ClusterIP   None            <none>        389/TCP                        190d
                    kubesphere-system              redis                                     ClusterIP   10.233.59.1     <none>        6379/TCP                       190d

                    The earlier ks-console logs:

                    {"log":"{ FetchError: request to http://ks-apiserver.kubesphere-system.svc/kapis/config.kubesphere.io/v1alpha2/configs/oauth failed, reason: socket hang up\n","stream":"stderr","time":"2021-08-12T08:17:25.154628598Z"}
                    {"log":"    at ClientRequest.\u003canonymous\u003e (/opt/kubesphere/console/server/server.js:80604:11)\n","stream":"stderr","time":"2021-08-12T08:17:25.15465047Z"}
                    {"log":"    at emitOne (events.js:116:13)\n","stream":"stderr","time":"2021-08-12T08:17:25.154656974Z"}
                    {"log":"    at ClientRequest.emit (events.js:211:7)\n","stream":"stderr","time":"2021-08-12T08:17:25.154661551Z"}
                    {"log":"    at Socket.socketOnEnd (_http_client.js:437:9)\n","stream":"stderr","time":"2021-08-12T08:17:25.15466586Z"}
                    {"log":"    at emitNone (events.js:111:20)\n","stream":"stderr","time":"2021-08-12T08:17:25.154670278Z"}
                    {"log":"    at Socket.emit (events.js:208:7)\n","stream":"stderr","time":"2021-08-12T08:17:25.154674514Z"}
                    {"log":"    at endReadableNT (_stream_readable.js:1064:12)\n","stream":"stderr","time":"2021-08-12T08:17:25.154678527Z"}
                    {"log":"    at _combinedTickCallback (internal/process/next_tick.js:139:11)\n","stream":"stderr","time":"2021-08-12T08:17:25.154682732Z"}
                    {"log":"    at process._tickCallback (internal/process/next_tick.js:181:9)\n","stream":"stderr","time":"2021-08-12T08:17:25.154687012Z"}
                    {"log":"  message: 'request to http://ks-apiserver.kubesphere-system.svc/kapis/config.kubesphere.io/v1alpha2/configs/oauth failed, reason: socket hang up',\n","stream":"stderr","time":"2021-08-12T08:17:25.154691412Z"}
                    {"log":"  type: 'system',\n","stream":"stderr","time":"2021-08-12T08:17:25.154696141Z"}
                    {"log":"  errno: 'ECONNRESET',\n","stream":"stderr","time":"2021-08-12T08:17:25.154700064Z"}
                    {"log":"  code: 'ECONNRESET' }\n","stream":"stderr","time":"2021-08-12T08:17:25.154704175Z"}
                    {"log":"  --\u003e GET /login 200 9ms 14.82kb 2021/08/12T16:17:25.159\n","stream":"stdout","time":"2021-08-12T08:17:25.159315736Z"}
                    {"log":"  \u003c-- GET /kapis/resources.kubesphere.io/v1alpha2/components 2021/08/12T16:17:27.421\n","stream":"stdout","time":"2021-08-12T08:17:27.421992195Z"}
                    {"log":"  \u003c-- GET /kapis/resources.kubesphere.io/v1alpha3/deployments?sortBy=updateTime\u0026limit=10 2021/08/12T16:17:29.688\n","stream":"stdout","time":"2021-08-12T08:17:29.689260211Z"}
                    {"log":"  \u003c-- GET / 2021/08/12T16:17:35.147\n","stream":"stdout","time":"2021-08-12T08:17:35.148138272Z"}
                    3 months later

                    After installing 3.1.1 I can't find the kubesphere.yaml file. With an all-in-one installation, kubesphere.yaml does not exist under the /etc/kubesphere directory.
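
                    If you need a local copy, one possible workaround is to dump the in-cluster configuration; this assumes it is stored in a ConfigMap named kubesphere-config under a kubesphere.yaml key (both names are assumptions, check with kubectl get cm -n kubesphere-system):

                    # dump the assumed kubesphere-config ConfigMap to a local kubesphere.yaml
                    kubectl -n kubesphere-system get cm kubesphere-config -o jsonpath='{.data.kubesphere\.yaml}' > kubesphere.yaml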

                      I get an error when starting telepresence. What could be the cause?

                      kubesphere git:(master) ✗ sudo telepresence --namespace kubesphere-system --swap-deployment ks-apiserver --also-proxy redis.kubesphere-system.svc --also-proxy openldap.kubesphere-system.svc
                      T: Using a Pod instead of a Deployment for the Telepresence proxy. If you experience problems, please file an issue!
                      T: Set the environment variable TELEPRESENCE_USE_DEPLOYMENT to any non-empty value to force the old behavior, e.g.,
                      T:     env TELEPRESENCE_USE_DEPLOYMENT=1 telepresence --run curl hello
                      
                      T: Starting proxy with method 'vpn-tcp', which has the following limitations: All processes are affected, only one telepresence can run per machine, and you can't use other VPNs. You may need to add cloud hosts and headless 
                      T: services with --also-proxy. For a full list of method limitations see https://telepresence.io/reference/methods.html
                      T: Volumes are rooted at $TELEPRESENCE_ROOT. See https://telepresence.io/howto/volumes.html for details.
                      T: Starting network proxy to cluster by swapping out Deployment ks-apiserver with a proxy Pod
                      T: Forwarding remote port 9090 to local port 9090.
                      
                      
                      Looks like there's a bug in our code. Sorry about that!
                      
                      Background process (SSH port forward (exposed ports)) exited with return code 255. Command was:
                        ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oConnectTimeout=5 -q -p 51702 telepresence@127.0.0.1 -R '*:9090:127.0.0.1:9090'
                      
                      
                      Background process (SSH port forward (socks and proxy poll)) exited with return code 255. Command was:
                        ssh -N -oServerAliveInterval=1 -oServerAliveCountMax=10 -F /dev/null -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oConnectTimeout=5 -q -p 51702 telepresence@127.0.0.1 -L127.0.0.1:51712:127.0.0.1:9050 -R9055:127.0.0.1:51713
                      
                      
                      Here are the last few lines of the logfile (see /Users/zimingli/github/lesterhnu/kubesphere/telepresence.log for the complete logs):
                      
                        18.0  21 | c : DNS request from ('10.2.30.237', 13778) to None: 35 bytes
                        18.5  21 | c : DNS request from ('10.2.30.237', 27803) to None: 35 bytes
                        18.5 TEL | [17] SSH port forward (exposed ports): exit 255
                        18.5 TEL | [18] SSH port forward (socks and proxy poll): exit 255
                        19.1 TEL | [32] timed out after 5.01 secs.
                        19.1 TEL | [33] Capturing: python3 -c 'import socket; socket.gethostbyname("hellotelepresence-5.a.sanity.check.telepresence.io")'
                        19.1  21 | c : DNS request from ('10.2.30.237', 65254) to None: 68 bytes
                        19.4  21 | c : DNS request from ('10.2.30.237', 60471) to None: 35 bytes
                        20.1 TEL | [33] timed out after 1.01 secs.
                        20.2  21 | c : DNS request from ('10.2.30.237', 65254) to None: 68 bytes
                        20.4  21 | c : DNS request from ('10.2.30.237', 60471) to None: 35 bytes
                        21.4  21 | c : DNS request from ('10.2.30.237', 64923) to None: 37 bytes

                        lesterhnu

                        ➜  kubesphere git:(master) ✗ ./bin/cmd/ks-apiserver --kubeconfig ~/.kube/config
                        W1111 15:02:32.189924   55531 metricsserver.go:238] Metrics API not available.
                        W1111 15:02:32.190132   55531 options.go:183] ks-apiserver starts without redis provided, it will use in memory cache. This may cause inconsistencies when running ks-apiserver with multiple replicas.
                        I1111 15:02:32.412198   55531 interface.go:60] start helm repo informer
                        W1111 15:02:32.432146   55531 routers.go:175] open /etc/kubesphere/ingress-controller: no such file or directory
                        E1111 15:02:32.432186   55531 routers.go:70] error happened during loading external yamls, open /etc/kubesphere/ingress-controller: no such file or directory
                        I1111 15:02:32.447266   55531 apiserver.go:356] Start cache objects
                        W1111 15:02:32.797402   55531 apiserver.go:509] resource snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses not exists in the cluster
                        W1111 15:02:32.797432   55531 apiserver.go:509] resource snapshot.storage.k8s.io/v1, Resource=volumesnapshots not exists in the cluster
                        W1111 15:02:32.797445   55531 apiserver.go:509] resource snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents not exists in the cluster
                        I1111 15:02:33.206151   55531 apiserver.go:562] Finished caching objects
                        I1111 15:02:33.206182   55531 apiserver.go:278] Start listening on :9090
                        W1111 15:04:48.461319   55531 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
                        W1111 15:04:48.461273   55531 reflector.go:436] pkg/client/informers/externalversions/factory.go:128: watch of *v1alpha2.Strategy ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
                        W1111 15:04:48.461273   55531 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
                        W1111 15:04:48.461318   55531 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
                        W1111 15:04:48.461852   55531 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding