Because logging in to the cluster hosts directly is discouraged in an OpenShift 4.1 environment, many operations have to be done from an external client VM. (Colleagues running RHEL worker nodes can of course keep their old habits.)
Here are some notes on problems that come up frequently:
-
How to find the kubeadmin password
When the 4.1 cluster finishes installing, the installer prints a message such as:
INFO Creating infrastructure resources...
INFO Waiting up to 30m0s for the Kubernetes API at https://api.cluster-8447.sandbox.opentlc.com:6443...
INFO API v1.13.4+3a25c9b up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.cluster.sandbox.opentlc.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/cluster/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster.sandbox.opentlc.com
INFO Login to the console with user: kubeadmin, password: TyCzM-ShJPQ-cgepT-dkDwq
Make sure you copy that password out... but if you didn't, where else can you find it?
It is recorded in a file named .openshift_install.log under the installation directory (named after the cluster).
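If you only have the log, the password line can be pulled out with grep. A minimal sketch, with the log line simulated here for illustration (the message format follows the install output shown above):

```shell
# The installer records the same completion messages in
# .openshift_install.log under the installation directory.
# Simulated here with one sample line for illustration:
printf 'INFO Login to the console with user: kubeadmin, password: TyCzM-ShJPQ-cgepT-dkDwq\n' > .openshift_install.log

# Pull out just the password field:
grep -o 'password: [A-Za-z0-9-]*' .openshift_install.log
```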
-
Setting up cluster access
export KUBECONFIG=$HOME/cluster-${GUID}/auth/kubeconfig
echo "export KUBECONFIG=$HOME/cluster-${GUID}/auth/kubeconfig" >>$HOME/.bashrc
-
Pushing images to the internal image registry
First expose a route for the image registry. By default no route is created; only the image-registry.openshift-image-registry.svc service is exposed:
[root@clientvm 0 ~]# oc get svc -n openshift-image-registry
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
image-registry   ClusterIP   172.30.134.180   <none>        5000/TCP   5h2m
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
Logging in with Podman
oc login -u kubeadmin
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $HOST
I threw together a quick Dockerfile and then built it:
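The Dockerfile itself isn't shown; judging from the docker.io/library/openjdk:8-jdk layer visible in the podman images output below, it may have looked roughly like this hypothetical sketch:

```dockerfile
# Hypothetical sketch -- the original Dockerfile is not shown.
# The openjdk:8-jdk base matches the layer visible in `podman images`.
FROM docker.io/library/openjdk:8-jdk
# Copying in an unpacked Tomcat distribution is assumed purely for illustration.
COPY apache-tomcat /opt/tomcat
EXPOSE 8080
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
```

EXPOSE 8080 is consistent with the port that oc new-app later detects.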
podman build -t default-route-openshift-image-registry.apps.cluster-8447.sandbox452.opentlc.com/myproject/mytomcat:slim .
[root@clientvm 127 ~/cluster-8447]# podman images
REPOSITORY                                                                                           TAG      IMAGE ID       CREATED              SIZE
default-route-openshift-image-registry.apps.cluster-8447.sandbox452.opentlc.com/myproject/mytomcat   slim     ec32b2cdbea2   About a minute ago   518 MB
<none>                                                                                               <none>   0426c1689356   5 minutes ago        500 MB
docker.io/library/openjdk                                                                            8-jdk    08ded5f856cc   6 days ago           500 MB
Then push the image; be sure to use --tls-verify=false:
[root@clientvm 125 ~]# podman push default-route-openshift-image-registry.apps.cluster-d60b.sandbox509.opentlc.com/myproject/mytomcat:slim --tls-verify=false
Getting image source signatures
Copying blob ea23cfa0bea9 done
Copying blob 2bf534399aca done
Copying blob eb25e0278d41 done
Copying blob 46ff59048438 done
Copying blob f613cd1e50cc done
Copying blob 1c95c77433e8 done
Copying blob 6d520b2e1077 done
Copying config 7670309228 done
Writing manifest to image destination
Storing signatures
After the push, the corresponding imagestream appears.
Creating an application
[root@clientvm 0 ~/cluster-8447]# oc new-app mytomcat:slim
--> Found image ec32b2c (6 minutes old) in image stream "myproject/mytomcat" under tag "slim" for "mytomcat:slim"

    * This image will be deployed in deployment config "mytomcat"
    * Port 8080/tcp will be load balanced by service "mytomcat"
      * Other containers can access this service through the hostname "mytomcat"
    * WARNING: Image "myproject/mytomcat:slim" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    deploymentconfig.apps.openshift.io "mytomcat" created
    service "mytomcat" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/mytomcat'
    Run 'oc status' to view your app.
The oc new-app command deploys directly, but if you write the image in a deployment.yaml as just myproject/mytomcat, it will fail.
When building your own template or deploying from a yaml file, the correct value for the image field is the full internal-registry path:
image-registry.openshift-image-registry.svc:5000/openshift/mytomcat:8-slim
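For example, the container spec in a deployment yaml would reference that full path (a fragment only; the name and port here follow the example above):

```yaml
# Fragment of a deployment yaml -- only the image reference matters here.
spec:
  containers:
  - name: mytomcat
    # Full internal-registry path; a bare "myproject/mytomcat" would fail to pull.
    image: image-registry.openshift-image-registry.svc:5000/openshift/mytomcat:8-slim
    ports:
    - containerPort: 8080
```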
-
Adding users
User authentication in OpenShift 4.1 is also implemented with an operator. A big difference from 3.11: in 3.11 it was configured in master-config.yaml, with HTPasswd as the default,
while 4.x ships with no default identity provider at all; one has to be configured through the authentication CR.
Under Cluster Settings -> Global Configuration you can find the OAuth entry.
Click into it and you will see that Identity Providers is empty.
By default only kubeadmin can log in. To add users, first create a CR (Custom Resource).
If we stick with the familiar HTPasswd approach, the steps are as follows:
1. On the client, create a users.htpasswd file and add a user:
htpasswd -c -B -b users.htpasswd admin welcome1
To add more users, use:
htpasswd -b users.htpasswd eric welcome1
htpasswd -b users.htpasswd alice welcome1
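If the htpasswd tool isn't installed on the client, `openssl passwd -apr1` produces entries in Apache's MD5 ($apr1$) scheme, which the HTPasswd file format also accepts. A sketch (the user bob is hypothetical; welcome1 is the sample password from above):

```shell
# Generate an htpasswd-compatible entry without the htpasswd binary.
# -apr1 = Apache's MD5-based scheme, one of the hash formats htpasswd files use.
echo "bob:$(openssl passwd -apr1 welcome1)" >> users.htpasswd
cat users.htpasswd
```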
2. Create a secret in the openshift-config namespace:
oc create secret generic htpass-secret --from-file=htpasswd=/root/users.htpasswd -n openshift-config
If you add more users to the file later, update the secret with:
oc create secret generic htpass-secret --from-file=htpasswd=/root/users.htpasswd -n openshift-config --dry-run -o yaml | oc apply -f -
Afterwards the secret is visible in the openshift-config project; choose Edit Secret to see the user names it contains.
3. Update the CR with a yaml file (this step can also be done directly in the console):
[root@clientvm 0 ~]# cat htpass.yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
The cluster CR already exists, so update it with oc apply:
[root@clientvm 0 ~]# oc apply -f htpass.yaml
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
oauth.config.openshift.io/cluster configured
Once applied, the OAuth configuration contains a my_htpasswd_provider entry.
Check the pod status in the openshift-authentication project; if the pods are not recreated automatically, delete them manually so they reload the configuration.
Now run oc get users... why is there nothing? Here's a gotcha: a user only appears after its first login, so just log in as one:
[root@clientvm 0 ~]# oc login -u eric
Authentication required for https://api.cluster-8447.sandbox452.opentlc.com:6443 (openshift)
Username: eric
Password:
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>
Switch back to the kubeadmin user, and now the users show up:
[root@clientvm 0 ~]# oc get users
NAME    UID                                    FULL NAME   IDENTITIES
admin   463b2706-c3d9-11e9-b6ad-0a580a81001f               my_htpasswd_provider:admin
alice   d73b3e6f-c3db-11e9-ba6d-0a580a80001a               my_htpasswd_provider:alice
eric    4c8b7952-c3de-11e9-ab5a-0a580a82001b               my_htpasswd_provider:eric
To make a user a cluster administrator:
oc adm policy add-cluster-role-to-user cluster-admin admin
Log Out on the console.
On the login page, click my_htpasswd_provider. Make sure to pick this one (choosing the option above it will not let these users log in), then sign in with your user name.