k8s series --- HPA autoscaling


centos-master:172.16.100.60

centos-minion:172.16.100.62

k8s, etcd, Docker and the rest were all installed with yum. The deployment followed "The Definitive Guide to Kubernetes" plus a video tutorial (the video is in my Baidu netdisk). I have forgotten the exact steps; installation is not hard, the trouble is that it was my first time and I did not note down which files I changed. I will write up the installation steps separately next time.

 

First install Heapster. I used version 1.2.0.

Personally I feel these few YAML files are all you need; I never used anything else in the package.

[root@centos-master influxdb]# pwd
/usr/src/heapster-1.2.0/deploy/kube-config/influxdb
[root@centos-master influxdb]# ls
grafana-deploment.yaml  heapster-deployment.yaml  influxdb-deployment.yaml
grafana-service.yaml    heapster-service.yaml     influxdb-service.yaml

  

[root@centos-master influxdb]# cat heapster-deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 2
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      containers:
      - name: heapster
        image: docker.io/ist0ne/heapster-amd64:latest
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:http://172.16.100.60:8080?inClusterConfig=false
        - --sink=influxdb:http://10.254.129.95:8086

About --source and --sink:

The freshly downloaded file has the two lines below instead; look up exactly what each option means. 172.16.100.60 is the cluster master. As for 10.254.129.95, I forget what it was; I think it was one of the addresses from kubectl get svc (probably the ClusterIP of the influxdb service), but I deleted that svc, so the original IP is gone.

        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086

  

[root@centos-master influxdb]# cat heapster-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

  

[root@centos-master influxdb]# cat grafana-deploment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: docker.io/ist0ne/heapster-grafana-amd64:latest
        ports:
          - containerPort: 3000
            protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GRAFANA_PORT
          value: "3000"
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
      volumes:
      - name: grafana-storage
        emptyDir: {}

  

[root@centos-master influxdb]# cat grafana-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP. 
  # type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana

  

[root@centos-master influxdb]# cat influxdb-deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: docker.io/ist0ne/heapster-influxdb-amd64:v1.1.1
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}

  

[root@centos-master influxdb]# cat influxdb-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels: null
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 8083
    targetPort: 8083
  - name: api
    port: 8086
    targetPort: 8086
  selector:
    name: influxGrafana

 

The above are all the YAML files Heapster needs.

kubectl create -f ../influxdb

This generates the corresponding pods and svcs.

Then it is time to create the actual application pods.

[root@centos-master yaml]# cat php-apache-rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: php-apache
spec:
  replicas: 1
  template:
    metadata:
      name: php-apache
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: siriuszg/hpa-example
        resources:
          requests:
            cpu: 200m
        ports:
        - containerPort: 80

  

[root@centos-master yaml]# cat php-apache-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  ports:
  - port: 80
  selector:
    app: php-apache

  

[root@centos-master yaml]# cat busybox.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec: 
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    name: busybox

  

[root@centos-master yaml]# cat hpa-php-apache.yaml 
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10
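For reference, the autoscaler's decision with this config boils down to a simple ratio: average the pods' measured CPU over their CPU requests (the 200m set in the RC above), then scale replicas proportionally toward targetCPUUtilizationPercentage. A minimal sketch, assuming the standard HPA v1 rule (function name and sample numbers are mine, not from the controller):

```python
import math

def desired_replicas(current_replicas, pod_cpu_usage_m, cpu_request_m, target_percent):
    """Sketch of the HPA v1 rule: utilization is total measured CPU over total
    requested CPU, and the replica count scales proportionally to the target."""
    utilization = sum(pod_cpu_usage_m) / (cpu_request_m * current_replicas) * 100
    return math.ceil(current_replicas * utilization / target_percent)

# 4 pods, each requesting 200m CPU and measured at 50m usage -> 25% utilization.
# With a 10% target the HPA asks for ceil(4 * 25 / 10) = 10 replicas
# (then clamped between minReplicas and maxReplicas).
print(desired_replicas(4, [50, 50, 50, 50], 200, 10))  # → 10
```

This also matches the numbers seen later: with TARGET 10% and CURRENT 25% across 4 pods, the HPA keeps pushing the replica count up toward maxReplicas.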

  

 

Note that in a multi-node cluster, php-apache and busybox may not land on the same node, and cross-node access will then fail. Either pin them to one node, or install flannel so the nodes are networked together; you will have to do that step sooner or later anyway.

kubectl create -f each of the files above

 

Check whether Heapster is working:

[root@centos-master yaml]# kubectl top node 
NAME            CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%   
centos-minion   105m         2%        1368Mi          34%  

 

If you see output like the above, it succeeded. For some reason mine cannot detect the 127.0.0.1 node here.

If it does not appear, read the logs carefully. Check whether the node is up and has joined the cluster (/var/log/messages), or run kubectl describe hpa php-apache, or look at the logs of the php-apache pod.

One more note: everything online uses the kube-system namespace, but with it I could never get readings. After I dropped it and used the default namespace, i.e. the configuration above, detection worked. I do not know why.

 

Check the HPA:

[root@centos-master yaml]# kubectl get hpa
NAME         REFERENCE                          TARGET    CURRENT   MINPODS   MAXPODS   AGE
php-apache   ReplicationController/php-apache   10%       0%        1         10        23h
[root@centos-master yaml]# kubectl get hpa --namespace=kube-system 
NAME         REFERENCE                          TARGET    CURRENT     MINPODS   MAXPODS   AGE
php-apache   ReplicationController/php-apache   50%       <waiting>   1         10        20h

 

You will find that the HPA in the default namespace does report a CURRENT value, while the earlier one in kube-system stayed stuck at <waiting>.

 

Exec into busybox and run a load test:

[root@centos-master ~]# kubectl exec -ti busybox -- sh
/ # while true; do wget -q -O- http://10.254.221.176 > /dev/null ; done
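If you just want to see what that loop is doing, here is a local Python mimic of the same hammer-the-endpoint pattern. It stands up a throwaway HTTP server instead of the php-apache service IP (server address and request count are illustrative):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OK(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK!")
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), OK)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Tight request loop, same idea as `while true; do wget ...; done` in busybox.
url = "http://127.0.0.1:%d" % server.server_port
hits = sum(1 for _ in range(100) if urllib.request.urlopen(url).status == 200)
print(hits)  # → 100
server.shutdown()
```

In the cluster the busy loop drives up the php-apache pods' CPU usage, which Heapster reports and the HPA reacts to.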

  

After ten-odd seconds the pods multiply and the CPU CURRENT value rises. But there is a problem: in theory the HPA should also shrink automatically, yet mine only scales up. When I stop the load test the pod count stays where it is; it never decreases as the CPU drops. Strange.
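One likely explanation, assuming this cluster behaves like the Kubernetes releases of that era: the controller enforces a scale-down "forbidden window" (around 5 minutes by default, tunable via the controller-manager's --horizontal-pod-autoscaler-downscale-delay flag in later releases), so pods linger for a while after the load stops. A toy sketch of that decision; the names here are illustrative, not the controller's actual API:

```python
# Downscale cooldown sketch: scaling up happens immediately, but shrinking is
# refused until enough time has passed since the last rescale. 300s mirrors
# the old default window; this is an assumption, not the real controller code.
DOWNSCALE_WINDOW_S = 300

def rescale_allowed(desired, current, last_rescale_s, now_s):
    if desired >= current:          # scale-up (or no change) is not delayed here
        return desired != current
    return now_s - last_rescale_s >= DOWNSCALE_WINDOW_S

print(rescale_allowed(1, 4, last_rescale_s=0, now_s=60))   # → False (inside window)
print(rescale_allowed(1, 4, last_rescale_s=0, now_s=600))  # → True (window elapsed)
```

So it may be worth waiting several minutes after stopping the load before concluding that scale-down is broken; the other usual suspect is the metrics pipeline going stale.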

[root@centos-master yaml]# kubectl get pods -o wide | grep php-apache
php-apache-5bcgk     1/1       Running   0          44s       10.0.34.2    127.0.0.1
php-apache-b4nv5     1/1       Running   0          44s       10.0.16.4    centos-minion
php-apache-kw1m0     1/1       Running   0          44s       10.0.34.17   127.0.0.1
php-apache-vz2rx     1/1       Running   0          3h        10.0.16.3    centos-minion
[root@centos-master yaml]# kubectl get hpa
NAME         REFERENCE                          TARGET    CURRENT   MINPODS   MAXPODS   AGE
php-apache   ReplicationController/php-apache   10%       25%       1         10        23h
[root@centos-master yaml]# 

  

 

