Spark on K8S (Standalone)




There are several ways of running Spark on K8S; this article covers the standalone mode.

Start Minikube

sudo minikube start --driver=none \
                    --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
                    --kubernetes-version="v1.15.3"

If the startup fails, try deleting the cluster first with minikube delete and starting it again.

Download Spark

https://archive.apache.org/dist/spark/

Spark is closely tied to Hadoop, so download the build that bundles Hadoop. That way the Hadoop jars are available; otherwise you may hit missing-package or missing-class errors even if you never actually use Hadoop.
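
For example (a hypothetical choice of release; pick whichever 2.4.x version you actually want, matching the image tag used below):

wget https://archive.apache.org/dist/spark/spark-2.4.6/spark-2.4.6-bin-hadoop2.7.tgz
tar -xzf spark-2.4.6-bin-hadoop2.7.tgz
cd spark-2.4.6-bin-hadoop2.7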

Build Spark Image

Starting with Spark 2.3, the bin/docker-image-tool.sh script is provided for building the image.

sudo ./bin/docker-image-tool.sh -t my_spark_2.4_hadoop_2.7 build
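
Once the build finishes, the image should show up locally (depending on the Spark version, the script may also build spark-py and spark-r variants):

sudo docker images | grep spark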

If you hit an error like the following:

WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz: temporary error (try again later)
ERROR: unsatisfiable constraints:
  bash (missing):
    required by: world[bash]

This is a network problem. Edit ./bin/docker-image-tool.sh and add --network=host to the docker build command inside it, so the build container uses the host network (make sure the host network itself is working).
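
A quick (hypothetical) way to patch it, if you are happy for every docker build in the script to use the host network:

sed -i 's/docker build/docker build --network=host/g' ./bin/docker-image-tool.sh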

Start the Cluster

Define the manifest

---
apiVersion: v1
kind: Service
metadata:
  name: spark-manager
spec:
  type: ClusterIP
  ports:
  - name: rpc
    port: 7077
  - name: ui
    port: 8080
  selector:
    app: spark
    component: sparkmanager
---
apiVersion: v1
kind: Service
metadata:
  name: spark-manager-rest
spec:
  type: NodePort
  ports:
  - name: rest
    port: 8080
    targetPort: 8080
  selector:
    app: spark
    component: sparkmanager
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spark
      component: sparkmanager
  template:
    metadata:
      labels:
        app: spark
        component: sparkmanager
    spec:
      containers:
      - name: sparkmanager
        image: spark:my_spark_2.4_hadoop_2.7
        workingDir: /opt/spark
        command: ["/bin/bash", "-c", "/opt/spark/sbin/start-master.sh && while true;do echo hello;sleep 6000;done"]
        ports:
        - containerPort: 7077
          name: rpc
        - containerPort: 8080
          name: ui
        livenessProbe:
          tcpSocket:
            port: 7077
          initialDelaySeconds: 30
          periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: spark
      component: worker
  template:
    metadata:
      labels:
        app: spark
        component: worker
    spec:
      containers:
      - name: sparkworker
        image: spark:my_spark_2.4_hadoop_2.7
        workingDir: /opt/spark
        command: ["/bin/bash", "-c", "/opt/spark/sbin/start-slave.sh spark://spark-manager:7077 && while true;do echo hello;sleep 6000;done"]

Start it up

sudo kubectl create -f standalone.yaml

Check the pod status
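
The listing below comes from a plain get, assuming everything was created in the default namespace:

sudo kubectl get pod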

spark-manager-cfc7f9fb-679tc      1/1     Running     0          16s
spark-worker-6f55fddc87-sgnfh     1/1     Running     0          16s
spark-worker-6f55fddc87-w5zgm     1/1     Running     0          16s

Check the services
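
Again a plain listing (the default kubernetes service is omitted from the output shown here):

sudo kubectl get svc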

spark-manager                         ClusterIP   10.108.230.84    <none>        7077/TCP,8080/TCP   6m16s
spark-manager-rest                    NodePort    10.106.200.126   <none>        8080:30277/TCP      6m16s

Check the rest service details

lin@lin-VirtualBox:~$ sudo kubectl get svc spark-manager-rest
NAME                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
spark-manager-rest   NodePort   10.106.200.126   <none>        8080:30277/TCP   7m59s

Open 10.106.200.126:8080 to reach the Spark Manager Web UI, which shows the worker and job information.
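
Because minikube was started with --driver=none, the ClusterIP is reachable directly from the host. From another machine you could instead go through the NodePort (30277 above), for example:

curl http://$(sudo minikube ip):30277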

For more detailed job information you also need to start the Spark history server.
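
A minimal sketch of what that involves, using a local directory for event logs (in a real cluster this should be a path that all pods and the history server can reach, such as HDFS or a shared volume); the property names are standard Spark settings:

# enable event logging for the jobs you submit
echo "spark.eventLog.enabled true"                     >> conf/spark-defaults.conf
echo "spark.eventLog.dir     file:///tmp/spark-events" >> conf/spark-defaults.conf
mkdir -p /tmp/spark-events

# point the history server at the same directory and start it
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=file:///tmp/spark-events"
./sbin/start-history-server.sh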

Submit a Job

Log in to one of the workers

sudo kubectl exec -t -i spark-worker-6f55fddc87-w5zgm /bin/bash

Submit the job

# the second wordcount.py is just used as the input argument
bin/spark-submit \
        --master spark://spark-manager:7077 \
        --num-executors 2 \
        --name spark-test \
        /opt/spark/examples/src/main/python/wordcount.py \
        /opt/spark/examples/src/main/python/wordcount.py

Note that in standalone mode, Python applications do not support cluster deploy mode, i.e. the driver always runs in the container where spark-submit was executed.

Logs

The driver log is printed directly by the spark-submit command.

Executor logs live under each worker's work directory, for example:

/opt/spark/work/app-20200727062422-0002/0/stderr

app-20200727062422-0002 is the application ID, which you can find in the Web UI and in the driver log.
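
For example, to read an executor's stderr without opening a shell in the pod (the pod name and application ID here are the ones from this run; yours will differ):

sudo kubectl exec -t -i spark-worker-6f55fddc87-w5zgm -- ls /opt/spark/work
sudo kubectl exec -t -i spark-worker-6f55fddc87-w5zgm -- cat /opt/spark/work/app-20200727062422-0002/0/stderr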



