Adding SkyWalking (Distributed Tracing) to Your Application


What is SkyWalking, and why should you add it to your application?


Before introducing SkyWalking, let's first look at APM (Application Performance Management) systems.

1. What Is an APM System

APM (Application Performance Management) systems monitor enterprise systems in real time to provide a systematic solution for application performance management and fault management. APM focuses on monitoring and optimizing an enterprise's key business applications, improving their reliability and quality, ensuring users get good service, and lowering the total cost of IT ownership. An APM system is a tool that helps you understand system behavior and analyze performance problems, so that when a failure occurs you can quickly locate and resolve it.

Put simply: with the rise of microservices, traditional monolithic applications are split into many small services, a single user request passes through multiple systems, and the calls between services become very complex; an error in any one system can affect the outcome of the whole request. To solve this problem, Google introduced Dapper, a distributed tracing system, and many internet companies subsequently built their own distributed tracing systems following Dapper's ideas. These systems are the APM systems of the distributed world.

There are many APM systems on the market today, such as SkyWalking, Pinpoint, and Zipkin. Among them:

  • Zipkin: an open-source distributed tracing system from Twitter. It collects timing data from services to troubleshoot latency problems in microservice architectures, covering data collection, storage, search, and visualization.
  • Pinpoint: an APM tool for large-scale distributed systems written in Java, an open-source tracing component from South Korea.
  • Skywalking: an excellent APM project originating in China; a system for tracing, alerting on, and analyzing the runtime behavior of distributed Java application clusters.

2. What Is SkyWalking

SkyWalking is an open-source APM project under the Apache Foundation, designed for microservice and cloud-native architectures. Its probes automatically collect the required metrics and perform distributed tracing. From these call chains and metrics, SkyWalking APM infers the relationships between applications and between services, and computes the corresponding statistics. Its tracing and monitoring support covers most mainstream frameworks and containers, including RPC frameworks such as Dubbo and Motan as well as Spring Boot and Spring Cloud. Official website: http://skywalking.apache.org/


SkyWalking has the following characteristics:

  1. Multi-language auto-instrumentation probes: Java, .NET Core, and Node.js.
  2. Multiple monitoring approaches: language probes and service mesh.
  3. Lightweight and efficient; no separate big-data platform is required.
  4. Modular architecture: pluggable UI, storage, and cluster management.
  5. Alerting support.
  6. Excellent visualization.

The overall SkyWalking architecture is as follows:


The architecture comprises three parts:
1. The probe (agent) collects the data, including traces and metrics. It is installed on the server that hosts the service so the data can be gathered in place.
2. The Observability Analysis Platform (OAP) receives the data sent by the probes, aggregates and computes it in memory using the analysis engine (Analysis Core), and persists the results to a storage backend such as Elasticsearch, MySQL, or H2. The OAP also exposes an HTTP query interface through its query engine (Query Core).
3. SkyWalking provides a standalone UI for browsing the data; the UI calls the interfaces exposed by the OAP to fetch what it displays.
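To make the wiring concrete: on the agent side, just two settings in agent/config/agent.config tie a probe to the backend. The excerpt below is a minimal sketch; the service name and OAP address are placeholders you would replace with your own:

# agent/config/agent.config (excerpt; values are placeholders)
# Logical service name shown in the SkyWalking UI; overridable via the SW_AGENT_NAME env var
agent.service_name=${SW_AGENT_NAME:your-service}
# gRPC address of the OAP collector; overridable via SW_AGENT_COLLECTOR_BACKEND_SERVICES
collector.backend_service=${SW_AGENT_COLLECTOR_BACKEND_SERVICES:127.0.0.1:11800}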

3. Setup and Usage

The setup itself is straightforward, and the official project provides deployment examples.


As mentioned above, SkyWalking's backend can store its data in Elasticsearch, MySQL, H2, and others. I use Elasticsearch as the storage here, and to make it easier to scale out and also collect logs from other applications, I deploy Elasticsearch as a separate cluster.

3.1 Deploying Elasticsearch

To make the ES cluster easier to scale, the nodes are split by role into master nodes, data nodes, and client nodes. The layout is as follows:

  • The Elasticsearch data node pods are deployed as a StatefulSet
  • The Elasticsearch master node pods are deployed as a Deployment
  • The Elasticsearch client node pods are deployed as a Deployment; their internal Service routes read/write requests to the data nodes
  • Kibana is deployed as a Deployment, with a Service reachable from outside the Kubernetes cluster


(1) First create the elastic namespace (es-ns.yaml):

apiVersion: v1
kind: Namespace
metadata:
  name: elastic

Run kubectl apply -f es-ns.yaml.
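As a quick sanity check before moving on, you can confirm the namespace exists:

kubectl get ns elastic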


(2) Deploy the ES master nodes
The manifest is as follows (es-master.yaml):

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-master-config
  labels:
    app: elasticsearch
    role: master
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: true
      data: false
      ingest: false

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: master
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-master
  labels:
    app: elasticsearch
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-master
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-master
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-master-config
      - name: "storage"
        emptyDir:
          medium: ""
---

Then run kubectl apply -f es-master.yaml to create the resources; once the pod reaches the Running state, the deployment succeeded.

# kubectl get pod -n elastic
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-master-77d5d6c9db-xt5kq   1/1     Running   0          67s
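If the pod fails to start, or you want to confirm that the master came up cleanly, the node logs are the first place to look (standard kubectl usage; the deployment name is the one created above):

kubectl logs -n elastic deploy/elasticsearch-master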


(3) Deploy the ES data nodes
The manifest is as follows (es-data.yaml):

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-data-config
  labels:
    app: elasticsearch
    role: data
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: false
      data: true
      ingest: false

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: data
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: elastic
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  selector:
    matchLabels:
      app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        app: elasticsearch
        role: data
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-data
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms1024m -Xmx1024m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: elasticsearch-data-persistent-storage
          mountPath: /data/db
      volumes:
      - name: config
        configMap:
          name: elasticsearch-data-config
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 20Gi
---

Run kubectl apply -f es-data.yaml to create the resources; once the pod is Running, the deployment succeeded.

# kubectl get pod -n elastic
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-data-0                    1/1     Running   0          4s
elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          2m35s
elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          2m35s
elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          2m35s
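Note that the volumeClaimTemplates above assume a StorageClass named managed-nfs-storage already exists in the cluster. Under that assumption, you can confirm the claim was provisioned and bound:

kubectl get storageclass managed-nfs-storage
kubectl get pvc -n elastic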

(4) Deploy the ES client nodes
The manifest is as follows (es-client.yaml):

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: elasticsearch-client-config
  labels:
    app: elasticsearch
    role: client
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}

    network.host: 0.0.0.0

    node:
      master: false
      data: false
      ingest: true

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  ports:
  - port: 9200
    name: client
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: elasticsearch-client
  labels:
    app: elasticsearch
    role: client
spec:
  selector:
    matchLabels:
      app: elasticsearch
      role: client
  template:
    metadata:
      labels:
        app: elasticsearch
        role: client
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox:1.27.2
        command:
        - sysctl
        - -w
        - vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch-client
        image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-client
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms256m -Xmx256m"
        ports:
        - containerPort: 9200
          name: client
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: storage
          mountPath: /data
      volumes:
      - name: config
        configMap:
          name: elasticsearch-client-config
      - name: "storage"
        emptyDir:
          medium: ""

Run kubectl apply -f es-client.yaml to create the resources; once the pod is Running, the deployment succeeded.

# kubectl get pod -n elastic
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-client-f79cf4f7b-pbz9d    1/1     Running   0          5s
elasticsearch-data-0                    1/1     Running   0          3m11s
elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          5m42s
elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          5m42s
elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          5m42s

(5) Generate passwords
We enabled the X-Pack security module to protect the cluster, so an initial set of passwords is required. Run the bin/elasticsearch-setup-passwords command inside the client node container to generate default usernames and passwords:

# kubectl exec $(kubectl get pods -n elastic | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
     -n elastic \
     -- bin/elasticsearch-setup-passwords auto -b

Changed password for user apm_system
PASSWORD apm_system = QNSdaanAQ5fvGMrjgYnM

Changed password for user kibana_system
PASSWORD kibana_system = UFPiUj0PhFMCmFKvuJuc

Changed password for user kibana
PASSWORD kibana = UFPiUj0PhFMCmFKvuJuc

Changed password for user logstash_system
PASSWORD logstash_system = Nqes3CCxYFPRLlNsuffE

Changed password for user beats_system
PASSWORD beats_system = Eyssj5NHevFjycfUsPnT

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 7Po4RLQQZ94fp7F31ioR

Changed password for user elastic
PASSWORD elastic = n816QscHORFQMQWQfs4U

Note that the elastic user's password also needs to be added to a Kubernetes Secret object:

kubectl create secret generic elasticsearch-pw-elastic \
     -n elastic \
     --from-literal password=n816QscHORFQMQWQfs4U
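If you need the password again later, it can be read back out of the Secret with standard kubectl and base64 usage:

kubectl get secret elasticsearch-pw-elastic -n elastic \
     -o jsonpath='{.data.password}' | base64 -d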

(6) Verify the cluster status

kubectl exec -n elastic  \
        $(kubectl get pods -n elastic | grep elasticsearch-client | sed -n 1p | awk '{print $1}') \
        -- curl -u elastic:n816QscHORFQMQWQfs4U http://elasticsearch-client.elastic:9200/_cluster/health?pretty

{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 2,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

The status above is green, so the cluster is healthy; the ES cluster setup is complete. For easier administration you can also deploy a Kibana service (kibana.yaml), as follows:

---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: elastic
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |-
    server.host: 0.0.0.0

    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}
      username: ${ELASTICSEARCH_USER}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: Service
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    name: webinterface
  selector:
    app: kibana
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    prometheus.io/http-probe: 'true'
    prometheus.io/scrape: 'true'
  name: kibana
  namespace: elastic
spec:
  rules:
    - host: kibana.coolops.cn
      http:
        paths:
          - backend:
              serviceName: kibana
              servicePort: 5601 
            path: /
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: elastic
  name: kibana
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.8.0
        ports:
        - containerPort: 5601
          name: webinterface
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch-client.elastic.svc.cluster.local:9200"
        - name: ELASTICSEARCH_USER
          value: "elastic"
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-pw-elastic
              key: password
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config/kibana.yml
          readOnly: true
          subPath: kibana.yml
      volumes:
      - name: config
        configMap:
          name: kibana-config
---

Then run kubectl apply -f kibana.yaml to create Kibana, and check that the pod is Running.

# kubectl get pod -n elastic 
NAME                                    READY   STATUS    RESTARTS   AGE
elasticsearch-client-f79cf4f7b-pbz9d    1/1     Running   0          30m
elasticsearch-data-0                    1/1     Running   0          33m
elasticsearch-master-77d5d6c9db-gklgd   1/1     Running   0          36m
elasticsearch-master-77d5d6c9db-gvhcb   1/1     Running   0          36m
elasticsearch-master-77d5d6c9db-pflz6   1/1     Running   0          36m
kibana-6b9947fccb-4vp29                 1/1     Running   0          3m51s
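The Ingress above assumes a working ingress controller plus DNS for kibana.coolops.cn; if you don't have those, a port-forward is a quick way to reach Kibana locally (a stopgap for testing, not a production exposure):

kubectl port-forward -n elastic svc/kibana 5601:5601
# then open http://localhost:5601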

Open Kibana and log in with the elastic user and the password generated above (also stored in the Secret object we created).

3.2 Deploying the SkyWalking Server

I install it with Helm here.


(1) Install Helm (Helm 3 in this case)

wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
tar zxvf helm-v3.0.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/

Note: Helm 3 no longer has the Tiller server-side component; it talks to the cluster directly using your kubeconfig, so it is convenient to run it on the master node.
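You can confirm the binary works; with Helm 3 this prints only the client version, since there is no server component:

helm version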


(2) Download the SkyWalking Helm charts

mkdir /home/install/package -p
cd /home/install/package
git clone https://github.com/apache/skywalking-kubernetes.git


(3) Enter the chart directory and install

cd skywalking-kubernetes/chart
helm repo add elastic https://helm.elastic.co
helm dep up skywalking
helm install my-skywalking skywalking -n skywalking \
        --set elasticsearch.enabled=false \
        --set elasticsearch.config.host=elasticsearch-client.elastic.svc.cluster.local \
        --set elasticsearch.config.port.http=9200 \
        --set elasticsearch.config.user=elastic \
        --set elasticsearch.config.password=n816QscHORFQMQWQfs4U

Note: before running helm install, create the skywalking namespace first: kubectl create ns skywalking


(4) Check that all the pods are Running

# kubectl get pod -n skywalking
NAME                                     READY   STATUS       RESTARTS   AGE
my-skywalking-es-init-x89pr                 0/1     Completed    0          15h
my-skywalking-oap-694fc79d55-2dmgr          1/1     Running      0          16h
my-skywalking-oap-694fc79d55-bl5hk          1/1     Running      4          16h
my-skywalking-ui-6bccffddbd-d2xhs           1/1     Running      0          16h

You can also inspect the chart release with the following command:

# helm list --all-namespaces
NAME               	NAMESPACE  	REVISION	UPDATED                                	STATUS  	CHART                    	APP VERSION
my-skywalking      	skywalking 	1       	2020-09-29 14:42:10.952238898 +0800 CST	deployed	skywalking-3.1.0         	8.1.0      
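helm status shows a little more detail about a release if you need it (standard Helm 3 usage):

helm status my-skywalking -n skywalking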

To change the configuration, edit values.yaml directly. For example, to switch the my-skywalking-ui Service to NodePort, change it as follows:

.....
ui:
  name: ui
  replicas: 1
  image:
    repository: apache/skywalking-ui
    tag: 8.1.0
    pullPolicy: IfNotPresent
....
  service:
    type: NodePort 
    # clusterIP: None
    externalPort: 80
    internalPort: 8080

....

Then run the following command to upgrade:

helm upgrade my-skywalking skywalking -n skywalking
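Alternatively, the same change can be made without editing values.yaml by passing --set, following the ui.service.type path shown in the excerpt above:

helm upgrade my-skywalking skywalking -n skywalking --set ui.service.type=NodePort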

Then verify that the Service has changed to NodePort:

# kubectl get svc -n skywalking 
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)               AGE
my-skywalking-oap   ClusterIP   10.109.109.131   <none>        12800/TCP,11800/TCP   88s
my-skywalking-ui    NodePort    10.102.247.110   <none>        80:32563/TCP          88s

You can now browse the SkyWalking UI through the NodePort shown above (port 32563 in this example).

3.3 Attaching the SkyWalking Agent to Applications

Now that the SkyWalking server side is installed, the next step is wiring up applications. "Attaching" an application simply means starting it with the SkyWalking agent. For containers, I'll cover two ways to get the agent in:

  • Bake the agent's files and package into the application image when it is built
  • Attach the agent to the application container as a sidecar


First, download the agent package:

wget https://mirrors.tuna.tsinghua.edu.cn/apache/skywalking/8.1.0/apache-skywalking-apm-8.1.0.tar.gz
tar xf apache-skywalking-apm-8.1.0.tar.gz

(1) Bake the agent in when building the application image
Write a Dockerfile similar to the one below and build the image; this is the simpler approach:

# Base image for application builds: JDK plus the SkyWalking agent
FROM harbor-test.coolops.com/coolops/jdk:8u144_test
RUN mkdir -p /usr/skywalking/agent/
ADD apache-skywalking-apm-bin/agent/ /usr/skywalking/agent/

Note: this Dockerfile defines the base image that applications are packaged on top of, not the application's own Dockerfile.
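For illustration, an application Dockerfile built on top of that base image might look like the sketch below; the base image tag, demo.jar, and the entrypoint are all hypothetical stand-ins for your own build:

# Hypothetical application Dockerfile on the agent-enabled base image
FROM harbor-test.coolops.com/coolops/jdk-sw:8u144_test
ADD demo.jar /demo.jar
# Start the app with the agent attached; SW_AGENT_NAME and the collector
# address are supplied through the container environment
ENTRYPOINT ["sh", "-c", "java -javaagent:/usr/skywalking/agent/skywalking-agent.jar -jar /demo.jar"]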


(2) Attach the agent as a sidecar
First, build an image that contains only the agent:

FROM busybox:latest 
ENV LANG=C.UTF-8
RUN set -eux && mkdir -p /usr/skywalking/agent/
ADD apache-skywalking-apm-bin/agent/ /usr/skywalking/agent/
WORKDIR /
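Build and push it to your registry; the image name below is only an example:

docker build -t harbor.coolops.cn/skywalking/sw-agent-sidecar:latest .
docker push harbor.coolops.cn/skywalking/sw-agent-sidecar:latest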

Then write the application's Deployment manifest along the following lines:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: demo-sw
  name: demo-sw
spec:
  replicas: 1
  selector:
    matchLabels:
      name: demo-sw
  template:
    metadata:
      labels:
        name: demo-sw
    spec:
      initContainers:
      - image: innerpeacez/sw-agent-sidecar:latest
        name: sw-agent-sidecar
        imagePullPolicy: IfNotPresent
        command: ['sh']
        args: ['-c','mkdir -p /skywalking/agent && cp -r /usr/skywalking/agent/* /skywalking/agent']
        volumeMounts:
        - mountPath: /skywalking/agent
          name: sw-agent
      containers:
      - image: harbor.coolops.cn/skywalking-java:1.7.9
        name: demo
        command: ["sh", "-c"]
        args: ["java -javaagent:/usr/skywalking/agent/skywalking-agent.jar -Dskywalking.agent.service_name=${SW_AGENT_NAME} -jar demo.jar"]
        volumeMounts:
        - mountPath: /usr/skywalking/agent
          name: sw-agent
        ports:
        - containerPort: 80
        env:
          - name: SW_AGENT_COLLECTOR_BACKEND_SERVICES
            value: 'my-skywalking-oap.skywalking.svc.cluster.local:11800'
          - name: SW_AGENT_NAME
            value: cartechfin-open-platform-skywalking
      volumes:
      - name: sw-agent
        emptyDir: {}
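Save the manifest as, say, demo-sw.yaml, apply it, and check that the pod comes up (the image names in the manifest are from this example, so adjust them to your environment):

kubectl apply -f demo-sw.yaml
kubectl get pod -l name=demo-sw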

When starting the application, all that is needed is to attach the SkyWalking javaagent, as follows:

java -javaagent:/path/to/skywalking-agent/skywalking-agent.jar -Dskywalking.agent.service_name=${SW_AGENT_NAME} -jar yourApp.jar


Then you can see the registered application in the UI. From there you can inspect its JVM metrics, view the service topology, and trace individual URIs.


That completes the whole setup; feel free to try it yourself.


References:
1. https://github.com/apache/skywalking-kubernetes
2. http://skywalking.apache.org/zh/blog/2019-08-30-how-to-use-Skywalking-Agent.html
3. https://github.com/apache/skywalking/blob/5.x/docs/cn/Deploy-skywalking-agent-CN.md

