Spark SQL Study Notes


Window Functions

To define window functions, I'll borrow a definition I like: "a window function calculates a return value for every input row of a table based on a group of rows." How window functions differ from other functions:

Ordinary functions: applied to each record, computing a new column (row count unchanged);
Aggregate functions: applied to a group of records (the data is split into groups in some way), computing one aggregate value per group (row count shrinks);
Window functions: applied to each record; for each record a specified set of rows is consulted to compute a value (row count unchanged).
Window function syntax: function_name(arguments) OVER (PARTITION BY clause ORDER BY clause ROWS/RANGE clause)

Function name:
OVER: the keyword that marks this as a window function rather than an ordinary aggregate;
Clauses:
PARTITION BY: grouping column(s)
ORDER BY: ordering column(s)
ROWS/RANGE frame clause: controls the size and boundaries of the window; there are two kinds (ROWS, RANGE)
ROWS: a physical window; rows are selected by their position (index) after sorting
RANGE: a logical window; rows are selected by value
There are mainly three kinds of window functions:

ranking functions

analytic functions

aggregate functions

Window functions are typically used when:

1. we need a user's cumulative total usage time (a running history);

2. the front-end page has to be queried along several dimensions, e.g. product, region, and so on;

3. the table to display has headers such as: product, 2015-04, 2015-05, 2015-06.

Default frame keywords of the over() window function

Window frame range:
over(order by score range between 5 preceding and 5 following): the window contains the rows whose value lies between the current row's value minus 5 and plus 5.
over(order by score rows between 5 preceding and 5 following): the window contains the 5 rows before and the 5 rows after the current row.
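
A minimal sketch contrasting the two frame types, assuming a hypothetical table named scores with a numeric score column (both names are assumptions for illustration):

# Hypothetical table/column names; ROWS counts physical neighbours,
# RANGE compares values of the ORDER BY key
query = """
SELECT score,
       SUM(score) OVER (ORDER BY score
                        ROWS BETWEEN 5 PRECEDING AND 5 FOLLOWING)  AS sum_rows,
       SUM(score) OVER (ORDER BY score
                        RANGE BETWEEN 5 PRECEDING AND 5 FOLLOWING) AS sum_range
FROM scores
"""
spark.sql(query).show()

Here sum_rows adds up the 5 rows before and after the current row, while sum_range adds up every row whose score lies within 5 of the current row's score in either direction.
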
Introduction to the functions that can be combined with over()

After an aggregate function is applied, many rows collapse into one, whereas a window function keeps one output row per input row;
also, with an aggregate function every other column you want to display must be added to GROUP BY, while with a window function you can skip GROUP BY and show all the columns directly.
Window functions are well suited to appending an aggregate result as an extra column on every row.
What window functions do:
show aggregate information for every row (aggregate_function() OVER ());
provide per-group aggregate results for every row (aggregate_function() OVER (PARTITION BY column) AS alias)
-- group by the column, then compute within each group;
work together with ranking functions (ROW_NUMBER() OVER (ORDER BY column) AS alias).
Commonly used analytic functions (the ranking functions 1-3 below are probably used most often):

Aggregate functions

row_number() over(partition by ... order by ...) groups the rows by the partition columns and sorts them within each group by the order columns; it then assigns a sequence
number to every record in the group, starting from 1 in each group. This property makes it handy for taking the top N rows per group (see the sketch after this list).
rank() over(partition by ... order by ...)
dense_rank() over(partition by ... order by ...)

count() over(partition by ... order by ...)
max() over(partition by ... order by ...)
min() over(partition by ... order by ...)
sum() over(partition by ... order by ...)
avg() over(partition by ... order by ...)
first_value() over(partition by ... order by ...)
last_value() over(partition by ... order by ...)
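
As noted above, row_number() makes per-group top-N selection easy. A small sketch against the schedule table used later in these notes, keeping only the first two stops of each train (the limit of 2 is arbitrary):

query = """
SELECT train_id, station, time FROM (
   SELECT train_id, station, time,
   ROW_NUMBER() OVER (PARTITION BY train_id ORDER BY time) AS rn
   FROM schedule
)
WHERE rn <= 2
"""
spark.sql(query).show()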

Comparison functions

lag() over(partition by ... order by ...) # comparison function
lead() over(partition by ... order by ...) 
lag and lead let you fetch, for the current row, the value of some column from a row a given offset before or after it in the sorted result set (without self-joining the result set);
lag looks at preceding rows, lead at following rows;
both take three arguments: the column name, the offset, and a default value returned when the offset falls outside the window.
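
A small sketch of the three-argument form, again using the schedule table that appears later in these notes; the 'n/a' default replaces the null that the first stop of each train would otherwise get:

query = """
SELECT train_id, station, time,
       LAG(time, 1, 'n/a') OVER (PARTITION BY train_id ORDER BY time) AS time_prev
FROM schedule
"""
spark.sql(query).show()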

SQL implementation

query = """
SELECT 
ROW_NUMBER() OVER (ORDER BY time) AS row,
train_id, 
station, 
time, 
LEAD(time,1) OVER (ORDER BY time) AS time_next 
FROM schedule
"""
spark.sql(query).show()

# Give the number of the bad row as an integer
bad_row = 7

# Provide the missing clause, SQL keywords in upper case
clause = 'PARTITION BY train_id'

DataFrame implementation with dot notation

Aggregate functions

# Give the identical result in each command
spark.sql('SELECT train_id, MIN(time) AS start FROM schedule GROUP BY train_id').show()
df.groupBy('train_id').agg({'time':'min'}).withColumnRenamed('min(time)', 'start').show()

# Print the second column of the result
spark.sql('SELECT train_id, MIN(time), MAX(time) FROM schedule GROUP BY train_id').show()
result = df.groupBy('train_id').agg({'time':'min', 'time':'max'})
result.show()
print(result.columns[1])


<script.py> output:
    +--------+-----+
    |train_id|start|
    +--------+-----+
    |     217|6:06a|
    |     324|7:59a|
    +--------+-----+
    
    +--------+-----+
    |train_id|start|
    +--------+-----+
    |     217|6:06a|
    |     324|7:59a|
    +--------+-----+
    
    +--------+---------+---------+
    |train_id|min(time)|max(time)|
    +--------+---------+---------+
    |     217|    6:06a|    6:59a|
    |     324|    7:59a|    9:05a|
    +--------+---------+---------+
    
    +--------+---------+
    |train_id|max(time)|
    +--------+---------+
    |     217|    6:59a|
    |     324|    9:05a|
    +--------+---------+
    
    max(time)
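
Note why the final result table contains only max(time): in the dot-notation version the dict literal {'time':'min', 'time':'max'} has a duplicate key, so Python keeps only the last entry ('max'), and print(result.columns[1]) therefore prints max(time). One way to keep both aggregates, sketched here, is to pass explicit column expressions instead of a dict:

from pyspark.sql import functions as F

# Each aggregate gets its own expression, so both columns survive
result = df.groupBy('train_id').agg(
    F.min('time').alias('start'),
    F.max('time').alias('end'))
result.show()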

The same in SQL form

# Write a SQL query giving a result identical to dot_df
query = "SELECT train_id, MIN(time) AS start, MAX(time) AS end FROM schedule GROUP BY train_id"
sql_df = spark.sql(query)
sql_df.show()

<script.py> output:
    +--------+-----+-----+
    |train_id|start|  end|
    +--------+-----+-----+
    |     217|6:06a|6:59a|
    |     324|7:59a|9:05a|
    +--------+-----+-----+

# Obtain the identical result using dot notation
from pyspark.sql import Window
from pyspark.sql.functions import lead

dot_df = df.withColumn('time_next', lead('time', 1)
        .over(Window.partitionBy('train_id')
        .orderBy('time')))

SQL query

# Create a SQL query to obtain an identical result to dot_df 
query = """
SELECT *, 
(UNIX_TIMESTAMP(LEAD(time, 1) OVER (PARTITION BY train_id ORDER BY time),'H:m') 
 - UNIX_TIMESTAMP(time, 'H:m'))/60 AS diff_min 
FROM schedule 
"""
sql_df = spark.sql(query)
sql_df.show()

1. UNIX_TIMESTAMP(): called with no arguments, it returns the current Unix timestamp (the number of seconds since '1970-01-01 00:00:00' GMT) as an unsigned integer.
2. UNIX_TIMESTAMP(date): called with a date argument, it returns that value as the number of seconds since '1970-01-01 00:00:00' GMT. date can be a DATE string, a DATETIME string, a TIMESTAMP, or a local-time number in YYMMDD or YYYYMMDD format. In the query above, the two-argument form UNIX_TIMESTAMP(time, 'H:m') first parses the time string with the given format and then converts it to seconds, so dividing the difference by 60 gives minutes between consecutive stops.
<script.py> output:
    +--------+-------------+-----+--------+
    |train_id|      station| time|diff_min|
    +--------+-------------+-----+--------+
    |     217|       Gilroy|6:06a|     9.0|
    |     217|   San Martin|6:15a|     6.0|
    |     217|  Morgan Hill|6:21a|    15.0|
    |     217| Blossom Hill|6:36a|     6.0|
    |     217|      Capitol|6:42a|     8.0|
    |     217|       Tamien|6:50a|     9.0|
    |     217|     San Jose|6:59a|    null|
    |     324|San Francisco|7:59a|     4.0|
    |     324|  22nd Street|8:03a|    13.0|
    |     324|     Millbrae|8:16a|     8.0|
    |     324|    Hillsdale|8:24a|     7.0|
    |     324| Redwood City|8:31a|     6.0|
    |     324|    Palo Alto|8:37a|    28.0|
    |     324|     San Jose|9:05a|    null|
    +--------+-------------+-----+--------+


## Loading natural language text

Loading text data
# Load the dataframe
df = spark.read.load('sherlock_sentences.parquet')

# Filter and show the first 5 rows
df.where('id > 70').show(5, truncate=False)

<script.py> output:
    +--------------------------------------------------------+---+
    |clause                                                  |id |
    +--------------------------------------------------------+---+
    |i answered                                              |71 |
    |indeed i should have thought a little more              |72 |
    |just a trifle more i fancy watson                       |73 |
    |and in practice again i observe                         |74 |
    |you did not tell me that you intended to go into harness|75 |
    +--------------------------------------------------------+---+
    only showing top 5 rows

from pyspark.sql.functions import split, explode

# Split the clause column into a column called words
split_df = clauses_df.select(split('clause', ' ').alias('words'))  # alias() just names the new column
split_df.show(5, truncate=False)

# Explode the words column into a column called word 
exploded_df = split_df.select(explode('words').alias('word'))  # explode() creates one row per element of the array (or map)
exploded_df.show(10)

# Count the resulting number of rows in exploded_df
print("\nNumber of rows: ", exploded_df.count())

First 5 rows of clauses_df:
+----------------------------------------+---+
|clause                                  |id |
+----------------------------------------+---+
|title                                   |0  |
|the adventures of sherlock holmes author|1  |
|sir arthur conan doyle release date     |2  |
|march 1999                              |3  |
|ebook 1661                              |4  |
+----------------------------------------+---+

<script.py> output:
    +-----------------------------------------------+
    |words                                          |
    +-----------------------------------------------+
    |[title]                                        |
    |[the, adventures, of, sherlock, holmes, author]|
    |[sir, arthur, conan, doyle, release, date]     |
    |[march, 1999]                                  |
    |[ebook, 1661]                                  |
    +-----------------------------------------------+
    only showing top 5 rows
    
    +----------+
    |      word|
    +----------+
    |     title|
    |       the|
    |adventures|
    |        of|
    |  sherlock|
    |    holmes|
    |    author|
    |       sir|
    |    arthur|
    |     conan|
    +----------+
    only showing top 10 rows
    
    
    Number of rows:  1279

SQL sliding windows

LAG

# Word for each row, previous two and subsequent two words
query = """
SELECT
part,
LAG(word, 2) OVER(PARTITION BY part ORDER BY id) AS w1,
LAG(word, 1) OVER(PARTITION BY part ORDER BY id) AS w2,
word AS w3,
LEAD(word, 1) OVER(PARTITION BY part ORDER BY id) AS w4,
LEAD(word, 2) OVER(PARTITION BY part ORDER BY id) AS w5
FROM text
"""
spark.sql(query).where("part = 12").show(10)

Table 1: First 10 rows of chapter 12

+-----+---------+----+--------------------+
|   id|     word|part|               title|
+-----+---------+----+--------------------+
|95166|      xii|  12|Sherlock Chapter XII|
|95167|      the|  12|Sherlock Chapter XII|
|95168|adventure|  12|Sherlock Chapter XII|
|95169|       of|  12|Sherlock Chapter XII|
|95170|      the|  12|Sherlock Chapter XII|
|95171|   copper|  12|Sherlock Chapter XII|
|95172|  beeches|  12|Sherlock Chapter XII|
|95173|       to|  12|Sherlock Chapter XII|
|95174|      the|  12|Sherlock Chapter XII|
|95175|      man|  12|Sherlock Chapter XII|
+-----+---------+----+--------------------+
only showing top 10 rows


Table 2: First 10 rows of the desired result

+----+---------+---------+---------+---------+---------+
|part|       w1|       w2|       w3|       w4|       w5|
+----+---------+---------+---------+---------+---------+
|  12|     null|     null|      xii|      the|adventure|
|  12|     null|      xii|      the|adventure|       of|
|  12|      xii|      the|adventure|       of|      the|
|  12|      the|adventure|       of|      the|   copper|
|  12|adventure|       of|      the|   copper|  beeches|
|  12|       of|      the|   copper|  beeches|       to|
|  12|      the|   copper|  beeches|       to|      the|
|  12|   copper|  beeches|       to|      the|      man|
|  12|  beeches|       to|      the|      man|      who|
|  12|       to|      the|      man|      who|    loves|
+----+---------+---------+---------+---------+---------+

<script.py> output:
    +----+---------+---------+---------+---------+---------+
    |part|       w1|       w2|       w3|       w4|       w5|
    +----+---------+---------+---------+---------+---------+
    |  12|     null|     null|      xii|      the|adventure|
    |  12|     null|      xii|      the|adventure|       of|
    |  12|      xii|      the|adventure|       of|      the|
    |  12|      the|adventure|       of|      the|   copper|
    |  12|adventure|       of|      the|   copper|  beeches|
    |  12|       of|      the|   copper|  beeches|       to|
    |  12|      the|   copper|  beeches|       to|      the|
    |  12|   copper|  beeches|       to|      the|      man|
    |  12|  beeches|       to|      the|      man|      who|
    |  12|       to|      the|      man|      who|    loves|
    +----+---------+---------+---------+---------+---------+
    only showing top 10 rows

Repartition

Repartitioning shuffles the data into more or fewer partitions so that it is spread relatively evenly across them. This operation usually shuffles data over the network.

# Repartition text_df into 12 partitions on 'chapter' column
repart_df = text_df.repartition(12, 'chapter')

# Prove that repart_df has 12 partitions
repart_df.rdd.getNumPartitions()

First 5 rows of text_df
+---+-------+------------------+
| id|   word|           chapter|
+---+-------+------------------+
|305|scandal|Sherlock Chapter I|
|306|     in|Sherlock Chapter I|
|307|bohemia|Sherlock Chapter I|
|308|      i|Sherlock Chapter I|
|309|     to|Sherlock Chapter I|
+---+-------+------------------+
only showing top 5 rows


Table 1

+---------------------+
|chapter              |
+---------------------+
|Sherlock Chapter I   |
|Sherlock Chapter II  |
|Sherlock Chapter III |
|Sherlock Chapter IV  |
|Sherlock Chapter IX  |
|Sherlock Chapter V   |
|Sherlock Chapter VI  |
|Sherlock Chapter VII |
|Sherlock Chapter VIII|
|Sherlock Chapter X   |
|Sherlock Chapter XI  |
|Sherlock Chapter XII |
+---------------------+

Sorting a categorical variable

Ranking within groups

# Find the top 10 sequences of five words
query = """
SELECT w1, w2, w3, w4, w5, COUNT(*) AS count FROM (
   SELECT word AS w1,
   LEAD(word,1) OVER(PARTITION BY part ORDER BY id ) AS w2,
   LEAD(word,2) OVER(PARTITION BY part ORDER BY id ) AS w3,
   LEAD(word,3) OVER(PARTITION BY part ORDER BY id ) AS w4,
   LEAD(word,4) OVER(PARTITION BY part ORDER BY id ) AS w5
   FROM text
)
GROUP BY w1, w2, w3, w4, w5
ORDER BY count DESC
LIMIT 10
""" 
df = spark.sql(query)
df.show()

<script.py> output:
    +-----+---------+------+-------+------+-----+
    |   w1|       w2|    w3|     w4|    w5|count|
    +-----+---------+------+-------+------+-----+
    |   in|      the|  case|     of|   the|    4|
    |    i|     have|    no|  doubt|  that|    3|
    | what|       do|   you|   make|    of|    3|
    |  the|   church|    of|     st|monica|    3|
    |  the|      man|   who|entered|   was|    3|
    |dying|reference|    to|      a|   rat|    3|
    |    i|       am|afraid|   that|     i|    3|
    |    i|    think|  that|     it|    is|    3|
    |   in|      his| chair|   with|   his|    3|
    |    i|     rang|   the|   bell|   and|    3|
    +-----+---------+------+-------+------+-----+

distinct

# Unique 5-tuples sorted in descending order
query = """
SELECT DISTINCT w1, w2, w3, w4, w5 FROM (
   SELECT word AS w1,
   LEAD(word,1) OVER(PARTITION BY part ORDER BY id ) AS w2,
   LEAD(word,2) OVER(PARTITION BY part ORDER BY id ) AS w3,
   LEAD(word,3) OVER(PARTITION BY part ORDER BY id ) AS w4,
   LEAD(word,4) OVER(PARTITION BY part ORDER BY id ) AS w5
   FROM text
)
ORDER BY w1 DESC, w2 DESC, w3 DESC, w4 DESC, w5 DESC 
LIMIT 10
"""
df = spark.sql(query)
df.show()
<script.py> output:
    +----------+------+---------+------+-----+
    |        w1|    w2|       w3|    w4|   w5|
    +----------+------+---------+------+-----+
    |   zealand| stock|   paying|     4|  1/4|
    |   youwill|   see|     your|   pal|again|
    |   youwill|    do|     come|  come| what|
    |     youth|though|   comely|    to| look|
    |     youth|    in|       an|ulster|  who|
    |     youth|either|       it|     s| hard|
    |     youth| asked| sherlock|holmes|  his|
    |yourselves|  that|       my|  hair|   is|
    |yourselves|behind|    those|  then| when|
    |  yourself|  your|household|   and|  the|
    +----------+------+---------+------+-----+
#   Most frequent 3-tuple per chapter
query = """
SELECT chapter, w1, w2, w3, count FROM
(
  SELECT
  chapter,
  ROW_NUMBER() OVER (PARTITION BY chapter ORDER BY count DESC) AS row,
  w1, w2, w3, count
  FROM ( %s )
)
WHERE row = 1
ORDER BY chapter ASC
""" % subquery

spark.sql(query).show()

<script.py> output:
    +-------+-------+--------+-------+-----+
    |chapter|     w1|      w2|     w3|count|
    +-------+-------+--------+-------+-----+
    |      1|     up|      to|    the|    6|
    |      2|    one|      of|    the|    8|
    |      3|     mr|  hosmer|  angel|   13|
    |      4|   that|      he|    was|    8|
    |      5|   that|      he|    was|    6|
    |      6|neville|      st|  clair|   15|
    |      7|   that|       i|     am|    7|
    |      8|     dr|grimesby|roylott|    8|
    |      9|   that|      it|    was|    7|
    |     10|   lord|      st|  simon|   28|
    |     11|      i|   think|   that|    8|
    |     12|    the|  copper|beeches|   10|
    +-------+-------+--------+-------+-----+
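
The query above interpolates a string variable named subquery that was defined earlier in the exercise and is not shown in these notes. A plausible reconstruction (an assumption, not the original code) would count 3-tuples of consecutive words per chapter:

# Hypothetical reconstruction of the missing subquery string
subquery = """
SELECT chapter, w1, w2, w3, COUNT(*) AS count FROM (
   SELECT chapter,
   word AS w1,
   LEAD(word, 1) OVER(PARTITION BY chapter ORDER BY id) AS w2,
   LEAD(word, 2) OVER(PARTITION BY chapter ORDER BY id) AS w3
   FROM text
)
GROUP BY chapter, w1, w2, w3
"""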

caching

Caching a table

Caching keeps data in memory so it does not have to be fetched again on every use, which improves efficiency.
The caching step itself is slow, however;
once cached, subsequent loads are fast.

One important capability of Spark is persisting (caching) data in memory across operations. When an RDD is persisted, each node keeps the partitions it computed in memory and reuses them in other actions on that dataset, which makes later actions much faster (often by around 10x). Caching is a key tool for iterative algorithms and for fast interactive use.

An RDD can be persisted with the persist() or cache() method. The data is computed during the first action and then cached in the nodes' memory. Spark's cache is fault-tolerant: if any partition of a cached RDD is lost, Spark automatically recomputes it from the original lineage and caches it again.

In shuffle operations (such as reduceByKey), Spark automatically persists some intermediate data even if the user never calls persist. This avoids recomputing the entire input when a node fails during the shuffle. If you intend to reuse an RDD, it is still strongly recommended to call persist on it.

# Unpersists df1 and df2 and initializes a timer
prep(df1, df2) 

# Cache df1
df1.cache()

# Run actions on both dataframes
run(df1, "df1_1st") 
run(df1, "df1_2nd")
run(df2, "df2_1st")
run(df2, "df2_2nd", elapsed=True)

# Prove df1 is cached
print(df1.is_cached)

<script.py> output:
    df1_1st : 3.1s
    df1_2nd : 0.1s
    df2_1st : 0.3s
    df2_2nd : 0.1s
    Overall elapsed : 3.9
    True

persist()

RDD: Resilient Distributed Dataset

_useDisk: use disk storage
_useMemory: use memory
_useOffHeap: use off-heap memory. This is a JVM concept: off-heap memory is allocated outside the JVM heap and is managed directly by the operating system rather than by the JVM. The benefit is that the heap stays smaller, so garbage collection has less impact on the application.
_deserialized: store the data deserialized. Serialization is a mechanism Java provides for representing an object as a sequence of bytes; deserialization is the reverse process, restoring an object from those bytes. Serialization is one way to make objects durable: an object and its fields can be saved and later restored directly by deserializing them.
_replication: number of replicas, 1 by default

Persist df2 using memory and disk storage level

df2.persist(storageLevel=pyspark.StorageLevel.MEMORY_AND_DISK)
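
A sketch of how the five flags above combine into a storage level. The positional arguments mirror the attribute list (useDisk, useMemory, useOffHeap, deserialized, replication); building a level by hand like this is only for illustration, and normally one of the predefined constants is used:

import pyspark

# (useDisk, useMemory, useOffHeap, deserialized, replication)
two_copies = pyspark.StorageLevel(True, True, False, False, 2)

df2.unpersist()                      # drop the previous level before changing it
df2.persist(storageLevel=two_copies)
print(df2.storageLevel)              # prints the flags of the level now in use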

spark.catalog.isCached

An API for checking whether a table is cached.

# List the tables
print("Tables:\n", spark.catalog.listTables())

# Cache table1 and Confirm that it is cached
spark.catalog.cacheTable('table1')
print("table1 is cached: ", spark.catalog.isCached('table1'))

# Uncache table1 and confirm that it is uncached
spark.catalog.uncacheTable('table1')
print("table1 is cached: ", spark.catalog.isCached('table1'))

Logging

A quick look at logging, and at avoiding hidden CPU cost (log statements that unintentionally trigger Spark actions).
The levels at which log messages can be emitted:

DEBUG: detailed information, typically used for debugging

INFO: messages produced during normal operation

WARNING: warns the user that something unexpected happened, although the program still works

ERROR: a more serious problem; the program was unable to perform some function

CRITICAL: a severe error; the program can no longer continue running
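
For the statements below to print anything, the standard-library logger has to be configured first; a minimal setup (the level and format here are arbitrary choices) looks like this:

import logging

logging.basicConfig(
    level=logging.INFO,   # show INFO and above; DEBUG messages are filtered out
    format="%(asctime)s %(levelname)s %(message)s")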

# Uncomment the 5 statements that do NOT trigger text_df
logging.debug("text_df columns: %s", text_df.columns)
logging.info("table1 is cached: %s", spark.catalog.isCached(tableName="table1"))
# logging.warning("The first row of text_df: %s", text_df.first())
logging.error("Selected columns: %s", text_df.select("id", "word"))
logging.info("Tables: %s", spark.sql("SHOW tables").collect())
logging.debug("First row: %s", spark.sql("SELECT * FROM table1 LIMIT 1"))
# logging.debug("Count: %s", spark.sql("SELECT COUNT(*) AS count FROM table1").collect())

# Log selected columns of text_df as error message
logging.error("Selected columns: %s", text_df.select("id", "word"))

explain

Shows the execution plan of a query.
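
For example, calling explain() on the schedule DataFrame used earlier, or prefixing a query with EXPLAIN, prints the plan instead of running the query:

# Physical plan only
df.explain()

# Extended output: parsed and analyzed logical plans, optimized plan, physical plan
df.explain(True)

# SQL equivalent
spark.sql("EXPLAIN SELECT train_id, MIN(time) FROM schedule GROUP BY train_id").show(truncate=False)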

User-defined functions (UDFs)

from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType, StringType

# Returns true if the value is a nonempty vector
nonempty_udf = udf(lambda x:  
    True if (x and hasattr(x, "toArray") and x.numNonzeros())
    else False, BooleanType())

# Returns first element of the array as string
s_udf = udf(lambda x: str(x[0]) if (x and type(x) is list and len(x) > 0)
    else '', StringType())

This part is starting to feel a bit more familiar.


array_contains

Checks whether an array column contains a given element.
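
A minimal sketch using the split_df from the text-loading section above, whose words column is an array:

from pyspark.sql.functions import array_contains

# True for clauses whose word array contains 'sherlock'
split_df.select('words',
    array_contains('words', 'sherlock').alias('has_sherlock')).show(5, truncate=False)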

# Transform df using model
result = model.transform(df.withColumnRenamed('in', 'words'))\
        .withColumnRenamed('words', 'in')\
        .withColumnRenamed('vec', 'invec')
result.drop('sentence').show(3, False)

# Add a column based on the out column called outvec
result = model.transform(result.withColumnRenamed('out', 'words'))\
        .withColumnRenamed('words', 'out')\
        .withColumnRenamed('vec', 'outvec')
result.select('invec', 'outvec').show(3,False)	
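
The UDFs defined earlier can now be applied to these columns; a sketch that simply assumes the column names produced by the renames above:

# Flag rows whose output vector is non-empty and pull the first output word
result.select('invec',
    nonempty_udf('outvec').alias('outvec_nonempty'),
    s_udf('out').alias('first_out_word')).show(3, False)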

Spark can also be used to serve some models online.

Oh, so Spark is also a bit like its own language with its own packages, haha. That's how I understand it for now anyway.

# Split the examples into train and test, use 80/20 split
df_trainset, df_testset = df_examples.randomSplit((0.80, 0.20), 42)

# Print the number of training examples
print("Number training: ", df_trainset.count())

# Print the number of test examples
print("Number test: ", df_testset.count())

<script.py> output:
    Number training:  2091
    Number test:  495

A logistic regression example

# Import the logistic regression classifier
from pyspark.ml.classification import LogisticRegression

# Instantiate logistic setting elasticnet to 0.0
logistic = LogisticRegression(maxIter=100, regParam=0.4, elasticNetParam=0.0)

# Train the logistic classifer on the trainset
df_fitted = logistic.fit(df_trainset)

# Print the number of training iterations
print("Training iterations: ", df_fitted.summary.totalIterations)

<script.py> output:
    Training iterations:  21

# Score the model on test data
testSummary = df_fitted.evaluate(df_testset)

# Print the AUC metric
print("\ntest AUC: %.3f" % testSummary.areaUnderROC)


lag() over() and lead() over() are the two offset-based analytic functions. In a single query they can pull the value of a column from the row N positions before (lag) or after (lead) the current row, according to a given ordering, into a separate column, which makes filtering the data much easier. This kind of operation can replace a self-join of the table, and LAG and LEAD are more efficient.

That's enough for getting started; I'm still somewhat in a fog, haha.

I think a good next step is to get familiar with pyspark's ml and mllib.

