What does the backlog parameter of a TCP socket do? (Illustrated)

The packet sizes seen in recv_queue are the kernel's segment sizes, not raw IP packet sizes.

If the packets being sent out are too large, both the write_queue and tx_queue parameters need adjusting; tx_queue is mainly about flow control.

For a multi-process server, fork() must come after the socket is created and listening; even with SO_REUSEADDR set, the reason is apparent from the kernel's listening-socket hash table.
The net.ipv4.tcp_max_syn_backlog parameter bounds the queue of connections in the SYN_RECV state, with a typical default of 512 or 1024; once that limit is exceeded, the system stops accepting new TCP connection requests.
With SYN cookies, be careful on a public-facing service: users arriving from behind the same hub/NAT may end up unable to establish many connections.
somaxconn bounds the size of the listen (accept) queue.
select() is limited to 1024 descriptors; even if fewer than 1024 are monitored, a descriptor whose numeric value exceeds 1024 still causes problems.
epoll is mostly used in single-process, multi-threaded designs; a minimal example follows.
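As a small illustration of the select()-versus-epoll point above, here is a minimal epoll accept loop in C. It is only a sketch: listen_fd is assumed to be a socket that has already been bound and put into the listening state, and error handling is kept to a minimum.

#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal epoll-based accept loop. Unlike select(), epoll is not bound
 * by FD_SETSIZE (1024), so descriptors with large numeric values are fine. */
static void accept_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        struct epoll_event events[64];
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == listen_fd) {
                /* Each accept() removes one connection from the accept queue. */
                int conn = accept(listen_fd, NULL, NULL);
                if (conn >= 0)
                    close(conn);   /* a real server would register conn with epoll */
            }
        }
    }
}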

 

On the sender side, a user application writes data into the TCP send buffer by calling the write() system call. Like the TCP recv buffer, the send buffer is a crucial parameter for getting maximum throughput. The maximum size of the congestion window is related to the amount of send buffer space allocated to the TCP socket. The send buffer holds all outstanding packets (for potential retransmission) as well as all data queued to be transmitted. Therefore, the congestion window can never grow larger than the send buffer can accommodate. If the send buffer is too small, the congestion window will not fully open, limiting the throughput. On the other hand, a large send buffer allows the congestion window to grow to a large value. If not constrained by the TCP recv buffer, the number of outstanding packets will also grow as the congestion window grows, causing packet loss if the end-to-end path cannot hold the large number of outstanding packets. The size of the send buffer can be set by modifying the /proc/sys/net/ipv4/tcp_wmem variable, which takes three values: min, default, and max.
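tcp_wmem sets the system-wide defaults; as a per-socket alternative, a program can ask for a larger send buffer with SO_SNDBUF (Linux caps the value at net.core.wmem_max and roughly doubles it internally for bookkeeping). A minimal sketch, assuming fd is an existing TCP socket:

#include <stdio.h>
#include <sys/socket.h>

/* Request a 4 MB send buffer on socket `fd`, then print what the kernel
 * actually granted (the request is capped by net.core.wmem_max). */
static void set_send_buffer(int fd)
{
    int size = 4 * 1024 * 1024;
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size)) < 0)
        perror("setsockopt(SO_SNDBUF)");

    socklen_t len = sizeof(size);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, &len) == 0)
        printf("effective send buffer: %d bytes\n", size);
}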

The analogue to the receiver's netdev_max_backlog is the sender's txqueuelen. The TCP layer builds packets when data is available in the send buffer, or ACK packets in response to received data packets. Each packet is pushed down to the IP layer for transmission. The IP layer enqueues each packet in an output queue (qdisc) associated with the NIC. The size of the qdisc can be modified by assigning a value to the txqueuelen variable associated with each NIC device. If the output queue is full, the attempt to enqueue a packet generates a local congestion event, which is propagated upward to the TCP layer. The TCP congestion-control algorithm then enters the Congestion Window Reduced (CWR) state, and reduces the congestion window by one every other ACK (known as rate halving). After a packet is successfully queued inside the output queue, the packet descriptor (sk_buff) is placed in the output ring buffer tx_ring. When packets are available inside the ring buffer, the device driver invokes the NIC DMA engine to transmit packets onto the wire.
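txqueuelen is normally changed from the shell (ip link set dev eth0 txqueuelen <N>, or the older ifconfig). The sketch below does the same thing programmatically through the SIOCGIFTXQLEN/SIOCSIFTXQLEN ioctls; treat it as an illustration only, with the interface name and queue length as placeholders, and note that changing it needs CAP_NET_ADMIN.

#include <linux/sockios.h>   /* SIOCGIFTXQLEN, SIOCSIFTXQLEN */
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Read, then raise, the tx queue length of an interface. */
static void bump_txqueuelen(const char *ifname, int new_len)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket will do for ioctl */

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    if (ioctl(fd, SIOCGIFTXQLEN, &ifr) == 0)
        printf("%s txqueuelen is currently %d\n", ifname, ifr.ifr_qlen);

    ifr.ifr_qlen = new_len;
    if (ioctl(fd, SIOCSIFTXQLEN, &ifr) < 0)
        perror("SIOCSIFTXQLEN");

    close(fd);
}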

While the above parameters dictate the flow-control profile of a connection, the congestion-control behavior can also have a large impact on throughput. TCP uses one of several congestion-control algorithms to match its sending rate to the bottleneck-link rate. Over a connectionless network, a large number of TCP flows and other types of traffic share the same bottleneck link. As the number of flows sharing the bottleneck link changes, the bandwidth available to a given TCP flow varies. Packets get lost when the sending rate of a TCP flow exceeds the available bandwidth. In a circuit, by contrast, packets are not lost through competition with other flows, since bandwidth is reserved; however, when a fast sender is connected to a circuit with a lower rate, packets can still be lost to buffer overflow at the switch.

When a TCP connection is set up, the sender uses ACK packets as a "clock" (ACK-clocking) to inject new packets into the network [1]. Since the receiver cannot send ACK packets faster than the bottleneck-link rate, the sender's transmission rate under ACK-clocking matches the bottleneck-link rate. To start the ACK clock, a TCP sender uses the slow-start mechanism. During slow start, for each ACK received the sender transmits two data packets back-to-back. Since ACKs arrive at the bottleneck-link rate, the sender is essentially transmitting twice as fast as the bottleneck link can sustain. Slow start ends when the congestion window grows beyond ssthresh. In many congestion-control algorithms, such as BIC [2], the initial slow-start threshold (ssthresh) can be adjusted, as can other factors such as the maximum increment, to make BIC more or less aggressive. However, like changing the buffers via sysctl, these are system-wide changes that could adversely affect other ongoing and future connections.

A TCP sender is allowed to send the minimum of the congestion window and the receiver's advertised window. Therefore, the number of outstanding packets doubles each round-trip time, unless bounded by the receiver's advertised window. Since packets are forwarded at the bottleneck-link rate, doubling the number of outstanding packets each round-trip time also doubles the buffer occupancy inside the bottleneck switch. Eventually the buffer overflows and packets are lost inside the bottleneck switch.
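As a per-connection alternative to those system-wide knobs, Linux also exposes the TCP_CONGESTION socket option, which selects the congestion-control algorithm for a single socket. A minimal sketch; fd is assumed to be a TCP socket, and the named algorithm must be available on the host (see /proc/sys/net/ipv4/tcp_available_congestion_control):

#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_CONGESTION */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Switch one socket to the given congestion-control algorithm
 * (e.g. "cubic" or "bic") instead of changing the system default. */
static void use_congestion_control(int fd, const char *algo)
{
    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo)) < 0)
        perror("setsockopt(TCP_CONGESTION)");

    char buf[16];
    socklen_t len = sizeof(buf);
    if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
        printf("congestion control in use: %.*s\n", (int)len, buf);
}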

After packet loss occurs, a TCP sender enters the congestion-avoidance phase. During congestion avoidance, the congestion window is increased by one packet each round-trip time. As ACK packets arrive at the bottleneck-link rate, the congestion window keeps growing, and so does the number of outstanding packets. Therefore, packets are lost again once the number of outstanding packets exceeds the buffer size in the bottleneck switch plus the number of packets on the wire.

There are many other parameters relevant to the operation of TCP in Linux, and each is at least briefly explained in the documentation included in the distribution (Documentation/networking/ip-sysctl.txt). An example of a configurable parameter in the TCP implementation is the RFC 2861 congestion-window restart function. RFC 2861 proposes restarting the congestion window if the sender has been idle for a period of time (one RTO). The purpose is to ensure that the congestion window reflects the current state of the network; if the connection has been idle, the congestion window may reflect an obsolete view of the network and so is reset. This behavior can be disabled using the sysctl tcp_slow_start_after_idle, but, again, the change affects all connections system-wide.
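A small sketch of flipping that sysctl from a program; it is equivalent to sysctl -w net.ipv4.tcp_slow_start_after_idle=0, needs root, and, being a sysctl, affects every connection on the host.

#include <stdio.h>

/* Disable the RFC 2861 congestion-window restart after idle by writing
 * to the corresponding /proc entry. */
static int disable_slow_start_after_idle(void)
{
    FILE *f = fopen("/proc/sys/net/ipv4/tcp_slow_start_after_idle", "w");
    if (f == NULL) {
        perror("tcp_slow_start_after_idle");
        return -1;
    }
    fputs("0\n", f);
    fclose(f);
    return 0;
}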

 

Setting the backlog:

It can be configured globally, via kernel parameters (by editing the relevant entries under /etc/sysctl.conf),

or per socket, as the second argument of the listen() system call.

The difference between the two: the former is global and affects every socket, while the latter is local and affects only the socket in question. A minimal example of the per-socket route follows.
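A minimal sketch of the per-socket route: a plain TCP server that passes its own backlog to listen(). The port (9000) simply mirrors the FPM listener seen later in this article, and the kernel will still cap the value at net.core.somaxconn.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define LISTEN_BACKLOG 511   /* same default as nginx and redis; silently
                                capped by the kernel at net.core.somaxconn */

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, LISTEN_BACKLOG) < 0) {            /* per-socket backlog */
        perror("bind/listen");
        return EXIT_FAILURE;
    }

    for (;;) {                                       /* accept() drains the accept queue */
        int conn = accept(fd, NULL, NULL);
        if (conn < 0)
            continue;
        /* handle the request, then: */
        close(conn);
    }
}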

 

 http://www.cnblogs.com/ggjucheng/archive/2012/11/01/2750217.html

https://www.frozentux.net/ipsysctl-tutorial/chunkyhtml/tcpvariables.html

http://www.cnxct.com/something-about-phpfpm-s-backlog/

http://tech.uc.cn/?p=1790

http://blog.csdn.net/vingstar/article/details/10857993

http://blog.csdn.net/lizhitao/article/details/9204405

The year before last, we hit performance problems in production several times because of PHP-FPM's backlog parameter. I had long meant to study it properly, and even mentioned it in my 2013 year-end summary 《為何PHP5.5.6中fpm backlog Changed default listen() backlog to 65535》; but laziness won, I kept putting it off, and only today did I sit down to think it through and write it up.

PHP 5.5.6, released on 14 December 2013, carries this entry in its changelog:

FPM:
Changed default listen() backlog to 65535.

The change itself was made on 28 October; see the commit "increase backlog to the highest value everywhere":

It makes no sense to use -1 for *BSD (which is the highest value there)
and still use 128 for Linux.
Lets raise it right to up the limit and let the people lower it if they
think that 3.5Mb is too much for a process.
IMO this is better than silently dropping connections.

The patch author's view was that raising the backlog, even at the cost of the occasional timeout-style error, is better than silently dropping TCP SYN requests once the backlog is full.

When I recently started looking into the details of backlog, I noticed that FPM's default is no longer 65535; it is now 511. (That change never made it into the changelog, so don't blame yourself for missing it.) Going through the GitHub history, I found the commit, dated 22 July 2014: "Set FPM_BACKLOG_DEFAULT to 511".

It is too large for php-fpm to set the listen backlog to 65535.
It is realy NOT a good idea to clog the accept queue especially
when the client or nginx has a timeout for this connection.

Assume that the php-fpm qps is 5000. It will take 13s to completely
consume the 65535 backloged connections. The connection maybe already
have been closed cause of timeout of nginx or clients. So when we accept
the 65535th socket, we get a broken pipe.

Even worse, if hundreds of php-fpm processes get a closed connection
they are just wasting time and resouces to run a heavy task and finally
get error when writing to the closed connection(error: Broken Pipe).

The really max accept queue size will be backlog+1(ie, 512 here).
We take 511 which is the same as nginx and redis.

The rationale is that "a backlog of 65535 is too large and makes the nginx (or other client) in front time out", and the committer works through an example: if FPM handles 5000 requests per second, draining 65535 backlogged connections takes about 13 s. By then the fronting nginx (or other client) has long since timed out and closed the connection, so when FPM finally writes to that socket it finds the connection gone and gets "error: Broken Pipe". nginx, redis and apache all default their backlog to 511, hence the proposal to use 511 here as well. (I later realized that the patch author is shafreeck, who also wrote 《TCP三次握手之backlog》 in 360's 「基礎架構快報」.)

The meaning of the backlog argument is also spelled out in the Linux man page, listen – listen for connections on a socket:

#include <sys/types.h>   /* See NOTES */
#include <sys/socket.h>

int listen(int sockfd, int backlog);
Description

listen() marks the socket referred to by sockfd as a passive socket, that is, as a socket that will be used to accept incoming connection requests using accept(2).
The sockfd argument is a file descriptor that refers to a socket of type SOCK_STREAM or SOCK_SEQPACKET.

The backlog argument defines the maximum length to which the queue of pending connections for sockfd may grow. If a connection request arrives when the queue is full, the client may receive an error with an indication of ECONNREFUSED or, if the underlying protocol supports retransmission, the request may be ignored so that a later reattempt at connection succeeds.

Notes

To accept connections, the following steps are performed:
1. A socket is created with socket(2).
2. The socket is bound to a local address using bind(2), so that other sockets may be connect(2)ed to it.
3. A willingness to accept incoming connections and a queue limit for incoming connections are specified with listen().
4. Connections are accepted with accept(2).
POSIX.1-2001 does not require the inclusion of <sys/types.h>, and this header file is not required on Linux. However, some historical (BSD) implementations required this header file, and portable applications are probably wise to include it.

The behavior of the backlog argument on TCP sockets changed with Linux 2.2. Now it specifies the queue length for completely established sockets waiting to be accepted, instead of the number of incomplete connection requests. The maximum length of the queue for incomplete sockets can be set using /proc/sys/net/ipv4/tcp_max_syn_backlog. When syncookies are enabled there is no logical maximum length and this setting is ignored. See tcp(7) for more information.

If the backlog argument is greater than the value in /proc/sys/net/core/somaxconn, then it is silently truncated to that value; the default value in this file is 128. In kernels before 2.4.25, this limit was a hard coded value, SOMAXCONN, with the value 128.

So backlog is the size of the queue of connections that are fully established but not yet accept()ed (not the SYN queue). If this queue is full, the client may receive ECONNREFUSED, "Connection refused" as defined in the Linux header /usr/include/asm-generic/errno.h, or, if the underlying protocol supports retransmission, the request may simply be ignored so that a later retry can succeed. The relevant definitions:

//...
#define ENETDOWN    100 /* Network is down */
#define ENETUNREACH 101 /* Network is unreachable */
#define ENETRESET   102 /* Network dropped connection because of reset */
#define ECONNABORTED    103 /* Software caused connection abort */
#define ECONNRESET  104 /* Connection reset by peer */
#define ENOBUFS     105 /* No buffer space available */
#define EISCONN     106 /* Transport endpoint is already connected */
#define ENOTCONN    107 /* Transport endpoint is not connected */
#define ESHUTDOWN   108 /* Cannot send after transport endpoint shutdown */
#define ETOOMANYREFS    109 /* Too many references: cannot splice */
#define ETIMEDOUT   110 /* Connection timed out */
#define ECONNREFUSED    111 /* Connection refused */
#define EHOSTDOWN   112 /* Host is down */
#define EHOSTUNREACH    113 /* No route to host */
#define EALREADY    114 /* Operation already in progress */
#define EINPROGRESS 115 /* Operation now in progress */
//...

Before Linux 2.2, the backlog covered both the half-open and the fully-established queues. From 2.2 on, they are limited separately: one limit for the half-open queue of connections in SYN_RCVD, another for the queue of fully-established connections in ESTABLISHED waiting to be accepted. The half-open queue, bounded by /proc/sys/net/ipv4/tcp_max_syn_backlog, is exactly what the TCP SYN flood DoS attacks commonly seen on the Internet target; see 《TCP洪水攻擊(SYN Flood)的診斷和處理》.

When listen() is called, the kernel takes the smaller of the backlog argument and the system parameter /proc/sys/net/core/somaxconn as the size of the queue of connections that have completed the handshake, reached ESTABLISHED and are waiting for the server program to accept() them. Before kernel 2.4.25 this cap was hard-coded as the constant SOMAXCONN, with a default of 128; from 2.4.25 on it can be changed via /proc/sys/net/core/somaxconn (e.g. through /etc/sysctl.conf). I sketched the flow roughly as follows:

As the diagram shows, when the server receives a SYN from the client it places the request in the SYN queue and replies with SYN+ACK; once the client's ACK arrives, the connection is moved to the accept queue.
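A small sketch of that truncation rule: read net.core.somaxconn from /proc and take the minimum with the requested backlog. This only mirrors what listen(2) documents; the authoritative logic lives in the kernel itself.

#include <stdio.h>

/* Approximate the effective accept-queue limit the kernel will use for
 * listen(fd, requested): the smaller of the request and net.core.somaxconn. */
static int effective_backlog(int requested)
{
    int somaxconn = 128;   /* historical default, used if /proc cannot be read */
    FILE *f = fopen("/proc/sys/net/core/somaxconn", "r");
    if (f != NULL) {
        if (fscanf(f, "%d", &somaxconn) != 1)
            somaxconn = 128;
        fclose(f);
    }
    return requested < somaxconn ? requested : somaxconn;
}

int main(void)
{
    printf("listen(fd, 511) -> effective backlog %d\n", effective_backlog(511));
    return 0;
}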

With a rough idea of what the parameter means, I ran some tests and captured part of the traffic. First, confirm the system defaults:

root@vmware-cnxct:/home/cfc4n# cat /proc/sys/net/core/somaxconn
128
root@vmware-cnxct:/home/cfc4n# ss -lt
State      Recv-Q Send-Q         Local Address:Port                    Peer Address:Port
LISTEN     0      128                        *:ssh                           *:*
LISTEN     0      128                 0.0.0.0:9000                           *:*
LISTEN     0      128                       *:http                           *:*
LISTEN     0      128                       :::ssh                           :::*
LISTEN     0      128                      :::http                           :::*

In the FPM configuration, listen.backlog defaults to 511, yet the Send-Q column above shows 128, confirming that the effective backlog really is the minimum of /proc/sys/net/core/somaxconn and the listen() argument.

cfc4n@cnxct:~$ ab -n 10000 -c 300 http://172.16.218.128/3.php
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
 
Benchmarking 172.16.218.128 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
 
 
Server Software:        nginx/1.4.6
Server Hostname:        172.16.218.128
Server Port:            80
 
Document Path:          /3.php
Document Length:        55757 bytes
 
Concurrency Level:      300
Time taken for tests:   96.503 seconds
Complete requests:      10000
Failed requests:        7405
    (Connect: 0, Receive: 0, Length: 7405, Exceptions: 0)
Non-2xx responses:      271
Total transferred:      544236003 bytes
HTML transferred:       542499372 bytes
Requests per second:    103.62 [#/sec] (mean)
Time per request:       2895.097 [ms] (mean)
Time per request:       9.650 [ms] (mean, across all concurrent requests)
Transfer rate:          5507.38 [Kbytes/sec] received
 
Connection Times (ms)
               min  mean[+/-sd] median   max
Connect:        0    9  96.7      0    1147
Processing:     8 2147 6139.2    981   60363
Waiting:        8 2137 6140.1    970   60363
Total:          8 2156 6162.8    981   61179
 
Percentage of the requests served within a certain time (ms)
   50%    981
   66%   1074
   75%   1192
   80%   1283
   90%   2578
   95%   5352
   98%  13534
   99%  42346
  100%  61179 (longest request)

On the apache ab side, 271 of the responses were non-2xx. The nginx logs show the following:

root@vmware-cnxct:/var/log/nginx# cat grep.error.log |wc -l
271
root@vmware-cnxct:/var/log/nginx# cat grep.access.log |wc -l
10000
root@vmware-cnxct:/var/log/nginx# cat grep.access.log |awk '{print $9}' |sort|uniq -c
    9729 200
     186 502
      85 504
root@vmware-cnxct:/var/log/nginx# cat grep.error.log |awk '{print $8  $9  $10 $11}' |sort |uniq -c
     186 (111: Connection refused) while
      85 out (110: Connection timed

The nginx side confirms 10000 total requests in this run: 9729 HTTP 200 responses, 186 HTTP 502 responses and 85 HTTP 504 responses, i.e. 271 non-2xx responses (502s plus 504s). This matches the count in error.log, and it also matches the number of TCP RST packets in the capture.
[figure: tcp.connection.rst = 271]

In the nginx error log there are 186 entries with error 111, "Connection refused", corresponding exactly to the HTTP 502 responses, and 85 entries with error 110, "Connection timed out", corresponding exactly to the HTTP 504 responses. In the Linux errno.h definitions, 111 is ECONNREFUSED and 110 is ETIMEDOUT. The listen(2) man page likewise notes that a client that cannot connect to the server may receive ECONNREFUSED.
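For illustration, here is how those two errno values surface to a plain C client calling connect(); the address and port are placeholders standing in for the FPM listener used in this test.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try to connect and report the two failure modes seen in the nginx log:
 * ECONNREFUSED (111) and ETIMEDOUT (110). */
static void probe(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        if (errno == ECONNREFUSED)
            fprintf(stderr, "connect: 111 Connection refused\n");
        else if (errno == ETIMEDOUT)
            fprintf(stderr, "connect: 110 Connection timed out\n");
        else
            perror("connect");
    }
    close(fd);
}

/* e.g. probe("192.168.122.66", 9999); */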

The detailed errors in the nginx error log look like this:

// backlog too large: FPM cannot drain the queue, so time spent waiting in it exceeds nginx's upstream (proxy) timeout
54414#0: *24135 upstream timed out (110: Connection timed out) while connecting to upstream, client: 172.16.218.1, server: localhost, request: "GET /3.php HTTP/1.0" , upstream: "fastcgi://192.168.122.66:9999" , host: "172.16.218.128"
 
// backlog too small
[error] 54416#0: *38728 connect() failed (111: Connection refused) while connecting to upstream, client: 172.16.218.1, server: localhost, request: "GET /3.php HTTP/1.0" , upstream: "fastcgi://192.168.122.66:9999" , host: "172.16.218.128"

During the load test I captured the traffic with tcpdump. Together with the packet traces, it shows that with a backlog of 128, once the accept queue fills up, the three-way handshake still completes and the connection reaches ESTABLISHED; but when the client (nginx) then sends data to PHP-FPM and FPM, unable to keep up, never calls accept() to take the connection off the accept queue, no ACK comes back to nginx. nginx's TCP stack keeps retrying according to the usual retransmission rules... and the request is eventually reported as "111: Connection refused". When the SYN queue was full, the tcpdump capture looked like the following, with the SYN packet retransmitted over and over:
[figure: tcp-sync-queue-overflow]
For connections that had already been accept()ed off the accept queue and were being read, FPM itself was slow enough that nginx gave up waiting, terminated the fastcgi request and returned "110: Connection timed out". When FPM later finished and tried to write its response to the descriptor, it found that nginx had already closed the connection and reported "write broken pipe". When the accept queue was full, the tcpdump capture looked like the following, with the PSH,ACK packet retransmitted over and over. (Don't ask me about the details of TCP's RTO retransmission; it is complex and deep; see 《TCP的定時器系列 — 超時重傳定時器》.)
[figure: tcp-accept-queue-overflow]
I searched quite a bit for material to check these conclusions against, and later found 360's 「基礎架構快報」 article 《TCP三次握手之backlog》, which reaches the same conclusions.

On the question of how things behave once the accept queue is full: this morning IM鑫爺 pointed out a few mistakes in my write-up; thanks for the criticism and guidance. Let me describe the situation again in more detail, following the figure above.

  • NO.515: the client sends SYN to the server: "my seq is 0, payload length 0." (The seq here is not literally 0; wireshark displays relative sequence numbers for readability.)
  • NO.516: the server replies SYN,ACK: "my seq is 0, payload length 0, I have received everything before your seq 1." (Please send what comes next.)
  • NO.641: the client sends ACK: "my seq is 1, payload length 0, I have received everything before your seq 1."
  • NO.992: the client sends PSH: "my seq is 1, payload length 496, I have received everything before your seq 1."
  • ... some time passes (roughly 0.2 s here) ...
  • NO.4796: having received no ACK, the client starts a TCP retransmission of that packet: seq 1, payload 496, "I have received everything before your seq 1."
  • ... more waiting ...
  • NO.9669: still no ACK from the peer, so the client retransmits again: seq 1, payload 496, "I have received everything before your seq 1."
  • NO.13434: the server sends SYN,ACK to the client again; wireshark marks it as a TCP spurious retransmission: "my seq is 0, payload length 0, I have received everything before your seq 1." It is now about one second since its previous packet to the client (NO.516), and it has never received the ACK in NO.641. At this point the client, having received the server's SYN,ACK, treats the connection as ESTABLISHED, while the server, never having seen the client's ACK, is still in SYN_RCVD. (Thanks to IM鑫爺 for the correction.) It may also be that the accept queue is full, so the connection cannot yet be moved from the SYN queue to the accept queue; confirming that would require reading the kernel source.
  • NO.13467: the client sends a TCP DUP ACK to the server; it is effectively a resend of NO.641, except that the seq has advanced to account for the data already sent: "my seq is 497, payload length 0, I have received everything before your seq 1."
  • NO.16573: the client resends the data to the server again; the content is the same as NO.992, since none of the earlier attempts were acknowledged.
  • NO.25813: the client resends the data again (still the content of NO.992), still without an acknowledgment (see the green box in the figure below).
  • NO.29733: the server repeats what it did in NO.13434, for the same reason; see the note on NO.13434.
  • NO.29765: the client, as with NO.13467, resends its ACK to the server.
  • NO.44507: a repeat of NO.16573.
  • NO.79195: another repeat of NO.16573.
  • NO.79195: the server immediately replies with RST, ending the session.

Detailed packet contents are annotated further on. What matters is not only the order of the packets and the retransmitted sequence numbers, but also the TCP retransmission timeout (RTO). For the packets captured here, wireshark can show the interval of each retransmission, as in the figure below:
[figure: TCP ACK RTO retransmissions]
The number of RTO retransmissions is configurable; see /proc/sys/net/ipv4/tcp_retries1 (and tcp_retries2). The retransmission interval, and how it grows, is computed in a fairly involved way; see 《TCP/IP重傳超時--RTO》.

How large should the backlog be?
From the conclusions above, it depends on FPM's processing capacity. If the backlog is too large, FPM cannot keep up, nginx times out waiting and drops the connection, reporting 504 Gateway Timeout; and when FPM finishes and goes to write its response, it finds the TCP connection closed and reports "Broken pipe". If the backlog is too small, clients such as nginx cannot even get into FPM's accept queue and are answered with 502 Bad Gateway. So the value has to be derived from FPM's QPS; a reasonable rule of thumb is backlog = QPS, where QPS means the QPS of your real workload, not the QPS of an "echo hello world" benchmark that only flatters you. Also, if you set the backlog in FPM, remember to raise the operating system's net.core.somaxconn to at least that value. One more note: on Ubuntu Server 14.04, /proc/sys/net/core/somaxconn and /proc/sys/net/ipv4/tcp_max_syn_backlog both default to 128; it took me several test runs, while gathering data for this post, to notice that.
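As a back-of-the-envelope check of that sizing rule, the worst-case time a connection spends waiting in the accept queue is roughly

    t_wait ≈ backlog / QPS

With the patch author's figure of 5000 requests per second, a backlog of 65535 implies about 13 s of queueing, while a backlog of 511 keeps it around 0.1 s, comfortably below typical nginx upstream timeouts.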
For packets dropped during the test, whether SYNs that never entered the SYN queue or connections that overflowed the accept queue, the relevant counters can be inspected with netstat -s:

root@vmware-cnxct:/# netstat -s
TcpExt:
     //...
     91855 times the listen queue of a socket overflowed   // accept queue overflows
     102324 SYNs to LISTEN sockets dropped                  // SYNs dropped at the listening socket (never entered the SYN queue)
     444 packets directly queued to recvmsg prequeue.
     30408 bytes directly in process context from backlog
     //...
     TCPSackShiftFallback: 27
     TCPBacklogDrop: 2334       // drops from the per-socket receive backlog (not the accept queue)
     TCPTimeWaitOverflow: 229347
     TCPReqQFullDoCookies: 11591
     TCPRcvCoalesce: 29062
     //...

After going through the related material, digging into the technical details and running another round of tests, I have refreshed my memory of TCP and shored up the weak spots around the SYN queue and the accept queue, which was well worth it. This knowledge should be a big help later on when building raw TCP services and high-performance network programs; I hope to find more cases to test my grasp of it against.
