Nginx Error Logs and Optimization


1. Nginx configuration and kernel tuning to reach 100,000+ concurrent connections

2. A record of one Nginx 502 error page

(1) The error page

 

Error log:

2017/07/17 17:32:57 [error] 29071#0: *96 recv() failed (104: Connection reset by peer) while reading response header from upstream,
client: 101.226.125.118, server: live.baidu.com, request: "GET /live/CY00013 HTTP/1.1", upstream: "http://show.baidu.com/live/123.html", host: "live.baidu.com"

(2) Configuration and setup

The web server in this case was built with OpenResty; the proxy server (192.168.1.166) proxies the backend server (172.16.0.166). This configuration and setup had been working fine, but this afternoon requests to the proxy server started returning an Nginx 502 error. The configuration:

server {
    listen 80;
    #resolver 8.8.8.8;
    server_name live.baidu.com;

    location / {
        proxy_pass http://show.baidu.com;
        proxy_set_header Host show.baidu.com;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-PORT $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Mapping between domain names and IP addresses:

show.baidu.com (172.16.0.166)
live.baidu.com (192.168.1.166)

After much searching on Baidu and Google, everything pointed at the backend server as the cause, yet accessing the backend server show.baidu.com (172.16.0.166) directly worked fine. The error only appeared when requesting a page that uses Redis: the Redis server had dropped its connections, and after restarting the Redis server everything worked normally again.

(3) Summary: when the current server is acting as a proxy, a 502 error is generally caused by a problem on the backend (upstream) server.
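A quick way to confirm where the fault lies is to bypass the proxy and probe the backend directly. A sketch using the hosts and paths from this incident; where Redis actually runs is not stated above, so the Redis host below is a placeholder:

    # Request the failing page from the backend directly, with the expected Host header
    curl -v -H "Host: show.baidu.com" http://172.16.0.166/live/123.html

    # Check whether the Redis instance behind the failing page still answers
    redis-cli -h 172.16.0.166 ping    # placeholder host; expect: PONG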

3. Detailed notes on common errors in the nginx error.log

We regularly run into all kinds of nginx error log entries, and the cause can usually be worked out from the individual message, though not very systematically. The table in the next section is a fairly systematic write-up of nginx's error.log that I found online; I am recording it here so it is easy to look up later.

4. Nginx error log messages

Error log types

  • Type 1: upstream timed out
  • Type 2: connect() failed
  • Type 3: no live upstreams
  • Type 4: upstream prematurely closed connection
  • Type 5: 104: Connection reset by peer
  • Type 6: client intended to send too large body
  • Type 7: upstream sent no valid HTTP/1.0 header

 

Type 1: upstream timed out (110: Connection timed out) while connecting to upstream
Cause: nginx timed out while establishing the TCP connection to the upstream (proxy_connect_timeout; stock nginx defaults to 60s, though it is often tuned down to a few hundred milliseconds).
Fix: check whether the upstream can accept TCP connections normally.

Type 1: upstream timed out (110: Connection timed out) while reading response header from upstream
Cause: nginx timed out while reading the response from the upstream (proxy_read_timeout; stock nginx defaults to 60s). The timeout applies to the gap between two successive reads, not to the whole transfer, so the total read time can exceed the configured value.
Fix: investigate why the upstream responds so slowly.

Type 2: connect() failed (104: Connection reset by peer) while connecting to upstream
Cause: the connection was reset while nginx was establishing the TCP connection to the upstream.
Fix: check whether the upstream can accept TCP connections normally.

Type 2: connect() failed (111: Connection refused) while connecting to upstream
Cause: the upstream refused the TCP connection.
Fix: check whether the upstream can accept TCP connections normally.

Type 3: no live upstreams while connecting to upstream
Cause: when nginx tried to forward the request, every server in the upstream group was marked down.
Fix: investigate why nginx's health checks on the upstream are failing.

Type 4: upstream prematurely closed connection
Cause: after the TCP connection was established, the upstream forcibly closed it while nginx was sending the request or reading the response.
Fix: check whether the upstream program is misbehaving and whether it can still handle HTTP requests normally.

Type 5: recv() failed (104: Connection reset by peer) while reading response header from upstream
Cause: the connection was reset by the upstream while nginx was reading the response.
Fix: check the upstream application and the state of its TCP connections for anomalies.

Type 6: client intended to send too large body
Cause: the client tried to send a request body larger than allowed; nginx's default limit is 1m, and beyond it the client receives an HTTP 413 error.
Fix: 1. reduce the request body sent by the client; or 2. raise client_max_body_size in the nginx configuration for the affected domain.

Type 7: upstream sent no valid HTTP/1.0 header
Cause: nginx could not parse the status line returned by the upstream.
Fix: check that the upstream actually returns valid HTTP.
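For reference, a minimal proxy sketch showing the directives behind Types 1 to 3; the upstream name and all values are illustrative, not recommendations:

    upstream backend_pool {
        # Type 3: a server is marked down after max_fails failures within fail_timeout
        server 172.16.0.166:80 max_fails=3 fail_timeout=30s;
    }

    server {
        location / {
            proxy_pass http://backend_pool;
            # Type 1: TCP connect timeout to the upstream
            proxy_connect_timeout 5s;
            # Type 1: maximum gap between two successive reads of the response
            proxy_read_timeout 60s;
            # Which failures trigger a retry on the next upstream server
            proxy_next_upstream error timeout http_502;
        }
    }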

===================== Errors encountered with OpenResty

Error: an upstream response is buffered to a temporary file

This warning means the upstream's response did not fit into the configured proxy buffers, so nginx spooled it to disk. Enlarging the relevant buffers makes it go away:

# Maximum allowed size of a client request body
client_max_body_size 50m;
# Buffer size for reading the client request body
client_body_buffer_size 256k;
# Timeout for the proxy to establish a connection with the backend
proxy_connect_timeout 30;
# Timeout between two successive reads of the backend's response
proxy_read_timeout 60;
# Timeout between two successive writes when sending the request to the backend
proxy_send_timeout 30;
# Buffer for the first part of the response (the response headers)
proxy_buffer_size 64k;
# Number and size of buffers for reading the response
proxy_buffers 4 64k;
# Upper limit on buffers busy sending to the client; commonly twice proxy_buffer_size
proxy_busy_buffers_size 128k;
# Maximum amount written to a temporary file in one operation
proxy_temp_file_write_size 256k;
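If the goal is to stop nginx from spooling responses to disk entirely, rather than to enlarge the buffers, a sketch of the alternative (the trade-off: a slow client then occupies the upstream connection for longer):

    # Never fall back to temporary files; respond from the in-memory buffers only
    proxy_max_temp_file_size 0;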

 

====================== Problems with Nginx + PHP5

Fixing the Nginx 502 error: upstream sent too big header while reading response header from upstream

Solution:

Add the following to the http block of the Nginx configuration file:

proxy_buffer_size 128k;
proxy_buffers 32 32k;
proxy_busy_buffers_size 128k;

After restarting Nginx the error persisted, so additionally add the following to the PHP section of the vhost configuration:

fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;

Restart the Nginx server and the error is gone.

Error 2

[error] 21501#0: *24372 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 12.232.112,
request: "GET http://clientapi.ipip.net/echo.php?info=1234567890 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php7.0.9-fpm.sock:",
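The note records only the log line, so for completeness: "Primary script unknown" generally means PHP-FPM could not find the file named by the SCRIPT_FILENAME parameter (wrong root, wrong path, or permissions). A minimal sketch of a working PHP location block; the root path is an assumption, only the socket path comes from the log above:

    location ~ \.php$ {
        # root must point at the directory that actually contains echo.php (assumed path)
        root /var/www/html;
        fastcgi_pass unix:/var/run/php7.0.9-fpm.sock;
        fastcgi_index index.php;
        # Without a correct SCRIPT_FILENAME, PHP-FPM answers "Primary script unknown"
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }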

Repost: forwarding problems when nginx listens on a port other than 80

Abstract: Recently, in order to update the backend business system without interrupting online users, we set up an nginx + tomcat cluster and used the memcached-session-manager component for centralized session management. We really did hit all kinds of pitfalls; as time permits I will dig them out one by one and write them down before they are forgotten. ----- Part 1: forwarding problems when nginx listens on a port other than 80

This was the first problem we found; since I was not especially familiar with nginx at the time, it is an entry-level one:

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://192.168.1.100:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The above is the default case with nginx listening on port 80. The company system is an intranet application and users had already bookmarked the link; the bookmarked address pointed at the old single Tomcat on port 8080. To avoid disturbing their habits, we decided to keep nginx listening on 8080 so the externally visible port stayed the same.

So I naturally changed nginx's port to 8080 and Tomcat's port to 8081. The modified nginx configuration:

server {
    listen 8080;
    server_name localhost;

    location / {
        proxy_pass http://192.168.1.100:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

After the change, restarting and testing revealed the problem:

After visiting http://localhost:8080, the browser automatically redirected to http://localhost.

Why is that?

It turns out that when nginx listens on a port other than the default 80, the Host header set from $host carries no port number, so request.getServerPort() in the backend Tomcat cannot obtain the real port and still reports 80. When the application calls response.sendRedirect(), the client therefore receives a redirect URL built against the wrong port.

So the correct configuration is:

server {
    listen 8080;
    server_name localhost;

    location / {
        proxy_pass http://192.168.1.100:8081;
        proxy_set_header Host $host:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
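Hard-coding the port works, but a more generic sketch is to forward the Host header exactly as the client sent it, since $http_host already includes any non-default port:

    proxy_set_header Host $http_host;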

 

Build error:

make[2]: Entering directory `/usr/include/openssl'
make[2]: *** No rule to make target `clean'. Stop.
make[2]: Leaving directory `/usr/include/openssl'
make[1]: *** [/usr/include/openssl//openssl/include/openssl/ssl.h] Error 2
make[1]: Leaving directory `/jowei/nginx-0.8.9'
make: *** [build] Error 2
The fix for this problem: in your configure arguments, point --with-openssl=/... at the path of the OpenSSL source package, not at the path OpenSSL was installed to!

--with-pcre: Nginx's rewrite functionality needs the pcre library, and unlike the usual convention, this build option does not point at pcre's install directory but at the directory of the pcre source code.
In other words, if pcre's lib and include files can already be found on your system paths, you can leave this option out; if pcre is not installed, specify it and Nginx will compile pcre in from the directory you point it at.
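A sketch of a configure invocation that follows both rules above; every path is a hypothetical example of a source tree, not a value from this article:

    # --with-openssl and --with-pcre take *source* directories, not install prefixes
    ./configure --prefix=/opt/nginx \
        --with-http_ssl_module \
        --with-openssl=/usr/local/src/openssl-1.0.2u \
        --with-pcre=/usr/local/src/pcre-8.44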

======== 20170516 Problems encountered with live video streaming =========================

Error 1

2017/05/16 20:31:00 [alert] 194667#0: *842850 socket() failed (24: Too many open files) while connecting to upstream, 
client: 122.234.65.111, server: 333.111.com, request: "GET /live/9410.ts HTTP/1.1",
upstream: "http://127.0.1.4:80/live/10.ts",
host: "2.24.87.6:8081",
referrer: "http://y.com/live/12"

Error 2

2017/05/16 20:31:00 [crit] 194667#0: *842850 open() "/opt/openresty/nginx/html/50x.html" failed (24: Too many open files), 
client: 122.234.65.11, server: 333.11.com, request: "GET /live/10.ts HTTP/1.1",
upstream: "http://127.0.1.4:80/live/10.ts", 
host: "2.24.87.6:8081",
referrer: "http://y.com/live/12"

Error 3

2017/05/16 20:31:12 [crit] 194667#0: *846706 open() "/opt/openresty/nginx/proxy_temp/4/19/0000158194" failed (24: Too many open files) while reading upstream, 

The cause is that Linux/Unix imposes soft and hard limits on file handles and the number of open files; the limits can be inspected with the 'ulimit' command:

ulimit -Hn
ulimit -Sn
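Raising the OS limits alone may not be enough for nginx itself: the workers inherit the limit of whoever starts them, and nginx can also raise it on its own. A sketch of the relevant nginx.conf directives, with illustrative values:

    # Open-file limit for worker processes (roughly worker_connections x 2, plus headroom)
    worker_rlimit_nofile 65535;

    events {
        # Each proxied request can consume two descriptors: client side and upstream side
        worker_connections 10240;
    }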

1. Alibaba Cloud configuration files

(1) /etc/security/limits.conf

# End of file
root soft nofile 65535
root hard nofile 65535
*    soft nofile 65535
*    hard nofile 65535

(2) /etc/sysctl.conf

【1】 1 core, 512 MB RAM

vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.conf.lo.arp_announce = 2

【2】 4 cores, 4 GB RAM

vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
#net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_syncookies = 1
# add by sss
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1
# caution: tcp_tw_recycle breaks clients behind NAT and was removed in Linux 4.12
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
#net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 2
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.conf.lo.arp_announce = 2

(3) ulimit -a

www@iZ23o0b38gsZ:~$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31446
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 31446
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

2. Company server configuration

(1) /etc/security/limits.conf is empty

Checking the local server:

www@ubuntu5:/opt/openresty/nginx/logs$ ulimit -Hn
4096
www@ubuntu5:/opt/openresty/nginx/logs$ ulimit -Sn
1024

(2) /etc/sysctl.conf is also empty

 

(3) ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 126277
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 126277
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

3. Solutions

Reference 1: http://www.drupal001.com/2013/07/nginx-open-files-error/

Reference 2: Fixing Nginx 500 errors (too many open files / connections)

Reference 3: ulimit -a explained

Reference 4: Changing the open-files limit on Linux

ulimit -a  

Alibaba Cloud, 4 cores / 4 GB:

www@iZ23o0b38gsZ:~$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 31446
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 31446
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Alibaba Cloud, 1 core / 512 MB:

root@iZ23nl9zsjyZ:~# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3738
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 3738
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Compare the numbers above and work out your own tuning.

[1] vim /etc/security/limits.conf and add the following:

# End of file
root soft nofile 65535
root hard nofile 65535
*    soft nofile 65535
*    hard nofile 65535

[2] vim /etc/sysctl.conf and add the following:

vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
#net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_syncookies = 1
# add by sss
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1
# caution: tcp_tw_recycle breaks clients behind NAT and was removed in Linux 4.12
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
#net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 2
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv4.conf.lo.arp_announce = 2
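The new kernel parameters only take effect after they are reloaded:

    sudo sysctl -p    # re-read /etc/sysctl.conf and apply the values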

[3] This change only applies to login shells (it takes effect when /etc/profile is sourced):

echo "ulimit -n 65535" >> /etc/profile    # run as root; with sudo the redirection would still run unprivileged
source /etc/profile                       # reload the modified profile

Checking the system-wide file-handle limit

The current system-wide maximum number of file handles; this command is for viewing only, it is not how you change the setting:

cat /proc/sys/fs/file-max
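To see current usage against that limit, /proc also exposes the number of handles in use:

    cat /proc/sys/fs/file-nr    # three fields: allocated, unused, maximum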


Checking the number of files opened by processes

To see the total number of open files across all processes, run: lsof | wc -l
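For a single process (one nginx worker, say) it is cheaper to count its descriptor entries in /proc than to run lsof over everything; <pid> is a placeholder:

    ls /proc/<pid>/fd | wc -l    # open file descriptors of that process
    cat /proc/<pid>/limits       # the limits actually applied to that process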


