A Kubernetes failure caused by a mis-configured memory limit


Today, after adjusting the memory resources of a pod, it stayed stuck in the ContainerCreating state. Running describe against the Pod showed the following.

[root@master-01 ~]# kubectl describe pod busybox-deploy-b9b6d4ff9-vwfnq
...
Events:
  Type     Reason                  Age                 From                      Message
  ----     ------                  ----                ----                      -------
  Normal   Scheduled               40s                 default-scheduler         Successfully assigned default/busybox-deploy-b9b6d4ff9-vwfnq to 192.168.209.132
  Warning  FailedCreatePodSandBox  28s (x4 over 37s)   kubelet, 192.168.209.132  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-b9b6d4ff9-vwfnq": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:297: getting the final child's pid from pipe caused \"read init-p: connection reset by peer\"": unknown
  Warning  FailedCreatePodSandBox  26s (x9 over 38s)   kubelet, 192.168.209.132  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-b9b6d4ff9-vwfnq": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
  Normal   SandboxChanged          26s (x12 over 37s)  kubelet, 192.168.209.132  Pod sandbox changed, it will be killed and re-created.

The information that can be extracted from the Events above is very limited:

connection reset by peer: the connection was reset.
Pod sandbox changed, it will be killed and re-created: the Pod environment bootstrapped by the pause container has changed, so the pause sandbox for the Pod is being re-created.
copying bootstrap data to pipe caused "write init-p: broken pipe": unknown: according to what turns up on Google, an incompatibility between Docker and the kernel; see https://github.com/docker/for-linux/issues/595.

These errors alone do not pinpoint the root cause; they only tell us roughly that creating the sandbox keeps failing. The next step is to look at the kubelet logs.
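The kubelet entries below were pulled from the node's journal; on a systemd-managed node something like the following works (a sketch — the unit name kubelet and the time window are assumptions about this particular setup):

journalctl -u kubelet -f                           # follow the kubelet log live
journalctl -u kubelet --since "16:22" --no-pager   # or replay the window around the failure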

4月 01 16:22:06 master-01 kubelet[69210]: I0401 16:22:06.994710 69210 operation_generator.go:567] MountVolume.SetUp succeeded for volume "default-token-tb5bg" (UniqueName: "kubernetes.io/secret/968128e3-5456-11e9-8874-000c29721372-default-token-tb5bg") pod "busybox-deploy-bcd9cc57f-k2pzt" (UID: "968128e3-5456-11e9-8874-000c29721372")
4月 01 16:22:07 master-01 kubelet[69210]: I0401 16:22:07.009632 69210 handler.go:325] Added event &{/kubepods/pod968128e3-5456-11e9-8874-000c29721372/88060715d0bfa605b05ba1ec0dfc637b112f328be76e38537f31a778c52d54ca 2019-04-01 16:22:07.381370221 +0800 CST m=+9485.400866715 oom {<nil>}}
4月 01 16:22:07 master-01 kubelet[69210]: I0401 16:22:07.009673 69210 manager.go:1242] Created an OOM event in container "/kubepods/pod968128e3-5456-11e9-8874-000c29721372/88060715d0bfa605b05ba1ec0dfc637b112f328be76e38537f31a778c52d54ca" at 2019-04-01 16:22:07.381370221 +0800 CST m=+9485.400866715
4月 01 16:22:07 master-01 kubelet[69210]: I0401 16:22:07.009682 69210 handler.go:325] Added event &{/kubepods/pod968128e3-5456-11e9-8874-000c29721372 2019-04-01 16:22:07.381370221 +0800 CST m=+9485.400866715 oomKill {0xc000807480}}
4月 01 16:22:07 master-01 kubelet[69210]: E0401 16:22:07.087320 69210 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-bcd9cc57f-k2pzt": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
4月 01 16:22:07 master-01 kubelet[69210]: E0401 16:22:07.087372 69210 kuberuntime_sandbox.go:65] CreatePodSandbox for pod "busybox-deploy-bcd9cc57f-k2pzt_default(968128e3-5456-11e9-8874-000c29721372)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-bcd9cc57f-k2pzt": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
4月 01 16:22:07 master-01 kubelet[69210]: E0401 16:22:07.087389 69210 kuberuntime_manager.go:662] createPodSandbox for pod "busybox-deploy-bcd9cc57f-k2pzt_default(968128e3-5456-11e9-8874-000c29721372)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "busybox-deploy-bcd9cc57f-k2pzt": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:293: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown

The kubelet log tells much the same story as describe; tailing it, though, makes the constant sandbox re-creation far more obvious, and the OCI runtime error is clearly visible. The only place left to look is the Docker daemon's own log.
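The dockerd entries below come from the Docker daemon's journal on the same node; assuming Docker also runs as a systemd unit named docker, something like this narrows the output down to the error lines (a sketch, not the exact command used at the time):

journalctl -u docker --since "16:43" --no-pager | grep "level=error"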

4月 01 16:43:03 master-01 dockerd[68740]: time="2019-04-01T16:43:03.379216541+08:00" level=error msg="Handler for POST /v1.38/containers/33e5014e5a63ebd3a07d0f238fe5ef4dd23c1a28490ef59f16a1f6e3efd5c050/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:293: copying bootstrap data to pipe caused \\\"write init-p: broken pipe\\\"\": unknown"
4月 01 16:43:04 master-01 dockerd[68740]: time="2019-04-01T16:43:04.421003016+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:04 master-01 dockerd[68740]: time="2019-04-01T16:43:04.421724442+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:04 master-01 dockerd[68740]: time="2019-04-01T16:43:04.459995603+08:00" level=error msg="f51e1feaed0f43d47447802bee4b356a0ce3866263dd3573d1dae88a84e77d91 cleanup: failed to delete container from containerd: no such container"
4月 01 16:43:04 master-01 dockerd[68740]: time="2019-04-01T16:43:04.460063726+08:00" level=error msg="Handler for POST /v1.38/containers/f51e1feaed0f43d47447802bee4b356a0ce3866263dd3573d1dae88a84e77d91/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"
4月 01 16:43:05 master-01 dockerd[68740]: time="2019-04-01T16:43:05.508063189+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:05 master-01 dockerd[68740]: time="2019-04-01T16:43:05.508756604+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:05 master-01 dockerd[68740]: time="2019-04-01T16:43:05.556766523+08:00" level=error msg="a25dcbe45d6cde5e8bcf141c8a3e88accb1a2660390f2effa1ca8d6364c0d504 cleanup: failed to delete container from containerd: no such container"
4月 01 16:43:05 master-01 dockerd[68740]: time="2019-04-01T16:43:05.556842037+08:00" level=error msg="Handler for POST /v1.38/containers/a25dcbe45d6cde5e8bcf141c8a3e88accb1a2660390f2effa1ca8d6364c0d504/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"
4月 01 16:43:06 master-01 dockerd[68740]: time="2019-04-01T16:43:06.579456571+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:06 master-01 dockerd[68740]: time="2019-04-01T16:43:06.579428794+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:06 master-01 dockerd[68740]: time="2019-04-01T16:43:06.618244855+08:00" level=error msg="df537cafea28982180b226574bc0534416042001653829017f514576efd96a3c cleanup: failed to delete container from containerd: no such container"
4月 01 16:43:06 master-01 dockerd[68740]: time="2019-04-01T16:43:06.618342042+08:00" level=error msg="Handler for POST /v1.38/containers/df537cafea28982180b226574bc0534416042001653829017f514576efd96a3c/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"
4月 01 16:43:07 master-01 dockerd[68740]: time="2019-04-01T16:43:07.721668002+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:07 master-01 dockerd[68740]: time="2019-04-01T16:43:07.724387734+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:07 master-01 dockerd[68740]: time="2019-04-01T16:43:07.768836577+08:00" level=error msg="de574260f24893b4385ce0cb4e530aadc153fac4b8549b85ba6da47070f5091a cleanup: failed to delete container from containerd: no such container"
4月 01 16:43:07 master-01 dockerd[68740]: time="2019-04-01T16:43:07.768881236+08:00" level=error msg="Handler for POST /v1.38/containers/de574260f24893b4385ce0cb4e530aadc153fac4b8549b85ba6da47070f5091a/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"read init-p: connection reset by peer\\\"\": unknown"
4月 01 16:43:08 master-01 dockerd[68740]: time="2019-04-01T16:43:08.747728984+08:00" level=error msg="stream copy error: reading from a closed fifo"
4月 01 16:43:08 master-01 dockerd[68740]: time="2019-04-01T16:43:08.747780229+08:00" level=error msg="stream copy error: reading from a closed fifo"

The Docker log adds little beyond the connection reset by peer we have already seen many times. What the Docker log and the kubelet log have in common is that the kubelet keeps re-creating the sandbox over and over.

Running docker ps -a shows a large number of pause containers stuck in the Created state (a quick way to count and clean them up is sketched after the listing below).

CONTAINER ID   IMAGE                                             COMMAND   CREATED                  STATUS    PORTS   NAMES
fdfec614a029   hub.test.tech/library/pod-infrastructure:latest   "/pod"    Less than a second ago   Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3660
56be0890536c   hub.test.tech/library/pod-infrastructure:latest   "/pod"    1 second ago             Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3659
cf8cb3311a60   hub.test.tech/library/pod-infrastructure:latest   "/pod"    2 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3658
bedd3580b8a2   hub.test.tech/library/pod-infrastructure:latest   "/pod"    4 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3657
11969320abcf   hub.test.tech/library/pod-infrastructure:latest   "/pod"    5 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3656
c7facaef9671   hub.test.tech/library/pod-infrastructure:latest   "/pod"    6 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3655
38a23131e0e2   hub.test.tech/library/pod-infrastructure:latest   "/pod"    7 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3654
1951dd1736ee   hub.test.tech/library/pod-infrastructure:latest   "/pod"    8 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3653
dc3b3037c100   hub.test.tech/library/pod-infrastructure:latest   "/pod"    9 seconds ago            Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3652
266f48961a4e   hub.test.tech/library/pod-infrastructure:latest   "/pod"    10 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3651
c56159f99738   hub.test.tech/library/pod-infrastructure:latest   "/pod"    11 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3650
35fb00ce7522   hub.test.tech/library/pod-infrastructure:latest   "/pod"    12 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3649
4445f2ce25e6   hub.test.tech/library/pod-infrastructure:latest   "/pod"    13 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3648
890e1a7dfd53   hub.test.tech/library/pod-infrastructure:latest   "/pod"    14 seconds ago           Created           k8s_POD_busybox-deploy-bcd9cc57f-k2pzt_default_968128e3-5456-11e9-8874-000c29721372_3647
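If the node keeps accumulating these half-created sandboxes, they can be counted and, if needed, cleaned up with something along these lines (a sketch; the k8s_POD name filter matches the sandbox naming convention the kubelet uses with Docker, and the kubelet simply re-creates the sandbox as long as the Pod object still exists):

docker ps -a --filter status=created --filter name=k8s_POD -q | wc -l                # how many are piling up
docker ps -a --filter status=created --filter name=k8s_POD -q | xargs -r docker rm   # remove them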

Checking dmesg -T turns up a large number of oom-killer entries, so the initial judgment is that memory is being exhausted and the kernel is actively killing processes.

[一 4月 1 17:22:49 2019] Task in /kubepods/pod968128e3-5456-11e9-8874-000c29721372/0c04b9baa40f781fbeb4d53e0ca67101e6e6d588ecdcbec5874e753f1a42e457 killed as a result of limit of /kubepods/pod968128e3-5456-11e9-8874-000c29721372
[一 4月 1 17:22:49 2019] memory: usage 0kB, limit 0kB, failcnt 48032
[一 4月 1 17:22:49 2019] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
[一 4月 1 17:22:49 2019] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
[一 4月 1 17:22:49 2019] Memory cgroup stats for /kubepods/pod968128e3-5456-11e9-8874-000c29721372: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[一 4月 1 17:22:49 2019] Memory cgroup stats for /kubepods/pod968128e3-5456-11e9-8874-000c29721372/0c04b9baa40f781fbeb4d53e0ca67101e6e6d588ecdcbec5874e753f1a42e457: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[一 4月 1 17:22:49 2019] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[一 4月 1 17:22:49 2019] [121875] 0 121875 5749 1418 16 0 -999 5
[一 4月 1 17:22:49 2019] Memory cgroup out of memory: Kill process 121875 (5) score 0 or sacrifice child
[一 4月 1 17:22:49 2019] Killed process 121875 (5) total-vm:22996kB, anon-rss:3028kB, file-rss:432kB, shmem-rss:2212kB
[一 4月 1 17:22:49 2019] docker0: port 4(veth0a5cbbd) entered disabled state
[一 4月 1 17:22:49 2019] device veth0a5cbbd left promiscuous mode
[一 4月 1 17:22:49 2019] docker0: port 4(veth0a5cbbd) entered disabled state
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered blocking state
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered disabled state
[一 4月 1 17:22:50 2019] device veth3bbadf8 entered promiscuous mode
[一 4月 1 17:22:50 2019] IPv6: ADDRCONF(NETDEV_UP): veth3bbadf8: link is not ready
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered blocking state
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered forwarding state
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered disabled state
[一 4月 1 17:22:50 2019] SLUB: Unable to allocate memory on node -1 (gfp=0xd0)
[一 4月 1 17:22:50 2019] cache: kmalloc-4096(19:1db6c350afbbf07ad909f2c9f896a69333e182d83d513fc283d728cd7d6cf0b5), object size: 4096, buffer size: 4096, default order: 3, min order: 0
[一 4月 1 17:22:50 2019] node 0: slabs: 0, objs: 0, free: 0
[一 4月 1 17:22:50 2019] exe invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=-999
[一 4月 1 17:22:50 2019] exe cpuset=1db6c350afbbf07ad909f2c9f896a69333e182d83d513fc283d728cd7d6cf0b5 mems_allowed=0
[一 4月 1 17:22:50 2019] CPU: 1 PID: 121918 Comm: exe Kdump: loaded Tainted: G ------------ T 3.10.0-957.el7.x86_64 #1
[一 4月 1 17:22:50 2019] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/13/2018
[一 4月 1 17:22:50 2019] Call Trace:
[一 4月 1 17:22:50 2019] [<ffffffff94f61dc1>] dump_stack+0x19/0x1b
[一 4月 1 17:22:50 2019] [<ffffffff94f5c7ea>] dump_header+0x90/0x229
[一 4月 1 17:22:50 2019] [<ffffffff949b9dc6>] ? find_lock_task_mm+0x56/0xc0
[一 4月 1 17:22:50 2019] [<ffffffff94a31118>] ? try_get_mem_cgroup_from_mm+0x28/0x60
[一 4月 1 17:22:50 2019] [<ffffffff949ba274>] oom_kill_process+0x254/0x3d0
[一 4月 1 17:22:50 2019] [<ffffffff94a34f16>] mem_cgroup_oom_synchronize+0x546/0x570
[一 4月 1 17:22:50 2019] [<ffffffff94a34390>] ? mem_cgroup_charge_common+0xc0/0xc0
[一 4月 1 17:22:50 2019] [<ffffffff949bab04>] pagefault_out_of_memory+0x14/0x90
[一 4月 1 17:22:50 2019] [<ffffffff94f5acf2>] mm_fault_error+0x6a/0x157
[一 4月 1 17:22:50 2019] [<ffffffff94f6f7a8>] __do_page_fault+0x3c8/0x500
[一 4月 1 17:22:50 2019] [<ffffffff94f6f915>] do_page_fault+0x35/0x90
[一 4月 1 17:22:50 2019] [<ffffffff94f6b758>] page_fault+0x28/0x30
[一 4月 1 17:22:50 2019] Task in /kubepods/pod968128e3-5456-11e9-8874-000c29721372/1db6c350afbbf07ad909f2c9f896a69333e182d83d513fc283d728cd7d6cf0b5 killed as a result of limit of /kubepods/pod968128e3-5456-11e9-8874-000c29721372
[一 4月 1 17:22:50 2019] memory: usage 0kB, limit 0kB, failcnt 48062
[一 4月 1 17:22:50 2019] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
[一 4月 1 17:22:50 2019] kmem: usage 0kB, limit 9007199254740988kB, failcnt 0
[一 4月 1 17:22:50 2019] Memory cgroup stats for /kubepods/pod968128e3-5456-11e9-8874-000c29721372: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[一 4月 1 17:22:50 2019] Memory cgroup stats for /kubepods/pod968128e3-5456-11e9-8874-000c29721372/1db6c350afbbf07ad909f2c9f896a69333e182d83d513fc283d728cd7d6cf0b5: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB swap:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[一 4月 1 17:22:50 2019] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
[一 4月 1 17:22:50 2019] [121918] 0 121918 5749 1481 15 0 -999 exe
[一 4月 1 17:22:50 2019] Memory cgroup out of memory: Kill process 121918 (exe) score 0 or sacrifice child
[一 4月 1 17:22:50 2019] Killed process 121918 (exe) total-vm:22996kB, anon-rss:3052kB, file-rss:2872kB, shmem-rss:0kB
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered disabled state
[一 4月 1 17:22:50 2019] device veth3bbadf8 left promiscuous mode
[一 4月 1 17:22:50 2019] docker0: port 4(veth3bbadf8) entered disabled state
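Rather than scrolling through the whole ring buffer, the OOM-related records can be filtered out directly (a sketch; the grep pattern is just a convenience):

dmesg -T | grep -iE "oom-killer|out of memory|killed process"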

This situation is not something you run into often. Generally speaking there are two types of OOM kill:

1. A process inside the pod exceeds the value specified by the pod's Limit and gets OOM killed; in this case the pod's exit Reason shows OOMKilled.

2. A process inside the pod sets its own memory ceiling, for example a JVM limited to 2G inside a pod with a 6G Limit; if a program bug pushes usage past 2G, an OOM kill is triggered as well.

The difference between the two: the first shows up directly in the pod's Events; the second leaves no trace in the Events, but the host's dmesg contains the invoked oom-killer entries (a quick way to check both places is sketched below).
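To tell which of the two cases you are looking at, both places can be checked quickly. A sketch, using the pod from this incident; the jsonpath only returns something once a container has actually been terminated at least once:

# Case 1 leaves a trace in the pod status and Events
kubectl get pod busybox-deploy-b9b6d4ff9-vwfnq -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'
kubectl describe pod busybox-deploy-b9b6d4ff9-vwfnq | grep -A 3 "Last State"

# Case 2 only shows up on the node itself
dmesg -T | grep -i "invoked oom-killer"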

This incident looked like the second case, so the next step was to describe the pod again and check its Limit.

[root@master-01 ~]# kubectl  describe pod busybox-deploy-b9b6d4ff9-vwfnq 

And there was the root cause: the memory Limit. The memory Limit was shown as 300m, because 300m is exactly what was written when the resource was adjusted. In Kubernetes resource quantities, however, the m suffix means milli and is only meaningful for CPU (millicores); memory should use Mi (mebibytes). A memory limit of 300m therefore means 0.3 bytes, i.e. an effectively zero cgroup limit, which matches the limit 0kB seen in dmesg.
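For reference, this is roughly what the difference looks like in the deployment's resources block (a sketch; the surrounding manifest is not shown in the original, so everything except the unit semantics is an assumption):

resources:
  limits:
    cpu: 300m        # m = millicores, fine for CPU: 0.3 of a core
    # memory: 300m   # wrong: m still means milli here, i.e. 0.3 bytes -> a near-zero cgroup limit
    memory: 300Mi    # correct: Mi = mebibytes

The same fix could also be applied without editing the manifest by hand, e.g. kubectl set resources deployment busybox-deploy --limits=memory=300Mi (the deployment name here is inferred from the pod name and may differ in your cluster).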

In theory, going by past experience, this situation (actual memory usage exceeding the memory Limit) should fall under the first case: the pod ends up Terminated with OOMKilled shown in its Events. In practice, though, there was no OOM kill entry in the Events at all, and the pod was stuck in ContainerCreating.

A pod that ends up Terminated with OOMKilled has already been bootstrapped and created successfully; the out-of-memory condition happens afterwards, while it is running.

A pod stuck in ContainerCreating because the system invoked oom-killer never got created properly in the first place: the pod's pause container is killed by the cgroup memory limit before the sandbox can even finish bootstrapping.
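This can be confirmed on the node by reading the pod's memory cgroup directly. A sketch, assuming cgroup v1 with the memory controller mounted at /sys/fs/cgroup/memory (as on this CentOS 7 kernel); the pod path is the one that appears in the dmesg output above:

cat /sys/fs/cgroup/memory/kubepods/pod968128e3-5456-11e9-8874-000c29721372/memory.limit_in_bytes
# with the limit written as "300m" this is effectively zero, so even the pause
# container is killed the moment it touches any memory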

Note: when adjusting resources, make sure the units are written correctly, otherwise you may run into very puzzling problems.

Previous articles

1. Kubernetes cluster setup: system initialization and configuration

2. Kubernetes cluster setup: building a private Harbor registry in an enterprise environment

3. Kubernetes cluster setup: Etcd cluster configuration

4. Kubernetes cluster setup: deploying the CNI Flanneld plugin

5. Kubernetes cluster setup: Master configuration

6. Kubernetes series: an introduction to CoreDNS and the Dashboard

7. Kubernetes series: hands-on monitoring with Metrics-server

If you found this useful, please share and like it so more people can learn from it. Your support is greatly appreciated!


