OpenVINO's "semantic-segmentation-adas" model can segment the sky fairly accurately; OpenCV functions such as seamlessClone make the sky replacement seamless; and Django provides the network-facing deployment. Combining the three, we implement and deploy a "sky replacement" service.
The service is now live: open http://81.68.242.86:8000/upload to try it, from either a phone or a PC. The interface is crude and the speed is slow, but it is basically usable. Overall, the model bundled with OpenVINO was originally built for road-scene segmentation rather than sky segmentation specifically, so it produces reasonable results but is sometimes inaccurate, and the post-processing is still rough in places. The most important point of this article, however, is proving that the toolchain works end to end and charting a workable path, and that is where the value lies.
Service-oriented deployment of OpenVINO Model Server, step 1 (OpenVINO™ Model Server Quickstart)
https://www.cnblogs.com/jsxyhelu/p/13796161.html
Service-oriented deployment of OpenVINO Model Server, step 2 (sky segmentation model)
https://www.cnblogs.com/jsxyhelu/p/13829051.html
Service-oriented deployment of OpenVINO Model Server, step 3 (building the django service)
https://www.cnblogs.com/jsxyhelu/p/13878335.html
Service-oriented deployment of OpenVINO Model Server, step 4 (implementing sky replacement)
https://www.cnblogs.com/jsxyhelu/p/13894565.html
==========================================================================
This series' investigation into serving models with OpenVINO Model Server was driven by the search for a workable sky segmentation method. Traditional image-processing techniques can no longer handle this complex problem well, so we turned to AI methods; the service-oriented deployment, in turn, exists so that this AI capability can ultimately be called more conveniently.

We downloaded the bin + xml files; they need to be stored in the following layout:
models/
├── model1
│   ├── 1
│   │   ├── ir_model.bin
│   │   └── ir_model.xml
│   └── 2
│       ├── ir_model.bin
│       └── ir_model.xml
└── model2
    └── 1
        ├── ir_model.bin
        ├── ir_model.xml
        └── mapping_config.json
The models directory and the nested folders beneath it are all created on the host machine.

-v maps a host directory to a directory inside the docker container; :ro mounts it read-only, so the whole nested tree above is visible inside the container exactly as it is on the host. Where the files sit on the host we obviously know; where they land inside docker is, by itself, unimportant. What matters is telling OpenVINO where they are, so this container-side path must agree with the --model_path given later.
-p maps a host port to a container port.
openvino/model_server:latest is the docker image being started.
--model_path must be consistent with the container-side path in -v.
--model_name is the name under which OpenVINO serves the model.
The remaining options matter less and are hard to get wrong.
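Putting the flags above together, the launch command looks roughly like the following sketch. The host path /models, port 9000, and the model name are taken from this article's setup; the exact flag spelling should be checked against the OVMS version in use.

```shell
# Sketch of the OVMS launch, assuming the /models tree shown above:
# -v mounts the host tree read-only, -p exposes the gRPC port,
# --model_path/--model_name select which model directory to serve.
docker run -d -v /models:/models:ro -p 9000:9000 \
    openvino/model_server:latest \
    --model_path /models/model2 \
    --model_name semantic-segmentation-adas \
    --port 9000
```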
sudo docker exec -it 775c7c9ee1e1 /bin/bash
wget https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/2/semantic-segmentation-adas-0001/FP32/semantic-segmentation-adas-0001.xml
[root@VM-0-13-centos 1]# cd /models
[root@VM-0-13-centos models]# tree
.
├── model1
│   └── 1
│       ├── face-detection-retail-0004.bin
│       └── face-detection-retail-0004.xml
└── model2
    └── 1
        ├── semantic-segmentation-adas-0001.bin
        └── semantic-segmentation-adas-0001.xml

4 directories, 4 files
27907ca99807fb58184daee3439d821b554199ead70964e6e6bcf233c7ee20f0
[root@VM-0-13-centos models]# docker ps
CONTAINER ID   IMAGE                          COMMAND                  CREATED         STATUS         PORTS                    NAMES
27907ca99807   openvino/model_server:latest   "/ovms/bin/ovms --mo…"   5 seconds ago   Up 3 seconds   0.0.0.0:9000->9000/tcp   flamboyant_mahavira
Inputs
The blob with the BGR image in the format [B, C=3, H=1024, W=2048], where:
B - batch size
C - number of channels
H - image height
W - image width
……
(1, 3, 1024, 2048)
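Building a blob of that shape from an ordinary image is just a transpose plus a batch dimension. A minimal numpy sketch (in practice cv2.imread and cv2.resize would supply the 1024x2048 HWC BGR array; a zero array stands in for it here):

```python
import numpy as np

# Stand-in for a BGR image loaded and resized to 2048x1024 (HWC layout).
img = np.zeros((1024, 2048, 3), dtype=np.uint8)

# HWC -> CHW, then add the batch dimension to get [B, C, H, W].
blob = img.transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)
print(blob.shape)  # (1, 3, 1024, 2048)
```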
Traceback (most recent call last):
  File "sky_detection.py", line 79, in <module>
    result = stub.Predict(request, 10.0)
  File "/usr/local/lib64/python3.6/site-packages/grpc/_channel.py", line 690, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/lib64/python3.6/site-packages/grpc/_channel.py", line 592, in _end_unary_response_blocking
    raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
	status = StatusCode.RESOURCE_EXHAUSTED
	details = "Received message larger than max (8388653 vs. 4194304)"
	debug_error_string = "{"created":"@1602672141.715481155","description":"Received message larger than max (8388653 vs. 4194304)","file":"src/core/ext/filters/message_size/message_size_filter.cc","file_line":190,"grpc_status":8}"
@jsxyhelu The limit on the server side is actually 1GB. Your logs indicate 4MB.
It seems to be a client-side restriction.
Could you try the following settings:
options = [('grpc.max_receive_message_length', 100 * 1024 * 1024), ('grpc.max_send_message_length', 100 * 1024 * 1024)]
channel = grpc.insecure_channel(server_url, options=options)
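The numbers in the error message reconcile neatly: the server's reply is the INT32 class map, which is exactly 8 MB of payload, over gRPC's 4 MB default cap (the extra 45 bytes in 8388653 are presumably protobuf framing). The float request blob is even larger, which is why both the send and receive limits get raised. A quick arithmetic check:

```python
# gRPC's default maximum message size, as seen in the error.
DEFAULT_MAX = 4 * 1024 * 1024               # 4194304 bytes
print(DEFAULT_MAX)                          # 4194304

# The response is a 1 x 1 x 1024 x 2048 INT32 class map (4 bytes each).
response_bytes = 1 * 1 * 1024 * 2048 * 4
print(response_bytes)                       # 8388608 (+ framing -> 8388653)

# The request blob is 1 x 3 x 1024 x 2048 float32, larger still.
request_bytes = 1 * 3 * 1024 * 2048 * 4
print(request_bytes)                        # 25165824

# The suggested 100 MB limit comfortably covers both directions.
NEW_MAX = 100 * 1024 * 1024
print(NEW_MAX > max(request_bytes, response_bytes))  # True
```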
2020-10-17 07:03:10.395324: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-10-17 07:03:10.395363: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Getting model metadata for model: semantic-segmentation-adas
Inputs metadata:
	Input name: data; shape: [1, 3, 1024, 2048]; dtype: DT_FLOAT
Outputs metadata:
	Output name: 4455.1; shape: [1, 1, 1024, 2048]; dtype: DT_INT32
2020-10-17 07:46:20.942953: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-10-17 07:46:20.943164: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
['sky9.jpg']
Start processing 1 iterations with batch size 1
Request shape (1, 3, 1024, 2048)
image in batch item 0, output shape (1, 1024, 2048)
saving result to results/1_0.jpg
1024
2048
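Since the output is already an INT32 class map rather than raw scores, post-processing reduces to cutting out the pixels of one class. A sketch, assuming the "sky" class sits at index 10 in this model's Cityscapes-style label list (an assumption worth verifying against the model card); a synthetic class map stands in for the server response:

```python
import numpy as np

SKY_CLASS = 10  # assumed index of "sky" in the model's label list

# Stand-in for the (1, 1, 1024, 2048) INT32 class map returned by the
# server; here the top half is filled with fake "sky" pixels.
class_map = np.zeros((1, 1, 1024, 2048), dtype=np.int32)
class_map[0, 0, :512, :] = SKY_CLASS

# Drop the batch/channel dims and build a 0/255 uint8 mask for OpenCV.
mask = (class_map[0, 0] == SKY_CLASS).astype(np.uint8) * 255
print(mask.shape)  # (1024, 2048)
```

A mask like this is what then drives the replacement step from the introduction, e.g. cv2.seamlessClone(new_sky, image, mask, center, cv2.NORMAL_CLONE) to blend a substitute sky into the original photo.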