fluid.io.load_inference_model raises an error when loading multiple models -- [paddlepaddle]


When several models are deployed in the same service, a stack error occurs. The cause is that the program is global: every model is loaded into the same global scope.
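
For context, here is a minimal sketch of the failing pattern (the model directories are hypothetical): both calls populate the executor's shared global scope, so same-named variables from the two models can overwrite or conflict with each other.

    import paddle.fluid as fluid

    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    # Both loads go into the same global scope (hypothetical directories);
    # variables with identical names from the two models collide there.
    prog_a, feeds_a, fetch_a = fluid.io.load_inference_model(
        dirname="model_a", executor=exe)
    prog_b, feeds_b, fetch_b = fluid.io.load_inference_model(
        dirname="model_b", executor=exe)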

Rewriting the code as follows solves it.

Solved it myself; for those who need it: use a new scope for every model.

    import paddle.fluid as fluid

    # Give each model its own scope so its variables do not
    # collide with other models in the global scope.
    scope = fluid.Scope()
    with fluid.scope_guard(scope):
        place = fluid.CPUPlace()
        exe = fluid.Executor(place)
        # model_path = (dirname, model filename); params_path[1] is the params filename
        [inference_program, _, fetch_targets] = (
            fluid.io.load_inference_model(dirname=model_path[0],
                                          executor=exe,
                                          model_filename=model_path[1],
                                          params_filename=params_path[1]))

 

And for prediction:

    # Re-enter the same scope the model was loaded into.
    with fluid.scope_guard(scope):
        results = exe.run(inference_program,
                          feed=inputs,
                          fetch_list=fetch_targets)

 




Reference: https://github.com/PaddlePaddle/models/issues/1164

