Found that Unstructured's "TensorrtExecutionProvider" runs slower than "CUDAExecutionProvider".
发现 Unstructured 的 "TensorrtExecutionProvider" 比 "CUDAExecutionProvider" 慢
Modify the code in unstructured_inference/models/detectron2onnx.py so that "CUDAExecutionProvider" is tried first:
# ordered_providers = [# "TensorrtExecutionProvider",# "CUDAExecutionProvider",# "CPUExecutionProvider",# ]ordered_providers = ["CUDAExecutionProvider","TensorrtExecutionProvider","CPUExecutionProvider",]
Done!