Describe the issue
For the attached ONNX model:
When I run the model with the CUDAExecutionProvider, I obtain the following results:
v11_0:
[[4.201314]]
----------------------------------------
v6_0:
[[[4.201314]]
 [[4.201314]]
 [[4.201314]]
 [[4.201314]]]
----------------------------------------
v3_0:
[[1.237523 1.149161 0.982885 0.831019 0.781668 0.964119 1.223949 1.089059
0.736921 1.002484 0.830756 1.200388 0.636665 0.609638 0.614915 0.942459
0.603451 0.642226 1.230804 0.720406 1.135304 0.73164 0.61376 0.865292
0.801359 1.091982 0.713896 0.715725 0.825016 1.184692 0.863524 0.954358
0.8438 0.8832 0.630811 0.840824 0.944988 0.695137 1.266777 0.608964
0.696518 0.995498 0.898445 0.673002 0.65762 1.196535 0.646138 0.684795
0.701615 0.985795 0.624965 0.703011 0.614472 1.09798 0.656129 0.878291
0.803383 1.252306 0.622624 0.71064 0.628148]]
----------------------------------------
But when I use the CPUExecutionProvider, the following error is reported:
Traceback (most recent call last):
  File "/home/user/drl/test.py", line 18, in <module>
    sess = ort.InferenceSession(
  File "/home/user/anaconda3/envs/drl/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 485, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/user/anaconda3/envs/drl/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 584, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Erf(13) node with name '/Erf'
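For context (not from the original report): ONNX Erf computes the Gauss error function elementwise, and a NOT_IMPLEMENTED error from an execution provider usually means no kernel is registered for that op with the node's input element type. A minimal stdlib sketch of the values Erf should produce, using a hypothetical helper `erf_reference` (not part of onnxruntime), can serve as a reference when comparing provider outputs:

```python
import math

# Hypothetical reference for ONNX Erf semantics: the Gauss error
# function applied elementwise. Useful only as a sanity check against
# provider outputs; not an onnxruntime API.
def erf_reference(values):
    return [math.erf(v) for v in values]

print(erf_reference([-1.0, 0.0, 1.0]))  # ≈ [-0.8427, 0.0, 0.8427]
```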
To reproduce
This bug can be reproduced with the code below and the model in the attachment.
import pickle
import numpy as np
import onnx
import onnxruntime as ort

np.set_printoptions(precision=6, suppress=True)

model = onnx.load("model.onnx")
# model.ir_version = 8
# model.opset_import[0].version = 13

with open("oracle.pkl", "rb") as f:
    oracle = pickle.load(f)
inputs = oracle["input"]
expected = oracle["output"]
expected_names = list(expected.keys())

# "CUDAExecutionProvider", "CPUExecutionProvider"
sess = ort.InferenceSession(
    model.SerializeToString(),
    providers=["CUDAExecutionProvider"],
)

try:
    outputs = sess.run([], inputs)
    print("ORT run finished")
    for idx, tensor in enumerate(outputs):
        name = expected_names[idx] if idx < len(expected_names) else f"output_{idx}"
        print(f"{name}:")
        print(tensor)
        if name in expected:
            np.testing.assert_allclose(tensor, expected[name], rtol=1e-3, atol=1e-3)
        print("-" * 40)
except Exception:
    import traceback
    traceback.print_exc()
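One possible workaround sketch (an assumption, not verified against this model): if the oracle feeds are float64 or float16, the CPU EP may simply lack an Erf kernel for that element type, and casting the feeds to float32 before `sess.run` could sidestep the error. The helper name `cast_inputs_to_f32` is made up for illustration; note the model's graph inputs would also need matching float32 types for this to actually run.

```python
import numpy as np

# Hypothetical workaround helper (assumption: the missing CPU kernel is
# dtype-related). Casts float16/float64 feeds to float32 and leaves all
# other dtypes untouched.
def cast_inputs_to_f32(inputs):
    return {
        name: arr.astype(np.float32) if arr.dtype in (np.float16, np.float64) else arr
        for name, arr in inputs.items()
    }

feeds = {"x": np.ones((2, 2), dtype=np.float64)}
print(cast_inputs_to_f32(feeds)["x"].dtype)  # float32
```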
Urgency
No response
Platform
Linux
OS Version
Ubuntu 22.04.4 LTS
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.23.2
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU, CUDA
Execution Provider Library Version
No response