Hi maintainers,
I’d like to compile the three released ONNX models (enc.onnx, df_dec.onnx, erb_dec.onnx) into platform-specific static libraries with onnx-mlir.exe for deployment and optimization. I’m running into two problems:
The models in the tagged releases are exported with an old ONNX opset.
(Possibly related) When I try to freeze dynamic inputs to static shapes, the conversion fails. For example, converting
name: feat_erb, tensor: float32[1,1,S,32]
to a fixed shape like
name: feat_erb, tensor: float32[1,1,1,32]
leads to various errors in the onnx-mlir lowering pipeline, so I can’t produce a working static library (.lib/.a).
Could you please publish a refreshed release with an updated ONNX opset? Ideally, could you also provide static-shape variants of the models (e.g., with S=1 for streaming) so they compile cleanly with onnx-mlir.exe without additional surgery?
Environment:
Model version: v0.5.6, deepfilter3_ll
Toolchain: onnx-mlir.exe (Windows)
Thanks a lot!
@Rikorose