Description
URL: https://docs.radxa.com/cubie/a7z/app-dev/npu-dev/cubie-acuity-usage
Time: 2/1/2026, 11:55:15 PM
Has anyone ever tried importing yolo26 into Docker AllWinner v2.0.10.1? I got an error trying to execute pegasus_export_ovx.sh for the uint8 type:
E [ops/vsi_nn_op_conv2d.c:op_check:189]Inputs/Outputs data type not supported: DFP INT16, ASYM UINT8
E [vsi_nn_graph.c:setup_node:551]Check node[75] CONV2D failed
E [vnn_yolo26n6uint8.c:vnn_CreateYolo26n6Uint8:11455]CHECK STATUS(-1:A generic error code, used when no other describes the error.)
E [main.c:vnn_CreateNeuralNetwork:198]CHECK PTR 198
E [main.c:main:232]CHECK PTR 232
E 05:02:51 Fatal model generation error: 65280
W 05:02:51 ----------------Error(1),Warning(0)----------------
E 05:02:51 ('Fatal model generation error: 65280', 'nbg_generate')
W 05:02:51 ----------------Error(1),Warning(0)----------------
Traceback (most recent call last):
File "acuitylib/app/exporter/ovxlib_case/casegenerator.py", line 327, in acuitylib.app.exporter.ovxlib_case.casegenerator.CaseGenerator._gen_nb_file
File "acuitylib/acuitylog.py", line 264, in acuitylib.acuitylog.AcuityLog.e
acuitylib.acuityerror.AcuityError: ('Fatal model generation error: 65280', 'nbg_generate')
The error can be reproduced as follows. To get yolo26n.onnx, I downloaded yolo26n.pt from Ultralytics and exported it to ONNX with model.export(format="onnx", imgsz=640, batch=1, simplify=True, opset=12, dynamic=False). I then extracted a model with 6 heads:
import onnx.utils

onnx.utils.extract_model(
    input_path="yolo26n.onnx",
    output_path="yolo26n_6.onnx",
    input_names=['images'],
    output_names=[
        '/model.23/one2one_cv3.0/one2one_cv3.0.2/Conv_output_0',
        '/model.23/one2one_cv3.1/one2one_cv3.1.2/Conv_output_0',
        '/model.23/one2one_cv3.2/one2one_cv3.2.2/Conv_output_0',
        '/model.23/one2one_cv2.0/one2one_cv2.0.2/Conv_output_0',
        '/model.23/one2one_cv2.1/one2one_cv2.1.2/Conv_output_0',
        '/model.23/one2one_cv2.2/one2one_cv2.2.2/Conv_output_0',
    ],
)
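For completeness, the export step described above can be sketched as a small script. This assumes the ultralytics package is installed and yolo26n.pt is already in the working directory; the sketch skips itself if the checkpoint is missing, and nothing here is verified against a specific ultralytics release.

```python
# Sketch of the ONNX export step from the post. Assumes the `ultralytics`
# package is installed and yolo26n.pt has been downloaded beforehand.
import os

# Export arguments exactly as given in the post.
EXPORT_KWARGS = dict(format="onnx", imgsz=640, batch=1,
                     simplify=True, opset=12, dynamic=False)

def export_to_onnx(pt_path="yolo26n.pt"):
    """Export the .pt checkpoint; ultralytics writes yolo26n.onnx next to it."""
    from ultralytics import YOLO  # imported lazily so the sketch loads without it
    model = YOLO(pt_path)
    return model.export(**EXPORT_KWARGS)

# Only run when the checkpoint actually exists.
if os.path.exists("yolo26n.pt"):
    export_to_onnx()
```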
Import, quantize, quantize-hybrid, and inference all succeed, but convert_ovx fails.
I saw that in the quantize file yolo26n_6_uint8.quantize, node[75] is:
'@model.8/m.0/m/m.1/cv1/conv/Conv_output_0_346:out0':
qtype: u8
quantizer: asymmetric_affine
rounding: rtne
max_value: 6.992433547973633
min_value: -6.511664867401123
scale: 0.05295724794268608
zero_point: 123
but the input of node[75], coming from node[74], is int16:
'@model.8/m.0/m/m.0/Add_output_0_335:out0':
qtype: i16
quantizer: dynamic_fixed_point
rounding: rtne
max_value: 6.11386251449585
min_value: -0.5569233894348145
fl: 12
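For context, these two entries put values on very different integer grids; a minimal sketch of both mappings, using the fl, scale, and zero_point values from the excerpts above (Python's round is round-half-to-even, which matches rtne):

```python
# Minimal sketch of the two quantization schemes in the excerpts above.

def dfp_quantize(x, fl=12, bits=16):
    """dynamic_fixed_point: real value = q * 2**-fl, q clamped to int16."""
    q = round(x * (1 << fl))
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, q))

def affine_quantize(x, scale=0.05295724794268608, zero_point=123):
    """asymmetric_affine: real value = (q - zero_point) * scale, q in [0, 255]."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))

# The same real value lands on different integer grids:
print(dfp_quantize(1.0))     # 4096
print(affine_quantize(1.0))  # 142
```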
There is no converter from int16 to uint8 between node[74] and node[75]; maybe that is the problem.
How can I configure pegasus_quantize_hybrid to specify which layer is quantized with the int16 method? The same problem occurs with yolo26s_6 from AllWinner, but at node[79]. I don't want to manually edit these nodes in the .quantize and .quantize.json files.
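If hand-editing turns out to be unavoidable, it could at least be scripted. The sketch below is a guess at how a per-layer override might be applied: the "customized_quantize_layers" key and the "dynamic_fixed_point-i16" value string are assumptions about the hybrid-quantize JSON layout and must be checked against the files v2.0.10.1 actually generates.

```python
# Hedged sketch: add a per-layer quantizer override to a hybrid-quantize
# config programmatically. The "customized_quantize_layers" key and the
# "dynamic_fixed_point-i16" value are ASSUMPTIONS about the Acuity JSON
# layout, not verified against Docker AllWinner v2.0.10.1.
import json
import os
import tempfile

def force_layer_qtype(cfg_path, layer_name, qtype="dynamic_fixed_point-i16"):
    """Set one layer's quantizer in the hybrid config file, in place."""
    with open(cfg_path) as f:
        cfg = json.load(f)
    cfg.setdefault("customized_quantize_layers", {})[layer_name] = qtype
    with open(cfg_path, "w") as f:
        json.dump(cfg, f, indent=2)
    return cfg

# Demo on a throwaway file, using the tensor name from the post.
demo_path = os.path.join(tempfile.mkdtemp(), "hybrid.json")
with open(demo_path, "w") as f:
    json.dump({"customized_quantize_layers": {}}, f)
layer = '@model.8/m.0/m/m.1/cv1/conv/Conv_output_0_346:out0'
cfg = force_layer_qtype(demo_path, layer)
```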