I have run into this problem many times and still cannot solve it. Could you please take a look?
Thank you!
The mzML file is 2 GB and the spectral library is 4 GB.
DIA-BERT: 2025-06-09 16:06:48,241 - DIA-BERT - INFO - pick rt process: 2300/40001
DIA-BERT: 2025-06-09 16:07:15,994 - DIA-BERT - INFO - pick rt process: 2400/40001
DIA-BERT: 2025-06-09 16:07:36,682 - DIA-BERT - INFO - pick rt process: 2500/40001
DIA-BERT: 2025-06-09 16:08:04,453 - DIA-BERT - INFO - pick rt process: 2600/40001
DIA-BERT: 2025-06-09 16:08:24,555 - DIA-BERT - ERROR - mzml: 140_QHLC_Trypsin_Orbi_DE340_2uL_20250411_01_new.mzML deal exception.
Traceback (most recent call last):
File "/mnt/lishuhao/DIA-BERT-main/src/common/timepoint_handler.py", line 141, in deal
batch_result_list = deal_peak(input_param, peak_group_info, logger)
File "/mnt/lishuhao/DIA-BERT-main/src/common/timepoint_handler.py", line 400, in deal_peak
lib_tensor_handler.build_ms_rt_moz_matrix(ms1_extract_tensor[w_p_arr[0]: w_p_arr[1]],
File "/mnt/lishuhao/DIA-BERT-main/src/common/lib_tensor_handler.py", line 342, in build_ms_rt_moz_matrix
ms1_moz_rt_matrix = construct_sparse_tensor(ms1_moz_rt_matrix, device)
File "/mnt/lishuhao/DIA-BERT-main/src/common/lib_tensor_handler.py", line 412, in construct_sparse_tensor
return torch.sparse_coo_tensor(indices, values, shape)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 13.43 GiB. GPU 0 has a total capacity of 44.34 GiB of which 6.45 GiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 22.83 GiB is allocated by PyTorch, and 14.75 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
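As a first mitigation, the error message itself suggests enabling expandable segments in the PyTorch CUDA allocator, which can help when a lot of memory is reserved but unallocated (fragmentation). This is a sketch of how I set it before launching the run; the actual DIA-BERT entry point and arguments may differ from what is shown here:

```shell
# Enable the expandable-segments allocator to reduce CUDA memory fragmentation.
# This must be set in the environment before the Python process starts.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# Hypothetical launch command; replace with the real DIA-BERT invocation.
# python src/main.py --mzml 140_QHLC_Trypsin_Orbi_DE340_2uL_20250411_01_new.mzML
```

If that is not enough, the allocation of 13.43 GiB for a single sparse COO tensor suggests the retention-time window batch may simply be too large for the GPU; is there a parameter to reduce the batch/window size in `deal_peak`?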