Hi,
Thank you for maintaining this great package.
I want to simulate a relatively large system (~10,000 atoms) using TensorNet.
After training a model with TensorNet-SPICE.yaml, I tried to use it in an MD simulation of a larger system via openmm-torch. When I simulated a system of ~4,000 atoms, all 80 GiB of GPU memory filled up. I found that the force calculation (the backpropagation phase) consumed most of the GPU memory and caused the out-of-memory error.
Is there a way to avoid this?
I expect that the atomic-energy calculation in the 'representation_model' (TensorNet) could be split into batches, so that a large amount of GPU memory would not be needed all at once. Is this possible?
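To illustrate what I have in mind, here is a toy sketch (plain PyTorch, not actual TorchMD-Net code; `ToyAtomicEnergy` and `forces_in_chunks` are hypothetical names I made up). The idea is to run backward per chunk of atoms so that only one chunk's autograd graph is alive at a time, accumulating forces in `pos.grad`:

```python
import torch

# Toy stand-in for a per-atom energy model (NOT TensorNet itself):
# each atom's energy depends only on its own position via a small MLP.
class ToyAtomicEnergy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
        )

    def forward(self, pos):
        # pos: (N, 3) -> per-atom energies (N,)
        return self.net(pos).squeeze(-1)

def forces_in_chunks(model, pos, chunk_size):
    """Compute total energy and forces chunk by chunk, so only one
    chunk's autograd graph is held in memory at a time."""
    pos = pos.detach().requires_grad_(True)
    total_energy = 0.0
    for start in range(0, pos.shape[0], chunk_size):
        e = model(pos[start:start + chunk_size]).sum()
        e.backward()              # grads accumulate into pos.grad,
        total_energy += e.item()  # and this chunk's graph is freed
    forces = -pos.grad            # F = -dE/dpos
    return total_energy, forces

torch.manual_seed(0)
model = ToyAtomicEnergy()
pos = torch.randn(100, 3)

# Reference: one full backward pass over all atoms at once.
pos_ref = pos.detach().requires_grad_(True)
model(pos_ref).sum().backward()

e_chunked, f_chunked = forces_in_chunks(model, pos, chunk_size=16)
assert torch.allclose(f_chunked, -pos_ref.grad, atol=1e-6)
```

Of course, in a real message-passing model each atom's energy also depends on its neighbors, so naively slicing the positions like this would break the graph; each chunk would presumably need a neighbor-complete subgraph. I am asking whether something along these lines is feasible with TensorNet.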