Description
Before you fill out this form:
Did you review the FAQ?
- Yes, I reviewed the FAQ on the cellpose ReadTheDocs
Did you look through previous (open AND closed) issues posted on GH?
Now fill this form out completely:
Describe the bug
The docs say "If you have multiple images of the same size, it can be faster to input them into the Cellpose model.eval function as an array rather than a list, and running with a large batch size". However, when I pass a batch of 2D 3-channel images (shape num_images x H x W x 3, or num_images x 3 x H x W), eval treats the batch as a single 3D image and throws an error:
Traceback (most recent call last):
......
site-packages/cellpose/transforms.py", line 576, in convert_image
raise ValueError("3D input image provided, but do_3D is False. Set do_3D=True to process 3D images. ndims=4")
ValueError: 3D input image provided, but do_3D is False. Set do_3D=True to process 3D images. ndims=4
Is there a parameter configuration to allow processing this way, or are the docs wrong?
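For example, is something like this the intended way to disambiguate the axes? This is only a sketch using the variables from the repro below; the channel_axis usage here is my guess, not something the docs state:

# hypothetical: mark the last axis as channels, hoping the leading axis is
# then treated as a batch of 2D images rather than a Z stack
masks = model.eval(image, channel_axis=-1, batch_size=32)[0]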
Also, I find that even with a single large image (2160 x 2160 x 3), increasing batch_size does not decrease processing time even though I have plenty of memory to process multiple tiles. I'm running on a DGX Spark with ~128 GB of GPU memory.
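For reference, this is roughly how I am timing the single-image case; on my machine the timings stay essentially flat as batch_size grows. A minimal, self-contained sketch:

import time
import numpy as np
from cellpose import models

model = models.CellposeModel(gpu=True)
big_image = np.random.randint(0, 2**16, size=(2160, 2160, 3), dtype=np.uint16)

# batch_size should control how many tiles go through the network per forward pass
for bs in (8, 32, 128):
    t0 = time.time()
    model.eval(big_image, batch_size=bs)
    print(f"batch_size={bs}: {time.time() - t0:.1f} s")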
To Reproduce
Steps to reproduce the behavior:
import numpy as np
from cellpose import models
model = models.CellposeModel(gpu=True)
# 32 images, each 2160 x 2160 with 3 channels
image = np.random.randint(0, 2**16, size=(32, 2160, 2160, 3), dtype=np.uint16)
masks = model.eval(image)[0]
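For comparison, the list-based form that the docs describe as slower is what I fall back to; a sketch using the same variables:

# split the 4D array into a list of single 2160 x 2160 x 3 images
images = [image[i] for i in range(image.shape[0])]
# each element is one 3-channel 2D image, so no ndims=4 error is expected here
masks_list = model.eval(images, batch_size=32)[0]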
Run log
Please post all command line/notebook output so we can understand the problem. Make sure you are running with verbose mode on: on the command line, use the --verbose flag; in a notebook, first run
from cellpose import io
logger = io.logger_setup()
before running any Cellpose functions.