Conversation
Going to work on Windows testing soon!
Nice catches of the other cases, but we don't want to assume a float input. Also, please add unit tests.
The unit test I want to see is one that mocks different models with different callable signatures. The models themselves can be very dumb and be fed noise as input.
Do not check in a binary model; instead, create a simple TensorFlow file of a mock model.
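A minimal sketch of what such a test could look like, using plain-Python stand-ins for the models (the class and function names here are illustrative, not from the module; in the real test the mocks would be tiny SavedModels written with tf.saved_model.save rather than checked-in binaries):

```python
import numpy as np


class DirectCallModel:
    """Mock model that is callable directly, like a Keras model: model(data)."""

    def __call__(self, data):
        return np.zeros((data.shape[0], 2), dtype=np.float32)


class SignatureOnlyModel:
    """Mock model only callable through a named serving signature."""

    def __init__(self):
        self.signatures = {"serving_default": self._serve}

    def _serve(self, data):
        return {"output_0": np.zeros((data.shape[0], 2), dtype=np.float32)}


def run_inference(model, data):
    """Mimics the module's fallback: try a direct call, then the signature."""
    try:
        return model(data)
    except TypeError:
        return model.signatures["serving_default"](data)["output_0"]


# Feed both mocks random noise, as suggested above.
noise = np.random.rand(4, 8).astype(np.float32)
out_direct = run_inference(DirectCallModel(), noise)
out_sig = run_inference(SignatureOnlyModel(), noise)
```

The point of the two mocks is that the module's dispatch logic, not the models, is what gets exercised.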
src/tensorflow_module.py
Outdated
    if np.any(data > 1.0):
        data = data / 255.0
This isn't a valid assumption. Some models do indeed take in uint8 tensors. You need to check the metadata of the model input and see whether it expects floats or uint8.
Also, we expect this kind of preprocessing to be done before the input is fed into the module.
So please delete these lines, and instead check what the model metadata expects and raise an error that says something along the lines of "float expected, but got uint8".
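A sketch of the suggested check, assuming the expected dtype has already been read from the model's input metadata (in TensorFlow that could come from something like model.signatures["serving_default"].inputs[0].dtype.name; the helper name here is hypothetical):

```python
import numpy as np


def check_input_dtype(expected_dtype: str, data: np.ndarray) -> None:
    """Raise instead of silently rescaling when the input dtype is wrong.

    `expected_dtype` would come from the model's input metadata; this
    helper only does the comparison and error reporting.
    """
    actual = data.dtype.name
    if actual != expected_dtype:
        raise TypeError(f"{expected_dtype} expected, but got {actual}")


# A uint8 model accepts uint8 data without complaint.
check_input_dtype("uint8", np.zeros((2, 2), dtype=np.uint8))

# A float model fed uint8 data gets a clear error instead of a rescale.
try:
    check_input_dtype("float32", np.zeros((2, 2), dtype=np.uint8))
except TypeError as err:
    print(err)  # float32 expected, but got uint8
```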
I'm rereading the code now and I see the casting to the correct types immediately above, from line 165. However, I am unsure where the division by 255 is supposed to be happening, if not here.
When testing earlier, I saw that the data tensor was numbers between 0 and 255 and the model was behaving improperly, which is why I added the division. I'm going to retest right now and verify.
Verified that the data tensor contains numbers between 0 and 255, while the model expects values between 0 and 1. Should we repackage the model with a preprocessing layer? (Not sure how to do this, but I think it's possible.)
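For reference, the repackaging idea can be sketched as a thin wrapper that rescales before delegating to the inner model (a pure-Python sketch with hypothetical names; with TensorFlow the equivalent would be prepending a tf.keras.layers.Rescaling(1.0 / 255) layer and re-exporting the SavedModel):

```python
import numpy as np


class RescalingWrapper:
    """Wraps a model so callers can feed raw 0-255 pixel data.

    The wrapper owns the 0-255 -> 0-1 conversion, so the inference
    code no longer has to guess whether to divide by 255.
    """

    def __init__(self, model, scale=1.0 / 255.0):
        self.model = model
        self.scale = scale

    def __call__(self, data):
        return self.model(data.astype(np.float32) * self.scale)


# A dummy inner model that assumes inputs in [0, 1].
inner = lambda x: x.mean()

wrapped = RescalingWrapper(inner)
result = wrapped(np.full((2, 2), 255, dtype=np.uint8))  # inner sees ~1.0
```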
https://github.com/viam-modules/fish-predictor/pull/7
Resolved here
src/tensorflow_module.py
Outdated
    except (ValueError, TypeError, RuntimeError) as direct_err:
        try:
            f = self.model.signatures["serving_default"]
            input_tensor = tf.convert_to_tensor(data, dtype=tf.float32)
Again, do not assume a float vector.
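One way to avoid the hardcoded float32 is to cast to whatever dtype the signature declares. Sketched here with numpy and a hypothetical helper (in the module, `expected_dtype` would be read from the signature, e.g. f.inputs[0].dtype.name, and the final conversion done with tf.convert_to_tensor):

```python
import numpy as np


def to_model_input(data: np.ndarray, expected_dtype: str) -> np.ndarray:
    """Cast input to the dtype the signature declares, not a fixed float32."""
    return data.astype(np.dtype(expected_dtype))


# A uint8 model keeps its uint8 input; a float model gets float32.
uint8_in = to_model_input(np.random.rand(2, 3) * 255, "uint8")
float_in = to_model_input(np.random.randint(0, 255, (2, 3)), "float32")
```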
src/tensorflow_module.py
Outdated
    try:
        # Attempt direct call with raw data
        res = self.model(data)
    except (ValueError, TypeError, RuntimeError) as direct_err:
When would we get a RuntimeError in this scenario? I see the case for the other two errors
AGanguly13
left a comment
Left one question; otherwise I agree with Bijan on the rest.
AGanguly13
left a comment
This all LGTM. I like how you broke off input and output processing into their own functions!
src/tensorflow_module.py
Outdated
    else:
        return {}
Are you sure this isn't an error of some kind that should be raised to the user? Do we expect this to ever happen?
We should probably raise an error, but I was just mimicking the logic that was here before. I'm pretty sure that it should never happen.
bhaney
left a comment
Once you lint, it should be good.
A question: you made an RC based on this branch, right? Once this is merged to main, you can make a full release.
This allows us to run SavedModels on robot! Small changes to the way we do inference and format outputs. Try this robot:
https://app.viam.com/machine/fbeabd00-80a0-4914-a67c-0d925804094b/control