
Conversation

@fabio-innatera
Collaborator

I am proposing this as an alternative to #157. The user should get a useful error message instead of a cryptic error from inside type inference.

I'm not sure about the exact wording for the error message. Is CubaLIF technically (just) a neuron?

@Jegp
Collaborator

Jegp commented Aug 2, 2025

I like this a lot, because we want to avoid creating incorrect/ambiguous/useless graphs. At the same time, I think this may be a bit strong as a first step. Learning our lessons from the last breaking changes, I think we should start by emitting warnings and then, after some time, turn them into assertions. Does that make sense @fabio-innatera ?

@fabio-innatera
Collaborator Author

It is already broken right now; the only thing this PR changes is the error message for a network like this:

import torch
import snntorch as snn

class NetWithAvgPool(torch.nn.Module):
    def __init__(self):
        super(NetWithAvgPool, self).__init__()
        self.conv1 = torch.nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1)
        self.lif1 = snn.Leaky(beta=0.9, init_hidden=True)
        self.fc1 = torch.nn.Linear(28*28*16 // 4, 500)  # Adjusting input size after pooling layer
        self.lif2 = snn.Leaky(beta=0.9, init_hidden=True, output=True)

    def forward(self, x):
        x = torch.nn.functional.avg_pool2d(self.conv1(x), kernel_size=2, stride=2)  #  kernel_size=2 and stride=2 for avg_pool2d
        x = x.view(-1, 28*28*16 // 4)  # Adjusting the view based on the output shape after pooling
        x = self.lif1(x)
        x = self.fc1(x)
        x = self.lif2(x)
        return x

from this: ValueError: Type inference error: type mismatch: input.output: [[ 1 28 28]] -> lif1.input: []

to this: AssertionError: Neuron parameters cannot be scalar, only numpy arrays

More discussion here:
jeshraghian/snntorch#384

@Jegp
Collaborator

Jegp commented Aug 5, 2025

Hmmm I agree that it's broken in snnTorch, but that's not necessarily the case for other simulators. Isn't it still possible to define scalar parameters if broadcasting is supported? I'm just wondering whether we should save other platforms from the same incompatibility troubles that snnTorch went through :)

@fabio-innatera
Collaborator Author

If other simulators also use scalar parameters, they are breaking the spec too. My opinion is that we should make sure that all libraries are spec compliant as early as possible, while the ecosystem and the number of *.nir files are still small. Otherwise, all the libraries will need workarounds to be able to import broken NIR files, which is a much bigger headache than making sure all frameworks export spec-compliant files.

I don't think this is related to broadcasting. It's about having to define the exact number of neurons on every neuron node vs. deciding that at runtime. From what I understand, the NIR spec was designed to always explicitly define the exact number of neurons, and I implemented type checking and inference under that assumption.
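To illustrate the point, here is a minimal, hypothetical sketch of the difference (it assumes the nir Python package and its LIF node with tau/r/v_leak/v_threshold fields; newer versions may additionally require v_reset):

import numpy as np
import nir

n_neurons = 4

# Spec-compliant: each parameter is an array with one entry per neuron,
# so the number of neurons is explicit in the graph itself.
lif = nir.LIF(
    tau=np.full(n_neurons, 0.01),
    r=np.ones(n_neurons),
    v_leak=np.zeros(n_neurons),
    v_threshold=np.ones(n_neurons),
)

# Not spec-compliant: scalar parameters leave the number of neurons
# undefined until runtime, which is what breaks shape/type inference.
# lif = nir.LIF(tau=0.01, r=1.0, v_leak=0.0, v_threshold=1.0)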

The main trouble snntorch went through is the introduction of v_reset. We explicitly decided to break the whole ecosystem with the introduction of v_reset, so there is no way for other libraries to not update for this.

@Jegp
Collaborator

Jegp commented Aug 14, 2025

I agree that other simulators are breaking the spec and should align as soon as possible. I'm only suggesting that we transition more slowly. That is, we do as you say, but permit wrong graphs for a brief period of time, just to give the libraries time to transition. For instance, by emitting a warning in the next version, and then, in a few months, convert that to an exception as you propose. That way, maintainers can't write me angry PMs saying we didn't warn them :P

What about that?

@fabio-innatera
Collaborator Author

We could reopen #157 and print a warning when we process scalar parameters, and later on remove that code again and replace it with this PR?

Or we could make check_types and infer_types emit warnings only, instead of raising an exception when one of them fails.
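Roughly something like this (a hypothetical sketch of the warning-only path; check_types and infer_types are the names from above, the wrapper itself is illustrative):

import warnings

def run_leniently(check, graph):
    # Run a strict check (e.g. check_types or infer_types), but downgrade
    # failures to a warning instead of raising, for a transition period.
    try:
        return check(graph)
    except (ValueError, AssertionError) as err:
        warnings.warn(
            f"Graph violates the NIR spec and may be rejected in a future version: {err}",
            stacklevel=2,
        )
        return None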

@Jegp
Collaborator

Jegp commented Oct 1, 2025

I like the idea of emitting warnings, although we should be careful not to spam too much. And someone did raise a good point about warnings not being optimal, although I can't find the comment now. Does that ring a bell @fabio-innatera ?

I think the path of least resistance is to (1) try to fix graphs as best we can to maintain backward compatibility, but (2) let the user know that they're entering dangerous territory. We can introduce deprecation annotations and then be more strict in future versions so people have time to adjust. What do you think?
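Something along these lines, maybe (a hypothetical sketch of (1) + (2); the helper name and repair strategy are assumptions, and it presupposes the number of neurons can be recovered from the surrounding graph):

import warnings
import numpy as np

def coerce_scalar_param(value, n_neurons, name):
    # Repair a scalar parameter by broadcasting it to the known number of
    # neurons, but warn so exporters have time to adjust before we get strict.
    if np.isscalar(value):
        warnings.warn(
            f"Scalar value for '{name}' is deprecated; NIR expects an array "
            f"with one entry per neuron. Broadcasting to shape ({n_neurons},).",
            DeprecationWarning,
            stacklevel=2,
        )
        return np.full(n_neurons, value, dtype=float)
    return np.asarray(value)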

