
Implement support for PodSandboxStatus RPC#267

Open
MikeZappa87 wants to merge 2 commits into
containerd:mainfrom
MikeZappa87:mzappa/status

Conversation

@MikeZappa87
Member

@MikeZappa87 MikeZappa87 commented Jan 28, 2026

This PR is to add PodSandboxStatus support to NRI.

The reason we want to bring this RPC in now is that we want to begin the process of deprecating the CNI, and we need a way to return Pod IPs back to the kubelet. Without this, we either need to come up with a new RPC outside of NRI or experiment directly in the kubelet. Using NRI seems like the best place right now. This is why the PodSandboxStatusResponse type only returns pod IP addresses; it is open to more fields being added.
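To make the shape of the proposal concrete, here is a minimal sketch of a network-aware NRI plugin answering the relayed status query. All type and method names here are illustrative assumptions, not the PR's actual generated types; the only detail taken from the PR description is that the response carries pod IP addresses.

```go
package main

import "fmt"

// Hypothetical request/response shapes; the real ones live in pkg/api/api.proto.
type PodSandboxStatusRequest struct {
	PodSandboxID string
}

type PodSandboxStatusResponse struct {
	// Pod IP addresses; by CRI convention the primary IP comes first.
	IPs []string
}

// statusPlugin stands in for a network-aware NRI plugin that tracks
// the IPs it assigned per sandbox and reports them on status queries.
type statusPlugin struct {
	ips map[string][]string // sandbox ID -> assigned IPs
}

func (p *statusPlugin) PodSandboxStatus(req *PodSandboxStatusRequest) (*PodSandboxStatusResponse, error) {
	ips, ok := p.ips[req.PodSandboxID]
	if !ok {
		return nil, fmt.Errorf("unknown sandbox %q", req.PodSandboxID)
	}
	return &PodSandboxStatusResponse{IPs: ips}, nil
}

func main() {
	p := &statusPlugin{ips: map[string][]string{"sandbox-1": {"10.0.0.7", "fd00::7"}}}
	rsp, err := p.PodSandboxStatus(&PodSandboxStatusRequest{PodSandboxID: "sandbox-1"})
	if err != nil {
		panic(err)
	}
	fmt.Println(rsp.IPs) // dual-stack pair, primary first
}
```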

I put together this branch in containerd that brings this NRI branch in and disables the CNI as a test bed.

containerd/containerd@main...MikeZappa87:containerd:mzappa/knext

@LionelJouin

This is a great thing to add, thanks @MikeZappa87 !

/cc @aojea @squeed @mskrocki

@MikeZappa87 MikeZappa87 force-pushed the mzappa/status branch 2 times, most recently from 5cae6e9 to b4bf04c on January 29, 2026 15:16
@MikeZappa87 MikeZappa87 marked this pull request as ready for review January 29, 2026 20:07
Member

@klihub klihub left a comment


I have a few questions about the chosen semantics. But it might be that I just lack the necessary context here...

Comment thread pkg/api/api.proto Outdated
Comment thread pkg/api/api_ttrpc.pb.go-e Outdated
Comment thread pkg/adaptation/adaptation.go Outdated
Comment thread pkg/stub/stub.go Outdated
Comment thread pkg/adaptation/adaptation.go Outdated
if ipOwner != "" {
return nil, fmt.Errorf("plugins %q and %q both tried to set PodSandboxStatus IP address field", ipOwner, plugin.name())
}
rsp.Ip = pluginRsp.Ip
Contributor


We already return the pod IPs (#119); do you need anything else from the Status?

Contributor


Oh, OK, this is not for read, this is for update... This has a high risk of causing problems with the runtimes. Last time I checked, crio and containerd had different hooks for setting the container IP versus the NRI hook. First of all, I think we should define the contract between the NRI hooks and the runtimes: at what point each thing is executed. I added integration tests in containerd to ensure the currently defined order and lifecycle are respected and do not break (containerd/containerd#11331), but we should do the same exercise in crio to avoid the runtimes drifting apart.

Member Author

@MikeZappa87 MikeZappa87 Feb 1, 2026


https://github.com/containerd/containerd/blob/317286ac00e07bebd7d77925c0fef2af0d620e40/internal/cri/server/sandbox_run.go#L329

Are you talking about this? If you make any changes there, they are not committed back to containerd's internal store. I am not 100% following you here. Since the kubelet calls PodSandboxStatus, we need to hook into that RPC directly; otherwise the change needs to be made in either RunPodSandbox or the kubelet.

Having the kubelet use something internal, like reading a claim instead of PodSandboxStatus, would certainly be an idea. If we update RunPodSandbox, we would need to have that update the internal store, for example.

https://github.com/kubernetes/kubernetes/blob/8c9c67c000104450cfc5a5f48053a9a84b73cf93/pkg/kubelet/kuberuntime/kuberuntime_manager.go#L1568 for reference.

Comment thread pkg/adaptation/adaptation.go Outdated

// PodSandboxStatus relays the corresponding CRI request to plugins.
// If a plugin returns IP addresses, those will be returned to the caller.
// An error is returned if multiple plugins attempt to set the same field.
Contributor


This will be a nightmare to troubleshoot in prod on Kubernetes clusters, since the surfaced error will most probably not indicate anything useful. The CNI will say it is correct, since it assigns the IP, and this NRI plugin will probably be opaque to the user. I think we first need to define a better authoritative model for IP assignment in the runtimes.

Member Author


I think we have a couple of things here. If this PR merges, the CNI would be optional, so people would need to take that into consideration. I believe this concern applies across NRI as well: NRI can modify fields on the Pod and containers without the user knowing. The way I was going to implement this in containerd, at least, was to not allow this RPC to be modified if the CNI was enabled. That was just a thought to prevent these conditions; I am open to all ideas here.

Member Author


What are your thoughts on the authority for ip assignments?



Just some thoughts. If the NRI plugin can specify all necessary networking details for a pre-existing interface, then the container runtime could perhaps configure everything itself. But CNIs exist to set up an optimal (to them or the problem at hand) overlay network between nodes, so involving CNIs should perhaps not be removed from the equation altogether?

Contributor


NRI can modify fields on the Pod and containers without the user knowing

Yeah, but those are properties of the application itself (rlimits, CPU) and assigned by the runtime, IIUC. The IP is a property required to communicate with the rest of the world, and it is assigned by another actor, so you are changing a convention in a way that will generate a lot of issues in prod.

@MikeZappa87 this idea of a dedicated hook

explicit Pre- and PostSetupNetwork

Makes it easier to think about, so the runtime can choose:

if NRISetupNetwork {
  callNRINetworkPlugin
} else if CNISetupNetwork {
  call CNI // current state
} else {
  return error, no network plugin configured
}
This way there is no risk that someone assigns an IP and later realizes it is not being used.

Member Author


if NRISetupNetwork {
  callNRINetworkPlugin
} else if CNISetupNetwork {
  call CNI // current state
} else {
  return error, no network plugin configured
}

The POC works in this way. However, I am debating whether to move this PR forward; it might be better to close it for now.

Contributor


At the end of the day this is a containerd/crio decision; I can only provide non-binding opinions here.



An error is returned if multiple plugins attempt to set the same field.

The runtime can return an arbitrary number of pod IPs to kubelet (with kubelet ignoring all but the first IP or the first dual-stack pair), so you could just aggregate them instead...
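The aggregation alternative suggested above could look roughly like this. This is an illustrative Go sketch, not the adaptation package's real types: responses are appended in plugin-registration order, duplicates dropped, and the kubelet then picks the first IP (or first dual-stack pair) as it already does.

```go
package main

import "fmt"

// pluginResponse is a stand-in for one plugin's PodSandboxStatus reply.
type pluginResponse struct {
	plugin string
	ips    []string
}

// aggregateIPs merges IPs from all plugins instead of erroring on the
// second writer, preserving first-seen ordering and dropping duplicates.
func aggregateIPs(responses []pluginResponse) []string {
	var all []string
	seen := map[string]bool{}
	for _, r := range responses {
		for _, ip := range r.ips {
			if !seen[ip] {
				seen[ip] = true
				all = append(all, ip)
			}
		}
	}
	return all
}

func main() {
	got := aggregateIPs([]pluginResponse{
		{plugin: "netplug-a", ips: []string{"10.0.0.7"}},
		{plugin: "netplug-b", ips: []string{"fd00::7", "10.0.0.7"}},
	})
	fmt.Println(got) // [10.0.0.7 fd00::7]
}
```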

this will be a nightmare to troubleshoot in prod on kubernetes clusters

Well... yes, if you install both a CNI-based pod network implementation and an NRI-based pod network implementation on the same node, then that will fail spectacularly, but "don't do that then".

(And this is nothing new: if you run any two of Calico, Cilium, and OVN-Kubernetes on the same node at the same time, it will also fail spectacularly. Sure, the runtime will only talk to whichever one of them has the lexically-first CNI config file name, but meanwhile, the node-level daemon of the other pod network will be reconfiguring the node in a way that will totally break the first pod network.)

If this PR merges, the CNI would be optional

ISTM that the ideal implementation would be to make "CNI support" be an NRI plugin. (Maybe even the same plugin for containerd and cri-o?) And then if you don't want to use CNI, you just don't install/run the CNI NRI plugin.

Member Author


I am going to implement it as @danwinship suggested, unless someone has objections.

@aojea
Contributor

aojea commented Jan 31, 2026

Thanks for doing this. I initiated this path with this in mind but unfortunately didn't have the necessary time to continue, so I'm super glad you are taking the initiative. I added some comments from my previous research in this area.

@pfl

pfl commented Feb 4, 2026

A few years back we were investigating CNI lifecycle management in NRI by adding a set of CNI lifecycle hooks (https://github.com/containerd/nri/pull/57/changes#top). There we added explicit Pre- and PostSetupNetwork, Pre- and PostNetworkDeleted, and NetworkConfigurationChanged messages to help with network lifecycle tracking. NRI plugins were able to alter the networking setup at will, and in this PoC quite wildly indeed. Although the messaging at the time was built around CNI, as we did not want to build an alternative CNI protocol, perhaps the idea of network lifecycle events can be of use here? There was also a quick demonstration/hack for the containerd of the day to provide a PoC for the whole setup (pfl/containerd@ecb9061). Our takeaway at the time was the lifecycle management, not the actual message format sent back and forth.

But back to the CNI/non-CNI issue. As most (all?) real-world CNIs seem to have network daemons running while getting configuration requests via the CNI directory binaries, maybe there should be a revised protocol for contacting CNIs directly from container runtimes, with perhaps a fallback to the current CNI messaging if needed. Maybe the messaging could be redefined so that there is a distinction between choices already made by NRI plugins (IP, routing, network devices, etc.) and the ones not specified, left to the discretion of the CNI? With this I'm assuming the CNI daemons still want to set up an optimized overlay network needed for a particular cluster and its configuration?
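The lifecycle-hook idea described above could be sketched as an interface like the following. The hook names are taken from the comment; everything else (the interface shape, the tracing implementation) is an assumption for illustration, not the actual PoC from nri#57.

```go
package main

import "fmt"

// NetworkLifecycle is a hypothetical set of network lifecycle hooks,
// named after the messages the PoC is said to have added.
type NetworkLifecycle interface {
	PreSetupNetwork(pod string) error
	PostSetupNetwork(pod string) error
	PreNetworkDeleted(pod string) error
	PostNetworkDeleted(pod string) error
	NetworkConfigurationChanged() error
}

// tracer records the order hooks fire in; per the comment, the ordering
// (lifecycle tracking) was the valuable part, not the message payloads.
type tracer struct{ events []string }

func (t *tracer) PreSetupNetwork(pod string) error  { t.events = append(t.events, "pre-setup:"+pod); return nil }
func (t *tracer) PostSetupNetwork(pod string) error { t.events = append(t.events, "post-setup:"+pod); return nil }
func (t *tracer) PreNetworkDeleted(pod string) error  { t.events = append(t.events, "pre-delete:"+pod); return nil }
func (t *tracer) PostNetworkDeleted(pod string) error { t.events = append(t.events, "post-delete:"+pod); return nil }
func (t *tracer) NetworkConfigurationChanged() error  { t.events = append(t.events, "config-changed"); return nil }

func main() {
	tr := &tracer{}
	var h NetworkLifecycle = tr
	h.PreSetupNetwork("pod-a")
	// ... the runtime would invoke CNI (or an NRI network plugin) here ...
	h.PostSetupNetwork("pod-a")
	fmt.Println(tr.events) // [pre-setup:pod-a post-setup:pod-a]
}
```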

@danwinship

But back to the CNI/non-CNI issue. As most (all?) of the real-world CNIs seem to have network daemons running while getting configuration requests via CNI directory binaries, maybe there should be a revised protocol for contacting CNIs directly from container runtimes with perhaps a fallback to current CNI messaging if needed.

There's been some discussion about that (containernetworking/cni#821). The big problem is with delegated plugins (e.g., telling your pod network to invoke another CNI plugin for IPAM). Even if the pod network's daemon container mounts the host CNI plugins directory so that it can find the delegated-to plugin, it still may not be able to invoke it successfully, since the other plugin is expecting to be invoked in a context that looks like the container runtime, not in the context of a random hostNetwork pod.

The other problem with CNI is that it is really trying to solve Kubernetes 1.0's problems, and we need an API that is trying to solve Kubernetes 1.36's problems.

So I would not worry about CNI. We should assume that CRI/NRI are going to evolve beyond it.

@MikeZappa87
Member Author

I need to pick this back up, and resolve the conflicts so we can push it forward

Signed-off-by: Michael Zappa <michael.zappa@gmail.com>
Signed-off-by: Michael Zappa <michael.zappa@gmail.com>
@MikeZappa87
Member Author

I have a few questions about the chosen semantics. But it might be that I just lack the necessary context here...

What questions did you have?

Member

@mikebrow mikebrow left a comment


Was there consideration for putting the NRI plugin's pod sandbox response IP(s) into the RunPodSandbox request? Getting the IPs at status time is currently a read on the pod store metadata; moving it to status would change the pod lifecycle timing somewhat. At any rate, I think we need to stow this in the pod store, and consider using pod update if there is a dynamic change to the pod IPs.

Comment thread pkg/api/api.proto
}

message PodSandboxStatusResponse {
// Primary IP address of the pod sandbox.
Member


Maybe put these fields into a PodSandboxNetworkStatus struct at the root of this message.

Also include an info map[string]string for the typical "extra" debug/status information at the root of the response.

Maybe just use a repeated string for IPs, as was done on the RunPodSandbox call, where the primary is the first one.
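Taken together, the suggestions above might shape the message roughly like this. This is a hedged sketch only; the field names and numbers are assumptions, not what the PR's api.proto actually contains.

```proto
// Hypothetical shape combining the review suggestions.
message PodSandboxNetworkStatus {
  // Pod IP addresses; the primary IP is the first entry, mirroring
  // the repeated-IP convention used on the RunPodSandbox path.
  repeated string ips = 1;
}

message PodSandboxStatusResponse {
  PodSandboxNetworkStatus network = 1;
  // Typical "extra" debug/status information.
  map<string, string> info = 2;
}
```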

Comment thread pkg/api/api.proto Outdated
@MikeZappa87
Member Author

MikeZappa87 commented May 11, 2026

Was there consideration for putting the NRI plugin's pod sandbox response IP(s) into the RunPodSandbox request? Getting the IPs at status time is currently a read on the pod store metadata; moving it to status would change the pod lifecycle timing somewhat. At any rate, I think we need to stow this in the pod store, and consider using pod update if there is a dynamic change to the pod IPs.

The alternative considered has always been for RunPodSandbox to return the IP addresses instead:

message RunPodSandboxResponse{}

@aojea @danwinship @LionelJouin thoughts??

@mikebrow
Member

mikebrow commented May 12, 2026

nod.. start with the NRI changes.. I like that this emulates the create/start split I'd like to put into CRI states. I think the right time to fix IP reporting in the Kubelet is when we do the create/start split in the CRI api. We would then naturally migrate the nri network plugin api over to the create/run step.

@MikeZappa87
Member Author

nod.. start with the NRI changes.. I like that this emulates the create/start split I'd like to put into CRI states. I think the right time to fix IP reporting in the Kubelet is when we do the create/start split in the CRI api. We would then naturally migrate the nri network plugin api over to the create/run step.

#295
