Conversation
umswmayj
commented
Feb 12, 2026
Test Coverage Report 📊: 68.6%
PhilippMatthes
left a comment
Really like the proposal! I have a couple of ideas:
```go
	}
	return checkType == "evacuate"
}
```
If you rebase on main, you could use this function and then just check for `intent == api.EvacuateIntent`:

cortex/api/external/nova/messages.go, line 121 in d2b52ef
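Roughly like this, as a sketch; the helper at messages.go line 121 isn't shown in this thread, so its name here (`intentFromCheckType`) is assumed:

```go
// Sketch only: intentFromCheckType stands in for the helper at
// messages.go:121, whose actual name/signature is not visible here.
// api.EvacuateIntent is the constant mentioned in the review.
intent := intentFromCheckType(checkType)
if intent == api.EvacuateIntent {
	// handle the evacuate case
}
```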
The evacuate case should already be covered by unit tests, so no additional test is needed.
```go
type ReservationSelector struct {
	// IsNUMAAlignedHost specifies whether the host should be NUMA-aligned.
	// +kubebuilder:validation:Optional
	IsNUMAAlignedHost bool `json:"isNUMAAlignedHost,omitempty"`
```
Is this something specific to the reservation, or something we could add to the HV CRD?
Maybe I don't get what this field should do
Discussion outcome: remove for now
```go
// StartTime is the time when the reservation becomes active.
// +kubebuilder:validation:Optional
StartTime *metav1.Time `json:"startTime,omitempty"`

// EndTime is the time when the reservation expires.
// +kubebuilder:validation:Optional
EndTime *metav1.Time `json:"endTime,omitempty"`
```
Wondering if this is relevant for all kinds of reservations, i.e., is a start + end time relevant for a failover reservation? If not, we should think about substructs like:

```yaml
kind: FailoverReservation
failover:
  ...
# or
kind: CommitmentReservation
commitment:
  startTime:
  endTime:
```
Let's discuss live; I think there was an argument for flat instead of nested structs.
Discussion outcome: keep for now, but we are open to changing it.
```go
// --- Scheduling-related fields (e.g., for Nova) ---

// ProjectID is the UUID of the project this reservation belongs to.
// +kubebuilder:validation:Optional
ProjectID string `json:"projectID,omitempty"`

// DomainID is the domain ID to reserve for.
// +kubebuilder:validation:Optional
DomainID string `json:"domainID,omitempty"`

// ResourceName is the name of the resource to reserve (e.g., FlavorName for Nova).
// +kubebuilder:validation:Optional
ResourceName string `json:"resourceName,omitempty"`

// ResourceExtraSpecs contains extra specifications relevant for initial placement
// of the instance (e.g., FlavorExtraSpecs for Nova).
// +kubebuilder:validation:Optional
ResourceExtraSpecs map[string]string `json:"resourceExtraSpecs,omitempty"`
```
Maybe we can also put this under its own substruct, e.g.:

```yaml
constraints:
  projectID: ...
  ...
```
Or we make this part of the expectedGuests I proposed below, meaning we schedule this reservation so that it conforms to the expected guests' scheduling constraints, which would also be semantically correct.
Discussion outcome: we just remove it for now, until we need something like ResourceExtraSpecs.
AI for me: prepare a comparison for the next tech sync (flat vs. nested structs for our 4 use cases).
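To anchor that comparison, here is a rough sketch of the two shapes (type and field names are illustrative only, nothing here is decided):

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Flat variant: every field lives directly on the spec and is optional,
// even when it only applies to one kind of reservation.
type FlatReservationSpec struct {
	Kind      string       `json:"kind"`
	StartTime *metav1.Time `json:"startTime,omitempty"` // only meaningful for commitments?
	EndTime   *metav1.Time `json:"endTime,omitempty"`
	ProjectID string       `json:"projectID,omitempty"`
}

// Nested variant: one optional substruct per use case, so fields that
// make no sense for a kind simply cannot be set.
type NestedReservationSpec struct {
	Failover   *FailoverSpec   `json:"failover,omitempty"`
	Commitment *CommitmentSpec `json:"commitment,omitempty"`
}

type FailoverSpec struct {
	// failover-only fields
}

type CommitmentSpec struct {
	StartTime *metav1.Time `json:"startTime,omitempty"`
	EndTime   *metav1.Time `json:"endTime,omitempty"`
}
```

Flat keeps the API surface simple; nested makes invalid field combinations unrepresentable. That is the trade-off to weigh across the 4 use cases.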
```go
// ConnectTo maps the instances that loosely use this reservation.
// The key is the instance, e.g., VM UUID; the value is use-case specific and can be empty.
// E.g., for failover reservations, this maps VM/instance UUIDs to the host the VM is currently running on.
// +kubebuilder:validation:Optional
ConnectTo map[string]string `json:"connectTo,omitempty"`
```
Maybe we can do something like this to type it more strongly:

```yaml
expectedGuests:
  - vmID: my-funny-vm
    expectBy: utc-timestamp
    kind: InitialPlacementVM
  - vmID: my-other-vm
    kind: FailoverVM
  - project: my-unfunny-project
    kind: CommitmentVM
```
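If we go that route, one possible Go shape for it, just mirroring the YAML above (all names are suggestions):

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// GuestKind says how a guest relates to the reservation.
// +kubebuilder:validation:Enum=InitialPlacementVM;FailoverVM;CommitmentVM
type GuestKind string

// ExpectedGuest is a strongly typed replacement for one ConnectTo entry.
type ExpectedGuest struct {
	// VMID identifies a single guest VM, where applicable.
	// +kubebuilder:validation:Optional
	VMID string `json:"vmID,omitempty"`
	// Project scopes the entry for commitment-style guests.
	// +kubebuilder:validation:Optional
	Project string `json:"project,omitempty"`
	// ExpectBy is when we expect the guest to arrive, if known.
	// +kubebuilder:validation:Optional
	ExpectBy *metav1.Time `json:"expectBy,omitempty"`
	Kind     GuestKind    `json:"kind"`
}
```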
```go
// Only used when Type is FailoverReservation.
// This should reflect the actual state and may differ from Spec.ConnectTo during transitions.
// +kubebuilder:validation:Optional
ObservedConnectTo map[string]string `json:"observedConnectTo,omitempty"`
```
Would this be better?

```yaml
allocation:
  - vmID: my-funny-vm
    # To subtract it from the reserved resources
    resources:
      cpu: 2
      memory: 4
    # Keep track of our VM; can be Pending if we're
    # locking this space preventively
    status: Active
    # The last time we observed a change in the state.
    # This will allow us to remove the allocation if it's
    # unready for too long?
    lastTransitionTime: utc-timestamp
    # Some enum string that helps us handle this
    # better (in case status isn't enough)
    reason: ...
    # Human-readable explanation of why this allocation
    # currently resides in this status
    message: ...
```
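As a strawman for typing that in the status (field names just mirror the YAML above and are not final):

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Allocation tracks one guest that occupies (or preventively locks)
// part of the reservation.
type Allocation struct {
	VMID string `json:"vmID"`
	// Resources to subtract from the reserved capacity, e.g. cpu: "2".
	Resources map[string]string `json:"resources,omitempty"`
	// Status can be Pending while the space is only locked preventively.
	// +kubebuilder:validation:Enum=Pending;Active
	Status string `json:"status"`
	// LastTransitionTime lets us expire allocations that stay unready too long.
	LastTransitionTime metav1.Time `json:"lastTransitionTime"`
	// Reason is a machine-readable hint complementing Status.
	Reason string `json:"reason,omitempty"`
	// Message is a human-readable explanation for the current status.
	Message string `json:"message,omitempty"`
}
```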
```go
// - For Nova: the hypervisor hostname
// - For Pods: the node name
// +kubebuilder:validation:Optional
ObservedHost string `json:"observedHost,omitempty"`
```
Maybe a hot take, but does the reservation need to keep track of where the VM is located? Doesn't it only need to keep track of whether the VM is allocated by it, and when it was planned to be allocated by it, so we can detect whether we need to move the reservation to the right spot (where the VM actually spawned) and how much space of the reservation is left?
Discussed a lot offline; feel free to resolve my comments above, surely too much to catch up on here :D