5 changes: 4 additions & 1 deletion docs/env.md
@@ -230,7 +230,10 @@
| VM_FILTERPROMETHEUSCONVERTERANNOTATIONPREFIXES: `-` <a href="#variables-vm-filterprometheusconverterannotationprefixes" id="variables-vm-filterprometheusconverterannotationprefixes">#</a><br>allows filtering for converted annotations, annotations with matched prefix will be ignored |
| VM_CLUSTERDOMAINNAME: `-` <a href="#variables-vm-clusterdomainname" id="variables-vm-clusterdomainname">#</a><br>Defines domain name suffix for in-cluster addresses; the most common ClusterDomainName is `.cluster.local` |
| VM_APPREADYTIMEOUT: `80s` <a href="#variables-vm-appreadytimeout" id="variables-vm-appreadytimeout">#</a><br>Defines deadline for deployment/statefulset to transition into ready state |
| VM_PODWAITREADYTIMEOUT: `80s` <a href="#variables-vm-podwaitreadytimeout" id="variables-vm-podwaitreadytimeout">#</a><br>Defines single pod deadline to wait for transition to ready state |
| VM_PODWAITREADYINTERVALCHECK: `5s` <a href="#variables-vm-podwaitreadyintervalcheck" id="variables-vm-podwaitreadyintervalcheck">#</a><br>Defines poll interval for pods ready check at statefulset rollout update |
| VM_PODWAITREADYTIMEOUT: `80s` <a href="#variables-vm-podwaitreadytimeout" id="variables-vm-podwaitreadytimeout">#</a><br>Defines single pod deadline to wait for transition to ready state |
| VM_PVC_WAIT_READY_INTERVAL: `5s` <a href="#variables-vm-pvc-wait-ready-interval" id="variables-vm-pvc-wait-ready-interval">#</a><br>Defines poll interval for PVC ready check |
| VM_PVC_WAIT_READY_TIMEOUT: `80s` <a href="#variables-vm-pvc-wait-ready-timeout" id="variables-vm-pvc-wait-ready-timeout">#</a><br>Defines poll timeout for PVC ready check |
| VM_WAIT_READY_INTERVAL: `5s` <a href="#variables-vm-wait-ready-interval" id="variables-vm-wait-ready-interval">#</a><br>Defines poll interval for VM CRs |
@cubic-dev-ai cubic-dev-ai bot Mar 17, 2026

P3: Narrow this description: `VM_WAIT_READY_INTERVAL` does not apply to all VM CRs, only to the `waitForStatus` loop used for VMAgent, VMCluster, and VMAuth.

Suggested change
| VM_WAIT_READY_INTERVAL: `5s` <a href="#variables-vm-wait-ready-interval" id="variables-vm-wait-ready-interval">#</a><br>Defines poll interval for VM CRs |
| VM_WAIT_READY_INTERVAL: `5s` <a href="#variables-vm-wait-ready-interval" id="variables-vm-wait-ready-interval">#</a><br>Defines poll interval for status checks of VMAgent, VMCluster and VMAuth CRs |

| VM_FORCERESYNCINTERVAL: `60s` <a href="#variables-vm-forceresyncinterval" id="variables-vm-forceresyncinterval">#</a><br>configures force resync interval for VMAgent, VMAlert, VMAlertmanager and VMAuth. |
| VM_ENABLESTRICTSECURITY: `false` <a href="#variables-vm-enablestrictsecurity" id="variables-vm-enablestrictsecurity">#</a><br>EnableStrictSecurity will add default `securityContext` to pods and containers created by operator Default PodSecurityContext include: 1. RunAsNonRoot: true 2. RunAsUser/RunAsGroup/FSGroup: 65534 '65534' refers to 'nobody' in all the used default images like alpine, busybox. If you're using customize image, please make sure '65534' is a valid uid in there or specify SecurityContext. 3. FSGroupChangePolicy: &onRootMismatch If KubeVersion>=1.20, use `FSGroupChangePolicy="onRootMismatch"` to skip the recursive permission change when the root of the volume already has the correct permissions 4. SeccompProfile: type: RuntimeDefault Use `RuntimeDefault` seccomp profile by default, which is defined by the container runtime, instead of using the Unconfined (seccomp disabled) mode. Default container SecurityContext include: 1. AllowPrivilegeEscalation: false 2. ReadOnlyRootFilesystem: true 3. Capabilities: drop: - all turn off `EnableStrictSecurity` by default, see https://github.com/VictoriaMetrics/operator/issues/749 for details |
14 changes: 10 additions & 4 deletions internal/config/config.go
@@ -546,13 +546,19 @@ type BaseOperatorConf struct {
// Defines deadline for deployment/statefulset
// to transition into ready state
AppReadyTimeout time.Duration `default:"80s" env:"VM_APPREADYTIMEOUT"`
AppWaitReadyTimeout time.Duration `default:"80s" env:"VM_APPREADYTIMEOUT"`
// Defines poll interval for pods ready check
// at statefulset rollout update
PodWaitReadyInterval time.Duration `default:"5s" env:"VM_PODWAITREADYINTERVALCHECK"`
// Defines single pod deadline
// to wait for transition to ready state
PodWaitReadyTimeout time.Duration `default:"80s" env:"VM_PODWAITREADYTIMEOUT"`
// Defines poll interval for pods ready check
// at statefulset rollout update
PodWaitReadyIntervalCheck time.Duration `default:"5s" env:"VM_PODWAITREADYINTERVALCHECK"`
// Defines poll interval for PVC ready check
PVCWaitReadyInterval time.Duration `default:"5s" env:"VM_PVC_WAIT_READY_INTERVAL"`
// Defines poll timeout for PVC ready check
PVCWaitReadyTimeout time.Duration `default:"80s" env:"VM_PVC_WAIT_READY_TIMEOUT"`
// Defines poll interval for VM CRs
VMWaitReadyInterval time.Duration `default:"5s" env:"VM_WAIT_READY_INTERVAL"`
// configures force resync interval for VMAgent, VMAlert, VMAlertmanager and VMAuth.
ForceResyncInterval time.Duration `default:"60s" env:"VM_FORCERESYNCINTERVAL"`
// EnableStrictSecurity will add default `securityContext` to pods and containers created by operator
100 changes: 38 additions & 62 deletions internal/controller/operator/factory/k8stools/interceptors.go
@@ -4,6 +4,7 @@ import (
"context"

appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
@@ -12,80 +13,55 @@ import (
vmv1beta1 "github.com/VictoriaMetrics/operator/api/operator/v1beta1"
)

func updateStatus(ctx context.Context, cl client.WithWatch, obj client.Object) error {
switch v := obj.(type) {
case *appsv1.StatefulSet:
v.Status.ObservedGeneration = v.Generation
v.Status.ReadyReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.UpdatedReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.CurrentReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.UpdateRevision = "v1"
v.Status.CurrentRevision = "v1"
case *appsv1.Deployment:
v.Status.ObservedGeneration = v.Generation
v.Status.Conditions = append(v.Status.Conditions, appsv1.DeploymentCondition{
Type: appsv1.DeploymentProgressing,
Reason: "NewReplicaSetAvailable",
Status: "True",
})
v.Status.UpdatedReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.ReadyReplicas = ptr.Deref(v.Spec.Replicas, 0)
case *vmv1beta1.VMAgent:
v.Status.UpdateStatus = vmv1beta1.UpdateStatusOperational
v.Status.ObservedGeneration = v.Generation
case *vmv1beta1.VMCluster:
v.Status.UpdateStatus = vmv1beta1.UpdateStatusOperational
v.Status.ObservedGeneration = v.Generation
case *vmv1beta1.VMAuth:
v.Status.UpdateStatus = vmv1beta1.UpdateStatusOperational
v.Status.ObservedGeneration = v.Generation
case *corev1.PersistentVolumeClaim:
v.Status.Capacity = v.Spec.Resources.Requests
default:
return nil
}
return cl.Status().Update(ctx, obj)
}

// GetInterceptorsWithObjects returns interceptors for objects
func GetInterceptorsWithObjects() interceptor.Funcs {
return interceptor.Funcs{
Create: func(ctx context.Context, cl client.WithWatch, obj client.Object, opts ...client.CreateOption) error {
switch v := obj.(type) {
case *appsv1.StatefulSet:
v.Status.ObservedGeneration = v.Generation
v.Status.ReadyReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.UpdatedReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.CurrentReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.UpdateRevision = "v1"
v.Status.CurrentRevision = "v1"
case *appsv1.Deployment:
v.Status.ObservedGeneration = v.Generation
v.Status.Conditions = append(v.Status.Conditions, appsv1.DeploymentCondition{
Type: appsv1.DeploymentProgressing,
Reason: "NewReplicaSetAvailable",
Status: "True",
})
v.Status.UpdatedReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.ReadyReplicas = ptr.Deref(v.Spec.Replicas, 0)
}
if err := cl.Create(ctx, obj, opts...); err != nil {
return err
}
switch v := obj.(type) {
case *vmv1beta1.VMAgent:
v.Status.UpdateStatus = vmv1beta1.UpdateStatusOperational
v.Status.ObservedGeneration = v.Generation
return cl.Status().Update(ctx, v)
case *vmv1beta1.VMCluster:
v.Status.UpdateStatus = vmv1beta1.UpdateStatusOperational
v.Status.ObservedGeneration = v.Generation
return cl.Status().Update(ctx, v)
case *vmv1beta1.VMAuth:
v.Status.UpdateStatus = vmv1beta1.UpdateStatusOperational
v.Status.ObservedGeneration = v.Generation
return cl.Status().Update(ctx, v)
}
return nil
return updateStatus(ctx, cl, obj)
},
Update: func(ctx context.Context, cl client.WithWatch, obj client.Object, opts ...client.UpdateOption) error {
switch v := obj.(type) {
case *appsv1.StatefulSet:
v.Status.ObservedGeneration = v.Generation
v.Status.ReadyReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.UpdatedReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.CurrentReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.UpdateRevision = "v1"
v.Status.CurrentRevision = "v1"
case *appsv1.Deployment:
v.Status.ObservedGeneration = v.Generation
v.Status.UpdatedReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.ReadyReplicas = ptr.Deref(v.Spec.Replicas, 0)
v.Status.Replicas = ptr.Deref(v.Spec.Replicas, 0)
}
if err := cl.Update(ctx, obj, opts...); err != nil {
return err
}
switch v := obj.(type) {
case *vmv1beta1.VMAgent:
v.Status.UpdateStatus = vmv1beta1.UpdateStatusOperational
v.Status.ObservedGeneration = v.Generation
return cl.Status().Update(ctx, v)
case *vmv1beta1.VMCluster:
v.Status.UpdateStatus = vmv1beta1.UpdateStatusOperational
v.Status.ObservedGeneration = v.Generation
return cl.Status().Update(ctx, v)
case *vmv1beta1.VMAuth:
v.Status.UpdateStatus = vmv1beta1.UpdateStatusOperational
v.Status.ObservedGeneration = v.Generation
return cl.Status().Update(ctx, v)
}
return nil
return updateStatus(ctx, cl, obj)
},
}
}
@@ -67,7 +67,7 @@ func DaemonSet(ctx context.Context, rclient client.Client, newObj, prevObj *apps
if err != nil {
return err
}
return waitDaemonSetReady(ctx, rclient, newObj, appWaitReadyDeadline)
return waitDaemonSetReady(ctx, rclient, newObj, appWaitReadyTimeout)
}

// waitDeploymentReady waits until the deployment's replicaSet rolls out and all new pods are ready
2 changes: 1 addition & 1 deletion internal/controller/operator/factory/reconcile/deploy.go
@@ -69,7 +69,7 @@ func Deployment(ctx context.Context, rclient client.Client, newObj, prevObj *app
if err != nil {
return err
}
return waitForDeploymentReady(ctx, rclient, newObj, appWaitReadyDeadline)
return waitForDeploymentReady(ctx, rclient, newObj, appWaitReadyTimeout)
}

// waitForDeploymentReady waits until the deployment's replicaSet rolls out and all new pods are ready
62 changes: 51 additions & 11 deletions internal/controller/operator/factory/reconcile/pvc.go
@@ -6,8 +6,10 @@ import (

corev1 "k8s.io/api/core/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"sigs.k8s.io/controller-runtime/pkg/client"

"github.com/VictoriaMetrics/operator/internal/controller/operator/factory/logger"
@@ -20,24 +22,62 @@
// in case of deletion timestamp > 0 does nothing
// user must manually remove finalizer if needed
func PersistentVolumeClaim(ctx context.Context, rclient client.Client, newObj, prevObj *corev1.PersistentVolumeClaim, owner *metav1.OwnerReference) error {
l := logger.WithContext(ctx)
var existingObj corev1.PersistentVolumeClaim
nsn := types.NamespacedName{Namespace: newObj.Namespace, Name: newObj.Name}
if err := rclient.Get(ctx, nsn, &existingObj); err != nil {
if k8serrors.IsNotFound(err) {
l.Info(fmt.Sprintf("creating new PVC=%s", nsn.String()))
if err := rclient.Create(ctx, newObj); err != nil {
return fmt.Errorf("cannot create new PVC=%s: %w", nsn.String(), err)
var existingObj corev1.PersistentVolumeClaim
err := retryOnConflict(func() error {
if err := rclient.Get(ctx, nsn, &existingObj); err != nil {
if k8serrors.IsNotFound(err) {
logger.WithContext(ctx).Info(fmt.Sprintf("creating new PVC=%s", nsn.String()))
return rclient.Create(ctx, newObj)
}
return fmt.Errorf("cannot get existing PVC=%s: %w", nsn.String(), err)
}
if !existingObj.DeletionTimestamp.IsZero() {
return nil
}
return fmt.Errorf("cannot get existing PVC=%s: %w", nsn.String(), err)
return updatePVC(ctx, rclient, &existingObj, newObj, prevObj, owner)
})
if err != nil {
return err
}
size := newObj.Spec.Resources.Requests[corev1.ResourceStorage]
if !existingObj.CreationTimestamp.IsZero() {
size = existingObj.Spec.Resources.Requests[corev1.ResourceStorage]
}
if err = waitForPVCReady(ctx, rclient, nsn, size); err != nil {
return err
}
if !existingObj.DeletionTimestamp.IsZero() {
l.Info(fmt.Sprintf("PVC=%s has non zero DeletionTimestamp, skip update."+
logger.WithContext(ctx).Info(fmt.Sprintf("PVC=%s has non zero DeletionTimestamp, skip update."+
" To fix this, make backup for this pvc, delete pvc finalizers and restore from backup.", nsn.String()))
return nil
}
return nil
}

return updatePVC(ctx, rclient, &existingObj, newObj, prevObj, owner)
func waitForPVCReady(ctx context.Context, rclient client.Client, nsn types.NamespacedName, size resource.Quantity) error {
var pvc corev1.PersistentVolumeClaim
return wait.PollUntilContextTimeout(ctx, pvcWaitReadyInterval, pvcWaitReadyTimeout, true, func(ctx context.Context) (done bool, err error) {
if err := rclient.Get(ctx, nsn, &pvc); err != nil {
if k8serrors.IsNotFound(err) {
return false, nil
}
return false, fmt.Errorf("cannot get PVC=%s: %w", nsn.String(), err)
}
if !pvc.DeletionTimestamp.IsZero() {
return true, nil
}
if len(pvc.Status.Capacity) == 0 {
return true, nil
}
actualSize := pvc.Status.Capacity[corev1.ResourceStorage]
if actualSize.Cmp(size) < 0 {
return false, nil
}
for _, condition := range pvc.Status.Conditions {
if condition.Type == corev1.PersistentVolumeClaimResizing && condition.Status == corev1.ConditionTrue {
return false, nil
}
}
return true, nil
})
}
@@ -38,6 +38,7 @@ func TestPersistentVolumeClaimReconcile(t *testing.T) {
},
},
}
pvc.Status.Capacity = pvc.Spec.Resources.Requests
for _, fn := range fns {
fn(pvc)
}
@@ -47,7 +48,7 @@ func TestPersistentVolumeClaimReconcile(t *testing.T) {
f := func(o opts) {
t.Helper()
ctx := context.Background()
cl := k8stools.GetTestClientWithActions(o.predefinedObjects)
cl := k8stools.GetTestClientWithActionsAndObjects(o.predefinedObjects)
synctest.Test(t, func(t *testing.T) {
assert.NoError(t, PersistentVolumeClaim(ctx, cl, o.new, o.prev, nil))
assert.Equal(t, o.actions, cl.Actions)
@@ -62,6 +63,7 @@ func TestPersistentVolumeClaimReconcile(t *testing.T) {
actions: []k8stools.ClientAction{
{Verb: "Get", Kind: "PersistentVolumeClaim", Resource: nn},
{Verb: "Create", Kind: "PersistentVolumeClaim", Resource: nn},
{Verb: "Get", Kind: "PersistentVolumeClaim", Resource: nn},
},
})

@@ -74,6 +76,7 @@ func TestPersistentVolumeClaimReconcile(t *testing.T) {
},
actions: []k8stools.ClientAction{
{Verb: "Get", Kind: "PersistentVolumeClaim", Resource: nn},
{Verb: "Get", Kind: "PersistentVolumeClaim", Resource: nn},
},
})

@@ -89,6 +92,7 @@ func TestPersistentVolumeClaimReconcile(t *testing.T) {
actions: []k8stools.ClientAction{
{Verb: "Get", Kind: "PersistentVolumeClaim", Resource: nn},
{Verb: "Update", Kind: "PersistentVolumeClaim", Resource: nn},
{Verb: "Get", Kind: "PersistentVolumeClaim", Resource: nn},
},
})

@@ -106,6 +110,7 @@ func TestPersistentVolumeClaimReconcile(t *testing.T) {
actions: []k8stools.ClientAction{
{Verb: "Get", Kind: "PersistentVolumeClaim", Resource: nn},
{Verb: "Update", Kind: "PersistentVolumeClaim", Resource: nn},
{Verb: "Get", Kind: "PersistentVolumeClaim", Resource: nn},
},
})
}
27 changes: 18 additions & 9 deletions internal/controller/operator/factory/reconcile/reconcile.go
@@ -18,23 +18,32 @@ import (
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

vmv1beta1 "github.com/VictoriaMetrics/operator/api/operator/v1beta1"
"github.com/VictoriaMetrics/operator/internal/config"
"github.com/VictoriaMetrics/operator/internal/controller/operator/factory/finalize"
"github.com/VictoriaMetrics/operator/internal/controller/operator/factory/logger"
)

var (
podWaitReadyIntervalCheck = 50 * time.Millisecond
appWaitReadyDeadline = 5 * time.Second
podWaitReadyTimeout = 5 * time.Second
vmStatusInterval = 5 * time.Second
pvcWaitReadyInterval = 1 * time.Second
pvcWaitReadyTimeout = 5 * time.Second

podWaitReadyInterval = 1 * time.Second
podWaitReadyTimeout = 5 * time.Second

appWaitReadyTimeout = 5 * time.Second
vmWaitReadyInterval = 5 * time.Second
)

// Init sets package defaults
func Init(intervalCheck, appWaitDeadline, podReadyDeadline, statusInterval, statusUpdate time.Duration) {
podWaitReadyIntervalCheck = intervalCheck
appWaitReadyDeadline = appWaitDeadline
podWaitReadyTimeout = podReadyDeadline
vmStatusInterval = statusInterval
func Init(cfg *config.BaseOperatorConf, statusUpdate time.Duration) {
podWaitReadyInterval = cfg.PodWaitReadyInterval
podWaitReadyTimeout = cfg.PodWaitReadyTimeout

pvcWaitReadyInterval = cfg.PVCWaitReadyInterval
pvcWaitReadyTimeout = cfg.PVCWaitReadyTimeout

appWaitReadyTimeout = cfg.AppWaitReadyTimeout
vmWaitReadyInterval = cfg.VMWaitReadyInterval
statusUpdateTTL = statusUpdate
}
