duty-fetching: simplify AttesterHandler #2712

Open
iurii-ssv wants to merge 5 commits into stage from duty-fetching-simplify
Conversation

@iurii-ssv
Contributor

iurii-ssv commented Mar 4, 2026

This PR aims to clarify, document, and simplify the behavior of AttesterHandler, also making the handling of reorgs and indices-change events more precise, especially at epoch boundaries.

Notable changes:

  • instead of having fetchCurrentEpoch/fetchNextEpoch flags dictate when to fetch duties, we now keep an exact epoch->intent mapping (dutyFetchIntents) that works correctly when crossing an epoch boundary; the meaning of fetchCurrentEpoch/fetchNextEpoch depended on what the current epoch was, so intents could be interpreted incorrectly at epoch boundaries. This is especially critical for our fetch retries, which issue retry requests on the next slot (and, if necessary, the slot after that, and so on), potentially crossing an epoch boundary with outdated/incorrect fetch intents
  • on reorg and indices-change events, we now check whether we are at the last slot of the current epoch; if so, we no longer re-fetch the duties for the current epoch, since the next slot-ticker tick will land on the very first slot of the next epoch (making those duties irrelevant anyway)
  • on indices-change events, previously we would not immediately re-fetch the duties for the current/next epochs, so the correct (updated) duties might not be ready by the very next slot; now we re-fetch the relevant duties immediately so they are ready for execution on the very next slot (the next slot-ticker tick)
  • indices-change events are timed to be processed early in the slot (or else deferred and processed early in the next slot) so that any duty re-fetching/processing they trigger does not delay the next tick of the slot-ticker. This mostly accommodates the previous point, but it also drops the assumption that processing indices-change events is always fast enough not to interfere with slot-ticker ticks
  • additionally, to keep things clear and extensible, this PR defines two separate deadlines (one for slot-ticker ticks and one for duty executions) instead of the single shared deadline we had previously, which effectively forced us to use the max of the two deadlines and was both suboptimal and confusing

If this approach looks good, we should apply it to SyncCommitteeHandler as well (and to ProposerHandler; see also a relevant PR with some prior work towards that: #2699)

Resolves #2705 for AttesterHandler.

@iurii-ssv iurii-ssv requested review from a team as code owners March 4, 2026 11:54
@codecov

codecov bot commented Mar 4, 2026

Codecov Report

❌ Patch coverage is 93.08176% with 11 lines in your changes missing coverage. Please review.
✅ Project coverage is 56.2%. Comparing base (3ce9019) to head (90b69bf).

Files with missing lines | Patch % | Lines
operator/duties/attester.go | 91.5% | 6 Missing and 4 partials ⚠️
operator/duties/proposer.go | 92.3% | 1 Missing ⚠️


@greptile-apps
Contributor

greptile-apps bot commented Mar 4, 2026

Greptile Summary

This PR simplifies the AttesterHandler by replacing two boolean fetch-intent fields (fetchCurrentEpoch/fetchNextEpoch) with an epoch-keyed map (dutyFetchIntents), making fetch scheduling more precise — particularly at epoch boundaries and during retry scenarios that may cross epoch transitions. It also refactors all handlers to pass an explicit dutyDeadline to ExecuteDuties/ExecuteCommitteeDuties, cleanly separating the tick-processing deadline from the duty-execution deadline.

Key changes:

  • attester.go: dutyFetchIntents map[phase0.Epoch]bool replaces the two booleans; new prepareCurrentEpoch / prepareNextEpoch helpers encapsulate fetch-intent fulfillment; indices-change handling is moved inside the tick closure behind a deadline-guarded select so it can't delay the next slot tick.
  • scheduler.go: ReorgEvent.Previous field removed (superseded by !Current); ExecuteDuties / ExecuteCommitteeDuties signatures gain an explicit dutyDeadline time.Time parameter, eliminating the implicit inheritance of the parent context's deadline via utils.CtxWithParentDeadline.
  • base_handler.go: The shared ctxWithDeadlineOnNextSlot / ctxWithDeadlineInOneEpoch helpers are deleted since per-call deadlines are now managed at each call site.
  • Other handlers (committee, proposer, sync_committee, validator_registration, voluntary_exit): Tick closures with per-tick deadline contexts are removed; ExecuteDuties is updated to pass explicit deadlines.
  • dutystore: ResetEpoch renamed to EraseEpochData for clarity.

Notable issues found:

  • atLastSlotOfCurrentEpoch() calls EstimatedCurrentSlot() internally rather than accepting the already-computed currentSlot as a parameter, introducing a potential clock-skew inconsistency at epoch boundaries.
  • processFetching in sync_committee.go and proposer.go lost their per-tick deadline contexts and now run with the unbounded parent ctx, meaning a slow beacon node can stall those handlers' tick loops indefinitely.

Confidence Score: 3/5

  • The attester logic improvement is solid, but two correctness issues (epoch-boundary slot re-estimation and missing tick deadlines for sync committee / proposer) should be addressed before merging.
  • The core intent-map approach is clean and well-tested. However, atLastSlotOfCurrentEpoch() re-estimating the current slot from the wall clock (rather than using the already-captured slot) is a genuine logic race that can mis-classify the epoch boundary under normal operating conditions. Additionally, removing per-tick deadline contexts from sync_committee.go and proposer.go's processFetching calls is a behavioural regression: previously a stalled beacon node would be bounded by ~1 slot; now it can block those handlers indefinitely. These two issues reduce confidence in a straightforward merge.
  • operator/duties/attester.go (atLastSlotOfCurrentEpoch), operator/duties/sync_committee.go and operator/duties/proposer.go (missing per-tick fetch deadline)

Important Files Changed

Filename Overview
operator/duties/attester.go Core change: replaces fetchCurrentEpoch/fetchNextEpoch booleans with an epoch-keyed dutyFetchIntents map; introduces prepareCurrentEpoch/prepareNextEpoch helpers; moves indices-change handling inline within the tick closure with a deadline-guarded select. Two issues: atLastSlotOfCurrentEpoch() re-calls EstimatedCurrentSlot() rather than reusing the already-computed slot (logic race at epoch boundary), and the error path in prepareCurrentEpoch/prepareNextEpoch drops contextual logger fields.
operator/duties/sync_committee.go Removed the per-tick ctxWithDeadlineOnNextSlot wrapper around processExecution + processFetching; beacon node fetch inside processFetching now runs with the unbounded parent ctx. Regression: a slow/unavailable beacon node can block the sync committee tick handler indefinitely, delaying all future slot processing. Also updates dutiesExecutor.ExecuteDuties signature to accept explicit dutyDeadline.
operator/duties/proposer.go Removed per-tick ctxWithDeadlineOnNextSlot; processFetching now uses unbounded ctx (same regression as sync_committee). Renamed ResetEpoch → EraseEpochData. Adds explicit dutyDeadline (1 slot) to ExecuteDuties.
operator/duties/scheduler.go Removed the Previous field from ReorgEvent (replaced by !Current); updated ExecuteDuties and ExecuteCommitteeDuties signatures to accept an explicit dutyDeadline time.Time; simplified duty goroutine setup by replacing utils.CtxWithParentDeadline with context.WithDeadline(ctx, dutyDeadline). Clean, correct changes.
operator/duties/committee.go Removed ctxWithDeadlineInOneEpoch wrapper (safe: processExecution only does a map lookup + goroutine spawns, no blocking IO). Passes explicit dutyDeadline (slot + SLOTS_PER_EPOCH + 1) to ExecuteCommitteeDuties.
operator/duties/base_handler.go Deleted the ctxWithDeadlineOnNextSlot, ctxWithDeadlineInOneEpoch, and ctxWithDeadlineOnSlot helpers since deadline management was moved to each handler or to the executor call-site.
operator/duties/dutystore/duties.go Trivial rename: ResetEpoch → EraseEpochData for clearer semantics. No logic changes.

Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Slot Ticker fires] --> B[Compute currentSlot / currentEpoch / nextEpoch]
    B --> C[tickCtx = deadline @ SlotStartTime currentSlot+1]
    C --> D[prepareCurrentEpoch\ncheck dutyFetchIntents currentEpoch]
    D --> E{intent exists\nand unfulfilled?}
    E -- Yes --> F[fetchAndProcessDuties\ncurrentEpoch]
    F --> G[dutyFetchIntents currentEpoch = true]
    E -- No --> H[executeAggregatorDuties]
    G --> H
    H --> I[prepareNextEpoch\ncheck dutyFetchIntents nextEpoch]
    I --> J{intent exists\nunfulfilled\nand good time?}
    J -- Yes --> K[fetchAndProcessDuties\nnextEpoch]
    J -- No --> L[Cleanup old epoch data\nif last slot of epoch]
    K --> L
    L --> M{Inner select:\nindicesChange or deadline?}
    M -- indicesChange received --> N[Set intents for currentEpoch + nextEpoch]
    N --> O{atLastSlotOfCurrentEpoch?}
    O -- Yes --> P[prune currentEpoch intent\nprepareNextEpoch immediately]
    O -- No --> Q[prepareCurrentEpoch immediately]
    M -- Too late / timeout --> R[Defer to next slot]
    P --> S[Schedule nextEpoch intent if not already set]
    Q --> S
    R --> S

    AA[Reorg Event] --> AB[Compute currentSlot via EstimatedCurrentSlot]
    AB --> AC[reorgCtx = deadline @ SlotStartTime currentSlot+2]
    AC --> AD{reorgEvent.Current?}
    AD -- false prev epoch --> AE[Set intent for reorgEpoch]
    AD -- true curr epoch --> AF[Skip current epoch intent]
    AE --> AG[Set intent for reorgEpoch+1]
    AF --> AG
    AG --> AH{atLastSlotOfCurrentEpoch?}
    AH -- Yes --> AI[prune currentEpoch\nprepareNextEpoch]
    AH -- No --> AJ[prepareCurrentEpoch]
```

Comments Outside Diff (1)

  1. operator/duties/sync_committee.go, line 1273-1274 (link)

    Tick processing loses per-tick deadline for beacon node fetch calls

    Previously processFetching (which calls h.beaconNode.SyncCommitteeDuties(ctx, ...)) was wrapped in a tickCtx with a ctxWithDeadlineOnNextSlot deadline (~1 slot / ~12 s). Now it uses the long-lived ctx (the scheduler pool's parent context, which has no deadline). If the beacon node is slow or unavailable, processFetching will block indefinitely, stalling all future slot-tick processing for the sync committee handler.

    The attester handler deliberately kept a per-tick deadline (tickCtx with SlotStartTime(currentSlot+1)). The sync committee handler should similarly bound the fetch call with a per-tick deadline:

```go
tickCtx, cancel := context.WithDeadline(ctx, h.beaconConfig.SlotStartTime(slot+1))
defer cancel()
h.processExecution(tickCtx, period, slot)
h.processFetching(tickCtx, epoch, period, true)
```

    The same issue applies to processFetching in proposer.go's tick case.

Last reviewed commit: 90b69bf

Comment on lines +128 to +147

```go
select {
case <-h.indicesChange:
	logger.Info("🔁 indices change received")

	// Some validator-related state has updated, means we need to re-fetch the duties for the current
	// and next epoch to ensure we have the up-to-date duties for all validators for both epochs.
	h.dutyFetchIntents[currentEpoch] = struct{}{}
	h.dutyFetchIntents[currentEpoch+1] = struct{}{}

	// When at epoch boundary, we only care about pre-fetching & preparing the duties for the next epoch
	// (the current epoch will have been passed upon the next slot-tick). Otherwise, pre-fetch & prepare
	// the duties for the current epoch.
	if h.lastTickedSlotAtEpochBoundary() {
		h.prepareNextEpoch(ctx, currentEpoch, currentSlot)
	} else {
		h.prepareCurrentEpoch(ctx, currentEpoch, currentSlot)
	}
case <-time.After(time.Until(indicesChangeDeadline)):
	// It's too late(risky) to handle indices change on the current slot, we'll do it on the next slot.
}
```
Contributor

The inner select uses time.After(time.Until(indicesChangeDeadline)) without a ctx.Done() case. If the context is cancelled while waiting at this select (e.g., during shutdown), the goroutine will block up to ~one IntervalDuration (~4 seconds for mainnet) before noticing cancellation, since the outer for { select {...} } only checks ctx.Done() after this inner select returns.

Suggested change

```diff
 indicesChangeDeadline := h.beaconConfig.SlotStartTime(currentSlot).Add(h.beaconConfig.IntervalDuration())
 select {
 case <-h.indicesChange:
 	logger.Info("🔁 indices change received")
 	// Some validator-related state has updated, means we need to re-fetch the duties for the current
 	// and next epoch to ensure we have the up-to-date duties for all validators for both epochs.
 	h.dutyFetchIntents[currentEpoch] = struct{}{}
 	h.dutyFetchIntents[currentEpoch+1] = struct{}{}
 	// When at epoch boundary, we only care about pre-fetching & preparing the duties for the next epoch
 	// (the current epoch will have been passed upon the next slot-tick). Otherwise, pre-fetch & prepare
 	// the duties for the current epoch.
 	if h.lastTickedSlotAtEpochBoundary() {
 		h.prepareNextEpoch(ctx, currentEpoch, currentSlot)
 	} else {
 		h.prepareCurrentEpoch(ctx, currentEpoch, currentSlot)
 	}
+case <-ctx.Done():
+	return
 case <-time.After(time.Until(indicesChangeDeadline)):
 	// It's too late(risky) to handle indices change on the current slot, we'll do it on the next slot.
 }
```

Context Used: Rule from dashboard - When implementing delays or sleep operations in Go, ensure they respect context cancellation by usin... (source)

Contributor Author

Makes sense, added the `case <-ctx.Done()` clause now.

Comment on lines 183 to 192

```go
func (h *AttesterHandler) HandleInitialDuties(ctx context.Context) {
	ctx, cancel := context.WithTimeout(ctx, h.beaconConfig.SlotDuration)
	defer cancel()

	h.fetchCurrentEpoch = true
	h.fetchNextEpoch = true

	slot := h.beaconConfig.EstimatedCurrentSlot()
	epoch := h.beaconConfig.EstimatedEpochAtSlot(slot)
	h.processFetching(ctx, epoch, slot)
}

func (h *AttesterHandler) processFetching(ctx context.Context, epoch phase0.Epoch, slot phase0.Slot) {
	ctx, span := tracer.Start(ctx,
		observability.InstrumentName(observabilityNamespace, "attester.fetch"),
		trace.WithAttributes(
			observability.BeaconEpochAttribute(epoch),
			observability.BeaconSlotAttribute(slot),
			observability.BeaconRoleAttribute(spectypes.BNRoleAttester),
		))
	defer span.End()

	if h.fetchCurrentEpoch {
		span.AddEvent("fetching current epoch duties")
		if err := h.fetchAndProcessDuties(ctx, epoch, slot); err != nil {
			h.logger.Error("failed to fetch duties for current epoch", zap.Error(err))
			span.SetStatus(codes.Error, err.Error())
			return
		}
		h.fetchCurrentEpoch = false
	}
	currentSlot := h.beaconConfig.EstimatedCurrentSlot()
	currentEpoch := h.beaconConfig.EstimatedEpochAtSlot(currentSlot)

	// This additional shouldFetchNexEpoch check here is an optimization that prevents
	// unnecessary(duplicate) fetches in some cases + also delays the fetch until we
	// cannot delay it much further.
	if h.fetchNextEpoch && h.shouldFetchNexEpoch(slot) {
		span.AddEvent("fetching next epoch duties")
		if err := h.fetchAndProcessDuties(ctx, epoch+1, slot); err != nil {
			h.logger.Error("failed to fetch duties for next epoch", zap.Error(err))
			span.SetStatus(codes.Error, err.Error())
			return
		}
		h.fetchNextEpoch = false
	}
	h.dutyFetchIntents[currentEpoch] = struct{}{}
	h.dutyFetchIntents[currentEpoch+1] = struct{}{}

	span.SetStatus(codes.Ok, "")
	h.prepareCurrentEpoch(ctx, currentEpoch, currentSlot)
	h.prepareNextEpoch(ctx, currentEpoch, currentSlot)
}
```
Contributor

HandleInitialDuties is called synchronously during scheduler startup (see // This call is blocking. comment in scheduler.go) before HandleDuties is started. Without a timeout, if the beacon node is slow or unavailable, the blocking network calls inside prepareCurrentEpoch and prepareNextEpoch will hang indefinitely, stalling the entire scheduler startup.

The timeout should be restored:

```go
func (h *AttesterHandler) HandleInitialDuties(ctx context.Context) {
	ctx, cancel := context.WithTimeout(ctx, h.beaconConfig.SlotDuration)
	defer cancel()

	currentSlot := h.beaconConfig.EstimatedCurrentSlot()
	currentEpoch := h.beaconConfig.EstimatedEpochAtSlot(currentSlot)
	...
}
```

Contributor Author

Restored now

Comment on lines 98 to 105

```diff
 func() {
-	tickCtx, cancel := h.ctxWithDeadlineInOneEpoch(ctx, slot)
+	tickCtx, cancel := h.ctxWithDeadlineInOneEpoch(ctx, currentSlot)
 	defer cancel()

-	h.executeAggregatorDuties(tickCtx, currentEpoch, slot)
-	h.processFetching(tickCtx, currentEpoch, slot)
+	h.executeAggregatorDuties(tickCtx, currentEpoch, currentSlot)

+	h.prepareNextEpoch(ctx, currentEpoch, currentSlot)
 }()
```
Contributor

Inside the func() closure, tickCtx is created with a one-epoch deadline via ctxWithDeadlineInOneEpoch() to bound duty-fetching calls. However, prepareNextEpoch on line 104 is called with the parent ctx instead of tickCtx, bypassing the deadline entirely.

If the beacon node is slow, prepareNextEpoch can now run unconstrained during the tick-processing window. Either pass tickCtx to prepareNextEpoch, or — if the intent is to allow next-epoch fetching to run longer — move it outside the closure and document why:

Suggested change

```diff
 func() {
 	tickCtx, cancel := h.ctxWithDeadlineInOneEpoch(ctx, currentSlot)
 	defer cancel()
 	h.executeAggregatorDuties(tickCtx, currentEpoch, currentSlot)
-	h.prepareNextEpoch(ctx, currentEpoch, currentSlot)
+	h.prepareNextEpoch(tickCtx, currentEpoch, currentSlot)
 }()
```

Contributor Author

This has been addressed now.

@iurii-ssv iurii-ssv marked this pull request as draft March 4, 2026 12:01
@iurii-ssv iurii-ssv marked this pull request as ready for review March 6, 2026 13:29
@iurii-ssv
Contributor Author

@greptile could you re-review this PR?

Comment on lines +502 to 506

```diff
 func (h *AttesterHandler) atLastSlotOfCurrentEpoch() bool {
 	currentSlot := h.beaconConfig.EstimatedCurrentSlot()
 	slotsPerEpoch := h.beaconConfig.SlotsPerEpoch
-	return uint64(slot)%slotsPerEpoch > slotsPerEpoch/2-2
+	return uint64(currentSlot+1)%slotsPerEpoch == 0
 }
```
Contributor

atLastSlotOfCurrentEpoch re-estimates the current slot independently

atLastSlotOfCurrentEpoch() calls EstimatedCurrentSlot() again internally, rather than reusing the currentSlot value already computed by the caller. This creates a potential inconsistency near epoch boundaries:

  • In the tick handler, currentSlot comes from h.ticker.Slot(). If tick processing takes non-trivial time (e.g., 1+ seconds), EstimatedCurrentSlot() (wall-clock based) can return a slot one ahead of the ticker's slot.
  • Example: we're at slot 31 (last slot of epoch 0), h.ticker.Slot() = 31, but by the time the inner select resolves and atLastSlotOfCurrentEpoch() is called, the clock has advanced to slot 32 → (32+1)%32 = 1 ≠ 0 → returns false, causing prepareCurrentEpoch to be called instead of prepareNextEpoch at epoch-boundary.

The same issue affects calls from HandleInitialDuties and the reorg handler where currentSlot is also independently computed moments earlier.

The fix is to accept currentSlot phase0.Slot as a parameter:

```go
func (h *AttesterHandler) atLastSlotOfCurrentEpoch(currentSlot phase0.Slot) bool {
	slotsPerEpoch := h.beaconConfig.SlotsPerEpoch
	return uint64(currentSlot+1)%slotsPerEpoch == 0
}
```

and update all call sites to pass their already-computed currentSlot.

```go
func() {
	tickCtx, cancel := h.ctxWithDeadlineInOneEpoch(ctx, reorgEvent.Slot)
	defer cancel()
	// Clean up the irrelevant data to prevent infinite memory growth at the 1 slot of the epoch.
```
Contributor

Misleading comment: "at the 1 slot of the epoch"

The comment says "at the 1 slot of the epoch", but the condition uint64(currentSlot+1)%slotsPerEpoch == 0 actually fires at the last slot of the epoch (e.g., slot 31 in a 32-slot epoch, since 32 % 32 == 0).

Suggested change

```diff
-// Clean up the irrelevant data to prevent infinite memory growth at the 1 slot of the epoch.
+// Clean up the irrelevant data to prevent infinite memory growth at the last slot of the epoch.
```

Comment on lines +300 to +308

```go
	err := h.fetchAndProcessDuties(ctx, logger, currentEpoch, currentSlot)
	if err != nil {
		h.logger.Error("failed to prepare duties for current epoch", zap.Error(err))
		return
	}
	h.dutyFetchIntents[currentEpoch] = true // the intent has been fulfilled

	logger.Info("fetching duties for the current epoch succeeded")
}
```
Contributor

Inconsistent logger usage: error path drops contextual fields

The info-level logs in prepareCurrentEpoch (and prepareNextEpoch) use the logger parameter (enriched with epoch_slot_pos, current_epoch, current_slot), but the error path falls back to h.logger (base logger, no context fields). This makes the error harder to correlate in Kibana.

Suggested change

```diff
 err := h.fetchAndProcessDuties(ctx, logger, currentEpoch, currentSlot)
 if err != nil {
-	h.logger.Error("failed to prepare duties for current epoch", zap.Error(err))
+	logger.Error("failed to prepare duties for current epoch", zap.Error(err))
 	return
 }
 h.dutyFetchIntents[currentEpoch] = true // the intent has been fulfilled

 logger.Info("fetching duties for the current epoch succeeded")
```
