
Conversation


@li-zhixin li-zhixin commented Jan 10, 2026

Summary

  • Add batch logging support to improve performance when sending multiple logs
  • New configuration options: batchLogging, batchLoggingSize, and batchLoggingFlushInterval
  • Logs are accumulated and sent in batches to reduce API calls

Changes

  • lib/report-portal-client.js: Added batch logging logic with queue management and flush mechanism
  • lib/commons/config.js: Added new configuration options for batch logging
  • index.d.ts: Added TypeScript type definitions for new options
  • README.md: Added documentation for batch logging feature
  • __tests__/report-portal-client.spec.js: Added unit tests for batch logging functionality

Test plan

  • [✅] Unit tests added for batch logging functionality
  • [✅] Manual testing with ReportPortal instance

Summary by CodeRabbit

  • New Features

    • Added batched logging mode with options for enabling batch logs, buffer size, payload limit, retry count and retry delay; automatic flush on thresholds or when finishing a run; manual flush API available.
  • Tests

    • Added extensive tests covering batching triggers, attachments, flush/finish behavior, retries, error handling, concurrency, and backward compatibility.
  • Documentation

    • Updated README and examples with batch logging configuration, usage, flush guidance, and related notes.


Reduce HTTP overhead by buffering logs and sending them in batches,
configurable via batchLogs, batchLogsSize, and batchPayloadLimit options.
@li-zhixin li-zhixin requested a review from AmsterGet as a code owner January 10, 2026 09:31

coderabbitai bot commented Jan 10, 2026

Walkthrough

Adds configurable batched log buffering and transmission to the ReportPortal JS client: new config options, in-memory buffering with payload accounting, multipart batch sends with retry/delay, a public flushLogs() method, TypeScript declarations, tests, and README documentation updates.

Changes

  • Documentation (README.md): Adds Batch logging and flushLogs sections, usage examples, auto-flush triggers (buffer size, payload limit, finishLaunch) and retry policy (see the usage sketch after this list).
  • Type Definitions (index.d.ts): Adds batchLogs?, batchLogsSize?, batchPayloadLimit?, batchRetryCount?, batchRetryDelay? to ReportPortalConfig and flushLogs(): Promise<void> to ReportPortalClient.
  • Configuration (lib/commons/config.js): Exposes new config properties with defaults: batchLogs, batchLogsSize (10), batchPayloadLimit (65011712), batchRetryCount (5), batchRetryDelay (2000).
  • Client Implementation (lib/report-portal-client.js): Adds in-memory log buffer, payload-size tracking, addLogToBuffer, enqueueBatch, batch queue, flushLogs, sendLogsBatch (multipart), buildBatchMultiPartStream, calculateLogSize, retry/delay logic; sendLog now routes to buffer when enabled; ensures flush before finishLaunch.
  • Tests (__tests__/report-portal-client.spec.js): Adds extensive tests for batching: default/custom configs, buffer-size and payload triggers, multipart attachments, flush on finish, retry semantics, error mapping to per-log promises, concurrent flush behavior, and backward compatibility when batching disabled.
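
For orientation, here is a minimal usage sketch assuming the option names listed above (batchLogs, batchLogsSize, batchPayloadLimit) and the PR's flushLogs() method; the endpoint, apiKey, project, and launch values are placeholders, and the exact constructor field names may differ between client versions.

```js
const RPClient = require('@reportportal/client-javascript');

async function demo() {
  const client = new RPClient({
    apiKey: '<api_key>',                 // placeholder; older versions use `token`
    endpoint: 'https://rp.example.com/api/v1',
    project: 'my_project',
    launch: 'batch-logging-demo',
    batchLogs: true,                     // opt in; default is false
    batchLogsSize: 10,                   // flush after 10 buffered logs
    batchPayloadLimit: 65011712,         // ~62 MB cap per batch
  });

  const launchObj = client.startLaunch({ startTime: client.helpers.now() });
  const suiteObj = client.startTestItem(
    { name: 'suite', type: 'SUITE', startTime: client.helpers.now() },
    launchObj.tempId,
  );

  // With batching enabled, these calls buffer instead of firing one request each.
  for (let i = 0; i < 25; i += 1) {
    client.sendLog(suiteObj.tempId, {
      level: 'INFO',
      message: `step ${i}`,
      time: client.helpers.now(),
    });
  }

  await client.flushLogs();              // manual flush; also happens on thresholds and finishLaunch
  client.finishTestItem(suiteObj.tempId, { endTime: client.helpers.now() });
  await client.finishLaunch(launchObj.tempId, { endTime: client.helpers.now() }).promise;
}

demo();
```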

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant RPClient as RPClient
    participant Buffer as Log Buffer
    participant Builder as Multipart Builder
    participant Server as ReportPortal Server

    User->>RPClient: sendLog(logData, file?)
    alt batchLogs enabled
        RPClient->>Buffer: addLogToBuffer(logData, file)
        Buffer->>Buffer: calculateLogSize & update buffer
        Buffer->>Buffer: check thresholds (count / payload)
        alt threshold met
            Buffer->>RPClient: trigger flush
        end
    else
        RPClient->>Server: POST single log (immediate)
        Server-->>RPClient: response
    end

    User->>RPClient: flushLogs() or finishLaunch()
    RPClient->>Buffer: collect buffered logs
    alt logs present
        RPClient->>Builder: build multipart stream
        Builder-->>RPClient: multipart stream
        RPClient->>Server: POST batch (multipart/form-data)
        Server-->>RPClient: per-log responses
        RPClient->>Buffer: resolve/reject per-log promises, remove sent logs
    end
```
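
As a standalone illustration of the buffering decision shown in the diagram (not the PR's code; every name in this sketch is invented), a tiny buffer with the same two triggers could look like this:

```js
// Minimal buffer with a count threshold and a payload-size threshold.
class TinyLogBuffer {
  constructor({ maxCount = 10, maxPayload = 65011712, onFlush }) {
    this.maxCount = maxCount;
    this.maxPayload = maxPayload;
    this.onFlush = onFlush;      // async (entries) => Promise
    this.entries = [];
    this.payloadSize = 0;
  }

  add(entry, size) {
    // If this entry would push the buffer past the payload cap, flush what is there first.
    if (this.entries.length > 0 && this.payloadSize + size > this.maxPayload) {
      this.flush();
    }
    this.entries.push(entry);
    this.payloadSize += size;
    // Flush as soon as the count threshold is reached.
    if (this.entries.length >= this.maxCount) {
      this.flush();
    }
  }

  flush() {
    if (this.entries.length === 0) return Promise.resolve();
    const batch = this.entries;
    this.entries = [];
    this.payloadSize = 0;
    return Promise.resolve(this.onFlush(batch));
  }
}

// Example: flush after every 3 entries.
const buf = new TinyLogBuffer({ maxCount: 3, onFlush: async (batch) => console.log(batch.length) });
['a', 'b', 'c', 'd'].forEach((msg) => buf.add({ msg }, msg.length));
```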

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • maria-hambardzumian

Poem

🐰 I gathered logs in a cozy stack,

Multipart parcels for the network track.
Retry with patience, flush on command—
Hopping bytes safely to the serverland. 🥕

🚥 Pre-merge checks: ✅ 3 passed
  • Description Check: ✅ Passed. Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly and accurately summarizes the main change: adding batch logging support to the ReportPortal JavaScript client.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.


📜 Recent review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cd4fb7c and e5f8229.

📒 Files selected for processing (5)
  • README.md
  • __tests__/report-portal-client.spec.js
  • index.d.ts
  • lib/commons/config.js
  • lib/report-portal-client.js
🚧 Files skipped from review as they are similar to previous changes (2)
  • index.d.ts
  • lib/commons/config.js
🔇 Additional comments (21)
README.md (2)

126-130: Documentation for batch logging configuration options looks good.

The new configuration options are well-documented with clear defaults and descriptions. The options align with the implementation in lib/commons/config.js and lib/report-portal-client.js.


600-633: Batch logging documentation is comprehensive and accurate.

The documentation clearly explains:

  • When batches are automatically sent (buffer size, payload limit, finishLaunch)
  • Retry behavior for failed requests
  • Usage examples for both batch configuration and manual flushing
__tests__/report-portal-client.spec.js (11)

1124-1152: Configuration tests are thorough.

Tests verify default values and custom configuration options are correctly applied. Good coverage of the new batch logging configuration.


1154-1209: Batch send by length tests look correct.

Tests properly verify that batches are sent when the buffer reaches batchLogsSize and that logs remain buffered when below the threshold.


1211-1241: Payload size trigger test is well-designed.

The test correctly validates that when a new log would exceed batchPayloadLimit, the previous logs are flushed first.


1243-1279: Multipart file attachment test validates correct Content-Type header.

Good test ensuring that file attachments are properly handled in batched mode with multipart/form-data encoding.


1281-1304: Backward compatibility test ensures non-batched mode still works.

This is important for ensuring existing users aren't affected when batchLogs is disabled (default).


1306-1363: Flush on finishLaunch test validates critical behavior.

This test ensures logs are properly flushed before the launch finishes, which is essential for data integrity.


1365-1400: Error handling test correctly verifies promise rejection propagation.

The test ensures that when batch send fails, individual log promises are properly rejected with the error.


1402-1447: Multiple files test validates file-log association in batch.

Good test verifying that multiple files in a single batch are correctly included in the multipart payload.


1449-1500: Concurrent flush test validates batch queue behavior.

This is an important test ensuring that logs added during an ongoing flush are queued and sent in a subsequent batch, preventing data loss.


1502-1528: Oversized log warning test validates user feedback.

Good defensive test ensuring users are warned when a single log exceeds the payload limit.


1530-1665: Retry mechanism tests are comprehensive.

Tests cover:

  • Successful retry after transient failures
  • Rejection after all retries exhausted
  • Continued processing of subsequent batches after failure

This validates the resilience of the batch logging feature.

lib/report-portal-client.js (8)

68-74: Batch logging state initialization looks correct.

The new instance variables properly initialize the batch logging infrastructure:

  • logBuffer and logBufferPayloadSize for current batch accumulation
  • pendingLogResolvers for promise management
  • flushingLogs and flushPromise for serialization
  • batchQueue for pending batches

286-325: Double flush in finishLaunch ensures log delivery.

The two-phase flush approach is well-designed:

  1. First flush before waiting for children prevents deadlock
  2. Second flush catches any logs produced during child completion

This is a robust pattern for ensuring all logs are sent before launch completion.
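
A schematic of that two-phase pattern, assuming hypothetical helper names (waitForChildren, sendFinishLaunchRequest); the real finishLaunch also manages tempIds and the returned promise object:

```js
// Illustrative only: flush, wait for children, flush again, then finish the launch.
async function finishLaunchWithFlush(client, launchTempId, finishData) {
  if (client.config.batchLogs) {
    await client.flushLogs().catch(() => {}); // phase 1: unblock children waiting on buffered logs
  }
  await client.waitForChildren(launchTempId); // hypothetical helper for awaiting child items
  if (client.config.batchLogs) {
    await client.flushLogs().catch(() => {}); // phase 2: catch logs produced while children finished
  }
  return client.sendFinishLaunchRequest(launchTempId, finishData); // hypothetical helper
}
```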


734-741: Routing logic for batch mode is clean.

Simple and clear conditional routing to addLogToBuffer when batchLogs is enabled, falling back to existing behavior otherwise.


911-917: Base64 size calculation may be slightly inaccurate for padded content.

The formula Math.ceil((fileObj.content.length * 3) / 4) estimates the decoded size from the base64 string length. This is correct as a decoding-size estimate, but it doesn't account for potential padding characters (=). For threshold checks this approximation is acceptable, since it is conservative and can only slightly overestimate.
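
For concreteness, the estimate can be reproduced with a small helper (illustrative only; the function name is not from the PR):

```js
// Decoded-size estimate for a base64 string; padding ('=') makes it overestimate by at most 2 bytes.
function estimateBase64DecodedSize(base64Content) {
  return Math.ceil((base64Content.length * 3) / 4);
}

// Example: 'aGVsbG8=' decodes to 'hello' (5 bytes); the estimate returns 6.
console.log(estimateBase64DecodedSize('aGVsbG8='));
```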


919-985: Buffer entry management with promise lifecycle is well-implemented.

The addLogToBuffer method correctly:

  • Validates item existence
  • Warns on oversized entries
  • Triggers flush when payload limit would be exceeded
  • Manages promise resolvers for later resolution
  • Tracks entries in both buffer and map

One note: the .catch(() => {}) calls on lines 942 and 978 silently swallow flush errors, which is intentional per the JSDoc; individual log promises will still reject appropriately.
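
A minimal sketch of that promise bookkeeping, with invented names; the actual addLogToBuffer also performs payload accounting and threshold checks:

```js
// Create a buffered entry whose promise is settled later from the batch response.
function createBufferedEntry(buffer, resolvers, itemTempId, logData) {
  let settle;
  const promise = new Promise((resolve, reject) => {
    settle = { resolve, reject };
  });
  const tempId = `log_${Date.now()}_${Math.random().toString(36).slice(2)}`;
  buffer.push({ tempId, itemTempId, logData });
  resolvers.set(tempId, settle); // resolved/rejected when the batch send succeeds or finally fails
  return { tempId, promise };    // caller gets the same shape as the non-batched sendLog
}

// Example bookkeeping containers:
const buffer = [];
const resolvers = new Map();
const { promise } = createBufferedEntry(buffer, resolvers, 'item_1', { level: 'INFO', message: 'hi' });
promise.catch(() => {}); // would reject if the batch send ultimately fails
```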


1004-1032: flushLogs loop structure handles concurrent batches correctly.

The while(true) loop with proper await on flushPromise ensures:

  • Batches are processed sequentially
  • New logs added during flush are captured on next iteration
  • The method only returns when all queued batches are processed

The empty catch block at line 1024 is acceptable since errors are handled in sendLogsBatch and individual promises are rejected there.
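
A self-contained sketch of such a serialized flush loop, assuming a plain state object rather than the client's real fields:

```js
// Drain the buffer batch by batch; concurrent callers wait on the shared flushPromise.
async function flushLoop(state, sendBatch) {
  // state: { buffer: [], flushing: false, flushPromise: null }
  while (true) {
    if (state.flushing) {
      await state.flushPromise;            // wait for the in-flight batch, then re-check the buffer
      continue;
    }
    if (state.buffer.length === 0) return; // all queued logs have been sent
    const batch = state.buffer.splice(0, state.buffer.length);
    state.flushing = true;
    state.flushPromise = sendBatch(batch)
      .catch(() => {})                     // failures settle the per-log promises inside sendBatch
      .finally(() => {
        state.flushing = false;
      });
    await state.flushPromise;              // loop again to pick up logs added meanwhile
  }
}
```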


1034-1147: sendLogsBatch implementation is robust with proper retry logic.

The method correctly:

  • Waits for item's promiseStart before preparing log data
  • Handles item-level failures individually (lines 1061-1070)
  • Implements retry with configurable count and delay
  • Resolves/rejects individual log promises based on batch outcome
  • Throws after all retries fail to signal failure to caller

The retry loop correctly generates a new boundary for each attempt, which is good practice.
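
The retry shape described here, reduced to a generic wrapper (illustrative; whether the configured count includes the first attempt is an implementation detail not visible in this review):

```js
// Retry an async send with a fixed delay; a fresh multipart boundary is generated per attempt.
async function sendWithRetry(buildAndPost, retryCount = 5, retryDelay = 2000) {
  let lastError;
  for (let attempt = 1; attempt <= retryCount; attempt += 1) {
    try {
      const boundary = Math.floor(Math.random() * 1e10).toString(); // new boundary each attempt
      return await buildAndPost(boundary);
    } catch (error) {
      lastError = error;
      if (attempt < retryCount) {
        await new Promise((resolve) => setTimeout(resolve, retryDelay));
      }
    }
  }
  throw lastError; // all attempts failed; the caller rejects the per-log promises
}
```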


882-909: Remove this comment—it contradicts the ReportPortal API specification. The official ReportPortal documentation specifies that multipart requests must use multiple form fields all named file, each with its own filename parameter in the Content-Disposition header. This is the expected and correct format. The existing test suite validates that multiple files in a batch work correctly with this approach. File name collisions are impossible because each file part retains its own distinct filename parameter.
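
A sketch of a multipart body in that shape: one JSON part plus repeated parts named file, each carrying its own filename. This is not the PR's buildBatchMultiPartStream, and the json_request_part field name follows ReportPortal's documented convention but should be verified against the API version in use.

```js
// Build a multipart/form-data body: a JSON part followed by N parts all named "file".
function buildBatchBody(boundary, logEntries, files) {
  const parts = [
    `--${boundary}\r\n` +
      'Content-Disposition: form-data; name="json_request_part"\r\n' +
      'Content-Type: application/json\r\n\r\n' +
      `${JSON.stringify(logEntries)}\r\n`,
  ];
  files.forEach((file) => {
    parts.push(
      `--${boundary}\r\n` +
        `Content-Disposition: form-data; name="file"; filename="${file.name}"\r\n` +
        `Content-Type: ${file.type}\r\n\r\n`,
    );
    parts.push(Buffer.from(file.content, 'base64')); // decoded file bytes
    parts.push('\r\n');
  });
  parts.push(`--${boundary}--\r\n`);
  return Buffer.concat(parts.map((p) => (Buffer.isBuffer(p) ? p : Buffer.from(p, 'utf8'))));
}
```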




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
lib/report-portal-client.js (2)

913-977: Consider potential race condition in payload-based flush.

The flushLogs() call on lines 935 and 970 is not awaited. While the flushLogs method handles concurrent calls via the flushingLogs flag and while loop, logs added between triggering the flush and the flush starting could accumulate beyond the payload limit.

This is a minor concern since:

  1. The flush will eventually process all buffered logs
  2. The warning at line 926-928 alerts users to oversized entries

For most use cases this should be fine, but if strict payload limits are critical, consider awaiting the flush or documenting this behavior.
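
If strict enforcement were required, the threshold-triggered flush could be awaited before buffering the new entry, roughly as below (a hypothetical variant, not the PR's behavior; the addLogToBuffer signature is assumed):

```js
// Await the flush so the buffer is drained before the new entry is accepted.
async function addWithStrictLimit(client, itemTempId, logData, size) {
  if (client.logBufferPayloadSize + size > client.config.batchPayloadLimit) {
    await client.flushLogs();
  }
  return client.addLogToBuffer(itemTempId, logData, size); // signature assumed for the sketch
}
```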


1003-1090: Consider declaring response as const at the point of assignment.

The implementation is solid. Minor suggestion: the let response declaration on line 1050 followed by immediate assignment could be simplified.

♻️ Optional cleanup
```diff
     try {
-      let response;
       // Always use multipart format for batch logs (required by ReportPortal API)
       const boundary = Math.floor(Math.random() * 10000000000).toString();
       const multipartData = this.buildBatchMultiPartStream(preparedEntries, boundary);
       this.logDebug(`Sending batch of ${preparedEntries.length} logs${hasFiles ? ' with files' : ''}`);
-      response = await this.restClient.create(url, multipartData, {
+      const response = await this.restClient.create(url, multipartData, {
         headers: {
           'Content-Type': `multipart/form-data; boundary=${boundary}`,
         },
       });
```
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4878b42 and 34d8b04.

📒 Files selected for processing (5)
  • README.md
  • __tests__/report-portal-client.spec.js
  • index.d.ts
  • lib/commons/config.js
  • lib/report-portal-client.js
🧰 Additional context used
🧬 Code graph analysis (1)
lib/commons/config.js (1)
__tests__/rest.spec.js (1)
  • options (8-20)
🔇 Additional comments (21)
README.md (2)

126-128: LGTM!

The documentation for the new batch logging configuration options is clear and comprehensive. The default values (batchLogs=false, batchLogsSize=10, batchPayloadLimit=62MB) are well documented and align with the implementation in lib/commons/config.js.


598-627: LGTM!

The batch logging mode documentation and flushLogs method are well documented with clear examples showing the usage pattern.

lib/commons/config.js (1)

119-121: LGTM!

The batch logging configuration options are correctly implemented. The strict boolean check for batchLogs (=== true) prevents falsy values from enabling batch mode. The defaults (10 logs, 62MB limit) are reasonable.

index.d.ts (2)

163-181: LGTM!

The TypeScript definitions for the batch logging configuration options are well documented with clear descriptions and default values.


411-423: LGTM!

The flushLogs method declaration is correctly typed as Promise<void> with comprehensive JSDoc documentation explaining when it's called automatically.

__tests__/report-portal-client.spec.js (10)

1124-1152: LGTM!

Configuration tests properly validate both default values and custom configuration for batch logging options.


1154-1209: LGTM!

Tests for batch send by buffer length are well structured, verifying that logs are buffered until reaching batchLogsSize and that the batch is sent once the threshold is met.


1211-1241: LGTM!

Payload size-based flushing is properly tested, ensuring logs are flushed when the accumulated payload exceeds batchPayloadLimit.


1243-1279: LGTM!

File attachment handling in batch mode is properly tested, verifying multipart requests are created correctly.


1281-1304: LGTM!

Backward compatibility test confirms that logs are sent immediately when batchLogs is disabled.


1306-1363: LGTM!

The finishLaunch flush test is well-designed, verifying that buffered logs are flushed before the launch completes.


1365-1399: LGTM!

Error handling test properly verifies that individual log promises are rejected when batch send fails.


1401-1446: LGTM!

Multiple files in batch test verifies correct association of files with their log entries in the multipart request.


1448-1497: LGTM!

Concurrent flush test properly validates that logs added during an active flush are queued for the next batch.


1499-1525: LGTM!

Oversized log warning test ensures users are notified when a single log exceeds the payload limit.

lib/report-portal-client.js (6)

68-73: LGTM!

New instance variables for batch logging state management are well-organized. The flushingLogs flag and flushPromise provide proper coordination for concurrent flush operations.


288-292: LGTM!

Correctly flushes buffered logs before finishing the launch, ensuring no logs are lost. The await ensures all logs are sent before the launch finish request is made.


728-730: LGTM!

Clean routing logic that directs logs to the batch buffer when batchLogs is enabled, maintaining backward compatibility with the existing immediate-send behavior when disabled.


876-903: LGTM!

The buildBatchMultiPartStream method correctly constructs multipart request bodies for batched logs with file attachments. The format follows the standard multipart/form-data specification.


905-911: Verify base64 size estimation accuracy.

The formula Math.ceil((fileObj.content.length * 3) / 4) estimates decoded size from base64 string length. This is generally correct but doesn't account for potential padding characters (=), which could cause slight overestimation (up to 2 bytes). This is acceptable as it's a conservative estimate for the payload limit check.


979-1001: LGTM!

The flushLogs method correctly handles concurrent flush scenarios with the while loop pattern. If a flush is already in progress, new calls await the existing flushPromise and then re-check the buffer, ensuring no logs are lost.

li-zhixin added a commit to li-zhixin/client-javascript that referenced this pull request Jan 10, 2026
- Add batchRetryCount and batchRetryDelay config options
- Introduce batchQueue to fix potential race condition during concurrent flushes
- Auto-retry failed batches with configurable delay between attempts

Fixes reportportal#250

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @index.d.ts:
- Around line 423-436: The current declaration for flushLogs() promises success
but the implementation treats failures as best-effort; update the API docs and
typings to match that behavior: change the JSDoc for flushLogs() to state it
performs a best-effort flush that returns a Promise<void> which resolves once
sending completes but does not reject on batch send errors (individual per-log
promises will be rejected on failure), and update README/examples to the same
wording; alternatively, if you want strict semantics instead, change the
signature/implementation of flushLogs() to propagate batch errors and reject the
Promise (update index.d.ts, README, and any callers accordingly) — pick one
approach and make both the index.d.ts JSDoc and README consistent with the
chosen behavior, referencing flushLogs() and any per-log promise mechanism in
the implementation.

In @lib/report-portal-client.js:
- Around line 913-979: addLogToBuffer currently pushes the log tempId into
itemObj.children which couples log lifecycle to item completion and can deadlock
finish; stop mutating itemObj.children in addLogToBuffer and instead track log
tempIds separately (e.g. this.itemLogChildren[itemTempId] =
this.itemLogChildren[itemTempId] || [];
this.itemLogChildren[itemTempId].push(tempId)), update finish/complete logic to
consult this.itemLogChildren when awaiting logs (or treat these tracked log
entries as independent), and ensure any cleanup removes entries from
this.itemLogChildren and pendingLogResolvers so flushed or failed batches don’t
block item finishing (see the sketch after this list).
- Around line 289-292: The finishLaunch path can deadlock in batch mode because
child finish waits may block on unflushed batched logs; modify the async barrier
in finishLaunch (the async () => { ... } block) to perform a best-effort flush
before or during waiting for child finishes: if this.config.batchLogs is true,
call await this.flushLogs() (wrapped in try/catch so failures are ignored) prior
to awaiting child/test-item finish promises (and similarly add a best-effort
flush in finishTestItem/wherever children are awaited if present) so pending log
promises are unblocked without letting flush errors abort the finish flow.
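
To make the second suggestion above more concrete, here is a sketch of tracking log tempIds in a separate map instead of itemObj.children; all names here are hypothetical:

```js
// Track buffered-log tempIds per item without coupling them to the item's children.
function trackLogForItem(client, itemTempId, logTempId) {
  client.itemLogChildren = client.itemLogChildren || {};
  (client.itemLogChildren[itemTempId] = client.itemLogChildren[itemTempId] || []).push(logTempId);
}

function clearLogsForItem(client, itemTempId) {
  // Called once a batch containing this item's logs has been sent (or has finally failed),
  // so flushed or failed batches never block the item from finishing.
  if (client.itemLogChildren) {
    delete client.itemLogChildren[itemTempId];
  }
}
```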
🧹 Nitpick comments (2)
lib/commons/config.js (1)

119-123: Validate and default numeric batch options more explicitly (avoid || masking bad/edge values).
batchLogsSize and batchPayloadLimit use ||, which treats 0, NaN, and '' as “unset” and silently falls back to the defaults; there are also no range checks.

Proposed tighten-up
```diff
-      batchLogsSize: options.batchLogsSize || 10,
-      batchPayloadLimit: options.batchPayloadLimit || 65011712, // 62MB
+      batchLogsSize:
+        Number.isInteger(options.batchLogsSize) && options.batchLogsSize > 0
+          ? options.batchLogsSize
+          : 10,
+      batchPayloadLimit:
+        Number.isInteger(options.batchPayloadLimit) && options.batchPayloadLimit > 0
+          ? options.batchPayloadLimit
+          : 65011712, // 62MB
```
lib/report-portal-client.js (1)

990-1018: Decide whether flushLogs() should surface failures; the current implementation hides them.
flushLogs() catches batch failures and continues, so await flushLogs() can succeed even if every batch failed; this conflicts with the new public API expectations. Also, on total failure sendLogsBatch() logs lastError which may include request headers depending on the underlying HTTP client—avoid leaking tokens in logs.

Also applies to: 1020-1133

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 34d8b04 and cd4fb7c.

📒 Files selected for processing (5)
  • README.md
  • __tests__/report-portal-client.spec.js
  • index.d.ts
  • lib/commons/config.js
  • lib/report-portal-client.js
🚧 Files skipped from review as they are similar to previous changes (2)
  • __tests__/report-portal-client.spec.js
  • README.md
🧰 Additional context used
🧬 Code graph analysis (1)
lib/commons/config.js (1)
__tests__/rest.spec.js (1)
  • options (8-20)
🔇 Additional comments (4)
index.d.ts (1)

163-193: Batch config typings look consistent with the implementation names (batchLogs*).
No concerns on the shape/docs here.

lib/report-portal-client.js (3)

68-74: Batch state initialization is clear and localized to the client instance.


728-730: Batch routing in sendLog() is straightforward and keeps the existing return shape.


876-903: Verify RP API expectations for multi-file multipart batches (repeated name="file" parts + ordering).
This builder emits one JSON part plus N file parts all named "file". If the server expects per-log field names or an indexed form, batches with multiple attachments will mis-associate files.

@li-zhixin
Author

I'm sorry to bother you, but could you please take a look at this pull request?

@HardNorth
Member

@li-zhixin There is almost no chance this code will be approved, since it's excessive, adds complexity to the lib/report-portal-client.js module, and does not follow SOLID principles. If you want it to be considered, try studying how such functionality is organized in other repositories, e.g. in Python: https://github.com/reportportal/client-Python/blob/develop/reportportal_client/_internal/logs/batcher.py

@HardNorth HardNorth closed this Jan 13, 2026
