✨ feat: add batch logging support #250
Reduce HTTP overhead by buffering logs and sending them in batches, configurable via batchLogs, batchLogsSize, and batchPayloadLimit options.
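The three options can be sketched as a small normalization helper. This is a hypothetical sketch: the option names and defaults (batching off, 10 logs, 62 MB) match what the review below reports from `lib/commons/config.js`, but the helper name `normalizeBatchOptions` is illustrative, not part of the client.

```javascript
// Illustrative sketch of how the new batch options might be normalized;
// only the option names and defaults are taken from the PR.
function normalizeBatchOptions(options = {}) {
  return {
    batchLogs: options.batchLogs === true, // strict check: only literal `true` enables batching
    batchLogsSize: options.batchLogsSize || 10, // flush after this many buffered logs
    batchPayloadLimit: options.batchPayloadLimit || 65011712, // 62 MB payload cap
  };
}
```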
Walkthrough

Adds configurable batched log buffering and transmission to the ReportPortal JS client: new config options, in-memory buffering with payload accounting, multipart batch sends with retry/delay, a public flushLogs() method, TypeScript declarations, tests, and README documentation updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant RPClient as RPClient
    participant Buffer as Log Buffer
    participant Builder as Multipart Builder
    participant Server as ReportPortal Server
    User->>RPClient: sendLog(logData, file?)
    alt batchLogs enabled
        RPClient->>Buffer: addLogToBuffer(logData, file)
        Buffer->>Buffer: calculateLogSize & update buffer
        Buffer->>Buffer: check thresholds (count / payload)
        alt threshold met
            Buffer->>RPClient: trigger flush
        end
    else
        RPClient->>Server: POST single log (immediate)
        Server-->>RPClient: response
    end
    User->>RPClient: flushLogs() or finishLaunch()
    RPClient->>Buffer: collect buffered logs
    alt logs present
        RPClient->>Builder: build multipart stream
        Builder-->>RPClient: multipart stream
        RPClient->>Server: POST batch (multipart/form-data)
        Server-->>RPClient: per-log responses
        RPClient->>Buffer: resolve/reject per-log promises, remove sent logs
    end
```
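The routing step at the top of the diagram (buffer when batching is enabled, send immediately otherwise) can be sketched as follows. Everything here is illustrative — `BatchingClient` and the `transport` shape are assumptions for the sketch, not the client's real API.

```javascript
// Minimal sketch of the diagram's routing logic: buffer + threshold flush
// when batchLogs is on, immediate send otherwise.
class BatchingClient {
  constructor(config, transport) {
    this.config = config;
    this.transport = transport; // assumed: postLog(log) and postBatch(logs) return Promises
    this.logBuffer = [];
  }
  sendLog(log) {
    if (this.config.batchLogs === true) {
      this.logBuffer.push(log);
      if (this.logBuffer.length >= this.config.batchLogsSize) {
        return this.flushLogs(); // count threshold met -> trigger flush
      }
      return Promise.resolve();
    }
    return this.transport.postLog(log); // batching disabled: immediate send
  }
  flushLogs() {
    const batch = this.logBuffer.splice(0); // take and clear the buffer
    return batch.length ? this.transport.postBatch(batch) : Promise.resolve();
  }
}
```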
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 3 passed
Actionable comments posted: 0
🧹 Nitpick comments (2)
lib/report-portal-client.js (2)
913-977: Consider potential race condition in payload-based flush.

The `flushLogs()` call on lines 935 and 970 is not awaited. While the `flushLogs` method handles concurrent calls via the `flushingLogs` flag and `while` loop, logs added between triggering the flush and the flush starting could accumulate beyond the payload limit. This is a minor concern since:

- The flush will eventually process all buffered logs
- The warning at lines 926-928 alerts users to oversized entries

For most use cases this should be fine, but if strict payload limits are critical, consider awaiting the flush or documenting this behavior.
1003-1090: Consider extracting boundary generation to avoid unused variable warning.

The implementation is solid. Minor suggestion: the `let response` declaration on line 1050 followed by immediate assignment could be simplified.

♻️ Optional cleanup

```diff
   try {
-    let response;
     // Always use multipart format for batch logs (required by ReportPortal API)
     const boundary = Math.floor(Math.random() * 10000000000).toString();
     const multipartData = this.buildBatchMultiPartStream(preparedEntries, boundary);
     this.logDebug(`Sending batch of ${preparedEntries.length} logs${hasFiles ? ' with files' : ''}`);
-    response = await this.restClient.create(url, multipartData, {
+    const response = await this.restClient.create(url, multipartData, {
       headers: {
         'Content-Type': `multipart/form-data; boundary=${boundary}`,
       },
     });
```
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- README.md
- __tests__/report-portal-client.spec.js
- index.d.ts
- lib/commons/config.js
- lib/report-portal-client.js
🧰 Additional context used
🧬 Code graph analysis (1)
lib/commons/config.js (1)
__tests__/rest.spec.js (1)
options (8-20)
🔇 Additional comments (21)
README.md (2)
126-128: LGTM!

The documentation for the new batch logging configuration options is clear and comprehensive. The default values (batchLogs=false, batchLogsSize=10, batchPayloadLimit=62MB) are well documented and align with the implementation in `lib/commons/config.js`.
598-627: LGTM!

The batch logging mode documentation and `flushLogs` method are well documented with clear examples showing the usage pattern.

lib/commons/config.js (1)
119-121: LGTM!

The batch logging configuration options are correctly implemented. The strict boolean check for `batchLogs` (`=== true`) prevents falsy values from enabling batch mode. The defaults (10 logs, 62MB limit) are reasonable.

index.d.ts (2)
163-181: LGTM!

The TypeScript definitions for the batch logging configuration options are well documented with clear descriptions and default values.

411-423: LGTM!

The `flushLogs` method declaration is correctly typed as `Promise<void>` with comprehensive JSDoc documentation explaining when it's called automatically.

__tests__/report-portal-client.spec.js (10)
1124-1152: LGTM!

Configuration tests properly validate both default values and custom configuration for batch logging options.

1154-1209: LGTM!

Tests for batch send by buffer length are well structured, verifying that logs are buffered until reaching `batchLogsSize` and that the batch is sent once the threshold is met.

1211-1241: LGTM!

Payload size-based flushing is properly tested, ensuring logs are flushed when the accumulated payload exceeds `batchPayloadLimit`.

1243-1279: LGTM!

File attachment handling in batch mode is properly tested, verifying multipart requests are created correctly.

1281-1304: LGTM!

Backward compatibility test confirms that logs are sent immediately when `batchLogs` is disabled.

1306-1363: LGTM!

The `finishLaunch` flush test is well-designed, verifying that buffered logs are flushed before the launch completes.

1365-1399: LGTM!

Error handling test properly verifies that individual log promises are rejected when batch send fails.

1401-1446: LGTM!

Multiple files in batch test verifies correct association of files with their log entries in the multipart request.

1448-1497: LGTM!

Concurrent flush test properly validates that logs added during an active flush are queued for the next batch.

1499-1525: LGTM!

Oversized log warning test ensures users are notified when a single log exceeds the payload limit.
lib/report-portal-client.js (6)

68-73: LGTM!

New instance variables for batch logging state management are well-organized. The `flushingLogs` flag and `flushPromise` provide proper coordination for concurrent flush operations.

288-292: LGTM!

Correctly flushes buffered logs before finishing the launch, ensuring no logs are lost. The await ensures all logs are sent before the launch finish request is made.

728-730: LGTM!

Clean routing logic that directs logs to the batch buffer when `batchLogs` is enabled, maintaining backward compatibility with the existing immediate-send behavior when disabled.

876-903: LGTM!

The `buildBatchMultiPartStream` method correctly constructs multipart request bodies for batched logs with file attachments. The format follows the standard multipart/form-data specification.
905-911: Verify base64 size estimation accuracy.

The formula `Math.ceil((fileObj.content.length * 3) / 4)` estimates decoded size from base64 string length. This is generally correct but doesn't account for potential padding characters (`=`), which could cause slight overestimation (up to 2 bytes). This is acceptable as it's a conservative estimate for the payload limit check.
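For reference, the exact decoded size can be computed by subtracting the `=` padding — the difference from the estimate above is at most 2 bytes per file. The helper name below is illustrative.

```javascript
// Exact decoded byte size of a base64 string, accounting for '=' padding.
// Math.ceil((b64.length * 3) / 4) without the padding term overshoots by up to 2.
function base64DecodedSize(b64) {
  let padding = 0;
  if (b64.endsWith('==')) padding = 2;
  else if (b64.endsWith('=')) padding = 1;
  return (b64.length * 3) / 4 - padding;
}
```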
979-1001: LGTM!

The `flushLogs` method correctly handles concurrent flush scenarios with the `while` loop pattern. If a flush is already in progress, new calls await the existing `flushPromise` and then re-check the buffer, ensuring no logs are lost.
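The flag-plus-while-loop coordination described above can be sketched as follows. This is a simplified stand-in, assuming a `send(batch)` callback; the real method also does payload accounting and multipart building.

```javascript
// Sketch of concurrent-flush coordination: a flushingLogs flag gates entry,
// and the while loop drains logs that arrive during an in-flight send.
class FlushCoordinator {
  constructor(send) {
    this.buffer = [];
    this.flushingLogs = false;
    this.flushPromise = null;
    this.send = send; // assumed: async (batch) => void
  }
  async flushLogs() {
    if (this.flushingLogs) {
      await this.flushPromise; // wait for the in-flight flush...
      if (this.buffer.length === 0) return; // ...then re-check the buffer
    }
    this.flushingLogs = true;
    this.flushPromise = (async () => {
      while (this.buffer.length > 0) {
        const batch = this.buffer.splice(0); // take everything buffered so far
        await this.send(batch);
      }
    })();
    try {
      await this.flushPromise;
    } finally {
      this.flushingLogs = false;
    }
  }
}
```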
- Add batchRetryCount and batchRetryDelay config options
- Introduce batchQueue to fix potential race condition during concurrent flushes
- Auto-retry failed batches with configurable delay between attempts

Fixes reportportal#250
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @index.d.ts:
- Around line 423-436: The current declaration for flushLogs() promises success
but implementation treats failures as best-effort; update the API docs and
typings to match that behavior: change the JSDoc for flushLogs() to state it
performs a best-effort flush that returns a Promise<void> which resolves once
sending completes but does not reject on batch send errors (individual per-log
promises will be rejected on failure), and update README/examples to the same
wording; alternatively, if you want strict semantics instead, change the
signature/implementation of flushLogs() to propagate batch errors and reject the
Promise (update index.d.ts, README, and any callers accordingly) — pick one
approach and make both the index.d.ts JSDoc and README consistent with the
chosen behavior, referencing flushLogs() and any per-log promise mechanism in
the implementation.
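The "best-effort" semantics this comment proposes can be sketched as a small helper: the flush promise always resolves, while failures are routed to the per-log promises. The function and parameter names are illustrative, not the client's API.

```javascript
// Sketch of best-effort flush semantics: the returned promise resolves
// regardless of send outcome; per-log resolvers carry the failure.
async function bestEffortFlush(batch, sendBatch, resolvers) {
  try {
    await sendBatch(batch);
    resolvers.forEach((r) => r.resolve());
  } catch (err) {
    resolvers.forEach((r) => r.reject(err)); // failures surface per log...
  }
  // ...and are deliberately not rethrown here
}
```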
In @lib/report-portal-client.js:
- Around line 913-979: addLogToBuffer currently pushes the log tempId into
itemObj.children which couples log lifecycle to item completion and can deadlock
finish; stop mutating itemObj.children in addLogToBuffer and instead track log
tempIds separately (e.g. this.itemLogChildren[itemTempId] =
this.itemLogChildren[itemTempId] || [];
this.itemLogChildren[itemTempId].push(tempId)), update finish/complete logic to
consult this.itemLogChildren when awaiting logs (or treat these tracked log
entries as independent), and ensure any cleanup removes entries from
this.itemLogChildren and pendingLogResolvers so flushed or failed batches don’t
block item finishing.
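The separate-tracking structure this comment suggests (an `itemLogChildren` map keyed by item tempId, with cleanup on flush) might look like the following sketch; the class wrapper is illustrative.

```javascript
// Sketch: track batched-log tempIds per item in a dedicated map instead of
// itemObj.children, so item finish logic is decoupled from log lifecycle.
class LogChildTracker {
  constructor() {
    this.itemLogChildren = {};
  }
  track(itemTempId, logTempId) {
    this.itemLogChildren[itemTempId] = this.itemLogChildren[itemTempId] || [];
    this.itemLogChildren[itemTempId].push(logTempId);
  }
  // Called when a batched log is flushed or fails, so it never blocks finish.
  cleanup(itemTempId, logTempId) {
    const list = this.itemLogChildren[itemTempId];
    if (!list) return;
    const i = list.indexOf(logTempId);
    if (i !== -1) list.splice(i, 1);
    if (list.length === 0) delete this.itemLogChildren[itemTempId];
  }
}
```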
- Around line 289-292: The finishLaunch path can deadlock in batch mode because
child finish waits may block on unflushed batched logs; modify the async barrier
in finishLaunch (the async () => { ... } block) to perform a best-effort flush
before or during waiting for child finishes: if this.config.batchLogs is true,
call await this.flushLogs() (wrapped in try/catch so failures are ignored) prior
to awaiting child/test-item finish promises (and similarly add a best-effort
flush in finishTestItem/wherever children are awaited if present) so pending log
promises are unblocked without letting flush errors abort the finish flow.
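The best-effort flush barrier this comment describes can be sketched like so; `finishLaunchBarrier` and the `client` shape are illustrative stand-ins for the relevant section of `finishLaunch`.

```javascript
// Sketch: flush buffered logs before awaiting child finishes, swallowing
// flush errors so they cannot abort (or deadlock) the finish flow.
async function finishLaunchBarrier(client) {
  if (client.config.batchLogs === true) {
    try {
      await client.flushLogs(); // unblocks pending per-log promises
    } catch (e) {
      // best-effort by design: flush failures must not abort the finish
    }
  }
  await Promise.all(client.childFinishPromises);
}
```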
🧹 Nitpick comments (2)
lib/commons/config.js (1)
119-123: Validate and default numeric batch options more explicitly (avoid `||` masking bad/edge values).

`batchLogsSize`/`batchPayloadLimit` use `||`, which will treat `0`, `NaN`, `''` as "unset" and silently default; there are also no range checks.

Proposed tighten-up

```diff
-  batchLogsSize: options.batchLogsSize || 10,
-  batchPayloadLimit: options.batchPayloadLimit || 65011712, // 62MB
+  batchLogsSize:
+    Number.isInteger(options.batchLogsSize) && options.batchLogsSize > 0
+      ? options.batchLogsSize
+      : 10,
+  batchPayloadLimit:
+    Number.isInteger(options.batchPayloadLimit) && options.batchPayloadLimit > 0
+      ? options.batchPayloadLimit
+      : 65011712, // 62MB
```

lib/report-portal-client.js (1)
990-1018: Decide whether `flushLogs()` should surface failures; current implementation hides them.

`flushLogs()` catches batch failures and continues, so `await flushLogs()` can succeed even if every batch failed; this conflicts with the new public API expectations. Also, on total failure `sendLogsBatch()` logs `lastError`, which may include request headers depending on the underlying HTTP client; avoid leaking tokens in logs.

Also applies to: 1020-1133
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- README.md
- __tests__/report-portal-client.spec.js
- index.d.ts
- lib/commons/config.js
- lib/report-portal-client.js
🚧 Files skipped from review as they are similar to previous changes (2)
- __tests__/report-portal-client.spec.js
- README.md
🧰 Additional context used
🧬 Code graph analysis (1)
lib/commons/config.js (1)
__tests__/rest.spec.js (1)
options (8-20)
🔇 Additional comments (4)
index.d.ts (1)
163-193: Batch config typings look consistent with the implementation names (`batchLogs*`).

No concerns on the shape/docs here.

lib/report-portal-client.js (3)

68-74: Batch state initialization is clear and localized to the client instance.

728-730: Batch routing in `sendLog()` is straightforward and keeps the existing return shape.

876-903: Verify RP API expectations for multi-file multipart batches (repeated `name="file"` parts + ordering).

This builder emits one JSON part plus N `file` parts all named `"file"`. If the server expects per-log field names or an indexed form, batches with multiple attachments will mis-associate files.
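The multipart layout under discussion can be sketched as a plain string builder. This is a simplified stand-in for the real stream-based builder: the `json_request_part` field name is what ReportPortal's log endpoint conventionally expects (treat it as an assumption here), and repeating `name="file"` for every attachment mirrors the behavior the comment asks to verify.

```javascript
// Rough sketch of a multipart/form-data body: one JSON part holding all log
// entries, then one part per file, each named "file" (per the review's concern).
function buildBatchMultipart(entries, files, boundary) {
  const CRLF = '\r\n';
  let body =
    `--${boundary}${CRLF}` +
    `Content-Disposition: form-data; name="json_request_part"${CRLF}` +
    `Content-Type: application/json${CRLF}${CRLF}` +
    JSON.stringify(entries) + CRLF;
  for (const f of files) {
    body +=
      `--${boundary}${CRLF}` +
      `Content-Disposition: form-data; name="file"; filename="${f.name}"${CRLF}` +
      `Content-Type: ${f.type}${CRLF}${CRLF}` +
      f.content + CRLF;
  }
  return body + `--${boundary}--${CRLF}`; // closing boundary
}
```

Because the file parts share one field name, the server must associate files with entries by filename or by order, which is exactly why the reviewer asks to verify the RP API contract here.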
I'm sorry to bother you, but could you please take a look at this pull request?
@li-zhixin There is almost no chance this code will be approved, since it's excessive, adds complexity to
Summary
New configuration options: `batchLogging`, `batchLoggingSize`, and `batchLoggingFlushInterval`

Changes
- lib/report-portal-client.js: Added batch logging logic with queue management and flush mechanism
- lib/commons/config.js: Added new configuration options for batch logging
- index.d.ts: Added TypeScript type definitions for new options
- README.md: Added documentation for batch logging feature
- __tests__/report-portal-client.spec.js: Added unit tests for batch logging functionality

Test plan
Summary by CodeRabbit
New Features
Tests
Documentation