Hi @arthurvanl, great question!

Short answer: for large data (>1-2 MB), save to disk/S3 and pass the file path in the job result. For small data (<1 MB), passing it directly through getParentResult is fine.

Why?

Job results are stored in memory (LRU cache, max 5,000 entries) and optionally persisted to SQLite. Storing 20 MB+ blobs per job would quickly exhaust memory (worst case, 5,000 entries × 20 MB ≈ 100 GB) and slow down SQLite operations.

Recommended pattern for large data:

import { writeFile, readFile } from 'fs/promises';
import { randomUUID } from 'crypto';

const worker = new Worker('fetch-data', async (job) => {
  const data = await fetchFromFTP(job.data.endpoint);

  const payload = JSON.stringify(data);
  if (payload.length > 1_000_000) {
    // Large result: persist to disk, return only the path
    const filePath = `/tmp/${randomUUID()}.json`;
    await writeFile(filePath, payload);
    return { filePath };
  }

  // Small result: safe to return directly
  return { data };
});
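On the consuming side, the downstream job receives only the small `{ filePath }` object (e.g. via getParentResult) and re-reads the payload from disk. Here is a minimal sketch of that round-trip with the queue wiring omitted; `persistResult`, `loadResult`, and the temp-dir location are illustrative names, not part of the library's API:

```javascript
import { writeFile, readFile, unlink } from 'fs/promises';
import { randomUUID } from 'crypto';
import { tmpdir } from 'os';
import { join } from 'path';

// Producer side: persist a large payload, hand back only its path.
async function persistResult(data) {
  const filePath = join(tmpdir(), `${randomUUID()}.json`);
  await writeFile(filePath, JSON.stringify(data));
  return { filePath };
}

// Consumer side: re-hydrate the payload from the parent job's result.
async function loadResult(parentResult) {
  const data = JSON.parse(await readFile(parentResult.filePath, 'utf8'));
  await unlink(parentResult.filePath); // clean up once consumed
  return data;
}

const parentResult = await persistResult({ rows: Array(1000).fill('x') });
const data = await loadResult(parentResult);
console.log(data.rows.length); // 1000
```

Deleting the file after the child consumes it keeps the temp directory from filling up; if several children read the same parent result, defer the cleanup to a final job instead.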

Answer selected by arthurvanl