Description
Currently, the APIs do not define a standardized mechanism for uploading large files in a resumable, chunked, session-based manner (comparable to Google Drive–style resumable uploads).
In practice, clients need to:
- Upload large files reliably
- Resume interrupted uploads
- Retry individual chunks without restarting the entire transfer
- Rely on server-side tracking of upload progress via a session identifier
There is no RFC-standardized HTTP mechanism that covers resumable, session-based uploads end-to-end. Existing solutions are either proprietary (e.g., Google Drive, AWS S3 Multipart Upload) or application-level protocols.
This ticket proposes evaluating an explicit session-based upload API for attachments, inspired by established open-source approaches.
Proposed API Shape (Draft)
The following endpoints are proposed as a discussion baseline, not as a final specification.
1. Initiate upload session
POST /attachment/initiate
Request (example):
- fileName
- fileSize
- contentType (optional)
Response (example):
- sessionId
- chunkSize
- totalChunks
Purpose:
- Creates an upload session
- Defines server-side chunking rules
- Allows clients to resume uploads consistently
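As a rough illustration of the initiation step, the server could compute the chunking plan from the declared file size. The following is a minimal sketch, not part of the proposal; names such as CHUNK_SIZE and initiate_upload are hypothetical:

```python
import math
import uuid

# Hypothetical server-defined chunking rule (8 MiB per chunk).
CHUNK_SIZE = 8 * 1024 * 1024

def initiate_upload(file_name, file_size, content_type=None):
    """Create an upload session and return the chunking plan to the client."""
    if file_size <= 0:
        raise ValueError("fileSize must be positive")
    total_chunks = math.ceil(file_size / CHUNK_SIZE)
    return {
        "sessionId": str(uuid.uuid4()),
        "chunkSize": CHUNK_SIZE,
        "totalChunks": total_chunks,
    }
```

Returning chunkSize and totalChunks at initiation lets the client derive every chunk boundary deterministically, which is what makes resuming consistent across restarts.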
2. Upload chunk
PUT /attachment/session/{sessionId}/upload/{chunkId}
Request:
- Binary payload of the chunk
- Optional checksum / content-range metadata
Purpose:
- Uploads a single chunk
- Enables retry of individual chunks
- Allows parallel or sequential upload strategies
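On the client side, slicing the payload and attaching per-chunk checksum metadata could look roughly like this. This is a sketch under assumptions: iter_chunks is a hypothetical helper, and SHA-256 stands in for whatever checksum the API would actually mandate:

```python
import hashlib

def iter_chunks(data, chunk_size):
    """Yield (chunkId, payload, sha256-hex) for each chunk of the data.

    Each tuple maps onto one PUT .../upload/{chunkId} request; the digest
    would travel as optional checksum metadata alongside the binary body.
    """
    for chunk_id, offset in enumerate(range(0, len(data), chunk_size)):
        payload = data[offset:offset + chunk_size]
        yield chunk_id, payload, hashlib.sha256(payload).hexdigest()
```

Because each chunk is independently addressed and checksummed, a failed or corrupted chunk can be re-sent on its own, and chunks may be uploaded in parallel.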
3. Complete upload session
POST /attachment/session/{sessionId}/complete
Purpose:
- Signals that all chunks have been uploaded
- Triggers server-side validation and finalization
- Creates the final attachment resource
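Server-side finalization would first verify that every chunk is present before assembling the attachment. A minimal in-memory sketch (complete_upload and the chunks dict are illustrative assumptions, not a prescribed implementation):

```python
def complete_upload(chunks, total_chunks):
    """Validate that all chunks arrived, then assemble the final payload.

    chunks: dict mapping chunkId -> bytes, as accumulated by the upload
    endpoint. Raises if any chunk is missing, so the client can re-send
    just those chunks and call complete again.
    """
    missing = [i for i in range(total_chunks) if i not in chunks]
    if missing:
        raise ValueError(f"missing chunks: {missing}")
    return b"".join(chunks[i] for i in range(total_chunks))
```

Reporting the exact missing chunk IDs in the error is what enables targeted retries instead of restarting the whole transfer.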
4. Abort upload session
POST /attachment/session/{sessionId}/abort
Purpose:
- Cancels an upload session
- Allows cleanup of temporary server-side data
- Supports explicit client-side cancellation
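Abort can be modeled as an idempotent removal of session state; dropping the session entry also releases any partially uploaded chunks. A trivial sketch against a hypothetical in-memory sessions dict:

```python
def abort_upload(sessions, session_id):
    """Cancel a session; returns False if it was already gone (idempotent)."""
    return sessions.pop(session_id, None) is not None
```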
Additional considerations:
- Upload sessions should have a server-defined timeout / expiration
- If a client stops sending data without calling abort (e.g., network failure, client crash), the server:
  - Automatically invalidates the session after the timeout
  - Cleans up any partially uploaded chunks
- The timeout duration may be:
  - Fixed by the server, or
  - Returned to the client during session initiation
This ensures that abandoned upload sessions do not lead to resource leaks or indefinite allocation of server-side storage.
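The expiration behaviour above could be sketched as a periodic sweep over the session store. SessionStore, its field names, and the idle-based TTL are illustrative assumptions only:

```python
class SessionStore:
    """In-memory upload sessions with idle-timeout cleanup (sketch only)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        # sessionId -> {"last_activity": timestamp, "chunks": {chunkId: bytes}}
        self.sessions = {}

    def touch(self, session_id, now):
        """Record activity, e.g. on every chunk upload."""
        self.sessions[session_id]["last_activity"] = now

    def sweep(self, now):
        """Invalidate sessions idle longer than the TTL; returns expired IDs."""
        expired = [sid for sid, s in self.sessions.items()
                   if now - s["last_activity"] > self.ttl]
        for sid in expired:
            del self.sessions[sid]  # dropping the entry frees partial chunks
        return expired
```

A real server would run the sweep on a schedule and could return the TTL to the client in the initiation response, per the considerations above.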
Relation to Existing Open Approaches
The proposed flow is conceptually aligned with Google Drive–style resumable uploads and open protocols such as tus, which also rely on:
- Explicit upload sessions
- Server-defined chunk handling
- Resume and retry semantics
The endpoints above intentionally keep the protocol simple and REST-aligned, while allowing future alignment with existing open standards.
Goals
- Provide a robust solution for large binary payloads
- Enable resumable and fault-tolerant uploads
Non-Goals
- Mandating a concrete server implementation
- Defining storage backends or persistence strategies
- Replacing existing simple upload endpoints
References
- tus resumable upload protocol: https://tus.io
- tus reference server (open source): https://github.com/tus/tusd
- Google Drive resumable upload (reference only, proprietary)
- I have signed the required Developer Certificate of Origin (DCO) already.