22 commits
e7c7d51
feat: add user metadata support for uploads and RLS policies
TylerHillery Jan 23, 2026
f0e0e49
fix: update metadata parsing logic to handle user metadata correctly
TylerHillery Jan 26, 2026
9a734ca
feat: add file metadata support for uploads and RLS policies
TylerHillery Feb 3, 2026
5f19dea
refactor: simplify userMetadata and contentLength assignment in uploa…
TylerHillery Feb 9, 2026
ad022ac
fix: migration file number
TylerHillery Feb 9, 2026
3109ebf
feat: add contentLength to S3 object creation parameters
TylerHillery Feb 9, 2026
79e96c0
fix: add migration guard for metadata column on s3_multipart_uploads
TylerHillery Feb 9, 2026
e10e404
fix: handle tus uploads differently from first upload vs rest
TylerHillery Feb 10, 2026
5a1791e
fix: change usermetdata back to optional
TylerHillery Feb 25, 2026
81778d8
fix: add x-metadata header
TylerHillery Feb 25, 2026
6c895b8
docs: add todo comment to validate parseUserMetadata
TylerHillery Feb 25, 2026
9fc37d0
fix: add column guard for findMultipartUpload
TylerHillery Feb 25, 2026
573b34f
fix: copy object userMetadata
TylerHillery Feb 25, 2026
cd3ec31
chore: remoe userMetadata now that type is optional
TylerHillery Feb 25, 2026
f8d78d9
fix: call canUpload before shouldAllorPartUpload
TylerHillery Feb 25, 2026
208b479
feat: add more RLS tests
TylerHillery Feb 26, 2026
6afe8c4
feat: add multipart upload operation and tests for S3 uploads
TylerHillery Feb 26, 2026
0f181c2
test: add RLS upload tests to ensure in_progress_size is not mutated …
TylerHillery Feb 26, 2026
8be7f15
fix: formatting
TylerHillery Feb 27, 2026
150920e
fix: incorrectly copying userMetadata
TylerHillery Feb 27, 2026
b4676ca
test(s3-multipart): remove unit tests for integration tests
TylerHillery Feb 28, 2026
2dd0938
fix: rls copy metadata test was wrong
TylerHillery Feb 28, 2026
1 change: 1 addition & 0 deletions migrations/tenant/0057-s3-multipart-uploads-metadata.sql
@@ -0,0 +1 @@
ALTER TABLE storage.s3_multipart_uploads ADD COLUMN IF NOT EXISTS metadata jsonb NULL;
23 changes: 23 additions & 0 deletions src/http/routes/object/getSignedUploadURL.ts
@@ -1,6 +1,7 @@
import { FastifyInstance } from 'fastify'
import { FromSchema } from 'json-schema-to-ts'
import { getConfig } from '../../../config'
import { parseUserMetadata } from '../../../storage/uploader'
import { createDefaultSchema } from '../../routes-helper'
import { AuthenticatedRequest } from '../../types'
import { ROUTE_OPERATIONS } from '../operations'
@@ -20,6 +21,9 @@ const getSignedUploadURLHeadersSchema = {
type: 'object',
properties: {
'x-upsert': { type: 'string' },
'x-metadata': { type: 'string' },
'content-type': { type: 'string' },
'content-length': { type: 'string' },
Member: don't we need x-metadata as well since we try to read below?

Contributor Author: yes, added

authorization: { type: 'string' },
},
required: ['authorization'],
@@ -69,10 +73,29 @@ export default async function routes(fastify: FastifyInstance) {

const urlPath = `${bucketName}/${objectName}`

let userMetadata: Record<string, unknown> | undefined

const customMd = request.headers['x-metadata']

if (typeof customMd === 'string') {
// TODO: parseUserMetadata casts to Record<string, string> but values could be anything;
// validation should be added in a follow-up
userMetadata = parseUserMetadata(customMd)
Member: this parser returns Record<string, string> by casting, due to its expectation (and S3 compatibility), but there is no validation; values could be anything. This is an existing helper, so feel free to add a TODO to be addressed later.

Member: I looked into supabase-js and the S3 part of this; the type is wrong, but the handling is correct and we return an x-amz-meta-missing header for non-string/non-printable values.

@ferhatelmas (Member), Feb 24, 2026: Another thing to note is that when metadata is set via a multipart/form-data body we check/limit its size, but not when it comes from a header. However, the limit is 1kb and a header can definitely be larger than that. Edit: I saw the value wrong, it's 1mb, so this is a valid assumption. We could reuse the code instead of two parsers as a cleanup, but not in this PR.

Contributor Author: added TODO

}

const contentType = request.headers['content-type']
const contentLengthHeader = request.headers['content-length']
const contentLength = contentLengthHeader ? Number(contentLengthHeader) : undefined

const signedUpload = await request.storage
.from(bucketName)
.signUploadObjectUrl(objectName, urlPath as string, uploadSignedUrlExpirationTime, owner, {
upsert: request.headers['x-upsert'] === 'true',
userMetadata,
metadata: {
mimetype: contentType,
contentLength,
},
})

return response.status(200).send({ url: signedUpload.url, token: signedUpload.token })
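For context on the new header: the review thread above indicates the x-metadata value is base64-encoded JSON that parseUserMetadata casts to Record<string, string> without validating the values. A minimal round-trip sketch under that assumption (encodeUserMetadata is a hypothetical client-side helper, and this parse is only an approximation of the real one):

```typescript
// Hypothetical client helper: serialize user metadata for the x-metadata header,
// assuming the server expects base64-encoded JSON.
function encodeUserMetadata(md: Record<string, unknown>): string {
  return Buffer.from(JSON.stringify(md)).toString('base64')
}

// Approximation of the server-side parseUserMetadata: decode and parse, casting
// to Record<string, string> without validating values (the gap the TODO flags).
function parseUserMetadata(encoded: string): Record<string, string> {
  return JSON.parse(Buffer.from(encoded, 'base64').toString('utf8')) as Record<string, string>
}

const header = encodeUserMetadata({ project: 'demo', priority: '1' })
parseUserMetadata(header) // round-trips to { project: 'demo', priority: '1' }
```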
2 changes: 1 addition & 1 deletion src/http/routes/tus/index.ts
@@ -137,7 +137,7 @@ function createTusServer(
namingFunction,
onUploadCreate: onCreate,
onUploadFinish,
onIncomingRequest,
onIncomingRequest: (req, id) => onIncomingRequest(req, id, datastore),
generateUrl,
getFileIdFromRequest,
onResponseError,
46 changes: 44 additions & 2 deletions src/http/routes/tus/lifecycle.ts
@@ -3,7 +3,7 @@ import { ERRORS, isRenderableError } from '@internal/errors'
import { UploadId } from '@storage/protocols/tus'
import { Storage } from '@storage/storage'
import { Uploader, validateMimeType } from '@storage/uploader'
import { Upload } from '@tus/server'
import { DataStore, Metadata, Upload } from '@tus/server'
import { randomUUID } from 'crypto'
import http from 'http'
import { BaseLogger } from 'pino'
@@ -44,7 +44,7 @@ export type MultiPartRequest = http.IncomingMessage & {
/**
* Runs on every TUS incoming request
*/
export async function onIncomingRequest(rawReq: Request, id: string) {
export async function onIncomingRequest(rawReq: Request, id: string, datastore: DataStore) {
const req = getNodeRequest(rawReq)
const res = rawReq.node?.res as http.ServerResponse

@@ -92,11 +92,53 @@ export async function onIncomingRequest(rawReq: Request, id: string) {
req.upload.storage.location
)

let contentType: string | undefined
let contentLength: number | undefined
let rawMetadata: string | null | undefined

if (req.method === 'POST') {
const uploadMetadataHeader = req.headers['upload-metadata']
if (uploadMetadataHeader && typeof uploadMetadataHeader === 'string') {
try {
const parsedMetadata = Metadata.parse(uploadMetadataHeader)
contentType = parsedMetadata?.contentType ?? undefined
rawMetadata = parsedMetadata?.metadata
} catch (e) {
req.log.warn({ error: e }, 'Failed to parse upload metadata')
throw ERRORS.InvalidParameter('upload-metadata', {
error: e as Error,
message: 'Invalid Upload-Metadata header',
})
}
}
const uploadLength = req.headers['upload-length']
contentLength = uploadLength ? Number(uploadLength) : undefined
} else {
const upload = await datastore.getUpload(id)
contentType = upload.metadata?.contentType ?? undefined
contentLength = upload.size ?? undefined
rawMetadata = upload.metadata?.metadata
}

let customMd: Record<string, string> | undefined
if (rawMetadata) {
try {
customMd = JSON.parse(rawMetadata)
} catch (e) {
req.log.warn({ error: e }, 'Failed to parse user metadata')
}
}

await uploader.canUpload({
owner: req.upload.owner,
bucketId: uploadID.bucket,
objectName: uploadID.objectName,
isUpsert,
userMetadata: customMd,
metadata: {
mimetype: contentType,
contentLength,
},
})
}
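The Upload-Metadata header consumed by Metadata.parse above follows the tus 1.0 wire format: comma-separated "key base64value" pairs, where a bare key carries no value. A simplified sketch of that format for illustration (the real @tus/server parser additionally validates keys and rejects duplicates):

```typescript
// Simplified sketch of tus Upload-Metadata parsing (per the tus 1.0 spec):
// comma-separated "key base64value" pairs; a bare key maps to undefined.
function parseTusMetadata(header: string): Record<string, string | undefined> {
  const out: Record<string, string | undefined> = {}
  for (const pair of header.split(',')) {
    const [key, value] = pair.trim().split(' ')
    if (!key) continue
    out[key] = value ? Buffer.from(value, 'base64').toString('utf8') : undefined
  }
  return out
}

const header = [
  `contentType ${Buffer.from('image/png').toString('base64')}`,
  `metadata ${Buffer.from('{"tag":"x"}').toString('base64')}`,
].join(',')
parseTusMetadata(header) // { contentType: 'image/png', metadata: '{"tag":"x"}' }
```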

1 change: 1 addition & 0 deletions src/internal/database/migrations/types.ts
@@ -56,4 +56,5 @@ export const DBMigration = {
'drop-index-object-level': 54,
'prevent-direct-deletes': 55,
'fix-optimized-search-function': 56,
's3-multipart-uploads-metadata': 57,
}
3 changes: 2 additions & 1 deletion src/storage/database/adapter.ts
@@ -195,7 +195,8 @@ export interface Database {
version: string,
signature: string,
owner?: string,
metadata?: Record<string, string | null>
userMetadata?: Record<string, string | null>,
metadata?: Partial<ObjectMetadata>
): Promise<S3MultipartUpload>

findMultipartUpload(
51 changes: 35 additions & 16 deletions src/storage/database/knex.ts
@@ -902,22 +902,33 @@ export class StorageKnexDB implements Database {
version: string,
signature: string,
owner?: string,
metadata?: Record<string, string | null>
userMetadata?: Record<string, string | null>,
metadata?: Partial<ObjectMetadata>
) {
return this.runQuery('CreateMultipartUpload', async (knex, signal) => {
const data: Record<string, unknown> = {
id: uploadId,
bucket_id: bucketId,
key: objectName,
version,
upload_signature: signature,
owner_id: owner,
user_metadata: userMetadata,
}

// TODO: move this guard into normalizeColumns once it is table-aware.
// metadata was added to s3_multipart_uploads in migration 57 but has existed on
// objects since much earlier, so a table-agnostic rule would incorrectly strip it.
if (
!this.latestMigration ||
DBMigration[this.latestMigration] >= DBMigration['s3-multipart-uploads-metadata']
) {
data.metadata = metadata
}

const multipart = await knex
.table<S3MultipartUpload>('s3_multipart_uploads')
.insert(
this.normalizeColumns({
id: uploadId,
bucket_id: bucketId,
key: objectName,
version,
upload_signature: signature,
owner_id: owner,
user_metadata: metadata,
})
)
.insert(this.normalizeColumns(data))
.returning('*')
.abortOnSignal(signal)

@@ -927,10 +938,18 @@

async findMultipartUpload(uploadId: string, columns = 'id', options?: { forUpdate?: boolean }) {
const multiPart = await this.runQuery('FindMultipartUpload', async (knex, signal) => {
const query = knex
.from('s3_multipart_uploads')
.select(columns.split(','))
.where('id', uploadId)
// TODO: move this guard into normalizeColumns once it is table-aware.
// metadata was added to s3_multipart_uploads in migration 57 but has existed on
// objects since much earlier, so a table-agnostic rule would incorrectly strip it.
const hasMetadataColumn =
!this.latestMigration ||
DBMigration[this.latestMigration] >= DBMigration['s3-multipart-uploads-metadata']

const cols = hasMetadataColumn
? columns.split(',')
: columns.split(',').filter((col) => col.trim() !== 'metadata')

const query = knex.from('s3_multipart_uploads').select(cols).where('id', uploadId)

if (options?.forUpdate) {
return query.abortOnSignal(signal).forUpdate().first()
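The two TODOs in this file describe making normalizeColumns table-aware so these guards stop being hand-rolled per call site. A hypothetical sketch of that direction, keyed by the migration that introduced each column (the names and shape here are illustrative, not the codebase's actual API):

```typescript
// Hypothetical table-aware column guard: drop columns the tenant's schema
// doesn't have yet, based on which migration introduced them.
type ColumnIntroductions = Record<string, Record<string, number>>

const introducedIn: ColumnIntroductions = {
  // metadata was added to s3_multipart_uploads by migration 57
  // ('s3-multipart-uploads-metadata'); objects has had it far longer.
  s3_multipart_uploads: { metadata: 57 },
}

function filterColumnsForTable(
  table: string,
  columns: string[],
  latestMigration: number | undefined
): string[] {
  const tableCols = introducedIn[table]
  // Mirror the diff's behavior: an unknown migration level includes everything.
  if (!tableCols || latestMigration === undefined) return columns
  return columns.filter((col) => {
    const introduced = tableCols[col.trim()]
    return introduced === undefined || latestMigration >= introduced
  })
}

// A tenant still on migration 56 silently drops `metadata`:
filterColumnsForTable('s3_multipart_uploads', ['id', 'metadata'], 56) // ['id']
```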
17 changes: 14 additions & 3 deletions src/storage/object.ts
@@ -17,7 +17,7 @@ import {
ObjectUpdatedMetadata,
} from './events'
import { mustBeValidKey } from './limits'
import { fileUploadFromRequest, Uploader, UploadRequest } from './uploader'
import { CanUploadMetadata, fileUploadFromRequest, Uploader, UploadRequest } from './uploader'

const { requestUrlLengthLimit } = getConfig()

@@ -97,6 +97,7 @@ export class ObjectStorage {
owner: file.owner,
isUpsert: Boolean(file.isUpsert),
signal: file.signal,
userMetadata: uploadRequest.userMetadata,
})
}

@@ -332,11 +333,15 @@ export class ObjectStorage {
...(fileMetadata || {}),
}

const destinationUserMetadata = copyMetadata ? originObject.user_metadata : userMetadata

await this.uploader.canUpload({
bucketId: destinationBucket,
objectName: destinationKey,
owner,
isUpsert: upsert,
userMetadata: destinationUserMetadata || undefined,
metadata: destinationMetadata,
})

try {
@@ -381,7 +386,7 @@
lastModified: copyResult.lastModified,
eTag: copyResult.eTag,
},
user_metadata: copyMetadata ? originObject.user_metadata : userMetadata,
user_metadata: destinationUserMetadata,
version: newVersion,
})

@@ -790,14 +795,20 @@
url: string,
expiresIn: number,
owner?: string,
options?: { upsert?: boolean }
options?: {
upsert?: boolean
userMetadata?: Record<string, unknown>
metadata?: CanUploadMetadata
}
) {
// check if user has INSERT permissions
await this.uploader.canUpload({
bucketId: this.bucketId,
objectName,
owner,
isUpsert: options?.upsert ?? false,
userMetadata: options?.userMetadata,
metadata: options?.metadata,
})

const { urlSigningKey } = await getJwtSecret(this.db.tenantId)