TechLead
Lesson 17 of 22
5 min read
Supabase

Advanced File Upload Patterns

Production file upload patterns with Supabase Storage including resumable uploads, image transforms, and CDN caching

Supabase Storage goes far beyond simple file uploads. In production, you need resumable uploads for large files, image transformations for responsive images, signed URLs for secure access, and smart folder organization. This guide covers battle-tested patterns for real applications.

🚀 What You'll Learn

  • Resumable Uploads: Upload large files reliably with TUS protocol
  • Image Transforms: Resize, crop, and convert images on the fly
  • Signed URLs: Secure, time-limited access to private files
  • Storage Policies: RLS-like rules for file access control

Resumable Uploads with TUS Protocol

For files over 6MB, resumable uploads prevent lost progress on network failures.

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(SUPABASE_URL, SUPABASE_KEY)

// Standard upload: fine for small files, but a network failure
// means starting over from the beginning. For large files, use
// the TUS client below, which can resume mid-upload.
async function uploadFile(file: File, bucket: string, path: string) {
  const { data, error } = await supabase.storage
    .from(bucket)
    .upload(path, file, {
      contentType: file.type,
    })

  if (error) throw error
  return data
}

// Using tus-js-client for progress tracking and resume support
import * as tus from 'tus-js-client'

async function uploadWithProgress(file: File, bucket: string, path: string) {
  // The resumable endpoint requires an authenticated session
  const { data: { session } } = await supabase.auth.getSession()
  if (!session) throw new Error('Not authenticated')

  return new Promise((resolve, reject) => {
    const upload = new tus.Upload(file, {
      endpoint: `${SUPABASE_URL}/storage/v1/upload/resumable`,
      retryDelays: [0, 3000, 5000, 10000],
      headers: {
        authorization: `Bearer ${session.access_token}`,
        'x-upsert': 'true',
      },
      metadata: {
        bucketName: bucket,
        objectName: path,
        contentType: file.type,
      },
      chunkSize: 6 * 1024 * 1024, // 6MB, the minimum Supabase accepts
      onError: reject,
      onProgress: (bytesUploaded, bytesTotal) => {
        const pct = ((bytesUploaded / bytesTotal) * 100).toFixed(1)
        console.log(`Upload: ${pct}%`)
      },
      onSuccess: () => resolve(upload),
    })

    // Resume from a previous attempt if one exists,
    // instead of re-uploading chunks that already succeeded
    upload.findPreviousUploads().then((previous) => {
      if (previous.length > 0) {
        upload.resumeFromPreviousUpload(previous[0])
      }
      upload.start()
    })
  })
}

Image Transformations

// Get a transformed image URL — no pre-processing needed!
const { data } = supabase.storage
  .from('avatars')
  .getPublicUrl('user-123/profile.jpg', {
    transform: {
      width: 200,
      height: 200,
      resize: 'cover',   // 'cover' | 'contain' | 'fill'
      quality: 80,
      format: 'origin',  // 'origin' to keep original format
    },
  })

// Responsive images with multiple sizes
function getResponsiveUrls(path: string) {
  const sizes = [320, 640, 1024, 1920]
  return sizes.map(width => {
    const { data } = supabase.storage
      .from('images')
      .getPublicUrl(path, {
        transform: { width, quality: 80 },
      })
    return { width, url: data.publicUrl }
  })
}

// Use in HTML srcset
// <img srcset="url-320 320w, url-640 640w, url-1024 1024w" />
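
The `{ width, url }` pairs from `getResponsiveUrls` can be joined into that srcset value with a small helper. A minimal sketch (`buildSrcSet` is a hypothetical helper for illustration, not part of the Supabase SDK):

```typescript
// Turn { width, url } pairs into an HTML srcset attribute value
function buildSrcSet(entries: { width: number; url: string }[]): string {
  return entries.map((e) => `${e.url} ${e.width}w`).join(', ')
}
```

Pass the result straight to an img element's srcset attribute, e.g. `<img srcset={buildSrcSet(getResponsiveUrls('hero.jpg'))} />`.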

Signed URLs for Private Files

// Generate a signed URL (expires in 1 hour)
const { data, error } = await supabase.storage
  .from('private-docs')
  .createSignedUrl('reports/q4-2024.pdf', 3600) // 3600 seconds = 1 hour

console.log(data.signedUrl) // Time-limited URL

// Signed upload URL — let clients upload without your API key
const { data: uploadData } = await supabase.storage
  .from('uploads')
  .createSignedUploadUrl('user-files/document.pdf')

// Client uploads directly with the returned token — no API key needed
await supabase.storage
  .from('uploads')
  .uploadToSignedUrl('user-files/document.pdf', uploadData.token, file)

Storage Policies

-- Users can upload to their own folder
CREATE POLICY "Users can upload own files"
ON storage.objects FOR INSERT
WITH CHECK (
  bucket_id = 'avatars'
  AND (storage.foldername(name))[1] = auth.uid()::text
);

-- Users can read their own files
CREATE POLICY "Users can read own files"
ON storage.objects FOR SELECT
USING (
  bucket_id = 'avatars'
  AND (storage.foldername(name))[1] = auth.uid()::text
);

-- Public bucket: anyone can read
CREATE POLICY "Public read access"
ON storage.objects FOR SELECT
USING (bucket_id = 'public-images');

-- Limit file sizes and types (via bucket configuration)
-- Configure in Supabase Dashboard > Storage > Bucket Settings
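
The policies above cover uploads and reads; deletes follow the same per-user folder pattern. A sketch, assuming the same 'avatars' bucket and folder convention:

```sql
-- Users can delete their own files
CREATE POLICY "Users can delete own files"
ON storage.objects FOR DELETE
USING (
  bucket_id = 'avatars'
  AND (storage.foldername(name))[1] = auth.uid()::text
);
```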

Organizing Files

// Recommended folder structure
// {bucket}/{user_id}/{category}/{filename}

async function uploadAvatar(userId: string, file: File) {
  const ext = file.name.split('.').pop()
  const path = `${userId}/avatar.${ext}`

  const { data, error } = await supabase.storage
    .from('avatars')
    .upload(path, file, {
      upsert: true,  // Replace existing avatar
      contentType: file.type,
      cacheControl: '3600',  // CDN cache for 1 hour
    })

  return data
}

// List files in a folder
const { data: files } = await supabase.storage
  .from('documents')
  .list(`${userId}/reports`, {
    limit: 100,
    offset: 0,
    sortBy: { column: 'created_at', order: 'desc' },
  })
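
Building object keys by hand invites inconsistency; a small helper can enforce the {user_id}/{category}/{filename} convention and sanitize names before upload (`buildStoragePath` is a hypothetical helper for illustration):

```typescript
// Build an object key following the {user_id}/{category}/{filename}
// convention, replacing characters that are awkward in storage keys
function buildStoragePath(
  userId: string,
  category: string,
  filename: string
): string {
  const safe = filename.replace(/[^a-zA-Z0-9._-]/g, '_')
  return `${userId}/${category}/${safe}`
}
```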

💡 Key Takeaways

  • Use resumable uploads (TUS) for files larger than 6MB
  • Image transformations happen on the CDN — no server-side processing needed
  • Signed URLs provide time-limited access to private files
  • Storage policies control who can upload, read, and delete files
  • Organize files with a {user_id}/{category}/{filename} structure
