File Storage & Uploads
Most applications need to store files — profile photos, documents, product images, exports. In this course, you will learn how Grit handles file uploads using S3-compatible storage and presigned URLs. You'll start with MinIO for local development, then learn how to switch to cloud providers like Cloudflare R2 and AWS S3 with zero code changes.
What is File Storage?
When users upload files to your application — photos, PDFs, spreadsheets, videos — you need somewhere to store them. You could store files as binary blobs in your database, but you shouldn't: databases are designed for structured data (rows and columns), not large binary files. Instead, files belong in object storage.
Object storage keeps each file as an object in a bucket, identified by a key — a path-like string such as uploads/users/42/photo.jpg. Grit uses a single bucket (e.g. my-app-uploads) and organizes files inside it using key prefixes like uploads/, avatars/, or exports/. Grit uses S3-compatible storage for all file operations. This means the same code works with AWS S3 in production, Cloudflare R2 for cost savings, or MinIO running locally in Docker during development. Your files live outside the database, outside your API server — in dedicated storage built for the job.
How Presigned URLs Work
Grit uses presigned URLs for file uploads. This is the modern, scalable way to handle file uploads — and it's very different from the traditional approach of uploading files through your API.
Here's the flow, step by step:
1. Frontend asks API for an upload URL. The frontend sends the filename and content type to your API: "I want to upload photo.jpg (image/jpeg)."
2. API generates a presigned URL. Your Go API creates a time-limited, signed URL that allows uploading one specific file to one specific location in S3.
3. Frontend uploads directly to S3. The frontend sends the file directly to the presigned URL using a PUT request. The file goes straight to S3 — it never passes through your API server.
4. Frontend confirms the upload. After a successful upload, the frontend tells your API: "The upload to key uploads/abc123/photo.jpg is complete." The API saves a record in the uploads table.
┌──────────┐    1. Request URL     ┌──────────┐
│          │ ────────────────────> │          │
│ Frontend │                       │  Go API  │
│          │ <──────────────────── │          │
└──────────┘   2. Presigned URL    └──────────┘
      │
      │ 3. PUT file directly
      v
┌──────────┐
│    S3    │  (MinIO / R2 / AWS S3)
│ Storage  │
└──────────┘
      │
      │ 4. Frontend confirms upload
      v
┌──────────┐   Save upload record   ┌──────────┐
│ Frontend │ ─────────────────────> │  Go API  │
└──────────┘                        └──────────┘

Why presigned URLs? Three big reasons:
- No request body limits. Your API doesn't handle the file data, so there's no 10MB or 50MB upload limit to configure. Users can upload very large files directly to S3 (a single PUT is capped only by the provider's limit, e.g. 5 GB on AWS S3).
- No API bottleneck. Large files don't consume your API's memory or bandwidth. A 500MB video goes straight to S3 while your API keeps serving other requests.
- Progress tracking works. Because the frontend sends the file directly via XHR, you get real upload progress events. This lets you show a proper progress bar to the user.
Challenge: Explain Presigned URLs
In your own words, explain why presigned URLs are better than uploading files through the API. Write down at least 3 advantages. Bonus: can you think of a scenario where uploading through the API might be acceptable instead?
Storage Configuration
Grit's storage configuration is driven entirely by environment variables. This means you can switch from local MinIO to Cloudflare R2 or AWS S3 without changing any code — just update your .env file.
Here are all the storage-related environment variables:
# Storage Configuration
STORAGE_DRIVER=s3
STORAGE_ENDPOINT=localhost:9000
STORAGE_BUCKET=uploads
STORAGE_ACCESS_KEY=minioadmin
STORAGE_SECRET_KEY=minioadmin
STORAGE_REGION=us-east-1
STORAGE_USE_SSL=false

Let's break down each variable:
- STORAGE_DRIVER — The storage backend to use. Set to s3 for any S3-compatible service (MinIO, R2, AWS S3, B2). This tells Grit which client library to initialize.
- STORAGE_ENDPOINT — The hostname (and optional port) of the storage service. For local MinIO: localhost:9000. For R2: your-account.r2.cloudflarestorage.com. For AWS S3: leave empty or use the regional endpoint.
- STORAGE_BUCKET — The name of the S3 bucket where files will be stored. Grit uses a single bucket and organizes files with key prefixes.
- STORAGE_ACCESS_KEY — The access key ID for authenticating with the storage service. For MinIO: minioadmin. For cloud providers: your IAM access key.
- STORAGE_SECRET_KEY — The secret access key. For MinIO: minioadmin. For cloud providers: your IAM secret key. Keep this secret — never commit it to Git.
- STORAGE_REGION — The AWS region for the bucket. Required by the S3 protocol. For MinIO: us-east-1 (any value works). For R2: auto. For AWS S3: your actual region, like us-west-2.
- STORAGE_USE_SSL — Whether to use HTTPS when connecting to the storage endpoint. Set to false for local MinIO (HTTP). Set to true for any cloud provider (HTTPS).
Tip: the .env file generated by grit new is pre-configured for local MinIO development. You don't need to change anything to get started — just run docker compose up -d and MinIO will be ready.

Challenge: Find Your Storage Variables
Open your project's .env file and find all the STORAGE_ variables. What driver is configured by default? What bucket name is used? What endpoint does it point to?
MinIO for Local Development
In development, you don't want to use a cloud storage service for every file upload test. Instead, Grit uses MinIO — an S3-compatible object storage server that runs locally in Docker.
MinIO is already included in your project's docker-compose.yml. When you run docker compose up -d, MinIO starts alongside PostgreSQL, Redis, and Mailhog. It exposes two ports:
- Port 9000 — The S3 API endpoint. Your Go API connects here to generate presigned URLs and manage files.
- Port 9001 — The MinIO Console (web UI). Open localhost:9001 in your browser to manage buckets and browse files visually.
To access the MinIO Console, open http://localhost:9001 and log in with the default credentials:
Username: minioadmin
Password: minioadmin

Once logged in, you can:
- Browse existing buckets and their contents
- Create new buckets
- Upload and download files manually
- View file metadata (size, content type, last modified)
- Set access policies on buckets
Your project should already have an uploads bucket. If it doesn't exist yet, you can create it through the console:
1. Open http://localhost:9001
2. Log in with minioadmin / minioadmin
3. Click "Buckets" in the sidebar
4. Click "Create Bucket"
5. Enter the name: uploads
6. Click "Create Bucket"
The bucket is now ready to receive files.

Note: MinIO stores its data in a Docker volume. If you run docker compose down -v (with the -v flag), the volumes are deleted and you'll lose all stored files. Without -v, your files are safe.

Challenge: Explore MinIO Console
Make sure Docker is running with docker compose up -d. Open localhost:9001 in your browser and log in with minioadmin / minioadmin. Can you see the uploads bucket? Create a test bucket called "images" and verify it appears in the list.
Uploading a File
Now let's see how file uploads actually work in Grit. The process has two steps: (1) get a presigned URL from your API, and (2) upload the file directly to S3 using that URL.
Step 1: Get a Presigned URL
The frontend sends a POST request to the presign endpoint with the filename and content type:
// Request
{
"filename": "photo.jpg",
"content_type": "image/jpeg"
}
// Response
{
"data": {
"upload_url": "http://localhost:9000/uploads/abc123-photo.jpg?X-Amz-Algorithm=...",
"key": "uploads/abc123-photo.jpg"
}
}

The API generates a unique key for the file (to avoid naming collisions) and returns a presigned URL valid for 15 minutes. The key is the file's path inside the bucket — you'll need it to reference the file later.
Step 2: Upload to S3
The frontend then PUTs the file directly to the presigned URL. Here's the Go handler that generates the presigned URL:
func (h *UploadHandler) Presign(c *gin.Context) {
var req struct {
Filename string `json:"filename" binding:"required"`
ContentType string `json:"content_type" binding:"required"`
}
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(400, gin.H{"error": gin.H{
"code": "VALIDATION_ERROR",
"message": "filename and content_type are required",
}})
return
}
// Generate a unique key to avoid collisions
key := fmt.Sprintf("uploads/%s-%s", uuid.New().String()[:8], req.Filename)
// Generate presigned PUT URL (15 min expiry)
uploadURL, err := h.storage.PresignPut(key, req.ContentType, 15*time.Minute)
if err != nil {
c.JSON(500, gin.H{"error": gin.H{
"code": "STORAGE_ERROR",
"message": "Failed to generate upload URL",
}})
return
}
c.JSON(200, gin.H{"data": gin.H{
"upload_url": uploadURL,
"key": key,
}})
}

And here's the frontend upload component with progress tracking using XHR:
async function uploadFile(file: File) {
// Step 1: Get presigned URL from API
const res = await fetch("/api/uploads/presign", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": "Bearer " + token,
},
body: JSON.stringify({
filename: file.name,
content_type: file.type,
}),
});
const { data } = await res.json();
// Step 2: Upload file directly to S3 with progress
return new Promise((resolve, reject) => {
const xhr = new XMLHttpRequest();
xhr.open("PUT", data.upload_url);
xhr.setRequestHeader("Content-Type", file.type);
xhr.upload.onprogress = (e) => {
if (e.lengthComputable) {
const percent = Math.round((e.loaded / e.total) * 100);
setProgress(percent); // Update progress bar
}
};
xhr.onload = () => {
if (xhr.status === 200) {
resolve(data.key); // Return the file key
} else {
reject(new Error("Upload failed"));
}
};
xhr.onerror = () => reject(new Error("Upload failed"));
xhr.send(file);
});
}

Notice how we use XMLHttpRequest instead of fetch for the actual upload. This is because XHR provides upload.onprogress events, which let us show a real-time progress bar. The fetch API doesn't expose upload progress events.
Challenge: Call the Presign Endpoint
With your project running (grit dev), use the API docs or a tool like curl to call the presign endpoint:
curl -X POST http://localhost:8080/api/uploads/presign \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN" \
-d '{"filename": "test.txt", "content_type": "text/plain"}'

Copy the upload_url from the response. What does it look like? Can you spot the expiration time in the URL parameters?
Challenge: Upload Through the Admin
Generate a resource with an image field:
grit generate resource Product --fields "name:string,price:float,image:string"

Open the admin panel, go to the Products page, and create a new product. Upload an image file through the form. Check the MinIO console at localhost:9001 — can you find your uploaded file in the uploads bucket?
Image Processing
When a user uploads an image, you often need more than just the original file. A 5MB profile photo is too large for a 40x40 avatar in a sidebar. A 4000px product image is too heavy for a thumbnail grid. Grit handles this with background image processing.
Here's how it works:
1. Upload completes. The frontend uploads the original image to S3 via a presigned URL and notifies the API.
2. API enqueues a background job. The upload handler dispatches an image processing job to the asynq task queue (backed by Redis).
3. Worker processes the image. A background worker picks up the job, downloads the original from S3, generates thumbnail and medium-sized versions, and uploads them back to S3.
4. Multiple sizes available. The original, thumbnail, and medium versions are all stored in S3 with predictable keys.
Original Upload: uploads/abc123-photo.jpg (original)
        │
        └─> Background Job (asynq worker)
                │
                ├─> uploads/abc123-photo_thumb.jpg   (150x150)
                └─> uploads/abc123-photo_medium.jpg  (800x600)

The image processing worker in your Go API looks like this:
func (w *ImageWorker) ProcessImage(ctx context.Context, t *asynq.Task) error {
var payload struct {
Key string `json:"key"`
ContentType string `json:"content_type"`
}
if err := json.Unmarshal(t.Payload(), &payload); err != nil {
return fmt.Errorf("unmarshal payload: %w", err)
}
// Download original from S3
original, err := w.storage.Get(payload.Key)
if err != nil {
return fmt.Errorf("download original: %w", err)
}
// Generate thumbnail (150x150)
thumb, err := resize(original, 150, 150)
if err != nil {
return fmt.Errorf("resize thumbnail: %w", err)
}
thumbKey := strings.TrimSuffix(payload.Key, filepath.Ext(payload.Key)) + "_thumb" + filepath.Ext(payload.Key)
if err := w.storage.Put(thumbKey, thumb, payload.ContentType); err != nil {
return fmt.Errorf("upload thumbnail: %w", err)
}
// Generate medium (800x600)
medium, err := resize(original, 800, 600)
if err != nil {
return fmt.Errorf("resize medium: %w", err)
}
mediumKey := strings.TrimSuffix(payload.Key, filepath.Ext(payload.Key)) + "_medium" + filepath.Ext(payload.Key)
if err := w.storage.Put(mediumKey, medium, payload.ContentType); err != nil {
return fmt.Errorf("upload medium: %w", err)
}
return nil
}

Challenge: Check for Thumbnails
Upload a JPG or PNG image through the admin panel (a product image, profile photo, etc.). Wait a few seconds for the background worker to process it, then open the MinIO console at localhost:9001. Browse the uploads bucket. Can you find the original file and its thumbnail versions (_thumb and _medium suffixes)?
Switching to Cloud Storage
When you're ready to deploy, you'll switch from local MinIO to a cloud storage provider. The beauty of S3-compatible storage is that your code doesn't change at all — you only update the .env variables.
Cloudflare R2
Cloudflare R2 is a popular choice because it has no egress fees — you don't pay for downloading files, only for storage and write operations. This can save a lot of money compared to AWS S3.
STORAGE_DRIVER=s3
STORAGE_ENDPOINT=your-account-id.r2.cloudflarestorage.com
STORAGE_BUCKET=my-uploads
STORAGE_ACCESS_KEY=your-r2-access-key
STORAGE_SECRET_KEY=your-r2-secret-key
STORAGE_REGION=auto
STORAGE_USE_SSL=true

AWS S3
AWS S3 is the original and most widely used object storage. If you're already on AWS, it's the natural choice:
STORAGE_DRIVER=s3
STORAGE_ENDPOINT=s3.us-west-2.amazonaws.com
STORAGE_BUCKET=my-app-uploads
STORAGE_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE
STORAGE_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
STORAGE_REGION=us-west-2
STORAGE_USE_SSL=true

Notice: the code is identical in both cases. Only the environment variables change. This is the power of the S3-compatible abstraction. You develop locally with MinIO, deploy to R2 for cost savings, or use AWS S3 if you need AWS-specific features — all without touching your code.
Here's how the storage service decides which driver to use:
func NewStorage(cfg *config.Config) (*Storage, error) {
switch cfg.StorageDriver {
case "s3":
client, err := minio.New(cfg.StorageEndpoint, &minio.Options{
Creds: credentials.NewStaticV4(cfg.StorageAccessKey, cfg.StorageSecretKey, ""),
Secure: cfg.StorageUseSSL,
Region: cfg.StorageRegion,
})
if err != nil {
return nil, fmt.Errorf("init s3 client: %w", err)
}
return &Storage{client: client, bucket: cfg.StorageBucket}, nil
default:
return nil, fmt.Errorf("unsupported storage driver: %s", cfg.StorageDriver)
}
}

Challenge: Read the Storage Service
Open your project's storage service code at apps/api/internal/storage/. Read through the files. How does the service decide which driver to use? What methods does the storage service expose (e.g., PresignPut, Get, Delete)? List all the public methods you can find.
The Upload Model
Every file uploaded through Grit is tracked in the database using the Upload model. This gives you a record of every file — who uploaded it, when, what type it is, and where it's stored in S3.
type Upload struct {
ID uint `gorm:"primaryKey" json:"id"`
Filename string `gorm:"not null" json:"filename"`
Key string `gorm:"not null;uniqueIndex" json:"key"`
ContentType string `gorm:"not null" json:"content_type"`
Size int64 `json:"size"`
UserID uint `json:"user_id"`
User User `gorm:"foreignKey:UserID" json:"user,omitempty"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}

Let's break down each field:
- Filename — The original filename the user uploaded (e.g., photo.jpg). Used for display purposes and when the user downloads the file.
- Key — The unique storage key in S3 (e.g., uploads/abc123-photo.jpg). This is how you locate the file in the bucket. It has a unique index to prevent duplicates.
- ContentType — The MIME type (e.g., image/jpeg, application/pdf, text/csv). Used to set the correct headers when serving the file.
- Size — The file size in bytes. Useful for showing "2.4 MB" in the UI or enforcing storage quotas per user.
- UserID — Which user uploaded the file. This enables per-user file management and access control.
When the frontend confirms a successful upload, the API creates an Upload record:
func (h *UploadHandler) ConfirmUpload(c *gin.Context) {
var req struct {
Key string `json:"key" binding:"required"`
Filename string `json:"filename" binding:"required"`
ContentType string `json:"content_type" binding:"required"`
Size int64 `json:"size"`
}
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(400, gin.H{"error": gin.H{
"code": "VALIDATION_ERROR",
"message": "key, filename, and content_type are required",
}})
return
}
userID := c.GetUint("userID") // from auth middleware
upload := models.Upload{
Filename: req.Filename,
Key: req.Key,
ContentType: req.ContentType,
Size: req.Size,
UserID: userID,
}
if err := h.db.Create(&upload).Error; err != nil {
c.JSON(500, gin.H{"error": gin.H{
"code": "DATABASE_ERROR",
"message": "Failed to save upload record",
}})
return
}
c.JSON(201, gin.H{
"data": upload,
"message": "Upload confirmed",
})
}

Challenge: Explore the Uploads Table
Open GORM Studio at localhost:8080/studio and find the uploads table. What columns does it have? Upload a few files through the admin panel, then refresh the table. Can you see the records with their filenames, keys, content types, and sizes?
Admin File Management
The admin panel includes a Files system page where administrators can view and manage all uploaded files. This page is available in the admin sidebar under the System section.
The Files page provides:
- File list — A DataTable showing all uploads with filename, content type, size, uploader, and upload date
- Preview — Image files show a thumbnail preview directly in the table
- Search and filter — Search by filename, filter by content type or uploader
- Download — Generate a presigned download URL to retrieve any file
- Delete — Remove files from both S3 storage and the database
// The Files page uses the standard DataTable component
// with columns configured for upload records:
const columns = [
{ key: "filename", label: "Filename", sortable: true },
{ key: "content_type", label: "Type", sortable: true },
{ key: "size", label: "Size", sortable: true,
render: (value: number) => formatBytes(value) },
{ key: "user", label: "Uploaded By", sortable: true,
render: (value: any) => value?.name || "Unknown" },
{ key: "created_at", label: "Uploaded", sortable: true,
render: (value: string) => formatDate(value) },
];

When you delete a file from the admin panel, Grit performs a two-step cleanup:
1. Delete the file (and any thumbnails) from S3 storage
2. Delete the Upload record from the database
Challenge: Manage Files in Admin
Upload 3 different files through your application — try an image (JPG/PNG), a document (PDF), and a spreadsheet (CSV or XLSX). Then open the admin panel and navigate to the Files page under the System section in the sidebar. Can you see all 3 files? Try sorting by size, searching by filename, and deleting one of them.
Summary
You've learned how Grit handles file storage — from the concepts of object storage and presigned URLs to hands-on uploading with MinIO and switching to cloud providers. Here's what you covered:
- Object storage — files stored outside the database in S3-compatible storage (MinIO, R2, AWS S3)
- Presigned URLs — time-limited signed URLs that let the frontend upload directly to S3, bypassing the API
- Storage configuration — 7 environment variables that control where and how files are stored
- MinIO — a local S3-compatible server for development, with a web console at port 9001
- Upload flow — presign request, direct S3 upload with XHR progress, then confirm with the API
- Image processing — background jobs that generate thumbnail and medium-sized versions automatically
- Cloud switching — change .env variables to switch between MinIO, Cloudflare R2, and AWS S3 with zero code changes
- Upload model — tracks every file in the database with filename, key, content type, size, and user association
- Admin Files page — system page for viewing, searching, downloading, and deleting uploaded files
Challenge: Presigned URL Quiz
Answer these questions without looking back at the course:
- What HTTP method does the frontend use to upload a file to a presigned URL?
- Why do we use XHR instead of fetch for the upload step?
- What happens if you try to use a presigned URL after 15 minutes?
- What is the default MinIO endpoint in development?
- What STORAGE_REGION value should you use for Cloudflare R2?
Challenge: Storage Driver Deep Dive
Open apps/api/internal/storage/storage.go and read the NewStorage function. Then answer:
- What Go library does Grit use for S3 operations? (Hint: look at the imports)
- How is the Secure option set? What does it control?
- What happens if STORAGE_DRIVER is set to an unsupported value?
Challenge: Compare Cloud Providers
Research the differences between Cloudflare R2, AWS S3, and Backblaze B2. For each provider, find out:
- How much does storage cost per GB/month?
- Are there egress (download) fees?
- Is it S3-compatible (can Grit use it without code changes)?
Challenge: Upload Lifecycle
Trace the full lifecycle of a file upload by finding and reading these files in your project:
- apps/api/internal/handler/upload_handler.go — the Presign and ConfirmUpload handlers
- apps/api/internal/storage/storage.go — the PresignPut method
- apps/api/internal/worker/image_worker.go — the background image processor
- apps/api/internal/models/upload.go — the Upload model
Write down the complete journey of a file from the user's browser to S3 storage to thumbnail generation to database record.
Final Challenge: Build a Photo Gallery
Put everything together. Generate a Photo resource:
grit generate resource Photo --fields "title:string,description:text:optional,image:string"

Then complete these tasks:
- Open the admin panel and find the Photos page
- Upload 5 photos with different titles and descriptions
- Open GORM Studio at localhost:8080/studio and find the photos table — verify all 5 records exist
- Open the MinIO console at localhost:9001 and browse the uploads bucket — find all 5 original images and their thumbnails
- Delete one photo from the admin panel and verify the file is removed from both the database and MinIO