Courses/Grit Web/Deploy to Production
Course 8 of 8 · ~30 min · 14 challenges

Deploy to Production

In this final course, you will learn how to take your Grit app from localhost to a live server that real users can access. You will learn the grit deploy command, how systemd keeps your app running, how Caddy provides automatic HTTPS, and how Docker gives you an alternative deployment path. By the end, you will have a complete deployment playbook.


What is Deployment?

During development, your app runs on localhost:3000 or localhost:8080. Only you can access it from your own machine. Deployment is the process of putting your app on a real server with a real domain so that anyone on the internet can use it. It is the final step that turns your project into a product.

Deployment: The process of transferring your application from your local development machine to a remote server where it can be accessed by real users over the internet. It includes building your code, uploading it, configuring the server, and starting the application.
Production: The live environment where real users interact with your application. Unlike development (where errors are expected), production must be stable, secure, and performant. You never test in production — you test locally and in staging, then deploy to production when everything works.
VPS (Virtual Private Server): A remote computer in the cloud that you rent from a provider like DigitalOcean, Hetzner, or Linode. You get root access to install software, run your app, and configure your server. It is like having your own computer in a data center, but it is a virtual slice of a larger physical machine.

On your local machine, you type grit dev and everything just works. But your users need to access your app from their browsers — they cannot connect to your laptop. A VPS gives your app a permanent home on the internet with a public IP address. You point your domain name to that IP, and suddenly myapp.com loads your Grit application.

Challenge 1: Localhost vs Production

What's the difference between running your app on localhost and deploying it to production? Think about who can access it, what URL they use, and what happens when you close your laptop.

The grit deploy Command

Grit gives you a single command that handles the entire deployment pipeline — building your code, uploading it to your server, configuring the process manager, and setting up HTTPS. Here is the full command:

grit deploy --host deploy@server.com --domain myapp.com

That's it. One command, and your app is live. Let's break down every flag:

--host deploy@server.com

The SSH connection string. deploy is the username on your server, and server.com is your server's IP address or hostname. Grit connects to this address to upload your app and configure services.

--domain myapp.com

Your domain name. When provided, Grit configures Caddy to serve your app at this domain with automatic HTTPS. If omitted, your app is accessible only via IP address on the app port.

--port 22

The SSH port on your server. Defaults to 22. Some servers use a custom port like 2222 for extra security.

--key ~/.ssh/id_rsa

Path to your SSH private key file. If your server uses key-based authentication (recommended), point this to your private key.

--app-port 8080

The port your Go API listens on. Defaults to 8080. Caddy will forward traffic from port 443 (HTTPS) to this port internally.

SSH (Secure Shell): A protocol for securely connecting to a remote server. You use it to run commands on your server from your local computer. When you type ssh deploy@server.com, you get a terminal on the remote machine — as if you were sitting in front of it. All traffic is encrypted, so passwords and commands cannot be intercepted.

Challenge 2: Custom Deploy Flags

Your server is at IP 192.168.1.100, your domain is shop.example.com, and you use a custom SSH port 2222. Write the full grit deploy command with the correct --host, --domain, and --port flags.

What Happens During Deploy

When you run grit deploy, a 5-step pipeline executes automatically. Understanding each step helps you debug deployment issues and customize the process when needed.

Step 1: Cross-compile Go binary for Linux

Your development machine might run Windows or macOS, but your server runs Linux. Go makes cross-compilation trivial — just set the target OS and architecture.

Step 2: Build frontend if present

If your project has a Next.js frontend, Grit runs pnpm build to create production-optimized static files.

Step 3: Upload binary to server via SCP

The compiled binary is securely copied to /opt/myapp/ on your server.

Step 4: Create systemd service with auto-restart

Grit writes a systemd unit file that keeps your app running 24/7 and automatically restarts it if it crashes.

Step 5: Configure Caddy reverse proxy with auto-TLS

If you provided --domain, Grit configures Caddy to handle HTTPS with automatic certificate provisioning from Let's Encrypt.

Cross-compilation: Building a program on one OS (like Windows or Mac) that runs on a different OS (Linux). Go makes this easy — just set the GOOS and GOARCH environment variables. No extra tools or virtual machines needed. This is one of Go's biggest strengths for deployment.
SCP (Secure Copy): A command that copies files between your computer and a remote server over SSH. Like cp, but over the network. When Grit uploads your binary to the server, it uses SCP under the hood to securely transfer the file.

Here is the exact build command Grit runs internally during Step 1:

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/myapp ./cmd/server

Let's break this down:

CGO_ENABLED=0

Disables C bindings (cgo). This produces a fully static binary with zero external dependencies. Your binary will run on any Linux machine without needing to install libraries.

GOOS=linux

Target operating system. Even if you are on Windows or macOS, Go will produce a Linux binary.

GOARCH=amd64

Target CPU architecture. Most VPS servers use 64-bit x86 processors (amd64). If your server uses ARM (like some AWS instances), you would use GOARCH=arm64.

-o bin/myapp

Output path. The compiled binary will be saved as bin/myapp in your project directory.
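
The flag worth double-checking is GOARCH. One way to avoid guessing is to ask the server for its architecture first and map it to the matching Go value. The goarch_for helper below is a hypothetical sketch, not part of Grit:

```shell
# Map the output of `uname -m` (run on the server) to a GOARCH value.
goarch_for() {
  case "$1" in
    x86_64)  echo amd64 ;;   # typical Intel/AMD VPS
    aarch64) echo arm64 ;;   # ARM servers, e.g. some AWS instances
    *)       echo "unsupported: $1" >&2; return 1 ;;
  esac
}

# In real use the argument would come from: ssh deploy@server.com uname -m
ARCH=$(goarch_for x86_64)
echo "CGO_ENABLED=0 GOOS=linux GOARCH=$ARCH go build -o bin/myapp ./cmd/server"
```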

Challenge 3: Build Command Breakdown

Explain what each part of the build command does: CGO_ENABLED=0, GOOS=linux, GOARCH=amd64, -o bin/myapp. Why is CGO_ENABLED=0 important for deployment?

systemd — The Process Manager

Once your binary is on the server, something needs to keep it running. You cannot just SSH in and type ./myapp — the process would die the moment you close your terminal. That's where systemd comes in.

systemd: The init system and service manager for most modern Linux distributions (Ubuntu, Debian, CentOS, Fedora). It starts and manages long-running processes called "services." When you tell systemd to manage your app, it ensures the app starts on boot, restarts on crash, and logs output to the system journal.

Grit automatically generates a systemd service file for your app. Here is what it looks like:

[Unit]
Description=myapp
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/myapp
Restart=on-failure
RestartSec=5
EnvironmentFile=/opt/myapp/.env

[Install]
WantedBy=multi-user.target

Let's understand each section:

[Unit] — Metadata

Description is a human-readable name. After=network.target means "wait until the network is available before starting." Your app needs a network to listen for HTTP requests.

[Service] — How to run

Type=simple means the process runs in the foreground. User=www-data runs the app as a non-root user for security. ExecStart is the command to launch your binary. Restart=on-failure means systemd will restart the app if it exits with an error. RestartSec=5 waits 5 seconds between restart attempts. EnvironmentFile loads your .env variables.

[Install] — When to start

WantedBy=multi-user.target means the service starts during normal system boot (when the server has network and multi-user capabilities). This is the standard target for server applications.

Service: A long-running program managed by the operating system. Your Go API runs as a systemd service so it stays alive 24/7 — even after you disconnect from SSH, even after the server reboots. The operating system itself is responsible for keeping your app running.

You can check your service status with sudo systemctl status myapp, view logs with sudo journalctl -u myapp -f (live tail), and manually restart with sudo systemctl restart myapp.

Challenge 4: systemd Crash Recovery

Read the systemd file above. What happens if the app crashes? How does Restart=on-failure help? How long does systemd wait before restarting (RestartSec=5)?


Challenge 5: Security and User Permissions

What user does the app run as? Why is www-data used instead of root? What could go wrong if your app ran as root and had a security vulnerability?

Caddy — Reverse Proxy with Auto-TLS

Your Go app listens on port 8080, but users expect to visit https://myapp.com (port 443). Something needs to sit between the user's browser and your app to handle HTTPS, compress responses, and add security headers. That something is Caddy.

Reverse Proxy: A server that sits in front of your application and forwards requests to it. The browser talks to the proxy (Caddy on port 443), and the proxy talks to your app (port 8080). Users never connect to your app directly — they always go through Caddy. This adds a security layer and lets Caddy handle HTTPS, compression, and caching.
TLS (Transport Layer Security): The encryption protocol that makes HTTPS work. When you see the padlock icon in your browser, that's TLS in action. It encrypts all data between the browser and the server so that passwords, credit cards, and personal data cannot be intercepted by anyone on the network.
Let's Encrypt: A free, automated certificate authority that provides TLS certificates. Before Let's Encrypt, you had to buy certificates and manually install them. Caddy integrates with Let's Encrypt automatically — it obtains, installs, and renews certificates without you doing anything.

Here is the Caddy configuration that Grit generates:

myapp.com {
    reverse_proxy localhost:8080
    encode gzip
    header {
        X-Frame-Options "DENY"
        X-Content-Type-Options "nosniff"
        Referrer-Policy "strict-origin-when-cross-origin"
        -Server
    }
    log {
        output file /var/log/caddy/myapp.log {
            roll_size 10mb
            roll_keep 5
        }
    }
}

Let's understand what each directive does:

myapp.com

The domain name. Caddy automatically obtains a TLS certificate from Let's Encrypt for this domain. HTTPS is enabled by default — you do not need to configure it.

reverse_proxy localhost:8080

Forward all incoming requests to your Go app running on port 8080.

encode gzip

Compress responses with gzip. This makes your API responses and pages load faster by reducing the data sent over the network.

X-Frame-Options "DENY"

Prevents your site from being embedded in an iframe. Protects against clickjacking attacks.

X-Content-Type-Options "nosniff"

Tells browsers to trust the Content-Type header and not try to guess the file type. Prevents MIME-type sniffing attacks.

Referrer-Policy "strict-origin-when-cross-origin"

Controls how much referrer information is sent when navigating away from your site. Reduces information leakage.

-Server

Removes the Server response header. By default, Caddy advertises itself in the header. Removing it hides your technology stack from potential attackers — they do not need to know you are using Caddy.

HTTPS is completely automatic with Caddy. You do not need to run certbot, buy certificates, or configure renewal cron jobs. Caddy handles obtaining, installing, and renewing TLS certificates from Let's Encrypt. It even redirects HTTP to HTTPS automatically.

Challenge 6: Security Headers

In the Caddy config, what security headers are set? Why is the Server header removed with -Server? What information would an attacker gain if the Server header was present?

Environment Variables for Deploy

Typing --host, --domain, and --key every time you deploy is tedious and error-prone. Instead, you can set environment variables in your .env file and Grit will read them automatically.

DEPLOY_HOST=deploy@server.com
DEPLOY_KEY_FILE=~/.ssh/id_rsa
DEPLOY_DOMAIN=myapp.com

With these variables set, deploying becomes a single word:

grit deploy

No flags needed. Grit reads DEPLOY_HOST, DEPLOY_KEY_FILE, and DEPLOY_DOMAIN from your .env file. You can still override any variable with a flag — flags take priority over environment variables.

Keep your .env file out of version control (it is in .gitignore by default). Each developer and server has its own .env with different values. Use .env.example as a template that IS committed to git, showing which variables are needed without revealing actual values.

Challenge 7: Configure Deploy Variables

Add DEPLOY_HOST and DEPLOY_DOMAIN to your .env file for a server at deploy@198.51.100.42 with domain mystore.com. Then run grit deploy without any flags.

Maintenance Mode During Deploy

Sometimes you need to take your app offline briefly during a deployment — for example, when running database migrations that change table structures. Grit provides maintenance mode for this:

grit down              # 503 for all requests
grit deploy --host ... # Deploy new version
grit up                # Back online

When maintenance mode is active, every API request receives a 503 Service Unavailable response with a friendly message. Your frontend can detect this status code and show a "We'll be right back" page instead of confusing error messages.

Maintenance Mode: A state where your app returns 503 Service Unavailable for all requests. Useful during deployments so users see a "we'll be right back" message instead of errors. It is a controlled way to take your app offline temporarily, unlike a crash where users see broken pages.
For most deployments, you do NOT need maintenance mode. The grit deploy command handles the binary swap and systemd restart so quickly that there is near-zero downtime. Use maintenance mode only when you are running migrations that could break things if the old code and new database schema are running simultaneously.
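
A deploy script can branch on that 503 and wait out maintenance mode before running smoke tests. The check_status helper below is a hypothetical sketch; in real use, the status code would come from curl -s -o /dev/null -w '%{http_code}' https://myapp.com/api/health:

```shell
# Decide what to do based on the HTTP status code of a health check.
check_status() {
  case "$1" in
    200) echo "up" ;;
    503) echo "maintenance mode - try again later" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}

check_status 503   # prints: maintenance mode - try again later
```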

Challenge 8: Maintenance Mode Test

Run grit down in your local project. Try accessing the API — what HTTP status code do you get? What message does the response contain? Run grit up to bring it back online.

Docker Deployment (Alternative)

The grit deploy command is designed for simple VPS deployment — one server, one app. For more complex setups where you need to run your API, database, and cache all together in isolated containers, Docker with Docker Compose is the way to go.

Grit scaffolds a docker-compose.prod.yml file for production Docker deployment:

services:
  api:
    build: ./apps/api
    ports:
      - "8080:8080"
    env_file: .env
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine

This defines your entire production stack: the API service (built from your Go code), PostgreSQL for the database, and Redis for caching and job queues. Docker Compose starts all three together and handles networking between them automatically.

Docker Compose (production): A configuration file that defines how to run your entire application stack (API, database, cache) in Docker containers on a production server. Unlike the development docker-compose.yml, which includes tools like Mailhog and MinIO, the production version only includes what's needed to serve real users. The volumes directive ensures database data persists even if the container restarts.

To deploy with Docker, copy your project to the server and run:

docker compose -f docker-compose.prod.yml up -d --build

Use grit deploy when you have a simple setup — one VPS, one app, and you want the fastest path to production. Use Docker when you need reproducible environments, are deploying to multiple servers, or want to use container orchestration tools like Kubernetes in the future.
Challenge 9: Dev vs Production Docker

Compare docker-compose.yml (development) with docker-compose.prod.yml (production). What services are present in development but missing in production? Why are tools like Mailhog and MinIO not needed in production?

Production Checklist

Before going live, walk through this checklist. Missing any of these items could lead to security vulnerabilities, data loss, or embarrassing errors in front of real users.

Change JWT_SECRET to a strong random string

The default secret is for development only. Generate a random 64-character string with openssl rand -hex 32 and set it in your production .env.
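
The generation step can be scripted so you never type a weak secret by hand. A minimal sketch, assuming openssl is installed:

```shell
# 32 random bytes, hex-encoded = a 64-character secret.
SECRET=$(openssl rand -hex 32)

# Paste the printed line into your production .env (never commit it).
echo "JWT_SECRET=$SECRET"
```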

Set APP_ENV=production

This disables debug logging, enables stricter security checks, and optimizes performance settings.

Configure real STORAGE_DRIVER (S3 or R2, not MinIO)

MinIO is a development stand-in. In production, use Cloudflare R2, AWS S3, or Backblaze B2 for reliable, scalable file storage.

Set RESEND_API_KEY for real emails

Mailhog catches emails in development but does not send them. Set your Resend API key so password reset emails, welcome emails, and notifications actually reach users.

Remove default passwords

Change all default passwords for GORM Studio, Sentinel rate limiter dashboard, and Pulse monitoring. Attackers know the defaults.

Set up database backups

Schedule daily PostgreSQL backups with pg_dump and store them off-server (S3 bucket, another server). Test restoring from a backup before you need it.
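
A minimal nightly backup script, assuming a database named myapp and an S3-compatible bucket called myapp-backups (both names are placeholders; adjust for your setup, and schedule it with cron, e.g. 0 3 * * * /opt/myapp/backup.sh):

```shell
#!/bin/sh
# Nightly backup sketch: dump, compress, ship off-server, clean up.
STAMP=$(date +%Y-%m-%d)
FILE="/tmp/myapp-$STAMP.sql.gz"

pg_dump myapp | gzip > "$FILE"       # requires DB credentials on the server
aws s3 cp "$FILE" s3://myapp-backups/  # any S3-compatible CLI works here
rm "$FILE"
```

Remember the second half of the checklist item: a backup you have never restored is not a backup. Test a restore before you need it.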

Point your domain DNS to your server's IP

Create an A record in your DNS provider (Cloudflare, Namecheap, etc.) pointing your domain to your server's public IP address. DNS propagation can take up to 48 hours, so do this early.

Challenge 10: Audit Your Checklist

Go through the production checklist above. How many items apply to your project? Which ones would you most likely forget without this list? What could go wrong if you deployed with the default JWT_SECRET?

Updating a Deployed App

Your app is live and users are using it. Now you need to ship a bug fix or a new feature. How do you update? Just run grit deploy again:

# Make your code changes locally
# Test them thoroughly
grit deploy

Grit runs the same 5-step pipeline: rebuild the binary, upload it, and restart the systemd service. The restart happens so quickly that users experience near-zero downtime. systemd stops the old process and starts the new one in milliseconds.

Here is the typical update workflow:

1. Make code changes and commit to git
2. Run tests locally: go test ./...
3. Run grit deploy
4. Verify the update: visit your domain and test the changes
5. Check logs: sudo journalctl -u myapp -f

If an update breaks something, you can SSH into your server, replace the binary with the previous version (keep backups!), and restart the service with sudo systemctl restart myapp. This is your manual rollback procedure until you set up CI/CD with automated rollbacks.
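
The rollback itself is just a file copy plus a restart. The sketch below simulates the mechanics in a temporary directory; on a real server the directory would be /opt/myapp and the final step would be sudo systemctl restart myapp:

```shell
# Simulated manual rollback: keep a .bak copy of the last working binary
# before each deploy, and copy it back over a broken release.
APP_DIR=$(mktemp -d)
printf 'old-working-binary' > "$APP_DIR/myapp.bak"   # saved before the deploy
printf 'new-broken-binary'  > "$APP_DIR/myapp"       # the bad release

cp "$APP_DIR/myapp.bak" "$APP_DIR/myapp"             # roll back
cat "$APP_DIR/myapp"                                 # prints: old-working-binary
```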

Challenge 11: Update Workflow

You have fixed a bug in your user registration handler. Describe the exact steps you would take to deploy this fix to production, starting from your local machine. Include the commands you would run.

Summary

You have completed the entire Grit Web course series. Let's recap what you learned in this final course:

grit deploy — one command to build, upload, and configure your production server

Cross-compilation — Go builds Linux binaries from any OS with GOOS and GOARCH

systemd — keeps your app running 24/7 with auto-restart on crash and boot

Caddy — reverse proxy with automatic HTTPS from Let's Encrypt, security headers, and gzip compression

Environment variables — configure deployment without flags using DEPLOY_HOST, DEPLOY_DOMAIN, DEPLOY_KEY_FILE

Maintenance mode — grit down/up for controlled downtime during major migrations

Docker deployment — an alternative for complex setups with docker-compose.prod.yml

Production checklist — JWT secrets, environment settings, real email/storage drivers, DNS configuration

Challenge 12 (Final): Complete Deployment Plan

You are deploying a bookstore application built with Grit. Write a complete deployment plan that covers:

  1. VPS provider choice — which provider would you use (DigitalOcean, Hetzner, Linode, etc.) and why?
  2. The grit deploy command — write the exact command with all flags for your bookstore domain
  3. Production .env changes — list every variable you would change from development defaults
  4. Production checklist — walk through each item and explain what you would do
  5. First deployment vs updates — how does the first deploy differ from subsequent updates?

Challenge 13 (Bonus): Rollback Strategy

Your latest deployment introduced a bug that breaks checkout. Describe your rollback strategy: how would you revert to the previous working version? What tools and commands would you use? How would you prevent this from happening again?

Challenge 14 (Bonus): Monitoring After Deploy

Your app is live. How would you monitor it? Describe what logs you would check (journalctl), what metrics matter (response times, error rates), and how you would know if something breaks at 3 AM (alerts, health checks). Write a monitoring plan for your first week in production.