SaaS Dev · 8 min read · 20 January 2026

Deploying FastAPI to Railway: The Production Checklist

Every mistake I made deploying ContentForge AI's backend to Railway — and the exact configuration, environment variables, health checks, and monitoring setup that finally made it rock-solid.

FastAPI · Railway · DevOps · Python · Production

Railway Is Great — Until It Isn't

Railway makes deployment easy. Push to GitHub, it builds and deploys. But "deployed" is not the same as "production-ready." Here's what I learned the hard way.

The Procfile

Always use a Procfile to define your start command:

web: uvicorn main:app --host 0.0.0.0 --port $PORT --workers 2

Note $PORT — Railway injects this. Never hardcode 8000.
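For local development, where `PORT` is usually unset, a small fallback keeps the same entry point working everywhere. This is a sketch; `resolve_port` is a hypothetical helper, not part of the deployment above:

```python
import os

# Railway injects PORT at runtime; fall back to 8000 when the variable
# is unset (e.g. running locally with `python main.py`).
def resolve_port(default: int = 8000) -> int:
    raw = os.getenv("PORT")
    return int(raw) if raw else default
```

Locally `resolve_port()` returns 8000; on Railway it returns whatever the platform assigned.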

Health Check Endpoint

Railway needs to know your app is alive:

python
import os
from datetime import datetime, timezone

@app.get("/health")
async def health():
    return {
        "status": "healthy",
        # timezone-aware; datetime.utcnow() is deprecated since Python 3.12
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version": os.getenv("APP_VERSION", "1.0.0")
    }

Set this in Railway Settings → Health Check Path → /health
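One habit that pays off: keep the payload in a plain function so it can be unit-tested without booting the app. A minimal sketch, assuming `APP_VERSION` is set in Railway's environment variables (`build_health_payload` is my own name, not part of the endpoint above):

```python
import os
from datetime import datetime, timezone

# Plain function behind the /health route: trivially unit-testable,
# and the route handler just returns its result.
def build_health_payload() -> dict:
    return {
        "status": "healthy",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version": os.getenv("APP_VERSION", "1.0.0"),
    }
```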

Database Connections

Never create a new DB connection per request. Use a connection pool:

python
import os

from sqlalchemy.ext.asyncio import (
    AsyncSession,
    async_sessionmaker,
    create_async_engine,
)

DATABASE_URL = os.environ["DATABASE_URL"]

engine = create_async_engine(
    DATABASE_URL,
    pool_size=5,          # 5 steady-state connections per worker
    max_overflow=10,      # bursts up to 15 per worker — check your plan's connection limit
    pool_pre_ping=True,   # test connection before use; drops stale ones
    pool_recycle=3600,    # recycle after 1 hour
)

AsyncSessionLocal = async_sessionmaker(engine, expire_on_commit=False)

async def get_db():
    async with AsyncSessionLocal() as session:
        yield session
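Worth doing the arithmetic before you hit "too many connections": each uvicorn worker is a separate process with its own pool, so the worst case multiplies. A back-of-envelope sketch (`max_db_connections` is my own helper, just for illustration):

```python
# Worst-case connections the app can open against Postgres:
# each worker process owns a pool capped at pool_size + max_overflow.
def max_db_connections(pool_size: int, max_overflow: int, workers: int) -> int:
    return (pool_size + max_overflow) * workers
```

With the pool above and the Procfile's `--workers 2`, that's `max_db_connections(5, 10, 2)` = 30 in the worst case, so tighten `pool_size`/`max_overflow` if your database plan allows fewer.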

Environment Variables Checklist

bash
DATABASE_URL=postgresql+asyncpg://...   # Use asyncpg for async
SECRET_KEY=<64-char random string>
ENVIRONMENT=production
ALLOWED_ORIGINS=https://yourfrontend.vercel.app
REDIS_URL=redis://...                   # For rate limiting/caching
SENTRY_DSN=https://...                  # Error tracking
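A missing variable is much cheaper to catch at startup than on the first request. A minimal fail-fast sketch using names from the checklist above (the `REQUIRED` tuple and `missing_env_vars` are my own, not a library API):

```python
import os
from typing import Mapping

# Variables the app cannot run without (names from the checklist above).
REQUIRED = ("DATABASE_URL", "SECRET_KEY", "ALLOWED_ORIGINS")

def missing_env_vars(environ: Mapping[str, str] = os.environ) -> list:
    """Return the required variable names that are unset or empty."""
    return [name for name in REQUIRED if not environ.get(name)]
```

Call it at import time and raise `RuntimeError` listing the missing names, so a bad deploy fails the health check immediately.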

Rate Limiting with Upstash Redis

python
from fastapi import Request
from fastapi.responses import JSONResponse
from upstash_ratelimit import Ratelimit, FixedWindow
from upstash_redis import Redis

ratelimit = Ratelimit(
    redis=Redis.from_env(),  # reads UPSTASH_REDIS_REST_URL / _TOKEN
    limiter=FixedWindow(max_requests=20, window=60),
)

@app.middleware("http")
async def rate_limit_middleware(request: Request, call_next):
    # request.client can be None (e.g. behind some proxies) — fall back
    # to a shared bucket rather than crashing
    identifier = request.client.host if request.client else "anonymous"
    result = ratelimit.limit(identifier)

    if not result.allowed:
        return JSONResponse(
            status_code=429,
            content={"error": "Rate limit exceeded"}
        )

    return await call_next(request)
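If the fixed-window semantics are unfamiliar: every identifier gets a counter that resets when its window expires. A toy in-process illustration of what Upstash does server-side (my own sketch — not a substitute for the shared Redis limiter, since these counters vanish on restart and aren't shared across workers):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Toy fixed-window limiter: per-identifier counter, reset each window."""

    def __init__(self, max_requests: int, window_seconds: int):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        # identifier -> [request count, window start time]
        self.counters = defaultdict(lambda: [0, 0.0])

    def allow(self, identifier: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        count, start = self.counters[identifier]
        if now - start >= self.window_seconds:
            count, start = 0, now  # window expired: start a fresh one
        if count >= self.max_requests:
            self.counters[identifier] = [count, start]
            return False
        self.counters[identifier] = [count + 1, start]
        return True
```

The known trade-off of fixed windows: a client can burst at a window boundary (up to 2× the limit across two adjacent windows), which is usually acceptable for API protection.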

Sentry Error Tracking

python
import os

import sentry_sdk
from sentry_sdk.integrations.fastapi import FastApiIntegration

sentry_sdk.init(
    dsn=os.getenv("SENTRY_DSN"),
    integrations=[FastApiIntegration()],
    traces_sample_rate=0.1,  # sample 10% of requests for performance tracing
    environment=os.getenv("ENVIRONMENT", "development"),
)

With these in place, ContentForge AI's Railway backend has maintained 99.7% uptime over 3 months.

Mahmudul Hassan Mithun
AI SaaS Builder · BSc Data Science & AI, UEL · Building ContentForge AI
