Fast and Reproducible Python Deployments on ECS with uv
0. Why Another ECS Setup?
Running Python on ECS is easy until cold starts and rebuild churn show up. On Fargate every task pulls the image from scratch; when your image includes big dependencies, startup time balloons and short‑lived jobs pay that cost on every run.
The idea: keep the runtime image tiny and separate build from run. Package the Python environment once, pick a version at launch, and run.
This keeps startup fast and behavior consistent across versions—no image rebuilds just to try a change, no dependency drift between runs.
1. The Strategy
Run small images and fetch the environment at runtime. This plays to Fargate’s strengths (ephemeral tasks) and avoids paying the cold‑start penalty of large image pulls every single run.
1.0. What Fargate Optimizes For
- No layer cache between tasks → each task pulls the full image again.
- Pull time scales with image size → big images = slow starts.
- Private subnets without VPC endpoints can add NAT latency/cost.
Keep the image lean; move heavy Python deps out of the image. Treat S3 like a layer store for your virtualenv.
1.1. Two Approaches (and why one wins on Fargate)
| Approach | What it means | Pros | Cons |
|---|---|---|---|
| A. Bake deps into the image | pip install during build; one fat image | Simple deploy; one immutable artifact | Large image; slow pulls every run; rebuild image for any dep change |
| B. Slim image + download venv at runtime | Keep image minimal; fetch venv_*.tar.gz from S3 on startup | Tiny image → fast pulls; reuse one runtime image for many versions; faster iteration (swap venv, not image) | Slightly more bootstrap logic; needs S3 access & versioning |
For short‑lived/batch tasks, B consistently starts faster: pull a ~80–100 MB image, then download a compressed venv from S3. Net startup is typically tens of seconds, not minutes.
1.2. The Pattern I Use (and this post documents)
One image, many jobs/versions. The Docker image is generic and reusable. The version you pass selects what to run.
Package once, run anywhere:
- Build a versioned virtualenv tarball with `uv` (e.g., `venv_0.7.1.villoro.tar.gz`).
- Publish versioned artifacts to S3 as a set: `pyproject_<version>.toml`, `uv_<version>.lock`, `entrypoint_<version>.py`, `venv_<version>.tar.gz`.
- Ship a minimal runtime image that, on start:
  - downloads the selected version (`-v <version>`) from S3,
  - extracts it to `./.venv/`,
  - activates it, and
  - runs the versioned `entrypoint.py` with your args.
Developers can iterate fast: build a custom version, upload the four artifacts, and run it in prod with `-v <version>`, with no image rebuilds. Roll forward/back by switching only the version.
1.3. Files & Layout
```
deploy/
├── docker/
│   ├── Dockerfile.venv      # build the venv tarball with uv
│   └── Dockerfile.runtime   # tiny runtime image
└── scripts/
    ├── upload_all.py        # CI: push venv/locks/entrypoint to S3
    ├── download_all.py      # runtime: fetch artifacts by version
    └── setup_and_run.sh     # runtime: bootstrap → activate → run
ecs_northius/                # actual code, feel free to use whatever name you want
tests/
```
Venv tarballs and lockfiles are versioned (e.g., `uv`, `0.7.1`). Run any version in prod with `-v <version>`, which is great for hotfixes and A/B verification.
2. Building venv and Publishing
We use uv for speed and determinism: it resolves from a lock file (uv.lock), creates a clean in-project venv, installs our code, and then packages everything for production. The resulting venv_<version>.tar.gz is fully self-contained and versioned for autonomous deployment.
2.1 Why two base images (uv + python)?
- `ghcr.io/astral-sh/uv` stage (distroless) → provides only the `uv` and `uvx` binaries. We copy them in, avoiding `curl`/`apt` installations and keeping the surface minimal.
- `python:3.12-slim-bookworm` builder → provides a small Debian-based runtime with GNU `tar` and `gzip` for packaging. It's slim, reproducible, and doesn't need build tools.
Copying the uv binaries (instead of installing them dynamically) keeps the image small, stable, and hermetic. The Python slim base is used solely to build and compress the venv.
2.2 Make the venv self-contained
In addition to third-party dependencies, we install ecs_northius into the venv (non-editable). This means the venv_<version>.tar.gz already contains our package and can execute jobs directly in ECS.
The venv tarball is a plug-and-play runtime: download it, activate it, run the entrypoint — no extra setup required.
There are two valid ways to make your code importable in the venv. Both work, but the pyproject-driven approach is usually better because it also lets you include non-Python assets (YAML, templates, etc.).
2.2.1 Option A — Classic packages (add __init__.py)
Add __init__.py to every package directory you want importable. Simple and explicit.
```
ecs_northius/__init__.py
ecs_northius/common/__init__.py
ecs_northius/common/aws/__init__.py
```
Use standard package discovery:
```toml
[tool.setuptools.packages.find]
where = ["."]
include = ["ecs_northius*"]
```
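You can check what discovery will pick up without building anything; a quick sketch using `setuptools.find_packages` against a throwaway tree that mimics the layout above:

```python
import os
import tempfile

from setuptools import find_packages

# Throwaway tree mimicking the ecs_northius layout (illustrative only)
root = tempfile.mkdtemp()
for pkg in ("", "common", os.path.join("common", "aws")):
    pkg_dir = os.path.join(root, "ecs_northius", pkg)
    os.makedirs(pkg_dir, exist_ok=True)
    open(os.path.join(pkg_dir, "__init__.py"), "w").close()

# Same `where`/`include` arguments as the pyproject snippet above
print(find_packages(where=root, include=["ecs_northius*"]))
```

Running the same call against your real repository root shows exactly which packages will land in the wheel.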
2.2.2. Option B — Pyproject-driven (recommended)
Skip __init__.py by using namespace packages (PEP 420) or keep them if you prefer — the key is that discovery and data inclusion are driven by pyproject.toml.
Package discovery:
```toml
# Namespace packages (PEP 420, no __init__.py) are discovered when
# `namespaces = true`, which is the default in pyproject-based discovery:
[tool.setuptools.packages.find]
where = ["."]
include = ["ecs_northius*"]
namespaces = true

# If you keep classic packages with __init__.py, set `namespaces = false`
# to require the marker files.
```
Package data (YAML, templates, etc.):
```toml
[tool.setuptools]
include-package-data = true

[tool.setuptools.package-data]
ecs_northius = [
    "**/*.html",
    "**/*.j2",
    "**/*.yaml",
    "**/*.yml",
]
```
With Option B you can manage both discovery and non-Python assets from one place.
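To confirm that data files actually ship inside the installed package (not just in the source tree), a small smoke test with `importlib.resources` helps. A hedged sketch; the `ecs_northius` data path in the comment is hypothetical:

```python
from importlib import resources


def package_has_data(package, relpath):
    """Return True if `relpath` exists inside the installed `package`.

    Run this from the built venv to confirm include-package-data
    actually shipped your YAML/templates.
    """
    try:
        return resources.files(package).joinpath(relpath).is_file()
    except ModuleNotFoundError:
        return False


# e.g. package_has_data("ecs_northius", "jobs/config.yaml")  # hypothetical path
```

Wiring a call like this into the Dockerfile's sanity-check step (section 2.3) catches missing assets at build time instead of at task runtime.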
2.3 Dockerfile.venv
deploy/docker/Dockerfile.venv
```dockerfile
# 0. Pull images
ARG BUILD_FOR=linux/amd64

# 0.1. Stage with just the uv binaries at /uv and /uvx (distroless)
FROM --platform=${BUILD_FOR} ghcr.io/astral-sh/uv:latest AS uvbin

# 0.2. Builder: slim Python + uv copied in
FROM --platform=${BUILD_FOR} python:3.12-slim-bookworm AS base

# 0.3. Set up config
WORKDIR /app/
ENV PYTHONIOENCODING=utf-8 \
    LANG=C.UTF-8 \
    UV_LINK_MODE=copy \
    UV_NO_PROGRESS=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

# 1. Copy only the uv binaries (no curl, no apt)
COPY --from=uvbin /uv /usr/local/bin/uv
COPY --from=uvbin /uvx /usr/local/bin/uvx

# 1.1. Test uv: fail fast if something changes upstream
RUN uv --version && uvx --version

# 2. Lock first for better cache hits
COPY pyproject.toml uv.lock ./

# 3. Create in-project venv and install deps (no dev if prod-only)
RUN uv venv --python 3.12 --copies && uv sync --frozen --no-dev

# 4. Add source and install it INTO the venv (non-editable)
COPY ecs_northius ./ecs_northius
RUN uv pip install --python .venv/bin/python --no-deps .

# 4.1. Remove source dir so test imports from venv
RUN [ -d ecs_northius ] && rm -rf ecs_northius || echo "ecs_northius not present"

# 4.2. Test venv: sanity check
RUN .venv/bin/python -c "import ecs_northius; print('ecs_northius OK')"

# 5. Package the venv
ARG PACKAGE_VERSION
RUN mkdir -p /dist && tar -C .venv -czf /dist/venv_${PACKAGE_VERSION}.tar.gz .

# 6. Export only the venv tar
FROM scratch AS export
COPY --from=base /dist/venv_*.tar.gz /dist/
```
Build and export:
```bash
docker build -f deploy/docker/Dockerfile.venv --output . . --build-arg PACKAGE_VERSION=0.1.0
```
The resulting venv_0.1.0.tar.gz is portable — upload it to S3 and reuse it across ECS tasks.
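Before uploading, it can be worth sanity-checking the tarball's contents. The helper and the throwaway layout below are illustrative sketches, not part of the post's scripts:

```python
import os
import tarfile
import tempfile


def check_venv_tarball(path, package="ecs_northius"):
    """Sanity-check a venv tarball before upload: it must contain a
    Python interpreter and the installed package."""
    with tarfile.open(path, "r:gz") as tar:
        names = tar.getnames()
    has_python = any(n.endswith("bin/python") for n in names)
    has_package = any(package in n for n in names)
    return has_python and has_package


# Demo against a throwaway archive mimicking the real venv layout
root = tempfile.mkdtemp()
for rel in ("bin/python", "lib/python3.12/site-packages/ecs_northius/__init__.py"):
    full = os.path.join(root, rel)
    os.makedirs(os.path.dirname(full), exist_ok=True)
    open(full, "w").close()

tarball = os.path.join(root, "venv_0.1.0.tar.gz")
with tarfile.open(tarball, "w:gz") as tar:
    tar.add(os.path.join(root, "bin"), arcname="bin")
    tar.add(os.path.join(root, "lib"), arcname="lib")

print(check_venv_tarball(tarball))
```

A check like this is cheap insurance in CI: it fails before the broken artifact ever reaches S3.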
2.4 Extra optimizations & tricks
- Use `.dockerignore` aggressively: exclude `.git`, `tests/`, build artifacts, local caches. Smaller build context = faster builds.
- Cache-friendly layering: copy `pyproject.toml` + `uv.lock` before project sources to maximize `uv sync` cache hits.
- Symlink safety: `uv venv --copies` avoids symlink issues when unpacking later.
- Prune bytecode (optional): remove `__pycache__` dirs to shave MBs.
- BuildKit caches (optional): use `--mount=type=cache,target=/root/.cache/uv` on the `uv sync` layer for faster CI builds.
Avoid compilers and build tools unless you need to compile native deps. Prefer prebuilt manylinux wheels for portability.
2.5 Publishing the artifacts
Once built, we publish the venv and its companion files to S3. The upload_all.py script handles this automatically.
It can also optionally upload the requested version as latest.
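The upload script imports a small `utils` module (aliased `u`) that the post doesn't show. A minimal sketch of what it might contain; the bucket name and prefixes are placeholders, not the author's real values:

```python
# deploy/scripts/utils.py — illustrative sketch; replace with your own values.
BUCKET = "my-ecs-artifacts"    # hypothetical S3 bucket name

S3_UV = "ecs/uv"               # pyproject_<v>.toml and uv_<v>.lock
S3_ENTRY = "ecs/entrypoints"   # entrypoint_<v>.py
S3_VENV = "ecs/venvs"          # venv_<v>.tar.gz
```

Keeping these constants in one module means the uploader and the runtime downloader stay in sync about where every artifact lives.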
deploy/scripts/upload_all.py
```python
from time import time

import boto3
import click
import utils as u
from loguru import logger


def upload_to_s3(origin, dest, bucket=u.BUCKET):
    logger.info(f"Uploading from {origin=} to {dest=}")
    boto3.client("s3").upload_file(origin, bucket, dest)


@click.command()
@click.option("--version", "-v", help="Package version")
@click.option("--latest", is_flag=True, default=False, help="Upload latest version?")
def upload_all(version, latest):
    t0 = time()
    logger.info(f"Uploading all files ({version=}, {latest=})")

    versions = [version]
    if latest:
        versions.append("latest")
    else:
        logger.warning("Skipping upload of 'latest' (local-only builds)")

    for x in versions:
        # 1. Upload uv config
        upload_to_s3("pyproject.toml", f"{u.S3_UV}/pyproject_{x}.toml")
        upload_to_s3("uv.lock", f"{u.S3_UV}/uv_{x}.lock")

        # 2. Upload entrypoint
        upload_to_s3("deploy/entrypoint.py", f"{u.S3_ENTRY}/entrypoint_{x}.py")

        # 3. Upload venv
        upload_to_s3(f"dist/venv_{version}.tar.gz", f"{u.S3_VENV}/venv_{x}.tar.gz")

    logger.success(f"All uploads done in {round(time() - t0, 2)} seconds")


if __name__ == "__main__":
    upload_all()
```
Purpose:
- Uploads all versioned build artifacts (`pyproject`, `uv.lock`, `entrypoint`, `venv`).
- Ensures every build can be referenced independently (`-v 0.7.1.villoro`) or via `latest`.
- Enables developers to deploy and test new versions in prod without rebuilding or redeploying Docker images: simply upload and point ECS to a new version.
This closes the loop: every artifact is versioned, uploaded, and retrievable by the runtime.
3. The Runtime
At runtime, speed and simplicity take priority; no compilers, no uv, no dependency resolution.
The container just downloads, activates, and executes.
3.1 Runtime Image
deploy/docker/Dockerfile.runtime
```dockerfile
ARG BUILD_FOR=linux/amd64
FROM --platform=${BUILD_FOR} python:3.12-slim-bookworm AS runtime

WORKDIR /app
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

RUN python -m pip install --no-cache-dir boto3 click loguru toml

COPY deploy/scripts .

ENTRYPOINT ["/bin/bash", "./setup_and_run.sh"]
```
The image is ~80–100 MB and boots fast because all dependency resolution was done earlier during venv packaging. The size can increase with big libraries such as awswrangler.
The runtime image is lightweight — no compilers, no dependency resolution. It starts quickly and scales efficiently across short-lived ECS tasks.
3.2 Runtime Bootstrap Script
Executed automatically when the container starts:
deploy/scripts/setup_and_run.sh
```bash
#!/bin/bash
set -e  # abort early if the download or activation fails

echo "1. Downloading venv and config"
python download_all.py "$@"

echo "2. Activating venv"
. ./.venv/bin/activate

echo "3. Running ECS task. Using args=$@"
python entrypoint.py "$@"
```
Highlights:
- `$@` allows passing arguments like `-v uv` to select the version.
- No dependency installation here: only download, extract, and activate.
- Always use LF line endings (e.g., via `.gitattributes`) to avoid `/bin/sh` errors:

```
*.sh text eol=lf
*.py text eol=lf
```
The bootstrap keeps tasks reproducible and fast — it only fetches what’s needed for the specified version.
3.3 Download Script
This script fetches all necessary artifacts (venv, entrypoint, and config) from S3 and prepares the environment for execution.
deploy/scripts/download_all.py
```python
import pathlib
import tarfile
from time import time

import boto3
import click
import utils as u
from loguru import logger


def download_s3(origin, dest, bucket=u.BUCKET):
    logger.info(f"Downloading from {origin=} to {dest=}")
    boto3.client("s3").download_file(bucket, origin, dest)


def venv_extract_tar(filename, local_venv=".venv"):
    pathlib.Path(local_venv).mkdir(parents=True, exist_ok=True)

    # Python 3.14: allow absolute symlinks from our trusted artifact
    with tarfile.open(filename, "r:gz") as tar:
        tar.extractall(local_venv, filter="fully_trusted")


# `ignore_unknown_options` + `allow_extra_args` are needed in order to pass
# other parameters through to the entrypoint (more on that later)
@click.command(
    name="download",
    context_settings=dict(
        ignore_unknown_options=True,
        allow_extra_args=True,
    ),
)
@click.option("--version", "--ecs_version", "-v", required=True, help="Package version")
def download_all(version):
    t0 = time()
    version = version.strip()
    logger.info(f"Downloading all files ({version=})")

    download_s3(f"{u.S3_UV}/pyproject_{version}.toml", "pyproject.toml")
    download_s3(f"{u.S3_UV}/uv_{version}.lock", "uv.lock")
    download_s3(f"{u.S3_ENTRY}/entrypoint_{version}.py", "entrypoint.py")
    download_s3(f"{u.S3_VENV}/venv_{version}.tar.gz", "venv.tar.gz")

    venv_extract_tar("venv.tar.gz")

    logger.success(f"All downloads done in {round(time() - t0, 2)} seconds")


if __name__ == "__main__":
    download_all()
```
Python 3.14 tightens tar extraction security: the default extraction filter becomes `data`, which rejects absolute symlinks. Pass `filter="fully_trusted"` (available since Python 3.12) to avoid `AbsoluteLinkError` when unpacking an artifact you built yourself.
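If the runtime image might ever run an interpreter older than 3.12 (where the `filter` argument doesn't exist), a guarded call keeps extraction portable. A sketch, not the post's script:

```python
import os
import sys
import tarfile
import tempfile


def extract_trusted(path, dest):
    """Extract a tarball we built ourselves, across Python versions.

    `filter` exists since Python 3.12; Python 3.14 switches the default
    to the stricter "data" filter, which rejects absolute symlinks
    such as those a venv can contain.
    """
    with tarfile.open(path, "r:gz") as tar:
        if sys.version_info >= (3, 12):
            tar.extractall(dest, filter="fully_trusted")
        else:
            tar.extractall(dest)


# Quick round-trip demo with a throwaway archive
root = tempfile.mkdtemp()
src = os.path.join(root, "hello.txt")
with open(src, "w") as f:
    f.write("hi")
tarball = os.path.join(root, "venv.tar.gz")
with tarfile.open(tarball, "w:gz") as tar:
    tar.add(src, arcname="hello.txt")

extract_trusted(tarball, os.path.join(root, "out"))
```

Only use `fully_trusted` for artifacts you produced yourself; for third-party tarballs the stricter default is the right choice.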
3.4 How this connects to entrypoints & params
This section focuses on runtime packaging and activation. For dynamic entrypoints (import-by-string), argument parsing via click, and typed params with pydantic, see Effortless EMR: A Guide to Seamlessly Running PySpark Coder.
The same entrypoint model integrates seamlessly here.
Because pyproject, uv.lock, venv, and entrypoint are versioned together, each run is fully autonomous and reproducible.
Developers can safely test new jobs in production by uploading a custom version and running it with -v <version>.
3.5 How It Runs in ECS
Run tasks by passing a version at launch:
docker run --rm ecs/runtime -v uv
At runtime the container:
- Downloads `venv_<version>.tar.gz`, `entrypoint.py`, and lock files from S3.
- Extracts and activates the virtual environment.
- Executes `entrypoint.py` with your provided arguments.
You can run multiple versions in parallel (for example, -v latest for dev and -v 0.7.1.villoro for prod) without rebuilding Docker images.
4. CI/CD Flow
We keep testing, venv publishing, and runtime image pushes as separate jobs. This speeds up iteration (you rarely need to rebuild the runtime docker image) and preserves reproducibility (each venv is versioned).
4.1. CI: Tests with uv
.github/workflows/CI.yaml

```yaml
name: CI

on:
  pull_request:
  push:
    branches: [main]

jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: '3.12' # match Dockerfile.venv

      - name: Install uv
        run: |
          curl -Ls https://astral.sh/uv/install.sh | sh
          echo "$HOME/.local/bin" >> "$GITHUB_PATH"

      - name: Cache uv
        uses: actions/cache@v4
        with:
          path: ~/.cache/uv
          key: uv-${{ runner.os }}-${{ hashFiles('uv.lock') }}

      - name: Sync deps from lock
        run: uv venv && uv sync --frozen

      - name: Install project
        run: uv pip install .

      - name: Run tests
        run: uv run python -m pytest -q
```
4.2. CD: Build & Publish venv to S3
Triggers when app code, dependency files, or deployment config changes. Produces a versioned quartet: pyproject_<v>.toml, uv_<v>.lock, entrypoint_<v>.py, venv_<v>.tar.gz.
.github/workflows/CD_venv.yaml

```yaml
name: CD Venv

on:
  push:
    branches: [main]
    paths:
      - deploy/**
      - ecs_northius/**
      - pyproject.toml
      - uv.lock
      - .github/workflows/CD_venv.yaml

jobs:
  publish_venv:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # GitHub OIDC → AWS
      contents: read
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install python deps for uploader
        run: pip install boto3 click loguru toml

      # This just gets the version from the pyproject.toml
      - name: Get current version
        run: python .github/scripts/get_version.py --name=current

      - name: Build venv tarball
        run: |
          docker build \
            -f deploy/docker/Dockerfile.venv \
            --output . . \
            --build-arg PACKAGE_VERSION=$VERSION_CURRENT

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.AWS_ROLE_DATA }}
          aws-region: eu-west-1

      - name: Upload artifacts to S3
        run: python deploy/scripts/upload_all.py --version $VERSION_CURRENT --latest
```
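The workflow calls `.github/scripts/get_version.py`, which the post doesn't show. A minimal sketch that reads `[project].version` from `pyproject.toml` and exports it as `VERSION_CURRENT` via `GITHUB_ENV`; only the CLI flag matches the workflow, the internals are assumptions:

```python
import os

import click
import toml


def read_version(path="pyproject.toml"):
    """Return [project].version from a pyproject.toml file."""
    return toml.load(path)["project"]["version"]


@click.command()
@click.option("--name", required=True, help="Suffix for the exported variable")
def get_version(name):
    """Export VERSION_<NAME> to $GITHUB_ENV so later workflow steps
    can reference it (e.g. $VERSION_CURRENT)."""
    version = read_version()
    env_file = os.environ.get("GITHUB_ENV")
    if env_file:
        with open(env_file, "a") as f:
            f.write(f"VERSION_{name.upper()}={version}\n")
    click.echo(version)


if __name__ == "__main__":
    get_version()
```

Appending to `$GITHUB_ENV` is the standard GitHub Actions mechanism for passing values between steps in the same job.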
4.3. CD: Build & Push the Runtime Image to ECR
This job changes infrequently (scripts or the runtime Dockerfile). It tags with the version from pyproject.toml and with latest.
.github/workflows/CD_docker.yaml

```yaml
name: CD Docker

on:
  push:
    branches: [main]
    paths:
      - deploy/scripts/**
      - deploy/docker/Dockerfile.runtime
      - .github/workflows/CD_docker.yaml

jobs:
  push_runtime:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4

      - name: Read image version & name from pyproject.toml
        id: cfg
        uses: SebRollen/toml-action@v1.0.2
        with:
          file: 'pyproject.toml'
          field: 'docker'

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.AWS_ROLE_DATA }}
          aws-region: eu-west-1

      - name: Login to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v2
        with:
          mask-password: true
        env:
          AWS_REGION: eu-west-1

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Image metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ steps.ecr.outputs.registry }}/${{ fromJson(steps.cfg.outputs.value).name }}
          tags: |
            ${{ fromJson(steps.cfg.outputs.value).version }}
            latest

      - name: Build & push runtime image
        uses: docker/build-push-action@v5
        with:
          file: deploy/docker/Dockerfile.runtime
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```
4.4. Deployment workflow summary:
- CI runs tests on every PR as a mandatory step.
- On merge to `main`, CD Venv builds `venv_<version>.tar.gz` and uploads all versioned artifacts to S3.
- CD Docker rebuilds and pushes the runtime image to ECR (only if there are changes).
- ECS tasks start containers using the latest runtime image, fetching the desired venv version dynamically with `-v <version>`.
4.5. Practical tips
- Pin Python versions: keep GitHub Actions Python versions aligned with the Dockerfiles.
- Cache `uv`: already included; it makes PR validation fast.
- S3/ECR VPC endpoints: set them up to avoid NAT costs in private subnets.
- Immutable versions + `latest`: publish both; production can point at `latest` while you roll forward/back with a version flag.
5. Wrapping Up
You now have a fast, versioned, and reproducible pattern for running Python on ECS:
- Build once → deploy anywhere.
- Keep runtime images tiny and stable.
- Test new versions in production safely.
- Roll forward or back with a single `-v` flag.
This approach scales from small one-off jobs to full ETL pipelines, and it works just as well for Lambda or Batch.
With uv, Docker, and S3, ECS becomes a clean, predictable environment.
No cold-start surprises, no rebuild overhead, and total control over your Python versions.