dockerfile-x empowers developers with an extended Dockerfile syntax that allows easy, modular factorization of Dockerfiles.
To enable the dockerfile-x custom syntax, use the native Docker BuildKit frontend feature by adding a `syntax` comment at the beginning of your Dockerfile:

```dockerfile
# syntax = devthefuture/dockerfile-x

FROM ./base/dockerfile

COPY --from=./build/dockerfile#build-stage /app /app

INCLUDE ./other/dockerfile
```

That's it! You can then run BuildKit as usual:

```shell
docker build .
```

Note that you can also use Docker Compose or other tools that rely on Docker BuildKit.
This will compile the final Dockerfile using the devthefuture/dockerfile-x Docker image just before running the build.
We recommend using Docker 20.10 or later. However, if you're working with Docker versions as old as 18.09, you can still enable BuildKit by setting the environment variables `DOCKER_BUILDKIT=1` and `COMPOSE_DOCKER_CLI_BUILD=1`.
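For those older Docker releases, a usage sketch of exporting the variables in the shell session that runs the build:

```shell
# Enable BuildKit for the classic `docker build` CLI
export DOCKER_BUILDKIT=1
# Make `docker compose build` delegate to the docker CLI (and thus BuildKit)
export COMPOSE_DOCKER_CLI_BUILD=1
```

With these set, `docker build .` and `docker compose build` in the same session will go through BuildKit.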
- `INCLUDE`: Incorporate content as-is from other Dockerfiles or snippets.
- `INCLUDE_ARGS`: Converts a `.env` file into Dockerfile `ARG` instructions.
- `INCLUDE_ENVS`: Converts a `.env` file into Dockerfile `ENV` instructions.
- `INCLUDE_LABELS`: Converts a `.env` file into Dockerfile `LABEL` instructions.
- `FROM`:
  - FROM with Relative Paths: Use other Dockerfiles as a base using relative paths.
  - FROM with Stages: Reference specific stages from other Dockerfiles.
  - FROM with Re-Alias: Rename specific stages from other Dockerfiles.
- `COPY --from`:
  - COPY/ADD from Another Dockerfile: Transfer files from another Dockerfile.
  - COPY/ADD with Stages: Specify a stage when copying files from another Dockerfile.
- Remote sources: pull included Dockerfiles from HTTP(S), Git, or OCI registries.
- Recursion guard: default depth limit and cycle detection across local and remote includes.
For a file to be recognized as an included Dockerfile, the `FROM` instruction or `--from` parameter must begin with either a `.` (examples: `./another/dockerfile` or `../parent-dir/my.dockerfile`) or a `/`. Any Dockerfile imported via this custom `FROM` syntax is treated according to the rules specified below.
- Dockerfiles included via the `INCLUDE` instruction are integrated as they are, without any modification.
- Dockerfiles brought in through the `FROM` instruction or `--from` parameters undergo scoping to prevent conflicts with other Dockerfiles. Specifically:
  - All stages are renamed based on the scope.
  - This scoping is transparent to users: one can re-alias imported stages (or the final stage of the imported Dockerfile if no stage is explicitly mentioned) and use them as needed.
- All processing and the features described are recursive. Only the final stages of the root Dockerfile are made visible to the user, suitable for use with the `--target` parameter during a Docker build.
- A Dockerfile can be imported many times via the `FROM` instruction or `--from` parameters, at the same or different stages; the imported stages are deduplicated automatically.
- Path resolution for Dockerfiles imported from the root Dockerfile is relative to the Docker build context, not to the root Dockerfile itself. This is due to a limitation in BuildKit, and it is consistent with other instructions that are also relative to the context. The imported Dockerfiles must be in the build context, but can safely be ignored via `.dockerignore`. Symlinking does not help in this case.
- However, path resolution for Dockerfiles imported from an imported Dockerfile is relative to that imported Dockerfile itself.
- If you're importing a Dockerfile with the `.dockerfile` extension, you don't need to specify the extension; it is detected automatically.
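As an illustration of these resolution rules, consider a hypothetical layout with the build context at the repository root (all file names below are made up for the example):

```
.
├── Dockerfile               # root
├── base.dockerfile
└── services
    ├── api.dockerfile
    └── common.dockerfile
```

In the root `Dockerfile`, paths resolve against the build context, and the `.dockerfile` extension may be omitted:

```dockerfile
FROM ./services/api AS api    # resolves to ./services/api.dockerfile
```

In `services/api.dockerfile`, paths resolve against that file's own directory:

```dockerfile
INCLUDE ./common              # resolves to services/common.dockerfile
```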
Basic INCLUDE

```dockerfile
INCLUDE ./common-instructions.dockerfile

FROM debian:latest

CMD ["bash"]
```

Using stages from another Dockerfile

```dockerfile
FROM ./base/dockerfile#dev AS development
COPY . /app

FROM ./base/dockerfile#prod AS production
COPY --from=development /app /app
CMD ["start-app"]
```

Re-aliasing a stage

```dockerfile
FROM ./complex-setup/dockerfile#old-stage-name AS new-name
COPY ./configs /configs
```

Easily include content from another Dockerfile or snippet, ensuring straightforward reuse of Dockerfile segments across projects.

```dockerfile
# Include another Dockerfile's content
INCLUDE ./path/to/another/dockerfile
```

Converts key-value pairs from a `.env` file into Dockerfile `ARG` instructions.
Use this to expose build-time variables without hardcoding them into the Dockerfile.
```
# custom-args.env
NODE_VERSION=20.11.1
PNPM_VERSION=9.1.0
```

```dockerfile
# Include key-value pairs from file
INCLUDE_ARGS ./path/to/custom-args.env
```

This expands to:

```dockerfile
ARG NODE_VERSION="20.11.1"
ARG PNPM_VERSION="9.1.0"
```

Note: Values can be overridden at build time with `--build-arg` if desired.
Converts key-value pairs from a .env file into Dockerfile ENV instructions.
Ideal for runtime configuration baked into the image.
```
# custom-envvars.env
NODE_ENV=production
APP_PORT=8080
```

```dockerfile
# Include key-value pairs from file
INCLUDE_ENVS ./path/to/custom-envvars.env
```

This expands to:

```dockerfile
ENV NODE_ENV="production"
ENV APP_PORT="8080"
```

Converts key-value pairs from a `.env` file into Dockerfile `LABEL` instructions.
Useful for image metadata (e.g., authorship, version, VCS refs).
```
# custom-labels.env
org.opencontainers.image.title=myapp
org.opencontainers.image.version=1.2.3
org.opencontainers.image.revision=abc1234
```

```dockerfile
# Include key-value pairs from file
INCLUDE_LABELS ./path/to/custom-labels.env
```

This expands to:

```dockerfile
LABEL org.opencontainers.image.title="myapp"
LABEL org.opencontainers.image.version="1.2.3"
LABEL org.opencontainers.image.revision="abc1234"
```

Instead of using image names from DockerHub or another registry, use relative paths to refer to other Dockerfiles directly.
```dockerfile
# Use another Dockerfile as a base
FROM ./path/to/another/dockerfile
```

Or use a specific stage from another Dockerfile:

```dockerfile
# Use a specific stage from another Dockerfile
FROM ./path/to/another/dockerfile#stage-name
```
Re-alias a specific stage from another Dockerfile to a new name, providing flexibility in naming.
```dockerfile
# Re-alias a stage from another Dockerfile
FROM ./path/to/another/dockerfile#original-stage-name AS new-stage-name
```

Copy or add files directly from another Dockerfile, streamlining the process of transferring files between build stages.

```dockerfile
# Copy files from another Dockerfile
COPY --from=/path/to/another/dockerfile source-path destination-path

# Add files from another Dockerfile
ADD --from=/path/to/another/dockerfile source-path destination-path
```

Or specify a stage from which to copy or add:

```dockerfile
# Copy files from a specific stage of another Dockerfile
COPY --from=/path/to/another/dockerfile#stage-name source-path destination-path
```

Any reference (in `INCLUDE`, `INCLUDE_ARGS|ENVS|LABELS`, `FROM`, `COPY --from=`, `ADD --from=`) can target a remote location. Three schemes are supported:
```dockerfile
INCLUDE https://example.com/snippets/setup.dockerfile

FROM https://example.com/base/alpine.dockerfile AS base
```

The fetcher caches responses in `~/.dockerfile-x/cache` using ETag / Last-Modified and falls back to the cached copy on network errors. Override the cache directory with `DOCKERFILEX_CACHE_DIR`.
```dockerfile
INCLUDE git+https://github.com/foo/bar.git#main:dockerfiles/base.dockerfile

FROM git+ssh://git@github.com/foo/bar.git#v1.2.3:base.dockerfile AS base
```

Format: `git+<git-url>#<ref>:<path>`. `<ref>` is a branch, tag, or full commit SHA (40-hex SHA-1 or 64-hex SHA-256):

- Branch / tag: `git clone --depth 1 --filter=blob:none --no-checkout --single-branch --branch <ref>`. The fetcher then resolves the moving ref's current `oid` via `git ls-remote` and uses that as the cycle-detection key, so a moving `main` still cycle-checks deterministically.
- Commit SHA: `git init` + `git fetch --depth 1 --filter=blob:none origin <sha>` (protocol v2). Requires the upstream server to allow fetch-by-oid. Works out of the box with GitHub, GitLab (.com and self-hosted), Gitea / Forgejo, Bitbucket Cloud, and `file://` bare repos. Plain `git http-backend` / older self-hosted servers may need `uploadpack.allowReachableSHA1InWant=true` (or `allowAnySHA1InWant`) on the server.
Supply-chain pinning. Pinning to a commit SHA (or an OCI digest, see below) is the strongest defense against a compromised or rewritten upstream: once the build resolves the artifact, an attacker who force-pushes the same branch (or republishes the same tag) cannot change what your build sees. Pin in production builds; use moving refs only for development.
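For example, using the repository URL from above (the commit SHA below is a made-up placeholder):

```dockerfile
# Development: moving branch ref, convenient but mutable
INCLUDE git+https://github.com/foo/bar.git#main:dockerfiles/base.dockerfile

# Production: pinned to an immutable 40-hex commit SHA (placeholder value)
INCLUDE git+https://github.com/foo/bar.git#3f2c1a9e8b7d6c5f4e3d2c1b0a9f8e7d6c5b4a39:dockerfiles/base.dockerfile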
Both forms produce the same on-cache layout; `git sparse-checkout set <dir>` materializes only the requested file's directory. Authentication uses your standard Git mechanisms (`GIT_ASKPASS`, `~/.netrc`, credential helpers). For BuildKit usage, pass credentials via a build secret:

```shell
docker buildx build --secret id=git-credentials,src=$HOME/.git-credentials .
```

The cache directory is `~/.dockerfile-x/git-cache` (override via `DOCKERFILEX_GIT_CACHE_DIR`).
```dockerfile
INCLUDE oci://ghcr.io/foo/bar:1.0#dockerfiles/common.dockerfile

FROM oci://registry.example.com/base/alpine:3.19#Dockerfile AS base
```

Format: `oci://<registry>/<repo>(:<tag>|@<digest>)#<path>`.
Supply-chain pinning. Use the `@<algo>:<hex>` digest form (`oci://reg/repo@sha256:abc…#path`) in production builds — same rationale as a git commit SHA: an attacker who republishes the tag cannot change what your build sees, and `dockerfile-x` skips the manifest-fetch round-trip when the digest is already known. Tag form (`:<tag>`) is convenient for development; the resolved digest still becomes the cycle-detection key, but a future build pulls whatever the tag now points to.
Pulled via `oras` using `~/.docker/config.json` for authentication, including credential helpers (`docker-credential-ecr-login` is bundled in the runtime image; `docker-credential-gcr`, `docker-credential-acr-env`, etc. can be added in a derived image). Resolution cascade for the auth file:

1. `/run/secrets/docker-config` (BuildKit build secret)
2. `$DOCKER_CONFIG/config.json`
3. `$HOME/.docker/config.json`

For BuildKit usage, mount your Docker config:

```shell
docker buildx build --secret id=docker-config,src=$HOME/.docker/config.json .
```

The cache directory is `~/.dockerfile-x/oci-cache` (override via `DOCKERFILEX_OCI_CACHE_DIR`). Localhost registries (`127.0.0.1`, `localhost`) are pulled with `--plain-http`; for any other host, the env var `DOCKERFILEX_ORAS_PLAIN_HTTP=true` forces it.
A remote-included Dockerfile may itself contain `INCLUDE`, `FROM`, or `COPY --from` directives. Relative references resolve against the parent's location:
| Parent scheme | `INCLUDE ./foo.dockerfile` resolves to |
|---|---|
| HTTP(S) | `new URL("./foo.dockerfile", parentUrl)` |
| Git | same `repo#ref`, sibling path |
| OCI | same artifact, sibling path |
| Local | sibling path on disk (existing behavior) |
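For the HTTP(S) case, this is ordinary RFC 3986 relative-reference resolution; a minimal Python sketch of the same behavior (the function name is illustrative, not part of dockerfile-x):

```python
from urllib.parse import urljoin

def resolve_http_include(parent_url: str, ref: str) -> str:
    """Resolve a relative include against the parent Dockerfile's URL,
    mirroring the `new URL(ref, parentUrl)` semantics described above."""
    return urljoin(parent_url, ref)

print(resolve_http_include(
    "https://example.com/snippets/setup.dockerfile", "./foo.dockerfile"))
# → https://example.com/snippets/foo.dockerfile
```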
A nested reference may also use a different scheme — `git+https://…#main:foo.dockerfile` inside an HTTP-fetched Dockerfile is fine; resolution is delegated to `resolveSource`.
dockerfile-x enforces two safeguards across local and remote includes:
- Maximum depth — default `32`. Exceeding it produces a JSON error on stdout and exit code `2`: `{"error":"include-depth-exceeded","depth":N,"max":32,"chain":[...]}`. Override with `--max-include-depth <n>` or env `DOCKERFILEX_MAX_INCLUDE_DEPTH`.
- Cycle detection — if any Dockerfile re-references itself transitively, you get `{"error":"include-cycle","chain":[...]}` and exit `2`. The cycle key for Git/OCI is the resolved commit `oid` / artifact `digest`, so a moving `main` or `:latest` tag still detects loops correctly.
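The mechanics can be sketched as a depth-first walk over canonical keys (a simplified illustration, not the actual implementation):

```python
def walk_includes(key, get_children, chain=(), max_depth=32):
    """Depth-first walk over an include graph.

    `key` is a Dockerfile's canonical identity (local path, or the
    resolved commit oid / artifact digest for Git / OCI refs), and
    `get_children` returns the keys that Dockerfile includes.  Raises
    ValueError payloads shaped like the include-cycle and
    include-depth-exceeded errors described above."""
    if key in chain:
        # Re-entering our own ancestor chain is a cycle
        raise ValueError({"error": "include-cycle",
                          "chain": list(chain) + [key]})
    if len(chain) >= max_depth:
        raise ValueError({"error": "include-depth-exceeded",
                          "depth": len(chain), "max": max_depth,
                          "chain": list(chain) + [key]})
    for child in get_children(key):
        walk_includes(child, get_children, chain + (key,), max_depth)
```

Because the key for remote refs is the resolved oid/digest, a moving `main` that re-enters its own chain is caught exactly like a local path cycle.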
When the CLI encounters a structured error, it writes a single-line JSON object to stdout and exits with code `2`. The Go BuildKit frontend distinguishes the codes below; only `missing-file` triggers an LLB-context retry — every other code surfaces directly as a build error.
| Code | Trigger | Extra fields |
|---|---|---|
| `missing-file` | Local include path not found in the build context | `filename` |
| `remote-file-not-found` | HTTP 404, git path missing in repo, or oci path missing in artifact | `ref` |
| `include-cycle` | A reference re-enters its own include chain | `chain` |
| `include-depth-exceeded` | More than `--max-include-depth` nested includes | `depth`, `max`, `chain` |
| `path-traversal` | A `..` segment, absolute path, or symlink resolves outside its source | `ref` |
| `remote-fetch-failed` | HTTP error other than 404, oversize body, refused redirect (HTTPS→HTTP downgrade, non-http(s) scheme, or cross-host redirect to a private IP not on `DOCKERFILEX_REDIRECT_ALLOWLIST`) | `url`, `originalUrl?`, `message` |
| Variable | Default | Purpose |
|---|---|---|
| `DOCKERFILEX_MAX_INCLUDE_DEPTH` | `32` | Maximum nested include depth (also via `--max-include-depth`). |
| `DOCKERFILEX_HTTP_TIMEOUT_MS` | `30000` | HTTP request timeout in milliseconds. |
| `DOCKERFILEX_MAX_HTTP_BYTES` | `1048576` | Maximum HTTP response body size in bytes (1 MiB). Caps memory use against hostile servers. |
| `DOCKERFILEX_MAX_REDIRECTS` | `5` | Maximum HTTP redirects followed. HTTPS→HTTP downgrades and non-http(s) schemes are always refused. |
| `DOCKERFILEX_REDIRECT_ALLOWLIST` | unset | Comma-separated list permitting cross-host redirects to private IPs (loopback, RFC1918, link-local, ULA — denied by default to mitigate SSRF). Entries can be exact IPs (`192.168.1.10`), IPv4 CIDRs (`10.0.0.0/8`), exact hostnames (`internal.corp.local`), or suffix wildcards (`*.corp.local`). |
| `DOCKERFILEX_SUBPROCESS_TIMEOUT_MS` | `300000` | Timeout for `git` / `oras` invocations (5 min). |
| `DOCKERFILEX_CACHE_DIR` | `~/.dockerfile-x/cache` | HTTP response cache. |
| `DOCKERFILEX_GIT_CACHE_DIR` | `~/.dockerfile-x/git-cache` | Git sparse-clone cache. |
| `DOCKERFILEX_OCI_CACHE_DIR` | `~/.dockerfile-x/oci-cache` | OCI artifact cache. |
| `DOCKERFILEX_ORAS_PLAIN_HTTP` | unset | Set to `true` to force plaintext HTTP for non-localhost OCI registries. Use with care — disables TLS for all registry traffic. |
| `DOCKERFILEX_ORAS_BIN` | `oras` | Path to the `oras` binary. |
| `DOCKERFILEX_TMPDIR` | `os.tmpdir()` | Directory for the stdin temp file when reading from `-`. |
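To make the `DOCKERFILEX_REDIRECT_ALLOWLIST` entry kinds concrete, here is a simplified Python sketch of how such matching could work (illustrative only; the real matcher may differ, e.g. in IPv6 CIDR handling or hostname normalization):

```python
import ipaddress

def redirect_allowed(host: str, allowlist: str) -> bool:
    """Check a redirect target host against a comma-separated allowlist.
    Entries: exact IPs, IPv4 CIDRs, exact hostnames, `*.suffix` wildcards.
    Sketch only, not the dockerfile-x implementation."""
    for entry in (e.strip() for e in allowlist.split(",") if e.strip()):
        if entry.startswith("*."):
            # Suffix wildcard: "*.corp.local" matches "a.corp.local"
            if host.endswith(entry[1:]):
                return True
        elif "/" in entry:
            # CIDR entry: only meaningful when the host is an IP literal
            try:
                if ipaddress.ip_address(host) in ipaddress.ip_network(entry, strict=False):
                    return True
            except ValueError:
                pass  # host is not an IP literal; skip CIDR entries
        elif host == entry:
            # Exact IP or exact hostname
            return True
    return False
```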
The runtime image ships with network access enabled (the upstream BuildKit network.none capability is dropped) so that the frontend can fetch HTTP/Git/OCI sources. A malicious Dockerfile could exfiltrate data via the references it declares — audit Dockerfiles you do not trust before building.
Understanding concerns about feature availability with alternative frontends: some might worry that alternative frontends cannot layer on top of the official `docker/dockerfile` frontend and would therefore miss out on its additional features. Let's address this concern for dockerfile-x.
- Node.js Compilation: The Node.js component compiles custom Dockerfile syntax into the standard Dockerfile format in a superset manner.
- BuildKit Frontend Service: This service then translates the standard Dockerfile to LLB. This step uses minimal custom code, predominantly relying on official BuildKit packages.
Though updates to docker/dockerfile aren't frequent, here's how we'd accommodate them:
- Upgrade the Go Package: the Go part of `dockerfile-x` is lean in terms of custom code, so maintaining it and integrating any updates from `docker/dockerfile` is straightforward.
- Direct Compilation with Node.js: instead of using the custom syntax frontend, users can compile the dockerfile-x syntax directly to a standard Dockerfile using only the Node.js component. This standalone CLI tool can be used via `npx dockerfile-x` (further details available with `--help`). Any new additions to `docker/dockerfile`, such as novel keywords, are inherently supported without any modifications to this library.
In essence, one could think of dockerfile-x as a dedicated template engine specially crafted for Dockerfiles.
With the growing complexity of Docker setups, this tool ensures your Dockerfiles remain clean, maintainable, and modular.
We welcome contributions! If you encounter a bug or have a feature suggestion, please open an issue. To contribute code, simply fork the repository and submit a pull request.
This repository is mirrored on both GitHub and Codeberg. Contributions can be made on either platform, as the repositories are synchronized bidirectionally.
- Codeberg: https://codeberg.org/devthefuture/dockerfile-x
- GitHub: https://github.com/devthefuture-org/dockerfile-x
For more information:
`/etc/docker/daemon.json`:

```json
{
  "experimental": true,
  "debug": true
}
```

```shell
sudo systemctl restart docker
```

Then observe the logs:

```shell
journalctl -u docker.service -f
```

- allow customization hook/plugins autoloading `.dockerfile-x.js` or `.dockerfile-x/index.js` (eg: integration of yarn workspaces topographically)
- release
- publish to npm
- build and push images to docker registry and codeberg registry