Arden is built for people who want native output, compiler-enforced safety, and an integrated workflow without stitching together five separate tools.
Today the repository already includes:
- LLVM-backed native code generation
- a real CLI for `build`, `run`, `check`, `fmt`, `lint`, `fix`, `test`, `bench`, `profile`, `bindgen`, `lex`, `parse`, and `lsp`
- multi-file project builds via `arden.toml`
- ownership and borrowing checks
- async tasks and runtime control helpers
- formatter, linter, test runner, benchmark harness, and CI smoke coverage in the same repo
This is still an early, pre-release language.
This repository is not just the compiler binary. It also contains the material a new user needs to go from "what is this?" to "I can build something with it":
- source documentation under `docs/`
- runnable language and project examples under `examples/`
- compiler and project smoke scripts under `scripts/`
- the benchmark harness under `benchmark/`
- CI and release automation under `.github/`
The intended learning loop is:
- install the toolchain
- run one single-file example
- create a project with `arden new`
- inspect project mode with `arden info`
- move into testing, formatting, benchmarking, and larger examples
```
import std.io.*;

function main(): None {
    mut sum: Integer = 0;
    for (value in range(0, 5)) {
        sum += value;
    }
    println("sum = {sum}");
    return None;
}
```
- Rust 1.85+
- LLVM 22.1+
- Clang
- `mold` on Linux, or LLVM `lld` on macOS/Windows
Detailed platform notes live in docs/getting_started/installation.md.
```sh
git clone https://github.com/TheRemyyy/arden-lang.git arden
cd arden
cargo build --release
```

The compiler binary will be available at:

- `target/release/arden`
- `target/release/arden.exe` on Windows
```sh
cat > hello.arden <<'EOF'
import std.io.*;

function main(): None {
    println("Hello, Arden!");
    return None;
}
EOF
./target/release/arden run hello.arden
```

To scaffold a project instead:

```sh
./target/release/arden new hello_project
cd hello_project
../target/release/arden run
```

That scaffold is intentionally small, but it already gives you the pieces Arden uses for project mode:
- `arden.toml` declares the project name, entry file, output kind, output path, and explicit source file list
- `src/main.arden` is the entrypoint used by `arden run` and `arden build`
- `README.md` records the local workflow so the generated project is not a dead skeleton
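For orientation, a minimal `arden.toml` matching that description might look like the sketch below. The key names here are illustrative assumptions, not the actual schema; `arden new` generates the authoritative version, and docs/features/projects.md documents it.

```toml
# Hypothetical sketch only; run `arden new` to see the real schema.
[project]
name = "hello_project"      # project name
entry = "src/main.arden"    # entry file
output = "executable"      # output kind
output_path = "build/hello" # output path

# explicit source file list
sources = ["src/main.arden"]
```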
To inspect exactly what the compiler sees, run:
```sh
../target/release/arden info
```

Arden ships with a broader workflow than just compiling.
```
new       Create a project skeleton
build     Build the current project
run       Build and run a project or single file
compile   Compile a single Arden file
check     Parse, type-check, and borrow-check source
info      Print project configuration and build settings
lint      Report static findings
fix       Apply safe fixes and reformat the result
fmt       Format Arden source
lex       Print lexer tokens
parse     Print the parsed AST
lsp       Start the language server
test      Discover and run @Test suites
bindgen   Generate Arden extern bindings from a C header
bench     Measure end-to-end execution time
profile   Run once and print a timing summary
```
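The `test` subcommand discovers `@Test` suites. As a hedged sketch of what such a suite could look like, extrapolating from the syntax shown elsewhere in this README (the `@Test` attribute placement and the `assert` call are assumptions; see examples/single_file/tooling_and_ffi/24_test_attributes for the real syntax):

```
import std.io.*;

// Hypothetical test function; syntax extrapolated from this README.
@Test
function sum_of_range(): None {
    mut sum: Integer = 0;
    for (value in range(0, 5)) {
        sum += value;
    }
    assert(sum == 10);
    return None;
}
```

A file like this would then be picked up by `arden test`.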
Reference: docs/compiler/cli.md
Arden currently supports:
- functions, lambdas, modules, packages, and imports
- classes, inheritance, interfaces, and visibility rules
- enums, pattern matching, `Option<T>`, and `Result<T, E>`
- generics and generic bounds
- ownership, borrowing, and mutability checking
- async / await with `Task<T>`
- intrinsic standard library modules for I/O, math, time, args, strings, collections, and system access
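To make the enum and pattern-matching bullets concrete, here is a rough sketch in Arden-like syntax. The `match` statement shape and the `Ok`/`Err` constructors are extrapolations from this README, not verified code; the docs and the error-handling example are the authoritative references.

```
import std.io.*;

// Hypothetical sketch; match syntax and constructors are assumptions.
function half(value: Integer): Result<Integer, String> {
    if (value % 2 == 0) {
        return Ok(value / 2);
    }
    return Err("odd input");
}

function main(): None {
    match (half(10)) {
        Ok(n) => println("half = {n}");
        Err(message) => println("error: {message}");
    }
    return None;
}
```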
Good starting points:
- docs/overview.md
- docs/getting_started/quick_start.md
- docs/features/projects.md
- docs/features/testing.md
- docs/stdlib/overview.md
At a high level, the compiler pipeline is:
- lex source text into tokens
- parse the token stream into an AST
- resolve names, types, and effects
- run ownership and borrow validation
- lower the checked program to LLVM IR
- link a native executable or library
That matters for users because many CLI commands stop at different layers:
- `arden lex` shows tokenizer output
- `arden parse` shows parser output
- `arden check` runs semantic and borrow checks without building a native binary
- `arden build` goes through codegen and linking
- `arden run` builds and executes
More detail lives in docs/compiler/architecture.md.
Single-file programs are useful for experiments, but most real Arden work happens in project mode.
Project mode gives you:
- explicit source graph control through `arden.toml`
- a stable entry file instead of magic directory scanning
- reusable build metadata in `.ardencache/`
- project-aware `build`, `run`, `check`, `fmt`, `test`, and `info`
This is one of the bigger differences between Arden and parser-demo style language repos: there is an opinionated workflow for building multi-file code, not just compiling one example file at a time.
Reference: docs/features/projects.md
The repo includes both focused feature examples and larger project-style samples.
Recommended first passes:
- examples/single_file/basics/01_hello/01_hello.arden
- examples/single_file/safety_and_async/10_ownership/10_ownership.arden
- examples/single_file/safety_and_async/14_async/14_async.arden
- examples/single_file/tooling_and_ffi/24_test_attributes/24_test_attributes.arden
- examples/single_file/language_edges/35_visibility_enforcement/35_visibility_enforcement.arden
- examples/starter_project/README.md
- examples/showcase_project/README.md
Overview: examples/README.md
If you are learning the language, a good order is:
- start with `01_hello`, `02_variables`, and `04_control_flow`
- move to `05_classes`, `08_modules`, and `09_generics`
- then read `10_ownership`, `13_error_handling`, and `14_async`
- after that, switch to `starter_project/` and `showcase_project/`
That path mirrors the way the docs are structured, so you can alternate between prose and runnable code instead of reading one giant manual first.
Arden includes a benchmark harness that compares Arden, Rust, and Go on shared workloads.
The suite covers:
- CPU-focused runtime workloads
- cold and hot project compile benchmarks
- incremental rebuild benchmarks
- optional larger synthetic graph stress tests
There are two entrypoints:
- `benchmark/run.py` — single benchmark runs, outputs to `benchmark/results/latest.*`
- `benchmark/full_campaign.py` — multi-stage campaigns with presets, outputs to a timestamped `benchmark/results/campaign_*/` directory
Quick start:
```sh
# Smoke test — does the harness work?
python3 benchmark/run.py --bench matrix_mul_heavy --repeats 1 --warmup 0 --no-build

# Quick sanity pass across all groups (~2–5 min)
python3 benchmark/full_campaign.py --preset quick --no-build

# Full publication-grade campaign (~15–30 min)
python3 benchmark/full_campaign.py --preset full --no-build
```

Full documentation, command map, output layout, instrumentation flags, and methodology caveats: benchmark/README.md
The benchmark harness is intentionally part of the repository instead of an external gist so numbers can be regenerated, challenged, and updated. If benchmark results are published, they should always be tied to a command, machine, and date rather than presented as timeless marketing.
- docs/ - language, stdlib, project, and compiler documentation
- examples/ - feature-focused examples and multi-file sample projects
- benchmark/ - benchmark harness and report generation
- scripts/ - smoke tests, example runners, and maintenance scripts
- .github/workflows/ - CI and release automation
- src/ - compiler implementation
If you want a structured reading order:
- docs/overview.md for the broad mental model
- docs/getting_started/installation.md for toolchain setup
- docs/getting_started/quick_start.md for the first runnable steps
- docs/compiler/cli.md for the command surface
- docs/features/projects.md and docs/features/testing.md for day-to-day workflow
- docs/compiler/architecture.md if you want to understand how the compiler is arranged internally
If you prefer code first, start from examples/README.md and the recommended example order above, then return to the docs as needed.
Arden is actively evolving. The docs in this repository aim to describe what is implemented now, not an aspirational roadmap.
That means:
- examples are intended to run against the current compiler
- CLI docs follow the current `--help` output
- benchmark docs describe the actual shipped harness
- web docs are generated from the repository sources in `docs/`
That also means docs should become richer over time, but not looser. If a feature is incomplete, the docs should say so plainly.
If you want to improve the compiler, docs, examples, or tooling, start with:
