A production‑style quality engineering ecosystem that shows how I design, build, and run automation, performance, and security testing for real products – not toy demos.
One lab, two repositories, full CI, and an opinionated test strategy around a realistic Express.js + React + PostgreSQL application with JWT auth.
📊 View Live Test Report (Available after CI/CD deployment)
This project is intentionally split into two repositories:
- `automation-lab-ecosystem` (private)
  Full quality engineering lab: application under test, test frameworks, performance and security tooling, Docker infrastructure, and CI configuration. This is where all implementation details live and where CI pipelines are executed.
- `sqa-architect` (public)
  Public-facing portfolio repo that hosts the unified Allure report generated by the private lab. CI runs in `automation-lab-ecosystem` publish test results and history to this repository, exposing behavior and trends without exposing the underlying code.
For security and IP reasons, the full lab remains private. The public repository is designed to show what the system does (coverage, stability, history) without revealing how everything is implemented.
When I do technical walkthroughs, I grant time‑boxed collaborator access to the private lab so stakeholders can inspect the architecture, frameworks, and pipelines directly, then revoke access afterward.
This is not a tutorial follow‑along or a folder of random scripts. It is a portfolio‑grade lab built to demonstrate how I operate as a senior QA automation engineer and independent contractor.
It exists to show that I can:
- Design and shape a testable system (API, UI, data, auth), not just “test what I’m given”.
- Build and maintain multi‑language API automation that cross‑validates the same surface.
- Treat UI, performance, and security as first‑class citizens in the same pipeline.
- Wire everything into CI/CD with stable, meaningful suites instead of flaky noise.
- Run the whole thing in a realistic, resource‑constrained environment using Docker and profiles.
If you’re evaluating me for a senior QA / automation / SDET role or a 1099 engagement, this lab is how I would approach your quality stack from scratch. The private repo shows the implementation; the public repo and Allure dashboard show the results.
- 390 API tests across Python (pytest), Java (REST Assured + TestNG), and TypeScript (Playwright).
- 164 UI tests across Playwright TypeScript and Selenium Java POM (Chrome + Firefox), including documented intentional AUT bugs.
- 12 performance scenarios across k6 and JMeter, with thresholds and CI integration.
- OWASP ZAP API security scanning driven by an OpenAPI 3.0.3 spec.
- Docker‑based infrastructure with profile‑based startup for constrained environments.
- CI in `automation-lab-ecosystem` with parallel jobs and a unified Allure report published to `sqa-architect` on every push to `main`.
| Layer / Category | Technology |
|---|---|
| Application Under Test | Express.js, React (Vite), PostgreSQL 15, Redis, JWT |
| Languages | Java 17, Python 3.12, TypeScript, JavaScript (Node.js 22) |
| API Test Automation | Python (pytest + requests), Java (REST Assured + TestNG), TypeScript (Playwright) |
| UI Test Automation | Selenium 4 (Java, POM), Playwright (TypeScript) |
| Performance Testing | k6, Apache JMeter 5.6.3 |
| Security Testing | OWASP ZAP (Docker, Automation Framework) |
| Build / Dependency | Maven, pip, npm |
| Infrastructure | Docker, Docker Compose (profile‑based architecture) |
| CI/CD | GitHub Actions |
| Reporting | Allure (unified across API, UI, perf, security) |
| Documentation | OpenAPI 3.0.3 spec, project docs, inline test metadata |
The goal is to resemble a trimmed-down production environment, not a "hello world" application with a few happy-path tests.
Three independent API suites validate the same endpoints to demonstrate design choices and consistency, not just tool familiarity.
- **Python suite (pytest + requests)** – 130 tests
- Auth flows, CRUD for users/products/orders, pagination/filtering/search
- Negative tests and error handling
- Allure reporting integration
- **Java suite (REST Assured + TestNG)** – 130 tests
- BDD‑style structure and groups (smoke, auth, negative, etc.)
- Mirrors Python coverage for cross‑suite verification
- Allure reporting integration
- **Playwright TypeScript API suite** – 130 tests
- Uses Playwright’s request layer for API checks
- Mirrors Python/Java coverage for cross‑framework validation
- Allure reporting integration
Two UI stacks to show cross‑browser, cross‑tool thinking with a consistent Page Object Model approach.
- **Playwright TypeScript UI suite** – 82 tests
- POM pattern with dedicated page objects
- Full coverage of Login, Register, Products, Product Detail, Orders, Dashboard, and navigation
- Auth fixture using API login + localStorage token injection
- 17 failing tests are intentional, documenting real AUT bugs found by the suite
- Allure reporting integration
- **Selenium Java POM suite** – 82 tests (Chrome + Firefox)
  - Strict Page Object Model with `BasePage`, `DriverFactory`, and utility classes
  - Mirrors Playwright UI coverage for cross-tool validation
  - Cross-browser execution via separate CI jobs (Chrome headless, Firefox headless)
  - Uses Java HttpClient for API-level setup (mirrors Playwright auth fixture)
  - Known AUT bugs annotated with `@Issue` for Allure traceability
  - Allure reporting integration
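Both UI suites share the same Page Object Model shape; a language-agnostic sketch of the pattern, rendered here in Python for brevity (the real page objects are in Java and TypeScript, and these selectors are illustrative):

```python
# Sketch of the POM pattern shared by both UI suites. The driver is any
# object exposing goto/fill/click, so a browser driver or a stub works.
class LoginPage:
    URL = "/login"
    EMAIL_INPUT = "#email"
    PASSWORD_INPUT = "#password"
    SUBMIT_BUTTON = "button[type='submit']"

    def __init__(self, driver, base_url="http://localhost:3000"):
        self.driver = driver
        self.base_url = base_url

    def open(self):
        self.driver.goto(self.base_url + self.URL)
        return self  # fluent style: LoginPage(d).open().login(...)

    def login(self, email, password):
        self.driver.fill(self.EMAIL_INPUT, email)
        self.driver.fill(self.PASSWORD_INPUT, password)
        self.driver.click(self.SUBMIT_BUTTON)
```

Keeping selectors and navigation inside the page object is what lets the Playwright and Selenium suites mirror each other test-for-test.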
- **k6 suite** – 6 scenarios
- Health, auth, users, products, orders, user journey
- 10 VUs, 30s sustained load, ramp‑up/ramp‑down
- Thresholds enforced per scenario
- Custom Allure conversion script, runs in CI
- **JMeter suite** – 6 test plans mirroring k6
- Health, auth, users, products, orders, user journey
- setUp thread group pattern for shared admin token
- RFC‑4180 compliant JTL parser for Allure conversion
- CI execution aligned with k6 parameters
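The RFC 4180 detail matters because JMeter writes JTL results as CSV and sample labels can contain commas or quotes, so a naive `line.split(",")` breaks. A minimal sketch of such a parser (field names follow JMeter's default JTL header; the lab's actual converter may differ):

```python
# RFC-4180-aware JTL parsing sketch: Python's csv module applies the
# quoting rules, so labels containing commas/quotes parse correctly.
import csv
import io

def parse_jtl(text):
    """Return one dict per sample row from JTL (CSV) content."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {
            "label": row["label"],
            "elapsed_ms": int(row["elapsed"]),
            "success": row["success"].lower() == "true",
        }
        for row in reader
    ]
```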
- **OWASP ZAP API scan**
  - Driven by `docs/openapi.yaml` (OpenAPI 3.0.3) for full surface coverage
  - Uses ZAP Automation Framework (YAML), aligning with ZAP's forward direction
  - Passive + active scans in CI
  - CI threshold: pipeline fails on High alerts; Medium issues tracked and documented
  - HTML and JSON reports converted into Allure results
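For orientation, a hedged sketch of what an Automation Framework plan of this shape looks like — job types are ZAP's documented AF jobs, but the context name, paths, and parameters here are illustrative, not this lab's exact file:

```yaml
# Illustrative ZAP Automation Framework plan (names/paths are examples).
env:
  contexts:
    - name: aut
      urls: ["http://localhost:3000"]
jobs:
  - type: openapi            # import the API surface from the spec
    parameters:
      apiFile: docs/openapi.yaml
      targetUrl: http://localhost:3000
  - type: passiveScan-wait   # let passive rules finish
  - type: activeScan
    parameters:
      context: aut
  - type: report
    parameters:
      template: traditional-html
      reportDir: /zap/wrk/reports
```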
- Express.js REST API on port 3000 with JWT auth.
- React UI client (Vite) served from the same Express app (e.g., `/products`).
- PostgreSQL 15 with realistic seeded data (users, products, orders).
- OpenAPI 3.0.3 spec in `docs/openapi.yaml`, validated with Spectral.
Endpoints include (partial):
- `GET /health` – health check
- `POST /api/auth/register` – user registration
- `POST /api/auth/login` – authentication
- `GET /api/auth/me` – current user
- `GET /api/users` – list users (auth required)
- `GET /api/products`, `GET /api/products/:id` – product catalog
- `POST /api/orders`, `GET /api/orders`, `PATCH /api/orders/:id/status` – order lifecycle with role-based access
The AUT is intentionally non‑trivial so tests exercise real flows: auth, state, roles, and business rules.
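Since several suites assert on role-based behavior, it helps to see what the JWTs structurally carry: three base64url-encoded segments (`header.payload.signature`). The claims shown in the test below (`sub`, `role`) are assumptions about typical role-based tokens, not the AUT's exact schema:

```python
# Decode a JWT payload for inspection. This does NOT verify the
# signature -- debugging/illustration only, never authorization.
import base64
import json

def decode_jwt_payload(token):
    """Return the (unverified) claims from a JWT's payload segment."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```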
GitHub Actions runs a full test matrix on pushes and PRs to key branches:
- Python API tests (130)
- Java REST Assured API tests (130)
- Playwright TypeScript API tests (130)
- Playwright TypeScript UI tests (82)
- Selenium Java UI tests – Chrome headless (82)
- Selenium Java UI tests – Firefox headless (82)
- k6 performance suite (6 scenarios)
- JMeter performance suite (6 plans)
- OWASP ZAP API security scan
- Unified Allure report build + GitHub Pages deployment
PRs receive test result feedback, and the unified Allure report provides a single pane of glass across functional, perf, and security.
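The workflow shape, sketched in hedged form (job names, paths, and steps are illustrative, not the lab's exact YAML): independent suite jobs fan out in parallel, and a final job aggregates Allure results and publishes them to `sqa-architect`.

```yaml
# Illustrative GitHub Actions shape -- not the lab's actual workflow.
name: quality-pipeline
on:
  push:
    branches: [main]
  pull_request:
jobs:
  api-python:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/ --alluredir=allure-results
        working-directory: api-automation/python-api-tests
  ui-playwright:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx playwright test
        working-directory: ui-automation/playwright-tests
        continue-on-error: true   # preserve known-AUT-bug evidence
  allure-report:
    needs: [api-python, ui-playwright]   # ...plus the remaining suites
    runs-on: ubuntu-latest
    steps:
      - run: echo "merge Allure results, build report, push to sqa-architect"
```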
```
automation-lab-ecosystem/
├── application-under-test/ # Express.js REST API + React UI
│ ├── src/
│ ├── client/
│ ├── config/
│ └── package.json
├── api-automation/
│ ├── python-api-tests/ # 130 pytest tests
│ ├── rest-assured-java/ # 130 TestNG tests
│ └── playwright-tests/ # 130 Playwright API tests
├── ui-automation/
│ ├── playwright-tests/ # 82 Playwright UI tests
│ └── selenium-java-pom/ # 82 Selenium POM tests (Chrome + Firefox)
├── performance-testing/
│ ├── k6-scripts/ # 6 k6 scenarios + Allure converter
│ └── jmeter-tests/ # 6 JMeter plans + Allure converter
├── security-testing/
│ └── zap-scans/ # ZAP automation + Allure converter
├── docs/
│ └── openapi.yaml # OpenAPI 3.0.3 specification
├── docker-infrastructure/ # Docker Compose profiles
├── test-data/
│ └── seeds/ # Database seeding scripts
└── .github/
    └── workflows/ # GitHub Actions CI definition
```
Prerequisites
- OS: Ubuntu/Lubuntu 20.04+ (or compatible)
- RAM: 8 GB minimum, 10+ GB recommended if running everything
- Disk: ~100 GB recommended (images + reports)
- Tools:
- Docker & Docker Compose v2
- Java 17
- Python 3.12
- Node.js 22 LTS
- k6
- JMeter 5.6.3
- Git
OWASP ZAP is run via Docker (`zaproxy/zap-stable`); no local installation needed.
1. Clone

```bash
git clone https://github.com/drexm1967/automation-lab-ecosystem.git
cd automation-lab-ecosystem
```

2. Start core services

```bash
cd docker-infrastructure
# Core stack: Postgres, Redis, Express API + React UI
docker compose --profile core up -d
```

React UI is available at: http://localhost:3000/products
3. Seed database

```bash
cd ..
python test-data/seeds/seed.py --reset
```

4. Smoke the main suites (examples)
```bash
# Python API suite
cd api-automation/python-api-tests
python3 -m venv venv && source venv/bin/activate
pip install -r requirements.txt
pytest tests/ -v
```

```bash
# Playwright UI suite
cd ../../ui-automation/playwright-tests
npm install
npx playwright install --with-deps chromium
npx playwright test
```

```bash
# k6 health scenario
cd ../../performance-testing/k6-scripts
k6 run scenarios/health_test.js
```

Each suite has more detailed instructions in its own README.
| Role | Email | Password |
|---|---|---|
| Admin | admin1@testlab.com | Test@1234 |
| User | testuser1@testlab.com | Test@1234 |
Some failures are by design to demonstrate real‑world conditions:
- ZAP Medium‑severity findings (e.g., CSP headers, CORS) are kept as case‑study material; the CI threshold fails only on High.
- 17 UI test failures are tied to 5 documented React UI bugs (form validation, auth redirects, pagination, etc.).
- CI UI jobs can be configured with `continue-on-error` to preserve the evidence without blocking the pipeline.
This mirrors how I’d manage known issues and triage in a real environment.
This lab is the template I use when I come in as a 1099 QA automation specialist.
Typical starting engagement:
- Scope: ~35 hours (about a focused week of work).
- Rate: Competitive. Details available upon request.
- Outcomes in that window usually include:
- An audit of existing API/UI automation, CI, and basic perf/security coverage.
- Stabilization of the highest‑impact flakiness (or creation of a minimal, stable suite where none exists).
- Introduction or enhancement of performance checks and basic security scanning.
- A concrete 60–90‑day roadmap tailored to the team’s stack and constraints.
Everything in this repo is designed to be lifted, adapted, and focused on a client’s system under that kind of engagement.
| Version | Description |
|---|---|
| v0.9.0 | Selenium POM refactor – XPath eliminated, CI green |
| v0.8.0 | Playwright UI refactor – human interaction flows, CI green |
| v0.7.0 | React UI DOM fixes – improved testability |
| v0.6.0 | Playwright UI suite complete – 17 AUT bugs documented |
| v0.5.0 | OWASP ZAP API security scan integrated |
| v0.4.0 | OpenAPI 3.0.3 spec – Spectral validated |
For full detail, see CHANGELOG.md (if present).
Author: Drexel McMillan
Focus: Senior QA Automation / SDET – corporate roles and 1099 consulting
Contact: sqalab.admin@protonmail.ch
This project demonstrates competencies across:
- API and UI automation framework design.
- Performance and security testing as part of regular CI.
- Test environment and data management.
- Defect discovery, documentation, and pipeline integration.
An AI assistant was used to help with some text, code, and structural ideas for this project. All final architecture, design decisions, and code were reviewed, vetted, and curated by me. I take full responsibility for the content and its correctness.
MIT License – see LICENSE for details.
© 2026 Drexel McMillan. This project was designed and implemented as a professional portfolio and demonstration of engineering capability. Git history and timestamps serve as proof of authorship.