Challenge 1
Local RAG Assurance Engine delivered a fully local, offline-capable assurance analysis engine using retrieval‑augmented generation (RAG) to identify and surface evidence from project documentation and return structured, machine‑readable outputs.
Please be aware that this content was generated following an automated review, so it may not be perfectly accurate; refer to the original challenge brief and team files for authoritative information.
The solution is expected to significantly reduce manual evidence search time, improve assurance repeatability, and address data sovereignty concerns by enabling assurance analysis without reliance on external cloud LLMs.
Technical_README.md: Technical overview describing the local LLM setup, RAG architecture, model choices, file processing pipeline, and installation steps.
RAG.py: Implements the retrieval‑augmented generation logic for chunking, indexing, and querying documents using vector search.
structure_output.py: Parses LLM responses into structured fields and aggregates results into CSV outputs for analysis.
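The chunking step described for RAG.py can be illustrated with a minimal, dependency-free sketch. This is not the team's actual implementation (which uses LangChain); the function name and the chunk-size/overlap parameters are assumptions chosen for illustration. Overlapping chunks help preserve context that would otherwise be cut at chunk boundaries before the pieces are embedded and indexed for vector search.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks for embedding and indexing.

    Hypothetical sketch: parameter values are illustrative, not the
    project's actual settings. Each chunk shares `overlap` characters
    with the next so context spanning a boundary is retrievable.
    """
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + chunk_size])
    return chunks
```

In a LangChain-based pipeline like the one described, this role is typically filled by a text splitter whose output is embedded and stored in a vector index, then retrieved by similarity to the user's query.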
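The parsing-and-aggregation role of structure_output.py can be sketched as follows. The field names (`evidence_id`, `source_file`, `finding`, `confidence`) are hypothetical placeholders, not the project's actual schema; the sketch assumes the LLM is prompted to answer with one `field: value` pair per line, which is then collected into CSV for analysis.

```python
import csv
import io
import re

# Hypothetical output schema -- the real field names live in structure_output.py.
FIELDS = ["evidence_id", "source_file", "finding", "confidence"]

def parse_response(text: str) -> dict[str, str]:
    """Extract 'field: value' lines from an LLM response into a dict.

    Missing fields are left empty rather than raising, so one malformed
    response does not abort a batch run.
    """
    record = {}
    for field in FIELDS:
        m = re.search(rf"^{field}\s*:\s*(.+)$", text, re.MULTILINE | re.IGNORECASE)
        record[field] = m.group(1).strip() if m else ""
    return record

def to_csv(records: list[dict[str, str]]) -> str:
    """Aggregate parsed records into a CSV string with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

Keeping the parser tolerant of missing fields is a common design choice when post-processing LLM output, since local models in particular can deviate from the requested format.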
team: Local RAG Assurance Engine
members: tbc
topics: solution-centre, hack25, challenge1, ollama, deepseek-r1, langchain, streamlit, python, tesseract-ocr, project-assurance, automation, evidence-management, llm, rag, local-llm
technologies: Ollama, DeepSeek-R1, LangChain, Streamlit, Python, Tesseract OCR