Challenge 1
Team 1B applied structured prompt engineering with Microsoft Copilot to automate assurance evidence identification and scoring across multiple personas, aligned to PEAT success criteria.
Please be aware that this content was generated following an automated review and may not be perfectly accurate; refer to the original challenge brief and team files for authoritative information.
The approach is expected to reduce manual assurance effort through repeatable prompt-driven evidence checks, improve scoring consistency across personas, and enable faster assurance judgements with clearer, auditable evidence trails.
LLM Prompts_2.docx: Core prompt and scoring framework defining personas, evidence criteria, thresholds, and scoring rules for assurance assessment.
Copilot Evidence.docx: Worked example of Copilot-based assurance analysis, including persona assignment, evidence identification, scoring, and justification.
Persona Research.docx: Defines assurance-related personas, their goals, and pain points to guide persona-specific evidence analysis.
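To illustrate the kind of structure the prompt and scoring framework describes, the sketch below shows how a persona-specific prompt and a weighted score might be assembled. This is a minimal, hypothetical example: the persona names, criteria, weights, and pass threshold are placeholders for illustration only and are not taken from the team's actual framework in LLM Prompts_2.docx.

```python
# Illustrative sketch only: persona-driven prompt assembly and weighted scoring.
# All personas, criteria, weights, and thresholds below are hypothetical.

from dataclasses import dataclass


@dataclass
class Persona:
    name: str
    goal: str
    pain_point: str


@dataclass
class Criterion:
    id: str
    description: str
    weight: float  # relative importance in the overall score


# Hypothetical personas and evidence criteria.
PERSONAS = [
    Persona("Project Assurance Lead", "confirm evidence meets success criteria",
            "manually locating evidence across documents"),
    Persona("Delivery Manager", "understand assurance status quickly",
            "inconsistent scoring between reviewers"),
]

CRITERIA = [
    Criterion("C1", "Evidence is traceable to a named source document", 0.4),
    Criterion("C2", "Evidence directly supports the stated success criterion", 0.6),
]

PASS_THRESHOLD = 0.7  # assumed pass mark for the overall weighted score


def build_prompt(persona: Persona, evidence_text: str) -> str:
    """Compose a structured prompt asking the LLM to score evidence per criterion."""
    criteria_lines = "\n".join(
        f"- {c.id} (weight {c.weight}): {c.description}" for c in CRITERIA
    )
    return (
        f"You are acting as a {persona.name}. Your goal: {persona.goal}. "
        f"Known pain point: {persona.pain_point}.\n\n"
        "Score the evidence below against each criterion on a 0-1 scale, "
        "with a one-sentence justification per criterion.\n\n"
        f"Criteria:\n{criteria_lines}\n\nEvidence:\n{evidence_text}"
    )


def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores into a single weighted assurance score."""
    return sum(scores[c.id] * c.weight for c in CRITERIA)


if __name__ == "__main__":
    prompt = build_prompt(PERSONAS[0], "Section 3.2 of the delivery plan lists test exit criteria.")
    print(prompt)

    # Example per-criterion scores, as they might be parsed from an LLM response.
    example = {"C1": 0.9, "C2": 0.6}
    total = weighted_score(example)
    print(f"Weighted score: {total:.2f} -> {'PASS' if total >= PASS_THRESHOLD else 'REVIEW'}")
```

Keeping personas, criteria, and thresholds in data structures separate from the prompt text is one way to make the checks repeatable and auditable across personas, which is the benefit the team describes.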
team: Team 1B
members: tbc
topics: solution-centre, hack25, challenge1, microsoft-copilot, large-language-models, natural-language-processing, project-assurance, automation, evidence-management, llm, prompt-engineering, assurance-personas
technologies: Microsoft Copilot, Large Language Models, Natural Language Processing