Instructions for Use (IFU)

The Instructions for Use (IFU) is a manual that explains how to use the AI system safely and effectively. Under the EU AI Act, it ensures that users and institutions understand the system's capabilities, limits, human oversight requirements, and conditions for safe operation.


Intended purpose

Claire is an AI‑assisted grading and feedback copilot for higher education. It helps educators review student submissions faster, maintain grading consistency, and efficiently draft feedback against rubrics. Claire does not make final grading decisions.

Capabilities

  • Analyzes rubrics and instructions to understand grading criteria

  • Assesses student submissions against rubrics, highlighting strengths and areas for improvement

  • Summarizes educator notes and approved AI suggestions into well-written feedback

  • Aligns feedback with rubric dimensions to ensure grading consistency

Limitations

  • Suggestions may be incomplete, incorrect, or not context‑aware

  • Performance varies by subject domain and rubric quality

  • Does not access systems beyond the provided content and configured integrations

Expected accuracy ranges

The suggestion approval rate is tracked as the primary Key Performance Indicator (KPI): the percentage of AI suggestions that educators approve versus those they ignore or reject. This metric is monitored continuously and used to improve system accuracy over time.
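As an illustration of how this KPI can be computed, the sketch below calculates an approval rate from a list of suggestion outcomes. The status labels ("approved", "rejected", "ignored") and function name are hypothetical assumptions for illustration, not part of Claire's actual implementation.

```python
from collections import Counter


def approval_rate(outcomes):
    """Return the percentage of AI suggestions educators approved.

    `outcomes` is a list of status strings recorded per suggestion.
    The labels used here are illustrative assumptions.
    """
    if not outcomes:
        return 0.0  # avoid division by zero when no suggestions were made
    counts = Counter(outcomes)
    return 100.0 * counts["approved"] / len(outcomes)


# Example: 3 of 4 suggestions approved -> 75.0
rate = approval_rate(["approved", "approved", "rejected", "approved"])
```

A rate computed this way can be tracked per assignment or per rubric dimension to spot where the system underperforms.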

Data requirements

  • Student instructions

  • Grading rubrics

  • Student submissions

Human oversight steps

  1. Review all AI suggestions critically in both Rubric and Reader views

  2. Edit, accept, or reject suggestions based on your professional judgment

  3. Finalize feedback only after a thorough review: perform completeness checks, critically review score recommendations and their explanations, and confirm that the feedback report is accurate and relevant

Known residual risks

  • Over‑reliance on AI output: accepting all feedback drafts automatically, without applying human nuance and expertise

  • Misinterpretation of ambiguous rubric criteria: if rubric quality is low or score criteria are not precisely defined, the AI may struggle to distinguish between grades, so users must carefully review grade alignment suggestions

Safe‑use dos and don’ts

  • Do verify all facts and rubric applications before approving suggestions

  • Do document overrides when material changes are made to AI suggestions

  • Do review for automation bias when approving suggestions

  • Don't publish feedback without a thorough human review

  • Don't upload sensitive data beyond what is contractually permitted

  • Don't rely solely on AI output without incorporating human expertise

Contacts