Hallucinations and text discrepancies
Early feature ☝🏼
This feature is an early release and the quality of text discrepancy flags might vary depending on the submission, its structure, language and format.
When you upload a PDF file, Claire handles the heavy lifting. To support a wide range of assessment types, formats and languages, we rely on LLMs to process your submissions.
Large language models, however, pose AI-related risks such as hallucinations. To ensure that we preserve submissions in their original form and identify hallucinations where they occur, we run a separate and independent validation workflow that flags interferences automatically.
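As an illustration of what such a validation step can look like, here is a minimal sketch that diffs the extracted text against the original submission text using Python's standard difflib module. Claire's actual workflow is not public; the function name, plain-text inputs and diff-based approach here are assumptions for illustration only.

```python
import difflib

def find_discrepancies(original: str, extracted: str) -> list[str]:
    """Return a human-readable list of differences between the original
    submission text and the LLM-extracted text.

    Illustrative sketch only: assumes both inputs are plain text and
    uses difflib.SequenceMatcher from the Python standard library.
    """
    matcher = difflib.SequenceMatcher(a=original, b=extracted)
    flags = []
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        if op == "equal":
            continue  # identical spans are not discrepancies
        flags.append(f"{op}: {original[a0:a1]!r} -> {extracted[b0:b1]!r}")
    return flags
```

A run over identical texts returns an empty list, while any insertion, deletion or replacement produced by the model yields at least one flag.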
Example
In the original student submission, we can see the following elements:
A heading (“The single biggest reason why start-ups succeed”)
A sub-heading (“Summary and take-aways”)
An image (with heading) and its source

Less common document structures and formats can occasionally confuse LLMs and cause interference, where the model corrects or displays information slightly differently from its original form. Below you can see an example of such an interference.
The image shows the same document as uploaded to Claire. If you look closely, you’ll find two discrepancies:
The sub-heading no longer contains the extra space (“Summary and take-aways”)
The LLM duplicated the image’s heading and shows it as text above the image

The first interference is what we call a tolerable interference, as it does not change the text's meaning or semantics. Those interferences are not flagged as Discrepancies.
The second, however, is an unwanted interference: the model adds, alters, or omits part of the text, so the result no longer accurately represents the original submission. Those interferences are flagged and reported to you in the Discrepancies panel.
You can access the panel by clicking the Discrepancies badge in the top navigation bar.
Note that the badge only appears when our validation workflow catches unwanted discrepancies.
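The distinction between tolerable and unwanted interference described above can be sketched in code. This is a hypothetical classifier, not Claire's actual logic: it treats whitespace-only differences as tolerable and anything else as unwanted, and it does not cover other semantics-preserving changes.

```python
import re

def is_tolerable(original: str, extracted: str) -> bool:
    """Return True when the two texts differ only in whitespace,
    i.e. a tolerable interference that keeps meaning intact.

    Hypothetical illustration: the real classifier is not public, and
    semantics-preserving changes beyond whitespace are not handled here.
    """
    def normalize(s: str) -> str:
        # Collapse runs of whitespace and trim the ends.
        return re.sub(r"\s+", " ", s).strip()

    return normalize(original) == normalize(extracted)
```

Under this sketch, removing the extra space in "Summary and take-aways" is tolerable, while duplicating an image heading as extra text is not.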