Reliability & Citations

Reducing Hallucinations in Event Summaries with Citations and Source Linking

Eventrion Editorial · 8 min read

A practical approach to making short-format community event briefs more trustworthy: cite sources, link evidence, and design review steps that catch unsupported claims before publication.

Why hallucinations happen in event briefs

Short-format event summaries are a perfect storm for model overreach: tight word budgets, incomplete notes, and pressure to sound definitive. When the model cannot find supporting detail, it often fills gaps with plausible-but-incorrect specifics (speaker titles, dates, venue facts, or “key takeaways” that were never said).

Citations and source linking shift the task from “write a nice recap” to “extract only what can be supported”. That single change—treating each claim as something that must be backed—reduces invented details and makes uncertainty visible.

A practical definition: citation-grounded summaries

A citation-grounded event brief has two properties:

  • Every factual claim (names, numbers, times, locations, quotes, announcements) is accompanied by a citation to a source snippet.
  • Readers can verify quickly via links that open the referenced transcript, agenda, slide, or official page.

If a claim cannot be cited, it should either be downgraded to the strongest wording the evidence does support (e.g., “The speaker announced…” becomes “The speaker discussed…” only if the weaker claim is itself supported) or omitted entirely.

Design the pipeline around evidence (not generation)

  1. Collect sources: agenda pages, official event site, sponsor posts, session descriptions, transcripts (ASR), slide PDFs, and organizer emails.
  2. Normalize & chunk: split transcripts by speaker turns and timestamps; split docs by headings. Keep stable chunk IDs.
  3. Retrieve evidence: for each target output field (who/what/when/where/so-what), retrieve top chunks with tight filters (session ID, date, venue).
  4. Extract first: ask the model to produce an evidence table (claim → quote/snippet → source URL) before writing prose.
  5. Write last: generate the final brief from the evidence table only. No new facts allowed.

Citation formats that work in short briefs

For compact daily blocks, citations must be fast to scan. Two common patterns:

Inline markers

Append [1], [2] after a sentence and include a tiny “Sources” line.

Per-bullet links

Each bullet ends with a short domain + timestamp or doc label, linking to the exact section.

Whichever you pick, keep it consistent across categories so readers learn the pattern.
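The inline-marker pattern can be rendered mechanically so numbering stays consistent and repeated sources reuse one marker. The function name and the (sentence, label, url) tuple layout below are assumptions for illustration:

```python
def render_with_inline_markers(sentences_with_sources):
    """Render sentences with [n] markers plus a compact 'Sources' line.

    sentences_with_sources: list of (sentence, source_label, source_url).
    The same URL always maps to the same marker number.
    """
    body, sources, seen = [], [], {}
    for sentence, label, url in sentences_with_sources:
        if url not in seen:
            seen[url] = len(seen) + 1
            sources.append(f"[{seen[url]}] {label}: {url}")
        body.append(f"{sentence} [{seen[url]}]")
    return " ".join(body) + "\nSources: " + "; ".join(sources)
```

Deduplicating by URL keeps the "Sources" line short even when several sentences lean on the same agenda page.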

Prompt constraints that prevent “helpful guessing”

Models hallucinate most when asked to be polished. Add constraints that make omission acceptable and reward quoting.

Example system/policy snippet

Use ONLY the provided evidence. 
If a detail is not supported, write "Not confirmed" or omit it.
Every sentence containing a factual claim must include at least one citation ID.
Do not infer speaker roles, company names, dates, or numbers.

  • Require a minimum citation density (e.g., 1 citation per sentence in the “What happened” block).
  • For quotes, force verbatim snippets with timestamps to avoid “quote-like” paraphrases.
  • Explicitly ban “contextual” additions (e.g., historical facts about a venue) unless you retrieved an authoritative source.
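The verbatim-quote constraint above can be checked mechanically rather than trusted. A whitespace-normalized substring test is a minimal sketch, assuming the transcript is available as plain text (a production check would also compare timestamps):

```python
def quote_is_verbatim(quote: str, transcript: str) -> bool:
    """True if the quote appears verbatim in the transcript.

    Normalizes whitespace and case so line wrapping in the ASR
    transcript does not cause false rejections.
    """
    def norm(s: str) -> str:
        return " ".join(s.split()).lower()
    return norm(quote) in norm(transcript)
```

Any quote that fails this check should be demoted to a paraphrase with attribution, never kept inside quotation marks.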

Source ranking and quality gates

Not all sources are equal. Rank and filter before the model sees anything:

  • Official agenda / event site: dates, times, session titles, venues
  • Transcript / recording notes: what was actually said; quotes
  • Slides / speaker materials: figures, frameworks, named initiatives

Then enforce gates: fail the run if the model returns uncited sentences, or if retrieved evidence is below a minimum confidence threshold.
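Both gates can be a hard failure in code rather than a reviewer guideline. The [n] marker regex and the 0.35 threshold below are illustrative assumptions, not standard values:

```python
import re

CITATION_MARKER = re.compile(r"\[\d+\]")

def enforce_gates(sentences, evidence_scores, min_score=0.35):
    """Fail the run if any sentence is uncited or weakly evidenced.

    sentences: generated output sentences containing [n]-style markers.
    evidence_scores: best retrieval confidence per sentence (same order).
    min_score: illustrative threshold; tune against your retriever.
    """
    problems = []
    for sentence, score in zip(sentences, evidence_scores):
        if not CITATION_MARKER.search(sentence):
            problems.append(f"uncited: {sentence}")
        elif score < min_score:
            problems.append(f"weak evidence ({score:.2f}): {sentence}")
    if problems:
        raise ValueError("quality gate failed:\n" + "\n".join(problems))
```

Raising instead of logging means an unsupported brief never reaches publication by default; a human can still override after review.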

How to handle uncertainty without losing reader trust

Readers (especially audiences aged 40–60) often prefer a smaller set of verified statements over a longer, more speculative recap. Use crisp uncertainty language:

  • Not confirmed: “Attendance was not confirmed in available sources.”
  • Attribution-first: “According to the published agenda…”
  • Scope limitation: “This brief reflects the first 18 minutes of the session recording.”

Measuring progress: hallucination-oriented checks

Beyond generic QA, track metrics tied to your citation strategy:

  • Unsupported claim rate: % of sentences without citations.
  • Citation validity: do links open the exact referenced content (timestamp/heading)?
  • Claim-to-evidence mismatch: human spot-check if the cited snippet actually supports the sentence.

A good rule: if a reader cannot verify a claim in under 15 seconds, the citation design needs work.
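The unsupported claim rate is straightforward to compute once citations use a fixed marker format; [n]-style inline markers are assumed here:

```python
import re

def unsupported_claim_rate(sentences):
    """Fraction of sentences carrying no [n] citation marker.

    Treats every sentence as a factual claim, which slightly
    overcounts; good enough for trend tracking across runs.
    """
    marker = re.compile(r"\[\d+\]")
    if not sentences:
        return 0.0
    uncited = sum(1 for s in sentences if not marker.search(s))
    return uncited / len(sentences)
```

Tracked per category and per day, this single number makes regressions visible as soon as a prompt or retriever change ships.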

Putting it together for daily category blocks

For a community calendar newsroom, consider a template like:

  • What it is (1 line) + citation
  • What was said / decided (2–3 bullets) with citations per bullet
  • Who it affects (1 bullet) only if evidenced
  • Sources list (short labels → links)
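The template above can be rendered directly from the evidence table so unevidenced fields are dropped rather than padded. The function name and field layout below are assumptions matching the bullets, not a fixed schema:

```python
def render_daily_block(title, said_bullets, affects, sources):
    """Render one compact daily category block.

    said_bullets: list of (text, citation_label) pairs.
    affects: one-line audience note, or None if not evidenced
             (the block simply omits the line in that case).
    sources: short "label -> link" strings for the sources list.
    """
    lines = [f"• {title}"]
    for text, cite in said_bullets:
        lines.append(f"  • {text} ({cite})")
    if affects:
        lines.append(f"  • Who it affects: {affects}")
    lines.append("  Sources: " + "; ".join(sources))
    return "\n".join(lines)
```

Because the "who it affects" line disappears when evidence is missing, the template degrades gracefully instead of inviting the model to guess.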

This keeps the output compact, verifiable, and resilient when sources are incomplete—exactly the conditions where hallucinations usually spike.