Hackathon 2 — Critique Path Resource

The 100% Not Fabricated Claim Library

Can't find a claim to challenge? Start with one of these. All claims and sources below are completely real and definitely not made up for practice purposes.

⚠️ Extremely Official Disclaimer

The claims on this page are entirely fabricated for the purpose of evaluation practice. The Mickey Mouse School of Qualitative Data Analysis is not a real institution. Brad Pitt has not communicated personally with us (we wish). Clippy is retired and unavailable for citation. Any resemblance to actual research is coincidental and honestly impressive. These are practice targets — use them to sharpen your critique skills, not as evidence of anything whatsoever.

Choose a claim to red-team

Pick one, interrogate its assumptions, and submit your critique.

Claim #1

"AI-assisted qualitative coding achieves inter-rater reliability scores comparable to trained human coders in most evaluation contexts."

— The Mickey Mouse School of Qualitative Data Analysis, 2023

Use this claim →

Claim #2

"Incorporating AI into stakeholder engagement processes risks systematically excluding the voices of marginalized communities."

— Brad Pitt, Personal Communication, 2024

Use this claim →

Claim #3

"Evaluators who use AI tools for data synthesis produce final reports that are perceived as more credible by funders."

— The Hogwarts Centre for Evidence-Based Magic, 2024

Use this claim →

Claim #4

"AI cannot meaningfully interpret culturally specific narrative data without significant human oversight and local context."

— SpongeBob SquarePants Institute for Cultural Sensitivity, 2023

Use this claim →

Claim #5

"Automating data cleaning with AI reduces evaluator error rates by up to 60% in large mixed-methods studies."

— The Shire Institute for Very Precise Numbers, 2024

Use this claim →

Claim #6

"AI-generated logic models are indistinguishable from evaluator-developed ones when reviewed by program officers."

— Taylor Swift Center for Program Theory, 2023

Use this claim →

Claim #7

"Using AI to draft evaluation findings sections saves significant time but introduces subtle framing biases that go undetected in peer review."

— Clippy, Microsoft Office Assistant, Retired, 2024

Use this claim →

Claim #8

"AI tools are inherently inappropriate for use in evaluations involving trauma-affected populations."

— The Darth Vader Foundation for Ethical Research, 2024

Use this claim →

Claim #9

"Evaluation commissioners who receive AI-assisted reports are more likely to act on recommendations than those who receive traditionally produced reports."

— A Guy Named Dave, LinkedIn Post, 2023

Use this claim →

Claim #10

"AI summarization tools consistently underrepresent dissenting or minority viewpoints present in qualitative datasets."

— The Flat Earth Society Working Group on Data Representation, 2024

Use this claim →

Have a real claim to contribute?

If you've encountered a genuine claim about AI in evaluation — in a report, a conference talk, a LinkedIn hot take, or a colleague's pitch deck — we'd love to add it to the library. Share it in the Slack channel and it might make the next edition (with a real citation this time).
