The AI Trust Index | A Knit Study of AI Trust, Rigor, and Governance in Market Research

The dominant conversation about AI in research focuses on speed and job displacement. This report argues the real crisis lies elsewhere. When Knit surveyed 154 U.S. insights professionals, the finding that emerged wasn't about replacement — it was about governance. Researchers are watching AI-generated insights flow to senior leaders without any expert review, and most have already seen it happen at their own organization. This report captures what researchers actually fear, where AI falls short in practice, and what it would take to make AI outputs trustworthy enough to act on.

What's inside this report:

  • Why researchers are more than twice as likely to fear governance failure as job replacement when asked to choose their bigger concern — and what that gap reveals about how AI is actually being used
  • The shadow analytics problem: how non-researchers are generating and acting on AI insights without any methodological oversight, at 92% of organizations
  • The rework tax: why 90% of researchers spend an average of 7 hours per week fixing AI outputs before the data reaches decision-makers
  • The trust gap between what AI promises and what it delivers — and where the biggest shortfalls are (transparency into conclusions, specificity to business context, methodological soundness)
  • What researchers say would actually fix it: the four trust drivers that matter most, in their own words