Can you spot AI-generated content? Survey findings explored

27.01.2025
"Professional woman at her desk in an office, deep in thought while reviewing documents, symbolizing critical analysis and the challenge of identifying AI-generated content."

In an era where artificial intelligence (AI) significantly influences how we receive and process information, distinguishing between AI-generated and human-written content has become increasingly challenging.

Our September 2024 Risk Radar survey posed this very question to 500 risk and security managers, whose roles heavily rely on accurate intelligence. Respondents were asked to identify whether incident updates about an anti-government protest were authored by a human or produced by AI. The results revealed a divide: 50% correctly identified the human-written alert, while 49% incorrectly chose the AI version, leaving just 1% unsure.

This near-even split underscores the sophistication of AI in replicating human tone and urgency, especially in high-stakes scenarios. While the human-crafted message provided greater detail and nuance, the AI-generated response was concise and directive - qualities that appeal to those accustomed to fast, clear updates.

As AI technology continues to advance, its capacity to mimic human communication poses critical questions regarding trust, accuracy, and the essential role of human oversight.

Using AI for data-gathering

AI technology has opened up new avenues for intelligence collection, with tools such as ChatGPT and DeepSeek able to filter vast amounts of open-source data in a matter of seconds.

When asked about their biggest concerns regarding the use of AI within their organisation, respondents to our survey highlighted privacy and data protection (21%) and unclear data sources (18%) as their top worries. Surprisingly, the fear of false information ranked sixth out of seven concerns, with only 14% citing it as a major issue.

Current AI models are susceptible to generating inaccuracies, often referred to as "hallucinations". This can happen for a number of reasons: when a tool lacks enough of the right data to answer a question, it fills in the gaps, and it may also misapply patterns learned from past data. Its predictions aren't always correct.

While this is a well-known limitation of AI, our survey suggests that many organisations are more focused on the logistics of AI - securing data, ensuring privacy, and understanding the sources behind algorithms - than on the inherent reliability of the content itself. This may mean that these organisations are confident they already have a trusted system in place for vetting the data they draw from AI, such as skilled human analysts who can verify and interpret AI-generated findings, ensuring accuracy and deriving actionable insights.

On the other hand, the fact that false information ranked below concerns such as plagiarism (15%) and loss of skills (14%) could indicate that this risk is being undervalued where guardrails are not in place. AI hallucinations can distort decision-making, cause operational inefficiencies, or even harm reputations if inaccurate data is acted upon.

Combining AI’s speed and scale with human expertise creates a balanced approach that maximises the strengths of both, ensuring that AI-generated insights are reliable and actionable.

Mia Cura
Brand and Communications Manager
