
Lauren Shelley

4 March 2026

The rise of the fake expert: How to remain authentic in an inbox of fakes



A recent Press Gazette investigation into fake experts being widely quoted in the UK and US media has led to dozens of stories being amended or even completely deleted by leading publishers, including News UK, Reach, and Yahoo. 


This investigation, which was expanded earlier this year, revealed a shocking trend of fake accounts using AI to generate expert commentary in response to journalists' requests. Press Gazette compiled more than 1,000 articles across sites including MailOnline, Metro, The Sun, the Daily Mirror, HuffPost UK, Yahoo, and The Independent that were partly or entirely based on commentary from AI-generated fake experts. It also produced a ‘PR Hall of Shame’, naming over 50 fake experts who had appeared in multiple stories using AI-generated commentary.



What is a ‘fake expert’ and why are people doing it?


A fake expert or profile, in the context of PR or SEO, is usually an AI-generated persona used to appear as an expert on various topics to build links or influence opinions. SEOs will sometimes use this tactic to increase their website’s visibility on search engine results pages. 


When executed successfully, expert commentary is a great way to secure PR coverage, as it demonstrates expertise and helps your business stand out as an industry authority. It also means you’re providing journalists with exactly what they want, making you more likely to secure coverage, enhance brand awareness, and drive website traffic when you include a link. Online commentary that mentions a company, names an expert on the team, and links back to the website demonstrates E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) to Google and could give you an SEO boost.


However, some have begun abusing this practice, using fake experts to cheat their way to links and PR coverage. The tactic appeals to brands without a spokesperson, or without anyone willing to be named publicly, because it lets them secure coverage through expert commentary and thought leadership. Brands with no in-house expert may see it as an easy route to coverage, one that skips the hard work of building genuine authority.


Essentially, people are trying to get links through whatever means possible, even if that means faking it. The problem? Creating AI-enhanced "fake people" to generate links and coverage may offer a short-term SEO boost, but it carries significant long-term risks.



How to spot a fake expert


Not all clients will have experts readily available for every campaign, media request, or expert comment, so in this case, you’ll need to source an external expert. When sourcing an expert externally, it can be fairly easy to spot a fake, or at least an AI-generated person. Here are some of my top tips for spotting one:


Lack of credentials - A quick Google search can help you decide whether an expert is real before you use them in your campaign. If they only recently started securing links, but the company they work for is fairly established, this could be a red flag. Look for past articles, interviews, or appearances in which they have demonstrated their expertise. You should also check whether they’re active on social media or have their own website; if they have no profiles and haven’t been linked to before, this could be another sign they aren’t real. Verify their credentials and confirm they have the appropriate education, certifications, or licences for their claimed area of expertise, particularly if they’re claiming to be a money, law, or health expert.


Makes unsubstantiated claims - Always be wary of experts making sweeping generalisations or vague claims when sharing their commentary for your campaign. If they have no data, actual experience, or evidence to back up their expertise, it may be made up, so you should steer clear and avoid using them in any of your PR. A genuine expert will usually have data, or be able to source it, to back up any major claims they make.


Lack of personality - If the expert is fake, then they will most likely have used AI to generate their commentary, too. Review what they’ve sent you for inclusion in your campaign. If their insights are overly robotic, boring, or vague, they may have been copied and pasted from an AI tool such as ChatGPT or Gemini. Always double-check the insights before using them in any campaigns or commentary; if they don’t sound human, they probably aren’t, and a journalist will almost certainly pick up on this. Run their copy through a detection tool or do a quick Google search to see if the same 'expert' advice appears verbatim elsewhere on the web.


No headshots - When using an expert, many journalists may request a headshot to go in their article; therefore, you’ll need to request this when you ask for their insights or commentary. If an expert or source is unwilling to provide a headshot or the one they’ve sent looks a bit off, then it’s most likely that you’re talking to a fake expert. Most sources will be happy to provide an image to go with your campaign or to at least showcase that they’re real, so it’s rare you’ll come across an expert who is willing to speak but can’t share a photograph. 


Refusal to hop on a call - If an expert or source is willing to provide complex commentary or insights via email for your campaign or media request, but refuses to get on the phone, or balks when a journalist asks to verify their identity directly, that's a red flag they may not be real. You should explain to any sources and experts before using them that they may need to verify their identity by emailing the journalist directly or by making a quick introductory call. If they can’t do this, it may be worth sourcing a new expert who is happy to speak to the press or verify their identity if asked.
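To illustrate the verbatim-duplicate check mentioned in the tips above, here's a minimal sketch in Python using the standard library's difflib. The sample quotes and the 0.85 similarity threshold are hypothetical assumptions for demonstration, not part of any tool named in this post; a real workflow would pair this with a web search or detection tool.

```python
from difflib import SequenceMatcher


def looks_recycled(commentary: str, known_texts: list[str], threshold: float = 0.85) -> bool:
    """Flag commentary that closely matches text already seen elsewhere.

    A high similarity ratio suggests the 'expert' quote may have been
    copied verbatim (or near-verbatim) from another source or AI output.
    """
    # Normalise whitespace and case so trivial edits don't hide a copy
    normalised = " ".join(commentary.lower().split())
    for known in known_texts:
        ratio = SequenceMatcher(None, normalised, " ".join(known.lower().split())).ratio()
        if ratio >= threshold:
            return True
    return False


# Hypothetical example: the same canned quote sent to two different campaigns
seen_before = ["As an expert, I always recommend diversifying your portfolio to manage risk."]
new_quote = "As an expert, I always recommend  diversifying your portfolio to manage risk."
print(looks_recycled(new_quote, seen_before))  # True: near-identical text
```

This only catches copies within text you've already collected; it's a cheap first filter before the slower manual checks (Google searches, detection tools, a phone call).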



Why you shouldn’t be using fake experts in your strategy


This practice is unethical. It can severely damage a brand's reputation, erode trust with journalists, and lead to blacklisting, potentially undermining genuine experts within an organisation. If caught, previous SEO and digital PR efforts could be negated by a reputational crisis.


It could even invite legal action if you’re not careful, depending on how the comments have been used. People don’t like being lied to, and when something goes wrong, they usually look for someone to blame, or for compensation. This means that whatever your fake expert says must be 100% accurate, and as we all know, AI can (and does) make mistakes. In some cases, this could lead to a criminal conviction for misleading consumers under the Consumer Protection from Unfair Trading Regulations 2008, or, in more serious cases, to prosecution for fraud under the Fraud Act 2006.


Beyond traditional regulations, the Digital Markets, Competition and Consumers (DMCC) Act (which came into full force in 2025) has fundamentally changed the stakes. Under this Act, the Competition and Markets Authority (CMA) no longer needs to prove that a 'fake expert' directly influenced a sale to take action. Simply using a fake persona is now a blacklisted practice. For businesses caught in the crosshairs, the CMA now has the power to bypass the courts and issue direct fines of up to 10% of global annual turnover.


From a regulatory perspective, the Advertising Standards Authority (ASA) states that all advertisements must be honest and not misleading. Therefore, using a fake expert could result in a banned campaign or further action from Trading Standards. The Online Safety Act will apply to online platforms that spread content, particularly when it causes harm or contains misinformation. 



What does the rise in fake experts mean for AI in PR and SEO, and what can we do about it?


As a result of fake experts, journalists and consumers will be more sceptical about taking advice online. In PR, for example, journalists are increasingly likely to request expert verification via video or phone calls and to conduct more thorough checks. Recent reports show that Reach has even begun creating a trusted PR directory and blacklisting agencies that use these tactics. In SEO, Google is more likely to penalise those caught using fake experts to boost rankings and visibility in future updates. Google’s latest 2026 core updates have focused heavily on information gain and experience. AI-generated fake experts often fail here because AI can’t provide real-world experience, and Google relies on signals such as author entities and social proof to verify experts.


Those looking to secure links should prioritise making real experts accessible, ensuring content is genuine through more interviews and calls, and building trust-based relationships with journalists so they know you’re more likely to send real content rather than AI-written commentary. 


Now, don’t get me wrong, I’m not against AI generally. It can be a useful tool for productivity and other administrative tasks, but it should never replace the expertise and personality of real individuals. The long-term consequences of using fake experts, including reputational damage and loss of credibility, far outweigh any potential short-term SEO gains.




This post was written by Lauren Shelley, Senior Digital PR Executive at Marketing Signals



 



Enjoy learning more about Digital PR? Subscribe to The Digital PR Observer Newsletter to stay up to date with all of the latest Digital PR news and tips!



