“You guys always suspect the algorithms are rigged against you, but the reality is actually so much more depressing than the conspiracy theories,” the supposed whistleblower wrote.
He claimed to be drunk and at the library to use its public Wi-Fi, where he was typing this long screed about how the company was exploiting legal loopholes to steal drivers’ tips and wages with impunity.
“For most of my career up until this point, the document shared with me by the whistleblower would have seemed highly credible in large part because it would have taken so long to put together,” Newton wrote. “Who would take the time to put together a detailed, 18-page technical document about market dynamics just to troll a reporter? Who would go to the trouble of creating a fake badge?”
There have always been bad actors seeking to deceive reporters, but the prevalence of AI tools means fact-checking now demands even more rigor.
“AI slop on the internet has gotten a lot worse, and I think part of this is due to the increased use of LLMs, but other factors as well,” Spero told TechCrunch. “There’s companies with millions in revenue that can pay for ‘organic engagement’ on Reddit, which is actually just that they’re going to try to go viral on Reddit with AI-generated posts that mention your brand name.”
Tools like Pangram can help determine whether text is AI-generated, but these tools aren’t always reliable, especially when it comes to multimedia content — and even if a synthetic post is proven to be fake, it might have already gone viral before being debunked. So for now, we’re left scrolling social media like detectives, second-guessing whether anything we see is real.