Came across a fascinating article today about GPT-3, the OpenAI language model that’s been all the talk of the town. The article described how a college student set up a fake blog under a fake persona called “Adolos”, used GPT-3 to generate its content and blog posts, and posted them on Hacker News and various other places, just to test whether people could tell the difference between an AI and a human.
Here’s the verdict: 99% of even the really smart folks on HN couldn’t tell. One of its posts even made it all the way to the top of HN (a real testament to its apparent legitimacy). All this happened before the grand reveal by the author. In fact, the few who were suspicious and made snarky remarks were admonished by others in the HN community, who came to the blog’s defence.
It’s a fascinating and curious social experiment, for sure. If HN can’t tell, then who’s to say any of the content we read online isn’t being made up by an AI right now? How do you know this post that you’re reading right now isn’t written by an AI that Jason used? The lines are blurring, and it’s getting harder and harder to tell the difference every day. Of course, the author noted a caveat: “GPT-3 isn’t great with logic, so inspirational posts would probably be best, maybe some pieces on productivity too.” It’s easy to pull a fast one on topics where the writing doesn’t have to follow strict logic and structure, and where some fluffiness and rambling is accepted, like life inspiration and productivity. But that’s exactly the kind of content people lap up!
Imagine this: in the not-too-distant future, with the emergence of GPT-10, content will be completely commoditized, and content marketing will be a job for robots. It’d be as simple as typing a few parameters; at the push of a button or the prompting of a voice command, you’d get pages of legit content that could easily rival, or even surpass, something written by a human. Writing used to be a very human, creative act, but now it’s almost cheapened by GPT-3. Maybe in a hyper-saturated market full of fake blogs and content, we’ll find new ways to authenticate human origins, much like the blue tick on Twitter. Or maybe it doesn’t matter: informational (and boring) content work can then be passed on to bots, just as rote and monotonous tasks are being handed to bots today, while we humans do the stuff that truly inspires us: poetry, fiction, humour, sarcasm.
Should we be concerned? Or just lean into it?