Why AI cannot replace your writer: Part one

I’ll admit, as a writer, I am most definitely biased on the topic of AI programs, particularly those that produce written content instantaneously and—supposedly—error-free. When these technologies were first popularized a few months back, I remember thinking, “Welp, there goes the market for writers.”

For a moment, it seemed like magic. With a quick wave of a wand (or, in this case, the typing of a quick prompt), ChatGPT could turn out a 1,500-word article on any topic you wished for. Professional connections of mine were boasting online about their articles written by ChatGPT, and authors were even approaching me to ask whether using such technology for their articles was acceptable (it wasn’t).

But over time, questions arose. Where was it getting its information? How was it producing this content? What were its sources? How was that output being fact-checked and edited? As the curtain was pulled back, it became clear that not only was AI not ready to replace writers, but that relying on it to do so could spread false information, foster brand distrust, and cause reputational damage.

It removes the human experience

In AI writing

At first, watching the AI produce well-written, well-researched content within seconds can be fascinating. It offers endless information, backed by every source the internet can offer, all for free. But the closer you look, the more imperfections appear, and the more “off” it begins to feel. The information may seem correct, the vocabulary astounding, and the format impeccable, but upon further inspection, not only may the information be incorrect, but the content often reads like a dictionary: completely factual, robotic (perhaps an unfair criticism, considering it is a robot, but true nonetheless), and superficial.
