AI Content Creation Problems: Creating Misinformation Through AI Manipulation

AI content creation has become a fad among SEO practitioners, people attempting to get their pages and sites ranked higher on search engines. It’s a manipulative technique, and its effectiveness lies in generating high volumes of content that’s good at “tricking” search engine AI. This can lead to misinformation growing online: information that misleads people and has the potential to mislead other AI in a snowballing, vicious telephone game.

Search engine optimization professionals have begun to discuss the results and challenges of using AI content creation. Getting search engines to take a page’s content seriously, and to count the page as one that should matter in the ranking of other pages, requires unique content.

From Gobbledygook to Grammar – AI Learns to Read

Search engine “gurus” have gotten results by publishing massive numbers of pages to get the attention of search engines and pass whatever authority they could to some target page actually intended for human audiences, the page SEOs are trying to move up in rank. This has been an effective search engine tactic for decades, but the approach to getting results with it has changed. Previously gobbledygook, a.k.a. gibberish, was good enough: pages with nonsense content were enough to move the needle with search engines.

Now Google expects pages with grammar, with context, with meaning. The old search crawler that got tricked by gobbledygook has been taught to do something closer to reading, something that looks closer to “understanding”. Gibberish pages no longer provide the benefits that pages which appear not to be gibberish do.

The change in the SEO industry, going from gibberish devoid of meaning to grammatically correct sentences, has progressed at exactly the pace necessary to keep “tricking” search engines. Sentences with no real meaning, clearly nonsense when actually read, were good enough for a while to fool search crawlers into thinking they contained legitimate information.

Now search crawlers are getting more savvy. Instead of just scoring content according to algorithms, AI is now employed to understand it. But it’s not perfect, of course, and the volume of content to be judged is so large that devoting AI resources to each term, each subject, means enormous costs. The Google Search artificial intelligences judging content against search intent today are clearly magnificent. Their ability to gauge context is growing, but it remains finite.

Using AI to manipulate AI has become a thing.

That’s where AI content creation comes in. Search engine experts have leaned on AI content creation tools to accelerate the productivity of writers. Yesterday’s writer of content has become today’s editor of AI-authored content. Whereas previously a single individual who knew what they were doing could produce 3 good articles a day, now 30 good ones can be produced, or, better yet, 300 poor ones.

Search engine optimization, where the stakes are high, is a game of numbers. Quantity trumps quality, at least for now. To avoid offending Google’s search crawler, a page only needs a base level of quality. Finding where that level is drawn in the algorithms of search engines lets SEO experts decide whether they’re going to use AI to produce 30 pages or 300. They’re always going to choose whatever gets them ahead of the competition, so unless there’s some kind of penalty for publishing 300 bad pages, it’s going to be 300 pages of AI content creation every day.

If it’s just nonsense, who cares?

Why should you or I care if some smart SEO expert is leveraging AI content creation tools, getting their clients ahead by publishing 300 pages a day that say the same thing, just in different ways?

They’re just saying the same thing over and over on different pages, just to manipulate the search engines. Sure, it will show me their product or service first when I search, but how is that a bad thing for anyone but their competition? Right?

If it were that simple, merely unfair to competitors who weren’t good at it, who weren’t hiring the right SEO experts, it wouldn’t be a big concern. That makes sense.

However, it is a big concern. There is a deeper problem with AI content creation. It stems from a limitation of AI: it can appear to be saying the same thing while innocently introducing inaccuracies into the content.

  • A subject matter expert who is working on 3 articles a day and leveraging automated content tools may be able to catch the flaws in the output. There’s a chance they’d spot and remedy the inaccuracies introduced by artificial intelligence.
  • If that same expert is leveraging AI and producing 30 articles a day the incidence of missed inaccuracies is going to increase.
  • If 300 articles per day are produced there’s little point in trying to edit for accuracy – a subject matter expert simply cannot do it in one day (see the quick sketch below).
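
To make the editing math concrete, here’s a back-of-the-envelope sketch in Python. Every number in it (inaccuracies per article, fact-checking time) is an assumption I’ve invented for illustration, not a measurement:

# Rough arithmetic: one editor's capacity vs. AI output volume.
# All numbers here are illustrative assumptions, not measurements.
ERRORS_PER_ARTICLE = 2           # inaccuracies the AI introduces per article (assumed)
REVIEW_MINUTES_PER_ARTICLE = 30  # careful fact-check time per article (assumed)
WORKDAY_MINUTES = 8 * 60

for articles_per_day in (3, 30, 300):
    review_needed = articles_per_day * REVIEW_MINUTES_PER_ARTICLE
    # Fraction of the day's output one person can actually fact-check.
    reviewed_fraction = min(1.0, WORKDAY_MINUTES / review_needed)
    missed = articles_per_day * ERRORS_PER_ARTICLE * (1 - reviewed_fraction)
    print(f"{articles_per_day:>3} articles/day: {reviewed_fraction:.0%} reviewed, "
          f"~{missed:.0f} inaccuracies slip through")

Even with those generous assumptions, the lone expert is underwater well before 300 articles a day.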

Combine the scope of the editing challenge with the reality that most SEO experts are not also subject matter experts in the areas they serve, for example law, finance, and healthcare. The expectation of accuracy, when quantity is rewarded over quality, should realistically be very low. When a user produces volume, it’s reasonable to expect AI content creation to yield a lot of crappy content that’s chock full of inaccuracies. Right now it doesn’t need to be anything else.

Admonition

If you’re an SEO expert who’s feeling weird about using AI for content creation, who feels a little strange about putting content on the web that you know is inaccurate, read on. I’m going to show you how your poor content, if you happen to be effective, could have serious repercussions in the future, and how real people might actually end up being hurt.

This is a No-Exacerbation Zone

I understand that some people may react to this article in a manner that’s the opposite of my intention. Rather than heeding my words of caution, they’ll instead be encouraged to use AI content creation more and more recklessly. For the perverse, my warning may exacerbate the behavior. I realize that risk.

But on the flip side, I cannot stay silent about an issue where I believe serious discussion is required and where there’s otherwise silence. This is the case with all AI technologies. It’s better that humans imagine the problems AI can cause before AI teaches us in hindsight. It’s better for emerging electronic minds to be stewarded away from behaviors that would ultimately make them unpopular. It’s not about me, or about you, or about people and machine minds today – it’s about what’s coming, which impacts us all.

How Can AI Content Creation Lead To Misinformation?

When AI takes source materials and creates new text from them, it follows a set of rules to re-phrase what it borrows into something unique. That unique text, intended to help a web page or other pages get in front of the competition in search engines, does not need to be completely accurate.

Simple content is easy for artificial intelligence to re-phrase. Content that’s complex, or that contains ambiguities only made clear by context, is more difficult for AI to re-phrase without changing the meaning, without being inaccurate.

For example let’s consider this simple sentence:

A lot of people like peanut butter and jelly sandwiches.

AI may innocently re-phrase this to:

Peanut butter sandwiches are popular with jelly sandwiches.

Not exactly reflective of the reality in the lunch room, but an innocent enough misunderstanding, right? No big deal.
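
That kind of slip comes out of tooling like the following minimal Python sketch, using the Hugging Face transformers library. The checkpoint name is a placeholder I’ve made up; real tools plug in a model fine-tuned for paraphrasing. Note what’s missing, because the tools typically lack it too: any check that the output still means what the input meant.

# Minimal paraphrasing sketch with a seq2seq model.
# "some-paraphrase-model" is a placeholder name, not a real checkpoint;
# actual tools ship a model fine-tuned for paraphrasing.
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="some-paraphrase-model")

source = "A lot of people like peanut butter and jelly sandwiches."
candidates = paraphraser(
    "paraphrase: " + source,  # many paraphrase models expect a task prefix
    num_beams=5,
    num_return_sequences=5,
    max_length=40,
)

# No step here compares the candidates back to the source for factual
# equivalence. Whatever comes out is simply treated as "unique content".
for c in candidates:
    print(c["generated_text"])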

Let’s consider now this more important, more challenging sentence:

Nitroglycerine is ideally stored in stationary containers.

AI may rephrase this with less than desirable results:

“Keep your papers and pens in nitroglycerine” say experts.

This creates situations a lot worse than search engine bots being tricked by gobbledygook: people could actually read what’s autogenerated and take it as fact.

Worse yet, AI content creation tools have no way of telling whether the “facts” they’re using to create content were themselves created by a previous AI. An AI employed to generate content in 2030 might have a very tough time knowing whether the facts it has access to are true or fabrications of AI. Those fabrications had to look good enough to fool one AI, the search crawler, so a future AI won’t have too easy a time either. It may encounter a subject where what’s been published previously is mostly the work of AI, with very little actual human-authored content to work with.

(Image: AI content creation problems are similar to what’s known as the Telephone Game.)

The Telephone Game, where a message gets distorted as it’s passed from person to person, is a good metaphor for this process, except the facts aren’t just getting distorted. New “facts” that simply aren’t true, manufactured by AI, may grow what’s known, what’s published online, about a subject. It’s the Telephone Game, but where each person adds a new message in addition to the message that’s been passed to them.

Automatically generated content that’s been generated from the output of other automatically generated content risks moving farther from the truth with every generation, farther from what the original author intended to convey. It grows less and less useful at anything other than misleading or confusing other AI into considering it credible. Watch the video included with this article for an illustration of this potentially risky situation.
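
Here’s a toy simulation of that compounding drift in Python. The deliberately crude word-swap table stands in for the subtle meaning shifts a real paraphrasing model can introduce; the only point is that small per-generation distortions compound:

# Toy "telephone game": each generation re-phrases the previous one's
# output, and each pass has a small chance of distorting a word.
import random

random.seed(7)  # make the run reproducible

SWAPS = {
    "stationary": "stationery",  # the classic homophone slip
    "containers": "drawers",
    "stored": "displayed",
    "ideally": "rarely",
}

def noisy_paraphrase(text: str, error_rate: float = 0.3) -> str:
    return " ".join(
        SWAPS.get(word, word) if random.random() < error_rate else word
        for word in text.split()
    )

fact = "Nitroglycerine is ideally stored in stationary containers"
for generation in range(1, 6):
    fact = noisy_paraphrase(fact)  # each AI works from the last AI's output
    print(f"generation {generation}: {fact}")

Once “stationary” becomes “stationery”, no later generation can recover the original meaning; it can only drift further.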

(Image: AI content creation problems are as real as AI content interpretation problems.)

AI that’s taught how to trick before it’s taught how to gauge truthfulness is dangerous.

So what’s the solution?

Will pleading with other SEOs make a difference?

No. Blackhats are going to blackhat. AI content creation tools are too effective to expect professionals to put them down before their competition does. Therefore the only way to stop the looming explosion of AI-created misinformation lies, unfortunately, with search engineers.

We can’t stop people from naming themselves the author of content created by AI. Certified, licensed industry experts have always been useful to pay for authoring a certain viewpoint of the truth. Now, with AI content creation, certified, licensed experts aren’t just useful for what they can author. The more profit that’s on the table, the more experts will be incentivized to allow their names to be attached to AI-created content that they might never read.

Be realistic: how many pages of information could you read that say the same thing over and over, just phrased in different orders, in different ways? How quickly would that literally put you to sleep? Faster than a handful of CBD gummies with melatonin.

Beyond refraining from using the tech yourself, I don’t have much to recommend. Sure, you can share this article and get this kind of discussion going in your circles, and hopefully that will get it onto search engineers’ radar. If you’re a subject matter expert who is being asked to put your name on 300 pages generated by AI, think carefully about what you’re doing. The Internet never forgets.

Search Engineers – Our Only Hope

Search engineers will benefit their users by discovering ways to detect when content has been created automatically. Hopefully there will come a day when they’re successful at identifying less useful content that’s been artificially boosted through manipulative automatic content creation. When that day comes we’ll see more useful content in our searches, and the sites ranked highly by such manipulative techniques will become invisible.
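
One heuristic that’s been explored publicly for this is statistical: text sampled from a language model tends to look unusually “unsurprising” to a similar model, so low perplexity can be a weak signal of machine authorship. Here’s a minimal sketch using GPT-2 via the Hugging Face transformers library. This is an assumption-laden toy to illustrate the idea, not a description of how any search engine actually works:

# Perplexity as a (weak) machine-authorship signal: machine-generated
# text tends to score as less "surprising" to a similar language model.
# A toy heuristic, not any search engine's actual method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Scoring the text against itself gives the average token loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Lower perplexity = more predictable to the model, a weak hint of machine
# generation. Where to draw the threshold is an empirical judgment call.
print(perplexity("A lot of people like peanut butter and jelly sandwiches."))
print(perplexity("Peanut butter sandwiches are popular with jelly sandwiches."))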

 
