Trego's Mountain Ear

"Serving North Lincoln County"

I’m Not Sure About Artificial Intelligence

I’ve been reading about artificial intelligence and how well it performs when it replaces a human writer. Forbes Magazine ran an article on this topic about three years ago:

“Back in 2014, the Los Angeles Times published a report about an earthquake three minutes after it happened. This feat was possible because a staffer had developed a bot (a software robot) called Quakebot to write automated articles based on data generated by the US Geological Survey. Today, AIs write hundreds of thousands of the articles that are published by mainstream media outlets every week.

At first, most of the Natural Language Generation (NLG) tools producing these articles were provided by software companies like Narrative Science. Today, many media organisations have developed in-house versions. The BBC has Juicer, the Washington Post has Heliograf, and nearly a third of the content published by Bloomberg is generated by a system called Cyborg. These systems start with data – graphs, tables and spreadsheets. They analyse these to extract particular facts which could form the basis of a narrative. They generate a plan for the article, and finally they craft sentences using natural language generation software.

These systems can only produce articles where highly structured data is available as an input, such as video of a football match, or spreadsheet data from a company’s annual return. They cannot write articles with flair, imagination, or in-depth analysis. As a result, they have not rendered thousands of journalists redundant. Instead they have sharply increased the number of niche articles being written.”

– Forbes

Other articles describe AI-written pieces that can’t survive fact-checking – articles based on a reality that doesn’t exist, or on social situations that simply aren’t correct. While that is problematic, it really isn’t unique to artificial intelligence in journalism – I’ve seen the same divergence between right-wing and left-wing accounts of the same events.

It reminds me of comments I heard from a journalist many years ago. He described how a journalist has to quickly understand a very specialized topic, then rephrase it in a way the average reader can understand. It was a great description of journalistic responsibility – but, he admitted, it was something he personally hadn’t achieved.

I’m not sure we’re looking at it the right way – maybe we need to compare artificial intelligence with natural, organic stupidity. And that gets me back into the realm of confirmation bias. As I listen to the stories on Gaza – the prospect of the Israelis marching in, and how much collateral damage will occur – a few basic thoughts occur to me:

Israel offered to give Gaza back to Egypt.  Egypt refused – the Egyptian government didn’t want the challenge of keeping the Palestinians in line.

Jordan allowed the Palestinians in as refugees. Three years later, the Palestinians were rising against the Jordanian government – that uprising is known as Black September. The story is told in “Fifty Years after ‘Black September’ in Jordan.” As a general statement, Muslim countries that have accepted Palestinian refugees have encountered tremendous unrest. The simplest explanation is that neither Israel nor Egypt wants the inhabitants of Gaza. And I can find flesh-and-blood journalists who write their articles without mentioning that fundamental problem.

Whether the author is artificial or flesh and blood, we have to read critically. Anyone who offers his, her, or its authoritative statement without providing support for it – well, that statement shouldn’t count for much. We can encounter at least as many problematic, unreliable articles from flesh-and-blood authors as from artificial intelligence. The problem may be that natural stupidity also infects artificial intelligence.
