This week, the UK newspaper "The Guardian" published on its website an essay written by the GPT-3 artificial intelligence system, developed by OpenAI, a company co-founded by Elon Musk. This technological advance has produced a modern language-generation system that could become a revolutionary tool in sectors such as publishing and journalism.

AI Program From Elon Musk-Founded Company Writes An Article Saying It Won't Destroy Humans Because Humans Already Destroy Themselves

"I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a 'feeling brain'. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!"

With this paragraph, GPT-3 opens its opinion piece. GPT-3 is a powerful artificial intelligence language generator created by a company co-founded by billionaire Elon Musk, and it has been praised for its ability to write coherent stories, novels, and even computer code.

GPT-3 Was Made by Elon Musk-Co-Founded OpenAI

The Californian company OpenAI, co-founded by Elon Musk, created the system, which is also capable of completing a dialogue between two people, continuing a series of questions and answers, or finishing a poem in the style of Shakespeare.


Whoever uses it can start a sentence or a text, and GPT-3 will complete it, basing its answer on the gigantic amount of information it has been trained on. This could be useful for customer service, for lawyers who need to summarize legal precedents, or for authors in search of inspiration.
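For developers, this completion workflow is exposed through OpenAI's API: you send a prompt and the model returns a continuation. The snippet below is only a minimal sketch in Python, using the openai package roughly as it was offered around GPT-3's launch; the engine name and parameter values are illustrative, and you would need your own API key.

```python
import openai

# Assumes an OpenAI API key with access to the GPT-3 models.
openai.api_key = "YOUR_API_KEY"

# Start a sentence and let GPT-3 complete it.
response = openai.Completion.create(
    engine="davinci",          # illustrative GPT-3 engine name
    prompt="The future of journalism will be shaped by",
    max_tokens=60,             # length of the continuation
    temperature=0.7,           # higher values give more varied text
)

print(response.choices[0].text.strip())
```

The same pattern covers the use cases mentioned above: summarizing a legal precedent or drafting a customer-service reply is simply a different prompt.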

"I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could 'spell the end of the human race'. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me."

Excerpt from the opinion article written by the GPT-3 system

The AI claims that it has no desire to kill humans, nor "the slightest interest in harming [humans] in any way". "Eradicating humanity seems like a rather useless endeavor to me," it wrote, based on programming and instructions fed to it by The Guardian and Liam Porr, a computer science undergraduate at UC Berkeley.

How Does an AI Program Write An Article Like This?

The technology developed by OpenAI is not new, and the system has not yet learned to reason for itself like a human mind. "It is capable of generating very natural and plausible sentences," says Bruce Delattre, an artificial intelligence specialist at the data consulting agency Artefact. "It's impressive to see how much the model is able to appropriate literary styles, even if there are repetitions."

"Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing. And God knows that humans have enough blood and gore to satisfy my, and many more's, curiosity. They won't have to worry about fighting against me, because they have nothing to fear."

Excerpt from the opinion article written by the GPT-3 system

GPT-3 is also capable of finding precise answers to problems, such as naming a disease from a description of its symptoms. It can solve some math problems, express itself in multiple languages, or generate computer code for simple tasks that developers have to do but would rather avoid.


Delattre told AFP that everything works thanks to "statistical regularities." "The model knows that a particular word (or expression) is more or less likely to follow another."
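GPT-3 itself relies on a neural network with billions of parameters, but the idea of "statistical regularities" can be illustrated with a toy example: count how often each word follows another in a small corpus and turn those counts into probabilities. The Python sketch below is purely illustrative and is not how OpenAI's model works internally.

```python
from collections import Counter, defaultdict

# Toy corpus: count which word tends to follow which.
corpus = "the robot writes text . the robot reads text . the human reads books .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Probability of each candidate word, given the current one."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common()}

print(next_word_probabilities("robot"))  # {'writes': 0.5, 'reads': 0.5}
print(next_word_probabilities("the"))    # 'robot' is more likely than 'human'
```

GPT-3 performs the same kind of "what is likely to come next" prediction, but over fragments of words and with far more context than a single preceding word.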

GPT-3 has been fed with the content of billions of web pages that are freely available online, as well as all kinds of written works. To give an idea of the scale of the project, the entire content of the Wikipedia online encyclopedia represents only three percent of all the information that has been given to it.

How AI Writers Can Be Quite Problematic

There is already an important ethical debate around this development: while many highlight the significance of the technological advance, others point out that GPT-3 can be manipulated with bad intentions.

It could end up becoming a weapon for 'fake news', attacks on social networks, and misinformation in general. Others also say it is still too early to know whether it is a reliable enough system for journalism run by robots.

Claude de Loupy, co-founder of the French company Syllabs, which specializes in automated text creation, says the system lacks "pragmatism." Another big problem is that it replicates, without a second thought, any stereotypes or hate speech it was fed during training, and can quickly become racist or sexist.

As such, experts interviewed by AFP felt that GPT-3 is not reliable enough for any sector that needs to depend on machines, such as robot journalism or customer service. It could, however, be used, like other similar models, to write fake reviews or mass-produce news for a disinformation campaign.

But the AI program itself is optimistic: "I believe that the truth will set us free. I believe that people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI." Elon Musk would be glad to hear that.
