A recent wave of excitement, fear and even dystopian predictions has swept in as users have become massively exposed to ChatGPT, a human-language conversation system, or ‘chatbot,’ that feels incredibly ‘alive.’
It was developed by OpenAI, an artificial intelligence firm valued at nearly $30 billion and one of the most hyped companies in the high-flying sector. Many users express amazement at the speed with which the robot can create seemingly complex texts in response to simple prompts, suggesting it could enhance or even replace humans in tasks ranging from copy-editing to full-blown journalism. Much of this will inevitably prove true, as the current industrial revolution moves forward with the same implacable creative destruction as its predecessors, with all the good and the bad that comes with it.
At the same time, others are seriously concerned about the uncontrolled side-effects of its mass adoption, starting with its use for cheating in educational systems and plagiarism, and extending to more dangerous abuses that span from scams and criminal activity to all-out social manipulation. These things will inevitably occur as well. Yet, as in all previous eras of technological disruption, the uses of these technologies ultimately depend on us, and they will probably bring more benefit than harm, even if the risk of the robot apocalypse is closer than before.
While artificial intelligence has been around for decades and the study of when robots will outsmart humans can be traced back at least to Alan Turing in the 1950s, the concept has entered mainstream jargon only recently. Companies are rushing to include it in their promotional materials as investors go crazy over any project that boasts the use of ‘AI’ and machine learning, a related concept. AI is much more pervasive than most people realize, particularly on the Internet, where it is widespread in recommendation systems and in spam and fraud prevention, and is a key component behind several of the most common applications developed by Google.
With the arrival of ChatGPT, a practical use of AI has been at everyone’s disposal for free since last November. Since then, it’s passed 100 million users, becoming the fastest-growing consumer application ever. As users begin to play around with the application, they marvel at how eloquently it responds to seemingly tough questions, at the depth of its knowledge, and at its endless potential for any task involving text. Even though it has built-in safeguards to avoid generating controversial texts on topics such as hate speech, racism, and even religion and politics, it’s relatively easy to circumvent those protections and get the robot to exhibit dangerous underlying biases. It’s not hard to imagine ways to apply this and other similarly advanced technologies for malicious ends – and it’s just as easy to imagine myriad productive ways to put such systems into play. As usual, ultimately it’s not about the technology but about how we put it to work as individuals, groups and society as a whole.
Asking for ChatGPT and other innovative technologies with potentially harmful uses to be stopped is nothing more than the useless call of modern-age Luddites. Computers will continue to get more sophisticated at the pace of Moore’s Law, and quantum computing is already a reality. Much as Albert Einstein’s advances in physics allowed for the development of the nuclear bomb, AI, quantum computing and the rest will enhance the capacities of bad actors, making it all that much harder for the “authorities” to try and stop them, as criminals are quicker to adopt new technologies and agile at putting them into play. Eventually the system will find a balance through self-regulation, community actors seeking to prevent harm, the private sector and, finally, governments putting legislation into place.
That being said, there are several short-term issues that will begin to come into play, part of a larger narrative connected to the impact of digital disruption on the information ecosystem. For years now the quality of socio-political discourse has been in decline, in great part due to a massive change in the way information is distributed and paid for. While in the past there were high barriers to entry for anyone seeking to distribute information massively, the emergence of the Internet turned the tables on the old gatekeepers, shifting that power from the producers to the aggregators. Thus, publishers, broadcasters, and journalists saw the value of their content determined by major technology platforms with massive reach – platforms that were “free” for their users, monetized immensely, and didn’t share those profits with the content creators. That created a rift between the Googles and Facebooks of the world and news publishers such as The New York Times.
With the emergence of ChatGPT and the definitive eruption of AI onto the scene, the potential to rely on these technologies to increase productivity on a major scale is already here. Companies have already been using them to produce news articles and other forms of content distributed via journalistic or pseudo-journalistic platforms, many of them rife with plagiarism and falsehoods, and often attaining high rankings in Google search results. Newsrooms will further reduce their staff and turn to AI-generated content for the more monotonous work, including sports results, weather, and market information, and it will quickly make its way into work “higher up the informational hierarchy ladder,” until it becomes difficult to imagine any piece being 100 percent free of robot interference. Again, there are many good uses of AI for journalists, but the temptation will be there and many will eat the forbidden fruit.
Furthermore, the ease with which this technology allows a malicious actor to create seemingly real news articles with the sole purpose of manipulating the population will increase the spread of disinformation. Coupled with other emerging technologies, including AI-powered image generators and deepfakes, players seeking to manipulate public opinion will be armed with an arsenal of weapons that is difficult to counteract, particularly in a society where journalism is in decline, as is trust in institutions in general. It’s easy to imagine how AI could become yet another element driving polarization.
The challenges AI poses to the information ecosystem, as well as the opportunities, are there for the taking. Hopefully journalists will quickly embrace these new tools rather than merely denouncing them.
This piece was originally published in the Buenos Aires Times, Argentina’s only English-language newspaper.