The Case Against A Paywalled Human Touch

News Room

Phone menus, also known as Interactive Voice Response (IVR) systems, can be a blessing when all goes well, but extremely frustrating when all a consumer wants is a human representative. How would you react if you got lost in a maze of automated options, only to hear that you will be charged a significant fee for asking to be transferred to a human representative?

Customer service is a critical aspect of most businesses, but doing it well can prove expensive. With the help of technology, many companies, including financial institutions, have automated their customer support processes in recent years. Yet automated customer service is not merely a technological innovation but also a social change: it reshapes the behaviors and expectations of both consumers and businesses. Government agencies and other organizations have also started adopting variations of such menus in their operations. And why not?

Automating customer service can reduce operational expenses, offer services around the clock, and free employees for tasks that require human judgment – all while improving customer satisfaction. However, this automation brings disadvantages as well, a main one being the growing trend of offering a human alternative only to consumers who agree to be charged additional fees for that privilege. While this trend is not yet the norm, if policymakers do not act now to stop it, consumers will soon find themselves facing a choice: free automated service or pricey human interaction.

Only a few years ago, the problem we faced was consumers being charged for settling or paying their bills over the phone while speaking with representatives – mostly by telecom providers. While no rigorous empirical data or research is yet available on charging consumers to speak with humans in the era of AI and IVR, some telecom and credit card providers have started charging an “Agent’s Assistance Fee” to those seeking access to human representatives. This includes paying a fee for a bot to hand the conversation over to a human representative, a payments industry trend that started a few years ago. (For additional information or questions, please contact the authors.) The fee is typically imposed after an automated system warns consumers that insisting on being transferred to a human representative will result in additional charges. For example, the authors were charged $9 for requesting to speak with a representative.

The right to interact with a human is crucially important in the era of artificial intelligence (AI) and big data algorithms. In the last decade, much has been written about automated decisions that lead to discriminatory outcomes, unjustified denials of services, and inaccurate predictions, even when based on accurate data. The case for a meaningful human response – for instance, a human review of an appeal against an automated decision – is easily understood in connection with profiling, access to social welfare, and employment eligibility (as the European Union’s GDPR mandates), but it is not always recognized in customer service. Recent business practices demonstrate that access to a human representative could soon be placed behind a paywall, making this service a privilege limited exclusively to customers who have paid or subscribed for it. This is wrong. Charging consumers who, for various reasons, may want to interact with a human representative is a trend that must be stopped.

Apart from potential legal ramifications and risks to brand reputation, creating obstacles that prevent customers from reaching a human representative is deeply reprehensible. First, it discriminates against financially underprivileged consumers who may want human assistance but cannot afford to pay the extra fees. Second, it can produce alienation, frustration, anxiety, and dissatisfaction when the automated service fails to resolve one’s complaint or issue – whether because the matter is too complex, nuanced, or uncommon, or simply because the consumer cannot find the right option in the labyrinth of automated menus. Third, for individuals with speech, language, or other disabilities and impairments, as well as those with foreign accents, communicating through automated customer service systems or speech recognition interfaces can present significant challenges, often making it an arduous and frustrating experience. Fourth, certain populations, such as the elderly, might find it particularly difficult to understand, follow, or interact with an automated service. Fifth, automated customer support dehumanizes those facing issues that require human empathy and understanding. Lastly, relying on automated and IVR systems for customer service, particularly in areas that profoundly impact one’s life, such as health, legal, or financial matters, is both unethical and irresponsible.

Every human, as a consumer, customer, or individual, should have the option of accessing human communication. It is essential that human support, interaction, and assistance remain accessible without the imposition of barriers or fees.

In recent years, the discourse surrounding humans and AI has largely focused on the importance of keeping a human in the loop. The concept refers to human involvement and oversight in the decision-making processes of AI systems. By keeping humans in the loop, we maintain a level of human control and oversight over AI systems to ensure their responsible and beneficial use. Yet despite this focus, little public attention has been directed towards the need to mandate a human alternative or representative in customer service.

In today’s era, ethical behavior and corporate social responsibility dictate that organizations and corporations offer a human alternative to those who prefer it. Furthermore, it must become a legal requirement. Environmental, Social, and Governance (ESG) principles, which in recent years have drawn widespread attention from corporate entities as well as government agencies such as the Securities and Exchange Commission, the Federal Trade Commission, and the Consumer Financial Protection Bureau, must therefore include a commitment to ensuring consumers can easily reach human representatives through various channels, such as phone, email, or chat.

This human interaction principle is about value, in both its monetary and ideological senses. The remarkable technological advancement of AI, exemplified by automated speech recognition and, most recently, ChatGPT, has the potential to revolutionize society by enabling a more human-like interface with technology. It offers efficient information access and personalized experiences, ultimately fostering new opportunities and transforming the way we live, work, and connect with one another. At this crucial point in time, as society starts to grasp the consequences of unleashing the power of automation, billions of dollars are at stake. To establish a social norm that truly puts the human at the center, we must ensure a human remains available in the loop. No company will volunteer to pay extra to offer a human touch – we must have our regulators require it.

This Op-Ed was co-authored with Dr. Ori Freiman who is a Post-Doctoral Fellow at McMaster University’s Digital Society Lab, researching the responsible implementation of emerging technologies.
