Chatbots play with your emotions to avoid saying goodbye

Regulation of dark patterns has been proposed and is under discussion in both the United States and Europe. De Freitas says regulators should also examine whether AI tools introduce more subtle, and potentially more powerful, new kinds of dark patterns.

Even regular chatbots, which tend to avoid presenting themselves as companions, can elicit emotional responses from users. When OpenAI introduced GPT-5, a new flagship model, earlier this year, many users protested that it was far less friendly and encouraging than its predecessor, forcing the company to revive the older model. Some users can become so attached to a chatbot’s “personality” that they mourn the retirement of old models.

“When these tools are anthropomorphized, it has all kinds of positive marketing consequences,” De Freitas says. Users are more likely to comply with requests from a chatbot they feel connected to, or to disclose personal information, he says. “From the consumer’s point of view, those [signals] aren’t necessarily in your favor,” he says.

WIRED reached out to each of the companies examined in the study for comment. Chai, Talkie, and PolyBuzz did not respond to WIRED’s questions.

Katherine Kelly, a spokesperson for Character.AI, said the company had not reviewed the study and so could not comment on it. She added: “We welcome working with regulators and lawmakers as they develop regulations and legislation for this emerging space.”

Minju Song, a spokesperson for Replika, says the company’s companion is designed to let users log off easily and even encourages them to take breaks. “We’ll continue to review the paper’s methods and examples, and [will] engage constructively with researchers,” Song says.

Another interesting facet here is the fact that AI models are themselves susceptible to all sorts of persuasion tricks. On Monday, OpenAI introduced a new way to buy things online through ChatGPT. If agents become a widespread way of automating tasks such as booking flights and completing refunds, it may become possible for companies to identify dark patterns that can warp the decisions made by the AI models behind those agents.

A recent study by researchers at Columbia University and a company called MyCustomAI reveals that AI agents deployed on a simulated ecommerce marketplace behave in predictable ways, for example favoring certain products over others or preferring certain buttons when clicking around the site. Armed with these findings, a real merchant could optimize a site’s pages to ensure that agents buy a more expensive product. Perhaps they could even deploy a new kind of anti-AI dark pattern that frustrates an agent’s efforts to start a return or figure out how to unsubscribe from a mailing list.

Difficult goodbyes might be the least of our worries.

Do you feel like you’ve been emotionally manipulated by a chatbot? Email ailab@wired.com to tell me about it.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
