Joshua Browder, the CEO of DoNotPay, a company that automates administrative chores including disputing parking fines and requesting compensation from airlines, this week released a video of a chatbot negotiating down the price of internet service on a customer’s behalf. The negotiator-bot was built on the AI technology that powers ChatGPT. It complains about poor internet service and parries the points made by a Comcast agent in an online chat, successfully negotiating a discount worth $120 annually.

DoNotPay used GPT-3, the language model behind ChatGPT, which OpenAI makes available to programmers as a commercial service. The company customized GPT-3 by training it on examples of successful negotiations as well as relevant legal information, Browder says. He hopes to automate a lot more than just talking to Comcast, including negotiating with health insurers. “If we can save the consumer $5,000 on their medical bill, that’s real value,” Browder says.

ChatGPT is just the latest, most compelling implementation of a new line of language-adept AI programs created using huge quantities of text scooped from the web, scraped from books, and slurped from other sources. Algorithms that have digested that training material can mimic human writing and answer questions by extracting useful information from it. But because they operate on text using statistical pattern matching rather than an understanding of the world, they are prone to generating fluent untruths.

A number of fluent conversational agents based on this approach to language AI have popped up lately. In May 2021, Google showed off an advanced chatbot under development called LaMDA and touted it as the future of search. In June 2022, an engineer at the company was suspended after bizarrely claiming that the program had shown signs of sentience. Startups are working on similar bots for tasks such as providing entertainment or acting as personal assistants.
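Browder doesn’t detail DoNotPay’s setup, but OpenAI’s GPT-3 fine-tuning workflow of that era consumed JSONL files of prompt/completion pairs. A minimal sketch of preparing such training data, with entirely hypothetical example dialogues and file names:

```python
import json

# Hypothetical "successful negotiation" transcripts, reduced to the
# prompt/completion pairs that OpenAI's fine-tuning format expected.
examples = [
    {
        "prompt": "Customer: My internet has dropped out three times this week. Agent:",
        "completion": " I'm sorry to hear that. I can offer a $10/month loyalty discount.",
    },
    {
        "prompt": "Customer: A competitor offers the same speed for $20 less. Agent:",
        "completion": " Let me check what retention offers are available on your account.",
    },
]

def write_jsonl(records, path):
    """Write one JSON object per line, the layout fine-tuning jobs consume."""
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

write_jsonl(examples, "negotiations.jsonl")

# The file would then be uploaded and a fine-tune started, e.g. with the
# OpenAI CLI of the time:
#   openai api fine_tunes.create -t negotiations.jsonl -m davinci
```

The interesting work is in curating the examples, not the plumbing: the model picks up the negotiating register from the completions it is trained to produce.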
Browder of DoNotPay is not the only person who sees ChatGPT and the technology behind it as a way to automate persuasion. One doctor posted a video on Twitter showing how the bot might write a letter to help convince an insurer to pay for a certain procedure, even citing scientific literature, albeit with dubious accuracy.

Longer term, large companies may adopt the technology and create chatbots designed to handle customer inquiries and complaints—or to sell them new products. Browder says he is already in an “arms race” with companies that use automated tools that try to foil his services. He expects that to now intensify but claims DoNotPay will be able to stay ahead. “I think the future of this is where bots just talk to each other to get the optimal outcome,” Browder says.

Jonas Kaiser, an assistant professor at Suffolk University in Boston who studies online misinformation and algorithmic recommendations, says the cost of creating large language models—often tens of millions of dollars—means that big companies may have an edge. “Companies can and presumably will train the language model on a certain desired outcome—for example a customer dropping their complaint or signing a new contract,” he says.

Some businesses are already using AI language models to help salespeople hone their pitches. Eilon Reshef, cofounder and chief product officer at Gong, a company that uses AI to optimize sales, sees lots of potential in ChatGPT. Gong uses AI to analyze the text of sales pitches used on calls and in writing and to provide feedback to salespeople. Reshef says that the propensity of language generators to fabricate means that a person should still supervise the technology and that systems that invent too freely won’t be trusted by salespeople. But he says a tool like ChatGPT could be trained with knowledge of a particular company or person to help improve a pitch.
“If the AI has context around who you’re communicating with and why, it could help you generate an email,” Reshef says.

That vision sees language software helping humans in the workplace, but ChatGPT has sparked speculation about how it might displace people from certain kinds of office work. David Autor, an economist at MIT who studies the impact of AI on labor, says it’s too early to say whether this new generation of AI technology will augment human work or replace it. But he sees plenty of potential for disruption both in workplaces, through commercial adaptations of ChatGPT-like systems, and in wider society, through malicious uses. “It’s going to wreak all sorts of havoc,” Autor says. “The opportunities for scams or fraud or gaming systems are just amazing.”