Chat is Law

The future of customer service


From the cold wasteland north of the border:

  • Jake’s grandmother died

  • Jake tries to book a flight on Air Canada with a bereavement fare

  • Asks Air Canada chatbot how

  • Chatbot says, “Go ahead and book a normal fare, then request a refund within 90 days.”

  • This contradicts Air Canada’s actual policy, listed on its site: “No bereavement refunds after booking”

  • Jake’s refund request gets denied, and he gets a $200 coupon offer

  • He files suit with the small claims tribunal

  • Air Canada claims “the chatbot is a separate legal entity that is responsible for its own actions,” and therefore Air Canada should not be liable

  • Tribunal rejects Air Canada’s defense as ludicrous and rules in favor of Jake

And this from Matt Levine:

The funny thing is that the chatbot is more human than Air Canada. Air Canada is a corporation, an emergent entity that is made up of people but that does things that people, left to themselves, would not do. The chatbot is a language model; it is in the business of saying the sorts of things that people plausibly might say. If you just woke up one day representing Air Canada in a customer-service chat, and the customer said “my grandmother died, can I book a full-fare flight and then request the bereavement fare later,” you would probably say “yes, I’m sorry for your loss, I’m sure I can take care of that for you.” Because you are a person! The chatbot is decent at predicting what people would do, and it accurately gave that answer. But that’s not Air Canada’s answer, because Air Canada is not a person.

I think sometimes about Ted Chiang’s essay arguing that popular fears about runaway artificial intelligence are really about modern corporate capitalism, that modern corporations actually do what we worry superintelligent AIs might one day do. “Consider,” writes Chiang: “Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share?” Here, the AI chatbot was benevolent and human; the corporation was not.

Matt Levine, Bloomberg

The problem, of course, is that what corporations really want is to reduce the number of bereavement payouts they need to make. The chatbot dodge is a tactic, not a symptom.

From the corporation’s perspective, improving access to customer service agents who successfully award bereavement payments is a failure: it hurts the bottom line. What the corporation wants is a chatbot that skillfully diverts the customer into paying full fare while leaving them happy with the choice.

In this sense, the chatbot was a failure for the corporation, but a success as an empathetic human. The future of customer service, I suspect, is an inversion in which the corporation accedes to the decision-making of the empathetic customer service bot. No more disavowing the bot’s promises, but rather: “This is the right thing to do, and the corporation shall do it, and I, CSBot 118, will enter into a contract right now, and it is within my powers to commit the corporation to doing the right thing.”

Current systems are built on inflexible heuristics because decision-making is spread across many human minds. The rules exist as interfaces and guarantees between those minds. But if you were the CEO, sitting with your board, with access to all the systems of the firm, and a call came in for a bereavement ticket decision, you could make that decision on the fly, knowing in real time how it would affect the profitability and reputation of the company. There is no reason a CEO-bot couldn’t do the same.

Wouldn’t that be an amazing outcome?
