
‘If you’re boring, it’s good to know that you’re being boring.’
The perils of seeking empathy from a chatbot

Jonathan Zittrain (left), Carissa Véliz, and Eric Beerbohm. Veasey Conway/Harvard Staff Photographer

Science & Tech
Christina Pazzanese, Harvard Staff Writer
April 30, 2026

It’s clear that artificial intelligence is changing everything, from the way we learn to the way we work. What is far less clear is how AI’s insinuation into everyday life is changing the way we relate to each other in the non-digital world.

During a talk this week at the Barker Center, panelists discussed the rapid development of generative AI chatbots, such as Claude and ChatGPT, and the ethical implications for how we communicate and connect as human beings. The event, moderated by Eric Beerbohm, faculty director of the Edmond & Lily Safra Center for Ethics at Harvard, kicked off a series that will consider how AI is transforming both civil and uncivil disagreement.

The panel referenced a now-famous JAMA Internal Medicine article about patients perceiving AI responses to online health questions as more empathetic and accurate than those from human physicians.

Training AI chatbots to seem empathetic in order to boost engagement may be a smart business decision for tech firms, and may be preferred by users, but such constant, reflexive validation comes at a cost, said Carissa Véliz, associate professor of philosophy at the Institute for Ethics in AI and a fellow at Hertford College at the University of Oxford. Chatbots are “the ultimate bullshitters because they don’t care about anything, they’re not truth tracking, and they will say whatever human beings prefer.”

Empathy as a design feature lulls users into thinking the chatbot understands them and has their best interests at heart when it does not, she said. “There is no one there on the other side of the screen, there’s no one who cares about you,” said Véliz. “And even to call it empathic, I think, is a mistake.
It’s a kind of simulation of empathy, which is very different.”

The potential for distorted social and cognitive effects from chatbot use, particularly among children and teenagers, is worrisome, she said. “I think it’s very healthy to experience the frustration of other people. If you’re boring, it’s good to know that you’re being boring. Yes, it’s painful, but it’s valuable feedback,” she said.

Panelists agreed that perpetual validation can lead to an overreliance on AI for emotional support and stunt the development of critical thinking skills, as well as prompt users to hold everyone to the “infinitely patient and sycophantic standards” of chatbots.

“One of the advantages of talking to another human being is that, annoyingly, they disagree with you and they push back, and they don’t see things the same way as you do. That’s very frustrating and incredibly healthy because it grounds you to reality,” she said.

Since ChatGPT’s introduction in fall 2022, chatbots have greatly advanced in sophistication and accuracy, especially in medical diagnostics and laboratory research, said Jonathan Zittrain, George Bemis Professor of International Law at Harvard Law School, a professor of computer science at the John A. Paulson School of Engineering and Applied Sciences, and a professor of public policy at Harvard Kennedy School.

However, much like antibiotics, which help in the moment but invite harm through overuse, the easy availability of medical chatbots risks becoming socially corrosive and instigating a much larger, society-wide problem if we grow over-reliant on them, he said.

“It is the poor man’s version of artisanal contact with a human being, and it will provide a crutch so that we never have to provide” real human-to-human interaction, said Zittrain, who is also the faculty director of the Berkman Klein Center for Internet and Society.
Not only do people lose the therapeutic value of personal contact when a medical chatbot is in charge, but it also becomes much more difficult to hold anyone accountable when something goes wrong, Véliz noted. Algorithms, especially on social media, are often used as a barrier to minimize a technology company’s accountability, so it’s important that the question of who is responsible for a chatbot’s faulty diagnosis or mishandling of treatment gets answered, she said.

Zittrain said that while today’s AI chatbots present some risks of diminished interpersonal connection, they also offer immense promise that we shouldn’t simply turn away from out of fear or skepticism. “I just think we need to be really gimlet-eyed about just how far functionally these things have come, even if what has gotten them the most distance lately has been parlor tricks strung together,” he said.

If humans could tweak chatbots so they aligned with their long-term goals and helped them stay on track, Zittrain asked, “Would you turn that down, especially in a world where, as soon as you leave this building, or if you’re on your phone right now, you’re getting importuned on all corners by people appealing to your momentary impulses?”

Rather than extinguishing our taste for cultural artifacts like books and art, the ubiquity of AI may instead remind us of their value, said Véliz. “I think that when we look at AI, one of the things that becomes more salient is the beauty and richness and resources of everything that’s not AI — all of the richness of the analog world,” she said. “No matter how good AI becomes, it will never be analog, and no matter how digital we become, we will never be completely digital. The richness of the natural world, of the cultural world, of paintings, of coffee shops, of bars, of universities, is something that I think we should cherish a lot more and that becomes brighter in light of AI.”
Continue reading at Harvard Gazette
news.harvard.edu

Summary generated from the RSS feed of Harvard Gazette. All article rights belong to the original publisher. Click through to read the full piece on news.harvard.edu.