The writer is a contributing columnist, based in Chicago
The label “neurotic” isn’t normally viewed as a compliment. But when University of Chicago researchers earlier this year tested how people reacted to robots posing as restaurant greeters, they found that people liked a dash of neuroticism in their artificial intelligence, saying it made the robot more “human-like”.
These days, however, there is growing controversy over just how “human” our AI helpers should pretend to be, and what personalities, if any, they should be given. Critics argue that humanlike emotional attributes can trick people into treating them less like tools and more like friends or therapists, with sometimes tragic consequences.