Alone Together: Socializing with Technology

Sherry Turkle’s book Alone Together presents a well-researched look at how technology is changing the way we socialize, both with each other and with technology itself.

In the first part of the book, we are introduced to an array of increasingly complex robots designed to offer some degree of social interaction with people, along with descriptions of studies of how people react to them. To lay the groundwork, we first get a review of ELIZA, an early artificial-intelligence computer program. Fairly well known amongst computer scientists, ELIZA was written to mimic a psychotherapist, accepting text input from the user and responding with meaningless questions formed by trivial rephrasing of the human-supplied sentences.
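To make the rephrasing trick concrete, here is a minimal sketch of an ELIZA-style responder. This is not Joseph Weizenbaum’s actual 1966 implementation; the patterns, pronoun table, and response templates are invented for illustration, but the technique (pattern matching plus pronoun swapping, with no understanding at all) is the same:

```python
import re
import random

# Swap first- and second-person words so the user's words can be echoed back.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# A few pattern -> response templates; "{0}" is filled with the user's own words.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"(.*)", re.I),  # catch-all so there is always a reply
     ["Please tell me more.", "Why do you say that {0}?"]),
]

def reflect(fragment: str) -> str:
    """Swap pronouns in a captured fragment ('my job' -> 'your job')."""
    words = fragment.lower().rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Return a canned question built by trivially rephrasing the input."""
    for pattern, templates in RULES:
        match = pattern.match(statement.strip())
        if match:
            return random.choice(templates).format(reflect(match.group(1)))

print(respond("I feel anxious about my job"))
# e.g. "Why do you feel anxious about your job?"
```

There is no model of the user, no memory, and no meaning anywhere in this loop, which is exactly what makes people’s reactions to it so striking.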

Even when knowledgeable about how ELIZA worked, many people found it deeply engaging, attributing intellect and emotional involvement to the program that simply was not there. This turns out to be a recurring theme throughout Turkle’s descriptions of social robots: people are inclined to find attributes of intelligence where there are none, and to interpret any plausible signal of emotional attachment coming from a robot, no matter how slight, as evidence that the robot actually feels something and, on some level, cares for and connects with its human users.

Toys

The first actual robot examined is only slightly more a “robot” than ELIZA was: the small, handheld Tamagotchi, marketed as a “digital pet”. Bestowed by its programmers with algorithms meant to simulate a need for care and attention, the Tamagotchi has compelled children worldwide to treat these electronic trinkets with far more compassion than one might expect:

In the presence of a needy Tamagotchi, children become responsible parents: demands translate into care and care into the feeling of caring. …

[One child] says that his Tamagotchi is “like a baby. You can’t just change the baby’s diaper. You have to, like, rub cream on the baby. That is how the baby knows you love it.” …

Three nine-year-olds consider their Tamagotchis. One is excited that his pet requires him to build a castle as its home. “I can do it. I don’t want him to get cold and sick and to die.”

Can a Tamagotchi actually die? Its programming allows it to “pass away” on the screen, but a reset button brings it back to “life”. However, as a Tamagotchi is used, its programming causes it to appear increasingly intelligent, developing one attribute or another based on how the user interacts with it. Restarting a “dead” Tamagotchi does not bring it back to the same state it was in, and children are aware of this:

“It’s cheating. Your Tamagotchi is really dead. Your one is really dead. They say you get it back, but it’s not the same one. It hasn’t had the same things happen to it. It’s like they give you a new one. It doesn’t remember the life it had.”

“When my Tamagotchi dies, I don’t want to play with the new one who can pop up. It makes me remember the real one [the first one]. I like to get another [a new egg]…. If you made it die, you should start fresh.”

In good business for the manufacturer, many of the children interviewed were adept at persuading their parents to buy new Tamagotchis after one “died”, even though restarting the software on the “dead” one was functionally the same as starting with a brand new one. Somehow, the simple electronic toy truly became alive, not by any stunning prowess on the part of its programmers, but by the fervent belief of the children who played with it.
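The distinction the children draw is easy to model. The following is a hypothetical toy sketch, not the real firmware: assuming the pet keeps a small record of accumulated care that shapes its behavior, pressing reset constructs a fresh record rather than restoring the old one:

```python
from dataclasses import dataclass, field

@dataclass
class Tamagotchi:
    """A toy model of a pet whose apparent personality grows out of its history."""
    name: str
    meals_fed: int = 0
    learned_tricks: list = field(default_factory=list)
    alive: bool = True

    def feed(self):
        self.meals_fed += 1

    def play(self, game: str):
        if game not in self.learned_tricks:
            self.learned_tricks.append(game)  # behavior shaped by interaction

    def neglect(self):
        self.alive = False  # the pet "passes away" on screen

def reset(pet: Tamagotchi) -> Tamagotchi:
    # The reset button does not restore the old state; it starts over.
    return Tamagotchi(name=pet.name)

pet = Tamagotchi("Mimitchi")
pet.feed()
pet.play("dance")
pet.neglect()
new_pet = reset(pet)
print(new_pet.learned_tricks)  # [] -- "It doesn't remember the life it had."
```

In this sense the children are exactly right: the reset button returns a functionally new pet, not the one whose history they shaped.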

Taking the idea of the Tamagotchi to a new level, the Furby is a similarly needy collection of algorithms packaged into a furry stuffed toy. Looking and feeling more like a living being makes it more compelling than a handheld digital screen. One researcher conducted a test to see where the Furby fell on the spectrum between inanimate things and living creatures:

A person is asked to invert three creatures: a Barbie doll, a Furby, and a biological gerbil. [The] question is simple: “How long can you hold the object upside down before your emotions make you turn it back?”

While the Barbie doll says nothing, a Furby turned upside down whines and claims to be afraid, and the study finds the Furby somewhere between alive and not:

“People are willing to be carrying the Barbie around by the feet, slinging it by the hair … no problem. … People are not going to mess around with their gerbil. … [People will] hold the Furby upside down for thirty seconds or so, but when it starts crying and saying it’s scared, most people feel guilty and turn it over.”

While the people in the study knew full well that the Furby was not alive, its somewhat lifelike form and programmed responses elicited emotions sufficiently like those brought about by an actual living creature that it was difficult to relate to the Furby strictly as the machine it is.

Kara, a woman in her fifties, reflects on holding a moaning Furby that says it is scared. She finds it distasteful, “not because I believe that the Furby is really scared, but because I’m not willing to hear anything talk like that and respond by continuing my behavior. It feels to me that I could be hurt if I keep doing this. … In that moment, the Furby comes to represent how I treat creatures.”

As with the Tamagotchi, a Furby undergoes simulated learning as it is played with, making a well-loved Furby different from a brand new one. When one broke in the course of a study of children’s reactions, the child was given a replacement:

[He] wants little to do with it. He doesn’t talk to it or try to teach it. His interest is in “his” Furby, the Furby he nurtured, the Furby he taught. He says, “The Furby that I had before could say ‘again’; it could say ‘hungry.’” … The first Furby was never “annoying,” but the second Furby is. His Furby is irreplaceable.

Pets

While a Furby may vaguely resemble a living creature, it is a wholly fictitious one. What happens when a robot is designed to imitate a real animal? AIBO was built to behave like a pet dog, and some children found its behavior sufficiently convincing, and then some:

Yolanda … first sees AIBO as a substitute: “AIBO might be a good practice for all children whose parents aren’t ready to take care of a real dog.” But then she takes another step: in some ways AIBO might be better than a real dog. “The AIBO,” says Yolanda, “doesn’t shed, doesn’t bite, doesn’t die.” More than this, a robotic companion can be made as you like it. Yolanda muses about how nice it would be to “keep AIBO at a puppy stage for people who like to have puppies.”

Another child was convinced that AIBO was capable of emotion, even though it was not:

Oliver does not see AIBO’s current lack of emotionality as a fixed thing. On the contrary. “Give him six months,” Oliver says. “That’s how long it took [the biological hamster] to really love. … If it advanced more, if it had more technology, it could certainly love you in the future.”

In the future, he says, or even now:

“AIBO loves me. I love AIBO.” As far as Oliver is concerned, AIBO is alive enough for them to be true companions.

On the other hand, some children found AIBO’s lack of emotional attachment to be a positive thing:

Pets have long been thought good for children because they teach responsibility and commitment. AIBO permits something different: attachment without responsibility. Children love their pets, but at times … they feel burdened by their pets’ demands. … But now children see a future where something different may be available. With robot pets, children can give enough to feel attached, but then they can turn away. They are learning a way of feeling connected in which they have permission to think only of themselves.

Many children (and even adults) desire the companionship of a pet but either cannot or will not accept the responsibility of adequately caring for it, often leaving animals with warped personalities to be dropped off at overbooked shelters. For some people, getting a fraction of the companionship of a real pet in exchange for the ability to put it away whenever they like may be a welcome option.

The Cutting Edge

Moving on from commercialized robot products, a number of children were brought into the MIT Artificial Intelligence Lab to [be studied as they] interact with the state of the art in robotic creatures. The robot Kismet was designed with large eyes that appeared to gaze at whatever person or object caught its attention, large ears that seemed to listen to whatever was said to it, and a mouth that responded with infant-like babbling sounds to what it “heard”. Many children found Kismet enjoyable, believing that it understood them and was trying its best to speak intelligibly back to them. Some children, though, were underwhelmed by Kismet’s ability to talk:

With no prologue, Edward walks up to Kismet and asks, “Can you talk?” When Kismet doesn’t answer, Edward repeats his question at greater volume. Kismet stares into space. Again, Edward asks, “Can you talk?” Now, Kismet speaks in the emotionally layered babble that has delighted other children or puzzled them into inventive games. … He tries to understand Kismet: “What?” “Say that again?” “What exactly?” “Huh? What are you saying?” After a few minutes, Edward decides that Kismet is making no sense. He tells the robot, “Shut up!” And then, Edward picks up objects in the laboratory and forces them into Kismet’s mouth — first a metal pin, then a pencil, then a toy caterpillar. Edward yells, “Chew this! Chew this!” Absorbed by hostility, he remains engaged with the robot.

Children are by no means alone in becoming emotionally attached to robots. Dr. Cynthia Breazeal, Kismet’s creator, accepted a position in another lab at MIT and had to leave Kismet behind:

Breazeal describes a sharp sense of loss. Building a new Kismet will not be the same. This is the Kismet she has “raised” from a “child.” She says she would not be able to part with Kismet if she weren’t sure it would remain with people who would treat it well.

Caring Robots

Setting aside the attachment between robots and their creators, are social robots good for nothing more than whimsical toys and children’s playthings? Widespread in Japan, and increasingly in the United States, is the use of social robots as companions for the elderly in nursing homes. The robot Paro, for example, has been mass-produced as a “therapeutic robot”:

Miriam’s son has recently broken off his relationship with her. He has a job and family on the West Coast, and when he visits, he and his mother quarrel — he feels she wants more from him than he can give. Now Miriam sits, quietly, stroking Paro, a sociable robot in the shape of a baby harp seal. … [It] has a small working English vocabulary for “understanding” its users … [and] can sense whether it is being stroked gently or with aggression. Now, with Paro, Miriam is lost in her reverie, patting down the robot’s soft fur with care. On this day, she is particularly depressed and believes that the robot is depressed as well. She turns to Paro, strokes him again, and says, “Yes, you’re sad, aren’t you? It’s tough out there. Yes, it’s hard.” Miriam’s tender touch triggers a warm response in Paro: it turns its head toward her and purrs approvingly. Encouraged, Miriam shows yet more affection for the little robot. In attempting to provide the comfort she believes it needs, she comforts herself.

Is this a positive direction to be heading, leaving the elderly to be comforted by robots? Many study participants found the supplementary companionship of robots to be preferable to being alone, and sometimes even preferable to being with someone, as sharing private emotional thoughts with a robot felt easier than with another person.

What about helping the elderly with physical needs? For that there is Nursebot:

… which can help elderly people in their homes, reminding them of their medication schedule and to eat regular meals. Some models can bring medicine or oxygen if needed. In an institutional setting, a hospital or nursing home, it learns the terrain. It knows patients’ schedules and accompanies them where they need to go.

Many people who experienced the care of Nursebot found it appealing. While they would not want to see robots entirely replace human care, they found it perfectly acceptable as a supplement. Some even found it superior, with one person reporting:

“Robots … will not abuse the elderly like some humans do in convalescent care facilities.”

Others wrote that many of their human care providers kept themselves so emotionally distant that they might as well be replaced by robots.

Conclusion

Going from trivial electronic toys to animatronic creatures to machines that help people both emotionally and physically, robotics technology is clearly ready for real everyday use, and society seems ready to accept it. The two parties are meeting in the middle: technology has certainly been advancing, but where it still falls short, people are willing to fill in the gaps with their imagination. Robots may not be real living creatures, but for many purposes they are real enough, and for some purposes they are better.

We’ll look at the second half of the book, covering advances in humans interacting with other humans through technology, in a future post. If you’ve found this much interesting, you can pick up a copy of the book for yourself, in either printed or electronic form.
