Trevis Rothwell's weblog

Almost, but not quite, entirely unlike tea

Keys Under Doormats

07 July 2015

Dovetailing into last week’s musings about the need for computer science education as part of a standard school curriculum, a group of researchers at and around MIT has published a new report on the topic of law enforcement’s need for access to private (personal or corporate) data. While this need may be valid, how to implement it in an acceptable way is not necessarily obvious, and lawmakers need to think through a number of important questions.

One particularly interesting passage, highlighting the value of understanding computer science in the public sphere:

With people’s lives and liberties increasingly online, the question of whether to support law enforcement demands for guaranteed access to private information has a special urgency, and must be evaluated with clarity. From a public policy perspective, there is an argument for giving law enforcement the best possible tools to investigate crime, subject to due process and the rule of law. But a careful scientific analysis of the likely impact of such demands must distinguish what might be desirable from what is technically possible. In this regard, a proposal to regulate encryption and guarantee law enforcement access centrally feels rather like a proposal to require that all airplanes can be controlled from the ground. While this might be desirable in the case of a hijacking or a suicidal pilot, a clear-eyed assessment of how one could design such a capability reveals enormous technical and operational complexity, international scope, large costs, and massive risks — so much so that such proposals, though occasionally made, are not really taken seriously.

We have shown that current law enforcement demands for exceptional access would likely entail very substantial security risks, engineering costs, and collateral damage. If policy-makers believe it is still necessary to consider exceptional access mandates, there are technical, operational, and legal questions that must be answered in detail before legislation is drafted.

Legislators need to understand technical topics related to information security and privacy in order to write and vote on legislation in a rational way. Citizen constituents need to understand these same topics in order to do their part in voicing their opinions to their representatives and in voting them into or out of office.

More: read the report.

Google Photos

29 May 2015

This week Google announced and released their new photo sharing service. I mostly use Flickr, but I had a handful of photo collections in Google Picasa years ago, which got dragged over into Google+ Photos, and have now dutifully arrived in Google Photos.

As an overall interface for viewing photos, Google Photos seems nice, but not particularly better or worse than Flickr. There are options to share photos on Facebook, Twitter, and Google+, but I see no way to get various-sized photos to embed within web pages as I do with Flickr.

I also see no way to tag photos, but this might not be significant, as the facial, object, and location recognition built in to Google Photos is so accurate that it comes across as frightening to this privacy advocate.

Facial recognition in my photo sample set is almost perfect. If the face is looking straight on, or is turned to the side, or is wearing a hat — doesn’t matter. Google Photos can pick out the face. It also correctly identified photograph locations including Boston, Washington D.C., Cedar Rapids, Omaha, Irvine, Joshua Tree National Park, and San Juan Capistrano, seemingly based on photographic content. (My ancient Canon 5D camera doesn’t have a GPS to embed location data, and my even more ancient Canon EOS-3 film camera certainly doesn’t embed location data!)
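(A side note for the technically curious: whether a photo carries embedded location data is easy to check programmatically. Below is a minimal sketch in Python using the Pillow imaging library; the file name is hypothetical. When no GPS tag is present, any place label must be inferred from the image content itself.)

    # Minimal sketch: check a JPEG for embedded GPS metadata using the
    # Pillow library (pip install Pillow). The file name is hypothetical.
    from PIL import Image

    exif = Image.open("photo.jpg").getexif()

    # EXIF tag 34853 is GPSInfo; if it is absent, the photo carries no
    # embedded location data, and any place label must be inferred from
    # the image content itself.
    if exif.get(34853):
        print("GPS metadata found")
    else:
        print("No GPS metadata in this photo")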

Object recognition was nearly as accurate, with a search for “food” including pictures of restaurants, pictures of food on a plate, and pictures of unpicked vegetables growing — though I was amused to see a picture of a live crab in an aquarium counted amongst “food”… not strictly incorrect, but unexpected.

The two main things that I do with photo sharing are to set up a place to store, share, and browse photos, and to embed them into web pages (such as this blog post). Google Photos does a fine job of the first set of tasks, but apparently not so great a job of the second, so I will be sticking with Flickr for the time being.

The content recognition software behind Google Photos is outstanding, but might open a whole new can of worms in terms of reasonably expected privacy. Obviously, anyone sharing a photo in public would not expect privacy of the photo itself, but the fact that so much data can be automatically sucked out of the photo could easily give one pause. And it doesn’t really matter whether your photos are stored on Google Photos or not, as Google can find and analyze photos from Flickr or from any public photo site.

Richard Stallman at UIUC

17 March 2015

Last night I attended Richard Stallman’s lecture at the University of Illinois at Urbana-Champaign, on the subject of “Free Software and Your Freedom”. Dr. Stallman launched the GNU Project and the Free Software Foundation some thirty years ago in order to create and promote free software:

“Free software” means software that respects users’ freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software. Thus, “free software” is a matter of liberty, not price.

I won’t try to summarize everything that was said at the lecture, but some segments stood out to me as particularly interesting:

Proprietary Malware

While free software grants users the freedom to run, copy, distribute, study, change, and improve the software, proprietary software is licensed such that there are restrictions on one or more (possibly all) of those freedoms.

One detriment of proprietary software is that you as a user cannot know for sure what the software is doing. As has been demonstrated in various well-known proprietary applications, the software may well be sending data about you or your usage of the software to its developers without telling you. This, Stallman concludes, makes the application malware. It is spying on its users, and is an attack on their privacy.

Publishing the application as free software, on the other hand, is a defense against such malware. While it is still possible that a free application could spy on its users, it is much less likely, as anyone who uses the software could plausibly investigate what it is doing.

Mobile Phones and Surveillance

Mobile phones — smartphones and otherwise — can be remotely accessed and controlled by service providers, and can be converted into listening devices that transmit all audio they pick up. The only surefire way to prevent this from happening is to remove the batteries. All of the batteries. Some phones, Stallman says, include multiple batteries, one of which is designed to be non-removable. Other phones are designed so that no battery can be removed at all.

Even when not converted into listening devices, mobile phones still, out of functional necessity, track where they go — and thus track where their users go. The mobile service providers can know at all times where their users are located, and can maintain a database of everywhere they have gone in the past.

Thanks to the efforts of NSA whistle-blower Edward Snowden, we know that there is far more surveillance of us today than there ever was in the Soviet Union. (Upon mentioning Snowden, the audience erupted into applause, and Stallman led everyone in three cheers for Snowden: “Hip, hip, hooray!”)

Service as a Software Substitute

What many people call Software as a Service — providing a software application over the internet, typically through a web interface — Stallman calls Service as a Software Substitute (SaaSS). Software that runs on someone else’s server is out of your direct control, so even if it is free software, it doesn’t matter: you cannot change it, study it, and so on. By definition, you are handing your data over to someone else, since your computing is being done on their computer and not yours, so the door is wide open for the SaaSS provider to invade your privacy.

Not all web applications fall under the category of Service as a Software Substitute; it depends on what service they provide. Communications services and collaboration services are not SaaSS, because by their very nature you must be sharing data over the internet. Even if you ran a free local application instead of using a web application, you still would end up sharing the data through whatever servers and routers were between you and those you were communicating with.

How can you tell if a web application is SaaSS or not? Dr. Stallman recommends a thought experiment: imagine that you have the most powerful computer possible. Using that computer alone, can you run the application you want to run? If the answer is yes, then to make that application run remotely from a server would be SaaSS, as that would be a service substituting for software.

For example, no matter how powerful of a computer you had, you could not use that computer by itself as a web search engine. You need to connect to other computers, and use resources on other computers, in order to have the functionality of a web search engine. So a web search engine application is not a software substitute; it is a service.

Valuing Freedom

If all the software in the world were instantly relicensed as free software today, that would not be enough to guarantee ongoing software freedom long-term. People need to understand the value of freedom when it comes to using computer software; otherwise, someone could come along offering some new technology or some improved user experience under a proprietary license, and it would be accepted on its technological merits alone.

Software usage is still relatively young in the world; only for the past several decades has it been part of society at all. But there has never been any serious public debate or discussion about what rights and freedoms software users ought to have. Such matters were decided by proprietary software vendors as they published and sold their products. Is this really what is best for society?

Miscellaneous

  • Reverse-engineering of proprietary hardware drivers and related software would be a huge benefit to free software development today. And developing expertise in reverse engineering would also be a potentially good career idea, as it’s an in-demand skill set that not many people have.
  • If you want to learn how to develop large software applications from scratch, it’s not very helpful to spend time developing small software applications from scratch. Instead, start with contributing small changes to existing large applications. Then move on to contributing large changes. And then move on to building your own.

Conclusion

I’ve read many of Richard Stallman’s articles and listened to recordings of many of his lectures, but I still found attending this lecture in person to be a particularly insightful and educational experience. There is clearly much good that can come from having, using, and promoting free software; and there are clearly social ills and injustices that can come from using proprietary software.

As I sat in the auditorium listening, I started to ponder how arguing for free software might be likened to debates and discussions held in the framing of the United States government. What rights and freedoms and restrictions are best for the people, and for the long-term good of the country? Indeed, just considering how free software can help enforce constitutionally-provided privacy makes a strong case for why it is in the best interest of the people.

More reading is available from the Free Software Foundation and Project GNU.

[Much of what I have written here is based on notes that I quickly jotted down during the lecture, and I may have misunderstood or misinterpreted something. Corrections would be welcome.]

More: a few photos of the University of Illinois campus.

Alone Together: Socializing through Technology

25 January 2015

We’ve already looked at the first half of Alone Together, which focused on how people are increasingly socializing with technology, especially with artificially-intelligent robots. The second half covers how people are increasingly socializing with each other through technology.

Except maybe for those who stay at home all day talking only to robots, it’s obvious that ever-advancing technology has had a huge impact on how we communicate with each other. What might not be obvious is to what extent we are becoming dependent upon technology for communication, and either consciously or subconsciously avoiding traditional in-person conversations, and even telephone conversations, pushing as much as possible into the asynchronous world of electronic messaging.

Many people interviewed delight in constantly receiving messages; every incoming message is something new to look at. Even if the messages in fact interrupt something else they were doing, it doesn’t matter:

“I’m waiting to be interrupted right now,” one says. For him, what I would term “interruption” is the beginning of a connection.

Some not only relish constant communication from others, but rely on it to help shape and determine their own thoughts. When faced with an opportunity to feel upset or sad, sending and receiving text messages with a friend can be used to solidify what exactly is the right emotional response. But there’s no time for a telephone call or an in-person conversation; they feel lost without immediate input over text messages.

Not too many years ago, friends and family routinely chatted with each other over telephone. Now, a telephone call is often viewed as a last resort; emails and text messages are so much more convenient and less intrusive, with a telephone call requiring more personal attention from both parties.

Tara, a fifty-five-year-old lawyer who juggles children, a job, and a new marriage, [says that] “When you ask for a call, the expectation is that you have pumped it up a level. People say to themselves: ‘It’s urgent or she would have sent an email.’” So Tara avoids the telephone. …

Randolph, a forty-six-year-old architect with two jobs, two young children, and a twelve-year-old son from a former marriage … explains, “Now that there is e-mail, people expect that a call will be more complicated. Not about facts. A fuller thing. People expect it to take time — or else you wouldn’t have called. …

A widow of fifty-two grew up on volunteer work and people stopping by for afternoon tea. Now she works full-time as an office manager. Unaccustomed to her new routine, she says she is “somewhat surprised” to find that she has stopped calling her friends. She is content to send e-mails and Facebook messages. She says, “A call feels like an intrusion, as though I would be intruding on my friends … After work — I want to go home, look at some photos from the grandchildren on Facebook, send some e-mails and feel in touch. I’m tired. I’m not ready for people — I mean people in person.”

As convenient as electronic messages may be, people are increasingly consumed by them, giving attention to the incoming notifications on their phone over people that are physically present, regardless of the urgency of the messages. The attention that children receive from parents seems particularly diminished: they may be present physically — pushing the child on a swing, eating dinner at the table, watching a football game on television — but often with one hand firmly grasping a smartphone, and their attention elsewhere.

Audrey complains of her mother’s inattention when she picks her up at school or after sports practice. At these times, Audrey says, her mother is usually focused on her cell phone, either texting or talking to friends. Audrey describes the scene: she comes out of the gym exhausted, carrying heavy gear. Her mother sits in her beaten-up SUV, immersed in her cell, and doesn’t even look up until Audrey opens the car door. Sometimes her mother will make eye contact but remain engrossed with the phone as they begin the drive home. Audrey says, “It gets between us, but it’s hopeless. She’s not going to give it up. Like, it could have been four days since I last spoke to her, then I sit in the car and wait in silence until she’s done.”

Audrey has a fantasy of her mother, waiting for her, expectant, without a phone.

A constant barrage of incoming messages not only has an impact on interacting with other people in the room, but on completing work in the room:

“I’m trying to write,” says a professor of economics. “My article is due. But I’m checking my e-mail every two minutes. And then, the worst is when I change the setting so that I don’t have to check the e-mail. It just comes in with a ‘ping.’ So now I’m like Pavlov’s dog. I’m sitting around, waiting for that ping. I should ignore it. But I go right to it.” …

An art critic with a book deadline took drastic measures: “I went away to a cabin. And I left my cell phone in the car. In the trunk. My idea was that maybe I would check it once a day. I kept walking out of the house to open the trunk and check the phone. I felt like an addict, like the people at work who huddle around the outdoor smoking places they keep on campus, the outdoor ashtray places. I kept going to that trunk.”

But shifting conversation to electronic messages has clear benefits as well. Children growing up and moving away from home are able to keep in frequent contact with their parents, easing the transition for everyone. While placing dozens of phone calls to your parents might seem excessive, sending dozens of text messages feels perfectly normal.

With so many omnipresent means of communication, we never have to be alone. But on the other hand, we may never learn how to be alone: how to think thoughts entirely of our own, how to occupy ourselves without the input from others. Is this a good thing or a bad thing? While younger people are generally enamored with communications technology, some are starting to feel like maybe life before technology had something better to offer:

Hillary is fond of movies but drawn toward “an Amish life minus certain exceptions [these would be the movies] … but I wouldn’t mind if the Internet went away.” She asks, “What could people be doing if they weren’t on the Internet?” She answers her own question: “There’s piano; there’s drawing; there’s all these things people could be creating.”

The important takeaway here may be that communications technology, like technology in general, can be both used and abused. It’s up to us to use it wisely, and not let it overtake our lives. But, as several interviewees reported, technology has a strong pull to it, often stronger than we are capable of resisting. It takes deliberate thought and action to not be drawn too far in.

With lives lived so much online, does the next generation of technology users have any qualms about privacy? It appears that they neither champion nor dismiss online privacy, preferring not to think about it much:

The media has tended to portray today’s young adults as a generation that no longer cares about privacy. I have found something else, something equally disquieting. High school and college students don’t really understand the rules. Are they being watched? Who is watching? Do you have to do something to provoke surveillance, or is it routine? Is surveillance legal? They don’t really understand the terms of service for Facebook or Gmail, the mail service Google provides. They don’t know what protections they are “entitled” to. They don’t know what objections are reasonable or possible. …

There is an upside to vagueness. What you don’t know won’t make you angry. Julia says, “Facebook and MySpace are my life.” If she learned something too upsetting about what, say, Facebook can do with her information, she would have to justify staying on the site. But Julia admits that whatever she finds out, even if her worst fears of surveillance by high school administrators and local police were true, she would not take action. She cannot imagine her life without Facebook.

The book concludes with the thought that purposely keeping some of our most cherished thoughts and conversations in analog form may be in our own best interest; for example, hand-written letters received from dear friends and family seem to hold more emotional value than text messages, and the very act of writing and sending a letter suggests putting more of yourself into the communique than sending an email does.

Whatever your views on communications technology use, taking the time to consider the thoughts and situations chronicled in Alone Together should give you some additional perspective from which to base your opinions. A very enjoyable and thought-provoking read.

Alone Together: Socializing with Technology

17 January 2015

Sherry Turkle’s book Alone Together presents a well-researched look at how technology is changing the way we socialize, both with each other, and with technology itself.

In the first part of the book, we are introduced to an array of increasingly complex robots designed to offer some degree of social interaction with people, along with descriptions of studies of how people react to these robots. Laying the groundwork for these robots, we first get a review of ELIZA, an early artificial intelligence computer program. Fairly well known amongst computer scientists, ELIZA was written to mimic a psychotherapist, accepting text input from the user and responding with meaningless questions formed by trivial rephrasing of the human-supplied sentences.
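The rephrasing trick is simpler than it might sound. Below is a minimal sketch in Python of the general technique (pattern matching plus pronoun swapping), offered as an illustration only, not as Weizenbaum’s actual program:

    import re

    # Minimal sketch of ELIZA-style rephrasing: match a pattern in the
    # user's sentence, swap pronouns, and echo it back as a question.
    # An illustration of the technique, not Weizenbaum's actual program.
    SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    def swap_pronouns(fragment):
        return " ".join(SWAPS.get(word, word) for word in fragment.split())

    RULES = [
        (r"i am (.*)", "Why are you {}?"),
        (r"i feel (.*)", "Why do you feel {}?"),
        (r"(.*)", "Can you say more about {}?"),  # catch-all fallback
    ]

    def respond(sentence):
        text = sentence.lower().strip(".!?")
        for pattern, template in RULES:
            match = re.match(pattern, text)
            if match:
                return template.format(swap_pronouns(match.group(1)))

    print(respond("I am anxious about my project."))
    # prints: Why are you anxious about your project?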

Even when knowledgeable about how ELIZA worked, many people found it deeply engaging, attributing intellect and emotional involvement to the program that simply were not there. This turns out to be a recurring theme through Turkle’s descriptions of social robots: people are inclined to find attributes of intelligence where there are none, and to interpret any plausible signal of emotional attachment coming from the robot, no matter how slight, as evidence that the robot actually feels something and, on some level, cares for and connects with its human users.

Toys

The first actual robot examined is only slightly more of a “robot” than ELIZA was: the small, handheld Tamagotchi, marketed as a “digital pet”. Endowed by their programmers with algorithms meant to simulate a need for care and attention, Tamagotchis have compelled children worldwide to treat these electronic trinkets with far more compassion than one might expect:

In the presence of a needy Tamagotchi, children become responsible parents: demands translate into care and care into the feeling of caring. …

[One child] says that his Tamagotchi is “like a baby. You can’t just change the baby’s diaper. You have to, like, rub cream on the baby. That is how the baby knows you love it.” …

Three nine-year-olds consider their Tamagotchis. One is excited that his pet requires him to build a castle as its home. “I can do it. I don’t want him to get cold and sick and to die.”

Can a Tamagotchi actually die? Its programming allows it to “pass away” on the screen, but a reset button brings it back to “life”. However, as a Tamagotchi is used, its programming causes it to appear increasingly intelligent, developing one attribute or another based on how the user interacts with it. Restarting a “dead” Tamagotchi does not bring it back to the same state it was in, and children are aware of this:

“It’s cheating. Your Tamagotchi is really dead. Your one is really dead. They say you get it back, but it’s not the same one. It hasn’t had the same things happen to it. It’s like they give you a new one. It doesn’t remember the life it had.”

“When my Tamagotchi dies, I don’t want to play with the new one who can pop up. It makes me remember the real one [the first one]. I like to get another [a new egg]…. If you made it die, you should start fresh.”

In what was surely good business for the manufacturer, many children interviewed were adept at persuading their parents to buy new Tamagotchis after one “died”, even though restarting the software on the “dead” one was functionally the same as starting with a brand new one. Somehow, the simple electronic toy truly became alive, not by any stunning prowess on the part of its programmers, but by the fervent belief of the children who played with it.

Taking the idea of the Tamagotchi to a new level, the Furby is a similarly needy collection of algorithms packaged into a furry stuffed toy. Looking and feeling more like a living being makes it more compelling than a handheld digital screen. One researcher conducted a test to see where the Furby fell in the spectrum between inanimate things and living creatures:

A person is asked to invert three creatures: a Barbie doll, a Furby, and a biological gerbil. [The] question is simple: “How long can you hold the object upside down before your emotions make you turn it back?”

While the Barbie doll says nothing, a Furby turned upside down whines and claims to be afraid, and the study finds the Furby somewhere in between alive and not:

“People are willing to be carrying the Barbie around by the feet, slinging it by the hair … no problem. … People are not going to mess around with their gerbil. … [People will] hold the Furby upside down for thirty seconds or so, but when it starts crying and saying it’s scared, most people feel guilty and turn it over.”

While the people in the study knew full well that the Furby was not alive, its somewhat lifelike form and programmed response elicited emotions sufficiently like those brought about by an actual living creature that it was difficult to relate to the Furby strictly as the machine that it is.

Kara, a woman in her fifties, reflects on holding a moaning Furby that says it is scared. She finds it distasteful, “not because I believe that the Furby is really scared, but because I’m not willing to hear anything talk like that and respond by continuing my behavior. It feels to me that I could be hurt if I keep doing this. … In that moment, the Furby comes to represent how I treat creatures.”

As with the Tamagotchis, a Furby undergoes simulated learning as it is played with, making a well-loved Furby different from a brand new one. When one of the Furbies broke in the course of a study of children’s reactions, the child was given a replacement:

[He] wants little to do with it. He doesn’t talk to it or try to teach it. His interest is in “his” Furby, the Furby he nurtured, the Furby he taught. He says, “The Furby that I had before could say ‘again’; it could say ‘hungry.’” … The first Furby was never “annoying,” but the second Furby is. His Furby is irreplaceable.

Pets

While a Furby may vaguely resemble a living creature, it is a totally fictitious one. What happens when a robot is designed to imitate an actual living creature? AIBO was built to behave like a pet dog, and some children found its behavior sufficiently convincing, and then some:

Yolanda … first sees AIBO as a substitute: “AIBO might be a good practice for all children whose parents aren’t ready to take care of a real dog.” But then she takes another step: in some ways AIBO might be better than a real dog. “The AIBO,” says Yolanda, “doesn’t shed, doesn’t bite, doesn’t die.” More than this, a robotic companion can be made as you like it. Yolanda muses about how nice it would be to “keep AIBO at a puppy stage for people who like to have puppies.”

Another child was convinced that AIBO was capable of emotion, even though it was not:

Oliver does not see AIBO’s current lack of emotionality as a fixed thing. On the contrary. “Give him six months,” Oliver says. “That’s how long it took [the biological hamster] to really love. … If it advanced more, if it had more technology, it could certainly love you in the future.”

In the future, he says, or even now:

“AIBO loves me. I love AIBO.” As far as Oliver is concerned, AIBO is alive enough for them to be true companions.

On the other hand, some children found AIBO’s lack of emotional attachment to be a positive thing:

Pets have long been thought good for children because they teach responsibility and commitment. AIBO permits something different: attachment without responsibility. Children love their pets, but at times … they feel burdened by their pets’ demands. … But now children see a future where something different may be available. With robot pets, children can give enough to feel attached, but then they can turn away. They are learning a way of feeling connected in which they have permission to think only of themselves.

Given how many children (and even adults) desire the companionship of a pet, but either cannot or will not accept the responsibility of adequately caring for it, often resulting in animals with warped personalities being dropped off at overbooked shelters, perhaps getting a fraction of the companionship of a real pet in exchange for the ability to put it away whenever you like will be a welcome option for some people.

The Cutting Edge

Moving on from commercialized robot products, a number of children were brought into the MIT Artificial Intelligence Lab to [be studied as they] interact with state-of-the-art robotic creatures. The robot Kismet was designed with large eyes that appeared to gaze at whatever person or object caught its attention, with large ears to seemingly listen to what was being said to it, and with a mouth to speak infant-like babbling sounds in response to what it “heard”. Many children found Kismet enjoyable, believing that it understood them and was trying its best to speak intelligibly back at them. Some children, though, were underwhelmed by Kismet’s ability to talk:

With no prologue, Edward walks up to Kismet and asks, “Can you talk?” When Kismet doesn’t answer, Edward repeats his question at greater volume. Kismet stares into space. Again, Edward asks, “Can you talk?” Now, Kismet speaks in the emotionally layered babble that has delighted other children or puzzled them into inventive games. … He tries to understand Kismet: “What?” “Say that again?” “What exactly?” “Huh? What are you saying?” After a few minutes, Edward decides that Kismet is making no sense. He tells the robot, “Shut up!” And then, Edward picks up objects in the laboratory and forces them into Kismet’s mouth — first a metal pin, then a pencil, then a toy caterpillar. Edward yells, “Chew this! Chew this!” Absorbed by hostility, he remains engaged with the robot.

Children are by no means alone in becoming emotionally attached to robots. Dr. Cynthia Breazeal, Kismet’s creator, accepted a position in another lab at MIT and had to leave Kismet behind:

Breazeal describes a sharp sense of loss. Building a new Kismet will not be the same. This is the Kismet she has “raised” from a “child.” She says she would not be able to part with Kismet if she weren’t sure it would remain with people who would treat it well.

Caring Robots

The attachment between robots and their creators aside, is the only purpose for social robots to be whimsical toys and children’s playthings? Widespread in Japan, and increasingly in the United States, is the use of social robots as companions for the elderly in nursing homes. The robot Paro, for example, has been mass-produced as a “therapeutic robot”:

Miriam’s son has recently broken off his relationship with her. He has a job and family on the West Coast, and when he visits, he and his mother quarrel — he feels she wants more from him than he can give. Now Miriam sits, quietly, stroking Paro, a sociable robot in the shape of a baby harp seal. … [It] has a small working English vocabulary for “understanding” its users … [and] can sense whether it is being stroked gently or with aggression. Now, with Paro, Miriam is lost in her reverie, patting down the robot’s soft fur with care. On this day, she is particularly depressed and believes that the robot is depressed as well. She turns to Paro, strokes him again, and says, “Yes, you’re sad, aren’t you? It’s tough out there. Yes, it’s hard.” Miriam’s tender touch triggers a warm response in Paro: it turns its head toward her and purrs approvingly. Encouraged, Miriam shows yet more affection for the little robot. In attempting to provide the comfort she believes it needs, she comforts herself.

Is this a positive direction to be heading, leaving the elderly to be comforted by robots? Many study participants found the supplementary companionship of robots to be preferable to being alone, and sometimes even preferable to being with someone, as sharing private emotional thoughts with a robot felt easier than with another person.

What about helping the elderly with physical needs? For that there is Nursebot:

… which can help elderly people in their homes, reminding them of their medication schedule and to eat regular meals. Some models can bring medicine or oxygen if needed. In an institutional setting, a hospital or nursing home, it learns the terrain. It knows patients’ schedules and accompanies them where they need to go.

Many people who experienced the care of Nursebot found it appealing. While they would not want to see robots entirely replace human care, they found it perfectly acceptable as a supplement. Some even found it superior, with one person reporting that:

“Robots … will not abuse the elderly like some humans do in convalescent care facilities.”

Others wrote that many of their human care providers kept themselves so emotionally distant that they might as well be replaced by robots.

Conclusion

Going from trivial electronic toys to animatronic creatures to machines for helping people both emotionally and physically, robotics technology is clearly ready for real everyday use, and society seems ready to accept it. The two parties are meeting in the middle; technology has certainly been advancing, but where it still lacks, people are willing to fill in the gaps with their imagination. Robots may not be real living creatures, but for many purposes they are real enough, and for some purposes they are better.

We’ll look at the second half of the book, covering advances in humans interacting with other humans through technology, in a future post. If you’ve found this much interesting, you can pick up a copy of the book for yourself, either in printed or electronic form.