I came across the most interesting concept in the New Yorker today. If I may quote:
One of the few guidelines from Breazeal [a professor creating a human-like robot] was that Leo [the robot] not look too human, lest he fall into the "uncanny valley," a concept formulated by Masahiro Mori, a Japanese roboticist. Mori tested people's emotional responses to a wide variety of robots, from non-humanoid to completely humanoid. He found that the human tendency to empathize with machines increases as the robot becomes more human. But at a certain point, when the robot becomes too human, the emotional sympathy abruptly ceases, and revulsion takes its place. People begin to notice not the charmingly human characteristics of the robot, but the creepy zombielike differences.

What an amazing concept! And to be honest, it sounds entirely plausible. It's like a club sandwich you get in a foreign country. Up to a point, it's their own take on the club sandwich, and the egg or butter or whatever they've added is a charming new flavor. But if it's too close, and still not quite right, then it's just a very bad club sandwich. Not to get too philosophical about this, but one can derive an enormous amount of power from this thought, simply by ensuring that whatever you are creating or involved with makes a clear distinction between what it emulates and what it is. A cat isn't a bad dog; it's a cat that shares some similarities with a dog.