The implications are, for me at least, somewhat frightening.
Firstly, the moral implications: that nobody can be held responsible for their actions. If a robot is programmed to kill several people, and it does so, then it is absurd to hold the robot morally responsible; we would blame the programmer, even if the robot had been programmed to display messages like 'I am doing this because I enjoy it and I want to.'
However, if all human choices are nothing but the outcome of their programming (i.e. their genetic background and life experiences), then it is equally absurd to hold them responsible if they start a massacre, even if they say things like 'I am doing this because I enjoy it and I want to.'
Secondly, and even more worryingly, not only am I morally level with a robot, but my internal experiences are no different either. We can imagine a robot with a camera that responds in a variety of ways to different visual stimuli, yet we hold off from saying that it actually has its own mental representation that it sees. However, what science is revealing is that, apart from complexity, there is no difference between how a simple robot with a light sensor perceives the world and how a human does.
I find this disturbing, not just because it is insulting, but also because it suggests that a profound statement like 'I love you' could be reduced to a list of brain processes and the conditions that would elicit certain behaviour.