On one of my co-hosting gigs on the Jordan Rich show, we received a call from a listener who asked about rights for future individuals who would be neither human nor robot but something in between. He cited the term Singularity, popularized by Ray Kurzweil in his book The Singularity Is Near, describing a point where the line between man and machine blurs. I was stumped, and my evasive answer, comical as it was, didn’t satisfy the caller. But the question stayed with me.
My first reaction, after mulling the question over for a while, was that any entity with a human brain should have human rights whether its body is organic or not. Talk about a no-brainer, so to speak. But that leads to the question of entities with non-human brains. Before we say no and discount robots, however, consider the real possibility of a dying man or woman uploading their mind into a computer. It could be a stationary device, as in the movie Transcendence, or an ambulatory one like a robot. What rights would such an individual have? They wouldn’t have an organic brain, but they would have a human experience contained within a silicon device. Shouldn’t that individual have human rights too? A rule granting rights only to those with organic brains would therefore be unfair.
So now let’s consider the possibility of a computer brain without human experience, as in a robot or android. Should it have human rights? The obvious answer is no. But what if those created with silicon brains live a human-like life, with dreams of a career, friends and so on? Remember Data on Star Trek: The Next Generation? He joined Starfleet and earned a commission as a Starfleet officer. I could see something like that happening at some future time. What rights would Data have?
There was a great episode, “The Measure of a Man,” about exactly that question, in which a scientist claimed the right to dismantle Data for research. The question went to trial, where Picard defended Data as a sentient being with the right to choose to exist and continue serving as a member of the Enterprise crew. I have to think most of the viewing audience felt it was a travesty to treat Data like an inanimate object, and that he should be accorded the same right to exist and make choices about his future as any human member of the crew. If there is ever an android as sophisticated as Data, I agree that he or she should have human rights, even if their brain is silicon based.
So I guess the answer to “Should cyborgs have rights?” is: it depends. It depends because our understanding and definition of sentience will always be in motion as the technology changes. Right now it’s simple: humans should have rights and computers should not. But as the line between machines and humans becomes less distinct, the answer won’t always be so clear cut. It will be interesting to hear those arguments in some future court.
BIO:
Michael J. Foy was born to Irish immigrants in upstate New York and lived in London for a year on two separate occasions as a child. He graduated from Northeastern University in 1979 with an engineering degree. In 1993 he changed careers to become a recruiter serving the publishing industry. In essence, his literary career has spanned two other careers but has always been his first love.
In 1991 he sold a screenplay option for his first novel, False Gods, to Timothy Bogart, the nephew of Peter Guber, producer of Batman. Michael has since published Future Perfect, a science fiction novel and local bestseller, and The Kennedy Effect, which weaves the story of JFK into parallel-reality themes.
He was also an early pioneer in publishing short stories over the internet, including The Solar Winds of Change, The Adventure of the Moonstone and A Land to Call Our Own. He lives in Massachusetts, where he enjoys kayaking, bicycling and exploring a wide array of literary subjects.




Depends indeed. Are we talking about robots, cyborgs or advanced artificial intelligences, regardless of their outer packaging?
A cyborg is generally held to be a technologically enhanced human, so I would assume the enhancements wouldn’t strip them of their rights. So, yes, in that case. Robots not equipped with a sufficiently advanced AI (like the ones on automotive assembly lines) aren’t sentient and so aren’t granted rights.
Self-aware and autonomous AIs are another matter, regardless of how they’re packaged. I think that may be a subject for conversation, but it’s not something that humans can or should decide unilaterally on behalf of the AIs. We have no idea what a machine-based intelligence will be like, and it is the height of arrogance to assume AI thoughts, motivations and perceptions will even resemble those of humans.
Very much a wait and see proposition.
Hi, Michael!
As someone who has spent many years in the company of robots, computers and, of course, humans, I couldn’t agree more. However, we must be careful not to mistake the appearance of sentience for actual sentience. At some point, someone is going to have to come up with a litmus test for sentience, or at least some way of proving that sentience is actual. Not an easy thing to do. IMHO it’s not just about self-awareness, but about desire and creativity: original thought. And having an on/off button has got nothing to do with that kind of capability.
I also wonder if we might end up with human and robot/machine rights being different (possibly similar, but different), simply because we are unable to determine what makes us human and whether the machines can measure up. After all, we don’t want to go around granting human rights to non-humans willy-nilly. Who (strike that) what will build our cars and houses and toasters and DVD players if not an army of slave robots? ;->