September 21st, 2012, 03:41 PM
Live Long & Suffer
Do we want the AI to be human, or do the AIs want to be human? What if they go crazier the more we try to make them human?
Is the writer trying to create a story that sells or to create what he likes or to imagine what he thinks may happen in the future or some combination of those three?
If the AI is software, then how powerful does the hardware have to be? Can the software move or copy itself to any sufficiently powerful hardware? If it sends out copies and they can all communicate at light speed, will the copies be any different? Or will it just become one giant hive entity?
Why would it be hostile? What if it is totally indifferent to us? Is the Frankenstein AI just paranoia on our part, or just what unimaginative authors think will sell? And maybe it does sell, but has nothing to do with what may actually happen.
I don't know of anything better than Hogan's The Two Faces of Tomorrow, and the only thing as good is The Adolescence of P-1.
In fact, it is really annoying for this robot to say "thank you for logging in." We know it can't really care. LOL
Last edited by psikeyhackr; September 22nd, 2012 at 12:53 PM.