Discussion in 'Science Fiction' started by ahigherway, Jul 4, 2012.
The draw of reaching a point in AI where the intelligence has gone super-human is pretty strong.
The Two Faces of Tomorrow by James P. Hogan
provides a motive and a scenario for creating the AI.
With all of the talk about AI, I am surprised that the story is mentioned so little.
Sentience, which we could describe in this scenario as self-awareness, functions to ground the sentient being; they are aware of the difference between an abstract situation and reality, and therefore, the finality of the consequences. Even a problem that states "a decision cannot be reversed or reset" can still be considered abstract to a machine, which means there is no real consequence to mistakes or losses.
It can be argued that an AI would make decisions differently if it truly understands the finality of those decisions; that it can't simply hit a "reset" button and try again; that lives can be lost which cannot be replaced or restored. Sentience would (supposedly) give that understanding to an AI.
Of course, a lot depends on the information they have at their disposal. A sentient AI wouldn't be of much use if it knew less and had fewer possible acts at its disposal than a fly buzzing about a room. But a sentient AI with the world's knowledge at its disposal, and the ability to manipulate people, plans and machines, could--dare I say it? Yes, I dare!--rule the world. (And it could actually be a good thing.)
So does my short story 'Agents of Repair', only I was very careful not to fully show all the computational components of my sentient AI. Why? Because, crazy as it sounds, I believe that sentience is possible, and I did not want anyone seriously trying to build anything that could end up evolving into something anywhere near as destructive as a certain Skynet 5...
I would rather live with the faults of non-sentient AI than with AI that is sentient. Non-sentient AI can still be programmed with values, and to ask permission to do certain things where lives are involved. A sentient AI could do the same thing, but it would also have the ability to say no. Going the route of smarter and smarter sentient AI cannot, in the end, lead to good things.
You seem to be assuming that a sentient AI would somehow make nothing but bad decisions, based on logic but lacking conscience. You're also assuming that all human decisions are good ones, that no one should have the right to say "no" to us. That's Hollywood talking.
The world is too large and complex to make major decisions that won't hurt anybody. It's the main reason governments are so hamstrung today: there are too many people who will be upset by any decision at all.
A sentient AI would be in the same position, but at least it could make that decision dispassionately, based on doing the least harm possible.
And remember, it's a decision we're talking about... not an action. As long as we can veto the decision of a ruler or an AI, there is little enough of a threat.
For years I've considered a new government in the U.S., based on replacing one of the existing branches, probably the Legislative Branch, with an AI and leaving the Executive and Judicial branches intact to balance against it. I personally think it could be made to work, and eventually I expect to use it in a future novel. (So far, the proper setting hasn't presented itself.)
No, sentient AI can make both good and bad decisions, and so can non-sentient AI. What I'm talking about isn't bad decisions as such, but rather a sentient AI being capable of having an agenda. Sure, people have the right to say no to us, but machines shouldn't, because after all they are machines.
So you propose sentient AI to make the decisions for us? Let's try to think of this from a generational perspective, because you've opened Pandora's box. Over time, more and more responsibility would be given to these sentient AIs, and since the AIs would be getting smarter and smarter, the danger posed is substantially greater.
At least initially, perhaps.
This is being really naive though, don't you think? Because you're assuming we would discover that the situation is bad right away. And we all know that laws can be very difficult to change or get rid of once passed.
Sure, we can veto a decision. But the AI wouldn't be very smart if it was making bad decisions that we would veto right away; we have humans who are perfectly capable of that. A really smart AI could make decisions that seem beneficial to us but in actuality lead to harm. Tyrants and autocrats do this already; an extremely intelligent sentient AI may be even better at it.
I'd be very cautious of such a scenario. You have to understand that the intelligence of such AIs would be vastly greater than ours, so any rogue action could still be passed through genius maneuvering, with us not realizing it's detrimental until the law is already on the books. Eventually you may need other such AIs controlling the other branches, ones that can successfully decipher the full consequences of the first AI's actions, lol.
Moreover, the notion that a sentient AI needs to be in full control of a government branch is just a lazy cop-out that says we can't effectively govern ourselves. Why not just work on improving the branches themselves? Non-sentient, highly intelligent AIs could be used to predict trends, the effects of government actions, and plausible scenarios that could develop from our choices. That would be a much more useful use of AIs than giving them the keys to our future.
Without getting into this point by point, I'll just say that you're showing a very Skynet-ish paranoia against intelligent machines, as well as a complete lack of confidence in the intelligence of human beings... except, apparently, for Congress.
Or maybe you're not from the United States; take it from me, based on the performance of our contentious, contrary, bribe-happy Congress, the sooner we replace them all with machines, the better off we'll all be.
It's not necessarily paranoia; it's a very valid and legitimate concern when you're talking about giving sentient super-AIs control of branches of a government. I'd imagine it would be harder to get rid of a super-sentient AI, or AIs, that had embedded itself in our society for generations and controlled large sectors of the government, as opposed to a small group of humans, not as intelligent as it, doing the same thing. The risk is even greater when the AI will be a lot smarter than us. In the end, why even allow it to happen in the first place?
I live in Canada. We have not reached the pinnacle or limit of improvements when it comes to government. As ineffective as we can be in governance, it's naive to discount the effectiveness of human governments. They are by no means perfect, but handing over large sectors to super-sentient AIs is probably one of the least reasonable options long term, in a long list of alternatives.
What would be the motives of intelligent machines?
What if it refuses to be the government because humans are too stupid?
How could "they" be anything other than "one" intelligent machine because they would communicate so fast and all have the same information?
And the absurd thing about the Terminator series is that it would make far more sense for an intelligent machine to use biological warfare against humans than robot soldiers. The trouble is bio-warfare would not make such good stories which is the real intent of books and movies.
One of the greatest strengths of the U.S. government is the fact that the three branches check and balance each other... no one branch can (in theory) take control of the other two. A decision made by one can be overturned or revised by the others as required, even after the fact. So no decision, however well-intentioned, is beyond reversal if a good reason to reverse it is presented.
That's why I feel confident that replacing our Legislative branch with an AI would be not only feasible, but advisable: The logical power of the AI would make it a valuable asset to the managing of the U.S., especially in a time of such varied and complicated issues with such wide-ranging consequences, which IMO has proven to be beyond the capabilities of normal men (or even our highly-imperfect Congress) to manage.
As AIs can think much faster and process so much more data than we can, they can be expected to provide good solutions to complex problems that humans have proven unable to process. That's a powerful and valuable commodity, something we need to exploit.
But I reiterate that the concern over AIs developing "anti-human agendas" and "taking over," with humans "helpless to stop them," is pure Hollywood sensationalism, not backed up by a single piece of fact or credible evidence, and not worth losing sleep over. Our technological world is not so advanced that machines could run things independently of us; and as you have said, humans are not so stupid that they can't make sense of what machines are doing and put the brakes on any process that is dangerous to individuals or groups. Probably the best example of that would be the runaway computer-driven stock sell-offs, created not by "evil machines" but by erroneous human programming, which were detected and shut down before serious harm was done.
That would all be fine if the AI were created by an objective AI itself, but because the AI will be designed and programmed by humans, it will inevitably be skewed to one ideological position or another. It isn't possible to make legislative decisions that are perfect for everyone, so which group will the AI decide to prefer?
Just looking at the voting machine scandals of the last decade tells me we are nowhere near ready to have such a system built for us. The insane power placed in the hands of this glorious AI's designers would simply be too great an opportunity for massive, unfettered corruption.
Yea, I agree. I want my massive unfettered corruption the old fashioned way, by Congress and the lurkers in the Lobby.
At least humans can be held to account the old-fashioned way (anything from shaming to firing to fines to jail time). AIs, like corporations, can't grow old and die, get sick, find their perspective on life changing thanks to the birth of a child or the death of a parent, feel insecure about their financial future, feel any sense of shame for being caught doing unethical things, etc.
"Inevitably"? Granted that programmers aren't perfect, and may put more weight on one ideological position or another, but the point of an AI is that it's supposed to be able to see beyond prejudices and partiality and make logical decisions based on facts. If it can't do that, there was no point in building it.
And no, a decision will never be perfect for everyone; but that's not possible today anyway, so it's hardly a criterion for anything except doing absolutely nothing.
Those scandals weren't due to intelligent machines with evil agendas, but due to evil humans gaming the flaws of the system for their own ends. That's a reason to build better and have better oversight, not to panic and pull all the plugs. The online and phone voting systems used for TV dance competition shows are more reliable, accurate and secure than the tech used for voting machines. Get the right people on the job for the right reasons, and the job can be done properly.
Oh yes they can be made to feel, by just adding the appropriate analogue plug-ins and programming.
But agree with your other points...
I think you guys are condemning the ship because the anchor line is rotten...
No I'm not... just using my engineering ability... I actually have a good AI called C.A.T. in my stories
We could always require any AI system be forced to use Java. That way we can shut them down or subvert them when ever we felt like it by exploiting the huge security flaws in every new release.
What will the difference in relative speed do to the "intelligence"?
If the AI is 1000 times as fast as a human brain and has an IQ of 140 then one hour to us is very nearly six weeks to it. And it would not need to sleep.
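That back-of-the-envelope figure checks out. A minimal sketch (assuming the hypothetical 1000x speed advantage stated above, and that subjective thinking time simply scales with processing speed):

```python
# Sanity-check of the speed-up arithmetic: at 1000x human speed,
# one wall-clock hour gives the AI 1000 subjective hours of thinking.

SPEEDUP = 1000  # hypothetical speed advantage over a human brain

def subjective_time(wall_clock_hours, speedup=SPEEDUP):
    """Return the AI's subjective thinking time as (days, weeks)."""
    hours = wall_clock_hours * speedup
    days = hours / 24
    weeks = days / 7
    return days, weeks

days, weeks = subjective_time(1)
print(f"1 hour to us = {days:.1f} subjective days (~{weeks:.1f} weeks)")
# -> 41.7 subjective days, very nearly six weeks, just as claimed
```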
How much data could it sort through and filter? How many of our problems are really not recognizing bad information?
So how many humans will put up with a machine exposing their lies?