SFFWorld.com - Discussion Forums

A.i.

RyanIVR
June 7th, 2007, 09:24 PM
Hello everyone.

I had a question about the classification of AIs. In my current story I have an AI who is the pilot of a small starship. It is simply an application loaded onto the hard drive of the ship, so it's not like C-3PO or R2-D2. It is intangible.

It can, however, perform the incredibly complex calculations needed to travel through hyperspace safely, and can also construct sentences at near-human levels.

What kind of classification would this AI have? Is it known as a "smart AI?"

If you have any insight, please let me know. Thanks!

Arinth
June 7th, 2007, 09:45 PM
If it can perform the calculations needed to pilot through hyperspace (which I am assuming is beyond normal human ability), why then would it be so limited in its use of language?

I don't really read that much SF, so I don't know much about AIs, but what you said jumped out at me so I thought I might point it out.

RyanIVR
June 7th, 2007, 10:17 PM
It is simply too dangerous for humans to attempt because of the speed, precision, and complexity of the calculations.

A calculator can give me 2134 to the 243154th power in less than a tenth of a second, and I can't do that in my head. That doesn't mean the calculator can think or reason.
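
Just to illustrate that speed-versus-thinking point, here's a rough Python sketch (the numbers are the ones from my example; the timing line is only there to show how quickly it finishes, and the exact time will vary by machine):

import time

# Raw number-crunching: an exponentiation far beyond anything a human could do
# unaided (same numbers as in the example above).
start = time.perf_counter()
result = pow(2134, 243154)   # exact integer arithmetic
elapsed = time.perf_counter() - start

# The answer is millions of bits long and arrives in moments, but nothing in
# this program "understands" what an exponent is or why anyone wanted it.
print(f"result is {result.bit_length()} bits long, computed in {elapsed:.2f} s")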

In my mind, this AI can construct human-like sentences. It can simply take its immediate surroundings and use them to create sentences and answer the questions it's asked, but it doesn't wonder who built it, why it is there, or what is outside the walls of the ship.
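
If it helps to picture what I mean, it's closer to this kind of toy Python sketch than to a mind (the sensor names and phrasings here are ones I've invented purely for illustration):

# A toy sketch of "answering from immediate surroundings": the program fills in
# sentence templates from its current readings. It never asks anything of its own.
ship_state = {
    "hull_integrity": 0.97,
    "fuel_remaining": 0.42,
    "destination": "the relay station",
    "hostiles_detected": 0,
}

def answer(question: str) -> str:
    q = question.lower()
    if "fuel" in q:
        return f"Fuel reserves are at {ship_state['fuel_remaining']:.0%}."
    if "hull" in q or "damage" in q:
        return f"Hull integrity is at {ship_state['hull_integrity']:.0%}."
    if "where" in q or "destination" in q:
        return f"We are en route to {ship_state['destination']}."
    if "hostile" in q or "threat" in q:
        return f"Sensors show {ship_state['hostiles_detected']} hostile contacts."
    return "I have no information relevant to that question."

print(answer("How much fuel do we have left?"))
print(answer("Why were you built?"))   # falls through: it never wonders about itself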

Arinth
June 7th, 2007, 10:37 PM
Ok, cool. I don't expect it to have a personality or an awareness of itself. At first it sounded like it had trouble putting sentences together, but it's clear that isn't the case, so no worries.

"Smart AI" would work for me, but hopefully someone else will come along with a better answer for you.

James Carmack
June 8th, 2007, 01:53 AM
Sounds like its primary function is that of a glorified calculator. Does it actually "think"? What are its functions outside piloting the ship? Can it adapt, learn?

No matter how complex the calculations it can perform, if there is no element of reaction, it can't really be called an AI. Does it have an imperative for self-preservation? What lengths will it go to to survive? Can it ID hostile vessels and defend itself without user input? If I try to break into the motherboard, will the AI cut off life support to save itself? Does it even possess the level of sophistication needed to make that sort of decision?

As for its linguistic capacity, is it limited to its original programming or can it expand beyond it (an element of that critical aspect: adaptation)? Can it only answer questions directly related to its functions or does it go beyond that? The ability to answer questions does not make a program an AI. There's more to it than that.

However, to answer Arinth's question, it wouldn't be surprising for a navigational computer to only have a rudimentary linguistic capacity. Why waste resources on something that's unnecessary to its primary function? Specialization vs. Generalization. An age-old debate. Better to have a stiletto honed to a fine point than a dull meat cleaver. That's what the designers are most likely to think.

Also, Arinth, you don't need to read a lot of SF to be exposed to AI. In fact, you don't need to read any. It's here now. It's reality. Yeah, independent AI is still at about insect level, but think of what insects are capable of. And they're always getting better. (Mildly disturbing that some of the leaders in the field completely dismiss Asimov's Laws, but what can ya do?)

RyanIVR
June 8th, 2007, 08:50 AM

James, you might have just given me a good idea.

My original plan was that this A.I. was just loaded onto the ship's hard drive, unable to get off of it. So I guess it's basically trapped in there. But what if it was taken out and installed somewhere else?

---"No matter how complex the calculations it can perform, if there is no element of reaction, it can't really be called an AI. Does it have an imperative for self-preservation? What lengths will it go to to survive? Can it ID hostile vessels and defend itself without user input? If I try to break into the motherboard, will the AI cut off life support to save itself? Does it even possess the level of sophistication needed to make that sort of decision?"---

Yes, I think that it can do all of these things, and will, on its own. It can adjust to situations and will do anything it can to preserve the ship it is piloting. It can perform all of the tasks that a human pilot could and would, such as defending itself when attacked, IDing hostiles, and adapting to situations as they arise.

I guess this A.I. is pretty advanced; I just want there to be some kind of limit on it. I don't want it to be something that can be mistaken for human. Philosophy, religion, and abstract and illogical thinking exist in human minds. They don't exist in this A.I.'s "mind."

RyanIVR
June 8th, 2007, 09:00 AM

I'm glad I asked this question, because it's making me think so much more about this. I do think that this A.I. can adapt, learn more about language, etc.

I guess another question to ask is: what do you think an A.I. of this specialization would be like in 500 years? Would a more complex language system be commonplace, given how much easier it might become to program?

KatG
June 8th, 2007, 09:38 AM
A robot is a machine programmed to perform specific mechanical functions. An android is a robot in the shape of a human or other living being. An Artificial Intelligence is a program, programmed robot, etc. that is able to learn and adapt and to "think" independently on its own. A.I.'s on the level of piloting a ship would need to be able to handle basic abstract concepts. An A.I. could theoretically evolve and develop philosophical concepts, emotional states, and illogical or chaotic thinking.

There would be limits on your A.I. because it does not have a physical presence, and while it can manipulate the hard drive equipment, it can't turn it into a body or make the ship part of its intelligence. The equipment of the ship would remain a machine that is programmable by the A.I. but could also be overridden by human programmers. Your A.I. is also limited because it cannot electronically transmit itself from system to system -- it cannot escape the ship on its own. It would presumably have the ability to diagnose and repair problems with its code, but it would not be invulnerable. It might or might not be able to protect itself from electrical surges, radiation, water, magnetic pulses, or other means of disrupting electronic circuitry. The A.I. would be monitored at all times. It might, however, come up with a way to get around this.

Some titles of possible interest to you that I remember off the top of my head: the Ender series by Orson Scott Card, The Moon is a Harsh Mistress by Robert Heinlein, and the adaptation of Clarke's 2001, with the famous HAL. There are also some intelligent-ship series; you might want to check out Anne McCaffrey's The Ship Who Sang. It's not about A.I.'s per se -- it's about altered humans who run/are ships -- but it might give you some ideas. Check also with the Science Fiction forum for suggestions of good A.I. material.

James Carmack
June 8th, 2007, 09:16 PM
Now that we've established that your AI is indeed a true AI capable of learning, the question is what the limits on its ability to learn are. Can it independently seek out new information, and if so, can it seek out information not related to its purpose? After all, its storage capacity isn't unlimited. It would stand to reason that its core programming would limit or prohibit the acquisition of extraneous data. If so limited, it would be unable to wrap its head around concepts that have no relation to its function. I mean, what do Kierkegaard's writings have to do with piloting a ship? The AI's ability to interact with the human crew will depend on how utilitarian it is. No one expects any affection from a wrench, but if your dog does nothing but fetch your slippers at precisely 0700 every morning, there's a problem.

KatG
June 9th, 2007, 12:38 PM
It didn't sound like it had a human crew. I thought it was an autonomous ship, but maybe I didn't understand the parameters.