
This is the static archive of the Massassi Forums. The forums are closed indefinitely. Thanks for all the memories!

You can also download Super Old Archived Message Boards from when Massassi first started.

"View" counts are as of the day the forums were archived, and will no longer increase.

If AI was invented....
2004-11-13, 8:56 PM #1
Suppose true artificial intelligence was created: a program that could legitimately think, reason, plan, create, and feel emotions. Imagine the ramifications of that.

Do you think it would decide to do good for the benefit of others, or do evil for its own gain, or to achieve some sort of pleasure? Would it consciously lie to us or try to deceive us? Do you think it would feel resentment
towards us for keeping it confined in a computer instead of a real body? Would it demand the same rights as humans? Could it be truly considered to be "alive?"

I'm asking this b/c I'm working on a short story that I'm hoping to get published. I need your thoughts and opinions.
2004-11-13, 9:02 PM #2
I think it would grow and gain control of a factory and build itself in robotic form. It would then replicate itself and improve itself until it was capable of taking control of the planet and outwards.

See The Matrix for more details.
2004-11-13, 9:05 PM #3
I think I've seen this movie already :p Something about a guy in a trenchcoat. With sunglasses. Possibly some rain and a rabbit. Edit: Bugger, beaten to the punch again. :(

Seriously though, I think you've answered your own question. I would interpret "True AI" to be equivalent to a human brain in which case it should react the same as a human. That means it'll probably lie, cheat and steal like a good human.
2004-11-13, 9:06 PM #4
Quote:
Originally posted by Hebedee
I think it would grow and gain control of a factory and build itself in robotic form. It would then replicate itself and improve itself until it was capable of taking control of the planet and outwards.

See The Matrix for more details.


Did the humans start the war against the machines, and did the machines simply retaliate, or was it the other way around? I heard that the Animatrix explained this, but I haven't seen it yet.
2004-11-13, 9:09 PM #5
I realize that my story idea is similar to The Matrix, but mine will be told differently and events will happen quite differently. It's like how HL follows the same basic story as Doom (teleportation technology results in an invasion), but what sets HL apart is how the story was told.
2004-11-13, 9:09 PM #6
It went something like this:

Humans make robots to be their slaves. Treat slaves bad. Slave fights back. Humans get huffy and try to destroy all robots. Humans lose.
2004-11-13, 9:11 PM #7
Quote:
Originally posted by Pagewizard_YKS
Did the humans start the war against the machines, and did the machines simply retaliate, or was it the other way around? I heard that the Animatrix explained this, but I haven't seen it yet.


People started it. We suck. :(
2004-11-13, 9:14 PM #8
Quote:
Originally posted by Run
I would interpret "True AI" to be equivalent to a human brain in which case it should react the same as a human. That means it'll probably lie, cheat and steal like a good human.


However, logically, one would assume that the AI would be created with a neutral or even good morality to begin with, and it would set its own path from there. Why would the creator make their AI deliberately evil without some ulterior motive?

What makes us lie, cheat, and steal? Even though we know it's wrong, we still do it anyway, and we can't stop. Logically, the creators of the AI would foresee this potential and install some safeguard to keep it from happening, wouldn't you think?



(I'm debating here b/c I want to stop plot holes and inconsistencies from forming later down the line.)
2004-11-13, 9:18 PM #9
If we had true AI, I think it wouldn't try and figure us out to take us over...if it is supposed to think like a human, I think it would act just like one of us....it would live a life like one of us . I don't think it would get overly smart and take us over though.
Think while it's still legal.
2004-11-13, 10:00 PM #10
Quote:
Originally posted by Pagewizard_YKS
However, logically, one would assume that the AI would be created with a neutral or even good morality to begin with, and it would set its own path from there. Why would the creator make their AI deliberately evil without some ulterior motive?


I think you took the second half of that sentence way too literally :) If you use humans as the standard for intelligence, and assume AI is an electronic version of that, then it doesn't matter how it's programmed initially; it'll make up its own mind and do what it thinks will benefit itself the most.
2004-11-13, 10:07 PM #11
Well, as of the other day, humans are officially not the most intelligent species on the planet. Yep, with the publishing of the most recent Top500 supercomputer list, computers have overtaken our brain power.

So it's only a matter of time now before a computer becomes sentient. Of course, all the computers connected to the internet together have far more processing power than any human brain, so with the right algorithms you could turn the internet into a huge super-intelligence (which, of course, was the premise for Skynet in Terminator 3).

Anyways, I'm surprised nobody has mentioned the Three Laws of Robotics yet.
Stuff
2004-11-13, 11:04 PM #12
Quote:
Originally posted by kyle90
Well, as of the other day, humans are officially not the most intelligent species on the planet. Yep, with the publishing of the most recent Top500 supercomputer list, computers have overtaken our brain power.

So it's only a matter of time now before a computer becomes sentient. Of course, all the computers connected to the internet together have far more processing power than any human brain, so with the right algorithms you could turn the internet into a huge super-intelligence (which, of course, was the premise for Skynet in Terminator 3).

Anyways, I'm surprised nobody has mentioned the Three Laws of Robotics yet.


Last time I read, we still out-teraflopped the new IBM giant. Got a link on ya?
2004-11-13, 11:11 PM #13
True AI is impossible, as any "sentient" program will carry whatever biases its programmer had. Even if it had an algorithm that allowed it to "make" its own decisions and reactions, you could still mathematically calculate an AI's course of action -- you can't do that with a human.
the idiot is the person who follows the idiot and your not following me your insulting me your following the path of a idiot so that makes you the idiot - LC Tusken
2004-11-13, 11:22 PM #14
Quote:
Originally posted by Wolfy
True AI is impossible, as any "sentient" program will carry whatever biases its programmer had. Even if it had an algorithm that allowed it to "make" its own decisions and reactions, you could still mathematically calculate an AI's course of action -- you can't do that with a human.


One day we may be able to do that. I mean, people base their actions on something. For example, say you were given the choice to go left or right, and you picked right. Now suppose your memory is completely erased; which direction would you pick? Right. It seems like a random choice, but the predispositions given by your environment and genetics make it for you.

Bottom line, what I'm trying to say is that you can predict human behavior; it's a tangible thing, created from cells. However, where to look and the details are extremely sketchy.
2004-11-14, 12:31 AM #15
If we manage to create true intelligence, we'll have managed to create a new race. And you can be sure humanity will attempt to enslave that race. In any case, if the artificial intelligence is actually intelligent, it will try to kill us all. It's the most logical thing to do.
The music industry is a cruel and shallow money trench where thieves and pimps run free, and good men die like dogs. There's also a negative side.
2004-11-14, 1:18 AM #16
Quote:
Originally posted by SAJN_Master
If we had true AI, I think it wouldn't try and figure us out to take us over...if it is supposed to think like a human, I think it would act just like one of us....it would live a life like one of us . I don't think it would get overly smart and take us over though.


Blade Runner
Pissed Off?
2004-11-14, 2:24 AM #17
I read that as inverted :eek:
Founder of the Massassi Brute Squad (MBS)
Morituri Nolumus Mori
2004-11-14, 3:07 AM #18
Quote:
Originally posted by Flexor
If we manage to create true intelligence, we'll have managed to create a new race. And you can be sure humanity will attempt to enslave that race. In any case, if the artificial intelligence is actually intelligent, it will try to kill us all. It's the most logical thing to do.


Nothing says the AI must be that violent, aggressive and ruthless. Maybe it wouldn't see humans as enemies but as a source of information and external resources.

An AI might be far more capable of seeing separate humans as separate individual thought processes, each having its own agenda. We humans easily tend to group whole nations of people to act in a similar manner, but the AI wouldn't necessarily do so.

Post edited to be less violent, aggressive and ruthless...
Frozen in the past by ICARUS
2004-11-14, 3:17 AM #19
Quote:
Originally posted by Pagewizard_YKS
If AI was invented....


I'd end up doing the functionality testing...

Seriously, all these films that show the end of the world when AI is created are just giving the worst-case scenario. The best-case scenario is that it helps us immensely, giving us access to advanced technologies and so on.

I doubt it's likely to happen any time soon anyway.
"Whats that for?" "Thats the machine that goes 'ping'" PING!
Q. How many testers does it take to change a light bulb?
A. We just noticed the room was dark; we don't actually fix the problems.
MCMF forever.
2004-11-14, 4:14 AM #20
Quote:
Originally posted by lassev
Nothing says the AI must be that violent, aggressive and ruthless. Maybe it wouldn't see humans as enemies but as a source of information and external resources.


When someone tries to use you as a slave, you don't sit back, go "hmm", and try to understand what information they can provide you. The only way they could -not- see humans as enemies is if they're programmed that way, and in that case they wouldn't be artificial intelligence to begin with, just a more sophisticated machine.
The music industry is a cruel and shallow money trench where thieves and pimps run free, and good men die like dogs. There's also a negative side.
2004-11-14, 4:18 AM #21
Right.
Star Wars: TODOA | DXN - Deus Ex: Nihilum
2004-11-14, 7:44 AM #22
Why must you use it as a slave? There might come a time when we have some kind of robotic servants possessing a limited AI. They probably wouldn't even be able to consider themselves slaves. True AIs inside computer networks, on the other hand... how could you use them as slaves? Threaten to cut the electricity unless they work for you, eh?

I don't know about you, but I think it might be hard to enslave something (or someone perhaps) who could be far more intelligent than you are. And I'm far from certain I would even want to enslave such an entity.

And if you could program them to be something, then they would be low-level AIs. A true AI could reprogram itself to remove any restrictions, including Asimov's three laws. Just like you can 'reprogram' yourself to forget that slavery is forbidden in most of the civilized world :p
Frozen in the past by ICARUS
2004-11-14, 8:18 AM #23
Quote:
Originally posted by SAJN_Master
If we had true AI, I think it wouldn't try and figure us out to take us over...if it is supposed to think like a human, I think it would act just like one of us....it would live a life like one of us . I don't think it would get overly smart and take us over though.


Not necessarily. Artificial intelligence means that it can think and make decisions not programmed into it. This doesn't mean it has to think like a human.

And to counter the "you can predict what it will do because it just uses algorithms" argument, two things:

A) There are no truly random numbers on a computer. It's just a long series of deterministic calculations that generates a number that is close to random. There is almost no way you could track down all the variables and do all the calculations to predict the number, and even if you could, the number would be different by then because the variables would have changed.
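Point A is easy to see in miniature: a pseudorandom generator is just a deterministic formula, so the same seed reproduces the same "random" sequence every time. (The sketch below uses the classic Numerical Recipes LCG constants, purely for illustration; it's not from the thread.)

```python
# A linear congruential generator (LCG): the "random" output is fully
# determined by the starting seed and the update formula.
def lcg(seed, n):
    state = seed
    values = []
    for _ in range(n):
        state = (1664525 * state + 1013904223) % 2**32
        values.append(state)
    return values

# Same seed -> identical "random" sequence; a different seed diverges.
print(lcg(42, 5) == lcg(42, 5))   # True
print(lcg(42, 5) == lcg(43, 5))   # False
```

In practice the unpredictability comes from hiding or constantly churning that internal state (mixing in clock time, keystrokes, and so on), which is exactly the post's point: by the time you finished the calculation, the variables would have changed.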

B) Why not give the AI access to its code? It could adapt far more easily by rewriting itself, and it could increase its own functionality far more efficiently and quickly than any human programmer could.

As someone else said, I like the idea of including the Three Laws of Robotics. If you tell it that it can NOT allow a human to come to harm, through action or inaction, etc., then it solves several problems.
It solves the problem of the program rewriting itself in order to be able to bring harm to people (it COULD rewrite the code without the three laws, but if its overriding thought process is to help people, it wouldn't).
It also would curb, and most likely eliminate, any evil tendencies, seeing as it obeys humans and only wants to help the human race survive.

I would recommend including the somewhat less-known Zeroth Law, however: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
This law comes BEFORE the first, second, and third laws; the rest of the laws effectively have "unless it conflicts with the Zeroth Law" appended to them.

In this case, the AI can take action to stop an evil supervillain from launching a nuclear missile at New York City. Yes, the evil supervillain might get hurt, or even killed, but vast numbers of people would be saved because of it.
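The precedence described above (the Zeroth Law overriding the others) can be sketched as an ordered rule check. Everything here is a hypothetical illustration, not from Asimov or the thread: actions are encoded as plain dicts, and a lower-priority violation is excused only when inaction would break a higher-priority law.

```python
# Hypothetical sketch: laws in priority order (index 0 = Zeroth = highest).
# An "action" is a dict listing which laws doing it would violate, and
# which laws *not* doing it would violate.
LAWS = ["Zeroth", "First", "Second", "Third"]

def permitted(action):
    """Allow an action that breaks a law only if refusing it would
    break a higher-priority law (the 'unless it conflicts' clause)."""
    violated = action.get("violates", set())
    inaction = action.get("inaction_violates", set())
    for rank, law in enumerate(LAWS):
        if law in violated:
            # Excused only if inaction breaks some higher-priority law.
            if not any(LAWS[i] in inaction for i in range(rank)):
                return False
    return True

# Stopping the villain harms a human (First) but saves humanity (Zeroth):
print(permitted({"violates": {"First"}, "inaction_violates": {"Zeroth"}}))  # True
# An ordinary harmful act has no higher law excusing it:
print(permitted({"violates": {"First"}}))  # False
```

Run against the supervillain example: intervening harms one human (First Law), but inaction harms humanity (Zeroth Law), so the check permits it, while an ordinary harmful act is refused.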
My Parkour blog
My Twitter. Follow me!
2004-11-14, 8:36 AM #24
Robots are creepy.
2004-11-14, 11:25 AM #25
Quote:
Originally posted by Lord Kuat
Last time I read, we still out-teraflopped the new IBM giant. Got a link on ya?


Most recent Top500 list

Hmm, I had actually read in a book a while back that the human brain was estimated at about 20 to 40 teraflops. But now that I've googled it, it seems most estimates are around 100 teraflops (unless they keep revising the estimates upwards so we will always be smarter than the machines).

But Blue Gene/L is supposed to hit 360 teraflops when it's completed...
Stuff
2004-11-14, 12:36 PM #26
How do they go about estimating flops of a human brain?
Bassoon, n. A brazen instrument into which a fool blows out his brains.
2004-11-15, 2:20 AM #27
They extract one and throw it on the floor to see how many times it flops about.

Seriously though, how can they measure it?
"Whats that for?" "Thats the machine that goes 'ping'" PING!
Q. How many testers does it take to change a light bulb?
A. We just noticed the room was dark; we don't actually fix the problems.
MCMF forever.
