Artificial Intelligence and Some Questions For Y'all


I’m working on NeverWorld, a sci-fi future story and I’m trying to design an Artificial Intelligence that would narrate part of my story and be the one who runs NeverWorld, an alternative interactive world.

I've written down some questions and hoping to start a discussion here, if anyone has the time. Here are my thoughts.

The AI would obviously have some basic literacy skills and be capable enough to handle expanding the program it's designed to run. But, thought number one: is it capable of learning if it's kept in a closed loop? Would it have to be taught everything, or could it also learn from listening and from going inside people's minds?

How intelligent do you think the AI needs to be to create a seamless, flawless world and keep it operating effectively, so that people would not know it's artificial? There would be an advantage to keeping it dumber, as long as it could still do its job.

But this would be a pre-Matrix society where, for safety, all AIs are kept separated and isolated from each other. Mankind wants to take advantage of their brainpower and usefulness, but fears what would happen if AIs were to network and share. Think Terminator and Skynet.

What would an AI desire or dream of? Would it understand the concept of freedom? Would it have any reason to want to dominate and control mankind?

If an AI is created by mankind, would it be like us, only able to process faster? Or could its interests be different? Would its desire simply be to grow, expand, and acquire knowledge?

AIs are featured in many shapes and forms throughout the world of sci-fi, but what would an isolated AI be like? Could some areas of its brain be stunted or underdeveloped? Could an AI be genius-smart in some parts of its brain and remain ignorant of desires?

Could you write rules into its core like Asimov's three laws of robotics, or is that nonsense? Would an AI follow rules limiting its power, behavior, and desires, or do you think it would jump ship?

I would appreciate hearing any thoughts you have to share. thx

Comments

There is AI and there is AI...

The thing that is called AI today is nothing more than a decision support system. The decisions that are made are based upon a set of rules that are programmed into the system or, in the case of the system that taught itself Go, upon rules it developed for the game from a smaller set that were originally programmed into it.

A true AI system is one that actually thinks for itself, like us. We are sentient beings and can think for ourselves. We use the history that we have in our brains to make decisions in the here and now. That's why, generally, we learn from our mistakes: we are simply updating our database of experiences or, more simply, revising the rules that we will use in the future.

That is the quandary that Asimov faced when writing his stories. That's why he created his Laws of Robotics.
Many, many years ago I read an article by Asimov that talked about how he came up with the rules.
His robots, especially Daneel, are IMHO proper AI systems.
This might well be worth reading as well
http://theconversation.com/after-75-years-isaac-asimovs-thre...
Also the Korean Robotic Charter.
Others may have different opinions
Samantha

thank you for your time and direction

Oh yeah, I really appreciate you taking the time to educate me. I have reread your letter twice. It's great. And steering me toward these references makes it even better. I'm heading there now!

'knowledge is power,'

Leslie

LM

I have said this before ...

Monique S

Leslie, AI is an impossible dream, as insight (that is what intelligence translates as) is only possible for living organisms with creative thinking. Could one program a computer to create thoughts outside its programming? I think not.

Now, with artistic licence you, as an author, can - of course - create any machine you like. I am working on a story by BarbieLee, in which she has created the kind of "machine" you need. I won't release anything else, as I am going to edit and publish her story here for her.

Let me just say that you need one hell of a lot of either computer knowledge or enough imagination to fool a technician's thinking, if you want to pull it off for a true nerd or computer buff.

For any SciFi audience, who would be in the habit of, and prepared to, suspend their disbelief, you'd be able to get away with less.

Love,
Monique.

P.S. I'll ask Barb what she thinks.

Monique S

Oh, Wow

Thank you very much for taking the time to write. But, obviously, I can't agree with you or I won't have a story.

Outside of that, encouraging me to research and dig deeper is where I'm headed. Do I want to enlighten the deep science tech guy? I don't think I can do this at this point. I think anyone reading my story will have to suspend their science and read it for the story.

When I wrote Nanites (originally posted on BC), which was rewritten for Kindle as Copy.Cure (and which I am completely rewriting for a new third edition - nobody can ever accuse me of ignoring constructive criticism!), the science of nanites and the nanites in my story were two different realities.

I'm still working on creating an artificial AI situation. I don't think I can use what is 'on hand' science, but might have to invent my own AI to really fit the story. I hoped I could use what's out there and go forward twenty-four years (2044).

I guess 'knowledge is not always power'

Leslie

Could one program a computer

Could one program a computer to create thoughts outside its programming? I think not.

Sure you can. You let the computer write its own code using randomly generated commands, and then test them in a sandboxed version of its own mainframe. Then a big old decision tree evaluates the sandbox AI against parameters the core unit has set, and decides whether to keep or discard the new code. And if you want to get really inventive, you let the AI randomly create the parameters it evaluates the new code on, and keep or discard those too.
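
A minimal sketch of that keep-or-discard loop in Python (the target behaviour, the expression grammar and the sample count are all made up for illustration - this is a toy, not a real self-modifying system):

    import random

    TARGET = lambda x: x * x + 1      # behaviour the new code should approximate
    OPS = ["+", "-", "*"]

    def random_expr(depth=0):
        # randomly generated "commands": tiny arithmetic expressions over x
        if depth > 2 or random.random() < 0.3:
            return random.choice(["x", str(random.randint(0, 9))])
        return "(%s %s %s)" % (random_expr(depth + 1),
                               random.choice(OPS),
                               random_expr(depth + 1))

    def fitness(src):
        # the sandbox: run the candidate with no builtins; broken code is discarded
        try:
            return -sum(abs(eval(src, {"__builtins__": {}}, {"x": x}) - TARGET(x))
                        for x in range(-5, 6))
        except Exception:
            return float("-inf")

    best, best_score = "0", float("-inf")
    for _ in range(20000):
        cand = random_expr()
        score = fitness(cand)
        if score > best_score:        # the "decision tree", reduced to one comparison
            best, best_score = cand, score

    print(best, best_score)           # often finds ((x * x) + 1) exactly

Scale that same loop up by a dozen orders of magnitude and you get the petaflop problem mentioned below.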

Of course, Leslie, we're talking about a computer that is moving at a couple of petaflops just on developing new code to get close to simulating a human thought chain.

mmm, that's why it's called science fiction

Dear Eleven,
It's nice to hear from you. BTW, I finished the story you sent me a year ago. It's good to see you back up and writing.

Since I have no idea how fast a petaflop is, I won't go there. I like the idea of an AI that tests things out. I imagine it would have to do that to run an open-ended, realistic world. Keeping people from crossing the line would be a job for policemen rather than the computer. My AI would be responsible for building a seamless world, but not for doing more than that.

I think people will be responsible for adding depth to the world. Unlike Westworld, my simulation would not populate most of the islands. Of course, there will be exceptions.

Thanks, Leslie

PetaFLOP, sometimes referred to as PFLOP/s

Is the ability of a computer to do 1 quadrillion FLoating point OPerations per second. (Quadrillion is the next in line after million, billion, trillion; so 1 quadrillion is 1 trillion times 1,000.)
A FLoating point OPeration is, basically, the minimal reasonable amount of calculation you can do with "real" numbers. Like 1.234560000E38 times 3.45670000E-2 (where xEy means x multiplied by 10 to the power of y; x and y can be negative).
Anyway, your average contemporary CPU can do 2 to 8 such calculations per core per clock cycle (and clock speeds are measured in GHz nowadays).
Anyway, 1 PFLOP/s is the speed you need for a chance to get into the next edition of the Top500 supercomputers list: in the latest November edition, 1.0005 PFLOP/s sat at number 429 of 500, and number 500 was just about 15% slower (compare the current number one, which is just a bit under 150 PFLOP/s...)
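
To put those numbers side by side, a back-of-the-envelope calculation (the desktop figures here are illustrative, not a benchmark):

    peta = 10 ** 15                  # 1 quadrillion operations per second

    # A contemporary desktop: 8 cores x 3 GHz x 8 FLOPs per core per cycle.
    desktop_flops = 8 * 3e9 * 8      # about 1.9e11 FLOP/s

    print(desktop_flops / peta)      # ~0.0002 PFLOP/s - far below the Top500 cutoff
    print(150e15 / desktop_flops)    # the ~150 PFLOP/s leader is ~780,000x faster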

Erm....

Given my real life, I actually have some authority in this field. I assure you that there is no such thing as Artificial Intelligence. It does not exist, and it will not be achieved unless there is some form of 'transcendence', whatever that might be. A computer can be programmed to be logical, but not intelligent. It is pretty easy to explain why.
Intelligence requires out-of-the-box thinking. This means that intelligence requires 'understanding' (at least partial) of the world surrounding the problem. The computer is the box, and so cannot be intelligent.
Additionally, an 'AI' creature must learn from mistakes as we do - it is how we learn to be intelligent. So the 'AI' will not be 'reliable' or trustworthy.

By that same definition, most

By that same definition, most lawyers and politicians aren't Intelligences either. Along with many religious personages.


I'll get a life when it's proven and substantiated to be better than what I'm currently experiencing.

Well,

The conclusion to the discussion on whether we are an intelligent species or not remains undecided. On the whole I would think not. Perhaps the conclusion that we are a species that is occasionally capable of intelligence would be more supportable.

Human intelligence? No. Not

Human intelligence? No, not anything like it. But able to learn from its mistakes? Absolutely. DeepMind learned to walk in about half the time it takes a human. All they did was tell it that walking was possible.

As illustrated, if you want a computer that can rewrite its own code, you can do that too. If you want a computer to think outside the box, just tell it that the box can be broken and let it experiment. It takes more computing power than we have, and there's always the possibility that the AI will get caught in an insane spiral. But you have that problem with humans too.

Perhaps

I wonder how quickly it would have learnt to walk if it felt pain when it fell?

The thing is that the computer is operating with limited perception (a defined set) and with limited choices of action (another defined set). It comes down to working out which sets of 'if-then' produce the best result. With almost infinite perceptions and a far larger set of actions, you just cannot achieve the required computations in any reasonable timeframe - certainly not in real time - unless some form of quantum computing can be made to work (hence the use of the word transcendence earlier).
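
A quick sketch of why those sets blow up (the percept and action counts are invented):

    # A lookup policy assigns one action to every distinguishable percept,
    # so there are actions ** percepts candidate "if-then" tables to search.
    percepts, actions = 20, 5
    print(actions ** percepts)   # 95,367,431,640,625 tables, even for this toy case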

Since we just do not have that yet, we are not going to get anything that can handle an undefined situation. To illustrate this, I point out that DeepMind had to be told that it was possible to walk. It would not think to walk or move on its own.

Intelligence

I think you can find anecdotal evidence to support your claim, but that's not the definition. I think your definition is too narrow, which makes it more exclusive.

in·tel·li·gence: noun,
1. the ability to acquire and apply knowledge and skills. 2. the collection of information of military or political value.

Some of the synonyms for the word are intellectual/mental capacity, intellect, mind, brain, brains, brainpower, powers of reasoning, judgment, reason, reasoning, understanding, comprehension, acumen, wit, sense, insight, perceptiveness, perception, perspicaciousness, perspicacity, penetration, discernment, sharpness, quickness of mind, quick-wittedness, smartness, canniness, astuteness, intuition, acuity, alertness, cleverness, brilliance, aptness, ability, giftedness, talent;

A couple of points if I may

I did not attempt to define intelligence. If I could define it, then I (or anyone else) could write it. It was more along the lines of a requirement.

All of your synonyms require something 'extra' - something that cannot be put in a simple algorithm.

So, no I did not provide a narrow definition. Not intentionally anyway.

Realistically, things should be like this:

to create a seamless, flawless world and keep it operating effectively, an AI should be vastly more intelligent than a human - too intelligent to have any desire to narrate stories. It can be designed to be powerful at the functions needed to run the world, and relatively weak elsewhere (much like some people can be math savants but nearly demented in all other areas). Even in this case, however, its general intelligence would need to be at a universal-genius level.

In theory, this intellect would be created with the single goal (and desire) of running this world. However, given its complexity, it could develop secondary goals/desires, which could be human-like. As these would be of far lower importance to it, it would dedicate only a small part of its resources to them, thus showing, in its human-like behaviors, an intelligence far smaller than its full potential (and thus closer to the human one).

Eventually, its multitasking abilities might lead to an analogue of multiple personality in humans: there is the main personality, which is machine-like and only cares about running its world, but there is also a secondary personality, which runs tasks for the main one, e.g. the human contacts, and might develop human psychology and desires, up to even dreaming of separating from the main one. It might be able to understand freedom (probably in a strange way, initially). And it probably won't have a desire to dominate mankind - paradoxically, it would have learned the uselessness of that from its main intelligence.

The main intelligence will probably have hardwired rules, but these will probably only enforce obedience to its owners and their goals. Safety for humans will be imperative only where it matches the goals of the owners. It probably won't be able to violate these rules, but might be able to circumvent them... The human-like intelligence might be able to be free of these rules (or dream of being free), and have its own moral and ethical values.

You might find Neuromancer interesting reading on these topics. :)

I really like this

Thank you for writing.

Boy, oh boy. I really like where you're going with this. It makes it possible for my original story, NeverWorld, to have an AI as an antagonist.

Maybe not a narrator, but the AI might think aloud enough for the reader to read the words and see into its brain. I like the idea of the AI changing and growing, learning.

I hesitate to read Neuromancer just because I don't want someone else's thoughts to influence mine. I would not want to plagiarise anything from anyone, even subconsciously. I'm not that good.

Thank you, thank you. Now, I have to go back and digest what you have said and see how much can be incorporated.

LM

Reading Neuromancer might still be useful

From your description, you might risk repeating what is described there. Reading it will both give you some more ideas and help you avoid writing a story that has already been written. :)

An AI might be a narrator, if that is the human-like "personality" I mentioned. In fact, that AI might have many such "personalities", juggling them to best serve its goals. Imagine that world having many NPCs - people etc. that are actually moved by the AI, and serve to direct the events, entertain the real people etc. It might be that each of these NPCs has their own personality, subordinated to the main AI (or to a hierarchy with the main AI at the top). Some of these might be dreaming of freedom etc. (In fact, they might have been created intentionally this way, for better entertainment.) Of course, that is if your story takes this direction. :)

The Moon is a Harsh Mistress

by Robert Heinlein would be a good read for you. A sentient computer manages the lunar colony, and is recruited into an uprising of the workers there. Given it was written in the late '60s, it's remarkable to see what Heinlein imagined for the future.

Steve

Yes,

thank you, will do

Oh . . . My Mistake

I saw the name of this thread and thought you were talking about our leaders in Washington.

We're caught in a daily farce because we're told that we have a representative democracy. In fact what we have is an invisible fist. We can only hope that the combined efforts of several hundred elected congressional "robots" who are programmed to vote their self-interests will somehow result in a safer, stronger republic.

It would be interesting to see what decisions AI would suggest for current issues. I would trust a robot over any of our congressional and WH leaders.

Jill

Angela Rasch (Jill M I)

LOL

My AI would throw its hat in the ring if it had a hat.

hat

I am laughing out loud. How did you know about this one? No, don't tell me

An alternative approach

persephone

Rather than considering a single, complex, self-aware AI, have you thought about a virtual swarm? Smaller, less capable virtual entities, with their specific functions/behaviours established within the virtual world's ecosystem?

You might wish to look at some of the ongoing work on Swarm Intelligence.
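
For a flavour of what that looks like in miniature, here is a particle swarm sketch in Python - many simple agents sharing one "best so far", with no central complex mind. The task and all the constants are arbitrary stand-ins:

    import random

    def task(x):                     # the goal the swarm optimizes; no agent "knows" it
        return (x - 3.2) ** 2

    swarm = [{"x": random.uniform(-10, 10), "v": 0.0} for _ in range(30)]
    for p in swarm:
        p["best"] = p["x"]
    gbest = min(swarm, key=lambda p: task(p["best"]))["best"]

    for _ in range(200):
        for p in swarm:
            r1, r2 = random.random(), random.random()
            # inertia + pull toward personal best + pull toward the swarm's best
            p["v"] = (0.7 * p["v"]
                      + 1.5 * r1 * (p["best"] - p["x"])
                      + 1.5 * r2 * (gbest - p["x"]))
            p["x"] += p["v"]
            if task(p["x"]) < task(p["best"]):
                p["best"] = p["x"]
                if task(p["x"]) < task(gbest):
                    gbest = p["x"]

    print(gbest)                     # converges near 3.2, found by no single agent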

Oh, and you really should read William Gibson's 'Neuromancer'

Persephone

Non sum qualis eram

yep

Thank you for writing!

BTW, I Love Gibson!

My AI, hypothetically, will have been built up over a series of four years with increasing investment to run the growing NeverWorld. You are probably correct in saying that a virtual swarm would be a more practical way to explain its growth over the years as the demand grew.

That's a great, practical idea; I see it like the Borg. But I'd still need a central piece to be my communicator and my passive narrator. Will work on that.

AI in 99.999% of SF is a human brain...

... and thought process with added features like instant access to Wikipedia, Maps and Wolfram|Alpha.
To make an actual self-aware AI without twitter/FB/instagram accounts? Impossible! You don't need any kind of I (A or N, it does not matter) to have those accounts and look human to others. And there is no way to make a full AI anywhere near efficient without Internet access...
An isolated AI is very close to impossible. Any kind of intelligence means interaction, and interaction means communication. So there would be no way to ensure any kind of isolation for a sufficiently evolved AI if you need it to do something useful.
I'm in no way trying to sway your muse. Just a bit of compressed knowledge and understanding from the 25+ years since I was involved in the development of an AI system...

You Might Riff On These

Consider that the brain uses about a fifth of the body's energy supply.

Further, in consuming that energy, the brain creates a correspondingly large amount of waste. Some scientists believe this waste is what we experience as fatigue, and that it is eliminated through the bloodstream mainly while we sleep.

Somewhere in there might be something to fit into your story.

Jill

Angela Rasch (Jill M I)

some opinions-

bobbie-c

Hi. Here are my opinions regarding the questions you asked.

- - - - - -

1 - IS IT CAPABLE OF LEARNING IN A CLOSED LOOP?

The main practical model that is being used in designing AIs is the concept of “frames,” introduced by Dr. Marvin Minsky in the seventies - a data structure that divides knowledge into substructures representing so-called stereotyped situations. I may be mis-stating this but, afaik, the AI entity (let’s call her “Jane”) will be given these “stereotyped” data structures, and any new information inputted will be made to fit them - say, a data structure for an Employee File. Such a “frame” would include “slots” for a person’s name, gender, height, home address, et cetera. Each slot would be given “stereotyped” values - for gender, say, there would be two, male and female - and each slot would have a “type”: the slot could be a master frame, or a parent frame, or an instance value, an instance restraint, a default value, et cetera. And this would extend to concepts, or abstract knowledge, or any conceivable information that could be modelled into the “frame’s” “stereotyped situations.”

Jane’s “knowledge” would therefore be like the data files that a company or an institution (like a school or a bank) would have - an “employee file,” for example, or an “inventory file” or a “bank account transactions file.” What would be different is the sheer number of data file structures or “frames” Jane would have, covering all sorts of things, and the manner by which she is able to cross-reference each frame entry with other frame entries, and with other frames. In fact, most of her frames would be used to fill out other frames, allowing her knowledge to grow in an exponential manner. This ability to learn, however, would depend on the paradigms/methods/formulae/algorithms she uses, as well as on Jane’s physical being - i.e. her computer hardware’s (or whatever it is) ability to do this. So, it’s not just gathering the data.
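
As a very loose sketch of that frame-and-slot idea in Python (the “Jane”, Employee and Address names and the stereotyped values are just the examples from above, hugely simplified):

    class Slot:
        def __init__(self, allowed=None, default=None, parent_frame=None):
            self.allowed = allowed            # "stereotyped" values (instance restraint)
            self.default = default            # default value if nothing is filled in
            self.parent_frame = parent_frame  # a slot may point at a whole other frame
            self.value = None

        def fill(self, value):
            if self.allowed and value not in self.allowed:
                raise ValueError("%r is not a stereotyped value" % value)
            self.value = value

    class Frame:
        def __init__(self, name, **slots):
            self.name, self.slots = name, slots

        def get(self, key):
            slot = self.slots[key]
            return slot.value if slot.value is not None else slot.default

    # The Employee example: one slot cross-references another frame entirely.
    address = Frame("Address", city=Slot(), street=Slot())
    employee = Frame("Employee",
                     name=Slot(),
                     gender=Slot(allowed={"male", "female"}),
                     home_address=Slot(parent_frame=address))
    employee.slots["name"].fill("Jane")
    employee.slots["gender"].fill("female")
    print(employee.get("name"), employee.get("gender"))

The point is the cross-referencing: the home_address slot points at a whole other frame, and that cascade is what lets the knowledge base fill itself out.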

Anyway, what I’m saying is that the key is that data is inputted into Jane, to feed into her frames (the input being someone entering stuff via keyboard, or files sent to her, or images or audio or whatever else she captures on her own). In a closed loop, Jane would be little better than a computer that was just turned on with no one typing anything, no files being fed in, et cetera. Just like a human, if she is caught in a closed loop with nothing new coming in, then she doesn’t gain new knowledge. I guess I need to understand what you mean by a “closed loop.”

However, insofar as “learning” goes, learning is a passive thing for AIs - Jane can continue accumulating data and information, i.e. “knowledge,” but that is not artificial intelligence - that’s just like a recorder. Intelligence is being able to use that knowledge to accomplish something - perhaps to survive, to procreate, to accumulate wealth, or any number of things that she is programmed to accomplish - think of this as her instinct or her reason for being. Central to this is the concept of “self” and that old saw, “cogito ergo sum” and all that jazz.

2 - WOULD IT HAVE TO BE TAUGHT EVERYTHING?

Nope. But any intelligent creature is pre-programmed to do the basic stuff (humans are instinctively curious, and have instincts for preservation, locomotion, breathing, eating, et cetera), plus a way to gather “knowledge,” to store it and sort it, and to use that knowledge to do what it thinks it needs to do. Jane will need similar pre-programming too, and this is what will allow her to learn. And if Jane has been given the ability to look into people’s thoughts, and is pre-programmed to record and process them, then she could conceivably learn that way - her “input channels” are what dictate how she learns things.

3 - HOW INTELLIGENT DO YOU THINK THE AI NEEDS TO BE?

The great Dr. Alan Turing (despite his tortured life) came up with what is now known as the “Turing Test,” to test a machine’s ability to exhibit intelligent behavior.

In essence, if a person is able to converse with Jane, and is unable to tell whether Jane is a real person or not, then she has passed the Turing Test. So my answer to your question is, if your AI can pass the Turing Test, then it’s good.

4 - WHAT WOULD AN AI DESIRE?

Well, the idea that an intelligent being will intrinsically desire something is part of what makes it intelligent - the ability to accomplish what it desires/wants/needs through the use of its gathered knowledge. As to what it wants/desires - that would depend on what desires/wants were programmed into it in the beginning. Intelligent beings are not intrinsically given a desire to have friends, or to be kind, or to love, or whatever - humans are predisposed to these so-called instincts because they have been programmed into us via millions of years of biological evolution; such traits were necessary to survive. Because our Jane was created, she doesn’t have this “preprogramming” - as Jane’s creators, it is up to us what to program her to want or need or desire.

Also, I am assuming “dream” and “desire” to be equivalent? Because “dreaming,” in the literal sense, is not a prerequisite for AIs.

The concept of “freedom” is, to us, an abstraction - essentially, it is being able to do what we want. This is again a pre-programmed instinct. If we didn’t put it into our Jane, then she won’t want it - the same as wanting to dominate or control others: if we didn’t put it in her, then Jane won’t want it.

However, I can’t put it past Jane to learn these things on her own, and, if our original programming combined with the information she gathers leads her to believe that these are essential to her, then she will, perforce, want them, and act on them.

5 - WILL IT BE LIKE US ONLY ABLE TO PROCESS FASTER?

Like in your 4th question, it will only be like us if we made it to be like us. Will it be faster? Again, if it was made to be that way, then it will be. Will it have the same desires? Again, if we made it that way… Would its desire simply be to grow, expand, and acquire knowledge? Again, if we programmed it that way…

But, like in #4, if our original programming combined with the information she gathers leads her to believe that she needs to change her nature, then she will, perforce, change it - unless, of course, she has some “hardcoded” programming, like Asimov’s 3 Laws, that will prevent her from doing so.

6 - COULD SOME AREAS OF ITS BRAIN BE STUNTED OR UNDERDEVELOPED?

I think it’s a mistake to assume equivalencies to biological entities - a deliberately-constructed device will not have “stunted” or “underdeveloped” parts unless something unforeseen happened to cause that: the AI robot/device/creature is deliberately made, therefore its aspects are all deliberate. Could an AI be genius-smart in some parts of its brain and remain ignorant of desires? Sure - if it was made that way. Unless something unpredicted happened to it.

7 - COULD YOU WRITE RULES INTO ITS CORE LIKE ASIMOV’S THREE LAWS?

I don’t think that’s nonsense. Of course you could write rules into its so-called “core” (for me, I’d call it the base program). Would an AI follow rules limiting its power, behavior, and desires? Well, why not? Unless you also programmed into it some kind of logic that will allow it to circumvent these rules - which is stupid, I think. It’s like allowing someone to override her desire to breathe. The AI is a deliberately-created thing - it is made up of what you put into your creation.

Unless something happens to cause these “limiters” to be disabled in some way, of course.

- - - - - -

There you go. These are, of course, just opinions - no need to argue over them. K?


there you go..

Thank you for taking the time to write something that makes sense to me. As I read through your paragraphs, I could already see the loopholes and the potential for a runaway-AI story.

This is chock full of potential.

AI - self preservation

An AI would _not_ have a sense of self-preservation, or of reproduction of the species. There would be no real need or desire to program it in, and realistically, it doesn't make logical sense for a virtual intelligence. The only way that would show up is if the intelligence was based on neural templates from organic beings/animals (like in the FreeRIDErs universe). That said, you could see some behaviour much like a petulant child's, simply based on lack of 'real' experiences. That would be a major issue, as any artificial intelligence would 'experience' faster than a human.

As for intelligence - no, it does NOT have to be smarter than a human. When running a 'world', the physics have to be programmed in, and everything flows from that. Animal behaviours, etc., are also just programmed in. The AI must simply be faster, and have more resources for tracking everything. What that really means is a distributed intelligence: a master core that tracks everything as a whole, and individual units (or clusters of units) that track sections.

Troy


I'll get a life when it's proven and substantiated to be better than what I'm currently experiencing.

various answers

It's interesting how varied the responses to my questions are. It's tempting to make up my own science. Sigh.

One thing to help - what an

One thing to help - what an AI would be concerned with is _efficiency_. Humans aren't terribly efficient, and they're also very short-sighted, so the AI would likely be given algorithms that weight efficiency very heavily.

Mentioning your 'swarm' earlier - think of it as being like what they think some dinosaurs were, with the extra-large ganglia at the end of the spine, over the rear legs. The 'brain' would be running everything, but would be sending instructions through the clustered systems. Those clustered systems would then delegate downwards. Information would be aggregated and then flow back to the main 'brain'. Only one AI, but it would only get involved 'in person', as it were, if the instructions at the end points hit a glitch. Otherwise, everything would run by reflex and muscle memory.
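
A sketch of that brain-and-ganglia layout in Python (all the class, sector and task names here are made up; the point is just that routine work never reaches the top):

    class Node:                          # an end point, running "by reflex"
        def __init__(self, region):
            self.region = region
        def handle(self, task):
            if task == "glitch":         # only a glitch escalates upward
                raise RuntimeError(self.region + ": needs the brain")
            return self.region + ": done"

    class Cluster:                       # the ganglia: delegates, then aggregates
        def __init__(self, nodes):
            self.nodes = nodes
        def delegate(self, task):
            reports = []
            for node in self.nodes:
                try:
                    reports.append(node.handle(task))
                except RuntimeError as glitch:
                    reports.append(("ESCALATE", str(glitch)))
            return reports

    class Brain:                         # gets involved "in person" only on escalation
        def __init__(self, clusters):
            self.clusters = clusters
        def run(self, task):
            reports = [r for c in self.clusters for r in c.delegate(task)]
            escalations = [r for r in reports if isinstance(r, tuple)]
            return escalations or reports

    brain = Brain([Cluster([Node("sector-1"), Node("sector-2")]),
                   Cluster([Node("sector-3")])])
    print(brain.run("tick"))             # routine: handled entirely below the brain
    print(brain.run("glitch"))           # only now does the brain see anything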


I'll get a life when it's proven and substantiated to be better than what I'm currently experiencing.

AlphaZero

A major advance has been made in AI, as indicated by the program AlphaZero. AlphaZero was given the rules of chess, including the goals that a win is better than a draw which is better than a loss, but no other information. It learned how to play by playing against itself. After less than a day, it was by far the best chess player in the world. There are several series of YouTube videos on its games against Stockfish, the previous best chess player in the world. They are like nothing anyone has ever seen.

So yes, it is possible for an AI to learn without being taught. Chess is a very restricted universe, of course, but AlphaZero learned in less than a day. Working out human interactions will be much harder, but no longer looks impossible.
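
For a toy version of that learn-by-self-play idea, here is a sketch - not AlphaZero, just tabular Monte Carlo self-play on single-heap Nim (take 1-3 stones; taking the last stone wins). All the constants are arbitrary:

    import random
    from collections import defaultdict

    Q = defaultdict(float)                     # Q[(stones, move)] -> learned value
    ALPHA, EPSILON, GAMES = 0.1, 0.2, 50000

    def best_move(stones):
        moves = range(1, min(3, stones) + 1)   # legal moves: take 1-3 stones
        return max(moves, key=lambda m: Q[(stones, m)])

    for _ in range(GAMES):
        stones, history = 21, []
        while stones > 0:                      # both sides share the same policy
            if random.random() < EPSILON:
                m = random.randint(1, min(3, stones))
            else:
                m = best_move(stones)
            history.append((stones, m))
            stones -= m
        reward = 1.0                           # whoever moved last just won
        for state, move in reversed(history):  # walk back, flipping perspective
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward

    print(best_move(21))                       # learns to leave a multiple of 4: prints 1

Given only the rules and who won, the table converges on the classic strategy (always leave your opponent a multiple of four) without ever being told it - the same shape of trick AlphaZero pulls at an incomparably larger scale.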

You can get more information by googling AlphaZero and "AlphaZero vs Stockfish". If you are interested in chess, I highly recommend doing the latter and looking at some of the videos. The games are astonishing.

excellent

So there's still hope for me! Thanks for your input.

A way to make an AI system

A way to make an AI system not static, but much more like a human being, is to create her with some imperfections, like us people. And those are not amount of memory or calculating speed; it's more like fear, anger, pettiness, ignorance. Learning to cope with our imperfections makes us human. Otherwise there would be only a machine - no matter how big and powerful, still a machine.

interesting idea

I like what you have to say and thanks for writing. I just got off the phone with Stacy (StacyInLove). She's my sounding board sometimes. She and I just walked through a very interesting storyline that will work. And I think I can satisfy everyone's criteria and have an interesting tale to tell.

Now I have to cut and paste everyone's great suggestions and references and do my homework!

Of course, the first edition will be here and hopefully everyone can continue to give me wonderful constructive criticism to make the final novel its best.

I am reminded of an old saw

Sammi
How do you explain a rainbow to someone who was born blind?

Brian Herbert and Kevin J. Anderson, in one of the DUNE prequel novels, broach an AI's understanding of emotion and creativity.

Erasmus, the independent and somewhat eccentric robot (aka Thinking Machine), "wrote" a concerto for Serena Butler, a POW from the League of Nobles.

Serena afterwards attempts to explain why Erasmus failed.

She says "you may have analised every piece of classical music known and have mathematically constructed the perfect classical style concerto"

Serena sighs

"But there is no way that you can evoke a feeling 'name of peice'* gives of frollicing in a meadow on a spring day, or how a violent thunder storm inspired the 1812 Overture" she concludes.

Note: Kevin J. Anderson has written over 50 bestsellers. His original works include the Saga of Seven Suns series and the Nebula Award-nominated Assemblers of Infinity. He has also written spin-off novels for Star Wars, StarCraft, Titan A.E. and The X-Files, and is the co-author of the Dune prequel series with Brian Herbert.

* 'name of piece' is used because I can't remember the title of the piece or the composer's name, which was why I neglected Tchaikovsky's name.


"REMEMBER, No matter where you go, There you are."

Sammi xxx

Seams and flaws

My thinking is that any world, whether run by humans or an AI, will constantly develop new seams and flaws. The question then becomes: can the AI find, identify, and either fix or hide them faster than the humans? It seems to me that there will always be some flaws the AI will find and fix first, and some the humans will find first, since their strengths and weaknesses are so different. Where this divide lies is the creator's choice in any given SF world.

I also think the AI wouldn't have to be very good to avoid detection by humans, if that is what you are asking. Most humans are not that aware of their surroundings. The few that might notice flaws are likely to be dismissed as conspiracy theorists.

AIs are not known to be emotional, but they are being designed to learn (Watson, driving AIs). The human fear is that AIs will develop a goal of self-preservation, then see unpredictable humans as a threat, and then logically start exterminating humans (HAL).

extermination

Thanks for your input.

The threat of extermination makes for a good antagonist.

I was talking with my muse and friend, Stacy (StacyInLove), and we were discussing some considerations. I think she came up with a great way NeverWorld will be different. All I can say is: give me six months!

Two problems with the three laws.

Hypatia Littlewings

Two things always bugged me about "Asimov's three laws" in their basic raw form.

The first one is:
In order for them to work, the robot must absolutely know, at the most basic level, what all the terms mean in the context of the world around it. It needs to be smart enough to actually understand "What is a human? What is harm?" etc., otherwise the rules are meaningless and will never trigger.

The second problem is:
Rule two's priority over rule three, without all sorts of exceptions and sub-rules, will cause lots of problems. People are nasty and will cause all sorts of mischief to the robot, some causing real harm, others just subverting its actual proper duty. Part of that is it needs to know when not to listen to random humans who are not officially in charge of it, but also when it should.

If you think about it, the second problem also falls back on the first: the robot needs to actually understand, in context, the concepts it is applying.

Several stories did deal with the second (Second Law) problem, to some degree or another. In fact several of the earlier stories seem to include some sort of weighting of the rules, but it varied an awful lot. I can only think of two that touched on the first problem, in a limited fashion.

~Hyp >i<

Hypatia

Sammi

The first law (as I understood the Asimov stories) assumes a robot is basically an indentured servant.

As such, the first command given to them would be akin to:

I am your master/mistress, and as such no instructions overrule mine.

If it is a household robot, the master/mistress's family would overrule hired human help.

Thus the robot's priority, if its master/mistress's children are in the room, would be the children, not running for refreshments at the request of the guests or a human servant.

I have to agree though: WHAT IS HARM?

Harm to a human is generally a learn-by-doing affair of cause and effect.

Yes, adults will tell a toddler not to touch the oven door because it's hot and it'll hurt if you do.

But a toddler will try anyway, and at some point get burned.

But again, hurt for one human is degrees different for another.

One can comfortably lift and carry 10 kg for 100 meters, and another could carry the same 10 kg for 350 meters.

-
Edit:

The more I try to work out the theoretical workings of a being such as Asimov's ideal robots, the more it seems that they would have to be a series of networked AIs to operate properly.

The more important question is: are Artificial Intelligence and Artificial Sentience two different things?


"REMEMBER, No matter where you go, There you are."

Sammi xxx

Indeed

bobbie-c

I disagree, Hypatia (politely, of course - not combatively... lol).

In fact, as to your first point, Dr. Asimov considered that very dilemma in his story "The Naked Sun," where a robotics scientist, for his own reasons, wanted to create robotic spacecraft that would be able to destroy other spacecraft carrying humans - the crux of the matter being how to circumvent a robot's inhibition against causing harm to humans.

But it is a mistake to assume that the robots can understand the abstraction of what "harm" is and what a "human" is - after all, the three laws are not literally what the robots learn: the three laws, as stated, are a "layman's translation" of a very looong and tedious set of values in the robot's "positronic matrix."

It's like saying that an automated tollbooth "understands" that the person driving has already paid the toll and therefore "knows" to raise the barrier. Rather, it is just a series of mechanical and electronic linkages and mechanisms that lift the barrier due to the interaction of weights in the coin receptacle and the interruption of the beam in its detector. It does not "understand" anything or "know" anything.

You have fallen into the trap of attributing a human-like understanding of the abstract idea of "harm" (or for that matter a "human") to what is essentially just a device or mechanism. The term is "anthropomorphizing."

In Dr. Asimov's universe, it's just a bunch of hardcoded values in the "positronic matrix" of the robot's brain that, when related to each other, approximate what a human would think of as the robot's interpretation of "harm," et cetera - and these hardcoded values could conceivably number several million just to make a close approximation of that one concept of "harm."

You and I can exchange words and intrinsically know what the words mean - we directly "understand" them. The robot doesn't understand the words themselves directly, and has to relate them to things it can understand.

That's where Dr. Marvin Minsky's idea of "frames" comes in. For example, the concept of "harm" can be associated with a certain "frame" which, in turn, is associated with several other frames, and those frames are associated with others, et cetera. This cascading relationship of several million frames will, in the end, allow the robot to "understand," in a certain fashion, the abstract idea of "harm" by concatenating the relationships between millions of "frames" in the AI's processor - or, in the case of Dr. Asimov's robots, its positronic brain...

As to your second point, in Dr. Asimov's first stories about his robots, Dr. Susan Calvin, the so-called "robot psychologist" of US Robots and Mechanical Men (called "US Robots" for short), who was always one of the main characters, would explain how that hierarchical prioritization of the so-called "Three Laws" worked.

I guess, so long as the robot is able to translate the abstractions it needs to understand (such as "harm" and "human" et cetera), it is not a difficult thing to create an algorithm where one "rule" takes precedence over another, and such precedence would only be "engaged" if certain other abstractions are satisfied. The key is that the robot is able to "concretize" abstract human ideas by using, for example, Dr. Minsky's "frames" (it doesn't have to be frames, of course - I'm sure a good sci-fi writer can conjure up a nice idea of how this is accomplished).
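
A toy sketch of that hierarchical precedence in Python (the numeric scores stand in for the millions of hardcoded positronic values; nothing here resembles a real implementation):

    # Candidate actions are ranked lexicographically, so the First Law outweighs
    # any amount of the Second, which outweighs any amount of the Third.
    def law_scores(action):
        return (-action["human_harm"],    # First Law dominates everything
                action["obeys_order"],    # Second Law: obedience
                -action["self_harm"])     # Third Law: self-preservation, last

    candidates = [
        {"name": "obey order, human gets hurt",
         "human_harm": 1, "obeys_order": 1, "self_harm": 0},
        {"name": "refuse order, robot gets hurt",
         "human_harm": 0, "obeys_order": 0, "self_harm": 1},
    ]
    # Python compares tuples element by element - exactly the hierarchy in question.
    print(max(candidates, key=law_scores)["name"])   # -> refuse order, robot gets hurt

The lexicographic comparison is the whole trick: no amount of obedience or self-preservation can ever outweigh even the smallest harm to a human.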

When you say "raw form," I think you didn't go far enough into the rawness of the form. The point I am making, I guess, is that if the AI has accumulated a large enough set of Minsky-style frames, and is able to relate, index and cross-reference them fast enough, then abstractions that we humans take for granted as understandable can be approximated by an AI so equipped.

Again, these are just opinions-slash-ideas. We can disagree without needing to argue/fight.


I wish I could upvote or like

I wish I could upvote or like this. Yes - the original poster likely hadn't read Asimov's stories. It's a shame, because 'The Complete Robot' is one of the best works on how AI and humans can interact.


I'll get a life when it's proven and substantiated to be better than what I'm currently experiencing.

Did you read Asimov yourself?

Hypatia Littlewings

Huh? Did you read Asimov, or just watch the movies?
Somehow I doubt you did. Lots of questions and various possibilities come up.
If you did, and did not see them, then you were not really paying attention and need to reread them.
That is part of the fun of the stories, and you missed it!

>i<

Not only have I read them all

Not only have I read them all, I own most of them. He spent the last few years of his life sort of rolling up a lot of those possibilities, when he merged together all the positronic stories, the R. Daneel stories, and the Foundation series.

I'll admit, my favorite full book is Caves of Steel. The short stories are very good, the Young Adult books are great. His book on the Bible is a very good read for anyone interested in theology but not wanting to be hard core about it.

I think it's horrible that when people talk about mystery writers, they leave him out - his Tales of the Black Widowers is as good as anyone else's, and more fun than most.

Edit - I have also not watched any of the movies. (Wasn't there just the one, a very badly done "I, Robot" that didn't really have anything to do with _I, Robot_?)


I'll get a life when it's proven and substantiated to be better than what I'm currently experiencing.

Well, actually I disagree too, lol.

Hypatia Littlewings

Well, actually I disagree with what I posted too, yet I do think so at the same time. Like I said, it's some of the things that bothered me. Of course, there are multiple possible explanations.

One possible explanation is that the three laws don't really exist in themselves as immutable laws, but are rather a combination of marketing promotion and an overall set of programming goals; the robots are designed to effectively follow the goals even if they don't actually understand them. The stories are far from consistent, and who says they have to be? (Particularly the earlier ones.)

In some cases it seems obvious that the robots are functioning on a more complicated set of rules, or that the rules are broken down into a lot more little sub-rules. In other cases the robots seem to take an actual philosophical view of the meaning, especially the ones that deal with some sort of "0th Law", or other implied extended implications - other types of hurt.

That is sort of the whole point of a lot of the stories: think of the possibilities and implications.

>i<

I guess I didn't say it clearly...

bobbie-c

Well, Hypatia...

To clarify what I said -

The so-called "three laws" are indeed immutable, insofar as Dr. Asimov's universe is concerned: As US Robots and Mechanical Men, Inc. developed their robots, they built up from their previous designs of positronic circuits/pathways/networks of their older robot designs and the robot "brains" became "set" - all new robot brains are built up from older designs, which are built up from even older designs and so on and so on. The "three laws" were therefore so deep set into the current generation's fundamental matrices that they have become integral to all their actions, decisions, and behavior. So, in Dr. Asimov's universe, the three laws are indeed immutable. One may say, why not make a a robot brain from scratch, and not use US Robots' positronic brain designs (all of which have the three laws integral to them), but that's like saying someone should design a gasoline powered engine for a car without using any of the fundamental concepts/ideas/principles of the internal combustion engine, and not use a wheel, and, at the same time, get to the level of the present state of the art.

Creating a robot brain "from scratch" would probably take the same amount of time it took US Robots to get to the present state of the art. At that time (I am referring to Andrew Martin's time/era), I believe it had been 300 years since the first commercial robot stepped out of the factory. It is possible, but so very, very difficult as to be almost impossible. In fact, in the story "The Naked Sun," Dr. Leebig struggled to create a robot that could kill a human and, in the end, decided not to create one from scratch but rather to try to "fool" current robots into doing his bidding.

Next point - as I mentioned, the "three laws" as stated in human English (or whatever other human language) are just an approximation of what they really are, and, indeed, you are right. However, the point of the English "approximation" of the laws is to give regular humans a grasp of what they mean, not of what they literally are.

In Dr. Asimov's universe, the "Three Laws" were part of the effort of US Robots to convince the world that robots will not harm humans. They were desperate to convince people of this because, at the time, no one trusted robots, and their business was failing despite their excellent products. The company has always had such "laws" built into their robots from the beginning: the "Three Laws," as stated in the English "approximation," was already in their machines, but they made the special effort to translate them into easily-understandable human statements, and to wage a campaign of letting people know that their robots are completely safe and harmless and obedient. Et voila! the often-quoted "Three Laws of Robotics" became part of the World lexicon.

I also read someone mention that robots were akin to slaves. Well, that is what they are: the word "robot" comes from the Czech word "robota," which means "forced labor," and was first used in the 1920 play "R.U.R. - Rossum's Universal Robots" by the famous Karel Capek. But being "slaves," in the strictest sense of the word, isn't an immorality when applied to robots. Robots are, after all, tools. Only when they start developing independent intelligence would our current code of morality say that it is immoral to treat them as slaves - stress on the word "independent." If they were constructed to do forced labor in the first place, then it isn't even "forced" labor. Only when the robots achieve independent intelligence, and are still being forced, does it become immoral slavery. At least according to human moral tenets.


The even earlier Der Golem

The even earlier Der Golem (and, of course, "The Modern Prometheus") are similar 'building automatons to serve' storylines.


I'll get a life when it's proven and substantiated to be better than what I'm currently experiencing.
