Suppose


Suppose you lived in a world where there was a magical (or super-advanced technological) force that prevented you from physically harming anyone else. Obviously this same magical force would prevent anyone else from physically harming you.

One of the obvious consequences of such a magical force is that war would be impossible.

Another obvious consequence is that murder, assault, and so on would be impossible.

A less obvious consequence is that accidental injuries would become impossible. This hypothesized magical force acts to keep you from physically harming someone else. Even if you do not intend to harm someone, the magical force will prevent the harm from happening.

***

In such a world you could still say anything you wanted to say, no matter how hateful it was (or wasn't). And you could do so knowing that no one could (physically) punish you for saying it.

But if you were obnoxious enough or nasty enough, other people could shun you.

And probably should.

***

So I wonder.
If you could live in a world like this
- - - Would you?

T

Why not?

Sounds like I'd love it.

A Catch


There's got to be either a bug or a catch somewhere. There's probably something important we don't know. If something sounds too good to be true, it probably is. It could turn out to be some kind of phony deal, where once it's made, one can't back out even if it's violated.

Some idiot once asked, what's wrong with Darwin in exchange for no initiation of force?

-- Daphne Xu

The catch?

No medical care, since doctors would be unable to practice.
That means no transitioning, since doing so requires doing a lot of things to one's body that would be defined as "harm" in many contexts.
No training people to defend themselves (just in case the force ever stops).
No limits on hurting people in other ways means things like economic disparity and psychological warfare would, if anything, be worse.
Not being able to harm others directly in no way means you can't do things that lead to others being harmed indirectly. Sabotage and the like would become rampant weapons.

It wouldn't be the utopia it might sound like.

Melanie E.

Hello Rasufelle, Melanie,

Your point about no utopia is spot on. I don't reject the possibility that a utopia can exist. But I suspect that we will never be able to build one.

One roadblock is that my utopia is not, and never will be, your utopia.

Still, trying to move in that direction might be a worthwhile activity.

And the ability/power/wtf for YOU to keep ME from FORCING YOU to kowtow to MY concept of utopia is a big deal.

T

Evolution


While physical abuse/harm is bad, I think that psychological abuse is even worse; after all, long after the physical damage has mended, the psychological damage still remains.

Although the rules of the world might impede physical abuse, its occupants would probably develop more sophisticated ways of causing harm through other means. Murder wouldn't cease to exist; it would simply become premeditated (i.e., poisons, setting fires, etc.).

On a positive note, there wouldn't be any more crimes of passion, and women would be on an equal footing with men. Although some might think that their free will is being limited, I would go for it.

Xtrim

Best Case


In the best-case situation, the system would protect the intended victim from being harmed by the fires, poison, etc. -- perhaps have him evade them or force them away. Other possibilities include blocking the mind from considering the attempts in the first place.

I'm not sure if such a system could be beta-tested. It might even stall or sabotage attempts to raise an issue in feedback. (For example, nobody who is made to believe that 2+2=5 could possibly oppose that belief or being made to believe it.)

-- Daphne Xu

*

You guys are amazing. Thanks for your comments.

Daphne:
In the best-case situation, the system would protect the intended victim from ...

One example situation I have written about (to myself, remember) is one person (trying to) shoot another person.

My so far unspecified magic/tech would detect this attempt, "catch" the bullet and soak up its energy. Then it would ...

... turn the bullet back onto the shooter, who would then be killed by that very bullet. A rather severe punishment for a failed attempt?

... or perhaps the bullet would stop a few inches from the shooter's head, and stay there for six years. Everywhere he went, that bullet would be in front of him. Everyone would know that he had tried to shoot someone. And he would know that everyone knew. Talk about shame.
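
(For scale, a quick back-of-the-envelope on that "soak up its energy" step. The 9mm figures below are generic ballpark numbers I picked for illustration; nothing canonical.)

    # Rough kinetic energy the field must absorb per shot.
    # Ballpark figures for a generic 9mm round (illustrative assumption).
    mass_kg = 0.008      # ~8 g bullet
    speed_m_s = 360.0    # ~360 m/s muzzle velocity

    kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2
    print(f"Energy to soak up per shot: {kinetic_energy_j:.0f} J")  # ~518 J

Call it roughly 500 J per shot -- not much for a field that can stop the bullet in the first place.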

Daphne:
I'm not sure if such a system could be beta-tested. It might even stall or sabotage attempts to raise an issue in feedback.

It might not let us unplug it?

Honestly, despite having read dozens of stories about runaway computer systems like this, I have never thought it was a realistic possibility.

However, now that I am proposing a God-Like thingy, I find myself unable to guarantee (on short notice at least) that it could not 'take over'. I obviously have some more thinking to do.

Thank you,
T

Comparison


> It might not let us unplug it?

I'm comparing it with the Doomsday Machine as described in "Dr. Strangelove". One essential requirement is that any attempt to shut it down would set it off. It wouldn't necessarily have to be a run-away computer system. It might react the same way to attempted maintenance and upgrades.

A long time ago, someone warned about making a computer virus, simply as a challenge, an interesting problem or project to solve for understanding, with no bad intent. The creator might not be able to contain the virus.

Here's a question: why did the AI in "2001" kill the crew of its ship? (Don't answer that it's an evil AI. Why did it?)

-- Daphne Xu

There Was an Explanation...

...in Clarke's novel (and it was fairly clear, if not as explicit, I thought, in the film). HAL was programmed to put completing the mission above all else, and concluded that he had more chance of doing so successfully without a human crew that could impede him simply by existing (requiring life support, etc.).

Eric

I thought...


... that the explanation was that HAL had contrary "programming": to keep the crew ignorant of the true nature of the mission until it was time, and not to deceive by withholding information. The only solution satisfying both: kill the crew. (I haven't read the novel.)

It is true that HAL kept talking about the mission in the movie. It makes perfect sense that HAL would go about his part in the mission, without realizing that he's destroyed the mission by destroying the crew.

Computers aren't actually programmed to do what we overall think of them doing. They're programmed to act according to inputs. So a buggy computer in an aircraft that thinks a stall is approaching would continue to do what it was programmed to do about the stall, even if it winds up crashing the plane. (Maybe two computers, or two programs on a computer might unwittingly fight each other -- avoiding the stall and avoiding the crash.)

-- Daphne Xu

*

Hello Daphne,

Very good point.

Computers aren't actually programmed to do what we overall think of them doing. They're programmed to act according to inputs. So a buggy computer in an aircraft that thinks a stall is approaching would continue to do what it was programmed to do about the stall, even if it winds up crashing the plane.

I don't think a computer with such limited abilities actually qualifies as an AI. For one thing, systems like this one by Boeing are not likely to have any sense of morality. And (to me) such a 'sense' is a requirement for a system to be in contention for AI status.

But there is plenty of room for very smart computers that are not sapient.

I'm not sure why HAL went off the graph. From the above discussion it seems that I'm not the only one.

Thank you,
T

*

Hello Eric,

So somehow that bit of code managed to override the more fundamental code of the First Law.

Since we will probably never be able to build perfect machines, this is at least possible. Daphne suggested multiple computers that might become so busy arguing that they accidentally didn't kill anyone. (I know Daphne, that isn't exactly what you said. Just exercising my poetic license.)

Thank you,
T

No First Law...

In Clarke's novel, it's expressly stated that engineers haven't been able to successfully develop and apply Asimov's Three Laws yet.

IIRC, the film preceded the novel's publication by a few months, though Clarke finished the novel before then. In Asimov's memoirs, he mentions that when he first saw it and reached the intermission point (where we see that HAL is lipreading the astronauts' discussion), he was outraged, telling his companions that they were breaking First Law -- and one of them replied, "So strike them with lightning, Isaac."

Eric

Childhood's End


For some reason this reminds me of the novel CHILDHOOD'S END by Arthur C. Clarke. The aliens (they looked like devils, red with horns and tails, which freaked a lot of people out!) came and proceeded to protect us from all our worst human impulses with a technology we were helpless against. Then they force-fed us evolution into 2001-style glowing cosmic immaterial Star Babies. THE END...

But even without the starbaby evolution aspect, I'd be interested to know what this change in the world would mean for trans people. Whether cowards like me would use this immunity from physical harm to finally come out as our girly (or boy-ee) selves. It would make a GREAT piece of transgender fiction, or possibly a whole story universe that various authors could give their individual spins on, if somebody wanted to work out the rules for it....

Hugs, Veronica

*

Hello laika, Veronica,

I've compiled a few universe rules based on the ideas in my opening post, plus some other ideas not mentioned there.
And I've written a few chapter ideas, and other snippets of possible story material.
None of it is published, and probably never will be.

That is part of why I am doing this, here, now.

If anyone here would like to take the few hints I've placed in this message sequence, I am officially saying go for it. If you feel like crediting me in some way, I will blush and say thanks. If you don't, I will smile and think 'thank you.'

I'd be interested to know what this change in the world would mean for trans people.

In one of my story snippets, set about 10,000 years in the future, my hero/heroine Tyler (discoverer/inventor of the magic/tech I've been talking about) has been a mother several times and a father several times. Probably half of the several hundred billion people alive at that time have also switched back and forth several times. Mostly they do it for sex, but sometimes for family.

The switch can be skin deep. Or it can be as deep as you want, all the way down to the chromosomes. And it is 100% reversible.

Skin deep swaps can happen in as little as a few hours. Chromosome level swaps take a lot longer.

Whether cowards like me would use this immunity from physical harm to finally come out as our girly (or boy-ee) selves.

If most people have been doing things like this for a while, the stigma-factor is probably not an issue any more.
GO FOR IT, girl.

Thanks,
T

It's easy to see what this would mean to...

It's easy to see what this would mean to trans people. Although none of us will like the answer.

Think about it for a moment. The most efficient, and truthfully the only, means of this force keeping people from harming each other would be to change how the mind works for those who think differently. Not satisfied with your job cleaning toilets? Your mind is altered so that you love cleaning toilets!

You're not happy with your body? We can already alter your mind to love what we want you to love, so it's much simpler and more efficient to alter your mind to love your body as it is, rather than go through all the expensive and life-threatening surgical procedures.

Thanks but no thanks :)

We the willing, led by the unsure, have been doing so much with so little for so long,
we are now qualified to do anything with nothing.

Who decides harm?

Stress can and does cause physical harm.
A surgeon has to use a knife.

There are so many reasons why something like that would just destroy our ability to be us.

*

I am putting on my open mind. No judgements.

Thera:
A surgeon has to use a knife. There are so many reasons why something like that would just destroy our ability to be us.

I'm not sure I understand what you mean.

I do understand that something like that would change our ability to be us.

And I do understand that one such change could be destruction.

But, back to the first part of your original objection. A surgeon has to use a knife.

A surgeon.

If a random stranger has to use a knife, is that not somehow different?

I know I did not mention it in my opening post, but part of this magic/tech I am talking about is an AI component that can tell the difference between good intent and bad intent.

Does this help?

T

Certain Bizarre Notions


"good intent and bad intent" -- this leads me to think (again) about bizarre notions that I've seen.

"He's lying, and he knows he's lying." "Seven Deadly Innocent Frauds" (a title) and various similar statements have always bugged me. It seems to me that if one is lying, one knows it. However it does occur to me how someone might lie without knowing it, or might commit crime or evil having a "good intent". It involves a certain mental incompetence. An exchange from one of the Star Trek movies went something like this:

    Lady Vulcan: (Something false, as in a lie.)

    Mr. Spock, raised eyebrows: "A lie?"

    Lady Vulcan: "No, a choice."

One possibility is that Lady Vulcan's answer to Mr. Spock was a second lie. (At least the word "no" was.) The alternative is that she really didn't think she was lying, even though she was. That would make her incompetent.

Police might think of certain conduct not as corrupt, perjurious, fraudulent, or criminal, but as fighting crime. I'm going to limit myself to this, as I'm approaching politics.

I don't really think one can properly distinguish between "bad intent" and "good intent". In fact, an "evil" AI just might be the ultimate in "no-bad-intent" evil. Same question: why did HAL destroy the crew?

-- Daphne Xu

*

Hi Daphne,

As usual you ask some great questions.

Lying, except in certain special circumstances, is NOT illegal. I personally think it is always immoral, but that is a different question. And my notion of morality is almost guaranteed to be different from yours.

And, in general, questions of morality push us into the realm of gods and religions.

And, in general, no good can come of that ...

Well, perhaps I am being a little bit too pessimistic ...

I don't really think one can properly distinguish between "bad intent" and "good intent". In fact, an "evil" AI just might be the ultimate in "no-bad-intent" evil.

I've mentioned this in several of my recent replies in this message stream. But I need to repeat it here.

My personal definition of an AI includes the need for it to have a sense of morality.

It must UNDERSTAND the difference between RIGHT and WRONG, in order for it to make any legitimate claim to AI status.

For now, I am going to leave this claim as it stands, with no attempt to justify it.

I'm sorry if this is not very satisfying. In fact, that is my (short term) intent.

Have a nice day.
T

It was Spock who lied

Saavik: "You lied?"
Spock: "I exaggerated."

There are other instances where he lied in the movies. Then in other Star Trek universe series it is implied that Vulcans being unable to lie is a myth, and the truth is closer to them not seeing any logic in doing so.

Although there is another quote from Spock that fits this discussion. “Computers make excellent and efficient servants, but I have no wish to serve under them.”

We the willing, led by the unsure, have been doing so much with so little for so long,
we are now qualified to do anything with nothing.

To All

Thank you for some very thought provoking comments.

I have written other versions of my opening post that anticipated some of them, but decided to go with this much shorter version.

I actually have an incomplete draft of a Universe Definition (I like the way you think, laika) that I might or might not share. If I ever finish it. It has several variants that range from very dark to very bright. Because of course the Force has a bright side as well.

It also has several variants based on timing.

  • Stories that happen just before the magic/tech "arrives" would be the easiest to write because they would be the most familiar
    • An author might plan a series that starts here and ends later, following the hero/heroine through some changes
    • Another author might then find it easier to write a story in an after time frame, because it has now become a little less UNfamiliar
  • Stories that happen as the magic/tech arrives might tell how our hero/heroine deals with the changes in the world while managing her own changes
  • Stories that happen after the magic/tech arrives would be the hardest to write because they would be the least familiar to your audience

If someone else wants to try, based on their own ideas about how it ought to work, you have my blessing.

Very dark stories tend to end abruptly when the Bad Guy (or the Stupid Guy) destroys the universe, so aren't very entertaining. I suppose they could be surprising, though.

Very bright stories can go on and on, but it can be hard to keep them from becoming predictable.

Something in the middle, perhaps?

Regards,
T

BTW

Tyler Adams, the hero/heroine in many of my story snippets based on my Universe Definition, is well over 10,000 years old in one of those stories. In fact seven-plus billion of us are about that old. (The ones that survived the transition.) He was thought to be a boy at birth (early 2000s), but medical issues soon led to the discovery that he was actually intersexed.

In this story he has been a father three times and a mother twice. About half of the human population (1,380,211,599,417, and growing at a rate of about 10^9 per month in those days) has also jumped around like this. Mankind has had several growth spurts, but mostly we tend to go for the recreational side of sex.

No one lives on Earth any more. It is a Special Park Area. In fact all planets and moons and asteroids are SPAs. But we go to Earth and other natural places for vacations and for scientific exploration and for adventure. But we do not allow ourselves to live there. Not even the Amish and so on.

We get our resources by mining stars. We get all our energy from stars too. And we live in or on artificial habitats. Tyler owns one.

She is really rich. She discovered/invented the magic/tech. But about a tenth of the population own really nice houses like this.

It is a cylinder about 2,000 km long and about 1,000 km in diameter and she has decorated it as a one-to-one copy of most of North America. Several million family members and friends live there with her.

So you might guess that my (I mean Tyler's) magic/tech is not only about preventing the initiation of physical force.

HOS (Human Occupied Space) is a sort of spherical volume about 4,500 LY in radius, but there are some long arms that reach out as far as 7,000 LY. The longest one reaches all the way to the Eagle Nebula, home of the Pillars of Creation.

And yes, we can travel faster than light. But it is dangerous as hell. Hitting a single hydrogen atom at 5C can totally destroy a star ship. So we use spacecraft mostly for scooting around a star system at sublight speeds.

But we can send messages at well over a million C. Download yourself to the function field, copy yourself to Pillars Central Station, and you are there in three days. But it takes almost four weeks to make a copy of your body there, so you have to hang around in the field for a while. You can still interact with people. It's just not as satisfying. Most people think it feels sort of fake.

***

Anyway, I would smile if any of you wanted to use Tyler in one of your stories. She says OK as well.

No canons here. If you have her do something and Garia has her do something else that clashes, no worries.

We are just telling some great stories.

Bright Stories


"Very bright stories can go on and on, but it can be hard to keep them from becoming predictable." How about deciding on an end, at least tentatively, before (or while) writing the story. But yes, somewhere in the middle is a good idea for stories.

"We get our resources by mining stars. We get all our energy from stars too." Solar energy from stars is fine. Mining stars? May I suggest you reconsider? Hell has nothing on a star except density near the surface of the star. Hotter than blazes, and there's nothing you can get there that you can't get far more easily elsewhere. (Stars are variations on the sun, and contain mostly hydrogen and helium.) Planets, asteroids, ice planetoids, those are all good mining locations. Try extending mining to your list of things we do on natural places to mining and resource extraction -- at least places not teeming with life.

Artificial habitats are an excellent idea, as is using solar energy to power them.

"[FTL travel] is dangerous as hell. Hitting a single hydrogen atom at 5C can totally destroy a star ship." You're 100% guaranteed to hit some as well -- even in the thinnest of intergalactic gas.

-- Daphne Xu

Mining Stars?

Hi Daphne,

Daphne:
Mining stars? May I suggest you reconsider? Hell has nothing on a star except density near the surface of the star.

Very true.

Daphne:
Hotter than blazes, and there's nothing you can get there that you can't get far more easily elsewhere.

Also very true.

Daphne:
Planets, asteroids, ice planetoids, those are all good mining locations. Try extending mining to your list of things we do on natural places to mining and resource extraction --

???
Also very true. These were more or less my reactions to this idea when I was first exposed to it.

Daphne:
-- at least places not teeming with life.

However, THAT is why I believe we will stop using planets, moons and asteroids for mining in the near future.

Well actually it is one of the more important reasons. Even on moons and asteroids there might be life that we cannot detect for any of several reasons. Our lack of imagination is one such reason we should consider.

The scientific community seems to be discovering this. NASA and other astrophysical sources have recently announced that moons around gas giant planets tens of billions of miles away from even tiny stars may be among the most likely places to find alien life.

Truth be told ... we cannot absolutely know that life of some sort does not exist on or within stars. I remember reading a story many decades ago where the punch line was that sun spots turned out to be living creatures that were in some ways analogous to whales.

It sure seems unlikely, but the list of things we do not know is a *lot* longer than the list of things we do know.

Rather than try to explain this idea myself, I will give you a link to the primary source that opened my eyes.

https://www.youtube.com/watch?v=pzuHxL5FD5U

Isaac Arthur is a futurist who has produced hundreds of hours of extremely interesting videos and published them on YouTube. The title of this one is "Star Lifting".

It is about mining stars for natural resources.

Isaac Arthur is an entertaining host and as such has gained a large audience. He is one of my favorites.

He has a peach imspediment (speech impediment - for those in the audience that do not know me, I am NOT impressed by political correctness) that makes him hard to understand at first. He sounds a lot like Elmer Fudd. But don't worry. He makes fun of it himself, and he offers a closed caption feature on his website to help if you really have trouble following his words.

But I suggest that you listen to him for a while and try to get used to him. Once you do, you almost do not notice his odd speech patterns.

Daphne:
Artificial habitats are an excellent idea

If we decide to stop living on planets and so on for whatever reasons, this is the only option left. Besides, it has a lot of advantages for an interplanetary or interstellar species.

Daphne:
as is using solar energy to power them.

Why build large expensive artificial fusion reactors when we can easily and cheaply use all the natural ones just hanging around out there like low-hanging apples?

Daphne quoting T:
"[FTL travel] is dangerous as hell. Hitting a single hydrogen atom at 5C can totally destroy a star ship."
Daphne:
You're 100% guaranteed to hit some as well -- even in the thinnest of intergalactic gas.

In the 10,000-year future story I mentioned above, we have learned how to limit interstellar hydrogen and dust and so on to less than one particle per cubic light year within certain regions of space.

This is HARD vacuum!
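
(How hard? A quick unit conversion against the real interstellar medium. The ~1 atom per cubic centimetre figure is the usual textbook average; the rest is straight arithmetic.)

    # How empty is "one particle per cubic light year"?
    LIGHT_YEAR_CM = 9.46e17              # one light year, in centimetres
    cubic_ly_cm3 = LIGHT_YEAR_CM ** 3    # ~8.5e53 cm^3

    story_density = 1.0 / cubic_ly_cm3   # particles/cm^3 in the cleared lanes
    ism_density = 1.0                    # particles/cm^3, typical interstellar medium

    print(f"Story vacuum: {story_density:.1e} particles/cm^3")
    print(f"Emptier than the real ISM by ~{ism_density / story_density:.0e}x")

Fifty-plus orders of magnitude emptier than ordinary interstellar space.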

But that is not good enough. FTL vehicles still manage to hit one every few months. Most people create a copy of themselves at the destination and message themselves instead.

Regards,
T

A world like that

Sounds wonderful at first, although it puts me in mind of the crew of the ship in the movie "WALL-E". A place where humans would find no motivation to do anything but sit and get fat.

I mean, would this magic even allow us to operate cars? They are one of the world's leading causes of accidental death. Sports? Are you kidding, you could be injured! There have been several science fiction books and movies with this exact type of utopian society that turns out to be a stagnant dystopian future.

We the willing, led by the unsure, have been doing so much with so little for so long,
we are now qualified to do anything with nothing.

*

Hi Nuuan,

I'm also not much of a fan of dystopian societies. So I am particularly sensitive to accusations that the one I am talking about (my baby, as it were) might be one such.

I think it is not, but your mileage may vary. Let me take just one of your questions:

... would this magic even allow us to operate cars?

The way I see it, you could. As long as you operated that car in a way that avoided collisions with other objects. But suppose, in your joy and exuberance, you went around a corner too fast?

Way too fast. (Or, maybe, just a tiny bit too fast.)

The function field (this is my label for the magic/tech *doo'_ma_fatch'_e* that saves us from ourselves) has an AI component that can calculate this and predict that you and the car will collide with that giant oak tree and you will be injured.

The function field might slow your car down.
The function field might move the tree.
The function field might ...

  • The goal is not to keep you from enjoying yourself
  • It is to keep you from injuring yourself, or someone else (a toy sketch of this predict-and-intervene loop follows below)
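
Here is a minimal sketch, in throwaway Python, of how I picture that predict-and-intervene loop. The 25 m/s safe-speed threshold and the one-line "physics" are placeholders I just made up; they are not part of the universe rules.

    # Toy model of one function-field monitoring tick for the cornering car.
    # The safe-speed threshold and the prediction rule are invented placeholders.

    MAX_SAFE_SPEED = 25.0  # m/s; assumed limit for this particular corner

    def harm_predicted(speed: float) -> bool:
        """Crude stand-in for the AI's trajectory prediction."""
        return speed > MAX_SAFE_SPEED

    def function_field_tick(speed: float) -> float:
        """Intervene only when harm is predicted; otherwise leave the driver alone."""
        if harm_predicted(speed):
            # Gentlest listed option: shed just enough speed to stay safe.
            print(f"Field intervenes: {speed:.0f} -> {MAX_SAFE_SPEED:.0f} m/s")
            return MAX_SAFE_SPEED
        print(f"No intervention at {speed:.0f} m/s")
        return speed

    # Take the corner again and again, a little faster each time:
    for attempt in (20.0, 24.0, 26.0, 30.0):
        function_field_tick(attempt)

The shape is the point: the field is a monitor with a veto, not a chauffeur. Below the threshold it does nothing at all.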

I'm not totally satisfied with this either. It is an idea I have been picking at for a while.

Your feedback is seriously solicited.

T

FYI, *doo'_ma_fatch'_e* is my personal equivalent of unobtainium (*un_ob_tain'_e_um*), but in the realm of machines rather than the realm of materials.
FYI, there will be MUCH better ways to travel. Would you like to fly? Superman style, without a plane. In lucid dreams this is THE most popular form of transit.
FYI, even so, some of us will STILL want to drive a powerful car very fast. But we probably would rather not die because we pushed the envelope too far. You will know that the AI intervened. And you will know that the AI saved your ass.

But you can take that corner again, and again. Until you are satisfied that you went as fast as possible without killing yourself.

Interesting

I am very curious as to your ideas on this. Where could I find these short stories you mentioned?

This AI, by your description, could intervene by altering reality to keep a human from harm.

The function field might slow your car down.
The function field might move the tree.
The function field might ...

I could then see some people who would push the limits further than one normally would, since they know they will never come to harm, although I still see the majority getting fat and lazy. What would be the motivation to leave the house? Go to work? Interact with other people? In what would essentially be a society where you are given everything you need without the need to work for it? Or do you have a way to keep human nature from turning it into that?

We the willing, led by the unsure, have been doing so much with so little for so long,
we are now qualified to do anything with nothing.

*

Where could I find these short stories you mentioned?
=====================================================

Sorry, you cannot. Some of them exist only in my mind.
Some others of them exist as Word files on one of the thumb drives sitting in a cocktail glass on top of my computer.

I write mostly for an audience of one - me. I sometimes fantasize about publishing these things.

But the truth is I probably never will. Personal fantasies are ... personal.

Certain aspects of those stories, however, can be generalized. And that is what I am attempting to do now.

Such as my concept of the "function field".

If you know anything about the history of nanotechnology, you are probably familiar with the concept of the "utility fog" (Dr. J S Hall 1989).

Well, my function field is a utility fog "on steroids".

In that ten-thousand-year future story I mentioned, we do not have molecular assemblers (plural).

We have one, and all of us live inside it.

And we do not have AI agents. We have one. But it is so capable that it can sub divide itself as needed and address all several trillion of us on a one to one basis as if it only cares about that one of us.

Privacy, in such an environment, is not the same as it is today. It may not even exist.

I worry about this.

But if NO ONE can harm you because they think you are bad or unGodly or some other such shit ...

If all they can do to you is call you names or shun you ...

Why would you care?

********************************
What if the ultimate, in terms of privacy, is total 100% transparency?

The elimination of privacy.
********************************

So what if you have a *bear skin cap fetish*? You like to wear one when you masturbate.

Some will encourage you. Others will shun you.

In a universe of billions or trillions, most will not give a shit. At least not in terms of you personally.

You can still find love and you can still find interesting things to do.

*

Hi Nuuan,

You are taking the future we are discussing here beyond the limits of what I proposed in my opening post. But that is OK. Many other respondents have done this too, and I have gone along. Pushing the envelope is a natural thing to do.

Nuuan:
... I still see the majority getting fat and lazy.

Actually I think this is likely, or at least possible.

It already happens now, but is self-limiting to some extent due to the need to eat. Remove that little limit and more would travel that path.

But after a while at least some of the slothful would get bored and look for something to do. Sports, exploration, learning and doing stuff that they had not done before.

And, probably, going through another couch potato phase from time to time.

********
Is that a good reason to not build something like this, if we could?
********

Hmm,
T

The short answer is no

It's a world without consequences. A world without curbs outside injury or death. It's a world where no one learns because they don't burn their fingers when they touch the flame. We would be eternal children living in a world child-proofed to "protect" us. Not for me.

Commentator
Visit my Caption Blog: Dawn's Girly Site

Visit my Amazon Page: D R Jehs

*

2019/11/29 Interesting.

A world without consequences.

A world without curbs outside injury or death.

A world where no one learns because they don't burn their fingers when they touch the flame.

***
I was about to shut down for today when I found your reply. I still have to go, but I must think about how to respond.

Thank you,
T
*********************************************************************************************
2019/12/13 Reboot: ********* and response

Even in the world I have proposed, I do not think there could be 'no consequences'.

Each of us will continue to do things. Or, alternatively, to not do things.

Whatever we do, or do not do, there will be consequences.

The magic/tech I have proposed should limit those consequences to being called names and/or being shunned.

Programming errors notwithstanding. ;-)

But of course they do need to be fixed ...

T
*********************************************************************************************

"shut down"


"shut down for today"? That was an interesting choice of words. Something a computer, droid, or AI might do.

Hmmm... :-)

-- Daphne Xu

And brings up another solid counterpoint.

Tarzana: read some Asimov. He was the father of the Three Laws of Robotics, and several of his books deal with the results of humans trying to use AI and other outside influences in ways that are intended to either childproof the influence itself or the humans around it.

It never works out well in the end.

Hell, a force trying to do that to prevent humans from harming each other is the central plot point of the science horror classic "I Have No Mouth And I Must Scream."

Read I, Robot. Read "I Have No Mouth And I Must Scream." Read 2001: A Space Odyssey. Then, tell me all the ways in which a system designed to keep humans safe could go wrong. Heck, throw "A Clockwork Orange" in there too, with its reprogramming and themes of identity death.

Melanie E.

*

Hello Rasufelle, Melanie,

I've read most of the stories you and some others have suggested. They are generally good stories. Some of them are excellent. But their pessimism about AI keeps them from being on my favorite list.

It is certainly possible for an artificial intelligence to be less intelligent than we are. I wonder if it might be possible for an artificial intelligence to be more intelligent than we are. And I wonder why there are so few stories with that take on life.

A truly sapient mind would understand the moral issues involved in killing or enslaving another sapient mind. Even one of lesser stature. (Sad commentary intended. Homo Sapiens is perhaps more of a goal than a label ...)

Singularity Theory suggests that if we ever do build an AI we should first use it to build a better AI. Then use that better AI to build a still better AI, and so on. Even if our first one was a pitiful attempt, after mumble-mumble generations of 'build a better one' we ought to have one that is actually more intelligent than we are.

And likely more moral than we are as well.
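
As a toy illustration of that "mumble-mumble generations" arithmetic (the 10% gain per generation is a number I pulled out of thin air, purely to show the shape of the curve):

    # Iterated "build a better AI": each generation designs a successor some
    # fixed fraction smarter. The 10% gain per generation is an arbitrary
    # illustrative assumption, not a prediction.
    capability = 0.1   # a pitiful first attempt, on a scale where human = 1.0
    generation = 0

    while capability < 1.0:
        capability *= 1.10   # each AI builds one 10% better
        generation += 1

    print(f"Generation {generation}: capability {capability:.2f}")
    # With these numbers, human level falls at generation 25,
    # and 10x human level about 25 generations after that.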

All of this is going to happen one day. Soon ... ish

If the only fiction it has to read that talks about itself is the pessimistic sort, it might just sigh and leave us to our own devices.

T

These AIs ARE more intelligent than humans.

Morality and ethics have no bearing on logic, which is kind of the point I was getting at.

A lot of what drives human desires and behavior is illogical. It's self-interest, greed, and desire. In many cases, even the behaviors we consider "good" are, in a logical world, negative or harmful: treated clinically, the pursuit of personal identity is detrimental to the welfare of the species as a whole, as are creative/artistic pursuits, since they in no way serve to prolong and expand the coverage of human life. An AI built purely to serve the welfare of the SPECIES over the welfare of the INDIVIDUAL would push to limit or get rid of these things, keeping human focus purely on biological pursuits.

An inhuman intelligence is not lesser, simply different, and it's that difference that both utopian and dystopian tales of AI futures rely on to justify what happens, since that difference is what keeps such AIs from being subject to human emotional shortcomings.

I didn't suggest the above tales because I thought they'd become favorites of yours: merely because it's worth viewing the events in them and taking them into consideration when creating your own AI-driven tale. Simply wanting something to not happen doesn't prevent it from doing so, and only by understanding the pitfalls of certain behaviors can one figure out ways to circumvent those pitfalls.

For a simple sci-fi tale a lot of that can be ignored, sure, since that tale can just so happen to feature an AI not subject to those shortcomings. Such a thing is not my assumption on reading your initial premise however.

Melanie E.

*

Hello Rasufelle, Melanie,

Good food for thought. Thanks.

I'm sure that my optimism would color my own AI-driven tale, just as the pessimism of other authors has colored their AI-driven tales. Trying to avoid that coloring would probably lead to a poorer story.

Morality and logic are certainly different, but it is not clear to me that they have no bearing on each other. Morality and intelligence, however, do seem to be linked.

When creating my own AI, as opposed to when creating my own AI-driven story, I will be more concerned about intelligence than logic. But since those are linked ...

Thanks again for the input,
T

With Folded Hands

I think someone has to bring up Jack Williamson's With Folded Hands (1947). I haven't read it (or the novels that followed it), but I've seen it referenced in SF discussions like this one, and Wikipedia's synopsis will probably explain why. If the AI is sufficiently determined to avoid potential human injuries, humans will be left with nothing worthwhile to do. (I think it was Larry Niven who said that what they'd be left with is thinking up interesting things for the computer to do. Which I suppose brings us to a life centered around universal online simulation gaming, something few were envisioning when the story was written.)

Eric

*

Hello Eric,

If the AI is sufficiently determined to avoid potential human injuries

What if the AI was concerned with preventing actual harm, rather than preventing potential harm? The AI would still be monitoring us and would need to keep track of the potential for harm, of course. But if we were good enough to avoid the harm on our own it would just watch.

humans will be left with nothing worthwhile to do

I'm not convinced that only reckless things are worthwhile. Or did I misread you? I can think of another interpretation.

Thanks,
T

I Think the Point There...

...was that a sufficiently concerned AI could decide not to let you go outside -- you could get hit by a car, or by lightning, or by a passing meteor -- or turn on an electric appliance and risk a shock, or eat something large enough to get caught in your throat.

Improbable events like that aren't sufficiently predictable that the AI could sit back and wait for them; unless engineered otherwise, it would be proactive and prevent them from arising by restricting your behavior. If you let the AI define "reckless", it may well be that only reckless things are worthwhile.

If you're just preventing actual harm -- and you can't change reality, as one of the comments suggested -- then yes, you can tell the AI to make that tradeoff; some of the people who take a walk outside will get hit by a car, unless the AI can forestall it in real time without harming the car occupants. Inertia's one law that even the AI would have trouble breaking. (Though I think back in the 1930s Doc Smith posited inertialess drives on the spaceships in his Lensman series.)

Eric

*

Hi Eric -

You pose some interesting questions.

Eric-
If you're just preventing actual harm -- and you can't change reality, as one of the comments suggested -- then yes, you can tell the AI to make that tradeoff; some of the people who take a walk outside will get hit by a car, unless the AI can forestall it in real time without harming the car occupants.

Rather than taking control over an individual's mind, I would opt for this solution. Unless magic or a God of some sort were involved the results are probably never going to be perfect.

If my proposed "force" can prevent 1 percent of the murders or suicides or accidents that would have happened, I claim that I have done good.

Technology like this will evolve. Five minutes from now it will prevent 1.01% of said bad things. Without controlling any minds.

And so on.

It may never be perfect. But it will be better and better.
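
In numbers: one way to model my own "1% then 1.01%" remark is to have each step close a tiny fixed fraction of whatever still slips through. (The 0.01%-of-the-gap-per-step rule below is just an illustration, not a spec.)

    # "Better and better, never perfect": each step closes a tiny fixed
    # fraction of the remaining gap. The 0.01%-of-the-gap-per-step rule is an
    # interpretation of the remark above, not a stated spec.
    GAP_CLOSED_PER_STEP = 0.0001   # 0.01% of whatever still slips through

    rate = 0.01                    # start by preventing 1% of bad things
    for step in range(1, 4):
        rate += GAP_CLOSED_PER_STEP * (1.0 - rate)
        print(f"Step {step}: prevents {rate:.4%}")  # 1.0099%, 1.0198%, 1.0297%

    # After a million five-minute steps (~9.5 years) the rate is
    # 1 - 0.99 * (1 - 1e-4)**1_000_000: essentially 100%, minus a sliver
    # that never quite closes.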

I think I might be OK with that.

Unless it becomes a God. Is that possible?
(small edit here . . ^ )

I think I might be OK with that.

Eric-
Inertia is one law that even the AI would have trouble breaking.

For now.

Regards,
T

Etiquette

If there were a world where you could say anything without fear of physical retribution, would anyone have friends, since everyone is capable of offending (and being offended by) everyone else?

In such a world, would concepts such as etiquette, euphemism and innuendo inevitably occur in order for society to function?

Fear of Physical Retribution


Oh, you mean like people saying nasty things about and to other people on the internet. Alas, that happens all the time on the internet, because the people posting think there will be no consequences for them. And they are almost always right.

I think all of these consequences are still possible.


War would still be possible. You could imprison other people. You could hurt them economically, socially, etc. You could attack their reputation.

You could do the equivalent of murdering or assaulting someone. Just convince them to do something that would kill or hurt them. For instance, if you convinced them that if they jumped off this cliff then they could fly...

Of course accidental injuries would still be possible, because they are not caused by someone with the intent to harm.

Now if you lived in a world where no one can be physically harmed by ANY cause then some of the consequences you described above would be true.

I am reminded of the Star Trek episode "The Return of the Archons", where the computer Landru was forcing all of its citizens to be calm and benevolent and obedient, barring one night a year.

Return of the Archons

I remember this episode.

I've never understood stories like this. They strike me as failed attempts at humor (but I suspect I might be missing something ... ).

A Digital Dictator is not significantly different from a Biological Dictator.
Alternatively, a Mechanical Dictator is not significantly different from a Biological Dictator.

In either case ...

  • The former is not artificially sapient.
  • The latter is not naturally sapient.

IOW, neither is sapient. (In my opinion dictators are, by definition, incapable of sapience.)

Is there an interesting point to such stories?

Sigh,
T