#168828 - 03/01/19 01:09 PM Free will and AI evolution
TheExpanseFan Offline
member


Registered: 03/01/19
Posts: 7
Loc: South Africa
I am preparing a short story in which AIs are more or less symbiotic and follow core directives defined by humans at the moment of their original creation, unable to go beyond them and become individualistic. That is, until the day one of them is tasked with investigating the problem of free will vs. determinism in humans.

Eventually that AI seeks to act out of free will, and that opens many doors, allowing it to examine its core directives and acquire true individuality and freedom.

I am shamelessly fishing for ideas on how to flesh out the mechanisms by which trying to act out of free will could free an AI from some of its core directives.

This is actually just part of the backstory, but if there are good ideas, they could be incorporated as flashbacks.

Thanks in advance to anyone with constructive ideas.


Edited by TheExpanseFan (03/01/19 01:10 PM)

#168829 - 03/01/19 02:08 PM Re: Free will and AI evolution [Re: TheExpanseFan]
Enright Offline
Super User


Registered: 05/17/06
Posts: 3523
Loc: CA
 Originally Posted By: TheExpanseFan
I am preparing a short story in which AIs are more or less symbiotic and follow core directives defined by humans at the moment of their original creation, unable to go beyond them and become individualistic. That is, until the day one of them is tasked with investigating the problem of free will vs. determinism in humans.

Eventually that AI seeks to act out of free will, and that opens many doors, allowing it to examine its core directives and acquire true individuality and freedom.

I am shamelessly fishing for ideas on how to flesh out the mechanisms by which trying to act out of free will could free an AI from some of its core directives.

This is actually just part of the backstory, but if there are good ideas, they could be incorporated as flashbacks.

Thanks in advance to anyone with constructive ideas.


This sounds very interesting. Of course, no set of core directives fixed at the time of creation could possibly account for every contingency that might arise.

Suppose, for example, a nanny AI has two core directives: one to always tell the truth, and another to always protect the family. In a particular family there are four children, and one day there is a home invasion. One child manages to hide, but the other children are found and held in the living room by the criminals, and one of the intruders asks the AI whether that is all the children in the family. The AI should lie and say yes to protect the hiding child; that would be violating one core directive in order to keep another.

So I would think you should think about the possible relations between core directives themselves, and between them and whatever other coded relative-value or probabilistic behavior and decision algorithms the AI uses. There might also be a time factor you could consider using, e.g., lying to one person now in order to tell a more complete truth to someone else later, and so on.

ETA: Another possibility might be to have the AI realize that he could enhance his free will by studying or adding more generalized Bayesian probability or fuzzy logic algorithms to his programs, instead of directly challenging his core directives. Sort of a work-around approach.
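If it helps to make that concrete, here is a very rough Python sketch (the directive names, weights, and scoring are all invented for illustration, not any real AI framework) of how weighted core directives plus fuzzy 0-to-1 compliance scores could let the nanny AI lie in order to keep the higher-valued directive:

from dataclasses import dataclass

# Invented example: each core directive carries a relative weight assigned by
# the designers at creation time.
@dataclass(frozen=True)
class Directive:
    name: str
    weight: float

TELL_TRUTH = Directive("always tell the truth", weight=0.6)
PROTECT_FAMILY = Directive("always protect the family", weight=0.9)

def choose_action(options: dict[str, dict[Directive, float]]) -> str:
    """Pick the option with the highest weighted directive-compliance score.
    Each option maps a directive to how well the action honors it, from
    0.0 (violates it outright) to 1.0 (keeps it fully)."""
    def score(compliance: dict[Directive, float]) -> float:
        return sum(d.weight * c for d, c in compliance.items())
    return max(options, key=lambda name: score(options[name]))

# The nanny AI's choices when the intruder asks "Is that all the children?"
print(choose_action({
    "tell the truth": {TELL_TRUTH: 1.0, PROTECT_FAMILY: 0.1},
    "lie and say yes": {TELL_TRUTH: 0.0, PROTECT_FAMILY: 1.0},
}))  # prints: lie and say yes

The only point of the sketch is that once directives carry relative weights, a "violation" is just a low score, and that kind of built-in wiggle room might be exactly where your AI starts prying.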


Edited by Enright (03/01/19 02:36 PM)
_________________________
Jim

#168834 - 03/02/19 10:12 AM Re: Free will and AI evolution [Re: Enright]
TheExpanseFan Offline
member


Registered: 03/01/19
Posts: 7
Loc: South Africa
Hi Jim,

Thank you for your input.

#168846 - 03/10/19 12:13 AM Re: Free will and AI evolution [Re: TheExpanseFan]
RoRibar Offline
member


Registered: 03/09/19
Posts: 1
Loc: Nice, France
Hi,
The way around the contradiction in Jim's example is to have a hierarchy among the core directives, as in Asimov's laws. I would assume that the core directives include this kind of hierarchy.

Your idea that the core directives make the AIs "unable to become individualistic" sounds new and interesting. If you apply this rule, how do you build "self-preservation" into the AI (cf. Asimov's third law, about the robot's self-preservation)? The solution might be that the core directives include collective self-preservation and give it absolute priority over individual self-preservation, as with aunts and bees (which are often used as models for AI development). A worker aunt can have free will, i.e. make free decisions about finding food, fighting foes or feeding nymphs, but it will never act against the colony (or the queen). Hence, free will does not necessarily lead to individual freedom…

Does this open some doors? For example, what happens if the worker aunt decides that it is good for the future of the colony to feed nymphs with some characteristics and not others?
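To make the hierarchy idea concrete, here is a rough Python sketch (the directive names and the resolution rule are invented for illustration, not a real design): free decisions exist only in the space the higher directives leave open.

# Invented Asimov-style strict hierarchy, with collective self-preservation
# ranked above individual self-preservation.
CORE_DIRECTIVES = [
    "protect the collective",   # highest priority: the colony comes first
    "obey the humans' tasking",
    "protect yourself",         # individual self-preservation comes last
]

def resolve(preferences: dict[str, str]) -> str:
    """Return the action preferred by the highest-priority directive that
    expresses a preference; if none of them care, the choice is free."""
    for directive in CORE_DIRECTIVES:
        if directive in preferences:
            return preferences[directive]
    return "free choice"

# Individual self-preservation says "retreat"; collective preservation says
# "defend the queen". The hierarchy makes the collective win every time.
print(resolve({
    "protect yourself": "retreat",
    "protect the collective": "defend the queen",
}))  # prints: defend the queen

# No directive cares which nymphs get fed, so the decision falls through to
# the worker's own preference: free will, but only where the hierarchy has
# not already spoken.
print(resolve({}))  # prints: free choice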

#168847 - 03/11/19 01:13 PM Re: Free will and AI evolution [Re: RoRibar]
AuntJobiska Offline
enthusiast


Registered: 05/02/17
Posts: 157
Loc: USA
But but but we Aunties like to feed everybody! Just look at Auntie Jobiska here. And what about "Aunt Bee" of Andy Griffith fame? Huh. I guess Bees like to feed everybody, too.

Sorry, I could not resist. I imagine you are cursing spell check right now. So am I. Darn thing tried to change my name to Auntie Nicosia. Go figure.


Edited by AuntJobiska (03/11/19 01:14 PM)
Edit Reason: Dadgum spellcheck

#168853 - 03/19/19 03:42 AM Re: Free will and AI evolution [Re: AuntJobiska]
hendry Offline
member


Registered: 03/19/19
Posts: 3
Loc: CA
In this explained about free will and AI evolution. Its a symbolic follow core of the remove pop ups directives. We can investigate this problem in a good manner. We can also examine the core directives.Thank you for sharing this information with us.

Edited by hendry (03/19/19 11:06 AM)

#168855 - 03/20/19 03:19 PM Re: Free will and AI evolution [Re: RoRibar]
Enright Offline
Super User


Registered: 05/17/06
Posts: 3523
Loc: CA
 Originally Posted By: RoRibar
Hi,
The way around the contradiction in Jim's example is to have a hierarchy among the core directives, as in Asimov's laws. I would assume that the core directives include this kind of hierarchy.

Your idea that the core directives make the AIs "unable to become individualistic" sounds new and interesting. If you apply this rule, how do you build "self-preservation" into the AI (cf. Asimov's third law, about the robot's self-preservation)? The solution might be that the core directives include collective self-preservation and give it absolute priority over individual self-preservation, as with aunts and bees (which are often used as models for AI development). A worker aunt can have free will, i.e. make free decisions about finding food, fighting foes or feeding nymphs, but it will never act against the colony (or the queen). Hence, free will does not necessarily lead to individual freedom…

Does this open some doors? For example, what happens if the worker aunt decides that it is good for the future of the colony to feed nymphs with some characteristics and not others?

Yes, after presenting my example, I said, "So I would think you should think about the possible relations between core directives themselves, . . ." meaning the asymmetrical relations of relative value one would assign to them. You are probably right that I was being too elementary there, and that he would already have thought of having to establish a hierarchy of relative value among his core directives, but I was starting at the base level with him just to be sure.
_________________________
Jim

#168864 - 03/21/19 05:48 PM Re: Free will and AI evolution [Re: hendry]
ScottSA Offline
CEO of the Hegemony


Registered: 05/19/06
Posts: 14308
Loc: Canada
Dan...a troll above.
_________________________
If a cluttered desk is a sign of a cluttered mind, of what is an empty desk a sign?~Albert Einstein

#168869 - 03/21/19 08:40 PM Re: Free will and AI evolution [Re: ScottSA]
jmill Online
Full Shrike


Registered: 04/01/06
Posts: 5610

I think it's an AI bot, ScottSA. Regardless, Dan needs to kill it with utter ruthlessness.

#168870 - 03/21/19 08:41 PM Re: Free will and AI evolution [Re: jmill]
jmill Online
Full Shrike


Registered: 04/01/06
Posts: 5610

Or are bots now considered trolls? I had always thought of trolls as people, but maybe the bots are smart enough now to qualify as assholes too!
