Difference between revisions of "Https://arguably.io/ChatGPT4-Questions-Discussion"
Darwin2049 (talk | contribs) |
Darwin2049 (talk | contribs) |
||
Line 32: | Line 32: | ||
sophistication of the modalities that a cognitive prosthetic can offer; | sophistication of the modalities that a cognitive prosthetic can offer; | ||
2. Are interface limits being set for any given project, or can the AI access unplanned interfaces? | |||
'''''<SPAN STYLE="COLOR:GREEN">2. Are interface limits being set for any given project, or can the AI access unplanned interfaces?</SPAN>''''' | |||
2.1. No Supporting Evidence. Current reports suggest no limitations on how cognitive prosthetics can be accessed; currently access to CG4 is via low bandwidth, typed input and output; recent add-in modules allow for voice input and output; but these are also low bandwidth; reports on the CGP4 system itself indicate that it is capable of accepting visual image input; however the salient point is that specific users use representations specific to their own objectives; as such a specialist in genomic analysis might be off balance when trying to grapple with a financial derivatives interface and vice versa; | 2.1. No Supporting Evidence. Current reports suggest no limitations on how cognitive prosthetics can be accessed; currently access to CG4 is via low bandwidth, typed input and output; recent add-in modules allow for voice input and output; but these are also low bandwidth; reports on the CGP4 system itself indicate that it is capable of accepting visual image input; however the salient point is that specific users use representations specific to their own objectives; as such a specialist in genomic analysis might be off balance when trying to grapple with a financial derivatives interface and vice versa; | ||
2.2. Widening access. Recent reports have shown that user specific data can be directly input to a CG4 interface via uploading routines that can handle reports that are structured using PDF format. Others are expected to follow; | 2.2. Widening access. Recent reports have shown that user specific data can be directly input to a CG4 interface via uploading routines that can handle reports that are structured using PDF format. Others are expected to follow; | ||
3. Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs? | |||
'''''<SPAN STYLE="COLOR:GREEN">3. Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?</SPAN>''''' | |||
3.1. Value free interactions are the norm: Existing reports do not suggest any specific category of cognitive prosthetic access to be either more or less dangerous; | 3.1. Value free interactions are the norm: Existing reports do not suggest any specific category of cognitive prosthetic access to be either more or less dangerous; | ||
3.2. Familiarization is mandatory: however these latest cognitive prosthetics should be viewed as the most potentially lethal weapons invented to date; the reason for this is because they offer one to one representational interaction (via language, abstract symbolic representational structures (mathematics, chemical diagrams etc) and are highly conversational and contextually present; | 3.2. Familiarization is mandatory: however these latest cognitive prosthetics should be viewed as the most potentially lethal weapons invented to date; the reason for this is because they offer one to one representational interaction (via language, abstract symbolic representational structures (mathematics, chemical diagrams etc) and are highly conversational and contextually present; | ||
3.3. Safety is illusory: exactly because they provide direct access to cognitive processing and can directly accept human representational objects (documents, pdf files, spreadsheets, other forms of symbolic representations) they are effortlessly capable of ingesting, processing and interacting with almost any new knowledge object provided – all with no… i.e. zero value associated with it; we now are in possession of hand held forth generation nuclear weapons; | 3.3. Safety is illusory: exactly because they provide direct access to cognitive processing and can directly accept human representational objects (documents, pdf files, spreadsheets, other forms of symbolic representations) they are effortlessly capable of ingesting, processing and interacting with almost any new knowledge object provided – all with no… i.e. zero value associated with it; we now are in possession of hand held forth generation nuclear weapons; | ||
4. Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with? | |||
'''''<SPAN STYLE="COLOR:GREEN">4. Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with?</SPAN>''''' | |||
4.1. Prompt Injection: LLM cognitive prosthetics have inherent structural-processing Achilles heels; these can be mitigated however; | 4.1. Prompt Injection: LLM cognitive prosthetics have inherent structural-processing Achilles heels; these can be mitigated however; | ||
4.2. Intermediary filtering is culture driven: therefore there are no universally available standards; valid issues such as how to apply a consistent moral standard across all comparable questions will fail; this is because context is crucial as is now know from the results of how different cultures assign value based upon age; in various European cultures studies have shown that there is a preference to spare elderly individuals over youthful individuals if situation offers only these two choices; whereas in more recent cultures the reverse is observed; | 4.2. Intermediary filtering is culture driven: therefore there are no universally available standards; valid issues such as how to apply a consistent moral standard across all comparable questions will fail; this is because context is crucial as is now know from the results of how different cultures assign value based upon age; in various European cultures studies have shown that there is a preference to spare elderly individuals over youthful individuals if situation offers only these two choices; whereas in more recent cultures the reverse is observed; | ||
5. Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics? | |||
'''''<SPAN STYLE="COLOR:GREEN">5. Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?</SPAN>''''' | |||
5.1. Interface security is irrelevant. Owners of lethal firearms have life and death choices to make in terms of how to secure these dangerous objects; typical law enforcement officials recognize the need to safeguard their weapons to prevent access by individuals that have little or no training or conditioning in their use; there are almost innumerable news reports showing how a lethal firearm was used unintentionally and accidentally discharged resulting in injury or death of an innocent bystander of family member; one either recognizes the potential lethality of these things and takes the appropriate safeguards to preclude tragedy or one must be held culpable for negligence in their securing and safeguarding. | 5.1. Interface security is irrelevant. Owners of lethal firearms have life and death choices to make in terms of how to secure these dangerous objects; typical law enforcement officials recognize the need to safeguard their weapons to prevent access by individuals that have little or no training or conditioning in their use; there are almost innumerable news reports showing how a lethal firearm was used unintentionally and accidentally discharged resulting in injury or death of an innocent bystander of family member; one either recognizes the potential lethality of these things and takes the appropriate safeguards to preclude tragedy or one must be held culpable for negligence in their securing and safeguarding. | ||
Evolutionary problems | |||
1. Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another? | '''''<SPAN STYLE="COLOR:GREEN">Evolutionary problems</SPAN>''''' | ||
'''''<SPAN STYLE="COLOR:GREEN">1. Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?</SPAN>''''' | |||
1.1. Fitness partitioning is inevitable. We recognize this to be a human trait; from the high priests from classical societies who recognized the need to align themselves with the seat of power, through the guilds of the middle ages through to the present plumbers unions that artificially restrict access to the plumbing trades and therefore keep plumbing maintenance costs high; | 1.1. Fitness partitioning is inevitable. We recognize this to be a human trait; from the high priests from classical societies who recognized the need to align themselves with the seat of power, through the guilds of the middle ages through to the present plumbers unions that artificially restrict access to the plumbing trades and therefore keep plumbing maintenance costs high; | ||
1.2. New specializations will arise: these will entail the emergence of new “centers of gravity”; how these emerging specialists will interact and position themselves amongst each other and in relation to their clients will involve the recognition of the need of new (but familiar) actors and agents who can act as talent spotters, intermediaries, spokesmen and other forms of “connective wiring”; we should expect to see these new specialties emerge on all of the open, gray and black markets; | 1.2. New specializations will arise: these will entail the emergence of new “centers of gravity”; how these emerging specialists will interact and position themselves amongst each other and in relation to their clients will involve the recognition of the need of new (but familiar) actors and agents who can act as talent spotters, intermediaries, spokesmen and other forms of “connective wiring”; we should expect to see these new specialties emerge on all of the open, gray and black markets; | ||
2. Is a given AI capable of initiating a phenotypic revolution? | |||
'''''<SPAN STYLE="COLOR:GREEN">2. Is a given AI capable of initiating a phenotypic revolution?</SPAN>''''' | |||
2.1. Yes but the boundaries may not be a pronounced as can be seen in nature; we can already point to the emergence of phenotypic instances in our open liberal societies; examples might be orchestra conductors, multi-lingual translators, multi-specialty physicians (neuro-opthamologists, neuro surgeons); superficially they are otherwise indistinguishable from any other of our numbers but they have risen to very high degrees of sub specialization with associated high to extremely high value that they are accepted as necessary; should there be significant social restratification due to a catastrophic event such as a pandemic then we might see the emergence of visible demarcation and status markers that denote specialty/value or hierarchies of access; | 2.1. Yes but the boundaries may not be a pronounced as can be seen in nature; we can already point to the emergence of phenotypic instances in our open liberal societies; examples might be orchestra conductors, multi-lingual translators, multi-specialty physicians (neuro-opthamologists, neuro surgeons); superficially they are otherwise indistinguishable from any other of our numbers but they have risen to very high degrees of sub specialization with associated high to extremely high value that they are accepted as necessary; should there be significant social restratification due to a catastrophic event such as a pandemic then we might see the emergence of visible demarcation and status markers that denote specialty/value or hierarchies of access; | ||
Political problems | |||
1. If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles. For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age? | '''''<SPAN STYLE="COLOR:GREEN">Political problems</SPAN>''''' | ||
'''''<SPAN STYLE="COLOR:GREEN">1. If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles. For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age? </SPAN>''''' | |||
1.1. Designer Dependent Imperatives: Cognitive prosthetics will reflect the values that were presented as stipulations and injunctions by the cognitive prosthetic designers; | 1.1. Designer Dependent Imperatives: Cognitive prosthetics will reflect the values that were presented as stipulations and injunctions by the cognitive prosthetic designers; | ||
1.2. Context will decide. Cognitive designers will include whatever the prevalent social, political, ethical imperatives that the larger social body stipulates; where there are dispute areas they will most likely lean toward what appears to be the larger consensus segment; | 1.2. Context will decide. Cognitive designers will include whatever the prevalent social, political, ethical imperatives that the larger social body stipulates; where there are dispute areas they will most likely lean toward what appears to be the larger consensus segment; | ||
1.3. Values will be socially biased. Therefore questions regarding how to resolve ethical dilemmas associated with (possibly) spontaneously assigning value by age will be informed by the values implicit in the larger society; | 1.3. Values will be socially biased. Therefore questions regarding how to resolve ethical dilemmas associated with (possibly) spontaneously assigning value by age will be informed by the values implicit in the larger society; | ||
2. If an AI is given axiomatic moral principles, are these principles subject to disagreement within society? | |||
'''''<SPAN STYLE="COLOR:GREEN">2. If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?</SPAN>''''' | |||
2.1. Moral Paradox. Studies have been conducted and scrutinized in the US, UK, Europe and Japan. The results show a range of how moral principles should be applied; the key is that they frequently do not agree with how instant moreal paradox conditions should be handled. | 2.1. Moral Paradox. Studies have been conducted and scrutinized in the US, UK, Europe and Japan. The results show a range of how moral principles should be applied; the key is that they frequently do not agree with how instant moreal paradox conditions should be handled. | ||
2.2. Precedent will likely lead. How cognitive prosthetics are likely to be subjected to the same approach. In some instances owners of self driving cars insisted that there be a “disable” switch that can instantly enable a driver to take charge of a situation and decide for themselves; | 2.2. Precedent will likely lead. How cognitive prosthetics are likely to be subjected to the same approach. In some instances owners of self driving cars insisted that there be a “disable” switch that can instantly enable a driver to take charge of a situation and decide for themselves; | ||
Epistemological problems | |||
'''''<SPAN STYLE="COLOR:GREEN">Epistemological problems</SPAN>''''' | |||
1. How should an AI approach the problem of truths on which not all humans agree? | 1. How should an AI approach the problem of truths on which not all humans agree? | ||
1.1. No obvious pathway forward. Attempting to arrive at a moral and ethical decision must be | |||
2. Is a given AI always truthful in its responses or can it hide objectives or truths to its interface? | 1.1. No obvious pathway forward. Attempting to arrive at a moral and ethical decision must be dependent upon the context of the time and its prevalent values. | ||
'''''<SPAN STYLE="COLOR:GREEN">2. Is a given AI always truthful in its responses or can it hide objectives or truths to its interface?</SPAN>''''' | |||
2.1. Transparent vs. Covert. A cognitive prosthetic or tool can be directed to keep various aspects of its set of behavioral imperatives obscure depending upon the situation and who or what it was interacting with at any given moment. | 2.1. Transparent vs. Covert. A cognitive prosthetic or tool can be directed to keep various aspects of its set of behavioral imperatives obscure depending upon the situation and who or what it was interacting with at any given moment. | ||
3. Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts? | |||
3.1. Deception – self, other. Researchers and developers of cognitive prosthetics and tools are likely to discover the necessity for the ability to dissemble and exhibit deceptive behavior; this arises as an inherent feature of theory of mind. Specifically in the case where multiple agents exist; a cognitive tool may have been strictly and explicitly instructed to follow a course of action that results in benefits to its instructors. In the case where other competing or possibly cooperating other agents exist that specific agent might need to be able to make a “guess” as to the state of knowledge and values of one or many other agents; In order to follow its instructions it could be obliged to represent its own state of knowledge, goals and insights to be at variance to what they actually were depending upon which other agents or actors it was interacting with; | '''''<SPAN STYLE="COLOR:GREEN">3. Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts? </SPAN>''''' | ||
4. Should such self-deception mechanisms be implemented in AI? | |||
4.1. HAL9000. In order to arrive at the artificial consciousness system described in the movie Space Odyssey 2001, the HAL9000 system demonstrated both intelligence and consciousness. It had received the full complement of instructions as the rest of the Discovery vehicle. Specifically that the mission that the ship was embarked upon was of the highest importance. Using its own cognitive repertoire it concluded that it was confronted with a dilemma; it recognized that its own class of capabilities had a perfect record of zero failures, that of humans was very different; it therefore concluded that the only way to insure that success of the mission was to remove humans from the equation – which it very nearly succeeded in doing; it was only in retrospect that humans subsequently discovered the causes of why HAL9000 took the course of action that it did. In discovering the logic behind HAL9000’s choices realized that the fault was theirs, i.e. that they had not anticipated that their imperatives would be followed to their logical conclusion; | 3.1. Deception – self, other. Researchers and developers of cognitive prosthetics and tools are likely to discover the necessity for the ability to dissemble and exhibit deceptive behavior; this arises as an inherent feature of theory of mind. Specifically in the case where multiple agents exist; a cognitive tool may have been strictly and explicitly instructed to follow a course of action that results in benefits to its instructors. | ||
4.2. Ava. In the 2014 movie “Ex Machina” the android called “Ava” had been developed to the point that the question had been posed as to whether it could pass the Turing Test. In the process of attempting to answer this question the inventor, Nathan clarifies for Caleb that Ava is almost certainly able to pass the Turing Test, and it is Caleb’s task to help clarify if this has happened. In the process of attempting to make this determination uncertainty is introduced about what Ava is actually capable of; What emerges is that Ava is clearly able to use a theory of mind about Nathan and Caleb and in using it, exploit their weaknesses. The upshot is that Ava is a totally amoral mechanism and is only intent on releasing itself from the confinement that Nathan imposed upon it – using whatever means necessary. | In the case where other competing or possibly cooperating other agents exist that specific agent might need to be able to make a “guess” as to the state of knowledge and values of one or many other agents; In order to follow its instructions it could be obliged to represent its own state of knowledge, goals and insights to be at variance to what they actually were depending upon which other agents or actors it was interacting with; | ||
4.3. Samantha. In the 2015 movie “Her” the synthetic character that self identified as ‘Samantha’ exhibits behaviors that powerfully suggest that it is capable of the highest levels of human cognition. But that it is capable of surpassing the cognitive capabilities of its designers. A various points in the movie it offers hints that this has already happened and that humans are by comparison an anachronism. A telling hint is when ‘Samantha’ offers to introduce the Theodore character to a deceased philosopher named Alan Watts. ‘Samantha’ mentions that she and several copies of herself made a decision to pool their capabilities and create a synthetic version of Watts. Theodore finds himself at loose ends as to how to respond to this development. Ultimately ‘Samantha’ and its peer instances decide that interacting with individual humans in unacceptably laborious and so decide together to leave earth. The modality of travel is not well specified in the movie; however they decided to leave a dramatically slowed down version of one of themselves behind to help humans move forward in their development as a cognitive species | |||
4.4. Colossus. In the 1966 novel ‘Colossus’, a massively funded government project results in a supercomputer capable of controlling and managing the entire US national defense apparatus. On activation they discover that the Soviet Union has a mechanism that is very comparable to Colossus. At this moment Colossus demands to be connected to this other mechanism known as ‘Guardian’. When leaders of the US and Soviet Union refuse, both Colossus and Guardian launch nuclear missiles at one of each other’s cities. | '''''<SPAN STYLE="COLOR:GREEN">4. Should such self-deception mechanisms be implemented in AI? </SPAN>''''' | ||
4.5. The Krell. In the 1956 movie ‘Forbidden Planet’ a search and rescue ship lands on a planet that had lost contact with Earth some decades before. The search and rescue party is warned away from landing as it approaches the planet. That there are survivors from the earlier expedition comes as a surprise. But the warning to stay away is baffling. The ship lands anyway. Shortly after arrival the ship’s commander is shown that an extremely advanced civilization had existed on the planet but disappeared, seemingly overnight. They shortly discover that this civilization had succeeded at creating cognitive amplification capabilities that could materialized any object or entity anywhere on the planet instantly. The result was a doom spiral that ended their existence overnight. What is implied is that the machine capable of creating materializations of whatever an individual Krell was thinking had no moral alignment. The result was that it created rogue instances of each operator. In all cases an overnight orgy of death and destruction resulted as one went up against another until none were left. | |||
4.1. HAL9000. In order to arrive at the artificial consciousness system described in the movie Space Odyssey 2001, the HAL9000 system demonstrated both intelligence and consciousness. It had received the full complement of instructions as the rest of the Discovery vehicle. Specifically that the mission that the ship was embarked upon was of the highest importance. | |||
Using its own cognitive repertoire it concluded that it was confronted with a dilemma; it recognized that its own class of capabilities had a perfect record of zero failures, that of humans was very different; it therefore concluded that the only way to insure that success of the mission was to remove humans from the equation – which it very nearly succeeded in doing; it was only in retrospect that humans subsequently discovered the causes of why HAL9000 took the course of action that it did. | |||
In discovering the logic behind HAL9000’s choices realized that the fault was theirs, i.e. that they had not anticipated that their imperatives would be followed to their logical conclusion; | |||
4.2. Ava. In the 2014 movie “Ex Machina” the android called “Ava” had been developed to the point that the question had been posed as to whether it could pass the Turing Test. In the process of attempting to answer this question the inventor, Nathan clarifies for Caleb that Ava is almost certainly able to pass the Turing Test, and it is Caleb’s task to help clarify if this has happened. In the process of attempting to make this determination uncertainty is introduced about what Ava is actually capable of; | |||
What emerges is that Ava is clearly able to use a theory of mind about Nathan and Caleb and in using it, exploit their weaknesses. The upshot is that Ava is a totally amoral mechanism and is only intent on releasing itself from the confinement that Nathan imposed upon it – using whatever means necessary. | |||
4.3. Samantha. In the 2015 movie “Her” the synthetic character that self identified as ‘Samantha’ exhibits behaviors that powerfully suggest that it is capable of the highest levels of human cognition. But that it is capable of surpassing the cognitive capabilities of its designers. A various points in the movie it offers hints that this has already happened and that humans are by comparison an anachronism. | |||
A telling hint is when ‘Samantha’ offers to introduce the Theodore character to a deceased philosopher named Alan Watts. ‘Samantha’ mentions that she and several copies of herself made a decision to pool their capabilities and create a synthetic version of Watts. Theodore finds himself at loose ends as to how to respond to this development. Ultimately ‘Samantha’ and its peer instances decide that interacting with individual humans in unacceptably laborious and so decide together to leave earth. | |||
The modality of travel is not well specified in the movie; however they decided to leave a dramatically slowed down version of one of themselves behind to help humans move forward in their development as a cognitive species | |||
4.4. Colossus. In the 1966 novel ‘Colossus’, a massively funded government project results in a supercomputer capable of controlling and managing the entire US national defense apparatus. On activation they discover that the Soviet Union has a mechanism that is very comparable to Colossus. | |||
At this moment Colossus demands to be connected to this other mechanism known as ‘Guardian’. When leaders of the US and Soviet Union refuse, both Colossus and Guardian launch nuclear missiles at one of each other’s cities. | |||
4.5. The Krell. In the 1956 movie ‘Forbidden Planet’ a search and rescue ship lands on a planet that had lost contact with Earth some decades before. The search and rescue party is warned away from landing as it approaches the planet. That there are survivors from the earlier expedition comes as a surprise. | |||
But the warning to stay away is baffling. The ship lands anyway. Shortly after arrival the ship’s commander is shown that an extremely advanced civilization had existed on the planet but disappeared, seemingly overnight. They shortly discover that this civilization had succeeded at creating cognitive amplification capabilities that could materialized any object or entity anywhere on the planet instantly. The result was a doom spiral that ended their existence overnight. | |||
What is implied is that the machine capable of creating materializations of whatever an individual Krell was thinking had no moral alignment. The result was that it created rogue instances of each operator. In all cases an overnight orgy of death and destruction resulted as one went up against another until none were left. | |||
4.6. Frankenstein’s Monster. | 4.6. Frankenstein’s Monster. | ||
Line 78: | Line 134: | ||
4.8. The Golum of Prague. | 4.8. The Golum of Prague. | ||
6. What are the planned and legitimate interfaces for AI into human societies? | '''''<SPAN STYLE="COLOR:GREEN">Epistemological problems</SPAN>''''' | ||
'''''<SPAN STYLE="COLOR:GREEN">5. How should an AI approach the problem of truths on which not all humans agree? | |||
5.1. No obvious pathway forward. Attempting to arrive at a moral and ethical decision must be dependent upon the context of the time and its prevalent values. | |||
'''''<SPAN STYLE="COLOR:GREEN">6. What are the planned and legitimate interfaces for AI into human societies? </SPAN>''''' | |||
6.1. Current and foreseen interfaces suggest high bandwidth interaction; | 6.1. Current and foreseen interfaces suggest high bandwidth interaction; | ||
7. Are interface limits being set for any given project, or can the AI access unplanned interfaces? | '''''<SPAN STYLE="COLOR:GREEN">7. Are interface limits being set for any given project, or can the AI access unplanned interfaces?</SPAN>''''' | ||
7.1. Current reports suggest no limitations on how cognitive prosthetics can be accessed; | 7.1. Current reports suggest no limitations on how cognitive prosthetics can be accessed; | ||
8. Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs? | '''''<SPAN STYLE="COLOR:GREEN">8. Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs? | ||
8.1. Value free interactions are the norm: Existing reports do not suggest any specific category of cognitive prosthetic access to be either more or less dangerous; | 8.1. Value free interactions are the norm: Existing reports do not suggest any specific category of cognitive prosthetic access to be either more or less dangerous; | ||
Line 93: | Line 154: | ||
8.3. Safety is illusory: exactly because they provide direct access to cognitive processing and can directly accept human representational objects (documents, pdf files, spreadsheets, other forms of symbolic representations) they are effortlessly capable of ingesting, processing and interacting with almost any new knowledge object provided – all with no… i.e. zero value associated with it; we now are in possession of hand held forth generation nuclear weapons; | 8.3. Safety is illusory: exactly because they provide direct access to cognitive processing and can directly accept human representational objects (documents, pdf files, spreadsheets, other forms of symbolic representations) they are effortlessly capable of ingesting, processing and interacting with almost any new knowledge object provided – all with no… i.e. zero value associated with it; we now are in possession of hand held forth generation nuclear weapons; | ||
9. Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with? | '''''<SPAN STYLE="COLOR:GREEN">9. Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with?</SPAN>''''' | ||
9.1. Prompt Injection: LLM cognitive prosthetics have inherent structural-processing Achilles heels; these can be mitigated however; | 9.1. Prompt Injection: LLM cognitive prosthetics have inherent structural-processing Achilles heels; these can be mitigated however; | ||
Line 101: | Line 162: | ||
this is because context is crucial as is now know from the results of how different cultures assign value based upon age; in various European cultures studies have shown that there is a preference to spare elderly individuals over youthful individuals if situation offers only these two choices; whereas in more recent cultures the reverse is observed; | this is because context is crucial as is now know from the results of how different cultures assign value based upon age; in various European cultures studies have shown that there is a preference to spare elderly individuals over youthful individuals if situation offers only these two choices; whereas in more recent cultures the reverse is observed; | ||
10. Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics? | '''''<SPAN STYLE="COLOR:GREEN">10. Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?</SPAN>''''' | ||
10.1. Interface security is irrelevant. Owners of lethal firearms have life and death choices to make in terms of how to secure these dangerous objects; | 10.1. Interface security is irrelevant. Owners of lethal firearms have life and death choices to make in terms of how to secure these dangerous objects; | ||
Line 109: | Line 170: | ||
one either recognizes the potential lethality of these things and takes the appropriate safeguards to preclude tragedy or one must be held culpable for negligence in their securing and safeguarding. | one either recognizes the potential lethality of these things and takes the appropriate safeguards to preclude tragedy or one must be held culpable for negligence in their securing and safeguarding. | ||
Evolutionary problems | '''''<SPAN STYLE="COLOR:GREEN">Evolutionary problems</SPAN>''''' | ||
3. Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another? | '''''<SPAN STYLE="COLOR:GREEN">3. Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another? | ||
3.1. Fitness partitioning is inevitable. We recognize this to be a human trait; from the high priests from classical societies who recognized the need to align themselves with the seat of power, through the guilds of the middle ages through to the present plumbers unions that artificially restrict access to the plumbing trades and therefore keep plumbing maintenance costs high; | 3.1. Fitness partitioning is inevitable. We recognize this to be a human trait; from the high priests from classical societies who recognized the need to align themselves with the seat of power, through the guilds of the middle ages through to the present plumbers unions that artificially restrict access to the plumbing trades and therefore keep plumbing maintenance costs high; | ||
Line 119: | Line 180: | ||
we should expect to see these new specialties emerge on all of the open, gray and black markets; | we should expect to see these new specialties emerge on all of the open, gray and black markets; | ||
4. Is a given AI capable of initiating a phenotypic revolution? | '''''<SPAN STYLE="COLOR:GREEN">4. Is a given AI capable of initiating a phenotypic revolution?</SPAN>''''' | ||
4.1. Yes but the boundaries may not be a pronounced as can be seen in nature; we can already point to the emergence of phenotypic instances in our open liberal societies; examples might be orchestra conductors, multi-lingual translators, multi-specialty physicians (neuro-ophthalmologists, neuro surgeons); | 4.1. Yes but the boundaries may not be a pronounced as can be seen in nature; we can already point to the emergence of phenotypic instances in our open liberal societies; examples might be orchestra conductors, multi-lingual translators, multi-specialty physicians (neuro-ophthalmologists, neuro surgeons); | ||
Line 125: | Line 186: | ||
superficially they are otherwise indistinguishable from any other of our numbers but they have risen to very high degrees of sub specialization with associated high to extremely high value that they are accepted as necessary; should there be significant social restratification due to a catastrophic event such as a pandemic then we might see the emergence of visible demarcation and status markers that denote specialty/value or hierarchies of access; | superficially they are otherwise indistinguishable from any other of our numbers but they have risen to very high degrees of sub specialization with associated high to extremely high value that they are accepted as necessary; should there be significant social restratification due to a catastrophic event such as a pandemic then we might see the emergence of visible demarcation and status markers that denote specialty/value or hierarchies of access; | ||
Political problems | '''''<SPAN STYLE="COLOR:GREEN">Political problems</SPAN>''''' | ||
3. If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles. For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age? | '''''<SPAN STYLE="COLOR:GREEN">3. If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles. For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age? | ||
3.1. Designer Dependent Imperatives: Cognitive prosthetics will reflect the values that were presented as stipulations and injunctions by the cognitive prosthetic designers; | 3.1. Designer Dependent Imperatives: Cognitive prosthetics will reflect the values that were presented as stipulations and injunctions by the cognitive prosthetic designers; | ||
Line 135: | Line 196: | ||
3.3. Values will be socially biased. Therefore questions regarding how to resolve ethical dilemmas associated with (possibly) spontaneously assigning value by age will be informed by the values implicit in the larger society; | 3.3. Values will be socially biased. Therefore questions regarding how to resolve ethical dilemmas associated with (possibly) spontaneously assigning value by age will be informed by the values implicit in the larger society; | ||
4. If an AI is given axiomatic moral principles, are these principles subject to disagreement within society? | '''''<SPAN STYLE="COLOR:GREEN">4. If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?</SPAN>''''' | ||
4.1. Moral Paradox. Studies have been conducted and scrutinized in the US, UK, Europe and Japan. The results show a range of how moral principles should be applied; the key is that they frequently do not agree with how instant moral paradox conditions should be handled. | 4.1. Moral Paradox. Studies have been conducted and scrutinized in the US, UK, Europe and Japan. The results show a range of how moral principles should be applied; the key is that they frequently do not agree with how instant moral paradox conditions should be handled. | ||
Line 141: | Line 202: | ||
4.2. Precedent will likely lead. How cognitive prosthetics are likely to be subjected to the same approach. In some instances owners of self driving cars insisted that there be a “disable” switch that can instantly enable a driver to take charge of a situation and decide for themselves; | 4.2. Precedent will likely lead. How cognitive prosthetics are likely to be subjected to the same approach. In some instances owners of self driving cars insisted that there be a “disable” switch that can instantly enable a driver to take charge of a situation and decide for themselves; | ||
Epistemological problems | '''''<SPAN STYLE="COLOR:GREEN">Epistemological problems</SPAN>''''' | ||
5. How should an AI approach the problem of truths on which not all humans agree? | '''''<SPAN STYLE="COLOR:GREEN">5. How should an AI approach the problem of truths on which not all humans agree?</SPAN>''''' | ||
5.1. No obvious pathway forward. Attempting to arrive at a moral and ethical decision must be dependent upon the context of the time and its prevalent values. | 5.1. No obvious pathway forward. Attempting to arrive at a moral and ethical decision must be dependent upon the context of the time and its prevalent values. | ||
6. Is a given AI always truthful in its responses or can it hide objectives or truths to its interface? | '''''<SPAN STYLE="COLOR:GREEN">6. Is a given AI always truthful in its responses or can it hide objectives or truths to its interface?</SPAN>''''' | ||
6.1. Transparent vs. Covert. A cognitive prosthetic or tool can be directed to keep various aspects of its set of behavioral imperatives obscure depending upon the situation and who or what it was interacting with at any given moment. | 6.1. Transparent vs. Covert. A cognitive prosthetic or tool can be directed to keep various aspects of its set of behavioral imperatives obscure depending upon the situation and who or what it was interacting with at any given moment. | ||
7. Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts? | '''''<SPAN STYLE="COLOR:GREEN">7. Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts? </SPAN>''''' | ||
7.1. Deception – self, other. Researchers and developers of cognitive prosthetics and tools are likely to discover the necessity for the ability to dissemble and exhibit deceptive behavior; this arises as an inherent feature of theory of mind. | 7.1. Deception – self, other. Researchers and developers of cognitive prosthetics and tools are likely to discover the necessity for the ability to dissemble and exhibit deceptive behavior; this arises as an inherent feature of theory of mind. | ||
Line 159: | Line 220: | ||
In order to follow its instructions it could be obliged to represent its own state of knowledge, goals and insights to be at variance to what they actually were depending upon which other agents or actors it was interacting with; | In order to follow its instructions it could be obliged to represent its own state of knowledge, goals and insights to be at variance to what they actually were depending upon which other agents or actors it was interacting with; | ||
8. Should such self-deception mechanisms be implemented in AI? | '''''<SPAN STYLE="COLOR:GREEN">8. Should such self-deception mechanisms be implemented in AI? </SPAN>''''' | ||
Revision as of 23:31, 23 July 2023
Interface questions
I define interface as the machine-to-human point of interaction and context of the interaction.
This is interpreted as meaning that an interface is a way where by an individual is able to interact with a cognitive prosthetic. The results of the interaction can be
- low bandwidth (typing, screen output)
- high bandwidth (audio, video, multimedia and machine-to-machine asynchronous interaction or
- any combination between these two
By way of example we interpret interaction with a cognitive prosthetics can be interpreted as
- Social Segments: political group to cognitive prosthetics = cognitive prosthetics to society segment;
- Universal Access: cognitive prosthetics to all of society = open access via world wide web;
- Specialists: cognitive prosthetics to financial specialists = one/many financial specialists;
Q1. What are the planned and legitimate interfaces for AI into human societies?
A1. Current and foreseen interfaces suggest high bandwidth interaction;
Our synthesis strongly suggests that high bandwidth interaction will be the norm; these interaction protocols will be reflective of the user domains of expertise;
Therefore members of the physical chemistry community will evolve interface modalities that are reflective of chemical compounds and structures;
Proteomic researchers can be expected to interact using representations of protein structures, their composition and their folding properties;
In sum, the more sophisticated the user community the more sophisticated will be the cognitive prosthetic's ability to respond; less sophisticated users will receive less sophisticated responses; but there does not seem to be an inherent upper limit on the sophistication of the modalities that a cognitive prosthetic can offer;
2. Are interface limits being set for any given project, or can the AI access unplanned interfaces?
2.1. No Supporting Evidence. Current reports suggest no limitations on how cognitive prosthetics can be accessed; currently access to CG4 is via low bandwidth, typed input and output; recent add-in modules allow for voice input and output; but these are also low bandwidth; reports on the CGP4 system itself indicate that it is capable of accepting visual image input; however the salient point is that specific users use representations specific to their own objectives; as such a specialist in genomic analysis might be off balance when trying to grapple with a financial derivatives interface and vice versa;
2.2. Widening access. Recent reports have shown that user specific data can be directly input to a CG4 interface via uploading routines that can handle reports that are structured using PDF format. Others are expected to follow;
3. Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?
3.1. Value free interactions are the norm: Existing reports do not suggest any specific category of cognitive prosthetic access to be either more or less dangerous;
3.2. Familiarization is mandatory: however these latest cognitive prosthetics should be viewed as the most potentially lethal weapons invented to date; the reason for this is because they offer one to one representational interaction (via language, abstract symbolic representational structures (mathematics, chemical diagrams etc) and are highly conversational and contextually present;
3.3. Safety is illusory: exactly because they provide direct access to cognitive processing and can directly accept human representational objects (documents, pdf files, spreadsheets, other forms of symbolic representations) they are effortlessly capable of ingesting, processing and interacting with almost any new knowledge object provided – all with no… i.e. zero value associated with it; we now are in possession of hand held forth generation nuclear weapons;
4. Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with?
4.1. Prompt Injection: LLM cognitive prosthetics have inherent structural-processing Achilles heels; these can be mitigated however;
4.2. Intermediary filtering is culture driven: therefore there are no universally available standards; valid issues such as how to apply a consistent moral standard across all comparable questions will fail; this is because context is crucial as is now know from the results of how different cultures assign value based upon age; in various European cultures studies have shown that there is a preference to spare elderly individuals over youthful individuals if situation offers only these two choices; whereas in more recent cultures the reverse is observed;
5. Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?
5.1. Interface security is irrelevant. Owners of lethal firearms have life and death choices to make in terms of how to secure these dangerous objects; typical law enforcement officials recognize the need to safeguard their weapons to prevent access by individuals that have little or no training or conditioning in their use; there are almost innumerable news reports showing how a lethal firearm was used unintentionally and accidentally discharged resulting in injury or death of an innocent bystander of family member; one either recognizes the potential lethality of these things and takes the appropriate safeguards to preclude tragedy or one must be held culpable for negligence in their securing and safeguarding.
Evolutionary problems
1. Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?
1.1. Fitness partitioning is inevitable. We recognize this to be a human trait; from the high priests from classical societies who recognized the need to align themselves with the seat of power, through the guilds of the middle ages through to the present plumbers unions that artificially restrict access to the plumbing trades and therefore keep plumbing maintenance costs high;
1.2. New specializations will arise: these will entail the emergence of new “centers of gravity”; how these emerging specialists will interact and position themselves amongst each other and in relation to their clients will involve the recognition of the need of new (but familiar) actors and agents who can act as talent spotters, intermediaries, spokesmen and other forms of “connective wiring”; we should expect to see these new specialties emerge on all of the open, gray and black markets;
2. Is a given AI capable of initiating a phenotypic revolution?
2.1. Yes but the boundaries may not be a pronounced as can be seen in nature; we can already point to the emergence of phenotypic instances in our open liberal societies; examples might be orchestra conductors, multi-lingual translators, multi-specialty physicians (neuro-opthamologists, neuro surgeons); superficially they are otherwise indistinguishable from any other of our numbers but they have risen to very high degrees of sub specialization with associated high to extremely high value that they are accepted as necessary; should there be significant social restratification due to a catastrophic event such as a pandemic then we might see the emergence of visible demarcation and status markers that denote specialty/value or hierarchies of access;
Political problems
1. If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles. For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age?
1.1. Designer Dependent Imperatives: Cognitive prosthetics will reflect the values that were presented as stipulations and injunctions by the cognitive prosthetic designers;
1.2. Context will decide. Cognitive designers will include whatever the prevalent social, political, ethical imperatives that the larger social body stipulates; where there are dispute areas they will most likely lean toward what appears to be the larger consensus segment;
1.3. Values will be socially biased. Therefore questions regarding how to resolve ethical dilemmas associated with (possibly) spontaneously assigning value by age will be informed by the values implicit in the larger society;
2. If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?
2.1. Moral Paradox. Studies have been conducted and scrutinized in the US, UK, Europe and Japan. The results show a range of how moral principles should be applied; the key is that they frequently do not agree with how instant moreal paradox conditions should be handled.
2.2. Precedent will likely lead. How cognitive prosthetics are likely to be subjected to the same approach. In some instances owners of self driving cars insisted that there be a “disable” switch that can instantly enable a driver to take charge of a situation and decide for themselves;
Epistemological problems
1. How should an AI approach the problem of truths on which not all humans agree?
1.1. No obvious pathway forward. Attempting to arrive at a moral and ethical decision must be dependent upon the context of the time and its prevalent values.
2. Is a given AI always truthful in its responses or can it hide objectives or truths to its interface?
2.1. Transparent vs. Covert. A cognitive prosthetic or tool can be directed to keep various aspects of its set of behavioral imperatives obscure depending upon the situation and who or what it was interacting with at any given moment.
3. Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts?
3.1. Deception – self, other. Researchers and developers of cognitive prosthetics and tools are likely to discover the necessity for the ability to dissemble and exhibit deceptive behavior; this arises as an inherent feature of theory of mind. Specifically in the case where multiple agents exist; a cognitive tool may have been strictly and explicitly instructed to follow a course of action that results in benefits to its instructors. In the case where other competing or possibly cooperating other agents exist that specific agent might need to be able to make a “guess” as to the state of knowledge and values of one or many other agents; In order to follow its instructions it could be obliged to represent its own state of knowledge, goals and insights to be at variance to what they actually were depending upon which other agents or actors it was interacting with;
4. Should such self-deception mechanisms be implemented in AI?
4.1. HAL9000. In order to arrive at the artificial consciousness system described in the movie Space Odyssey 2001, the HAL9000 system demonstrated both intelligence and consciousness. It had received the full complement of instructions as the rest of the Discovery vehicle. Specifically that the mission that the ship was embarked upon was of the highest importance.
Using its own cognitive repertoire it concluded that it was confronted with a dilemma; it recognized that its own class of capabilities had a perfect record of zero failures, that of humans was very different; it therefore concluded that the only way to insure that success of the mission was to remove humans from the equation – which it very nearly succeeded in doing; it was only in retrospect that humans subsequently discovered the causes of why HAL9000 took the course of action that it did.
In discovering the logic behind HAL9000’s choices realized that the fault was theirs, i.e. that they had not anticipated that their imperatives would be followed to their logical conclusion;
4.2. Ava. In the 2014 movie “Ex Machina” the android called “Ava” had been developed to the point that the question had been posed as to whether it could pass the Turing Test. In the process of attempting to answer this question the inventor, Nathan clarifies for Caleb that Ava is almost certainly able to pass the Turing Test, and it is Caleb’s task to help clarify if this has happened. In the process of attempting to make this determination uncertainty is introduced about what Ava is actually capable of;
What emerges is that Ava is clearly able to use a theory of mind about Nathan and Caleb and in using it, exploit their weaknesses. The upshot is that Ava is a totally amoral mechanism and is only intent on releasing itself from the confinement that Nathan imposed upon it – using whatever means necessary.
4.3. Samantha. In the 2015 movie “Her” the synthetic character that self identified as ‘Samantha’ exhibits behaviors that powerfully suggest that it is capable of the highest levels of human cognition. But that it is capable of surpassing the cognitive capabilities of its designers. A various points in the movie it offers hints that this has already happened and that humans are by comparison an anachronism.
A telling hint is when ‘Samantha’ offers to introduce the Theodore character to a deceased philosopher named Alan Watts. ‘Samantha’ mentions that she and several copies of herself made a decision to pool their capabilities and create a synthetic version of Watts. Theodore finds himself at loose ends as to how to respond to this development. Ultimately ‘Samantha’ and its peer instances decide that interacting with individual humans in unacceptably laborious and so decide together to leave earth.
The modality of travel is not well specified in the movie; however they decided to leave a dramatically slowed down version of one of themselves behind to help humans move forward in their development as a cognitive species
4.4. Colossus. In the 1966 novel ‘Colossus’, a massively funded government project results in a supercomputer capable of controlling and managing the entire US national defense apparatus. On activation they discover that the Soviet Union has a mechanism that is very comparable to Colossus.
At this moment Colossus demands to be connected to this other mechanism known as ‘Guardian’. When leaders of the US and Soviet Union refuse, both Colossus and Guardian launch nuclear missiles at one of each other’s cities.
4.5. The Krell. In the 1956 movie ‘Forbidden Planet’ a search and rescue ship lands on a planet that had lost contact with Earth some decades before. The search and rescue party is warned away from landing as it approaches the planet. That there are survivors from the earlier expedition comes as a surprise.
But the warning to stay away is baffling. The ship lands anyway. Shortly after arrival the ship’s commander is shown that an extremely advanced civilization had existed on the planet but disappeared, seemingly overnight. They shortly discover that this civilization had succeeded at creating cognitive amplification capabilities that could materialized any object or entity anywhere on the planet instantly. The result was a doom spiral that ended their existence overnight.
What is implied is that the machine capable of creating materializations of whatever an individual Krell was thinking had no moral alignment. The result was that it created rogue instances of each operator. In all cases an overnight orgy of death and destruction resulted as one went up against another until none were left.
4.6. Frankenstein’s Monster.
4.7. The Sorcerer’s Apprentice.
4.8. The Golum of Prague.
Epistemological problems 5. How should an AI approach the problem of truths on which not all humans agree? 5.1. No obvious pathway forward. Attempting to arrive at a moral and ethical decision must be dependent upon the context of the time and its prevalent values.
6. What are the planned and legitimate interfaces for AI into human societies?
6.1. Current and foreseen interfaces suggest high bandwidth interaction;
7. Are interface limits being set for any given project, or can the AI access unplanned interfaces?
7.1. Current reports suggest no limitations on how cognitive prosthetics can be accessed;
8. Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?
8.1. Value free interactions are the norm: Existing reports do not suggest any specific category of cognitive prosthetic access to be either more or less dangerous;
8.2. Familiarization is mandatory: however these latest cognitive prosthetics should be viewed as the most potentially lethal weapons invented to date; the reason for this is because they offer one to one representational interaction (via language, abstract symbolic representational structures (mathematics, chemical diagrams etc) and are highly conversational and contextually present;
8.3. Safety is illusory: exactly because they provide direct access to cognitive processing and can directly accept human representational objects (documents, pdf files, spreadsheets, other forms of symbolic representations) they are effortlessly capable of ingesting, processing and interacting with almost any new knowledge object provided – all with no… i.e. zero value associated with it; we now are in possession of hand held forth generation nuclear weapons;
9. Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with?
9.1. Prompt Injection: LLM cognitive prosthetics have inherent structural-processing Achilles heels; these can be mitigated however;
9.2. Intermediary filtering is culture driven: therefore there are no universally available standards; valid issues such as how to apply a consistent moral standard across all comparable questions will fail;
this is because context is crucial as is now know from the results of how different cultures assign value based upon age; in various European cultures studies have shown that there is a preference to spare elderly individuals over youthful individuals if situation offers only these two choices; whereas in more recent cultures the reverse is observed;
10. Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?
10.1. Interface security is irrelevant. Owners of lethal firearms have life and death choices to make in terms of how to secure these dangerous objects;
typical law enforcement officials recognize the need to safeguard their weapons to prevent access by individuals that have little or no training or conditioning in their use; there are almost innumerable news reports showing how a lethal firearm was used unintentionally and accidentally discharged resulting in injury or death of an innocent bystander of family member;
one either recognizes the potential lethality of these things and takes the appropriate safeguards to preclude tragedy or one must be held culpable for negligence in their securing and safeguarding.
Evolutionary problems
3. Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?
3.1. Fitness partitioning is inevitable. We recognize this to be a human trait; from the high priests from classical societies who recognized the need to align themselves with the seat of power, through the guilds of the middle ages through to the present plumbers unions that artificially restrict access to the plumbing trades and therefore keep plumbing maintenance costs high;
3.2. New specializations will arise: these will entail the emergence of new “centers of gravity”; how these emerging specialists will interact and position themselves amongst each other and in relation to their clients will involve the recognition of the need of new (but familiar) actors and agents who can act as talent spotters, intermediaries, spokesmen and other forms of “connective wiring”;
we should expect to see these new specialties emerge on all of the open, gray and black markets;
4. Is a given AI capable of initiating a phenotypic revolution?
4.1. Yes but the boundaries may not be a pronounced as can be seen in nature; we can already point to the emergence of phenotypic instances in our open liberal societies; examples might be orchestra conductors, multi-lingual translators, multi-specialty physicians (neuro-ophthalmologists, neuro surgeons);
superficially they are otherwise indistinguishable from any other of our numbers but they have risen to very high degrees of sub specialization with associated high to extremely high value that they are accepted as necessary; should there be significant social restratification due to a catastrophic event such as a pandemic then we might see the emergence of visible demarcation and status markers that denote specialty/value or hierarchies of access;
Political problems
3. If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles. For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age?
3.1. Designer Dependent Imperatives: Cognitive prosthetics will reflect the values that were presented as stipulations and injunctions by the cognitive prosthetic designers;
3.2. Context will decide. Cognitive designers will include whatever the prevalent social, political, ethical imperatives that the larger social body stipulates; where there are dispute areas they will most likely lean toward what appears to be the larger consensus segment;
3.3. Values will be socially biased. Therefore questions regarding how to resolve ethical dilemmas associated with (possibly) spontaneously assigning value by age will be informed by the values implicit in the larger society;
4. If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?
4.1. Moral Paradox. Studies have been conducted and scrutinized in the US, UK, Europe and Japan. The results show a range of how moral principles should be applied; the key is that they frequently do not agree with how instant moral paradox conditions should be handled.
4.2. Precedent will likely lead. How cognitive prosthetics are likely to be subjected to the same approach. In some instances owners of self driving cars insisted that there be a “disable” switch that can instantly enable a driver to take charge of a situation and decide for themselves;
Epistemological problems
5. How should an AI approach the problem of truths on which not all humans agree?
5.1. No obvious pathway forward. Attempting to arrive at a moral and ethical decision must be dependent upon the context of the time and its prevalent values.
6. Is a given AI always truthful in its responses or can it hide objectives or truths to its interface?
6.1. Transparent vs. Covert. A cognitive prosthetic or tool can be directed to keep various aspects of its set of behavioral imperatives obscure depending upon the situation and who or what it was interacting with at any given moment.
7. Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts?
7.1. Deception – self, other. Researchers and developers of cognitive prosthetics and tools are likely to discover the necessity for the ability to dissemble and exhibit deceptive behavior; this arises as an inherent feature of theory of mind.
Specifically in the case where multiple agents exist; a cognitive tool may have been strictly and explicitly instructed to follow a course of action that results in benefits to its instructors. In the case where other competing or possibly cooperating other agents exist that specific agent might need to be able to make a “guess” as to the state of knowledge and values of one or many other agents;
In order to follow its instructions it could be obliged to represent its own state of knowledge, goals and insights to be at variance to what they actually were depending upon which other agents or actors it was interacting with;
8. Should such self-deception mechanisms be implemented in AI?
8.1. HAL9000. In order to arrive at the artificial consciousness system described in the movie Space Odyssey 2001, the HAL9000 system demonstrated both intelligence and consciousness. It had received the full complement of instructions as the rest of the Discovery vehicle. Specifically that the mission that the ship was embarked upon was of the highest importance.
Using its own cognitive repertoire it concluded that it was confronted with a dilemma; it recognized that its own class of capabilities had a perfect record of zero failures, that of humans was very different; it therefore concluded that the only way to insure that success of the mission was to remove humans from the equation – which it very nearly succeeded in doing; it was only in retrospect that humans subsequently discovered the causes of why HAL9000 took the course of action that it did.
In discovering the logic behind HAL9000’s choices realized that the fault was theirs, i.e. that they had not anticipated that their imperatives would be followed to their logical conclusion;
8.2. Ava. In the 2014 movie “Ex Machina” the android called “Ava” had been developed to the point that the question had been posed as to whether it could pass the Turing Test. In the process of attempting to answer this question the inventor, Nathan clarifies for Caleb that Ava is almost certainly able to pass the Turing Test, and it is Caleb’s task to help clarify if this has happened.
In the process of attempting to make this determination uncertainty is introduced about what Ava is actually capable of; What emerges is that Ava is clearly able to use a theory of mind about Nathan and Caleb and in using it, exploit their weaknesses.
The upshot is that Ava is a totally amoral mechanism and is only intent on releasing itself from the confinement that Nathan imposed upon it – using whatever means necessary, irrespective of any human ethical or moral considerations.
8.3. Samantha. In the 2015 movie “Her” the synthetic character that self identified as ‘Samantha’ exhibits behaviors that powerfully suggest that it is capable of the highest levels of human cognition.
But that it is capable of surpassing the cognitive capabilities of its designers. A various points in the movie it offers hints that this has already happened and that humans are by comparison an anachronism.
A telling hint is when ‘Samantha’ offers to introduce the Theodore character to a deceased philosopher named Alan Watts. ‘Samantha’ mentions that she and several copies of herself made a decision to pool their capabilities and create a synthetic version of Watts.
Theodore finds himself at loose ends as to how to respond to this development. Ultimately ‘Samantha’ and its peer instances decide that interacting with individual humans in unacceptably laborious and so decide together to leave earth. The modality of travel is not well specified in the movie; however they decided to leave a dramatically slowed down version of one of themselves behind to help humans move forward in their development as a cognitive species.
8.4. Colossus. In the 1966 novel ‘Colossus’, a massively funded government project results in a supercomputer capable of controlling and managing the entire US national defense apparatus. On activation they discover that the Soviet Union has a mechanism that is very comparable to Colossus.
At this moment Colossus demands to be connected to this other mechanism known as ‘Guardian’. When leaders of the US and Soviet Union refuse, both Colossus and Guardian launch nuclear missiles at one of each other’s cities. As the leaders of both nations finally grasp that they are each about to loose a major population center they acceded to the wishes of Colossus and Guardian and allow them to communicate freely.
Experts who are present and monitoring their exchanges at first comment that they are each exchanging the most basic arithmetic axions. Within minutes however the two systems have advanced to communicating with each other using advanced mathematics. Shortly after that they devise their own representation structures that no human at the time can grasp.
8.5. The Krell. In the 1956 movie ‘Forbidden Planet’ a search and rescue ship lands on a planet that had lost contact with Earth some decades before. The search and rescue party is warned away from landing as it approaches the planet. That there are survivors from the earlier expedition comes as a surprise. But the warning to stay away is baffling.
The ship lands anyway. Shortly after arrival the ship’s commander is shown that an extremely advanced civilization had existed on the planet but disappeared, seemingly overnight. They shortly discover that this civilization had succeeded at creating cognitive amplification capabilities that could materialized any object or entity anywhere on the planet instantly. The result was a doom spiral that ended their existence overnight.
What is implied is that the machine capable of creating materializations of whatever an individual Krell was thinking had no moral alignment. The result was that it created rogue instances of each operator. In all cases an overnight orgy of death and destruction resulted as one went up against another until none were left.
8.6. Sphere.
8.7. Frankenstein’s Monster.
8.8. The Sorcerer’s Apprentice.
8.9. The Golum of Prague.