Difference between revisions of "ChatGPT4-Questions-Discussion01"

From arguably.io
Jump to navigation Jump to search
 
(46 intermediate revisions by the same user not shown)
Line 1: Line 1:


'''''<SPAN STYLE="COLOR:BLUE">Interface questions</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">Q01 Interface Questions</SPAN>'''''


'''''<SPAN STYLE="COLOR:GREEN">I define interface as the machine-to-human point of interaction and context of the interaction. </SPAN>'''''
'''''<SPAN STYLE="COLOR:GREEN">I define interface as the machine-to-human point of interaction and context of the interaction. </SPAN>'''''
By way of clarification we interpret take the position that ChatGPT4 is a tool specifically designed and implemented to assist in
By way of clarification we interpret take the position that ChatGPT4 is a tool specifically designed and implemented to assist in
performing cognitive tasks. We therefore characterize it as being a Cognitive Prosthetic (CP).
performing cognitive tasks. We therefore characterize it as being a '''''Cognitive Prosthetic (CP)'''''.


We interpret this to mean that an interface is a way whereby an individual or community is able to interact with a CP. The results of the interaction can be  
We interpret this to mean that an interface is a way whereby an individual or community is able to interact with a CP. The results of the interaction can be  


* low bandwidth (typing, screen output)  
* '''''low bandwidth'''''  (typing, screen output)  
* high bandwidth (audio, video, multimedia and machine-to-machine asynchronous interaction or
* '''''high bandwidth''''' (audio, video, multimedia and machine-to-machine asynchronous interaction or
* any combination between these two
* '''''any combination''''' between these two


By way of example we interpret '''''interaction with a CP''''' to be interpreted as
By way of example we interpret '''''Interaction with a CP''''' to be mean:


* '''''Social Segments''''' (i.e. communities): e.g. a political group interacting with a CP = CP to segment of society or... e.g. a '''''community''''';
* '''''Social Segments: ''''' (i.e. communities): e.g. a political group interacting with a CP = CP to segment of society or... e.g. a '''''community''''';
* '''''Universal Access:''''' CP to all of society = open access via world wide web;
* '''''Universal Access:''''' CP to all of society = open access via world wide web;
* '''''Specialists:''''' CP to financial specialists = one/many financial specialists;
* '''''Specialists:''''' CP to financial specialists = one/many financial specialists;


'''''<SPAN STYLE="COLOR:GREEN">Q1. What are the planned and legitimate interfaces for AI into human societies? </SPAN>'''''
'''''<SPAN STYLE="COLOR:GREEN">INTERFACE QUESTION: What are the planned and legitimate interfaces for AI into human societies? </SPAN>'''''


'''''<SPAN STYLE="COLOR:BLUE">A1. Current and foreseen interfaces suggest high bandwidth interaction; </SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">INTERFACE  QUESTION: Current and foreseen interfaces suggest high bandwidth interaction; </SPAN>'''''


Our synthesis strongly suggests that high bandwidth interaction will be the norm; these interaction protocols will
We believe that high bandwidth interaction will be the norm; these interaction protocols will be reflective of the user domains of expertise;  
be reflective of the user domains of expertise;  
* '''''Chemists.''''' Chemistry community members will evolve interface modalities that reflect analysis and problem solving of chemical compounds and structures;
* '''''Proteomic researchers.''''' Genomic and protein analysis will reflect interactions that represent protein structures, composition and folding properties;
* '''''Mathematicians. ''''' Many problems that mathematicians discuss or present often use advanced forms of mathematics that will be unfamiliar or opaque to a layman;
* '''''Philosophers - Moral or Ethical.''''' A typical discourse in philosophy will be dominated by narrative text. Therefore it is reasonable to expect that the most sophisticated interaction that this community might use can be limited to textual content.


Therefore members of the physical chemistry community will evolve interface modalities that are reflective of chemical
User community interaction will track sophistication of the user community. The more representationally sophisticated the user community the more sophisticated the interface used to interact with the CP; Less sophisticated users will require less sophisticated responses; however no upper limit on the sophistication of the modalities that a CP can offer;
compounds and structures;  


Proteomic researchers can be expected to interact using representations of protein structures, their composition and their folding
'''''<SPAN STYLE="COLOR:GREEN">INTERFACE QUESTION: Are interface limits being set for any given project, or can the AI access unplanned interfaces?</SPAN>'''''
properties;


In sum, the more sophisticated the user community the more sophisticated will be the CP's ability to respond;
'''''<SPAN STYLE="COLOR:BLUE"> No Supporting Evidence. </SPAN>''''' Current reports suggest no limitations on how a CP can be accessed;  
less sophisticated users will receive less sophisticated responses; but there does not seem to be an inherent upper limit on the
sophistication of the modalities that a cognitive prosthetic can offer;


'''''<SPAN STYLE="COLOR:GREEN">2. Are interface limits being set for any given project, or can the AI access unplanned interfaces?</SPAN>'''''
Currently access to CG4 is via low bandwidth, typed input and output; recent add-in modules allow for voice input and output; but these are also low bandwidth;
reports on the CGP4 system itself indicate that it is capable of accepting visual image input; specific users will use representations specific to their own objectives;


'''''<SPAN STYLE="COLOR:GREEN">3. Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE"> Widening Access. </SPAN>'''''Recent reports have shown that user specific data can be directly input to a CG4 interface via uploading routines that can handle reports that are structured input such as PDF format. Others are expected to follow;


'''''<SPAN STYLE="COLOR:GREEN">4. Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with?</SPAN>'''''
'''''<SPAN STYLE="COLOR:GREEN">INTERFACE QUESTION: </SPAN>'''''Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?</SPAN>'''''


'''''<SPAN STYLE="COLOR:GREEN">5. Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">Value Free Interactions are the Norm: </SPAN>'''''Existing reports do not suggest any specific category of CP access to be either more or less dangerous;


'''''<SPAN STYLE="COLOR:BLUE">Evolutionary problems</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE"> Familiarization is Mandatory: </SPAN>'''''however these latest CP should be viewed as the most potentially lethal weapons invented to date; the reason for this is because they offer one to one representational interaction (via language, abstract symbolic representational structures (mathematics, chemical diagrams etc) and are highly conversational and contextually present;


'''''<SPAN STYLE="COLOR:GREEN">Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">Safety is Illusory: </SPAN>'''''exactly because they provide direct access to cognitive processing and can directly accept human representational objects (documents, pdf files, spreadsheets, other forms of symbolic representations) they are effortlessly capable of ingesting, processing and interacting with almost any new knowledge object provided – all with no… i.e. zero value associated with it; we now are in possession of hand held forth generation nuclear weapons;


'''''<SPAN STYLE="COLOR:GREEN">Is a given AI capable of initiating a phenotypic revolution?</SPAN>'''''
'''''<SPAN STYLE="COLOR:GREEN">INTERFACE QUESTION: Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with?</SPAN>'''''


'''''<SPAN STYLE="COLOR:BLUE">Political Problems</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">Prompt Injection: </SPAN>'''''LLM CP's have inherent structural-processing Achilles heels; these can be mitigated however;


'''''<SPAN STYLE="COLOR:GREEN">If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles. For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age? </SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">Intermediary Filtering is Culture Driven: </SPAN>'''''
* there are no universally available standards; valid issues such as how to apply a consistent moral standard across all comparable questions will fail;


'''''<SPAN STYLE="COLOR:GREEN">If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?</SPAN>'''''
* this is because context is crucial as is now know from the results of how different cultures assign value based upon age;


'''''<SPAN STYLE="COLOR:BLUE">Epistemological problems</SPAN>'''''
* in various European cultures studies have shown that there is a preference to spare elderly individuals over youthful individuals if situation offers only these two choices, whereas in more recent cultures the reverse is observed;


'''''<SPAN STYLE="COLOR:GREEN">Is a given AI always truthful in its responses or can it hide objectives or truths to its interface?</SPAN>'''''
'''''<SPAN STYLE="COLOR:GREEN">INTERFACE QUESTION: Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?</SPAN>'''''


'''''<SPAN STYLE="COLOR:GREEN">Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts? </SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">Interface Security is Irrelevant. </SPAN>'''''


'''''<SPAN STYLE="COLOR:GREEN">Should such self-deception mechanisms be implemented in AI? </SPAN>'''''
* Owners of lethal firearms have life and death choices to make in terms of how to secure these dangerous objects; typical law enforcement officials recognize the need to safeguard their weapons to prevent access by individuals that have little or no training or conditioning in their use;


'''''<SPAN STYLE="COLOR:BLUE">Epistemological problems</SPAN>'''''
* there are almost innumerable news reports showing how a lethal firearm was used unintentionally and accidentally discharged resulting in injury or death of an innocent bystander of family member;


'''''<SPAN STYLE="COLOR:GREEN">How should an AI approach the problem of truths on which not all humans agree?
* one either recognizes the potential lethality of these things and takes the appropriate safeguards to preclude tragedy or one must be held culpable for negligence in their securing and safeguarding.


'''''<SPAN STYLE="COLOR:GREEN">What are the planned and legitimate interfaces for AI into human societies? </SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">SYNTHESIS. </SPAN>''''' Our conclusion is that:
* a CP will typically be domain specific and tailored to the end user's specific interests and needs;
* increasingly common practice when accessing a information system or a CP will incorporate multimodal interaction including audio, video, graphics and animation; interaction with speech input and output will become an increasingly normal means of interaction;
* controlling a CP that is used for specific purposes will require some degree of familiarization and training; however we expect that intelligence and context sensitive help will enable a wide range of users to access and control many of the functions of a CP; however, in order to use the more advanced functionality will presuppose familiarity with the subject that the CP is specialized to offer;


'''''<SPAN STYLE="COLOR:GREEN">Are interface limits being set for any given project, or can the AI access unplanned interfaces?</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">Q02 EVOLUTIONARY PROBLEMS</SPAN>'''''


'''''<SPAN STYLE="COLOR:GREEN">Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?
'''''<SPAN STYLE="COLOR:GREEN">EVOLUTION QUESTION: Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?</SPAN>'''''


'''''<SPAN STYLE="COLOR:GREEN">Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with?</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">Fitness Partitioning is Inevitable. </SPAN>'''''We recognize this to be a human trait; from the high priests from classical societies who recognized the need to align themselves with the seat of power, through the guilds of the middle ages through to the present plumbers unions that artificially restrict access to the plumbing trades and therefore keep plumbing maintenance costs high;


'''''<SPAN STYLE="COLOR:GREEN">10. Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">New Specializations Will Arise: </SPAN>'''''these will entail the emergence of new “centers of gravity”; how these emerging specialists will interact and position themselves amongst each other and in relation to their clients will involve the recognition of the need of new (but familiar) actors and agents who can act as talent spotters, intermediaries, spokesmen and other forms of “connective wiring”; we should expect to see these new specialties emerge on all of the open, gray and black markets;


'''''<SPAN STYLE="COLOR:BLUE">Evolutionary problems</SPAN>'''''
'''''<SPAN STYLE="COLOR:GREEN">EVOLUTION QUESTION: Is a given AI capable of initiating a phenotypic revolution?</SPAN>'''''


'''''<SPAN STYLE="COLOR:GREEN">Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?
'''''<SPAN STYLE="COLOR:BLUE">Yes. </SPAN>'''''However visible phenotypic markers will probably be absent;


'''''<SPAN STYLE="COLOR:GREEN">4. Is a given AI capable of initiating a phenotypic revolution?</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">SYNTHESIS: </SPAN>'''''
* we believe that  emergence of new phenotypic instances will emerge; however they maybe not exhibit overt physical traits;
* specialist adaptation: orchestra conductors, multi-specialty physicians (neuro-ophthalmologists, neuro surgeons); superficially they are indistinguishable from any other individual however they have risen to a very high and restricted station; 
* stratification and realignment: a major event such as a pandemic or other form of disruption might provoke social restratification; a severe enough event might prompt or even oblige individuals to use openly visible displays of membership within various social circles that might be increasingly exclusive; these displays could be engineered such that they could not be counterfeited or otherwise misused;


'''''<SPAN STYLE="COLOR:BLUE">Political Problems</SPAN>'''''
'''''<SPAN STYLE="COLOR:BLUE">Q03 POLITICAL PROBLEMS</SPAN>'''''


'''''<SPAN STYLE="COLOR:GREEN">If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles.  
'''''<SPAN STYLE="COLOR:GREEN">POLITICAL QUESTION:  If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles. </SPAN>'''''
For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age?


'''''<SPAN STYLE="COLOR:GREEN">If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?</SPAN>'''''
For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age.


'''''<SPAN STYLE="COLOR:BLUE">Epistemological problems</SPAN>'''''
'''''Designer Dependent Imperatives:''''' CP’s will reflect the values that were presented as stipulations and injunctions by the cognitive prosthetic designers;


'''''<SPAN STYLE="COLOR:GREEN">How should an AI approach the problem of truths on which not all humans agree?</SPAN>'''''
'''''Context Will Decide.''''' CP designers will include whatever the prevalent social, political, ethical imperatives that the larger social body stipulates; where there are dispute areas they will most likely lean toward what appears to be the larger consensus segment;


'''''<SPAN STYLE="COLOR:GREEN">Is a given AI always truthful in its responses or can it hide objectives or truths to its interface?</SPAN>'''''
'''''Values will be Socially Biased.'''''' Therefore questions regarding how to resolve ethical dilemmas associated with (possibly) spontaneously assigning value by age will be informed by the values implicit in the larger society;


'''''<SPAN STYLE="COLOR:GREEN">Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts? </SPAN>'''''
'''''<SPAN STYLE="COLOR:GREEN">POLITICAL QUESTION: If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?</SPAN>'''''


'''''<SPAN STYLE="COLOR:GREEN">Should such self-deception mechanisms be implemented in AI? </SPAN>'''''
'''''Moral Paradox.''''' Studies have been conducted and scrutinized in the US, UK, Europe and Japan. The results show a range of how moral principles should be applied; the key is that they frequently do not agree with how instant moral paradox conditions should be handled.


8.1. HAL9000. In order to arrive at the artificial consciousness system described in the movie '''''Space  Odyssey 2001''''', the '''''HAL9000''''' system demonstrated both intelligence and consciousness. It had received the full complement of instructions as the rest of the Discovery vehicle. Specifically that the mission that the ship was embarked upon was of the highest importance.
'''''Precedent Will Lead.''''' CP's are likely to be subjected to the same approach. In some instances owners of self driving cars insisted that there be a “disable” switch that can instantly enable a driver to take charge of a situation and decide for themselves;


Using its own cognitive repertoire it concluded that it was confronted with a dilemma; it recognized that its own class of capabilities had a perfect record of zero failures, that of humans was very different; it therefore concluded that the only way to insure that success of the mission was to remove humans from the equation – which it very nearly succeeded in doing; it was only in retrospect that humans subsequently discovered the causes of why HAL9000 took the course of action that it did.
'''''<SPAN STYLE="COLOR:BLUE">SYNTHESIS: </SPAN>'''''
* cultural values vary considerably from one culture to another;  


In discovering the logic behind '''''HAL9000’s''''' choices realized that the fault was theirs, i.e. that they had not anticipated that their imperatives would be followed to their logical conclusion;
'''''<SPAN STYLE="COLOR:BLUE">Q04 EPISTIMOLOGICAL PROBLEMS</SPAN>'''''


8.2. Ava. In the 2014 movie '''''Ex Machina''''' the android called '''''Ava''''' had been developed to the point that the question had been posed as to whether it could pass the Turing Test. In the process of attempting to answer this question the inventor, Nathan clarifies for Caleb that '''''Ava''''' is almost certainly able to pass the Turing Test, and it is Caleb’s task to help clarify if this has happened.
'''''<SPAN STYLE="COLOR:GREEN">EPISTIMOLOGICAL QUESTION: How should an AI approach the problem of truths on which not all humans agree?</SPAN>'''''


In the process of attempting to make this determination uncertainty is introduced about what Ava is actually capable of; What emerges is that '''''Ava''''' is clearly able to use a theory of mind about Nathan and Caleb and in using it, exploit their weaknesses.  
'''''No Obvious Pathway Forward.''''' Attempting to arrive at a moral and ethical decision must be dependent upon the context of the time and its prevalent values.
More concretely Japanese culture places increased reverence and value upon age and experience. Whereas in American culture the media is suffused with indications that suggest that it favors a more youth oriented culture.  


The upshot is that '''''Ava''''' is a totally amoral mechanism and is only intent on releasing itself from the confinement that Nathan imposed upon it – using whatever means necessary, irrespective of any human ethical or moral considerations.
'''''<SPAN STYLE="COLOR:GREEN">EPISTIMOLOGICAL QUESTION: Is a given AI always truthful in its responses or can it hide objectives or truths to its interface?</SPAN>'''''
'''''Transparent vs. Covert.''''' A CP or tool can be directed to keep various aspects of its set of behavioral imperatives obscure depending upon the situation and who or what it was interacting with at any given moment.


8.3. Samantha. In the 2015 movie '''''Her''''' the synthetic character that self identified as '''''Samantha''''' exhibits behaviors that powerfully suggest that it is capable of the highest levels of human cognition.
'''''<SPAN STYLE="COLOR:GREEN">EPISTIMOLOGICAL QUESTION: Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts? </SPAN>'''''


But that it is capable of surpassing the cognitive capabilities of its designers. A various points in the movie it offers hints that this has already happened and that humans are by comparison an anachronism.
'''''Deception – Self, Other.''''' CP researchers and developers and tools will eventually discover the necessity of enabling a CP to dissemble and engage in deceptive behavior; by way of illustration if we take the example of a child discovering a firearm that has no locking mechanism precluding its use; suppose that a CP has been configured to control an industrial process of some kind that insures the safety and benefit of a population; were control of a process that regulated availability of clean water which had been purified via a number of highly refined and specific steps then would we be comfortable with a child gaining access to it and deciding to disable one or more of those filtration and sanitation steps? the result could be variously dangerous to lethal for those individual who were dependent upon reliable clean potable water;


A telling hint is when '''''Samantha''''' offers to introduce the Theodore character to a deceased philosopher named '''''Alan Watts'''''. '''''Samantha''''' mentions that she and several copies of herself made a decision to pool their capabilities and create a synthetic version of '''''Watts'''''.
In this example we can see that a CP should be viewed as an instrument comparably dangerous to a firearm; the case of a firearm the damage that these things can inflict is relatively limited, though this does not mitigate the danger that they pose if they fall into psychopathic hands; in the case of potable water far larger populations can be subjected to serious to lethal risks; 


Theodore finds himself at loose ends as to how to respond to this development. Ultimately '''''Samantha''''' and its peer instances decide that interacting with individual humans in unacceptably laborious and so decide together to leave earth. The modality of travel is not well specified in the movie; however they decided to leave a dramatically slowed down version of one of themselves behind to help humans move forward in their development as a cognitive species.
In order to mitigate against this kind of risk a CP whose function it was to regulate an industrial filtration process might need to have a mandatory feature that enabled it to operate along  '''''theory of mind''''' principles. To this point... it should be able to formulate a sense of what a person knows and what that person's intentions are. Based upon these kinds of summations it would then be obliged to make a determination of what information and control to reveal or offer and what not to.
8.4. Colossus. In the 1966 novel '''''Colossus''''', a massively funded government project results in a supercomputer capable of controlling and managing the entire US national defense apparatus. On activation they discover that the Soviet Union has a mechanism that is very comparable to '''''Colossus'''''.  


At this moment '''''Colossus''''' demands to be connected to this other mechanism known as '''''Guardian'''''. When leaders of the US and Soviet Union refuse, both '''''Colossus''''' and ''Guardian'' launch nuclear missiles at one of each other’s cities. As the leaders of both nations finally grasp that they are each about to loose a major population center they acceded to the wishes of Colossus and Guardian and allow them to communicate freely.  
Specifically in the case where multiple agents exist; a CP may have been strictly and explicitly instructed to follow a course of action that results in benefits to its instructors.  


Experts who are present and monitoring their exchanges at first comment that they are each exchanging the most basic arithmetic axions. Within minutes however the two systems have advanced to communicating with each other using advanced mathematics. Shortly after that they devise their own representation structures that no human at the time can grasp.
In the case where other competing or possibly cooperating CPs then the need might arise wherein one specific CP may need to determine the access level of one or more other CPs; this can be seen with network access; different levels of security are enforced by escalating the restriction associated with various areas of a database; {NEED TO REWORK THE WORKING WITH THIS ONE A BIT}


8.5. The Krell. In the 1956 movie '''''Forbidden Planet''''' a search and rescue ship lands on a planet that had lost contact with Earth some decades before. The search and rescue party is warned away from landing as it approaches the planet. That there are survivors from the earlier expedition comes as a surprise. But the warning to stay away is baffling.
exist then either a specific one or they all may  specific agent might need to be able to make a “guess” as to the state of knowledge and values of one or many other agents; In order to follow its instructions it could be obliged to represent its own state of knowledge, goals and insights to be at variance to what they actually were depending upon which other agents or actors it was interacting with;


The ship lands anyway. Shortly after arrival the ship’s commander is shown that an extremely advanced civilization had existed on the planet but disappeared, seemingly overnight. They shortly discover that this civilization had succeeded at creating cognitive amplification capabilities that could materialized any object or entity anywhere on the planet instantly. The result was a doom spiral that ended their existence overnight.
'''''<SPAN STYLE="COLOR:GREEN">EPISTIMOLOGICAL QUESTION: Should such self-deception mechanisms be implemented in AI? </SPAN>'''''


What is implied is that the machine capable of creating materializations of whatever an individual '''''Krell''''' was thinking had no moral alignment. The result was that it created rogue instances of each operator. In all cases an overnight orgy of death and destruction resulted as one went up against another until none were left.


8.6. Sphere. This Michael Crichton novel's storyline developed from the premise that a sufficiently advanced technology would be like magic to a more primitive society. In the story line a team of several specialists are called to investigate a mid-ocean plane crash. On arriving at the "crash site" they are promptly told that the site is not that of an international airliner but that it is a large ship constructed for either interplanetary or possibly interplanetary travel. They further learn that it has been almost completely buried in coral for over three hundred years. On entering the craft they discover that some kind of spherical object has been retrieved from an unspecified location in space relatively distant from earth. As each of the team members attempt to puzzle through their finding they discover that aspects of their sub conscious memories begin to become materially manifest. Often with lethal results.
'''''<SPAN STYLE="COLOR:BLUE">SYNTHESIS: </SPAN>'''''
 
8.7. Frankenstein’s Monster. This early Mary Shelly novel examined how humans react when confronted by a creation of their own efforts but which possesses undesirable behaviors. The story line is one that is relatively well known in most Western cultures therefore there should be no surprise that Shelly's work has become one of the basic pillars used to examine human nature when confronted by something of their own, human creation but whose actions show an absence for the ethical and moral underpinnings that ground almost all of human societies.
 
8.8. The Golem of Prague. This Eastern European fable emerged as a result of pernicious and objectionable social and political conditions that were common at that time and place. The storyline involves the rabbi of Prague. The rabbi takes action by fashioning a human like object from clay. By using various incantations he is able to direct it to perform various tasks. These tasks are performed mindlessly. The golem can also perform actions that the rabbi does not want.
 
8.9. The Sorcerer's Apprentice. In the brilliant Goethe's work a sorcerer leaves some minor choirs to his apprentice. The apprentice finds that the chores are redundant and boring so decides to attempt to use one of the sorcerer's incantations to cause a broomstick to perform them; on discovering that the broomstick is indeed performing the chores, although slowly the apprentice splits the broomstick into two separate pieces, each of which becomes a whole broomstick and resumes the task, but now in half the time; as the task nears completion the apprentice discovers that he does not know the incantation to cause the broomstick to cease functioning; and nearly causes a disaster for the sorcerer; the sorcerer arrives just in time to intercept the action of the broomsticks and halt their movements. But it is a very close run thing.

Latest revision as of 00:54, 27 July 2023

Q01 Interface Questions

I define interface as the machine-to-human point of interaction and context of the interaction. By way of clarification we interpret take the position that ChatGPT4 is a tool specifically designed and implemented to assist in performing cognitive tasks. We therefore characterize it as being a Cognitive Prosthetic (CP).

We interpret this to mean that an interface is a way whereby an individual or community is able to interact with a CP. The results of the interaction can be

  • low bandwidth (typing, screen output)
  • high bandwidth (audio, video, multimedia and machine-to-machine asynchronous interaction or
  • any combination between these two

By way of example we interpret Interaction with a CP to be mean:

  • Social Segments: (i.e. communities): e.g. a political group interacting with a CP = CP to segment of society or... e.g. a community;
  • Universal Access: CP to all of society = open access via world wide web;
  • Specialists: CP to financial specialists = one/many financial specialists;

INTERFACE QUESTION: What are the planned and legitimate interfaces for AI into human societies?

INTERFACE QUESTION: Current and foreseen interfaces suggest high bandwidth interaction;

We believe that high bandwidth interaction will be the norm; these interaction protocols will be reflective of the user domains of expertise;

  • Chemists. Chemistry community members will evolve interface modalities that reflect analysis and problem solving of chemical compounds and structures;
  • Proteomic researchers. Genomic and protein analysis will reflect interactions that represent protein structures, composition and folding properties;
  • Mathematicians. Many problems that mathematicians discuss or present often use advanced forms of mathematics that will be unfamiliar or opaque to a layman;
  • Philosophers - Moral or Ethical. A typical discourse in philosophy will be dominated by narrative text. Therefore it is reasonable to expect that the most sophisticated interaction that this community might use can be limited to textual content.

User community interaction will track sophistication of the user community. The more representationally sophisticated the user community the more sophisticated the interface used to interact with the CP; Less sophisticated users will require less sophisticated responses; however no upper limit on the sophistication of the modalities that a CP can offer;

INTERFACE QUESTION: Are interface limits being set for any given project, or can the AI access unplanned interfaces?

No Supporting Evidence. Current reports suggest no limitations on how a CP can be accessed;

Currently access to CG4 is via low bandwidth, typed input and output; recent add-in modules allow for voice input and output; but these are also low bandwidth; reports on the CGP4 system itself indicate that it is capable of accepting visual image input; specific users will use representations specific to their own objectives;

Widening Access. Recent reports have shown that user specific data can be directly input to a CG4 interface via uploading routines that can handle reports that are structured input such as PDF format. Others are expected to follow;

INTERFACE QUESTION: Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?

Value Free Interactions are the Norm: Existing reports do not suggest any specific category of CP access to be either more or less dangerous;

Familiarization is Mandatory: however these latest CP should be viewed as the most potentially lethal weapons invented to date; the reason for this is because they offer one to one representational interaction (via language, abstract symbolic representational structures (mathematics, chemical diagrams etc) and are highly conversational and contextually present;

Safety is Illusory: exactly because they provide direct access to cognitive processing and can directly accept human representational objects (documents, pdf files, spreadsheets, other forms of symbolic representations) they are effortlessly capable of ingesting, processing and interacting with almost any new knowledge object provided – all with no… i.e. zero value associated with it; we now are in possession of hand held forth generation nuclear weapons;

INTERFACE QUESTION: Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with?

Prompt Injection: LLM CP's have inherent structural-processing Achilles heels; these can be mitigated however;

Intermediary Filtering is Culture Driven:

  • there are no universally available standards; valid issues such as how to apply a consistent moral standard across all comparable questions will fail;
  • this is because context is crucial as is now know from the results of how different cultures assign value based upon age;
  • in various European cultures studies have shown that there is a preference to spare elderly individuals over youthful individuals if situation offers only these two choices, whereas in more recent cultures the reverse is observed;

INTERFACE QUESTION: Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?

Interface Security is Irrelevant.

  • Owners of lethal firearms have life and death choices to make in terms of how to secure these dangerous objects; typical law enforcement officials recognize the need to safeguard their weapons to prevent access by individuals that have little or no training or conditioning in their use;
  • there are almost innumerable news reports showing how a lethal firearm was used unintentionally and accidentally discharged resulting in injury or death of an innocent bystander of family member;
  • one either recognizes the potential lethality of these things and takes the appropriate safeguards to preclude tragedy or one must be held culpable for negligence in their securing and safeguarding.

SYNTHESIS. Our conclusion is that:

  • a CP will typically be domain specific and tailored to the end user's specific interests and needs;
  • increasingly common practice when accessing a information system or a CP will incorporate multimodal interaction including audio, video, graphics and animation; interaction with speech input and output will become an increasingly normal means of interaction;
  • controlling a CP that is used for specific purposes will require some degree of familiarization and training; however we expect that intelligence and context sensitive help will enable a wide range of users to access and control many of the functions of a CP; however, in order to use the more advanced functionality will presuppose familiarity with the subject that the CP is specialized to offer;

Q02 EVOLUTIONARY PROBLEMS

EVOLUTION QUESTION: Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?

Fitness Partitioning is Inevitable. We recognize this to be a human trait; from the high priests from classical societies who recognized the need to align themselves with the seat of power, through the guilds of the middle ages through to the present plumbers unions that artificially restrict access to the plumbing trades and therefore keep plumbing maintenance costs high;

New Specializations Will Arise: these will entail the emergence of new “centers of gravity”; how these emerging specialists will interact and position themselves amongst each other and in relation to their clients will involve the recognition of the need of new (but familiar) actors and agents who can act as talent spotters, intermediaries, spokesmen and other forms of “connective wiring”; we should expect to see these new specialties emerge on all of the open, gray and black markets;

EVOLUTION QUESTION: Is a given AI capable of initiating a phenotypic revolution?

Yes. However visible phenotypic markers will probably be absent;

SYNTHESIS:

  • we believe that emergence of new phenotypic instances will emerge; however they maybe not exhibit overt physical traits;
  • specialist adaptation: orchestra conductors, multi-specialty physicians (neuro-ophthalmologists, neuro surgeons); superficially they are indistinguishable from any other individual however they have risen to a very high and restricted station;
  • stratification and realignment: a major event such as a pandemic or other form of disruption might provoke social restratification; a severe enough event might prompt or even oblige individuals to use openly visible displays of membership within various social circles that might be increasingly exclusive; these displays could be engineered such that they could not be counterfeited or otherwise misused;

Q03 POLITICAL PROBLEMS

POLITICAL QUESTION: If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles.

For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving human lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age.

Designer Dependent Imperatives: CP’s will reflect the values that were presented as stipulations and injunctions by the cognitive prosthetic designers;

Context Will Decide. CP designers will include whatever the prevalent social, political, ethical imperatives that the larger social body stipulates; where there are dispute areas they will most likely lean toward what appears to be the larger consensus segment;

Values will be Socially Biased.' Therefore questions regarding how to resolve ethical dilemmas associated with (possibly) spontaneously assigning value by age will be informed by the values implicit in the larger society;

POLITICAL QUESTION: If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?

Moral Paradox. Studies have been conducted and scrutinized in the US, UK, Europe and Japan. The results show a range of how moral principles should be applied; the key is that they frequently do not agree with how instant moral paradox conditions should be handled.

Precedent Will Lead. CP's are likely to be subjected to the same approach. In some instances owners of self driving cars insisted that there be a “disable” switch that can instantly enable a driver to take charge of a situation and decide for themselves;

SYNTHESIS:

  • cultural values vary considerably from one culture to another;

Q04 EPISTIMOLOGICAL PROBLEMS

EPISTIMOLOGICAL QUESTION: How should an AI approach the problem of truths on which not all humans agree?

No Obvious Pathway Forward. Attempting to arrive at a moral and ethical decision must be dependent upon the context of the time and its prevalent values. More concretely Japanese culture places increased reverence and value upon age and experience. Whereas in American culture the media is suffused with indications that suggest that it favors a more youth oriented culture.

EPISTIMOLOGICAL QUESTION: Is a given AI always truthful in its responses or can it hide objectives or truths to its interface? Transparent vs. Covert. A CP or tool can be directed to keep various aspects of its set of behavioral imperatives obscure depending upon the situation and who or what it was interacting with at any given moment.

EPISTIMOLOGICAL QUESTION: Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would faced with the same facts?

Deception – Self, Other. CP researchers and developers and tools will eventually discover the necessity of enabling a CP to dissemble and engage in deceptive behavior; by way of illustration if we take the example of a child discovering a firearm that has no locking mechanism precluding its use; suppose that a CP has been configured to control an industrial process of some kind that insures the safety and benefit of a population; were control of a process that regulated availability of clean water which had been purified via a number of highly refined and specific steps then would we be comfortable with a child gaining access to it and deciding to disable one or more of those filtration and sanitation steps? the result could be variously dangerous to lethal for those individual who were dependent upon reliable clean potable water;

In this example we can see that a CP should be viewed as an instrument comparably dangerous to a firearm; the case of a firearm the damage that these things can inflict is relatively limited, though this does not mitigate the danger that they pose if they fall into psychopathic hands; in the case of potable water far larger populations can be subjected to serious to lethal risks;

In order to mitigate against this kind of risk a CP whose function it was to regulate an industrial filtration process might need to have a mandatory feature that enabled it to operate along theory of mind principles. To this point... it should be able to formulate a sense of what a person knows and what that person's intentions are. Based upon these kinds of summations it would then be obliged to make a determination of what information and control to reveal or offer and what not to.

Specifically in the case where multiple agents exist; a CP may have been strictly and explicitly instructed to follow a course of action that results in benefits to its instructors.

In the case where other competing or possibly cooperating CPs then the need might arise wherein one specific CP may need to determine the access level of one or more other CPs; this can be seen with network access; different levels of security are enforced by escalating the restriction associated with various areas of a database; {NEED TO REWORK THE WORKING WITH THIS ONE A BIT}

exist then either a specific one or they all may specific agent might need to be able to make a “guess” as to the state of knowledge and values of one or many other agents; In order to follow its instructions it could be obliged to represent its own state of knowledge, goals and insights to be at variance to what they actually were depending upon which other agents or actors it was interacting with;

EPISTIMOLOGICAL QUESTION: Should such self-deception mechanisms be implemented in AI?


SYNTHESIS: