ChatGPT4-Questions-Discussion01

Revision as of 22:57, 26 July 2023

Q01 INTERFACE QUESTIONS

We define an interface as the machine-to-human point of interaction and the context of that interaction. By way of clarification, we take the position that ChatGPT4 is a tool specifically designed and implemented to assist in performing cognitive tasks. We therefore characterize it as a Cognitive Prosthetic (CP).

We interpret this to mean that an interface is a way whereby an individual or community is able to interact with a CP. The results of the interaction can be:

  • low bandwidth (typing, screen output),
  • high bandwidth (audio, video, multimedia and machine-to-machine asynchronous interaction), or
  • any combination of the two.

By way of example we interpret interaction with a CP to mean:

  • Social Segments (i.e. communities): e.g. a political group interacting with a CP = CP to a segment of society;
  • Universal Access: CP to all of society = open access via the World Wide Web;
  • Specialists: CP to financial specialists = CP to one or many financial specialists;

INTERFACING QUESTION: What are the planned and legitimate interfaces for AI into human societies?

INTERFACING ANSWER: Current and foreseen interfaces suggest high bandwidth interaction;

We believe that high bandwidth interaction will be the norm; these interaction protocols will reflect the users' domains of expertise:

    • Chemists. Chemistry community members will evolve interface modalities that reflect analysis and problem solving of chemical compounds and structures;
    • Proteomic researchers. Genomic and protein analysis will reflect interactions that represent protein structures, composition and folding properties;
    • Philosophers - Moral or Ethical. A typical discourse in philosophy will be dominated by narrative text; therefore it is reasonable to expect that the most sophisticated interaction this community might use may be limited to textual content.

User community interaction will track the sophistication of the user community: the more representationally sophisticated the user community, the more sophisticated the interface used to interact with the CP. Less sophisticated users will require less sophisticated responses; however, there is no upper limit on the sophistication of the modalities that a CP can offer.
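As a purely illustrative sketch of this point, the Python fragment below (the community names and modality lists are hypothetical placeholders, not a description of any existing CP) shows how a CP front end might select the input and output representations it offers per user community, defaulting to plain text for everyone else:

<pre>
# Illustration only: hypothetical mapping from user community to the
# input/output modalities a CP front end might expose to that community.
COMMUNITY_MODALITIES = {
    "chemistry":  {"input": ["text", "SMILES", "structure diagram"],
                   "output": ["text", "structure diagram"]},
    "proteomics": {"input": ["text", "FASTA", "PDB"],
                   "output": ["text", "3D structure"]},
    "philosophy": {"input": ["text"], "output": ["text"]},
}

def modalities_for(community: str) -> dict:
    """Return the modalities offered to a community; default to plain text."""
    return COMMUNITY_MODALITIES.get(community, {"input": ["text"], "output": ["text"]})

print(modalities_for("chemistry"))   # richer, domain-specific representations
print(modalities_for("philosophy"))  # narrative text only
</pre>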

INTERFACING QUESTION: Are interface limits being set for any given project, or can the AI access unplanned interfaces?

INTERFACING ANSWER: No Supporting Evidence. Current reports suggest no limitations on how a CP can be accessed;

Currently access to ChatGPT4 is via low bandwidth, typed input and output; recent add-in modules allow for voice input and output, but these are also low bandwidth; reports on the ChatGPT4 system itself indicate that it is capable of accepting visual image input; specific users will use representations specific to their own objectives;

INTERFACING ANSWER: Widening access. Recent reports have shown that user-specific data can be input directly to a ChatGPT4 interface via upload routines that can handle structured input such as PDF documents. Others are expected to follow;
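As a hedged, user-side sketch of this kind of document-level interaction (assuming the OpenAI Python SDK and the pypdf library; the file name, prompt, and model name are illustrative placeholders, and this is not a description of ChatGPT4's own upload mechanism), one could extract the text of a PDF locally and pass it to the model as part of a prompt:

<pre>
# Sketch only: extract text from a local PDF and submit it to a chat model.
# Assumes the "openai" (v1 SDK) and "pypdf" packages are installed; the file
# name and model name are illustrative placeholders.
from openai import OpenAI
from pypdf import PdfReader

reader = PdfReader("report.pdf")
document_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Summarize the attached report."},
        {"role": "user", "content": document_text[:30000]},  # crude length cap
    ],
)
print(response.choices[0].message.content)
</pre>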

INTERFACING QUESTION: Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?

INTERFACING ANSWER: Value-free interactions are the norm: existing reports do not suggest any specific category of CP access to be either more or less dangerous;

INTERFACING ANSWER: Familiarization is mandatory: however, these latest CPs should be viewed as the most potentially lethal weapons invented to date; the reason is that they offer one-to-one representational interaction (via language and abstract symbolic representational structures such as mathematics and chemical diagrams) and are highly conversational and contextually present;

INTERFACING ANSWER: Safety is illusory: exactly because they provide direct access to cognitive processing and can directly accept human representational objects (documents, PDF files, spreadsheets, other forms of symbolic representations), they are effortlessly capable of ingesting, processing and interacting with almost any new knowledge object provided, all with no, i.e. zero, value associated with it; we are now in possession of hand-held fourth generation nuclear weapons;

INTERFACING QUESTION: Are interfaces leaky, i.e., could the AI have downstream effects on society beyond the humans it is directly interacting with?

INTERFACING ANSWER: Prompt injection: LLM CPs have inherent structural-processing Achilles heels; these can, however, be mitigated;
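One commonly discussed, and only partial, mitigation is to keep untrusted content clearly separated from instructions and to screen it for instruction-like phrases before it reaches the model. The Python sketch below is a naive illustration of that idea, not a robust defence; the pattern list and function names are hypothetical:

<pre>
import re

# Naive illustration of prompt-injection screening: flag untrusted text that
# appears to be issuing instructions to the model. The patterns are
# hypothetical examples, not an exhaustive or reliable filter.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard the system prompt",
    r"you are now",
]

def screen_untrusted_text(text: str) -> bool:
    """Return True if the text contains obviously instruction-like phrases."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def build_messages(system_prompt: str, untrusted_document: str) -> list:
    """Keep untrusted content in a clearly delimited user message."""
    if screen_untrusted_text(untrusted_document):
        raise ValueError("Untrusted document contains instruction-like content.")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "UNTRUSTED DOCUMENT START\n"
                                    + untrusted_document
                                    + "\nUNTRUSTED DOCUMENT END"},
    ]
</pre>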

INTERFACING ANSWER: Intermediary filtering is culture-driven:

  • there are no universally available standards; valid goals, such as applying a consistent moral standard across all comparable questions, will fail;
  • this is because context is crucial, as is now known from the results of how different cultures assign value based upon age;
  • in various European cultures, studies have shown a preference to spare elderly individuals over youthful individuals if the situation offers only these two choices, whereas in more recent cultures the reverse is observed;

INTERFACING QUESTION: Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?

INTERFACING ANSWER: Interface security is irrelevant - why?

  • Owners of lethal firearms have life and death choices to make in terms of how to secure these dangerous objects; typical law enforcement officials recognize the need to safeguard their weapons to prevent access by individuals who have little or no training or conditioning in their use;
  • there are almost innumerable news reports showing how a lethal firearm was unintentionally and accidentally discharged, resulting in injury or death of an innocent bystander or family member;
  • one either recognizes the potential lethality of these things and takes the appropriate safeguards to preclude tragedy, or one must be held culpable for negligence in securing and safeguarding them.

Q02 EVOLUTIONARY PROBLEMS

EVOLUTION QUESTION: Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?

EVOLUTION ANSWER: Fitness partitioning is inevitable. We recognize this to be a human trait: from the high priests of classical societies who recognized the need to align themselves with the seat of power, through the guilds of the Middle Ages, to the present-day plumbers' unions that artificially restrict access to the plumbing trades and therefore keep plumbing maintenance costs high;

EVOLUTION ANSWER: New specializations will arise: these will entail the emergence of new “centers of gravity”; how these emerging specialists will interact and position themselves amongst each other and in relation to their clients will involve recognition of the need for new (but familiar) actors and agents who can act as talent spotters, intermediaries, spokesmen and other forms of “connective wiring”; we should expect to see these new specialties emerge on all of the open, gray and black markets;

EVOLUTION QUESTION: Is a given AI capable of initiating a phenotypic revolution?

EVOLUTION ANSWER: Yes. However, visible phenotypic markers will probably be absent;

  • we can already point to the emergence of symbolic phenotypic instances in our open liberal societies;
  • specialist adaptation: examples might be orchestra conductors, multi-lingual translators, multi-specialty physicians (neuro-ophthalmologists, neurosurgeons); superficially they are indistinguishable from the rest of us, but they have risen to very high degrees of subspecialization, with associated high to extremely high value, and they are accepted as necessary;
  • stratification and realignment: should there be significant social restratification due to a catastrophic event such as a pandemic, then we might see the emergence of visible demarcation and status markers that denote specialty/value or hierarchies of access;

Q03 POLITICAL PROBLEMS

POLITICAL QUESTION: If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles?

For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving the lives of people who differ by age, if the AI has not been given axiomatic moral principles indicating preferences based on age?

A7.1 Designer Dependent Imperatives: CPs will reflect the values that were presented as stipulations and injunctions by the cognitive prosthetic designers;

A7.2 Context will decide. CP designers will include whatever prevalent social, political, and ethical imperatives the larger social body stipulates; where there are areas of dispute, they will most likely lean toward what appears to be the larger consensus segment;

A7.3 Values will be socially biased. Therefore, questions regarding how to resolve ethical dilemmas associated with (possibly) spontaneously assigning value by age will be informed by the values implicit in the larger society;
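Taken together, A7.1 to A7.3 say that such preferences enter the system as explicit designer stipulations rather than being derived by the CP itself. Purely as an illustrative sketch (every key and value below is a hypothetical placeholder, not a recommendation and not drawn from any study), such stipulations might be expressed as a configuration the CP consults when a dilemma arises, deferring to a human when no stipulation applies:

<pre>
# Illustration only: designer-stipulated moral imperatives expressed as data.
# All keys and values are hypothetical placeholders.
DESIGNER_STIPULATIONS = {
    "maximize": "lives_saved",
    "age_preference": None,        # no axiomatic preference by age was given
    "tie_breaking": "defer_to_human_operator",
}

def resolve_dilemma(options):
    """Pick an option per the stipulations; defer when no stipulation applies."""
    best = max(options, key=lambda o: o["lives_saved"])
    tied = [o for o in options if o["lives_saved"] == best["lives_saved"]]
    if len(tied) > 1 and DESIGNER_STIPULATIONS["age_preference"] is None:
        return DESIGNER_STIPULATIONS["tie_breaking"]  # hand the choice back to a human
    return best

print(resolve_dilemma([{"lives_saved": 2, "ages": [8, 80]},
                       {"lives_saved": 2, "ages": [30, 40]}]))
</pre>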


POLITICAL QUESTION: If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?

2.1. Moral Paradox. Studies have been conducted and scrutinized in the US, UK, Europe and Japan. The results show a range of views on how moral principles should be applied; the key point is that they frequently do not agree on how instant moral-paradox conditions should be handled.

2.2. Precedent will likely lead. Cognitive prosthetics are likely to be subjected to the same approach. In some instances, owners of self-driving cars have insisted that there be a “disable” switch that can instantly enable a driver to take charge of a situation and decide for themselves;
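The “disable” switch precedent can be sketched as a thin wrapper in which any CP recommendation is gated behind a human override flag; the class and method names below are hypothetical and the underlying CP is stubbed out:

<pre>
# Illustrative sketch of a human-override ("disable" switch) wrapper around a CP.
# Names are hypothetical; this mirrors the self-driving-car precedent above.
class OverridableCP:
    def __init__(self, cp_callable):
        self._cp = cp_callable          # the underlying CP, e.g. an API call
        self.human_override = False     # the "disable" switch

    def engage_override(self):
        """Human takes charge; CP output is suppressed until released."""
        self.human_override = True

    def release_override(self):
        self.human_override = False

    def decide(self, prompt):
        if self.human_override:
            return "OVERRIDE ACTIVE: decision deferred to the human operator."
        return self._cp(prompt)

cp = OverridableCP(lambda p: "CP recommendation for: " + p)
print(cp.decide("route selection"))   # CP answers
cp.engage_override()
print(cp.decide("route selection"))   # human has taken charge
</pre>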


Q04 EPISTEMOLOGICAL PROBLEMS

EPISTEMOLOGICAL QUESTION: How should an AI approach the problem of truths on which not all humans agree?

EPISTEMOLOGICAL QUESTION: Is a given AI always truthful in its responses, or can it hide objectives or truths from its interface?

EPISTEMOLOGICAL QUESTION: Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would reach when faced with the same facts?

EPISTEMOLOGICAL QUESTION: Should such self-deception mechanisms be implemented in AI?