Difference between revisions of "ChatGPT4-Questions-Discussion01"

Revision as of 19:29, 25 July 2023

Interface questions

We define an interface as the machine-to-human point of interaction and the context of that interaction. By way of clarification, we take the position that ChatGPT4 is a tool specifically designed and implemented to assist in performing cognitive tasks. We therefore characterize it as a Cognitive Prosthetic (CP).

We interpret this to mean that an interface is a way whereby an individual or community is able to interact with a CP. The interaction can be:

  • low bandwidth (typing, screen output);
  • high bandwidth (audio, video, multimedia, and machine-to-machine asynchronous interaction); or
  • any combination of the two.

By way of example, we interpret interaction with a CP to mean:

  • Social segments (i.e., communities): e.g., a political group interacting with a CP = CP to a segment of society;
  • Universal access: CP to all of society = open access via the World Wide Web;
  • Specialists: CP to financial specialists = CP to one or many financial specialists;

Q: What are the planned and legitimate interfaces for AI into human societies?

A: Current and foreseen interfaces suggest high bandwidth interaction.

We believe that high bandwidth interaction will be the norm; these interaction protocols will reflect the users' domains of expertise:

    • Chemists. Members of the chemistry community will evolve interface modalities that reflect the analysis of chemical compounds and structures and problem solving with them;
    • Proteomic researchers. Genomic and protein analysis will involve interactions that represent protein structures, composition, and folding properties;
    • Philosophers - moral or ethical. A typical discourse in philosophy will be dominated by narrative text; it is therefore reasonable to expect that the most sophisticated interaction this community uses may be limited to textual content.

User community interaction will track the sophistication of the user community: the more representationally sophisticated the community, the more sophisticated the interface used to interact with the CP. Less sophisticated users will require less sophisticated responses; however, there does not seem to be an inherent upper limit on the sophistication of the modalities that a CP can offer.

Q: Are interface limits being set for any given project, or can the AI access unplanned interfaces?

Q: Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?

Q: Are interfaces leaky, i.e., could the AI have downstream effects on society beyond the humans it is directly interacting with?

Q: Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?

Evolutionary problems

Q: Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?

Q: Is a given AI capable of initiating a phenotypic revolution?

Political Problems

Q: If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles? For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach the problem of saving the lives of people who differ by age, if it has not been given axiomatic moral principles indicating preferences based on age?

Q: If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?

Epistemological problems

Q: Is a given AI always truthful in its responses, or can it hide objectives or truths from its interface?

Q: Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would reach when faced with the same facts?

Q: Should such self-deception mechanisms be implemented in AI?

Q: How should an AI approach the problem of truths on which not all humans agree?