ChatGPT4-Questions

From arguably.io
Revision as of 16:21, 19 October 2023 by Darwin2049 (talk | contribs)

I'm starting this page as personal notes (and anyone can contribute to it too) about a taxonomy of all unaddressed questions and dangers with AI.

Interface questions

I define interface as the machine-to-human point of interaction and the context of that interaction. For example, if an AI serves a political group, its interface to human society is the political group. If an AI is openly accessible to the public on the web, then its interface is an open web access point. If an AI is providing counsel to a trader in the financial sector, then its interface is the single trader it is counseling.

What are the planned and legitimate interfaces for AI into human societies?

Are interface limits being set for any given project, or can the AI access unplanned interfaces?

Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?

Are interfaces leaky, i.e., where the AI could have downstream effects into society further than with the humans it is directly interacting with?

Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?

Evolutionary problems

Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?

Is a given AI capable of initiating a phenotypic revolution?

Political problems

If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about these principles? For instance, if an AI is asked to maximize the number of human lives saved in a particular problem, how does it choose among people who differ by age, if it has not been given axiomatic moral principles indicating age-based preferences?

If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?

Epistemological problems

How should an AI approach the problem of truths on which not all humans agree?

Is a given AI always truthful in its responses, or can it hide objectives or truths from its interface?

Are there mechanisms of self-deception that are present in humanity but not in AI that could lead an AI to radically different conclusions than a human would reach when faced with the same facts? Should such self-deception mechanisms be implemented in AI?

Some Deliberations

The questions posed above have led to deliberations that attempt to build a coherent framework capable of answering one or more of them. During our examination of this very broad topic we observe and note that:

  • before/after event
    • this is a very large topic area with social, political, economic impact;
    • nation state interaction will experience disruption, possible dislocations;
    • economic sector/sub-sector transitions will be sometimes wrenching;
  • historical turning point
    • comparable to Gutenberg Printing, Jacquard Loom, Watt Steam Engine;
    • the rapid uptake of ChatGPT within weeks is indicative of the magnitude of impact;
  • cultural variances will be cast into a glaring spotlight;
    • this topic is fraught with difficult questions;
    • economics will drive systemic change in other sectors; unavoidably, cross-cultural values come into play;
    • malevolent actors will mount more daring, cunning attacks;
  • current focus will have short shelf life;
    • expect rapid "take off" of new personal, social facilities, tools, modalities of communicating, problem solving;
  • positions and observations expressed here are perishable; new and unexpected developments may upend earlier conclusions;