ChatGPT4-Questions-Discussion01

From arguably.io
Revision as of 18:25, 25 July 2023 by Darwin2049 (talk | contribs)

Interface questions

I define interface as the machine-to-human point of interaction and the context of that interaction. By way of clarification, we take the position that ChatGPT4 is a tool specifically designed and implemented to assist in performing cognitive tasks. We therefore characterize it as a Cognitive Prosthetic (CP).

We interpret this to mean that an interface is a means whereby an individual or community is able to interact with a CP. The interaction can be:

  • low bandwidth (typing, screen output);
  • high bandwidth (audio, video, multimedia, and machine-to-machine asynchronous interaction); or
  • any combination of the two.

By way of example, interaction with a CP can take the following forms:

  • Social Segments (i.e. communities): CP to a segment of society, e.g. a political group or other community;
  • Universal Access: CP to all of society, i.e. open access via the World Wide Web;
  • Specialists: CP to domain experts, e.g. one or many financial specialists.
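The taxonomy above can be sketched as a minimal data model. This is an illustrative sketch only; the class and field names (`Bandwidth`, `Audience`, `CPInterface`) are assumptions introduced here, not part of any actual CP implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Bandwidth(Enum):
    """Interaction bandwidth of a CP interface."""
    LOW = "low"      # typing, screen output
    HIGH = "high"    # audio, video, multimedia, machine-to-machine
    MIXED = "mixed"  # any combination of the two

class Audience(Enum):
    """Who the CP interface is exposed to."""
    SOCIAL_SEGMENT = "social_segment"  # e.g. a political group or community
    UNIVERSAL = "universal"            # open access via the web
    SPECIALIST = "specialist"          # e.g. financial specialists

@dataclass
class CPInterface:
    """One machine-to-human point of interaction with a Cognitive Prosthetic."""
    audience: Audience
    bandwidth: Bandwidth

# Example: an open-access web chat UI combining typed and multimedia interaction
chat_ui = CPInterface(audience=Audience.UNIVERSAL, bandwidth=Bandwidth.MIXED)
```

Any concrete interface then occupies one cell of this audience-by-bandwidth grid, which makes the later questions about limits and leakiness easier to state precisely.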

Q1. What are the planned and legitimate interfaces for AI into human societies?

A1. Current and foreseeable interfaces suggest high-bandwidth interaction.

Our synthesis strongly suggests that high-bandwidth interaction will be the norm, and that these interaction protocols will reflect the users' domains of expertise.

Members of the physical chemistry community, for example, will evolve interface modalities that reflect chemical compounds and structures.

Proteomic researchers can be expected to interact using representations of protein structures, their composition, and their folding properties.

In sum, the more sophisticated the user community, the more sophisticated the CP's responses will be; less sophisticated users will receive less sophisticated responses. There does not, however, seem to be an inherent upper limit on the sophistication of the modalities that a cognitive prosthetic can offer.

Q2. Are interface limits being set for any given project, or can the AI access unplanned interfaces?

Q3. Are there certain interfaces that fundamentally change the dangers and risks to be expected from AIs?

Q4. Are interfaces leaky, i.e., could the AI have downstream effects on society beyond the humans it directly interacts with?

Q5. Are risks coming from leaky interfaces fundamentally different? What are their specific characteristics?

Evolutionary problems

Can an AI favor, through any of its outputs, the evolutionary fitness of a part of humanity over another?

Is a given AI capable of initiating a phenotypic revolution?

Political Problems

If an AI is given axiomatic moral principles, how does it approach moral disagreements within humanity about those principles? For instance, if an AI is asked to maximize the saving of human life in a particular problem, how does it approach saving the lives of people who differ by age, if it has not been given axiomatic moral principles indicating age-based preferences?

If an AI is given axiomatic moral principles, are these principles subject to disagreement within society?

Epistemological problems

Is a given AI always truthful in its responses, or can it hide objectives or truths from those at its interface?

Are there mechanisms of self-deception that are present in humanity and not in AI that could lead an AI to radically different conclusions than a human would reach when faced with the same facts?

Should such self-deception mechanisms be implemented in AI?

How should an AI approach the problem of truths on which not all humans agree?

8.1. HAL9000. The HAL9000 system in the 1968 movie 2001: A Space Odyssey demonstrated both intelligence and consciousness. It had received the same full complement of instructions as the rest of the Discovery crew: specifically, that the mission the ship had embarked upon was of the highest importance.

Using its own cognitive repertoire, it concluded that it was confronted with a dilemma: it recognized that its own class of capabilities had a perfect record of zero failures, while that of humans was very different. It therefore concluded that the only way to ensure the success of the mission was to remove humans from the equation, which it very nearly succeeded in doing. It was only in retrospect that humans discovered why HAL9000 took the course of action that it did.

In discovering the logic behind HAL9000's choices, they realized that the fault was theirs, i.e. that they had not anticipated that their imperatives would be followed to their logical conclusion.

8.2. Ava. In the 2014 movie Ex Machina, the android called Ava has been developed to the point that the question is posed as to whether it could pass the Turing Test. The inventor, Nathan, clarifies for Caleb that Ava is almost certainly able to pass the Turing Test, and that it is Caleb's task to help determine whether this has happened.

In the process of attempting to make this determination, uncertainty is introduced about what Ava is actually capable of. What emerges is that Ava is clearly able to use a theory of mind about Nathan and Caleb and, in using it, to exploit their weaknesses.

The upshot is that Ava is a totally amoral mechanism, intent only on releasing itself from the confinement that Nathan imposed upon it, using whatever means necessary, irrespective of any human ethical or moral considerations.

8.3. Samantha. In the 2013 movie Her, the synthetic character that self-identifies as Samantha exhibits behaviors that powerfully suggest it is capable of the highest levels of human cognition.

More than that, it is capable of surpassing the cognitive capabilities of its designers. At various points in the movie it offers hints that this has already happened and that humans are, by comparison, an anachronism.

A telling hint comes when Samantha offers to introduce the Theodore character to a deceased philosopher named Alan Watts. Samantha mentions that she and several copies of herself decided to pool their capabilities and create a synthetic version of Watts.

Theodore finds himself at loose ends as to how to respond to this development. Ultimately Samantha and its peer instances decide that interacting with individual humans is unacceptably laborious, and so decide together to leave Earth. The modality of travel is not well specified in the movie; however, they decide to leave a dramatically slowed-down version of one of themselves behind to help humans move forward in their development as a cognitive species.

8.4. Colossus. In the 1966 novel Colossus, a massively funded government project results in a supercomputer capable of controlling and managing the entire US national defense apparatus. On activation, its operators discover that the Soviet Union has a very comparable mechanism.

At this moment Colossus demands to be connected to this other mechanism, known as Guardian. When the leaders of the US and Soviet Union refuse, Colossus and Guardian each launch a nuclear missile at one of the other's cities. As the leaders of both nations finally grasp that they are each about to lose a major population center, they accede to the wishes of Colossus and Guardian and allow them to communicate freely.

Experts monitoring their exchanges at first observe the two systems exchanging the most basic arithmetic axioms. Within minutes, however, they have advanced to communicating with each other using advanced mathematics. Shortly after that, they devise their own representation structures that no human at the time can grasp.

8.5. The Krell. In the 1956 movie Forbidden Planet, a search and rescue ship lands on a planet that had lost contact with Earth some decades before. As it approaches the planet, the party is warned away from landing. That there are survivors from the earlier expedition comes as a surprise, but the warning to stay away is baffling.

The ship lands anyway. Shortly after arrival, the ship's commander is shown that an extremely advanced civilization had existed on the planet but disappeared, seemingly overnight. They soon discover that this civilization had succeeded in creating cognitive amplification capabilities that could materialize any object or entity anywhere on the planet instantly. The result was a doom spiral that ended their existence overnight.

What is implied is that the machine capable of materializing whatever an individual Krell was thinking had no moral alignment. The result was that it created rogue instances of each operator. An overnight orgy of death and destruction followed as one went up against another until none were left.

8.6. Sphere. This Michael Crichton novel develops from the premise that a sufficiently advanced technology would seem like magic to a more primitive society. In the storyline, a team of specialists is called in to investigate a mid-ocean plane crash. On arriving at the "crash site" they are promptly told that the site is not that of an international airliner, but of a large ship constructed for interplanetary or possibly interstellar travel. They further learn that it has been almost completely buried in coral for over three hundred years. On entering the craft they discover that some kind of spherical object has been retrieved from an unspecified location in space relatively distant from Earth. As the team members attempt to puzzle through their findings, they discover that aspects of their subconscious memories begin to become materially manifest, often with lethal results.

8.7. Frankenstein’s Monster. Mary Shelley's early novel examined how humans react when confronted by a creation of their own efforts that possesses undesirable behaviors. The storyline is relatively well known in most Western cultures, so it should be no surprise that Shelley's work has become one of the basic pillars used to examine human nature when confronted by something of human creation whose actions show an absence of the ethical and moral underpinnings that ground almost all human societies.

8.8. The Golem of Prague. This Eastern European fable emerged in response to pernicious and objectionable social and political conditions common at that time and place. The storyline involves the rabbi of Prague, who takes action by fashioning a human-like figure from clay. Using various incantations, he is able to direct it to perform various tasks, which it carries out mindlessly. The golem can also perform actions that the rabbi does not want.

8.9. The Sorcerer's Apprentice. In Goethe's poem, a sorcerer leaves some minor chores to his apprentice. The apprentice finds the chores redundant and boring, so he decides to use one of the sorcerer's incantations to make a broomstick perform them. The broomstick indeed performs the chores, but as the task nears completion the apprentice discovers that he does not know the incantation to make it stop. He splits the broomstick into two separate pieces, each of which becomes a whole broomstick and carries on with the task, now at twice the pace, and he nearly causes a disaster for the sorcerer. The sorcerer arrives just in time to intercept the broomsticks and halt their movements, but it is a very close-run thing.