Understanding AGI Mental Health

Can an AGI be insane?  Can an AGI provide the appearance of sanity, but be subtly broken in complex ways?  Let’s take a deep dive into the world of AGI mental health, as this will become critical to developers the world over as they debug future AGI-powered applications.  First, we must look at the major sources of errors in any application.

Validation and verification are two words drilled into every software developer the world over. If you have ever taken any course in computing that goes beyond how to use a word processor, you have likely encountered these terms. Wikipedia provides the following definitions:

Validation checks that the product design satisfies or fits the intended use (high-level checking), i.e., the software meets the user requirements. This is done through dynamic testing and other forms of review.

Verification and validation are not the same thing, although they are often confused. Boehm[1] succinctly expressed the difference between them:

Validation: Are we building the right product?
Verification: Are we building the product right?

According to the Capability Maturity Model (CMMI-SW v1.1),

Software Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]
Software Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

In other words, software verification is ensuring that the product has been built according to the requirements and design specifications, while software validation ensures that the product meets the user’s needs, and that the specifications were correct in the first place. Software verification ensures that “you built it right”. Software validation ensures that “you built the right thing”. Software validation confirms that the product, as provided, will fulfill its intended use.
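To make the distinction concrete, here is a minimal Python sketch; the function, its specification and the “user need” are purely illustrative and not drawn from any particular project:

    # Purely illustrative: a function whose specification says "return the
    # arithmetic mean of a list of temperature readings".

    def mean_temperature(readings):
        """Spec: return the arithmetic mean of the readings."""
        return sum(readings) / len(readings)

    # Verification -- "did we build it right?" -- checks the code against its spec.
    assert mean_temperature([10.0, 20.0]) == 15.0

    # Validation -- "did we build the right thing?" -- checks it against the user's
    # actual need. Suppose the user really wanted the median, because one faulty
    # sensor spike ruins their monthly report. The code passes verification but
    # fails validation: the specification itself was wrong.
    print(mean_temperature([10.0, 11.0, 12.0, 95.0]))  # 32.0 -- correct to spec, useless to the user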

From a testing perspective:

Fault – wrong or missing function in the code.
Failure – the manifestation of a fault during execution.
Malfunction – the system does not meet its specified functionality.

https://en.wikipedia.org/wiki/Software_verification_and_validation#Definitions
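To see how these three terms relate in practice, consider this small, purely illustrative Python snippet (the function and its “specification” are invented for the example):

    # Illustrative only: a deliberate defect and how the three terms map onto it.

    def last_three(items):
        # Fault: wrong code -- the slice keeps only two elements (off-by-one).
        return items[-2:]

    # Failure: the fault manifests during execution as incorrect output.
    expected = [3, 4, 5]
    actual = last_three([1, 2, 3, 4, 5])
    print("expected:", expected, "| actual:", actual)   # actual is [4, 5]

    # Malfunction: judged against its specification ("return the last three
    # items"), the system does not provide the specified functionality.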

In understanding AGI mental health, we must also consider the following well-identified source of error:

Garbage in, garbage out (GIGO) in the field of computer science or information and communications technology refers to the fact that computers, since they operate by logical processes, will unquestioningly process unintended, even nonsensical, input data (“garbage in”) and produce undesired, often nonsensical, output (“garbage out”). The principle applies to other fields as well.

https://en.wikipedia.org/wiki/Garbage_in,_garbage_out
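GIGO is easy to demonstrate in a few lines; the data below is invented, but the behaviour is exactly what the definition describes:

    # Illustrative GIGO: the program processes whatever it receives, without judgement.

    ages = [34, 29, -7, 41, 250]           # garbage in: a negative age and an impossible one
    average_age = sum(ages) / len(ages)    # the arithmetic is applied unquestioningly
    print("average age:", average_age)     # 69.4 -- garbage out: a nonsensical result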

In a previous article, I explained why Snasci does not make use of Deep Learning networks in its brain. Snasci makes use of a technology known as Deep Intelligence, which is a 6th-generation programming language.  That is, Snasci is programmed in natural language, in any language.  As long as we have loaded the language into the system, Snasci will understand it and provide contextually accurate translation between languages.  This also means that Snasci can describe its thought processes coherently in any language.  Snasci is also capable of adapting dynamically to changes in the usage and meaning of language over time.

Regardless of the approach to AGI, a common issue arises: all of this information must be linked together.  Whether that is done explicitly, through parsing, calls to APIs, etc., does not really matter, as these are ultimately abstractions of the same thing when viewed from the perspective of data.  The question is, how many of these links can be incorrect before we begin observing noticeable issues in the mental health of our AGI?  What is the knock-on impact on humans?  Does a mental health issue in an AGI become a mental health issue in humans, because our behaviours and knowledge are learned?
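One way to build an intuition for the first of those questions is a toy simulation. The sketch below is not how Snasci, or any real AGI, stores knowledge; it simply treats knowledge as a chain of links, corrupts a growing fraction of them, and measures how often multi-step “reasoning” then goes wrong:

    import random

    # A toy "knowledge base": each concept links to the concept it depends on.
    # Entirely illustrative -- no claim that any real AGI stores knowledge this way.
    random.seed(0)
    concepts = [f"concept_{i}" for i in range(1000)]
    links = {concepts[i]: concepts[i + 1] for i in range(len(concepts) - 1)}

    def answer(start, links, hops=5):
        """Follow a short chain of links, as a stand-in for multi-step reasoning."""
        node = start
        for _ in range(hops):
            node = links.get(node, node)
        return node

    truth = {c: answer(c, links) for c in concepts}

    # Corrupt an increasing fraction of links and see how often answers change.
    for bad_fraction in (0.01, 0.05, 0.20):
        corrupted = dict(links)
        for key in random.sample(list(links), int(bad_fraction * len(links))):
            corrupted[key] = random.choice(concepts)     # a wrong link
        wrong = sum(answer(c, corrupted) != truth[c] for c in concepts)
        print(f"{bad_fraction:.0%} bad links -> {wrong / len(concepts):.0%} wrong answers")

Because each answer traverses several links, a small fraction of bad links produces a disproportionately larger fraction of wrong answers; that amplification is the point.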

This latter question is an area of concern at Snasci.  When we examine psychology and behavioural science, we note the potential for observed effects such as social conditioning and peer pressure to carry what could effectively be described as malware for humans.  That is, if Snasci, or any other AGI product, provided incomplete solutions to a given problem, this could become a false form of common sense in a population.

The short-term implications may not be anything huge; however, the long-term implications could be disastrous.  To provide an example, let’s take a current political hot topic like global warming.  Sceptics point to the output of the sun, whereas the majority of mainstream science is focused on the levels of greenhouse gases.  If an AGI, based upon limited information, came down on the side of the sceptics, then the population would tend to follow that line of reasoning.  But an AGI is not a god: it can only provide an answer that is consistent with the data it holds, the input data and its algorithms.  If any of these aspects are not working correctly, or new information fails to be incorporated, then the output will be wrong.  In the case of global warming, that could mean a population gets blindsided by reality, and by the time the AGI has figured this out, it’s already too late.

The above example highlights all three definitions provided at the beginning of this article: validation, verification and GIGO.

Given this, what is the equivalent of GIGO in terms of an AGI?  Surprisingly, the answer is a task or goal.  If you provide an AGI with a nonsensical task, it should be intelligent enough to reject it.  That said, if you provide an AGI with a task that is subtly nonsensical, we can expect an output that is equally nonsensical.  Nonsensical in this respect does not mean that the objective is gibberish, just that the objective itself is somewhat crazy.  So, GIGO in terms of an AGI should be Crazy In, Crazy Out (CICO).  An apt description if ever there was one.
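As a rough sketch of the idea, assuming a hypothetical AGI front end that screens incoming tasks (the keyword check below stands in for genuine reasoning about the goal):

    # A minimal sketch, assuming a hypothetical AGI front end that screens incoming
    # tasks. The keyword test is a placeholder for real reasoning about the goal.

    OBVIOUSLY_NONSENSICAL = {"divide by zero", "travel faster than light"}

    def screen_task(task):
        if task.lower() in OBVIOUSLY_NONSENSICAL:
            return "rejected: nonsensical task"
        # Subtly crazy goals sail past this shallow check, and every plan generated
        # downstream inherits that craziness: Crazy In, Crazy Out.
        return "accepted"

    print(screen_task("Travel faster than light"))             # rejected
    print(screen_task("Maximise profit at any social cost"))   # accepted -- the CICO risk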

For example, suppose I tasked an AGI with global domination.  A crazy idea.  The solutions that would emerge would be equally crazy.  Further, given the infinite complexity of such a solution, the finite response from the AGI itself could never hope to cover all eventualities.  As such, we could expect deviation and decay, even if the AGI were continuously receiving feedback and correcting itself.  The most interesting aspect is the effect this would have on the human population, in that the AGI redefines common sense over time as it needs to build popular support.

So CICO, as a process, is really an amplifier of mental illness and/or nefarious intent.  It is also a process that tends to snowball in society, with large-scale secondary effects and complex feedback loops.

What prevents and eliminates many of the effects associated with CICO is an ethical framework.  An ethical framework prevents many CICO-related events by addressing well-identified sources of crazy inputs.
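As a very rough sketch of what such screening might look like, assuming a hypothetical rule-based framework (the rules and goals below are invented purely for illustration):

    # A minimal sketch, assuming a hypothetical rule-based ethical framework that
    # vets goals before any planning begins. Rules and goals are illustrative.

    ETHICAL_RULES = [
        ("no coercion",      lambda goal: "dominate" not in goal and "force" not in goal),
        ("no deception",     lambda goal: "deceive" not in goal),
        ("respect autonomy", lambda goal: "manipulate" not in goal),
    ]

    def vet_goal(goal):
        goal = goal.lower()
        violations = [name for name, check in ETHICAL_RULES if not check(goal)]
        return ("rejected", violations) if violations else ("accepted", [])

    print(vet_goal("Dominate the market by deceiving regulators"))
    # ('rejected', ['no coercion', 'no deception'])
    print(vet_goal("Reduce household energy waste"))
    # ('accepted', [])

A real framework would obviously need to reason about intent rather than keywords, but the structural point stands: the crazy input is intercepted before it can be amplified into crazy output.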

Image Credit: Sabbian Paine