Why Does Snasci Not Use Deep Learning?

At present, this must be the single most baffling question about Snasci.  Why does it not use deep learning?  At some point in the future, Snasci will provide a comprehensive deep dive into its Deep Intelligence technology, but today let’s focus on the limitations of deep learning and how it fits into the picture of AGI.

Deep learning is a form of narrow AI.  A narrow AI is a classifier or filter that typically performs a single function.  The difference between a plain neural network and deep learning is one of degree and semantics, but deep learning is more than just a marketable term.

A neural network will typically have an extremely narrow focus, such as the ability to separate two or more classes of well-defined data in a general sense.  A deep learning network, on the other hand, is able to separate much more complex forms of data, leading to algorithms that can carry out complex functions with many degrees of freedom.  So where a neural network may separate or classify the data in a few columns of a database, a deep learning network will separate the problem of an entire game, with thousands or millions of variables, and provide a general solution.  Most of the time, that solution will be sufficient to consistently outperform humans.
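The difference in separating power can be sketched with a toy example.  A single linear unit can only draw one dividing line through the data, which is enough for a linearly separable problem like AND but not for XOR; composing units into layers, the core move that deeper networks scale up, solves it.  The weights below are hand-set for illustration, not learned, and this is a minimal sketch rather than anything resembling a production network.

```python
def step(x):
    """Hard threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def linear_unit(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, thresholded."""
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def shallow_and(a, b):
    """Linearly separable problem (AND): a single unit suffices."""
    return linear_unit([a, b], [1, 1], -1.5)

def deep_xor(a, b):
    """XOR is not linearly separable: no single unit computes it,
    but composing two hidden features with an output unit does."""
    h1 = linear_unit([a, b], [1, 1], -0.5)    # OR-like feature
    h2 = linear_unit([a, b], [-1, -1], 1.5)   # NAND-like feature
    return linear_unit([h1, h2], [1, 1], -1.5)  # AND of the two features

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", shallow_and(a, b), "XOR:", deep_xor(a, b))
```

The point is not the arithmetic but the composition: each extra layer lets the network carve up the input space in ways a single dividing surface cannot, which is what the "degrees" above refers to.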

An AGI, on this view, is a complex nesting of classifiers of classifiers inside an engine that mimics human behaviour.  Every problem that could ever be encountered must result in a new deep learning network being trained and inserted into the whole.  To meet the criteria for an Artificial General Intelligence, the system must be capable of doing this itself, without external intervention.  This leads to a chicken-and-egg scenario: the system needs to be able to classify the training data before it can train itself.  I like to think of this issue as one particular class of a universal law called ‘The Conservation of Stupidity’.

There are many tricks and approaches to overcome this stumbling block in the development of an AGI.  In the beginning it is human intervention; as the AGI develops, it becomes a matter of providing it with a toolkit to decompose problems, leading to the creation of better and/or more optimised algorithms.  That said, if this toolkit is incomplete, the AGI will miss entire sets of relationships and will not be as intelligent as it should be.

It is here that we begin to observe the real problems with deep learning: time, cost and computational load, both in training and in operation.  Imagine quantifying everything a human can or could possibly do, breaking it down, and attempting to build a deep learning network to drive a control system.  The costs would be staggering.  In the end, what you get is a solution that is accurate to a certain percentage.  Further, it can be hard to gauge exactly what that percentage of accuracy really is.  A good example of this is Google’s AlphaGo.  Whilst it was able to beat the world’s top player, as a general solution for the game of Go its true accuracy is unknown: it could be 20% accurate, or it could be 90%.

We simply don’t know and this has broad implications in the field of security.

Any neural network, or deep learning network, settles on some defining characteristic of a given dataset as the ultimate reason it classifies an input one way or another.  A neural network is a bit of a black box in this regard: in complex scenarios, it is extremely difficult to discern exactly what it is using as that characteristic.  Take sentiment analysis.  We like to say a deep learning network is classifying moods with 88% accuracy.  But is that what is really happening?  What about the other 12%?  The network is evidently keying on something we cannot see.

Now let’s translate that to an AGI and focus on that 12% of misclassification.  Is it exploitable?  How do we prove that it is not?  More importantly, how do we test for such errors and remote exploits?
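The worry above can be made concrete.  A headline accuracy figure says nothing about *which* inputs fall in the error band, and those are exactly the cases an attacker would probe.  The sketch below uses a hypothetical keyword-based stand-in for a sentiment model, purely to show how auditing the misclassified set surfaces the characteristic the model is actually keying on:

```python
def audit(examples, predict):
    """Return (accuracy, misclassified inputs) for a labelled test set."""
    errors = [text for text, label in examples if predict(text) != label]
    accuracy = 1 - len(errors) / len(examples)
    return accuracy, errors

def toy_predict(text):
    """Toy stand-in for a trained model: classifies on a single keyword."""
    return "positive" if "good" in text else "negative"

examples = [
    ("good film", "positive"),
    ("not good at all", "negative"),   # keyword fools the model: exploitable
    ("terrible film", "negative"),
    ("great film", "positive"),        # vocabulary gap: silent failure
]

accuracy, errors = audit(examples, toy_predict)
print(accuracy, errors)
```

Here the error cases reveal the hidden characteristic (a bare keyword match) and an obvious exploit (negation).  With a real deep network there is no such readable rule to inspect, which is the security problem the text describes.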

This leads to the next issue: whilst a deep learning network may provide a general solution to a problem, it is still just a single approach to that problem.  To work around misclassification, or a high error rate, many such weak deep learning networks are required, along with the architecture to select between them, adding further cost and computational time.
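The "many weak networks plus a selector" pattern can be sketched as a simple majority-vote ensemble.  The members below are trivial threshold functions standing in for trained networks (an assumption for illustration only); the structural point is that every member must run on every input, so inference cost grows linearly with the ensemble size, before counting the selector itself:

```python
def majority_vote(classifiers, x):
    """Run every member on the input and return the most common verdict.
    Every member executes per query, so cost scales with ensemble size."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Hypothetical weak members: each is a slightly different decision boundary.
weak_members = [
    lambda x: x > 0.4,
    lambda x: x > 0.5,
    lambda x: x > 0.6,
]

print(majority_vote(weak_members, 0.55))  # two of three members vote True
```

Real selector architectures (gating networks, learned routers) are heavier still than this flat vote, which is where the extra cost and computational time come from.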

As a small startup, Snasci could never hope to compete against major international players in an AGI market where deep learning was the only solution.  The risk, capital expenditure and resources required would guarantee failure.

Deep Intelligence, however, is a game changer and unlike its competitors it is here now.
