12 March 2009

Of spaghetti and the hot plate of failure...

Time for my 2nd post. I've had trouble gathering my senses, trying to force myself to express what's been going through my mind recently.

Sometimes I feel like I'm surging with ideas, and this gives me the desperate urge to express them and materialize said ideas into something useful. People call this the creative process; I call it neural venting.

So, as a first step, as of now, I'll start venting here, for I believe that a worthy thought not written down is a lost cause. The brain is a fine instrument, but our memory is not infinite; it decays with time, and something you've been thinking about for weeks or even months may vanish out of context at any moment, perhaps making way for a new memory, leaving you with that strange aftertaste where you ask yourself, "What was that idea again?"

So, regardless of my meaningless ramblings, let's get on topic. As a software developer who mainly thinks in bits and bytes, and as someone heavily involved in the field of Artificial Intelligence, I always seek new knowledge to better my understanding of how the code I write to simulate neural networks actually works, and why it lacks so much in comparison to real brains.

These same neural networks fall dismally short compared to biological ones; they do not even come close to a worm's brain. The 302 neurons that make up one of the simplest biological brains in existence (C. elegans) would beat a matching artificial neural network at the same task (survival) if put in a simulator. But I believe that the worm's brain was engineered with care, with every single neuron and synaptic connection put in place for a reason. You may argue genetic selection, or the adept, delicate and benevolent hands of God (my beliefs say so, by the way), but regardless, there's no dispute that they are simply superior to artificial ones, which come, in comparison, as a huge pile of entangled spaghetti, a big plate of failure, served hot with Italian sauce.

That's another topic, though. Thinking like this got me into comparing biological brains, how they work and how they interact; to write a good simulator you need to study what you're simulating first, right?

For example, although our bodies employ the same concepts and neural structures as other mammals, we gain consciousness, we claim the realm of thought and imagination, and we invent, we create, and we innovate. How come?

We share the concept of memory with almost all living beings: we have memory, and so do they. In my book, memory equals experience, and the ability to accumulate experience results in intelligence and adaptive behavior. But all in all, we have something they lack: the capability of producing new memories at will, memories that may never have happened, all in our heads, figments of our own minds. That is called imagination.

Let's look at a young baby. The baby comes into this world naked, completely helpless, and with no experience or past knowledge of the mechanics behind life. I know I did, and now, 25 years later, I'm conscious, aware, very curious, and I question my own mind and the workings of the brain that drives it.

What happened in those 25 years? I sure can't remember it all, but when I look back at my childhood I remember how I developed: I learned to perform the most complex tasks by observing and imitating. But how did I learn to feed myself when that awful feeling in my stomach emerged? As a baby, when I felt hungry, I cried, my mother fed me, and I realized that by putting food in my mouth hunger goes away; that's how I knew hunger can be satisfied.

From that point on, I bet that's how I learned that other needs can also be satisfied, and by observing how grownups satisfy theirs, I stand where I am now, knowing how my body works, what needs and desires I may possess, and how to satisfy them.

But that only covers the process of learning; all animal babies learn this way too. So the question still remains: what differentiates human beings as sentient beings? What makes us invent, create, and alter the environment around us in our favor?

From experience and careful thought about the subject, I can say that animals do possess a primal form of imagination (when a dog seeks food or warmth, it surely pictures it in its mind in one way or another; take Pavlov's dogs, for example: they must have pictured food every time the bell rang). But imagination is not only about anticipating future events and the realization of needs and desires. As humans, we have all of the above, in addition to the capability of triggering the process at will. The way we think is not always a reaction or a reflex to a certain event; we provide the impetus, the propelling force that drives our imagination to its limit (which hardly exists). I believe that da Vinci was a master at this; he must have realized this fact, and knew how to exploit it all the way.

Back on topic, however, we come to the mathematical representation of the brain. As huge a network as it is, it still has a known number of inputs and a known number of outputs, and memory is the collaborative result of all neurons working together. Does this mean that the brain is a mathematical function? That's a crude way to put it, but I believe something else.

I believe that the brain can be represented as an astronomically huge array with a huge number of dimensions, where each entry represents the expected outputs for a certain possibility/state of mind. This array would look something like this:

Output[x] = Memory(Input1, Input2, Input3, ..., x)

Where x represents the desired output node.

And here's a representation in pseudo-C code:

typedef struct
{
    float Outputs[NUMBER_OF_OUTPUTS];
} Outputs_t;


/* The number of dimensions is equal to the number of available
 * inputs; the indexers are also floating-point numbers,
 * thus allowing for a (theoretically) infinite number
 * of possibilities. The only limit here is the precision
 * of the floating-point number (which represents the
 * number of available neurons and synaptic connections
 * in this case).
 */

Outputs_t Memory[i1, i2, i3, i4, i5, ...];


Like I mentioned in the comment above, but I'll reiterate anyway: the array would have floating-point indexers, allowing decimal numbers to be used as indices into the array. For the sake of simplicity, let's say the numbers range from -1 to 1, and the indexer referenced by every input is its state at the time of evaluation. This means that the number of possible states is infinite, versus the number of neurons (which is finite), and thus we come to a new concept: detail.
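
To make that a bit more concrete, here is a minimal, purely illustrative sketch in actual C of how the pseudo-code above could be approximated by discretizing each input's state in [-1, 1] into a fixed number of bins. RESOLUTION, bin() and recall() are names I've made up just for this example; RESOLUTION is the stand-in for that notion of detail:

#define NUMBER_OF_OUTPUTS  2
#define RESOLUTION         32   /* bins per input; stands in for "detail" */

typedef struct
{
    float Outputs[NUMBER_OF_OUTPUTS];
} Outputs_t;

/* One entry per discretized combination of the three input states. */
static Outputs_t Memory[RESOLUTION][RESOLUTION][RESOLUTION];

/* Map an input state in [-1, 1] to a bin index in [0, RESOLUTION - 1]. */
static int bin(float state)
{
    int i = (int)((state + 1.0f) * 0.5f * (RESOLUTION - 1));
    if (i < 0) i = 0;
    if (i > RESOLUTION - 1) i = RESOLUTION - 1;
    return i;
}

/* "Evaluate" the brain: look up the stored outputs for the current state. */
static Outputs_t *recall(float i1, float i2, float i3)
{
    return &Memory[bin(i1)][bin(i2)][bin(i3)];
}

A call like recall(0.3f, -0.9f, 0.0f) would then hand back the block of outputs the "brain" has stored for that region of its state space.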

When the number of neurons is low, the precision drops, resulting in "fractured behavior". To imagine this, think of the simpler life forms out there: they seem "programmed" to do things, they follow a predictable pattern, learning is minimal, and the new, better generation develops through evolution, by means of genetic selection and mutation.

With the array representation above, I hear you wonder: "What of learning? How does it fit into this mathematical madness?"

Learning is the process of changing the fields of the array, and should look like this:

Memory(Input1, Input2, Input3, ..., x) = Optimal_Output[x]

But not quite: this array has a very special property; changing a value affects the fields surrounding it. Assuming that our network is a typical artificial neural network (where every neuron in a layer is connected to every neuron in the next), we can safely assume that the changes will be symmetric to a certain extent. That is, when the threshold of a neuron is adjusted, the rippling effect touches more fields if said neuron is in the topmost layers of the network (near the inputs), and fewer fields if the neuron is near the bottom (closer to the output layer).
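
Here is how I picture that learning rule in code, again a rough sketch of my own rather than anything rigorous, shrunk down to one input and one output for clarity. Learning pulls the field for the current state toward the optimal output, and the change bleeds into neighbouring fields with a weight that falls off with distance (the asymmetry between layers near the inputs and layers near the outputs is not modeled here):

#define FIELDS          32      /* fields along the single input dimension */
#define LEARNING_RATE   0.5f
#define RIPPLE_RADIUS   2       /* how far a change bleeds into neighbours */

static float Memory1D[FIELDS];  /* one input, one output, for clarity */

/* Map an input state in [-1, 1] to a field index in [0, FIELDS - 1]. */
static int field_of(float state)
{
    int i = (int)((state + 1.0f) * 0.5f * (FIELDS - 1));
    return i < 0 ? 0 : (i > FIELDS - 1 ? FIELDS - 1 : i);
}

/* Learning: pull the field for this state toward the optimal output,
 * and let the change "ripple" into neighbouring fields with falling weight. */
static void learn(float input, float optimal_output)
{
    int centre = field_of(input);
    int d;

    for (d = -RIPPLE_RADIUS; d <= RIPPLE_RADIUS; d++)
    {
        int i = centre + d;
        float weight;

        if (i < 0 || i > FIELDS - 1)
            continue;
        weight = LEARNING_RATE / (1.0f + (float)(d < 0 ? -d : d));
        Memory1D[i] += weight * (optimal_output - Memory1D[i]);
    }
}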

I might write a small program that demonstrates this visually, but that's another task for another day.

If you have followed and understood this so far, then I bet you've come to a disturbing realization: does this mean that the brain is some form of probability machine? Are our brains some form of statistical spreadsheet?

The explanation above suggests so: our brains are predictive engines. The concept is simple, yet the application is complex and difficult to perceive. The closest term that comes to mind when I try to describe the brain is an infinite state machine!

Yes, this is what I think of when I think of the brain; it's the optimal representation of such a term. It can handle infinite possibilities, and when its capacity of neurons is not enough, it improvises and it compromises, but there's always an output, a "thing to do", for any given state at any given moment.

If you look deeper into this, you'll find that the brain is simply a living, self-hosting, self-adjusting adaptive algorithm. It is a relational database of "what to do when", and through that rippling effect I mentioned above, it also predicts what to do in similar situations in the future: given similar, but not identical, inputs, it will perform a similar, but not identical, action. This is what gives organic beings the unpredictability factor; something, somewhere might change through the learning process, and the predicted behavior for a certain state may be drastically affected by something seemingly irrelevant, yet it all remains consistent and falls into the same context. This leads to one conclusion:

The brain is a mathematical oracle.

Even this far into the discussion, I have allowed myself to neglect a huge aspect of the subject: the biological brain. The difference between the biological brain and the representation above is that the biological brain is not static; it does not "evaluate" a value every xx milliseconds in a loop. Biological neurons are rather those of a spiking model; they trigger at will. I could go in-depth about action potentials and activation functions, but this information is widely available around the internet, and I won't bother explaining what you could read about elsewhere. So let's put it like this: the biological brain has loop-back mechanisms. It has neural microcircuits that give it the capability to supply its own inputs and divert previous outputs back in as new inputs, and this puts it into an ever-renewed state of flux. It does not wait for "inputs", and it does not "react": it acts. External inputs such as sensory data are simply additional incentives that may aid in the decision-making process, in the ever-going cycle of "what to do next?".
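
If I had to sketch that loop-back idea in code, it would look something like the toy below. This is entirely my own illustration: read_sensors() and act() are hypothetical placeholders, and a dummy weighted sum stands in for the actual network.

#define N_SENSORY   4                 /* external (sensory) inputs */
#define N_FEEDBACK  4                 /* previous outputs fed back as inputs */
#define N_INPUTS    (N_SENSORY + N_FEEDBACK)
#define N_OUTPUTS   N_FEEDBACK

void read_sensors(float *sensory);    /* hypothetical: fills the first N_SENSORY inputs */
void act(const float *outputs);       /* hypothetical: performs "the thing to do" */

/* Stand-in for the actual network: a fixed weighted sum, crudely squashed. */
static void evaluate(const float *in, float *out)
{
    int i, o;
    for (o = 0; o < N_OUTPUTS; o++)
    {
        float sum = 0.0f;
        for (i = 0; i < N_INPUTS; i++)
            sum += in[i] * 0.1f;                          /* dummy weights */
        out[o] = sum / (1.0f + (sum < 0 ? -sum : sum));   /* squash into (-1, 1) */
    }
}

/* The loop never waits for input: previous outputs become part of the next
 * input vector, and sensory data is just an extra incentive mixed in. */
static void think_forever(void)
{
    float inputs[N_INPUTS]   = { 0 };
    float outputs[N_OUTPUTS] = { 0 };
    int i;

    for (;;)
    {
        read_sensors(inputs);
        for (i = 0; i < N_FEEDBACK; i++)
            inputs[N_SENSORY + i] = outputs[i];   /* divert previous outputs back in */
        evaluate(inputs, outputs);                /* decide "what to do next" */
        act(outputs);
    }
}

The point is the shape of the loop: it never blocks waiting for the world, it just keeps asking "what to do next?".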

I will discuss this in detail in the future, since the topic deserves volumes upon volumes of articles to be covered fully.

Anyway, thinking like this inspired me to work on a new model of artificial neural networks, one that learns in different ways from the current ones. The current back-propagation approach is simply lacking: you must provide the expected inputs and matching outputs in order for the networks to succeed at learning specific, limited tasks. Although they perform marvelously well when carefully designed and taught, tasks such as the recognition of character glyphs, voices and fingerprints all remain limited compared to their true potential.
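
To show what I mean by having to provide the expected outputs, here is the simplest supervised update I can write down: a single-layer delta rule rather than full back-propagation through hidden layers, so take it as a sketch of the flavour, not the real thing. Every training example must carry the answer we already know.

#define TRAIN_INPUTS   16
#define TRAIN_OUTPUTS  4

/* One training example: the inputs and the output we expect for them. */
typedef struct
{
    float inputs[TRAIN_INPUTS];
    float expected[TRAIN_OUTPUTS];   /* the answer we already have to know */
} Example_t;

/* Delta-rule update on a single layer of weights. */
static void train(float weights[TRAIN_OUTPUTS][TRAIN_INPUTS],
                  const Example_t *examples, int count, float rate)
{
    int e, o, i;

    for (e = 0; e < count; e++)
        for (o = 0; o < TRAIN_OUTPUTS; o++)
        {
            float actual = 0.0f, error;

            for (i = 0; i < TRAIN_INPUTS; i++)
                actual += weights[o][i] * examples[e].inputs[i];

            error = examples[e].expected[o] - actual;   /* needs the known answer */
            for (i = 0; i < TRAIN_INPUTS; i++)
                weights[o][i] += rate * error * examples[e].inputs[i];
        }
}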

The current model of neural networks revolves around inputs, outputs and layers upon layers of hidden neurons, and the network is evaluated in a procedural manner; this is different from what real brains do. These networks are usually crafted and trained for one purpose, and that's about it: once they enter a production stage they cease all learning activities. Some expert systems that employ neural networks in their arsenal of problem-solving tools do keep error-recovery routines that train the networks as they work, to fine-tune their performance where applicable, but that too is a different topic.
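
For reference, this is roughly what that procedural, layer-by-layer evaluation boils down to; a bare-bones sketch with made-up sizes, no biases, and a crude squashing function:

#define LAYER_SIZE  8
#define N_LAYERS    3

/* Every neuron in a layer is connected to every neuron in the next,
 * and the whole thing runs once, top to bottom, per query. */
static void feed_forward(float weights[N_LAYERS][LAYER_SIZE][LAYER_SIZE],
                         float activations[LAYER_SIZE])
{
    float next[LAYER_SIZE];
    int layer, i, j;

    for (layer = 0; layer < N_LAYERS; layer++)
    {
        for (j = 0; j < LAYER_SIZE; j++)
        {
            float sum = 0.0f;
            for (i = 0; i < LAYER_SIZE; i++)
                sum += weights[layer][j][i] * activations[i];
            next[j] = sum / (1.0f + (sum < 0 ? -sum : sum));   /* squash into (-1, 1) */
        }
        for (j = 0; j < LAYER_SIZE; j++)
            activations[j] = next[j];
    }
}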

(To be continued)

2 comments:

Mohammed Gamal said...

As Neo once said, "Whoa!". Dude, are you for real???

Interesting post! Anxiously waiting for the rest :)

ALien_13 said...

"of spaghetti and hot plate of failure"

Looking at this title, I wouldn't have related it to neural networks.

But it really is a nice and interesting post.

You got a nice neural venting XD