31 August 2009

THE CHIMPUTER AND THE DIGITAL BANANA

I've always been fascinated by analog computers, so let me define the term for simplicity's sake: an analog computer is a closed system of physical objects/particles, constantly interacting and performing metaphorical/physical math to solve a certain problem.

To be honest, if you think of today's digital computer, there's not much of a difference between a digital computer and an analog one; today's digital computers use the same concepts. Memory, for instance, is stored on chips in matrices of silicon gates, and each gate is composed of two layers of silicon, P-type and N-type, which interact in a special way that allows a metaphorical value (the binary 0 or 1) to be stored at said gate. At the bottom line, OFF and ON inevitably become 0 and 1.

With all of the particles involved in this process, you end up using physical particles to represent information.

I'm a big fan of abstraction, and even quantum computers use similar techniques for information storage and retrieval, although they utilize a much more delicate property of physical particles: information is stored and retrieved by harnessing the physical properties of the particles themselves. So you can think of qubits (the quantum computer's equivalent of a bit) as metadata stored ON a physical particle.

So, in essence, one could argue that you can build a computer out of anything with strict rules of interaction. Given the right elements, you could build a logic gate out of anything dynamic (anything that responds to external events and affects surrounding objects in the same sense); even sticks of wood, with a water stream serving as current, would be an adequate (but not optimal) solution. The only problem would be arranging them the right way.

So, can you use light for this? I bet you could, and photonics is quite close to becoming commonplace.

Can we use monkeys for this? I BET YOU COULD!

Preface

As outlined above, as long as the elements of a system can affect other elements, any chain of events could be considered a form of computing; and as long as one current can turn another current off, you can build a logic gate out of it. This includes trained monkeys, and a river.
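
To make the "one current can turn another current off" primitive concrete, here's a minimal sketch in plain C (the function name and the stream-metaphor mapping are my own, purely for illustration): a single "blocks" relation, where one flowing stream shuts a gate across another, is already enough to compose NOT, AND, and OR.

#include <stdio.h>

/* 1 = this stream is flowing, 0 = it is blocked.
 * When stream a flows, it pulls a gate shut across stream b,
 * so b only gets through if a is NOT flowing. */
static int blocks(int a, int b) { return !a && b; }

int main(void)
{
    const int SUPPLY = 1; /* a stream that is always flowing */

    for (int x = 0; x <= 1; ++x) {
        for (int y = 0; y <= 1; ++y) {
            int not_x   = blocks(x, SUPPLY);                            /* NOT x   */
            int x_and_y = blocks(not_x, y);                             /* x AND y */
            int x_or_y  = blocks(blocks(x, blocks(y, SUPPLY)), SUPPLY); /* x OR y  */
            printf("x=%d y=%d : NOT x=%d, AND=%d, OR=%d\n",
                   x, y, not_x, x_and_y, x_or_y);
        }
    }
    return 0;
}

Arrange enough of these "blocks" relations in the right order and you have an arithmetic unit; whether the gate is silicon, wood, or a bored monkey is an implementation detail.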

Hardware platform

In this section I'll propose a potential approach to building a CHIMPUTER, beginning with a definition of the term and of the hardware architecture: a CHIMPUTER is a closed system with trained MONKEYS and CHIMPS inside, all seated by a stream of water that runs throughout the system. The arms of said monkeys are tied to wooden gates with short ropes, and data is transferred through the water stream as BANANAS (or DIGITAL BANANAS for short). As the first banana enters the stream, the highly trained monkey will try to grab it, pulling the rope in the process, switching gates on and off, and thus interrupting or permitting the flow of a different stream somewhere else.

The rope could also be tied to a BANANA DISPENSER that activates different currents throughout the machine's lifetime.

So, how does computation fall into this? Here we come to the memory, which is stored in matrices of 8-BANANA BOXES, divided into small partitions and placed in a BANANA MAZE. Memory addressing is done through the rope pulling of MEMORY CHIMPS, who pull their ropes in a specific sequence, influenced by the information carried by the stream of water, thus opening and closing specific gates inside the BANANA MAZE, through which a highly trained MEMORY CHIMP will navigate, fetch the destination box, and then dispatch it to the C-PU.
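
As a loose illustration only (all names and numbers below are made up for the sketch), the MEMORY CHIMP's rope-pulling sequence behaves like an ordinary address decoder: each pull is one bit, and together the pulls open exactly one path through the BANANA MAZE.

#include <stdio.h>

#define ADDRESS_PULLS 3                    /* rope pulls per memory address */
#define MAZE_PATHS    (1 << ADDRESS_PULLS) /* 2^3 = 8 boxes reachable       */

/* Each pull is 1 (pulled) or 0 (slack); the sequence selects one path. */
static int decode_path(const int pulls[ADDRESS_PULLS])
{
    int path = 0;
    for (int i = 0; i < ADDRESS_PULLS; ++i)
        path = (path << 1) | (pulls[i] ? 1 : 0);
    return path;
}

int main(void)
{
    int banana_boxes[MAZE_PATHS] = {3, 1, 4, 1, 5, 9, 2, 6}; /* bananas per box      */
    int pulls[ADDRESS_PULLS]     = {1, 0, 1};                /* the chimp's sequence */

    int path = decode_path(pulls);
    printf("maze path %d opened, box holds %d bananas, dispatching to the C-PU\n",
           path, banana_boxes[path]);
    return 0;
}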

The same logic applies to the C-PU; the rope-pulling logic gates can drive the whole system. For computation, you have the AND-CHIMP, the OR-CHIMP, the ADD-CHIMP, and other specialists trained to process their own instructions specifically.

I/O

Input is done using punch-cards, as you'd expect from a top-of-the-line analog computer.
Output is my favorite: the monkey-with-the-typewriter presses keys based on where the bananas fall.

Of course, you could argue that chimps do all the cool work while monkeys do the menial tasks, but what can I say? Chimps are smarter by nature, and for a successful design you must not neglect such critical details.

Operating System

The machine will come with a pre-installed copy of MONKEYSOFT® BLINDS©: FIESTA™, a fast, feature-rich, secure, and stable operating system.

Instruction set

The system will include a BASIC-CHIMP to interpret and natively execute MONKEY-BASIC instructions. MONKEY-BASIC is a powerful, high-level, and highly structured language with a procedural approach and a CHIMP-ORIENTED feature set, designed for ease of use and flexibility; and of course, line numbers are mandatory. Too bad you still have to use punch-cards to input the program.

In closing

You can see it, can't you? This technical marvel can (and will, once it goes live) alter the way we think of modern computers, and have a notable impact on everyone's day-to-day life.

Now all I need is adequate funding for my exceptional ideas.

06 August 2009

The teapot of madness..

Hello again, and welcome to my blog!

As strange as this may seem, I think my only motive to write these days is purified, condensed depression, the kind that comes as dark blocks of oozing matter, the kind that would inevitably drive a whole nation into mass suicide by hydro-electrolyzed strangulation if a small chunk were slipped into the water supply; the bottom line is, you get the point.

My need to write seems to be somehow linked with strong emotional states at the subconscious level, as I'm only incited to indulge in meaningless ramblings when I am pushed into one of those states, and I guess that writing fulfills my need to communicate with others, if not, before all, with my own self.

By now you're expecting me to state 'the why' after stating 'the how', and my answer is: nothing that can be solved by writing about it. People are selfish @#$%, life sucks, etc.; and now I'm done, what a relief..

Fulfilled my need to ramble, deflated some pressure from the boiling teapot in my head, thanks, greetings, and have a good day!

12 March 2009

Of spaghetti and the hot plate of failure...

Time for my 2nd post. I've had trouble gathering my senses, trying to force myself into expressing what's been going through my mind recently.

Sometimes I feel like I'm surging with ideas, and this gives me the desperate urge to express them and materialize said ideas into something useful. People call this the creative process; I call it neural venting.

So, as a first step, as of now, I'll start venting here, for I believe that a worthy thought not written down is a lost cause. The brain is a fine instrument, but our memory is not infinite and decays with time, and something you've been thinking about for weeks or even months may vanish out of context at any moment, making way for a new memory perhaps, and leaving you with that strange aftertaste where you ask yourself, "What was that idea again?"

So, regardless of my meaningless ramblings, let's get on topic. As a software developer who mainly thinks in bits and bytes, and as someone heavily involved in the field of Artificial Intelligence, I always seek new knowledge to better my understanding of how the code I write to simulate Neural Networks actually works, and of why it lacks so much in comparison to real brains.

Those same neural networks are in dire lack compared to biological ones; they do not even come close to a worm's brain. The 302 neurons that make up one of the simplest biological brains in existence (that of C. elegans) would beat a matching Artificial Neural Network at the same task (survival) if put in a simulator. I believe that the worm's brain was engineered with care, with every single neuron and synaptic connection put in place for a reason; you may argue genetic selection, or the adept, delicate and benevolent hands of God (my beliefs say so, by the way), but regardless, there's no dispute that they are simply superior to artificial ones, which come, in comparison, as a huge pile of entangled spaghetti, a big plate of failure served hot with Italian sauce.

That's another topic though, but thinking like this got me into comparing biological brains, how they work and how they interact; to write a good simulator you need to study what you're simulating first, right?

For example, although our bodies employ the same concepts and neural structures as other mammals, we gain consciousness, we claim the realm of thought and imagination, and we invent, we create, and we innovate. How come?

We share the concept of memory with almost all living beings: we have memory, and so do they. In my books, memory equals experience, and the ability to accumulate experience results in intelligence and adaptive behavior. But all in all, we have something they lack: the capability of producing new memories at will, memories that may have never happened, all in our heads, figments of our own minds, and that is called imagination.

Let's look at a young baby. The baby comes into this world naked, completely helpless, and with no experience or past knowledge of the mechanics behind life; I know I did. And now, 25 years later, I'm conscious, aware, very curious, and I question my own mind and the workings of the brain that drives it.

What happened in those 25 years? I sure can't remember it all, but when I look back at my childhood now, I remember how I developed: I learned to perform the most complex tasks by observing and imitating. But how did I learn to feed myself when that awful feeling in my stomach emerges? As a baby, when I felt hungry, I cried; my mother fed me, and I realized that by putting food in the mouth, hunger goes away, and so I knew how hunger could be satisfied.

From that point on, I bet that's how I learned that other needs could also be satisfied, and by observing how grownups satisfy theirs, I came to stand where I am now: in knowledge of how my body works, what needs and desires I may possess, and how to satisfy them.

But that only covers the process of learning; all animal babies learn this way too. So the question still remains: what differentiates human beings as sentient beings? What makes us invent, create, and alter the environment around us in our favor?

From experience and careful thought about the subject, I can say that although animals do possess a primal form of imagination (when a dog seeks food or warmth, it surely pictures it in its mind in one way or another; take Pavlov's dogs for example, they must have pictured food every time the bell rang), imagination is not all about anticipating future events and the realization of needs and desires. As humans, we have all of the above, in addition to the capability of triggering the process at will; the way we think is not always a reaction or a reflex to a certain event. We provide the impetus, the propelling force that drives our imagination to its limit (which hardly exists). I believe that da Vinci was a master at this; he must have realized this fact and known how to exploit it all the way.

Back on topic, however: we come to the mathematical representation of the brain. As huge a network as it is, it still has a known number of inputs and a known number of outputs, and memory is the collaborative result of all neurons working together. Does this mean that the brain is a mathematical function? That's a crude way to put it, but I believe something else.

I believe that the brain can be represented as an astronomically huge array with a huge number of dimensions (bounds), where each entry in this array represents the expected outputs for a certain possibility/state of mind. This array would look something like this:

Output[x] = Memory(Input1, Input2, Input3, ..... , x)

Where x represents the desired output node.

And here's a representation in pseudo-C code:

typedef struct
{
    float Outputs[NUMBER_OF_OUTPUTS];
} Outputs_t;


/* The number of dimensions (bounds) is equal to the number of
 * available inputs, and the indexers are also floating-point
 * numbers, thus allowing for a (theoretically) infinite number
 * of possibilities; the only limit here is the precision of the
 * floating-point number (which represents the number of available
 * neurons and synaptic connections in this case).
 */


Outputs_t Memory[i1, i2, i3, i4, i5, ...];


Like I mentioned in the comment above, but I'll reiterate anyway: the array has floating-point indexers, allowing decimal numbers to be used as indices into the array. For the sake of simplicity, let's say the numbers range from -1 to 1. The indexer referenced by every input is that input's state at the time of evaluation; this means that the number of possible states is infinite, versus the number of neurons, which is finite. Thus, we come to a new concept: detail.

When the number of neurons is low, the precision drops, resulting in "fractured behavior". To imagine this, think of the simpler life forms out there: they seem "programmed" to do things, they follow a predictable pattern, learning is minimal, and the new, better generation develops through evolution, by means of genetic selection and mutation.
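
To give this notion of detail some shape, here's a rough sketch (the cell counts and sample inputs are arbitrary choices of mine) of how the same continuous input collapses onto a coarser or finer grid of memory cells depending on how many are available; with only a few cells, noticeably different situations land on the same entry and get the same reaction, which is the fractured behavior I mean.

#include <stdio.h>

/* Map a continuous input in [-1, 1] onto one of `states` discrete
 * entries of the memory array; more available cells = more detail. */
static int to_state(float input, int states)
{
    int s = (int)((input + 1.0f) * 0.5f * (float)states);
    if (s >= states) s = states - 1;
    if (s < 0)       s = 0;
    return s;
}

int main(void)
{
    float inputs[] = {-0.80f, -0.75f, 0.10f, 0.15f, 0.90f};
    int n = (int)(sizeof inputs / sizeof inputs[0]);

    for (int i = 0; i < n; ++i)
        printf("input %+.2f -> cell %d of 4, cell %3d of 256\n",
               inputs[i], to_state(inputs[i], 4), to_state(inputs[i], 256));
    return 0;
}

With 4 cells, -0.80 and -0.75 are indistinguishable and trigger the same response; with 256 cells, they are separate memories.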

With the array representation above, I hear you wonder "What of learning? how does it fit in this mathematical madness?"

Learning is the process of changing the fields of the array, and it should look something like this:

Memory(Input1, Input2, Input3, ..... , x) = Optimal_Output[x]

But not quite: this array has a very special property, in that changing a value affects the fields surrounding it. Assuming that our network is a typical artificial neural network (where every neuron in a layer is connected to every neuron in the next), we can safely assume that the changes will be symmetric to a certain extent; that is, when the threshold of a neuron is adjusted, the rippling effect touches more fields if said neuron is in the topmost layers of the network (near the inputs), and fewer fields if the neuron is near the bottom (closer to the output layer).
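
A rough, minimal sketch of how such a ripple might be modeled (the decay constant, radius, and single dimension are placeholder choices of mine, not a finished design): one cell is taught an optimal output, and its neighbors are nudged toward it with exponentially fading strength.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define STATES        32    /* discretized states along one input axis    */
#define RIPPLE_RADIUS 4     /* how far a single lesson spreads (assumed)  */
#define RIPPLE_DECAY  0.5f  /* how fast the spread fades (assumed)        */

static float memory[STATES]; /* one "row" of the brain-array              */

/* Teach the array the optimal output for one state, and let the change
 * ripple into the surrounding states with decaying strength. */
static void learn(int state, float optimal_output)
{
    for (int offset = -RIPPLE_RADIUS; offset <= RIPPLE_RADIUS; ++offset) {
        int neighbor = state + offset;
        if (neighbor < 0 || neighbor >= STATES)
            continue;
        float weight = expf(-RIPPLE_DECAY * (float)abs(offset));
        memory[neighbor] += weight * (optimal_output - memory[neighbor]);
    }
}

int main(void)
{
    learn(16, 1.0f);                /* one lesson at state 16...           */
    for (int i = 12; i <= 20; ++i)  /* ...ripples into its neighbors       */
        printf("state %2d -> %.3f\n", i, memory[i]);
    return 0;
}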

I might write a small program that demonstrates this visually, but that's another task for another day.

If you have followed and understood me this far, then I bet you've come to a disturbing realization: does this mean that the brain is some form of probability machine? Are our brains some form of statistical spreadsheet?

The explanation above suggests so: our brains are predictive engines. The concept is simple, yet the application is complex and difficult to perceive. The closest term that comes to mind when I try to describe the brain is an infinite state machine!

Yes, this is what I think of when I think of the brain; it's the optimal embodiment of such a term. It can handle infinite possibilities, and when its capacity of neurons is not enough, it improvises and it compromises, but there's always an output and a "thing to do" for any given state at any given moment.

If you look deeper into this, you'll find that the brain is simply a living, self-hosting, self-adjusting adaptive algorithm. It is a relational database of "what to do when", and through that rippling effect I mentioned above, it also predicts what to do in similar situations in the future: given similar, but not the same, inputs, it will perform a similar, but also not the same, action. This is what gives organic beings the unpredictability factor; something, somewhere might change through the learning process, and the predicted behavior for a certain state may be drastically affected by something seemingly irrelevant, yet it's all consistent and falls into the same context. And this leads to one conclusion:

The brain is a mathematical oracle.

Even this far into the discussion, I have allowed myself to neglect a huge aspect of the subject: the biological brain. The difference between the biological brain and the representation above is that the biological brain is not static; it does not "evaluate" a value every xx milliseconds in a loop. Biological neurons are rather those of a spiking model; they trigger at will. I could go in-depth about action potentials and activation functions, but this information is widely available around the internet, and I won't bother explaining what you could read about elsewhere. So let's put it like this: the biological brain has loop-back mechanisms. It has neural microcircuits that give it the capability to supply its own inputs and divert previous outputs as new inputs, and this puts it into an ever-renovated state of flux. It does not wait for "inputs", and it does not "react"; it acts. External inputs such as sensory data are simply additional incentives that may aid in the decision-making process, in the ever-going cycle of "what to do next?"
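
To make the loop-back idea slightly more concrete, here's a crude sketch of a single leaky, spiking neuron whose own spike is fed back in as an input on the next step; the constants (leak, threshold, feedback weight) are placeholder numbers of my own, not anything measured.

#include <stdio.h>

#define LEAK      0.9f  /* fraction of charge that survives each step (assumed) */
#define THRESHOLD 1.0f  /* firing threshold (assumed)                           */
#define FEEDBACK  1.1f  /* weight of the neuron's own previous spike (assumed,  */
                        /* strong enough to keep re-triggering it)              */

int main(void)
{
    float potential   = 0.0f;
    int   spiked_last = 0;

    /* One sensory nudge at step 0, then no external input at all:
     * the loop-back alone keeps the neuron active -- it acts, it
     * does not merely react. */
    for (int step = 0; step < 10; ++step) {
        float external = (step == 0) ? 1.2f : 0.0f;
        potential = LEAK * potential + external + FEEDBACK * (float)spiked_last;

        spiked_last = (potential >= THRESHOLD);
        if (spiked_last)
            potential = 0.0f; /* reset after the spike */

        printf("step %d: potential %.3f, spike %d\n", step, potential, spiked_last);
    }
    return 0;
}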

I will discuss this in detail in the future, since the topic deserves volumes upon volumes of articles to fully cover.

Anyways, thinking like this inspired me to make a new model of artificial neural networks, one that learns in different ways from the current ones. The current linear back-propagation approach is simply lacking: you must provide the expected inputs and matching outputs in order for the networks to succeed at learning specific and limited tasks. Although they perform those marvelously well when carefully designed and taught, tasks such as the recognition of character glyphs, voices and fingerprints all remain limited compared to their true potential.

The current model of neural networks revolves around inputs, outputs, and layers upon layers of hidden neurons, and the network is evaluated in a procedural manner; this is different from what real brains do. These networks are usually crafted and trained for one purpose, and that's about it; once they enter the production stage, they cease all learning activities. Some expert systems that employ neural networks in their arsenal of problem-solving solutions do keep error-recovery routines where they train the networks as they work, to fine-tune their performance when applicable, but that also remains a different topic.
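
For contrast, here's a minimal sketch of what I mean by "evaluated in a procedural manner": a tiny, fixed two-layer network computed layer by layer, with nothing looping back and nothing learning once it's deployed (the sizes and weights are arbitrary placeholders of mine).

#include <math.h>
#include <stdio.h>

#define INPUTS  2
#define HIDDEN  3
#define OUTPUTS 1

/* Arbitrary, hand-picked weights; a real network would have learned these. */
static const float w_ih[HIDDEN][INPUTS]  = {{0.5f, -0.2f}, {0.8f, 0.3f}, {-0.6f, 0.9f}};
static const float w_ho[OUTPUTS][HIDDEN] = {{0.7f, -0.4f, 0.5f}};

static float sigmoid(float x) { return 1.0f / (1.0f + expf(-x)); }

int main(void)
{
    float input[INPUTS] = {0.9f, 0.1f};
    float hidden[HIDDEN], output[OUTPUTS];

    /* Layer 1: evaluated procedurally, one neuron after another. */
    for (int h = 0; h < HIDDEN; ++h) {
        float sum = 0.0f;
        for (int i = 0; i < INPUTS; ++i)
            sum += w_ih[h][i] * input[i];
        hidden[h] = sigmoid(sum);
    }

    /* Layer 2: the same again; no feedback, no further learning. */
    for (int o = 0; o < OUTPUTS; ++o) {
        float sum = 0.0f;
        for (int h = 0; h < HIDDEN; ++h)
            sum += w_ho[o][h] * hidden[h];
        output[o] = sigmoid(sum);
    }

    printf("output = %.3f\n", output[0]);
    return 0;
}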

(To be continued)

10 February 2009

Greetings!

Hmm, a new blog at last..

I am known to be a big fan of lurking online, always staying in the shadows, reading forums that I never register on just for the joy of reading what people write there, and the same goes for blogs.

I'm always in invisible mode on [insert the name of your favorite IM application here], I rarely reply to e-mails, and I hardly engage in any kind of social interaction.

I've accustomed myself to enjoying solitude more than company; I like reading about complex scientific topics all on my own, just for the sake of reading, and accumulating as much knowledge as I can in every field possible.

Although it is enticing to keep things the way they are, I found myself in a tough situation here: all of the knowledge I've gathered over the past few years has turned into hurricanes of ideas and thoughts, pushing my brain to its limits and burning through my cerebral cortex. My mind is about to implode, and this is why I created this blog.

This is going to be my own way of arranging them, sorting them, and (if anyone is even reading this) sharing them with others, in hopes of making something useful out of them.

I would also appreciate it if you take note that English is not my native tongue, so if my linguistics come up short, by all means, correct me.

Anyways, my main interests, which I will be writing about, are: Software development, Biology, and Neurobiotics, amongst other things. (If you think this is a strange mix, you're probably right, but like I mentioned, I do have a hunger for knowledge, and that includes EVERYTHING, so if you see me writing about Aerodynamics or Psychology, don't be surprised.)

Another language I might be writing in is Arabic (my mother tongue), and I expect that most posts in Arabic will be either programming tutorials or local chatter about current events in Egypt, etc. If a certain topic deserves it, I'll have it posted in both languages.

I guess that's about it *resisting my desire to cancel this post, I'll click "Publish" now to end this agony*