

Probabilistic Quantum Neural Network Basics

Probabilistic Quantum Neural Networks – The basics of how technology can turn mistakes to its advantage.

This short article explores the main difference between logic-based and quantum-probability-based neural networks. The context is the development of a memory architecture that acquires AI capabilities as a function of its construction, rather than through logical design or programming. In simple terms: universal memory that replicates and processes data without the need for logic or rules, and that can reconfigure itself without losing the essence of what it has already remembered whilst storing new information at the same time.

The classic AI problem

When we think of artificial intelligence, we often think of lots of processing power, large sets of data and complicated trees of decisions and weighted choices, all of them constantly evaluated in an attempt to make a choice or spot a pattern.

This approach to AI comprises several key components:

  • Memory (hard disks, RAM, ROM, Internet/Cloud, photos, databases, text)

  • Inference engines (rule-based, relational and other decision technologies)

  • Input (Sensor and data capture)

  • Output (screen, in-memory operations)

Just look at the above list for a moment and try to spot the problem with it... Well, you may have guessed it or not, but the problem is one of integration. By integration I mean that each of the elements is essentially a separate system, and as a result it is almost impossible to do more than one thing at a time. The memory cannot write and read at the same time, and the sensors cannot capture information and save it to memory if the memory is being read to decide whether the sensor has something valuable to say. The traditional solution is to simply add more CPU power (the chips that process data) in the hope that they can do things faster, or even possibly at the same time. This approach to real-time data processing follows multiple logical paths to derive answers to questions, evaluate the state of the game and offer up alternatives if required. Whilst at small to medium scale it can be very useful, especially for specific tasks, there is a limit to this approach and it quickly gets out of hand.
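To make the serialisation point concrete, here is a toy sketch (my own illustration, not any real AI framework): three stages that must run one after the other, so every cycle costs the sum of all stage times rather than just the slowest one.

```python
# Toy serial pipeline: sense -> store -> infer, one stage at a time.
# Illustrative only; the stage timings are made up.
import time

def capture_sensor() -> bytes:
    time.sleep(0.01)                  # pretend the camera needs 10 ms
    return b"raw frame bytes"

def write_memory(data: bytes, store: list) -> None:
    time.sleep(0.01)                  # the memory write blocks everything else
    store.append(data)

def run_inference(store: list) -> int:
    time.sleep(0.03)                  # rule evaluation dominates the cycle
    return len(store)                 # stand-in for a "decision"

store: list = []
start = time.perf_counter()
for _ in range(10):                   # ten sense -> store -> infer cycles
    frame = capture_sensor()
    write_memory(frame, store)
    run_inference(store)              # memory cannot be written while this reads it
print(f"10 cycles took {time.perf_counter() - start:.2f} s (sum of every stage)")
```

Adding more CPUs only helps if the stages can actually overlap, which is exactly what the separate-systems design makes difficult.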

A simple problem of image recognition

Let us imagine a very simple sounding task that humans are routinely capable of. The task is to recognise a well known celebrity face that has been “mixed up”. The type of puzzle you see in magazines, where the eyes are upside down or the mouth is swapped with the nose.

OK, this problem is really easy for humans to solve. In fact, sometimes we don't even realise the photo has been altered until we are told. But what about a computer? What steps does a traditional computer system have to take to determine, with 99% accuracy, who the person in the image is?

Well for starters, the computer will need an exact or near enough copy of the original photo.

Next it will need to begin cutting the photo into smaller parts (segmenting it into tiles), but the computer does not know how the image was rearranged, so it may have to try many thousands of combinations (almost all of which will be wrong). Eventually it rebuilds the image and begins a comparison. It might, for example, start looking at every other 1mm square in the source image and see if it matches the destination image... You see the problem? As the image size and quality grow, the number of operations grows exponentially. This is why your computer takes longer to process an HD photograph than smaller images. Traditional AI will try to build a set of rules that stops it having to retrace its own steps, but these rules either need to be set precisely by a human or, in the case of classical neural networks, created automatically with various weights ascribed to denote their functional accuracy on a particular data set or problem.
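To get a feel for how quickly the brute-force approach gets out of hand, here is a rough back-of-envelope sketch (my own illustration, not from the ZLQNN work): if the image is cut into an n x n grid of tiles and we do not know how the tiles were rearranged, the number of candidate orderings is (n*n)! before a single pixel comparison has even started.

```python
# Count the possible rearrangements of an n x n grid of image tiles.
import math

for n in (2, 3, 4, 8):
    tiles = n * n
    arrangements = math.factorial(tiles)
    print(f"{n}x{n} grid -> {tiles} tiles -> {arrangements:,} possible arrangements")

# Approximate output:
#   2x2 grid ->  4 tiles -> 24 possible arrangements
#   3x3 grid ->  9 tiles -> 362,880 possible arrangements
#   4x4 grid -> 16 tiles -> about 2.1e13 possible arrangements
#   8x8 grid -> 64 tiles -> about 1.3e89, far more than there are atoms in the observable universe
```

Rules and heuristics exist precisely to avoid walking through that whole space, which is where the human-authored rules or learned weights mentioned above come in.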

A quantum probabilistic solution

The first thing you need to do now is try to forget everything you think you know about neural networks, because quantum probabilistic neural networks are radically different. There are several documented approaches to quantum neural networks; however, these generally try to replicate classical algorithms by the use of quantum logic and are not applicable here.

The approach I am going to introduce is a probabilistic memory architecture called the Zero Logic Quantum Probabilistic Neural Network (ZLQNN). ZLQNN is a quantum neural network memory system I am working on at the moment, and it has been an operational research project since at least 2014. Whilst a complete rundown of the software's capabilities is outside the scope of this document, it is hoped that I can demonstrate the potential advantages and, in some cases, quite exceptional characteristics of the technology. Having discussed the problems and limitations of classical systems, I will now discuss the ontology of a ZLQNN system.

ZLQNN is more strictly a memory model than an AI network in the classical sense. The main component of a ZLQNN memory is the Quantum Memory Engram (QME). The QME can be thought of, in a basic sense, as a 'file'. The following steps indicate the general flow of information through the system in a generic sense. I say generic because, unlike classical systems, ZLQNN does not care what the data is, what it represents or how it was captured in the first place. It is the job of the ZLQNN sensory cortex to convert any sensory input data into binary stream format.

This example discusses using ZLQNN to solve the image recognition problem described above.

First, data is captured by a suitable sensor. This might be a camera, a microphone, a file format or any other way of taking any information and converting it into stream format. This data is presented to the ZLQNN sensory cortex, which immediately compresses it using the Toridion quantum compression algorithm. Toridion creates a single memory engram of the stream. Everything that arrives on the sensory cortex is compressed and stored, no matter what it is. If there is data on the cycle line, it is compressed and stored in the engram. The engram can be recursively updated and pruned to accommodate more data and/or increase compression, but for simplicity's sake, for now just imagine that everything the system “sees” is compressed.
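The sketch below shows that flow in miniature. It assumes a lot: the Toridion quantum compression algorithm is not public, so ordinary zlib compression stands in for it, and the SensoryCortex and Engram names are illustrative placeholders of mine rather than the real ZLQNN interfaces.

```python
# Hypothetical sketch of sensor -> binary stream -> compressed engram.
# zlib is only a stand-in for the (unpublished) Toridion compression step.
import zlib
from dataclasses import dataclass

@dataclass
class Engram:
    """Placeholder for a Quantum Memory Engram: one compressed record of everything seen."""
    blob: bytes = b""

    def absorb(self, stream: bytes) -> None:
        # Recursively update: decompress what is held, append, recompress.
        held = zlib.decompress(self.blob) if self.blob else b""
        self.blob = zlib.compress(held + stream)

class SensoryCortex:
    """Converts any input into a binary stream and compresses it into the engram."""
    def __init__(self) -> None:
        self.engram = Engram()

    def present(self, data) -> None:
        stream = data if isinstance(data, bytes) else str(data).encode()
        self.engram.absorb(stream)        # everything on the cycle line gets stored

cortex = SensoryCortex()
cortex.present(b"\xff\xd8 fake camera frame bytes")   # image sensor
cortex.present("microphone sample 0.13 0.17 0.09")    # any other streamable input
print(f"engram now holds {len(cortex.engram.blob)} compressed bytes")
```

The important point is not the compression library but the shape of the flow: whatever arrives is turned into a stream and folded into a single, continually updated memory structure.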

Probabilistic Content Addressable Memory (PCAM)

Traditional CAM is a memory scheme that is addressed by having sufficient or partial knowledge of its content, rather than by knowing where the content is stored. In a classical RAM-addressable memory model, data is stored in a container that lives at an address. So, 1234 may be stored at address A21, and asking the computer to find 1234 would mean going through every address in the system. A CAM search for 1234, by contrast, would return the address of every container that holds 1234. In another incarnation, called Associative Memory, the data returned would be the search itself, which on the face of it seems a little unremarkable but is used to great effect in networking and routing hardware (Wikipedia: https://en.wikipedia.org/wiki/Content-addressable_memory#Example_applications).
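A small sketch of that difference, using plain Python dictionaries as stand-ins (this illustrates the two lookup models, not how CAM hardware is actually built):

```python
# RAM-style storage: values live at addresses, so finding a value means scanning.
ram = {"A21": 1234, "A22": 5678, "A23": 1234}
found_by_scan = [addr for addr, value in ram.items() if value == 1234]

# CAM-style storage: index by content, so the address comes straight back.
cam: dict = {}
for addr, value in ram.items():
    cam.setdefault(value, []).append(addr)

print(found_by_scan)   # ['A21', 'A23'] -- after checking every address
print(cam[1234])       # ['A21', 'A23'] -- returned directly from the content
```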

PCAM is an expansion of the CAM/AM model. CAM requires that data is stored in a strict descending order and only the first result is returned, which is fine if the network's limits and rules are well defined. PCAM offers a solution to this problem by allowing probabilistic results to be returned within well-computable boundaries. This supports the development of mental-construct models from relatively sparse information: a search returns an array of results that, at worst, is an associative array of possibilities and, at best, the exact answer desired.
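As a hedged sketch of the idea (the similarity measure here is ordinary string matching from Python's difflib, purely for illustration, and not the Toridion/ZLQNN mechanism), a partial or noisy query returns ranked candidates rather than demanding an exact hit:

```python
# Probabilistic content lookup: partial query in, ranked possibilities out.
from difflib import SequenceMatcher

memory = {
    "E1": "celebrity face, eyes, nose, mouth",
    "E2": "landscape, mountains, lake",
    "E3": "celebrity face, sunglasses, smile",
}

def pcam_query(partial: str, threshold: float = 0.3):
    scored = []
    for addr, content in memory.items():
        score = SequenceMatcher(None, partial, content).ratio()
        if score >= threshold:                 # the "well computable boundary"
            scored.append((addr, content, round(score, 2)))
    return sorted(scored, key=lambda item: item[2], reverse=True)

# A sparse query still yields an array of possibilities, best match first.
print(pcam_query("face with the mouth and nose swapped"))
```

At worst the caller gets an associative array of candidates to reason over; at best the top entry is exactly the engram it was looking for.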

PCAM as implemented in Toridion offers a hybrid of a tuple-space-based memory model with the addition of non-deterministic probabilistic memory structures (engrams) that serve as resonance-model anchor points. The engram is a mathematically bounded state representation that can be the base point for an almost indeterminable number of unique information patterns.

This post is incomplete - please come back soon to read the full article.

Thanks

Scot
