Free will, sense of self and AI

A couple of ideas came to me this morning around the development of human-like AI.

The first thing that came to mind is around neuroscience. Has there been a study into exactly how many states an individual neuron can achieve? I recall from my first degree that neurotransmitters are released into synaptic clefts, and that action potentials deal with transmission within a neuron. This leads me to the questions:

  1. What is the maximum number of connections a single neuron can have to other neurons?
  2. Can an action potential affect the behaviour of a neuron based on its strength, or does it work more like a PC, with on/off (0/1) values?
  3. What is the maximum number of neurotransmitters that a neuron can use and receive?

The reason for these questions is that the answers may be pertinent to the development of human-like AI via neural networks. They might also serve as a measure of the success of an AI developed using techniques like machine learning or genetic programming (though I need to understand more about all of these to know for sure).
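As an aside on question 2: artificial neural networks have used both styles of unit. A minimal sketch (a toy example of my own, not tied to any particular library) contrasting a graded sigmoid neuron with an all-or-nothing threshold neuron:

```python
import math

def graded_neuron(inputs, weights, bias):
    """A standard artificial neuron: weighted sum of inputs passed
    through a sigmoid, giving a continuous output between 0 and 1
    (signal strength matters)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def threshold_neuron(inputs, weights, bias):
    """A McCulloch-Pitts-style unit: output is strictly 0 or 1,
    closer to the 'PC-like' on/off picture."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total >= 0 else 0

# Same inputs and weights, two output styles
inputs = [0.9, 0.2, 0.4]
weights = [0.5, -0.3, 0.8]
print(graded_neuron(inputs, weights, bias=0.0))     # a value between 0 and 1
print(threshold_neuron(inputs, weights, bias=0.0))  # exactly 0 or 1
```

Biological neurons seem to sit somewhere between the two: individual spikes are all-or-nothing, but firing *rates* carry graded information.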

The second idea that occurred to me this morning was around how we conceptualise a human-like AI. Plenty of philosophers and neuroscientists seem to be coming to the conclusion that free will is essentially nothing more than a convenient comfort blanket that helps societies function. What if the same is true of our sense of self?

Perhaps our understanding that we exist is nothing more than a convenient illusion? Perhaps coming to the conclusion that we exist was nothing more than evolutionary happenstance that conferred an advantage at a specific time? Perhaps sense of self is more of a tradition that we pass on through the generations by rote, rather than something that exists as a result of a human capacity for higher level reasoning? If so, might the final stage of creating a human-like AI be simply telling it that it exists, or coding in this belief (like Asimov’s three laws)?

If so, we might actually be much closer to the existence of a human-like AI than we have heretofore realised. This line of thought led me to wonder whether, with the exception of a concept of its own existence, we might consider the internet itself to be our first successful AI. After all, it takes external inputs, forwards them to the correct part of its body, generates a response, then sends that response to the appropriate part of its body. At a very basic level, that sounds quite human.
