Thursday, December 05, 2013

More Thoughts On AI

This post is motivated by a post on the TheorySerum.com blog: http://www.theoryserum.com/responsibility-creating-consciousness/

Many of the points made in the post above have been explored in stories written around the assumption that AI has been successfully invented.  These are very powerful ideas because they reflect right back at us.  I think it is very productive to try to create as much objectivity around these ideas as we can.

So, it is my opinion that consciousness is an emergent attribute of the mind of any motile organism.  Further, this attribute, like most such things, lies on a continuous scale.

To this end it is helpful to think of the mind as an engine that takes sensory inputs, along with the organism's memory, as its inputs.  The engine processes these inputs to build a model of the environment.  Next the engine runs the model to make short-term (and longer-term) predictions about how the actual environment will change and affect the organism.  Finally the engine sends out commands to the organism that maximize its chances of successfully negotiating that environment.
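The sense → model → predict → act loop described above can be sketched in a few lines of code.  This is a purely illustrative toy, not a claim about how real minds work: all of the names, the running-average "model", and the approach/avoid "command" are assumptions made up for the sketch.

```python
# Minimal sketch of the mind-as-engine loop: sensory input plus memory
# feed a model; the model yields a prediction; the prediction drives a
# command back to the organism.  Everything here is a toy assumption.

class Mind:
    def __init__(self):
        self.memory = []  # the organism's accumulated experience

    def build_model(self, sensory_input):
        # Combine the current input with memory into a (toy) model:
        # here simply the running average of everything observed so far.
        self.memory.append(sensory_input)
        return sum(self.memory) / len(self.memory)

    def predict(self, model):
        # Run the model forward to anticipate the near future.
        # Toy prediction: the environment stays near the modeled value.
        return model

    def act(self, prediction, threshold=0.5):
        # Issue a command intended to maximize the chance of success,
        # here reduced to a single binary approach/avoid decision.
        return "approach" if prediction > threshold else "avoid"

mind = Mind()
for observation in [0.2, 0.9, 0.8]:
    model = mind.build_model(observation)
    command = mind.act(mind.predict(model))
```

The point of the sketch is only the shape of the loop: nothing in it looks "conscious", yet every piece of the engine described above is present in rudimentary form.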

This model-building and execution capacity of the mind is not restricted to just sending commands; it also has recursive attributes that cause the model itself to be modified.  A good way to think about this ability to model the models is to say that it gives rise to another emergent attribute: what we call consciousness.  Since all of this lies on a continuous scale, some organisms are more conscious than others.
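The recursive "modeling the models" idea can be hinted at with a second-level process that watches the first-level model's errors and adjusts how that model updates itself.  Again this is a hypothetical sketch under made-up assumptions (the estimate, the learning rate, and the 0.2 error threshold are all illustrative), not an account of real cognition.

```python
# Illustrative sketch of a model that also models itself: a first-level
# estimate tracks the environment, while a second-level (recursive) rule
# observes the estimate's prediction errors and tunes its learning rate.

class SelfModelingMind:
    def __init__(self):
        self.estimate = 0.0        # first-level model of the environment
        self.learning_rate = 0.5   # how strongly new input updates the model

    def step(self, observation):
        error = observation - self.estimate
        # First-level update: the model of the world moves toward the input.
        self.estimate += self.learning_rate * error
        # Second-level update: the model of the model.  Persistent large
        # errors make the mind more responsive; small errors let it settle.
        if abs(error) > 0.2:
            self.learning_rate = min(1.0, self.learning_rate * 1.1)
        else:
            self.learning_rate = max(0.1, self.learning_rate * 0.9)
        return self.estimate

mind = SelfModelingMind()
for x in [1.0, 1.0, 1.0]:
    estimate = mind.step(x)
```

The recursion here is shallow (one meta-level), but it shows the structural move the paragraph describes: the same machinery that models the environment is turned back on the model itself.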

It is important to realize that non-human organisms are not somehow lower than or inferior to humans.  Rather, the high degree of recursive modeling ability that gives rise to consciousness is not necessary for survival; it is just a side effect of our 'big brains'.

What this means for AI is that we cannot expect to invent it fully formed.  Instead it will be one of those things that starts out not looking at all like it is conscious, but over time gains more and more of the attributes of what we call consciousness.


For these reasons I do not think the points made in the post above are very worrisome, because they will be automatically resolved as the artificial organisms that exhibit AI become more and more complete (or conscious).  I think the decisions discussed above will not present themselves as deliberate choices but instead will be built-in byproducts (side effects) of how the artificial organism is constructed.