Defining A.I. beyond a narrow cognitive computing view

I read with interest this month’s Scientific American article titled “The Search for a New Test of Artificial Intelligence”, which explores the limitations of Alan Turing’s famous “imitation game” test for intelligence (1). The same question occupied the spring 2016 issue of AI Magazine from the Association for the Advancement of Artificial Intelligence (AAAI), which was devoted entirely to what lies “beyond the Turing Test” (2).

[Image: Alan Turing. By Unknown - http://www.turingarchive.org/viewer/?id=521&title=4, Public Domain, https://commons.wikimedia.org/w/index.php?curid=22828488]

Alan Turing, in his famous 1950 paper, was already aware of the limitations of his proposed test, which sought to identify a machine capable of general, all-purpose intelligence indistinguishable from a human (3). The Turing test relies on a human evaluator posing questions in natural language and judging whether the answers come from a person or a machine. But as has been demonstrated numerous times from 2012 onward (4), the ability to “fake intelligence” through structured answers has made it possible to fool the test.
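
The “structured answers” trick is easy to illustrate with a toy, ELIZA-style sketch in Python (purely illustrative; not any actual chatbot or competition entry): keyword-triggered canned replies can sustain a superficially human exchange with no understanding behind it.

```python
# Toy ELIZA-style responder: canned, keyword-triggered deflections can
# keep a dialogue going without any understanding at all.
RULES = {
    "why":  "Why do you think that is?",
    "feel": "Tell me more about that feeling.",
    "you":  "Let's not talk about me. What about you?",
}
FALLBACK = "Interesting. Please go on."

def respond(utterance: str) -> str:
    lower = utterance.lower()
    for keyword, reply in RULES.items():
        if keyword in lower:  # first matching keyword wins
            return reply
    return FALLBACK

print(respond("Why do you say that?"))  # -> "Why do you think that is?"
print(respond("I feel uncertain."))     # -> "Tell me more about that feeling."
```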

General intelligence has many more facets than the natural language processing (NLP) of speech and text and the logic programmed to provide simulated answers. That covers only the basic syntax rules of language, and comes nowhere near the problem of “understanding” its meaning, which requires far more contextual information, interaction and what one might anthropomorphically call inner reflection and insight (a toy sketch after the list below makes this gap concrete). Intelligence can be broadened into a deeper set of dimensions that reflect not specialised, mechanistic tasks but a more real interpretation of generalisation in a physical environment and of human existence:

  • Perception - How sensor data and information processing are contextualised and used; how this constitutes a level of “awareness” in seeing, hearing, touching and “feeling”, for example within the immediate physical or transmitted frame of perception. This leads on to questions of understanding and the areas of sentience we see in debates about consciousness in bacteria, plants, animals and humans.
  • Action - How physical motor skills are enacted to interact with the environment. This includes the agency of actuators and robotic degrees of freedom, but can expand beyond mechanical manipulation to other forms of state change in energy and matter through automation (akin to commentators using numbers to describe eras, such as Industry 4.0 and the 4th industrial revolution).
  • Language - How symbolic representation of information is defined, including its semiotics and syntax; how semantic meaning is established; and how this leads into the pragmatics of using language in an appropriate context.
  • Cognition - How knowledge and experience are acquired through sensors and processes of association and inference, moving into areas of reasoning and understanding grounded in those processes as objective and subjective awareness; this again leads to questions of sentience and ethics, and of their interaction with, and impact on, external environments and biological forms.
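
To make that syntax-versus-meaning gap concrete, here is a minimal Python sketch (a toy illustration, not any particular NLP system): two sentences with opposite meanings are indistinguishable once reduced to a purely syntactic bag-of-words representation.

```python
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    """Reduce a sentence to word counts: pure surface form, no meaning."""
    return Counter(sentence.lower().split())

# Opposite meanings, identical word counts: a representation that stops
# at syntax cannot tell these apart, let alone "understand" either one.
print(bag_of_words("man bites dog") == bag_of_words("dog bites man"))  # True
```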

These dimensions expand beyond the Alan Turing test, revolutionary for its time, into something more realistic and appropriate for the 21st century (see note 1 on dimensions, and see statistical learning theory, computational learning theory, the VC (Vapnik–Chervonenkis) dimension and cardinality (13)(14)).

Way out in the future is the idea of integrating all of these into a holistic system able to process general unstructured information, manipulate free-form objects into art or constructions, and interpret and infer its way to higher forms of being “aware” of context and information. The achievements of IBM Watson defeating human players at the game Jeopardy! in January 2011 (5), Google’s AlphaGo winning against the Go champion in March 2016 (6), and the Infineon robot completing a Rubik’s Cube in 0.637 seconds in November 2016 (7) are examples of what is referred to as specialised intelligence, but they are extremely narrow parts of general intelligence.

Defining any one of these specialised feats as general intelligence, and then jumping to the conclusion that all game playing is a reflection of general intelligence or sentience, is not correct. Being able to use deep neural networks to recognise cancer cells in images is not the same as searching for a cure for cancer. The search is still a separate act, and a level of intelligence beyond the specialist actions of machine routines.

Do or don’t feed the AI? - A public debate is needed

A public debate would be helpful in exploring these issues: the distinctions between intelligence, artificial intelligence and reasoning have become blurred by populism, while the real and present explosion in specialist machine intelligence is the closer concern.

The growth in personal data and connected machines and devices is being fed into algorithms, driving new opportunities for supervised and unsupervised learning in algorithm development. This is greatly affected by the dimensions of the scenario you are trying to frame and “add” intelligence to. Describing this by any one dimension, for example as “Cognitive”, uses a very broad term that covers a range of “intelligent” processes from perception to reasoning, a wide-open field for AI experts.

The term “Machine Reasoning” is particularly contentious among machine learning specialists, who seek and use clear, specific definitions to separate machine routines built for specific tasks on structured training data from those exploring a wider set of structured or unstructured data use cases. The following diagram is illustrative of a complex yet emerging set of specific machine intelligence uses that fit several dimensions of action and information processing. Its axes are very coarse definitions of a living information space that may contain many events, undefined objects and causations.

[Figure: “Beyond Digital Transformation”, Prof Mark Skilton 2017]
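
A minimal sketch of the supervised/unsupervised distinction mentioned above (assuming the scikit-learn library; the data is synthetic and purely illustrative): supervised learning fits labelled examples, while unsupervised learning looks for structure with no labels at all.

```python
# Minimal sketch of supervised vs unsupervised learning,
# assuming scikit-learn; data is synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # 100 points, 2 dimensions (features)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels exist only in the supervised case

# Supervised: learn a mapping from features to the given labels.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels; find structure (here, two clusters) in the data.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
```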

But it should be said that this image is a snapshot of a very fast-moving field of computer science, whose history can be traced back hundreds of years but which until now lacked the computational power, economic infrastructure and devices to drive automation. These insights are pushing beyond a faster, connected, consumer- and society-driven computation into other spheres of biological, social and material influence, as described in the 4th Industrial Revolution by Dr Klaus Schwab of the World Economic Forum (8).

Black Box worlds

A big concern with AI is the “black box” effect: a lack of transparency and auditability, whether from deliberately hiding a system away in a closed box or datacentre, or from neural networks whose workings even the experts do not fully understand.
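
One practical way to probe such a black box from the outside is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades. The sketch below (assuming the scikit-learn library; the model and data are synthetic and illustrative only) shows the idea.

```python
# Minimal sketch: probing an opaque ("black box") model from the outside
# with permutation importance. Assumes scikit-learn; data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)   # only feature 0 actually matters

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```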

Then there are the ethics and cyber security risks of AI. The popularised Terminator film view is way out of sync with reality and gets confused with the real concerns of making sure today’s algorithms are open, fair and governable, while still allowing companies to retain some intellectual property. This was the subject of the recent EU debate on robotics law in January 2017 (9) and of the Royal Society’s reports to the UK Parliament on the transparency, ethical and legal rules needed for AI governance (10).

Forthcoming book: The Fourth Industrial Revolution: An Executive Guide to Intelligent Systems (Palgrave Macmillan, 2017), by Professor Mark Skilton and Dr Felix Hovespian.

Note 1. Dimensions in neural nets refer to the data types used in matrices of computation. More generally in machine learning, the term describes the number of variables used to define a learning problem: the higher the number of dimensions, the more complex the type of computational learning involved. Dimensionalising a machine learning problem, framing the scenario or interaction space, is critical to establishing the level of machine response to an event. The dimensions referred to in this article are the more general types of interaction that define aspects of what is involved in intelligence (11)(12). See statistical learning theory, computational learning theory, the VC (Vapnik–Chervonenkis) dimension and cardinality (13)(14).
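
For the statistically inclined, the VC dimension mentioned above has a precise, standard definition (textbook material, not specific to this article): it measures the capacity of a hypothesis class by the largest set of points it can label in every possible way.

```latex
% Standard definition of the Vapnik–Chervonenkis (VC) dimension.
% A hypothesis class H "shatters" a finite set S when it can realise
% all 2^{|S|} possible labellings of S.
\[
  \mathrm{VCdim}(H) \;=\; \max \bigl\{\, |S| \;:\; H \text{ shatters } S \,\bigr\}
\]
% Example: linear classifiers (half-planes) in the plane can shatter
% some set of 3 points, but no set of 4, so their VC dimension is 3.
```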

References

  1. https://www.scientificamerican.com/article/the-search-for-a-new-test-of-artificial-intelligence/?WT.mc_id=SA_TW_TECH_PW
  2. https://www.questia.com/magazine/1P3-4042600191/beyond-the-turing-test
  3. Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423, http://loebner.net/Prizef/TuringArticle.html
  4. http://ieeexplore.ieee.org/document/6609034/?arnumber=6609034
  5. https://www.ibm.com/midmarket/us/en/article_Smartercomm5_1209.html
  6. https://deepmind.com/research/alphago/
  7. http://www.infineon.com/cms/en/about-infineon/press/press-releases/2016/INFXX201611-014.html
  8. https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
  9. http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf
  10. https://www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/14506.htm
  11. http://machinelearningmastery.com/how-to-define-your-machine-learning-problem/
  12. Definition of effective dimension. Machine Learning Proceedings 1991: Proceedings of the Eighth International. https://books.google.co.uk/books?id=P0ajBQAAQBAJ&pg=PA154&lpg=PA154&dq=defining+dimensions+of+machine+learning+problem&source=bl&ots=Zh5dU1piWK&sig=2ujlr4rcUjuTE_178egfCIEYYpk&hl=en&sa=X&ved=0ahUKEwjh4q_w9L7SAhUrJ8AKHfDzC5YQ6AEIYDAG#v=onepage&q=defining%20dimensions%20of%20machine%20learning%20problem&f=false
  13. https://en.wikipedia.org/wiki/VC_dimension
  14. https://en.wikipedia.org/wiki/Cardinality
