Artificial Intelligence is stupid and causal reasoning won’t fix it

[Submitted on 20 Jul 2020]

Abstract: Artificial Neural Networks have reached Grandmaster and even superhuman
performance in a variety of games, from those with perfect information
(such as Go) to those with imperfect information (such as Starcraft). Such
technological developments from AI labs have ushered in concomitant applications
across the world of business, where an 'AI' brand tag is fast becoming
ubiquitous. A corollary of such widespread commercial deployment is that when
AI gets things wrong (an autonomous vehicle crashes, a chatbot exhibits racist
behaviour, an automated credit-scoring process discriminates on gender, etc.)
there are often significant financial, legal and brand consequences, and the
incident becomes major news. As Judea Pearl sees it, the underlying reason for
such mistakes is that 'all the impressive achievements of deep learning amount
to just curve fitting'. The key, Pearl suggests, is to replace reasoning
by association with causal reasoning: the ability to infer causes from
observed phenomena. The point was echoed by Gary Marcus and Ernest
Davis in a recent piece for the New York Times: 'we need to stop building
computer systems that merely get better and better at detecting statistical
patterns in data sets – often using an approach known as Deep Learning – and
start building computer systems that from the moment of their assembly innately
grasp three basic concepts: time, space and causality'. In this paper,
foregrounding what Gilbert Ryle in 1949 termed a 'category mistake', I will offer
an alternative explanation for AI errors: it is not so much that AI machinery
cannot grasp causality, but that AI machinery, qua computation, cannot
understand anything at all.

Submission history

From: Prof. John Bishop


[v1] Mon, 20 Jul 2020 22:23:50 UTC (5,152 KB)
