The problem of meaning in AI: Still with us after all these years

I was invited to give a talk at the “Programs, minds and machines” workshop, which will be hosted jointly by the Mathematics and the Philosophy Research Institutes of UNAM, August 6-9, 2018.

The problem of meaning in AI: Still with us after all these years

Tom Froese

In recent years there has been a lot of excitement about the possibility of advanced artificial intelligence that could rival the human mind. I cast doubt on this prospect by reviewing past revolutions in cognitive robotics, specifically the shift toward situated robotics in the 1990s and the shift toward a dynamical approach in the 2000s. I argue that, despite claims to the contrary, these revolutions did not manage to overcome the fundamental problem of meaning that was first identified in the context of the various theoretical and practical problems faced by Good Old-Fashioned AI. Even after billions of dollars of investment, today’s computers simply do not understand anything. I argue for a paradigm shift in the field: the aim should not be to replicate the human mind in autonomous systems, but to help it realize its full potential via interfaces.


2 Comments

  1. Christophe said,

    August 9, 2018 at 4:35 pm

    (resent)
    Hi Tom,
    Indeed, computers do not understand things the way we humans do.
    But computers can be regarded as generating meanings derived from what they are designed to do.
    A robot programmed to avoid obstacles can be regarded as generating a meaning when it perceives an obstacle.
    Meaning generation is a useful tool for differentiating DP in artificial agents, animals and humans.
    You may remember a short 2013 paper in the APA Newsletter that addresses these points via the Turing Test (https://philpapers.org/rec/MENTTC-2).
    These subjects highlight (I feel) that we need a better understanding of the nature of life to go further.
    Best
    Christophe

  2. Tom Froese said,

    August 10, 2018 at 9:30 am

    Dear Christophe, many thanks for getting in touch. I don’t agree that an obstacle-avoiding robot perceives its environment meaningfully. If you say that, then it becomes hard to draw a line, and meaning generation suddenly appears to be a universal property of pretty much any system. I agree with you that we need to understand the nature of life to go further! I highly recommend that you have a look at Froese and Ziemke (2009) for some arguments to back up your feeling. Best, Tom
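
    The kind of robot at issue in this exchange can be made concrete with a minimal Braitenberg-style controller sketch (all names and parameters below are hypothetical, not from any particular robot discussed here). It shows why the question is contested: the robot reliably steers away from obstacles, yet its controller is nothing but fixed stimulus–response arithmetic, with no internal state that could plausibly count as understanding an obstacle.

    ```python
    # Hypothetical two-wheeled robot with two distance sensors.
    # Crossed inhibitory connections: the left sensor slows the right
    # wheel and vice versa, so the robot turns away from whichever
    # side the obstacle is on. Nothing here represents "obstacle".

    def avoid_step(left_dist, right_dist, base_speed=1.0, gain=0.5):
        """Return (left_wheel, right_wheel) speeds from two distance readings.

        Distances are in arbitrary units (larger = farther away).
        """
        # Proximity = inverse distance, clipped to avoid division by zero.
        left_prox = 1.0 / max(left_dist, 0.1)
        right_prox = 1.0 / max(right_dist, 0.1)
        # Crossed connections: nearer obstacle on the left slows the
        # right wheel, making the left wheel relatively faster, so the
        # robot turns right, away from the obstacle.
        left_wheel = base_speed - gain * right_prox
        right_wheel = base_speed - gain * left_prox
        return left_wheel, right_wheel

    # Obstacle close on the left: the left wheel ends up faster than the
    # right wheel, so the robot veers right, away from the obstacle.
    print(avoid_step(left_dist=0.2, right_dist=10.0))
    ```

    Whether such a feedback loop "generates meaning" or merely looks meaningful to an observer is precisely the disagreement between the two comments above.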

