Much of what’s being sold as "AI" today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this is happening, how we can recognize flawed AI claims, and how to push back. Here are my annotated slides: cs.princeton.edu
Key point #1: AI is an umbrella term for a set of loosely related technologies. *Some* of those technologies have made genuine, remarkable, and widely-publicized progress recently. But companies exploit public confusion by slapping the “AI” label on whatever they’re selling.
Key point #2: Many dubious applications of AI involve predicting social outcomes: who will succeed at a job, which kids will drop out, etc. We can’t predict the future — that should be common sense. But we seem to have decided to suspend common sense when “AI” is involved.
There’s evidence from many domains, including prediction of criminal risk, that machine learning using hundreds of features is only slightly more accurate than random, and no more accurate than simple linear regression with three or four features—basically a manual scoring rule.
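To make the "manual scoring rule" concrete: the kind of rule described above is just a count of a few risk factors compared against a threshold. Below is a minimal sketch on synthetic data; the feature names, weights, and noise level are all made up for illustration, not drawn from any real risk-assessment instrument.

```python
import random

random.seed(0)

FEATURES = ("prior_arrests", "age_under_25", "unemployed")  # hypothetical risk factors

def score(person):
    # A manual scoring rule: count how many risk factors are present (equal weights).
    return sum(person[f] for f in FEATURES)

def predict(person, threshold=2):
    # Flag "high risk" when at least `threshold` factors are present.
    return score(person) >= threshold

def make_person():
    # Synthetic individual: outcome is loosely tied to the risk factors,
    # but 35% of labels are flipped -- most of the outcome is noise,
    # mirroring how unpredictable social outcomes are.
    p = {f: random.random() < 0.4 for f in FEATURES}
    noise = random.random() < 0.35
    p["outcome"] = (score(p) >= 2) ^ noise
    return p

people = [make_person() for _ in range(10_000)]
acc = sum(predict(p) == p["outcome"] for p in people) / len(people)
print(f"3-feature scoring rule accuracy: {acc:.2f}")  # modestly above chance (~0.65)
```

The point of the sketch: when the signal is this weak, a three-feature checklist already extracts most of it, which is why adding hundreds of features in a machine-learning model buys little additional accuracy.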
The best part of the event was the panel discussion with @histoftech, @STurkle, @edenmedina, and the audience. Thanks to @crystaljjlee for the excellent summary!