This thread is a pretty effective and brutal evisceration of ChatGPT. They haven't built an answer-bot, they've built a bullshitter bot that is very good at giving wrong answers convincingly by mimicking the way right answers are structured.
The thing is, writing BS is not something for which humans require machine assistance. But more broadly I think this speaks to the struggles with the current direction of machine learning and 'AI' - we're training algorithms without quite knowing what we're training them for.
There's also a secondary unresolved rights issue around the training materials. An algorithm trained on, say, a particular scholar's writing is plagiarizing them, no? Same for AI-art trained on copies of an artist's work?
Do the makers of these AIs ask permission first?
I'm certainly thinking that if someone trained an ACOUP-Bot on all of my writings without my permission and put it on the internet, I'd be pretty upset and I'd have serious questions about the legality of it.
Whole lotta 'can we' and not a lot of 'should we' going on here.