A smaller and trimmer AI from better quality data.
In a recent article, MIT Technology Review commented on an AI model called Molmo that was trained on a much smaller amount of data, selected for high quality rather than unselectively hoovered up from everywhere.
About the only thing I've had a noodle
with is Perplexity AI, which at least gives attributions so you can
have a dig to build confidence that answers don't involve the
technological equivalent of magic mushrooms!
As such, I'm not qualified to hold an
opinion about AI, but I do think that one marker of
intelligence of any stripe is the ability to take on board data
and make intelligent inferences to synthesize novel information
when there are conflicting ideas, while holding those conflicting
ideas in healthy opposition.
F. Scott Fitzgerald said something
similar, and much more elegantly, in 1936: "The test of
a first-rate intelligence is the ability to hold two opposed ideas
in the mind at the same time, and still retain the ability to
function."
It does rather feel like AI is a
summary engine, not a synthesis device: you get the canon of
consensus, not the new and novel.
And Garbage In, Garbage Out.
Alternatively: you are what you eat :-).