AI Research 2
What keeps bottom-up reasoning from collapsing into hallucination? In science, which deals in large numbers, the safeguard is falsifying H0, the null hypothesis of the experiment, which says that the result does not differ statistically from what chance, the random Gaussian bell curve, would produce. The alternative is the individual case: balancing the pros and cons and then reaching a logical, common-sense decision. In the real world, both universal law and the individual case should converge upon the truth of the matter. But bottom-up AI currently has no null hypothesis or method to test a result against, and probably needs one, unless its database is curated beforehand for restricted tasks.
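To make the statistical contrast concrete, here is a minimal sketch of falsifying H0, written in Python with scipy; the measurements and the chance baseline are made-up illustrative numbers, not real data.

# Minimal sketch of null-hypothesis testing: do the observed measurements
# differ from the chance baseline, or only by Gaussian noise?
from scipy import stats

observed = [5.1, 5.4, 4.9, 5.6, 5.3, 5.2, 5.5, 5.0]  # hypothetical measurements
baseline_mean = 5.0                                   # value expected under H0 (pure chance)

# One-sample t-test: H0 says the observed mean equals the baseline mean.
t_stat, p_value = stats.ttest_1samp(observed, popmean=baseline_mean)

if p_value < 0.05:
    print(f"Reject H0 (p = {p_value:.3f}): the effect is unlikely to be chance.")
else:
    print(f"Fail to reject H0 (p = {p_value:.3f}): the result may just be noise.")

The point of the contrast is that a generative model has no comparable test by which it can reject its own output as noise.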
The above is the last step in a generative AI model, which is essentially determining the final parameter weights that assess “the truth of the matter, given all the evidence.” On the simpler matter of determining “generally held facts,” Google DeepMind has successfully fact-checked ChatGPT answers using internet searches. In business, “generally held facts” must further become “facts about my operation.” What this says is that, to be practical, a lot of cutting and fitting of generative AI to the specific case is required.
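The shape of that search-based fact-checking step can be pictured as a short loop. The sketch below is only illustrative, not DeepMind's implementation; the toy search and support functions stand in for a real search API and a real entailment model.

# Illustrative sketch of checking a model's claims against search results.
# Not DeepMind's method: the toy "search" and "support" functions below
# stand in for a real search API and a real entailment model.
from typing import Dict, List

TOY_CORPUS = [
    "Water boils at 100 degrees Celsius at sea level.",
    "The Mississippi River flows into the Gulf of Mexico.",
]

def search_web(claim: str) -> List[str]:
    # Toy stand-in for an internet search: return every corpus snippet.
    return TOY_CORPUS

def supports(snippet: str, claim: str) -> bool:
    # Toy stand-in for an entailment check: crude word-overlap heuristic.
    claim_words = set(claim.lower().split())
    return len(claim_words & set(snippet.lower().split())) >= len(claim_words) // 2

def fact_check(claims: List[str]) -> Dict[str, bool]:
    # Label each claim from a model's answer as supported or not.
    return {c: any(supports(s, c) for s in search_web(c)) for c in claims}

print(fact_check(["Water boils at 100 degrees Celsius at sea level."]))

Adapting the same loop to “facts about my operation” would mean swapping the public search for a search over a company's own records, which is exactly the cutting and fitting referred to above.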
The fundamental problem with generative AI, and the reason it will likely always require human agency, is that it cannot engage in top-down reasoning at all, and therefore cannot make the practical tradeoffs that such reasoning allows. The apparent top-down reasoning it does engage in, say in “proving” math theorems, is merely a simulation. About top-down reasoning:
1) Top-down reasoning can be very efficient. Consider the millions of steps that bottom-up generative AI must take to reach a decision, as opposed to getting the principle right in the first place. It was a goal of Western thought to seek certain knowledge. That quest, which has since become open-ended, now includes the major factors in the context of a situation.
2) Top-down decision making makes possible practical (to a point) tradeoffs in a particular application, for instance between cost and functionality. It has become a pastime to stymie generative AI with trick questions. We got it to demur (choke) when we asked it about two specific engineering applications. It could not provide the circuit specifics for designing a 100-watt hi-fi amplifier, and it could not provide the specifics for designing a bridge (although it could provide alternate bridge designs). Finally, the chatbot pleaded, and we think accurately so, “Ultimately the best way to use me is to leverage my strengths. If you need help with broad ideas and information gathering, I’m a great resource. But for tasks requiring specialized knowledge (beginning with physics 101), complex calculations, or ensuring safety and functionality, it’s important to involve a human expert.”
A PBS television show captured the essence of generative AI. In the 19th century, the steam engine materially transformed the U.S. continent, linking coast to coast. Generative AI has been called “the steam engine of the mind.” New ideas will prevail because they are useful and not hallucinations.