Research
A professor at the Santa Fe Institute said, “If you give a monkey launching power over nuclear weapons, the monkey is an existential threat.” AI, although it may appear intelligent, is not, because it lacks consciousness. What a C. elegans worm (with 302 mapped neurons), a monkey, and a person (with approximately 86 billion neurons) have is an evolved biological brain. By comparison, one researcher has likened AI to a spreadsheet rather than the real thing.
The core function of AI is prediction: predicting the result of some algorithmic operation, or predicting the next word in a sentence, across varied contexts. There are both theoretical and practical limits to this.
Theoretical Limits
The top-down theoretical limit is the central limit theorem. The statistical model that the researcher uses (whether Gaussian, Poisson, or log-normal) must have a finite mean and variance; by contrast, many distributions in finance (at the bleeding edge of change) have very large or statistically indeterminate moments (Black Swan single events). According to the philosopher Isaiah Berlin, when a certain British prime minister decided whether to go to war, he “…looked at the sky,” as one does when deciding whether to take an umbrella.
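The finite-variance requirement can be illustrated with a small simulation (a sketch, not from the article; the distributions and sample sizes are my choices). Sample means from a finite-variance distribution settle down as the central limit theorem predicts; sample means from a Cauchy distribution, which has no finite mean or variance, never do.

```python
# Sketch: the central limit theorem needs finite mean and variance.
# Exponential(1) has both; the Cauchy distribution has neither.
import math
import random
import statistics

random.seed(42)

def sample_mean(draw, n):
    """Mean of n independent draws from the given sampler."""
    return statistics.fmean(draw() for _ in range(n))

# Finite-variance case: exponential with rate 1, true mean = 1.
exp_means = [sample_mean(lambda: random.expovariate(1.0), 10_000)
             for _ in range(20)]
exp_spread = max(exp_means) - min(exp_means)
print("exponential, spread of 20 sample means:", exp_spread)

# Fat-tailed case: a standard Cauchy draw is the ratio of two normals.
def cauchy():
    return random.gauss(0, 1) / random.gauss(0, 1)

cauchy_means = [sample_mean(cauchy, 10_000) for _ in range(20)]
cauchy_spread = max(cauchy_means) - min(cauchy_means)
print("Cauchy, spread of 20 sample means:", cauchy_spread)
```

The exponential sample means cluster tightly around 1; the Cauchy sample means remain as dispersed as a single Cauchy draw no matter how large the sample, which is the “statistically indeterminate” behavior described above.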
The bottom-up limit is survey design, biased or not. The question of bias enters into many discussions of AI, and it is useful to ask what bias is. A sample survey should be “objective,” reflecting the population reality as closely as possible. An accurate sample reflects the population, and it should be properly randomized. If, for instance, you are surveying some controlled factory operation to derive an equation of its behavior, you can use the Gaussian statistical model to derive the equation’s parameters and a confidence interval with which to assess the entire model.
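As a sketch of that workflow (the measurements below are hypothetical), a Gaussian model fit to a properly randomized sample yields a parameter estimate and a 95% confidence interval:

```python
# Sketch with hypothetical data: estimate a process mean and a 95% CI
# under the Gaussian model described in the text.
import math
import statistics

# Hypothetical randomized sample from a controlled factory operation.
sample = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.0]

n = len(sample)
mean = statistics.fmean(sample)
sd = statistics.stdev(sample)           # sample standard deviation
half_width = 1.96 * sd / math.sqrt(n)   # normal approximation, 95% level

print(f"estimated mean: {mean:.2f}")
print(f"95% CI: ({mean - half_width:.2f}, {mean + half_width:.2f})")
```

The interval quantifies how much the estimate could move under resampling; if the process drifts, as the next paragraph warns, the interval is only as good as the model’s continued fit.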
But the problem is that the economy changes: a company has to be responsive, and the operational data has to be good (see the article). The assumed statistical model has to be well designed and run often.
These are
the two main theoretical problems with AI. The Perplexity chatbot says, “…these
limitations also provide opportunities for advancement and refinement.” We’ll
see.
Practical Implementation
In a practical sense, both limitations have to relate effectively to the business question, “What is the decision?”, when designing the survey instrument. Silicon Valley is pouring billions and billions of dollars into AI technology and infrastructure. The companies doing so can certainly afford to keep doing this indefinitely, and Nvidia’s business will be good for a while, but at some point people will ask, “Will this be profitable?”
A concluding example shows that generative AI is now capable of great nuance and sophistication. But note, and this is crucial: what it is very capable of is answering general questions culled from the literature. A lot of very precise and difficult work is required to profitably adapt general AI to specific corporate situations that involve strategies, money, and profits. The facile Wall Street assumption that AI will immediately increase economic growth, in an economy rendered non-cyclical by AI, is unlikely to come true.
The September-October 2024 Harvard Business Review article “Where Data-Driven Decision-Making Can Go Wrong” lists five things that can go wrong:
1) Conflating correlation and causation. Just because two effects occur together, it doesn’t mean that one causes the other. In economics, causation comes from theory; in business, it often comes from common sense.
2) Misjudging the potential magnitude of effects. Relevant here are the survey sample size and the precision: the percentage of true positive effects among all measured positives, true and false. Precision (there are also other such statistics) measures how good the model is.
3) A disconnect between what is measured and what matters. Does the survey get at the core question, and at the intended and unintended consequences?
4) Misjudging generalizability. How similar is the setting of this study to our business context? How important is the specific context?
5) Overweighting a specific result. Are there other analyses that validate the results and the approach?
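Precision, the statistic named in point 2, can be computed directly (the counts below are hypothetical):

```python
# Sketch: precision = true positives / all measured positives (true + false).
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of flagged positives that are genuine."""
    return true_positives / (true_positives + false_positives)

# Hypothetical survey result: 80 genuine effects flagged, 20 spurious ones.
p = precision(80, 20)
print(f"precision: {p:.2f}")  # 80 / (80 + 20)
```

A precision of 0.80 here would mean one in five flagged effects is spurious, which is exactly the kind of magnitude question point 2 raises.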
The proper
design of AI, say to optimize a certain operation, requires insight and nuance.
In the end, as we have noted:
“Only humans have the insights that can be validated or
optimized by one of the (sic) thousands of AI models and their architectures.
Only humans can form the organizations that implement these models…In other
words, YOU - with your goals - are the person that can improve, reflect,
collaborate, or innovate.”
The Perplexity chatbot, reflecting current AI thinking,
came to the same summary conclusion.
__
“It is
easy enough to arrange the past in a symmetrical way…A true science, though,
must be able not merely to rearrange the past but to predict the future. To
classify facts, to order them in…patterns (however complex), is not quite yet a
science.”
Isaiah Berlin
The Sense of Reality (1996)
Science requires patterns, accurate prediction, and also understanding.