Programs of AI

Artificial intelligence is no longer confined to research labs or futuristic speculation. It is already reshaping industries, workplaces, and everyday life. The emergence of multimodal large language models (LLMs), powered by the transformer architecture, has led to significant improvements in natural language processing, computer vision, and speech recognition. AI systems are increasingly capable of handling routine cognitive tasks, opening up possibilities for more human-centered work, enhanced education, and greater creative potential.

This shift has profound implications for human flourishing. AI is poised to alleviate repetitive labor, expand access to high-quality education, and amplify individual productivity. Ethan Mollick, a professor at the University of Pennsylvania who studies AI’s impact on work and learning, argues that AI can help individuals consistently operate at a higher level by making expertise more widely accessible.

Mollick cites the 2 Sigma Problem, observed by Benjamin Bloom in 1984, in which students tutored one-to-one using mastery learning techniques performed two standard deviations better than students in traditional classrooms. The implication is that AI can raise the baseline performance of all students and workers by acting as a highly patient tutor and co-intelligence.

Yet, with this promise comes risk. People betting on AI today do so based on the scaling hypothesis, the assumption that simply making neural networks that power AI tools larger will eventually produce general intelligence. This belief is associated with the connectionist paradigm of AI, which conceives of intelligence as merely pattern recognition driven by the interactions of many simple processing units.

This excitement has fueled an explosion in deep learning research, an approach that uses vast amounts of data and the correlations within it to capture these patterns. Still, it remains an open question whether scale alone is enough. Without deeper structural improvements, today's AI models may hit fundamental limits, leading to stagnation and disillusionment, much as the previous paradigm, formalist AI (dominant from the mid-1950s to the 1980s), did before it.

Meanwhile, the ethical challenges of AI—bias, misinformation, deepfakes, and failures in medical or military decision-making—remain unresolved. If AI is to benefit humanity truly, it must go beyond statistical pattern matching and incorporate reasoning, self-correction, and human-aligned values.

This essay explores three major paradigms in AI—connectionist, symbolic, and metacognitive—and how they might converge into models with more accurate answers and recommendations. Thinkers like Nick Bostrom have used the term “oracle AI” to describe AI systems that provide knowledge and insights without independent agency. These “oracles” are distinct from the more commonly discussed notion of artificial general intelligence (AGI), which implies autonomy and goal-setting.

We will focus on AI’s inference and decision-making capacities in oracle-style use cases. Systems like ChatGPT are good examples. Although they are becoming increasingly agentic, the core problem is moving from mere text generation based on predicting language tokens to an internal process that genuinely resembles thinking and self-correction.

More broadly, I’m interested in AI that serves the collective good rather than becoming another tool of economic inequality, misinformation, or geopolitical competition. After all, the future of AI is not predetermined. Since it will be shaped by policy, public engagement, and human values, ensuring AI’s results are accurate and aligned with ethical principles will require an informed, engaged public willing to advocate for its responsible development.

Connectionist AI: Perceptual Representation

Much of modern AI is built on connectionism, a paradigm rooted in empiricism—the belief that knowledge comes from experience with examples. Deep learning models, inspired by the structure of the human brain, learn by analyzing vast amounts of data and identifying patterns without a preset structure, often called statistical induction. The rise of the transformer architecture, which allowed AI to process information more efficiently, led to a revolution in language models, making AI far more useful for real-world tasks.
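
To make the attention mechanism concrete, here is a minimal sketch of scaled dot-product attention, the core operation of the transformer, written in plain NumPy. Real models add learned query/key/value projections, multiple attention heads, and many stacked layers; this toy version only shows the central idea of tokens weighting and mixing one another's representations.

    import numpy as np

    def softmax(x, axis=-1):
        # Subtract the max before exponentiating for numerical stability.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        # Each token's query is compared against every token's key; the
        # resulting weights decide how much of each value vector to blend in.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)      # pairwise token similarities
        weights = softmax(scores, axis=-1)   # each row sums to 1
        return weights @ V

    # Toy example: 4 "tokens", each an 8-dimensional vector. In a real
    # transformer, Q, K, and V come from learned linear projections.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(X, X, X).shape)  # (4, 8)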

This approach has clear strengths. Neural networks excel at perception—recognizing speech, classifying images, and generating human-like text. They adapt well to large datasets and can improve with scale. However, there is ongoing debate about whether scaling alone will lead to more capable forms of intelligence. While deep learning has made significant progress in generating coherent and useful responses, it struggles with structured reasoning, explainability, reproducibility (many generative models are based on probabilistic sampling), long-term planning, and causal inference. Many researchers argue that additional mechanisms such as explicit reasoning models or self-reflective architectures will be necessary to overcome these limitations.

Formalist AI: The Logic of Thought

Before deep learning, AI research was dominated by formalist AI, also known as symbolic AI. This paradigm is closely related to rationalism, the idea that intelligence arises from structured rules and logic. Early AI systems used symbolic reasoning to encode human knowledge into databases and solve problems through explicit rules. This led to advances in search algorithms, expert systems, and knowledge graphs, which powered early applications in fields like medicine and finance.
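
To make the contrast with deep learning concrete, here is a toy forward-chaining inference loop in Python, a highly simplified sketch of the kind of explicit if-then rules early expert systems were built on. The rules and facts are invented purely for illustration.

    # Toy forward-chaining "expert system": knowledge is a list of explicit
    # if-then rules, and inference applies them until no new facts appear.
    rules = [
        ({"fever", "cough"}, "respiratory_infection"),
        ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
        ({"suspect_pneumonia"}, "order_chest_xray"),
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)  # derived facts can trigger more rules
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "chest_pain"}, rules))

Every conclusion can be traced back to the rules that fired, which is the transparency discussed next; the cost is that a person had to hand-write each rule.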

Symbolic AI had one major advantage: it could explain its reasoning. Unlike neural networks in the current connectionist paradigm, which operate as opaque black boxes, formalist AI was designed to be transparent and interpretable. It was also excellent at abstract reasoning, solving problems with explicit constraints, and long-term planning, something deep learning still struggles with.

However, symbolic systems lacked flexibility. They relied on manually defined rules, which made them brittle and difficult to scale. Even adding a single new variable could blow up the complexity of the problem, making it much harder to solve. Unlike neural networks, they could not automatically learn from data, limiting their ability to handle real-world uncertainty.

Today, one of the most popular methods used for “reasoning” in AI is chain-of-thought prompting, a technique that encourages models to break down complex problems into intermediate steps. While this improves performance on certain tasks, it is neither true self-reflection nor genuine reasoning. Generating a chain of thought does not mean the problem was actually solved through verified, discrete steps, nor does it give the AI awareness of its own activations or the ability to monitor its own reliability in real time.
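
As an illustration, chain-of-thought prompting usually amounts to nothing more than a change in the prompt text. In the sketch below, ask_model is a hypothetical stand-in for any LLM API call; the “reasoning” it elicits is simply additional generated text, with nothing checking each step.

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in for a call to an LLM API of your choice.
        raise NotImplementedError("wire this up to a real model")

    question = ("A train travels 60 km in 45 minutes. "
                "What is its average speed in km/h?")

    # Direct prompting: the model answers in one shot.
    direct_answer = ask_model(question)

    # Chain-of-thought prompting: the same question, but the model is asked
    # to write out intermediate steps before giving the final answer.
    cot_answer = ask_model(
        question + "\nLet's think step by step, then state the final answer."
    )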

Recent research in neurosymbolic AI attempts to integrate symbolic reasoning with deep learning in a way that could accomplish authentic reasoning. This remains an active area of development rather than a solved problem, and it represents, for me, the frontier of AI research. Critics of AI overhype like Gary Marcus, who authored books such as The Algebraic Mind in cognitive science, call for richer cognitive architectures that combine symbolic and connectionist approaches, just as in cognitive science. One example of this direction is Google DeepMind’s AlphaProof, which excels at competition mathematics but is not very well known by the public.

According to Marcus, the currently dominant thinking in AI has been based on a dismissal of the neurosymbolic approach. Researchers like Geoffrey Hinton and Yann LeCun have likened symbolic AI to phlogiston theory, an outmoded scientific theory that posited a substance called phlogiston in combustible materials to explain how they catch fire.

Presumably the analogy goes: if fire doesn’t depend on phlogiston, then intelligence doesn’t depend on symbols. Antoine Lavoisier’s oxygen theory of combustion, which replaced phlogiston theory, became a founding shift in modern chemistry by demonstrating that water is a compound containing oxygen. Hence, the idea is that symbolic manipulation is only a transitional understanding of intelligence that will be rendered obsolete through reduction to smaller interactions.

From a philosophical perspective, Marcus seems to have clearly captured the way that pure empiricism cannot adequately account for knowledge acquisition. Not only are symbolic representations required for accurate reasoning; this reasoning is also required for ethical AI. As Kant argued in his philosophy, which synthesized empiricism and rationalism, there is an analogous relationship between logical coherence and moral coherence (i.e., the categorical imperative holds that moral principles are universalizable without contradiction).

One promising approach to neurosymbolic AI is program synthesis. It uses machine learning for low-level representations such as perceptual data (sound, images), while using symbolic components for higher-level representations. It is more data efficient and generalizable than machine learning alone because such systems can learn and generalize from fewer examples while extracting interpretable structures from the data.
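
A minimal sketch of the symbolic half of this idea is enumerative program synthesis: given a few input-output examples, search a tiny domain-specific language for an expression that fits all of them. The DSL and examples below are invented for illustration, and the search is brute force; neurosymbolic systems typically use a neural network to guide or prune this search.

    import itertools

    # A tiny DSL: candidate programs are single primitives or compositions
    # of two primitives applied to the input x.
    PRIMITIVES = {
        "x": lambda x: x,
        "x + 1": lambda x: x + 1,
        "x * 2": lambda x: x * 2,
        "x * x": lambda x: x * x,
    }

    def compose(outer_name, inner_name):
        outer, inner = PRIMITIVES[outer_name], PRIMITIVES[inner_name]
        return f"({outer_name}) after ({inner_name})", lambda v: outer(inner(v))

    def synthesize(examples):
        candidates = list(PRIMITIVES.items())
        candidates += [compose(a, b) for a, b in itertools.product(PRIMITIVES, repeat=2)]
        for name, fn in candidates:
            if all(fn(x) == y for x, y in examples):
                return name  # an interpretable, reusable program
        return None

    # Three examples are enough to recover a program equivalent to 2*x + 1.
    print(synthesize([(1, 3), (2, 5), (3, 7)]))  # "(x + 1) after (x * 2)"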

One precedent for why this approach makes sense comes from mathematics. Many scientific and engineering problems rely on specifying a system of equations, such as differential equations, that needs to be solved. When the problem is very complex, one way to solve it is with numerical analysis: you turn the symbolic equations into transformations on matrices, which are not symbolic or algebraic but simply arrays of numbers combined arithmetically.

Now, if you have specific initial conditions set, you can get a concrete answer, but it has limitations. First, because these operations are done only up to machine precision, small micro-errors get multiplied over and over, so the answer is only approximate. Second, you have to rerun the numerical algorithm every time you want an answer, whereas with a symbolic answer you just plug in the values and get the output.

However, symbolic solutions can be very complex and require a lot of domain expertise with those types of equations, along with various tricks. If you’ve ever taken Calculus II and learned all the integration strategies, you know how messy things can get very quickly. That’s why, for a really difficult integral in an applied problem, it might just make sense to compute it with the trapezoidal rule. If computers could do the symbolic integration for us, we would get the best of both worlds: the precision and reusability of a closed-form expression, without the manual effort.
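
A small worked comparison makes the trade-off concrete. Using SymPy for the symbolic route and plain NumPy for the numerical route, the symbolic antiderivative is exact and reusable for any bounds, while a hand-rolled trapezoidal rule gives an approximation tied to one particular grid and set of bounds.

    import numpy as np
    import sympy as sp

    x = sp.symbols("x")
    f = x**2 * sp.cos(x)

    # Symbolic route: a closed-form antiderivative, exact and reusable.
    F = sp.integrate(f, x)                  # x**2*sin(x) + 2*x*cos(x) - 2*sin(x)
    exact = sp.integrate(f, (x, 0, sp.pi))  # -2*pi, an exact result

    # Numerical route: the trapezoidal rule on a fixed grid, accurate only
    # up to the step size and machine precision, and recomputed per problem.
    xs = np.linspace(0, np.pi, 1001)
    ys = xs**2 * np.cos(xs)
    approx = np.sum((ys[:-1] + ys[1:]) / 2 * np.diff(xs))

    print(F)                     # the reusable symbolic expression
    print(float(exact), approx)  # both approximately -6.2831853...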

So, by analogy, symbolic AI would expand on connectionist AI by allowing for exact solutions to reasoning problems, reusable results, and explainability.

Metacognitive AI: Learning to Think about Thinking

Beyond integrating connectionism and formalism, the next evolution of AI must involve metacognition—the ability of AI to monitor, evaluate, and improve its own reasoning processes. This missing ingredient could make AI systems even more reliable than neurosymbolic computation alone.

Another component of the current paradigm is reinforcement learning from human feedback (RLHF). This is a tuning process that occurs after a step called pretraining, and it is a major way in which humans remain involved in the process, often under exploitative conditions. For example, many people are unaware that AI tuning employs African workers who are paid low wages and are often exposed to disturbing content, sometimes even leading to mental health issues. This signals a need not only for fairer working conditions but also for complementing this process with reinforcement learning from AI feedback (RLAIF).

For AI to become genuinely metacognitive, it must be able to observe its own “thought processes,” recognize mistakes, and refine its decision-making autonomously. This aligns with intuitionism, a philosophical tradition that asserts that some forms of knowledge are non-inferential, in that they are not derived from other arguments or premises. This would include, for example, access to unconscious knowledge about one’s own internal state. Self-awareness depends on being able to represent yourself as an “object” of your own attention, a key to refining your knowledge.

Humans rely on conscious self-monitoring to recognize when they are confused, second-guess assumptions, and refine their thought processes. A metacognitive AI would need similar capabilities. However, self-reflection alone is not enough—it must be built on a strong foundation of perception (connectionism) and structured reasoning (symbolic AI). This self-correction has also been seen as the key to ethical development and growth in humans and should be adopted for AI agents as well.
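
One rough way to picture a first step in this direction is a generate-critique-revise loop, sketched below. The ask_model function is again a hypothetical stand-in for an LLM call, and it is worth stressing that this kind of loop is still only text reflecting on text; genuine metacognition would require the system to monitor its own internal computations, not just its previous outputs.

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in for a call to an LLM API of your choice.
        raise NotImplementedError("wire this up to a real model")

    def answer_with_self_critique(question: str, rounds: int = 2) -> str:
        # Draft an answer, then repeatedly critique and revise the draft.
        draft = ask_model(question)
        for _ in range(rounds):
            critique = ask_model(
                f"Question: {question}\nDraft answer: {draft}\n"
                "List any factual or logical errors in the draft."
            )
            draft = ask_model(
                f"Question: {question}\nDraft answer: {draft}\n"
                f"Critique: {critique}\nRewrite the answer, fixing the issues above."
            )
        return draft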

From a philosophical perspective, if neurosymbolic AI would represent a “Kantian” turn in AI then metacognitive AI would be analogous to a “Hegelian” turn. G. W. F. Hegel was another German philosopher who in many ways can be thought to have completed or advanced the Kantian project of critical philosophy. Hegel popularized a dialectical method which is effectively a way to refine beliefs through engagement with the world.

The Hegelian dialectic is often simplified as thesis-antithesis-synthesis, but what Hegel described was a process of abstract-negative-concrete. An abstract principle is negated when it encounters its limitations or contradictions in a way that destabilizes the original principle. The initial concept then becomes enriched by the insights of the negation process, forming a more comprehensive understanding than before.

Perhaps metacognitive AI would develop a sensitivity to contradictions and dialectical tensions in its knowledge base and conversations, and reflect on how to refine its understanding of itself and the world.

The Need for Public Engagement in Generalist AI

The future of AI will not be determined solely by technical breakthroughs. It will be shaped by public awareness, policy, and collective advocacy. The risk is that AI’s trajectory will be dictated by geopolitical competition and corporate interests rather than a shared vision of human flourishing. Nations are already racing to dominate AI, and without public vigilance, these technologies could be weaponized or used to entrench power imbalances.

Ethan Mollick describes AI’s evolution as happening on a jagged frontier—a space where we must figure out what works and what doesn’t through direct engagement with the technology. AI is already reshaping education, creativity, and business, and the best way to prepare for its future is to actively explore and shape its development. Instead of passively awaiting policy changes or fearing automation, individuals and organizations must take an active role in experimenting with AI, pushing for ethical guidelines, and demanding transparency.

By advocating for AI that enhances human capabilities rather than replacing them, we can steer its development toward tools that amplify learning, support well-being, and unlock human potential. The alternative—leaving AI’s future solely in the hands of governments and corporations—risks ceding control over a transformative force that should belong to everyone.

Further Reading

  1. Co-Intelligence by Ethan Mollick. This book introduces concepts like the 2 Sigma Problem, the jagged frontier, and other ideas that suggest how we can engage with AI rather than watching it develop from the sidelines.
  2. Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. A technical book that sets the stage for the current connectionist paradigm in AI.
  3. Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. This is the classic textbook on AI used in many college courses. It covers everything from the best formalist AI algorithms to machine learning algorithms aligned with the connectionist paradigm.
  4. Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus and Ernest Davis. Two researchers in the field offer an accessible and balanced assessment of where AI was as of 2020 while pointing the way to what might actually be required for trustworthy AI.
