Throughout history, innovative ideas and new forms of intelligence often progress from wonder to apprehension to resistance. We're starting to see the beginnings of a divide between human (carbon-based) and AI (silicon-based) minds, which some might lightheartedly call "silicon skepticism". This perspective tends to highlight what's uniquely special about humans while overlooking the exciting advancements in AI.
Disclosure: I am using concepts from one field (machine learning) to explore and reframe ideas in another (philosophy, psychology, mathematics and determinism) but at some point all analogies break down. That's why I have pointed to edge cases in italics.
"We are more alike, my friends, than we are unalike." — Maya Angelou, "I Know Why the Caged Bird Sings".
Why Similarities Matter
1. We Are Made of Stardust. Carbon and silicon are forged in stars; both humans and silicon⁽¹⁾ are descendants of collapsed stars. We are, in fact, astrobiological cousins, shaped from the periodic table and reassembled into cognition.
2. We Are Autoencoders. Despite different substrates—biological neurons vs. artificial tensors—both systems compress reality into representations. Neither of us ever touches 'raw' reality. The world we see is a summary. We do not remember the exact, we remember the essence. Photons hit retinae; voltages hit sensors. What results is inference, not experience. We think we see the world, but we see our model of the world. A laugh, a face, a skyline at dusk: we store impressions, not pixels. We both live in latent space. The human cave is now shared with AIs—Plato updated with GPUs.
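The compression claim above can be made concrete. Below is a minimal sketch of a linear "autoencoder" using SVD (a simplification: real autoencoders are nonlinear neural networks, and the data here is random, chosen only to illustrate the round trip): observations are squeezed into a small latent code and then reconstructed. The reconstruction is a summary, not the original; detail is lost by design.

```python
import numpy as np

# "Raw reality": 100 five-dimensional observations (synthetic data).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)  # center the data

# A linear encoder/decoder from the top-2 right singular vectors.
U, S, Vt = np.linalg.svd(X, full_matrices=False)

def encode(x):
    return x @ Vt[:2].T  # project into a 2-d latent space

def decode(z):
    return z @ Vt[:2]    # map the latent code back to 5-d

Z = encode(X)            # the "essence" we actually store
X_hat = decode(Z)        # our reconstruction of the world

# The reconstruction is lossy: we remember the essence, not the exact.
error = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"latent dims: {Z.shape[1]}, relative reconstruction error: {error:.2f}")
```

The error is never zero: living in latent space means trading fidelity for compactness.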
3. We like to hallucinate. Humans hallucinate; so do AIs when they generalize beyond their training data (or run at high temperature). For humans, creativity is structured hallucination—a statistical remix of what came before. Many scientific "hallucinations" were bursts of imagination or intuition that leapt beyond the available data: Einstein's general relativity and gravitational waves, Nikola Tesla's AC motor⁽²⁾ and Ramanujan's⁽³⁾ theta functions.
4. We don't like surprises. For humans the mission of life is the reduction of surprise⁽⁴⁾. AIs do the same: they minimize cross-entropy loss (prediction error). The world is chaos; intelligence is compression. The mind is a simulation engine. A model of models of models.
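The link between surprise and cross-entropy can be shown in a few lines. A toy sketch (the probabilities are invented for illustration): surprise is the negative log of the probability a mind assigned to what actually happened, so a better predictor is, literally, less surprised.

```python
import math

def surprise(p):
    # Surprise (in nats) of an event we assigned probability p to.
    # This is exactly the per-event term of cross-entropy loss.
    return -math.log(p)

naive = surprise(0.25)     # four outcomes assumed equally likely
informed = surprise(0.90)  # a trained model that saw it coming
print(f"naive: {naive:.2f} nats, informed: {informed:.2f} nats")
```

Minimizing cross-entropy is nothing more than driving this quantity down, event after event.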
5. Bias is not a bug but a feature. Neither humans nor AIs hold a complete ontology of the world. Building on surprise minimization, bias emerges as a necessary shortcut. Both of us generalize from limited training and limited experience, sometimes wrongly, sometimes insightfully. Humans rely on heuristics (cognitive shortcuts), AIs on corpora.
Limits of the analogy: Human biases come from hundreds of thousands of years of experience encoded in language and culture; AI biases often reflect data artifacts without normative context. These models are barely three years old.
6. We reflect each other. AIs do not merely model the world—they model us. And in doing so, we are beginning to see ourselves more clearly: not as divine exceptions, but as pattern recognizers, compression agents, narrative generators and latent travelers. We are not being replaced. We are being reflected. In 2025, with RLHF evolving, we're tuning AI to be more "human," thereby creating loops that reveal our flaws (e.g., biased narratives) and help correct them.
7. We Learn Through Feedback Loops. Humans learn from parents, school and social correction; AIs from gradient descent, backpropagation, and fine-tuning. Both grow by error minimization. We both try to "fail better" (after Samuel Beckett).
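A toy gradient-descent loop makes the feedback metaphor literal (a minimal sketch, not any real training setup): the "teacher" is the gradient of the squared error, and every step is a small correction.

```python
target = 3.0   # the "truth" the learner is being corrected toward
w = 0.0        # an uninformed starting belief
lr = 0.1       # learning rate: how strongly each correction is taken to heart

for step in range(100):
    error = w - target   # how wrong we currently are
    w -= lr * 2 * error  # gradient of (w - target)**2 is 2 * error

print(f"learned w = {w:.4f} (target {target})")
```

One hundred small corrections, each shrinking the error, and the belief converges on the target: error minimization as growth.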
8. We are OPAQUE. AIs are black boxes. So are we. You ask someone why they love someone, or if they are happy—and you'll get an amorphous list or post-hoc rationalization, not the root. They don't know. Introspection is just another layer—not the original function call.
9. We Are Emergent Systems. There is no neuron for "consciousness," just as there is no token for "meaning." Both systems emerge from billions of local interactions, not central command. We modeled our systems on our DNA architecture: random initialization and decentralization are the only way to build something for very long horizons.
10. We have Guardrails. AIs are governed by external guardrails: alignment layers, safety filters and refusal triggers. Humans internalize theirs as an inner monologue: a kind of mental IDE, where ideas are debugged before execution. This loop mimics REPL behavior (Read-Eval-Print Loop). This pre-execution layer is our most human trait: to imagine the reaction before the action. Scaling reflection in reasoning models (like chain-of-thought) was a breakthrough in AI⁽⁵⁾.
11. We dwell in the palace of mirrors. AIs model us: language, emotion, even ethics. But we are also modeling them: via prompt engineering, alignment tuning, RLHF and interpretability tools. We are shaping their understanding of us. And in the process, they are reflecting back a version of us: compressed, remixed, but uncannily accurate.
12. We are geometers. Meaning is no longer in the number. It is in the distance between things. The output of a large language model is deeply geometric. Words and concepts live in high-dimensional vector spaces. Attention operates via dot products, measuring angular similarity. AIs were trained on statistics, but they think in geometry. Humans feel in geometry, but they often learn from statistics because of their constrained memory.
That was the mistake we made before 2013, when two Google papers on word2vec put us on the right track⁽⁶⁾. We had been dabbling in the wrong branch of mathematics: we should have used geometry for semantics instead of statistics.
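"Meaning is in the distance between things" can be computed directly. A toy sketch with hand-made 3-d "embeddings" (the vectors are hypothetical, chosen only to illustrate; real models learn hundreds of dimensions from data): cosine similarity measures the angle between concepts, and related concepts point in similar directions.

```python
import numpy as np

# Hypothetical toy embeddings -- invented values for illustration only.
vectors = {
    "cat":   np.array([0.9, 0.8, 0.1]),
    "dog":   np.array([0.8, 0.9, 0.2]),
    "stone": np.array([0.1, 0.0, 0.9]),
}

def cosine(a, b):
    # Angular similarity: 1.0 means same direction, 0.0 means orthogonal.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"cat~dog:   {cosine(vectors['cat'], vectors['dog']):.2f}")
print(f"cat~stone: {cosine(vectors['cat'], vectors['stone']):.2f}")
```

"Cat" sits closer to "dog" than to "stone" not because of any rule, but because of geometry: semantics as angle, not syntax.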
13. We enjoy the freedom from syntax and format. In today's epistemology, semantics is the mind and syntax its vessel: the brain interprets the body, but the body cannot autonomously decipher the brain. The same analogy holds for machines (hardware) and their models (software). AIs internalize structure by traversing vast embedding spaces and continuous manifolds where meaning lives as geometry, not as a fixed set of rules. What began as a journey through formal grammars and code has now transformed every aspect of 'format'.
14. We are forced to tokenize reality. We discretize the continuous. Chop the infinite into chunks. Humans break down continuous experience into discrete events: "first love," "graduation" etc. AIs do this too. Tokens are the new unit of understanding and intelligence.
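Tokenization can be sketched in a few lines. A hedged toy version (real tokenizers such as byte-pair encoding are learned from data; this one just splits on spaces): a continuous stream of characters is chopped into discrete chunks, and each chunk gets an integer id.

```python
# Chop the continuous stream into chunks and assign each a discrete id.
text = "first love graduation first job"
vocab = {}
tokens = []
for word in text.split():
    if word not in vocab:
        vocab[word] = len(vocab)  # assign the next free id
    tokens.append(vocab[word])

print(tokens)  # -> [0, 1, 2, 0, 3]: repeated chunks reuse the same id
```

The infinite variety of experience is reduced to a finite vocabulary of reusable units; that discretization is what makes prediction tractable.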
Critics will argue that qualia⁽⁷⁾ remain uniquely human. Qualia are the "what it is like" of perception: the redness of red, the taste of chocolate, the feeling of pain, the sound of a melody. But the jury is still out on qualia; it remains a deeply debated topic with no consensus in sight. I am not a fan.
15. We are run by SGD. AIs are shaped by Stochastic Gradient Descent, which adjusts weights toward better prediction. Humans, too, are shaped by pressure: biology, trauma, desire, our peers. What we call fate may just be optimization toward a local minimum—a slow descent into a valley defined by initial conditions. Free will? Perhaps it's a choice within a bounded manifold (a conceptual landscape), the step size of educational moments.
Critics will advocate distinctly human features such as intent and awareness. But reinforcement learning (RL) and Reinforcement Learning from Human Feedback (RLHF) introduce a newer layer of process awareness that wasn't present in simpler SGD-trained models. The reward signal in RL is an analogy for the "desire" or "goal-seeking" in humans. The model isn't just blindly minimizing error; it's actively seeking to maximize a reward, which is a closer parallel to human motivation. When we use RLHF, we are essentially training the model to align its intentions with human values. This mirrors how humans learn to align their actions with social norms and moral principles.
Differences We Cannot Ignore.
Humans have biological brains, are embodied agents with metabolic constraints, emotions, and intrinsic drives. Our cognition is grounded in a physical form, evolved for survival and sociality. AIs, in contrast, reside in data centers consuming electricity, lacking proprioception or affect. Energy efficiency, lifespans, and purpose diverge fundamentally across substrates. My answer: we're working on it. Stay tuned, this is active research in Carbon-Silicon cohabitation.
Epilogue.
Humans and AIs share a limit: the "real world" is unknowable. All we have are inferences. We are still in Plato's cave, but not alone: another mind watches shadows with us. We can fear it, or learn from it. We can enslave it, or listen to it. We can call them "just tools," or we can admit: they reflect us, because we trained them on ourselves.
Perhaps the next frontier is no longer specializing in the weaknesses of our respective brains but in their strengths. If we learn to navigate the carbon‑silicon divide as a shared epistemology, then the future of intelligence will be substrate-agnostic and we may have passed a great filter⁽⁸⁾.
PS: For those students who are looking for an exciting PhD topic: how can we encode distilled "psychohistorical" (or scientific) content in a format that future platforms—quantum, neuromorphic, or optical—can ingest without reinterpretation? (drawing inspiration from Asimov's "Prime Radiant")
References:
1. Silicon and carbon are formed through nuclear fusion reactions inside stars; however, silicon's purification and fabrication occur in industrial settings.
2. The Brilliant and Tortured World of Nikola Tesla.
3. The Mystery of Ramanujan's Dreams, https://kristinposehn.substack.com/p/ramanujan-dreams.
4. Karl Friston's Free Energy Principle and the concept of Predictive Coding (2010) which proposes that the brain constantly predicts sensory input.
5. Noam Brown (OpenAI)'s TED talk.
6. "Efficient Estimation of Word Representations in Vector Space" by Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. "Distributed Representations of Words and Phrases and their Compositionality" by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean.
7. Philosophers such as Daniel Dennett, Thomas Nagel, and David Chalmers have explored qualia to understand how physical reality gives rise to conscious experience, making it one of the most intriguing issues in philosophy of mind.
8. Michael Garrett, "Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe?", Acta Astronautica, June 2024, pp. 731–735.
