Walter De Brouwer

"We Become What We Behold: first we shape the tool and then the tool shapes us" is a concept popularized by Marshall McLuhan, a key figure in media theory. It suggests that the media we consume shapes our perception of reality and, consequently, influences our behavior and identity.

The prosthetic mind is a deep theme in cognitive science and philosophy of mind. It is the balance between embodied, internalized skill (muscle memory, "knowing how") and externalized, tool-mediated cognition (using notebooks, calculators, or—today—AI systems as "prosthetic minds").

How do we internalize new skills by training machines? Two theories apply, and both make sense.

Muscle Memory. As a child I learned to touch-type, which was (and still is) a great asset. When I arrived in the States, I had to switch from AZERTY to QWERTY, and miraculously it worked without much effort: the portal from my brain to my fingers was open and only needed some fine-tuning. Likewise, mastering a musical instrument or an athletic movement requires deliberate practice until the body "knows" the pattern. This shift from conscious control to subconscious execution is the core of embodied cognition: the idea that cognition is not confined to the brain but largely distributed through the body and its interactions with the world.

The Extended Mind. Clark and Chalmers' Extended Mind thesis posits that objects we reliably use (notebooks, calculators, apps) become part of our cognitive system. Writing a to-do on paper isn't just recording; it's offloading memory. Similarly, we have long externalized calculation and wayfinding, and now prompt engineering and AI interactions are becoming part of how we think.

The same offloading happened in music consumption and music production over time.

THE EVOLUTION OF MUSIC CONSUMPTION

In the seventies, every Saturday afternoon, I would race to the record shop, eagerly flipping through bins until I found music to try out in the listening booth with a pair of headphones.

The mixtape was the killer app of my generation. Creating one was like crafting an emotional fingerprint, each song sequence carrying cryptic messages you hoped the listener would decode. When you gave someone a mixtape, you gave them a deeply personal, carefully crafted message in a bottle. CDs carried forward this gift-giving tradition.

But then something shifted. Streaming arrived, and suddenly sending a playlist felt... thin. Too easy. Click, drag, send. The gift became ephemeral, weightless. At the same time, social media arrived and took over discovery, with streaming as its distribution layer. Algorithms turned emotional connection into a commodity, replacing relationships with data-driven transactions.

Pocketable devices like AirPods (earbuds) and earphones introduced new gestures: tap to play or pause, swipe to skip, voice prompts ("Hey Siri, play jazz"). These clicks and taps became the new muscle-memory gestures. Unfortunately, this hardware also turned music into a continuous background layer of daily life, with known side effects: continuous partial attention, "anti-social" signaling, escapism into the "zone", noise-induced hearing loss (NIHL), overstimulation, sleep debt, tech neck, and rumination.

THE EVOLUTION OF MUSIC PRODUCTION

The story of human skill—whether in typing, coding, playing an instrument, or curating a music collection—is one of recursive transformation: we invent tools to offload effort, those tools demand new forms of bodily mastery, and our cognitive repertoire expands in turn.

Before any instrument existed, music was produced by the body alone: singing, clapping, stomping, ululation. The first instruments (flutes, drums, and stringed instruments) were extensions of the body.

But then came the big bang of music: transcription (score, sheet music) as cognitive offloading. Music got its own artificial language, its own code, with the development of notation systems in medieval Europe, which transferred musical memory and theory from individual minds to a collective one via written form.

If you define a "programming language" as any formal system of symbols that instructs an agent to perform actions, then musical score is one of humanity's earliest "codes." Both software code and musical notation (as well as human language and number systems) are discrete infinite formal languages, each defined by a finite symbol set and syntax, yet capable of encoding infinitely many distinct works.

But unlike code, score is not Turing complete, nor is it machine-executable or unambiguous. It lacks branching, loops, and state mutation in the computational sense, though composers approximate these via repeats, codas, and variable instrumentation. And score was here a millennium before programming code (1950s)!
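The difference between a repeat mark and a true loop can be made concrete. Below is a minimal sketch (the tuple-based repeat syntax is invented for illustration): a toy "interpreter" expands repeats into a flat performance sequence, and the expansion always terminates because the repeat count is fixed in the notation itself, which is precisely why score falls short of Turing completeness.

```python
# A score's repeat mark is bounded unrolling, not an unbounded loop.
# This toy interpreter flattens a score whose items are either notes
# (strings) or ("repeat", n, subscore) sections into a linear event
# list. Termination is guaranteed: n is a literal written in the score.

def expand(score: list) -> list:
    """Expand repeat sections into a flat sequence of note events."""
    events = []
    for item in score:
        if isinstance(item, tuple) and item[0] == "repeat":
            _, times, section = item
            for _ in range(times):        # bounded by the written count
                events.extend(expand(section))
        else:
            events.append(item)
    return events

phrase = ["C4", ("repeat", 2, ["E4", "G4"]), "C5"]
print(expand(phrase))  # ['C4', 'E4', 'G4', 'E4', 'G4', 'C5']
```

A Turing-complete language could loop until some runtime condition holds; a score can only say "play this passage twice."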

Once you see music as proto-code, you are a different person. My generation read "Gödel, Escher, Bach" (GEB). Hofstadter never labeled musical notation itself a "programming language," but his exploration of Bach as a formal system of rules and transformations is arguably the closest thing in GEB to that claim: he shows how Bach's fugues are built by following precise, symbolic rules, much as a formal axiomatic system generates theorems by following inference rules. Bach was undeniably a programmer, with the exception that he used humans as his "runtime environment."

Musicians and engineers share a deep isomorphism:

Composers ⇄ software engineers

Score ⇄ code

Musicians ⇄ interpreters

It was always bound to happen. Just as version control and pull requests powered GitHub's explosion, music—now a data-driven, code-like medium—stands on the brink of the same transformation. Cloud DAWs have already torn down geographic walls; GenAI assistants now democratize mix engineering and arrangement. A "distributed studio" isn't just possible—it's preferable for scale, diversity, and real-time feedback loops.

Early platforms (Splice, BandLab, Ohm Studio) hinted at what's to come; newer players layer in AI-powered mix suggestions, rights metadata, and immutable version logs. As latency, tooling standards, and IP frameworks mature, studios will look less like rooms and more like living networks—automation and governance baked in.

Yet far from disappearing, human agency has merely shifted upward. Even in "lights-out" services like Suno or Cursor, artists still design high-level models, curate datasets, craft prompts, and guide post-production.

So yes, the locus of creativity is evolving, but we are still here. Perhaps we have changed seats, like in a Waymo or Tesla Robotaxi, but we still get in and get out where and when we want. We do not go to the world anymore, we sit in the machine and the world comes to us. Rumors of our demise have been wildly overstated.
