
Living Human Brain Cells Learn to Play DOOM on Cortical Labs’ CL1

If you want a preview of the future of computing, look at the CL1: a device powered by living human brain cells that just learned to play DOOM. This isn’t science fiction—it’s a milestone in biological computing, where 200,000 living neurons not only survived on a chip but actively learned to master a classic video game. Here’s what happened, why it matters, and what to watch as the boundaries between silicon and biology blur.

Key Takeaways:

  • CL1 by Cortical Labs used 200,000 living human neurons to play DOOM, achieving real-time, adaptive learning
  • The main technical hurdle: converting game visuals into neural-friendly electrical stimulation patterns
  • This experiment demonstrates the feasibility of “wetware” computers—hybrid systems combining silicon and living tissue
  • CL1 reportedly sells for $35,000 per device, a figure reported by TechRadar that has not been independently confirmed
  • Biological computing offers a radically different paradigm from digital AI—potentially lower energy, but with unique scaling and reproducibility challenges

How CL1 Learned to Play DOOM

Just a week ago, Cortical Labs showcased their CL1 “biological computer” running DOOM, the canonical stress test for unconventional hardware. The system didn’t just run the game—it learned to play it, making real-time decisions based on a feedback loop between the game state and the neural network’s responses (Popular Science).

Reports describe the cells simply as "living human neurons" or "living human brain cells"; the coverage does not specify how they were derived.

These neurons are not passively observed—they are “trained” through electrical stimulation, analogous to how digital neural networks adjust weights.

The process was shockingly fast: Sean Cole, an independent developer, reportedly completed the adaptation in just one week, even with little prior experience in biological computing. This rapid progress illustrates both the maturity of Cortical Labs’ platform and the flexibility of living neural tissue as a computational substrate.

| Milestone | Hardware | Year | Training Time |
|---|---|---|---|
| Pong (DishBrain) | 800,000 neurons | 2021 | 18 months |
| DOOM (CL1) | 200,000 neurons | 2026 | 1 week |

Why does this matter? Unlike previous digital emulations, this is adaptive, real-time learning using actual human neural cells—something digital hardware is only beginning to approximate in flexibility and energy efficiency.

Exploring Biological Computing Applications

Biological computing, exemplified by the CL1, holds promise for diverse applications beyond gaming. For instance, researchers are investigating its potential in drug discovery, where neural networks can model complex biological interactions. Additionally, adaptive systems like CL1 could revolutionize personalized medicine by simulating patient-specific responses to treatments. Such advancements could lead to breakthroughs in understanding neurological diseases and developing targeted therapies.

Inside the CL1 Biological Computer

CL1 is not a conventional computer. It’s a hybrid system where living neurons and silicon interact in real time. The neurons are kept alive in a nutrient solution, growing across a microelectrode array that allows bidirectional communication: software can stimulate the cells electrically, and the cells’ responses can be read and interpreted by the system (Cortical Labs CL1).

Key architectural features include:

  • Programmable bi-directional interface: CL1 can both stimulate and record from the neural network, enabling closed-loop experiments.
  • Self-programming neural substrate: Unlike digital models that require explicit programming, the neurons adapt their connections and firing patterns autonomously.
  • All-in-one device: Life support, stimulation, recording, and basic experiment control run entirely on the device—no external computation is needed during training or inference.
  • Longevity: The device can keep neurons alive and functional for up to 6 months, supporting longer research timelines.
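
To make the closed-loop, bi-directional interface concrete, here is a toy sketch of a stimulate-then-record cycle. Everything here is an illustrative assumption: the `NeuralInterface` class, its methods, and the 64-channel count are invented for exposition and are not the actual Cortical Labs biOS API.

```python
import random

class NeuralInterface:
    """Toy stand-in for a microelectrode array with N channels."""
    def __init__(self, n_channels=64):
        self.n_channels = n_channels

    def stimulate(self, pattern):
        # In real hardware this would drive the electrodes; here it only
        # checks that the pattern matches the channel count.
        assert len(pattern) == self.n_channels

    def record(self):
        # Return simulated spike counts, one per channel.
        return [random.randint(0, 5) for _ in range(self.n_channels)]

def closed_loop_step(iface, pattern):
    """One stimulate-then-record cycle: the basic unit of a closed loop."""
    iface.stimulate(pattern)
    return iface.record()

iface = NeuralInterface()
activity = closed_loop_step(iface, [0] * iface.n_channels)
print(len(activity))  # 64: one reading per electrode channel
```

The point of the sketch is the loop shape, not the physics: stimulation goes in, activity comes back out, and everything downstream (decoding, game actions, feedback) hangs off that cycle.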

For practitioners, CL1 offers a testbed for studying neural computation, disease mechanisms, and adaptive intelligence in ways that digital-only models cannot yet match. This goes beyond emulation: you are computing with living human tissue, with all the plasticity and unpredictability that entails.

For a detailed exploration of hardware trade-offs in other fields, see our analysis of cloud VM benchmarks and processor architectures.

Translating DOOM for Neurons: The Technical Challenge

The biggest technical hurdle: CL1’s neurons cannot “see” in the way digital AI models do. There’s no optical input, so researchers had to translate the visual game state into patterns of electrical stimulation that the neurons could process (Popular Science).

This required a new interface layer:

  • Game state extraction: The DOOM engine’s video output (frame buffer) is sampled and compressed into a set of features (e.g., obstacles, enemies, spatial cues).
  • Encoding algorithm: These features are mapped to electrical stimulation patterns using an algorithm that preserves salient information for the neural network.
  • Closed-loop feedback: As the neurons fire in response, their activity is decoded and mapped back to in-game actions (move, shoot, etc.), completing the loop.

Example: Translating Game Visuals to Neural Stimulation

The following code is an illustrative example and has not been verified against official documentation. Please refer to the official docs for production-ready code.

```python
# Pseudocode for mapping DOOM frames to neural stimulation patterns

def extract_features(frame):
    # Downsample the frame to a low-resolution grid
    grid = downsample(frame, (8, 8))
    # Extract salient features (edges, colors, etc.)
    return extract_salient_features(grid)

def encode_to_stimulation(features):
    # Map the feature vector to an electrode stimulation pattern
    return map_to_electrodes(features)

def run_game_step(frame):
    features = extract_features(frame)
    pattern = encode_to_stimulation(features)
    stimulate_neurons(pattern)
    response = read_neural_activity()
    action = decode_neural_response(response)
    send_action_to_game(action)
```
This is a high-level sketch; in practice, the actual code for CL1’s interface is proprietary and tightly coupled to its hardware and biOS (Biological Intelligence Operating System). Practitioners interested in the specifics should refer to the official documentation.

What emerges is not just pattern recognition, but adaptive behavior: the neural substrate “learns” through feedback, adjusting its responses to maximize in-game survival.

For a discussion of designing system output and input for operators (not just machines), see our previous post on log message patterns for operations.

Key Breakthroughs and Code Examples

The leap from Pong to DOOM is significant. Pong is a two-dimensional, low-complexity task with a single moving object; DOOM introduces a 3D maze, multiple enemies, and complex spatial reasoning. This raised two core challenges:

  • Feature compression: DOOM’s graphics had to be distilled into low-dimensional, relevant signals that neurons could process.
  • Real-time feedback: The system needed to interpret neural firing patterns quickly enough to keep up with game speed.
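
The feature-compression step can be illustrated with a runnable, self-contained sketch: block-averaging a grayscale frame down to a coarse 8×8 grid. The frame size and pooling scheme are assumptions chosen for illustration; CL1's actual encoding is proprietary.

```python
def downsample(frame, grid=(8, 8)):
    """Average-pool a 2D list of pixel values into a grid[0] x grid[1] summary."""
    rows, cols = len(frame), len(frame[0])
    bh, bw = rows // grid[0], cols // grid[1]  # block height and width
    out = []
    for gy in range(grid[0]):
        row = []
        for gx in range(grid[1]):
            # Collect all pixels in this block and average them
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A synthetic 64x64 grayscale "frame" with a simple diagonal gradient
frame = [[(x + y) % 256 for x in range(64)] for y in range(64)]
grid = downsample(frame)
print(len(grid), len(grid[0]))  # 8 8
```

Compressing a frame this way throws away detail by design: the goal is a low-dimensional signal whose values can be mapped onto a limited number of stimulation electrodes.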

Example: Real-Time Neural Response Decoding

The following code is an illustrative example and has not been verified against official documentation. Please refer to the official docs for production-ready code.

```python
# Simplified logic for mapping neural activity to game actions
def decode_neural_response(response):
    # Sum spike counts from the electrode groups assigned to each action
    left_activity = sum(response[e] for e in left_electrodes)
    right_activity = sum(response[e] for e in right_electrodes)
    shoot_activity = sum(response[e] for e in shoot_electrodes)
    if left_activity > threshold:
        return "move_left"
    elif right_activity > threshold:
        return "move_right"
    elif shoot_activity > threshold:
        return "shoot"
    return "idle"
```

This approach is radically different from deep learning on GPUs. Instead of gradient descent and backpropagation, adaptation happens in the wetware, via synaptic plasticity and real-time feedback. For practitioners, working with biocomputers means a paradigm shift in how you design, test, and debug adaptive systems.
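
The contrast can be caricatured in a few lines of code: a digital model follows an explicit loss gradient, while a wetware substrate adjusts itself from reward-like feedback. Both update rules below are toy stand-ins written for this comparison; neither models real synaptic plasticity or CL1's actual training signal.

```python
def gradient_step(w, x, target, lr=0.1):
    """Digital-style update: move the weight down the loss gradient."""
    grad = 2 * (w * x - target) * x  # d/dw of (w*x - target)^2
    return w - lr * grad

def plasticity_step(strength, reward, lr=0.1):
    """Feedback-style update: strengthen on reward, weaken otherwise."""
    return strength + lr * (1 if reward > 0 else -1)

# Gradient descent needs an explicit, differentiable objective...
w = 0.0
for _ in range(50):
    w = gradient_step(w, x=1.0, target=2.0)
print(round(w, 2))  # converges toward the target, 2.0

# ...while the feedback rule only needs a scalar reward signal
s = plasticity_step(0.5, reward=1)  # connection strengthened to ~0.6
```

The practical consequence: with wetware you shape behavior by shaping the feedback, not by differentiating a loss, which changes how you design, test, and debug the system.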

For a modern .NET game engine approach, compare this biological system to our MonoGame cross-platform game development analysis.

Considerations and Trade-offs

Despite the headlines, CL1 is an early-stage research tool, not a production-ready AI system. Here’s what practitioners need to weigh before considering biological computing for serious workloads:

  • Cost: At $35,000 per device (TechRadar), CL1 is far more expensive than commodity digital hardware for comparable tasks.
  • Reproducibility: Each batch of neurons may behave differently—biological variability makes outcomes less deterministic than silicon-based systems.
  • Scaling: Current devices operate with hundreds of thousands of neurons, orders of magnitude fewer than the roughly 86 billion neurons in a human brain or the billions of parameters in large digital neural networks. Scaling up remains a massive technical challenge.
  • Longevity and Maintenance: While CL1 can keep neurons alive for up to 6 months, biological components require careful environmental controls and may degrade unpredictably over time.
  • Ethical and Regulatory Issues: Using human-derived tissues raises unresolved ethical, legal, and societal questions, especially as capabilities advance.

| Dimension | CL1 (Biological) | Digital AI (GPU/TPU) |
|---|---|---|
| Power draw | Very low | High (kW for large models) |
| Determinism | Low (biological variability) | High (reproducible runs) |
| Scaling | Limited by biology | Scalable with hardware |
| Cost | $35,000 (CL1) | $5,000+ (NVIDIA A100 GPU) |
| Longevity | Months (with care) | Years (hardware lifespan) |

Notable alternatives include traditional digital AI accelerators (GPUs, TPUs), neuromorphic hardware (e.g., Intel Loihi), and simulation-based brain emulators. Each offers different trade-offs in power, scalability, and maturity. For digital performance and cost comparisons, see our cloud VM performance leaderboard.

Conclusion & Next Steps

CL1’s demonstration—living human brain cells learning to play DOOM—marks a turning point for biological computing. The experiment shows that “wetware” can learn, adapt, and perform complex tasks that once seemed out of reach for anything but traditional silicon. But for now, CL1 is a tool for research labs and a signpost for the future—not a replacement for digital AI or classical CPUs.

If you’re interested in building or evaluating next-generation intelligence systems, monitor ongoing work in biological computing and neuromorphic hardware. The next leap may not come from more transistors—but from the living tissue inside a lab dish. For practical applications and operational reliability, keep an eye on digital alternatives and their evolving trade-offs. To explore more about translating real-world complexity into machine-understandable signals, check our analysis of log design for operators.

For official CL1 hardware and API details, refer to the Cortical Labs CL1 documentation.

Sources and References

This article draws on the sources cited inline:

  • Popular Science: coverage of the CL1 DOOM demonstration
  • TechRadar: report of the CL1's $35,000 price
  • Cortical Labs: official CL1 hardware and biOS documentation