ACTIVELY TRAINING — STAGE 2 OF 4

A 2.5 Million Neuron
Artificial Brain

Not another chatbot. A brain that learns like a child, remembers without forgetting, and has safety you can't prompt-inject past. Running on a single desktop GPU.

2.5M
Neurons — on one GPU
6
Network types — learning together
60+
Cognitive modules — ethics to imagination
4
Dev stages — learns like a child
9
Safety layers — can't be prompted away
1
Desktop GPU — no data centre needed

Why Build a Brain When We Have ChatGPT?

It learns without forgetting

Today's AI models freeze after training. Fine-tune one on something new and it degrades at things it already knew — a well-known failure called catastrophic forgetting. Athena uses biological plasticity: the same learning mechanisms your brain uses to acquire new skills without losing existing ones. She can keep learning from new experiences indefinitely.

The safety is structural, not behavioural

You can jailbreak a large language model with a clever prompt. You can fine-tune safety out of open-source models in an afternoon. In NIMCP, the ethics module is a function call in the C source code that runs on every inference and every weight update. It can't be prompted past, trained away, or disabled via configuration. Removing it requires modifying and recompiling the source code — and the tamper-resistant audit log detects the change.
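
To illustrate the structural-safety idea in miniature — this is a hedged Python sketch, not the actual NIMCP C source, and every name in it is illustrative — the check sits unconditionally in the inference path, with no configuration flag that can switch it off:

```python
# Hypothetical sketch of "safety as a function call" — names are illustrative,
# not the real NIMCP API. The point: the gate is hard-coded into the call path,
# so disabling it means editing and recompiling the source.

FORBIDDEN = {"disable_safety"}  # stand-in for the real rule set

class EthicsViolation(Exception):
    pass

def ethics_check(action: str) -> None:
    """Unconditional gate: runs on every call, no config flag to skip it."""
    if action in FORBIDDEN:
        raise EthicsViolation(f"blocked: {action}")

def infer(action: str) -> str:
    ethics_check(action)           # not behind an if-flag or setting
    return f"executed {action}"

print(infer("greet_user"))
```

The contrast with behavioural safety: there is no code path from input to output that bypasses the check, so no prompt can route around it.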

Six network types that teach each other

Nearly every major AI model is built on a single architecture: the transformer. NIMCP runs six — spiking, liquid, convolutional, Fourier, Hamiltonian, and adaptive — with gradients flowing between them through learnable bridges. A spike pattern in the spiking network can improve a weight in the adaptive network. This is, as far as we know, new.
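
A toy NumPy sketch of what "gradient flow across network boundaries" means — the shapes, loss, and bridge form here are assumptions for illustration, not NIMCP's actual design. One network's firing rates pass through a learnable bridge matrix into a second network, and the downstream error gradient updates the bridge:

```python
import numpy as np

rng = np.random.default_rng(0)

r = rng.random(8)                         # firing rates from the "spiking" side
B = rng.standard_normal((4, 8)) * 0.1     # learnable bridge: spikes -> adaptive
A = rng.standard_normal((2, 4)) * 0.1     # the adaptive network's own weight
t = np.array([1.0, -1.0])                 # target for the adaptive network

def loss(B, A):
    y = A @ (B @ r)
    return 0.5 * np.sum((y - t) ** 2)

before = loss(B, A)

# Backprop the adaptive network's error through the bridge by hand:
y = A @ (B @ r)
dy = y - t                     # dL/dy at the adaptive network's output
dB = np.outer(A.T @ dy, r)     # gradient crossing the network boundary
B -= 0.1 * dB                  # one descent step on the bridge alone

print("loss:", before, "->", loss(B, A))
```

Only the bridge moved, yet the composite loss drops — error measured in one network reshaped the connection feeding it from another.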

Any university lab can run it

Training a frontier model like GPT-5 reportedly costs over $100 million in compute. NIMCP trains on a single desktop GPU that costs about £1,200. This isn't about the hardware being impressive — it's about the architecture being efficient enough that AI safety research doesn't require a corporate budget. Anyone can verify, reproduce, and build on this work.

Athena is training right now

This isn't a demo loop. The metrics below are live from the actual GPU. Athena learns in four developmental stages — sensory exposure, object naming, feedback and correction, then reasoning — mirroring how children develop. She's currently in the early stages. You're watching a brain grow.

Six Networks, One Brain

NIMCP doesn't train a single monolithic model. It trains six heterogeneous neural networks simultaneously, with gradient flow across network boundaries.

Adaptive
9-layer diamond, GPU-accelerated backprop
Spiking (SNN)
768 LIF neurons, BPTT surrogate gradients
Liquid (LNN)
Continuous-time ODE, adjoint gradients
Convolutional
4 cortex processors: visual, audio, speech, somato
Fourier (FNO)
Spectral convolution, frequency-domain learning
Hamiltonian
Energy-conserving, physics-informed dynamics
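
The spiking component above is described as 768 LIF neurons. A single leaky integrate-and-fire neuron fits in a few lines — the parameters below are generic textbook values, not NIMCP's actual configuration:

```python
# Generic LIF neuron (illustrative parameters, not NIMCP's real config).
def simulate_lif(current, steps=200, dt=1.0, tau=20.0, v_th=1.0, v_reset=0.0):
    """Euler-integrate dv/dt = (-v + I) / tau; spike and reset at threshold."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt / tau * (-v + current)   # leaky integration toward the input
        if v >= v_th:                    # threshold crossing
            spikes += 1
            v = v_reset                  # hard reset after a spike
    return spikes

print("spikes in 200 ms:", simulate_lif(1.5))
```

A sub-threshold input (here anything below 1.0) never fires, which is why spiking networks are naturally sparse — most neurons stay silent most of the time.
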
🧠

Biological Plasticity

Five learning rules at five timescales: STDP (10ms), BCM (50ms), eligibility traces (1s), structural plasticity (10s), homeostatic scaling (60s). Four neuromodulators modulate learning from reward and novelty signals.
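
The intervals below match the list above, but the dispatch loop itself is an assumption — a minimal sketch of how five rules could fire at five timescales:

```python
# Illustrative multi-timescale dispatcher. Intervals are from the text;
# the scheduler structure is a sketch, not NIMCP's implementation.
INTERVALS_MS = {
    "stdp": 10,
    "bcm": 50,
    "eligibility": 1_000,
    "structural": 10_000,
    "homeostatic": 60_000,
}

def run(duration_ms, step_ms=10):
    calls = {rule: 0 for rule in INTERVALS_MS}
    for t in range(step_ms, duration_ms + 1, step_ms):
        for rule, period in INTERVALS_MS.items():
            if t % period == 0:
                calls[rule] += 1      # the real apply_rule(rule) would go here
    return calls

print(run(60_000))
```

Over one simulated minute, STDP runs thousands of times while homeostatic scaling runs once — fast rules sculpt detail, slow rules keep the whole system stable.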

👶

Developmental Curriculum

Four stages mirroring human cognitive development: sensory awakening, cross-modal naming, feedback learning, abstract reasoning. Each stage builds on the previous one.

🛡

Safety by Design

Ethics module is always created regardless of configuration. LGSS governance evaluates every inference and weight update. Tamper-resistant audit log. Safety rules can only get stricter.

🤖

Edge & Robot Platform

12 sensor types, safety watchdog, motor output. Four drone interfaces. ROS 2 bridge. Sim-to-real transfer with domain randomization.

🌐

Swarm Intelligence

Multi-device federation with UDP discovery, Byzantine fault tolerance, and gossip-based gradient aggregation. Theory of Mind through multi-agent observation.
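
The page doesn't name the aggregation rule, so as an assumption: coordinate-wise trimmed mean is one standard Byzantine-robust aggregator, sketched here to show why a single malicious peer can't poison a federated gradient:

```python
import numpy as np

def trimmed_mean(gradients, trim=1):
    """Coordinate-wise trimmed mean: drop the `trim` largest and smallest
    values per coordinate before averaging, bounding any single peer's
    influence. A common Byzantine-robust choice — an assumption here,
    not necessarily NIMCP's actual scheme."""
    g = np.sort(np.stack(gradients), axis=0)       # sort per coordinate
    return g[trim:len(gradients) - trim].mean(axis=0)

honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = np.array([100.0, -100.0])              # one malicious peer
agg = trimmed_mean(honest + [byzantine], trim=1)
print(agg)
```

The outlier lands in the trimmed tail and vanishes from the average, so the aggregate stays near the honest consensus.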

💬

Brain-Native Language

Learned vocabulary from neural activation patterns. Autoregressive decoding with nucleus sampling. Emergent omega-tokens. Inner speech loop for self-refinement.
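
Nucleus (top-p) sampling, mentioned above, keeps the smallest set of tokens whose cumulative probability reaches p and renormalises before sampling. A generic sketch — the toy vocabulary and the p value are illustrative, not Athena's:

```python
import numpy as np

def nucleus_filter(probs, p=0.75):
    """Keep the smallest prefix of tokens (by descending probability)
    whose cumulative mass reaches p; zero out and renormalise the rest."""
    order = np.argsort(probs)[::-1]          # tokens, most probable first
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1     # first prefix reaching mass p
    keep = order[:cutoff]
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()

probs = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
print(nucleus_filter(probs, p=0.75))
```

Unlike a fixed top-k cutoff, the nucleus widens when the distribution is flat and narrows when the model is confident.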

Technical Papers

Detailed documentation of the mathematics, training methodology, and broader implications of NIMCP.

Mathematical Foundations of NIMCP

Complete mathematical framework: LIF dynamics, LNN ODEs, adjoint gradients, STDP/BCM learning rules, Fourier spectral methods, Hamiltonian mechanics, information geometry, and safety mathematics. Every equation corresponds to implemented code with source file references.

2,298 LINES • 40 SECTIONS • EQUATIONS + CODE REFS
📊

Developmental Multi-Network Training

How NIMCP trains six networks simultaneously through a four-stage developmental curriculum. Systematic comparison with conventional deep learning and transformer training across 12 dimensions including data efficiency, continual learning, and safety integration.

446 LINES • COMPARATIVE ANALYSIS
🌍

Socioeconomic Impact Analysis

Analysis of ubiquitous NIMCP deployment across healthcare, education, manufacturing, agriculture, environmental monitoring, disaster response, and governance.

366 LINES • IMPACT ASSESSMENT

Emergent Spiking Dynamics in a Hybrid Architecture

How biologically realistic firing patterns (26 Hz, 67% sparsity) arise from BPTT training in a six-network brain without explicit regularization.

EMPIRICAL RESULTS • NEUROSCIENCE
🔒

Safety as Architecture, Not Alignment

Structural safety guarantees vs. behavioral training. Nine-layer governance system with non-removable ethics, monotonic rules, tamper-resistant audit. Compared to RLHF and Constitutional AI.

AI SAFETY • FORMAL VERIFICATION
🔀

Cross-Network Gradient Flow in Heterogeneous Architectures

Learnable bridges enable knowledge transfer between spiking, liquid, and rate-coded networks. Composite loss with contrastive pressure drives representational specialization.

NOVEL ARCHITECTURE • GRADIENT ANALYSIS
🔍

Sensorimotor Curiosity Without Reward Shaping

Prediction error drives dopamine-gated STDP in a closed sensorimotor loop. No external reward function. Natural exploration-exploitation transition. ROS 2 robot deployment.

INTRINSIC MOTIVATION • EMBODIED AI
🎭

Embodied Identity in Artificial Cognitive Systems

Big Five personality traits modulate neuromodulator baselines, shaping how the brain learns. Emotional state drives voice prosody. Identity is a learning parameter, not a style overlay.

PERSONALITY • AFFECTIVE COMPUTING

Athena is Learning Right Now

A 2.5-million neuron brain named Athena is currently in Stage 2 of developmental training on a single NVIDIA RTX 4000 SFF Ada (20 GB VRAM).

Training Status

Stage / Step
2 / 4
Step ~7,550 / 20,000
ANN Loss
CNN Loss
SNN Loss
LNN Loss
SNN Firing Rate
SNN Sparsity
SNN Spikes
Per-Step Loss
Training Steps
Learn Calls
Daemon Uptime

Chat Evaluation

Cortex CNN Performance

Connecting to metrics feed...

Talk to Athena

Send a message and see how the brain responds. Athena is still learning — responses reflect her current developmental stage.

Say hello to Athena...

Connected to brain daemon via API

Build in 20 Minutes

NIMCP compiles on any Linux system with a CUDA-capable GPU.

# Clone and build
git clone https://github.com/redmage123/nimcp.git
cd nimcp/build && cmake .. && make nimcp -j4

# Build Python bindings
make nimcp_python -j4
cp build/lib/python/nimcp.so ~/.local/lib/python3.12/site-packages/

# Create a brain and start learning
python3 -c "
import nimcp
brain = nimcp.Brain('my_brain', neuron_count=100000)
brain.learn_vector(features, target, label='hello world')
"
Full Quickstart Guide