Not another chatbot. A brain that learns like a child, remembers without forgetting, and has safety you can't prompt-inject past. Running on a single desktop GPU.
Today's AI models freeze after training. Teach ChatGPT something new and it forgets something old — a known problem called catastrophic forgetting. Athena uses biological plasticity: the same learning mechanisms your brain uses to acquire new skills without losing existing ones. She can keep learning from new experiences indefinitely.
You can jailbreak a large language model with a clever prompt. You can fine-tune safety out of open-source models in an afternoon. In NIMCP, the ethics module is a function call in the C source code that runs on every inference and every weight update. It can't be prompted past, trained away, or disabled via configuration. Removing it requires modifying and recompiling the source code — and the tamper-resistant audit log detects the change.
Every major AI model uses one architecture type (transformers). NIMCP runs six — spiking, liquid, convolutional, Fourier, Hamiltonian, and adaptive — with gradients flowing between them through learnable bridges. A spike pattern in the spiking network can improve a weight in the adaptive network. This is, as far as we know, new.
Training GPT-5 cost over $100 million in compute. NIMCP trains on a single desktop GPU that costs about £1,200. This isn't about the hardware being impressive — it's about the architecture being efficient enough that AI safety research doesn't require a corporate budget. Anyone can verify, reproduce, and build on this work.
This isn't a demo loop. The metrics below are live from the actual GPU. Athena learns in four developmental stages — sensory exposure, object naming, feedback and correction, then reasoning — mirroring how children develop. She's currently in the early stages. You're watching a brain grow.
NIMCP doesn't train a single monolithic model. It trains six heterogeneous neural networks simultaneously, with gradient flow across network boundaries.
Five learning rules at five timescales: STDP (10ms), BCM (50ms), eligibility traces (1s), structural plasticity (10s), homeostatic scaling (60s). Four neuromodulators modulate learning from reward and novelty signals.
Four stages mirroring human cognitive development: sensory awakening, cross-modal naming, feedback learning, abstract reasoning. Each stage builds on the previous one.
Ethics module is always created regardless of configuration. LGSS governance evaluates every inference and weight update. Tamper-resistant audit log. Safety rules can only get stricter.
12 sensor types, safety watchdog, motor output. Four drone interfaces. ROS 2 bridge. Sim-to-real transfer with domain randomization.
Multi-device federation with UDP discovery, Byzantine fault tolerance, and gossip-based gradient aggregation. Theory of Mind through multi-agent observation.
Learned vocabulary from neural activation patterns. Autoregressive decoding with nucleus sampling. Emergent omega-tokens. Inner speech loop for self-refinement.
Detailed documentation of the mathematics, training methodology, and broader implications of NIMCP.
Complete mathematical framework: LIF dynamics, LNN ODEs, adjoint gradients, STDP/BCM learning rules, Fourier spectral methods, Hamiltonian mechanics, information geometry, and safety mathematics. Every equation corresponds to implemented code with source file references.
2,298 LINES • 40 SECTIONS • EQUATIONS + CODE REFS
How NIMCP trains six networks simultaneously through a four-stage developmental curriculum. Systematic comparison with conventional deep learning and transformer training across 12 dimensions including data efficiency, continual learning, and safety integration.
446 LINES • COMPARATIVE ANALYSIS
Analysis of ubiquitous NIMCP deployment across healthcare, education, manufacturing, agriculture, environmental monitoring, disaster response, and governance.
366 LINES • IMPACT ASSESSMENT
How biologically realistic firing patterns (26 Hz, 67% sparsity) arise from BPTT training in a six-network brain without explicit regularization.
EMPIRICAL RESULTS • NEUROSCIENCE
Structural safety guarantees vs. behavioral training. Nine-layer governance system with non-removable ethics, monotonic rules, tamper-resistant audit. Compared to RLHF and Constitutional AI.
AI SAFETY • FORMAL VERIFICATION
Learnable bridges enable knowledge transfer between spiking, liquid, and rate-coded networks. Composite loss with contrastive pressure drives representational specialization.
NOVEL ARCHITECTURE • GRADIENT ANALYSIS
Prediction error drives dopamine-gated STDP in a closed sensorimotor loop. No external reward function. Natural exploration-exploitation transition. ROS 2 robot deployment.
INTRINSIC MOTIVATION • EMBODIED AI
Big Five personality traits modulate neuromodulator baselines, shaping how the brain learns. Emotional state drives voice prosody. Identity is a learning parameter, not a style overlay.
PERSONALITY • AFFECTIVE COMPUTING
A 2.5-million-neuron brain named Athena is currently in Stage 2 of developmental training on a single NVIDIA RTX 4000 SFF Ada (20 GB VRAM).
Send a message and see how the brain responds. Athena is still learning — responses reflect her current developmental stage.
NIMCP compiles on any Linux system with a CUDA-capable GPU.