I spent months trying to break the O(N^2) bottleneck of Transformers. Today I'm releasing Pulse-Field v3.0, an event-driven, neuro-symbolic architecture that runs in O(N) time.
Benchmarks vs GPT-2 style baseline (on CPU):
- Latency: 5 ms (vs. 60 ms for the baseline)
- Context: tested up to 100k tokens with a <3 ms latency penalty
- Size: starts at ~20 MB (grows dynamically)
The architecture uses "Event-Driven Routing" instead of dense attention matrices. Tokens travel as impulses through a graph of specialized "crystals" (logic/memory nodes), activating only relevant paths.
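To make that concrete, here is a minimal sketch of how I'd describe the routing idea in code: each token is an impulse that follows only the most-excited neighbor through a sparse graph of nodes, so the work per token is bounded by a hop budget instead of an N x N attention matrix. All names below (Crystal, PulseField, route_impulse) are hypothetical illustrations, not the actual pulse-field-core API.

```python
import numpy as np

class Crystal:
    """A specialized node: a gating key plus a small transform."""
    def __init__(self, dim: int, rng: np.random.Generator):
        self.key = rng.standard_normal(dim) / np.sqrt(dim)
        self.weight = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.neighbors: list["Crystal"] = []

    def excitation(self, impulse: np.ndarray) -> float:
        # How strongly this crystal's gate responds to the incoming impulse.
        return float(self.key @ impulse)

    def transform(self, impulse: np.ndarray) -> np.ndarray:
        return np.tanh(self.weight @ impulse)

class PulseField:
    """A sparse graph of crystals; each token activates only one short path."""
    def __init__(self, num_crystals: int, dim: int, fan_out: int = 4, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.crystals = [Crystal(dim, rng) for _ in range(num_crystals)]
        for c in self.crystals:
            c.neighbors = list(rng.choice(self.crystals, size=fan_out, replace=False))
        self.entry = self.crystals[0]

    def route_impulse(self, impulse: np.ndarray, threshold: float = 0.1,
                      max_hops: int = 8) -> np.ndarray:
        """Follow the most-excited neighbor until no gate fires or the hop budget ends."""
        node = self.entry
        for _ in range(max_hops):
            impulse = node.transform(impulse)
            best = max(node.neighbors, key=lambda n: n.excitation(impulse))
            if best.excitation(impulse) < threshold:
                break  # no downstream gate fires: the impulse settles here
            node = best
        return impulse

    def forward(self, tokens: np.ndarray) -> np.ndarray:
        # Each token is routed independently, so total work is O(N * max_hops)
        # rather than O(N^2) pairwise attention.
        return np.stack([self.route_impulse(t) for t in tokens])

if __name__ == "__main__":
    dim = 64
    field = PulseField(num_crystals=128, dim=dim)
    tokens = np.random.default_rng(1).standard_normal((1000, dim))
    out = field.forward(tokens)
    print(out.shape)  # (1000, 64); cost grows linearly with the number of tokens
```

The real core differs in the details (the repo is linked below), but this is the shape of the argument for linear scaling: per-token cost depends on path length through the graph, not on sequence length.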
This entire core was architected and coded in a 55-minute sprint using a swarm of AI agents (reasoning models) that I orchestrated to overcome the "average output" bias of standard LLMs.
Happy to answer questions about the routing logic!
https://github.com/makimilan/pulse-field-core/blob/main/puls...