Hi FMS FSDP folks,
I wanted to share a small but unusual language-runtime project that may be relevant to the broader question of language-model training cost structure and capability development, even though it sits far outside the usual GPU training path.
We built a public demo line called Engram and deployed it on a commodity ESP32-C3.
Current public numbers:
Important scope note:
This is not presented as unrestricted, open-input, native LLM generation on an MCU.
The board-side path is closer to a flash-resident, table-driven runtime with:
- packed token weights
- hashed lookup structures
- fixed compiled probe batches
- streaming fold / checksum-style execution over precompiled structures
So this is not a standard dense training or inference stack at all. It is closer to a task-specialized language runtime whose behavior has been crystallized into a compact executable form under severe physical constraints.
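To make the shape of this concrete, here is a minimal C sketch of what a flash-resident, table-driven token runtime of this kind could look like. It is not Engram's code; every name, table size, and format below is a hypothetical illustration of the mechanisms listed above (packed entries, a hashed lookup table, a fixed probe batch, and a streaming hash fold):

```c
/*
 * Hypothetical sketch, not Engram's implementation. In a real deployment
 * the packed table would be produced offline and placed in flash (e.g. a
 * `const` array in .rodata); here it is built at startup so the example
 * is self-contained and runnable on a host machine.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TABLE_SLOTS 64u   /* power-of-two slot count (hypothetical)  */
#define PROBE_BATCH 4u    /* fixed probe-batch length (hypothetical) */

/* One packed entry: hashed context key, 8-bit weight, next-token id. */
typedef struct {
    uint32_t key_hash;
    uint8_t  weight_q8;
    uint8_t  next_token;
} entry_t;

/* Stand-in for a flash-resident table; zero key_hash marks an empty slot. */
static entry_t table[TABLE_SLOTS];

/* FNV-1a as a streaming fold over the context bytes. */
static uint32_t fold_hash(const uint8_t *ctx, size_t n) {
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < n; i++)
        h = (h ^ ctx[i]) * 16777619u;
    return h;
}

/* "Offline" packing step: place a rule at its hash slot, probing forward.
 * In this sketch a rule is silently dropped if its probe batch is full. */
static void pack_rule(const char *ctx, uint8_t tok, uint8_t w) {
    uint32_t h = fold_hash((const uint8_t *)ctx, strlen(ctx));
    for (uint32_t i = 0; i < PROBE_BATCH; i++) {
        entry_t *e = &table[(h + i) & (TABLE_SLOTS - 1)];
        if (e->key_hash == 0) {
            e->key_hash   = h;
            e->weight_q8  = w;
            e->next_token = tok;
            return;
        }
    }
}

/* Runtime lookup: always scans exactly PROBE_BATCH slots, so the probe
 * cost is fixed at compile time regardless of where (or whether) it hits. */
static int lookup(const char *ctx, uint8_t *tok_out) {
    uint32_t h = fold_hash((const uint8_t *)ctx, strlen(ctx));
    const entry_t *best = NULL;
    for (uint32_t i = 0; i < PROBE_BATCH; i++) {
        const entry_t *e = &table[(h + i) & (TABLE_SLOTS - 1)];
        if (e->key_hash == h && (best == NULL || e->weight_q8 > best->weight_q8))
            best = e;
    }
    if (best == NULL)
        return 0;
    *tok_out = best->next_token;
    return 1;
}

int main(void) {
    pack_rule("the cat", 42, 200); /* hypothetical context -> token rules */
    pack_rule("the dog", 17, 150);

    uint8_t tok;
    if (lookup("the cat", &tok))
        printf("hit: next token id = %u\n", (unsigned)tok);
    else
        printf("miss: fall back to default path\n");
    return 0;
}
```

The fixed probe batch is the detail worth noting: because the runtime always scans the same number of slots, lookup cost is constant and branch-predictable, which is the kind of property that matters far more on an MCU than on a GPU.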
Repo:
https://github.com/Alpha-Guardian/Engram
I'm posting here because fms-fsdp is one of the clearest public examples of how seriously the community is pushing on training efficiency, throughput, and cost-aware development for large language systems.
I'd be curious whether systems like this should be thought of as:
- completely outside the normal training/inference family
- an extreme endpoint where some task capability is no longer best served by ever more efficient dense training paths
- or an early sign that future language systems may combine dense model training for broad capability with highly specialized executable forms for certain capability slices
If this direction is relevant to your team, I’d be glad to compare notes.