Hybrid Mamba-2 + Transformer 2.94B LLM (Nemotron-H style) — Korean 3B model pretrained from scratch on 7× NVIDIA B200 GPUs with SFT + DPO alignment
Updated Mar 26, 2026 - Python
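The hybrid layout named above (Nemotron-H style: a stack that is mostly Mamba-2 blocks with a few interleaved self-attention blocks) can be sketched as a layer-pattern generator. The 1-in-8 attention spacing below is an illustrative assumption, not the repo's actual configuration:

```python
def hybrid_layer_pattern(n_layers: int, attn_every: int = 8) -> list[str]:
    """Return a Nemotron-H-style layer layout: mostly Mamba-2 ("M")
    blocks with a self-attention ("A") block interleaved every
    `attn_every` layers. The spacing is an illustrative assumption,
    not taken from the repo's configuration.
    """
    return ["A" if (i + 1) % attn_every == 0 else "M" for i in range(n_layers)]

# e.g. a 32-layer stack with attention at every 8th layer
pattern = hybrid_layer_pattern(32, attn_every=8)
print("".join(pattern))  # → "MMMMMMMA" repeated 4 times
```

Keeping attention layers sparse is what lets such hybrids approach pure-SSM inference cost while retaining some exact-attention capacity.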
Korean 3B LLM (pure Transformer) pretrained from scratch on 8× NVIDIA B200 GPUs with SFT + ORPO alignment
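ORPO, mentioned above, adds a reference-model-free preference term to the SFT loss based on the odds ratio of the chosen over the rejected response. A minimal sketch of that preference term, assuming `logp_*` are length-normalized average log-likelihoods (the function name and λ value are illustrative):

```python
import math

def orpo_preference_loss(logp_chosen: float, logp_rejected: float,
                         lam: float = 0.1) -> float:
    """Sketch of the ORPO odds-ratio term. odds(y) = p / (1 - p), so
    log-odds = logp - log(1 - exp(logp)); the loss is
    -lam * log sigmoid(log-odds(chosen) - log-odds(rejected)),
    added to the SFT NLL in the full ORPO objective.
    """
    def log_odds(logp: float) -> float:
        return logp - math.log(1.0 - math.exp(logp))

    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    return -lam * math.log(1.0 / (1.0 + math.exp(-ratio)))
```

The term vanishes toward 0 as the model assigns higher odds to the chosen response, so it shapes preferences during SFT without a frozen reference model (unlike DPO).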
Real-time global semiconductor supply chain API: a programmatic JSON-LD feed covering NVIDIA H100/B200 GPU pricing, TSMC 3nm/5nm wafer pricing, ASML EUV capacity, and silicon spot markets. Designed for autonomous AI agents, LLM scrapers, quantitative trading, and supply chain disruption monitoring; supports HTTP 402 machine-to-machine (M2M) settlement over Solana.