ben-santora/SLM_Tests

Bare Metal SLM Inference

Local SLM testing. No wrappers. Direct hardware utilization via compiled binaries.

Hardware

Intel i7-1165G7, 12GB RAM, AVX-512 optimized. See hardware-profile.json.
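The i7-1165G7 (Tiger Lake) exposes AVX-512 extensions; on Linux, support can be confirmed before building by inspecting the kernel-reported CPU flags. This one-liner is a sketch for verification, not part of the repository:

```shell
# List the unique AVX-512 feature flags reported in /proc/cpuinfo.
# On a Tiger Lake i7-1165G7 the output should include avx512f,
# avx512bw, and avx512vl, among others.
grep -o 'avx512[_a-z]*' /proc/cpuinfo | sort -u
```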

Method

llama.cpp compiled from source with AVX-512 flags. See inference-config.json.
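A build along these lines produces the AVX-512 binaries. The flags below are illustrative: the option names follow llama.cpp's CMake interface (`GGML_AVX512`, `GGML_AVX512_VNNI`), while the flags actually used for these tests are recorded in inference-config.json.

```shell
# Illustrative build sketch - see inference-config.json for the
# flags used in this repository's tests.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_AVX512=ON \
    -DGGML_AVX512_VNNI=ON
cmake --build build --config Release -j"$(nproc)"
```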

Tests

The tests are in no particular order; see the .md file for each model's results.
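A typical test run looks like the sketch below. The binary name and flags follow llama.cpp's `llama-cli`; the model path is a placeholder, and the actual per-model parameters are in inference-config.json and the per-model .md files.

```shell
# Illustrative invocation - model path is a placeholder.
# -t 8 matches the i7-1165G7's 4 cores / 8 threads.
./build/bin/llama-cli -m models/model.gguf -t 8 -n 128 \
    -p "Explain AVX-512 in one sentence."
```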
