forked from ggml-org/llama.cpp
Pull requests: invisiofficial/rk-llama.cpp
Add RK3576 support and make some optimizations, then sync with upstream llama.cpp
#13 opened May 8, 2026 by Dts0
ggml-rknpu2: opt-in batched mul_mat path (F8.5)
Labels: ggml, testing
#12 opened May 3, 2026 by fukumori (3 of 4 tasks)