Issues: ggerganov/llama.cpp

Issues list

Label key:
  bug: Something isn't working
  bug-unconfirmed: bug report not yet confirmed
  enhancement: New feature or request
  AMD GPU: Issues specific to AMD GPUs
  critical severity: crashing, corruption, or data loss
  high severity: malfunctioning, hinders important workflows
  medium severity: malfunctioning features, but still usable
  low severity: cosmetic issues, non-critical UI glitches

Bug: CUDA error: out of memory - Phi-3 Mini 128k prompted with 20k+ tokens on 4GB GPU [bug-unconfirmed, critical severity]
#7885 opened Jun 11, 2024 by kozuch

Bug: get-wikitext-103.sh seems not working [bug-unconfirmed, low severity]
#7878 opened Jun 11, 2024 by Eddie-Wang1120

Bug: GGML_ASSERT: ggml.c:12793: ne2 == ne02 zsh: abort ./finetune --model-base --train-data ./Llama3-8B-Chinese-Chat-fintune/111.tx [bug-unconfirmed, low severity]
#7877 opened Jun 11, 2024 by CodeBobobo

Bug: multithreading for requests, model infer service failed [bug-unconfirmed, low severity]
#7876 opened Jun 11, 2024 by liuzhipengchd

Feature Request: Add Paligemma support [enhancement]
#7875 opened Jun 11, 2024 by nischalj10

Bug: Random output after the last update [bug-unconfirmed, high severity]
#7874 opened Jun 11, 2024 by alexcardo

Bug: `scripts/run-with-preset.py` fails on `--tensor-split` option when run on non-GPU-enabled system [bug-unconfirmed, low severity]
#7864 opened Jun 10, 2024 by HanClinto

Bug: Possible precision loss when using KV cache [bug-unconfirmed, medium severity]
#7859 opened Jun 10, 2024 by uwu-420

Bug: server /completion endpoint no longer accepts numeric tokens [bug, medium severity]
#7855 opened Jun 10, 2024 by matteoserva

Bug: Server ends up in infinite loop if number of requests in the batch is greater than parallel slots with system prompt [bug-unconfirmed, high severity]
#7834 opened Jun 8, 2024 by kdhingra307

iGPU offloading Bug: Memory access fault by GPU node-1 (appeared once only) [AMD GPU, bug-unconfirmed, low severity]
#7829 opened Jun 8, 2024 by eliranwong

Bug: CUDA enabled docker container fails to launch [bug-unconfirmed, critical severity]
#7822 opened Jun 7, 2024 by mblunt

Bug: Running a large model through the server using vulkan backend always generates gibberish after first call. [bug-unconfirmed, medium severity]
#7819 opened Jun 7, 2024 by richardanaya

I am running two socket servers, and the CPU usage is at 50% [bug-unconfirmed, high severity]
#7812 opened Jun 7, 2024 by superLiben

Bug: QWEN2 quantization GGML_ASSERT [bug-unconfirmed, high severity]
#7805 opened Jun 6, 2024 by bartowski1182

Bug: token generation seems to slow down for higher slots [bug-unconfirmed, low severity]
#7802 opened Jun 6, 2024 by desperadoduck

Bug: JSON Schema-to-GBNF additionalProperties bugs (and other minor quirks) [bug-unconfirmed, low severity]
#7789 opened Jun 6, 2024 by HanClinto