llama.cpp/prompts
Latest commit: 37c746d687 "llama : add Qwen support (#4281)" by Shijie, 2023-12-01 20:16:31 +02:00

* enable qwen to llama.cpp
* llama : do not GPU split bias tensors

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

File                     Last commit                                                                      Date
LLM-questions.txt        parallel : add option to load external prompt file (#3416)                      2023-10-06 16:16:38 +03:00
alpaca.txt               Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)  2023-04-14 22:58:43 +03:00
assistant.txt            speculative : add tree-based sampling example (#3624)                           2023-10-18 16:21:57 +03:00
chat-with-baichuan.txt   feature : support Baichuan serial models (#3009)                                2023-09-14 12:32:10 -04:00
chat-with-bob.txt        Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)  2023-04-14 22:58:43 +03:00
chat-with-qwen.txt       llama : add Qwen support (#4281)                                                 2023-12-01 20:16:31 +02:00
chat-with-vicuna-v0.txt  examples : read chat prompts from a template file (#1196)                       2023-05-03 20:58:11 +03:00
chat-with-vicuna-v1.txt  examples : read chat prompts from a template file (#1196)                       2023-05-03 20:58:11 +03:00
chat.txt                 examples : read chat prompts from a template file (#1196)                       2023-05-03 20:58:11 +03:00
dan-modified.txt         prompts : model agnostic DAN (#1304)                                            2023-05-11 18:10:19 +03:00
dan.txt                  prompts : model agnostic DAN (#1304)                                            2023-05-11 18:10:19 +03:00
mnemonics.txt            prompts : add mnemonics.txt                                                     2023-10-12 09:35:30 +03:00
parallel-questions.txt   prompts : fix editorconfig checks after #3416                                   2023-10-06 16:36:32 +03:00
reason-act.txt           do not force the prompt file to end with a new line (#908)                      2023-04-13 11:33:16 +02:00
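
The files above are plain-text prompts intended to be loaded with the main example's -f/--file option. A minimal sketch of starting an interactive chat with one of them (the model path is a placeholder; adjust it to your local model file):

    # seed an interactive session with the Bob chat prompt; -r returns
    # control to the user whenever the reverse prompt "User:" is generated
    ./main -m ./models/7B/ggml-model-q4_0.gguf --color -i -r "User:" -f prompts/chat-with-bob.txt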