llama.cpp/requirements
Latest commit 889bdd7686 by DAN™
command-r : add BPE pre-tokenization (#7063)
* Add BPE pre-tokenization for Command-R/R+.

* Bump transformers convert requirement.

* command-r : add individual digits regex

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-05-05 08:19:30 +03:00
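The "individual digits regex" mentioned in the commit can be illustrated with a minimal sketch: a pre-tokenization pattern in which each digit matches on its own instead of as part of a longer run. The pattern and helper below are hypothetical and simplified, not the actual Command-R pattern added in llama.cpp:

```python
import re

# Hypothetical simplified pre-tokenizer pattern: a single digit, a run of
# non-space non-digit characters, or a run of whitespace. The real
# Command-R/R+ pattern in llama.cpp is more elaborate.
PATTERN = re.compile(r"\d|[^\s\d]+|\s+")

def pre_tokenize(text):
    # Each digit becomes its own pre-token, so BPE merges can never
    # fuse multi-digit numbers into single tokens.
    return PATTERN.findall(text)

print(pre_tokenize("call 911 now"))
```

Running this splits `"call 911 now"` into `['call', ' ', '9', '1', '1', ' ', 'now']`: the digits of `911` stay separate while the words remain whole.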
File                                         Latest commit                                                                  Date
requirements-convert-hf-to-gguf-update.txt   llama : fix BPE pre-tokenization (#6920)                                       2024-04-29 16:58:41 +03:00
requirements-convert-hf-to-gguf.txt          convert-hf-to-gguf : require einops for InternLM2ForCausalLM (#5792)          2024-03-01 16:51:12 -05:00
requirements-convert-llama-ggml-to-gguf.txt  python : add check-requirements.sh and GitHub workflow (#4585)                 2023-12-29 16:50:29 +02:00
requirements-convert-lora-to-ggml.txt        python : add check-requirements.sh and GitHub workflow (#4585)                 2023-12-29 16:50:29 +02:00
requirements-convert-persimmon-to-gguf.txt   python : add check-requirements.sh and GitHub workflow (#4585)                 2023-12-29 16:50:29 +02:00
requirements-convert.txt                     command-r : add BPE pre-tokenization (#7063)                                   2024-05-05 08:19:30 +03:00