
Conversation

@BarfingLemurs (Contributor)

Closes #3459

Closes #3396 (mistral)

@ggerganov ggerganov merged commit 1faaae8 into ggml-org:master Oct 6, 2023
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request Oct 7, 2023
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 12, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp:
  py : change version of numpy requirement to 1.24.4 (ggml-org#3515)
  quantize : fail fast on write errors (ggml-org#3521)
  metal : support default.metallib load & reuse code for swift package (ggml-org#3522)
  llm : support Adept Persimmon 8B (ggml-org#3410)
  Fix for ggml-org#3454 (ggml-org#3455)
  readme : update models, cuda + ppl instructions (ggml-org#3510)
  server : docs fix default values and add n_probs (ggml-org#3506)

Development

Successfully merging this pull request may close these issues.

Installing Llama.cpp without nvcc from nvidia-cuda-toolkit

Does this work with Synthia-7B-v1.3
