Conversation


@ggerganov ggerganov commented Oct 31, 2023

Helper script to quickly deploy a llama.cpp server:

bash -c "$(curl -s https://ggml.ai/server-llm.sh)"
[Video demo attached: server-llm-1-low.mp4]
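(Side note on the curl-pipe pattern, not part of the PR itself: if you'd rather inspect the script before executing it, the one-liner can be split into a download step and a run step. The flags and prompts of server-llm.sh are whatever the script defines; this is just the equivalent two-step invocation.)

# Download, review, then run locally -- same effect as the one-liner above
curl -s -o server-llm.sh https://ggml.ai/server-llm.sh
less server-llm.sh
bash server-llm.sh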

@ggerganov ggerganov marked this pull request as ready for review October 31, 2023 15:49
@ggerganov ggerganov changed the title from "scripts : add deploy-server.sh" to "scripts : add server-llm.sh" on Oct 31, 2023
@ggerganov ggerganov merged commit f0e2093 into master Nov 1, 2023
@2-fly-4-ai

Very nice!

olexiyb pushed a commit to Sanctum-AI/llama.cpp that referenced this pull request Nov 23, 2023
* scripts : add deploy-server.sh

* scripts : rename to server-llm.sh

* scripts : working curl pipe
@sgladkoff

This is really cool.
