
Pinned repositories

  1. vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list)

    Python · 67.7k stars · 12.6k forks

  2. llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 2.6k stars · 364 forks

  3. recipes (Public)

    Common recipes to run vLLM

    Jupyter Notebook · 338 stars · 123 forks

  4. speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 196 stars · 26 forks

  5. semantic-router (Public)

    System-level intelligent router for Mixture-of-Models deployments across cloud, data center, and edge

    Go · 2.9k stars · 444 forks
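
For context on the flagship project pinned above, here is a minimal offline-inference sketch using vLLM's Python API. The model name, prompts, and sampling settings are illustrative placeholders, not recommendations from the project.

```python
# Minimal vLLM offline-inference sketch.
# Model name and sampling settings below are placeholders for illustration.
from vllm import LLM, SamplingParams

# Load a Hugging Face-compatible model; a small model keeps the example cheap.
llm = LLM(model="facebook/opt-125m")

# Sampling parameters applied to every prompt in the batch.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# A batch of prompts; vLLM handles batching and scheduling internally.
prompts = ["Hello, my name is", "The capital of France is"]
outputs = llm.generate(prompts, params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```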

Repositories

Showing 10 of 31 repositories
  • guidellm (Public)

    Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs

    Python · 808 stars · Apache-2.0 · 114 forks · 40 open issues (5 need help) · 17 open PRs · Updated Jan 16, 2026
  • tpu-inference (Public)

    TPU inference for vLLM, with unified JAX and PyTorch support.

    Python · 213 stars · Apache-2.0 · 77 forks · 21 open issues (1 needs help) · 101 open PRs · Updated Jan 16, 2026
  • aibrix (Public)

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Go · 4,529 stars · Apache-2.0 · 515 forks · 279 open issues (21 need help) · 24 open PRs · Updated Jan 16, 2026
  • llm-compressor (Public)

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (a hedged quantization sketch follows this repository list)

    Python · 2,579 stars · Apache-2.0 · 364 forks · 84 open issues (19 need help) · 41 open PRs · Updated Jan 16, 2026
  • speculators (Public)

    A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM

    Python · 196 stars · Apache-2.0 · 26 forks · 8 open issues (3 need help) · 10 open PRs · Updated Jan 16, 2026
  • vllm-spyre (Public)

    Community maintained hardware plugin for vLLM on Spyre

    Python · 40 stars · Apache-2.0 · 31 forks · 5 open issues · 12 open PRs · Updated Jan 16, 2026
  • vllm (Public)

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 67,674 stars · Apache-2.0 · 12,637 forks · 1,751 open issues (49 need help) · 1,371 open PRs · Updated Jan 16, 2026
  • router (Public)

    A high-performance, lightweight router for large-scale vLLM deployments

    Rust · 83 stars · Apache-2.0 · 24 forks · 6 open issues · 9 open PRs · Updated Jan 16, 2026
  • FlashMLA (Public; forked from deepseek-ai/FlashMLA)
    C++ · 10 stars · MIT · 939 forks · 0 open issues · 5 open PRs · Updated Jan 16, 2026
  • vllm-gaudi (Public)

    Community maintained hardware plugin for vLLM on Intel Gaudi

    Python · 22 stars · Apache-2.0 · 99 forks · 1 open issue · 81 open PRs · Updated Jan 16, 2026
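
As referenced in the llm-compressor entry above, here is a rough sketch of how one-shot post-training quantization with that library typically looks. Import paths, modifier names, and argument names are recalled from memory and may differ between releases; treat this as an assumption-laden illustration rather than the library's documented API.

```python
# Hedged sketch of one-shot W4A16 quantization with llm-compressor.
# NOTE: import paths and argument names are assumptions and may vary by version.
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot  # newer releases may expose this at the package root

# Quantize linear layers to 4-bit weights, leaving the LM head in full precision.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder model
    dataset="open_platypus",                     # placeholder calibration dataset
    recipe=recipe,
    output_dir="TinyLlama-1.1B-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The resulting checkpoint directory can then be loaded directly by vLLM for serving, which is the intended workflow described in the repository description.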