
Conversation


@yeahdongcn yeahdongcn commented Dec 27, 2025

This PR adds support for Moore Threads (MUSA) GPU platform, expanding LightX2V's hardware compatibility.

NOTE:

  1. safetensors must be built and installed from source, because the 0.7.0 release does not yet include our PR adding MTGPU support.
  2. torch/torch_musa must be upgraded to the latest version.

Testing Done

Wan2.1-T2V-1.3B runs well on MTGPU:

root@worker3218:/ws/scripts/platforms/mthreads_musa# ./run_wan21_t2v.sh 
===============================================================================
LightX2V Base Environment Variables Summary:
-------------------------------------------------------------------------------
lightx2v_path: /ws
model_path: /ws/Wan-AI/Wan2___1-T2V-1___3B
-------------------------------------------------------------------------------
Model Inference Data Type: BF16
Sensitive Layer Data Type: None
Performance Profiling Debug Level: 2
===============================================================================
2025-12-27 16:38:32.648 | INFO     | lightx2v_platform.base.base:init_ai_device:16 - Initialized AI_DEVICE: cuda
2025-12-27 16:38:32.654 | INFO     | lightx2v_platform.base.base:check_ai_device:33 - AI device for platform 'musa' is available.
2025-12-27 16:38:35.198 | INFO     | lightx2v.common.ops.attn.flash_attn:<module>:13 - flash_attn_varlen_func_v3 not found, please install flash_attn3 first
2025-12-27 16:38:35.206 | INFO     | lightx2v.common.ops.attn.sage_attn:<module>:19 - sageattn not found, please install sageattention first
2025-12-27 16:38:35.206 | INFO     | lightx2v.common.ops.attn.sage_attn:<module>:25 - sageattn3 not found, please install sageattention first
INFO 12-27 16:38:35 [__init__.py:36] Available plugins for group vllm.platform_plugins:
INFO 12-27 16:38:35 [__init__.py:38] - musa -> vllm_musa:register
INFO 12-27 16:38:35 [__init__.py:41] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
INFO 12-27 16:38:35 [__init__.py:232] Platform plugin musa is activated
...

The generated video is:

output_lightx2v_wan_t2v.mp4

@gemini-code-assist

Summary of Changes

Hello @yeahdongcn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances LightX2V's hardware compatibility by introducing full support for Moore Threads MUSA GPUs. The changes involve integrating a new device class, adding specific configuration files and example scripts for MUSA, and updating project dependencies to ensure seamless operation on this new platform. This expansion allows users to leverage LightX2V's capabilities on a broader range of AI accelerators.

Highlights

  • MUSA GPU Support: Integrated support for Moore Threads MUSA GPU platform, significantly expanding LightX2V's hardware compatibility.
  • New MUSA Device Class: Introduced a dedicated MusaDevice class, inheriting from CudaDevice, to handle MUSA-specific device detection and registration within the platform.
  • Configuration and Scripting: Added a new JSON configuration file (wan_t2v.json) and an example shell script (run_wan21_t2v.sh) to facilitate running models on the MUSA platform.
  • Dependency Update: Included torchada>=0.1.10 as a new project dependency, essential for MUSA integration.
  • Documentation and Build Updates: Updated the README.md to reflect MUSA support and modified .gitignore and setup_vae.py for improved build and environment management.
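The device class described in the highlights could look roughly like the following. This is a hedged sketch, not the actual LightX2V code: only the fact that `MusaDevice` inherits from `CudaDevice` comes from the PR summary, while `is_available` and `DEVICE_REGISTRY` are illustrative names invented for this example.

```python
# Illustrative sketch of a MUSA device class inheriting CUDA logic.
# Only the MusaDevice/CudaDevice relationship is from the PR; the
# is_available() check and DEVICE_REGISTRY are assumptions.
import importlib.util


class CudaDevice:
    """Stand-in for the platform base class (assumed interface)."""
    name = "cuda"

    def is_available(self) -> bool:
        # Detect the platform by probing for its torch backend package.
        return importlib.util.find_spec("torch") is not None


class MusaDevice(CudaDevice):
    """MUSA reuses the CUDA code path (note the log line
    'Initialized AI_DEVICE: cuda'), so mostly detection differs."""
    name = "musa"

    def is_available(self) -> bool:
        # torch_musa exposes MTGPU devices through the torch API.
        return importlib.util.find_spec("torch_musa") is not None


# Hypothetical registry keyed by platform name.
DEVICE_REGISTRY = {cls.name: cls for cls in (CudaDevice, MusaDevice)}
```

Inheriting from `CudaDevice` keeps the MUSA path small, since torch_musa presents MTGPUs through the standard torch device API.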



@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request adds support for Moore Threads MUSA GPUs, which is a great extension of hardware compatibility. The changes include adding a new platform device implementation, configuration files, dependencies, and an example script.

My review focuses on improving the robustness and usability of the new implementation. I've identified a potential issue in the parallel environment initialization for the new MUSA device, which inherits NVIDIA-specific logic. I've also suggested improvements for dependency consistency and for the usability of the new example script. Overall, these are solid additions with a few areas for refinement.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@helloyongyang helloyongyang merged commit 2680635 into ModelTC:main Dec 27, 2025
1 check passed