feat(pt/pd): add size option to dp show #4783

Merged
iProzd merged 2 commits into deepmodeling:devel from iProzd:D0605_dev_size on Jun 10, 2025

Conversation

@iProzd (Collaborator) commented Jun 5, 2025

Summary by CodeRabbit

  • New Features

    • Added the ability to display model parameter counts using the "size" attribute in the "show" command.
    • Users can now view detailed parameter counts for descriptor, fitting-net, and total parameters for both single-task and multi-task models.
  • Tests

    • Updated tests to verify correct output of model parameter counts when using the new "size" attribute.
  • Documentation

    • Added new documentation for the "show" command detailing usage and output for various model attributes, including the new "size" attribute.

@iProzd (Collaborator, Author) commented Jun 5, 2025

We need a proper place to add docs for dp show.

@coderabbitai bot (Contributor) commented Jun 5, 2025

## 📝 Walkthrough

Support for displaying model parameter counts was added to the "show" command. This includes backend implementations to compute parameter counts, updates to the command-line interface to accept a new "size" attribute, logic to display the counts, and corresponding test updates to verify the new output. Documentation was also added to describe the new feature.
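
As a concrete illustration of the counting logic described here, the sketch below tallies parameters from a toy PyTorch module's `named_parameters()` the same way the backend diffs further down in this review do; the toy module, its key names, and the printed numbers are illustrative only, not the real DeePMD-kit model.

```python
# Minimal standalone sketch of the parameter-counting idea behind
# get_model_size(); the module below is a toy stand-in, not the real model.
import torch


class ToyModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.descriptor = torch.nn.Linear(64, 128)
        self.fitting_net = torch.nn.Linear(128, 1)


def count_parameters(model: torch.nn.Module) -> dict:
    """Group parameter counts by name, the way `dp show <model> size` reports them."""
    params = dict(model.named_parameters())
    n_des = sum(p.numel() for k, p in params.items() if "descriptor" in k)
    n_fit = sum(p.numel() for k, p in params.items() if "fitting" in k)
    return {"descriptor": n_des, "fitting-net": n_fit, "total": n_des + n_fit}


if __name__ == "__main__":
    print(count_parameters(ToyModel()))
    # e.g. {'descriptor': 8320, 'fitting-net': 129, 'total': 8449}
```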

## Changes

| File(s)                                                      | Change Summary                                                                                                         |
|--------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| `deepmd/entrypoints/show.py`                                 | Added logic to handle "size" attribute: retrieves and logs model parameter counts.                                     |
| `deepmd/infer/deep_eval.py`                                  | Added `get_model_size` abstract method to backend interface and concrete method to high-level interface.               |
| `deepmd/pt/infer/deep_eval.py`, `deepmd/pd/infer/deep_eval.py`| Implemented `get_model_size` method to compute and return parameter counts by category.                                |
| `deepmd/main.py`                                             | Extended "show" command's valid attributes to include "size".                                                          |
| `source/tests/pt/test_dp_show.py`                            | Updated tests to include "size" attribute and verify parameter count output.                                           |
| `doc/model/index.rst`                                        | Added "show-model-info" entry to model documentation index.                                                            |
| `doc/model/show-model-info.md`                               | Added new documentation describing the `dp show` command and the "size" attribute usage.                               |

## Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant ShowEntrypoint
    participant DeepEval
    participant Backend

    User->>CLI: dp show size <model>
    CLI->>ShowEntrypoint: parse arguments (attribute="size")
    ShowEntrypoint->>DeepEval: get_model_size()
    DeepEval->>Backend: get_model_size()
    Backend-->>DeepEval: {descriptor, fitting-net, total}
    DeepEval-->>ShowEntrypoint: {descriptor, fitting-net, total}
    ShowEntrypoint->>CLI: log parameter counts
    CLI-->>User: display parameter counts
```

Suggested labels

Python

Suggested reviewers

  • wanghan-iapcm
  • Chengqian-Zhang

> [!WARNING]
> There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
> 
> <details>
> <summary>🔧 Pylint (3.3.7)</summary>
> 
> <details>
> <summary>deepmd/pt/infer/deep_eval.py</summary>
> 
> No files to lint: exiting.
> 
> </details>
> 
> <details>
> <summary>deepmd/pd/infer/deep_eval.py</summary>
> 
> No files to lint: exiting.
> 
> </details>
> 
> <details>
> <summary>source/tests/pt/test_dp_show.py</summary>
> 
> No files to lint: exiting.
> 
> </details>
> 
> </details>


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
deepmd/pt/infer/deep_eval.py (1)

627-651: Optimize dictionary key iteration for better performance.

The implementation is functionally correct and follows good practices for parameter counting. However, the static analysis hints highlight a minor performance optimization opportunity.

Apply this optimization to improve performance:

```diff
-        sum_param_des = sum(
-            params[k].numel()
-            for k in params.keys()
-            if "descriptor" in k and "mean" not in k and "stddev" not in k
-        )
-        sum_param_fit = sum(
-            params[k].numel()
-            for k in params.keys()
-            if "fitting" in k and "_networks" not in k
-        )
+        sum_param_des = sum(
+            params[k].numel()
+            for k in params
+            if "descriptor" in k and "mean" not in k and "stddev" not in k
+        )
+        sum_param_fit = sum(
+            params[k].numel()
+            for k in params
+            if "fitting" in k and "_networks" not in k
+        )
```

🧰 Tools
🪛 Ruff (0.11.9)

639-639: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)


644-644: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

deepmd/pd/infer/deep_eval.py (1)

485-509: Implement parameter counting with Paddle-specific utilities and optimize dictionary access.

The implementation is functionally correct and consistent with the PyTorch backend. However, there are a couple of improvements to consider:

  1. Use the Paddle-specific numel utility function from deepmd.pd.utils.decomp for better compatibility with Paddle's dynamic/static modes.
  2. Optimize dictionary iteration as suggested by static analysis.

Apply this diff to improve the implementation:

```diff
+from deepmd.pd.utils.decomp import (
+    numel,
+)

 def get_model_size(self) -> dict:
     """Get model parameter count.

     Returns
     -------
     dict
         A dictionary containing the number of parameters in the model.
         The keys are 'descriptor', 'fitting_net', and 'total'.
     """
     params = self.dp.state_dict()
     sum_param_des = sum(
-        params[k].numel()
-        for k in params.keys()
+        numel(params[k])
+        for k in params
         if "descriptor" in k and "mean" not in k and "stddev" not in k
     )
     sum_param_fit = sum(
-        params[k].numel()
-        for k in params.keys()
+        numel(params[k])
+        for k in params
         if "fitting" in k and "_networks" not in k
     )
     return {
         "descriptor": sum_param_des,
         "fitting-net": sum_param_fit,
         "total": sum_param_des + sum_param_fit,
     }
```

🧰 Tools
🪛 Ruff (0.11.9)

497-497: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)


502-502: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9ef43fa and 3b84fd4.

📒 Files selected for processing (6)
  • deepmd/entrypoints/show.py (1 hunks)
  • deepmd/infer/deep_eval.py (2 hunks)
  • deepmd/main.py (1 hunks)
  • deepmd/pd/infer/deep_eval.py (1 hunks)
  • deepmd/pt/infer/deep_eval.py (1 hunks)
  • source/tests/pt/test_dp_show.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
deepmd/infer/deep_eval.py (6)
deepmd/pd/infer/deep_eval.py (1)
  • get_model_size (485-509)
deepmd/pt/infer/deep_eval.py (1)
  • get_model_size (627-651)
deepmd/backend/dpmodel.py (1)
  • deep_eval (63-75)
deepmd/backend/pytorch.py (1)
  • deep_eval (69-79)
deepmd/backend/backend.py (1)
  • deep_eval (157-165)
deepmd/backend/tensorflow.py (1)
  • deep_eval (78-88)
deepmd/entrypoints/show.py (3)
deepmd/pd/infer/deep_eval.py (1)
  • get_model_size (485-509)
deepmd/infer/deep_eval.py (2)
  • get_model_size (294-296)
  • get_model_size (568-570)
deepmd/pt/infer/deep_eval.py (1)
  • get_model_size (627-651)
source/tests/pt/test_dp_show.py (2)
source/tests/pt/common.py (1)
  • run_dp (27-49)
source/tests/pd/test_dp_show.py (2)
  • test_frozen_model (71-85)
  • test_frozen_model (192-206)
deepmd/pd/infer/deep_eval.py (4)
deepmd/infer/deep_eval.py (2)
  • get_model_size (294-296)
  • get_model_size (568-570)
deepmd/pt/infer/deep_eval.py (1)
  • get_model_size (627-651)
deepmd/pd/train/wrapper.py (1)
  • state_dict (199-203)
deepmd/pd/utils/decomp.py (1)
  • numel (130-134)
🪛 Ruff (0.11.9)
deepmd/pt/infer/deep_eval.py

639-639: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)


644-644: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

deepmd/pd/infer/deep_eval.py

497-497: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)


502-502: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

⏰ Context from checks skipped due to timeout of 90000ms (29)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build C++ (cuda120, cuda)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Build C++ (cuda, cuda)
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Test C++ (true)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
🔇 Additional comments (8)
deepmd/entrypoints/show.py (1)

66-71: LGTM! Clean implementation of parameter count display.

The size attribute handling follows the established pattern in the file and includes thoughtful touches like thousands separators for readability and appropriate messaging for multitask models.
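
For readers without the diff in front of them, a rough sketch of the kind of logging this comment describes is shown below; the function name `show_size`, the `multi_task` flag, and the exact messages are assumptions, not the actual `deepmd/entrypoints/show.py` code.

```python
# Rough sketch only -- names and messages are assumptions, not the real
# deepmd/entrypoints/show.py implementation.
import logging

log = logging.getLogger(__name__)


def show_size(evaluator, multi_task: bool = False) -> None:
    # Expected shape: {'descriptor': ..., 'fitting-net': ..., 'total': ...}
    sizes = evaluator.get_model_size()
    if multi_task:
        log.info("Parameter counts are shown for the selected model branch only.")
    log.info("Parameter counts:")
    for name, count in sizes.items():
        # Thousands separators (e.g. 138,441) keep large counts readable.
        log.info("Parameters in %s: %s", name, f"{count:,}")
```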

deepmd/main.py (1)

854-854: LGTM! Correct CLI interface update.

The addition of "size" to the choices list properly enables the new functionality in the command-line interface.
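
A minimal argparse sketch of what extending the choices list looks like follows; the attribute names match those documented for `dp show` in this PR, but the argument names and parser layout here are assumptions rather than the actual `deepmd/main.py` code.

```python
# Illustrative fragment only -- not the real deepmd/main.py parser.
import argparse

parser = argparse.ArgumentParser(prog="dp")
sub = parser.add_subparsers(dest="command")
show = sub.add_parser("show", help="show information stored in a model file")
show.add_argument("INPUT", help="model checkpoint or frozen model file")
show.add_argument(
    "ATTRIBUTES",
    nargs="+",
    # "size" is the newly added choice.
    choices=["model-branch", "type-map", "descriptor", "fitting-net", "size"],
)

args = parser.parse_args(["show", "model.pt", "size"])
print(args.ATTRIBUTES)  # ['size']
```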

deepmd/infer/deep_eval.py (2)

294-296: LGTM! Proper abstract interface definition.

The abstract method follows the established pattern and provides a clear contract for backend implementations.


568-570: LGTM! Consistent delegation pattern.

The delegation to the backend implementation follows the same pattern as other methods like get_model_def_script(), maintaining architectural consistency.
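
Schematically, the abstract-method-plus-delegation pattern being praised looks like the simplified sketch below; the real classes in `deepmd/infer/deep_eval.py` carry many more methods and construction details.

```python
# Simplified schematic of the two interface layers -- not the real classes.
from abc import ABC, abstractmethod


class DeepEvalBackend(ABC):
    """Low-level backend interface implemented per framework (pt/pd/...)."""

    @abstractmethod
    def get_model_size(self) -> dict:
        """Return parameter counts keyed by 'descriptor', 'fitting-net', 'total'."""


class DeepEval:
    """High-level interface that delegates to the selected backend."""

    def __init__(self, backend: DeepEvalBackend) -> None:
        self.deep_eval = backend

    def get_model_size(self) -> dict:
        # Same delegation pattern as get_model_def_script().
        return self.deep_eval.get_model_size()
```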

source/tests/pt/test_dp_show.py (4)

50-66: Comprehensive test coverage for the new size attribute functionality.

The test updates appropriately verify the new "size" attribute functionality with proper assertions for parameter counts. The changes include adjusting assertion indices and adding specific checks for parameter count output lines.


70-86: Consistent test pattern for frozen model verification.

The frozen model test follows the same pattern as the checkpoint test, ensuring consistent verification of the size attribute across different model formats.


155-190: Thorough multi-task model testing with appropriate parameter count verification.

The multi-task model test correctly verifies parameter counts for a single branch model, which is the expected behavior when showing size information for multi-task models.


194-210: Complete test coverage for multi-task frozen model scenario.

The test appropriately handles the multi-task to single-task conversion scenario when freezing a specific branch, with correct parameter count assertions.
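
A hedged sketch of the assertion pattern these tests use is shown below; it is not the actual `source/tests/pt/test_dp_show.py` (which drives the real `dp show` entrypoint), and the `format_size_report` helper and mocked evaluator are illustrative stand-ins. The expected lines mirror the example output in the new documentation.

```python
# Illustrative sketch of the assertion pattern -- not the real test module.
import unittest
from unittest import mock


def format_size_report(evaluator) -> list[str]:
    """Format parameter counts the way `dp show ... size` prints them."""
    sizes = evaluator.get_model_size()
    return ["Parameter counts:"] + [
        f"Parameters in {name}: {count:,}" for name, count in sizes.items()
    ]


class TestShowSizeOutput(unittest.TestCase):
    def test_size_lines(self) -> None:
        fake_eval = mock.Mock()
        fake_eval.get_model_size.return_value = {
            "descriptor": 19_350,
            "fitting-net": 119_091,
            "total": 138_441,
        }
        report = format_size_report(fake_eval)
        self.assertIn("Parameters in descriptor: 19,350", report)
        self.assertIn("Parameters in fitting-net: 119,091", report)
        self.assertIn("Parameters in total: 138,441", report)


if __name__ == "__main__":
    unittest.main()
```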

@github-actions github-actions bot added the Docs label Jun 9, 2025
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
deepmd/pd/infer/deep_eval.py (1)

485-507: Apply code style improvements suggested by static analysis.

The implementation is functionally correct, but there are unnecessary .keys() calls that can be simplified.

Apply this diff to improve code style:

```diff
    def get_model_size(self) -> dict:
        """Get model parameter count.

        Returns
        -------
        dict
            A dictionary containing the number of parameters in the model.
            The keys are 'descriptor', 'fitting_net', and 'total'.
        """
        params_dict = dict(self.dp.named_parameters())
        sum_param_des = sum(
-            params_dict[k].numel() for k in params_dict.keys() if "descriptor" in k
+            params_dict[k].numel() for k in params_dict if "descriptor" in k
        )
        sum_param_fit = sum(
            params_dict[k].numel()
-            for k in params_dict.keys()
+            for k in params_dict
            if "fitting" in k and "_networks" not in k
        )
        return {
            "descriptor": sum_param_des,
            "fitting-net": sum_param_fit,
            "total": sum_param_des + sum_param_fit,
        }
```

🧰 Tools
🪛 Ruff (0.11.9)

496-496: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)


500-500: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

deepmd/pt/infer/deep_eval.py (1)

627-649: Apply code style improvements for consistency with PD backend.

The implementation correctly mirrors the PaddlePaddle backend implementation, but the same style improvements apply here.

Apply this diff to improve code style:

```diff
    def get_model_size(self) -> dict:
        """Get model parameter count.

        Returns
        -------
        dict
            A dictionary containing the number of parameters in the model.
            The keys are 'descriptor', 'fitting_net', and 'total'.
        """
        params_dict = dict(self.dp.named_parameters())
        sum_param_des = sum(
-            params_dict[k].numel() for k in params_dict.keys() if "descriptor" in k
+            params_dict[k].numel() for k in params_dict if "descriptor" in k
        )
        sum_param_fit = sum(
            params_dict[k].numel()
-            for k in params_dict.keys()
+            for k in params_dict
            if "fitting" in k and "_networks" not in k
        )
        return {
            "descriptor": sum_param_des,
            "fitting-net": sum_param_fit,
            "total": sum_param_des + sum_param_fit,
        }
```

🧰 Tools
🪛 Ruff (0.11.9)

638-638: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)


642-642: Use key in dict instead of key in dict.keys()

Remove .keys()

(SIM118)

doc/model/show-model-info.md (1)

67-76: Add language specifications to code blocks for better documentation.

The documentation is comprehensive and well-written. Consider adding language specifications to the output example code blocks for better syntax highlighting and accessibility.

Apply this diff to improve the code blocks:

````diff
 For a singletask model, the output might look like:

-```
+```text
 This is a singletask model
 The type_map is ['O', 'H', 'Au']
 The descriptor parameter is {'type': 'se_e2_a', 'sel': [46, 92, 4], 'rcut': 4.0}
 The fitting_net parameter is {'neuron': [24, 24, 24], 'resnet_dt': True, 'seed': 1}
 Parameter counts:
 Parameters in descriptor: 19,350
 Parameters in fitting-net: 119,091
 Parameters in total: 138,441

 For a multitask model, if model-branch is selected, it will additionally display available branches:

-```
+```text
 This is a multitask model
 Available model branches are ['branch1', 'branch2', 'RANDOM'], where 'RANDOM' means using a randomly initialized fitting net.
 ...
````

Also applies to: 80-84

<details>
<summary>🧰 Tools</summary>

<details>
<summary>🪛 markdownlint-cli2 (0.17.2)</summary>

67-67: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

</details>

</details>


<details>
<summary>📜 Review details</summary>

**Configuration used: CodeRabbit UI**
**Review profile: CHILL**
**Plan: Pro**


<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between 3b84fd40be7434eb031f68f2d4fd294387ce58c0 and a264eb8dd65e73974675836b90697fd7c57ca104.

</details>

<details>
<summary>📒 Files selected for processing (5)</summary>

* `deepmd/pd/infer/deep_eval.py` (1 hunks)
* `deepmd/pt/infer/deep_eval.py` (1 hunks)
* `doc/model/index.rst` (1 hunks)
* `doc/model/show-model-info.md` (1 hunks)
* `source/tests/pt/test_dp_show.py` (2 hunks)

</details>

<details>
<summary>✅ Files skipped from review due to trivial changes (1)</summary>

* doc/model/index.rst

</details>

<details>
<summary>🚧 Files skipped from review as they are similar to previous changes (1)</summary>

* source/tests/pt/test_dp_show.py

</details>

<details>
<summary>🧰 Additional context used</summary>

<details>
<summary>🧬 Code Graph Analysis (2)</summary>

<details>
<summary>deepmd/pd/infer/deep_eval.py (3)</summary><blockquote>

<details>
<summary>deepmd/infer/deep_eval.py (2)</summary>

* `get_model_size` (294-296)
* `get_model_size` (568-570)

</details>
<details>
<summary>deepmd/pt/infer/deep_eval.py (1)</summary>

* `get_model_size` (627-649)

</details>
<details>
<summary>deepmd/pd/utils/decomp.py (1)</summary>

* `numel` (130-134)

</details>

</blockquote></details>
<details>
<summary>deepmd/pt/infer/deep_eval.py (2)</summary><blockquote>

<details>
<summary>deepmd/infer/deep_eval.py (2)</summary>

* `get_model_size` (294-296)
* `get_model_size` (568-570)

</details>
<details>
<summary>deepmd/pd/infer/deep_eval.py (1)</summary>

* `get_model_size` (485-507)

</details>

</blockquote></details>

</details><details>
<summary>🪛 Ruff (0.11.9)</summary>

<details>
<summary>deepmd/pd/infer/deep_eval.py</summary>

496-496: Use `key in dict` instead of `key in dict.keys()`

Remove `.keys()`

(SIM118)

---

500-500: Use `key in dict` instead of `key in dict.keys()`

Remove `.keys()`

(SIM118)

</details>
<details>
<summary>deepmd/pt/infer/deep_eval.py</summary>

638-638: Use `key in dict` instead of `key in dict.keys()`

Remove `.keys()`

(SIM118)

---

642-642: Use `key in dict` instead of `key in dict.keys()`

Remove `.keys()`

(SIM118)

</details>

</details>
<details>
<summary>🪛 LanguageTool</summary>

<details>
<summary>doc/model/show-model-info.md</summary>

[misspelling] ~4-~4: This word is normally spelled as one.
Context: ...eter statistics in both single-task and multi-task settings.  ## Command Syntax  ```bash d...

(EN_COMPOUNDS_MULTI_TASK)

---

[uncategorized] ~12-~12: Loose punctuation mark.
Context: ...<INPUT> <ATTRIBUTES...> ```  - `<INPUT>`: Path to the model checkpoint file or fr...

(UNLIKELY_OPENING_PUNCTUATION)

---

[uncategorized] ~13-~13: Loose punctuation mark.
Context: ...e or frozen model file. - `<ATTRIBUTES>`: One or more information categories to d...

(UNLIKELY_OPENING_PUNCTUATION)

---

[uncategorized] ~15-~15: Loose punctuation mark.
Context: ...upported values are:    - `model-branch`: Shows available branches for multi-task...

(UNLIKELY_OPENING_PUNCTUATION)

---

[misspelling] ~15-~15: This word is normally spelled as one.
Context: ...l-branch`: Shows available branches for multi-task models.   - `type-map`: Shows the type ...

(EN_COMPOUNDS_MULTI_TASK)

---

[uncategorized] ~16-~16: Loose punctuation mark.
Context: ...es for multi-task models.   - `type-map`: Shows the type mapping used by the mode...

(UNLIKELY_OPENING_PUNCTUATION)

---

[uncategorized] ~17-~17: Loose punctuation mark.
Context: ...ping used by the model.   - `descriptor`: Displays the model descriptor parameter...

(UNLIKELY_OPENING_PUNCTUATION)

---

[uncategorized] ~18-~18: Loose punctuation mark.
Context: ...descriptor parameters.   - `fitting-net`: Displays parameters of the fitting netw...

(UNLIKELY_OPENING_PUNCTUATION)

</details>

</details>
<details>
<summary>🪛 markdownlint-cli2 (0.17.2)</summary>

<details>
<summary>doc/model/show-model-info.md</summary>

67-67: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

---

80-80: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

</details>

</details>

</details>

<details>
<summary>⏰ Context from checks skipped due to timeout of 90000ms (29)</summary>

* GitHub Check: Test Python (6, 3.12)
* GitHub Check: Test Python (6, 3.9)
* GitHub Check: Test Python (5, 3.12)
* GitHub Check: Test Python (5, 3.9)
* GitHub Check: Test Python (4, 3.12)
* GitHub Check: Build wheels for cp310-manylinux_aarch64
* GitHub Check: Test Python (4, 3.9)
* GitHub Check: Build wheels for cp311-win_amd64
* GitHub Check: Test Python (3, 3.12)
* GitHub Check: Build wheels for cp311-macosx_arm64
* GitHub Check: Test Python (3, 3.9)
* GitHub Check: Build C++ (clang, clang)
* GitHub Check: Build wheels for cp311-macosx_x86_64
* GitHub Check: Test Python (2, 3.12)
* GitHub Check: Build C++ (rocm, rocm)
* GitHub Check: Build wheels for cp311-manylinux_x86_64
* GitHub Check: Test Python (2, 3.9)
* GitHub Check: Analyze (python)
* GitHub Check: Build C++ (cuda120, cuda)
* GitHub Check: Test C++ (false)
* GitHub Check: Build wheels for cp311-manylinux_x86_64
* GitHub Check: Test Python (1, 3.12)
* GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
* GitHub Check: Build C++ (cuda, cuda)
* GitHub Check: Test C++ (true)
* GitHub Check: Test Python (1, 3.9)
* GitHub Check: Build C++ (cpu, cpu)
* GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
* GitHub Check: Analyze (c-cpp)

</details>

</details>

<!-- This is an auto-generated comment by CodeRabbit for review status -->

@codecov bot commented Jun 9, 2025

Codecov Report

❌ Patch coverage is 70.00000% with 6 lines in your changes missing coverage. Please review.
✅ Project coverage is 84.79%. Comparing base (9ef43fa) to head (a264eb8).
⚠️ Report is 87 commits behind head on devel.

| Files with missing lines | Patch % | Lines |
|--------------------------|---------|-------|
| deepmd/pd/infer/deep_eval.py | 0.00% | 5 Missing ⚠️ |
| deepmd/infer/deep_eval.py | 75.00% | 1 Missing ⚠️ |

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##            devel    #4783      +/-   ##
==========================================
- Coverage   84.80%   84.79%   -0.01%     
==========================================
  Files         698      698              
  Lines       67787    67818      +31     
  Branches     3542     3542              
==========================================
+ Hits        57486    57506      +20     
- Misses       9168     9177       +9     
- Partials     1133     1135       +2     
```

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.


@iProzd iProzd requested a review from njzjz June 9, 2025 08:32
@iProzd iProzd added this pull request to the merge queue Jun 10, 2025
Merged via the queue into deepmodeling:devel with commit 28a781f Jun 10, 2025
60 checks passed
@iProzd iProzd deleted the D0605_dev_size branch June 10, 2025 04:07
ChiahsinChu pushed a commit to ChiahsinChu/deepmd-kit that referenced this pull request Dec 17, 2025