Conversation
Hey @Zha0q1, thanks for submitting the PR.
CI supported jobs: [windows-gpu, unix-cpu, sanity, centos-cpu, website, edge, windows-cpu, miscellaneous, centos-gpu, unix-gpu, clang]
@mxnet-bot run ci [all]
Jenkins CI successfully triggered: [centos-gpu, centos-cpu, website, clang, unix-gpu, miscellaneous, unix-cpu, windows-cpu, edge, windows-gpu, sanity]
# Create the model (ModelProto)
onnx_model = helper.make_model(onnx_graph)

# Run shape inference on the model. Due to an ONNX bug/incompatibility, this may or may not crash
Why do we run shape inference here? It seems we didn't have it previously.
This is an optional step. Running shape inference may help with some runtime optimizations, and it makes the graph easier to visualize, since the nodes will have their input and output shapes labeled.
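A minimal sketch of the optional step being discussed (the toy Relu graph here is hypothetical, standing in for the exporter's real `onnx_graph`):

```python
import logging

import onnx
from onnx import TensorProto, helper, shape_inference

# Hypothetical one-node graph standing in for the exporter's output.
node = helper.make_node("Relu", inputs=["x"], outputs=["y"])
onnx_graph = helper.make_graph(
    nodes=[node],
    name="toy_graph",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3])],
)

# Create the model (ModelProto), as in the snippet under review.
onnx_model = helper.make_model(onnx_graph)

# Optional: annotate every value in the graph with its inferred shape.
# Guarded because ONNX shape inference can fail on some op/IR combinations.
try:
    onnx_model = shape_inference.infer_shapes(onnx_model)
except Exception as exc:
    logging.warning("Shape inference failed, exporting without it: %s", exc)

onnx.checker.check_model(onnx_model)
```

With the shape annotations in place, viewers such as Netron can label each node's input and output shapes.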
Shape inference is off by default.
Makes sense. We can keep it here and turn it off if it crashes in some cases later on.
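One way such an opt-in flag could be wired up (the function and parameter names here are assumptions for illustration, not the PR's actual signature):

```python
import logging

from onnx import helper, shape_inference


def make_onnx_model(onnx_graph, run_shape_inference=False):
    """Build a ModelProto; optionally run the (possibly fragile) shape inference."""
    model = helper.make_model(onnx_graph)
    if run_shape_inference:  # off by default, per the discussion above
        try:
            model = shape_inference.infer_shapes(model)
        except Exception as exc:
            logging.warning("Shape inference failed, skipping: %s", exc)
    return model
```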
RFC #20000