[v1.x] ONNX Support for MXNet reverse op #19737
Conversation
Hey @Zha0q1, thanks for submitting the PR.
CI supported jobs: [website, sanity, edge, windows-cpu, unix-gpu, centos-gpu, unix-cpu, windows-gpu, clang, miscellaneous, centos-cpu]
```python
axis = int(attrs.get('axis', 0))
# Transpose takes perm as a parameter, so we must 'pad' the input to a known dim (10 here)
perm = [i for i in range(10)]
perm[0], perm[axis] = axis, 0
print(perm)
```
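To make the fixed-length `perm` concrete, here is a hedged NumPy sketch (not the PR's actual converter code): since ONNX `Transpose` requires a `perm` whose length equals the tensor rank, the input can be 'padded' to a known rank of 10 by appending trailing size-1 dimensions, after which the 10-element `perm` always applies. The helper name `pad_to_rank_10` is made up for illustration.

```python
import numpy as np

def pad_to_rank_10(x):
    # Append trailing 1-sized dims so every input has rank 10
    # (illustrative helper, not from the PR).
    return x.reshape(x.shape + (1,) * (10 - x.ndim))

axis = 1
perm = [i for i in range(10)]
perm[0], perm[axis] = axis, 0  # swap the target axis to the front

x = np.arange(6).reshape(2, 3)
x10 = pad_to_rank_10(x)        # shape (2, 3, 1, 1, 1, 1, 1, 1, 1, 1)
y = x10.transpose(perm)        # target axis is now leading
print(y.shape[:2])             # -> (3, 2)
```

Because the rank is fixed at 10, the same `perm` construction works for any input rank up to 10, which is what the comment in the diff means by padding to a known dim.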
what happens if 10 is not enough?
I think in MXNet 10-d is the largest you can get.
I just checked: it seems we can create tensors with more than 10 dims, but many ops do not support >= 10-d tensors, and in general there is no use case for such high-dimensional tensors.
Got it, thanks for the explanation.
There is no direct mapping for reverse in the ONNX op set, so I had to mimic the behavior with a combination of other ONNX ops.
The performance is most likely not great, but functionally it behaves the same as MXNet's reverse.
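As a rough illustration of the emulation idea (a hedged NumPy sketch, not the PR's exact converter code): pad the input to rank 10, transpose the target axis to the front with the fixed `perm`, reverse the leading axis (in ONNX this can be expressed with a `Slice` using step -1), then transpose back and restore the original shape. The function name `mimic_reverse` is made up for this example.

```python
import numpy as np

def mimic_reverse(x, axis):
    # Illustrative emulation of MXNet's reverse, not the PR's code.
    orig_shape = x.shape
    # 'Pad' to a known rank of 10 with trailing 1-sized dims.
    x10 = x.reshape(orig_shape + (1,) * (10 - x.ndim))
    perm = [i for i in range(10)]
    perm[0], perm[axis] = axis, 0   # move target axis to the front
    t = x10.transpose(perm)
    t = t[::-1]                     # reverse leading axis (Slice, step=-1)
    out = t.transpose(perm)         # the swap perm is its own inverse
    return out.reshape(orig_shape)

x = np.arange(6).reshape(2, 3)
print(mimic_reverse(x, 1))  # columns reversed: [[2, 1, 0], [5, 4, 3]]
```

Note that the axis-swap permutation is an involution, so the same `perm` undoes the transpose, which keeps the op sequence short.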