This repository was archived by the owner on Nov 17, 2023. It is now read-only.
Use multi-tensor sumSQ in clip_global_norm #17652
Merged
eric-haibin-lin merged 2 commits into apache:master on Mar 23, 2020
Conversation
apeforest reviewed on Feb 21, 2020
anirudh2290 added a commit to anirudh2290/mxnet that referenced this pull request on Mar 27, 2020
* 'master' of https://github.com/apache/incubator-mxnet: (192 commits)
  * impl - FFI for np einsum (apache#17869)
  * [Numpy] FFI for diag/diagonal/diag_indices_from (apache#17789)
  * [Numpy] Kron operator (apache#17323)
  * cmake: Set DMLC_LOG_FATAL_THROW only for building mxnet and not for tvm (apache#17878)
  * Add simplified HybridBlock.forward without F (apache#17530)
  * Use FP32 copy of weights for norm (multitensor LAMB optimizer) (apache#17700)
  * Use multi-tensor sumSQ in clip_global_norm (apache#17652)
  * [Numpy] Add op fmax, fmin, fmod (apache#17567)
  * Adding sparse support to MXTensor for custom operators (apache#17569)
  * Update 3rdparty/mkldnn to v1.2.2 (apache#17313)
  * Dynamic subgraph compile support (apache#17623)
  * Refactor cpp-package CMakeLists.txt & add missing inference/imagenet_inference (apache#17835)
  * staticbuild: Fix potential user-assisted execution of arbitrary code (apache#17860)
  * FFI for np.argmax and np.argmin (apache#17843)
  * ffi for roll/rot90 (apache#17861)
  * Skip test_multi_worker_dataloader_release_pool on OS X (apache#17797)
  * add ffi for full_like, binary (apache#17811)
  * HybridBlock.export() to return created filenames (apache#17758)
  * Fix SoftReLU fused operator numerical stability (apache#17849)
  * CI: Test clang10 cpu & gpu builds with -WError (apache#17830)
  ...
MoisesHer added a commit to MoisesHer/incubator-mxnet that referenced this pull request on Apr 10, 2020
* Use multi-tensor sumSQ in clip_global_norm
* fix pylint
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request on May 29, 2020
* Use multi-tensor sumSQ in clip_global_norm
* fix pylint
Description
This PR uses a multi-tensor sum of squares in gluon's clip_global_norm.
Instead of computing the sum of squares of each input array sequentially in a Python loop, all of the per-array sums of squares are computed together in a single multi-tensor operation, which reduces kernel-launch overhead when clipping many gradient arrays at once.
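To illustrate the idea, here is a minimal NumPy sketch of global-norm clipping. The function name and `max_norm` parameter mirror gluon's `clip_global_norm`, but this is not the PR's implementation: the fused multi-tensor sumSQ kernel is emulated by flattening all arrays into one buffer and performing a single reduction, instead of looping over `(a ** 2).sum()` per array.

```python
import numpy as np

def clip_global_norm(arrays, max_norm):
    # Emulate a fused multi-tensor sum of squares: one reduction over all
    # arrays at once, rather than a sequential per-array loop.
    flat = np.concatenate([a.ravel() for a in arrays])
    total_norm = float(np.sqrt(np.dot(flat, flat)))
    # Rescale every array in place only when the global norm exceeds max_norm.
    scale = max_norm / (total_norm + 1e-12)
    if scale < 1.0:
        for a in arrays:
            a *= scale
    return total_norm

# Two arrays with global L2 norm sqrt(3^2 + 4^2) = 5.
params = [np.array([3.0]), np.array([4.0])]
norm = clip_global_norm(params, max_norm=1.0)  # returns 5.0; arrays rescaled
```

After the call, the arrays' combined norm is at most `max_norm`. In the real PR the single reduction is instead a dedicated multi-tensor kernel, so each array keeps its own buffer and no concatenation copy is needed.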
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes