The GAIN framework allows the model to focus on specific areas of an object by changing the attention maps (Grad-CAM).
The flow of the framework is summarized as follows:
- First, we register the feed-forward and backward hooks on the last convolution layer (or block).
- The model generates the attention maps (Grad-CAM) from the forward features and backward gradients, then normalizes them using a threshold and the `sigmoid` function.
- Now the attention maps cover most of the important information of the object. We want to tell the model that those areas are important for the task. This can be done by applying the attention maps to the original image.
- Imagine that the `masked_image` now contains only useless information. When we feed `masked_image` into the model again, we expect the prediction score to be as low as possible. That is the idea of *attention mining* in the paper.
- The losses are computed in `GAINCriterionCallback`.
In this implementation, resnet50 is selected as the base model for the GAIN framework. You can change the backbone and its gradient layer as you want.
```bash
bash bin/train_gain.sh
```