Squeeze-and-Excitation Net Trained on ImageNet Competition Data

Identify the main object in an image

Released in 2017, these models explore the influence of channel-wise information in convolutional networks. The novel Squeeze-and-Excitation (SE) block adaptively recalibrates channel-wise feature responses by explicitly modeling interdependencies between channels. SE blocks can be added to existing state-of-the-art deep architectures to improve their performance.
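
The channel recalibration performed by an SE block can be sketched numerically in the Wolfram Language. This is a conceptual illustration, not the repository implementation: for an input array x of dimensions {c, h, w} and hypothetical bottleneck weight matrices w1 (dimensions {c/r, c}) and w2 (dimensions {c, c/r}), the block squeezes each channel to a scalar by global average pooling, passes the result through a two-layer gating network and rescales the channels:

```wolfram
(* conceptual sketch of an SE block; w1, w2 are hypothetical bottleneck weights *)
seRecalibrate[x_, w1_, w2_] := Module[{z, s},
  z = Map[Mean[Flatten[#]] &, x];          (* squeeze: global average pooling, one scalar per channel *)
  s = LogisticSigmoid[w2 . Ramp[w1 . z]];  (* excitation: bottleneck MLP with sigmoid gating *)
  s*x                                      (* scale: reweight each channel's feature map *)
  ]
```

With the reduction ratio r = 16 used in the original paper, the extra parameters added per block amount to 2c²/r.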

Number of models: 7

Training Set Information

Performance

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["Squeeze-and-Excitation Net Trained on ImageNet Competition Data"]
Out[1]=
Image

NetModel parameters

This model consists of a family of individual nets, each identified by a specific parameter combination. Inspect the available parameters:

In[2]:=
NetModel["Squeeze-and-Excitation Net Trained on ImageNet Competition Data", "ParametersInformation"]
Out[2]=
Image

Pick a non-default net by specifying the parameters:

In[3]:=
NetModel[{"Squeeze-and-Excitation Net Trained on ImageNet Competition Data", "Type" -> "ResNeXt-101"}]
Out[3]=
Image

Pick a non-default uninitialized net:

In[4]:=
NetModel[{"Squeeze-and-Excitation Net Trained on ImageNet Competition Data", "Type" -> "ResNeXt-101"}, "UninitializedEvaluationNet"]
Out[4]=
Image

Basic usage

Classify an image:

In[5]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/7000dc02-ad62-4fab-b27d-e832f6e6e99c"]
Out[5]=
Image
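
The hidden cell above fetches a test image and evaluates the net on it. Explicitly, with img standing for any test image, the evaluation has the form:

```wolfram
net = NetModel["Squeeze-and-Excitation Net Trained on ImageNet Competition Data"];
pred = net[img]
```

The result pred is the net's top-1 class prediction, decoded by the attached "Class" NetDecoder.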

The prediction is an Entity object, which can be queried:

In[6]:=
pred["Definition"]
Out[6]=
Image

Get a list of available properties of the predicted Entity:

In[7]:=
pred["Properties"]
Out[7]=
Image

Obtain the probabilities of the ten most likely entities predicted by the net:

In[8]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/af36dc5c-3e85-436a-850d-6c1c384853dd"]
Out[8]=
Image
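
The hidden cell above uses the net's "Class" decoder properties. Explicitly, with img standing for any test image, the call has the form:

```wolfram
NetModel["Squeeze-and-Excitation Net Trained on ImageNet Competition Data"][
 img, {"TopProbabilities", 10}]
```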

An object outside the list of ImageNet classes will be misidentified:

In[9]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/344ce014-fdb6-4b59-bcc7-e5a7c9bdbb22"]
Out[9]=
Image

Obtain the list of names of all available classes:

In[10]:=
EntityValue[
 NetExtract[
   NetModel[
    "Squeeze-and-Excitation Net Trained on ImageNet Competition Data"],
    "Output"][["Labels"]], "Name"]
Out[10]=
Image

Feature extraction

Remove the last two layers of the trained net so that the net produces a vector representation of an image:

In[11]:=
extractor = Take[NetModel[
   "Squeeze-and-Excitation Net Trained on ImageNet Competition Data"], {1, -3}]
Out[11]=
Image

Get a set of images:

In[12]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/b0fb0d52-c74e-4990-837b-389f82bdc828"]

Visualize the features of a set of images:

In[13]:=
FeatureSpacePlot[imgs, FeatureExtractor -> extractor, LabelingSize -> 100, ImageSize -> 800]
Out[13]=
Image

Visualize convolutional weights

Extract the weights of the first convolutional layer in the trained net:

In[14]:=
weights = NetExtract[
   NetModel[
    "Squeeze-and-Excitation Net Trained on ImageNet Competition Data"], {"conv1_1_3x3_s2", "Weights"}];

Show the dimensions of the weights:

In[15]:=
Dimensions[weights]
Out[15]=
Image

Visualize the weights as a list of 64 images of size 3×3:

In[16]:=
ImageAdjust[Image[#, Interleaving -> False]] & /@ Normal[weights]
Out[16]=
Image

Transfer learning

Use the pre-trained model to build a classifier for telling apart images of dogs and cats. Create a test set and a training set:

In[17]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/adc7d6fb-4b7e-4168-be06-752a2ff0b590"]
In[18]:=
(* Evaluate this cell to get the example input *) CloudGet["https://www.wolframcloud.com/obj/010d5eac-af65-4e8d-b37e-75759ce5bf88"]

Remove the final linear layer and softmax from the pre-trained net:

In[19]:=
tempNet = Take[NetModel[
   "Squeeze-and-Excitation Net Trained on ImageNet Competition Data"], {1, -3}]
Out[19]=
Image

Create a new net composed of the pre-trained net followed by a linear layer and a softmax layer:

In[20]:=
newNet = NetChain[<|"pretrainedNet" -> tempNet, "linearNew" -> LinearLayer[], "softmax" -> SoftmaxLayer[]|>, "Output" -> NetDecoder[{"Class", {"cat", "dog"}}]]
Out[20]=
Image

Train on the dataset, freezing all the weights except for those in the "linearNew" layer (use TargetDevice -> "GPU" for training on a GPU):

In[21]:=
trainedNet = NetTrain[newNet, trainSet, LearningRateMultipliers -> {"linearNew" -> 1, _ -> 0}]
Out[21]=
Image

Perfect accuracy is obtained on the test set:

In[22]:=
ClassifierMeasurements[trainedNet, testSet, "Accuracy"]
Out[22]=
Image

Net information

Inspect the number of parameters of all arrays in the net:

In[23]:=
NetInformation[
 NetModel[
  "Squeeze-and-Excitation Net Trained on ImageNet Competition Data"], "ArraysElementCounts"]
Out[23]=
Image

Obtain the total number of parameters:

In[24]:=
NetInformation[
 NetModel[
  "Squeeze-and-Excitation Net Trained on ImageNet Competition Data"], "ArraysTotalElementCount"]
Out[24]=
Image

Obtain the layer type counts:

In[25]:=
NetInformation[
 NetModel[
  "Squeeze-and-Excitation Net Trained on ImageNet Competition Data"], "LayerTypeCounts"]
Out[25]=
Image

Export to MXNet

Export the net into a format that can be opened in MXNet:

In[26]:=
jsonPath = Export[FileNameJoin[{$TemporaryDirectory, "net.json"}], NetModel[
   "Squeeze-and-Excitation Net Trained on ImageNet Competition Data"],
   "MXNet"]
Out[26]=
Image

Export also creates a net.params file containing parameters:

In[27]:=
paramPath = FileNameJoin[{DirectoryName[jsonPath], "net.params"}]
Out[27]=
Image

Get the size of the parameter file:

In[28]:=
FileByteCount[paramPath]
Out[28]=
Image

Requirements

Wolfram Language 12.0 (April 2019) or above

Resource History

Reference