A simple CNN for the MNIST dataset – X – filling some gaps in filter visualization

I continue my series of articles on a CNN for the MNIST dataset.

A simple CNN for the MNIST dataset – IX – filter visualization at a convolutional layer
A simple CNN for the MNIST dataset – VIII – filters and features – Python code to visualize patterns which activate a map strongly
A simple CNN for the MNIST dataset – VII – outline of steps to visualize image patterns which trigger filter maps
A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part
A simple CNN for the MNIST dataset – V – about the difference of activation patterns and features
A simple CNN for the MNIST dataset – IV – Visualizing the activation output of convolutional layers and maps
A simple CNN for the MNIST dataset – III – inclusion of a learning-rate scheduler, momentum and a L2-regularizer
A simple CNN for the MNIST datasets – II – building the CNN with Keras and a first test
A simple CNN for the MNIST datasets – I – CNN basics

In the last article we harvested the fruits of previous efforts. We produced a variety of input image patterns which triggered certain maps of the innermost convolutional layer of our CNN and the filters behind them maximally. I called such pixel-patterns OIPs. (I am still careful to avoid the expression “feature”, which many authors use as a term describing a physical entity identified by the human brain – a connotation which I do not like …)

Although my code for creating OIPs allows for a variation of fluctuations on 4 different length scales, it proved to be hard to find OIPs for quite a lot of maps at the highest convolutional layer and for their filters. I could not always find a combination of initial random pixel fluctuations which led to loss values > 0. More precisely: I did not find a pattern by trial and error on a reasonably short timescale of some minutes.

We already know that OIP pixel patterns for the innermost convolutional layer are a bit artificial:

  • They are idealizations; the displayed pattern may not be present in the same form in the real input images which were used during training.
  • They may contain repetitions of sub-patterns at different positions.

In addition, the images in the last article showed us that some maps at the innermost Conv-layer are sensitive to rather complicated and specific patterns on relatively large length scales.

This is to be expected, as at least the filters on the higher convolutional levels accumulate information on a coarse level of image coverage: Filtered information coming from originally large areas of the image is in the end convolved down to grids of (3×3) neurons, i.e. onto low resolution maps. Filters at this convolutional depth can therefore require relatively large scale patterns to be passed. So, it is no real wonder that some maps do not react at all to random statistical fluctuations on very short length scales, e.g. on a two pixel scale. The activation of the respective neurons may stay at zero.

In this article I shall describe a simple method which allowed me to create OIP patterns for around half of the maps (at the 3rd Conv-level) for which I did not have any luck before. This is done by a “precursor run” which tests the reaction of a map to a large sample of input images with pattern variations on relatively long length scales.

Systematic analysis? My simple approach …

What is the basic idea? Let us assume that we cover the original image surface by a grid of squares, e.g. by 16 (7×7)-squares – where (7×7) means a square with a side length of 7 pixels. We then assign a constant grey-value to all the pixels in such a square. Instead of picking random values in the range [0, 255] we use a limited number of N distinct normalized values in an interval [0, 1].

We then investigate all combinations of distinct value distributions across the 16 squares. For each combination we construct an input image with bicubic interpolation on the original (28×28) scale and standardize the resulting distribution of pixel values. (The standardization is required for my CNN, which was trained on standardized images). Thus we produce a (huge) number of input images with smooth large scale fluctuations, which we then can present to our OIP-optimization algorithm for a test of a map’s reaction.
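
To make this concrete, here is a minimal sketch of such an image construction. This is a helper function of my own for illustration, not a method of my class; it uses scipy’s cubic spline interpolation as a stand-in for true bicubic interpolation and assumes that the squares form a quadratic grid:

import numpy as np
from scipy.ndimage import zoom

def build_test_image(values, grid_dim=3, img_dim=28):
    # values: tuple of grid_dim*grid_dim grey-values, e.g. out of (0.2, 0.5, 0.8)
    small = np.array(values, dtype=np.float32).reshape(grid_dim, grid_dim)
    # smooth interpolation (cubic splines, order=3) up to the MNIST dimensions
    img = zoom(small, img_dim / grid_dim, order=3)
    # standardize the pixel value distribution - required because the CNN
    # was trained on standardized images
    img = (img - img.mean()) / (img.std() + 1.e-7)
    return img.reshape(1, img_dim, img_dim, 1)   # a batch with one grey image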

Do we have a chance to find OIPs by systematic trials?

Now, do we have a real chance to work a bit more systematically with the described approach? Unfortunately, the answer is: Not really – at least not with PC equipment and for fluctuations at middle length scales with more than 5 distinct pixel values. The reason, expressed in terms of Fourier series, is that the number of amplitude combinations for multiple sine waves grows exponentially with the number of discrete amplitude values we pick out of an interval.

We see the limitations already in our simplified approach. Let us assume that we pick 5 distinct and normalized pixel values in the interval ]0, 1[. Let us further assume that we cover the whole image area (of size 28×28) with a 3×3 arrangement of (9×9)-squares, i.e. with 9 squares, each with a constant pixel value. (We ignore the fact that 28/9 = 3.11 and trust in bicubic interpolation to take care of the 0.11 🙂 )

Even under these conditions we already get around 5**9 = 1.953 million pattern variations for which we have to test a map’s response. For each of these patterns we would have to follow at least 4 to 10 iterations (= epochs) of the optimization loop to identify the input image as a promising candidate for a full optimization or not. This means that we need 8 to 20 million full gradient calculations on a parameter space with 784 (= 28×28) dimensions. Hard work even for graphics cards (at the consumer level). At smaller length scales, as induced e.g. by a (7×7) coverage grid, we go far beyond standard calculation capacities on PC hardware.

What was/is within my computational reach? If we reduced the number of distinct pixel values to 3 (e.g.: 0.2, 0.5, 0.8) and concentrated on large scale fluctuations – e.g. based on (9×9)-squares – we would have 3**9 = 19683 combinations to investigate.
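
Enumerating these combinations is trivial with itertools; a sketch reusing the helper from above:

from itertools import product

pixel_values = (0.2, 0.5, 0.8)
# lazy iterator over all 3**9 = 19683 value distributions across the 9 squares
for values in product(pixel_values, repeat=9):
    img = build_test_image(values)   # helper sketched above
    # ... run a short optimization test for the chosen map on "img" ...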

I had to work a bit on the respective Python code to get it running under Tensorflow 2.2.1/Cuda 10.2 with a reasonable GPU load of around 30%. With 10 epochs for an optimization trial run for each image, a complete run over 20000 images takes less than 5 minutes on my old GTX960 graphics card. With a modern card, a recent CPU and some more optimization one would probably arrive at values significantly below a minute. So, systematically working with large scale fluctuations is possible on PC-equipment!

Note that a coverage of the image surface with (9×9)-squares corresponds to the resolution which the (3×3)-maps of the innermost Conv-layers represent! (Including padding in the filter definitions). I assume that it is reasonable to work and test on this scale.

Selection of 8 promising candidates out of 19683

So, I added some “precursor” methods to my class My_OIP to prepare and perform systematic runs for all of the maps for which I had not found a pattern, yet. I chose the following 3 discrete pixel values: [0.2, 0.5, 0.8].

The resulting (9×9) images were rescaled with a bicubic interpolation to the MNIST size of (28×28) and afterwards standardized.

During the test of a map’s response I selected 8 candidates out of 19683 for subsequent thorough optimization runs with 1200 epochs. The selection was done by looking at the largest loss values reached after 10 optimization epochs. I then applied the procedure to all 67 maps for which I had not got an OIP, yet.

Note that we cannot be sure whether 10 optimization iterations really give us a perfect indication of the highest loss values which would be reached after a full optimization run with 600 to 1200 epochs. So, it is worthwhile to play around with the number of test epochs; the selection of the input fluctuation patterns may change.
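
The selection logic itself can be kept simple. Below is a sketch with a hypothetical helper “short_test_run()” standing in for a 10-epoch optimization trial and an arbitrary map index; my class implements this differently in its precursor methods:

import heapq
from itertools import product

# keep the 8 images with the largest loss after a short 10-epoch test run
candidates = []
for i, values in enumerate(product((0.2, 0.5, 0.8), repeat=9)):
    img  = build_test_image(values)                        # helper sketched above
    loss = short_test_run(img, map_index=56, n_epochs=10)  # hypothetical helper
    heapq.heappush(candidates, (loss, i, values))
    if len(candidates) > 8:
        heapq.heappop(candidates)                          # drop the smallest loss
# "candidates" now holds the 8 most promising fluctuation patterns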

I then got results for 33 of the 67 maps which had no OIP before. I present the OIP-images below. This covers almost 50% of the missing OIPs. So, my computational efforts were well invested.

OIPs for additional 33 maps

Below I present MNIST-related OIP-images for the maps with the following indices on the 3rd Conv-layer of my (!) CNN:
1, 3, 9, 15, 16, 29, 35, 37, 38, 44, 46, 50, 51, 55, 59, 60, 70, 78, 81, 91, 93, 94, 96, 97, 101, 104, 108, 109, 111, 112, 116, 122, 125.
This time without contrast enhancement – as I was too lazy to integrate it into the precursor code.
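
If you want to add a simple form of contrast enhancement yourself, clipping and linear rescaling is one common option. The following sketch is my own variant and not necessarily identical to the enhancement used for the earlier images:

import numpy as np

def enhance_contrast(img, n_std=2.0):
    # clip the pixel values at +/- n_std standard deviations around the mean ...
    mean, std = img.mean(), img.std() + 1.e-7
    img_c = np.clip(img, mean - n_std * std, mean + n_std * std)
    # ... and rescale linearly to the interval [0, 1] for display
    return (img_c - img_c.min()) / (img_c.max() - img_c.min() + 1.e-7)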

(Image: visualization of CNN filters and maps for MNIST – 3rd Conv-layer – part 2)

CNNs are fun, aren’t they?

Unique OIPs?

An interesting question is: Are the OIPs really unique per map in the sense that we produce the same OIP-images for different fluctuations on the input images? Well, the overall form of a pattern is rather unique, but translations in position may occur and details may differ. Below, I show you different derived images for some maps:

Map 46:

Map 81:

Map 116:

Map 125:

Map 127:

Conclusion

By a systematic investigation of the reaction of maps to large scale fluctuations we can find OIPs for maps where a trial and error approach is not sufficient to trigger a response of the filters. I will discuss the Python code for a related “precursor run” in my next post:

A simple CNN for the MNIST dataset – XI – Python code for filter visualization and OIP detection.

 

A simple CNN for the MNIST dataset – IX – filter visualization at a convolutional layer

In the last article I explained the code to visualize patterns which trigger a chosen feature map of a trained CNN strongly. In this series we work with the MNIST data, but the basic principles can be modified, extended and applied to other typical datasets (e.g. the CIFAR datasets).

A simple CNN for the MNIST dataset – VIII – filters and features – Python code to visualize patterns which activate a map strongly
A simple CNN for the MNIST dataset – VII – outline of steps to visualize image patterns which trigger filter maps
A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part
A simple CNN for the MNIST dataset – V – about the difference of activation patterns and features
A simple CNN for the MNIST dataset – IV – Visualizing the activation output of convolutional layers and maps
A simple CNN for the MNIST dataset – III – inclusion of a learning-rate scheduler, momentum and a L2-regularizer
A simple CNN for the MNIST datasets – II – building the CNN with Keras and a first test
A simple CNN for the MNIST datasets – I – CNN basics

We shall now apply our visualization code to some selected maps of the last convolutional layer of our CNN structure. We run the code and do the plotting in a Jupyter environment. Creating an image of an OIP-pattern which activates a map after passing its filters takes a second at most.

Our algorithm will evolve patterns out of an initial, seeming “chaos” – but it will not do so for all combinations of statistical input data and a chosen map. We shall investigate this problem in more depth in the next articles. In the present article I first want to present selected OIP-pattern images for many of the 128 feature maps on the third convolutional layer of my simple CNN, which I had trained on the MNIST dataset of digits.

Initial Jupyter cells

I recommend opening a new Jupyter notebook for our experiments. We put the code for loading the required libraries (see the last article) into a first cell. A second Jupyter cell controls the use of a GPU:

Jupyter cell 2:

gpu = True
if gpu: 
    GPU = True;  CPU = False; num_GPU = 1; num_CPU = 1
else: 
    GPU = False; CPU = True;  num_CPU = 1; num_GPU = 0

config = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=6,
                        inter_op_parallelism_threads=1, 
                        allow_soft_placement=True,
                        device_count = {'CPU' : num_CPU,
                                        'GPU' : num_GPU}, 
                        log_device_placement=True
                       )
config.gpu_options.per_process_gpu_memory_fraction=0.35
config.gpu_options.force_gpu_compatible = True
B.set_session(tf.compat.v1.Session(config=config))

In a third cell we then run the code for the My_OIP class definition which I discussed in my last article.

Loading the CNN-model

A fourth cell contains just one line which loads the CNN-model from a file:

# Load the CNN-model 
myOIP = My_OIP(cnn_model_file = 'cnn_best.h5', layer_name = 'Conv2D_3')

The output looks as follows:

You clearly see the OIP-sub-model which relates the input images to the output of the chosen CNN-layer; in our case the innermost layer “Conv2D_3”. The maps there have a very low resolution; they consist of only (3×3) nodes, but each of them covers filtered information from relatively large input image areas.

Creation of the initial image with statistical fluctuations

With the help of a fifth Jupyter cell we run the following code to build an initial image based on statistical fluctuations of the pixel values:

# build initial image 
# *******************

# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 5
fig1 = plt.figure(1)
ax1_1 = fig1.add_subplot(121)
ax1_2 = fig1.add_subplot(122)

# OIP function to setup an initial image 
initial_img = myOIP._build_initial_img_data(   strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (0.2, 0.2, 0.0, 0.0),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False)

Note that I did not use any small scale fluctuations in my example. The reason is that the map chosen later on reacts better to large scale patterns. But you are of course free to vary the parameters of the list “li_facts” for your own experiments. In my case the resulting output looked like:

The two displayed images should not show any differences for the current version of the code. Note that your initial image may look very different, as our code produces random fluctuations of the pixel values. I suggest that you play around a bit with the parameters of “li_facts” and “li_dim_steps”.
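
For orientation, what “strategy 0” roughly amounts to can be sketched in a few lines. This is a simplification of my own, not the actual body of _build_initial_img_data():

import numpy as np
from scipy.ndimage import zoom

def initial_fluctuations(li_dim_steps, li_facts, img_dim=28):
    # weighted superposition of random fluctuations on 4 length scales
    img = np.zeros((img_dim, img_dim), dtype=np.float32)
    for (d, _), fact in zip(li_dim_steps, li_facts):
        if fact == 0.0:
            continue                 # this length scale is switched off
        rand = np.random.uniform(-1.0, 1.0, (d, d)).astype(np.float32)
        img += fact * zoom(rand, img_dim / d, order=3)   # smooth upscaling
    return img.reshape(1, img_dim, img_dim, 1)

initial = initial_fluctuations(((3,3), (7,7), (14,14), (28,28)),
                               (0.2, 0.2, 0.0, 0.0))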

Creation of an OIP-pattern out of random fluctuations

Now we are well prepared to create an image which triggers a selected CNN-map strongly. For this purpose we run the following code in yet another Jupyter cell:

# Derive a single OIP from an input image with statistical fluctuations of the pixel values 
# ******************************************************************

# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure()
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

map_index = 120         # map-index we are interested in 
n_epochs = 600          # should be divisible by 5  
n_steps = 6             # number of intermediate reports 
epsilon = 0.01          # step size for gradient correction  
conv_criterion = 2.e-4  # criterion for a potential stop of optimization 

myOIP._derive_OIP(map_index = map_index, n_epochs = n_epochs, n_steps = n_steps, 
                  epsilon = epsilon , conv_criterion = conv_criterion, b_stop_with_convergence=False )

The first statements prepare a grid of at most 8 axis-frames which we shall use to display intermediate images produced by the optimization loop. You see that I chose the map with number “120” within the selected layer “Conv2D_3”. I allowed for 600 “epochs” (= steps) of the optimization loop. I requested the display of 6 intermediate images and related printed information about the associated loss values.

The printed output in my case was:

Tensor("Mean_10:0", shape=(), dtype=float32)
shape of oip_loss =  ()
GradientTape watch activated 
*************
Start of optimization loop
*************
Strategy: Simple initial mixture of long and short range variations
Number of epochs =  600
Epsilon =   0.01
*************
li_int =  [9, 18, 36, 72, 144, 288]

step 0 finalized
present loss_val =  7.3800406
loss_diff =  7.380040645599365

step 9 finalized
present loss_val =  16.631456
loss_diff =  1.0486774

step 18 finalized
present loss_val =  28.324467
loss_diff =  1.439024

step 36 finalized
present loss_val =  67.79664
loss_diff =  2.7197113

step 72 finalized
present loss_val =  157.14531
loss_diff =  2.3575745

step 144 finalized
present loss_val =  272.91815
loss_diff =  0.9178772

step 288 finalized
present loss_val =  319.47913
loss_diff =  0.064941406

step 599 finalized
present loss_val =  327.4784
loss_diff =  0.020477295

Note the logarithmic spacing of the intermediate steps. You recognize the approach towards a maximum of the loss value during optimization and the convergence at the end: the relative change of the loss at step 600 amounts to only 0.02/327 ≈ 6.1e-5.
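
The spacing of the reporting points can be reconstructed with a few lines. This is my reconstruction, consistent with the printed list “li_int” above, not necessarily the exact class code:

import math

n_epochs, n_steps = 600, 6
n_el   = math.floor(n_epochs / 2**n_steps)       # = 9
li_int = [n_el * 2**j for j in range(n_steps)]   # [9, 18, 36, 72, 144, 288]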

The intermediate images produced by the algorithm are displayed below:

The systematic evolution of a pattern which I called the “Hand of MNIST” in another article is clearly visible. However, you should be aware of the following facts:

  • For a map with the number 120 your OIP-image may look completely different. Reason 1: Your map 120 of your trained CNN-model may represent a different unique filter combination. This leads to the interesting question whether two training runs of a CNN for statistically shuffled images of one and the same training set produce the same filters and the same map order. We shall investigate this problem in a forthcoming article. Reason 2: You may have started with different random fluctuations in the input image.
  • Whenever you repeat the experiment for a new input image, for which the algorithm converges, you will get a different output regarding details – even if the major over-all features of the “hand”-like pattern are reproduced.
  • For quite a number of trials you may run into a frustrating message saying that the loss remains at a value of zero and that you should try another initial input image.

The last point is due to the fact that some specific maps may not react at all to some large scale input image patterns or to input images with dominating fluctuations on small scales only. It depends …

Dependency on the input image and its fluctuations

Already in previous articles of this series I discussed the point that there may be a relatively strong dependency of our output pattern on the mixture of long range and short range fluctuations of the pixel values in the initial input image. With respect to all possible statistical input images – of which there are quite many (255**784) – a specific input image allows us only to approach one local maximum of the loss hyperplane – one maximum out of many. And only if the map reacts to the input image at all. Below I give you some examples of input images to which my CNN’s map with number 120 does not react:

If you just play around a bit you will see that even in the case of a successful optimization the final OIP-images differ a bit and that also the eventual loss values vary. The really convincing point for me was that I got a hand-like pattern every time the algorithm converged – with variations and differences, but structurally similar. I have demonstrated this point already in the article

Just for fun – the „Hand of MNIST“-feature – an example of an image pattern a CNN map reacts to

See the images published there.

Patterns that trigger the other maps of our CNN

Finally, I show you a sequence of images with OIP-patterns for the maps with the following indices:
0, 2, 4, 7, 8, 12, 17, 18, 19, 20, 21, 23, 27, 28, 30, 31, 32, 33, 34, 36, 39, 41, 42, 45, 48, 52, 54, 56, 57, 58, 61, 62, 64, 67, 68, 71, 72, 76, 80, 82, 84, 85, 86, 87, 90, 92, 102, 103, 105, 106, 107, 110, 114, 115, 117, 119, 120, 122, 123, 126, 127.
Each of the images is displayed as calculated and with contrast enhancement.



(Image: visualization of CNN filters and maps for MNIST – 3rd Conv-layer – part 1)

 

So, this is basically the essence of what our CNN “thinks” about digits after a MNIST training! Just joking – there is no “thought” present in our simple static CNN, just the application of filters which were found by a previous mathematical optimization procedure. Filters which fit certain geometrical pixel correlations in input images …

You certainly noticed that I did not find OIP patterns for many maps, yet. I fiddled around a bit with the parameters, but got no reaction from my maps with the numbers 1, 3, 5, 6, 9, 10, 11 …. The loss stayed at zero. This does not mean that there is no pattern which triggers those maps. However, it may be a very special one for which simple fluctuations on short scales are not a good starting point for an optimization.

Therefore, it would be good to have some kind of precursor run which investigates the reaction of a map towards a sample of (long scale) fluctuations before we run a full optimization. The next article

A simple CNN for the MNIST dataset – X – filling some gaps in filter visualization

describes a strategy for a more systematic approach and shows some results. A further article will afterwards discuss the required code.

 

A simple CNN for the MNIST dataset – VIII – filters and features – Python code to visualize patterns which activate a map strongly

Our series about a simple CNN trained on the MNIST data turns back to some coding.

A simple CNN for the MNIST dataset – VII – outline of steps to visualize image patterns which trigger filter maps
A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part
A simple CNN for the MNIST dataset – V – about the difference of activation patterns and features
A simple CNN for the MNIST dataset – IV – Visualizing the activation output of convolutional layers and maps
A simple CNN for the MNIST dataset – III – inclusion of a learning-rate scheduler, momentum and a L2-regularizer
A simple CNN for the MNIST datasets – II – building the CNN with Keras and a first test
A simple CNN for the MNIST datasets – I – CNN basics

In the last article I discussed an optimization algorithm which should be able to create images of pixel patterns which trigger selected feature maps of a CNN strongly. In this article we shall focus on the required code elements. I again want to emphasize that I apply and modify some basic ideas which I read in a book of F. Chollet and in a contribution of a guy called Mohamed to a discussion at kaggle.com (see my last article for references). A careful reader will notice differences not only with respect to coding; there are major differences regarding the normalization of intermediate data and the construction of input images. To motivate the latter point I first want to point out that OIPs are idealized technical abstractions and that not all maps may react to purely statistical data on short length scales.

Images of OIPs which trigger feature maps are artificial technical abstractions!

In the last articles I made an explicit distinction between two types of patterns which we can analyze in the context of CNNs:

  • FCP: A pattern which emerges within and across activation maps of a chosen (deep) convolutional layer due to filter operations which the CNN applied to a specific input image.
  • OIP: A pattern which is present within the pixel value distribution of an input image and to which a CNN map reacts strongly.

Regarding OIPs there are some points which we should keep in mind:

  • We do something artificial when we create an image of an elementary OIP pattern to which a chosen feature map reacts. Such an OIP is already an abstraction in the sense that it reflects an idealized pattern – i.e. a specific geometrical correlation between pixel values of an input image – which passes a very specific filter combination. We forget about all other figurative elements of the input image which may trigger other maps.
  • There is an additional subtle point regarding our present approach to OIP-visualization:
    Our algorithm – if it works – will lead to OIP images which trigger a map’s neurons maximally on average. What does “average” mean with respect to the image area? A map always covers the whole input image area. Now let us assume that a filter combination of lower layers reacts to an elementary pattern limited in size and located somewhere on the input image. But some filters or filter combinations may be capable of detecting such a pattern at multiple locations of an input image.
    One example would be a crossing of two relatively thin lines. Such a crossing could appear at many places in an input image. In fact, a trained CNN has seen several thousand images of handwritten “4”s where the crossing of the horizontal and the vertical line actually appeared at many different locations and may have learned about this when establishing filters. Under such conditions it is obvious that a map gets optimally activated if a relatively small elementary pattern appears multiple times within the image which our algorithm artificially constructs out of initial random data.
    So our algorithm will with a high probability lead to OIP images which consist of a combination or a superposition of elementary patterns at multiple locations. In this sense an OIP image constructed due to the rule of a maximum average activation is another type of idealization. In a real MNIST image the re-occurrence of elementary patterns may not be present at all. Therefore, we should be careful and not directly associate the visualization of a pattern created by our algorithm with an elementary OIP or “feature”.
  • The last argument can in some cases also be reverted: There may be unique large scale patterns which can only be detected by filters of higher (i.e. deeper) convolutional levels which filter coarsely and with respect to relatively large areas of the image. In our approach such unique patterns may only appear in OIP images constructed for maps of the deepest convolutional layer.

Independence of the statistical data of the input image?

In the last article I showed you already some interesting patterns for certain maps which emerged from randomly varying pixel values in an input image. The fluctuations were forced into patterns by the discussed optimization loop. An example of the resulting evolution of the pixel values is shown below: With more and more steps of the optimization loop an OIP-pattern emerges out of the initial “chaos”.

Images were taken at optimization steps 0, 9, 18, 36, 72, 144, 288, 599 of the loop. Convergence is measured by the change of the loss value between two reporting steps divided by the present loss value:
3.6/41 => 3.9/76 => 3.3/143 => 2.3/240 => 0.8/346 => 0.15/398 => 0.03/418

As we work with gradient ascent in OIP detection a lower loss means a lower activation of the map.
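
In code this criterion boils down to a one-liner; a sketch of my own, in the same spirit as the parameter “conv_criterion” of the class discussed below:

def has_converged(loss_val, loss_old, conv_criterion=2.e-4):
    # relative change of the loss between two reporting steps,
    # compared against a small threshold
    return loss_val > 0.0 and abs(loss_val - loss_old) / loss_val < conv_criterion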

If we change the wavelength of the initial input fluctuations we get a somewhat, though not fundamentally different result (actually with a lower loss value of 381):

This gives us confidence in the general usability of the method. However, I would like to point out that during your own experiments you may also experience the contrary:

For some maps and for some initial statistical input data varying at short length scales only, the optimization process will not converge. It will not even start to do so. Instead you may experience a zero activation of the selected map during all steps of the optimization for a given random input.

You should not be too surprised by this fact. Our CNN was optimized to react to patterns present in written digits. As digits have specific elements (features?) such as straight lines, bows, circles, line-crossings, etc., we should expect that not every input will trigger the activation of a selected map which reacts to pixel value variations at relatively large length scales. Therefore, it is helpful to be able to vary the statistical input pattern at different length scales when you start your hunt for a nice visualization of an OIP and/or elementary feature.

All in all we cannot exclude a dependency on the statistical initial input image fluctuations. Our algorithm will find a maximum with respect to the specific input data fluctuations presented to it. Due to this point we should always be aware of the fact that the visualizations produced by our algorithm will probably mark a local maximum in the multidimensional parameter or representation space – not a global one. But even a local maximum may reveal significant sub-structures a specific filter combination is sensitive to.

Libraries

To build a suitable code we need to import some libraries, which you first must install into your virtual Python environment:

  
import numpy as np
import scipy
import time 
import sys 
import math

from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from tensorflow import keras as K
from tensorflow.python.keras import backend as B 
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from tensorflow.keras import optimizers
from tensorflow.keras.optimizers import schedules
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist

from tensorflow.python.client import device_lib

import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import matplotlib.patches as mpat 

import os 
from os import path as path

 

Basic code elements to construct OIP patterns from statistical input image data

To develop some code for OIP visualizations we follow the outline of steps discussed in the last article. To encapsulate the required functionality we set up a Python class. The following code fragment gives an overview about some variables which we are going to use. In the comments and explanations I sometimes used the word “reference” to mark a difference between

  • addresses to some intermediate Tensorflow 2 [TF2] objects handled by a “watch()“-method of TF2’s GradientTape-object for eager calculation of gradients
  • and eventual return objects (of a function based on keras’ backend) filled with values which we can directly use during the optimization iterations for corrections.

This is only a reminder of complex internal TF2 background operations based on complex layer models; I do not want to stress any difference in the sense of pointers and objects. Actually, I do not even know the precise programming patterns used behind TF2’s watch()-mechanism; but according to the documentation it basically records all operations involving the “watched” objects. The objective is the ability to produce gradient values of defined functions with respect to any variable changes instantaneously in eager execution later on.
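
A minimal standalone example of this watch()-mechanism, independent of our class:

import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)              # record all operations involving x
    y = x * x                  # y = x**2
grad = tape.gradient(y, x)     # dy/dx = 2*x
print(grad.numpy())            # => 6.0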

 
# class to produce images of OIPs for a chosen CNN-map
# *******************************************************
class My_OIP:
    '''
    Version 0.2, 01.09.2020
    ~~~~~~~~~~~~~~~~~~~~~~~~~
    This class allows for the creation and the display of OIP-patterns 
    to which a selected map of a CNN-model reacts   
    
    Functions:
    ~~~~~~~~~~
    1) _load_cnn_model()             => load cnn-model
    2) _build_oip_model()            => build an oip-model to create OIP-images
    3) _setup_gradient_tape()        => Implements TF2 GradientTape to watch input data
                                        for eager gradient calculation
    4) _oip_strat_0_optimization_loop():
                                     => Method implementing a simple strategy to create OIP-images 
                                        based on superposition of random data on large distance scales
    5) _oip_strat_1_optimization_loop():
       (NOT YET DEVELOPED)           => Method implementing a complex strategy to create OIP-images 
                                        based on partially evolved intermediate image 
                                        getting enriched by small scale fluctuations
    6) _derive_OIP():                => Method used externally to start the creation of 
                                        an OIP for a chosen map 
    7) _build_initial_img_data()     => Method to construct an initial image based on 
                                        a superposition of random data on 4 different length scales 
    
    
    Requirements / Preconditions:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    In the present version 
    * a CNN-model is assumed which works with standardized (!) input images,
    * a CNN-model trained on MNIST data is assumed,
    * exactly 4 length scales for random data fluctuations are used
      to compose initial statistical image data 
      (roughly with a factor of 2 between them) 
      
    
    '''
    def __init__(self, cnn_model_file = 'cnn_best.h5', 
                 layer_name = 'Conv2D_3', map_index = 0, 
                 img_dim = 28, 
                 b_build_oip_model = True  
                ): 
        '''
        Input: 
        ~~~~~~
            cnn_model_file:     Name of a file containing  a full CNN-model
                                can later be overwritten by _load_cnn_model()
            layer_name:         We can define a layer name we are interested in already when starting; s. below
            map_index:          We may define a map we are interested in already when starting; s. below
            img_dim:            The dimension of the assumed quadratic images 
        
        Major internal variables:
        **************************
            _cnn_model:             A reference to the CNN model object
            _layer_name:            The name of convolutional layer 
                                    (can be overwritten by method _build_oip_model() ) 
            _map_index:             index of the map in the layer's output array 
                                    (can later be overwritten by other methods) 
            _r_cnn_inputs:          A reference to the input tensor of the CNN model (here: 1 image - NOT a batch of images)
            _layer_output:          Tensor with all maps of a certain layer
           
            _oip_submodel:          A new model connecting the input of the cnn-model with a certain map
            
            _tape:                  An instance of TF2's GradientTape-object 
                                    Watches input, output, loss of a model 
                                    and calculates gradients in TF2 eager mode 
            _r_oip_outputs:         A reference to the output of the new OIP-model 
            _r_oip_grads:           Reference to gradient tensors for the new OIP-model 
            _r_oip_loss:            Reference to a loss defined by operations on the OIP-output  
            _val_oip_loss:          Loss value calculated for the current input image  
            
            _iterate:               Keras backend function to invoke the new OIP-model 
                                    and calculate both loss and gradient values (in TF2 eager mode) 
                                    This is the function to be used in the optimization loop for OIPs
            
            Parameters controlling the optimization loop:
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            _oip_strategy:          0, 1 - There are two strategies to evolve OIP patterns out of statistical data - only the first strategy is supported in this version  
                                    0: Simple superposition of fluctuations at different length scales
                                    1: Evolution over partially evolved images based on longer scale variations 
                                       enriched with fluctuations on shorter length scales 
                                    Both strategies can be combined with a precursor calculation 
            
            
            _b_oip_precursor:       True/False - Use a precursor analysis of long range variations 
                                    regarding loss => search for optimum variations for a given map
                                    (Some initial input images do not trigger a map at all or 
                                    sub-optimally => We test out multiple initial fluctuation patterns). 
            
            _ay_epochs:             A list of 4 optimization epochs to be used whilst 
                                    evolving the img data via strategy 1 and intermediate images 
            _n_epochs:              Number of optimization epochs to be used with strategy 0 
            
            _n_steps:               Defines at how many intermediate points we show images and report 
                                    on the optimization process 
            
            _epsilon:               Factor to control the amount of correction imposed by 
                                    the gradient values of the OIP-model 
            
            Input image data of the OIP-model and references to it 
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            _initial_precursor_img: The initial image to start a precursor optimization with.
                                    Would normally be an image of only long range fluctuations. 
            _precursor_image:       The evolved image updated during the precursor loop 
            
            _initial_inp_img_data:  A tensor representing the data of the input image 
            _inp_img_data:          A tensor representing the data of the input img during optimization  
            _img_dim:               We assume quadratic images to work with 
                                    with dimension _img_dim along each axis
                                    For the time being we only support MNIST images 
            
            Parameters controlling the composition of random initial image data 
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            _li_dim_steps:          A list of the intermediate dimensions for random data;
                                    these data are smoothly scaled to the image dimensions 
            _ay_facts:              A Numpy array of 4 factors to control the amount of 
                                    contribution of the statistical variations 
                                    on the 4 length scales to the initial image
                                   
        '''    
        
        # Input data and variable initializations
        # ****************************************
        
        # the model 
        self._cnn_model_file = cnn_model_file
        self._cnn_model      = None 
        
        # the layer 
        self._layer_name = layer_name
        # the map 
        self._map_index  = map_index
        
        # References to objects and the OIP sub-model
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._r_cnn_inputs  = None # reference to input of the CNN_model
                                   # also used in the OIP-model  
        
        self._layer_output  = None
        self._oip_submodel  = None
        
        self._tape          = None # TF2 GradientTape variable
        # some "references"
        self._r_oip_outputs = None # output of the OIP-submodel to be watched 
        self._r_oip_grads   = None # gradients determined by GradientTape   
        self._r_oip_loss    = None 
        # respective values
        self._val_oip_grads = None
        self._val_oip_loss  = None
        
        # The Keras function to produce concrete outputs of the new OIP-model  
        self._iterate       = None
        
        # The strategy to produce an OIP pattern out of statistical input images 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~
        # 0: Simple superposition of fluctuations at different length scales 
        # 1: Move over 4 intermediate images - partially optimized 
        self._oip_strategy = 0
        
        # Parameters controlling the OIP-optimization process 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~
        # Use a precursor analysis ? 
        self._b_oip_precursor = False
       
        # number of epochs for optimization strategy 1
        self._ay_epochs    = np.array((20, 40, 80, 400))
        len_epochs         = len(self._ay_epochs)

        # number of epochs for optimization strategy 0
        self._n_epochs     = self._ay_epochs[len_epochs-1]   
        self._n_steps      = 7   # divides the number of n_epochs into n_steps 
                                 # to produce intermediate outputs
        
        # size of corrections by gradients
        self._epsilon       = 0.01 # step-size for gradient correction
        
        
        # Input images and references to it 
        # ~~~~~~~~
        # precursor image
        self._initial_precursor_img = None
        self._precursor_img         = None
        # The eventually used input image - a superposition of initial random fluctuations
        self._initial_inp_img_data  = None  # The initial data constructed 
        self._inp_img_data          = None  # The data used and varied for optimization 
        # image dimension
        self._img_dim               = img_dim   # = 28 => MNIST images for the time being 
        
        
        # Parameters controlling the setup of an initial image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~~~~~~~~~~~~~~
        # The length scales for initial input fluctuations
        self._li_dim_steps = ( (3, 3), (7,7), (14,14), (28,28) )
        # Parameters for fluctuations  - used both in strategy 0 and strategy 1  
        self._ay_facts     = np.array( (0.5, 0.5, 0.5, 0.5) )
        
        
        # ********************************************************
        # Model setup - load the cnn-model and build the oip-model
        # ************
        
        if path.isfile(self._cnn_model_file): 
            # We trigger the initial load of a model 
            self._load_cnn_model(file_of_cnn_model = self._cnn_model_file, b_print_cnn_model = True)
            # We trigger the build of a new sub-model based on the CNN model used for OIP search 
            self._build_oip_model(layer_name = self._layer_name, b_print_oip_model = True ) 
        else:
            print("<\nWarning: The standard file " + self._cnn_model_file + 
                  " for the cnn-model could not be found!\n " + 
                  " Please use method _load_cnn_model() to load 
a valid model")
            
        return
 

 
The purpose of most of the variables will become clearer as we walk through the class’s methods below.

Loading the original trained CNN model

Let us say we have a trained CNN-model with all final weight parameters for node-connections saved in some h5-file (see the 4th article of this series for more info). We then can load the CNN-model and derive sub-models from its layer elements. The following method performs the loading task for us:

 
    #
    # Method to load a specific CNN model
    # **********************************
    def _load_cnn_model(self, file_of_cnn_model='cnn_best.h5', b_print_cnn_model=True ):
        
        # Check existence of the file; abort the load if it is missing
        if not path.isfile(file_of_cnn_model): 
            print("<\nWarning: The file " + file_of_cnn_model + 
                  " for the cnn-model could not be found!\n" + 
                  "Please change the parameter \"file_of_cnn_model\"" + 
                  " to load a valid model")
            return

        # load the CNN model 
        self._cnn_model_file = file_of_cnn_model
        self._cnn_model = models.load_model(self._cnn_model_file)

        # Inform about the model and its file
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~
        print("Used file to load a ´ model = ", self._cnn_model_file)
        # we print out the models structure
        if b_print_cnn_model:
            print("Structure of the loaded CNN-model:\n")
            self._cnn_model.summary()

        # handle/references to the models input => more precise the input image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #    Note: As we only have one image instead of a batch 
        #    we pick only the first tensor element!
        #    The inputs will be needed for buildng the oip-model 
        self._r_cnn_inputs = self._cnn_model.inputs[0]  # !!! We have a batch with just ONE image 
        
        # print out the shape - it should be known from the original cnn-model
        print("shape of cnn-model inputs = ", self._r_cnn_inputs.shape)
        
        return

 
Actually, I used this function already in the class’ “__init__()”-method – provided the standard file for the last CNN exists. (In a more advanced version you would in addition check that the name of the standard CNN-model meets your expectations.)

The code is straightforward. You see that the structure of the original CNN-model is printed out if requested by the user.

Note also that we assigned the first element of the input tensor of the CNN-model, i.e. a single image tensor, to the variable “self._r_cnn_inputs”. This tensor will become a major ingredient in a new Keras model which we are going to build in a minute and which we shall use to calculate gradient components of a loss function with respect to all pixel values of the input image. The gradient’s component values will in turn be used during gradient ascent to correct the pixel values. Repeated corrections should lead to a systematic approach of a maximum of our loss function, which describes the map’s activation. (Remember: Such a maximum may depend on the input image fluctuations).

Build a new Keras model based on the input tensor and a chosen layer

The next method is more interesting. We need to build a new Keras layer “model” based on the input layer and a deeper layer of the original CNN-model. (We already used the same kind of “trick” when we tried to visualize the activation output of a convolutional layer of the CNN.)

 
    #
    # Method to construct a model to optimize input for OIP-detection 
    # ***************************************
    def _build_oip_model(self, layer_name = 'Conv2D_3', b_print_oip_model=True ): 
        '''
        We need a Conv layer to build a working model for input image optimization. 
        We get the Conv layer by the layer's name. 
        The new model connects the first input element of the CNN to 
        the output maps of the named Conv layer of the CNN 
        '''
        # free some RAM - hopefully 
        del self._oip_submodel
        
        self._layer_name = layer_name
        if self._cnn_model is None: 
            print("Error: cnn_model not yet defined.")
            sys.exit()
        # We build a new model based on the model inputs and the output 
        self._layer_output = self._cnn_model.get_layer(self._layer_name).output
        
        # We do not care at the moment about the composition of the input 
        # We trust in that we handle only one image - and not a batch
        model_name = "mod_oip__" + layer_name 
        self._oip_submodel = models.Model( [self._r_cnn_inputs], [self._layer_output], name = model_name)                                    
        
        # We print out the oip model structure
        if b_print_oip_model:
            print("Structure of the constructed OIP-sub-model:\n")
            self._oip_submodel.summary()
        return

 
We use the tensor for a single input image and the output of a layer (= a collection of maps) of the original CNN-model as the definition elements of the new Keras model.

TF2’s GradientTape() – watch the change of variables which have an impact on model gradients and the loss function

What do we have so far? We have defined a new Keras model connecting input and output data of a layer of the original model. TF2 can determine related gradients by the node connections defined in the original model. However, we cannot rely on a graph analysis by Tensorflow as we were used to with TF1. TF2 uses eager mode – i.e. it calculates gradients directly. What does “directly” mean? Well – as soon as changes to variables occur which have an impact on the gradient values. This in turn means that “something” has to watch out for such changes. TF2 offers a special object for this purpose: tf.GradientTape. See:
https://www.tensorflow.org/guide/eager
https://www.tensorflow.org/api_docs/python/tf/GradientTape

So, as a next step, we set up a method to take care of “GradientTape()”.

    #
    # Method to watch gradients 
    # *************************
    def _setup_gradient_tape(self):
        '''
        For TF2 eager execution we need to watch input changes and trigger gradient evaluation
        '''   
        # Working with TF2 GradientTape
        self._tape = None
        
        # Watch out for input, output variables with respect to gradient changes
        with tf.GradientTape() as self._tape: 
            # Input
            # ~~~~~~~
            self._tape.watch(self._r_cnn_inputs)
            # Output 
            # ~~~~~~~
            self._r_oip_outputs = self._oip_submodel(self._r_cnn_inputs)
            
            # Loss 
            # ~~~~~~~
            self._r_oip_loss = tf.reduce_mean(self._r_oip_outputs[0, :, :, self._map_index])
            print(self._r_oip_loss)
            print("shape of oip_loss = ", self._r_oip_loss.shape)

 
Note that the loss definition included in the code fragment is specific for a chosen map. This implies that we have to call this method every time we choose a different map for which we want to create OIP visualizations.

The advantage of the above code element is that “_tape()” can produce gradient values for the relation of the input data and loss data of a model automatically whenever we change the input image data. Gradient values can be called by

  self._r_oip_grads  = self._tape.gradient(self._r_oip_loss, self._r_cnn_inputs)

Gradient ascent

As already discussed in my last article we apply a gradient ascent method to our “loss” function whose outcome rises with the activation of the neurons of a map. The following code sets up a method which first calls “_setup_gradient_tape()” and afterwards applies a normalization to the gradient values which “_tape()” produces. It then defines a convenience function and eventually calls a method which runs the optimization loop.

    #        
    # Method to derive OIP from a given initial input image
    # ********************
    def _derive_OIP(self, map_index = 1, n_epochs = None, n_steps = 4, 
                          epsilon = 0.01, 
                          conv_criterion = 5.e-4, 
                          b_stop_with_convergence = False ):
        '''
        V0.3, 31.08.2020
        This method starts the process of producing an OIP of statistical input image data
        
        Requirements:    An initial input image with statistical fluctuations of pixel values 
        -------------    must have been created. 
        
        Restrictions:    This version only supports the most simple strategy - "strategy 0":  
        -------------    Optimize in one loop - starting from a superposition of fluctuations
                         No precursor, no intermediate evolution of input image data 
        
        Input:
        ------
        map_index:       We can choose a map here      (overwrites previous settings)
        n_epochs:        Number of optimization steps  (overwrites previous settings) 
        n_steps:         Defines number of intervals (with length n_epochs/ n_steps) for reporting
                         This number also sets a requirement for providing n_step external axis-frames 
                         to display intermediate images of the emerging OIP  
                         => see _oip_strat_0_optimization_loop()
        epsilon:         Size for corrections by gradient values
        conv_criterion:  A small threshold number for (difference of loss-values / present loss value )
        b_stop_with_convergence: 
                         Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
                         
        '''
        self._map_index = map_index
        self._n_epochs  = n_epochs   
        self._n_steps   = n_steps
        self._epsilon   = epsilon
        
        # number of epochs for optimization strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if n_epochs == None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = n_epochs
            
        # Reset some variables  
        self._val_oip_grads = None
        self._val_oip_loss  = None 
        self._iterate       = None 
        
        # Setup the TF2 GradientTape watch
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._setup_gradient_tape()
        print("GradientTape watch activated ")
    
        # Gradient handling - so far we only deal with addresses 
        # ~~~~~~~~~~~~~~~~~~
        self._r_oip_grads  = self._tape.gradient(self._r_oip_loss, self._r_cnn_inputs)
        #print("shape of grads = ", self._r_oip_grads.shape)
        
        # normalization of the gradient 
        self._r_oip_grads /= (B.sqrt(B.mean(B.square(self._r_oip_grads))) + 1.e-7)
        #grads = tf.image.per_image_standardization(grads)
        
        # define an abstract recallable Keras function 
        # producing loss and gradients for corrected img data 
        # the first list of addresses points to the input data, the last to the output data 
        self._iterate = B.function( [self._r_cnn_inputs], [self._r_oip_loss, self._r_oip_grads] )
        
        # get the initial image into a variable for optimization 
        self._inp_img_data = None
        self._inp_img_data = self._initial_inp_img_data
        
        # Optimization loop for strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if self._oip_strategy == 0: 
            self._oip_strat_0_optimization_loop( conv_criterion = conv_criterion, 
                                                b_stop_with_convergence = b_stop_with_convergence )
    

 
Gradient value normalization is done here with respect to the L2-norm of the gradient. I.e., we adjust the length of the gradient to a unit length. Why does such a normalization help us with respect to convergence? You remember from a previous article series about MLPs that we had to care about a reasonable balance of an adaptive application of gradient values and a systematic reduction of the learning rate when approaching the global minimum of a cost function. The various available adaptive gradient methods care about speed and a proper deceleration of the steps by which we move across the cost hyperplane. When we approached a minimum we needed to care about overshooting. In our present gradient ascent case we have no such adaptive fine-tuning available. What we can do, however, is to control the length of the gradient vector such that changes remain of the same order as long as we do not change the step-size factor (epsilon). I.e.:

  • We do not wildly accelerate our path on the loss hyperplane when some local pixel areas want to drive us into a certain direction.
  • We reduce the chance of creating pixel values out of normal boundaries.

The last point fits well with a situation where the CNN has been trained on normalized or standardized input images. Due to the normalization the size of pixel value corrections now depends significantly on the factor “epsilon”. We should choose it small enough with respect to the pixel values.
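
A tiny numerical illustration of the effect: strictly speaking we divide by the RMS of the gradient components (the L2-norm scaled by the square root of the number of components), which fixes the typical size of a component to 1, so that the step size is controlled by epsilon alone:

import numpy as np

g = np.array([0.3, -4.0, 0.1])
g_norm = g / (np.sqrt(np.mean(np.square(g))) + 1.e-7)
print(np.sqrt(np.mean(np.square(g_norm))))   # ~1.0: fixed RMS length
# a correction step epsilon * g_norm is now of order epsilon,
# independent of the raw gradient magnitude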

Another interesting statement in the above code fragment is

        self._iterate = B.function( [self._r_cnn_inputs], [self._r_oip_loss, self._r_oip_grads] )

Here we use the Keras Backend to define a convenience function which relates input data with dependent outputs, whose calculations we previously defined by suitable statements. The list which is used as the first parameter of this function “_iterate()” defines the input variables, the list used as a second parameter defines the output variables which will internally be calculated via the GradientTape-functionality defined before. The “_iterate()”-function makes it much easier for us to build the optimization loop.
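
The core of the loop built upon “_iterate()” then looks roughly as follows. This is a simplified sketch; the real method shown below adds reporting, intermediate plots and a convergence check:

# simplified core of the gradient ascent loop (sketch)
for j in range(self._n_epochs):
    loss_val, grads_val = self._iterate([self._inp_img_data])
    # gradient ascent step: shift the pixel values along the normalized
    # gradient, scaled by the step-size factor epsilon
    self._inp_img_data = self._inp_img_data + grads_val * self._epsilon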

The optimization loop for the construction of images that visualize OIPs and features

The optimization loop must systematically correct the pixel values to approach a maximum of the loss function. The following method “_oip_strat_0_optimization_loop()” does this job for us. (The “0” in the method’s name refers to a first simple approach.)

    #        
    # Method to optimize an emerging OIP out of statistical input image data 
    # (simple strategy - just optimize, no precursor, no intermediate pattern evolution 
    # ********************************
    def _oip_strat_0_optimization_loop(self, conv_criterion = 5.e-4, 
                                            b_stop_with_convergence = False ):
        
        '''
        V 0.2, 28.08.2020 
        
        Purpose: 
        This method controls the optimization loop for OIP reconstruction of an initial 
        input image with a statistical distribution of pixel values. 
        It also provides intermediate output in the form of printed data and images.
        
        Parameter: 
        conv_criterion:  A small threshold number for (difference of loss-values / present loss value )
        b_stop_with_convergence: 
                         Boolean which decides whether we stop the loop when the convergence criterion is fulfilled
        
        
        This method produces some intermediate output during the optimization loop in form of images.
        It uses an external grid of plot frames and their axes-objects. The addresses of the 
        axis-objects must be provided by an external list "li_axa[]".  
        We need a sequence of at least n_steps axis-frames (len(li_axa) >= n_steps).    
        With Jupyter the grid for plots can externally be provided by statements like 
        
        # figure
        # -----------
        #sizing
        fig_size = plt.rcParams["figure.figsize"]
        fig_size[0] = 16
        fig_size[1] = 8
        fig_a = plt.figure()
        axa_1 = fig_a.add_subplot(241)
        axa_2 = fig_a.add_subplot(242)
        axa_3 = fig_a.add_subplot(243)
        axa_4 = fig_a.add_subplot(244)
        axa_5 = fig_a.add_subplot(245)
        axa_6 = fig_a.add_subplot(246)
        axa_7 = fig_a.add_subplot(247)
        axa_8 = fig_a.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
        
        '''
        
        # Check that we really have an input image tensor
        if ( (self._inp_img_data is None) or 
             (self._inp_img_data.shape[1] != self._img_dim) or 
             (self._inp_img_data.shape[2] != self._img_dim) ) :
            print("There is no initial input image or it does not fit the dimension requirements (28,28)")
            sys.exit()
        
        # Print some information
        print("*************\nStart of optimization loop\n*************")
        print("Strategy: Simple initial mixture of long and short range variations")
        print("Number of epochs = ", self._n_epochs)
        print("Epsilon =  ", self._epsilon)
        print("*************")
        
        # some initial value
        loss_old = 0.0
       
        # Preparation of intermediate reporting / img printing
        # --------------------------------------
        # Number of intermediate reporting points during the loop 
        steps = math.ceil(self._n_epochs / self._n_steps )
        # Logarithmic spacing of steps (most things happen initially)
        n_el = math.floor(self._n_epochs / 2**(self._n_steps) ) 
        li_int = []
        for j in range(self._n_steps):
            li_int.append(n_el*2**j)
        print("li_int = ", li_int)
        # A counter for intermediate reporting  
        n_rep = 0
        # Array for intermediate image data 
        li_imgs = np.zeros((self._img_dim, self._img_dim, 1), dtype=np.float32)
        
        # Convergence? - A list for steps meeting the convergence criterion
        # ~~~~~~~~~~~~
        li_conv = []
        
        
        # optimization loop 
        # *******************
        # A counter for steps with zero loss and gradient values 
        n_zeros = 0
        
        for j in range(self._n_epochs):
            
            # Get output values of our Keras iteration function 
            # ~~~~~~~~~~~~~~~~~~~
            self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
            
            # loss difference to last step - should steadily become smaller 
            loss_diff = self._val_oip_loss - loss_old 
            #print("loss_val = ", loss_val)
            #print("loss_diff = ", loss_diff)
            loss_old = self._val_oip_loss
            
            if j > 10 and (loss_diff/(self._val_oip_loss + 1.e-7)) < conv_criterion:
                li_conv.append(j)
                lenc = len(li_conv)
                # print("conv - step = ", j)
                # stop only after the criterion has been met in 4 successive steps
                if b_stop_with_convergence and lenc >= 4 and li_conv[-4] == j-3:
                    return
            
            grads_val     = self._val_oip_grads
            # the gradients average value 
            avg_grads_val = (tf.reduce_mean(grads_val)).numpy()
            
            # Check if our map reacts at all
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if self._val_oip_loss == 0.0 and avg_grads_val == 0.0:
                print( "0-values, j= ", j, 
                       " loss = ", self._val_oip_loss, " avg_grads = ", avg_grads_val )
                n_zeros += 1
            
            if n_zeros > 10: 
                print("More than 10 times zero values - Try a different initial random distribution of pixel values")
                return
            
            # gradient ascent => Correction of the input image data 
            # ~~~~~~~~~~~~~~~
            self._inp_img_data += grads_val * self._epsilon
            
            # Standardize the corrected image - we won't get a convergence otherwise 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
            
            # Some output at intermediate points 
            #     Note: We use logarithmic intervals because most changes 
            #           appear in the initial third of the loop's range  
            if (j == 0) or (j in li_int) or (j == self._n_epochs-1) :
                # print some info 
                print("\nstep " + str(j) + " finalized")
                #print("Shape of grads = ", grads_val.shape)
                print("present loss_val = ", self._val_oip_loss)
                print("loss_diff = ", loss_diff)
                # display the intermediate image data in an external grid 
                imgn = self._inp_img_data[0,:,:,0].numpy()
                #print("Shape of intermediate img = ", imgn.shape)
                li_axa[n_rep].imshow(imgn, cmap=plt.cm.viridis)
                # counter
                n_rep += 1
         
        return

 
The code is easy to understand: We use our convenience function “self._iterate()” to produce actual loss and gradient values within the loop. Then we change the pixel values of our input image and feed the changed image back into the loop. Note the positive sign of the correction! All complicated gradient calculations are automatically and “eagerly” done internally thanks to “GradientTape”.
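
For readers who prefer pure eager TF2 code without a Backend function: the core of one loop iteration could, as a sketch, look like the function below. “model_part” stands for a hypothetical sub-model mapping the input image to the mean activation of the chosen map – an assumption for illustration, not the class code above:

    import tensorflow as tf

    def gradient_ascent_step(model_part, img, epsilon=5.e-4):
        # one ascent step on the input image (not on the weights!)
        with tf.GradientTape() as tape:
            tape.watch(img)
            loss = tf.reduce_mean(model_part(img))  # mean map activation
        grads = tape.gradient(loss, img)
        # normalize the gradient to unit RMS, as in the class code
        grads /= (tf.sqrt(tf.reduce_mean(tf.square(grads))) + 1.e-7)
        # note the positive sign: we ascend on the loss hyperplane
        img = img + epsilon * grads
        # standardization keeps the pixel values bounded (see below)
        return tf.image.per_image_standardization(img), loss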

We said above that we normalized the gradient values. How big are the resulting corrections compared to the image data? If we use standardized image data and scale our gradient to unit RMS, then the relative size of the changes is of the order of the step-size factor “epsilon”. In our case we set epsilon to 5.e-4.
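
A quick numeric sanity check of this claim, again with made-up data: standardized image data have a pixel standard deviation of about 1, and the normalized gradient has unit RMS, so the relative per-pixel change is indeed of the order of epsilon:

    import numpy as np

    img   = np.random.normal(0.0, 1.0, (28, 28))   # standardized image: std ~ 1
    grads = np.random.normal(0.0, 1.0, (28, 28))   # gradient with unit RMS
    eps   = 5.e-4

    change = eps * grads
    print(np.sqrt(np.mean(change**2)) / np.std(img))   # ~ 5.e-4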

The careful reader has noticed that I standardized the image data after the correction with the (normalized) gradient. Well, this step proved to be absolutely necessary to get convergence in the sense that we really approach an extremum of the cost function. Reasons are:

  • My CNN was trained on standardized MNIST input images.
  • We did not include any normalization layers into our CNN.
  • Without counter-measures the repeated corrections would eventually drive the activation values beyond
    all limits.

The last point deserves some explanation:

We used the ReLU-function as the activation function of the nodes in the inner layers of our CNN. For positive input values ReLU actually is a linear function. Now let us assume that we have found a near-optimal input pattern which, via a succession of basically linear operations, drives an activation pattern of a map’s neurons. What happens if we just keep adding small correction values to the pixels at each iteration step? The output values after the sequence of linear transformations will simply grow! With our method and the ReLU activation function we walk around on a surface until we reach a linear ramp and then climb it up. Without compensatory steps we will not find a real maximum because there is none. The situation at the convolutional layers is very different from the one at the final output layer of the CNN’s MLP-part.
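
This growth effect can be demonstrated with a tiny experiment: a stack of (bias-free) convolutions with ReLU activations is positively homogeneous, i.e. scaling the input by a factor a > 0 scales all activations by exactly the same factor. The untrained model below is hypothetical and serves only for the demonstration:

    import numpy as np
    from tensorflow import keras

    # A small untrained ReLU stack without bias terms
    inp = keras.Input(shape=(28, 28, 1))
    x   = keras.layers.Conv2D(8, (3, 3), activation='relu', use_bias=False)(inp)
    x   = keras.layers.Conv2D(8, (3, 3), activation='relu', use_bias=False)(x)
    model = keras.Model(inp, x)

    img = np.random.random((1, 28, 28, 1)).astype(np.float32)
    a   = 10.0

    out1 = model.predict(a * img)      # scale the input ...
    out2 = a * model.predict(img)      # ... or scale the output
    print(np.allclose(out1, out2, atol=1.e-4))   # True - no maximum, just a ramp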

You may ask yourself why we experienced nothing of this during the classification training. First answer: We did not optimize input data but weights during the training. Second answer: During training we did NOT maximize potentially unbounded activation values but minimized a cost function based on the output values of the last MLP-layer. And these values were produced by a sigmoid function! The sigmoid function limits any input to the range ]0, +1[. In addition, the cost function (categorical_crossentropy) is designed to be convex for deviations of the limited calculated values from a limited target vector.

The consequence is that in our present optimization procedure we have to limit the values of the (corrected) input data and thereby the related gradients at the same time! This is done by the standardization of the image data. Remember that the correction values are of a relative order of 5.e-4. In the end this is also the order of the fluctuations which remain unavoidable in the final OIP image; but now we have a chance to converge to a small region around a real maximum.
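
If you wonder what “tf.image.per_image_standardization()” actually does: according to its documentation it maps an image x to (x - mean(x)) / adjusted_stddev, with adjusted_stddev = max(stddev(x), 1/sqrt(N)) for N pixels. A rough NumPy equivalent, as a sketch:

    import numpy as np

    def standardize(img):
        # (x - mean) / max(std, 1/sqrt(N)); the max() guards against
        # blowing up images with (nearly) constant pixel values
        mean = np.mean(img)
        std  = np.std(img)
        return (img - mean) / max(std, 1.0 / np.sqrt(img.size))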

The last block in the code deals with intermediate output – not only printed data on the loss function but also intermediate images of the hopefully emerging pattern. These images can be displayed in an external grid of figures in e.g. a Jupyter environment. The important point is that we define a suitable number of Matplotlib axis-objects and deliver their addresses via an external list “li_axa[]”. I am well aware that the plotting solution coded here is a very basic one and that it requires some planning ahead by the user – feel free to program it in a better way.
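
As a side note, a more compact way to provide such a grid of axis-objects, equivalent to the statements in the docstring further above, could be:

    import matplotlib.pyplot as plt

    # 2 x 4 grid of frames for 8 intermediate images
    fig_a, axes = plt.subplots(2, 4, figsize=(16, 8))
    li_axa = list(axes.flatten())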

Initial input image data – with variations on different length scales

We lack just one further ingredient: a method to construct an input image filled with statistical data. I have already discussed that it may be helpful to vary the data on different length scales. A very simple approach to tackle this problem manually is the following one:

We use squares – each with a different small and limited number of cells; e.g. a (4×4)-, a (7×7)-, a (14×14)- and a (28×28)-square. Note that (28×28) is the original size of the MNIST images; for other samples we may need different sizes and numbers of such squares. We then fill the cells with random numbers in the interval [-1, 1] and smoothly scale the variations of the smaller squares up to the real input image size, here (28×28), by applying a bicubic interpolation to the data. In the end we add up all random data and normalize or standardize the resulting distribution of pixel values. See the code below for details:

    # 
    # Method to build an initial image from a superposition of random data on different length scales 
    # ***********************************
    def _build_initial_img_data( self, 
                                 strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (0.5, 0.5, 0.5, 0.5),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False):
        
        '''
        V0.2, 31.08.2020
        Purpose:
        ~~~~~~~~
        This method constructs an initial input image with a statistical distribution of pixel-values.
        We use 4 length scales to mix fluctuations with different "wave-length" by a simple  
        approach: 
        We fill four squares, each with a different number of cells below the number of pixels 
        in each dimension of the real input image; e.g. (4x4), (7x7), (14x14), (28x28) <= (28,28). 
        We fill the cells with random numbers in [-1.0, 1.0]. We smoothly scale the resulting patterns 
        up to (28,28) (or whatever the input image dimensions are) by bicubic interpolations 
        and eventually add up all values. As a final step we standardize the pixel value distribution.          
        
        Limitations
        ~~~~~~~~~~~
        This version works with 4 length scales. It only supports a simple strategy for 
        evolving OIP patterns. 
        '''

        self._oip_strategy = strategy
        self._ay_facts     = np.array(li_facts)
        self._ay_epochs    = np.array(li_epochs)
        
        self._li_dim_steps = li_dim_steps
        
        fluct_data = None

        
        # Strategy 0: Simple superposition of random patterns at 4 different wave-length
        # ~~~~~~~~~~
        if self._oip_strategy == 0:
            
            dim_1_1 = self._li_dim_steps[0][0] 
            dim_1_2 = self._li_dim_steps[0][1] 
            dim_2_1 = self._li_dim_steps[1][0] 
            dim_2_2 = self._li_dim_steps[1][1] 
            dim_3_1 = self._li_dim_steps[2][0] 
            dim_3_2 = self._li_dim_steps[2][1] 
            dim_4_1 = self._li_dim_steps[3][0] 
            dim_4_2 = self._li_dim_steps[3][1] 
            
            fact1 = self._ay_facts[0]
            fact2 = self._ay_facts[1]
            fact3 = self._ay_facts[2]
            fact4 = self._ay_facts[3]
            
            # print some parameter information
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            print("\nInitial image composition - strategy 0:\n Superposition of 4 different wavelength patterns")
            print("Parameters:\n", 
                 fact1, " => (" + str(dim_1_1) +", " + str(dim_1_2) + ") :: ", 
                 fact2, " => (" + str(dim_2_1) +", " + str(dim_2_2) + ") :: ", 
                 fact3, " => (" + str(dim_3_1) +", " + str(dim_3_2) + ") :: ", 
                 fact4, " => (" + str(dim_4_1) +", " + str(dim_4_2) + ")" 
                 )
            
            # fluctuations
            fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
            fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
            fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
            fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 

            # Scaling with bicubic interpolation to the required image size
            fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
            fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
            fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
            fluct4_scale = fluct4    # already at the target size (28,28)

            # superposition
            fluct_data = fact1*fluct1_scale + fact2*fluct2_scale + fact3*fluct3_scale + fact4*fluct4_scale
        
        
        # get the standardized plus smoothed and unsmoothed image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #    TF2 provides a function performing the standardization of image data for us
        fluct_data_unsmoothed = tf.image.per_image_standardization(fluct_data) 
        fluct_data_smoothed   = tf.image.per_image_standardization(
                                    tf.image.resize( fluct_data, [28,28], 
                                                     method="bicubic", antialias=True) )

        if b_smoothing: 
            self._initial_inp_img_data = fluct_data_smoothed
        else:
            self._initial_inp_img_data = fluct_data_unsmoothed

        # There should be no big difference between the two variants 
        img_init_unsmoothed = fluct_data_unsmoothed[0,:,:,0].numpy()
        img_init_smoothed   = fluct_data_smoothed[0,:,:,0].numpy()
        
        # ax1_1, ax1_2: externally defined Matplotlib axis-objects
        ax1_1.imshow(img_init_unsmoothed, cmap=plt.cm.viridis)
        ax1_2.imshow(img_init_smoothed, cmap=plt.cm.viridis)

        print("Initial images plotted")
        
        return self._initial_inp_img_data    

 
The factors fact1 through fact4 determine the relative amplitudes of the fluctuations at the four length scales. The user is thus able to, e.g., suppress short-scale fluctuations completely.
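
For illustration, a hypothetical call which switches off the short-scale (pixel-level) contribution completely while keeping the three longer wavelengths could look like this; “MyOIP” stands for an instance of our class:

    # fact4 = 0.0 suppresses the (28x28) pixel-scale fluctuations
    initial_img = MyOIP._build_initial_img_data(
                      strategy     = 0,
                      li_facts     = (0.5, 0.5, 0.5, 0.0),
                      li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ),
                      b_smoothing  = False )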

I took exactly four squares to simulate fluctuations on different length scales. A better code would make the number of squares and the length scales parameterizable – or it would work with a Fourier series right from the start. I was too lazy for such elaborate things. The plots again require the definition of some external Matplotlib figures with axis-objects; you can provide a suitable figure in a Jupyter cell.
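
For completeness, a sketch of how such a parameterizable variant for an arbitrary number of length scales might look – this is not part of the class discussed above, just an illustration of the idea:

    import numpy as np
    import tensorflow as tf

    def build_random_fluctuations(li_dims, li_facts, img_dim=28):
        '''Superpose random patterns on len(li_dims) length scales.
           li_dims:  square sizes, e.g. [3, 7, 14, 28]
           li_facts: relative amplitudes, one per length scale
        '''
        fluct_data = np.zeros((1, img_dim, img_dim, 1), dtype=np.float32)
        for dim, fact in zip(li_dims, li_facts):
            fluct = 2.0 * (np.random.random((1, dim, dim, 1)).astype(np.float32) - 0.5)
            fluct = tf.image.resize(fluct, [img_dim, img_dim],
                                    method="bicubic", antialias=True)
            fluct_data += fact * fluct.numpy()
        return tf.image.per_image_standardization(fluct_data)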

Conclusion

In this article we have built a simple class which creates OIP images for a specific CNN map out of an input image with a random distribution of pixel values. The algorithm should have made it clear that this is constructive work performed during the iteration:

  • We start from the “detection” of slight traces of a pattern in the initial statistical pixel value
    distribution; the pattern is actually a statistical pixel correlation which triggers the chosen map;
  • then we amplify the recognized pattern elements
  • and suppress pixel values which are not relevant, pushing them into a homogeneous background.

So the headline of this article is a bit misleading: We do not only “detect” a map related OIP; we also “create” it.

Our new Python class makes use of a given trained CNN-model and follows the outline of steps discussed in a previous article. The class has many limitations – in its present version it is only usable for MNIST images and the user has to know a lot about its internals. However, I hope that my readers have nevertheless understood the basic ingredients. From there it is only a small step towards a more general and more capable version.

I have also underlined in this article that the images produced by the coded methods may only represent local maxima of the loss function for the average map activation and idealized patterns composed of recurring elementary sub-structures. In the next article

A simple CNN for the MNIST dataset – IX – filter visualization at a convolutional layer

I am going to apply the code to most of the maps of the highest, i.e. inner-most convolutional layer of my trained CNN. We shall discover a whole zoo of simple and complex input image patterns. But we shall also confirm the suspicion that our optimization algorithm for finding an OIP for a specific map does not react to each and every kind of initial statistical pixel value fluctuation presented to the CNN.