A simple CNN for the MNIST dataset – XI – Python code for filter visualization and OIP detection

I continue my article series on a CNN for the MNIST dataset and related studies of a CNN’s reaction to input patterns.

A simple CNN for the MNIST dataset – X – filling some gaps in filter visualization
A simple CNN for the MNIST dataset – IX – filter visualization at a convolutional layer
A simple CNN for the MNIST dataset – VIII – filters and features – Python code to visualize patterns which activate a map strongly
A simple CNN for the MNIST dataset – VII – outline of steps to visualize image patterns which trigger filter maps
A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part
A simple CNN for the MNIST dataset – V – about the difference of activation patterns and features

In the course of this series we have already seen that we must distinguish carefully between

  • activation patterns resulting from the output of the maps of a convolutional layer
  • and pixel patterns (OIPs) within input images which trigger a strong reaction of a selected deep layer map.

OIPs are patterns which pass the complicated filter combinations of a sequence of convolutional layers. The images I have shown you so far demonstrate: At deep layers some maps react to unique large scale patterns covering a significant area of the input image in one or both dimensions. The patterns may expose sub-structures and spatially shifted repetitions across the image's surface.

We can display map activation patterns easily on a grid displaying all maps of a layer and their neuron activations. OIP patterns, however, must be made visible via an image whose pixel values are determined by a complicated calculation process: The image data come from an optimization algorithm which analyzes the activation response of a map’s neurons to pixel value changes.
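
To make the core idea of this calculation concrete: we define a loss as the mean activation of the selected map and perform gradient ascent on the input pixels. Below is a minimal, simplified sketch of one such step, assuming a Keras sub-model "oip_model" which maps the (28x28)-input to the maps of the chosen layer; the names are illustrative, the actual implementation is part of the class "My_OIP" presented in the last section.

import tensorflow as tf

def gradient_ascent_step(oip_model, img, map_index, epsilon=0.01):
    # img: tensor of shape (1, 28, 28, 1) with standardized pixel values
    with tf.GradientTape() as tape:
        tape.watch(img)                                   # watch the input pixels
        layer_output = oip_model(img)                     # all maps of the chosen layer
        loss = tf.reduce_mean(layer_output[0, :, :, map_index])  # mean activation of one map
    grads = tape.gradient(loss, img)                      # d(loss) / d(pixel values)
    grads = grads / (tf.sqrt(tf.reduce_mean(tf.square(grads))) + 1.e-7)  # normalization
    img   = img + epsilon * grads                         # one gradient ascent step
    return loss, img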

Our hunt for interesting MNIST OIPs, which trigger the maps of the deepest convolutional layer, has been relatively successful over the last two blog posts: Simply assigning a random value to each input pixel as a starting point for our algorithm already gave us OIP patterns for around 48% of the maps. In the last article I discussed an additional method to fill some of the gaps: I suggested to systematically investigate long range fluctuations of the pixel values as an input to the optimization loop. This gave us OIPs for another 26% of the maps. So, we have found OIPs for around 75% of the 128 maps with relatively simple methods.

[Image: visualization of CNN filters and maps for MNIST, 3rd Conv layer, part 1]

[Image: visualization of CNN filters and maps for MNIST, 3rd Conv layer, part 2]

But I still owe my readers some code for OIP-creation and a short guideline on how to use it. You find the complete code in the last section of this article. I have encapsulated the methods for producing OIP-patterns in a Python class named “My_OIP”. The code is commented, and with the knowledge accumulated during the last articles of this series you should have no major difficulties understanding it. Below I shall provide you with additional code for a sequence of Jupyter cells and walk you through the usage of the class’ methods.

We are mostly interested in maps for which you do not get an OIP pattern (or a “feature”, if you like …) easily by trial and error methods. We therefore pick map Nr. 38 on layer “Conv2D_3” of my (!) trained CNN as an example.

Basic requirements and restrictions

I have tested the code only in combination with a sequence of Jupyter cells. So, you need both a virtual Python environment and a Jupyter installation to repeat my experiments.

The code is not yet built for a full investigation of all maps of a convolutional layer in one extensive run. The present methods are intended to be applied to a single selected map only. Whenever you want to study another map you have to run a certain sequence of Jupyter cells again.

Another hint: I did all runs in a virtual Python environment with Tensorflow 2.2.1 as my Keras backend. Unfortunately, the code does not yet run as efficiently on TF 2.3.1 as on TF 2.2.1 for unclear reasons. I used the “Keras” version integrated into TF 2.

Required Modules

I invoke the following collection of Python modules:

Jupyter cell 1:

  
import numpy as np
from numpy import save
from numpy import load
import scipy
from itertools import product 
from sklearn.preprocessing import StandardScaler

import tensorflow as tf
from tensorflow import keras as K
from tensorflow.python.keras import backend as B 
#from tensorflow.keras import backend as B 
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from tensorflow.keras import optimizers
from tensorflow.keras.optimizers import schedules
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist

from tensorflow.python.client import device_lib
#import tensorflow.contrib.eager as tfe

import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import matplotlib.patches as mpat 

import time 
import sys 
import math
import os 
from os import path as path
import imp

# my own class code 
from mycode import myOIP
from mycode import myann32

from IPython.core.tests.simpleerr import sysexit

 

You may have to adjust some versions of these modules to get full consistency with TF 2.2.1. Watch the output of “pip-review” or “pip install --upgrade --force” commands carefully! The general form of the required pip-statements to install a certain version of a module in your virtual Python environment looks like

pip install --upgrade --force tensorflow==2.2.1
pip install --upgrade --force six==1.12.0
pip install --upgrade --force bleach==1.5.0
....


Enable the code to run at least partially on a graphics card

A systematic investigation of long range fluctuation patterns as a starting point for an OIP-creation should be done with the help of a graphics card – it takes too much time to perform all of the required operations on the cores of a conventional CPU. [At least on my elderly equipment – unfortunately, I have no employer which sponsors my ML- and Linux-activities .. 🙁 ..] You can do the necessary preparatory steps with the help of a 2nd Jupyter cell.

Jupyter cell 2:

  
gpu = True
if gpu: 
    GPU = True;  CPU = False; num_GPU = 1; num_CPU = 4
else: 
    GPU = False; CPU = True;  num_CPU = 1; num_GPU = 0

config = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=4,
                        inter_op_parallelism_threads=1, 
                        allow_soft_placement=True,
                        device_count = {'CPU' : num_CPU,
                                        'GPU' : num_GPU}, 
                        log_device_placement=True)

config.gpu_options.per_process_gpu_memory_fraction=0.30
config.gpu_options.force_gpu_compatible = True
B.set_session(tf.compat.v1.Session(config=config))

 
I reduced the maximum usage of graphics card memory to 30% of its capacity (4GB).
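
As a side note: if you prefer to avoid the compat.v1 route, TF2 also offers a native way to restrict GPU memory. A minimal sketch, assuming a single GPU and using the experimental API available in TF 2.2; the 1200 MB limit corresponds roughly to 30% of a 4 GB card:

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # must be executed before the GPU is first used by the session/model
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1200)]
    )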

Creating an object instance of the class My_OIP

As a third step we instantiate an object based on class “My_OIP”. When you look at the code of the class’ “__init__()”-method you see

  • that my CNN-model is loaded from an h5-file and made available for further operations
  • and that an additional Keras model – named “OIP-(sub)-model” – is created. The OIP-model connects the input of the CNN with the output of the CNN’s innermost convolutional layer “Conv2D_3”. It is a sub-model of the original CNN.

The OIP-sub-model is built with the help of the “Model” class of Keras. At the end of the “__init__()”-method I print out the layer structures of the CNN-model and the OIP-model. The Jupyter code is:

Jupyter cell 3:

# Load the CNN-model - build the OIP-model
#  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
imp.reload(myOIP)
try:
    with tf.device("/GPU:0"):
        MyOIP = myOIP.My_OIP(cnn_model_file = 'cnn_best.h5', layer_name = 'Conv2D_3')

except SystemExit:
    print("stopped")

Note that we choose a specific convolutional layer of the CNN here by providing its unique name as a parameter. When you look at the code you see that you can change the layer afterwards by directly calling method “_build_oip_model(layer_name = ‘LAYER_NAME’)” with a suitable name of the layer whose maps you want to study.
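
For orientation, the core of “_build_oip_model()” boils down to the following lines (a condensed sketch based on the class code in the last section; variable names are shortened here):

from tensorflow.keras import models

# cnn_model: the loaded CNN; 'Conv2D_3': the layer whose maps we want to study
layer_output = cnn_model.get_layer('Conv2D_3').output
oip_model = models.Model([cnn_model.inputs[0]], [layer_output],
                         name='mod_oip__' + 'Conv2D_3')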

The resulting output looks in my case like

Used file to load a model =  cnn_best.h5
Structure of the loaded CNN-model:

Model: "sequential_7"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Conv2D_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
Max_Pool_1 (MaxPooling2D)    (None, 13, 13, 32)        0         
_________________________________________________________________
Conv2D_2 (Conv2D)            (None, 11, 11, 64)        18496     
_________________________________________________________________
Max_Pool_2 (MaxPooling2D)    (None, 5, 5, 64)          0         
_________________________________________________________________
Conv2D_3 (Conv2D)            (None, 3, 3, 128)         73856     
_________________________________________________________________
flatten_7 (Flatten)          (None, 1152)              0         
_________________________________________________________________
dense_14 (Dense)             (None, 100)               115300    
_________________________________________________________________
dense_15 (Dense)             (None, 10)                1010      
=================================================================
Total params: 208,982
Trainable params: 208,982
Non-trainable params: 0
_________________________________________________________________
shape of cnn-model inputs =  (None, 28, 28, 1)
Structure of the constructed OIP-sub-model:

Model: "mod_oip__Conv2D_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Conv2D_1_input (InputLayer)  [(None, 28, 28, 1)]       0         
_________________________________________________________________
Conv2D_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
Max_Pool_1 (MaxPooling2D)    (None, 13, 13, 32)        0         
_________________________________________________________________
Conv2D_2 (Conv2D)            (None, 11, 11, 64)        18496     
_________________________________________________________________
Max_Pool_2 (MaxPooling2D)    (None, 5, 5, 64)          0         
_________________________________________________________________
Conv2D_3 (Conv2D)            (None, 3, 3, 128)         73856     
=================================================================
Total params: 92,672
Trainable params: 92,672
Non-trainable params: 0

 
You see that the OIP-model (“mod_oip__Conv2D_3”) starts with the input layer and ends with the output of the deepest convolutional layer “Conv2D_3”.

Preparation and execution of a precursor-run

The code in the next Jupyter cell

  • first starts a method to prepare a precursor-run,
  • then defines a figure to allow for plots of the 8 promising input images with large scale fluctuations
  • and eventually starts a precursor-run which tests 19683 (= 3^9) fluctuation patterns and selects those 8 which trigger our selected map maximally.

What I call “precursor-run” above is, at its core, nothing else than a loop which tests a selected map’s response to many artificially created input images based on a variety of large scale fluctuations imposed on the pixel values.
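
For illustration, a sketch of how such a set of coarse (3x3) fluctuation patterns can be generated with itertools.product (the concrete amplitude levels used in my class may differ):

from itertools import product
import numpy as np

levels = (-1.0, 0.0, 1.0)     # assumed amplitude levels per grid cell
li_patterns = [np.array(comb, dtype=np.float32).reshape(3, 3)
               for comb in product(levels, repeat=9)]
print(len(li_patterns))       # 19683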

Jupyter cell 4:

  
# preparation of the precursor run 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
with tf.device("/GPU:0"):
    MyOIP._prepare_precursor(map_index=38)

# figure for plots
# -----------------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure()
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

# start precursor run 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
start_t = time.perf_counter()
with tf.device("/GPU:0"):
    MyOIP._precursor(num_epochs=10, li_axa=li_axa)
end_t   = time.perf_counter()
fit_t = end_t - start_t
print("time= ", fit_t)

 
Note that we define the map which we want to test when we call “_prepare_precursor(map_index=38)”; the index of the map refers to its position in the list of maps of the chosen layer.

Technically, method “_prepare_precursor()” sets up the OIP-model for the selected map (here map 38) plus a GradientTape-object, which helps us to automatically calculate gradient component values for the map-output with respect to input pixel-values. It does so by calling method

_setup_gradient_tape_and_iterate_function() .

The latter method also creates the “_iterate()-function” which we use during the optimization of the input pixel values.

Method “_precursor()” then systematically creates input images for the fluctuation patterns originally imposed on a coarse (3×3)-grid by interpolating and upscaling. When you study the code carefully you will see that I included a possibility to overlay some kind of constant short scale fluctuation pattern. (This may prove useful in some cases where a map needs short and long scale input patterns at the same time to react with a reasonable activation.)
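
The principle of the upscaling is a smooth interpolation of the coarse (3x3)-grid to the (28x28) input size. Whether the class uses tf.image.resize or a scipy-based routine is an implementation detail; the following sketch only illustrates the idea:

import numpy as np
import tensorflow as tf

def upscale_pattern(coarse_3x3, img_dim=28):
    # coarse_3x3: numpy array of shape (3, 3) with fluctuation values
    t = tf.convert_to_tensor(coarse_3x3.reshape(1, 3, 3, 1), dtype=tf.float32)
    # smooth bicubic interpolation creates a large scale pattern on (28, 28)
    return tf.image.resize(t, size=(img_dim, img_dim), method='bicubic')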

The method then loops over all long range patterns, performs a defined number of gradient ascent steps for each image and saves the respective pattern data if the map shows a reaction – indicated by a loss value > 0. The optimization loop is directly handled within the method for performance reasons. Note that we only follow the optimization process for a fixed, relatively small number of epochs. The loop prints out loss values for all patterns for which the loss is > 0.

In the end we select those 8 input images which showed the highest loss values and save their basic pattern data for reconstruction. The patterns are afterwards available in a list (“self._li_of_flucts”). (We also save them in a file.)
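
A tiny sketch of what this selection step amounts to, assuming a hypothetical list “results” of (loss, pattern) tuples collected during the loop and a hypothetical file name:

import numpy as np

def select_and_save_best(results, num_best=8, fname='precursor_flucts_map38.npy'):
    # results: hypothetical list of (loss_value, pattern_data) tuples from the precursor loop
    results_sorted = sorted(results, key=lambda t: t[0], reverse=True)
    li_of_flucts   = results_sorted[:num_best]     # patterns with the highest loss values
    np.save(fname, np.array([pat for (_, pat) in li_of_flucts], dtype=np.float32))
    return li_of_flucts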

We then check whether the image reconstruction algorithm for one of the saved patterns really works. A final step consists of a display of the selected input images. We get an output that looks like

  
We test  19683  possibilities for a (3x3) fluctuations 
i =  27  loss =  5.528541
i =  28  loss =  3.826129
i =  30  loss =  5.545482
i =  31  loss =  5.8238125
i =  32  loss =  2.5444937
i =  36  loss =  2.7260687
i =  37  loss =  3.1436405
i =  46  loss =  2.8298864
i =  54  loss =  5.528542
i =  55  loss =  5.286209 
...
...
i =  19664  loss =  9.8695545
i =  19671  loss =  2.4347296
i =  19672  loss =  2.8202498
i =  19673  loss =  9.868241

num of relevant covs =  11114 
check of map reaction to first selected image
loss for 1st selected img =  20.427849
0 loss =  20.427849
1 loss =  19.50778
2 loss =  19.394796
3 loss =  18.993208
4 loss =  18.844856
5 loss =  18.794407
6 loss =  18.771235
7 loss =  18.607782
time=  251.3919793430032   

 
The test of the fluctuation patterns for map 38 found 11114 candidates with a significant loss.

In my case the selected images of the 8 most promising pattern candidates for OIP-creation look like:

Reconstruction of the input image candidates from their fluctuation pattern

We can reconstruct the images from the contents of the list “self._li_of_flucts” by calling method “_display_precursor_imgs()”.

Jupyter cell 5:

  
# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure()
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

with tf.device("/GPU:0"):
    #MyOIP._display_imgs_2()    
    MyOIP._display_precursor_imgs(li_axa=li_axa)

 
It will test the reconstruction algorithm and display the 8 images already provided by the precursor run again.

Choose an input image candidate and enrich it by small scale fluctuations

The next Jupyter cell offers the opportunity to select one of our 8 candidates and create the respective input image as a basis for a subsequent OIP-creation. In addition it allows us to enrich its large scale fluctuation pattern with some small scale fluctuations at a lower “amplitude”. We must provide a figure with two subplots to do some plotting.

Jupyter cell 6:

 
# build initial image based on PRECURSOR
# *******************

# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 5
fig1 = plt.figure(1)
ax1_1 = fig1.add_subplot(121)
ax1_2 = fig1.add_subplot(122)

try: 
    # OIP function to setup an initial image 
    with tf.device("/GPU:0"):
        
        initial_img = MyOIP._build_initial_img_from_prec( prec_index=7, 
                                             li_epochs    = [20, 50, 100, 400], 
                                             li_facts     = [1.0, 0.0, 0.0, 0.2, 0.0],
                                             li_dim_steps = [ (3,3), (7,7), (14,14), (28,28) ], 
                                             b_smoothing = False, b_display=True, 
                                             ax1_1 = ax1_1, ax1_2 = ax1_2)

except SystemExit:
    print("stopped")

 
Parameter “prec_index” reflects which of the eight long range fluctuation patterns you choose.

You certainly noticed the options for parameterizing additional fluctuations: “li_dim_steps” defines the granularity of the fluctuations which are upscaled to the (28×28)-size of the MNIST images. “li_facts” allows for a relative scaling of the strength of the fluctuations. Note that the first element of li_facts determines the strength of the chosen basic pattern after the “_precursor”-run; the other 4 parameters define the relative strength of the other statistical fluctuations on the four length-scales (li_dim_steps).
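
Conceptually, the initial image is thus a weighted superposition of the chosen precursor pattern and random fluctuations upscaled from the four grids in “li_dim_steps”. A minimal sketch with illustrative names; the class’ actual composition may differ in details:

import numpy as np
import tensorflow as tf

def compose_initial_img(base_img, li_facts, li_dim_steps, img_dim=28):
    # base_img: (1, 28, 28, 1) tensor of the reconstructed precursor pattern
    img = li_facts[0] * base_img
    # add random fluctuations on the 4 length scales with relative strengths li_facts[1:]
    for fact, dims in zip(li_facts[1:], li_dim_steps):
        if fact == 0.0:
            continue
        rand = np.random.uniform(-1.0, 1.0, size=(1, dims[0], dims[1], 1)).astype(np.float32)
        img = img + fact * tf.image.resize(rand, size=(img_dim, img_dim), method='bicubic')
    return img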

Let us look at an example:

initial_img = MyOIP._build_initial_img_from_prec( prec_index=7, 
  li_epochs    = [20, 50, 100, 400], 
  li_facts     = [1.0, 0.0, 0.0, 0.0, 0.0],
  li_dim_steps = [ (3,3), (7,7), (14,14), (28,28) ], 
  b_smoothing = False, b_display=True, 
  ax1_1 = ax1_1, ax1_2 = ax1_2)

picks the last of the patterns displayed above as the result of the precursor run. The result looks like:

The second image should look like the first one – it is created for check purposes only. Now, we enrich this image with fluctuations on the scale defined by (14,14), i.e. on squares of (2×2) pixels. (The MNIST image size is (28×28)!)

initial_img = MyOIP._build_initial_img_from_prec( prec_index=7, 
  li_epochs    = [20, 50, 100, 400], 
  li_facts     = [1.0, 0.0, 0.0, 0.2, 0.0],
  li_dim_steps = [ (3,3), (7,7), (14,14), (28,28) ], 
  b_smoothing = False, b_display=True, 
  ax1_1 = ax1_1, ax1_2 = ax1_2)

We get the result:

The enrichment of a large scale pattern by small scale fluctuations may support the creation of clearer OIP-images.

Creation of an OIP-image from the results of a precursor run

Method “_derive_OIP_for_Prec_Img()” executes the optimization loop for the creation of an OIP-image based on the input image offered. This method uses the same instances of the GradientTape object and the function “_iterate()” as the precursor run itself. It calls a method “_oip_strat_0_optimization_loop()” which performs the calculations during the optimization. To plot the image evolution we have to provide a figure with 8 sub-plots.

Jupyter cell 7:

 
#from IPython.core.display import display, HTML
#display(HTML("<style>div.output_scroll { height:44em; }</style>"))


# Derive an  OIP from a PRECURSOR IMAGE for a selected map 
# *********************************************************

# figure A - 8 frames
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure(1)
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

# figure B - 2 vertical frames for last image + contrast enhancement 
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 3
fig_size[1] = 7
#fig_b = plt.figure(2, figsize=(5,11.2))
fig_b = plt.figure(2)
ax1_1 = fig_b.add_subplot(211)
ax1_2 = fig_b.add_subplot(212)


# Parameters of the OIP-image optimization 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
n_epochs = 1200          # should be divisible by 5  
n_steps = 6             # number of intermediate reports 
epsilon = 0.01          # step size for gradient correction  
conv_criterion = 2.e-4  # criterion for a potential stop of optimization 


with tf.device("/GPU:0"):
    MyOIP._derive_OIP_for_Prec_Img( n_epochs = n_epochs, n_steps = n_steps, 
                        epsilon = epsilon , conv_criterion = conv_criterion, 
                        li_axa = li_axa, ax1_1 = ax1_1, ax1_2 = ax1_2, 
                        b_stop_with_convergence=False, 
                        b_show_intermediate_images=True 
                        )
      

 
Note: You can use the first two commented-out statements at the cell’s top to control the height of the output window of your Jupyter cells. Just remove the “#” comment signs.

The result contains intermediate information on the loss values and convergence. This helps to determine the minimum number of epochs for an optional second run.

In the end we get a plot for the history of the image evolution out of the input patterns. The distance between the epochs for which the plots are done changes in a logarithmic manner. At the end our method calls “_transform_tensor_to_img()” with some standard parameters for a display of the OIP with contrast enhancement.
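
A small illustration of such a logarithmic spacing; the values below reproduce the “li_int” list printed in the output further down, though the class may compute them slightly differently:

n_epochs, n_steps = 1000, 6
base   = n_epochs // 2**n_steps            # = 15
li_int = [base * 2**i for i in range(n_steps)]
print(li_int)                              # [15, 30, 60, 120, 240, 480]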

For the non-enriched large scale pattern displayed above we get:

*************
Start of optimization loop
*************
Strategy: Simple initial mixture of long and short range variations
Number of epochs =  1000
Epsilon =   0.01
*************
li_int =  [15, 30, 60, 120, 240, 480]
j:  0  :: loss_val =  0.9638658  :: loss_diff =  0.9638658165931702

step 0 finalized
present loss_val =  0.9638658
loss_diff =  0.9638658165931702
Shape of intermediate img =  (28, 28)
j:  1  :: loss_val =  12.853093  :: loss_diff =  11.889227
j:  2  :: loss_val =  13.5863  :: loss_diff =  0.73320675
j:  3  :: loss_val =  14.277494  :: loss_diff =  0.69119453
j:  4  :: loss_val =  14.983041  :: loss_diff =  0.7055464
j:  5  :: loss_val =  15.694604  :: loss_diff =  0.7115631
j:  6  :: loss_val =  16.413284  :: loss_diff =  0.7186804
j:  7  :: loss_val =  17.148396  :: loss_diff =  0.73511124
...
...
j:  994  :: loss_val =  99.45101  :: loss_diff =  -0.003944397
j:  995  :: loss_val =  99.453865  :: loss_diff =  0.0028533936
j:  996  :: loss_val =  99.50871  :: loss_diff =  0.054847717
j:  997  :: loss_val =  99.47069  :: loss_diff =  -0.038024902
j:  998  :: loss_val =  99.51533  :: loss_diff =  0.044639587
j:  999  :: loss_val =  99.43279  :: loss_diff =  -0.08253479

step 999 finalized
present loss_val =  99.43279
loss_diff =  -0.08253479

Infos on pixel value distribution during contrast enhancement: 
max_orig =  4.7334514  :: avg_orig =  -1.2164214e-08  :: min_orig:  -3.6500213
std_dev_orig =  1.0
max_ay =  4.733451  :: avg_ay =  0.0  :: min_ay:  -3.6500208
std_dev_ay =  0.9999999
div =  3.562975788116455
max_fin =  1.6585106  :: avg_fin =  0.33000004  :: min_fin:  -0.69443035
std_dev_fin =  0.28066427

max_img =  255  :: avg_img =  85.10331632653062  :: min_img:  0
std_dev_img =  58.67217632503727 

The last information lines reflect some data on the pixel value distribution during transformations for contrast enhancement. See the code of method “_transform_tensor_to_img()”.

The resulting images are:

and

When you repeat the OIP-creation for the enriched input image you get:

So, there are some differences – and, yes, you may want to play around a bit with the parameter options.

Below I present you the results for all eight long range patterns – un-enriched and for 1000 epochs.

 

OIP-creation obviously depends to some extent on the input patterns offered. But there is an overall similarity.

Pick the OIP you like best. The loss values in the order of the OIP-images are: 108.94, 105.95, 102.62, 96.02, 99.67, 100.75, 103.40, 99.45. (This shows, by the way, that the order in terms of a loss value after 10 tested epochs does not reflect the order after a full run over hundreds of epochs.)

Play around with contrast enhancement

You may not be satisfied with the contrast enhancement and find it somewhat exaggerated. Can you change it? Yes, method “_transform_tensor_to_img()” provides two parameters for it: one (centre_move) shifts the average of the values, the other (fact) controls the spread of the pixel values when mapping the calculated standardized values to the conventional range of [0, 255] for pixel values. The code reveals that some clipping is used, too.
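
To give you an idea of the kind of transformation involved, here is a rough sketch only; for the exact formula see the method’s code in the last section:

import numpy as np

def transform_to_img_sketch(std_img, centre_move=0.33, fact=1.4):
    # std_img: standardized pixel values (mean ~ 0, std ~ 1)
    ay = np.array(std_img, dtype=np.float32)
    ay = ay / (fact * np.max(np.abs(ay))) + centre_move   # compress the spread, shift the average
    ay = np.clip(ay, 0.0, 1.0)                            # clipping, as mentioned above
    return (ay * 255).astype(np.uint8)                    # map to the conventional range [0, 255]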

The following Jupyter cell allows for a variation of the contrast related parameters for the present OIP-image whose original values are available from variable “_inp_img_data[0, :, :, 0]”.

Jupyter cell 8:

        
from IPython.core.display import display, HTML
display(HTML("<style>div.output_scroll { height:44em; }</style>"))

# figure B - 2 vertical frames for last image + contrast enhancement 
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 6
fig_size[1] = 10
#fig_b = plt.figure(2, figsize=(5,11.2))
fig_b = plt.figure(1)
ax1_1 = fig_b.add_subplot(321)
ax1_2 = fig_b.add_subplot(322)
ax1_3 = fig_b.add_subplot(323)
ax1_4 = fig_b.add_subplot(324)
ax1_5 = fig_b.add_subplot(325)
ax1_6 = fig_b.add_subplot(326)
        
# -------------    
    
X_img = MyOIP._inp_img_data[0, :, :, 0]    

XN_img = X_img.numpy()
XT2_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.5, fact = 2.0)

ax1_1.imshow(XN_img, cmap=plt.cm.get_cmap('viridis'))
ax1_2.imshow(XT2_img, cmap=plt.cm.get_cmap('viridis'))

XT3_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.42, fact = 1.8)
XT4_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.33, fact = 1.4)

ax1_3.imshow(XT3_img, cmap=plt.cm.get_cmap('viridis'))
ax1_4.imshow(XT4_img, cmap=plt.cm.get_cmap('viridis'))

XT5_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.33, fact = 1.2)
XT6_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.33, fact = 0.9)

ax1_5.imshow(XT5_img, cmap=plt.cm.get_cmap('viridis'))
ax1_6.imshow(XT6_img, cmap=plt.cm.get_cmap('viridis'))
    

 

The images look like:

Enough options to create nice OIP-images.

If you do not care about a precursor run in the first place ….

Now, a precursor-run may be too costly for you. You just want to play around with some statistical input image and a map in a trial and error fashion. Then you may just use the following two Jupyter cells:

Jupyter cell 9:

        
from IPython.core.display import display, HTML
display(HTML("<style>div.output_scroll { height:34em; }</style>"))

# build simple initial image composed of fluctuations  
# *******************************************************

# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 5
fig1 = plt.figure(1)
ax1_1 = fig1.add_subplot(121)
ax1_2 = fig1.add_subplot(122)

# OIP function to setup an initial image 
initial_img = MyOIP._build_initial_img_data(  strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (0.2, 0.0, 0.0, 0.0),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False,
                                 ax1_1 = ax1_1, ax1_2 = ax1_2)

 

with output

Then use

Jupyter cell 10:

   
# Derive a single OIP from an input image with statistical fluctuations of the pixel values 
# ******************************************************************

# figure A - 8 frames
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure(1)
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

# figure B - 2 vertical frames for last image + contrast enhancement 
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 3
fig_size[1] = 7
#fig_b = plt.figure(2, figsize=(5,11.2))
fig_b = plt.figure(2)
ax1_1 = fig_b.add_subplot(211)
ax1_2 = fig_b.add_subplot(212)


map_index = 38          # map-index we are interested in 
n_epochs = 1000           # should be divisible by 5  
n_steps = 6             # number of intermediate reports 
epsilon = 0.01          # step size for gradient correction  
conv_criterion = 2.e-4  # criterion for a potential stop of optimization 

MyOIP._derive_OIP(map_index = map_index, 
                  n_epochs = n_epochs, n_steps = n_steps, 
                  epsilon = epsilon , conv_criterion = conv_criterion, 
                  b_stop_with_convergence=False,
                  li_axa = li_axa,
                  ax1_1 = ax1_1, ax1_2 = ax1_2)

 

It produces an image output:

The comparison with the other OIP-images above again shows a significant dependency of the outcome of our algorithm on the input image offered – especially when large scale OIPs are relevant for a map. The trial-and-error approach often does not reveal the repetitions of some basic pattern at different positions in the OIP-image as clearly as a systematic test of patterns does.

By the way: I was lucky in the above case that I got a reasonable OIP at all! Actually, I needed around 10 trials to get it. But if you tried e.g. with dominant small scale fluctuations defined by

initial_img = MyOIP._build_initial_img_data(  strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (0.1, 0.0, 0.0, 0.2),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False,
                                 ax1_1 = ax1_1, ax1_2 = ax1_2)

then the random number functionality will produce an input image as

which in turn will consistently lead to zero results

Start of optimization loop
*************
Strategy: Simple initial mixture of long and short range variations
Number of epochs =  1000
Epsilon =   0.01
*************
li_int =  [15, 30, 60, 120, 240, 480]
j:  0  :: loss_val =  0.0  :: loss_diff =  0.0
0-values, j=  0  loss =  0.0  avg_loss =  0.0

step 0 finalized
present loss_val =  0.0
loss_diff =  0.0
j:  1  :: loss_val =  0.0  :: loss_diff =  0.0
0-values, j=  1  loss =  0.0  avg_loss =  0.0
j:  2  :: loss_val =  0.0  :: loss_diff =  0.0
...
...
0-values, j=  10  loss =  0.0  avg_loss =  0.0
More than 10 times zero values - Try a different initial random distribution of pixel values

for map 38. Map 38 does not “like” random small scale fluctuations; it sees no pattern in them which would pass its underlying filter combination. Well, this was actually the main reason for a more systematic approach based on large scale patterns … .

But even in a trial-and-error approach you should always test pure long scale fluctuations first.

Does the code run as fast with TF 2.3.1?

No, it does not. Actually, it runs by a factor of 4 to 5 slower! The graphics card is used much less. I could not pin down the reason yet, but I think that there is some inefficiency in data transfers between the graphics card’s environment and the CPU’s environment in the newer version of Tensorflow – maybe there are even unnecessary transfers occurring. If the problems with TF 2.3 do not disappear in a later version, I will file a bug report.

Code extension to cover multiple maps in one run?

Studying the code a bit will enable you to modify, beautify and extend it. Personally, I just have no time which I could invest in major changes right now. If you want to extend the program yourself to cover multiple maps in one run I recommend the following approach:

Build loops across the methods “_prepare_precursor()” and “_precursor()” for a range of defined maps. Save at least some of the identified basic fluctuation patterns for initial input images. Then build and start a method for a separate run which reconstructs input images from the saved fluctuation patterns and builds the related OIP-images – a process which does not take much time. Select the one with the highest loss for each map. Then you should add a method for the display of the resulting OIP-images on an image grid for all maps – this would be similar to what we did with activation patterns in previous articles. A rough sketch of such a driver loop is given below.
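
The sketch reuses the method calls shown in the Jupyter cells above; the internal structure of “_li_of_flucts” and the exact way to store per-map results are assumptions you would have to adapt to the class code:

d_best_flucts = {}                      # per-map storage of promising patterns
for map_idx in range(128):              # all maps of layer Conv2D_3
    with tf.device("/GPU:0"):
        MyOIP._prepare_precursor(map_index=map_idx)
        MyOIP._precursor(num_epochs=10, li_axa=li_axa)
    # keep e.g. the best pattern(s) found for this map for a later OIP-run
    d_best_flucts[map_idx] = list(MyOIP._li_of_flucts)[:1]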

Conclusion

The visualization of filters, i.e. the creation of input images with patterns that trigger a map optimally, can become a hard business when we work with maps of deep convolutional layers. Some maps there may not react to input images with purely statistical pixel values and no dominant fluctuation pattern on longer length scales. During this article I have discussed the methods of a simple Python class which allows at least for a more systematic, though GPU/CPU-intensive, approach. It should be easy for readers who work a bit with Python to extend the code and tackle more elaborate tasks – also outside the MNIST case.

In the next article of this series

A simple CNN for the MNIST dataset – XII – filter visualization for maps of the first two convolutional layers

we shall look a bit at the other convolutional layers. So far we have only covered the deepest Conv layer. Later on we shall close our first encounter with a (simple) CNN by answering a question posed at the series’ beginning: What changes if we re-train the CNN on a shuffled MNIST data set?

Some Python code for OIP detection and creation

 
'''
Module to create OIPs as visualizations of filters of a simple CNN for the MNIST data set
@version: 0.6, 10.10.2020
@change: V0.5:  was based on version 0.4 which was originally created in Jupyter cells
@change: V0.6: General revision of class "my_OIP" and its methods
@change: V0.6: Changes to the documentation 
@attention: General status: For experimental purposes only! 
@requires: A full CNN trained on MNIST data 
@requires: A Keras model of the CNN and weight data saved in a h5-file, e.g. "cnn_MNIST_best.h5". 
           This file must be placed in the main directory of the Jupyter notebooks.
@requires: A Jupyter environment - from where the class My_OIP is called and where plotting takes place 
@note: The description to the interface to the class via the __init__()-method may be incomplete
@note: The use of prefixes li_ and ay_ is not yet consistent. ay_ should indicate numpy arrays, li_ instead normal Python lists
@warning: This version has not been tested outside a Jupyter environment - plotting in GTK/Qt-environment may require substantial changes 
@status: Under major development with frequent changes 
@author: Dr. Ralph Mönchmeyer
@copyright: Simplified BSD License, 10.10.2020. Copyright (c) 2020, Dr. Ralph Moenchmeyer, Augsburg, Germany

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''

# Modules to be imported - these libs must be imported in a Jupyter cell nevertheless 
# ~~~~~~~~~~~~~~~~~~~~~~~~
import time 
import sys 
import math
import os 
from os import path as path

import numpy as np
from numpy import save  # used to export intermediate data
from numpy import load

import scipy
from sklearn.preprocessing import StandardScaler
from itertools import product 

import tensorflow as tf
from tensorflow import keras as K
from tensorflow.python.keras import backend as B  # this is the only version compatible with TF 1 compat statements
#from tensorflow.keras import backend as B 
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from tensorflow.keras import optimizers
from tensorflow.keras.optimizers import schedules
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist

from tensorflow.python.client import device_lib

import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import matplotlib.patches as mpat 

from IPython.core.tests.simpleerr import sysexit

class My_OIP:
    '''
    @summary: This class allows for the creation and the display of OIP-patterns, 
              to which a selected map of a CNN-model and related filters react maximally
    
    @version: Version 0.6, 10.10.2020
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    @change: Revised methods 
    @requires: In the present version the class My_OIP requires: 
                * a CNN-model which works with standardized (!) input images, size 28x28,
                * a CNN-model which was trained on MNIST digit data,
                * exactly 4 length scales for random data fluctuations are used to compose initial statistical image data 
                  (the length scales should roughly have a factor of 2 between them) 
                * Assumption : exactly 1 input image and not a batch of images is assumed in various methods 
    
    @note: Main Functions:     
    0) _init__()
    1) _load_cnn_model()             => load cnn-model
    2) _build_oip_model()            => build an oip-model to create OIP-images
    3) _setup_gradient_tape_and_iterate_function()        
                                    => Implements TF2 GradientTape to watch input data for eager gradient calculation
                                    => Creates a convenience function by the help of Keras to iterate and optimize the OIP-adjustments
    4) _oip_strat_0_optimization_loop():
                                     => Method implementing a simple strategy to create OIP-images, 
                                        based on superposition of random data on long range data (compared to 28 px) 
                                        The optimization uses "gradient ascent" to get an optimum output of the selected Conv map 

    5) _derive_OIP():                => Method used to start the creation of an OIP-image for a chosen map 
                                        - based on an input image with statistical random data 
    6) _derive_OIP_for_Prec_Img():   => Method used to start the creation of an OIP-image for a chosen map 
                                       - based on an input image which was derived from a PRECURSOR run, 
                                       which tests the reaction of the map to large scale fluctuations 
                                        
    7) _build_initial_img_data():    => Builds an input image based on random data for fluctuations on 4 length scales 
    8) _build_initial_img_from_prec():    
                                     => Reconstruct an input image based on saved random data for long range fluctuations 
    9) _prepare_precursor():         => Prepare a _precursor run by setting up TF2 GradientTape and the _iterate()-function 
    10) _precursor():                => Precursor run which systematically tests the reaction of a selected convolutional map 
                                        to long range fluctuations based on a (3x3)-grid upscaled to the real image size  
    11) _display_precursor_imgs():   => A method which plots up to 8 selected precursor images with fluctuations,
                                        which triggered a maximum map reaction 
    12) _transform_tensor_to_img():  => A method which allows to transform tensor data of a standardized (!) image to standard image data 
                                        with (gray) pixel values in [0, 255]. Parameters allow for a contrast enhancement. 
    
    Usage hints 
    ~~~~~~~~~~~
    @note: If maps of a new convolutional layer are to be investigated then method _build_oip_model(layer_name) has to be rerun 
           with the layer's name as input parameter
    '''
    
    # Method to initialize an instantiation object 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def __init__(self, cnn_model_file = 'cnn_MNIST_best.h5', 
                 layer_name = 'Conv2D_3', 
                 map_index = 0, 
                 img_dim = 28, 
                 b_build_oip_model = True  
                ): 
        '''
        @summary: Initialization of an object instance - read in a CNN model, build an OIP-Model 
        @note: Input: 
        ~~~~~~~~~~~~
        @param cnn_model_file:  Name of a file containing a fully trained CNN-model; 
                                the model can later be overwritten by self._load_cnn_model()
        @param layer_name:      We can define a layer name, which we are interested in,  already when starting;
                                the layer can later be overwritten by self._build_oip_model()
        @param map_index:       We can define a map, which we are interested in,  already when starting;
                                A map-index is NOT required for building the OIP-model, but for the GradientTape-object 
        @param img_dim:         The dimension of the assumed quadratic images (28 for MNIST)

        @note: Major internal variables:
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _cnn_model:        A reference to the CNN model object
        @ivar _layer_name:       The name of convolutional layer 
                                 (can be overwritten by method _build_oip_model() ) 
        @ivar _map_index:        Index of the map in the chosen layer's output array 
                                 (can later be overwritten by other methods) 
        @ivar _r_cnn_inputs:     A reference to the input tensor of the CNN model 
                                 Could be a batch of images; but in this class only 1 image is assumed
        @ivar _layer_output:     Tensor with all maps of a certain layer    
        @ivar _oip_submodel:     A new model connecting the input of the CNN-model with a certain map's (!) output
        @ivar _tape:             An instance of TF2's GradientTape-object 
                                    Watches input, output, loss of a model 
                                    and calculates gradients in TF2 eager mode 
        @ivar _r_oip_outputs:    Reference to the output of the new OIP-model = map-activation
        @ivar _r_oip_grads:      Reference to gradient tensors for the new OIP-model (output dependency on input image pixels)
        @ivar _r_oip_loss:       Reference to a loss defined on the OIP-output - i.e. on the activation values of the map's nodes;
                                 Normally chosen to be an average of the nodes' activations 
                                 The loss defines a hyperplane on the (28x28)-dim representation space of the input image pixel values  
        @ivar _val_oip_loss:     Loss value for a certain input image 
        @ivar _iterate           Reference to a Keras backend function which invokes the new OIP-model for a given image
                                 and calculates both loss and gradient values (in TF2 eager mode) 
                                 This is the function to be used in the optimization loop for OIPs
        
        @note: Internal Parameters controlling the optimization loop:
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _oip_strategy:        0, 1 - There are two strategies to evolve OIP patterns out of statistical data 
                                    - only the first strategy is supported in this version 
                                    Both strategies can be combined with a precursor calculation 
                                    0: Simple superposition of fluctuations at different length scales
                                    1: NOT YET SUPPORTED 
                                    Evolution over partially evolved images based on longer scale variations 
                                    enriched with fluctuations on shorter length scales 

        @ivar _ay_epochs:           A list of 4 optimization epochs to be used whilst 
                                    evolving the img data via strategy 1 and intermediate images 
        @ivar _n_epochs:            Number of optimization epochs to be used with strategy 0 
        @ivar _n_steps:             Defines at how many intermediate points we show images and report 
                                    during the optimization process 
        @ivar _epsilon:             Factor to control the amount of correction imposed by the gradient values of the oip-model 

        @note: Input image data of the OIP-model and references to it 
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _initial_precursor_img:  The initial image to start a precursor optimization with.
                                       Would normally be an image of only long range fluctuations. 
        @ivar _precursor_image:        The evolved image created and selected by the precursor loop 

        @ivar _initial_inp_img_data:  A tensor representing the data of the input image 
        @ivar _inp_img_data:          A tensor representing the image data used and varied during the optimization 
        @ivar _img_dim:               We assume quadratic images to work with 
                                      with dimension _img_dim along each axis
                                      For the time being we only support MNIST images 
        
        @note: Internal parameters controlling the composition of random initial image data 
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _li_dim_steps:        A list of the intermediate dimensions for random data;
                                    these data are smoothly scaled to the image dimensions 
        @ivar _ay_facts:            A Numpy array of 4 factors to control the amount of contribution of the statistical 
                                    variations on the 4 length scales to the initial image
        
        @note: Internal variables to save data of a precursor run
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
        @ivar _list_of_covs: list of long range fluctuation data for a (3x3)-grid covering the image area 
        @ivar _li_fluct_enrichments: [li_facts, li_dim_steps] data for enrichment with small fluctuations 
        
        
        @note: Internal variables for plotting
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
        
        '''    
        
        # Input data and variable initializations
        # ****************************************
        
        # the model 
        # ~~~~~~~~~~
        self._cnn_model_file = cnn_model_file
        self._cnn_model      = None 
        
        # the chosen layer of the CNN-model
        self._layer_name = layer_name
        # the index of the map in the layer array
        self._map_index  = map_index
        
        # References to objects and the OIP-submodel
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._r_cnn_inputs  = None # reference to input of the CNN_model, also used in the oip-model  
        self._layer_output  = None
        self._oip_submodel  = None
        
        # References to watched GradientTape objects 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._tape          = None # TF2 GradientTape variable
        # some "references"
        self._r_oip_outputs = None # output of the oip-submodel to be watched 
        self._r_oip_grads   = None # gradients determined by GradientTape   
        self._r_oip_loss    = None # loss function
        # loss and gradient values (to be produced by a backend function _iterate() )
        self._val_oip_grads = None
        self._val_oip_loss  = None
        
        # The Keras function to produce concrete outputs of the new OIP-model  
        self._iterate       = None
        
        # The strategy to produce an OIP pattern out of statistical input images 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~
        # 0: Simple superposition of fluctuations at different length scales 
        # 1: Move over 4 intermediate images - partially optimized 
        self._oip_strategy = 0
        
        # Parameters controlling the OIP-optimization process 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~
        # number of epochs for optimization strategy 1
        self._ay_epochs    = np.array((20, 40, 80, 400), dtype=np.int32)
        len_epochs         = len(self._ay_epochs)
        
        # number of epochs for optimization strategy 0
        self._n_epochs     = self._ay_epochs[len_epochs-1]   
        self._n_steps      = 6   # divides the number of n_epochs into n_steps to produce intermediate outputs
        
        # size of corrections by gradients
        self._epsilon       = 0.01 # step-size for gradient correction
        
        # Input image types and references 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # precursor image
        self._initial_precursor_img = None
        self._precursor_img         = None # output of the _precursor-method 
        
        # The input image for the OIP-creation - a superposition of initial random fluctuations
        self._initial_inp_img_data  = None  # The initial data constructed 
        self._inp_img_data          = None  # The data used and varied for optimization 
        # image dimension
        self._img_dim               = img_dim   # = 28 => MNIST images for the time being 
        
        # Parameters controlling the setup of an initial image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~~~~~~~~~~~~~~
        # The length scales for initial input fluctuations
        self._li_dim_steps = ( (3, 3), (7,7), (14,14), (28,28) ) # can later be overwritten 
        # Parameters for fluctuations  - used both in strategy 0 and strategy 1  
        self._ay_facts     = np.array( (0.5, 0.5, 0.5, 0.5), dtype=np.float32 )
        
        # Data of a _precursor()-run 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._list_of_covs = None       # list of long range fluctuation data for a (3x3)-grid covering the image area 
        self._li_fluct_enrichments = None # = [li_facts, li_dim_steps] - list with 2 lists of data for enrichment with small fluctuations 
        # These data are required to reconstruct the input image to which a map reacted 
        
        # List of references to axis subplots
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # These references may change from Jupyter cell to Jupyter cell and are provided to the called methods
        self._li_axa = None # will be set by methods according to axes-frames in Jupyter cells 
        # axis frames for a single image in 2 versions (with contrast enhancement)
        self._ax1_1 = None
        self._ax1_2 = None 
        
        # ********************************************************
        # Model setup - load the cnn-model and build the OIP-model
        # ************
        if path.isfile(self._cnn_model_file): 
            # We trigger the initial load of a model 
            self._load_cnn_model(file_of_cnn_model = self._cnn_model_file, b_print_cnn_model = True)
            # We trigger the build of a new sub-model based on the CNN model used for OIP search 
            self._build_oip_model(layer_name = self._layer_name, b_print_oip_model = True ) 
        else:
            print("<\nWarning: The standard file " + self._cnn_model_file + 
                  " for the cnn-model could not be found!\n " + 
                  " Please use method _load_cnn_model() to load a valid model")
            sys.exit()
        return
    
    
    #
    # Method to load a specific CNN model
    # **********************************
    def _load_cnn_model(self, file_of_cnn_model=None, b_print_cnn_model=True ):
        '''
        @summary: Method to load a CNN-model from a h5-file and create a reference to its input (image)
        @version: 0.2 of 28.09.2020
        @requires: filename must already have been saved in _cnn_model_file or been given as a parameter 
        @requires: file must be a h5-file 
        @change: minor changes - documentation 
        @note: A reference to the CNN's input is saved internally
        @warning: An input in form of an image - a MNIST-image - is implicitly assumed
        
        @note: Parameters
        -----------------
        @param file_of_cnn_model: Name of h5-file with the trained (!) CNN-model
        @param b_print_cnn_model: boolean - Print some information on the CNN-model 
        '''
        if file_of_cnn_model != None:
            self._cnn_model_file = file_of_cnn_model
        
        # Check existence of the file
        if not path.isfile(self._cnn_model_file): 
            print("\nWarning: The file " + file_of_cnn_model + 
                  " for the cnn-model could not be found!\n" + 
                  "Please change the parameter \"file_of_cnn_model\"" + 
                  " to load a valid model")
        
        # load the CNN model 
        self._cnn_model = models.load_model(self._cnn_model_file)
        
        # Inform about the model and its file
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~
        print("Used file to load a ´ model = ", self._cnn_model_file)
        # we print out the models structure
        if b_print_cnn_model:
            print("Structure of the loaded CNN-model:\n")
            self._cnn_model.summary()
        
        # handle/reference to the model's input => more precisely: the input image
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #    Note: As we only have one image instead of a batch 
        #    we pick only the first tensor element!
        #    The inputs will be needed for building the oip-model 
        self._r_cnn_inputs = self._cnn_model.inputs[0]  # !!! We have a batch with just ONE image 
        
        # print out the shape - it should be known from the original cnn-model
        if b_print_cnn_model:
            print("shape of cnn-model inputs = ", self._r_cnn_inputs.shape)
        
        return
    
    
    #
    # Method to construct a model to optimize input for OIP-detection 
    # ***************************************
    def _build_oip_model(self, layer_name = 'Conv2D_3', b_print_oip_model=True ): 
        '''
        @summary: Method to build a new (sub-) model - the "OIP-model" - of the CNN-model by 
                  connecting the input of the CNN-model with one of its Conv-layers
        @version: 0.4 of 28.09.2020
        @change: Minor changes - documentation 
        @note: We need a Conv layer to build a working model for input image optimization
        We get the Conv layer by the layer's name 
        The new model connects the first input element of the CNN to the output maps of the named Conv layer of the CNN 
        We use Keras' models.Model() functionality 
        @note: The layer's name is crucial for all later investigations - if you want to change it this method has to be rerun 
        @requires: The original, trained CNN-model must be loaded and referenced by self._cnn_model 
        @warning: Only 1 input image and not a batch is implicitly assumed 
        
        @note: Parameters
        -----------------
        @param layer_name: Name of the convolutional layer of the CNN for whose maps we want to find OIP patterns
        @param b_print_oip_model: boolean - Print some information on the OIP-model 
        
        '''
        # free some RAM - hopefully 
        del self._oip_submodel
        
        # check for loaded CNN-model
        if self._cnn_model == None: 
            print("Error: CNN-model not yet defined.")
            sys.exit()
        
        # get layer name 
        self._layer_name = layer_name
        
        # We build a new model based on the model inputs and the output 
        self._layer_output = self._cnn_model.get_layer(self._layer_name).output
        # Note: We do not care at the moment about a complex composition of the input 
        # We trust in that we handle only one image - and not a batch
        
        # Create the sub-model via Keras' models.Model()
        model_name = "mod_oip__" + layer_name 
        self._oip_submodel = models.Model( [self._r_cnn_inputs], [self._layer_output], name = model_name)                                    

        # We print out the oip model structure
        if b_print_oip_model:
            print("Structure of the constructed OIP-sub-model:\n")
            self._oip_submodel.summary()
        return
    
    
    #
    # Method to set up GradientTape and an iteration function providing loss and gradient values  
    # *********************************************************************************************
    def _setup_gradient_tape_and_iterate_function(self, b_print = True):
        '''
        @summary: Central method to watch input variables and resulting gradient changes 
        @version: 0.5 of 28.09.2020
        @change: 
        @note: For TF2 eager execution we need to watch input changes and trigger automatic gradient evaluation
        @note: The normalization of the gradient is strongly recommended; as we fix epsilon for correction steps 
               we thereby get changes to the input data of an approximately constant order.
               This - together with standardization of the images (!) - will lead to convergence at the size of epsilon!
        '''   
        # Working with TF2 GradientTape
        self._tape = None

        # Watch out for input, output variables with respect to gradient changes
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        with tf.GradientTape() as self._tape: 
            # Input
            # ~~~~~
            self._tape.watch(self._r_cnn_inputs)
            # Output
            # ~~~~~~
            self._r_oip_outputs = self._oip_submodel(self._r_cnn_inputs)
            # Loss 
            self._r_oip_loss = tf.reduce_mean(self._r_oip_outputs[0, :, :, self._map_index])
            #self._loss = B.mean(oip_output[:, :, :, map_index])
            #self._loss = B.mean(oip_outputs[-1][:, :, map_index])
            #self._loss = tf.reduce_mean(oip_outputs[-1][ :, :, map_index])
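            # Note: The loss is defined as the mean activation of all neurons of the selected map 
            #       for the present input image - this is the quantity we maximize by gradient ascent 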
            if b_print:
                print(self._r_oip_loss)
                print("shape of oip_loss = ", self._r_oip_loss.shape)
        
        # Gradient definition and normalization
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._r_oip_grads  = self._tape.gradient(self._r_oip_loss, self._r_cnn_inputs)
        print("shape of grads = ", self._r_oip_grads.shape)

        # normalization of the gradient - required for convergence 
        t_tiny = tf.constant(1.e-7, tf.float32)
        self._r_oip_grads /= (tf.sqrt(tf.reduce_mean(tf.square(self._r_oip_grads))) + t_tiny)
        #self._r_oip_grads /= (B.sqrt(B.mean(B.square(self._r_oip_grads))) + 1.e-7)
        #self._r_oip_grads = tf.image.per_image_standardization(self._r_oip_grads)
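        # Note: Dividing the gradient by its RMS value scales it to a magnitude of order 1; 
        #       the size of each correction step is then controlled by epsilon alone 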

        # Define an abstract recallable Keras function as a convenience function for iterations 
        #     The _iterate()-function produces loss and gradient values for corrected img data 
        #     The first list of addresses points to the input data, the last to the output data 
        self._iterate = B.function( [self._r_cnn_inputs], [self._r_oip_loss, self._r_oip_grads] )
        
    
    #        
    # Method to optimize an emerging OIP out of statistical input image data 
    # (simple strategy - just optimize, no precursor, no intermediate pattern evolution 
    # ********************************
    def _oip_strat_0_optimization_loop(self, conv_criterion = 5.e-4, 
                                            b_stop_with_convergence = False,
                                            b_show_intermediate_images = True,
                                            b_print = True):
        '''
        @summary: Method to control the optimization loop for OIP reconstruction of an initial input image
                  with a statistical distribution of pixel values. 
        @version: 0.4 of 28.09.2020
        @changes: Minor changes - eliminated some unused variables
        @note:    The function also provides intermediate output in the form of printed data and images.
        @requires: An input image tensor must already be available at _inp_img_data - created by _build_initial_img_data()
        @requires: Axis-objects for plotting, typically provided externally by the calling functions 
                  _derive_OIP() or _precursor()
        
        @note: Parameters:
        ----------------- 
        @param conv_criterion:  A small threshold number for (difference of loss-values / present loss value )
        @param b_stop_with_convergence: Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
        @param b_show_intermediate_images: Boolean which decides whether we show up to 8 intermediate images 
        @param b_print: Boolean which decides whether we print intermediate loss values 
        
        @note: Intermediate information is provided at _n_steps intervals, 
               which are logarithmically spaced with respect to _n_epochs.
               Reason: Most changes happen at the beginning 
        @note: This method produces some intermediate output during the optimization loop in form of images.
        It uses an external grid of plot frames and their axes-objects. The addresses of the 
        axes-objects must be provided by an external list "li_axa[]" to self._li_axa[].  
        We need a sequence of _n_steps+2 axis-frames (or more) => len(_li_axa) >= _n_steps + 2. 
        
        @todo: Loop not optimized for TF 2 - but not so important here - a run takes less than a second 
        
        '''
        
        # Check that we already have an input image tensor
        if ( (self._inp_img_data is None) or 
             (self._inp_img_data.shape[1] != self._img_dim) or 
             (self._inp_img_data.shape[2] != self._img_dim) ) :
            print("There is no initial input image or it does not fit dimension requirements (28,28)")
            sys.exit()

        # Print some information
        if b_print:
            print("*************\nStart of optimization loop\n*************")
            print("Strategy: Simple initial mixture of long and short range variations")
            print("Number of epochs = ", self._n_epochs)
            print("Epsilon =  ", self._epsilon)
            print("*************")

        # some initial value
        loss_old = 0.0
        
        # Preparation of intermediate reporting / img printing
        # --------------------------------------
        # Logarithmic spacing of steps (most things happen initially)
        n_el = math.floor(self._n_epochs / 2**(self._n_steps) ) 
        li_int = []
        for j in range(self._n_steps):
            li_int.append(n_el*2**j)
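        # Example: for _n_epochs = 600 and _n_steps = 6 we get n_el = 9 and li_int = [9, 18, 36, 72, 144, 288] 
        #          => reporting points are spaced logarithmically and concentrate at the beginning of the loop 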
        if b_print:
            print("li_int = ", li_int)
        # A counter for intermediate reporting  
        n_rep = 0
        
        # Convergence? - A list for steps meeting the convergence criterion
        # ~~~~~~~~~~~~
        li_conv = []
        
        
        # optimization loop 
        # *******************
        # counter for steps with zero loss and gradient values 
        n_zeros = 0
        
        for j in range(self._n_epochs):
            
            # Get output values of our Keras iteration function 
            # ~~~~~~~~~~~~~~~~~~~
            self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
            
            # loss difference to last step - should steadily become smaller 
            loss_diff = self._val_oip_loss - loss_old 
            if b_print:
                print("j: ", j, " :: loss_val = ", self._val_oip_loss, " :: loss_diff = ",  loss_diff)
                # print("loss_diff = ", loss_diff)
            loss_old = self._val_oip_loss

            if j > 10 and (loss_diff/(self._val_oip_loss + 1.e-7)) < conv_criterion:
                li_conv.append(j)
                lenc = len(li_conv)
                # print("conv - step = ", j)
                # stop only after the criterion has been met in 4 successive steps
                if b_stop_with_convergence and lenc > 5 and li_conv[-4] == j-4:
                    return
            
            grads_val     = self._val_oip_grads
            #grads_val =   normalize_tensor(grads_val)
            
            # The gradient's average value 
            avg_grads_val = (tf.reduce_mean(grads_val)).numpy()
            
            # Check if our map reacts at all
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if self._val_oip_loss == 0.0 and avg_grads_val == 0.0 and b_print :
                print( "0-values, j= ", j, 
                       " loss = ", self._val_oip_loss, " avg_loss = ", avg_grads_val )
                n_zeros += 1
            
            if n_zeros > 10 and b_print: 
                print("
More than 10 times zero values - Try a different initial random distribution of pixel values")
                return
            
            # gradient ascent => Correction of the input image data 
            # ~~~~~~~~~~~~~~~
            self._inp_img_data += self._val_oip_grads * self._epsilon
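            # Note: This is a plain gradient ascent step - adding epsilon times the normalized gradient 
            #       increases the average activation (loss) of the selected map 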
            
            # Standardize the corrected image - we won't get a convergence otherwise 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
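            # Note: per_image_standardization() rescales the image to zero mean and unit variance; 
            #       this keeps the pixel values from drifting away during the repeated ascent steps 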
            
            # Some output at intermediate points 
            #     Note: We use logarithmic intervals because most changes 
            #           appear in the initial third of the loop's range  
            if (j == 0) or (j in li_int) or (j == self._n_epochs-1) :
                if b_print:
                    # print some info 
                    print("\nstep " + str(j) + " finalized")
                    #print("Shape of grads = ", grads_val.shape)
                    print("present loss_val = ", self._val_oip_loss)
                    print("loss_diff = ", loss_diff)
                # show the intermediate image data 
                if b_show_intermediate_images: 
                    imgn = self._inp_img_data[0,:,:,0].numpy()
                    # print("Shape of intermediate img = ", imgn.shape)
                    self._li_axa[n_rep].imshow(imgn, cmap=plt.cm.get_cmap('viridis'))
                    # counter
                    n_rep += 1
        
        return
    
    
    #        
    # Standard UI-method to derive OIP from a given initial input image
    # ********************
    def _derive_OIP(self, map_index = 1, 
                          n_epochs = None, 
                          n_steps = 6, 
                          epsilon = 0.01, 
                          conv_criterion = 5.e-4, 
                          li_axa = [], 
                          ax1_1 = None, ax1_2 = None, 
                          b_stop_with_convergence = False,
                          b_show_intermediate_images = True,
                          b_print = True):
        '''
        @summary: Method to create an OIP-image for a given initial input image
                  This is the standard user interface for finding an OIP 
        @warning: This method should NOT be used for finding an initial precursor image 
                  Use _prepare_precursor() to define the map first and then _precursor() to evaluate initial images 
        @version: V0.4, 28.09.2020
        @changes: Minor changes - added internal _li_axa for plotting, added documentation 
                  This method starts the process of producing an OIP of statistical input image data
        
        @requires: A map index should be provided to this method 
        @requires: An initial input image with statistical fluctuations of pixel values must have been created. 

        @warning:    This version only supports the most simple strategy - "strategy 0" 
        -------------    Optimize in one loop - starting from a superposition of fluctuations
                         No precursor, no intermediate evolutions

        @note: Parameters:
        -----------------
        @param map_index: We can and should choose a map here  (overwrites previous settings)
        @param n_epochs: Number of optimization steps  (overwrites previous settings) 
        @param n_steps:  Defines number of intervals (with length n_epochs/ n_steps) for reporting
                         standard value: 6 => 8 images - start image, end image + 6 intermediate 
                         This number also sets a requirement for providing (n_steps + 2) external axis-frames 
                         to display intermediate images of the emerging OIP  
                         => see _oip_strat_0_optimization_loop()
        @param epsilon:  Size for corrections by gradient values
        @param conv_criterion: A small threshold number for convergence (checks: difference of loss-values / present loss value )
        @param b_stop_with_convergence: 
                         Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
        @param li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
                 
        
        @note: Preparations for plotting: 
        We need n_steps + 2 axis-frames which must be provided externally
        
        With Jupyter this can externally be done by statements like 

        # figure
        # -----------
        #sizing
        fig_size = plt.rcParams["figure.figsize"]
        fig_size[0] = 16
        fig_size[1] = 8
        fig_a = plt.figure()
        axa_1 = fig_a.add_subplot(241)
        axa_2 = fig_a.add_subplot(242)
        axa_3 = fig_a.add_subplot(243)
        axa_4 = fig_a.add_subplot(244)
        axa_5 = fig_a.add_subplot(245)
        axa_6 = fig_a.add_subplot(246)
        axa_7 = fig_a.add_subplot(247)
        axa_8 = fig_a.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
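
        A (hypothetical) call from a Jupyter cell - assuming an instance "MyOIP" of this class and two 
        further axes-frames ax1_1, ax1_2 of a separate figure for the final images - could then look like: 

        MyOIP._derive_OIP( map_index=35, n_epochs=600, n_steps=6, epsilon=0.01, 
                           li_axa=li_axa, ax1_1=ax1_1, ax1_2=ax1_2 )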
        
        '''
        # Get input parameters
        self._map_index = map_index
        self._n_epochs  = n_epochs   
        self._n_steps   = n_steps
        self._epsilon   = epsilon
        
        # references to plot frames 
        self._li_axa = li_axa
        num_frames = len(li_axa)
        if num_frames < n_steps+2:
            print("The number of available image frames (", num_frames, ") is smaller than required for intermediate output (", n_steps+2, ")")
            sys.exit()
        
        # 2 axes frames to display the final OIP image (with contrast enhancement) 
        self._ax1_1 = ax1_1
        self._ax1_2 = ax1_2
        
        # number of epochs for optimization strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if n_epochs == None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = n_epochs

        # Reset some variables  
        self._val_oip_grads = None
        self._val_oip_loss  = None 
        self._iterate       = None 

        # Setup the TF2 GradientTape watch and a Keras function for iterations 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._setup_gradient_tape_and_iterate_function(b_print = b_print)
        if b_print:
            print("GradienTape watch activated ")
        
        '''
        # Gradient handling - so far we only deal with addresses 
        # ~~~~~~~~~~~~~~~~~~
        self._r_oip_grads  = self._tape.gradient(self._r_oip_loss, self._r_cnn_inputs)
        print("shape of grads = ", self._r_oip_grads.shape)
        
        # normalization of the gradient 
        self._r_oip_grads /= (B.sqrt(B.mean(B.square(self._r_oip_grads))) + 1.e-7)
        #self._r_oip_grads = tf.image.per_image_standardization(self._r_oip_grads)
        
        # define an abstract recallable Keras function 
        # producing loss and gradients for corrected img data 
        # the first list of addresses points to the input data, the last to the output data 
        self._iterate = B.function( [self._r_cnn_inputs], [self._r_oip_loss, self._r_oip_grads] )
        '''
            
        # get the initial image into a variable for optimization 
        self._inp_img_data = self._initial_inp_img_data

        # Start optimization loop for strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if self._oip_strategy == 0: 
            self._oip_strat_0_optimization_loop( conv_criterion = conv_criterion, 
                                                b_stop_with_convergence = b_stop_with_convergence,  
                                                b_show_intermediate_images = b_show_intermediate_images,
                                                b_print = b_print
                                               )
        
        # Display the last OIP-image created at the end of the optimization loop
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # standardized image 
        oip_img = self._inp_img_data[0,:,:,0].numpy()
        # transformed image 
        oip_img_t = self._transform_tensor_to_img(T_img=self._inp_img_data[0,:,:,0])
        
        # display 
        ax1_1.imshow(oip_img, cmap=plt.cm.get_cmap('viridis'))
        ax1_2.imshow(oip_img_t, cmap=plt.cm.get_cmap('viridis'))
        
        return
    
    
    # 
    # Method to derive OIP from a given initial input image if map_index is already defined 
    # ********************
    def _derive_OIP_for_Prec_Img(self, 
                          n_epochs = None, 
                          n_steps = 6, 
                          epsilon = 0.01, 
                          conv_criterion = 5.e-4, 
                          li_axa = [], 
                          ax1_1 = None, ax1_2 = None, 
                          b_stop_with_convergence = False,
                          b_show_intermediate_images = True,
                          b_print = True):
        '''
        @summary: Method to create an OIP-image for an already given map-index and a given initial input image
                  This is the core of OIP-detection, which starts the optimization loop  
        @warning: This method should NOT be used directly for finding an initial precursor image 
                  Use _prepare_precursor() to define the map first and then _precursor() to evaluate initial images 
        @version: V0.4, 28.09.2020
        @changes: Minor changes - added internal _li_axa for plotting, added documentation 
                  This method starts the process of producing an OIP of statistical input image data
        
        @note:    This method should only be called after _prepare_precursor(), _precursor(), _build_initial_img_prec() 
                  For a trial of different possible precursor images rerun _build_initial_img_prec() with a different index
        
        @requires: The map index must already have been set - e.g. by a run of _prepare_precursor() 
        @requires: An initial input image with statistical fluctuations of pixel values must have been created. 

        @warning:    This version only supports the most simple strategy - "strategy 0" 
        -------------    Optimize in one loop - starting from a superposition of fluctuations
                         no intermediate evolutions

        @note: Parameters:
        -----------------
        @param n_epochs: Number of optimization steps  (overwrites previous settings) 
        @param n_steps:  Defines number of intervals (with length n_epochs/ n_steps) for reporting
                         standard value: 6 => 8 images - start image, end image + 6 intermediate 
                         This number also sets a requirement for providing (n_steps + 2) external axis-frames 
                         to display intermediate images of the emerging OIP  
                         => see _oip_strat_0_optimization_loop()
        @param epsilon:  Size for corrections by gradient values
        @param conv_criterion: A small threshold number for convergence (checks: difference of loss-values / present loss value )
        @param b_stop_with_convergence: 
                         Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
        @param li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
                 
        
        @note: Preparations for plotting: 
        We need n_steps + 2 axis-frames which must be provided externally
        
        With Jupyter this can externally be done by statements like 

        # figure
        # -----------
        #sizing
        fig_size = plt.rcParams["figure.figsize"]
        fig_size[0] = 16
        fig_size[1] = 8
        fig_a = plt.figure()
        axa_1 = fig_a.add_subplot(241)
        axa_2 = fig_a.add_subplot(242)
        axa_3 = fig_a.add_subplot(243)
        axa_4 = fig_a.add_subplot(244)
        axa_5 = fig_a.add_subplot(245)
        axa_6 = fig_a.add_subplot(246)
        axa_7 = fig_a.add_subplot(247)
        axa_8 = fig_a.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
        
        '''
        # Get input parameters
        self._n_epochs  = n_epochs   
        self._n_steps   = n_steps
        self._epsilon   = epsilon
        
        # references to plot frames 
        self._li_axa = li_axa
        num_frames = len(li_axa)
        if num_frames < n_steps+2:
            print("The number of available image frames (", num_frames, ") is smaller than required for intermediate output (", n_steps+2, ")")
            sys.exit()
            
        # 2 axes frames to display the final OIP image (with contrast enhancement) 
        self._ax1_1 = ax1_1
        self._ax1_2 = ax1_2
        
        # number of epochs for optimization strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if n_epochs == None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = n_epochs
        
        # Note: No setup of GradientTape and _iterate() required - this is done by _prepare_precursor() 
            
        # get the initial image into a variable for optimization 
        self._inp_img_data = self._initial_inp_img_data

        # Start optimization loop for strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if self._oip_strategy == 0: 
            self._oip_strat_0_optimization_loop( conv_criterion = conv_criterion, 
                                                b_stop_with_convergence = b_stop_with_convergence,  
                                                b_show_intermediate_images = b_show_intermediate_images,
                                                b_print = b_print
                                               )
        # Display the last OIP-image created at the end of the optimization loop
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # standardized image 
        oip_img = self._inp_img_data[0,:,:,0].numpy()
        # transformed image 
        oip_img_t = self._transform_tensor_to_img(T_img=self._inp_img_data[0,:,:,0])
        
        # display 
        ax1_1.imshow(oip_img, cmap=plt.cm.get_cmap('viridis'))
        ax1_2.imshow(oip_img_t, cmap=plt.cm.get_cmap('viridis'))
        
        return
    
    
    # 
    # Method to build an initial image from a superposition of random data on different length scales 
    # ***********************************
    def _build_initial_img_data( self, 
                                 strategy = 0, 
                                 li_epochs    = [20, 50, 100, 400], 
                                 li_facts     = [0.5, 0.5, 0.5, 0.5],
                                 li_dim_steps = [ (3,3), (7,7), (14,14), (28,28) ], 
                                 b_smoothing = False,
                                 ax1_1 = None, ax1_2 = None):
        
        '''
        @summary: Standard method to build an initial image with random fluctuations on 4 length scales
        @version: V0.2 of 29.09.2020
        
        @note: This method should NOT be used for initial images based on a precursor image. 
               See _build_initial_img_prec(), instead.  
        
        @note: This method constructs an initial input image with a statistical distribution of pixel-values.
        We use 4 length scales to mix fluctuations with different "wave-length" by a simple approach: 
        We fill four squares with a different number of cells below the number of pixels 
        in each dimension of the real input image; e.g. (4x4), (7x7), (14x14), (28,28) <= (28,28). 
        We fill the cells with random numbers in [-1.0, 1.0]. We smoothly scale the resulting patterns 
        up to (28,28) (or whatever the input image dimensions are) by bicubic interpolation 
        and eventually add up all values. As a final step we standardize the pixel value distribution.          
        
        @warning: This version works with 4 length scales, only. 
        @warning: In the present version the parameters "strategy" and "li_epochs" have no effect 
        
        @note: Parameters:
        -----------------
        @param strategy:  The strategy, how to build an image (overwrites previous settings) - presently only 0 is supported 
        @param li_epochs: A list of epoch numbers which will be used in strategy 1 - not yet supported 
        @param li_facts:  A list of factors which control the relative strength of the 4 fluctuation patterns 
        @param li_dim_steps: A list of square dimensions for setting the length scale of the fluctuations  
        @param b_smoothing: Boolean which decides whether the smoothed version of the initial image is used 
        @param ax1_1: matplotlib axis-frame to display the built image 
        @param ax1_2: matplotlib axis-frame to display a second version of the built image 
        
        '''
        
        # Get input parameters 
        # ~~~~~~~~~~~~~~~~~~
        self._oip_strategy = strategy               # no effect in this version 
        self._ay_epochs    = np.array(li_epochs)    # no effect in this version 
        
        # factors by which to mix the random number fluctuations of the different length scales 
        self._ay_facts     = np.array(li_facts)
        # dimensions of the squares which simulate fluctuations 
        self._li_dim_steps = li_dim_steps
        
        # A Numpy array for the eventual superposition of random data 
        fluct_data = None
        
        
        # Strategy 0: Simple superposition of random patterns at 4 different wave-length
        # ~~~~~~~~~~
        if self._oip_strategy == 0:
            
            dim_1_1 = self._li_dim_steps[0][0] 
            dim_1_2 = self._li_dim_steps[0][1] 
            dim_2_1 = self._li_dim_steps[1][0] 
            dim_2_2 = self._li_dim_steps[1][1] 
            dim_3_1 = self._li_dim_steps[2][0] 
            dim_3_2 = self._li_dim_steps[2][1] 
            dim_4_1 = self._li_dim_steps[3][0] 
            dim_4_2 = self._li_dim_steps[3][1] 
            
            fact1 = self._ay_facts[0]
            fact2 = self._ay_facts[1]
            fact3 = self._ay_facts[2]
            fact4 = self._ay_facts[3]
            
            # print some parameter information
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            print("\nInitial image composition - strategy 0:\n Superposition of 4 different wavelength patterns")
            print("Parameters:\n", 
                 fact1, " => (" + str(dim_1_1) +", " + str(dim_1_2) + ") :: ", 
                 fact2, " => (" + str(dim_2_1) +", " + str(dim_2_2) + ") :: ", 
                 fact3, " => (" + str(dim_3_1) +", " + str(dim_3_2) + ") :: ", 
                 fact4, " => (" + str(dim_4_1) +", " + str(dim_4_2) + ")" 
                 )
            
            # fluctuations
            fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
            fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
            fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
            fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 
            
            # Scaling with bicubic interpolation to the required image size
            fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
            fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
            fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
            fluct4_scale = fluct4
            
            # superposition
            fluct_data = fact1*fluct1_scale + fact2*fluct2_scale + fact3*fluct3_scale + fact4*fluct4_scale
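            # Note: The weighted sum mixes the 4 length scales; with the default li_facts = [0.5, 0.5, 0.5, 0.5] 
            #       all four fluctuation patterns contribute with equal strength 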
            
        
        # get the standardized plus smoothed and unsmoothed image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #  TF2 provides a function performing standardization of image data 
        fluct_data_unsmoothed = tf.image.per_image_standardization(fluct_data) 
        fluct_data_smoothed   = tf.image.per_image_standardization(
                                    tf.image.resize( fluct_data, [28,28], method="bicubic", antialias=True) )
        
        if b_smoothing: 
            self._initial_inp_img_data = fluct_data_smoothed
        else:
            self._initial_inp_img_data = fluct_data_unsmoothed
        
        # Display of both variants => there should be no difference 
        # ~~~~~~~~~~~~~~~~~~~~~~~~
        img_init_unsmoothed = fluct_data_unsmoothed[0,:,:,0].numpy()
        img_init_smoothed   = fluct_data_smoothed[0,:,:,0].numpy()
        ax1_1.imshow(img_init_unsmoothed, cmap=plt.cm.get_cmap('viridis'))
        ax1_2.imshow(img_init_smoothed, cmap=plt.cm.get_cmap('viridis'))
        
        print("Initial images plotted")
        
        return self._initial_inp_img_data    


    # 
    # Method to build an initial image from a superposition of a PRECURSOR image with random data on different length scales 
    # ***********************************
    def _build_initial_img_from_prec( self, 
                                 prec_index = -1,
                                 strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (1.0, 0.0, 0.0, 0.0, 0.0),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False,
                                 b_print = True, 
                                 b_display = False, 
                                 ax1_1 = None, ax1_2 = None):
        
        '''
        @summary: Method to build an initial image based on a Precursor image => Input for _derive_OIP_for_Prec_Img()
        @version: V0.3, 03.10.2020
        @changes: V0.2: Minor- only documentation and comparison of index to length of _li_of_flucts[] 
        @changes: V0.3: Added Booleans to control the output and display of images 
        @changes: V0.4: Extended the reconstruction part / extended documentation
        
        @note: This method differs from _build_initial_img_data() as follows:
                * It uses a Precursor image as the fundamental image 
                * The data of the Precursor Image will be reconstructed from a (3x3) fluctuation pattern and enrichments
                * This method adds even further fluctuations if requested 
        @note: This method should be called manually from a Jupyter cell 
        @note: This method saves the reconstructed input image into self._initial_inp_img_data
        @note: This method should be followed by a call of self._derive_OIP_for_Prec_Img()
        
        @requires: Large scale fluctuation data saved in self._li_of_flucts[]
        @requires: Additional enrichment information in self._li_fluct_enrichments[]
        
        @param prec_index: This is an index ([0, 7[) of a large scale fluctuation pattern which was saved in self._li_of_flucts[] 
                           during the run of the method "_precursor()". The image tensor is reconstructed from the fluctuation pattern. 
        @warning: We support a maximum of 8 selected fluctuation patterns for which a map reacts 
        @warning: However, fewer precursor patterns may be found - so you should watch the output of _precursor() before you run this method
        
        @param li_facts:  A list of factors which control the relative strength of the precursor image vs. 
                          4 additional fluctuation patterns 
        @warning: Normally, it makes no sense to set li_facts[1] > 0 - because this will destroy the original large scale pattern 
        
        @note: For other input parameters see _build_initial_img_data()
        '''
        
        # Get input parameters 
        self._oip_strategy = strategy
        self._ay_facts     = np.array(li_facts)
        self._ay_epochs    = np.array(li_epochs)
        self._li_dim_steps = li_dim_steps
        
        fluct_data = None
        
        # Reconstruct a precursor image from a saved large scale fluctuation pattern (result of _precursor() ) 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        length_li_cov = len(self._li_of_flucts)
        if prec_index > -1: 
            if length_li_cov < prec_index+1:
                print("index too large for number of detected patterns (", length_li_cov, ")")
                sys.exit()
            cov_p = self._li_of_flucts[prec_index][1][1]
            
            fluct0_scale_p = tf.image.resize(cov_p, [28,28], method="bicubic", antialias=True)
            
            # Add fluctuation enrichments - if saved [ len(self._li_fluct_enrichments) > 0 ]
            if len(self._li_fluct_enrichments) > 0:
                # Scaling enrichment flucts with bicubic interpolation to the required image size
                fluct1_p = self._li_fluct_enrichments[2][0] 
                fluct2_p = self._li_fluct_enrichments[2][1] 
                fluct3_p = self._li_fluct_enrichments[2][2] 
                fluct4_p = self._li_fluct_enrichments[2][3] 
                
                fact0_p = self._li_fluct_enrichments[0][0]
                fact1_p = self._li_fluct_enrichments[0][1]
                fact2_p = self._li_fluct_enrichments[0][2]
                fact3_p = self._li_fluct_enrichments[0][3]
                fact4_p = self._li_fluct_enrichments[0][4]
                
                fluct1_scale_p = tf.image.resize(fluct1_p, [28,28], method="bicubic", antialias=True)
                fluct2_scale_p = tf.image.resize(fluct2_p, [28,28], method="bicubic", antialias=True)
                fluct3_scale_p = tf.image.resize(fluct3_p, [28,28], method="bicubic", antialias=True)
                fluct4_scale_p = fluct4_p
                
                fluct_scale_p = fact0_p*fluct0_scale_p \
                 + fact1_p*fluct1_scale_p + fact2_p*fluct2_scale_p \
                 + fact3_p*fluct3_scale_p + fact4_p*fluct4_scale_p
                 
            else: 
                fluct_scale_p = fluct0_scale_p
            
            # get the img-data 
            fluct_data_p  = tf.image.per_image_standardization(fluct_scale_p)     
            fluct_data_p  = tf.where(fluct_data_p > 5.e-6, fluct_data_p, tf.zeros_like(fluct_data_p))
            self._initial_inp_img_data = fluct_data_p
            self._inp_img_data         = fluct_data_p
            
        
        # Strategy 0: Simple superposition of the precursor image AND additional patterns at 4 different wave-length
        # ~~~~~~~~~~
        if self._oip_strategy == 0:
            
            dim_1_1 = self._li_dim_steps[0][0] 
            dim_1_2 = self._li_dim_steps[0][1] 
            dim_2_1 = self._li_dim_steps[1][0] 
            dim_2_2 = self._li_dim_steps[1][1] 
            dim_3_1 = self._li_dim_steps[2][0] 
            dim_3_2 = self._li_dim_steps[2][1] 
            dim_4_1 = self._li_dim_steps[3][0] 
            dim_4_2 = self._li_dim_steps[3][1] 
            
            fact0 = self._ay_facts[0]
            fact1 = self._ay_facts[1]
            fact2 = self._ay_facts[2]
            fact3 = self._ay_facts[3]
            fact4 = self._ay_facts[4]
            
            # print some parameter information
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if b_print:
                print("\nInitial image composition - strategy 0:\n Superposition of 4 different wavelength patterns")
                print("Parameters:\n", 
                     fact0, " => precursor image \n", 
                     fact1, " => (" + str(dim_1_1) +", " + str(dim_1_2) + ") :: ", 
                     fact2, " => (" + str(dim_2_1) +", " + str(dim_2_2) + ") :: ", 
                     fact3, " => (" + str(dim_3_1) +", " + str(dim_3_2) + ") :: ", 
                     fact4, " => (" + str(dim_4_1) +", " + str(dim_4_2) + ")" 
                     )
            
            # fluctuations
            fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
            fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
            fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
            fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 
            
            # Scaling with bicubic interpolation to the required image size
            fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
            fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
            fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
            fluct4_scale = fluct4
            
            # superposition with the already calculated image 
            fluct_data = fact0 * self._initial_inp_img_data  \
                         + fact1*fluct1_scale + fact2*fluct2_scale \
                         + fact3*fluct3_scale + fact4*fluct4_scale
            
        
        # get the standardized plus smoothed and unsmoothed image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #    TF2 provides a function performing standardization of image data 
        fluct_data_unsmoothed = fluct_data 
        fluct_data_smoothed   = tf.image.per_image_standardization(
                                    tf.image.resize( fluct_data, [28,28], 
                                                     method="bicubic", antialias=True) )
        
        if b_smoothing: 
            self._initial_inp_img_data = fluct_data_smoothed
        else:
            self._initial_inp_img_data = fluct_data_unsmoothed
        
        # There should be no difference 
        img_init_unsmoothed = fluct_data_unsmoothed[0,:,:,0].numpy()
        img_init_smoothed   = fluct_data_smoothed[0,:,:,0].numpy()
        
        if b_display:
            ax1_1.imshow(img_init_unsmoothed, cmap=plt.cm.get_cmap('viridis'))
            ax1_2.imshow(img_init_smoothed, cmap=plt.cm.get_cmap('viridis'))
            print("Initial images plotted")
        
        return self._initial_inp_img_data    
    
    #
    # Method to prepare a Precursor run which checks a variety of large scale fluctuations for optimum activation  
    # ***********************************
    def _prepare_precursor(self, map_index = 120, b_print = False):
        '''
        @summary: A method to prepare a Precursor run by setting up GradientTape and the _iterate() function for an optimization loop
        @version: 0.2, 30.09.2020
        @changes: Minor - adaption to _setup_gradient_tape_and_iterate_function() instead of the obsolete _setup_gradient_tape()
        @requires: A loaded CNN-Model and an already built OIP-model 
        @requires: A valid map-index as input parameter 
        
        @note: This method sets up the GradientTape and _iterate only once 
               - it will not be done again during the investigation of thousands of different input images (with large scale fluctuations) 
               which we investigate during the _precursor()-run. 
        
        @param map_index: Index selecting a map for the CNN layer defined previously by _build_oip_model()
        @param b_print: Boolean - decides about intermediate output 
        
        '''
        
        # Get input parameters 
        # ~~~~~~~~~~~~~~~~~~~~~~
        self._map_index = map_index
        # Reset some variables  
        self._val_oip_grads = None
        self._val_oip_loss  = None 
        self._iterate       = None 
        
        # Setup the TF2 GradientTape watch
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._setup_gradient_tape_and_iterate_function(b_print = b_print)
        if b_print:
            print("GradientTape watch activated and _iterate()-function defined")
    
    
    # Method to run a Precursor check over a variety of large scale fluctuations for optimum activation
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def _precursor(self, 
                   li_pre_val=[0.2, 0.5, 0.8], 
                   num_epochs=10, 
                   loss_limit = 0.5, 
                   b_print = True, 
                   b_with_enriched_fluct = False, 
                   li_facts     = (1.0, 0.0, 0.0, 0.0, 0.1),
                   li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                   b_check = True, 
                   li_axa = None):

        '''
        @summary: Method to investigate thousands of input images with large scale fluctuations for the reaction of a specified map (filters)
                  and a given number of epochs in pattern creation
        @version: 0.5, 02.10.2020
        @changes: Minor - documentation, skipped some superfluous statements 
        @note: We select the 8 most dominant images - or less, if there are fewer input images which trigger the map 
        @requires: Previous run of _prepare_precursor() with a definition of the map-index
        @note: We vary only 3 given pixel values on (3x3) grids (19683 possibilities)
        @note: The optimization loop is completely done within this method - due to performance reasons
        
        @param li_pre_val: A list of three scaled pixel values between ]0, 1[ which shall be combined in (3x3)-fluctuation patterns
        
        @param num_epochs: The number of epochs used in the optimization loop for pattern creation
        @note: It is worthwhile to experiment with the number of epochs - the (3x3)-pattern selection may change !!!
        
        @param loss_limit: Threshold of loss for which we register a fluctuation image as relevant 
        
        @param b_print: Boolean - controls printout for intermediate results - useful to see map response 
        @param b_check: Check the response of map for the first saved pattern - check the image reconstruction at the same time
        
        @note: Parameters to enrich the (3x3)-large scale fluctuation with a small scale pattern 
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @param b_with_enriched_fluct: Boolean - controls whether we enrich the long-range pattern with other additional patterns 
        @param li_facts:  A list of factors which control the relative strength of the precursor image vs. 
                          4 additional fluctuation patterns 
        @param li_dim_steps: A list of square dimensions for setting the length scale of the fluctuations  
        
        @note: Parameters for plotting  
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @param li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
                 
        @note: Preparations for plotting: 
        We need n_step + 2 axis-frames which must be provided externally
        
        With Jupyter this can externally be done by statements like 

        # figure
        # -----------
        #sizing
        fig_size = plt.rcParams["figure.figsize"]
        fig_size[0] = 16
        fig_size[1] = 8
        fig_a = plt.figure()
        axa_1 = fig_a.add_subplot(241)
        axa_2 = fig_a.add_subplot(242)
        axa_3 = fig_a.add_subplot(243)
        axa_4 = fig_a.add_subplot(244)
        axa_5 = fig_a.add_subplot(245)
        axa_6 = fig_a.add_subplot(246)
        axa_7 = fig_a.add_subplot(247)
        axa_8 = fig_a.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
        
        
        '''
        # Internal parameter 
        num_selected = 8 
        # check if the length of li_axa is sufficient
        if li_axa == None or len(li_axa) < num_selected:
            print("Error: The length of the provided list with axes-frames for plotting must be at least ", num_selected )
            sys.exit()
        
        # get required external params 
        # ~~~~~~~~~~~~~~~~~~~~~~~~
        self._n_steps   = 2           # only a dummy 
        self._epsilon   = 0.01        # only a dummy  
        
        # number of epochs for optimization strategy 0 
        if num_epochs == None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = num_epochs  
        
        # Create a fixed random fluctuation pattern which we can later overlay on the long range fluctuation patterns 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._li_dim_steps = li_dim_steps # dimensions for fluctuations 
        self._ay_facts     = np.array(li_facts)
        
        dim_1_1 = self._li_dim_steps[0][0] 
        dim_1_2 = self._li_dim_steps[0][1] 
        dim_2_1 = self._li_dim_steps[1][0] 
        dim_2_2 = self._li_dim_steps[1][1] 
        dim_3_1 = self._li_dim_steps[2][0] 
        dim_3_2 = self._li_dim_steps[2][1] 
        dim_4_1 = self._li_dim_steps[3][0] 
        dim_4_2 = self._li_dim_steps[3][1] 
        
        fact0 = self._ay_facts[0]
        fact1 = self._ay_facts[1]
        fact2 = self._ay_facts[2]
        fact3 = self._ay_facts[3]
        fact4 = self._ay_facts[4]
        
        # Create fluctuation patterns for enrichment 
        fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
        fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
        fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
        fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 
        
        li_fluct = [fluct1, fluct2, fluct3, fluct4]
        
        # Scaling with bicubic interpolation to the required image size
        fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
        fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
        fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
        fluct4_scale = fluct4

        
        # Create cartesian product of combinatoric possibilities for a (3x3)-grid of long range fluctuations
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        cp = list(product(li_pre_val, repeat=9))
        num = len(cp) 
        print ("We test ", num, " possibilities for a (3x3) fluctuations ")
        
        # Prepare lists to save parameter data for the fluctuation pattern 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # Intermediate dict to save relevant long scale fluctuations 
        d_cov = {}
        # list to save parameters for an enrichment of the large scale pattern with small fluctuations
        if b_with_enriched_fluct:
            self._li_fluct_enrichments = [li_facts, li_dim_steps, li_fluct]
        else:
            self._li_fluct_enrichments = []
            
        # Loop to check for relevant fluctuations => Loop over all combinations
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        for i_cp in range(num): 
            
            # create the value distribution 
            cov = np.array(cp[i_cp])
            cov = cov.reshape(1,3,3,1) 

            # create basic image to investigate
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            fluct_scale = tf.image.resize(cov, [28,28], method="bicubic", antialias=True) 
            
            # enrich with additional small-scale fluctuations 
            if b_with_enriched_fluct: 
                # superposition with the already calculated image 
                fluct_scale = fact0 * fluct_scale \
                             + fact1*fluct1_scale + fact2*fluct2_scale \
                             + fact3*fluct3_scale + fact4*fluct4_scale
            
            # standardization of the image data 
            fluct_data  = tf.image.per_image_standardization(fluct_scale)     
            # eliminating very small values - proves to be helpful in many cases 
            fluct_data = tf.where(fluct_data > 5.e-6, fluct_data, tf.zeros_like(fluct_data))
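            # Note: tf.where() keeps only pixel values above the small threshold and sets all others to zero 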
            
            # save image data 
            self._initial_inp_img_data = fluct_data
            self._inp_img_data         = fluct_data
            
            # optimization loop 
            # ~~~~~~~~~~~~~~~~~
            for j in range(self._n_epochs):
                # Get output values of our Keras iteration function 
                self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
                # gradient ascent => Correction of the input image data 
                self._inp_img_data += self._val_oip_grads * self._epsilon
                # Standardize the corrected image - we won't get a convergence otherwise 
                self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
                
            # Check if we have a loss value > loss_limit and save the (3x3)-data
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if self._val_oip_loss > loss_limit:
                d_cov[i_cp] = [self._val_oip_loss, cov] 
                if b_print:
                    tf.print("i = ", i_cp, " loss = ", self._val_oip_loss)

        # We restrict the number to those 8 with highest loss 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if len(d_cov) > 0: 
            self._li_of_flucts = sorted(d_cov.items() , reverse = True,  key=lambda x: x[1][0])
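            # Note: d_cov contains items of the form (i_cp, [loss, cov]); sorting by x[1][0] orders the 
            #       saved fluctuation patterns by descending loss value 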
            # print("num of relevant covs = ", len(self._li_of_flucts), len(d_cov))   
            print("\nnum of relevant covs = ", len(self._li_of_flucts))   
            self._li_of_flucts = self._li_of_flucts[:num_selected].copy()
            #save( 'li_of_flucts.npy', np.array(self._li_of_flucts, dtype=np.float32) )
            save( 'li_of_flucts.npy', self._li_of_flucts)
            # save the enrichment-setting 
            save('li_of_cov_enrichments.npy',self._li_fluct_enrichments)
            
            
            # Check if the map really reacted - and check the reconstruction mechanism 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if b_check: 
                print("check of map reaction to first selected image")
                cov_t = self._li_of_flucts[0][1][1]
                #print("cov_t-shape = ", cov_t.shape)
                #cov_del = cov_t - cov
                #print("cov_del =\n", cov_del)
                
                fluct0_scale_t = tf.image.resize(cov_t, [28,28], method="bicubic", antialias=True) 
                        
                # Add fluctuation enrichments - if saved
                if b_with_enriched_fluct:
                    
                    # Scaling enrichment flucts with bicubic interpolation to the required image size
                    fluct1_t = self._li_fluct_enrichments[2][0] 
                    fluct2_t = self._li_fluct_enrichments[2][1] 
                    fluct3_t = self._li_fluct_enrichments[2][2] 
                    fluct4_t = self._li_fluct_enrichments[2][3] 
                    
                    fact0_t = self._li_fluct_enrichments[0][0]
                    fact1_t = self._li_fluct_enrichments[0][1]
                    fact2_t = self._li_fluct_enrichments[0][2]
                    fact3_t = self._li_fluct_enrichments[0][3]
                    fact4_t = self._li_fluct_enrichments[0][4]
                    
                    fluct1_scale_t = tf.image.resize(fluct1_t, [28,28], method="bicubic", antialias=True)
                    fluct2_scale_t = tf.image.resize(fluct2_t, [28,28], method="bicubic", antialias=True)
                    fluct3_scale_t = tf.image.resize(fluct3_t, [28,28], method="bicubic", antialias=True)
                    fluct4_scale_t = fluct4_t
                    
                    fluct_scale_t = fact0_t*fluct0_scale_t \
                                 + fact1_t*fluct1_scale_t + fact2_t*fluct2_scale_t \
                                 + fact3_t*fluct3_scale_t + fact4_t*fluct4_scale_t
                    
                else: 
                    fluct_scale_t = fluct0_scale_t
                
                #standardization
                fluct_data_t  = tf.image.per_image_standardization(fluct_scale_t)     
                fluct_data_t  = tf.where(fluct_data_t > 5.e-6, fluct_data_t, tf.zeros_like(fluct_data_t))
                self._initial_inp_img_data = fluct_data_t
                self._inp_img_data         = fluct_data_t
                self._precursor_img        = fluct_data_t
                
                # optimization loop 
                for j in range(self._n_epochs):
                    self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
                    self._inp_img_data += self._val_oip_grads * self._epsilon
                    self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
                print("loss for 1st selected img = ", self._val_oip_loss )

            # show the imgs 
            # ~~~~~~~~~~~~~~
            self._display_precursor_imgs(li_axa = li_axa)
            
        # list contains no patterns 
        else:
            print("No image found !")
    
    
    # Method to display initial fluctuation images identified as objects for OIP creation 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def _display_precursor_imgs(self, li_axa = None):
        '''
        @summary: Method to display up to 8 selected precursor images 
        @version: 0.2, 02.10.2020
        @change: Only some documentation 
        @note: We first reconstruct the image from saved data of the large scale fluctuations 
        @note: We then display the images in externally delivered axes-frames of matplotlib
        @requires: A filled set of valid fluctuation patterns in self._li_of_flucts[][][] - determined by a run of _precursor()
        @requires: Information on fluctuation enrichments in self._li_fluct_enrichments[][] - determined by a _precursor() run
        @requires: A set of axes-frames for plotting - preferably defined in a Jupyter cell calling this method 
        @note: Parameters for plotting  
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @param li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
         '''
        # length of _li_of_flucts[] vs. length of li_axa
        len_cov = len(self._li_of_flucts)
        if li_axa is None or len(li_axa) < len_cov:
            print("Error: The length of the provided list with axes-frames for plotting must be at least ", len_cov )
            sys.exit()
        
        # Loop to reconstruct and display the found precursor images 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        for j in range(len(self._li_of_flucts)):
            print(j, "loss = ", self._li_of_flucts[j][1][0])
            
            # reconstruct the image from the data of the precursor run 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            cov = self._li_of_flucts[j][1][1]
            fluct0_scale_t  = tf.image.resize(cov, [28,28], method="bicubic", antialias=True)        
            
            # add enrichments if defined 
            if len(self._li_fluct_enrichments) > 0: 
                # Scaling enrichment flucts with bicubic interpolation to the required image size
                fluct1_t = self._li_fluct_enrichments[2][0] 
                fluct2_t = self._li_fluct_enrichments[2][1] 
                fluct3_t = self._li_fluct_enrichments[2][2] 
                fluct4_t = self._li_fluct_enrichments[2][3] 
                
                fact0_t = self._li_fluct_enrichments[0][0]
                fact1_t = self._li_fluct_enrichments[0][1]
                fact2_t = self._li_fluct_enrichments[0][2]
                fact3_t = self._li_fluct_enrichments[0][3]
                fact4_t = self._li_fluct_enrichments[0][4]
                
                fluct1_scale_t = tf.image.resize(fluct1_t, [28,28], method="bicubic", antialias=True)
                fluct2_scale_t = tf.image.resize(fluct2_t, [28,28], method="bicubic", antialias=True)
                fluct3_scale_t = tf.image.resize(fluct3_t, [28,28], method="bicubic", antialias=True)
                fluct4_scale_t = fluct4_t
                
                fluct_scale_t = fact0_t*fluct0_scale_t \
                             + fact1_t*fluct1_scale_t + fact2_t*fluct2_scale_t \
                             + fact3_t*fluct3_scale_t + fact4_t*fluct4_scale_t
            else:
                fluct_scale_t = fluct0_scale_t                 
            
            fluct_datx  = tf.image.per_image_standardization(fluct_scale_t)     
            fluct_dat  = tf.where(fluct_datx > 5.e-6, fluct_datx, tf.zeros_like(fluct_datx))
            img = fluct_dat[0, :,:, 0].numpy()
            li_axa[j].imshow(img, cmap=plt.cm.get_cmap('viridis'))
    
    
    # Method to transform an img tensor into a standard image - used for contrast enhancement  
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def _transform_tensor_to_img(self, T_img = None, centre_move = 0.33, fact = 0.85):
        '''
        @summary: Method to transform (standardized) tensor img data into standard img array data 
        @note: Clipping is used to remove pixel values outside [0, 255] 
        @version: 0.1, 10.10.2019
        
        @requires: A defined TF2 Keras backend 
        
        @param T_img: The TF2 or keras tensor for image data 
        @param centre_move: A shift applied to the pixel value distribution to move its centre into the visible range (=> brightness adjustment)
        @param fact: A scaling factor for the spread of the values around the average (=> contrast enhancement)
        
        '''
        ay_x = T_img.numpy()  # floating point array  
        
        maxi_o    = np.max(T_img)
        avg_o     = np.mean(T_img)
        mini_o    = np.min(T_img)
        std_dev_o = np.std(T_img)
        print("\nInfos on pixel value distribution during contrast enhancement: ") 
        print("\max_orig = ", maxi_o, " :: avg_orig = ", avg_o, " :: min_orig: ", mini_o) 
        print("std_dev_orig = ", std_dev_o)

        # the following operation should have no effect on standardized images
        ay_x -= ay_x.mean() # ay_x is already a numpy array at this point 
        ay_x /= (ay_x.std() + B.epsilon())
        
        maxi    = np.max(ay_x)
        avg     = np.mean(ay_x)
        mini    = np.min(ay_x)
        std_dev = np.std(ay_x)
        print("max_ay = ", maxi, " :: avg_ay = ", avg, " :: min_ay: ", mini) 
        print("std_dev_ay = ", std_dev)
        
        div = fact * 0.5 * ( abs(maxi_o) + abs(mini_o) )
        print("div = ", div)
        ay_x /= div          # scaling  
        ay_x += centre_move  # moving the data center 
        
        maxi = np.max(ay_x)
        avg  = np.mean(ay_x)
        mini = np.min(ay_x)
        std_dev = np.std(ay_x)
        print("max_fin = ", maxi, " :: avg_fin = ", avg, " :: min_fin: ", mini) 
        print("std_dev_fin = ", std_dev)
        
        ay_x = np.clip(ay_x, 0, 1)
        
        ay_x *= 255
        ay_x_img = np.clip(ay_x, 0, 255).astype('uint8')
        
        maxi = np.max(ay_x_img)
        avg  = np.mean(ay_x_img)
        mini = np.min(ay_x_img)
        std_dev = np.std(ay_x_img)
        print("max_img = ", maxi, " :: avg_img = ", avg, " :: min_img: ", mini) 
        print("std_dev_img = ", std_dev, "\n")
        
        
        return ay_x_img
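
As a final, purely hypothetical usage example: a Jupyter cell for contrast enhancement with the last method could look like the lines below. It assumes that a successful run of “_derive_OIP()” has filled “myOIP._inp_img_data” with an optimized image tensor; the cell is only a sketch and not part of the class code.

# Hypothetical Jupyter cell: contrast enhancement of a derived OIP image
T_img   = myOIP._inp_img_data[0, :, :, 0]      # drop batch and channel dimensions
img_enh = myOIP._transform_tensor_to_img(T_img = T_img, centre_move = 0.33, fact = 0.85)

fig2  = plt.figure(2)
ax2_1 = fig2.add_subplot(121)
ax2_2 = fig2.add_subplot(122)
ax2_1.imshow(T_img.numpy(), cmap=plt.cm.get_cmap('viridis'))   # raw optimized image
ax2_2.imshow(img_enh, cmap=plt.cm.get_cmap('viridis'))         # contrast enhanced version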



 

Other (previous) articles in this series

A simple CNN for the MNIST dataset – IV – Visualizing the activation output of convolutional layers and maps
A simple CNN for the MNIST dataset – III – inclusion of a learning-rate scheduler, momentum and a L2-regularizer
A simple CNN for the MNIST datasets – II – building the CNN with Keras and a first test
A simple CNN for the MNIST datasets – I – CNN basics

 

A simple CNN for the MNIST dataset – X – filling some gaps in filter visualization

I continue my series of articles on a CNN for the MNIST dataset.

A simple CNN for the MNIST dataset – IX – filter visualization at a convolutional layer
A simple CNN for the MNIST dataset – VIII – filters and features – Python code to visualize patterns which activate a map strongly
A simple CNN for the MNIST dataset – VII – outline of steps to visualize image patterns which trigger filter maps
A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part
A simple CNN for the MNIST dataset – V – about the difference of activation patterns and features
A simple CNN for the MNIST dataset – IV – Visualizing the activation output of convolutional layers and maps
A simple CNN for the MNIST dataset – III – inclusion of a learning-rate scheduler, momentum and a L2-regularizer
A simple CNN for the MNIST datasets – II – building the CNN with Keras and a first test
A simple CNN for the MNIST datasets – I – CNN basics

In the last article we harvested the fruits of previous efforts. We produced a variety of input image patterns which triggered certain maps of the innermost convolutional layer of our CNN and the filters behind it maximally. I called such pixel-patterns OIPs. (I am still careful to avoid the expression “feature”, which is used by many authors as a term describing a physical entity identified by the human brain. A connotation which I do not like …)

Although my code for creating OIPs allows for a variation of fluctuations on 4 different length scales, it proved to be hard to find OIPs for quite a lot of maps at the highest convolutional layer and for their filters. I could not always find a combination of initial random pixel fluctuations which led to loss values > 0. More precisely: I did not find a pattern by trial and error on a reasonably short timescale of a few minutes.

We already know that OIP pixel patterns for the innermost convolutional layer are a bit artificial:

  • They are idealizations; the displayed pattern may not be present in the same form in the real input images which were used during training.
  • They may contain repetitions of sub-patterns at different positions.

The images in the last articles showed us in addition that some maps at the innermost Conv-layer are sensitive to rather complicated and specific patterns on relatively large length scales.

This is to be expected as at least filters on the higher convolutional levels accumulate information on a coarse level of image coverage: Filtered information coming from large areas of the original image is in the end convolved down to grids of (3×3) neurons, i.e. onto low resolution maps. Filters at this convolutional depth can therefore require relatively large scale patterns to be passed. So, it is no real wonder that some maps do not react at all to random statistical fluctuations on very short length scales, e.g. on a two pixel scale. The activation of the respective neurons may stay at zero.

In this article I shall describe a simple method which allowed me to create OIP patterns for around half of the maps (at the 3rd Conv-level) for which I did not have any luck before. This is done by a “precursor run” which tests the reaction of a map to a large sample of input images with pattern variations on relatively long length scales.

Systematic analysis? My simple approach …

What is the basic idea? Let us assume that we cover the original image surface by a grid of squares, e.g. by 16 (7×7)-squares – where (7×7) means a square with a side length of 7 pixels. We then assign a constant grey-value to all the pixels in such a square. Instead of picking random values in the range [0, 255] we use a limited number of N distinct normalized values in an interval [0, 1].

We then investigate all combinations of distinct value distributions across the 16 squares. For each combination we construct an input image with bicubic interpolation on the original (28×28) scale and standardize the resulting distribution of pixel values. (The standardization is required for my CNN, which was trained on standardized images). Thus we produce a (huge) number of input images with smooth large scale fluctuations, which we then can present to our OIP-optimization algorithm for a test of a map’s reaction.
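
Just to make this construction concrete: below is a minimal sketch (with an assumed helper name) of how such an input image could be built with Tensorflow. The example uses a 3×3 arrangement of squares, which corresponds to the (9×9)-square coverage I concentrate on below; the class methods discussed in the next article implement this in a more elaborate way.

import numpy as np
import tensorflow as tf

# Hypothetical helper: turn a coarse grid of constant grey values into a
# standardized (1, 28, 28, 1) input tensor via bicubic interpolation
def build_fluct_image(coarse_vals, grid_shape=(3, 3)):
    coarse = np.array(coarse_vals, dtype=np.float32).reshape(1, *grid_shape, 1)
    img    = tf.image.resize(coarse, [28, 28], method="bicubic", antialias=True)
    img    = tf.image.per_image_standardization(img)   # the CNN was trained on standardized images
    return img

# Example: one particular distribution of 3 normalized values over 9 squares
candidate_img = build_fluct_image([0.2, 0.8, 0.5,
                                   0.5, 0.2, 0.8,
                                   0.8, 0.5, 0.2])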

Do we have a chance to find OIPs by systematic trials?

Now, do we have a real chance to work a bit more systematically with the described approach? Unfortunately, the answer is: Not really, at least not with PC equipment and for fluctuations at middle length scales with more than 5 distinct pixel values. The reason, in terms of Fourier series, is that the number of amplitude combinations for multiple sine waves grows exponentially with the number of discrete amplitude values chosen out of an interval.

We see the limitations already in our simplified approach. Let us assume that we pick 5 distinct and normalized pixel values in the interval ]0, 1[. Let us further assume that we cover the whole image area (of a size 28×28) with a grid of (9×9) squares, each with a constant pixel value. (We ignore the fact that 28/9 = 3.11 and trust in bicubic interpolation to take care of the 0.11 🙂 )

Even under these conditions we get already around 5**9 = 1.953 million pattern variations for which we have to test a map’s response. For each of these patterns we would have to follow at least 4 to 10 iterations (= epochs) of the optimization loop to identify the input image as a promising candidate for a full optimization or not. This means that we need 8 to 20 million full gradient calculations on a parameter space with 784 (=28×28) dimensions. Hard work even for graphic cards (at the consumer level). At smaller length scales, as induced e.g. by a (7×7) coverage grid, we go far beyond standard calculation capacities on PC hardware.

What was/is within my computational reach? If we reduced the number of distinct pixel values to 3 (e.g.: 0.2, 0.5, 0.8) and concentrated on large scale fluctuations – e.g. based on (9×9) squares – we would have around 19700 combinations to investigate.
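
The combinatorics can be written down in two lines; the count matches the estimate above:

from itertools import product

pixel_values = (0.2, 0.5, 0.8)
n_squares    = 9                                 # a 3x3 arrangement of (9x9)-squares
print(len(pixel_values)**n_squares)              # 19683 candidate patterns

# generator over all value distributions across the 9 squares
all_patterns = product(pixel_values, repeat=n_squares)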

I had to work a bit on the respective Python code to get it to run under Tensorflow 2.2.1/Cuda 10.2 with a reasonable GPU load of around 30%. With 10 epochs for an optimization trial run for each image a complete run over 20000 images takes less than 5 minutes on my old GTX960 graphics card. With a modern card, a recent CPU and some more optimization one would probably arrive at values significantly below a minute. So, systematically working with large scale fluctuations is possible on PC equipment!

Note that a coverage of the image surface with (9×9)-squares corresponds to the resolution which the (3×3)-maps of the innermost Conv-layers represent! (Including padding in the filter definitions). I assume that it is reasonable to work and test on this scale.

Selection of 8 promising candidates out of 19683

So, I added some “precursor” methods to my class My_OIP to prepare and perform systematic runs for all of the maps for which I had not found a pattern, yet. I chose the following 3 discrete pixel values: [0.2, 0.5, 0.8].

The resulting (9×9) images were rescaled with a bicubic interpolation to the MNIST size of (28×28) and afterwards standardized.

During the test of a map’s response I selected 8 candidates out of 19683 for subsequent thorough optimization runs with 1200 epochs. The selection was done by looking at the largest loss values reached after 10 optimization epochs. I then applied the procedure to all 67 maps for which I had not got an OIP, yet.

Note that we cannot be sure whether 10 optimization iterations really give us a perfect indication of the highest loss values which would be reached after a full optimization run with 600 to 1200 epochs. So, it is worthwhile to play around with the number of test epochs; the selection of the input fluctuation patterns may change.
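
The selection logic of such a precursor run can be sketched as follows. The function name and the callback for the short test optimization are assumptions for illustration only; the real class methods, which I will present in the next article, carry more parameters and bookkeeping.

import heapq

# Hypothetical precursor scan: rank candidate fluctuation patterns by the loss
# reached after a few test epochs and keep only the most promising ones
def precursor_scan(candidates, short_test_loss, n_test_epochs=10, n_keep=8):
    scored = []
    for pattern in candidates:
        img  = build_fluct_image(pattern)            # see the sketch further above
        loss = short_test_loss(img, n_test_epochs)   # only a few optimization iterations
        if loss > 0.0:                               # maps which do not react stay at zero loss
            scored.append((float(loss), pattern))
    return heapq.nlargest(n_keep, scored, key=lambda t: t[0])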

I got results then for 33 of the maps which had no OIP. I present the OIP-images below. The results cover almost 50% of the missing OIPs. So, my computational efforts were well invested.

OIPs for additional 33 maps

Below I present MNIST-related OIP-images for the maps with the following indices on the 3rd Conv-layer of my (!) CNN:
1, 3, 9, 15, 16, 29, 35, 37, 38, 44, 46, 50, 51, 55, 59, 60, 70, 78, 81, 91, 93, 94, 96, 97, 101, 104, 108, 109, 111, 112, 116, 122, 125.
This time without contrast enhancement – as I was too lazy to integrate it into the precursor code.

visualization-of-CNN-filters-and-maps-for-MNIST-3rd-Conv-layer-2-dr-moenchmeyer

CNNs are fun, aren’t they?

Unique OIPs?

An interesting question is: Are the OIPs really unique per map in the sense that we produce the same OIP-images for different fluctuations on the input images? Well, there is some unique overall form of the patterns, but there may occur translations in position and there may be differences in details. Below, I show you different derived images for some maps:

Map 46:

Map 81:

Map 116:

Map 125:

Map 127:

Conclusion

By a systematic investigation of the reaction of maps to large scale fluctuations we can find OIPs for maps where a trial and error approach is not sufficient to trigger a response of the filters. I will discuss the Python code for a related “precursor run” in my next post:

A simple CNN for the MNIST dataset – XI – Python code for filter visualization and OIP detection.

 

A simple CNN for the MNIST dataset – IX – filter visualization at a convolutional layer

In the last article I explained the code to visualize patterns which trigger a chosen feature map of a trained CNN strongly. In this series we work with the MNIST data but the basic principles can be modified, extended and applied to other typical data sets (as e.g. the Cifar set).

A simple CNN for the MNIST dataset – VIII – filters and features – Python code to visualize patterns which activate a map strongly
A simple CNN for the MNIST dataset – VII – outline of steps to visualize image patterns which trigger filter maps
A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part
A simple CNN for the MNIST dataset – V – about the difference of activation patterns and features
A simple CNN for the MNIST dataset – IV – Visualizing the activation output of convolutional layers and maps
A simple CNN for the MNIST dataset – III – inclusion of a learning-rate scheduler, momentum and a L2-regularizer
A simple CNN for the MNIST datasets – II – building the CNN with Keras and a first test
A simple CNN for the MNIST datasets – I – CNN basics

We shall now apply our visualization code to some selected maps on the last convolutional layer of our CNN structure. We run the code and do the plotting in a Jupyter environment. Creating an image of an OIP-pattern which activates a map after passing its filters takes a second at most.

Our algorithm will evolve patterns out of a seemingly initial “chaos” – but it will not do so for all combinations of statistical input data and a chosen map. We shall investigate this problem in more depth in the next articles. In the present article I first want to present you selected OIP-pattern images for very many of the 128 feature maps on the third layer of my simple CNN which I had trained on the MNIST data set for digits.

Initial Jupyter cells

I recommend opening a new Jupyter notebook for our experiments. We put the code for loading required libraries (see the last article) into a first cell. A second Jupyter cell controls the use of a GPU:

Jupyter cell 2:

gpu = True
if gpu: 
    GPU = True;  CPU = False; num_GPU = 1; num_CPU = 1
else: 
    GPU = False; CPU = True;  num_CPU = 1; num_GPU = 0

config = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=6,
                        inter_op_parallelism_threads=1, 
                        allow_soft_placement=True,
                        device_count = {'CPU' : num_CPU,
                                        'GPU' : num_GPU}, 
                        log_device_placement=True

                       )
config.gpu_options.per_process_gpu_memory_fraction=0.35
config.gpu_options.force_gpu_compatible = True
B.set_session(tf.compat.v1.Session(config=config))

In a third cell we then run the code for the My_OIP class definition which I discussed in my last article.

Loading the CNN-model

A fourth cell contains just one line which loads the CNN-model from a file:

# Load the CNN-model 
myOIP = My_OIP(cnn_model_file = 'cnn_best.h5', layer_name = 'Conv2D_3')

The output looks as follows:

You clearly see the OIP-sub-model which relates the input images to the output of the chosen CNN-layer; in our case to the output of the innermost layer “Conv2D_3”. The maps there have a very low resolution; they consist of only (3×3) nodes, but each of them covers filtered information from relatively large input image areas.
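
If you want to rebuild such a sub-model yourself, independently of the class, a minimal Keras sketch could look like the following lines; the file and layer names are the ones used above, everything else is handled internally by My_OIP.

from tensorflow.keras.models import load_model, Model

cnn_model    = load_model('cnn_best.h5')
layer_output = cnn_model.get_layer('Conv2D_3').output        # the (3x3x128) output of the innermost Conv layer
oip_submodel = Model(inputs=cnn_model.input, outputs=layer_output)
oip_submodel.summary()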

Creation of the initial image with statistical fluctuations

With the help of a fifth Jupyter cell we run the following code to build an initial image based on statistical fluctuations of the pixel values:

# build initial image 
# *******************

# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 5
fig1 = plt.figure(1)
ax1_1 = fig1.add_subplot(121)
ax1_2 = fig1.add_subplot(122)

# OIP function to setup an initial image 
initial_img = myOIP._build_initial_img_data(   strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (0.2, 0.2, 0.0, 0.0),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False)

Note that I did not use any small scale fluctuations in my example. The reason is that the map chosen later on reacts better to large scale patterns. But you are of course free to vary the parameters of the list “li_facts” for your own experiments. In my case the resulting output looked like:

The two displayed images should not show any differences for the current version of the code. Note that your initial image may look very different as our code produces random fluctuations of the pixel values. I suggest that you play around a bit with the parameters of “li_facts” and “li_dim_steps”.

Creation of an OIP-pattern out of random fluctuations

Now we are well prepared to create an image which triggers a selected CNN-map strongly. For this purpose we run the following code in yet another Jupyter cell:

# Derive a single OIP from an input image with statistical fluctuations of the pixel values 
# ******************************************************************

# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure()
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

map_index = 120         # map-index we are interested in 
n_epochs = 600          # should be divisible by 5  
n_steps = 6             # number of intermediate reports 
epsilon = 0.01          # step size for gradient correction  
conv_criterion = 2.e-4  # criterion for a potential stop of optimization 

myOIP._derive_OIP(map_index = map_index, n_epochs = n_epochs, n_steps = n_steps, 
                  epsilon = epsilon , conv_criterion = conv_criterion, b_stop_with_convergence=False )

The first statements prepare a grid of at most 8 axes-frames which we shall use to display intermediate images produced by the optimization loop. You see that I chose the map with number “120” within the selected layer “Conv2D_3”. I allowed for 600 “epochs” (= steps) of the optimization loop and requested the display of 6 intermediate images together with printed information about the associated loss values.
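
Internally, “_derive_OIP()” performs a gradient ascent on the mean activation of the selected map. The following lines are only a conceptual sketch of such a loop and not the actual code of the class: “oip_submodel” refers to a sub-model as sketched further above, “initial_img” stands for a standardized (1, 28, 28, 1) start tensor, and map_index, n_epochs and epsilon are the parameters defined in the cell above.

import tensorflow as tf

def oip_step(img_t, map_index):
    # one gradient ascent step on the mean activation of the chosen map
    with tf.GradientTape() as tape:
        tape.watch(img_t)
        layer_out = oip_submodel(img_t)                      # shape (1, 3, 3, 128)
        loss      = tf.reduce_mean(layer_out[..., map_index])
    grads = tape.gradient(loss, img_t)
    return loss, grads

img_t = tf.convert_to_tensor(initial_img)                    # standardized (1, 28, 28, 1) start image
for epoch in range(n_epochs):
    loss, grads = oip_step(img_t, map_index)
    img_t = img_t + epsilon * grads                          # epsilon-scaled gradient correction
    img_t = tf.image.per_image_standardization(img_t)        # keep the pixel value distribution standardized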

The printed output in my case was:

Tensor("Mean_10:0", shape=(), dtype=float32)
shape of oip_loss =  ()
GradienTape watch activated 
*************
Start of optimization loop
*************
Strategy: Simple initial mixture of long and short range variations
Number of epochs =  600
Epsilon =   0.01
*************
li_int =  [9, 18, 36, 72, 144, 288]

step 0 finalized
present loss_val =  7.3800406
loss_diff =  7.380040645599365

step 9 finalized
present loss_val =  16.631456
loss_diff =  1.0486774

step 18 finalized
present loss_val =  28.324467
loss_diff =  1.439024

step 36 finalized
present loss_val =  67.79664
loss_diff =  2.7197113

step 72 finalized
present loss_val =  157.14531
loss_diff =  2.3575745

step 144 finalized
present loss_val =  272.91815
loss_diff =  0.9178772

step 288 finalized
present loss_val =  319.47913
loss_diff =  0.064941406

step 599 finalized
present loss_val =  327.4784
loss_diff =  0.020477295

Note the logarithmic spacing of the intermediate steps. You can see how the loss value approaches a maximum during optimization and how it converges at the end: the relative change of the loss at step 600 amounts to only 0.02/327 ≈ 6.1e-5.
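
One way to reproduce the logarithmic spacing of the report steps shown above (the class may compute the list slightly differently):

n_epochs, n_steps = 600, 6
start  = n_epochs // 2**n_steps                  # 600 // 64 = 9
li_int = [start * 2**j for j in range(n_steps)]
print(li_int)                                    # [9, 18, 36, 72, 144, 288]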

The intermediate images produced by the algorithm are displayed below:

The systematic evolution of a pattern which I called the “Hand of MNIST” in another article is clearly visible. However, you should be aware of the following facts:

  • For a map with the number 120 your OIP-image may look completely different. Reason 1: Your map 120 of your trained CNN-model may represent a different unique filter combination. This leads to the interesting question whether two training runs of a CNN for statistically shuffled images of one and the same training set produce the same filters and the same map order. We shall investigate this problem in a forthcoming article. Reason 2: You may have started with different random fluctuations in the input image.
  • Whenever you repeat the experiment for a new input image, for which the algorithm converges, you will get a different output regarding details – even if the major over-all features of the “hand”-like pattern are reproduced.
  • For quite a number of trials you may run into a frustrating message saying that the loss remains at a value of zero and that you should try another initial input image.

The last point is due to the fact that some specific maps may not react at all to some large scale input image patterns or to input images with dominating fluctuations on small scales only. It depends …

Dependency on the input images and its fluctuations

Already in previous articles of this series I discussed the point that there may be a relatively strong dependency of our output pattern on the mixture of long range and short range fluctuations of the pixel values in the initial input image. With respect to all possible statistical input images – and there are quite many of them ( 255**784 ) – a specific input image only allows us to approach one local maximum of the loss hyperplane – one maximum out of many. And only if the map reacts to the input image at all. Below I give you some examples of input images to which my CNN’s map with number 120 does not react:

If you just play around a bit you will see that even in the case of a successful optimization the final OIP-images differ a bit and that also the eventual loss values vary. The really convincing point for me was that I did get a hand like pattern all those times when the algorithm did converge – with variations and differences, but structurally similar. I have demonstrated this point already in the article

Just for fun – the „Hand of MNIST“-feature – an example of an image pattern a CNN map reacts to

See the images published there.

Patterns that trigger the other maps of our CNN

Eventually I show you a sequence of images with OIP-patterns for the maps with the following indices:
0, 2, 4, 7, 8, 12, 17, 18, 19, 20, 21, 23, 27, 28, 30, 31, 32, 33, 34, 36, 39, 41, 42, 45, 48, 52, 54, 56, 57, 58, 61, 62, 64, 67, 68, 71, 72, 76, 80, 82, 84, 85, 86, 87, 90, 92, 102, 103, 105, 106, 107, 110, 114, 115, 117, 119, 120, 122, 123, 126, 127.
Each of the images is displayed as calculated and with contrast enhancement.



visualization-of-CNN-filters-and-maps-for-MNIST-3rd-Conv-layer-1-dr-moenchmeyer

 

So, this is basically the essence of what our CNN “thinks” about digits after a MNIST training! Just joking – there is no “thought” present in our simple static CNN, but just the application of filters which were found by a previous mathematical optimization procedure. Filters which fit to certain geometrical pixel correlations in input images …

You certainly noticed that I did not find OIP patterns for many maps, yet. I fiddled around a bit with the parameters, but got no reaction of my maps with the numbers 1, 3, 5, 6, 9, 10, 11 …. The loss stayed at zero. This does not mean that there is no pattern which triggers those maps. However, it may be a very special one for which simple fluctuations on short scales are not a good starting point for an optimization.

Therefore, it would be good to have some kind of precursor run which investigates the reaction of a map towards a sample of (long scale) fluctuations before we run a full optimization. The next article

A simple CNN for the MNIST dataset – X – filling some gaps in filter visualization

describes a strategy for a more systematic approach and shows some results. A further article will afterwards discuss the required code.

 

A simple CNN for the MNIST dataset – V – about the difference of activation patterns and features

In my last article of my introductory series on “Convolutional Neural Networks” [CNNs] I described how we can visualize the output of different maps at convolutional (or pooling) layers of a CNN.

A simple CNN for the MNIST dataset – IV – Visualizing the activation output of convolutional layers and maps
A simple CNN for the MNIST dataset – III – inclusion of a learning-rate scheduler, momentum and a L2-regularizer
A simple CNN for the MNIST datasets – II – building the CNN with Keras and a first test
A simple CNN for the MNIST datasets – I – CNN basics

We are now well equipped to look a bit closer at the maps of a trained CNN. The output of the last convolutional layer is of course of special interest: It is fed (in the form of a flattened input vector) into the MLP-part of the CNN for a classification analysis. As an MLP detects “patterns” the question arises whether we actually can see common “patterns” in the visualized maps of different images belonging to the same class. In our case we shall have a look at the maps of different MNIST images of a handwritten “4”.

Note for my readers, 20.08.2020:
This article has recently been revised and completely rewritten. It required a much more careful description of what we mean by “patterns” and “features” – and what we can say about them when looking at images of activation outputs on higher convolutional layers. I also postponed a thorough “philosophical” argumentation against a humanized usage of the term “features” to a later article in this series.

Objectives

We saw already in the last article that the images of maps get more and more abstract when we move to higher convolutional layers – i.e. layers deeper inside a CNN. At the same time we loose resolution due to intermediate pooling operations. It is quite obvious that we cannot see much of any original “features” of a handwritten “4” any longer in a (3×3)-map, whose values are produced by a sequence of complex transformation operations.

Nevertheless people talk about “feature detection” performed by CNNs – and they refer to “features” in a very concrete and descriptive way (e.g. “eyes”, “spectacles”, “bows”). How can this be? What is the connection of abstract activation patterns in low resolution maps to original “features” of an image? What is meant when CNN experts claim that neurons of higher CNN layers are allegedly able to “detect features”?

We cannot give a full answer, yet. We still need some more Python tools and programs. But what we are going to do in this article are three things:

  1. Objective 1: I will try to describe the assumed relation between maps and “features”. To start with I shall make a clear distinction between “feature” patterns in input images and patterns in and across the maps of convolutional layers. The rest of the discussion will remain a bit theoretical; but it will use the fact that convolutions at higher layers combine filtered results in specific ways to create new maps. For the time being we cannot do more. We shall actually look at visualizations of “features” in forthcoming articles of this series. Promised.
  2. Objective 2: We follow three different input images, each representing a “4”, as they get processed from one convolutional layer to the next convolutional layer of our CNN. We shall compare the resulting outputs of all feature maps at each convolutional layer.
  3. Objective 3: We try to identify common “patterns” for our different “4” images across the maps of the highest convolutional layer.

We shall visualize each “map” by an image – reflecting the values calculated by the CNN-filters for all points in each map. Note that an individual value at a map point results from adding up many weighted values provided by the maps of lower layers and feeding the result into an activation function. We speak of “activation” values or “map activations”. So our 2-nd objective is to follow the map activations of an input image up to the highest convolutional layer. An interesting question will be if the chain of complex transformation operations leads to visually detectable similarities across the map outputs for the different images of a “4”.

The eventual classification of a CNN is done by its embedded MLP which analyzes information collected at the last convolutional layer. Regarding this input to the MLP we can make the following statements:

The convolutions and pooling operations project information of relatively large parts of the original image into a representation space of very low dimensionality. Each map on the third layer provides a 3×3 value tensor, only. However, we combine the points of all (128) maps together in a flattened input vector to the MLP. This input vector consists of more nodes than the original image itself.

Thus the sequence of convolutional and pooling layers in the end transforms the original images into another representation space of somewhat higher dimensionality (9×128 vs. 28×28). This transformation is associated with the hope that in the new representation space an MLP may find patterns which allow for a better classification of the original images than a direct analysis of the image data. This explains objective 3: We try to play the MLP’s role by literally looking at the eventual map activations. We try to find out which patterns are representative for a “4” by comparing the activations of different “4” images of the MNIST dataset.
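
The dimensionality argument is easy to check with a few lines:

n_maps, map_h, map_w = 128, 3, 3
flattened_len = n_maps * map_h * map_w     # 1152 input nodes for the MLP
original_len  = 28 * 28                    # 784 pixels of a MNIST image
print(flattened_len, original_len)         # 1152 784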

Numbering the layers

To distinguish a higher Convolutional [Conv] or Pooling [Pool] layer from a lower one we give them a number “Conv_N” or “Pool_N”.

Our CNN has a sequence of the following layers (a small Keras sketch reproducing these map counts and resolutions follows the list):

  • Conv_1 (32 26×26 maps filtering the input image),
  • Pool_1 (32 13×13 maps with half the resolution due to max-pooling),
  • Conv_2 (64 11×11 maps filtering combined maps of Pool_1),
  • Pool_2 (64 5×5 maps with half the resolution due to max-pooling),
  • Conv_3 (128 3×3 maps filtering combined maps of Pool_2).
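
The sketch below reproduces these map counts and resolutions with Keras. Note that it is only a schematic reconstruction: the Conv layer names follow the convention used in this series, whereas the pooling layer names and the activation functions are assumptions, and the MLP part is omitted.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1),
                  name='Conv2D_1'),                  # 32 maps, (26x26)
    layers.MaxPooling2D((2, 2), name='Max_Pool_1'),  # 32 maps, (13x13)
    layers.Conv2D(64, (3, 3), activation='relu',
                  name='Conv2D_2'),                  # 64 maps, (11x11)
    layers.MaxPooling2D((2, 2), name='Max_Pool_2'),  # 64 maps, (5x5)
    layers.Conv2D(128, (3, 3), activation='relu',
                  name='Conv2D_3'),                  # 128 maps, (3x3)
])
model.summary()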

Patterns in maps?

We have seen already in the last article that the “patterns” which are displayed in a map of a higher layer Conv_N, with N ≥ 2, are rather abstract ones. The images of the maps at Conv_3 do not reflect figurative elements or geometrical patterns of the input images any more – at least not in a directly visible way. It does not help that the activations are probably triggered by some characteristic pixel patterns in the original images.

The convolutions and the pooling operation transform the original image information into more and more abstract representation spaces of shrinking dimensionality and resolution. This is due to the fact that the activation of a point in a map on a layer Conv_(N+1) results

  • from a specific combination of multiple maps of a layer Conv_N or Pool_N
  • and from a loss of resolution due to intermediate pooling.

It is not possible to directly guess in what way active points or activated areas within
a certain map at the third convolutional layer relate to or how they depend on “original and specific patterns in the input image”. If you do not believe me: Well, just look at the maps of the 3rd convolutional layer presented in the last article and tell me: What patterns in the initial image did these maps react to? Without some sophisticated numerical experiments you won’t be able to figure that out.

Patterns in the input image vs. patterns within and across maps

The above remarks indicate already that “patterns” may occur at different levels of consideration and abstraction. We talk about patterns in the input image and patterns within as well as across the maps of convolutional (or pooling) layers. To avoid confusion I already now want to make the following distinction:

  • (Original) input patterns [OIP]: When I speak of (original) “input patterns” I mean patterns or figurative elements in the input image. In more mathematical terms I mean patterns within the input image which correspond to a kind of fixed and strong correlation between the values of pixels distributed over a sufficiently well defined geometrical area with a certain shape. Examples could be line-like elements, bow segments, two connected circles or combined rectangles. But OIPs may be of a much more complex and abstract kind and consist of strange sub-features – and they may not reflect a real world entity or a combination of such entities. An OIP may reside at one or multiple locations in different input images.
  • Filter correlation patterns [FCP]: A CNN produces maps by filtering input data (Conv level 1) or by filtering maps of a lower layer and combining the results. By doing so a higher layer may detect patterns in the filter results of a lower layer. I call a pattern across the maps of a convolutional or pooling layer Conv_N or Pool_N as seen by Conv_(N+1) a FCP.
    Note: Because a 3×3 filter for a map of Conv_(N+1) has fixed parameters per map of the previous layer Conv_N or Pool_N, it combines multiple maps (filters) of Conv_N in a specific, unique way.

Anybody who ever worked with image processing and filters knows that combining basic filters may lead to the display of weirdly looking, combined information residing in complex regions on the original image. E.g., a certain combination of filters may emphasize diagonal lines or bows with some distance in between and suppress all other features. Therefore, it is at least plausible that a map of a higher convolutional layer can be translated back to an OIP. Meaning:

A high activation of certain or multiple points inside a map on Conv_3 may reflect some typical OIP pattern in the input image.

But: At the moment we have no direct proof for such an idea. And it is not at all obvious what kind of OIP pattern this may be for a distinct map – and whether it can directly be described in terms of basic geometrical elements of a figurative number representation in the MNIST case. By just looking at the maps of a layer and their activated points we do not get any clue about this.

If, however, activated maps somehow really correspond to OIPs then a FCP over multiple maps may be associated with a combination of distinct OIPs in an input image.

What are “features” then?

In many textbooks maps are also called “feature maps”. As far as I understand it, the authors call a “feature” what I called an OIP above. By talking about a “feature” the authors most often refer to a pattern which a CNN somehow detects or identifies in the input images.

Typical examples of “features” text-book authors often discuss and even use in illustrations are very concrete: ears, eyes, feathers, wings, a mustache, leaves, wheels, sun-glasses … I.e., a lot of authors typically name features which human beings identify as physical entities or as entities, for which we have clear conceptual ideas in our mind. I think such examples trigger ideas about CNNs which are too far-fetched and which “humanize” stupid algorithmic processes.

The arguments in favor of the detection of features in the sense of conceptual entities are typically a bit nebulous – to say the least. E.g. in a relatively new book on “Generative Deep Learning” you see a series of CNN neuron layers associated with rather dubious and unclear images of triangles etc. and at the last convolutional layer we suddenly see pretty clear sketches of a mustache, a certain hairstyle, eyes, lips, a shirt, an ear … The related text reads as follows (I retranslated the text from the German version of the book): “Layer 1 consists of neurons which activate themselves stronger, when they recognize certain elementary and basic features in the input image, e.g. borders. The output of these neurons is then forwarded to the neurons of layer 2 which can use this information to detect more complex features – and so on across the following layers.” Yeah, “neurons activate themselves” as they “recognize” features – and suddenly the neurons at a high enough layer see a “spectacle”. 🙁

I think it would probably be more correct to say the following:

The activation of a map of a high convolutional layer may indicate the appearance of some kind of (complex) pattern or a sequence of patterns within an input image, for which a specific filter combination produces relatively high values in a low dimensional output space.

Note: At our level of analyzing CNNs even this carefully formulated idea is speculation. Which we will have to prove somehow … Where we stand right now, we are unfortunately not yet ready to identify OIPs or repeated OIP sequences associated with maps. This will be the topic of forthcoming articles.

It is indeed an interesting question whether a trained CNN “detects” patterns in the sense of entities with an underlying “concept”. I would say: Certainly not. At least not pure CNNs. I think, we should be very careful with the use of the term “feature”. Based on the filtering convolutions perform we might say:

A “feature” (hopefully) is a pattern in the sense of defined geometrical pixel correlation in an image.

Not more, not less. Such a “feature” may or may not correspond to entities, which a human being could identify and for which he or she has a concept for. A feature is just a pixel correlation whose appearance triggers output neurons in high level maps.

By the way there are 2 more points regarding the idea of feature detection:

  • A feature or OIP may be located at different places in different images of something like a “5”. Due to different sizes of the depicted “5” and translational effects. So keep in mind that if maps do indeed relate to features it has to be explained how convolutional filtering can account for any translational invariance of the “detection” of a pattern in an image.
  • The concrete examples given for “features” by many authors imply that the features are more or less the same for two differently trained CNNs. Well, regarding the point that training corresponds to finding a minimum on a rather complex multidimensional hyperplane this raises the question how well defined such a (global) minimum really is and whether it or other valid side minima are approached.

Keep these points in mind until we come back to related experiments in further articles.

From “features” to FCPs on the last Conv-layer?

However and independent of how a CNN really reacts to OIPs or “features”, we should not forget the following:
In the end a CNN – more precisely its embedded MLP – reacts to FCPs on the last convolutional level. In our CNN an FCP on the third convolutional layer with specific active points across 128 (3×3)-maps can obviously tell the MLP something about the class an input image belongs to: We have proven already that the MLP part of our simple CNN guesses the class the original image belongs to with a surprisingly high accuracy. And by construction it does so by just analyzing the 128 (3×3)-activation values of the third layer – arranged into a flattened vector.

From a classification point of view it, therefore, seems to be legitimate to look out for any FCP across the maps on Conv_3. As we can visualize the maps it is reasonable to literally look for common activation patterns which different images of handwritten “4”s may trigger on the maps of the last convolutional level. The basic idea behind this experimental step is:

OIPs which are typical for images of a “4” trigger and activate certain maps or points within certain maps. Across all maps we then may see a characteristic FCP for a “4”, which not only a MLP but also we intelligent humans could identify.

Or: Multiple characteristic features in images of a “4” may trigger characteristic FCPs which in turn can be used by an MLP as indicators of the class an image belongs to. Well, let us see how far we get with this kind of theory.

Levels of “abstractions”

Let us take a MNIST image which represents something which a European would consider to be a clear representation of a “4”.

In the second image I used the “jet”-color map; i.e. dark blue indicates a low intensity value while colors from light blue to green to yellow and red indicate growing intensity values.

The first conv2D-layer (“Conv2d_1”) produces the following 32 maps of my chosen “4”-image after training:

We see that the filters, which were established during training emphasize general contours but also focus on certain image regions. However, the original “4” is still clearly visible on very many maps as the convolution does not yet reduce resolution too much.

By the way: When looking at the maps the first time I found it surprising that the application of a simple linear 3×3 filter with stride 1 could emphasize an overall oval region and suppress the pixels which formed the “4” inside of this region. A closer look revealed however that the oval region existed already in the original image data. It was emphasized by an inversion of the pixel values …

Pooling
The second Conv2D-layer already combines information of larger areas of the image – as a max (!) pooling layer was applied before. We loose resolution here. But there is a gain, too: the next convolution can filter (already filtered) information over larger areas of the original image.

But note: In other types of more advanced and modern CNNs pooling only is involved after two or more successive convolutions have happened. The direct succession of convolutions corresponds to a direct and unique combination of filters at the same level of resolution.

The 2nd convolution
As we use 64 convolutional maps on the 2nd layer level we allow for a multitude of different new convolutions. It is to be understood that each new map at the 2nd Conv layer is the result of a special unique combination of filtered information of all 32 previous maps (of Pool_1). Each of the previous 32 maps contributes through a specific unique filter and respective convolution operation to a single specific map at layer 2. Remember that we get 3×3 x 32 x 64 parameters for connecting the maps of Pool_1 to maps of Conv_2. It is this unique combination of already filtered results which enriches the analysis of the original image for more complex patterns than just the ones emphasized by the first convolutional filters.
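
The quoted parameter count can be verified quickly:

kernel_h, kernel_w = 3, 3
in_maps, out_maps  = 32, 64
n_weights = kernel_h * kernel_w * in_maps * out_maps   # 3x3 x 32 x 64 = 18432 kernel weights
n_biases  = out_maps                                   # plus one bias per new map, if biases are used
print(n_weights, n_biases)                             # 18432 64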

As the max-condition of the pooling layer was applied first and because larger areas are now analyzed we are not too astonished to see that the filters dissolve the original “4”-shape and indicate more general geometrical patterns – which actually reflect specific correlations of map patterns on layer Conv_1.

I find it interesting that our “4” triggers more horizontally oriented activations within some maps on this already abstract level than vertically oriented ones. One should not confuse these patterns with horizontal patterns in the original image. The relation of original patterns with these activations is already much more complex.

The third convolutional layer applies filters which now cover almost the full original image and combine and mix at the same time information from the already rather abstract results of layer 2 – and of all the 64 maps there in parallel.

We again see a dominance of horizontal patterns. We see clearly that on this level any reference to something like an arrangement of parallel vertical lines crossed by a horizontal line is completely lost. Instead the CNN has transformed the original distribution of black (dark grey) pixels into multiple abstract configuration spaces with 2 axes, which only coarsely reflect the original image area – namely by 3×3 maps; i.e. spaces with a very poor resolution.

What we see here are “correlations” of filtered and transformed original pixel clusters over relatively large areas. But no constructive concept of certain line arrangements.

Now, if this were the level of “FCP-patterns” which the MLP-part of the CNN uses to determine that we have a “4” then we would bet that such abstract patterns (active points on the (3×3)-grids of the 128 maps) appear in a similar way on the maps of the 3rd Conv layer for other MNIST images of a “4”, too.

Well, how similar do different map representations of “4”s look like on the 3rd Conv2D-layer?

What makes a four a four in the eyes of the CNN?

The last question corresponds to the question of what activation outputs of “4”s really have in common. Let us take 3 different images of a “4”:

The same with the “jet”-color-map:

 

Already with our eyes we see that there are similarities but also quite a lot of differences.

Different “4”-representations on the 2nd Conv-layer

Below we see comparison of the 64 maps on the 2nd Conv-layer for our three “4”-images.

If you move your head backwards and ignore details you see that certain maps are not filled in all three map-pictures. Unfortunately, this is no common feature of “4”-representations. Below you see images of the activation of a “1” and a “2”. There the same maps are not activated at all.

We also see that on this level it is still important which points within a map are activated – and not which map on average. The original shape of the underlying number is reflected in the maps’ activations.

Now, regarding the “4”-representations you may say: Well, I still recognize some common line patterns – e.g. parallel lines in a certain 75 degree angle on the 11×11 grids. Yes, but these lines are almost dissolved by the next pooling step:

Consider in addition that the next (3rd) convolution combines 3×3-data of all of the displayed 5×5-maps. Then, probably, we can hardly speak of a concept of abstract line configurations any more …

“4”-representations on the third Conv-layer

Below you find the activation outputs on the 3rd Conv2D-layer for our three different “4”-images:

When we look at details we see that prominent “features” in one map of a specific 4-image do NOT appear in a fully comparable way in the eventual convolutional maps for another image of a “4”. Some of the maps (i.e. filters after 4 transformations) produce really different results for our three images.

But there are common elements, too: I have marked only some of the points which show a significant intensity in all of the maps. But does this mean these individual common points are decisive for a classification of a “4”? We cannot be sure about it – probably it is their combination which is relevant.

So, what we ended up with is that we find some common points or some common point-relations in a few of the 128 “3×3”-maps of our three images of handwritten “4”s.

But how does this compare with maps of images of other digits? Well, look at the maps on the 3rd layer for images of a “1” and a “2” respectively:

On the 3rd layer it becomes more important which maps are not activated at all. But still the activation patterns within certain maps seem to be of importance for an eventual classification.

Conclusion

The maps of a CNN are created by an effective and guided optimization process. The results indicate the eventual detection of rather abstract patterns within and across filter maps on higher convolutional layers.

But these patterns (FCP-patterns) should not be confused with figurative elements or “features” in the original input images. Activation patterns at best vaguely remind of the original image features. At our level of analysis of a CNN we can only speculate about some correspondence of map activations with original features or patterns in an input image.

But it seems pretty clear that patterns in or across maps do not indicate any kind of constructive concept which describes how to build a “4” from underlying more elementary features in the sense of combine-able independent entities. There is no sign of a conceptual, constructive idea of how to denote a “4”. At least not in pure CNNs … Things may be a bit different in convolutional “autoencoders” (combinations of convolutional encoders and decoders), but this is another story we will come back to in this blog. Right now we would say that abstract (FCP-) patterns in maps of higher convolutional layers result from intricate filter combinations. These filters may react to certain patterns in an input image – but whether these patterns correspond to entities a human being would use to write down and thereby construct a “4” or an “8” is questionable.

We saw that the abstract information maps at the third layer of our CNN do show some common elements between the images belonging to the same class – and delicate differences with respect to activations resulting from images of other classes. However, the differences reside in details and the situation remains complicated. In the end the MLP-part of a CNN still has a lot of work to do. It must perform its classification task based on the correlation or anti-correlation of “point”-like elements in a multitude of maps – and probably even based on the activation level (i.e. output numbers) at these points.

This is seemingly very different from a conscious consideration process and weighing of alternatives which a human brain performs when it looks at sketches of numbers. When in doubt our brain tries to find traces consistent with a construction process defined for writing down a “4”, i.e. signs of a certain arrangement of straight and curved lines. A human brain, thus, would refer to arrangements of line elements, bows or circles – but not to relations of individual points in an extremely coarse and abstract representation space after some mathematical transformations. You may now argue that we do not need such a process when looking at clear representations of a “4” – we look and just know that it’s a “4”. I do not doubt that a brain may use maps, too – but I want to point out that a conscious intelligent thought process and conceptual ideas about entities involve constructive operations and not just a passive application of filters. Even from this extremely simplifying point of view CNNs are stupid though efficient algorithms. And authors writing about “features” should avoid any kind of humanized interpretation.

In the next article

A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part

we shall look at the whole procedure again, but then we compare common elements of a “4” with those of a “9” on the 3rd convolutional layer. Then the key question will be: What do “4”s have in common on the last convolutional maps which corresponding activations of “9”s do not show – and vice versa?

This will become especially interesting in cases for which a distinction was difficult for pure MLPs. You remember the confusion matrix for the MNIST dataset? See:
A simple Python program for an ANN to cover the MNIST dataset – XI – confusion matrix
We saw at that point in time that pure MLPs had some difficulties distinguishing badly written “4”s from “9”s. We will see that the better distinction abilities of CNNs in the end depend on very few point-like elements of the eventual activation on the last layer before the MLP.

Further articles in this series

A simple CNN for the MNIST dataset – VII – outline of steps to visualize image patterns which trigger filter maps
A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part

 

A simple CNN for the MNIST datasets – I – CNN basics

In a previous article series
A simple Python program for an ANN to cover the MNIST dataset – I – a starting point
we have played with a Python/Numpy code, which created a configurable and trainable “Multilayer Perceptron” [MLP] for us. See also
MLP, Numpy, TF2 – performance issues – Step I – float32, reduction of back propagation
for ongoing code and performance optimization.

An MLP program is useful to study multiple topics in Machine Learning [ML] on a basic level. However, MLPs with dense layers are certainly not at the forefront of ML technology – though they still are fundamental bricks in other more complicated architectures of “Artificial Neural Networks” [ANNs]. During my MLP experiments I became sufficiently acquainted with Python, Jupyter and matplotlib to make some curious first steps into another field of Machine Learning [ML] now: “Convolutional Neural Networks” [CNNs].

CNNs on my level as an interested IT-affine person are most of all fun. Nevertheless, I quickly found out that a somewhat systematic approach is helpful – especially if you later on want to use Tensorflow’s API and not only Keras. When I now write about some experiments I did and do I summarize my own biased insights and sometimes surprises. Probably there are other hobbyists like me out there who also struggle with elementary points in the literature and in practical experiments. Books alone are not enough … I hope to deliver some practical hints for this audience. The present articles are, however, NOT intended for ML and CNN experts. Experts will almost certainly not find anything new here.

Although I address CNN-beginners I assume that people who stumble across this article and want to follow me through some experiments have read something about CNNs already. You should know fundamentals about filters, strides and the basic principles of convolution. I shall comment on all these points but I shall not repeat the very basics. I recommend to read relevant chapters in one of the books I recommend at the end of this article. You should in addition have some knowledge regarding the basic structure and functionality of a MLP as well as “gradient descent” as an optimization technique.

The objective of this introductory mini-series is to build a first simple CNN, to apply it to the MNIST dataset and to visualize some of the elementary “features” a CNN allegedly detects in the images of handwritten digits – at least according to many authors in the field of AI. We shall use Keras (with the Tensorflow 2.2 backend and CUDA 10.2) for this purpose. And, of course, a bit of matplotlib and Python/Numpy, too. We are working with MNIST images in the first place – although CNNs can be used to analyze other types of input data. After we have covered the simple standard MNIST image set, we shall also work a bit with the so called “MNIST fashion” set.

But in this article I start with some introductory words on the structure of CNNs and the tasks of their layers. We shall use the information later on as a reference. In the second article we shall set up and test a simple version of a CNN. Further articles will then concentrate on visualizing what a trained CNN reacts to and how it modifies and analyzes the input data on its layers.

Why CNNs?

When we studied an MLP in combination with the basic MNIST dataset of handwritten digits we found that we got an improvement in accuracy (for the same setup of dense layers) when we pre-processed the data to find “clusters” in the image data before training. Such a process corresponds to detecting parts of an MNIST image with certain gray-white pixel constellations. We used Scikit-Learn’s “MiniBatchKMeans” for this purpose.
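
Just to recall the idea (this is my own minimal sketch, not the actual code of the MLP series; all parameter values below are illustrative assumptions only), such a cluster-based preprocessing with Scikit-Learn’s MiniBatchKMeans could look as follows:

# Hypothetical sketch of cluster-based preprocessing with MiniBatchKMeans
# (illustration only; parameters and data handling differ from the MLP series)
import numpy as np
from sklearn.cluster import MiniBatchKMeans

X = np.random.rand(10000, 784).astype(np.float32)    # stand-in for flattened 28x28 MNIST images
km = MiniBatchKMeans(n_clusters=50, batch_size=500, random_state=0)
cluster_features = km.fit_transform(X)                # distances to the 50 cluster centers
print(cluster_features.shape)                         # (10000, 50) - new features for the MLP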

We saw that the identification of 40 to 70 cluster areas in the images helped the MLP algorithm to analyze the MNIST data faster and better than before. Obviously, training the MLP with respect to combinations of characteristic sub-structures of the different images helped us to classify them as representations of digits. This leads directly to the following question:

What if we could combine the detection of sub-structures in an image with the training process of an ANN?

CNNs seem to be the answer! According to teaching books they have the following abilities: They are designed to detect elementary structures or patterns in image data (and other data) systematically. In addition they are able to learn something about characteristic compositions of such elementary features during training. I.e., they detect more abstract and composite features specific for the appearance of certain objects within an image. We speak of a “feature hierarchy“, which a CNN can somehow grasp and use – e.g. for classification tasks.

While an MLP must learn about pixel constellations and their relations across the whole image area, CNNs are much more flexible and even reusable. They identify and remember elementary sub-structures independent of the exact position of such features within an image. They furthermore learn “abstract concepts” about depicted objects via identifying characteristic and complex composite features on a higher level.

This simplified description of the astonishing capabilities of a CNN indicates that its training and learning is basically a two-fold process:

  • Detecting elementary structures in an image (or other structured data sets) by filtering and extracting patterns within relatively small image areas. We shall call these areas “filter areas”.
  • Constructing abstract characteristic features out of the elementary filtered structural elements. This corresponds to building a “hierarchy” of significant features for the classification of images or of distinguished objects or of the positions of such objects within an image.

Now, if we think about the MNIST digit data we understand intuitively that written digits represent some abstract concepts like certain combinations of straight vertical and horizontal line elements, bows and line crossings. Detecting certain combinations of such elementary structures would of course help to recognize and classify written digits better – especially when the detection of such feature combinations is independent of their exact position on an image.

So, CNNs seem to open up a world of wonders! Some authors of books on CNNs, GANs etc. praise the ability to react to “features” by describing them as humanly interpretable entities such as “eyes”, “feathers”, “lips”, “line segments”, etc. – i.e. in the sense of entity conceptions. Well, we shall critically review this idea, which I think is a misleading over-interpretation of the capacities of CNNs.

Filters, kernels and feature maps

An important concept behind CNNs is the systematic application of (various) filters (described and defined by so called “kernels”).

A “filter” defines a kind of masking pixel area of limited, small size (e.g. 3×3 pixels). A filter combines weighted output values at neighboring nodes of an input layer in a specific, defined way. It processes the offered information in a defined area always in the same fixed way – independent of where the filter area is exactly placed on the (bigger) image (or a processed version of it). We call a processed version of an image a “map“.

A specific type of CNN layer, called a “Convolution Layer” [Conv layer], and a related operational algorithm let a series of such small masking areas cover the complete surface of an image (or a map). The first Conv layer of a CNN filters the information of the original image via a multitude of such masking areas. The masks can be arranged overlapping, i.e. they can be shifted against each other by some distance along their axes. Think of the masking filter areas as a bunch of overlapping tiles covering the image. The shift is called stride.

The “filter” mechanism (better: the mathematical recipe) of a specific filter remains the same for all of its small masking areas covering the image. A specific filter emphasizes certain parts of the original information and suppresses other parts in a defined way. If you combine the information of all masks you get a new (filtered) representation of the image – sometimes with a smaller size than the original image (or map) the filter is applied to. This blending of the original data with a filtering mask creates a “feature map“, i.e. a filtered view onto the input data. The blending process is called “convolution” (due to the related mathematical operations).

The picture below sketches the basic principle of a 3×3-filter which is applied with a constant stride of 2 along each axis of the image:

Convolution is not as complicated as it sounds. It means: You multiply the original data values in the covered small area by factors defined in the filter’s kernel and add the resulting values up to get a distinct value at a defined position inside the map. In the given example with a stride of 2 we get a resulting feature map of 4×4 out of an original 9×9 (image or map).
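
To make the recipe concrete, here is a minimal NumPy sketch of exactly this operation (a 9×9 input, an arbitrary 3×3 kernel chosen by me, stride 2). It is an illustration only, not part of the CNN code of this series.

import numpy as np

img = np.arange(81, dtype=float).reshape(9, 9)       # a stand-in 9x9 "image"
kernel = np.array([[0., 1., 0.],
                   [1., 2., 1.],
                   [0., 1., 0.]])                    # an arbitrary 3x3 kernel (9 weights)
stride = 2
out_size = (img.shape[0] - kernel.shape[0]) // stride + 1   # (9 - 3)//2 + 1 = 4
fmap = np.zeros((out_size, out_size))
for i in range(out_size):
    for j in range(out_size):
        patch = img[i*stride:i*stride+3, j*stride:j*stride+3]
        fmap[i, j] = np.sum(patch * kernel)          # multiply element-wise and sum up
print(fmap.shape)                                    # (4, 4)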

Note that a filter need not be defined as a square. It can have a rectangular (n x m) shape with (n, m) being integers. (In principle we could also think of other tile forms such as hexagons – as long as they can seamlessly cover a defined plane. Interesting, although I have not seen a hexagon-based CNN in the literature yet.)

A filter’s kernel defines factors used in the convolution operation – one for each of the (n x m) defined points in the filter area.

Note also that filters may have a “depth” property when they shall be applied to three-dimensional data sets; we may need a depth when we cover colored images (which require 3 input layers). But let us keep to flat filters in this introductory discussion …

Now we come to a central question: Does a CNN Conv layer use just one filter? The answer is: No!

A Conv layer of a CNN allows for the construction of multiple different filters. Thus we have to deal with a whole bunch of filters per convolutional layer, e.g. 32 filters for the first convolutional layer, 64 for the second and 128 for the third. The outcome of the respective filter operations is the creation of equally many “feature maps” (one for each filter) per convolutional layer. With 32 different filters on a Conv layer we would thus build 32 maps at this layer.

This means: A Conv layer has a multitude of sub-layers, i.e. “maps”, which result from the application of different filters to previous image or map data.

You may have guessed already that the next step of abstraction is:
You can apply filters also to “maps” of previous filters, i.e. you can chain convolutions. Thus, feature maps are either connected to the image (1st Conv layer) or to the maps of a previous layer.

By using a sequence of multiple Conv layers you cover growing areas of the original image. Everything clear? Probably not …
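
If you prefer to see shapes instead of words: a small Keras sketch (my own illustration, with filter numbers chosen as in the example above) shows how each Conv layer turns the maps of its predecessor into a new set of maps. Note that the kernel values at this point would still be random, untrained weights.

import tensorflow as tf
from tensorflow.keras import layers

x_in = tf.keras.Input(shape=(28, 28, 1))      # a gray MNIST-like image
x1 = layers.Conv2D(32, (3, 3))(x_in)          # 32 filters -> 32 maps
x2 = layers.Conv2D(64, (3, 3))(x1)            # 64 filters applied to the 32 maps -> 64 maps
x3 = layers.Conv2D(128, (3, 3))(x2)           # 128 filters applied to the 64 maps -> 128 maps
print(x1.shape, x2.shape, x3.shape)           # (None, 26, 26, 32) (None, 24, 24, 64) (None, 22, 22, 128)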

Filters and their related weights are the end products of the training and optimization of a CNN!

When I first was confronted with the concept of filters, I got confused because many authors only describe the basic technical details of the “convolution” mechanism. They explain with many words how a filter and its kernel work when the filtering area is “moved” across the surface of an image. They give you pretty concrete filter examples; very popular are straight lines and crosses indicated by “ones” as factors in the filter’s kernel and zeros otherwise. And then you get an additional lecture on strides and padding. You have certainly read various related passages in books about ML and/or CNNs. A pretty good example for this “explanation” is the (otherwise interesting and helpful!) book of Deru and Ndiaye (see the bottom of this article. I refer to the introductory chapter 3.5.1 on CNN architectures.)

Well, the technical procedure is pretty easy to understand from drawings as given above – the real question that nags in your brain is:

“Where the hell do all the different filter definitions come from?”

What many authors forget is a central introductory sentence for beginners:

A filter is not given a priori. Filters (and their kernels) are systematically constructed and built up during the training of a CNN; filters are the end products of a learning and optimization process every CNN must go through.

This means: For a given problem or dataset you do not know in advance what the “filters” (and their defining kernels) will look like after training (aside from their pixel dimensions, which are already fixed by the CNN’s layer definitions). The “factors” of a filter used in the convolution operation are actually weights, whose final values are the outcome of a learning process. Just as in MLPs …

Nothing is really “moved” …

Another critical point is the somewhat misleading analogy of “moving” a filter across an image’s or map’s pixel surface. Nothing is ever actually “moved” in a CNN’s algorithm. All masks are already in place when the convolution operations are performed:

Every element of a specific e.g. 3×3 kernel corresponds to “factors” for the convolution operation. What are these factors? Again: They are nothing else but weights – in exactly the same sense as we used them in MLPs. A filter kernel represents a set of weight-values to be multiplied with original output values at the “nodes” in other layers or maps feeding input to the nodes of the present map.

Things become much clearer if you imagine a feature map as a bunch of arranged “nodes”. Each node of a map is connected to (n x m) nodes of a previous set of nodes on a map or layer delivering input to the Conv layer’s maps.

Let us look at an example. The following drawing shows the connections from “nodes” of a feature map “m” of a Conv layer L_(N+1) to nodes of two different maps “1” and “2” of
Conv layer L_N. The stride for the kernels is assumed to be just 1.

In the example the related weights are described by two different (3×3) kernels. Note, however, that each node of a specific map uses the same weights for connections to another specific map or sub-layer of the previous (input) layer. This explains the total number of weights between two sequential Conv layers – one with 32 maps and the next with 64 maps – as (64 x 32 x 9) + 64 = 18496. The 64 extra weights account for bias values per map on layer L_(N+1). (As all nodes of a map use fixed bunches of weights, we only need exactly one bias value per map).
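
You can verify the number 18496 with a few lines of Python; the Keras cross-check below uses a (13×13) map size, which I just picked as an example, since the parameter count does not depend on the map size.

n_in_maps, n_out_maps, k = 32, 64, 3
print(n_out_maps * n_in_maps * k * k + n_out_maps)   # 64*32*9 + 64 = 18496

# optional cross-check with Keras:
import tensorflow as tf
m = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), input_shape=(13, 13, 32)),
])
print(m.count_params())                              # 18496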

Note also that a stride is defined for the whole layer and not per map. Thus we enforce the same size of all maps in a layer. The convolutions between a distinct map and all maps of the previous layer L_N can be thought of as operations performed on a column of stacked filter areas at the same position – one above the other across all maps of L_N. See the illustration below:

The weights of a specific kernel work together as an ensemble: They condense the original 3×3 pixel information in the filtered area of the connected input layer or a map to a value at one node of the filter specific feature map. Please note that there is a bias weight in addition for every map; however, at all masking areas of a specific filter the very same 9 weights are applied. See the next drawing for an illustration of the weight application in our example for fictitious node and kernel values.

A CNN learns the appropriate weights (= the filter definitions) for a given bunch of images via training and is guided by the optimization of a loss function. You know these concepts already from MLPs …

The difference is that the ANN now learns about appropriate “weight ensembles” – eventually (!) working together as a defined convolutional filter between different maps of neighboring Conv (and/or sampling) layers. (For sampling see a separate paragraph below.)

The next picture illustrates the column like convolution of information across the identically positioned filter areas across multiple maps of a previous convolution layer:

The fact that the weight ensemble of a specific filter between maps is always the same explains, by the way, the relatively (!) small number of weight parameters in deep CNNs.

Intermediate summary: The weights, which represent the factors used by a specific filter operation called convolution, are defined during a training process. The filter, its kernel and the respective weight values are the outcome of a mathematical optimization process – mostly guided by gradient descent.

Activation functions

As in MLPs, each Conv layer has an associated “activation function” which is applied at each node of all maps after the resulting value of the convolution has been calculated as the node’s input. The output then feeds the connections to the next layer. In CNNs for image handling, “Relu” or “Selu” are often used as activation functions – and not the “sigmoid” which we applied in the MLP code discussed in another article series of this blog.
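
In Keras the activation function is simply passed as an argument to the Conv layer; a tiny sketch:

from tensorflow.keras import layers

conv_relu = layers.Conv2D(32, (3, 3), activation='relu')   # Relu applied to each node's convolution result
conv_selu = layers.Conv2D(32, (3, 3), activation='selu')   # Selu as an alternative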

Tensors

The above drawings indicate already that we need to arrange the data (of an image) and also the resulting map data in an organized way to be able to apply the required convolutional multiplications and summations the right way.

A colored image is basically a regular 3-dimensional structure with a width “w” (number of pixels along the x-axis), a height “h” (number of pixels along the y-axis) and a (color) depth “d” (d=3 for RGB colors).

If you represent the color value at each pixel and RGB-layer by a float you get a bunch of w x h x d float values which we can organize and index in a 3-dimensional Numpy array. Mathematically such well organized arrays with a defined number of axes (rank), a set of numbers describing the dimension along each axis (shape), a data-type, possible operations (and invariance aspects) define an abstract object called a “tensor“. Colored image data can be arranged in 3-dimensional tensors; gray colored images in a pseudo 3D-tensor which has a shape of (n, m, 1). (Keras and Tensorflow expect image data with an explicit channel axis, i.e. as (h, w, d) tensors per image; together with the batch dimension this becomes a 4D tensor.)
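
A small NumPy sketch of the shapes involved (the values are zeros, only the shapes matter here):

import numpy as np

rgb_img  = np.zeros((28, 28, 3), dtype=np.float32)       # colored image: (h, w, d) with d = 3
gray_img = np.zeros((28, 28, 1), dtype=np.float32)       # gray image as a pseudo 3D-tensor
batch    = np.zeros((500, 28, 28, 1), dtype=np.float32)  # with the batch dimension: what Keras consumes
print(rgb_img.shape, gray_img.shape, batch.shape)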

Now the important point is: The output data of Conv-layers and their feature maps also represent tensors. A bunch of 32 maps with a defined width and height defines data of a 3D-tensor.

You can imagine each value of such a tensor as the input or output given at a specific node in a layer with a 3-dimensional sub-structure. (For input data even more complex than images we would use other multi-dimensional data structures.) The weights of a filter kernel describe the connections of the nodes of a feature map on a layer L_N to a specific map of a previous layer. Weights, actually, also define elements of a tensor.

The forward- and backward-propagation operations performed throughout such a complex net during training thus correspond to sequences of tensor-operations – i.e. generalized versions of the np.dot()-product we got to know in MLPs.

You understood already that e.g. strides are important. But you do not need to care about the details – Keras and Tensorflow will do the job for you! If you want to read a bit, look at the documentation of the TF function “tf.nn.conv2d()”.
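
For the curious, a minimal direct call of tf.nn.conv2d with a random (i.e. untrained) kernel; the shapes follow TF’s NHWC convention and the numbers just repeat the 9×9 example from above.

import tensorflow as tf

img    = tf.random.normal([1, 9, 9, 1])      # one 9x9 gray "image": (batch, h, w, channels)
kernel = tf.random.normal([3, 3, 1, 1])      # (kernel_h, kernel_w, in_channels, out_channels)
fmap   = tf.nn.conv2d(img, kernel, strides=2, padding='VALID')
print(fmap.shape)                            # (1, 4, 4, 1)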

When we later on train with mini-batches of input data (i.e. batches of images) we get yet another dimension of our tensors. This batch dimension can – quite similar to MLPs – be used to optimize the tensor operations in a vectorized way. See my series on MLPs.

Chained convolutions cover growing areas of the original image

Two sections above I characterized the training of a CNN as a two-fold procedure. From the first drawing it is relatively easy to understand how we get to grasp tiny sub-structures of an image: Just use filters with small kernel sizes!

Fine, but there is probably a second question already arising in your mind:

By what mechanism does a CNN find or recognize a hierarchy of features?

One part of the answer is: Chain convolutions!

Let us assume a first convolutional layer with filters having a stride of 1 and a (3×3) kernel. We get maps with a shape of (26, 26) on this layer. The next Conv layer shall use a (4×4) kernel and also a stride of 1; then we get maps with a shape of (23, 23). A node on the second layer covers a (6×6)-area on the original image. Two neighboring nodes together cover a total area of (7×7). The individual (6×6)-areas of course overlap.

With a stride of 2 on each Conv-layer the corresponding areas on the original image are (7×7) and (11×11).

So a stack of consecutive (sequential) Conv-layers covers growing areas on the original image. This supports the detection of a pattern or feature hierarchy in the data of the input images.
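
The arithmetic behind these numbers is a standard recursion over kernel sizes and strides; a small helper function (my own sketch) reproduces the stride-1 example from above:

def covered_area(conv_layers):
    # conv_layers: list of (kernel_size, stride) tuples, first Conv layer first
    area, jump = 1, 1
    for k, s in conv_layers:
        area += (k - 1) * jump    # growth of the covered input area per layer
        jump *= s                 # distance (in input pixels) between neighboring nodes
    return area, jump

area, jump = covered_area([(3, 1), (4, 1)])
print(area)           # 6 -> a node on the 2nd Conv layer covers a 6x6 area
print(area + jump)    # 7 -> two neighboring nodes cover a 7x7 area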

However: Small strides require a relatively big number of sequential Conv-layers; with 3×3 kernels and a stride of 1 we need at least 13 layers to eventually cover (almost) the full 28×28 image area.

Even if we did not enlarge the number of maps beyond 128 with growing layer number, we would get

(32 x 9 + 32) + (64 x 32 x 9 + 64) + (128 x 64 x 9 + 128) + 10 x (128 x 128 x 9 + 128) = 320 + 18496 + 73856 + 10 x 147584 ≈ 1.568 million weight parameters

to take care of!
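
A quick Python check of this sum (the list just encodes one input channel followed by the number of maps per Conv layer):

n_maps = [1, 32, 64, 128] + [128] * 10      # input channels, then maps per Conv layer (13 layers)
params = sum(n_out * n_in * 9 + n_out       # 3x3 kernels: 9 shared weights per map pair + 1 bias per map
             for n_in, n_out in zip(n_maps[:-1], n_maps[1:]))
print(params)                               # 1568512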

This number has to be multiplied by the number of images in a mini-batch – e.g. 500. And, as we know from MLPs, we have to keep all intermediate output results in RAM to accelerate the BW propagation for the determination of gradients. Too many data and parameters for the analysis of small 28×28 images!

Big strides, however, would affect the spatial resolution of the first layers in a CNN. What is the way out?

Sub-sampling is necessary!

The famous VGG16 CNN uses pairs and triples of convolution chains in its architecture. How does such a network get control over the number of weight parameters and the RAM requirement for all the output data at all the layers?

To get information in the sense of a feature hierarchy the CNN clearly should not only look at details and related small sub-fields of the image. It must cover step-wise growing (!) areas of the original image, too. How do we combine these seemingly contradictory objectives in one training algorithm which does not lead to an exploding number of parameters, RAM and CPU time? Well, guys, this is the point where we should pay due respect to all the creative inventors of CNNs:

The answer is: We must accumulate or sample information across larger image or map areas. This is the (underestimated?) task of pooling or sampling layers.

For me this was just another confusing point in the beginning – until I grasped the real magic behind it. At first sight a layer like a typical “maxpooling” layer seems only to reduce information; see the next picture:

The drawing explains that we “sample” the information over multiple pixels e.g. by

  • either calculating an average over pixels (or map node values)
  • or by just picking the maximum value of pixels or map node values (thereby stressing the most important information)

in a certain defined sub-area of an image or map.

The shift or stride used as a default in a pooling layer is exactly the side length of the pooling area. We thus cover the image by adjacent, non-overlapping tiles! This leads to a substantial decrease of the dimensions of the resulting map! With a (2×2) pooling size by an effective factor of 2. (You can change the default pooling stride – but think about the consequences!)
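
A minimal Keras sketch of this behavior (shapes chosen to match the examples in this article; MaxPooling2D uses the pool size as its default stride):

import tensorflow as tf
from tensorflow.keras import layers

maps_26 = tf.zeros([1, 26, 26, 32])               # e.g. 32 maps of a first Conv layer
maps_11 = tf.zeros([1, 11, 11, 64])               # e.g. 64 maps of a later Conv layer
pool = layers.MaxPooling2D(pool_size=(2, 2))      # strides default to the pool size
print(pool(maps_26).shape)                        # (1, 13, 13, 32)
print(pool(maps_11).shape)                        # (1, 5, 5, 64) - some border info lost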

Of course, averaging or picking a max value corresponds to information reduction.

However: What the CNN then really does in a subsequent Conv layer is to invest further weights into combining information (features) of and across substantially larger areas of the original image! Pooling followed by an additional convolution obviously supports hierarchy building of information on different scales of image areas!

After we first have concentrated on small scale features (like with a magnifying glass) we now – in a figurative sense – make a step backwards and look at larger scales of the image again.

The trick is to evaluate large scale information by sampling layers in addition to the small scale information already extracted by the previous convolutions. Yes, we drop resolution information – but by introducing a suitable mix of convolutions and sampling layers we also force the network systematically to concentrate on combined large scale features, which in the end are really important for the image classification as a whole!

As sampling counterbalances an explosion of parameters we can invest into a growing number of feature maps with growing scales of covered image areas. I.e. we add more and new filters reacting to combinations of larger scale information.

Look at the second to last illustration: Assume that the 32 maps on layer L_N depicted there are the result of a sampling operation. The next convolution then gathers new knowledge about 64 different combinations of filtered structures over a whole vertical stack of small filter areas located at the same position on the 32 maps of layer L_N. During training this new information is condensed into 64 weight ensembles for the 64 maps on layer L_(N+1).

Resulting options for architectures

We can think of multiple ways of combining Conv layers and pooling layers. A simple recipe for small images could be

  • Layer 0: Input layer (tensor of original image data, 3 color layers or one gray layer)
  • Layer 1: Conv layer (small 3×3 kernel, stride 1, 32 filters, 32 maps (26×26), analyzes 3×3 overlapping areas)
  • Layer 2: Pooling layer (2×2 max pooling => 32 (13×13) maps,
    a node covers a 4×4 area on the original image; the pooled 2×2 tiles on the Conv maps do not overlap)
  • Layer 3: Conv layer (3×3 kernel, stride 1, 64 filters, 64 maps (11×11),
    a node covers an 8×8 overlapping area on the original image (total effective stride 2))
  • Layer 4: Pooling layer (2×2 max pooling => 64 maps (5×5),
    a node covers a 10×10 area on the original image (total effective stride 4), some border info lost)
  • Layer 5: Conv layer (3×3 kernel, stride 1, 64 filters, 64 maps (3×3),
    a node covers an 18×18 area (total effective stride 4), some border info lost)
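
For readers who prefer code over lists: a minimal Keras sketch of the layer recipe above (my own illustration; the CNN we actually build and train follows in the next article):

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu',
                  input_shape=(28, 28, 1)),          # Layer 1: 32 maps (26x26) on a gray 28x28 image
    layers.MaxPooling2D((2, 2)),                     # Layer 2: 32 maps (13x13)
    layers.Conv2D(64, (3, 3), activation='relu'),    # Layer 3: 64 maps (11x11)
    layers.MaxPooling2D((2, 2)),                     # Layer 4: 64 maps (5x5)
    layers.Conv2D(64, (3, 3), activation='relu'),    # Layer 5: 64 maps (3x3)
])
model.summary()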

The following picture illustrates the resulting successive combinations of nodes along one axis of a 28×28 image.

Note that I only indicated the connections to border nodes of the Conv filter areas.

The kernel size decides on the smallest structures we look at – especially via the first convolution. The sampling decides on the sequence of steadily growing areas which we then analyze for specific combinations of smaller structures.

Again: It is most of all the (down-)sampling which allows for an effective hierarchical information building over steadily growing image areas! Actually we do not really drop information by sampling – instead we give the network a chance to collect and code new information on a higher, more abstract level (via a whole bunch of new weights).

The big advantages of the sampling layers become obvious:

  • They reduce the number of required weights.
  • They reduce the amount of required memory – not only for weights but also for the output data, which must be saved for every layer, map and node.
  • They reduce the CPU load for FW and BW propagation.
  • They also limit the risk of overfitting as some detail information is dropped.

Of course there are many other sequences of layers one could think about. E.g., we could combine 2 to 3 Conv layers before we apply a pooling layer. Such a layer sequence is characteristic of the VGG nets.

Further aspects

Just like an MLP, a CNN represents an acyclic graph; with growing depth the maps contain fewer and fewer nodes, but the number of maps per layer increases on average.

Questions and objectives for this article series

An interesting question, which is seldom answered in introductory books, is whether two totally independent training runs for a given CNN architecture applied to the same input data will produce the same filters in the same order. We shall investigate this point in the forthcoming articles.

Another interesting point is: What does a CNN see at which convolution layer?
And even more important: What do the “features” (= basic structural elements) in an image which trigger/activate a specific filter or map, look like?

If we could look into the output of some maps we could possibly see what the filters do with the original image. And if we found a way to construct a structured image which triggers a specific filter, then we could better understand what patterns the CNN reacts to. Are these patterns really “features” in the sense of conceptual entities? Providing examples of these different types of visualizations for convolutions in a CNN is one of the objectives of this article series.

Conclusion

Today we covered a lot of “theory” on some aspects of CNNs. But we have a sufficiently solid basis regarding the structure and architecture now.

CNNs obviously have a much more complex structure than MLPs: They are deep in the sense of many sequential layers. And each convolutional layer itself has a complex structure in the form of many parallel sub-layers (feature maps). Feature maps are associated with filters, whose parameters (weights) get learned during the training. A map results from covering the original image or a map of a previous layer with small (overlapping) filtering areas.

A mix of convolution and pooling layers allows for a look at detail patterns of the image in small areas in lower layers, whilst later layers can focus on feature combinations of larger image areas. The involved filters thus allow for the “awareness” of a hierarchy of features with translational invariance.

Pooling layers are important because they help to control the amount of weight parameters – and they enhance the effectiveness of detecting the most important feature correlations on larger image scales.

All nice and convincing – but the attentive reader will ask: Where and how do we do the classification?
Try to answer this question yourself first.

In the next article we shall build a concrete CNN and apply it to the MNIST dataset of images of handwritten digits. And whilst we do it I deliver the answer to the question posed above. Stay tuned …

Literature

“Advanced Machine Learning with Python”, John Hearty, 2016, Packt Publishing – See chapter 4.

“Deep Learning mit Python und Keras”, Francois Chollet, 2018, mitp Verlag – See chapter 5.

“Hands-On Machine learning with SciKit-Learn, Keras & Tensorflow”, 2nd edition, Aurelien Geron, 2019, O’Reilly – See chapter 14.

“Deep Learning mit Tensorflow, Keras und Tensorflow.js”, Matthieu Deru, Alassane Ndiaye, 2019, Rheinwerk Verlag, Bonn – See chapter 3.

Further articles in this series

A simple CNN for the MNIST dataset – XI – Python code for filter visualization and OIP detection
A simple CNN for the MNIST dataset – X – filling some gaps in filter visualization
A simple CNN for the MNIST dataset – IX – filter visualization at a convolutional layer
A simple CNN for the MNIST dataset – VIII – filters and features – Python code to visualize patterns which activate a map strongly
A simple CNN for the MNIST dataset – VII – outline of steps to visualize image patterns which trigger filter maps
A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part
A simple CNN for the MNIST dataset – V – about the difference of activation patterns and features
A simple CNN for the MNIST dataset – IV – Visualizing the activation output of convolutional layers and maps
A simple CNN for the MNIST dataset – III – inclusion of a learning-rate scheduler, momentum and a L2-regularizer
A simple CNN for the MNIST datasets – II – building the CNN with Keras and a first test
A simple CNN for the MNIST datasets – I – CNN basics

Addendum 20.10.2020: “Feature maps” or “response maps”?

Whilst reading some more books on AI I got the impression that some authors associate the term “feature map” with the full collection of all maps related to a Conv layer. A single map is then called a “response map” or an “output channel” of the feature map. However, as most books (e.g. the book of A. Geron on Machine Learning) call a singular map a “feature map”, I stick to this usage throughout this series.