Deep Dreams of a CNN trained on MNIST data – II – some code for pattern carving

In the last article of this series

Deep Dreams of a CNN trained on MNIST data – I – a first approach based on one selected map of a convolutional layer

I presented some “Deep Dream” images of a CNN trained on MNIST data. Although one cannot produce spectacular images with a simple CNN based on gray-scale, low-resolution image data, we still got a glimpse of what the big guys do with more advanced CNNs and image data. But we have only just started …

In the forthcoming articles we shall extend our abilities step by step beyond the present level. Our next goal is to work on different length scales within the image, i.e. we go down into sub-tiles of the original image, analyze the sub-structures there and include the results in the CNN’s dreams. But first we need to understand more precisely how we apply the image manipulation techniques that “carve” some additional dream-like optical elements, which correspond to map triggering features, into an arbitrary input image presented to the CNN.

Carving: A combination of low-resolution OIP-pattern creation and a reproduction of high-resolution details

I used the term “carved” intentionally. Artists or craftsmen who create little figures or faces out of wooden blocks by a steady process of carving often say that they only bring to the surface what already was there – namely in the basic fiber structures of the wood. Well, the original algorithm for the creation of pure OIP- or feature patterns systematically amplifies some minimal correlated pixel “structures” detected in an input image with random pixel values. The values of all other pixels outside the OIP pattern are reduced; sooner or later they form a darker and rather homogeneous background. The “detection” is based on (trained) filters which react to certain pixel constellations. We already encoded an adaptation of a related method for CNN filter visualization, which Francois Chollet discusses in his book on “Deep learning with Python”, into a Python class in another article series of this blog (see the link section at the end of this article).
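The core of that method is a simple gradient ascent on a loss function which measures the (mean) activation of a chosen map. A minimal sketch of this idea in TF2/Keras terms – with an illustrative function name and a random start image, and not identical with the code of the class “My_OIP” given at the end of this article – looks like this:

import tensorflow as tf
from tensorflow.keras import models

def amplify_map_pattern(cnn_model, layer_name, map_index, n_epochs=40, epsilon=0.01):
    # sub-model from the CNN input to the maps of the chosen Conv layer
    sub_model = models.Model(cnn_model.inputs, cnn_model.get_layer(layer_name).output)
    # start from a standardized random gray image of MNIST size
    img = tf.random.uniform((1, 28, 28, 1), dtype=tf.float32)
    img = tf.image.per_image_standardization(img)
    for _ in range(n_epochs):
        with tf.GradientTape() as tape:
            tape.watch(img)
            activation = sub_model(img)
            # loss = average activation of the selected map
            loss = tf.reduce_mean(activation[0, :, :, map_index])
        grads = tape.gradient(loss, img)
        # normalize the gradient - required for a stable step size
        grads /= (tf.sqrt(tf.reduce_mean(tf.square(grads))) + 1.e-7)
        img = img + epsilon * grads                      # gradient ascent step
        img = tf.image.per_image_standardization(img)    # keep the value distribution stable
    return img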

There is, however, a major difference regarding our present context: During the image manipulations for the visualization of Deep Dreams we do not want to remove all details of the original image presented to the DeepDream algorithm. Instead, we just modify it at places where the CNN detects traces of patterns which trigger a strong response of one or multiple maps of our CNN – and keep detail information alive elsewhere. I.e., we must counterbalance the tendency of our OIP algorithm to level out information outside the OIP structures.

How do we do this, when both our present OIP algorithm and the MNIST-trained CNN are limited to a resolution of only (28×28) pixels?

Carving – amplification of certain OIP structures and intermittent replenishment of details of the original image

In this article we follow the simple strategy explained in the last article:

  • Preparation:
    • We choose a map.
    • We turn the colored original input image (about which the CNN shall dream) into a gray one.
    • We downscale the original image to a size of (28×28) by interpolation, upscale the result again (with loss) and calculate the difference to the original full-resolution image (all interpolations are done bicubically). The difference contains the information on detail structures.
  • Loop (4 times or so):
    • We apply the OIP-algorithm to the downscaled input image for a fixed number of epochs with a small epsilon.
      The result is a slight emphasis of pixel structures resembling OIP patterns.
    • We upscale the result by bicubic interpolation to the original size.
    • We re-add the difference to the original image, i.e. we supply details again.
    • We downscale the result again.
  • Final image optimization: We apply some contrast treatment to the resulting image, whose pixel value range and distribution have gradually changed in comparison to the original image.

This corresponds to a careful accentuation of an OIP-pattern detected by a certain map in the given structured pixel data of the input image (on coarse scales) – without losing all details of the initial image. As said, it resembles a process of carving coarse patterns into the original image.

Note: The above algorithm works on the whole image – a modified version working on a variety of smaller length scales is discussed in further articles.
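Before we turn to the actual notebook cells, a condensed sketch of the loop in plain Python/TF2 terms may help to clarify the interplay of the steps. The helper function “oip_step()” just stands in for one application of the OIP algorithm (e.g. a call of My_OIP._derive_OIP() with a small number of epochs); the function and variable names are illustrative and not part of the class code:

import tensorflow as tf

def carve(t_img_gray, oip_step, wk_size=560, n_iter=4):
    # t_img_gray: standardized gray image tensor of shape (1, wk_size, wk_size, 1)
    # oip_step:   a function which performs the OIP carving on a (1, 28, 28, 1) tensor
    t_28 = tf.image.resize(t_img_gray, [28, 28], method="bicubic", antialias=True)
    t_28 = tf.image.per_image_standardization(t_28)
    # detail information lost by the down-/up-scaling round trip
    t_coarse  = tf.image.resize(t_28, [wk_size, wk_size], method="bicubic", antialias=True)
    t_details = t_img_gray - tf.image.per_image_standardization(t_coarse)
    for _ in range(n_iter):
        t_28 = oip_step(t_28)                                            # carve OIP structures
        t_up = tf.image.resize(t_28, [wk_size, wk_size], method="bicubic", antialias=True)
        t_re = tf.image.per_image_standardization(t_up + t_details)     # re-add details
        t_28 = tf.image.resize(t_re, [28, 28], method="bicubic", antialias=True)
    return t_re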

What does this strategy look like in Python statements for Jupyter cells?

Start with loading required modules and invoking your graphics card

We have to load Python modules to support our CNN, the OIP algorithm and some plotting. In a second step we need to invoke the graphics card. This is done in two Jupyter cells (1 and 2). Their content was already described in the article

A simple CNN for the MNIST dataset – XI – Python code for filter visualization and OIP detection

of a parallel series in this blog. These initial steps are followed by an instantiation of a class “My_OIP” containing all required methods. The __init__() function of this class restores a Keras model of our basic CNN from a h5-file:

Jupyter cell 3

%matplotlib inline

# Load the CNN-model 
#  ~~~~~~~~~~~~~~~~~
imp.reload(myOIP)
try:
    with tf.device("/GPU:0"):
        MyOIP = myOIP.My_OIP(cnn_model_file = 'cnn_best.h5', layer_name = 'Conv2D_3')

except SystemExit:
    print("stopped")
    

Turn the original image into a gray one, downscale it and calculate difference-tensors for coarsely re-enlarged versions

In the next Jupyter cell we deal with the following initial image:

We choose a method (out of 3) to turn it into a gray scaled image. Afterwards we calculate the difference between an image tensor of the original (gray) image and a coarse, but softly rescaled image derived from a (28×28)-pixel version.

Jupyter cell 4

# Down-, gray- and Re-scale images, determine correction differences  
# *******************************************************************
import PIL
from PIL import Image, ImageOps

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 15
fig_size[1] = 30
fig_1 = plt.figure(1)
ax1_1 = fig_1.add_subplot(531)
ax1_2 = fig_1.add_subplot(532)
ax1_3 = fig_1.add_subplot(533)
ax2_1 = fig_1.add_subplot(534)
ax2_2 = fig_1.add_subplot(535)
ax2_3 = fig_1.add_subplot(536)
ax3_1 = fig_1.add_subplot(537)
ax3_2 = fig_1.add_subplot(538)
ax3_3 = fig_1.add_subplot(539)
ax4_1 = fig_1.add_subplot(5,3,10)
ax4_2 = fig_1.add_subplot(5,3,11)
ax4_3 = fig_1.add_subplot(5,3,12)

# Parameters 
# ************
# size of the image to work with 
img_wk_size = 560 
# method to turn the original picture into a gray scaled one 
gray_scaling_method = 2

# bring the orig img down to (560, 560) 
# ***************************************
imgvc    = Image.open("rosen_orig_farbe.jpg")
imgvc_g  = imgvc.convert("L")

# downsize with PIL 
# ******************
imgvc_wk_size    = imgvc.resize((img_wk_size,img_wk_size), resample=PIL.Image.BICUBIC)
imgvc_wk_size_g  = imgvc_wk_size.convert("L")
imgvc_28      = imgvc.resize(  (28,28), resample=PIL.Image.BICUBIC)
imgvc_g_28    = imgvc_g.resize((28,28), resample=PIL.Image.BICUBIC)

# Change to np array
ay_picc = np.array(imgvc_wk_size)

# Display orig and wk_size images  
# *********************************
ax1_1.imshow(imgvc)
ax1_2.imshow(imgvc_wk_size)
ax1_3.imshow(imgvc_g, cmap=plt.cm.gray)

# Apply 3 different methods to turn the image into a gray one 
# **************************************************************
#Red * 0.3 + Green * 0.59 + Blue * 0.11
#Red * 0.2126 + Green * 0.7152 + Blue * 0.0722
#Red * 0.299 + Green * 0.587 + Blue * 0.114

ay_picc_g1 = ( 0.3    * ay_picc[:,:,0] + 0.59   * ay_picc[:,:,1] + 0.11   * ay_picc[:,:,2] )  
ay_picc_g2 = ( 0.299  * ay_picc[:,:,0] + 0.587  * ay_picc[:,:,1] + 0.114  * ay_picc[:,:,2] )  
ay_picc_g3 = ( 0.2126 * ay_picc[:,:,0] + 0.7152 * ay_picc[:,:,1] + 0.0722 * ay_picc[:,:,2] )  

ay_picc_g1 = ay_picc_g1.astype('float32') 
ay_picc_g2 = ay_picc_g2.astype('float32') 
ay_picc_g3 = ay_picc_g3.astype('float32') 

# Prepare tensors
# *****************
t_picc_g1 = ay_picc_g1.reshape((1, img_wk_size, img_wk_size, 1))
t_picc_g2 = ay_picc_g2.reshape((1, img_wk_size, img_wk_size, 1))
t_picc_g3 = ay_picc_g3.reshape((1, img_wk_size, img_wk_size, 1))

t_picc_g1 = tf.image.per_image_standardization(t_picc_g1)
t_picc_g2 = tf.image.per_image_standardization(t_picc_g2)
t_picc_g3 = tf.image.per_image_standardization(t_picc_g3)

# Display gray-tensor variants    
# ****************************
ax2_1.imshow(t_picc_g1[0,:,:,0], cmap=plt.cm.gray)
ax2_2.imshow(t_picc_g2[0,:,:,0], cmap=plt.cm.gray)
ax2_3.imshow(t_picc_g3[0,:,:,0], cmap=plt.cm.gray)

# choose one gray img
# ******************
if   gray_scaling_method == 1:
    t_picc_g = t_picc_g1
elif gray_scaling_method == 2:
    t_picc_g = t_picc_g2
else:
    t_picc_g = t_picc_g3

# downsize to (28,28) and standardize
# *****************************************
t_picc_g_28_scd  = tf.image.resize(t_picc_g, [28,28], method="bicubic", antialias=True)
t_picc_g_28_std  = tf.image.per_image_standardization(t_picc_g_28_scd)
t_picc_g_28      = t_picc_g_28_std

# display 
ax3_1.imshow(imgvc_g_28, cmap=plt.cm.gray)
ax3_2.imshow(t_picc_g_28_scd[0,:,:,0], cmap=plt.cm.gray)
ax3_3.imshow(t_picc_g_28_std[0,:,:,0], cmap=plt.cm.gray)

# Upscale and get correction values  
# **********************************
t_picc_g_wk_size_scd     = tf.image.resize(t_picc_g_28, [img_wk_size,img_wk_size], method="bicubic", antialias=True)
t_picc_g_wk_size_scd_std = tf.image.per_image_standardization(t_picc_g_wk_size_scd)
t_picc_g_wk_size_corr    =  t_picc_g - t_picc_g_wk_size_scd_std   

# Correct and display 
# **********************
t_picc_g_wk_size_re   = t_picc_g_wk_size_scd_std + t_picc_g_wk_size_corr 
# Display
ax4_1.imshow(t_picc_g_wk_size_scd_std[0,:,:,0], cmap=plt.cm.gray)
ax4_2.imshow(t_picc_g[0,:,:,0],                 cmap=plt.cm.gray)
ax4_3.imshow(t_picc_g_wk_size_re[0,:,:,0],      cmap=plt.cm.gray)

 
The code is straightforward – a lot of visualization steps will later on just show tiny differences. For production you may shorten the code significantly.

In a first step we downscale the image to a work-size of (560×560) px with the help of PIL modules. (Note that 560 can be divided by 2, 4, 8, 16, 28). We then test out three methods to turn the image into a gray one. This step is required, because our MNIST-oriented CNN is trained for gray images only. We reshape the image data arrays to prepare for tensor-operations and standardize with the help of Tensorflow – and produce image tensors on the fly. Afterwards, we downscale to a size of (28×28) pixels; this is the size of the MNIST images for which the CNN has been trained. We upscale to the work-size again, compare it to the original detailed image and calculate respective differences of the tensor-components. These differences will later be used to re-add details to our manipulated images.
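A side remark on the standardization steps: tf.image.per_image_standardization() linearly rescales each image to zero mean and (approximately) unit variance – which corresponds to the value distribution our CNN was trained on. A rough manual equivalent for a single image tensor would be the following sketch (details may differ slightly between TF versions):

import tensorflow as tf

def standardize(t_img):
    # zero mean, unit variance per image; TF additionally limits the divisor
    # to at least 1/sqrt(number of pixels) to protect against uniform images
    mean = tf.reduce_mean(t_img)
    std  = tf.math.reduce_std(t_img)
    n    = tf.cast(tf.size(t_img), tf.float32)
    return (t_img - mean) / tf.maximum(std, 1.0 / tf.sqrt(n))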

The following picture displays the resulting output:

Note: The last two images should not show any difference. The middle row and the first image of the last row display how coarse the (28×28) resolution really is and how much information we have lost after upscaling to (560×560) px.
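If you want to verify the first statement numerically, a quick check directly after Jupyter cell 4 could look like this:

# Sanity check: by construction t_picc_g_wk_size_re = t_picc_g_wk_size_scd_std + (t_picc_g - t_picc_g_wk_size_scd_std),
# so the difference to the original gray tensor should vanish up to floating point noise
diff = tf.reduce_max(tf.abs(t_picc_g - t_picc_g_wk_size_re))
tf.print("max abs difference = ", diff)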

OIP-Carving and detail replenishment

We now follow the algorithm described above. This leads to a main loop with steps for downscaling, OIP-carving on the (28×28)-scale, upscaling and a supplementation of original details:

Jupyter cell 5

# **************************************************+
# OIP analysis (to be used after the previous cell)
# **************************************************+

# Prepare plotting 
# -------------------
#interactive plotting 
#%matplotlib notebook 
#plt.ion()

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 15
fig_size[1] = 25
fig_2 = plt.figure(2)
ax2_1_1 = fig_2.add_subplot(531)
ax2_1_2 = fig_2.add_subplot(532)
ax2_1_3 = fig_2.add_subplot(533)
ax2_2_1 = fig_2.add_subplot(534)
ax2_2_2 = fig_2.add_subplot(535)
ax2_2_3 = fig_2.add_subplot(536)
ax2_3_1 = fig_2.add_subplot(537)
ax2_3_2 = fig_2.add_subplot(538)
ax2_3_3 = fig_2.add_subplot(539)
ax2_4_1 = fig_2.add_subplot(5,3,10)
ax2_4_2 = fig_2.add_subplot(5,3,11)
ax2_4_3 = fig_2.add_subplot(5,3,12)

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_3 = plt.figure(3)
axa_1 = fig_3.add_subplot(241)
axa_2 = fig_3.add_subplot(242)
axa_3 = fig_3.add_subplot(243)
axa_4 = fig_3.add_subplot(244)
axa_5 = fig_3.add_subplot(245)
axa_6 = fig_3.add_subplot(246)
axa_7 = fig_3.add_subplot(247)
axa_8 = fig_3.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 4
fig_h = plt.figure(50)
ax_h_1 = fig_h.add_subplot(141)
ax_h_2 = fig_h.add_subplot(142)
ax_h_3 = fig_h.add_subplot(143)
ax_h_4 = fig_h.add_subplot(144)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

# list for img tensors 
li_t_imgs = []

# Parameters for the OIP run
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
map_index = 56         # map-index we are interested in 
# map_index = -1         # for this value the whole layer is taken; the loss function is averaged over the maps
n_epochs  = 20          # should be divisible by 5  
n_steps   = 6           # number of intermediate reports 
epsilon   = 0.01        # step size for gradient correction  
conv_criterion = 2.e-4  # criterion for a potential stop of optimization 

n_iter    = 5           # number of iterations for down-, upscaling and supplementation of details
b_plot_intermediate = False  # whether to plot intermediate results (requires interactive plotting via plt.ion())

# Initial image
# -----------------
MyOIP._initial_inp_img_data = t_picc_g_28

# Main iteration loop 
# ***********************
for i in range(n_iter): 

    # Perform the OIP analysis 
    MyOIP._derive_OIP(map_index = map_index, 
                      n_epochs = n_epochs, n_steps = n_steps, 
                      epsilon = epsilon, conv_criterion = conv_criterion, 
                      b_stop_with_convergence=False,
                      b_print = False,
                      li_axa = li_axa,
                      ax1_1 = ax2_1_1, ax1_2 = ax2_1_2,
                      centre_move = 0.33, fact = 1.0)

    t_oip_c_g_28  = MyOIP._inp_img_data
    ay_oip_c_g_28 = t_oip_c_g_28[0,:,:,0].numpy()

    ay_oip_c_g_28_cont = MyOIP._transform_tensor_to_img(T_img=t_oip_c_g_28[0,:,:,0], centre_move=0.33, fact=1.0)
    ax2_1_3.imshow(ay_oip_c_g_28_cont, cmap=plt.cm.gray)

    # Rescale to the working size (560 px)
    t_oip_c_g_wk_size  = tf.image.resize(t_oip_c_g_28, [img_wk_size,img_wk_size], method="bicubic", antialias=True)
    #t_oip_c_g_500 = tf.image.per_image_standardization(t_oip_c_g_500)

    t_oip_c_g_wk_size_re     = t_oip_c_g_wk_size + t_picc_g_wk_size_corr 
    t_oip_c_g_wk_size_re_std = tf.image.per_image_standardization(t_oip_c_g_wk_size_re)

    # remember the img tensors of the iterations    
    li_t_imgs.append(t_oip_c_g_wk_size_re_std)
    
    # do not change the gray spectrum for iterations 
    MyOIP._initial_inp_img_data = tf.image.resize(t_oip_c_g_wk_size_re_std, [28,28], method="bicubic", antialias=True)
    
    # Intermediate plotting - if required and if plt.ion() is set 
    # ------------------------
    if b_plot_intermediate and i < n_iter-1: 
        ay_oip_c_g_wk_size = t_oip_c_g_wk_size[0,:,:,0].numpy()
        #display OIPs
        ax2_2_1.imshow(t_picc_g_28[0,:,:,0], cmap=plt.cm.gray)
        ax2_2_2.imshow(ay_oip_c_g_28,  cmap=plt.cm.gray)
        ax2_2_3.imshow(ay_oip_c_g_wk_size, cmap=plt.cm.gray)

        ax2_3_1.imshow(t_oip_c_g_wk_size[0,:,:,0], cmap=plt.cm.gray)
        ax2_3_2.imshow(t_oip_c_g_wk_size_re[0,:,:,0], cmap=plt.cm.gray)
        ax2_3_3.imshow(t_picc_g[0,:,:,0], cmap=plt.cm.gray)
    
        # modify the gray spectrum for plotting - cut off by clipping to limit the extent of the gray color map  
        min1 = tf.reduce_min(t_picc_g)
        min2 = tf.reduce_min(t_oip_c_g_wk_size_re)
        max1 = tf.reduce_max(t_picc_g)
        max2 = tf.reduce_max(t_oip_c_g_wk_size_re)
        # tf.print("min1 = ", min1, " :: min2 = ", min2, " :: max1 = ", max1, " :: max2 = ", max2 )

        #fac1 = min2/min1 * 0.95
        #fac2 = 1.0*fac1
        fac2 = 1.18
        fac3 = 0.92
        cut_lft = fac3*min1
        cut_rht = fac3*max1
        cut_rht = fac3*abs(min1)
        tf.print("cut_lft = ", cut_lft, "  cut_rht = ", cut_rht)

        t_oip_c_g_wk_size_re_std_plt = fac2 * t_oip_c_g_wk_size_re_std 
        t_oip_c_g_wk_size_re_std_plt = tf.clip_by_value(t_oip_c_g_wk_size_re_std_plt, cut_lft, cut_rht)
    
        ax2_4_1.imshow(t_oip_c_g_wk_size[0,:,:,0], cmap=plt.cm.gray)
        ax2_4_2.imshow(t_oip_c_g_wk_size_re_std[0,:,:,0], cmap=plt.cm.gray)
        ax2_4_3.imshow(t_oip_c_g_wk_size_re_std_plt[0,:,:,0], cmap=plt.cm.gray)


# Eventual plotting 
# ********************
ay_oip_c_g_wk_size = t_oip_c_g_wk_size[0,:,:,0].numpy()

#display OIPs
ax2_2_1.imshow(t_picc_g_28[0,:,:,0], cmap=plt.cm.gray)
ax2_2_2.imshow(ay_oip_c_g_28,  cmap=plt.cm.gray)
ax2_2_3.imshow(ay_oip_c_g_wk_size, cmap=plt.cm.gray)

ax2_3_1.imshow(t_oip_c_g_wk_size[0,:,:,0], cmap=plt.cm.gray)
ax2_3_2.imshow(t_oip_c_g_wk_size_re[0,:,:,0], cmap=plt.cm.gray)
ax2_3_3.imshow(t_picc_g[0,:,:,0], cmap=plt.cm.gray)

# modify the gray spectrum for plotting - cut off by clipping to limit the extent of the gray color map  
min1 = tf.reduce_min(t_picc_g)
min2 = tf.reduce_min(t_oip_c_g_wk_size_re)
max1 = tf.reduce_max(t_picc_g)
max2 = tf.reduce_max(t_oip_c_g_wk_size_re)
# tf.print("min1 = ", min1, " :: min2 = ", min2, " :: max1 = ", max1, " :: max2 = ", max2 )

#fac1 = min2/min1 * 0.95
#fac2 = 1.0*fac1
fac2 = 1.10
fac3 = 0.92
cut_lft = fac3*min1
cut_rht = fac3*max1
#cut_rht = fac3*abs(min1)
tf.print("cut_lft = ", cut_lft, "  cut_rht = ", cut_rht)

t_oip_c_g_wk_size_re_std_plt = fac2 * t_oip_c_g_wk_size_re_std 
t_oip_c_g_wk_size_re_std_plt = tf.clip_by_value(t_oip_c_g_wk_size_re_std_plt, cut_lft, cut_rht)

ax2_4_1.imshow(t_oip_c_g_wk_size[0,:,:,0], cmap=plt.cm.gray)
ax2_4_2.imshow(t_oip_c_g_wk_size_re_std[0,:,:,0], cmap=plt.cm.gray)
ax2_4_3.imshow(t_oip_c_g_wk_size_re_std_plt[0,:,:,0], cmap=plt.cm.gray)
    
# Histogram analysis and plotting 
# *********************************    
# prepare the histograms of gray values for the saved image tensors     
ay0 = np.sort( t_picc_g[0, :, :, 0].numpy().ravel() )
ay1 = np.sort( li_t_imgs[0][0,:,:,0].numpy().ravel() )
ay2 = np.sort( li_t_imgs[1][0,:,:,0].numpy().ravel() )
ay3 = np.sort( li_t_imgs[2][0,:,:,0].numpy().ravel() )
ay4 = np.sort( li_t_imgs[3][0,:,:,0].numpy().ravel() )
#print(ay1)

nx, binsx, patchesx = ax_h_1.hist(ay0, bins='auto')
nx, binsx, patchesx = ax_h_2.hist(ay2, bins='auto')
nx, binsx, patchesx = ax_h_3.hist(ay3, bins='auto')
nx, binsx, patchesx = ax_h_4.hist(ay4, bins='auto')

 

We first prepare some Matplotlib axes frames. Note that the references should be clearly distinguishable and unique across the cells of the Jupyter notebook.

Parameters
Then we define parameters for a DeepDream run. [The attentive reader has noticed that I now allow for a value of map_index = -1. This corresponds to working with a loss function calculated and averaged over all maps of a layer. The changed code of the class My_OIP is given at the end of the article. We come back to this option in a later post of this series. For the time being we just work with a chosen single map and its cost function.]

The most interesting parameters are “n_epochs” and “n_iter“. These parameters help to tune a careful balance between OIP-“carving” and “detail replenishment”. These are parameters you absolutely should experiment with!

Iterations
We first pick the (28×28) gray image prepared in the previous Jupyter cell as the initial image presented to the CNN’s input layer. We then start an iteration by invoking the “My_OIP” class discussed in the article named above. (The parameters “centre_move = 0.33” and “fact = 1.0” refer to an image manipulation, namely a contrast enhancement.)
After OIP-carving over the chosen limited number of epochs we apply a special treatment to the data of the OIP-image derived so far. In a side step we enhance the contrast for displaying the intermediate results. We then rescale by bicubic interpolation from 28×28 to the working size 560×560, add the prepared correction data and standardize. We add the results to a list for a later histogram analysis.

The preparation of plotting (of the intermediate and final images) comprises a step to reduce the spectrum of pixel values. This step is required as the iterative image manipulation leads to an extension of the maximum and the minimum values, i.e. the flanks of the value spectrum, by up to a factor of 1.7 – although we only find a few pixels in that extended range. If we applied a standard gray-scale color map to the whole wider range of values then we would get a strong reduction in contrast – for the sake of including some extreme pixel values. The remedy is to cut the spectrum off at the original maximum/minimum value.

The histogram analysis performed at the end in addition shows that the central spectrum has become a bit narrower – we can compensate for this with the factors fac2 and fac3. Play around with them.

Results of the carving process

Results for map 57 are displayed in the following image:

The effect of the “carving” is clearly visible as an overlaid, ghost-like pattern apparition. Standardization of the final images makes no big difference – however, the discussed compensation for the widened spectrum of the final image is important.

The histogram analysis shows the flank widening and the narrowing of the central core of the pixel value spectrum:

The following image allows a comparison of the original gray image with the carved image for the parameters [n_epochs=20, n_iter=5]:

Conclusion

Deep Dreams are based on special image analysis and manipulation techniques. We adapted some of the techniques discussed in the literature for high-resolution images and advanced CNNs in a special way and for a single deep-layer map of a low-resolution CNN. We called the required intermittent combination of down- and upscaling, enhancing OIP-patterns and re-adding details “carving”. This helped us to create a “dream” of a CNN – which was trained with a bunch of low-resolution MNIST images only; the dream can be based on arbitrary quadratic input images (here: an image of a bunch of roses).

The technique presented in this article can be extended to include a pattern analysis on various sub-parts and length-scales of the original image. In the next blog post we shall have a look at some of the required programming steps.

My_OIP class – V0.65

I have modified the My_OIP code a bit since the last articles on CNNs. You find the new version below.

 
'''
Module to create OIPs as visualizations of filters of a simple CNN for the MNIST data set
@version: 0.65, 12.12.2020
@change: V0.5:  was based on version 0.4 which was originally created in Jupyter cells
@change: V0.6: General revision of class "my_OIP" and its methods
@change: V0.6: Changes to the documentation 
@change: V0.65: Added intermediate plotting of selected input fluctuation patterns 
@attention: General status: For experimental purposes only! 
@requires: A full CNN trained on MNIST data 
@requires: A Keras model of the CNN and weight data saved in a h5-file, e.g. "cnn_MNIST_best.h5". 
           This file must be placed in the main directory of the Jupyter notebooks.
@requires: A Jupyter environment - from where the class My_OIP is called and where plotting takes place 
@note: The description to the interface to the class via the __init__()-method may be incomplete
@note: The use of prefixes li_ and ay_ is not yet consistent. ay_ should indicate numpy arrays, li_ instead normal Python lists
@warning: This version has not been tested outside a Jupyter environment - plotting in GTK/Qt-environment may require substantial changes 
@status: Under major development with frequent changes 
@author: Dr. Ralph Mönchmeyer
@copyright: Simplified BSD License, 12.12.2020. Copyright (c) 2020, Dr. Ralph Moenchmeyer, Augsburg, Germany

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''

# Modules to be imported - these libs must be imported in a Jupyter cell nevertheless 
# ~~~~~~~~~~~~~~~~~~~~~~~~
import time 
import sys 
import math
import os 
from os import path as path

import numpy as np
from numpy import save  # used to export intermediate data
from numpy import load

import scipy
from sklearn.preprocessing import StandardScaler
from itertools import product 

import tensorflow as tf
from tensorflow import keras as K
from tensorflow.python.keras import backend as B  # this is the only version compatible with TF 1 compat statements
#from tensorflow.keras import backend as B 
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from tensorflow.keras import optimizers
from tensorflow.keras.optimizers import schedules
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist

from tensorflow.python.client import device_lib

import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import matplotlib.patches as mpat 

from IPython.core.tests.simpleerr import sysexit

class My_OIP:
    '''
    @summary: This class allows for the creation and the display of OIP-patterns, 
              to which a selected map of a CNN-model and related filters react maximally
    
    @version: Version 0.6, 10.10.2020
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    @change: Revised methods 
    @requires: In the present version the class My_OIP requires: 
                * a CNN-model which works with standardized (!) input images, size 28x28,
                * a CNN-Modell which was trained on MNIST digit data,
                * exactly 4 length scales for random data fluctuations are used to compose initial statistical image data 
                  (the length scales should roughly have a factor of 2 between them) 
                * Assumption : exactly 1 input image and not a batch of images is assumed in various methods 
    
    @note: Main Functions:     
    0) _init__()
    1) _load_cnn_model()             => load cnn-model
    2) _build_oip_model()            => build an oip-model to create OIP-images
    3) _setup_gradient_tape_and_iterate_function()        
                                    => Implements TF2 GradientTape to watch input data for eager gradient calculation
                                    => Creates a convenience function by the help of Keras to iterate and optimize the OIP-adjustments
    4) _oip_strat_0_optimization_loop():
                                     => Method implementing a simple strategy to create OIP-images, 
                                        based on superposition of random data on long range data (compared to 28 px) 
                                        The optimization uses "gradient ascent" to get an optimum output of the selected Conv map 
    5) _derive_OIP():                => Method used to start the creation of an OIP-image for a chosen map 
                                        - based on an input image with statistical random data 
    6) _derive_OIP_for_Prec_Img():   => Method used to start the creation of an OIP-image for a chosen map 
                                       - based on an input image which was derived from a PRECURSOR run, 
                                       which tests the reaction of the map to large scale fluctuations 
                                        
    7) _build_initial_img_data():    => Builds an input image based on random data for fluctuations on 4 length scales 
    8) _build_initial_img_from_prec():    
                                     => Reconstruct an input image based on saved random data for long range fluctuations 
    9) _prepare_precursor():         => Prepare a _precursor run by setting up TF2 GradientTape and the _iterate()-function 
    10) _precursor():                => Precursor run which systematically tests the reaction of a selected convolutional map 
                                        to long range fluctuations based on a (3x3)-grid upscaled to the real image size  
    11) _display_precursor_imgs():   => A method which plots up to 8 selected precursor images with fluctuations,
                                        which triggered a maximum map reaction 
    12) _transform_tensor_to_img():  => A method which allows to transform tensor data of a standardized (!) image to standard image data 
                                        with (gray) pixel values in [0, 255]. Parameters allow for a contrast enhancement. 
    
    Usage hints 
    ~~~~~~~~~~~
    @note: If maps of a new convolutional layer are to be investigated then method _build_oip_model(layer_name) has to be rerun 
           with the layer's name as input parameter
    '''
    
    # Method to initialize an instantiation object 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def __init__(self, cnn_model_file = 'cnn_MNIST_best.h5', 
                 layer_name = 'Conv2D_3', 
                 map_index = 0, 
                 img_dim = 28, 
                 b_build_oip_model = True  
                ): 
        '''
        @summary: Initialization of an object instance - read in a CNN model, build an OIP-Model 
        @note: Input: 
        ~~~~~~~~~~~~
        @param cnn_model_file:  Name of a file containing a fully trained CNN-model; 
                                the model can later be overwritten by self._load_cnn_model()
        @param layer_name:      We can define a layer name, which we are interested in,  already when starting;
                                the layer can later be overwritten by self._build_oip_model()
        @param map_index:       We can define a map, which we are interested in, already when starting;
                                A map-index is NOT required for building the OIP-model, but for the GradientTape-object 
        @param img_dim:         The dimension of the assumed quadratic images (28 for MNIST)

        @note: Major internal variables:
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _cnn_model:        A reference to the CNN model object
        @ivar _layer_name:       The name of convolutional layer 
                                 (can be overwritten by method _build_oip_model() ) 
        @ivar _map_index:        Index of the map in the chosen layer's output array 
                                 (can later be overwritten by other methods) 
        @ivar _r_cnn_inputs:     A reference to the input tensor of the CNN model 
                                 Could be a batch of images; but in this class only 1 image is assumed
        @ivar _layer_output:     Tensor with all maps of a certain layer    
        @ivar _oip_submodel:     A new model connecting the input of the CNN-model with a certain map's (!) output
        @ivar _tape:             An instance of TF2's GradientTape-object 
                                    Watches input, output, loss of a model 
                                    and calculates gradients in TF2 eager mode 
        @ivar _r_oip_outputs:    Reference to the output of the new OIP-model = map-activation
        @ivar _r_oip_grads:      Reference to gradient tensors for the new OIP-model (output dependency on input image pixels)
        @ivar _r_oip_loss:       Reference to a loss defined on the OIP-output - i.e. on the activation values of the map's nodes;
                                 Normally chosen to be an average of the nodes' activations 
                                 The loss defines a hyperplane on the (28x28)-dim representation space of the input image pixel values  
        @ivar _val_oip_loss:     Loss value for a certain input image 
        @ivar _iterate:          Reference to a Keras backend function which invokes the new OIP-model for a given image
                                 and calculates both loss and gradient values (in TF2 eager mode) 
                                 This is the function to be used in the optimization loop for OIPs
        
        @note: Internal Parameters controlling the optimization loop:
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _oip_strategy:        0, 1 - There are two strategies to evolve OIP patterns out of statistical data 
                                    - only the first strategy is supported in this version 
                                    Both strategies can be combined with a precursor calculation 
                                    0: Simple superposition of fluctuations at different length scales
                                    1: NOT YET SUPPORTED 
                                    Evolution over partially evolved images based on longer scale variations 
                                    enriched with fluctuations on shorter length scales 

        @ivar _ay_epochs:           A list of 4 optimization epochs to be used whilst 
                                    evolving the img data via strategy 1 and intermediate images 
        @ivar _n_epochs:            Number of optimization epochs to be used with strategy 0 
        @ivar _n_steps:             Defines at how many intermediate points we show images and report 
                                    during the optimization process 
        @ivar _epsilon:             Factor to control the amount of correction imposed by the gradient values of the oip-model 

        @note: Input image data of the OIP-model and references to it 
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _initial_precursor_img:  The initial image to start a precursor optimization with.
                                       Would normally be an image of only long range fluctuations. 
        @ivar _precursor_image:        The evolved image created and selected by the precursor loop 

        @ivar _initial_inp_img_data:  A tensor representing the data of the input image 
        @ivar _inp_img_data:          A tensor representing the image data actually used and varied during optimization 
        @ivar _img_dim:               We assume quadratic images to work with 
                                      with dimension _img_dim along each axis
                                      For the time being we only support MNIST images 
        
        @note: Internal parameters controlling the composition of random initial image data 
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _li_dim_steps:        A list of the intermediate dimensions for random data;
                                    these data are smoothly scaled to the image dimensions 
        @ivar _ay_facts:            A Numpy array of 4 factors to control the amount of 
                                    contribution of the statistical variations 
                                    on the 4 length scales to the initial image
        
        @note: Internal variables to save data of a precursor run
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
        @ivar _list_of_covs: list of long range fluctuation data for a (3x3)-grid covering the image area 
        @ivar _li_fluct_enrichments: [li_facts, li_dim_steps] data for enrichment with small fluctuations 
        
        
        @note: Internal variables for plotting
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
        
        '''    
        
        # Input data and variable initializations
        # ****************************************
        
        # the model 
        # ~~~~~~~~~~
        self._cnn_model_file = cnn_model_file
        self._cnn_model      = None 
        
        # the chosen layer of the CNN-model
        self._layer_name = layer_name
        # the index of the map in the layer array
        self._map_index  = map_index
        
        # References to objects and the OIP-submodel
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._r_cnn_inputs  = None # reference to input of the CNN_model, also used in the oip-model  
        self._layer_output  = None
        self._oip_submodel  = None
        
        # References to watched GradientTape objects 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._tape          = None # TF2 GradientTape variable
        # some "references"
        self._r_oip_outputs = None # output of the oip-submodel to be watched 
        self._r_oip_grads   = None # gradients determined by GradientTape   
        self._r_oip_loss    = None # loss function
        # loss and gradient values (to be produced ba a backend function _iterate() )
        self._val_oip_grads = None
        self._val_oip_loss  = None
        
        # The Keras function to produce concrete outputs of the new OIP-model  
        self._iterate       = None
        
        # The strategy to produce an OIP pattern out of statistical input images 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~
        # 0: Simple superposition of fluctuations at different length scales 
        # 1: Move over 4 intermediate images - partially optimized 
        self._oip_strategy = 0
        
        # Parameters controlling the OIP-optimization process 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~
        # number of epochs for optimization strategy 1
        self._ay_epochs    = np.array((20, 40, 80, 400), dtype=np.int32)
        len_epochs         = len(self._ay_epochs)
        
        # number of epochs for optimization strategy 0
        self._n_epochs     = self._ay_epochs[len_epochs-1]   
        self._n_steps      = 6   # divides the number of n_epochs into n_steps to produce intermediate outputs
        
        # size of corrections by gradients
        self._epsilon       = 0.01 # step-size for gradient correction
        
        # Input image-typess and references 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # precursor image
        self._initial_precursor_img = None
        self._precursor_img         = None # output of the _precursor-method 
        
        # The input image for the OIP-creation - a superposition of initial random fluctuations
        self._initial_inp_img_data  = None  # The initial data constructed 
        self._inp_img_data          = None  # The data used and varied for optimization 
        # image dimension
        self._img_dim               = img_dim   # = 28 => MNIST images for the time being 
        
        # Parameters controlling the setup of an initial image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~~~~~~~~~~~~~~
        # The length scales for initial input fluctuations
        self._li_dim_steps = ( (3, 3), (7,7), (14,14), (28,28) ) # can later be overwritten 
        # Parameters for fluctuations  - used both in strategy 0 and strategy 1  
        self._ay_facts     = np.array( (0.5, 0.5, 0.5, 0.5), dtype=np.float32 )
        
        # Data of a _precursor()-run 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._list_of_covs = None       # list of long range fluctuation data for a (3x3)-grid covering the image area 
        self._li_fluct_enrichments = None # = [li_facts, li_dim_steps] list with 2 lists of data for enrichment with small fluctuations 
        # These data are required to reconstruct the input image to which a map reacted 
        
        # List of references to axis subplots
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # These references may change from Jupyter cell to Jupyter cell and provided by the called methods
        self._li_axa = None # will be set by methods according to axes-frames in Jupyter cells 
        # axis frames for a single image in 2 versions (with contrast enhancement)
        self._ax1_1 = None
        self._ax1_2 = None 
        
        
        # Variables for Deep Dream Analysis 
        # ***********************************
        #     Here we need a stack of tiled sub-images of a large image, as we subdivide on different length scales 
        self._num_tiles = 1
        self._t_image_stack = None 
        
        # ********************************************************
        # Model setup - load the cnn-model and build the OIP-model
        # ************
        if path.isfile(self._cnn_model_file): 
            # We trigger the initial load of a model 
            self._load_cnn_model(file_of_cnn_model = self._cnn_model_file, b_print_cnn_model = True)
            # We trigger the build of a new sub-model based on the CNN model used for OIP search 
            self._build_oip_model(layer_name = self._layer_name, b_print_oip_model = True ) 
        else:
            print("<\nWarning: The standard file " + self._cnn_model_file + 
                  " for the cnn-model could not be found!\n " + 
                  " Please use method _load_cnn_model() to load a valid model")
            sys.exit()
        return
    
    
    #
    # Method to load a specific CNN model
    # **********************************
    def _load_cnn_model(self, file_of_cnn_model=None, b_print_cnn_model=True ):
        '''
        @summary: Method to load a CNN-model from a h5-file and create a reference to its input (image)
        @version: 0.2 of 28.09.2020
        @requires: filename must already have been saved in _cnn_model_file or been given as a parameter 
        @requires: file must be a h5-file 
        @change: minor changes - documentation 
        @note: A reference to the CNN's input is saved internally
        @warning: An input in form of an image - a MNIST-image - is implicitly assumed
        
        @note: Parameters
        -----------------
        @param file_of_cnn_model: Name of h5-file with the trained (!) CNN-model
        @param b_print_cnn_model: boolean - Print some information on the CNN-model 
        '''
        if file_of_cnn_model != None:
            self._cnn_model_file = file_of_cnn_model
        
        # Check existence of the file
        if not path.isfile(self._cnn_model_file): 
            print("\nWarning: The file " + file_of_cnn_model + 
                  " for the cnn-model could not be found!\n" + 
                  "Please change the parameter \"file_of_cnn_model\"" + 
                  " to load a valid model")
        
        # load the CNN model 
        self._cnn_model = models.load_model(self._cnn_model_file)
        
        # Inform about the model and its file
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~
        print("Used file to load a ´ model = ", self._cnn_model_file)
        # we print out the models structure
        if b_print_cnn_model:
            print("Structure of the loaded CNN-model:\n")
            self._cnn_model.summary()
        
        # handle/references to the models input => more precise the input image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #    Note: As we only have one image instead of a batch 
        #    we pick only the first tensor element!
        #    The inputs will be needed for building the oip-model 
        self._r_cnn_inputs = self._cnn_model.inputs[0]  # !!! We have a batch with just ONE image 
        
        # print out the shape - it should be known from the original cnn-model
        if b_print_cnn_model:
            print("shape of cnn-model inputs = ", self._r_cnn_inputs.shape)
        
        return
    
    
    #
    # Method to construct a model to optimize input for OIP creation 
    # ***************************************
    def _build_oip_model(self, layer_name = 'Conv2D_3', b_print_oip_model=True ): 
        '''
        @summary: Method to build a new (sub-) model - the "OIP-model" - of the CNN-model by 
                  connecting the input of the CNN-model with one of its Conv-layers
        @version: 0.4 of 28.09.2020
        @change: Minor changes - documentation 
        @note: We need a Conv layer to build a working model for input image optimization
        We get the Conv layer by the layer's name 
        The new model connects the first input element of the CNN to the output maps of the named Conv layer CNN 
        We use Keras' models.Model() functionality 
        @note: The layer's name is crucial for all later investigations - if you want to change it this method has to be rerun 
        @requires: The original, trained CNN-model must be loaded and referenced by self._cnn_model 
        @warning: Only 1 input image and not a batch is implicitly assumed 
        
        @note: Parameters
        -----------------
        @param layer_name: Name of the convolutional layer of the CNN for whose maps we want to find OIP patterns
        @param b_print_oip_model: boolean - Print some information on the OIP-model 
        
        '''
        # free some RAM - hopefully 
        del self._oip_submodel
        
        # check for loaded CNN-model
        if self._cnn_model == None: 
            print("Error: CNN-model not yet defined.")
            sys.exit()
        
        # get layer name 
        self._layer_name = layer_name
        
        # We build a new model based on the model inputs and the output 
        self._layer_output = self._cnn_model.get_layer(self._layer_name).output
        # Note: We do not care at the moment about a complex composition of the input 
        # We trust in that we handle only one image - and not a batch
        
        # Create the sub-model via Keras' models.Model()
        model_name = "mod_oip__" + layer_name 
        self._oip_submodel = models.Model( [self._r_cnn_inputs], [self._layer_output], name = model_name)                                    

        # We print out the oip model structure
        if b_print_oip_model:
            print("Structure of the constructed OIP-sub-model:\n")
            self._oip_submodel.summary()
        return
    
    
    #
    # Method to set up GradientTape and an iteration function providing loss and gradient values  
    # *********************************************************************************************
    def _setup_gradient_tape_and_iterate_function(self, b_print = True, b_full_layer = False):
        '''
        @summary: Central method to watch input variables and resulting gradient changes 
        @version: 0.6 of 12.12.2020
        @change: V0.6: We add the definition of a loss function for a whole layer 
        @note: For TF2 eager execution we need to watch input changes and trigger automatic gradient evaluation
        @note: The normalization of the gradient is strongly recommended; as we fix epsilon for correction steps 
               we thereby will get changes to the input data of an approximately constant order.
               This - together with standardization of the images (!) - will lead to convergence at the size of epsilon !
        @param b_full_layer: Boolean; define a loss function for a single map (map_index > -1) or the full layer (map_index = -1)
        '''   
        # Working with TF2 GradientTape
        self._tape = None

        # Watch out for input, output variables with respect to gradient changes
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        with tf.GradientTape() as self._tape: 
            # Input
            # ~~~~~
            self._tape.watch(self._r_cnn_inputs)
            # Output
            # ~~~~~~
            self._r_oip_outputs = self._oip_submodel(self._r_cnn_inputs)
            
            # only for testing
            # *******************
            #t_shape = self._r_oip_outputs[0, :, :, :].shape  # the 0 is important, else "Noen" => problems 
            #scaling_factor = tf.cast(tf.reduce_prod(t_shape), tf.float32) 
            #tf.print("scaling_factor: ", scaling_factor)
            #t_loss = tf.reduce_sum(tf.math.square(self._r_oip_outputs[0, :, :, :])) / scaling_factor
            
            # Define Loss 
            # ***********
            if (b_full_layer):
                # number of neurons over all layer maps 
                t_shape = self._r_oip_outputs[0, :, :, :].shape  # the 0 is important, else "None" => problems 
                scaling_factor = tf.cast(tf.reduce_prod(t_shape), tf.float32) # shape! eg. 3x3x128 for layer 3 
                tf.print("scaling_factor: ", scaling_factor)
                
                # The scaled total loss of the layer 
                self._r_oip_loss = tf.reduce_sum(tf.math.square(self._r_oip_outputs[0, :, :, :])) / scaling_factor
                
                # self._r_oip_loss = tf.reduce_mean(self._r_oip_outputs[0, :, :, self._map_index])
            else:
                self._r_oip_loss = tf.reduce_mean(self._r_oip_outputs[0, :, :, self._map_index])
            
            #self._loss = B.mean(oip_output[:, :, :, map_index])
            #self._loss = B.mean(oip_outputs[-1][:, :, map_index])
            #self._loss = tf.reduce_mean(oip_outputs[-1][ :, :, map_index])
            if b_print:
                print(self._r_oip_loss)
                print("shape of oip_loss = ", self._r_oip_loss.shape)
        
        # Gradient definition and normalization
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._r_oip_grads  = self._tape.gradient(self._r_oip_loss, self._r_cnn_inputs)
        print("shape of grads = ", self._r_oip_grads.shape)

        # normalization of the gradient - required for convergence 
        t_tiny = tf.constant(1.e-7, tf.float32)
        self._r_oip_grads /= (tf.sqrt(tf.reduce_mean(tf.square(self._r_oip_grads))) + t_tiny)
        #self._r_oip_grads /= (B.sqrt(B.mean(B.square(self._r_oip_grads))) + 1.e-7)
        #self._r_oip_grads = tf.image.per_image_standardization(self._r_oip_grads)

        # Define an abstract recallable Keras function as a convenience function for iterations 
        #     The _iterate()-function produces loss and gradient values for corrected img data 
        #     The first list of addresses points to the input data, the last to the output data 
        self._iterate = B.function( [self._r_cnn_inputs], [self._r_oip_loss, self._r_oip_grads] )
        
    
    #        
    # Method to optimize an emerging OIP out of statistical input 
image data 
    # (simple strategy - just optimize, no precursor, no intermediate pattern evolution, ..) 
    # ********************************
    def _oip_strat_0_optimization_loop(self, conv_criterion = 5.e-4, 
                                            b_stop_with_convergence = False,
                                            b_show_intermediate_images = True,
                                            b_print = True):
        '''
        @summary: Method to control the optimization loop for OIP reconstruction of an initial input image
                  with a statistical distribution of pixel values. 
        @version: 0.4 of 28.09.2020
        @changes: Minor changes - eliminated some unused variables
        @note:    The function also provides intermediate output in the form of printed data and images.
        @requires: An input image tensor must already be available at _inp_img_data - created by _build_initial_img_data()
        @requires: Axis-objects for plotting, typically provided externally by the calling functions 
                  _derive_OIP() or _precursor()
        
        @note: Parameters:
        ----------------- 
        @param conv_criterion:  A small threshold number for (difference of loss-values / present loss value )
        @param b_stop_with_convergence: Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
        @param b_show_intermediate_images: Boolean which decides whether we show up to 8 intermediate images 
        @param b_print: Boolean which decides whether we print intermediate loss values 
        
        @note: Intermediate information is provided at _n_steps intervals, 
               which are logarithmically distanced with respect to _n_epochs
               Reason: Most changes happen at the beginning 
        @note: This method produces some intermediate output during the optimization loop in form of images.
        It uses an external grid of plot frames and their axes-objects. The addresses of the 
        axis-objects must be provided by an external list "li_axa[]" to self._li_axa[].  
        We need a sequence of _n_steps+2 axis-frames (or more), i.e. len(_li_axa) >= _n_steps + 2. 
        
        @todo: Loop not optimized for TF 2 - but not so important here - a run takes less than a second 
        
        '''
        
        # Check that we already have an input image tensor
        if ( (self._inp_img_data == None) or 
             (self._inp_img_data.shape[1] != self._img_dim) or 
             (self._inp_img_data.shape[2] != self._img_dim) ) :
            print("There is no initial input image or it does not fit dimension requirements (28,28)")
            sys.exit()

        # Print some information
        if b_print:
            print("*************\nStart of optimization loop\n*************")
            print("Strategy: Simple initial mixture of long and short range variations")
            print("Number of epochs = ", self._n_epochs)
            print("Epsilon =  ", self._epsilon)
            print("*************")

        # some initial value
        loss_old = 0.0
        
        # Preparation of intermediate reporting / img printing
        # --------------------------------------
        # Logarithmic spacing of steps (most things happen initially)
        n_el = math.floor(self._n_epochs / 2**(self._n_steps) ) 
        li_int = []
        if n_el != 0:
            for j in range(self._n_steps):
                li_int.append(n_el*2**j)
        else:  # linear spacing
            n_el = math.floor(self._n_epochs / (self._n_steps+1) ) 
            for j in range(self._n_steps+1):
                li_int.append(n_el*j)
        
        if b_print:
            print("li_int = ", li_int)
        
        # A counter for intermediate reporting  
        n_rep = 0
        
        # 
Convergence? - A list for steps meeting the convergence criterion
        # ~~~~~~~~~~~~
        li_conv = []
        
        
        # optimization loop 
        # *******************
        # counter for steps with zero loss and gradient values 
        n_zeros = 0
        
        for j in range(self._n_epochs):
            
            # Get output values of our Keras iteration function 
            # ~~~~~~~~~~~~~~~~~~~
            self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
            
            # loss difference to last step - should steadily become smaller 
            loss_diff = self._val_oip_loss - loss_old 
            #if b_print:
            #    print("j: ", j, " :: loss_val = ", self._val_oip_loss, " :: loss_diff = ",  loss_diff)
            #    # print("loss_diff = ", loss_diff)
            loss_old = self._val_oip_loss
            
            if j > 10 and (loss_diff/(self._val_oip_loss + 1.e-7)) < conv_criterion:
                li_conv.append(j)
                lenc = len(li_conv)
                # print("conv - step = ", j)
                # stop only after the criterion has been met in 4 successive steps
                if b_stop_with_convergence and lenc > 5 and li_conv[-4] == j-4:
                    return
            
            grads_val     = self._val_oip_grads
            #grads_val =   normalize_tensor(grads_val)
            
            # The gradients average value 
            avg_grads_val = (tf.reduce_mean(grads_val)).numpy()
            
            # Check if our map reacts at all
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if self._val_oip_loss == 0.0 and avg_grads_val == 0.0 and b_print :
                print( "0-values, j= ", j, 
                       " loss = ", self._val_oip_loss, " avg_grad = ", avg_grads_val )
                n_zeros += 1
            
            if n_zeros > 10 and b_print: 
                print("More than 10 times zero values - Try a different initial random distribution of pixel values")
                return
            
            # gradient ascent => Correction of the input image data 
            # ~~~~~~~~~~~~~~~
            self._inp_img_data += self._val_oip_grads * self._epsilon
            
            # Standardize the corrected image - we won't get a convergence otherwise 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
            
            # Some output at intermediate points 
            #     Note: We use logarithmic intervals because most changes 
            #           appear in the initial third of the loop's range  
            if (j == 0) or (j in li_int) or (j == self._n_epochs-1) :
                if b_print or (j == self._n_epochs-1):
                    # print some info 
                    print("\nstep " + str(j) + " finalized")
                    #print("Shape of grads = ", grads_val.shape)
                    print("present loss_val = ", self._val_oip_loss)
                    print("loss_diff = ", loss_diff)
                # show the intermediate image data 
                if b_show_intermediate_images: 
                    imgn = self._inp_img_data[0,:,:,0].numpy()
                    # print("Shape of intermediate img = ", imgn.shape)
                    self._li_axa[n_rep].imshow(imgn, cmap=plt.cm.get_cmap('viridis'))
                    # counter
                    n_rep += 1
        
        return
    
    
    #        
    # Standard UI-method to derive OIP from a given initial input image
    # ********************
    def _derive_OIP(self, map_index = 1, 
                          n_epochs = None, 
                          n_steps = 6, 
                          epsilon = 0.01, 
                          conv_criterion = 5.e-4, 
                          li_axa = [], 
                          ax1_1 = None, ax1_2 = None, 
                          b_stop_with_convergence = False,
                          b_show_intermediate_images = True,
                          b_print = True,
                          centre_move = 0.33, fact = 1.0):
        '''
        @summary: Method to create an OIP-image for a given initial input image
                  This is the standard user interface for finding an OIP 
        @warning: This method should NOT be used for finding an initial precursor image 
                  Use _prepare_precursor() to define the map first and then _precursor() to evaluate initial images 
        @version: V0.5, 12.12.2020
        @change:  V0.4: Minor changes - added internal _li_axa for plotting, added documentation 
                  This method starts the process of producing an OIP of statistical input image data
        @change:  V0.5 : map_index can now have the value "-1". Then a loss function for the whole layer (instead of a single map) is prepared
        @requires: A map index (or -1) should be provided to this method 
        @requires: An initial input image with statistical fluctuations of pixel values must have been created. 

        @warning:    This version only supports the most simple strategy - "strategy 0" 
        -------------    Optimize in one loop - starting from a superposition of fluctuations
                         No precursor, no intermediate evolutions

        @note: Parameters:
        -----------------
        @param map_index: We can and should choose a map here  (overwrites previous settings)
                          If map_index == -1 => We take a loss for the whole layer  
        @param n_epochs: Number of optimization steps  (overwrites previous settings) 
        @param n_steps:  Defines number of intervals (with length n_epochs/ n_steps) for reporting
                         standard value: 6 => 8 images - start image, end image + 6 intermediate 
                         This number also sets a requirement for providing (n_step + 2) external axis-frames 
                         to display intermediate images of the emerging OIP  
                         => see _oip_strat_0_optimization_loop()
        @param epsilon:  Size for corrections by gradient values
        @param conv_criterion: A small threshold number for convergence (checks:  difference of loss-values / present loss value )
        @param b_stop_with_convergence: 
                         Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
        @param _li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
                 
        
        @note: Preparations for plotting: 
        We need n_step + 2 axis-frames which must be provided externally
        
        With Jupyter this can externally be done by statements like 

        # figure
        # -----------
        #sizing
        fig_size = plt.rcParams["figure.figsize"]
        fig_size[0] = 16
        fig_size[1] = 8
        fig_a = plt.figure()
        axa_1 = fig_a.add_subplot(241)
        axa_2 = fig_a.add_subplot(242)
        axa_3 = fig_a.add_subplot(243)
        axa_4 = fig_a.add_subplot(244)
        axa_5 = fig_a.add_subplot(245)
        axa_6 = fig_a.add_subplot(246)
        axa_7 = fig_a.add_subplot(247)
        axa_8 = fig_a.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
        
        '''
        # Get input parameters
        self._map_index = map_index
        self._n_epochs  = n_epochs   
        self._n_steps   = n_steps
        self._epsilon   = epsilon
        
        # references to plot frames 
        self._li_axa = li_axa
        num_frames = len(li_axa)
        if num_frames < n_steps+2:
            print("The number of available image frames (", num_frames, ") is smaller than required for intermediate output (", n_steps+2, ")")
            sys.exit()
        
        # 2 axes frames to display the final OIP image (with contrast enhancement) 
        self._ax1_1 = ax1_1
        self._ax1_2 = ax1_2
        
        # number of epochs for optimization strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if n_epochs is None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = n_epochs

        # Reset some variables  
        self._val_oip_grads = None
        self._val_oip_loss  = None 
        self._iterate       = None 

        # Setup the TF2 GradientTape watch and a Keras function for iterations 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # V0.5 => map_index == "-1"
        if self._map_index == -1: 
            self._setup_gradient_tape_and_iterate_function(b_print = b_print, b_full_layer=True)
            if b_print:
                print("GradientTape watch activated (layer-based: layer = ", self._layer_name, ")")
        else: 
            self._setup_gradient_tape_and_iterate_function(b_print = b_print, b_full_layer=False)
            if b_print:
                print("GradientTape watch activated (map-based: map = ", self._map_index, ")")
        
        '''
        # Gradient handling - so far we only deal with addresses 
        # ~~~~~~~~~~~~~~~~~~
        self._r_oip_grads  = self._tape.gradient(self._r_oip_loss, self._r_cnn_inputs)
        print("shape of grads = ", self._r_oip_grads.shape)
        
        # normalization of the gradient 
        self._r_oip_grads /= (B.sqrt(B.mean(B.square(self._r_oip_grads))) + 1.e-7)
        #self._r_oip_grads = tf.image.per_image_standardization(self._r_oip_grads)
        
        # define an abstract recallable Keras function 
        # producing loss and gradients for corrected img data 
        # the first list of addresses points to the input data, the last to the output data 
        self._iterate = B.function( [self._r_cnn_inputs], [self._r_oip_loss, self._r_oip_grads] )
        '''
            
        # get the initial image into a variable for optimization 
        self._inp_img_data = self._initial_inp_img_data

        # Start optimization loop for strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if self._oip_strategy == 0: 
            self._oip_strat_0_optimization_loop( conv_criterion = conv_criterion, 
                                                b_stop_with_convergence = b_stop_with_convergence,  
                                                b_show_intermediate_images = b_show_intermediate_images,
                                                b_print = b_print
                                               )
        
        # Display the last OIP-image created at the end of the optimization loop
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # standardized image 
        oip_img = self._inp_img_data[0,:,:,0].numpy()
        # transformed image 
        oip_img_t = self._transform_tensor_to_img(T_img=self._inp_img_data[0,:,:,0], 
                                                  centre_move = centre_move,
                                                  fact = fact)
        
        # display 
        ax1_1.imshow(oip_img, cmap=plt.cm.get_cmap('viridis'))
        ax1_2.imshow(oip_img_t, cmap=plt.cm.get_cmap('viridis'))
        
        return
    
    
    # 
    # Method to derive OIP from a given initial input image if map_index is already defined 
    # ********************
    def _derive_OIP_for_Prec_Img(self, 
                          n_epochs = None, 
                          n_steps = 6, 
                          epsilon = 0.01, 
                          conv_criterion = 5.e-4, 
                          li_axa = [], 
                          ax1_1 = None, ax1_2 = None, 
                          b_stop_with_convergence = False,
                          b_show_intermediate_images = True,
                          b_print = True):
        '''
        @summary: Method to create an OIP-image for an already given map-index and a given initial input image
                  This is the core of OIP-detection, which starts the optimization loop  
        @warning: This method should NOT be used directly for finding an initial precursor image 
                  Use _prepare_precursor() to define the map first and then _precursor() to evaluate initial images 
        @version: V0.4, 28.09.2020
        @changes: Minor changes - added internal _li_axa for plotting, added documentation 
                  This method starts the process of producing an OIP of statistical input image data
        
        @note:    This method should only be called after _prepare_precursor(), _precursor(), _build_initial_img_prec() 
                  For a trial of different possible precursor images rerun _build_initial_img_prec() with a different index
        
        @requires: A map index should be provided to this method 
        @requires: An initial input image with statistical fluctuations of pixel values must have been created. 

        @warning:    This version only supports the most simple strategy - "strategy 0" 
        -------------    Optimize in one loop - starting from a superposition of fluctuations
                         no intermediate evolutions

        @note: Parameters:
        -----------------
        @param n_epochs: Number of optimization steps  (overwrites previous settings) 
        @param n_steps:  Defines number of intervals (with length n_epochs/ n_steps) for reporting
                         standard value: 6 => 8 images - start image, end image + 6 intermediate 
                         This number also sets a requirement for providing (n_step + 2) external axis-frames 
                         to display intermediate images of the emerging OIP  
                         => see _oip_strat_0_optimization_loop()
        @param epsilon:  Size for corrections by gradient values
        @param conv_criterion: A small threshold number for convergence (checks:  difference of loss-values / present loss value )
        @param b_stop_with_convergence: 
                         Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
        @param _li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
                 
        
        @note: Preparations for plotting: 
        We need n_step + 2 axis-frames which must be provided externally
        
        With Jupyter this can externally be done by statements like 

        # figure
        # -----------
        #sizing
        fig_size = plt.rcParams["figure.figsize"]
        fig_size[0] = 16
        fig_size[1] = 8
        fig_a = plt.figure()
        axa_1 = fig_a.add_subplot(241)
        axa_2 = fig_a.add_subplot(242)
        axa_3 = fig_a.add_subplot(243)
        axa_4 = fig_a.add_subplot(244)
        axa_5 = fig_a.add_subplot(245)
        axa_6 = fig_a.add_subplot(246)
        axa_7 = fig_a.add_subplot(247)
        axa_8 = fig_a.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
        
        '''
        # Get input parameters
        self._n_epochs  = n_epochs   
        self._n_steps   = n_steps
        self._epsilon   = epsilon
        
        # references to plot frames 
        self._li_axa = li_axa
        num_frames = len(li_axa)
        if num_frames < n_steps+2:
            print("The number of available image frames (", num_frames, ") is smaller than required for intermediate output (", n_steps+2, ")")
            sys.exit()
            
        # 2 axes frames to display the final OIP image (with contrast enhancement) 
        self._ax1_1 = ax1_1
        self._ax1_2 = ax1_2
        
        # number of epochs for optimization strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if n_epochs is None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = n_epochs
        
        # Note: No setup of GradientTape and _iterate(required) - this is done by _prepare_precursor 
            
        # get the initial image into a variable for optimization 
        self._inp_img_data = self._initial_inp_img_data

        # Start optimization loop for strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if self._oip_strategy == 0: 
            self._oip_strat_0_optimization_loop( conv_criterion = conv_criterion, 
                                                b_stop_with_convergence = b_stop_with_convergence,  
                                                b_show_intermediate_images = b_show_intermediate_images,
                                                b_print = b_print
                                               )
        # Display the last OIP-image created at the end of the optimization loop
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # standardized image 
        oip_img = self._inp_img_data[0,:,:,0].numpy()
        # transformed image 
        oip_img_t = self._transform_tensor_to_img(T_img=self._inp_img_data[0,:,:,0])
        
        # display 
        ax1_1.imshow(oip_img, cmap=plt.cm.get_cmap('viridis'))
        ax1_2.imshow(oip_img_t, cmap=plt.cm.get_cmap('viridis'))
        
        return
    
    
    # 
    # Method to build an initial image from a superposition of random data on different length scales 
    # ***********************************
    def _build_initial_img_data( self, 
                                 strategy = 0, 
                                 li_epochs    = [20, 50, 100, 400], 
                                 li_facts     = [0.5, 0.5, 0.5, 0.5],
                                 li_dim_steps = [ (3,3), (7,7), (14,14), (28,28) ], 
                                 b_smoothing = False,
                                 ax1_1 = None, ax1_2 = None):
        
        '''
        @summary: Standard method to build an initial image with random fluctuations on 4 length scales
        @version: V0.2 of 29.09.2020
        
        @note: This method should NOT be used for initial images based on a precursor image. 
               See _build_initial_img_prec(), instead.  
        
        @note: This method constructs an initial input image with a statistical distribution of pixel-values.
        We use 4 length scales to mix fluctuations with different "wave-length" by a simple approach: 
        We fill four squares with a different number of cells below the number of pixels 
        in each dimension of the real input image; e.g. (4x4), (7x7), (14x14), (28x28) <= (28x28). 
        We fill the cells with random numbers in [-1.0, 1.0]. We smoothly scale the resulting patterns 
        up to (28,28) (or whatever the input image dimensions are) by bicubic interpolations 
        and finally add up all values. As a final step we standardize the pixel value distribution.          
        
        @warning: This version works with 4 length scales, only. 
        @warning: In the present version the parameters "strategy" and "li_epochs" have no effect 
        
        @note: Parameters:
        -----------------
        @param strategy:  The strategy, how to build an image (overwrites previous settings) - presently only 0 is supported 
        @param li_epochs: A list of epoch numbers which will be used in strategy 1 - not yet supported 
        @param li_facts:  A list of factors which control the relative strength of the 4 fluctuation patterns 
        @param li_dim_steps: A list of square dimensions for setting the length scale of the fluctuations  
        @param b_smoothing: Parameter which builds a control image   
        @param ax1_1: matplotlib axis-frame to display the built image 
        @param ax1_2: matplotlib axis-frame to display a second version of the built image 
        
        '''
        
        # Get input parameters 
        # ~~~~~~~~~~~~~~~~~~
        self._oip_strategy = strategy               # no effect in this version 
        self._ay_epochs    = np.array(li_epochs)    # no effect in this version 
        
        # factors by which to mix the random number fluctuations of the different length scales 
        self._ay_facts     = np.array(li_facts)
        # dimensions of the squares which simulate fluctuations 
        self._li_dim_steps = li_dim_steps
        
        # A Numpy array for the eventual superposition of random data 
        fluct_data = None
        
        
        # Strategy 0: Simple superposition of random patterns at 4 different wave-length
        # ~~~~~~~~~~
        if self._oip_strategy == 0:
            
            dim_1_1 = self._li_dim_steps[0][0] 
            dim_1_2 = self._li_dim_steps[0][1] 
            dim_2_1 = self._li_dim_steps[1][0] 
            dim_2_2 = self._li_dim_steps[1][1] 
            dim_3_1 = self._li_dim_steps[2][0] 
            dim_3_2 = self._li_dim_steps[2][1] 
            dim_4_1 = self._li_dim_steps[3][0] 
            dim_4_2 = self._li_dim_steps[3][1] 
            
            fact1 = self._ay_facts[0]
            fact2 = self._ay_facts[1]
            fact3 = self._ay_facts[2]
            fact4 = self._ay_facts[3]
            
            # print some parameter information
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            print("\nInitial image composition - strategy 0:\n Superposition of 4 different wavelength patterns")
            print("Parameters:\n", 
                 fact1, " => (" + str(dim_1_1) +", " + str(dim_1_2) + ") :: ", 
                 fact2, " => (" + str(dim_2_1) +", " + str(dim_2_2) + ") :: ", 
                 fact3, " => (" + str(dim_3_1) +", " + str(dim_3_2) + ") :: ", 
                 fact4, " => (" + str(dim_4_1) +", " + str(dim_4_2) + ")" 
                 )
            
            # fluctuations
            fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
            fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
            fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
            fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 
            
            # Scaling with bicubic interpolation to the required image size
            fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
            fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
            fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
            fluct4_scale = fluct4
            
            # superposition
            fluct_data = fact1*fluct1_scale + fact2*fluct2_scale + fact3*fluct3_scale + fact4*fluct4_scale
            
        
        # get the standardized plus smoothed and unsmoothed image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #  TF2 provides a function performing standardization of image data function
        fluct_data_unsmoothed = tf.image.per_image_standardization(fluct_data) 
        fluct_data_smoothed   = tf.image.per_image_standardization(
                                    tf.image.resize( fluct_data, [28,28], method="bicubic", antialias=True) )
        
        if b_smoothing: 
            self._initial_inp_img_data = fluct_data_smoothed
        else:
            self._initial_inp_img_data = fluct_data_unsmoothed
        
        # Display of both variants => there should be no difference 
        # ~~~~~~~~~~~~~~~~~~~~~~~~
        img_init_unsmoothed = fluct_data_unsmoothed[0,:,:,0].numpy()
        img_init_smoothed   = fluct_data_smoothed[0,:,:,0].numpy()
        ax1_1.imshow(img_init_unsmoothed, cmap=plt.cm.get_cmap('viridis'))
        ax1_2.imshow(img_init_smoothed, cmap=plt.cm.get_cmap('viridis'))
        
        print("Initial images plotted")
        
        return self._initial_inp_img_data    


    # 
    # Method to build an initial image from a superposition of a PRECURSOR image with random data on different length scales 
    # ***********************************
    def _build_initial_img_from_prec( self, 
                                 prec_index = -1,
                                 strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (1.0, 0.0, 0.0, 0.0, 0.0),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False,
                                 b_print = True, 
                                 b_display = False, 
                                 ax1_1 = None, ax1_2 = None):
        
        '''
        @summary: Method to build an initial image based on a Precursor image => Input for _derive_OIP_for_Prec_Img()
        @version: V0.3, 03.10.2020
        @changes: V0.2: Minor- only documentation and comparison of index to length of _li_of_flucts[] 
        @changes: V0.3: Added Booleans to control the output and display of images 
        @changes: V0.4: Extended the reconstruction part / extended documentation
        
        @note: This method differs from _build_initial_img_data() as follows:
                * It uses a Precursor image as the fundamental image 
                * The data of the Precursor Image will be reconstructed from a (3x3) fluctuation pattern and enrichments
                * This method adds even further fluctuations if requested 
        @note: This method should be called manually from a Jupyter cell 
        @note: This method saves the reconstructed input image into self._initial_inp_img_data
        @note: This method should be followed by a call of self._derive_OIP_for_Prec_Img()
        
        @requires: Large scale fluctuation data saved in self._li_of_flucts[]
        @requires: Additional enrichment information in self._li_of_fluct_enrichments[]
        
        @param prec_index: This is an index ([0, 7[) of a large scale fluctuation pattern which was saved in self._li_of_flucts[] 
                           during the run of the method "_precursor()". The image tensor is reconstructed from the fluctuation pattern. 
        @warning: We support a maximum of 8 selected fluctuation patterns for which a map reacts 
        @warning: However, less precursor patterns may be found - so you should watch for the output of _precursor() before you run this method
        
        @param li_facts:  A list of factors which control the relative strength of the precursor image vs. 
                          4 additional fluctuation patterns 
        @warning: Normally, it makes no sense to set li_facts[1] > 0 - because this will destroy the original large scale pattern 
        
        @note: For other input parameters see _build_initial_img_data()
        '''
        
        # Get input parameters 
        self._oip_strategy = strategy
        self._ay_facts     = np.array(li_facts)
        self._ay_epochs    = np.array(li_epochs)
        self._li_dim_steps = li_dim_steps
        
        fluct_data = None
        
        # Reconstruct a precursor image from a saved large scale fluctuation pattern (result of _precursor() ) 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        length_li_cov = len(self._li_of_flucts)
        if prec_index > -1: 
            if length_li_cov < prec_index+1:
                print("index too large for number of detected patterns (", length_li_cov, ")")
                sys.exit()
            cov_p = self._li_of_flucts[prec_index][1][1]
            
            fluct0_scale_p = tf.image.resize(cov_p, [28,28], method="bicubic", antialias=True)
            
            # Add fluctuation enrichments - if saved [ len(self._li_fluct_enrichments) > 0 ]
            if len(self._li_fluct_enrichments) > 0:
                # Scaling enrichment flucts with bicubic interpolation to the required image size
                fluct1_p = self._li_fluct_enrichments[2][0] 
                fluct2_p = self._li_fluct_enrichments[2][1] 
                fluct3_p = self._li_fluct_enrichments[2][2] 
                fluct4_p = self._li_fluct_enrichments[2][3] 
                
                fact0_p = self._li_fluct_enrichments[0][0]
                fact1_p = self._li_fluct_enrichments[0][1]
                fact2_p = self._li_fluct_enrichments[0][2]
                fact3_p = self._li_fluct_enrichments[0][3]
                fact4_p = self._li_fluct_enrichments[0][4]
                
                fluct1_scale_p = tf.image.resize(fluct1_p, [28,28], method="bicubic", antialias=True)
                fluct2_scale_p = tf.image.resize(fluct2_p, [28,28], method="bicubic", antialias=True)
                fluct3_scale_p = tf.image.resize(fluct3_p, [28,28], method="bicubic", antialias=True)
                fluct4_scale_p = fluct4_p
                
                fluct_scale_p = fact0_p*fluct0_scale_p \
                 + fact1_p*fluct1_scale_p + fact2_p*fluct2_scale_p \
                 + fact3_p*fluct3_scale_p + fact4_p*fluct4_scale_p
                 
            else: 
                fluct_scale_p = fluct0_scale_p
            
            # get the img-data 
            fluct_data_p  = tf.image.per_image_standardization(fluct_scale_p)     
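            # eliminate very small values - this proved to be helpful in many cases 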
            fluct_data_p  = tf.where(fluct_data_p > 5.e-6, fluct_data_p, tf.zeros_like(fluct_data_p))
            self._initial_inp_img_data = fluct_data_p
            self._inp_img_data         = fluct_data_p
            
        
        # Strategy 0: Simple superposition of the precursor image AND additional patterns at 4 different wave-length
        # ~~~~~~~~~~
        if self._oip_strategy == 0:
            
            dim_1_1 = self._li_dim_steps[0][0] 
            dim_1_2 = self._li_dim_steps[0][1] 
            dim_2_1 = self._li_dim_steps[1][0] 
            dim_2_2 = self._li_dim_steps[1][1] 
            dim_3_1 = self._li_dim_steps[2][0] 
            dim_3_2 = self._li_dim_steps[2][1] 
            dim_4_1 = self._li_dim_steps[3][0] 
            dim_4_2 = self._li_dim_steps[3][1] 
            
            fact0 = self._ay_facts[0]
            fact1 = self._ay_facts[1]
            fact2 = self._ay_facts[2]
            fact3 = self._ay_facts[3]
            fact4 = self._ay_facts[4]
            
            # print some parameter information
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if b_print:
                print("\nInitial image composition - strategy 0:\n Superposition of 4 different wavelength patterns")
                print("Parameters:\n", 
                     fact0, " => precursor image \n", 
                     fact1, " => (" + str(dim_1_1) +", " + str(dim_1_2) + ") :: ", 
                     fact2, " => (" + str(dim_2_1) +", " + str(dim_2_2) + ") :: ", 
                     fact3, " => (" + str(dim_3_1) +", " + str(dim_3_2) + ") :: ", 
                     fact4, " => (" + str(dim_4_1) +", " + str(dim_4_2) + ")" 
                     )
            
            # fluctuations
            fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
            fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
            fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
            fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 
            
            # Scaling with bicubic interpolation to the required image size
            fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
            fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
            fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
            fluct4_scale = fluct4
            
            # superposition with the already calculated image 
            fluct_data = fact0 * self._initial_inp_img_data  \
                         + fact1*fluct1_scale + fact2*fluct2_scale \
                         + fact3*fluct3_scale + fact4*fluct4_scale
            
        
        # get the standardized plus smoothed and unsmoothed image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #    TF2 provides a function performing standardization of image data function
        fluct_data_unsmoothed = fluct_data 
        fluct_data_smoothed   = tf.image.per_image_standardization(
                                    tf.image.resize( fluct_data, [28,28], 
                                                     method="bicubic", antialias=True) )
        
        if b_smoothing: 
            self._initial_inp_img_data = fluct_data_smoothed
        else:
            self._initial_inp_img_data = fluct_data_unsmoothed
        
        # There should be no difference 
        img_init_unsmoothed = fluct_data_unsmoothed[0,:,:,0].numpy()
        img_init_smoothed   = fluct_data_smoothed[0,:,:,0].numpy()
        
        if b_display:
            ax1_1.imshow(img_init_unsmoothed, cmap=plt.cm.get_cmap('viridis'))
            ax1_2.imshow(img_init_smoothed, cmap=plt.cm.get_cmap('viridis'))
            print("Initial images plotted")
        
        return self._initial_inp_img_data    
    
    #
    # Method to prepare a Precursor run which checks a variety of large scale fluctuations for optimum activation  
    # ***********************************
    def _prepare_precursor(self, map_index = 120, b_print = False):
        '''
        @summary: A method to prepare a Precursor run by setting up GradientTape and the _iterate() function for an optimization loop
        @version: 0.2, 30.09.2020
        @changes: Minor - adaption to _setup_gradient_tape_and_iterate_function() instead of the obsolete _setup_gradient_tape()
        @requires: A loaded CNN-Model and an already built OIP-model 
        @requires: A valid map-index as input parameter 
        
        @note: This method sets up the GradientTape and _iterate only once 
               - it will not be done again for the thousands of different input images (with large scale fluctuations) 
               which we investigate during the _precursor()-run. 
        
        @param map_index: Index selecting a map for the CNN layer defined previously by _build_oip_model()
        @param b_print: Boolean - decides about intermediate output 
        
        '''
        
        # Get input parameters 
        # ~~~~~~~~~~~~~~~~~~~~~~
        self._map_index = map_index
        # Reset some variables  
        self._val_oip_grads = None
        self._val_oip_loss  = None 
        self._iterate       = None 
        
        # Setup the TF2 GradientTape watch
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._setup_gradient_tape_and_iterate_function(b_print = b_print)
        if b_print:
            print("GradientTape watch activated and _iterate()-function defined")
    
    
    # Method to perform a Precursor run which checks a variety of large scale fluctuations for optimum activation
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def _precursor(self, 
                   li_pre_val=[0.2, 0.5, 0.8], 
                   num_epochs=10, 
                   loss_limit = 0.5, 
                   b_print = True, 
                   b_with_enriched_fluct = False, 
                   li_facts     = (1.0, 0.0, 0.0, 0.0, 0.1),
                   li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                   b_check = True, 
                   fig_a = None, 
                   li_axa = None,
                   # the next parameters are new in V0.6
                   b_show_test_input_images = False,
                   fig_test_img = None,
                   ax_b = None, 
                   interval_test_img = 1):

        '''
        @summary: Method to investigate thousands of input images with large scale fluctuations for the reaction of a specified map (filters)
                  and a given number of epochs in pattern creation
        @version: 0.6, 12.12.2020
        @changes: V0.5: Minor - documentation, skipped some superfluous statements 
        @changes: V0.6: Added intermediate plot output of constructed input images (a fig_a-reference must have been provided!) 
        @note: We select the 8 most dominant images - or less, if there are fewer input images which trigger the map 
        @requires: Previous run of _prepare_precursor() with a definition of the map-index
        @note: We vary only 3 given pixel values on (3x3) grids (19683 possibilities)
        @note: The optimization loop is completely done within this method - due to performance reasons
        
        @param li_pre_val: A list of three scaled pixel values between ]0, 1[ which shall be combined in (3x3)-fluctuation patterns
        
        @param num_epochs: The number of epochs used in the optimization loop for pattern creation
        @note: It is worthwhile to experiment with the number of epochs - the (3x3)-pattern selection may change !!!
        
        @param loss_limit: Threshold of loss for which we register a fluctuation image as relevant 
        
        @param b_print: Boolean - controls printout for intermediate results - useful to see map response 
        @param b_check: Check the response of map for the first saved pattern - check the image reconstruction at the same time
        
        @note: Parameters to enrich the (3x3)-large scale fluctuation with a small scale pattern 
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @param b_with_enriched_fluct: Boolean - controls whether we enrich the long-range pattern with other additional patterns 
        @param li_facts:  A list of factors which control the relative strength of the precursor image vs. 
                          4 additional fluctuation patterns 
        @param li_dim_steps: A list of square dimensions for setting the length scale of the fluctuations  
        
        @note: Parameters for plotting  
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @param fig_a: reference to the 8 axes-frame 
        @param li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
        
        Plots for intermediate test images 
        Note that for intermediate image plotting the cell must have a command line with the meta-command 
        %matplotlib notebook
        plt.ion()
        @param b_show_test_input_images: Boolean; show intermediate fluctuation patterns 
        @param fig_test_img: reference to external image plot frame. 
        @param ax_b: Reference to the axis-frame for the intermediate display of test-images  
        
        @note: Preparations for plotting: 
        ---------------------------------
        fig_b : a simple figure for an image
        fig_a: We need 8 axis-frames which must be provided externally
        With Jupyter this can externally be done by statements like 

        # figure
        # -----------
        fig_b_pre, ax_b_pre = plt.subplots(1, figsize=(3,3))
        fig_b_pre.canvas.draw()
        
        fig_a_pre = plt.figure(2, figsize=(10,5))
        axa_1 = fig_a_pre.add_subplot(241)
        axa_2 = fig_a_pre.add_subplot(242)
        axa_3 = fig_a_pre.add_subplot(243)
        axa_4 = fig_a_pre.add_subplot(244)
        axa_5 = fig_a_pre.add_subplot(245)
        axa_6 = fig_a_pre.add_subplot(246)
        axa_7 = fig_a_pre.add_subplot(247)
        axa_8 = fig_a_pre.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
        fig_a_pre.canvas.draw()
        
        
        '''
        # Internal parameter for number of selected input patterns 
        num_selected = 8 
        # check if length of li_axa is sufficient
        if li_axa is None or len(li_axa) < num_selected:
            print("Error: The length of the provided list with axes-frames for plotting must be at least ", num_selected )
            sys.exit()
        
        # get required external params 
        # ~~~~~~~~~~~~~~~~~~~~~~~~
        self._n_steps   = 2           # only a dummy 
        self._epsilon   = 0.01        # only a dummy  
        
        # number of epochs for optimization strategy 0 
        if num_epochs == None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = num_epochs  
        
        # Create fixed random fluctuation patterns which we can later overlay on the long range fluctuation patterns 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._li_dim_steps = li_dim_steps # dimensions for fluctuations 
        self._ay_facts     = np.array(li_facts)
        
        dim_1_1 = self._li_dim_steps[0][0] 
        dim_1_2 = self._li_dim_steps[0][1] 
        dim_2_1 = self._li_dim_steps[1][0] 
        dim_2_2 = self._li_dim_steps[1][1] 
        dim_3_1 = self._li_dim_steps[2][0] 
        dim_3_2 = self._li_dim_steps[2][1] 
        dim_4_1 = self._li_dim_steps[3][0] 
        dim_4_2 = self._li_dim_steps[3][1] 
        
        fact0 = self._ay_facts[0]
        fact1 = self._ay_facts[1]
        fact2 = self._ay_facts[2]
        fact3 = self._ay_facts[3]
        fact4 = self._ay_facts[4]
        
        # Create fluctuation patterns for enrichment 
        fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
        fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
        fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
        fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 
        
        li_fluct = [fluct1, fluct2, fluct3, fluct4]
        
        # Scaling with bicubic interpolation to the required image size
        fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
        fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
        fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
        fluct4_scale = fluct4

        
        # Create cartesian product of combinatoric possibilities for a (3x3)-grid of long range fluctuations
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        cp = list(product(li_pre_val, repeat=9))
        num = len(cp) 
        print ("We test ", num, " possibilities for (3x3) fluctuations ")
        
        # Prepare lists to save parameter data for the fluctuation pattern 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # Intermediate list to save relevant long scale fluctuations 
        d_cov = {}
        # list to save parameters for an enrichment of the large scale pattern with small fluctuations
        if b_with_enriched_fluct:
            self._li_fluct_enrichments = [li_facts, li_dim_steps, li_fluct]
        else:
            self._li_fluct_enrichments = []
            
        # Loop to check for relevant fluctuations => Loop over all combinations
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        for i_cp in range(num): 
            
            # create the value distribution 
            cov = np.array(cp[i_cp])
            cov = cov.reshape(1,3,3,1) 

            # create basic image to investigate
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            fluct_scale = tf.image.resize(cov, [28,28], method="bicubic", antialias=True) 
            
            # enrich with additional small-scale fluctuations 
            if b_with_enriched_fluct: 
                # superposition with the already calculated image 
                fluct_scale = fact0 * fluct_scale \
                             + fact1*fluct1_scale + fact2*fluct2_scale \
                             + fact3*fluct3_scale + fact4*fluct4_scale
            
            #standardization of the image data 
            fluct_data  = tf.image.per_image_standardization(fluct_scale)     
            # eliminating very small values - this proves to be helpful in many cases 
            fluct_data = tf.where(fluct_data > 5.e-6, fluct_data, tf.zeros_like(fluct_data))
            
            # save image data 
            self._initial_inp_img_data = fluct_data
            self._inp_img_data         = fluct_data
            
            # Display the present input image 
            if b_show_test_input_images and i_cp%interval_test_img == 0: 
                img_b = fluct_data[0, :,:, 0].numpy()
                ax_b.imshow(img_b, cmap=plt.cm.get_cmap('viridis'))
                fig_test_img.canvas.draw()
            
            # optimization loop 
            # ~~~~~~~~~~~~~~~~~
            for j in range(self._n_epochs):
                # Get output values of our Keras iteration function 
                self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
                # gradient ascent => Correction of the input image data 
                self._inp_img_data += self._val_oip_grads * self._epsilon
                # Standardize the corrected image - we won't get a convergence otherwise 
                self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
                
            # Check if we have a loss value > loss_limit and save the (3x3)-data
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if self._val_oip_loss > loss_limit:
                d_cov[i_cp] = [self._val_oip_loss, cov] 
                if b_print:
                    tf.print("i = ", i_cp, " loss = ", self._val_oip_loss)

        # We restrict the number to those 8 with highest loss 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if len(d_cov) > 0: 
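            # sort the saved items (i_cp, [loss, cov]) by their loss value x[1][0] in descending order 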
            self._li_of_flucts = sorted(d_cov.items() , reverse = True,  key=lambda x: x[1][0])
            # print("num of relevant covs = ", len(self._li_of_flucts), len(d_cov))   
            print("\nnum of relevant covs = ", len(self._li_of_flucts))   
            self._li_of_flucts = self._li_of_flucts[:num_selected].copy()
            #save( 'li_of_flucts.npy', np.array(self._li_of_flucts, dtype=np.float32) )
            save( 'li_of_flucts.npy', self._li_of_flucts)
            # save the enrichment-setting 
            save('li_of_cov_enrichments.npy',self._li_fluct_enrichments)
            
            
            # Check if the map really reacted - and check the reconstruction mechanism 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if b_check: 
                print("check of map reaction to first selected image")
                cov_t = self._li_of_flucts[0][1][1]
                #print("cov_t-shape = ", cov_t.shape)
                #cov_del = cov_t - cov
                #print("cov_del =\n", cov_del)
                
                fluct0_scale_t = tf.image.resize(cov_t, [28,28], method="bicubic", antialias=True) 
                        
                # Add fluctuation enrichments - if saved
                if b_with_enriched_fluct:
                    
                    # Scaling enrichment flucts with bicubic interpolation to the required image size
                    fluct1_t = self._li_fluct_enrichments[2][0] 
                    fluct2_t = self._li_fluct_enrichments[2][1] 
                    fluct3_t = self._li_fluct_enrichments[2][2] 
                    fluct4_t = self._li_fluct_enrichments[2][3] 
                    
                    fact0_t = self._li_fluct_enrichments[0][0]
                    fact1_t = self._li_fluct_enrichments[0][1]
                    fact2_t = self._li_fluct_enrichments[0][2]
                    fact3_t = self._li_fluct_enrichments[0][3]
                    fact4_t = self._li_fluct_enrichments[0][4]
                    
                    fluct1_scale_t = tf.image.resize(fluct1_t, [28,28], method="bicubic", antialias=True)
                    fluct2_scale_t = tf.image.resize(fluct2_t, [28,28], method="bicubic", antialias=True)
                    fluct3_scale_t = tf.image.resize(fluct3_t, [28,28], method="bicubic", antialias=True)
                    fluct4_scale_t = fluct4_t
                    
                    fluct_scale_t = fact0_t*fluct0_scale_t \
                                 + fact1_t*fluct1_scale_t + fact2_t*fluct2_scale_t \
                                 + fact3_t*fluct3_scale_t + fact4_t*fluct4_scale_t
                    
                else: 
                    fluct_scale_t = fluct0_scale_t
                
                #standardization
                fluct_data_t  = tf.image.per_image_standardization(fluct_scale_t)     
                fluct_data_t  = tf.where(fluct_data_t > 5.e-6, fluct_data_t, tf.zeros_like(fluct_data_t))
                self._initial_inp_img_data = fluct_data_t
                self._inp_img_data         = fluct_data_t
                self._precursor_img        = fluct_data_t
                
                # optimization loop 
                for j in range(self._n_epochs):
                    self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
                    self._inp_img_data += self._val_oip_grads * self._epsilon
                    self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
                print("loss for 1st selected img = ", self._val_oip_loss )

            # show the imgs 
            # ~~~~~~~~~~~~~~
            self._display_precursor_imgs(li_axa = li_axa, fig_a = fig_a)
            
        # list contains no patterns 
        else:
            print("No image found !")
    
    
    # Method to display initial fluctuation images identified as objects for OIP creation 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def _display_precursor_imgs(self, li_axa = None, fig_a = None):
        '''
        @summary: Method to display up to 8 selected precursor images 
        @version: 0.2, 02.10.2020
        @change: Only some documentation 
        @note: We first reconstruct the image from saved data of the large scale fluctuations 
        @note: We then display the images in externally delivered axes-frames of matplotlib
        @requires: A filled set of valid fluctuation patterns in self._li_of_flucts[][][] - determined by a run of _precursor()
        @requires: Information on fluctuation enrichments in self._li_fluct_enrichments[][] - determined by a _precursor() run
        @requires: A set of axes-frames for plotting - preferably defined in a Jupyter cell calling this method 
        @note: Parameters for plotting  
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @param _li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
         '''
        # length of _li_of_flucts[] vs. length of li_axa
        len_cov = len(self._li_of_flucts)
        if li_axa is None or len(li_axa) < len_cov:
            print("Error: The length of the provided list with axes-frames for plotting must be at least ", len_cov )
            sys.exit()
        
        # Loop to reconstruct and display the found precursor images 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        for j in range(len(self._li_of_flucts)):
            print(j, "loss = ", self._li_of_flucts[j][1][0])
            
            # reconstruct the image from the data of the precursor run 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            cov = self._li_of_flucts[j][1][1]
            fluct0_scale_t  = tf.image.resize(cov, [28,28], method="bicubic", antialias=True)        
            
            # add enrichments if defined 
            if len(self._li_fluct_enrichments) > 0: 
                # Scaling enrichment flucts with bicubic interpolation to the required image size
                fluct1_t = self._li_fluct_enrichments[2][0] 
                fluct2_t = self._li_fluct_enrichments[2][1] 
                fluct3_t = self._li_fluct_enrichments[2][2] 
                fluct4_t = self._li_fluct_enrichments[2][3] 
                
                fact0_t = self._li_fluct_enrichments[0][0]
                fact1_t = self._li_fluct_enrichments[0][1]
                fact2_t = self._li_fluct_enrichments[0][2]
                fact3_t = self._li_fluct_enrichments[0][3]
                fact4_t = self._li_fluct_enrichments[0][4]
                
                fluct1_scale_t = tf.image.resize(fluct1_t, [28,28], method="bicubic", antialias=True)
                fluct2_scale_t = tf.image.resize(fluct2_t, [28,28], method="bicubic", antialias=True)
                fluct3_scale_t = tf.image.resize(fluct3_t, [28,28], method="bicubic", antialias=True)
                fluct4_scale_t = fluct4_t
                
                fluct_scale_t = fact0_t*fluct0_scale_t \
                             + fact1_t*fluct1_scale_t + fact2_t*fluct2_scale_t \
                             + fact3_t*fluct3_scale_t + fact4_t*fluct4_scale_t
            else:
                fluct_scale_t = fluct0_scale_t                 
            
            fluct_datx  = tf.image.per_image_standardization(fluct_scale_t)     
            fluct_dat  = tf.where(fluct_datx > 5.e-6, fluct_datx, tf.zeros_like(fluct_datx))
            img = fluct_dat[0, :,:, 0].numpy()
            li_axa[j].imshow(img, cmap=plt.cm.get_cmap('viridis'))
        
        fig_a.canvas.draw()
        return
    
    
    # Method to transform an img tensor into a standard image - used for contrast enhancement  
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def _transform_tensor_to_img(self, T_img = None, centre_move = 0.33, fact = 1.0):
        '''
        @summary: Method to transform (standardized) tensor img data into standard img array data 
                  and to apply contrast enhancement(> V0.2)
        @change: V0.2: Additional parameters to control contrast enhancement. 
                 We scale the value distribution, move its centre and apply some clipping  
        @note: Clipping is used to remove pixel values outside [0, 255] 
        @version: 0.2, 12.10.2020
        
        @requires: A defined TF2/Keras backend 
        
        @param T_img: The TF2 or keras tensor for image data 
        @param centre_move: A decimal to move the centre of the pixel value distribution  
        
        
        '''
        ay_x = T_img.numpy()  # floating point array  
        
        maxi_o    = np.max(T_img)
        avg_o     = np.mean(T_img)
        mini_o    = np.min(T_img)
        std_dev_o = np.std(T_img)
        print("\nInfos on pixel value distribution during contrast enhancement: ") 
        print("\max_orig = ", maxi_o, " :: avg_orig = ", avg_o, " :: min_orig: ", mini_o) 
        print("std_dev_orig = ", std_dev_o)

        # the following operation should have no effect on standardized images
        ay_x -= ay_x.mean()
        ay_x /= (ay_x.std() + B.epsilon())
        
        maxi    = np.max(ay_x)
        avg     = np.mean(ay_x)
        mini    = np.min(ay_x)
        std_dev = np.std(ay_x)
        print("max_ay = ", maxi, " :: avg_ay = ", avg, " :: min_ay: ", mini) 
        print("std_dev_ay = ", std_dev)
        
        div = fact * 0.5 * ( abs(maxi_o) + abs(mini_o) )
        print("div = ", div)
        ay_x /= div          # scaling  
        ay_x += centre_move  # moving the data centre - 0.5 would move the centre of the distribution to the middle of [0,1]
        #                    # smaller values emphasize light/dark contrasts 
        maxi = np.max(ay_x)
        avg  = np.mean(ay_x)
        mini = np.min(ay_x)
        std_dev = np.std(ay_x)
        print("max_fin = ", maxi, " :: avg_fin = ", avg, " :: min_fin: ", mini) 
        print("std_dev_fin = ", std_dev)
        
        ay_x = np.clip(ay_x, 0, 1)
        
        ay_x *= 255
        ay_x_img = np.clip(ay_x, 0, 255).astype('uint8')
        
        maxi = np.max(ay_x_img)
        avg  = np.mean(ay_x_img)
        mini = np.min(ay_x_img)
        std_dev = np.std(ay_x_img)
        print("max_img = ", maxi, " :: avg_img = ", avg, " :: min_img: ", mini) 
        print("std_dev_img = ", std_dev, "\n")
        
        
        return ay_x_img
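
For orientation, the following is a minimal sketch of how the methods above could be called from a Jupyter cell. It assumes that an object "MyOIP" of the class has already been instantiated, that a trained CNN-model was loaded and that an OIP-(sub)model for a chosen convolutional layer was built. The object name, the map index 56 and the epoch numbers are purely illustrative, and the two workflows are alternatives - you would normally run only one of them per cell:

# Illustrative Jupyter cell - "MyOIP", map_index=56 and the epoch numbers are example values only 
import matplotlib.pyplot as plt

# two frames for the final OIP image (standardized and contrast-enhanced version)
fig_1 = plt.figure(1, figsize=(8, 4))
ax1_1 = fig_1.add_subplot(121)
ax1_2 = fig_1.add_subplot(122)

# n_steps = 6 => we need at least n_steps + 2 = 8 axes-frames for intermediate images
fig_a = plt.figure(2, figsize=(16, 8))
li_axa = [fig_a.add_subplot(2, 4, i) for i in range(1, 9)]

# Workflow 1: initial image from pure random fluctuations on 4 length scales + OIP creation
MyOIP._build_initial_img_data(strategy=0,
                              li_facts=[0.5, 0.5, 0.5, 0.5],
                              li_dim_steps=[(3, 3), (7, 7), (14, 14), (28, 28)],
                              b_smoothing=False,
                              ax1_1=ax1_1, ax1_2=ax1_2)
MyOIP._derive_OIP(map_index=56, n_epochs=600, n_steps=6,
                  epsilon=0.01, conv_criterion=5.e-4,
                  li_axa=li_axa, ax1_1=ax1_1, ax1_2=ax1_2,
                  b_stop_with_convergence=False)

# Workflow 2: scan (3x3) large scale fluctuations first, then carve a selected precursor image
MyOIP._prepare_precursor(map_index=56)
MyOIP._precursor(li_pre_val=[0.2, 0.5, 0.8], num_epochs=10, loss_limit=0.5,
                 fig_a=fig_a, li_axa=li_axa)
MyOIP._build_initial_img_from_prec(prec_index=0, li_facts=(1.0, 0.0, 0.0, 0.0, 0.0),
                                   b_display=True, ax1_1=ax1_1, ax1_2=ax1_2)
MyOIP._derive_OIP_for_Prec_Img(n_epochs=600, n_steps=6, epsilon=0.01,
                               li_axa=li_axa, ax1_1=ax1_1, ax1_2=ax1_2)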


 

Links

A simple CNN for the MNIST dataset – XI – Python code for filter visualization and OIP detection

A simple Python program for an ANN to cover the MNIST dataset – VIII – coding Error Backward Propagation

I continue with my series on a Python program for coding small “Multilayer Perceptrons” [MLPs].

A simple program for an ANN to cover the Mnist dataset – VII – EBP related topics and obstacles
A simple program for an ANN to cover the Mnist dataset – VI – the math behind the „error back-propagation“
A simple program for an ANN to cover the Mnist dataset – V – coding the loss function
A simple program for an ANN to cover the Mnist dataset – IV – the concept of a cost or loss function
A simple program for an ANN to cover the Mnist dataset – III – forward propagation
A simple program for an ANN to cover the Mnist dataset – II – initial random weight values
A simple program for an ANN to cover the Mnist dataset – I – a starting point

After all the theoretical considerations of the last two articles we now start coding again. Our objective is to extend our methods for training the MLP on the MNIST dataset by methods which perform the “error back propagation” and the correction of the weights. The mathematical prescriptions were derived in the following PDF:
My PDF on “The math behind EBP”

When you study the new code fragments below remember a few things:
We are prepared to use mini-batches. Therefore, the cost functions will be calculated over the data records of each batch and all the matrix operations for back propagation will cover all batch-records in parallel. Training means to loop over epochs and mini-batches – in pseudo-code (a small toy example in code follows the list below):

  • Loop over epochs
    1. adjust learning rate,
    2. check for convergence criteria,
    3. Shuffle all data records in the test data set and build new mini-batches
  • Loop over mini-batches
    1. Perform forward propagation for all records of the mini-batch
    2. calculate and save the total cost value for each mini-batch
    3. calculate and save an averaged error on the output layer for each mini-batch
    4. perform error backward propagation on all records of the mini-batch to get the gradient of the cost function with respect to all weights
    5. adjust all weights on all layers
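
To make the nesting of these two loops concrete, here is a small, self-contained toy sketch in Numpy. It is not the class code of this series: it uses random toy data, a plain quadratic cost instead of the LogLoss discussed earlier, and it omits the learning-rate adjustment, the convergence check and the averaged output error. It only illustrates the loop structure of the pseudo-code above.

# Toy illustration of the epoch / mini-batch loop structure (NOT the class code of this series)
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# random toy data: 1000 records with 20 features and 10 one-hot encoded "classes"
X = rng.standard_normal((1000, 20))
Y = np.eye(10)[rng.integers(0, 10, 1000)]

# weights of a tiny (20 -> 30 -> 10) network with sigmoid activations
W1 = 0.1 * rng.standard_normal((20, 30))
W2 = 0.1 * rng.standard_normal((30, 10))

n_epochs, n_size_mini_batch, learn_rate = 5, 50, 0.5

for epoch in range(n_epochs):                            # loop over epochs
    perm = rng.permutation(len(X))                       # shuffle and build new mini-batches
    for start in range(0, len(X), n_size_mini_batch):    # loop over mini-batches
        idx = perm[start:start + n_size_mini_batch]
        X_b, Y_b = X[idx], Y[idx]

        # 1: forward propagation for all records of the mini-batch in parallel
        A1 = sigmoid(X_b @ W1)
        A2 = sigmoid(A1 @ W2)

        # 2: total cost of the mini-batch (here a simple quadratic cost)
        cost = 0.5 * np.sum((A2 - Y_b)**2) / len(X_b)

        # 4: error backward propagation => gradients of the cost w.r.t. all weights
        delta2  = (A2 - Y_b) * A2 * (1.0 - A2)
        delta1  = (delta2 @ W2.T) * A1 * (1.0 - A1)
        grad_W2 = A1.T @ delta2 / len(X_b)
        grad_W1 = X_b.T @ delta1 / len(X_b)

        # 5: adjust all weights on all layers (plain gradient descent, no momentum yet)
        W2 -= learn_rate * grad_W2
        W1 -= learn_rate * grad_W1

    print("epoch", epoch, ":: cost of last mini-batch =", cost)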

As discussed in the last article: The cost hyperplane changes a bit with each mini-batch. If there is a good mixture of records in a batch then the form of its specific cost hyperplane will (hopefully) resemble the form of an overall cost hyperplane, but it will not be the same. By the shuffling step in the outer loop we want to avoid that the same data records always influence the gradients at the same position in the correction procedure. Both statistical elements help a bit to overcome dominant records and a non-equal distribution of test records. If we had only pictures of the number 3 at the end of our MNIST data set we may start learning "3" representations very well, but not other numbers. Statistical variation also helps to avoid side minima on the overall cost hyperplane for all data records of the test set.

We shall implement the second step and third step in the epoch loop in the next article – when we are sure that the training algorithm works as expected. So, at the moment we will stop our training only after a given number of epochs.

More input parameters

In the first articles we had built an __init__() method to parameterize a training run. We now have to include three more parameters to control the backward propagation.

learn_rate = 0.001, # the learning rate (often called epsilon in textbooks)
decrease_const = 0.00001, # a factor for decreasing the learning rate with epochs
mom_rate = 0.0005, # a factor for momentum learning

The first parameter controls by how much we change weights with the help of gradient values. See formula (93) in the PDF of article VI (you find the Link to the latest version in the last section of this article). The second parameter will give us an option to decrease the learning rate with the number of training epochs. Note that a constant decrease rate only makes sense, if we can be relatively sure that we do not end up in a side minimum of the cost function.

The third parameter is interesting: It will allow us to mix the presently calculated weight correction with the correction term from the last step. So to say: We extend the “momentum” of the last correction into the next correction. This helps us not to follow indicated direction changes on the cost hyperplanes too fast.
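
To make the interplay of the three parameters concrete, here is a minimal, self-contained sketch of the kind of update we will code further below; the variable names are illustrative and not identical to the class attributes:

    import numpy as np
    
    # illustrative stand-ins for the three new parameters 
    learn_rate     = 0.001
    decrease_const = 0.00001
    mom_rate       = 0.0005
    
    w        = np.random.uniform(-0.5, 0.5, (10, 5))   # some weight matrix
    grad     = np.random.uniform(-1.0, 1.0, (10, 5))   # gradient from back propagation
    last_mom = np.zeros(w.shape)                       # correction term of the previous step
    
    for epoch in range(3):
        # systematic decrease of the learning rate with the epoch number
        learn_rate /= (1.0 + decrease_const * epoch)
        # present correction term 
        delta_w = learn_rate * grad
        # mix in the momentum of the last correction, then save the present one 
        w -= (delta_w + mom_rate * last_mom)
        last_mom = delta_w

You will recognize exactly this pattern again in step 7 of the method "_handle_mini_batch()" below.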

Some hygienic measures regarding variables

In the already written parts of the code we have used a prefix “ay_” for all variables which represent some vector or array like structure – including Python lists and Numpy arrays. For back propagation coding it will be more important to distinguish between lists and arrays. So, I changed the variable prefix for some important Python lists from “ay_” to “li_”. (I shall do it for all lists used in a further version). In addition I have changed the prefix for Python ranges to “rg_”. These changes will affect the contents and the interface of some methods. You will notice when we come to these methods.

The changed __init__() method

Our modified __init__() function now looks like this:

    def __init__(self, 
                 my_data_set = "mnist", 
                 n_hidden_layers = 1, 
                 ay_nodes_layers = [0, 100, 0], # array which should have as many elements as n_hidden + 2
                 n_nodes_layer_out = 10,  # expected number of nodes in output layer 
                                                  
                 my_activation_function = "sigmoid", 
                 my_out_function        = "sigmoid",   
                 my_loss_function       = "LogLoss",   
                 
                 n_size_mini_batch = 50,  # number of data elements in a mini-batch 
                 
                 n_epochs      = 1,
                 n_max_batches = -1,  # number of mini-batches to use during epochs - > 0 only for testing 
                                      # a negative value uses all mini-batches 
                 
                 lambda2_reg = 0.1,     # factor for quadratic regularization term 
                 lambda1_reg = 0.0,     # factor for linear regularization term 
                 
                 vect_mode = 'cols', 
                 
                 learn_rate = 0.001,        # the learning rate (often called epsilon in textbooks) 
                 decrease_const = 0.00001,  # a factor for decreasing the learning rate with epochs
                 mom_rate  = 0.0005,        # a factor for momentum learning
                 
                 figs_x1=12.0, figs_x2=8.0, 
                 legend_loc='upper right',
                 
                 b_print_test_data = True
                 
                 ):
        '''
        Initialization of MyANN
        Input: 
            data_set: type of dataset; so far only the "mnist", "mnist_784" datasets are known 
                      We use this information to prepare the input data and learn about the feature dimension. 
                      This info is used in preparing the size of the input layer.     
            n_hidden_layers = number of hidden layers => between input layer 0 and output layer n 
            
            ay_nodes_layers = [0, 100, 0 ] : We set the number of nodes in input layer_0 and the output_layer to zero 
                              Will be set to real number afterwards by infos from the input dataset. 
                              All other numbers are used for the node numbers of the hidden layers.
            n_nodes_layer_out = expected number of nodes in the output layer (is checked); 
                                this number corresponds to the number of categories NC = number of labels to be distinguished
            
            my_activation_function : name of the activation function to use 
            my_out_function : name of the "activation" function of the last layer which produces the output values 
            my_loss_function : name of the "cost" or "loss" function used for optimization 
            
            n_size_mini_batch : Number of elements/samples in a mini-batch of training data 
                                The number of mini-batches will be calculated from this
            
            n_epochs : number of epochs to calculate during training
            n_max_batches : > 0: maximum of mini-batches to use during training 
                            < 0: use all mini-batches  
            
            lambda2_reg:    The factor for the quadratic regularization term 
            lambda1_reg:    The factor for the linear regularization term 
            
            vect_mode: Are 1-dim data arrays (vectors) ordered by columns or rows ?
            
            learn_rate:      Learning rate - defines by how much we correct weights in the indicated direction of the gradient on the cost hyperplane.
            decrease_const:  Controls a systematic decrease of the learning rate with epoch number 
            mom_rate:        Momentum rate. Controls a mixture of the last with the present weight corrections (momentum learning)
            
            figs_x1=12.0, figs_x2=8.0 : Standard sizing of plots , 
            legend_loc='upper right': Position of legends in the plots
            
            b_print_test_data: Boolean variable to control the print out of some tests data 
             
         '''
        
        # Array (Python list) of known input data sets 
        self._input_data_sets = ["mnist", "mnist_784", "mnist_keras"]  
        self._my_data_set = my_data_set
        
        # X, y, X_train, y_train, X_test, y_test  
            # will be set by analyze_input_data 
            # X: Input array (2D) - at present status of MNIST image data, only.    
            # y: result (=classification data) [digits represent categories in the case of Mnist]
        self._X       = None 
        self._X_train = None 
        self._X_test  = None   
        self._y       = None 
        self._y_train = None 
        self._y_test  = None
        
        # relevant dimensions 
        # from input data information;  will be set in handle_input_data()
        self._dim_sets     = 0  
        self._dim_features = 0  
        self._n_labels     = 0   # number of unique labels - will be extracted from y-data 
        
      
        # Img sizes 
        self._dim_img      = 0 # should be sqrt(dim_features) - we assume square-like images  
        self._img_h        = 0 
        self._img_w        = 0 
        
        # Layers
        # ------
        # number of hidden layers 
        self._n_hidden_layers = n_hidden_layers
        # Number of total layers 
        self._n_total_layers = 2 + self._n_hidden_layers  
        # Nodes for hidden layers 
        self._ay_nodes_layers = np.array(ay_nodes_layers)
        # Number of nodes in output layer - will be checked against information from target arrays
        self._n_nodes_layer_out = n_nodes_layer_out
        
        # Weights 
        # --------
        # empty List for all weight-matrices for all layer-connections
        # Numbering : 
        # w[0] contains the weight matrix which connects layer 0 (input layer) to hidden layer 1 
        # w[1] contains the weight matrix which connects hidden layer 1 to the next layer 2 (hidden or output layer) 
        self._li_w = []
        
        # Arrays for encoded output labels - will be set in _encode_all_mnist_labels()
        # -------------------------------
        self._ay_onehot = None
        self._ay_oneval = None
        
        # Known Randomizer methods ( 0: np.random.randint, 1: np.random.uniform )  
        # ------------------
        self.__ay_known_randomizers = [0, 1]

        # Types of activation functions and output functions 
        # ------------------
        self.__ay_activation_functions = ["sigmoid"] # later also relu 
        self.__ay_output_functions     = ["sigmoid"] # later also softmax 
        
        # Types of cost functions 
        # ------------------
        self.__ay_loss_functions = ["LogLoss", "MSE" ] # later also other types of cost/loss functions  


        # the following dictionaries will be used for indirect function calls 
        self.__d_activation_funcs = {
            'sigmoid': self._sigmoid, 
            'relu':    self._relu
            }
        self.__d_output_funcs = { 
            'sigmoid': self._sigmoid, 
            'softmax': self._softmax
            }  
        self.__d_loss_funcs = { 
            'LogLoss': self._loss_LogLoss, 
            'MSE': self._loss_MSE
            }  
        # Derivative functions 
        self.__d_D_activation_funcs = {
            'sigmoid': self._D_sigmoid, 
            'relu':    self._D_relu
            }
        self.__d_D_output_funcs = { 
            'sigmoid': self._D_sigmoid, 
            'softmax': self._D_softmax
            }  
        self.__d_D_loss_funcs = { 
            'LogLoss': self._D_loss_LogLoss, 
            'MSE': self._D_loss_MSE
            }  
        
        
        # The following variables will later be set by _check_and_set_activation_and_out_functions()            
        
        self._my_act_func  = my_activation_function
        self._my_out_func  = my_out_function
        self._my_loss_func = my_loss_function
        self._act_func = None    
        self._out_func = None    
        self._loss_func = None    
        
        # number of data samples in a mini-batch 
        self._n_size_mini_batch = n_size_mini_batch
        self._n_mini_batches = None  # will be determined by _get_number_of_mini_batches()

        # maximum number of epochs - we set this number to an assumed maximum 
        # - as we shall build a backup and reload functionality for training, this should not be a major problem 
        self._n_epochs = n_epochs
        
        # maximum number of batches to handle ( if < 0 => all!) 
        self._n_max_batches = n_max_batches
        # actual number of batches 
        self._n_batches = None

        # regularization parameters
        self._lambda2_reg = lambda2_reg
        self._lambda1_reg = lambda1_reg
        
        # parameters for the learning rate, its decrease and momentum learning 
        self._learn_rate = learn_rate
        self._decrease_const = decrease_const
        self._mom_rate   = mom_rate
        self._li_mom = [None] *  self._n_total_layers
        
        # book-keeping for epochs and mini-batches 
        # -------------------------------
        # range for epochs - will be set by _prepare-epochs_and_batches() 
        self._rg_idx_epochs = None
        # range for mini-batches 
        self._rg_idx_batches = None
        # dimension of the numpy arrays for book-keeping - will be set in _prepare_epochs_and_batches() 
        self._shape_epochs_batches = None    # (n_epochs, n_batches, 1) 

        # list for error values at outermost layer for minibatches and epochs during training
        # we use a numpy array here because we can redimension it
        self._ay_theta = None
        # list for cost values of mini-batches during training 
        # The list will later be split into sections for epochs 
        self._ay_costs = None
        
        
        # Data elements for back propagation
        # ----------------------------------
        
        # 2-dim array of partial derivatives of the elements of an additive cost function 
        # The derivative is taken with respect to the output results a_j = ay_ANN_out[j]
        # The array dimensions account for nodes and samples of a mini_batch. The array will be set in function 
        # self._initiate_bw_propagation()
        self._ay_delta_out_batch = None
        

        # parameter to allow printing of some test data 
        self._b_print_test_data = b_print_test_data

        # Plot handling 
        # --------------
        # Alternatives to resize plots 
        # 1: just resize figure  2: resize plus create subplots() [figure + axes] 
        self._plot_resize_alternative = 1 
        # Plot-sizing
        self._figs_x1 = figs_x1
        self._figs_x2 = figs_x2
        self._fig = None
        self._ax  = None 
        # alternative 2 does resizing and (!) subplots() 
        self.initiate_and_resize_plot(self._plot_resize_alternative)        
        
        
        # ***********
        # operations 
        # ***********
        
        # check and handle input data 
        self._handle_input_data()
        # set the ANN structure 
        self._set_ANN_structure()
        
        # Prepare epoch and batch-handling - sets ranges, limits num of mini-batches and initializes book-keeping arrays
        self._rg_idx_epochs, self._rg_idx_batches = self._prepare_epochs_and_batches()
        
        # perform training 
        start_c = time.perf_counter()
        self._fit(b_print=True, b_measure_batch_time=False)
        end_c = time.perf_counter()
        print('\n\n ------') 
        print('Total training Time_CPU: ', end_c - start_c) 
        print("\nStopping program regularily")
        sys.exit()

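For orientation, a hypothetical instantiation could look like the following; the class name "MyANN" is taken from the docstring, and all parameter values are just examples, not recommended settings:

    ANN = MyANN( my_data_set = "mnist_keras", 
                 n_hidden_layers = 2, 
                 ay_nodes_layers = [0, 70, 30, 0], 
                 n_nodes_layer_out = 10, 
                 n_size_mini_batch = 500, 
                 n_epochs = 10, 
                 n_max_batches = -1, 
                 learn_rate = 0.001, 
                 decrease_const = 0.00001, 
                 mom_rate = 0.0005, 
                 b_print_test_data = False )

Remember that at the present stage training starts directly inside __init__() and the program stops afterwards via sys.exit().
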
The extended method _set_ANN_structure()

I do not change the method "_handle_input_data()". However, I extend the method "_set_ANN_structure()" by a statement to initialize a list with momentum matrices for all layers.

    '''-- Method to set ANN structure --''' 
    def _set_ANN_structure(self):
        # check consistency of the node-number list with the number of hidden layers (n_hidden)
        self._check_layer_and_node_numbers()
        # set node numbers for the input layer and the output layer
        self._set_nodes_for_input_output_layers() 
        self._show_node_numbers() 
        
        # create the weight matrix between input and first hidden layer 
        self._create_WM_Input() 
        # create weight matrices between the hidden layers and between the last hidden and the output layer 
        self._create_WM_Hidden() 
        
        # initialize momentum differences
        self._create_momentum_matrices()
        #print("\nLength of li_mom = ", str(len(self._li_mom)))
        
        # check and set activation functions 
        self._check_and_set_activation_and_out_functions()
        self._check_and_set_loss_function()
        
        return None

 
The following box shows the changed functions _create_WM_Input(), _create_WM_Hidden() and the new function _create_momentum_matrices():

    '''-- Method to create the weight matrix between L0/L1 --'''
    def _create_WM_Input(self):
        '''
        Method to create the weight matrix between the input layer L0 and the first hidden layer L1 
        The dimension will be taken from the structure of the input data 
        We need to fill self._li_w[0] with a matrix for connections of all nodes in L0 with all nodes in L1
        We fill the matrix with random numbers, here from the interval [-0.5, 0.5] 
        '''
        # the node number of layer 0 does not yet include the bias node 
        num_nodes_layer_0 = self._ay_nodes_layers[0]
        num_nodes_with_bias_layer_0 = num_nodes_layer_0 + 1 
        num_nodes_layer_1 = self._ay_nodes_layers[1] 
        
        # fill the matrix with random values 
        #rand_low  = -1.0
        #rand_high = 1.0
        rand_low  = -0.5
        rand_high = 0.5
        rand_size = num_nodes_layer_1 * (num_nodes_with_bias_layer_0) 
        
        randomizer = 1 # method np.random.uniform   
        
        w0 = self._create_vector_with_random_values(rand_low, rand_high, rand_size, randomizer)
        w0 = w0.reshape(num_nodes_layer_1, num_nodes_with_bias_layer_0)
        
        # put the weight matrix into array of matrices 
        self._li_w.append(w0)
        print("\nShape of weight matrix between layers 0 and 1 " + str(self._li_w[0].shape))
        
#
    '''-- Method to create the weight-matrices for hidden layers--''' 
    def _create_WM_Hidden(self):
        '''
        Method to create the weights of the hidden layers, i.e. between [L1, L2] and so on ... [L_n, L_out] 
        We fill the matrix with random numbers between [-1, 1] 
        '''
        
        # The "+1" is required due to range properties ! 
        rg_hidden_layers = range(1, self._n_hidden_layers + 1, 1)

        # for random operation 
        rand_low  = -1.0
        rand_high = 1.0
        
        for i in rg_hidden_layers: 
            print ("Creating weight matrix for layer " + str(i) + " to layer " + str(i+1) )
            
            num_nodes_layer = self._ay_nodes_layers[i] 
            num_nodes_with_bias_layer = num_nodes_layer + 1 
            
            # the number of the next layer is taken without the bias node!
            num_nodes_layer_next = self._ay_nodes_layers[i+1]
            
            # assign random values  
            rand_size = num_nodes_layer_next * num_nodes_with_bias_layer   
            
            randomizer = 1 # np.random.uniform
            
            w_i_next = self._create_vector_with_random_values(rand_low, rand_high, rand_size, randomizer)   
            w_i_next = w_i_next.reshape(num_nodes_layer_next, num_nodes_with_bias_layer)
            
            # put the weight matrix into our array of matrices 
            self._li_w.append(w_i_next)
            print("Shape of weight matrix between layers " + str(i) + " and " + str(i+1) + " = " + str(self._li_w[i].shape))
#
    '''-- Method to create and initialize matrices for momentum learning (differences) '''
    def _create_momentum_matrices(self):
        rg_layers = range(0, self._n_total_layers - 1)
        for i in rg_layers: 
            self._li_mom[i] = np.zeros(self._li_w[i].shape)
            #print("shape of li_mom[" + str(i) + "] = ", self._li_mom[i].shape)

 

The modified functions _fit() and _handle_mini_batch()

The _fit()-function is modified to include a systematic decrease of the learning rate.

    ''' -- Method to perform the training loops over epochs and mini-batches -- '''
    def _fit(self, b_print = False, b_measure_batch_time = False):
        
        rg_idx_epochs  = self._rg_idx_epochs 
        rg_idx_batches = self._rg_idx_batches
        if (b_print):    
            print("\nnumber of epochs = " + str(len(rg_idx_epochs)))
            print("max number of batches = " + str(len(rg_idx_batches)))
       
        # loop over epochs
        for idxe in rg_idx_epochs:
            if (b_print):
                print("\n ---------")
                print("\nStarting epoch " + str(idxe+1))
            
            # systematic decrease of the learning rate with the epoch number 
            self._learn_rate /= (1.0 + self._decrease_const * idxe)
            
            # loop over mini-batches
            for idxb in rg_idx_batches:
                if (b_print):
                    print("\n ---------")
                    print("\nDealing with mini-batch " + str(idxb+1))
                if b_measure_batch_time: 
                    start_0 = time.perf_counter()
                # deal with a mini-batch
                self._handle_mini_batch(num_batch = idxb, num_epoch=idxe, b_print_y_vals = False, b_print = False)
                if b_measure_batch_time: 
                    end_0 = time.perf_counter()
                    print('Time_CPU for batch ' + str(idxb+1), end_0 - start_0) 
                
                #if idxb == 100: 
                #    sys.exit() 
        
        return None

 
Note that the number of epochs is determined by an external parameter as an upper limit of the range “rg_idx_epochs”.
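
The method "_prepare_epochs_and_batches()" itself is not shown in this article. Just to fix ideas, a plausible sketch of what it has to deliver – ranges for the two loops and index arrays for the mini-batches – might look like this; the function and variable names here are assumptions for illustration, not the real class code:

    import numpy as np
    
    def prepare_epochs_and_batches(n_records, n_epochs, n_size_mini_batch):
        # number of complete mini-batches which fit into the training set
        n_batches = n_records // n_size_mini_batch
        rg_idx_epochs  = range(n_epochs)
        rg_idx_batches = range(n_batches)
        # shuffled record indices, split into one index array per mini-batch
        idx = np.random.permutation(n_records)[: n_batches * n_size_mini_batch]
        ay_mini_batches = np.split(idx, n_batches)
        return rg_idx_epochs, rg_idx_batches, ay_mini_batches
    
    rg_e, rg_b, batches = prepare_epochs_and_batches(60000, 2, 50)
    print(len(rg_e), len(rg_b), batches[0].shape)   # 2 1200 (50,)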

Method “_handle_mini_batch()” requires several changes: First we define lists which are required to save matrix data of the backward propagation. And, of course, we call a method to perform the BW propagation (see step 6 in the code). Some statements print shapes, if required. At step 7 of the code we correct the weights by using the learning rate and the calculated gradient of the loss function.

Note that we mix the correction evaluated for the last mini-batch with the correction evaluated for the present mini-batch! This corresponds to a simple form of momentum learning. We then have to save the present correction values, of course. Note that the list for the momentum corrections "li_mom" is, therefore, not deleted at the end of a mini-batch treatment!

In addition to saving the total costs per mini-batch we now also save a mean error at the output level. The average is calculated with the help of Numpy's function numpy.average(). Remember: we build the average over the errors at all output nodes and all records of the mini-batch.
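
As a quick illustration of this averaging (a tiny stand-alone example with made-up numbers, not class code): for a (nodes x samples) error matrix, numpy.average(numpy.abs(...)) returns one scalar over all elements:

    import numpy as np
    
    # 2 output nodes x 4 records of a mini-batch 
    ay_theta_out = np.array([[ 0.10, -0.20, 0.05, 0.30],
                             [-0.10,  0.20, 0.05, 0.00]])
    print(np.average(np.abs(ay_theta_out)))   # 0.125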


    ''' -- Method to deal with a batch -- '''
    def _handle_mini_batch(self, num_batch = 0, num_epoch = 0, b_print_y_vals = False, b_print = False, b_keep_bw_matrices = True):
        '''
        For each batch we keep the input data array Z and the output data A (output of activation function!) 
        for all layers in Python lists
        We can use this as input variables in function calls - mutable variables are handled by reference values !
        We receive the A and Z data from the propagation functions and pass them on to cost and gradient calculation functions
        
        As an initial step we define the Python lists li_Z_in_layer and li_A_out_layer 
        and fill in the first input elements for layer L0  
        
        Forward propagation:
        --------------------
        li_Z_in_layer : List of layer-related 2-dim matrices for input values z at each node (rows) and all batch-samples (cols).
        li_A_out_layer: List of layer-related 2-dim matrices for output values a at each node (rows) and all batch-samples (cols).
                        The output is created by Phi(z), where Phi represents an activation or output function 
        
        Note that the matrices in li_A_out_layer will be permanently extended by a row (over all samples) 
        to account for a bias node of each inner layer. This happens during FW propagation. 
        
        Note that the matrices in li_Z_in_layer will be temporarily extended by a row (over all samples) 
        to account for a bias node of each inner layer. This happens during BW propagation.
        
        Backward propagation:
        -------------------- 
        li_delta_out:  Startup matrix for _out_delta-values at the outermost layer 
        li_grad_layer: List of layer-related matrices with gradient values for the correction of the weights 
        
        Depending on parameter "b_keep_bw_matrices" we keep 
            - a list of layer-related matrices D with values for the derivatives of the act./output functions
            - a list of layer-related matrices for the back propagated delta-values 
        in lists during back propagation. This can support error analysis. 
        
        All matrices in the lists are 2 dimensional with dimensions for nodes (rows) and training samples (cols) 
        All these lists are deleted at the end of the function to accelerate garbage collection
        
        Input parameters: 
        ----------------
        num_epoch:     Number of present epoch
        num_batch:    Number of present mini-batch 
        '''
        # Layer-related lists to be filled with 2-dim Numpy matrices during FW propagation
        # ********************************************************************************
        li_Z_in_layer  = [None] * self._n_total_layers # List of matrices with z-input values for each layer; filled during FW-propagation
        li_A_out_layer = li_Z_in_layer.copy()          # List of matrices with results of activation/output-functions for each layer; filled during FW-propagation
        li_delta_out   = li_Z_in_layer.copy()          # Matrix with out_delta-values at the outermost layer 
        li_delta_layer = li_Z_in_layer.copy()          # List of the matrices for the BW propagated delta values 
        li_D_layer     = li_Z_in_layer.copy()          # List of the derivative matrices D containing partial derivatives of the activation/output functions 
        li_grad_layer  = li_Z_in_layer.copy()          # List of the matrices with gradient values for weight corrections
        
        if b_print: 
            len_lists = len(li_A_out_layer)
            print("\nnum_epoch = ", num_epoch, "  num_batch = ", num_batch )
            print("\nhandle_mini_batch(): length of lists = ", len_lists)
            self._info_point_print("handle_mini_batch: point 1")
        
        # Print some infos
        # ****************
        if b_print:
            self._print_batch_infos()
            self._info_point_print("handle_mini_batch: point 2")
        
        # Major steps for the mini-batch during one epoch iteration 
        # **********************************************************
        
        # Step 0: List of indices for data records in the present mini-batch
        # ******
        ay_idx_batch = self._ay_mini_batches[num_batch]
        
        # Step 1: Special preparation of the Z-input to the MLP's input Layer L0
        # ******
        # Layer L0: Fill in the input vector for the ANN's input layer L0 
        li_Z_in_layer[0] = self._X_train[ay_idx_batch] # numpy arrays can be indexed by an array of integers
        if b_print:
            print("\nPropagation : Shape of X_in = li_Z_in_layer = ", li_Z_in_layer[0].shape)           
            #print("\nidx, expected y_value of Layer L0-input :")           
            #for idx in self._ay_mini_batches[num_batch]:
            #    print(str(idx) + ', ' + str(self._y_train[idx]) )
            self._info_point_print("handle_mini_batch: point 3")
        
        # Step 2: Layer L0: We need to transpose the data of the input layer 
        # *******
        ay_Z_in_0T       = li_Z_in_layer[0].T
        li_Z_in_layer[0] = ay_Z_in_0T
        if b_print:
            print("\nPropagation : Shape of transposed X_in = li_Z_in_layer = ", li_Z_in_layer[0].shape)           
            self._info_point_print("handle_mini_batch: point 4")
        
        # Step 3: Call forward propagation method for the present mini-batch of training records
        # *******
        # this function will fill the li_Z_in- and li_A_out-lists with matrices per layer
        self._fw_propagation(li_Z_in = li_Z_in_layer, li_A_out = li_A_out_layer, b_print = b_print) 
        
        if b_print:
            ilayer = range(0, self._n_total_layers)
            print("\n ---- ")
            print("\nAfter propagation through all " + str(self._n_total_layers) + " layers: ")
            for il in ilayer:
                print("Shape of Z_in of layer L" + str(il) + " = " + str(li_Z_in_layer[il].shape))
                print("Shape of A_out of layer L" + str(il) + " = " + str(li_A_out_layer[il].shape))
                if il < self._n_total_layers-1:
                    print("Shape of W of layer L" + str(il) + " = " + str(self._li_w[il].shape))
                    print("Shape of Mom of layer L" + str(il) + " = " + str(self._li_mom[il].shape))
            self._info_point_print("handle_mini_batch: point 5")
        
        
        # Step 4: Cost calculation for the mini-batch 
        # ********
        ay_y_enc = self._ay_onehot[:, ay_idx_batch]
        ay_ANN_out = li_A_out_layer[self._n_total_layers-1]
        # print("Shape of ay_ANN_out = " + str(ay_ANN_out.shape))
        
        total_costs_batch = self._calculate_loss_for_batch(ay_y_enc, ay_ANN_out, b_print = False)
        # we add the present cost value to the numpy array 
        self._ay_costs[num_epoch, num_batch] = total_costs_batch
        if b_print:
            print("\n total costs of mini_batch = ", self._ay_costs[num_epoch, num_batch])
            self._info_point_print("handle_mini_batch: point 6")
        print("\n total costs of mini_batch = ", self._ay_costs[num_epoch, num_batch])
        
        # Step 5: Avg-error for later plotting 
        # ********
        # mean "error" values - averaged over all nodes at outermost layer and all data sets of a mini-batch 
        ay_theta_out = ay_y_enc - ay_ANN_out
        if (b_print): 
            print("Shape of ay_theta_out = " + str(ay_theta_out.shape))
        ay_theta_avg = np.average(np.abs(ay_theta_out)) 
        self._ay_theta[num_epoch, num_batch] = ay_theta_avg 
        
        if b_print:
            print("\navg total error of mini_batch = ", self._ay_theta[num_epoch, num_batch])
            self._info_point_print("handle_mini_batch: point 7")
        print("avg total error of mini_batch = ", self._ay_theta[num_epoch, num_batch])
        
        
        # Step 6: Perform gradient calculation via back propagation of errors
        # ******* 
        self._bw_propagation( ay_y_enc = ay_y_enc, 
                              li_Z_in = li_Z_in_layer, 
                              li_A_out = li_A_out_layer, 
                              li_delta_out = li_delta_out, 
                              li_delta = li_delta_layer,
                              li_D = li_D_layer, 
                              li_grad = li_grad_layer, 
                              b_print = b_print,
                              b_internal_timing = False 
                              ) 
        
        
        # Step 7: Adjustment of weights  
        # *******
        rg_layer=range(0, self._n_total_layers -1)
        for N in rg_layer:
            delta_w_N = self._learn_rate * li_grad_layer[N]
            self._li_w[N] -= ( delta_w_N + (self._mom_rate * self._li_mom[N]) )
            # save momentum
            self._li_mom[N] = delta_w_N
        
        # try to accelerate garbage handling
        # **************
        if len(li_Z_in_layer) > 0:
            del li_Z_in_layer
        if len(li_A_out_layer) > 0:
            del li_A_out_layer
        if len(li_delta_out) > 0:
            del li_delta_out
        if len(li_delta_layer) > 0:
            del li_delta_layer
        if len(li_D_layer) > 0:
            del li_D_layer
        if len(li_grad_layer) > 0:
            del li_grad_layer
            
        return None

 

Forward Propagation

The method for forward propagation remains unchanged in its structure. We only changed the prefix for the Python lists.

    ''' -- Method to handle FW propagation for a mini-batch --'''
    def _fw_propagation(self, li_Z_in, li_A_out, b_print= False):
        
        b_internal_timing = False
        
        # index range of layers 
        #    Note that we count from 0 (=> L0) to E (=> the outermost layer L_E) 
        #    Careful: during BW-propagation we may need a correct indexing of lists filled during FW-propagation
        ilayer = range(0, self._n_total_layers-1)

        # propagation loop
        # ***************
        for il in ilayer:
            if b_internal_timing: start_0 = time.perf_counter()
            
            if b_print: 
                print("\nStarting propagation between L" + str(il) + " and L" + str(il+1))
                print("Shape of Z_in of layer L" + str(il) + " (without bias) = " + str(li_Z_in[il].shape))
            
            # Step 1: Take input of last layer and apply activation function 
            # ******
            if il == 0: 
                A_out_il = li_Z_in[il] # L0: activation function is identity 
            else: 
                A_out_il = self._act_func( li_Z_in[il] ) # use real activation function 
            
            # Step 2: Add bias node
            # ****** 
            A_out_il = self._add_bias_neuron_to_layer(A_out_il, 'row')
            # save in array     
            li_A_out[il] = A_out_il
            if b_print: 
                print("Shape of A_out of layer L" + str(il) + " (with bias) = " + str(li_A_out[il].shape))
            
            # Step 3: Propagate by matrix operation
            # ****** 
            Z_in_ilp1 = np.dot(self._li_w[il], A_out_il) 
            li_Z_in[il+1] = Z_in_ilp1
            
            if b_internal_timing: 
                end_0 = time.perf_counter()
                print('Time_CPU for layer propagation L' + str(il) + ' to L' + str(il+1), end_0 - start_0) 
        
        # treatment of the last layer 
        # ***************************
        il = il + 1
        if b_print:
            print("\nShape of Z_in of layer L" + str(il) + " = " + str(li_Z_in[il].shape))
        A_out_il = self._out_func( li_Z_in[il] ) # use the output function 
        li_A_out[il] = A_out_il
        if b_print:
            print("Shape of A_out of last layer L" + str(il) + " = " + str(li_A_out[il].shape))
        
        return None

 
Addendum, 15.05.2020:
We shall later learn that the treatment of bias neurons can be done more efficiently. The present way of coding it reduces performance – especially at the input layer. See the article series starting with
MLP, Numpy, TF2 – performance issues – Step I – float32, reduction of back propagation
for more information. At the present stage of our discussion we are, however, more interested in getting a working code first – and not so much in performance optimization.

Methods for Error Backward Propagation

In contrast to the recipe given in my PDF on the EBP-math we cannot calculate the matrices with the derivatives of the activation functions “ay_D” in advance for all layers. The reason was discussed in the last article VII: Some matrices have to be intermediately adjusted for a bias-neuron, which is ignored in the analysis of the PDF.

The resulting code of our method for EBP looks like given below:

 
    ''' -- Method to handle error BW propagation for a mini-batch --'''
    def _bw_propagation(self, 
                        ay_y_enc, li_Z_in, li_A_out, 
                        li_delta_out, li_delta, li_D, li_grad, 
                        b_print = True, b_internal_timing = False):
        
        # List initialization: All parameter lists or arrays are filled or to be filled by layers 
        # Note: the lists li_Z_in, li_A_out were already filled by _fw_propagation() for the present batch 
        
        # Initiate BW propagation - provide delta-matrices for outermost layer
        # *********************** 
        # Input Z at outermost layer E  (4 layers -> layer 3)
        ay_Z_E = li_Z_in[self._n_total_layers-1]
        # Output A at outermost layer E (was calculated by output function)
        ay_A_E = li_A_out[self._n_total_layers-1]
        
        # Calculate D-matrix (derivative of the output function) at the outermost layer - presently only D_sigmoid 
        ay_D_E = self._calculate_D_E(ay_Z_E=ay_Z_E, b_print=b_print )
        
        # Get the 2 delta matrices for the outermost layer (only layer E has 2 delta-matrices)
        ay_delta_E, ay_delta_out_E = self._calculate_delta_E(ay_y_enc=ay_y_enc, ay_A_E=ay_A_E, ay_D_E=ay_D_E, b_print=b_print) 
        
        # We check the shapes
        shape_theory = (self._n_nodes_layer_out, self._n_size_mini_batch)
        if (b_print and ay_delta_E.shape != shape_theory):
            print("\nError: Shape of ay_delta_E is wrong:")
            print("Shape = ", ay_delta_E.shape, "  ::  should be = ", shape_theory )
        if (b_print and ay_D_E.shape != shape_theory):
            print("\nError: Shape of ay_D_E is wrong:")
            print("Shape = ", ay_D_E.shape, "  ::  should be = ", shape_theory )
        
        # add the matrices to their lists ; li_delta_out gets only one element 
        idxE = self._n_total_layers - 1
        li_delta_out[idxE] = ay_delta_out_E # this happens only once
        li_delta[idxE]     = ay_delta_E
        li_D[idxE]         = ay_D_E
        li_grad[idxE]      = None    # On the outermost layer there is no gradient ! 
        
        if b_print:
            print("bw: Shape delta_E = ", li_delta[idxE].shape)
            print("bw: Shape D_E = ", ay_D_E.shape)
            self._info_point_print("bw_propagation: point bw_1")
        
        
        # Loop over all layers in reverse direction 
        # ******************************************
        # index range of target layers N in BW direction (starting with E-1 => 4 layers -> layer 2))
        if b_print:
            range_N_bw_layer_test = reversed(range(0, self._n_total_layers-1))   # must be -1 as the last element is not taken 
            rg_list = list(range_N_bw_layer_test) # Note this exhausts the range-object
            print("range_N_bw_layer = ", rg_list)
        
        range_N_bw_layer = reversed(range(0, self._n_total_layers-1))   # must be -1 as the last element is not taken 
        
        # loop over layers 
        for N in range_N_bw_layer:
            if b_print:
                print("\n N (layer) = " + str(N) +"\n")
            # start timer 
            if b_internal_timing: start_0 = time.perf_counter()
            
            # Back Propagation operations between layers N+1 and N 
            # *******************************************************
            # this method handles the special treatment of bias nodes in Z_in, too
            ay_delta_N, ay_D_N, ay_grad_N = self._bw_prop_Np1_to_N( N=N, li_Z_in=li_Z_in, li_A_out=li_A_out, li_delta=li_delta, b_print=False )
            
            if b_internal_timing: 
                end_0 = time.perf_counter()
                print('Time_CPU for BW layer operations ', end_0 - start_0) 
            
            # add matrices to their lists 
            li_delta[N] = ay_delta_N
            li_D[N]     = ay_D_N
            li_grad[N]= ay_grad_N
            #sys.exit()
        
        return

 

We first handle the necessary matrix evaluations for the outermost layer. We use two helper functions there: one to calculate the derivative of the output function with respect to its local input z [ _calculate_D_E() ] and one to calculate the values of the "delta"-terms at all nodes and for all records [ _calculate_delta_E() ] according to the prescription in the PDF:

    
    ''' -- Method to calculate the matrix with the derivative values of the output function at outermost layer '''
    def _calculate_D_E(self, ay_Z_E, b_print= True):
        '''
        This method calculates and returns the D-matrix for the outermost layer
        The D matrix contains derivatives of the output function with respect to local input "z_j" at outermost nodes. 
        
        Returns
        ------
        ay_D_E:    Matrix with derivative values of the output function 
                   with respect to local z_j values at the nodes of the outermost layer E
        Note: This is a 2-dim matrix over layer nodes and training samples of the mini-batch
        '''
        if self._my_out_func == 'sigmoid':
            ay_D_E = self._D_sigmoid(ay_Z=ay_Z_E)
        
        else:
            print("The derivative for output function " + self._my_out_func + " is not known yet!" )
            sys.exit()
        
        return ay_D_E

    ''' -- Method to calculate the delta_E matrix as a starting point of the backward propagation '''
    def _calculate_delta_E(self, ay_y_enc, ay_A_E, ay_D_E, b_print= False):
        '''
        This method calculates and returns the 2 delta-matrices for the outermost layer 
        
        Returns
        ------
        delta_E:     delta_matrix of the outermost layer (indicated by E)
        delta_out:   delta_out matrix => elements are local derivative values of the cost function 
                     with respect to the output "a_j" at an outermost node  
                     !!! delta_out will only be returned if calculable !!!
        
        Note: these are 2-dim matrices over layer nodes and training samples of the mini-batch
        '''
        
        if self._my_loss_func == 'LogLoss':
            # Calculate delta_S_E directly to avoid problems with zero denominators
            ay_delta_E = ay_A_E - ay_y_enc
            # delta_out is fetched but may be None 
            ay_delta_out, ay_D_numerator, ay_D_denominator = self._D_loss_LogLoss(ay_y_enc, ay_A_E, b_print = False)
            
            # To be done: Analyze critical values in D_denominator 
            
            # Release variables explicitly 
            del ay_D_numerator
            del ay_D_denominator
            
        
        if self._my_loss_func == 'MSE':
            # First calculate delta_out and then delta_E
            ay_delta_out = self._D_loss_MSE(ay_y_enc, ay_A_E, b_print=False)
            # calculate delta_E via an element-wise multiplication 
            ay_delta_E = ay_delta_out * ay_D_E
                    
        return ay_delta_E, ay_delta_out

 
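A small numerical check (stand-alone, with made-up numbers) shows why the shortcut for the LogLoss case is safe: the chain-rule product delta_out * D_E equals the directly coded difference a - y for sigmoid output values:

    import numpy as np
    
    a = np.array([[0.8, 0.3], [0.1, 0.6]])     # ANN output (sigmoid values)
    y = np.array([[1.0, 0.0], [0.0, 1.0]])     # one-hot encoded target values
    
    delta_out = (a - y) / (-(a - 1.0) * a)     # derivative of LogLoss w.r.t. the output a
    D_E       = a * (1.0 - a)                  # derivative of the sigmoid w.r.t. its input z
    print(np.allclose(delta_out * D_E, a - y)) # True
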

Further required helper methods to calculate the cost functions and related derivatives are :

    ''' method to calculate the logistic regression loss function '''
    def _loss_LogLoss(self, ay_y_enc, ay_ANN_out, b_print = False):
        '''
        Method which calculates LogReg loss function in a vectorized form on multidimensional Numpy arrays 
        '''
        b_test = False

        if b_print:
            print("From LogLoss: shape of ay_y_enc =  " + str(ay_y_enc.shape))
            print("From LogLoss: shape of ay_ANN_out =  " + str(ay_ANN_out.shape))
            print("LogLoss: ay_y_enc = ", ay_y_enc) 
            print("LogLoss: ANN_out = \n", ay_ANN_out) 
            print("LogLoss: log(ay_ANN_out) =  \n", np.log(ay_ANN_out) )

        # The following means an element-wise (!) operation between matrices of the same shape!
        Log1 = -ay_y_enc * (np.log(ay_ANN_out))
        # The following means an element-wise (!) operation between matrices of the same shape!
        Log2 = (1 - ay_y_enc) * np.log(1 - ay_ANN_out)
        
        # the next operation calculates the sum over all matrix elements 
        # - thus getting the total costs for all mini-batch elements 
        cost = np.sum(Log1 - Log2)
        
        #if b_print and b_test:
            # Log1_x = -ay_y_enc.dot((np.log(ay_ANN_out)).T)
            # print("From LogLoss: L1 =   " + str(L1))
            # print("From LogLoss: L1X =  " + str(L1X))
        
        if b_print: 
            print("From LogLoss: cost =  " + str(cost))
        
        # The total costs is just a number (scalar)
        return cost
#
    ''' method to calculate the derivative of the logistic regression loss function 
        with respect to the output values '''
    def _D_loss_LogLoss(self, ay_y_enc, ay_ANN_out, b_print = False):
        '''
        This function returns the out_delta_S-matrix which is required to initialize the 
        BW propagation (EBP) 
        Note ANN_out is the A_out-list element ( a 2-dim matrix) for the outermost layer 
        In this case we have to take care of denominators = 0 
        '''
        D_numerator = ay_ANN_out - ay_y_enc
        D_denominator = -(ay_ANN_out - 1.0) * ay_ANN_out
        n_critical = np.count_nonzero(D_denominator < 1.0e-8)
        if n_critical > 0:
            delta_s_out = None
        else:
            delta_s_out = np.divide(D_numerator, D_denominator)
        return delta_s_out, D_numerator, D_denominator
#
    ''' method to calculate the MSE loss function '''
    def _loss_MSE(self, ay_y_enc, ay_ANN_out, b_print = False):
        '''
        Method which calculates the MSE loss function in a vectorized form on multidimensional Numpy arrays 
        '''
        if b_print:
            print("From loss_MSE: shape of ay_y_enc =  " + str(ay_y_enc.shape))
            print("From loss_MSE: shape of ay_ANN_out =  " + str(ay_ANN_out.shape))
            #print("LogReg: ay_y_enc = ", ay_y_enc) 
            #print("LogReg: ANN_out = \n", ay_
ANN_out) 
            #print("LogReg: log(ay_ANN_out) =  \n", np.log(ay_ANN_out) )
        
        cost = 0.5 * np.sum( np.square( ay_y_enc - ay_ANN_out ) )

        if b_print: 
            print("From loss_MSE: cost =  " + str(cost))
        
        return cost
#
    ''' method to calculate the derivative of the MSE loss function 
        with respect to the output values '''
    def _D_loss_MSE(self, ay_y_enc, ay_ANN_out, b_print = False):
        '''
        This function returns the out_delta_S - matrix which is required to initialize the 
        BW propagation (EBP) 
        Note ANN_out is the A_out-list element ( a 2-dim matrix) for the outermost layer
        In this case the output is harmless (no critical denominator) 
        '''
        delta_s_out = ay_ANN_out - ay_y_enc
        return delta_s_out

 
You see that we are a bit careful to avoid zero denominators for the Logarithmic loss function in all of our helper functions.
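
The reason for this care can be demonstrated with a few saturated sigmoid output values (a stand-alone snippet with made-up numbers): the denominator a*(1-a) approaches zero when the output a gets very close to 0 or 1, which is exactly what the threshold check in "_D_loss_LogLoss()" detects:

    import numpy as np
    
    a = np.array([[1.0 - 1.0e-9, 0.5],
                  [1.0e-9,       0.3]])        # two saturated output values
    D_denominator = -(a - 1.0) * a             # = a * (1 - a)
    print(np.count_nonzero(D_denominator < 1.0e-8))   # 2 critical elements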

The check statements for shapes can be eliminated in a future version when we are sure that everything works correctly. Keeping the layer-specific matrices during the handling of a mini-batch will also be helpful for error analysis in the beginning. Later we may only keep the gradient matrices and those layer-specific matrices which are required for the local calculations during back propagation.

Then we turn to the loop over all other layers down to layer L0. The matrix operations to be performed for all these layers are handled in a further method:

    
    ''' -- Method to calculate the BW-propagated delta-matrix and the gradient matrix to/for layer N '''
    def _bw_prop_Np1_to_N(self, N, li_Z_in, li_A_out, li_delta, b_print=False):
        '''
        BW-error-propagation between layer N+1 and N 
        Inputs: 
            li_Z_in:  List of input Z-matrices on all layers - values were calculated during FW-propagation
            li_A_out: List of output A-matrices - values were calculated during FW-propagation
            li_delta: List of delta-matrices - values for the outermost layer E down to layer N+1 should exist 
        
        Returns: 
            ay_delta_N - delta-matrix of layer N (required in subsequent steps)
            ay_D_N     - derivative matrix for the activation function on layer N 
            ay_grad_N  - matrix with gradient elements of the cost function with respect to the weights on layer N 
        '''
        
        if b_print:
            print("Inside _bw_prop_Np1_to_N: N = " + str(N) )
        
        # Prepare required quantities - and add bias neuron to ay_Z_in 
        # ****************************
        
        # Weight matrix mediating between layer N and N+1 
        ay_W_N = self._li_w[N]
        shape_W_N   = ay_W_N.shape # due to the bias node its second dim is 1 bigger than the first dim of the Z-matrix of layer N 
        if b_print:
            print("shape of W_N = ", shape_W_N )

        # delta-matrix of layer N+1
        ay_delta_Np1 = li_delta[N+1]
        shape_delta_Np1 = ay_delta_Np1.shape 

        # !!! Add intermediate row (for bias) to Z_N !!!
        ay_Z_N = li_Z_in[N]
        shape_Z_N_orig = ay_Z_N.shape
        ay_Z_N = self._add_bias_neuron_to_layer(ay_Z_N, 'row')
        shape_Z_N = ay_Z_N.shape # dimensions should fit now with W- and A-matrix 
        
        # Derivative matrix for the activation function (with extra bias node row)
        #    can only be calculated now as we need the z-values
        ay_D_N = self._calculate_D_N(ay_Z_N)
        shape_D_N = ay_D_N.shape 
        
        ay_A_N = li_A_out[N]
        shape_A_N = ay_A_N.shape
        
        # print shapes 
        if b_print:
            print("shape of W_N = ", shape_W_N)
            print("
shape of delta_(N+1) = ", shape_delta_Np1)
            print("shape of Z_N_orig = ", shape_Z_N_orig)
            print("shape of Z_N = ", shape_Z_N)
            print("shape of D_N = ", shape_D_N)
            print("shape of A_N = ", shape_A_N)
        
        
        # Propagate delta
        # **************
        if li_delta[N+1] is None:
            print("BW-Prop-error:\n No delta-matrix found for layer " + str(N+1) ) 
            sys.exit()
            
        # Check shapes for np.dot()-operation - here for element [0] of both shapes - as we operate with W.T !
        if ( shape_W_N[0] != shape_delta_Np1[0]): 
            print("BW-Prop-error:\n shape of W_N [", shape_W_N, "]) does not fit shape of delta_N+1 [", shape_delta_Np1, "]" )
            sys.exit() 
        
        # intermediate delta 
        # ~~~~~~~~~~~~~~~~~~
        ay_delta_w_N = ay_W_N.T.dot(ay_delta_Np1)
        shape_delta_w_N = ay_delta_w_N.shape
        
        # Check shapes for element wise *-operation !
        if ( shape_delta_w_N != shape_D_N ): 
            print("BW-Prop-error:\n shape of delta_w_N [", shape_delta_w_N, "]) does not fit shape of D_N [", shape_D_N, "]" )
            sys.exit() 
        
        # final delta 
        # ~~~~~~~~~~~
        ay_delta_N = ay_delta_w_N * ay_D_N
        # reduce dimension again 
        ay_delta_N = ay_delta_N[1:, :]
        shape_delta_N = ay_delta_N.shape
        
        # Check dimensions again - ay_delta_N.shape should fit shape_Z_in_orig
        if shape_delta_N != shape_Z_N_orig: 
            print("BW-Prop-error:\n shape of delta_N [", shape_delta_N, "]) does not fit original shape Z_in_N [", shape_Z_N_orig, "]" )
            sys.exit() 
        
        if N > 0:
            shape_W_Nm1 = self._li_w[N-1].shape 
            if shape_delta_N[0] != shape_W_Nm1[0] : 
                print("BW-Prop-error:\n shape of delta_N [", shape_delta_N, "]) does not fit shape of W_Nm1 [", shape_W_Nm1, "]" )
                sys.exit() 
        
        
        # Calculate gradient
        # ********************
        #     required for all layers down to 0 
        # check shapes 
        if shape_delta_Np1[1] != shape_A_N[1]:
            print("BW-Prop-error:\n shape of delta_Np1 [", shape_delta_Np1, "]) does not fit shape of A_N [", shape_A_N, "] for matrix multiplication" )
            sys.exit() 
        
        # calculate gradient             
        ay_grad_N = np.dot(ay_delta_Np1, ay_A_N.T)
        
        # regularize gradient (!!!! without adding bias nodes in the L1, L2 sums) 
        ay_grad_N[:, 1:] += (self._li_w[N][:, 1:] * self._lambda2_reg + np.sign(self._li_w[N][:, 1:]) * self._lambda1_reg) 
        
        #
        # Check shape 
        shape_grad_N = ay_grad_N.shape
        if shape_grad_N != shape_W_N:
            print("BW-Prop-error:\n shape of grad_N [", shape_grad_N, "]) does not fit shape of W_N [", shape_W_N, "]" )
            sys.exit() 
        
        # print shapes 
        if b_print:
            print("shape of delta_N = ", shape_delta_N)
            print("shape of grad_N = ", shape_grad_N)
            
            print(ay_grad_N)
        
        return ay_delta_N, ay_D_N, ay_grad_N

 
This function does more or less exactly what we derived in our theoretical analysis in the last two articles. Note the intermediate handling of bias nodes! Note also that bias nodes are NOT included in the regularization terms L1 and L2! The function to calculate the derivative of the activation function is:

   
#
    ''' -- Method to calculate the matrix with the derivative values of the activation function at an inner layer N '''

    def _calculate_D_N(self, ay_Z_N, b_print= False):
        '''
        This method calculates and returns the D-matrix for an inner layer N
        The D matrix contains derivatives of the activation function with respect to the local input "z_j" at the nodes of layer N. 
        
        Returns
        ------
        ay_D_N:    Matrix with derivative values of the activation function 
                   with respect to the local z_j values at the nodes of layer N
        Note: This is a 2-dim matrix over layer nodes and training samples of the mini-batch
        '''
        if self._my_act_func == 'sigmoid':
            ay_D_N = self._D_sigmoid(ay_Z = ay_Z_N)
        
        else:
            print("The derivative for activation function " + self._my_act_func + " is not known yet!" )
            sys.exit()
        
        return ay_D_N
 

 

The methods to calculate regularization terms for the loss function are:

   
#
    ''' method to calculate the quadratic regularization term for the loss function '''
    def _regularize_by_L2(self, b_print=False): 
        '''
        The L2 regularization term sums up all quadratic weights (without the weight for the bias) 
        over the input and all hidden layers (but not the output layer)
        The weight for the bias is in the first column (index 0) of the weight matrix - 
        as the bias node's output is in the first row of the output vector of the layer 
        '''
        ilayer = range(0, self._n_total_layers-1) # this excludes the last layer 
        L2 = 0.0
        for idx in ilayer:
            L2 += (np.sum( np.square(self._li_w[idx][:, 1:])) ) 
        L2 *= 0.5 * self._lambda2_reg
        if b_print: 
            print("\nL2: total L2 = " + str(L2) )
        return L2 
#
    ''' method to calculate the linear regularization term for the loss function '''
    def _regularize_by_L1(self, b_print=False): 
        '''
        The L1 regularization term sums up all weights (without the weight for the bias) 
        over the input and all hidden layers (but not the output layer)
        The weight for the bias is in the first column (index 0) of the weight matrix - 
        as the bias node's output is in the first row of the output vector of the layer 
        '''
        ilayer = range(0, self._n_total_layers-1) # this excludes the last layer 
        L1 = 0.0
        for idx in ilayer:
            L1 += np.sum(np.abs( self._li_w[idx][:, 1:]))
        L1 *= 0.5 * self._lambda1_reg
        if b_print:
            print("\nL1: total L1 = " + str(L1))
        return L1 

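The slicing convention used above to exclude the bias weights can be checked in isolation (a minimal stand-alone example, not class code): column 0 of each weight matrix holds the bias weights, so the regularization sums only run over the columns from index 1 onward:

    import numpy as np
    
    w = np.arange(12.0).reshape(3, 4)          # toy weight matrix; column 0 = bias weights
    print(np.sum(np.square(w[:, 1:])))         # 426.0 - squares of the non-bias weights only
    print(np.sum(np.abs(w[:, 1:])))            # 54.0  - absolute values of the non-bias weights only
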
 
Addendum, 15.05.2020:
Also the BW-propagation code presented here will later be the target of optimization steps. We shall see that it – despite working correctly – can be criticized regarding efficiency at several points. See again the article series starting with
MLP, Numpy, TF2 – performance issues – Step I – float32, reduction of back propagation.

Conclusion

We have extended our set of methods quite a bit. At the core of the operations we perform matrix operations which are supported by the OpenBLAS library on a Linux system with multiple CPU cores. In the next article

A simple program for an ANN to cover the Mnist dataset – IX – First Tests

we shall test the convergence of our training for the MNIST dataset. We shall see that an MLP with two hidden layers of 70 and 30 nodes can bring the averaged relative error on the test data down to 0.006 after 1000 epochs. However, we have to analyze such results for overfitting. Stay tuned …

Links

My PDF on “The math behind EBP”

A simple Python program for an ANN to cover the MNIST dataset – V – coding the loss function

We proceed with encoding a Python class for a Multilayer Perceptron [MLP] to be able to study at least one simple example of an artificial neural network [ANN] in detail. During the articles

A simple program for an ANN to cover the Mnist dataset – IV – the concept of a cost or loss function
A simple program for an ANN to cover the Mnist dataset – III – forward propagation
A simple program for an ANN to cover the Mnist dataset – II – initial random weight values
A simple program for an ANN to cover the Mnist dataset – I – a starting point

we came so far that we could apply the “Feed Forward Propagation” algorithm [FFPA] to multiple data records of a mini-batch of training data in parallel. We spoke of a so called vectorized form of the FFPA; we used special Linear Algebra matrix operations of Numpy to achieve the parallel operations. In the last article

A simple program for an ANN to cover the Mnist dataset – IV – the concept of a cost or loss function

I commented on the necessity of a so called “loss function” for the MLP. Although not required for a proper training algorithm we will nevertheless encode a class method to calculate cost values for mini-batches. The behavior of such cost values with training epochs will give us an impression of how good the training algorithm works and whether it actually converges into a minimum of the loss function. As explained in the last article this minimum should correspond to an overall minimum distance of the FFPA results for all training data records from their known correct target values in the result vector space of the MLP.

Before we do the coding for two specific cost or loss functions – namely the “Log Loss“-function and the “MSE“-function – I will briefly point out the difference between the standard “*”-operation between multidimensional Numpy arrays and a real “dot”-matrix-operation in the sense of Linear Algebra. The latter follows special rules: specific elements of both matrices are multiplied and the results are summed up.

As in all the other articles of this series: This is for beginners; experts will not learn anything new – especially not from the first section.

Element-wise multiplication between multidimensional Numpy arrays in contrast to the “dot”-operation of linear algebra

I would like to point out some aspects of combining two multidimensional Numpy arrays which may be confusing for Python beginners. At least they were for me 🙂 . As a former physicist I automatically expected a “*”-operation between two multidimensional arrays to perform a matrix operation in the sense of linear algebra. This led to problems when I tried to understand the Python code of others.

Let us assume we have two 2-dimensional arrays A and B. A and B shall be similar in the sense that their shape is identical, i.e. A.shape = B.shape – e.g. (784, 60000):
The two matrices each have the same specific number of elements in their different dimensions.

Whenever we operate on multidimensional Numpy arrays with the same shape we can use the standard operators “+”, “-“, “*”, “/”. These operators are then applied between corresponding elements of the matrices. I.e., the mathematical operation is applied between elements with the same position along the different dimensional axes in A and B. We speak of an element-wise operation. See the example below.

This means (A * B) is not equivalent to the C = numpy.dot(A, B) operation – which appears in Linear Algebra; e.g. for vector and operator transformations!

The “dot()”-operation implies a special operation: Let us assume that the shape of A[i,j,v] is

A.shape = (p,q,y)

and the shape of B[k,w,m] is

B.shape = (r,z,s)

with

y = z .

Then in the “dot()”-operation all elements along the last dimension “v” of A[i,j,v] are multiplied with the corresponding elements along the second-to-last dimension “w” of B[k,w,m], and the results are summed up.

dot(A, B)[i,j,k,m] = sum(A[i,j,:] * B[k,:,m])

The “*” operation in the formula above is to be interpreted as a standard multiplication of array elements.

In the case of A being a 2-dim array and B being a 1-dimensional vector we just get an operation which could – under certain conditions – be interpreted as a typical vector transformation in a 2-dim vector space.
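To make this rule more tangible, here is a small sketch of my own (toy arrays of arbitrary shape, not part of the original code) which verifies the index formula for two 3-dimensional arrays:

import numpy as np

# toy arrays: A with shape (p, q, y), B with shape (r, z, s) and y = z = 4
A = np.random.rand(2, 3, 4)
B = np.random.rand(5, 4, 6)

C = np.dot(A, B)
print(C.shape)      # => (2, 3, 5, 6)

# check one arbitrary element of C against the explicit sum formula
i, j, k, m = 1, 2, 3, 4
print(np.isclose(C[i, j, k, m], np.sum(A[i, j, :] * B[k, :, m])))   # => True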

So, when we work with two Numpy arrays there exist two different kinds of array multiplication: If the two arrays have the same shape, the “*”-operation performs an element-wise multiplication of the elements of both matrices. In the context of ANNs such an operation may be useful – even if real linear algebra matrix operations dominate the required calculations. The “*”-operation will, however, not work if the array shapes deviate.

The “numpy.dot(A, B)“-operation instead requires a correspondence of the last dimension of matrix A with the second to last dimension of matrix B. Ooops – I realize I just used the expression “matrix” for a multidimensional Numpy array without much thinking. As said: “matrix” in linear algebra has a connotation of a transformation operator on vectors of a vector space. Is there a difference in Numpy?

Yes, there is, indeed – which may even lead to more confusion: We can apply the function numpy.matrix()

A = numpy.matrix(A),
B = numpy.matrix(B)

then the “*”-operator will get a different meaning – namely that of numpy.dot(A,B):

A * B = numpy.dot(A, B)

So, better read Python code dealing with multidimensional arrays rather carefully ….

To understand this better let us execute the following operations on some simple examples in a Jupyter cell:

import numpy as np

A1 = np.ones((5,3))
A1[:,1] *= 2
A1[:,2] *= 4
print("\nMatrix A1:\n")
print(A1)

# A2 gets random integer elements - your concrete values will differ from the output below
A2 = np.random.randint(1, 10, 5*3)
A2 = A2.reshape(5,3)
# A2 = A2.reshape(3,5)
print("\n Matrix A2 :\n")
print(A2)

# element-wise multiplication of two arrays with the same shape
A3 = A1 * A2 
print("\n\nA3:\n")
print(A3)

# real matrix multiplication in the sense of linear algebra
A4 = np.dot(A1, A2.T)
print("\n\nA4:\n")
print(A4)

A5 = np.matrix(A1)
A6 = np.matrix(A2)

# for numpy.matrix objects the "*"-operator performs a dot()-operation
A7 = A5 * A6.T 
print("\n\nA7:\n")
print(A7)

# this raises a ValueError - shapes (5,3) and (5,3) are not aligned for dot()
A8 = A5 * A6

We get the following output:


Matrix A1:

[[1. 2. 4.]
 [1. 2. 4.]
 [1. 2. 4.]
 [1. 2. 4.]
 [1. 2. 4.]]

 Matrix A2 :

[[6 8 9]
 [9 1 6]
 [8 8 9]
 [2 8 3]
 [5 8 8]]


A3:

[[ 6. 16. 36.]
 [ 9.  2. 24.]
 [ 8. 16. 36.]
 [ 2. 16. 12.]
 [ 5. 16. 32.]]


A4:

[[58. 35. 60. 30. 53.]
 [58. 35. 60. 30. 53.]
 [58. 35. 60. 30. 53.]
 [58. 35. 60. 30. 53.]
 [58. 35. 60. 30. 53.]]


A7:

[[58. 35. 60. 30. 53.]
 [58. 35. 60. 30. 53.]
 [58. 35. 60. 30. 53.]
 [58. 35. 60. 30. 53.]
 [58. 35. 60. 30. 53.]]

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-4ea2dbdf6272> in <module>
     28 print(A7)
     29 
---> 30 A8 = A5 * A6
     31 

/projekte/GIT/ai/ml1/lib/python3.6/site-packages/numpy/matrixlib/defmatrix.py in __mul__(self, other)
    218         if isinstance(other, (N.ndarray, list, tuple)) :
    219             # This promotes 1-D vectors to row vectors
--> 220             return N.dot(self, asmatrix(other))
    221         if isscalar(other) or not hasattr(other, '__rmul__') :
    222             return N.dot(self, other)

<__array_function__ internals> in dot(*args, **kwargs)

ValueError: shapes (5,3) and (5,3) not aligned: 3 (dim 1) != 5 (dim 0)

This example obviously demonstrates the difference between an element-wise multiplication of multidimensional arrays and a real matrix “dot”-operation. Note especially how the meaning of the “*” operator changed when we calculated A7.

If we instead execute the following code

A1 = np.ones((5,3))
A1[:,1] *= 2
A1[:,2] *= 4
print("\nMatrix A1:\n")
print(A1)


A2= np.random.randint(1, 10, 5*3)
#A2 = A2.reshape(5,3)
A2 = A2.reshape(3,5)
print("\n Matrix A2 :\n")
print(A2)

A3 = A1 * A2 
print("\n\nA3:\n")
print(A3)

we directly get an error:


Matrix A1:

[[1. 2. 4.]
 [1. 2. 4.]
 [1. 2. 4.]
 [1. 2. 4.]
 [1. 2. 4.]]

 Matrix A2 :

[[5 8 7 3 8]
 [4 4 8 4 5]
 [8 1 9 4 8]]

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-12-c4d3ffb1e683> in <module>
     13 
     14 
---> 15 A3 = A1 * A2
     16 print("\n\nA3:\n")
     17 print(A3)

ValueError: operands could not be broadcast together with shapes (5,3) (3,5) 

As expected!
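As a side remark of mine (not part of the original example): the error message speaks of “broadcasting” because Numpy does allow element-wise “*”-operations between arrays of different shapes, as long as the shapes are compatible according to its broadcasting rules – e.g. a (5,3) array and a vector of shape (3,):

import numpy as np

A1 = np.ones((5, 3))
v  = np.array([1., 2., 4.])   # shape (3,)

# broadcasting: v is applied to every row of A1 - this works,
# whereas shapes (5,3) and (3,5) cannot be broadcast against each other
print(A1 * v)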

Cost calculation for our ANN

As we want to be able to use different types of cost/loss functions we have to introduce new corresponding parameters in the class’s interface. So we update the “__init__()”-function:


    def __init__(self, 
                 my_data_set = "mnist", 
                 n_hidden_layers = 1, 
                 ay_nodes_layers = [0, 100, 0], # array which should have as much elements as n_hidden + 2
                 n_nodes_layer_out = 10,  # expected number of nodes in output layer 
                                                  
                 my_activation_function = "sigmoid", 
                 my_out_function        = "sigmoid",   
                 my_loss_function       = "LogLoss",   
                 
                 n_size_mini_batch = 50,  # number of data elements in a mini-batch 
                 
                 n_epochs      = 1,
                 n_max_batches = -1,  # number of mini-batches to use during epochs - > 0 only for testing 
                                      # a negative value uses all mini-batches 
                 
                 lambda2_reg = 0.1,     # factor for quadratic regularization term 
                 lambda1_reg = 0.0,     # factor for linear regularization term 
                 
                 vect_mode = 'cols', 
                 
                 figs_x1=12.0, figs_x2=8.0, 
                 legend_loc='upper right',
                 
                 b_print_test_data = True
                 
                 ):
        '''
        Initialization of MyANN
        Input: 
            data_set: type of dataset; so far only the "mnist", "mnist_784" datasets are known 
                      We use this information to prepare the input data and learn about the feature dimension. 
                      This info is used in preparing the size of the input layer.     
            n_hidden_layers = number of hidden layers => between input layer 0 and output layer n 
            
            ay_nodes_layers = [0, 100, 0 ] : We set the number of nodes in input layer_0 and the output_layer to zero 
                              Will be set to real number afterwards by infos from the input dataset. 
                              All other numbers are used for the node numbers of the hidden layers.
            n_nodes_out_layer = expected number of nodes in the output layer (is checked); 
                                this number corresponds to the number of categories NC = number of labels to be distinguished
            
            my_activation_function : name of the activation function to use 
            my_out_function : name of the "activation" function of the last layer which produces the output values 
            my_loss_function : name of the "cost" or "loss" function used for optimization 
            
            n_size_mini_batch : Number of elements/samples in a mini-batch of training data 
                                The number of mini-batches will be calculated from this
            
            n_epochs : number of epochs to calculate during training
            n_max_batches : > 0: maximum of mini-batches to use during training 
                            < 0: use all mini-batches  
            
            lambda2_reg:    The factor for the quadratic regularization term 
            lambda1_reg:    The factor for the linear regularization term 
            
            vect_mode: Are 1-dim data arrays (vectors) ordered by columns or rows ?
            
            figs_x1=12.0, figs_x2=8.0 : Standard sizing of plots , 
            legend_loc='upper right': Position of legends in the plots
            
            b_print_test_data: Boolean variable to control the print out of some tests data 
             
         '''
        
        # Array (Python list) of known input data sets 
        self._input_data_sets = ["mnist", "mnist_784", "mnist_keras"]  
        self._my_data_set = my_data_set
        
        # X, y, X_train, y_train, X_test, y_test  
            # will be set by analyze_input_data 
            # X: Input array (2D) - at present status of MNIST image data, only.    
            # y: result (=classification data) [digits represent categories in the case of Mnist]
        self._X       = None 
        self._X_train = None 
        self._X_test  = None   
        self._y       = None 
        self._y_train = None 
        self._y_test  = None
        
        # relevant dimensions 
        # from input data information;  will be set in handle_input_data()
        self._dim_sets     = 0  
        self._dim_features = 0  
        self._n_labels     = 0   # number of unique labels - will be extracted from y-data 
        
        # Img sizes 
        self._dim_img      = 0 # should be sqrt(dim_features) - we assume square like images  
        self._img_h        = 0 
        self._img_w        = 0 
        
        # Layers
        # ------
        # number of hidden layers 
        self._n_hidden_layers = n_hidden_layers
        # Number of total layers 
        self._n_total_layers = 2 + self._n_hidden_layers  
        # Nodes for hidden layers 
        self._ay_nodes_layers = np.array(ay_nodes_layers)
        # Number of nodes in output layer - will be checked against information from target arrays
        self._n_nodes_layer_out = n_nodes_layer_out
        
        # Weights 
        # --------
        # empty List for all weight-matrices for all layer-connections
        # Numbering : 
        # w[0] contains the weight matrix which connects layer 0 (input layer) to hidden layer 1 
        # w[1] contains the weight matrix which connects hidden layer 1 to layer 2 (hidden or output layer) 
        self._ay_w = []  
        
        # Arrays for encoded output labels - will be set in _encode_all_mnist_labels()
        # -------------------------------
        self._ay_onehot = None
        self._ay_oneval = None
        
        # Known Randomizer methods ( 0: np.random.randint, 1: np.random.uniform )  
        # ------------------
        self.__ay_known_randomizers = [0, 1]

        # Types of activation functions and output functions 
        # ------------------
        self.__ay_activation_functions = ["sigmoid"] # later also relu 
        self.__ay_output_functions     = ["sigmoid"] # later also softmax 
        
        # Types of cost functions 
        # ------------------
        self.__ay_loss_functions = ["LogLoss", "MSE" ] # later also other types of cost/loss functions  

        # the following dictionaries will be used for indirect function calls 
        self.__d_activation_funcs = {
            'sigmoid': self._sigmoid, 
            'relu':    self._relu
            }
        self.__d_output_funcs = { 
            'sigmoid': self._sigmoid, 
            'softmax': self._softmax
            }  
        self.__d_loss_funcs = { 
            'LogLoss': self._loss_LogLoss, 
            'MSE': self._loss_MSE
            }  
        
        
          
        # The following variables will later be set by _check_and_set_activation_and_out_functions()            
        
        self._my_act_func  = my_activation_function
        self._my_out_func  = my_out_function
        self._my_loss_func = my_loss_function
        self._act_func = None    
        self._out_func = None    
        self._loss_func = None    

        # list for cost values of mini-batches during training 
        # The list will later be split into sections for epochs 
        self._ay_cost_vals = []

        # number of data samples in a mini-batch 
        self._n_size_mini_batch = n_size_mini_batch
        self._n_mini_batches = None  # will be determined by _get_number_of_mini_batches()

        # number of epochs 
        self._n_epochs = n_epochs
        # maximum number of batches to handle (<0 => all!) 
        self._n_max_batches = n_max_batches

        # regularization parameters
        self._lambda2_reg = lambda2_reg
        self._lambda1_reg = lambda1_reg

        # parameter to allow printing of some test data 
        self._b_print_test_data = b_print_test_data

        # Plot handling 
        # --------------
        # Alternatives to resize plots 
        # 1: just resize figure  2: resize plus create subplots() [figure + axes] 
        self._plot_resize_alternative = 1 
        # Plot-sizing
        self._figs_x1 = figs_x1
        self._figs_x2 = figs_x2
        self._fig = None
        self._ax  = None 
        # alternative 2 does resizing and (!) subplots() 
        self.initiate_and_resize_plot(self._plot_resize_alternative)        
        
        
        # ***********
        # operations 
        # ***********
        
        # check and handle input data 
        self._handle_input_data()
        # set the ANN structure 
        self._set_ANN_structure()
        
        # Prepare epoch and batch-handling - sets mini-batch index array, too 
        self._prepare_epochs_and_batches()
        
        # perform training 
        start_c = time.perf_counter()
        self._fit(b_print=False, b_measure_batch_time=False)
        end_c = time.perf_counter()
        print('\n\n ------') 
        print('Total training Time_CPU: ', end_c - start_c) 
        print("\nStopping program regularly")
        sys.exit()
#

 
The way of accessing a method/function via a parameterized “name”-string and a dictionary should already be familiar from other methods. The method with the given name must of course exist in the Python module; otherwise Eclipse’s PyDev will already display errors.

    '''-- Method to set the loss function--'''
    def _check_and_set_loss_function(self):
        # check for known loss functions 
        try: 
            if (self._my_loss_func not in self.__d_loss_funcs ): 
                raise ValueError
        except ValueError:
            print("\nThe requested loss function " + self._my_loss_func + " is not known!" )
            sys.exit()   
             
        # set the function to variables for indirect addressing 
        self._loss_func = self.__d_loss_funcs[self._my_loss_func]
        
        if self._b_print_test_data:
            print("\nThe loss function of the ANN/MLP was defined as \""  + self._my_loss_func + '"') 
        return None
#

The “Log Loss” function

The “LogLoss”-function has a special form. If “a_i” characterizes the FFPA result for a specific training record and “y_i” the real known value for this record, then we calculate its contribution to the costs as:

Loss = SUM_i [ - y_i * log(a_i) - (1 - y_i) * log(1 - a_i) ]

This loss function has its justification in statistical considerations – for which we assume that our output function produces a kind of probability distribution. Please see the literature for more information.

Now, due to the one-hot encoded result representation over 10 different output dimensions in the MNIST case – corresponding to 10 nodes in the output layer; see the second article of this series – we know that a_i and y_i are 1-dimensional arrays for each training data record. However, if we vectorize this by treating all records of a mini-batch in parallel, we get 2-dim arrays. Actually, we have already calculated the respective arrays in the second to last article.

The rows (1st dim) of a correspond to the output nodes, the columns (2nd dim) to the training data records of the mini-batch. The elements of a are the FFPA result values, which due to our output function have values in the interval ]0.0, 1.0].

The same holds for y – with the difference that for each training record (i.e. in each column) 9 of the 10 values are 0 and exactly one is 1.

The “*”-multiplication in the formula above can thus be performed as a normal element-wise array multiplication on the 2-dim arrays of our code:

a = ay_ANN_out
y = ay_y_enc

Numpy offers a function “numpy.sum(M)” for a multidimensional array M, which just sums up all element values. The result is of course a simple scalar.
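As a small illustration with made-up numbers (two output nodes, two records – purely toy values of mine, not taken from MNIST), the element-wise operations and numpy.sum combine as follows:

import numpy as np

# toy example: 2 output nodes (rows), 2 training records (columns)
ay_y_enc   = np.array([[1., 0.],
                       [0., 1.]])          # one-hot encoded target values
ay_ANN_out = np.array([[0.8, 0.3],
                       [0.2, 0.7]])        # fictitious FFPA output values

Log1 = -ay_y_enc * np.log(ay_ANN_out)           # element-wise
Log2 = (1 - ay_y_enc) * np.log(1 - ay_ANN_out)  # element-wise
cost = np.sum(Log1 - Log2)                      # sum over all elements => scalar
print(cost)   # roughly 1.16 for these numbers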

This information should be enough to understand the following new method:

    ''' method to calculate the logistic regression loss function '''
    def _loss_LogLoss(self, ay_y_enc, ay_ANN_out, b_print = False):
        '''
        Method which calculates LogReg loss function in a vectorized form on multidimensional Numpy arrays 
        '''
        b_test = False

        if b_print:
            print("From LogLoss: shape of ay_y_enc =  " + str(ay_y_enc.shape))
            print("From LogLoss: shape of ay_ANN_out =  " + str(ay_
ANN_out.shape))
            print("LogLoss: ay_y_enc = ", ay_y_enc) 
            print("LogLoss: ANN_out = \n", ay_ANN_out) 
            print("LogLoss: log(ay_ANN_out) =  \n", np.log(ay_ANN_out) )

        # The following means an element-wise (!) operation between matrices of the same shape!
        Log1 = -ay_y_enc * (np.log(ay_ANN_out))
        # The following means an element-wise (!) operation between matrices of the same shape!
        Log2 = (1 - ay_y_enc) * np.log(1 - ay_ANN_out)
        
        # the next operation calculates the sum over all matrix elements 
        # - thus getting the total costs for all mini-batch elements 
        cost = np.sum(Log1 - Log2)
        
        #if b_print and b_test:
            # Log1_x = -ay_y_enc.dot((np.log(ay_ANN_out)).T)
            # print("From LogLoss: L1 =   " + str(L1))
            # print("From LogLoss: L1X =  " + str(L1X))
        
        if b_print: 
            print("From LogLoss: cost =  " + str(cost))
        
        # The total costs is just a number (scalar)
        return cost

The Mean Square Error [MSE] cost function

Although not often used for classification tasks (but more for regression problems) this loss function is so simple that we encode it on the fly. Here we just calculate something like a mean quadratic error:

Loss = 0.5 * SUM_i [ (y_i - a_i)**2 ]

This loss function is convex as a function of the output values and leads to the following method code:

    ''' method to calculate the MSE loss function '''
    def _loss_MSE(self, ay_y_enc, ay_ANN_out, b_print = False):
        '''
        Method which calculates the MSE loss function in a vectorized form on multidimensional Numpy arrays 
        '''
        if b_print:
            print("From loss_MSE: shape of ay_y_enc =  " + str(ay_y_enc.shape))
            print("From loss_MSE: shape of ay_ANN_out =  " + str(ay_ANN_out.shape))
            #print("LogReg: ay_y_enc = ", ay_y_enc) 
            #print("LogReg: ANN_out = \n", ay_ANN_out) 
            #print("LogReg: log(ay_ANN_out) =  \n", np.log(ay_ANN_out) )
        
        cost = 0.5 * np.sum( np.square( ay_y_enc - ay_ANN_out ) )

        if b_print: 
            print("From loss_MSE: cost =  " + str(cost))
        
        return cost
#
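A quick cross-check with the same toy arrays as in the Log Loss illustration above (again just made-up numbers of mine) gives:

import numpy as np

ay_y_enc   = np.array([[1., 0.],
                       [0., 1.]])
ay_ANN_out = np.array([[0.8, 0.3],
                       [0.2, 0.7]])

cost = 0.5 * np.sum(np.square(ay_y_enc - ay_ANN_out))
print(cost)   # 0.5 * (0.04 + 0.04 + 0.09 + 0.09) = 0.13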

Regularization terms

Regularization is a means against overfitting during training. The trick is that the cost function is extended by terms which include sums of linear (absolute) or quadratic terms of all weights of all layers. This ensures that the weights themselves are driven towards small values, too, during the search for a minimum of the loss function. The fewer effective degrees of freedom there are, the smaller the chance of overfitting …

In the literature (see the book hints in the last article) you find two methods for regularization – one based on quadratic terms of the weights – the so-called “Ridge regression” – and one based on a sum of the absolute values of the weights – the so-called “Lasso regression”. See the books of Geron and Raschka for more information.

Loss = SUM_i [ - y_i * log(a_i) - (1 - y_i) * log(1 - a_i) ]
+  lambda_2 * SUM_layer [ SUM_nodes [ (w_layer_nodes)**2 ] ]
+  lambda_1 * SUM_layer [ SUM_nodes [ |w_layer_nodes| ] ]

Note that we have already included two factors “lambda_2” and “lambda_1”, by which the regularization terms are multiplied before being added to the cost/loss function, as parameters in the “__init__”-method.

The two related methods are easy to understand:

    ''' method to calculate the quadratic regularization term for the loss function '''
    def _regularize_by_L2(self, b_print=False): 
        '''
        The L2 regularization term sums up the squares of all weights (without the weights for the bias) 
        over the input and all hidden layers (but not the output layer).
        The weight for the bias is in the first column (index 0) of the weight matrix - 
        as the bias node's output is in the first row of the output vector of the layer 
        '''
        ilayer = range(0, self._n_total_layers-1) # this excludes the last layer 
        L2 = 0.0
        for idx in ilayer:
            L2 += (np.sum( np.square(self._ay_w[idx][:, 1:])) ) 
        L2 *= 0.5 * self._lambda2_reg
        if b_print: 
            print("\nL2: total L2 = " + str(L2) )
        return L2 
#

    ''' method to calculate the linear regularization term for the loss function '''
    def _regularize_by_L1(self, b_print=False): 
        '''
        The L1 regularization term sums up the absolute values of all weights (without the weights for the bias) 
        over the input and all hidden layers (but not the output layer).
        The weight for the bias is in the first column (index 0) of the weight matrix - 
        as the bias node's output is in the first row of the output vector of the layer 
        '''
        ilayer = range(0, self._n_total_layers-1) # this excludes the last layer 
        L1 = 0.0
        for idx in ilayer:
            L1 += np.sum(np.abs( self._ay_w[idx][:, 1:])) # Lasso: sum of the absolute weight values 
        L1 *= 0.5 * self._lambda1_reg
        if b_print:
            print("\nL1: total L1 = " + str(L1))
        return L1 
#

Why do we not start with index “0” in the weight arrays – self._ay_w[idx][:, 1:]?
The reason is that we do not include the weights of the bias nodes in these terms: by convention the bias weights are not regularized.
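For beginners, a two-line illustration with a small fictitious weight matrix of my own shows what the slice [:, 1:] actually drops:

import numpy as np

w = np.arange(12.).reshape(3, 4)  # a fictitious weight matrix with 4 columns
print(w[:, 1:])                   # all rows, but without column 0 (the bias weights)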

Note: Normally we would expect a factor of 1/m, with “m” being the number of records in a mini-batch, for all the terms discussed above. Such a constant factor does not hamper the principal procedure – as long as we omit it consistently, also for the regularization terms. It can be taken care of by choosing smaller “lambda”s and a smaller step size during optimization.

Inclusion of the loss function calculations within the handling of mini-batches

For our approach with mini-batches (i.e. an approach between pure stochastic and full batch handling) we have to include the cost calculation in our method “_handle_mini_batch()”. The method is modified accordingly:

    ''' -- Method to deal with a batch -- '''
    def _handle_mini_batch(self, num_batch = 0, b_print_y_vals = False, b_print = False):
        '''
        For each batch we keep the input data array Z and the output data A (output of activation function!) 
        for all layers in Python lists
        We can use this as input variables in function calls - mutable variables are handled by reference values !
        We receive the A and Z data from the propagation functions and pass them on to the cost and gradient calculation functions
        
        As an initial step we define the Python lists ay_Z_in_layer and ay_A_out_layer 
        and fill in the first input elements for layer L0  
        '''
        ay_Z_in_layer  = [] # Input vector in layer L0;  result of a matrix operation in L1,...
        ay_A_out_layer = [] # Result of activation function 
    
        #print("num_batch = " + str(num_batch))
        #print("len of ay_mini_batches = " + str(len(self._ay_mini_batches))) 
        #print("_ay_mini_batches[0] = ")
        #print(self._ay_mini_batches[num_batch])
    
        # Step 1: Special treatment of the ANN's input Layer L0
        # Layer L0: Fill in the input vector for the ANN's input layer L0 
        ay_idx_batch = self._ay_mini_batches[num_batch]
        ay_Z_in_layer.append( self._X_train[ay_idx_batch] ) # numpy arrays can be indexed by an array of integers
        #print("\nPropagation : Shape of X_in = ay_Z_in_layer = " + str(ay_Z_in_layer[0].shape))           
        if b_print_y_vals:
            print("\n idx, expected y_value of Layer L0-input :")           
            for idx in self._ay_mini_batches[num_batch]:
                print(str(idx) + ', ' + str(self._y_train[idx]) )
        
        # Step 2: Layer L0: We need to transpose the data of the input layer 
        ay_Z_in_0T       = ay_Z_in_layer[0].T
        ay_Z_in_layer[0] = ay_Z_in_0T

        # Step 3: Call the forward propagation method for the mini-batch data samples 
        self._fw_propagation(ay_Z_in = ay_Z_in_layer, ay_A_out = ay_A_out_layer, b_print = b_print) 
        
        if b_print:
            # index range of layers 
            ilayer = range(0, self._n_total_layers)
            print("\n ---- ")
            print("\nAfter propagation through all " + str(self._n_total_layers) + " layers: ")
            for il in ilayer:
                print("Shape of Z_in of layer L" + str(il) + " = " + str(ay_Z_in_layer[il].shape))
                print("Shape of A_out of layer L" + str(il) + " = " + str(ay_A_out_layer[il].shape))

        
        # Step 4: Cost calculation for the batch 
        ay_y_enc = self._ay_onehot[:, ay_idx_batch]
        ay_ANN_out = ay_A_out_layer[self._n_total_layers-1]
        # print("Shape of ay_ANN_out = " + str(ay_ANN_out.shape))
        
        total_costs_batch = self._calculate_loss_for_batch(ay_y_enc, ay_ANN_out, b_print = False)
        self._ay_cost_vals.append(total_costs_batch)
        
        # Step 5: To be done: gradient calculation via back propagation of errors 
        # Step 6: Adjustment of weights  
        
        # try to accelerate garbage handling
        if len(ay_Z_in_layer) > 0:
            del ay_Z_in_layer
        if len(ay_A_out_layer) > 0:
            del ay_A_out_layer
        
        return None
#

 

Note that we save the cost value of every batch in the Python list “self._ay_cost_vals”. This list can later easily be split into arrays for the individual epochs.
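Such a split could later look like the following sketch of mine (assuming a constant number of mini-batches per epoch; this code is not yet part of the class, and the numbers are purely illustrative):

import numpy as np

# a flat list of per-batch cost values, as self._ay_cost_vals would contain it
# here: toy numbers for 2 epochs with 3 mini-batches each
ay_cost_vals = [5.1, 4.8, 4.6, 3.9, 3.7, 3.5]
n_epochs, n_mini_batches = 2, 3

ay_costs = np.array(ay_cost_vals).reshape(n_epochs, n_mini_batches)
print(ay_costs.mean(axis=1))   # mean cost per epoch: [4.8333..., 3.7]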

The whole process must be supplemented by a method which does the real cost value calculation:

    ''' -- Main Method to calculate costs -- '''
    def _calculate_loss_for_batch(self, ay_y_enc, ay_ANN_out, b_print = False, b_print_details = False ):
        '''
        Method which calculates the costs including regularization terms  
        The cost function is called according to an input parameter of the class 
        '''
        pure_costs_batch = self._loss_func(ay_y_enc, ay_ANN_out, b_print = False)
        
        if ( b_print and b_print_details ): 
            print("Calc_Costs: Shape of ay_ANN_out = " + str(ay_ANN_out.shape))
            print("Calc_Costs: Shape of ay_y_enc = " + str(ay_y_enc.shape))
        if b_print: 
            print("From Calc_Costs: pure costs of a batch =  " + str(pure_costs_batch))
        
        # Add regularization terms - L1: linear reg. term, L2: quadratic reg. term 
        # the sums over the weights (squared) have to be performed for each batch again due to intermediate corrections 
        L1_cost_contrib = 0.0
        L2_cost_contrib = 0.0
        if self._lambda1_reg > 0:
            L1_cost_contrib = self._regularize_by_L1( b_print=False )
        if self._lambda2_reg > 0:
            L2_cost_contrib = self._regularize_by_L2( b_print=False )
        
        total_costs_batch = pure_costs_batch + L1_cost_contrib + L2_cost_contrib
        return total_costs_batch
#

Conclusion

With the steps discussed above we have completed the inclusion of a cost value calculation in our class for every training step dealing with a mini-batch. All cost values are saved in a Python list for later evaluation. The list can later be split with respect to epochs.

In contrast to the FFP-algorithm all array-operations required in this step were simple element-wise operations and summations over all array-elements.

Cost value calculation obviously is simple and pretty fast regarding CPU-consumption! Just test it yourself!

In the next article we shall analyze the mathematics behind the calculation of the partial derivatives of our cost function with respect to the many weights at the nodes of the different layers. We shall see that the gradient calculation reduces to remarkably simple formulas describing a kind of back-propagation of the error terms [y_i - a_i] through the network.

We will not be surprised that we need to involve some real matrix operations again – as in the FFPA!