Deep Dreams of a CNN trained on MNIST data – III – catching dream patterns at smaller length scales

In the first two articles of this series on Convolutional Neural Networks [CNNs] and “Deep Dream” pictures

Deep Dreams of a CNN trained on MNIST data – II – some code for pattern carving
Deep Dreams of a CNN trained on MNIST data – I – a first approach based on one selected map of a convolutional layer

I introduced the concept of CNN-based “pattern carving” and a related algorithm; it consists of a small number of iterations over a sequence of image manipulation steps (downscaling to a resolution the CNN can handle, detection and amplification of a map-triggering pattern by the CNN, upscaling to the original size and normalization). The included pattern detection and amplification algorithm is nothing else than the OIP-algorithm, which I discussed in another article series on CNNs in this blog. The OIP-algorithm is able to create a pattern, which triggers a chosen CNN map optimally, out of a chaotic fluctuation background. The difference is that we now apply such an OIP-algorithm to a structured input image – the basis for a “dream”. The “carving algorithm” is just a simplifying variation of more advanced Deep Dream algorithms; it can be combined with simple CNNs trained on gray, low-resolution images.
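Schematically, one carving cycle on the full image looks like the following sketch (a simplified outline only; it assumes an instance “MyOIP” of the OIP-class used throughout this and the previous article series, a 560×560 gray image tensor “t_img_560”, and omits the plot-related parameters of the real calls shown further below):

import tensorflow as tf

# one "carving" cycle: downscale => amplify a map-triggering pattern => upscale => normalize
t_down = tf.image.resize(t_img_560, [28, 28], method="bicubic", antialias=True)  # CNN resolution
MyOIP._initial_inp_img_data = t_down                 # start image for the OIP-algorithm
MyOIP._derive_OIP(map_index=56, n_epochs=30, n_steps=6,
                  epsilon=0.01, conv_criterion=2.e-4,
                  b_stop_with_convergence=False)     # pattern detection and amplification
t_up = tf.image.resize(MyOIP._inp_img_data, [560, 560], method="bicubic", antialias=True)
t_up = tf.image.per_image_standardization(t_up)      # normalization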

In the last article I provided some code which creates dream-like pictures with the help of carving by a CNN trained on MNIST data. Nice, but … In the original theory of “Deep Dreaming” people from Google and others applied their algorithms to a cascade of so-called “octaves“: Octaves represent the structures within an image at different levels of resolution. Unfortunately, at first sight, such an approach seems to be beyond the capabilities of our MNIST CNN, because the latter works on a fixed and very coarse resolution scale of only 28×28 pixels.

As we cannot work on different scales of resolution directly: Can we handle pattern detection and amplification on different length scales in a different way? Can we somehow extend the carving process to smaller length scales and a possible detection of smaller map-triggering patterns within the input image?
The answer in short is: Yes, we can.
We “just” have to replace working with “octaves” by looking at sub-segments of the image – and apply our “carving” algorithm with its down- and up-scaling to these sub-segments as well as to the full image during an iteration process. We shall test the effects of such an approach in this article for a single selected sub-area of the input image, before we apply it more thoroughly to different input images in further articles.

A first step towards Deep Dreams “dreaming” detail structures of a chosen image …

We again use the image of a bunch of roses, which we worked with in the last articles, as a starting point for the creation of a Deep Dream picture. The easiest way to define sub-areas of our (quadratic) input image is to divide it into 4 (quadratic) adjacent sub-segments or sub-images, each with half of the side length of the original image. We thus get two rows, each with two neighboring sub-areas of the size 280×280 px. Then we could apply the carving algorithm to one selected or to all of the sub-segments. But how do we combine such an approach with an overall treatment of the full image? The answer is that we have to work in parallel on both length-scales involved. We can do this within each cycle of down- and up-scaling according to the following scheme:

  • Step 1: Pick the input image IMG at the full working size (here 560 x 560 px) – and reshape it into a tensor suitable for
    Tensorflow 2 [TF2] functions.
  • Step 2: Down- and upscale the input image “IMG” (560×560 => 28×28 => 560×560) – with information loss
  • Step 3: Calculate the difference between the re-up-scaled image to the original image as a tensor for later detail corrections.
  • Step 4: Subdivide IMG into 4 quadrants (each with a size of 280×280 px). Save the quadrants as sub-images S_IMG_1, …, S_IMG_4 – and create respective tensors.
  • Step 5: For all sub-images: Down- and up-scale the sub-images S_IMG_m (280×280 => 28×28 => 280×280) – with information loss.
  • Step 6: For all sub-images: Determine the differences between the re-upscaled sub-images and the original sub-images as tensors for later detail corrections.
  • Step 7: Loop A [around 4 to 6 iterations]
    • Step LA-1: Pick the down-scaled image IMG_d (of size 28×28 px) as the present input image for the CNN and the OIP-analysis
    • Step LA-2: Apply the OIP-algorithm for N epochs (20 < N < 40) to the downscaled image IMG_d (28x28)
    • Step LA-3: Upscale the resulting changed version of image IMG_d to the original IMG-size by bicubic interpolation => IMG_u
    • Step LA-4: Add the (constant) correction tensor for details to IMG_u.
    • (Step LA-5: Loop B over Sub-Images)
      • Step LB-1: Cut out the area corresponding to sub-image S_IMG_m from the changed full image IMG_u. Use it as Sub_IMG_m during the following steps.
      • Step LB-2: Downscale the present sub-image Sub_IMG_m to the size suitable for the CNN (here: 28×28) by bicubic interpolation => Sub_IMG_m_d.
      • Step LB-3: Apply the OIP-algorithm for N epochs (20 < N < 40) to the downscaled sub-image (28x28) Sub_IMG_m_d
      • Step LB-4: Upscale the changed Sub_IMG_m_d to the original size of Sub_IMG_m by bicubic interpolation => Sub_IMG_m_u
      • Step LB-5: Add the (constant) correction tensor to the tensor for the upscaled sub-image Sub_IMG_m_u
      • Step LB-6: Replace the sub-image region of the upscaled full image IMG_u with the (corrected) sub-image Sub_IMG_m_u
      • Step LB-7: Downscale the new full image IMG_u again (to 28×28 => IMG_d) and use the resulting IMG_d for the next iteration of Loop A

The Loop B has been placed in brackets because we are going to apply the suggested technique only to a single one of the 4 sub-image quadrants in this article.

Code fragments for the sub-image preparation

The following code for a Jupyter cell prepares the quadrants of the original image IMG – here of a bunch of roses. The code is straightforward and easy to understand.

# ****************************
# Work with sub-Images  
# ************************

%matplotlib inline

# (numpy, matplotlib and tensorflow may already be imported by earlier cells of the notebook)
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

import PIL
from PIL import Image, ImageOps

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 20
fig_3 = plt.figure(3)
ax3_1_1 = fig_3.add_subplot(541)
ax3_1_2 = fig_3.add_subplot(542)
ax3_1_3 = fig_3.add_subplot(543)
ax3_1_4 = fig_3.add_subplot(544)
ax3_2_1 = fig_3.add_subplot(545)
ax3_2_2 = fig_3.add_subplot(546)
ax3_2_3 = fig_3.add_subplot(547)
ax3_2_4 = fig_3.add_subplot(548)
ax3_3_1 = fig_3.add_subplot(5,4,9)
ax3_3_2 = fig_3.add_subplot(5,4,10)
ax3_3_3 = fig_3.add_subplot(5,4,11)
ax3_3_4 = fig_3.add_subplot(5,4,12)
ax3_4_1 = fig_3.add_subplot(5,4,13)
ax3_4_2 = fig_3.add_subplot(5,4,14)
ax3_4_3 = fig_3.add_subplot(5,4,15)
ax3_4_4 = fig_3.add_subplot(5,4,16)
ax3_5_1 = fig_3.add_subplot(5,4,17)
ax3_5_2 = fig_3.add_subplot(5,4,18)
ax3_5_3 = fig_3.add_subplot(5,4,19)
ax3_5_4 = fig_3.add_subplot(5,4,20)

# size to work with 
# ******************
img_wk_size = 560 

# bring the orig img down to (560, 560) 
# ***************************************
imgvc = Image.open("rosen_orig_farbe.jpg")
imgvc_wk_size = imgvc.resize((img_wk_size,img_wk_size), resample=PIL.Image.BICUBIC)
# Change to np arrays 
ay_picc = np.array(imgvc_wk_size)
print("ay_picc.shape = ", ay_picc.shape)
print("r = ", ay_picc[0,0,0],    " g = ", ay_picc[0,0,1],    " b = " , ay_picc[0,0,2] )
print("r = ", ay_picc[200,0,0],  " g = ", ay_picc[200,0,1],  " b = " , ay_picc[200,0,2] )

# Turn color to gray 
#Red * 0.3 + Green * 0.59 + Blue * 0.11
#Red * 0.2126 + Green * 0.7152 + Blue * 0.0722
#Red * 0.299 + Green * 0.587 + Blue * 0.114

ay_picc_g = ( 0.299  * ay_picc[:,:,0] + 0.587  * ay_picc[:,:,1] + 0.114  * ay_picc[:,:,2] )  
ay_picc_g = ay_picc_g.astype('float32') 
t_picc_g  = ay_picc_g.reshape((1, img_wk_size, img_wk_size, 1))
t_picc_g  = tf.image.per_image_standardization(t_picc_g)

# downsize to (28,28)  
t_picc_g_28 = tf.image.resize(t_picc_g, [28,28], method="bicubic", antialias=True)
t_picc_g_28 = tf.image.per_image_standardization(t_picc_g_28)

# get the correction for the full image 
t_picc_g_wk_size = tf.image.resize(t_picc_g_28, [img_wk_size,img_wk_size], method="bicubic", antialias=True)
t_picc_g_wk_size = tf.image.per_image_standardization(t_picc_g_wk_size)

t_picc_g_wk_size_corr = t_picc_g - t_picc_g_wk_size
t_picc_g_wk_size_re   = t_picc_g_wk_size + t_picc_g_wk_size_corr


# Display wk_size orig images  
ax3_1_1.imshow(imgvc_wk_size)
ax3_1_2.imshow(t_picc_g[0,:,:,0], cmap=plt.cm.gray)
ax3_1_3.imshow(t_picc_g_28[0,:,:,0], cmap=plt.cm.gray)
ax3_1_4.imshow(t_picc_g_wk_size_re[0,:,:,0], cmap=plt.cm.gray)


# Split in 4 sub-images 
# ***********************

half_wk_size  = int(img_wk_size / 2)  

t_picc_g_1  = t_picc_g[:, 0:half_wk_size, 0:half_wk_size, :]
t_picc_g_2  = t_picc_g[:, 0:half_wk_size, half_wk_size:img_wk_size, :]
t_picc_g_3  = t_picc_g[:, half_wk_size:img_wk_size, 0:half_wk_size, :]
t_picc_g_4  = t_picc_g[:, half_wk_size:img_wk_size, half_wk_size:img_wk_size, :]

# Display the 4 original sub-images  
ax3_2_1.imshow(t_picc_g_1[0,:,:,0], cmap=plt.cm.gray)
ax3_2_2.imshow(t_picc_g_2[0,:,:,0], cmap=plt.cm.gray)
ax3_2_3.imshow(t_picc_g_3[0,:,:,0], cmap=plt.cm.gray)
ax3_2_4.imshow(t_picc_g_4[0,:,:,0], cmap=plt.cm.gray)


# Downscale sub-images 
t_picc_g_1_28 = tf.image.resize(t_picc_g_1, [28,28], method="bicubic", antialias=True)
t_picc_g_2_28 = tf.image.resize(t_picc_g_2, [28,28], method="bicubic", antialias=True)
t_picc_g_3_28 = tf.image.resize(t_picc_g_3, [28,28], method="bicubic", antialias=True)
t_picc_g_4_28 = tf.image.resize(t_picc_g_4, [28,28], method="bicubic", antialias=True)


# Display downscaled sub-images  
ax3_3_1.imshow(t_picc_g_1_28[0,:,:,0], cmap=plt.cm.gray)
ax3_3_2.imshow(t_picc_g_2_28[0,:,:,0], cmap=plt.cm.gray)
ax3_3_3.imshow(t_picc_g_3_28[0,:,:,0], cmap=plt.cm.gray)
ax3_3_4.imshow(t_picc_g_4_28[0,:,:,0], cmap=plt.cm.gray)


# get correction values for upsizing 
t_picc_g_1_wk_half = tf.image.resize(t_picc_g_1_28, [half_wk_size,half_wk_size], method="bicubic", antialias=True)
t_picc_g_1_wk_half = tf.image.per_image_standardization(t_picc_g_1_wk_half)
t_picc_g_2_wk_half = tf.image.resize(t_picc_g_2_28, [half_wk_size,half_wk_size], method="bicubic", antialias=True)
t_picc_g_2_wk_half = tf.image.per_image_standardization(t_picc_g_2_wk_half)
t_picc_g_3_wk_half = tf.image.resize(t_picc_g_3_28, [half_wk_size,half_wk_size], method="bicubic", antialias=True)
t_picc_g_3_wk_half = tf.image.per_image_standardization(t_picc_g_3_wk_half)
t_picc_g_4_wk_half = tf.image.resize(t_picc_g_4_28, [half_wk_size,half_wk_size], method="bicubic", antialias=True)
t_picc_g_4_wk_half = tf.image.per_image_standardization(t_picc_g_4_wk_half)
 
    
t_picc_g_1_corr =  t_picc_g_1 - t_picc_g_1_wk_half   
t_picc_g_2_corr =  t_picc_g_2 - t_picc_g_2_wk_half   
t_picc_g_3_corr =  t_picc_g_3 - t_picc_g_3_wk_half   
t_picc_g_4_corr =  t_picc_g_4 - t_picc_g_4_wk_half   

t_picc_g_1_re   = t_picc_g_1_wk_half + t_picc_g_1_corr 
t_picc_g_2_re   = t_picc_g_2_wk_half + t_picc_g_2_corr 
t_picc_g_3_re   = t_picc_g_3_wk_half + t_picc_g_3_corr 
t_picc_g_4_re   = t_picc_g_4_wk_half + t_picc_g_4_corr 

print(t_picc_g_1_re.shape)

# Display re-upscaled and corrected sub-images  
ax3_4_1.imshow(t_picc_g_1_re[0,:,:,0], cmap=plt.cm.gray)
ax3_4_2.imshow(t_picc_g_2_re[0,:,:,0], cmap=plt.cm.gray)
ax3_4_3.imshow(t_picc_g_3_re[0,:,:,0], cmap=plt.cm.gray)
ax3_4_4.imshow(t_picc_g_4_re[0,:,:,0], cmap=plt.cm.gray)

ay_img_comp = np.zeros((img_wk_size,img_wk_size))
ay_t_comp   = ay_img_comp.reshape((1,img_wk_size, img_wk_size,1))
t_pic_comp  = ay_t_comp
t_pic_comp[0, 0:half_wk_size, 0:half_wk_size, 0]                       = t_picc_g_1_re[0, :, :, 0]
t_pic_comp[0, 0:half_wk_size, half_wk_size:img_wk_size, 0]             = t_picc_g_2_re[0, :, :, 0]
t_pic_comp[0, half_wk_size:img_wk_size, 0:half_wk_size, 0]             = t_picc_g_3_re[0, :, :, 0]
t_pic_comp[0, half_wk_size:img_wk_size, half_wk_size:img_wk_size, 0]   = t_picc_g_4_re[0, :, :, 0]

ax3_5_1.imshow(t_picc_g[0,:,:,0], cmap=plt.cm.gray)
ax3_5_2.imshow(t_pic_comp[0,:,:,0], cmap=plt.cm.gray)

 
Note the computation of the correction tensors; we demonstrate their effectiveness below by displaying the respective results for the full image, for the cut-out, re-upscaled and corrected quadrants, and for the re-composed image with the replaced areas.
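The logic behind the correction tensors can be condensed into a few lines (a minimal illustration of the principle, using the tensors of the cell above and omitting the intermediate standardization steps):

t_down = tf.image.resize(t_picc_g, [28, 28], method="bicubic", antialias=True)   # information loss
t_up   = tf.image.resize(t_down, [560, 560], method="bicubic", antialias=True)   # smoothed image
t_corr = t_picc_g - t_up    # the correction tensor stores exactly what the scaling destroyed
# by construction t_up + t_corr reproduces t_picc_g; adding t_corr to an OIP-modified,
# re-upscaled image later re-imposes the lost fine-scale details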

The results in the prepared image frames look like:

Good!

Some code for the carving algorithm – applied to the full image AND a single selected sub-image

Now, for a first test, we concentrate on the sub-image in the lower-right corner – and apply the above algorithm. A corresponding Jupyter cell code is given below:

# *************************************************************************
# OIP analysis on sub-image tiles (to be used after the previous cell)
# **************************************************************************
# Note: To be applied after previous cell !!!
# ******

#interactive plotting - will be used in a future version when we deal with "octaves" 
#%matplotlib notebook 
#plt.ion()

%matplotlib inline

# preparation of figure frames 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 12
fig_size[1] = 12
fig_4 = plt.figure(4)
ax4_1_1 = fig_4.add_subplot(441)
ax4_1_2 = fig_4.add_subplot(442)
ax4_1_3 = fig_4.add_subplot(443)
ax4_1_4 = fig_4.add_subplot(444)

ax4_2_1 = fig_4.add_subplot(445)
ax4_2_2 = fig_4.add_subplot(446)
ax4_2_3 = fig_4.add_subplot(447)
ax4_2_4 = fig_4.add_subplot(448)

ax4_3_1 = fig_4.add_subplot(4,4,9)
ax4_3_2 = fig_4.add_subplot(4,4,10)
ax4_3_3 = fig_4.add_subplot(4,4,11)
ax4_3_4 = fig_4.add_subplot(4,4,12)

ax4_4_1 = fig_4.add_subplot(4,4,13)
ax4_4_2 = fig_4.add_subplot(4,4,14)
ax4_4_3 = fig_4.add_subplot(4,4,15)
ax4_4_4 = fig_4.add_subplot(4,4,16)

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 12
fig_size[1] = 6
fig_5 = plt.figure(5)
axa5_1 = fig_5.add_subplot(241)
axa5_2 = fig_5.add_subplot(242)
axa5_3 = fig_5.add_subplot(243)
axa5_4 = fig_5.add_subplot(244)
axa5_5 = fig_5.add_subplot(245)
axa5_6 = fig_5.add_subplot(246)
axa5_7 = fig_5.add_subplot(247)
axa5_8 = fig_5.add_subplot(248)
li_axa5 = [axa5_1, axa5_2, axa5_3, axa5_4, axa5_5, axa5_6, axa5_7, axa5_8]

# Some general OIP run parameters 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
n_epochs  = 30          # should be divisible by 5  
n_steps   = 6           # number of intermediate reports 
epsilon   = 0.01        # step size for gradient correction  
conv_criterion = 2.e-4  # criterion for a potential stop of optimization 

# image width parameters
# ~~~~~~~~~~~~~~~~~~~~~
img_wk_size   = 560    # must be identical to the last cell   
half_wk_size  = int(img_wk_size / 2) 

# min / max values of the input image 
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Note : to be used for contrast enhancement 
min1 = tf.reduce_min(t_picc_g)
max1 = tf.reduce_max(t_picc_g)
# parameters to deal with the spectral distribution of gray values
spect_dist = min(abs(min1), abs(max1))
spect_fact   = 0.85
height_fact  = 1.1

# Set the gray downscaled input image as a starting point for the iteration loop 
# ~~~~~~~~~~~~~----------~~~~--------------~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
MyOIP._initial_inp_img_data = t_picc_g_28

# ************
# Main Loop 
# ***********

num_main_iterate = 4
# map_index_main = -1       # take a cost function for a full layer 
map_index_main = 56         # map-index to be used for the OIP analysis of the full image  
map_index_sub  = 56         # map-index to be used for the OIP analysis of the sub-images   


for j in range(num_main_iterate):
   
    # ******************************************************
    # deal with the full (downscaled) input image 
    # ******************************************************
    
    # Apply OIP-algorithm to the whole downscaled image  
    # ~~~~~~~~~~~~~~~~~~~-----------~~~~~~~~~~~~~~~~~~~~~~~
    map_index_main_oip = map_index_main   # map-index we are interested in 

    # Perform the OIP analysis 
    MyOIP._derive_OIP(map_index = map_index_main_oip, 
                      n_epochs = n_epochs, n_steps = n_steps, 
                      epsilon = epsilon , conv_criterion = conv_criterion, 
                      b_stop_with_convergence=False,
                      b_print=False,
                      li_axa = li_axa5,
                      ax1_1 = ax4_1_1, ax1_2 = ax4_1_2)

    # display the modified downscaled image  (without and with contrast)
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    t_oip_g_28  = MyOIP._inp_img_data
    ay_oip_g_28 = t_oip_g_28[0,:,:,0].numpy()
    ay_oip_g_28_cont = MyOIP._transform_tensor_to_img(T_img=t_oip_g_28[0,:,:,0], centre_move=0.33, fact=1.0)
    ax4_1_3.imshow(ay_oip_g_28_cont, cmap=plt.cm.gray)
    ax4_1_4.imshow(t_picc_g_28[0, :, :, 0], cmap=plt.cm.gray)
    
    # rescale to 560 and re-add details via the correction tensor  
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    t_oip_g_wk_size        = tf.image.resize(t_oip_g_28, [img_wk_size,img_wk_size], 
                                             method="bicubic", antialias=True)
    t_oip_g_wk_size_re     = t_oip_g_wk_size + t_picc_g_wk_size_corr 
    
    # standardize to get an intermediate image 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # t_oip_g_loop           = t_oip_g_wk_size_re.numpy()
    # t_oip_g_wk_size_re_std = tf.image.per_image_standardization(t_oip_g_wk_size_re)
    t_oip_g_wk_size_re_std = tf.image.per_image_standardization(t_oip_g_wk_size_re)
    t_oip_g_loop           = t_oip_g_wk_size_re_std.numpy()
    
    # contrast required as the reduction of irrelevant pixels has smoothed out the gray scale beneath 
    # ~~~~~~~~~~~~~~~~~
    # the added details at pattern areas = high level of a whitened socket + relatively small detail variation    
    t_oip_g_wk_size_re_std_plt = tf.clip_by_value(height_fact*t_oip_g_wk_size_re_std, -spect_dist*spect_fact, spect_dist*spect_fact)
    ax4_2_1.imshow(t_oip_g_wk_size[0,:,:,0], cmap=plt.cm.gray)
    ax4_2_2.imshow(t_oip_g_wk_size_re_std[0,:,:,0], cmap=plt.cm.gray)
    ax4_2_3.imshow(t_oip_g_wk_size_re_std_plt[0,:,:,0], cmap=plt.cm.gray)
    ax4_2_4.imshow(t_picc_g[0,:,:,0], cmap=plt.cm.gray)
   

    # *************************************************************
    # deal with a chosen sub-image - in this version: "t_oip_4_g"
    # **************************************************************
   
    # Cut out and downscale a sub-image region 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    t_oip_4_g     = t_oip_g_loop[:, half_wk_size:img_wk_size, half_wk_size:img_wk_size, :]
    t_oip_4_g_28  = tf.image.resize(t_oip_4_g, [28,28], method="bicubic", antialias=True)
    
    # use the downscaled sub-image as an input image to the OIP-algorithm 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    MyOIP._initial_inp_img_data = t_oip_4_g_28
    
    
    # Perform the OIP analysis on the sub-image
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    map_index_sub_oip  = map_index_sub   # we may vary this in later versions 
    
    MyOIP._derive_OIP(map_index = map_index_sub_oip, 
                      n_epochs = n_epochs, n_steps = n_steps, 
                      epsilon = epsilon , conv_criterion = conv_criterion, 
                      b_stop_with_convergence=False,
                      b_print=False,
                      li_axa = li_axa5,
                      ax1_1 = ax4_3_1, ax1_2 = ax4_3_2)
    
    
    # display the modified sub-image without and with contrast  
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    t_oip_4_g_28  = MyOIP._inp_img_data
    ay_oip_4_g_28 = t_oip_4_g_28[0,:,:,0].numpy()
    ay_oip_4_g_28_cont = MyOIP._transform_tensor_to_img(T_img=t_oip_4_g_28[0,:,:,0], centre_move=0.33, fact=1.0)
    ax4_3_3.imshow(ay_oip_4_g_28_cont, cmap=plt.cm.gray)
    ax4_3_4.imshow(t_oip_4_g[0, :, :, 0], cmap=plt.cm.gray)
    
    # upscaling of the present sub-image 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    t_pic_4_g_wk_half = tf.image.resize(t_oip_4_g_28, [half_wk_size,half_wk_size], method="bicubic", antialias=True)
    
    # add the detail correction to the manipulated upscaled sub-image
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    t_pic_4_g_wk_half = t_pic_4_g_wk_half + t_picc_g_4_corr
    #t_pic_4_g_wk_half = tf.image.per_image_standardization(t_pic_4_g_wk_half)
    ax4_4_1.imshow(t_pic_4_g_wk_half[0, :, :, 0], cmap=plt.cm.gray)
    ax4_4_2.imshow(t_oip_4_g[0, :, :, 0], cmap=plt.cm.gray)
    
    # Overwrite the related region in the full image with the new sub-image  
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    t_oip_g_loop[0, half_wk_size:img_wk_size, half_wk_size:img_wk_size, 0]  = t_pic_4_g_wk_half[0, :, :, 0]
    t_oip_g_loop_std = tf.image.per_image_standardization(t_oip_g_loop)
    
    #ax4_4_3.imshow(t_oip_g_loop[0, :, :, 0], cmap=plt.cm.gray)   
    ax4_4_3.imshow(t_oip_g_loop_std[0, :, :, 0], cmap=plt.cm.gray)   
    ay_oip_g_loop_std_plt = tf.clip_by_value(height_fact*t_oip_g_loop_std, -spect_dist*spect_fact, spect_dist*spect_fact)
    ax4_4_4.imshow(ay_oip_g_loop_std_plt[0, :, :, 0], cmap=plt.cm.gray)
    
    # Downscale the resulting new full image and feed it into the next iteration of the main loop
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    MyOIP._initial_inp_img_data = tf.image.resize(t_oip_g_loop_std, [28,28], method="bicubic", antialias=True)
   

 

The result of these operations after 4 iterations is displayed in the following image:

We recognize again the worm-like shape resulting for map 56, which we have already seen in the last article. (By the way: Our CNN’s map 56 is originally triggered strongly by the shapes of handwritten 9-digits and plays a significant role in classifying respective images correctly.) But there is a new feature which appeared in the lower right part of the image – a kind of wheel or two closely neighboring 9-like shapes.

The following code compares the final result with the original image:

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 12
fig_size[1] = 12
fig_7X = plt.figure(7)
ax1_7X_1 = fig_7X.add_subplot(221)
ax1_7X_2 = fig_7X.add_subplot(222)
ax1_7X_3 = fig_7X.add_subplot(223)
ax1_7X_4 = fig_7X.add_subplot(224)

ay_pic_full_cont = MyOIP._transform_tensor_to_img(T_img=t_oip_g_loop_std[0, :, :, 0], centre_move=0.46, fact=0.8)

ax1_7X_1.imshow(imgvc_wk_size)
ax1_7X_2.imshow(t_picc_g[0,:,:,0], cmap=plt.cm.gray)
ax1_7X_3.imshow(ay_oip_g_loop_std_plt[0, :, :, 0], cmap=plt.cm.gray)
ax1_7X_4.imshow(ay_pic_full_cont, cmap=plt.cm.gray)

Giving:

Those who do not like the relatively strong contrast may do their own experiments with other parameters for contrast enhancement.

You see: Simplifying prejudices about the world, which get or got manifested in inflexible neural networks (or brains?), may lead to strange, if not bad dreams. May some politicians learn from this, too … hmmm, just joking, as this requires both an openness and a capacity for learning … can’t wait for Jan 21st …

Conclusion

Our “carving algorithm”, which corresponds to an amplification of traces of map-related patterns that a CNN detects in an input image, can also be used for sub-image-areas and their pixel information. We can therefore use even very limited CNNs trained on low resolution MNIST images to create a “Deep MNIST Dream” out of a chosen arbitrary input image. At least in principle.

Addendum 06.01.2021: There is, however, a pitfall – our CNN (for MNIST or other image samples) may have been trained on, and may only work with, standardized image/pixel data. But even if the overall input image has been standardized before operating on it in the sense of the algorithm discussed in this article, we should take into account that a sub-image area may contain pixel data which are far from a standardized distribution. So, our corrections for details may require more than just the calculation of a difference term. We may have to reverse intermediate standardization steps, too. We shall take care of this in forthcoming code changes.
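A possible remedy (just a sketch of the idea, not yet part of the code above; “t_sub” stands for a cut-out sub-image tensor) is to remember the mean and the standard deviation of a sub-image before standardizing it, so that the standardization can be reversed before the sub-image is pasted back into the full image:

import tensorflow as tf

# remember the sub-image statistics before standardization ...
mean_sub  = tf.reduce_mean(t_sub)
std_sub   = tf.math.reduce_std(t_sub)
t_sub_std = (t_sub - mean_sub) / (std_sub + 1.e-8)   # standardized input for the CNN

# ... apply the OIP-/carving-steps to t_sub_std ...

# ... and reverse the standardization before re-inserting the sub-image into the full image
t_sub_back = t_sub_std * std_sub + mean_sub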

In the next article we are going to extend our algorithm to multiple cascaded sub-areas of our image – and we vary the maps used for different sub-areas at the same time.

 

A simple CNN for the MNIST dataset – XI – Python code for filter visualization and OIP detection

I continue my article series on a CNN for the MNIST dataset and related studies of a CNN’s reaction to input patterns.

A simple CNN for the MNIST dataset – X – filling some gaps in filter visualization
A simple CNN for the MNIST dataset – IX – filter visualization at a convolutional layer
A simple CNN for the MNIST dataset – VIII – filters and features – Python code to visualize patterns which activate a map strongly
A simple CNN for the MNIST dataset – VII – outline of steps to visualize image patterns which trigger filter maps
A simple CNN for the MNIST dataset – VI – classification by activation patterns and the role of the CNN’s MLP part
A simple CNN for the MNIST dataset – V – about the difference of activation patterns and features

In the course of this series we have already seen that we must distinguish carefully between

  • activation patterns resulting from the output of the maps of a convolutional layer
  • and pixel patterns (OIPs) within input images which trigger a strong reaction of a selected deep layer map.

OIPs are patterns which pass complicated filter combinations of a sequence of convolutional layers. The images I have shown you so far show: At deep layers some maps react to unique large-scale patterns covering a significant area of the input image in one or both dimensions. The patterns may expose sub-structures and spatially shifted repetitions across the image’s surface.

We can display map activation patterns easily on a grid displaying all maps of a layer and their neuron activations. OIP patterns, however, must be made visible via an image whose pixel values are determined by a complicated calculation process: The image data come from an optimization algorithm which analyzes the activation response of a map’s neurons to pixel value changes.

Our hunt for interesting MNIST OIPs, which trigger the maps of the deepest convolutional layer, has been relatively successful over the last two blog posts: Simply assigning a random value to each input pixel as a starting point for our algorithm already gave us OIP patterns for around 48% of the maps. In the last article I discussed an additional method to fill some of the gaps: I suggested systematically investigating long-range fluctuations of the pixel values as an input to the optimization loop. This gave us OIPs for another 26% of the maps. So, we have found OIPs for around 75% of the 128 maps with relatively simple methods.

[Images: visualization of CNN filters and maps for MNIST – 3rd convolutional layer]

But I still owe my readers some code for OIP-creation and a short guideline on how to use it. You find the complete code in the last section of this article. I have encapsulated the methods for producing OIP-patterns in a Python class named “My_OIP”. The code is commented, and with the knowledge accumulated during the last articles of this series you should have no major difficulties in understanding it. Below I shall provide you with additional code for a sequence of Jupyter cells and walk you through the usage of the class’ methods.

We are mostly interested in maps for which we do not get an OIP pattern (or a “feature”, if you like …) easily by trial-and-error methods. We therefore pick map Nr. 38 on layer “Conv2D_3” of my (!) trained CNN as an example.

Basic requirements and restrictions

I have tested the code only in combination with a sequence of Jupyter cells. So, you need both a virtual Python environment and a Jupyter installation to repeat my experiments.

The code is not yet built for a full investigation of all maps of a convolutional layer in one extensive run. The present methods are intended to be applied to a single selected map only. Whenever you want to study another map you have to run a certain sequence of Jupyter cells again.

Another hint: I did all runs in a virtual Python environment with Tensorflow 2.2.1 as my Keras backend. Unfortunately, the code does not yet run as efficiently on TF 2.3.1 as on TF 2.2.1 for unclear reasons. I used the “Keras” version integrated into TF 2.

Required Modules

I invoke the following collection of Python modules:

Jupyter cell 1:

  
import numpy as np
from numpy import save
from numpy import load
import scipy
from itertools import product 
from sklearn.preprocessing import StandardScaler

import tensorflow as tf
from tensorflow import keras as K
from tensorflow.python.keras import backend as B 
#from tensorflow.keras import backend as B 
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from tensorflow.keras import optimizers
from tensorflow.keras.optimizers import schedules
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist

from tensorflow.python.client import device_lib
#import tensorflow.contrib.eager as tfe

import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import matplotlib.patches as mpat 

import time 
import sys 
import math
import os 
from os import path as path
import imp

# my own class code 
from mycode import myOIP
from mycode import myann32

from IPython.core.tests.simpleerr import sysexit

 

You may have to adjust some versions of these modules to get full consistency with TF 2.2.1. Watch the output of “pip-review” or “pip --upgrade --force” commands carefully! The general form of the required pip-statements to install a certain version of a module in your virtual Python environment looks like

pip install --upgrade --force tensorflow==2.2.1
pip install --upgrade --force six==1.12.0
pip install --upgrade --force bleach==1.5.0
....


Enable the code to run at least partially on a graphics card

A systematic investigation of long range fluctuation patterns as a starting point for an OIP-creation should be done with the help of a graphics card – it takes too much time to perform all of the required operations on the cores of a conventional CPU. [At least on my elderly equipment – unfortunately, I have no employer which sponsors my ML- and Linux-activities .. 🙁 ..] You can do the necessary preparatory steps with the help of a 2nd Jupyter cell.

Jupyter cell 2:

  
gpu = True
if gpu: 
    GPU = True;  CPU = False; num_GPU = 1; num_CPU = 4
else: 
    GPU = False; CPU = True;  num_CPU = 1; num_GPU = 0

config = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=4,
                        inter_op_parallelism_threads=1, 
                        allow_soft_placement=True,
                        device_count = {'CPU' : num_CPU,
                                        'GPU' : num_GPU}, 
                        log_device_placement=True)

config.gpu_options.per_process_gpu_memory_fraction=0.30
config.gpu_options.force_gpu_compatible = True
B.set_session(tf.compat.v1.Session(config=config))

 
I reduced the maximum usage of graphics card memory to 30% of its capacity (4GB).

Creating an object instance of the class My_OIP

As a third step we instantiate an object based on class “My_OIP”. When you look at the code of the class’ “__init__()”-method you see

  • that my CNN-model is loaded from an h5-file and made available for further operations
  • and that an additional Keras model – named “OIP-(sub)-model” – is created. The OIP-model connects the input of the CNN with the output of the CNN’s innermost convolutional layer “Conv2D_3”. It is a sub-model of the original CNN.

The OIP-sub-model is built with the help of the “Model” class of Keras. At the end of the “__init__()”-method I print out the layer structures of the CNN- and OIP-model. The Jupyter code is:

Jupyter cell 3:

# Load the CNN-model - build the OIP-model
#  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
imp.reload(myOIP)
try:
    with tf.device("/GPU:0"):
        MyOIP = myOIP.My_OIP(cnn_model_file = 'cnn_best.h5', layer_name = 'Conv2D_3')

except SystemExit:
    print("stopped")

Note that we chose a specific convolutional layer of the CNN, here, by providing its unique name as a parameter. When you look at the code you see that you can change the layer afterwards by directly calling method “_build_oip_model(layer_name = ‘LAYER_NAME’)” with a suitable name of the layer whose maps you want to study.

The resulting output looks in my case like

Used file to load a model =  cnn_best.h5
Structure of the loaded CNN-model:

Model: "sequential_7"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Conv2D_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
Max_Pool_1 (MaxPooling2D)    (None, 13, 13, 32)        0         
_________________________________________________________________
Conv2D_2 (Conv2D)            (None, 11, 11, 64)        18496     
_________________________________________________________________
Max_Pool_2 (MaxPooling2D)    (None, 5, 5, 64)          0         
_________________________________________________________________
Conv2D_3 (Conv2D)            (None, 3, 3, 128)         73856     
_________________________________________________________________
flatten_7 (Flatten)          (None, 1152)              0         
_________________________________________________________________
dense_14 (Dense)             (None, 100)               115300    
_________________________________________________________________
dense_15 (Dense)             (None, 10)                1010      
=================================================================
Total params: 208,982
Trainable params: 208,982
Non-trainable params: 0
_________________________________________________________________
shape of cnn-model inputs =  (None, 28, 28, 1)
Structure of the constructed OIP-sub-model:

Model: "mod_oip__Conv2D_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
Conv2D_1_input (InputLayer)  [(None, 28, 28, 1)]       0         
_________________________________________________________________
Conv2D_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
Max_Pool_1 (MaxPooling2D)    (None, 13, 13, 32)        0         
_________________________________________________________________
Conv2D_2 (Conv2D)            (None, 11, 11, 64)        18496     
_________________________________________________________________
Max_Pool_2 (MaxPooling2D)    (None, 5, 5, 64)          0         
_________________________________________________________________
Conv2D_3 (Conv2D)            (None, 3, 3, 128)         73856     
=================================================================
Total params: 92,672
Trainable params: 92,672
Non-trainable params: 0

 
You see that the OIP-model (“mod_oip__Conv2D_3”) starts with the input layer and ends with the output of the last deepest layer “Conv2D_3”.
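For readers who just want to see the essential step: such a sub-model can be built with a few lines around the Keras “Model” class (a sketch of the principle; the class method “_build_oip_model()” does this internally, and the variable names here are illustrative):

from tensorflow.keras import models

cnn_model = models.load_model('cnn_best.h5')          # the trained CNN loaded from the h5-file
oip_submodel = models.Model(inputs=cnn_model.input,
                            outputs=cnn_model.get_layer('Conv2D_3').output,
                            name='mod_oip__Conv2D_3')
oip_submodel.summary()                                 # prints a structure like the one above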

Preparation and execution of a precursor-run

The code in the next Jupyter cell

  • first starts a method to prepare a precursor-run,
  • then defines a figure to allow for plots of the 8 promising input images with large scale fluctuations
  • and eventually starts a precursor-run which tests 19683 fluctuation patterns (all combinations of three gray levels on a (3×3)-grid) and selects the 8 which trigger our selected map maximally.

What I call “precursor-run” above is, at its core, nothing else than a loop which tests a selected map’s response to many artificially created input images based on a variety of large scale fluctuations imposed on the pixel values.

Jupyter cell 4:

  
# preparation of the precursor run 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
with tf.device("/GPU:0"):
    MyOIP._prepare_precursor(map_index=38)

# figure for plots
# -----------------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure()
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

# start precursor run 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
start_t = time.perf_counter()
with tf.device("/GPU:0"):
    MyOIP._precursor(num_epochs=10, li_axa=li_axa)
end_t   = time.perf_counter()
fit_t = end_t - start_t
print("time= ", fit_t)

 
Note that we define the map which we want to test when we call “_prepare_precursor(map_index=38)”; the index of the map refers to its position in the list of maps of the chosen layer.

Technically, method “_prepare_precursor()” sets up the OIP-model for the selected map (here map 38) plus a GradientTape-object, which helps us to automatically calculate gradient component values for the map-output with respect to input pixel-values. It does so by calling method

_setup_gradient_tape_and_iterate_function() .

The latter method also creates the “_iterate()-function” which we use during the optimization of the input pixel values.
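In essence such an “_iterate()”-function performs one gradient ascent step on the input image with respect to the mean activation of the selected map. The following sketch shows the principle with a GradientTape (it is not the exact code of the class; “oip_submodel” stands for the OIP-sub-model discussed above):

import tensorflow as tf

def iterate_step(img, oip_submodel, map_index, epsilon=0.01):
    # one gradient ascent step: increase the mean activation of the chosen map
    with tf.GradientTape() as tape:
        tape.watch(img)                                    # img is a plain tensor, not a Variable
        activation = oip_submodel(img)                     # output of layer "Conv2D_3"
        loss = tf.reduce_mean(activation[:, :, :, map_index])
    grads = tape.gradient(loss, img)
    grads = grads / (tf.sqrt(tf.reduce_mean(tf.square(grads))) + 1.e-8)   # normalized gradient
    return img + epsilon * grads, loss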

Method “_precursor()” then systematically creates input images for the fluctuation patterns originally imposed on a coarse (3×3)-grid by interpolating and upscaling. When you study the code carefully you will see that I included a possibility to overlay some kind of constant short scale fluctuation pattern. (This may prove useful in some cases where a map needs short and long scale input patterns at the same time to react with a reasonable activation.)

The method then loops over all long range patterns, performs a defined number of gradient ascent steps for each image and saves the respective pattern data if the map shows a reaction – indicated by a loss value > 0. The optimization loop is directly handled within the method for performance reasons. Note that we only follow the optimization process for a fixed, relatively small numbers of epochs. The loop prints out loss values for all patterns for which the loss is > 0.
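To illustrate the combinatorics: three gray levels on a (3×3)-grid give 3**9 = 19683 coarse patterns, which can be enumerated and upscaled as in the following sketch (illustrative only; the class organizes this inside “_precursor()”):

from itertools import product
import numpy as np
import tensorflow as tf

levels = [-1.0, 0.0, 1.0]                                 # three gray levels per grid cell
for i, combo in enumerate(product(levels, repeat=9)):     # 3**9 = 19683 combinations
    pattern = np.array(combo, dtype='float32').reshape(1, 3, 3, 1)
    # bicubic upscaling turns the coarse grid into a smooth large scale fluctuation image
    img_28 = tf.image.resize(pattern, [28, 28], method="bicubic", antialias=True)
    # ... run a few gradient ascent epochs on img_28 and keep the pattern if the loss is > 0 ...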

In the end we select those 8 input images which showed the highest loss values and save their basic pattern data for reconstruction. The patterns are afterwards available in a list (“self._li_of_flucts”). (We save them also in a file).

We then check whether the image reconstruction algorithm for one of the saved patterns really works. A final step consists of a display of the selected input images. We get an output that looks like

  
We test  19683  possibilities for a (3x3) fluctuations 
i =  27  loss =  5.528541
i =  28  loss =  3.826129
i =  30  loss =  5.545482
i =  31  loss =  5.8238125
i =  32  loss =  2.5444937
i =  36  loss =  2.7260687
i =  37  loss =  3.1436405
i =  46  loss =  2.8298864
i =  54  loss =  5.528542
i =  55  loss =  5.286209 
...
...
i =  19664  loss =  9.8695545
i =  19671  loss =  2.4347296
i =  19672  loss =  2.8202498
i =  19673  loss =  9.868241

num of relevant covs =  11114 
check of map reaction to first selected image
loss for 1st selected img =  20.427849
0 loss =  20.427849
1 loss =  19.50778
2 loss =  19.394796
3 loss =  18.993208
4 loss =  18.844856
5 loss =  18.794407
6 loss =  18.771235
7 loss =  18.607782
time=  251.3919793430032   

 
The test of the fluctuation patterns for map 38 found 11114 candidates with a significant loss.

In my case the selected images of the 8 most promising pattern candidates for OIP-creation look like:

Reconstruction of the input image candidates from their fluctuation pattern

We can reconstruct the images from the contents of “self._li_of_flucts” by calling the method “_display_precursor_imgs()”.

Jupyter cell 5:

  
# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure()
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

with tf.device("/GPU:0"):
    #MyOIP._display_imgs_2()    
    MyOIP._display_precursor_imgs(li_axa=li_axa)

 
It will test the reconstruction algorithm and display the 8 images already provided by the precursor run again.

Choose an input image candidate and enrich it by small scale fluctuations

The next Jupyter cell offers the opportunity to select a certain candidate out of our 8 candidates and create the respective input image as a basis for a subsequent OIP-creation. In addition it allows us to enrich its large scale fluctuation pattern with some small scale fluctuations at a lower “amplitude”. We must provide a figure with two subplots to do some plotting.

Jupyter cell 6:

 
# build initial image based on PRECURSOR
# *******************

# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 5
fig1 = plt.figure(1)
ax1_1 = fig1.add_subplot(121)
ax1_2 = fig1.add_subplot(122)

try: 
    # OIP function to setup an initial image 
    with tf.device("/GPU:0"):
        
        initial_img = MyOIP._build_initial_img_from_prec( prec_index=7, 
                                             li_epochs    = [20, 50, 100, 400], 
                                             li_facts     = [1.0, 0.0, 0.0, 0.2, 0.0],
                                             li_dim_steps = [ (3,3), (7,7), (14,14), (28,28) ], 
                                             b_smoothing = False, b_display=True, 
                                             ax1_1 = ax1_1, ax1_2 = ax1_2)

except SystemExit:
    print("stopped")

 
Parameter “prec_index” reflects which of the eight long range fluctuation patterns you choose.

You certainly noticed the options for parameterizing additional fluctuations: “li_dim_steps” defines the granularity of the fluctuations which are upscaled to the (28×28)-size of the MNIST images. “li_facts” allows for a relative scaling of the strength of the fluctuations. Note that the first element of li_facts determines the strength of the chosen basic pattern after the “_precursor”-run; the other 4 parameters define the relative strength of the other statistical fluctuations on the four length-scales (li_dim_steps).
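The following sketch illustrates how such an input image could be composed from the precursor pattern plus random fluctuations on the four length scales (illustrative only, not the class’s internal code; “precursor_pattern_28” stands for the reconstructed (1, 28, 28, 1) precursor image):

import numpy as np
import tensorflow as tf

li_dim_steps = [(3, 3), (7, 7), (14, 14), (28, 28)]
li_facts     = [1.0, 0.0, 0.0, 0.2, 0.0]     # [precursor weight, weights for the 4 fluctuation scales]

img = li_facts[0] * precursor_pattern_28     # start from the chosen large scale pattern
for fact, dims in zip(li_facts[1:], li_dim_steps):
    if fact > 0.0:
        fluct = np.random.uniform(-1.0, 1.0, size=(1, dims[0], dims[1], 1)).astype('float32')
        img = img + fact * tf.image.resize(fluct, [28, 28], method="bicubic", antialias=True)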

Let us look at an example:

initial_img = MyOIP._build_initial_img_from_prec( prec_index=7, 
  li_epochs    = [20, 50, 100, 400], 
  li_facts     = [1.0, 0.0, 0.0, 0.0, 0.0],
  li_dim_steps = [ (3,3), (7,7), (14,14), (28,28) ], 
  b_smoothing = False, b_display=True, 
  ax1_1 = ax1_1, ax1_2 = ax1_2)

picks the last of the patterns displayed above as the result of the precursor run. The result looks like:

The second image should look like the first one – it is created for check purposes, only. Now, we enrich this image with fluctuations on the scale defined by (14,14), i.e. on squares of (2×2)-dimensions. (The MNIST image size is (28×28)!)

initial_img = MyOIP._build_initial_img_from_prec( prec_index=7, 
  li_epochs    = [20, 50, 100, 400], 
  li_facts     = [1.0, 0.0, 0.0, 0.2, 0.0],
  li_dim_steps = [ (3,3), (7,7), (14,14), (28,28) ], 
  b_smoothing = False, b_display=True, 
  ax1_1 = ax1_1, ax1_2 = ax1_2)

We get the result:

The enrichment of a large scale pattern by small scale fluctuations may support the creation of clearer OIP-images.

Creation of an OIP-image from the results of a precursor run

Method “_derive_OIP_for_Prec_Img()” executes the optimization loop for the creation of an OIP-image based on the input image offered. This method uses the same instances of the GradientTape object and the function “_iterate()” as the precursor run itself. It calls a method “_oip_strat_0_optimization_loop()” which performs the calculations during the optimization. To plot the image evolution we have to provide a figure with 8 sub-plots.

Jupyter cell 7:

 
#from IPython.core.display import display, HTML
#display(HTML("<style>div.output_scroll { height:44em; }</style>"))


# Derive an  OIP from a PRECURSOR IMAGE for a selected map 
# *********************************************************

# figure A - 8 frames
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure(1)
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

# figure B - 2 vertical frames for last image + contrast enhancement 
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 3
fig_size[1] = 7
#fig_b = plt.figure(2, figsize=(5,11.2))
fig_b = plt.figure(2)
ax1_1 = fig_b.add_subplot(211)
ax1_2 = fig_b.add_subplot(212)


# Parameters of the OIP-image optimization 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
n_epochs = 1200          # should be divisible by 5  
n_steps = 6             # number of intermediate reports 
epsilon = 0.01          # step size for gradient correction  
conv_criterion = 2.e-4  # criterion for a potential stop of optimization 


with tf.device("/GPU:0"):
    MyOIP._derive_OIP_for_Prec_Img( n_epochs = n_epochs, n_steps = n_steps, 
                        epsilon = epsilon , conv_criterion = conv_criterion, 
                        li_axa = li_axa, ax1_1 = ax1_1, ax1_2 = ax1_2, 
                        b_stop_with_convergence=False, 
                        b_show_intermediate_images=True 
                        )
      

 
Note: You can use the first two commented-out statements at the cell’s top to control the height of the output window of your Jupyter cells. Just remove the “#” comment signs.

The result contains intermediate information on the loss values and convergence. This helps to determine the minimum number of epochs for an optional second run.

In the end we get a plot for the history of the image evolution out of the input patterns. The distance between the epochs for which the plots are done changes in a logarithmic manner. At the end our method calls “_transform_tensor_to_img()” with some standard parameters for a display of the OIP with contrast enhancement.

For the non-enriched large scale pattern displayed above we get:

*************
Start of optimization loop
*************
Strategy: Simple initial mixture of long and short range variations
Number of epochs =  1000
Epsilon =   0.01
*************
li_int =  [15, 30, 60, 120, 240, 480]
j:  0  :: loss_val =  0.9638658  :: loss_diff =  0.9638658165931702

step 0 finalized
present loss_val =  0.9638658
loss_diff =  0.9638658165931702
Shape of intermediate img =  (28, 28)
j:  1  :: loss_val =  12.853093  :: loss_diff =  11.889227
j:  2  :: loss_val =  13.5863  :: loss_diff =  0.73320675
j:  3  :: loss_val =  14.277494  :: loss_diff =  0.69119453
j:  4  :: loss_val =  14.983041  :: loss_diff =  0.7055464
j:  5  :: loss_val =  15.694604  :: loss_diff =  0.7115631
j:  6  :: loss_val =  16.413284  :: loss_diff =  0.7186804
j:  7  :: loss_val =  17.148396  :: loss_diff =  0.73511124
...
...
j:  994  :: loss_val =  99.45101  :: loss_diff =  -0.003944397
j:  995  :: loss_val =  99.453865  :: loss_diff =  0.0028533936
j:  996  :: loss_val =  99.50871  :: loss_diff =  0.054847717
j:  997  :: loss_val =  99.47069  :: loss_diff =  -0.038024902
j:  998  :: loss_val =  99.51533  :: loss_diff =  0.044639587
j:  999  :: loss_val =  99.43279  :: loss_diff =  -0.08253479

step 999 finalized
present loss_val =  99.43279
loss_diff =  -0.08253479

Infos on pixel value distribution during contrast enhancement: 
max_orig =  4.7334514  :: avg_orig =  -1.2164214e-08  :: min_orig:  -3.6500213
std_dev_orig =  1.0
max_ay =  4.733451  :: avg_ay =  0.0  :: min_ay:  -3.6500208
std_dev_ay =  0.9999999
div =  3.562975788116455
max_fin =  1.6585106  :: avg_fin =  0.33000004  :: min_fin:  -0.69443035
std_dev_fin =  0.28066427

max_img =  255  :: avg_img =  85.10331632653062  :: min_img:  0
std_dev_img =  58.67217632503727 

The last information lines reflect some data on the pixel value distribution during transformations for contrast enhancement. See the code of method “_transform_tensor_to_img()”.

The resulting images are:

and

When you repeat the OIP-creation for the enriched input image you get:

So, there are some differences – and, yes, you may want to play around a bit with the parameter options.

Below I present you the results for all eight long range patterns – un-enriched and for 1000 epochs.

 

OIP-creation obviously depends to some extent on the input patterns offered. But there is an overall similarity.

Pick the OIP you like best. The loss values in order of the OIP-images are: 108.94, 105.95, 102.62, 96.02, 99.67, 100.75, 103.40, 99.45. (This shows by the way that the order in terms of a loss value after 10 tested epochs does not reflect the order after a full run over hundreds of epochs.)

Play around with contrast enhancement

You may not be satisfied with the contrast enhancement and find it somewhat exaggerated. Can you change it? Yes, method “_transform_tensor_to_img()” provides two parameters for it: One (centre_move) shifts the average of the values, the other (fact) scales the spread of the pixel values when the calculated standardized values are mapped to the conventional range of [0, 255] for pixel values. The code reveals that some clipping is used, too.
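A rough sketch of what such a transformation does is given below (it mirrors the principle, not the exact formula of “_transform_tensor_to_img()”):

import numpy as np

def tensor_to_img_sketch(t_img, centre_move=0.33, fact=1.0):
    ay = t_img.numpy()
    ay = (ay - ay.mean()) / (ay.std() + 1.e-8)          # standardize the pixel values
    ay = fact * ay / np.abs(ay).max() + centre_move     # scale the spread, shift the average
    ay = np.clip(ay, 0.0, 1.0)                          # clipping into the displayable range
    return (255.0 * ay).astype(np.uint8)                # map to conventional [0, 255] pixel values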

The following Jupyter cell allows for a variation of the contrast related parameters for the present OIP-image whose original values are available from variable “_inp_img_data[0, :, :, 0]”.

Jupyter cell 8:

        
from IPython.core.display import display, HTML
display(HTML("<style>div.output_scroll { height:44em; }</style>"))

# figure B - 2 vertical frames for last image + contrast enhancement 
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 6
fig_size[1] = 10
#fig_b = plt.figure(2, figsize=(5,11.2))
fig_b = plt.figure(1)
ax1_1 = fig_b.add_subplot(321)
ax1_2 = fig_b.add_subplot(322)
ax1_3 = fig_b.add_subplot(323)
ax1_4 = fig_b.add_subplot(324)
ax1_5 = fig_b.add_subplot(325)
ax1_6 = fig_b.add_subplot(326)
        
# -------------    
    
X_img = MyOIP._inp_img_data[0, :, :, 0]    

XN_img = X_img.numpy()
XT2_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.5, fact = 2.0)

ax1_1.imshow(XN_img, cmap=plt.cm.get_cmap('viridis'))
ax1_2.imshow(XT2_img, cmap=plt.cm.get_cmap('viridis'))

XT3_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.42, fact = 1.8)
XT4_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.33, fact = 1.4)

ax1_3.imshow(XT3_img, cmap=plt.cm.get_cmap('viridis'))
ax1_4.imshow(XT4_img, cmap=plt.cm.get_cmap('viridis'))

XT5_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.33, fact = 1.2)
XT6_img = MyOIP._transform_tensor_to_img(X_img, centre_move = 0.33, fact = 0.9)

ax1_5.imshow(XT5_img, cmap=plt.cm.get_cmap('viridis'))
ax1_6.imshow(XT6_img, cmap=plt.cm.get_cmap('viridis'))
    

 

The images look like:

Enough options to create nice OIP-images.

If you do not care about a precursor run in the first place ….

Now, a precursor-run may be too costly for you. You just want to play around with some statistical input image and a map in a trial and error fashion. Then you may just use the following two Jupyter cells:

Jupyter cell 9:

        
from IPython.core.display import display, HTML
display(HTML("<style>div.output_scroll { height:34em; }</style>"))

# build simple initial image composed of fluctuations  
# *******************************************************

# figure
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 10
fig_size[1] = 5
fig1 = plt.figure(1)
ax1_1 = fig1.add_subplot(121)
ax1_2 = fig1.add_subplot(122)

# OIP function to setup an initial image 
initial_img = MyOIP._build_initial_img_data(  strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (0.2, 0.0, 0.0, 0.0),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False,
                                 ax1_1 = ax1_1, ax1_2 = ax1_2)

 

with output

Then use

Jupyter cell 10:

   
# Derive a single OIP from an input image with statistical fluctuations of the pixel values 
# ******************************************************************

# figure A - 8 frames
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 16
fig_size[1] = 8
fig_a = plt.figure(1)
axa_1 = fig_a.add_subplot(241)
axa_2 = fig_a.add_subplot(242)
axa_3 = fig_a.add_subplot(243)
axa_4 = fig_a.add_subplot(244)
axa_5 = fig_a.add_subplot(245)
axa_6 = fig_a.add_subplot(246)
axa_7 = fig_a.add_subplot(247)
axa_8 = fig_a.add_subplot(248)
li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]

# figure B - 2 vertical frames for last image + contrast enhancement 
# -----------
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 3
fig_size[1] = 7
#fig_b = plt.figure(2, figsize=(5,11.2))
fig_b = plt.figure(2)
ax1_1 = fig_b.add_subplot(211)
ax1_2 = fig_b.add_subplot(212)


map_index = 38          # map-index we are interested in 
n_epochs = 1000           # should be divisible by 5  
n_steps = 6             # number of intermediate reports 
epsilon = 0.01          # step size for gradient correction  
conv_criterion = 2.e-4  # criterion for a potential stop of optimization 

MyOIP._derive_OIP(map_index = map_index, 
                  n_epochs = n_epochs, n_steps = n_steps, 
                  epsilon = epsilon , conv_criterion = conv_criterion, 
                  b_stop_with_convergence=False,
                  li_axa = li_axa,
                  ax1_1 = ax1_1, ax1_2 = ax1_2)

 

It produces an image output:

The comparison with other OIP-images above again shows a significant dependency of the outcome of our algorithm on the input image offered – especially when large-scale OIPs are relevant for a map. The trial-and-error approach often does not reveal the repetitions of some basic pattern at different positions in the OIP-image as clearly as a systematic test of patterns does.

By the way: I was lucky in the above case that I got a reasonable OIP at all! Actually, I needed around 10 trials to get it. But if you tried e.g. with dominant small scale fluctuations defined by

initial_img = MyOIP._build_initial_img_data(  strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (0.1, 0.0, 0.0, 0.2),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False,
                                 ax1_1 = ax1_1, ax1_2 = ax1_2)

then the random number functionality will produce an input image as

which in turn will consistently lead to zero results

Start of optimization loop
*************
Strategy: Simple initial mixture of long and short range variations
Number of epochs =  1000
Epsilon =   0.01
*************
li_int =  [15, 30, 60, 120, 240, 480]
j:  0  :: loss_val =  0.0  :: loss_diff =  0.0
0-values, j=  0  loss =  0.0  avg_loss =  0.0

step 0 finalized
present loss_val =  0.0
loss_diff =  0.0
j:  1  :: loss_val =  0.0  :: loss_diff =  0.0
0-values, j=  1  loss =  0.0  avg_loss =  0.0
j:  2  :: loss_val =  0.0  :: loss_diff =  0.0
...
...
0-values, j=  10  loss =  0.0  avg_loss =  0.0
More than 10 times zero values - Try a different initial random distribution of pixel values

for map 38. Map 38 does not “like” random small scale fluctuations; it sees no pattern in them which would pass its underlying filter combination. Well, this was actually the main reason for a more systematic approach based on large scale patterns … .

But even in a trial-and-error approach you should always test pure long scale fluctuations first.

Does the code run as fast with TF 2.3.1?

No, it does not. Actually, it runs slower by a factor of 4 to 5! The graphics card is used much less. I could not pin down the reason yet, but I think that there is some inefficiency in the data transfers between the graphics card’s environment and the CPU’s environment in the newer version of Tensorflow – maybe there are even unnecessary transfers occurring. If the problems with TF 2.3 do not disappear in a later version, I will file a bug report.

Code extension to cover multiple maps in one run?

Studying the code a bit will enable you to modify, beautify and extend it. Personally, I just have no time which I could invest in major changes right now. If you want to extend the program yourself to cover multiple maps in one run I recommend the following approach:

Build loops across the methods “_prepare_precursor()” and “_precursor()” for a range of defined maps. Save at least some of the identified basic fluctuation patterns for initial input images. Then build and start a method for a separate run which reconstructs input images from the saved fluctuation patterns and builds the related OIP-images – a process which does not take much time. Select the one with the highest loss for each map. Finally, add a method to display the resulting OIP-images on an image grid covering all maps – similar to what we did with activation patterns in previous articles. A rough sketch of the first loop follows below.
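A very rough sketch of the first loop could look like the following. Note that the parameter names of _prepare_precursor() and _precursor() used here are assumptions of mine and must be checked against the class code given below:

# Hypothetical sketch only - parameter names of _prepare_precursor()/_precursor() are assumptions 
li_maps  = [38, 56, 81]      # illustrative selection of maps to investigate 
d_flucts = {}                # map-index => detected large scale fluctuation patterns 
for map_idx in li_maps:
    MyOIP._prepare_precursor(map_index = map_idx)    # assumed: sets up GradientTape and _iterate() for the map 
    MyOIP._precursor(li_axa = li_axa)                # assumed: systematic test of (3x3)-grid fluctuation patterns 
    d_flucts[map_idx] = list(MyOIP._li_of_flucts)    # keep the saved patterns for a later reconstruction run 

In a second run one would then, per map, rebuild input images via _build_initial_img_from_prec(), create the related OIP-images with _derive_OIP_for_Prec_Img() and keep only the result with the highest loss.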

Conclusion

The visualization of filters, i.e. the creation of input images with patterns that trigger a map optimally, can become a hard business when we work with maps of deep convolutional layers. Some maps there may not react to input images with purely statistical pixel values and no dominant fluctuation pattern on longer length scales. In this article I have discussed the methods of a simple Python class which at least allows for a more systematic, though GPU/CPU-intensive, approach. It should be easy for readers who work a bit with Python to extend the code and tackle more elaborate tasks – also outside the MNIST case.

In the next article of this series

A simple CNN for the MNIST dataset – XII – filter visualization for maps of the first two convolutional layers

we shall look a bit at the other convolutional layers. So far we have only covered maps of the deepest Conv layer. Later on we shall close our first encounter with a (simple) CNN by answering a question posed at the series’ beginning: What changes if we re-train the CNN on a shuffled MNIST data set?

Some Python code for OIP detection and creation

 
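For orientation: after loading the module in a Jupyter environment the class is typically instantiated as sketched here (a minimal example; the h5-file must contain your own trained CNN, and map 38 is just an illustrative choice):

MyOIP = My_OIP( cnn_model_file = 'cnn_MNIST_best.h5', 
                layer_name = 'Conv2D_3', 
                map_index  = 38, 
                img_dim    = 28, 
                b_build_oip_model = True )
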
'''
Module to create OIPs as visualizations of filters of a simple CNN for the MNIST data set
@version: 0.6, 10.10.2020
@change: V0.5:  was based on version 0.4 which was originally created in Jupyter cells
@change: V0.6: General revision of class "my_OIP" and its methods
@change: V0.6: Changes to the documentation 
@attention: General status: For experimental purposes only! 
@requires: A full CNN trained on MNIST data 
@requires: A Keras model of the CNN and weight data saved in a h5-file, e.g. "cnn_MNIST_best.h5". 
           This file must be placed in the main directory of the Jupyter notebooks.
@requires: A Jupyter environment - from where the class My_OIP is called and where plotting takes place 
@note: The description of the class interface via the __init__()-method may be incomplete
@note: The use of prefixes li_ and ay_ is not yet consistent. ay_ should indicate numpy arrays, li_ instead normal Python lists
@warning: This version has not been tested outside a Jupyter environment - plotting in GTK/Qt-environment may require substantial changes 
@status: Under major development with frequent changes 
@author: Dr. Ralph Mönchmeyer
@copyright: Simplified BSD License, 10.10.2020. Copyright (c) 2020, Dr. Ralph Moenchmeyer, Augsburg, Germany

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
'''

# Modules to be imported - these libs must be imported in a Jupyter cell nevertheless 
# ~~~~~~~~~~~~~~~~~~~~~~~~
import time 
import sys 
import math
import os 
from os import path as path

import numpy as np
from numpy import save  # used to export intermediate data
from numpy import load

import scipy
from sklearn.preprocessing import StandardScaler
from itertools import product 

import tensorflow as tf
from tensorflow import keras as K
from tensorflow.python.keras import backend as B  # this is the only version compatible with TF 1 compat statements
#from tensorflow.keras import backend as B 
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from tensorflow.keras import optimizers
from tensorflow.keras.optimizers import schedules
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist

from tensorflow.python.client import device_lib

import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import matplotlib.patches as mpat 

from IPython.core.tests.simpleerr import sysexit

class My_OIP:
    '''
    @summary: This class allows for the creation and the display of OIP-patterns, 
              to which a selected map of a CNN-model and related filters react maximally
    
    @version: Version 0.6, 10.10.2020
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    @change: Revised methods 
    @requires: In the present version the class My_OIP requires: 
                * a CNN-model which works with standardized (!) input images, size 28x28,
                * a CNN-model which was trained on MNIST digit data,
                * exactly 4 length scales for random data fluctuations are used to compose initial statistical image data 
                  (the length scales should roughly have a factor of 2 between them) 
                * Assumption : exactly 1 input image and not a batch of images is assumed in various methods 
    
    @note: Main Functions:     
    0) __init__()
    1) _load_cnn_model()             => load cnn-model
    2) _build_oip_model()            => build an oip-model to create OIP-images
    3) _setup_gradient_tape_and_iterate_function()        
                                    => Implements TF2 GradientTape to watch input data for eager gradient calculation
                                    => Creates a convenience function by the help of Keras to iterate and optimize the OIP-adjustments
    4) _oip_strat_0_optimization_loop():
                                     => Method implementing a simple strategy to create OIP-images, 
                                        based on superposition of random data on long range data (compared to 28 px) 
                                        The optimization uses "gradient ascent" to get an optimum output of the selected Conv map 

    5) _derive_OIP():                => Method used to start the creation of an OIP-image for a chosen map 
                                        - based on an input image with statistical random data 
    6) _derive_OIP_for_Prec_Img():   => Method used to start the creation of an OIP-image for a chosen map 
                                        - based on an input image which was derived from a PRECURSOR run, 
                                        which tests the reaction of the map to large scale fluctuations 
                                        
    7) _build_initial_img_data():    => Builds an input image based on random data for fluctuations on 4 length scales 
    8) _build_initial_img_from_prec():    
                                     => Reconstruct an input image based on saved random data for long range fluctuations 
    9) _prepare_precursor():         => Prepare a _precursor run by setting up TF2 GradientTape and the _iterate()-function 
    10) _precursor():                => Precursor run which systematically tests the reaction of a selected convolutional map 
                                        to long range fluctuations based on a (3x3)-grid upscaled to the real image size  
    11) _display_precursor_imgs():   => A method which plots up to 8 selected precursor images with fluctuations,
                                        which triggered a maximum map reaction 
    12) _transform_tensor_to_img():  => A method which allows to transform tensor data of a standardized (!) image to standard image data 
                                        with (gray) pixel values in [0, 255]. Parameters allow for a contrast enhancement. 
    
    Usage hints 
    ~~~~~~~~~~~
    @note: If maps of a new convolutional layer are to be investigated then method _build_oip_model(layer_name) has to be rerun 
           with the layer's name as input parameter
    '''
    
    # Method to initialize an instantiation object 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def __init__(self, cnn_model_file = 'cnn_MNIST_best.h5', 
                 layer_name = 'Conv2D_3', 
                 map_index = 0, 
                 img_dim = 28, 
                 b_build_oip_model = True  
                ): 
        '''
        @summary: Initialization of an object instance - read in a CNN model, build an OIP-Model 
        @note: Input: 
        ~~~~~~~~~~~~
        @param cnn_model_file:  Name of a file containing a fully trained CNN-model; 
                                the model can later be overwritten by self._load_cnn_model()
        @param layer_name:      We can define a layer name, which we are interested in,  already when starting;
                                the layer can later be overwritten by self._build_oip_model()
        @param map_index:       We can define a map, which we are interested in,  already when starting;
                                A map-index is NOT required for building the OIP-model, but for the GradientTape-object 
        @param img_dim:         The dimension of the assumed quadratic images (28 for MNIST)

        @note: Major internal variables:
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _cnn_model:        A reference to the CNN model object
        @ivar _layer_name:       The name of convolutional layer 
                                 (can be overwritten by method _build_oip_model() ) 
        @ivar _map_index:        Index of the map in the chosen layer's output array 
                                 (can later be overwritten by other methods) 
        @ivar _r_cnn_inputs:     A reference to the input tensor of the CNN model 
                                 Could be a batch of images; but in this class only 1 image is assumed
        @ivar _layer_output:     Tensor with all maps of a certain layer    
        @ivar _oip_submodel:     A new model connecting the input of the CNN-model with a certain map's (!) output
        @ivar _tape:             An instance of TF2's GradientTape-object 
                                    Watches input, output, loss of a model 
                                    and calculates gradients in TF2 eager mode 
        @ivar _r_oip_outputs:    Reference to the output of the new OIP-model = map-activation
        @ivar _r_oip_grads:      Reference to gradient tensors for the new OIP-model (output dependency on input image pixels)
        @ivar _r_oip_loss:       Reference to a loss defined on the OIP-output - i.e. on the activation values of the map's nodes;
                                 Normally chosen to be an average of the nodes' activations 
                                 The loss defines a hyperplane on the (28x28)-dim representation space of the input image pixel values  
        @ivar _val_oip_loss:     Loss value for a certain input image 
        @ivar _iterate           Reference to a Keras backend function which invokes the new OIP-model for a given image
                                 and calculates both loss and gradient values (in TF2 eager mode) 
                                 This is the function to be used in the optimization loop for OIPs
        
        @note: Internal Parameters controlling the optimization loop:
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _oip_strategy:        0, 1 - There are two strategies to evolve OIP patterns out of statistical data 
                                    - only the first strategy is supported in this version 
                                    Both strategies can be combined with a precursor calculation 
                                    0: Simple superposition of fluctuations at different length scales
                                    1: NOT YET SUPPORTED 
                                    Evolution over partially evolved images based on longer scale variations 
                                    enriched with fluctuations on shorter length scales 

        @ivar _ay_epochs:           A list of 4 optimization epochs to be used whilst 
                                    evolving the img data via strategy 1 and intermediate images 
        @ivar _n_epochs:            Number of optimization epochs to be used with strategy 0 
        @ivar _n_steps:             Defines at how many intermediate points we show images and report 
                                    during the optimization process 
        @ivar _epsilon:             Factor to control the amount of correction imposed by the gradient values of the oip-model 

        @note: Input image data of the OIP-model and references to it 
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _initial_precursor_img:  The initial image to start a precursor optimization with.
                                       Would normally be an image of only long range fluctuations. 
        @ivar _precursor_image:        The evolved image created and selected by the precursor loop 

        @ivar _initial_inp_img_data:  A tensor representing the data of the input image 
        @ivar _inp_img_data:          A tensor representing the image data used and varied during the optimization 
        @ivar _img_dim:               We assume quadratic images to work with 
                                      with dimension _img_dim along each axis
                                      For the time being we only support MNIST images 
        
        @note: Internal parameters controlling the composition of random initial image data 
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _li_dim_steps:        A list of the intermediate dimensions for random data;
                                    these data are smoothly scaled to the image dimensions 
        @ivar _ay_facts:            A Numpy array of 4 factors to control the amount of 
                                    contribution of the statistical variations 
                                    on the 4 length scales to the initial image
        
        @note: Internal variables to save data of a precursor run
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
        @ivar _list_of_covs: list of long range fluctuation data for a (3x3)-grid covering the image area 
        @ivar _li_fluct_enrichments: [li_facts, li_dim_steps] data for enrichment with small fluctuations 
        
        
        @note: Internal variables for plotting
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @ivar _li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
        
        '''    
        
        # Input data and variable initializations
        # ****************************************
        
        # the model 
        # ~~~~~~~~~~
        self._cnn_model_file = cnn_model_file
        self._cnn_model      = None 
        
        # the chosen layer of the CNN-model
        self._layer_name = layer_name
        # the index of the map in the layer array
        self._map_index  = map_index
        
        # References to objects and the OIP-submodel
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._r_cnn_inputs  = None # reference to input of the CNN_model, also used in the oip-model  
        self._layer_output  = None
        self._oip_submodel  = None
        
        # References to watched GradientTape objects 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._tape          = None # TF2 GradientTape variable
        # some "references"
        self._r_oip_outputs = None # output of the oip-submodel to be watched 
        self._r_oip_grads   = None # gradients determined by GradientTape   
        self._r_oip_loss    = None # loss function
        # loss and gradient values (to be produced by a backend function _iterate() )
        self._val_oip_grads = None
        self._val_oip_loss  = None
        
        # The Keras function to produce concrete outputs of the new OIP-model  
        self._iterate       = None
        
        # The strategy to produce an OIP pattern out of statistical input images 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~
        # 0: Simple superposition of fluctuations at different length scales 
        # 1: Move over 4 intermediate images - partially optimized 
        self._oip_strategy = 0
        
        # Parameters controlling the OIP-optimization process 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~
        # number of epochs for optimization strategy 1
        self._ay_epochs    = np.array((20, 40, 80, 400), dtype=np.int32)
        len_epochs         = len(self._ay_epochs)
        
        # number of epochs for optimization strategy 0
        self._n_epochs     = self._ay_epochs[len_epochs-1]   
        self._n_steps      = 6   # divides the number of n_epochs into n_steps to produce intermediate outputs
        
        # size of corrections by gradients
        self._epsilon       = 0.01 # step-size for gradient correction
        
        # Input image types and references 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # precursor image
        self._initial_precursor_img = None
        self._precursor_img         = None # output of the _precursor-method 
        
        # The input image for the OIP-creation - a superposition of initial random fluctuations
        self._initial_inp_img_data  = None  # The initial data constructed 
        self._inp_img_data          = None  # The data used and varied for optimization 
        # image dimension
        self._img_dim               = img_dim   # = 28 => MNIST images for the time being 
        
        # Parameters controlling the setup of an initial image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~--------~~~~~~~~~~~~~~~~~~~
        # The length scales for initial input fluctuations
        self._li_dim_steps = ( (3, 3), (7,7), (14,14), (28,28) ) # can later be overwritten 
        # Parameters for fluctuations  - used both in strategy 0 and strategy 1  
        self._ay_facts     = np.array( (0.5, 0.5, 0.5, 0.5), dtype=np.float32 )
        
        # Data of a _precursor()-run 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._list_of_covs = None       # list of long range fluctuation data for a (3x3)-grid covering the image area 
        self._li_fluct_enrichments = None # = [li_facts, li_dim_steps] - a list with 2 lists of data enrichments for small fluctuations 
        # These data are required to reconstruct the input image to which a map reacted 
        
        # List of references to axis subplots
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # These references may change from Jupyter cell to Jupyter cell and are provided by the called methods
        self._li_axa = None # will be set by methods according to axes-frames in Jupyter cells 
        # axis frames for a single image in 2 versions (with contrast enhancement)
        self._ax1_1 = None
        self._ax1_2 = None 
        
        # ********************************************************
        # Model setup - load the cnn-model and build the OIP-model
        # ************
        if path.isfile(self._cnn_model_file): 
            # We trigger the initial load of a model 
            self._load_cnn_model(file_of_cnn_model = self._cnn_model_file, b_print_cnn_model = True)
            # We trigger the build of a new sub-model based on the CNN model used for OIP search 
            self._build_oip_model(layer_name = self._layer_name, b_print_oip_model = True ) 
        else:
            print("<\nWarning: The standard file " + self._cnn_model_file + 
                  " for the cnn-model could not be found!\n " + 
                  " Please use method _load_cnn_model() to load a valid model")
            sys.exit()
        return
    
    
    #
    # Method to load a specific CNN model
    # **********************************
    def _load_cnn_model(self, file_of_cnn_model=None, b_print_cnn_model=True ):
        '''
        @summary: Method to load a CNN-model from a h5-file and create a reference to its input (image)
        @version: 0.2 of 28.09.2020
        @requires: filename must already have been saved in _cnn_model_file or been given as a parameter 
        @requires: file must be a h5-file 
        @change: minor changes - documentation 
        @note: A reference to the CNN's input is saved internally
        @warning: An input in form of an image - a MNIST-image - is implicitly assumed
        
        @note: Parameters
        -----------------
        @param file_of_cnn_model: Name of h5-file with the trained (!) CNN-model
        @param b_print_cnn_model: boolean - Print some information on the CNN-model 
        '''
        if file_of_cnn_model != None:
            self._cnn_model_file = file_of_cnn_model
        
        # Check existence of the file
        if not path.isfile(self._cnn_model_file): 
            print("\nWarning: The file " + file_of_cnn_model + 
                  " for the cnn-model could not be found!\n" + 
                  "Please change the parameter \"file_of_cnn_model\"" + 
                  " to load a valid model")
        
        # load the CNN model 
        self._cnn_model = models.load_model(self._cnn_model_file)
        
        # Inform about the model and its file
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~
        print("Used file to load a ´ model = ", self._cnn_model_file)
        # we print out the models structure
        if b_print_cnn_model:
            print("Structure of the loaded CNN-model:\n")
            self._cnn_model.summary()
        
        # handle/references to the model's input => more precisely the input image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #    Note: As we only have one image instead of a batch 
        #    we pick only the first tensor element!
        #    The inputs will be needed for building the oip-model 
        self._r_cnn_inputs = self._cnn_model.inputs[0]  # !!! We have a batch with just ONE image 
        
        # print out the shape - it should be known from the original cnn-model
        if b_print_cnn_model:
            print("shape of cnn-model inputs = ", self._r_cnn_inputs.shape)
        
        return
    
    
    #
    # Method to construct a model to optimize input for OIP-detection 
    # ***************************************
    def _build_oip_model(self, layer_name = 'Conv2D_3', b_print_oip_model=True ): 
        '''
        @summary: Method to build a new (sub-) model - the "OIP-model" - of the CNN-model by 
                  connecting the input of the CNN-model with one of its Conv-layers
        @version: 0.4 of 28.09.2020
        @change: Minor changes - documentation 
        @note: We need a Conv layer to build a working model for input image optimization
        We get the Conv layer by the layer's name 
        The new model connects the first input element of the CNN to the output maps of the named Conv layer of the CNN 
        We use Keras' models.Model() functionality 
        @note: The layer's name is crucial for all later investigations - if you want to change it this method has to be rerun 
        @requires: The original, trained CNN-model must be loaded and referenced by self._cnn_model 
        @warning: Only 1 input image and not a batch is implicitly assumed 
        
        @note: Parameters
        -----------------
        @param layer_name: Name of the convolutional layer of the CNN for whose maps we want to find OIP patterns
        @param b_print_oip_model: boolean - Print some information on the OIP-model 
        
        '''
        # free some RAM - hopefully 
        del self._oip_submodel
        
        # check for loaded CNN-model
        if self._cnn_model == None: 
            print("Error: CNN-model not yet defined.")
            sys.exit()
        
        # get layer name 
        self._layer_name = layer_name
        
        # We build a new model based on the model inputs and the output 
        self._layer_output = self._cnn_model.get_layer(self._layer_name).output
        # Note: We do not care at the moment about a complex composition of the input 
        # We trust in that we handle only one image - and not a batch
        
        # Create the sub-model via Keras' models.Model()
        model_name = "mod_oip__" + layer_name 
        self._oip_submodel = models.Model( [self._r_cnn_inputs], [self._layer_output], name = model_name)                                    

        # We print out the oip model structure
        if b_print_oip_model:
            print("Structure of the constructed OIP-sub-model:\n")
            self._oip_submodel.summary()
        return
    
    
    #
    # Method to set up GradientTape and an iteration function providing loss and gradient values  
    # *********************************************************************************************
    def _setup_gradient_tape_and_iterate_function(self, b_print = True):
        '''
        @summary: Central method to watch input variables and resulting gradient changes 
        @version: 0.5 of 28.09.2020
        @change: 
        @note: For TF2 eager execution we need to watch input changes and trigger automatic gradient evaluation
        @note: The normalization of the gradient is strongly recommended; as we fix epsilon for correction steps 
               we thereby will get changes to the input data of an approximately constant order.
               This - together with standardization of the images (!) - will lead to convergence at the size of epsilon!
        '''   
        # Working with TF2 GradientTape
        self._tape = None

        # Watch out for input, output variables with respect to gradient changes
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        with tf.GradientTape() as self._tape: 
            # Input
            # ~~~~~
            self._tape.watch(self._r_cnn_inputs)
            # Output
            # ~~~~~~
            self._r_oip_outputs = self._oip_submodel(self._r_cnn_inputs)
            # Loss 
            self._r_oip_loss = tf.reduce_mean(self._r_oip_outputs[0, :, :, self._map_index])
            #self._loss = B.mean(oip_output[:, :, :, map_index])
            #self._loss = B.mean(oip_outputs[-1][:, :, map_index])
            #self._loss = tf.reduce_mean(oip_outputs[-1][ :, :, map_index])
            if b_print:
                print(self._r_oip_loss)
                print("shape of oip_loss = ", self._r_oip_loss.shape)
        
        # Gradient definition and normalization
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._r_oip_grads  = self._tape.gradient(self._r_oip_loss, self._r_cnn_inputs)
        print("shape of grads = ", self._r_oip_grads.shape)

        # normalization of the gradient - required for convergence 
        t_tiny = tf.constant(1.e-7, tf.float32)
        self._r_oip_grads /= (tf.sqrt(tf.reduce_mean(tf.square(self._r_oip_grads))) + t_tiny)
        #self._r_oip_grads /= (B.sqrt(B.mean(B.square(self._r_oip_grads))) + 1.e-7)
        #self._r_oip_grads = tf.image.per_image_standardization(self._r_oip_grads)

        # Define an abstract recallable Keras function as a convenience function for iterations 
        #     The _iterate()-function produces loss and gradient values for corrected img data 
        #     The first list of addresses points to the input data, the last to the output data 
        self._iterate = B.function( [self._r_cnn_inputs], [self._r_oip_loss, self._r_oip_grads] )
        
    
    #        
    # Method to optimize an emerging OIP out of statistical input image data 
    # (simple strategy - just optimize, no precursor, no intermediate pattern evolution 
    # ********************************
    def _oip_strat_0_optimization_loop(self, conv_criterion = 5.e-4, 
                                            b_stop_with_convergence = False,
                                            b_show_intermediate_images = True,
                                            b_print = True):
        '''
        @summary: Method to control the optimization loop for OIP reconstruction of an initial input image
                  with a statistical distribution of pixel values. 
        @version: 0.4 of 28.09.2020
        @changes: Minor changes - eliminated some unused variables
        @note:    The function also provides intermediate output in the form of printed data and images.
        @requires: An input image tensor must already be available at _inp_img_data - created by _build_initial_img_data()
        @requires: Axis-objects for plotting, typically provided externally by the calling functions 
                  _derive_OIP() or _precursor()
        
        @note: Parameters:
        ----------------- 
        @param conv_criterion:  A small threshold number for (difference of loss-values / present loss value )
        @param b_stop_with_convergence: Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
        @param b_show_intermediate_images: Boolean which decides whether we show up to 8 intermediate images 
        @param b_print: Boolean which decides whether we print intermediate loss values 
        
        @note: Intermediate information is provided at _n_steps intervals, 
               which are logarithmically distanced with respect to _n_epochs
               
               Reason: Most changes happen at the beginning 
        @note: This method produces some intermediate output during the optimization loop in the form of images.
        It uses an external grid of plot frames and their axes-objects. The addresses of the 
        axis-objects must be provided by an external list "li_axa[]" to self._li_axa[].  
        We need a sequence of _n_steps+2 axis-frames (or more) => len(_li_axa) >= _n_steps + 2. 
        
        @todo: Loop not optimized for TF 2 - but not so important here - a run takes less than a second 
        
        '''
        
        # Check that we already have an input image tensor
        if ( (self._inp_img_data is None) or 
             (self._inp_img_data.shape[1] != self._img_dim) or 
             (self._inp_img_data.shape[2] != self._img_dim) ) :
            print("There is no initial input image or it does not fit dimension requirements (28,28)")
            sys.exit()

        # Print some information
        if b_print:
            print("*************\nStart of optimization loop\n*************")
            print("Strategy: Simple initial mixture of long and short range variations")
            print("Number of epochs = ", self._n_epochs)
            print("Epsilon =  ", self._epsilon)
            print("*************")

        # some initial value
        loss_old = 0.0
        
        # Preparation of intermediate reporting / img printing
        # --------------------------------------
        # Logarithmic spacing of steps (most things happen initially)
        n_el = math.floor(self._n_epochs / 2**(self._n_steps) ) 
        li_int = []
        for j in range(self._n_steps):
            li_int.append(n_el*2**j)
        if b_print:
            print("li_int = ", li_int)
        # A counter for intermediate reporting  
        n_rep = 0
        
        # Convergence? - A list for steps meeting the convergence criterion
        # ~~~~~~~~~~~~
        li_conv = []
        
        
        # optimization loop 
        # *******************
        # counter for steps with zero loss and gradient values 
        n_zeros = 0
        
        for j in range(self._n_epochs):
            
            # Get output values of our Keras iteration function 
            # ~~~~~~~~~~~~~~~~~~~
            self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
            
            # loss difference to last step - should steadily become smaller 
            loss_diff = self._val_oip_loss - loss_old 
            if b_print:
                print("j: ", j, " :: loss_val = ", self._val_oip_loss, " :: loss_diff = ",  loss_diff)
                # print("loss_diff = ", loss_diff)
            loss_old = self._val_oip_loss

            if j > 10 and (loss_diff/(self._val_oip_loss + 1.e-7)) < conv_criterion:
                li_conv.append(j)
                lenc = len(li_conv)
                # print("conv - step = ", j)
                # stop only after the criterion has been met in 4 successive steps
                if b_stop_with_convergence and lenc > 5 and li_conv[-4] == j-4:
                    return
            
            grads_val     = self._val_oip_grads
            #grads_val =   normalize_tensor(grads_val)
            
            # The gradients average value 
            avg_grads_val = (tf.reduce_mean(grads_val)).numpy()
            
            # Check if our map reacts at all
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if self._val_oip_loss == 0.0 and avg_grads_val == 0.0 and b_print :
                print( "0-values, j= ", j, 
                       " loss = ", self._val_oip_loss, " avg_loss = ", avg_grads_val )
                n_zeros += 1
            
            if n_zeros > 10 and b_print: 
                print("
More than 10 times zero values - Try a different initial random distribution of pixel values")
                return
            
            # gradient ascent => Correction of the input image data 
            # ~~~~~~~~~~~~~~~
            self._inp_img_data += self._val_oip_grads * self._epsilon
            
            # Standardize the corrected image - we won't get a convergence otherwise 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
            
            # Some output at intermediate points 
            #     Note: We use logarithmic intervals because most changes 
            #           appear in the initial third of the loop's range  
            if (j == 0) or (j in li_int) or (j == self._n_epochs-1) :
                if b_print:
                    # print some info 
                    print("\nstep " + str(j) + " finalized")
                    #print("Shape of grads = ", grads_val.shape)
                    print("present loss_val = ", self._val_oip_loss)
                    print("loss_diff = ", loss_diff)
                # show the intermediate image data 
                if b_show_intermediate_images: 
                    imgn = self._inp_img_data[0,:,:,0].numpy()
                    # print("Shape of intermediate img = ", imgn.shape)
                    self._li_axa[n_rep].imshow(imgn, cmap=plt.cm.get_cmap('viridis'))
                    # counter
                    n_rep += 1
        
        return
    
    
    #        
    # Standard UI-method to derive OIP from a given initial input image
    # ********************
    def _derive_OIP(self, map_index = 1, 
                          n_epochs = None, 
                          n_steps = 6, 
                          epsilon = 0.01, 
                          conv_criterion = 5.e-4, 
                          li_axa = [], 
                          ax1_1 = None, ax1_2 = None, 
                          b_stop_with_convergence = False,
                          b_show_intermediate_images = True,
                          b_print = True):
        '''
        @summary: Method to create an OIP-image for a given initial input image
                  This is the standard user interface for finding an OIP 
        @warning: This method should NOT be used for finding an initial precursor image 
                  Use _prepare_precursor() to define the map first and then _precursor() to evaluate initial images 
        @version: V0.4, 28.09.2020
        @changes: Minor changes - added internal _li_axa for plotting, added documentation 
                  This method starts the process of producing an OIP of statistical input image data
        
        @requires: A map index should be provided to this method 
        @requires: An initial input image with statistical fluctuations of pixel values must have been created. 

        @warning:    This version only supports the most simple strategy - "strategy 0" 
        -------------    Optimize in one loop - starting from a superposition of fluctuations
                         No precursor, no intermediate evolutions

        @note: Parameters:
        -----------------
        @param map_index: We can and should chose a map here  (overwrites previous settings)
        @param n_epochs: Number of optimization steps  (overwrites previous settings) 
        @param n_steps:  Defines number of intervals (with length n_epochs/ n_steps) for reporting
                         standard value: 6 => 8 images - start image, end image + 6 intermediate 
                         This number also sets a requirement for providing (n_step + 2) external axis-frames 
                         to display intermediate images of the emerging OIP  
                         => see _oip_strat_0_optimization_loop()
        @param epsilon:  Size for corrections by gradient values
        @param conv_criterion: A small threshold number for convergence (checks:  difference of loss-values / present loss value )
        @param b_stop_with_convergence: 
                         Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
        @param _li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
                 
        
        @note: Preparations for plotting: 
        We need n_step + 2 axis-frames which must be provided externally
        
        With Jupyter this can externally be done by statements like 

        # figure
        # -----------
        #sizing
        fig_size = plt.rcParams["figure.figsize"]
        fig_size[0] = 16
        fig_size[1] = 8
        fig_a = plt.figure()
        axa_1 = fig_a.add_subplot(241)
        axa_2 = fig_a.add_subplot(242)
        axa_3 = fig_a.add_subplot(243)
        axa_4 = fig_a.add_subplot(244)
        axa_5 = fig_a.add_subplot(245)
        axa_6 = fig_a.add_subplot(246)
        axa_7 = fig_a.add_subplot(247)
        axa_8 = fig_a.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
        
        '''
        # Get input parameters
        self._map_index = map_index
        self._n_epochs  = n_epochs   
        self._n_steps   = n_steps
        self._epsilon   = epsilon
        
        # references to plot frames 
        self._li_axa = li_axa
        num_frames = len(li_axa)
        if num_frames < n_steps+2:
            print("The number of available image frames (", num_frames, ") is smaller than required for intermediate output (", n_steps+2, ")")
            sys.exit()
        
        # 2 axes frames to display the final OIP image (with contrast enhancement) 
        self._ax1_1 = ax1_1
        self._ax1_2 = ax1_2
        
        # number of epochs for optimization strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if n_epochs == None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = n_epochs

        # Reset some variables  
        self._val_oip_grads = None
        self._val_oip_loss  = None 
        self._iterate       = None 

        # Setup the TF2 GradientTape watch and a Keras function for iterations 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._setup_gradient_tape_and_iterate_function(b_print = b_print)
        if b_print:
            print("GradienTape watch activated ")
        
        '''
        # Gradient handling - so far we only deal with addresses 
        # ~~~~~~~~~~~~~~~~~~
        self._r_oip_grads  = self._tape.gradient(self._r_oip_loss, self._r_cnn_inputs)
        print("shape of grads = ", self._r_oip_grads.shape)
        
        # normalization of the gradient 
        self._r_oip_grads /= (B.sqrt(B.mean(B.square(self._r_oip_grads))) + 1.e-7)
        #self._r_oip_grads = tf.image.per_image_standardization(self._r_oip_grads)
        
        # define an abstract recallable Keras function 
        # producing loss and gradients for corrected img data 
        # the first list of addresses points to the input data, the last to the output data 
        self._iterate = B.function( [self._r_cnn_inputs], [self._r_oip_loss, self._r_oip_grads] )
        '''
            
        # get the initial image into a variable for optimization 
        self._inp_img_data = self._initial_inp_img_data

        # Start optimization loop for strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if self._oip_strategy == 0: 
            self._oip_strat_0_optimization_loop( conv_criterion = conv_criterion, 
                                                b_stop_with_convergence = b_stop_with_convergence,  
                                                b_show_intermediate_images = b_show_intermediate_images,
                                                b_print = b_print
                                               )
        
        # Display the last OIP-image created at the end of the optimization loop
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # standardized image 
        oip_img = self._inp_img_data[0,:,:,0].numpy()
        # transformed image 
        oip_img_t = self._transform_tensor_to_img(T_img=self._inp_img_data[0,:,:,0])
        
        # display 
        ax1_1.imshow(oip_img, cmap=plt.cm.get_cmap('viridis'))
        ax1_2.imshow(oip_img_t, cmap=plt.cm.get_cmap('viridis'))
        
        return
    
    
    # 
    # Method to derive OIP from a given initial input image if map_index is already defined 
    # ********************
    def _derive_OIP_for_Prec_Img(self, 
                          n_epochs = None, 
                          n_steps = 6, 
                          epsilon = 0.01, 
                          conv_criterion = 5.e-4, 
                          li_axa = [], 
                          ax1_1 = None, ax1_2 = None, 
                          b_stop_with_convergence = False,
                          b_show_intermediate_images = True,
                          b_print = True):
        '''
        @summary: Method to create an OIP-image for an already given map-index and a given initial input image
                  This is the core of OIP-detection, which starts the optimization loop  
        @warning: This method should NOT be used directly for finding an initial precursor image 
                  Use _prepare_precursor() to define the map first and then _precursor() to evaluate initial images 
        @version: V0.4, 28.09.2020
        @changes: Minor changes - added internal _li_axa for plotting, added documentation 
                  This method starts the process of producing an OIP of statistical input image data
        
        @note:    This method should only be called after _prepare_precursor(), _precursor(), _build_initial_img_prec() 
                  For a trial of different possible precursor images rerun _build_initial_img_prec() with a different index
        
        @requires: A map index should be provided to this method 
        @requires: An initial input image with statistical fluctuations of pixel values must have been created. 

        @warning:    This version only supports the most simple strategy - "strategy 0" 
        -------------    Optimize in one loop - starting from a superposition of fluctuations
                         no intermediate evolutions

        @note: Parameters:
        -----------------
        @param n_epochs: Number of optimization steps  (overwrites previous settings) 
        @param n_steps:  Defines number of intervals (with length n_epochs/ n_steps) for reporting
                         standard value: 6 => 8 images - start image, end image + 6 intermediate 
                         This number also sets a requirement for providing (n_step + 2) external axis-frames 
                         to display intermediate images of the emerging OIP  
                         => see _oip_strat_0_optimization_loop()
        @param epsilon:  Size for corrections by gradient values
        @param conv_criterion: A small threshold number for convergence (checks:  difference of loss-values / present loss value )
        @param b_stop_with_convergence: 
                         Boolean which decides whether we stop a loop if the conv-criterion is fulfilled
        @param _li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
                 
        
        @note: Preparations for plotting: 
        We need n_step + 2 axis-frames which must be provided externally
        
        With Jupyter this can externally be done by statements like 

        # figure
        # -----------
        #sizing
        fig_size = plt.rcParams["figure.figsize"]
        fig_size[0] = 16
        fig_size[1] = 8
        fig_a = plt.figure()
        axa_1 = fig_a.add_subplot(241)
        axa_2 = fig_a.add_subplot(242)
        axa_3 = fig_a.add_subplot(243)
        axa_4 = fig_a.add_subplot(244)
        axa_5 = fig_a.add_subplot(245)
        axa_6 = fig_a.add_subplot(246)
        axa_7 = fig_a.add_subplot(247)
        axa_8 = fig_a.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
        
        '''
        # Get input parameters
        self._n_epochs  = n_epochs   
        self._n_steps   = n_steps
        self._epsilon   = epsilon
        
        # references to plot frames 
        self._li_axa = li_axa
        num_frames = len(li_axa)
        if num_frames < n_steps+2:
            print("The number of available image frames (", num_frames, ") is smaller than required for intermediate output (", n_steps+2, ")")
            sys.exit()
            
        # 2 axes frames to display the final OIP image (with contrast enhancement) 
        self._ax1_1 = ax1_1
        self._ax1_2 = ax1_2
        
        # number of epochs for optimization strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if n_epochs == None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = n_epochs
        
        # Note: No setup of GradientTape and _iterate() required - this is done by _prepare_precursor() 
            
        # get the initial image into a variable for optimization 
        self._inp_img_data = self._initial_inp_img_data

        # Start optimization loop for strategy 0 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if self._oip_strategy == 0: 
            self._oip_strat_0_optimization_loop( conv_criterion = conv_criterion, 
                                                b_stop_with_convergence = b_stop_with_convergence,  
                                                b_show_intermediate_images = b_show_intermediate_images,
                                                b_print = b_print
                                               )
        # Display the last OIP-image created at the end of the optimization loop
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # standardized image 
        oip_img = self._inp_img_data[0,:,:,0].numpy()
        # transformed image 
        oip_img_t = self._transform_tensor_to_img(T_img=self._inp_img_data[0,:,:,0])
        
        # display 
        ax1_1.imshow(oip_img, cmap=plt.cm.get_cmap('viridis'))
        ax1_2.imshow(oip_img_t, cmap=plt.cm.get_cmap('viridis'))
        
        return
    
    
    # 
    # Method to build an initial image from a superposition of random data on different length scales 
    # ***********************************
    def _build_initial_img_data( self, 
                                 strategy = 0, 
                                 li_epochs    = [20, 50, 100, 400], 
                                 li_facts     = [0.5, 0.5, 0.5, 0.5],
                                 li_dim_steps = [ (3,3), (7,7), (14,14), (28,28) ], 
                                 b_smoothing = False,
                                 ax1_1 = None, ax1_2 = None):
        
        '''
        @summary: Standard method to build an initial image with random fluctuations on 4 length scales
        @version: V0.2 of 29.09.2020
        
        @note: This method should NOT be used for initial images based on a precursor image. 
               See _build_initial_img_from_prec(), instead.  
        
        @note: This method constructs an initial input image with a statistical distribution of pixel-values.
        We use 4 length scales to mix fluctuations with different "wave-length" by a simple approach: 
        We fill four squares with a different number of cells below the number of pixels 
        in each dimension of the real input image; e.g. (3x3), (7x7), (14x14), (28x28) <= (28,28). 
        We fill the cells with random numbers in [-1.0, 1.]. We smoothly scale the resulting pattern 
        up to (28,28) (or whatever the input image dimensions are) by bicubic interpolations 
        and eventually add up all values. As a final step we standardize the pixel value distribution.          
        
        @warning: This version works with 4 length scales, only. 
        @warning: In the present version the parameters "strategy" and "li_epochs" have no effect 
        
        @note: Parameters:
        -----------------
        @param strategy:  The strategy, how to build an image (overwrites previous settings) - presently only 0 is supported 
        @param li_epochs: A list of epoch numbers which will be used in strategy 1 - not yet supported 
        @param li_facts:  A list of factors which control the relative strength of the 4 fluctuation patterns 
        @param li_dim_steps: A list of square dimensions for setting the length scale of the fluctuations  
        @param b_smoothing: Parameter which builds a control image   
        @param ax1_1: matplotlib axis-frame to display the built image 
        @param ax1_2: matplotlib axis-frame to display a second version of the built image 
        
        '''
        
        # Get input parameters 
        # ~~~~~~~~~~~~~~~~~~
        self._oip_strategy = strategy               # no effect in this version 
        self._ay_epochs    = np.array(li_epochs)    # no effect in this version 
        
        # factors by which to mix the random number fluctuations of the different length scales 
        self._ay_facts     = np.array(li_facts)
        # dimensions of the squares which simulate fluctuations 
        self._li_dim_steps = li_dim_steps
        
        # A Numpy array for the eventual superposition of random data 
        fluct_data = None
        
        
        # Strategy 0: Simple superposition of random patterns at 4 different wave-length
        # ~~~~~~~~~~
        if self._oip_strategy == 0:
            
            dim_1_1 = self._li_dim_steps[0][0] 
            dim_1_2 = self._li_dim_steps[0][1] 
            dim_2_1 = self._li_dim_steps[1][0] 
            dim_2_2 = self._li_dim_steps[1][1] 
            dim_3_1 = self._li_dim_steps[2][0] 
            dim_3_2 = self._li_dim_steps[2][1] 
            dim_4_1 = self._li_dim_steps[3][0] 
            dim_4_2 = self._li_dim_steps[3][1] 
            
            fact1 = self._ay_facts[0]
            fact2 = self._ay_facts[1]
            fact3 = self._ay_facts[2]
            fact4 = self._ay_facts[3]
            
            # print some parameter information
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            print("\nInitial image composition - strategy 0:\n Superposition of 4 different wavelength patterns")
            print("Parameters:\n", 
                 fact1, " => (" + str(dim_1_1) +", " + str(dim_1_2) + ") :: ", 
                 fact2, " => (" + str(dim_2_1) +", " + str(dim_2_2) + ") :: ", 
                 fact3, " => (" + str(dim_3_1) +", " + str(dim_3_2) + ") :: ", 
                 fact4, " => (" + str(dim_4_1) +", " + str(dim_4_2) + ")" 
                 )
            
            # fluctuations
            fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
            fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
            fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
            fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 
            
            # Scaling with bicubic interpolation to the required image size
            fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
            fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
            fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
            fluct4_scale = fluct4
            
            # superposition
            fluct_data = fact1*fluct1_scale + fact2*fluct2_scale + fact3*fluct3_scale + fact4*fluct4_scale
            
        
        # get the standardized plus smoothed and unsmoothed image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #  TF2 provides a function performing standardization of image data 
        fluct_data_unsmoothed = tf.image.per_image_standardization(fluct_data) 
        fluct_data_smoothed   = tf.image.per_image_standardization(
                                    tf.image.resize( fluct_data, [28,28], method="bicubic", antialias=True) )
        
        if b_smoothing: 
            self._initial_inp_img_data = fluct_data_smoothed
        else:
            self._initial_inp_img_data = fluct_data_unsmoothed
        
        # Display of both variants => there should be no difference 
        # ~~~~~~~~~~~~~~~~~~~~~~~~
        img_init_unsmoothed = fluct_data_unsmoothed[0,:,:,0].numpy()
        img_init_smoothed   = fluct_data_smoothed[0,:,:,0].numpy()
        ax1_1.imshow(img_init_unsmoothed, cmap=plt.cm.get_cmap('viridis'))
        ax1_2.imshow(img_init_smoothed, cmap=plt.cm.get_cmap('viridis'))
        
        print("Initial images plotted")
        
        return self._initial_inp_img_data    


    # 
    # Method to build an initial image from a superposition of a PRECURSOR image with random data on different length scales 
    # ***********************************
    def _build_initial_img_from_prec( self, 
                                 prec_index = -1,
                                 strategy = 0, 
                                 li_epochs    = (20, 50, 100, 400), 
                                 li_facts     = (1.0, 0.0, 0.0, 0.0, 0.0),
                                 li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                                 b_smoothing = False,
                                 b_print = True, 
                                 b_display = False, 
                                 ax1_1 = None, ax1_2 = None):
        
        '''
        @summary: Method to build an initial image based on a Precursor image => Input for _derive_OIP_for_Prec_Img()
        @version: V0.3, 03.10.2020
        @changes: V0.2: Minor- only documentation and comparison of index to length of _li_of_flucts[] 
        @changes: V0.3: Added Booleans to control the output and display of images 
        @changes: V0.4: Extended the reconstruction part / extended documentation
        
        @note: This method differs from _build_initial_img_data() as follows:
                * It uses a Precursor image as the fundamental image 
                * The data of the Precursor Image will be reconstructed from a (3x3) fluctuation pattern and enrichments
                * This method adds even further fluctuations if requested 
        @note: This method should be called manually from a Jupyter cell 
        @note: This method saves the reconstructed input image into self._initial_inp_img_data
        @note: This method should be followed by a call of self._derive_OIP_for_Prec_Img()
        
        @requires: Large scale fluctuation data saved in self._li_of_flucts[]
        @requires: Additional enrichment information in self._li_fluct_enrichments[]
        
        @param prec_index: This is an index ([0, 7[) of a large scale fluctuation pattern which was saved in self._li_of_flucts[] 
                           during the run of the method "_precursor()". The image tensor is reconstructed from the fluctuation pattern. 
        @warning: We support a maximum of 8 selected fluctuation patterns for which a map reacts 
        @warning: However, fewer precursor patterns may be found - so you should watch the output of _precursor() before you run this method
        
        @param li_facts:  A list of factors which control the relative strength of the precursor image vs. 
                          4 additional fluctuation patterns 
        @warning: Normally, it makes no sense to set li_facts[1] > 0 - because this will destroy the original large scale pattern 
        
        @note: For other input parameters see _build_initial_img_data()
        '''
        
        # Get input parameters 
        self._oip_strategy = strategy
        self._ay_facts     = np.array(li_facts)
        self._ay_epochs    = np.array(li_epochs)
        self._li_dim_steps = li_dim_steps
        
        fluct_data = None
        
        # Reconstruct a precursor image from a saved large scale fluctuation pattern (result of _precursor() ) 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        length_li_cov = len(self._li_of_flucts)
        if prec_index > -1: 
            if length_li_cov < prec_index+1:
                print("index too large for number of detected patterns (", length_li_cov, ")")
                sys.exit()
            cov_p = self._li_of_flucts[prec_index][1][1]
            
            fluct0_scale_p = tf.image.resize(cov_p, [28,28], method="bicubic", antialias=True)
            
            # Add fluctuation enrichments - if saved [ len(self._li_fluct_enrichments) > 0 ]
            if len(self._li_fluct_enrichments) > 0:
                # Scaling enrichment flucts with bicubic interpolation to the required image size
                fluct1_p = self._li_fluct_enrichments[2][0] 
                fluct2_p = self._li_fluct_enrichments[2][1] 
                fluct3_p = self._li_fluct_enrichments[2][2] 
                fluct4_p = self._li_fluct_enrichments[2][3] 
                
                fact0_p = self._li_fluct_enrichments[0][0]
                fact1_p = self._li_fluct_enrichments[0][1]
                fact2_p = self._li_fluct_enrichments[0][2]
                fact3_p = self._li_fluct_enrichments[0][3]
                fact4_p = self._li_fluct_enrichments[0][4]
                
                fluct1_scale_p = tf.image.resize(fluct1_p, [28,28], method="bicubic", antialias=True)
                fluct2_scale_p = tf.image.resize(fluct2_p, [28,28], method="bicubic", antialias=True)
                fluct3_scale_p = tf.image.resize(fluct3_p, [28,28], method="bicubic", antialias=True)
                fluct4_scale_p = fluct4_p
                
                fluct_scale_p = fact0_p*fluct0_scale_p \
                 + fact1_p*fluct1_scale_p + fact2_p*fluct2_scale_p \
                 + fact3_p*fluct3_scale_p + fact4_p*fluct4_scale_p
                 
            else: 
                fluct_scale_p = fluct0_scale_p
            
            # get the img-data 
            fluct_data_p  = tf.image.per_image_standardization(fluct_scale_p)     
            fluct_data_p  = tf.where(fluct_data_p > 5.e-6, fluct_data_p, tf.zeros_like(fluct_data_p))
            self._initial_inp_img_data = fluct_data_p
            self._inp_img_data         = fluct_data_p
            
        
        # Strategy 0: Simple superposition of the precursor image AND additional patterns at 4 different wavelengths
        # ~~~~~~~~~~
        if self._oip_strategy == 0:
            
            dim_1_1 = self._li_dim_steps[0][0] 
            dim_1_2 = self._li_dim_steps[0][1] 
            dim_2_1 = self._li_dim_steps[1][0] 
            dim_2_2 = self._li_dim_steps[1][1] 
            dim_3_1 = self._li_dim_steps[2][0] 
            dim_3_2 = self._li_dim_steps[2][1] 
            dim_4_1 = self._li_dim_steps[3][0] 
            dim_4_2 = self._li_dim_steps[3][1] 
            
            fact0 = self._ay_facts[0]
            fact1 = self._ay_facts[1]
            fact2 = self._ay_facts[2]
            fact3 = self._ay_facts[3]
            fact4 = self._ay_facts[4]
            
            # print some parameter information
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if b_print:
                print("\nInitial image composition - strategy 0:\n Superposition of 4 different wavelength patterns")
                print("Parameters:\n", 
                     fact0, " => precursor image \n", 
                     fact1, " => (" + str(dim_1_1) +", " + str(dim_1_2) + ") :: ", 
                     fact2, " => (" + str(dim_2_1) +", " + str(dim_2_2) + ") :: ", 
                     fact3, " => (" + str(dim_3_1) +", " + str(dim_3_2) + ") :: ", 
                     fact4, " => (" + str(dim_4_1) +", " + str(dim_4_2) + ")" 
                     )
            
            # fluctuations
            fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
            fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
            fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
            fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 
            
            # Scaling with bicubic interpolation to the required image size
            fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
            fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
            fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
            fluct4_scale = fluct4
            
            # superposition with the already calculated image 
            fluct_data = fact0 * self._initial_inp_img_data  \
                         + fact1*fluct1_scale + fact2*fluct2_scale \
                         + fact3*fluct3_scale + fact4*fluct4_scale
            
        
        # get the standardized plus smoothed and unsmoothed image 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        #    TF2 provides a function for the standardization of image data
        fluct_data_unsmoothed = fluct_data 
        fluct_data_smoothed   = tf.image.per_image_standardization(
                                    tf.image.resize( fluct_data, [28,28], 
                                                     method="bicubic", antialias=True) )
        
        if b_smoothing: 
            self._initial_inp_img_data = fluct_data_smoothed
        else:
            self._initial_inp_img_data = fluct_data_unsmoothed
        
        # There should be no difference 
        img_init_unsmoothed = fluct_data_unsmoothed[0,:,:,0].numpy()
        img_init_smoothed   = fluct_data_smoothed[0,:,:,0].numpy()
        
        if b_display:
            ax1_1.imshow(img_init_unsmoothed, cmap=plt.cm.get_cmap('viridis'))
            ax1_2.imshow(img_init_smoothed, cmap=plt.cm.get_cmap('viridis'))
            print("Initial images plotted")
        
        return self._initial_inp_img_data    
    
    #
    # Method to prepare a Precursor run which checks a variety of large scale fluctuations for optimum activation  
    # ***********************************
    def _prepare_precursor(self, map_index = 120, b_print = False):
        '''
        @summary: A method to prepare a Precursor run by setting up GradientTape and the _iterate() function for an optimization loop
        @version: 0.2, 30.09.2020
        @changes: Minor - adaptation to _setup_gradient_tape_and_iterate_function() instead of the obsolete _setup_gradient_tape()
        @requires: A loaded CNN-Model and an already built OIP-model 
        @requires: A valid map-index as input parameter 
        
        @note: This method sets up the GradientTape and _iterate only once 
               - it will not be done again during the investigation of thousands of different input images (with large scale fluctuations) 
               which we investigate during the _precursor()-run. 
        
        @param map_index: Index selecting a map for the CNN layer defined previously by _build_oip_model()
        @param b_print: Boolean - decides about intermediate output 
        
        '''
        
        # Get input parameters 
        # ~~~~~~~~~~~~~~~~~~~~~~
        self._map_index = map_index
        # Reset some variables  
        self._val_oip_grads = None
        self._val_oip_loss  = None 
        self._iterate       = None 
        
        # Setup the TF2 GradientTape watch
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._setup_gradient_tape_and_iterate_function(b_print = b_print)
        if b_print:
            print("GradientTape watch activated and _iterate()-function defined")
    
    
    # Method to prepare a Precursor run which checks a variety of large scale fluctuations for optimum activation
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def _precursor(self, 
                   li_pre_val=[0.2, 0.5, 0.8], 
                   num_epochs=10, 
                   loss_limit = 0.5, 
                   b_print = True, 
                   b_with_enriched_fluct = False, 
                   li_facts     = (1.0, 0.0, 0.0, 0.0, 0.1),
                   li_dim_steps = ( (3,3), (7,7), (14,14), (28,28) ), 
                   b_check = True, 
                   li_axa = None):

        '''
        @summary: Method to investigate thousands of input images with large scale fluctuations for the reaction of a specified map (filters)
                  and a given number of epochs in pattern creation
        @version: 0.5, 02.10.2020
        @changes: Minor - documentation, skipped some superfluous statements 
        @note: We select the 8 most dominant images - or less, if there are fewer input images which trigger the map 
        @requires: Previous run of _prepare_precursor() with a definition of the map-index
        @note: We vary only 3 given pixel values on (3x3) grids (19683 possibilities)
        @note: The optimization loop is completely done within this method - due to performance reasons
        
        @param li_pre_val: A list of three scaled pixel values between ]0, 1[ which shall be combined in (3x3)-fluctuation patterns
        
        @param num_epochs: The number of epochs used in the optimization loop for pattern creation
        @note: It is worthwhile to experiment with the number of epochs - the (3x3)-pattern selection may change !!!
        
        @param loss_limit: Threshold of loss for which we register a fluctuation image as relevant 
        
        @param b_print: Boolean - controls printout for intermediate results - useful to see map response 
        @param b_check: Check the response of map for the first saved pattern - check the image reconstruction at the same time
        
        @note: Parameters to enrich the (3x3)-large scale fluctuation with a small scale pattern 
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @param b_with_enriched_fluct: Boolean - controls whether we enrich the long-range pattern with other additional patterns 
        @param li_facts:  A list of factors which control the relative strength of the precursor image vs. 
                          4 additional fluctuation patterns 
        @param li_dim_steps: A list of square dimensions for setting the length scale of the fluctuations  
        
        @note: Parameters for plotting  
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @param li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
                 
        @note: Preparations for plotting: 
        We need at least 8 axes-frames which must be provided externally
        
        With Jupyter this can externally be done by statements like 

        # figure
        # -----------
        #sizing
        fig_size = plt.rcParams["figure.figsize"]
        fig_size[0] = 16
        fig_size[1] = 8
        fig_a = plt.figure()
        axa_1 = fig_a.add_subplot(241)
        axa_2 = fig_a.add_subplot(242)
        axa_3 = fig_a.add_subplot(243)
        axa_4 = fig_a.add_subplot(244)
        axa_5 = fig_a.add_subplot(245)
        axa_6 = fig_a.add_subplot(246)
        axa_7 = fig_a.add_subplot(247)
        axa_8 = fig_a.add_subplot(248)
        li_axa = [axa_1, axa_2, axa_3, axa_4, axa_5, axa_6, axa_7, axa_8]
        
        
        '''
        # Internal parameter 
        num_selected = 8 
        #check if length of li_axa is sufficient
        if li_axa == None or len(li_axa) < num_selected:
            print("Error: The length of the provided list with axes-frames for plotting must be at least ", num_selected )
            sys.exit()
        
        # get required external params 
        # ~~~~~~~~~~~~~~~~~~~~~~~~
        self._n_steps   = 2           # only a dummy 
        self._epsilon   = 0.01        # only a dummy  
        
        # number of epochs for optimization strategy 0 
        if num_epochs == None:
            len_epochs = len(self._ay_epochs)
            self._n_epochs   = self._ay_epochs[len_epochs-1]
        else: 
            self._n_epochs = num_epochs  
        
        # Create a fixed random fluctuation pattern which we can later overlay on the long range fluctuation patterns 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        self._li_dim_steps = li_dim_steps # dimensions for fluctuations 
        self._ay_facts     = np.array(li_facts)
        
        dim_1_1 = self._li_dim_steps[0][0] 
        dim_1_2 = self._li_dim_steps[0][1] 
        dim_2_1 = self._li_dim_steps[1][0] 
        dim_2_2 = self._li_dim_steps[1][1] 
        dim_3_1 = self._li_dim_steps[2][0] 
        dim_3_2 = self._li_dim_steps[2][1] 
        dim_4_1 = self._li_dim_steps[3][0] 
        dim_4_2 = self._li_dim_steps[3][1] 
        
        fact0 = self._ay_facts[0]
        fact1 = self._ay_facts[1]
        fact2 = self._ay_facts[2]
        fact3 = self._ay_facts[3]
        fact4 = self._ay_facts[4]
        
        # Create fluctuation patterns for enrichment 
        fluct1 =  2.0 * ( np.random.random((1, dim_1_1, dim_1_2, 1)) - 0.5 ) 
        fluct2 =  2.0 * ( np.random.random((1, dim_2_1, dim_2_2, 1)) - 0.5 ) 
        fluct3 =  2.0 * ( np.random.random((1, dim_3_1, dim_3_2, 1)) - 0.5 ) 
        fluct4 =  2.0 * ( np.random.random((1, dim_4_1, dim_4_2, 1)) - 0.5 ) 
        
        li_fluct = [fluct1, fluct2, fluct3, fluct4]
        
        # Scaling with bicubic interpolation to the required image size
        fluct1_scale = tf.image.resize(fluct1, [28,28], method="bicubic", antialias=True)
        fluct2_scale = tf.image.resize(fluct2, [28,28], method="bicubic", antialias=True)
        fluct3_scale = tf.image.resize(fluct3, [28,28], method="bicubic", antialias=True)
        fluct4_scale = fluct4

        
        # Create cartesian product of combinatoric possibilities for a (3x3)-grid of long range fluctuations
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        cp = list(product(li_pre_val, repeat=9))
        num = len(cp) 
        print ("We test ", num, " possibilities for a (3x3) fluctuations ")
        
        # Prepare lists to save parameter data for the fluctuation pattern 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # Intermediate list to save relevant long scale fluctuations 
        d_cov = {}
        # list to save parameters for an enrichments of the large scale pattern with small fluctuations
        if b_with_enriched_fluct:
            self._li_fluct_enrichments = [li_facts, li_dim_steps, li_fluct]
        else:
            self._li_fluct_enrichments = []
            
        # Loop to check for relevant fluctuations => Loop over all combinations
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        for i_cp in range(num): 
            
            # create the value distribution 
            cov = np.array(cp[i_cp])
            cov = cov.reshape(1,3,3,1) 

            # create basic image to investigate
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            fluct_scale = tf.image.resize(cov, [28,28], method="bicubic", antialias=True) 
            
            # enrich with additional small-scale fluctuations 
            if b_with_enriched_fluct: 
                # superposition with the already calculated image 
                fluct_scale = fact0 * fluct_scale \
                             + fact1*fluct1_scale + fact2*fluct2_scale \
                             + fact3*fluct3_scale + fact4*fluct4_scale
            
            #standardization of the image data 
            fluct_data  = tf.image.per_image_standardization(fluct_scale)     
            # eliminating very small values - proves to be helpful in many cases 
            fluct_data = tf.where(fluct_data > 5.e-6, fluct_data, tf.zeros_like(fluct_data))
            
            # save image data 
            self._initial_inp_img_data = fluct_data
            self._inp_img_data         = fluct_data
            
            # optimization loop 
            # ~~~~~~~~~~~~~~~~~
            for j in range(self._n_epochs):
                # Get output values of our Keras iteration function 
                self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
                # gradient ascent => Correction of the input image data 
                self._inp_img_data += self._val_oip_grads * self._epsilon
                # Standardize the corrected image - we won't get a convergence otherwise 
                self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
                
            # Check if we have a loss value > 0.x and save the (3x3)-data
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if self._val_oip_loss > loss_limit:
                d_cov[i_cp] = [self._val_oip_loss, cov] 
                if b_print:
                    tf.print("i = ", i_cp, " loss = ", self._val_oip_loss)

        # We restrict the number to those 8 with highest loss 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        if len(d_cov) > 0: 
            self._li_of_flucts = sorted(d_cov.items() , reverse = True,  key=lambda x: x[1][0])
            # print("num of relevant covs = ", len(self._li_of_flucts), len(d_cov))   
            print("\nnum of relevant covs = ", len(self._li_of_flucts))   
            self._li_of_flucts = self._li_of_flucts[:num_selected].copy()
            #save( 'li_of_flucts.npy', np.array(self._li_of_flucts, dtype=np.float32) )
            save( 'li_of_flucts.npy', self._li_of_flucts)
            # save the enrichment-setting 
            save('li_of_cov_enrichments.npy',self._li_fluct_enrichments)
            
            
            # Check whether the map really reacted - and check the reconstruction mechanism 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            if b_check: 
                print("check of map reaction to first selected image")
                cov_t = self._li_of_flucts[0][1][1]
                #print("cov_t-shape = ", cov_t.shape)
                #cov_del = cov_t - cov
                #print("cov_del =\n", cov_del)
                
                fluct0_scale_t = tf.image.resize(cov_t, [28,28], method="bicubic", antialias=True) 
                        
                # Add fluctuation enrichments - if saved
                if b_with_enriched_fluct:
                    
                    # Scaling enrichment flucts with bicubic interpolation to the required image size
                    fluct1_t = self._li_fluct_enrichments[2][0] 
                    fluct2_t = self._li_fluct_enrichments[2][1] 
                    fluct3_t = self._li_fluct_enrichments[2][2] 
                    fluct4_t = self._li_fluct_enrichments[2][3] 
                    
                    fact0_t = self._li_fluct_enrichments[0][0]
                    fact1_t = self._li_fluct_enrichments[0][1]
                    fact2_t = self._li_fluct_enrichments[0][2]
                    fact3_t = self._li_fluct_enrichments[0][3]
                    fact4_t = self._li_fluct_enrichments[0][4]
                    
                    fluct1_scale_t = tf.image.resize(fluct1_t, [28,28], method="bicubic", antialias=True)
                    fluct2_scale_t = tf.image.resize(fluct2_t, [28,28], method="bicubic", antialias=True)
                    fluct3_scale_t = tf.image.resize(fluct3_t, [28,28], method="bicubic", antialias=True)
                    fluct4_scale_t = fluct4_t
                    
                    fluct_scale_t = fact0_t*fluct0_scale_t \
                                 + fact1_t*fluct1_scale_t + fact2_t*fluct2_scale_t \
                                 + fact3_t*fluct3_scale_t + fact4_t*fluct4_scale_t
                    
                else: 
                    fluct_scale_t = fluct0_scale_t
                
                #standardization
                fluct_data_t  = tf.image.per_image_standardization(fluct_scale_t)     
                fluct_data_t  = tf.where(fluct_data_t > 5.e-6, fluct_data_t, tf.zeros_like(fluct_data_t))
                self._initial_inp_img_data = fluct_data_t
                self._inp_img_data         = fluct_data_t
                self._precursor_img        = fluct_data_t
                
                # optimization loop 
                for j in range(self._n_epochs):
                    self._val_oip_loss, self._val_oip_grads = self._iterate([self._inp_img_data])
                    self._inp_img_data += self._val_oip_grads * self._epsilon
                    self._inp_img_data = tf.image.per_image_standardization(self._inp_img_data)
                print("loss for 1st selected img = ", self._val_oip_loss )

            # show the imgs 
            # ~~~~~~~~~~~~~~
            self._display_precursor_imgs(li_axa = li_axa)
            
        # list contains no patterns 
        else:
            print("No image found !")
    
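    # Usage sketch (an assumption, not part of the original class): in a Jupyter cell the two 
    # methods above could be called in sequence, e.g. for an instance named "MyOIP": 
    #
    #     fig_a  = plt.figure(figsize=(16, 8))
    #     li_axa = [fig_a.add_subplot(2, 4, i+1) for i in range(8)]   # 8 axes-frames, as required
    #     MyOIP._prepare_precursor(map_index=120, b_print=False)      # GradientTape and _iterate() set up once
    #     MyOIP._precursor(li_pre_val=[0.2, 0.5, 0.8], num_epochs=10, 
    #                      loss_limit=0.5, b_with_enriched_fluct=False, li_axa=li_axa)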
    
    # Method to display initial fluctuation images identified as objects for OIP creation 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def _display_precursor_imgs(self, li_axa = None):
        '''
        @summary: Method to display up to 8 selected precursor images 
        @version: 0.2, 02.10.2020
        @change: Only some documentation 
        @note: We first reconstruct the image from saved data of the large scale fluctuations 
        @note: We then display the images in externally delivered axes-frames of matplotlib
        @requires: A filled set of valid fluctuation patterns in self._li_of_flucts[][][] - determined by a run of _precursor()
        @requires: Information on fluctuation enrichments in self._li_fluct_enrichments[][] - determined by a _precursor() run
        @requires: A set of axes-frames for plotting - preferably defined in a Jupyter cell calling this method 
        @note: Parameters for plotting  
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        @param _li_axa: A Python list of references to external (Jupyter-) axes-frames for plotting 
         '''
        # length of _li_of_flucts[] vs. length of li_axa
        len_cov = len(self._li_of_flucts)
        if li_axa == None or len(li_axa) < len_cov:
            print("Error: The length of the provided list with axes-frames for plotting must be at least ", len_cov )
            sys.exit()
        
        # Loop to reconstruct and display the found precursor images 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        for j in range(len(self._li_of_flucts)):
            print(j, "loss = ", self._li_of_flucts[j][1][0])
            
            # reconstruct the image from the data of the precursor run 
            # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            cov = self._li_of_flucts[j][1][1]
            fluct0_scale_t  = tf.image.resize(cov, [28,28], method="bicubic", antialias=True)        
            
            # add enrichments if defined 
            if len(self._li_fluct_enrichments) > 0: 
                # Scaling enrichment flucts with bicubic interpolation to the required image size
                fluct1_t = self._li_fluct_enrichments[2][0] 
                fluct2_t = self._li_fluct_enrichments[2][1] 
                fluct3_t = self._li_fluct_enrichments[2][2] 
                fluct4_t = self._li_fluct_enrichments[2][3] 
                
                fact0_t = self._li_fluct_enrichments[0][0]
                fact1_t = self._li_fluct_enrichments[0][1]
                fact2_t = self._li_fluct_enrichments[0][2]
                fact3_t = self._li_fluct_enrichments[0][3]
                fact4_t = self._li_fluct_enrichments[0][4]
                
                fluct1_scale_t = tf.image.resize(fluct1_t, [28,28], method="bicubic", antialias=True)
                fluct2_scale_t = tf.image.resize(fluct2_t, [28,28], method="bicubic", antialias=True)
                fluct3_scale_t = tf.image.resize(fluct3_t, [28,28], method="bicubic", antialias=True)
                fluct4_scale_t = fluct4_t
                
                fluct_scale_t = fact0_t*fluct0_scale_t \
                             + fact1_t*fluct1_scale_t + fact2_t*fluct2_scale_t \
                             + fact3_t*fluct3_scale_t + fact4_t*fluct4_scale_t
            else:
                fluct_scale_t = fluct0_scale_t                 
            
            fluct_datx  = tf.image.per_image_standardization(fluct_scale_t)     
            fluct_dat  = tf.where(fluct_datx > 5.e-6, fluct_datx, tf.zeros_like(fluct_datx))
            img = fluct_dat[0, :,:, 0].numpy()
            li_axa[j].imshow(img, cmap=plt.cm.get_cmap('viridis'))
    
    
    # Method to transform an img tensor into a standard image - used for contrast enhancement  
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def _transform_tensor_to_img(self, T_img = None, centre_move = 0.33, fact = 0.85):
        '''
        @summary: Method to transform (standardized) tensor img data into standard img array data 
        @note: Clipping is used to remove pixel values outside [0, 255] 
        @version: 0.1, 10.10.2019
        
        @requires: A defined TF2 Keras backend 
        
        @param T_img: The TF2 or keras tensor for image data 
        @param centre_move: Offset added to the scaled pixel values to shift the value distribution (=> brightness)
        @param fact: Factor controlling the scaling of the pixel value range (=> contrast enhancement)
        
        '''
        ay_x = T_img.numpy()  # floating point array  
        
        maxi_o    = np.max(T_img)
        avg_o     = np.mean(T_img)
        mini_o    = np.min(T_img)
        std_dev_o = np.std(T_img)
        print("\nInfos on pixel value distribution during contrast enhancement: ") 
        print("\max_orig = ", maxi_o, " :: avg_orig = ", avg_o, " :: min_orig: ", mini_o) 
        print("std_dev_orig = ", std_dev_o)

        # the following operation should have no effect on standardized images
        ay_x -= ay_x.mean() # this already results in a numpy-array 
        ay_x /= (ay_x.std() + B.epsilon())
        
        maxi    = np.max(ay_x)
        avg     = np.mean(ay_x)
        mini    = np.min(ay_x)
        std_dev = np.std(ay_x)
        print("max_ay = ", maxi, " :: avg_ay = ", avg, " :: min_ay: ", mini) 
        print("std_dev_ay = ", std_dev)
        
        div = fact * 0.5 * ( abs(maxi_o) + abs(mini_o) )
        print("div = ", div)
        ay_x /= div          # scaling  
        ay_x += centre_move  # moving the data center 
        
        maxi = np.max(ay_x)
        avg  = np.mean(ay_x)
        mini = np.min(ay_x)
        std_dev = np.std(ay_x)
        print("max_fin = ", maxi, " :: avg_fin = ", avg, " :: min_fin: ", mini) 
        print("std_dev_fin = ", std_dev)
        
        ay_x = np.clip(ay_x, 0, 1)
        
        ay_x *= 255
        ay_x_img = np.clip(ay_x, 0, 255).astype('uint8')
        
        maxi = np.max(ay_x_img)
        avg  = np.mean(ay_x_img)
        mini = np.min(ay_x_img)
        std_dev = np.std(ay_x_img)
        print("max_img = ", maxi, " :: avg_img = ", avg, " :: min_img: ", mini) 
        print("std_dev_img = ", std_dev, "\n")
        
        
        return ay_x_img
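    # Usage sketch (an assumption, not part of the original class): contrast enhancement of a 
    # resulting image tensor before display in a Jupyter cell, e.g. for an instance named "MyOIP": 
    #
    #     T_show  = MyOIP._inp_img_data[0, :, :, 0]
    #     img_arr = MyOIP._transform_tensor_to_img(T_img=T_show, centre_move=0.33, fact=0.85)
    #     plt.imshow(img_arr, cmap=plt.cm.get_cmap('viridis'))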



 

Other (previous) articles in this series

A simple CNN for the MNIST dataset – IV – Visualizing the activation output of convolutional layers and maps
A simple CNN for the MNIST dataset – III – inclusion of a learning-rate scheduler, momentum and a L2-regularizer
A simple CNN for the MNIST datasets – II – building the CNN with Keras and a first test
A simple CNN for the MNIST datasets – I – CNN basics

 

Getting a Keras based MLP to run with Cuda 10.2, Cudnn 7.6 and TensorFlow 2.0 on an Opensuse Leap 15.1 system

Last weekend I wanted to compare the performance of an old 20-line Keras setup for a simple MLP with the performance of a self-programmed Python- and Numpy-based MLP regarding training epochs on the MNIST dataset. The Keras code was set up in a Jupyter notebook last autumn – at that time for TensorFlow 1 and Cuda 10.0 for my Nvidia graphics card. I thought it might be a good time to move everything to Tensorflow 2 and the latest Cuda libraries on my Opensuse Leap 15.1 system. This was more work than expected – and for some problems I was forced to apply some dirty workarounds. I got it running. Maybe the necessary steps, which are not really obvious, are helpful for others, too.

Install Cuda 10.2 and Cudnn on an Opensuse Leap 15.1

Before you can use TensorFlow [TF] on an Nvidia graphics card you must install Cuda. The present version is Cuda 10.2. I was a bit naive to assume that this should be the right version – as it has been available for some time already. Wrong! Afterwards I read somewhere that TensorFlow2 [TF2] works with Cuda 10.1 only and is not yet fully compatible with Cuda 10.2. Well, at least for my purposes [MLP training] it seemed to work nevertheless – with some “dirty” additional library links.

There is a central Cuda repository available at this address: cuda10.2. Actually, the repo offers cuda10.0, cuda10.1 and cuda10.2 (plus some nvidia drivers). I selected some central cuda10.2 packages for installation – just to find out where the related files were placed in the filesystem. I then ran into a major chain of package dependencies, which I had to resolve in many tedious steps. Some packages may not have been necessary for a basic installation. In the end I was too lazy to restrict the libs to what is really required for Keras. The bill came afterwards: Cuda 10.2 is huge! If you do not know exactly what you need: Be prepared to invest up to 3 GB on your hard disk.

The Cuda 10.2 RPM packages install most of the required “*.so”-shared library files and many other additional files in a directory “/usr/local/cuda-10.2/”. To make switching between different versions of Cuda possible we also find a link “/usr/local/cuda” pointing to “/usr/local/cuda-10.2/” after the installation. Ok, reasonable – we could change the link to point to “/usr/local/cuda-10.0/”. This makes you assume that the Tensorflow 2 libraries and other modules in your virtual Python3 and Jupyter environment would look for required Cuda files in the central directory “/usr/local/cuda” – i.e. without special version attributes of the files. Unfortunately, this was another wrong assumption. 🙁 See below.

In addition to the Cuda packages you must install the present “cudnn” libraries from Nvidia – more precisely: The runtime and the development package. You get the RPMs from here. Be prepared to give Nvidia your private data. 🙁

I should add that I ignored and ignore the Nvidia drivers from the Cuda repository, i.e. I never installed them. Instead, I took those from the standard Nvidia community repository. They worked and work well so far – and make my update life on Opensuse easier.

Installation of Tensorflow2 modules in your (virtual) Python3 environment

I use a virtual Python3 environment and update it regularly via “pip”. Regarding TF2 an update via the command “pip install --upgrade tensorflow” should be sufficient – it will resolve dependencies. (By the way: If you want to bring all Python libs to their present version you can also use “pip-review --auto”. Note that under certain circumstances you may need the “--force” option for special upgrades. I cannot go into details in this article.)

Multiple complaints about missing libraries …

Unfortunately, the next time I started my virtual Python environment I got the warning that the dynamic library “libnvinfer.so.6” could not be found, but was required in case I planned to use TensorRT. What? Well, you may find some information here
https://blogs.nvidia.com/blog/2016/08/22/difference-deep-learning-training-inference-ai/
https://developer.nvidia.com/tensorrt

I leave it up to you whether you really need TensorRT. You can ignore this message – TF will run for standard purposes without it. But, dear TF-developers: a clear message in the warning would in my opinion have been helpful. Then I checked whether some version of the Nvidia related library “libnvinfer.so” came with Cuda or Cudnn onto my system. Yeah, it did – unfortunately version 7 instead of 6. :-(.
So, we are confronted with a dependency on a specific file version which is older than the present one. I do not like this style of development. Normally, it should be the other way round: If a newer version is required due to new capabilities you warn the user. But under normal circumstances a backward compatibility of libs should be established. You would assume such a backward compatibility and that TF would search for the present version via looking for files “libnvinfer.so” and “libnvinfer_plugin.so” which do exist and point to the latest versions. But, no, in this case they want it explicitly to be version 6 … Makes you wonder whether the old Cudnn version is still available. I did not check it. Ok, ok – backward compatibility is not always possible ….

Just to see how good the internal checking of the alleged dependency is, I did something you normally do not do: I created a link “libnvinfer.so.6” in “/usr/lib64” to “libnvinfer.so.7”. Had to do the same for “libnvinfer_plugin.so.6”. Effect: I got rid of the warning – so much for dependency checking. I left the links in place. You see, I sometimes trust in coming developments and run some risks ….

Then came the next surprise. I had read a bit about changed statements in TF2 (compared to TF1) – and thought I was prepared for this. But, when I tried to execute some initial commands to configure TF2 from a Jupyter cell as e.g.

 
import time 
import tensorflow as tf
from tensorflow import keras as K
from keras.datasets import mnist
from keras import models
from keras import layers
from tensorflow.python.client import device_lib

import os
#os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

tf.config.optimizer.set_jit(True)
tf.config.threading.set_intra_op_parallelism_threads(4)
tf.config.threading.set_inter_op_parallelism_threads(4)
tf.debugging.set_log_device_placement(True)

device_lib.list_local_devices()  

I at once got a complaint in the shell from which I had started the Jupyter notebook – saying that a lib called “libcudart.so.10.1” was missing. Again – an explicit version dependency 🙁 . On purpose or just a glitch? Just one out of many version-dependent files? Without clear information? If this becomes the standard in the interaction between TF2 and Cuda – well, no fun any longer. In my opinion the TF2 developers should not search for files via version-specific names – but do an analysis of headers and warn explicitly that the present version requires a specific Cuda version. Would be much more convenient for the user and compatible with the link mechanism described above.

Whilst a bunch of other dynamic libs were loaded by their names without a version, in this case TF2 asks for a very specific version – although a corresponding lib is available in the directory “/usr/local/cuda-10.2/lib64” …. Nevertheless, with full trust again in a better future, I offered TF2 a softlink “libcudart.so.10.1” in “/usr/lib64/” pointing to “/usr/local/cuda-10.2/lib64/libcudart.so”. It cleared my way to the next hurdle. And my Keras MLP worked in the end …

Missing “./bin” directory … and other path related problems

When I tried to run specific Keras commands, which TF2 wanted to compile as XLA-supported statements, I again got complaints that files in a local directory “./bin” were missing. This was a first clear indication that Cuda paths were totally ignored in my Python/Jupyter environment. But what directory did the “./” refer to? Some experiments revealed:

I had to link an artificial subdirectory “./bin” in the directory where I kept my Jupyter notebooks to “/usr/local/cuda-10.2/bin”.

But the next problems with other directories waited right around the corner. Actually many … To make a long story short – the installation of TF2 in combination with Cuda 10.2 does not evaluate paths or ask for paths when used in a Python3/Jupyter environment. We have to provide and export them as shell environment variables. See below.

Warnings and errors regarding XLA capabilities

Another thing which drove me nuts was that TF2 required information about XLA-flags. It took me a while to find out that this also could be handled via environment variables.

All in all I now start the shell from which I launch my virtual Python environment and Jupyter notebooks with the following command sequence:

myself@mytux:/projekte/GIT/....../ml> export XLA_FLAGS=--xla_gpu_cuda_data_dir=/usr/local/cuda
myself@mytux:/projekte/GIT/....../ml> export TF_XLA_FLAGS=--tf_xla_cpu_global_jit
myself@mytux:/projekte/GIT/....../ml_1> export OPENBLAS_NUM_THREADS=4              
myself@mytux:/projekte/GIT/....../ml_1> source bin/activate
(ml) myself@mytux:/projekte/GIT/....../ml_1> jupyter notebook 

The first two commands did the magic regarding the path-problems! TF2 worked afterwards both for XLA-capable CPUs and Nvidia GPUs. So, a specific version may or may not have advantages – I do not know – but at least you can get TF2 running with Cuda 10.2.
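As an alternative to exporting the variables in the shell, you can also set them in the very first Jupyter cell; a minimal sketch, which assumes that it is executed before TensorFlow is imported in that kernel:

import os

# must happen before the first "import tensorflow" in this kernel
os.environ['XLA_FLAGS']    = '--xla_gpu_cuda_data_dir=/usr/local/cuda'
os.environ['TF_XLA_FLAGS'] = '--tf_xla_cpu_global_jit'
os.environ['OPENBLAS_NUM_THREADS'] = '4'

import tensorflow as tf
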

Changed commands to control threading and memory consumption

Without the use of explicit compatibility commands TF2 does not support commands like

config = tf.ConfigProto(intra_op_parallelism_threads=num_cores,
                        inter_op_parallelism_threads=num_cores, 
                        allow_soft_placement=True,
                        device_count = {'CPU' : num_CPU,
                                        'GPU' : num_GPU}, 
                        log_device_placement=True

                       )
config.gpu_options.per_process_gpu_memory_fraction=0.4
config.gpu_options.force_gpu_compatible = True  

any longer. But, as with TF1, you probably do not want to grab all the memory of your graphics card, and you may not want to use all cores of your CPU in TF2. You can circumvent the lack of a “ConfigProto” property in TF2 with the following commands:

# configure use of just in time compiler 
tf.config.optimizer.set_jit(True) 
# limit use of parallel threads 
tf.config.threading.set_intra_op_parallelism_threads(4) 
tf.config.threading.set_inter_op_parallelism_threads(4)
# Not required in TF2: tf.enable_eager_execution()
# print out use of certain device (at first run)
tf.debugging.set_log_device_placement(True)
#limit use of graphics card memory 
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    tf.config.experimental.set_virtual_device_configuration(gpus[0], 
          [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
  except RuntimeError as e:
    print(e)
# Not required in TF2: tf.enable_eager_execution()
# print out a list of usable devices
device_lib.list_local_devices()   

Addendum, 15.05.2020:

Well, this actually proved to be correct only for the limitation of the GPU memory. The limitations on the CPU cores do NOT work – at least not on my system. See also:
tensorflow issues 34415.

I give you a workaround below.

Test run with MNIST

Afterwards the following simple Keras MLP ran without problems and with the expected performance on a GPU and a multicore CPU:

Jupyter cell 1

import time 
import tensorflow as tf
#from tensorflow import keras as K
import keras as K
from keras.datasets import mnist
from keras import models
from keras import layers
from keras.utils import to_categorical
from tensorflow.python.client import device_lib
import os

# use to work with CPU (CPU XLA ) only 
# os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    tf.config.experimental.set_virtual_device_configuration(gpus[0], 
          [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
  except RuntimeError as e:
    print(e)
    
# if not yet done elsewhere 

tf.config.optimizer.set_jit(True)
tf.debugging.set_log_device_placement(True)

use_cpu_or_gpu = 1 # 0: cpu, 1: gpu

# The following can only be done once - all CPU cores are used otherwise  
#if use_cpu_or_gpu == 0:
#    tf.config.threading.set_intra_op_parallelism_threads(4)
#    tf.config.threading.set_inter_op_parallelism_threads(6)


# function for training 
def train(train_images, train_labels, epochs, batch_size):
    network.fit(train_images, train_labels, epochs=epochs, batch_size=batch_size)

# setup of the MLP
network = models.Sequential()
network.add(layers.Dense(200, activation='sigmoid', input_shape=(28*28,)))
network.add(layers.Dense(100, activation='sigmoid'))
network.add(layers.Dense(50, activation='sigmoid'))
network.add(layers.Dense(30, activation='sigmoid'))
network.add(layers.Dense(10, activation='sigmoid'))
network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

# load MNIST 
mnist = K.datasets.mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# simple normalization
train_images = X_train.reshape((60000, 28*28))
train_images = train_images.astype('float32') / 255
test_images = X_test.reshape((10000, 28*28))
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(y_train)
test_labels = to_categorical(y_test)

Jupyter cell 2

# run it 
if use_cpu_or_gpu == 1:
    start_g = time.perf_counter()
    train(train_images, train_labels, epochs=45, batch_size=1500)
    end_g = time.perf_counter()
    test_loss, test_acc= network.evaluate(test_images, test_labels)
    print('Time_GPU: ', end_g - start_g)  
else:
    start_c = time.perf_counter()
    train(train_images, train_labels, epochs=45, batch_size=1500)
    end_c = time.perf_counter()
    test_loss, test_acc= network.evaluate(test_images, test_labels)
    print('Time_CPU: ', end_c - start_c)  

# test accuracy 
print('Acc: ', test_acc)

Switch to force Tensorflow to use the CPU, only

Another quirk is that – depending on the exact version of TF2 – you may need to use the following statement to run (parts of) your code on the CPU only:

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

at the beginning of your code. Otherwise Tensorflow 2.0 and 2.1 will choose execution on the GPU even if you use a statement like

with tf.device("/CPU:0"):

(which worked in TF1).
It seems that this problem was solved with TF 2.2 (tested on 15.05.2020)! But you may have to check it yourself.
You can watch the involvement of the GPU e.g. with “watch -n0.1 nvidia-smi” on a terminal. Another possibility is to set

tf.debugging.set_log_device_placement(True)  

and get messages in the shell of your virtual Python environment or in the presently used Jupyter cell.
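A small sketch of such a check: with device placement logging switched on, a trivial matrix multiplication reveals whether it was executed on the GPU or the CPU.

import tensorflow as tf

tf.debugging.set_log_device_placement(True)

# a trivial operation - the log output (and c.device) show whether /GPU:0 or /CPU:0 was used
a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
c = tf.matmul(a, b)
print(c.device)
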

Addendum 16.05.2020: Limiting the number of CPU cores for Tensorflow 2.0 on Linux

After several trials and tests I think that both TF2 and the Keras version delivered with it handle the TF2 statements given above for limiting the number of CPU cores rather inefficiently. In addition, the behavior of TF2/Keras has changed across the TF2 versions 2.0, 2.1 and now 2.2.

Strange things also happen when you combine statements of the TF1 compat layer with pure TF2 restriction statements. You should refrain from mixing them.

So, it is either

Option 1: CPU only and limited number of cores

import tensorflow as tf
from tensorflow import keras as K
from tensorflow.python.keras import backend as B 
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
...
config = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=4, inter_op_parallelism_threads=1)
B.set_session(tf.compat.v1.Session(config=config))    
...

OR
Option 2: Mixture of GPU (with limited memory) and CPU (limited core number) with TF2 statements

import tensorflow as tf
from tensorflow import keras as K
from tensorflow.python.keras import backend as B 
from keras import models
from keras import layers
from keras.utils import to_categorical
from keras.datasets import mnist
from tensorflow.python.client import device_lib
import os

tf.config.threading.set_intra_op_parallelism_threads(6)
tf.config.threading.set_inter_op_parallelism_threads(1)
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_virtual_device_configuration(gpus[0], 
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
    except RuntimeError as e:
        print(e)

OR
Option 3: Mixture of GPU (limited memory) and CPU (limited core numbers) with TF1 compat statements

import tensorflow as tf
from tensorflow import keras as K
from tensorflow.python.keras import backend as B 
from keras import models
from keras import layers
from keras.utils import to_categorical
from keras.datasets import mnist
from tensorflow.python.client import device_lib
import os

#gpu = False 
gpu = True
if gpu: 
    GPU = True;  CPU = False; num_GPU = 1; num_CPU = 1
else: 
    GPU = False; CPU = True;  num_CPU = 1; num_GPU = 0

config = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=6,
                        inter_op_parallelism_threads=1, 
                        allow_soft_placement=True,
                        device_count = {'CPU' : num_CPU,
                                        'GPU' : num_GPU}, 
                        log_device_placement=True

                       )
config.gpu_options.per_process_gpu_memory_fraction=0.35
config.gpu_options.force_gpu_compatible = True
B.set_session(tf.compat.v1.Session(config=config))

Hint 1:

If you want to test some code parts on the GPU and others on the CPU in the same session, I strongly recommend using the compat statements in the form given by Option 3 above.

The reason is that it – strangely enough – gives you a more than 25% faster performance on a multicore CPU in comparison to the pure TF2 statements.

Afterwards you can use statements like:

batch_size=64
epochs=5

if use_cpu_or_gpu == 1:
    start_g = time.perf_counter()
    with tf.device("/GPU:0"):
        train(train_images, train_labels, epochs, batch_size)
    end_g = time.perf_counter()
    print('Time_GPU: ', end_g - start_g)  
else:
    start_c = time.perf_counter()
    with tf.device("/CPU:0"):
        train(train_images, train_labels, epochs, batch_size)
    end_c = time.perf_counter()
    print('Time_CPU: ', end_c - start_c)  

Hint 2:
If you check the limitations on CPU cores (threads) via watching the CPU load on tools like “gkrellm” or “ksysguard”, it may appear that all cores are used in parallel. You have to set the update period of these tools to 0.1 sec to see that each core is only used intermittently. In gkrellm you should also see a variation of the average CPU load value with a variation of the parameter “intra_op_parallelism_threads=n”.

Hint 3:
In my case with a Quadcore CPU with hyperthreading the following settings seem to be optimal for a variety of Keras CNN models – whenever I want to train them on the CPU only:

...
config = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=6, inter_op_parallelism_threads=1)
B.set_session(tf.compat.v1.Session(config=config)) 
...

Hint 4:
If you want to switch settings in a Jupyter session it is best to stop and restart the respective kernel. You can do this via the commands under “kernel” in the Jupyter interface.

Conclusion

Well, my friends, the above steps were what I had to do to get Keras working in combination with TF2, Cuda 10.2 and the present version of Cudnn. I do not regard this as a straightforward procedure – to put it mildly.

In addition, after some tests I must also say that the performance seems to be worse than with Tensorflow 1 – especially when using Keras, whether the Keras included with Tensorflow 2 or Keras in the form of a separate Python lib. In particular, the performance on a GPU is astonishingly bad with Keras for small networks.

This impression of sluggishness stands in strange contrast to elementary tests where I saw a factor of 5 difference for a series of typical matrix multiplications executed directly with tf.matmul() on a GPU vs. a CPU. But this is another story …
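For the record, a rough sketch of such an elementary tf.matmul() comparison (matrix size, repetitions and the resulting factor are of course system dependent):

import time
import tensorflow as tf

def time_matmul(device, n=4000, repeats=10):
    # repeated dense matrix multiplications on the chosen device
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        start = time.perf_counter()
        for _ in range(repeats):
            c = tf.matmul(a, b)
        _ = c.numpy()   # force the computation to finish before stopping the clock
        return time.perf_counter() - start

print("Time_CPU: ", time_matmul("/CPU:0"))
print("Time_GPU: ", time_matmul("/GPU:0"))
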

Links

tensorflow-running-version-with-cuda-on-cpu-only