A single neuron perceptron with sigmoid activation function – II – normalization to overcome saturation

I continue my small series on a single neuron perceptron to study the positive effects of the normalization of input data in combination with the use of the sigmoid function as the activation function. In the last article

A single neuron perceptron with sigmoid activation function – I – failure of gradient descent due to saturation

we have seen that the saturation of the sigmoid function for big positive or negative arguments can prevent a smooth gradient descent under certain conditions – even if a global minimum clearly exists.

A perceptron with just one computing neuron is a primitive example, but it demonstrates what can happen at the neurons of the first computing layer after the input layer of a real “Artificial Neural Network” [ANN]. We should really avoid providing too big input values there and take into account that the input values for different features get added up.
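To make the saturation effect tangible, here is a minimal sketch (the weight values are made up just for illustration) which evaluates the sigmoid and its derivative for a moderate and for a big weighted input sum:

import numpy as np
from scipy.special import expit   # the sigmoid function

# weighted sum for moderate inputs vs. for big, un-normalized inputs
z_small = 0.5 * 0.8 + (-0.3) * 0.6          # => 0.22
z_big   = 0.5 * 200.0 + (-0.3) * 14.0       # => 95.8

for z in (z_small, z_big):
    a = expit(z)
    grad = a * (1.0 - a)                    # derivative of the sigmoid at z
    print(f"z = {z:8.2f}   sigmoid = {a:.6f}   gradient = {grad:.2e}")

# For the big argument the sigmoid saturates at 1.0 and its gradient becomes
# practically zero - gradient descent then gets no usable signal any more.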

Measures against saturation at neurons in the first computing layer

There are two elementary methods to avoid saturation of sigmoid-like functions at the neurons of the first hidden layer:

  • Normalization: One measure to avoid big input values is to normalize the input data. Normalization can be understood as a transformation of the given real input values for all of the features into an interval [0, 1] or [-1, 1]. There are of course many transformations which map a real number distribution into a given limited interval. Some keep up the relative distances of the data points, some do not. We shall have a look at some standard normalization variants used in Machine Learning [ML] during this and the next article.
    The effect with respect to a sigmoidal activation function is that the gradient for arguments in the range [-1, 1] is relatively big. The sigmoid function behaves almost as a linear function in this argument region; see the plot in the last article.
  • Choosing an appropriate (statistical) initial weight distribution: If we have a relatively big feature space, as e.g. for the MNIST dataset with 784 features, normalization alone is not enough. The initial value distribution for the weights must also be taken care of, as we add up the contributions of all input nodes (multiplied by the weights). We can follow a recommendation of LeCun (1990); see the book of Aurelien Geron recommended further below for more details.
    Then we would choose a uniform distribution of values in the range [-alpha*sqrt(1/num_inp_nodes), alpha*sqrt(1/num_inp_nodes)], with alpha ≈ 1.73 and num_inp_nodes giving the number of input nodes, which typically is the number of features plus 1, if you use a bias neuron. As a rule of thumb I personally take [-0.5*sqrt(1/num_inp_nodes), 0.5*sqrt(1/num_inp_nodes)]; a small sketch of such an initialization follows below the list.
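The following lines are a small sketch of such an initialization with Numpy; the value of num_inp_nodes is an assumption you would adapt to your own network:

import numpy as np

num_inp_nodes = 785          # e.g. 784 MNIST features plus 1 bias node (assumption)
alpha = 1.73                 # LeCun-like factor; as said, I personally often use 0.5

limit = alpha * np.sqrt(1.0 / num_inp_nodes)
# uniform distribution in [-limit, limit] for the weights of the first computing layer
w_init = np.random.uniform(-limit, limit, size=(num_inp_nodes,))
print(limit, w_init[:5])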

Normalization functions

The following quick & dirty Python code for a Jupyter cell calls some normalization functions for our simple perceptron scenario and directly executes the transformation; the required import statements for the libraries were already provided in the last article.

# ********
# Scaling
# ********

b_scale = True
scale_method = 3
# 0: Normalizer (standard), 1: StandardScaler, 2: By factor, 3: Normalizer per sample 
# 4: Min_Max, 5: Identity (no transformation) - just there for convenience  

shape_ay = (num_samples,)
ay_K1 = np.zeros(shape_ay)
ay_K2 = np.zeros(shape_ay)

# apply scaling
if b_scale:
    # shape_input = (num_samples,2)
    rg_idx = range(num_samples)
    if scale_method == 0:
      
        shape_input = (2, num_samples)
        ay_K = np.zeros(shape_input)
        for idx in rg_idx:
            ay_K[0][idx] = li_K1[idx] 
            ay_K[1][idx] = li_K2[idx] 
        scaler = Normalizer()
        ay_K = scaler.fit_transform(ay_K)
        for idx in rg_idx:
            ay_K1[idx] = ay_K[0][idx]   
            ay_K2[idx] = ay_K[1][idx] 
        print(ay_K1)
        print("\n")
        print(ay_K2)
    elif scale_method == 1: 
        shape_input = (num_samples,2)
        ay_K = np.zeros(shape_input)
        for idx in rg_idx:
            ay_K[idx][0] = li_K1[idx] 
            ay_K[idx][1] = li_K2[idx] 
        scaler = StandardScaler()
        ay_K = scaler.fit_transform(ay_K)
        for idx in rg_idx:
            ay_K1[idx] = ay_K[idx][0]   
            ay_K2[idx] = ay_K[idx][1]
    elif scale_method == 2:
        dmax = max(li_K1.max() - li_K1.min(), li_K2.max() - li_K2.min())
        ay_K1 = 1.0/dmax * li_K1
        ay_K2 = 1.0/dmax * li_K2
    elif scale_method == 3:
        shape_input = (num_samples,2)
        ay_K = np.zeros(shape_input)
        for idx in rg_idx:
            ay_K[idx][0] = li_K1[idx] 
            ay_K[idx][1] = li_K2[idx] 
        scaler = Normalizer()
        ay_K = scaler.fit_transform(ay_K)
        for idx in rg_idx:
            ay_K1[idx] = ay_K[idx][0]   
            ay_K2[idx] = ay_K[idx][1]
    elif scale_method == 4:
        shape_input = (num_samples,2)
        ay_K = np.zeros(shape_input)
        for idx in rg_idx:
            ay_K[idx][0] = li_K1[idx] 
            ay_K[idx][1] = li_K2[idx] 
        scaler = MinMaxScaler()
        ay_K = scaler.fit_transform(ay_K)
        for idx in rg_idx:
            ay_K1[idx] = ay_K[idx][0]   
            ay_K2[idx] = ay_K[idx][1]
    elif scale_method == 5:
        ay_K1 = li_K1
        ay_K2 = li_K2
            
            
# Get overview over costs on weight-mesh
wm1 = np.arange(-5.0,5.0,0.002)
wm2 = np.arange(-5.0,5.0,0.002)
#wm1 = np.arange(-0.3,0.3,0.002)
#wm2 = np.arange(-0.3,0.3,0.002)
W1, W2 = np.meshgrid(wm1, wm2) 
C, li_C_sgl = costs_mesh(num_samples = num_samples, W1=W1, W2=W2, li_K1 = ay_K1, li_K2 = ay_K2, \
                               li_a_tgt = li_a_tgt)


C_min = np.amin(C)
print("C_min = ", C_min)
IDX = np.argwhere(C==C_min)
print ("Coordinates: ", IDX)
wmin1 = W1[IDX[0][0]][IDX[0][1]] 
wmin2 = W2[IDX[0][0]][IDX[0][1]]
print("Weight values at cost minimum:",  wmin1, wmin2)

# Plots
# ******
fig_size = plt.rcParams["figure.figsize"]
#print(fig_size)
fig_size[0] = 19; fig_size[1] = 19

fig3 = plt.figure(3); fig4 = plt.figure(4)

ax3 = fig3.gca(projection='3d')
ax3.get_proj = lambda: np.dot(Axes3D.get_proj(ax3), np.diag([1.0, 1.0, 1, 1]))
ax3.view_init(25,135)
ax3.set_xlabel('w1', fontsize=16)
ax3.set_ylabel('w2', fontsize=16)
ax3.set_zlabel('Total costs', fontsize=16)
ax3.plot_wireframe(W1, W2, 1.2*C, colors=('green'))


ax4 = fig4.gca(projection='3d')
ax4.get_proj = lambda: np.dot(Axes3D.get_proj(ax4), np.diag([1.0, 1.0, 1, 1]))
ax4.view_init(25,135)
ax4.set_xlabel('w1', fontsize=16)
ax4.set_ylabel('w2', fontsize=16)
ax4.set_zlabel('Single costs', fontsize=16)
ax4.plot_wireframe(W1, W2, li_C_sgl[0], colors=('blue'))
#ax4.plot_wireframe(W1, W2, li_C_sgl[1], colors=('red'))
ax4.plot_wireframe(W1, W2, li_C_sgl[5], colors=('orange'))
#ax4.plot_wireframe(W1, W2, li_C_sgl[6], colors=('yellow'))
#ax4.plot_wireframe(W1, W2, li_C_sgl[9], colors=('magenta'))
#ax4.plot_wireframe(W1, W2, li_C_sgl[12], colors=('green'))

plt.show()

 

The results of the transformation for our two features are available in the arrays “ay_K1” and “ay_K2”. These arrays will then be used as an input to gradient descent.

Some remarks on the normalization methods:

Normalizer: It is called in the above code by setting “scale_method=0”. The “Normalizer” with standard parameters scales each row of the array it receives to unit L2-norm, i.e. it divides the row values by the row’s Euclidean norm. Its application therefore differs from that of other SciKit-Learn scalers:
It normalizes over all data given in a row (interpreted as a sample). The dimensions beyond 1 are NOT interpreted as features which have to be normalized separately – as e.g. the “StandardScaler” does it. So, you have to be careful with the index handling! This explains the different index operations for “scale_method = 0” compared to the other cases.
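A minimal example with made-up numbers shows why the index handling differs: the Normalizer scales each row of the array it gets to unit L2-norm. So, to normalize a feature over all samples, that feature must be put into a row:

import numpy as np
from sklearn.preprocessing import Normalizer

# two features, three samples - features in the rows, as for scale_method = 0
ay = np.array([[200.0, 160.0, 120.0],    # feature K1 over all samples
               [ 14.0,  10.0,   2.0]])   # feature K2 over all samples

ay_n = Normalizer().fit_transform(ay)
print(ay_n)
print(np.linalg.norm(ay_n, axis=1))      # each row now has an L2-norm of 1.0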

StandardScaler: Called by setting “scale_method=1”. The StandardScaler accepts arrays of samples in rows with columns for the features. It scales all features separately: It subtracts the mean of each feature over all samples and afterwards divides by the feature’s standard deviation. It thus centers each feature’s value distribution around a mean of zero with a variance of 1. Note, however, that it does not limit all transformed values to the interval [-1, 1].
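If you want to see what the StandardScaler does without the library, the following sketch reproduces its transformation by hand for a small made-up array (samples in rows, features in columns):

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[200.0, 14.0], [1.0, 107.0], [160.0, 10.0]])   # made-up samples

X_scaled = StandardScaler().fit_transform(X)
X_manual = (X - X.mean(axis=0)) / X.std(axis=0)              # same operation done manually

print(np.allclose(X_scaled, X_manual))    # True - per feature: mean 0, standard deviation 1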

MinMaxScaler: Called by setting “scale_method=4”. The MinMaxScaler works similarly to the StandardScaler, but it subtracts the minimum of each feature and divides by the (max - min)-difference. It therefore does not center the distribution and does not set the variance to 1. However, with its default parameters it limits the transformed values to the interval [0, 1].
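Again a small manual equivalent, this time for the MinMaxScaler with its default feature_range of (0, 1):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[200.0, 14.0], [1.0, 107.0], [160.0, 10.0]])    # made-up samples

X_scaled = MinMaxScaler().fit_transform(X)
X_manual = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

print(np.allclose(X_scaled, X_manual))    # True - all values now lie within [0, 1]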

Normalizer per sample: Called by setting “scale_method=3”. This applies the Normalizer per sample! I.e., in our case it divides both feature values of one single sample by the Euclidean norm of that sample’s (K1, K2)-vector. This may at first sound totally meaningless. But we shall see in the next article that it is not – at least not for our special set of 14 input samples.

Hint: For the rest of this article we shall only work with the StandardScaler.

Input data transformed by the StandardScaler

The following plot shows the input clusters after a transformation with the “StandardScaler”:

You should recognize two things: the centering of the features and the structural consistency of the clusters with the original distribution before scaling!

The cost hyperplane over the {w1, w2}-space after the application of the StandardScaler to our input data

Let us apply the StandardScaler and look at the resulting cost hyperplane. When we set the parameters for a mesh display to

wm1 = np.arange(-5.0,5.0,0.002), wm2 = np.arange(-5.0,5.0,0.002)

we get the following results:

C_min =  0.0006239618496774544
Coordinates:  [[2695 2259]]
Weight values at cost minimum: -0.4820000000004976 0.3899999999994064

Plots for total costs over the {w1, w2}-space from different angles

Plot for individual costs (i=0, i=5) over the {w1, w2}-space

The index “i” refers to our sample-array (see the last article).

Gradient descent after scaling with the “StandardScaler”

Ok, let us now try gradient descent again. We set the following parameters:

w1_start = -0.20, w2_start = 0.25, eta = 0.1, decrease_rate = 0.000001, num_steps = 2000

Results:

Stochastic Descent
          Kt1       Kt2     K1     K2  Tgt       Res       Err
0   1.276259 -0.924692  200.0   14.0  0.3  0.273761  0.087463
1  -1.067616  0.160925    1.0  107.0  0.7  0.640346  0.085220
2   0.805129 -0.971385  160.0   10.0  0.3  0.317122  0.057074
3  -0.949833  1.164828   11.0  193.0  0.7  0.713461  0.019230
4   1.511825 -0.714572  220.0   32.0  0.3  0.267573  0.108090
5  -0.949833  0.989729   11.0  178.0  0.7  0.699278  0.001031
6   0.333998 -1.064771  120.0    2.0  0.3  0.359699  0.198995
7  -0.914498  1.363274   14.0  210.0  0.7  0.725667  0.036666
8   1.217368 -0.948038  195.0   12.0  0.3  0.277602  0.074660
9  -0.902720  0.476104   15.0  134.0  0.7  0.650349  0.070930
10  0.451781 -1.006405  130.0    7.0  0.3  0.351926  0.173086
11 -1.020503  0.861322    5.0  167.0  0.7  0.695876  0.005891
12  1.099585 -0.971385  185.0   10.0  0.3  0.287246  0.042514
13 -0.890942  1.585067   16.0  229.0  0.7  0.740396  0.057709

Batch Descent
          Kt1       Kt2     K1     K2  Tgt       Res       Err
0   1.276259 -0.924692  200.0   14.0  0.3  0.273755  0.087482
1  -1.067616  0.160925    1.0  107.0  0.7  0.640352  0.085212
2   0.805129 -0.971385  160.0   10.0  0.3  0.317118  0.057061
3  -0.949833  1.164828   11.0  193.0  0.7  0.713465  0.019236
4   1.511825 -0.714572  220.0   32.0  0.3  0.267566  0.108113
5  -0.949833  0.989729   11.0  178.0  0.7  0.699283  0.001025
6   0.333998 -1.064771  120.0    2.0  0.3  0.359697  0.198990
7  -0.914498  1.363274   14.0  210.0  0.7  0.725670  0.036672
8   1.217368 -0.948038  195.0   12.0  0.3  0.277597  0.074678
9  -0.902720  0.476104   15.0  134.0  0.7  0.650354  0.070923
10  0.451781 -1.006405  130.0    7.0  0.3  0.351924  0.173080
11 -1.020503  0.861322    5.0  167.0  0.7  0.695881  0.005884
12  1.099585 -0.971385  185.0   10.0  0.3  0.287241  0.042531
13 -0.890942  1.585067   16.0  229.0  0.7  0.740400  0.057714

Total error stoch descent:  0.07275422919538276
Total error batch descent:  0.07275715820661666

The attentive reader has noticed that I extended my code to include columns with the original (K1, K2)-values in the Pandas dataframe. The code of the new function “predict_batch()” is given below. Do not forget to change the function calls at the end of the gradient descent code, too.

Now we obviously can speak of a result! The calculated (w1, w2)-data are:

Final (w1,w2)-values stoch : ( -0.4816 ,  0.3908 )
Final (w1,w2)-values batch : ( -0.4815 ,  0.3906 )

Yeah, this is pretty close to the values we got via the fine-grained mesh analysis of the cost function before! And within the error range!

Changed code for two of our functions in the last article

def predict_batch(num_samples, w1, w2, ay_k_1, ay_k_2, li_K1, li_K2, li_a_tgt):
    shape_res = (num_samples, 7)
    ResData = np.zeros(shape_res)  
    rg_idx = range(num_samples)
    err = 0.0
    for idx in rg_idx:
        z_in  = w1 * ay_k_1[idx] + w2 * ay_k_2[idx] 
        a_out = expit(z_in)
        a_tgt = li_a_tgt[idx]
        err_idx = np.absolute(a_out - a_tgt) / a_tgt 
        err += err_idx
        ResData[idx][0] = ay_k_1[idx] 
        ResData[idx][1] = ay_k_2[idx] 
        ResData[idx][2] = li_K1[idx] 
        ResData[idx][3] = li_K2[idx] 
        ResData[idx][4] = a_tgt
        ResData[idx][5] = a_out
        ResData[idx][6] = err_idx
    err /= float(num_samples)
    return err, ResData    

def create_df(ResData):
    ''' ResData: Array with result values Kt1, Kt2, K1, K2, Tgt, Res, Err 
    '''
    cols=["Kt1", "Kt2", "K1", "K2", "Tgt", "Res", "Err"]
    df = pd.DataFrame(ResData, columns=cols)
    return df    

 

What does the epoch evolution look like after the application of the StandardScaler?

Let us plot the evolution for the stochastic gradient descent:

Cost and weight evolution during stochastic gradient descent

Ok, we see that despite convergence the differences in the costs for the different samples cannot be eliminated. It should be clear to the reader why this was to be expected.

We also see that the total costs (calculated from the individual costs) seemingly converge much faster than the weight values! Our gradient descent path obviously follows a big slope into a rather flat valley first (see the plot of the total costs above). Afterwards there is a small gradient sideways and down into the real minimum – and it obviously takes some epochs to get there. We also understand that we have to keep up a significant “learning rate” to follow the gradient in the flat valley. In addition the following rule seems to be appropriate sometimes:

We must not only watch the cost evolution but also the weight evolution – to avoid stopping gradient descent too early!
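As a small, hedged sketch of what such a combined stopping criterion could look like – the lists li_costs, li_w1 and li_w2 are assumptions; they would have to be filled per epoch by your own descent loop:

def keep_training(li_costs, li_w1, li_w2, eps_cost=1.e-6, eps_w=1.e-5):
    # continue as long as EITHER the costs OR the weights still change notably
    if len(li_costs) < 2:
        return True
    d_cost = abs(li_costs[-1] - li_costs[-2])
    d_w    = max(abs(li_w1[-1] - li_w1[-2]), abs(li_w2[-1] - li_w2[-2]))
    return (d_cost > eps_cost) or (d_w > eps_w)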

We shall keep this in mind for experiments with real multi-layer “Artificial Neural Networks” later on!

And what does gradient descent based on the full “batch” of 14 samples look like?

Cost and weight evolution during batch gradient descent

A smooth beauty!

Contour plot for separation curves in the {K1, K2}-plane

We add the following code to our Jupyter notebook:

# ***********
# Contours 
# ***********

from matplotlib import ticker, cm

# Take w1/w2-vals from above w1f, w2f
w1_len = len(li_w1_ba)
w2_len = len(li_w2_ba)
w1f = li_w1_ba[w1_len -1]
w2f = li_w2_ba[w2_len -1]

def A_mesh(w1,w2, Km1, Km2):
    kshape = Km1.shape
    A = np.zeros(kshape) 
    
    Km1V = Km1.reshape(kshape[0]*kshape[1], )
    Km2V = Km2.reshape(kshape[0]*kshape[1], )
    # print("km1V.shape = ", Km1V.shape, "\nkm1V.shape = ", Km2V.shape )
    
    KmV = np.column_stack((Km1V, Km2V))
    
    # scaling trafo - apply the same transformation as to the training samples
    if (scale_method == 3): 
        KmT = scaler.fit_transform(KmV)
    else: 
        KmT = scaler.transform(KmV)
    
    Km1T, Km2T = KmT.T
    Km1TR = Km1T.reshape(kshape)
    Km2TR = Km2T.reshape(kshape)
    #print("km1TR.shape = ", Km1TR.shape, "\nkm2TR.shape = ", Km2TR.shape )
    
    rg_idx = range(num_samples)
    Z      = w1 * Km1TR + w2 * Km2TR
    A = expit(Z)
    return A

#Build K1/K2-mesh 
minK1, maxK1 = li_K1.min()-20, li_K1.max()+20 
minK2, maxK2 = li_K2.min()-20, li_K2.max()+20
resolution = 0.1
Km1, Km2 = np.meshgrid( np.arange(minK1, maxK1, resolution), 
                        np.arange(minK2, maxK2, resolution))

A = A_mesh(w1f, w2f, Km1, Km2 )

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 14
fig_size[1] = 11
fig, ax = plt.subplots()
cmap=cm.PuBu_r
cmap=cm.RdYlBu
#cs = plt.contourf(X, Y, Z1, levels=25, alpha=1.0, cmap=cm.PuBu_r)
cs = ax.contourf(Km1, Km2, A, levels=25, alpha=1.0, cmap=cmap)
cbar = fig.colorbar(cs)
N = 14
r0 = 0.6
x = li_K1
y = li_K2
area = 6*np.sqrt(x ** 2 + y ** 2)  # 0 to 10 point radii
c = np.sqrt(area)
r = np.sqrt(x ** 2 + y ** 2)
area1 = np.ma.masked_where(x < 100, area)
area2 = np.ma.masked_where(x >= 100, area)
ax.scatter(x, y, s=area1, marker='^', c=c)
ax.scatter(x, y, s=area2, marker='o', c=c)
# Show the boundary between the regions:
ax.set_xlabel("K1", fontsize=16)
ax.set_ylabel("K2", fontsize=16)

 

This code enables us to plot contours of the predicted output values of our solitary neuron, i.e. A-values, on a mesh of the original {K1, K2}-plane. As we performed the classification on transformed input data, the following hint should be obvious:

Important hint: Of course you have to apply your scaling method to all the new input data created by the mesh-function! This is done in the above code in the “A_mesh()”-function with the following lines:

    # scaling trafo
    if (scale_method == 3): 
        KmT = scaler.fit_transform(KmV)
    else: 
        KmT = scaler.transform(KmV)

We can directly apply the StandardScaler to our new data via its method transform(); the scaler will use the parameters it found during its first “scaler.fit_transform()”-operation on our input samples. However, we cannot do it this way when using the Normalizer for each individual new data sample via “scale_method = 3”. I shall come back to this point in a later article.

The careful reader also sees that our code will, for the time being, not work for scale_method=0, scale_method=2 and scale_method=5. Reason: I was too lazy to write a class or code suitable for these normalizing operations. I shall correct this when we need it.

But at least I added our input samples via scatter plotting to the final output. The result is:

The deviations from our target values are to be expected. With a given pair of (w1, w2)-values we cannot do much better with a single neuron and a linear weight impact on the input data.

But we see: If we set up a criterion like:

  • A > 0.5 => sample belongs to the left cluster,
  • A ≤ 0.5 => sample belongs to the right cluster

we would have a relatively good classifier available – based on one neuron only!
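As a minimal sketch – not part of the original code – such a threshold classifier for new raw (K1, K2)-values could look like the following; it assumes that the fitted StandardScaler object “scaler” and the final weight values w1f, w2f from the code above are available in the notebook:

import numpy as np
from scipy.special import expit

def classify(K1, K2, w1=w1f, w2=w2f, thresh=0.5):
    # scale the raw input exactly as the training samples were scaled
    kt1, kt2 = scaler.transform(np.array([[K1, K2]]))[0]
    A = expit(w1 * kt1 + w2 * kt2)
    return "left cluster" if A > thresh else "right cluster"

print(classify(11.0, 193.0))   # small K1, big K2 - should yield "left cluster"
print(classify(200.0, 14.0))   # big K1, small K2 - should yield "right cluster"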

Intermediate Conclusion

In this article I have shown that the “standardization” of input data, which are fed into a perceptron ahead of a gradient descent calculation, helps to circumvent problems with the saturation of the sigmoid function at the computing neuron following the input layer. We achieved this by applying the “StandardScaler” of Scikit-Learn. We got a smooth development of both the cost function and the weight parameters during gradient descent in the transformed data space.

We also learned another important thing:

An apparent convergence of the cost function in the vicinity of a minimum value does not always mean that we have already reached the global minimum. The evolution of the weight parameters may not yet have come to an end! Therefore, it is important to watch both the evolution of the costs AND the evolution of the weights during gradient descent. A too fast decline of the learning rate may not be good either under certain conditions.

In the next article

A single neuron perceptron with sigmoid activation function – III – two ways of applying Normalizer

we shall look at two other normalization methods for our simplistic scenario. One of them will give us an even better classifier.

Stay tuned and remain healthy …

And Mr Trump:
One neuron can obviously learn something about the difference between big and small numbers. This leads me to two questions, which you as a “natural talent” on epidemics can certainly answer: How many neurons are necessary to understand something about an exponential epidemic development? And why did it take so much time to activate them?

 

Machine Learning – book recommendations

A reader who follows my article series on MLP-coding has asked me, which books I would recommend for beginners in “Machine Learning”. I assume that he did not mean introductory books on Python, but books on things like SciKit, Artificial Neural Networks, Keras, Tensorflow, …. I also assume that he had at least a little interest in the basic mathematical background.

I should also say that the point regarding “beginners” is a difficult one. With my own limited experience I would say:

Get an overview, but then start playing around on your computer with something that interests you. Afterwards extend your knowledge about tools and methods. As with computers in general it is necessary to get used to terms and tools – even if you do not or not fully understand the theory behind them. Meaning: Books open only a limited insight into Machine Learning; you will probably learn more from practical exercises. So, do not be afraid of coding in Python – and use the tools the authors of the following books worked with. And: You should get familiar with Numpy and Matplotlib pretty soon!

Well, here are my recommendations for reading – and the list of books actually implies an order:

“Machine Learning – An applied mathematics introduction”, Paul Wilmott, 2019, Panda Ohana Publishing
This is one of the best introductory books on Machine learning I have read so far – if not simply the best. One of its many advantages is: It is relatively short – around 220 pages – and concise. Despite the math it is written in a lively, personal style and I like the somewhat dry humor of the author.

This book will give you a nice overview of the most important methodologies in ML. It does not include any (Python) coding – and in my opinion this is another advantage for beginners. Coding does not distract you from the basic concepts. A thing this book will not give you is an abstract introduction into “Convolutional Neural Networks” [CNNs].

Nevertheless: Read this book before you read anything else!

“Python Machine Learning”, Sebastian Raschka, 2016, Packt Publishing Ltd
When I started reading books on ML this was one of the first I came across. Actually, I did not like it at my first reading trial. One of my problems was that I had not enough knowledge regarding Python and Matplotlib. After some months and some basics in Python I changed my mind. The book is great! It offers a lot of coding examples and the introduction into SciKit and at some points also Pandas is quite OK.

It is a “hands on” type of book – you should work with the code examples given and modify them. This first edition of the book does, however, not provide you with an introduction into CNNs and advanced tools like Tensorflow and the Keras interface. Which in my opinion was not a disadvantage at the time of reading. What I still do not like are the mathematical explanations at some points where the author argues more on the level of hints than real explanations. But it is a great book to start with SciKit-Learn experiments – and you will deepen your insight in methodologies learned in the book of Wilmott.

Note: There is a new edition available: “Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2”, 3rd Edition, by S. Raschka and V. Mirjalili.
I have not read it, but if you think about a book of Rashka, you should probably buy this edition.
For my German Readers: There is a German version of the 2nd edition available – in much better hard cover and printing quality (mitp verlag), but also more expensive. But it does not cover Tensorflow 2, to my knowledge.

“Hands-On Machine learning with SciKit-Learn, Keras & Tensorflow”, 2nd edition, Aurelien Geron, 2019, O’Reilly
This is a pure treasure trove! I have read the 1st edition, but a week ago I bought the 2nd edition. It seems to be far better than the first edition! Not only because of the colored graphics – which really help the reader to understand things better. The book has also been revised and extended! Compared with the first edition it is partially a new book. In my opinion all important topics in ML have been covered. So, if you want to extend your practical knowledge to CNNs, RNNs and Reinforcement Learning go for it. But around 780 pages will require their time ….

“Deep Learning mit Python und Keras”, Francois Chollet, 2018, mitp Verlag
I have only read and worked with the German version. The English version was published by Manning Publications. A profound introduction into Keras and at the same time a nice introduction into CNNs. I also liked the chapters on Generative Deep Learning. Be prepared to have a reasonable GPU ready when you start working with this book!

For theorists: “Neural Networks and Analog Computation – Beyond the Turing Limit”, H.T. Siegelmann, 1999, Springer Science+Business Media, New York
This book is only for those with a background in mathematics and theoretical information and computation science. You have been warned! But it is a strong, strong book on ANN-theory which provides major insights on the relation between ANNs, super-Turing systems and even physical systems. Far ahead of its time …

There are of course many more books on the market on very different levels – and I meanwhile own quite a bunch of them. But I limited the list intentionally. Have fun!

A simple Python program for an ANN to cover the MNIST dataset – XI – confusion matrix

Welcome back to my readers who followed me through the (painful?) process of writing a Python class to simulate a “Multilayer Perceptron” [MLP]. The pain in my case resulted from the fact that I am still a beginner in Machine Learning [ML] and Python. Nevertheless, I hope that we have meanwhile acquired some basic understanding of how an MLP works and “learns”. During the course of the last articles we had a close look at such nice things as “forward propagation”, “gradient descent”, “mini-batches” and “error backward propagation”. For the latter I gave you a mathematical description to grasp the background of the matrix operations involved.

Where do we stand after 10 articles and a PDF on the math?

A simple program for an ANN to cover the Mnist dataset – X – mini-batch-shuffling and some more tests
A simple program for an ANN to cover the Mnist dataset – IX – First Tests
A simple program for an ANN to cover the Mnist dataset – VIII – coding Error Backward Propagation
A simple program for an ANN to cover the Mnist dataset – VII – EBP related topics and obstacles
A simple program for an ANN to cover the Mnist dataset – VI – the math behind the „error back-propagation“
A simple program for an ANN to cover the Mnist dataset – V – coding the loss function
A simple program for an ANN to cover the Mnist dataset – IV – the concept of a cost or loss function
A simple program for an ANN to cover the Mnist dataset – III – forward propagation
A simple program for an ANN to cover the Mnist dataset – II – initial random weight values
A simple program for an ANN to cover the Mnist dataset – I – a starting point

We have a working code

  • with some parameters to control layers and node numbers, learning and momentum rates and regularization,
  • with many dummy parts for other output and activation functions than the sigmoid function we used so far,
  • with prepared code fragments for applying MSE instead of “Log Loss” as a cost function,
  • and with dummy parts for handling different input datasets than the MNIST example.

The code is not yet optimized; it includes e.g. many statements for tests which we should eliminate or comment out. A completely open conceptual aspect is the optimization of the adaption of the learning rate; it is very primitive so far. We also need an export/import functionality to be able to perform training with a series of limited epoch numbers per run.
We should also save the weights and accuracy data after a fixed epoch interval to be able to analyze a bit more after training. Another idea – though probably costly – is to even perform intermediate runs on the test data set and get some information on the development of the averaged error on the test data.

Despite all these deficits, which we need to cover in some more articles, we are already able to perform an insightful task – namely to find out which numbers and corresponding images of the MNIST dataset our MLP has problems with. This leads us to the topics of a confusion matrix and other measures for the accuracy of our algorithm.

However, before we look at these topics, we first create some useful code, which we can save inside cells of the Jupyter notebook we maintain for testing our class “MyANN”.

Some functions to evaluate the prediction capability of our ANN after training

For further analysis we shall apply the following functions later on:

# ------ predict results for all test data 
# *************************
def predict_all_test_data(): 
    size_set = ANN._X_test.shape[0]

    li_Z_in_layer_test  = [None] * ANN._n_total_layers
    li_Z_in_layer_test[0] = ANN._X_test
    
    # Transpose input data matrix  
    ay_Z_in_0T       = li_Z_in_layer_test[0].T
    li_Z_in_layer_test[0] = ay_Z_in_0T
    li_A_out_layer_test  = [None] * ANN._n_total_layers

    # prediction by forward propagation of the whole test set 
    ANN._fw_propagation(li_Z_in = li_Z_in_layer_test, li_A_out = li_A_out_layer_test, b_print = False) 
    ay_predictions_test = np.argmax(li_A_out_layer_test[ANN._n_total_layers-1], axis=0)
    
    # accuracy 
    ay_errors_test = ANN._y_test - ay_predictions_test 
    acc = (np.sum(ay_errors_test == 0)) / size_set
    print ("total acc for test data = ", acc)

def predict_all_train_data(): 
    size_set = ANN._X_train.shape[0]

    li_Z_in_layer_test  = [None] * ANN._n_total_layers
    li_Z_in_layer_test[0] = ANN._X_train
    # Transpose 
    ay_Z_in_0T       = li_Z_in_layer_test[0].T
    li_Z_in_layer_test[0] = ay_Z_in_0T
    li_A_out_layer_test  = [None] * ANN._n_total_layers

    ANN._fw_propagation(li_Z_in = li_Z_in_layer_test, li_A_out = li_A_out_layer_test, b_print = False) 
    Result = np.argmax(li_A_out_layer_test[ANN._n_total_layers-1], axis=0)
    Error = ANN._y_train - Result 
    acc = (np.sum(Error == 0)) / size_set
    print ("total acc for train data = ", acc)    
    

# Plot confusion matrix 
# orginally from Runqi Yang; 
# see https://gist.github.com/hitvoice/36cf44689065ca9b927431546381a3f7
def cm_analysis(y_true, y_pred, filename, labels, ymap=None, figsize=(10,10)):
    """
    Generate matrix plot of confusion matrix with pretty annotations.
    The plot image is saved to disk.
    args: 
      y_true:    true label of the data, with shape (nsamples,)
      y_pred:    prediction of the data, with shape (nsamples,)
      filename:  filename of figure file to save
      labels:    string array, name the order of class labels in the confusion matrix.
                 use `clf.classes_` if using scikit-learn models.
                 with shape (nclass,).
      ymap:      dict: any -> string, length == nclass.
                 if not None, map the labels & ys to more understandable strings.
                 Caution: original y_true, y_pred and labels must align.
      figsize:   the size of the figure plotted.
    """
    if ymap is not None:
        y_pred = [ymap[yi] for yi in y_pred]
        y_true = [ymap[yi] for yi in y_true]
        labels = [ymap[yi] for yi in labels]
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    cm_sum = np.sum(cm, axis=1, keepdims=True)
    cm_perc = cm / cm_sum.astype(float) * 100
    annot = np.empty_like(cm).astype(str)
    nrows, ncols = cm.shape
    for i in range(nrows):
        for j in range(ncols):
            c = cm[i, j]
            p = cm_perc[i, j]
            if i == j:
                s = cm_sum[i]
                annot[i, j] = '%.1f%%\n%d/%d' % (p, c, s)
            elif c == 0:
                annot[i, j] = ''
            else:
                annot[i, j] = '%.1f%%\n%d' % (p, c)
    cm = pd.DataFrame(cm, index=labels, columns=labels)
    cm.index.name = 'Actual'
    cm.columns.name = 'Predicted'
    fig, ax = plt.subplots(figsize=figsize)
    ax=sns.heatmap(cm, annot=annot, fmt='')
    #plt.savefig(filename)

    
#
# Plotting 
# **********
def plot_ANN_results(): 
    num_epochs  = ANN._n_epochs
    num_batches = ANN._n_batches
    num_tot = num_epochs * num_batches

    cshape = ANN._ay_costs.shape
    print("n_epochs = ", num_epochs, " n_batches = ", num_batches, "  cshape = ", cshape )
    tshape = ANN._ay_theta.shape
    print("n_epochs = ", num_epochs, " n_batches = ", num_batches, "  tshape = ", tshape )


    #sizing
    fig_size = plt.rcParams["figure.figsize"]
    fig_size[0] = 12
    fig_size[1] = 5

    # Two figures 
    # -----------
    fig1 = plt.figure(1)
    fig2 = plt.figure(2)

    # first figure with two plot-areas with axes 
    # --------------------------------------------
    ax1_1 = fig1.add_subplot(121)
    ax1_2 = fig1.add_subplot(122)

    ax1_1.plot(range(len(ANN._ay_costs)), ANN._ay_costs)
    ax1_1.set_xlim (0, num_tot+5)
    ax1_1.set_ylim (0, 1500)
    ax1_1.set_xlabel("epochs * batches (" + str(num_epochs) + " * " + str(num_batches) + " )")
    ax1_1.set_ylabel("costs")

    ax1_2.plot(range(len(ANN._ay_theta)), ANN._ay_theta)
    ax1_2.set_xlim (0, num_tot+5)
    ax1_2.set_ylim (0, 0.15)
    ax1_2.set_xlabel("epochs * batches (" + str(num_epochs) + " * " + str(num_batches) + " )")
    ax1_2.set_ylabel("averaged error")


 
The first function “predict_all_test_data()” allows us to create an array with the predicted values for all test data. This is based on a forward propagation of the full set of test data; so we handle some relatively big matrices here. The second function delivers prediction values for all training data; the operations of the propagation algorithm involve even bigger matrices here. You will nevertheless experience that the calculations are performed very quickly. Prediction is much faster than training!

The third function “cm_analysis()” is not from me, but taken from Github Gist; see below. The fourth function “plot_ANN_results()” creates plots of the evolution of the cost function and the averaged error after training. We come back to these functions below.

To be able to use these functions we need to perform some more imports first. The full list of statements which we should place in the first Jupyter cell of our test notebook now reads:

import numpy as np
import numpy.random as npr
import math 
import sys
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.metrics import confusion_matrix
from scipy.special import expit  
import seaborn as sns
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import matplotlib.patches as mpat 
import time 
import imp
from mycode import myann

Note the new lines for the import of the “pandas” and “seaborn” libraries. Please inform yourself about the purpose of each library on the Internet.

Limited Accuracy

In the last article we performed some tests which showed a thorough robustness of our MLP regarding the MNIST dataset. There was some slight overfitting, but playing around with hyper-parameters showed no extraordinary jump in “accuracy“, which we defined to be the percentage of correctly predicted records in the test dataset.

In general we can say that an accuracy level of 95% is what we could achieve within the range of parameters we played around with. Regularization (Lambda2 > 0) had some positive impact. A structural change to an MLP with just one hidden layer did NOT give us a real breakthrough regarding CPU-time consumption, but when going down to 50 or 30 nodes in the intermediate layer we saw at least some reduction by up to 25%. But then our accuracy started to become worse.

Whilst we did our tests we measured the ANN’s “accuracy” by comparing the number of records for which our ANN did a correct prediction with the total number of records in the test data set. This is a global measure of accuracy; it averages over all 10 digits, i.e. all 10 classification categories. However, if we want to look a bit deeper into the prediction errors our MLP produces, it is useful to introduce some more quantities and other measures of accuracy, which can be applied on the level of each output category.

Measures of accuracy, related quantities and classification errors for a specific category

The following quantities and basic concepts are often used in the context of ML algorithms for classification tasks. Predictions of our ANN will not be error free and thus we get an accuracy less than 100%. There are different reasons for this – and they couple different output categories. In the case of MNIST the output categories correspond to the digits 0 to 9. Let us take a specific output category, namely the digit “5”. Then there are two basic types of errors:

  • The network may have predicted a “3” for a MNIST image record, which actually represents a “5” (according to the “y_train”-value for this record). This error case is called a “False Negative“.
  • The network may have predicted a “5” for a MNIST image record, which actually represents a “3” according to its “y_train”-value. This error case is called a “False Positive“.

Both cases mark some difference between an actual and a predicted number value for a MNIST test record. Technically, “actual” refers to the number value given by the related record in our array “ANN._y_test”. “Predicted” refers to the related record in an array “ay_predictions_test”, which our function “predict_all_test_data()” returns (see the code above).

Regarding our example digit “5” we obviously can distinguish between the following quantities:

  • AN : The total number of all records in the test data set which actually correspond to our digit “5”.
  • TP: The number of “True Positives”, i.e. the number of those cases correctly detected as “5”s.
  • FP: The number of “False Positives”, i.e. the number of those cases where our ANN falsely predicts a “5”.
  • FN: The number of “False Negatives”, i.e. the number of those cases where our ANN falsely predicts another digit than “5”, but where it actually should predict a “5”.

Then we can calculate the following ratios which all somehow measure “accuracy” for a specific output category:

  • Precision:
    TP / (TP + FP)
  • Recall:
    TP / (TP + FN)
  • Accuracy:
    TP / AN
  • F1:
    TP / ( TP + 0.5*(FN + FP) )

A careful reader will (rightly) guess that the quantity “recall” corresponds to what we would naively define as “accuracy” – namely the ratio TP/AN.
From its definition it is clear that the quantity “F1” gives us a weighted average between the measures “precision” and “recall”.
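A tiny worked example with made-up numbers for one category may help to fix the definitions:

TP, FP, FN = 890, 60, 90          # made-up numbers for one digit category
AN = TP + FN                      # all records actually belonging to the category

precision = TP / (TP + FP)                 # 0.937
recall    = TP / (TP + FN)                 # 0.908  = TP / AN, our naive "accuracy"
f1        = TP / (TP + 0.5 * (FN + FP))    # 0.922  - the harmonic mean of precision and recall
print(precision, recall, f1)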

How can we get these numbers for all 10 categories from our MLP after training ?

Confusion matrix

When we want to analyze our basic error types per category we need to look at the discrepancy between predicted and actual data. This suggests a presentation in form of a matrix with all possible category values both in x- and y-direction. The cells of such a matrix – e.g. a cell for an actual “5” and a predicted “3” – could then be filled with the corresponding FN-number.
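In its most basic form such a matrix can be obtained directly from the function confusion_matrix() of sklearn.metrics, which we already imported above. A minimal sketch – it assumes that you stored the predictions returned by “predict_all_test_data()” in an array “ay_predictions_test” and that the labels are integers:

from sklearn.metrics import confusion_matrix

# rows = actual digit, columns = predicted digit
cm = confusion_matrix(ANN._y_test, ay_predictions_test, labels=list(range(10)))
print(cm[5])      # row 5: how often an actual "5" was predicted as a 0, 1, ..., 9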

We will later on develop our own code to solve the task of creating and displaying such a matrix. But there is a nice guy called Runqi Yang who shared some code for precisely this purpose on GitHub Gist; see https://gist.github.com/hitvoice/36c…
We can use his suggested code as it is in our context. We have already presented it above in form of the function “cm_analysis()“, which uses the pandas and seaborn libraries.

After a training run with the following parameters

try: 
    ANN = myann.MyANN(my_data_set="mnist_keras", n_hidden_layers = 2, 
                 ay_nodes_layers = [0, 70, 30, 0], 
                 n_nodes_layer_out = 10,  
                 my_loss_function = "LogLoss",
                 n_size_mini_batch = 500,
                 n_epochs = 1800, 
                 n_max_batches = 2000,  # small values only for test runs
                 lambda2_reg = 0.2, 
                 lambda1_reg = 0.0,      
                 vect_mode = 'cols', 
                 learn_rate = 0.0001,
                 decrease_const = 0.000001,
                 mom_rate   = 0.00005,  
                 shuffle_batches = True,
                 print_period = 50,         
                 figs_x1=12.0, figs_x2=8.0, 
                 legend_loc='upper right',
                 b_print_test_data = True
                 )
except SystemExit:
    print("stopped")

we get

and

and eventually

When I studied the last plot for a while I found it really instructive. Each of its cells outside the diagonal obviously contains the number of “False Negative” records for these two specific category values – but with respect to the actual value.

What more do we learn from the matrix? Well, the numbers in the cells on the diagonal, in a row and in a column are related to our quantities TP, FN and FP:

  • Cells on the diagonal: On the diagonal we should find many correct “True Positive” values compared to the actual correct MNIST digits (at least if all numbers are reasonably distributed across the MNIST dataset). We see that this indeed is the case. The ratio of the “True Positives” to the “Actual Positives” is given as a percentage, together with the related numbers, inside the respective cells on the diagonal.
  • Cells of a row: The values in the cells of a row (without the cell on the diagonal) of the displayed matrix give us the numbers/ratios for “False Negatives” – with respect to the actual value. If you sum up the individual FN-numbers you get the total number of “False Negatives”, which of course is the difference between the total number AN and the number TP for the actual category.
  • Cells of a column: The column cells contain the numbers/ratios for “False Positives” – with respect to the predicted value. If you sum up the individual FP-numbers you get the total number of “False Positives” with respect to the predicted column value.

So, be a bit careful: An FN value with respect to an actual row value is an FP value with respect to the predicted column value – if the cell is one outside the diagonal!

All ratios are calculated with respect to the total actual numbers of data records for a specific category, i.e. a digit.
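Expressed in code against the plain Numpy confusion matrix – a small sketch, assuming “cm” holds the raw count matrix with rows for actual and columns for predicted digits (e.g. the first array returned by our function “create_cf()” below):

import numpy as np

TP = np.diag(cm)                  # true positives per digit
FN = cm.sum(axis=1) - TP          # row sums without the diagonal: false negatives
FP = cm.sum(axis=0) - TP          # column sums without the diagonal: false positives
AN = cm.sum(axis=1)               # all actual records per digit (= TP + FN)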

Looking closely we detect that our code obviously has some problems with distinguishing pictures of “5”s from pictures of “3”s, “6”s and “8”s. The same is true for “8”s and “3”s or “2”s. Also the distinction between “9”s, “3”s and “4”s seems to be difficult sometimes.

Does the confusion matrix change due to random initial weight values and mini-batch-shuffling?

We have seen already that statistical variations have no big impact on the eventual accuracy when training converges to points in the parameter-space close to the point for the minimum of the overall cost-function. Statistical effects between two training runs stem in our case from statistically chosen initial values of the weights and from the changes to our mini-batch composition between epochs. But as long as our training converges (and ends up in a global minimum) we should not see any big impact on the confusion matrix. And indeed a second run leads to:

The values are pretty close to those of the first run.

Precision, Recall values per digit category and our own confusion matrix

Ok, we now can look at the nice confusion matrix plot and sum up all the values in a row of the confusion matrix to get the total FN-number for the related actual digit value. Or sum up the entries in a column to get the total FP-number. But we want to calculate these values from the ANN’s prediction results without looking at a plot and doing the summation by hand. In addition we want to get the data of the confusion matrix into our own Numpy matrix array independently of foreign code. The following box displays the code for two functions, which are well suited for this task:

# A class to print in color and bold 
class color:
   PURPLE = '\033[95m'
   CYAN = '\033[96m'
   DARKCYAN = '\033[36m'
   BLUE = '\033[94m'
   GREEN = '\033[92m'
   YELLOW = '\033[93m'
   RED = '\033[91m'
   BOLD = '\033[1m'
   UNDERLINE = '\033[4m'
   END = '\033[0m'

def acc_values(ay_pred_test, ay_y_test):
    ay_x = ay_pred_test
    ay_y = ay_y_test
    # ----- 
    #- dictionary for all false positives for all 10 digits
    fp = {}
    fpnum = {}
    irg = range(10)
    for i in irg:
        key = str(i)
        xfpi = np.where(ay_x==i)[0]
        fpi = np.zeros((10000, 3), np.int64)

        n = 0
        for j in xfpi: 
            if ay_y[j] != i: 
                row = np.array([j, ay_x[j], ay_y[j]])
                fpi[n] = row
                n+=1

        fpi_real   = fpi[0:n]
        fp[key]    = fpi_real
        fpnum[key] = fp[key].shape[0] 

    #- dictionary for all false negatives for all 10 digits
    fn = {}
    fnnum = {}
    irg = range(10)
    for i in irg:
        key = str(i)
        yfni = np.where(ay_y==i)[0]
        fni = np.zeros((10000, 3), np.int64)

        n = 0
        for j in yfni: 
            if ay_x[j] != i: 
                row = np.array([j, ay_x[j], ay_y[j]])
                fni[n] = row
                n+=1

        fni_real = fni[0:n]
        fn[key] = fni_real
        fnnum[key] = fn[key].shape[0] 

    #- dictionary for all true positives for all 10 digits
    tp = {}
    tpnum = {}
    actnum = {}
    irg = range(10)
    for i in irg:
        key = str(i)
        ytpi = np.where(ay_y==i)[0]
        actnum[key] = ytpi.shape[0]
        tpi = np.zeros((10000, 3), np.int64)

        n = 0
        for j in ytpi: 
            if ay_x[j] == i: 
                row = np.array([j, ay_x[j], ay_y[j]])
                tpi[n] = row
                n+=1

        tpi_real = tpi[0:n]
        tp[key] = tpi_real
        tpnum[key] = tp[key].shape[0] 
 
    #- We create an array for the precision values of all 10 digits 
    ay_prec_rec_f1 = np.zeros((10, 9), np.float64)   # float array - it also stores the ratio values
    print(color.BOLD + "Precision, Recall, F1, Accuracy, TP, FP, FN, AN" + color.END +"\n")
    print(color.BOLD + "i  ", "prec  ", "recall  ", "acc    ", "F1       ", "TP    ", 
          "FP    ", "FN    ", "AN" + color.END) 
    for i in irg:
        key = str(i)
        tpn = tpnum[key]
        fpn = fpnum[key]
        fnn = fnnum[key]
        an  = actnum[key]
        precision = tpn / (tpn + fpn) 
        prec = format(precision, '7.3f')
        recall = tpn / (tpn + fnn) 
        rec = format(recall, '7.3f')
        accuracy = tpn / an
        acc = format(accuracy, '7.3f')
        f1 = tpn / ( tpn + 0.5 * (fnn+fpn) )
        F1 = format(f1, '7.3f')
        TP = format(tpn, '6.0f')
        FP = format(fpn, '6.0f')
        FN = format(fnn, '6.0f')
        AN = format(an,  '6.0f')

        row = np.array([i, precision, recall, accuracy, f1, tpn, fpn, fnn, an])
        ay_prec_rec_f1[i] = row 
        print (i, prec, rec, acc, F1, TP, FP, FN, AN)
        
    return tp, tpnum, fp, fpnum, fn, fnnum, ay_prec_rec_f1 

def create_cf(ay_fn, ay_tpnum):
    ''' fn: array with false negatives row = np.array([j, x[j], y[j]])
    '''
    cf = np.zeros((10, 10), np.int64)
    rgi = range(10)
    rgj = range(10)
    for i in rgi:
        key = str(i)
        fn_i = ay_fn[key][ay_fn[key][:,2] == i]
        for j in rgj:
            if j!= i: 
                fn_ij = fn_i[fn_i[:,1] == j]
                #print(i, j, fn_ij)
                num_fn_ij = fn_ij.shape[0]
                cf[i,j] = num_fn_ij
            if j==i:
                cf[i,j] = ay_tpnum[key]

    cols=["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
    df = pd.DataFrame(cf, columns=cols, index=cols)
    # print( "\n", df, "\n")
    # df.style
    
    return cf, df
    
 

 

The first function takes an array with prediction values (later on provided externally by our “ay_predictions_test”) and compares its values with those of a y_test array which contains the actual values (later provided externally by our “ANN._y_test”). Then it uses array-slicing to create new arrays with information on all error records, related indices and the confused category values. Eventually, the function determines the numbers for AN, TP, FP and FN (per digit category) and prints the gathered information. It also returns arrays with information on records which are “True Positives”, “False Positives”, “False Negatives” and the various numbers.

The second function uses array-slicing of the array which contains all information on the “False Negatives” to reproduce the confusion matrix. It involves Pandas to produce a styled output for the matrix.

Now you can run the above code and the following one in Jupyter cells – of course, only after you have completed a training and a prediction run:

For my last run I got the following data:

We again see that especially “5”s and “9”s have a problem with FNs. When you compare the values of the last printed matrix with those in the plot of the confusion matrix above, you will see that our code produces the right FN/FP/TP-values. We have succeeded in producing our own confusion matrix – and we have all values directly available in our own Numpy arrays.

Some images of “4”-digits with errors

We can use the arrays which we created with functions above to get a look at the images. We use the function “plot_digits()” of Aurelien Geron at handson-ml2 chapter 03 on classification to plot several images in a series of rows and columns. The code is pretty easy to understand; at its center we find the matplotlib-function “imshow()”, which we have already used in other ML articles.

We again perform some array-slicing of the arrays our function “acc_values()” (see above) produces to identify the indices of images in the “X_test”-dataset we want to look at. We collect the first 50 examples of “true positive” images of the “4”-digit, then we take the “false positives” of the 4-digit and eventually the “false negative” cases. We then plot the images in this order:

import matplotlib as mpl   # needed for the mpl.cm.binary colormap used below

def plot_digits(instances, images_per_row=10, **options):
    size = 28
    images_per_row = min(len(instances), images_per_row)
    images = [instance.reshape(size,size) for instance in instances]
    n_rows = (len(instances) - 1) // images_per_row + 1
    row_images = []
    n_empty = n_rows * images_per_row - len(instances)
    images.append(np.zeros((size, size * n_empty)))
    for row in range(n_rows):
        rimages = images[row * images_per_row : (row + 1) * images_per_row]
        row_images.append(np.concatenate(rimages, axis=1))
    image = np.concatenate(row_images, axis=0)
    plt.imshow(image, cmap = mpl.cm.binary, **options)
    plt.axis("off")

ay_tp, ay_tpnum, ay_fp, ay_fpnum, ay_fn, ay_fnnum, ay_prec_rec_f1 = \
    acc_values(ay_pred_test = ay_predictions_test, ay_y_test = ANN._y_test)

idx_act = str(4)

# fetching the true positives 
num_tp = ay_tpnum[idx_act]
idx_tp = ay_tp[idx_act][:,[0]]
idx_tp = idx_tp[:,0]
X_test_tp = ANN._X_test[idx_tp]

# fetching the false positives 
num_fp = ay_fpnum[idx_act]
idx_fp = ay_fp[idx_act][:,[0]]
idx_fp = idx_fp[:,0]
X_test_fp = ANN._X_test[idx_fp]

# fetching the false negatives 
num_fn = ay_fnnum[idx_act]
idx_fn = ay_fn[idx_act][:,[0]]
idx_fn = idx_fn[:,0]
X_test_fn = ANN._X_test[idx_fn]

# plotting 
# +++++++++++
plt.figure(figsize=(12,12))

# plotting the true positives
# --------------------------
plt.subplot(321)
plot_digits(X_test_tp[0:25], images_per_row=5 )
plt.subplot(322)
plot_digits(X_test_tp[25:50], images_per_row=5 )

# plotting the false positives
# --------------------------
plt.subplot(323)
plot_digits(X_test_fp[0:25], images_per_row=5 )
plt.subplot(324)
plot_digits(X_test_fp[25:], images_per_row=5 )

# plotting the false negatives
# ------------------------------
plt.subplot(325)
plot_digits(X_test_fn[0:25], images_per_row=5 )
plt.subplot(326)
plot_digits(X_test_fn[25:], images_per_row=5 )

 

The first row of the plot shows the (first) 50 “True Positives” for the “4”-digit images in the MNIST test data set. The second row shows the “False Positives”, the third row the “False Negatives”.

Very often you can guess why our MLP makes a mistake. However, in some cases we just have to acknowledge that the human brain is a much better pattern recognition machine than a stupid MLP 🙂 .

Conclusion

With the help of a “confusion matrix” it is easy to find out for which MNIST digit-images our algorithm has major problems. A confusion matrix gives us the necessary numbers of those digits (and their images) for which the MLP wrongly predicts “False Positives” or “False Negatives”.

We have also seen that there are three quantities – precision, recall, F1 – which are useful to describe the accuracy of a classification algorithm per classification category.

We have written some code to collect all necessary information about “confused” images into our own Numpy arrays after training. Slicing of Numpy arrays proved to be useful, and matplotlib helped us to visualize examples of the wrongly classified MNIST digit-images.

In the next article
A simple program for an ANN to cover the Mnist dataset – XII – accuracy evolution, learning rate, normalization
we shall extract some more information on the evolution of accuracy during training. We shall also make use of a “clustering” technique to reduce the number of input nodes.

Links

The python code of Runqi Yang (“hitvoice”) at gist.github.com for creating a plot of a confusion-matrix
Information on the function confusion_matrix() provided by sklearn.metrics
Information on the heatmap-functionality provided by “seaborn”
A python seaborn tutorial