A simple Python program for an ANN to cover the MNIST dataset – XIII – the impact of regularization

I continue with my growing series on a Multilayer perceptron and the MNIST dataset.

A simple Python program for an ANN to cover the MNIST dataset – XII – accuracy evolution, learning rate, normalization
A simple Python program for an ANN to cover the MNIST dataset – XI – confusion matrix
A simple Python program for an ANN to cover the MNIST dataset – X – mini-batch-shuffling and some more tests
A simple Python program for an ANN to cover the MNIST dataset – IX – First Tests
A simple Python program for an ANN to cover the MNIST dataset – VIII – coding Error Backward Propagation
A simple Python program for an ANN to cover the MNIST dataset – VII – EBP related topics and obstacles
A simple Python program for an ANN to cover the MNIST dataset – VI – the math behind the „error back-propagation“
A simple Python program for an ANN to cover the MNIST dataset – V – coding the loss function
A simple Python program for an ANN to cover the MNIST dataset – IV – the concept of a cost or loss function
A simple Python program for an ANN to cover the MNIST dataset – III – forward propagation
A simple Python program for an ANN to cover the MNIST dataset – II – initial random weight values
A simple Python program for an ANN to cover the MNIST dataset – I – a starting point

In the last article of the series we gathered some interesting experience with the variation of the “learning rate”. We also saw that a reasonable range for the initial weight values should be chosen.

Even more fascinating, however, was the impact of a normalization of the input data on a smooth and fast gradient descent. We drew the conclusion that normalization is of major importance when we use the sigmoid function as the MLP’s activation function – especially for nodes in the first hidden layer and for input data which are on average relatively big. The reason for our concern was the saturation of the sigmoid function – and of other functions with a similar dependency on their argument – for big absolute argument values. In the meantime I have tried to make the importance of normalization even more plausible with the help of a very minimalistic perceptron for which we can analyze saturation effects a bit more in depth; you get to the related article series via the following link:

A single neuron perceptron with sigmoid activation function – III – two ways of applying Normalizer

There we also have a look at other normalizers or feature scalers.

But back to our series on a multi-layer perceptron. You may have asked yourself in the meantime: Why did he not check the impact of regularization? Indeed, we kept the parameter Lambda2 for the quadratic regularization term constant in all experiments so far: Lambda2 = 0.2. So, the question about the impact of regularization, e.g. on accuracy, is a good one.

How big is the regularization term and how does it evolve during gradient descent training?

I add one more question: How big is the relative contribution of the regularization term to the total loss or cost function? In our Python program for an MLP model we included a so-called quadratic Ridge term:

Lambda2 * 0.5 * SUM[all weights**2], where bias nodes are excluded from the sum.

From various books on Machine Learning [ML] you just learn to choose the factor Lambda2 in the range between 0.01 and 0.1. But how big is the resulting term actually in comparison to the standard cost term, then, and how does the ratio between both terms evolve during gradient descent? What factors influence this ratio?
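
As an illustration, here is a minimal sketch of how such a term can be computed for the weight matrices of an MLP. The list li_w of weight matrices and the convention that the first column of each matrix belongs to a bias node are assumptions of mine for this sketch:

import numpy as np

def ridge_term(li_w, lambda2):
    # quadratic Ridge term: Lambda2 * 0.5 * SUM[all weights**2],
    # with weights of bias nodes excluded from the sum
    sq_sum = 0.0
    for w in li_w:
        sq_sum += np.sum(w[:, 1:]**2)   # assumption: column 0 refers to a bias node
    return lambda2 * 0.5 * sq_sum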

As we follow a training strategy based on mini-batches, the regularization contribution is added to the costs of each mini-batch. So its relative importance of course varies with the size of the mini-batches! Other factors which may also be of some importance – at least during the first epochs – could be the total number of weights in our network and the range of the initial weight values.

Regarding the evolution during a converging gradient descent we already know that the total costs go down on the path to a cost minimum – whilst the weight values approach a stable level. So there is a (non-linear!) competition between the regularization term and the real costs of the “Log Loss” cost function! During convergence the relative importance of the regularization term may therefore grow until the ratio to the standard costs eventually reaches a constant level. But how dominant will the regularization term get in the end?

Let us do some experiments with the MNIST dataset again! We fix some common parameters and conditions for our test runs:
As we saw in the last article we should normalize the input data. So, all of our numerical experiments below (with the exception of the last one) are done with standardized input data (using Scikit-Learn’s StandardScaler). In addition the initial weights of all layers are set according to the sqrt(nodes)-rule, i.e. uniformly in the interval [-0.5*sqrt(1/num_nodes), 0.5*sqrt(1/num_nodes)], with num_nodes meaning the number of nodes in a layer. Other parameters, which we keep constant, are:

Parameters: learn_rate = 0.001, decrease_rate = 0.00001, mom_rate = 0.00005, n_size_mini_batch = 500, n_epochs = 800.

I added some statements to the method for cost calculation in order to save the relative part of the regularization terms with respect to the total costs of each mini-batch in a Numpy array and plot the evolution in the end. The changes are so simple that I omit showing the modified code.
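
Just to indicate the idea – with illustrative names, not the ones of my class:

# per mini-batch, inside the method for cost calculation (illustrative names):
reg_term   = ridge_term(li_w, lambda2)            # see the sketch above
loss_total = loss_log + reg_term                  # standard Log Loss plus Ridge term
ay_reg_ratio[idx_batch] = reg_term / loss_total   # saved in a Numpy array for plotting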

A first look at the evolution of the relative contribution of regularization to the total loss of a mini-batch

How does the outcome of gradient descent look for standardized input data and a Lambda2-value of 0.1?

Lambda2 = 0.1
Results: acc_train: 0.999 , acc_test: 0.9714, convergence after ca. 600 epochs

We see that the regularization term actually dominates the total loss of a mini-batch at convergence – at least with our present parameter setting. In comparison to the total loss of the full training set the contribution is of course much smaller and typically below 1%.

A small Lambda2 term

Let us reduce the regularization term by setting Lambda2 = 0.01. We expect its initial contribution to the costs of a batch to be smaller then, but this does NOT mean that the ratio to the standard costs of the batch automatically shrinks significantly, too:

Lambda2 = 0.01
Results: acc_train: 1.0 , acc_test: 0.9656, convergence after ca. 350 epochs

Note the absolute scale of the costs in the plots! We ended up at a much lower level of the total loss of a batch! But the relative dominance of regularization at the point of convergence actually increased! However, this did not help us with the accuracy of our MLP-algorithm on the test data set – although we fit the training set perfectly with 100% accuracy.

In the end this is what regularization is all about. We do not want total overfitting, i.e. a perfect adaptation of the network to the training set. It will not help in the sense of getting a better general accuracy on other input data. A Lambda2 of 0.01 is much too small in our case!

Slightly bigger regularization with Lambda2 = 0.2

So let’s enlarge Lambda2 a bit:
Lambda2 = 0.2
Results: acc_train: 0.9946 , acc_test: 0.9728, convergence after ca. 700 epochs

We get an improved accuracy!

Two other cases with significantly bigger Lambda2

Lambda2 = 0.4
Results: acc_train: 0.9858 , acc_test: 0.9693, convergence after ca. 600 epochs

Lambda2 = 0.8
Results: acc_train: 0.9705 , acc_test: 0.9588, convergence after ca. 400 epochs

OK, but in both cases we see a significant and systematic trend towards reduced accuracy values on the test data set with growing Lambda2-values > 0.2 for our chosen mini-batch size (500 samples).

Conclusion

We learned a bit about the impact of regularization today. Whatever the exact Lambda2-value – in the end the contribution of a regularization term becomes a significant part of the total loss of a mini-batch when we approach the total cost minimum. However, the factor Lambda2 must be chosen big enough to get an impact of regularization on the final minimum position in the weight-space! Then it will help to improve accuracy on general input data in comparison to overfitted solutions!

But we also saw that there is a balance to take care of: for an optimum of generalization AND accuracy you should neither make Lambda2 too small nor too big. In our case Lambda2 = 0.2 seems to be a reasonable and good choice. This might be different for other datasets.

All in all, studying the variation of the achieved accuracy with the factor of a Ridge regularization term seems to be a good investment of time in ML projects. We shall come back to this point in the next articles of this series.
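
A simple scan could look like the following sketch; “train_and_test()” is a hypothetical wrapper around the training and evaluation steps of this series, not a function of the actual code:

results = {}
for lambda2 in [0.01, 0.05, 0.1, 0.2, 0.4, 0.8]:
    # hypothetical wrapper: trains the MLP with the given Lambda2 and
    # returns the accuracies on the training and the test set
    acc_train, acc_test = train_and_test(lambda2=lambda2)
    results[lambda2] = (acc_train, acc_test)

for lambda2, (acc_tr, acc_te) in sorted(results.items()):
    print("Lambda2 = %5.2f:  acc_train = %.4f,  acc_test = %.4f" % (lambda2, acc_tr, acc_te))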

In the next article

A simple Python program for an ANN to cover the MNIST dataset – XIV – cluster detection in feature space

we shall start to work on cluster detection in the feature space of the MNIST data before using gradient descent.

 

A single neuron perceptron with sigmoid activation function – II – normalization to overcome saturation

I continue my small series on a single neuron perceptron to study the positive effects of the normalization of input data in combination with the use of the sigmoid function as the activation function. In the last article

A single neuron perceptron with sigmoid activation function – I – failure of gradient descent due to saturation

we have seen that the saturation of the sigmoid function for big positive or negative arguments can prevent a smooth gradient descent under certain conditions – even if a global minimum clearly exists.

A perceptron with just one computing neuron is a primitive example which demonstrates what can happen at the neurons of the first computing layer after the input layer of a real “Artificial Neural Network” [ANN]. We should really avoid providing too big input values there and take into account that input values for different features get added up.

Measures against saturation at neurons in the first computing layer

There are two elementary methods to avoid saturation of sigmoid like functions at neurons of the first hidden layer:

  • Normalization: One measure to avoid big input values is to normalize the input data. Normalization can be understood as a transformation of given real input values for all of the features into an interval [0, 1] or [-1, 1]. There are of course many transformations which map a real number distribution into a given limited interval. Some preserve the relative distances of data points, some do not. We shall have a look at some standard normalization variants used in Machine Learning [ML] during this and the next article.
    The effect with respect to a sigmoidal activation function is that the gradient for arguments in the range [-1, 1] is relatively big. The sigmoid function behaves almost as a linear function in this argument region; see the plot in the last article.
  • Choosing an appropriate (statistical) initial weight distribution: If we have a relatively big feature space as e.g. for the MNIST dataset with 784 features, normalization alone is not enough. The initial value distribution for the weights must also be taken care of, as we add up contributions of all input nodes (multiplied by the weights). We can follow a recommendation of LeCun (1990); see the book of Aurelien Geron recommended (here) for more details.
    Then we would choose a uniform distribution of values in a range [-alpha*sqrt(1/num_inp_nodes), alpha*sqrt(1/num_inp_nodes)], with alpha ≈ 1.73 and num_inp_nodes giving the number of input nodes, which typically is the number of features plus 1, if you use a bias neuron. As a rule of thumb I personally take [-0.5*sqrt(1/num_inp_nodes), 0.5*sqrt(1/num_inp_nodes)]; see the sketch after this list.
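
As an illustration, a minimal sketch of such a uniform initialization for one weight matrix; the helper and its parameters are my own choices for this example, not code from this series:

import numpy as np

def init_weights(num_out_nodes, num_inp_nodes, alpha=0.5):
    # uniform distribution in [-alpha*sqrt(1/num_inp_nodes), alpha*sqrt(1/num_inp_nodes)];
    # alpha = 0.5 is my rule of thumb, alpha ≈ 1.73 follows LeCun's recommendation
    limit = alpha * np.sqrt(1.0 / num_inp_nodes)
    return np.random.uniform(-limit, limit, size=(num_out_nodes, num_inp_nodes))

# e.g. 70 neurons fed by 785 input nodes (784 MNIST features plus a bias neuron):
# w = init_weights(70, 785)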

Normalization functions

The following quick&dirty Python code for a Jupyter cell calls some normalization functions for our simple perceptron scenario and directly executes the transformation; I have provided the required import statements for libraries already in the last article.

# ********
# Scaling
# ********

b_scale = True
scale_method = 3
# 0: Normalizer (standard), 1: StandardScaler, 2: By factor, 3: Normalizer per pair 
# 4: Min_Max, 5: Identity (no transformation) - just there for convenience  

shape_ay = (num_samples,)
ay_K1 = np.zeros(shape_ay)
ay_K2 = np.zeros(shape_ay)

# apply scaling
if b_scale:
    # shape_input = (num_samples,2)
    rg_idx = range(num_samples)
    if scale_method == 0:
      
        shape_input = (2, num_samples)
        ay_K = np.zeros(shape_input)
        for idx in rg_idx:
            ay_K[0][idx] = li_K1[idx] 
            ay_K[1][idx] = li_K2[idx] 
        scaler = Normalizer()
        ay_K = scaler.fit_transform(ay_K)
        for idx in rg_idx:
            ay_K1[idx] = ay_K[0][idx]   
            ay_K2[idx] = ay_K[1][idx] 
        print(ay_K1)
        print("\n")
        print(ay_K2)
    elif scale_method == 1: 
        shape_input = (num_samples,2)
        ay_K = np.zeros(shape_input)
        for idx in rg_idx:
            ay_K[idx][0] = li_K1[idx] 
            ay_K[idx][1] = li_K2[idx] 
        scaler = StandardScaler()
        ay_K = scaler.fit_transform(ay_K)
        for idx in rg_idx:
            ay_K1[idx] = ay_K[idx][0]   
            ay_K2[idx] = ay_K[idx][1]
    elif scale_method == 2:
        dmax = max(li_K1.max() - li_K1.min(), li_K2.max() - li_K2.min())
        ay_K1 = 1.0/dmax * li_K1
        ay_K2 = 1.0/dmax * li_K2
    elif scale_method == 3:
        shape_input = (num_samples,2)
        ay_K = np.zeros(shape_input)
        for idx in rg_idx:
            ay_K[idx][0] = li_K1[idx] 
            ay_K[idx][1] = li_K2[idx] 
        scaler = Normalizer()
        ay_K = scaler.fit_transform(ay_K)
        for idx in rg_idx:
            ay_K1[idx] = ay_K[idx][0]   
            ay_K2[idx] = ay_K[idx][1]
    elif scale_method == 4:
        shape_input = (num_samples,2)
        ay_K = np.zeros(shape_input)
        for idx in rg_idx:
            ay_K[idx][0] = li_K1[idx] 
            ay_K[idx][1] = li_K2[idx] 
        scaler = MinMaxScaler()
        ay_K = scaler.fit_transform(ay_K)
        for idx in rg_idx:
            ay_K1[idx] = ay_K[idx][0]   
            ay_K2[idx] = ay_K[idx][1]
    elif scale_method == 5:
        ay_K1 = li_K1
        ay_K2 = li_K2
            
            
# Get overview over costs on weight-mesh
wm1 = np.arange(-5.0,5.0,0.002)
wm2 = np.arange(-5.0,5.0,0.002)
#wm1 = np.arange(-0.3,0.3,0.002)
#wm2 = np.arange(-0.3,0.3,0.002)
W1, W2 = np.meshgrid(wm1, wm2) 
C, li_C_sgl = costs_mesh(num_samples = num_samples, W1=W1, W2=W2, li_K1 = ay_K1, li_K2 = ay_K2, \
                               li_a_tgt = li_a_tgt)


C_min = np.amin(C)
print("C_min = ", C_min)
IDX = np.argwhere(C==C_min)
print ("Coordinates: ", IDX)
wmin1 = W1[IDX[0][0]][IDX[0][1]] 
wmin2 = W2[IDX[0][0]][IDX[0][1]]
print("Weight values at cost minimum:",  wmin1, wmin2)

# Plots
# ******
fig_size = plt.rcParams["figure.figsize"]
#print(fig_size)
fig_size[0] = 19; fig_size[1] = 19

fig3 = plt.figure(3); fig4 = plt.figure(4)

ax3 = fig3.gca(projection='3d')
ax3.get_proj = lambda: np.dot(Axes3D.get_proj(ax3), np.diag([1.0, 1.0, 1, 1]))
ax3.view_init(25,135)
ax3.set_xlabel('w1', fontsize=16)
ax3.set_ylabel('w2', fontsize=16)
ax3.set_zlabel('Total costs', fontsize=16)
ax3.plot_wireframe(W1, W2, 1.2*C, colors=('green'))


ax4 = fig4.gca(projection='3d')
ax4.get_proj = lambda: np.dot(Axes3D.get_proj(ax4), np.diag([1.0, 1.0, 1, 1]))
ax4.view_init(25,135)
ax4.set_xlabel('w1', fontsize=16)
ax4.set_ylabel('w2', fontsize=16)
ax4.set_zlabel('Single costs', fontsize=16)
ax4.plot_wireframe(W1, W2, li_C_sgl[0], colors=('blue'))
#ax4.plot_wireframe(W1, W2, li_C_sgl[1], colors=('red'))
ax4.plot_wireframe(W1, W2, li_C_sgl[5], colors=('orange'))
#ax4.plot_wireframe(W1, W2, li_C_sgl[6], colors=('yellow'))
#ax4.plot_wireframe(W1, W2, li_C_sgl[9], colors=('magenta'))
#ax4.plot_wireframe(W1, W2, li_C_sgl[12], colors=('green'))

plt.show()

 

The results of the transformation for our two features are available in the arrays “ay_K1” and “ay_K2”. These arrays will then be used as an input to gradient descent.

Some remarks on the normalization methods:

Normalizer: It is called in the above code by setting “scale_method=0”. The “Normalizer” with standard parameters scales by dividing each row of the input array by its L2-norm. However, its application is different from that of other SciKit-Learn scalers:
It normalizes over all data given in a row. The dimensions beyond the first are NOT interpreted as features which have to be normalized separately – as e.g. the “StandardScaler” does. So, you have to be careful with index handling! This explains the different index-operations for “scale_method = 0” compared to the other cases.

StandardScaler: Called by setting “scale_method=1”. The StandardScaler accepts arrays of samples with columns for features. It scales all features separately: it subtracts the mean of all values of a feature over all samples and afterwards divides by the feature’s standard deviation. It thus centers each value distribution around a mean of zero with a variance of 1. Note, however, that it does not limit all transformed values to the interval [-1, 1].

MinMaxScaler: Called by setting “scale_method=4”. The MinMaxScaler works similarly to the StandardScaler, but subtracts the feature’s minimum and divides by the (max-min)-difference. It therefore does not center the distribution and does not set the variance to 1. However, with its default parameters it limits the transformed values to the interval [0, 1].

Normalizer per sample: Called by setting “scale_method=3”. This applies the Normalizer per sample! I.e., in our case it divides both feature values of one single sample by that sample’s L2-norm. This may at first sound totally meaningless. But we shall see in the next article that it is not meaningless for our special set of 14 input samples.
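
The differences become obvious when you apply the three scalers to a tiny test array (a quick illustration of mine, independent of the perceptron code):

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer

X = np.array([[200.0,  14.0],
              [  1.0, 107.0],
              [160.0,  10.0],
              [ 11.0, 193.0]])

print(StandardScaler().fit_transform(X))  # per feature (column): zero mean, unit variance
print(MinMaxScaler().fit_transform(X))    # per feature (column): mapped to [0, 1]
print(Normalizer().fit_transform(X))      # per sample (row): divided by the row's L2-norm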

Hint: For the rest of this article we shall only work with the StandardScaler.

Input data transformed by the StandardScaler

The following plot shows the input clusters after a transformation with the “StandardScaler”:

You should recognize two things: the centering of the features and the structural consistency of the clusters with the original distribution before scaling!

The cost hyperplane over the {w1, w2}-space after the application of the StandardScaler to our input data

Let us apply the StandardScaler and look at the resulting cost hyperplane. When we set the parameters for a mesh display to

wm1 = np.arange(-5.0,5.0,0.002), wm2 = np.arange(-5.0,5.0,0.002)

we get the following results:

C_min =  0.0006239618496774544
Coordinates:  [[2695 2259]]
Weight values at cost minimum: -0.4820000000004976 0.3899999999994064

Plots for total costs over the {w1, w2}-space from different angles

Plot for individual costs (i=0, i=5) over the {w1, w2}-space

The index “i” refers to our sample-array (see the last article).

Gradient descent after scaling with the “StandardScaler”

Ok, let us now try gradient descent again. We set the following parameters:

w1_start = -0.20, w2_start = 0.25, eta = 0.1, decrease_rate = 0.000001, num_steps = 2000

Results:

Stochastic Descent
          Kt1       Kt2     K1     K2  Tgt       Res       Err
0   1.276259 -0.924692  200.0   14.0  0.3  0.273761  0.087463
1  -1.067616  0.160925    1.0  107.0  0.7  0.640346  0.085220
2   0.805129 -0.971385  160.0   10.0  0.3  0.317122  0.057074
3  -0.949833  1.164828   11.0  193.0  0.7  0.713461  0.019230
4   1.511825 -0.714572  220.0   32.0  0.3  0.267573  0.108090
5  -0.949833  0.989729   11.0  178.0  0.7  0.699278  0.001031
6   0.333998 -1.064771  120.0    2.0  0.3  0.359699  0.198995
7  -0.914498  1.363274   14.0  210.0  0.7  0.725667  0.036666
8   1.217368 -0.948038  195.0   12.0  0.3  0.277602  0.074660
9  -0.902720  0.476104   15.0  134.0  0.7  0.650349  0.070930
10  0.451781 -1.006405  130.0    7.0  0.3  0.351926  0.173086
11 -1.020503  0.861322    5.0  167.0  0.7  0.695876  0.005891
12  1.099585 -0.971385  185.0   10.0  0.3  0.287246  0.042514
13 -0.890942  1.585067   16.0  229.0  0.7  0.740396  0.057709

Batch Descent
          Kt1       Kt2     K1     K2  Tgt       Res       Err
0   1.276259 -0.924692  200.0   14.0  0.3  0.273755  0.087482
1  -1.067616  0.160925    1.0  107.0  0.7  0.640352  0.085212
2   0.805129 -0.971385  160.0   10.0  0.3  0.317118  0.057061
3  -0.949833  1.164828   11.0  193.0  0.7  0.713465  0.019236
4   1.511825 -0.714572  220.0   32.0  0.3  0.267566  0.108113
5  -0.949833  0.989729   11.0  178.0  0.7  0.699283  0.001025
6   0.333998 -1.064771  120.0    2.0  0.3  0.359697  0.198990
7  -0.914498  1.363274   14.0  210.0  0.7  0.725670  0.036672
8   1.217368 -0.948038  195.0   12.0  0.3  0.277597  0.074678
9  -0.902720  0.476104   15.0  134.0  0.7  0.650354  0.070923
10  0.451781 -1.006405  130.0    7.0  0.3  0.351924  0.173080
11 -1.020503  0.861322    5.0  167.0  0.7  0.695881  0.005884
12  1.099585 -0.971385  185.0   10.0  0.3  0.287241  0.042531
13 -0.890942  1.585067   16.0  229.0  0.7  0.740400  0.057714

Total error stoch descent:  0.07275422919538276
Total error batch descent:  0.07275715820661666

The attentive reader has noticed that I extended my code to include the columns with the original (K1, K2)-values into the Pandas dataframe. The code of the new function “predict_batch()” is given below. Do not forget to change the function calls at the end of the gradient descent code, too.

Now we obviously can speak of a result! The calculated (w1, w2)-data are:

Final (w1,w2)-values stoch : ( -0.4816 ,  0.3908 )
Final (w1,w2)-values batch : ( -0.4815 ,  0.3906 )

Yeah, this is pretty close to the values we got via the fine grained mesh analysis of the cost function before! And within the error range!

Changed code for two of our functions in the last article

def predict_batch(num_samples, w1, w2, ay_k_1, ay_k_2, li_K1, li_K2, li_a_tgt):
    shape_res = (num_samples, 7)
    ResData = np.zeros(shape_res)
    rg_idx = range(num_samples)
    err = 0.0
    for idx in rg_idx:
        z_in  = w1 * ay_k_1[idx] + w2 * ay_k_2[idx] 
        a_out = expit(z_in)
        a_tgt = li_a_tgt[idx]
        err_idx = np.absolute(a_out - a_tgt) / a_tgt 
        err += err_idx
        ResData[idx][0] = ay_k_1[idx] 
        ResData[idx][1] = ay_k_2[idx] 
        ResData[idx][2] = li_K1[idx] 
        ResData[idx][3] = li_K2[idx] 
        ResData[idx][4] = a_tgt
        ResData[idx][5] = a_out
        ResData[idx][6] = err_idx
    err /= float(num_samples)
    return err, ResData    

def create_df(ResData):
    ''' ResData: Array with result values K1, K2, Tgt, A, rel.err 
    '''
    cols=["Kt1", "Kt2", "K1", "K2", "Tgt", "Res", "Err"]
    df = pd.DataFrame(ResData, columns=cols)
    return df    

 

What does the epoch evolution look like after the application of the StandardScaler?

Let us plot the evolution for the stochastic gradient descent:

Cost and weight evolution during stochastic gradient descent

Ok, we see that despite convergence the differences in the costs for the different samples cannot be eliminated. It should be clear to the reader why, and that this was to be expected.

We also see that the total costs (calculated from the individual costs) seemingly converge much faster than the weight values! Our gradient descent path obviously follows a big slope into a rather flat valley first (see the plot of the total costs above). Afterwards there is a small gradient sideways and down into the real minimum – and it obviously takes some epochs to get there. We also understand that we have to keep up a significant “learning rate” to follow the gradient in the flat valley. In addition the following rule seems to be appropriate sometimes:

We must not only watch the cost evolution but also the weight evolution – to avoid stopping gradient descent too early!
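
A minimal sketch of such a combined criterion – with illustrative tolerances of mine – could operate on the lists li_c_tot_ba, li_w1_ba and li_w2_ba which we record during batch descent:

def converged(li_c, li_w1, li_w2, window=50, tol_c=1.e-7, tol_w=1.e-5):
    # stop only if BOTH the costs AND the weights have settled over the
    # last "window" epochs; the tolerances are illustrative values
    if len(li_c) < 2 * window:
        return False
    dc  = abs(li_c[-1]  - li_c[-window])
    dw1 = abs(li_w1[-1] - li_w1[-window])
    dw2 = abs(li_w2[-1] - li_w2[-window])
    return (dc < tol_c) and (dw1 < tol_w) and (dw2 < tol_w)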

We shall keep this in mind for experiments with real multi-layer “Artificial Neural Networks” later on!

And how does the gradient descent based on the full “batch” of 14 samples look like?

Cost and weight evolution during batch gradient descent

A smooth beauty!

Contour plot for separation curves in the {K1, K2}-plane

We add the following code to our Jupyter notebook:

# ***********
# Contours 
# ***********

from matplotlib import ticker, cm

# Take w1/w2-vals from above w1f, w2f
w1_len = len(li_w1_ba)
w2_len = len(li_w2_ba)
w1f = li_w1_ba[w1_len -1]
w2f = li_w2_ba[w2_len -1]

def A_mesh(w1,w2, Km1, Km2):
    kshape = Km1.shape
    A = np.zeros(kshape) 
    
    Km1V = Km1.reshape(kshape[0]*kshape[1], )
    Km2V = Km2.reshape(kshape[0]*kshape[1], )
    # print("km1V.shape = ", Km1V.shape, "\nkm1V.shape = ", Km2V.shape )
    
    KmV = np.column_stack((Km1V, Km2V))
    
    # scaling trafo
    KmT = scaler.transform(KmV)
    
    Km1T, Km2T = KmT.T
    Km1TR = Km1T.reshape(kshape)
    Km2TR = Km2T.reshape(kshape)
    #print("km1TR.shape = ", Km1TR.shape, "\nkm2TR.shape = ", Km2TR.shape )
    
    rg_idx = range(num_samples)
    Z      = w1 * Km1TR + w2 * Km2TR
    A = expit(Z)
    return A

#Build K1/K2-mesh 
minK1, maxK1 = li_K1.min()-20, li_K1.max()+20 
minK2, maxK2 = li_K2.min()-20, li_K2.max()+20
resolution = 0.1
Km1, Km2 = np.meshgrid( np.arange(minK1, maxK1, resolution), 
                        np.arange(minK2, maxK2, resolution))

A = A_mesh(w1f, w2f, Km1, Km2 )

fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 14
fig_size[1] = 11
fig, ax = plt.subplots()
cmap=cm.PuBu_r
cmap=cm.RdYlBu
#cs = plt.contourf(X, Y, Z1, levels=25, alpha=1.0, cmap=cm.PuBu_r)
cs = ax.contourf(Km1, Km2, A, levels=25, alpha=1.0, cmap=cmap)
cbar = fig.colorbar(cs)
N = 14
r0 = 0.6
x = li_K1
y = li_K2
area = 6*np.sqrt(x ** 2 + y ** 2)  # 0 to 10 point radii
c = np.sqrt(area)
r = np.sqrt(x ** 2 + y ** 2)
area1 = np.ma.masked_where(x < 100, area)
area2 = np.ma.masked_where(x >= 100, area)
ax.scatter(x, y, s=area1, marker='^', c=c)
ax.scatter(x, y, s=area2, marker='o', c=c)
# Show the boundary between the regions:
ax.set_xlabel("K1", fontsize=16)
ax.set_ylabel("K2", fontsize=16)

 

This code enables us to plot contours of the predicted output values of our solitary neuron, i.e. A-values, on a mesh of the original {K1, K2}-plane. As our classification was done on transformed input data, the following hint should be obvious:

Important hint: Of course you have to apply your scaling method to all the new input data created by the mesh-function! This is done in the above code in the “A_mesh()”-function with the following lines:

    # scaling trafo
    if (scale_method == 3): 
        KmT = scaler.fit_transform(KmV)
    else: 
        KmT = scaler.transform(KmV)

We can directly apply the StandardScaler to our new data via its method transform(); the scaler will use the parameters it found during its first “scaler.fit_transform()”-operation on our input samples. However, we cannot do it this way when using the Normalizer for each individual new data sample via “scale_method = 3”. I shall come back to this point in a later article.

The careful reader also sees that our code will, for the time being, not work for scale_method=0, scale_method=2 and scale_method=5. Reason: I was too lazy to write a class or code suitable for these normalizing operations. I shall correct this when we need it.

But at least I added our input samples via scatter plotting to the final output. The result is:

The deviations from our target values are to be expected. With a given pair of (w1, w2)-values we cannot do much better with a single neuron and a linear weight impact on the input data.

But we see: If we set up a criterion like:

  • A > 0.5 => sample belongs to the left cluster,
  • A ≤ 0.5 => sample belongs to the right cluster

we would have a relatively good classifier available – based on one neuron only!
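
With the trained weights w1f, w2f and the already fitted “scaler” from the code above, such a criterion is quickly coded (a small convenience sketch of mine):

import numpy as np
from scipy.special import expit

def classify(k1, k2, w1, w2, scaler):
    # apply the same standardization as for the training samples, then predict
    kt1, kt2 = scaler.transform(np.array([[k1, k2]]))[0]
    a = expit(w1 * kt1 + w2 * kt2)
    return "left cluster" if a > 0.5 else "right cluster"

print(classify(190.0,  20.0, w1f, w2f, scaler))   # big K1 => A close to 0.3 => right cluster
print(classify( 10.0, 180.0, w1f, w2f, scaler))   # big K2 => A close to 0.7 => left cluster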

Intermediate Conclusion

In this article I have shown that the “standardization” of input data, which are fed into a perceptron ahead of a gradient descent calculation, helps to circumvent problems with the saturation of the sigmoid function at the computing neuron following the input layer. We achieved this by applying the “StandardScaler” of Scikit-Learn. We got a smooth development of both the cost function and the weight parameters during gradient descent in the transformed data space.

We also learned another important thing:

An apparent convergence of the cost function in the vicinity of a minimum value does not always mean that we have already reached the global minimum. The evolution of the weight parameters may not yet have come to an end! Therefore, it is important to watch both the evolution of the costs AND the evolution of the weights during gradient descent. A too fast decline of the learning rate may not be good either under certain conditions.

In the next article

A single neuron perceptron with sigmoid activation function – III – two ways of applying Normalizer

we shall look at two other normalization methods for our simplistic scenario. One of them will give us an even better classifier.

Stay tuned and remain healthy …

And Mr Trump:
One neuron can obviously learn something about the difference between big and small numbers. This leads me to two questions, which you as a “natural talent” on epidemics can certainly answer: How many neurons are necessary to understand something about an exponential epidemic development? And why did it take so much time to activate them?

 

A single neuron perceptron with sigmoid activation function – I – failure of gradient descent due to saturation

Readers who follow my series on a Python program for a “Multilayer Perceptron” [MLP] have noticed that I emphasized the importance of a normalization of input data in my last article:

A simple Python program for an ANN to cover the MNIST dataset – XII – accuracy evolution, learning rate, normalization

Our MLP “learned” by searching for the global minimum of the loss function via the “gradient descent” method. Normalization seemed to support a smooth path to convergence whilst our algorithm moved in the direction of a global minimum on the loss or cost function’s hyperplane over the weight parameter space – at least in our case, where we used the sigmoid function as the activation function of the artificial neurons. I indicated that the reason for our observations had to do with properties of this function – especially for neurons in the first hidden layer of an MLP.

In case of the 4-layer MLP, which I used on the MNIST dataset, I applied a special form of normalization namely “standardization“. I did this by using the StandardScaler of SciKit-Learn. See the following link for a description: Sklearn preprocessing StandardScaler

We saw the smoothing and helpful impact of normalization on the general convergence of gradient descent with the help of a few numerical experiments. The interaction of the normalization of 784 features with mini-batches and with a complicated 4-layer MLP structure, which requires the determination of several hundreds of thousands of weight values, is however difficult to grasp or analyze. One understands that there is a basic relation to the properties of the activation function, but the sheer number of dimensions of the feature and weight spaces and the statistics involved make a thorough understanding difficult.

Since then I have thought a bit about how to set up a really small and comprehensible experiment which makes the positive impact of normalization visible in a direct and visual form. I have come up with the following most simple scenario, namely the training of a simple perceptron with only one computing neuron and two additional “stupid” neurons in an input layer which just feed input data to our computing neuron:

Input values K1 and K2 are multiplied by weights w1 and w2 and added by the central solitary “neuron”. We use the sigmoid function as the “activation function” of this neuron – well anticipating that the properties of this function may lead to trouble.

The perceptron has only one task: Discriminate between two different types of input data by assigning them two distinct output values.

  • For K1 > 100 and K2 < 25 we want an output of A=0.3.
  • For K1 < 25 and K2 > 100 we, instead, want an output of A=0.7

We shall feed the perceptron only 14 different pairs of input values K1[i], K2[i] (i =0,1,..13), e.g. in form of lists:

li_K1 = [200.0,   1.0, 160.0,  11.0, 220.0,  11.0, 120.0,  14.0, 195.0,  15.0, 130.0,   5.0, 185.0,  16.0]
li_K2 = [ 14.0, 107.0,  10.0, 193.0,  32.0, 178.0,   2.0, 210.0,  12.0, 134.0,   7.0, 167.0,  10.0, 229.0]

(The careful reader detects one dirty example, li_K2[4] = 32 (> 25), in contrast to our setting – just to see how much of an impact such a deviation has …)

We call each pair (K1, K2) = (li_K1[i], li_K2[i]) for a given “i” a “sample“. Each sample contains values for two “features“: K1 and K2. So, our solitary computing neuron has to solve a typical classification problem – it shall distinguish between two groups of input samples. In a way it must learn the difference between small and big numbers for two situations appearing at its input channels.

Off topic: This morning I listened to a series of comments Mr. Trump made during the last weeks on the development of the Corona virus crisis in the USA. Afterwards, I decided to dedicate this mini-series of articles on a perceptron to him – a person who claims to “maybe” be a “natural talent” on complicated things like epidemic developments. Enjoy (?) his own words via an audio compilation in the following news article:
https://www.faz.net/aktuell/politik/trumps-praesidentschaft/usa-zehn-wochen-corona-in-den-worten-von-trump-16708603.html

Two well separable clusters

In the 2-dim feature space {K1, K2} we have just two clusters of input data:

Each cluster has a long diameter along one of the feature axes – but overall the clusters are well separable. A rather good separation surface would e.g. be a diagonal line.

For a given input sample with K-values K1 and K2 we define the output of our computing neuron to be

A(w1, w2) = expit( w1*K1 + w2*K2 ) ,

where expit() symbolizes the sigmoid function. See the next section for more details.

Corresponding target-values for the output A are (with at1 = 0.3 and at2 = 0.7):

li_a_tgt = [at1,  at2,  at1,  at2,  at1,  at2,  at1,  at2,  at1,  at2,  at1,  at2,   at1,  at2]

With the help of these target values our poor neuron shall learn from the 14 input samples to which cluster a given future sample probably belongs. We shall use the “gradient descent” method to train the neuron for this classification task. To solve the task our neuron must find a reasonable separation surface – a line – in the {K1, K2}-plane; but it got the additional task to associate two distinct output values “A” with the two clusters:

A=0.3 for data points with a large K1-value and A=0.7 for data points with a small K1-value.

So, the separation surface has to fulfill some side conditions.

Readers with a background in Machine Learning and MLPs will now ask: Why did you pick the special values 0.3 and 0.7? A good question – I will come back to it during our experiments. Another, even more critical question could be: What about a bias neuron in the input layer? Don’t we need it? Again, a very good question! A bias neuron allows for a shift of a separation surface in the feature space. But due to the almost symmetrical nature of our input data (see the positions and orientations of the clusters!) and of the target values, the impact of a bias neuron on the end result would probably only be small – we shall come back to the topic of a bias neuron in a later article. You are free to extend the codes given below to account for a bias neuron in the input layer; you will notice a significant impact if you change either the relative symmetry of the input or of the output data. But let’s keep things simple for the next hours …

The sigmoid function and its saturation for big arguments

You can import the sigmoid function under the name “expit” from the “scipy” library into your Python code. The sigmoid function is a smooth one – but it quickly saturates for big negative or positive values:

So, output values get almost indistinguishable if the absolute values of the arguments are bigger than 15.

What is interesting about our input data? What is the relation to MNIST data?

The special situation about the features in our example is the following: For one and the same feature we have a big number and a small number to work with – depending on the sample. Which feature value – K1 or K2 – is big depends on the sample.

This is something that also happens with the input “features” (i.e. pixels) coming from a MNIST-image:
For a certain feature (= a pixel at a defined position in the 28×28 picture) in a given sample (= a distinct image) we may get a big number like 255. For another feature (= pixel) the situation may be totally different and we may find a value of 0. In another sample (= picture) we may get the reverse situation for the same two pixels.

What happens in such a situation at a specific neuron in the first hidden layer of an MLP when we start gradient descent with a statistical distribution of weight values? If we are unlucky then the initial statistical weight constellation for a sample may lead to a big total input at our selected hidden neuron – with the effect of a very small gradient at this node, due to the saturation of the sigmoid function.

To give you a feeling: Assume that you have statistical weights in the range of [-0.025, 0.025]. Assume further that only 4 pixels of a MNIST picture with a value of 200 contribute with a local maximum weight of 0.025; then we already get an input at our node of 4*0.025*200 = 20. The gradient of expit(20) has a value of 2e-9. Even if we multiply by the required factor of 200 for a weight correction at one of the contributing input nodes we would only arrive at 4e-7. Quite hopeless. Of course, the situation is not that bad for all weights and image-samples, but you get an idea about the basic problem …
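
You can verify these numbers directly with expit() from scipy (a quick check of mine):

from scipy.special import expit

z = 4 * 0.025 * 200                  # 4 pixels of value 200, maximum weight 0.025 => z = 20
grad = expit(z) * (1.0 - expit(z))   # derivative of the sigmoid at z
print(grad)                          # ≈ 2.06e-09
print(grad * 200)                    # ≈ 4.1e-07 - still hopelessly small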

Our simple scenario breaks the MNIST situation down to just two features and just one working neuron – and therefore makes the correction situation for gradient descent really extreme – but interesting, too. And we can analyze the situation far better than for a MLP because we deal with an only 2-dimensional feature-space {K1, K2} and a 2-dimensional weight-space {w1, w2}.

A simple quadratic cost function for our neuron

For given values w1 and w2, i.e. a tuple (w1, w2), we define a quadratic cost or loss function C_sgl for one single input sample (K1, K2) as follows:

C_sgl = 0.5 * ( li_a_tgt[i] - expit(z_i) )**2,   with   z_i = li_K1[i]*w1 + li_K2[i]*w2

The total cost-function for the batch of all 14 samples is just the sum of all these terms for the individual samples.

Existence of a solution for our posed problem?

From all we theoretically know about the properties of a simple perceptron it should be able to find a reasonable solution! But how do we know that a reasonable solution for a (w1, w2)-pair does exist at all? One line of reasoning goes as follows:

For two samples – each a member of either cluster – you can plot the hyperplanes of the related outputs “A(K1, K2) = expit(w1*K1 + w2*K2)” over the (w1, w2)-space. These hyperplanes are almost orthogonal to each other. If you project the curves of a cut with the A=0.3-plane and the A=0.7-plane down to the (w1, w2)-plane at the center, you get straight lines – almost orthogonally oriented against each other. So, two such specific lines cut each other in a well defined point – somewhere off the center. As the expit()-function is a relatively steep one for our big input values, the crossing point is relatively close to the center. If we choose other samples we get slightly different lines and different crossing points of the projected lines – but not too far from each other.
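
The cut of the output surface for a single sample (K1, K2) with a plane A = a_tgt is given by w1*K1 + w2*K2 = logit(a_tgt) – indeed a straight line in the (w1, w2)-plane. A quick verification of mine for the first two samples:

import numpy as np
from scipy.special import logit

# lines w1*K1 + w2*K2 = logit(a_tgt) for sample 0 (target 0.3) and sample 1 (target 0.7)
M = np.array([[200.0,  14.0],
              [  1.0, 107.0]])
b = np.array([logit(0.3), logit(0.7)])
print(np.linalg.solve(M, b))   # ≈ [-0.0048  0.0080] - a crossing point close to the center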

The next plot shows the expit()-functions close to the center of the (w1, w2)-plane for two specific samples of either cluster. We have in addition displayed the surfaces for A=0.7 and A=0.3.

The following plot shows the projections of the cuts of the surfaces for 7 samples of each cluster with the A=0.3-plane and the A=0.7, respectively.

The area of crossings is not too big on the chosen scale of w1, w2. Looking at the graphics we would expect an optimal point around (w1=-0.005, w2=+0.005) – for the original, unnormalized input data.

By the way: There is also a solution for at1=0.3 and at2=0.3, but a strange one. Such a setup would not allow for discrimination. We expect a rather strange behavior then. A first guess could be: The resulting separation curve in the (K1, K2)-plane would move out of the area between the two clusters.

Code for a mesh based display of the costs over the weight-parameter space

Below you find the code suited for a Jupyter cell to get a mesh display of the cost values

import numpy as np
import random
import math 
import sys

from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import Normalizer 
from sklearn.preprocessing import MinMaxScaler
from scipy.special import expit  

import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import matplotlib.patches as mpat 
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import Axes3D

# total cost functions for overview 
# *********************************
def costs_mesh(num_samples, W1, W2, li_K1, li_K2, li_a_tgt):
    zshape = W1.shape
    li_Z_sgl = []
    li_C_sgl = []
    C = np.zeros(zshape) 
    
    rg_idx = range(num_samples)
    for idx in rg_idx:
        Z_idx      = W1 * li_K1[idx] + W2 * li_K2[idx]
        A_tgt_idx  = li_a_tgt[idx] * np.ones(zshape) 
        C_idx = 0.5 * ( A_tgt_idx - expit(Z_idx) )**2
        li_C_sgl.append( C_idx )  
        C += C_idx 
    
    C /= float(num_samples)
    return C, li_C_sgl


# ******************
# Simple Perceptron  
#*******************

# input at 2 nodes => 2 features K1 and K2 => there will be just one output neuron 
li_K1 = [200.0,   1.0, 160.0,  11.0, 220.0,  11.0, 120.0,  14.0, 195.0,  15.0, 130.0,   5.0, 185.0,  16.0  ]
li_K2 = [ 14.0, 107.0,  10.0, 193.0,  32.0, 178.0,   2.0, 210.0,  12.0, 134.0,  7.0, 167.0,  10.0, 229.0 ]

# target values 
at1 = 0.3; at2 = 0.7
li_a_tgt  = [at1,  at2,  at1,  at2,   at1,   at2,  at1,   at2,  at1,  at2,  at1,  at2,   at1,  at2 ]

# Change to np floats 
li_K1 = np.array(li_K1)
li_K2 = np.array(li_K2)
li_a_tgt = np.array(li_a_tgt)

num_samples = len(li_K1)


# Get overview over costs on mesh
# *******************************
# Mesh of weight values  
wm1 = np.arange(-0.2,0.4,0.002)
wm2 = np.arange(-0.2,0.2,0.002)
W1, W2 = np.meshgrid(wm1, wm2) 
# costs 
C, li_C_sgl  = costs_mesh(num_samples=num_samples, W1=W1, W2=W2, \
                          li_K1=li_K1, li_K2=li_K2, li_a_tgt = li_a_tgt)


# Mesh-Plots
# ********
fig_size = plt.rcParams["figure.figsize"]
print(fig_size)
fig_size[0] = 19
fig_size[1] = 19

fig1 = plt.figure(1)
fig2 = plt.figure(2)

ax1 = fig1.gca(projection='3d')
ax1.get_proj = lambda: np.dot(Axes3D.get_proj(ax1), np.diag([1.0, 1.0, 1, 1]))
ax1.view_init(15,148)
ax1.set_xlabel('w1', fontsize=16)
ax1.set_ylabel('w2', fontsize=16)
ax1.set_zlabel('single costs', fontsize=16)

#ax1.plot_wireframe(W1, W2, li_C_sgl[0], colors=('blue'))
#ax1.plot_wireframe(W1, W2, li_C_sgl[1], colors=('orange'))
ax1.plot_wireframe(W1, W2, li_C_sgl[6], colors=('orange'))
ax1.plot_wireframe(W1, W2, li_C_sgl[5], colors=('green'))
#ax1.plot_wireframe(W1, W2, li_C_sgl[9], colors=('orange'))
#ax1.plot_wireframe(W1, W2, li_C_sgl[6], colors=('magenta'))

ax2 = fig2.gca(projection='3d')
ax2.get_proj = lambda: np.dot(Axes3D.get_proj(ax2), np.diag([1.0, 1.0, 1, 1]))
ax2.view_init(15,140)
ax2.set_xlabel('w1', fontsize=16)
ax2.set_ylabel('w2', fontsize=16)
ax2.set_zlabel('Total costs', fontsize=16)
ax2.plot_wireframe(W1, W2, 1.2*C, colors=('green'))

 

The cost landscape for individual samples without normalization

Gradient descent tries to find a minimum of a cost function by varying the weight values systematically along the direction of the negative cost gradient. To get an overview of the cost hyperplane over the 2-dim (w1, w2)-space we make some plots. Let us first plot the individual costs for the input samples i=0 and i=5.

Actually the cost functions for the different samples do show some systematic, though small differences. Try it out yourself … Here is the plot for samples 1,5,9 (counted from 0!).

You see different orientation angles in the (w1, w2)-plane?

Total cost landscape without normalization

Now let us look at the total costs; to arrive at a comparable level with the other plots I divided the sum by 14:

All of the above cost plots look like big trouble for both the “stochastic gradient descent” and the “batch gradient descent” methods for “Machine Learning” [ML]:

We clearly see the effect of the sigmoid saturation. We get almost flat areas beyond certain relatively small w1- and w2-values (|w1| > 0.02, |w2| > 0.02). The gradients in these areas are very, very close to zero. So, with a starting point like (w1=0.3, w2=0.2) our gradient descent would get stuck – due to the big input values of at least one feature.
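
A quick check of mine for the starting point (w1=0.3, w2=0.2) and the first sample shows how hopeless the situation is:

from scipy.special import expit

w1, w2 = 0.3, 0.2
k1, k2 = 200.0, 14.0                  # first sample
z = w1 * k1 + w2 * k2                 # = 62.8 - deep in the saturation region
d = expit(z) * (1.0 - expit(z))       # sigmoid derivative at z
print(d, d * k1)                      # prints 0.0 0.0 - the true value (~5e-28) is below double precision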

In the center of the {w1, w2}-plane, however, we detect a steep slope to a global minimum.

But how to get there? Let us say, we start with w1=0.04, w2=0.04. The learning-rate “η” is used to correct the weight values by

w1 = w1 - η*grad_w1
w2 = w2 - η*grad_w2

where grad_w1 and grad_w2 describe the relevant components of the cost-function’s gradient.

In the beginning you would need a big “η” to get closer to the center, due to the small gradient values. However, if you choose it too big, you may pass over the tiny area of the minimum and just hop to an area of a different cost level – with again a gradient close to zero. But you cannot decrease the learning rate quickly as a remedy either – you would then get stuck again before reaching the minimum.

A view at the center of the loss function hyperplane

Let us take a closer look at the center of our disastrous total cost function. We can get there by reducing our mesh to a region defined by “-0.05 < w1, w2 < 0.05”. We get:

This actually looks much better – on such a surface we could probably work with gradient descent. There is a clear minimum visible – and on this scale of the (w1, w2)-values we also recognize reasonable paths into this minimum. An analysis of the mesh data to get values for the minimum is possible by the following statements:

print("min =", C.min())
pt_min = np.argwhere(C==C.min())
w1=W1[pt_min[0][0]][pt_min[0][1]]  
w2=W2[pt_min[0][0]][pt_min[0][1]]  
print("w1 = ", w1)
print("w2 = ", w2)

The result is:

min = 0.0006446277000906343
w1 = -0.004999999999999963
w2 = 0.005000000000000046

But to approach this minimum by a smooth gradient descent we would have had to know in advance at what tiny (w1, w2)-values to start gradient descent – and to choose our η suitably small in addition. This is easy in our most simplistic one-neuron case, but you can almost never fulfill the first condition when dealing with real artificial neural networks for complex scenarios.

And a naive gradient descent with a standard choice of a (w1, w2)-starting point would have led us nowhere in our one-neuron case – as we shall see in a minute …

Let us keep one question in mind for a while: Is there a chance that we could get the hyperplane surface to look similar to the one at the center – but for much bigger weight values?

Some Python code for gradient descent for our one neuron scenario

Here are some useful functions, which we shall use later on to perform a gradient descent:

 
# ****************************************
# Functions for stochastic GRADIENT DESCENT  
# *****************************************
import random
import pandas as pd

# derivative of expit 
def d_expit(z): 
    exz = expit(z)
    dex = exz * (1.0 - exz)
    return dex


# single costs for stochastic descent 
# ************************************
def dcost_sgl(w1, w2, idx, li_K1, li_K2, li_a_tgt):
    z_in  = w1 * li_K1[idx] + w2 * li_K2[idx] 
    a_tgt = li_a_tgt[idx] 
    c = 0.5 * ( a_tgt - expit(z_in))**2
    return c

# Gradients
# *********
def grad_sgl(w1, w2, idx, li_K1, li_K2, li_a_tgt):
    z_in  = w1 * li_K1[idx] + w2 * li_K2[idx] 
    a_tgt = li_a_tgt[idx] 
    gradw1 = 0.5 * 2.0 * (a_tgt - expit(z_in)) * (-d_expit(z_in)) * li_K1[idx]
    gradw2 = 0.5 * 2.0 * (a_tgt - expit(z_in)) * (-d_expit(z_in)) * li_K2[idx]
    grad = (gradw1, gradw2)
    return grad

def grad_tot(num_samples, w1, w2, li_K1, li_K2, li_a_tgt):
    gradw1 = 0 
    gradw2 = 0 
    rg_idx = range(num_samples)
    for idx in rg_idx:
        z_in  = w1 * li_K1[idx] + w2 * li_K2[idx] 
        a_tgt = li_a_tgt[idx] 
        gradw1_idx = 0.5 * 2.0 * (a_tgt - expit(z_in)) * (-d_expit(z_in)) * li_K1[idx]
        gradw2_idx = 0.5 * 2.0 * (a_tgt - expit(z_in)) * (-d_expit(z_in)) * li_K2[idx]
        gradw1 += gradw1_idx
        gradw2 += gradw2_idx
    #gradw1 /= float(num_samples) 
    #gradw2 /= float(num_samples) 
    grad = (gradw1, gradw2)
    return grad


# total costs at given point 
# ************************************
def dcost_tot(num_samples, w1, w2, li_K1, li_K2, li_a_tgt):
    c_tot  = 0
    rg_idx = range(num_samples)
    for idx in rg_idx:
        c_idx = dcost_sgl(w1, w2, idx, li_K1, li_K2, li_a_tgt) 
        c_tot += c_idx
    # average over all samples - consistent with costs_mesh() above
    c_tot = 1.0/num_samples * c_tot
    return c_tot

# Prediction function 
# ********************
def predict_batch(num_samples, w1, w2,ay_k_1, ay_k_2, li_a_tgt):
    shape_res = (num_samples, 5)
    ResData = np.zeros(shape_res)  
    rg_idx = range(num_samples)
    err = 0.0
    for idx in rg_idx:
        z_in  = w1 * ay_k_1[idx] + w2 * ay_k_2[idx] 
        a_out = expit(z_in)
        a_tgt = li_a_tgt[idx]
        err_idx = np.absolute(a_out - a_tgt) / a_tgt 
        err += err_idx
        ResData[idx][0] = ay_k_1[idx] 
        ResData[idx][1] = ay_k_2[idx] 
        ResData[idx][2] = a_tgt
        ResData[idx][3] = a_out
        ResData[idx][4] = err_idx
    err /= float(num_samples)
    return err, ResData    


def predict_sgl(k1, k2, w1, w2):
    z_in  = w1 * k1 + w2 * k2 
    a_out = expit(z_in)
    return a_out

def create_df(ResData):
    ''' ResData: Array with result values K1, K2, Tgt, A, rel.err 
    '''
    cols=["K1", "K2", "Tgt", "Res", "Err"]
    df = pd.DataFrame(ResData, columns=cols)
    return df    

 

With these functions a quick and dirty “gradient descent” can be achieved by the following code:

 
# **********************************
# Quick and dirty Gradient Descent  
# **********************************
b_scale_2 = False
if b_scale_2:
    ay_k_1 = ay_K1
    ay_k_2 = ay_K2
else: 
    ay_k_1 = li_K1
    ay_k_2 = li_K2

li_w1_st = []
li_w2_st = []
li_c_sgl_st = []
li_c_tot_st = []

li_w1_ba = []
li_w2_ba = []
li_c_sgl_ba = []
li_c_tot_ba = []

idxc = 2    
    
# Starting point
#***************
w1_start = -0.04
w2_start = -0.0455
#w1_start = 0.5
#w2_start = -0.5

# learn rate 
# **********
eta = 0.0001
decrease_rate = 0.000000001
num_steps = 2500 

# norm = 1
#eta = 0.75
#decrease_rate = 0.000000001
#num_steps = 100 

# Gradient descent loop
# *********************
rg_j = range(num_steps) 
rg_i = range(num_samples) 
w1d_st = w1_start
w2d_st = w2_start 
w1d_ba = w1_start
w2d_ba = w2_start 

for j in rg_j:
    eta = eta / (1.0 + float(j) * decrease_rate)
    gradw1 = 0
    gradw2 = 0
    # loop over samples and individ. corrs 
    ns = num_samples
    rg = range(ns)
    rg_idx = random.sample(rg, num_samples)
    #print("\n")
    for idx in rg_idx:  
        #print("idx = ", idx) 
        grad_st = grad_sgl(w1d_st, w2d_st, idx, ay_k_1, ay_k_2, li_a_tgt) 
        gradw1_st = grad_st[0]
        gradw2_st = grad_st[1]
        w1d_st -= gradw1_st * eta
        w2d_st -= gradw2_st * eta
        li_w1_st.append(w1d_st)
        li_w2_st.append(w2d_st)
        

        # costs for special sample 
        cd_sgl_st = dcost_sgl(w1d_st, w2d_st, idx, ay_k_1, ay_k_2, li_a_tgt) 
        li_c_sgl_st.append(cd_sgl_st)

        # total costs for special sample 
        cd_tot_st = dcost_tot(num_samples, w1d_st, w2d_st, ay_k_1, ay_k_2, li_a_tgt) 
        li_c_tot_st.append(cd_tot_st)
        #print("j:", j, " li_c_tot[j] = ", li_c_tot[j] )            

    # work with total costs and total gradient 
    grad_ba = grad_tot(num_samples, w1d_ba, w2d_ba, ay_k_1, ay_k_2, li_a_tgt)
    gradw1_ba = grad_ba[0]
    gradw2_ba = grad_ba[1]
    w1d_ba -= gradw1_ba * eta
    w2d_ba -= gradw2_ba * eta
    li_w1_ba.append(w1d_ba)
    li_w2_ba.append(w2d_ba)
    co_ba = dcost_tot(num_samples, w1d_ba, w2d_ba, ay_k_1, ay_k_2, li_a_tgt)    
    li_c_tot_ba.append(co_ba) 

    
# Printed Output
# ***************
num_end = len(li_w1_st)    
err_sgl, ResData_sgl = predict_batch(num_samples, li_w1_st[num_end-1], li_w2_st[num_end-1], ay_k_1, ay_k_2, li_a_tgt)
err_ba,  ResData_ba = predict_batch(num_samples, li_w1_ba[num_steps-1], li_w2_ba[num_steps-1], ay_k_1, ay_k_2, li_a_tgt)
df_sgl = create_df(ResData_sgl)
df_ba  = create_df(ResData_ba)
print("\n", df_sgl)
print("\n", df_ba)
print("\nTotal error stoch descent: ", err_sgl )            
print("Total error batch descent: ", err_ba )  

# Styled Pandas Output 
# *******************
df_ba

 

Those readers who followed my series on a Multilayer Perceptron should have no difficulties understanding the code: I used two methods in parallel – one for a “stochastic descent” and one for a “batch descent“:

  • During “stochastic descent” we correct the weights by a stepwise application of the cost-gradients of single samples. (We shuffle the order of the samples statistically during epochs to avoid cyclic effects.) This is done for all samples during an epoch.
  • During “batch descent” we apply the gradient of the total costs of all samples once during each epoch.

And here is also some code to perform some plotting after training runs:

 
# Plots for Single neuron Gradient Descent        
# ****************************************
#sizing
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 14
fig_size[1] = 5

fig1 = plt.figure(1)
fig2 = plt.figure(2)
fig3 = plt.figure(3)
fig4 = plt.figure(4)

ax1_1 = fig1.add_subplot(121)
ax1_2 = fig1.add_subplot(122)

ax1_1.plot(range(len(li_c_sgl_st)), li_c_sgl_st)
#ax1_1.set_xlim (0, num_tot+5)
#ax1_1.set_ylim (0, 0.4)
ax1_1.set_xlabel("epochs * batches (" + str(num_steps) + " * " + str(num_samples) + " )")
ax1_1.set_ylabel("sgl costs")

ax1_2.plot(range(len(li_c_tot_st)), li_c_tot_st)
#ax1_2.set_xlim (0, num_tot+5)
#ax1_2.set_ylim (0, y_max_err)
ax1_2.set_xlabel("epochs * batches (" + str(num_steps) + " * " + str(num_samples) + " )")
ax1_2.set_ylabel("total costs ")

ax2_1 = fig2.add_subplot(121)
ax2_2 = fig2.add_subplot(122)

ax2_1.plot(range(len(li_w1_st)), li_w1_st)
#ax1_1.set_xlim (0, num_tot+5)
#ax1_1.set_ylim (0, y_max_costs)
ax2_1.set_xlabel("epochs * batches (" + str(num_steps) + " * " + str(num_samples) + " )")
ax2_1.set_ylabel("w1")

ax2_2.plot(range(len(li_w2_st)), li_w2_st)
#ax1_2.set_xlim (0, num_to_t+5)
#ax1_2.set_ylim (0, y_max_err)
ax2_2.set_xlabel("epochs * batches (" + str(num_steps) + " * " + str(num_samples) + " )")
ax2_2.set_ylabel("w2")

ax3_1 = fig3.add_subplot(121)
ax3_2 = fig3.add_subplot(122)

ax3_1.plot(range(len(li_c_tot_ba)), li_c_tot_ba)
#ax3_1.set_xlim (0, num_tot+5)
#ax3_1.set_ylim (0, 0.4)
ax3_1.set_xlabel("epochs (" + str(
num_steps) + " )")
ax3_1.set_ylabel("batch costs")

ax4_1 = fig4.add_subplot(121)
ax4_2 = fig4.add_subplot(122)

ax4_1.plot(range(len(li_w1_ba)), li_w1_ba)
#ax4_1.set_xlim (0, num_tot+5)
#ax4_1.set_ylim (0, y_max_costs)
ax4_1.set_xlabel("epochs (" + str(num_steps) + " )")
ax4_1.set_ylabel("w1")

ax4_2.plot(range(len(li_w2_ba)), li_w2_ba)
#ax4_2.set_xlim (0, num_to_t+5)
#ax4_2.set_ylim (0, y_max_err)
ax4_2.set_xlabel("epochs (" + str(num_steps) + " )")
ax4_2.set_ylabel("w2")

 

You can put these codes into suitable cells of a Jupyter environment and start doing experiments on your PC.

Frustration WITHOUT data normalization …

Let us get the worst behind us:
Let us use un-normalized input data, set a standard starting point for the weights and try a gradient descent over many epochs.
Well, what are standard initial weight values? We can follow LeCun’s advice for bigger networks: a uniform distribution between -sqrt(1/2) and +sqrt(1/2) ≈ 0.7 should be helpful. Well, we take such values. The parameters of our trial run are:

w1_start = -0.1, w2_start = 0.1, eta = 0.01, decrease_rate = 0.000000001, num_steps = 12500

You, unfortunately, get nowhere:

        K1     K2  Tgt           Res       Err
0   200.0   14.0  0.3  3.124346e-15  1.000000
1     1.0  107.0  0.7  9.999996e-01  0.428571
2   160.0   10.0  0.3  2.104822e-12  1.000000
3    11.0  193.0  0.7  1.000000e+00  0.428571
4   220.0   32.0  0.3  1.117954e-15  1.000000
5    11.0  178.0  0.7  1.000000e+00  0.428571
6   120.0    2.0  0.3  8.122661e-10  1.000000
7    14.0  210.0  0.7  1.000000e+00  0.428571
8   195.0   12.0  0.3  5.722374e-15  1.000000
9    15.0  134.0  0.7  9.999999e-01  0.428571
10  130.0    7.0  0.3  2.783284e-10  1.000000
11    5.0  167.0  0.7  1.000000e+00  0.428571
12  185.0   10.0  0.3  2.536279e-14  1.000000
13   16.0  229.0  0.7  1.000000e+00  0.428571

        K1     K2  Tgt           Res       Err
0   200.0   14.0  0.3  7.567897e-24  1.000000
1     1.0  107.0  0.7  1.000000e+00  0.428571
2   160.0   10.0  0.3  1.485593e-19  1.000000
3    11.0  193.0  0.7  1.000000e+00  0.428571
4   220.0   32.0  0.3  1.411189e-21  1.000000
5    11.0  178.0  0.7  1.000000e+00  0.428571
6   120.0    2.0  0.3  2.293804e-16  1.000000
7    14.0  210.0  0.7  1.000000e+00  0.428571
8   195.0   12.0  0.3  1.003437e-23  1.000000
9    15.0  134.0  0.7  1.000000e+00  0.428571
10  130.0    7.0  0.3  2.463730e-16  1.000000
11    5.0  167.0  0.7  1.000000e+00  0.428571
12  185.0   10.0  0.3  6.290055e-23  1.000000
13   16.0  229.0  0.7  1.000000e+00  0.428571

Total error stoch descent:  0.7142856616691734
Total error batch descent:  0.7142857142857143

A parameter setting like

w1_start = -0.1, w2_start = 0.1, eta = 0.0001, decrease_rate = 0.000000001, num_steps = 25000

does not bring us any further:

        K1     K2  Tgt           Res       Err
0   200.0   14.0  0.3  9.837323e-09  1.000000
1     1.0  107.0  0.7  9.999663e-01  0.428523
2   160.0   10.0  0.3  3.496673e-07  0.999999
3    11.0  193.0  0.7  1.000000e+00  0.428571
4   220.0   32.0  0.3  7.812207e-09  1.000000
5    11.0  178.0  0.7  9.999999e-01  0.428571
6   120.0    2.0  0.3  8.425742e-06  0.999972
7    14.0  210.0  0.7  1.000000e+00  0.428571
8   195.0   12.0  0.3  1.328667e-08  1.000000
9    15.0  134.0  0.7  9.999902e-01  0.428557
10  130.0    7.0  0.3  5.090220e-06  0.999983
11    5.0  167.0  0.7  9.999999e-01  0.428571
12  185.0   10.0  0.3  2.943780e-08  1.000000
13   16.0  229.0  0.7  1.000000e+00  0.428571

        K1     K2  Tgt           Res       Err
0   200.0   14.0  0.3  9.837323e-09  1.000000
1     1.0  107.0  0.7  9.999663e-01  0.428523
2   160.0   10.0  0.3  3.496672e-07  0.999999
3    11.0  193.0  0.7  1.000000e+00  0.428571
4   220.0   32.0  0.3  7.812208e-09  1.000000
5    11.0  178.0  0.7  9.999999e-01  0.428571
6   120.0    2.0  0.3  8.425741e-06  0.999972
7    14.0  210.0  0.7  1.000000e+00  0.428571
8   195.0   12.0  0.3  1.328667e-08  1.000000
9    15.0  134.0  0.7  9.999902e-01  0.428557
10  130.0    7.0  0.3  5.090220e-06  0.999983
11    5.0  167.0  0.7  9.999999e-01  0.428571
12  185.0   10.0  0.3  2.943780e-08  1.000000
13   16.0  229.0  0.7  1.000000e+00  0.428571

Total error stoch descent:  0.7142779420120247
Total error batch descent:  0.7142779420164836

However:
For the following parameters we do get something:

w1_start = -0.1, w2_start = 0.1, eta = 0.001, decrease_rate = 0.000000001, num_steps = 25000

        K1     K2  Tgt       Res       Err
0   200.0   14.0  0.3  0.298207  0.005976
1     1.0  107.0  0.7  0.603422  0.137969
2   160.0   10.0  0.3  0.334158  0.113860
3    11.0  193.0  0.7  0.671549  0.040644
4   220.0   32.0  0.3  0.294089  0.019705
5    11.0  178.0  0.7  0.658298  0.059574
6   120.0    2.0  0.3  0.368446  0.228154
7    14.0  210.0  0.7  0.683292  0.023869
8   195.0   12.0  0.3  0.301325  0.004417
9    15.0  134.0  0.7  0.613729  0.123244
10  130.0    7.0  0.3  0.362477  0.208256
11    5.0  167.0  0.7  0.654627  0.064819
12  185.0   10.0  0.3  0.309307  0.031025
13   16.0  229.0  0.7  0.697447  0.003647

        K1     K2  Tgt       Res       Err
0   200.0   14.0  0.3  0.000012  0.999961
1     1.0  107.0  0.7  0.997210  0.424586
2   160.0   10.0  0.3  0.000106  0.999646
3    11.0  193.0  0.7  0.999957  0.428510
4   220.0   32.0  0.3  0.000009  0.999968
5    11.0  178.0  0.7  0.999900  0.428429
6   120.0    2.0  0.3  0.000771  0.997429
7    14.0  210.0  0.7  0.999980  0.428543
8   195.0   12.0  0.3  0.000014  0.999953
9    15.0  134.0  0.7  0.998541  0.426487
10  130.0    7.0  0.3  0.000555  0.998150
11    5.0  167.0  0.7  0.999872  0.428389
12  185.0   10.0  0.3  0.000023  0.999922
13   16.0  229.0  0.7  0.999992  0.428560

Total error single:  0.07608269490258893
Total error batch:  0.7134665897677123

By pure chance we found a combination of starting point and learning-rate for which we – by hopping around on the flat cost areas – accidentally arrived at the slope area of one sample and started a gradient descent. This did, however, not (yet) happen for the total costs.
We get a minimum around (w1=-0.005, w2=0.005), but with a big spread of 0.0025 for each of the weight values.

Intermediate Conclusion

We looked at a simple perceptron scenario with one computing neuron. Our solitary neuron was to learn to distinguish between input data of two distinct and separate data clusters in a 2-dimensional feature space. The feature data varied between big and small values across different samples. The neuron used the sigmoid function as activation and output function. The cost function over all samples shows a minimum within a tiny area of the weight space. We found this minimum with the help of a fine-grained, mesh-based analysis of the cost values. However, such an analysis is not applicable to general ML-scenarios.

The problem we face is that due to the saturation properties of the sigmoid function the minimum cannot be detected automatically via gradient descent without knowing already precise details about the solution. Gradient descent does not work – we either get stuck on areas of nearly constant costs or we hop around between different plateaus of the cost function – missing a tiny location in the (w1, w2)-parameter space for a real descent into the existing minimum.

We need to find a way out of this dilemma. In the next article

A single neuron perceptron with sigmoid activation function – II – normalization to overcome saturation

I shall show that normalization opens such a way.

A simple Python program for an ANN to cover the MNIST dataset – VII – EBP related topics and obstacles

I continue with my series about a Python program to build simple MLPs:

A simple Python program for an ANN to cover the MNIST dataset – VI – the math behind the „error back-propagation“
A simple program for an ANN to cover the Mnist dataset – V – coding the loss function
A simple program for an ANN to cover the Mnist dataset – IV – the concept of a cost or loss function
A simple program for an ANN to cover the Mnist dataset – III – forward propagation
A simple program for an ANN to cover the Mnist dataset – II – initial random weight values
A simple program for an ANN to cover the Mnist dataset – I – a starting point

On our tour we have already learned a lot about multiple aspects of MLP usage – forward propagation, matrix operations, loss or cost functions, to name a few. In the last article of this series
A simple program for an ANN to cover the Mnist dataset – VI – the math behind the „error back-propagation“
I tried to explain some of the math which governs “Error Back Propagation” [EBP]. See the PDF attached to that article.

EBP is an algorithm which applies the “Gradient Descent” method for the optimization of the weights of a Multilayer Perceptron [MLP]. “Gradient Descent” itself is a method where we step-wise follow short tracks perpendicular to contour lines of a hyperplane in a multidimensional parameter space to hopefully approach a global minimum. A step means a change of parameter values – in our context of weights. In our case the hyperplane is the surface formed by the cost function over the weights. If we have m weights we get a hyperplane in an (m+1) dimensional space.

To apply gradient descent we have to calculate partial derivatives of the cost function with respect to the weights. We discussed this in detail in the last article. If you read the PDF you certainly noted: Most of the time we shall execute matrix operations to provide the components of the weight gradient. Of course, we must guarantee that the matrices’ dimensions fit each other such that the required operations – such as element-wise multiplication and the numpy.dot(X,Y) operation – become executable.

Unfortunately, there are some challenges regarding this point which we have not covered, yet. One objective of this article is to get prepared for these potential problems before we start coding EBP.

Another point worth discussing is: Is there really just one cost function when we use mini-batches in combination with gradient descent? Regarding the descriptions and the formulas in the PDF of the last article this was and is not fully clear. We only built sums there over the cost contributions of all the records in a mini-batch. We did NOT use a loss function which assigned costs to deviations of the predicted results (after forward propagation) from the known values of all training data records.

This triggers the question of what our code in the end really does when it works with mini-batches during weight optimization … We start with this point.

In the following I try to keep the writing close to the quantity notations in the PDF. Sorry for a bad display of the δs in HTML.

Gradient descent and mini-batches – one or multiple cost functions?

Regarding the formulas given so far, we obviously handle costs and gradient descent batch-wise. I.e. each mini-batch has its own cost function – with fewer contributions than a cost function for all records would have. Each cost function has (hopefully) a defined position of a global minimum in the weights’ parameter space. Taking this into consideration the whole mini-batch approach is obviously based on some conceptually important assumptions:

  • The basic idea is that the positions of the global minima of all the cost-functions for the different batches do not deviate too much from each other in the basic parameter space.
  • If we additionally defined a cost function for all training data records (over all batches) then this cost function should display a global minimum positioned in between the ones of the batches’ cost functions.
  • This also means that there should be enough records in each batch with a really statistical distribution and no specialties associated with them.
  • Contour lines and gradients on the hyperplanes defined by the loss functions will differ from each other. On average over all mini-batches this should not hinder convergence into a common optimum.

To understand the last point let us assume that we have a batch for the MNIST dataset where all records of handwritten digits show a tendency to be shifted towards the left border of the basic 28×28 pixel frames. Then this batch would probably give us other weights than other batches.

To get a deeper understanding, let us take only two batches. By chance their cost functions may deviate a bit. In the plots below I have just simulated this by two assumed “cost” functions – each forming a hyperplane in 3 dimensions over only two parameter (=weight) dimensions x and y. You see that the “global” minima of the blue and the red curve deviate a bit in their position.

The next graph shows the sum, i.e. the full “cost function”, in green in comparison to the (vertically shifted and scaled) original functions.

Here, too, you clearly see the differences in the minima’s positions. What does this mean for gradient descent?
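The original 3D plots are not reproduced here, but a minimal sketch of how such surfaces can be simulated might look as follows; the paraboloids and the positions of their minima are of course just assumed toy functions, not real batch costs:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # enables the 3D projection in older Matplotlib

x = np.linspace(-2.0, 2.0, 60)
y = np.linspace(-2.0, 2.0, 60)
X, Y = np.meshgrid(x, y)
C1 = (X - 0.3)**2 + (Y - 0.2)**2   # assumed "cost" of batch 1, minimum at (0.3, 0.2)
C2 = (X + 0.3)**2 + (Y + 0.2)**2   # assumed "cost" of batch 2, minimum at (-0.3, -0.2)

fig = plt.figure(figsize=(14, 5))
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(X, Y, C1, color='blue', rstride=4, cstride=4)
ax.plot_wireframe(X, Y, C2, color='red', rstride=4, cstride=4)
ax.plot_wireframe(X, Y, 0.5*(C1 + C2), color='green', rstride=4, cstride=4)  # the "total" costs
plt.show()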

Firstly, the contour lines on the total cost function would deviate from the ones on the cost function hyperplanes of our 2 batches. So would the directions of the different gradients at the point presently reached in the parameter space during optimization! Working with batches therefore means jumping around on the surface of the total cost function a bit erratically and not precisely along the direction of steepest descent there. By the way: This behavior can be quite helpful to overcome local minima.

Secondly, in our simplified example we would in the end not converge completely, but jump or circle around the minimum of the total cost function. Reason: Each batch forces the weight corrections for x,y into different directions, namely those of its own minimum. So, a weight correction induced by one batch would be countered by corrections imposed by the optimization for the other batch. (Regarding MNIST it would e.g. be interesting to run a batch with handwritten digits of Europeans against a batch with digits written by Americans and see how the weights differ after gradient descent has converged for each batch.)

This makes us understand multiple things:

  • Mini-batches should be built with a statistical distribution of records and their composition should be changed statistically from epoch to epoch.
  • We need a criterion to stop iterating over too many epochs senselessly.
  • We should investigate whether the number and thus the size of mini-batches influences the results of EBP.
  • At the end of an optimization run we could invest in some more iterations not for the batches, but for the full cost function of all training records and see if we can get a little deeper into the minimum of this total cost function.
  • We should analyze our batches – if we keep them up and do not create them statistically anew at the beginning of each epoch – for special data records whose properties are far off the norm, and maybe eliminate those data records.

Repetition: Why Back-propagation of 2 dimensional matrices and not vectors?

The step wise matrix operations of EBP are to be performed according to a scheme with the following structure:

  • On a given layer N apply a layer-specific matrix “NW.T” (depending on the weights there) by some operational rule on some matrix “(N+1)δS“, which contains some data already calculated for layer (N+1).
  • Take the result and modify it properly by multiplying it element-wise with some other matrix ND (containing derivative expressions for the activation function) until you get a new NδS.
  • Get partial derivatives of the cost function with respect to the weights on layer (N-1) by a further matrix operation of NδS on a matrix with output values (N-1)A.T of layer (N-1).
  • Proceed to the next layer in backward direction.

The input into this process is a matrix of error-dependent quantities, which are defined at the output layer. These values are then back-propagated in parallel to the inner layers of our MLP.

Now, why do we propagate data matrices and not just data vectors? Why are we allowed to combine so many different multiplications and summations described in the last article when we deal with partial derivatives with respect to variables deep inside the network?

The answer to the first question is numerical efficiency. We operate on all data records of a mini-batch in parallel; see the PDF. The answer to the second question is 2-fold:

  • We are allowed to perform so many independent operations because of the linear structure of our cost functions with respect to contributions coming from the records of a mini-batch – and because we apply only linear operations between layers during forward propagation. All contributions – however non-linear each may be in itself – are just summed up. And propagation itself between layers is defined to be linear.
  • The only non-linearity occurring – namely in the form of non-linear activation functions – is applied just on layers. And there it works only node-wise! We do not couple values for nodes on one and the same layer.

In this sense MLPs are very simple by definition – although they may look complex! (By the way and if you wonder why MLPs are nevertheless so powerful: One reason has to do with the “Universal Approximation Theorem”; see the literature hint at the end.)

Consequence of the simplicity: We can deal with δ-values (see the PDF) for both all nodes of a layer and all records of a mini-batch in parallel.

Results derived in the last article would change dramatically if we had rules that coupled the Z- or A-values of different nodes – e.g. if the squared value at node 7 in layer X always had to be the sum of the squared values at nodes 5 and 6. Believe me: There are real networks in this world where such a type of node coupling occurs – not only in physics.

Note: As we have explained in the PDF, the nodes of a layer define one dimension of the NδS-matrices, the number of mini-batch records the other. The latter remains constant. So, during the process the δ-matrices change only one of their 2 dimensions.
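To make the dimension handling concrete, here is a small, purely illustrative Numpy sketch of one backward step for a mini-batch; the layer sizes are arbitrary and the matrices are filled with random numbers only to demonstrate the shapes:

import numpy as np

B, n_N, n_N1 = 50, 70, 30           # batch size, nodes on layer N, nodes on layer N+1
W = np.random.randn(n_N1, n_N)      # weights mapping layer N to layer N+1
dS_next = np.random.randn(n_N1, B)  # (N+1)δS: one column per record of the mini-batch
D = np.random.randn(n_N, B)         # derivatives of the activation function at layer N

dS = W.T.dot(dS_next) * D           # NδS: matrix operation plus element-wise product
print(dS.shape)                     # (70, 50) - node dimension changed, batch dimension not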

Some possible pitfalls to tackle before EBP-coding

Now, my friends, we can happily start coding … Nope, there are actually some minor pitfalls, which we have to explain first.

Special cost-, activation- and output-functions

I refer to the PDF mentioned above and its formulas. The example explained there referred to the “Log Loss” function, which we took as an example cost function. In this case the outδS and the 3δS-terms at the nodes of the outermost layer turned out to be quite simple. See formula (21), (22), (26) and (27) in the PDF.

However, there may be other cost functions for which the derivative with respect to the output vector “a” at the outermost nodes is more complicated.

In addition we may have other output or activation functions than the sigmoid function discussed in the PDF’s example. Further, the output function may differ from the activation function at inner layers. Thus, we find that the partial derivatives of these functions with respect to their variables “z” must be calculated explicitly and as needed for each layer during back propagation; i.e., we have to provide separate and specific functions for the provision of the required derivatives.

At the outermost layer we apply the general formulas (84) to (88) with matrix ED containing derivatives of the output-function Eφ(z) with respect to the input z to find EδS with E marking the outermost layer. Afterwards, however, we apply formula (92) – but this time with D-elements referring to derivatives of the standard activation-function φ used at nodes of inner layers.

The special case of the Log Loss function and other loss functions with critical denominators in their derivative

Formula (21) shows something interesting for the quantity outδS, which is a starting point for backward propagation: a denominator depending on critical factors, which directly involve output “a” at the outer nodes or “a” in a difference term. But in our one-hot-approach “a” may become zero or come close to it – during training by accident or by convergence! This is a dangerous thing; numerically we absolutely want to avoid any division by zero or by small numbers close to the numerical accuracy of a programming language.

What mathematically saves us in the special case of Log Loss are formulas (26) and (27), where due to some “magic” the dangerous denominator is cancelled by a corresponding factor in the numerator when we evaluate EδS.

In the general case, however, we must investigate what numerical dangers the functional form of the derivative of the loss function may bring with it. In the end there are two things we should do:

  • Build a function to directly calculate EδS and put as much mathematical knowledge about the involved functions and operations into it as possible, before employing an explicit calculation of values of the cost function’s derivative.
  • Check the involved matrices, whose elements may appear in denominators, for elements which are either zero or close to it in the sense of the achievable accuracy.

For our program this means: Whether we calculate the derivative of a cost function to get values for “outδS” will depend on the mathematical nature of the cost function. In case of Log Loss we shall avoid it. In case of MSE we shall perform the numerical operation.
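As a hedged sketch – these are not the final class methods – the two cases could be coded as follows, assuming a sigmoid output function and matrices ay_A_out and ay_Y holding the outputs and the one-hot encoded targets of a mini-batch:

import numpy as np

def delta_out_log_loss(ay_A_out, ay_Y):
    # Log Loss with sigmoid output: the critical denominator a*(1-a) in the
    # cost derivative is cancelled by the same factor in da/dz, leaving (a - y)
    return ay_A_out - ay_Y

def delta_out_mse(ay_A_out, ay_Y):
    # MSE: no cancellation occurs - we evaluate the derivative chain explicitly,
    # with a*(1-a) being the derivative of the sigmoid
    return (ay_A_out - ay_Y) * ay_A_out * (1.0 - ay_A_out)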

Handling of bias nodes

A further complication for our planned coding has its origin in the existence of bias nodes on every inner layer of the MLP. A bias node of a layer adds an additional degree of freedom for adjusting the layer’s weights; a bias node has no input, it produces only a constant output – but it is connected with weights to all normal nodes of the next layer.

Some readers who are not so familiar with “artificial neural networks” may ask: Why do we need bias nodes at all?

Well, think about a simple matrix operation on a 2-dim vector; it changes the vector’s direction and length. But if we want to approximate a function for regression or a separation hyperplane for classification by a linear operation then we need another element which corresponds to a constant translation part in a linear transformation: z = w1*x1 + w2*x2 + const. Take a simple function y = w*x + c. The “c” controls where the line crosses the y-axis. We need such a parameter if our line is to separate clusters of points distributed somewhere in the (x,y)-plane; the w alone is not sufficient to orient and position the separating line in the (x,y)-plane.

This is, very basically, what bias neurons are good for regarding the essentially linear operation between two MLP-layers: They add a constant to an otherwise linear transformation.

Do we need a bias node on all layers? Definitely on the input layer. On the hidden layers a trained network could in principle evolve its weights during learning in such a way that the effect of a bias neuron emerges – with almost zero weights on one side. In practice, however, we make it easier for the MLP to converge by providing explicit “bias” neurons.

What did we do to account for bias nodes in our Python code so far? We extended the matrices describing the output arrays ay_A_out of the activation function (for input ay_Z_in) on the input and all hidden layers by elements of an additional row. This was done by the method “add_bias_neuron_to_layer()” – see the code given in article III.

The important point is that our weight matrices already got a corresponding dimension when we built them; i.e. we defined weights for the bias nodes, too. Of course, during optimization we must calculate partial derivatives of the cost function with respect to these weights.

The problem is:

We need to back propagate a delta-matrix Nδ for layer N via ( (NW.T).dot(Nδ) ). But then we cannot apply a simple element-wise matrix multiplication with the (N-1)D(z)-matrix at layer N-1. Reason: The dimensions do not fit if we calculate the elements of D only for the existing Z-values at layer N-1.

There are two solutions for coding:

  • We can add a row artificially and intermediately to the Z-matrix to calculate the D-matrix, then calculate NδS as
    ( (NW.T).dot(Nδ) ) * (N-1)D
    and eliminate the first artificial row appearing in NδS afterwards.
  • The other option is to reduce the weight-matrix (NW) by a row intermediately and restore it again afterwards.

What we do is a matter of efficiency; in our coding we shall follow the first way and test the difference to the second way afterwards.
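A small Numpy sketch of this first option – with purely illustrative dimensions, random placeholder values and a sigmoid activation – could look like this:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_N, n_N1, B = 30, 10, 50
W = np.random.randn(n_N1, n_N + 1)   # the weight matrix includes the bias node of layer N
dS_next = np.random.randn(n_N1, B)   # delta-matrix coming from layer N+1
Z = np.random.randn(n_N, B)          # Z-values at layer N - no bias row yet

Z_ext = np.vstack([np.ones((1, B)), Z])          # add an artificial row for the bias node
D_ext = sigmoid(Z_ext) * (1.0 - sigmoid(Z_ext))  # derivative of the sigmoid activation

dS = W.T.dot(dS_next) * D_ext   # now the dimensions fit: shape (n_N + 1, B)
dS = dS[1:, :]                  # eliminate the artificial first row again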

Check the matrix dimensions

As all steps to back-propagate and to circumvent the pitfalls require a bit of matrix wizardry we should at least check at every step during EBP backward-propagation that the dimensions of the involved matrices fit each other.
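A trivial helper of the following kind – just a sketch, not yet part of our class – may save us hours of debugging:

def check_dim(name, ay_matrix, expected_shape):
    # raise early with a readable message instead of waiting for numpy to fail
    if ay_matrix.shape != expected_shape:
        raise ValueError(
            "EBP dimension mismatch for " + name + ": got " +
            str(ay_matrix.shape) + ", expected " + str(expected_shape)
        )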

Outlook

Guys, after having explained some of the matrix math in the previous article of this series and the problems we have to tackle whilst programming the EBP-algorithm, we are eventually well prepared to add EBP-methods to our Python class for MLP simulation. We are going to do this in the next article:
A simple program for an ANN to cover the Mnist dataset – VIII – coding Error Backward Propagation

Literature

“Machine Learning – An Applied Mathematics Introduction”, Paul Wilmott, 2019, Panda Ohana Publishing

A simple Python program for an ANN to cover the MNIST dataset – IV – the concept of a cost or loss function

We continue with our effort to write a Python class for a Multilayer Perceptron [MLP] – a simple form of an artificial neural network [ANN]. In the previous articles of this series

A simple program for an ANN to cover the Mnist dataset – III – forward propagation
A simple program for an ANN to cover the Mnist dataset – II – initial random weight values
A simple program for an ANN to cover the Mnist dataset – I – a starting point

I have already explained

  • what parts of an MLP setup we need to parameterize; e.g. the number of layers, the number of nodes per layer, the activation and output functions;
  • how we create node layers and the corresponding weight arrays,
  • how (and also a bit of why) we work with “mini-batches” of test data during training,
  • how we can realize a “vectorized” form of the required “Feed Forward Propagation” algorithm [FFP]. A vectorized form enables us to process all training data records of a mini-batch in parallel. We used Linear Algebra functions provided by Numpy for this purpose; these functions are supported by the OpenBlas library on a Linux system.

We also set up a basic loop over a number of epochs during training. (Remember: An epoch corresponds to a training step over all training data records). The number of epochs is handled as a parameter to the class’s interface. By artificially repeating the FFP algorithm up to a thousand times, we already got an impression of the code’s performance and its dependence on the number of CPU cores and the size of a mini-batch.

A special method of our class MyANN controls the handling of a mini-batch of multiple input data records via two major steps so far:

  • Step 1: Extract the data records for the mini-batch from the input data.
  • Step 2: Apply FW-propagation to all data records of the mini-batch.

The next natural step would be to encode a training algorithm which optimizes the weight parameters of our MLP. However, in this article we shall not code anything. Instead, I shall discuss some aspects of the so called “cost function” of a MLP. I think this to be useful to get a basic understanding of what training of an ANN actually means and what the differences are in comparison to other ML-algorithms as e.g. the SVM approach. Understanding the cost function’s role for the training of a MLP will also help to better understand the origin and the mathematical form of the back-propagation-algorithm used for training and discussed in a later article.

I simplify a lot below; more details can be found in the literature on machine Learning [ML]; see the section “Links” for some references. Note that if you know all about the theoretical concepts behind ANN training you will not learn anything new here. This is for beginners (and for later reference in this article series).

The concept of a cost function: Turning a classification problem into an optimization problem

What do we mean by training an ANN? Training means to optimize the weights of the ANN such that the “Feed Forward Propagation” in the end delivers correct predictions for new datasets. A cost function is a central concept of the so called “gradient descent method” used for this optimization. By the way: A synonym for cost function is “loss function”. We use both terms interchangeably below.

The relation between ANN-training based on a loss function and the classification task, which we want to solve with an ANN, is a subtle one. Let us first discuss what we understand by “classification”:

Classification means to separate the input data into categories; i.e.: finding categorical separation surfaces in the multidimensional vector space of input data. In case of the MNIST dataset such separation interfaces should discriminate between 10 different clusters of data points.

I have discussed the problem of finding a separation surface for the case of the moons dataset example in previous articles in this blog. We then used SVM-algorithms to solve this particular problem. Actually, we determined parameters of (non-linear) polynomials to define a separation surface with a (soft) maximum distance from category related clusters of data points in an extended feature space (=input vector space). The extended feature space covered not only basic features of the input data but also powers of it.

All in all we worked directly in a multidimensional extension of the input vector space and optimized parameters describing linear separation interfaces there. If we had several categories instead of 2 we could use a so called “one versus all”-strategy to calculate 10 linear separation interfaces and determine the distance of any new data point towards the separation surfaces as a confidence measure (score) for a prediction. The separation with the highest score would be used to discriminate between the 10 possible solutions and choose the optimal one. Yes, working in an extended input vector space and with parameters of multiple linear separation surfaces was a bit difficult.

Actually, working with ANNs and cost functions corresponds to a more elegant way of optimizing; it starts with measuring distances in the output vector space of the ANN/MLP:

In the context of classification tasks (with known results for training data) a loss function provides a fictitious cost value which weighs the deviations (or distances) of calculated result values (of the ML-algorithm under training) from the already known correct result for training records. I.e. it measures the errors for the training data records in the output space. The optimization task then means to minimize the cost function and thereby minimize a kind of mean error for all input data records.
The hope is that the collection of resulting weight values allows for predictions of other unknown input data, too.

The result of an ANN/MLP for a training data record is the outcome of a complex transformation performed by the ANN. In case of an MLP the transformation of input into output data is done by the “Feed Forward Propagation” algorithm [FFP]. Thus a reasonably designed cost function becomes dependent on the parameters of the FFP-algorithm – predominantly on the weights given at the nodes of the MLP’s layers. We concentrate on this type of parameter below; but note that in special ANN cases there may be additional other parameters to be varied for training and ANN optimization.

The MLP’s weights can in principle be varied continuously during training. The parameter (vector) space thus can be described by multiple real value axes – one for each of the weights. The parameter space of a MLP is a multi-dimensional one with a dimension equal to or bigger than the space of input data – and of course also the result space. (That the dimension is bigger follows from the required node number in the input layer.)

With the help of a suitable cost function we can pose a mathematical optimization problem for the weight parameters:

Find a point in the weight vector space for which the cost function gives us a minimum, which in turn corresponds to an overall minimum of the deviation distances.

A simple example for a cost function would be a sum of square values for the length of the difference vectors in the output space for all training data.
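In the plain notation used in this series such a sum-of-squares cost function can be written as

C(W) = 0.5 * SUM over all training records k [ ||a(x_k; W) - y_k||**2 ],

with a(x_k; W) being the FFP output vector for input record x_k and y_k the known target vector; the factor 0.5 is a common convention which simplifies the derivatives.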

There are several things to mention:

  1. The result space is a multidimensional vector space (in case of MNIST a 10 dimensional one); so the distance between points there has to be defined via a mathematical norm over components.
  2. The result space in classification problems typically has a much smaller dimension “m” than the dimension “n” of the space of the input data (m < n).
  3. It makes almost no sense to display the cost function over the multidimensional space of input data – as a working ML-algorithm should deliver small cost values for all input data. However, it makes a lot of sense to display the costs over the multi-dimensional vector space of continuous weight values.
  4. We deal with batches of many training data records; it follows that a reasonable cost function in this case must combine deviations of individual records from optimal values. This is very often done via some kind of sum over individual cost contributions from each training record.

A continuous differentiable cost function defines a hyperplane for gradient-descent

In many MLP cases the cost function will be a function of the weight parameters only; this requires a reasonable node-independent form of the activation functions. A loss function with a continuous dependency on all ANN parameters (such as the weights) provides a multidimensional hyperplane in an (n+1)-dimensional space – with “n” being the number of FFP variables. The (n+1)-th dimension is for the cost values. As the FFP-algorithm depends on a multitude of linear and non-linear operations we expect that the hyperplane-surface will have a rather complex form – with maxima and minima as well as so called saddle points.

However, if we construct the cost function cleverly the optimum values for the ANN’s weights will lead to a global minimum of this hyperplane – which then in turn corresponds to a minimum of distances between the propagation results and the known values for the training data:

The task to find categorical separation surfaces in the vector space of input data is reformulated as an optimization task in the cost-weight vector space: There it means finding a (global) minimum of the cost hyperplane.

Let us assume we sit at some point on a yet unexplored hyperplane. A quite general way to find the (global) minimum of this hyperplane is to follow a path indicated by the (tangential) gradient vector at the local point: The gradient is oriented perpendicular to the contour lines of constant cost values on the hyperplane. It thus gives us the direction along which a maximum cost change occurs per unit change of some weights. Calculating corrections of the weights translates into following the gradient with small steps. Geometrically speaking:

We follow the direction the overall gradient points to – and translate the movement into small components along each weight axis – which gives us the individual weight corrections. Our hope is that the overall gradient points into the direction of the global minimum. (In case of local minima or large flat areas of the hyperplane we would have to adapt the step size somehow.)

This is called the “gradient descent method“. In one of the next contributions to this article series we shall see how this in turn efficiently translates into the backward propagation of errors through the network via matrix operations. Our optimization task is thus reduced to a systematic variation of the weights during gradient descent with a series of mathematical operations determining gradient components and resulting weight corrections.
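In pseudo-code a single correction step for all weights thus amounts to something like the following line – with eta denoting the step size (the learning rate) and grad_C() an assumed function returning all partial derivatives of the cost function as one array:

ay_weights = ay_weights - eta * grad_C(ay_weights)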

Smooth or stochastic gradient descent?

The cost function absorbs complexity stemming from the large amount of training data rather smoothly by summing up the individual contributions of the training data records. Let us look a bit at the gradient: Normally we would have to calculate partial derivatives of all cost contributions of all data sets with respect to all individual weights. For big training data sets this corresponds to a lot of mathematical operations – both matrix operations (linear algebra) and value calculations of nonlinear (activation and output) functions.

What happens if we took not all data records but concentrated on the contributions of selected input data only? And corrected afterwards again for another disjoint set of selected data points? I.e., what if we calculated the full required correction only piece-wise for different collections (mini-batches) of input data records?

Then the reduced gradient components would guide us into a direction on the hyperplane which deviates from the overall gradient’s direction. Taking the next data record would correct this movement a bit into another direction again. If we perform gradient corrections for batches of different data records or, in the extreme case, for individual records, we move somewhat erratically around the overall gradient’s direction; we speak of a “stochastic gradient descent” [SGD].

The erratic movement of SGD helps to overcome local overall minima. But all in all it may take more steps to come to a global minimum or at least close to it – as the stochastic movement may never converge into the overall minimum’s point in the weight space, but hop around it instead.

The question of how many input data we include in the cost function determining one single weight correction step during an epoch leads to the choice between the following cases:

  • stochastic gradient descent (sequence of weight corrections during an epoch – each based on just one training data record at a time and for all weights),
  • full batch gradient descent (one weight correction per epoch – based on all training data records and for all weights),
  • mini-batch gradient descent (sequence of weight corrections during an epoch – each based on a batch of multiple training data records and for all weights; see the sketch below).
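A hedged sketch of how mini-batches can be drawn anew for each epoch – with placeholder data and illustrative names, not our class code – might look like this:

import numpy as np

# placeholder data - in our class the real training records would be used
num_records, batch_size = 60000, 500
X_train = np.random.rand(num_records, 784)
y_train = np.random.randint(0, 10, num_records)

indices = np.random.permutation(num_records)   # statistical shuffling per epoch
batches = np.array_split(indices, num_records // batch_size)
for batch_idx in batches:
    X_batch, y_batch = X_train[batch_idx], y_train[batch_idx]
    # ... FFP, cost gradient and one weight correction for this mini-batch ...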

A stochastic or mini-batch based gradient descent may mean much faster corrections in terms of a reduced number of (vectorized) mathematical operations and CPU consumption – at least at the beginning of the descent. The CPU time of the training process for large amounts of input data may actually be reduced by factors!

In the case of mini-batches we can, therefore, optimize the performance by varying the mini-batch size. The required matrix operations can be performed vectorized over all data records of the batch; i.e. the operations can be performed “in parallel”. Fortunately, we do not need to care about the necessary CPU register handling whilst coding – optimized libraries will take care of this. As we have seen already in this blog, also threading for a reasonable amount of CPU cores may influence the performance on a specific system a lot.

For our Python class we will therefore provide parameters for the size of a mini-batch – and adapt both the calculation of cost-contributions and respective weight corrections accordingly.

Note that we do not only hope that the weights determined by gradient descent provide reasonable result values for the training data, but also for any other data later on provided to the ANN/MLP. Solving the optimization problem in the end must provide reliable and complex separation surfaces in the multidimensional input vector space (for MNIST with a dimension of n=784). The mathematical equivalence of the problem of finding separation surfaces in the input vector space to the optimization problem in the result space can be proven for regression problems. (Actually, I do not know whether a mathematical equivalence has been proven for general problems. So, for some ML classification tasks gradient descent may not work sufficiently well.)

Choosing a cost function

Cost functions should be designed carefully. A “cost function” must have certain properties for the so called “gradient descent method” to work successfully:

  • For convenience the global extremum should be a minimum.
  • The cost function must be continuous and differentiable with respect to the ANN’s weights.
  • The requirement of differentiability translates back to the requirement of differentiable activation and output functions – as we shall see in detail in a later article.
  • It should expose a basic convex form in the surroundings of the global minimum (second partial derivatives > 0).
  • The “cost function” must have certain properties for making use of an efficient way to calculate gradients, i.e. partial derivatives. We shall see that some reasonable cost functions turn this task into a back propagation of errors. The efficiency comes via similar matrix operations as those used in the forward propagation algorithm.

Besides choosing a cost function carefully also the choice of the activation function is important for the success of gradient descent. The path to global minimum on a hyperplane may also depend on the starting point (defined by the statistically chosen initial weight values) as well as on an adaptive step size (called learning rate).

Most Machine Learning algorithms can incorporate a variety of reasonable “cost functions”. For classification tasks the following cost functions are often used:

  • Categorical Cross-Entropy
  • Log Loss (= Logistic Regression Loss)
  • Relative Entropy
  • Exponential Loss
  • MSE (Mean Square Error)

Each of these functions is more or less appropriate for a specific type of classification problem. See the literature for more information on each of these cost functions.

In our code for the MNIST problem we will only include two of these functions as a starting point – Log Loss and MSE. MSE is e.g. used by T. Rashid in his book (see the section “Links”) on building an MLP with Python for the MNIST case. Information on the Log Loss function is provided by the books of Raschka and Geron; see the references in the section “Links” below.
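To give a first impression, hedged Numpy sketches of the two cost functions for a mini-batch might look as follows; ay_A_out is assumed to hold the output values and ay_Y the one-hot encoded target values, with one record per row:

import numpy as np

def cost_mse(ay_A_out, ay_Y):
    # mean over all records of the squared distances in the output space
    return 0.5 * np.sum((ay_A_out - ay_Y)**2) / ay_Y.shape[0]

def cost_log_loss(ay_A_out, ay_Y, eps=1.e-12):
    # clip the outputs to avoid log(0) for values saturating at 0 or 1
    a = np.clip(ay_A_out, eps, 1.0 - eps)
    return -np.sum(ay_Y * np.log(a) + (1.0 - ay_Y) * np.log(1.0 - a)) / ay_Y.shape[0]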

Do we need cost function values at all?

The training of an ANN – i.e. the optimization of weights – does not require the explicit calculation of cost values. The reason is of course that gradient descent first of all works with partial derivatives with respect to weights. To calculate them we must use the chain rule with respect to the activation function, the output of lower layers and so on. But the cost values themselves are nowhere required. As a consequence the calculation of costs is never encoded anywhere in T. Rashid’s book “Make Your Own Neural Network”.

Nevertheless, in the next article of this series we shall discuss the code for cost calculations of mini-batches. The reason is that we can use the cost values to study the progress of training and the convergence into a minimum: The change of the total “costs” provides a way to control and watch the success of training through its epochs.

Summary and conclusion

The concept of a cost function is central to MLPs and classification tasks: Classification means to separate the input data into categories. The task to find categorical separation surfaces in the vector space of input data is reformulated as an optimization task. This in turn requires us to find a minimum of the cost/loss hyperplane over the multidimensional space of potential weight parameters. Calculating corrections of the weights whilst following a gradient-guided path to a minimum in turn translates efficiently into the backward propagation of errors through the network via matrix operations.

Links and Literature

https://www.python-course.eu/matrix_arithmetic.php

Gradient descent and cost functions
towardsdatascience.com – understanding-the-mathematics-behind-gradient-descent-dde5dc9be06e
ml-cheatsheet.readthedocs.io – gradient-descent.html
page.mi.fu-berlin.de – neural chapter K7.pdf

Regularization
chunml.github.io tutorial on Regularization/

Books
“Neuronale Netze selbst programmieren”, Tariq Rashid, 2017, O’Reilly Media Inc. + dpunkt.verlag GmbH
“Machine Learning mit Scikit-Learn & TensorFlow”, Aurélien Géron, 2018, O’Reilly Media Inc. + dpunkt.verlag GmbH
“Python Machine Learning”, Sebastian Raschka, 2016, Packt Publishing, Birmingham, UK