It is very seldom that you are confronted with an HTTP status message of type 406 "Not Acceptable". However, this happened yesterday to a customer who uses a renowned hosting provider (in Norway) to publish his web-sites. The customer runs his own WordPress installation on hosted web-servers. His favorite browser is Firefox on a Win 10 desktop system. A week ago he could work without any restrictions. Then suddenly everything changed.
Access to website and WP admin interface broken due to security measures of the provider
At some point during the last week the hosting provider changed the security policies on its (Norwegian) Apache servers. The provider seems to have at least changed settings of the "mod_security" module – and thereby started to lock out old browsers via some rules. (Maybe they even introduced the use of the mod_security module for the first time?) Implementing mod_security with a reasonable set of rules is basically a good measure.
However, the effect was that our customer got a 406 error whenever he tried to access his web-site with his Firefox browser. The "406 Not Acceptable" message indicates that a web server cannot or will not (due to some rules) satisfy conditions set in the headers of an HTTP GET- or POST-request. Our customer uses the latest version of Firefox. He tested whether he got something similar on a test installation on one of our hosted servers in Germany. Of course not.
A subsequent complaint of our customer was answered by his provider; the answer in a direct translation says:
Contact the Firefox technicians or use Chrome!
Very funny! Our customer asked us for help. We tested the web-server's response with multiple browsers from Linux and Windows desktops. The problem seemed to exist only for Firefox and only on desktop systems. This already pointed to a strange server reaction to the HTTP "User-Agent" string.
But this was only part of the strange experience our customer had due to the new security measures. In addition, the provider enforced the usage of an Apache htaccess password (HTTP Basic Authentication) for all users who maintain their own WordPress installation on the hoster's web-servers. Our customer suddenly needed to provide a UserID and a password to get access to the "wp-admin" directory of his WordPress installation. We found out about this intentionally imposed restriction by having a look at the public web-site of the provider. There, in a side column, we found a message regarding the new restriction. Customers were asked to contact the hoster's specialists for the required credentials. Our customer had not been informed directly by the provider about this new policy. So, we just sent the provider a mail and asked him to give us the authentication data for the admin folder of our customer's WP installation. We got it one day later via email.
In my opinion these procedures are indicative of the mess we are facing these days with improperly handled IT-security measures.
Some comments regarding enforced HTTP Basic Authentication for WP’s admin directory
Comment 1: It is, of course, OK to enforce password access to directories of a web server. But this is only an effective protection measure if the provider at the same time enforces general TLS/SSL encryption for the access to the hosted web-sites. Otherwise the password would be sent in clear text over the Internet. However, you can still work with a WordPress installation or other CMS installations on the provider's web-servers without any SSL certificate. Our customer has an SSL certificate – but he had to pay for it. Here the business interests of the provider obviously collide with real security procedures.
Comment 2: Personally, I regard it as a major mistake to set a common UserID and a fixed permanent password for customers and to send these credentials to a web admin via an unencrypted email. Ironically enough, the provider asked the receiver of the mail to take note of the password and then to destroy the mail. So, mails on the customer's mail system are dangerous, but the transfer of an unencrypted mail over at least partially unencrypted Internet lines is not?
Hey, we are not talking about a one-time password here – but about permanent credentials set and enforced by the provider. The CPanel admin tool offered by the hosting provider does NOT allow customers to change the fixed htaccess password set by the provider's admins.
Furthermore, why announce this policy on a public web-site and not inform the customers via a secure channel? Next question: How did they know that we were authorized to request the access data without contacting our customer first?
The mess with the User-Agent string
The analysis of the Firefox problem was also interesting. We can demonstrate the effect on the provider's own web-site. Here is what you presently (18.10.2019) get when opening the homepage of the provider with Firefox from a Linux desktop:
And here is what you get when you manipulate the User-Agent string a bit:
The blue rectangles have been added so as not to directly reveal the provider's name. Note the 406 error message in the FF developer tools at the bottom!
Well, well … Our customer got the following when opening his own web-page:
Some analysis showed that we get a correct display of the web-site in the same browser if we manipulate the HTTP User-Agent string of Firefox a bit. One way to do this is offered by the web developer tools of Firefox. However, there are also good plugins to fake the User-Agent string.
The next question was: Which part of the User-Agent string did the provider's Apache servers react so allergically to?
The standard User-Agent string of Firefox in an HTTP GET- or POST-request is defined to have the following structure:
Mozilla/5.0 (platform; rv:geckoversion) Gecko/geckotrail Firefox/firefoxversion
This can be learned from related explanations of mozilla.org:
Firefox User Agent string
“geckotrail” can be an indication of a version or a date. However – quotation:
On Desktop, geckotrail is the fixed string “20100101”
And when we check the User-Agent-string for Firefox on e.g. a Linux desktop we indeed get:
Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0
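For reproducing such tests, a few lines of Python are often more convenient than switching browsers or plugins. The following sketch (the URL is a placeholder, and it assumes the "requests" package is installed) simply sends the same GET request with several User-Agent variants and prints the resulting status codes:

import requests

url = "https://www.example-hosting-provider.no/"   # placeholder - only test sites you are allowed to probe

# A few variations of the Firefox User-Agent string; only the platform part
# and the "geckotrail" part differ between the variants.
user_agents = {
    "standard desktop Firefox": "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0",
    "modified geckotrail":      "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20191018 Firefox/68.0",
    "platform info removed":    "Mozilla/5.0 (rv:68.0) Gecko/20100101 Firefox/68.0",
}

for name, ua in user_agents.items():
    resp = requests.get(url, headers={"User-Agent": ua}, timeout=10)
    print(name, "-> HTTP", resp.status_code)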
I continue with my efforts to write a small Python class by which I can set up and test a Multilayer Perceptron [MLP] as a simple example for an artificial neural network [ANN]. In the last two articles of this series (among them "A simple program for an ANN to cover the Mnist dataset – II – initial random weight values") I defined some code elements which controlled the layers, their node numbers and built weight matrices. We succeeded in setting random initial values for the weights. This enables us to work on the forward propagation algorithm in this article.

As we later on need to define methods which cover "training epochs" and the handling of "mini-batches" comprising a defined number of training records, we extend our set of methods already now. An "epoch" characterizes a full training step over all mini-batches of the training set; the handling of a mini-batch comprises, among other steps, the forward propagation of its samples. Vectorized propagation means that we propagate all training records of a batch in parallel. This will be handled by Numpy matrix multiplications (see below). We shall see in a forthcoming post that we can also cover the cumulative gradient calculation over all batch samples by matrix multiplications, where we shift the central multiplication and summation operations to appropriate rows and columns.

However, we do not care for details of training epochs and complete batch operations at the moment. We use the two methods "_fit()" and "_handle_mini_batch()" in this article only as envelopes to trigger the epoch loop and the matrix operations for the propagation of a batch, respectively.

We change and extend the "__init__"-function of class MyANN a bit: "n_epochs" will later receive the user's setting for the number of epochs to follow during training. "n_max_batches" allows us to limit the number of mini-batches to analyze during tests. The kind reader will also have noticed that I encapsulated the series of operations for preparing the weight matrices of the ANN in a new method "_set_ANN_structure()".

We can safely assume that some steps must be performed to prepare epoch- and batch-handling. We, therefore, introduce a new method "_prepare_epochs_and_batches()". For the time being this method only calculates the number of mini-batches from the input parameter "n_size_mini_batch". We use the Numpy function "array_split()" to split the full range of input data into batches.

For the time being method "_fit()" is used for looping over the number of epochs and the number of batches. We shall build up the operations for batch handling over several articles. In this article we clarify the operations for feed-forward propagation, only. Nevertheless, we have to think a step ahead: Gradient calculation will require that we keep the results of propagation layer-wise somewhere. As the number of layers can be set by the user of the class, we save the propagation results in two Python lists.

The Z-values define a collection of input vectors which we normally get by a matrix multiplication from the output data of the last layer and a suitable weight matrix. The "collection" is our mini-batch. So, "ay_Z_in_layer" actually is a 2-dimensional array. For the ANN's input layer "L0", however, we just fill in an excerpt of the "_X"-array data corresponding to the present mini-batch. Array "ay_A_out_layer[n]" contains the results of the activation function applied onto the elements of "ay_Z_in_layer[n]" of layer "Ln". (In addition we shall add a value for a bias neuron; see below.)
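As a side remark on the bias values: a minimal sketch of how a constant bias row can be stacked on top of an activation array for a whole mini-batch is given below. This is only an illustration of the kind of operation involved; it is not necessarily identical to the class's own helper method discussed further down.

import numpy as np

def add_bias_row(A):
    # A has shape (n_nodes, n_batch_samples); we prepend one row of 1.0 values,
    # i.e. one bias value per sample of the mini-batch
    bias_row = np.ones((1, A.shape[1]))
    return np.vstack((bias_row, A))      # new shape: (n_nodes + 1, n_batch_samples)

A = np.random.rand(100, 50)              # e.g. 100 nodes, mini-batch of 50 samples
print(add_bias_row(A).shape)             # -> (101, 50)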
Our method looks like: The function "_fw_propagation()" performs the forward propagation of a mini-batch through all of the ANN's layers – and saves the results in the lists defined above.

Important note: This function leaves room for optimization. It is e.g. unnecessary to prepare ay_Z_in_0T again and again for each epoch. We will transfer the related steps to "_prepare_epochs_and_batches()" later on.

In one of my last articles in this blog I already showed how one can use Numpy's Linear Algebra features to cover the propagation calculations required for the information transport between two adjacent layers of a feed-forward "Artificial Neural Network" [ANN]. The result was that we can cover propagation between neighboring layers by a vectorized multiplication of two 2-dim matrices – one containing the weights and the other the vectors of feature data for all mini-batch samples. In the named article I discussed in detail which rows and columns are used for the central multiplication with weights and the summations – and that the last dimension of the input array should account for the mini-batch samples. This requires a transpose operation on the input array of layer L0. All other intermediate layer results (arrays) already get the right form for vectorizing.

"_fw_propagation()" takes the following form: Note that we need some special treatment for the last layer: here we call the out-function to get result values. And, of course, we do not add a bias neuron!

It remains to have a look at the function "_add_bias_neuron_to_layer(A_out_il, 'row')", which extends the A-data by a constant value of "1" for a bias neuron. The function is pretty simple; the sketch given above illustrates the kind of operation involved.

We let the program run in a Jupyter cell with the following parameters: This produces the following output (I omitted the output for the initialization):

If you raise the number of batches and the number of epochs you will pretty soon realize that writing continuous output to a Jupyter cell costs CPU time. You will also notice strange things regarding performance, multithreading and the use of the Linear Algebra library OpenBlas on a Linux system. I have discussed this extensively in a previous article in this blog. So, for further tests we set the following environment variable for the shell in which we start our Jupyter notebook:

export OPENBLAS_NUM_THREADS=4

This is appropriate for my quad-core CPU with hyperthreading. You may choose a different parameter on your system! We furthermore stop printing in the epoch loop by editing the call to function "_fit()":

self._fit(b_print=False, b_measure_batch_time=False)

We change our parameter setting to: Then the last output lines become:

Good! In this article we saw that coding forward propagation is a pretty straight-forward exercise with Numpy! The tricky thing is to understand the way numpy.dot() handles the vectorizing of a matrix product and which structure of the matrices is required to get the expected numbers! In the next article

A simple program for an ANN to cover the Mnist dataset – IV – the concept of a cost or loss function

we shall start working on cost and gradient calculation.

Recently, I tested the propagation methods of a small Python3/Numpy class for a multilayer perceptron [MLP]. I unexpectedly ran into a performance problem with OpenBlas. The problem had to do with the required vectorized matrix operations for forward propagation – in my case through an artificial neural network [ANN] with 4 layers. In a first approach I used 784, 100, 50, 10 neurons in 4 consecutive layers of the MLP.
The weight matrices had corresponding dimensions. The performance problem was caused by extensive multi-threading; it showed a strong dependency on mini-batch sizes and on the basic matrix dimensions related to the neuron numbers per layer. This problem has been discussed elsewhere with respect to the matrix dimensions relevant for the core multiplication and summation operations – i.e. the neuron numbers per layer. However, the vectorizing aspect of the matrix multiplications is interesting, too: One can imagine that splitting the operations for multiple independent samples is in principle ideal for multi-threading. So, using as many processor cores as possible (in my case 8) does not look like a wrong decision of OpenBlas at first.

Then I noticed that for mini-batch sizes "N" below a certain number (N < 250) the system only seemed to use up to 3-4 cores; so there remained plenty of CPU capacity left for other tasks. Performance for N < 250 was better by at least a factor of 2 compared to a situation with an only slightly bigger batch size (N ≥ 260). I got the impression that OpenBlas under certain conditions just decides to use as many threads as possible – which does not lead to a good outcome.

In the last years I sometimes had to work on optimizing multi-threaded database operations on Linux systems. I often got the impression that you have to be careful, leave some CPU resources free for other tasks and avoid heavy context switching. In addition, bottlenecks appeared due to the concurrent access of many processes to the CPU cache. (RAM limitations were an additional factor; but this should not be the case for my Python program.) Furthermore, one should not forget that Python/Numpy experiments in Jupyter notebooks require additional resources to handle the web page output and page updates in the browser. And Linux itself also requires some free resources. So, I wanted to find out whether reducing the number of threads – or available cores – for Numpy and OpenBlas would be helpful in the sense of an overall gain in performance.

All data shown below were gathered on a desktop system with some background activity due to several open browsers, clementine and pulseaudio as active audio components, an open mail client (kontact), an open LXC container, open Eclipse with PyDev and open ssh connections. Program tests were performed with the help of Jupyter notebooks. Typical background CPU consumption looks like this on Ksysguard:

We perform such matrix operations NOT sequentially, sample for sample of a collection of training data – we do it vectorized for so-called mini-batches consisting of between 50 and 600000 individual samples of training data. Instead of operating with a matrix on just one feature vector of one training sample, we use matrix multiplications whereby the second matrix often comprises many vectors of data samples. I have described such multiplications already in a previous blog article; see Numpy matrix multiplication for layers of simple feed forward ANNs.

In the most simple case of an MLP with e.g. the layer structure named above (784, 100, 50, 10 neurons) we work with "mini"-batches of different sizes (between 20 and 20000). An input vector to the first hidden layer has a dimension of 100, so the weight matrix creating this input vector from the "output" of the MLP's input layer has a shape of 784×100. Multiplication and summation in this case is done over the dimension covering the 784 features. When we work with mini-batches we want to do these operations in parallel for as many elements of a mini-batch as possible.
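The following toy snippet only checks the shapes involved in such a vectorized propagation for the (784, 100, 50, 10) example; activation functions and bias nodes are deliberately left out, and the weight arrays are written as (n_upper, n_lower) matrices so that numpy.dot() can be applied directly:

import numpy as np

N = 260                              # mini-batch size
A0 = np.random.rand(784, N)          # input layer "output": one column per batch sample

W1 = np.random.rand(100, 784)        # input layer  -> hidden layer 1
W2 = np.random.rand(50, 100)         # hidden 1     -> hidden 2
W3 = np.random.rand(10, 50)          # hidden 2     -> output layer

Z1 = np.dot(W1, A0)                  # multiplication/summation over the 784 features
Z2 = np.dot(W2, Z1)
Z3 = np.dot(W3, Z2)
print(Z1.shape, Z2.shape, Z3.shape)  # -> (100, 260) (50, 260) (10, 260)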
All in all we have to perform 3 matrix operations on our example ANN with 4 layers: a (784×100) matrix on a (784)-vector, a (100×50) matrix on a (100)-vector, a (50×10) matrix on a (50)-vector. However, we collect the data for N mini-batch samples in an array. This leads to Numpy matrix multiplications of the kind: (784×100) matrix on a (784, N)-array, (100×50) matrix on a (100, N)-array, (50×10) matrix on a (50, N)-array. Thus, we deal with matrix multiplications of two 2-dim matrices. Linear algebra libraries should optimize such operations for different kinds of processors. On my Linux system Python/Numpy uses the openblas library. This is confirmed by the output of the command "np.__config__.show()".

In all tests discussed below I performed a series of calculations for different batch sizes N = 50, 100, 200, 250, 260, 500, 2000, 10000, 20000 and repeated the full forward propagation 30 times (corresponding to 30 epochs in a full training series – but here without cost calculation and weight adjustment; I just did forward propagation).

In a first experiment, I did not artificially limit the number of cores to be used. Measured response times in seconds are indicated in the following plot (runtime for a free number of cores to use and different batch sizes N):

We see that something dramatic happens between a batch size of 250 and 260. The plots of the CPU core consumption for N=50, N=200, N=250, N=260 and N=2000 indicate that everything goes well up to N=250. Up to this point around 4 cores are used – leaving 4 cores relatively free. After N=260 OpenBlas decides to use all 8 cores with a load of 100% – and performance suffers by more than a factor of 2.

This result supports the idea to look for an optimum of the number of cores "C" to use. For an MLP with neuron numbers (784, 300, 140, 10) I got the red curve for the response time in the plot below; a second curve shows what performance is possible with just using 4 cores. Note the significantly higher response times when the core number is not limited. We also see again that something strange happens at the change of the batch size from 250 to 260. Though different from the first test case, these plots also indicate that – somewhat paradoxically – reducing the number of CPU cores available to OpenBlas could have a performance-enhancing effect.

A bit of Internet research shows that one can limit the number of cores to be used by OpenBlas e.g. via an environment variable for the shell in which we start a Jupyter notebook. The relevant command to limit the number of cores "C" to 3 is:

export OPENBLAS_NUM_THREADS=3

The plots of the response times for the batch sizes N listed above and core numbers of C=1, C=2, C=3, C=4, C=5, C=6, C=7, C=8 show the following: For C=5 I did 2 different runs; the different results for C=5 show that the system reacts rather sensitively – it changes its behavior drastically for larger core numbers. We also find an overall minimum of the response time. We understand from the plots that the number of cores to use becomes a hyper-parameter for the tuning of the performance of ANNs – at least as long as a standard multicore CPU is used.

Comparing the CPU consumption for N=50 and C=2 with the CPU consumption for N=20000 and C=4 and for N=20000 and C=6, we see that between C=5 and C=6 CPU resources get heavily consumed; there are almost no reserves left in the Linux system for C ≥ 6.
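The OPENBLAS_NUM_THREADS variable can also be set from within a Python program (via the os module) instead of exporting it in the shell. A small sketch – to my knowledge the variable must be set before Numpy (and thereby OpenBlas) is imported for the first time, otherwise it has no effect:

import os
os.environ["OPENBLAS_NUM_THREADS"] = "4"   # adapt the value to your CPU

import numpy as np
np.__config__.show()                       # verify that openblas is the BLAS backend in use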
For a full view on the situation I also looked at the response time variation with node numbers for a given number of CPU cores. For C=4 and several node-number cases I got the following results: There is some broad variation with the weight-matrix size; the bigger the weight matrix the longer the calculation time. This is, of course, to be expected. Note that the variation with the batch-size number is relatively smooth – with an optimum around 400.

Now, look at the same plot for C=6: Note that the response time is significantly bigger in all cases compared to the previous situation with C=4 – in the case of a large matrix by around 36% for N=2000. Also the variation with the batch size is more pronounced. Still, even with 6 cores you do not get the factors of between 1.4 and 2.0 observed for the case of C=8 (see above)!

As I do not know what the authors of OpenBlas are doing exactly, I refrain from technically understanding and interpreting the causes of the data shown above. However, some consequences seem to be clear: Whenever you deal with ANN or MLP simulations on a standard CPU (not a GPU!) you should absolutely care about how many cores and related threads you want to offer to OpenBlas. As far as I understood from some Internet articles, the number of cores to be used can not only be controlled by Linux (shell) environment variables but also by os-commands in a Python program. You should perform tests to find optimum values for your CPU.

stackoverflow: numpy-suddenly-uses-all-cpus
stackoverflow: run-openblas-on-multicore
stackoverflow: multiprocessing-pool-makes-numpy-matrix-multiplication-slower
scicomp: why-isnt-my-matrix-vector-multiplication-scaling/1729
Setting the number of threads via Python

In this article series we are going to build a relatively simple Python class for the simulation of a "Multilayer Perceptron" [MLP]. An MLP is a simple form of an "artificial neural network" [ANN] with multiple layers of neurons. It has three characteristic properties: (1) Only connections between nodes of neighboring layers are allowed. (2) Information transport occurs in one forward direction. We speak of a "forward propagation of input information" through the ANN. (3) Neighboring layers are densely connected; a node of layer L_n is connected to all nodes of layer L_(n+1).

The first two points mean simplifications: According to (1) we do not consider so-called cascaded networks. According to (2) no loops occur in the information transport. The third point, however, implies a lot of mathematical operations – not only during the forward propagation of information through the ANN, but also – as we shall see – during the training and optimization of the network, where we will back-propagate "errors" from the output to the input layers.

But for the next articles we need to care about simpler things first. We shall use our MLP for classification tasks. Our first objective is to apply it to the MNIST dataset. In my last article

A simple program for an ANN to cover the Mnist dataset – I – a starting point

I already presented some code for the "__init__"-function and some other methods of our Python class. It enabled us to import the MNIST data and split them into a set of training and a set of test samples. Note that this is a standard approach in Machine Learning: You train on one set of data samples, but you test the classification or regression abilities of your ANN on a separate disjoint data set.
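Just to illustrate the train/test split principle with a generic sketch (this is not the class's own loading code; the MNIST loaders used later deliver a ready-made 60000:10000 split):

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 784)             # 1000 fictitious samples with 784 features
y = np.random.randint(0, 10, size=1000)   # fictitious labels for 10 categories

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
print(X_train.shape, X_test.shape)        # -> (800, 784) (200, 784)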
In the present article we shall extend the functionality of our class: First we shall equip our network layers with a defined number of nodes. Then we shall provide statistical initial values for the "weights" describing the forward transport of information along the connections between the layers of our network.

The present status of our "__init__"-function is: The first of the new parameters ("b_print_test_data") controls whether we print out some test data or not. The second parameter ("n_size_mini_batch") controls the number of samples dealt with in parallel during propagation and optimization via the so-called "mini-batch approach" mentioned in the last article.

We said in the last article that we would provide the numbers of nodes of the ANN layers via a parameter list "ay_nodes_layers". We set the number of nodes for the input and the output layer, i.e. the first and the last number in the list, to "0" in this array, because these numbers are determined by properties of the input data set – here the MNIST dataset. All other numbers in the array determine the amount of nodes of the hidden layers in consecutive order between the input and the output layer. So, the number at ay_nodes_layers[1] is the number of nodes in the first hidden layer, i.e. the layer which follows after the input layer.

In the last article we already understood that the number of nodes in the input layer should be equal to the full number of "features" of our input data set – 784 in our case. The number of nodes of the output layer must instead be determined from the number of categories in our data set. This is equivalent to the number of distinct labels in the set of training data represented by an array "_y_train" (in the case of MNIST: 10). We provide three methods to check the node numbers defined by the user, to set the node numbers for the input and output layers and to print the numbers. Note: The initial node numbers DO NOT include a bias node, yet. If we extend the final commands in the "__init__"-function accordingly and test our additional code in a Jupyter notebook, we get a corresponding output of: Good!

Initial values for the ANN weights have to be given as matrices, i.e. 2-dim arrays. However, the randomizer functions provided by Numpy give you vectors as output. So, we need to reshape such vectors into the required form. First we define a method to provide random floating point and integer numbers. Then we define two methods to create the weight matrices for the connections. As we allow for all possible connections between nodes, the dimensions of the matrices are determined by the numbers of nodes in the connected neighboring layers. Each node of a layer L_n can be connected to each node of layer L_(n+1). (A rough illustrative sketch of this kind of weight-matrix creation is given at the end of this article.) In my opinion this makes the access to these matrices flexible and easy in the case of multiple hidden layers.

We must also set the activation and the output function. This is handled by a method "_check_and_set_activation_and_out_functions()". A test output for the enhanced code in a Jupyter cell gives: The shapes of the weight matrices correspond correctly to the numbers of nodes in the 4 layers defined. (Do not forget about the bias nodes!)

We have reached a status where our ANN class can read in the MNIST dataset and set initial random values for the weights. This means that we can start to do some more interesting things. In the next article

A simple program for an ANN to cover the Mnist dataset – III – forward propagation

we shall program the "forward propagation". We shall perform the propagation for a mini-batch of many data samples in one step. We shall see that this is a very simple task, which only requires a few lines of code.
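The following rough sketch illustrates the kind of weight-matrix creation described above – a uniform randomizer produces a flat vector which is then reshaped into matrix form, with one extra column for a bias node of the lower layer. The function name, value range and bias handling are my own assumptions for illustration; they do not reproduce the class's actual "_create_WM_…()" methods.

import numpy as np

def create_weight_matrices(ay_nodes_layers, w_low=-0.5, w_high=0.5):
    # one weight matrix per pair of adjacent layers; "+1" accounts for a bias node
    ay_w = []
    for il in range(len(ay_nodes_layers) - 1):
        n_lower = ay_nodes_layers[il] + 1
        n_upper = ay_nodes_layers[il + 1]
        w_vector = np.random.uniform(w_low, w_high, size=n_lower * n_upper)
        ay_w.append(w_vector.reshape(n_upper, n_lower))
    return ay_w

ay_w = create_weight_matrices([784, 100, 50, 10])
print([w.shape for w in ay_w])   # -> [(100, 785), (50, 101), (10, 51)]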
For beginners both in Python and Machine Learning [ML] the threshold to do some real programming and create your own Artificial Neural Network [ANN] seems to be relatively high. Well, some readers might say: Why program an ANN by yourself at a basic Python level at all when Keras and TensorFlow [TF] are available? Answer: For learning! And eventually to be able to do some things TF has not been made for. And as readers of this blog will see in the future, I have some ideas along this line …

So I thought, just let me set up a small Python3 and Numpy based program to create a simple kind of ANN – a "Multilayer Perceptron" [MLP] – and train it for the MNIST dataset. I expected that my readers and I myself would learn something about various methods used in ML during our numerical experiments. Well, we shall see …

Regarding ANN theory I take a brutal shortcut and assume that my readers are already acquainted with a number of basic topics. I cannot spare you the effort of studying most of these topics in advance. Otherwise I would have to write an introductory book on ML myself. [I would do so, but you need to give me a sponsor 🙂 ] But even if you are not fully acquainted with all the named topics: I shall briefly comment on each and every of the points in the forthcoming articles. The real basics are, however, much better and more precisely documented in the literature; e.g. in the books of Geron and Raschka (see the references at the end of this article). I recommend reading one of these books whilst we move on with the sequence of steps to build the basic code for our MLP.

We need a relatively well defined first objective for the usage of our ANN. We shall concentrate on classification tasks. As a first example we shall use the conventional MNIST data set. The MNIST data set consists of images of handwritten numbers with 28×28 pixels [px]. It is a standard data set used in many elementary courses on ML. The challenge for our ANN is that it should be able to recognize hand-written digits from a digitized gray-scale image after some training. Note that this task does NOT require the use of a fully fledged multi-layer MLP. "Stochastic Gradient Descent" approaches for a pure binary classifier to determine (linear) separation surfaces in combination with a "One-versus-All" strategy for multi-category classification may be sufficient. See chapters 3 to 5 in the book of Geron for more information.

Regarding the build-up of the ANN program, I basically follow an approach described by S. Raschka in his book (see the references at the end of this article). However, at multiple points I take the freedom to organize the code differently and comment in my own way … I am only a beginner in Python; I hope my insights are helpful for others in the same situation. In any case you should make yourself familiar with Numpy arrays and their "shapes". I assume that you understand the multidimensional structure of Numpy arrays.

To avoid confusion, I use the following wording and synonyms:

Category: Each input data element is associated with a category to which it belongs. In our case a category corresponds to one of the ten digits (0 to 9).

Label: A category may be described by a label. Training data may provide a so-called "target label array" _y_train for all input data. We must be prepared to transform target labels for input data into a usable form for an ANN, i.e.
into a vectorized form which selects a specific category out of many. This process is called "label encoding".

Input data set: A complete "set" of input data. Such a set consists of individual "elements" or "records". Another term which I shall frequently use for such an element is a "sample". The MNIST input set of training data consists of 60000 records or samples – which we provide via an array _X_train. The array is two-dimensional as each sample consists of values for multiple properties.

Feature: A sample of the input data set may be equivalent to a mathematical vector, whose elements specify (numerical) values for multiple properties – so-called "features" – of a sample. Thus, input samples correspond to points in a multidimensional feature space.

Output data set: A complete set of output data after a so-called "propagation" through the ANN for the input data set. "Propagation" means a series of defined mathematical transformations of the original feature data of the input sample. The number of samples or records in the output data set is equal to the number of records in the input data set. The output set will be represented by a Numpy array "_ay_ANN_out".

A data record or sample of the input data set: One distinct element of the input data set (and its array). Note that such an element itself may be a multidimensional array covering all features in a distinct form. Such an array represents a so-called "tensor".

A data record of the output data set: One distinct element of the output data set. Note that such an element itself may be an array covering all possible categories in a distinct form. E.g., we may be given a "probability" for each category – which allows us to decide with which of the categories we should associate the output element.

An ANN is composed of a series of horizontally and/or vertically arranged layers with nodes. The nodes represent the artificial neurons. An MLP is an ANN which has a rather simple structure: It consists of an input layer, multiple sequential intermediate "hidden" layers, and an output layer. All nodes of a specific layer are connected with all nodes of the neighboring (!) layers, only. We speak of a "dense" or "fully connected" layer structure.

The simplifying sketch below displays an ANN with just three sequentially arranged layers – an input layer, a "hidden" middle layer and an output layer. Note that in general there can be (many) more hidden layers than just one. Note also that modern ANNs (e.g. Convolutional Networks) may have a much more complicated topological structure with hundreds of layers.

Input layer and its number of nodes

For other input data the number of features may be different; in addition, features may follow a multidimensional order or organization which first must be "flattened" out into one dimension. The number of input nodes must then be adjusted accordingly. The number of input nodes should, therefore, be a parameter or be derived from information on the type of input data. The way you map complicated and structured features to input layers and whether you map all data to a one-dimensional input vector is a question one should think about carefully. (Most people today treat e.g. a time dimension of input data as just a special form of a feature – I regard this as questionable in some cases, but this is beyond this article series …) For our MLP we always assume a mapping of features to a flat one-dimensional vector-like structure, as illustrated by the small code sketch below.
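A tiny sketch of such a flattening step for image-like input data (illustrative values only):

import numpy as np

X_imgs = np.random.rand(5, 28, 28)            # 5 fictitious 28x28 gray-scale "images"
X_flat = X_imgs.reshape(X_imgs.shape[0], -1)  # one flat feature vector per sample
print(X_flat.shape)                           # -> (5, 784)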
Output layer and its number of nodes

How we indicate the association of a (transformed) sample at the output layer with a category numerically – by a probability number between "0" and "1" or just a "1" at the right category and zeros otherwise – can be a matter of discussion. It is also a question of the cost function we wish to use. We will come back to this point in later articles.

The numbers of "hidden layers" and their nodes

Activation and output functions

We should be aware of the fact that the nodes of the output layer need special consideration, as the "activation function" there – the "output function" – may differ from the activation function used for the hidden layers.

I develop my code as a Python module in an Eclipse/PyDev IDE, which itself uses a virtual Python3 environment. I described the setup of such a development environment in detail in another previous article of this blog. In the resulting directory structure of the PyDev project I place a module "myann.py" at the location "…../ml_1/mynotebooks/mycode/myann.py". This file shall contain the code of class "MyANN" for our ANN.

We need to import some libraries at the head of our Python program first. Why do I import "tensorflow" and "keras"? We shall use Keras only for a fast import of the MNIST dataset (see below). We need "scipy" to get an optimized version of the so-called "sigmoid" function – which is an important version of an activation function. We shall use it most of the time. "numpy" and "math" are required for fast array and math operations. "time" is required to measure the run time of program segments, and "matplotlib" will help us to visualize some information gathered during and after training.

We encapsulate most of the required functionality in a class and its methods. Python provides the "__init__"-function, which we can use as a kind of "constructor" – although it technically is not the same as a constructor in other languages. Anyway, we can use it as an interface to feed in parameters and to initialize variables of a class instance. We shall build up our "__init__()"-function during the next articles step by step. In the beginning we shall only focus on attributes and methods of our class required to import the MNIST data, put them into Numpy arrays and create the basic network layers.

Parameters

You see that I defined multiple parameters, which are explained in the Python "doc"-string. We use a "string" to choose the dataset to train our ANN on. To be able to work on other data sets later on, we assume that specific methods for importing a variety of special input data sets are implemented in our class. This requires that the class knows exactly which kinds of data sets it is capable of handling. We provide a list with this information below. The other parameters should be clear from their inline documentation.

We first initialize a bunch of class attributes which we shall use to define the network of layers, nodes and weights, and to keep our input data and functions. The list of known input data sets is kept in the variable "self.__input_data_sets". The variables self._X, self._X_train, self._X_test, self._y, self._y_train, self._y_test will be used to keep all sample data of the chosen dataset – i.e. the training data, the test data for checking the reliability of the algorithm after training and the corresponding target data (y_…) for classification – in distinct array variables during code execution. The target data in the MNIST case contain the digit a specific sample image (of _X_train or _X_test) represents. All of the named attributes will become Numpy arrays. A method called "_handle_input_data(self)" will load the (MNIST) input data and fill the arrays.
The input arrays "X_…" will, via their dimensions, provide the information on the number of data sets (_dim_sets) and the number of features (_dim_features). Numpy provides the various dimensions of multidimensional arrays in form of a tuple. The target data arrays "_y_…" provide the number of "categories" (MNIST: 10 digits) the ANN must distinguish after training. We keep this number in the variable "_n_labels".

The number of total layers ("_n_total_layers") is by 2 bigger than the number of hidden layers (_n_hidden_layers). We take the number of nodes in the layers from the respective list provided as an input parameter "ay_nodes_layers" to our class. We transform the list into a Numpy array "_ay_nodes_layers". The expected number of nodes in the output layer is used for consistency checks and saved in the variable "_n_nodes_layer_out".

The "weights" of an ANN must be given in form of matrices: A weight describes a connection between two nodes of different adjacent layers. So we have as many connections as there are node combinations (nodex_(N+1), nodey_N), with "nodex_(N+1)" meaning a node on layer L_(N+1) and "nodey_N" a node on layer L_N. As the number of layers is not fixed, but can be set by the user, I use a Python list "_ay_w" to collect such matrices in the order of layer_0 (input) to layer_n (output).

Weights, i.e. the matrix elements, must initially be set as random numbers. To provide such numbers we have to use randomizer functions. Depending on the kind of random numbers to produce (floating point numbers, integer numbers) we use at least two randomizers (randint, uniform). For the weights we use the uniform randomizer.

Allowed activation and output function names are listed in Python dictionaries which point to respective methods. This allows for an "indirect addressing" of these functions later on. You may recognize this by the direct reference of the dictionary elements to defined class methods (no strings are used!). For the time being we work with the "sigmoid" and the "relu" functions for activation and the "sigmoid" and "softmax" functions for output creation. The attributes "self._act_func" and "self._out_func" are used later on to invoke the functions requested by the respective parameters of the class's interface.

The final part of the code segment given above is used for plot sizing with the help of "matplotlib"; a method "initiate_and_resize_plot()" takes care of this. It can use 2 alternative ways of doing so.

Now let us turn to some methods. We first need to read in and prepare the input data. We use a method "_handle_input_data()" to work on this problem. For the time being we have only three different ways to load the MNIST dataset from different origins; all of them fill the arrays self._X_train, self._X_test, self._y_train, self._y_test, and we have to do this a bit differently for the 3 cases. Note that the "mnist_784" set from "fetch_openml" gives the target category values in form of strings and not integers. We correct this directly after loading. The fastest method for importing the MNIST dataset is based on "keras"; the keras function "kmnist.load_data()" already provides a 60000:10000 ratio for training and test data. However, we get the images in a (60000, 28, 28) array shape; we therefore reshape the "_X_train" array to (60000, 784) and the "_X_test" array to (10000, 784).

A further handling of the MNIST data requires some common analysis. What shape do we expect for "_X_train" and "_y_train"? Each element of the input data set is an array with values for all features.
Thus "_X_train.shape" should be (60000, 784). For _y_train we expect a simple integer describing the digit to which the MNIST input image corresponds. Thus we expect a one-dimensional array with _y_train.shape = (60000,). So far, so good …

But: The output data of our ANN for one input element will be provided as an array of values for our 10 different categories – and not as a simple number. To account for this we need to encode the "_y_train" data, i.e. the target labels, into a usable array form. We use two methods to achieve this.

A big advantage of the weight optimization method we shall use later on during the training of our MLP is that we will perform the weight adjustment for a whole bunch of training samples in one step. Meaning: We propagate a whole bunch of training data samples in parallel through the grid to get an array with result data (an output array) for all samples. Such a bunch is called a "batch" and, if it is significantly smaller than the whole set of training data, a "mini-batch". Working with "mini-batches" during the training and learning phase of an ANN is a compromise between gradient descent over the full training set and a purely stochastic descent over single samples. See chapter 4 of the book of Geron and chapter 2 in the book of Raschka for some thorough information on this topic.

The advantage of mini-batches is that we can use vectorized linear algebra operations over all elements of the batch. Linear algebra libraries are optimized to perform the resulting vector and matrix operations on modern CPUs and GPUs. You really should keep the following point in mind to understand the code for the propagation and optimization algorithms discussed in forthcoming articles:

Mini-batches will also help during training in so far as we look at a bunch of multiple selected samples in parallel to achieve bigger steps of our gradient-guided descent into a minimum of the cost hyperplane in the beginning – with the disadvantage of making some jumpy stochastic turns on the cost hyperplane instead of a smoother approach. I probably lost you now 🙂 . The simpler version is: Keep in mind that we later on will work with batches of training data samples in parallel! However, the separation interface for our categories in the feature space must in the end be adjusted with respect to all given data points of the training set. This means we must perform the training successively for a whole sequence of mini-batches which together cover all available training samples.

What is the shape of the output array? As I have explained already in my last article, we shall construct the output function such that it provides something like "probability" values within the interval [0, 1] for each node of the output layer. We define a perfectly working MLP as one which – after training – produces a "1.0" at the correct category node (i.e. the expected digit) and "0.0" at all other output nodes.

One-hot encoding of labels

By using Numpy's zeros()-function and Python's "enumerate()"-function we can achieve such an encoding for all data elements of the training data set. See the method "_encode_all_mnist_labels()". Thus, the array "_ay_onehot" will have a shape of (10, 60000). From this 2-dim array we can later slice out bunches of consecutive training data for mini-batches. The array "_ay_oneval" is provided for convenience and print purposes, only: it provides the expected digit value in addition.
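A minimal illustration of this kind of one-hot encoding with the category axis in vertical direction – not the class's actual "_encode_all_mnist_labels()" code, just the principle:

import numpy as np

y_train = np.array([5, 0, 4, 1, 9])            # a few fictitious MNIST labels
ay_onehot = np.zeros((10, y_train.shape[0]))   # shape: (n_categories, n_samples)
for idx, label in enumerate(y_train):
    ay_onehot[label, idx] = 1.0
print(ay_onehot)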
Let us test the import of the input data and the label encoding with a Jupyter notebook. In previous articles I have already described how to use such a notebook. I set up a Jupyter notebook called "myANN" (in my present working directory "/projekte/GIT/ai/ml1/mynotebooks"), start it and add two cells. The first one is for the import of libraries; by its last line I import my present class code. With the second cell I create an instance of my class; the "__init__()"-function is automatically executed and calls the other methods defined so far.

Note that the display of "_ay_onehot" shows the categories in vertical (!) direction (rows) and the index for the input data element in horizontal direction (columns)! You see that the labels in the enumerate structure correspond to the "1"s in the "_ay_onehot" array.

Importing the MNIST dataset into Numpy arrays via Keras is simple – and has a good performance. We have learned a bit about "one-hot encoding" and prepared an array "_ay_onehot", which we shall use during ANN training and weight optimization. It will allow us to calculate a difference between the actual output values of the ANN at the nodes of the output layer and a "1.0" value at the node for the expected sample category and "0.0" otherwise. In the next article (A simple program for an ANN to cover the Mnist dataset – II – initial random weight values) we shall equip the network layers with defined node numbers and initial weight values.

Referenced Books: the books by A. Geron and S. Raschka mentioned in the text above.
Methods to cover training and mini-batches
Modified “__init__”-function
def __init__(self,
my_data_set = "mnist",
n_hidden_layers = 1,
ay_nodes_layers = [0, 100, 0], # array which should have as many elements as n_hidden + 2
n_nodes_layer_out = 10, # expected number of nodes in output layer
my_activation_function = "sigmoid",
my_out_function = "sigmoid",
n_size_mini_batch = 50, # number of data elements in a mini-batch
n_epochs = 1,
n_max_batches = -1, # number of mini-batches to use during epochs - > 0 only for testing
# a negative value uses all mini-batches
vect_mode = 'cols',
figs_x1=12.0, figs_x2=8.0,
legend_loc='upper right',
b_print_test_data = True
):
'''
Initialization of MyANN
Input:
data_set: type of dataset; so far only the "mnist", "mnist_784" datasets are known
We use this information to prepare the input data and learn about the feature dimension.
This info is used in preparing the size of the input layer.
n_hidden_layers = number of hidden layers => between input layer 0 and output layer n
ay_nodes_layers = [0, 100, 0 ] : We set the number of nodes in input layer_0 and the output_layer to zero
Will be set to real number afterwards by infos from the input dataset.
All other numbers are used for the node numbers of the hidden layers.
n_nodes_layer_out = expected number of nodes in the output layer (is checked);
this number corresponds to the number of categories NC = number of labels to be distinguished
my_activation_function : name of the activation function to use
my_out_function : name of the "activation" function of the last layer which produces the output values
n_size_mini_batch : Number of elements/samples in a mini-batch of training data
The number of mini-batches will be calculated from this
n_epochs : number of epochs to calculate during training
n_max_batches : > 0: maximum of mini-batches to use during training
< 0: use all mini-batches
vect_mode: Are 1-dim data arrays (vectors) ordered by columns or rows?
figs_x1=12.0, figs_x2=8.0 : Standard sizing of plots ,
legend_loc='upper right': Position of legends in the plots
b_print_test_data: Boolean variable to control the print out of some tests data
'''
# Array (Python list) of known input data sets
self._input_data_sets = ["mnist", "mnist_784", "mnist_keras"]
self._my_data_set = my_data_set
# X, y, X_train, y_train, X_test, y_test
# will be set by _handle_input_data()
# X: Input array (2D) - at present status of MNIST image data, only.
# y: result (=classification data) [digits represent categories in the case of Mnist]
self._X = None
self._X_train = None
self._X_test = None
self._y = None
self._y_train = None
self._y_test = None
# relevant dimensions
# from input data information; will be set in handle_input_data()
self._dim_sets = 0
self._dim_features = 0
self._n_labels = 0 # number of unique labels - will be extracted from y-data
# Img sizes
self._dim_img = 0 # should be sqrt(dim_features) - we assume square like images
self._img_h = 0
self._img_w = 0
# Layers
# ------
# number of hidden layers
self._n_hidden_layers = n_hidden_layers
# Number of total layers
self._n_total_layers = 2 + self._n_hidden_layers
# Nodes for hidden layers
self._ay_nodes_layers = np.array(ay_nodes_layers)
# Number of nodes in output layer - will be checked against information from target arrays
self._n_nodes_layer_out = n_nodes_layer_out
# Weights
# --------
# empty List for all weight-matrices for all layer-connections
# Numbering :
# w[0] contains the weight matrix which connects layer 0 (input layer) to hidden layer 1
# w[1] contains the weight matrix which connects layer 1 (first hidden layer) to layer 2 (second hidden layer or output layer)
self._ay_w = []
# --- New -----
# Two lists for output of propagation
# __ay_X_in : input data of mini-batches on the different layers; the contents are calculated by the propagation algorithm
# __ay_a_out : output data of the activation function; the contents are calculated by the propagation algorithm
# Note that the elements of these lists are numpy arrays
self.__ay_X_in = []
self.__ay_a_out = []
# Known Randomizer methods ( 0: np.random.randint, 1: np.random.uniform )
# ------------------
self.__ay_known_randomizers = [0, 1]
# Types of activation functions and output functions
# ------------------
self.__ay_activation_functions = ["sigmoid"] # later also relu
self.__ay_output_functions = ["sigmoid"] # later also softmax
# the following dictionaries will be used for indirect function calls
self.__d_activation_funcs = {
'sigmoid': self._sigmoid,
'relu': self._relu
}
self.__d_output_funcs = {
'sigmoid': self._sigmoid,
'softmax': self._softmax
}
# The following variables will later be set by _check_and_set_activation_and_out_functions()
self._my_act_func = my_activation_function
self._my_out_func = my_out_function
self._act_func = None
self._out_func = None
# number of data samples in a mini-batch
self._n_size_mini_batch = n_size_mini_batch
self._n_mini_batches = None # will be determined by _get_number_of_mini_batches()
# number of epochs
self._n_epochs = n_epochs
# maximum number of batches to handle (<0 => all!)
self._n_max_batches = n_max_batches
# print some test data
self._b_print_test_data = b_print_test_data
# Plot handling
# --------------
# Alternatives to resize plots
# 1: just resize figure 2: resize plus create subplots() [figure + axes]
self._plot_resize_alternative = 1
# Plot-sizing
self._figs_x1 = figs_x1
self._figs_x2 = figs_x2
self._fig = None
self._ax = None
# alternative 2 does resizing and (!) subplots()
self.initiate_and_resize_plot(self._plot_resize_alternative)
# ***********
# operations
# ***********
# check and handle input data
self._handle_input_data()
# set the ANN structure
self._set_ANN_structure()
# Prepare epoch and batch-handling - sets mini-batch index array, too
self._prepare_epochs_and_batches()
# perform training
start_c = time.perf_counter()
self._fit(b_print=True, b_measure_batch_time=False)
end_c = time.perf_counter()
print('\n\n ------')
print('Total training Time_CPU: ', end_c - start_c)
print("\nStopping program regularily")
sys.exit()
Readers who have followed me so far will recognize that I renamed the parameter "n_mini_batch" to "n_size_mini_batch" to indicate its purpose a bit more clearly. We shall derive the number of required mini-batches from the value of this parameter.
I have added two new parameters: "n_epochs" and "n_max_batches" (see the explanations above).
'''-- Main method to set ANN structure --'''
def _set_ANN_structure(self):
# check consistency of the node-number list with the number of hidden layers (n_hidden)
self._check_layer_and_node_numbers()
# set node numbers for the input layer and the output layer
self._set_nodes_for_input_output_layers()
self._show_node_numbers()
# create the weight matrix between input and first hidden layer
self._create_WM_Input()
# create weight matrices between the hidden layers and between the last hidden and the output layer
self._create_WM_Hidden()
# check and set activation functions
self._check_and_set_activation_and_out_functions()
return None
The called functions have remained unchanged in comparison to the last article.

Preparing epochs and batches
''' -- Main Method to prepare epochs -- '''
def _prepare_epochs_and_batches(self):
# set number of mini-batches and array with indices of input data sets belonging to a batch
self._set_mini_batches()
return None
##
''' -- Method to set the number of batches based on given batch size -- '''
def _set_mini_batches(self, variant=0):
# number of mini-batches?
self._n_mini_batches = math.ceil( self._y_train.shape[0] / self._n_size_mini_batch )
print("num of mini_batches = " + str(self._n_mini_batches))
# create list of arrays with indices of batch elements
self._ay_mini_batches = np.array_split( range(self._y_train.shape[0]), self._n_mini_batches )
print("\nnumber of batches : " + str(len(self._ay_mini_batches)))
print("length of first batch : " + str(len(self._ay_mini_batches[0])))
print("length of last batch : " + str(len(self._ay_mini_batches[self._n_mini_batches - 1]) ))
return None
Note that the approach may lead to smaller batch sizes than requested by the user.
array_split() cuts out a series of sub-arrays of indices of the training data. I.e., "_ay_mini_batches" becomes a 1-dim array whose elements are 1-dim arrays, too. Each of the latter contains a collection of indices for selected samples of the training data – namely the indices for those samples which shall be used in the related mini-batch.
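A quick illustration of how np.array_split() distributes the sample indices – the resulting batches may differ in length by one element:

import numpy as np

print(np.array_split(range(11), 3))
# -> [array([0, 1, 2, 3]), array([4, 5, 6, 7]), array([ 8,  9, 10])]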
Preliminary elements of the method for training – "_fit()"

''' -- Method to loop over epochs and mini-batches during training -- '''
def _fit(self, b_print = False, b_measure_batch_time = False):
# range of epochs
ay_idx_epochs = range(0, self._n_epochs)
# limit the number of mini-batches
n_max_batches = self._n_mini_batches if self._n_max_batches < 0 else min(self._n_max_batches, self._n_mini_batches) # a negative value uses all mini-batches
ay_idx_batches = range(0, n_max_batches)
if (b_print):
print("\nnumber of epochs = " + str(len(ay_idx_epochs)))
print("max number of batches = " + str(len(ay_idx_batches)))
# looping over epochs
for idxe in ay_idx_epochs:
if (b_print):
print("\n ---------")
print("\nStarting epoch " + str(idxe+1))
# loop over mini-batches
for idxb in ay_idx_batches:
if (b_print):
print("\n ---------")
print("\n Dealing with mini-batch " + str(idxb+1))
if b_measure_batch_time:
start_0 = time.perf_counter()
# deal with a mini-batch
self._handle_mini_batch(num_batch = idxb, b_print_y_vals = False, b_print = b_print)
if b_measure_batch_time:
end_0 = time.perf_counter()
print('Time_CPU for batch ' + str(idxb+1), end_0 - start_0)
return None
#
We limit the number of mini-batches. The double-loop structure is typical. We tell the function "_handle_mini_batch(num_batch = idxb, …)" which batch it should handle.

Preliminary steps for the treatment of a mini-batch
''' -- Method to deal with a batch -- '''
def _handle_mini_batch(self, num_batch = 0, b_print_y_vals = False, b_print = False):
'''
For each batch we keep the input data array Z and the output data A (output of activation function!)
for all layers in Python lists
We can use this as input variables in function calls - mutable variables are handled by reference values !
We receive the A and Z data from propagation functions and proceed them to cost and gradient calculation functions
As an initial step we define the Python lists ay_Z_in_layer and ay_A_out_layer
and fill in the first input elements for layer L0
'''
ay_Z_in_layer = [] # Input vector in layer L0; result of a matrix operation in L1,...
ay_A_out_layer = [] # Result of activation function
#print("num_batch = " + str(num_batch))
#print("len of ay_mini_batches = " + str(len(self._ay_mini_batches)))
#print("_ay_mini_batches[0] = ")
#print(self._ay_mini_batches[num_batch])
# Step 1: Special treatment of the ANN's input Layer L0
# Layer L0: Fill in the input vector for the ANN's input layer L0
ay_Z_in_layer.append( self._X_train[(self._ay_mini_batches[num_batch])] ) # numpy arrays can be indexed by an array of integers
#print("\nPropagation : Shape of X_in = ay_Z_in_layer = " + str(ay_Z_in_layer[0].shape))
if b_print_y_vals:
print("\n idx, expected y_value of Layer L0-input :")
for idx in self._ay_mini_batches[num_batch]:
print(str(idx) + ', ' + str(self._y_train[idx]) )
# Step 2: Layer L0: We need to transpose the data of the input layer
ay_Z_in_0T = ay_Z_in_layer[0].T
ay_Z_in_layer[0] = ay_Z_in_0T
# Step 3: Call the forward propagation method for the mini-batch data samples
self._fw_propagation(ay_Z_in = ay_Z_in_layer, ay_A_out = ay_A_out_layer, b_print = b_print)
if b_print:
# index range of layers
ilayer = range(0, self._n_total_layers)
print("\n ---- ")
print("\nAfter propagation through all layers: ")
for il in ilayer:
print("Shape of Z_in of layer L" + str(il) + " = " + str(ay_Z_in_layer[il].shape))
print("Shape of A_out of layer L" + str(il) + " = " + str(ay_A_out_layer[il].shape))
# Step 4: To be done: cost calculation for the batch
# Step 5: To be done: gradient calculation via back propagation of errors
# Step 6: Adjustment of weights
# try to accelerate garbage handling
if len(ay_Z_in_layer) > 0:
del ay_Z_in_layer
if len(ay_A_out_layer) > 0:
del ay_A_out_layer
return None
Why do we need to transpose the Z-matrix for layer L0?
This has to do with the required matrix multiplication of the forward propagation (see below).
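A minimal shape check (outside the class, with the sizes of the test run below) shows why the samples must be arranged column-wise before the np.dot()-operation:
import numpy as np
X_batch = np.random.rand(200, 784)   # a mini-batch of 200 samples, row-wise as in X_train
W0 = np.random.rand(100, 785)        # weight matrix between L0 (784 nodes + bias) and L1 (100 nodes)
# np.dot(W0, X_batch) would fail: shapes (100, 785) and (200, 784) do not align.
A0 = np.vstack([np.ones((1, 200)), X_batch.T])   # transpose and add a bias row => shape (785, 200)
Z1 = np.dot(W0, A0)                              # shape (100, 200) - one column per sample
print(Z1.shape)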
We transfer our lists (mutable Python objects) to “_fw_propagation()”. Only a reference to each list is passed; therefore any elements appended to the lists inside “_fw_propagation()” are also available outside the called function. We can thus use the calculated results in further functions, e.g. for gradient calculations, which will later be called from within “_handle_mini_batch()”.
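For readers less familiar with Python's way of passing arguments, a tiny sketch of this mechanism (with a made-up function “fill_list()”):
def fill_list(ay_list):
    # the function appends to the very list object the caller passed in - no copy is made
    ay_list.append("new element")

ay_results = []
fill_list(ay_results)
print(ay_results)   # ['new element'] - the change is visible outside the function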
Forward Propagation
Numpy matrix multiplication for layers of simple feed forward ANNs
''' -- Method to handle FW propagation for a mini-batch --'''
def _fw_propagation(self, ay_Z_in, ay_A_out, b_print= False):
b_internal_timing = False
# index range of layers
ilayer = range(0, self._n_total_layers-1)
# propagation loop
for il in ilayer:
if b_internal_timing: start_0 = time.perf_counter()
if b_print:
print("\nStarting propagation between L" + str(il) + " and L" + str(il+1))
print("Shape of Z_in of layer L" + str(il) + " (without bias) = " + str(ay_Z_in[il].shape))
# Step 1: Take input of last layer and apply activation function
if il == 0:
A_out_il = ay_Z_in[il] # L0: activation function is identity
else:
A_out_il = self._act_func( ay_Z_in[il] ) # use real activation function
# Step 2: Add bias node
A_out_il = self._add_bias_neuron_to_layer(A_out_il, 'row')
# save in array
ay_A_out.append(A_out_il)
if b_print:
print("Shape of A_out of layer L" + str(il) + " (with bias) = " + str(ay_A_out[il].shape))
# Step 3: Propagate by matrix operation
Z_in_ilp1 = np.dot(self._ay_w[il], A_out_il)
ay_Z_in.append(Z_in_ilp1)
if b_internal_timing:
end_0 = time.perf_counter()
print('Time_CPU for layer propagation L' + str(il) + ' to L' + str(il+1), end_0 - start_0)
# treatment of the last layer
il = il + 1
if b_print:
print("\nShape of Z_in of layer L" + str(il) + " = " + str(ay_Z_in[il].shape))
A_out_il = self._out_func( ay_Z_in[il] ) # use the output function
ay_A_out.append(A_out_il)
if b_print:
print("Shape of A_out of last layer L" + str(il) + " = " + str(ay_A_out[il].shape))
return None
#
First we set a range for a loop over the layers. Then we apply the activation function. In “step 2” we add a bias node to the layer – compare this with the weight-matrix dimensions we chose during initialization in the last article. In step 3 we propagate via the vectorized Numpy matrix multiplication (np.dot-operation). Note that this works for layer L0, too, because we already transposed the input array for this layer in “_handle_mini_batch()”!
''' Method to add values for a bias neuron to A_out '''
def _add_bias_neuron_to_layer(self, A, how='column'):
if how == 'column':
A_new = np.ones((A.shape[0], A.shape[1]+1))
A_new[:, 1:] = A
elif how == 'row':
A_new = np.ones((A.shape[0]+1, A.shape[1]))
A_new[1:, :] = A
return A_new
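A quick illustration (outside the class) of what the “row” variant does to the shape of an activation matrix:
import numpy as np
A = np.random.rand(100, 200)              # activations of 100 nodes for 200 samples
A_new = np.ones((A.shape[0] + 1, A.shape[1]))
A_new[1:, :] = A                          # the first row keeps the bias values (1.0)
print(A_new.shape)                        # (101, 200)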
A first test
Input data for dataset mnist_keras :
Original shape of X_train = (60000, 28, 28)
Original Shape of y_train = (60000,)
Original shape of X_test = (10000, 28, 28)
Original Shape of y_test = (10000,)
Final input data for dataset mnist_keras :
Shape of X_train = (60000, 784)
Shape of y_train = (60000,)
Shape of X_test = (10000, 784)
Shape of y_test = (10000,)
We have 60000 data sets for training
Feature dimension is 784 (= 28x28)
The number of labels is 10
Shape of y_train = (60000,)
Shape of ay_onehot = (10, 60000)
Values of the enumerate structure for the first 12 elements :
(0, 6)
(1, 8)
(2, 4)
(3, 8)
(4, 6)
(5, 5)
(6, 9)
(7, 1)
(8, 3)
(9, 8)
(10, 9)
(11, 0)
Labels for the first 12 datasets:
Shape of ay_onehot = (10, 60000)
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 1. 0. 1. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 1. 0.]]
The node numbers for the 4 layers are :
[784 100 50 10]
Shape of weight matrix between layers 0 and 1 (100, 785)
Creating weight matrix for layer 1 to layer 2
Shape of weight matrix between layers 1 and 2 = (50, 101)
Creating weight matrix for layer 2 to layer 3
Shape of weight matrix between layers 2 and 3 = (10, 51)
The activation function of the standard neurons was defined as "sigmoid"
The activation function gives for z=2.0: 0.8807970779778823
The output function of the neurons in the output layer was defined as "sigmoid"
The output function gives for z=2.0: 0.8807970779778823
num of mini_batches = 300
number of batches : 300
length of first batch : 200
length of last batch : 200
number of epochs = 1
max number of batches = 2
---------
Starting epoch 1
---------
Dealing with mini-batch 1
Starting propagation between L0 and L1
Shape of Z_in of layer L0 (without bias) = (784, 200)
Shape of A_out of layer L0 (with bias) = (785, 200)
Starting propagation between L1 and L2
Shape of Z_in of layer L1 (without bias) = (100, 200)
Shape of A_out of layer L1 (with bias) = (101, 200)
Starting propagation between L2 and L3
Shape of Z_in of layer L2 (without bias) = (50, 200)
Shape of A_out of layer L2 (with bias) = (51, 200)
Shape of Z_in of layer L3 = (10, 200)
Shape of A_out of last layer L3 = (10, 200)
----
After propagation through all layers:
Shape of Z_in of layer L0 = (784, 200)
Shape of A_out of layer L0 = (785, 200)
Shape of Z_in of layer L1 = (100, 200)
Shape of A_out of layer L1 = (101, 200)
Shape of Z_in of layer L2 = (50, 200)
Shape of A_out of layer L2 = (51, 200)
Shape of Z_in of layer L3 = (10, 200)
Shape of A_out of layer L3 = (10, 200)
---------
Dealing with mini-batch 2
Starting propagation between L0 and L1
Shape of Z_in of layer L0 (without bias) = (784, 200)
Shape of A_out of layer L0 (with bias) = (785, 200)
Starting propagation between L1 and L2
Shape of Z_in of layer L1 (without bias) = (100, 200)
Shape of A_out of layer L1 (with bias) = (101, 200)
Starting propagation between L2 and L3
Shape of Z_in of layer L2 (without bias) = (50, 200)
Shape of A_out of layer L2 (with bias) = (51, 200)
Shape of Z_in of layer L3 = (10, 200)
Shape of A_out of last layer L3 = (10, 200)
----
After propagation through all layers:
Shape of Z_in of layer L0 = (784, 200)
Shape of A_out of layer L0 = (785, 200)
Shape of Z_in of layer L1 = (100, 200)
Shape of A_out of layer L1 = (101, 200)
Shape of Z_in of layer L2 = (50, 200)
Shape of A_out of layer L2 = (51, 200)
Shape of Z_in of layer L3 = (10, 200)
Shape of A_out of layer L3 = (10, 200)
------
Total training Time_CPU: 0.010270356000546599
Stopping program regularily
stopped
We see that the dimensions of the Numpy arrays fit our expectations!
Linux, OpenBlas and Numpy matrix multiplications – avoid using all processor cores

The node numbers for the 4 layers are :
[784 100 50 10]
Shape of weight matrix between layers 0 and 1 (100, 785)
Creating weight matrix for layer 1 to layer 2
Shape of weight matrix between layers 1 and 2 = (50, 101)
Creating weight matrix for layer 2 to layer 3
Shape of weight matrix between layers 2 and 3 = (10, 51)
The activation function of the standard neurons was defined as "sigmoid"
The activation function gives for z=2.0: 0.8807970779778823
The output function of the neurons in the output layer was defined as "sigmoid"
The output function gives for z=2.0: 0.8807970779778823
num of mini_batches = 150
number of batches : 150
length of first batch : 400
length of last batch : 400
------
Total training Time_CPU: 146.44446582399905
Stopping program regularily
stopped
The time required to repeat this kind of forward propagation for a network with only one hidden layer of 50 neurons over 1000 epochs is around 160 secs. As backward propagation is not much more complex than forward propagation, this already indicates that we should be able to train such a simple MLP with 60000 28×28 images in less than 10 minutes on a standard CPU.
Conclusion
Linux, OpenBlas and Numpy matrix multiplications – avoid using all processor cores

Most of the consumption is due to audio. Small spikes on one CPU core due to checking incoming mails were possible – but always below 20%.
Basics
The core ingredients to get an ANN running are matrix operations. More precisely: multiplications of 2-dim Numpy matrices (weight matrices) with input vectors. The dimensions of the weight matrices reflect the node numbers of consecutive ANN layers. The dimension of the input vector depends on the node number of the lower of the two neighboring layers.
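The following sketch (with layer sizes as in our test runs; not part of the class) shows the kind of multiplication OpenBlas has to handle and how one may time it:
import time
import numpy as np
N = 10000                              # number of samples treated in parallel
W = np.random.rand(100, 785)           # weight matrix: 785 input nodes (incl. bias) to 100 nodes
A = np.random.rand(785, N)             # samples arranged column-wise
start = time.perf_counter()
Z = np.dot(W, A)                       # this is the operation OpenBlas parallelizes over cores
end = time.perf_counter()
print(Z.shape, end - start)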
The reaction of OpenBlas to an MLP with 4 layers comprising 784, 100, 50, 10 nodes
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
(ml1) myself@mytux:/projekte/GIT/ai/ml1/lib64/python3.6/site-packages/numpy/core> ldd _multiarray_umath.cpython-36m-x86_64-linux-gnu.so
linux-vdso.so.1 (0x00007ffe8bddf000)
libopenblasp-r0-2ecf47d5.3.7.dev.so => /projekte/GIT/ai/ml1/lib/python3.6/site-packages/numpy/core/./../.libs/libopenblasp-r0-2ecf47d5.3.7.dev.so (0x00007fdd9d15f000)
libm.so.6 => /lib64/libm.so.6 (0x00007fdd9ce27000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fdd9cc09000)
libc.so.6 => /lib64/libc.so.6 (0x00007fdd9c84f000)
/lib64/ld-linux-x86-64.so.2 (0x00007fdd9f4e8000)
libgfortran-ed201abd.so.3.0.0 => /projekte/GIT/ai/ml1/lib/python3.6/site-packages/numpy/core/./../.libs/libgfortran-ed201abd.so.3.0.0 (0x00007fdd9c555000)

The reaction of OpenBlas to an MLP with layers comprising 784, 300, 140, 10 nodes
The CPU consumption even for a batch-size of only 50 is shown below:

Limiting the number of available cores to OpenBlas
The overall optimum occurs for 400 < N < 500 for C = 1, 2, 3, 4 – with the minimum region being broadest for C = 3. The absolute minimum is reached on my CPU for C = 4.
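A common way to restrict the number of cores OpenBlas may grab is to set the corresponding environment variables before Numpy is imported – a sketch, assuming a Numpy build linked against OpenBlas:
import os
# limit the OpenBlas / OpenMP thread pools to 3 cores;
# this must happen before the first "import numpy"
os.environ["OPENBLAS_NUM_THREADS"] = "3"
os.environ["OMP_NUM_THREADS"] = "3"
import numpy as np
# all subsequent np.dot() calls now use at most 3 threads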
CPU-consumption
Dependency on the size of the weight-matrices and the node numbers

Conclusion
Links
stackoverflow: set-max-number-of-threads-at-runtime-on-numpy-openblas
codereview.stackexchange: better-way-to-set-number-of-threads-used-by-numpy
A simple Python program for an ANN to cover the MNIST dataset – II – initial random weight values
Present status of our function “__init__”
def __init__(self,
my_data_set = "mnist",
n_hidden_layers = 1,
ay_nodes_layers = [0, 100, 0], # array which should have as many elements as n_hidden + 2
n_nodes_layer_out = 10, # expected number of nodes in output layer
my_activation_function = "sigmoid",
my_out_function = "sigmoid",
n_mini_batch = 1000, # number of data elements in a mini-batch
vect_mode = 'cols',
figs_x1=12.0, figs_x2=8.0,
legend_loc='upper right',
b_print_test_data = True
):
'''
Initialization of MyANN
Input:
data_set: type of dataset; so far only the "mnist", "mnist_784" and "mnist_keras" datasets are known
We use this information to prepare the input data and learn about the feature dimension.
This info is used in preparing the size of the input layer.
n_hidden_layers = number of hidden layers => between input layer 0 and output layer n
ay_nodes_layers = [0, 100, 0 ] : We set the number of nodes in input layer_0 and the output_layer to zero
Will be set to real number afterwards by infos from the input dataset.
All other numbers are used for the node numbers of the hidden layers.
n_nodes_out_layer = expected number of nodes in the output layer (is checked);
this number corresponds to the number of categories NC = number of labels to be distinguished
my_activation_function : name of the activation function to use
my_out_function : name of the "activation" function of the last layer which produces the output values
n_mini_batch : Number of elements/samples in a mini-batch of training data
vect_mode: Are 1-dim data arrays (vectors) ordered by columns or rows ?
figs_x1=12.0, figs_x2=8.0 : Standard sizing of plots ,
legend_loc='upper right': Position of legends in the plots
b_print_test_data: Boolean variable to control the print out of some tests data
'''
# Array (Python list) of known input data sets
self._input_data_sets = ["mnist", "mnist_784", "mnist_keras"]
self._my_data_set = my_data_set
# X, y, X_train, y_train, X_test, y_test
# will be set by analyze_input_data
# X: Input array (2D) - at present status of MNIST image data, only.
# y: result (=classification data) [digits represent categories in the case of Mnist]
self._X = None
self._X_train = None
self._X_test = None
self._y = None
self._y_train = None
self._y_test = None
# relevant dimensions
# from input data information; will be set in handle_input_data()
self._dim_sets = 0
self._dim_features = 0
self._n_labels = 0 # number of unique labels - will be extracted from y-data
# Img sizes
self._dim_img = 0 # should be sqrt(dim_features) - we assume square like images
self._img_h = 0
self._img_w = 0
# Layers
# ------
# number of hidden layers
self._n_hidden_layers = n_hidden_layers
# Number of total layers
self._n_total_layers = 2 + self._n_hidden_layers
# Nodes for hidden layers
self._ay_nodes_layers = np.array(ay_nodes_layers)
# Number of nodes in output layer - will be checked against information from target arrays
self._n_nodes_layer_out = n_nodes_layer_out
# Weights
# --------
# empty List for all weight-matrices for all layer-connections
# Numbering :
# w[0] contains the weight matrix which connects layer 0 (input layer ) to hidden layer 1
# w[1] contains the weight matrix which connects hidden layer 1 to layer 2 (a hidden or the output layer)
self._ay_w = []
# Known Randomizer methods ( 0: np.random.randint, 1: np.random.uniform )
# ------------------
self.__ay_known_randomizers = [0, 1]
# Types of activation functions and output functions
# ------------------
self.__ay_activation_functions = ["sigmoid"] # later also relu
self.__ay_output_functions = ["sigmoid"] # later also softmax
# the following dictionaries will be used for indirect function calls
self.__d_activation_funcs = {
'sigmoid': self._sigmoid,
'relu': self._relu
}
self.__d_output_funcs = {
'sigmoid': self._sigmoid,
'softmax': self._softmax
}
# The following variables will later be set by _check_and_set_activation_and_out_functions()
self._my_act_func = my_activation_function
self._my_out_func = my_out_function
self._act_func = None
self._out_func = None
# number of data samples in a mini-batch
self._n_mini_batch = n_mini_batch
# print some test data
self._b_print_test_data = b_print_test_data
# Plot handling
# --------------
# Alternatives to resize plots
# 1: just resize figure 2: resize plus create subplots() [figure + axes]
self._plot_resize_alternative = 1
# Plot-sizing
self._figs_x1 = figs_x1
self._figs_x2 = figs_x2
self._fig = None
self._ax = None
# alternative 2 does resizing and (!) subplots()
self.initiate_and_resize_plot(self._plot_resize_alternative)
# ***********
# operations
# ***********
# check and handle input data
self._handle_input_data()
print("\nStopping program regularily")
sys.exit()
The kind reader may have noticed that this is not exactly what was presented in the last article. I have introduced two additional parameters and corresponding class attributes: “b_print_test_data” and “n_mini_batch”.
Setting node numbers of the layers
# Method which checks the number of nodes given for hidden layers
def _check_layer_and_node_numbers(self):
try:
if (self._n_total_layers != (self._n_hidden_layers + 2)):
raise ValueError
except ValueError:
print("The assumed total number of layers does not fit the number of hidden layers + 2")
sys.exit()
try:
if (len(self._ay_nodes_layers) != (self._n_hidden_layers + 2)):
raise ValueError
except ValueError:
print("The number of elements in the array for layer-nodes does not fit the number of hidden layers + 2")
sys.exit(1)
# Method which sets the number of nodes of the input and the output layer
def _set_nodes_for_input_output_layers(self):
# Input layer: for the input layer we do NOT take into account a bias node
self._ay_nodes_layers[0] = self._dim_features
# Output layer: for the output layer we check the number of unique values in y_train
try:
if ( self._n_labels != (self._n_nodes_layer_out) ):
raise ValueError
except ValueError:
print("The unique elements in target-array do not fit number of nodes in the output layer")
sys.exit(1)
self._ay_nodes_layers[self._n_total_layers - 1] = self._n_labels
# Method which prints the number of nodes of all layers
def _show_node_numbers(self):
print("\nThe node numbers for the " + str(self._n_total_layers) + " layers are : ")
print(self._ay_nodes_layers)
The code should be easy to understand. self._dim_features was set in the method “_common_handling_of_mnist()” discussed in the last article. It was derived from the shape of the input data array _X_train. The number of unique labels was evaluated by the method “_get_num_labels()” – also discussed in the last article.
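For illustration, a tiny stand-in (with hypothetical arrays of MNIST-like shape) for how these two numbers are derived:
import numpy as np
X_train = np.zeros((60000, 784))            # hypothetical stand-in for the training images
y_train = np.random.randint(0, 10, 60000)   # hypothetical stand-in for the training labels
dim_features = X_train.shape[1]             # 784 features per sample
n_labels = len(np.unique(y_train))          # 10 distinct digit labels
print(dim_features, n_labels)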
# ***********
# operations
# ***********
# check and handle input data
self._handle_input_data()
# check consistency of the node-number list with the number of hidden layers (n_hidden)
self._check_layer_and_node_numbers()
# set node numbers for the input layer and the output layer
self._set_nodes_for_input_output_layers()
self._show_node_numbers()
print("\nStopping program regularily")
sys.exit()
ANN = myann.MyANN(my_data_set="mnist_keras", n_hidden_layers = 2,
ay_nodes_layers = [0, 100, 50, 0],
n_nodes_layer_out = 10,
vect_mode = 'cols',
figs_x1=12.0, figs_x2=8.0,
legend_loc='upper right',
b_print_test_data = False
)
The node numbers for the 4 defined layers are:
[784 100 50 10]
Setting initial random numbers for the weights
# ---
# method to create an array of randomized values
def _create_vector_with_random_values(self, r_low=None, r_high=None, r_size=None, randomizer=0 ):
'''
Method to create a vector of length "r_size" with "random values" in [r_low, r_high]
generated by method "randomizer"
Input:
randomizer : integer which sets the randomizer method; presently only
0: np.random.randint
1: np.random.uniform
[r_low, r_high]: range of the random numbers to be created
r_size: Size of output array
Output: A 1-dim numpy array of length r_size - returned to the caller
'''
# check parameters
try:
if (r_low==None or r_high == None or r_size == None ):
raise ValueError
except ValueError:
print("One of
the required parameters r_low, r_high, r_size has not been set")
sys.exit(1)
rmizer = int(randomizer)
try:
if (rmizer not in self.__ay_known_randomizers):
raise ValueError
except ValueError:
print("randomizer not known")
sys.exit(1)
# 2 randomizers (so far)
if (rmizer == 0):
ay_r_out = np.random.randint(int(r_low), int(r_high), int(r_size))
if (rmizer == 1):
ay_r_out = np.random.uniform(r_low, r_high, size=int(r_size))
return ay_r_out
Presently, only two randomizer functions can be used – numpy.random.randint and numpy.random.uniform. The first one provides random integer values, the other one floating point values – both within a defined interval. The parameter “r_size” defines how many random numbers shall be created and put into an array. The code requires no further explanation.
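A short comparison of the two Numpy randomizers (independent of our class):
import numpy as np
# 5 random integers drawn from [-1, 1) - randint excludes the upper bound
print(np.random.randint(-1, 1, 5))           # e.g. [ 0 -1 -1  0  0]
# 5 random floats drawn uniformly from [-1.0, 1.0)
print(np.random.uniform(-1.0, 1.0, size=5))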
# Method to create the weight matrix between L0/L1
# ------
def _create_WM_Input(self):
'''
Method to create the input layer
The dimension will be taken from the structure of the input data
We need to fill self._ay_w[0] with a matrix for connections of all nodes in L0 with all nodes in L1
We fill the matrix with random numbers between [-1, 1]
'''
# the node number of layer 0 does not yet include the bias node - we add it in the next step
num_nodes_layer_0 = self._ay_nodes_layers[0]
num_nodes_with_bias_layer_0 = num_nodes_layer_0 + 1
num_nodes_layer_1 = self._ay_nodes_layers[1]
# fill the matrix with random values
rand_low = -1.0
rand_high = 1.0
rand_size = num_nodes_layer_1 * (num_nodes_with_bias_layer_0)
randomizer = 1 # method np.random.uniform
w0 = self._create_vector_with_random_values(rand_low, rand_high, rand_size, randomizer)
w0 = w0.reshape(num_nodes_layer_1, num_nodes_with_bias_layer_0)
# put the weight matrix into array of matrices
self._ay_w.append(w0.copy())
print("\nShape of weight matrix between layers 0 and 1 " + str(self._ay_w[0].shape))
# Method to create the weight-matrices for hidden layers
def _create_WM_Hidden(self):
'''
Method to create the weights of the hidden layers, i.e. between [L1, L2] and so on ... [L_n, L_out]
We fill the matrix with random numbers between [-1, 1]
'''
# The "+1" is required due to range properties !
rg_hidden_layers = range(1, self._n_hidden_layers + 1, 1)
# for random operation
rand_low = -1.0
rand_high = 1.0
for i in rg_hidden_layers:
print ("Creating weight matrix for layer " + str(i) + " to layer " + str(i+1) )
num_nodes_layer = self._ay_nodes_layers[i]
num_nodes_with_bias_layer = num_nodes_layer + 1
# the number of the next layer is taken without the bias node!
num_nodes_layer_next = self._ay_nodes_layers[i+1]
# assign random values
rand_size = num_nodes_layer_next * num_nodes_with_bias_layer
randomizer = 1 # np.random.uniform
w_i_next = self._create_vector_with_random_values(rand_low, rand_high, rand_size, randomizer)
w_i_next = w_i_next.reshape(num_nodes_layer_next, num_nodes_with_bias_layer)
# put the weight matrix into our array of matrices
self._ay_w.append(w_i_next.copy())
print("Shape of weight matrix between layers " + str(i) + " and " + str(i+1) + " = " + str(self._ay_w[i].shape))
A few things may need explanation – the most important one being the special shape of the weight matrices: (number of nodes of the next layer, number of nodes of the present layer + 1 for the bias node). We need this special form to support the vectorized propagation properly later on.
Setting the activation and output functions
def _check_and_set_activation_and_out_functions(self):
# check for known activation function
try:
if (self._my_act_func not in self.__d_activation_funcs ):
raise ValueError
except ValueError:
print("The requested activation function " + self._my_act_func + " is not known!" )
sys.exit()
# check for known output function
try:
if (self._my_out_func not in self.__d_output_funcs ):
raise ValueError
except ValueError:
print("The requested output function " + self._my_out_func + " is not known!" )
sys.exit()
# set the function to variables for indirect addressing
self._act_func = self.__d_activation_funcs[self._my_act_func]
self._out_func = self.__d_output_funcs[self._my_out_func]
if self._b_print_test_data:
z = 7.0
print("\nThe activation function of the standard neurons was defined as \"" + self._my_act_func + '"')
print("The activation function gives for z=7.0: " + str(self._act_func(z)))
print("\nThe output function of the neurons in the output layer was defined as \"" + self._my_out_func + "\"")
print("The output function gives for z=7.0: " + str(self._out_func(z)))
It does not require much explanation. For the time being we just rely on the given definition of the “__init__”-interface, which sets both functions to “sigmoid()”. The internal dictionaries __d_activation_funcs[] and __d_output_funcs[] assign the function objects to internal variables (indirect addressing).
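A stripped-down sketch of this kind of indirect function addressing via a dictionary:
from scipy.special import expit     # sigmoid
d_activation_funcs = {'sigmoid': expit}
my_act_func = 'sigmoid'
act_func = d_activation_funcs[my_act_func]   # pick the function object by its name
print(act_func(2.0))                         # 0.8807970779778823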
Some test output
# ***********
# operations
# ***********
# check and handle input data
self._handle_input_data()
# check consistency of the node-number list with the number of hidden layers (n_hidden)
self._check_layer_and_node_numbers()
# set node numbers for the input layer and the output layer
self._set_nodes_for_input_output_layers()
self._show_node_numbers()
# create the weight matrix between input and first hidden layer
self._create_WM_Input()
# create weight matrices between the hidden layers and between the last hidden and the output layer
self._create_WM_Hidden()
# check and set activation functions
self._check_and_set_activation_and_out_functions()
print("\nStopping program regularily")
sys.exit()
Conclusion
A simple Python program for an ANN to cover the MNIST dataset – I – a starting point
Wording
Each category corresponds to a “digit”. The classification algorithm (here: the MLP) may achieve an ability to predict the association of an unknown MNIST-like input data sample with its correct category. It should – after some training – detect the (non-linear) separation interfaces for categories in a multidimensional feature space. In the case of MNIST we speak about ten categories corresponding to 10 digits, including zero.
A simple MLP network – layers, nodes, weights
In a simple MLP each layer is connected only to its direct neighbor layers – in contrast to cascaded networks where a specific layer has connections to many more than just the neighbor layers.
To feed input data into the MLP we need an “input layer” with sufficient input nodes. How many? Well, this depends on the number of features your data set represents. In the MNIST case a sample image contains 28×28 pixels, each with a gray value (an integer between 0 and 255). So a typical image represents 28×28 = 784 different “features” – i.e. 784 numbers for gray values between 0 and 255. We need as many input nodes in our MLP to represent the full image information in the input layer.
We shall use our MLP for classification tasks in the beginning. We, therefore, assume that the output of the ANN should allow for the distinction between “NC” different categories an input data set can belong to. In the case of the MNIST dataset we can distinguish between 10 different digits. Thus the output layer must in this case comprise 10 different nodes. To be able to cover other data sets with a different number of categories the number of output nodes must be a parameter of our program, too.
We want the numbers of nodes on “hidden layers” to be parameters for our program. For simple data as MNIST images we do not need big networks, but we want to be able to play around a bit with 1 up to 3 layers. (For an ANN to recognize handwritten MNIST digits an input layer “L0” and only one hidden layer “L1” before an output layer “L2” are fully sufficient.) Nevertheless, in most of our experiments we will actually use 2 hidden layers. There are several reasons: You can approximate any continuous function with two hidden layers (with a special non-linear activation function; see below) and an output layer (with just a linear output function). The other reason is that the full mathematical complexity of “learning” of an MLP appears with two hidden layers (see a later article).
The nodes in hidden layers use a so-called “activation function” to transform the aggregated input from the feeding nodes of the previous layer into one distinct value within a defined interval – e.g. between -1 and 1. Again, we should be prepared to have a program parameter to choose between different “activation functions”.
The output layer produces the final output – which in turn must allow for a distinction of categories. This may require a special form of output function – e.g. a kind of probability function. So, the type of the “output function” should also be regarded as a variable parameter.
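As an example of such a function: the sigmoid – which we shall use later via scipy's expit() – squashes any aggregated input into the open interval (0, 1):
from scipy.special import expit   # numerically stable sigmoid
import numpy as np
z = np.array([-10.0, -1.0, 0.0, 2.0, 10.0])
print(expit(z))   # all values lie strictly between 0 and 1; expit(2.0) = 0.8807970779778823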
A Python class for our ANN and its interface
Modules and libraries to import
'''
Module to create a simple layered neural network for the MNIST data set
Created on 23.08.2019
@author: ramoe
'''
import numpy as np
import math
import sys
import time
import tensorflow
from sklearn.datasets import fetch_mldata
from sklearn.datasets import fetch_openml
from keras.datasets import mnist as kmnist
from scipy.special import expit
from matplotlib import pyplot as plt
#from matplotlib.colors import ListedColormap
#import matplotlib.patches as mpat
#from keras.activations import relu
Why do we import tensorflow and keras at all? Well, only for the purpose of creating the MNIST input data quickly. Sklearn’s “fetch_mldata” is doomed to end. The alternative “fetch_openml” does not use caching in some older versions and is also in general terribly slow. But “keras”, which in turn needs tensorflow as a backend, provides its own tool to load the MNIST data.
The “__init__”-function of our class MyANN
class MyANN:
def __init__(self,
my_data_set = "mnist",
n_hidden_layers = 1,
ay_nodes_layers = [0, 100, 0], # array which should have as many elements as n_hidden + 2
n_nodes_layer_out = 10, # number of nodes in output layer
my_activation_function = "sigmoid",
my_out_function = "sigmoid",
vect_mode = 'cols',
figs_x1=12.0, figs_x2=8.0,
legend_loc='upper right'
):
'''
Initialization of MyANN
Input:
data_set: type of dataset; so far only the "mnist", "mnist_784" and the "mnist_keras" datasets are known.
We use this information to prepare the input data and learn about the feature dimension.
This info is used in preparing the size of the input layer.
n_hidden_layers = number of hidden layers => between input layer 0 and output layer n
ay_nodes_layers = [0, 100, 0 ] : We set the number of nodes in input layer_0 and the output_layer to zero
Will be set to real number afterwards by infos from the input dataset.
All other numbers are used for the node numbers of the hidden layers.
n_nodes_layer_out = expected number of nodes in the output layer (is checked);
this number corresponds to the number of categories to be distinguished
my_activation_function : name of the activation function to use
my_out_function : name of the "activation" function of the last layer which produces the output values
vect_mode: Are 1-dim data arrays (vectors) ordered by columns or rows ?
figs_x1=12.0, figs_x2=8.0 : Standard sizing of plots ,
legend_loc='upper right': Position of legends in the plots
'''
Initialization of class attributes
# Array (Python list) of known input data sets
self.__input_data_sets = ["mnist", "mnist_784", "mnist_keras"]
self._my_data_set = my_data_set
# X, y, X_train, y_train, X_test, y_test
# will be set by analyze_input_data
# X: Input array (2D) - at present status of MNIST image data, only.
# y: result (=classification data) [digits represent categories in the case of Mnist]
self._X = None
self._X_train = None
self._X_test = None
self._y = None
self._y_train = None
self._y_test = None
# relevant dimensions
# from input data information; will be set in handle_input_data()
self._dim_sets = 0
self._dim_features = 0
self._n_labels = 0 # number of unique labels - will be extracted from y-data
# Img sizes
self._dim_img = 0 # should be sqrt(dim_features) - we assume square like images
self._img_h = 0
self._img_w = 0
# Layers
# ------
# number of hidden layers
self._n_hidden_layers = n_hidden_layers
# Number of total layers
self._n_total_layers = 2 + self._n_hidden_layers
# Nodes for hidden layers
self._ay_nodes_layers = np.array(ay_nodes_layers)
# Number of nodes in output layer - will be checked against information from target arrays
self._n_nodes_layer_out = n_nodes_layer_out
# Weights
# --------
# empty List for all weight-matrices for all layer-connections
# Numbering :
# w[0] contains the weight matrix which connects layer 0 (input layer ) to hidden layer 1
# w[1] contains the weight matrix which connects hidden layer 1 to layer 2 (a hidden or the output layer)
self._ay_w = []
# Known Randomizer methods ( 0: np.random.randint, 1: np.random.uniform )
# ------------------
self.__ay_known_randomizers = [0, 1]
# Types of activation functions and output functions
# ------------------
self.__ay_activation_functions = ["sigmoid"] # later also relu
self.__ay_output_functions = ["sigmoid"] # later also softmax
# the following dictionaries will be used for indirect function calls
self.__d_activation_funcs = {
'sigmoid': self._sigmoid,
'relu': self._relu
}
self.__d_output_funcs = {
'sigmoid': self._sigmoid,
'softmax': self._softmax
}
# The following variables will later be set by _check_and_set_activation_and_out_functions()
self._my_act_func = my_activation_function
self._my_out_func = my_out_function
self._act_func = None
self._out_func = None
# Plot handling
# --------------
# Alternatives to resize plots
# 1: just resize figure 2: resize plus create subplots() [figure + axes]
self._plot_resize_alternative = 1
# Plot-sizing
self._figs_x1 = figs_x1
self._figs_x2 = figs_x2
self._fig = None
self._ax = None
# alternative 2 does resizing and (!) subplots()
self.initiate_and_resize_plot(self._plot_resize_alternative)
# ***********
# operations
# ***********
# check and handle input data
self._handle_input_data()
print("\nStopping program regularily")
sys.exit()
To make things not more complicated than necessary I omit the usage of “properties” and a full encapsulation of private attributes. For convenience reasons I use only one underscore for some attributes and functions/methods to allow for external usage. This is helpful in a testing phase. However, many items can in the end be switched to really private properties or methods.
List of known input datasets
The number of unique labels (= categories) will be stored in the attribute “_n_labels”. It is also useful to keep the pixel dimensions of the input image data. At least for MNIST we assume quadratic images (_img_h = _img_w = _dim_img).
Layers and weights
Read and provide the input data
# Method to handle different types of input data sets
def _handle_input_data(self):
'''
Method to deal with the input data:
- check if we have a known data set ("mnist" so far)
- reshape as required
- analyze dimensions and extract the feature dimension(s)
'''
# check for known dataset
try:
if (self._my_data_set not in self._input_data_sets ):
raise ValueError
except ValueError:
print("The requested input data" + self._my_data_set + " is not known!" )
sys.exit()
# handle the mnist original dataset
if ( self._my_data_set == "mnist"):
mnist = fetch_mldata('MNIST original')
self._X, self._y = mnist["data"], mnist["target"]
print("Input data for dataset " + self._my_data_set + " : \n" + "Original shape of X = " + str(self._X.shape) +
"\n" + "Original shape of y = " + str(self._y.shape))
self._X_train, self._X_test, self._y_train, self._y_test = self._X[:60000], self._X[60000:], self._y[:60000], self._y[60000:]
# handle the mnist_784 dataset
if ( self._my_data_set == "mnist_784"):
mnist2 = fetch_openml('mnist_784', version=1, cache=True, data_home='~/scikit_learn_data')
self._X, self._y = mnist2["data"], mnist2["target"]
print ("data fetched")
# the target categories are given as strings not integers
self._y = np.array([int(i) for i in self._y])
print ("data modified")
print("Input data for dataset " + self._my_data_set + " : \n" + "Original shape of X = " + str(self._X.shape) +
"\n" + "Original shape of y = " + str(self._y.shape))
self._X_train, self._X_test, self._y_train, self._y_test = self._X[:60000], self._X[60000:], self._y[:60000], self._y[60000:]
# handle the mnist_keras dataset
if ( self._my_data_set == "mnist_keras"):
(self._X_train, self._y_train), (self._X_test, self._y_test) = kmnist.load_data()
len_train = self._X_train.shape[0]
#print(len_train)
print("Input data for dataset " + self._my_data_set + " : \n" + "Original shape of X_train = " + str(self._X_train.shape) +
"\n" + "Original Shape of y_train = " + str(self._y_train.shape))
len_test = self._X_test.shape[0]
#print(len_test)
print("Original shape of X_test = " + str(self._X_test.shape) +
"\n" + "Original Shape of y_test = " + str(self._y_test.shape))
self._X_train = self._X_train.reshape(len_train, 28*28)
self._X_test = self._X_test.reshape(len_test, 28*28)
# Common Mnist handling
if ( self._my_data_set == "mnist" or self._my_data_set == "mnist_784" or self._my_data_set == "mnist_keras" ):
self._common_handling_of_mnist()
# Other input data sets can not yet be handled
We first check whether the input parameter fits a known dataset – and raise an error otherwise. The data come in different forms for the three sources of MNIST. For each set we want to extract the training and test arrays (X_train, y_train, X_test, y_test).
Analysis of the input data and the one-hot-encoding of the target labels
# Method for common input data handling of Mnist data sets
def _common_handling_of_mnist(self):
print("\nFinal input data for dataset " + self._my_data_set +
" : \n" + "Shape of X_train = " + str(self._X_train.shape) +
"\n" + "Shape of y_train = " + str(self._y_train.shape) +
"\n" + "Shape of X_test = " + str(self._X_test.shape) +
"\n" + "Shape of y_test = " + str(self._y_test.shape)
)
# mixing the training indices
shuffled_index = np.random.permutation(60000)
self._X_train, self._y_train = self._X_train[shuffled_index], self._y_train[shuffled_index]
# set dimensions
self._dim_sets = self._y_train.shape[0]
self._dim_features = self._X_train.shape[1]
self._dim_img = math.sqrt(self._dim_features)
# we assume square images
self._img_h = int(self._dim_img)
self._img_w = int(self._dim_img)
# Print dimensions
print("\nWe have " + str(self._dim_sets) + " data sets for training")
print("Feature dimension is " + str(self._dim_features) + " (= " + str(self._img_w)+ "x" + str(self._img_h) + ")")
# we need to encode the digit labels of mnist
self._get_num_labels()
self._encode_all_mnist_labels()
As you see we set some of our class attributes which we shall use during training and do some printing. This is trivial. Not as trivial is, however, the handling of the output data:
# Method to get the number of target labels
def _get_num_labels(self):
self._n_labels = len(np.unique(self._y_train))
print("The number of labels is " + str(self._n_labels))
# Method to encode all mnist labels
def _encode_all_mnist_labels(self, b_print=True):
'''
We shall use vectorized input and output - i.e. we process a whole batch of input data sets in parallel
(see article in the Linux blog)
The output array will then have the form OUT(i_out_node, idx) where
i_out_node enumerates the node of the last layer (i.e. the category)
idx enumerates the data set within a batch,
After training, if y_train[idx] = 6, we would expect an output value of OUT[6,idx] = 1.0 and OUT[i_node, idx]=0.0 otherwise
for a categorization decision in the ideal case. Realistically, we will get a distribution of numbers over the nodes
with values between 0.0 and 1.0 - with hopefully the maximum value at the right node OUT[6,idx].
The following method creates an array OneHot[i_out_node, idx] with
OneHot[i_node_out, idx] = 1.0, if i_node_out = int(y[idx])
OneHot[i_node_out, idx] = 0.0, if i_node_out != int(y[idx])
This will allow for a vectorized comparison of calculated values and known values during training
'''
self._ay_onehot = np.zeros((self._n_labels, self._y_train.shape[0]))
# ay_oneval is just for convenience and printing purposes
self._ay_oneval = np.zeros((self._n_labels, self._y_train.shape[0], 2))
if b_print:
print("\nShape of y_train = " + str(self._y_train.shape))
print("Shape of ay_onehot = " + str(self._ay_onehot.shape))
# the next block is just for illustration purposes and a better understanding
if b_print:
values = enumerate(self._y_train[0:12])
print("\nValues of the enumerate structure for the first 12 elements : = ")
for iv in values:
print(iv)
# here we prepare the array for vectorized comparison
print("\nLabels for the first 12 datasets:")
for idx, val in enumerate(self._y_train):
self._ay_onehot[val, idx ] = 1.0
self._ay_oneval[val, idx, 0] = 1.0
self._ay_oneval[val, idx, 1] = val
if b_print:
print("\nShape of ay_onehot = " + str(self._ay_onehot.shape))
print(self._ay_onehot[:, 0:12])
#print("Shape of ay_oneval = " + str(self._ay_oneval.shape))
#print(self._ay_oneval[:, 0:12, :])
The first method only determines the number of labels (= number of categories). We see from the code of the second method that we encode the target labels in the form of two arrays. The relevant one for our optimization algorithm will be “_ay_onehot”. This array is 2-dimensional. Why?
Working with mini-batches
The so called “cost function” will be determined as some peculiar sum over all elements of a batch and the usual evaluation of partial derivatives during gradient descent will be based on matrix operations involving all input elements of a defined batch!
A single element of the batch is an array of 784 feature values. The corresponding output array is an array with values for 10 categories (here digits). But, what about a whole bunch of test data, i.e. a “batch”?
Numpy matrix multiplication for layers of simple feed forward ANNs
The output array for a batch of test data will have the form “_ay_a_Out[i_out_node, idx]”, with “i_out_node” enumerating the nodes of the output layer (i.e. the categories) and “idx” enumerating the samples within the batch.
Later on we must compare the real results for the training samples with the expected correct values. To be able to do this we must build up a 2-dim array of the same shape as “_ay_a_Out” with the correct output values for all samples of the batch. E.g.: If we expect the digit 7 for the input array of a sample with index idx within the set of training data, we need a 2-dim output array whose column at position idx is [0,0,0,0,0,0,0,1,0,0]. The derivation of such an array from a given category label is called “one-hot-encoding”.
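A minimal stand-alone sketch of this one-hot-encoding for a handful of (made-up) labels:
import numpy as np
y_batch = np.array([6, 8, 4, 7])      # labels of 4 hypothetical samples
n_labels = 10
ay_onehot = np.zeros((n_labels, y_batch.shape[0]))
for idx, val in enumerate(y_batch):
    ay_onehot[val, idx] = 1.0         # one 1.0 per column, placed at row = label
print(ay_onehot[:, 0])                # column for the first sample (label 6)
# [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]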
First tests via a Jupyter notebook
myself@mytux:/projekte/GIT/ai/ml1> source bin/activate
(ml1) myself@mytux:/projekte/GIT/ai/ml1> jupyter notebook
[I 15:07:30.953 NotebookApp] Writing notebook server cookie secret to /run/user/21001/jupyter/notebook_cookie_secret
[I 15:07:38.754 NotebookApp] jupyter_tensorboard extension loaded.
[I 15:07:38.754 NotebookApp] Serving notebooks from local directory: /projekte/GIT/ai/ml1
[I 15:07:38.754 NotebookApp] The Jupyter Notebook is running at:
[I 15:07:38.754 NotebookApp] http://localhost:8888/?token=06c2626c8724f65d1e3c4a50457da0d6db414f88a40c7baf
[I 15:07:38.755 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 15:07:38.771 NotebookApp]
Conclusion
In the next article – “A simple Python program for an ANN to cover the MNIST dataset – II – initial random weight values” – we shall define initial weights for our ANN.
Literature and links
“Python machine Learning”, Seb. Raschka, 2016, Packt Publishing, Birmingham, UK
“Machine Learning mit Scikit-Learn & TensorFlow”, A. Geron, 2018, O’REILLY, dpunkt.verlag GmbH, Heidelberg, Deutschland
https://towardsdatascience.com/introduction-to-logistic-regression-66248243c148
https://cmci.colorado.edu/classes/INFO-4604/files/slides-5_logistic.pdf
Wikipedia article on Loss functions for classification
https://towardsdatascience.com/optimization-loss-function-under-the-hood-part-ii-d20a239cde11
https://stackoverflow.com/questions/32986123/why-the-cost-function-of-logistic-regression-has-a-logarithmic-expression
https://medium.com/technology-nineleaps/logistic-regression-gradient-descent-optimization-part-1-ed320325a67e
https://blog.algorithmia.com/introduction-to-loss-functions/
uni leipzig on logistic regression
Further articles in this series
A simple Python program for an ANN to cover the MNIST dataset – XIII – the impact of regularization
A simple Python program for an ANN to cover the MNIST dataset – XII – accuracy evolution, learning rate, normalization
A simple Python program for an ANN to cover the MNIST dataset – XI – confusion matrix
A simple Python program for an ANN to cover the MNIST dataset – X – mini-batch-shuffling and some more tests
A simple Python program for an ANN to cover the MNIST dataset – IX – First Tests
A simple Python program for an ANN to cover the MNIST dataset – VIII – coding Error Backward Propagation
A simple Python program for an ANN to cover the MNIST dataset – VII – EBP related topics and obstacles
A simple Python program for an ANN to cover the MNIST dataset – VI – the math behind the „error back-propagation“
A simple Python program for an ANN to cover the MNIST dataset – V – coding the loss function
A simple Python program for an ANN to cover the MNIST dataset – IV – the concept of a cost or loss function
A simple Python program for an ANN to cover the MNIST dataset – III – forward propagation
A simple Python program for an ANN to cover the MNIST dataset – II – initial random weight values