A simple Python program for an ANN to cover the MNIST dataset – VII – EBP related topics and obstacles

I continue with my series about a Python program to build simple MLPs:

A simple Python program for an ANN to cover the MNIST dataset – VI – the math behind the „error back-propagation“
A simple program for an ANN to cover the Mnist dataset – V – coding the loss function
A simple program for an ANN to cover the Mnist dataset – IV – the concept of a cost or loss function
A simple program for an ANN to cover the Mnist dataset – III – forward propagation
A simple program for an ANN to cover the Mnist dataset – II – initial random weight values
A simple program for an ANN to cover the Mnist dataset – I – a starting point

On our tour we have already learned a lot about multiple aspects of MLP usage, among them forward propagation, matrix operations, and loss or cost functions. In the last article of this series
A simple program for an ANN to cover the Mnist dataset – VI – the math behind the „error back-propagation“
I tried to explain some of the math which governs “Error Back Propagation” [EBP]. See the PDF attached to the last article.

EBP is an algorithm which applies the “Gradient Descent” method for the optimization of the weights of a Multilayer Perceptron [MLP]. “Gradient Descent” itself is a method where we step-wise follow short tracks perpendicular to contour lines of a hyperplane in a multidimensional parameter space to hopefully approach a global minimum. A step means a change of parameter values – in our context of weights. In our case the hyperplane is the surface formed by the cost function over the weights. If we have m weights we get a hyperplane in an (m+1) dimensional space.

To apply gradient descent we have to calculate partial derivatives of the cost function with respect to the weights. We have discussed this in detail in the last article. If you read the PDF you certainly have noted: Most of the time we shall execute matrix operations to provide the components of the weight gradient. Of course, we must guarantee that the matrices’ dimensions fit each other such that the required operations – such as an element-wise multiplication and the numpy.dot(X,Y)-operation – become executable.
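To make this concrete, here is a minimal numpy sketch of a single gradient descent step for a layer's weight matrix, once the partial derivatives have been collected in a gradient matrix of the same shape. The names are my own assumptions for illustration and not the actual methods of our class:

import numpy as np

# One gradient descent step - a sketch with assumed names, not our class code
# ay_W:    weight matrix of a layer
# ay_grad: partial derivatives of the cost function with respect to these weights
# learn_rate: step size along the negative gradient direction
def gradient_descent_step(ay_W, ay_grad, learn_rate=0.01):
    assert ay_W.shape == ay_grad.shape      # the dimensions must fit each other
    return ay_W - learn_rate * ay_grad      # step against the gradient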

Unfortunately, there are some challenges regarding this point which we have not covered, yet. One objective of this article is to get prepared for these potential problems before we start coding EBP.

Another point worth discussing is: Is there really just one cost function when we use mini-batches in combination with gradient descent? Judging from the descriptions and the formulas in the PDF of the last article this was and is not fully clear. There we only built sums over the cost contributions of all the records in a mini-batch. We did NOT use a loss function which assigned costs to the deviations of the predicted results (after forward propagation) from the known values for all training data records.

This triggers the question of what our code in the end really does if and when it works with mini-batches during weight optimization … We start with this point.

In the following I try to keep the writing close to the quantity notations in the PDF. Sorry for a bad display of the δs in HTML.

Gradient descent and mini-batches – one or multiple cost functions?

Regarding the formulas given so far, we obviously handle costs and gradient descent batch-wise. I.e. each mini-batch has its own cost function – with fewer contributions than a cost function for all records would have. Each cost function has (hopefully) a defined position of a global minimum in the weights’ parameter space. Taking this into consideration the whole mini-batch approach is obviously based on some conceptually important assumptions:

  • The basic idea is that the positions of the global minima of all the cost-functions for the different batches do not deviate too much from each other in the basic parameter space.
  • If we additionally defined a cost function for all training data records (over all batches) then this cost function should display a global minimum positioned in between the ones of the batches’ cost functions.
  • This also means that there should be enough records in each batch, with a really statistical distribution of their properties and no specialties associated with them.
  • Contour lines and gradients on the hyperplanes defined by the loss functions will differ from each other. On average over all mini-batches this should not hinder convergence into a common optimum.

To understand the last point let us assume that we have a batch for the MNIST dataset where all records of handwritten digits show a tendency to be shifted to the left border of the basic 28×28 pixel frames. Then this batch would probably give us different weights than other batches.

To get a deeper understanding, let us take only two batches. By chance their cost functions may deviate a bit. In the plots below I have just simulated this by two assumed “cost” functions – each forming a hyperplane in 3 dimensions over only two parameter (=weight) dimensions x and y. You see that the “global” minima of the blue and the red curve deviate a bit in their position.

The next graph shows the sum, i.e. the full “cost function”, in green in comparison to the (vertically shifted and scaled) original functions.

Also here you clearly see the differences in the minima’s positions. What does this mean for gradient descent?

Firstly, the contour lines on the total cost function would deviate from the ones on the cost function hyperplanes of our 2 batches. So would the directions of the different gradients at the point presently reached in the parameter space during optimization! Working with batches therefore means jumping around on the surface of the total cost function a bit erratically and not precisely along the direction of steepest descent there. By the way: This behavior can be quite helpful to overcome local minima.

Secondly, in our simplified example we would in the end not converge completely, but jump or circle around the minimum of the total cost function. Reason: Each batch forces the weight corrections for x, y into different directions, namely those of its own minimum. So, a weight correction induced by one batch would be countered by corrections imposed by the optimization for the other batch. (Regarding MNIST it would e.g. be interesting to run a batch with handwritten digits of Europeans against a batch with digits written by Americans and see how the weights differ after gradient descent has converged for each batch.)
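By the way: Such plots can be reproduced with a few lines of matplotlib code. In the following minimal sketch I simply assume two quadratic “cost” surfaces with slightly shifted minima over the two weight dimensions x and y, plus their sum:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # needed for 3D plots in older matplotlib versions

# Two assumed batch "cost" surfaces over weight dimensions x, y - minima slightly shifted
x, y = np.meshgrid(np.linspace(-3.0, 3.0, 100), np.linspace(-3.0, 3.0, 100))
cost_batch_1 = (x - 0.5)**2 + (y - 0.3)**2            # minimum near (0.5, 0.3)
cost_batch_2 = 1.2*(x + 0.4)**2 + 0.8*(y + 0.5)**2    # minimum near (-0.4, -0.5)
cost_total   = cost_batch_1 + cost_batch_2            # the "full" cost function

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, cost_batch_1, color='blue',  rstride=8, cstride=8)
ax.plot_wireframe(x, y, cost_batch_2, color='red',   rstride=8, cstride=8)
ax.plot_wireframe(x, y, cost_total,   color='green', rstride=8, cstride=8)
plt.show()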

This makes us understand multiple things:

  • Mini-batches should be built with a statistical distribution of records and their composition should be changed statistically from epoch to epoch (see the sketch after this list).
  • We need a criterion to stop iterating over too many epochs senselessly.
  • We should investigate whether the number and thus the size of mini-batches influences the results of EBP.
  • At the end of an optimization run we could invest in some more iterations not for the batches, but for the full cost function of all training records and see if we can get a little deeper into the minimum of this total cost function.
  • We should analyze our batches – if we keep them up and do not create them statistically anew at the beginning of each epoch – for special data records whose properties deviate strongly from the norm, and maybe eliminate those data records.
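Regarding the first point of the list: a minimal sketch of how one could re-shuffle the training records into mini-batches at the beginning of each epoch might look as follows; the names are assumptions of mine and do not correspond to methods of our class:

import numpy as np

# Re-create mini-batches with a random record composition at the start of each epoch - a sketch
# ay_X: array of training records, shape (n_records, n_features)
# ay_Y: array of labels for these records
def create_mini_batches(ay_X, ay_Y, batch_size=500):
    n_records = ay_X.shape[0]
    idx = np.random.permutation(n_records)      # statistical re-distribution of the records
    ay_X_shuffled, ay_Y_shuffled = ay_X[idx], ay_Y[idx]
    return [(ay_X_shuffled[i:i + batch_size], ay_Y_shuffled[i:i + batch_size])
            for i in range(0, n_records, batch_size)]

# usage per epoch:
# for X_batch, Y_batch in create_mini_batches(ay_X_train, ay_Y_train, 500):
#     ...   # forward propagation, EBP and weight corrections for this batch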

Repetition: Why Back-propagation of 2 dimensional matrices and not vectors?

The step-wise matrix operations of EBP are to be performed according to a scheme with the following structure:

  • On a given layer N apply a layer specific matrix “NW.T” (depending on the weights there) by some operational rule on some matrix “(N+1)δS“, which contains some data already calculated for layer (N+1).
  • Take the result and modify it properly by multiplying it element-wise with some other matrix ND (containing derivative expressions for the activation function) until you get a new NδS.
  • Get partial derivatives of the cost function with respect to the weights on layer (N-1) by a further matrix operation of NδS on a matrix with output values (N-1)A.TS on layer (N-1).
  • Proceed to the next layer in backward direction.
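Expressed as a rough numpy sketch this scheme looks as follows. The names are assumptions of mine, the handling of bias nodes is omitted here (see the pitfalls below), and li_W[N] is meant to contain the weight matrix connecting layer N to layer N+1:

import numpy as np

# Rough sketch of the backward scheme described above - assumed names, no bias node handling
# li_W[N]: weight matrix between layer N and layer N+1
# li_D[N]: matrix with derivatives of the activation function at layer N
# li_A[N]: matrix with output values of layer N
# delta_E: matrix of error dependent quantities at the outermost layer E
def backward_pass(li_W, li_D, li_A, delta_E, E):
    li_grad = [None] * E                           # one gradient matrix per weight matrix
    delta = delta_E
    li_grad[E - 1] = delta.dot(li_A[E - 1].T)      # dC/dW for weights between layers E-1 and E
    for N in range(E - 1, 0, -1):                  # walk backwards through the layers
        delta = (li_W[N].T.dot(delta)) * li_D[N]   # NδS from (N+1)δS, weights and derivatives
        li_grad[N - 1] = delta.dot(li_A[N - 1].T)  # dC/dW for weights between layers N-1 and N
    return li_grad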

The input into this process is a matrix of error-dependent quantities, which are defined at the output layer. These values are then back-propagated in parallel to the inner layers of our MLP.

Now, why do we propagate data matrices and not just data vectors? Why are we allowed to combine so many different multiplications and summations described in the last article when we deal with partial derivatives with respect to variables deep inside the network?

The answer to the first question is numerical efficiency. We operate on all data records of a mini-batch in parallel; see the PDF. The answer to the second question is 2-fold:

  • We are allowed to perform so many independent operations because of the linear structure of our cost-functions with respect to contributions coming from the records of a mini-batch and the fact that we just apply linear operations between layers during forward propagation. All contributions – however non-linear each may be in itself – are just summed up. And propagation itself between layers is defined to be linear.
  • The only non-linearity occurring – namely in the form of non-linear activation functions – is to be applied just on the layers. And there it works only node-wise! We do not couple values for nodes on one and the same layer.

In this sense MLPs are very simple by definition – although they may look complex! (By the way and if you wonder why MLPs are nevertheless so powerful: One reason has to do with the “Universal Approximation Theorem”; see the literature hint at the end.)

Consequence of the simplicity: We can deal with δ-values (see the PDF) for both all nodes of a layer and all records of a mini-batch in parallel.

Results derived in the last article would change dramatically if we had rules that coupled the Z- or A-values of different nodes – e.g. if the squared value at node 7 in layer X always had to be the sum of the squared values at nodes 5 and 6. Believe me: There are real networks in this world where such a type of node coupling occurs – not only in physics.

Note: As we have explained in the PDF, the nodes of a layer define one dimension of the NδS-matrices, the number of mini-batch records the other. The latter remains constant. So, during the process the δ-matrices change only one of their 2 dimensions.

Some possible pitfalls to tackle before EBP-coding

Now, my friends, we can happily start coding … Nope, there are actually some minor pitfalls, which we have to explain first.

Special cost-, activation- and output-functions

I refer to the PDF mentioned above and its formulas. The example explained there referred to the “Log Loss” function, which we took as an example cost function. In this case the outδS and the 3δS-terms at the nodes of the outermost layer turned out to be quite simple. See formulas (21), (22), (26) and (27) in the PDF.

However, there may be other cost functions for which the derivative with respect to the output vector “a” at the outermost nodes is more complicated.

In addition we may have other output or activation functions than the sigmoid function discussed in the PDF’s example. Further, the output function may differ from the activation function at inner layers. Thus, we find that the partial derivatives of these functions with respect to their variables “z” must be calculated explicitly and as needed for each layer during back propagation; i.e., we have to provide separate and specific functions for the provision of the required derivatives.

At the outermost layer we apply the general formulas (84) to (88) with matrix ED containing derivatives of the output-function Eφ(z) with respect to the input z to find EδS with E marking the outermost layer. Afterwards, however, we apply formula (92) – but this time with D-elements referring to derivatives of the standard activation-function φ used at nodes of inner layers.
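In code this means providing a separate derivative function for each activation or output function we support. A minimal sketch for the sigmoid case (with assumed names) could look like this:

import numpy as np

# Sigmoid as activation/output function plus its derivative with respect to z - a sketch
def sigmoid(ay_Z):
    return 1.0 / (1.0 + np.exp(-ay_Z))

def d_sigmoid(ay_Z):
    s = sigmoid(ay_Z)
    return s * (1.0 - s)    # elements of the D-matrices for layers with sigmoid activation

# An output function differing from the inner layers' activation function would get its own
# derivative function for building the ED-matrix at the outermost layer.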

The special case of the Log Loss function and other loss functions with critical denominators in their derivative

Formula (21) shows something interesting for the quantity outδS, which is a starting point for backward propagation: a denominator depending on critical factors, which directly involve output “a” at the outer nodes or “a” in a difference term. But in our one-hot-approach “a” may become zero or come close to it – during training by accident or by convergence! This is a dangerous thing; numerically we absolutely want to avoid any division by zero or by small numbers close to the numerical accuracy of a programming language.

What mathematically saves us in the special case of Log Loss are formulas (26) and (27), where due to some “magic” the dangerous denominator is cancelled by a corresponding factor in the numerator when we evaluate EδS.

In the general case, however, we must investigate what numerical dangers the functional form of the derivative of the loss function may bring with it. In the end there are two things we should do:

  • Build a function to directly calculate EδS and put as much mathematical knowledge about the involved functions and operations into it as possible, before employing an explicit calculation of values of the cost function’s derivative.
  • Check the involved matrices, whose elements may appear in denominators, for elements which are either zero or close to it in the sense of the achievable accuracy.

For our program this means: Whether we calculate the derivative of a cost function to get values for “outδS” will depend on the mathematical nature of the cost function. In case of Log Loss we shall avoid it. In case of MSE we shall perform the numerical operation.
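In code the distinction could look like the following sketch. I assume a sigmoid output function here; for this combination the Log Loss formulas (26)/(27) reduce the EδS-terms to a simple difference, whereas for MSE we evaluate the cost derivative explicitly and multiply it element-wise with the derivative matrix of the output function:

import numpy as np

# Sketch for providing the delta-matrix at the output layer E - assumed names;
# a sigmoid output function is assumed for the Log Loss case
# ay_A_out: output values "a" at the outermost layer;  ay_Y: one-hot encoded labels
# ay_D_out: derivatives of the output function with respect to z at the outermost layer
def delta_out_log_loss(ay_A_out, ay_Y):
    # the critical denominators cancel; no derivative of the cost function is evaluated numerically
    return ay_A_out - ay_Y

def delta_out_mse(ay_A_out, ay_Y, ay_D_out):
    # for MSE the cost derivative (a - y) is harmless and can be evaluated directly
    return (ay_A_out - ay_Y) * ay_D_out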

Handling of bias nodes

A further complication of our intended coding has its origin in the existence of bias nodes on every inner layer of the MLP. A bias node of a layer adds an additional degree of freedom when adjusting the layer’s weights; a bias node has no input, it produces only a constant output – but it is connected with weights to all normal nodes of the next layer.

Some readers who are not so familiar with “artificial neural networks” may ask: Why do we need bias nodes at all?

Well, think about a simple matrix operation on a 2-dim vector; it changes its direction and length. But if we want to approximate a function for regression or a separating hyperplane for classification by a linear operation, then we need another element which corresponds to a constant translation part in a linear transformation: z = w1*x1 + w2*x2 + const. Take a simple function y = w*x + c. The “c” controls where the line crosses the y-axis. We need such a parameter if our line is to separate clusters of points distributed somewhere in the (x,y)-plane; the w alone is not sufficient to orientate and position the hyperplane in the (x,y)-plane.

This is, very basically, what bias neurons are good for regarding the basically linear operation between two MLP-layers: They add a constant to an otherwise linear transformation.

Do we need a bias node on all layers? Definitely on the input layer. On the hidden layers, however, a trained network could evolve its weights in such a way that an effective bias neuron comes about – a node with almost zero weights on its input side. At least in principle; however, we make it easier for the MLP to converge by providing explicit “bias” neurons.

What did we do to account for bias nodes in our Python code so far? We extended the matrices describing the output arrays ay_A_out of the activation function (for input ay_Z_in) on the input and all hidden layers by an additional row of elements. This was done by the method “add_bias_neuron_to_layer()” – see the code given in article III.
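For readers who do not want to look up article III: such a method essentially just stacks a row with constant output values of 1 on top of the A-matrix of a layer. A minimal sketch (not necessarily identical to the code of our class) is:

import numpy as np

# Add a "bias neuron" with constant output 1 as an additional row to the A-matrix - a sketch
# ay_A_out: output matrix of a layer, shape (n_nodes, n_records_of_the_mini_batch)
def add_bias_neuron_to_layer(ay_A_out):
    n_records = ay_A_out.shape[1]
    ay_bias_row = np.ones((1, n_records))       # constant output of the bias node
    return np.vstack((ay_bias_row, ay_A_out))   # the A-matrix grows by one row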

The important point is that our weight matrices already got a corresponding dimension when we built them; i.e. we defined weights for the bias nodes, too. Of course, during optimization we must calculate partial derivatives of the cost function with respect to these weights.

The problem is:

We need to back-propagate a delta-matrix Nδ for layer N via ( (NW.T).dot(Nδ) ). But then we cannot apply a simple element-wise matrix multiplication with the (N-1)D(z)-matrix at layer N-1. Reason: The dimensions do not fit if we calculate the elements of D only for the existing Z-values at layer N-1.

There are two solutions for coding:

  • We can add a row artificially and intermediately to the Z-matrix to calculate the D-matrix, then calculate NδS as
    ( (NW.T).dot(Nδ) ) * (N-1)D
    and eliminate the first artificial row appearing in NδS afterwards.
  • The other option is to reduce the weight-matrix (NW) by a row intermediately and restore it again afterwards.

What we do is a matter of efficiency; in our coding we shall follow the first way and test the difference to the second way afterwards.
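A sketch of the first option, using the d_sigmoid() helper from above and names which are again my own assumptions, could look like this:

import numpy as np

# Option 1: add an artificial row to the Z-matrix of layer N-1 before building the D-matrix,
# then drop the corresponding first row of the resulting delta-matrix again - a sketch
# ay_W_N:     weight matrix between layer N-1 and layer N (including the bias weights)
# ay_delta_N: delta-matrix at layer N
# ay_Z_Nm1:   Z-matrix at layer N-1 (without a bias row)
def delta_prev_layer(ay_W_N, ay_delta_N, ay_Z_Nm1):
    n_records = ay_Z_Nm1.shape[1]
    ay_Z_ext = np.vstack((np.ones((1, n_records)), ay_Z_Nm1))   # artificial bias row
    ay_D_ext = d_sigmoid(ay_Z_ext)          # D-matrix with dimensions that now fit
    ay_delta = (ay_W_N.T.dot(ay_delta_N)) * ay_D_ext
    return ay_delta[1:, :]                  # eliminate the artificial first row again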

Check the matrix dimensions

As all steps to back-propagate and to circumvent the pitfalls require a bit of matrix wizardry we should at least check at every step during EBP backward-propagation that the dimensions of the involved matrices fit each other.
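A few assert statements are enough for this purpose. Below is a sketch with assumed names and the shape convention used in the sketches above (nodes as rows, mini-batch records as columns):

import numpy as np

# Sketch of a dimension check ahead of the EBP operations of one backward step
# ay_W:          weight matrix between the lower and the upper layer
# ay_delta_next: delta-matrix at the upper layer
# ay_D:          (possibly bias-extended) derivative matrix at the lower layer
def check_dimensions(ay_W, ay_delta_next, ay_D, n_records):
    # (W.T).dot(delta) requires matching "inner" dimensions
    assert ay_W.shape[0] == ay_delta_next.shape[0], "W.T and delta do not fit for numpy.dot()"
    # the element-wise multiplication with the D-matrix requires identical shapes
    assert (ay_W.shape[1], ay_delta_next.shape[1]) == ay_D.shape, "shapes do not fit for '*'"
    # the mini-batch dimension must remain constant during backward propagation
    assert ay_delta_next.shape[1] == n_records, "the batch dimension changed unexpectedly"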

Outlook

Guys, after having explained some of the matrix math in the previous article of this series and the problems we have to tackle whilst programming the EBP-algorithm, we are eventually well prepared to add EBP-methods to our Python class for MLP simulation. We are going to do this in the next article:
A simple program for an ANN to cover the Mnist dataset – VIII – coding Error Backward Propagation

Literature

“Machine Learning – An Applied Mathematics Introduction”, Paul Wilmott, 2019, Panda Ohana Publishing

Upgrading Win 7 to Win 10 guests on Opensuse/Linux based VMware hosts – I – some experiences

As my readers know I am not a fan of MS or any “Windows N” operating system – whatever the version number N. But some of you may be facing the same situation as me:

A customer or an employer enforces the use of MS products – as e.g. MS Office, clients for MS Exchange, Skype for Business, Sharepoint, components for effort booking and so on. For the fulfillment of most of your customer’s demands you can use browser based interfaces or Linux clients.

However, something that regularly leads to problems is the heavy use of MS Office programs or graphics tools in their latest versions. Despite other claims: A friction-less back and forth between Libreoffice and MS Office is still a dream. Crossover Office is nice – but the latest MS Office versions are often not yet covered when you need them. Another very reasonable field of using MS Windows guests on Linux is, by the way, training for pen-testing and security measures.

So, even Linux enthusiasts are sometimes forced to work with or within a native Windows environment. We would then use a virtualized Windows guest machine – on a Linux host with the help of VMware, KVM or Virtualbox. Regarding graphical performance, support of basic 3D features, DirectX and the latest USB versions in the emulated system environment, I have a tendency to use VMware Workstation, despite its high price. Don’t get me wrong: I practically never use VMware to virtualize Linux systems – for this purpose I use LXC containers or KVM. But for “Win 7” or “Win 10” VMware seemed to be a good choice – so far.

Upgrade to Win 10

During the last days of orchestrated panic regarding the transition from Windows 7 to Windows 10 I eventually gave in and upgraded some of my VMware-virtualized Windows 7 systems to Windows 10. More because of having some free time to get into this process than because of assuming a sudden drop in security. (As if we ever trusted in the security of Windows systems … I come back to security and privacy aspects in a second article.) However, on a perspective of some weeks or months the transition from Win 7 to Win 10 is probably unavoidable – if you cannot isolate your Windows machine completely from the Internet and/or from other external servers which bring a potential attack risk with them. The latter may even hold for servers of your clients.

I was a bit skeptical about the outcome of the upgrade procedure and the effort it would require on my side. A good friend of mine, who sells and administers Windows systems professionally, had told me that he had experienced a whole variety of different problems – depending on the Win 7 setup, the amount and character of application SW installed, hardware drivers and the validity of licenses.

Well, my Windows 7 Pro clients were equipped with rather elementary SW: MS Office in different versions, MS Project, Lexware, Adobe Creative suite in an old version, some mind mapping SW, Adobe Reader, Anti malware SW. The “hardware” of the virtual machines is standard, partially emulated by VMware with appropriate drivers. So, no need to be especially nervous.

To be on the safe side I also ordered a VMware WS Pro upgrade to version 15.X. (I own WS 12.5.9 and WS 14 licenses.) Reason: I had read that only WS 15.5 Pro supports the latest Win 10 versions fully. Well, reading without thinking may lead to a waste of resources – see below.

Another rumor you often hear is that Windows 10 requires rather new hardware and is quite resource-demanding. MS itself recommends buying a new PC or laptop on its websites – of course often followed by advertisements for MS notebook models on the very same web page. Yeah, money makes the world go round. Well, regarding resources for my Windows guest systems I was/am rather restrictive:

Virtual machines for MS Win never get a lot of RAM from me – a maximum of 4 GB at most. This is enough for office purposes. (All really resource-craving things I do on Linux 🙂 ). Neither do my virtualized Win systems get a lot of disk space – typically < 60 GB. I mostly use vmdk-files to provide virtual hard disks – without full space allocation at startup, but with dynamically added 4 GB extents. vmdk-files allow for an easy movement of virtual machines and simple backup procedures. And I usually give my virtual Win machines a maximum of 2 processor cores. So, these limitations contributed a bit to my skepticism. In addition I have 3D support switched on for my Win 7 guests in the virtual machine setup.

Meanwhile, I have successfully performed multiple upgrades on a rather old Linux host with an i7 950 CPU and on newer hosts with i7 6700K and modern i9 9900 processors. All hosts run Opensuse Leap 15.1; I did not find the time to test my Debian hosts, yet.

I had some nice and some annoying experiences. I also found some aspects which you should take care of ahead of the Win 7 to Win 10 upgrade.

Make a backup!

As always with critical operations: Make a backup first! This is quite easy with a VMware virtual machine based on “vmdk”-files: Just copy the machine’s directory with all its files to some Linux-formatted backup medium and keep all the access rights intact during copying (=> cp -dpRv). In case of partition-based virtual machines, make a copy of the partition with “dd”.

If you should need to restore the virtual machine in its old state and copy your backup files back to their old places: VMware will notice this and will ask you whether you moved or copied the guest. Then answer “moved” (!) – which appears a bit paradoxical. But otherwise there is a very high probability that trouble with your Windows license will follow. VMware interprets a “copy” operation as a duplication of a virtual machine and puts related information somewhere (?) which Windows evaluates. Windows will almost certainly ask for a reactivation of your installation in case your Win license was/is an individual one – e.g. an OEM license.

Good news and potentially bad news regarding the upgrade to Win 10

The good news is:

  • Provided that you have valid licences for your Win 7 and for all SW components installed and provided that there is enough real and virtual disk space available, the Win 7 to Win 10 upgrade works smoothly. However, it takes a considerable amount of time.
  • I did not experience any performance problems after the upgrades – not even regarding transparency effects and other gimmicks in comparison to Windows 7. VMware’s 3D support for Win works – in WS 15 even for DirectX 10.

The time required depends partially on the bandwidth of your Internet connection and partially on the performance of your disk access as well as on your CPU and the available RAM. In my case I had to invest around 1 hour – in those cases when everything went straight through.

The potentially bad news comprises the following points:

  • The upgrade requires a considerable amount of free space on your virtual machine’s hard disk, which will be used temporarily. So, you should carefully check the available disk space – inside the virtual machine and – a bit surprising – also on the Linux filesystem keeping the vmdk-files. I ran into problems with limited space for multiple upgrades on both sides; see below. Whether you will experience something similar depends on your safety margin policies with respect to disk space in the guest and on the host.
  • A really annoying aspect of the upgrade had to do with VMware’s development and market strategy. From advertisements you may conclude that it would be best to use VMware WS 14 or 15 to handle Windows 10. However, on older Intel-based systems you should absolutely check whether the CPU is compatible with VMware WS 14 and 15 – before you think about upgrading a VMware WS 12 license to anything higher. On my Intel i7 950 neither WS 14 nor WS 15 did work at all. Even if you get these WS versions working by a trick (see below), they perform badly.
  • Then there is a certain privacy aspect. As said, the upgrade takes a lot of time during which you are connected to the Internet and to Microsoft servers. This is only partially due to the fact that Win 10 SW has to be downloaded during the upgrade process; there are more phases of information exchange. It is also quite understandable that MS has to analyze and check your system on a full scale. But do we know what Big Brother [BB] MS is doing during this time and what information/data they transfer to their own systems? No, we do not. So, if you have any sensitive data files on your system – how to protect them? You cannot isolate your Windows 10 during the upgrade. And even worse: Later on you will be more or less forced to perform updates within certain periods. So, how to keep sensitive data inaccessible for BB during the upgrade and beyond?

I address the first two aspects below. The last point of privacy is an interesting but complicated one. I shall discuss it in a separate article.

Which VMware workstation version should I use?

Do not get misguided by reports or advertisements on the Internet claiming that certain MS Win 10 versions require the latest version of VMware Workstation! WS 12 Pro was the first version which supported Win 10 in late 2015. Now VMware 15.X has arrived. And yes, there are articles that claim incompatibility of VMware WS 12, WS 14 and early subversions of WS 15 with some of the latest Win 10 builds and updates. See the following links and discussions therein:
https://communities.vmware.com/thread/608589
https://www.borncity.com/blog/2019/10/03/windows-10-update-kb4522015-breaks-vmware-workstation/
https://www.askwoody.com/forums/topic/vmware-12-and-newer-incompatible-with-windows-10-1903/

But read carefully: The statements on incompatibility refer mostly (if not only) to using a MS Win 10 system as a host for VMware! But we guys are using Linux systems as hosts.

Therefore the good message is:

Windows 10 as a VMware guest is already supported by VM WS 12.5.9 Pro, which runs also on older CPUs. For all practical purposes and 2D graphics a Win 10 guest installation works quite well on a Linux host with VMware 12.5.9.

At least, I have not yet noticed anything wrong with Win 10 guests on my hosts with Opensuse Leap 15.1 and VMware WS 12.5.9 Pro. (Neither did I see problems with WS 14 or WS 15 on those hosts where I could use these versions.)

The compatibility of WS 12.5 with Win 10 guests on Linux is more important than you may think if your host has an older CPU. If you really want to spend money and use WS 14 or WS 15 please note:

WS 14 Pro and WS 15 Pro require that your CPU provides Intel VT-x virtualization technology and EPT abilities.

So, the potentially bad message for you as the still proud owner of an older but capable CPU is:

The present VMware WS versions 14 and 15 which support Win 10 fully (as guest and host system) may not be compatible with your CPU!

Check compatibility twice BEFORE you upgrade VMware Workstation ahead of a “Win 7 to Win 10” upgrade. It would be a major waste of money if your CPU is not supported. And as stated: WS 12.5 does a good job with Win 10 guests.

VMware deserves a lot of criticism for their decision to ignore older processors with WS Pro versions 14 and higher. See
https://communities.vmware.com/thread/572931
https://vinfrastructure.it/2018/07/vmware-workstation-pro-14-issues-with-old-cpu/
https://www.heise.de/newsticker/meldung/VMware-Workstation-14-braucht-juengere-Prozessoren-3847372.html
For me this is a good reason to try a bit harder with KVM for the virtualization of Windows – and drop VMware wherever possible.

There is a small trick, though, to get WS 14 Pro running on an i7 950 and other older processors: In the file “/etc/vmware/config” you can add the setting

monitor.allowLegacyCPU = "true"

See https://communities.vmware.com/thread/572804.

But: I have tested this and found that a Win 7 start takes around 3 minutes! You really have to be very patient… This is crazy – and for me unacceptable. Once you are logged in, the performance of Win 7 seems to be OK – maybe a bit sluggish. Still, I cannot bear the waiting at boot time. So, I went back to WS 12 Pro on the machine with the i7 950.

Another problem for you may be that the installation of WS 12.5.9 on both Opensuse Leap 15.0 and 15.1 requires some special settings and tricks which I have written about in this blog. See:
Upgrade to Opensuse Leap 15.0 – problems with the Nvidia driver from the repository and with VMware WS 12.5.9
Upgrade of a laptop to Opensuse 42.3 – problems with Bumblebee and VMware WS 12.5, workarounds
The first article is relevant also for Opensuse 15.1.

Use the Windows Upgrade site and the Media Creation Tool page to save money

If you have a valid Win 7 license for all of your virtualized Win 7 installations it is not required to spend money on a new Win 10 license. Microsoft’s offer for a cost free upgrade to Win 10 still works. See e.g.:
https://www.cnet.com/how-to/windows-10-dont-wait-on-free-upgrade-because-windows-7-officially-done/
https://www.techbook.de/apps/kostenloses-update-windows-10
Follow the steps there – as I have done successfully myself.

Problems with disk space within the VMware Windows 7 guest during upgrade

My first Win7 to Win10 upgrade trial ran into trouble twice. The first problem occurred during the upgrade process and within the virtual machine:
I got a warning from the upgrade program at its start that I should free at least some 8.5 GByte.

Not so funny – as said, I am a bit picky about resources. The virtual guest machine had only a 60 GB C-disk. Fortunately, there were a lot of temporary files which could be deleted – actually Gigabytes of them, partially years old; makes you wonder why Win 7 kept those files piled up. I also could move a bunch of data files to a D-disk. And I deinstalled some programs. All in all it just worked out. The upgrade itself afterwards went friction-free and without further problems.

So one message is:

Ensure that you have around 15 GB free on your virtual C-disk.

It is better to solve the problems of freeing C-disk space inside Win 7 without pressure – meaning: ahead of the upgrade to Win 10. If you run into the described problem it may be better to abort the Win 10 upgrade. I have tested this – and the Win 7 system was restored, apparently in good health. I got a strange message during reboot that the system was being prepared for first use – but afterwards everything was as before.

On another system I got a warning during the upgrade, when the “search for updates” began, that I should clear some 10 GByte of temporarily required disk space or attach an external drive (USB) to be used for temporary operations. The latter went OK in this case. But be careful: the USB disk must be kept attached to the virtual machine over some reboots. Do not touch it until the upgrade has finished.

So, a second message is:

Be prepared to have some external device with some free 20 GB ready if you have a complex installation with a lot of application SW and/or a complex virtual HW configuration.

I advise you to check your external USB drive, USB stick or whatever you use for filesystem errors before attaching it. And have your VMware window active whilst attaching the device! VMware will then warn you that the Linux host may claim access to the device and you just have to click the buttons in the dialog boxes to give the VMware guest full control instead of the host OS.

If you now should think about a general enlargement of the virtual disk(s) of your existing Win 7 installation please take into account the following:

On the one hand, an enlargement is of course possible and relatively easy to handle if you use vmdk-files for disk virtualization and have free space on the Linux partition which hosts the vmdks. VMware supports the resizing process in the disk section of the virtual machine “settings”. Afterwards, on Win 7, you can use the Windows admin tools to extend the NTFS filesystem to the full size of the newly configured disk.

But, on the other hand, please consider that Windows may react allergically to a change of the main C-disk and request a new activation due to major hardware changes. 🙁

This is one of the points why we do not like Windows ….
So, how you solve a potential disk space problem depends a bit on what you think is the bigger problem – reactivation or freeing disk space by deletions, movement of files or deinstallations.

Addendum: Also check old restore points which Win 7 may have created over time! After a successful upgrade to Win 10 I stumbled across an option to release all restore information for old installations (in this case for Win 7 and its kept restore points). This will again give you many Gigabytes if you had not deleted “restore point” data in your Win 7 for a long time. In my case I gained a remarkable 17 GB! => I should have deleted some old restore point data already before the upgrade.

Problems with disk space on the Linux host

The second problem with disk space occurred after or during some upgrades to Win 10: I ran out of space in the Linux filesystem containing the vmdk files of my virtual machine. In one case the upgrade simply stopped. In another case the problem occurred a while after the upgrade – without me actually doing much on the new Win 10 installation. VMware suddenly issued a warning regarding the Linux file system and paused the virtual machine. I was first a bit surprised as I had not experienced this lack of space during normal usage of the previous Win 7 installation.

The explanation was simple: As said, I had set up the virtual disk such that the required space was not allocated at once, but as required. Due to the upgrade VMware had created all 4 GB extents to provide the full disk space the guest needed. In addition I had activated “Autoprotect Snapshots” in VMware (3 per day) – the first automatically created snapshot after the upgrade required a lot of additional space on the Linux file system, due to heavy changes on the hard disk.

My virtualized machines most often reside on specific (encrypted) LVM-based Linux partitions. And there it just got tight – when VMware stopped the virtual machine only 3.5 GB were left free. Not funny: You cannot kill snapshots on a paused virtual guest – the guest must be running or be shut down. And if you want to enlarge a Linux partition – which is possible if there is (neighboring) space free on your hard disk – then the filesystem should best be unmounted. Well, you can enlarge a GPT-partition with the ext4-filesystem in operation (e.g. with YaST) – but it gives you an uncomfortable feeling.

In my case I decided to brutally power down the virtual machines. In one case where this problem occurred I could at least eliminate one snapshot. I could start the virtual machine then again and let Windows check the NTFS filesystems for errors. Then I shut down the virtual machine again, deleted another snapshot and used the tools of VMware to defragment and compact the virtual disks. This gave me a considerable amount of free GBs. Good!
Afterwards I additionally reduced the number of protection snapshots – if this still seemed to be necessary.

On another system with a more important Win 7/10 installation I really extended the Linux partition and its ext4 filesystem by 20 GB – I had some spare space, fortunately – and then followed the steps just described.

So, there is a whole spectrum of options to regain disk space after the upgrade. See also:
thebackroomtech.com : reduce-size-virtual-machine-disk-vmware-workstation/

My third message is:

Ensure a reasonable amount of free space in the Linux filesystem – for required extents and snapshots!
After the backup of your old Win 7 installation, eliminate all VMware snapshots which you do not absolutely need – in the snapshot manager from the left to the right. Also use the VMware tools to defragment and compact your virtual disks ahead of the upgrade.

By the way: I hope that it is clear that snapshots do NOT replace backups. You should make a backup of your successfully upgraded Win 10 installation after you have tested the functionality of your applications and before you start working seriously with your new Win 10. You do not want to go through the upgrade procedure again ..

Addendum: Circumvent the enforcement of Windows 10 updates after your upgrade

Updates on Windows 7 have often led to trouble in the past – and as an administrator you were happy to have some control over the points in time for downloading and installing updates. After reading a bit, I got the impression that the situation has not changed much: There have been some major problems related to Win 10 updates since 2016. Moreover, Windows 10 enforces updates more rigidly than Win 7.

I, therefore, generally recommend the following:

Delay or stop automatic updates on Win 10. Then use VMware’s snapshot mechanism before manual updates to be able to turn back to a running Win 10 guest version. In this order.

The first point is not as easy as it may seem – there are no basic and directly accessible options to only get informed about available updates as on Win 7. Win 10 enforces updates if you have enabled “Windows Update”; there is no “inform only” or “download only”. You either have to disable updates totally or delay them. The latter only works for a maximum period of 35 days. How to deactivate updates completely is described here:

https://www.easeus.com/todo-backup-resource/how-to-stop-windows-10-from-automatically-update.html
https://www.t-online.de/digital/software/id_77429674/windows-10-automatische-updates-deaktivieren-so-geht-s.html

There is also a description on “Upgrade” values for a related registry entry:
www.deskmodder.de/wiki/index.php/Automatische-Updates-deaktivieren-oder-auf-manuell-setzen-Windows-10#Windows_10_1607.2C-1703-Pro-Updates-auf-manuell-setzen-oder-deaktivieren

I am not sure whether this works on Win 10 Pro build 1909 – we shall see.

Conclusion

Win 7 and Win 10 can be run on VMware WS Pro versions 12.5 up to 15.5 on Linux hosts. Before you upgrade VMware WS, check for compatibility with your CPU! An upgrade of a Win 7 Pro installation on a VMware virtual machine to Win 10 Pro basically works smoothly – but you should take care of providing enough disk space within the virtual machine and also on the host’s filesystem containing the vmdk-files for the virtual disks.

It is not necessary to change the quality of the virtualized hardware configuration. Win 10 appears to be running with at least the same performance as the old Win 7 on a given virtual machine.

In the next article I will discuss some privacy aspects during the upgrade and after. The main question there will be: What can we do to prevent the transfer of sensitive data files from a Win 10 installation?