An answer to a reader’s concerns about this blog

A reader wrote me an e-mail and asked what the general direction of this blog is going to be. He wondered about the “flood” of formulas lately, which, in his opinion, have nothing to do with Linux. In general, his impression was that I have seemingly lost my interest in core Linux topics. He, a German, also complained that I write my posts in English.

My first reaction was: I appreciate that some of my readers care. The criticism is justified, and it deserves an answer and some explanations. The easiest part is the question regarding language.

According to my provider and my own blog statistics, 78% of the page requests to this blog come from abroad, i.e. from countries outside Germany. Most requests stem from US systems. Before Russia’s imperialistic war against Europe there were also connections (and permanent attack attempts) from both Russia and China. Their percentage has declined (fortunately). Anyway, the majority of page requests comes from outside Germany. Therefore, I try to write in English. My English is certainly not the best, but it is still easier to read for those who are interested in my posts’ contents. And these are obviously not German readers. So, I will not switch back to German.

Now to the question regarding the declining number of posts related to Linux. During the time when I worked as a freelancer (up to 2018), I had some German customers who cared about Linux. It was in my own interest to dig a bit deeper than usual into topics like “virtualized VLANs”, firewalls etc. The articles on these topics are still the most-read ones in this blog.

But then I started to work as an employed consultant for IT-management topics in a Windows-dominated company. I simply had no chance and no time to continue with hard-core Linux topics until the end of 2022. The only connection that came up was related to minor Machine Learning topics. Since my retirement I use my private Linux systems again – but what I need there simply works. No need to dig deeper at the moment. I intend to replace all of my HW-platforms, and in the wake of such an endeavor some Linux topics typically come up, but all of this requires a period of saving money first. The same holds for a private project concerning a central Linux-based audio station. (Side remark: Due to the systematic destruction of the social system in Germany, ironically mainly by the politics of the social democrats, ca. 10 million of the persons going into retirement during the next years will get significantly less than 1500 Euros per month. These are official numbers of the German government. I am on the edge of this wave.)

A second point which obviously has an impact on this blog is that I have an education in physics and an inborn interest in math. One of the best aspects of retirement is that you gain a lot of freedom regarding your real interests. No employer forces you any longer to focus on things you only work with to earn a living. In my case the physicist woke up in spring 2023. I started to read a lot of books on theoretical physics and cosmology – only to find out that I needed to revive some university-level mathematics. At the same time I got interested in some admittedly special aspects of my own ML-experiments and of network simulations in general. And suddenly you find yourself applying some basic linear algebra and calculus again. An easy, but not very thoughtful way to start collecting some simple but useful results was to use this blog. I admit: It has turned the blog’s focus away from Linux.

The solution is clear: This blog has to be split up. I will do this as soon as I find some motivation for the boring task of setting up a new blog, database, etc. For the time being I have changed the subtitle of this blog to indicate that other topics have come up.

What I cannot promise is that Linux topics will dominate my interests in the future. As said: What a retired person needs on PCs and laptops normally works perfectly under the control of Linux. Thanks to all the fantastic people of the Open Source community.


Properties of ellipses by matrix coefficients – III – coordinates of points with extremal radii

This post requires Javascript to display formulas!

A centered, rotated ellipse can be defined by matrices which operate on position-vectors for points on the ellipse. The topic of this post series is the relation of the coefficients of such matrices to some basic geometrical properties of an ellipse. In the previous posts

Properties of ellipses by matrix coefficients – I – Two defining matrices
Properties of ellipses by matrix coefficients – II – coordinates of points with extremal y-values

we have found that we can use (at least) two matrix-based approaches:

  • One reflects a combination of two affine operations applied to a unit circle. This approach led us to a non-symmetric matrix, which we called AE. Its coefficients ((a, b), (c, d)) depend on the lengths of the ellipse’s principal axes and on trigonometric functions of its rotation angle.
  • The second approach is based on coefficients of a quadratic form which describes an ellipse as a special type of a conic section. We got a symmetric matrix, which we called Aq.

We have shown how the coefficients α, β, γ of Aq and a further coefficient δ of the quadratic form can be expressed in terms of the coefficients of AE. Furthermore, we have derived equations for the lengths σ1, σ2 of the ellipse’s principal axes and the rotation angle by which the major axis is rotated against the x-axis of the Euclidean coordinate system [ECS] we work with. We have also found equations for the components of the position vectors to those points of the ellipse with maximum y-values. A major result was that the eigenvalues and eigenvectors of Aq completely control the ellipse’s properties.

In this post we determine the components of the vectors to the end-points of the ellipse’s principal axes in terms of the coefficients of Aq. Afterward we shall test our formulas by a Python program and plots for a specific example.

Reduced matrix equation for an ellipse

Our centered, but rotated ellipse is defined by a quadratic form, i.e. by a polynomial equation with quadratic terms in the components xe and ye of position vectors to points on the ellipse:

\alpha\,x_e^2 \, + \, \beta \, x_e y_e \, + \, \gamma \, y_e^2 \:=\: \delta

The quadratic polynomial can be formulated as a matrix operation applied to position vectors ve = (xe, ye)T. With the quadratic and symmetric matrix Aq

\[ \pmb{\operatorname{A}}_q \:=\:
\begin{pmatrix} \alpha & \beta / 2 \\ \beta / 2 & \gamma \end{pmatrix} \]

we can rewrite the polynomial equation for the centered ellipse as

\pmb{v}_e^T \circ \pmb{\operatorname{A}}_q \circ \pmb{v}_e \:=\: \delta \,=\, \sigma_1^2\, \sigma_2^2, \quad \operatorname{with}\: \pmb{v_e} \,=\, \begin{pmatrix} x_e \\ y_e \end{pmatrix}.
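As a quick numerical sanity check of this matrix form, the following sketch (my own, assuming a simple axis-parallel example with σ1 = 2, σ2 = 1, i.e. α = 1, β = 0, γ = 4, δ = 4) verifies that all points of such an ellipse fulfill the equation:

```python
import numpy as np

# Axis-parallel test ellipse with sigma_1 = 2, sigma_2 = 1:
# x^2 + 4 y^2 = 4  =>  alpha = 1, beta = 0, gamma = 4, delta = 4
alpha, beta, gamma, delta = 1.0, 0.0, 4.0, 4.0
A_q = np.array([[alpha, beta / 2.0],
                [beta / 2.0, gamma]])

# Sample points on the ellipse: (2 cos t, sin t)
t = np.linspace(0.0, 2.0 * np.pi, 100)
points = np.stack([2.0 * np.cos(t), np.sin(t)], axis=1)

# v^T A_q v should equal delta = sigma_1^2 * sigma_2^2 for every point
vals = np.einsum('ni,ij,nj->n', points, A_q, points)
print(np.allclose(vals, delta))  # True
```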

Just to cover another relation you may find in books: We could have included the δ-term in a somewhat artificial (3×3)-matrix

\pmb{\operatorname{A}}_q^e \:=\:
\begin{pmatrix} \alpha & \beta / 2 & 0 \\ \beta / 2 & \gamma & 0 \\ 0 & 0 & -\, \delta \end{pmatrix},

and sandwiched this matrix between artificially extended position vectors to reproduce our definition equation:

\begin{pmatrix} x_e & y_e & 1 \end{pmatrix} \, \circ \, \pmb{\operatorname{A}}_q^e \, \circ \, \begin{pmatrix} x_e \\ y_e \\ 1 \end{pmatrix} \:=\: 0

However, this formal aspect will not help much to solve the equations coming below. We want to describe the vectors to the principal axes’ end-points by mathematical expressions that depend on α, β, γ, δ – and both matrices will of course deliver the same results. But: There are still two different approaches to achieve our objective.

Method 1 to determine the vectors to the principal axes’ end points

My readers have certainly noticed that we have already gathered all required information to solve our task. In the first post of this series we have performed an eigendecomposition of our symmetric matrix Aq. We found that the two eigenvectors of Aq for respective eigenvalues λ1 and λ2 point along the principal axes of our rotated ellipse:

\[ \begin{align}
\lambda_1 \: &: \quad \pmb{\xi_1} \:=\: \left(\, {1 \over \beta} \left( (\alpha \,-\, \gamma) \,-\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right), \: 1 \, \right)^T \\
\lambda_2 \: &: \quad \pmb{\xi_2} \:=\: \left(\, {1 \over \beta} \left( (\alpha \,-\, \gamma) \,+\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right), \: 1 \, \right)^T
\end{align} \]

The T symbolizes a transposition operation. The eigenvalues are related to the Aq-coefficients by the following equations:

\[ \begin{align}
\lambda_1 \:&=\: {1 \over 2} \left(\, \left( \alpha \,+\, \gamma \right) \,+\, \left[ \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \,\right) \\
\lambda_2 \:&=\: {1 \over 2} \left(\, \left( \alpha \,+\, \gamma \right) \,-\, \left[ \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \,\right)
\end{align} \]

These eigenvalues correspond to the squares of the lengths of the ellipse’s axes.

\[ \begin{align}
\lambda_1 \:&=\: \sigma_1^2 \\
\lambda_2 \:&=\: \sigma_2^2
\end{align} \]
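These eigenvalue formulas are easy to check numerically. The following sketch (my own; the coefficient values are those of the example ellipse used later in this post, assumed here for illustration) compares them with Numpy’s solver for symmetric matrices:

```python
import numpy as np

# A_q coefficients for an ellipse with sigma_1 = 2, sigma_2 = 1,
# rotated by 60 degrees (the example used later in this post)
alpha, gamma = 3.25, 1.75
beta = -1.5 * np.sqrt(3.0)                 # = -2.59807621...

root = np.sqrt(beta**2 + (gamma - alpha)**2)
lam_1 = 0.5 * ((alpha + gamma) + root)     # should equal sigma_1^2 = 4
lam_2 = 0.5 * ((alpha + gamma) - root)     # should equal sigma_2^2 = 1

# Cross-check with Numpy's eigenvalue solver for symmetric matrices
A_q = np.array([[alpha, beta / 2.0], [beta / 2.0, gamma]])
print(lam_1, lam_2)                        # 4.0 1.0 (up to rounding)
print(np.linalg.eigvalsh(A_q))             # ascending order: [1. 4.]
```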

Therefore, we can simply take the components of the normalized vectors

\[ \begin{align}
\lambda_1 \: &: \quad \pmb{\xi_1^n} \:=\: {1 \over \|\pmb{\xi_1}\|}\, \pmb{\xi_1} \\
\lambda_2 \: &: \quad \pmb{\xi_2^n} \:=\: {1 \over \|\pmb{\xi_2}\|}\, \pmb{\xi_2}
\end{align} \]

and multiply them with the square-root of the respective eigenvalues to get the vector components to the end-points of the ellipse’s axes:

\[ \begin{align}
\pmb{\xi_1}^{rmax} \:&=\: \sqrt{\lambda_1} \, \pmb{\xi_1^n} \\
\pmb{\xi_2}^{rmax} \:&=\: \sqrt{\lambda_2} \, \pmb{\xi_2^n}
\end{align} \]

This is trivial regarding the algebraic operations, but it results in lengthy (and boring) expressions in terms of the matrix coefficients. So, I skip writing down all the terms. (We do not need them for setting up numerical programs.)

Remember that you could, in addition, replace (α, β, γ, δ) by the coefficients (a, b, c, d) of matrix AE. See the first post of this series for the formulas. This would, however, produce even longer expressions.
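For readers who want to see Method 1 in code: the following Numpy sketch (my own) applies the eigenvector formulas above to the example coefficient values used later in this post (an assumption for illustration) and verifies that the resulting end-points fulfill the ellipse equation:

```python
import numpy as np

# A_q coefficients for the example ellipse used later in this post
# (sigma_1 = 2, sigma_2 = 1, rotation angle 60 deg)
alpha, gamma, delta = 3.25, 1.75, 4.0
beta = -1.5 * np.sqrt(3.0)

root = np.sqrt(beta**2 + (gamma - alpha)**2)
lam_1 = 0.5 * ((alpha + gamma) + root)    # = sigma_1^2
lam_2 = 0.5 * ((alpha + gamma) - root)    # = sigma_2^2

# Eigenvector formulas from above (same labeling as in the text)
xi_1 = np.array([((alpha - gamma) - root) / beta, 1.0])
xi_2 = np.array([((alpha - gamma) + root) / beta, 1.0])

# Normalize and scale with the square roots of the respective eigenvalues
p_1 = np.sqrt(lam_1) * xi_1 / np.linalg.norm(xi_1)
p_2 = np.sqrt(lam_2) * xi_2 / np.linalg.norm(xi_2)

# Both end-points must fulfill the ellipse equation v^T A_q v = delta,
# and their radii must equal sigma_1 = 2 and sigma_2 = 1
A_q = np.array([[alpha, beta / 2.0], [beta / 2.0, gamma]])
print(p_1, np.linalg.norm(p_1))            # major-axis end-point, radius 2
print(p_2, np.linalg.norm(p_2))            # minor-axis end-point, radius 1
print(p_1 @ A_q @ p_1, p_2 @ A_q @ p_2)    # both approx. 4.0
```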

Method 2: Equation for points with maximum radius values

We define again some convenience variables:

\[ \begin{align}
a_h \,&=\, {\alpha \over \gamma} \\
b_h \,&=\, {1 \over 2}\,{\beta \over \gamma} \\
d_h \,&=\, {\delta \over \gamma} \\
g_h \,&=\, a_h \,-\, b_h^2 \\
f_h \,&=\, 1 \,+\, b_h^2 \,-\, g_h \\
\xi_h \,&=\, { 4\,d_h\, g_h\, b_h^2 \,+\, d_h\, f_h^2 \over 2\, \left(\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right) } \\
\eta_h \,&=\, { b_h^2 \, d_h^2 \over 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2 }
\end{align} \]

We find for ye:

\[ \begin{align}
y_e \:&=\: -\, b_h \, x_e \, \pm \, \left[\,d_h \,-\, \left( a_h \,-\, b_h^2 \right)\, x_e^2 \, \right]^{1/2} \\
\:&=\: -\, b_h \, x_e \, \pm \, \left[\,d_h \,-\, g_h\, x_e^2 \, \right]^{1/2}
\end{align} \]

We pick the ye with the positive term in the following steps. (The way for the solution with the negative term in ye is analogous.) The square of ye is:

y_e^2 \:=\: d_h \,+\, \left(b_h^2 \,-\, g_h\right)\, x_e^2 \,-\, 2 \, b_h \, x_e \,\left[\,d_h \,-\, g_h\,x_e^2 \,\right]^{1/2}

To find an extremal value of the radius we differentiate and set the derivative to zero:

{\partial \, \left(y_e^2 \,+\, x_e^2\right) \over \partial \, x_e} \:=\: 0 \quad \Rightarrow

\[ \begin{align}
& \left(\, 1\,+\, b_h^2 \,-\, g_h\,\right) \, x_e \,-\, b_h \, \left[\, d_h \,-\, g_h\, x_e^2 \, \right]^{1/2} \\
&+\, b_h\,g_h\,x_e^2 \, {1 \over \left[\, d_h \,-\, g_h \, x_e^2 \, \right]^{1/2} } \:=\: 0
\end{align} \]

This results in

f_h\, x_e \, \left[ d_h \,-\, g_h\, x_e^2\right]^{1/2} \:=\: b_h\,d_h \,-\, 2\, b_h\,g_h\,x_e^2 \, .

Solution for xe-values of the end-points of the principal axes

We take the square of both sides and reorder terms to get

\left[\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right]\, x_e^4 \,-\, \left[\, 4\,d_h\, g_h\, b_h^2 \,+\, d_h\, f_h^2 \,\right]\, x_e^2 \,+\, b_h^2 \, d_h^2 \:=\: 0

x_e^4 \,-\, { 4\,d_h\, g_h\, b_h^2 \,+\, d_h\, f_h^2 \over 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2 } \, x_e^2 \:=\: -\, { b_h^2 \, d_h^2 \over 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2 }


With the convenience variables

\[ \begin{align}
\xi_h \,&=\, { 4\,d_h\, g_h\, b_h^2 \,+\, d_h\, f_h^2 \over 2\, \left(\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right) } \\
\eta_h \,&=\, { b_h^2 \, d_h^2 \over 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2 }
\end{align} \]

we have

x_e^4 \, -\, 2 \,\xi_h \,x_e^2 \:=\: – \eta_h

With the help of a quadratic supplement we get

\left[\, x_e^2 \, -\, \xi_h \right]^2 \:=\: \xi_h^2 \:-\: \eta_h

and find the solution

x_e \:=\: \pm \, \sqrt{ \xi_h \,\pm\, \sqrt{\, \xi_h^2 \,-\,\eta_h \,} }

A detailed analysis also for the other ye-expression (see above) leads to further solutions for the coordinates (=vector component values) of points with extremal values for the radii. These are the end-points of the principal axes of the ellipse:

\[ \begin{align}
x_{e1}^{rmax} \:&=\: -\, \sqrt{ \, \xi_h \,-\, \sqrt{\, \xi_h^2 \,-\,\eta_h \,} } \\
y_{e1}^{rmax} \:&=\: +\, \sqrt{ \, d_h \,-\, g_h \, \left(x_{e1}^{rmax}\right)^2 \,} \,-\, b_h \, x_{e1}^{rmax} \\
x_{e2}^{rmax} \:&=\: -\, x_{e1}^{rmax} \\
y_{e2}^{rmax} \:&=\: -\, y_{e1}^{rmax} \\
x_{e3}^{rmax} \:&=\: +\, \sqrt{ \, \xi_h \,+\, \sqrt{\, \xi_h^2 \,-\,\eta_h \,} } \\
y_{e3}^{rmax} \:&=\: +\, \sqrt{ \, d_h \,-\, g_h \, \left(x_{e3}^{rmax}\right)^2 \,} \,-\, b_h \, x_{e3}^{rmax} \\
x_{e4}^{rmax} \:&=\: -\, x_{e3}^{rmax} \\
y_{e4}^{rmax} \:&=\: -\, y_{e3}^{rmax}
\end{align} \]

I leave it to the reader to expand the convenience variables into terms containing the original coefficients α, β, γ, δ.
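To make sure the somewhat lengthy expressions of Method 2 contain no mistakes, here is a small Numpy sketch (my own) evaluating them for the example ellipse used below (σ1 = 2, σ2 = 1, rotated by 60°; the coefficient values are assumptions for illustration):

```python
import numpy as np

# Example coefficients of the quadratic form (sigma_1 = 2, sigma_2 = 1,
# rotated by 60 deg)
alpha, gamma, delta = 3.25, 1.75, 4.0
beta = -1.5 * np.sqrt(3.0)

# Convenience variables as defined in the text
a_h = alpha / gamma
b_h = 0.5 * beta / gamma
d_h = delta / gamma
g_h = a_h - b_h**2
f_h = 1.0 + b_h**2 - g_h

denom = 4.0 * b_h**2 * g_h**2 + g_h * f_h**2
xi_h  = (4.0 * d_h * g_h * b_h**2 + d_h * f_h**2) / (2.0 * denom)
eta_h = b_h**2 * d_h**2 / denom

# Coordinates of two of the four end-points of the principal axes
x_e1 = -np.sqrt(xi_h - np.sqrt(xi_h**2 - eta_h))
y_e1 = np.sqrt(d_h - g_h * x_e1**2) - b_h * x_e1
x_e3 = np.sqrt(xi_h + np.sqrt(xi_h**2 - eta_h))
y_e3 = np.sqrt(d_h - g_h * x_e3**2) - b_h * x_e3

# The radii should equal the lengths of the semi-axes (1 and 2)
print(np.hypot(x_e1, y_e1), np.hypot(x_e3, y_e3))
```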


A Python test program

It is easy to write a Python program which calculates and plots the data of an ellipse together with the special points with extremal values of the radii and extremal values of ye. The general steps I followed were:

Step 0: Create 100 points on a unit circle. Save the coordinates in Python lists (or Numpy arrays). Use Matplotlib’s plot(x,y)-function to plot the points.

Step 1: Create an axis-parallel ellipse with values for the axes ha = 2.0 and hb = 1.0 along the x- and the y-axis of the Euclidean coordinate system [ECS]. Do this by applying a diagonal scaling matrix Dσ1, σ2 (see the first post of this series).

Step 2: Rotate the ellipse by π/3 (60°). Do this by applying a rotation matrix Rπ/3 to the position vectors of your ellipse (with the help of Numpy). Alternatively, you can first create the matrices, perform a matrix multiplication and then apply the resulting matrix to the position vectors of your unit circle.
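Steps 0 to 2 can be sketched as follows (a minimal Numpy/Matplotlib sketch; the variable names are my own):

```python
import numpy as np

# Step 0: 100 points on a unit circle
t = np.linspace(0.0, 2.0 * np.pi, 100)
circle = np.stack([np.cos(t), np.sin(t)])        # shape (2, 100)

# Step 1: diagonal scaling matrix for axes h_a = 2.0, h_b = 1.0
D = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Step 2: rotation matrix for phi = pi/3 (60 deg)
phi = np.pi / 3.0
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# Combined matrix A_E = R @ D, applied to the unit-circle points
A_E = R @ D
ellipse = A_E @ circle
print(A_E)

# Plotting (requires Matplotlib):
# import matplotlib.pyplot as plt
# plt.plot(ellipse[0], ellipse[1]); plt.gca().set_aspect('equal'); plt.show()
```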

(A plot of the rotated ellipse followed here in the original post; its limiting lines were calculated by the formulas given above.)

Step 3: Determine the coefficients of the combined matrix AE = Rπ/3 ∘ Dσ1, σ2.

For the coefficients ((a, b), (c, d)) of AE I got:

A_ell = 
[[ 1.         -0.8660254 ]
[ 1.73205081  0.5       ]]

Step 4: Determine the coefficients of the matrix Aq by the formulas given in the first post of this series. I got

A_q = 
[[ 3.25       -1.29903811]
[-1.29903811  1.75      ]]

For δ I got:

delta =  4.0

which is consistent with the length-values of the principal axes.
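As a side note: with Numpy, Aq and δ can also be obtained directly from AE. Points v = AE u with |u| = 1 obey vT (AE-1)T AE-1 v = 1; multiplying by δ = det(AE)2 = σ12 σ22 gives the quadratic form. This sketch (my own, equivalent to the coefficient formulas of the first post) reproduces the numbers above:

```python
import numpy as np

# A_E from step 3 above
A_E = np.array([[1.0, -0.8660254],
                [1.73205081, 0.5]])

# delta = det(A_E)^2 = (sigma_1 * sigma_2)^2, since det(R) = 1
delta = np.linalg.det(A_E) ** 2

# A_q = delta * (A_E^{-1})^T A_E^{-1} = delta * (A_E A_E^T)^{-1}
A_q = delta * np.linalg.inv(A_E @ A_E.T)
print(A_q)      # approx. [[3.25, -1.29903811], [-1.29903811, 1.75]]
print(delta)    # approx. 4.0
```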

Step 5: Determine values for the eigenvalues λ1 and λ2 from the Aq-coefficients by the formulas given in the first post. Also calculate them by using Numpy’s
eigenvalues, eigenvectors = numpy.linalg.eig(A_q). Theory tells us that these values should be exactly λ1 = 4 and λ2 = 1. I got

Eigenvalues from A_q:  lambda_1 = 4. :: lambda_2 = 1.

Step 6: Determine the components of the normalized eigenvectors with the help of numpy.linalg.eig(A_q). I got:

Components of normalized eigenvectors by theoretical formulas from A_q coefficients: 
ev_1_n :  -0.8660254037844386  :  0.5000000000000002
ev_2_n :  0.5000000000000001  :  0.8660254037844385

Eigenvectors from A_q via numpy.linalg.eig():  
ev_1_num :  0.8660254037844387  :  -0.5000000000000001
ev_2_num :  0.5000000000000001  :  0.8660254037844387 

The deviation between ev_1_n and ev_1_num is just a factor of −1 in all components. This is correct, as eigenvectors are unique only up to a sign.

Step 7: Calculate the sine of twice the rotation angle of our ellipse from the AE-coefficients, from the eigenvectors of Aq and from the Aq-coefficients. The theoretical value is sin(2 π/3) = 0.8660254037844387. I got:

sin(2. * rotation angle) of major axis of the ellipse against the ECS x-axis from A_E coefficients: 
sin_2phi-A_E  =  0.8660254037844388

sin(2. * rotation angle) of major axis of the ellipse against the ECS x-axis from from eigenvectors of A_q:
sin_2phi-ev_A_q =  0.8660254037844387 

sin(2. * rotation angle) of major axis of the ellipse against the ECS x-axis from A_q-coefficients:
sin_2phi-coeff-A_q =  0.8660254037844388 
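The eigenvector-based variant of this computation can be sketched as follows (my own sketch; note that with Numpy’s eigh the eigenvector belonging to the smaller eigenvalue points along the major axis, since a point at distance r along a unit eigenvector fulfills r2 λ = δ):

```python
import numpy as np

A_q = np.array([[3.25, -1.29903811],
                [-1.29903811, 1.75]])

# eigh returns eigenvalues in ascending order; the eigenvector of the
# smaller eigenvalue spans the major axis (r = sqrt(delta / lambda) is
# largest for the smallest lambda)
eigvals, eigvecs = np.linalg.eigh(A_q)
v_major = eigvecs[:, 0]

# The sign ambiguity of eigenvectors does not matter for sin(2 * phi)
phi = np.arctan2(v_major[1], v_major[0])
print(np.sin(2.0 * phi))    # approx. 0.8660254 = sin(2 pi / 3)
```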


Step 8: Plot the end-points of the normalized eigenvectors of Aq.

Note that in our example case the end-point of the eigenvector along the minor axis must be located exactly on the elliptic curve, as the ellipse’s minor axis has a length of b = 1!

Step 9: Calculate the components of the vectors to the data-points of the ellipse with maximal absolute ye-values from the Aq-coefficients, using the formulas given in the previous post. Plot these data-points (in green color).

Step 10: Calculate the components of the vectors to the data-points of the ellipse with maximal values of the radii with the help of the formulas presented in this post, and plot these points in addition.


Conclusion

In this mini-series of posts we have performed some small mathematical exercises with respect to centered and rotated ellipses. We have calculated basic geometrical properties of such ellipses from the coefficients of matrices which define the ellipses in algebraic form. Linear Algebra helped us to understand that the eigenvectors and eigenvalues of a symmetric matrix, whose coefficients stem from a quadratic equation (for a conic section), completely control both the orientation and the lengths of the ellipse’s axes.

This knowledge is useful in some Machine Learning [ML] contexts where elliptic data appear as projections of multivariate normal distributions. Multivariate Gaussian probability functions control the properties of a lot of natural objects. Experience shows that certain types of neural networks may transform such data into multivariate normal distributions in latent spaces. An evaluation of the numerical data coming from such ML-experiments often delivers the coefficients of defining matrices for ellipses.

In my blog I now return to the study of shearing operations applied to circles, spheres, ellipses and 3-dimensional ellipsoids. Later I will continue with the study of multivariate normal distributions in latent spaces of Autoencoders. For both of these topics the knowledge we have gathered regarding the matrices behind ellipses will help us a lot.