This is only a small tip regarding the vertical spacing of formulas/equations in WordPress plugins which understand LaTeX notation, as e.g. the plugin “MathML Block”.
Very often, when you align formulas with the \begin{align} equ 1 \\ equ 2 \end{align}
commands, the vertical arrangement of the equations may appear sub-optimal: the equations get very close to each other in the vertical direction.
Especially when you work with complex formulas that have a significant vertical extension, the vertical distance between adjacent equations often becomes too small in my opinion.
As I am not an experienced user of either LaTeX or MathML, I always fiddled around with invisible \phantom{} constructs in the lower formulas, which gave me sufficient extra vertical space. But this is a tiresome approach. As I learned some days ago from a TeX-related StackExchange post, there is a much simpler recipe:
The line-break symbol “\\” can be extended by an optional length argument, as in “\\[6pt]“. This will give you exactly the vertical separation you want to get. For details see the respective TeX StackExchange discussion.
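A minimal example (the two formulas are just placeholders; 6pt is an arbitrary amount of extra space):

\begin{align}
y \,&=\, \frac{x^2}{1 + x^2} \\[6pt]
z \,&=\, \frac{1}{1 + e^{-x}}
\end{align}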
Last week I started preparing posts for my new blog on Machine Learning topics (see the blog-roll). During my studies I came across a scientific publication which covers an interesting topic for ML enthusiasts, namely the question of what kinds of statistical distributions we may have to deal with when working with data on natural objects and their properties.
The reference is:
S. A. Frank, 2009, “The common patterns of nature”, Journal of Evolutionary Biology (Wiley Online Library, link to the published article)
I strongly recommend reading this publication.
It explains statistical large-scale patterns in nature as limiting distributions. Limiting distributions result from an aggregation of the results of numerous small-scale processes (neutral processes) which fulfill constraints on the preservation of certain pieces of information. Such processes damp out other fluctuations during sampling. The general mathematical approach to limiting distributions is based on entropy maximization under constraints; the constraints are mathematically included via Lagrangian multipliers. Both are relatively familiar concepts. The author explains which patterns result from which basic neutral processes.
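In its simplest form the formalism looks like this (here \( f(y) \) and the average \( \bar{f} \) just stand for whatever measurable information a neutral process preserves): one maximizes the entropy of a probability density \( p(y) \) under a normalization constraint and a constraint on the preserved average, via a Lagrangian

\[ \Lambda \,=\, -\int p(y)\, \ln p(y) \,dy \;-\; \alpha \left( \int p(y)\,dy \,-\, 1 \right) \;-\; \lambda \left( \int f(y)\, p(y)\,dy \,-\, \bar{f} \right) \,. \]

Setting the variation with respect to \( p(y) \) to zero yields \( p(y) \,\propto\, e^{-\lambda\, f(y)} \). With \( f(y) = y^2 \), i.e. a fixed variance, one obtains the Gaussian.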
However, the article also discusses an intimate relation between aggregation and convolutions. The author furthermore presents a related, interesting analysis based on Fourier components and their respective damping. For me this part was eye-opening.
The central limit theorem is explained for cases where a finite variance is preserved as the main information. But the author shows that Gaussian patterns are not the only patterns we may directly or indirectly find in the data of natural objects. To get a solid basis from a spectral point of view, he extends his Fourier analysis to the occurrence of infinite variances and the consequences for other spectral moments. Besides explaining (truncated) power-law distributions he discusses aspects of extreme value distributions.
All in all, the article provides very clear ideas and solid arguments for why certain statistical patterns govern common distributions of natural objects’ properties. As ML people we should be aware of such distributions and their mathematical properties.
In the last post of this series we have studied the transformation of an ellipse by a shear operation. The coordinates of points on an ellipse and the components of the respective position vectors fulfill a quadratic equation (quadratic form):
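With the position vector \( \pmb{x}_o \,=\, \left(\,x_o,\, y_o\,\right)^T \) the quadratic form reads

\[ \pmb{x}_o^T \circ \pmb{\operatorname{A}}_q^O \circ \pmb{x}_o \,=\, 1 \,. \]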
The superscript “T” symbolizes the transposition operation. The symmetric (2×2)-matrix \( \pmb{\operatorname{A}}_q^O \) defines the original, unsheared ellipse. The suffix “q” indicates the quadratic form. I have shown how the shear parameter \( \lambda_S \) impacts the coefficients of a corresponding (2×2)-matrix \(\pmb{\operatorname{A}}_q^S \) that defines the sheared ellipse.
What I have not done in the last post is to show how our matrix \(\pmb{\operatorname{A}}_q^S \) is related to a shear matrix \(\pmb{\operatorname{M}}_{sh} \) (see the first post), which describes the effect of the shear on the vectors \( \left(\,x_o,\, y_o\,\right)^T \). I am going to discuss this below. The given matrix relations will also be valid for general n-dimensional ellipsoids.
Matrix relations as discussed below are helpful to accelerate numerical calculations, as Numpy (in cooperation with optimized libraries for your OS) provides highly optimized modules for matrix operations. Furthermore, n-dimensional ellipsoids characterize hyper-surfaces of constant density of multivariate normal distributions, which appear in certain areas of Machine Learning and the respective data.
Matrix describing a centered n-dimensional ellipsoid
We consider n-dimensional, centered ellipsoids whose symmetry centers coincide with the origin of the Euclidean coordinate system [ECS] we work with. A position vector \( \left(\,x_1^o,\, x_2^o,\, \cdots,\, x_n^o\,\right)^T \)
is a vector drawn from the origin to a point on the ellipsoid’s hyper-surface. Note that a general vector of a vector space has no reference to a coordinate system’s origin; hence the distinction. A general ellipsoid is defined by a quadratic form in the components of its position vectors. The quadratic form is equivalent to the following matrix equation
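\[ \pmb{x}_o^T \circ \pmb{\operatorname{A}}_{qn}^O \circ \pmb{x}_o \,=\, 1 \,, \quad \pmb{x}_o \,=\, \left(\,x_1^o,\, x_2^o,\, \cdots,\, x_n^o\,\right)^T \,, \]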
where \(\pmb{\operatorname{A}}_{qn}^O \) now represents a symmetric (n×n)-matrix. The “\( \circ \)” symbolizes a matrix product.
Note: A coefficient \( \delta \gt 0 \), which we have used in previous posts on the right side of the equation, can be included in the coefficient values of the matrix.
Note that the equations above define an ellipsoid only up to a translation vector: the quadratic form does not contain any linear terms in the vector components, which would correspond to a shift of the symmetry center.
Quadratic forms not only define ellipsoids. For an ellipsoid we have to assume that the matrix \(\pmb{\operatorname{A}}_{qn}^O \) is positive definite; in particular, its determinant is \( \gt 0 \) and the matrix is invertible:
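\[ \det\left(\pmb{\operatorname{A}}_{qn}^O\right) \,\gt\, 0 \,, \quad \exists \, \left(\pmb{\operatorname{A}}_{qn}^O\right)^{-1} \,. \]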
Note that you could choose an ECS in which the ellipsoid’s principal axes would align with the ECS’s coordinate axes. Such a choice would correspond to a PCA-transformation of the vector data. \(\pmb{\operatorname{A}}_{qn}^O \) would then become diagonal. This corresponds to the fact that a real symmetric matrix always has an eigenvalue decomposition with orthogonal eigenvectors.
Equation of the quadratic form for the sheared ellipsoid
In the first post of this series I have defined an (invertible) shear matrix as a unipotent matrix \( \pmb{\operatorname{M}}_{sh} \) with all coefficients of the lower triangular part, off the diagonal, being equal to 0.0 and all elements on the diagonal being equal to 1:
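For the n-dimensional case such a matrix has the form

\[ \pmb{\operatorname{M}}_{sh} \,=\, \begin{pmatrix} 1 & m_{12} & \cdots & m_{1n} \\ 0 & 1 & \cdots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \,. \]

Its determinant obviously is 1.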
So an inverse matrix \( \pmb{\operatorname{M}}_{sh}^{-1} \) exists. Shearing our original ellipse (with position vectors \( \pmb{x_o} \)) leads to new vectors \( \pmb{x_S} \):
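\[ \pmb{x}_S \,=\, \pmb{\operatorname{M}}_{sh} \circ \pmb{x}_o \quad \Leftrightarrow \quad \pmb{x}_o \,=\, \pmb{\operatorname{M}}_{sh}^{-1} \circ \pmb{x}_S \,. \]

Inserting the right relation into the quadratic form of the original ellipsoid shows how the defining matrix transforms:

\[ \pmb{x}_S^T \circ \left(\pmb{\operatorname{M}}_{sh}^{-1}\right)^T \circ \pmb{\operatorname{A}}_{qn}^O \circ \pmb{\operatorname{M}}_{sh}^{-1} \circ \pmb{x}_S \,=\, 1 \quad \Rightarrow \quad \pmb{\operatorname{A}}_{qn}^S \,=\, \left(\pmb{\operatorname{M}}_{sh}^{-1}\right)^T \circ \pmb{\operatorname{A}}_{qn}^O \circ \pmb{\operatorname{M}}_{sh}^{-1} \,. \]

The result is again a symmetric matrix, so the sheared object is again an ellipsoid. A further useful tool is the singular value decomposition [SVD]: any real (n×m)-matrix \( \pmb{\operatorname{M}} \) can be written as a product \( \pmb{\operatorname{M}} \,=\, \pmb{\operatorname{U}} \circ \pmb{\Sigma} \circ \pmb{\operatorname{V}}^T \).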
Σ is a diagonal (n×m)-matrix with singular values. The column-vectors of U and V are orthogonal singular vectors. Geometrically, U and V can be interpreted as rotational operations.
Therefore, we can decompose an (n×n) upper triangular shear matrix into two orthonormal (n×n)-matrices U and V plus a diagonal matrix Σ:
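\[ \pmb{\operatorname{M}}_{sh} \,=\, \pmb{\operatorname{U}} \circ \pmb{\Sigma} \circ \pmb{\operatorname{V}}^T \,. \]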
This gives us an alternative form to define the inverse shear matrix. Note that the order of the matrices in the matrix products is essential.
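As \( \pmb{\operatorname{U}} \) and \( \pmb{\operatorname{V}} \) are orthonormal ( \( \pmb{\operatorname{U}}^{-1} = \pmb{\operatorname{U}}^T \), \( \pmb{\operatorname{V}}^{-1} = \pmb{\operatorname{V}}^T \) ), we get:

\[ \pmb{\operatorname{M}}_{sh}^{-1} \,=\, \pmb{\operatorname{V}} \circ \pmb{\Sigma}^{-1} \circ \pmb{\operatorname{U}}^T \,. \]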
An example for the case of a sheared ellipse
We use the example of a sheared ellipse discussed in the last post to verify the results above numerically for the 2-dim case. Writing a respective Python/Numpy program is simple. I will just give you my numerical results below.
We have used an ellipse with the longer and shorter principal axes having values a = 2 and b = 1, respectively. The ellipse was rotated by 60° against the ECS-axes.
The coefficients of the respective (2×2)-matrix \( \pmb{\operatorname{A}}_q^O \) follow from rotating the diagonal matrix \( \operatorname{diag}\left(1/a^2,\, 1/b^2\right) \) of the axis-parallel ellipse.
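The coefficient values, the sheared matrix and the SVD can be reproduced with a minimal Numpy sketch along the following lines. Note two assumptions on my side: a counter-clockwise rotation by 60° and an example value \( \lambda_S = 0.5 \) for the shear parameter; adapt both to the values of the last post if you want to reproduce its numbers exactly.

import numpy as np

# Ellipse of the last post: principal axes a = 2, b = 1, rotated by 60 deg
a, b = 2.0, 1.0
phi = np.radians(60.0)            # assumption: counter-clockwise rotation

# Rotation matrix and diagonal matrix of the axis-parallel ellipse
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
D = np.diag([1.0 / a**2, 1.0 / b**2])

# Matrix A_q^O of the original ellipse: x_o^T o A_O o x_o = 1
A_O = R @ D @ R.T

# Unipotent (upper triangular) shear matrix; lambda_S = 0.5 is an example value
lam_S = 0.5
M_sh = np.array([[1.0, lam_S],
                 [0.0, 1.0]])

# Matrix of the sheared ellipse: A_S = (M_sh^-1)^T o A_O o M_sh^-1
M_inv = np.linalg.inv(M_sh)
A_S = M_inv.T @ A_O @ M_inv

# SVD of the shear matrix: M_sh = U o Sigma o V^T
U, sing, Vt = np.linalg.svd(M_sh)

print("A_O:\n", A_O)
print("A_S:\n", A_S)
print("singular values:", sing)
print("SVD reproduces M_sh:", np.allclose(U @ np.diag(sing) @ Vt, M_sh))
print("inverse via SVD ok :", np.allclose(M_inv, Vt.T @ np.diag(1.0 / sing) @ U.T))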
We have shown how a shear matrix \( \pmb{\operatorname{M}}_{sh} \) transforms the matrix \(\pmb{\operatorname{A}}_{qn}^O \), which defines an un-sheared n-dimensional ellipsoid, into a matrix \(\pmb{\operatorname{A}}_{qn}^S \) defining its sheared counterpart. We have also had a glimpse of the SVD of a shear matrix. The results will enable us in the next post to apply shear operations to a concrete example of a 3-dimensional ellipsoid.