KMeans as a classifier for the WIFI and MNIST datasets – III – KMeans as a classifier for the WIFI-example

In the previous articles of this series

KMeans as a classifier for the WIFI and MNIST datasets – I – Cluster analysis of the WIFI example
KMeans as a classifier for the WIFI and MNIST datasets – II – PCA in combination with KMeans for the WIFI-example

I applied KMeans to the “WIFI” dataset – a small and rather simple training set from the UCI Irvine repository. The 7-dimensional feature space for this example is defined by the strengths of seven different WLAN signals. The data samples are labeled by numbers specifying four different rooms. We have just 2000 samples. But “simple” does not mean that one cannot learn something from it.

An elbow and a silhouette analysis indicated that the data can well be described by 4 to 5 clusters.
A detailed PCA analysis helped us to understand that the data could basically be described by two primary components whose axes were defined by a diagonal in a two-dimensional sub-space of the original feature space (defined by the features “WLAN-0” and “WLAN-3”) plus an orthogonal axis (defined by “WLAN-4”). As the data points segregated into well separated clusters in the coordinate system of the primary components we could expect that they also segregate well when projected onto two 2-dim sub-spaces of the feature space defined by the signal combinations {WLAN-0/WLAN-4} and {WLAN-3/WLAN-4}.

So we projected the results of a KMeans analysis into a 2-dim sub-space of the 7-dim feature space spanned by two selected “features” (WLAN signals WLAN-0 and WLAN-4). And indeed: For 2 special signal- or feature-combinations we saw a field with 4 to 5 well separated clusters.

As we have well separated clusters – either in sub-planes of the feature space or the space defined by the dominant two PCA components – a legitimate question is: Can we use the cluster information for classifying?

Clusters and classification

In the special example of the WIFI data the data points can obviously be divided into different “groups”, each filled with points that belong to a certain “label” or class (in the WIFI case: a room):

In the plots above the colorization of the data points is given by their label. However:
Does this mean that these spatially separated “groups” coincide with the clusters detected by KMeans?

In the next plots I superimposed the data points – first colorized by class and then by cluster with different colors and a slight spatial deviation.

We see that the borders of the class-related point groups on average overlap well with the borders of the clusters. Only a few points show a major deviation.

Is this always the case? Of course not!

We see this already when we just use 2 clusters and colorize the data points according to cluster membership in one of the two clusters:

Oops, all the beauty is gone! Members of the previously visible dark-red, green and pink “clusters” would now certainly be members of the first defined “green” cluster. And members of the visible pink and orange clusters would now be members of the “defined” blue cluster. So: Labels may, but do not always, define clusters.

An important thing we learned by this consideration is that we should use a number of clusters at least equal to the number of labels.

But even then the separation may not work. Let us imagine three rooms: In room “A” we only have male persons, in room “B” only female persons and in the third room “C” female persons on one side of the room and male persons on the other – like in some old-fashioned dancing courses. If we used the persons’ coordinates to define features and the “male”/“female” attributes as labels and tried a cluster analysis in the feature space we would clearly identify three clusters. However, in the cluster for room “C” we would find a mixture of samples with different labels. Spatial vicinity in the feature space does not mean class identity, and different labels do not necessarily imply a big distance in the feature space.

Hyperplanes and clusters

The best you can hope for with respect to a solvable ML problem is that the data points for different labels are distributed such that a complicated, curved hyperplane can be found in the multidimensional feature space which separates groups of data points with the same label sufficiently well. But this hyperplane does not necessarily coincide with the fictitious borders of clusters identified by KMeans. We are lucky if the topologically closed surface of a cluster lies completely within one of the regions which such complex, topologically open hyperplanes carve out of the feature space for data points of a single class.

Not all data ensembles in ML will fall apart into extended clusters, each of which is dominated by some specific label. Ring-like distributions of data samples with identical labels may pose a problem to cluster algorithms – not only in 2 dimensions.

On the other hand: If the labeled data – after some useful transformation – are not well separable in the feature space, then the posed ML task is problematic anyway and the chosen feature definitions may not be appropriate.

Granularity: Hope for label dominance in sufficiently fine grained clusters

What does a label-cluster correlation depend on? Well, if there is one important factor regarding the positions of the centroids of KMeans then it is the number of clusters. In the extreme case of as many clusters as data points a perfectly defined cluster/label association exists – but it is not of much use for an ML task. It just represents the most extreme case of overfitting you can think of and will be useless for new, unknown samples.

But the example with the distribution of male/female samples gives you an indication of what we should try: With a growing number of clusters the chance for a fine-grained separation into clusters residing on one side of a hyperplane separating samples of different labels rises – and with it the chance for a clearer dominance of one label per cluster. This is even true for ring-like data distributions: With more than 4 clusters we may describe a ring-like data distribution quite well.

Therefore: If the sample distribution has some reasonable separation hyperplane at all there is a chance that you may find a number of clusters for which each cluster is dominated by a specific label.

In the case of the WIFI example this is pretty obvious. The following first plot shows the data points colorized according to their labels:

The next plot shows four clusters – with data points colorized according to their cluster membership:

The colors are different (we have no control over this) – but the different data point “clouds” in the pictures coincide rather well. Only some data points do not behave well. We can again use the trick we used in 2 dimensions and superimpose data points with colorization according to cluster and class:

How do we define a classifier by the help of KMeans?

We need a predict()-function which predicts a label from a predicted cluster membership. The recipe is rather simple:

Define the number of clusters you want to use.

  • Use KMeans for a cluster analysis of a training set of samples.
  • Predict the cluster-membership of a sample with the help of the fitted kmeans-object and its predict()-function.
  • Get the labels of the samples belonging to a specific cluster.
  • Find out how many samples with a certain label each cluster contains and compare these numbers.
  • Find the label which contributes most samples to a cluster. Check the relative amount of deviating data points with other labels. Should be sufficiently small…
  • Build a dictionary which associates the cluster number with its dominant label.
  • Build a prediction function for yet unknown data points. It first predicts the cluster and then the associated label.

You should then test the accuracy of your new cluster-based classifier model on test data.
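As an illustration, here is a minimal sketch of such a cluster-based classifier built around Sklearn’s KMeans. The variable names (X_train, y_train, X_test, y_test for the signal data and room labels, n_clus for the number of clusters) are my own placeholders and not taken from any original code:

import numpy as np
from sklearn.cluster import KMeans

# Assumption: X_train, y_train, X_test, y_test are numpy arrays holding the
# WIFI signal strengths and the room labels (e.g. from train_test_split()).
n_clus = 9
kmeans = KMeans(n_clusters=n_clus, random_state=42)
train_clusters = kmeans.fit_predict(X_train)

# Dictionary which associates each cluster number with its dominant label
cluster_to_label = {}
for c in range(n_clus):
    labels_in_c, counts = np.unique(y_train[train_clusters == c], return_counts=True)
    cluster_to_label[c] = labels_in_c[np.argmax(counts)]

def predict_labels(X):
    # First predict the cluster, then map the cluster to its dominant label
    return np.array([cluster_to_label[c] for c in kmeans.predict(X)])

y_pred = predict_labels(X_test)
print("accuracy on test data: ", np.mean(y_pred == y_test))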

More clusters than labels?

What will happen to such an algorithm if you use more clusters than labels? Short answer: Nothing – as long as each cluster is really dominated by a specific label. More precisely: A vast majority of your clusters should exhibit contributions from data points with one specific label to an amount significantly beyond the statistical average. Having more clusters just means that under reasonable conditions more than one cluster predicts a certain label. For the WIFI example this means that we can work with five clusters without getting nervous.

Application to the WIFI example

As the clusters represent the labels quite well we expect very good recall values. The experiment is pretty simple. I just present the resulting confusion matrices for the four labels (identifying rooms) and different numbers of clusters “nclus”. I first show you a presentation with seaborn, which explains how to interpret rows and columns:

nclus = 5

Here I used five (5) clusters and included all samples in the calculation. I.e. I did not separate a training set from a test set of data points.
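For completeness: a heatmap presentation of such a confusion matrix can be produced along the following lines. The arrays y_true and y_pred are placeholders for the room labels and the labels predicted by the cluster-based classifier; this is just a sketch, not the original plotting code:

import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Assumption: y_true holds the true room labels (1..4), y_pred the labels
# predicted by the cluster-based classifier for the same samples.
cm = confusion_matrix(y_true, y_pred)
n_err = cm.sum() - cm.trace()
print("num of wrongly predicted samples: ", n_err, " :: avg_err = ", n_err / cm.sum())

sns.heatmap(cm, annot=True, fmt="d", cmap="Blues")
plt.xlabel("predicted room")
plt.ylabel("true room")
plt.show()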

Confusion matrices for different numbers of clusters

nclus = 4

[[496   0   4   0]
 [  0 425  75   0]
 [  2   0 492   6]
 [  2   0   2 496]]
num of wrongly predicted samples:  91  :: avg_err =  0.0455

The “avg_err” gives you the number of wrongly predicted samples divided by the total number of samples. We see that 4 clusters have a problem with the differentiation of the data points for the room “Diele”.

nclus = 5

[[495   0   5   0]
 [  0 455  45   0]
 [  2   0 493   5]
 [  2   0   2 496]]
num of wrongly predicted samples:  61  :: avg_err =  0.0305

nclus = 6

[[499   0   1   0]
 [  0 452  48   0]
 [  5   0 490   5]
 [  2   0   2 496]]
num of wrongly predicted samples:  63  :: avg_err =  0.0315

nclus = 7

[[499   0   1   0]
 [  0 492   8   0]
 [  4  38 455   3]
 [  2   0   3 495]]
 num of wrongly predicted samples:  59  :: avg_err =  0.0295

nclus = 8

num wrongly predicted samples:  58  :: avg_err =  0.029

nclus = 9

[[499   0   1   0]
 [  0 484  16   0]
 [  4  16 476   4]
 [  2   0   2 496]]
num of wrongly predicted samples:  45  :: avg_err =  0.0225

All data above were derived with the same random_state value passed to KMeans.

However, this result depends on the initial shuffling of the samples, the random_state parameter and the initial statistical distribution of the centroids by KMeans. And other hyperparameters … The next plot shows a slightly different result for nclus = 9:

nclus = 9

This means: Nine clusters give us a reasonably fine-grained resolution in the case of the WIFI example.

But: Note that the problem areas in the confusion matrix have changed. “Diele” is handled a bit better, but “Wohnzimmer” is not so sharply separated as before. This is not astonishing as we saw a significant mix of data of different labels for different clusters above already:

Beyond nclus = 9 we do not get much better – the error dances around an average of 0.025 => 2.5%:

nclus = 10

num of wrongly predicted samples:  55  :: avg_err =  0.0275

nclus = 11

num of wrongly predicted samples:  52  :: avg_err =  0.026

nclus = 25

num of wrongly predicted samples:  40  :: avg_err =  0.02

Being conservative we can say that our simple cluster-based classifier approaches an accuracy of around 97.4%. Not bad, considering the crudeness of our approach! A random forest algorithm reaches something above 98.2% – which is not so much better.

Results after a separation of test data samples

I then divided the data set into 1500 train and 500 test samples.

For 9 clusters I got:

nclus = 9

[[374   0   1   0]
 [  0 362  13   0]
 [  3  17 352   3]
 [  2   0   2 371]]
num wrongly predicted train samples:  41  :: avg_err =  0.027333333333333334
num wrongly predicted test samples:  8  :: avg_err =  0.016

But these numbers depend strongly on the splitting – even if we split stratified. Another run gives:

nclus = 9

[[374   0   1   0]
 [  0 352  23   0]
 [  3   0 369   3]
 [  1   0   2 372]]
num wrongly predicted train samples:  33  :: avg_err =  0.022 
num wrongly predicted test samples:  11  :: avg_err =  0.022

Other runs may even give higher average error values. The variation depends on how many critical data points in the intermixing zone were omitted from the train samples. At least the results are not too different from the ones named above.

Conclusion

In this article I presented a very simple method by which a cluster algorithm can be used as a classifier. When we applied the approach to the rather simple WIFI example we saw that this worked pretty well.

What we should try in addition is to combine the classifier with a dimension reduction based on a PCA analysis. From the results of a previous post we would expect that a cluster classifier should work well on the WIFI data after a transformation and projection of the samples’ data points into a 3-dimensional space spanned by the most important orthogonal main component axes. This is the topic of the next post in this series:

KMeans as a classifier for the WIFI and MNIST datasets – IV – KMeans on PCA transformed data

Ceterum censeo: The worst fascist today who must be denazified is the Putler.

 

KMeans as a classifier for the WIFI and MNIST datasets – II – PCA in combination with KMeans for the WIFI-example

I continue with my series of posts about using KMeans as a classifier for some simple ML datasets. In the last post

KMeans as a classifier for the WIFI and MNIST datasets – I – Cluster analysis of the WIFI example

I applied KMeans to the “WIFI” dataset which was discussed in the German “Linux Magazin”; see the Nov and Dec editions of 2021. We found 4 to 5 well separated clusters in a projection of the data points onto a 2-dimensional space defined by just two out of seven features.

In this post I briefly discuss how this result is related to a PCA analysis. In my opinion the discussion in the “Linux Magazin” was a bit misleading regarding this point.

PCA analysis

A PCA analysis helps to identify the main orthogonal axes of the distribution of the samples’ data points in their multidimensional “feature space”. If the data points for the samples are distributed in all directions and over all regions of the feature space in a similar way we may indeed need all of the feature space’s dimensions to describe the data distribution. Still, we could find some complicated curved hyperplanes which separate groups of data points with identical label quite well.

But often the data points are positioned along certain preferred directions, i.e. the data points are located along specific lines or (multidimensional) flat planes in the feature space – notwithstanding an additional clustering. In such cases the data distribution may exhibit some intrinsic major axes in the feature space AND/OR the distribution may be confined to a subspace of the original feature space. The sub-space is defined and spanned by fewer axes than the original space. We speak of the “primary components” of the data distribution when we refer to the (orthogonal) axes of such a sub-space.

Of course, we can span the full original feature space by the most important primary component axes plus some extra orthogonal axes. I.e., we just need the same number of main components as the dimension of the original feature space. Then we get a new coordinate system which describes the vector space of the original features in a different way: The differences are

  • another orientation of the main component axes in comparison with the original axes,
  • a difference in the position of the origins of the two coordinate systems.

Of course, there is a mathematically well defined transformation which maps coordinates of data points with respect to the original feature space’s axes to coordinates in a coordinate system defined by the main component axes.

Of course, we can project a vector describing a data point onto unit vectors along the axes of either coordinate system. By using these special components we can describe the data distribution in terms of a sub-space spanned by only the most important primary component axes. A projection of ML data points to primary component axes is equivalent to a reduction of the dimensionality of the ML problem. Whenever we find that we can use fewer main component axes than the feature space’s number of dimensions to describe the data distribution reasonably well we can reduce the dimensionality of the problem: We project the original data vectors in the feature space to the axes of the main components’ coordinate system with fewer dimensions than the feature space.

How do we measure the “importance” of a main or primary component?

A metric for the importance of a primary component axis is its contribution to the so called “explained variance”. This quantity measures correlations of data in the original feature space and thus also the amount of information residing in the data.

From a mathematical point of view determining the main component axes corresponds to the diagonalization of the so called covariance matrix and the identification of eigenvectors. (You can find a short explanation in the book of S. Raschka on “Python Machine Learning”, 2016, PACKT, or the book of J. Frochte on “Maschinelles Lernen”, 2019, Carl Hanser.) The eigenvectors define the directions of the main components’ axes. The variation of the data along such a main component axis contributes to the total variance by its measure of specific correlations, i.e. weighted quadratic data point distances. We are interested in finding those main component axes for which the projected data distribution explains most of the data variation.

Note that the axes which a PCA-analysis determines are orthogonal axes. Thus PCA describes a transformation of vectors in the feature space from one coordinate system with orthogonal axes to another coordinate system with orthogonal axes. Geometrically, this can be described by a sequence of a translation, followed by a rotation – and in case of a dimensionality reduction in addition by a projection.

Let us visualize an example. The following picture shows the surface of an asymmetric “ellipsoidal” data distribution in a 3-dim feature space. (Actually, the image does not display a real ellipsoid – but we ignore the differences for reasons of simplicity.) The dark arrows in the picture indicate the orthogonal axes and unit-vectors per dimension of the 3-dim feature space. The colored arrows of the ellipsoid show the direction of the main component axes of the “ellipsoid”.

Regarding the dimensions of the ellipsoid the elongation along the red vector is biggest. The width of the data point distribution in the direction of the blue vector is significantly smaller, but still bigger than in the direction of the green vector. So, we would expect that the two main component axes in the directions of the red and the blue arrows explain most of the data variation in the feature space. The projections of the data point vectors in a coordinate system defined by the main component axes onto the “green” axis would only give us rather small values. Therefore, we can reduce the dimensionality of the problem by describing the data distribution in only two dimensions: We project the vectors of the original data points in the feature space onto the two most important main component axes – in the directions of the red and the blue unit vectors.

Note: The axis in the direction of the dominant elongation of the ellipsoid has a diagonal orientation in the feature space. This means that all of the three original features contribute to the data points’ distribution in this direction. Therefore, the following point must be underlined:

A PCA analysis is not a selection process with respect to the original features. A vector describing a main component axis of the data distribution is a linear combination of all unit vectors along the original axes of the feature space. Normally many – if not all – original features contribute to a main component axis. Without a detailed analysis you cannot assume that only one or two original features determine a primary component axis.

In particular: You cannot assume that only one special feature dominates a main component axis. Unfortunately, the text in the Linux Magazin on the WIFI example could be read and interpreted in this way. So, let us have a closer look at the reason why the projections of the WIFI data from the original 7-dimensional feature space onto a reduced 2-dim space of the signal combinations [WLAN-4, WLAN-0] or [WLAN-3, WLAN-0] worked so well.

How many main components dominate the variance of the WIFI data distribution?

How many main components explain most of the variation of the samples’ data of the WIFI example? How do we get the required information about the contribution to the explained variance from PCA applications? Sklearn’s PCA implementation provides the usual “fit()”-function to perform the PCA calculation for a given data set. But it also provides an array named “explained_variance_” which contains the individual contributions of the main components to the “explained variance“.

So, as a first step, we simply try and apply Sklearn’s PCA()-function directly to the original WIFI data given in their 7-dimensional feature space. I.e. without any scaling. Just as it was done in the Linux Magazin. The bar plot below displays the “importance” of a maximum of 7 main components. The importance of each component is measured by its normalized contribution to the “explained variance”:

An accumulation gives the following percentages:

We see that only two of the main PCA components already explain 85% of the data variance in the feature space. We thus could choose these two main components as our “primary” components. But this does NOT automatically mean that only two features dominate the data distribution in the feature space.
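These numbers can be obtained with a few lines of code. A minimal sketch – I use the normalized explained_variance_ratio_ array here, and X denotes the unscaled 7-dimensional WIFI data, which is an assumption about variable naming on my side:

import numpy as np
from sklearn.decomposition import PCA

# Assumption: X is the (2000, 7) array of raw WLAN signal strengths.
pca = PCA()            # keep all 7 main components
pca.fit(X)

ratios = pca.explained_variance_ratio_    # normalized contribution per component
cumulative = np.cumsum(ratios)            # accumulated percentages
print("individual contributions: ", np.round(ratios, 3))
print("accumulated contributions:", np.round(cumulative, 3))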

Which features determine the primary components’ orientation?

As we have discussed above the projection of a unit vector along the main component axis onto the original axes of the feature-space may give similarly big values for each of the features. It is rather seldom that only a few features determine the direction of a main component’s axis.

But: The WIFI data are (on purpose?) indeed distributed in a very special way. Actually, only two original features already determine the direction of the most important primary component’s axis quite well. And just one feature dominates the second most important PCA component. So, in the WIFI example the first and the second main components are oriented more or less within a 2-dim feature plane and along a special feature axis, respectively. I.e. the data are more or less confined to a 3-dimensional sub-space of the feature space. How and from where did I get this information?

Sklearn’s function “PCA()” returns a reference to an object, which after a call to its method “fit()” has a filled property “components_“. This array gives us the 7 vector-components of each unit vector oriented along a PCA main component axis with respect to the various original axes of the feature space. I.e. we get the components of unit vectors along main component axes in terms of the original coordinate system spanning our feature space. The related coefficients of the vector tell us whether the main component axes are confined to a subspace spanned by just a few original features.
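The array shown below can be reproduced roughly as follows; printing absolute values is my own choice, as only the magnitudes of the vector components matter for the argument:

import numpy as np

# Assumption: pca is the PCA object fitted to the WIFI data as sketched above.
# Rows correspond to the main components (ordered by explained variance),
# columns to the contributions of the original features WLAN-0 ... WLAN-6.
np.set_printoptions(precision=2)
print(np.abs(pca.components_))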

Below, the elements (rows) of the array were reversely sorted by the contribution of the main component to the “explained variance”. So the first two rows correspond to the two most important PCA main components.

[[6.22e-01 7.47e-04 1.03e-02 6.29e-01 2.02e-01 3.03e-01 2.91e-01]
 [1.71e-01 9.05e-02 4.31e-01 1.95e-01 8.50e-01 1.02e-01 7.62e-02]
 [2.41e-01 2.78e-01 2.18e-01 2.90e-01 9.80e-02 5.28e-01 6.67e-01]
 [1.67e-01 3.44e-01 7.05e-01 1.69e-01 4.22e-01 9.65e-02 3.75e-01]
 [1.13e-01 1.77e-01 3.15e-01 1.33e-01 1.82e-01 6.97e-01 5.66e-01]
 [6.37e-01 2.31e-01 3.28e-01 6.45e-01 1.26e-01 1.32e-02 3.22e-02]
 [2.80e-01 8.43e-01 2.50e-01 1.42e-01 2.43e-02 3.52e-01 5.12e-02]]

The first primary component has vector-components along the original axes of the feature space given by

6.22e-01, 7.47e-04, 1.03e-02, 6.29e-01, 2.02e-01, 3.03e-01, 2.91e-01

The features, i.e. the WLAN signals, are numbered in the given order from left to right. Obviously, the features “WLAN-0” and “WLAN-3” dominate the direction of the first primary component.

The second main component

1.71e-01, 9.05e-02, 4.31e-01, 1.95e-01, 8.50e-01, 1.02e-01, 7.62e-02

is instead dominated by the feature “WLAN-4”.

Actually, a closer look shows that the signal “WLAN-6” dominates the third main component with a relatively big value. This might be an indication that we had better use three primary components instead of two … I come back to this point below.

So, what do we learn from the results of the projections of the PCA unit vectors onto the axes of the feature space?

  1. In the very special case of the WIFI example around 3 original features dominate the overall data distribution.
  2. In the WLAN-0/WLAN-3 plane we should see an approximate diagonal distribution of data. Reason: the vector components are of almost equal size.
  3. As we already know from an elbow-analysis we have 4 to 5 clusters. So, we should see them clearly in a 3D-plot for the axes WLAN-0, WLAN-3 and WLAN-4.
  4. If the data are well separated into clusters along the diagonal in the WLAN-0/WLAN-3 plane AND the WLAN-4 direction then they will also be well separated in the 2-dim space of the 2 main PCA components.

Ok, let us visualize it. The next plot shows the data distribution from a view almost perpendicular to the WLAN-3/WLAN-4 plane. The colors indicate the labels of the data (i.e. the rooms where the strength of each of the 7 WLAN signals has been measured).

3D-plot of the WIFI data distribution in the space of the dominant 3 original features – the WLAN-3/WLAN-4 plane

We already see the clustering. The next plot shows the data distribution from a direction almost perpendicular to the WLAN-0/WLAN-4 plane.

3D-plot of the WIFI data distribution in the space of the dominant 3 original features – the WLAN-0/WLAN-4 plane

And now a view from above showing the diagonal distribution of very many data points. We clearly see that there is something strange going on in the “orange” room.

3D-plot of the WIFI data distribution in the space of the dominant 3 original features – the WLAN-3/WLAN-0 plane

So far, so good! We have again identified the clusters which we already got familiar with in my last post.

Data distribution in the vector space of the three most important PCA components

In full consistency with the results derived above we expect a good cluster separation in the plane of the first two main PCA components. These components are called “PCA-1” and “PCA-2” in the following plot:

3D-plot of the transformed WIFI data distribution in the space of the dominant 3 PCA components – the PCA-1/PCA-2 plane

But looking from a different perspective, we see that there still is a significant distribution along the axis of the third main component – at least with the scaling used along the PCA-3 axis.

3D-plot of the transformed WIFI data distribution in the space of the dominant 3 PCA components – the PCA-1/PCA-3 plane

Even when we take into account the different scales of the axes: The spread in z-direction (PCA-3) is relatively big compared with the data spread in the PCA-2 direction. Again, we see that the data indicate three main components. Why did we not get this information already in our bar plot for the “explained variance”?

Working on scaled data

Well, part of the answer to the last question is that we did not really treat the various WLAN signals equally well. Actually, for very simple reasons, a PCA analysis of really independent features with different measurement units should be applied to scaled data. So, just for curiosity’s sake, let us apply Sklearn’s StandardScaler() to our WIFI data ahead of a PCA-analysis. Then, we indeed get a different bar plot:

The elbow is now centered at a point corresponding to 3 main components! Below I show respective 3D-plots for the standardized and PCA-transformed data:

3D-plot of the scaled WIFI data distribution in the space of the dominant 3 PCA components – the PCA-1/PCA-3 plane

3D-plot of the scaled WIFI data distribution in the space of the dominant 3 PCA components – the PCA-1/PCA-2 plane

However, the 5th cluster – a subcluster of the orange one – is no longer so clearly visible as before. This is due to the fact that the standard deviation of the data around the mean value of each feature is adjusted to a value of 1.0 with StandardScaler(). A MinMaxScaler() does a better job:

In the case of the WIFI example there is also a strong counter-argument against scaling:
The individual features and their scales are NOT really independent of each other. A weaker signal or a specific spread around the mean value of a specific signal do actually mean something! When we have multiple maxima in a signal distribution (see the previous post) this carries some important information – and then the adjustment of the standard deviation to a standard value is not a really good idea. This means that it depends on the data and their meaning which kind of scaler one should use ahead of a PCA analysis.
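A sketch of how such scaled variants can be produced ahead of the PCA step – the choice of the scaler is the only difference between the two runs, and X again stands for the raw data array:

from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA

# Assumption: X is the (2000, 7) array of raw WLAN signal strengths.
for scaler in (StandardScaler(), MinMaxScaler()):
    X_scaled = scaler.fit_transform(X)
    pca = PCA(n_components=3)
    X_red = pca.fit_transform(X_scaled)   # data in the space of the 3 main components
    print(type(scaler).__name__, "=> explained variance ratios:",
          pca.explained_variance_ratio_.round(3))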

Conclusions

The existence of a few primary components does not automatically mean that only a few features contribute to the data distribution’s variance in the feature space. However, in the case of the WIFI data example we have a special situation for which only three out of seven features determine the primary components and the directions of the respective preferred axes of the data distribution. We also saw that we may have to scale feature data properly before applying a PCA analysis.

In the next post of this series

KMeans as a classifier for the WIFI and MNIST datasets – III – KMeans as a classifier for the WIFI-example

we shall answer the question whether and how we can use the cluster algorithm KMeans also as a classifier for the WIFI data.

Ceterum censeo: The worst fascist today who really and urgently must be denazified is the Putler.

KMeans as a classifier for the WIFI and MNIST datasets – I – Cluster analysis of the WIFI example

In the November and December 2021 editions of the German “Linux Magazin” R. Pleger discussed a simple but nevertheless interesting example for the application of a cluster algorithm. His test case was based on a dataset of the UCI Irvine. This dataset contains 2000 samples with (fictitious?) data describing WIFI signals which stemmed from seven WLAN spots around a building. The signal strength of each source was measured at varying positions in four different rooms. I call the whole setup the “WIFI example” below.

One objective of the articles in the Linux Magazin was to demonstrate how simple it is today to apply basic Machine Learning methods. In a first step the author used a ML classifier algorithm to determine the location (i.e. the room) of a measuring instrument just from the strengths of the different WIFI signals. This task can be solved by a variety of algorithms – e.g. by a Decision Tree, SVM/SVC or a simple Multilayer Perceptron. The author used Sklearn’s RandomForestTree. This method is a good example for the powerful “Ensemble Learning” technique. When applied to the simple and well structured WIFI example it predicts the rooms for test samples with an accuracy of more than 98%.

The author afterward performed a deeper analysis of the WIFI data via Kmeans, MiniBatchKMeans and PCA. His second article underlined a major question, which sometimes is not taken seriously enough:

Do the data, which we feed into ML algorithms, really cover all aspects of the problem? Is the set of target labels complete or sufficient in the sense that the separation of the samples into labeled groups really reflects the problem’s internal structure? Or do the data contain more information than the labels reveal?

Unfortunately, in my opinion, the Linux Magazin covered an important point, namely the relation of the results of a PCA analysis to a 2-dimensional cluster visualization, in an incomplete and also slightly misleading way. In addition another interesting question was not discussed at all:

Can we use KMeans also as a classifier? How would we do this?

In this series of posts I want to dig a bit deeper into these topics – both for the WIFI example and also for the MNIST dataset. For MNIST we will not be able to visualize clusters as easily as for the WIFI example. Therefore, we should have a clear idea about what we do when we use clusters for classifying.

In this first post I focus on the results of a cluster analysis for the WIFI example. In a second article I will discuss the relation of cluster results to a PCA analysis. A third post will then present a very simple method of how to turn a cluster algorithm into a classifier algorithm. In later articles we shall transfer our knowledge to the MNIST data. More precisely: We shall combine a PCA analysis with a cluster classifier to predict the labels of handwritten digit images. We will use the PCA technique to reduce the dimensions of the MNIST feature space from 784 down to below 80. It will be interesting to see what accuracy we can reach with a relatively crude clustering approach on only about 30 main PCA components. As a side aspect we shall also have a look at standardization and normalization of the MNIST data.

I do not present any full code in the first three posts, as the required Python programs can be built relatively straightforwardly and most of the core statements were already given in the Linux Magazin; below I only give a few hedged sketches. You unfortunately have to buy the articles of the magazine; but see https://www.linux-magazin.de/ausgaben/2021/11/maschinenlernen/. However, as soon as we turn to MNIST I shall provide a Jupyter notebook.

The WIFI example: Two thousand samples, each with data for the signal strength of seven WLAN sources measured in four rooms

You can download the WIFI data set from the following address:
https://archive.ics.uci.edu/ml/machine-learning-databases/00422/wifi_localization.txt
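Loading the data requires only a few lines. I assume here that the file is tab-separated with the seven signal columns followed by the room label; the column names are my own choice:

import pandas as pd

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/00422/wifi_localization.txt"
cols = ["WLAN-" + str(i) for i in range(7)] + ["room"]
df = pd.read_csv(url, sep="\t", header=None, names=cols)

X = df[cols[:-1]].values    # (2000, 7) array of signal strengths
y = df["room"].values       # room labels 1..4
print(df.shape, df["room"].unique())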

The feature space of this example is 7-dimensional: 7 WLAN spots provide WIFI signals in the building. We have 2000 samples. Each sample provides the signal strength of each of the WLAN sources measured at different times and positions within a specific room. An integer number in [1,4] is provided as a label which identifies the room. The following plot shows the interpolated frequency distribution over the signal strength for each of the 7 signals in the four rooms:

Cluster analysis of the WIFI data – more than four rooms?

The original label-data of the WIFI example imply the existence of four rooms. But can we trust this information? The measurements in the room, which we called “Diele” in the plots above, indicate a consistent second peak for both the signals 0 and 3. Is this due to an opening into another room?

A simple method to analyze the inner structure of the distribution of data points in a configuration or feature space is a “cluster analysis”. The KMeans algorithm provides such an analysis for an assumed number of clusters.

KMeans is a basic but important ML method which reveals a lot about the data distribution in feature space and indirectly about the complexity of hyperplanes required to separate data according to their labels. Among other things KMeans determines the positions of cluster centers – the so called centroids – by measuring and systematically optimizing distances of samples to assumed centroids. Actually, the sum over all intra-cluster variances, i.e. the summed quadratic distances of the associated samples to their cluster’s centroid, is minimized. The respective quantity is called the “inertia” of the cluster distribution. See e.g. the excellent book of P. Wilmott, “Machine Learning – an applied mathematics introduction”, on this topic.

A simple method to find out into how many clusters a distribution probably segregates is to look for an elbow in the variation of the inertia with the number of clusters. When we look at the variation of inertia values with the number of potential clusters “k” for the WIFI example we get the following curve:

This indicates an elbow at k=4,5.
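Such an elbow analysis boils down to a simple loop over the number of clusters. A minimal sketch with Sklearn, where X stands for the (2000, 7) array of signal strengths loaded above:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Assumption: X is the (2000, 7) array of WLAN signal strengths.
inertias = []
k_values = range(1, 11)
for k in k_values:
    km = KMeans(n_clusters=k, random_state=42)
    km.fit(X)
    inertias.append(km.inertia_)   # summed squared distances to the centroids

plt.plot(list(k_values), inertias, "o-")
plt.xlabel("number of clusters k")
plt.ylabel("inertia")
plt.show()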

Another method to identify the most probable number of distinct clusters in a multi-dimensional data point distribution is the so called “silhouette analysis”. See the book of A. Geron, “Hands-On Machine Learning with Scikit-Learn, Keras and Tensorflow”, 2nd edition, for a description. For the WIFI example the plots of the silhouette score data support the result of the elbow analysis:

The second plot shows ordered silhouette data for k = 3,4,5,6 clusters. Again, we get the most consistent pictures for k=4 and k=5.
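The silhouette scores can be obtained in a similarly simple way; again X denotes the array of samples and the code is only a sketch:

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

# Assumption: X is the (2000, 7) array of WLAN signal strengths.
for k in (3, 4, 5, 6):
    labels = KMeans(n_clusters=k, random_state=42).fit_predict(X)
    avg_score = silhouette_score(X, labels)        # averaged over all samples
    per_sample = silhouette_samples(X, labels)     # basis for the ordered silhouette plots
    print("k =", k, ": average silhouette score =", round(avg_score, 3))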

So, the data indicate a fragmentation into 4 or 5 clusters. How can we visualize this with respect to the feature space?

Scatter plots for 2-dim sub-spaces of the feature space

A general problem with the visualization of cluster data for multidimensional data is that we are limited to 2, at most 3, dimensions. And a projection down to two dimensions may not reflect the real cluster separation in the multidimensional feature space in a realistic way. But sometimes we are lucky.

We shall later see that there are two primary components which dominate the data and signal distributions in the WIFI example. A major question, however, is whether we will also find that only a few original features contribute dominantly to these major components. A small number of primary components does not mean that each “primary component” depends on only a few original features!

As I did not know the relation of “primary components” to features I just plotted the results of “KMeans” for a variety of 2-dim signal combinations. I used Sklearn’s version of KMeans; due to the very small data ensemble KMeans is applicable without consuming too much CPU time (this will change with MNIST; there we need to invoke MiniBatchKMeans):

Note that the colorization of the data points in all plots was done with respect to the cluster number predicted by KMeans for the samples – and not with respect to their labels.

It is interesting that the projections onto two special feature combinations – namely the WLAN-4/WLAN-0 and WLAN-3/WLAN-0 signal combinations – show a very distinct separation of the clusters.
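A scatter plot of one such projection, colorized by the predicted cluster number, can be produced roughly like this (df and the column names follow the loading sketch given earlier):

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Assumption: df is the dataframe with columns "WLAN-0" ... "WLAN-6" and "room".
X = df[["WLAN-" + str(i) for i in range(7)]].values
clusters = KMeans(n_clusters=5, random_state=42).fit_predict(X)

# Projection onto the WLAN-0 / WLAN-4 plane, colorized by cluster membership
plt.scatter(df["WLAN-0"], df["WLAN-4"], c=clusters, cmap="tab10", s=10)
plt.xlabel("WLAN-0")
plt.ylabel("WLAN-4")
plt.show()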

Four or five clusters ?

The data displayed above depend a bit on the initial distribution of cluster centers as an input into the KMeans algorithm. But for 4 and 5 clusters we get very consistent results. The next plots show the positions of the centroids:

This time the colorization was done with respect to the labels. What we see is: Five clusters represent the situation a bit better than only 4 clusters.

When we align this with the rooms: Five “rooms” may describe the signal variation better than only 4 rooms. The reason for this might be that one of the four rooms has a wall which partially separates different areas from one another. We often find this in “entrances” [German: “Diele”] to houses. Sketches of the rooms in the Linux Magazin article actually show that this is the case. And, of course, such a wall or an opening into another room would have an impact on the damping of the WLAN signals.

Addendum 19.03.2022: Comparing clusters with groups of labeled data points

An important question which we have not answered yet by the images shown above is the following:

How well do clusters coincide with groups of data points having a specific label?

Note that in general you cannot be sure that clusters reflect data points of the same label. Actually, a cluster is only a way to describe a close spatial vicinity of data points in some region of the multidimensional feature space, i.e. some kind of clumping of the data points around some centroids. But spatial vicinity does not necessarily reflect a label: A label border may often separate data points which are very close neighbors. And a cluster may contain a mixture of samples with different labels …

Well, in the case of the WIFI example the identified 4 to 5 clusters match the groups of data points with different labels quite well. Below I superimposed the samples’ data points with different colors: First I colorized the data points according to their label. On top of the resulting scatter plot I placed the same data points again, but this time with a different and transparent colorization according to their cluster association. In addition I shifted the second data layer a bit to get a better contrast:

You see that the areas are not completely identical, but they overlap quite well. Obviously, I used 5 clusters. Also the fifth cluster fits well into a region characterized by just one label.

Conclusion

The simple WIFI example shows that a cluster analysis may give you new insights into the structure of ML data sets which a simple classifier algorithm cannot provide. In the next article

KMeans as a classifier for the WIFI and MNIST datasets – II – PCA in combination with KMeans for the WIFI-example

we shall link the information contained in the “clusters” to the results of a PCA analysis of the WIFI example.

Stay tuned …

Ceterum censeo: The most important living fascist which must be denazified is the Putler.

Pandas – Extending a vocabulary or simple dataframe relatively fast

During some work for an ML project on a large text corpus I needed to extend a personally used reference vocabulary by some complex and unusual German compounds and very branch-specific technical terms. I kept my vocabulary data in a Pandas dataframe. Each “word” there had some additional information associated with it in some extra columns of the dataframe – e.g. the length of a word or a stem or a list of constituting tri-char-grams. I was looking for a fast method to extend the dataframe in a quick procedure with a list of hundreds or thousands of new words.

I tried the df.append() method first and got disappointed with its rather bad performance. I also experimented with the incorporation of some lists or dictionaries. In the end a procedure based on csv-data was by far the most convenient and fastest approach. I list the basic steps below.

In my case I used the lower case character version of the vocabulary words as an index of the dataframe. This is a very natural step. It requires some small intermediate column copies in the step sequence below, which may not be necessary for other use-cases. For the sake of completeness the following list contains many steps which have to be performed only once and which later on are superfluous for a routine workflow.

  1. Step 1: Collect your extension data, i.e. a huge bunch of words, in a LibreOffice Calc file in ods-format or (if you absolutely must) in an MS Excel file. One of the columns of your datasheet should contain data which you later want to use as a (unique) index of your dataframe – in my case a column “lower” (containing the lower case representation of a word).
  2. Step 2: Avoid any operations for creating additional column information which you later can create by Python functions working on information already contained in some dataframe columns. Fill in dummy values into respective columns. (Or control the filling of a dataframe with special data during the data import below)
  3. Step 3: Create a CSV-File containing the collected extension data with all required field information in columns which correspond to respective columns of the dataframe to be extended.
  4. Step 4: Create a backup copy of your original dataframe which you want to extend. Just as a precaution …
  5. Step 5: Copy the contents of the index of your existing dataframe to a specific dataframe column consistent with step 1. In my case I copied the words’ lower case version into a new data column “lower”.
  6. Step 6: Delete the existing index of the original dataframe and create a new basic integer based index.
  7. Step 7: Import the CSV-file into a new and separate intermediate Pandas dataframe with the help of the method pd.read_csv(). Map the data columns and the data formats properly by supplying respective (list-like) information to the parameter list of read_csv(). Control the filling of possibly empty row-fields. Check for fields containing “null” as string and handle these by the parameter “na_filter” if possible (in my case by “na_filter=False”)
  8. Step 8: Work on the freshly created dataframe and create required information in special columns by applying row-specific Python operations with a function and the df.apply()-method. For the sake of performance: Watch out for naturally vectorizable operations whilst doing so and separate them from other operations, if possible.
  9. Step 9: Check for completeness of all information in your intermediate dataframe. Verify that the column structure matches the columns of the original dataframe to be extended.
  10. Step 10: Concatenate the original Pandas dataframe (for your vocabulary) with the new dataframe containing the extension data by using pd.concat() or (simpler) the df.append() method.
  11. Step 11: Drop the index in the extended dataframe by the method df.reset_index(). Afterward recreate a new index by df.set_index(), using a special column containing the data – in my case the column “lower”.
  12. Step 12: Check the new index for uniqueness – if required.
  13. Step 13: If uniqueness is not given but required:
    Apply df = df[~df.index.duplicated(keep=’first’)] to keep only the first occurrence of rows for identical indices. But be careful and verify that this operation really fits your needs.
  14. Step 14: Resort your index (and extended dataframe) if necessary by applying df.sort_index(inplace=True)

Some steps in the list above are of course specific for a dataframe with a vocabulary. But the general scheme should also be applicable for other cases.

From the description you have certainly realized which steps must only be performed once in the beginning to establish a much shorter standard pipeline for dataframe extensions. Some operations regarding the index recreation and re-sorting can also be automated by a simple Python function.
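A condensed sketch of the routine part of such a workflow might look as follows. The file name, column names and the derived-column example are placeholders and not my real vocabulary code:

import pandas as pd

# Assumption: df_voc is the existing vocabulary dataframe, indexed by the column "lower".
# The CSV file contains the new words with the same column structure.
df_new = pd.read_csv("voc_extension.csv", sep=";", na_filter=False)

# Create derived columns by row-wise Python operations (dummy example)
df_new["len"] = df_new["lower"].apply(len)

# Move the index of the original dataframe back into the column "lower"
df_voc = df_voc.reset_index()

# Concatenate, rebuild the index, remove duplicate entries, resort
df_ext = pd.concat([df_voc, df_new], ignore_index=True)
df_ext = df_ext.set_index("lower")
df_ext = df_ext[~df_ext.index.duplicated(keep="first")]
df_ext.sort_index(inplace=True)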

Have fun with Pandas!

TF-IDF – which formula to take in combination with the Keras Tokenizer? And when to calculate TF-IDF by your own code …

When performing computer-based text analysis we sometimes need to shorten our texts by some criterion before we apply machine learning algorithms. One of the reasons could be that a classical vectorization process applied to the original texts would lead to matrices or tensors which are beyond our PC’s memory capabilities.

Another reason for shortening might be that we want to focus our analysis upon words or tokens which are “significant” for the text documents we are dealing with. The individual texts we work with are typically members of a limited collection of texts – a so-called text corpus.

What does “significance” mean in this context? Well, words which are significant for a specific text should single out this text among all other texts of the corpus – or vice versa. There should be a strong and specific correlation between the text and its “significant” tokens. Such a distinguishing correlation could be: The significant tokens appear only in the selected specific document, or they appear there especially often – always in comparison to other texts in the corpus.

It is clear that we need some “measure” for the significance of a token with respect to individual texts. And we somehow need to compare the frequency by which a word/token appears in a text with the frequencies by which the token appears in other texts of our corpus.

For some analyses we might in the end only keep “significant” words which distinguish the texts of the corpus from each other. Note that such a shortening procedure would reduce the full vocabulary, i.e. the set of all unique words appearing in the corpus’ texts. And after shortening the statistical basis for “significance” may have changed.

Due to the impact the choice of a “measure” for a token’s significance may have on our eventual analysis results we must be careful and precise when we discuss our results and their presuppositions. We should name the formula used for the measure of significance.

This leads to the question: Do all authors in the field of Machine Learning [ML] and NLP use the same formula for the “significance” of words or tokens? Are there differences which can be confusing? Oh yes, there are … The purpose of this post is to remind beginners in the field of NLP or text analysis of this fact and to give an overview of the most common approaches. In addition I will discuss some practical aspects and give some snippets of code which reproduce the TF-IDF values which the fast Keras Tokenizer would give you.

For a large corpus the differences between the various formulas for measuring the significance of tokens may be minor and may not change fundamental conclusions. But in my opinion the differences should be checked and at least be named.

TF-IDF values as a measure of a token’s significance – dependency on token, text AND corpus

In NLP a measure of a word’s significance with respect to a specific text of a defined corpus is given by a quantity called “TF-IDF” – “term frequency – inverse document frequency” (see below). TF-IDF values are specific for a word (or token) and a selected text (out of the corpus). We will discuss elements of the formulas in a minute.

Note: Significance values as TF-IDF values will in general depend on the corpus, too.

This is due to the fact that “significance” is based on correlations. As said above: We need to compare token frequencies within a specific text with the token’s frequencies in other texts of the given corpus. Significance is thus rooted in a singular text and the collection of other texts in a specific corpus. Keep a specific text, but change the corpus (e.g by eliminating some texts) – and you will change the significance of a token for the selected text.

If you have somehow calculated “TF-IDF”-values for all the words used in a specific text (of the corpus collection), a simple method to shorten the selected text would be to use a “TF-IDF”-threshold: We keep words which have a “TF-IDF”-value above the defined threshold and omit others from our specific text.

Such a shortening procedure would depend on word- and text-specific values. Another way of shortening could be based on an averaged TF-IDF value of each token evaluated over all documents. We then would get a corpus-specific significance value <TF-IDF> for each of our tokens. Such averaged TF-IDF values for our tokens together with a threshold could also be used for text-shortening.

Whatever method for shortening you choose: “TF-IDF”-values require a statistical analysis over the given ensemble of texts, i.e. the text corpus. The basic statistical data are often collected during the application of a tokenizer to the texts of a given corpus. A tokenizer identifies unique tokens appearing in the texts of a corpus and collects them in a long vector. Such a vector represents the tokenizer’s (and the corpus’) “vocabulary”. The tokenizer vocabulary often is sorted by the total frequency of the tokens in the corpus. In a Python environment the tokenizer’s vocabulary often will be represented by one or more Python dictionary objects.

Technical obstacles may require the explicit calculation of TF-IDF values by a coded formula in your programs

NLP frameworks in most cases provide specific objects and methods to automatically calculate TF-IDF values for the tokens and texts of a corpus during certain analysis runs. But things can become problematic because some tokenizers provide “TF-IDF” data only during a vectorization procedure. By vectorization we mean a digital encoding of the texts with respect to the tokenizer’s vocabulary in a common way for all texts.

An example is one-hot encoding: Each text can be represented by numbers 0,1..,n at positions in a long vector in which each position represents a specific token. 0,1,..n would then mean: In this text the token appears 0,1 or n times. Such a kind of encoding is especially useful for the training of neural networks.

A tool which gives us TF-IDF values after some vectorization is the Keras Tokenizer. I prefer the Keras Tokenizer in my projects because it really is super fast.
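For a small corpus the vectorization-based way of getting TF-IDF values from the Keras Tokenizer looks roughly like this (the three mini-texts are, of course, just stand-ins for a real corpus):

from tensorflow.keras.preprocessing.text import Tokenizer

texts = ["the cat sat on the mat",
         "the dog chased the cat",
         "dogs and cats are animals"]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)        # builds the vocabulary; this step is very fast
tfidf_matrix = tokenizer.texts_to_matrix(texts, mode="tfidf")

# One row per text, one column per vocabulary token (index 0 is reserved)
print(tfidf_matrix.shape)
print(tokenizer.word_index)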

But: You see at once that we may run into severe trouble if we need to feed all of the tokens of a really big corpus into a TF-IDF analysis based on vectorization. For a collection of texts the number of unique tokens may lie in the range of several millions. This in turn leads to very long vectors. The number of vectors to look at is given by the number of texts which in itself may be hundreds of thousands or even millions. You can do the math for RAM requirements with a 16- or 32 Bit-representation yourself. As a consequence you may have to accept that you can do vectorization only batch-wise. And this may be time-consuming and may require intermediate manual interaction with your programs – especially when working with Python code and Jupyter notebooks (see below).

For the analysis of huge text corpora the snake of tokenizing, TF-IDF calculation, reasonable shortening and vectorization for ML may bite its own tail:
We need tf-idf to shorten texts in a reasonable way and to avoid later memory problems during vectorization for ML tasks. But sometimes our tool-kit provides “tf-idf” data only after a first vectorization, which may not be feasible due to the size of our corpus and the size of the resulting vocabulary of the tokenizer.

A typical example is given by the Keras tokenizer. If you have a big corpus with millions of texts and tokens vectorization may not be a good recipe to get your TF-IDF values. Especially in the case of post-OCR applications you may not be allowed to throw away any of the identified tokens before you have corrected them for possible OCR errors. And your ML-mechanism for such correction may depend on the results of some kind of TF-IDF analysis.

In such a situation one must invest some (limited) effort into a “manual” calculation of TF-IDF values. You then have to pick some formula from a text book on NLP and/or ML-based text analysis. But having done so you may soon find out that your (text-book) formula for a “TF-IDF”-calculation does not reproduce the values the tokenizer of a selected NLP framework would have given you after a vectorization of your texts. Note that you can always check for such differences by using an artificially and drastically reduced corpus.

A formula for the TF-IDF calculation performed by the Keras tokenizer is one of the central topics of this post. I sometimes omit the hyphen in TF-IDF below for convenience.

Is it worth the effort to calculate TF-IDF values by your own code?

Tokenizers of an NLP tool-kit do not only identify individual tokens in a text, but are also capable of vectorizing texts. Vectorization leads to the representation of a text by an (ordered) series of integer or float numbers, i.e. a vector, which in a unique way refers to the words of a vocabulary extracted from the text collection:

The indexed position in the vector refers to a specific word in the vocabulary of the text corpus, the value given at this position instead describes the word’s (statistical) appearance in a text in some way.

We saw already that “one-hot” encoding for vectorization may result in a “bag-of-words” model: A word appearing in a text is marked by a “one” (or integer) in an indexed vector referring to tokens of the ordered vocabulary derived by a tokenizer. But vectorization can be provided in different modes, too: The “ones” (1) in a simple “one-hot-encoded” vector can e.g. be replaced by TF-IDF values (floats) of the words (tfidf-mode). This is the case for the Keras Tokenizer: By using respective Keras tokenizer functions you may get the desired TF-IDF values for reducing the texts during a vectorization run. However, all one-hot like encodings of texts come with a major disadvantage:

The length of the word vectors depends on the number of words the tokenizer has identified over all texts in a collection for the vocabulary.

If you have extracted 3 million words (tokens) out of hundreds of thousands of texts you may run into major trouble with the RAM of your PC (and with CPU time). Most tokenizers allow for a (manual) sequential vectorization approach for a limited number of texts to overcome memory problems under such circumstances. What does this mean practically when working with Jupyter notebooks?

Well, if you work with notebooks on a PC with a small amount of RAM – and small may mean 128 GB (!) in some cases – you may have to perform a sequence of vectorization runs, each covering at most some hundreds or thousands of texts. Then you may have to export your results and afterward manually reset your notebook kernel to get rid of the RAM consumption – just because the standard garbage collection of your Python environment may not work fast enough.
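Such a batch-wise approach could look like the following sketch. It assumes a Keras tokenizer that has already been fitted on the full corpus, a list “li_txts” of all texts (as prepared further below) and a freely chosen batch size; the intermediate matrices are written to disk to free the RAM again:

import numpy as np

batch_size = 5000     # a placeholder value - adapt it to your RAM situation
for i_start in range(0, len(li_txts), batch_size):
    batch   = li_txts[i_start : i_start + batch_size]
    m_tfidf = tokenizer.texts_to_matrix(batch, mode='tfidf')
    # export the intermediate result and release the memory again
    np.save(f"tfidf_batch_{i_start}.npy", m_tfidf)
    del m_tfidf

Even then you may run into the garbage collection issue mentioned above and be forced to reset the notebook kernel between runs.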

I recently had this problem in a project with 200,000 texts, the Keras tokenizer and a vocabulary of 2.4 million words (where words with less than 4 characters had already been omitted). The Keras tokenizer produces almost all data relevant for a “manual” calculation of TF-IDF values after it has been applied to a corpus. In my case tokenizing and building a vocabulary for the 200,000 texts took only around 20 secs of CPU time. A manual and sequential approach to create all TF-IDF values via vectorization, however, required about an hour. This was due both to the time the vectorization and tf-idf calculation needed and to the time required for resetting the notebook kernel.

After I had decided to implement the TF-IDF calculation in my own code I could work on the full corpus and get the values within a minute. So, if one has to process a big corpus multiple times in some iterative procedure (e.g. in post-OCR correction) we may talk of a performance difference of hours.

Therefore, I would in general recommend performing TF-IDF calculations with your own code segments when you are confronted with big corpora – and not relying on the (vectorizing) tools of a framework. Besides performance aspects, another reason for TF-IDF functions programmed by yourself is that you know exactly which TF-IDF formula you have used.

TF-IDF formulas: The “IDF”-term – and what does the Keras tokenizer use for it?

The TF-IDF data describe the statistical overabundance of a token in a specific text by some formula measuring the token’s frequency in the selected text in comparison to the frequency over all texts in a weighted and normalized way.

During my own “TF-IDF”-calculations based on some Python code and basic tokenizer data, I, of course, wanted to reproduce the values the Keras tokenizer gave me during my previous vectorization approach. To my surprise it was rather difficult to achieve this goal. Just using a reasonable “TF-IDF”-formula taken from some NLP text-book simply failed.

The reason was that “TF-IDF”-data can be and indeed are calculated in different ways, both in the NLP literature and in NLP frameworks. The Keras tokenizer does it differently than the tokenizer of SciKit – actually for both the TF- and the IDF-part. There is a basic common structure behind a normalized TF-IDF value; there are, however, major differences in the details. Let’s look at both points.

Everybody who has once in his/her life programmed a search engine knows that the significance of a word for a specific text (of an ensemble) depends on the number of occurrences of the word inside the specific text, but also on the frequency of the very same word in all the other texts of a given text collection:

If a word also appears very often in all other texts of a text ensemble then the word is not very significant for the specific text we are looking at.

Examples are typical “stop-words” – like “this” or “that” or “and”. Such words appear in very many texts of a corpus of English texts. Therefore, stop-words are not significant for a specific text.

Thus we expect that a measure of the statistical overabundance of a word in a selected text is a combination of the abundance in this specific text and a measure of the frequency in all the other texts of the corpus. The “TF-IDF” quantity follows this recipe. It is a combination of the so called “term frequency” [tf(t)] with the “inverse document frequency” [idf(t)], with “t” representing a specific token or term:

tfidf(t)   =   tf(t)   *   idf(t)

While the term frequency measures the occurrence or frequency of a word within a selected text, the “idf” factor measures the frequency of a word across the texts of the collection. To get some weighting and normalization into this formula, the “idf”-term is typically based on the natural logarithm of a fraction defined

  • by the number of texts NT comprised by the corpus as the numerator
  • and the number of documents ND(t) in which a specific word or term appears as the denominator.

Once again: A TF-IDF value is always characteristic of a word or term and of the specific text we look at. But the “idf”-term is calculated in different ways in various text-books on text analysis. Most variants try to avoid that the idf-term becomes negative or want to avoid a division by zero; typical examples are listed below (a small code sketch of these variants follows after the list):

  1. idf(t) = log( NT / (ND + 1) )

  2. idf(t) = log( (1 + NT) / (ND + 1) )

  3. idf(t) = log( 1 + NT / (ND + 1) )

  4. idf(t) = log( 1 + NT / ND )

  5. idf(t) = log( (1 + NT) / (ND + 1) ) + 1

Note: log() represents the natural logarithm above.
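For reference, here are the five IDF-variants written down as small Python functions; NT denotes the number of texts in the corpus and ND the number of documents containing the term:

import numpy as np

# the five IDF variants listed above
def idf_1(NT, ND): return np.log(NT / (ND + 1.0))
def idf_2(NT, ND): return np.log((1.0 + NT) / (ND + 1.0))
def idf_3(NT, ND): return np.log(1.0 + NT / (ND + 1.0))
def idf_4(NT, ND): return np.log(1.0 + NT / ND)
def idf_5(NT, ND): return np.log((1.0 + NT) / (ND + 1.0)) + 1.0

# quick comparison for a term appearing in 10 of 1000 texts
for f in (idf_1, idf_2, idf_3, idf_4, idf_5):
    print(f.__name__, round(f(1000, 10), 4))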

Who uses which IDF version?
The second variant appears e.g. in S. Raschka’s book “Python Machine Learning” (2016, Packt Publishing; see also below).

The fourth version in the list above is used by SciKit according to https://melaniewalsh.github.io/Intro-Cultural-Analytics/05-Text-Analysis/03-TF-IDF-Scikit-Learn.html

This is consistent with Raschka’s version insofar as he himself characterizes the SciKit “TF-IDF” formula as:

tfidf(t) = tf(t) * [ idf(t) + 1 ]
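If you want to know what your installed SciKit version actually computes, you do not have to rely on secondary sources: you can fit a TfidfVectorizer on a toy corpus and inspect its idf_ attribute. This is only a quick verification sketch; the parameters norm, smooth_idf and sublinear_tf control the details of the formula used:

from sklearn.feature_extraction.text import TfidfVectorizer

# toy corpus - only for checking the formula of the installed SciKit version
corpus = ["the cat sat on the mat",
          "the dog chased the cat",
          "a dog and a cat"]

# norm=None switches off the additional L2-normalization so that the
# raw tf*idf products become visible
vect = TfidfVectorizer(norm=None, smooth_idf=True, sublinear_tf=False)
X = vect.fit_transform(corpus)

print(vect.get_feature_names_out())   # vocabulary (newer SciKit versions)
print(vect.idf_)                      # the idf-values actually used
print(X.toarray()[0])                 # tf*idf values of the first toy text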

Keras: The third variant is the one you find in the source code of the Keras tokenizer. The strange thing is that you also find a reference in the Keras code which points to a section in a Wikipedia article that actually reflects the fourth form (!) given in the list above.

Source code excerpt of the Keras Tokenizer (as of 10/2021):

.....
.....
elif mode == 'tfidf':
    # Use weighting scheme 2 in
    # https://en.wikipedia.org/wiki/Tf%E2%80%93idf
    tf = 1 + np.log(c)
    idf = np.log(1 + self.document_count /
                 (1 + self.index_docs.get(j, 0)))
    x[i][j] = tf * idf
.....
.....

What we learn from this is that there are several variants of the “IDF”-term out there. So, if you want to reproduce the TF-IDF numbers of a certain NLP framework you had better look into the code of your framework’s classes and functions – if possible.

Variants of the “term frequency” TF? Yes, they do exist!

While I had already become aware of the existence of different IDF-variants, I did not know at all that there are even differences regarding the term frequency “tf(t)“. Normally, one would think that it is just the number describing how often a certain word or term appears in a specific text, i.e. the token’s frequency for the selected text.

Let us, for example, assume that we have turned a specific text via a tokenizer function into a “sequence” of numbers. An entry in this sequence refers to the unique number assigned to a word of a somehow sorted tokenizer vocabulary. A tokenizer vocabulary is typically represented by a Python dictionary where the key is the word itself (or a hash of it) and the value corresponds to a unique number for the word. (Hint: In my applications for texts I always create a supplementary dictionary, which I call “switched_vocab”, with keys and values switched (number => word). Such a dictionary is useful for a lot of analysis steps.)

A sequence can typically be represented by a Python list of numbers “li_seq”: the position in the list corresponds to the word’s position in the text (marked by separators), the number given corresponds to the word’s unique index number in the vocabulary.

Then, with Python 3, a straight-forward code snippet to get simple tf-values (i.e. the sum of a number’s occurrences in the sequence) would be:

from collections import Counter

ind_w   = li_seq[i]       # "i" selects a specific position/word in the sequence
d_count = Counter(li_seq) # occurrences of each word index in the sequence
tf      = d_count[ind_w]

This code creates a dictionary “d_count” which maps each unique word number appearing in the original sequence to the number of occurrences of this word number in the text’s sequence – i.e. in the text we are looking at.

Does the Keras tokenizer calculate and use the tf-term in this manner when it vectorizes texts in tfidf-mode? No, it does not! And this was a major factor for the differences in comparison to the TFIDF-values I naively produced for my texts.

In the terms used above, the Keras tokenizer instead uses a logarithmic value for tf (= TF):

import numpy as np
from collections import Counter

ind_w   = li_seq[i]       # "i" selects a specific position/word in the sequence
d_count = Counter(li_seq)
tf      = 1.0 + np.log(d_count[ind_w])   # logarithmically damped term frequency

This makes a significant difference in the derived “TF-IDF” values – even if one had gotten the “IDF”-term right!
Please note that all of the variants used for the TF- and the IDF-terms have their advantages and disadvantages. You should at least know exactly which formula you use in your analysis. In my project the Keras way of doing TF-IDF was useful, but there may be cases where another choice is appropriate.
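A few numbers show how strongly this logarithmic damping changes the TF-term compared to the raw count:

import numpy as np

# raw term frequency c vs. the logarithmically damped value 1 + log(c)
for c in (1, 5, 10, 100):
    print(c, round(1.0 + np.log(c), 3))
# 1 -> 1.0,  5 -> 2.609,  10 -> 3.303,  100 -> 5.605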

Quick and dirty Python code to calculate TF-IDF values manually for a list of texts with the Keras tokenizer

For reasons of completeness, I outline some code fragments below which may help readers to calculate “TF-IDF”-values consistent with those produced during “sequences_to_matrix”-vectorization runs of the Keras tokenizer (as of 10/2021).

I assume that you already have a working Keras implementation using either CPU or GPU. I further assume that you have gathered a collection of texts (cleansed by some Regex operations) in a column “txt” of a Pandas dataframe “df_rex”. We first extract all the texts into a list (representing the corpus) and then apply the Keras tokenizer:

import numpy as np
from collections import Counter

from tensorflow.keras import preprocessing
from tensorflow.keras.preprocessing.text import Tokenizer

num_words = 1800000    # or whatever number of words you want to be taken into account from the vocabulary  

li_txts = df_rex['txt'].to_list()
tokenizer = Tokenizer(num_words=num_words, lower=True) # converts tokens to lower-case 
tokenizer.fit_on_texts(li_txts)    

vocab   = tokenizer.word_index     # dict: word => unique index number
w_count = tokenizer.word_counts    # dict: word => total number of occurrences in the corpus
w_docs  = tokenizer.word_docs      # dict: word => number of texts containing the word
num_tot_vocab_words = len(vocab) 
    
# Switch vocab - key <> value 
# ****************************
switched_vocab = dict([(value, key) for key, value in vocab.items()])
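The three dictionaries “vocab”, “w_count” and “w_docs” already contain everything we need for the manual TF-IDF calculation. An optional quick look at a few entries helps to understand their structure (the concrete values depend on your corpus, of course):

# optional sanity check of the tokenizer results
print("vocabulary size :", num_tot_vocab_words)
print("some vocabulary entries (word => index)         :", list(vocab.items())[:3])
print("some word counts (word => total occurrences)    :", list(w_count.items())[:3])
print("some document frequencies (word => num of texts):", list(w_docs.items())[:3])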

Tokenizing should be a matter of seconds or a few tens of seconds, depending on the number and length of the texts. In my case with 200,000 texts, each on average with 2000 words, it took 25 secs and produced a vocabulary of about 2.4 million words.

In a next step we create “integer sequences” from all texts:

li_seq_full  = tokenizer.texts_to_sequences(li_txts)
leng_li_seq_full = len(li_seq_full)

Now, we are able to create a super-list of lists – including a list of tf-idf-values per text:

li_all_txts = []   # per text: [text index, list of word indices, list of tf-idf values, list of words]

j_end = leng_li_seq_full
for j in range(0, j_end):
    li_text = []
    li_text.append(j)

    leng_seq = len(li_seq_full[j])
    li_seq     = []
    li_tfidf   = []
    li_words   = []

    # occurrences of each word index within the present text
    d_count  = Counter(li_seq_full[j])
    for i in range(0, leng_seq):
        ind_w    = li_seq_full[j][i] 
        word     = switched_vocab[ind_w]
        
        # calculation of tf-idf
        # ~~~~~~~~~~~~~~~~~~~~~
        # https://github.com/keras-team/keras-preprocessing/blob/1.1.2/keras_preprocessing/text.py#L372-L383
        # Use weighting scheme 2 in https://en.wikipedia.org/wiki/Tf%E2%80%93idf
        dfreq    = w_docs[word]               # document frequency of the word 
        idf      = np.log( 1.0 + leng_li_seq_full / (dfreq + 1.0) )
        tf_basic = d_count[ind_w]             # raw term frequency in this text
        tf       = 1.0 + np.log(tf_basic)     # logarithmically damped TF (Keras style)
        tfidf    = tf * idf 
                
        li_seq.append(ind_w) 
        li_tfidf.append(tfidf) 
        li_words.append(word) 

    li_text.append(li_seq)
    li_text.append(li_tfidf)
    li_text.append(li_words)

    li_all_txts.append(li_text)

leng_li_all_txts = len(li_all_txts)

This last run took about 1 minute in my case. Getting the same numbers with a sequential approach – calculating Keras vectorization matrices in tf-idf mode for around 6000 texts per run, with memory cleansing in between – took me around an hour and required continuous manual system interactions. Admittedly, such a vectorization run provides the encoded text vectors as one of its major products, but we actually do not need these vectors for just evaluating TF-IDF values.
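As said above, you can and should verify such a manual calculation against the Keras results on a drastically reduced number of texts. A sketch of such a cross-check for the first few texts only (the matrix width is given by num_words, so keep the number of checked texts small) could look like this:

import numpy as np

# cross-check on a small subset: compare the manual TF-IDF values with the
# values Keras produces in "tfidf"-mode via sequences_to_matrix()
n_check = 5
m_keras = tokenizer.sequences_to_matrix(li_seq_full[:n_check], mode='tfidf')

for j in range(n_check):
    li_seq_j   = li_all_txts[j][1]   # word indices of text j
    li_tfidf_j = li_all_txts[j][2]   # manually calculated tf-idf values of text j
    for ind_w, tfidf in zip(li_seq_j, li_tfidf_j):
        assert np.isclose(tfidf, m_keras[j, ind_w]), (j, ind_w)
print("Manual TF-IDF values match the Keras tfidf-mode values.")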

Conclusion

In this article I have demonstrated that “TF-IDF”-values can be calculated almost directly from the output of a tokenizer like the Keras Tokenizer. Such a “manual” calculation with one’s own code is preferable to e.g. a Keras based vectorization run in “tf-idf”-mode when both the number of texts and the vocabulary of your text corpus are huge. “tf-idf”-word-vectors may easily reach a length of more than a million entries for reasonably complex text ensembles. This poses RAM problems on many PC-based systems.

With TF-IDF values calculated by your own code functions you get a measure for the significance of words or tokens within a reasonable CPU time and without RAM problems. The evaluated “TF-IDF”-values may afterward help you to shorten your texts in a well-founded way before you vectorize them, i.e. ahead of applying advanced ML-algorithms.

Formulas for TF-IDF values used in the literature and in various NLP tool-kits or frameworks differ with respect to both the TF-term and the IDF-term. You should be aware of this fact and choose one of the formulas given above carefully. You should also specify the formula used during your text analysis work when presenting results to your customers.