When preparing a bunch of texts for Machine Learning [ML] there may come a point where you need to eliminate probable junk words or simply wrongly written words from the texts. This is especially true for scanned texts. Let us assume that you have already applied a tokenizer to your texts and that you have created a "bag of words" [BoW] for each individual text or even a global one for all of your texts.
Now, you may want to compare each word in your bag with a checked list of words - a "reference vocabulary" - which you assume to comprise the most relevant words of a language. If you do not find a specific word of your bag in your reference "vocabulary" you may want to put this word into a second bag for a later, more detailed analysis. Such an analysis may be based on a table where the vocabulary words are split into n-grams of characters. These n-grams will be stored in additional columns added to your wordlist, thus turning it into a 2-dimensional array of data.
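Such a split into character n-grams can be sketched with a few lines of Python (a minimal example of my own; the function name and the choice n=3 are arbitrary, not taken from any library):

```python
def char_ngrams(word, n=3):
    # Return the list of overlapping character n-grams of a word;
    # words shorter than n are returned as a single "gram"
    if len(word) < n:
        return [word]
    return [word[i:i+n] for i in range(len(word) - n + 1)]

print(char_ngrams("aachen"))   # ['aac', 'ach', 'che', 'hen']
```

In a dataframe each of these n-grams could later fill its own column.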
Such tasks require a tool which - among other things -
- is able to load 1-, 2-dimensional and sometimes 3-dimensional data structures in a fast way from CSV-files into series-, table- or cube-like data structures in RAM,
- provides tools to select, filter, retrieve and manipulate data from rows, columns and cells,
- provides tools to operate on a multitude of rows or columns,
- provides tools to create some statistics on the data.
All of it pretty fast - which means that the tool must support the creation of an index or indices and/or support vectorized operations (mostly on columns).
I had read in ML books that Pandas is such a tool for a Python environment. A way to accomplish our task would be to load the reference vocabulary into a Pandas structure and check the words of the BoW(s) against it. This means that you try to find the word in the reference list and evaluate the result (positive or negative). And when you are done with this challenge you may want to retrieve additional information from a 2-dimensional Pandas data structure.
This article is about the performance of some data retrieval experiments I recently did on a wordlist of around 2 million words. The objective was to check the existence of tokenized words of some 200,000 texts, each with around 2000 tokens, against this wordlist, embedded in a Pandas dataframe, and also to retrieve additional information from other columns of the dataframe.
As we talk about scanned texts and OCR treatment it is very probable that the number of tokens you have to compare with your vocabulary is well above 10 million. It is clear that there is a requirement for performance if you want to work with a standard Linux PC.
Multiple ways to retrieve or query information from a Pandas series or dataframe
When I started to really use Pandas some days ago I became a bit overwhelmed by the documentation - and the differences in comparison to databases. After having used a database like MySQL for years I had a certain vision about the handling of "table"-like data and related performance. Well, I had to swallow some camels!
And when I started to really care about performance I also realized that there were very many ways to "query" a Pandas dataframe - and not all of them will give you the same speed in data retrieval.
This article, therefore, dives a bit below the glittering surface of Pandas and looks at different methods to retrieve rows and certain cell values out of a simple "Pandas dataframe". To work with some practical data I used a reference vocabulary for the German language based on Wikipedia articles.
The first objective was very simple: Verify that a certain word is an element in the reference vocabulary.
The second objective was a natural extension: Retrieve rows (with multiple columns) for fitting entries - sometimes multiple entries with different word writings.
I was somewhat astonished to see factors between at least 16 and 10,000 for real data retrieval, in comparison with the fastest solution. Just checking the existence of a word in the wordlist proved to be much faster after having created a suitable index - and not using any data columns at all.
The response times of Pandas depended strongly on the "query" method and the usage of an index.
I hope the information given below and in the next article is useful for other beginners with Pandas. I shall speak of a "query" when I want to select data from a Pandas dataframe and a "resultset" when addressing one or a collection of data rows as the result of a query. Can't forget my time with databases ...
I assume that you already have a valid Pandas installation in a Python 3 environment on your Linux PC. I did my simple experiments with a Jupyter notebook, but, of course, other tools can be used, too.
Loading an example wordlist into a Pandas dataframe
For my small "query" experiments I first loaded a simple list with around 2.1 million words from a text file into a Pandas data structure. This operation created a so called "Pandas series" and also produced a unique index of integers, marking each row of the data.
Then I created two additional columns: The first one with all words written in lower case letters. The second one containing the number of characters of the word's string. By these operations I created a real 2-dim object - a so called Pandas "dataframe".
Let us follow this line of operations as a first step. So, where do we get a wordlist from?
A friendly engineer (Torsten Brischalle) has provided a German word-list based on Wikipedia which we can use as an example.
We first import the "uppercase"-wordlist. You can download from this link. On your Linux PC you expand the 7zip archive by standard Linux tools.
This "uppercase" list has the advantage that an index which we will later base on the lowercase writing of the words will (hopefully) be unique. The more extensive wordlist also provided by Brischalle instead comprises multiple writings for some words. The related index would, therefore, not be unique. We shall see that this has a major impact on the response time of the resulting Pandas dataframe.
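Whether an index is unique can be checked directly via the "is_unique" attribute of a Pandas index. A tiny sketch with made-up data (not taken from the real wordlists):

```python
import pandas as pd

# two different writings which map to the same lowercase index label
s = pd.Series(["Weg", "WEG"], index=["weg", "weg"])
print(s.index.is_unique)   # False - duplicate index labels
print(len(s.loc["weg"]))   # 2 - loc returns a Series here, not a scalar

u = pd.Series(["WEG"], index=["weg"])
print(u.index.is_unique)   # True
```

With duplicate labels a lookup must collect all matching rows, which is one reason why uniqueness matters for response times.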
The wordlists, after 7zip-expansion, all are very simple text-files: Each line contains just one word.
We shall nevertheless work with a 2-dim general Pandas "dataframe" instead of a "series". A reason is that in a real data analysis environment we may want to add multiple columns with more information later on, e.g. columns for n-grams of character sequences constituting the word or for other information such as frequencies, consonant to vowel ratio, etc. And then we would work on 2-dim data structures.
Loading the data into a Pandas dataframe and creating an index based on lowercase word representation
Let us import the wordlist data by the help of some Python code in a Jupyter cell (in my case from a directory "/py/projects/CA22/catch22/Wortlisten/"):
import os
import time
import pandas as pd
import numpy as np

dfw_smallx = pd.read_csv('/py/projects/CA22/catch22/Wortlisten/word_list_german_uppercase_spell_checked.txt',
                         dtype='str', na_filter=False)
dfw_smallx.columns = ['word']
dfw_smallx['indw'] = dfw_smallx['word']

pdx_shape = dfw_smallx.shape
print("shape of dfw_smallx = ", pdx_shape)
pdx_rows = pdx_shape[0]
pdx_cols = pdx_shape[1]
print("rows of dfw_smallx = ", pdx_rows)
print("cols of dfw_smallx = ", pdx_cols)

dfw_smallx.head(8)
You see that we need to import the Pandas module besides other standard modules. Then you find that Pandas obviously provides a function "read_csv()" to import CSV like text files. You find more about it in the Pandas documentation here.
The CSV import should in our case be a matter of a few seconds only.
A column name or column names can be added to a Pandas series or Pandas dataframe, respectively, afterward.
Why did I use the parameter "na_filter"? Well, this was done to handle a special value in the wordlist, namely "NULL". This string is one of the markers which read_csv() by default interprets as a NaN-value! We would get an empty entry in the dataframe for this input value without the named parameter. You find more information on this topic in the Pandas documentation on the "read_csv()"-function.
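The effect can be reproduced with a tiny in-memory CSV (toy data, not the real wordlist):

```python
import io
import pandas as pd

data = "word\nAAL\nNULL\nWEG\n"

# default: the string "NULL" is interpreted as a missing value
df_default  = pd.read_csv(io.StringIO(data), dtype='str')
# na_filter=False: "NULL" survives as an ordinary string
df_nofilter = pd.read_csv(io.StringIO(data), dtype='str', na_filter=False)

print(df_default['word'].iloc[1])    # nan
print(df_nofilter['word'].iloc[1])   # NULL
```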
The reader also notices that I just named the single data column (resulting from the import) 'word' and then copied this column to another new column called 'indw'. I shall use the latter column as an index in a minute. I then print out some information on the dataframe:
shape of dfw_smallx =  (2188246, 2)
rows of dfw_smallx =  2188246
cols of dfw_smallx =  2

            word           indw
0       AACHENER       AACHENER
1     AACHENERIN     AACHENERIN
2  AACHENERINNEN  AACHENERINNEN
3      AACHENERN      AACHENERN
4      AACHENERS      AACHENERS
5        AACHENS        AACHENS
6            AAL            AAL
7           AALE           AALE
Almost 2.2 million words. OK, I do not like uppercase. I want a lowercase representation to be used as an index later on. This gives me the opportunity to apply an operation to a whole column with 2.2 mio words.
The creation of our string based index can be achieved by the "set_index()" function:
dfw_smallx['indw'] = dfw_smallx['word'].str.lower()
dfw_smallx = dfw_smallx.set_index('indw')
dfw_smallx.head(5)
Leading after less than 0.5 secs (!) to:
                        word
indw
aachener            AACHENER
aachenerin        AACHENERIN
aachenerinnen  AACHENERINNEN
aachenern          AACHENERN
aacheners          AACHENERS
Now, let us add one more column containing the length information on the word(s).
This can be done by two methods:
- dfw_smallx['len'] = dfw_smallx['word'].str.len()
- dfw_smallx['len'] = dfw_smallx['word'].apply(len)
The second method is a bit faster (it takes only around 70% of the time), but does not work on NaN cells of a column. In our case this is no problem; we get:
# Add a column for len information
v_start_time = time.perf_counter()
dfw_smallx['len'] = dfw_smallx['word'].apply(len)
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
dfw_smallx.head(3)

Total CPU time  0.3626117290004913

                        word  len
indw
aachener            AACHENER    8
aachenerin        AACHENERIN   10
aachenerinnen  AACHENERINNEN   13
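The different behavior of the two methods on NaN cells can be demonstrated on a small toy series (the timing factor quoted above is from my own runs and will of course vary):

```python
import numpy as np
import pandas as pd

s = pd.Series(['AAL', np.nan, 'WEG'])

# str.len() quietly propagates NaN
print(s.str.len().tolist())    # [3.0, nan, 3.0]

# apply(len) stumbles over the float NaN value
try:
    s.apply(len)
except TypeError as e:
    print("apply(len) raised:", e)
```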
Basics of addressing data in a Pandas dataframe
Ok, we have loaded our reference list of words into a dataframe. A Pandas "dataframe" basically is a 2-dimensional data structure based on Numpy array technology for the columns. Now, we want to address data in specific rows or cells. Below I repeat some basics for the retrieval of single values from a dataframe:
Each "cell" has a two dimensional integer-"index" - a tuple [i,j], with "i" identifying a row and "j" a column. You can use respective integer values by the "iloc"-operator. E.g. dfw_smallx.iloc[2,1] will give you the value "13".
The "loc"-operator instead works with "labels" given to the rows and columns; in the most primitive form as :
dataframe.loc[row label, column label], e.g. dfw_smallx.loc['aachenerinnen', 'len'].
Labels have to be defined. For columns you may define names (often already during construction of the dataframe). For rows you may define an index - as we actually did above. If you want to compare this with databases: You define a primary key (sometimes based on column-combinations).
Other almost equivalent methods
- the "iat"-operator,
- the "at"-operator,
- array-like usage of the column label + row-index,
- and the so called dot-notation
for the retrieval of single values are presented in the following code snippet:
print(dfw_smallx.iloc[2,1])
print(dfw_smallx.iat[2,1])
print(dfw_smallx['len']['aachenerinnen'])
print(dfw_smallx.loc['aachenerinnen', 'len'])
print(dfw_smallx.at['aachenerinnen', 'len'])
print(dfw_smallx.len.aachenerinnen)

13
13
13
13
13
13
Note that the "iat" and "at" operators can only be used for cells, so both row and column values have to be provided; the other methods can be used for more general slicing of columns.
Slicing is in general supported by the ":" notation - just as in NumPy. So, with the notation "labelvalue1 : labelvalue2" one can define slices. This works even for string label values:
words = dfw_smallx.loc['alt':'altersschwach', 'word':'len']
print(words)
                          word  len
indw
alt                        ALT    3
altaachener        ALTAACHENER   11
altablage            ALTABLAGE    9
altablagen          ALTABLAGEN   10
altablagerung    ALTABLAGERUNG   13
...                        ...  ...
altersschnitt    ALTERSSCHNITT   13
altersschnitts  ALTERSSCHNITTS   14
altersschrift    ALTERSSCHRIFT   13
altersschutz      ALTERSSCHUTZ   12
altersschwach    ALTERSSCHWACH   13

[3231 rows x 2 columns]
Queries with conditions on column values - and Pandas objects containing multiple results
Now let us look at some queries with conditions on columns and the form of the "result sets" when more than just a single value is returned in a Pandas response. Multiple return values may mean multiple rows (with one or more column values) or just one row with multiple column values. Two points are noteworthy:
- Pandas produces a new dataframe or series with multiple rows if multiple values are returned. Whenever we get a Pandas "object" with an internal structure as a Pandas response, we need to narrow down the result to the particular value we want to see.
- To grasp a certain value you need to include some special methods already in the "query" or to apply a method to the result series or dataframe.
An interesting type of "query" for a Pandas dataframe is provided by the "query()"-function: it allows us to retrieve rows or single values by conditions on column entries. But conditions can also be supplied when using the "loc" operator:
w1 = dfw_smallx.loc['null', 'word']

pd_w2 = dfw_smallx.loc['null']       # resulting in a series
w2 = pd_w2['word']

pd_w3 = dfw_smallx.loc[dfw_smallx['word'] == 'NULL', 'word']
w3 = pd_w3.iloc[0]

pd_w4 = dfw_smallx.query('word == "NULL"')
w4 = pd_w4.iloc[0,0]

w5 = dfw_smallx.query('word == "NULL"').iloc[0,0]
w6 = dfw_smallx.query('word == "NULL"').word.item()

print("w1 = ", w1)
print("pd_w2 = ", pd_w2)
print("w2 = ", w2)
print("pd_wd3 = ", pd_w3)
print("w3 = ", w3)
print("w4 = ", w4)
print("w5 = ", w5)
print("w6 = ", w6)
I have added a prefix "pd_" to some variables where I expected a Pandas dataframe to be the answer. And really:
w1 =  NULL
pd_w2 =  word    NULL
len         4
Name: null, dtype: object
w2 =  NULL
pd_wd3 =  indw
null    NULL
Name: word, dtype: object
w3 =  NULL
w4 =  NULL
w5 =  NULL
w6 =  NULL
Noteworthy: For loc (in contrast to iloc) the last value of the slice definition is included in the result set.
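This inclusive behavior of loc - versus the exclusive upper bound of iloc - can be verified on a tiny dataframe of my own:

```python
import pandas as pd

df = pd.DataFrame({'len': [3, 4, 3]}, index=['aal', 'frau', 'weg'])

print(len(df.loc['aal':'frau']))   # 2 - the label 'frau' IS included
print(len(df.iloc[0:1]))           # 1 - position 1 is NOT included
```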
Retrieving data by a list of index values
As soon as you dig a bit deeper into the Pandas documentation you will certainly find the following way to retrieve multiple rows by providing a list of index values:
# Retrieving col values by a list of index values
inf = ['null', 'mann', 'frau']
wordx = dfw_smallx.loc[inf, 'word']
wx = wordx.iloc[0:3]     # resulting in a Pandas series
print(wx.iloc[0])
print(wx.iloc[1])
print(wx.iloc[2])
NULL
MANN
FRAU
The variety of options - even in our very simple scenario of retrieving values from a wordlist (with an additional column) - is almost overwhelming. They all serve their purpose, depending on the structure of the dataframe and your knowledge of the data positions.
But actually, in our scenario for analyzing BoWs, we have a very simple task ahead of us: We just want to check whether a word or a list of words exists in the list, i.e. if there is an entry for a word (written in small letters) in the list. What about the performance of the different methods for this task?
Actually, there is a very simple answer for the existence check - giving you maximum performance.
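One plausible candidate - and this is only a sketch on toy data of my own, the actual measurements follow in the next article - is a plain membership test against the index itself, without touching any data column:

```python
import pandas as pd

dfw = pd.DataFrame({'word': ['AAL', 'FRAU', 'MANN']},
                   index=['aal', 'frau', 'mann'])

# existence check via the (hash-based) index - no column data retrieved
print('frau' in dfw.index)    # True
print('xyzzy' in dfw.index)   # False
```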
But to learn a bit more about the performance of different forms of Pandas queries we also shall look at methods performing some real data retrieval from the columns of a row addressed by some (string) index value.
These will be the topics of the next article. Stay tuned ...
Various ways of "querying" Pandas dataframes
The book "Mastering Pandas" by Ashish Kumar, 2nd edition, 2019, Packt Publishing Ltd., may be of help - though it does not really comment on performance issues on this level.