Blender – even on old laptops a graphics card increases rendering performance

My present experiments with Blender on my old laptop take considerable time to render – especially animations. So, I got interested in whether rendering on the laptop’s old Nvidia card, a GT 645M, would make a difference in comparison to rendering on the available 8 hyperthreaded cores of the CPU. The laptop’s CPU is an old one, too, namely an i7-3632QM. The laptop’s operating system is Opensuse Leap 15.3. The system uses Optimus technology. To switch between the Nvidia card and the Intel graphics I invoke Suse’s Prime Select application on KDE.

I got a factor of 2 up to 5.2 faster rendering on the GPU in comparison to the CPU. The difference depends on multiple factors. The number of CPU cores used is an important one.

How to activate GPU rendering in Blender?

Basically three things are required: (1) A working recent Nvidia driver (with compute components) for your graphics card. (2) A certain setting in Blender’s preferences. (3) A setting for the Cycles renderer.

Regarding the CUDA toolkit I quote from Blender’s documentation:

Normally users do not need to install the CUDA toolkit as Blender comes with precompiled kernels.

With respect to required Blender settings one has to choose a CUDA capable device via the menu point “Preferences >> System”:

You may also select both the GPU and the CPU; then rendering will be done on both devices. Unfortunately, my graphics card only supports a relatively low CUDA compute capability. The Nvidia driver I used is version 470.103.01, installed via Opensuse’s Nvidia community repository:

In addition, you must set an option for the Cycles renderer:
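
If you prefer scripting over clicking, settings (2) and (3) can also be made via Blender’s Python API, e.g. in the Scripting workspace. A minimal sketch – property names as in Blender 3.1, they may differ in other versions:

import bpy

# (2) Preferences >> System: choose CUDA and enable the compute device(s)
cprefs = bpy.context.preferences.addons['cycles'].preferences
cprefs.compute_device_type = 'CUDA'
cprefs.get_devices()                  # refresh the device list
for dev in cprefs.devices:
    dev.use = (dev.type == 'CUDA')    # set True for CPU entries, too, if wanted

# (3) Cycles renderer option: render on the GPU
bpy.context.scene.cycles.device = 'GPU'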

With all these settings I got a factor of 2 up to > 6 faster rendering on the GPU in comparison to a CPU with multiple cores.

The difference in performance, of course, depends on

  • the number of threads used on the CPU with 8 (hyperthreaded) cores available to the Linux OS
  • tiling – more precisely the “tile size” – in the case of both the GPU and the CPU

All other render options, with the exception of “Fast GI”, were kept constant during the experiments.
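
For reproducible experiments both varied parameters can be set per script as well. A sketch, again assuming Blender 3.1, where Cycles uses a single tile size value:

import bpy

scene = bpy.context.scene
scene.render.threads_mode = 'FIXED'   # otherwise Blender auto-detects the thread number
scene.render.threads = 4              # number of CPU threads to render with
scene.cycles.tile_size = 256          # tile size used by Cycles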

Scene Setup

To give Blender’s Cycles renderer something to do I set up a scene with the following elements:

  • a mountain-like landscape (via the A.N.T. Landscape add-on) with a subdivision of 256 to 128 – plus a subdivision modifier (Catmull-Clark, render level 2, limit surface quality 3) – plus a simple procedural texture with some noise and bumps
  • a plane with an “ocean” modifier (no repetition, waves + noisy bump texture for the normal to simulate waves)
  • a world with a sky texture of the Nishita type (blue sky due to a high air/oxygen value, some dust and a sun just above the horizon)

The scene looked like this:

The central red rectangle marks the camera perspective and the area to be rendered. With 80 samples and a resolution of 1200×600 we get:

The hardest part for the renderer is the reflection on the water (ocean with waves and texture). The “landscape” also requires some time. The Nishita world (i.e. the sky with the sun), however, is rendered pretty fast.

Required time for rendering on multiple CPU cores

I used 40 samples to render – no denoising, progressive multi-jitter sampling, 0 minimum bounces.
Other settings can be found here:


The number of threads, the tile size and the use of the Fast GI approximation were varied.
The resolution was chosen to be 1200×600 px.
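
For reference, the fixed render settings correspond to the following scripted setup – a sketch under the same Blender 3.1 assumptions as above:

import bpy

scene = bpy.context.scene
scene.cycles.samples = 40                                   # render samples
scene.cycles.use_denoising = False                          # no denoising
scene.cycles.sampling_pattern = 'PROGRESSIVE_MULTI_JITTER'  # progressive multi-jitter
scene.cycles.min_light_bounces = 0                          # 0 minimum bounces
scene.render.resolution_x = 1200                            # resolution in px
scene.render.resolution_y = 600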

All data below were measured on a flatpak installation of Blender 3.1.2 on Opensuse Leap 15.3.

tile size   threads   Fast GI   time (secs)
    64          2       no        82.24
   128          2       no        81.13
   256          2       no        81.01
    32          4       no        45.63
    64          4       no        43.73
   128          4       no        43.47
   256          4       no        43.21
   512          4       no        44.06
   128          8       no        31.25
   256          8       no        31.04
   256          8       yes       26.52
   512          8       no        31.22

A tile size of 256×256 seems to provide an optimum regarding rendering performance. In my experience this depends heavily on the scene and the chosen image resolution.

“Fast GI” gives you a slight, but noticeable improvement. The differences in the rendered picture could only be seen in relatively tiny details of my special test case. It may be different for other scenes and illumination.

Note: With 8 CPU cores activated my laptop was stressed regarding CPU temperature: It went up to 81° Celsius.

Required time for rendering on the mobile GPU

Below are the time consumption data for rendering on the mobile Nvidia GT 645M:

tile size   Fast GI   time (secs)
    64        no        18.30
   128        no        16.47
   256        no        15.56
   512        no        15.41
  1024        no        15.39
  1200        no        15.21
  1200        yes       12.80

Bigger tile sizes improve the GPU rendering performance! This may be different for rendering on a CPU, especially for small scenes; there you have to find an optimum for the tile size. Again, we see an effect of Fast GI.

Note: The temperature of the mobile graphics card never rose above 58° Celsius. I measured this whilst rendering a much bigger image of 4800×2400 px. I therefore think that the temperature stress Blender rendering exerts on a GPU is relatively small in comparison to the heat stress on a CPU.

Required time for rendering both on the CUDA capable mobile GPU and the CPU

One can activate rendering on the CPU in addition to the CUDA capable GPU in the “Preferences” settings. With 4 CPU cores this brings you down to around 11 secs, with 8 cores down to 10 secs.

tile size   threads   Fast GI   time (secs)
    64          4       no        11.01
   128          8       no        10.08

Conclusion

Even on an old laptop with Optimus technology it is worthwhile to use a CUDA capable Nvidia graphics card for Cycles based rendering in Blender experiments. The rise in temperature was relatively low in my case. The gain in performance may range from a factor 2 to 5 depending on how many CPU cores you can invoke without overheating your laptop.


Pandas – Extending a vocabulary or simple dataframe relatively fast

During some work for a ML project on a large text corpus I needed to extend a personally used reference vocabulary by some complex and unusual German compounds and very branch-specific technical terms. I kept my vocabulary data in a Pandas dataframe. Each “word” there had some additional information associated with it in some extra columns of the dataframe – e.g. the length of a word, a stem, or a list of constituting tri-char-grams. I was looking for a fast method to extend the dataframe with a list of hundreds or thousands of new words.

I tried the df.append() method first and was disappointed by its rather bad performance. I also experimented with the incorporation of some lists or dictionaries. In the end a procedure based on csv-data was by far the most convenient and fastest approach. I list the basic steps below.

In my case I used the lower case character version of the vocabulary words as an index of the dataframe. This is a very natural step. It requires some small intermediate column copies in the step sequence below, which may not be necessary for other use-cases. For the sake of completeness the following list contains many steps which have to be performed only once and which are superfluous later on in a routine workflow.

  1. Step 1: Collect your extension data, i.e. a huge bunch of words, in a Libreoffice Calc-file in ods-format or (if you absolutely must) in an MS Excel-file. One of the columns of your datasheet should contain data which you later want to use as a (unique) index of your dataframe – in my case a column “lower” (containing the lower case representation of a word).
  2. Step 2: Avoid any operations for creating additional column information which you can later create by Python functions working on information already contained in other dataframe columns. Fill in dummy values into the respective columns. (Or control the filling of the dataframe with special data during the data import below.)
  3. Step 3: Create a CSV-File containing the collected extension data with all required field information in columns which correspond to respective columns of the dataframe to be extended.
  4. Step 4: Create a backup copy of your original dataframe which you want to extend. Just as a precaution.
  5. Step 5: Copy the contents of the index of your existing dataframe to a specific dataframe column consistent with step 1. In my case I copied the words’ lower case version into a new data column “lower”.
  6. Step 6: Delete the existing index of the original dataframe and create a new basic integer based index.
  7. Step 7: Import the CSV-file into a new and separate intermediate Pandas dataframe with the help of the method pd.read_csv(). Map the data columns and the data formats properly by supplying respective (list-like) information to the parameter list of read_csv(). Control the filling of possibly empty row-fields. Check for fields containing “null” as string and handle these by the parameter “na_filter” if possible (in my case by “na_filter=False”)
  8. Step 8: Work on the freshly created dataframe and create required information in special columns by applying row-specific Python operations with a function and the df.apply()-method. For the sake of performance: Watch out for naturally vectorizable operations whilst doing so and separate them from other operations, if possible.
  9. Step 9: Check for completeness of all information in your intermediate dataframe. Verify that the column structure matches the columns of the original dataframe to be extended.
  10. Step 10: Concatenate the original Pandas dataframe (for your vocabulary) with the new dataframe containing the extension data by using pd.concat() or (simpler, but slower) df.append().
  11. Step 11: Drop the index of the extended dataframe by the method df.reset_index(). Afterward recreate a new index by df.set_index(), using a special column containing the data – in my case the column “lower”.
  12. Step 12: Check the new index for uniqueness – if required.
  13. Step 13: If uniqueness is not given but required:
    Apply df = df[~df.index.duplicated(keep='first')] to keep only the first occurrence of rows with identical indices. But be careful and verify that this operation really fits your needs.
  14. Step 14: Resort your index (and extended dataframe) if necessary by applying df.sort_index(inplace=True).

Some steps in the list above are of course specific to a dataframe with a vocabulary. But the general scheme should also be applicable to other cases; a condensed code sketch follows below.
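
Condensed into code, the core of the scheme (steps 5 to 14) might look like the following sketch, assuming an existing vocabulary dataframe df_voc. The file name 'extension.csv' and the columns 'lower' and 'len' are illustrative placeholders, not taken from my actual project:

import pandas as pd

# steps 5/6: move the string index into a regular column, reset to an integer index
df_voc['lower'] = df_voc.index
df_voc.reset_index(drop=True, inplace=True)

# step 7: import the extension data; na_filter=False keeps literal "null" strings
df_ext = pd.read_csv('extension.csv', dtype='str', na_filter=False)

# step 8: derive additional column information, vectorized where possible
df_ext['len'] = df_ext['lower'].str.len()

# step 10: concatenate (much faster than appending rows one by one)
df_voc = pd.concat([df_voc, df_ext], ignore_index=True)

# steps 11-14: recreate, deduplicate and sort the string based index
df_voc.set_index('lower', drop=False, inplace=True)
df_voc = df_voc[~df_voc.index.duplicated(keep='first')]
df_voc.sort_index(inplace=True)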

From the description you have certainly realized which steps must only be performed once in the beginning to establish a much shorter standard pipeline for dataframe extensions. Some operations regarding the index-recreation and re-sorting can also be automated by a simple Python function.

Have fun with Pandas!

Performance of data retrieval from a simple wordlist in a Pandas dataframe with a string based index – II

In my last post I set up a Pandas dataframe with a column containing a (German) wordlist of around 2.2 million words. We created a unique string based index for the dataframe from a column with the “lower case” writing of the words. My eventual objectives are

  1. to find out whether a string-like token out of some millions of tokens is a member of the wordlist or not,
  2. to compare n-grams of characters (i.e. sub-strings) of millions of given strange string tokens with the n-grams of each word in the wordlist.

In the first case a kind of “existence query” on the wordlist is of major importance. We could work with a condition on a row value or somehow use the string based index itself. For the second objective we need requests on column values with “OR” conditions, or again a kind of existence query on individual columns which we turn into index structures beforehand.

I found it interesting and a bit frustrating that a lot of introductory articles on the Internet and even books do not comment on performance. In this article we therefore compare the performance of different forms of simple data requests on a Pandas dataframe. To learn a bit more about Pandas’ response times, we extend the data retrieval requests a bit beyond the objectives listed above: We are going to look for rows where conditions for multiple words are fulfilled.

For the time being we restrict our experiments to a dataframe with just one UNIQUE index, i.e. we do not yet work with a multi-index. However, at the end of this article, I am going to look a bit at a dataframe with a NON-UNIQUE index, too.

Characteristics of the dataframe and the “query”

We work on a Pandas dataframe “dfw_smallx” with the following characteristics:

pdx_shape = dfw_smallx.shape
print("shape of dfw_smallx = ", pdx_shape)
pdx_rows = pdx_shape[0]
pdx_cols = pdx_shape[1]
print("rows of dfw_smallx = ", pdx_rows)
print("cols of dfw_smallx = ", pdx_cols)
print("column names", dfw_smallx.columns)
print("index", dfw_smallx.index)
print("index is unique: ", dfw_smallx.index.is_unique)
print('')
print(dfw_smallx.loc[['aachener']])
shape of dfw_smallx =  (2188246, 3)
rows of dfw_smallx =  2188246
cols of dfw_smallx =  3
column names Index(['lower', 'word', 'len'], dtype='object')
index Index(['aachener', 'aachenerin', 'aachenerinnen', 'aachenern', 'aacheners',
       'aachens', 'aal', 'aale', 'aalen', 'aales',
       ...
       'zynisches', 'zynischste', 'zynischsten', 'zynismus', 'zypern',
       'zyperns', 'zypresse', 'zypressen', 'zyste', 'zysten'],
      dtype='object', name='indw', length=2188246)
index is unique:  True

             lower      word  len
indw                             
aachener  aachener  AACHENER    8

The only difference to the dataframe created in the last article is the additional column “lower”, repeating the index. As said, the string based index of this dataframe is unique (checked by dfw_smallx.index.is_unique). At the end of this post we shall also have a look at a similar dataframe with a non-unique index.

Query: For a comparison we look at different methods to answer the following question: Are there entries for the words “null”, “mann” and “frau” in the list?

We apply each method a hundred times to get some statistics. I did the experiments on a CPU (i7-6700K). The response time depends a bit on the background load – I took the best result out of three runs.

Unique index: CPU Time for checking the existence of an entry within the index itself

There is a very simple answer to the question of how one can check the existence of a value in a (string based) index of a Pandas dataframe. We just use

“(‘STRING-VALUE‘ in df.index)”!

Let us apply this for three values

b1 = 0; b2=0; b3=0;  
v_start_time = time.perf_counter()
for i in range(0, 100):
    if 'null' in dfw_smallx.index:
        b1 = 1
    if 'mann' in dfw_smallx.index:
        b2=1 
    if 'frau' in dfw_smallx.index:
        b3=1
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
b4 = 'gamling' in dfw_smallx.index 
print(b1, b2, b3, b4)
Total CPU time  0.00020675300038419664
1 1 1 False

This gave me a total time on my old PC of about 2.1e-4 secs – which, as we are going to see, is a pretty good value for Pandas!

Total time 2.1e-4 secs.    Per query: 6.9e-7 secs.

Unique index: CPU Time for checking the existence of an entry with a Python dictionary

It is interesting to compare the query time required for a simple dictionary.

We first create a usable dictionary with the lower case word strings as the index:

ay_voc = dfw_smallx.to_numpy()
print(ay_voc.shape)
print(ay_voc[0:2, :])

ay_lower = ay_voc[:,0].copy()
d_lower = dict(enumerate(ay_lower))
d_low   = {y:x for x,y in d_lower.items()}
print(d_lower[0])
print(d_low['aachener'])
(2188246, 3)
[['aachener' 'AACHENER' 8]
 ['aachenerin' 'AACHENERIN' 10]]
aachener
0

And then:

b1 = 0; b2 = 0; b3 = 0
v_start_time = time.perf_counter()
for i in range(0, 100):
    if 'null' in d_low:
        b1 = 1
    if 'mann' in d_low:
        b2=1 
    if 'frau' in d_low:
        b3=1    
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)

print(b1, b2, b3)
print(d_low['mann'], d_low['frau'])
Total CPU time  8.626400085631758e-05
1 1 1
1179968 612385

Total time 8.6e-5 secs.    Per query: 2.9e-7 secs.

Regarding the verification of the existence of a given string value, a dictionary with string based keys is almost a factor of 2.4 faster than related queries on an indexed Pandas series or dataframe!

Unique index: CPU Time for direct “at”-queries by providing individual index values

Now, let us start with repeated query calls on our dataframe with the “at”-operator:

v_start_time = time.perf_counter()
for i in range(0, 100):
    wordx = dfw_smallx.at['null', 'word']
    wordy = dfw_smallx.at['mann', 'word']
    wordz = dfw_smallx.at['frau', 'word']
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
Total CPU time  0.0013257559985504486

Total time: 1.3e-3 secs.    Per query: 4.4e-6 secs.
This approach is slower by factors of 6.5 and 15 than the fastest solutions for Pandas and the dictionary, respectively.

Unique index: CPU Time for direct “loc”-queries by providing individual index values

We now compare this with the same queries – but with the “loc”-operator:

v_start_time = time.perf_counter()
for i in range(0, 100):
    wordx = dfw_smallx.loc['null', 'word']
    wordy = dfw_smallx.loc['mann', 'word']
    wordz = dfw_smallx.loc['frau', 'word']
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
Total CPU time  0.0021894429992244113

Total time: 2.2e-3 secs.    Per query: 7.3e-6 secs.
More than a factor of 10.6 and 25.4 slower than the fastest Pandas solution and the dictionary, respectively.

Unique index: CPU Time for a query based on a list of index values

Now, let us use a list of index values – something which would be typical for a programmer who wants to save some typing time:

# !!!! Much longer CPU time for a list of index values  
inf = ['null', 'mann', 'frau']
v_start_time = time.perf_counter()
for i in range(0, 100):
    wordx = dfw_smallx.loc[inf, 'word'] # a Pandas series 
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
print(wordx)
Total CPU time  0.037733839999418706
indw
null    NULL
mann    MANN
frau    FRAU
Name: word, dtype: object

Total time: 3.8e-2 secs.    Per query: 1.3e-4 secs.
More than a factor of 182 and 437 slower than the fastest Pandas solution and the dictionary, respectively.

Unique index: CPU Time for a query based on a list of index values with index.isin()

We now try a small variation:

ix = dfw_smallx.index.isin(['null', 'mann', 'frau'])
v_start_time = time.perf_counter()
for i in range(0, 100):
    wordx = dfw_smallx.loc[ix, 'word']
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
Total CPU time  0.058356915000331355

Total time: 5.8e-2 secs.    Per query: 1.9e-4 secs.
We are losing ground again.
More than a factor of 282 and 667 slower than the fastest Pandas solution and the dictionary, respectively.

Unique index: CPU Time for a query based on a list of index values with index.isin() within loc()

Yet another seemingly small variation:

inf = ['null', 'mann', 'frau']
v_start_time = time.perf_counter()
for i in range(0, 100):
    wordx = dfw_smallx.loc[dfw_smallx.index.isin(inf), 'word']
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
Total CPU time  6.466159620998951

OOOOPs!
Total time: 6.47 secs.    Per query: 2.2e-2 secs.
More than a factor of 31000 and 75000 slower than the fastest Pandas solution and the dictionary, respectively.

Unique index: Query by values for a column and ignoring the index

What happens if we ignore the index and query by values of a column?

v_start_time = time.perf_counter()
for i in range(0, 100):
    pdq = dfw_smallx.query('indw == "null" | indw == "mann" | indw == "frau" ')
    #print(pdq)
    w1 = pdq.iloc[0,0]
    w2 = pdq.iloc[1,0]
    w3 = pdq.iloc[2,0]
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
Total CPU time  16.747838952000166

Well, well – the worst result so far!
Total time: 16.75 secs.    Per query: 5.6e-2 secs.
More than a factor of 81000 and 194000 slower than the fastest Pandas solution and the dictionary, respectively.

The following takes even longer:

v_start_time = time.perf_counter()
for i in range(0, 100):
    word3 = dfw_smallx.loc[dfw_smallx['word'] == 'NULL', 'word']
    word4 = dfw_smallx.loc[dfw_smallx['word'] == 'MANN', 'word']
    word5 = dfw_smallx.loc[dfw_smallx['word'] == 'FRAU', 'word']
    w4 = word3.iloc[0]
    w5 = word4.iloc[0]
    w6 = word5.iloc[0]
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
Total CPU time  22.6538158809999

Total time: 22.65 secs.    Per query: 7.6e-2 secs.
More than a factor of 109000 and 262000 slower than the fastest Pandas solution and the dictionary, respectively.

However:

v_start_time = time.perf_counter()
for i in range(0, 100):
    pdl = dfw_smallx.loc[dfw_smallx['word'].isin(['MANN', 'FRAU', 'NULL' ]), 'word']
    w6 = pdl.iloc[0]
    w7 = pdl.iloc[1]
    w8 = pdl.iloc[2]
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)

This gives us 6.576 secs again.

Unique index: Query by values of a dictionary and ignoring the index

Here a comparison to a dictionary is interesting again:

v_start_time = time.perf_counter()
b1 = 0; b2 = 0; b3 = 0
for i in range(0, 100):
    if 'null' in d_lower.values():
        b1 = 1
    if 'mann' in d_lower.values():
        b2=1 
    if 'frau' in d_lower.values():
        b3=1    
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
print(b1, b2, b3)
Total CPU time  4.572028649003187
1 1 1

So a dictionary is faster than Pandas – even if we ignore the index and query for certain values!

Intermediate conclusions

What do the above results tell us about the handling of Pandas dataframes or series with strings as elements and a unique index containing strings, too?

The first thing is:

If possible do not use a Pandas dataframe at all! Turn (two) required (string) columns of the dataframe into a string indexed dictionary!

In our case a simple solution was to turn a column with the lower case writings of the words into a dictionary index and to use an enumerated column as values.

A dictionary with string based keys will give you by far the fastest solution if you are only interested in the existence of a certain key-value.

Now, if you want to use Pandas to check the existence of a certain string in a unique index or column the following rules should be followed:

  • Whenever possible use a (unique) index based on the strings whose existence you are interested in.
  • For pure existence checks of a string in the index use a query of the type
    “if ‘STRING‘ in df.index”.
    This will give you the fastest solution with Pandas.
  • Whenever possible use a series of simple queries – each for exactly one index value and one or multiple column labels instead of providing multiple index values in a list.
  • If you want to use the “at”- or “loc”-operators, prefer the “at”-operator for a unique index! The form should be
    result = df.at[‘IndexValue’, ‘ColLabel’].
    The loc-operator
    result = df.loc[‘IndexValue’, [‘ColLabel1’, ‘ColLabel2’, …]]
    is somewhat slower, but the right choice if you want to retrieve multiple columns for a single row index value (see the short example after this list).
  • Avoid results which themselves become Pandas dataframes or series – i.e. results which contain a multitude of rows and column values
  • Avoid queries on column-values! The CPU times to produce results depending on conditions for a column may vary; factors between 6 and 200,000 in comparison to the fastest solution for single values are possible.
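
A compact illustration of the “at”- and “loc”-access patterns, using the dataframe from above:

word = dfw_smallx.at['mann', 'word']             # one row, one column - fastest direct access
row  = dfw_smallx.loc['mann', ['word', 'len']]   # one row, several columns - returns a Series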

Using a string based index in a Pandas dataframe is pretty fast because Pandas then uses a hash function and a hash table to index the data rows – very much like Python handles dictionaries.
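
This hash based machinery can be seen at work via Index.get_loc(), which returns the row position for a given key without scanning the 2.2 million rows:

pos = dfw_smallx.index.get_loc('mann')   # hash based lookup of the row position
row = dfw_smallx.iloc[pos]               # positional access to the full row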

Dataframes with a non-unique string index

Let us now quickly check what happens if we turn to a vocabulary with a non-unique string based index. We can easily get one from the standard spell-checked German wordlist provided by T. Brischalle.

dfw_fullx = pd.read_csv('/py/projects/CA22/catch22/Wortlisten/word_list_german_spell_checked.txt', dtype='str', na_filter=False)
dfw_fullx.columns = ['word']
dfw_fullx['indw'] = dfw_fullx['word']

pdfx_shape = dfw_fullx.shape
print('')
print("shape of dfw_fullx = ", pdfx_shape)
pdfx_rows = pdfx_shape[0]
pdfx_cols = pdfx_shape[1]
print("rows of dfw_fullx = ", pdfx_rows)
print("cols of dfw_fullx = ", pdfx_cols)

giving:

shape of dfw_fullx =  (2243546, 2)
rows of dfw_fullx =  2243546
cols of dfw_fullx =  2

We set an index as before by lowercase word values:

dfw_fullx['indw'] = dfw_fullx['word'].str.lower()
dfw_fullx = dfw_fullx.set_index('indw')

word_null = dfw_fullx.loc['null', 'word']
word_mann = dfw_fullx.loc['mann', 'word']
word_frau = dfw_fullx.loc['frau', 'word']

print('')
print(word_null)
print('')
print(word_mann)
print('')
print(word_frau)

Giving:

indw
null    null
null    Null
null    NULL
Name: word, dtype: object

indw
mann    Mann
mann    MANN
Name: word, dtype: object

indw
frau    Frau
frau    FRAU
Name: word, dtype: object

You see directly that the index is not unique.

Non-unique index: CPU Time for direct queries by single index values

We repeat our first experiment from above:

v_start_time = time.perf_counter()
for i in range(0, 100):
    pds1 = dfw_fullx.loc['null', 'word']
    pds2 = dfw_fullx.loc['mann', 'word']
    pds3 = dfw_fullx.loc['frau', 'word']
v_end_time = time.perf_counter()
print("Total CPU time ", v_end_time - v_start_time)
print(pds1) 

This results in:

indw
null    null
null    Null
null    NULL
Name: word, dtype: object
Total CPU time  7.232291821999752

7.2 secs! Not funny!

The reason is that the whole index must be scanned for entries with the given value.

Why the whole dataset? Well, Pandas cannot be sure that the string based index is sorted in any way!

Non-unique index: CPU Time for direct queries by single index values and a sorted index

We remedy the above problem by sorting the index:

dfw_fullx = dfw_fullx.sort_index(axis=0)

And afterwards we try our test from above again – this time giving us:

indw
null    Null
null    null
null    NULL
Name: word, dtype: object
Total CPU time  0.04120599599991692

0.041 secs – much faster. On average the response time now depends on log(N), because Pandas can now use a binary search – with N being the number of elements in the index.
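
Whether Pandas can actually use this fast path can be checked directly, as the index reports whether it is sorted:

print(dfw_fullx.index.is_monotonic_increasing)   # True after sort_index() - binary search applies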

Conclusions

For me as a beginner with Pandas, the multitude of options to retrieve values from a Pandas series or dataframe was confusing, and their relation to index involvement and to the construction of the “result set” was not always obvious. Even more surprising, however, was the impact on performance:

As soon as you hit multiple rows by a “query”, a Pandas series or dataframe is constructed to contain the results. This may have advantages regarding the presentation of the result data and a convenient post-query handling of the results. It is, however, a costly procedure in terms of CPU time. In database language:

For Pandas building the result set can become more costly than the query itself – even for short result sets. This was in a way a bit disappointing.

If you are interested in just the existence of certain string values in a list of unique strings, simply create a dictionary indexed by your string values. Then check the existence of a string value by “(‘STRING-VALUE‘ in dict)”. You do not need Pandas for this task!

When you want to use an (indexed) Pandas dataframe for existence checks, then query like “if ‘STRING-VALUE‘ in df.index” to check the existence of a string value in the properly created string based index.

When you need a bunch of values in various columns for given indices go for the “at”-operator and individual queries for each index value – and not a list.

When retrieving values you should, in general, use a direct form of “selecting” by just one index value and (if possible) one column label with the “at”-operator (df.at[‘indexValue’, ‘columnLabel’]). The loc-operator is up to a factor of 1.7 slower – but relevant if you query for more than one column.

Non-unique indices cost time, too. If you have no chance to associate your dataframe with a unique index that suits your objectives, then at least sort the non-unique index.