PHP: Upload/import of large csv files, server resource limitations, php.ini

In this article I want to list and briefly discuss some resource limiting parameters for PHP that may become important when big data files are transferred to a server and afterwards imported into a database. Before I do that, I first want to discuss a related comment of a reader.

Data splitting to avoid huge transfer times to the server

In one of my last articles [Importing large csv files with PHP into a MySQL MyISAM table] I recommended using the SQL statement “LOAD DATA INFILE …” inside PHP programs to import big csv-files into a MySQL database. In addition I recommended delivering all fields of a record with the same key dependency in one file instead of distributing the data fields of records over several separate csv-files and importing these files into possibly several separate (denormalized) database tables.

A reader reacted by writing in a mail that a file combining many data fields would of course become much larger than each of the individual files with only a few data fields. He pointed out that this may become a problem for the transfer of the file from a client to the server – especially if the server is hosted at a provider. Typical and connected problem areas could be:

  • bandwidth of the client-server connection – especially for ADSL connections
  • limitations on the server side regarding e.g. file size, maximum job duration or input times

I agree – but only partially.
First, it is true that a limited bandwidth for file uploads may become a problem. An example of a “small” csv file with a million records each containing 7 fields (4 key fields, 3 quantity fields) makes that clear:

Upload time = transfer time to the server :

  • In my case the file size was around 35 MByte. Uploading such a file over ADSL with a maximum upload rate of 1 MBit/s gives you roughly 5 minutes of effective upload time (more precisely: transfer time to the server): 35 MByte ≈ 280 MBit, and 280 MBit / 1 MBit/s ≈ 280 secs ≈ 4.7 minutes.
  • The transfer time has to be compared with the data import time on the server itself which turns out to be in the region of 6 seconds when using the “LOAD DATA INFILE” directive.

The time difference will get even bigger for larger files. So we see that the transfer time may soon become a dominant factor when our server is located somewhere on the internet. The question arises whether the required upload times may collide with server settings. This would be one example of a server side resource limitation we need to deal with when working with big data. We come back to a potentially relevant, but disputed parameter later on.

Anyway, I do not agree with data splitting by fields to overcome bandwidth problems:
One reason is that the total upload time will not become smaller. The only advantage is a smaller upload time interval per file, which may help to get better control over the upload process(es). My real argument against splitting by fields is that the total data import time for several files with a few data fields each, but with the same huge number of records, may become considerably bigger than for one file with all data fields (and the same number of records) – at least if you use the fast “LOAD DATA INFILE” feature of the MySQL engine. So, I would

  1. either tolerate a relatively big transfer time to the server
  2. or try to organize the data records in such a way that they can be uploaded to the server and imported into the database sequentially – i.e. by files with the full data field spectrum but with reduced record numbers.

I would not give up the idea of transferring csv-files with as many fields as possible per record line – as long as this is compatible with a normalized data model. Point 2 can very often be realized without problems – and with Ajax and HTML5 technologies it can even be solved in such a way that the transfers are done automatically one after the other (file pipelines). So, if you want to limit the transfer times, split the data by records and not by fields, i.e. transfer several files with bunches of records instead of several files with fewer fields.
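As an illustration of splitting by records rather than by fields, here is a minimal PHP sketch that cuts a big csv-file into smaller files of e.g. 250000 records each, which can then be uploaded and imported sequentially. The file names and the chunk size are assumptions of the example, not part of any productive code.

<?php
// Split a big csv-file into several smaller csv-files with a fixed number of records each.
// $src, $targetDir and $chunkSize are assumptions of this sketch.
function split_csv_by_records(string $src, string $targetDir, int $chunkSize = 250000): array
{
    $in = fopen($src, 'r');
    if ($in === false) {
        throw new RuntimeException("Cannot open $src");
    }
    $files = [];
    $part  = 0;
    $count = 0;
    $out   = null;
    while (($line = fgets($in)) !== false) {
        if ($count % $chunkSize === 0) {            // start a new chunk file
            if ($out) { fclose($out); }
            $part++;
            $name    = sprintf('%s/part_%03d.csv', rtrim($targetDir, '/'), $part);
            $out     = fopen($name, 'w');
            $files[] = $name;
        }
        fwrite($out, $line);
        $count++;
    }
    if ($out) { fclose($out); }
    fclose($in);
    return $files;   // list of chunk files to be uploaded one after the other
}
?>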

Resource limitations require control over server parameter settings

Nevertheless, relatively big transfer times for big files may conflict with some server settings. Also – and more importantly – the time required for the import into the database or the file size and memory consumption can conflict with limits set on your server. E.g., the maximum time a PHP job is allowed to run is limited on most web servers – as are many other resources given to a PHP job.

Many of the PHP resource limitations we may be confronted with on an Apache web server are defined by parameter settings in a php.ini file. On a SuSE system this is typically located at “/etc/php5/apache2/php.ini”. Many providers deny access to these global settings – although one could in principle influence them for certain scripts by .htaccess files or by putting php.ini files into the directories where your PHP scripts reside.

This is one of the reasons why I urge my customers to rent a root server or at least a virtualized server from some provider when seriously dealing with big data. You need control over a variety of server settings – sometimes general settings, not only php.ini parameters. Working with big data files and applications dealing with tens or hundreds of millions of records in joined tables requires full server control, especially during the development phase. This is, in my opinion, nothing for simple web site hosting.

Some relevant php.ini parameters

What are typical php.ini parameters that limit resources for PHP processes on a server and may become relevant for file uploads? I think the most important ones are the following:

  • max_execution_time :: default: 30 (many providers limit this to 10)
  • upload_max_filesize :: default: 2M
  • post_max_size :: default: 8M
  • memory_limit :: default: 128M
  • max_input_time :: default: 60
  • session.gc_maxlifetime :: default: 1440

The time values are given in seconds, the memory values in megabytes.

Most of these parameters are mentioned in the following web articles, which comment on their impact on file uploads. Please have a look at these pages:
http://php.net/manual/en/features.file-upload.php
http://php.net/manual/de/features.file-upload.common-pitfalls.php

Important parameters for the transfer of big data files are “upload_max_filesize” and “post_max_size”. I want to stress the following point:

Both of these parameters have to be set consistently when dealing with file uploads. (Actually, in the past I sometimes forgot that myself and wasted some time wondering why a file did not get loaded although I had set a sufficient value for upload_max_filesize.)
 
When you upload a file plus some additional POST data, the total amount of POST data can be bigger than just the file size. However, “if the size of post data is greater than post_max_size, the $_POST and $_FILES superglobals are empty” – according to the PHP manual. In some situations you may not even get a warning or an error.
Therefore, the value of “post_max_size” should always be bigger than the value of “upload_max_filesize” – and the latter should of course be at least as big as (or a bit bigger than) the size of the file you plan to transfer to the server.
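If in doubt, a PHP script can check these limits at runtime before accepting an upload. The following minimal sketch only reads the current values via ini_get() and compares them with the expected upload size; the helper for converting shorthand notations like “8M” into bytes is an assumption of the sketch, not a PHP built-in.

<?php
// Convert php.ini shorthand values like "8M" or "512K" into bytes (helper, assumption of this sketch).
function ini_to_bytes(string $val): int
{
    $val  = trim($val);
    $unit = strtolower(substr($val, -1));
    $num  = (int) $val;
    switch ($unit) {
        case 'g': return $num * 1024 * 1024 * 1024;
        case 'm': return $num * 1024 * 1024;
        case 'k': return $num * 1024;
        default:  return (int) $val;
    }
}

$fileSize          = 35 * 1024 * 1024;   // expected upload size in bytes (example value)
$uploadMaxFilesize = ini_to_bytes(ini_get('upload_max_filesize'));
$postMaxSize       = ini_to_bytes(ini_get('post_max_size'));

if ($fileSize > $uploadMaxFilesize || $fileSize > $postMaxSize) {
    echo "Warning: upload_max_filesize or post_max_size is too small for this file.\n";
}
if ($postMaxSize <= $uploadMaxFilesize) {
    echo "Warning: post_max_size should be bigger than upload_max_filesize.\n";
}
?>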

Another important parameter is “max_execution_time”. It limits the time a PHP process is allowed to run. It should be big enough to cover the file handling and database import times on the server. Transfer times are not included – if we believe the hints given at http://php.net/manual/de/features.file-upload.common-pitfalls.php.

In my understanding the “max_input_time” parameter limits the time for “parsing” request data (= POST or GET data). See http://php.net/manual/en/info.configuration.php#ini.max-input-time
However, a comment on the following page
http://php.net/manual/de/features.file-upload.common-pitfalls.php
says something different:

“Warning – max_input_time sets the maximum time, in seconds, the script is allowed to receive input; this includes file uploads. For large or multiple files, or users on slower connections, the default of 60 seconds may be exceeded.”

Personally, I have some doubts about this last statement as I never experienced any problem with the standard settings and files above 200 MB. So a contrary position would be:

“Parsing” refers to the time interval between the arrival of the request data on the web server and the start of the PHP script’s execution – so “max_input_time” would not include the upload time (= transfer time to the server). However, it would include the time to prepare the superglobal arrays $_GET, $_POST and $_FILES.

This interpretation makes some sense to me. However, I did not test it thoroughly.

Interestingly enough, there is some dispute about the meaning of the max_input_time parameter and its relation to the max_execution_time parameter on the internet. See:

http://blog.hqcodeshop.fi/archives/185-PHP-large-file-uploads.html
https://bugs.php.net/bug.php?id=53590&
https://bugs.php.net/bug.php?id=28572
http://stackoverflow.com/questions/11387113/php-file-upload-affected-or-not-by-max-input-time
http://www.php.de/php-fortgeschrittene/41473-php-ini-frage-zu-max_input_time.html
https://www.grumpyland.com/blog/101/settings-for-uploading-large-files-in-php/
http://www.devside.net/wamp-server/apache-and-php-limits-and-timeouts
http://www.techrepublic.com/article/a-tour-of-the-phpini-configuration-file-part-2/
http://serverfault.com/questions/224536/is-max-input-time-included-in-max-execution-time

I have no definite opinion about this discussion yet. In two of the articles listed above one can see that even the error messages produced when max_input_time is exceeded can be misleading, as they wrongly refer to max_execution_time instead. Therefore, I recommend being aware of the parameter “max_input_time”, although the default should be sufficient for most cases. Setting it to “-1” corresponds to an unlimited max_input_time interval – whatever it really has an impact on.

If you really work with huge files of several hundred megabytes, even the garbage collection time may become important. session.gc_maxlifetime sets a lifetime limit for session data until the system regards them as garbage. So, if your import or data handling times get big enough, this parameter must be adjusted, too.

The “memory_limit” of a PHP script may also be reached when processing huge files.

So, you see there is a whole bunch of parameters which you may need to adjust when you start working with larger and larger files.

Web hosting providers and php.ini-settings

You find typical php.ini settings for the web hosting packages of the providers Strato and 1&1 here:
http://strato-faq.de/article/1315/Mit-welchen-Grundeinstellungen-steht-PHP-bei-STRATO-zur-Verf%C3%BCgung.html
https://hilfe-center.1und1.de/skript–und-programmiersprachen-c82634/php-c82686/bedienung-c82739/welche-php-einstellungen-kann-ich-aendern-a791899.html

Most web providers also give more or less clear information about which of the PHP relevant parameters can be manipulated by customers (e.g. by the use of .htaccess files or directory-specific php.ini files) and which cannot. See e.g.:
https://hilfe-center.1und1.de/skript–und-programmiersprachen-c82634/php-c82686/bedienung-c82739/php-einstellungen-ueber-phpini-aendern-a791901.html

At some providers you can use a .htaccess file; for others you may need to put a php.ini file into each script directory. See e.g.:
http://www.webdecker.de/artikel/items/php-ini-value-htaccess.html


MySQL/Aggregation: Comparing COUNT(DISTINCT) values on big tables

Sometimes one needs to work with rather big database tables and with COUNT(DISTINCT) statements. Recently, I had to analyze the result sets of COUNT(DISTINCT) statements for table consistency checks. These tasks required subselects or subqueries. The SQL statements were performed on a test table with more than 5 million records. The data records – together with the data of another table – formed the data basis of several complicated computational runs. The eventual record number in our application will be much larger, but some optimization principles can already be learned from the 5 million record example.

Most of the computed COUNT(DISTINCT) values referred to some of the table’s key columns. Our data records have a combination of three different key fields, which together define a unique key. The combined unique key has a sub-structure corresponding to physical aspects of the real world data objects behind the records. This induces a natural grouping of records.

Such a data structure occurs very often in physical simulations: Think e.g. of a grid of meteorological observation stations that measure environment data at distinct points of a time line. Each station would then be characterized by two positional values. The measured quantity values would have a three-fold unique key: two geometrical position values and a time value. The data for e.g. temperature and pressure over time could then be aggregated in a database table and grouped according to the geometrical positions, i.e. for the stations and their locations.

In my case COUNT(DISTINCT) values had to be computed over one key column and for groups defined by distinct combinations of other keys. The results had to be compared with a reference value: the COUNT(DISTINCT) result for the whole table. I shall explain the detailed key structure and the comparison objectives below. Some of the counted distinct values also had to be compared with similar values of another, much smaller table that contained only around 14000 records for a reduced set of the key columns. The comparisons were part of consistency checks concerning record groups inside the huge table and concerning the equality of distinct key values used in the huge table and the smaller table.

When working on these tasks I first tried to optimize the required sub-select statements for the COUNT(DISTINCT) evaluation on my key columns by defining and using proper indices. After some more analysis, I extended my optimization strategy to creating a helper table instead of working with intermediate but volatile “derived result sets” of my sub-selects. Read below, why …

My case is a little special, but consistency checks regarding the number of records in certain record groups which correspond to distinct combinations of some keys occur in very many environments (physical simulations, statistics, neural networks, ..). Therefore, I hope that the insights I got from my simple scenario may be helpful for other people who have to deal with COUNT(DISTINCT) statements on big tables, too.

The playground, data key structure and some numbers

In our project we load the data of a sequence of csv-files (each with millions of records) into 2 tables of a database. After each import process we check some consistency conditions the records of the two tables must fulfill. The reason is simply that we want to avoid interrupted or crashing computation runs due to a lack of data or inconsistent data provided by our tables.

Each record in our big table contains data values that depend on a 3 fold key-structure:

Two keys [m,n] describe certain objects by 2 dimensions in an abstract space. Let us call such an object a “node”. For a “node” some hundred values of several quantities are defined with respect to an additional integer key [i] of a further dimension.

The table structure of our first table “TA” is flat – something like

nr,    m, n, i    Q1, Q2, Q3, ….

with the Qs representing some quantities and the first column being an auto-incremented one. The number distribution of the key values in our case is as follows:

m has only a few distinct values (2 to 10), whereas n may have distinct values in a range between 10000 and 100000. Distinct i values are in the range between 200 and 1000. In my concrete test table I chose:

  • distinct m values : 1 (as an extreme situation)
  • distinct n values : 13580
  • distinct i values : 386

There is another table “TB” with keys [m,n] and associated quantity values per object. The data of both tables are later on used to perform several calculations. For a sufficient performance of the computational runs we need at least a combined index over the tuple [m,n] – which should already be created during the data import phase. The funny thing in our scenario is that the time required for the data load and import phase is dominant in comparison with all of our computation runs – right now by a factor of 1.5 to 3. So, we really have no intention of making the import time interval any bigger without very good reasons.

Objectives of COUNT and COUNT(DISTINCT) statements: consistency checks

The consistency of the imported data in the two tables TA and TB must be guaranteed. Some of the consistency conditions that must be fulfilled are:

  1. The number of distinct values of [m], [n] and [m,n] must be the same in the tables TA and TB
  2. The number of unique [m,n,i] values should be equal to the total number of entries in table TA.
  3. The number of distinct [i] values (i.e. number of distinct records) for any given [m,n]-key pair (i.e. node) should be equal to the total number of records for that [m,n]-pair.
  4. The number of distinct i values (i.e. the number of records) of a given [m,n]-key pair in table TA should have an identical, [m,n]-independent value. (For all nodes there must be an identical number of i-dependent records.)
  5. The number of distinct i-key values of any [m,n] pair should be identical to the number of distinct i-values given for the full table

The last 3 conditions together (!) guarantee that for each [m,n] combination (=node) the different i values of the associated records

  • are all distinct – with the same sequence of values independent of [m,n];
  • lead to the same number of records for every [m,n]-combination.

The following aspects should be considered:

  • The checks are to be performed several times, as the big table is generated stepwise by a sequence of “LOAD DATA INFILE” processes – e.g. 4 times a 5 million record file. In addition, a consistency check should always be performed before a computation run is started.
  • The time required for the consistency checks adds to the total data import and preparation time for the tables. The effect should be limited to a tolerable factor below 1.3.
  • We expect indices over the key columns to play an important role for the performance of SELECTS for aggregated distinct key values. Building indices, however, may cost substantial additional time during the data import and preparation phase.
  • The performance of certain SELECT statements may depend on the order of columns used in the index definition.

As the data import and preparation time was already dominant we were eager to avoid any intolerable prolongation of this time period.

The impact of a unique or non-unique index over all key columns on the data import and preparation time

In our case we would create an [m,n] or [n,m] index. Such an index was unavoidable due to performance requirements for the computation runs. An [n,m]-index would give us a slight performance advantage during our calculations compared to an [m,n]-index, but this difference is marginal. In the data loading phase we import our data into the MySQL database with a “LOAD DATA INFILE” statement issued by a PHP program (see ). The index is filled in parallel. We were interested in whether we could afford a full unique index over all 3 key columns.

Below we give some numbers for the required time intervals of our data imports. These numbers always include a common overhead part. This overhead time is due to an Ajax exchange between browser and server, zip file transfer time across our network, PHP file loading times, zip expansion time on the server, movement of files, etc. So the numbers given below do not reflect the pure data loading time into the database on the server. By estimation this time is at least 4.5 seconds smaller. The test server itself is a relatively small one in form of a KVM virtualized LAMP server with 2 GByte of assigned RAM, an Intel Q9550 CPU and a Raid 10 disk array. For the following tests I worked with MySQL MyISAM tables – however, the general picture should hold for InnoDB tables, too.

Addendum, 21.09.2014:
Please note in addition that our big table has a first column which is auto-incremented and used as a primary index. This primary index was always created. It explains why the absolute data import times given below are relatively big. See the forthcoming article
MySQL: LOAD DATA INFILE, csv-files and index creation for big data tables
for a more complete discussion of the impact of (unique and non-unique) indices on data imports from csv files.

We got the following loading times required to fill table TA with data from a csv-file with m=1, n=13580, i=386 (all distinct) by using “LOAD DATA INFILE”:

  • without [n,m,i]-index creation : 23.5 sec.
  • with [n,m]-index creation (non unique) : 25.3 sec
  • with [n,m,i]-index creation (non unique) : 26.3 sec
  • with [n,m,i]-index creation (unique) : 36.2 sec

(compare with the following blog articles
MySQL/PHP: LOAD DATA – import of large csv files – linearity with record number and
Importing large csv files with PHP into a MySQL MyISAM table )

Note that there is no big difference between creating a full [n,m,i]-index over all key columns and creating an index only for the first two columns of our test table. So a full [n,m,i]- index creation seemed to be affordable. It would not hamper the computation runs, but it helps a little with our consistency checks.

However, there is a substantial difference in loading times between a unique and a non-unique index. This difference stems, of course, from the related uniqueness checks which have to be performed during the implicit INSERTs of the data into the table. As we do not really need a unique index for the computation runs, the question arises whether we can just use a plain index and still verify check condition 2 with a better performance than spending 10 extra seconds. The answer is – yes, we can. See below.

Another remark for reasons of completeness:

An [i,n,m]-index instead of an [n,m,i]-index would double the time required for our computation runs. The branching structure of an index tree is performance relevant! In our scenario this is due to the fact that aggregations (sums, statistical information, …) over the i-dependent quantities of a node play an important role in the calculation steps. Selects or aggregations for distinct [n,m]-nodes and aggregation processes over [n,m]-related record groups require an [n,m,..] index. In case of doubt about the n,m order I would say it is better to start with the more numerous branching: any initial branch chosen then leads to a smaller amount of remaining records for further analysis. Almost all statements discussed below profit from the chosen column and branching order of the index.

What information is really required?

Looking at condition 2 we would say: Let us get the number of all distinct [n,m,i] triples in the table and compare that with the COUNT(*) value for the table. With our given [n,m,i]-index, we could use the following SQL statement:

“SELECT COUNT(DISTINCT n,m,i) FROM TA” :: 1.9 to 2.0 secs .

If we had a unique index the answer would come much faster. However, with our plain [n,m,i]-index such a statement takes around 2 secs – which is still considerably less than the 10 extra seconds a unique index would cost during the import. Our [n,m,i]-index is already used for grouping and sorting, as EXPLAIN will tell you. So, most of the time is spent comparing i values within the sorted groups. If there were no other conditions we would have to use this statement. However, conditions 3 to 5 are very strict. Actually, some thinking shows that if we guarantee conditions 3 to 5, we have also guaranteed that the total number of records is identical to the number of distinct records: if for some given [m,n] combination we have y distinct records, then we have y different i-values. The rest follows ... Therefore, we should instead concentrate on getting information about the following numbers:

  1. The total number of distinct i values in the table. This would be used as a reference value “$ref_dist_i”.
  2. The total number of i-values ( = number of records ) “num_i” per [m,n]-node (for all nodes).
  3. The total number of distinct i-values “num_dist_i” per [m,n]-node (for all nodes).
  4. The number of nodes for which num_dist_i or num_i deviates from $ref_dist_i.

If we find no deviation, then we have also confirmed condition 2.

“Alternative”
There is another way to prove condition 2 if there is an additional precondition:

The sequence of i-values given for each node shall be the same for all nodes.

This condition is given in very many circumstances (e.g. in physics simulations) – also in our case. Then we could look at the difference of

MAX(DISTINCT i) – MIN(DISTINCT i) for each node

and demand

  • [MAX(DISTINCT i) – MIN(DISTINCT i) + 1] == [COUNT(DISTINCT i)] == [COUNT(i)] per node
  • plus that [COUNT(i)] has the same value for all nodes.

Please note that we would not need to determine the number of distinct i values in our big table at all for this variant of our consistency check.

We shall see below why this can give us a substantial advantage in terms of SELECT times.

To investigate condition 1 we need additionally:

  • The total number of distinct [n,m]-values, distinct m-values and distinct n-values for the tables TA and TB.

Aggregated distinct key values for groups corresponding to leading branches of an index

Let us now consider SQL statements that determine the distinct [n,m]-values, distinct m-values and distinct n-values of table TA. We make the following trivial assumption: COUNT(DISTINCT) aggregations over one or several columns are much faster if you can use an index defined for these columns. Why? Because an index has a tree structure and “knows” about the number of its branches, which are determined by unique values! Furthermore, also on subsequent branching levels it can be used for grouping and sorting of records ahead of internal value comparisons.

If this is true we would further assume that getting the number of distinct [n,m] values should be pretty fast, because our [n,m,i]- index can be used. And really :

“SELECT COUNT(DISTINCT n,m) FROM TA” :: 0.051 to 0.067 secs

The same holds for “n” alone:

“SELECT COUNT(DISTINCT n) FROM TA” :: 0.049 to 0.067 secs

The marginal difference is not surprising as there is only one [m]-value. EXPLAIN shows that in both statements above the index is used for internal grouping and sorting.

However :

“SELECT COUNT(DISTINCT m) FROM TA” :: 1.4 to 1.6 secs

What we see here is that the index may still be used for grouping – but for each “n”-branch the “m”-index values still have to be compared. Can we make this evaluation a bit faster? What about a sub-select that creates an [n,m]-basis for further analysis? This could indeed lead to a better performance, as ordered [n,m]-pairs correspond directly to leading branches of our index and because the result set of the subquery is significantly smaller than the original table:

“SELECT COUNT(DISTINCT a.ma) FROM (SELECT DISTINCT n, m AS ma FROM TA ORDER BY n) as a” :: 0.056 to 0.078 secs

Compared to a more realistic (n,m)-value distribution this result is a bit too positive, as we only have one distinct m-value. Nevertheless, we see that we can use our already created [n,m,i]-index with considerable advantage to work on parts of condition 1 in the list above. Note that a temporary table for the derived result set of the sub-select is created; EXPLAIN and the time measurements of the MySQL profiler will show that. This intermediate table is substantially smaller than the original table, and the number of distinct [m]-values is small (here 1).

What about table TB and these numbers? Well, TB is just as small as the intermediate table of the last statement! And because it is (relatively) small, we can afford to create whatever indices are required there – one for [n], one for [m], one for [n,m]. So, we can get all required node information there with supreme speed.

Summary: We can perform consistency checks for condition 1 without any performance problems. Our [n,m]-index is suitable and sufficient for it.

Two additional lessons were learned:
1) Aggregation for key groups and leading index branches over the corresponding key columns should fit each other.
2) Intermediate temporary tables may help for a required aggregation – if they are much smaller than the original table. This leads to SQL statements with subselects/subqueries of the form

SELECT COUNT(DISTINCT a.x) FROM (SELECT DISTINCT col1 AS x, col2, … FROM … ORDER BY …) AS a

Aggregating distinct values of a column which is the last in the column definition order of a combined index

Now, let us turn to the determination of the total number of distinct i values. We need this value as a reference value ($ref_dist_i) if we want to verify conditions 3 to 5! The fastest way to get this number – if we only have our plain, non-unique [n,m,i]-index – is a simple

SELECT COUNT(DISTINCT i) FROM TA :: 1.85 to 1.95 secs

An EXPLAIN shows that this statement already uses the [n,m,i]-index. I did not find any way to make the process faster with just our plain (non-unique) [n,m,i]-index. And note: no intermediate table would help us here – an intermediate table would only be a reordered copy of our original table. Therefore:

“SELECT COUNT(DISTINCT a.ia) FROM (SELECT DISTINCT i AS ia, n, m FROM TA ORDER BY n, m ) as a” :: 11.9 secs

To make the determination of COUNT(DISTINCT i) faster we could be tempted to create another index which starts its tree branching with the i-column – or just a plain index for the i-column alone. We would build such an index during the data load process. In our scenario we would then find that the extra time to create such an index is around 2 secs. So we would have to invest this time ahead of any proper data analysis – even if a subsequent SELECT COUNT(DISTINCT i) FROM TA would then be pretty fast.

So, if we do not get the number of distinct i-values from elsewhere (e.g. from the provider of our csv-files), we are stuck with an additional overhead of around 2 secs – which gives us a choice: create an additional index OR use the simple SQL statement?

Note that even the creation of a pure i-index would consume considerable space on the hard disk (in my concrete example the required index space grew from 138 MB to 186 MB – this total value being not far from the total amount of data in table TA). Because I have not yet found another use for an i-index, I refrain from creating it. I rather save the determined number of distinct i values in a tiny additional parameter table to make it accessible for different PHP programs afterwards.
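A minimal PHP sketch of saving such a reference value could look like the following; the parameter table “TP”, its columns and the connection data are pure assumptions of the example.

<?php
// Determine the number of distinct i values once and store it in a small parameter table.
// The parameter table "TP" and the mysqli connection data are assumptions of this sketch.
$db = new mysqli('localhost', 'user', 'password', 'testdb');

$db->query("CREATE TABLE IF NOT EXISTS TP (param VARCHAR(32) PRIMARY KEY, val BIGINT)");

$res = $db->query("SELECT COUNT(DISTINCT i) AS ref_dist_i FROM TA");
$ref_dist_i = (int) $res->fetch_assoc()['ref_dist_i'];

// REPLACE keeps exactly one row per parameter name
$stmt = $db->prepare("REPLACE INTO TP (param, val) VALUES ('ref_dist_i', ?)");
$stmt->bind_param('i', $ref_dist_i);
$stmt->execute();
?>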

Comparing aggregated COUNT(DISTINCT) values for record groups with a given reference value

Conditions 3 to 5 of our task list require that we compare the reference value $ref_dist_i for the distinct i values of table TA with both the number of distinct i values and the total number of i values for each of the [m,n]-nodes. This leads us to something like the following SQL statement with a subselect/subquery:

Comparison Statement:

SELECT count(*) FROM ( SELECT COUNT(distinct b.i) AS sum_distinct_i, COUNT(b.i) AS sum_i FROM TA b GROUP BY b.n, b.m HAVING sum_distinct_i != $ref_dist_i OR sum_i != $ref_dist_i ) AS a

It gives us the number of nodes for which either the number of associated distinct i-values or the number of associated i-values deviates from the reference value (here $ref_dist_i). This fairly complicated statement requires some time:

Comparison statement :: < 2.2 secs

This despite the fact that our [n,m,i]-index is used, as an EXPLAIN will show. You won’t get much faster. The alternative

SELECT count(*) FROM ( SELECT COUNT(distinct b.i) AS sum_distinct_i, COUNT(b.i) AS sum_i FROM TA b GROUP BY b.n, b.m ) AS a WHERE a.sum_distinct_i != $ref_dist_i OR a.sum_i != $ref_dist_i

has the same performance. The slow part is the WHERE or HAVING analysis on the derived temporary result set.

By the way:
Working with an [n,m]-index and a separate [i]-index would slow down the comparison statement to > 2.6 secs. So, here we eventually find a slight advantage of our combined index over three columns (also the computation runs get a bit faster). In addition the combined index takes less disk space than several individual indices.

We should now take the following aspect of our scenario into account: We may call such a comparison statement several times if we load a sequence of data files – each with 5 million data records. Then our “comparison statement” will get slower and slower the more records the table comprises. In addition we may also start every computational run with a consistency check of the tables and show the results in a browser! Then we need to optimize the response time for the web page. Even 2.2 secs may then be too much. How can we meet this challenge?

Helper table instead of derived result sets from subselects/subqueries

My answer was to use a helper table “TC” – which receives the data of the derived result set from the sub-select of our comparison statement. In table TC we would store something like

m,    n,    sum_distinct_i,    sum_i

and maybe other node-dependent values. This also has the advantage of keeping us flexible in case that some day the distinct i values become [n,m]-dependent and have to be compared to some other values.

Note that the number of values in such a table is considerably smaller than in the original table. So, speaking in relative numbers, we could generate some indices for [m,n], m, n, sum_dist_i, sum_i without much investment of CPU time and memory! This would also relieve us from the complicated statements to determine the number of distinct m values discussed above. And statements like

SELECT COUNT(*) FROM TC WHERE sum_distinct_i != $ref_dist_i OR sum_i != $ref_dist_i

would perform very well. (Under our present conditions this should give a plain zero if consistency is given).
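For completeness, the helper table itself could be created e.g. by a small PHP snippet like the one below; the column types, the index choices and the connection data are assumptions of this sketch, not the productive definition.

<?php
// Create the helper table TC with aggregation columns and some indices.
// Column types, index selection and connection data are assumptions of this sketch.
$db = new mysqli('localhost', 'user', 'password', 'testdb');
$db->query("
    CREATE TABLE IF NOT EXISTS TC (
        m              INT UNSIGNED NOT NULL,
        n              INT UNSIGNED NOT NULL,
        sum_distinct_i INT UNSIGNED NOT NULL,
        sum_i          INT UNSIGNED NOT NULL,
        PRIMARY KEY (n, m),
        INDEX idx_sums (sum_distinct_i, sum_i)
    ) ENGINE=MyISAM
");
?>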

Therefore, only one question remains: What would such a helper table cost in terms of time? The proper statement to fill a prepared table TC with indices is

INSERT INTO TC (m, n, sum_distinct_i, sum_i) SELECT a.m, a.n, COUNT(DISTINCT a.i), COUNT(a.i) FROM TA a GROUP BY a.n, a.m

This takes approximately 2.26 secs. However, an additional

SELECT COUNT(*) as sum_dist FROM TC WHERE sum_distinct_i != $ref_dist_i OR sum_i != $ref_dist_i

takes only 0.002 secs. Other aggregations on TC are similarly fast:

  • SELECT COUNT(DISTINCT m) FROM TC :: < 0.002 secs
  • SELECT COUNT(DISTINCT n) FROM TC :: 0.018 to 0.026 secs
  • SELECT COUNT(DISTINCT n,m) FROM TC :: 0.007 to 0.012 secs

This means that we no longer have to be afraid of repeated consistency checks!

“Alternative”
If our conditions are such that we can follow the alternative discussed above, the helper table helps even more. The reason is that we can omit the determination of our reference value $ref_dist_i. Instead we fill the (modified) helper table with

INSERT INTO TC (m, n, sum_distinct_i, sum_i, min_i, max_i) SELECT a.m, a.n, COUNT(DISTINCT a.i), COUNT(a.i), MIN(DISTINCT i), MAX(DISTINCT i) FROM TA a GROUP BY a.n, a.m

This costs us only slightly more time than our original fill statement. With a proper index generated over the columns sum_distinct_i, sum_i, min_i, max_i we then use

SELECT COUNT(DISTINCT sum_distinct_i, sum_i, min_i, max_i) FROM TC :: < 0.002 sec

In case everything is OK this should give us a “1” in almost no time. Then, additionally, we fetch just one line

SELECT * FROM TC LIMIT 1

and check the conditions [MAX(DISTINCT i) – MIN(DISTINCT i) + 1] == [COUNT(DISTINCT i)] == [COUNT(i)] for this node only.
The big advantage is that we save the time otherwise spent on determining the COUNT(DISTINCT i) value for our big table. This advantage gets bigger with a growing table.

Remark regarding indices: As you have noticed, we clutter the helper table with indices over various columns. Normally, I would characterize this as a bad idea. But in our case I do not care – compared to other things it is a small price to pay for the relatively small space the numerous indices consume for our (relatively) small helper table.

The resulting sum of additional time investments for preparing the consistency checks

We find for our 5 million record table that we need to invest some extra time of

  • around 2.2 secs for creating a helper table

in the best case. If we cannot use the “Alternative” we additionally need around 2 secs for getting the number of distinct i-values in our table.

So, this makes around 8% of the original loading time (26.3 secs) in the best case and up to 16% if the reference value has to be determined, too. I think this is a good compromise, as it gives us a repeatable consistency check for a given status of the tables.

Extrapolation to table enlargements resulting from loading a sequence of further data files

What happens if we deal with larger data amounts sequentially loaded into our big data table from several csv-files? We expect things to behave reasonably well:
A critical point is that the helper table TC should only get new records originating from the newly loaded file. Already saved aggregated values should not be changed. This in turn requires that the data in the new files belong to nodes that are distinct from the already loaded nodes. So, the imported data must be organized by nodes and systematically spread over the files to be loaded. The relevant data records for the new aggregations can easily be identified by an evaluation of the auto-index column in the big data table before and after the loading process. The aggregation results for the new nodes are then added to the helper table without any interference with the already aggregated values stored there.
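A hedged PHP sketch of this incremental aggregation step might look as follows; the use of the auto-incremented “nr” column as a boundary and the variable names are assumptions of the example.

<?php
// Incrementally aggregate only the records added by the last "LOAD DATA INFILE" run.
// $db is an open mysqli connection; $last_max_nr is the MAX(nr) value saved before the load
// (both are assumptions of this sketch).
function update_helper_table(mysqli $db, int $last_max_nr): void
{
    $sql = "INSERT INTO TC (m, n, sum_distinct_i, sum_i)
            SELECT a.m, a.n, COUNT(DISTINCT a.i), COUNT(a.i)
            FROM TA a
            WHERE a.nr > ?
            GROUP BY a.n, a.m";
    $stmt = $db->prepare($sql);
    $stmt->bind_param('i', $last_max_nr);
    $stmt->execute();
}

// Typical usage: remember MAX(nr) before the import, run "LOAD DATA INFILE",
// then aggregate only the newly added part of TA.
?>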

Therefore, the time to generate additional entries in our helper table should remain a constant per csv-file to be imported!

However – if we need to do it – the determination of the COUNT(DISTINCT i) would require more and more time for our growing big table.

So, at this point we will really profit significantly from the discussed “Alternative”.

The time to perform our checks on the helper table will also grow – but in comparison to other times this remains a really small contribution.

Summary

In our scenario we had to find a compromise between index generation – with a resulting prolongation of the data loading and preparation time – and a faster consistency check (partially) based on aggregated COUNT(DISTINCT) values for (grouped) key columns. Such checks typically lead to sub-selects with derived result sets. Even if the temporary tables implicitly used by the RDBMS engine are relatively small compared to the original big tables, any further aggregation or analysis of the derived data may cost considerable time. If such an analysis is to be done more than once, the creation of a helper table with aggregated data may pay off very quickly.

In addition we have seen that one should carefully evaluate alternative ways of verifying consistency conditions and consider what information can and should be used in the verification process. Not all obvious ways are the best in terms of performance.

Addendum, 21.09.2014:
Is this the end of a reasonable line of thinking ? No, it is not, if you want to make your big data imports faster.
Creating a helper table is useful. But there are more points to take into account when trying to optimize:

The reader has probably noticed that the creation of a unique [n,m,i]-index enlarged the loading time drastically. This observation sets a huge question mark behind the fact that I had a primary index on the first auto-incremented “nr”-column. Furthermore, what about the impact of index building in parallel to the processing of “LOAD DATA INFILE” in general? We did not discuss whether we could accelerate the import by omitting index building during the “LOAD DATA” import completely and creating the indices afterwards instead! And we assumed that index creation during the import of a sequence of files would behave reasonably – i.e. linearly with the growing number of records. Actually, as further tests show, the latter is not always true, and separating index creation from the execution of “LOAD DATA INFILE” is an important optimizing step! See a forthcoming article for a more detailed discussion of these interesting topics!

Libreoffice 4.1.6.2 – connection to remote cups server lost after 5 minutes

There are some bugs that seem to reappear after some time in Libreoffice. One of these bugs concerns the handling of connections to remote cupsd servers and their printer queues. See:
https://bugs.launchpad.net/ubuntu/+source/libreoffice/+bug/1020048
https://www.libreoffice.org/bugzilla/show_bug.cgi?id=56344
https://www.libreoffice.org/bugzilla/show_bug.cgi?id=50784

The problem

In my present installation of Libreoffice 4.1.6.2 on an Opensuse 13.1 system with KDE 4.13 this bug reappeared. (The libreoffice packages are from the standard Opensuse 13.1 update repository). Currently the connection to our remote cups printer queues is lost after 5 minutes. This is the standard timeout for idle connections on the cups server (see the settings for the cupsd daemon). LibreOffice obviously does not reactivate the connection – at least not on my systems in their present status.

Deinstalled “libreoffice-kde4” as a cause ?

I did not check each and every LibreOffice setting that may influence this behavior. Furthermore, due to other problems with a previous LibreOffice release I had uninstalled the KDE extension package “libreoffice-kde4” on my machine – although I use KDE4. Instead I used the “libreoffice-gnome” package, which worked more reliably in some respects. So, at the moment I cannot exclude that the printing problem has to do with my deinstallation of “libreoffice-kde4”.

The Kamppeter workaround may help

I had and have no intention of changing my cupsd settings on the print server just because LibreOffice has a problem with reopening connections. What still helped in my situation – despite the deinstalled “libreoffice-kde4” – was to change the print dialog from the “LibreOffice dialog” to the standard system dialog. This workaround was described some time ago by Till Kamppeter; see “https://bugs.launchpad.net/ubuntu/+source/libreoffice/+bug/1020048/comments/33“. It requires that one activates the advanced options under
Tools >> Options >> LibreOffice >> Advanced >> Set Checkbox “Enable experimental features”.

Then go to the “General” settings and deactivate “Use LibreOffice dialogs” under “Print dialogs”:
Tools >> Options >> LibreOffice >> General >> section “Print dialogs” >> Unset Checkbox “Use LibreOffice dialogs”.

This worked in my case and maybe it helps others, too. If it does not I would also try to reinstall “libreoffice-kde4” again in case you use KDE4 (and see if it works better again than in some previous versions).

PHP/MySQL/Linux: File upload, database import and access right problems

On an (Apache) web server one should establish a policy regarding access rights to the files of hosted (virtual) domains – especially with respect to files transferred by FTP or uploaded from a browser. It was an interesting experience for me that uploading files during an Ajax communication with a PHP program and moving them to target directories with PHP’s “move_uploaded_file” can collide with such policies and lead to unexpected results.

The problem

I recently tested Ajax controlled csv file transfers from a browser to a web server with a subsequent loading of the file contents to a database via PHP/MySQL. The database import was initiated by PHP by executing the SQL command “LOAD DATA INFILE” on the MySQL server. This chain of processes worked very well:

The uploaded csv file is moved from PHP’s upload buffer (referenced by the $_FILES superglobal array) to a target directory on the web server by means of the PHP function “move_uploaded_file”. My PHP program – the Ajax counterpart on the server – afterwards triggers a special MySQL loader procedure via the “LOAD DATA INFILE” command. MySQL then loads the data with very high speed into a specified database table. It is clear that the import requires sufficient database and table access rights, which have to be specified by the PHP program when opening the database connection via one of PHP’s MySQL interfaces.
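A minimal sketch of such an upload handler is given below; the target directory, the form field name, the table and the connection data are assumptions of the example, not the productive code.

<?php
// Minimal upload handler sketch: move the uploaded csv file and import it via LOAD DATA INFILE.
// Target directory, form field name, table name and DB credentials are assumptions of this sketch.
$targetDir  = '/srv/www/htdocs/uploads';
$targetFile = $targetDir . '/' . basename($_FILES['csv_file']['name']);

if (!move_uploaded_file($_FILES['csv_file']['tmp_name'], $targetFile)) {
    die('Upload failed');
}

$db  = new mysqli('localhost', 'user', 'password', 'testdb');
$sql = "LOAD DATA INFILE '" . $db->real_escape_string($targetFile) . "'
        INTO TABLE TA
        FIELDS TERMINATED BY ';'
        LINES TERMINATED BY '\\n'";
if (!$db->query($sql)) {
    die('Import failed: ' . $db->error);
}
echo 'Import finished';
?>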

The overall success of the upload and database import sequence changed, however, in a surprising way when I wanted to transfer my (rather big) files in a zip-compressed form from the browser to the web server.

So I compressed my original csv file into a zip container file. This zip file was transferred to the server by using the same web site form and Ajax controls as before. On the server my PHP program went through the following steps to make the contents of the zip container available for further processing (such as the import into the database); a minimal sketch of these steps follows the list:

  • Step 1: I used “unlink” to delete any existing files in the target directory.
  • Step 2: I used “move_uploaded_file” to save the zip file into the usual target directory.
  • Step 3: I used the “ZipArchive” class and its methods from PHP to uncompress the zip-file content within the target directory.
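A hedged sketch of these three steps, with the directory, the form field name and the file names as pure assumptions of the example:

<?php
// Steps 1-3 (sketch): clean the target directory, save the uploaded zip file, extract its content.
// $targetDir and the form field name 'zip_file' are assumptions of this sketch.
$targetDir = '/srv/www/htdocs/uploads';

// Step 1: delete existing files in the target directory
foreach (glob($targetDir . '/*') ?: [] as $oldFile) {
    if (is_file($oldFile)) {
        unlink($oldFile);
    }
}

// Step 2: move the uploaded zip file to the target directory
$zipPath = $targetDir . '/' . basename($_FILES['zip_file']['name']);
if (!move_uploaded_file($_FILES['zip_file']['tmp_name'], $zipPath)) {
    die('Upload failed');
}

// Step 3: extract the zip content into the target directory
$zip = new ZipArchive();
if ($zip->open($zipPath) === true) {
    $zip->extractTo($targetDir);
    $zip->close();
} else {
    die('Could not open the zip archive');
}
?>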

Unfortunately, the “LOAD DATA INFILE” command failed under these conditions.

However, everything still worked well when I transferred uncompressed files. And even more astonishing:
When I first uploaded the uncompressed version and then tried the same with the zip version BUT omitted Step 1 above (i.e. did not delete the existing file in the target directory), “LOAD DATA” also worked perfectly.

It took me some time to find out what had happened. The basic reason for this strange behavior was the peculiar way in which file ownership and access rights are handled by “move_uploaded_file” and by the method ZipArchive::extractTo. And in a twisted way it was also due to some naivety on my side regarding my established access right policy on the server.

Note in addition: The failures took place although I used SGID and setfacl policies on the target directory (see below).

User right settings on my target directory

On some of my web servers – such as the one used for the uploads – I restrict access rights to certain directories below the root directories of some virtual web domains to special user groups. One reason for this is the access control of different developer groups and FTP users. In addition I enforce automatic group and right settings for newly created files in these directories by ACL (setfacl) and SGID settings.

The Apache process owner becomes a member of the special group(s) owning these special directories – which is natural, as PHP has to work with the files.

Such an access policy was also established for the target directory of my file uploads. Let us say one of these special groups would be “devops” and our target directory would be “uploads”. Normally, when a user of “devops” (e.g. the Apache/PHP process)

  • creates a file in the directory “uploads”
  • or copies a file to the directory “uploads”

the file would get the group “devops” and “-rw-rw—-” rights – the latter due to my ACL settings.

What does “move_uploaded_file” do regarding file ownership and access rights ?

The first thing worth noting is that the policy of what this PHP function does regarding ownership settings and access rights has changed at some point in the past. Furthermore, its behavior seems to depend on whether the target directory resides on a different (mounted) file system. See the links given at the bottom of this article for more information.

Some years ago the right settings for the moved file in the target directory were “0600”. This has obviously changed:

  • The right settings of moved files today are “0644” (independent of file system questions).

However, what about the owner and the group of the moved file? Here “move_uploaded_file” seems to have its very own policy:

  • It sets the file owner to the owner of the Apache web server process (on my Opensuse Linux to “wwwrun“).
  • It always sets the group to “www” – and it does so independent of

    • whether the SGID sticky bit is set for the target directory or not (!) ,
    • whether the Apache web server process owner really is a member of “www” (!) ,
    • whether a file with the same name is already existing in the target directory with a maybe different group !

Funny, isn’t it? Test it out! I find the last 3 facts really surprising. They do not reflect a standard copy policy. As a result in my case the uploaded file always gets the following settings when saved to the file system by “move_uploaded_file”:

owner “wwwrun” and group “www” and the access rights “0644”.

So, after the transfer to the server, my target file ends up with being world readable!

What does ZipArchive do with respect to ownership and access rights ?

As far as I have tested “ZipArchive::extractTo” I dare say the following: it respects my SGID sticky bit and ACL settings. It behaves more or less like the Linux “cp” command would.

So, when ZipArchive has done its extraction job the target file will have quite different settings:

owner “wwwrun”, BUT group “devops” and the access rights “0660”.

However, this is only the case if the target file did not yet exist before the extraction!
If a file with the same name already existed in the target directory, “ZipArchive::extractTo” respects the current owner/group and access right settings (just as the cp command would have done). Test it out!

The impact on MySQL’s “LOAD DATA INFILE …”

The MySQL process has its own owner – on my system “mysql”. When PHP issues the SQL command “LOAD DATA INFILE” via one of its MySQL interfaces, the MySQL engine uses an internal procedure to access the (csv) file and load its content into a database table. You may rightfully conclude that the opening of the file is done by a process with “mysql” as the owner.

So, it becomes clear that not only the rights of the Apache/PHP process or the database access rights are important in the scenario described above:

The MySQL (sub-) process itself must have access rights to the imported file!
 
But as we have learned: this is not automatically the case when the ZipArchive extraction has finished – if “mysql” is not by accident or on purpose a member of the group “devops”.

Now, take all the information given above together – and one understands the strange behavior:

  • When I uploaded the uncompressed file it got rights such that it was world readable, and its contents could therefore be accessed by “mysql”. That guaranteed that “LOAD DATA INFILE” worked in the first standard scenario without a zip file.
  • If the target file is not deleted and exists before a zip extraction process, it is rewritten without a change of its properties. That makes “LOAD DATA INFILE” work after a zip extraction, too – as long as we do not delete a previously existing, world readable target file from a standard upload with the same name.
  • However, in case of an emptied target directory the extraction process respects the SGID and ACL settings – and then “mysql” has no read access right for the file!

A conclusion is that I had stumbled into a problem which I had partially created myself:

I had established a policy for group and right settings which was respected when extracting zipped files. This policy collided with the file access rights required by the MySQL processes. Stupid me! I should have taken that into account!

Accidentally, my policy was not respected when uploading and handling standard files directly, without encapsulating them in a zip container. Due to the failure of my policy and the disregard of special right settings by “move_uploaded_file”, the subsequent MySQL process could use “LOAD DATA INFILE”. Not what I wanted or had expected – but the policy violation almost went undetected due to the overall success.

Conclusions

I hope that the points discussed in this article have made it clear that the access rights of uploaded and/or zip-extracted files are something to take good care of on a Linux web server:

Do not rely on any expectations regarding how PHP functions and objects handle user/group ownership and access rights! In the case of file uploads (with and without zip containers), test it out, check the behavior of your programs and set the rights explicitly with the available PHP file handling functions to what you want to achieve.

The consequences of the peculiar right settings of “move_uploaded_file” and its ignorance of SGID/ACL policies must be taken into account. Other PHP routines may handle access rights of copied/moved/extracted files differently than “move_uploaded_file”. Furthermore, if you want to load data from uploaded (csv) files into the database, take into account that the “mysql” user needs read access to your file.

Solutions for my scenario with zip files

Let us assume that I follow my advice above and set the rights for files saved by “move_uploaded_file” explicitly, namely such that my policy with the group “devops” and the “0660” rights is respected. Then – without further measures – no uploaded file could be handled by the MySQL “LOAD DATA INFILE” mechanism. In my special case one could think about several possible solutions. I just name 3 of them:

Solution 1: If – for some reason – you do not even temporarily want to break your SGID/ACL policies for the uploaded and extracted files in your target directory, you could make “mysql” a member of your special group (here “devops”). Note that this may also have some security implications you should consider carefully.

Solution 2: You may set up another target directory with special access rights only for “mysql” and “wwwrun” by defining a proper group. This directory would only be used to manage the data load into the database. So, copy or move your uploaded/extracted file there, set proper rights if necessary and then start the “LOAD DATA” process for this target file. Clean the special directory afterwards – e.g. by moving your upload file elsewhere (for example to an archive).

Solution 3: You may temporarily change the access rights and/or the group of the file in the PHP program before initiating the MySQL “LOAD DATA” procedure. You may reset the rights after the successful database import, or you may even delete the file or move it to another directory (by rename) – and adjust its rights there to whatever you want.
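A hedged sketch of solution 3 – the file path, the group name, the rights values and the connection data are assumptions of the example:

<?php
// Solution 3 (sketch): temporarily widen the rights for the MySQL import, then restore them.
// File path, group name, rights values and DB credentials are assumptions of this example.
$csvFile = '/srv/www/htdocs/uploads/import.csv';

chmod($csvFile, 0644);                 // make the file readable for the "mysql" user

$db = new mysqli('localhost', 'user', 'password', 'testdb');
$db->query("LOAD DATA INFILE '" . $db->real_escape_string($csvFile) . "' INTO TABLE TA
            FIELDS TERMINATED BY ';' LINES TERMINATED BY '\\n'");

chmod($csvFile, 0640);                 // restore the restrictive rights afterwards
chgrp($csvFile, 'devops');             // and re-establish the group policy
?>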

Whatever you do – continue having fun with Ajax and PHP controlled file uploads. I shall come back to this interesting subject in further articles.

Links

New access right settings of move_uploaded_file
https://blog.tigertech.net/posts/php-upload-permissions/
Old access right settings by move_uploaded_file and ignorance of SGID sticky bit
http://de1.php.net/manual/en/function.move-uploaded-file.php#85149
cp and access rights (especially when a file already exists in the target directory)
http://superuser.com/questions/409226/linux-permissions-and-owner-being-preserved-with-cp
The use of SGID
http://www.library.yale.edu/wsg/docs/permissions/sgid.htm

 

PHP/OO/Schemata: Decoupling of property groups by using composite patterns

When designing and building web applications with OO techniques the following elementary questions come up:

  • How to represent real world objects [RWOs] in the PHP OO code of Web applications by PHP objects?
  • How and where to define and control hierarchical relations between different object classes?
  • How and where to define the relation of the object properties to database table and column definitions of a RDBMS?
  • How to define rules for presentation and web template controller objects and/or web generator objects ?

As a common answer I often use a kind of “SCHEMA” oriented approach. By a SCHEMA on the programming level I understand a container of information that helps to control objects of an application and their properties in a flexible, yet standardized way. So, I am not talking about database schemata – I refer to SCHEMATA describing and controlling the data structure, the behavior, properties and certain methods of OO objects (OO Schemata). Such an object control SCHEMA may, however, refer to and use information coded in a relational database schema. Actually, important parts of a SCHEMA support the objectives of a classical data mapper pattern (OO properties vs. database table fields). But SCHEMATA in my own application framework contain more and other logical control information and settings than just data mapping definitions (see below).

When many object relations and properties are involved the maintenance of multiple OO SCHEMATA may get painful. This article is about the idea that the use of decoupled Schemata for property groups and a rigorous use of composite patterns in the object design may make things easier.

In the following text we call a PHP object instance that represents a RWO a “PWO”. The PWO object properties are of course defined in the PWO class definition. What kinds of PWO classes are required is determined by the results of an object oriented analysis combined with an ER model. A specific PWO class may be derived from a general (framework) class defining “Single” [SGL] objects, which shall work as PWOs (each representing a single RWO) and provide the methods required for handling a PWO and its property data.

Schemata to encapsulate structural data knowledge for (PHP) PWO objects

A PWO may represent a RWO of a certain type like a free position, a product, a contract, an employee or a web page. We need classes of PWOs to cover all required RWO types. For each class of PWOs that appears in a new PHP application I define a specific PHP SCHEMA object class (derived from Schema base classes). I use such a PWO Schema class to define – among other things –

  • all required properties of a PWO object class, associated data types, value ranges, default values, and criteria for must-fields and (un)changeable properties by appropriate arrays,
  • schematic control parameters for other application objects that deal with the data,
  • associated database tables/views of a RDBMS,
  • the relation of properties to database fields and index definitions,
  • the relation of properties to $_POST or $_SESSION data arrays,
  • standard or Null value settings for all fields,
  • typical SQL fragments for a variety of database interactions,
  • all required Master-Detail [MD] relations to other object classes and associated tables/columns,
  • possible constraints for deletions, inserts and updates to maintain data and relation integrity,
  • constraints regarding unique values within relations.

Most of the property/field information is typically kept in arrays. The definitions can be provided directly in the class definition itself or indirectly by fetching information from special database tables which an application designer may fill. Normally, only one Schema object instance – a PWO class specific SCHEMA object – is generated for each type of PWO during the run time of a PHP application program. SCHEMA objects of a specific class are typically singletons.
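A much simplified, hypothetical sketch of such a Schema class may illustrate the idea – the class, table and array names below are my own illustration and not the real framework code:

<?php
// Simplified sketch of a PWO Schema class; the real framework Schemata
// contain far more control information (SQL fragments, MD relations, ...).
class SchemaBase {}   // stand-in for the framework's Schema base classes

class PictureSchema extends SchemaBase
{
    // associated database table
    public $table = 'pictures';

    // property definitions: data type, default value, must-field, changeability
    public $ay_fields = array(
        'snr'    => array('type' => 'int',     'default' => 0,   'must' => true,  'changeable' => false),
        'title'  => array('type' => 'varchar', 'default' => '',  'must' => true,  'changeable' => true),
        'width'  => array('type' => 'float',   'default' => 0.0, 'must' => false, 'changeable' => true),
        'height' => array('type' => 'float',   'default' => 0.0, 'must' => false, 'changeable' => true),
    );

    // mapping of object properties to $_POST keys
    public $ay_post_map = array(
        'title'  => 'pic_title',
        'width'  => 'pic_width',
        'height' => 'pic_height',
    );

    // a typical SQL fragment reused by object methods
    public $sql_select = "SELECT snr, title, width, height FROM pictures";
}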

The ultimate goal of using Schema definitions in an application development framework is:

Once you have defined a Schema consistently with your database, most – if not all – application features and methods to maintain the PWO objects and their underlying data should be ready to run. The object constructor uses the SCHEMA definitions to prepare and fill the object's properties.
In addition a SCHEMA provides a very flexible and dynamic way to change the property list of object classes.

As a central part of this task the Schema information relates (PHP) PWO object properties to database tables and their fields/columns (of a RDBMS) in the sense of a “mapping“. Such tasks are theoretically well covered by the use of “Data Mapper” patterns as a bridge between the OO and the RDBMS worlds. A SCHEMA is – among other things – a concrete Data Mapper object. It defines which PWO property corresponds to which database table column. (It may also define the property's data type, value limits, changeability, and so on.) If you neglect object relations (and the related table relations in the database) for a moment: in very many simple applications the RWO properties and related PWO properties are mapped to the fields of exactly one database table.

However, in case of complex applications you may also have to define parameters describing Master-Detail [MD] relations or other types of relations to other object classes or other external MD hierarchies. For complex applications we furthermore will add parameters defining which properties shall or shall not appear in which order, in which PWO single or PWO list representation, in MD views, in maintenance masks and so on.

Especially list definitions (What fields appear? From which joined tables? In which order?) and definitions for which fields should appear in which order in maintenance masks or public web pages add complexity to a SCHEMA. Moreover, for the purposes of an application meta framework, I provide a lot of parameters to control presentation tasks both for automatically providing maintenance masks for whole object hierarchies and controlling the methods of web page generator objects.

Therefore, in a complicated application environment, a SCHEMA object can be a very complex thing in itself – it must be modified with care and there should always be methods defined to guarantee consistency conditions – e.g.

  • internally between definitions inside a SCHEMA,
  • between SCHEMATA for object classes defined on different levels of a MD hierarchy
  • and of course between the SCHEMA definitions and the tables in the database.

Note that SCHEMA definitions in application developer meta frameworks can also be used to generate tables for new applications (if they are not present yet) and/or to change existing tables and/or to validate the real database tables against what the PHP object representation in a program expects.

A typical (web) application will of course require multiple distinct SCHEMATA for the definition of different PWO classes.

Typically the appropriate SCHEMA object for a PWO or an object describing lists of PWOs will be loaded or injected
into a PWO instance during the creation of this instance by a constructor function. [An alternative to an external injection into the constructor could of course be an automatic identification and loading of the already created singleton Schema object by static methods of a classical Singleton pattern in PHP.]
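A small sketch may illustrate both variants – all class names are hypothetical and the Schema content is omitted:

<?php
// Sketch: explicit injection of the Schema object into the PWO constructor,
// or automatic lookup of the already created singleton Schema instance.
class PictureSchema
{
    private static $instance = null;

    public static function getInstance()
    {
        if (self::$instance === null) {
            self::$instance = new self();   // only one Schema object per PWO class
        }
        return self::$instance;
    }
    // ... property/field definitions as sketched above ...
}

class PicturePwo
{
    protected $schema;

    public function __construct($schema = null)
    {
        // variant 1: external injection; variant 2: automatic singleton lookup
        $this->schema = ($schema !== null) ? $schema : PictureSchema::getInstance();
    }
}

// Both variants end up with the same singleton Schema object:
$pwo_a = new PicturePwo(PictureSchema::getInstance());
$pwo_b = new PicturePwo();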

A major disadvantage of a Schema oriented approach is:

If you make errors when maintaining the Schema definitions your application(s) probably will not run anymore. The reason, of course, is that Schema objects transport substantial, crucial knowledge about the PWO object(s) and the database structure as well as object relations and table relations. This knowledge is used in many places in your application methods.

Maintainability of SCHEMA definitions?

The idea of encapsulating extensive knowledge about data structures in a SCHEMA may lead to maintainability problems. In my experience, a SCHEMA based approach works very well and relatively effortlessly if

  • the number of properties/fields is limited,
  • selection conditions for defining which fields appear are limited,
  • sequence and ordering conditions (i.e. the order of fields or conditions on the order of field appearances in representation objects) are simple and also limited,
  • the relations to other object classes are simple and the number of relations is limited
  • your application, object and database design is already finalized.

But, whenever the number of properties of RWOs/PWOs gets big, or large groups of logically connected properties appear, and/or the relations to other objects get numerous and complicated, then the effort to change the settings in the interconnected SCHEMATA for several PWO classes may become a painful and very time consuming task.

Especially the insertion of new object properties at certain positions in a defined sequence of properties may lead to a lot of manual adjustments a programmer has to make in the different affected Schemata – e.g. a renumbering of many array elements. Changing the database tables may be much less of an effort than changing all affected Schemata in a complex application environment (still under development).

In parallel to the number of properties the number of columns in a database table may get big – so that even handling the tables with tools like phpMyAdmin may become a bit difficult. Note that in a CMS-like application the variables (properties) describing the position and look of page elements may easily number more than a hundred.

Typically, the last condition of my criteria list above will not be met during development phases – especially not in agile projects. Tests and new insights may lead to continuous modifications of data and object models. Resulting Schema adaptions to changes in database or object models may happen very often during some development phases. And then time is a more crucial factor than developers may wish it to be.

Some SCHEMA edit actions like renumbering of arrays or adjusting property/field order definitions will feel like a waste of time – especially in cases when only a special group of related properties is the main target of the changes whilst other property groups could be left alone, at least in principle, if things were decoupled. However, this is not the case if all properties of all property groups are sequentially listed and probably enumerated in one and the same SCHEMA object. And believe me – some enumeration has to be done – you cannot cover everything with associative arrays.

To enhance the maintainability of SCHEMATA in vast MD applications I have used a different approach for some time now, which may also be of interest to other developers. Actually – as I am originally a physicist and programming is not even my main profession within IT – I am not sure whether the following approach has been described elsewhere. Probably it has. There is a big overlap with the well known general “Composite Pattern” in OO design – however, it is the relation to SCHEMA objects that is of interest here. My goal is to fold a composite pattern for SCHEMATA into a related composite pattern for PWOs. For me it marked a bit of a change in how I incorporate structural information into complex PHP/MySQL applications like a CMS.

SLAVE Schemata and SLAVE Objects

I summarize my approach which leads to a split of tables and Schema definitions by the words “Slave Schemata and Slave Objects”.

By the word “SLAVE” I do not refer to Master-Detail relations. Nor do I refer to Master-Slave structures in a database or LDAP server environment. To mark the difference I shall below call the “master” of a SLAVE SCHEMA the “MAIN SCHEMA“. Nevertheless, SLAVEs mark a level in a new hierarchy of a composite pattern, as I explain below.

I have three main objectives:

  • The first objective of this approach is to decouple the conventional definitions of property and data groups in a classical Schema class from each other and encapsulate each data group definition in a separate SCHEMA object, i.e. in a separate SLAVE SCHEMA class definition.
  • The second objective is that – instead of comprising all data fields for object properties in just one database table – we will distribute the fields over several separate database tables – one for each group of properties.
  • The third objective is to make the resulting MAIN/SLAVE-SCHEMA structure and the distribution of data over several tables usable for (restructured) PWOs – without having to reprogram basic methods already available to PWOs by inheritance from some Base classes (of a framework). This will lead us to the definition of SLAVE objects.

SLAVE SCHEMATA and distinct database tables for property groups

Please note that the rearrangement of data over several database tables is NOT done for reasons like redundancy reduction or to get a better ER model. On the contrary, we shall need additional effort in our object methods to gather all property information from the separate tables. The whole effort is made to enhance the maintainability of our SCHEMATA. Nevertheless, reflecting the logical association of data groups by separate, distinct tables may help to deal better with associative structures – although we even get a bit more redundancy.

At its core a SCHEMA defines the relation between database table fields and PWO properties. If we want to split the properties of PWOs of a defined class into groups and distribute these groups into separate database tables we need of course multiple SCHEMATA and a related definition of several SCHEMA classes.

As an example let us assume that we have a RWO/PWO (like an art picture) which shall be described

  • by some basic geometrical information (as e.g. an art picture canvas) (property group “geo”)
  • and by some standard maintainable CMS parameters determining the presentation structure of text information and illustrating pictures on a web page (property group “cms”).

Then we may identify two groups of properties “geo” and “cms” of such a PWO. We could define the properties and their mapping to database fields in 2 Schemata – one for the “geo” group of object properties and one for the “cms” group.

However, we also need a kind of “Main Schema” to bind these (Sub-) Schemata together. Following this idea we get a new hierarchy – in addition to a potentially already existing logical and hierarchical Master-Detail relation between different classes of PWO objects (e.g. a picture-artist relation). But this time we deal just with a grouping of data in “rooms” of a house under one “roof”. So, this new hierarchy of SCHEMATA only has two levels:

  • A MAIN SCHEMA – which defines as usual basic properties of a specific PWO object class and the MD hierarchy relations or other relations to object instances of other PWO classes (besides other things)
  • Multiple SLAVE SCHEMATA – each describing a special group of semantically connected properties of its related specific PWO class.

Note that we will define basic logical and fundamental relational aspects of a PWO in the MAIN SCHEMA. A SLAVE SCHEMA contains information about some standard properties confined in a group. Each SLAVE SCHEMA describes the relation of the properties of its associated group with the fields of a distinguished SLAVE database table – separate from the MAIN table and other SLAVE tables.

All properties described in the MAIN SCHEMA object and its included SLAVE SCHEMA objects together define the complete set of properties of a PWO instance.

To logically bind associated records together, the key values identifying the associated records in both the Main table and the SLAVE tables must obviously have a common value – identifying a PWO record and its respective object instance in a PHP program. We call this value of the unique record identification key “snr” below.
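A sketch of such a table layout – with hypothetical table and column names and an assumed PDO connection $dbh – might look as follows; the point is only that the MAIN and SLAVE tables share the key column “snr”:

<?php
// Sketch: MAIN table and SLAVE tables share the same unique record key "snr",
// so that all property groups of one PWO instance can be collected by one value.
$ay_ddl = array(
    "CREATE TABLE IF NOT EXISTS pic_main (
        snr  INT UNSIGNED NOT NULL PRIMARY KEY,
        name VARCHAR(120) NOT NULL
    )",
    "CREATE TABLE IF NOT EXISTS pic_geo (
        snr    INT UNSIGNED NOT NULL PRIMARY KEY,   -- same key value as in pic_main
        width  FLOAT NOT NULL DEFAULT 0,
        height FLOAT NOT NULL DEFAULT 0
    )",
    "CREATE TABLE IF NOT EXISTS pic_cms (
        snr      INT UNSIGNED NOT NULL PRIMARY KEY, -- same key value as in pic_main
        tpl_name VARCHAR(80) NOT NULL DEFAULT '',
        num_cols TINYINT     NOT NULL DEFAULT 1
    )",
);

// $dbh is assumed to be an existing PDO connection to the MySQL database
foreach ($ay_ddl as $stmt) {
    $dbh->exec($stmt);
}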

The MAIN SCHEMA has of course to comprise some knowledge about the SLAVE Schemata. In an MD application, we may in addition have to define one MAIN Schema and several SLAVE Schemata on each of the MD levels.

The basic idea regarding an improved efficiency of the maintenance of SCHEMATA is:

If you have to change some properties/fields – change just and only the affected SLAVE or MAIN SCHEMA describing the related group of fields – but leave all other SCHEMATA and their definition statements unchanged!

SLAVE SCHEMA objects are created inside the MAIN SCHEMA object

As a Main Schema and its Slave Schemata strongly belong together and all the Slave Schemata depend on the Main Schema, my approach was to create the SLAVE Schema object instances as sub-objects of the MAIN Schema object instance. I.e., I followed the idea of a composite pattern:

The MAIN SCHEMA object – a Singleton – acts as a container object for its SLAVE SCHEMA object instances. It generates, contains and controls the SLAVE SCHEMATA as sub-objects (i.e. as complex member variables). We can keep this knowledge in array-like structures for the sub-objects. The SLAVE Schemata can e.g. be arranged in an associative array with indices defined in an array “ay_slave_prefs[]” of the Main Schema – containing name prefixes for each of the (Slave) property groups (e.g. “geo” and “cms”).

The MAIN SCHEMA objects of an MD application and their encapsulated SLAVE SCHEMA objects should of course be instances of the same type of general base classes for Schema objects. We want to reuse as many of the methods already defined for Schema objects as possible. SLAVE Schemata are basically Schemata after all!

Nevertheless, in such an approach we would still have to adapt or introduce some methods to deal with the hierarchical structure and the association with one another – e.g. for special requirements regarding consistency checks between a SLAVE Schema and its MAIN Schema and other things. However, this is an easy task.

Furthermore, each SLAVE SCHEMA object should receive a reference to its MAIN SCHEMA object as an injected parameter to be able to create and handle further references to all the variables of the MAIN Schema. So the MAIN Schema object will contain SLAVE Schema objects – each of which itself holds a reference to its common container object, namely the MAIN SCHEMA object.
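The following sketch – with hypothetical class names; only “ay_slave_prefs” is taken from the description above – shows how a MAIN Schema could instantiate and store its SLAVE Schemata and hand each of them a back-reference to itself:

<?php
// Sketch of the MAIN/SLAVE Schema composite.
class SlaveSchemaBase
{
    public $main_schema;    // back-reference to the MAIN Schema container object

    public function __construct($main_schema)
    {
        $this->main_schema = $main_schema;
    }
}

class SlaveSchemaGeo extends SlaveSchemaBase { /* property group "geo" ... */ }
class SlaveSchemaCms extends SlaveSchemaBase { /* property group "cms" ... */ }

class MainSchemaPicture
{
    // name prefixes of the property groups
    public $ay_slave_prefs = array('geo', 'cms');

    // container for the SLAVE Schema sub-objects
    public $ay_slave_schemas = array();

    public function __construct()
    {
        // map each prefix to its SLAVE Schema class and instantiate it,
        // passing a reference to this MAIN Schema object
        $ay_classes = array('geo' => 'SlaveSchemaGeo', 'cms' => 'SlaveSchemaCms');
        foreach ($this->ay_slave_prefs as $pref) {
            $class = $ay_classes[$pref];
            $this->ay_slave_schemas[$pref] = new $class($this);
        }
    }
}

$main_schema = new MainSchemaPicture();   // singleton handling omitted for brevity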

Note that there is still only one MAIN SCHEMA object comprising all relevant property and relation information for each PWO class.

The following drawing should make the basic ideas described above clear:

[Drawing: Slave_schemata]

SGL PWO objects representing a single RWO become internally structured by SLAVE PWO objects

OK, so far we have grasped the idea of a kind of array of SLAVE Schemata within a MAIN SCHEMA. Each SLAVE SCHEMA describes a bunch of properties of a PWO object class. The values of these properties are saved in a distinct database table. The MAIN SCHEMA keeps everything together and also defines the MD relations and other relations between PWOs of different PWO classes.

At the core of PHP (web) applications, however, we need (structured) PWO objects with methods to handle all of the user’s interactions with all property data.

In my application meta framework a PWO class is realized by deriving it from a base class for so called Single [SGL] objects. The task of a SGL object class is to provide a general basis for specific PWO classes. A PWO instance is also a SGL instance and has all of its methods available. We speak of a “SGL PWO” object below. (An application class family also comprises other types of objects, e.g. LIST objects or template control objects.)

A SGL PWO object is derived from some SGL Base Classes (of an inheritance chain) with all required methods e.g. to handle database transactions, to check field contents and to handle complex object relations like MD hierarchy relations of a potential PWO. It does this by extensively using the PWO SCHEMA information in its methods.

But, how to deal with our new type of a MAIN SCHEMA that contains information distributed over several SLAVE SCHEMATA? How would a PWO use it? And:

Do we need to rewrite all the base class methods for SGL PWOs that handle the database interactions because we have divided the PWO properties into distinct groups (saved in distinct database tables)?
Fortunately, the answer is NO!

A SGL PWO object in my framework e.g. identifies its appropriate SCHEMA object by following name space rules and then loads the Schema as a sub-object (see the graphics). It does this by injecting the PWO Schema object into the constructor of its most elementary base class in the inheritance chain. So, basically a PWO gets its Schema object injected. (I have only automated this process by name rules and the use of Singleton patterns.) A PWO deals with its data by using the knowledge of its injected SCHEMA.

Therefore, we can choose a similar approach for our SGL PWO objects as for the Schemata:

A SGL object realizing a specific PWO instance becomes a MAIN SGL PWO object. It will create and contain SLAVE SGL PWOs which are derived from the very same base classes as the MAIN SGL object itself. So, we use a kind of composite pattern for the SGL PWO objects, too:
 
The Main SGL PWO object acts as a container for the SLAVE SGL PWOs. Each SLAVE SGL object shall be responsible for the representation of a property group defined in a SLAVE Schema. And now comes the real power of
this intertwined double composite pattern approach:
 
The properties of each SLAVE SGL object correspond to the fields of the tables/views defined in the related SLAVE Schema, only! To use that at the programming level we only have to create the SLAVE SGL PWO objects the same way as the MAIN SGL PWO object – BUT with an injection of the relevant SLAVE Schema instead of the Main Schema!

See the drawing above. As in the case of the SCHEMATA we create each SLAVE SGL object with a reference to its MAIN SGL object. Each SLAVE SGL object therefore knows about the identification key of the MAIN SGL object (and its rows in the database tables) and can therefore use it to identify its own records in the SLAVE tables of the database (defined in the SLAVE Schema). Remember that we defined the key value to be the same for associated records in the MAIN and SLAVE tables.

Provided that the right SLAVE SCHEMA was injected into each SLAVE SGL PWO, all base class methods that work for the MAIN SGL object regarding database operations will also work for the SLAVE SGL objects. The correct identification of the right record in the associated SLAVE tables is guaranteed if each SLAVE object gets the same value “snr” for its identification key as its MAIN container object (see above). That should be straightforward to understand and can be guaranteed by the constructor functions. As a result SLAVE objects and their methods work autonomously on their SLAVE tables just as the MAIN SGL object works on its main table.
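A sketch of such a constructor – all names are my own simplified illustration – could look like this; the MAIN SGL PWO creates one SLAVE SGL PWO per SLAVE Schema and passes on the common key value “snr”:

<?php
// Sketch: the MAIN SGL PWO object creates one SLAVE SGL PWO per SLAVE Schema,
// injects the respective SLAVE Schema and passes on the common record key "snr".
class SglPwoBase
{
    public $schema;                  // injected Schema (MAIN or SLAVE)
    public $snr;                     // unique record key, identical for MAIN and SLAVEs
    public $ay_slaves = array();     // SLAVE SGL PWO objects (empty for SLAVEs)

    public function __construct($schema, $snr, $is_main = true)
    {
        $this->schema = $schema;
        $this->snr    = $snr;

        // only the MAIN object spawns SLAVE objects - one per SLAVE Schema
        if ($is_main && !empty($schema->ay_slave_schemas)) {
            foreach ($schema->ay_slave_schemas as $pref => $slave_schema) {
                $this->ay_slaves[$pref] = new self($slave_schema, $snr, false);
            }
        }
    }
}

// Usage sketch: $main_schema is assumed to keep its SLAVE Schemata in the
// array $ay_slave_schemas (see the MAIN/SLAVE Schema sketch above).
// $main_pwo = new SglPwoBase($main_schema, 42);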

Recursive iteration of methods

All we said above additionally means that we are able to iterate all the methods a SGL MAIN object uses for data handling over its SLAVE SGL objects, too:

The thing to be guaranteed is that every UPDATE, DELETE or INSERT method for the MAIN SGL object automatically triggers the execution of the very same (base class) methods for the SLAVE objects. This requires a rather simple method extension in the base classes. Actually, we could define each of the elementary methods in the form of a recursion following the hierarchy:

If SLAVE objects exist, call the method presently used for the MAIN SGL PWO object for each of the SLAVE objects, too.

We may stop the recursion at the level of the SLAVE objects – without attempting a further iteration over a non-existing deeper SLAVE level – by evaluating a special property of a SLAVE Schema stating that the SLAVE has no SLAVE Schemata incorporated itself. (However, an iteration over tree-like object structures would also be possible – although not required in my case.)
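A sketch of such a recursive method – with hypothetical method and property names – may look like this:

<?php
// Sketch: the UPDATE method first writes this object's own table and then calls
// the very same method on all of its SLAVE objects. Since SLAVE objects contain
// no SLAVEs of their own, the recursion stops there automatically.
class SglPwoBase
{
    public $ay_slaves = array();     // SLAVE SGL PWO objects; empty for a SLAVE itself

    public function update_record()
    {
        $ok = $this->update_own_table();          // UPDATE for this object's table

        // propagate the very same method to all existing SLAVE objects
        foreach ($this->ay_slaves as $slave) {
            $ok = $slave->update_record() && $ok;
        }
        return $ok;
    }

    protected function update_own_table()
    {
        // placeholder for the real UPDATE statement built from the injected Schema
        return true;
    }
}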

Remark regarding error handling of database transactions

For error handling we need further method extensions controlling the success of the database operations over all (MAIN and SLAVE) tables. In case of a failure in a SLAVE table all already performed transactions on other SLAVE tables or on the MAIN table have to be rolled back. To do this without appropriate mechanisms offered by the database, the old values have to be saved in intermediate storage. Otherwise, the database's transaction control and rollback mechanisms can be used.
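A sketch of the transaction variant – assuming InnoDB tables, an existing PDO connection $dbh configured with PDO::ERRMODE_EXCEPTION and the update_record() sketch above – could look like this:

<?php
// Sketch: wrap the MAIN and SLAVE table operations in one database transaction,
// so that a failure in any SLAVE table rolls everything back.
try {
    $dbh->beginTransaction();
    $main_pwo->update_record();      // recursively updates MAIN and SLAVE tables
    $dbh->commit();
} catch (Exception $e) {
    $dbh->rollBack();                // restore the previous state of all affected tables
    error_log("Database import rolled back: " . $e->getMessage());
}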

Hide or use the SLAVE structure from outside?

Note that regarding the interaction with a PWO from outside you have the choice

  • to adapt the objects that use PWOs – to work with their data or to generate e.g. web pages – such that they use their knowledge about the SLAVE structure,
  • or to create interface methods that hide the internal structure.

My experience is that in case you build your own application development framework you will need both approaches. Note also that our SLAVE
approach will have an impact on objects and methods developed for representing LISTs of PWOs or MD views over several MD hierarchy levels of PWOs. But that is stuff for another article. In this article I just wanted to present the basics of the SLAVE SCHEMA and SLAVE PWO object approach.

Conclusion

Maintaining multiple SCHEMA definitions over hundreds of properties of RWOs/PWOs can be dreadful. Splitting the properties into property groups and defining associated Sub-Schema objects of a Main-Schema object in the sense of an OO composite pattern can help to improve maintainability. This approach can be coupled with a composite pattern for the (SGL) PWO objects representing single RWOs. The SLAVE (SGL) PWO objects are instantiated using the same SGL base classes as the MAIN PWO object (which contains the SLAVE PWOs). The decoupling of the data is guaranteed by the injection of the right SLAVE Schema into each SLAVE PWO. Many methods can then be iterated over the MAIN/SLAVE objects and will lead to consistent database interactions.

In a forthcoming article I shall discuss the impact of a SLAVE object approach on (X)HTML generator methods in web applications. See:
PHP/OO/Schemata: SLAVE objects and (X)HTML generator methods
