Is a COBIT 5 Foundation certification worthwhile for Linux professionals?

Linux is an established presence in the server segment today. Nevertheless, there are still many projects – particularly in the German SME sector and/or in public administration – in which the introduction of Linux systems (e.g. for cost reduction) is being tackled for the first time. In some cases even the desktop may be affected (see, for example, the (still) ongoing LiMux project of the City of Munich).

In the context of Linux introductions, a great deal of persuasion is required in a first step before the corresponding strategic decisions are made. Classic topics of IT governance are involved here, and coordination with a responsible CIO will very likely end up on the agenda. At that point the world of corporate IT governance directly touches internal and external staff members as well as consultants.

Last week I completed the COBIT 5 Foundation certification. COBIT 5 is a framework for the governance and management of enterprise IT. Since I already hold a range of personal certificates in the ITSM, ISM, risk management and ITIL areas, I asked myself whether the effort was worthwhile at all. After self-study of the book "Praxiswissen COBIT" and attending a corresponding course, I now believe that engaging with COBIT can be interesting precisely for consultants in the Linux environment. Here are a few reasons:

  • 1) IT governance and Linux
    Introducing Linux and/or substantially expanding its use in a company is always also an executive-management – and thus a governance – topic. For consultants (including technical consultants) and committed employees it is therefore well worth acquiring a basic understanding of governance topics, and of IT governance in particular. For a technical (leadership) person this may serve, if only, to better understand the motives and guiding principles in the thinking of management – especially of a CIO. A 1.5-day Foundation course is, in my view, well suited for this.
    Especially technology-driven advocates of Linux, who sometimes cannot or will not free themselves from "ideological" motives, may find that the credo of the COBIT goals cascade – namely aligning IT with the support of enterprise goals – helps them supply decision makers with the "right" arguments. A relevant question is: Can a Linux-based solution support the enterprise goals better, more sustainably and possibly at lower cost than other solutions? Why? To what extent? What do the associated business cases look like?
    "Open" alone is too little at the management level … this is a triviality, but COBIT rightly and explicitly reminds Linux adherents, too, that IT is not an end in itself. We have to show where, when and exactly why Linux-based solutions let a company achieve its enterprise goals more effectively and efficiently than other approaches. And we have to re-examine this self-critically again and again.
  • 2) Projects / programs as part of IT governance
    Anyone familiar with ISO 20000 or ISO 21000 knows that the conceptual worlds of these standards exclude projects and their processes. That is understandable from the perspective of the general applicability of standards – but still not really close to practice and reality. Starting from requirements management, new services usually arise through projects or, for large strategic initiatives, through programs. This naturally also applies, and especially so, in the Linux environment. What is needed, therefore, is a framework that, in addition to the ITSM standards and ITSM/ISM best practices, also refers to best practices in project management and integrates corresponding processes for governance support. This is the case with COBIT 5.

  • 3) Architecture from different perspectives
    In my view, Linux naturally has much to do with building consistent, yet also flexible and extensible IT system and software architectures. Here, too, a governance framework should incorporate architectural aspects. This is fundamentally the case with COBIT 5, which has been aligned with TOGAF.
  • 4) Policies
    I know of hardly any Linux project – and certainly no Linux introduction project – that did not require management support. That calls for policies and guidelines. Anyone who has not yet dealt with this topic in the context of other management systems will get a good introduction through a COBIT Foundation course.
  • 5) Broad, comprehensive alignment with other frameworks
    From my current perspective, COBIT 5 is a very broadly conceived framework that has been deliberately aligned with other important frameworks and standards such as ITIL V3, ISO 20000, ISO 21000, ISO 31000, ISO 38500, COSO and TOGAF. Since COBIT 4 a great deal of diligent work has been done here, placing the whole governance approach on a considerably broader and more comprehensive foundation than some other frameworks or standards known to me. In my estimation this ultimately has more advantages than disadvantages. At the very least, COBIT 5 makes clear at which points IT governance can and should draw on other management systems or best practices. As already indicated above, this also yields a point of connection for meaningfully interlocking the process worlds of projects/programs and IT service management – even if, to my mind, COBIT does not spell this out sufficiently. But that is a special aspect which particularly interests me because of my affinity for software projects.

Of course, a Foundation course can at best convey fundamentals on the points above and give hints on how corresponding processes can be integrated into a governance-driven process model. In my view, that is already interesting enough – even if the subdivision of processes into so-called "practices" takes some getting used to for the ITILian. I also find that, from a management point of view, COBIT provides just as good a motivation for engaging with further or specialized IT-related management frameworks and standards as ITIL does from a quite different perspective.

Compared with other certification exams, I would personally rate the effort for the international, personal Foundation certification as genuinely manageable. The book I chose, "Praxiswissen COBIT", is certainly well suited as a reference and for self-study – even if, after reading it, one cannot necessarily judge what is relevant for a Foundation exam. In future editions the author, Markus Gaulke, should perhaps give corresponding reading hints at the beginning of the book and not only on p. 363. The relevant chapters can easily be worked through over a long weekend. On this basis I found attending a roughly 1.5-day course in preparation for the exam very helpful, though not strictly necessary. The course I attended at mITSM was refreshingly lively and, moreover, corrected one or two convictions hastily formed on the basis of other frameworks or standards. A course also helps one to better gauge the often somewhat convoluted questions, the style, and the pitfalls of the exams – especially since, after the exam, some questions seemed to me to be translated too literally from English.

One more warning: IT governance is not a topic for technology freaks – it is rather about steering processes to achieve enterprise or organizational goals. Of course, the organization of technical tasks is touched upon as well. But technological questions are not a primary topic of a COBIT Foundation course – although some steering elements can be illustrated well with concrete examples.

But no one has decreed that technology-minded employees and managers may not look beyond their own horizons. And given the manifold, heterogeneous and open range of potential Linux-based solutions for the tasks of a modern enterprise, stringent IT governance is surely an essential factor through which the enormous dynamism of open source development can be better harnessed to a company's advantage and aligned with prioritized goals. For me, leadership, agility and the principles of creative self-organization are not contradictions at all but complementary factors – especially in the open source environment. COBIT can help corporate leadership as well as IT management set effective guardrails for the fruitful unfolding of agility and dynamism.

Character sets and Ajax, PHP, JSON – decode/encode your strings properly!

Ajax and PHP programs run in a more or less complex environment. Very often you want to transfer data from a browser client via Ajax to a PHP server and, after some manipulation, save them into a MariaDB or MySQL database. As you use Ajax, you expect an asynchronous response sent from the PHP server back to the client. This answer can have a complicated structure and may contain a combination of data from different sources – e.g. the database or your PHP programs.

If and when all components and interfaces (web pages, Ajax programs, the web server, files, PHP programs, PHP/MySQL interfaces, MySQL …) are set up for UTF-8 character encoding, you will probably not experience any problems regarding the transfer of POST data to a PHP server by Ajax and further on into a MySQL database via a suitable PHP/MySQL interface. The same holds for the Ajax response. In this article I shall assume that the Ajax response is expected as a JSON object, which we prepare by using the function json_encode() on the PHP side.

Due to provider restrictions or customer requirements you may not always find such an ideal “utf-8 only” situation where you can control all components. Instead, you may be forced to combine your PHP classes and methods with programs others have developed. E.g., your classes may be included into programs of others. And what you have to or should do with the Ajax data may depend on settings others have already performed in classes which are beyond your control. A simple example where a lack of communication may lead to trouble is the following:

You may find situations where the data transfer from the server side PHP programs into a MySQL database is pre-configured by a (foreign) class controlling the PHP/MySQL interface for the western character set iso-8859-1 instead of utf-8. Related settings of the MySQL system ("SET NAMES") affect the PHP mysql, mysqli and pdo_mysql interfaces for the control program. In such situations the following statement would hold:

If your own classes and methods do not provide data encoded with the expected character set at your PHP/MySQL interface, you may get garbage inside the database. This may in particular lead to classical "Umlaut"-problems for German, French and other languages.

So, as a PHP developer you must be prepared to decode the POST or GET data strings of an Ajax request properly before transferring such string data to the database! However, what one sometimes forgets is the following:

You have to encode all data contributing to your Ajax response – which you may deliver in a JSON format to your browser – properly, too. And this encoding may depend on the respective data source or its interface to PHP.

And even worse: For one Ajax request the response data may be fetched from multiple sources – each encoded for a different charset. In case you want to use the JSON format for the response data, you will probably use the json_encode() function. But this function may react allergically to an offered combination of strings encoded in different charsets! So, a proper and suitable encoding of string data from different sources should be performed before starting the json_encode() process in your PHP program! This requires complete knowledge of and control over the encoding of data from all sources that contribute strings to an Ajax response!

Otherwise, you may never get any (reasonable) result data back to your Javascript function handling the Ajax response. This happened to me lately, when I deployed classes which worked perfectly in a UTF-8 environment on a French LAMP system where the PHP/MySQL interfaces were set up for a latin-1 character set (corresponding to iso-8859-1). Due to proper decoding on the server side, the Ajax data went correctly into the database – however, the expected complex response data comprising database data, data from files and programs were either not generated at all or generated incorrectly.

As I found it somewhat difficult to analyze what happened, I provide below a short overview of some important steps for such Ajax situations.

Setting a character set for the PHP/MySQL interface(s)

The character set for the PHP/MySQL connection is chosen from the PHP side by issuing a SQL command. For the old mysql interface, e.g., this may look like:

$sql_unames = "SET NAMES 'latin1'";
mysql_query($sql_unames, $this->db);

Note that this setting for the PHP/MySQL interfaces has nothing to do with the MySQL character set settings for the database, a specific table or a table column! The NAMES setting actually prepares the database for the character set of incoming and outgoing data streams. The transformation of string data to (or from) the character code defined for your database/tables/columns is additionally and internally done inside the MySQL RDBMS.

With such a PHP/MySQL setting you may arrive at situations like the one displayed in the following drawing:

[Drawing: ajax_encoding]

In the case sketched above I expect the result data to come back from the server in a JSON format.

Looking at the transfer processes, one of the first questions is: How does or should the Ajax transfer of POST data to the server work with respect to character sets?

Transfer POST data of Ajax-requests encoded with UTF-8

Normally, when you transfer data of a web form to a server you have to choose between the GET or the POST mechanism. This, of course, is also true for Ajax controlled data transfers. Before starting an Ajax request you have to set up the Ajax environment and objects in your Javascript programs accordingly. But potentially there are more things to configure. Via e.g. jQuery you may define an option regarding the so-called "contentType" for the character encoding of the transfer data, the "type" of the request to be sent to the server and the "dataType" for the structural format of the response data:

$.ajaxSetup( { .....
    contentType : 'application/x-www-form-urlencoded; charset=UTF-8',
    type : 'POST',
    dataType : 'json'
..... });

With the first option you could at least in principle change the charset for the encoding to iso-8859-1. However, I normally refrain from doing so, because it is not compliant with W3C-requirements. The jQuery/Ajax documentation says:

"The W3C XMLHttpRequest specification dictates that the charset is always UTF-8; specifying another charset will not force the browser to change the encoding."
(See: http://api.jquery.com/jquery.ajax/)

Therefore, I use the standard and send POST data in Ajax-requests utf-8 encoded. In our scenario this setting would lead to dramatic consequences on the PHP/MySQL side if you did not properly decode the sent data on the server before saving them into the database.

In case you have used the "SET NAMES" SQL command to activate a latin-1 encoded database connection, you must apply the function utf8_decode() to the utf-8 encoded strings in the $_POST array before saving these strings into database table fields!

In case you want to deploy Ajax and PHP codes in an international environment where “SET NAMES” may vary from server to server it is wise to analyze your PHP/MySQL interface settings before deciding whether and how to decode. Therefore, the PHP/MySQL interface settings should be available information for your PHP methods dealing with Ajax data.
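As a sketch of this decision logic (the variable $names_charset and the simulated $_POST value are purely hypothetical; in a real application the charset information would be read from the object controlling the PHP/MySQL interface):

```php
<?php
// Simulated Ajax POST data - sent UTF-8 encoded, as the W3C standard dictates
$_POST['muc'] = "München";

// Hypothetical: the charset chosen via "SET NAMES", as reported by the interface class
$names_charset = 'latin1';

$val = $_POST['muc'];
if ($names_charset === 'latin1' && mb_check_encoding($val, 'UTF-8')) {
    // decode only if the interface really expects latin-1 data
    $val = iconv('UTF-8', 'ISO-8859-1', $val);
}
// $val can now be sent safely over the latin-1 configured DB connection
```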

Note that the function utf8_decode() only decodes to the iso-8859-1 charset. For some cases this may not be sufficient (think of the €-sign!). Then the more general function iconv() is your friend on the PHP side.
See: http://de1.php.net/manual/de/function.iconv.php.
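A small illustration of this limitation (just a sketch; note that utf8_decode() has been deprecated since PHP 8.2 in favor of mb_convert_encoding()):

```php
<?php
$price = "25 €";   // UTF-8 source string containing the €-sign

// utf8_decode() only knows iso-8859-1: the €-sign silently degrades to "?"
$latin1 = utf8_decode($price);

// iconv() can target iso-8859-15 instead, which does contain the €-sign
$latin15 = iconv('UTF-8', 'ISO-8859-15', $price);

// the round trip back to UTF-8 preserves the original string
echo iconv('ISO-8859-15', 'UTF-8', $latin15);   // 25 €
```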

Now, you may think we have gained what we wanted for the “Ajax to database” transfer. Not quite:

The strings you eventually want to save in the database may be composed of substrings coming from different sources – not only from the $_POST array of an Ajax request. So, you need to control where the strings you compose come from and in which charset they are encoded. A very simple source is the program itself – but the program files (and/or includes) may have a different charset than the $_POST data! So, the individual strings may require different de- or encoding treatment! For that purpose the general "Multibyte String Functions" of PHP may be of help for testing or creating specific encodings. See e.g.: http://php.net/manual/de/function.mb-detect-encoding.php
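A minimal sketch of such a test, assuming UTF-8 and iso-8859-1 are the only candidate charsets:

```php
<?php
$utf8  = "München";                              // string from a UTF-8 encoded program file
$latin = iconv('UTF-8', 'ISO-8859-1', $utf8);    // the same word in iso-8859-1

// mb_check_encoding() tests whether the bytes form a valid string in a given charset
var_dump(mb_check_encoding($utf8,  'UTF-8'));    // bool(true)
var_dump(mb_check_encoding($latin, 'UTF-8'));    // bool(false)

// mb_detect_encoding() guesses the charset among a list of candidates (strict mode)
echo mb_detect_encoding($latin, ['UTF-8', 'ISO-8859-1'], true);   // ISO-8859-1
```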

Do not forget to encode Ajax response data properly!

An Ajax request is answered asynchronously. I often use the JSON format for the response from the server to the browser. It is easy to handle and well suited for Javascript. On the PHP side the json_encode() function helps to create the required JSON object from the components of an array. However, the strings combined into a JSON conform Ajax response object may come from different sources. In my scenario I had to combine data defined

  • in data files,
  • in PHP class definition files,
  • in a MySQL database.

All of these sources may provide the data with a different character encoding. In the simplest case, think about a combination (inclusion) of PHP files which some other developers have encoded in UTF-8 whereas your own files are encoded in iso-8859-1. This may e.g. be due to different standard settings in the Eclipse environments the programmers use.

Or let’s take another more realistic example fitting our scenario above:
Assume you have to work with some strings which contain a German "umlaut" like "ü", "ö", "ä" or "ß". E.g., in your $_POST array you may have received (via Ajax) some string "München" in W3C compliant UTF-8 format. Now, due to the database requirements discussed above, you convert the "München" string in $_POST['muc'] with

$str_x = utf8_decode($_POST['muc']);

to iso-8859-1 before saving it into the database. Then the correct characters would appear in your database table (a fact which you could check by phpMyAdmin).

However, in some other part of your UTF-8 encoded PHP(5) program file (or in included files) you (or some other contributing programmer) may have defined a string variable $str_y that eventually shall also contribute to the JSON formatted Ajax response:

$str_y = "München";

Sooner or later, you prepare your Ajax response – maybe by something like:

$ay_ajax_response['x'] = $str_x;
$ay_ajax_response['y'] = $str_y;
$ajax_response = json_encode($ay_ajax_response);
echo $ajax_response;

(Of course I oversimplify; you would not use global data but much more sophisticated structures …). In such a situation you may never see your expected response values correctly. Depending on the concrete setup of the Ajax connection in your client Javascript/jQuery program you may not even get anything back on the client side. Why? Because the PHP function json_encode() will return "false"! Reason:

json_encode() expects all input strings to be utf-8 encoded !

But this is not the case for your decoded $str_x in our example! Now, think of string data coming from the database in our scenario:

For the same reason, weird things would also happen if you just retrieved some data from a database without thinking about the encoding of the PHP/MySQL interface. If you had used “SET NAMES” to set the PHP/MySQL interface to latin-1, then retrieved some string data from the base and injected them directly – i.e. without a transformation to utf-8 by utf8_encode() – into your Ajax response you would run into the same trouble as described in the example above. Therefore:

Before using json_encode() make sure that all strings in your input array – from whichever source they may come – are properly encoded in UTF-8! Watch out for specific settings for the database connection which may have been set by database handling objects. If your original strings coming from the database are encoded in iso-8859-1, you can use the PHP function utf8_encode() to get proper UTF-8 strings!
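The effect can be demonstrated in a few lines (a sketch; note that utf8_encode() has been deprecated since PHP 8.2, where mb_convert_encoding() takes its place):

```php
<?php
// Simulate a string fetched over a latin-1 configured PHP/MySQL interface
$from_db = iconv('UTF-8', 'ISO-8859-1', "München");

// json_encode() rejects the input because it is not valid UTF-8
var_dump(json_encode(['city' => $from_db]));             // bool(false)

// after proper encoding to UTF-8 everything works as expected
echo json_encode(['city' => utf8_encode($from_db)]);     // {"city":"M\u00fcnchen"}
```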

Some rules

The scenario and examples discussed above illustrate several important points when working with several sources that may use different charsets. I try to summarize these points as rules:

  • All program files should be written using the same character set encoding. (This rule seems natural but is not always guaranteed if the results of different developer groups have to be combined)
  • You should write your program statements such that you do not rely on some assumed charsets. Investigate the strings you deal with – e.g. with the PHP multibyte string functions "mb_…()" – and test them for their (probable) charset.
  • When you actively use “SET NAMES” from your PHP code you should always make this information (i.e. the character code choice) available to the Ajax handling methods of your PHP objects dealing with the Ajax interface. This information is e.g. required to transform the POST input string data of Ajax requests into the right charset expected by your PHP/MySQL-interface.
  • In case of composing strings from different sources, align the character encoding over all sources. Relevant sources with different charsets may e.g. be: data files, databases, POST/GET data, …
  • In case you have used "SET NAMES" to select some specific character set for your MySQL database connection, do not forget to decode properly before saving into the database and to encode data fetched from the base properly into utf-8 if these data shall be part of the Ajax response. Relevant functions for utf-8/iso-8859-1 transformations are utf8_encode(), utf8_decode() and, for more general cases, iconv().
  • If you use strings in your program that are encoded in some other charset than utf-8, but which shall contribute to your JSON formatted Ajax response, encode all these strings in utf-8 before you apply json_encode()!
  • Always check the return value of json_encode() and react properly by something like
    $json = json_encode($ay_ajax_response);
    if ($json === false) {
        .... error handling code ....
    }
  • Last but not least: When designing your classes and methods for the Ajax handling on the PHP side, always think about some internal debugging features, because due to restrictions and missing features you may not be able to fully debug variables on the server. You may need extra information in your Ajax response, and you may need switches to change from an Ajax controlled situation to a standard synchronous client/server situation where you could directly see echo/print_r outputs from the server. Take into account that in some situations you may never get the Ajax response back to the client …
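The encoding-related rules above can be condensed into a small helper function (a sketch with hypothetical names, assuming a flat array and iso-8859-1 as the only alternative charset):

```php
<?php
// Ensure every string in a (flat) response array is valid UTF-8
// before the array is handed to json_encode()
function to_utf8_array(array $data): array
{
    foreach ($data as $key => $val) {
        if (is_string($val) && !mb_check_encoding($val, 'UTF-8')) {
            // assumption: any non-UTF-8 string is iso-8859-1 encoded
            $data[$key] = mb_convert_encoding($val, 'UTF-8', 'ISO-8859-1');
        }
    }
    return $data;
}

// Mixed sources: a latin-1 string (e.g. from the database) plus a UTF-8 literal
$ay_ajax_response = [
    'x' => iconv('UTF-8', 'ISO-8859-1', "München"),
    'y' => "München",
];
echo json_encode(to_utf8_array($ay_ajax_response));
// {"x":"M\u00fcnchen","y":"M\u00fcnchen"}
```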

I hope these rules help people who work with jQuery/Ajax, PHP and MySQL and are confronted with more than one character set.

PHP/OO/Schemata: Composite SLAVE objects and (X)HTML generator methods

Some time ago I wrote an article about how to treat groups of associated, related properties of a Real World Object [“RWO”] when representing it by an object [“PWO”] in PHP.

We assumed that the property description of a certain object class is done with the help of a so-called "SCHEMA" (a file with a list of definitions and rules, or a Schema program using definitions saved in a database Schema table). A SCHEMA is specific for a PWO class. A SCHEMA defines object relations, object properties, the relations between certain object properties, the relation of object properties with database fields and of course the type of each property, the property's own attributes as well as associated constraints. Such "SCHEMATA" would play a central role in a web application as they encapsulate vital structural and detail information about objects and their properties.

Every object constructor would refer to an appropriate SCHEMA, as would web generators for the creation of e.g. form or web template elements. Actually, any reasonable (web) application would of course work with multiple SCHEMATA – one for each object class. SCHEMATA give us a flexible way to change or adapt a class's properties and relations. They also close the gap between the OO definitions and a relational database.

To handle groups of associated properties in such an environment I suggested the following:

  • Split the SCHEMA information for properties and related database fields into several "SLAVE Schemata" which would be built as sub-objects within the MAIN SCHEMA object. Each SLAVE SCHEMA would describe a certain group of closely associated object properties.
  • Create and use "SLAVE PWO SGL objects" as encapsulated sub-objects in a MAIN SGL PWO. Each SLAVE PWO object gets its own properties defined in a related SLAVE SCHEMA.
  • Each of the SLAVE PWO objects receives its knowledge about its individual SLAVE SCHEMA by an injection process during construction.

See:
PHP/OO/Schemata: Decouple property groups by combined composite patterns

Each PWO object representing a single RWO we shall call an SGL PWO. It comprises a series of sub-objects: SLAVE SGL PWOs that refer to corresponding SLAVE Schemata of a MAIN Schema. See the illustration from the article named above:

[Illustration: Slave_schemata]

In the named article I had discussed that one can iterate methods for complete database interactions over the MAIN and the SLAVE objects. The same is true for interactions with POST or SESSION data in a CMS like web application. So, there is no need to rewrite core method code originally written for objects which comprise all object properties in just one property/field-SCHEMA without any SLAVE objects. At least with respect to field checks and database interactions.

This works if and when an SGL object is derived from a base class which comprises all required methods to deal with the database, POST and SESSION data. Each MAIN or SLAVE SGL PWO knows itself how to store/fetch its property data into/from database tables (or $_POST or $_SESSION arrays) and uses the same methods inherited from common SGL base classes (= parent classes) to do so.

I have meanwhile introduced and realized this SLAVE PWO and SLAVE SCHEMA approach in a quite general form in my web application framework. In this article I briefly want to discuss some unexpected consequences for HTML generator methods in a web or CMS context.

(X)HTML generator methods – where to place them?

When you design CMS-like web applications you always have to deal with template [TPL] structures and objects that fill the templates with reasonably formatted contents. The contents may be just text or images or links inserted into template placeholders. In more complicated cases (like e.g. the generation of maintenance masks with forms), however, you may have to generate complete HTML fragments which depend on property/field information given in your respective SCHEMA.

A basic design question is: Where do we place the generator methods? Should the SGL PWOs know how to generate their property contents or should rather a “Template Control Object” – a “TCO” – know what to do? I have always preferred the second approach. Reason:

TPL aspects may become very specific, so the methods may need to know something about the TPL structures – and this is nothing that I want to incorporate into the OO representation [PWO] of real world objects.

Over time I have developed a bunch of generator methods for all kinds of things required in customer projects. The methods are defined in base classes for Template Control Objects and/or in special purpose sub-classes injected into TCOs. A TCO knows about its type of TPL and works accordingly. (By the way: With PEAR ITX or Smarty you can realize a complete separation of (X)HTML code and PHP code.)

(X)HTML generator methods – which MAIN or SLAVE PWO and which MAIN/SLAVE SCHEMA are we dealing with?

In addition to some property/field identifiers a (X)HTML generator method has of course to know what SGL PWO and what SCHEMA it has to work with. This information can be fetched either by the TCO due to some rules or can be directly injected into the methods.

In the past I wanted to keep interfaces lean. In many applications the SGL PWO object was a classical singleton. So, it could relatively easily be received or identified by the central TCO. I did not see any reason to clutter TCO method interfaces with object references that were already known to their TCO object. So, my generator methods referred to and used the SGL PWO object and its SCHEMA via the "$this" operator:

function someTCOgenerator_method() {
    // .....
    // do something with $this->Sgl and $this->Schema
    // ....
}

However, in the light of a more general approach this appears to be a too simplistic idea.

If we regard a SLAVE SGL object as a relatively compact entity inside a PWO – as a SLAVE object with its own property and field information SCHEMA – then we see: for a generator method it behaves almost like an independent object different from the MAIN PWO. This situation is comparable to one where the generator method really would be requested to operate on instances of a completely different PWO class:

An HTML generator method needs to know the qualities of certain OO properties and associated database field definitions. In our approach with MAIN SGL PWOs comprising composite SLAVE SGL PWO objects, each SGL object knows exactly about its associated MAIN or SLAVE Schema object. To work correctly, the generator method must get access to this specific SCHEMA. In a reasonable application design this would also be valid for PWO objects representing other, i.e. different, types of RWOs.

A (X)HTML generator method can work properly as soon as it knows about

  • the SGL object,
  • the object property (identified by some name, index or other unique information) to operate on and generate HTML code for,
  • the SCHEMA describing the qualities of the object property and related database fields.

This would in our approach also be given for our SLAVE SGL objects or any PWO as soon as we inject it into our generator method.

Therefore, (X)HTML generator methods of TCOs should be programmed according to the following rules:

  • Do not assume that there is only one defined class of PWO SGL objects that the Template Control Object TCO needs to know about and needs to apply its generator methods to.
  • Instead, enable the (X)HTML generator methods of a TCO to work with any kind of PWO and its properties – as long as the PWO provides the appropriate SCHEMA information.
  • Inject the SGL [SLAVE] PWO and thereby also its associated [SLAVE] SCHEMA into each TCO generator method:
    function someTCOgenerator_method($SGL_ext, …..).
  • Do not refer to the TCO's knowledge about an SGL PWO by using the "$this" operator (like "$this->Sgl") inside a generator method of a TCO; refer all generator action to a local $SGL object reference that points to the injected (!) object $SGL_ext:
    $SGL = $SGL_ext.
    Also refer to a local $Schema which points to the $SGL->Schema of the injected $SGL object:
    $Schema = $SGL->Schema.
    (Remember: Objects are transferred by reference!)
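A compact sketch of these rules (all class and property names are hypothetical and heavily simplified compared to a real TCO/PWO design):

```php
<?php
// Hypothetical, heavily simplified classes to illustrate the injection rules
class Schema {
    public $field_def = [];              // property name => ['type' => ...]
}

class SglPwo {
    public $Schema;                      // MAIN or SLAVE SCHEMA of this object
    public $props = [];                  // property values
    function __construct(Schema $schema) { $this->Schema = $schema; }
}

class Tco {
    // Works with ANY injected SGL PWO - no reliance on $this->Sgl
    function gen_input($SGL_ext, string $prop): string {
        $SGL    = $SGL_ext;              // local reference to the injected object
        $Schema = $SGL->Schema;          // its SCHEMA travels with it
        $type   = $Schema->field_def[$prop]['type'] ?? 'text';
        $value  = htmlspecialchars((string)($SGL->props[$prop] ?? ''));
        return "<input type=\"$type\" name=\"$prop\" value=\"$value\" />";
    }
}

$schema = new Schema();
$schema->field_def['city'] = ['type' => 'text'];
$slave  = new SglPwo($schema);           // could be a MAIN or a SLAVE SGL PWO
$slave->props['city'] = 'München';

$tco = new Tco();
echo $tco->gen_input($slave, 'city');
// <input type="text" name="city" value="München" />
```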

These recipes give us the desired flexibility to deal with properties both of SLAVE objects and of objects of a different PWO class.

The injection is a small but structurally important difference in comparison to the database interaction methods directly incorporated in SGL objects or rather their base classes. Here the TCO (!) method must receive the required information – and we gain flexibility when we inject the SGL object into it (plus information identifying the property for which an HTML fragment has to be generated).

Had I respected the rules above some time ago, it would have saved me much time now: I had to adapt the interfaces of many of my generator methods in my TCO base classes.

Again, I found one experience confirmed:

  • In case of doubt do not hesitate to use loose object coupling via injection for general methods which could in principle be applied to other objects than the ones the object containing the method knows about at the time of your class design.
  • Use injection even if it may look strange when you need to do something like
    $this->someTCOgenerator_method($this->KnownObject);
    i.e., when you inject something the present object containing the method knows about already.

It will save time afterwards when iterator patterns over other objects have to be used and when you may access the (public) method from outside, too.

Iteration over SLAVE objects

Now, if our (X)HTML-generator methods are prepared for injection of SGL PWO objects, we have no more difficulties to generate HTML fragments required in templates for selected properties of MAIN and SLAVE PWO SGL objects:

We just have to iterate the usage of the generator method over the properties/fields of the MAIN SGL PWO as well as its SLAVE SGL PWOs. By getting the SGL object injected, the method also knows about the right SCHEMA to use and can provide the required detail information for the generation of related HTML code (e.g. for a form element).

Think about a situation in which we want to provide a form with input fields which a user can use to update data for all properties of a certain PWO. We just have to apply generator methods to create the required input field types according to the appropriate SCHEMA information. For this

  • we loop (iterate) over the MAIN and all its SLAVE objects,
  • identify all properties according to the MAIN or SLAVE SCHEMA information
  • and apply the generator method, injecting the relevant property identifier plus the MAIN/SLAVE SGL object in question.

Mission accomplished.
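The iteration itself could look like this minimal sketch (names hypothetical; the gen_field() function stands in for a full TCO generator method):

```php
<?php
// Hypothetical, minimal SGL PWO with composite SLAVE objects
class SglPwo {
    public $props  = [];                 // this object's own properties
    public $slaves = [];                 // SLAVE SGL PWOs
    function __construct(array $props) { $this->props = $props; }
}

// Stand-in for an injected TCO generator method
function gen_field(SglPwo $SGL, string $prop): string {
    return "<input name=\"$prop\" value=\"{$SGL->props[$prop]}\" />";
}

$main = new SglPwo(['name' => 'RWO-1']);
$main->slaves[] = new SglPwo(['street' => 'Main St']);
$main->slaves[] = new SglPwo(['phone'  => '12345']);

// Iterate over the MAIN object and all its SLAVEs, injecting each in turn
$html = '';
foreach (array_merge([$main], $main->slaves) as $sgl) {
    foreach (array_keys($sgl->props) as $prop) {
        $html .= gen_field($sgl, $prop) . "\n";
    }
}
echo $html;
```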

The principle of iteration over SLAVE objects is actually nothing new to us: We used it already when dealing with the database interaction methods. (It is a basic ingredient of a composite pattern).

If we only want to work on selected properties, then we need to know which of the properties is located in which SLAVE PWO and described in which SLAVE SCHEMA. To be able to do so, we should create and use an array

  • that collects information about which PWO property belongs to which of the PWO's SLAVE objects
  • and which is filled in the course of the construction process of a PWO.

Conclusion

When realizing a composite pattern in the form of SLAVE objects (with SLAVE Schemata) to deal with closely associated property groups of complex objects, you can apply existing base class methods and iterate over the SLAVE objects to accomplish complete transactions affecting all properties of a structured PWO. This principle can be extended to (X)HTML generator methods of Template Control Objects if these methods are prepared to receive the SLAVE SGL PWO objects by injection. If we only want to apply generator methods to a set of selected properties, we should use an array associating each PWO property with a SLAVE PWO SGL object.