Upgrade of a server host from Opensuse Leap 15.1 to 15.2 – II – smartd configuration, 3ware controllers, mdadm RAID

During the upgrade of one of my server systems from Opensuse Leap 15.1 to 15.2 I noticed that the smartd-daemon did not start successfully. Logs revealed that the upgrade itself was NOT the cause of this failure; it is nevertheless a good occasion to briefly point out how one can configure the smartd-daemon to keep an overview of the evolution of disk properties on a Leap 15.X system – even when the disks are an integral part of one or more Raid-arrays. “Overview” is also the reason why I use “rsyslog” in addition to a pure systemd journal log to track the SMART data.

A problem with a 3ware based Raid array

The main cause of the smartd-failure on the upgraded server was an active line with the command “DEVICESCAN” in Opensuse’s version of the configuration file “/etc/smartd.conf”. As the command’s name indicates, the system is scanned for disks to monitor. Unfortunately, smartd did not find any SMART enabled devices on my server.

Feb 05 17:01:55 MySRV smartd[14056]: Drive: DEVICESCAN, implied '-a' Directive on line 32 of file /etc/smartd.conf
Feb 05 17:01:55 MySRV smartd[14056]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Feb 05 17:01:55 MySRV smartd[14056]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...

Well, this finding was correct. All internal disks of the server were members of Raid arrays on elderly 3ware controllers. 3ware controllers are supported by the Linux kernel – in my case via a module “3w_9xxx”. During the installation of an Opensuse Leap system the driver module was automatically added to the configurations of GRUB, the initramfs and the list of modules loaded by the booted kernel. Upgrades have respected this configuration so far.

But with hardware RAID controllers the attached individual disks are NOT directly accessible to standard “smartd” directives or a plain “smartctl” command. You need special options. The HW-RAID-array is seen by the Linux system and smartd as a block device – without an inner structure. And the block device corresponding to the HW-controlled RAID-array has no SMART capabilities – which is, of course, very reasonable. Thus DEVICESCAN fails.
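A quick way to see the difference on such a system is a sketch like the following – device names are examples, assuming the 3ware array is exported as /dev/sda: a plain smartctl query of the exported RAID block device yields no SMART data, whereas addressing an individual disk behind the controller via the “-d 3ware,N” option works:

# Plain query of the block device exported by the 3ware controller -
# no SMART capabilities here
smartctl -i /dev/sda

# Query the first physical disk behind the controller instead
smartctl -i -d 3ware,0 /dev/twa0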

I wondered a bit why I had not seen the problem on the same system with Leap 15.1. Actually, this was my fault: Looking into a backup I saw in the last logs of the host during its time with Leap 15.1 that the problem had already existed before the upgrade – and was caused by an old, original “smartd.conf”-file from SuSE with which I had accidentally overwritten my own version of “smartd.conf”. Stupid me! The really concerning thing was that I should have noticed the consequences in the logs. But since I became an almost full-time employee I have been too lazy to work regularly with specific filter-options of systemd’s command “journalctl” – so I missed the relevant messages within the flood of messages in the journal. Time to reactivate rsyslog to get specific information into specific files – see below.

Anyway … It is reasonable that the failure of a standard DEVICESCAN should lead to a failure of the smartd service. The attentive admin thus gets a clear hint that his intention to watch disk health variables is not working as expected – even if the “smartd.conf” should contain other valid lines. The comments in “/etc/smartd.conf” clearly say:

# The word DEVICESCAN will cause any remaining lines in this configuration file to be ignored: 
# it tells smartd to scan for all ATA and SCSI devices. DEVICESCAN may be followed by any of the
# Directives listed below, which will be applied to all devices that are found.
# Most users should comment out DEVICESCAN and explicitly list the devices that they wish to monitor.

I changed the line breaks a bit to stress the last statement. The obvious and trivial solution in my case was to comment out the DEVICESCAN-line and activate some explicit statements for the individual disks in the 3ware array. To make things a bit more interesting, let us look at a different system MySYS which uses a combination of 3ware controlled hard-disk arrays and mdadm controlled SSD-arrays.

The server host discussed above does not require a particularly high disk throughput; storage capacity in TB is more important. Actually, I would never buy a 3ware or LSI controller again for a semi-professional Linux machine: too much money for too little performance. With Linux you are certainly better off with mdadm and SW-RAID on a modern multi-core processor. The general impact on the overall performance of an i7 or i9 is negligible in my experience – especially when you have a lot of fast RAM. In addition mdadm gives you much (!) more flexibility. You can, for example, build a set of multiple RAID-arrays working in parallel on one and the same stack of disks/SSDs by assigning different partitions to distinct arrays – which may even be of different types; see the sketch below. But this is another story …
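Just as an illustration of this kind of setup – with purely hypothetical device and array names – two independent arrays of different types built from partitions of the same four SSDs:

# Partitions sda1..sdd1 form a Raid-10 array ...
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# ... while partitions sda2..sdd2 form an independent Raid-5 array
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2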

Entries for disks attached to 3ware controllers and SSDs in mdadm-Raid-arrays in the “/etc/smartd.conf” file

In the system MySYS I use a configuration

  • of multiple mdadm RAID-arrays consisting of assigned SSD-partitions of a stack of 4 SSDs
  • and 4 conventional hard-disks attached to a 3ware controller, forming a RAID 10-array.

In addition the host contains a further stand alone SSD.

For disks attached to a 3ware-controller we need a special form of the directives in the “smartd.conf”-file. Actually, some examples for such directives are given in a 3ware-related commentary section of “/etc/smartd.conf”. You may find something directly suitable for your purposes there.

What about devices which are members of mdadm-controlled SW-Raid arrays? Well, the nice thing is that, in contrast to disks in HW-controlled Raid arrays, the SMART-daemon can access SSDs (or hard disks) in an mdadm-controlled SW-Raid-array directly! Without any special options! Actually, folks familiar with “mdadm” would have expected this for very basic reasons … But this is yet another story … We just accept the fact as one example of the power of Linux.
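You can check this yourself – again a sketch with example device names: list the array members and run a plain smartctl query against one of them; no “-d”-option for a special device type is required:

# Which partitions belong to which md array?
cat /proc/mdstat

# A member disk of a SW-Raid array answers a plain SMART query directly
smartctl -i /dev/sda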

Thus, for my special system MySYS I just disabled the line with “DEVICESCAN” and inserted the following lines into “/etc/smartd.conf”:

DEFAULT -d removable -s (S/../../7/03)
# Monitor SSDs contributing to different mdadm based SSD-Raid-arrays via (vertical) stacks of partitions 
/dev/sda -d sat -a -o off
/dev/sdb -d sat -a -o off
/dev/sdc -d sat -a -o off
/dev/sdd -d sat -a -o off

# Monitor additional stand alone SSD 
/dev/sde -d sat -a -o off

# Monitor HDs attached to 3ware controller with a Raid 10 configuration 
/dev/twa0 -d 3ware,0 -F samsung -a -o off
/dev/twa0 -d 3ware,1 -F samsung -a -o off
/dev/twa0 -d 3ware,2 -F samsung -a -o off
/dev/twa0 -d 3ware,3 -F samsung -a -o off

Disclaimer: Never copy the statements and later discussed smartctl-commands above without a thorough study of the literature and documentation on smart, smartctl and smartd. Check the preconditions of applying the directives and commands carefully with respect to your actual HW-/SW-configuration and the type of SSDs and hard disks used there! You have to find out about the correct settings for your system-configuration on your own.
Warning: The effects of the “-a” option and especially of the planned automatic disk tests initiated via the “DEFAULT”-statement may be different for SCSI or SATA-disks. Some options may even lead to data loss on older Samsung disks. I will not take any responsibility for any problems caused by applying the settings displayed above to YOUR systems. Get advice from a professional regarding RAID-configurations and smartctl ahead of any experiments.

Having said this, the expert sees that I plan short self-tests of the disks every Sunday at 3 AM. Such tests will temporarily reduce the performance of the affected disks and, as a consequence, also of the various RAID-configurations the disk or any of its partitions might be a part of. At least on my system such a test will, however, not render the defined RAID-arrays unusable.
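For reference, the schedule regex in the “-s” directive of the DEFAULT-line above is built as follows (cf. the man page of smartd.conf):

#   -s T/MM/DD/d/HH
#      T  - test type: S (short), L (long), C (conveyance), O (offline)
#      MM - month (01-12),  DD - day of month (01-31)
#      d  - day of week (1 = Monday, ..., 7 = Sunday)
#      HH - hour (00-23)
# "S/../../7/03" thus means: a short self-test every Sunday at 3 AM.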

I could test the continued usability of the arrays for a SATA-device and mdadm e.g. by using a suitable smartctl-command such as

smartctl -t short /dev/sdc

and watching the continuing operation of a related RAID-array by

mdadm -D /dev/mdxxx

The corresponding smartctl-command for a 3ware RAID-disk would e.g. be

smartctl -t short /dev/twa0 -d 3ware,3 -F samsung

As said above: Be careful with such tests on productive systems.

By the way: Only for the 3ware disks may you get information about the start of the test in a smartd-related log-file (see below). If no errors are found, the logs will NOT show any special message afterwards.
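If you want positive confirmation of a completed test, you can query the self-test log of a device directly – adapt device names and options to your configuration:

# Self-test history of one of the SATA SSDs
smartctl -l selftest /dev/sdc

# Self-test history of a disk behind the 3ware controller
smartctl -l selftest -d 3ware,3 /dev/twa0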

At least on an Opensuse system the other settings and options for the various disks are discussed and explained in the commentary sections of the file:

#   -d TYPE Set the device type: ata, scsi, marvell, removable, 3ware,N, hpt,L/M/N
#   -o VAL  Enable/disable automatic offline tests (on/off)
#   -H      Monitor SMART Health Status, report if failed
#   -l TYPE Monitor SMART log.  Type is one of: error, selftest
#   -f      Monitor for failure of any 'Usage' Attributes
#   -p      Report changes in 'Prefailure' Normalized Attributes
#   -u      Report changes in 'Usage' Normalized Attributes
#   -t      Equivalent to -p and -u Directives
#   -a      Default: equivalent to -H -f -t -l error -l selftest -C 197 -U 198
#   -F TYPE Use firmware bug workaround. Type is one of: none, samsung


# Monitor 4 ATA disks connected to a 3ware 6/7/8000 controller which uses
# the 3w-xxxx driver. Start long self-tests Sundays between 1-2, 2-3, 3-4, 
# and 4-5 am.
# NOTE: starting with the Linux 2.6 kernel series, the /dev/sdX interface
# is DEPRECATED.  Use the /dev/tweN character device interface instead.
# For example /dev/twe0, /dev/twe1, and so on.
#/dev/sdc -d 3ware,0 -a -s L/../../7/01
#/dev/sdc -d 3ware,1 -a -s L/../../7/02
#/dev/sdc -d 3ware,2 -a -s L/../../7/03
#/dev/sdc -d 3ware,3 -a -s L/../../7/04

On my system (with a 3ware controller and mdadm Raid-arrays) the smartd daemon started without any problems afterwards.
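For completeness – the standard commands to restart the daemon and verify its status after changes to “smartd.conf”; the “onecheck” run is helpful when debugging a configuration that refuses to start:

systemctl restart smartd
systemctl status smartd

# Run all checks once in the foreground and exit - useful for debugging
smartd -q onecheck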

Configure rsyslog to write smartd-information into a distinct separate log-file

Old-fashioned as I am, I use rsyslog alongside systemd’s journal-logging. rsyslog can easily be configured to write log entries for selected daemons/processes into predefined separate log-files. Such files can directly be opened as they are plain text-files. Let us configure rsyslog for the smartd daemon:

  • Step 1: We create an empty file “/var/log/smartd.log” (as root) on the host in question by the command

    “touch /var/log/smartd.log”

  • Step 2: Change rights by “chmod 640 /var/log/smartd.log”
  • Step 3: Add the following line to the file “/etc/rsyslog.conf” (close to the end)

    :programname, isequal, "smartd"    /var/log/smartd.log

  • Step 4: Restart the rsyslog-daemon by “systemctl restart rsyslog”

You surely have noticed that this is a configuration for local logging. But it is easy to adapt the settings to logging on remote systems.
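A minimal sketch for such an adaption – forwarding the smartd messages to a central log host; the host name is, of course, just an example (“@@” means TCP, a single “@” UDP):

# In /etc/rsyslog.conf of the monitored host:
:programname, isequal, "smartd"    @@loghost.example.net:514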

What do smartd-logs look like?

Depending on your hardware you may get specific messages for your devices at the start of the smartd-daemon. Depending on the execution of scheduled self-tests you may get some messages about the start of such tests. After some time you should (hopefully) see messages about successfully completed short or long self-tests – depending on your settings. In my case:

2021-02-11T14:29:43.511950+01:00 MySYS smartd[1723]: Device: /dev/sda [SAT], previous self-test completed without error
2021-02-11T14:29:44.026256+01:00 MySYS smartd[1723]: Device: /dev/twa0 [3ware_disk_00], previous self-test completed without error

In addition you will get continuous information about changes of variables describing the health-status of your disks. In my case only the variation of the temperature due to changing airflow showed up:

2021-02-11T14:59:43.340341+01:00 MySYS smartd[1723]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 76 to 78
2021-02-11T14:59:43.350339+01:00 MySYS smartd[1723]: Device: /dev/sdb [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 77 to 78
2021-02-11T14:59:43.353660+01:00 MySYS smartd[1723]: Device: /dev/sdc [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 77 to 79
2021-02-11T14:59:43.357007+01:00 MySYS smartd[1723]: Device: /dev/sdd [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 77 to 79
2021-02-11T14:59:43.360136+01:00 MySYS smartd[1723]: Device: /dev/sde [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 77 to 78


If for some reason the “smartd”-daemon and the related service do not start on a Linux host with Opensuse Leap 15.2, it is easy to get them running again – even when your disk devices are members of RAID arrays. The configuration file “/etc/smartd.conf” contains a lot of hints on how to set up reasonable directives regarding various HW-RAID-controllers supported by the kernel. A really good thing is that we do not need any special commands or options regarding disk devices whose partitions are members of mdadm-controlled SW-RAID arrays.




Performance of Linux md raid-10 arrays – negative impact of the intel_pstate CPU governor for HWP?

Last November I performed some tests with “fio” on Raid arrays with SSDs (4 Samsung EVO 850). The test system ran on Opensuse Leap 42.1 with a Linux kernel of version 4.1. It had an onboard Intel Sunrise Point controller and an i7 6700k CPU.

For md-raid arrays of type Raid10, i.e. for Linux SW Raid10 arrays created via the mdadm command, I was quite pleased with the results – both for N2 and F2 layouts and especially for situations where the Read/Write load was created by several jobs running in parallel. Depending on the size of the data packets you may, e.g., well reach a Random Read [RR] performance between 1.0 GByte/sec and 1.5 GByte/sec for packet sizes ≥ 512 KB and > 1024 KB, respectively. Even for one job only the RR-performance for such packet sizes lay between 790 MByte/sec and 950 MByte/sec – i.e. well above the performance of a single SSD.

Significant drop in md-raid performance on Opensuse Leap 42.2 with kernel 4.4

Then I upgraded to Opensuse Leap 42.2 with kernel 4.4. Same system, same HW, same controller, same CPU, same SSDs and raid-10 setup.
I repeated some of my raid-array tests. Unfortunately, I then experienced a significant drop in performance – up to 25% depending on the packet size. With absolute differences in the range of 60 MByte/sec to over 200 Mbyte/sec, again depending on the chosen data packet sizes.

In addition I saw irregular ups and downs in the performance (a large spread) for repeated tests with the same fio parameters. Until some days ago I had never found a convincing reason for this strange performance variation of the md-Raid arrays across different kernels and OS versions.

Impact of the CPU governor ?!

Three days ago I read several articles about CPU governors. On my system the “intel_pstate” driver is relevant for CPU power saving. It offers exactly two active governor modes: “powersave” and “performance” (see e.g.: https://www.kernel.org/doc/html/latest/admin-guide/pm/intel_pstate.html).

The chosen CPU governor standard for Opensuse Leap 42.2 is “powersave”.
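Checking and switching the governor is straightforward via sysfs and the cpupower tool (switching requires root):

# Show the governor currently active on core 0
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Switch all cores/threads to "performance" ...
cpupower frequency-set -g performance
# ... and back to "powersave"
cpupower frequency-set -g powersave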

Just for fun I repeated some simple tests for the raid array again – one time with “powersave” active on all CPU cores/threads and a second time with “performance” active on all cores/threads. The discrepancy was striking:

Test setup

HW: CPU i7 6700K, Asus Z170 Extreme 7 with Intel Sunrise Point-H Sata3 controller

Raid-Array: md-raid-10, Layout: N2, Chunk Size: 32k, bitmap: none

SSDs: 4 Samsung Evo 850

Fio parameters:

; bs=1024k
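Beyond the bs setting, a complete fio job file for such a random read test might look like the following sketch – all values apart from bs are illustrative assumptions, not the exact parameters of my original runs:

; illustrative fio job file - only bs matches the documented test cases
[global]
ioengine=libaio     ; asynchronous I/O
direct=1            ; bypass the page cache
rw=randread         ; random read; a write test (rw=randwrite) would
                    ; DESTROY data when run against a raw device!
bs=1024k            ; packet size of test case 2 (512k for test case 1)
size=2000m          ; total amount of data per job
runtime=60

[md-raid-test]
filename=/dev/md0   ; example name of the md-raid-10 device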

Test results:

Fio test case 1, bs=512k, Random Read [RR] / Random Write [RW]:

Leap 42.1, Kernel 4.1:
RR: 780 MByte/sec – RW: 754 MByte/sec
RR spread around 35 MByte/sec

Leap 42.2, Kernel 4.4, CPU governor: powersave:
RR: 669 MByte/sec – RW: 605 MByte/sec
RR spread around 50 MByte/sec

Leap 42.2, Kernel 4.4, CPU governor: performance:
RR: 780 MByte/sec – RW: 750 MByte/sec
RR spread around 35 MByte/sec

Fio test case 2, bs=1024k, Random Read [RR] / Random Write [RW]:

Leap 42.1, Kernel 4.1:
RR: 860 MByte/sec – RW: 800 MByte/sec
RR spread around 30 MByte/sec

Leap 42.2, Kernel 4.4, CPU governor: powersave:
RR: 735 MByte/sec – RW: 660 MByte/sec
RR spread > 50 MByte/sec

Leap 42.2, Kernel 4.4, CPU governor: performance:
RR: 877 MByte/sec – RW: 792 MByte/sec
RR spread around 25 MByte/sec


The differences are so significant that one begins to worry. The data seem to indicate two points:

  • It seems that a high CPU frequency is required for optimum performance of md-raid arrays with SSDs.
  • It seems that Leap 42.1 with kernel 4.1 reacts differently to load requests from fio test runs – with resulting CPU frequencies closer to the maximum – than Leap 42.2 with kernel 4.4 in powersave mode does. This could be a matter of different CPU governors or of major changes in the drivers …

With md-raid arrays active, the CPU governor should react very directly to a high I/O load and a related short increase of CPU consumption. However, the md-raid modules seem to be so well programmed that the rise in CPU load stays on average below any thresholds that would make the “powersave” governor react adequately – at least on Leap 42.2 with kernel 4.4, for whatever reasons. Maybe the time structure of the I/O and CPU load is not analyzed precisely enough, or there is in general no reaction to I/O. Or there is a bug in the related version of intel_pstate …

Anyway, I never saw a rise of the CPU frequency above 2300 MHz during the tests on Leap 42.2 – and short spikes to half of the maximum frequency are quite normal on a desktop system with some applications active. The top level of 4200 MHz, however, was never reached. I also tested with 2000 MB to be read/written in total – then we are already talking about time intervals of 3 to 4 secs for the I/O load to occur.
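To watch what the cores actually do during a fio run, simple standard tools suffice (a sketch):

# Core frequencies, refreshed every second, during a fio run
watch -n1 'grep "cpu MHz" /proc/cpuinfo'

# Alternative: frequency and idle-state statistics per core
cpupower monitor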


When reading a bit more, I got the impression that the admin’s possibilities to influence the behavior of the “intel_pstate” governors are very limited. So, some questions arise directly:

  1. What is the major difference between Opensuse Leap 42.1 with kernel 4.1 and Opensuse Leap 42.2 with kernel 4.4? Does Leap 42.1 use the same CPU governor as Leap 42.2?
  2. Is the use of Intel’s standard governor mode “powersave” in general a bad choice on systems with a md-raid for SSDs?
  3. Are there any kernel or intel_pstate parameters that would change the md-raid-performance for the better again?

If somebody knows the answers, please contact me. The whole topic is also discussed here: https://bugzilla.kernel.org/show_bug.cgi?id=191881. At least I hope so …

Addendum, 19.06.2017:
Answer to question 1 – I checked which CPU governor runs on Opensuse Leap 42.1:
It is indeed the good old (ACPI-based) “ondemand” governor – and not the “new” intel_pstate-based “powersave” governor for Intel’s HWP. So the findings described above raise some questions regarding the behavior of the intel_pstate-based “powersave” governor vs. comparable older CPU_FREQ solutions like the “ondemand” governor: The intel_pstate “powersave” governor does not seem to support md-raid as well as the old “ondemand” governor did!
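The check itself is simple via sysfs; on my systems Leap 42.1 showed the ondemand governor (presumably on top of the acpi-cpufreq driver), Leap 42.2 the intel_pstate/powersave combination:

# Which frequency scaling driver is active?
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# Which governor runs on top of it?
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor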

I do not want to speculate too much. It is hard to measure details of the CPU frequency changes on small timescales, and it is difficult to separate the CPU consumption of fio from that of the md-modules (which probably work multithreaded). But it might be that the “ondemand” governor provides on average higher CPU frequencies to the involved processes over the timescale of a fio run.

Anyway, I would recommend all system admins who use SSD-Raid-arrays under the control of mdadm to check and test for a possible performance dependency of their Raid installation on CPU governors!