Upgrade of a server host from Opensuse Leap 15.1 to 15.2 – II – smartd configuration, 3ware controllers, mdadm RAID

During the upgrade of one of my server systems from Opensuse Leap 15.1 to 15.2 I noticed that the smartd daemon did not start successfully. The logs revealed that the upgrade itself was NOT the cause of this failure; it is nevertheless a good occasion to briefly point out how one can configure the smartd daemon to keep track of the evolution of disk properties on a Leap 15.X system – even when the disks are an integral part of one or more RAID arrays. Keeping an overview is also the reason why I use “rsyslog” in addition to the pure systemd journal to track the SMART data.

A problem with a 3ware based Raid array

The main cause of the smartd-failure on the upgraded server was an active line with the command “DEVICESCAN” in Opensuse’s version of the configuration file “/etc/smartd.conf”. As the command’s name indicates, the system is scanned for disks to monitor. Unfortunately, smartd did not find any SMART enabled devices on my server.

Feb 05 17:01:55 MySRV smartd[14056]: Drive: DEVICESCAN, implied '-a' Directive on line 32 of file /etc/smartd.conf
Feb 05 17:01:55 MySRV smartd[14056]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Feb 05 17:01:55 MySRV smartd[14056]: Unable to monitor any SMART enabled devices. Try debug (-d) option. Exiting...
...

Well, this finding was correct. All internal disks of the server were members of RAID arrays on elderly 3ware controllers. 3ware controllers are supported by the Linux kernel – in my case via the module “3w_9xxx”. During the installation of an Opensuse Leap system the driver module was automatically added to the configuration of GRUB, to the initramfs and to the list of modules loaded by the booted kernel. Upgrades have respected this configuration so far.

But with hardware RAID controllers the attached individual disks are NOT directly accessible to standard “smartd” directives or a plain “smartctl” command. You need special options. The HW RAID array is seen by the Linux system and by smartd as a block device – without an inner structure. And the block device corresponding to the HW-controlled RAID array has no SMART capabilities – which is, of course, very reasonable. Thus DEVICESCAN fails.

I wondered a bit why I had not seen the problem on the same system with Leap 15.1. Actually, this was my fault: Looking into a backup of the host’s last logs from its time with Leap 15.1 I saw that the problem had existed already before the upgrade – it was caused by an old, original “smartd.conf” file from SuSE with which I had accidentally overwritten my own version of “smartd.conf”. Stupid me! The really concerning thing was that I should have noticed the consequences in some logs. But since I became an almost full-time employee I have been too lazy to work regularly with specific filter options of systemd’s command “journalctl” – so I missed the relevant messages within the flood of messages in the journal. Time to reactivate rsyslog to get specific information into specific files – see below.
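
Just for reference – and as a reminder to myself – a few journalctl invocations which would have brought the smartd failure to the surface quickly. I assume the standard unit and identifier names of a stock Leap installation; adapt them if your setup differs:

journalctl -b -u smartd.service        # all messages of the smartd unit since the last boot
journalctl -b -p err                   # only messages of priority "err" or worse since the last boot
journalctl SYSLOG_IDENTIFIER=smartd    # all messages tagged with the syslog identifier "smartd"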

Anyway … It is reasonable that the failure of a standard DEVICESCAN leads to a failure of the smartd service. The attentive admin thus gets a clear hint that his intention to watch disk health variables is not working as expected – even if “smartd.conf” contains other valid lines. The comments in “/etc/smartd.conf” clearly say:

# The word DEVICESCAN will cause any remaining lines in this configuration file to be ignored: 
# it tells smartd to scan for all ATA and SCSI devices. DEVICESCAN may be followed by any of the
# Directives listed below, which will be applied to all devices that are found.
#  
# Most users should comment out DEVICESCAN and explicitly list the devices that they wish to monitor.

I changed the line breaks a bit to stress the last statement. The obvious and trivial solution in my case was to comment out the DEVICESCAN line and to activate explicit statements for the individual disks in the 3ware array. To make things a bit more interesting, let us look at a different system, MySYS, which uses a combination of a 3ware-controlled hard-disk array and mdadm-controlled SSD arrays.

Off-topic:
The server host discussed above does not require a particularly high disk throughput. Storage capacity in TB is more important. Actually, I would never buy a 3ware or LSI controller again for a semi-professional Linux machine: too much money for too little performance. With Linux you are certainly better off with mdadm and SW RAID on a modern multi-core processor. In my experience the general impact of SW RAID on the overall performance of an i7 or i9 is negligible, especially when you have a lot of fast RAM. In addition mdadm gives you much (!) more flexibility. You can, for example, build a set of multiple RAID arrays working in parallel on one and the same stack of disks/SSDs by assigning different partitions to distinct arrays – which may even be of different types. But this is another story …

Entries for disks attached to 3ware controllers and SSDs in mdadm-Raid-arrays in the “/etc/smartd.conf” file

In the system MySYS I use a configuration

  • of multiple mdadm RAID-arrays consisting of assigned SSD-partitions of a stack of 4 SSDs
  • and 4 conventional hard-disks attached to a 3ware controller, forming a RAID 10-array.

In addition the host contains a further stand alone SSD.

For disks attached to a 3ware-controller we need a special form of the directives in the “smartd.conf”-file. Actually, some examples for such directives are given in a 3ware-related commentary section of “/etc/smartd.conf”. You may find something directly suitable for your purposes there.
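
Before writing directives into “smartd.conf” you can check interactively whether smartctl actually reaches the individual disks behind the controller. A minimal check – the controller device and port number match my configuration below and have to be adapted to your hardware:

smartctl -i -d 3ware,0 /dev/twa0       # identity and SMART support of the disk on port 0 of the first 3ware controller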

What about devices which are members of mdadm-controlled SW RAID arrays? Well, the nice thing is that, in contrast to HW-controlled RAID disks, the SMART daemon can access SSDs (or hard disks) in an mdadm-controlled SW RAID array directly! Without any special options! Actually, folks familiar with “mdadm” would have expected this for very basic reasons … But this is yet another story … We just accept the fact as one example of the power of Linux.
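
A quick way to convince yourself of this – device names as on my system MySYS, adapt them to yours:

cat /proc/mdstat                       # shows the mdadm arrays and their member partitions
smartctl -i -d sat /dev/sda            # the underlying SSD answers directly, no RAID-specific option needed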

Thus, for my special system MySYS I just disabled the line with “DEVICESCAN” and inserted the following lines into “/etc/smartd.conf”:

...
DEFAULT -d removable -s (S/../../7/03)
...
#DEVICESCAN
...
# Monitor SSDs contributing to different mdadm based SSD-Raid-arrays via (vertical) stacks of partitions 
/dev/sda -d sat -a -o off
/dev/sdb -d sat -a -o off
/dev/sdc -d sat -a -o off
/dev/sdd -d sat -a -o off

# Monitor additional stand alone SSD 
/dev/sde -d sat -a -o off

# Monitor HDs attached to 3ware controller with a Raid 10 configuration 
/dev/twa0 -d 3ware,0 -F samsung -a -o off
/dev/twa0 -d 3ware,1 -F samsung -a -o off
/dev/twa0 -d 3ware,2 -F samsung -a -o off
/dev/twa0 -d 3ware,3 -F samsung -a -o off
...

Disclaimer: Never copy the statements above or the smartctl commands discussed later without a thorough study of the literature and documentation on SMART, smartctl and smartd. Check the preconditions for applying the directives and commands carefully with respect to your actual HW/SW configuration and the type of SSDs and hard disks used there! You have to find out about the correct settings for your system configuration on your own.
Warning: The effects of the “-a” option and especially of the scheduled automatic disk tests initiated via the “DEFAULT” statement may be different for SCSI or SATA disks. Some options may even lead to data loss on older Samsung disks. I will not take any responsibility for any problems caused by applying the settings displayed above to YOUR systems. Get advice from a professional regarding RAID configurations and smartctl ahead of any experiments.

Having said this, the expert sees that I plan short self-tests of the disks every Sunday at 3 AM. Such tests will temporarily reduce the performance of the affected disks and, as a consequence, also of the various RAID configurations the disk or any of its partitions might be a part of. At least on my system such a test will, however, not render the defined RAID arrays unusable.
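
For readers less familiar with the “-s” directive: according to “man 5 smartd.conf” the schedule expression used in my DEFAULT line decodes as follows:

#   -s (S/../../7/03)   =>   T/MM/DD/d/HH
#      T  = S   : short self-test (L = long, C = conveyance, O = offline)
#      MM = ..  : any month
#      DD = ..  : any day of the month
#      d  = 7   : day of the week, 1 = Monday ... 7 = Sunday
#      HH = 03  : in the hour starting at 3 AM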

I could test this for a SATA device in an mdadm array e.g. by issuing a suitable smartctl command such as

smartctl -t short /dev/sdc

and watching the continuing operation of a related RAID-array by

mdadm -D /dev/mdxxx

The corresponding smartctl command for a disk behind the 3ware controller would e.g. be

smartctl -d 3ware,3 -F samsung -t short /dev/twa0

As said above: Be careful with such tests on productive systems.

By the way: Only for the 3ware disks may you get information about the start of the test in a smartd-related log file (see below). If no errors are found, the logs will NOT show any special message.
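
If you do not want to rely on smartd log messages, you can query the self-test history of a device explicitly – again, device names as in my examples:

smartctl -l selftest /dev/sdc                      # self-test log of an SSD on the SATA bus
smartctl -l selftest -d 3ware,3 /dev/twa0          # self-test log of disk 3 behind the 3ware controller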

At least on an Opensuse system the other settings and options for the various disks are discussed and explained in the commentary sections of the file:

#   -d TYPE Set the device type: ata, scsi, marvell, removable, 3ware,N, hpt,L/M/N
#   -o VAL  Enable/disable automatic offline tests (on/off)
#   -H      Monitor SMART Health Status, report if failed
#   -l TYPE Monitor SMART log.  Type is one of: error, selftest
#   -f      Monitor for failure of any 'Usage' Attributes
#   -p      Report changes in 'Prefailure' Normalized Attributes
#   -u      Report changes in 'Usage' Normalized Attributes
#   -t      Equivalent to -p and -u Directives
#   -a      Default: equivalent to -H -f -t -l error -l selftest -C 197 -U 198
#   -F TYPE Use firmware bug workaround. Type is one of: none, samsung

and

# Monitor 4 ATA disks connected to a 3ware 6/7/8000 controller which uses
# the 3w-xxxx driver. Start long self-tests Sundays between 1-2, 2-3, 3-4, 
# and 4-5 am.
# NOTE: starting with the Linux 2.6 kernel series, the /dev/sdX interface
# is DEPRECATED.  Use the /dev/tweN character device interface instead.
# For example /dev/twe0, /dev/twe1, and so on.
#/dev/sdc -d 3ware,0 -a -s L/../../7/01
#/dev/sdc -d 3ware,1 -a -s L/../../7/02
#/dev/sdc -d 3ware,2 -a -s L/../../7/03
#/dev/sdc -d 3ware,3 -a -s L/../../7/04

On my system (with one 3ware controller and mdadm RAID arrays) the smartd daemon started without any problems afterwards.

Configure rsyslog to write smartd-information into a distinct separate log-file

Old fashioned as I am, I use rsyslog alongside systemd’s journal logging. rsyslog can easily be configured to write log entries of selected daemons/processes into predefined separate log files. Such files can be opened directly as they are plain text files. Let us configure rsyslog for the smartd daemon:

  • Step 1: We create an empty file “/var/log/smartd.log” (as root) on the host in question by the command

    “touch /var/log/smartd.log”

  • Step 2: Change rights by “chmod 640 /var/log/smartd.log”
  • Step 3: Add the following line to the file “/etc/rsyslog.conf” (close to the end)

    :programname, isequal, "smartd"    /var/log/smartd.log

  • Step 4: Restart the rsyslog-daemon by “systemctl restart rsyslog”

You surely have noticed that this is for local logging. But it is easy to adapt the settings to logging on remote systems.
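
As a sketch only – the filter syntax stays the same, only the target changes from a local file to a remote destination. “loghost.example.net” is a placeholder for your central log server, and rsyslog reception must of course be configured on that host separately:

# Forward smartd messages via TCP to a central log host (a single "@" would mean UDP)
:programname, isequal, "smartd" @@loghost.example.net:514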

What do smartd-logs look like?

Depending on your hardware you may get specific messages for your devices at the start of the smartd daemon. Depending on the schedule of your self-tests you may get messages about the start of such a test. After some time you should (hopefully) also see messages about successfully completed short or long self-tests – depending on your settings. In my case:

2021-02-11T14:29:43.511950+01:00 MySYS smartd[1723]: Device: /dev/sda [SAT], previous self-test completed without error
...
2021-02-11T14:29:44.026256+01:00 MySYS smartd[1723]: Device: /dev/twa0 [3ware_disk_00], previous self-test completed without error
...

In addition you will get continuous information about changes of variables describing the health status of your disks. In my case only the variation of the temperature due to changing airflow is shown:

...
2021-02-11T14:59:43.340341+01:00 MySYS smartd[1723]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 76 to 78
2021-02-11T14:59:43.350339+01:00 MySYS smartd[1723]: Device: /dev/sdb [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 77 to 78
2021-02-11T14:59:43.353660+01:00 MySYS smartd[1723]: Device: /dev/sdc [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 77 to 79
2021-02-11T14:59:43.357007+01:00 MySYS smartd[1723]: Device: /dev/sdd [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 77 to 79
2021-02-11T14:59:43.360136+01:00 MySYS smartd[1723]: Device: /dev/sde [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 77 to 78
...

Conclusion

If for some reason the “smartd” daemon and the related service do not start on a Linux host with Opensuse Leap 15.2, it is easy to get it running again – even when your disk devices are members of RAID arrays. The configuration file “/etc/smartd.conf” contains a lot of hints on how to set up reasonable directives for the various HW RAID controllers supported by the kernel. A really good thing is that we do not need any special commands or options for disk devices whose partitions are members of mdadm-controlled SW RAID arrays.

Links

https://linux.die.net/man/5/smartd.conf
https://linux.die.net/man/8/smartd

https://www.linux.com/topic/desktop/monitor-sata-and-ssd-health-smart/
https://linuxconfig.org/introduction-to-the-systemd-journal
https://www.debugpoint.com/2020/12/systemd-journalctl/

Opensuse, KDE Plasma, X11, Nvidia – stop video and screen tearing

In these times of Corona, home office and increased Internet usage some of us Linux guys may experience an old phenomenon: screen and video tearing. In my case it happened with an Nvidia card and with X11 (Wayland does not yet work on my Opensuse Leap 15.1 – I am too lazy to investigate why). I had ignored the tearing for some months – but now it really annoyed me. I had seen tearing some years ago already; at that point in time activating triple buffering helped. But not these days …

Where did I see the tearing?

I observed tearing effects

  • when moving “wobbling windows” (one of KDE’s desktop effects) across the screen – strangely enough when moving them slowly,
  • when watching TV and video streams in browsers (independent of FF, Opera or Chromium) – mostly when major parts of the video changed quickly.

Not much, not always – but enough to find it annoying. So, I invested some time – and got rid of it.

Driver and contents of the xorg.conf file

Driver: Latest Nvidia driver from Opensuse’s NVidia Repository: nvidia-glG05, x11-video-nvidiaG05.

I have three screens attached to my NVidia card (GTX 960); two of them are of the same type, but one has a lower resolution than the others. The screens are configured to work together as a super wide screen via the Xinerama setting in the xorg configuration file. Below, you find the contents of the file “/etc/X11/xorg.conf” with details about the screen configuration and modes.

xorg.conf

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings:  version 450.80.02


Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "InputDevice"

    # generated from data in "/etc/sysconfig/mouse"
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "IMPS/2"
    Option         "Device" "/dev/input/mice"
    Option         "Emulate3Buttons" "yes"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"

    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "DELL U2515H"
    HorizSync       30.0 - 113.0
    VertRefresh     56.0 - 86.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 960"
EndSection

Section "Screen"

    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "DFP-2"
    Option         "ForceFullCompositionPipeline"  "on"
#    Option         "ForceCompositionPipeline"  "on"
    Option         "metamodes" "DP-4: nvidia-auto-select +0+0, DP-0: nvidia-auto-select +2560+0, DVI-I-1: nvidia-auto-select +5120+0; DP-4: nvidia-auto-select +2560+0, DP-0: nvidia-auto-select +0+0, DVI-I-1: nvidia-auto-select +5120+0; DP-4: nvidia-auto-select +2560+0, DP-0: nvidia-auto-select +0+0, DVI-I-1: 1920x1080 +5120+0; DP-4: nvidia-auto-select +2560+0, DP-0: nvidia-auto-select +0+0, DVI-I-1: 1680x1050 +5120+0; DP-4: nvidia-auto-select 
+2560+0, DP-0: nvidia-auto-select +0+0, DVI-I-1: 1600x1200 +5120+0; DP-4: nvidia-auto-select +2560+0, DP-0: nvidia-auto-select +0+0, DVI-I-1: 1440x900 +5120+0; DP-4: nvidia-auto-select +2560+0, DP-0: nvidia-auto-select +0+0, DVI-I-1: 1280x1024 +5120+0; DP-4: nvidia-auto-select +2560+0, DP-0: nvidia-auto-select +0+0, DVI-I-1: 1280x960 +5120+0"
    Option         "SLI" "Off"
    Option         "TripleBuffer" "True"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection


The most important statement regarding the suppression of tearing is

    Option         "ForceFullCompositionPipeline"  "on"

Alternatively,

    Option         "ForceCompositionPipeline"  "on"

seems to work equally well. Use the latter if your graphics reacts a bit sluggishly with the full composition pipeline enabled.

We find more information about these options in the “nvidia-settings” application:

When you move your mouse over the option for “ForceCompositionPipeline” and “ForceFullCompositionPipeline”, you get

“The Nvidia driver can use a composition pipeline to apply X screen transformations and rotations. “ForceCompositionPipeline” can be used to force the use of this pipeline, even when no transformations or rotations are applied to the screen. This option is implicitly set by ForceFullCompositionPipeline.”

and, respectively,

“This option implicitly enables “ForceCompositionPipeline” and additionally makes use of the composition pipeline to apply ViewPortOut scaling.”

Important: If you want to test the setting via nvidia-settings, you have to activate the option for all three screens!

When I first tested “ForceCompositionPipeline” I just set it on the “nvidia-settings” page for the first of my three screens, wrongly assuming that this setting would be applied globally. However, the tearing did not disappear. After some time I realized that it still happened, predominantly on two screens. I even suspected a different quality of the DisplayPort cables to my screens to be the cause of the tearing. Wrong … ForceCompositionPipeline had been applied to one screen only.

So, switch to the other screens by using the first combo box on the “nvidia-settings” page and set “ForceCompositionPipeline” for all screens. Do this before you finally save the settings to an “xorg.conf” file (as root). Your resulting xorg.conf file may look a bit different; the CompositionPipeline settings might be included as an attribute of the metamode settings – and not in the form of a separate option line as shown above.
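
If you want to try the effect without writing an xorg.conf first, the option can – as far as I know – also be set at runtime as a metamode attribute via the nvidia-settings command line. This is only a sketch: the connector names and offsets are taken from the first metamode of my configuration and must match yours, and the setting is not persistent across X restarts:

nvidia-settings --assign CurrentMetaMode="DP-4: nvidia-auto-select +0+0 {ForceFullCompositionPipeline=On}, DP-0: nvidia-auto-select +2560+0 {ForceFullCompositionPipeline=On}, DVI-I-1: nvidia-auto-select +5120+0 {ForceFullCompositionPipeline=On}"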

Regarding the XVideo and OpenGL settings you should activate syncing to VBlank.

KDE Plasma settings

KDE Plasma settings for the screens should be consistent with the “nvidia-settings”. You use KDE’s “system-settings” >> “Display and Monitor” >> “Displays” and “Compositor”.

The combination of all the settings discussed above worked in my case – the tearing disappeared for videos in browsers, in video applications as well as on the Xinerama KDE Plasma screen in general.

Conclusion

It is easy to suppress video and screen tearing on an Opensuse Leap system with KDE Plasma and an Nvidia graphics card. The most important point is to activate “ForceCompositionPipeline” on all individual screens via “nvidia-settings” or to activate this option globally for the Xinerama screen of a multi-monitor configuration.

Bumblebee for Optimus systems on Opensuse 15.1

Recently, I upgraded a laptop from Opensuse Leap 15.0 to Leap 15.1. I successfully followed the upgrade procedure described by the Linux Kamarada in

kamarada: how-to-upgrade-from-opensuse-leap-150-to-151/ .

The only problems I got had to do with the Optimus-design of my old laptop. When I followed my own description how to install and use Bumblebee as described in

Installation Opensuse Leap 15 auf Laptop – Grafik Probleme, Optimus

I always got an error message when trying to load the Nvidia kernel module:

mytux:~ # sudo modprobe nvidia

modprobe: ERROR: could not insert 'nvidia': No such device

The reason was given by the command “dmesg”; the Nvidia device was no longer available on the PCI bus:

NVRM: The NVIDIA GPU 0000:01:00.0
               NVRM: (PCI ID: 10de:134d) installed in this system has
               NVRM: fallen off the bus and is not responding to commands.
[    3.312435] nvidia: probe of 0000:01:00.0 failed with error -1

The bbswitch-module could, however, be loaded without any problems.

Workaround

A workaround is described here:

After having started KDE or Gnome issue the following commands as root in a terminal:

mytux:~ # echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove
mytux:~ # echo 1 > /sys/bus/pci/devices/0000:00:02.0/rescan

Afterwards my Nvidia card (640M) was available on the bus again – and e.g. “primusrun glxgears” worked.

So, something was failing in the interplay of starting the dkms.service and the bumblebeed.service, switching off the Nvidia card via the bbswitch module during system startup and the later loading of the nvidia module. I suspected the bbswitch module to be the cause …

Solution

In my case I could solve the problem by installing the RPM packages

bumblebee, dkms, bbswitch, bbswitch-kmp-default

from the standard Update repository of Opensuse Leap 15.1

https://download.opensuse.org/update/leap/15.1/oss/

instead of installing them from the Bumblebee-repository

https://download.opensuse.org/repositories/X11:/Bumblebee/openSUSE_Leap_15.1

Otherwise I followed the instructions in
Installation Opensuse Leap 15 auf Laptop – Grafik Probleme, Optimus .

I.e.: I installed only the packages

nvidia-bumblebee, nvidia-bumblebee-32bit

from the Bumblebee repository.
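
In practice this boils down to picking the repository explicitly with zypper. The repository aliases below (“repo-update”, “Bumblebee”) are just examples – check the real aliases on your system with “zypper lr” first:

zypper install --from repo-update bumblebee dkms bbswitch bbswitch-kmp-default
zypper install --from Bumblebee nvidia-bumblebee nvidia-bumblebee-32bit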

Do not forget to issue a “mkinitrd” after you have successfully tested e.g. “optirun glxgears” and “tee /proc/acpi/bbswitch <<< OFF”. Then reboot and use Optimus as you are used to. I do not know what is wrong with the packages in the Bumblebee repository – but I hope this bug gets fixed soon. It is a bit annoying when one has to mix packages from different repositories.

Addendum, 03.08.2020: A problem with VirtualGL

The present Bumblebee repository of Opensuse contains a new version of the VirtualGL library (version 2.6.4). Do NOT install this RPM-file. It will break your optirun/primusrun with an error:

Error: undefined symbol: glXGetProcAddressARB

Although this type of error is discussed in various forums on the Internet I have no clue how to mend it.

Workaround:
Use the original VirtualGL RPM of the Opensuse Leap 15.1 Update repository! Works on my system.


Linux SSD partition alignment – problems with external USB-to-SATA controllers – II

In my last article
Linux SSD partition alignment – problems with external USB-to-SATA controllers – I
I wrote about different partition alignments parted and YaST’s Partitioner (Opensuse) applied when one and the same SSD (a Samsung 850 Pro) was attached

  • to external USB-to-SATA controllers and the USB bus of a Linux system (alignment to multiples of 65535 logical 512 byte blocks)
  • or directly to an internal SATA-III-controller of a Linux system (1 MiB-alignment: start sectors as multiples of 2048 logical 512 byte blocks).

We saw that different controllers led to the detection of different disk topology parameters (I/O limits) by the Linux system via the libblkid library. For one and the same SSD different values were reported by different controllers e.g. for the following parameters (I/O Limits data) :

physical_block_size,    minimum_io_size,    optimal_io_size

We saw in addition that different Linux disk tools may report different values; e.g. for the SSD on the SATA bus fdisk showed 512 bytes for the “optimal_io_size”, whereas lsblk and parted reported 0.
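
To reproduce such a comparison on your own system you can look at the values the kernel exports and at what the tools make of them – “/dev/sdh” stands for the device in my example:

lsblk -t /dev/sdh                              # topology columns: PHY-SEC, MIN-IO, OPT-IO, ...
cat /sys/block/sdh/queue/physical_block_size   # raw I/O limits as exported by the kernel
cat /sys/block/sdh/queue/minimum_io_size
cat /sys/block/sdh/queue/optimal_io_size
cat /sys/block/sdh/alignment_offset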

Guided by a Red Hat article https://people.redhat.com/msnitzer/docs/io-limits.txt we came to the conclusion that at least parted and YaST’s Partitioner use heuristic rules for their alignment decisions. The rules take the values of the disk “I/O Limits” parameters into account. They are consistent with a default of “optimal” for the alignment parameter of parted and provide a decision when the value of “optimal_io_size” is found to be zero. By applying these rules we could explain why we got different partition offsets and alignments for one and the same disk when it was attached to different controllers.

But this insight left us in an uncomfortable situation:

  1. Should we cling to the chosen settings when we use the SSD on external controllers, only? Can we partition SSDs on an external USB-to-SATA controller and move them later directly to a SATA-bus without adjusting partition borders? We saw that “parted” would complain about misalignment when SSD partitions were prepared on a different controller.
  2. As many people discuss the importance of partition alignment for SSD performance – will we see a noticeable drop in performance when we read/write to “misaligned” partitions?
  3. We saw that at least a JMicron controller indicated a bundling of 8 logical 512 byte blocks into a 4096 byte (fake?) “physical block”. Another question might therefore be what happens when, after an installation, Grub writes something to the first sector of a disk with GPT layout – maybe assuming a wrong disk topology? This is not as far fetched as one may think; see the third link at the bottom for a disaster with an MBR.

I cannot answer all these questions in general. But in this article I will at least look a bit into performance issues and answer the last question for my test situation.

Continue reading

Linux SSD partition alignment – problems with external USB-to-SATA controllers – I

When you try to find a solution for one problem suddenly another problem appears – and you find yourself confronted with something you never worried about before. For my article series on full laptop encryption with Opensuse Leap 15 I wanted to prepare a new SSD on my Linux desktop. This SSD should later on become the main disk of the laptop. I used an external disk box with a USB-to-SATA controller from JMicron to attach the disk to the USB-3 bus of my desktop system. And stumbled across a really strange partitioning scheme which YaST’s “Partitioner” set up on my SSD during installation.

For an SSD you normally expect Linux to choose a so called “1 MiB alignment“. Among other things this alignment leads to a minimum offset of the first partition of 2048 logical sectors/blocks of 512 bytes. The math is 2048 * 512 = 1024 * 1024 = 1 MiB = 256 * 4096. This sector start address is well suited for so called AF disks with native 4k support, but also for disks with conventional 512 byte physical blocks. Note that 1024 * 1024 can be divided by both 512 and 4096 without remainder.

From previous experience with SSDs attached to SATA-III controllers on Linux systems I would have said: any partition on an SSD shows a logical start block number which is a multiple of 2048. And I would have bet that the same holds for the size of a partition expressed in blocks, too. I had never questioned this before … In addition there are many articles on the Internet that tell users not to worry about alignment because all Linux tools today handle alignment correctly.
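
Whether a given layout actually satisfies parted’s own criteria can be checked explicitly – “/dev/sdh” again stands for the disk in question and “1” for the partition number:

parted /dev/sdh unit s print            # shows the start sectors in units of 512 byte logical blocks
parted /dev/sdh align-check optimal 1   # reports whether partition 1 is "optimally" aligned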

However, in my test situation I got something very different – and consequently some (!) Linux partitioning tools on the laptop later complained about misalignment. This worried me. Googling led me to the following unanswered question at
https://superuser.com/questions/1291467/trouble-figuring-out-optimal-alignment-of-multiple-partitions-on-a-931-5-gib-ext,
which describes more or less my problem.

I have tried to figure out what was/is behind this “misalignment” problem – and the result shocked me a bit. It raises questions about manufacturers of disk controllers and the way Linux handles disk property information. In addition I had a brief look at the performance of an aligned and an unaligned partition. The results puzzled me again. I find it difficult to get a consistent picture out of my findings for partition alignment of SSDs at different controllers. Write me a mail if you know better …

Anyway, I hope this article, which digs into areas of Linux which are a bit offside standard usage, finds the interest of some other Linux fans, too ….

I shall use the words “blocks” (OS perspective) and “sectors” (atomic disk unit) on the level of partitioning interchangeably – which is not quite correct semantically. But you find this mix of wording also in disk tools :-).

The problem

I used an external box for my SSD which contained a JMicron USB-to-SATA controller. When my SSD – a Samsung 850 Pro – was attached to my Linux desktop via this external USB interface, the physical sector size was reported to be 4096 bytes by lsblk, parted and fdisk. This stood a bit in contrast to other reports for this SSD type on the Internet; but SSD technology changes so often that it did not make me suspicious. I first added a BIOS boot partition and several other partitions to the disk with YaST’s Partitioner. The start sector of the first partition was chosen by YaST to be 65535.

mytux:~ # fdisk -l /dev/sdh 
Disk /dev/sdh: 477 GiB, 512110190592 bytes, 1000215216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: gpt
Disk identifier: ....

Device         Start       End   Sectors  Size Type
/dev/sdh1      65535    131069     65535   32M BIOS boot
/dev/sdh2     131070   1376234   1245165  608M EFI System
/dev/sdh3    1376235  37027274  35651040   17G Linux swap
...

OK, I had noticed before that some current partitioning tools choose a 32 MiB offset for the first partition these days. However, the number 65535 is NOT a multiple of 2048 and thus corresponds to 31.99951171875 MiB. An offset of 65536 logical blocks would, however, fit exactly to 32 MiB.
Also all other partitions on the external SSD were “aligned” (?) to start sector numbers which were multiples of 65535.
Addendum 05.12.2018: I retested this with the tool parted – without defining a special alignment type, i.e. using defaults. Same result.

65535 is compatible with a physical block size of 512 bytes, but NOT 4096 bytes. Interestingly, 65535 also fits an erase block size of 1536 MiB, which a lot of people assume to be used by Samsung for its SSDs. Wherever these people got this information from … see e.g. https://www.phoronix.com/forums/forum/hardware/general-hardware/1030306-samsung-970-evo-nvme-ssd-benchmarks-on-ubuntu-linux.

But: I checked against partition boundaries on another Linux system with the same type of SSD directly attached to the internal (Intel) SATA controller. There the alignment fitted exactly the theory of boundaries as multiples of 1 MiB (=> multiples of 2048 logical blocks as start sector numbers). I also checked for systems with other internal SSDs. 1 MiB alignment …

Then I built the disk into my laptop and attached it directly to the SATA interface there – and got a plain “1 MiB”-alignment (both for YaST’s Partitioner and parted with defaults).

mylux:~ # fdisk -l /dev/sda 
Disk /dev/sda: 477 GiB, 512110190592 bytes, 1000215216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: ....

Device         Start       End   Sectors  Size Type
/dev/sda1      2048      67583     65536   32M BIOS boot
...

If one had added more partitions the pattern would have repeated itself: With the SSD on the SATA controller the start sector numbers became multiples of 2048; on the external JMicron USB-to-Sata-controller bus the start sector numbers became multiples of 65535.

Where did this discrepancy come from?

Continue reading