Yan Li's Words

My words on free/open source software

Saturday, November 05, 2016

Dell Precision 15 7510 Running CentOS 7.5

I set up my new laptop/mobile workstation. It's a certified refurbished Dell Precision 15 7510 from eBay. I'm pretty satisfied with the hardware as it looked brand new to me, although FedEx's shipping caused me some confusion and trouble. I installed CentOS 7.2 and am happy with the results.


I purchased this latest Skylake model because I checked dell.com and was sure it has pretty good Ubuntu support. The following is a summary of this laptop's hardware:

  • 6th Generation Intel Core i7-6820HQ (Quad Core 2.70GHz, 3.60GHz Turbo, 8MB 45W, w/Intel HD Graphics 530)
  • Graphics: 4GB Nvidia Quadro M2000M
  • 15.6" UltraSharp UHD IGZO (3840x2160) Wide View Anti-Glare LED-backlit
  • HD1: 512GB M.2 Samsung PCIe NVMe Class 40 Solid State Drive
  • HD2: SK hynix SC210 2.5 7MM 512GB
  • Intel Dual-Band Wireless-AC 8260 Wi-Fi with Bluetooth 4.1 Wireless Card (2x2)
  • Webcam: 0x1bcf Sunplus Innovation Technology Inc.

Storage Performance: RAID-0 slower than single disk?

The new laptop has two drives:

  1. 512 GB M.2 PCIe NVMe Class 40 Solid State Drive
  2. 512 GB 2.5 inch High Performance SATA Class 30 Solid State Drive

I wanted CentOS to run fast, so it went on the NVMe drive. I was curious whether RAID-0 could make an even faster partition, so I set up Linux software RAID using two equally sized partitions, one on each drive, and compared its performance with a regular XFS partition on the NVMe drive alone. The result was a surprise.

First, mkfs.xfs ran very slowly on the RAID-0 partition: it took about 30 minutes to finish, while on a non-RAID partition it needed only a few seconds.

I then ran fio using the ssd-test.fio workload, with the run time increased to 300 seconds to get more accurate numbers. The results are summarized below.

The RAID-0 chunk size was 128 kB, and the file system was XFS with a 4096-byte sector size. Without encryption, RAID-0 was actually slower than the plain NVMe partition in every workload, and LUKS encryption had a pretty big impact on I/O throughput.
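For reference, the comparison can be sketched as a script like this. It's only a sketch: the partition names are placeholders, the 128 kB chunk and the ssd-test.fio workload are from my setup, and the script defaults to a dry run that just prints the commands (it is destructive if actually executed):

```shell
#!/bin/sh
# Sketch of the RAID-0 vs. plain-NVMe comparison. DESTRUCTIVE when actually
# run: it wipes both partitions. Defaults to dry run; set DRY_RUN=0 to run.
: "${DRY_RUN:=1}"
NVME_PART=${NVME_PART:-/dev/nvme0n1p6}  # placeholder NVMe test partition
SATA_PART=${SATA_PART:-/dev/sda5}       # placeholder SATA test partition

run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

benchmark() {
    # RAID-0 across both drives, 128 kB chunk, XFS with 4096-byte sectors
    run mdadm --create /dev/md0 --level=0 --chunk=128 --raid-devices=2 \
        "$NVME_PART" "$SATA_PART"
    run mkfs.xfs -s size=4096 /dev/md0
    run mount /dev/md0 /mnt/test
    run fio --directory=/mnt/test ssd-test.fio
    run umount /mnt/test
    run mdadm --stop /dev/md0

    # Same workload on the plain NVMe partition
    run mkfs.xfs -f -s size=4096 "$NVME_PART"
    run mount "$NVME_PART" /mnt/test
    run fio --directory=/mnt/test ssd-test.fio
    run umount /mnt/test
}

benchmark
```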

Setting It Up

  1. Updated BIOS to the latest version. This can be done in Windows or using a FreeDOS USB if your Windows is gone.
  2. Disabled SecureBoot in BIOS. It is possible to install CentOS with SecureBoot but I don't want to deal with it right now.
  3. Disabled Intel Rapid Storage Technology and switched SATA to AHCI mode. I tried accessing an Intel RST disk using a Fedora 25 Beta LiveCD; it still couldn't be recognized by the 4.8.0 kernel.
  4. Disabled the built-in Intel graphics and used nVIDIA only, because there's no easy way to do live switching between graphics cards with X yet.
  5. Installed all Windows 10 Pro updates and cleaned up unnecessary bits to reduce the Windows footprint:
    1. Uninstall unneeded software: Intel management stuff, and any trialware.
    2. Turn off unneeded Windows features (found in the Uninstall control panel module).
    3. Disable hibernation: powercfg /hibernate off (as administrator) to free up around 32 GB (your RAM size).
    4. Reduce page file size if needed.
    5. Install WinDirStat and see if there are other useless files.
    6. Disk cleanup -> System cleanup -> More options -> Cleanup old restore points.
    7. Run a full system disk cleanup.
  6. Created a Windows recovery USB (16 GB is necessary) in case I need to recover it later.
  7. Luckily the pre-installed Windows 10 Pro is on the SATA drive, so I have the whole NVMe drive for CentOS. I used gparted to shrink the Windows partition so I can store more data on the SATA drive from CentOS. There are 3 or 4 other tiny partitions on the SATA drive for UEFI booting; I just left them alone.
  8. Ran memcheck to make sure the RAM is in good shape (it also tested that my machine isn't vulnerable to the Row hammer attack).
  9. Booted up a CentOS 7 Live USB.
  10. Created partitions on /dev/nvme0n1 using fdisk because I don't know how Anaconda aligns partitions.
    • nvme0n1p1: EFI partition (150 MB), nvme0n1p2: /boot (2 GB), nvme0n1p3: / (70 GB), nvme0n1p4: swap (16 GB), nvme0n1p5: /home (389 GB)
    • I was overcautious and made sure all partitions are 4 MiB aligned using fdisk (parted also has auto-alignment checking, but I couldn't figure out what alignment size parted was using).
    • Created LUKS partitions and made XFS file systems on top of them. I don't need LVM and wasn't sure if I could disable LVM in Anaconda so I just created the LUKS partition and file systems myself. I used "mkfs.xfs -s size=4096" to create XFS with 4096 sector size.
    • I don't know if this is correct, but the NVMe drive reports its physical sector size as 512 (/sys/block/nvme0n1/queue/physical_block_size), so mkfs.xfs creates file systems with a 512-byte sector size. I guess this is actually harmless, but I'm a little paranoid and prefer a 4096-byte sector size since all my other SSDs report 4096. I created the /home partition myself and told Anaconda not to format it. But Anaconda insisted on formatting the / partition itself and provided no easy way to pass parameters to mkfs. What I did was rename /usr/sbin/mkfs.xfs to /usr/sbin/mkfs.xfs.orig and create a script in its place that always passes -s size=4096 to the real mkfs.xfs. After installation I checked with xfs_info, and all newly created XFS partitions use a 4096-byte sector size.
  11. After the installation finished, the system wouldn't boot. efibootmgr had correctly created a new entry "CentOS Linux" that points to /EFI/somedrive/centos/shim.efi. However, it pointed to the WRONG drive. In my case, it pointed to the SATA drive holding the Windows partition instead of the NVMe drive hosting the Linux UEFI partition. This is very likely a bug in Anaconda, which just uses whichever UEFI partition it detects first and assumes the system has only one. Finding the cause cost me some time, but once you know the reason the fix is easy: boot into the BIOS and create a new UEFI boot entry that points to the right shim.efi on the right drive. Dell's BIOS is pretty powerful and can mix all sorts of UEFI/legacy boot modes.
  12. Installed the kernel-ml package since CentOS 7.2 doesn't support Skylake yet. CentOS 7.3 and later have Skylake support out of the box.
  13. HiDPI (my laptop has a 4K internal display) works great after installing the mainline kernel.
  14. Added rd.luks.options=discard to /etc/default/grub and discard to /etc/crypttab to enable LUKS discard support. This lets the LUKS device-mapper layer pass along the discard/trim commands from the file system above, which helps keep the flash drive running at optimal performance. The downside is that this can make the LUKS encryption slightly less secure by leaking the locations of unused blocks (not a big deal for me).
  15. I didn't mount XFS with the discard option, because that slows down daily operation and the XFS FAQ warns against it. Instead I just enabled the weekly fstrim timer in systemd by running: systemctl enable fstrim.timer
  16. Installed iwl7260-firmware from Fedora 25 to provide the firmware for the Intel 8260 wireless card.
  17. Installed the nVIDIA proprietary driver 370.28 (the latest version at the time).
  18. Connected my external low-DPI monitor. By default everything looked huge on it because of the 2.0 scaling factor GNOME defaults to on the internal monitor. I used the nvidia-settings tool to enable panning and change the scaling on the external monitor. The external monitor's resolution is 1920x1200, so I enabled 3840x2400 panning and set ViewPortIn to 3840x2400. After that the external screen went blank because of a bug, so I also needed to run this command to set "ForceFullCompositionPipeline=On":
    nvidia-settings --assign CurrentMetaMode="DPY-5: nvidia-auto-select @3840x2160 +0+0 {ViewPortIn=3840x2160, ViewPortOut=3840x2160+0+0}, DPY-2: nvidia-auto-select @3840x2400 +3840+0 {ViewPortIn=3840x2400, ViewPortOut=1920x1200+0+0, ForceFullCompositionPipeline=On}"
  19. Used PowerTOP to check that the CPU can actually enter its deepest sleep state (C10). If it can't, your battery might drain faster.
              Package   |             Core    |            CPU 0    CPU 4
    POLL        0.0%    | POLL        0.0%    | POLL        0.0%    0.0 ms  0.0%    0.0 ms
    C1E-SKL     0.8%    | C1E-SKL     1.3%    | C1E-SKL     1.1%    0.4 ms  1.6%    3.2 ms
    C3-SKL      0.4%    | C3-SKL      1.1%    | C3-SKL      2.1%    0.4 ms  0.1%    0.4 ms
    C6-SKL      7.9%    | C6-SKL     13.3%    | C6-SKL     25.0%    0.7 ms  1.7%    1.1 ms
    C7s-SKL     0.0%    | C7s-SKL     0.0%    | C7s-SKL     0.0%    0.0 ms  0.0%    0.0 ms
    C8-SKL     29.0%    | C8-SKL     31.0%    | C8-SKL     43.4%    1.6 ms 18.6%    3.3 ms
    C9-SKL      0.0%    | C9-SKL      0.0%    | C9-SKL      0.0%    0.0 ms  0.0%    0.0 ms
    C10-SKL    51.0%    | C10-SKL    44.0%    | C10-SKL    17.0%    5.1 ms 71.0%   16.9 ms
  20. Updated on Nov. 24th, 2016: I discovered that the headphone jack had no sound. This is fixed by adding a file /etc/modprobe.d/alsa-base.conf with the following content:
    options snd-pcsp index=-2
    alias snd-card-0 snd-hda-intel
    alias sound-slot-0 snd-hda-intel
    options snd-hda-intel model=laptop
    options snd-hda-intel position_fix=1 enable=yes
    I heard that you may need to completely shut down and restart the laptop for this to take effect. I got the instructions from here.
  21. Brightness of internal display is not remembered across reboot. Per Soon's comment, you can set a fixed value on boot by adding:
    echo 100 > /sys/class/backlight/acpi_video0/brightness
    chmod 666 /sys/class/backlight/acpi_video0/brightness
    to /etc/rc.d/rc.local, and then chmod +x /etc/rc.d/rc.local.
  22. Updated on Apr. 25th, 2017: Setting the screen backlight to maximum brightness when working in direct sun. CentOS 7.3's kernel defaults to ACPI backlight control. It works great but cannot set the screen brightness to the maximum level the hardware can reach; its max level is a little darker. For now, to reach the true maximum you can reboot and set the brightness to max in the BIOS (and disable any brightness-setting change you added, such as the one above). After the system boots up, if you change the brightness level you won't be able to get back to the maximum again. The other workaround is to add the acpi_backlight=vendor kernel option, which switches to the dell_backlight driver. The dell_backlight driver, however, is buggy: it fixes the brightness at the max level and you can't change it.
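The discard settings from step 14 amount to these two fragments (a sketch: the UUID is a placeholder, and the grub.cfg path assumes a CentOS 7 UEFI install):

```
# /etc/default/grub -- append to the kernel command line:
GRUB_CMDLINE_LINUX="... rd.luks.options=discard"

# /etc/crypttab -- add "discard" to the options field:
luks-<UUID>  UUID=<UUID>  none  discard

# Then regenerate the grub configuration:
#   grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
```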
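The mkfs.xfs wrapper trick from step 10 can be sketched like this (the /usr/sbin/mkfs.xfs path and the -s size=4096 option are from my setup; the function form is just for convenience):

```shell
# install_mkfs_wrapper PATH_TO_MKFS.XFS
# Renames the real mkfs.xfs to mkfs.xfs.orig and installs a wrapper in its
# place that forces "-s size=4096" on every invocation, as in step 10.
install_mkfs_wrapper() {
    mkfs="$1"
    mv "$mkfs" "$mkfs.orig"
    cat > "$mkfs" <<EOF
#!/bin/sh
# Forward all arguments to the real mkfs.xfs, forcing a 4096-byte sector size.
exec "$mkfs.orig" -s size=4096 "\$@"
EOF
    chmod +x "$mkfs"
}

# In the Anaconda installer environment:
#   install_mkfs_wrapper /usr/sbin/mkfs.xfs
```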

End Result

Everything else works well, including suspend-to-RAM, external display, HDMI, DisplayPort, webcam, sound, Wi-Fi, SD Card Reader, and USB-C.

Some nitpicks

  • Brightness of internal display is not remembered across reboot.
  • Sometimes after suspend to RAM, the CPU frequency scaling doesn't work well and all CPUs are stuck at around 400 to 500 MHz, like:
    analyzing CPU 0:
      driver: intel_pstate
      CPUs which run at the same hardware frequency: 0
      CPUs which need to have their frequency coordinated by software: 0
      maximum transition latency: 0.97 ms.
      hardware limits: 800 MHz - 3.60 GHz
      available cpufreq governors: performance, powersave
      current policy: frequency should be within 800 MHz and 3.60 GHz.
                      The governor "powersave" may decide which speed to use
                      within this range.
      current CPU frequency is 468 MHz.
      boost state support:
        Supported: yes
        Active: yes
    It doesn't happen most of the time, and I haven't figured out why. If the machine feels sluggish I just run sudo cpupower frequency-set --governor performance (bound to a hotkey). Other people have reported the same issue.
  • I wish the battery lasted longer. I haven't had time to do any manual tuning yet. On battery I get about 4 hours of regular use or 1 hour of heavy use (running my CPU-heavy machine learning code). This is on par with an early review of this laptop running Windows. I'm pretty sure it can do better after some power usage tuning in PowerTOP.
  • Live switching between the nVIDIA and Intel graphics cards isn't working, but I'm OK with using nVIDIA only since the Intel Skylake driver is buggy anyway. The downside is probably higher power consumption.
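For the stuck-frequency symptom above, a small check can be scripted. This is only a sketch; the 600 MHz threshold is my own arbitrary choice, not something from cpupower:

```shell
# freq_stuck KHZ -- succeed if a scaling_cur_freq value (in kHz) looks stuck
# below 600 MHz (600000 kHz, an arbitrary threshold I chose).
freq_stuck() {
    [ "$1" -lt 600000 ]
}

# Scan every CPU; if any looks stuck, reset the governor as described above.
check_and_fix() {
    for f in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq; do
        [ -r "$f" ] || continue
        if freq_stuck "$(cat "$f")"; then
            sudo cpupower frequency-set --governor performance
            return
        fi
    done
}
```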

Some random thoughts: this new laptop's nVIDIA graphics feels much faster than the Intel graphics in my old Late 2013 Retina MacBook Pro (I'm only talking about 2D performance here). The Intel graphics in the rMBP always stutters when driving one external monitor and crashes when I add two external monitors. It drives two external monitors fine under macOS, so this is likely a software issue. I can't help but think that Intel doesn't allocate enough resources to their Linux drivers. There are other problems with Intel's driver:
  • No easy way to install different versions on RHEL/CentOS; the official installer doesn't support RHEL/CentOS yet. The nVIDIA driver, while closed source, supports most Linux distros on the market, and I can install any version I choose.
  • Buggy and incomplete 2D acceleration support.
  • 3D performance is much slower than with the Windows driver.
  • The Xorg running on my old rMBP had about a 20% chance of crashing when plugging in an external monitor. I don't know whether to blame the Xorg server or the Intel driver, but the same Xorg with the nVIDIA driver on the new laptop has never crashed.
Even though I hail Intel's engagement with the open source community and their open source drivers, I feel the quality of their drivers is not on par with nVIDIA's proprietary ones.


  • Oct. 2018: Upgraded to CentOS 7.5
  • Nov. 2017: Tested USB-C using a Google Pixel 2 XL
  • Dec. 2016: Upgraded to CentOS 7.3

Sunday, October 09, 2011

Using rsync to migrate data between Linux and Apple Mac OS X

I got a new MacBook Air running Mac OS X (10.7.1) "Lion", and I migrated my archive data (around 1.3 TB) from an old Linux laptop running Debian Testing "Wheezy". The process was not smooth and I hit several roadblocks. Here I document the points you need to pay attention to when migrating data across operating systems:


Use the latest rsync (3.0.8 from MacPorts was used here); old versions of rsync are buggy on the Mac. The version of rsync on Linux doesn't matter much, because rsync on Linux has been stable for a long time.


Disable "Ignore ownership on this volume" (via Finder or the command line). Otherwise rsync won't be able to set file ownership and permissions correctly. If you don't instruct rsync to preserve permissions and ownership you might not need this, but I'm not sure.


Disable antivirus software like McAfee, because it interferes with rsync's preservation of file times.


Use rsync's "--iconv=UTF8-MAC,UTF-8" option (only available since rsync 3.0), because the Mac's HFS+ decomposes UTF-8 filenames before storing them, so filenames would differ from those on Linux unless the conversion is done.

If you see "iconv_open("UTF-8", "UTF8") failed", try swapping the parameters of --iconv.
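Putting these rsync points together, the pull from the Mac side looks roughly like this (a sketch: the host and paths are placeholders, and with DRY_RUN=1 the function only prints the command it would run):

```shell
# Run on the Mac with MacPorts rsync (>= 3.0). Pulls the archive from the
# Linux box; --iconv=UTF8-MAC,UTF-8 converts between the local (HFS+,
# decomposed) filenames and the remote's plain UTF-8 filenames.
sync_from_linux() {
    set -- rsync -aH --iconv=UTF8-MAC,UTF-8 "$1" "$2"
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# Example (host and paths are placeholders; DRY_RUN=1 just prints the command):
#   DRY_RUN=1 sync_from_linux user@linuxbox:/home/user/archive/ /Volumes/Data/archive/
```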


This is not directly related to rsync or the Mac, but is a general rule for handling archive data: use a different piece of software to do the verification. Your rsync might be buggy, the OS might be buggy, the disk or USB cable might be kaput, or you might have used the wrong options or settings. Therefore always verify the data before and after the migration with something other than rsync. I used AIDE to do file checksum and timestamp verification.
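A simple way to do such an independent check (a sketch, not the AIDE setup I used) is a checksum manifest. Note that HFS+ filename decomposition can make some manifest paths differ on the Mac side, and sha1sum on the Mac comes from MacPorts coreutils (or substitute shasum):

```shell
# make_manifest DIR MANIFEST -- record a checksum of every file under DIR.
# check_manifest DIR MANIFEST -- verify a tree against a saved manifest.
# MANIFEST should be an absolute path (the functions cd into DIR).
make_manifest() {
    ( cd "$1" && find . -type f -exec sha1sum {} + | sort -k 2 ) > "$2"
}

check_manifest() {
    ( cd "$1" && sha1sum --check --quiet "$2" )
}
```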

Wednesday, February 16, 2011

openSUSE Build Service (OBS): Break the Link Between a Branched Package and Its Source

We are using the openSUSE Build Service (OBS) for the development of MeeGo. I have an old package that was branched from a Trunk package long ago, but every time the source package in Trunk is updated my package breaks, because the changes between them have become so large that keeping the link is meaningless. So I decided to remove the link between them.

The simplest way might be to remove my package and start a new one from scratch. However, that not only removes the package history but also resets the cumulative release version number, which is unacceptable since it would make it impossible for client machines to follow updates to this package.

I searched for a long time but failed to find a solution to this issue, and after some trial and error I think I have found a way:

  1. Check out the project's unexpanded files by using "osc co -u PRJ/PKG"

  2. Remove the "_link" file by running "osc rm _link"

  3. Copy in the latest package files since the link is now broken

  4. Check-in the changes

I've done a test and the steps above worked. However, it seems these steps also reset the release version number.

Update: darix from freenode/#opensuse-buildservice said 'osc in git has "osc detachbranch" exactly for that.'

Saturday, December 25, 2010

Easy way to align Linux partitions for SSD and myths debunked

Recently I got a laptop from my employer with an Intel X25-M SSD drive in it. I spent some time figuring out the best and easiest way to install Linux onto it while retaining Windows 7 (so I had to use MBR instead of GPT). There are numerous discussions on the net; some are quite complex and some are simply misleading, if not wrong. In fact this task is quite easy with modern Linux, and here's documentation of what I did. This post is not meant to be a full discussion of the topic, only a cheat sheet for experienced Linux users (for a more detailed and correct discussion, I recommend Ted Ts'o's blog post: Aligning filesystems to an SSD's erase block size).

For impatient readers, here's an overview: the disk geometry (C-H-S values) you see in partitioning programs, as well as warnings about partitions not being aligned to cylinder boundaries, don't matter at all, so there's no point tuning those knobs. The new GParted (I used 0.7.1 from SysRescueCD; not to be confused with GNU parted) is a very easy tool, and it now defaults to creating partitions aligned on 1 MiB boundaries (I heard Ubuntu 10.10 and later can align partitions automatically too, but I haven't tried that).

Here are the details:

The myth: Does the C-H-S value still matter?

A lot of discussions were about checking the disk geometry after partitioning. In fact, no modern operating system pays attention to those values any more (see the comment by Ted Ts'o), and the "-H xx -S xx" options passed to fdisk are only meant to make fdisk happy so it creates partitions aligned with cylinder boundaries. Although the C-H-S values are stored in the MBR, they are not used by the Linux kernel, which runs the whole system. Instead of choosing or checking C-H-S values, you only need to make sure that the start and end sectors of your partitions are multiples of your erase block size. Any good partitioning program can do this.

So you don't have to use that fdisk program

and you can simply choose any program you are familiar with. I'm comfortable with sfdisk and GParted. This time I tried the latest GParted and was very happy to see it supports partition alignment on 1 MiB boundaries, so my work was done in one minute without a calculator. Afterwards I carefully checked the partition table with "sfdisk -uM" and confirmed the partitions were aligned on MiB boundaries. The extended partition itself was not aligned, but I believe that doesn't matter as long as all logical partitions inside it are aligned correctly.
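The same check can be done without sfdisk by reading partition start sectors from sysfs. A sketch, assuming 512-byte logical sectors so that a 1 MiB boundary falls every 2048 sectors:

```shell
# aligned_1mib START_SECTOR -- succeed if the sector lies on a 1 MiB
# boundary (2048 x 512-byte sectors = 1 MiB).
aligned_1mib() {
    [ $(( $1 % 2048 )) -eq 0 ]
}

# Print the alignment of every partition on a disk, e.g.: check_disk sda
check_disk() {
    for start_file in /sys/block/"$1"/"$1"*/start; do
        [ -r "$start_file" ] || continue
        part=$(basename "$(dirname "$start_file")")
        start=$(cat "$start_file")
        if aligned_1mib "$start"; then
            echo "$part: start sector $start is 1 MiB aligned"
        else
            echo "$part: start sector $start is NOT 1 MiB aligned"
        fi
    done
}
```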

A few more tips:

  • If you want TRIM/DISCARD support, use a 2.6.33+ kernel with an Ext4 file system mounted with -o discard.

  • When making the Ext4 file system, use the stripe-width option and align it with the erase block boundary too (see Ted's post above).

  • Don't use LVM or dm-crypt, since they didn't support the TRIM/DISCARD command at the time of writing. (Update: newer kernels pass TRIM/DISCARD through LVM and dm-crypt.)
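The stripe-width computation can be sketched as follows; the 128 KiB erase block is only an example value (check your drive's actual erase block size), and /dev/sdXn is a placeholder:

```shell
# Stride = erase_block_size / fs_block_size; for a single drive, stripe-width
# equals stride. Example: 128 KiB erase block / 4 KiB ext4 block = 32.
ERASE_KIB=128   # assumed erase-block size in KiB (drive-specific)
BLOCK_KIB=4     # ext4 block size in KiB
STRIDE=$(( ERASE_KIB / BLOCK_KIB ))

# Print (rather than run) the resulting mkfs command; /dev/sdXn is a placeholder.
echo "mkfs.ext4 -b $(( BLOCK_KIB * 1024 )) -E stride=$STRIDE,stripe-width=$STRIDE /dev/sdXn"
```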

compilebench test result:

Intel's X25-M is smart enough that even if you don't align accesses correctly the drive does magic under the hood, so the initial benchmark numbers don't differ much. However, performance degrades much more quickly as the internal mapping tables grow more complex due to unaligned writes. Here I post the file-system benchmark results while the drive is still new; I'll benchmark again after several months to see if there's any degradation (although my usage is not heavy).

I used compilebench v0.6 because I think it mimics daily desktop usage very well.

Machine spec:
System: Debian Squeeze (testing)
Kernel: 2.6.37-rc5-amd64 from Debian experimental repo
Test command: sudo ./compilebench -D t -i 10 --makej (the same option used by Phoronix Test Suite)
File system: Ext4, mount option: rw,noatime,discard

compilebench Initial Create test result: 105.24 MB/s (on 2010-12-27).

Monday, December 20, 2010

MeeGo: Auto-Start an Application on Session Start

On a modern Linux desktop, the common way to auto-start an application in a user session is to place a ".desktop" file into /etc/xdg/autostart. But if you are writing or porting an application to MeeGo and find that it is not auto-started after you put in a seemingly correct .desktop file, check this:

If there's an "OnlyShowIn=" item, it must contain the string X-MOBLIN-NB. For example (the Name and Exec values here are just placeholders):

    [Desktop Entry]
    Type=Application
    Name=Example App
    Exec=example-app
    OnlyShowIn=X-MOBLIN-NB;

Another caveat is that MeeGo's uxlaunch doesn't honor the AutostartCondition key yet, for the sake of simplicity and fast boot.

For code under the hood, check MeeGo's uxlaunch/desktop.c.

Saturday, December 18, 2010

Very hard to make bootable USB drive from Ubuntu 10.10's CD Image

I was spoiled by MeeGo's super convenient USB live image so I was very surprised when I found that it was very hard to make a bootable USB drive from Ubuntu 10.10's CD-ROM image for my wife.

On Linux, Ubuntu only officially supports making a bootable USB drive from its CD-ROM image using "ubuntu-usb-creator", which is not available in any of the Linux installations I'm using (MeeGo, Debian, CentOS). The alternative way mentioned in Ubuntu's official help document, downloading a small image and combining it with the ISO file, doesn't work, and community posts pointed out that that method only works with the alternate ISO (not the desktop ISO).

In the end, the only way I found to make a usable USB installation drive is the one described in this post.

So what? I think Canonical really should release official USB images since most netbooks shipped today have no CD-ROM drive.

Wednesday, July 14, 2010

Google Chrome (Chromium) Sync Server Address

If you want to know the host name and address of the sync server Google Chrome (or Chromium) uses, see here:


For now, the sync service's URL is


About Me

Santa Cruz, California, United States