f2fs compression doesn't seem to be compressing

Discussion in 'Other Operating Systems' started by elvis, Feb 14, 2021.

  1. elvis (Old school old fool)

     Joined: Jun 27, 2001
     Messages: 43,808
     Location: Brisbane
    Running F2FS on an old clunker laptop with Debian 11 Bullseye, installed on a Compact Flash card via a CF-to-IDE adaptor inside.
    https://en.wikipedia.org/wiki/F2FS

    My own performance results are pretty good (better than ext4 for this specific setup). Various tests around the Internet demonstrate extended device life for eMMC/CF/SD type media, so that's nice.

    Recently the kernel in Debian 11 (5.10) and f2fs-tools (1.14.0) became new enough that F2FS compression is an option. Before I do the whole dance of migrating my data about just to enable compression (it requires reformatting the volume), I thought I'd test it out in a VM.
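
    First, a sanity check that the kernel was actually built with compression support at all. This is just a sketch assuming Debian's /boot/config-* layout; the sysfs features entry may not exist on every kernel, hence the hedging:

    Code:
    # Confirm the kernel was built with F2FS compression and zstd support
    # (CONFIG names per the mainline Kconfig; path assumes Debian's layout)
    grep -E 'F2FS_FS_COMPRESSION|F2FS_FS_ZSTD' /boot/config-$(uname -r)
    
    # Kernels that support it should also advertise the feature here, if present
    cat /sys/fs/f2fs/features/compression 2>/dev/null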

    Problem is, it doesn't seem to be compressing.

    Under BtrFS, for example, I can do the following, using a 5.0GiB LVM volume I've got for testing:

    Code:
    # wipefs -af /dev/vg0/ftest
    # mkfs.btrfs -f -msingle -dsingle /dev/vg0/ftest
    # mount -o compress-force=zstd /dev/vg0/ftest /f
    # cd /f
    
    # df -hT ./
    Filesystem            Type   Size  Used Avail Use% Mounted on
    /dev/mapper/vg0-ftest btrfs  5.0G  3.4M  5.0G   1% /f
    
    # dd if=/dev/zero of=test bs=1M count=1024
    # sync
    # ls -lah
    -rw-r--r-- 1 root root 1.0G Feb 14 10:42 test
    
    # df -hT ./
    Filesystem            Type   Size  Used Avail Use% Mounted on
    /dev/mapper/vg0-ftest btrfs  5.0G   37M  5.0G   1% /f
    
    Writing ~1GiB of zero data creates a 1.0G file, and BtrFS's zstd compresses that down to roughly 34M of actual usage (likely mostly metadata and compressed extent bookkeeping).
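
    For per-file numbers rather than eyeballing df, the compsize tool (btrfs-compsize package on Debian) reports compression type and on-disk vs uncompressed size directly; a quick cross-check would look like:

    Code:
    # Report compression type, ratio and on-disk size for the test file
    compsize /f/test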

    Try the same in F2FS:

    Code:
    # wipefs -af /dev/vg0/ftest
    # mkfs.f2fs -f -O extra_attr,inode_checksum,sb_checksum,compression /dev/vg0/ftest
    # mount -o compress_algorithm=zstd,compress_extension=txt /dev/vg0/ftest /f
    # chattr -R +c /f
    # cd /f
    
    # df -hT ./
    Filesystem            Type  Size  Used Avail Use% Mounted on
    /dev/mapper/vg0-ftest f2fs  5.0G  339M  4.7G   7% /f
    
    # dd if=/dev/zero of=test.txt bs=1M count=1024
    # sync
    # ls -lah
    -rw-r--r-- 1 root root 1.0G Feb 14 10:48 test.txt
    
    # df -hT ./
    Filesystem            Type  Size  Used Avail Use% Mounted on
    /dev/mapper/vg0-ftest f2fs  5.0G  1.4G  3.7G  27% /f
    
    Double checking that I'm ticking all the right boxes: the volume is formatted with the compression feature, mounted with forced extension compression, chattr is applied recursively to force the whole volume to compress, and the output file is named with the matching extension. No go: the resulting volume usage shows uncompressed data. Writing 5GB of zeros fills the volume on F2FS, but not on BtrFS.
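
    To rule out the options silently not applying, it's also worth confirming both the live mount options and the per-file attribute (lsattr uses the generic flags ioctl, so it should report F2FS's compress flag the same way it does ext4 attributes):

    Code:
    # Verify the compress options actually made it into the live mount
    grep /f /proc/mounts
    
    # The file should carry the 'c' (compress) attribute
    lsattr /f/test.txt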

    I repeated the F2FS test with lzo and lzo-rle: same result.
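
    One thing I can't yet rule out is F2FS compressing the data but keeping the saved blocks reserved to the file, in which case df would never drop. If that's what's happening, newer kernels expose per-device counters that would show it, and f2fs-tools ships an f2fs_io helper that can explicitly release the compressed-out blocks. Both are hedged: the sysfs counters landed after 5.10, and not every f2fs-tools build includes f2fs_io:

    Code:
    # Per-device compression counters (post-5.10 kernels; adjust the dm-* name)
    cat /sys/fs/f2fs/dm-*/compr_written_block 2>/dev/null
    cat /sys/fs/f2fs/dm-*/compr_saved_block 2>/dev/null
    
    # Release compressed-out blocks back to free space, if the build supports it
    f2fs_io release_cblocks /f/test.txt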

    Anyone else played with this?

    I've seen one other person actually test this compression, and they reported seeing no savings either:
    https://forums.gentoo.org/viewtopic-p-8485606.html?sid=e6384908dade712e3f8eaeeb7cf1242b
     
    Last edited: Feb 14, 2021
