Lessfs-1.0.5 is available for download.

This release fixes some minor bugs that are related to logging only.


30 Responses to Lessfs-1.0.5 is available for download.

  1. Alex says:

    How fast :p

    lessfs is growing faster than we could ever expect ;) .. So, since those are only minor bugs, I'll wait for my biggest server migration; it is hard at work right now on thousands upon thousands of little files :) (who cares… ;) )

  2. Bas Bleeker says:

    Great, will check it out this week.

  3. Hubert Kario says:

    I’ve made an AUR build script for lessfs; it’s available at
    so all Arch Linux users can easily install lessfs.

  4. MasterTH says:

    Nice speed! I installed it and created a new storage volume.

    A nice feature would be to show how much storage is saved through the dedup process. Is this possible?

    • Hubert Kario says:

      Currently not, but if you set the debug level to 3 or more, you can read statistics in /var/log/messages for the files that are being added to the volume.

    • MasterTH says:

      Is dedup with tar.gz files not possible?
      I’m syncing 50 GB of backups; each day produces one tar.gz, but no duplicated blocks could be found.

      • MasterTH says:

        Each file is about 1 GB.

        • Hubert Kario says:

          lessfs works at the block level, without any clever shifting deduplication window, decompression of gzipped files, or other such tricks. From what I could gather from the whitepapers, even NetApp, DataDomain and ExaGrid don’t do this.

          If you want to dedup backups, there are two ways. One: ungzip and untar the files, then tar them again with a high blocking factor (128 or 256). Do not compress them; lessfs will compress them by itself.

          In short, it’s best to keep the backups either without any form of tar/gzipping, or as raw dd images. That way lessfs can do its magic. (For performance it’s best to use dd images; metadata manipulation in lessfs is far from quick and is CPU-bound in a scenario with metadata and data on different HDDs.)
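          In practice, Hubert’s repacking advice might look like the following sketch. The paths and filenames are made-up examples; ‘-b 256’ sets GNU tar’s blocking factor (records of 256 × 512 bytes), and the archive is left uncompressed so lessfs can compress the blocks itself.

          ```shell
          # Sketch: repack a backup as an uncompressed tar with a high
          # blocking factor so lessfs can find duplicate blocks.
          mkdir -p /tmp/extracted
          echo "sample backup data" > /tmp/extracted/file1
          # Blocking factor 256, no gzip; lessfs compresses blocks itself.
          tar -b 256 -cf /tmp/backup-repacked.tar -C /tmp extracted
          tar -tf /tmp/backup-repacked.tar
          ```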

  5. Sam says:

    I used to be able to do a ‘du’ within the file system to see how much space was in a directory, and subtract that from the total size of the parent file system to see the de-dup rate. I can’t do a ‘du’ anymore since I went from 1.0.1 to 1.0.5. Did something change?
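    For reference, Sam’s subtraction method can be scripted as a small helper. This is only a sketch; the mount point and the lessfs backend directory are illustrative and depend on your configuration.

    ```shell
    # Logical size in KB of a tree, as reported through the lessfs mount.
    logical_kb() { du -sk "$1" | awk '{print $1}'; }

    # Hypothetical usage: compare the logical size inside the mount with
    # the physical size of the lessfs backend directory to estimate the
    # dedup rate, e.g.:
    #   logical_kb /fuse        # size as applications see it
    #   logical_kb /data/dta    # space actually used by the block store
    logical_kb /tmp
    ```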

    • maru says:

      Hmm, strange because it works for me.
      What do you see when you do a ‘du’?

      Is there an error message?

      • Sam says:

        Every new directory shows a size of 512 on ‘du -h’, whereas before it showed the real size.
        [root@dedup1 Moved_from_linfs01]# du -h
        512 .

        Yet there’s over 150 gb of data in that folder.

        The same mounted lessfs file system, with files I moved at version 1.0.1:
        [root@dedup1 SamMove]# du -h
        512 ./1111ContactSamBeforeDeletingAnything
        16G ./iso
        15G ./move
        103G .

        (Note that 1111ContactSamBeforeDeletingAnything is an empty folder anyway.)

        • Mark says:

          I just tried to reproduce this, but it works as expected.
          Does the problem disappear when you mount the filesystem with the old lessfs version?

        • Alex says:

          Hi Sam,
          I just updated to 1.0.5 on my test system, but I can’t reproduce the “du” bug you see.
          My test is on a 64-bit Gentoo system with 2 GB of RAM.
          My database (tc) is about 650 GB, holding approx 780 GB of data.
          I know that saying “it works on my side” doesn’t help a lot, but it does seem to confirm that this version doesn’t exhibit the bug on a system other than maru’s.
          Maybe you could try running lessfsck on your data…

          • Pete says:

            I am having the same problem with a fresh install of 1.0.6. I created a script that gives me a crude idea of the total size, but it’s only a workaround, not a fix.

            if [ "$1" == "" ]; then
              echo "Usage: du-less [path-to-folders]"; exit 1
            fi
            # Sum the byte sizes that 'find -ls' prints for every entry.
            TOTAL=0
            for i in `find "$1" -ls | sed -e 's/^.* \([0-9][0-9]*\) [A-Z][a-z][a-z] .*$/\1/'`; do
              TOTAL=$(($TOTAL + $i))
            done
            # Divide by 1024 until the total fits, counting the unit reached.
            COUNT=0; NEWTOTAL=$TOTAL; REM=0; LASTDIFF=0
            while [ ! "$NEWTOTAL" == "0" ]; do
              LASTDIFF=$REM; TOTAL=$NEWTOTAL
              REM=$(($NEWTOTAL % 1024)); NEWTOTAL=$(($NEWTOTAL / 1024))
              COUNT=$(($COUNT + 1))
            done
            case $COUNT in
              0|1) UNITS="bytes" ;; 2) UNITS="KB" ;; 3) UNITS="MB" ;; 4) UNITS="GB" ;; *) UNITS="TB" ;;
            esac
            echo "Total: $TOTAL.$LASTDIFF $UNITS"

        • maru says:

          I have been doing some tests with a newly created lessfs filesystem on lessfs-1.0.6, and again there was no unexpected behaviour.

          xgtest02:~ # cd /fuse/
          xgtest02:/fuse # ls
          xgtest02:/fuse # du .
          104393 .
          xgtest02:/fuse # mkdir test
          xgtest02:/fuse # du .
          1 ./test
          104393 .
          xgtest02:/fuse # du . -h
          512 ./test
          102M .
          xgtest02:/fuse # mv boot.img test/
          xgtest02:/fuse # du . -h
          102M ./test
          102M .

          Can you repeat a similar test and show me the output?

          Thanks in advance,


          • Pete says:

            I am still having the problem. I have 2 machines running the same code-base and this issue is happening on one of the machines and not the other. Could this be a CPU instruction issue?

          • Pete says:

            Hey, one more bit of information that I noticed… The problem only seems to happen with files that I “move” from a non-deduped folder into the deduped folder. If I copy the files and folders, then the “du” command works properly.

          • Pete says:

            Here is some debugging info that illustrates the issue happening with the move command, but not with the copy command. My guess is that whatever is corrupt in the attributes of the original is fixed when a new copy is created.

            # ls -alh test
            total 1.0K
            drwxr-xr-x 2 root root 4.0K Mar 13 16:46 .
            drwxr-xr-x 10 500 500 4.0K Mar 13 16:51 ..
            -rwxrw-r-- 1 root root 153M Feb 24 2009 VistaPE-Core.iso

            # du -sh test
            512 test

            # ls -alh test2
            total 1.0K
            drwxr-xr-x 2 root root 4.0K Mar 13 16:51 .
            drwxr-xr-x 10 500 500 4.0K Mar 13 16:51 ..

            # mv test/VistaPE-Core.iso test2/
            # du -sh test2
            512 test2
            # ls -alh test2
            total 1.0K
            drwxr-xr-x 2 root root 4.0K Mar 13 16:51 .
            drwxr-xr-x 10 500 500 4.0K Mar 13 16:51 ..
            -rwxrw-r-- 1 root root 153M Feb 24 2009 VistaPE-Core.iso

            # cp test2/VistaPE-Core.iso test/
            # du -sh test
            153M test
            # ls -alh test
            total 153M
            drwxr-xr-x 2 root root 4.0K Mar 13 16:54 .
            drwxr-xr-x 10 500 500 4.0K Mar 13 16:51 ..
            -rwxr--r-- 1 root root 153M Mar 13 16:54 VistaPE-Core.iso

            Note: These files were originally copied using a Samba share, from a Windows host.

          • Mark says:

            Hi Pete,

            I’ll do some testing with moving files.
            Let’s see if I can reproduce this.

          • Mark says:

            When I do move operations from a local filesystem to lessfs it all works.
            Are the file sizes correct when you look at them with stat?

            It looks like this is a lessfs/samba-related issue.

            Can you do some basic tests with and without samba?

          • Pete says:

            Mark, I agree. When I do move operations from within a Linux environment, it all works. My guess is that it’s a lessfs/Samba issue as well. All of the affected files were moved (cut/paste) into the deduped share served up through Samba. If you can’t reproduce it, let me know; I might be able to make a live-CD environment in which the error is reproducible for you.

  6. Pete says:

    Mark, I just discovered some more information about the problem. It is also apparent when using the “ls -s” command, which displays each file’s size in blocks: the block count is zero for every file that is either copied or moved into a samba/lessfs file share.

    If I move the file within lessfs, the problem remains. But if I move the file outside of the lessfs folder, the problem is permanently fixed; even if I move it back into the lessfs folder, it never recurs. Somehow the number of blocks is not being set. That must be the corrupt or missing attribute.
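    Pete’s diagnosis can be checked directly: ‘du’ and ‘ls -s’ both report a file’s st_blocks attribute rather than st_size, so a file whose block count was never set appears to occupy no space even though its length is correct. A quick illustration on an ordinary file (the path is just an example):

    ```shell
    # Compare the two size fields; on an affected lessfs file,
    # 'blocks' would read 0 while 'size' is still correct.
    echo "hello lessfs" > /tmp/blockdemo
    stat --format='size=%s blocks=%b' /tmp/blockdemo
    ```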


    • Mark says:

      Hi Pete,

      Thanks for testing this.
      I will set up samba and test lessfs with full debugging. It probably has to do with lessfs not updating the file size on every write. Do you have aio enabled in smb.conf?
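      For reference, asynchronous I/O is controlled per share in smb.conf; the fragment below only illustrates the parameters in question, with made-up values.

      ```
      [share]
         path = /fuse
         aio read size = 16384
         aio write size = 16384
      ```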

    • Mark says:

      Hi Pete,

      I tried to reproduce the problem with a very simple smb.conf configuration.

      workgroup = MYGROUP
      server string = Samba Server Version %v
      interfaces = eth0
      security = SHARE
      auth methods = guest
      passdb backend = tdbsam
      hosts allow =
      cups options = raw

      comment = videodata
      path = /fuse
      force user = samba
      read only = No
      guest ok = Yes

      After starting samba I mounted the samba share:
      mount -t cifs // /mnt

      Copying and moving data to /mnt works without problems as well as ‘du’.
      Can you confirm that you need to use the wonderful Windows OS to reproduce the problem?

      Thanks in advance,


      • Pete says:

        I only tested copying files from the wonderful Windows XP OS. I do believe that Windows is a key player in this.

        • Pete says:

          FYI – I copied the same files into a non-lessfs share via Samba and the issue did not happen (ls -s reported a non-zero block count). So this does appear to be a Windows -> Samba -> Lessfs issue.

      • Mark says:

        I have been able to reproduce it.
        You need to copy the data from a Windows system to samba->lessfs.

        root@jupiter:/mnt# du . -s -h
        1.4G .
        root@jupiter:/mnt# du .
        359394 ./test2
        0 ./test3
        1381240 .
        root@jupiter:/mnt# cd test3
        root@jupiter:/mnt/test3# du . -s -h
        0 .

        Directory test2 contains the same files as test3.
        Hmm… files that take no space at all.

        It’s official, we have found a bug.


        • Pete says:

          Alright! I can’t wait for the fix!

          By the way, kudos on lessfs. Even with this bug, I am still happy with it.
