Lessfs-1.1.3 is available.

This release fixes a problem where lessfs would leave orphaned data chunks in the system under high load.
A number of issues with lessfsck have also been resolved.


16 Responses to Lessfs-1.1.3 is available.

  1. Nix says:

    Am I the only person to be seriously unimpressed by the decision of the mhash authors to reimplement memcpy(), strlen() et al? With dog-slow, buggy, pure-C replacements that lack e.g. GCC __malloc__ attribute decoration and so impede GCC’s optimization of their callers, as well? I’m frankly amazed mhash is fast. (It also appears to be pretty much completely maintenance-dead, or I’d submit a patch to rip these things out on glibc platforms and turn them into macros that call the functions from libc.)

    A general rule for library authors: if you think you’re smart enough to reimplement memcpy(), you aren’t smart enough to reimplement memcpy(). (Among other things, *any* portable implementation will be slower than both the version the compiler can generate for you and the version in the system C library. Insanely much slower in the case of recent glibcs, which can use things like SSE4.2 to compare/copy huge chunks at a time directly in hardware…)
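
    A minimal sketch of the kind of glibc-platform patch described above. The mutils_* names are an assumption about mhash's internal wrapper API; the exact identifiers in its headers may differ:

    #include <string.h>   /* glibc memcpy()/memset()/strlen() */

    /* On glibc platforms, forward mhash's private helpers straight to libc
     * instead of compiling the portable pure-C fallbacks.  The mutils_*
     * names below are assumptions about mhash's internal API. */
    #ifdef __GLIBC__
    # define mutils_memcpy(dst, src, n)   memcpy((dst), (src), (n))
    # define mutils_memset(dst, c, n)     memset((dst), (c), (n))
    # define mutils_strlen(s)             strlen((s))
    #endif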

  2. Pete says:

    I think that I have finally figured out a pattern that causes the lessfs database corruption that has prevented me from seriously using the 1.1.x versions. I believe that the database becomes corrupt if a file transfer is interrupted.

    I was running a Windows virtual machine, and copying a file to a lessfs share when my VM was abruptly restarted mid-transfer. After that, my lessfs database became corrupted. Is anyone else experiencing a similar issue?

    • Mark Ruijter says:

      Hi Pete,

      A subject like this tends to attract my attention.
      Which lessfs version are you using?

      Were you using Samba to copy the file from Windows to lessfs?
      Was the VM running from lessfs? Is there anything in /var/log/messages or is there a coredump?

      “After that my database became corrupt”
      What version of tc are you running?
      Did you enable transactions?

      Did you try lessfsck?

      • Chris says:

        Hi,
        just to let you know: I have had the same problem many times, but I just assumed it was my own fault ;)
        I do not have any dumps or logs because I always reverted to a previous snapshot, but the problem always occurred when Samba transfers from my Windows machines into the lessfs VM hung somehow (host, VM, network…).
        I am using fuse 2.8.1, tc 1.4.21.
        This has happened to me from about lessfs 0.5 until 0.8; since then my setup has been stable enough, so I can’t say whether it is still present.
        Chris

      • Pete says:

        Mark,
        I am using libtokyocabinet 9.8.0. I tried running lessfsck, but it crashed as soon as it started. I tried mounting lessfs, but that failed. I tried mounting with debug enabled, but it printed some basic info and then exited without any error message. No crash dump was created. I am using lessfs 1.1.4, which I just downloaded today. Since lessfs did not give me any debug info, I resorted to running mklessfs -f to reformat the share and start fresh. Also, the du command still fails to show the proper file size of files created via Samba; I am unsure whether the problems are related.

        -pete

        • maru says:

          The du issue is not the source of the problems. Since you reformatted the filesystem, it is now difficult to track down what went wrong. If it crashes again, can you check whether there is any information about the crash in /var/log/messages or one of the other syslog files?

          Lessfs-1.1.4 will autorepair the databases when needed. It should log this in /var/log/messages like:
          Could not open database : /data/mta/hardlink.tcb, automatic recovery is in progress : this is bad!

          lessfsck -o -f -c /etc/lessfs.cfg should repair and check the filesystem.

          • Pete says:

            I did not have the syslog service running at the time. I started it now and am trying to reproduce the issue.

  3. Pete says:

    :) I was able to reproduce it. I started copying a 15GB file onto a lessfs share via Samba. About 25% of the way through the transfer, I clicked cancel. The cancel dialog froze, and everything became unresponsive. I am unable to access lessfs, even from the console. When I tried to restart the lessfs service, it crashed on me. I do have a core dump now. I also noticed the following in my syslog:

    Jul 19 11:53:49 localhost lessfs[12415]: The selected data store is tokyocabinet.
    Jul 19 11:53:49 localhost lessfs[12415]: Lessfs transaction support is enabled.
    Jul 19 11:53:49 localhost lessfs[12415]: Hash MHASH_TIGER192 has been selected
    Jul 19 11:53:49 localhost lessfs[12415]: Lessfs uses a 24 bytes long hash.
    Jul 19 11:53:49 localhost lessfs[12415]: Lessfs fsync does not sync the databases to the disk when fsync is called on an inode
    Jul 19 11:53:49 localhost lessfs[12415]: Automatic defragmentation is enabled.
    Jul 19 11:53:49 localhost lessfs[12415]: cache 400 data blocks
    Jul 19 11:55:23 localhost lessfs[12415]: segfault at 0 ip b7f254dd sp bfb79140 error 4 in libtokyocabinet.so.9.8.0[b7eef000+7e000]
    Jul 19 11:55:23 localhost lessfs[12415]: Exit signal received, exitting

    • Pete says:

      Wow, now my syslog is filling up with dozens and dozens of lines like this one:

      Jul 19 12:01:05 localhost lessfs[11708]: delete_dbb: failed to delete 17-15160 reason : no record found

      • Pete says:

        One more update… when I restarted lessfs this time, it said that it had to roll back. It did that and was able to start again. It could not open fileblock.tch or blockusage.tch until it had rolled back. The number after “failed to delete” kept decreasing sequentially, from 17-15804 until it reached 17-0. After it hit zero the messages stopped, so I killed the lessfs process and restarted it. That’s when it did the rollback and came back up in a usable state.

  4. wxp says:

    When I use lessfs 1.1.4, I don’t know why this happens:
    #mklessfs /etc/lessfs.cfg /fuse/
    #lessfs /etc/lessfs.cfg /fuse/ -d

    FUSE library version: 2.8.1
    nullpath_ok: 0
    unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
    INIT: 7.12
    flags=0x0000007b
    max_readahead=0x00020000
    INIT: 7.12
    flags=0x00000031
    max_readahead=0x00020000
    max_write=0x00020000
    unique: 1, success, outsize: 40

    and then I copy my /home directory to /fuse (in another terminal):
    #cp -r /home/ /fuse

    It takes a very long time, and /home is 586M, but in /fuse the size of home/ is 2.0G.

    Why is that?

  5. wxp says:

    Hi, first I do the following:
    #mklessfs /etc/lessfs.cfg /fuse/
    #lessfs /etc/lessfs.cfg /fuse/

    then:

    # cd /home/wxp/test
    # ls
    test1 test4

    cd ..
    # du -sh test
    12K test

    I copy test/ from /home/wxp to the /fuse directory:

    # cp -r test/ /fuse/test_lessfs/

    # cd /fuse/test_lessfs/test
    # ls
    test1 test4

    #cd ..
    # du -sh test
    257K test

    However, when I tested with a single file:

    # tar -cvf linux_kernel.bak.tar linux-2.6.33
    # du -sh linux_kernel.bak.tar
    1.4G linux_kernel.bak.tar

    —copy to /fuse
    # cp linux_kernel.bak.tar /fuse/tar
    # du -sh linux_kernel.bak.tar
    1.4G linux_kernel.bak.tar

    Can anyone tell me why? Thank you!

    • Alex says:

      Hi wxp,

      I guess the different sizes you see for the small files are due to the blocksize you chose for storing your data.

      If you use a 128K blocksize, a single small file will still occupy one full block, which is at least 128K because of your settings. This matters less for bigger files, because they are spread over multiple blocks anyway.

      To be clear: if you store a 129K file, it will use 2 × 128K blocks, so at least about 256K for one 129K file (see the sketch below). That’s why you have to choose settings that fit your needs, as you may lose efficiency on really small files.
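
      A quick back-of-the-envelope C sketch of that arithmetic; the 128K blocksize is just the assumed setting from the example above:

      #include <stdio.h>

      int main(void)
      {
          const long long blocksize = 128 * 1024;   /* assumed 128K blocksize */
          const long long filesize  = 129 * 1024;   /* the 129K example file  */

          /* round up: how many blocks are needed to hold the file */
          long long blocks = (filesize + blocksize - 1) / blocksize;

          printf("%lld bytes -> %lld block(s) -> %lld bytes stored\n",
                 filesize, blocks, blocks * blocksize);   /* 132096 -> 2 -> 262144 */
          return 0;
      }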

      By the way, on lessfs 1.0.8, storing files smaller than the blocksize made the filesystem so slow it was nearly unusable.

      I hope I’ve explained this correctly, and I hope this helps.

    • maru says:

      Did you check .lessfs/lessfs_stats to see the real space that these files consume?
      The du statistics are something that I am working on. lessfs_stats will give you better insight for now. You can also check the size of the actual lessfs databases.

      • wxp says:

        First, thank you very much!
        Alex, I think you are right, and thank you for your help.
        I tested with a directory that contains many small files, and it behaves as you said: a 129K file uses 2 × 128K blocks, so at least about 256K for one 129K file.

        When I set BlockSize = 4K (4096) and copy a directory, which also contains many small files, to /fuse, the results are: 8.3M -> 8.2M and 38M -> 37M.

        Now I have another question: when I read the lessfs_stats file, I find that the size of each file is about half that of the source file, but the total size of the directory is almost the same as the source directory.

        Why is that?

        • maru says:

          For now lessfs reports the actual filesize to du and not the compressed filesize. I am thinking about changing this so that du reports the compressed space instead.
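
          For context, du sizes a file from st_blocks (counted in 512-byte units) rather than from st_size, so reporting compressed space boils down to filling st_blocks from the compressed byte count. A small stand-alone C sketch showing the two numbers du has to choose between; this is only an illustration, not lessfs code:

          #include <stdio.h>
          #include <sys/stat.h>

          /* Print the apparent size (st_size) and the allocated size that
           * du reports by default (st_blocks * 512) for a given file. */
          int main(int argc, char **argv)
          {
              struct stat st;

              if (argc < 2) {
                  fprintf(stderr, "usage: %s <file>\n", argv[0]);
                  return 1;
              }
              if (stat(argv[1], &st) != 0) {
                  perror("stat");
                  return 1;
              }
              printf("apparent size (st_size)    : %lld bytes\n", (long long)st.st_size);
              printf("du default (st_blocks*512) : %lld bytes\n", (long long)st.st_blocks * 512);
              return 0;
          }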
