Lessfs-0.8.2 is available for download.

Lessfs-0.8.2 is mostly a bugfix release.
It fixes a bug that caused lessfsck and mklessfs to segfault when compiled
with encryption support but with encryption disabled in the config, and a
similar bug that caused lessfs itself to segfault on umount.
lessfsck, listdb and mklessfs are now installed in /usr/sbin
instead of /usr/bin.


9 Responses to Lessfs-0.8.2 is available for download.

  1. Chris-U says:

    What about my suggestion from September 14th?

    ———————————
    What about a deletion queue? An additional database that stores the ids/hashes of the blocks that should be deleted.
    So when you delete a file, first the metadata is removed, so the file is no longer visible in the filesystem. Second, the entries are written to the deletion queue. Finally, when no jobs other than deletion are pending, the blocks in the deletion queue are checked and deleted.

    This method would make it possible to handle files and use the filesystem like a traditional filesystem, and no separate purge job would be necessary.
    ———————————
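Chris's two-phase scheme (unlink the metadata immediately, reclaim data blocks later when the filesystem is idle) can be sketched in a few lines. This is a hypothetical toy model, not lessfs internals; the refcount handling is an assumption, since a deduplicating store can only free a block once no file references it:

```python
from collections import deque

class DedupStore:
    """Toy model of a two-phase delete: unlink is instant,
    block reclamation is deferred to an idle-time purge."""

    def __init__(self):
        self.metadata = {}          # filename -> list of block hashes
        self.blocks = {}            # block hash -> (refcount, data)
        self.delete_queue = deque()

    def delete_file(self, name):
        # Phase 1: drop the metadata so the file disappears at once,
        # then queue its block hashes for later reclamation.
        for h in self.metadata.pop(name):
            self.delete_queue.append(h)

    def purge(self):
        # Phase 2: run when no other jobs are pending; a block is
        # only freed once no remaining file references it.
        while self.delete_queue:
            h = self.delete_queue.popleft()
            refs, data = self.blocks[h]
            if refs <= 1:
                del self.blocks[h]
            else:
                self.blocks[h] = (refs - 1, data)
```

The unlink path touches only the metadata database, so delete latency stays flat no matter how large the file is.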

    It would be great if something like that were available in v1.0, too.
    At the moment I have stopped testing lessfs because I cannot really use it without a better solution for deleting files.
    In addition, it seems that only one file operation at a time is possible on a single lessfs. When I try to do another file operation on the lessfs, it does not respond until the first operation is completed.

    Has anybody tested lessfs on solid state disks? It wouldn't be useful for backup purposes, but it would make efficient use of those expensive disks when they are used for energy saving.

    Regards, Chris

    • Mark Ruijter says:

      Hi Chris,

      Did you try lessfs with the file_io backend?
      Deleting files with file_io is much faster because it just marks deleted
      blocks in the freelist database.
      This actually comes very close to the solution that you are suggesting.
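The freelist approach amounts to one bookkeeping entry per deleted block, with later writes reusing the freed slots. A rough sketch, hypothetical rather than the actual file_io code:

```python
class FileIOBackend:
    """Toy model of freelist-based deletion: freeing a block is a
    single bookkeeping entry, and later writes reuse freed slots."""

    def __init__(self):
        self.data = {}         # offset -> block bytes
        self.freelist = set()  # offsets available for reuse
        self.next_offset = 0

    def delete_block(self, offset):
        # Deletion just records the offset; no data is moved.
        self.freelist.add(offset)

    def write_block(self, block):
        # Prefer a freed slot; otherwise append at the end.
        if self.freelist:
            offset = self.freelist.pop()
        else:
            offset = self.next_offset
            self.next_offset += 1
        self.data[offset] = block
        return offset
```

Because nothing is rewritten or compacted on delete, the cost is independent of file size, which is why file_io deletes so much faster than the tc backend.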

      My question would be: does file_io solve your problem?

      Mark.


      P.S. I will take a good look at optimizing deletion behavior in
      combination with the tc backend.
      I agree that the current situation is 'less than optimal'. For now I
      am working on fsck for the file_io backend,
      but this issue should be resolved before 1.0 comes out.

  2. Areq says:

    How fast should lessfsck be?

    I ran it on a 15 GB lessfs a few days ago and it is still working…
    (Phase 1 the first day, and now Phase 2)

    config: http://pld.pastebin.com/f2e0e5456

    CPU P4 3.00GHz, 750MB RAM. SATA Seagate 7200.10 320GB

    No other process is running on this machine now.

    # dd if=blockdata.tch of=/dev/null bs=1M
    12806+1 records in
    12806+1 records out
    13428124894 bytes (13 GB) copied, 245.718 s, 54.6 MB/s
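The dd figures are self-consistent; note that dd reports decimal megabytes:

```python
# Values taken from the dd output above.
bytes_copied = 13428124894
seconds = 245.718

rate_mb_s = bytes_copied / seconds / 1_000_000  # dd uses decimal MB
print(round(rate_mb_s, 1))  # 54.6, matching dd's report
```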

    xfs, 2.6.27.12-1, i686, libfuse-2.8.0-1.i686

    USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
    root 2489 76.3 5.7 433916 44060 pts/1 R+ Nov10 5653:15 /usr/sbin/lessfsck -o -c /etc/lessfs/backup.cfg

    # strace -f -p 2489 shows a lot of pread64(8, …); fd 8 is mta/dirent.tcb

    • maru says:

      Hmm, lessfsck running for days is not what I intended. Don't get me wrong, lessfsck will be time consuming, but days is way too long.

      How did you mount lessfs?
      Did you use big_writes and a larger than 4k blocksize?
      I will do some testing with a big database to see what happens.

      Mark

      • Areq says:

        It was mounted without any parameters:
        lessfs /etc/lessfs/backup.cfg /mnt
        blocksize 4k

        Now it is on Phase 3: Check for orphaned inodes.

        • Szycha says:

          My lessfsck is stuck in a dead loop :-(

          $ ls -al /proc/18096/fd/3
          /proc/18096/fd/3 -> /var/bk/mta/fileblock.tch

          $ strace -p 18096 -s 90 2>&1 | head -10
          Process 18096 attached - interrupt to quit
          pread64(3, "\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234", 40, 149089050) = 40
          pread64(3, "\260>\20\30j\245\4\2\277Y\242g\220\320\241\225\277+", 48, 154400320) = 48
          pread64(3, "\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234", 40, 149089050) = 40
          pread64(3, "\260>\20\30j\245\4\2\277Y\242g\220\320\241\225\277+", 48, 154400320) = 48
          pread64(3, "\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234", 40, 149089050) = 40
          pread64(3, "\260>\20\30j\245\4\2\277Y\242g\220\320\241\225\277+", 48, 154400320) = 48
          pread64(3, "\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234", 40, 149089050) = 40
          pread64(3, "\260>\20\30j\245\4\2\277Y\242g\220\320\241\225\277+", 48, 154400320) = 48
          pread64(3, "\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234", 40, 149089050) = 40
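The trace alternates between the same two (offset, size) pairs, which suggests two records in fileblock.tch pointing back at each other. Feeding the logged offsets into a small period check makes the loop explicit (illustrative only, not part of lessfsck):

```python
# (offset, size) pairs taken from the pread64 lines above.
reads = [(149089050, 40), (154400320, 48)] * 4 + [(149089050, 40)]

def find_cycle(seq):
    """Return the shortest period if the sequence repeats, else None."""
    for period in range(1, len(seq) // 2 + 1):
        if all(seq[i] == seq[i + period] for i in range(len(seq) - period)):
            return period
    return None

print(find_cycle(reads))  # 2: the traversal is stuck on two records
```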

          I am afraid I will have to break off this joyful activity and mount the unchecked lessfs right away.


          Your testfield reporter,
          (-) Szycha.

  3. Szycha says:

    Hi,

    Thanks for this excellent tool. It is really something: the first de-duplicating GPL'ed filesystem.

    I am trying to use lessfs on a 1 TB USB hard drive and it shows 'suboptimal' performance of about 256 kB/s (2.6.24.5-smp on an E8200 @ 2.66 GHz) when copying a ~60 GB disk image.

    My setup:
    lessfs 0.8.0
    configured with:
    ./configure '--with-crypto' '--with-lzo' '--with-sha3' '--prefix=/usr' 'CFLAGS=-O3 -mtune=native -mmmx -msse -msse2 -msse3 -ffast-math -fomit-frame-pointer -mfpmath=sse,387'

    lessfs.cfg:
    DEBUG = 1
    HASHLEN = 24
    BLOCKDATA_PATH=/var/bk/dta
    BLOCKDATA_BS=10485760
    BLOCKUSAGE_PATH=/var/bk/mta
    BLOCKUSAGE_BS=10485760
    DIRENT_PATH=/var/bk/mta
    DIRENT_BS=10485760
    FILEBLOCK_PATH=/var/bk/mta
    FILEBLOCK_BS=10485760
    META_PATH=/var/bk/mta
    META_BS=10485760
    HARDLINK_PATH=/var/bk/mta
    HARDLINK_BS=10485760
    SYMLINK_PATH=/var/bk/mta
    SYMLINK_BS=10485760
    FREELIST_PATH=/var/bk/mta
    FREELIST_BS=10485760
    CACHESIZE=640
    COMMIT_INTERVAL=300
    LISTEN_IP=127.0.0.1
    LISTEN_PORT=100
    MAX_THREADS=2
    DYNAMIC_DEFRAGMENTATION=off
    COREDUMPSIZE=256000000
    SYNC_RELAX=0
    ENCRYPT_DATA=off
    ENCRYPT_META=on

    /var/bk is 1 TB LUKS partition formatted with XFS:
    # xfs_info /var/bk
    meta-data=/dev/mapper/bak isize=256 agcount=4, agsize=61047468 blks
    = sectsz=4096 attr=2
    data = bsize=4096 blocks=244189871, imaxpct=25
    = sunit=0 swidth=0 blks
    naming =version 2 bsize=4096
    log =internal bsize=4096 blocks=32768, version=2
    = sectsz=4096 sunit=1 blks, lazy-count=0
    realtime =none extsz=4096 blocks=0, rtextents=0

    (I was unable to set up internal lessfs encryption for an unknown reason; that's why I use LUKS.)

    lessfs is mounted with:
    mount |grep lessfs:
    lessfs on /backup type fuse.lessfs (rw,nosuid,nodev,max_read=4096)

    ps awwxf |grep lessfs:
    lessfs /etc/lessfs.cfg /backup/ -o max_write=4096,max_read=4096,max_readahead=256,big_writes

    /bin/dd_rescue -d img-65gb /backup/2009-1112-2123/img-65gb
    dd_rescue: (warning): O_DIRECT requires hardbs of at least 4096!
    dd_rescue: (warning): We don’t handle misalignment of last block w/ O_DIRECT!
    dd_rescue: (info): ipos: 68157440.0k, opos: 68157440.0k, xferd: 68157440.0k
    errs: 0, errxfer: 0.0k, succxfer: 68157440.0k
    +curr.rate: 16450kB/s, avg.rate: 436kB/s, avg.load: 0.1%
    dd_rescue: (info): img-65gb (68157440.0k): EOF

    /bin/dd_rescue -d img2-60gb /backup/2009-1112-2123/img2-60gb
    dd_rescue: (warning): O_DIRECT requires hardbs of at least 4096!
    dd_rescue: (warning): We don’t handle misalignment of last block w/ O_DIRECT!
    dd_rescue: (info): ipos: 37608448.0k, opos: 37608448.0k, xferd: 37608448.0k
    errs: 0, errxfer: 0.0k, succxfer: 37608448.0k
    +curr.rate: 119kB/s, avg.rate: 255kB/s, avg.load: 0.1%

    (it is still in progress)

    I would like to have 4 kB blocks that match the sector size on the XFS source partitions in order to achieve the highest hit rate. (I cannot easily upgrade the kernel on that machine; this is the other reason.)

    What did I mix up?

    • maru says:

      Using a 4k blocksize kills lessfs performance.
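Some illustrative arithmetic (not lessfs internals) shows why: every block costs at least one hash computation and database lookup, so shrinking the blocksize multiplies the bookkeeping:

```python
image_bytes = 60 * 2**30  # the ~60 GB disk image from above

for bs in (4 * 2**10, 128 * 2**10):
    blocks = image_bytes // bs
    print(f"{bs // 1024:>3}k blocksize: {blocks:,} blocks to hash and look up")
```

At 4k that is roughly 15.7 million lookups versus about half a million at 128k, a 32x difference in overhead for the same payload.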

      One other thing: don't use big_writes in combination with 4k blocks. big_writes implies a larger than 4k blocksize.

      I am afraid that a kernel upgrade is inevitable.
      You are not the first to struggle with this, so I should probably work on a Howto.

      Mark

      • Szycha says:

        > I am afraid that a kernel upgrade is inevitable.

        It was. A similar data set now completed with an average speed of 3800 kB/s (compared to ~250 kB/s previously). It was still the first copy (I am trying to set up a snapshot-like backup system), so I may assume the speed will grow by a factor of four on subsequent copies.

        My configuration changed to kernel 2.6.31.6 and lessfs 0.8.3. I am wondering if I could use +1 concurrency (like in `make -j’).

        My hopes for backup are now restored. “A New Hope” I might say ;-)


        Cheers and thank you,
        (-) Szycha.
