Lessfs-1.5.10 has been released

Lessfs-1.5.10 adds support for hamsterdb-2.0.1. A small change in the hamsterdb API means the transition from the 1.x series is not completely transparent. Do not use Lessfs with hamsterdb-2.0, which comes with a nasty bug; please use the latest hamsterdb-2.0.1.

Compared with Berkeley DB, hamsterdb does not suffer from the performance degradation that Berkeley DB shows when the databases grow large. Hamsterdb 2.x performance is considerably better than Berkeley DB's or even Tokyo Cabinet's. The code is, however, not as well tested or as widely used as the others.

Choose wisely ;-)

Mark


5 Responses to Lessfs-1.5.10 has been released

  1. Charles says:

    This is interesting; I will give it a go, as write performance is very important to me.

    I’m wondering what hardware spec is required for high-end performance.

    Using Berkeley DB with the file_io backend I get about 60MB/s sustained write (from a random number data source which produces data at 170MB/s).

    Is 60MB/s reasonable for my setup?

    I have 3 x 2TB SATA drives – software RAID 5 (sustained write 240MB/s) – for the data
    I have an SSD card (sustained write 500MB/s) – for the BDB
    I have 16GB RAM and a 6-core CPU at 2.8GHz
    I have 1.5TB of data in the lessfs mount.

    As an observation lessfs doesn’t seem to use much RAM, I was wondering how to increase throughput by utilising RAM. I have tried to alter the DB_CONFIG settings to have a 4GB cache, but it does not seem to make a difference.

    I wondered if the write speed of the data area (240MB/s) was the bottleneck, so I created the data directory on another SSD (500MB/s), no improvement.

    Is it CPU bound?

    Is it because it is a fuse file system?

    Does anyone have any clues as to what the bottleneck in lessfs with BDB is?
    What hardware/setup would I need to get 100MB/s?

    • maru says:

      Hi Charles,

      Your hardware should allow better speeds than what you are reporting,
      although the 3 SATA drives will obviously not allow speeds up to 700MB/sec. I use 6 drives (Hitachi Ultrastar and an LSI controller with cache enabled) to get that speed.
      You will not be able to do more than 240MB/sec in this case.

      Since you have 6 CPU cores I would set MAX_THREADS so that it at least matches the number of CPUs in the system.
      Setting it higher will not harm; setting it lower does.
      MAX_THREADS=8

      What type of IO load are you generating?
      Are you getting 60MB/sec when copying lots of small files, or when you do something like dd if=/data/random.img of=/lessfs/random.img bs=1M?

      You should tune DB_CONFIG and set the cachesize as large as you can spare:
      set_cachesize 1 0 8
      The arguments are gbytes, bytes and ncache: this allocates a 1 GB cache divided into 8 regions.

      In lessfs.cfg
      CACHESIZE=512M is usually enough.
      You can play a bit with this value to see what works best.
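      The DB_CONFIG and lessfs.cfg changes above can be sketched as a small shell script. This is only a sketch: the file locations are assumptions (DB_CONFIG normally lives in the Berkeley DB home directory under the lessfs metadata path, lessfs.cfg usually in /etc), and a real lessfs.cfg has many more keys.

```shell
# Assumed locations -- adjust to where your lessfs metadata and config live.
DB_HOME=.                 # assumed Berkeley DB home directory
LESSFS_CFG=./lessfs.cfg   # assumed lessfs config file

# Berkeley DB cache tuning: set_cachesize takes gbytes, bytes, ncache.
cat > "$DB_HOME/DB_CONFIG" <<'EOF'
set_cachesize 1 0 8
EOF

# The lessfs.cfg values suggested above (a real config has more keys).
cat >> "$LESSFS_CFG" <<'EOF'
MAX_THREADS=8
CACHESIZE=512M
EOF
```

      Remember that DB_CONFIG is only read when the Berkeley DB environment is (re)opened, so remount lessfs after changing it.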

      You did not compile with CFLAGS set to -ggdb2 instead of -O2, did you? An unoptimized debug build is noticeably slower.

      • maru says:

        One more thing that I noticed in your configuration file:
        TUNEFORSIZE=HUGE

        Since you have plenty of memory, this setting may work against you.
        Can you try:
        TUNEFORSIZE=MEDIUM or even SMALL?

  2. Charles says:

    thx for getting back.
    I built it like this:
    ./configure --with-snappy --with-berkeleydb CFLAGS='-ggdb2'

    I have set lessfs.cfg:
    TUNEFORSIZE=MEDIUM
    MAX_THREADS=8

    I have set DB_CONFIG:
    set_cachesize 1 0 8

    similar performance tho.

    Hamsterdb actually performed slower, at 20MB/s.

    • Charles says:

      oh, nearly forgot – I am writing data like this:

      dd if=/dev/frandom of=./test.107.out bs=1M count=5000

      basically over and over to fill up the area. It holds around 60MB/s
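      The write pattern described above can be sketched as a loop. This is a sketch only: the mount point is an assumption, and /dev/frandom comes from a separate kernel patch, so stock systems have to substitute /dev/urandom (which is slower and may understate the filesystem's limit).

```shell
# Assumed lessfs mount point -- point this at your real mount.
MOUNT=${MOUNT:-./lessfs-test}
mkdir -p "$MOUNT"

# Repeatedly write fixed-size files of random data, as in the comment above.
# iflag=fullblock avoids short reads; dd's final status line (kept by tail)
# reports the achieved throughput for each pass.
for i in 1 2 3; do
    dd if=/dev/urandom of="$MOUNT/test.$i.out" bs=1M count=8 iflag=fullblock 2>&1 | tail -n 1
done
```

      Running several passes matters, because throughput can drop once the dedup databases fill up.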
