TIER-0.2.0 has been released

Tier-0.2.0 adds crash recovery and some bug fixes.

A brief fio benchmark of tier, flashcache and bcache gave the results below. For each module the four lines are, in order, sequential read, random read, sequential write and random write, matching the job order in the fio configuration shown further down:
flashcache
read : io=16635MB, bw=56778KB/s, iops=14194 , runt=300017msec
read : io=872528KB, bw=2908.4KB/s, iops=727 , runt=300007msec
write: io=8237.5MB, bw=28117KB/s, iops=7029 , runt=300001msec
write: io=6038.4MB, bw=20611KB/s, iops=5152 , runt=300001msec

bcache
read : io=20480MB, bw=103370KB/s, iops=25842 , runt=202878msec
read : io=936760KB, bw=3122.4KB/s, iops=780 , runt=300014msec
write: io=15604MB, bw=53263KB/s, iops=13315 , runt=300001msec
write: io=6453.1MB, bw=22025KB/s, iops=5506 , runt=300016msec

tier
read : io=20480MB, bw=167819KB/s, iops=41954 , runt=124965msec
read : io=528236KB, bw=1760.8KB/s, iops=440 , runt=300012msec
write: io=20480MB, bw=172857KB/s, iops=43214 , runt=121323msec
write: io=5091.7MB, bw=17371KB/s, iops=4342 , runt=300141msec

The SSD used in this test had a size of 10GB while the SAS drive had a size of 100GB.

The fio configuration file that was used is:
[global]
bs=4k
ioengine=libaio
iodepth=4
size=20g
direct=1
runtime=60
directory=/mnt/fio
filename=test.file
[seq-read]
rw=read
stonewall
[rand-read]
rw=randread
stonewall
[seq-write]
rw=write
stonewall
[rand-write]
rw=randwrite
stonewall
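
The job file above was run once per module; the invocation would look roughly like this (the device node and job file name are assumptions of mine, while the mount point matches the directory= setting above):

mkfs.ext4 /dev/<device>            # <device> is the tier/flashcache/bcache block device under test
mount /dev/<device> /mnt/fio       # must match directory=/mnt/fio in the job file
fio bench.fio                      # bench.fio contains the configuration shown above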

TIER-0.1.7 has been released.

This version of tier comes with some major changes. The caching layer has been removed from the code; EPRD can be used in cases where caching is needed. The block size has also been changed: TIER now uses a 1MB block size, which greatly reduces the amount of metadata that has to be stored. TIER will now automatically migrate data between the different tiers. The policy that determines when a block should be migrated is still hard-coded in this release, but it will become adjustable per tier in future releases. TIER will detect unclean shutdowns and unfinished migrations after an unclean shutdown; however, this release does not yet handle recovery.
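
To illustrate why a larger block size shrinks the metadata (my own back-of-the-envelope numbers, assuming one translation entry per block, not a description of TIER's internal format): a 1TB device needs 256 times fewer entries with 1MB blocks than with 4KB blocks.

echo $(( 1024 * 1024 * 1024 / 4 ))   # 1TB / 4KB blocks = 268435456 entries
echo $(( 1024 * 1024 ))              # 1TB / 1MB blocks = 1048576 entries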


Introducing TIER

Tier is a Linux kernel module that can be used to create a block device that provides automatically tiered storage. Tier can aggregate up to 16 devices into one virtual device. Tier investigates access patterns to decide on which device data should be stored. It keeps track of how frequently data has been accessed as well as when it was last used. Tier uses this information to decide whether data should be placed on, for example, SSD, SAS or SATA.

One advantage of tier compared to SSD caching alone is that the total capacity of the tiered device is the sum of all attached devices. Kernel modules like flashcache use the SSD only as a cache, so the capacity of the SSD is not available as part of the total size of the device. For example, a 10GB SSD tiered with a 100GB disk yields a 110GB device, whereas a cache-only setup would still expose only 100GB.

Since TIER incorporates the RAM caching techniques of EPRD it is very fast, even faster than what can be achieved with an SSD alone.

To get an impression of TIER's performance I tested it in the following configuration:
an Intel SSD of 160GB is used as the first tier, and the second tier is made up of 6 * 300GB SAS drives in software RAID10.
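
The post does not show how /dev/md1 was created; a typical mdadm invocation for a six-disk RAID10 would look roughly like this (the SAS device names are placeholders, not taken from the post):

mdadm --create /dev/md1 --level=10 --raid-devices=6 /dev/sd[c-h]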

The Iometer test that was used comes from: http://vmktree.org/iometer/
Tier was configured with these parameters:

./tier_setup -f /dev/sdb:/dev/md1 -p 1000M -m 5 -b -c
                              TIER - SSD  - MD1(R10)
Max-throughput-100%read    : 32540 - 3796 - 2746
Reallife-60%rand-65%read   : 1927  - 3185 - 226
Max-Throughput-50%read     : 6890  - 1753 - 470
Random-8k-70%read          : 937   - 2870 - 401

As the results table above shows, TIER outperforms the MD RAID10 in all tests. The SSD is faster in most cases, but not all. TIER can outperform the SSD because it was configured to use 1GB of RAM for caching and because it benefits from the sequential read and write speed advantage that RAID10 provides.

tier-iometer


EPRD & lessfs

To get an idea of the efficiency of EPRD caching I repeated the lessfs benchmark test with EPRD caching the Intel 320 SSD.

The Intel 320 SSD was registered as /dev/sdc.
EPRD was set up like this: ./eprd_setup -f /dev/sdc -m 3 -b -p 2048M
The databases eventually reached a size of 8.5GB during this test.

Lessfs with and without EPRD

Lessfs with and without EPRD 2nd write

As the graphs show, a user space application like Lessfs speeds up with EPRD even when it is used to cache a relatively fast medium like an Intel 320 SSD. I intend to test EPRD with a number of other applications as well. Candidates that come to mind are, for example, OpenLDAP and MySQL.


Lessfs-1.5.12 performance

Introduction

People frequently ask what performance they may expect from Lessfs. This article gives an indication of what to expect.

About the hardware

All the tests are done using an Intel 5520HC system board with a single E5520 processor @ 2.27GHz. The metadata is written to an Intel 320 SSD while the data is written to 5 Hitachi HUA722010CLA330 SATA drives attached to an LSI MegaRAID controller in RAID5. The maximum transfer speed to the volume on the LSI controller is approximately 400MB/sec. When I tested the same drives with Linux software RAID5 I found it hard to get more than 250MB/sec out of them. The number of IOPS you can get from the drives with software RAID is even worse, so for now I will stick to using hardware RAID.

Installing lessfs

In this test we will set up lessfs with file_io and hamsterdb 2.0.1. After downloading and installing hamsterdb-2.0.1 we start by downloading lessfs.
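
The hamsterdb build itself is not covered in this post; with the hamsterdb-2.0.1 tarball downloaded, the usual autotools sequence should do (a sketch of my own, not taken from the post):

tar xvzf hamsterdb-2.0.1.tar.gz
cd hamsterdb-2.0.1
./configure
make -j4
make install    # installs the library that the --with-hamsterdb option of lessfs will pick up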

wget http://sourceforge.net/projects/lessfs/files/lessfs/lessfs-1.5.12/lessfs-1.5.12.tar.gz
tar xvzf lessfs-1.5.12.tar.gz
cd lessfs-1.5.12
./configure --with-hamsterdb --with-snappy
make -j4

In this example the RAID5 volume on the LSI RAID controller is mounted on /data.
The SSD is mounted on /data/mta.

The configuration file used in this example can be downloaded here: lessfs.cfg

After downloading lessfs.cfg you will need to copy it to /etc.

Please make sure that the directories /data/dta and /data/mta exist.
Now we can format lessfs and mount the filesystem:

./mklessfs -c /etc/lessfs.cfg
./lessfs /etc/lessfs.cfg /mnt

If everything went right you should now have lessfs mounted on /mnt.
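
A quick way to verify the mount (generic commands, not part of the original post):

mount | grep lessfs    # the FUSE mount should show up on /mnt
df -h /mnt             # reported size and usage of the lessfs filesystem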

I now use a little tool to write 3000 1GB files to lessfs. The files cannot be compressed and all have unique content. The second pass writes files that are 100% identical to those of the first pass and will therefore be written at a much higher speed. After this the first files are read back from lessfs. This is the result:

Continue reading


Lessfs-1.5.11

Lessfs-1.5.11 now allows users to specify the cache size that hamsterdb will use internally.
This version also solves a bug in configure.ac that would cause configure with --disable-debug to actually enable debugging. This bug caused users to report very low performance in a number of cases.
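
If you ran into the low-performance builds caused by this bug, rebuilding with 1.5.11 and debugging explicitly disabled should restore performance; for example, with the same options used in the 1.5.12 walkthrough earlier on this page (adjust the --with options to your own build):

./configure --with-hamsterdb --with-snappy --disable-debug
make -j4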


Lessfs-1.5.10 has been released

Lessfs-1.5.10 adds support for hamsterdb-2.0.1. A small change in the Hamsterdb API makes the transition from the 1.x series not completely transparent. Do not use Lessfs with hamsterdb-2.0 since it comes with a nasty bug. Please use the latest hamsterdb-2.0.1.

Unlike Berkeley DB, hamsterdb does not suffer from the performance degradation that Berkeley DB shows when the databases become large. Hamsterdb 2.X performance is considerably better than Berkeley DB or even Tokyocabinet. The code is, however, not as well tested or as widely used as the others.

Choose wisely ;-)

Mark


EPRD – An eventually persistent ramdisk / disk cache

Today I uploaded a kernel project that I call eprd. This kernel module allows you to create a persistent RAM disk. It can also be used to cache disk IO with DRAM. Of course this comes with all the dangers that volatile RAM introduces. EPRD does however support barriers: when barriers are enabled, any sync() on the file system will result in EPRD flushing all dirty buffers to disk. It also allows you to set a commit interval for flushing dirty buffers to disk.
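
As an illustration of the barrier behaviour described above (the eprd_setup invocation is copied from the EPRD & lessfs test earlier on this page; the filesystem and mount point are placeholders of my own):

./eprd_setup -f /dev/sdc -m 3 -b -p 2048M
# after creating a filesystem on the EPRD device and mounting it on /mnt/eprd:
cp somefile /mnt/eprd/
sync    # with barriers enabled, the sync causes EPRD to flush its dirty buffers to the backing disk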

This project can be useful whenever one needs more IOPS in a non-critical environment. There is more to this project though: I am working on a kernel based high performance deduplicating block device, which will share code and ideas with both EPRD and Lessfs.

Enjoy,

Mark Ruijter


Lessfs-1.6.0-beta1 has been released

This version of Lessfs contains some minor bug fixes as well as something new. Although my previous post stated that no new features would be added to the 1.x series, this release actually adds one: Lessfs now supports the LZ4 compression algorithm. Adding support for new compression methods to lessfs is not much work at all, and there have been a number of votes for adding LZ4 so that it can be compared with Google's snappy compression.

I have not yet tested LZ4 on high-end hardware. However, even on my laptop it is clear that LZ4 outperforms snappy. With the hardware being the bottleneck, LZ4 still manages to speed things up by 2-5%. Most likely the difference will be larger when fast hardware is used. The system that I use for performance testing has Berkeley DB stored on SSD and the data on a fast RAID5 array containing 8 SATA drives.

I will post the exact performance numbers on low and high end hardware after testing has finished.

Enjoy,

Mark


Lessfs-1.6.0-beta0 has been released.

This version of Lessfs comes with a significant number of changes.
The multifile_io backend is now fully functional with replication.

By default Lessfs will now compile with Berkeley DB instead of Tokyocabinet. Lessfs requires Berkeley DB >= 4.8.

Batch replication has been extensively tested and improved. Some nasty problems that could occur when either the master or the slave suffered an unclean shutdown have been solved.

In the SPEC files snappy compression is now enabled by default.

Lessfs-1.6.0 will be the last of the 1.x series releases to introduce new features. From now on the Lessfs-1.x series will remain frozen and new releases will only contain bug fixes.
