What’s in a name
TIER has been renamed to BTIER to improve the relevance of results returned by search engines.
BTIER current status
The btier code is now at 0.9.9.2 and I hope to release a stable 1.0 version within weeks.
BTIER performance tested with VMware I/O Analyzer 1.5.0
To test the current performance of BTIER I conducted the following test. A server with a single STEC Zeus drive and an LSI controller with 5 Hitachi SAS drives is used to export a btier volume via iSCSI (SCST).
BTIER Server : Supermicro
Processor : E5606 @ 2.13GHz
Memory : 8GB
iSCSI network : 2 * 10Gbe
LSI controller : MegaRAID SAS 9280-4i4e ( 5 * Hitachi SAS in RAID 5)
LSI controller : SAS2008 PCI-Express Fusion-MPT SAS-2 (1 * STEC Zeus 800GB SSD)
The native IOPS performance of the 5 Hitachi drives in RAID 5 is approximately 375 IOPS for writes. The native performance of the SSD can be found in the STEC ZeusIOPS specifications.
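For readers who want to establish a similar baseline themselves, the raw random-write IOPS of a backing device can be measured with fio. The command below is only a sketch: the device path is a placeholder and the parameters are assumptions, not the exact command used for the 375 IOPS figure.
# WARNING: this writes directly to the device and destroys any data on it
fio --name=raw-randwrite --filename=/dev/sdX --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based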
VMware server : Intel 2500HC
VMware version : 5.1.0
VMware I/O Analyzer : http://labs.vmware.com/flings/io-analyzer
iSCSI NIC : 2 * 10Gbe
In this test both bcache and btier are used to get an idea of how btier compares with similar projects.
bcache was set up with these commands:
make-bcache -B /dev/sda
make-bcache -C -b1M /dev/sdd
modprobe bcache
echo /dev/sda >/sys/fs/bcache/register
echo /dev/sdd >/sys/fs/bcache/register
ls /sys/fs/bcache/
echo a38f0944-e439-4607-8222-7f5dfbbcf05e >/sys/block/sda/bcache/attach
echo 1 >/sys/block/sda/bcache/writeback_running
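As an optional sanity check (the exact sysfs paths can vary between kernel versions), the cache mode and dirty state of the backing device can be inspected, and writeback caching enabled explicitly if it is not already the default:
cat /sys/block/sda/bcache/cache_mode
cat /sys/block/sda/bcache/state
echo writeback >/sys/block/sda/bcache/cache_mode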
Setting up btier:
insmod ./btier.ko
./btier_setup -f /dev/sdd:/dev/sda -c
echo 0 >/sys/block/sdtiera/tier/sequential_landing
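The sequential_landing attribute used above is exported by the btier module itself; to see which tuning attributes are available in your build (names may differ between btier versions), the tier directory can simply be listed:
ls /sys/block/sdtiera/tier/
cat /sys/block/sdtiera/tier/sequential_landing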
And finally SCST:
setup_id 0x1234
HANDLER vdisk_blockio {
DEVICE disk01 {
t10_dev_id "v-crsimp01 e951d814"
usn e951d814
# ONE OF THESE
#filename /dev/bcache0
#filename /dev/sdtiera
WRITE_THROUGH
}
}
TARGET_DRIVER iscsi {
enabled 1
rel_tgt_id 1
TARGET iqn.2006-11.net.storagedata:tgt-ctrl02 {
LUN 0 disk01
allowed_portal 192.168.1.20
allowed_portal 192.168.2.20
enabled 1
}
}
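Assuming the snippet above is saved as /etc/scst.conf (the path is an assumption, not part of the original setup), it can be applied with scstadmin:
scstadmin -config /etc/scst.conf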
Two VMware guests were started with an Iometer IOPS workload: one guest doing 100% random reads and the other 100% random writes.
The test results are shown below.

[Figure: BTIER MAX IOPS]
[Figure: BCACHE MAX IOPS]
[Figure: BTIER MAX LATENCY]
[Figure: BCACHE MAX LATENCY]
Testing btier and bcache with fio
To ensure that the test results are valid I also tested both btier and bcache with fio.
---------------------------- BTIER ------------------------------
Jobs: 1 (f=1): [___w] [89.3% done] [0K/145.3M /s] [0 /36.4K iops]
read : io=12288MB, bw=435651KB/s, iops=108912 , runt= 28883msec
read : io=2398.6MB, bw=40935KB/s, iops=10233 , runt= 60001msec
write: io=12288MB, bw=498412KB/s, iops=124603 , runt= 25246msec
write: io=9218.6MB, bw=157306KB/s, iops=39326 , runt= 60006msec
----------------------------- BCACHE (writeback) ----------------
Jobs: 1 (f=1): [___w] [57.2% done] [0K/6541K /s] [0 /1597 iops]
read : io=10245MB, bw=174850KB/s, iops=43712 , runt= 60001msec
read : io=146684KB, bw=2443.9KB/s, iops=610 , runt= 60021msec
write: io=7253.4MB, bw=123785KB/s, iops=30946 , runt= 60003msec
write: io=2192.4MB, bw=37410KB/s, iops=9352 , runt= 60008msec
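The fio job file behind these numbers is not reproduced here. As a rough sketch, a comparable 4k random-write run against the btier device could look like the command below; the device path, queue depth and runtime are assumptions rather than the exact parameters used:
fio --name=rand-write --filename=/dev/sdtiera --rw=randwrite --bs=4k --ioengine=libaio --iodepth=4 --direct=1 --runtime=60 --time_based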
The fio test results confirm the results from the VMware I/O Analyzer test.
Just before finishing up this post I read the announcement of EnhanceIO.
A fio test shows that this project is serious competition:
eio_cli create -d /dev/sda4 -s /dev/sdd4 -m wb -c EIO
Jobs: 1 (f=1): [___w] [81.9% done] [0K/135.5M /s] [0 /33.9K iops] [eta 00m:51s]
read : io=12288MB, bw=253913KB/s, iops=63478 , runt= 49556msec
read : io=3885.4MB, bw=66303KB/s, iops=16575 , runt= 60001msec
write: io=7681.1MB, bw=131088KB/s, iops=32772 , runt= 60001msec
write: io=6639.5MB, bw=113312KB/s, iops=28327 , runt= 60001msec
Conclusion
If no major bugs are reported in the weeks to come, a stable btier release can be expected soon. btier performs very well and comes with more than enough features to justify a first major release.

You are awesome! Thanks for a great product.
Hi Mark,
Thanks also for the benchmark comparisons. Is it possible for you to test the same setup with ZFSonLinux using an SSD for ZIL and L2ARC?
I’m currently tossing up whether to use ZFSonLinux or btier.
Alternatively, are you able to provide the ‘fio’ command you use for your tests or, if it’s in another post, direct me to it so I can run the tests on my rig?
Thanks,
David
Actually, I found this in a previous post… is it still valid?
[global]
bs=4k
ioengine=libaio
iodepth=4
size=20g
direct=1
runtime=60
directory=/mnt/fio
filename=test.file
[seq-read]
rw=read
stonewall
[rand-read]
rw=randread
stonewall
[seq-write]
rw=write
stonewall
[rand-write]
rw=randwrite
stonewall
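For reference, if the job file above is saved under a name such as btier-test.fio (the filename is just an example), it is run with:
fio btier-test.fio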
Hi Mark,
kudos for such an awesome package as btier. We’re testing it on an HA setup (with DRBD) and so far it looks very promising. We were able to reach even better numbers than the ones you published.
As you write in Btier’s docs, “btier allows us to use either real blockdevices or files to be part of the tier.” Performance-wise, are there any differences between the two modes? Which one should be used in a production setup?
(Hope this is the right place to ask — feel free to redirect me to the SF mailing list if you prefer).
Best,
Corrado
If I use an SSD in front of a SATA drive and the SSD fails, is the data that exists on the SATA drive still available? Or is everything basically lost at that point? I guess I’m wondering what type of file system exists on the SSD and the SATA drives, and whether they can be read if they are detached from each other.