Introduction
This version of TIER comes with a significant number of changes. The metadata format has changed to support resizing of the TIER device, so this version of TIER is not compatible with previous releases.
New features
This version of TIER introduces support for resizing the underlying devices. When an underlying device grows, TIER can be instructed to grow with it.
LVM now works with TIER without modifications to the configuration of the system. The device name that TIER registers has changed from /dev/tierN to /dev/sdtierN. With the old name, the LVM device filters had to be changed before a tier device could be used with LVM. Although this is possible, it would have been inconvenient for most users. With the new name, pvcreate /dev/sdtiera works with most distributions out of the box.
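For example, building an LVM stack on top of a tier device now looks like it does for any other disk (the volume group and logical volume names below are only illustrative):

pvcreate /dev/sdtiera
vgcreate tiervg /dev/sdtiera
lvcreate -n datalv -l 100%FREE tiervg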
How resizing TIER works
# First create a tier device
insmod ./tier.ko
dd if=/dev/zero of=/data/ssd.img bs=1M count=100
dd if=/dev/zero of=/data/sas.img bs=1M count=150
./tier_setup -f /data/ssd.img:/data/sas.img -c
mkfs.xfs /dev/sdtiera
mount /dev/sdtiera /mnt
df /mnt
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/sdtiera      243008 12548    230460   6% /mnt

# Grow the underlying device, then instruct TIER to resize
truncate --size=10000M /data/sas.img
echo 1 >/sys/block/sdtiera/tier/resize
xfs_growfs /mnt
meta-data=/dev/sdtiera           isize=256    agcount=4, agsize=15488 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=61952, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=1200, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 61952 to 2585600
df /mnt
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/sdtiera    10337600 17764  10319836   1% /mnt
The example above uses files instead of LVM devices, but this also works when TIER is created on top of LVM devices.
./tier_setup -f /dev/mapper/meta-ssdlv:/dev/mapper/datavg-datalv -c

And afterwards:

lvextend -L+10G /dev/mapper/meta-ssdlv
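After growing the logical volume, TIER still has to be told to pick up the new size, and the filesystem on top can then be grown as well, exactly as in the file-based example above:

echo 1 >/sys/block/sdtiera/tier/resize
xfs_growfs /mnt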
Roadmap
The next feature that will be added to TIER is the ability to add, and even remove, underlying devices.
Performance will be enhanced by loading the metadata into memory whenever sufficient memory is available.
Redundant (meta)data and data checksumming are also planned.
Hi maru,
This is great. I am still doing research on optimizing cloud storage using a tiered storage method, and then I found your post. Where can I download your TIER software?
Regards
M. Aditya Haferush
Hi maru
I have downloaded tier.0.4.3, but I can't set it up. When I try to run ./test.sh I get these error messages:
insmod: error inserting './tier.ko': -1 Invalid module format
rm: cannot remove `/data/ssd.img': No such file or directory
rm: cannot remove `/data/sas.img': No such file or directory
dd: opening `/data/ssd.img': No such file or directory
dd: opening `/data/sas.img': No such file or directory
./test.sh: line 10: ./tier_setup: cannot execute binary file
Please give me some advice to fix this problem.
Thank you,
Regards,
Haferush
Did you recompile the kernel module for your system?
make clean; make
Did you check what test.sh actually does?
You may want to alter some things in this script.
Also make sure you use -c with tier_setup only once, if you don't want to lose all data on the device.
Thank you Mark, now it's working.
Now I'm still doing benchmarks on my storage system.
Hi Mark,
I have some questions, not exactly about this post, but about EPRD vs. TIER.
Is there a mailing list available?
For now these are very basic questions.
Is EPRD a non-persistent cache? If there is a power or disk failure, what happens to the data? Will it be inconsistent?
I read in the description that EPRD is a "disk cache with barrier support".
Is TIER a persistent component of a "volume" created from an HDD and a fast device (SSD), with data moved onto the fast device? And how can I remove a tiered device?
Is there a comparison of the two technologies, including functionality and performance?
Thanks so much in advance for your answers and thanks for your hard work!
tamas
Hi Tamas,
There is now a mailing list available for TIER: tier-users@lists.sourceforge.net
EPRD stands for "eventually persistent RAM disk".
This means that at some point the data will be written to disk. However, should the system crash or not shut down cleanly, you will lose data.
When you use a modern filesystem like ext4 or xfs with barriers enabled, the filesystem will in most cases recover, as long as EPRD barrier support is also enabled. Since ext2 does not support barriers, you can set the flush interval (-m) to a low value to prevent EPRD from losing too much data after an unclean shutdown.
The trade-off is speed versus the risk of losing data.
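As a small illustration: with ext4 you can request barriers explicitly at mount time. This is a minimal sketch, assuming the EPRD device shows up as /dev/eprda (recent kernels enable barriers by default for ext4 anyway):

mkfs.ext4 /dev/eprda
mount -o barrier=1 /dev/eprda /mnt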
TIER is not like EPRD.
TIER is persistent. In the same way that the kernel loop driver is persistent.
The idea behind TIER is that you create multiple redundant (RAID) storage layers. TIER will, for example, allow you to create a device that uses SSD + SAS + a file on NFS. TIER works with block devices and/or files.
Random IO will automatically be stored on the SSD device, while sequential IO will be written to SAS. Data that is not frequently accessed will eventually move to the file on NFS.
TIER divides the block device into 1MB chunks and stores metadata about their access patterns:
struct blockinfo {
        unsigned int device;     /* index of the backing device that holds this chunk */
        u64 offset;              /* offset of the chunk on that device */
        time_t lastused;         /* time the chunk was last accessed */
        unsigned int readcount;  /* number of reads of this chunk */
        unsigned int writecount; /* number of writes to this chunk */
} __attribute__ ((packed));
You can change the data migration policy via sysfs.
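The attribute names differ per TIER version, so the easiest way to see which migration knobs your build exposes is to list the same sysfs directory that holds the resize trigger used above:

ls /sys/block/sdtiera/tier/
# inspect an individual attribute with cat, for example:
# cat /sys/block/sdtiera/tier/<attribute>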
hi Mark,
I moved this thread to the mailing list, if that's OK with you.
Hi Mark,
How can I run TIER as a service? When my computer restarts the tier device is gone, and so is the data in the directory where I mount the tier device.
Thank you
To create the tier device the first time:
tier_setup -f /dev/sdo:/dev/sdp:/dev/sdq -c
NOTE: -c means create (format)
To assemble the tier device later:
tier_setup -f /dev/sdo:/dev/sdp:/dev/sdq
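To bring the device back automatically after a reboot, you can put the module load, assemble and mount steps in a startup script. A minimal sketch, assuming the device names from the example above (adjust the module path and mount point for your system):

#!/bin/sh
# Assemble an existing tier device at boot.
# Note: no -c here; -c would reformat the device and destroy all data.
insmod /path/to/tier.ko
tier_setup -f /dev/sdo:/dev/sdp:/dev/sdq
mount /dev/sdtiera /mnt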