<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: BTIER-1.0.0 stable has been released</title>
	<atom:link href="http://www.lessfs.com/wordpress/?feed=rss2&#038;p=909" rel="self" type="application/rss+xml" />
	<link>http://www.lessfs.com/wordpress/?p=909</link>
	<description>Open source data de-duplication &#38; data tiering for less</description>
	<lastBuildDate>Wed, 18 Mar 2015 13:40:57 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0.7</generator>
	<item>
		<title>By: maru</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-19870</link>
		<dc:creator><![CDATA[maru]]></dc:creator>
		<pubDate>Wed, 30 Oct 2013 10:46:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-19870</guid>
		<description><![CDATA[Which btier version were you using when you had this problem?
What filesystem were you using? That e2fsck takes a long time on a 40+TiB volume is hardly a surprise.
Please reconsider whether you even want a single volume to be that large, or maybe switch to a journaled filesystem like XFS?

A problem that could potentially cause data corruption was solved in btier-1.1.0.]]></description>
		<content:encoded><![CDATA[<p>Which btier version were you using when you had this problem?<br />
What filesystem were you using? That e2fsck takes a long time on a 40+TiB volume is hardly a surprise.<br />
Please reconsider whether you even want a single volume to be that large, or maybe switch to a journaled filesystem like XFS?</p>
<p>A problem that could potentially cause data corruption was solved in btier-1.1.0.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Pulsed Media</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-19868</link>
		<dc:creator><![CDATA[Pulsed Media]]></dc:creator>
		<pubDate>Wed, 30 Oct 2013 10:04:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-19868</guid>
		<description><![CDATA[A warning for everyone: I managed to get FS corrupted by a reboot.
It looks like one should always sync + detach btier before a reboot.

e2fsck gives a warning that the filesystem was not cleanly unmounted, and we did try some manual block migrations as well - maybe that is part of the reason?

Not sure what causes this, but what we did was just a shutdown -r now, while all applications were still running (including iSCSI daemon) expecting things to be cleanly shutdown, unmounted etc.

Might be nothing particularly to do with btier - just the way we have things setup, so my point is that make sure you have all the necessary shutdown procedures in place.

Though it still worries me a bit that the FS got corrupted, since servers are known to crash occasionally, and since this is a 40+TiB array it takes a while to fsck.]]></description>
		<content:encoded><![CDATA[<p>A warning for everyone: I managed to get FS corrupted by a reboot.<br />
It looks like one should always sync + detach btier before a reboot.</p>
<p>e2fsck gives a warning that the filesystem was not cleanly unmounted, and we did try some manual block migrations as well &#8211; maybe that is part of the reason?</p>
<p>Not sure what causes this, but what we did was just a shutdown -r now, while all applications were still running (including iSCSI daemon) expecting things to be cleanly shutdown, unmounted etc.</p>
<p>Might be nothing particularly to do with btier &#8211; just the way we have things setup, so my point is that make sure you have all the necessary shutdown procedures in place.</p>
<p>Though it still worries me a bit that the FS got corrupted, since servers are known to crash occasionally, and since this is a 40+TiB array it takes a while to fsck.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Aleksi</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-19784</link>
		<dc:creator><![CDATA[Aleksi]]></dc:creator>
		<pubDate>Sun, 20 Oct 2013 16:34:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-19784</guid>
		<description><![CDATA[Still, in our experience SSDs can be fragile - the most worrisome part is that they fail without any warning.
Since wear is one issue with SSDs - what I&#039;m afraid of is that too many SSDs fail at the same time.

We are building bigger arrays - each of which may host 100 customers&#039; data - therefore redundancy is quite an important factor for us - but so is the cost of implementation, due to our niche (low-end dedis meant for data distribution).

Raid5 on SSD -&gt; In our instance the performance hit does seem somewhat negligible :)
Since we host system images, we only need to verify an average of 100+ IOPS per system - basically the same as a single magnetic disk on each system, but we also need to do this more cost effectively than just having local disks on each system,
and we don&#039;t need to support ultra-high IOPS, like databases etc. For that we have higher-end models for customers with local SSD drives.

This means we need a higher combined total IOPS than the underlying magnetic drives, and simply use larger disks than would otherwise be used.

All the caching software has been worse than just disappointing so far due to various design flaws - usually killing the SSD performance and then acting as a brake for the whole array :(

We calculated the cost of each system having its own disks to be 7.6€ per month per disk - since we charge only 25€ a month for the cheapest one, this is too high a cost. SAN also saves us a lot of other management headaches (while adding new ones, though).

Since it&#039;s pretty much the same cost no matter the disk size (operational cost makes up the bulk of the cost), it makes sense to utilize the largest drives available, and since we can put several systems&#039; worth of data on each, and need to counter the additional cost of the base storage node, SSD caching/tiering is a must - if we can have reasonable redundancy at a reasonable cost.

So it would be a nice feature for our use case to have a backup copy of the SSD tier data on magnetic drives, or even on a separate drive, just in case too many SSDs fail at once, since I don&#039;t trust even the latest model SSDs to be sufficiently reliable.

Next system I&#039;m building will have 4xOCZ SSD on RAID5 + 20x3Tb Cudas on RAID50.]]></description>
		<content:encoded><![CDATA[<p>Still, in our experience SSDs can be fragile &#8211; the most worrisome part is that they fail without any warning.<br />
Since wear is one issue with SSDs &#8211; what I&#8217;m afraid of is that too many SSDs fail at the same time.</p>
<p>We are building bigger arrays &#8211; each of which may host 100 customers&#8217; data &#8211; therefore redundancy is quite an important factor for us &#8211; but so is the cost of implementation, due to our niche (low-end dedis meant for data distribution).</p>
<p>Raid5 on SSD -&gt; In our instance the performance hit does seem somewhat negligible <img src="http://www.lessfs.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /><br />
Since we host system images, we only need to verify an average of 100+ IOPS per system &#8211; basically the same as a single magnetic disk on each system, but we also need to do this more cost effectively than just having local disks on each system,<br />
and we don&#8217;t need to support ultra-high IOPS, like databases etc. For that we have higher-end models for customers with local SSD drives.</p>
<p>This means we need a higher combined total IOPS than the underlying magnetic drives, and simply use larger disks than would otherwise be used.</p>
<p>All the caching software has been worse than just disappointing so far due to various design flaws &#8211; usually killing the SSD performance and then acting as a brake for the whole array <img src="http://www.lessfs.com/wordpress/wp-includes/images/smilies/icon_sad.gif" alt=":(" class="wp-smiley" /></p>
<p>We calculated the cost of each system having its own disks to be 7.6€ per month per disk &#8211; since we charge only 25€ a month for the cheapest one, this is too high a cost. SAN also saves us a lot of other management headaches (while adding new ones, though).</p>
<p>Since it&#8217;s pretty much the same cost no matter the disk size (operational cost makes up the bulk of the cost), it makes sense to utilize the largest drives available, and since we can put several systems&#8217; worth of data on each, and need to counter the additional cost of the base storage node, SSD caching/tiering is a must &#8211; if we can have reasonable redundancy at a reasonable cost.</p>
<p>So it would be a nice feature for our use case to have a backup copy of the SSD tier data on magnetic drives, or even on a separate drive, just in case too many SSDs fail at once, since I don&#8217;t trust even the latest model SSDs to be sufficiently reliable.</p>
<p>Next system I&#8217;m building will have 4xOCZ SSD on RAID5 + 20x3Tb Cudas on RAID50.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: maru</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-19708</link>
		<dc:creator><![CDATA[maru]]></dc:creator>
		<pubDate>Mon, 14 Oct 2013 07:34:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-19708</guid>
		<description><![CDATA[Hi Yuri,

Sorry for the extreme delay. I failed to notice your comment when I last reviewed incoming messages. I&#039;ll take a look at your patch and merge what makes sense.

Thanks,

Mark]]></description>
		<content:encoded><![CDATA[<p>Hi Yuri,</p>
<p>Sorry for the extreme delay. I failed to notice your comment when I last reviewed incoming messages. I&#8217;ll take a look at your patch and merge what makes sense.</p>
<p>Thanks,</p>
<p>Mark</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: maru</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-19707</link>
		<dc:creator><![CDATA[maru]]></dc:creator>
		<pubDate>Mon, 14 Oct 2013 07:30:09 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-19707</guid>
		<description><![CDATA[For redundancy, Linux has md, which allows you to create stacked RAIDs.
Modern SSDs like the Intel S3700 are extremely reliable.  Using RAID5 on them does seem silly though, since you buy these for IOPS.

To minimize the risk you could think of some SSDs in RAID1, some SATA in RAID10 and some SATA in RAID6(0). And as always, since RAID is not backup, you should make sure that a proper backup regime is part of the solution.
You can also replicate your data should you choose to do so.

Adding another md layer to btier does not make sense to me.

Mark.

P.S. Just adding a backup copy on SATA is an oversimplified idea, since this copy would need to be able to handle the IOPS of the whole stack, which most likely includes SSDs.]]></description>
		<content:encoded><![CDATA[<p>For redundancy, Linux has md, which allows you to create stacked RAIDs.<br />
Modern SSDs like the Intel S3700 are extremely reliable.  Using RAID5 on them does seem silly though, since you buy these for IOPS.</p>
<p>To minimize the risk you could think of some SSDs in RAID1, some SATA in RAID10 and some SATA in RAID6(0). And as always, since RAID is not backup, you should make sure that a proper backup regime is part of the solution.<br />
You can also replicate your data should you choose to do so.</p>
<p>Adding another md layer to btier does not make sense to me.</p>
<p>Mark.</p>
<p>P.S. Just adding a backup copy on SATA is an oversimplified idea, since this copy would need to be able to handle the IOPS of the whole stack, which most likely includes SSDs.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Aleksi</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-19701</link>
		<dc:creator><![CDATA[Aleksi]]></dc:creator>
		<pubDate>Sun, 13 Oct 2013 22:56:48 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-19701</guid>
		<description><![CDATA[How about redundancy: what if the Tier0 SSDs fail?

Assume a setup of 5xSSD in RAID5, and all of a sudden 2 of the SSDs fail at the same time?
SSDs are so damn fragile :(

Is there anything else to do than adding a couple of drives, doing RAID6 + 1x hot spare to minimize the risk, or ... ?

It would be a nice option to have a &quot;backup copy&quot; on SATA drives at all times for those running a more critical load where data loss matters.
Yeah, I know that would lower btier to &quot;just a cache&quot;, but since btier has better (actually sane) algos for handling what is hot and what is not, it would excel as &quot;cache only&quot; as well :)]]></description>
		<content:encoded><![CDATA[<p>How about redundancy: what if the Tier0 SSDs fail?</p>
<p>Assume a setup of 5xSSD in RAID5, and all of a sudden 2 of the SSDs fail at the same time?<br />
SSDs are so damn fragile <img src="http://www.lessfs.com/wordpress/wp-includes/images/smilies/icon_sad.gif" alt=":(" class="wp-smiley" /></p>
<p>Is there anything else to do than adding a couple of drives, doing RAID6 + 1x hot spare to minimize the risk, or &#8230; ?</p>
<p>It would be a nice option to have a &#8220;backup copy&#8221; on SATA drives at all times for those running a more critical load where data loss matters.<br />
Yeah, I know that would lower btier to &#8220;just a cache&#8221;, but since btier has better (actually sane) algos for handling what is hot and what is not, it would excel as &#8220;cache only&#8221; as well <img src="http://www.lessfs.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Yuri Tcherepanov</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-19489</link>
		<dc:creator><![CDATA[Yuri Tcherepanov]]></dc:creator>
		<pubDate>Sun, 22 Sep 2013 19:46:57 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-19489</guid>
		<description><![CDATA[KVER=some_kernel_version make]]></description>
		<content:encoded><![CDATA[<p>KVER=some_kernel_version make</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Yuri Tcherepanov</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-19488</link>
		<dc:creator><![CDATA[Yuri Tcherepanov]]></dc:creator>
		<pubDate>Sun, 22 Sep 2013 19:46:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-19488</guid>
		<description><![CDATA[Mark - thank you for this great tool!

Please review the patch: http://www.firsthost.lv/files/btier-1.0.2-p1.patch
Configuration moved to /etc/btier/*; configuration is not rewritten on &quot;make install&quot;.
/etc/btier/btmtab - automounting configuration on boot/shutdown, for debian/ubuntu support in the init script.
Also, building for a custom kernel version is supported with: KVER= make]]></description>
		<content:encoded><![CDATA[<p>Mark &#8211; thank you for this great tool!</p>
<p>Please review the patch: <a href="http://www.firsthost.lv/files/btier-1.0.2-p1.patch" rel="nofollow">http://www.firsthost.lv/files/btier-1.0.2-p1.patch</a><br />
Configuration moved to /etc/btier/*; configuration is not rewritten on &#8220;make install&#8221;.<br />
/etc/btier/btmtab &#8211; automounting configuration on boot/shutdown, for<br />
debian/ubuntu support in the init script.<br />
Also, building for a custom kernel version is supported with: KVER= make</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mark Ruijter</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-17229</link>
		<dc:creator><![CDATA[Mark Ruijter]]></dc:creator>
		<pubDate>Wed, 03 Jul 2013 17:12:12 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-17229</guid>
		<description><![CDATA[The numbers that you are using are _way_ too low / aggressive.
This will result in data being moved around all the time and therefore lower overall performance.

As a minimum interval I would use 30~60 minutes for the SSD and a few hours for the SATA drive.
To be effective this also requires adjusting the migration_interval. The default is once per 4 hours.
 
For optimal performance the SSD should be able to contain your hot data.
Usually this is approx. 10~25% of the total. So a 2TB SATA drive would be a good match for a 450GB SSD.

You can also consider setting: echo 1 &gt;/sys/block/sdtiera/tier/sequential_landing
This will direct all sequential IO to the SATA drive which it can handle pretty well.
Therefore the SSD will only be used for random IO which is what it does best.

Mark.

&gt;
&gt;
&gt; 4) Yes I changed  /sys/block/sdtiera/tier/migration_policy
&gt;
&gt; [root@localhost tmp]# cat  /sys/block/sdtiera/tier/migration_policy
&gt;    tier               device         max_age hit_collecttime
&gt;       0                 sda3              10              10
&gt;       1                 sdb2              60              60
&gt;
&gt;
&gt; 5) I am doing 80% sequential I/O and 20% random I/O. This is a test VM node with Raid10 SSD and Raid10 SATA. Intention is to offer SSD cached VMs targeted to heavy upload/download users.
&gt;
&gt;
&gt;
&gt; Regards,
&gt;
&gt; John
&gt;
&gt;
&gt; On Wed, Jul 3, 2013 at 3:27 PM, Mark Ruijter wrote:
&gt;
&gt;
&gt;     Hi John,
&gt;
&gt;     It looks like the statistics counters have been corrupted or have overflowed.
&gt;
&gt;     Can you run: btier_inspect for me before resetting the statistics?
&gt;     It works similar to btier_setup and dumps a backup of your metadata in /tmp.
&gt;     ./btier_inspect -f /data/ssd.img:/data/sas.img -b
&gt;
&gt;     This will create these files in /tmp
&gt;     bitlist0 and bitlist1 (since I had two devices)
&gt;     magic_dev0 and magic_dev1
&gt;     And the file blocklist0
&gt;
&gt;     Can you email me those files?
&gt;     Your data is not part of them. They contain only btier metadata.
&gt;
&gt;     echo 1 &gt;/sys/block/sdtiera/tier/clear_statistics will reset the statistics.
&gt;     You can put that in the nightly cron when needed as well.
&gt;
&gt;     About pushing IO to SATA.
&gt;     What is the output of :
&gt;     cat /sys/block/sdtiera/tier/sequential_landing
&gt;
&gt;     Did you change : /sys/block/sdtiera/tier/migration_policy?
&gt;
&gt;     Are you doing random or sequential IO?
&gt;     Let me inspect your metadata before coming to conclusions.
&gt;
&gt;     Mark
&gt;
&gt;     P.S. Can you also share your kernel version, and btier messages from dmesg or /var/log/messages should you have those?
&gt;
&gt;     On 7/3/13 3:04 AM, John wrote:
&gt;
&gt;         Looks like this was not stable as expected.  After a 24hr run all performance is gone and btier status is showing some weird results
&gt;
&gt;
&gt;         [root@localhost  tier]# cat /sys/block/sdtiera/tier/device_usage
&gt;             TIER               DEVICE         SIZE MB    ALLOCATED MB   AVERAGE READS  AVERAGE WRITES     TOTAL_READS    TOTAL_WRITES
&gt;                0                 sda3           49945               0      4034061684               0 18446744073709551606             318
&gt;                1                 sdb2         1837337          226617               1               4         3045229         8898157
&gt;
&gt;         [root@localhost tier]#
&gt;
&gt;
&gt;         Tried deactivating LVM and rebooting the server, but no luck - the status is the same, and in atop I can see that all requests are pushed to the SATA back end (Tier1)
&gt;
&gt;         Do you have any solution for this ?
&gt;
&gt;         Also, how to use /sys/block/sdtiera/tier/clear_statistics
&gt;]]></description>
		<content:encoded><![CDATA[<p>The numbers that you are using are _way_ too low / aggressive.<br />
This will result in data being moved around all the time and therefore lower overall performance.</p>
<p>As a minimum interval I would use 30~60 minutes for the SSD and a few hours for the SATA drive.<br />
To be effective this also requires adjusting the migration_interval. The default is once per 4 hours.</p>
<p>For optimal performance the SSD should be able to contain your hot data.<br />
Usually this is approx. 10~25% of the total. So a 2TB SATA drive would be a good match for a 450GB SSD.</p>
<p>You can also consider setting: echo 1 &gt;/sys/block/sdtiera/tier/sequential_landing<br />
This will direct all sequential IO to the SATA drive which it can handle pretty well.<br />
Therefore the SSD will only be used for random IO which is what it does best.</p>
<p>Mark.</p>
<p>&gt;<br />
&gt;<br />
&gt; 4) Yes I changed  /sys/block/sdtiera/tier/migration_policy<br />
&gt;<br />
&gt; [root@localhost tmp]# cat  /sys/block/sdtiera/tier/migration_policy<br />
&gt;    tier               device         max_age hit_collecttime<br />
&gt;       0                 sda3              10              10<br />
&gt;       1                 sdb2              60              60<br />
&gt;<br />
&gt;<br />
&gt; 5) I am doing 80% sequential I/O and 20% random I/O. This is a test VM node with Raid10 SSD and Raid10 SATA. Intention is to offer SSD cached VMs targeted to heavy upload/download users.<br />
&gt;<br />
&gt;<br />
&gt;<br />
&gt; Regards,<br />
&gt;<br />
&gt; John<br />
&gt;<br />
&gt;<br />
&gt; On Wed, Jul 3, 2013 at 3:27 PM, Mark Ruijter wrote:<br />
&gt;<br />
&gt;<br />
&gt;     Hi John,<br />
&gt;<br />
&gt;     It looks like the statistics counters have been corrupted or have overflowed.<br />
&gt;<br />
&gt;     Can you run: btier_inspect for me before resetting the statistics?<br />
&gt;     It works similar to btier_setup and dumps a backup of your metadata in /tmp.<br />
&gt;     ./btier_inspect -f /data/ssd.img:/data/sas.img -b<br />
&gt;<br />
&gt;     This will create these files in /tmp<br />
&gt;     bitlist0 and bitlist1 (since I had two devices)<br />
&gt;     magic_dev0 and magic_dev1<br />
&gt;     And the file blocklist0<br />
&gt;<br />
&gt;     Can you email me those files?<br />
&gt;     Your data is not part of them. They contain only btier metadata.<br />
&gt;<br />
&gt;     echo 1 &gt;/sys/block/sdtiera/tier/clear_statistics will reset the statistics.<br />
&gt;     You can put that in the nightly cron when needed as well.<br />
&gt;<br />
&gt;     About pushing IO to SATA.<br />
&gt;     What is the output of :<br />
&gt;     cat /sys/block/sdtiera/tier/sequential_landing<br />
&gt;<br />
&gt;     Did you change : /sys/block/sdtiera/tier/migration_policy?<br />
&gt;<br />
&gt;     Are you doing random or sequential IO?<br />
&gt;     Let me inspect your metadata before coming to conclusions.<br />
&gt;<br />
&gt;     Mark<br />
&gt;<br />
&gt;     P.S. Can you also share your kernel version, and btier messages from dmesg or /var/log/messages should you have those?<br />
&gt;<br />
&gt;     On 7/3/13 3:04 AM, John wrote:<br />
&gt;<br />
&gt;         Looks like this was not stable as expected.  After a 24hr run all performance is gone and btier status is showing some weird results<br />
&gt;<br />
&gt;<br />
&gt;         [root@localhost  tier]# cat /sys/block/sdtiera/tier/device_usage<br />
&gt;             TIER               DEVICE         SIZE MB    ALLOCATED MB   AVERAGE READS  AVERAGE WRITES     TOTAL_READS    TOTAL_WRITES<br />
&gt;                0                 sda3           49945               0      4034061684               0 18446744073709551606             318<br />
&gt;                1                 sdb2         1837337          226617               1               4         3045229         8898157<br />
&gt;<br />
&gt;         [root@localhost tier]#<br />
&gt;<br />
&gt;<br />
&gt;         Tried deactivating LVM and rebooting the server, but no luck &#8211; the status is the same, and in atop I can see that all requests are pushed to the SATA back end (Tier1)<br />
&gt;<br />
&gt;         Do you have any solution for this ?<br />
&gt;<br />
&gt;         Also, how to use /sys/block/sdtiera/tier/clear_statistics<br />
&gt;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: John</title>
		<link>http://www.lessfs.com/wordpress/?p=909&#038;cpage=1#comment-17197</link>
		<dc:creator><![CDATA[John]]></dc:creator>
		<pubDate>Wed, 03 Jul 2013 01:04:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=909#comment-17197</guid>
		<description><![CDATA[Looks like this was not stable as expected.  After a 24hr run all performance is gone and btier status is showing some weird results


[root@localhost  tier]# cat /sys/block/sdtiera/tier/device_usage
   TIER               DEVICE         SIZE MB    ALLOCATED MB   AVERAGE READS  AVERAGE WRITES     TOTAL_READS    TOTAL_WRITES
      0                 sda3           49945               0      4034061684               0 18446744073709551606             318
      1                 sdb2         1837337          226617               1               4         3045229         8898157

[root@localhost tier]#


Tried deactivating LVM and rebooting the server, but no luck - the status is the same, and in atop I can see that all requests are pushed to the SATA back end (Tier1)

Do you have any solution for this?

Also, how to use /sys/block/sdtiera/tier/clear_statistics]]></description>
		<content:encoded><![CDATA[<p>Looks like this was not stable as expected.  After a 24hr run all performance is gone and btier status is showing some weird results</p>
<p>[root@localhost  tier]# cat /sys/block/sdtiera/tier/device_usage<br />
   TIER               DEVICE         SIZE MB    ALLOCATED MB   AVERAGE READS  AVERAGE WRITES     TOTAL_READS    TOTAL_WRITES<br />
      0                 sda3           49945               0      4034061684               0 18446744073709551606             318<br />
      1                 sdb2         1837337          226617               1               4         3045229         8898157</p>
<p>[root@localhost tier]#</p>
<p>Tried deactivating LVM and rebooting the server, but no luck &#8211; the status is the same, and in atop I can see that all requests are pushed to the SATA back end (Tier1)</p>
<p>Do you have any solution for this?</p>
<p>Also, how to use /sys/block/sdtiera/tier/clear_statistics</p>
]]></content:encoded>
	</item>
</channel>
</rss>
