<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Introducing TIER</title>
	<atom:link href="http://www.lessfs.com/wordpress/?feed=rss2&#038;p=776" rel="self" type="application/rss+xml" />
	<link>http://www.lessfs.com/wordpress/?p=776</link>
	<description>Open source data de-duplication &#38; data tiering for less</description>
	<lastBuildDate>Wed, 18 Mar 2015 13:40:57 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0.7</generator>
	<item>
		<title>By: How to: Home-brew automatic tiered storage solutions with Linux? (Memory -&#62; SSD -&#62; HDD -&#62; remote storage) #it #development #fix &#124; SevenNet</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-239998</link>
		<dc:creator><![CDATA[How to: Home-brew automatic tiered storage solutions with Linux? (Memory -&#62; SSD -&#62; HDD -&#62; remote storage) #it #development #fix &#124; SevenNet]]></dc:creator>
		<pubDate>Sat, 10 Jan 2015 23:14:26 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-239998</guid>
		<description><![CDATA[[&#8230;] http://www.lessfs.com/wordpress/?p=776 [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] <a href="http://www.lessfs.com/wordpress/?p=776" rel="nofollow">http://www.lessfs.com/wordpress/?p=776</a> [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: wiki-test1-1 &#8211;SSD &#124; test</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-17600</link>
		<dc:creator><![CDATA[wiki-test1-1 &#8211;SSD &#124; test]]></dc:creator>
		<pubDate>Fri, 19 Jul 2013 22:29:34 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-17600</guid>
		<description><![CDATA[[...] to boost performance on both desktop and server workloads. The bcache, dm-ssdcache, EnhanceIO and tier projects provide a similar concept for the Linux [...]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] to boost performance on both desktop and server workloads. The bcache, dm-ssdcache, EnhanceIO and tier projects provide a similar concept for the Linux [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: snowman</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-15718</link>
		<dc:creator><![CDATA[snowman]]></dc:creator>
		<pubDate>Sun, 07 Apr 2013 18:28:52 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-15718</guid>
		<description><![CDATA[Hi Maru,

I am really amazed at this project, as my university project assigned me to do exactly the same thing. What I have done so far is only a user-space program that migrates a file byte by byte or block by block to a block group specified by the user (this is to simulate data migration between tiers). Also, my implementation is not a real-time tiering solution, as the file system has to be unmounted while the program runs. In addition, as the block bitmap, inode bitmap, and group descriptors need to be updated, a lot of calculations are executed, and this results in extremely slow speeds when migrating a large file of, say, tens of MBs. I didn&#039;t expect such poor performance before I completed the implementation. And this is also why I was so &quot;shocked&quot; when I saw you had implemented the feature as a kernel module.

Just out of curiosity, was this project done by yourself alone, or was there a group developing the solution? How long did you spend on it, from the idea to the first version?

Your reply will be much appreciated.

Thank you.

Best regards,
Snowman Zhang]]></description>
		<content:encoded><![CDATA[<p>Hi Maru,</p>
<p>I am really amazed at this project, as my university project assigned me to do exactly the same thing. What I have done so far is only a user-space program that migrates a file byte by byte or block by block to a block group specified by the user (this is to simulate data migration between tiers). Also, my implementation is not a real-time tiering solution, as the file system has to be unmounted while the program runs. In addition, as the block bitmap, inode bitmap, and group descriptors need to be updated, a lot of calculations are executed, and this results in extremely slow speeds when migrating a large file of, say, tens of MBs. I didn&#8217;t expect such poor performance before I completed the implementation. And this is also why I was so &#8220;shocked&#8221; when I saw you had implemented the feature as a kernel module.</p>
<p>Just out of curiosity, was this project done by yourself alone, or was there a group developing the solution? How long did you spend on it, from the idea to the first version?</p>
<p>Your reply will be much appreciated.</p>
<p>Thank you.</p>
<p>Best regards,<br />
Snowman Zhang</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: CS</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-12813</link>
		<dc:creator><![CDATA[CS]]></dc:creator>
		<pubDate>Thu, 10 Jan 2013 07:29:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-12813</guid>
		<description><![CDATA[There is a person who was developing a tiered solution, but there haven&#039;t been any commits to the git project for ~5 months. I&#039;m curious whether anyone else is working on something like this. Perhaps this is my chance to jump back into C.
https://bbs.archlinux.org/viewtopic.php?id=113529&amp;p=2]]></description>
		<content:encoded><![CDATA[<p>There is a person who was developing a tiered solution, but there haven&#8217;t been any commits to the git project for ~5 months. I&#8217;m curious whether anyone else is working on something like this. Perhaps this is my chance to jump back into C.<br />
<a href="https://bbs.archlinux.org/viewtopic.php?id=113529&#038;p=2" rel="nofollow">https://bbs.archlinux.org/viewtopic.php?id=113529&#038;p=2</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Michael</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-12019</link>
		<dc:creator><![CDATA[Michael]]></dc:creator>
		<pubDate>Mon, 17 Dec 2012 22:06:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-12019</guid>
		<description><![CDATA[Hi,
Very interesting project! What is its current status? Is there an active developer community, or a small group / single person? Is it being used in production environments already? Same questions for lessfs :-)

thanks,
Michael]]></description>
		<content:encoded><![CDATA[<p>Hi,<br />
Very interesting project! What is its current status? Is there an active developer community, or a small group / single person? Is it being used in production environments already? Same questions for lessfs <img src="http://www.lessfs.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" /></p>
<p>thanks,<br />
Michael</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: rob</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-11223</link>
		<dc:creator><![CDATA[rob]]></dc:creator>
		<pubDate>Thu, 22 Nov 2012 18:19:02 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-11223</guid>
		<description><![CDATA[Has anyone tried using tiered devices to back a drbd volume? Details of a known-good configuration would be great. &quot;Do not go there&quot; is also a good time saver if someone knows it&#039;s not a go.]]></description>
		<content:encoded><![CDATA[<p>Has anyone tried using tiered devices to back a drbd volume? Details of a known-good configuration would be great. &#8220;Do not go there&#8221; is also a good time saver if someone knows it&#8217;s not a go.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Calvin</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-9079</link>
		<dc:creator><![CDATA[Calvin]]></dc:creator>
		<pubDate>Fri, 27 Jul 2012 15:00:06 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-9079</guid>
		<description><![CDATA[One follow up question - is it possible to add additional tiers to an existing tiered device?]]></description>
		<content:encoded><![CDATA[<p>One follow up question &#8211; is it possible to add additional tiers to an existing tiered device?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Calvin</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-9037</link>
		<dc:creator><![CDATA[Calvin]]></dc:creator>
		<pubDate>Wed, 25 Jul 2012 23:27:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-9037</guid>
		<description><![CDATA[Can a TIER device be expanded after creation (without data loss)?  If one of the block devices below TIER is expanded (RAID capacity expansion), will TIER be able to use the new storage (tier_setup -d followed by tier_setup ...) or is the TIER metadata specific to the initial block device sizes?]]></description>
		<content:encoded><![CDATA[<p>Can a TIER device be expanded after creation (without data loss)?  If one of the block devices below TIER is expanded (RAID capacity expansion), will TIER be able to use the new storage (tier_setup -d followed by tier_setup &#8230;) or is the TIER metadata specific to the initial block device sizes?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: maru</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-8266</link>
		<dc:creator><![CDATA[maru]]></dc:creator>
		<pubDate>Tue, 26 Jun 2012 20:37:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-8266</guid>
		<description><![CDATA[The storage layers that you &#039;stripe&#039; together with tier should be protected with RAID: any device that fails will corrupt the whole tier device, so for production purposes one should indeed use RAID. When the SSD runs out of space, the automatic tiering process will make sure that the most frequently used blocks are stored on the SSD.
Blocks are moved by the optimization process, which uses statistics such as how often a block is used and when it was used to decide whether a block should be relocated. Therefore the speed will be considerably higher than that of a simple RAID10.]]></description>
		<content:encoded><![CDATA[<p>The storage layers that you &#8216;stripe&#8217; together with tier should be protected with RAID: any device that fails will corrupt the whole tier device, so for production purposes one should indeed use RAID. When the SSD runs out of space, the automatic tiering process will make sure that the most frequently used blocks are stored on the SSD.<br />
Blocks are moved by the optimization process, which uses statistics such as how often a block is used and when it was used to decide whether a block should be relocated. Therefore the speed will be considerably higher than that of a simple RAID10.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dude</title>
		<link>http://www.lessfs.com/wordpress/?p=776&#038;cpage=1#comment-8184</link>
		<dc:creator><![CDATA[dude]]></dc:creator>
		<pubDate>Mon, 25 Jun 2012 05:29:31 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=776#comment-8184</guid>
		<description><![CDATA[&quot;One advantage of tier when compared to SSD caching only is that the total capacity of the tiered device is the sum of all attached devices.&quot;

But then if the one SSD fails, you lose all your data? Also, once the 160GB space of the SSD runs out, the rest of the 900GB space of the RAID 10 would be the same speed or lower than a simple RAID10, wouldn&#039;t it?]]></description>
		<content:encoded><![CDATA[<p>&#8220;One advantage of tier when compared to SSD caching only is that the total capacity of the tiered device is the sum of all attached devices.&#8221;</p>
<p>But then if the one SSD fails, you lose all your data? Also, once the 160GB space of the SSD runs out, the rest of the 900GB space of the RAID 10 would be the same speed or lower than a simple RAID10, wouldn&#8217;t it?</p>
]]></content:encoded>
	</item>
</channel>
</rss>
