<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Lessfs-1.5.11</title>
	<atom:link href="http://www.lessfs.com/wordpress/?feed=rss2&#038;p=705" rel="self" type="application/rss+xml" />
	<link>http://www.lessfs.com/wordpress/?p=705</link>
	<description>Open source data de-duplication &#38; data tiering for less</description>
	<lastBuildDate>Wed, 18 Mar 2015 13:40:57 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0.7</generator>
	<item>
		<title>By: richard</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4355</link>
		<dc:creator><![CDATA[richard]]></dc:creator>
		<pubDate>Fri, 27 Apr 2012 22:05:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4355</guid>
		<description><![CDATA[That&#039;s a good point. I don&#039;t think that would lose dedup entirely, but it would definitely affect it.
There should be a way to specify padding so that the tar records are always a multiple of your lessfs blocksize. Since tar doesn&#039;t compress unless you tell it to, I&#039;m thinking this should be possible. I&#039;d have to research it. I have a whole bunch of tar files I can copy to test. If you could try this and we both post our results, that would be better than just one of us trying it.]]></description>
		<content:encoded><![CDATA[<p>That&#8217;s a good point. I don&#8217;t think that would lose dedup entirely, but it would definitely affect it.<br />
There should be a way to specify padding so that the tar records are always a multiple of your lessfs blocksize. Since tar doesn&#8217;t compress unless you tell it to, I&#8217;m thinking this should be possible. I&#8217;d have to research it. I have a whole bunch of tar files I can copy to test. If you could try this and we both post our results, that would be better than just one of us trying it.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jean</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4327</link>
		<dc:creator><![CDATA[Jean]]></dc:creator>
		<pubDate>Fri, 27 Apr 2012 11:42:03 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4327</guid>
		<description><![CDATA[Yes, I do lose dedup capability, because if the length of the first file changes, the following files would not be aligned inside the tar the same way they were before. Since lessfs checks for duplicates by segmenting the tarfile into blocksize-sized segments, none of those segments would match against the segments of the previous version of the tar.]]></description>
		<content:encoded><![CDATA[<p>Yes, I do lose dedup capability, because if the length of the first file changes, the following files would not be aligned inside the tar the same way they were before. Since lessfs checks for duplicates by segmenting the tarfile into blocksize-sized segments, none of those segments would match against the segments of the previous version of the tar.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: richard</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4289</link>
		<dc:creator><![CDATA[richard]]></dc:creator>
		<pubDate>Fri, 27 Apr 2012 00:29:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4289</guid>
		<description><![CDATA[You should not lose dedup capability with tar. I have used tar and lessfs, but not lately. I also believe you can still use rsync with tar. You could also do full tars once a week and do incremental tars the rest of the week. I think this would eliminate the need for rsync, unless you want to make a local tar on one server and then rsync it to the lessfs server.]]></description>
		<content:encoded><![CDATA[<p>You should not lose dedup capability with tar. I have used tar and lessfs, but not lately. I also believe you can still use rsync with tar. You could also do full tars once a week and do incremental tars the rest of the week. I think this would eliminate the need for rsync, unless you want to make a local tar on one server and then rsync it to the lessfs server.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jean</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4285</link>
		<dc:creator><![CDATA[Jean]]></dc:creator>
		<pubDate>Thu, 26 Apr 2012 21:40:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4285</guid>
		<description><![CDATA[Two major problems: 1) I would lose deduplication capabilities, or I would need a tar variant that pads every file to the blocksize of lessfs (I don&#039;t think one exists). 2) That would be incompatible with rsync.]]></description>
		<content:encoded><![CDATA[<p>Two major problems: 1) I would lose deduplication capabilities, or I would need a tar variant that pads every file to the blocksize of lessfs (I don&#8217;t think one exists). 2) That would be incompatible with rsync.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Andrew Tredwick</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4246</link>
		<dc:creator><![CDATA[Andrew Tredwick]]></dc:creator>
		<pubDate>Thu, 26 Apr 2012 13:36:36 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4246</guid>
		<description><![CDATA[My bad, it seems the benchmark provided already uses compression...]]></description>
		<content:encoded><![CDATA[<p>My bad, it seems the benchmark provided already uses compression&#8230;</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: richard</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4205</link>
		<dc:creator><![CDATA[richard]]></dc:creator>
		<pubDate>Thu, 26 Apr 2012 02:12:20 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4205</guid>
		<description><![CDATA[If you are backing up a lot of small files at once, use tar to create one uncompressed BIG file out of your small files.
Please post your results, but this should help.]]></description>
		<content:encoded><![CDATA[<p>If you are backing up a lot of small files at once, use tar to create one uncompressed BIG file out of your small files.<br />
Please post your results, but this should help.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Jean</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4178</link>
		<dc:creator><![CDATA[Jean]]></dc:creator>
		<pubDate>Wed, 25 Apr 2012 10:25:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4178</guid>
		<description><![CDATA[OW!!! That&#039;s a very serious problem :-(( 2 per second is so slow... I really hoped to use lessfs for a general-purpose backup filesystem, and I do have directories with thousands of files. I wouldn&#039;t want to wait for a complete reimplementation such as btrfs 2.x, it&#039;s so sad.
Wouldn&#039;t it be possible to cache some of the metadata received from fuse, e.g. dedicate 50MB of RAM to caching the most recently received fuse data, so as to avoid the round trip to the fuse API each time? First look up the cache; if the entry is not there, ask fuse; then periodically erase entries from the cache that are too old.]]></description>
		<content:encoded><![CDATA[<p>OW!!! That&#8217;s a very serious problem :-(( 2 per second is so slow&#8230; I really hoped to use lessfs for a general-purpose backup filesystem, and I do have directories with thousands of files. I wouldn&#8217;t want to wait for a complete reimplementation such as btrfs 2.x, it&#8217;s so sad.<br />
Wouldn&#8217;t it be possible to cache some of the metadata received from fuse, e.g. dedicate 50MB of RAM to caching the most recently received fuse data, so as to avoid the round trip to the fuse API each time? First look up the cache; if the entry is not there, ask fuse; then periodically erase entries from the cache that are too old.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Andrew Tredwick</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4134</link>
		<dc:creator><![CDATA[Andrew Tredwick]]></dc:creator>
		<pubDate>Tue, 24 Apr 2012 14:33:34 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4134</guid>
		<description><![CDATA[Thanks for the clarification.
Since you are currently providing some detailed benchmarks, do you also intend to provide some figures with compression enabled?]]></description>
		<content:encoded><![CDATA[<p>Thanks for the clarification.<br />
Since you are currently providing some detailed benchmarks, do you also intend to provide some figures with compression enabled?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: maru</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4075</link>
		<dc:creator><![CDATA[maru]]></dc:creator>
		<pubDate>Mon, 23 Apr 2012 18:40:23 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4075</guid>
		<description><![CDATA[Lessfs was never designed to handle lots of small files. I designed it for the exact opposite: a relatively small number of large files.
Enabling better performance with small files requires switching to the low-level fuse API. Lessfs is still using the high-level API.]]></description>
		<content:encoded><![CDATA[<p>Lessfs was never designed to handle lots of small files. I designed it for the exact opposite: a relatively small number of large files.<br />
Enabling better performance with small files requires switching to the low-level fuse API. Lessfs is still using the high-level API.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: maru</title>
		<link>http://www.lessfs.com/wordpress/?p=705&#038;cpage=1#comment-4056</link>
		<dc:creator><![CDATA[maru]]></dc:creator>
		<pubDate>Mon, 23 Apr 2012 13:10:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=705#comment-4056</guid>
		<description><![CDATA[I quickly added LZ4 to the 1.6+ series, and so far I have not found the time to port it to 1.5.x.
For now I would advise using snappy with the 1.5 series. However, file_io + the 1.6 series + LZ4 + BDB should work without problems as well.]]></description>
		<content:encoded><![CDATA[<p>I quickly added LZ4 to the 1.6+ series, and so far I have not found the time to port it to 1.5.x.<br />
For now I would advise using snappy with the 1.5 series. However, file_io + the 1.6 series + LZ4 + BDB should work without problems as well.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
