<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Lessfs-1.3.3.8 is available for download.</title>
	<atom:link href="http://www.lessfs.com/wordpress/?feed=rss2&#038;p=597" rel="self" type="application/rss+xml" />
	<link>http://www.lessfs.com/wordpress/?p=597</link>
	<description>Open source data de-duplication &#38; data tiering for less</description>
	<lastBuildDate>Wed, 18 Mar 2015 13:40:57 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0.7</generator>
	<item>
		<title>By: Alex</title>
		<link>http://www.lessfs.com/wordpress/?p=597&#038;cpage=1#comment-1358</link>
		<dc:creator><![CDATA[Alex]]></dc:creator>
		<pubDate>Tue, 05 Apr 2011 13:29:56 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=597#comment-1358</guid>
		<description><![CDATA[Hi,

Just to let you know that I still use lessfs. I just updated from 1.3.3.1 to 1.3.3.8, and everything runs fine on my 32-bit systems. The upgraded version lets me &quot;du&quot; folders faster than before, even on my low-performance machines.

Thank you for your huge work on this project.]]></description>
		<content:encoded><![CDATA[<p>Hi,</p>
<p>Just to let you know that I still use lessfs. I just updated from 1.3.3.1 to 1.3.3.8, and everything runs fine on my 32-bit systems. The upgraded version lets me &#8220;du&#8221; folders faster than before, even on my low-performance machines.</p>
<p>Thank you for your huge work on this project.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Dimitri Bellini</title>
		<link>http://www.lessfs.com/wordpress/?p=597&#038;cpage=1#comment-1327</link>
		<dc:creator><![CDATA[Dimitri Bellini]]></dc:creator>
		<pubDate>Fri, 25 Mar 2011 15:50:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=597#comment-1327</guid>
		<description><![CDATA[Oops, sorry, I forgot to report the totals from LessFS &amp; DD:

DataDomain: 2,807,467,410
LessFS: 4,547,298,233

LessFS is about 1.6 times the size stored by DD.

Many thanks, Maru!!]]></description>
		<content:encoded><![CDATA[<p>Oops, sorry, I forgot to report the totals from LessFS &amp; DD:</p>
<p>DataDomain: 2,807,467,410<br />
LessFS: 4,547,298,233</p>
<p>LessFS is about 1.6 times the size stored by DD.</p>
<p>Many thanks, Maru!!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Dimitri Bellini</title>
		<link>http://www.lessfs.com/wordpress/?p=597&#038;cpage=1#comment-1326</link>
		<dc:creator><![CDATA[Dimitri Bellini]]></dc:creator>
		<pubDate>Fri, 25 Mar 2011 15:44:56 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=597#comment-1326</guid>
		<description><![CDATA[Hi Maru,
Thanks so much for considering my opinion.
My tests are very simple. I&#039;m working on the Mondo Rescue disaster recovery project, so we use it to create images for bare-metal restore. I created several images of a few servers running the same RHEL release (5.5), with and without compression. Here are the files and their sizes:

filesys show compression /backup/
---
 629766144  sdlmoc01-4k_boot-1.iso
1659176960  sdlmoc01-4k_boot-nocomp-04032011-1.iso
1165178880  sglvms52-4k_boot-03032011-1.iso
1092429824  sglvms52-4k_boot-1.iso
3985399808  sglvms52-4k_boot-nocompressione-07032011-1.iso
4615309312  sglvms52-4k_boot-nocompressione207032011-1.iso
----
Some images are gzipped and some are not, but with some files changed inside them (due to the &quot;offset&quot; problem).
This is the output of the DataDomain command:
Total files: 6;  bytes/storage_used: 4.7
       Original Bytes:       13,188,470,376
  Globally Compressed:        4,026,972,574 (dedupped)
   Locally Compressed:        2,807,467,410 (dedupped+compress)
            Meta-data:           13,160,296
---- 
So while the sum of all files is about 12 GiB, after deduplication DD stores only about 2.5 GiB! Very good performance!!

Now I show the same files using LessFS:
---
[root@sglvms51 mondo-img]# cat /fuse/.lessfs/lessfs_stats
  INODE             SIZE  COMPRESSED_SIZE  FILENAME
     10                0                0  lessfs_stats
     14                0                0  enabled
     15                0                0  backlog
     20        629766144        110648854  sdlmoc01-4k_boot-1.iso
     21       1659176960        808317386  sdlmoc01-4k_boot-nocomp-04032011-1.iso
     22       1165178880       1034221145  sglvms52-4k_boot-03032011-1.iso
     23       1092429824        976047268  sglvms52-4k_boot-1.iso
     24       3985399808       1378525345  sglvms52-4k_boot-nocompressione-07032011-1.iso
     25       4615309312        239538235  sglvms52-4k_boot-nocompressione207032011-1.iso
-----

I think &quot;sliding window&quot; deduplication would be very useful for many file types, for example as backup storage for Bacula or other backup software, or for simple tar backups.
Please ask me for other info if you need it.
Many thanks for your great work!!]]></description>
		<content:encoded><![CDATA[<p>Hi Maru,<br />
Thanks so much for considering my opinion.<br />
My tests are very simple. I&#8217;m working on the Mondo Rescue disaster recovery project, so we use it to create images for bare-metal restore. I created several images of a few servers running the same RHEL release (5.5), with and without compression. Here are the files and their sizes:</p>
<p>filesys show compression /backup/<br />
&#8212;<br />
 629766144  sdlmoc01-4k_boot-1.iso<br />
1659176960  sdlmoc01-4k_boot-nocomp-04032011-1.iso<br />
1165178880  sglvms52-4k_boot-03032011-1.iso<br />
1092429824  sglvms52-4k_boot-1.iso<br />
3985399808  sglvms52-4k_boot-nocompressione-07032011-1.iso<br />
4615309312  sglvms52-4k_boot-nocompressione207032011-1.iso<br />
&#8212;-<br />
Some images are gzipped and some are not, but with some files changed inside them (due to the &#8220;offset&#8221; problem).<br />
This is the output of the DataDomain command:<br />
Total files: 6;  bytes/storage_used: 4.7<br />
       Original Bytes:       13,188,470,376<br />
  Globally Compressed:        4,026,972,574 (dedupped)<br />
   Locally Compressed:        2,807,467,410 (dedupped+compress)<br />
            Meta-data:           13,160,296<br />
&#8212;-<br />
So while the sum of all files is about 12 GiB, after deduplication DD stores only about 2.5 GiB! Very good performance!!</p>
<p>Now I show the same files using LessFS:<br />
&#8212;<br />
[root@sglvms51 mondo-img]# cat /fuse/.lessfs/lessfs_stats<br />
  INODE             SIZE  COMPRESSED_SIZE  FILENAME<br />
     10                0                0  lessfs_stats<br />
     14                0                0  enabled<br />
     15                0                0  backlog<br />
     20        629766144        110648854  sdlmoc01-4k_boot-1.iso<br />
     21       1659176960        808317386  sdlmoc01-4k_boot-nocomp-04032011-1.iso<br />
     22       1165178880       1034221145  sglvms52-4k_boot-03032011-1.iso<br />
     23       1092429824        976047268  sglvms52-4k_boot-1.iso<br />
     24       3985399808       1378525345  sglvms52-4k_boot-nocompressione-07032011-1.iso<br />
     25       4615309312        239538235  sglvms52-4k_boot-nocompressione207032011-1.iso<br />
&#8212;&#8211;</p>
<p>I think &#8220;sliding window&#8221; deduplication would be very useful for many file types, for example as backup storage for Bacula or other backup software, or for simple tar backups.<br />
Please ask me for other info if you need it.<br />
Many thanks for your great work!!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: maru</title>
		<link>http://www.lessfs.com/wordpress/?p=597&#038;cpage=1#comment-1318</link>
		<dc:creator><![CDATA[maru]]></dc:creator>
		<pubDate>Wed, 23 Mar 2011 08:35:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=597#comment-1318</guid>
		<description><![CDATA[Hi Dimitri,

It&#039;s always interesting to hear what other dedup solutions are doing. DataDomain uses sliding-window deduplication, whereas lessfs uses fixed-blocksize deduplication. I have been hesitant to add sliding windows to lessfs due to patent issues.

Can you give an example of the dedup ratios that you see on DataDomain and lessfs, and what type of data you are testing with?

Lessfs is known to work very well with raw disk images and when you copy regular files to it. When you use lessfs to store tar or zip archives, the compression will be low or nonexistent, since the offsets in the tar archive differ each time.]]></description>
		<content:encoded><![CDATA[<p>Hi Dimitri,</p>
<p>It&#8217;s always interesting to hear what other dedup solutions are doing. DataDomain uses sliding-window deduplication, whereas lessfs uses fixed-blocksize deduplication. I have been hesitant to add sliding windows to lessfs due to patent issues.</p>
<p>Can you give an example of the dedup ratios that you see on DataDomain and lessfs, and what type of data you are testing with?</p>
<p>Lessfs is known to work very well with raw disk images and when you copy regular files to it. When you use lessfs to store tar or zip archives, the compression will be low or nonexistent, since the offsets in the tar archive differ each time.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Dimitri Bellini</title>
		<link>http://www.lessfs.com/wordpress/?p=597&#038;cpage=1#comment-1317</link>
		<dc:creator><![CDATA[Dimitri Bellini]]></dc:creator>
		<pubDate>Wed, 23 Mar 2011 08:17:32 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=597#comment-1317</guid>
		<description><![CDATA[Hi Maru,
Thanks for your great work on LessFS. I have tested some other &quot;competitors&quot; like NetApp, DataDomain, and ZFS. I did not test read/write performance; I focused on the dedup factor, and I saw a very high dedup factor on DataDomain.
I don&#039;t want to say LessFS has a bad dedup algorithm, but it seems very simple, because in my simple test it didn&#039;t find duplicated chunks in a simple file.
What do you think about this?
I&#039;m sorry for my bad English :-)
Thanks so much]]></description>
		<content:encoded><![CDATA[<p>Hi Maru,<br />
Thanks for your great work on LessFS. I have tested some other &#8220;competitors&#8221; like NetApp, DataDomain, and ZFS. I did not test read/write performance; I focused on the dedup factor, and I saw a very high dedup factor on DataDomain.<br />
I don&#8217;t want to say LessFS has a bad dedup algorithm, but it seems very simple, because in my simple test it didn&#8217;t find duplicated chunks in a simple file.<br />
What do you think about this?<br />
I&#8217;m sorry for my bad English <img src="http://www.lessfs.com/wordpress/wp-includes/images/smilies/icon_smile.gif" alt=":-)" class="wp-smiley" /><br />
Thanks so much</p>
]]></content:encoded>
	</item>
</channel>
</rss>
