<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Lessfs-1.5.0 has been released</title>
	<atom:link href="http://www.lessfs.com/wordpress/?feed=rss2&#038;p=627" rel="self" type="application/rss+xml" />
	<link>http://www.lessfs.com/wordpress/?p=627</link>
	<description>Open source data de-duplication &#38; data tiering for less</description>
	<lastBuildDate>Wed, 18 Mar 2015 13:40:57 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0.7</generator>
	<item>
		<title>By: David</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2418</link>
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Mon, 12 Sep 2011 00:22:59 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2418</guid>
		<description><![CDATA[Another question about disk usage.  I am using file_io and my OS partition space for my dedupe is slowly creeping towards 100% even though the data in the lessfs dta file is only using 70% of the space. When it reaches 100%, lessfs will &#039;crap out&#039; even though there is space available in the dta file.
Is there any way around this without using chunk_io?

If I do use chunk_io, can I use ext4 or xfs reliably, as I&#039;m not sure I trust btrfs yet? It crapped out on me in a test VM using lessfs.]]></description>
		<content:encoded><![CDATA[<p>Another question about disk usage.  I am using file_io and my OS partition space for my dedupe is slowly creeping towards 100% even though the data in the lessfs dta file is only using 70% of the space. When it reaches 100%, lessfs will &#8216;crap out&#8217; even though there is space available in the dta file.<br />
Is there any way around this without using chunk_io?</p>
<p>If I do use chunk_io, can I use ext4 or xfs reliably, as I&#8217;m not sure I trust btrfs yet? It crapped out on me in a test VM using lessfs.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Chris</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2307</link>
		<dc:creator><![CDATA[Chris]]></dc:creator>
		<pubDate>Tue, 30 Aug 2011 09:02:11 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2307</guid>
		<description><![CDATA[Unfortunately my backup hard drive holding the copy of the VM crashed and I had to rebuild the whole system. I set it up with BDB, since the database was damaged and had to be refilled anyway. So I won&#039;t be able to test that.]]></description>
		<content:encoded><![CDATA[<p>Unfortunately my backup hard drive holding the copy of the VM crashed and I had to rebuild the whole system. I set it up with BDB, since the database was damaged and had to be refilled anyway. So I won&#8217;t be able to test that.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mark Ruijter</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2300</link>
		<dc:creator><![CDATA[Mark Ruijter]]></dc:creator>
		<pubDate>Mon, 29 Aug 2011 15:30:29 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2300</guid>
		<description><![CDATA[Lessfs-1.5.0 will not work with older releases of hamsterdb. The hamsterdb interface changed recently. Lessfs-1.5.0 should work with hamsterdb-1.1.13, if not please let me know.]]></description>
		<content:encoded><![CDATA[<p>Lessfs-1.5.0 will not work with older releases of hamsterdb. The hamsterdb interface changed recently. Lessfs-1.5.0 should work with hamsterdb-1.1.13, if not please let me know.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Sylar</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2273</link>
		<dc:creator><![CDATA[Sylar]]></dc:creator>
		<pubDate>Fri, 26 Aug 2011 15:05:58 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2273</guid>
		<description><![CDATA[Hi,
I just found lessfs recently and think it&#039;s a cool tool to save disk space.
I have a question about the usage of lessfs.
It seems that lessfs can be used on POSIX systems.
But can lessfs be used on non-POSIX systems as well?
The filesystem we are using is MogileFS, and it can only be accessed through API calls, not POSIX.
So I am wondering if lessfs could work with MogileFS?
If it couldn&#039;t, will that be available in the future?
Thanks~~~~]]></description>
		<content:encoded><![CDATA[<p>Hi,<br />
I just found lessfs recently and think it&#8217;s a cool tool to save disk space.<br />
I have a question about the usage of lessfs.<br />
It seems that lessfs can be used on POSIX systems.<br />
But can lessfs be used on non-POSIX systems as well?<br />
The filesystem we are using is MogileFS, and it can only be accessed through API calls, not POSIX.<br />
So I am wondering if lessfs could work with MogileFS?<br />
If it couldn&#8217;t, will that be available in the future?<br />
Thanks~~~~</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Richard</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2182</link>
		<dc:creator><![CDATA[Richard]]></dc:creator>
		<pubDate>Mon, 15 Aug 2011 14:46:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2182</guid>
		<description><![CDATA[If you get &quot;Lock table is out of available lock entries&quot; then you need to increase the numbers in /data/mta/DB_CONFIG.

The default DB_CONFIG in the lessfs-1.4.x series always needed to be modified. I did not have to modify the 1.5.0 DB_CONFIG, but depending on how many files you have, you may still need to increase many of the allowed numbers.

I believe that while bdb does support SQL, SQL would not be optimal in a deduplicating file system, so it is probably being used as a key-based database for faster processing.

Myself, I am using chunk_io. I don&#039;t know if that bypasses bdb. I&#039;m going to email Mark some of my questions.]]></description>
		<content:encoded><![CDATA[<p>If you get &#8220;Lock table is out of available lock entries&#8221; then you need to increase the numbers in /data/mta/DB_CONFIG.</p>
<p>The default DB_CONFIG in the lessfs-1.4.x series always needed to be modified. I did not have to modify the 1.5.0 DB_CONFIG, but depending on how many files you have, you may still need to increase many of the allowed numbers.</p>
<p>I believe that while bdb does support SQL, SQL would not be optimal in a deduplicating file system, so it is probably being used as a key-based database for faster processing.</p>
<p>Myself, I am using chunk_io. I don&#8217;t know if that bypasses bdb. I&#8217;m going to email Mark some of my questions.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: cw</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2181</link>
		<dc:creator><![CDATA[cw]]></dc:creator>
		<pubDate>Mon, 15 Aug 2011 14:22:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2181</guid>
		<description><![CDATA[Sorry for the long delay in responding...
&quot;Lock table is out of available lock entries&quot;

bdb is SQL, right?  What advantages in your usage does it have over sqlite3?  I know sqlite3 can handle a couple million rows without issue, and handles locking decently too.]]></description>
		<content:encoded><![CDATA[<p>Sorry for the long delay in responding&#8230;<br />
&#8220;Lock table is out of available lock entries&#8221;</p>
<p>bdb is SQL, right?  What advantages in your usage does it have over sqlite3?  I know sqlite3 can handle a couple million rows without issue, and handles locking decently too.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Richard</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2159</link>
		<dc:creator><![CDATA[Richard]]></dc:creator>
		<pubDate>Fri, 12 Aug 2011 16:29:24 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2159</guid>
		<description><![CDATA[You could try to use chunk_io (see README.chunk_io in the lessfs-1.5.0 dir). I&#039;m testing it right now, but have not yet deleted anything. You would also have to reformat /data to btrfs or reiserfs, so don&#039;t forget to back up your data.]]></description>
		<content:encoded><![CDATA[<p>You could try to use chunk_io (see README.chunk_io in the lessfs-1.5.0 dir). I&#8217;m testing it right now, but have not yet deleted anything. You would also have to reformat /data to btrfs or reiserfs, so don&#8217;t forget to back up your data.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: David</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2152</link>
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Fri, 12 Aug 2011 01:56:06 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2152</guid>
		<description><![CDATA[Hi,
       Just saw the answer to my question in README.file_io. The blockdata file does not shrink.

Cheers!]]></description>
		<content:encoded><![CDATA[<p>Hi,<br />
       Just saw the answer to my question in README.file_io. The blockdata file does not shrink.</p>
<p>Cheers!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: David</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2151</link>
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Fri, 12 Aug 2011 01:08:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2151</guid>
		<description><![CDATA[Hi,
     I&#039;ve compiled with BerkeleyDB and now have it working. I still have a question I cannot find an answer to.  How do you see free space after deletes?

If I copy a 3GB ISO to the dedupe store and run &#039;df&#039;, I can see the added 3GB. If I delete it, df does not change. I understand that df may not be able to see the changes (for whatever reason), but there must be some way to determine the amount of space the dedupe store is taking up?    Or is .lessfs/lessfs_stats the only way? My issue is the dedupe store gets to 100% with the test backups I&#039;m sending to it, yet I have no way to estimate the disk space I need, as &#039;df&#039; is showing the size of /data/dta/blockdata.dta.  Does this file not shrink once files are deleted? Is there a manual way to shrink it?
Sorry if this has been answered elsewhere, but I have not seen it.]]></description>
		<content:encoded><![CDATA[<p>Hi,<br />
     I&#8217;ve compiled with BerkeleyDB and now have it working. I still have a question I cannot find an answer to.  How do you see free space after deletes?</p>
<p>If I copy a 3GB ISO to the dedupe store and run &#8216;df&#8217;, I can see the added 3GB. If I delete it, df does not change. I understand that df may not be able to see the changes (for whatever reason), but there must be some way to determine the amount of space the dedupe store is taking up?    Or is .lessfs/lessfs_stats the only way? My issue is the dedupe store gets to 100% with the test backups I&#8217;m sending to it, yet I have no way to estimate the disk space I need, as &#8216;df&#8217; is showing the size of /data/dta/blockdata.dta.  Does this file not shrink once files are deleted? Is there a manual way to shrink it?<br />
Sorry if this has been answered elsewhere, but I have not seen it.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Richard</title>
		<link>http://www.lessfs.com/wordpress/?p=627&#038;cpage=1#comment-2147</link>
		<dc:creator><![CDATA[Richard]]></dc:creator>
		<pubDate>Thu, 11 Aug 2011 19:01:51 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=627#comment-2147</guid>
		<description><![CDATA[I don&#039;t know if it is not supported, but Maru is not that fond of HamsterDB anymore. See his 1.4.0 announcement:
Reason for adding support for Berkeley DB is not that it was sexy to introduce it. Nor that it was fun to write the code. I needed a very reliable back-end, and speed was less important than reliability. While hamsterdb still looks promising, for now it does not fulfill the requirements.]]></description>
		<content:encoded><![CDATA[<p>I don&#8217;t know if it is not supported, but Maru is not that fond of HamsterDB anymore. See his 1.4.0 announcement:<br />
Reason for adding support for Berkeley DB is not that it was sexy to introduce it. Nor that it was fun to write the code. I needed a very reliable back-end, and speed was less important than reliability. While hamsterdb still looks promising, for now it does not fulfill the requirements.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
