<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Lessfs-0.8.2 is available for download.</title>
	<atom:link href="http://www.lessfs.com/wordpress/?feed=rss2&#038;p=252" rel="self" type="application/rss+xml" />
	<link>http://www.lessfs.com/wordpress/?p=252</link>
	<description>Open source data de-duplication &#38; data tiering for less</description>
	<lastBuildDate>Wed, 18 Mar 2015 13:40:57 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0.7</generator>
	<item>
		<title>By: Szycha</title>
		<link>http://www.lessfs.com/wordpress/?p=252&#038;cpage=1#comment-241</link>
		<dc:creator><![CDATA[Szycha]]></dc:creator>
		<pubDate>Mon, 23 Nov 2009 08:15:36 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=252#comment-241</guid>
		<description><![CDATA[My lessfsck is stuck in a dead loop :-(

$ ls -al /proc/18096/fd/3
/proc/18096/fd/3 -&gt; /var/bk/mta/fileblock.tch

$ strace -p 18096 -s 90 2&gt;&amp;1 &#124; head -10
Process 18096 attached - interrupt to quit
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40
pread64(3, &quot;\260&gt;\20\30j\245\4\2\277Y\242g\220\320\241\225\277+&quot;, 48, 154400320) = 48
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40
pread64(3, &quot;\260&gt;\20\30j\245\4\2\277Y\242g\220\320\241\225\277+&quot;, 48, 154400320) = 48
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40
pread64(3, &quot;\260&gt;\20\30j\245\4\2\277Y\242g\220\320\241\225\277+&quot;, 48, 154400320) = 48
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40
pread64(3, &quot;\260&gt;\20\30j\245\4\2\277Y\242g\220\320\241\225\277+&quot;, 48, 154400320) = 48
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40



I am afraid I will have to break off this joyful activity and mount the unchecked lessfs right away.


-- 
Your testfield reporter,
(-) Szycha.]]></description>
		<content:encoded><![CDATA[<p>My lessfsck is stuck in a dead loop <img src="http://www.lessfs.com/wordpress/wp-includes/images/smilies/icon_sad.gif" alt=":-(" class="wp-smiley" /></p>
<p>$ ls -al /proc/18096/fd/3<br />
/proc/18096/fd/3 -&gt; /var/bk/mta/fileblock.tch</p>
<p>$ strace -p 18096 -s 90 2&gt;&amp;1 | head -10<br />
Process 18096 attached - interrupt to quit<br />
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40<br />
pread64(3, &quot;\260&gt;\20\30j\245\4\2\277Y\242g\220\320\241\225\277+&quot;, 48, 154400320) = 48<br />
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40<br />
pread64(3, &quot;\260&gt;\20\30j\245\4\2\277Y\242g\220\320\241\225\277+&quot;, 48, 154400320) = 48<br />
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40<br />
pread64(3, &quot;\260&gt;\20\30j\245\4\2\277Y\242g\220\320\241\225\277+&quot;, 48, 154400320) = 48<br />
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40<br />
pread64(3, &quot;\260&gt;\20\30j\245\4\2\277Y\242g\220\320\241\225\277+&quot;, 48, 154400320) = 48<br />
pread64(3, &quot;\366Z\4\1mfy\210\242\247\374\203\22\237\213uQ57X209\16\256\1\344\2234&quot;, 40, 149089050) = 40</p>
<p>I am afraid I will have to break off this joyful activity and mount the unchecked lessfs right away.</p>
<p>--<br />
Your testfield reporter,<br />
(-) Szycha.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Szycha</title>
		<link>http://www.lessfs.com/wordpress/?p=252&#038;cpage=1#comment-240</link>
		<dc:creator><![CDATA[Szycha]]></dc:creator>
		<pubDate>Mon, 23 Nov 2009 07:56:59 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=252#comment-240</guid>
		<description><![CDATA[&gt; I am afraid that a kernel upgrade is inevitable.

It was. A similar data set now completed with an average speed of 3800 kB/s (compared to ~250 kB/s previously). It was still the first copy (I am trying to set up a snapshot-like backup system), so I may assume the speed will grow by a factor of four with subsequent copies.

My configuration is now kernel 2.6.31.6 and lessfs 0.8.3. I am wondering if I could use +1 concurrency (like in `make -j&#039;).

My hopes for backup are now restored. &quot;A New Hope&quot; I might say ;-)


-- 
Cheers and thank you,
(-) Szycha.]]></description>
		<content:encoded><![CDATA[<p>&gt; I am afraid that a kernel upgrade is inevitable.</p>
<p>It was. A similar data set now completed with an average speed of 3800 kB/s (compared to ~250 kB/s previously). It was still the first copy (I am trying to set up a snapshot-like backup system), so I may assume the speed will grow by a factor of four with subsequent copies.</p>
<p>My configuration is now kernel 2.6.31.6 and lessfs 0.8.3. I am wondering if I could use +1 concurrency (like in `make -j&#8217;).</p>
<p>My hopes for backup are now restored. &#8220;A New Hope&#8221; I might say <img src="http://www.lessfs.com/wordpress/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley" /></p>
<p>--<br />
Cheers and thank you,<br />
(-) Szycha.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Areq</title>
		<link>http://www.lessfs.com/wordpress/?p=252&#038;cpage=1#comment-209</link>
		<dc:creator><![CDATA[Areq]]></dc:creator>
		<pubDate>Wed, 18 Nov 2009 13:31:08 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=252#comment-209</guid>
		<description><![CDATA[It was mounted without any parameters:
lessfs /etc/lessfs/backup.cfg /mnt
blocksize 4k

It is now at: Phase 3: Check for orphaned inodes.]]></description>
		<content:encoded><![CDATA[<p>It was mounted without any parameters:<br />
lessfs /etc/lessfs/backup.cfg /mnt<br />
blocksize 4k</p>
<p>It is now at: Phase 3: Check for orphaned inodes.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: maru</title>
		<link>http://www.lessfs.com/wordpress/?p=252&#038;cpage=1#comment-203</link>
		<dc:creator><![CDATA[maru]]></dc:creator>
		<pubDate>Mon, 16 Nov 2009 19:43:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=252#comment-203</guid>
		<description><![CDATA[Hmm, lessfsck running for days is not what I intended. Don&#039;t get me wrong, lessfsck will be time consuming, but days is way too long.

How did you mount lessfs?
Did you use big_writes and a blocksize larger than 4k?
I will do some testing with a big database to see what happens.

Mark]]></description>
		<content:encoded><![CDATA[<p>Hmm, lessfsck running for days is not what I intended. Don&#8217;t get me wrong, lessfsck will be time consuming, but days is way too long.</p>
<p>How did you mount lessfs?<br />
Did you use big_writes and a blocksize larger than 4k?<br />
I will do some testing with a big database to see what happens.</p>
<p>Mark</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: maru</title>
		<link>http://www.lessfs.com/wordpress/?p=252&#038;cpage=1#comment-202</link>
		<dc:creator><![CDATA[maru]]></dc:creator>
		<pubDate>Mon, 16 Nov 2009 19:40:39 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=252#comment-202</guid>
		<description><![CDATA[Using a 4k blocksize kills lessfs performance.

One other thing, don&#039;t use big_writes in combination with 4k blocks. big_writes implies a blocksize larger than 4k.

I am afraid that a kernel upgrade is inevitable.
You are not the first to struggle with this, so I should probably work on a Howto.

Mark]]></description>
		<content:encoded><![CDATA[<p>Using a 4k blocksize kills lessfs performance.</p>
<p>One other thing, don&#8217;t use big_writes in combination with 4k blocks. big_writes implies a blocksize larger than 4k.</p>
<p>I am afraid that a kernel upgrade is inevitable.<br />
You are not the first to struggle with this, so I should probably work on a Howto.</p>
<p>Mark</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Szycha</title>
		<link>http://www.lessfs.com/wordpress/?p=252&#038;cpage=1#comment-198</link>
		<dc:creator><![CDATA[Szycha]]></dc:creator>
		<pubDate>Mon, 16 Nov 2009 09:29:40 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=252#comment-198</guid>
		<description><![CDATA[Hi,

Thanks for this excellent tool. It is really something - the first de-duplicating GPL-ed fs.

I am trying to use lessfs on a 1 TB USB hard drive and it shows &#039;suboptimal&#039; performance of about 256 kB/s (2.6.24.5-smp on E8200 @2.66GHz) when copying a ~60GB disk image.

My setup:
lessfs 0.8.0
configured with:
./configure &quot; &#039;--with-crypto&#039; &#039;--with-lzo&#039; &#039;--with-sha3&#039; &#039;--prefix=/usr&#039; &#039;CFLAGS=-O3 -mtune=native -mmmx -msse -msse2 -msse3 -ffast-math -fomit-frame-pointer -mfpmath=sse,387&#039;

lessfs.cfg:
DEBUG = 1
HASHLEN = 24
BLOCKDATA_PATH=/var/bk/dta
BLOCKDATA_BS=10485760
BLOCKUSAGE_PATH=/var/bk/mta
BLOCKUSAGE_BS=10485760
DIRENT_PATH=/var/bk/mta
DIRENT_BS=10485760
FILEBLOCK_PATH=/var/bk/mta
FILEBLOCK_BS=10485760
META_PATH=/var/bk/mta
META_BS=10485760
HARDLINK_PATH=/var/bk/mta
HARDLINK_BS=10485760
SYMLINK_PATH=/var/bk/mta
SYMLINK_BS=10485760
FREELIST_PATH=/var/bk/mta
FREELIST_BS=10485760
CACHESIZE=640
COMMIT_INTERVAL=300
LISTEN_IP=127.0.0.1
LISTEN_PORT=100
MAX_THREADS=2
DYNAMIC_DEFRAGMENTATION=off
COREDUMPSIZE=256000000
SYNC_RELAX=0
ENCRYPT_DATA=off
ENCRYPT_META=on

/var/bk is 1 TB LUKS partition formatted with XFS:
# xfs_info /var/bk
meta-data=/dev/mapper/bak        isize=256    agcount=4, agsize=61047468 blks
         =                       sectsz=4096  attr=2
data     =                       bsize=4096   blocks=244189871, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096  
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

(I was unable to set up internal lessfs encryption for an unknown reason, which is why I use LUKS).

lessfs is mounted with:
mount &#124;grep lessfs:
lessfs on /backup type fuse.lessfs (rw,nosuid,nodev,max_read=4096)

ps awwxf &#124;grep lessfs:
lessfs /etc/lessfs.cfg /backup/ -o max_write=4096,max_read=4096,max_readahead=256,big_writes


/bin/dd_rescue -d img-65gb /backup/2009-1112-2123/img-65gb
dd_rescue: (warning): O_DIRECT requires hardbs of at least 4096!
dd_rescue: (warning): We don&#039;t handle misalignment of last block w/ O_DIRECT!
dd_rescue: (info): ipos:  68157440.0k, opos:  68157440.0k, xferd:  68157440.0k
                   errs:      0, errxfer:         0.0k, succxfer:  68157440.0k
             +curr.rate:    16450kB/s, avg.rate:      436kB/s, avg.load:  0.1%
dd_rescue: (info): img-65gb (68157440.0k): EOF

/bin/dd_rescue -d img2-60gb /backup/2009-1112-2123/img2-60gb
dd_rescue: (warning): O_DIRECT requires hardbs of at least 4096!
dd_rescue: (warning): We don&#039;t handle misalignment of last block w/ O_DIRECT!
dd_rescue: (info): ipos:  37608448.0k, opos:  37608448.0k, xferd:  37608448.0k
                   errs:      0, errxfer:         0.0k, succxfer:  37608448.0k
             +curr.rate:      119kB/s, avg.rate:      255kB/s, avg.load:  0.1%

(it is still in progress)


I would like to have 4kB blocks that match ssize on the XFS source partitions in order to achieve the highest hit rate. (I cannot easily upgrade the kernel on that machine - this is the other reason.)

What did I mix up?]]></description>
		<content:encoded><![CDATA[<p>Hi,</p>
<p>Thanks for this excellent tool. It is really something - the first de-duplicating GPL-ed fs.</p>
<p>I am trying to use lessfs on a 1 TB USB hard drive and it shows &#8216;suboptimal&#8217; performance of about 256 kB/s (2.6.24.5-smp on E8200 @2.66GHz) when copying a ~60GB disk image.</p>
<p>My setup:<br />
lessfs 0.8.0<br />
configured with:<br />
./configure &quot; &#039;--with-crypto&#039; &#039;--with-lzo&#039; &#039;--with-sha3&#039; &#039;--prefix=/usr&#039; &#039;CFLAGS=-O3 -mtune=native -mmmx -msse -msse2 -msse3 -ffast-math -fomit-frame-pointer -mfpmath=sse,387&#039;</p>
<p>lessfs.cfg:<br />
DEBUG = 1<br />
HASHLEN = 24<br />
BLOCKDATA_PATH=/var/bk/dta<br />
BLOCKDATA_BS=10485760<br />
BLOCKUSAGE_PATH=/var/bk/mta<br />
BLOCKUSAGE_BS=10485760<br />
DIRENT_PATH=/var/bk/mta<br />
DIRENT_BS=10485760<br />
FILEBLOCK_PATH=/var/bk/mta<br />
FILEBLOCK_BS=10485760<br />
META_PATH=/var/bk/mta<br />
META_BS=10485760<br />
HARDLINK_PATH=/var/bk/mta<br />
HARDLINK_BS=10485760<br />
SYMLINK_PATH=/var/bk/mta<br />
SYMLINK_BS=10485760<br />
FREELIST_PATH=/var/bk/mta<br />
FREELIST_BS=10485760<br />
CACHESIZE=640<br />
COMMIT_INTERVAL=300<br />
LISTEN_IP=127.0.0.1<br />
LISTEN_PORT=100<br />
MAX_THREADS=2<br />
DYNAMIC_DEFRAGMENTATION=off<br />
COREDUMPSIZE=256000000<br />
SYNC_RELAX=0<br />
ENCRYPT_DATA=off<br />
ENCRYPT_META=on</p>
<p>/var/bk is 1 TB LUKS partition formatted with XFS:<br />
# xfs_info /var/bk<br />
meta-data=/dev/mapper/bak        isize=256    agcount=4, agsize=61047468 blks<br />
         =                       sectsz=4096  attr=2<br />
data     =                       bsize=4096   blocks=244189871, imaxpct=25<br />
         =                       sunit=0      swidth=0 blks<br />
naming   =version 2              bsize=4096<br />
log      =internal               bsize=4096   blocks=32768, version=2<br />
         =                       sectsz=4096  sunit=1 blks, lazy-count=0<br />
realtime =none                   extsz=4096   blocks=0, rtextents=0</p>
<p>(I was unable to set up internal lessfs encryption for an unknown reason, which is why I use LUKS).</p>
<p>lessfs is mounted with:<br />
mount |grep lessfs:<br />
lessfs on /backup type fuse.lessfs (rw,nosuid,nodev,max_read=4096)</p>
<p>ps awwxf |grep lessfs:<br />
lessfs /etc/lessfs.cfg /backup/ -o max_write=4096,max_read=4096,max_readahead=256,big_writes</p>
<p>/bin/dd_rescue -d img-65gb /backup/2009-1112-2123/img-65gb<br />
dd_rescue: (warning): O_DIRECT requires hardbs of at least 4096!<br />
dd_rescue: (warning): We don&#8217;t handle misalignment of last block w/ O_DIRECT!<br />
dd_rescue: (info): ipos:  68157440.0k, opos:  68157440.0k, xferd:  68157440.0k<br />
                   errs:      0, errxfer:         0.0k, succxfer:  68157440.0k<br />
             +curr.rate:    16450kB/s, avg.rate:      436kB/s, avg.load:  0.1%<br />
dd_rescue: (info): img-65gb (68157440.0k): EOF</p>
<p>/bin/dd_rescue -d img2-60gb /backup/2009-1112-2123/img2-60gb<br />
dd_rescue: (warning): O_DIRECT requires hardbs of at least 4096!<br />
dd_rescue: (warning): We don&#8217;t handle misalignment of last block w/ O_DIRECT!<br />
dd_rescue: (info): ipos:  37608448.0k, opos:  37608448.0k, xferd:  37608448.0k<br />
                   errs:      0, errxfer:         0.0k, succxfer:  37608448.0k<br />
             +curr.rate:      119kB/s, avg.rate:      255kB/s, avg.load:  0.1%</p>
<p>(it is still in progress)</p>
<p>I would like to have 4kB blocks that match ssize on the XFS source partitions in order to achieve the highest hit rate. (I cannot easily upgrade the kernel on that machine - this is the other reason.)</p>
<p>What did I mix up?</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Areq</title>
		<link>http://www.lessfs.com/wordpress/?p=252&#038;cpage=1#comment-196</link>
		<dc:creator><![CDATA[Areq]]></dc:creator>
		<pubDate>Sun, 15 Nov 2009 21:41:49 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=252#comment-196</guid>
		<description><![CDATA[How fast should lessfsck be?

I ran it on a 15G lessfs a few days ago and it is still working...
(first day Phase 1, and now Phase 2)

config: http://pld.pastebin.com/f2e0e5456

CPU P4 3.00GHz, 750MB RAM. SATA Seagate 7200.10 320GB

No other process on this machine is running now.

# dd if=blockdata.tch of=/dev/null bs=1M
12806+1 records in
12806+1 records out
13428124894 bytes (13 GB) copied, 245.718 s, 54.6 MB/s

xfs, 2.6.27.12-1, i686, libfuse-2.8.0-1.i686

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      2489 76.3  5.7 433916 44060 pts/1    R+   Nov10 5653:15 /usr/sbin/lessfsck -o -c /etc/lessfs/backup.cfg

# strace -f -p 2489 shows a lot of pread64(8, ...); fd 8 is mta/dirent.tcb]]></description>
		<content:encoded><![CDATA[<p>How fast should lessfsck be?</p>
<p>I ran it on a 15G lessfs a few days ago and it is still working&#8230;<br />
(first day Phase 1, and now Phase 2)</p>
<p>config: <a href="http://pld.pastebin.com/f2e0e5456" rel="nofollow">http://pld.pastebin.com/f2e0e5456</a></p>
<p>CPU P4 3.00GHz, 750MB RAM. SATA Seagate 7200.10 320GB</p>
<p>No other process on this machine is running now.</p>
<p># dd if=blockdata.tch of=/dev/null bs=1M<br />
12806+1 records in<br />
12806+1 records out<br />
13428124894 bytes (13 GB) copied, 245.718 s, 54.6 MB/s</p>
<p>xfs, 2.6.27.12-1, i686, libfuse-2.8.0-1.i686</p>
<p>USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND<br />
root      2489 76.3  5.7 433916 44060 pts/1    R+   Nov10 5653:15 /usr/sbin/lessfsck -o -c /etc/lessfs/backup.cfg</p>
<p># strace -f -p 2489 shows a lot of pread64(8, ...); fd 8 is mta/dirent.tcb</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mark Ruijter</title>
		<link>http://www.lessfs.com/wordpress/?p=252&#038;cpage=1#comment-187</link>
		<dc:creator><![CDATA[Mark Ruijter]]></dc:creator>
		<pubDate>Thu, 12 Nov 2009 18:36:55 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=252#comment-187</guid>
		<description><![CDATA[Hi Chris,

Did you try lessfs with the file_io backend?
Deleting files with file_io is much faster because it just marks deleted
blocks in the freelist database.
This actually comes very close to the solution that you are suggesting.

My question would be : Does file_io solve your problem?

Mark.

-
P.S. I will take a good look on optimizing deletion behavior in
combination with the tc backend.
I do agree that the current situation is &#039;less than optimal&#039;. For now I
am working on fsck for the file_io backend.
But this issue should be resolved before 1.0 comes out.]]></description>
		<content:encoded><![CDATA[<p>Hi Chris,</p>
<p>Did you try lessfs with the file_io backend?<br />
Deleting files with file_io is much faster because it just marks deleted<br />
blocks in the freelist database.<br />
This actually comes very close to the solution that you are suggesting.</p>
<p>My question would be : Does file_io solve your problem?</p>
<p>Mark.</p>
<p>-<br />
P.S. I will take a good look on optimizing deletion behavior in<br />
combination with the tc backend.<br />
I do agree that the current situation is &#8216;less than optimal&#8217;. For now I<br />
am working on fsck for the file_io backend.<br />
But this issue should be resolved before 1.0 comes out.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Chris-U</title>
		<link>http://www.lessfs.com/wordpress/?p=252&#038;cpage=1#comment-184</link>
		<dc:creator><![CDATA[Chris-U]]></dc:creator>
		<pubDate>Thu, 12 Nov 2009 09:55:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.lessfs.com/wordpress/?p=252#comment-184</guid>
		<description><![CDATA[What about my suggestion from september 14th?

---------------------------------
What about a deleting queue? An additional database which stores the ids/hashes of the blocks that should be deleted.
So when you delete a file, first the metadata gets removed so the file is no longer visible in the filesystem. Second, the entries in the deletion queue are written. Finally, when no jobs other than deleting are pending, the blocks from the deletion queue get checked and deleted.

This method would make it possible to handle files and use the filesystem like a traditional filesystem. In addition, no purge job would be necessary.
---------------------------------

It would be great if something like that were available in v1.0, too.
At the moment I have stopped testing lessfs because I cannot really use it without a better solution for deleting files.
In addition, it seems that only one file operation at a time is possible on one lessfs. When I try to do another file operation on the lessfs, it will not respond until the first operation is completed.


Has anybody tested lessfs on solid state disks? It wouldn&#039;t be useful for backup purposes, but for efficient use of the expensive disks when they are used for energy saving.


Regards, Chris]]></description>
		<content:encoded><![CDATA[<p>What about my suggestion from september 14th?</p>
<p>---------------------------------<br />
What about a deleting queue? An additional database which stores the ids/hashes of the blocks that should be deleted.<br />
So when you delete a file, first the metadata gets removed so the file is no longer visible in the filesystem. Second, the entries in the deletion queue are written. Finally, when no jobs other than deleting are pending, the blocks from the deletion queue get checked and deleted.</p>
<p>This method would make it possible to handle files and use the filesystem like a traditional filesystem. In addition, no purge job would be necessary.<br />
---------------------------------</p>
<p>It would be great if something like that were available in v1.0, too.<br />
At the moment I have stopped testing lessfs because I cannot really use it without a better solution for deleting files.<br />
In addition, it seems that only one file operation at a time is possible on one lessfs. When I try to do another file operation on the lessfs, it will not respond until the first operation is completed.</p>
<p>Has anybody tested lessfs on solid state disks? It wouldn&#8217;t be useful for backup purposes, but for efficient use of the expensive disks when they are used for energy saving.</p>
<p>Regards, Chris</p>
]]></content:encoded>
	</item>
</channel>
</rss>
