This version of lessfs comes with slightly improved read performance. Parts of the lessfs_read routine have been reviewed and optimized, and a potential race condition has been fixed as well.
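The changelog does not show the code, but as a rough illustration of the kind of problem a race in a read path can cause, here is a minimal, self-contained sketch: a shared in-memory chunk buffer whose lookup is wrapped in a pthread mutex, so a writer taking the same lock cannot hand a reader a mix of old and new data. All names here (chunk_cache_get, guarded_read, cache_lock) are hypothetical and are not taken from the lessfs source.
---
/*
 * A minimal, hypothetical sketch -- not the actual lessfs code -- of the
 * kind of race a locked read path avoids. The names chunk_cache_get(),
 * cache_lock and guarded_read() are illustrative only.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static char cached_chunk[4096];   /* shared between reader and writer threads */

/* Stand-in for a lookup in an in-memory chunk cache. */
static ssize_t chunk_cache_get(char *buf, size_t size, off_t offset)
{
    if (offset >= (off_t)sizeof(cached_chunk))
        return 0;
    if (size > sizeof(cached_chunk) - (size_t)offset)
        size = sizeof(cached_chunk) - (size_t)offset;
    memcpy(buf, cached_chunk + offset, size);
    return (ssize_t)size;
}

/*
 * Holding the lock for the whole lookup means a writer that takes the same
 * lock cannot replace the cached chunk halfway through the copy and hand
 * the caller a mix of old and new data.
 */
ssize_t guarded_read(char *buf, size_t size, off_t offset)
{
    ssize_t done;

    pthread_mutex_lock(&cache_lock);
    done = chunk_cache_get(buf, size, offset);
    pthread_mutex_unlock(&cache_lock);
    return done;
}

int main(void)
{
    char buf[64];

    memset(cached_chunk, 'A', sizeof(cached_chunk));
    printf("read %zd bytes\n", guarded_read(buf, sizeof(buf), 0));
    return 0;
}
---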
Hi Maru,
thanks for your great work on LessFS. I have tested some other “competitors” such as NetApp, DataDomain and ZFS. I have not tested read/write performance; I focused on the dedup factor, and I saw a very high dedup factor on DataDomain.
Please understand, I don’t want to say that LessFS has a bad dedup algorithm, but it seems very simple, because in my simple test it did not find duplicated chunks in a plain file.
What do you think about this?
I’m sorry for my bad English.
Thanks so much.
Hi Dimitri,
It’s always interesting to hear what other dedup solutions are doing. DataDomain uses sliding-window deduplication, whereas lessfs uses fixed-blocksize deduplication. I have been hesitant to add sliding windows to lessfs because of patent issues.
Can you give an example of the dedup ratios that you see on DataDomain and lessfs, and what type of data you are testing with?
Lessfs is known to work very well with raw disk images and when you copy regular files to it. When you use Lessfs to store tar or zip archives, the compression will be low or nonexistent, since the offsets in the tar archive will differ each time.
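To illustrate the fixed-blocksize point, here is a small, self-contained sketch (assuming 4 KiB blocks and a toy FNV-1a hash in place of the stronger digest lessfs actually uses): hashing the same data twice yields all-matching blocks, but shifting the data by a single byte, as happens when a file inside a tar archive changes size, misaligns the block boundaries so essentially nothing deduplicates. Sliding-window (content-defined) chunking addresses exactly this by deriving block boundaries from the data itself rather than from fixed offsets.
---
/*
 * Sketch only: 4 KiB fixed blocks and a toy FNV-1a hash stand in for the
 * real lessfs block size and digest.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLKSIZE 4096

/* Toy 64-bit FNV-1a hash, standing in for the real block digest. */
static uint64_t block_hash(const unsigned char *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* Count how many fixed-size blocks of two buffers hash identically. */
static size_t matching_blocks(const unsigned char *a, const unsigned char *b,
                              size_t len)
{
    size_t matches = 0;
    for (size_t off = 0; off + BLKSIZE <= len; off += BLKSIZE)
        if (block_hash(a + off, BLKSIZE) == block_hash(b + off, BLKSIZE))
            matches++;
    return matches;
}

int main(void)
{
    size_t len = 64 * BLKSIZE;
    unsigned char *orig = malloc(len);
    unsigned char *shifted = malloc(len);

    /* Fill with repeatable pseudo-random content. */
    srand(42);
    for (size_t i = 0; i < len; i++)
        orig[i] = (unsigned char)rand();

    /* Same data copied unchanged: every block deduplicates. */
    memcpy(shifted, orig, len);
    printf("identical copy: %zu of %zu blocks match\n",
           matching_blocks(orig, shifted, len), len / BLKSIZE);

    /* Same data shifted by one byte: block boundaries no longer line up,
     * so almost no block hashes match anymore. */
    shifted[0] = 0xff;
    memmove(shifted + 1, orig, len - 1);
    printf("shifted by one: %zu of %zu blocks match\n",
           matching_blocks(orig, shifted, len), len / BLKSIZE);

    free(orig);
    free(shifted);
    return 0;
}
---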
Hi Maru,
thanks so much for understanding my point.
My tests are very simple. I’m working on the Mondo Rescue disaster recovery project, so we use it to create images for bare-metal restore. I have created images of a few servers using the same RHEL release (5.5), with and without compression; you can see the files and their sizes below:
filesys show compression /backup/
---
629766144 sdlmoc01-4k_boot-1.iso
1659176960 sdlmoc01-4k_boot-nocomp-04032011-1.iso
1165178880 sglvms52-4k_boot-03032011-1.iso
1092429824 sglvms52-4k_boot-1.iso
3985399808 sglvms52-4k_boot-nocompressione-07032011-1.iso
4615309312 sglvms52-4k_boot-nocompressione207032011-1.iso
---
Some of the images are gzipped and some are not, but with some files changed inside them (hence the “offset” problem).
This is the result of the DataDomain command:
Total files: 6; bytes/storage_used: 4.7
Original Bytes: 13,188,470,376
Globally Compressed: 4,026,972,574 (deduplicated)
Locally Compressed: 2,807,467,410 (deduplicated + compressed)
Meta-data: 13,160,296
---
So the sum of all the files is about 12 GiB, and deduplicated by DD it becomes only about 2.5 GiB! Very good performance!
Now I show the same files using LessFS:
---
[root@sglvms51 mondo-img]# cat /fuse/.lessfs/lessfs_stats
INODE SIZE COMPRESSED_SIZE FILENAME
10 0 0 lessfs_stats
14 0 0 enabled
15 0 0 backlog
20 629766144 110648854 sdlmoc01-4k_boot-1.iso
21 1659176960 808317386 sdlmoc01-4k_boot-nocomp-04032011-1.iso
22 1165178880 1034221145 sglvms52-4k_boot-03032011-1.iso
23 1092429824 976047268 sglvms52-4k_boot-1.iso
24 3985399808 1378525345 sglvms52-4k_boot-nocompressione-07032011-1.iso
25 4615309312 239538235 sglvms52-4k_boot-nocompressione207032011-1.iso
---
I think “sliding windows” would be very useful for many file types, for example as backup storage for Bacula or other backup software, or for simple tar backups.
Please ask me for more info if you need it.
Many thanks for your great work!!
Oops, sorry, I forgot to report the totals from LessFS and DD:
DataDomain: 2,807,467,410
LessFS: 4,547,298,233
So LessFS is roughly 1.6 times larger than DD (4,547,298,233 / 2,807,467,410 ≈ 1.62).
Many thanks Maru!!
Hi,
just to let you know that I still use lessfs. I just updated from 1.3.3.1 to 1.3.3.8, and everything runs fine on my 32-bit systems. The upgraded version lets me “du” folders faster than before, even on my low-performance machines.
Thank you for your huge work on this project.