This version of Lessfs contains some minor bug fixes as well as something new. Although my previous post stated that no new features would be added to the 1.x series, this release does add one: Lessfs now supports the LZ4 compression algorithm. Adding support for a new compression method to Lessfs is not much work, and there have been a number of requests for LZ4 so that it can be compared with Google's Snappy.
I have not tested LZ4 on high-end hardware yet. However, even on my laptop it is clear that LZ4 outperforms Snappy. With the hardware being the bottleneck, LZ4 still manages to speed things up by 2 to 5%. Most likely the difference will be larger on fast hardware. The system that I use for performance testing has Berkeley DB stored on SSD and the data on a fast RAID 5 array containing 8 SATA drives.
I will post the exact performance numbers on low and high end hardware after testing has finished.
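For anyone who wants to run this kind of comparison themselves, the methodology can be sketched in Python. Since LZ4 and Snappy bindings may not be installed, zlib at two settings stands in for two compressors of different speeds; everything below is illustrative and not lessfs code.

```python
import time
import zlib

def throughput_mb_s(compress, data, runs=5):
    """Return compression throughput in MB/s, best of `runs` passes."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        compress(data)
        best = min(best, time.perf_counter() - start)
    return len(data) / best / 1e6

# Repetitive data, similar to typical backup payloads, compresses well
# and keeps the measurement stable across runs.
data = b"lessfs block payload " * 4096

fast = throughput_mb_s(lambda d: zlib.compress(d, 1), data)  # fast setting
slow = throughput_mb_s(lambda d: zlib.compress(d, 9), data)  # thorough setting
print(f"level 1: {fast:.0f} MB/s, level 9: {slow:.0f} MB/s")
```

Taking the best of several passes filters out scheduler noise; for filesystem-level numbers you would of course measure through the mounted filesystem instead, as the overall result also depends on disk and database latency.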
Enjoy,
Mark
I can’t wait for the figures…
Does this release fix the following problem I am seeing with lessfs under heavy load?
lessfs freezes and I get this in the kernel log:
Feb 5 02:51:04 mythserv kernel: INFO: task lessfs:2687 blocked for more than 120 seconds.
Feb 5 02:51:04 mythserv kernel: “echo 0 > /proc/sys/kernel/hung_task_timeout_secs” disables this message.
Feb 5 02:51:04 mythserv kernel: lessfs D ffff88006faf1998 0 2687 1 0x00000000
Feb 5 02:51:04 mythserv kernel: ffff880066579828 0000000000000086 ffff8800665797b8 ffffffff00000000
Feb 5 02:51:04 mythserv kernel: ffff88006faf1620 0000000000013300 ffff880066579fd8 ffff880066578010
Feb 5 02:51:04 mythserv kernel: ffff880066579fd8 0000000000013300 ffffffff81655020 ffff88006faf1620
Feb 5 02:51:04 mythserv kernel: Call Trace:
Feb 5 02:51:04 mythserv kernel: [] schedule+0x3f/0x60
Feb 5 02:51:04 mythserv kernel: [] get_active_stripe+0x2ea/0x790 [raid456]
Feb 5 02:51:04 mythserv kernel: [] ? try_to_wake_up+0x2b0/0x2b0
Feb 5 02:51:04 mythserv kernel: [] make_request+0x1ae/0x460 [raid456]
Feb 5 02:51:04 mythserv kernel: [] ? wake_up_bit+0x40/0x40
Feb 5 02:51:04 mythserv kernel: [] md_make_request+0xd5/0x200
Feb 5 02:51:04 mythserv kernel: [] generic_make_request+0xbf/0xf0
Feb 5 02:51:04 mythserv kernel: [] submit_bio+0x85/0x110
Feb 5 02:51:04 mythserv kernel: [] ? __bio_add_page+0x110/0x250
Feb 5 02:51:04 mythserv kernel: [] xfs_submit_ioend_bio+0x57/0x80 [xfs]
Feb 5 02:51:04 mythserv kernel: [] xfs_submit_ioend+0xf6/0x110 [xfs]
Feb 5 02:51:04 mythserv kernel: [] xfs_vm_writepage+0x230/0x500 [xfs]
Feb 5 02:51:04 mythserv kernel: [] __writepage+0x17/0x40
Feb 5 02:51:04 mythserv kernel: [] write_cache_pages+0x221/0x4a0
Feb 5 02:51:04 mythserv kernel: [] ? tomoyo_init_request_info+0x3f/0x70
Feb 5 02:51:04 mythserv kernel: [] ? set_page_dirty+0x70/0x70
Feb 5 02:51:04 mythserv kernel: [] generic_writepages+0x51/0x80
Feb 5 02:51:04 mythserv kernel: [] xfs_vm_writepages+0x53/0x70 [xfs]
Feb 5 02:51:04 mythserv kernel: [] do_writepages+0x21/0x40
Feb 5 02:51:04 mythserv kernel: [] __filemap_fdatawrite_range+0x5b/0x60
Feb 5 02:51:04 mythserv kernel: [] filemap_write_and_wait_range+0x5a/0x80
Feb 5 02:51:04 mythserv kernel: [] xfs_file_fsync+0x68/0x2d0 [xfs]
Feb 5 02:51:04 mythserv kernel: [] vfs_fsync_range+0x2b/0x40
Feb 5 02:51:04 mythserv kernel: [] vfs_fsync+0x1c/0x20
Feb 5 02:51:04 mythserv kernel: [] do_fsync+0x3a/0x60
Feb 5 02:51:04 mythserv kernel: [] sys_fsync+0x10/0x20
Feb 5 02:51:04 mythserv kernel: [] system_call_fastpath+0x16/0x1b
Hello,
Can you replace/add/test the Tiger hash algorithm with a hash from http://code.google.com/p/smhasher/ ?
The MurmurHash3 benchmarks are very impressive, and several other projects use it.
Regards,
Nicolas
Nicolas, from the website it appears to be a non-cryptographically-strong hash function. Be careful with such things when it comes to deduplication: a malicious user could deduplicate against bogus data and thereby destroy another user’s backups, or get hold of their data. I would like to see MurmurHash3 in lessfs *only* if it is selectable as an OPTION.
Regards
Jean
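Jean’s concern can be illustrated with a toy sketch (all names here are hypothetical, not lessfs code): a dedup store keys blocks by their hash, so if the hash is weak enough to collide, a crafted block silently maps onto someone else’s data. The hash below is deliberately truncated to one byte so a collision is trivial to find; the same attack against a full cryptographic digest such as Tiger or SHA-256 is what must be computationally infeasible.

```python
import hashlib

def weak_hash(block: bytes) -> bytes:
    # DELIBERATELY weak: only the first byte of SHA-256, so
    # collisions are easy to find. A real dedup store must key
    # on a full cryptographic digest.
    return hashlib.sha256(block).digest()[:1]

store = {}  # weak hash -> block contents

def dedup_write(block: bytes) -> bytes:
    key = weak_hash(block)
    store.setdefault(key, block)  # first writer wins; later blocks dedup to it
    return key

# Craft a second, different block that collides with the first one.
first = b"legitimate user data"
second = next(
    f"bogus block {i}".encode()
    for i in range(1_000_000)
    if weak_hash(f"bogus block {i}".encode()) == weak_hash(first)
)

k1 = dedup_write(first)
k2 = dedup_write(second)
assert k1 == k2              # the store considers them "the same" block
assert store[k2] != second   # reading `second` back yields `first`'s data
```

With the roles reversed (attacker writes first), the colliding bogus block would be what every later reader gets back, which is exactly the backup-corruption scenario described above.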
Is anyone getting reasonable performance out of lessfs? My backend is a fast SAS-connected Dell RAID 5 that I can write to at 850MB/s.
Lessfs on top of it writes at 30-40MB/s at most, and gets slower and slower.
I’m using Berkeley DB 1.4.8 and lzo with fill-in.
After putting in 300GB of data, it dropped to 7MB/s. Not really usable even as a backup solution.
What can I tune?
Thanks.
How did you configure Lessfs?
I suspect that you have compiled it with debugging enabled.
Even my laptop does more than 30MB/sec.
Why did you choose LZO instead of Snappy or LZ4?
Both are far faster.