This release fixes a silly mistake that would lead to freeing a NULL pointer.
Oops..
Hi Mark,
Thanks for your great work, and merry Christmas!
I have tested lessfs for backup/replication of my KVM machine from my site to another, and I have some problems…
I tested with Fedora 14 x64 as the master and two other boxes (both 32-bit, one Fedora 12 and one Fedora 13) as slaves, connected through OpenVPN.
On the master side, dedup works without problems, but as soon as I start the slave server, after some handshaking I get this error:
----
/mnt/fuse': Transport endpoint is not connected
----
I have used lessfs 1.2.2.2 through 1.2.2.6 with FUSE 2.8.5 and Tokyo Cabinet 1.4.46.
Many thanks for the help.
Sincerely
Hi Mark,
Thanks for your fast reply. The crash happened on the slave, but I made another test using the same 32-bit platform for both master and slave, and now there are no more crashes! So I think the problem is related to Tokyo Cabinet on mixed 64-bit and 32-bit platforms.
But now, during an upload of a backup image to the master server, I see this:
----
nibiru:/mnt/backspace/backup# scp qemu-103.tar 10.10.11.90:/mnt/fuse/
qemu-103.tar 54% 514MB 0.0KB/s - stalled -
----
It seems to be a problem with synchronous dedup from the master to the slave server; what do you think?
I have thought of another type of scenario for lessfs replication, for example:
1. Upload/copy the file to the master lessfs server, then later (or as soon as possible) replicate the data to the slave server.
Many thanks for your help!
Hi Mark, I just tested lessfs performance today, and it turns out that small-file write performance is a lot slower than writing to NTFS-3G on Ubuntu 10.10 (3 minutes vs. 21 minutes with 16,000 files). I use a quad-core Core CPU with 4 GB of RAM, and my config is as follows. My question is: did I do anything wrong, or is this the expected performance? Thank you.
Regards,
Charles
# Enable informational messages about compression.
DEBUG = 5
HASHNAME=MHASH_TIGER192
#HASHNAME=MHASH_SHA256
# The (chopped) hashlen in bytes, minimum is 20.
HASHLEN = 24
#BLOCKDATA_IO_TYPE=file_io
#BLOCKDATA_PATH=/data/dta/blockdata.dta
BLOCKDATA_PATH=/data/dta
BLOCKDATA_BS=1048576
#
BLOCKUSAGE_PATH=/data/mta
BLOCKUSAGE_BS=1048576
#
DIRENT_PATH=/data/mta
DIRENT_BS=1048576
#
FILEBLOCK_PATH=/data/mta
FILEBLOCK_BS=1048576
#
META_PATH=/data/mta
META_BS=1048576
#
HARDLINK_PATH=/data/mta
HARDLINK_BS=1048576
#
SYMLINK_PATH=/data/mta
SYMLINK_BS=1048576
#
# The freelist database is only used
# with the file_io backend
#
FREELIST_PATH=/data/mta
FREELIST_BS=1048576
#
# CACHESIZE in MB
#CACHESIZE=1024
CACHESIZE=256
# Flush data to disk after X seconds.
COMMIT_INTERVAL=30
#
LISTEN_IP=127.0.0.1
LISTEN_PORT=100
# Not more than 2 on most machines.
MAX_THREADS=8
#DYNAMIC_DEFRAGMENTATION on or off, default is off.
#DYNAMIC_DEFRAGMENTATION=on
COREDUMPSIZE=2560000000
# Consider SYNC_RELAX=1 or SYNC_RELAX=2 when exporting lessfs with NFS.
SYNC_RELAX=0
# When BACKGROUND_DELETE=on lessfs will spawn a thread to delete
# a file as a background task. This is a recently added feature
# and is therefore disabled by default.
BACKGROUND_DELETE=on
# Requires openssl and lessfs has to be configured with --with-crypto
ENCRYPT_DATA=off
# ENCRYPT_META on or off, default is off
# Requires ENCRYPT_DATA=on and is otherwise ignored.
ENCRYPT_META=on
# You don’t like fsck?
ENABLE_TRANSACTIONS=on
# Select a blocksize to fit your needs.
BLKSIZE=131072
#BLKSIZE=65536
#BLKSIZE=32768
#BLKSIZE=16384
#BLKSIZE=4096
#COMPRESSION=none
#COMPRESSION=qlz
#COMPRESSION=lzo
#COMPRESSION=bzip
#COMPRESSION=deflate
COMPRESSION=disabled
#REPLICATION=masterslave
#REPLICATION_PARTNER_IP=127.0.0.1
#REPLICATION_PARTNER_PORT=101
#REPLICATION_ROLE=master
#REPLICATION_LISTEN_IP=127.0.0.1
#REPLICATION_LISTEN_PORT=101
Hi Mark,
I guess I just encountered a lessfs bug when I tried to time a copy of 1.6 GB of picture files from an external USB disk (NTFS) to a lessfs volume:
0.13user 2.81system 1:33.89elapsed 3%CPU (0avgtext+0avgdata 4624maxresident)k
3144888inputs+0outputs (0major+374minor)pagefaults 0swaps
Regards,
Charles
Hey Mark,
Just a minor thing, but I think your versioning is off. I just installed the lessfs 1.2.2.6 version, but when I do lessfs -v I see "lessfs 1.2.2.3".
Greetz,
Daan
Yes, I must have forgotten to update the version number.
You can change it in configure.ac.
Then run autoreconf, automake, ./configure, and make to create a binary with the correct version number.
In your FAQ files you say "Fuse and kernelspace NFS does not seem to be stable yet". In what ways is it not stable?
I've got it working using vMA and ghettoVCBg2 to back up virtual machines from ESX. It seems to be doing mostly OK, but occasionally a file will disappear just as the copy completes, and I'm not sure what to blame; I'm looking for a starting point.
Lessfs is currently using the high level Fuse API.
A point of concern comes from the Fuse documentation:
2) high-level interface
Because the high-level interface is path based, it is not possible to
delegate looking up by inode to the filesystem.
To work around this, currently a “noforget” option is provided, which
makes the library remember nodes forever. This will make the NFS
server happy, but also results in an ever growing memory footprint for
the filesystem. For this reason if the filesystem is large (or the
memory is small), then this option is not recommended.
The good news is that I am working on lessfs2, which will use the low-level Fuse API and provide many more features, like snapshots.
This should work better with NFS. Disappearing files would be a reason for concern, though!
Is there a way to reproduce the problem?
I don't seem to be able to make it happen on demand, but it is happening often. What's worse, the VI Perl API that's being used to copy the files between the VMFS and NFS is returning "success", so I don't even know that it's gone wrong until I go looking for the file.
I'm running it on an older AMD MP 2000+ system with 1 GB of RAM; I'm not sure if it being slow and 32-bit is part of the problem. I'm setting up a 64-bit system to try now.