This release fixes a major bug in the truncate code. The bug caused lessfs to crash when used with ccrypt and rsync --inplace.
Is there a limit on the size of files that run through the dedup process? I’m asking because we’re planning to use lessfs for our backup storage, and I tried it with an external 750 GB hard disk. The backup files are split into 2 GB pieces. Looking into the log tells me that no duplicate blocks are found. That can’t be right, since we backed up the same server twice.
Backup is done by Veritas Backup Exec. Maybe this helps to find a solution.
Big thanks for your work & help
Sadly enough, you can’t use Veritas Backup Exec and expect deduplication to work.
Let me explain why. Your backup software does not store the backup data at full-blocksize offsets. This renders dedup useless as soon as even one of the files on the disk grows: all subsequent data then starts at a different offset, and lessfs (or any other block-based dedup solution) sees it as entirely new blocks.
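The offset problem can be sketched in a few lines of Python. This is an illustration of the general principle, not lessfs internals: a hypothetical block-based dedup engine hashes fixed-size blocks, and a single inserted byte shifts every following block so that no hash matches anymore.

```python
import hashlib
import random

BLOCK_SIZE = 4096  # typical dedup block size; lessfs uses a configurable blocksize

def block_hashes(data, block_size=BLOCK_SIZE):
    """Split data into fixed-size blocks and hash each one,
    the way a block-based dedup engine identifies duplicates."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

# 16 KiB of reproducible pseudo-random sample data (4 full blocks)
random.seed(42)
original = bytes(random.getrandbits(8) for _ in range(4 * BLOCK_SIZE))

# The "grown file" case: one extra byte at the front shifts everything
shifted = b"X" + original

a = block_hashes(original)
b = block_hashes(shifted)

# Although 16 KiB of the data is byte-for-byte identical, every block
# now starts one byte later, so not a single block hash matches and
# the dedup engine finds nothing to share.
common = set(a) & set(b)
print(len(common))  # 0
```

This is exactly why the alignment of the backup stream matters more than the actual content: the identical data is all still there, it just no longer falls on the same block boundaries.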
I am not even speaking about what happens when you back up multiple clients with Backup Exec: the backup streams are then stored interleaved on lessfs.
Commercial VTL vendors try to recognize the data streams of the major backup products and align them so that dedup makes sense.
What can I do to make dedup work with Backup Exec? Nothing?
I am afraid that your options are limited. You can try to see what happens if you back up only one server at a time, so that no client data duplexing occurs. You should also disable compression if Backup Exec has such a feature.