btier-1.1.2 has been released

This version of btier has proven to be very stable. One of the problems that has now been solved is a deadlock that could occur under high VFS cache pressure, mostly when btier was used without writethrough enabled on a system with limited memory.

On older releases, raising vfs_cache_pressure to 150 or higher greatly reduces the risk of running into this problem:
echo 150 > /proc/sys/vm/vfs_cache_pressure
However, upgrading to the latest version is of course the best option!
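A value written to /proc does not survive a reboot. For systems staying on an older release, the workaround can be made persistent with standard sysctl mechanics (this is generic Linux administration, not btier-specific):

```shell
# Apply immediately (equivalent to the echo above):
sysctl -w vm.vfs_cache_pressure=150

# Persist across reboots:
echo 'vm.vfs_cache_pressure = 150' >> /etc/sysctl.conf
```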

The latest release also delivers excellent performance. With a PCIe SSD as the first tier, btier reaches around 130k random 4K IOPS with writethrough disabled, and around 80k random 4K IOPS with writethrough enabled.

btier exposes an API that gives users full control over the placement of individual data blocks.
Example code in the distribution illustrates how users can create their own data migration scripts or tools to query data placement.
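As a rough illustration of what such a placement-query tool might look like, the sketch below reads block placement through btier's sysfs interface. The device name (sdtiera) and attribute name (show_blockinfo) are assumptions; check the example code shipped with the distribution for the exact names your btier version uses.

```shell
# Hypothetical sketch: report which underlying tier currently holds a
# given block of the btier virtual device. The sysfs path and attribute
# names are assumptions and may differ between btier versions.
query_block() {
    dev=$1
    block=$2
    sysfs="/sys/block/$dev/tier"
    if [ ! -d "$sysfs" ]; then
        # No btier sysfs tree for this device; fail cleanly.
        echo "btier device $dev not found" >&2
        return 1
    fi
    # Select the block to inspect, then read back its placement info.
    echo "$block" > "$sysfs/show_blockinfo"
    cat "$sysfs/show_blockinfo"
}

# Usage: query_block sdtiera 1024
```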



3 Responses to btier-1.1.2 has been released

  1. Jason Fisher says:

    I am very interested in researching this as a method for replicating a large pool (1000) of very similar PHP applications (90% duplicate) to a pool of distributed application servers. Maybe you could weigh in on the concept?


    - 10 servers, each with 40GB SSD and 4GB RAM

    - CephFS/Distributed filesystem, 20GB from each server goes into the pool = 200GB = ~150GB usable storage

    - LessFS on one ‘monitor’ server that mounts the CephFS pool, mounts the LessFS image inside the pool and uses iSCSI/NFS? to share

    - Each app server mounts the iSCSI/NFS

    - btier with 512MB RAM cache tier and 5GB SSD cache tier

    - nginx serving PHP pages through btier

    - 500GB total uncompressed application pool size, with dedupe+compression, let’s say 80GB lessfs volume usage

    My thinking is that with this setup, each application server then becomes capable of serving any application, but becomes the ‘best’ at serving a small pool. The pool size can be dynamic and is basically controlled entirely by the load balancer. If the load balancer sends applications 1-100 to servers 1, 2, 3, then eventually those applications will be warmed into the 5GB cache tier. Any writes back to the pool would hit the RAM/SSD tier to keep the application responsive before sending to the network tier. If you have more applications, add more servers and everything grows relatively linearly.

    A few things that I would be looking to do with an API:

    - Periodically analyze/report which files/applications exist in RAM/SSD and compare with a global/recent eviction list to keep the active caches as recent as possible.

    - Can compression be enabled on the SSD/RAM caches? PHP scripts are very compressible, and CPU is cheap when you are primarily just delivering the appropriate cached application pages.

    - Add special rules around directories, i.e. if all applications are stored as /opt/data/applications/app1/modules/something/code.php, and that is the file being read and cached into local SSD, then I always want /opt/data/applications/app1 (4th deep) to travel with it, so the application can avoid slow network directory recursion, and potentially the metadata for that to always be in memory in that case?

  2. Jason Fisher says:

    Left out something ..

    Could btier be re-integrated back into the lessfs engine, so the RAM/SSD tiers are themselves lessfs stores? i.e., if I am caching 10 applications on local SSD that are essentially the same, it would be worth running a local ‘lessfs’ (preferably btier speaking to lessfs without fuse) in that space to optimize it as much as possible.

  3. Jason Fisher says:

    I suppose the forced tiering would simply be a repetitive/periodic tar of the desired cached file structure to /dev/null.
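    The periodic-tar idea could be sketched as a simple cache-warming script. The application path below is hypothetical, taken from the directory layout described earlier in this thread:

```shell
# Hypothetical sketch of the periodic cache-warming idea: reading every
# file in an application tree with tar forces those blocks through the
# fast tier, without writing anything anywhere.
warm_cache() {
    tar -cf - "$1" > /dev/null 2>&1
}

# Example: run from cron for the "hot" application set, e.g.
#   */15 * * * * warm_cache /opt/data/applications/app1
```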

    The most important factor that I omitted: Each application has an average of 20,000 files and an average file size of 15KB, so there would be approximately 20 million files in the filesystem in this example case.
