The B+Tree classes provided in the Virtual Machinery libraries are very efficient. The table below shows the times taken (in milliseconds) for the standard test (described below the table) across a number of operating environments and CPUs.
The standard test consists of 10000 random additions of entries with an average index length of 30 characters and an average data length of 52 characters. This is then followed by a further 20000 random operations divided randomly and roughly equally among additions, deletions and fetches.
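The workload above can be sketched in code. This is only an illustration of the test's shape using `java.util.TreeMap` as an in-memory stand-in; the real test runs against the disk-backed Virtual Machinery B+Tree, whose API is not shown here. The key and data lengths vary around the stated averages.

```java
import java.util.Random;
import java.util.TreeMap;

// Sketch of the standard test workload, using TreeMap as a stand-in
// for the disk-backed B+Tree (hypothetical harness, not the real API).
public class StandardTestSketch {
    static String randomString(Random rnd, int avgLen) {
        int len = avgLen - 5 + rnd.nextInt(11);   // vary length around the average
        StringBuilder sb = new StringBuilder(len);
        for (int i = 0; i < len; i++) sb.append((char) ('a' + rnd.nextInt(26)));
        return sb.toString();
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        TreeMap<String, String> tree = new TreeMap<>();
        long start = System.currentTimeMillis();

        // Phase 1: 10000 random additions (avg key length 30, avg data length 52).
        for (int i = 0; i < 10000; i++) {
            tree.put(randomString(rnd, 30), randomString(rnd, 52));
        }
        // Phase 2: 20000 further operations, split roughly equally
        // among additions, deletions and fetches.
        for (int i = 0; i < 20000; i++) {
            switch (rnd.nextInt(3)) {
                case 0:
                    tree.put(randomString(rnd, 30), randomString(rnd, 52));
                    break;
                case 1:
                    String victim = tree.floorKey(randomString(rnd, 30));
                    if (victim != null) tree.remove(victim);
                    break;
                default:
                    tree.get(randomString(rnd, 30));
            }
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("Standard test took " + elapsed + " ms, "
                + tree.size() + " entries remain");
    }
}
```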
This implementation of the BTree includes a cache, which allows any number of index and data pages in the tree to be held in memory. This significantly reduces access times, since the number of relatively slow disk accesses is reduced. Although both index and data pages can be held in memory, if there is insufficient memory to hold all of the pages it is better to hold index pages rather than data pages. This is because index pages are accessed more frequently: any access to the data must first pass through the index page 'tree'. The following table shows comparative figures for various levels of caching of index and data pages.
If you wish to use a cached BTree, you can set the number of index and data pages to be buffered in the methods that create new BTrees and open existing ones.
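One common way to keep a fixed number of pages in memory is a least-recently-used cache. The sketch below, built on `java.util.LinkedHashMap` in access order, is an assumption about how such a cache might look; the library's own cache and creation methods will differ. Pages are identified here by a hypothetical `long` file offset.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU page-cache sketch (illustrative only, not the library's cache).
// P is the page type; keys are hypothetical file offsets.
public class PageCache<P> extends LinkedHashMap<Long, P> {
    private final int maxPages;

    public PageCache(int maxPages) {
        super(16, 0.75f, true);   // accessOrder = true gives LRU iteration order
        this.maxPages = maxPages;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, P> eldest) {
        return size() > maxPages; // evict the least-recently-used page on overflow
    }
}
```

Because every lookup passes through the index pages before reaching a data page, giving the index cache a larger `maxPages` budget than the data cache pays off first when memory is scarce.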
Key Compression
Further space efficiency is provided by compressing the keys. A number of algorithms are available to do this, depending on the nature of the keys. The method used here is to substitute for the prefix that each key shares with the preceding key, e.g. if you have three keys in the one block -
These are represented in the file as -
Where the ASCII value of the character at + indicates the number of characters shared with the previous record (up to a maximum of 255). If + is 0, no compression is applied. The first index record in a page is never compressed. Compression introduces an overhead to the manipulation of data in the BTree, but shows definite advantages in very large trees, where sufficient commonality of index prefixes allows a reduction in the number of index levels and hence in access time. It also allows the programmer to give the data sensible keys without worrying unnecessarily about the space overhead.
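The scheme described above can be sketched as follows. The key values used in the test are invented for illustration; only the encoding idea (a one-character shared-prefix count followed by the differing suffix, with the first record stored whole) comes from the description above.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of shared-prefix key compression as described in the text.
// Each record after the first stores a one-character count of characters
// shared with the previous key, followed by the remaining suffix.
public class PrefixCompression {
    static List<String> compress(List<String> keys) {
        List<String> out = new ArrayList<>();
        String prev = null;
        for (String key : keys) {
            if (prev == null) {
                out.add((char) 0 + key);   // first record is never compressed
            } else {
                int common = 0;
                int limit = Math.min(255, Math.min(prev.length(), key.length()));
                while (common < limit && prev.charAt(common) == key.charAt(common)) {
                    common++;
                }
                out.add((char) common + key.substring(common));
            }
            prev = key;
        }
        return out;
    }

    static List<String> decompress(List<String> records) {
        List<String> out = new ArrayList<>();
        String prev = "";
        for (String rec : records) {
            int common = rec.charAt(0);    // shared-prefix count from previous key
            String key = prev.substring(0, common) + rec.substring(1);
            out.add(key);
            prev = key;
        }
        return out;
    }
}
```

With keys such as `customer/0001`, `customer/0002`, `customer/0107`, the second record stores only a count of 12 plus `2`, and the third a count of 10 plus `107`, so long common prefixes cost almost nothing per key.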
Click here for pricing and licensing details and to buy Virtual Machinery's B+Tree implementation online.
All Content © 2017 Virtual Machinery All Rights Reserved.
Some of the icons on this page are generously provided by MySiteMyWay