SAP BI Accelerator
Those who have worked on squeezing as much performance as possible out of aggregates can appreciate the never-ending and time-consuming nature of analyzing user behavior and creating optimal aggregates for their use (while balancing the impact this has on data management activities such as aggregate roll-ups, attribute change runs, and dreaded re-initialization scenarios). Aggregate optimization strategies for end-user performance are a subject for the section on “Aggregates” in Chapter 12. From an administrator’s perspective, optimizing aggregates might consist of a roll-up hierarchy plan (filling aggregates from aggregates). From a modeler’s perspective, aggregate optimization may consist of aggregating data into separate data targets to avoid the performance impact attribute change runs have on aggregates.
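To make the roll-up hierarchy idea concrete, here is a minimal Python sketch (the data, column names, and aggregate function are invented for illustration and are not SAP code) that fills a coarser aggregate from a finer one instead of re-reading the base fact table:

from collections import defaultdict

# Hypothetical fact rows: (customer, material, month, revenue).
fact_rows = [
    ("C1", "M1", "2024-01", 100.0),
    ("C1", "M2", "2024-01", 250.0),
    ("C2", "M1", "2024-02", 80.0),
    ("C2", "M2", "2024-02", 120.0),
]

def aggregate(rows, key_fn):
    """Summarize rows by a grouping key -- the essence of a BW aggregate."""
    totals = defaultdict(float)
    for row in rows:
        totals[key_fn(row)] += row[-1]
    return [(*key, total) for key, total in totals.items()]

# Fine aggregate: by customer and month (the material level is dropped).
by_customer_month = aggregate(fact_rows, lambda r: (r[0], r[2]))

# Roll-up hierarchy plan: fill the coarser aggregate (by month only)
# from the finer aggregate rather than from the base fact table.
by_month = aggregate(by_customer_month, lambda r: (r[1],))

print(by_customer_month)  # [('C1', '2024-01', 350.0), ('C2', '2024-02', 200.0)]
print(by_month)           # [('2024-01', 350.0), ('2024-02', 200.0)]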
The BI Accelerator simplifies this world by eliminating much of this data redundancy through an innovative indexing scheme that leverages TREX, SAP’s proprietary search engine technology. Conceptually there is only one “aggregate,” and that is the BI Accelerator index. An InfoCube can have both aggregates and a BI Accelerator index simultaneously, but only one or the other can be active at any given time (that is, you can toggle between the two to evaluate which option is preferred).
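As a rough sketch of this either/or activation rule (the class and method names are invented for this example and do not correspond to SAP objects):

from dataclasses import dataclass

@dataclass
class CubeQueryAccess:
    """Toy model of the activation toggle; not an SAP object."""
    aggregates_active: bool = True
    bia_index_active: bool = False

    def activate_bia_index(self) -> None:
        # Activating the BIA index implicitly deactivates aggregates.
        self.bia_index_active = True
        self.aggregates_active = False

    def activate_aggregates(self) -> None:
        # Toggling back lets you compare query performance between the two.
        self.aggregates_active = True
        self.bia_index_active = False

cube = CubeQueryAccess()
cube.activate_bia_index()    # evaluate queries against the BIA index
cube.activate_aggregates()   # switch back to compare against aggregates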
From a data-maintenance perspective, maintaining a BI Accelerator index is very similar to maintaining an aggregate: there is an initial build and fill of the index, as well as roll-ups. However, there are differences to be noted.
From a performance-tuning perspective, the differences are much more apparent. While aggregates must be manually optimized based on end-user behavior, the plan is for the BI Accelerator to adjust and index itself automatically (that is, zero administration from this perspective). The goal of the BI Accelerator is to deliver automatic monitoring, configuration, optimization, and self-repair of the index and the TREX-based BI Accelerator engine.
The build and fill of a BI Accelerator index is triggered manually via the data target context menu, while roll-ups can be scheduled as process chain variants (the exact same process type used for aggregates). Like aggregates, BI Accelerator indexes can be toggled active and inactive manually, and they need an attribute change run scheduled after navigational attributes are changed via a master data load.
The data management impact of an attribute change run is much smaller than it is for aggregates. This is because aggregates store navigational attributes inside the extended star schema (like a mini-InfoCube), while the BI Accelerator index is predicated on the InfoCube data model, where navigational attributes are stored outside the extended star schema. As a result, adjusting the BI Accelerator index is like adjusting master data (no realignments are needed).
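The following sketch, using an invented toy data model rather than the actual BW tables, shows why the BI Accelerator side needs no realignment while the aggregate side does:

# Master data: material -> material group (a navigational attribute).
master_data = {"M1": "GROUP_A", "M2": "GROUP_B"}

# An aggregate built by material group materializes the attribute value
# inside its own star schema, so the stored totals embed the old groups.
aggregate_by_group = {"GROUP_A": 100.0, "GROUP_B": 370.0}

# The BI Accelerator index stores only the material key; the group is
# resolved through master data at query time.
bia_index = [("M1", 100.0), ("M2", 250.0), ("M2", 120.0)]

# Attribute change: M2 moves from GROUP_B to GROUP_A.
master_data["M2"] = "GROUP_A"

# BIA side: nothing to realign -- queries resolve M2 via updated master data.
totals = {}
for material, revenue in bia_index:
    group = master_data[material]
    totals[group] = totals.get(group, 0.0) + revenue
print(totals)  # {'GROUP_A': 470.0}

# Aggregate side: aggregate_by_group still embeds the old assignment and
# must be realigned before use -- the expensive part of the change run.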
From a different perspective, InfoCube compression works differently for aggregates and the BI Accelerator index. Again, aggregates are like mini-InfoCubes: they have a request dimension and use compressed (the E table) and uncompressed (the F table) fact tables just like their underlying InfoCube. This makes deletion of a specific request out of an InfoCube easy before compression is run (otherwise, the aggregates must be rebuilt).
After compressing an InfoCube, it makes sense to compress the corresponding requests in the aggregates for data conservation and performance reasons.
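Here is a minimal sketch of request-level compression with invented structures: the F table keeps one row per request, and compression collapses the request dimension into the E table, after which individual requests can no longer be surgically deleted:

from collections import defaultdict

# F table rows (request_id, customer, revenue): one row per load request.
f_table = [
    (1, "C1", 100.0),
    (2, "C1", 50.0),
    (2, "C2", 80.0),
]

def compress(f_rows):
    """Drop the request dimension and sum remaining keys into the E table."""
    e_table = defaultdict(float)
    for _request_id, customer, revenue in f_rows:
        e_table[customer] += revenue
    return dict(e_table)

# Before compression, a bad request can simply be filtered out by its ID:
f_table = [row for row in f_table if row[0] != 2]

# After compression the request identity is gone, so this kind of
# surgical deletion is no longer possible.
print(compress(f_table))  # {'C1': 100.0} -- request 2 removed pre-compression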
Data compression is not necessary for BI Accelerator indexes; in fact, if compression is run frequently enough, it may prompt the need to rebuild the BI Accelerator index. This is because the index is not updated when the InfoCube is compressed. As a result, it is possible to reach a state where there are more entries in the BI Accelerator index than in the InfoCube fact table. To keep the index optimized, at some point it makes sense to rebuild the BI Accelerator index to synchronize it with the compressed data.
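A small sketch (with illustrative rows and counts, not real BW tables) of how the index entry count can exceed the compressed fact table’s row count:

# Uncompressed fact rows (request_id, customer, revenue); assume the BIA
# index was built from this state, one index entry per fact row.
fact_rows = [(1, "C1", 100.0), (2, "C1", 50.0), (3, "C2", 80.0)]
bia_entries = len(fact_rows)  # 3 index entries

# InfoCube compression collapses requests per remaining key combination,
# but the BIA index is not updated when this happens.
compressed = {}
for _request_id, customer, revenue in fact_rows:
    compressed[customer] = compressed.get(customer, 0.0) + revenue

print(bia_entries, len(compressed))  # 3 index entries vs. 2 fact rows

# Query totals still agree, but the index carries redundant entries;
# rebuilding it resynchronizes the counts with the compressed fact table.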
