ADLC Step 2: Remove Ineffective Aggregations

This is the third post in a series covering the Aggregation Design Life-Cycle (ADLC).

  1. Introduction
  2. Create Initial Aggregation Designs
  3. Remove Ineffective Aggregations
  4. Capture Query Workload
  5. Run Usage-Based Optimization (UBO) Wizard
  6. Create Custom Aggregations
  7. Final Thoughts

In the previous post, we discussed the process for creating the initial set of aggregation designs which, from an aggregation perspective, serves as the jumping-off point for a new cube prior to deployment into production.  In that post we focused primarily on the Aggregation Design Wizard and the ways we can influence the resulting aggregation designs to better match our expected query load.  In this post we’ll focus on the second step in the ADLC, which involves a bit of cleanup that needs to take place before and after the cube is deployed to production.

[Diagram: ADLC - Step 2]

Important Note: in the diagram above, this step “Remove Ineffective Aggregations” is included in the Production section (bottom), indicating that this step should occur once the cube is in production.  However, for the very first iteration of the production-side loop (after the “Create Initial Aggregation Designs” step), this step can be performed prior to deploying into production…with the help of BIDS Helper, which we will discuss in more detail throughout this post.

Aggregation Effectiveness

An effective aggregation is one that is used (regularly) by the SSAS engine to satisfy incoming queries, providing a non-negligible performance increase over the alternative of not having the aggregation. Put another way, an ineffective aggregation is one that meets any of the following criteria (sketched in code just after the list):

  1. it is not being used (no queries are coming in that can be satisfied by that aggregation)
  2. it is too similar to another aggregation (queries that line up with this aggregation also line up with another aggregation)
  3. it offers a negligible performance increase over simply reading the entire measure group partition
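To make those criteria concrete, here is a minimal sketch (not BIDS Helper’s actual logic) that encodes them as a simple filter over hypothetical per-aggregation stats…the hit counts, overlap flags, and timings would come from the techniques covered later in this post:

```python
# A minimal sketch encoding the three "ineffective" criteria as a filter
# over hypothetical per-aggregation stats gathered elsewhere.
from dataclasses import dataclass

@dataclass
class AggStats:
    name: str
    hits: int             # times the engine answered a query from this aggregation
    overlaps_other: bool  # another aggregation satisfies the same queries
    speedup: float        # (query time without the aggregation) / (time with it)

def is_ineffective(a: AggStats, min_speedup: float = 1.1) -> bool:
    # Criteria 1, 2, and 3 from the list above, in order.
    return a.hits == 0 or a.overlaps_other or a.speedup < min_speedup

stats = [
    AggStats("Aggregation 0", hits=120, overlaps_other=False, speedup=4.0),
    AggStats("Aggregation 1", hits=0,   overlaps_other=False, speedup=1.0),
    AggStats("Aggregation 2", hits=15,  overlaps_other=True,  speedup=1.05),
]
print([a.name for a in stats if is_ineffective(a)])
# ['Aggregation 1', 'Aggregation 2']
```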

The goal of this step in the ADLC is to remove all ineffective aggregations…they provide little benefit (over the alternative of not having them) while incurring a real cost in disk space and processing duration.  Bottom line: ineffective aggregations are dead weight.

Finding the Dead Weight

Before BIDS Helper, I’m not sure exactly what methods people used to find ineffective aggregations…other than SQL Profiler and tons and tons of strong coffee.  SQL Profiler detects aggregation hits via the Get Data From Aggregation event (shown below)…

[Screenshot: Get Data From Aggregation event in SQL Profiler]

…which means we can run a trace while the cube is being hammered by queries, then (meticulously) walk the trace to see which aggregations show up, compare that to the list of all aggregations, and delete the ones that were never hit.  The difficulty here is in ensuring that the profiler trace contains a representative sample of all the queries that are likely to be sent to the cube during production use…otherwise we could end up deleting aggregations that might prove helpful – but just failed to show up in the trace.
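If you save such a trace to a SQL Server table, the walk-the-trace comparison can be scripted rather than done by hand.  Below is a minimal Python sketch of the idea…the table name (dbo.SsasAggTrace), the EventClass value, the hypothetical aggregation list, and the assumption that the aggregation name leads the TextData column should all be verified against your own trace before trusting the output:

```python
# Sketch: tally aggregation hits from a Profiler trace saved to a SQL Server
# table, then diff against the full list of aggregations.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=TraceDb;"
    "Trusted_Connection=yes"
)

# EventClass 60 is what a default SSAS trace uses for Get Data From
# Aggregation -- verify against your own trace definition.
rows = conn.execute(
    "SELECT CAST(TextData AS nvarchar(max)) "
    "FROM dbo.SsasAggTrace WHERE EventClass = 60"
).fetchall()

hits: dict[str, int] = {}
for (text,) in rows:
    agg_name = text.split(",")[0].strip()  # assumed: name is the first token
    hits[agg_name] = hits.get(agg_name, 0) + 1

# Hypothetical full list -- in practice, script it from the cube definition
# or copy it out of BIDS Helper's Aggregation Manager.
all_aggs = {"Aggregation 0", "Aggregation 1", "Aggregation 2"}

print("Hit counts:", hits)
print("Never hit:", sorted(all_aggs - hits.keys()))
```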

Note: the concept of capturing a sample of queries that accurately represents a production workload will be a recurring theme in the ADLC (and the focus of the next post in this series).  So for now, don’t sweat the details…just realize the importance/purpose of having it.  In the next post, we’ll cover methods that can be used to help ensure a representative query load.

Luckily for me, by the time I got hooked on SSAS development, BIDS Helper was already out and being consistently developed (patched/enhanced) by two serious contributors to the Microsoft BI community: Greg Galloway and Darren Gosbell.

Below are a few of the aggregation-maintenance related features in BIDS Helper:

  1. Test Aggregation Performance
  2. Delete Unused Aggregations
  3. Search Similar Aggregations

The first item, Test Aggregation Performance, is an excellent choice when performing this ADLC step for the first time…just after running the Aggregation Design Wizard.  On subsequent iterations, we might consider using Delete Unused Aggregations first…and then following up with Search Similar Aggregations or another run of Test Aggregation Performance.

Test Aggregation Performance

This is a really powerful feature that can be run for the entire cube or for a single aggregation design:

[Screenshot: running Test Aggregation Performance for the entire cube or a single aggregation design]

At a high level, this feature generates MDX queries (based on the definition of the aggregations) and executes them in a variety of scenarios…depending on the options selected prior to kicking it off:

[Screenshot: Test Aggregation Performance options]

The end result is a report that provides you with the information you need to determine which aggregations are worth keeping around.  Under the covers it does a lot of work to test the aggregations, so be prepared to go grab a cup of coffee and work on something else while it runs…it can take a while for heavily aggregated cubes. You can read all the details in the BIDS Helper documentation.
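I don’t know the exact queries BIDS Helper generates, but the general shape of an MDX query that targets one specific aggregation looks something like the sketch below: ask for members at exactly the granularity the aggregation covers.  The attribute, measure, and cube names here are hypothetical (Adventure Works-style):

```python
# Build an MDX query whose granularity matches one aggregation exactly,
# so a hit (or miss) on that aggregation can be observed and timed.
attrs = [
    "[Date].[Calendar Year].[Calendar Year]",
    "[Product].[Category].[Category]",
]

cross_join = " * ".join(f"{a}.MEMBERS" for a in attrs)
mdx = (
    "SELECT [Measures].[Internet Sales Amount] ON 0, "
    f"NON EMPTY {cross_join} ON 1 "
    "FROM [Adventure Works]"
)
print(mdx)
```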

Delete Unused Aggregations

If the cube is already in production, then I much prefer the “Delete Unused Aggregations” feature over the previous one…mainly because it is data-driven.  This feature reads in the contents of a SQL Profiler trace that has been saved to a table…

[Screenshot: Delete Unused Aggregations trace source selection]

…and enumerates the aggregations and the number of times each is hit by an MDX query captured during the trace.  Below is a sample screenshot showing the output after capturing a few queries targeting Aggregation 0:

[Screenshot: Delete Unused Aggregations output showing hits on Aggregation 0]

Note: in the screenshot above, I used the “New In-Memory Live Trace”…in a real development scenario, it is unlikely you would use this option…in fact, the scenarios where this option makes sense are very limited.

The difficulty here, which I alluded to earlier, is in ensuring the source trace data contains a fully representative query workload.  If the trace only includes a small subset of the queries our cube will need to support, then it is very possible that we end up deleting aggregations that are helpful…just not for any of the queries in this particular trace.  To help mitigate this risk, I recommend taking either of the following approaches…maybe even a combination of both:

  1. Leave the trace running for a longggg time…as long as possible (up to a few weeks).  DBAs and IT-Ops are rolling their eyes at this recommendation, but in my defense, this particular trace is fairly lightweight, requiring only a few events to be captured.

    [Screenshot: events captured by the trace]
  2. Run a trace at various points in time (during business hours, weekends, period-end, etc.) over the span of a few weeks…a little bit here, a little bit there. Then run each trace through the “Delete Unused Aggregations” feature…but don’t actually delete the unused aggregations.  Instead, compile a list of unused aggregations and update it with the “Delete Unused Aggregations” results for each trace until you’re left with a final list of aggregations that went unused across all traces…then go in and delete only those remaining aggregations that didn’t show up in any of the individual traces (see the sketch just after this list).
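Here is a minimal sketch of that second approach…the per-trace lists are hypothetical stand-ins for the output of the “Delete Unused Aggregations” feature, one set per captured trace:

```python
# Only delete aggregations that went unused in EVERY trace.
unused_per_trace = [
    {"Aggregation 3", "Aggregation 7", "Aggregation 9"},  # business hours
    {"Aggregation 3", "Aggregation 9"},                   # weekend
    {"Aggregation 3", "Aggregation 5", "Aggregation 9"},  # period-end
]

# Intersect the per-trace "unused" sets to get the final deletion list.
truly_unused = set.intersection(*unused_per_trace)
print("Safe to delete:", sorted(truly_unused))
# Safe to delete: ['Aggregation 3', 'Aggregation 9']
```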

Either (or both) of those options should increase your chances of capturing a fully representative sample and mitigate the risk of accidentally deleting effective aggregations.  Regardless, be sure to read the details of this feature beforehand.

Search Similar Aggregations

The “Search Similar Aggregations” feature provides the ability to detect aggregations that are similar to each other. It can be launched from the Aggregation Manager screen, at the cube level or at the individual aggregation design level:

[Screenshot: launching Search Similar Aggregations]

…and produces an SSRS report listing each aggregation (parent) in the Aggregation Design along with an indented list of other aggregations that include the parent aggregation:

[Screenshot: Search Similar Aggregations report]
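To make the “includes” relationship concrete: aggregation A is included in aggregation B when A’s set of dimension attributes is a subset of B’s, so the engine could answer A’s subcubes from B (this glosses over details like attribute relationships, but it captures the gist).  A tiny sketch with hypothetical attribute sets:

```python
# Report which aggregations are included in (subsumed by) which others.
aggs = {
    "Aggregation 0": {"Date.Year", "Product.Category"},
    "Aggregation 1": {"Date.Year", "Product.Category", "Geography.Country"},
    "Aggregation 2": {"Date.Year"},
}

for name, attrs in aggs.items():
    for other, other_attrs in aggs.items():
        if name != other and attrs < other_attrs:  # strict subset
            print(f"{name} is included in {other}")
# Aggregation 0 is included in Aggregation 1
# Aggregation 2 is included in Aggregation 0
# Aggregation 2 is included in Aggregation 1
```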

What you do from here is somewhat of an art/personal preference.  In a heavy aggregation design, you might find that the aggregations resemble a plate of spaghetti. If you simply go in and remove *all* similar aggregations, there’s a chance you will hurt overall performance due to the inter-relatedness of aggregations. Another option is to go through and pick out only a few of the most similar aggregations and remove those…which reduces the chance of unintended consequences.

I don’t like either of those options…too manual, too risky.  So instead, I skip this step and just run the “Test Aggregation Performance” feature (even though it takes much longer), which contains similar logic but actually runs through the process of testing various situations where similar aggregations are removed and put back…and then lets you decide which similar aggregations to remove based on stats.

 

Considerations/Tips/Recommendations

  1. Remember, you can begin this step prior to deploying the cube to production…or you can wait until the cube is in production.  This decision comes down to timeline and performance requirements.
  2. The use of certain features, such as measure expressions and certain MDX Script functionality (ex. currency conversion), will prevent the use of aggregations.  This information is NOT taken into account by the Aggregation Design Wizard.  So if you don’t pay attention, when you get to this step and run “Test Aggregation Performance” for measure groups with measures using measure expressions…you might notice that NONE of the aggregations are being hit – despite the fact that the queries are generated to specifically target them.  Don’t panic, just confirm that it’s something like measure expressions and then delete the aggregations…why waste time processing them if they aren’t going to be used?
  3. The “Test Aggregation Performance” feature can take a longgggggggg time to complete.  If it’s a small cube without a ton of aggregations, I’ll kick it off for the entire cube…otherwise, I go measure group by measure group.  You might even consider kicking it off at the end of the day and letting it run after hours or over the weekend…checking in periodically to see if it’s finished.
  4. Consider backing up the original aggregation design prior to removing anything so that you can easily get back to where you started.

 
