
How Much RAM Do I Need For My SSAS Tabular Server?

Much of my client work these days is focused on performance… making slow Analysis Services servers run faster. The good part about these types of engagements is that there’s usually (not always) a good amount of low-hanging fruit, and I get to play the hero who swoops in to save the day. The bad part is that many of the performance issues could have been avoided (or at least mitigated) had the server been given enough memory from the start.

When you’re paying $100k+ in licensing costs for Enterprise Edition, the last thing you want to do is skimp on memory… a classic case of penny wise, pound foolish.

So, just how much memory is needed for your SSAS tabular server?

Short Answer: as much as you can afford. Relative to the cost of licensing, memory is cheap. Relative to the cost of having an SSAS pro tune the system/workload, memory is cheap. When in doubt, buy more than you think you’ll need and move on to more important problems to solve.

Long Answer: it depends! Not everyone has the luxury of buying new hardware each time they need to stand up a new SSAS solution… and not every SSAS solution demands shiny, new, top-of-the-line hardware. Oftentimes a company will have perfectly suitable hardware available for reuse – in which case the question changes ever so slightly to: “do I have enough memory? If not, how much more should I buy for this server?”

With an “in-memory” technology like Analysis Services Tabular, the answer may seem obvious: determine the size of your model and buy that much memory. Unfortunately, it’s not quite that simple.

Raw Data Size

If you’re building the Tabular model on top of an existing source database (ideally a dimensional model), then this is the size of that relational database…or at least the tables that you plan to bring into the model. If you’re unsure how to determine the size of your SQL Server database, start here: StackOverflow – Get Size of All Tables In Database
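If you’d rather script this than eyeball it, here’s a minimal sketch (Python + pyodbc – the driver, server, and database names are placeholders) that sums reserved pages per table, same idea as the query in the link above:

```python
# Rough sketch: pull per-table sizes from the SQL Server source database.
# The connection string below is a placeholder -- adjust driver/server/database/auth.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=MySqlServer;DATABASE=MyDataMart;Trusted_Connection=yes;"
)

# Reserved pages * 8 KB per table, straight from the catalog views.
size_query = """
SELECT t.name AS table_name,
       SUM(a.total_pages) * 8 / 1024.0 AS size_mb
FROM sys.tables t
JOIN sys.partitions p       ON p.object_id = t.object_id
JOIN sys.allocation_units a ON a.container_id = p.partition_id
GROUP BY t.name
ORDER BY size_mb DESC;
"""

with pyodbc.connect(conn_str) as conn:
    rows = conn.cursor().execute(size_query).fetchall()

total_mb = sum(r.size_mb for r in rows)
for r in rows:
    print(f"{r.table_name:<40}{r.size_mb:>12,.1f} MB")
print(f"{'TOTAL':<40}{total_mb:>12,.1f} MB")
```

Only count the tables you actually plan to bring into the model – staging tables and audit logs just inflate the estimate.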

Compression Ratio

In addition to being an “in-memory” technology, Analysis Services Tabular is also a “column-store” technology which means all the values in a table for a single column are stored together. As a result – and this is especially true for dimensional models – we are able to achieve very high compression ratios. On average, you can expect to see compression ratios anywhere from 10x-100x depending on the model, data types, and value distribution.

What this ultimately means is that your 2 TB data mart will likely only require somewhere between 20 GB (low end) and 200 GB (high end) of memory. That’s pretty amazing – but it still leaves us with a fairly wide margin of uncertainty. To narrow that range, you’ll want to take a representative sample from your source database, load it into a model on a server (in your DEV environment), and calculate the compression factor.

Note: make sure the sample is representative – don’t just test the fact tables or the tables with the most rows. In many cases, dimension tables (with lots of string-based columns) can have a significant impact on model size.
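The extrapolation itself is simple arithmetic. Here’s a quick sketch in Python – all of the numbers are made up for illustration, so swap in your own measurements:

```python
# Hypothetical numbers -- replace with your own measurements.
raw_sample_gb   = 50.0    # size of the representative sample in the source database
model_sample_gb = 1.6     # size of the Tabular model built from that sample
raw_full_gb     = 2048.0  # size of the full data mart (2 TB)

compression_factor = raw_sample_gb / model_sample_gb   # ~31x in this example
estimated_model_gb = raw_full_gb / compression_factor  # ~66 GB at rest

print(f"Measured compression factor: {compression_factor:.0f}x")
print(f"Estimated full model size at rest: {estimated_model_gb:.0f} GB")
```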

Growth Factor

This one is pretty straightforward – but often overlooked. How much is your data set going to grow over time? Is there a retention period, or are you keeping all of history?

If the retention policy is fixed (e.g., 5 years) and growth is flat, the overall size remains constant over time. However, if data is growing at a rate of 15% year over year, then calculating the growth factor depends on the life-span of the solution.
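As a quick illustration (Python again, with assumed numbers), 15% year-over-year growth compounds to roughly 2x over a 5-year life-span:

```python
# Assumed inputs -- adjust for your own growth rate and planning horizon.
current_model_gb = 66.0   # estimated model size today (from the compression step)
annual_growth    = 0.15   # 15% year-over-year data growth
lifespan_years   = 5      # how long this server is expected to live

growth_factor = (1 + annual_growth) ** lifespan_years  # ~2.0x over 5 years
projected_gb  = current_model_gb * growth_factor

print(f"Growth factor over {lifespan_years} years: {growth_factor:.1f}x")
print(f"Projected model size at end of life: {projected_gb:.0f} GB")
```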

Note: Things can get a bit more complex if you have additional scaling factors such as those found in multi-tenant solutions. 

Workload Factor

This is the amount of additional memory (above and beyond that used to store the fully-loaded data model at rest) needed to accommodate your processing strategy and/or peak query workload.

Let’s start with processing strategy:

ProcessFull: When using the ProcessFull command to process a tabular model, a shadow copy of the model is built in memory, right next to the existing model, on the same server. It isn’t until one of the very last steps of the process that the old model is dropped from memory and the new model becomes available for queries. This means we need enough memory to hold two copies of the same model at the very peak of processing.

ProcessClear + ProcessFull: This is the same as the previous option, only we clear the existing model out of memory before processing. Our memory requirement is reduced to 1x because, even at the peak, we only ever have a single copy of the model in memory.

The benefit of ProcessFull is that the existing model remains available for queries while the shadow copy is being processed. However, if you’re processing after business hours (common in the majority of cases) and don’t need the model available for queries during that window, then using ProcessClear + ProcessFull can greatly reduce your memory requirements.
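In sizing terms, the difference between the two strategies boils down to a ~2x vs. ~1x multiplier on the at-rest model size. A quick sketch (Python, illustrative numbers only):

```python
# Peak processing memory, very roughly, for the two strategies above.
model_gb = 133.0  # fully-loaded model at rest (illustrative number)

# ProcessFull: the old model stays resident while the shadow copy is built.
peak_processfull_gb = model_gb * 2.0

# ProcessClear + ProcessFull: the old model is dropped first, so only one copy
# exists at the peak -- but the model is offline for queries while processing.
peak_clear_then_full_gb = model_gb * 1.0

print(f"ProcessFull peak:                ~{peak_processfull_gb:.0f} GB")
print(f"ProcessClear + ProcessFull peak: ~{peak_clear_then_full_gb:.0f} GB")
```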

Note: There are several other flavors of processing strategies (e.g., incremental) requiring less memory than the 2x needed by ProcessFull… but they’re a bit more complicated to implement and usually require more work upfront in the data model and ETL. If your model is small and a ProcessFull completes well within your time window, it’s best not to bother. Incremental processing is really only needed for short processing windows and/or large models.

The requirement for peak query workload depends heavily on writing efficient DAX (which often depends on having the right data model and experienced DAX developers). Poorly written DAX can result in substantial “memory materialization” – which happens when the engine needs to build an intermediate result set before applying some final calculation logic to resolve a DAX query. This intermediate result set is NOT compressed… it lives outside of the storage engine… so if it contains a lot of rows/columns, it can consume a substantial amount of memory.

Note: For more information on memory materialization, see Chapter 13 of The Definitive Guide To DAX.

Here’s an example from earlier this year…

[Image: perfmon chart of SSAS memory counters captured during a single DAX query]

What you’re looking at above is a single DAX query that was run against an 8 GB tabular model (no other activity on the server at the time)… see that black line? That’s the query eating all the available memory on the server. Once this query consumes ~50 GB of memory and hits the hard memory limit (red line @ 90 GB), it fails with an out-of-memory exception.

This is an extreme example to be sure – and I’m not saying you need to prepare for that scenario when purchasing hardware… because that’s not a hardware issue (it’s a DAX tuning issue). What I am saying is that you need to be aware that “materialization” happens in many queries/calculations and most of the time it’s not a big deal. But from a memory requirement standpoint, even a small amount of materialization can become non-negligible across a large user base.
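If you want to pad your estimate for query workload, a very rough (and entirely assumption-driven) calculation looks something like this – the row counts, bytes per value, and concurrency below are placeholders, not rules:

```python
# Very rough headroom estimate for query-time materialization.
# Every number here is an assumption for illustration -- measure your own workload.
rows_materialized       = 20_000_000  # rows in a typical intermediate result set
columns_materialized    = 6           # columns carried in that intermediate result
bytes_per_value         = 8           # rough assumption for an uncompressed value
peak_concurrent_queries = 25          # queries of this shape running at the same time

per_query_gb = rows_materialized * columns_materialized * bytes_per_value / 1024**3
workload_headroom_gb = per_query_gb * peak_concurrent_queries

print(f"Materialization per query:  ~{per_query_gb:.1f} GB")
print(f"Headroom for peak workload: ~{workload_headroom_gb:.0f} GB")
```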

Final Thoughts

  • never buy a new server with less than 128 GB of memory. Period. Doing so would be the equivalent of purchasing a $1k+ suit and matching it with a pair of crocs.
  • when re-purposing an existing server – don’t forget you have the option to upgrade/add more memory
  • tabular compression is awesome but hard to estimate… best to take a measurement and extrapolate
  • processing (depending on the approach) can require a substantial amount of memory
  • query materialization can become a problem if you don’t have the right model and/or write inefficient DAX
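Pulling the factors above together, here’s a back-of-envelope sizing sketch in Python – every input is illustrative, so substitute your own measurements:

```python
# Back-of-envelope sizing that combines the factors discussed in this post.
# All inputs are assumptions for illustration -- replace them with real measurements.
raw_source_gb      = 2048.0  # raw data size in the source database
compression_factor = 31.0    # measured from a representative sample
growth_factor      = 2.0     # e.g. 15% YoY growth over a 5-year life-span
processing_factor  = 2.0     # 2.0 for ProcessFull, ~1.0 for ProcessClear + ProcessFull
query_headroom_gb  = 25.0    # materialization headroom for peak query workload
os_and_other_gb    = 16.0    # OS, other services, general breathing room

model_at_rest_gb = raw_source_gb / compression_factor * growth_factor
recommended_gb   = model_at_rest_gb * processing_factor + query_headroom_gb + os_and_other_gb

print(f"Model at rest (end of life): ~{model_at_rest_gb:.0f} GB")
print(f"Recommended server memory:   ~{recommended_gb:.0f} GB")
```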

 

I help make slow SSAS servers run faster. Need immediate pain relief? Let’s have a conversation.

8 thoughts on “How Much RAM Do I Need For My SSAS Tabular Server?”

  1. Colin says:

    In the perfmon chart above, which counters are displayed?

    1. Bill says:

      Memory Usage KB (black line)
      Memory Limit Low KB (yellow line)
      Memory Limit High KB (gold line)
      Memory Limit Hard KB (red line)

      … if you’re interested in more counters for SSAS you might find this post helpful: http://byobi.com/blog/2016/03/performance-monitoring-for-ssas-perfmon-counter-cheat-sheet/

  2. A. Hitz says:

    If the raw data is already in columnstore tables, should we assume we won’t get much additional compression from the SSAS model?

    1. Bill says:

      i think that’s a very fair assumption

  3. davegugg says:

    Great post, thank you for the information.

    1. Bill says:

      thanks – glad you liked it
