One of the more subtle architectural differences between tabular and multidimensional instances is caching.
Multidimensional is a disk-based technology, and performance for query workloads (in many cases) depends heavily on cache utilization. Tabular, on the other hand, is a memory-based technology with (nearly) zero dependence on disk I/O, and is therefore much less dependent on cache for query workload performance.
Note: at a high level, the terms “cache,” “caching,” and “cache layer” are synonymous with storing data (usually temporarily) in memory – but that’s a bit of an oversimplification. Yes, a cache is a memory store, but from an Analysis Services perspective it can (and usually does) involve storing data at an aggregate level, or even the result of a calculation (in the case of the formula engine cache).
…which is perhaps (just a guess) the main reason for the significant differences in cache design between multidimensional and tabular instances. Tabular instances do not use a formula engine cache, and the tabular storage engine cache is smaller and less complex.
For comparison, here’s how a Multidimensional instance resolves an MDX query…
After the MDX is parsed, it goes to the formula engine (FE), at which point the FE cache stores (there are two types of FE cache stores: flat and calculation) are searched. If the full result cannot be found in the FE cache, the FE breaks the query down into a set of jobs and sends them to the storage engine (SE), where the data is obtained either from the SE cache store or from one of the disk stores (aggregation data or raw partition data).
Here’s how a Tabular instance resolves a DAX query…
No formula engine cache store, and no aggregation-store (or any disk-store for that matter)…there are only 2 places where data can be found to resolve a query request: storage engine cache and raw data – both of which reside in memory.
Note: the tabular storage engine cache is limited to 512 entries (which can quickly be saturated by poorly designed measures/queries) and can only be reused for identical VertiPaq queries (whereas the storage engine cache in multidimensional can summarize or subset existing cache entries to answer similar but not identical queries).
So what’s the big deal?
If tabular data is already in memory, what’s the point of having a cache at all? Memory is memory, right? Both are in main memory and access speed is the same, right?
Good question! Yes, access speed is the same. However, there are other benefits to a cache store.
For example, even though the data is already in memory, queries against a tabular model can still be slow… very slow, even… usually in cases where execution of the query is bound to the single-threaded formula engine. To be sure, this is not a tabular-specific problem… formula-engine-bound queries can be (and commonly are) found in both technologies, and the issue usually speaks more to the design of the model and/or the way the query is written (be that DAX for tabular or MDX for multidimensional). That said, performance problems related to FE-bound queries can be terribly difficult to resolve, as they usually require a redesign of the data model and a rewrite of the query or measure(s) involved.
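To make “FE-bound” concrete, here is a hypothetical example (the table and column names are illustrative, not from any model in this post): patterns like a conditional inside an iterator over a large table tend to push row-by-row work onto the single-threaded formula engine.

```dax
-- Hypothetical FE-heavy pattern: the IF inside the iterator forces per-row
-- callbacks from the storage engine into the single-threaded formula engine
-- (and, as a side effect, typically disqualifies the SE cache entry).
EVALUATE
ROW (
    "Worst Margin",
    MINX (
        Sales,
        IF ( Sales[Quantity] > 0, Sales[Amount] / Sales[Quantity] )
    )
)
```

In this particular sketch, replacing the conditional with `DIVIDE ( Sales[Amount], Sales[Quantity] )` removes the per-row callback and lets the storage engine handle the scan – an example of the kind of rewrite FE-bound queries usually demand.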
Note: while a deeper discussion of FE-bound query issues (and potential resolutions) is beyond the scope of this post, I urge you to explore the Optimizing DAX chapter in The Definitive Guide to DAX, where Marco Russo and Alberto Ferrari go into more detail and walk through several potential solutions.
As a short-term band-aid solution/workaround in multidimensional environments, developers can sometimes implement a cache-warming strategy to prime the SE/FE (storage engine/formula engine) caches, so that when users come in and start loading dashboards and running reports against the cube(s), all data needed to resolve the queries is already in memory. This is a very effective solution for FE-bound queries, as long as no other aspects of the solution/environment prevent FE-cache use and sharing between users (e.g. dynamic security, query-scoped calculated members, etc.).
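A minimal sketch of what cache warming can look like in a multidimensional environment (the cube, measure, and hierarchy names below are illustrative): after processing, a scheduled job issues either the same MDX the dashboards will generate, or an explicit CREATE CACHE statement.

```mdx
-- Option 1: replay a representative dashboard query right after processing
SELECT
    { [Measures].[Sales Amount] } ON COLUMNS,
    [Date].[Calendar].[Month].MEMBERS ON ROWS
FROM [Adventure Works]

-- Option 2: explicitly load a subcube into the storage engine cache
CREATE CACHE FOR [Adventure Works] AS
    '( [Measures].[Sales Amount], [Date].[Calendar].[Month].MEMBERS )'
```

Either way, the idea is the same: pay the cold-cache cost once, on a schedule, instead of making the first user of the day pay it.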
With tabular, cache warming is a much less used technique… mainly because there are no disk I/O bottlenecks to overcome and the storage engine cache is smaller… but also because the tabular engine does NOT use the formula engine cache stores when resolving DAX queries.
Here’s a (version of a) query published by Marco Russo (over at SQLBI.com) used to stress-test the formula engine and measure CPU performance:
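The exact published query isn’t reproduced here (see SQLBI.com for the original); the sketch below is only representative of the same idea, under the assumption that the goal is to generate a large intermediate table and force the single-threaded formula engine to iterate it (all table and column names are illustrative):

```dax
-- Representative FE stress pattern (NOT the exact SQLBI query): the FILTER
-- condition over a generated crossjoin cannot be pushed to the storage
-- engine, so (nearly) all of the execution time is spent in the FE.
EVALUATE
ROW (
    "x",
    COUNTROWS (
        FILTER (
            CROSSJOIN ( VALUES ( 'Date'[Date] ), VALUES ( Product[ProductKey] ) ),
            MOD ( Product[ProductKey] + DAY ( 'Date'[Date] ), 7 ) = 0
        )
    )
)
```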
Using a personal VM with SSAS 2014, this query takes ~43.6 seconds to complete on a cold cache (with over 99% of that time spent in the FE)…
…and ~42.8 seconds to run on a warm cache…
So even though the query spent zero time in the storage engine (results for all 3 storage engine queries were found in the storage engine cache), the query still required an unacceptable amount of time to run.
What if we move the calculation into the tabular model? In multidimensional solutions, moving calculations into the cube (MDX script) can result in better cache utilization.
Now our query becomes…
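As a hedged sketch (the measure name is hypothetical and stands in for whatever expensive expression was moved into the model): once the calculation is defined as a model measure, the query itself collapses to a single measure reference.

```dax
-- Hypothetical: [Stress Test] is now defined in the tabular model, so the
-- query just references it. The FE work simply moves inside the measure;
-- with no FE cache in the tabular engine, a warm run repeats all of it.
EVALUATE
ROW ( "x", [Stress Test] )
```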
…which, unfortunately, does nothing for performance of our warm cache run.
So how do you fix this?
One option is to try rewriting the query, which may not be possible, or may also require a redesign of the data model. Another option is to use MDX instead of DAX.
That’s right, MDX queries, even against a tabular model, can leverage the formula engine cache!
Here’s the MDX equivalent query…
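The original screenshot isn’t reproduced here; a sketch of the shape of such a query (model and measure names are illustrative, matching the hypothetical [Stress Test] measure above) would be:

```mdx
-- Querying the same model measure through MDX: the result can be stored in
-- (and, on a warm run, served entirely from) the formula engine cache.
SELECT
    { [Measures].[Stress Test] } ON COLUMNS
FROM [Model]
```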
On a cold cache, this query runs in 43.4 seconds…
However, our warm cache run takes 4 ms and has zero SE queries…
That’s right, zero storage engine queries… which tells us all of the information needed to resolve the query is coming from the formula engine cache. We can also confirm this via a Profiler/xEvents trace…
Keep in mind this only works with MDX queries that don’t have any query-scoped calculated members… a WITH-clause in the MDX query disables FE-caching!
…which means this doesn’t work for Excel workbooks making use of slicers… slicers introduce query-scoped calculated members in the generated MDX 🙁
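To illustrate (measure names are hypothetical): the first query below is eligible for the FE cache; the second, functionally identical one is not, because the WITH clause makes it query-scoped.

```mdx
-- FE cache eligible: no query-scoped calculated members
SELECT { [Measures].[Stress Test] } ON COLUMNS
FROM [Model]

-- FE caching disabled: the WITH clause introduces a query-scoped member,
-- even though it is just an alias for the same model measure
WITH MEMBER [Measures].[Stress Test Alias] AS [Measures].[Stress Test]
SELECT { [Measures].[Stress Test Alias] } ON COLUMNS
FROM [Model]
```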
To be clear, this is not always going to be a viable solution. For example, if you are using a reporting tool such as Power BI, which generates DAX queries against the tabular model, you’re out of luck because you can’t tell it to use MDX (though that would be a nice option). However, if your primary reporting tool is Excel (which generates MDX queries) or SSRS (where you can use DAX or MDX), then this is certainly an option. In both of these scenarios, assuming you have an FE-bound query workload, you may want to consider implementing an FE cache-warming strategy (with MDX) against the tabular model(s) in order to take advantage of the formula engine cache stores as an interim solution.