…brew your own Business Intelligence

Interrogating the Azure AS Configuration Files via PowerShell

When brought in to perform a health check or performance optimization, one of the first steps in the process (after getting the perfmon and xEvent traces running) is to document and review the environment. In addition to the basics (i.e. CPU cores, GB of memory, clock speeds, etc.), one item I’m really interested in is the instance configuration, as it’s not all that uncommon to find the source of issues there.

Seeing as this type of engagement has become quite routine, it should be no surprise that I’ve invested a good chunk of time creating scripts to automate much of the work. My favorite method is to use code – similar to what I’ve shared in this blog post – to shred the msmdsrv.ini file and compare it with a version from a default installation. This is a very quick and easy way to detect any differences between the current configuration settings and the default… and there should be differences!
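For the on-premises case, that shredding-and-comparing step can be sketched roughly like so (a Python illustration, not the actual script from the earlier post; the XML fragments below are made-up stand-ins for real msmdsrv.ini content):

```python
import xml.etree.ElementTree as ET

def flatten(elem, path=""):
    """Recursively flatten an XML tree into {'/Path/To/Leaf': text} pairs."""
    out = {}
    for child in elem:
        p = f"{path}/{child.tag}"
        if len(child):                      # has sub-elements -> recurse
            out.update(flatten(child, p))
        else:
            out[p] = (child.text or "").strip()
    return out

def diff_settings(current_xml, default_xml):
    """Return {path: (default, current)} for every setting that differs."""
    cur = flatten(ET.fromstring(current_xml))
    dft = flatten(ET.fromstring(default_xml))
    return {k: (dft.get(k), v) for k, v in cur.items() if dft.get(k) != v}

# Tiny illustrative fragments standing in for real msmdsrv.ini files
default_ini = "<ConfigurationSettings><Port>0</Port><Memory><TotalMemoryLimit>80</TotalMemoryLimit></Memory></ConfigurationSettings>"
current_ini = "<ConfigurationSettings><Port>2383</Port><Memory><TotalMemoryLimit>80</TotalMemoryLimit></Memory></ConfigurationSettings>"

print(diff_settings(current_ini, default_ini))   # only Port differs
```

Since msmdsrv.ini is just XML, flattening both files into path/value dictionaries and diffing them is all the “shredding” really amounts to.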

Unfortunately, this method doesn’t work with the new Azure AS (preview), so we must take an alternative approach – and no, I’m certainly not suggesting a manual one!

PowerShell to the Rescue

What you see below is a PowerShell script (very similar to the ones I included – but did not have time to demo – at PASS Summit this year) that will connect to an Analysis Services instance and loop through the “ServerProperties” collection – displaying the current value alongside the default value. The best part is that the only change needed to get this working with an Azure AS (preview) instance is the server name in the connection:

## Load the AMO assembly (note: LoadWithPartialName is deprecated, but still works)
$loadInfo = [Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")
$server = New-Object Microsoft.AnalysisServices.Server

## Connect to either a local instance or an Azure AS (preview) instance
#$server.Connect("localhost\SSAS_TAB")
$server.Connect("asazure://southcentralus.asazure.windows.net/ssas01")

cls
Write-Output("================================================================================") 
Write-Output(" Analysis Services Info")
Write-Output("================================================================================")
Write-Output("    OVERVIEW")
Write-Output("      {0,-80} {1}" -f "Instance:", $server.Name)
Write-Output("      {0,-80} {1}" -f "ProductName:", $server.ProductName)
Write-Output("      {0,-80} {1}" -f "ProductLevel:", $server.ProductLevel)
Write-Output("      {0,-80} {1}" -f "Edition:", $server.Edition)
Write-Output("      {0,-80} {1}" -f "Version:", $server.Version)
Write-Output("      {0,-80} {1}" -f "ServerMode:", $server.ServerMode)
Write-Output("      {0,-80} {1}" -f "ServerLocation:", $server.ServerLocation)
Write-Output("      {0,-80} {1}" -f "DefaultCompatibilityLevel:", $server.DefaultCompatibilityLevel)
Write-Output("")
Write-Output("    PROPERTIES")

foreach($svr_prop in $server.ServerProperties | Sort-Object -Property Name) { 
    Write-Output("      {0,-70} {1,30} {2,30} " -f $svr_prop.Name, $svr_prop.DefaultValue, $svr_prop.Value)
}

Running that script produces output similar to what you see below:

[image: script output listing the instance overview and server properties]

Note: this is only a subset of the instance configuration properties – the msmdsrv.ini file includes quite a few more. However, this does include most of the key properties – basically everything you see when you access the instance-level properties via the object explorer window in SSMS.
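If you capture those three columns, flagging only the non-default properties takes one more pass; a sketch assuming the output has been parsed into (name, default, current) tuples (the sample rows are hypothetical):

```python
# Hypothetical sample rows in the same shape as the script's output:
# (Name, DefaultValue, Value)
props = [
    ("Port", "0", "49584"),
    ("ThreadPool\\Process\\MaxThreads", "0", "256"),
    ("Memory\\TotalMemoryLimit", "80", "80"),
]

# Keep only the properties whose current value differs from the default
changed = [(n, d, v) for n, d, v in props if d != v]
for name, default, current in changed:
    print(f"{name:<40} default={default:<6} current={current}")
```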


Interesting Observations?

Upon hearing from the Azure AS program managers (Bret Grinslade & Christian Wade – who’ve done a tremendous job!) that the GA release is targeting feature-parity with SSAS 2016 Tabular Enterprise Edition (wow!) I was immediately interested in testing the arsenal of scripts I’ve accumulated over the years.

Here are a few differences (from defaults) found using the script above to probe for instance configuration details. I’ve highlighted the ones I find most interesting:

Standard (S1) instance (100 QPUs, 25GB memory)

    • Port
      • default value: 0
      • current value: 49584
    • ThreadPool\Process\Max Threads
      • default value: 0 (absolute value of 64, or the number of logical processors, whichever is higher)
      • current value: 256
    • ThreadPool Affinity Masking (Command, IO Process, Parsing Short/Long, Query, Process)
      • default value: <blank>
      • current value: 0x1F
    • Memory\Total Memory Limit
      • default value: 80 (or 80% of physical RAM)
      • current value: 15 (or 15% of physical RAM)

Standard (S2) instance (200 QPUs, 50GB memory)

    • Port
      • default value: 0
      • current value: 49581
    • ThreadPool\Process\Max Threads
      • default value: 0 (absolute value of 64, or the number of logical processors, whichever is higher)
      • current value: 256
    • ThreadPool Affinity Masking (Command, IO Process, Parsing Short/Long, Query, Process)
      • default value: <blank>
      • current value: 0xFFC00
    • Memory\Total Memory Limit
      • default value: 80 (or 80% of physical RAM)
      • current value: 30 (or 30% of physical RAM)
    • Network\Requests\EnableBinaryXML & Enable Compression
      • default value: false
      • current value: true


Standard (S4) instance (400 QPUs, 100GB memory)

    • Port
      • default value: 0
      • current value: 49578
    • ThreadPool\Process\Max Threads
      • default value: 0 (absolute value of 64, or the number of logical processors, whichever is higher)
      • current value: 256
    • ThreadPool Affinity Masking (Command, IO Process, Parsing Short/Long, Query, Process)
      • default value: <blank>
      • current value: 0xFFFFF
    • Memory\Total Memory Limit
      • default value: 80 (or 80% of physical RAM)
      • current value: 61 (or 61% of physical RAM)
    • Network\Requests\EnableBinaryXML & Enable Compression
      • default value: false
      • current value: true

As you can see, in all 3 cases (S1, S2, S4), the [Total Memory Limit] is set below the [Low Memory Limit]. This seems a bit odd, as the effect of such a configuration (at least in the on-premises scenario) is to bring the [Low Memory Limit] & [VertiPaq Memory Limit] down to match the [Total Memory Limit].

For example, on my lab VM shown below, you can see where the [Total Memory Limit] (which, for whatever reason, is labeled as “High” instead of “Total” in perfmon) was changed from the default of 80% to 15%… and immediately, the [Low Memory Limit] (default = 65%) and [VertiPaq Memory Limit] (default = 60%) follow suit.
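That cascade is easy to model; a minimal sketch of my reading of the observed behavior (not official documentation), where the effective Low and VertiPaq limits are capped at the Total Memory Limit:

```python
def effective_limits(total_pct, low_pct=65, vertipaq_pct=60):
    """Cap the Low and VertiPaq limits at the Total Memory Limit,
    mirroring the behavior observed in perfmon."""
    low = min(low_pct, total_pct)
    vertipaq = min(vertipaq_pct, low)
    return total_pct, low, vertipaq

print(effective_limits(80))   # defaults: (80, 65, 60)
print(effective_limits(15))   # lab VM:   (15, 15, 15)
```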

[image: perfmon counters showing the Low and VertiPaq memory limits dropping to match the lowered Total Memory Limit]

(warning: lots of personal assumptions, guessing, etc being made in the rest of this post)

Now if we assume the same behavior holds for the Azure AS instances, then I’m really curious how the memory limits (i.e. 25GB for S1, 50GB for S2, 100GB for S4) are measured… because it could have a negative performance impact if part of the model gets paged to disk (b/c the VertiPaq Paging Policy is set to “1”).

Using some back-of-the-napkin-math for a 10GB model deployed to an S1 instance…

  • Total Memory: 25GB
  • Total Memory Limit: 3.75 GB (15% of 25GB)
  • Low Memory Limit: 3.75 GB (configured for 65% but limited to 15% due to Total Memory Limit)
  • VertiPaq Memory Limit = 3.75 GB (configured for 60% but limited to 15% due to Total Memory Limit)
  • VertiPaq Paging Policy = 1

… most of that model will be paged to disk – which will have some impact on performance.
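Putting numbers on that napkin math (assumptions: the 25GB is the physical pool the percentages apply to, and the hypothetical 10GB model competes for the whole VertiPaq limit):

```python
total_memory_gb = 25          # S1 instance
total_limit_pct = 15          # observed Total Memory Limit
model_size_gb = 10            # hypothetical deployed model

vertipaq_limit_gb = total_memory_gb * total_limit_pct / 100   # 3.75 GB
paged_gb = max(0, model_size_gb - vertipaq_limit_gb)          # 6.25 GB

print(f"VertiPaq limit: {vertipaq_limit_gb} GB; paged to disk: ~{paged_gb} GB")
```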

Ok – next interesting observation… affinity masking. For those who are unfamiliar, affinity masking is a technique used to turn on/off specific CPU cores for various processes (see additional references section at bottom for more detail). By default, these are left blank which simply means – “use all CPU cores for all subsystems”.

However, in the case of all 3 standard offerings, we see the following settings (for all thread pools)…

  • S1: 0x0001F
  • S2: 0xFFC00
  • S4: 0xFFFFF

Which, when converted from hex to binary, starts to reveal a bit more info…

  • S1: 0000 0000 0000 0001 1111
  • S2: 1111 1111 1100 0000 0000
  • S4: 1111 1111 1111 1111 1111
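The hex-to-binary conversion (and the core count behind each mask) is easy to verify with a quick sketch:

```python
masks = {"S1": 0x1F, "S2": 0xFFC00, "S4": 0xFFFFF}

for tier, mask in masks.items():
    bits = format(mask, "020b")        # 20-bit string, one bit per logical processor
    cores = bin(mask).count("1")       # number of affinitized cores
    print(f"{tier}: {bits} ({cores} cores)")
```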

Based on the above, my guess is that the Azure AS instances are running on 2-socket (10 cores/CPU) commodity hardware… S1 & S2 share a server while S4 is on its own dedicated server. This also aligns with a comment I overheard recently regarding the rough approximation that 100 QPUs is about the equivalent of 5 pretty fast CPU cores. Again, that’s an approximation, but it does align with the affinity masking you see above, where the S1 @ 100 QPU has been affinitized with the first 5 CPU cores of the server, the S2 @ 200 QPU has been affinitized with the last 10 CPU cores of the server, and the S4 @ 400 QPU has been affinitized with all 20 CPU cores. And I wouldn’t be surprised if the D1 instances are spread across the remaining 5 CPU cores of the same server shared by the S1/S2 instances.

Now, assuming I’m right and S1/S2 are sharing the same hardware, then the memory configurations start to make a bit more sense…

Working backwards for an S1 instance, where the [Total Memory Limit] (set to 15%) comes out to ~25GB of memory, and an S2 instance, where the [Total Memory Limit] (set to 30%) comes out to ~50GB… and assuming they are sharing the same hardware along with 1 or 2 D1 instances and some room for the OS, we can estimate the hardware to be configured with somewhere around 192 GB of memory…
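Working that backwards in code (loose math: the raw division lands nearer 167GB for both tiers, which agrees across S1 and S2; rounding up to a common server memory configuration, with headroom for the OS and D1 instances, is how I read the ~192GB figure):

```python
# Back out physical memory from each tier's observed Total Memory Limit
s1_physical = 25 / 0.15    # limit = 15% of physical -> ~25GB cap
s2_physical = 50 / 0.30    # limit = 30% of physical -> ~50GB cap

print(round(s1_physical, 1), round(s2_physical, 1))   # both ~166.7 GB
```

The fact that both divisions land on the same number is what makes the shared-hardware theory hang together.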

Final Thoughts

There’s certainly a lot more digging, exploring, and documentation to be done with this exciting new offering – which is still in preview mode so lots could change.


Additional References:
