You’re Deploying it Wrong! – AS Edition (Part 5)

This is part 5 of the Analysis Services DevOps blog series. Go to part 4

Release Pipelines

In the previous chapter, we saw how to set up a complete build pipeline that loads our Tabular model from source control, performs a schema check, runs the Best Practice Analyzer and even performs a validation deployment. At the end, the build pipeline saves and publishes a Model.bim file as an artifact. You can think of this artifact as the “compiled” version of our model, even though it is still just a .json file. The bottom line is that this one file contains everything needed to deploy the model to any instance of Analysis Services.

To actually perform the deployment to various environments (Dev, UAT, Prod, etc.), we are going to use the concept of Release Pipelines in Azure DevOps.

A release pipeline uses the artifact(s) produced by one or more build pipelines to perform the actual deployment to one or more so-called “Stages”. Each stage represents an environment such as Dev, UAT, Prod, etc. You can define as many stages as you need, and you can set up triggers between the stages. For example, you can enable automatic deployment to UAT once deployment to Dev succeeds. Or you can set up pre-deployment approval on a stage, so that someone must manually approve the deployment before it starts.

In the screenshot below, we have 3 environments. Deployment to Dev is triggered automatically whenever a new build artifact has been created. Deployment to Test is triggered automatically once deployment to Dev succeeds. Deployment to Prod is triggered once deployment to Test succeeds and someone approves the deployment. For this specific scenario, we set it up such that only the team lead could approve deployments to Prod:

[Screenshot: a release pipeline with automatically triggered Dev and Test stages, and a Prod stage requiring approval]

Release pipelines can be configured in many different ways, so I leave it up to you to decide how best to tailor them to the needs of your team. Ideally, you want to set up a single release pipeline that can deploy your entire BI solution (infrastructure, database project, ETL, tabular model, etc.) across all environments. For the purposes of this article, however, we will focus on how to deploy tabular models only.

Within each stage in your release pipeline, you can define a number of tasks similar to your build pipelines. You can define pipeline variables that have different values depending on which stage is being executed, making it easy to reuse scripts, command line tasks, etc. across all stages.

Creating your first Release Pipeline

To create a new release pipeline, go to Pipelines > Releases > Click “New” and choose “New Release Pipeline” > Start with an Empty Job. You should then be presented with a screen that looks like this:

[Screenshot: a new, empty release pipeline]

Click “Add an artifact” and choose the build pipeline you created in chapter 4 as the Source. Click “Add”. Then, it’s time to define your stages. Click on “Stage 1” and change its name to “Dev” or whatever else you would like to call the first environment in your release pipeline. Don’t add any additional stages just yet – it’s easier to clone the “Dev” stage once we have defined all the tasks it needs.

Next up, let’s define our pipeline variables. Click on the “Variables” tab and add the following variables. For now, only supply the values for the first stage (Dev):

  • ASConnectionString: Connection string of the Analysis Services instance you want to deploy to. For Azure Analysis Services, the recommendation is to use Service Principals for deployment. For Power BI Premium / A-SKUs, the connection string is a little different. See the (made-up) examples below.
  • ASDatabase: Name of the database you want to deploy on the Analysis Services instance.
  • SQLDWConnectionString: Connection string to be used for the Data Source object inside your Tabular model. That is, the SQL Server connection that your Tabular model will connect to when refreshing data.
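
For reference, here's roughly what the Dev values could look like. All server, database and credential names below are made up; for Azure Analysis Services with a Service Principal, the User ID takes the form app:<clientId>@<tenantId>:

ASConnectionString    = Provider=MSOLAP;Data Source=asazure://westeurope.asazure.windows.net/devserver;User ID=app:<clientId>@<tenantId>;Password=<clientSecret>
ASDatabase            = AdventureWorks
SQLDWConnectionString = Data Source=devsql.database.windows.net;Initial Catalog=AdventureWorksDW;User ID=<user>;Password=<password>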

Click on the “Tasks” tab and choose the “Dev” stage. You’ll be presented with an interface that is very similar to the one used for defining build pipelines in chapters 3 and 4.

[Screenshot: the Tasks tab of the Dev stage]

The tasks we add here will be executed whenever a release to the Dev stage is performed. First, let’s add a PowerShell task for downloading Tabular Editor. You can use an inline script or point to a script stored in your repository. For reference, here’s the code that will download a specific version of Tabular Editor from GitHub (we used the same code in chapter 3):

# Download URL for Tabular Editor portable:
$TabularEditorUrl = "https://github.com/otykier/TabularEditor/releases/download/2.12.4/TabularEditor.Portable.zip" 

# Download destination (root of PowerShell script execution path):
$DownloadDestination = join-path (get-location) "TabularEditor.zip"

# Download from GitHub:
Invoke-WebRequest -Uri $TabularEditorUrl -OutFile $DownloadDestination

# Unzip Tabular Editor portable, and then delete the zip file:
Expand-Archive -Path $DownloadDestination -DestinationPath (get-location).Path
Remove-Item $DownloadDestination

Next up, add a Command Line task, which will use Tabular Editor to update the connection string within the model and deploy it to the target instance of Analysis Services. There is no need to run schema checks, the Best Practice Analyzer, etc. here, since we already did that in our build pipeline and therefore know that the build artifact is good and ready for deployment. However, we do still need to execute a C# script on our .bim file, to ensure that the Data Source inside our model gets updated to point to the correct SQL database. Tabular Editor scripts cannot be defined inline when calling TabularEditor.exe – we have to supply a file containing the script. Thus, we have three options available:

  • We can generate the script file in a separate command line step.
  • We can add another artifact to our release pipeline, based on our Git repository.
  • We can ensure that the script is published as part of the artifact we created in the build pipeline.

The best-practice approach is to revisit our build pipeline to ensure that the script gets added to the same artifact as the Model.bim file. This way, we can treat an artifact as a complete “build package” containing everything we need to deploy our model. This is left as an exercise for the reader (hint: Use a “Copy Files” task to copy your script to $(Build.ArtifactStagingDirectory) before publishing the artifact).
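
For example, a Copy Files task in the build pipeline configured along these lines would do the trick (the Scripts folder is a made-up location; use the path where the script actually lives in your repository):

Source folder: $(Build.SourcesDirectory)\Scripts
Contents:      SetConnectionStringFromEnv.cs
Target folder: $(Build.ArtifactStagingDirectory)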

Assuming your model only contains a single data source, and the only property we want to change is the connection string, the C# script is a very simple one-liner. The script simply assigns the value of the environment variable SQLDWConnectionString to the ConnectionString property of the first (0-indexed) data source of our model, which is assumed to be a ProviderDataSource (legacy). If you’re using a StructuredDataSource (Power Query), remember to modify the script accordingly:

(Model.DataSources[0] as ProviderDataSource).ConnectionString = Environment.GetEnvironmentVariable("SQLDWConnectionString");
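
If your model contains several legacy data sources that should all point to the same server, a slight variation of the script (just a sketch, using only the properties shown above) can loop over all of them:

// Update every legacy (provider) data source in the model:
foreach(var ds in Model.DataSources.OfType<ProviderDataSource>())
    ds.ConnectionString = Environment.GetEnvironmentVariable("SQLDWConnectionString");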

For this tutorial, we’re just going to generate the script “on-the-fly” within our command line task, so that it does not need to be part of the build artifacts. Thus, our command line task will contain a script with two lines of code: The first line generates the C# script for Tabular Editor using an echo command. It’s ugly, but it works. The second line executes Tabular Editor similar to what we did in chapter 4, only this time, we use the build artifact as the source for deployment:

echo (Model.DataSources["Azure SQLDW AdventureWorks"] as ProviderDataSource).ConnectionString = Environment.GetEnvironmentVariable("SQLDWConnectionString"); > SetConnectionStringFromEnv.cs
TabularEditor.exe "_$(Build.DefinitionName)\drop\Model.bim" -S SetConnectionStringFromEnv.cs -D "$(ASConnectionString)" "$(ASDatabase)" -O -C -R -V -E -W

Pay close attention to the first argument passed to TabularEditor.exe: "_$(Build.DefinitionName)\drop\Model.bim". This points to the Model.bim file inside of our published artifact.

Why is there an underscore? Take a look at the source alias of the artifact on the release pipeline. Azure DevOps typically sets the alias to an underscore followed by the name of the build pipeline. Of course, you’re welcome to change that to something else.

Why does the path include \drop\? Well, that’s because we kept the default artifact name in the “Publish build artifacts” task that we set up in chapter 4. If you changed the name to something else, or if you copied the Model.bim file to a subfolder, make sure to update this string so that it points to the correct location of the .bim file. If in doubt, remember that you can go back to your build pipeline, queue a new build, and then inspect the published artifacts once the build finishes.
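
For example, if your build pipeline were named FinanceModel-CI (a made-up name) and you kept all the defaults, the first argument would resolve to "_FinanceModel-CI\drop\Model.bim".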

Lastly, make sure to pass the SQLDWConnectionString pipeline variable in as an environment variable:

[Screenshot: the Environment Variables section of the Command Line task]
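
In other words, under the Environment Variables section of the Command Line task, add an entry like this:

Name:  SQLDWConnectionString
Value: $(SQLDWConnectionString)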

Caution: You may be tempted to reference the SQLDWConnectionString pipeline variable directly in the first line of the command, instead of passing it in as an environment variable as I did above. That’s a bad idea! You would end up with a file on the build agent containing the full connection string, including credentials, in clear text! Always pass sensitive values through environment variables to make sure they are not persisted or logged anywhere.

Save your release pipeline. At this point, you should test the pipeline to see if it deploys correctly, so hit “Create release” in the upper right corner. Go to the newly created release and monitor it by clicking on the “Logs” button under the “Dev” stage. If everything is green and the log for the deployment step looks like this, you’ll know that the deployment was successful.

[Screenshot: log output of a successful deployment to the Dev stage]

A note on deployment switches

In the example above, we used a number of switches in our deployment. Here are some more details as a reminder of what each of these switches means (a minimal example follows the list):

  • -S / -SCRIPT This instructs Tabular Editor to execute the C# script in the file specified as the following argument. This is done after the model metadata is loaded, but before any schema checks, best practice analysis or deployments are performed, which is why it’s a good place to update data sources or other properties inside the model metadata as a sort of “last-second adjustment”.
  • -D / -DEPLOY This one is pretty self-explanatory. It instructs Tabular Editor to deploy the model. The argument immediately following this switch should be the connection string or server name of the instance of Analysis Services to deploy to. The argument after that should be the name of the database to deploy.
  • -O / -OVERWRITE This one allows Tabular Editor to overwrite an existing database with the same name. If not specified, you would get an error if a database of the same name already exists. This switch can be followed by a number of switches that indicate partial deployment of objects within the model. These are:
    • -C / -CONNECTIONS If NOT specified, existing data sources on the destination database remain intact, and only data sources that do not already exist (by name) are deployed. You must specify this if you want to update a connection string in a data source, like we did above. However, if you know that the destination database already uses the correct connection string, you can skip this whole exercise (the script and this switch) entirely.
    • -P / -PARTITIONS If specified, Tabular Editor will update the partitions in the destination database to match those in the source file. That means that if you have dynamic partitioning schemes (or incremental refresh policies for Power BI Datasets), you might lose data in your destination database, as any partitions not in the source file will be dropped. If NOT specified, partitions remain untouched. Keep this in mind if you add a column to a table and your partition query explicitly lists source columns instead of SELECT * FROM ....
    • -R / -ROLES If specified, Tabular Editor will update roles in the destination database to match those in the source file. By default, role members are not deployed, meaning any members in existing roles will remain intact (unless the role is deleted in the source). If you want to deploy role members as well, use the following switch after the -R switch:
      • -M / -MEMBERS
  • -V / -VSTS Instructs Tabular Editor to output logging/error messages in a format that Azure DevOps (formerly known as Visual Studio Team Services – VSTS) can pick up. Should always be present when running Tabular Editor in an Azure DevOps pipeline.
  • -W / -WARN This flag instructs Tabular Editor to output information about unprocessed objects as warning messages. If not specified, information about unprocessed objects is still written to the log, but it is purely informational. Objects become unprocessed when, for example, you add a new column to a table, change a calculated column or calculated table expression, etc.
  • -E / -ERR In some situations, Analysis Services may indicate that the model contains errors after an otherwise successful deployment. For example, when the model contains measures with invalid DAX code. If this flag is enabled, such errors will instruct Tabular Editor to report the deployment itself as failed, so that the DevOps pipeline will not proceed with subsequent tasks. If not set, Tabular Editor will still report the error in the log, but the DevOps pipeline will continue execution of subsequent tasks.
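
To illustrate, here is a minimal variation of the deployment command from earlier. It overwrites an existing database of the same name, but leaves existing connections, partitions and roles in the destination database untouched:

TabularEditor.exe "_$(Build.DefinitionName)\drop\Model.bim" -D "$(ASConnectionString)" "$(ASDatabase)" -O -V -E -W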

Releasing to additional stages

Now that your deployment to Dev completed successfully, simply clone the Dev stage once for each additional environment you have, and set up the relevant triggers according to your preferences. At a minimum, you probably want to set up pre-deployment approval on your Test / UAT and Prod stages, to avoid continuous integration to those stages (see section below).

Once done, go back to the “Variables” tab. It’s time to adjust the scopes of our variables to account for the different environments we’re deploying to. First, set the scope of the 3 variables you created initially to your first stage, “Dev”. Then add a similarly named set of variables for each of the remaining stages, scoped accordingly. If any of your variables have the same value throughout the stages, you don’t have to add multiple copies but can leave the variable under the “Release” scope.

[Screenshot: the Variables tab with stage-scoped variables]

To get a better overview of how the variables are scoped, you can switch the view to “Grid” mode:

[Screenshot: variables shown in Grid view]
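
As a made-up example, the grid for the three variables could end up looking like this:

Variable               Dev              Test              Prod
ASConnectionString     (dev AS server)  (test AS server)  (prod AS server)
ASDatabase             AdventureWorks   AdventureWorks    AdventureWorks
SQLDWConnectionString  (dev SQL DW)     (test SQL DW)     (prod SQL DW)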

If you want to reuse variables across multiple release pipelines, or if some variable values should be picked up from an Azure Key Vault, use Variable Groups instead.

Release triggers

The final thing you might want to consider is enabling continuous integration/deployment. Go back to the Pipeline tab, and click the little “lightning icon” on the artifact of the pipeline. Enable the continuous deployment trigger. This will automatically start your release pipeline on all stages that are set to trigger “After release”, once a new build is available on the build pipeline that produces the artifact. Stages that are set to trigger “After stage” will only be deployed once the previous stage is successfully deployed. Stages that are set to trigger “Manual only” will need user interaction for deployment.

In the screenshot below, we have set up continuous deployment on our “Dev” and “CI Test” stages. However, the “UAT” stage and the “Prod” stage require pre-deployment approval from someone, and the latter also requires a successful release on both “CI Test” and “UAT”. Keep in mind that we could have additional tasks on one of the stages – for example, running a set of tests against Analysis Services, to ensure business logic or performance benchmarks are satisfied.

[Screenshot: continuous deployment on the Dev and CI Test stages; pre-deployment approvals on UAT and Prod]

Even when you set your first stage to “After release”, you may also need to set up a trigger on your build pipeline, to get the build artifacts produced automatically in the first place. For example, you can set it up so that a build is automatically triggered whenever a commit is made to a Git branch whose name starts with “/release/…”, or something like that. That way, triggering a full build and release to your Dev environment is just a matter of forking your code into a new branch that follows a certain naming pattern. Once the build succeeds, your release pipeline is automatically triggered, and the train is moving…

Branch policies

For larger projects, the recommendation is to set up a Git branch policy on your master or development branch, to ensure that no commits are made directly to this branch. Changes can only be made through Pull Requests, so developers should branch out from development as discussed in Chapter 2. In addition to the policies specified in that chapter, you should enable build validation for the branch, using the build pipeline you created in Chapter 4, or maybe even a separate pipeline that executes additional tests and checks. This will ensure that PRs cannot be completed before the build pipeline executes successfully, which happens automatically whenever the PR is created or updated. You can also assign policies for PR review if needed.

Summing up

I sincerely apologise that it has taken me this long to complete this series of articles. I set out with a clear goal of teaching the world how we can apply true, automated CI/CD principles for Analysis Services Tabular models – something that would have been very difficult without the use of Tabular Editor. When writing the first article, I only had limited experience with this setup at a few clients, and I’ve learned many things along the way. Also, Azure DevOps has seen considerable updates through the years, so it can sometimes be difficult to keep yourself up-to-date on all the relevant areas. However, once you invest the necessary time in learning the techniques shown in these articles, you’ll be extremely glad you did, and your BI developers will thank you!

As you have seen, Azure DevOps is an incredibly powerful and flexible tool for setting up automated builds and releases, but other tools exist that can do many similar things – and all that’s needed for Tabular model deployment is that the tool can execute TabularEditor.exe.

If you want to see the principles in action, please check out the video below, and feel free to leave a comment at the bottom of the page.
