Boost your CICD automation for Synapse SQL Serverless by taking advantage of SSDT and SqlPackage CLI

Introduction

 

Azure Synapse Analytics Serverless SQL is a query service used over the data in your data lake, mainly for data discovery, transformation, and exploration purposes. It is therefore normal to find in a Synapse Serverless SQL pool many objects referencing external locations, using disparate external data sources, authentication mechanisms, file formats, and so on. In the context of CICD, where automated processes are responsible for propagating the database code across environments, you can take advantage of database-oriented tools like SSDT and the SqlPackage CLI to ensure that this code conforms to the targeted resources.

 

In this article I will demonstrate how you can take advantage of these tools when implementing CICD for the Azure Synapse Serverless SQL engine. We will leverage SQL projects in SSDT to define our objects and implement deploy-time variables (SQLCMD variables). Through CICD pipelines, we will build the SQL project into a Dacpac artifact, which enables us to deploy the database objects one or many times with automation.


Pre-Requisites

 

Before you run this lab, make sure that you are using the latest version of Visual Studio, since support for Synapse Serverless was introduced in version 17.x. The one I used in this lab was Microsoft Visual Studio Community 2022 (64-bit), Version 17.7.3.

 

I've used Azure DevOps Git to set up the automated processes that build and deploy the Dacpac. If you are using your own infrastructure to run these processes, ensure that you have the latest SqlPackage version installed on your agent machine. In this lab, I've used a Microsoft-hosted agent running the latest Windows image (with SqlPackage v162.0.52).

 

TOC

 

To facilitate navigating through this lab, I'm breaking down this article into several steps:

 

Step 1: Adding a new database project to Visual Studio and importing the serverless pool

Step 2: Taking advantage of SQLCMD variables in your Visual Studio code

Step 3: Integrating your Visual Studio solution with a Git Repository

Step 4: Creating a DevOps Pipeline and building the Dacpac file

Step 5: Creating a Release Pipeline and deploying the Dacpac

 

Starting the Lab

 

Step 1: Adding a new database project to Visual Studio and importing the serverless pool 

 

Create a new project and select the "SQL Server Database Project" template.

 


 

Select "Next" to start configuring your new project.

 


 

Select "Create" to finish the project configuration.

 

Navigate to the Solution Explorer blade and double-click "Properties". This will open a new window displaying the database project properties.

 


 

In "Project Settings", select the "Azure Synapse Analytics Serverless SQL Pool" target platform as shown in the figure below. If you don't see this option available in the dropdown list, most likely you don't have the latest SSDT/ Visual Studio version installed. 

 


Note: You can refer to these links in case you need to update SSDT and Visual Studio:

Download SQL Server Data Tools (SSDT) - SQL Server Data Tools (SSDT) | Microsoft Learn

SQL Server Data Tools - SQL Server Data Tools (SSDT) | Microsoft Learn


After selecting the target platform, you can start importing the Synapse Serverless pool to your project.

To do this, from the "Solution Explorer" blade, right-click the project name and then select the "Import" --> "Database…" option.

 


 

Hit the "Select Connection" button to specify your Synapse Serverless pool and from the "Import Settings" section, make sure to uncheck the "Import application-scoped objects only" in case you need to import any server-scoped objects as well.

 


 

Select "Start" to begin importing the Serverless SQL pool objects to your project.

When the import is finished, check the Solution Explorer, as it will show the SQL files containing your database objects.

 


 

Step 2: Taking advantage of SQLCMD variables in your Visual Studio code

 

In this lab, I'm using an external table that is pointing to a specific external location in my development environment. This external table, named "userData", is targeting a delimited file, named "eds_mapping.csv", saved in a storage container named "csv".

 

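For reference, the imported table definition looks roughly like this (a sketch: the column list and the file format name are hypothetical, and the exact options may differ from what your import produces):

CREATE EXTERNAL TABLE [dbo].[userData] (
    -- column definitions as imported from the source pool (hypothetical here)
    [id] INT,
    [name] VARCHAR(100)
)
WITH (
    LOCATION = 'csv/eds_mapping.csv',
    DATA_SOURCE = [eds_storagecicd],
    FILE_FORMAT = [DelimitedTextFormat]  -- hypothetical file format name
);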

 

The data source used by this external table is named "eds_storagecicd", and it targets a storage account named "stgsyncicddev".

 

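The corresponding data source definition would look something like this (a sketch: the endpoint URL format and the credential name are assumptions):

CREATE EXTERNAL DATA SOURCE [eds_storagecicd]
WITH (
    LOCATION = 'https://stgsyncicddev.blob.core.windows.net',
    CREDENTIAL = [sas_credential]  -- hypothetical credential name
);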

 

I've decided to use the Shared Access Signature authentication method when accessing my storage account, which is why I'm defining a database scoped credential with this kind of authentication.

 


 

IMPORTANT NOTE:

The imported code does not show the full T-SQL statement: the SECRET argument is missing from the CREATE DATABASE SCOPED CREDENTIAL statement, even though a SECRET value was provided at creation time. This is by design in Visual Studio (for security reasons): when importing your database objects into the database project, sensitive information is not exposed in the code.


So, make sure you revise any code that uses sensitive information, like database scoped credentials, and validate that the object definition is consistent with what you have in the database.
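A complete statement, with the SECRET restored, would look roughly like this (a sketch: the credential name is hypothetical, and the SAS token is a placeholder you should never commit to source control):

CREATE DATABASE SCOPED CREDENTIAL [sas_credential]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<your SAS token>';  -- stripped on import; restore it deliberately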

 

Here's an example of an error message that can result from deploying an external table to a target environment with a missing or invalid credential:

 

Msg 16562, Level 16, State 1, Line 27

External table 'dbo.userData' is not accessible because location does not exist or it is used by another process.

 

Usually, when deploying database objects like external tables from a source environment to a target environment, you need these objects to reference different resources. For example, the files referenced by external tables might be stored in a different storage location or storage path (and possibly under a different filename).

 

So, how can you ensure that your database objects are referencing the right resources when being deployed to a target environment?

The answer is SQLCMD variables. These variables can be used in SQL Server Database Projects to provide dynamic substitution at publish time, for example when publishing Dacpac files. By entering these variables in the project properties, they are automatically offered during publishing and stored in publishing profiles.

 

In Visual Studio, you can add these variables to your project from the "Project Properties" window, by selecting the "SQLCMD Variables" menu option.

 


 

Important note: by adding these variables to your code in Visual Studio, you are not changing anything at the database level; your changes will be reflected in your project files only.

 

In this example, I'm creating three new variables, setting the values for the storage location, the file path, and the SAS key that will be used by the external table "userData" in my development environment.

 


 

The hardcoded values are then replaced by these SQLCMD variables in my database project files.

 

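Put together, the parameterized definitions look roughly like this (a sketch reusing the hypothetical names from the earlier snippets; the $(variable) tokens are substituted at publish time):

-- Data source: the storage account URL becomes a variable
CREATE EXTERNAL DATA SOURCE [eds_storagecicd]
WITH (
    LOCATION = '$(storage_location)',
    CREDENTIAL = [sas_credential]
);

-- Credential: the SAS token becomes a variable
CREATE DATABASE SCOPED CREDENTIAL [sas_credential]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '$(sas_key)';

-- External table: the file path becomes a variable
CREATE EXTERNAL TABLE [dbo].[userData] (
    -- column definitions as imported (hypothetical here)
    [id] INT,
    [name] VARCHAR(100)
)
WITH (
    LOCATION = '$(file_path)',
    DATA_SOURCE = [eds_storagecicd],
    FILE_FORMAT = [DelimitedTextFormat]
);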

 

From the "File" menu, you can save all the changes, and before moving on to the next step, I'd recommend building your solution, from the "Build" menu, ensuring that your code is error free.

 



And with this last action, we complete the lab's second step.

 

Step 3: Integrating your Visual Studio solution with a Git Repository

 

By having your Visual Studio solution integrated with a Git repository, you are leveraging source control in your project and improving your Continuous Integration process, as part of the CICD automation for your Synapse Serverless pool.

As part of this process, the goal is to push the changes from your Visual Studio project to your Git branch, as the Dacpac file will be built on top of these files. This Dacpac file will represent the outcome of this Continuous Integration process.    


To integrate your Visual Studio solution with your Git provider, switch from the "Solution Explorer" tab to the "Git Changes" tab (you can access this tab from the "View" menu).

 

From "Git Changes", select the "Create Git Repository…" option to initialize a local Git repository.


 

To create a new remote repository to push your changes to, select the "Push to a new remote" option; otherwise, if you prefer to use an existing remote repository, select "Existing remote" and provide your repository URL.

 


 

Select "Create and Push" to complete the Git integration. During this integration, your project files will be automatically pushed to your remote repository. You can check the master branch in your remote repository, as it should contain all your project files:

 


 

 

Step 4: Creating a DevOps Pipeline and building the Dacpac file

 

After integrating your Visual Studio database project with your Git repository, it's time to set up a DevOps pipeline to build the Dacpac.

 

From the left navigation menu, select "Pipelines" and then "New pipeline" to create a new pipeline.

Select "Use the classic editor" to create a new pipeline without YAML.

 


 

Select "Azure Repos Git" as the source type, and then specify your project name, repository and branch.

 


 

 

Select the .NET desktop template.

 


 

To simplify your pipeline, you can keep just the tasks needed to build the solution and publish the Dacpac as a build artifact.

 


 

Select "Save & Queue" to save your changes and then select "Save and Run" to run your pipeline.

 


 

Once the job run is finished, you can validate the list of published artifacts by following the artifacts link in the run summary.

 


 

The link will take you to the Dacpac file.

 



Now that the Dacpac file has been published, it's time to configure the Continuous Delivery process.

 

Step 5: Creating a Release Pipeline and deploying the Dacpac

 

In this step I'm creating a new release pipeline to deploy the Dacpac file to a target Synapse Serverless SQL pool.

 

From the left navigation menu, select "Pipelines" and then select "Releases".

To start configuring your new release pipeline, select "+New" and then "New release pipeline".

 

 


 

When prompted to select a template, select "Empty Job".

 


 

You can name your stage and then close this blade.

 


 

Let's start by adding our Dacpac file as a pipeline artifact. Select "+Add" to add a new artifact.

 


 

Under "Source Type" select "Build". you must specify your project name and the build pipeline name.

 


 

Select "Add" to add this artifact to your pipeline.

 


 

Select the "Tasks" tab to start configuring your pipeline. Click the "+" button in the Agent Job bar to add a new task.

 


 

You can type "data warehouse" in the search bar , as you're looking to add the "Azure SQL Datawarehouse deployment" task to your release pipeline. This task will allow deploying a Dacpac file to the target environment.  

 


 

Select "Add" to add this task to your pipeline.

 

Let's start by configuring the authentication-related inputs in this task. Instead of using hardcoded values, I'll take advantage of user-defined variables in my DevOps pipeline.
 


 

To define and set the values for your variables, select the "Variables" tab. I'm using variables that define values for my target Synapse Serverless server, database, and user credentials.

 


 

Back to the "Tasks" tab, let's continue configuring our task , in particular the "Deployment Package" section.

 

When you select the "SQL DACPAC file" deploy type, the deployment task will execute the SqlPackage CLI to deploy (publish) the Dacpac file. The SqlPackage is a command line utility built on top of the Data-Tier Application Framework (DacFx) framework , and it exposes some of the public DacFx APIs like the Extract, Publish and Script. Since we want to deploy a dacpac file, the action that we are interested in is the PUBLISH action. 

 

To specify the "DACPAC file" location, hit the "Browse" button 


 

Specify the Dacpac file location from the linked artifact.

 


 

There's a final step that you need to take before saving and running your release pipeline: replacing the SQLCMD variable values with new values pointing to your target environment, as these variables are still referencing the resources in your source environment.

 

Any valid SQLCMD variable existing in the Dacpac can be overridden by adding the /v: property (short form of /Variables:) to the arguments list.
You can refer to this link to get more details on how to use SQLCMD variables with SqlPackage:

SqlPackage Publish - SQL Server | Microsoft Learn

 

In this example, because I'm using those three SQLCMD variables in Visual Studio (storage_location, file_path, and sas_key), I'm adding three user-defined variables to my pipeline to override the SQLCMD variables.

My external table "userData" will be pointing to a different storage account (stgsyncicduat instead of stgsyncicddev) and to a different file path (target-csv/eds_mapping.csv instead of csv/eds_mapping.csv). I'll obviously be replacing the storage account SAS key as well.

 


 

After defining the pipeline variables, return to the task configuration, as you need to configure the SQLCMD variable replacement. This is done via additional SqlPackage arguments using the /Variables property, which instructs SqlPackage to override the SQLCMD variables in the Dacpac file with the new values defined in your DevOps variables.


This is how I'm overriding my SQLCMD variables (storage_location, file_path, and sas_key).

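Assuming the pipeline variables are named after the SQLCMD variables they override (an assumption; use whatever names you defined in the "Variables" tab), the additional arguments field would contain something like the line below. Azure DevOps expands the $(...) macros to the pipeline variable values before SqlPackage receives them:

/v:storage_location=$(storage_location) /v:file_path=$(file_path) /v:sas_key=$(sas_key)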

 

After the configuration is complete, hit the "Save" button and then select "Create Release" to run your release pipeline.

 

You can track the release progress by selecting the release number link or by selecting the "View Releases" button.


 

Once in the release, you can mouse over the stage name and select the "Logs" button to get more details about the actions being performed during the job run.

 


 

After the execution is completed, the task output should report a successful publish of the Dacpac to the target database.

 


 

To validate that the deployment went well and all the objects are now pointing to the target environment resources, you can use a client tool such as SSMS.
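For example, a couple of catalog queries like these, run against the target pool, can confirm the new locations (a sketch; adjust the names to your own objects):

-- External data sources should now point at the target storage account
SELECT name, location FROM sys.external_data_sources;

-- External tables should now reference the target file path
SELECT t.name, t.location, ds.name AS data_source
FROM sys.external_tables AS t
JOIN sys.external_data_sources AS ds
    ON t.data_source_id = ds.data_source_id;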

Et voilà! My Synapse serverless objects were successfully deployed to the target environment, and they are now pointing to a different external location :)

 


 

Conclusion


By completing this lab, you should have learned how to take advantage of database-oriented tools (like SSDT and SqlPackage) to boost your CICD automation for Azure Synapse Serverless SQL pools. These tools facilitate the deployment of database changes across environments by providing deploy-time variables (SQLCMD variables), which are particularly helpful in the context of CICD for an Azure Synapse Serverless SQL pool, where you must adapt your database objects to the target environment resources.
