Databricks Notebook Deployment using YAML code


Purpose of this Blog:

  1. We define an end-to-end workflow for code check-in using two methods:
    • Notebook Revision History (the standard check-in process for checking in notebook code)
    • Azure Databricks Repos (no repo size limitation as of May 13th)
  2. Changes made externally to a Databricks notebook will not automatically sync with the Databricks workspace. Because of this limitation, we recommend that developers sync the entire Git repository as described in the process below.
  3. We have written YAML templates for CI and CD with PowerShell code that can deploy notebooks from multiple folders; the PowerShell step in the pipeline creates a folder in the destination if it doesn't exist, unlike the Data Thirst extension, which can only deploy notebooks to a single folder.
  4. A one-stop reference for the whole Databricks deployment workflow, from code check-in to pipelines, with detailed explanations (used by stakeholders).

 

Developer Workflow (CI/CD)


Git Integration:

  1. Create a feature branch based on the main branch and link a work item to it (a command-line sketch follows this list).

    [screenshot]

     

  2. Log in to your Azure Databricks Dev/Sandbox workspace, click the user icon (top right), and open User Settings.

    [screenshot]

     

  3. Click on the Git Integration tab and make sure you have selected Azure DevOps Services.

    [screenshot]

     

  4. There are two ways to check in code from the Databricks UI (described below):
    1. Using Revision History after opening a notebook
    2. Working with notebooks and folders in an Azure Databricks repo (Repos, a recent addition as of May 13th)
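
The branch in step 1 can also be created from the command line. The following is only a minimal sketch, using the placeholder repository URL shown later in this article; the branch name is a hypothetical example, and the work item is linked afterwards in the Azure DevOps UI.

# Clone the Azure DevOps repository (placeholder URL, as used elsewhere in this article)
git clone https://dev.azure.com/<organisationname>/<ProjectName>/_git/<Repo>
cd <Repo>

# Create a feature branch from the latest main and publish it
git checkout main
git pull
git checkout -b feature/databricks-notebook-change   # hypothetical branch name
git push -u origin feature/databricks-notebook-change

# The work item can then be linked to the branch (or to the pull request) in the Azure DevOps UI.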

 

Code Check-in to the Git Repository from the Databricks UI


I. Notebook Revision History:


  1. Go to the notebook you want to change and deploy to another environment.

    [screenshot]

    Note: Developers need to maintain a shared/common folder for all the notebooks. You can make all required changes in your personal folder and then move those changes to the shared/common folder. The CI process creates the artifact from this shared/common folder.

  2. Click on Revision history at the top right.

    [screenshot]

     

  3. If it is a new notebook, you will notice that Git is not linked to the notebook; for an existing notebook, Git might be linked to an older branch that no longer exists.

  4. Click on 'Git: Not Linked' (new notebook) or 'Git Synced' (existing notebook).

  5. We need to configure the Git repository (screenshot below).

    [screenshot]
  • Select the feature branch from the list of branches in the drop-down.
  • Enter the URL of the repository in the format:
    https://dev.azure.com/<organisationname>/<ProjectName>/_git/<Repo>
  • Enter the path of the notebook in the repository. In our case it is src/Databricks/ITDataEngineerADBDev2/notebooks/<folder name>/<notebook.py>.
  6. Click on Save Notebook, adding a comment, to commit the code to the repository.

    [screenshot]

     

  7. Create a PR after the changes are reflected in the feature branch.

    [screenshot]

     

  8. After the PR is approved, the code is merged into the main branch, and the CI-CD process starts from here.

[screenshot]

 

Note: Linking individual notebooks has the limitation described above: changes made outside Databricks do not automatically sync with the workspace.

 

II. Azure Databricks Repos:


Introduction

  • Repos is a newly introduced feature in Azure Databricks that is in Public Preview.
  • This feature used to have a 100 MB limit on the size of the linked repository, but as of May 13th it works with larger repositories.
  • We can link a repository directly to the Databricks workspace to work on notebooks based on Git branches (a REST-based sketch follows this list).
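
For automation scenarios, the same linking can also be scripted against the Databricks Repos REST API. This is only a minimal sketch alongside the UI flow described below; the workspace URL, PAT, repository URL, Repos path, and branch name are all placeholders.

# Placeholders: workspace URL and PAT for the target Databricks workspace
$DatabricksUrl = "https://<databricks-instance>.azuredatabricks.net"
$Headers = @{ Authorization = "Bearer <databricks-pat>" }

# Link the Azure DevOps repository under /Repos/<user>/<Repo> (Repos API)
$CreateRepoBody = @{
    url      = "https://dev.azure.com/<organisationname>/<ProjectName>/_git/<Repo>"
    provider = "azureDevOpsServices"
    path     = "/Repos/<user>/<Repo>"
} | ConvertTo-Json

$Repo = Invoke-RestMethod -Uri "$DatabricksUrl/api/2.0/repos" -Method Post -Headers $Headers -Body $CreateRepoBody

# Check the linked repo out to a feature branch (hypothetical branch name)
$UpdateBody = @{ branch = "feature/databricks-notebook-change" } | ConvertTo-Json
Invoke-RestMethod -Uri "$DatabricksUrl/api/2.0/repos/$($Repo.id)" -Method Patch -Headers $Headers -Body $UpdateBody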

Repos Check-in Process:

  1. Click on the Repos tab, right-click the folder you want to work in, and then select "Add Repo".

    [screenshot]

     

  2. Fill in the repo URL from Azure DevOps, select the Git provider "Azure DevOps Services", and click Create.

    [screenshot]

     

  3. The repo is added with the folder name set to the repo name (Data in the screenshot below) and a branch selector (the branch symbol with feature/ and a down arrow in the screenshot below). Click the down arrow beside the branch name.

    [screenshot]

     

  4. After clicking the down arrow (previous screenshot), search for and select your existing feature branch, or create a new feature branch (as shown in the screenshot below).

    [screenshot]

     

  5. All the folders in the branch are now visible (see the screenshot below).

    [screenshot]

     

  6. Open the folder that contains the notebooks (see the screenshot below). Create a new notebook and write code (right-click the folder and select "Create" > "Notebook", as in the screenshot), or edit an existing notebook in the folder.

     

     

    [screenshot]

     

     

  7. After creating a new notebook or editing an existing one, click the feature branch name at the top left of the notebook. A new window appears showing the changes. Add a Summary (mandatory) and a Description (optional), then click "Commit and Push".

[screenshot]

 

  8. A pop-up then appears with a message that the changes are "committed and pushed". The user should then raise a PR to merge the feature branch into the main branch.

[screenshot]

 

  • After a successful PR merge, the CI-CD pipeline runs to deploy the notebooks to the higher environments (QA/Prod).

CI-CD Process


Continuous Integration (CI) pipeline:

The CI pipeline builds the artifact by copying the notebooks from the main branch to the staging directory.

It has two tasks:

  1. Copy Files - copies the notebooks from the main branch to the artifact staging directory.

  2. Publish Build Artifacts - publishes the artifact from $(build.artifactstagingdirectory)

  • YAML Template

 

 

name: Release-$(rev:r)
trigger: none

variables:
  workingDirectory: '$(System.DefaultWorkingDirectory)/<path>'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: CopyFiles@2
      displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
      inputs:
        SourceFolder: '$(workingDirectory)'
        TargetFolder: '$(build.artifactstagingdirectory)'
    - task: PublishBuildArtifacts@1
      displayName: 'Publish Artifact: notebooks'
      inputs:
        ArtifactName: dev_release
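
As a usage sketch only (assuming the template above is saved as azure-pipelines-ci.yml in the repository and the Azure CLI azure-devops extension is installed), the CI pipeline can be created and queued from the command line; the organisation, project, and repository names are the placeholders used elsewhere in this article.

# Point the Azure CLI at the Azure DevOps organisation and project (placeholders)
az devops configure --defaults organization=https://dev.azure.com/<organisationname> project=<ProjectName>

# Create the CI pipeline from the YAML file in the repo, then queue a run against main
az pipelines create --name Databricks-CI --repository <Repo> --repository-type tfsgit --branch main --yml-path azure-pipelines-ci.yml
az pipelines run --name Databricks-CI --branch main

The name Databricks-CI is chosen to match the pipeline resource referenced by the CD template further below.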

 

 

Continuous Deployment (CD) pipeline:

Deployment with a secure self-hosted agent

a. Run the release pipeline for the specified target environment.
This downloads the previously generated build artifacts and the secure connection strings from Azure Key Vault. Make sure your self-hosted agent is configured properly as per Self-Hosted Agent. The pipeline then deploys the notebooks to your target Azure Databricks workspace.

The code below shows how to run your release pipeline on a specific self-hosted agent; take note of the pool and demands configuration.

  • In the continuous deployment pipeline, we deploy the built artifacts to the Dev, QA, UAT, and Prod environments.
  • Approval gates are set up before the deployment to each environment (stage) starts.

 

 

stages:
- stage: Release
  displayName: Release stage
  jobs:
  - deployment: DeployDatabricks
    displayName: Deploy Databricks Notebooks
    pool:
      name: OTT-DEV-FanDataPool
      demands:
      - agent.name -equals <agent-name>
    environment: <env-name>

 

 

We have two steps for the deployment:

  1. Getting the PAT (Personal Access Token) and the target Databricks workspace URL secrets from the Key Vault.

  2. Importing the notebooks into the target Databricks workspace using the workspace import REST API (PowerShell task with an inline script).

  • We can deploy notebooks from multiple folders using this script.

  • We create a folder (matching the folder in the repository, the same structure as the Sandbox/Dev environment) if it does not exist in the target (Dev/QA/UAT/Prod) Databricks workspace and then import the notebooks into that folder.

 

  • YAML template

 

 

name: Release-$(rev:r)
trigger: none

resources:
  pipelines:
  - pipeline: notebooks
    source: Databricks-CI
    trigger:
      branches:
      - main

variables:
- name: azureSubscription
  value: '<serviceConnectionName>'
- name: workingDirectory_shared
  value: '$(Pipeline.Workspace)/<path>/'

stages:
- stage: Release
  displayName: Release stage
  jobs:
  - deployment: DeployDatabricks
    displayName: Deploy Databricks Notebooks
    environment: DEV
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureKeyVault@1
            inputs:
              azureSubscription: '$(azureSubscription)'
              KeyVaultName: $(dev_keyvault)
              SecretsFilter: 'databricks-pat,databricks-url'
              RunAsPreJob: true
          - task: AzurePowerShell@5
            inputs:
              azureSubscription: '$(azureSubscription)'
              ScriptType: 'InlineScript'
              Inline: |
                # Find every .py notebook in the downloaded artifact
                $filenames = Get-ChildItem $(workingDirectory_shared) -Recurse | Where-Object {$_.Extension -eq ".py"};
                $filenames | ForEach-Object {
                  $NewNotebookName = $_.Name
                  # Get our PAT token for the workspace from the Key Vault variable
                  $Secret = "Bearer " + "$(databricks-pat)";
                  # Set the URIs for the API endpoints
                  $ImportNoteBookAPI = "$(databricks-url)" + "/api/2.0/workspace/import";
                  $FolderCheckAPI = "$(databricks-url)" + "/api/2.0/workspace/get-status";
                  $FolderCreateAPI = "$(databricks-url)" + "/api/2.0/workspace/mkdirs"
                  # Read and base64-encode the notebook contents for import
                  $BinaryContents = [System.IO.File]::ReadAllBytes($_.FullName);
                  $EncodedContents = [System.Convert]::ToBase64String($BinaryContents);
                  $foldername = $_.FullName.split("\")[$_.FullName.split("\").length-2];
                  $folderpath = "/" + $foldername + "/";
                  $notebookpath = $folderpath + "$NewNotebookName"
                  # API body for importing notebooks
                  $ImportNoteBookBody = @{
                    content = "$EncodedContents"
                    language = "PYTHON"
                    overwrite = $true
                    format = "SOURCE"
                    path = $notebookpath
                  }
                  # Convert body to JSON
                  $ImportNoteBookBodyText = $ImportNoteBookBody | ConvertTo-Json
                  # API body for creating the folder
                  $CreateFolderBody = @{
                    path = $folderpath
                  }
                  # Convert body to JSON
                  $CreateFolderBodyText = $CreateFolderBody | ConvertTo-Json
                  # Headers for all the API calls
                  $headers = @{
                    Authorization = $Secret
                  }
                  try {
                    # Check whether the folder exists
                    $CheckPath = $FolderCheckAPI + "?path=" + $folderpath;
                    $CheckFolder = Invoke-RestMethod -Uri $CheckPath -Method Get -Headers $headers;
                  }
                  catch [System.Net.WebException] {
                    Write-Host "Entering the catch block and creating the folder";
                    # Create the folder because it does not exist
                    Invoke-RestMethod -Uri $FolderCreateAPI -Method Post -Headers $headers -Body $CreateFolderBodyText
                  }
                  # Import the notebook into the folder in the target Databricks workspace
                  Write-Host ("Creating notebook " + $notebookpath)
                  Invoke-RestMethod -Uri $ImportNoteBookAPI -Method Post -Headers $headers -Body $ImportNoteBookBodyText
                }
              azurePowerShellVersion: 'LatestVersion'
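
As an optional verification step (not part of the template above, and only a sketch with placeholder values), the target workspace can be queried afterwards with the workspace list API to confirm the notebooks were imported:

# Placeholders: workspace URL, PAT, and the folder that was just deployed
$DatabricksUrl = "https://<databricks-instance>.azuredatabricks.net"
$Headers = @{ Authorization = "Bearer <databricks-pat>" }

# List the objects under the deployed folder and show their paths and types
$Listing = Invoke-RestMethod -Uri "$DatabricksUrl/api/2.0/workspace/list?path=/<folder name>" -Method Get -Headers $Headers
$Listing.objects | Select-Object path, object_type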

 

 

 
