Closed Pim-Mostert closed 2 months ago
Chiming in as I ran into the same thing a few weeks ago.
The culprit is the Azure CLI configuration file location. We currently don't forward the `AZURE_CONFIG_FILE`
environment variable, which the `AzureCLI@2` task sets (perhaps for isolation, but I don't know for sure).
To work around this, set `useGlobalConfig`: the task then uses the default configuration file location,
which the Azure CLI will always find:
```yaml
- task: AzureCLI@2
  inputs:
    # ...
    useGlobalConfig: true
    # ...
```
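To illustrate the failure mode, here is a small sketch (the config path is hypothetical; the task picks its own location): an environment variable set for the task's script is visible to `az` calls made directly from that script, but a child process started without forwarding it falls back to the default `~/.azure` location.

```shell
# Hypothetical path for illustration only; the AzureCLI task chooses its own.
export AZURE_CONFIG_FILE=/tmp/azure-task-config

# The task's own process sees the override:
parent_cfg="${AZURE_CONFIG_FILE:-$HOME/.azure}"
echo "parent: $parent_cfg"

# A subprocess launched without the variable falls back to the default:
child_cfg=$(env -u AZURE_CONFIG_FILE sh -c 'echo "${AZURE_CONFIG_FILE:-$HOME/.azure}"')
echo "child:  $child_cfg"
```

With `useGlobalConfig: true` the task logs in at the default location, so both the parent and any child process agree on where the Azure CLI state lives.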
@pietern That indeed works for me too, thanks!
For reference, here is my full working configuration:
```yaml
variables:
  databricksHost: "https://adb-XXX.azuredatabricks.net"

pool:
  vmImage: "ubuntu-latest"

jobs:
  - job: databricks_asset_bundle
    displayName: "Deploy Databricks Asset Bundle"
    steps:
      - bash: |
          # Install Databricks CLI - see https://learn.microsoft.com/en-us/azure/databricks/dev-tools/ci-cd/ci-cd-azure-devops
          curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh

          # Verify installation
          databricks --version

          # Create databricks config file (use $HOME; a tilde inside
          # double quotes is not expanded by the shell)
          file="$HOME/.databrickscfg"
          if [ -f "$file" ]; then
            rm "$file"
          fi
          echo "[DEFAULT]" >> "$file"
          echo "host = $databricksHost" >> "$file"
        displayName: Setup Databricks CLI
      - task: AzureCLI@2
        displayName: Deploy Asset Bundle
        inputs:
          azureSubscription: "my-wif-serviceconnection"
          useGlobalConfig: true
          scriptType: "bash"
          scriptLocation: "inlineScript"
          inlineScript: |
            databricks bundle deploy --target dev
```
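A small shell note relevant to the setup step: the tilde is only expanded when it is unquoted, so `"~/.databrickscfg"` is a literal relative path, while `"$HOME/.databrickscfg"` resolves to the real config file:

```shell
# Tilde expansion happens only when the ~ is unquoted.
quoted="~/.databrickscfg"          # literal two-character prefix "~/"
expanded="$HOME/.databrickscfg"    # absolute path under the home directory
echo "$quoted"
echo "$expanded"
```

An `if [ -f "$quoted" ]` test would therefore check a literal path starting with `~` and almost always report the file as missing.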
Hello @Pim-Mostert, in your

`echo "host = $databricksHost" >> ~/.databrickscfg`

did you add the host, `client_id`, and `client_secret` of the Service Principal?
@pabtorres I only added the host. The necessary credentials are injected under the hood by the AzureCLI
task.
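For context: with `azure-cli` type authentication the Databricks CLI obtains tokens by invoking the Azure CLI rather than reading credentials from the profile, so the profile only needs the host. A sketch of the resulting `~/.databrickscfg` (the host value is a placeholder):

```ini
[DEFAULT]
host = https://adb-XXX.azuredatabricks.net
; optionally pin the auth method explicitly:
; auth_type = azure-cli
```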
Describe the issue
I want to deploy a Databricks Asset Bundle from an Azure DevOps pipeline using the Databricks CLI. While authentication works fine for regular CLI commands (such as `databricks experiments list-experiments`), it fails for bundle deployment (`databricks bundle deploy`). In the pipeline I'm using the `AzureCLI` task, which lets the Databricks CLI use `azure-cli` type authentication. As mentioned in https://github.com/databricks/databricks-sdk-go/issues/1025#issuecomment-2312280494, the issue appears to be:
Configuration
Steps to reproduce the behavior
Expected Behavior
The deployment of the asset bundle should succeed.
Actual Behavior
Note that the listing of experiments works fine:
OS and CLI version
Databricks CLI (version output by the Azure pipeline): v0.227.0
OS: Ubuntu (Microsoft-hosted agent, latest version)
Is this a regression?
I don't know, I'm new to Databricks.
Debug Logs
Output of `databricks experiments list-experiments --log-level TRACE`: experiment-list.txt
Output of `databricks bundle deploy --log-level=debug --target dev`: bundle-deploy.txt