Open mwieczorek opened 6 years ago
If your VM has a public IP address, you have lots of good tools to upload a file. `az vm open-port` can also be used if the tool you prefer requires a particular port to be opened.
Thanks @yugangw-msft. Sure, I can upload files using SCP, but using the az CLI would make some automation tasks simpler, especially for VMs without a public IP.
@yugangw-msft I'd love it if we could consider this. It's a great feature: for automation you don't want to use all those other tools, AND you handle the non-public cases.
@squillace, let me see how much I can do
Thanks so much, of course. Lemme know. To flesh out the scenario a bit: the more the CLI can shorten basic functionality that is, roughly speaking, always required in real-world scenarios, the more useful the CLI becomes and the happier the customer is with it. This one command would replace getting the SSH endpoint, logging in and/or retrieving the credentials from Key Vault (where in the real world they should likely exist), and THEN using scp, and that assumes an external SSH endpoint (which in the real world many people will simply not want).
Yet the command is simple and clean, and it covers one of the critical tasks you will perform with a VM. In short, it's a real time saver whether you're automated or not.
Thanks for all the feedback. I have the following ideas for review:
az vm scp
a. `--ssh-private-key`: defaults to `~/.ssh/id_rsa`, since `vm/vmss create` also references it. It can take a file path or a Key Vault secret ID.
b. `--action <upload/download>`: defaults to upload.
c. `--local <file1> <file2> <folder1>`: if downloading, only accepts a single file path or directory (many-to-many would be hard to get right through the command line).
d. `--remote <file1> <file2> <folder>`: if uploading, only takes a single file path or directory.
e. `--recursive`: when a directory is involved.
f. `--port`: in case the port is different.
g. `--user`: the CLI will default to the default admin, but if you have multiple users configured, you can use those.
az vm scp -g rg1 -n vm1 --local ~/foo1.txt --remote ~/dest-dir
az vm scp -g rg1 -n vm1 --action download --local ~ --remote ~/dest-dir/bar.exe
az vm scp -g rg1 -n vm1 --local ~/myweb/src --remote ~/websites --recursive
az vm scp -g rg1 -n vm1 --local ~/myweb/src --remote ~/websites --recursive --ssh-private-key https://mmadeclitestkv.vault.azure.net/secrets/ssh-privates-key
Let me know if you have any comments/suggestions; otherwise I will start to get things going.
@khenidak that looks good to me at first blush. Whatchya think?
Are we wrapping ssh? I think users expect this to work on private (not exposed to the internet) VMs.
@kkmsft FYI..
Also, sure seems like a dupe of #5622 to me....
two birds one stone!
While I see ssh of value (shortcuts for connection and so on), the above request is not an scp wrapper. Users should be able to copy files from/to an Azure VM using only the CLI, without any configuration on the VM other than the walinuxagent. The entire process should be done via the fabric, i.e. if I have contributor access to the VM resources (I can reset the password, right?) then I should be able to copy from/to the VM.
agree ^^
I'm just wondering, rather, whether the #5622 scenario would be largely solved by this. I don't want scp embedded here, either; that's not the goal. Easily downloading/uploading through the fabric is the goal.
The implementation will not be a wrapper of SSH/SCP, and it will have no dependencies on any external command. I chose the scp naming just for better discoverability.
That makes sense, but I'm concerned that if you do that and don't use copy, you'll get two things:
My guess is that copy-files or copy is best. @khenidak @kkmsft what do you think would be most obvious here? Do you think my concerns are valid?
For other questions, suggestions...
For the key/password, it can come from the local ~/.ssh, command arguments, or a Key Vault key, or you can run `az vm user` to reset them and pass the new one to the command.
For private VM instances inside a VMSS, the CLI by default enables an inbound NAT rule for the SSH protocol, so the same command should work, but we will need to test it.
For #5622, I am not convinced we need to support that, much like ssh. External tools, to me, are sufficiently easy to use. I am fine with doing scp functionality because it can help scripting.
A question here though: what does "through the fabric" mean here, particularly "the fabric"?
For #5622, I think this MOSTLY solves the user's need. By "through the fabric" here I mean what @khenidak refers to above in https://github.com/Azure/azure-cli/issues/5275#issuecomment-412147656: "if I have contributor access to the VM resources... I should be able to copy from/to vm". Authentication/authorization is run through ARM, is the way I would say it. Make sense? I'm quite sure you're doing this, but just to be clear....
@yugangw-msft where are we on this? As it's not an scp wrapper, and we don't want that, I really don't think we should call it "scp": that will drive users mad with irritation when it doesn't DO scp, but if it wraps scp, then it won't do what we're asking.
I note that I didn't respond to your comment above: "external tools, to me, is sufficiently easy to use. I am fine to do scp functionality because it can help scripting."
External tools are bad in this case. This ability to use az to copy files onto the VM through the AAD auth mechanism, without any other tool or switch, is what we're looking for. It's not about anything specific, either. Happy to write the formal spec if necessary.
I am pretty flexible on the naming thing :), as long as it is reasonable. But we need to sort out how to implement this first. The most important scenario is to find a solution to exchange files with VMs without a public IP. Without SSH keys/passwords involved, one of the few options left is the "run-command" API (a wrapper around the custom script extension). To get the file out, there are 2 mechanisms I can think of: `cat` and `dt`, to exchange the file content through stdout. This is a bit tricky, and it also hits the API limitation that only 4K of stdout can be returned from the API, so you have to split up a big file manually. And for binary files, you are out of luck. Any other means you can think of? I am counting on you :)
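The manual chunking described above can be sketched locally. This is only an illustration under assumptions: the file names and the 3000-byte chunk size are arbitrary, and the `az vm run-command invoke` call is shown as a comment because it needs a live VM:

```shell
#!/usr/bin/env bash
# Minimal local sketch of splitting a file into base64 chunks that each
# fit under the ~4K stdout/parameter limit, then reassembling them.
set -euo pipefail

src=payload.bin
dd if=/dev/urandom of="$src" bs=1024 count=16 2>/dev/null

# Encode as one long base64 line, then split into pieces below 4K each.
base64 "$src" | tr -d '\n' > payload.b64
split -b 3000 payload.b64 chunk_

# In a real transfer, each chunk would ride in one invocation, e.g.:
#   az vm run-command invoke -g rg1 -n vm1 --command-id RunShellScript \
#     --scripts 'printf %s "$1" >> /tmp/payload.b64' --parameters "$(cat chunk_aa)"

# Receiving side: concatenate the chunks in lexical order and decode.
cat chunk_* | base64 -d > reassembled.bin
cmp -s "$src" reassembled.bin && echo "round trip OK"
```

Since `split` names the pieces `chunk_aa`, `chunk_ab`, ..., a plain `cat chunk_*` restores the original order; binary content survives because only the base64 text crosses the API boundary.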
I'm up to the challenge! Gimme a minute. :-)
@squillace / @yugangw-msft - any update on this?
We need compute service support to copy large file content from client machine to VM w/o public ip.
Has this been implemented? If so, it would be of great use, as we had plans of using the `az vm scp` command within our existing GitLab CI/CD.
Any updates on this? Seems like an important feature to have
It's a bit quiet here. Any updates on this? As a workaround I'm base64-encoding the file, then running a command to decode it and save it to a file. Do not use this method if the content of the file is confidential, since it will appear in logs.
Sample PowerShell script on the host:
# $sourceFilePath, $resourceGroupName, $vmName and $targetFileName are assumed to be set
$fileContent = Get-Content -Raw -Path $sourceFilePath
# ASCII encoding only round-trips plain text; use [System.IO.File]::ReadAllBytes for binary files
$fileInBase64 = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($fileContent))
$parms = @{
    'ResourceGroupName' = $resourceGroupName ;
    'Name' = $vmName ;
    'CommandId' = 'RunShellScript' ;
    'ScriptPath' = "decodeToFile.sh" ;
    'Parameter' = @{ "arg1" = "$($fileInBase64)" ; "arg2" = "$($targetFileName)" }
}
$result = Invoke-AzVMRunCommand @parms
Content of decodeToFile.sh:
#!/usr/bin/env bash
base64str=$1
location=$2
# quote the expansions so values with spaces don't break the tests
[ -z "$base64str" ] && echo "No base64 provided" && exit 1
[ -z "$location" ] && echo "No location provided" && exit 1
echo "$base64str" | base64 -d > "$location"
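The encode/decode round trip behind this workaround can be sanity-checked locally. A minimal sketch (file names and resource names are assumptions), with the az CLI equivalent of the `Invoke-AzVMRunCommand` call shown only as a comment since it needs a live VM:

```shell
#!/usr/bin/env bash
# Local check of the base64 workaround: encode on the "host" side,
# then decode exactly as decodeToFile.sh would on the VM.
set -euo pipefail

echo "hello from the host" > source.txt
fileInBase64=$(base64 source.txt | tr -d '\n')

# az CLI equivalent of the PowerShell invocation above, passing both args:
#   az vm run-command invoke -g rg1 -n vm1 --command-id RunShellScript \
#     --scripts @decodeToFile.sh --parameters "$fileInBase64" "/tmp/target.txt"

# Apply the decode step directly to verify the round trip.
echo "$fileInBase64" | base64 -d > target.txt
diff source.txt target.txt && echo "files match"
```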
After reading @yugangw-msft's answer, I implemented this for my needs (sending a setup shell script to my VMs after creating them with ARM templates, calling it from the templates in a VM extension):
sas_token_expiry=`date -u -d "10 minutes" '+%Y-%m-%dT%H:%MZ'`
az storage account create --name "$sa_name" --resource-group "$rg_name" --location "$location" --sku "Standard_LRS"
# --query/-o tsv extracts the bare connection string rather than the full JSON document
sa_cs=`az storage account show-connection-string -g "$rg_name" -n "$sa_name" --query connectionString -o tsv`
az storage share create --name "$fs_name" --account-name "$sa_name" --connection-string "$sa_cs"
az storage file upload -s "$fs_name" --source "$setup_file" --connection-string "$sa_cs"
sas_token=`az storage account generate-sas --expiry "$sas_token_expiry" --permissions "r" --resource-types "sco" --services "bqtf" --account-name "$sa_name" --ip "$vm_ip" --connection-string "$sa_cs" -o tsv`
setup_file_download_link="https://$sa_name.file.core.windows.net/$fs_name/$(basename $setup_file)?$sas_token"
This is far from perfect, but it can help while waiting for an official feature :)
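The snippet above stops at building the download link; a hedged sketch of the VM-side step follows. All names and the SAS token are placeholders, and the actual network call is commented out because it needs a valid token:

```shell
#!/usr/bin/env bash
# Sketch of fetching the uploaded file on the VM via the SAS link.
set -euo pipefail

sa_name="mystorageacct"          # placeholder storage account
fs_name="myshare"                # placeholder file share
setup_file="./scripts/setup.sh"  # placeholder local script path
sas_token="sv=2022-11-02&ss=bqtf&srt=sco&sp=r&sig=PLACEHOLDER"

# Same URL construction as in the workaround above.
setup_file_download_link="https://$sa_name.file.core.windows.net/$fs_name/$(basename "$setup_file")?$sas_token"
echo "$setup_file_download_link"

# On the VM (e.g. from a custom script extension) one would then run:
#   curl -fsSL "$setup_file_download_link" -o setup.sh && bash setup.sh
```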
Is this copy-a-file-to-my-VM request forgotten? It would be really helpful if there were a command like `Invoke-AzVMRunCommand`. Maybe like: Invoke-AzVMCopyFile -ResourceGroupName 'rgname' -VMName 'vmname' -FileSourcePath 'Path/File' -FileDestinationPath 'Path/' @yugangw-msft Do you know whether this will ever be available?
Invoke-AzVMCopyFile -ResourceGroupName 'rgname' -VMName 'vmname' -FileSourcePath 'Path/File' -FileDestinationPath 'Path/'
@Segaras Hi, in fact, this is a PowerShell command. If you have requirements related to PowerShell, please submit an issue to https://github.com/Azure/azure-powershell/issues
That said, I understand what you want to achieve, which requires the service team to support a REST API to upload a file to the VM from the local machine.
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @Drewm3, @avirishuv, @vaibhav-agar, @amjads1.
| | |
|---|---|
| Author: | mwieczorek |
| Assignees: | qwordy, zhoxing-ms |
| Labels: | `Compute`, `Service Attention`, `customer-reported`, `feature-request` |
| Milestone: | Backlog |
Any movement on this? Seems like a valuable feature.
We need compute service support to copy large file content from client machine to VM w/o public ip.
@fitzgeraldsteele Could you please take a look at this feature request?
Adding @Chase from the Client Tools PM team and @Ankit from the VM PM team, who may have a more informed opinion on this one.
I often have setups with VMs that are isolated from the internet, and this feature would be very valuable.
It would be very useful to have such a feature in az.
% az scp --help
'scp' is misspelled or not recognized by the system.
😞 (2024)
Bump... +100 to add this feature; it is useful not only for private virtual machines in Azure but also for Azure Arc.
Is it possible to add a feature to the Azure CLI to copy files from/to VMs? Something like: