Closed: @dhs-rec closed this issue 8 years ago.
Forgot to mention: Originating system is Ubuntu 14.04 with salt-common and salt-cloud version 2014.7.1+ds-3trusty1.
Hi @dhs-rec. I think the first thing I would want to check is whether winexe
is working properly. What happens when you run one of the above winexe
commands manually?
To verify this, I created an instance using salt-cloud, then connected to it and found no minion (should have bootstrapped 2014.7.2). Manually installing 2014.7.2 using winexe afterwards also didn't work. However, it worked with 2014.7.1:
# winexe -U Administrator%secret_passwd //10.11.12.13 "c:\\salttemp\\Salt-Minion-2014.7.1-AMD64-Setup.exe /S /master=ip-10-11-12-13.region.compute.internal /minion-name=salt-cloud-test"
# winexe -U Administrator%secret_passwd //10.11.12.13 "sc query salt-minion"
SERVICE_NAME: salt-minion
TYPE : 10 WIN32_OWN_PROCESS
STATE : 1 STOPPED
WIN32_EXIT_CODE : 1077 (0x435)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
# winexe -U Administrator%secret_passwd //10.11.12.13 "sc query salt-minion"
SERVICE_NAME: salt-minion
TYPE : 10 WIN32_OWN_PROCESS
STATE : 1 STOPPED
WIN32_EXIT_CODE : 1077 (0x435)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
# winexe -U Administrator%secret_passwd //10.11.12.13 "sc start salt-minion"
SERVICE_NAME: salt-minion
TYPE : 10 WIN32_OWN_PROCESS
STATE : 2 START_PENDING
(NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x7d0
PID : 828
FLAGS :
# winexe -U Administrator%secret_passwd //10.11.12.13 "sc query salt-minion"
SERVICE_NAME: salt-minion
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, PAUSABLE, ACCEPTS_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
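The repeated `sc query` calls above can be checked programmatically instead of by eye; a minimal sketch (with a hypothetical `service_state` helper, not part of Salt) that pulls the STATE field out of `sc query` output like that shown above:

```python
# Hypothetical helper: extract the service state ("STOPPED", "RUNNING",
# "START_PENDING", ...) from the output of `sc query <service>`.
def service_state(sc_query_output):
    for line in sc_query_output.splitlines():
        line = line.strip()
        if line.startswith("STATE"):
            # e.g. "STATE : 4 RUNNING" -> "RUNNING"
            return line.split()[-1]
    return None

sample = """SERVICE_NAME: salt-minion
TYPE : 10 WIN32_OWN_PROCESS
STATE : 1 STOPPED
WIN32_EXIT_CODE : 1077 (0x435)"""
print(service_state(sample))  # -> STOPPED
```

Feeding this the winexe output would make it obvious that the service never left STOPPED after the 2014.7.2 install attempt.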
@dhs-rec, thanks for the report.
Feel free to approach me for testing possible fixes.
Any news on this?
@dhs-rec, 2014.7.4 and later have a new Windows installer that fixes many issues. I'm not sure about the bootstrapping component; @UtahDave, @twangboy, or @techhat may have more comments on that.
Wanted to test with the new installer, but we've already switched AWS region to eu-central-1 (Frankfurt), so that I promptly ran into #16921.
Now that 2015.5.0 is released I gave it another try with this version, but this time it runs into a timeout:
# salt-cloud -p windows-server-2008 salt-cloud-test
[INFO ] salt-cloud starting
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Creating Cloud VM salt-cloud-test in eu-central-1
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Created node salt-cloud-test
[INFO ] Salt node data. Public_ip: 10.11.12.13
[ERROR ] Port connection timed out: 900
Error: There was a profile error: Failed to connect to remote windows host
A quick check in the EC2 Management Console shows that the instance is up and running.
Is there any news on this issue?
@dhs-rec I just spun up a Windows 2012R2 64 box on EC2 and had it install Salt-Minion 2015.5.3 using salt-cloud. I followed the instructions here: http://docs.saltstack.com/en/latest/topics/cloud/windows.html
I had to put a powershell script on my master to open port 445 in the windows firewall. I also had to make sure the machine belonged to a security group that opened 445 on amazon's firewall. Then I created a profile that included the powershell script.
The minion failed to start. This ended up being a regression in 2015.5.3 where the `master_type` setting in the minion config being set to `str` caused the minion to fail to start. When I deleted the setting the minion started fine. A ticket has been created for this issue (#25335).
The workaround was to add a few lines to the profile definition to set `master_type` to `standard`:
minion:
  master_type: standard
My final profile definition looked something like this:
aws_win_2012r2_64:
  image: ami-5b9e6b30
  provider: aws
  size: t2.medium
  subnetid: subnet-097b6a21
  userdata_file: /etc/salt/windows-firewall.ps1
  win_installer: /srv/salt/win/salt_64/Salt-Minion-2015.5.3-AMD64-Setup.exe
  minion:
    master_type: standard
  win_username: Administrator
  win_password: auto
  keyname: slee
  private_key: /root/.ssh/slee_aws.pem
  securitygroupid:
    - sg-cae73fae
Let me know if this helps. We're not updating the 2014.7 and earlier branches so if there is a problem we'll have to fix it on later branches.
No Ubuntu packages available yet for 2015.5.3, so I can't verify. 2015.5.2 still wants to use bootstrap.sh to install the minion.
@dhs-rec A few questions:
- Do you have a profile definition like the one above for your windows instances?
- Did you set the `win_installer` parameter to the version you want to install?
- Did you use the Powershell script to open port 445 in the windows firewall? I used the `userdata_file` parameter to run it.
- Does your instance belong to a security group that opens port 445 on amazon's firewall?
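For the port-445 items in that checklist, a quick way to verify reachability from the machine running salt-cloud is a plain TCP connect; a sketch (the address below is a documentation placeholder, substitute your instance's IP):

```python
# Check whether the SMB port (445) on the target instance accepts a TCP
# connection, before blaming the deploy step itself.
import socket

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts and refused connections
        return False

print(port_open("192.0.2.1", 445, timeout=2))  # placeholder address
```

If this prints False against the real instance, the problem is the Windows or EC2 firewall, not winexe or the installer.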
Yes, that's all done. The only difference is that I want to set up Win 2008 R2.
@dhs-rec Testing this same configuration on a 2008R2 64 image (ami-5fe81d34).
It's working just fine for me.
salt-cloud -p aws_win_2008r2_64 slee_minion_install
/root/salt/salt/config.py:2094: DeprecationWarning: The term 'provider' is being deprecated in favor of 'driver'. Support for 'provider' will be removed in Salt Nitrogen. Please convert your cloud provider configuration files to use 'driver'.
[INFO ] salt-cloud starting
[INFO ] Creating Cloud VM slee_minion_install in us-east-1
[INFO ] Created node slee_minion_install
[INFO ] Salt node data. Public_ip: 52.3.45.123
[INFO ] Running command under pid 31430: 'winexe -U \'Administrator%NB$*Vr?mWo\' //52.3.45.123 "hostname"'
[INFO ] Rendering deploy script: /root/salt/salt/cloud/deploy/bootstrap-salt.sh
[INFO ] Running command under pid 31434: 'winexe -U \'Administrator%NB$*Vr?mWo\' //52.3.45.123 "sc query winexesvc"'
SERVICE_NAME: winexesvc
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
[INFO ] Running command under pid 31436: 'winexe -U \'Administrator%NB$*Vr?mWo\' //52.3.45.123 "c:\\salttemp\\Salt-Minion-2015.5.3-AMD64-Setup.exe /S /master=104.236.251.190 /minion-name=slee_minion_install"'
[INFO ] Running command under pid 31474: 'winexe -U \'Administrator%NB$*Vr?mWo\' //52.3.45.123 "sc stop salt-minion"'
SERVICE_NAME: salt-minion
TYPE : 10 WIN32_OWN_PROCESS
STATE : 2 START_PENDING
(STOPPABLE, PAUSABLE, ACCEPTS_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x7d0
[INFO ] Running command under pid 31476: 'winexe -U \'Administrator%NB$*Vr?mWo\' //52.3.45.123 "sc start salt-minion"'
SERVICE_NAME: salt-minion
TYPE : 10 WIN32_OWN_PROCESS
STATE : 2 START_PENDING
(NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x7d0
PID : 2416
FLAGS :
[INFO ] Salt installed on slee_minion_install
[INFO ] Created Cloud VM 'slee_minion_install'
And here's a `test.ping` to make sure:
salt slee_minion_install test.ping
slee_minion_install:
True
@dhs-rec It's got to be something that's not configured correctly on your end. Can you post your profile definition?
# /etc/salt/cloud.providers.d/my-ec2-eucentral1-public-ips.conf
my-ec2-eucentral1-public-ips:
  minion:
    master: ip-11-12-13-14.eu-central-1.compute.internal
    keysize: 2048
    master_type: standard
    grains:
      node_type: broker
      release: 1.0.1
  ssh_interface: public_ips
  id: ABCDE...
  key: 'TheSecretKey'
  private_key: /root/SaltCloudTest.pem
  keyname: SaltCloudTest
  securitygroup: A_SECURITY_GROUP
  location: eu-central-1
  availability_zone: eu-central-1b
  win_installer: /root/Salt-Minion-2015.5.3-AMD64-Setup.exe
  win_username: Administrator
  win_password: 'SuperSecret'
  keysize: 2048
  block_device_mappings:
    - DeviceName: /dev/sda1
      Ebs.VolumeType: gp2
    - DeviceName: xvdca
      Ebs.VolumeType: gp2
  del_all_vols_on_destroy: True
  provider: ec2
# /etc/salt/cloud.profiles.d/windows-server-2008.conf
windows-server-2008:
  provider: my-ec2-eucentral1-public-ips
  image: ami-36e4da2b
  size: m3.medium
  userdata_file: /etc/salt/windows-firewall.ps1
This is the output I get from 2015.5.2:
# salt-cloud -p windows-server-2008 salt-cloud-test
[INFO ] salt-cloud starting
[WARNING ] /usr/lib/python2.7/dist-packages/salt/cloud/clouds/digital_ocean.py:86: DeprecationWarning: The digital_ocean driver is deprecated and will be removed in Salt Beryllium. Please convert your digital ocean provider configs to use the digital_ocean_v2 driver.
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Creating Cloud VM salt-cloud-test in eu-central-1
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Created node salt-cloud-test
[INFO ] Salt node data. Public_ip: 52.53.54.55
[INFO ] Running command under pid 19799: 'winexe -U \'Administrator%SuperSecret\' //52.53.54.55 "hostname"'
ip-AC1F02C7
[INFO ] Rendering deploy script: /usr/lib/python2.7/dist-packages/salt/cloud/deploy/bootstrap-salt.sh
[INFO ] Running command under pid 19803: 'winexe -U \'Administrator%SuperSecret\' //52.53.54.55 "sc query winexesvc"'
SERVICE_NAME: winexesvc
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
[ERROR ] There was a profile error: global name 'SessionError' is not defined
SERVICE_EXIT#
@dhs-rec I just copied your config and spun up a VM on eu-central-1. I had to modify a few things to get it to work for me since I'm missing some items you have access to. Everything worked for me. I've included my full files below (removing sensitive data, of course):
Here's my profile:
# /etc/salt/cloud.profiles.d/test.conf
windows-server-2008:
  provider: my-ec2-eucentral1-public-ips
  # image: ami-36e4da2b
  image: ami-6278407f
  size: m3.medium
  userdata_file: /etc/salt/windows-firewall.ps1
I had to use an amazon ami as I didn't have access to the one you're using in yours.
Here's my provider file:
# /etc/salt/cloud.providers.d/test.conf
my-ec2-eucentral1-public-ips:
  minion:
    master: <ip to my master on an external provider>
    keysize: 2048
    master_type: standard
    grains:
      node_type: broker
      release: 1.0.1
  ssh_interface: public_ips
  id: ABCD.......
  key: abunchofrandomness.......
  private_key: /root/.ssh/slee_aws.pem
  keyname: slee
  securitygroup: windows
  location: eu-central-1
  availability_zone: eu-central-1b
  win_installer: /srv/salt/win/salt_64/Salt-Minion-2015.5.3-AMD64-Setup.exe
  win_username: Administrator
  win_password: auto
  keysize: 2048
  # block_device_mappings:
  #   - DeviceName: /dev/sda1
  #     Ebs.VolumeType: gp2
  #   - DeviceName: xvdca
  #     Ebs.VolumeType: gp2
  del_all_vols_on_destroy: True
  rename_on_destroy: True
  provider: ec2
I created a security group named `windows` on EC2 that opens port 445.
Here's the output from the console:
salt-cloud -p windows-server-2008 salt-cloud-test
[INFO ] salt-cloud starting
[INFO ] Creating Cloud VM salt-cloud-test in eu-central-1
[INFO ] Created node salt-cloud-test
[INFO ] Salt node data. Public_ip: 52.28.186.110
[INFO ] Running command under pid 32023: 'winexe -U \'Administrator%JmqSYrS$(F\' //52.28.186.110 "hostname"'
WIN-8REM2J07KFL
[INFO ] Rendering deploy script: /root/salt/salt/cloud/deploy/bootstrap-salt.sh
[INFO ] Running command under pid 32027: 'winexe -U \'Administrator%JmqSYrS$(F\' //52.28.186.110 "sc query winexesvc"'
SERVICE_NAME: winexesvc
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
[INFO ] Running command under pid 32033: 'winexe -U \'Administrator%JmqSYrS$(F\' //52.28.186.110 "c:\\salttemp\\Salt-Minion-2015.5.3-AMD64-Setup.exe /S /master=104.236.251.190 /minion-name=salt-cloud-test"'
[INFO ] Running command under pid 32064: 'winexe -U \'Administrator%JmqSYrS$(F\' //52.28.186.110 "sc stop salt-minion"'
SERVICE_NAME: salt-minion
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, PAUSABLE, ACCEPTS_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
[INFO ] Running command under pid 32068: 'winexe -U \'Administrator%JmqSYrS$(F\' //52.28.186.110 "sc start salt-minion"'
SERVICE_NAME: salt-minion
TYPE : 10 WIN32_OWN_PROCESS
STATE : 2 START_PENDING
(NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x7d0
PID : 2568
FLAGS :
[INFO ] Salt installed on salt-cloud-test
[INFO ] Created Cloud VM 'salt-cloud-test'
salt-cloud-test:
----------
amiLaunchIndex:
0
architecture:
x86_64
blockDeviceMapping:
----------
item:
----------
deviceName:
/dev/sda1
ebs:
----------
attachTime:
2015-07-21T16:06:28.000Z
deleteOnTermination:
true
status:
attached
volumeId:
vol-5dafd0bf
clientToken:
None
deployed:
True
dnsName:
ec2-52-28-186-110.eu-central-1.compute.amazonaws.com
ebsOptimized:
false
groupSet:
----------
item:
----------
groupId:
sg-6aac1003
groupName:
windows
hypervisor:
xen
imageId:
ami-6278407f
instanceId:
i-9a73105b
instanceState:
----------
code:
16
name:
running
instanceType:
m3.medium
ipAddress:
52.28.186.110
keyName:
slee
launchTime:
2015-07-21T16:06:24.000Z
monitoring:
----------
state:
disabled
networkInterfaceSet:
----------
item:
----------
association:
----------
ipOwnerId:
amazon
publicDnsName:
ec2-52-28-186-110.eu-central-1.compute.amazonaws.com
publicIp:
52.28.186.110
attachment:
----------
attachTime:
2015-07-21T16:06:24.000Z
attachmentId:
eni-attach-21ed1005
deleteOnTermination:
true
deviceIndex:
0
status:
attached
description:
None
groupSet:
----------
item:
----------
groupId:
sg-6aac1003
groupName:
windows
macAddress:
06:73:06:d3:a5:31
networkInterfaceId:
eni-0ef1b375
ownerId:
771387433148
privateDnsName:
ip-172-31-1-157.eu-central-1.compute.internal
privateIpAddress:
172.31.1.157
privateIpAddressesSet:
----------
item:
----------
association:
----------
ipOwnerId:
amazon
publicDnsName:
ec2-52-28-186-110.eu-central-1.compute.amazonaws.com
publicIp:
52.28.186.110
primary:
true
privateDnsName:
ip-172-31-1-157.eu-central-1.compute.internal
privateIpAddress:
172.31.1.157
sourceDestCheck:
true
status:
in-use
subnetId:
subnet-868d8cfe
vpcId:
vpc-05ae486c
placement:
----------
availabilityZone:
eu-central-1b
groupName:
None
tenancy:
default
platform:
windows
privateDnsName:
ip-172-31-1-157.eu-central-1.compute.internal
privateIpAddress:
172.31.1.157
productCodes:
None
reason:
None
rootDeviceName:
/dev/sda1
rootDeviceType:
ebs
sourceDestCheck:
true
subnetId:
subnet-868d8cfe
virtualizationType:
hvm
vpcId:
vpc-05ae486c
Here are the differences I see between my setup and yours:
- your `block_device_mappings` settings
- the `auto` setting for `win_password` on my side
- I also noticed it rendered the `salt-bootstrap.sh` file. It shouldn't do this.
Again, it worked for me; hopefully something here can give you a clue as to why yours isn't working.
@dhs-rec While going over your console output with a coworker, @UtahDave noticed the final line:
[ERROR ] There was a profile error: global name 'SessionError' is not defined
SERVICE_EXIT#
It's throwing that error when it tries to load impacket. It looks like you're missing some requirements on your master; this error would indicate impacket specifically. Please make sure you have all the requirements as stated in the requirements section of the instructions here: http://docs.saltstack.com/en/latest/topics/cloud/windows.html#requirements
@twangboy Thanks a lot for your efforts.
Yes, the salt master is in AWS. However, I run salt-cloud from a machine in our own network (mainly because it hangs forever after creating the instance when run from the AWS based master).
Regarding the security group: the group used allows complete inbound and outbound traffic to the instance from within our network, including port 445. Regarding bootstrap-salt.sh: I see that in your output, too. Regarding impacket: python-impacket is installed on the master as well as on the machine I run salt-cloud from.
@UtahDave ^^^^
@dhs-rec Maybe it's the image you're using. Can you try spinning one up with the same ami I used? ami-6278407f
Doesn't work, "Failed to authenticate against remote windows host".
OK, after setting "win_password: auto", it runs until
[INFO ] Rendering deploy script: /usr/lib/python2.7/dist-packages/salt/cloud/deploy/bootstrap-salt.sh
where it just hangs again.
@dhs-rec Could you try installing salt-cloud and all prerequisites on your master in AWS? Then spin up your new vm from the master. The fact that you're running salt-cloud from a machine other than the master might be the problem. That's a scenario we haven't tested and don't officially support. Not to say we won't in the future.
@twangboy: The reason I ran it from another machine was that it hangs even earlier in the process when run from the AWS located salt master. The same is true with your base AMI:
# salt-cloud -p windows-server-2008 salt-cloud-test
[INFO ] salt-cloud starting
[WARNING ] /usr/lib/python2.7/dist-packages/salt/cloud/clouds/digital_ocean.py:86: DeprecationWarning: The digital_ocean driver is deprecated and will be removed in Salt Beryllium. Please convert your digital ocean provider configs to use the digital_ocean_v2 driver.
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Creating Cloud VM salt-cloud-test in eu-central-1
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Created node salt-cloud-test
[INFO ] Salt node data. Public_ip: 52.28.101.238
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
[ERROR ] Port connection timed out: 900
Error: There was a profile error: Failed to connect to remote windows host
@dhs-rec It says there's something wrong with the profile. Would you mind posting it, removing sensitive data of course?
@dhs-rec It also mentions a problem with the port. Have you checked that port 445 is open and that it's set to run the powershell script?
See above, we've been there already. It's still the same profile and yes, it's configured to run the script and the port is open. But how can it run the script when it won't connect to the instance at all?
Look at https://github.com/saltstack/salt/issues/21256#issuecomment-123204422. There you can see that the connection via winexe works (from the non-master machine outside AWS). At least it can get at the hostname.
@dhs-rec I was referring to the `Port connection timed out: 900` when you ran it from your salt master.
Also, I'm assuming then that the profile on your master is the same as the one on your non master. If I get some time I'll try setting up a salt master on amazon cloud and spin up a minion using the non-public network in amazon.
Hi, I have the same issue on Rackspace using OpenStack; see below:
[DEBUG ]
SERVICE_NAME: winexesvc
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
SERVICE_NAME: winexesvc
TYPE : 10 WIN32_OWN_PROCESS
STATE : 4 RUNNING
(STOPPABLE, PAUSABLE, IGNORES_SHUTDOWN)
WIN32_EXIT_CODE : 0 (0x0)
SERVICE_EXIT_CODE : 0 (0x0)
CHECKPOINT : 0x0
WAIT_HINT : 0x0
[DEBUG ] winexe connected...
[DEBUG ] SMB port 445 on x.x.x.x is available
[DEBUG ] Logging into x.x.x.x:445 as bootstrap
[ERROR ] There was a query error: global name 'SessionError' is not defined
It also tries to use bootstrap-salt.sh when deploying the machine.
@sdebot Do you have impacket installed on your master? Is your master also on Rackspace, or are you running salt-cloud from another machine, not the master?
Hi, thanks for your response. With regards to your questions:
1) Yes, I have impacket installed:
dpkg -l | grep impack
ii  python-impacket  0.9.10-1  all  Python module to easily build and dissect network protocols
2) Yes, my salt-master is also located at Rackspace.
3) I am running salt-cloud from the salt-master.
Hi, I have the same problem. Have you found a solution?
The solution is: change `from impacket.smbconnection import SessionError` to `from impacket.smb3 import SessionError` in /usr/lib/python2.7/dist-packages/salt/utils/smb.py.
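A more defensive variant of that one-line change (my own sketch, not the patch that was actually submitted) imports from whichever module path the installed impacket release provides:

```python
# Try both module paths mentioned in this thread; the home of
# SessionError differs between impacket releases.
try:
    from impacket.smbconnection import SessionError
except ImportError:
    try:
        from impacket.smb3 import SessionError
    except ImportError:
        SessionError = None  # impacket isn't installed at all
```

This avoids hard-pinning the code to one impacket layout, at the cost of masking a genuinely missing dependency unless the `None` case is checked later.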
@kaiserconstantin Thanks for the fix!
@dhs-rec @sdebot Once this is merged would you mind verifying that this fixes the problem when you get a chance?
I have tested it. It works for me.
Hi, works for me too, thanks for the fix!
Thanks for the confirmation.
Yes, works for me, too, but only from a machine outside AWS.
It still hangs after printing the public IP address of the new instance when run from a Salt master in AWS (same region), although the minion seems to have been bootstrapped somehow (I see its new key accepted on the master, but can't `test.ping` it).
Can you all test this PR? https://github.com/saltstack/salt/pull/26853
This should allow the above fix while still avoiding a regression it caused.
@dhs-rec @sdebot @kaiserconstantin My previous fix caused a regression. Please test the new fix submitted by UtahDave.
@dhs-rec I have been able to replicate something similar to your issue by putting my minion on a different security group and subnet. You need to make sure that in your profile/provider files the subnet and security group are the same ones that your master is using. Otherwise, it will never find the minion.
@twangboy Other Salt actions run fine, it's only salt-cloud. And, in the working case, I only run salt-cloud from outside AWS. The master to which the new instance's minion connects is the one IN AWS. It only doesn't work if I run salt-cloud from that very same master.
The reworked fix only adds more lines of the form
[INFO ] Starting new HTTPS connection (1): ec2.eu-central-1.amazonaws.com
after
[INFO ] Salt node data. Public_ip: 11.12.13.14
which wasn't the case before. But that's all.
@dhs-rec Try changing `ssh_interface: public_ips` to `ssh_interface: private_ips` in your provider file.
Sorry for the delay. With the patch applied and the settings changed to private_ips, it now comes as far as
...
[INFO ] Rendering deploy script: /usr/lib/python2.7/dist-packages/salt/cloud/deploy/bootstrap-salt.sh
[INFO ] Running command under pid 5024: 'winexe -U \'Administrator%secret_password\' //10.11.12.13 "sc query winexesvc"'
[ERROR ] There was a profile error: global name 'smbSessionError' is not defined
regardless whether I run it from inside or outside AWS.
Hint: python-impacket in Ubuntu 14.04 is 0.9.10.
Hi all, I'm having the same problem. I've actually been fighting trying to bootstrap various Windows Server instances on EC2 via Salt for the last day or two now.
I am currently at the exact same spot as dhs-rec is. I tried to use Salt-Cloud to provision a Windows Server 2008 (with SQL) AMI from Amazon. I tried private_ips and public_ips. I got the same "smbSessionError".
I'll attach some more details in the morning.
@elxwill Is your master also on EC2? Are you running salt-cloud from the same master the new machines will connect to?
@techhat Is there anything we can do about this random smbSessionError we keep seeing?
Hi,
While bootstrapping a Windows Server 2008 R2 AWS instance via salt-cloud, the Salt-Minion installer (2014.7.1) doesn't create the salt-minion service. OTOH, salt-cloud doesn't notice it's not there:
When running the installer manually, the service IS created, though.
Bye...
Dirk