moizkhan2712 closed this issue 8 years ago.
Any update? Losing over a day trying to get a deployment working, and now hitting this issue. The earlier problem was the S3 bucket not being in the default US Standard region.
do you see an error?
Hi,
Yep - same error as above.
Using bosh release 250 and bosh-aws-cpi 41.
Command 'deploy' failed: Deploying: Building state for instance 'bosh/0': Compiling job package dependencies for instance 'bosh/0': Compiling job package dependencies: Remotely compiling package 'redis' with the agent: Sending 'compile_package' to the agent: Sending 'get_task' to the agent: Agent responded with error: Action Failed get_task: Task 7cc45ed1-1ce9-428e-7716-5de4a38b0547 result: Compiling package redis: Fetching package redis: Fetching package blob 916f30f0-5f15-4319-69bb-18d9425cc6f3: Getting blob from inner blobstore: Getting blob from inner blobstore: Shelling out to bosh-blobstore-s3 cli: Running command: 'bosh-blobstore-s3 -c /var/vcap/bosh/etc/blobstore-s3.json get 916f30f0-5f15-4319-69bb-18d9425cc6f3 /var/vcap/data/tmp/bosh-blobstore-externalBlobstore-Get957325771', stdout: 'Error: The specified key does not exist. ', stderr: '': exit status 1
Can you share a sanitized cloud_provider section of your manifest?
Here we go:
cloud_provider:
  template: {name: aws_cpi, release: bosh-aws-cpi}

  ssh_tunnel:
    host: 10.10.10.6
    port: 22
    user: vcap
    private_key: (( "./" AWSCredentials.AWS_DEFAULT_KEY_NAME ".pem" ))

  mbus: (( "https://mbus:" bosh_credentials.mbus_password "@10.10.10.6:6868" ))

  properties:
    aws:
      access_key_id: (( AWSCredentials.AWS_ACCESS_KEY_ID ))
      secret_access_key: (( AWSCredentials.AWS_SECRET_ACCESS_KEY ))
      default_key_name: (( AWSCredentials.AWS_DEFAULT_KEY_NAME ))
      default_security_groups: [(( SecurityGroups.BOSH_SECURITY_GROUP_ID ))]
    agent:
      mbus: (( "https://mbus:" bosh_credentials.mbus_password "@0.0.0.0:6868" ))
    blobstore:
      provider: s3
      s3_force_path_style: true
      s3_region: (( AWSCredentials.AWS_DEFAULT_REGION ))
      bucket_name: (( AWSCredentials.AWS_BUCKET_NAME ))
      access_key_id: (( AWSCredentials.AWS_ACCESS_KEY_ID ))
      secret_access_key: (( AWSCredentials.AWS_SECRET_ACCESS_KEY ))
    ntp: *ntp
I see. The blobstore config in the cloud_provider section is not something that should be switched to S3. It's only used for bosh-init bootstrapping and is required to be local. Keep it as the example at bosh.io/docs/init-aws.html shows.
Thanks. It makes sense.
Command 'deploy' failed: Deploying: Building state for instance 'bosh/0': Compiling job package dependencies for instance 'bosh/0': Compiling job package dependencies: Remotely compiling package 'nginx' with the agent: Sending 'compile_package' to the agent: Sending 'get_task' to the agent: Agent responded with error: Action Failed get_task: Task ea616367-8063-49ce-6775-9eb0b5e86b7a result: Compiling package nginx: Fetching package nginx: Fetching package blob 50a3aead-defb-46a4-6b66-58d21c87f656: Getting blob from inner blobstore: Getting blob from inner blobstore: Shelling out to bosh-blobstore-s3 cli: Running command: 'bosh-blobstore-s3 -c /var/vcap/bosh/etc/blobstore-s3.json get 50a3aead-defb-46a4-6b66-58d21c87f656 /var/vcap/data/tmp/bosh-blobstore-externalBlobstore-Get427473410', stdout: 'Error: The specified key does not exist.
Please help me.
Ensure that in your manifest you keep the blobstore definition in the cloud_provider's properties sub-section as:
cloud_provider:
  ...
  blobstore: {provider: local, path: /var/vcap/micro_bosh/data/cache}
  ...
bosh-init needs the blobstore to be local for the initial bootstrap; the S3 blobstore only applies to the deployed instance.
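For contrast, a minimal sketch of where the S3 blobstore does belong: under the deployed job's properties rather than under cloud_provider. The region and bucket values below are placeholders, not values from this thread.

jobs:
- name: bosh
  properties:
    blobstore:
      provider: s3
      s3_force_path_style: true
      s3_region: eu-west-1              # placeholder region
      bucket_name: MY_BLOBSTORE_BUCKET  # placeholder bucket name
      access_key_id: AWS_ACCESS_KEY_ID
      secret_access_key: AWS_SECRET_ACCESS_KEY

cloud_provider:
  properties:
    # must stay local for the bosh-init bootstrap
    blobstore: {provider: local, path: /var/vcap/micro_bosh/data/cache}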
@muconsulting Could you provide your bosh.yml manifest file to me?
Here we go - AWS / VPC
---
name: bosh

releases:
- name: bosh
  url: https://bosh.io/d/github.com/cloudfoundry/bosh?v=250
  sha1: 11b318d4ec9f0baf75d8afc6f78cf66f955d459f
- name: bosh-aws-cpi
  url: https://bosh.io/d/github.com/cloudfoundry-incubator/bosh-aws-cpi-release?v=41
  sha1: 124e3596293fa70f01ffff742ea5274769cc5efc

resource_pools:
- name: vms
  network: private
  stemcell:
    url: https://bosh.io/d/stemcells/bosh-aws-xen-hvm-ubuntu-trusty-go_agent?v=3012
    sha1: 3380b55948abe4c437dee97f67d2d8df4eec3fc1
  cloud_properties:
    instance_type: m3.medium
    ephemeral_disk: {size: 25_000, type: gp2}
    availability_zone: AWS_ZONE

disk_pools:
- name: disks
  disk_size: 20_000
  cloud_properties: {type: gp2}

networks:
- name: private
  type: manual
  subnets:
  - range: 10.0.0.0/24
    gateway: 10.0.0.1
    dns: [10.0.0.2]
    cloud_properties: {subnet: (( Resources.BOSHSubnet )) }
- name: public
  type: vip

jobs:
- name: bosh
  instances: 1

  templates:
  - {name: nats, release: bosh}
  - {name: redis, release: bosh}
  - {name: postgres, release: bosh}
  - {name: director, release: bosh}
  - {name: health_monitor, release: bosh}
  - {name: registry, release: bosh}
  - {name: aws_cpi, release: bosh-aws-cpi}

  resource_pool: vms
  persistent_disk_pool: disks

  networks:
  - name: private
    static_ips: [10.0.0.6]
    default: [dns, gateway]

  properties:
    nats:
      address: 127.0.0.1
      user: nats
      password: nats_password

    redis:
      listen_address: 127.0.0.1
      address: 127.0.0.1
      password: redis_password

    postgres: &db
      listen_address: 127.0.0.1
      host: 127.0.0.1
      user: postgres
      password: postgres-password
      database: bosh
      adapter: postgres

    registry:
      address: 10.0.0.6
      host: 10.0.0.6
      db: *db
      http:
        user: admin
        password: registry_password
        port: 25777
      username: admin
      password: registry_password
      port: 25777

    blobstore: &blobstore
      provider: s3
      s3_force_path_style: true
      #s3_region: AWS_DEFAULT_REGION
      bucket_name: AWS_BUCKET_NAME
      access_key_id: AWS_ACCESS_KEY_ID
      secret_access_key: AWS_SECRET_ACCESS_KEY

    director:
      address: 127.0.0.1
      name: bosh
      db: *db
      cpi_job: aws_cpi
      max_threads: 10
      user_management:
        provider: local
        local:
          users:
          - {name: admin, password: director_password }
          - {name: hm, password: hm_password }

    hm:
      director_account:
        user: admin
        password: director_password
      resurrector_enabled: true

    aws: &aws
      access_key_id: AWS_ACCESS_KEY_ID
      secret_access_key: AWS_SECRET_ACCESS_KEY
      default_key_name: AWS_DEFAULT_KEY_NAME
      default_security_groups: [ BOSH_SECURITY_GROUP_ID ]
      region: AWS_DEFAULT_REGION

    agent:
      mbus: (( "nats://nats:" nats_password "@10.0.0.6:4222" ))

    ntp: &ntp [0.pool.ntp.org, 1.pool.ntp.org]

cloud_provider:
  template: {name: aws_cpi, release: bosh-aws-cpi}

  ssh_tunnel:
    host: 10.10.10.6
    port: 22
    user: vcap
    private_key: (( "./" AWS_DEFAULT_KEY_NAME ".pem" ))

  mbus: (( "https://mbus:" mbus_password "@10.0.0.6:6868" ))

  properties:
    aws: *aws
    agent:
      mbus: (( "https://mbus:" mbus_password "@0.0.0.0:6868" ))
    blobstore: {provider: local, path: /var/vcap/micro_bosh/data/cache}
    ntp: *ntp
@muconsulting Thanks for your help.
I succeeded in deploying BOSH on AWS, but I cannot connect to the director after restarting the AWS EC2 instance (bosh/0). The /var/vcap/data directory is reset after the instance restarts. Maybe this disk is ephemeral. How do I keep that directory from being reinitialized?
Been a while since I reported back on this ticket :) . Works fine with the sample manifest from @muconsulting, except that the s3_region parameter was required for me. If I didn't specify it, the microBOSH threw an error when trying to upload a release.
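In other words, the blobstore block from the manifest above with the s3_region line uncommented (a sketch; the values are still placeholders to substitute):

blobstore: &blobstore
  provider: s3
  s3_force_path_style: true
  s3_region: AWS_DEFAULT_REGION       # required here, e.g. eu-west-1
  bucket_name: AWS_BUCKET_NAME
  access_key_id: AWS_ACCESS_KEY_ID
  secret_access_key: AWS_SECRET_ACCESS_KEY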
I'm not able to deploy microBOSH on AWS using bosh-init with an S3 bucket as an external blobstore. I face the following error when trying to do so:
I don't know why it's trying to fetch blobs from the S3 bucket even though it's empty. I checked the AWS credentials, and uploading/downloading files to and from the bucket works fine.
Attached is the manifest I'm using. I just followed the steps at https://bosh.io/docs/director-configure-blobstore.html. Also, I'm using bosh-init v0.0.81. Is there anything I'm missing?
aws-dev-microbosh-blobstore.txt