Closed: bhudlemeyer closed this issue 8 years ago
We have created an issue in Pivotal Tracker to manage this:
https://www.pivotaltracker.com/story/show/126874549
The labels on this github issue will be updated when the story is started.
Logs from both VMs, captured during troubleshooting yesterday, are attached.
The manifest being used is attached (vsphereprivate.txt). This is all a test environment, so nothing sensitive is present.
From the api vm:

Every 2.0s: monit summary                    Fri Jul 22 15:47:02 2016

The Monit daemon 5.2.5 uptime: 10h 48m

Process 'cloud_controller_ng'               not monitored
Process 'cloud_controller_worker_local_1'   Does not exist
Process 'cloud_controller_worker_local_2'   Does not exist
Process 'nginx_cc'                          not monitored
Process 'cloud_controller_clock'            running
Process 'cloud_controller_worker_1'         Does not exist
Process 'metron_agent'                      running
Process 'statsd-injector'                   running
Process 'route_registrar'                   running
System 'system_localhost'                   running
From the blobstore vm:

Process 'consul_agent'                      running
Process 'metron_agent'                      running
Process 'blobstore_nginx'                   running
Process 'blobstore_url_signer'              running
Process 'route_registrar'                   running
System 'system_localhost'                   running
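When triaging a VM like this, it helps to filter the `monit summary` output down to just the unhealthy jobs instead of scanning the whole table. A minimal sketch (the summary output is inlined here as sample data for illustration; on the VM you would pipe `monit summary` straight into the grep):

```shell
#!/bin/sh
# Filter a `monit summary` dump for jobs that are not in the "running" state.
# monit_summary is a stand-in for running `monit summary` on the api VM.
monit_summary() {
cat <<'EOF'
Process 'cloud_controller_ng'             not monitored
Process 'cloud_controller_worker_local_1' Does not exist
Process 'cloud_controller_clock'          running
Process 'metron_agent'                    running
EOF
}

# Keep only lines that do NOT end in "running".
monit_summary | grep -v "running$"
```

Anything this prints ("not monitored", "Does not exist") is a job monit either gave up on or never started, which narrows down which logs under /var/vcap/sys/log to read first.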
Hello @bhudlemeyer,
I'm glad you found our Slack channel. As you saw, that's the fastest way to get help from us. Were you able to get your CF deployment up? If so, can you please close this issue?
Thanks, @adowns01, CAPI Team Member
Issue resolved by using the fog blobstore instead of WebDAV. Not sure why WebDAV was not working, but fog seems to have resolved it.
Hi @bhudlemeyer,
We want to make sure you can successfully use a webdav blob store if you want one, so feel free to reopen this issue if you decide to switch back to webdav and experience similar problems.
Thanks, @utako and @thausler786, CF CAPI Team
In case anyone else runs into this, I think I found my issue. It was this change in the job spec that I didn't notice, and it was tripping me up: blobstore.tls.port now defaults to 4443 and must be above 1024. When using the WebDAV blobstore, the Cloud Controller must now be configured with the same port by adding :4443 to cc.buildpacks.webdav_config.private_endpoint, cc.droplets.webdav_config.private_endpoint, cc.packages.webdav_config.private_endpoint, and cc.resource_pool.webdav_config.private_endpoint.
I updated the blobstore itself to 4443, but also needed to go under cc and update all the private endpoints with :4443. The generation script didn't do this automatically for me for some reason.
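Putting the two halves of that fix together, the relevant manifest properties end up looking roughly like this. This is a sketch, not a full manifest; the endpoint hostname `blobstore.service.cf.internal` is an assumption based on the usual cf-release internal blobstore address, so substitute whatever your deployment actually uses:

```yaml
properties:
  blobstore:
    tls:
      port: 4443  # new default; must be above 1024
  cc:
    buildpacks:
      webdav_config:
        # hostname is an assumption; match your deployment's blobstore address
        private_endpoint: https://blobstore.service.cf.internal:4443
    droplets:
      webdav_config:
        private_endpoint: https://blobstore.service.cf.internal:4443
    packages:
      webdav_config:
        private_endpoint: https://blobstore.service.cf.internal:4443
    resource_pool:
      webdav_config:
        private_endpoint: https://blobstore.service.cf.internal:4443
```

The key point is that all four `private_endpoint` values must carry the same `:4443` suffix as `blobstore.tls.port`; updating only the blobstore side leaves the Cloud Controller trying the old port.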
Thanks for submitting an issue to cloud_controller_ng. We are always trying to improve! To help us, please fill out the following template.

Issue
cloud_controller_ng / cloud_controller_workers will not start on api_z1/0 vm.
Context
Fresh deploy of Cloud Foundry v239 on vSphere. I am not finding anything in the logs to point to exactly why, other than that the processes don't seem to be able to talk to the blobstore. The blobstore is online and running, but shows no access attempts in its access.log when cloud_controller_ng is started via monit.
Steps to Reproduce
Deploy Cloud Foundry v239 via bosh deploy.
Expected result
CF up and running and accessible
Current result
On api_z1/0, nginx_cc, cloud_controller_ng, and cloud_controller_worker all will not start.
Possible Fix
[not obligatory, but suggest fixes or reasons for the bug]