estevaobk / 3scaledump

Unofficial tool for dumping a Red Hat 3scale On-premises project

Fetch the 'oc status' #29

Closed · estevaobk closed this issue 5 years ago

estevaobk commented 5 years ago

This command could be helpful to further troubleshoot issues:

# oc status --help
Show a high level overview of the current project 

This command will show services, deployment configs, build configurations, and active deployments. If you have any
misconfigured components information about them will be shown. For more information about individual items, use the
describe command (e.g. oc describe buildConfig, oc describe deploymentConfig, oc describe service). 

You can specify an output format of "-o dot" to have this command output the generated status graph in DOT format that
is suitable for use by the "dot" command.

Usage:
  oc status [-o dot | --suggest ] [flags]

Examples:
  # See an overview of the current project.
  oc status

  # Export the overview of the current project in an svg file.
  oc status -o dot | dot -T svg -o project.svg

  # See an overview of the current project including details for any identified issues.
  oc status --suggest

Options:
      --all-namespaces=false: If true, display status for all namespaces (must have cluster admin)
  -o, --output='': Output format. One of: dot.
      --suggest=false: See details for resolving issues.

Use "oc options" for a list of global command-line options (applies to all commands).

Sample output:

# oc status --suggest
In project 3scale-26 on server https://master.ocp3-11-26.cluster:8443

https://api-3scale-apicast-production.3scale-26.apps.ocp3-11-26.cluster (redirects) to pod port gateway (svc/apicast-production)
https://oidc-3scale-apicast-production.3scale-26.apps.ocp3-11-26.cluster (redirects) to pod port gateway
https://api-using-port-8443-3scale-apicast-production.3scale-26.apps.ocp3-11-26.cluster (redirects) to pod port gateway
  dc/apicast-production deploys istag/amp-apicast:latest 
    deployment #2 deployed 13 days ago - 1 pod
    deployment #1 deployed 3 weeks ago

https://api-3scale-apicast-staging.3scale-26.apps.ocp3-11-26.cluster (redirects) to pod port gateway (svc/apicast-staging)
https://oidc-3scale-apicast-staging.3scale-26.apps.ocp3-11-26.cluster (redirects) to pod port gateway
https://api-using-port-8443-3scale-apicast-staging.3scale-26.apps.ocp3-11-26.cluster (redirects) to pod port gateway
  dc/apicast-staging deploys istag/amp-apicast:latest 
    deployment #3 deployed 4 days ago - 1 pod
    deployment #2 deployed 13 days ago
    deployment #1 deployed 3 weeks ago

https://backend-3scale.3scale-26.apps.ocp3-11-26.cluster (and http) to pod port http (svc/backend-listener)
  dc/backend-listener deploys istag/amp-backend:latest 
    deployment #1 deployed 3 weeks ago - 1 pod

svc/backend-redis - 172.30.146.199:6379
  dc/backend-redis deploys istag/backend-redis:latest 
    deployment #3 deployed 3 weeks ago - 1 pod
    deployment #2 deployed 3 weeks ago
    deployment #1 failed 3 weeks ago: newer deployment was found running

https://master.3scale-26.apps.ocp3-11-26.cluster (redirects) to pod port http (svc/system-master)
https://3scale-admin.3scale-26.apps.ocp3-11-26.cluster (redirects) to pod port http (svc/system-provider)
https://3scale.3scale-26.apps.ocp3-11-26.cluster (redirects) to pod port http (svc/system-developer)
  dc/system-app deploys istag/amp-system:latest 
    deployment #4 deployed 11 days ago - 1 pod
    deployment #3 deployed 11 days ago
    deployment #2 deployed 3 weeks ago

svc/system-memcache - 172.30.199.113:11211
  dc/system-memcache deploys istag/system-memcached:latest 
    deployment #1 deployed 3 weeks ago - 1 pod

svc/system-mysql - 172.30.224.83:3306
  dc/system-mysql deploys istag/system-mysql:latest 
    deployment #4 deployed 3 weeks ago - 0 pods
    deployment #3 deployed 3 weeks ago
    deployment #2 deployed 3 weeks ago

svc/system-redis - 172.30.43.97:6379
  dc/system-redis deploys istag/system-redis:latest 
    deployment #3 deployed 3 weeks ago - 1 pod
    deployment #2 deployed 3 weeks ago
    deployment #1 failed 3 weeks ago: newer deployment was found running

svc/system-sphinx - 172.30.249.37:9306
  dc/system-sphinx deploys istag/amp-system:latest 
    deployment #1 deployed 3 weeks ago - 1 pod

svc/zync - 172.30.240.127:8080
  dc/zync deploys istag/amp-zync:latest 
    deployment #1 deployed 3 weeks ago - 1 pod

svc/zync-database - 172.30.208.182:5432
  dc/zync-database deploys istag/zync-database-postgresql:latest 
    deployment #1 deployed 3 weeks ago - 1 pod

dc/backend-cron deploys istag/amp-backend:latest 
  deployment #1 deployed 3 weeks ago - 1 pod

dc/backend-worker deploys istag/amp-backend:latest 
  deployment #1 deployed 3 weeks ago - 1 pod

dc/system-sidekiq deploys istag/amp-system:latest 
  deployment #1 deployed 3 weeks ago - 1 pod

dc/zync-que deploys istag/amp-zync:latest 
  deployment #5 deployed 3 weeks ago - 1 pod
  deployment #4 deployed 3 weeks ago
  deployment #3 deployed 3 weeks ago

Warnings:
  * pod/apicast-production-2-wfbqn has restarted 9 times
  * pod/backend-cron-1-g5q2z has restarted 9 times
  * pod/backend-listener-1-f6zvk has restarted 12 times
  * pod/backend-redis-3-v2ctm has restarted 12 times
  * pod/backend-worker-1-lzpx9 has restarted 12 times
  * container "system-developer" in pod/system-app-4-trg29 has restarted 9 times
  * container "system-master" in pod/system-app-4-trg29 has restarted 9 times
  * container "system-provider" in pod/system-app-4-trg29 has restarted 9 times
  * pod/system-memcache-1-f4t6t has restarted 12 times
  * pod/system-redis-3-hcqkn has restarted 12 times
  * pod/system-sidekiq-1-m67nq has restarted 9 times
  * pod/system-sphinx-1-wnc2q has restarted 9 times
  * pod/zync-1-p46cc has restarted 9 times
  * pod/zync-database-1-7cr6k has restarted 12 times
  * pod/zync-que-5-8pfqd has restarted 15 times

Info:
  * dc/backend-cron has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/backend-cron --readiness ...
  * dc/backend-cron has no liveness probe to verify pods are still running.
    try: oc set probe dc/backend-cron --liveness ...
  * dc/backend-worker has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/backend-worker --readiness ...
  * dc/backend-worker has no liveness probe to verify pods are still running.
    try: oc set probe dc/backend-worker --liveness ...
  * dc/system-sidekiq has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/system-sidekiq --readiness ...
  * dc/system-sidekiq has no liveness probe to verify pods are still running.
    try: oc set probe dc/system-sidekiq --liveness ...
  * dc/system-sphinx has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/system-sphinx --readiness ...
  * dc/zync-que has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/zync-que --readiness ...

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
estevaobk commented 5 years ago

Addressed in https://github.com/estevaobk/3scaledump/commit/f6f3b86924033edb3a61f54f03ad208fc0f7253e
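
As a quick sanity check against a sketch like the one above, the captured files can be previewed after a run. The 3scale-dump/status path here is the placeholder from that sketch, not necessarily what the commit produces:

# Preview the first lines of each captured status file.
head -n 5 3scale-dump/status/*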