Open caesarWHLee opened 2 years ago
Since requests sent from the storytelling cloud run to the Old Bucket can be successfully routed by the VPC connector, perhaps the difference between the configurations of the Old Bucket and the New Bucket contains the root cause.
Can the cloud run successfully request New Bucket files served directly by GCS (URL starting with https://storage.googleapis.com)?
If there is no timeout issue when requesting New Bucket files served directly by GCS, then I am afraid that the root cause might not be the bucket configuration.
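To check the question above, a probe along the following lines could be run from inside the storytelling cloud run. This is only a sketch: a Node 18+ runtime (global fetch) is assumed, and the object path `liveblog.json` and the storage.googleapis.com form of the New Bucket URL are placeholders, not taken from the actual deployment.

```ts
// Hypothetical diagnostic (not part of the storytelling codebase): compare the
// custom-domain bucket URL with the direct storage.googleapis.com URL.
// Assumes Node 18+ (global fetch); "liveblog.json" is a placeholder object path.
const CANDIDATE_URLS = [
  'https://editools-gcs-dev.readr.tw/liveblog.json', // New Bucket via its custom domain
  'https://storage.googleapis.com/editools-gcs-dev.readr.tw/liveblog.json', // same object served directly by GCS
];

async function probe(url: string, timeoutMs = 10_000): Promise<void> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    console.log(`${url} -> HTTP ${res.status}`);
  } catch (err) {
    // An abort here reproduces the timeout symptom described in this issue.
    console.error(`${url} -> failed: ${(err as Error).message}`);
  } finally {
    clearTimeout(timer);
  }
}

(async () => {
  for (const url of CANDIDATE_URLS) {
    await probe(url);
  }
})();
```

If the direct GCS URL responds while the custom-domain URL times out, that would point at the routing path rather than the bucket itself.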
Follow-up records:
2022/7/1: The previous CONNECTIONS setting of the storytelling cloud run routed all egress requests through the serverless VPC connector first (Route all traffic through the VPC connector).
Requesting https://editools-gcs-dev.read.tw/ from the storytelling cloud run runs into a timeout. It looks like the serverless VPC connector cannot find https://editools-gcs-dev.read.tw/.
For now, the egress setting has been changed to Route only requests to private IPs through the VPC connector, so only private IP / internal DNS requests go through the VPC connector.
The root cause has not been clarified yet.
2022/9/20: The cloud run CONNECTIONS setting was changed back to Route all traffic through the VPC connector, which made the missing storytelling hero image happen again. Nick has adjusted the setting again and it is fixed for now.
2022/9/21 update:

After the cloud run CONNECTIONS setting was changed to Route all traffic through the VPC connector, the hero image broke because the storytelling cloud run would hit a timeout when requesting the https://editools-gcs.readr.tw GCS bucket (the root cause of this timeout is still to be clarified), so we changed the CONNECTIONS setting of the storytelling cloud run to Route only requests to private IPs through the VPC connector.

After the CONNECTIONS setting of the storytelling cloud run was changed to Route only requests to private IPs through the VPC connector, storytelling could request the https://editools-gcs.readr.tw GCS bucket normally again, which fixed the hero image issue.

However, this change introduced another error. With it, only requests to private IPs go out from storytelling through the VPC connector, but when storytelling requests the editools-gql-(dev|prod) cloud run, the request URL origin is https://editools-gql-prod-4g6paft7cq-de.a.run.app. This URL is not a private IP, so the request does not go out through the VPC connector. Because editools-gql-(dev|prod) only allows requests from the internal network, all of storytelling's requests get a 403 (Forbidden) response, and the content ultimately cannot be displayed.
After the process above, the final setting of the storytelling-(dev|prod) cloud run is as follows:
Route all traffic through the VPC connector
Define
Old Bucket: statics-editools-dev
New Bucket: editools-gcs-dev.readr.tw
Description
Since storytelling is built with Next.js and relies on SSR data fetching to prevent hydration errors, and the liveblog page renders all of its contents from the liveblog JSON file, the storytelling server fetches that JSON in getServerSideProps. Fetching the JSON from the Old Bucket worked without problems, but after switching to the New Bucket by changing the environment variable GCS_BUCKET_URL, the request always times out and the liveblog JSON file cannot be fetched.
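For reference, a minimal sketch of the SSR fetch described above follows; it is not the actual storytelling code. GCS_BUCKET_URL is the environment variable mentioned in this issue, while the object path `liveblog.json` and the prop shape are assumptions, and a server-side fetch implementation is assumed to be available.

```ts
// Minimal sketch of the SSR data fetching described above; not the actual
// storytelling implementation. GCS_BUCKET_URL is the environment variable
// mentioned in this issue; the object path and prop shape are hypothetical.
import type { GetServerSideProps } from 'next';

type LiveblogProps = {
  liveblog: unknown;
};

export const getServerSideProps: GetServerSideProps<LiveblogProps> = async () => {
  // e.g. GCS_BUCKET_URL=https://editools-gcs-dev.readr.tw (the New Bucket)
  const bucketUrl = process.env.GCS_BUCKET_URL;
  const res = await fetch(`${bucketUrl}/liveblog.json`);

  if (!res.ok) {
    // Bail out if the liveblog JSON cannot be fetched.
    return { notFound: true };
  }

  const liveblog = await res.json();
  return { props: { liveblog } };
};
```

Because the fetch happens inside getServerSideProps, a timeout on this request blocks the whole server render, which matches the symptom described in this issue.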
Follow-Up
From @nickhsine's observation: with the configuration of the storytelling cloud run, all egress requests are routed through the serverless VPC connector. It looks like requests to https://editools-gcs-dev.read.tw/ from the storytelling cloud run run into the timeout issue.
Workaround
Set egress requests to Route only requests to private IPs through the VPC connector => only private IP / internal DNS requests will be routed through the VPC connector.
Old request flow: cloud run -> vpc connector -> gcp load balancer -> gcs
After workaround:
cloud run -- (internal ip / internal dns) --> vpc connector -> gcp resources
cloud run -- (external ip / external dns) --> internet --> external resources
In this configuration, when the storytelling cloud run sends a request to https://editools-gcs-dev.readr.tw, the request goes out to the internet and then comes back into GCP, which avoids the problem that the VPC cannot find https://editools-gcs-dev.readr.tw.
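One way to probe the "VPC connector cannot find the host" theory would be a name-resolution check run inside the storytelling cloud run under each egress setting. This is only a hypothetical diagnostic using Node's built-in dns module; it is not taken from the storytelling codebase.

```ts
// Hypothetical diagnostic: check whether the bucket hostname resolves from
// inside the storytelling cloud run under each CONNECTIONS/egress setting.
// Uses only the built-in node:dns module.
import { lookup, resolve4 } from 'node:dns/promises';

const HOSTNAME = 'editools-gcs-dev.readr.tw';

(async () => {
  try {
    // lookup() follows the same resolution path most HTTP clients use (getaddrinfo).
    const { address, family } = await lookup(HOSTNAME);
    console.log(`lookup(${HOSTNAME}) -> ${address} (IPv${family})`);
  } catch (err) {
    console.error(`lookup(${HOSTNAME}) failed: ${(err as Error).message}`);
  }

  try {
    // resolve4() queries the configured DNS servers directly.
    const records = await resolve4(HOSTNAME);
    console.log(`resolve4(${HOSTNAME}) -> ${records.join(', ')}`);
  } catch (err) {
    console.error(`resolve4(${HOSTNAME}) failed: ${(err as Error).message}`);
  }
})();
```

If resolution succeeds but the HTTP request still times out under Route all traffic through the VPC connector, the problem would be reachability through the connector rather than DNS.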
Note
Since requests sent from the storytelling cloud run to the Old Bucket can be successfully routed by the VPC connector, perhaps the difference between the configurations of the Old Bucket and the New Bucket contains the root cause.