goharbor / harbor

An open source trusted cloud native registry project that stores, signs, and scans content.
https://goharbor.io
Apache License 2.0
23.14k stars 4.66k forks

One of the layers is larger than 80G, can't push image to Harbor #18888

Closed FLM210 closed 8 months ago

FLM210 commented 1 year ago

harbor-core error log: (screenshot)

I set the REGISTRY_HTTP_CLIENT_TIMEOUT=600 environment variable for the jobservice and core services. This is my nginx config:

pid /tmp/nginx.pid;

events {
  worker_connections 3096;
  use epoll;
  multi_accept on;
}

http {
  client_body_temp_path /tmp/client_body_temp;
  proxy_temp_path /tmp/proxy_temp;
  fastcgi_temp_path /tmp/fastcgi_temp;
  uwsgi_temp_path /tmp/uwsgi_temp;
  scgi_temp_path /tmp/scgi_temp;
  tcp_nodelay on;
  include /etc/nginx/conf.d/*.upstream.conf;

  # this is necessary for us to be able to disable request buffering in all cases
  proxy_http_version 1.1;

  upstream core {
    server core:8080;
  }

  upstream portal {
    server portal:8080;
  }

  log_format timed_combined '$remote_addr - '
    '"$request" $status $body_bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '$request_time $upstream_response_time $pipe';

  access_log /dev/stdout timed_combined;

  map $http_x_forwarded_proto $x_forwarded_proto {
    default $http_x_forwarded_proto;
    ""      $scheme;
  }

  include /etc/nginx/conf.d/*.server.conf;

  server {
    listen 8443 ssl;
#    server_name harbordomain.com;
    server_tokens off;
    # SSL
    ssl_certificate /etc/cert/server.crt;
    ssl_certificate_key /etc/cert/server.key;

    # Recommendations from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl_protocols TLSv1.2;
    ssl_ciphers '!aNULL:kECDH+AESGCM:ECDH+AESGCM:RSA+AESGCM:kECDH+AES:ECDH+AES:RSA+AES:';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    # Add extra headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
    add_header X-Frame-Options DENY;
    add_header Content-Security-Policy "frame-ancestors 'none'";

    # customized location config file can place to /etc/nginx dir with prefix harbor.https. and suffix .conf
    include /etc/nginx/conf.d/harbor.https.*.conf;

    location / {
      proxy_pass http://portal/;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

      proxy_cookie_path / "/; HttpOnly; Secure";

      proxy_buffering off;
      proxy_request_buffering off;
    }

    location /c/ {
      proxy_pass http://core/c/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

      proxy_cookie_path / "/; Secure";

      proxy_buffering off;
      proxy_request_buffering off;
    }

    location /api/ {
      proxy_pass http://core/api/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

      proxy_cookie_path / "/; Secure";

      proxy_buffering off;
      proxy_request_buffering off;
    }

    location /v1/ {
      return 404;
    }

    location /v2/ {
      proxy_pass http://core/v2/;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;
      proxy_buffering off;
      proxy_request_buffering off;
      proxy_send_timeout 18000;
      proxy_read_timeout 18000;
    }

    location /service/ {
      proxy_pass http://core/service/;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

      proxy_cookie_path / "/; Secure";

      proxy_buffering off;
      proxy_request_buffering off;
    }

    location /service/notifications {
      return 404;
    }
  }
  server {
      listen 8080;
      #server_name harbordomain.com;
      return 308 https://$host:443$request_uri;
  }
}
ChristianCiach commented 1 year ago

Try to set client_max_body_size 0 at http-scope, too. This is the solution according to many "stackoverflow.com" questions.
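
For reference, a minimal sketch of where that directive sits (same directive names as the config quoted above; note that `client_max_body_size 0` disables nginx's body-size check entirely rather than setting a literal limit):

```nginx
http {
    # 0 disables the request-body size check, so large layer
    # uploads are not rejected with HTTP 413 at the proxy
    client_max_body_size 0;

    server {
        listen 8443 ssl;

        location /v2/ {
            # can also be set (or overridden) per server or per
            # location; the most specific scope wins
            client_max_body_size 0;
        }
    }
}
```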

FLM210 commented 1 year ago

> Try to set client_max_body_size 0 at http-scope, too. This is the solution according to many "stackoverflow.com" questions.

I set client_max_body_size 0 at the http scope, but the push still failed.

MinerYang commented 1 year ago

Is it still the same error message after setting client_max_body_size 0? ref: https://github.com/goharbor/harbor/issues/15823

Please provide more details, for example the core log, the proxy log, etc. How did you install and set up your Harbor (via docker compose or the Harbor helm chart)? Is there an Nginx proxy in front of the harbor-nginx container?

FLM210 commented 1 year ago

> Is it still the same error msg after setting the client_max_body_size 0? ref: #15823
>
> Please provide more details, for example core log, proxy log etc. how do you install and set up your harbor?(via docker compose or harbor helm?) Is there an Nginx proxy in front of harbor-nginx container?

I installed Harbor with docker-compose, and there is no nginx proxy in front of harbor-nginx.

docker push output: (screenshot)

harbor-core logs: (screenshot)

harbor nginx logs: (screenshot)

zyyw commented 1 year ago

Hi @FLM210, how did you update the nginx config to client_max_body_size 0? Instead of docker exec -it nginx /bin/bash and editing the running nginx container's config, we may need to update it in common/config/nginx/nginx.conf and then restart Harbor with sudo docker-compose down && sudo docker-compose up -d for the config to take effect.

FLM210 commented 1 year ago

@zyyw Sure, this is my current configuration file:
pid /tmp/nginx.pid;

events {
  worker_connections 3096;
  use epoll;
  multi_accept on;
}

http {
  client_body_temp_path /tmp/client_body_temp;
  proxy_temp_path /tmp/proxy_temp;
  fastcgi_temp_path /tmp/fastcgi_temp;
  uwsgi_temp_path /tmp/uwsgi_temp;
  scgi_temp_path /tmp/scgi_temp;
  client_max_body_size 0;
  tcp_nodelay on;
  include /etc/nginx/conf.d/*.upstream.conf;

  # this is necessary for us to be able to disable request buffering in all cases
  proxy_http_version 1.1;

  upstream core {
    server core:8080;
  }

  upstream portal {
    server portal:8080;
  }

  log_format timed_combined '$remote_addr - '
    '"$request" $status $body_bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '$request_time $upstream_response_time $pipe';

  access_log /dev/stdout timed_combined;

  map $http_x_forwarded_proto $x_forwarded_proto {
    default $http_x_forwarded_proto;
    ""      $scheme;
  }

  include /etc/nginx/conf.d/*.server.conf;

  server {
    listen 8443 ssl;
#    server_name harbordomain.com;
    server_tokens off;
    # SSL
    ssl_certificate /etc/cert/server.crt;
    ssl_certificate_key /etc/cert/server.key;

    # Recommendations from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl_protocols TLSv1.2;
    ssl_ciphers '!aNULL:kECDH+AESGCM:ECDH+AESGCM:RSA+AESGCM:kECDH+AES:ECDH+AES:RSA+AES:';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    # Add extra headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
    add_header X-Frame-Options DENY;
    add_header Content-Security-Policy "frame-ancestors 'none'";

    # customized location config file can place to /etc/nginx dir with prefix harbor.https. and suffix .conf
    include /etc/nginx/conf.d/harbor.https.*.conf;

    location / {
      proxy_pass http://portal/;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

      proxy_cookie_path / "/; HttpOnly; Secure";

      proxy_buffering off;
      proxy_request_buffering off;
    }

    location /c/ {
      proxy_pass http://core/c/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

      proxy_cookie_path / "/; Secure";

      proxy_buffering off;
      proxy_request_buffering off;
    }

    location /api/ {
      proxy_pass http://core/api/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

      proxy_cookie_path / "/; Secure";

      proxy_buffering off;
      proxy_request_buffering off;
    }

    location /v1/ {
      return 404;
    }

    location /v2/ {
      client_max_body_size 0;
      proxy_pass http://core/v2/;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;
      proxy_buffering off;
      proxy_request_buffering off;
      proxy_send_timeout 18000;
      proxy_read_timeout 18000;
    }

    location /service/ {
      proxy_pass http://core/service/;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

      proxy_cookie_path / "/; Secure";

      proxy_buffering off;
      proxy_request_buffering off;
    }

    location /service/notifications {
      return 404;
    }
  }
  server {
      listen 8080;
      #server_name harbordomain.com;
      return 308 https://$host:443$request_uri;
  }
}
zyyw commented 1 year ago

@FLM210 Can you push the image with the large layer now, after updating common/config/nginx/nginx.conf and restarting Harbor with sudo docker-compose down && sudo docker-compose up -d? Or does the issue still persist with the same error message?

FLM210 commented 1 year ago

@zyyw This is the configuration I used before reopening the issue

jorisdonkers commented 1 year ago

Same problem here.

danielzhanghl commented 1 year ago

The error comes from harbor-core; it seems it hit some limitation on the Go http package side. For a layer this big, if Go needs to parse the request, it has to create temp files to store the data, so you may want to check whether there is enough space for temp files.

FLM210 commented 1 year ago

> the error comes from harbor core, seems it reached some limitation on golang http package side. for this big size layer, if golang need to parse the request, need to create some tmp file to store temp data, you may have a check if there is enough space for tmp file.

My remaining disk space exceeds 1 TB, so I guess this isn't a disk space issue.

danielzhanghl commented 12 months ago

The tmp files reside under /tmp, which is actually backed by memory (tmpfs); maybe that is a memory limitation?

FLM210 commented 12 months ago

> the tmp files resides on /tmp, which is memory actually, maybe that is memory limitation?

My memory is only 24 GB, yet uploading a 30 GB layer succeeds.

github-actions[bot] commented 9 months ago

This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.

FLM210 commented 9 months ago

Pushing an image with a 37G layer using Skopeo also failed.

skopeo logs:

skopeo --debug copy oci-archive:image.tar docker://registry.light-field.tech/ue/project-spsp:4.1.11-3-op_5.0.3-7_v0.4 --tls-verify=false  
WARN[0000] '--tls-verify' is deprecated, instead use: --src-tls-verify, --dest-tls-verify 
DEBU[0000] Using registries.d directory /etc/containers/registries.d 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf" 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/shortnames.conf" 
DEBU[0000] Found credentials for registry.light-field.tech/ue/project-spsp in credential helper containers-auth.json in file /run/containers/0/auth.json 
DEBU[0000]  No signature storage configuration found for registry.light-field.tech/ue/project-spsp:4.1.11-3-op_5.0.3-7_v0.4, using built-in default file:///var/lib/containers/sigstore 
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/registry.light-field.tech 

DEBU[0226] Using blob info cache at /var/lib/containers/cache/blob-info-cache-v1.boltdb 
DEBU[0226] IsRunningImageAllowed for image oci-archive:/data/unreal-project-build-dir/project_spsp/image.tar 
DEBU[0226]  Using default policy section                
DEBU[0226]  Requirement 0: allowed                      
DEBU[0226] Overall: allowed                             
Getting image source signatures
DEBU[0226] Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.docker.distribution.manifest.v1+json] 
DEBU[0226] ... will first try using the original manifest unmodified 
DEBU[0226] Checking if we can reuse blob sha256:9d19ee268e0d7bcf6716e6658ee1b0384a71d6f2f9aa1ae2085610cf7c7b316f: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true 
DEBU[0226] Checking /v2/ue/project-spsp/blobs/sha256:9d19ee268e0d7bcf6716e6658ee1b0384a71d6f2f9aa1ae2085610cf7c7b316f 
DEBU[0226] GET https://registry.light-field.tech/v2/    
DEBU[0226] Ping https://registry.light-field.tech/v2/ status 401 
DEBU[0226] GET https://registry.light-field.tech/service/token?account=rxopy&scope=repository%3Aue%2Fproject-spsp%3Apull%2Cpush&service=harbor-registry 
DEBU[0226] HEAD https://registry.light-field.tech/v2/ue/project-spsp/blobs/sha256:9d19ee268e0d7bcf6716e6658ee1b0384a71d6f2f9aa1ae2085610cf7c7b316f 
DEBU[0226] ... already exists                           
DEBU[0226] Skipping blob sha256:9d19ee268e0d7bcf6716e6658ee1b0384a71d6f2f9aa1ae2085610cf7c7b316f (already present): 
DEBU[0226] Checking if we can reuse blob sha256:4df4cfaff6862be5b4dac7cf8b239b8daf6652720f2bffa5f382562a17d37b6a: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true 
DEBU[0226] Checking /v2/ue/project-spsp/blobs/sha256:4df4cfaff6862be5b4dac7cf8b239b8daf6652720f2bffa5f382562a17d37b6a 
DEBU[0226] HEAD https://registry.light-field.tech/v2/ue/project-spsp/blobs/sha256:4df4cfaff6862be5b4dac7cf8b239b8daf6652720f2bffa5f382562a17d37b6a 
Copying blob 9d19ee268e0d skipped: already exists  
DEBU[0226] ... already exists                           
DEBU[0226] Skipping blob sha256:4df4cfaff6862be5b4dac7cf8b239b8daf6652720f2bffa5f382562a17d37b6a (already present): 
DEBU[0226] Checking if we can reuse blob sha256:7735de6ab791b4375db5e37b4bd2d6cf6f1b7df2d9f44ab6e1e93dfcad009271: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true 
DEBU[0226] Checking /v2/ue/project-spsp/blobs/sha256:7735de6ab791b4375db5e37b4bd2d6cf6f1b7df2d9f44ab6e1e93dfcad009271 
Copying blob 9d19ee268e0d skipped: already exists  
Copying blob 4df4cfaff686 skipped: already exists  
DEBU[0226] ... already exists                           
DEBU[0226] Skipping blob sha256:7735de6ab791b4375db5e37b4bd2d6cf6f1b7df2d9f44ab6e1e93dfcad009271 (already present): 
DEBU[0226] Checking if we can reuse blob sha256:481d35d15b9c45af82a85e35d1af79dbdcbe6bebe067145f6b0e435a082828ce: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar+gzip" = true 
Copying blob 9d19ee268e0d skipped: already exists  
Copying blob 4df4cfaff686 skipped: already exists  
Copying blob 7735de6ab791 skipped: already exists  
DEBU[0226] ... already exists                           
DEBU[0226] Skipping blob sha256:481d35d15b9c45af82a85e35d1af79dbdcbe6bebe067145f6b0e435a082828ce (already present): 
Copying blob 9d19ee268e0d skipped: already exists  
Copying blob 4df4cfaff686 skipped: already exists  
Copying blob 7735de6ab791 skipped: already exists  
Copying blob 481d35d15b9c skipped: already exists  
DEBU[0226] ... already exists                           
Copying blob 9d19ee268e0d skipped: already exists  
Copying blob 4df4cfaff686 skipped: already exists  
Copying blob 7735de6ab791 skipped: already exists  
Copying blob 481d35d15b9c skipped: already exists  
Copying blob 695b2dfd44b4 skipped: already exists  
DEBU[0226] ... not present                              
DEBU[0226] Detected compression format gzip             
DEBU[0226] Using original blob without modification     
DEBU[0226] Checking /v2/ue/project-spsp/blobs/sha256:7a660d7f4f0752f9bf3930192f83bf77f850285f470dd6a18ac5bfef6a9811de 
DEBU[0226] HEAD https://registry.light-field.tech/v2/ue/project-spsp/blobs/sha256:7a660d7f4f0752f9bf3930192f83bf77f850285f470dd6a18ac5bfef6a9811de 
[repeated "Copying blob … skipped: already exists" progress lines trimmed]
Copying blob 7a660d7f4f07 [===============================>------] 32.0GiB / 37.7GiB
Copying blob fff116f7c5df skipped: already exists  
Copying blob 7e71c4a6f428 skipped: already exists  
Copying blob f10e0d82a91b skipped: already exists  
Copying blob 37b1a15e34f2 skipped: already exists  
DEBU[0554] error deleting tmp dir: <nil>                
FATA[0554] writing blob: uploading layer chunked: received unexpected HTTP status: 502 Bad Gateway

harbor nginx logs

192.168.1.50 - "GET /service/token?account=rxopy&scope=repository%3Aue%2Fproject-spsp%3Apull%2Cpush&service=harbor-registry HTTP/1.1" 200 971 "-" "skopeo/1.12.0" 0.025 0.026 .
192.168.1.50 - "HEAD /v2/ue/project-spsp/blobs/sha256:9d19ee268e0d7bcf6716e6658ee1b0384a71d6f2f9aa1ae2085610cf7c7b316f HTTP/1.1" 200 0 "-" "skopeo/1.12.0" 0.024 0.024 .
192.168.1.50 - "HEAD /v2/ue/project-spsp/blobs/sha256:4df4cfaff6862be5b4dac7cf8b239b8daf6652720f2bffa5f382562a17d37b6a HTTP/1.1" 200 0 "-" "skopeo/1.12.0" 0.022 0.021 .
192.168.1.50 - "HEAD /v2/ue/project-spsp/blobs/sha256:7735de6ab791b4375db5e37b4bd2d6cf6f1b7df2d9f44ab6e1e93dfcad009271 HTTP/1.1" 200 0 "-" "skopeo/1.12.0" 0.022 0.022 .
192.168.1.50 - "HEAD /v2/ue/project-spsp/blobs/sha256:481d35d15b9c45af82a85e35d1af79dbdcbe6bebe067145f6b0e435a082828ce HTTP/1.1" 200 0 "-" "skopeo/1.12.0" 0.026 0.026 .
192.168.1.50 - "HEAD /v2/ue/project-spsp/blobs/sha256:695b2dfd44b492c91b0c44fddbf855f68c822a3ef4ed53416c82c1d216584368 HTTP/1.1" 200 0 "-" "skopeo/1.12.0" 0.019 0.019 .
192.168.1.50 - "HEAD /v2/ue/project-spsp/blobs/sha256:7a660d7f4f0752f9bf3930192f83bf77f850285f470dd6a18ac5bfef6a9811de HTTP/1.1" 404 0 "-" "skopeo/1.12.0" 0.005 0.006 .
192.168.1.50 - "HEAD /v2/ue/project-spsp/blobs/sha256:7a660d7f4f0752f9bf3930192f83bf77f850285f470dd6a18ac5bfef6a9811de HTTP/1.1" 404 0 "-" "skopeo/1.12.0" 0.005 0.005 .
192.168.1.50 - "POST /v2/ue/project-spsp/blobs/uploads/ HTTP/1.1" 202 0 "-" "skopeo/1.12.0" 0.011 0.012 .
192.168.50.67 - "GET /service/token?scope=repository%3Aue%2Fproject-hpwt%3Apull&service=harbor-registry HTTP/1.1" 200 939 "-" "containerd/v1.6.14-k3s1" 0.024 0.024 .
192.168.50.67 - "HEAD /v2/ue/project-hpwt/manifests/4.1.11-5_5.0.3-7_v0.4 HTTP/1.1" 200 0 "-" "containerd/v1.6.14-k3s1" 0.023 0.023 .
172.18.0.1 - "GET /api/version HTTP/1.1" 200 19 "-" "Go-http-client/1.1" 0.005 0.005 .
172.18.0.1 - "GET /v2/ HTTP/1.1" 401 76 "-" "Go-http-client/1.1" 0.001 0.001 .
172.18.0.1 - "GET /service/token?service=harbor-registry HTTP/1.1" 200 890 "-" "Go-http-client/1.1" 0.026 0.026 .
172.18.0.1 - "GET /v2/ HTTP/1.1" 200 2 "-" "harbor-registry-client" 0.019 0.019 .
127.0.0.1 - "GET / HTTP/1.1" 308 171 "-" "curl/8.0.1" 0.000 - .
192.168.50.67 - "HEAD /v2/ue/project-hpwt/manifests/4.1.11-5_5.0.3-7_v0.4 HTTP/1.1" 401 0 "-" "containerd/v1.6.14-k3s1" 0.002 0.002 .
192.168.50.67 - "GET /service/token?scope=repository%3Aue%2Fproject-hpwt%3Apull&service=harbor-registry HTTP/1.1" 200 941 "-" "containerd/v1.6.14-k3s1" 0.022 0.022 .
192.168.50.67 - "GET /service/token?scope=repository%3Aue%2Fproject-hpwt%3Apull&service=harbor-registry HTTP/1.1" 200 939 "-" "containerd/v1.6.14-k3s1" 0.022 0.021 .
192.168.50.67 - "HEAD /v2/ue/project-hpwt/manifests/4.1.11-5_5.0.3-7_v0.4 HTTP/1.1" 200 0 "-" "containerd/v1.6.14-k3s1" 0.015 0.015 .
2023/10/10 02:50:33 [error] 343160#0: *1835137 writev() failed (104: Connection reset by peer) while sending request to upstream, client: 192.168.1.50, server: , request: "PATCH /v2/ue/project-spsp/blobs/uploads/fd8cd4c3-b62b-4cee-bc9b-4c6eebc8d4ac?_state=zzaRBVRNRft8ka30TXl_nDsYVBHeTucexEbyMM0XYyh7Ik5hbWUiOiJ1ZS9wcm9qZWN0LXNwc3AiLCJVVUlEIjoiZmQ4Y2Q0YzMtYjYyYi00Y2VlLWJjOWItNGM2ZWViYzhkNGFjIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIzLTEwLTEwVDAyOjQ1OjQzLjM3MzkxMTQxNFoifQ%3D%3D HTTP/1.1", upstream: "http://172.18.0.10:8080/v2/ue/project-spsp/blobs/uploads/fd8cd4c3-b62b-4cee-bc9b-4c6eebc8d4ac?_state=zzaRBVRNRft8ka30TXl_nDsYVBHeTucexEbyMM0XYyh7Ik5hbWUiOiJ1ZS9wcm9qZWN0LXNwc3AiLCJVVUlEIjoiZmQ4Y2Q0YzMtYjYyYi00Y2VlLWJjOWItNGM2ZWViYzhkNGFjIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIzLTEwLTEwVDAyOjQ1OjQzLjM3MzkxMTQxNFoifQ%3D%3D", host: "registry.light-field.tech"
192.168.1.50 - "PATCH /v2/ue/project-spsp/blobs/uploads/fd8cd4c3-b62b-4cee-bc9b-4c6eebc8d4ac?_state=zzaRBVRNRft8ka30TXl_nDsYVBHeTucexEbyMM0XYyh7Ik5hbWUiOiJ1ZS9wcm9qZWN0LXNwc3AiLCJVVUlEIjoiZmQ4Y2Q0YzMtYjYyYi00Y2VlLWJjOWItNGM2ZWViYzhkNGFjIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDIzLTEwLTEwVDAyOjQ1OjQzLjM3MzkxMTQxNFoifQ%3D%3D HTTP/1.1" 502 150 "-" "skopeo/1.12.0" 316.306 290.071 .

harbor core logs

2023-10-09T15:01:29Z [INFO] [/pkg/notifier/notifier.go:206]: Handle notification with Handler 'AuditLog' on topic 'PULL_ARTIFACT': ID-1942, Repository-ue/project-ciie-ue5 Tags-[4.1.11-5_5.0.3-7_v0.4] Digest-sha256:dc336fa14f86724714801b0a754d345c1fb14b6a5240dea1756b83d8e32e5c7c Operator-evopen OccurAt-2023-10-09 15:01:29
2023/10/09 15:10:33 http: proxy error: readfrom tcp 172.18.0.10:46690->172.18.0.5:5000: http: request body too large
2023/10/09 15:20:15 http: proxy error: readfrom tcp 172.18.0.10:43846->172.18.0.5:5000: http: request body too large
2023-10-09T16:00:12Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-09T16:00:12Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-09T17:00:12Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-09T17:00:12Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-09T18:00:12Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-09T18:00:12Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-09T19:00:12Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-09T19:00:12Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-09T20:00:13Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-09T20:00:13Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-09T21:00:13Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-09T21:00:13Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-09T22:00:13Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-09T22:00:13Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-09T23:00:13Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-09T23:00:13Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-10T00:00:13Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 9 executions with outdate status, refresh status to db
2023-10-10T00:00:13Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 9 succeed, 0 failed
2023-10-10T01:00:14Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-10T01:00:14Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-10T02:00:14Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-10T02:00:14Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023/10/10 02:19:36 http: proxy error: readfrom tcp 172.18.0.10:48122->172.18.0.5:5000: http: request body too large
2023/10/10 02:34:41 http: proxy error: readfrom tcp 172.18.0.10:53998->172.18.0.5:5000: http: request body too large
2023-10-10T02:36:14Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 1 executions with outdate status, refresh status to db
2023-10-10T02:36:14Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 1 succeed, 0 failed
2023-10-10T02:37:41Z [WARNING] [/common/rbac/project/evaluator.go:80]: Failed to get info of project 10 for permission evaluator, error: project 10 not found
2023/10/10 02:50:59 http: proxy error: readfrom tcp 172.18.0.10:56916->172.18.0.5:5000: http: request body too large
2023-10-10T03:00:14Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-10T03:00:14Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed
2023-10-10T03:05:30Z [INFO] [/lib/config/userconfig.go:255]: skip_update_pull_time:false
2023-10-10T03:05:30Z [INFO] [/pkg/notifier/notifier.go:206]: Handle notification with Handler 'InternalArtifact' on topic 'PULL_ARTIFACT': ID-1942, Repository-ue/project-ciie-ue5 Tags-[] Digest-sha256:dc336fa14f86724714801b0a754d345c1fb14b6a5240dea1756b83d8e32e5c7c Operator-dwei OccurAt-2023-10-10 03:05:30
2023-10-10T03:05:30Z [INFO] [/pkg/notifier/notifier.go:206]: Handle notification with Handler 'ArtifactWebhook' on topic 'PULL_ARTIFACT': ID-1942, Repository-ue/project-ciie-ue5 Tags-[] Digest-sha256:dc336fa14f86724714801b0a754d345c1fb14b6a5240dea1756b83d8e32e5c7c Operator-dwei OccurAt-2023-10-10 03:05:30
2023-10-10T03:05:30Z [INFO] [/pkg/notifier/notifier.go:206]: Handle notification with Handler 'AuditLog' on topic 'PULL_ARTIFACT': ID-1942, Repository-ue/project-ciie-ue5 Tags-[] Digest-sha256:dc336fa14f86724714801b0a754d345c1fb14b6a5240dea1756b83d8e32e5c7c Operator-dwei OccurAt-2023-10-10 03:05:30
2023-10-10T04:00:14Z [INFO] [/pkg/task/dao/execution.go:466]: scanned out 2 executions with outdate status, refresh status to db
2023-10-10T04:00:14Z [INFO] [/pkg/task/dao/execution.go:507]: refresh outdate execution status done, 2 succeed, 0 failed

registry logs

level=error msg="response completed with error" auth.user.name="harbor_registry_user" err.code="blob unknown" err.detail=sha256:7ba2948eb13f2ac47431f248bca894725cdd1ec4d2b87b83d1660e5af41135f3 err.message="blob unknown to registry" go.version=go1.20.4 http.request.host=registry.light-field.tech http.request.id=485fac50-f9a1-49b4-a056-5364eaa94f42 http.request.method=GET http.request.remoteaddr=192.168.47.31 http.request.uri="/v2/library/controller/blobs/sha256:7ba2948eb13f2ac47431f248bca894725cdd1ec4d2b87b83d1660e5af41135f3?ns=docker.io" http.request.useragent="containerd/v1.6.14-k3s1" http.response.contenttype="application/json; charset=utf-8" http.response.duration=4.023832ms http.response.status=404 http.response.written=157 vars.digest="sha256:7ba2948eb13f2ac47431f248bca894725cdd1ec4d2b87b83d1660e5af41135f3" vars.name="library/controller" 
qmloong commented 8 months ago

Same problem here...

qmloong commented 8 months ago

It looks like it is limited by the beego config:

https://github.com/goharbor/harbor/blob/main/src/core/main.go#L131

chlins commented 8 months ago

The behavior may differ between clients: a client that uses chunked uploads will not hit the problem, while a client that uploads the whole blob in a single request is recognized by beego as an upload and is limited by MaxUploadSize.

DaiYifan commented 5 months ago

We have tried every nginx configuration and set the beego MaxUploadSize to 128 GB, but we still encounter this error. The registry log shows errors like 'client disconnected during blob PATCH' (a PATCH request returning 500): msg="client disconnected during blob PATCH" auth.user.name="harbor_registry_user" contentLength=-1 copied=12419550 error="unexpected EOF".