minio / console

Simple UI for MinIO Object Storage :abacus:
https://min.io/docs/minio/linux/index.html
GNU Affero General Public License v3.0

Access-Control-Allow-Origin not being sent despite Origin header being set #3311

Closed. slushpuppy closed this issue 5 months ago

slushpuppy commented 6 months ago

NOTE

If this case is urgent, please subscribe to Subnet so that our 24/7 support team may help you faster.

Expected Behavior

Appropriate CORS headers (e.g. Access-Control-Allow-Origin) are returned when the request sets an Origin header.

Current Behavior

root@DESKTOP-6QQM0PN:~# curl http://127.0.0.1:9090/api/v1/download-shared-object/aHR0cDovLzEyNy4wLjAuMTo5MDAwL3B1YmxpYy9iYW5uZXIyLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPThITUEySTcxTE04Tkk5MklFNFg1JTJGMjAyNDA0MjQlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNDI0VDExMjA0N1omWC1BbXotRXhwaXJlcz00MzIwMCZYLUFtei1TZWN1cml0eS1Ub2tlbj1leUpoYkdjaU9pSklVelV4TWlJc0luUjVjQ0k2SWtwWFZDSjkuZXlKaFkyTmxjM05MWlhraU9pSTRTRTFCTWtrM01VeE5PRTVKT1RKSlJUUllOU0lzSW1WNGNDSTZNVGN4TXprNU56YzVOaXdpY0dGeVpXNTBJam9pWVdSdGFXNGlmUS4xRF9CanltVDZLOHBseE1CaTFXbTVPN2xLc0hlM0x3VDZlaUxNblEyRUwyakdRbWRvQm92a0twUVlMNkprX0hZVURySHhOVjlIZ3FKdFZFQVJxZk14QSZYLUFtei1TaWduZWRIZWFkZXJzPWhvc3QmdmVyc2lvbklkPW51bGwmWC1BbXotU2lnbmF0dXJlPTcxNDQ3ZjgwMjE5OGVkZGI3ZDZmN2U0OGUzOTliNDQ4YjYwMGVhYmExZDhkMmNhYWFlMzY0Zjg2YjBhNzJlMGM= -H "Origin: http://localhost:3000" -v

The response does not include an Access-Control-Allow-Origin header.

Possible Solution

Steps to Reproduce (for bugs)

  1. wget https://dl.min.io/server/minio/release/linux-amd64/minio
  2. chmod +x minio
  3. MINIO_ROOT_USER=admin MINIO_ROOT_PASSWORD=password ./minio server /mnt/data --console-address ":9001"
  4. create a bucket and upload a file
  5. access it via curl with an Origin header set (a sketch follows this list)
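
A minimal sketch of step 5, assuming the console API on 127.0.0.1:9090 as in the curl above and a download-shared-object link copied from the console UI (the <token> part is a placeholder, not a real share link):

    # SHARE_URL is a share link generated by the console; the token is a placeholder.
    SHARE_URL="http://127.0.0.1:9090/api/v1/download-shared-object/<token>"

    # Send a cross-origin request and print only the response headers; a
    # CORS-enabled endpoint would answer with Access-Control-Allow-Origin.
    curl -s -D - -o /dev/null "$SHARE_URL" -H "Origin: http://localhost:3000" \
      | grep -i "access-control-allow-origin" \
      || echo "no Access-Control-Allow-Origin header in response"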

Context

Regression

Your Environment

* server configuration variables (via mc admin config export):

root@DESKTOP-6QQM0PN:~# mc admin config export minio1
subnet license= api_key= proxy=
# callhome enable=off frequency=24h
drive max_timeout=
site name= region=
api requests_max=0 requests_deadline=10s cluster_deadline=10s cors_allow_origin=* remote_transport_deadline=2h list_quorum=strict replication_priority=auto replication_max_workers=500 transition_workers=100 stale_uploads_cleanup_interval=6h stale_uploads_expiry=24h delete_cleanup_interval=5m odirect=on gzip_objects=off root_access=on sync_events=off object_max_versions=9223372036854775807
scanner speed=default alert_excess_versions=100 alert_excess_folders=50000
batch replication_workers_wait=0ms keyrotation_workers_wait=0ms expiration_workers_wait=0ms
# compression enable=off allow_encryption=off extensions=.txt,.log,.csv,.json,.tar,.xml,.bin mime_types=text/*,application/json,application/xml,binary/octet-stream
# identity_openid enable= display_name= config_url= client_id= client_secret= claim_name=policy claim_userinfo= role_policy= claim_prefix= redirect_uri= redirect_uri_dynamic=off scopes= vendor= keycloak_realm= keycloak_admin_url=
# identity_ldap enable= server_addr= srv_record_name= user_dn_search_base_dn= user_dn_search_filter= group_search_filter= group_search_base_dn= tls_skip_verify=off server_insecure=off server_starttls=off lookup_bind_dn= lookup_bind_password=
# identity_tls skip_verify=off
# identity_plugin url= auth_token= role_policy= role_id=
# policy_plugin url= auth_token= enable_http2=off
# logger_webhook enable=off endpoint= auth_token= client_cert= client_key= proxy= batch_size=1 queue_size=100000 queue_dir=
# audit_webhook enable=off endpoint= auth_token= client_cert= client_key= batch_size=1 queue_size=100000 queue_dir=
# audit_kafka enable=off topic= brokers= sasl_username= sasl_password= sasl_mechanism=plain client_tls_cert= client_tls_key= tls_client_auth=0 sasl=off tls=off tls_skip_verify=off version= queue_size=100000 queue_dir=
# notify_webhook enable=off endpoint= auth_token= queue_limit=0 queue_dir= client_cert= client_key=
# notify_amqp enable=off url= exchange= exchange_type= routing_key= mandatory=off durable=off no_wait=off internal=off auto_deleted=off delivery_mode=0 publisher_confirms=off queue_limit=0 queue_dir=
# notify_kafka enable=off topic= brokers= sasl_username= sasl_password= sasl_mechanism=plain client_tls_cert= client_tls_key= tls_client_auth=0 sasl=off tls=off tls_skip_verify=off queue_limit=0 queue_dir= version= batch_size=0 compression_codec= compression_level=
# notify_mqtt enable=off broker= topic= password= username= qos=0 keep_alive_interval=0s reconnect_interval=0s queue_dir= queue_limit=0
# notify_nats enable=off address= subject= username= password= token= tls=off tls_skip_verify=off cert_authority= client_cert= client_key= ping_interval=0 jetstream=off streaming=off streaming_async=off streaming_max_pub_acks_in_flight=0 streaming_cluster_id= queue_dir= queue_limit=0
# notify_nsq enable=off nsqd_address= topic= tls=off tls_skip_verify=off queue_dir= queue_limit=0
# notify_mysql enable=off format=namespace dsn_string= table= queue_dir= queue_limit=0 max_open_connections=2
# notify_postgres enable=off format=namespace connection_string= table= queue_dir= queue_limit=0 max_open_connections=2
# notify_elasticsearch enable=off url= format=namespace index= queue_dir= queue_limit=0 username= password=
# notify_redis enable=off format=namespace address= key= password= user= queue_dir= queue_limit=0
# lambda_webhook enable=off endpoint= auth_token= client_cert= client_key=
# etcd endpoints= path_prefix= coredns_path=/skydns client_cert= client_cert_key=
# cache enable=off endpoint= block_size=
browser csp_policy="default-src 'self' 'unsafe-eval' 'unsafe-inline';" hsts_seconds=0 hsts_include_subdomains=off hsts_preload=off referrer_policy=strict-origin-when-cross-origin
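
Note that cors_allow_origin under the api subsystem above configures CORS for the S3 API endpoint (port 9000 here), not for the console's own API on 9090. For reference, a sketch of narrowing that S3-side setting with mc, reusing the minio1 alias from the export above (the origin value is illustrative):

    # Limit the S3 API's allowed CORS origins; this does not affect the console API.
    mc admin config set minio1 api cors_allow_origin="http://localhost:3000"
    # Restart to apply the change.
    mc admin service restart minio1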
cesnietor commented 5 months ago

The console API is not meant to be consumed directly by external clients. You can put something like nginx in front of it to add those headers for you if needed. You can also use our SDKs, which are meant for consuming the S3 APIs.
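
For reference, a minimal nginx sketch of that suggestion: a reverse proxy in front of the console that attaches the CORS headers itself. The ports and allowed origin are assumptions taken from the report above, not a recommended production setup:

    # nginx reverse proxy adding CORS headers in front of the MinIO Console.
    server {
        listen 8080;

        location / {
            proxy_pass http://127.0.0.1:9090;
            proxy_set_header Host $host;

            # Allow the browser app's origin; 'always' also covers error responses.
            add_header Access-Control-Allow-Origin "http://localhost:3000" always;
            add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
            add_header Access-Control-Allow-Methods "GET, POST, OPTIONS" always;

            # Answer CORS preflight requests directly.
            if ($request_method = OPTIONS) {
                return 204;
            }
        }
    }

The browser app would then talk to port 8080 instead of 9090, and the proxy forwards everything to the console unchanged apart from the added headers.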