TheHive-Project / TheHive

TheHive: a Scalable, Open Source and Free Security Incident Response Platform
https://thehive-project.org
GNU Affero General Public License v3.0

[Bug] S3 functionality throws an error due to missing uploadId #2417

Closed: kesh-stripe closed this issue 1 year ago

kesh-stripe commented 1 year ago

Request Type

Bug

Work Environment

| Question | Answer |
| --- | --- |
| OS version (server) | Docker |
| OS version (client) | MacOS, Chrome |
| Virtualized Env. | True |
| Dedicated RAM | 16 GB |
| vCPU | 8 |
| TheHive version / git hash | 4.1.23-1, sha256:4380da1ec88ee09512eb1bd343aa3a468cf56e000df52106fa63244d1e24970e |
| Package Type | Docker |
| Database | Cassandra |
| Index type | Elasticsearch |
| Attachments storage | S3 |
| Browser type & version | Chrome Version 103.0.5060.53 |

Problem Description

Uploading attachments to case tasks throws an S3-related error. I've verified that this host/Docker image has full access to the bucket in question via s3:*.

Steps to Reproduce

  1. Go to a case task
  2. Click "Add new task log"
  3. Click "Add attachment"
  4. Choose file, and click "Add Log"
  5. Observe the error:
[error] o.t.s.u.Retry [00000236|12080214] uncaught error, not retrying
akka.stream.alpakka.s3.FailedUpload: Upload part 1 request failed. Response header: (HttpResponse(400 Bad Request,List(x-amz-request-id: JVK1DM95V2F568E5, x-amz-id-2: ePLuIaWk0S9t9AjXUJOWrx1aV2yDa9bYpAbMcYwU6HcFEIxxKJ6avcW16IMVF84NNHXk9XTiO4k=, Date: Tue, 23 Aug 2022 19:11:02 GMT, Server: AmazonS3, Connection: close),HttpEntity.Chunked(application/xml),HttpProtocol(HTTP/1.1))), response body: (<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidArgument</Code><Message>This operation does not accept partNumber without uploadId</Message><ArgumentName>partNumber</ArgumentName><ArgumentValue>partNumber</ArgumentValue><RequestId>JVK1DM95V2F568E5</RequestId><HostId>ePLuIaWk0S9t9AjXUJOWrx1aV2yDa9bYpAbMcYwU6HcFEIxxKJ6avcW16IMVF84NNHXk9XTiO4k=</HostId></Error>).
at akka.stream.alpakka.s3.FailedUpload$.apply(model.scala:130)
at akka.stream.alpakka.s3.impl.S3Stream$.$anonfun$completionSink$3(S3Stream.scala:763)
at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307)
at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$2(ContextPropagatingDisptacher.scala:57)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.thp.scalligraph.DiagnosticContext$$anon$2.withContext(ContextPropagatingDisptacher.scala:77)
at org.thp.scalligraph.ContextPropagatingDispatcher$$anon$1.$anonfun$execute$1(ContextPropagatingDisptacher.scala:57)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:49)
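
For context on the failure mode: in S3's multipart upload protocol, CreateMultipartUpload returns an UploadId, and every subsequent UploadPart request must carry both partNumber and uploadId as query parameters. With path-style addressing, an UploadPart request looks roughly like this (bucket and key are placeholders):

PUT /<bucket>/<key>?partNumber=1&uploadId=<UploadId> HTTP/1.1
Host: s3.us-west-2.amazonaws.com

The 400 above indicates the part upload was sent with partNumber but no uploadId, which S3 rejects before reading the body.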

Complementary information

S3 Config

storage {
  provider: s3
  s3 {
    bucket = <bucket name>
    readTimeout = 1 minute
    writeTimeout = 1 minute
    chunkSize = 1 MB
    endpoint-url = "https://s3.us-west-2.amazonaws.com"
    #accessKey = "xxx"
    #secretKey = "xxx"
    region = "us-west-2"
  }
}
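
With accessKey/secretKey commented out as above, credentials presumably come from the AWS default provider chain (environment variables, instance profile, and so on). If static keys were needed instead, a sketch of the Alpakka-level equivalent, assuming Alpakka's standard aws.credentials keys:

alpakka.s3.aws.credentials {
  # Hypothetical static-credentials variant; not needed when the default chain resolves a role.
  provider = static
  access-key-id = "xxx"
  secret-access-key = "xxx"
}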
kesh-stripe commented 1 year ago

I believe this issue is fixed by adding the following to your top-level config:

alpakka.s3.path-style-access = force
alpakka.s3.aws.credentials.provider = default
alpakka.s3.access-style = virtual

This should be added to the documentation for future users.
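
For future users, a minimal sketch of how the two pieces fit together in application.conf, assuming TheHive 4's standard layout (bucket name and region are placeholders carried over from above):

storage {
  provider: s3
  s3 {
    bucket = "<bucket name>"
    readTimeout = 1 minute
    writeTimeout = 1 minute
    chunkSize = 1 MB
    endpoint-url = "https://s3.us-west-2.amazonaws.com"
    region = "us-west-2"
  }
}

# Alpakka settings sit at the top level of the file, outside the storage block.
alpakka.s3.path-style-access = force
alpakka.s3.aws.credentials.provider = default
alpakka.s3.access-style = virtual

(Note that newer Alpakka releases deprecate path-style-access in favor of access-style, so these two lines may overlap; keeping both matches what worked here.)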