orbitdb-archive / orbit-db-http-api

A HTTP API Server for the OrbitDB distributed peer-to-peer database

Unhandled Promise Rejection Warning #26

Closed: JoaoFSCruz closed this 4 years ago

JoaoFSCruz commented 4 years ago

I'm currently participating in a European project called EUNOMIA where we need to decentralise information. We rely on decoupled services and therefore have a main server (developed in Spring) responsible for managing data through the OrbitDB HTTP API.

To assess the robustness of our server we ran a load test, sending a large number of requests. During the test, our OrbitDB HTTP API instance threw a lot of errors like these:

[screenshot: repeated UnhandledPromiseRejectionWarning errors in the OrbitDB HTTP API logs]

After a while, our server consistently received an error indicating that the OrbitDB HTTP API instance had closed the connection without a response.

Server Error:

org.springframework.web.client.ResourceAccessException: I/O error on POST request for "http://orbitdb:3000/db/zdpuAnPwqKYn7gikwj4w2hDXKnJHFNJKkzwNLyL1qsrg9gZ8j%2Fmodel/put": Unexpected end of file from server; nested exception is java.net.SocketException: Unexpected end of file from server

I've tried different versions of OrbitDB, as another issue suggested, but without any luck.

Does anyone have an idea how to fix this problem? Many thanks :pray:

aphelionz commented 4 years ago

This would happen when multiple processes are trying to open the same keystore - which would make sense during a load test... I'm wondering if this is Orbit, or if it's how the http-api is marshalling requests 🤔

What do you think, @phillmac ?
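
For illustration, a minimal sketch of the keystore contention described above (not code from this repo; it assumes js-ipfs and orbit-db ~0.2x). Pointing two OrbitDB instances at the same directory typically fails on the LevelDB keystore lock, much like two http-api processes hitting the same keystore would:

const IPFS = require('ipfs')
const OrbitDB = require('orbit-db')

async function main () {
  const ipfs = await IPFS.create({ repo: './ipfs-repo' })

  // First instance opens ./orbitdb (and its keystore) normally.
  const orbitdb1 = await OrbitDB.createInstance(ipfs, { directory: './orbitdb' })

  // A second instance pointed at the same directory contends for the keystore lock,
  // similar to two processes opening the same keystore during a load test.
  try {
    const orbitdb2 = await OrbitDB.createInstance(ipfs, { directory: './orbitdb' })
    await orbitdb2.disconnect()
  } catch (err) {
    console.error('second instance failed:', err.message)
  }

  await orbitdb1.disconnect()
  await ipfs.stop()
}

main().catch(console.error)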

phillmac commented 4 years ago

The http-api server spawns a single instance of orbitdb and tries very hard to only ever have a single instance of any given db open, unless there is a logic flaw somewhere. If there are two http-api processes running, that could easily cause this error. @aphelionz, the put handler looks something like this:

handler: dbMiddleware(async (db, request, _h) => {
    const params = request.payload;

    if (db.type === 'keyvalue') {
        let key, value;
        if (!params['key']) {
            // No explicit 'key' field: treat the first payload property as the key/value pair
            [key, value] = [Object.keys(params)[0], Object.values(params)[0]];
        } else {
            ({ key, value } = params);
        }
        return { hash: await db.put(key, value) };
    } else {
        // Other store types take the whole payload as the entry
        return { hash: await db.put(params) };
    }
})

So there's really no internal marshaling at all. The hapi server will process requests as fast as it can possibly pass them off to orbitdb.

Do we have a release that includes that PR for an internal operations marshaling queue?
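
For illustration, a minimal sketch of what an in-process marshaling queue could look like (this is not the implementation from that PR): each write is chained onto a per-db promise, so puts reach orbitdb one at a time even though hapi accepts requests concurrently.

// Map of db address -> tail of the pending-operation chain
const queues = new Map()

function enqueue (dbAddress, operation) {
  const tail = queues.get(dbAddress) || Promise.resolve()
  // Chain the new operation after the previous one, regardless of whether it failed
  const next = tail.catch(() => {}).then(operation)
  queues.set(dbAddress, next)
  return next
}

// Usage inside a handler like the one above:
// return { hash: await enqueue(db.address.toString(), () => db.put(key, value)) }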

aphelionz commented 4 years ago

Yeah, that looks like it should work to me! @JoaoFSCruz do you have your load balancing code handy?

phillmac commented 4 years ago

Perhaps this might be fixed by https://github.com/orbitdb/orbit-db-store/pull/85. I might try merging that PR into the development fork and see if it fixes anything.

phillmac commented 4 years ago

Another fix might be to use an external cache, e.g. redis. Is this a viable solution for you?

JoaoFSCruz commented 4 years ago

Wow, thanks a lot for the fast responses!

@aphelionz We don't have load-balancing code in our server, and I'm not sure if we'll be adding it. Either way, I'll suggest doing so. Thanks!

@phillmac I'll keep an eye out for the merge and will test as soon as this possible fix is added.

@phillmac I'm not seeing how that could help. For query requests it would help a lot (and I'll suggest implementing it, thank you!), but for creating objects, as I described above, there's not much it can do.

phillmac commented 4 years ago

I was thinking that using a cache like Redis, which can handle concurrent writes, might fix your problem in the short term.
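
For illustration, a rough sketch of that idea (assuming ioredis; the queue name and drain loop are made up for this example): concurrent writes are absorbed into a Redis list, and a single consumer replays them into OrbitDB sequentially.

const Redis = require('ioredis')
const redis = new Redis() // assumes a local Redis instance

// Producers (e.g. request handlers) just push the payload into a Redis list
async function bufferWrite (payload) {
  await redis.rpush('orbitdb:pending-writes', JSON.stringify(payload))
}

// A single consumer drains the list and performs the actual db.put calls one by one
async function drain (db) {
  for (;;) {
    const raw = await redis.lpop('orbitdb:pending-writes')
    if (!raw) break
    const { key, value } = JSON.parse(raw)
    await db.put(key, value)
  }
}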

aphelionz commented 4 years ago

@phillmac any luck with that orbit-db-store PR?

JoaoFSCruz commented 4 years ago

I'll be doing some more tests, and as soon as I have some results I'll post here.

phillmac commented 4 years ago

@JoaoFSCruz I noticed you were using Docker; phillmac/orbit-db-http-api-dev:debug is the bleeding-edge build, warts and all. I've included orbitdb/orbit-db-store#85 in the latest build, so you may like to test whether it fixes this issue. I haven't been able to replicate the problem yet, so I can't say for sure.

JoaoFSCruz commented 4 years ago

Yeah, I'm using Docker, but not the official image. I pulled your image, made some adjustments to the query feature (which I'll use to create a PR someday, hopefully soon :crossed_fingers:), and disabled HTTPS.

What do I need to run docker-compose with that image?

Right now I have this (which was working with our custom image):

  orbitdb:
    container_name: orbitdb
    image: phillmac/orbit-db-http-api-dev:debug
    depends_on: 
      - ipfs
      - ipfs-cluster
    ports: 
      - "3000:3000"
    volumes:
      - ./orbitdb

The tests I conducted with our custom image went rather well. Only some requests threw an error indicating it closed the connection without a response.

We used JMeter; here is the test plan used, including the results: load-test.zip.
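
For illustration, a rough sketch of the kind of concurrent load such a test generates against the put endpoint seen in the error above (the URL, db address, and payload shape are placeholders, assuming a keyvalue store and node-fetch):

const fetch = require('node-fetch')

const url = 'http://localhost:3000/db/<db-address>/put' // placeholder db address

async function loadTest (n = 200) {
  // Fire n POSTs concurrently, roughly what the load test does
  const requests = Array.from({ length: n }, (_, i) =>
    fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ key: `key-${i}`, value: `value-${i}` })
    })
  )
  const results = await Promise.allSettled(requests)
  console.log(results.filter(r => r.status === 'rejected').length, 'requests failed')
}

loadTest()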

JoaoFSCruz commented 4 years ago

@phillmac I couldn't use your image, I kept getting an error :man_shrugging: (error code 0); it just exited and I didn't know where to get logs. But no worries!

I went to your repo and cloned it, switched to the debug branch, and altered your Dockerfile. I ran the same load test (and a heavier one) and it worked like a charm! :tada:

I took a look inside the repo and saw it has a lot more functionality. How stable is this version?

phillmac commented 4 years ago

@JoaoFSCruz https://github.com/phillmac/orbit-db-http-api-dev is much less stable than this repo. It's bleeding edge, proof-of-concept, etc., and it often gets broken. The problem is that I've let these two repos drift much too far apart to be able to merge new features.

Here's an example docker-compose.yml:

version: "3.7"

services:
  ipfs:
    image: ipfs/go-ipfs:v0.4.22
    restart: unless-stopped
    command: ["daemon", "--migrate=true", "--enable-gc", "--enable-namesys-pubsub"]
    volumes:
      - /ipfs_data:/data/ipfs
    ports:
      - 0.0.0.0:4001:4001/tcp
    networks:
      - ipfs
  orbitdb-api:
    image:  phillmac/orbit-db-http-api-dev:debug
    restart: unless-stopped
    command: ["src/cli.js", "api", "--debug"]
    init: TRUE
    environment:
      IPFS_HOST: 'ipfs'
      ORBITDB_DIR: '/orbitdb'
      FORCE_HTTP1: 'true'
      ANNOUNCE_DBS: 'true'
      LOG: 'DEBUG'
    depends_on:
      - ipfs
    ports:
      - 0.0.0.0:3000:3000/tcp
    networks:
      - ipfs
    volumes:
      - orbitdb-data:/orbitdb
networks:
  ipfs:
volumes:
  orbitdb-data:

A second example with Let's Encrypt certs:

 orbitdb-api:
    image:  phillmac/orbit-db-http-api-dev:debug
    restart: unless-stopped
    command: ["src/cli.js", "api", "--debug"]
    init: TRUE
    environment:
      IPFS_HOST: 'ipfs'
      ORBITDB_DIR: '/orbitdb'
      HTTPS_CERT: '/certs/live/xxxx.xxxx.xxxx/fullchain.pem'
      HTTPS_KEY: '/certs/live/xxxx.xxxx.xxxx/privkey.pem'
      ALLOW_HTTP1: 'true'
      ANNOUNCE_DBS: 'true'
      DEBUG_QUERY: 'true'
      LOG: DEBUG

    depends_on:
      - ipfs
    ports:
      - 0.0.0.0:3000:3000/tcp
    networks:
      - ipfs
    volumes:
      - type: volume
        source: orbitdb-data
        target: /orbitdb
      - type: volume
        source: cert-data
        target: /certs
        read_only: true

Note: use FORCE_HTTP1: 'true' without SSL, and ALLOW_HTTP1: 'true' with SSL.

JoaoFSCruz commented 4 years ago

Thank you @phillmac! Worked perfectly!

aphelionz commented 4 years ago

Closing this for now, good work gents!