IBM / nodejs-idb-connector

A JavaScript (Node.js) library for communicating with Db2 for IBM i, with support for queries, procedures, and much more. Uses traditional callback-style syntax.
MIT License

Inserting over ~3000 rows concurrently causes SQLSTATE=57011 SQLCODE=-904 #24

Closed abmusse closed 6 years ago

abmusse commented 6 years ago

Original report by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


I am attempting to insert a few thousand rows of data concurrently with the following code:

#!javascript

function writeDb2(data) { 
  // data is an array of arrays of rows to be inserted
  // each row consists of roughly 60 elements
  data.forEach(function(row) {
    let stmt = new db2a.dbstmt(db);

    // helper function to build the param array
    let bindingParams = prepareBindingParams(row);

    // definition below
    bindAndExecute(stmt, sqlInsert, bindingParams);
  });
}

function bindAndExecute(stmt, sqlQuery, bindingParams) {
  stmt.prepare(sqlQuery, function(out, err) {
    if (err) throw err;

    stmt.bindParam(bindingParams, function(err) {
      if (err) throw err;

      stmt.execute(function(out, err) {
        if (err) throw err;

        stmt.close();
      });
    });
  });
}

When I do, I get:

SQLSTATE=57011 SQLCODE=-904 RESOURCE UNAVAILABLE

When I checked the job log, it said that it was resource number 14.

How can I avoid that? Should I insert all values with a single insert statement?

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


Hey @rangercairns,

I tried to contact you by email at adc@us.ibm.com, but I get a user-unknown error. I wanted to tell you a little something about http://yips.idevcloud.com/wiki/index.php/PASE/PASENutshell. Is there another way to email you?

Cheers!

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


@ThePrez

I think this issue can be closed. Unfortunately, I can't really find the time to test the suggested solution, but I trust it works. Thank you.

@rangercairns Thank you, as well. This discussion was very educational, I really appreciate it.

abmusse commented 6 years ago

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


@KerimG et al, what should be the next action or resolution of this issue? We have covered several topics in the discussion, but I believe the initial issue is resolved with the reset() provided. Please clarify.

abmusse commented 6 years ago

Original comment by Xu Meng (Bitbucket: mengxumx, GitHub: dmabupt).


Per Tony's suggestion on 2018-03-13, as of v1.0.9 I have added the new API stmt.reset() to support resetting param bindings.
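
For illustration, here is a minimal sketch of re-using one prepared statement with the new reset() call. This is only a sketch: the table, column, and values are hypothetical, the callback signatures follow the maintainer's reproduction further down this thread, and reset() is assumed to be callable without a callback.

#!javascript

const db2a = require('idb-connector');

const conn = new db2a.dbconn();
conn.conn('*LOCAL');

const sql = 'INSERT INTO MYLIB.MYTABLE (NAME) VALUES(?)'; // hypothetical table
const stmt = new db2a.dbstmt(conn);

// Re-use one prepared statement: bind, execute, reset(), repeat.
function insertNext(names, done) {
  if (names.length === 0) {
    stmt.close(); // drop the handle only once, after the last row
    return done();
  }
  const params = [[names[0], db2a.SQL_PARAM_INPUT, 1]];
  stmt.bindParam(params, err => {
    if (err) throw err;
    stmt.execute(err => {
      if (err) throw err;
      stmt.reset(); // free only the param bindings, keep the handle
      insertNext(names.slice(1), done);
    });
  });
}

stmt.prepare(sql, err => {
  if (err) throw err;
  insertNext(['Jones', 'Vine', 'Smith'], () => console.log('done'));
});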

abmusse commented 6 years ago

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


@droessner, I agree that the JSON interface should be pursued. It is on the radar, but there are currently no dates to share.

abmusse commented 6 years ago

Original comment by Danny Roessner (Bitbucket: droessner, GitHub: droessner).


Another wild card: it may turn out that using the JSON-style interface SQL400Json from litmis/db2sock for both toolkit calls and db2 work is a fine answer. Somebody (not me) should write a little combined JSON toolkit/db2 driver to see whether it makes life much better for everyone scripting node on IBM i. Maybe talk with Jesse G. if your intuition reflects mine on 'pure JSON' being the answer for both db2 and the toolkit. Again, only a wild guess here.

@ThePrez Can you comment? We very much like the pure json approach of db2sock and would like to know if any plans have been made for any official implementation.

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Hah, I didn't mean to come across rude or anything. You coming to the POW3R conference in Hannover? I'll buy you a German beer :-)

We are cool.

No, I don't travel.

Also, like many quirky geeks, I don't drink. However, I can assure you, I am trustworthy (... more humour).

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


@rangercairns

(I am regretting involvement a bit already.)

Don't say stuff like that, you're breaking my heart!

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Will the idb-connector ever have its own connection pooling implementation like the node mysql client? Is bear.js supposed to be (or become) the connection pooling library for DB2?

I dunno. I explained your situation clearly to the IBM i co-op working on the next idb2 (he sits next door to me). He understands the idea that a single-connection QSQSRVR job can be overrun by doing many async stmt operations.

Anyway, the bear project is simply an example introducing two concepts. First, connection pooling (everybody is doing this wrong on IBM i). Second, result caching for better performance (a controversial argument that the truth of the data can be a white lie for performance). Feel free to investigate (or not).

However, design-wise, one can always argue where the line falls between the responsibility of the user script and the middleware (the db2a driver), that is, who bears responsibility for creating a connection pool. I suggest you open a direct dialogue on the new idb issues. I only work part time, so it is best to talk with the author of the new driver on his issue tracker. (I am regretting involvement a bit already.)

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


@rangercairns Hah, I didn't mean to come across rude or anything. You coming to the POW3R conference in Hannover? I'll buy you a German beer :-)

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Yes, wit can be singular. Old geeks do forget that humor is in the eye of the beholder. Alas, I will retire and younger folks will follow. Later.

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


@rangercairns

Will the idb-connector ever have its own connection pooling implementation like the node mysql client? Is bear.js supposed to be (or become) the connection pooling library for DB2?

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


... don't worry. I'm not offended or anything. I appreciate your help ...

Cool.

... more geek business 'async' db2a interfaces (understanding your driver 'as is')

One more bit of geek business about 'async' APIs in the db2a node driver. The current db2a 'async' operations (the db2a C code) use libuv (see uv_work_t in the C code). This is a common node technique that lets 'stuff' which is not normally threaded (the first version of the db2a node driver was not async) participate properly in 'async' (the current version of db2a).

Most important is how many worker threads are available to handle db2a 'async' operations at one moment (it's not 3000). Basically if you read the link above ... Its default size is 4, but it can be changed at startup time by setting the UV_THREADPOOL_SIZE environment variable to any value (the absolute maximum is 128).

So, the libuv worker threads play a factor in your connection pool design, should you choose to follow my suggestion. I suspect the number of pooled connections may best match UV_THREADPOOL_SIZE (only a theory). For more info ... a link with a fellow exploring UV_THREADPOOL_SIZE.
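
As a hedged aside, here is one way to experiment with the pool size. The usual Node.js caveat is that UV_THREADPOOL_SIZE must be set before the thread pool is first used, so exporting it in the shell before launching node is the safest route.

#!javascript

// Safest: set it in the environment, e.g.  UV_THREADPOOL_SIZE=16 node insert.js
// Setting it at the very top of the main script usually also works, because
// libuv creates its worker pool lazily, on first use.
process.env.UV_THREADPOOL_SIZE = process.env.UV_THREADPOOL_SIZE || '16';

console.log('libuv worker threads:', process.env.UV_THREADPOOL_SIZE);
// ...require('idb-connector') and start the async db2a work after this point...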

future db2a ...

Whether or not 'promises' become the next db2a API, we will still have to deal with 'async' in a rational way. To wit, I don't know whether node's current use of libuv will stay in style/vogue with node authors for 'async' db2a (the next generation). Certainly the natural aggregate 'async' interfaces of litmis/db2sock may have some influence (or not).

Another wild card: it may turn out that using the JSON-style interface SQL400Json from litmis/db2sock for both toolkit calls and db2 work is a fine answer. Somebody (not me) should write a little combined JSON toolkit/db2 driver to see whether it makes life much better for everyone scripting node on IBM i. Maybe talk with Jesse G. if your intuition reflects mine on 'pure JSON' being the answer for both db2 and the toolkit. Again, only a wild guess here.

Ok, happy coding.

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


@rangercairns I have only one request. Please write more succinctly and clearly. Avoid writing stuff like

demand ... then ... uf da ... slower ... and ... slower ... and ... slower ...

or

"After You, My Dear Alphonse

or your garage story. Those things add no value to your otherwise great responses and make them less approachable.

So, connection pooling it is. Thanks!

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


Oh, Oh! The world got silent. I am only trying to help you understand async programming.

Haha @rangercairns, don't worry. I'm not offended or anything. I appreciate your help. I'm on a business trip until Friday (and then again until Tuesday), so I don't really have much time to work on this issue.

All is good mate, thanks a lot :-)

I'll check out what you wrote in detail as soon as I can.

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Oh, Oh! The world got silent. I am only trying to help you understand async programming.

Well, I was trying to be nice. Your design with writeDb2(data) is very bad. I have the heart of a teacher, so I will try to help you. Let me get to the rest of the issues with writeDb2(data).

1) Review. We have a performance-mismatch design in writeDb2(data). Again, new db2a.dbstmt will happen 'way too fast' and will eat all the available handles before the actual inserts can run (depending on request load).

#!bash

function writeDb2(data) { <-- client will call this 'async' (bad news)
  data.forEach(function(row) {
    let stmt = new db2a.dbstmt(db); <-- This will occur 'way too fast'
    let bindingParams = prepareBindingParams(row); <- slower
    bindAndExecute(stmt, sqlInsert, bindingParams); <- wagon slower

2) It will not work anyway. Next, say you fix problem 1 and cure the speed mismatch (promises, task ordering, etc.). You will still fail, because you use only one connection for bindAndExecute (3000 uses at the same time). Basically, DB2 is not thread safe, so in an 'async' design with 3000 inserts at the same time on the same connection you will fail or corrupt some of the inserts. The old PASE driver for DB2 (libdb400.a) has no mutex locks to synchronize the inserts (bindAndExecute), so you will simply overrun the single connected QSQSRVR job.

#!bash

myapp.js               DB2 QSQSRVR proxy (server mode)
---------------        -------------------------------
-> db.dbconn()     --> one connection (wrong)
-> new db2a.dbstmt --> way too many statements (1-n run out)
-> bindAndExecute(1)-> process stmt 1 'async' (not safe)
-> bindAndExecute(2)-> process stmt 2 'async' (not safe)
:
-> bindAndExecute(n)-> process stmt n 'async' (not safe)

Note: Better news ... the litmis/db2sock project (a new libdb400.a) has a mutex at the connection level for each stmt use, so this may work with the new driver. That is, stmts will synchronize one at a time due to the lock in db2sock.

What to do?

Assuming you want to stay with the current (faulty) technology, you will need to use a connection pool to work around the missing mutex in the old PASE driver (node db2a -> libdb400.a).

Like this ... you will be doing inserts with a connection pool. The example below is a remove-by-id, not an insert, but the net 'pooled' effect is the same. That is, you will spread the insert blocks across many connection/QSQSRVR jobs. Essentially, you will have one connect + db2a.dbstmt + bindAndExecute in each of the connections. In other words, you 'trick' DB2 into seeing simply a bunch of separate jobs all inserting block data at the same time (no thread-safety worries here).

Note: The bear pool design is posted to litmis/nodejs examples/express_api_bears_node6. You do not have to follow it literally, but it presents the idea of 'connection pools', which is missing from your writeDb2(data) design.

#!javascript

Bear.prototype.removeById = function(myid, callback) {
  this.bpool.attach( function(bconn) {
    var sql = "select * from " + bconn.schema + "." + bconn.table + " where ID=" + myid;
    try {
      bconn.stmt.exec(sql, function (query) {
        bconn.free();
        if (query[0] == undefined) {
          bconn.detach();
          global._bear_callback_404(callback,false,global._bear_msg_10007,myid);
        } else {
          var sql = "delete from " + bconn.schema + "." + bconn.table + " where ID=" + myid;
          try {
            bconn.stmt.exec(sql, function (query) {
              bconn.detach();
              global._bear_callback_200(callback,false,global._bear_msg_10006,myid);
            });
          } catch(ex) {
            bconn.detach();
            global._bear_callback_404(callback,false,global._bear_msg_10001);
          }
        }
      });
    } catch(ex) {
      bconn.detach();
      global._bear_callback_404(callback,false,global._bear_msg_10001);
    }
  });
}

Graphically, the many-connection design looks like the following. Each connect + stmt + bindAndExecute set will run on a different QSQSRVR job.

#!bash

myapp.js               DB2 QSQSRVR prox(ies) (server mode)
---------------        -------------------------------
-> db.dbconn(1)-> new db2a.dbstmt(6) -> bindAndExecute(6)-> qsqsrvr job 1 stmt 'async' (safe)
-> db.dbconn(2)-> new db2a.dbstmt(7) -> bindAndExecute(7)-> qsqsrvr job 2 stmt 'async' (safe)
-> db.dbconn(3)-> new db2a.dbstmt(8) -> bindAndExecute(8)-> qsqsrvr job 3 stmt 'async' (safe)
:
-> db.dbconn(n-1)-> new db2a.dbstmt(n) -> bindAndExecute(n)-> qsqsrvr job n stmt 'async' (safe)

Note: Of course, there is a performance wall to hit when you overrun the pre-started QSQSRVR jobs. That is, it will run great for a while, until stress causes many, many, many QSQSRVR jobs to be created to match demand ...
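
Below is a minimal sketch of the many-connection layout diagrammed above, assuming the idb-connector callback API shown elsewhere in this thread (dbconn/conn, dbstmt, prepare/bindParam/execute/close). The pool size, table, and column are illustrative only, and the connection tear-down call is left as a comment because the exact method name is not shown in this thread.

#!javascript

const db2a = require('idb-connector');

const POOL_SIZE = 8; // e.g. roughly match UV_THREADPOOL_SIZE; tune for your box
const sql = 'INSERT INTO MYLIB.MYTABLE (NAME) VALUES(?)'; // hypothetical table

// One worker per connection: each pulls the next row off a shared queue,
// runs prepare/bind/execute/close, then moves on. At most POOL_SIZE
// statements are ever open, each against its own QSQSRVR job.
function startWorker(queue, onDrained) {
  const conn = new db2a.dbconn();
  conn.conn('*LOCAL');

  (function next() {
    const row = queue.shift();
    if (row === undefined) {
      // tear down conn here (the exact disconnect call depends on the driver version)
      return onDrained();
    }
    const stmt = new db2a.dbstmt(conn);
    stmt.prepare(sql, err => {
      if (err) throw err;
      stmt.bindParam([[row, db2a.SQL_PARAM_INPUT, 1]], err => {
        if (err) throw err;
        stmt.execute(err => {
          if (err) throw err;
          stmt.close();
          next();
        });
      });
    });
  })();
}

const rows = ['Jones', 'Vine', 'Smith']; // ~3000 rows in practice
let running = POOL_SIZE;
for (let i = 0; i < POOL_SIZE; i++) {
  startWorker(rows, () => { if (--running === 0) console.log('all inserts done'); });
}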

Fixing the current technology

Well, this is tough, because conventional database CLI thinking is not overly good for 'async'. I already described one possible single-call technique using SQL400JsonAsync. Of course, people MAY want to 'go faster' than a JSON design, so a 'similar binary' interface may be needed. On the other hand, it seems node programs want to reduce everything 'data' to JSON anyway (cutting out the db2a middle man).

But i digress ... back to current.

I have already described in a previous post the idea of stmt.reset(). However, for that 'slimmer resource' design to work, you will still need some sort of check-in/check-out connection pool design, wherein excess stress usage of the 'inserts' is synchronized (probably via the dreaded 'promises').

Always Javascript (old guy editorial)?

I love the elegance of JavaScript's async callback design. I love nested callbacks that fire willy-nilly with no obvious order. This may be the best language ever created due to 'async'. That is, I consider frameworks like 'promises' or 'await' to be for people who can't write, or don't like writing, JavaScript. However, these frameworks are very popular with some developers.

If I may be bold, most casual web design folks should stay with Python or PHP, as these languages really cater to 'procedural' or 'traditional' database designs. Node 'async' can be difficult for many, especially when trying to map it onto a non-async database like db2 (traditional CLI APIs). Again, I am not picking on you per se, just trying to help you understand 'async' (the badly designed function writeDb2). You may choose to ignore my teaching as you wish.

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


I expected something more like ... Where several stmt objects are released before others are created or executed.

My exact async fear is that all the stmts get created (3000) before any bind/insert operations run. That is, the async call writeDb2(data) creates statements like crazy (at the speed of light), followed by what is essentially a covered-wagon operation to insert the blocks of records (git along, ol' mule). Also, no amount of yelling at the author of the insert mule (block size) will actually fix a relative timing problem. The real problem is 'traditional database' API thinking itself (see below).

Note: Worse, in a web daemon, multiple requesters (browsers) multiply the problem: 3000 stmts * 3000 requests = 'way too many handles'.

The driver design is wrong for async (db2sock to the rescue ... maybe) ...

The following is only my opinion (years of experience). Do not read it unless you want to understand the truth and the issues with the current design using the existing fine-grained CLI APIs.

Problem: Async 'operations' only work when the granularity of the operations is significant. That is, fine-grained CLI operations don't really work; for instance, DbStmt::DbStmt rc = SQLAllocStmt(conn->connh, &stmth); runs disproportionately fast compared to the prepare/bind/execute operations that follow to insert the block of records.

Answer: You need composite async 'APIs' that do aggregate operations (connect/stmt/prepare/bind/execute/fetch). That is, new aggregate CLI APIs, for instance the litmis/db2sock toolkit/db2 driver SQL400JsonAsync(injson, outjson), wherein the entire essence of the whole operation occurs in one async call.

#!bash

bash-4.3$ ./test1000_sql400json32 ../json/j0601_query_qcustcdt.json
input(5000000):
{"query":[{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
        {"parm":[{"value":"Jones"},{"value":"Vine"}]},
        {"fetch":[{"rec":"all"}]}
       ]}

output(471):
{"script":[{"query":[{"handle":3},
{"stmt":"select * from QIWS/QCUSTCDT where LSTNAM=? or LSTNAM=?"},
{"records":[
{"CUSNUM":839283,"LSTNAM":"Jones","INIT":"B D",
"STREET":"21B NW 135 St","CITY":"Clay","STATE":"NY","ZIPCOD":13041,
"CDTLMT":400,"CHGCOD":1,"BALDUE":100.00,"CDTDUE":0.00},
{"CUSNUM":392859,"LSTNAM":"Vine","INIT":"S S"
,"STREET":"PO Box 79","CITY":"Broton","STATE":"VT",
"ZIPCOD":5046,"CDTLMT":700,"CHGCOD":1,"BALDUE":439.00,
"CDTDUE":0.00},
"SQL_NO_DATA_FOUND"
]}]}]}

result:
success (0)

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


@rangercairns

you could dump stmt number as your program runs along to see if it is climbing up the bean stalk.

Good idea. I did that and found out that the code doesn't exactly run asynchronously. At least not in the way I expected.

My output looks something like

"Created stmt Object # 1"
.
. (the same statement increasing in number)
.
"Created stmt Object # 3000"
"Executing stmt # 1"
.
.
.
"Executing stmt # 3000"
"Closing stmt # 1"
.
.
.
"Closing stmt # 3000"

I expected something more like:

"Created stmt Object # 1"
"Created stmt Object # 2"
"Executing stmt # 1"
"Created stmt Object # 3"
"Executing stmt # 2"
"Closing stmt # 1"
"Executing stmt # 3"
"Closing stmt # 2"
"Closing stmt # 3"

Where several stmt objects are released before others are created or executed. Maybe the step from binding to executing is too slow, I don't know.
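
For illustration, here is a sequential variant of the original writeDb2() that only creates the next dbstmt after the previous one has been closed, which should produce interleaving closer to the expectation above. This is a sketch; db2a, db, sqlInsert, and prepareBindingParams are the names from the snippet in the original report.

#!javascript

function writeDb2Sequential(data, done) {
  (function next(i) {
    if (i >= data.length) return done();
    // only one statement handle is alive at any moment
    let stmt = new db2a.dbstmt(db);
    let bindingParams = prepareBindingParams(data[i]);
    stmt.prepare(sqlInsert, function(out, err) {
      if (err) throw err;
      stmt.bindParam(bindingParams, function(err) {
        if (err) throw err;
        stmt.execute(function(out, err) {
          if (err) throw err;
          stmt.close();
          next(i + 1);
        });
      });
    });
  })(0);
}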

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


suggestion to authors of idb team

I suggest offering two stmt close APIs.

1) The current stmt.close(), aka SQLFreeStmt(hstmt, SQL_DROP).

2) An additional new stmt.reset(), aka SQLFreeStmt(hstmt, SQL_RESET_PARAMS), so that block writes can better re-use the stmt handle (different block sizes).

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Another thought for debugging ... you could dump stmt number as your program runs along to see if it is climbing up the bean stalk.

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


What exactly is a descriptor?

Well, CLI is really simple. That is, 32K 'handles/descriptors' are allowed in a process. Each time you open a 'connection', 'statement', etc., one of the slots in the DB2 CLI array is marked 'in use' (a technical simplification, but it really is an array).

BTW -- I can't recall whether 32K is the exact limit in modern DB2 days (may be old info). The point is, you should never have that many actively 'in use' handles in a single process anyway (unless you are Google at millions of hits/second).

Is it possible to increase the maximum number of descriptors I can allocate?

No.

Is my code not deallocating? Or not deallocating fast enough?

In this case, 'resource number 14' would indicate you used them all up. The simple speculative answer: you allocated statements but never closed them (assuming this is not a case of a missing PTF where DB2 itself is bad, which is less likely but possible).

So, looking at your script code ...

BTW -- Technically, regarding prepareBindingParams: if each write block of records is always the same row size (fixed-size columns) and the same number of rows (record count), then the blocks are exactly the same. You should be able to simply re-use a single handle and re-bind the parameters with the next set of records until the last insert (a different number of rows means a different block size).

How did you find the recovery text?

BTW -- Only helping, not trying to take over the issue. The folks in China are probably sleeping by now (other side of the world).

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


Hey @rangercairns, thank you. How did you find the recovery text? That's actually a bit helpful.

What exactly is a descriptor? Is it possible to increase the maximum number of descriptors I can allocate? Is my code not deallocating? Or not deallocating fast enough?

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


Oops ... sorry ... I see you already found it ... When I checked the job log, it said that it was resource number 14.

abmusse commented 6 years ago

Original comment by Tony Cairns (Bitbucket: rangercairns, GitHub: rangercairns).


There are many possibilities ...

Perhaps check the joblog while the program runs and see which limit you hit (below). Good luck.

BTW - I am assuming you understand that your node program can hold the connection open while you observe the attached QSQSRVR joblog (aka, don't close the connection).

#!bash

SQL0904
Message Text:   Resource limit exceeded.
Cause Text:     Resource limit type &1 exceeded with reason code &2. A list of the limit types follows:

    Type 1 indicates that the user profile storage limit or the machine storage limit was exceeded.
    Type 2 indicates that the machine lock limit was exceeded.
    Type 3 indicates that the query resource limit was exceeded. For more information see the previously listed message CPD4365.
    Type 4 indicates that a journal error has occurred.
    Type 5 indicates that the commit lock limit was exceeded.
    Type 6 indicates that the maximum size of the table has been reached.
    Type 7 indicates that the maximum size of the prepared statement area has been reached.
    Type 8 indicates that the maximum number of cursors have been opened for this job.
    Type 9 indicates that the maximum number of entries in the lock table have been used for this job.
    Type 12 indicates that the maximum DRDA communications buffer size was exceeded.
    Type 13 indicates that the maximum amount of blocked data was exceeded.
    Type 14 indicates that the maximum amount of descriptor space has been allocated.
    Type 15 indicates that the maximum size of the parameter default area has been reached.

Recovery Text:  Do one of the following: If this is error type 1, contact the security officer to increase the user profile storage limit, or delete some objects to free up storage and then try the request again.

    If this is error type 2, then try the operation when the number of machine locks held has decreased.
    If this is error types 3, 4, or 5, see previously listed messages in the job log for recovery information.
    If this is error type 6, Some of the rows from this table must be moved to another table.
    If this is error type 7, issue a COMMIT or ROLLBACK without the HOLD clause before issuing anymore PREPARE statements.
    If this is error type 8, issue a CLOSE before issuing any more OPEN statements.
    If this is error type 9, issue a COMMIT or ROLLBACK without the HOLD clause.
    If this is error type 12, reduce the total size of column data supplied with the SQL request.
    If this is error type 13, reduce the number of rows in the block.
    If this is error type 14, reduce the number of allocated descriptors with the DEALLOCATE DESCRIPTOR statement.
    If this is error type 15, simplify the parameter defaults.

SQLCODE or SQLCODEs:    +904, -904
SQLSTATE or SQLSTATEs:  01687, 57011

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


Is it possible that I need one of these PTFs?

http://www-01.ibm.com/support/docview.wss?uid=nas3SI44448

http://www-01.ibm.com/support/docview.wss?uid=nas3SI30791

http://www-01.ibm.com/support/docview.wss?uid=nas2SE49099

I don't really understand whether the descriptions of the problems those PTFs fix match the problem I seem to have.

abmusse commented 6 years ago

Original comment by Kerim Gueney (Bitbucket: KerimG, GitHub: KerimG).


@dmabupt, I'm inserting ~60 values per insert. I checked out the spoolfile and got the following:

Resource Limit Exceeded. (Type 14, Reason Code 542) error. Type 14 indicates that the maximum amount of descriptor space has been allocated.

Maybe that helps a bit more.

EDIT: wrong error code

abmusse commented 6 years ago

Original comment by Xu Meng (Bitbucket: mengxumx, GitHub: dmabupt).


@KerimG, I tried to insert 5000 rows but cannot recreate the issue.

#!javascript

let db = require('idb-connector');
let dbConn = new db.dbconn();
let limit = parseInt(process.argv[2]);
let sql = "INSERT INTO MOEXU.TEST1 (NAME) VALUES(?)";
dbConn.conn("*LOCAL");
for(let i=0; i<limit; i++) {
  let stmt = new db.dbstmt(dbConn);
  stmt.prepare(sql, err => {
    let id = Math.floor(Math.random() * 1000);
    let name = "XUMENG" + id;
    let params = [
       [name, db.SQL_PARAM_INPUT, 1]
    ];
    stmt.bindParam(params, err => {
      stmt.execute(err => {
        stmt.close();
      });
    });
  });
}

#!shell

bash-4.4$ node 0002-async_insert.js 5000
......

abmusse commented 6 years ago

Original comment by Jesse G (Bitbucket: ThePrez, GitHub: ThePrez).


@xumeng, can you please look into this?