And you missed the linkedhashmap dependency; you can find it in the distribution.
concurrentlinkedhashmap-lru-1.4.1.jar
yes, you can see it is running and the WAL is truncated automatically: http://screencloud.net/v/22t1
WAL files from 18 to 22
hm, now it is growing huge, but I suppose I will find the issue in my test ))
ok, can you also reproduce it?
yes
super-)
and it seems a bug was found. )) Let's wait 30 min, I will run it 2 times.
Just for next time: if you need to remove all data, do not use the "delete all" action. Use truncate instead, otherwise insertions will be slow.
truncate?
the truncate class ... command
ok, I see, thanks)
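(For reference, a minimal sketch of clearing a class this way, assuming the OrientDB 2.x Document API; the class name Person, the database path, and the credentials are placeholders.)

```java
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.sql.OCommandSQL;

public class TruncateExample {
    public static void main(String[] args) {
        // open an existing plocal database (path and credentials are placeholders)
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/tmp/mydb").open("admin", "admin");
        try {
            // TRUNCATE CLASS rebuilds the class's clusters instead of deleting
            // records one by one, so later inserts stay fast
            db.command(new OCommandSQL("TRUNCATE CLASS Person")).execute();
        } finally {
            db.close();
        }
    }
}
```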
Hi, now it works, but, as you said, after a delete it is very slow. Is it possible to restructure the database so that I can insert records at the same speed as before the delete? (For my purposes I sometimes need to delete 80-90 percent of a class's records.)
How can I speed up my inserts? You say it is possible to reach up to 200,000 inserts per second on common hardware. I have an i7, an SSD, and 16 GB RAM, but my maximum speed is about 11,000 records per second. Can you upload a benchmark test that achieves 200,000 inserts per second, so I can run it on my hardware?
Alexander, do not use "delete all"; use truncate, and it will not be slow.
Best regards, Andrey Lomakin.
I see,
- but what can I do if I need to delete not all of the records, but only 80 or 90 percent of them? Should I use "truncate record"?
- is it possible to restructure the database after a "massive delete"? Maybe with backup/restore?
- what about a "speed-insert" example?
Alexander, you could create multiple clusters and just drop the clusters that contain the data you want to drop. This would be super fast and efficient. What kind of data do you have? How do you filter the 80/90% of data you delete?
Thanks, it is a good idea to use multiple clusters for storing data and to drop them by cluster. I will try it.
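(A rough sketch of the multi-cluster approach suggested above, assuming the OrientDB 2.x Document API; the class name LogEntry and the cluster name logentry_2015_02 are hypothetical.)

```java
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.metadata.schema.OClass;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class ClusterPerPeriod {
    public static void main(String[] args) {
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/tmp/clusterdemo").create();
        try {
            // one class, one extra cluster per period (e.g. per month)
            OClass logClass = db.getMetadata().getSchema().createClass("LogEntry");
            int clusterId = db.addCluster("logentry_2015_02");
            logClass.addClusterId(clusterId);

            // write the records of that period directly into its cluster
            new ODocument("LogEntry").field("msg", "hello").save("logentry_2015_02");

            // deleting the whole period is just detaching and dropping the cluster,
            // no record-by-record delete
            logClass.removeClusterId(clusterId);
            db.dropCluster(clusterId, true);
        } finally {
            db.close();
        }
    }
}
```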
You write that OrientDB can store up to 220,000 records per second on common hardware. What type of records? Which API? Do you have an example that achieves that insert speed on your hardware?
220k is for massive insertion in multiple threads, no WAL, no indexes, and the Document API with documents of 6 fields.
plocal or remote?
plocal.
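(A rough sketch of that kind of multi-threaded massive-insert benchmark, assuming the OrientDB 2.x Document API; the thread count, class name, field names, and database path are placeholders, and actual throughput depends on hardware.)

```java
import com.orientechnologies.orient.core.config.OGlobalConfiguration;
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.intent.OIntentMassiveInsert;
import com.orientechnologies.orient.core.record.impl.ODocument;

public class MassiveInsertBench {
    private static final String URL = "plocal:/tmp/benchdb";
    private static final int THREADS = 8;
    private static final int DOCS_PER_THREAD = 250_000;

    public static void main(String[] args) throws InterruptedException {
        OGlobalConfiguration.USE_WAL.setValue(false);  // no WAL, as in the quoted figure

        ODatabaseDocumentTx setup = new ODatabaseDocumentTx(URL).create();
        setup.getMetadata().getSchema().createClass("Item");  // no indexes defined
        setup.close();

        Thread[] workers = new Thread[THREADS];
        long start = System.currentTimeMillis();
        for (int t = 0; t < THREADS; t++) {
            workers[t] = new Thread(() -> {
                // every thread uses its own database instance
                ODatabaseDocumentTx db = new ODatabaseDocumentTx(URL).open("admin", "admin");
                db.declareIntent(new OIntentMassiveInsert());
                for (int i = 0; i < DOCS_PER_THREAD; i++) {
                    ODocument doc = new ODocument("Item");   // document with 6 plain fields
                    doc.field("f1", i).field("f2", i).field("f3", i)
                       .field("f4", i).field("f5", i).field("f6", "value-" + i);
                    doc.save();
                }
                db.declareIntent(null);
                db.close();
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();

        long elapsed = System.currentTimeMillis() - start;
        System.out.println((THREADS * DOCS_PER_THREAD * 1000L / elapsed) + " docs/sec");
    }
}
```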
thanks for your advice)
just for info, here are my multithread test results:
Hi,
The OOM error does not seem to be fixed yet. I am using 2.2-alpha. Can you please point me to the JIRA for this issue?
Regards, Sarit
Hi @lvca .... I am getting this error when inserting records in multiple threads. I am using version 2.2.21.
505 HTTP Version Not Supported: {
"errors": [
{
"code": 505,
"reason": 505,
"content": "java.lang.OutOfMemoryError: GC overhead limit exceeded"
}
]
}
Is the issue fixed? Any ideas how I can get this resolved?
@nishantkumar1292 OOM is too general an issue and may be caused by many factors. Do you have a heap dump generated by the OOM in the server directory?
@laa I have no idea about the heap dump. Where can I find the heap dump?
I am also getting Request Timeout errors for multiple queries. Is this a consequence of the above error?
I am running more than 1500 threads in parallel, each performing one of the CREATE, READ, and UPDATE operations.
I also saw the command cache option INVALIDATE_ALL, which removes all the query results at every Create, Update, and Delete operation. This is faster than PER_CLUSTER if many writes occur. Will this help?
@nishantkumar1292 how many cores do you have? Yes, it is possible that all memory was consumed by the handling of temporary data generated during query processing. Do you use the command cache? Could you switch it off? About the heap dump: it should be in the server or bin directory and have a .hprof extension.
I have 8 cores and they are all being used at around 98-99%.
Yes...found the heap dump file.
No, not using the command cache. Below is the command.cache.json file:
{
"enabled": false,
"evictStrategy": "INVALIDATE_ALL",
"minExecutionTime": 10,
"maxResultsetSize": 500
}
Also, after force stopping the threads the core usage drops to 1-3%, suggesting that the cores are being used by OrientDB.
@nishantkumar1292 1500 threads is too much for 8 cores: you will have a lot of context switches and, as a result, bigger memory consumption and worse performance. In reality, high CPU usage does not mean better performance. The number of threads is up to you, of course. Could you send me this file? I will check it, but I am 99% sure that it is caused by the high memory consumption of the queries.
Yes, you are right... decreasing the number of threads improved the performance and also eliminated the Request Timeout errors.
What is the maximum number of threads I can work with? Each thread does some CREATE, READ and UPDATE operations.
Also, what would be the ideal OrientDB configuration to handle this kind of load?
@nishantkumar1292 it is hard to say, but my preference is not more than 10 threads per core; OK, let's say 20. So in your case it should be 160, not 1500, which is an enormous number. You can limit the number of threads by handling user requests in a thread pool.
What do you mean by handling user requests in a thread pool?
I do not know your architecture; I just supposed that you accept HTTP requests, for example, parse them, and execute a command. So the commands may be executed in a separate thread pool.
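(A minimal sketch of that idea; it assumes requests arrive from some HTTP layer and only shows the bounded pool, with sizing and names as placeholders.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CommandExecutor {
    // bounded pool: roughly 10-20 workers per core instead of one thread per request
    private static final int WORKERS = Runtime.getRuntime().availableProcessors() * 10;
    private final ExecutorService pool = Executors.newFixedThreadPool(WORKERS);

    // HTTP handlers only enqueue work; the pool caps how many commands
    // actually hit the database at the same time
    public void submit(Runnable databaseCommand) {
        pool.submit(databaseCommand);
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```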
Was running scripts on 100 parallel threads. The server stopped with this error.
Error on client connection
java.lang.OutOfMemoryError: GC overhead limit exceeded
$ANSI{green {db=db_development}} Exception `3AD12824` in storage `db_development`
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "OrientDB ONetworkProtocolHttpDb listen at 0.0.0.0:2480-2490" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "OrientDB Write Cache Flush Task (db_development)"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "OrientDB Write Cache Flush Task (db_development)"
@laa ...any ideas how to resolve this?
Here is the screenshot of the htop output on the OrientDB server:
Why is the java process asking for 13.9GB of virtual memory?
Also, the heap dumps that were generated filled my disk space. Can they be of any help, or should I remove them to free disk space?
Hi @laa ...any updates or resolution for the above issue?
Hi,
I got an OutOfMemoryError while inserting about 10^6 nodes into an OrientGraph.
orientdb.err
a test class in Scala:
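(The Scala test class itself is not included above. Purely as an illustration of the kind of insertion described, a minimal sketch with the 2.x Graph API might look like the following; the vertex class Node, the property id, and the database path are made up.)

```java
import com.tinkerpop.blueprints.impls.orient.OrientGraphNoTx;

public class GraphInsertSketch {
    public static void main(String[] args) {
        // non-transactional graph, so one huge transaction is not built up in memory
        OrientGraphNoTx graph = new OrientGraphNoTx("plocal:/tmp/graphdb");
        try {
            graph.createVertexType("Node");  // register the vertex class (assumes a fresh database)
            for (int i = 0; i < 1_000_000; i++) {
                graph.addVertex("class:Node", "id", i);
            }
        } finally {
            graph.shutdown();
        }
    }
}
```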