
Even after flushall maxmemory shows the previous max value. #654


GoogleCodeExporter commented 9 years ago
What version of Redis are you using, and on what kind of operating system?
2.2.7

What is the problem you are experiencing?
I have set maxmemory to 1.5GB. When my application tried to insert more than 1.5GB of data, it started throwing the maxmemory-reached exception, as expected. I then executed the FLUSHALL command from the Redis client console to clear all datasets. When I checked the INFO output, it still showed "used_memory_human:1.5GB", and my application failed to insert any data, hitting the same maxmemory exception even though I had deleted all datasets with FLUSHALL.
This forces me to restart the Redis server before my client application works again.

What steps will reproduce the problem?
Set a maxmemory limit with maxmemory-policy noeviction. Insert data until maxmemory is reached and you start getting exceptions. Then run FLUSHALL from the Redis client console and try to insert data into Redis again.
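
For reference, a minimal redis.conf fragment for the setup described above might look like this (the 1.5GB limit comes from the report; expressing it as a byte count is an assumption):

```
# Cap Redis at 1.5GB and refuse writes instead of evicting keys
maxmemory 1610612736
maxmemory-policy noeviction
```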

Do you have an INFO output? Please paste it here.
no

If it is a crash, can you please paste the stack trace that you can find in
the log file or on standard output? This is really useful for us!
-

Please provide any additional information below.
My app is a Java console app that uses the Jedis client to insert data into Redis data structures.

Original issue reported on code.google.com by sonal...@gmail.com on 2 Sep 2011 at 9:03

GoogleCodeExporter commented 9 years ago
Hello, this report is clear about what it states, but it is not clear how this is possible, and not enough information is provided to isolate the problem.

Please try to include the INFO output from before and after the FLUSHALL call; this would be an important first step in understanding what is happening.
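
A few lines of Jedis are enough to capture both snapshots; a minimal sketch, assuming a local server on the default port:

```java
import redis.clients.jedis.Jedis;

public class InfoSnapshots {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        System.out.println("=== INFO before FLUSHALL ===");
        System.out.println(jedis.info());
        jedis.flushAll();
        System.out.println("=== INFO after FLUSHALL ===");
        System.out.println(jedis.info());
        jedis.disconnect();
    }
}
```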

Cheers,
Salvatore

Original comment by anti...@gmail.com on 13 Sep 2011 at 8:36

GoogleCodeExporter commented 9 years ago
Here are the steps to reproduce the problem (a Jedis sketch of these steps follows below).
1. Set maxmemory to 1.5GB in redis.conf.
2. Start the Redis server.
3. Insert data into Redis with the Jedis client until the maxmemory limit is exceeded.
4. When the 1.5GB limit is reached, you will start getting exceptions such as "redis.clients.jedis.exceptions.JedisDataException: ERR command not allowed when used memory > 'maxmemory'".
5. Now open the Redis client console and run the INFO command to check used memory. It should be 1.5GB.
6. Now run FLUSHALL to delete all data from Redis. Ensure the data is deleted by querying the dataset where you inserted records.
7. Now run INFO again, and also try to insert data.
8. You should be able to insert data, since deleting the old records should have freed memory.

However, the INFO command still shows 1.5GB of used memory, and I keep getting the same maxmemory exception.

I have attached the INFO output from before and after running FLUSHALL, taken when maxmemory had been reached.
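
A minimal Jedis sketch of the steps above (the key names and the ~1MB payload are hypothetical; adjust the volume so it actually crosses the 1.5GB limit):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisDataException;

public class MaxmemoryRepro {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        String payload = new String(new char[1024 * 1024]).replace('\0', 'x');
        try {
            for (int i = 0; ; i++) {
                jedis.set("key:" + i, payload);   // step 3: fill past maxmemory
            }
        } catch (JedisDataException e) {
            System.out.println(e.getMessage());   // step 4: maxmemory error
        }
        jedis.flushAll();                         // step 6: delete everything
        System.out.println(jedis.info());         // step 7: check used_memory
        jedis.set("after-flush", payload);        // step 8: should succeed, but
                                                  // throws the same error here
        jedis.disconnect();
    }
}
```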

Original comment by sonal...@gmail.com on 13 Sep 2011 at 2:37


GoogleCodeExporter commented 9 years ago
The problem goes away only when I restart the Redis server. After restarting, I am able to insert data, and used memory per the INFO command is a few MBs, as expected.

Attaching the INFO output from after restarting the Redis server.

Original comment by sonal...@gmail.com on 13 Sep 2011 at 2:42


GoogleCodeExporter commented 9 years ago
There is a client with a large pending reply buffer in the INFO output taken after FLUSHALL. Making this client disconnect, or read its replies faster, is the first step in trying to fix this condition. We can take a closer look if this client turns out not to be the cause of the large memory usage.
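
The relevant INFO fields can be pulled out programmatically; a sketch, assuming the Redis 2.2-era field names connected_clients and client_longest_output_list:

```java
import redis.clients.jedis.Jedis;

public class CheckOutputBuffers {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        // INFO replies are CRLF-separated key:value lines
        for (String line : jedis.info().split("\r\n")) {
            if (line.startsWith("connected_clients")
                    || line.startsWith("client_longest_output_list")) {
                System.out.println(line);
            }
        }
        jedis.disconnect();
    }
}
```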

Original comment by pcnoordh...@gmail.com on 13 Sep 2011 at 2:43

GoogleCodeExporter commented 9 years ago
Issue 525 probably describes what is happening then.

Original comment by jwillp@gmail.com on 14 Sep 2011 at 2:10

GoogleCodeExporter commented 9 years ago
Please let me know if you need any other input from me. I have seen the same issue on version 2.2.12, which I am using in production. A fix for this would be greatly appreciated. Thanks.

Original comment by sonal...@gmail.com on 14 Sep 2011 at 7:06

GoogleCodeExporter commented 9 years ago
Sonalivk,

Are you using publish/subscribe, or do you know of any client that may be reading its replies too slowly? It is very likely that such a client's output buffer is what keeps the memory from being released.
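
To illustrate the pattern Pieter is asking about: a subscriber that consumes messages more slowly than they are published forces the server to queue the backlog in that client's output buffer. A hypothetical sketch (channel name assumed; recent Jedis versions, where JedisPubSub's other callbacks default to no-ops):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class SlowSubscriber {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        jedis.subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String channel, String message) {
                try {
                    Thread.sleep(1000); // simulate a consumer that cannot keep up
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "updates");
    }
}
```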

Cheers,
Pieter

Original comment by pcnoordh...@gmail.com on 14 Sep 2011 at 7:09

GoogleCodeExporter commented 9 years ago
No, I am not using pub/sub. I also did not notice any slowness in getting replies from Redis, but I will cross-check and let you know.

NOTE: I have a Jedis client that executes Redis commands like HSET, HGET, SMEMBERS, etc. I am using JedisPool with the testOnBorrow flag set to true, so that it sends a PING to the Redis server to check that a connection is alive (a sketch of this setup follows below). I am not sure whether this information is useful.
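
A minimal sketch of that pool setup (host, port, and keys are assumptions; returnResource reflects the Jedis API of that era):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PoolSetup {
    public static void main(String[] args) {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setTestOnBorrow(true); // validate (PING) connections on checkout
        JedisPool pool = new JedisPool(config, "localhost", 6379);
        Jedis jedis = pool.getResource();
        try {
            jedis.hset("user:1", "name", "sonali");
        } finally {
            pool.returnResource(jedis); // always hand the connection back
        }
    }
}
```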

Thanks,
Sonali

Original comment by sonal...@gmail.com on 14 Sep 2011 at 9:38

GoogleCodeExporter commented 9 years ago
Hello, as Pieter noted, there is a client with a long output buffer, so it looks like a client requested some long reply, such as LRANGE foo 0 -1, and then did not read the reply back.

Also note that 20k is the *longest* client output buffer list, and you have 600 clients connected; probably many of the other clients have long output buffers as well.

So, in short, it looks like many of the clients are not reading their replies back, and the objects queued for them are using all the memory. It does not look like a redis-server problem.
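
A hypothetical sketch of this failure mode using a Jedis pipeline: the commands are flushed to the server once the client-side buffer fills, but the replies are never read back, so they pile up in the server-side output buffer for this connection:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class UnreadReplies {
    public static void main(String[] args) throws InterruptedException {
        Jedis jedis = new Jedis("localhost", 6379);
        Pipeline p = jedis.pipelined();
        for (int i = 0; i < 1000; i++) {
            p.lrange("foo", 0, -1); // each call queues a potentially huge reply
        }
        // Without p.sync() the replies are never consumed, and the memory
        // they occupy on the server is not released until we disconnect.
        Thread.sleep(60000); // hold the connection open with replies pending
    }
}
```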

I will keep the issue open for some more time to see the outcome, and close it later if there is no news.

Cheers,
Salvatore

Original comment by anti...@gmail.com on 14 Sep 2011 at 12:44

GoogleCodeExporter commented 9 years ago
Yes, you are correct: the issue is due to the output buffers. If I close my client application, ensure that no client is connected to the Redis server, and then execute FLUSHALL, it cleans up the memory as expected.
I will need to look at the Jedis application to fix the client code.
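
For reference, a minimal sketch of a well-behaved client along those lines (key names assumed; Jedis-2.x-era pool API): every pipelined reply is drained with sync() and the connection is returned to the pool, so the server can release its output buffer:

```java
import java.util.List;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class WellBehavedClient {
    public static void main(String[] args) {
        JedisPool pool = new JedisPool(new JedisPoolConfig(), "localhost", 6379);
        Jedis jedis = pool.getResource();
        try {
            Pipeline p = jedis.pipelined();
            Response<List<String>> members = p.lrange("foo", 0, -1);
            p.sync();                     // drain every pending reply
            System.out.println(members.get().size());
        } finally {
            pool.returnResource(jedis);   // free the connection and its buffers
        }
    }
}
```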

Thanks once again for your help.

Original comment by sonal...@gmail.com on 15 Sep 2011 at 9:27