Hello, this report is clear about what it states, but it is not clear how this
is possible, and not enough information is provided to isolate the problem.
Please include the INFO output both before and after the FLUSHALL call; this
would be an important first step in understanding what is happening.
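For example, a minimal Jedis sketch that captures both outputs might look like
this (assuming the reporter's Java/Jedis setup; the class name and host are
illustrative only):

```java
// Capture INFO before and after FLUSHALL so the two outputs can be compared.
import redis.clients.jedis.Jedis;

public class InfoAroundFlush {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String before = jedis.info();   // INFO before FLUSHALL
            jedis.flushAll();               // delete all keys
            String after = jedis.info();    // INFO after FLUSHALL
            System.out.println("=== INFO before FLUSHALL ===\n" + before);
            System.out.println("=== INFO after FLUSHALL ===\n" + after);
        }
    }
}
```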
Cheers,
Salvatore
Original comment by anti...@gmail.com
on 13 Sep 2011 at 8:36
Here are the steps to reproduce the problem.
1. Set the maxmemory value to 1.5GB in redis.conf.
2. Start the redis server.
3. Insert data into redis using the Jedis client until the maxmemory limit is
exceeded.
4. When the 1.5GB limit is reached, you start getting exceptions such as
"redis.clients.jedis.exceptions.JedisDataException: ERR command not allowed
when used memory > 'maxmemory'".
5. Open the redis-cli console and run the INFO command to check used memory; it
should be 1.5GB.
6. Run the FLUSHALL command to delete all data from redis. Verify that the data
is deleted by querying the dataset where you inserted the records.
7. Run the INFO command again and try to insert data.
8. You should be able to insert data, since the old records were deleted to free
memory.
However, the INFO command still shows 1.5GB of used memory, and I keep getting
the same maxmemory exception.
I have attached the INFO output from before and after running FLUSHALL, when
maxmemory had been reached.
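For reference, a rough sketch of the kind of insertion loop used in steps 3-4
(the key names, value size, and class name are illustrative, not the actual
application code):

```java
// Keep writing keys until the server starts rejecting writes with the
// maxmemory error, then stop.
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisDataException;

public class FillUntilMaxmemory {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1024; i++) sb.append('x');
        String payload = sb.toString(); // ~1 KB value

        try (Jedis jedis = new Jedis("localhost", 6379)) {
            for (long i = 0; ; i++) {
                try {
                    jedis.set("key:" + i, payload);
                } catch (JedisDataException e) {
                    // Expected once used memory exceeds maxmemory:
                    // "ERR command not allowed when used memory > 'maxmemory'"
                    System.out.println("Stopped after " + i + " keys: " + e.getMessage());
                    break;
                }
            }
        }
    }
}
```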
Original comment by sonal...@gmail.com
on 13 Sep 2011 at 2:37
Attachments:
The problem goes away only when I restart the redis server. After restarting, I
am able to insert data, and the used memory reported by INFO is a few MB, as
expected.
Attaching the INFO output after restarting the redis server.
Original comment by sonal...@gmail.com
on 13 Sep 2011 at 2:42
Attachments:
The INFO output after FLUSHALL shows a client with a large pending reply
buffer. Making this client disconnect, or read its replies faster, is the first
step in fixing this condition. We can take a closer look if this client turns
out not to be the cause of the large memory usage.
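As an illustration of this check, a minimal Jedis sketch that prints the
relevant INFO fields (the class name is illustrative, and the field names are
assumed from the attached INFO output):

```java
// Print the client-related INFO fields that indicate a large pending
// reply buffer on some connected client.
import redis.clients.jedis.Jedis;

public class CheckOutputBuffers {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            for (String line : jedis.info().split("\r\n")) {
                if (line.startsWith("connected_clients")
                        || line.startsWith("client_longest_output_list")
                        || line.startsWith("used_memory_human")) {
                    System.out.println(line);
                }
            }
        }
    }
}
```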
Original comment by pcnoordh...@gmail.com
on 13 Sep 2011 at 2:43
Issue 525 probably describes what is happening then.
Original comment by jwillp@gmail.com
on 14 Sep 2011 at 2:10
Please let me know if you need any other input from me. I have seen the same
issue on version 2.2.12, which I am using in production. A fix for this would be
greatly appreciated. Thanks.
Original comment by sonal...@gmail.com
on 14 Sep 2011 at 7:06
Sonalivk,
Are you using publish/subscribe, or do you know of any client that may be
reading its replies too slowly? It is very likely that this client's output
buffer is what keeps the memory from being released.
Cheers,
Pieter
Original comment by pcnoordh...@gmail.com
on 14 Sep 2011 at 7:09
No, I am not using pub/sub. I also did not notice any slowness in getting
replies from redis, but I will cross-check and let you know.
NOTE: I have a Jedis client that executes redis commands like HSET, HGET,
SMEMBERS, etc. I am using JedisPool with the testOnBorrow flag set to true, so
that it sends a PING to the redis server to check that the connection is alive
(a minimal setup is sketched below).
I am not sure if this info is useful.
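For reference, a minimal sketch of that setup (assuming a recent Jedis where
closing a pooled connection returns it to the pool; class, key, and host names
are illustrative):

```java
// JedisPool with testOnBorrow enabled: the pool PINGs a connection
// before handing it out, as described above.
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PoolSetup {
    public static void main(String[] args) {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setTestOnBorrow(true); // validate (PING) before borrowing
        JedisPool pool = new JedisPool(config, "localhost", 6379);
        try (Jedis jedis = pool.getResource()) {
            jedis.hset("myhash", "field", "value");
            System.out.println(jedis.hget("myhash", "field"));
        } // connection goes back to the pool here
        pool.close();
    }
}
```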
Thanks,
Sonali
Original comment by sonal...@gmail.com
on 14 Sep 2011 at 9:38
Hello, as Pieter noted there is a client with a long output buffer, so it looks
like a client requested some long reply, like LRANGE foo 0 -1, and then did not
read the reply back.
Also note that 20k is the *longest* client output buffer list, and you have 600
clients connected; probably many of the other clients have long output buffers
as well.
So, in short, it looks like many of the clients are not reading their replies
back, and these queued reply objects are using all the memory. It does not look
like a redis-server problem.
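To make the failure mode concrete, here is a standalone sketch (not the
reporter's code) of a client that requests large replies and never reads them;
it uses a raw socket and Redis inline commands to stay self-contained, and
assumes "biglist" is an existing large list:

```java
// Send many LRANGE commands but never read the replies: the replies
// accumulate in the server's per-client output buffer and keep showing
// up in used_memory until the client reads them or disconnects.
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SlowReader {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 6379)) {
            OutputStream out = socket.getOutputStream();
            for (int i = 0; i < 1000; i++) {
                // Ask for the whole list (inline command), never read the reply.
                out.write("LRANGE biglist 0 -1\r\n".getBytes(StandardCharsets.US_ASCII));
            }
            out.flush();
            // While this sleeps, the unread replies stay queued on the server side.
            Thread.sleep(60000);
        }
    }
}
```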
I will keep the issue open for a while longer to see the outcome, and close it
later if there is no news.
Cheers,
Salvatore
Original comment by anti...@gmail.com
on 14 Sep 2011 at 12:44
Yes, you are correct, the issue is due to the output buffers. If I close my
client application, make sure no clients are connected to the redis server, and
then execute FLUSHALL, it cleans up the memory as expected.
I will need to look at the Jedis application to fix the client code.
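For reference, one possible shape of the client-side fix (a sketch assuming a
recent Jedis where close() returns the connection to the pool; names are
illustrative): consume every pending reply and release the connection promptly.

```java
// Drain all pipelined replies before the connection goes back to the
// pool, so nothing is left queued in the server's output buffer.
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.Pipeline;

public class ReadAllReplies {
    public static void doWork(JedisPool pool) {
        try (Jedis jedis = pool.getResource()) {
            Pipeline p = jedis.pipelined();
            p.hset("myhash", "field", "value");
            p.smembers("myset");
            p.sync(); // read every pending reply before releasing the connection
        } // connection is returned to the pool here, with no unread replies
    }
}
```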
Thanks once again for your help.
Original comment by sonal...@gmail.com
on 15 Sep 2011 at 9:27
Original issue reported on code.google.com by
sonal...@gmail.com
on 2 Sep 2011 at 9:03