colinmollenhour / Cm_Cache_Backend_Redis

A Zend_Cache backend for Redis with full support for tags (works great with Magento)

fwrite(): send of 8192 bytes failed with errno=104 Connection reset by peer #153

Closed ilnytskyi closed 1 year ago

ilnytskyi commented 4 years ago

Some apps may write too many keys into Redis and then try to clean them all at once, like Magento 2. Basically it's this issue, but it may be resolvable in the library: https://github.com/magento/magento2/issues/27151

It just needs batch processing implemented here: \Cm_Cache_Backend_Redis::_removeByMatchingTags

Would it be OK and safe to implement this method like this as a temporary workaround?

    /**
     * Remove cache entries matching all given tags, deleting ids in
     * fixed-size batches so each Redis request stays small.
     *
     * @param array $tags
     */
    protected function _removeByMatchingTags($tags)
    {
        $maxCount = 10000;
        $ids = $this->getIdsMatchingTags($tags);
        if ($ids) {
            foreach (array_chunk($ids, $maxCount) as $batchedIds) {
                $this->_redis->pipeline()->multi();

                // Remove data
                $this->_redis->del($this->_preprocessIds($batchedIds));

                // Remove ids from list of all ids
                if ($this->_notMatchingTags) {
                    $this->_redis->sRem(self::SET_IDS, $batchedIds);
                }

                $this->_redis->exec();
            }
        }
    }
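
For reference, this code path is reached when cleaning by tag through the Zend_Cache frontend. A minimal sketch, assuming `$cache` is a `Zend_Cache_Core` frontend configured with this backend (the tag name is only an example):

    // Cleaning in MATCHING_TAG mode dispatches to the backend's
    // _removeByMatchingTags(), so a large tag set exercises the
    // batching above.
    $cache->clean(Zend_Cache::CLEANING_MODE_MATCHING_TAG, array('catalog_product'));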
colinmollenhour commented 4 years ago

Yes. Although the use of array_chunk makes the operation less atomic, that seems preferable to errors, and there is no simpler workaround that I can think of.
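
To sketch what "less atomic" means here (a standalone illustration with assumed numbers, not code from the library):

    // With 25k ids and a 10k chunk size, the loop issues three
    // independent MULTI/EXEC transactions. After the first EXEC
    // commits, entries in later chunks are still readable, so the
    // clean becomes visible in stages rather than all at once.
    $ids = range(1, 25000);
    foreach (array_chunk($ids, 10000) as $i => $chunk) {
        printf("transaction %d removes %d ids\n", $i + 1, count($chunk));
    }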

bmitchell-ldg commented 4 years ago

Hello @ilnytskyi, did this change resolve the issue for you? We are also experiencing this with Magento 2.3.3 during the reindexing processes.

ilnytskyi commented 4 years ago

@bmitchell-ldg yes. However, we looked at what Magento writes to the cache and found a lot of swatches blocks cached per URL. Additionally, you can check the Redis config: I noticed that my dev laptop had no problems cleaning 500K keys at once with a total request size > 40 MB, but the test instance barely cleaned 10K (< 1 MB). So we used a 10K batch size in our case.
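
Since the safe batch size clearly varies by environment, one option would be to make it configurable instead of hard-coding it. A hypothetical sketch ('clean_batch_size' is an invented option name, not part of the library's actual API):

    // Choose the chunk size from the options array passed to the
    // backend constructor instead of hard-coding $maxCount = 10000.
    $options = array('clean_batch_size' => 10000); // e.g. from app config
    $batchSize = isset($options['clean_batch_size'])
        ? (int) $options['clean_batch_size']
        : 10000;
    $ids = range(1, 25000); // stand-in for getIdsMatchingTags($tags)
    foreach (array_chunk($ids, $batchSize) as $batchedIds) {
        // ... per-chunk MULTI/EXEC removal as in the workaround above
    }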

bmitchell-ldg commented 4 years ago

@ilnytskyi thank you! We went live with the 10K batch size change and it resolved the issue.

colinmollenhour commented 4 years ago

Pushed a fix in 02eef64

cdcrothers commented 1 year ago

Seems like this issue has been resolved. It can probably be closed now.