The error "Mysql_error: MySQL server has gone away" may not have anything to do with this script.
I'm building a test server now and inserting a bunch of rows to better test this script's scalability.
Original comment by cdu...@gmail.com
on 27 Oct 2009 at 5:54
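For context, "MySQL server has gone away" is usually a server-side timeout or packet-size issue rather than a bug in the client script. A generic way to check the two settings most often involved (standard MySQL diagnostics, not specific to this script):

SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'max_allowed_packet';

If a connection sits idle longer than wait_timeout, or a single statement exceeds max_allowed_packet, the server drops the connection and the client sees exactly this error.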
Going back to basics here...but can you please try the following?
Edit common_funcs.php near line 216:
Change:
$sql = "SELECT program FROM ".$table;
$result = perform_query($sql, $link);
$i = 0;
while($row = fetch_array($result, 'ASSOC')) {
array_push($cacheProgramValues, $row['program']);
if ($i++> 1000) {
$i = 0;
$cacheProgramValues = array_unique($cacheProgramValues);
}
}
To:
$sql = "SELECT DISTINCT(program) FROM ".$table;
$result = perform_query($sql, $link);
while($row = fetch_array($result, 'ASSOC')) {
array_push($cacheProgramValues, $row['program']);
}
I ran this on a test database with 66 million entries and it took about 0.01 seconds to run.
Please let me know if it works ok for you.
Original comment by cdu...@gmail.com
on 27 Oct 2009 at 11:59
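The speedup comes from deduplicating on the server instead of streaming every row into PHP and periodically calling array_unique. If the DISTINCT scan is still slow on a very large table, an index on the program column lets MySQL satisfy it from the index alone; a sketch, assuming a table named logs (in the real code the name comes from $table):

CREATE INDEX idx_program ON logs (program);
SELECT DISTINCT program FROM logs;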
It made a HUGE difference reloading the cache, except for the LPD one. The following is the result. I don't have a problem separating them; updating the LPD cache once a day would be good enough for me.
php reloadcache.php
Starting to reload cache
2009-10-28 01:03:49
Got lock, continue.
Loading the cache with data from the merge table
Deleting cache entries for tables that no longer exist...
2009-10-28 01:03:54
Starting LPD Cache
2009-10-28 01:03:54
Setting daily log counts in search_cache
Entry 0...INSERTing
Entry 1...INSERTing
Entry 2...INSERTing
Entry 3...INSERTing
Entry 4...INSERTing
Entry 5...INSERTing
Entry 6...INSERTing
Entry 7...INSERTing
Entry 8...INSERTing
Entry 9...INSERTing
Entry 10...INSERTing
Entry 11...INSERTing
Entry 12...INSERTing
Entry 13...INSERTing
Entry 14...INSERTing
2009-10-28 01:24:14
All done!
2009-10-28 01:24:14
Reloadcache Completed!
Original comment by arnold.x...@gmail.com
on 28 Oct 2009 at 5:25
Try this and let me know if it works better (it should):
--- lpdcache.php	2009-10-28 15:08:04.000000000 -0400
+++ lpdcache.php.new	2009-10-28 15:39:31.000000000 -0400
@@ -48,9 +48,11 @@
     $check = mysql_query("SELECT * FROM ".MERGELOGTABLE." LIMIT 0,1"); /* >>limit<< is just to make it faster in case the db is huge */
     if ($check) {
         for($i=0;$i<=LOGROTATERETENTION;$i++) {
-            $query="SELECT value as count from ".CACHETABLENAME." WHERE type='LPD' AND updatetime=DATE_SUB(CURDATE(), INTERVAL $i DAY)";
-            $result = perform_query($query, $dbLink);
-            if(num_rows($result) >= 1) {
+            $query="SELECT SQL_CALC_FOUND_ROWS * FROM ".CACHETABLENAME." WHERE type='LPD' AND updatetime=DATE_SUB(CURDATE(), INTERVAL $i DAY) LIMIT 1";
+            $result = mysql_query("SELECT FOUND_ROWS()");
+            $total = mysql_fetch_row($result);
+            echo "Total = " .$total[0] ."\n";
+            if($total[0] >= 1) {
                 echo "Entry $i already in cache...";
                 if ($i == 0) { // We only need to update today's count since (presumably) old logs should not be getting new INSERTs
                     echo "UPDATING\n";
Original comment by cdu...@gmail.com
on 28 Oct 2009 at 7:40
Sorry, you can remove the line:
echo "Total = " .$total[0] ."\n";
I had it in there for testing purposes.
Original comment by cdu...@gmail.com
on 28 Oct 2009 at 7:41
I'm also thinking about removing the OPTIMIZE command from the logrotate.php script. Here's what MySQL says about that (taken from http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html):
"In most setups, you need not run OPTIMIZE TABLE at all. Even if you do a lot of updates to variable-length rows, it is not likely that you need to do this more than once a week or month and only on certain tables."
Based on that, I think it might be better to let the sysadmin decide when to run OPTIMIZE. What do you think?
Original comment by cdu...@gmail.com
on 28 Oct 2009 at 7:43
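For a sysadmin who does want to run it by hand, the command itself is trivial; a minimal sketch, assuming the log table is named logs (the real table names here come from the merge/rotation setup):

OPTIMIZE TABLE logs;

On MyISAM tables OPTIMIZE locks the table while it rebuilds, which is likely why running it against larger DBs during log rotation was slow.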
Ignore that diff - it's not working. I'll need more time to work on this but I have to leave. I'll try to do something tonight.
Original comment by cdu...@gmail.com
on 28 Oct 2009 at 7:45
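For anyone following along: the diff above assigns the SQL_CALC_FOUND_ROWS query to $query but never executes it before calling FOUND_ROWS(), so the row count is meaningless. A plain COUNT(*) would give the same existence check in one round trip; a sketch, not the fix that eventually shipped, assuming CACHETABLENAME resolves to search_cache and using 3 in place of the loop variable $i:

SELECT COUNT(*) AS total
FROM search_cache
WHERE type = 'LPD'
  AND updatetime = DATE_SUB(CURDATE(), INTERVAL 3 DAY);

If total comes back >= 1, that day's LPD entry is already cached and only needs an UPDATE rather than an INSERT.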
I think it's probably a good idea to leave OPTIMIZE TABLE out as well. Since I never delete from the table, I don't think I will gain much, if anything, from OPTIMIZE. Just my 2 cents.
Original comment by arnold.x...@gmail.com
on 28 Oct 2009 at 8:40
I've uploaded a new version with several fixes to address this issue. v2.9.9o is now available for download and includes the following changes:
2.9.9o
- Removed the OPTIMIZE command from log rotation due to slow response in larger DBs
- Fixed bug in reloadcache that was deleting cache entries before checking to see if they even needed updating (essentially causing lpdcache to re-insert all cache data for all tables on every run)
- Modified common_funcs.php to use a SELECT DISTINCT on the program field instead of using array_unique in order to speed up queries in larger DBs
If you don't want to do a "full" upgrade from 2.9.9n, just replace the following files:
scripts/logrotate.php
scripts/lpdcache.php
scripts/reloadcache.php
html/includes/common_funcs.php
...be sure to run fixpaths.sh after replacing the files
Of course, you should back up your original files first :-)
Original comment by cdu...@gmail.com
on 29 Oct 2009 at 4:19
Ok,
I have updated with your modifications, replacing:
scripts/logrotate.php
scripts/lpdcache.php
scripts/reloadcache.php
html/includes/common_funcs.php
and afterwards I ran fixpaths.sh.
But this Monday I got the following error when I attempt to connect to the web UI:
Error in "function perform_query()"
Mysql_error: The table 'search_cache' is full
Do you know why this table is full?
Regards
Original comment by dr.les...@yahoo.fr
on 2 Nov 2009 at 12:36
Full? Wow, that's a new one...
How big is the table?
How much memory do you have?
search_cache uses the MEMORY engine - I suspect you're low on memory?
Can you please run "select count(*) from search_cache" for me?
Maybe I messed something up and forgot to remove old entries? I'll check.
Original comment by cdu...@gmail.com
on 2 Nov 2009 at 1:56
Approximately 19000; I don't have the exact number.
I have deleted all records from this table.
Currently I have 15946 records in search_cache.
I'm waiting to see if the same error comes back.
I have 4 GB of RAM, and tmp_table_size in MySQL is set to 33554432.
Temporary tables are located in /tmp, which is on the / mount point.
The / mount point has over 13 GB free.
Original comment by dr.les...@yahoo.fr
on 2 Nov 2009 at 3:53
Ok, I found the problem.
You need to increase your max_heap_table_size setting in /etc/my.cnf.
The default is only 16M; I set mine to 32M, restarted mysql, and was able to get 173280 rows inserted.
Note that when you restart mysql it will clear the search_cache table (since it's a MEMORY table) - you can either re-run reloadcache.php after restarting, or you can back up the current table before you restart and restore it afterwards. To do that:
mysqldump syslog search_cache > temp.sql
/etc/init.d/mysql restart
mysql syslog < temp.sql
Here's the documentation:
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_heap_table_size
Original comment by cdu...@gmail.com
on 2 Nov 2009 at 6:20
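For reference, the limit can also be inspected and raised at runtime rather than only via /etc/my.cnf; a sketch (SET GLOBAL only affects new connections, and an existing MEMORY table keeps the limit it was created with, so it needs a rebuild):

SHOW VARIABLES LIKE 'max_heap_table_size';
SET GLOBAL max_heap_table_size = 33554432;   -- 32M, for new connections
SET SESSION max_heap_table_size = 33554432;  -- for the current connection
ALTER TABLE search_cache ENGINE = MEMORY;    -- rebuilds the table under the new limit

This avoids the restart (and therefore the mysqldump/restore dance) entirely, at the cost that the change does not persist unless it is also added to my.cnf.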
Original issue reported on code.google.com by
arnold.x...@gmail.com
on 26 Oct 2009 at 5:49