marcus-at-localhost opened 6 years ago
The log is collected in memory. As the log contains the complete query and the inserted values, it can get huge when inserting tons of records.
How to solve this? No idea, as the getter for the log is imho inconsistent: passing `TRUE` just returns the log, passing `FALSE` disables it. There is no way to re-enable logging or to clear the log.
Maybe there should be a `null` value as default (to return the log), and any boolean would enable/disable logging.
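The proposed semantics could look roughly like this. This is a hypothetical sketch, not F3's actual implementation; the `$log` and `$logging` properties are assumptions for illustration:

```php
<?php
// Sketch of the proposed semantics (hypothetical, not F3's actual API):
// calling log() with no argument returns the log, a boolean toggles logging.
class SQL {
    protected $log = '';       // collected log text
    protected $logging = true; // whether logging is currently active

    function log($flag = null) {
        if ($flag === null)
            return $this->log;        // null (default): return the log
        $this->logging = (bool)$flag; // any boolean: enable/disable logging
        return $this;
    }
}
```

With this shape, `log()` is always a getter and `log(TRUE)`/`log(FALSE)` are always setters, so the two concerns never overlap.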
I also think that the behaviour of this feature could be improved. IMHO, SQL logging should be disabled by default. After all, this is a debugging feature, and the `DEBUG` variable itself is disabled by default.
I will leave it to whatever the consensus is.
+1 for disabling by default
Well, any solution is somehow ugly:
- `TRUE` to re-enable and return the log is a bad and probably unwanted side effect, as you have to call `log(FALSE)` to get back the same state if logging was disabled before.
- `null` as default to return the log, and any boolean to enable/disable logging, would be preferable to `TRUE` because of consistent code style, but a way to reset the log is still missing.
Well, if we want to do this properly, we need to give the possibility to perform 4 actions: enable logging, disable logging, retrieve the log, and clear the log.
This can't be done without breaking BC, so I suggest tagging this issue for v4 and fixing it properly when the time comes. This is not a blocking issue anyway.
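If those four actions were split into explicit methods, the API could look like this. A hypothetical sketch only; the method names are invented and are not part of F3:

```php
<?php
// Hypothetical explicit API covering all four actions; method names are
// invented for illustration and do not exist in F3.
class SQLLog {
    protected $log = '';
    protected $enabled = false; // disabled by default, as suggested above

    function enableLog()  { $this->enabled = true; }
    function disableLog() { $this->enabled = false; }
    function getLog()     { return $this->log; }
    function clearLog()   { $this->log = ''; }

    // would be called internally after each query
    function record($query, $ms) {
        if ($this->enabled)
            $this->log .= sprintf("(%.1fms) %s\n", $ms, $query);
    }
}
```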
Or maybe a buffer, like only the last 50 queries? ;)
After introducing a buffer, someone will ask for a configuration option to bump the buffer size up to `m` or reduce it to `n`. If the solution only works with a buffer, it would also be necessary to specify a big integer (e.g. `INT_MAX`) as buffer size to log "all queries". A buffer size equal to `0` (or `(int) false`, `(int) null`) would imply disabled logging. A size of `true` could enable logging without a buffer.
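The buffer idea could be sketched as a simple bounded log of the last `$size` entries. Again purely illustrative, with invented names:

```php
<?php
// Sketch of a bounded query log: keeps only the last $size entries.
// A size of 0 disables logging; PHP_INT_MAX effectively keeps everything.
class LogBuffer {
    protected $entries = [];
    protected $size;

    function __construct($size = 50) {
        $this->size = $size;
    }

    function add($entry) {
        if ($this->size === 0)
            return; // buffer size 0 implies disabled logging
        $this->entries[] = $entry;
        if (count($this->entries) > $this->size)
            array_shift($this->entries); // drop the oldest entry
    }

    function all() { return $this->entries; }
}
```

Memory stays bounded no matter how many queries run, which would address the bulk-insert crash described below.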
We could introduce new methods and keep `log()` for backwards compatibility. On the other hand, we are talking about adding fat to fatfree. :thinking:
For now these are all just ideas. But when we have got enough pros and cons, it could end up in something beautiful, even if it will only be in v4; the ideas now can shape the future ;)
i.e. a new Log class, which can handle multiple different application aspects/modules and is injected into the DB object upon instantiation. No injection = no logging; otherwise the configuration could be made in the Log class itself for this instance, etc. In such a scenario you could also use a custom Log implementation if you want... just an idea ;)
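The injection idea might look roughly like this. None of these names exist in F3; it is only a sketch of the "no injection = no logging" scenario:

```php
<?php
// Illustrative only: a logger injected into the DB wrapper at construction.
// None of these names exist in F3; no injection means no logging at all.
interface QueryLogger {
    function log($query, $ms);
}

class DB {
    protected $logger;

    function __construct(?QueryLogger $logger = null) {
        $this->logger = $logger; // null => logging disabled entirely
    }

    function exec($query) {
        $start = microtime(true);
        // ... the query would be executed here ...
        if ($this->logger)
            $this->logger->log($query, 1e3 * (microtime(true) - $start));
        return true;
    }
}
```

A custom `QueryLogger` implementation could then buffer, truncate, or ship its entries anywhere it likes, without the DB class knowing.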
V4, I do hope we will get to this v4 someday.
Hi, I have a CLI script that runs through a big (12000 items) JSON file and updates a MySQL database. After 200 entries the script dies with the info that sql.php line 230 tried to allocate X of memory. (Sorry, I don't remember the exact words.)
This is line 230:
This is my script (simplified)
After I set a `var_dump($this->log)` on sql.php:230, I saw that `$this->log` just grows (exec time and queries) and then hits the PHP memory limit.
I know that this is intended, to profile SQL calls, but since logging is on by default, it was confusing to see the script crash without really knowing why.
With logging turned off (`$this->db->log(FALSE);`) it seems to work. So I wonder: did I make a mistake in my application? Or should the documentation mention this? Or is this an edge case?
Thanks, Marcus