scambra opened this issue 10 years ago
The model of that process is as follows: DataTables' "next" and "previous" features become much faster because they query the filessearch table instead of Bacula's File table. In large Bacula deployments the File table is HUGE, and queries against it take a long time.
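As a rough illustration, a paged lookup against the helper table might look like this in Laravel's query builder. This is only a sketch: `$offset` and `$length` are hypothetical stand-ins for DataTables' start/length parameters and are not part of the patch discussed here.

```php
// Sketch: paging over the small filessearch table instead of Bacula's
// huge File table. Filessearch is the Eloquent model used elsewhere in
// this thread; $offset and $length are assumed paging inputs.
$rows = Filessearch::where('jobid', $job)
    ->orderBy('path')
    ->skip($offset)   // DataTables "start" parameter (assumed)
    ->take($length)   // DataTables "length" parameter (assumed)
    ->get();
```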
You did well changing PHP's maximum execution time from 30s to a bigger value. You can also raise the PHP script's maximum memory usage.
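For reference, a minimal sketch of raising both limits at runtime; the values here are assumptions to tune for your deployment, not recommendations from this thread:

```php
// Sketch: raise PHP limits for the request that builds the file list.
ini_set('max_execution_time', '300'); // default is 30 seconds
ini_set('memory_limit', '512M');      // default is often 128M
```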
About the QueryException, I will check on that and give you more information.
For example, I have a job that stores 1 terabyte of information, and I have never been able to run a query on that job with any Bacula web GUI.
What do you think about using `INSERT INTO filessearch ... SELECT`, even though raw insert SQL must be used:
```diff
diff --git a/app/controllers/FilesController.php b/app/controllers/FilesController.php
index 697ec69..5d70359 100755
--- a/app/controllers/FilesController.php
+++ b/app/controllers/FilesController.php
@@ -43,12 +43,9 @@ class FilesController extends BaseController
         $files = Files::select(array($this->tables['path'].'.path', $this->tables['filename'].'.name as filename','jobid'))
             ->join($this->tables['filename'],$this->tables['file'].'.filenameid', '=', $this->tables['filename'].'.filenameid')
             ->join($this->tables['path'],$this->tables['file'].'.pathid', '=', $this->tables['path'].'.pathid')
-            ->where('jobid','=', $job)->remember(10)->get();
+            ->where('jobid','=', $job);
-        $files = $files->toArray();
-        if (!empty($files)) {
-            $t= Filessearch::insert($files);
-        }
+        $filessearch->getConnection()->insert("INSERT INTO ".($filessearch->getTable())." (path, filename, jobid) ".$files->toSql(), array($job));
     }

     /* Mostra o log do Job */
```
It's working really fast here; I can send a PR if you like.
If you remove this line, you are going to lose the Laravel ORM cache feature: `->where('jobid','=', $job)->remember(10)->get();`
Did you test this line on MySQL and Postgres? `$filessearch->getConnection()->insert("INSERT INTO ".($filessearch->getTable())." (path, filename, jobid) ".$files->toSql(), array($job));`
I only tested on Postgres, although it's quite standard SQL.
I know I lose the Laravel cache, but those records are not loaded through Laravel anymore, so I don't think I lose anything important. Previously it took more than 30 seconds and then an exception was raised; now it takes only a few seconds. I don't think the cache is important here.
I will try to install it on MySQL and test.
I probably can't test on MySQL until next week; I need to try it on a VM, because I don't want to mess up my production server.
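If MySQL ever turns out to need different SQL, one option would be a small guard on the connection driver. A sketch only, not part of the patch; it reuses `$files` and `$filessearch` from the diff above:

```php
// Sketch: branch on the database driver before running the raw statement.
// getDriverName() returns e.g. 'mysql' or 'pgsql' on Laravel connections.
$sql = "INSERT INTO ".$filessearch->getTable()." (path, filename, jobid) ".$files->toSql();
if ($filessearch->getConnection()->getDriverName() === 'mysql') {
    // Adjust $sql here if MySQL needs different quoting; the statement
    // above is standard SQL and is expected to work unchanged.
}
$filessearch->getConnection()->insert($sql, array($job));
```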
If I try to see the status of a big job (1.54 GB and 24,401 files), it fails. I had to increase the PHP execution time because it was failing at the maximum execution time of 30s. But now it gives this exception: https://gist.github.com/scambra/ba3e423f4d51b6cbbd99
This is the backtrace I got; I don't know why there are no line numbers: