Closed germayneng closed 5 years ago
Do you have exactly the same environment? Such as identical gcc, identical OS version, identical kernel.
Hi Laurae,
For the case of AWS vs. local, I cannot guarantee the same OS etc., but if it is base vs. venv with the same LightGBM version, will that guarantee the same results?
edit: Are there more modules I should include in the requirements.txt to ensure a similar environment?
No, you need at least an identical OS and an identical compiler (and probably an identical kernel now due to cache trashing from security mitigations) to guarantee identical random number generation.
Thank you for the prompt reply.
So basically random number generation is the factor that differs across environments? Is there any way to ensure the compiler and kernel are the same?
edit: By compiler, do you mean the Python version? If so, I have the same Python version (3.6.5) in the venv.
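As a quick first check when comparing machines, here is a minimal sketch (nothing from the thread is assumed; the function name is mine) that prints an environment "fingerprint". Note that `platform.python_compiler()` reports the compiler that built the Python interpreter itself, which is not necessarily the compiler that built LightGBM:

```python
# Sketch: print an environment "fingerprint" so two machines can be
# compared quickly. Identical Python and package versions alone are not
# enough for bit-identical results; OS, kernel, and compiler also matter.
import platform
import sys

def env_fingerprint():
    return {
        "python": sys.version,
        "os": platform.platform(),       # OS name + version
        "kernel": platform.release(),    # kernel release string
        "machine": platform.machine(),   # CPU architecture
        # Compiler that built the Python interpreter (NOT LightGBM):
        "compiler": platform.python_compiler(),
    }

if __name__ == "__main__":
    for key, value in env_fingerprint().items():
        print(f"{key}: {value}")
```

Running this on each environment and diffing the output shows at a glance whether the OS, kernel, or compiler differ.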
edit2:
So basically I have:
(base)
(local1)
(local2)
(aws 1)
all set up using the requirements.txt above, with the same Python. Only local1 and local2 give the same results; base and aws1 differ.
If getting exactly identical results is critical for your task, install the same OS and the corresponding kernel with identical OS packages from the same repositories to have identical environments (this means fully wiping and reinstalling from scratch either the local machine or the remote machine). If using Ubuntu, there is no guarantee that a setup script will produce identical OS environments, because apt-get gives only partial control over which repository updates are applied. For instance, two identical RHEL / SUSE installs (stable Enterprise distributions) give the same result, while two "identical" Ubuntu installs might not.
You can also try the latest version. I remember we fixed a bug for consistency (std::sort -> std::stable_sort). BTW, what is the result with sampling (row and col) removed?
@guolinke I believe 2.1.2 is the latest? Also, do you mean removing
`'colsample_bytree': 0.5, 'subsample': 0.7, 'subsample_freq': 3,`
Edit: Yes, I removed the settings above but am still not able to replicate results across environments/machines. Tested (base) vs (local venv1); it will probably differ on AWS as well.
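For reference, @guolinke's suggestion amounts to dropping the row/column sampling keys before re-running the comparison. A minimal sketch (the `objective` and `seed` values here are placeholders, not taken from the thread):

```python
# Sketch: strip the row/column sampling parameters and pin the seed
# before re-running the cross-environment comparison, so any remaining
# difference cannot come from the sampled subsets.
params = {
    "objective": "regression",  # placeholder objective
    "colsample_bytree": 0.5,    # column sampling
    "subsample": 0.7,           # row sampling
    "subsample_freq": 3,
    "seed": 42,                 # placeholder seed value
    "nthread": 1,               # single thread, as in the report
}

# Remove the sampling settings.
for key in ("colsample_bytree", "subsample", "subsample_freq"):
    params.pop(key, None)

print(params)
```

The resulting dict can then be passed to `lgb.train` as usual; if results still differ across environments with sampling off, sampling is ruled out as the cause.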
The latest is 2.2.2
Thanks @guolinke and @Laurae2
Managed to isolate the issue. It is not LightGBM: with the params above, the result can be reproduced, even with LightGBM 2.2.2 vs 2.1.2.
The issue was another module, pmdarima, which caused slight variations in the float values during the preprocessing stage, hidden within the dataset. So LightGBM was training on a slightly different dataset (off by a few floating-point digits).
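A sketch of how this kind of hidden preprocessing difference can be caught: fingerprint the preprocessed data bit-for-bit on each machine before it reaches LightGBM (the helper name and example values below are mine, not from the thread). If the digests differ, the discrepancy comes from preprocessing, not from the model:

```python
# Sketch: hash the preprocessed training matrix bit-for-bit so two
# machines can compare their inputs before training.
import hashlib
import struct

def dataset_digest(rows):
    """Hash a 2-D list of floats exactly (not approximately)."""
    h = hashlib.sha256()
    for row in rows:
        for value in row:
            h.update(struct.pack("<d", value))  # exact IEEE-754 bytes
    return h.hexdigest()

# Example: a difference far below display precision still changes the digest.
a = [[1.0, 2.0], [3.0, 4.0]]
b = [[1.0, 2.0], [3.0, 4.0 + 1e-15]]
print(dataset_digest(a) == dataset_digest(b))  # False: the tiny difference shows up
```

An approximate comparison (e.g. rounding before printing) would hide exactly the kind of last-digit variation pmdarima introduced here, which is why the hash is taken over the raw float bytes.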
Good day,
I realized that on different machines I am not able to reproduce exactly the same results from LightGBM. On the same machine, running the script multiple times gives me the same results, so I do not think the issue is in the script itself.
I set up an EC2 instance on AWS and created a virtual environment, installing from requirements.txt (so the modules are the same as in my local virtual environment).
There is still some sort of randomness in the predictions. Of course, running multiple times on AWS gives the same results; they are simply different from the local ones.
In sum, I created 3 environments: local base, local venv, and AWS venv. Running the script gives the same results within an environment but differs across environments. Seeds are already set and nthread = 1.
Some params:
and my requirements.txt to ensure LightGBM is the same version:
It would be good to hear what may have caused this. Thanks!