Open dgliu opened 2 months ago
Hi,
Do you mean the search ranges of the hyper-parameters? Most methods are re-tuned as follows:
Note: the hyper-parameter search is applied to the cross-entropy loss, while SCE/NCE adopt the same settings.
Hi, thanks for your answer; that is also valuable reference information. Let me describe my problem in more detail:
In each method's folder, an execution example is provided that calls the main file together with a configuration file: the parameters are set by `freerec.parser.Parser()`, and the configuration file is specified via `configs/X.yaml`.
However, the log files provided with the benchmark appear to have been generated by a grid-search setting (with different seeds) rather than by the setting above: there the parameters seem to be set through `freerec.parser.CoreParser()`, and, judging from the comments around the `check()` function in `CoreParser()`, the configuration file format also seems to differ from the one above.
Since I am unfamiliar with the FreeRec framework and its documentation does not seem sufficient, could you please add a configuration file and an execution example for this grid-search setting? Thanks.
Sorry, I didn't realize you would be interested in this script. You can run it by calling

```bash
freerec tune Beauty-5 config.yaml
```

where `config.yaml` is defined as follows:

```yaml
command: python main.py
envs:
  root: ../../data
  dataset: AmazonBeauty_550_Chron
  device: '0,1,2,3'
params:
  seed: [0, 1, 2, 3, 4]
defaults:
  config: configs/AmazonBeauty_550_Chron.yaml
```

Of course, `root` and `device` should be changed according to your environment.
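For intuition, the semantics of a tune config like this can be sketched in plain Python: fixed flags come from `envs` and `defaults`, while every combination of the lists under `params` yields one run. This is only an illustrative sketch of the grid expansion, not FreeRec's actual implementation; the function and variable names here are hypothetical.

```python
import itertools

# Illustrative dict mirroring the YAML config above.
config = {
    "command": "python main.py",
    "envs": {
        "root": "../../data",
        "dataset": "AmazonBeauty_550_Chron",
        "device": "0,1,2,3",
    },
    "params": {"seed": [0, 1, 2, 3, 4]},
    "defaults": {"config": "configs/AmazonBeauty_550_Chron.yaml"},
}

def expand_commands(cfg):
    """Emit one full command line per point of the `params` grid;
    `envs` and `defaults` contribute fixed flags to every run."""
    fixed = {**cfg.get("envs", {}), **cfg.get("defaults", {})}
    keys = list(cfg.get("params", {}))
    grids = [cfg["params"][k] for k in keys]
    commands = []
    for values in itertools.product(*grids):
        flags = {**fixed, **dict(zip(keys, values))}
        flag_str = " ".join(f"--{k}={v}" for k, v in flags.items())
        commands.append(f"{cfg['command']} {flag_str}")
    return commands

for cmd in expand_commands(config):
    print(cmd)
```

With only `seed` in `params`, this expands to five commands, one per seed; adding another list under `params` would multiply the grid accordingly.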
Since the GPUs and servers used may differ, I wanted to re-run the benchmark in our current environment.
Thanks for your answer. I have successfully tested it using the example you provided.
Hi, thank you for open-sourcing this valuable project.
Each method's folder currently only contains an execution example under a fixed set of hyper-parameters; an execution example under grid search (such as the one used in the benchmark) is not provided.
If it is convenient, could you please add a corresponding execution example? Thanks!