alibaba / FederatedScope

An easy-to-use federated learning platform
https://www.federatedscope.io
Apache License 2.0

Error when running the link-task examples for graph learning #770

Closed — zzborz closed this issue 4 months ago

zzborz commented 4 months ago

```
(fs) D:\Paper\federatedScope\FederatedScope-master>python federatedscope/main.py --cfg federatedscope/gfl/baseline/fedavg_gcn_fullbatch_on_kg.yaml
2024-04-29 08:34:14,331 (logging:124) INFO: the current machine is at 192.168.1.103
2024-04-29 08:34:14,332 (logging:126) INFO: the current dir is D:\Paper\federatedScope\FederatedScope-master
2024-04-29 08:34:14,332 (logging:127) INFO: the output dir is exp\FedAvg_gcn_on_FB15k-237_lr0.25_lstep16
Downloading https://raw.githubusercontent.com/MichSchli/RelationPrediction/master/data/FB-Toutanova/entities.dict
Downloading https://raw.githubusercontent.com/MichSchli/RelationPrediction/master/data/FB-Toutanova/relations.dict
Downloading https://raw.githubusercontent.com/MichSchli/RelationPrediction/master/data/FB-Toutanova/test.txt
Downloading https://raw.githubusercontent.com/MichSchli/RelationPrediction/master/data/FB-Toutanova/train.txt
Downloading https://raw.githubusercontent.com/MichSchli/RelationPrediction/master/data/FB-Toutanova/valid.txt
Processing...
Done!
2024-04-29 08:34:29,444 (config:243) INFO: the used configs are:
aggregator: BFT_args: byzantine_node_num: 0 inside_weight: 1.0 num_agg_groups: 1 num_agg_topk: [] outside_weight: 0.0 robust_rule: fedavg
asyn: use: False
attack: alpha_TV: 0.001 alpha_prop_loss: 0 attack_method: attacker_id: -1 classifier_PIA: randomforest edge_num: 100 edge_path: edge_data/ freq: 10 info_diff_type: l2 inject_round: 0 insert_round: 100000 label_type: dirty max_ite: 400 mean: [0.9637] mia_is_simulate_in: False mia_simulate_in_round: 20 pgd_eps: 2 pgd_lr: 0.1 pgd_poisoning: False poison_ratio: 0.5 reconstruct_lr: 0.01 reconstruct_optim: Adam scale_para: 1.0 scale_poisoning: False self_epoch: 6 self_lr: 0.05 self_opt: False setting: fix std: [0.1592] target_label_ind: -1 trigger_path: trigger/ trigger_type: edge
backend: torch
cfg_file:
check_completeness: False
criterion: type: CrossEntropyLoss
data: args: [] batch_size: 64 cSBM_phi: [0.5, 0.5, 0.5] cache_dir: consistent_label_distribution: True drop_last: False file_path: hetero_data_name: [] hetero_synth_batch_size: 32 hetero_synth_feat_dim: 128 hetero_synth_prim_weight: 0.5 is_debug: False loader: max_query_len: 128 max_seq_len: 384 max_tgt_len: 128 num_contrast: 0 num_of_client_for_data: [] num_steps: 30 num_workers: 0 pre_transform: ['Constant', {'value': 1.0, 'cat': False}] quadratic: dim: 1 max_curv: 12.5 min_curv: 0.02 root: data/ save_data: False server_holds_all: False shuffle: True sizes: [10, 5] splits: [0.8, 0.1, 0.1] splitter: rel_type splitter_args: [] subsample: 1.0 target_transform: [] test_pre_transform: [] test_target_transform: [] test_transform: [] transform: [] trunc_stride: 128 type: FB15k-237 val_pre_transform: [] val_target_transform: [] val_transform: [] walk_length: 2
dataloader: batch_size: 64 drop_last: False num_steps: 30 num_workers: 0 pin_memory: False shuffle: True sizes: [10, 5] theta: -1 type: pyg walk_length: 2
device: 0
distribute: use: False
early_stop: delta: 0.0 improve_indicator_mode: mean patience: 20
eval: best_res_update_round_wise_key: val_loss count_flops: True freq: 5 metrics: ['hits@1', 'hits@5', 'hits@10'] monitoring: [] report: ['weighted_avg', 'avg', 'fairness', 'raw'] split: ['test', 'val']
expname: FedAvg_gcn_on_FB15k-237_lr0.25_lstep16
expname_tag:
feat_engr: num_bins: 5 scenario: hfl secure: dp: encrypt: type: dummy key_size: 3072 type: encrypt selec_threshold: 0.05 selec_woe_binning: quantile type:
federate: atc_load_from: atc_vanilla: False client_num: 5 data_weighted_aggr: False ignore_weight: False join_in_info: [] make_global_eval: True master_addr: 127.0.0.1 master_port: 29500 merge_test_data: False merge_val_data: False method: FedAvg mode: standalone online_aggr: False process_num: 1 resource_info_file: restore_from: sample_client_num: 5 sample_client_rate: -1.0 sampler: uniform save_to: share_local_model: False total_round_num: 400 unseen_clients_rate: 0.0 use_diff: False use_ss: False
fedopt: use: False
fedprox: use: False
fedsageplus: a: 1.0 b: 1.0 c: 1.0 fedgen_epoch: 200 gen_hidden: 128 hide_portion: 0.5 loc_epoch: 1 num_pred: 5
fedswa: use: False
finetune: batch_or_epoch: epoch before_eval: False epoch_linear: 10 freeze_param: local_param: [] local_update_steps: 1 lr_linear: 0.005 optimizer: lr: 0.1 type: SGD scheduler: type: warmup_ratio: 0.0 simple_tuning: False weight_decay: 0.0
flitplus: factor_ema: 0.8 lambdavat: 0.5 tmpFed: 0.5 weightReg: 1.0
gcflplus: EPS_1: 0.05 EPS_2: 0.1 seq_length: 5 standardize: False
grad: grad_accum_count: 1 grad_clip: -1.0
hpo: fedex: cutoff: 0.0 diff: False eta0: -1.0 flatten_ss: True gamma: 0.0 pi_lr: 0.01 psn: False sched: auto ss: use: False fts: M: 100 M_target: 200 allow_load_existing_info: True diff: False fed_bo_max_iter: 50 g_var: 1e-06 gp_opt_schedule: 1 local_bo_epochs: 50 local_bo_max_iter: 50 ls: 1.0 obs_noise: 1e-06 ss: target_clients: [] use: False v_kernel: 1.0 var: 0.1 init_cand_num: 16 larger_better: False metric: client_summarized_weighted_avg.val_loss num_workers: 0 pbt: max_stage: 5 perf_threshold: 0.1 pfedhpo: discrete: False ss: target_fl_total_round: 1000 train_anchor: False train_fl: False use: False scheduler: rs sha: budgets: [] elim_rate: 3 iter: 0 ss: table: eps: 0.1 idx: 0 num: 27 trial_index: 0 working_folder: hpo
model: contrast_temp: 1.0 contrast_topk: 100 downstream_tasks: [] dropout: 0.5 embed_size: 8 gamma: 0 graph_pooling: mean hidden: 64 in_channels: 0 input_shape: () labelsmoothing: 0.1 lambda: 0.1 layer: 2 length_penalty: 2.0 max_answer_len: 30 max_length: 200 max_tree_depth: 3 min_length: 1 model_num_per_trainer: 1 model_type: google/bert_uncased_L-2_H-128_A-2 n_best_size: 20 no_repeat_ngram_size: 3 null_score_diff_threshold: 0.0 num_beams: 5 num_item: 0 num_labels: 1 num_of_trees: 10 num_user: 0 out_channels: 18 pretrain_tasks: [] stage: task: link type: gcn use_bias: True use_contrastive_loss: False
nbafl: use: False
outdir: exp\FedAvg_gcn_on_FB15k-237_lr0.25_lstep16
personalization: K: 5 beta: 1.0 epoch_feature: 1 epoch_linear: 2 local_param: [] local_update_steps: 16 lr: 0.25 lr_feature: 0.1 lr_linear: 0.1 regular_weight: 0.1 share_non_trainable_para: False weight_decay: 0.0
print_decimal_digits: 6
quantization: method: none nbits: 8
regularizer: mu: 0.0 type:
seed: 0
sgdmf: use: False
train: batch_or_epoch: batch data_para_dids: [] local_update_steps: 16 optimizer: lr: 0.25 type: SGD weight_decay: 0.0005 scheduler: type: warmup_ratio: 0.0
trainer: disp_freq: 50 local_entropy: alpha: 0.75 eps: 0.0001 gamma: 0.03 inc_factor: 1.0 sam: adaptive: False eta: 0.0 rho: 1.0 type: linkfullbatch_trainer val_freq: 100000000
use_gpu: True
verbose: 1
vertical: use: False
wandb: use: False
2024-04-29 08:34:29,514 (utils:147) INFO: The device information file is not provided
Traceback (most recent call last):
  File "D:\Paper\federatedScope\FederatedScope-master\federatedscope\main.py", line 52, in <module>
    runner = get_runner(data=data,
  File "D:\Anaconda\envs\fs\lib\site-packages\federatedscope\core\auxiliaries\runner_builder.py", line 52, in get_runner
    return runner_cls(data=data,
  File "D:\Anaconda\envs\fs\lib\site-packages\federatedscope\core\fed_runner.py", line 87, in __init__
    self._set_up()
  File "D:\Anaconda\envs\fs\lib\site-packages\federatedscope\core\fed_runner.py", line 336, in _set_up
    self.server = self._setup_server(
  File "D:\Anaconda\envs\fs\lib\site-packages\federatedscope\core\fed_runner.py", line 150, in _setup_server
    server_data, model, kw = self._get_server_args(resource_info,
  File "D:\Anaconda\envs\fs\lib\site-packages\federatedscope\core\fed_runner.py", line 363, in _get_server_args
    model = get_model(self.cfg.model,
  File "D:\Anaconda\envs\fs\lib\site-packages\federatedscope\core\auxiliaries\model_builder.py", line 128, in get_model
    input_shape = get_shape_from_data(local_data, model_config, backend)
  File "D:\Anaconda\envs\fs\lib\site-packages\federatedscope\core\auxiliaries\model_builder.py", line 42, in get_shape_from_data
    return data['data'].x.shape, num_label, num_edge_features
AttributeError: 'NoneType' object has no attribute 'shape'
```

Describe the bug
The same error is raised for each of the graph link-task examples.

To Reproduce Steps to reproduce the behavior:

  1. From the repository root, run `python federatedscope/main.py --cfg federatedscope/gfl/baseline/fedavg_gcn_fullbatch_on_kg.yaml`
  2. Wait for the FB15k-237 data to download and be processed
  3. See the `AttributeError` above during server setup


zzborz commented 4 months ago

The issue has been resolved.