ahworld22 opened 1 month ago
Hi @ahworld22, please reinstall apex with `--cuda_ext --cpp_ext`, or skip the apex installation.
Yes! But when I run plain `python setup.py install`, it completes. When I run `python setup.py install --cuda_ext --cpp_ext`, it always fails and says it can't find `csrc/flatten_unflatten.cpp`, even though that file is present in my checkout.
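For reference, a typical apex reinstall with the C++/CUDA extensions looks roughly like the commands below (flags follow what apex's README has documented; the clean-up step is illustrative). Note that the build must be run from inside the apex source checkout; a "cannot find csrc/..." error can mean the command is running from a directory that does not contain `csrc/`:

```shell
# Remove any partially built apex first (illustrative clean-up step).
pip uninstall -y apex

# Run the build from inside the apex source checkout:
cd apex

# Legacy setup.py route (the one used in this thread):
python setup.py install --cpp_ext --cuda_ext

# Or the pip route from apex's README:
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```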
It always says:

```
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:

    if __name__ == '__main__':
        freeze_support()
        ...

The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
```
I think it's a Windows problem: Windows has no fork, so multiprocessing needs the script's entry point wrapped in an `if __name__ == '__main__':` guard.
I run the project on Windows, but something seems wrong: train.py has no such guard, so its multiprocessing (the DataLoader workers) cannot start on Windows.
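A minimal sketch of the Windows-safe idiom, using only the standard library (the `square`/`run_pool` names are illustrative, not from train.py). On Windows, children are started with "spawn", which re-imports the main module; any code that launches workers (such as a PyTorch DataLoader with `num_workers > 0`) must therefore sit behind the `__main__` guard, or each re-imported child tries to launch workers of its own and raises exactly the RuntimeError shown in this thread:

```python
# Windows-safe multiprocessing sketch: worker launch lives behind
# the __main__ guard so spawn's re-import of this module is safe.
import multiprocessing as mp


def square(x):
    # Worker function. It must be defined at module level so a
    # spawned child can find it after re-importing this file.
    return x * x


def run_pool():
    # Starts worker processes with the "spawn" start method (the
    # only method Windows has); safe only when reached via __main__.
    with mp.get_context("spawn").Pool(2) as pool:
        return pool.map(square, [1, 2, 3])


if __name__ == "__main__":
    mp.freeze_support()   # harmless unless frozen into an .exe
    print(run_pool())     # prints [1, 4, 9]
```

Applied to this project, the fix would be wrapping the top-level training calls in train.py (the code around line 529) in the same `if __name__ == '__main__':` guard; setting `num_workers=0` on the DataLoaders is a common slower workaround.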
```
Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was
installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was:
ModuleNotFoundError("No module named 'amp_C'")
Epoch 0/119
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\Anaconda\envs\university\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "D:\Anaconda\envs\university\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "D:\Anaconda\envs\university\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "D:\Anaconda\envs\university\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "D:\Anaconda\envs\university\lib\runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "D:\Anaconda\envs\university\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "D:\Anaconda\envs\university\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\hujia\Desktop\University1652-Baseline-master\University1652-Baseline-master\train.py", line 529, in <module>
    model = train_model(model, model_test, criterion, optimizer_ft, exp_lr_scheduler,
  File "C:\Users\hujia\Desktop\University1652-Baseline-master\University1652-Baseline-master\train.py", line 237, in train_model
    for data, data2, data3, data4 in zip(dataloaders['satellite'], dataloaders['street'], dataloaders['drone'],
  File "D:\Anaconda\envs\university\lib\site-packages\torch\utils\data\dataloader.py", line 438, in __iter__
    return self._get_iterator()
  File "D:\Anaconda\envs\university\lib\site-packages\torch\utils\data\dataloader.py", line 384, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "D:\Anaconda\envs\university\lib\site-packages\torch\utils\data\dataloader.py", line 1048, in __init__
    w.start()
  File "D:\Anaconda\envs\university\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "D:\Anaconda\envs\university\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Anaconda\envs\university\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "D:\Anaconda\envs\university\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "D:\Anaconda\envs\university\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "D:\Anaconda\envs\university\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.
```