Closed Wzj02200059 closed 4 years ago
Sorry, the transforms in base were wrong. I have fixed it. Thanks a lot.
Hmm, it seems like an image type error after that fix:

```
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
TypeError: Expected Ptr
```
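For anyone hitting the same message: cv2.cvtColor only accepts numpy arrays, so this TypeError usually means a PIL image (or None from a failed read) was passed in. A minimal sketch of the conversion, assuming the image was loaded with PIL from a hypothetical sample.jpg:

```python
import cv2
import numpy as np
from PIL import Image

pil_img = Image.open("sample.jpg").convert("RGB")  # PIL image, not a numpy array
img = np.asarray(pil_img)                          # OpenCV functions expect numpy arrays
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
print(gray.shape)
```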
@Wzj02200059 I have fixed the problem.
@Wzj02200059 Thanks.
Thanks too.
When I train the small_satrn model, I find the accuracy improves very slowly, only 0.01 after 2000 iterations, and my dataset is simple number recognition. When I train tps_resnet_bilstm_attn or a simple CRNN model, it easily gets to 0.9. Is that normal?
I think it's okay; the accuracy was also low at the beginning when I ran the experiment.
PS: We are checking small_satrn and will modify it in a few days.
Okay~
```
2021-04-20 16:38:40,779 - INFO - Use GPU 0,1,2,3
2021-04-20 16:38:40,779 - INFO - Set cudnn deterministic True
2021-04-20 16:38:40,779 - INFO - Set cudnn benchmark True
2021-04-20 16:38:40,779 - INFO - Set seed 1111
2021-04-20 16:38:40,780 - INFO - Build model
2021-04-20 16:38:40,785 - INFO - GResNet init weights
2021-04-20 16:38:41,130 - INFO - TransformerEncoder init weights
2021-04-20 16:38:41,552 - INFO - TransformerDecoder init weights
2021-04-20 16:38:41,581 - INFO - TransformerHead init weights
Traceback (most recent call last):
  File "tools/train.py", line 42, in <module>
    main()
  File "tools/train.py", line 37, in main
    runner = TrainRunner(train_cfg, deploy_cfg, common_cfg)
  File "tools/../vedastr/runners/train_runner.py", line 19, in __init__
    train_cfg['data']['train'])
  File "tools/../vedastr/runners/base.py", line 83, in _build_dataloader
    transform = build_transform(cfg['transform'])
  File "tools/../vedastr/transforms/builder.py", line 11, in build_transform
    tf = build_from_cfg(cfg, TRANSFORMS)
  File "tools/../vedastr/utils/common.py", line 14, in build_from_cfg
    return obj_from_dict_registry(cfg, parent, default_args)
  File "tools/../vedastr/utils/common.py", line 79, in obj_from_dict_registry
    return obj_cls(**args)
  File "tools/../vedastr/transforms/transforms.py", line 192, in __init__
    super(ExpandRotate, self).__init__(**kwargs)
  File "tools/../vedastr/transforms/transforms.py", line 185, in __init__
    value=value, always_apply=always_apply, p=p)
TypeError: __init__() got an unexpected keyword argument 'value'
```
Here is the log while running small_satrn. I'm not sure where it's going wrong. I'm currently trying to train small_satrn on a custom lmdb dataset. Any help/suggestion would be great.
@vardhanaleti Hi, can you tell me what you've changed in the config file?
I've changed the paths as I'm training it on a different language.
```python
train_root_st = path
train_root_mlt = path
train_dataset_st = [dict(type='LmdbDataset', root=train_root_st)]
train_dataset_mlt = [dict(type='LmdbDataset', root=train_root_mlt)]

valid_root_real = path
valid_root_syn = path
valid_dataset_real = [dict(type='LmdbDataset', root=valid_root_real)]
valid_dataset_syn = [dict(type='LmdbDataset', root=valid_root_syn)]
```
The changes in the dictionary are mentioned below:
```python
train = dict(
    data=dict(
        train=dict(
            dataloader=dict(
                type='DataLoader',
                batch_size=batch_size,
                num_workers=4,
            ),
            sampler=dict(
                type='BalanceSampler',
                batch_size=batch_size,
                shuffle=True,
                oversample=True,
            ),
            dataset=dict(
                type='ConcatDatasets',
                datasets=[
                    dict(
                        type='ConcatDatasets',
                        datasets=train_dataset_st,
                    ),
                    dict(
                        type='ConcatDatasets',
                        datasets=train_dataset_mlt,
                    )
                ],
                batch_ratio=[0.5, 0.5],
                **dataset_params,
            ),
            transform=train_transforms,
        ),
        val=dict(
            dataloader=dict(
                type='DataLoader',
                batch_size=batch_size,
                num_workers=4,
                shuffle=False,
            ),
            sampler=dict(
                type='BalanceSampler',
                batch_size=batch_size,
                shuffle=True,
                oversample=False,
            ),
            dataset=dict(
                type='ConcatDatasets',
                datasets=[
                    dict(
                        type='ConcatDatasets',
                        datasets=valid_dataset_real,
                    ),
                    dict(
                        type='ConcatDatasets',
                        datasets=valid_dataset_syn,
                    )
                ],
                batch_ratio=[0.5, 0.5],
                **dataset_params,
            ),
            transform=test['data']['transform'],
        ),
    ),
)
```
@vardhanaleti
First, is all you changed the paths and the related datasets? How about the transforms? If you only changed the paths and related datasets, can you show me your environment? I get no error when I run the code, so maybe a related package has changed.
Second, you'd better not set sampler=dict(type='BalanceSampler') for validation; oversampling during validation may give an incorrect accuracy.
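For example, a validation block without the BalanceSampler could look like this (a sketch based on the config above; keep shuffle=False so every validation sample is evaluated exactly once, and check which dataset keys your vedastr version expects):

```python
val=dict(
    dataloader=dict(
        type='DataLoader',
        batch_size=batch_size,
        num_workers=4,
        shuffle=False,
    ),
    dataset=dict(
        type='ConcatDatasets',
        datasets=[
            dict(type='ConcatDatasets', datasets=valid_dataset_real),
            dict(type='ConcatDatasets', datasets=valid_dataset_syn),
        ],
        **dataset_params,
    ),
    transform=test['data']['transform'],
),
```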
I've only changed the paths and the related datasets and I haven't made any changes to transforms. I'm able to run the tps_resnet model with the same changes but not this. I'm running the models on google colab.
Here are the environment variables:

```
CUDNN_VERSION=8.0.4.30
__EGL_VENDOR_LIBRARY_DIRS=/usr/lib64-nvidia:/usr/share/glvnd/egl_vendor.d/
PYDEVD_USE_FRAME_EVAL=NO
LD_LIBRARY_PATH=/usr/lib64-nvidia
CLOUDSDK_PYTHON=python3
LANG=en_US.UTF-8
HOSTNAME=1a1173a9756f
OLDPWD=/
CLOUDSDK_CONFIG=/content/.config
NVIDIA_VISIBLE_DEVICES=all
DATALAB_SETTINGS_OVERRIDES={"kernelManagerProxyPort":6000,"kernelManagerProxyHost":"172.28.0.3","jupyterArgs":["--ip=\"172.28.0.2\""],"debugAdapterMultiplexerPath":"/usr/local/bin/dap_multiplexer"}
ENV=/root/.bashrc
PAGER=cat
NCCL_VERSION=2.7.8
TF_FORCE_GPU_ALLOW_GROWTH=true
JPY_PARENT_PID=47
NO_GCE_CHECK=True
PWD=/content/vedastr
HOME=/root
LAST_FORCED_REBUILD=20210420
CLICOLOR=1
DEBIAN_FRONTEND=noninteractive
LIBRARY_PATH=/usr/local/cuda/lib64/stubs
GCE_METADATA_TIMEOUT=0
GLIBCPP_FORCE_NEW=1
TBE_CREDS_ADDR=172.28.0.1:8008
TERM=xterm-color
SHELL=/bin/bash
GCS_READ_CACHE_BLOCK_SIZE_MB=16
PYTHONWARNINGS=ignore:::pip._internal.cli.base_command
MPLBACKEND=module://ipykernel.pylab.backend_inline
CUDA_VERSION=11.0.3
NVIDIA_DRIVER_CAPABILITIES=compute,utility
SHLVL=1
PYTHONPATH=/env/python
NVIDIA_REQUIRE_CUDA=cuda>=11.0 brand=tesla,driver>=418,driver<419 brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451
COLAB_GPU=1
GLIBCXX_FORCE_NEW=1
PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4
GITPAGER=cat
_=/usr/bin/printenv
```
Sorry, I meant the Python environment.
Sorry, since I'm running it on Colab, the number of packages is huge.
Python version: Python 3.7.10
Pip version: pip 19.3.1 from /usr/local/lib/python3.7/dist-packages/pip (python 3.7)
Package Version
absl-py 0.12.0
addict 2.4.0
alabaster 0.7.12
albumentations 0.1.12
altair 4.1.0
appdirs 1.4.4
argon2-cffi 20.1.0
astor 0.8.1
astropy 4.2.1
astunparse 1.6.3
async-generator 1.10
atari-py 0.2.6
atomicwrites 1.4.0
attrs 20.3.0
audioread 2.1.9
autograd 1.3
Babel 2.9.0
backcall 0.2.0
beautifulsoup4 4.6.3
bleach 3.3.0
blis 0.4.1
bokeh 2.3.1
Bottleneck 1.3.2
branca 0.4.2
bs4 0.0.1
CacheControl 0.12.6
cachetools 4.2.1
catalogue 1.0.0
certifi 2020.12.5
cffi 1.14.5
chainer 7.4.0
chardet 3.0.4
click 7.1.2
cloudpickle 1.3.0
cmake 3.12.0
cmdstanpy 0.9.5
colorcet 2.0.6
colorlover 0.3.0
community 1.0.0b1
configparser 5.0.2
contextlib2 0.5.5
convertdate 2.3.2
coverage 3.7.1
coveralls 0.5
crcmod 1.7
cufflinks 0.17.3
cupy-cuda101 7.4.0
cvxopt 1.2.6
cvxpy 1.0.31
cycler 0.10.0
cymem 2.0.5
Cython 0.29.22
daft 0.0.4
dask 2.12.0
datascience 0.10.6
debugpy 1.0.0
decorator 4.4.2
defusedxml 0.7.1
descartes 1.1.0
dill 0.3.3
distributed 1.25.3
dlib 19.18.0
dm-tree 0.1.6
docker-pycreds 0.4.0
docopt 0.6.2
docutils 0.17
dopamine-rl 1.0.5
earthengine-api 0.1.260
easydict 1.9
ecos 2.0.7.post1
editdistance 0.5.3
en-core-web-sm 2.2.5
entrypoints 0.3
ephem 3.7.7.1
et-xmlfile 1.0.1
fa2 0.3.5
fancyimpute 0.4.3
fastai 1.0.61
fastdtw 0.3.4
fastprogress 1.0.0
fastrlock 0.6
fbprophet 0.7.1
feather-format 0.4.1
filelock 3.0.12
firebase-admin 4.4.0
fix-yahoo-finance 0.0.22
Flask 1.1.2
flatbuffers 1.12
folium 0.8.3
future 0.16.0
gast 0.3.3
GDAL 2.2.2
gdown 3.6.4
gensim 3.6.0
geographiclib 1.50
geopy 1.17.0
gin-config 0.4.0
gitdb 4.0.7
GitPython 3.1.14
glob2 0.7
google 2.0.3
google-api-core 1.26.3
google-api-python-client 1.12.8
google-auth 1.28.1
google-auth-httplib2 0.0.4
google-auth-oauthlib 0.4.4
google-cloud-bigquery 1.21.0
google-cloud-bigquery-storage 1.1.0
google-cloud-core 1.0.3
google-cloud-datastore 1.8.0
google-cloud-firestore 1.7.0
google-cloud-language 1.2.0
google-cloud-storage 1.18.1
google-cloud-translate 1.5.0
google-colab 1.0.0
google-pasta 0.2.0
google-resumable-media 0.4.1
googleapis-common-protos 1.53.0
googledrivedownloader 0.4
graphviz 0.10.1
greenlet 1.0.0
grpcio 1.32.0
gspread 3.0.1
gspread-dataframe 3.0.8
gym 0.17.3
h5py 2.10.0
HeapDict 1.0.1
hijri-converter 2.1.1
holidays 0.10.5.2
holoviews 1.14.3
html5lib 1.0.1
httpimport 0.5.18
httplib2 0.17.4
httplib2shim 0.0.3
humanize 0.5.1
hyperopt 0.1.2
ideep4py 2.0.0.post3
idna 2.10
imageio 2.4.1
imagesize 1.2.0
imbalanced-learn 0.4.3
imblearn 0.0
imgaug 0.2.6
importlib-metadata 3.10.1
importlib-resources 5.1.2
imutils 0.5.4
inflect 2.1.0
iniconfig 1.1.1
intel-openmp 2021.2.0
intervaltree 2.1.0
ipykernel 4.10.1
ipython 5.5.0
ipython-genutils 0.2.0
ipython-sql 0.3.9
ipywidgets 7.6.3
itsdangerous 1.1.0
jax 0.2.12
jaxlib 0.1.65+cuda110
jdcal 1.4.1
jedi 0.18.0
jieba 0.42.1
Jinja2 2.11.3
joblib 1.0.1
jpeg4py 0.1.4
jsonschema 2.6.0
jupyter 1.0.0
jupyter-client 5.3.5
jupyter-console 5.2.0
jupyter-core 4.7.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
kaggle 1.5.12
kapre 0.1.3.1
Keras 2.4.3
Keras-Preprocessing 1.1.2
keras-vis 0.4.1
kiwisolver 1.3.1
knnimpute 0.1.0
korean-lunar-calendar 0.2.1
librosa 0.8.0
lightgbm 2.2.3
llvmlite 0.34.0
lmdb 0.99
LunarCalendar 0.0.9
lxml 4.2.6
Markdown 3.3.4
MarkupSafe 1.1.1
matplotlib 3.2.2
matplotlib-venn 0.11.6
missingno 0.4.2
mistune 0.8.4
mizani 0.6.0
mkl 2019.0
mlxtend 0.14.0
more-itertools 8.7.0
moviepy 0.2.3.5
mpmath 1.2.1
msgpack 1.0.2
multiprocess 0.70.11.1
multitasking 0.0.9
murmurhash 1.0.5
music21 5.5.0
natsort 5.5.0
nbclient 0.5.3
nbconvert 5.6.1
nbformat 5.1.3
nest-asyncio 1.5.1
networkx 2.5.1
nibabel 3.0.2
nltk 3.2.5
notebook 5.3.1
np-utils 0.5.12.1
numba 0.51.2
numexpr 2.7.3
numpy 1.19.5
nvidia-ml-py3 7.352.0
oauth2client 4.1.3
oauthlib 3.1.0
okgrade 0.4.3
opencv-contrib-python 4.1.2.30
opencv-python 4.1.2.30
openpyxl 2.5.9
opt-einsum 3.3.0
osqp 0.6.2.post0
packaging 20.9
palettable 3.3.0
pandas 1.1.5
pandas-datareader 0.9.0
pandas-gbq 0.13.3
pandas-profiling 1.4.1
pandocfilters 1.4.3
panel 0.11.2
param 1.10.1
parso 0.8.2
pathlib 1.0.1
pathtools 0.1.2
patsy 0.5.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 6.2.2
pip 19.3.1
pip-tools 4.5.1
plac 1.1.3
plotly 4.4.1
plotnine 0.6.0
pluggy 0.7.1
pooch 1.3.0
portpicker 1.3.1
prefetch-generator 1.0.1
preshed 3.0.5
prettytable 2.1.0
progressbar2 3.38.0
prometheus-client 0.10.1
promise 2.3
prompt-toolkit 1.0.18
protobuf 3.12.4
psutil 5.4.8
psycopg2 2.7.6.1
ptyprocess 0.7.0
py 1.10.0
pyarrow 3.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycocotools 2.0.2
pycparser 2.20
pyct 0.4.8
pydata-google-auth 1.1.0
pydot 1.3.0
pydot-ng 2.0.0
pydotplus 2.0.2
PyDrive 1.3.1
pyemd 0.5.1
pyerfa 1.7.2
pyglet 1.5.0
Pygments 2.6.1
pygobject 3.26.1
pymc3 3.7
PyMeeus 0.5.11
pymongo 3.11.3
pymystem3 0.2.0
PyOpenGL 3.1.5
pyparsing 2.4.7
pyrsistent 0.17.3
pysndfile 1.3.8
PySocks 1.7.1
pystan 2.19.1.1
pytest 3.6.4
python-apt 0.0.0
python-chess 0.23.11
python-dateutil 2.8.1
python-louvain 0.15
python-slugify 4.0.1
python-utils 2.5.6
pytz 2018.9
pyviz-comms 2.0.1
PyWavelets 1.1.1
PyYAML 3.13
pyzmq 22.0.3
qdldl 0.1.5.post0
qtconsole 5.0.3
QtPy 1.9.0
regex 2019.12.20
requests 2.23.0
requests-oauthlib 1.3.0
resampy 0.2.2
retrying 1.3.3
rpy2 3.4.3
rsa 4.7.2
scikit-image 0.16.2
scikit-learn 0.22.2.post1
scipy 1.4.1
screen-resolution-extra 0.0.0
scs 2.1.3
seaborn 0.11.1
Send2Trash 1.5.0
sentry-sdk 1.0.0
setuptools 56.0.0
setuptools-git 1.2
Shapely 1.7.1
shortuuid 1.0.1
simplegeneric 0.8.1
six 1.15.0
sklearn 0.0
sklearn-pandas 1.8.0
smart-open 5.0.0
smmap 4.0.0
snowballstemmer 2.1.0
sortedcontainers 2.3.0
SoundFile 0.10.3.post1
spacy 2.2.4
Sphinx 1.8.5
sphinxcontrib-serializinghtml 1.1.4
sphinxcontrib-websupport 1.2.4
SQLAlchemy 1.4.7
sqlparse 0.4.1
srsly 1.0.5
statsmodels 0.10.2
subprocess32 3.5.4
sympy 1.7.1
tables 3.4.4
tabulate 0.8.9
tblib 1.7.0
tensorboard 2.4.1
tensorboard-plugin-wit 1.8.0
tensorflow 2.4.1
tensorflow-datasets 4.0.1
tensorflow-estimator 2.4.0
tensorflow-gcs-config 2.4.0
tensorflow-hub 0.12.0
tensorflow-metadata 0.29.0
tensorflow-probability 0.12.1
termcolor 1.1.0
terminado 0.9.4
terminaltables 3.1.0
testpath 0.4.4
text-unidecode 1.3
textblob 0.15.3
textgenrnn 1.4.1
Theano 1.0.5
thinc 7.4.0
tifffile 2021.4.8
toml 0.10.2
toolz 0.11.1
torch 1.8.1+cu101
torchsummary 1.5.1
torchtext 0.9.1
torchvision 0.9.1+cu101
tornado 5.1.1
tqdm 4.41.1
traitlets 5.0.5
tweepy 3.10.0
typeguard 2.7.1
typing-extensions 3.7.4.3
tzlocal 1.5.1
uritemplate 3.0.1
urllib3 1.24.3
vega-datasets 0.9.0
wandb 0.10.27
wasabi 0.8.2
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.36.2
widgetsnbextension 3.5.1
wordcloud 1.5.0
wrapt 1.12.1
xarray 0.15.1
xgboost 0.90
xkit 0.0.0
xlrd 1.1.0
xlwt 1.3.0
yellowbrick 0.9.1
zict 2.0.0
zipp 3.4.1
@vardhanaleti Can you update albumentations to the newest version?
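For anyone else hitting the "unexpected keyword argument 'value'" error above: the traceback shows the installed albumentations Rotate rejecting the value keyword that vedastr's ExpandRotate passes through, which points to a very old albumentations release (0.1.12 in the package list). A quick check from a notebook cell, assuming pip is available in the runtime:

```python
# Print the installed albumentations version; old releases reject keyword
# arguments (such as Rotate's `value`) that vedastr's transforms pass through.
import albumentations
print(albumentations.__version__)

# To upgrade from Colab, run in a cell:
# !pip install -U albumentations
```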
I've upgraded it. It doesn't show the error anymore. Thank you very much! But now it says that CUDA is out of memory. Is it because the small_satrn model is too heavy? Is there any way I can still run it on Colab?
@vardhanaleti Use a smaller batch size, please.
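For instance, the batch size is defined once near the top of the config and used by both dataloaders, so lowering it is a one-line change (the value that fits depends on the Colab GPU; this is just a sketch):

```python
# Halve the batch size until training fits into GPU memory, e.g. try 64, 32, 16.
batch_size = 32  # hypothetical value; the repo default is larger
```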
Thank you very much.
You're welcome.
I cannot run the small_satrn model; my config file only changed the dataset paths, and the tps_resnet_bilstm_attn model is ok. My only modification is:

```python
train_dataset_mj = [dict(type='FolderDataset', root='/home/wzj/train')]
train_dataset_st = [dict(type='FolderDataset', root='/home/wzj/train1')]

# valid
valid_root = data_root + 'validation/'
valid_dataset = dict(type='FolderDataset', root='/home/wzj/validation_miniset', **test_dataset_params)
```

and the error is:

```
  File "tools/../vedastr/datasets/base.py", line 52, in __getitem__
    img, label = self.transforms(img, label)
TypeError: __call__() takes from 1 to 2 positional arguments but 3 were given
```
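Not an official answer, just a note for readers who land here: this TypeError comes from calling the composed transform positionally with (img, label) when the composed object only accepts keyword arguments, and the first comment in this thread mentions a fix to the transforms call in base, so pulling the latest code is the first thing to try. For illustration, the albumentations calling convention, under the assumption that self.transforms ends up being an albumentations Compose:

```python
import albumentations as A
import numpy as np

transforms = A.Compose([A.Rotate(limit=10, p=1.0)])
img = np.zeros((32, 100, 3), dtype=np.uint8)

# Albumentations pipelines must be called with named arguments ...
out = transforms(image=img)
rotated = out['image']

# ... whereas a positional call such as transforms(img, label) fails; in old
# releases it fails with exactly this "takes from 1 to 2 positional arguments
# but 3 were given" TypeError.
```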