haixiansheng / unet-keras-for-Multi-classification


Error occurred when running the demo #2

Open sunyongke opened 4 years ago

sunyongke commented 4 years ago

python main.py -n 001 -lr 0.00004 -ldr 0.00008 -b 16 -s 60 -e 80
Using TensorFlow backend.
2019-11-13 13:22:56.058414: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-11-13 13:22:56.198953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:82:00.0
totalMemory: 7.93GiB freeMemory: 7.84GiB
2019-11-13 13:22:56.199013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-11-13 13:22:56.661630: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-13 13:22:56.661690: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-11-13 13:22:56.661702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-11-13 13:22:56.661836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7566 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:82:00.0, compute capability: 6.1)
/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/keras_preprocessing/image/image_data_generator.py:699: UserWarning: This ImageDataGenerator specifies featurewise_center, but it hasn't been fit on any training data. Fit it first by calling .fit(numpy_data).
  warnings.warn('This ImageDataGenerator specifies '
/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/keras_preprocessing/image/image_data_generator.py:707: UserWarning: This ImageDataGenerator specifies featurewise_std_normalization, but it hasn't been fit on any training data. Fit it first by calling .fit(numpy_data).
  warnings.warn('This ImageDataGenerator specifies '
2019-11-13 13:23:10.721859: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.25GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-11-13 13:23:10.764330: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 868.50MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

...
2019-11-13 13:23:20.794389: I tensorflow/core/common_runtime/bfc_allocator.cc:645] Sum Total of in-use chunks: 7.01 GiB
2019-11-13 13:23:20.794413: I tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats:
Limit: 7933876634
InUse: 7528904192
MaxInUse: 7682977792
NumAllocs: 1654
MaxAllocSize: 2495660032

2019-11-13 13:23:20.794570: W tensorflow/core/common_runtime/bfc_allocator.cc:271] [in-use memory map elided]
2019-11-13 13:23:20.794613: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at image_resizer_state.h:115 : Resource exhausted: OOM when allocating tensor with shape[16,256,256,128] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "main.py", line 71, in <module>
    training = model.fit_generator(myGene, steps_per_epoch=steps_per_epoch, epochs=epochs, validation_steps=10, callbacks=[model_checkpoint])
  File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/keras/engine/training.py", line 1418, in fit_generator
    initial_epoch=initial_epoch)
  File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/keras/engine/training_generator.py", line 217, in fit_generator
    class_weight=class_weight)
  File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/keras/engine/training.py", line 1217, in train_on_batch
    outputs = self.train_function(ins)
  File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
  File "/home/syk/miniconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[16,256,256,128] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node up_sampling2d_4/ResizeNearestNeighbor}} = ResizeNearestNeighbor[T=DT_FLOAT, _class=["loc:@train...ighborGrad"], align_corners=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](leaky_re_lu_19/LeakyRelu-0-0-TransposeNCHWToNHWC-LayoutOptimizer, up_sampling2d_4/mul)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[{{node metrics/acc/Mean/_1099}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_9494_metrics/acc/Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
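For reference, the failing allocation can be sized directly from the shape in the error: a float32 tensor of shape [16, 256, 256, 128] is 512 MiB on its own, and a U-Net keeps many activations of this scale alive for backprop, which is how a batch of 16 exhausts the roughly 7.4 GiB the allocator reports. A quick check:

```python
# Size of the OOM tensor reported in the log (float32 = 4 bytes per element).
shape = (16, 256, 256, 128)  # batch, height, width, channels

num_bytes = 4
for dim in shape:
    num_bytes *= dim

print(num_bytes)                 # 536870912 bytes
print(num_bytes / 2**20, "MiB")  # 512.0 MiB
```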

haixiansheng commented 4 years ago

OOM! I think your image dimensions are too big, or you need a GPU with more memory. When I ran the code, my images were 1024×1024 and my GPU was an NVIDIA GTX 1080.
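If a bigger card isn't an option, two common workarounds are lowering the batch size (the `-b 16` in the command above is large for an 8 GB card) and letting TensorFlow allocate GPU memory on demand instead of reserving it all up front. A minimal sketch, assuming TF 1.x with standalone Keras as the traceback paths suggest:

```python
# Sketch: enable on-demand GPU memory allocation before building the model.
# Assumes TensorFlow 1.x and the standalone Keras backend seen in the traceback.
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grab GPU memory as needed, not all at once
set_session(tf.Session(config=config))
```

Note that `allow_growth` only avoids up-front over-reservation; if the model genuinely needs more than 8 GB at batch 16, dropping to `-b 4` or `-b 2` is the more reliable fix.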


sunyongke commented 4 years ago

I ran the code with your image data, and my GPU is a GTX 1070.