AlexeyAB / darknet

YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet)
http://pjreddie.com/darknet/

Can I use double the RAM with 2 TITAN RTX connected via NVLink? #2932

Open · HanSeYeong opened 5 years ago

HanSeYeong commented 5 years ago

I wonder if I can use 48 GB of RAM with 2 TITAN RTX cards.

I trained my data with the configuration below.

width = height = 576, batch = 64, subdivisions = 16
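
For reference, these parameters live in the `[net]` section of the darknet cfg file; a minimal sketch with the values above:

```
[net]
# network input resolution (must be a multiple of 32)
width=576
height=576
# images processed per training iteration
batch=64
# each iteration is split into batch/subdivisions mini-batches
# so the iteration fits into GPU memory
subdivisions=16
```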

I also added some negative photos with blank label files, and I got 92% accuracy, which is really awesome.

But I have two further questions:

1. If I connect two TITAN RTX cards with NVLink, will my RAM double, and can I use that RAM in training?
2. My dataset resolution is 1280x720, so I can either increase the network width and height or decrease subdivisions. I want to recognize special trucks, which are reasonably big. In this situation, which option do you recommend to increase accuracy?

I am always grateful to you. Thanks

AlexeyAB commented 5 years ago

> If I connect two TITAN RTX cards with NVLink, will my RAM double, and can I use that RAM in training?

No (as far as I know).

> My dataset resolution is 1280x720, so I can either increase the network width and height or decrease subdivisions. I want to recognize special trucks, which are reasonably big. In this situation, which option do you recommend to increase accuracy?

If your objects are big, then you shouldn't set a high network resolution.

Just train two models with different cfg-files and check which one gives you the highest mAP (accuracy).
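
For the comparison, this fork has a built-in mAP check via the `detector map` command; the file names below (`obj.data`, `yolo-obj.cfg`, the weights path) are placeholders for your own files:

```
./darknet detector map data/obj.data cfg/yolo-obj.cfg backup/yolo-obj_best.weights
```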

HanSeYeong commented 5 years ago

@AlexeyAB No double RAM?

Then I wonder how you could train your weights with width and height of 608 and subdivisions=16. In my experience, 23 GB of memory is required with that configuration.

I want to know which graphics card you used before. Thanks for reading my question.

HanSeYeong commented 5 years ago

[image: titanrtx]

@AlexeyAB They are advertising that NVLink can double the RAM, so I can't believe that I can't use 48 GB of memory to train a YOLO model....

AlexeyAB commented 5 years ago

@HanSeYeong

> Then I wonder how you could train your weights with width and height of 608 and subdivisions=16. In my experience, 23 GB of memory is required with that configuration.

If you want to have 24 GB of GPU-RAM, then you can use:

> [image: titanrtx]
> @AlexeyAB They are advertising that NVLink can double the RAM, so I can't believe that I can't use 48 GB of memory to train a YOLO model....

I have never tried this.

If you can combine the memory of two RTX cards, successfully launch darknet with lower subdivisions, and measure the speedup or slowdown, then let me know.
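
For anyone benchmarking this, a typical training launch in this fork looks like the following; `-map` plots mAP during training, and the average iteration time printed to the console is the number to compare. The data, cfg, and pretrained-weights file names are placeholders for your own setup:

```
./darknet detector train data/obj.data cfg/yolo-obj.cfg darknet53.conv.74 -map
```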

HanSeYeong commented 5 years ago

Thanks @AlexeyAB

I think you are right. The speed bottleneck will still matter even if I can make them appear as one GPU.

But I'll buy one more TITAN to accelerate training and try whether I can use double the RAM.

I really appreciate it.

AlexeyAB commented 5 years ago

@HanSeYeong

It seems Turing GPUs can share their VRAM.

https://www.techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_SLI_RTX_2080_SLI_NVLink/9.html

> With NVLink things have changed dramatically. All SLI member cards can now share their memory, with the VRAM of each card sitting at different addresses in a flat address space. Each GPU is able to access the other's memory.

So if the bottleneck is the VRAM amount rather than VRAM bandwidth, then it can improve performance, for example by allowing a larger mini-batch, which speeds up training.
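
For anyone who wants to verify this on their own machine, here is a minimal CUDA sketch (not part of darknet) that checks whether the two GPUs can access each other's VRAM, which is the peer-to-peer mechanism NVLink memory pooling relies on:

```c
// p2p_check.cu - a hedged sketch, not darknet's method.
// Compile with: nvcc p2p_check.cu -o p2p_check
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int n = 0;
    cudaGetDeviceCount(&n);
    printf("Found %d CUDA device(s)\n", n);
    for (int a = 0; a < n; a++) {
        for (int b = 0; b < n; b++) {
            if (a == b) continue;
            int can = 0;
            // reports whether device a can directly address device b's VRAM
            cudaDeviceCanAccessPeer(&can, a, b);
            printf("GPU %d -> GPU %d peer access: %s\n",
                   a, b, can ? "yes" : "no");
            if (can) {
                cudaSetDevice(a);
                // allows kernels running on GPU a to read/write GPU b's VRAM
                cudaDeviceEnablePeerAccess(b, 0);
            }
        }
    }
    return 0;
}
```

Note that this only confirms P2P capability; darknet itself would still need code changes to allocate one model's buffers across both devices.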

HanSeYeong commented 5 years ago

@AlexeyAB Yeah, that's what I want from NVLink!

But as I searched, I couldn't find how to use double VRAM via NVLink... If you find a solution, please share it with all YOLO lovers!

Thanks for notifying me!

pullmyleg commented 4 years ago

@HanSeYeong not sure if you have made progress on this or tried it? But it seems RTX cards in a Linux environment will support memory pooling with CUDA. Not sure how, or if, it will work with darknet, but I'm curious to find out.

See below:

https://www.pugetsystems.com/labs/articles/NVLink-on-NVIDIA-GeForce-RTX-2080-2080-Ti-in-Windows-10-1253/

https://www.daz3d.com/forums/discussion/353011/test-if-nvlink-is-working-for-vram-pooling
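
Before testing either of those, the NVLink connection itself can be verified from the Linux command line with standard `nvidia-smi` subcommands:

```
nvidia-smi topo -m          # connectivity matrix; NV# entries between GPU pairs mean NVLink
nvidia-smi nvlink --status  # per-link NVLink state and speed
```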