cbuchner1 / CudaMiner

a CUDA accelerated litecoin mining application based on pooler's CPU miner

noob -- 'maximum total warps' not reported the same cross platform Ubuntu 13.10 <> Windows 7 (x64) -- performance exactly the same (LOW priority, just curious) #111

Closed thdillin closed 10 years ago

thdillin commented 10 years ago

Using the source released 2-28-2014 (and many hours when I was supposed to be sleeping), I built cudaminer on Ubuntu 13.10 (x86) and on Windows 7 (64-bit).

It works perfectly, and the NVIDIA GTX 750 (not the Ti) holds at 227 KHps (LTC) on both platforms with no overclocking.

Same hardware; I just changed the hard drive (SSDs on the Windows side, a 10k 320 on Ubuntu).

Same driver version on both platforms (334.x), but NOT the same effort to install the drivers. (Ubuntu was tough because I had to update GRUB, X Windows, and a long list of other things before cudaminer would see the drivers.)

CUDA Tools 5.0 on Ubuntu (I could not get 5.5 to install because I have no idea how to make the OS use gcc 4.6 instead of the natively installed 4.8; cudaminer is looking for gcc 4.6 (I think), so it won't build with 4.8. {I will go back and see if that makes a difference after some sleep.})

CUDA Tools 5.5 on Windows 7 (64).

Here is what is reported on Ubuntu 13.10 (x86): "GPU #0: maximum total warps (BxW): 148". I used -l T8x12, which holds at 227.2 KHps {<<Average}

Here is what is reported on Windows 7 (64): "GPU #0: maximum total warps (BxW): 203". I used -l T8x24, which holds at 227.2 KHps {<<Average}
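(If BxW here means blocks × warps per block, both configs stay under their reported caps: 8 × 12 = 96 total warps vs. the 148 limit on Ubuntu, and 8 × 24 = 192 vs. the 203 limit on Windows.)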

Board is the ASUS GTX750 OC.

Any ideas why the max warps are lower on the *Nix side and higher on the Microstink side?

Aside from the platform (which should not matter), the only difference was the version of CUDA Tools I used to build. Could that be it?

As I said in the title, I am a noob, and this is not even really important. I am just curious and would like to understand this.

Thanks, TwD

cbuchner1 commented 10 years ago

Windows needs more safety margin in the memory allocation, or the driver crashes, and worse...

It's related to the WDDM driver model, which imposes some restrictions on memory allocations that don't exist on Linux (or Windows XP and Mac OS).

Christian
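For illustration only (this is not cudaminer's actual autotune code): a minimal CUDA sketch of the idea Christian describes, deriving a "maximum total warps" figure from free device memory while reserving a larger safety margin under WDDM than under the Linux/XP driver model. The 0.80/0.90 margins are made-up placeholders, and the per-warp figure assumes the standard scrypt(1024,1,1) scratchpad of 128 KiB per thread, i.e. 4 MiB per 32-thread warp.

#include <cstdio>
#include <cstddef>
#include <cuda_runtime.h>

int main() {
    size_t free_bytes = 0, total_bytes = 0;
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed\n");
        return 1;
    }

#ifdef _WIN32
    // Placeholder margin: leave a bigger reserve under WDDM, which restricts
    // large device allocations in ways the Linux/XP driver model does not.
    const double safety = 0.80;
#else
    const double safety = 0.90;
#endif

    // Assumed scrypt scratchpad: 128 KiB per thread -> 4 MiB per 32-thread warp.
    const size_t bytes_per_warp = 32 * 128 * 1024;
    const size_t usable = (size_t)((double)free_bytes * safety);
    const size_t max_warps = usable / bytes_per_warp;

    printf("free: %zu MiB, usable: %zu MiB, maximum total warps (BxW): %zu\n",
           (size_t)(free_bytes >> 20), (size_t)(usable >> 20), max_warps);
    return 0;
}

The cap the real autotune reports may also depend on what else the driver and desktop already have allocated on the card, which could contribute to the two platforms printing different numbers.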
