kmddd59 / pyrit

Automatically exported from code.google.com/p/pyrit

Pyrit does not recognize HD5770 and HD5870 on same PC #137

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. PC with 4 PCI-X slots and one HD5770: works fine
2. I add one HD5870
3. Pyrit recognizes only the video card in the primary PCI-X x16 slot

What is the expected output? What do you see instead?
I expected the command "pyrit list_cores" to show:
Videocard #1
Videocard #2
cpu (sse2) #1
cpu (sse2) #2

Instead I see:
Videocard #1
cpu (sse2) #1
cpu (sse2) #2
cpu (sse2) #3

What version of the product are you using? On what operating system?
Pyrit r236
ATI stream 2.01
ATI driver 10.2
Debian stable 64 bit with kernel linux 2.6.26-2-amd64

Please provide any additional information below.
I am running a 600 W power supply (585 W real). I tried the following:
Run the HD5770 alone in the primary PCI-X x16 slot: it works.
Run the HD5870 alone in the primary PCI-X x16 slot: it works.

I put the HD5770 in the primary PCI-X x16 slot and the HD5870 in the secondary one: only the HD5770 is recognized and works.
I put the HD5870 in the primary PCI-X x16 slot and the HD5770 in the secondary one: only the HD5870 is recognized and works.

PS: Another strange data point: the HD5770 does about 23,000 PMK/s, so I expected the HD5870 to do at least 45,000 PMK/s, but instead it only does 41,000 PMK/s. That is strange because the HD5870 has 1600 stream processors (the HD5770 only 800) and the other parameters (GPU and RAM clocks) are identical: performance does not scale linearly with the number of stream processors.

Original issue reported on code.google.com by pyrit.lo...@gmail.com on 9 Apr 2010 at 5:32

GoogleCodeExporter commented 9 years ago
Same problem, but with 2x HD4870 (2 different cards) in CrossFire.
"pyrit list_cores" shows only the 1st card plus 3x CPU cores.
How can we use both cards in pyrit?
Thanks.

Original comment by Karser...@gmail.com on 10 Apr 2010 at 9:11

GoogleCodeExporter commented 9 years ago
I thought about this last night; maybe it is not a pyrit issue but a configuration issue. Maybe it is not enough to just plug in another video card to have it running; maybe further actions are required.

I had a look at /etc/X11/xorg.conf, the configuration file for the graphics server: inside there is an entry for only one video card. Here is the entry:

Section "Device"
        Identifier "aticonfig-device[0]-0"
        Driver     "fglrx"
        BusID      "PCI:1:0:0"
EndSection

Maybe this entry needs to be duplicated, adding the following:

Section "Device"
        Identifier "aticonfig-device[X]-Y"
        Driver     "fglrx"
        BusID      "PCI:Z:0:0"
EndSection

with X, Y and Z set according to your own hardware.
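A quick way to find the right Z value for the BusID is lspci; a minimal sketch (assuming pciutils is installed; the grep pattern is only illustrative):

```shell
# Show every display adapter the kernel sees, with its PCI bus address.
# An address like "02:00.0" corresponds to BusID "PCI:2:0:0" in xorg.conf.
lspci | grep -Ei 'vga|display'
```

If only one adapter shows up here, the second card is not even visible to the kernel and no xorg.conf change will help.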

I would be glad to test this idea myself, but I have an issue with my power supply and cannot use 2 video cards at the same time right now because there is not enough power.

Can KarserasL do a test with the xorg.conf modification? By the way, also try disabling CrossFire...

Original comment by pyrit.lo...@gmail.com on 10 Apr 2010 at 10:52

GoogleCodeExporter commented 9 years ago
Well, I played a bit with xorg.conf and the aticonfig tool.

If I use the --dual-head option to create 2 desktops and plug my 2nd monitor into the 2nd card, I get this in xorg.conf:
Section "Device"
        Identifier "aticonfig-device[0]-0"
        Driver     "fglrx"
        BusID      "PCI:1:0:0"
EndSection

Section "Device"
        Identifier "aticonfig-device[1]-0"
        Driver     "fglrx"
        BusID      "PCI:2:0:0"
EndSection

When I run "pyrit list_cores" it still shows 1 GPU and 3x CPU cores, but I think it uses the 1st card if I run the command on the first display and the 2nd card if I run it on the 2nd monitor that's connected to the 2nd card.

So I took a test capture (.cap) file and ran a dictionary attack on the 1st and, at the same time, on the 2nd monitor. Both monitors showed 25,000 PMK/s.
I'm no expert in Linux, but I think both cards were working, with 3x CPU cores each.
The PC was a bit slow at that moment.

Dunno if that helps and thanks for the response.
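The two-monitor test above can be sketched as two simultaneous pyrit runs pinned to different X screens via the DISPLAY variable (the capture file and wordlist names here are placeholders):

```shell
# Run on X screen :0.0, driven by the 1st card:
DISPLAY=:0.0 pyrit -r test.cap -i words.txt attack_passthrough &

# At the same time on X screen :0.1, driven by the 2nd card:
DISPLAY=:0.1 pyrit -r test.cap -i words.txt attack_passthrough &
wait
```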

Original comment by Karser...@gmail.com on 10 Apr 2010 at 12:09

GoogleCodeExporter commented 9 years ago
KarserasL, my HD5770 does about 23,000 PMK/s, and your HD4870 should do more or less the same, so I suspect only one of your video cards is running...

But to be 100% sure, here is my suggestion:

Unplug the second video card, run "pyrit benchmark", and note the PMK/s.
Replug the second video card, run "pyrit benchmark", and note the PMK/s.

If the PMK/s in both tests are nearly identical, only one video card is running.

Original comment by pyrit.lo...@gmail.com on 10 Apr 2010 at 12:45

GoogleCodeExporter commented 9 years ago
Well, I know it is just one card that does 25,000 PMK/s, but what I wrote above is that I ran 2 dictionary attacks at the same time: one on the 1st monitor, which is connected to the 1st card, and one on the 2nd monitor, which is connected to the 2nd card. Both tests ran at approximately 25,000 PMK/s. So that means both cards were working at the same time; otherwise it would be almost half the PMK/s if only 1 card were running both attacks at once.

Original comment by Karser...@gmail.com on 10 Apr 2010 at 12:55

GoogleCodeExporter commented 9 years ago
Sorry, I did not understand your post well. So both cards do run, but they should be aggregated by pyrit into one single session of 50,000 PMK/s.
I wonder where the problem could lie... maybe the --dual-head option causes confusion...

Original comment by pyrit.lo...@gmail.com on 10 Apr 2010 at 1:30

GoogleCodeExporter commented 9 years ago
Well, that's the problem. It just won't recognize both cards and add them into one session.
Any ideas would be really appreciated. My xorg.conf file states:

Section "Device"
    Identifier  "aticonfig-Device[0]-0"
    Driver      "fglrx"
    BusID       "PCI:1:0:0"
EndSection

Section "Device"
    Identifier  "aticonfig-Device[1]-0"
    Driver      "fglrx"
    BusID       "PCI:2:0:0"
EndSection

and fglrxinfo reports:
display: :0.0  screen: 0
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: ATI Radeon HD 4800 Series
OpenGL version string: 3.2.9704 Compatibility Profile Context

display: :0.0  screen: 1
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: ATI Radeon HD 4800 Series
OpenGL version string: 3.2.9704 Compatibility Profile Context

So how can I make pyrit understand that there are 2 cards and use them both?

Original comment by Karser...@gmail.com on 10 Apr 2010 at 2:15

GoogleCodeExporter commented 9 years ago
There should be a multi-GPU example in ATI's SDK. Please check whether it can detect both cards.

As ATI's OpenCL implementation uses the X server, you also need a running X display on every GPU...

Original comment by lukas.l...@gmail.com on 11 Apr 2010 at 2:46

GoogleCodeExporter commented 9 years ago
I am having the same issue with two 4850s (the exact same card). It seems the only way pyrit sees both GPUs is when they are in CrossFire, but pyrit won't work when CrossFire is active. This is expected, as AMD's Stream FAQ states that its Stream implementation will not work when CrossFire is on.

I've tried a bunch of different xorg.conf settings to try to make both GPUs active on the same X display, but none have worked for me. As KarserasL points out, running dual-head (two monitors, one attached to each card) still shows only one GPU with the "list_cores" command (likely just the GPU of the card attached to each monitor).

If anyone can come up with a working xorg.conf file which would allow pyrit to 
see 2
GPUs in a non-crossfire setup, please post it.

BTW, I've tried this with both OpenCL and the new Calpp setup. No joy with 
either.

I think this issue is also the problem that another bug reporter (issue 123) is
having with his 5970 card. His 5970 has 2 GPUs on the same card linked by 
internal
crossfire. Lukas says that for him it is a driver issue confirmed by AMD. I'd be
surprised if that's really the case. I think this really comes down to multiple 
GPUs
not being seen by pyrit except in crossfire (where stream, and therefore, pyrit,
won't work). Unless we can figure out a way for 1) multiple GPUs to be assigned 
to
the same X display without crossfire or 2) pyrit to see multiple GPUs operating 
on
different X displays, I don't think this can be fixed.

Original comment by robert.b...@gmail.com on 14 Apr 2010 at 5:05

GoogleCodeExporter commented 9 years ago
Pyrit actually has little to do with all this. We just take what ATI's API has to offer.

I'll keep this issue open until ATI has resolved the problem or given 
instructions...

Original comment by lukas.l...@gmail.com on 14 Apr 2010 at 5:12

GoogleCodeExporter commented 9 years ago
I came across a potential fix in the Stream Dev Forums:

http://forums.amd.com/devforum/messageview.cfm?catid=390&threadid=129850&enterthread=y

Unfortunately, I'm not home right now to test it. Is there any kind soul who 
can try
this and see if it works? Make sure you've disabled crossfire before trying.

The suggestion is to run "aticonfig --adapter=all --initial" to create your xorg.conf file (you may want to run it with -f as well) and then run "export DISPLAY=:0". The author states that this assigns all GPUs to the same display, instead of the default :0.1, :0.2, etc. when the DISPLAY env is at its default ":0.0". If this works, you can just add the DISPLAY line to your .bashrc to make it stick.

If no one else tries it first, I'll try it tonight and report back.
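The forum suggestion boils down to a few commands; a sketch (run with CrossFire disabled; aticonfig flags as given in the linked post):

```shell
# Regenerate xorg.conf with a Device section for every adapter:
sudo aticonfig --adapter=all --initial -f

# Send all GPU work to the single X display; add this line to ~/.bashrc
# to make it permanent:
export DISPLAY=:0

# After restarting X (log out and back in), both GPUs should be listed:
pyrit list_cores
```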

Original comment by robert.b...@gmail.com on 14 Apr 2010 at 6:14

GoogleCodeExporter commented 9 years ago
Update -- it worked! After a few failed attempts, I wound up adding "export DISPLAY=:0" to the end of my .bashrc file and ran "aticonfig --adapter=all --initial -f", and now both GPUs are seen and used by pyrit. Selftest and benchmark ran fine. I do seem to have an issue after doing the latest svn update and rebuilding and reinstalling pyrit_calpp and pyrit, with a "version mismatch" being reported (part of it is at r244 and part at r247), but it doesn't cause any errors in the execution of the program. I can post the full warning message if needed, but it may just be that I updated in between svn commits?

One side effect of both GPUs being listed in list_cores is that no CPU is listed anymore. "Network-Clients" is listed as #3 under list_cores, with #1 and #2 being "CAL++ Device #1 (and #2) 'ATI RV770'". When I change the pyrit config to not use network clients, only the 2 GPUs are listed. One CPU had been listed previously (I have a dual core in this machine). Can this be fixed?
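For what it's worth, the "Network-Clients" entry is governed by pyrit's config file; a sketch of the relevant lines in ~/.pyrit/config (key names may differ between revisions, so treat this as illustrative):

```ini
# ~/.pyrit/config
# Don't act as an RPC server, so 'Network-Clients' no longer appears:
rpc_server = false
# Don't announce this host to other pyrit clients on the network:
rpc_announce = false
```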

Original comment by robert.b...@gmail.com on 15 Apr 2010 at 3:41

GoogleCodeExporter commented 9 years ago
Update 2 - Did a brand-new svn checkout (instead of svn update -- forgive me if that was the wrong command -- I'm new to Subversion) and now no more warning message!
Still no CPU cores, though. :(

Original comment by robert.b...@gmail.com on 15 Apr 2010 at 4:05

GoogleCodeExporter commented 9 years ago
Robert: Pyrit keeps one CPU free for every GPU you have, to do low-latency scheduling towards the GPU.

Regarding the version warning: Pyrit checks whether all modules were built from the same Subversion revision. All you have to do is 'svn update' the whole code tree and rebuild both cpyrit_calpp and pyrit itself.
Original comment by lukas.l...@gmail.com on 15 Apr 2010 at 6:30

GoogleCodeExporter commented 9 years ago
Hi Robert,
Is it enough to do what you report in comment #11, or is some other step necessary that you discovered in the meantime?

Also, can you report the PMK/s comparing the OpenCL and CAL++ versions? (Yes, I am that curious :) )

Original comment by pyrit.lo...@gmail.com on 15 Apr 2010 at 7:42

GoogleCodeExporter commented 9 years ago
lukas,

Thanks for the info re: the CPU/GPU issue. I can live without the CPU being used as long as the GPUs are cranking. As for "svn update", I got the same warning when I used it on another PC, again fixed by doing a full svn checkout. Maybe one of the files isn't marked correctly to be updated (again, my knowledge of svn is limited, but I do know going from r244 or r245 to r247 with just "svn update" didn't work for me). No biggie. It's not like doing a full checkout takes more than a few seconds.

pyrit.lover,

Follow the steps I listed in comment #12 to be safe. In your user home directory (cd ~), add the "export DISPLAY=:0" command as the last line of your .bashrc file. Then (as root/sudo) run "aticonfig --adapter=all --initial -f". I'd log out and back in again to make the export command take effect. Hell, maybe even reboot. To test it, run "echo $DISPLAY". If it comes up as ":0", it worked. If it comes up ":0.0", recheck your .bashrc to make sure you typed the command correctly.

If you have problems, let me know and I can either e-mail or post my .bashrc and
xorg.conf files so you can see how mine are set up.

As to the speed difference between OpenCL and CAL++, it's not really that noticeable to me on the 4850s. (Right now, CAL++ is doing between 34,000-35,000 PMK/s; I wouldn't expect much less from OpenCL.) I've seen people post much bigger improvements with 5000-series cards going from OpenCL to CAL++.

My setup: my 4850s are both running stock 625 MHz core and 993 MHz memory. I'm running 64-bit Karmic with a dual-core Athlon 64 X2 5200+ (the CPU may be holding it back a little, as GPU utilization is about 90% on each card). I've got 4 GB of RAM and I'm running an attack_passthrough with John the Ripper mangling and piping a huge wordlist through to pyrit (this way the hard drive doesn't become a limiting factor).
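That pipeline can be sketched as follows (the wordlist and capture names are placeholders, and this assumes your pyrit build accepts '-' on -i to read candidates from stdin):

```shell
# Mangle a wordlist with john's rules and stream the candidates straight
# into pyrit, so the hard drive never becomes the bottleneck:
john --wordlist=huge-wordlist.txt --rules --stdout \
  | pyrit -r capture.cap -i - attack_passthrough
```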

Earlier tonight, I had an additional PC with another single 4850 running "pyrit serve" for my 2-card PC (PMKs were well above 42,000 with that), but it looks like that died on me and dragged the client down a fair bit as well (pyrit slowed to a crawl - 14,000 PMK/s total). I don't know what the cause was, but the "serve" box's 4850 is one of the older, hot-running versions, so it could have been a hardware issue. I'm going to leave the 2-card box running on its own overnight to make sure it stays running. If it dies, I may switch back to OpenCL and give that a go again. When I was running OpenCL on 2 PCs, each with a single 4850 using "pyrit serve", it never died. But I've changed so many things, it would take too long to diagnose tonight.

I'm just glad I can run multiple GPUs on a single pyrit box now. Hope this 
works for you.

Original comment by robert.b...@gmail.com on 15 Apr 2010 at 8:21

GoogleCodeExporter commented 9 years ago
Cheers Robert :). It works very well. Thank you for sharing.

Did a test with OpenCL and CAL++ with my 2x HD4870.

OpenCL: total: 42211.62 PMK/s
        #1 card: 19635.3 PMK/s
        #2 card: 19392.5 PMK/s
        CPU: 762.8 PMK/s
        CPU: 751.5 PMK/s
Cal: Computed 42946.90 PMKs/s total.
#1: 'CAL++ Device #1 'ATI RV770'': 20613.1 PMKs/s (RTT 2.8)
#2: 'CAL++ Device #2 'ATI RV770'': 19406.1 PMKs/s (RTT 2.8)
#3: 'CPU-Core (SSE2)': 691.4 PMKs/s (RTT 3.0)
#4: 'CPU-Core (SSE2)': 751.1 PMKs/s (RTT 2.9)

Original comment by Karser...@gmail.com on 15 Apr 2010 at 4:54

GoogleCodeExporter commented 9 years ago
closed

Original comment by lukas.l...@gmail.com on 15 Apr 2010 at 6:42

GoogleCodeExporter commented 9 years ago
I don't know if it was stated already, but in order for Pyrit to recognize both cards, it seems you need to have a screen configured on each card.

If you have ATI, you can follow my steps:
1. Open /home/username/.bashrc and add "export DISPLAY=:0"
2. Open a terminal and run "sudo aticonfig --adapter=all --initial -f"
3. Log off and log in again, and both cards should be recognized. If you are running a multi-monitor setup, your screens just became cloned...

Here's how you fix that!
4. Make a copy of the file /etc/X11/xorg.conf.
5. Configure your screens via ATI's Catalyst Control Center.
6. Log out and then log back in.
7. Using your favorite editor, copy the line

Screen         "aticonfig-Screen[1]-0" RightOf "aticonfig-Screen[0]-0"

into the new xorg.conf at the same position. Log off and log back in, and you should have your multi-monitor config with both cards recognized.
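For context, that Screen line belongs in the ServerLayout section; a sketch of how the merged section might look (the identifiers follow aticonfig's naming and will differ per machine):

```
Section "ServerLayout"
        Identifier     "aticonfig Layout"
        Screen      0  "aticonfig-Screen[0]-0" 0 0
        Screen         "aticonfig-Screen[1]-0" RightOf "aticonfig-Screen[0]-0"
EndSection
```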

Original comment by Blackor...@gmail.com on 5 Feb 2011 at 3:16

GoogleCodeExporter commented 9 years ago

Does anyone know why my Radeon HD5870 cards do not scale at the same speed?

When I use the top PCI-X x16 slot, each card achieves 90k PMK/s, but when I use both PCI-X x16 slots, the second card in the bottom slot only achieves 45k PMK/s.

Original comment by odeamark...@gmail.com on 2 Aug 2015 at 11:30

GoogleCodeExporter commented 9 years ago
P.S. My GPU cards have the same clock speeds.

Original comment by odeamark...@gmail.com on 2 Aug 2015 at 11:34