Closed zerodrama closed 7 years ago
As of now, we don't support multiple devices. The search is probabilistic in nature, so you can run several instances (in screen or tmux, perhaps) with different device numbers and get the same effect, except that you'll have to remember to stop the search on the other cards once one of them finds a winner.
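The one-instance-per-device workaround can be scripted. A hypothetical sketch below just prints one launch command per GPU, ready to paste into separate screen/tmux sessions; the binary name, device numbering, and prefix argument are assumptions, not verified against a real install (check --help and -l on your build):

```shell
# Hypothetical sketch: emit one scallion launch command per GPU.
# "mono scallion.exe", the -d flag values, and "myprefix" are assumed.
for d in 0 1 2 3; do
  echo "mono scallion.exe -d $d myprefix"
done
```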
Can you verify that it's working for you (as in, actually able to find results in close to the noted ETA)? My 5770 has recently died, but I wasn't able to get it working with the AMD APP SDK before. Others have also reported issues where it appears to work but returns no results.
On 11/14/2013 11:17 PM, Rares Marian wrote:
Is there a way to code it to allow use of multiple devices?
— Reply to this email directly or view it on GitHub https://github.com/lachesis/scallion/issues/23.
Eric Swanson http://www.alloscomp.com/
Yes. Found a few. Times are scattershot tho. All over. But average is close to predicted time.
Alright, excellent. Yeah, as I mentioned, it's a probabilistic search, conceptually similar to bitcoin mining if you're familiar. It's expected that it will take a random amount of time, sometimes much more or much less than estimated.
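To make the "random amount of time" concrete: under the usual assumption that each attempt is an independent trial, the time to first success is roughly exponentially distributed around the mean. A minimal sketch (the function name is mine, not Scallion's):

```python
import math

def prob_found_by(t, mean_time):
    """Chance the search has succeeded by time t under an exponential model."""
    return 1 - math.exp(-t / mean_time)

# At exactly the quoted ETA there is only about a 63% chance of being done,
# which is why observed run times scatter well above and below the estimate.
print(round(prob_found_by(1.0, 1.0), 2))  # 0.63
```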
There's been a problem with the AMD APP SDK and Radeon 5xxx cards (and maybe newer ones as well) which I've been able to replicate, but not actually fix (mostly because my 5770 gave up the ghost). The program will appear to work correctly, but will never return results, or only after a very long time (on average).
Glad you're not hitting that problem.
Just to clarify: I have a few machines with R9 280x/7970s here. Each machine has at least 4 GPUs in them. I also have a gaming desktop with 3x Titans on an X79 4930k.
So, if I run 6x instances on a single machine, how do I tell each one which GPU to use? Is that what the -d 0 command line option is for: to specify the device?
As a side question: what's the expected throughput for these kinds of setups (7970s/280x and Titans)? I ask because if this is going to take weeks or months, I'd rather not start that investment.
My goal is to create a custom 12- to 16-character vanity URL (yes, all 16). What if I put all of this hardware toward that goal, for a single 16-character onion address?
Am I still looking at months to find it?
Thanks!
Bit lengths multiply; GPU processing power adds. 8 extra bits = a 256x multiplier. If 1 GPU gets you 8 bits in a reasonable amount of time, 256 GPUs will get you 16 bits in the same amount of time. In other words, good luck with that.
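That scaling argument can be written out as a one-liner; a minimal sketch (the function name is mine):

```python
def gpus_needed(extra_bits, base_gpus=1):
    """GPUs needed to absorb extra_bits more difficulty in the same wall time.

    Difficulty multiplies by 2 per bit; throughput only adds per GPU.
    """
    return base_gpus * 2 ** extra_bits

print(gpus_needed(8))  # 256: 8 more bits of difficulty needs 256x the hardware
```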
I believe the d option specifies the device. But you will need to run multiple instances to use multiple GPUs.
On 08/24/14 16:01, Eric Duncan wrote:
- machine 1: 4x R9 280x
- machine 2: 4x R9 280x
- machine 3: 4x 7970s (same as 280x)
- machine 4: 6x R9 280x
- machine 5: 3x Nvidia Titans
Yep, that's what the "-d" switch is for. If you do "-l" it will show all your devices. "--help" can tell you more.
I don't know anything about newer card performance, but I've seen estimates ranging anywhere from 600 MH/s to 1 GH/s for newer cards. As you'll see, it doesn't really matter.
For now, let's assume 1GH/s for each card. The Titans will likely be well below this (Nvidia hardware has poor integer performance), but whatever.
That's 21 * 1GH/s or 21 GH/s. The formula for calculating the ETA is
seconds = 2^(5*length-1) / hashspeed
So, 2^(5*16 - 1) / (21 * 10^9) = 28,783,948,086,062 seconds = ~900,000 years.
A 12-character address would be far more tractable, at only around 11 months on average.
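The arithmetic above can be checked with a short sketch, using the assumed 1 GH/s per card and the 5 bits per base32 character baked into the formula:

```python
def eta_seconds(length, hashes_per_second):
    """Average seconds to find an onion prefix of the given character length."""
    return 2 ** (5 * length - 1) / hashes_per_second

rate = 21 * 10**9  # 21 cards at an assumed 1 GH/s each

print(eta_seconds(16, rate) / (365.25 * 24 * 3600))  # years, ~900,000
print(eta_seconds(12, rate) / (30 * 24 * 3600))      # months, ~11
```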
Haha, that's what I thought, but I figured my math was a bit off.
I have had luck with ASIC mining in the past: a search that was supposed to take 45 days found the hash within a few hours. The same could happen with the 900,000-year estimate; with luck of the draw it might take only 1,000 years.