ThinerDAS closed this issue 4 years ago.
I'm sorry, I cannot understand what you mean by "Recipe Series", nor can I understand your (1,2,3,1*5,2*5,3*5,1*5^2,2*5^2,3*5^2,1*5^3,2*5^3,3*5^3) notation. Do you mind explaining it in some more depth?
Sorry about my bad explanation.
The basic idea is: suppose we have 9 gems sized (1:2:3:5:10:15:25:50:75), call them g1,..,g9. We will somehow make 3 new gems with sizes (125:250:375), g10,..,g12, like g10=f(g1,..,g9),..,g12=h(g1,..,g9), and throw away the three smallest gems (1:2:3). The resulting gems are (5:10:15:25:50:75:125:250:375), which is the same shape as the start with every gem 5 times "fatter" (more expensive). The "somehow" (f,..,h) can then be repeated.
The parameters 3 and 5 are good according to my experiments, but you can be more systematic here so as not to be limited to the leech gem in gc2cs.
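For concreteness, here is a minimal simulation sketch of that kind of frame iteration (Python). Everything in it is an assumption for illustration only: gems are reduced to (power, cost) pairs, every combine uses the equal-grade leech coefficients 0.88/0.50 regardless of actual grade, each use of a frame gem is treated as a fresh copy paid at its full cost, and the toy recipe below is deliberately naive rather than the author's f,..,h.

```python
import math

def combine(a, b):
    """Combine two gems given as (power, cost); grade effects are ignored
    and the GC2 equal-grade leech rule 0.88*bigger + 0.50*smaller is used."""
    (pa, ca), (pb, cb) = a, b
    return (0.88 * max(pa, pb) + 0.50 * min(pa, pb), ca + cb)

def step(frame, recipe):
    """One frame iteration: 'recipe' lists, for each new gem, which frame
    indices to fold together left-to-right (a stand-in for f,..,h).
    Each listed index is treated as an independent copy of that gem."""
    new = []
    for indices in recipe:
        gem = frame[indices[0]]
        for i in indices[1:]:
            gem = combine(gem, frame[i])
        new.append(gem)
    return frame[3:] + new          # drop the three smallest, append the new ones

# A deliberately naive recipe whose costs come out to exactly (125:250:375).
toy_recipe = [[7, 8], [7, 8, 7, 8], [8, 8, 8, 8, 8]]
costs = [1, 2, 3, 5, 10, 15, 25, 50, 75]
frame = [(c ** 0.5, float(c)) for c in costs]   # arbitrary seed powers

for _ in range(30):
    frame = step(frame, toy_recipe)
power, cost = frame[-1]
print("effective growth after 30 frames:", math.log2(power) / math.log2(cost))
```

Under these toy rules the largest gem's effective growth settles around 0.51; that is only meant to show the mechanics of the iteration, not to be a real in-game number.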
Ok, so you're considering a particular subset of recipes (that can be compactly described in your notation) and you've done the asymptotic fixed-point analysis.
I'm well aware the asymptotic analysis suggests there are better recipes, but that's a non-constructive result and needs the recipe size to tend to infinity. I don't have a way to constructively provide a better recipe, short of computing them one by one with gemforce.
If you have an algorithm in mind to find your f,..,h functions, we can think about this; otherwise knowing upper bounds of specific recipe classes isn't of much use.
I want recipes (crafting steps) with high growth that can run within the 36-slot "memory".
The recipe size is infinite, but it is a simple repetition of f,..,h and can be cut off at any time. In my case I just picked a good one using naive dynamic programming, and it turns out that the fixed point is around 0.506 if I use 3x7 gems per frame (1:2:3:5:10:15:...:5^6:2*5^6:3*5^6), although I did not verify that it is doable in the standard 36-slot "memory".
The theoretical max is around 0.521901, but I can find no way to get close to it within a reasonable amount of memory.
The issue is the effective growth at cutoff.
For your specific choice of f,..,h you have an asymptotic fixed-point growth that's higher than lc3M, that's fine, but the effective growth you get when you stop at, say, g100e or g200e isn't the fixed-point growth, it's lower.
If you can exhibit a choice of (a:b:c:...:z) and f,..,h that has an effective growth better than the combines of comparable length (e.g. if f,..,h are 1k steps each, compare against 4kc or so) at g100 or so, I'll happily look more into this.
For comparison, the fixed-point growth of a 1->1 gem combine is, as you say, 0.521902, and we are nowhere close to it, so I'm not sure your fixed-point growth will translate into a comparable real growth at cutoff, but I eagerly await your results.
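(For reference, "growth" throughout the thread appears to be used in the sense below; this reading is my inference from the 2^(0.005*100)=2^0.5 arithmetic elsewhere in the thread, not something stated explicitly.)

```latex
\mathrm{growth} \;=\; \frac{\log_2(\mathrm{power})}{\log_2(\mathrm{cost})}
\qquad\Longleftrightarrow\qquad
\mathrm{power} \;\approx\; \mathrm{cost}^{\,\mathrm{growth}}
```

So at a cutoff costing roughly 2^100 base gems, a growth gap of 0.005 is a power factor of 2^(100*0.005) = 2^0.5, about 1.41, while the fixed-point growth is only approached as the cutoff goes to infinity, which is exactly why the effective growth at g100 or g200 matters more than the asymptote.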
Also, another question: given a recipe (for example 64 lines long) that is in SSA (Static Single Assignment) form, is the problem of determining the minimum number of gem slots (different lines may reuse the same slot, so the total can be less than 36) equivalent to the known NP-complete register allocation problem? Or have you already implemented a version of the best method to allocate gem slots?
Note that gem combines can be reordered arbitrarily as long as they are independent (for example 4=1+3 and 5=2+3 can be done in either order); in fact there are multiple topological orderings of the same recipe, and not all of them use the same number of slots.
Update: note another fact: the in-degree of each node is at most 2, since a combination always takes two gems at a time, though I don't know whether that is useful.
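To make the order-dependence concrete, here is a toy Python check (my own construction, nothing from gemforce or wGemCombiner): the same SSA-style combine list, executed in two different valid orders, peaks at a different number of simultaneously held gems.

```python
def peak_slots(combines, order):
    """combines: dict mapping a result gem to its two inputs; order: a
    topological order of the results. Base gems are created right before
    their first use; a combine consumes both inputs and yields one gem.
    Every value is assumed to be used exactly once and inputs are distinct.
    Returns the peak number of gems held at any moment."""
    held, peak = set(), 0
    for result in order:
        a, b = combines[result]
        for g in (a, b):
            if g not in held:          # spawn a missing base gem
                held.add(g)
                peak = max(peak, len(held))
        held -= {a, b}                 # the combine consumes both inputs...
        held.add(result)               # ...and produces a single gem
        peak = max(peak, len(held))
    return peak

recipe = {"c1": ("a", "b"), "c2": ("c", "d"), "c3": ("c1", "c2"),
          "c4": ("e", "f"), "c5": ("c3", "c4")}
print(peak_slots(recipe, ["c1", "c2", "c3", "c4", "c5"]))  # 3 slots
print(peak_slots(recipe, ["c1", "c4", "c2", "c3", "c5"]))  # 4 slots
```

The same five combines peak at 3 slots in one valid order and 4 in another, so whatever prints or executes a recipe is implicitly choosing an answer to this scheduling problem.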
Register allocation isn't done here; it's a problem for the programs that execute the combines, so you have to ask their maintainers.
Register allocation can be done statically given a gem order: just employ lifetime analysis and greedily allocate registers. However, deciding the gem order so that the number of registers used by that greedy strategy is minimized is nontrivial, as far as I can tell.
I have no idea whether wGemCombiner or anything similar actively reorders the gem crafting steps so that the maximum number of gems held at the same time is minimized; I conjectured that it just follows the result provided here. If no active reordering is done here either, the print-tree operation (or whatever it is; I am not very familiar with the algorithm details here) will produce either a fixed DFS/BFS ordering or an arbitrary topological ordering, neither of which is related to minimizing "memory" usage. (I just saw that gemsmith directly bypasses the space constraint by calling the crafting function and injecting the resulting gem object into the game, so no wonder there is less focus on minimizing "memory" usage.)
Edit: I just figured out (or I believe I did) that wGemCombiner just takes the naive method that I initially dismissed as suboptimal: it blindly crafts the two gems of each combine individually, without considering whether they share components, so it sacrifices CPU time to satisfy the memory constraint.
Anyway, I know (or believe) that smartly reordering the combines to minimize memory consumption is a mess here as well. It might not be that important for this program, though I still wonder whether anyone has a neat answer: either a proof that the problem is NP-complete, or a nice solution that never creates the same gem twice and minimizes the peak number of gems held during the whole process.
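A minimal sketch of the "lifetime analysis plus greedy allocation" step, assuming the gem order is already fixed, every value is consumed exactly once, and a combine is performed into the slot of its first input (all of these assumptions are mine; this is not wGemCombiner's actual algorithm):

```python
def allocate_slots(combines):
    """combines: list of (result, in1, in2) in execution order; base gems
    appear only as inputs and every value is consumed exactly once.
    Returns a gem -> slot mapping and the number of slots used."""
    slot_of, free_slots, next_slot = {}, [], 0

    def acquire(gem):
        nonlocal next_slot
        if gem not in slot_of:          # a base gem: reuse a freed slot if any
            if free_slots:
                slot_of[gem] = free_slots.pop()
            else:
                slot_of[gem] = next_slot
                next_slot += 1
        return slot_of[gem]

    for result, a, b in combines:
        slot_a, slot_b = acquire(a), acquire(b)
        free_slots.append(slot_b)       # combine into slot_a, freeing slot_b
        slot_of[result] = slot_a
    return slot_of, next_slot

slots, used = allocate_slots([("c1", "a", "b"), ("c2", "c", "d"), ("c3", "c1", "c2")])
print(slots)                 # {'a': 0, 'b': 1, 'c1': 0, 'c': 1, 'd': 2, 'c2': 1, 'c3': 0}
print("slots used:", used)   # 3
```

Finding the order that minimizes the slot count produced by this greedy pass is exactly the nontrivial part mentioned above.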
I organized my work a little and made the output prettier. Here is a recipe that reaches 0.503 when the resulting gem is grade 50 in the panel / grade 60 in cost; it only takes 32 gem slots. https://paste.ubuntu.com/p/45hCtyspXd/
https://img.vim-cn.com/b0/090548b5e78360801e1b6f5d2999f89c70b8b8.png
Edit: I also worked out several more recipes; currently the best usable one achieves 0.5048 as its fixed point (0.504 at lv100) and takes at most 34 slots.
About the register allocation: the recipes are already output with each distinct subgem appearing only once, in equation format. Most recipes easily fit within GC2's limit of 36 slots just by greedily walking the gem tree from biggest to smallest and removing the subgems you no longer need.
For more complex recipes, wGC uses some algorithm I don't know that approximates a solution in a reasonable time. Gemsmith instead completely forgoes register allocation and combines in memory, because we figured any reasonable recipe could be done in the theoretically 100+ slots you could get by filling the map with amps, so not doing register allocation isn't cheating.
Now, about your method: I thought a bit about it, and I understand how you can theoretically get better results; seeing your example, it seems you actually did.
I'll have to do a detailed analysis. The parameter space is absolutely gigantic, so I'll probably start with something less ambitious than 9,3 to get a feel for it.
If you want to talk more about it, you can find me and other technically inclined players in the GC discord: https://discord.gg/ptte6SP , I'd like to see the code you used.
If nothing else, I can say that one red gem is more than is needed for a g11 spec, and every combine I've ever seen fails to take this into account. I will very often make a second spec'd gem, replacing the red gem in the recipe with another black one. Then, when it comes time to run a combine on the first gem, I'll try to replace a few of the copies with copies of the second gem, instead. Of course, I'll also run the combine on the second gem, too. So, in the end, I effectively end up with a combine with two gem inputs and two gem outputs, though I'll admit they're gems of the same grade.
In theory, a similar principle could be used to take a supergem, one of its associated amp gems, and a dedicated pure-black gem, and produce a corresponding set of three gems at higher grades.
I'll also point out that I eventually hit a point where running a +19 combine on my managem doesn't immediately produce enough mana to run a +19 combine on THAT managem. I have to maintain multiple "generations" of managems that are out-of-sync by a few grades. If I've got another gem that's X grades higher, and the higher-grade gem has more efficient growth than the one I'm trying to upgrade... I'd naturally assume that there's a better way that involves incorporating the more efficiently-grown gem.
Of course, I use a similar system for killgems, too. I think the simplest request would be to find a system that takes three killgems that are 5 grades apart each, and produces a new killgem that's only 5 grades higher than the highest input gem, but has better growth than a +5 combine on the highest gem, or a +20 combine on the lowest one.
Because, yeah, to produce these out-of-sync generations, I have to start with gems that got U-combined before spec (no thanks), use a substandard gem spec (ewww, no freaking way!!!), or use a substandard combine (not happy about it, but there's not much I can do). Having a way to give my poor stunted-growth gems a chance to catch up would be fantastic.
Oh, man, I forgot about this.
The https://paste.ubuntu.com/p/45hCtyspXd/ link has now rotted away, could you reupload it?
@SarahKirksey I assume you're playing GC2, so you're using wGC. The way to squeeze red out is to make a red and a red-less spec, then in the first combine you replace the most nested of the m by a g, and make that g the red gem. This squeezes red out by a factor of 1/combine_length and needs to be done once or twice tops.
For the second point, yeah, this class of recipes would solve it, I hope I can make time to study this in depth, maybe during my summer vacations.
https://paste.ubuntu.com/p/fZ7BYVPrq2/
Actually I expect you to find better recipes - this is only one of the simplest recipes.
@SarahKirksey About the red part I cannot help much; however, I believe it only accounts for a constant factor of strength, whereas the later combination recipes are more important since they are applied repeatedly. I do not consider minimizing red a major problem.
The combination part is exactly what this issue is about. The pastebin has one example of such a combination scheme working out very strong gems by iteration. It is not the most powerful, though; I also found a problem soon after I put the scheme into actual use: this type of scheme is much slower than a normal one, in that a normal giant scheme of hundreds of lines usually yields a gem with 1000x+ power (for example), while for the aforementioned iterative method a scheme of a hundred lines only makes a gem 5 times stronger. Making a 100-level combination usually takes a bot an hour.
A completely different approach, sharing some recursion similarities, has proved successful. It's now possible to generate arbitrary-length recipes of a certain class (similar to the one described here) with increasing growth.
I'm closing this, because the other strategy is better, both in steps taken and growth obtainable.
Actual recipes will come in a while.
I did a rough calculation and found that if we do not limit the starting gem and ending gem to be a single gem, we may get stronger gem growth.
Example: the leech gem in gc2cs, with coefficients (0.88,0.50,0.89,0.44,0.90,0.38).
Each iteration of the Recipe Series goes from
(1,2,3,1*5,2*5,3*5,1*5^2,2*5^2,3*5^2,1*5^3,2*5^3,3*5^3)
to
(1*5,2*5,3*5,1*5^2,2*5^2,3*5^2,1*5^3,2*5^3,3*5^3,1*5^4,2*5^4,3*5^4).
I noticed that lc3072117.txt contains a 0.505002-growth recipe; however, from a rough calculation I found that a fixed point of ~0.506 can be achieved when the largest gem becomes 3*5^6=46875.
Given that the usable lc4096 is 0.501448, new methods can make a small but observable difference for ~lv100 gems (2^(0.005*100)=2^0.5 more power for leech gems, which is roughly one extra grade per 100 grades).
By the way, for the leech case (0.88,0.50,0.89,0.44,0.90,0.38) I believe the theoretical maximum is around 0.521901 (simple high-school calculus), since this is the fixed point for the growth when the gem contents are real numbers (smoothed) rather than integers and each combination combines two gems of the same grade. I currently have no idea how to achieve that within a reasonable amount of time (number of operations) and memory (gem slots), nor do I have the abstract-algebra background needed to model the problem and give a nice solution.
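For what it's worth, the ~0.521901 figure can be reproduced numerically under one reading of that smoothed model (the reading is my assumption, not taken from gemforce): gem value scales as cost^g, combining two same-grade gems of costs 1 and x (with 0 < x <= 1) yields value 0.88*1^g + 0.50*x^g at cost 1+x, and g is a fixed point when the best choice of x exactly preserves value = cost^g.

```python
import math

def best_surplus(g):
    """Largest excess of combined value over (combined cost)**g, over all
    ratios x of the smaller gem's cost to the bigger one's."""
    best = -math.inf
    for i in range(1, 10001):
        x = i / 10000.0
        best = max(best, 0.88 + 0.50 * x**g - (1.0 + x)**g)
    return best

# best_surplus is decreasing in g: positive means growth g is still beatable,
# negative means it is unreachable, so bisect for the zero crossing.
lo, hi = 0.4, 0.7
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if best_surplus(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(0.5 * (lo + hi), 6))   # ~0.5219, matching the figure above
```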
I believe the developers here are better than me in this field, in both the mathematics and the algorithm implementation. Please investigate this further.