CleverRaven / Cataclysm-DDA


Proficiencies, crafting, failure chance, UI, etc #48981

Closed. Kelenius closed this issue 3 years ago.

Kelenius commented 3 years ago

Describe the bug

I would like to collect all issues and PRs that are currently open (there are too many, so my solution for solving that is to open one more), because there is significant overlap between them and they boil down to the same problem:

The effects of missing proficiencies on crafting, and of crafting failures in general, are broken in various ways.

Existing discussions

Issues:

#46662

#47566

#48465

#48597

#48925

#48956

PRs:

#48658

#46082

#46153

#48673

#48650

#48924

And a discussion:

#48912

Why I'm opening this

Because all of this is very closely related and it's essentially the same problem, it should be resolved in one go, not broken up into a bunch of PRs. Merging #48673 fixed one problem, but it also made crafting even very simple items practically impossible without proficiencies, and left missing proficiencies impossible to mitigate because mitigation is also broken. A worse situation, in my opinion!

I'd like to bring all discussion into the same spot, because right now it's spread out and is difficult to follow.

Let's untangle the knot

Here's the full list of what's currently bugged:

How to resolve this mess

First we need to stop and ask ourselves: ~~why am I spending my Friday evening digging through C++ code~~ how difficult should it be for different characters to craft? There are three factors at play:

(there are also NPCs and other small factors like crafting in darkness, but let's put that aside)

I see a lot of people arguing about how math works. That is not the approach that we should take. It would be, in my opinion, best to answer these:

"A character with 8 intelligence and the skill matching craft's difficulty should succeed XX% of the time." "If that character is missing a 2x failure chance, they should succeed YY% of the time, which is halved success rate / doubled failure rate / something else, according to a formula." "If that character instead has Z or more intelligence, they should always succeed." "If that character's skill is instead W or more % higher than craft's difficulty, they should always succeed." "If that character has 4 intelligence, their success rate should be (I am running out of good letters to use)%." "If that character has skill (whatever)% lower than craft's difficulty, their success rate should be (something)%."

And then adjust the results to fit these. Failure checks should be more evenly spread across the recipe, and failures themselves should be more manageable. Right now they send you into a death spiral where you can't finish the craft because you keep getting set back by failures.

Second, and this is important, there needs to be one unified function for getting the penalties. Right now it's in four different places:

- recipe.cpp: proficiency_time_maluses and proficiency_failure_maluses
- crafting.cpp: crafting_success_roll
- recipe.cpp: time_to_craft_moves
- recipe.cpp: missing_proficiencies_string
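To sketch what I mean (a minimal example with illustrative names, not the actual CDDA API):

```cpp
#include <vector>

struct prof_penalty {
    double time_malus = 1.0; // crafting time multiplier
    double fail_malus = 1.0; // failure chance multiplier
};

struct proficiency {
    double time_multiplier;
    double fail_multiplier;
    bool known;
};

// One function that every caller (success roll, time estimate, UI
// string) uses, so the four call sites can never disagree.
prof_penalty missing_proficiency_penalty( const std::vector<proficiency> &profs )
{
    prof_penalty p;
    for( const proficiency &prof : profs ) {
        if( !prof.known ) {
            p.time_malus *= prof.time_multiplier;
            p.fail_malus *= prof.fail_multiplier;
        }
    }
    return p;
}
```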

Third, there needs to be a clear UI that shows what your actual failure chance is and how the missing proficiencies affect it. "Missing this proficiency makes you twice as likely to fail" isn't useful information when you don't know the base failure rate and how that 2x is applied to it. Showing how much proficiency progression you'd get from crafting the item is also something to consider.

actual-nh commented 3 years ago

Fourth, the code needs to be better documented so it's clear exactly what is happening... EDIT: The below and related arguments are an indicator of how much this is needed!

deadcutlass2 commented 3 years ago
  • your failures are checked at specific checkpoints. These checkpoints occur more frequently when your failure rate is high...

They are not. Character::crafting_success_roll sets one singular failure point for the craft based on its output. This is done in item::set_next_failure_point, line 1054 of crafting.cpp.

Attempting the same craft from scratch and attempting it at 99% have the same odds of succeeding. The difference is, assuming you roll a 0.6 from Character::crafting_success_roll, you'd end up at 60% completion and 99.6% completion respectively before failing; a 0.2 would give 20% and 99.2% respectively.
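In other words, the placement works like this (a paraphrase of the behaviour, not the literal crafting.cpp code; progress and the failure point are fractions in [0, 1]):

```cpp
// The single failure point lands "roll" of the way through the
// remaining progress: a 0.6 roll from 0% fails at 60%, from 99% at
// 99.6%. A roll >= 1.0 pushes it past 100%, i.e. no failure at all.
double next_failure_point( double current_progress, double success_roll )
{
    return current_progress + success_roll * ( 1.0 - current_progress );
}
```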

#48925 and #48956 should be resolved by #48924

#46662 and #48465 should be resolved by #48650 and #48658

#48597 should be resolved by #46082

#46153 should be the long-term resolution to all of this.

Kelenius commented 3 years ago

They are not. Character::crafting_success_roll sets one singular failure point for the craft based on its output. This is done in item::set_next_failure_point, line 1054 of crafting.cpp.

It's called multiple times, setting multiple failure points.

pehamm commented 3 years ago

That is semantics; only one failure point exists at any one time, but the higher your expected failure, the higher the expected total number of failures until completion.

Kelenius commented 3 years ago

That is semantics; only one failure point exists at any one time, but the higher your expected failure, the higher the expected total number of failures until completion.

I don't think you understand. Every time you trigger a failure, the next failure point is reset. In your example, if you are starting with 0% and a 0.2 success roll, your next failure is 20% away, and if you start with 99%, it's 0.2% away. This creates a "clustering" effect where failures become more frequent as you approach 100%. And since every failure sets back your progress, that makes it very difficult to finish the craft.
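To make the clustering concrete, here is a toy loop using that placement rule with a fixed 0.2 roll, ignoring the progress lost on each failure:

```cpp
#include <cstdio>

int main()
{
    double progress = 0.0;
    for( int fail = 1; fail <= 5; fail++ ) {
        // Next failure point is re-rolled over the *remaining* progress.
        const double failure_point = progress + 0.2 * ( 1.0 - progress );
        std::printf( "failure %d at %.2f%%\n", fail, failure_point * 100.0 );
        progress = failure_point;
    }
    return 0;
}
```

The gaps shrink every time: 20%, then 16%, then 12.8%, and so on.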

deadcutlass2 commented 3 years ago

Your phrasing indicated that the failure points are pre-chosen before even attempting the recipe, and that they were pre-chosen at the high end of a recipe. That they are generated on the fly was my point.

Kelenius commented 3 years ago

I suppose that's worth clarifying, but it doesn't change my overall point.

deadcutlass2 commented 3 years ago

Are we waiting on I-am-Erk to check through the PRs or is Kevin willing to do them?

ghost commented 3 years ago

I'm concerned that the "0.F Feature Freeze" tag means the soon(tm) stable will include the current 'impossible' crafting.

actual-nh commented 3 years ago

I'm concerned that the "0.F Feature Freeze" tag means the soon(tm) stable will include the current 'impossible' crafting.

I can remove it if you'd like; it was more to make sure this doesn't go stale, and to suggest that further fixing should definitely be a 0.G blocker.

ghost commented 3 years ago

That is because the probability is tied to the item itself along with its progress. If you have a 20% chance to pass each check, you are expected to have an average pass rate of 20 out of 100 checks it presents to you. If it detects that you are passing too many checks and succeeding too often in the beginning, it starts to make you fail much more to compensate. The probability is reflected in the frequency of events to preserve the meaning of "20% chance to pass"; you can treat this as a pool.

If you succeed too much, you have depleted your average amount of the success pool and need to start filling a failure quota; consecutive successes lower your success chance by nature (or rather, have a lower frequency).

The more you succeed, the lower your effective success chance for passing the threshold (to preserve what the pass rate means), because you have completed your success quota and it's time to fail; the more you fail, the more the success chance can be "regenerated", giving you a chance to succeed again.

With a high pass rate, your success will be lowered the more times you succeed, especially consecutively. It will generate the failure threshold points depending on the expected result of failing the average amount and reflect that in the frequency of events; this leads to the so-called clustering of failures. If you have depleted your pool of expected successes, you will start to be forced to fail to fulfill the expected amount of failures at the end; this regenerates the pool for success, and then you can potentially finish the item (or just draw out the failures that have to happen on average).

The pool is tied to the item craft itself, trying to preserve the frequency of events. At some point, you have to start "failure crafting" because it's necessary for the item to accept the completion of item progress.

Succeeding too much early can be setting yourself up for failures later.

Alternatively, failing early is a form of "failstacking" for success later, adding a bonus to effective success (compared to the last check), and is a necessary component of working towards item completion.

In the end, it should not matter on average, because they are both just quotas, in a way, to work towards item completion; if you can calculate the average amount of failures and successes necessary after incurring your penalties, you can find the quota.

pehamm commented 3 years ago

That is because the probability is tied to the item itself along with its progress. If you have a 20% chance to pass each check, you are expected to have an average pass rate of 20 out of 100 checks it presents to you. If it detects that you are passing too many checks and succeeding too often in the beginning, it starts to make you fail much more to compensate. The probability is reflected in the frequency of events to preserve the meaning of "20% chance to pass"; you can treat this as a pool.

This is not how any of this works. If you fail at 50%, your future progress does not depend in any way on whether you failed 10 times or 0 times; your probability of more failures going forward is exactly the same. Look up Gambler's Fallacy.

pehamm commented 3 years ago

I made some plots to illustrate how the proficiency factor changes the success probability. This is a recipe with difficulty 5, attempted with 5 skill and 8 intelligence, no proficiency missing. To succeed, the result of skill_roll has to be larger than diff_roll, or skill_roll / diff_roll > 1. The left plot shows the density functions of both rolls, which lie on top of each other because we fulfill the requirements exactly. The second plot shows the density function of the ratio skill_roll / diff_roll, with the probability to succeed being the orange area under the curve, which starts at x > 1.0. [plot: Prof1]

Now, if a recipe has a 2x failure multiplier, the ratio is divided by 2, which is equivalent to diff_roll being multiplied by 2. This leads to the following densities. The orange area is barely visible on the right plot (the area is equivalent to roughly 0.01%): [plot: Prof2]

This illustrates why the effect of proficiencies seems to be so extreme all of a sudden. I do not think that when the default failure multipliers were set to the levels they are, it was foreseen that the effect would be this large.

The plots were created by replicating the success_roll code in Python, then creating 1,000,000 example rolls for each roll type and applying a Gaussian kernel density estimator.
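For anyone who wants to reproduce the method without the plots, here is a rough Monte Carlo sketch of the same experiment. The normal distributions are placeholders standing in for the game's actual roll logic, so only the method matches, not the exact numbers:

```cpp
#include <algorithm>
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng( 42 );
    // Placeholders: the real rolls come from Character::crafting_success_roll.
    std::normal_distribution<double> skill_roll( 5.0, 1.5 );
    std::normal_distribution<double> diff_roll( 5.0, 1.5 );

    const double fail_multiplier = 2.0; // missing-proficiency malus
    const int trials = 1000000;
    int base_wins = 0;
    int malus_wins = 0;

    for( int i = 0; i < trials; i++ ) {
        const double skill = skill_roll( rng );
        // Clamp to avoid dividing by a near-zero or negative sample.
        const double diff = std::max( 0.1, diff_roll( rng ) );
        const double ratio = skill / diff;
        if( ratio > 1.0 ) {
            base_wins++;
        }
        // A 2x failure multiplier divides the ratio by 2.
        if( ratio / fail_multiplier > 1.0 ) {
            malus_wins++;
        }
    }
    std::printf( "success: %.2f%% base, %.2f%% with the 2x multiplier\n",
                 100.0 * base_wins / trials, 100.0 * malus_wins / trials );
    return 0;
}
```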

KurzedMetal commented 3 years ago

Merging #48673 fixed one problem, but it also made crafting even very simple items practically impossible without proficiencies, and left missing proficiencies impossible to mitigate because mitigation is also broken. A worse situation, in my opinion!

Probably merging ANYTHING that improves this situation is better than what we currently have right now.

All GitHub Actions builds from "Experimental Build #1" (about ~28 days ago) until now can't be played due to this game-breaking crafting issue. Especially because it's blocking key progression crafts like makeshift welders and reinforced solar panels.

Kelenius commented 3 years ago

@pehamm Thank you! This is what I was trying to explain with "The impact the missing proficiencies have is absolutely insane", but this is much clearer.

@KurzedMetal Merging the PR is what started all these problems - previously, missing proficiencies increased your success chances instead of decreasing them, e.g. missing principles of leatherworking made your crafting 5 times MORE likely to succeed instead of LESS. When the PR got merged and the bug got fixed, that's when crafting became so difficult, because all the other bugs came to light - before that they weren't noticeable, because the success rate became stupidly high, which hid all the problems.

KurzedMetal commented 3 years ago

I know that, I'm just saying the current state of the code & builds has been like this for 30 days, and merging absolutely anything that alleviates the problem is better than leaving the current state even more time. We already have some PRs trying to solve this but they are not getting reviewed or merged.

Also, keep in mind that the Launcher has started to deliver the latest GitHub Actions builds recently, so more players are being impacted by this issue and starting to create new threads on Reddit about it out of confusion (thinking that this is how the dev team wants proficiencies to work).

EDIT: Another example would be just reverting the PR that "fixed" proficiencies so it goes back to super high crafting rates until a more holistic fix is prepared, instead of having crafting not work at all.

actual-nh commented 3 years ago

Ping: @kevingranade?

ennob commented 3 years ago

I feel like this shouldn't be postponed because of the feature freeze. This is a game-breaking bug that effectively means it is impossible to craft certain items (the makeshift welder and electrohack are recent examples where I got stuck). Can we add this to the blockers/To Do list for 0.F? Alternatively, we could drastically reduce the failure chance (or the amount of progress lost) as a temporary fix until a rework can be completed.

ghost commented 3 years ago

The difficulty rolls are normalized to the point of item progression (on the item counter itself); if you cleared up to that point you can still do more work towards item completion, you would just incur a penalty to your success, and it will generate more failures along the way as you continue crafting.

This is how crafting any item works; people just interpret it all in a binary way when the difficulty rolls are tied to the crafted item's degree of progress (item counter).

Crafting any item, even at full requirements, gets increasingly harder to generate success for as you progress towards item completion. But there is the illusion, out of player expectation, that the difficulty is constant and that they should be able to complete the item.

Since there's just an increased amount of ticks and multipliers that shift your success rates and ability to make progress, failure point thresholds are just generated at levels you cannot clear, because the fail counter is incrementing up with every tick of failures that are being generated. At the same time, triggering them resets the counter and scales up your success chance by a small amount, because it could be considered part of a quota to fill to finish the item itself if you do not have the ability to clear the next segments of progress.

You just have to accept that you can't clear it in a single craft, since you're required to do more work to complete the item if it's missing modifiers; but depending on how many you're missing, it just becomes impossible to craft.

I just think that the numbers are a bit too high, and it is out of expectation for people. Entry-level recipes should be scaled down to more reasonable amounts like ".5x more likely to incur failure", but convert that to "50% more likely to incur failure" so they know what to expect.

ennob commented 3 years ago

It is not just about expectation. For example, in the latest case with the electrohack, I consistently get to about 77%, fail, lose some materials, and get knocked back to between 50 and 60%. When I continue I get to around 77% again, only for the same thing to repeat. The problem is that I am stuck between 50 and 77%. I literally tried 15 times, spending more resources each time. If only I had gotten slightly further each time it would have been OK, but I didn't.

If the player is not actually able to ever complete the item, then the item should have a higher skill requirement. Saying that "this is how crafting any item works" is not helpful.

One option to help fix this might be the following: after the player fails and loses some progress (and maybe some materials), make the player immune to failure until they reach the completion level where they previously failed. That way the player could still fail many times, but he would at least make forward progress each time he invests more materials. It also makes sense from a story point of view, since if the player just tried and failed some part of the crafting process, they are unlikely to make the exact same mistake again.

Kelenius commented 3 years ago

Plus, the current crafting system makes it literally impossible to complete the item until you roll a success rate of more than 1, because there will ALWAYS be a failure point between the current progress and 100%.

pehamm commented 3 years ago

Plus, the current crafting system makes it literally impossible to complete the item until you roll a success rate of more than 1, because there will ALWAYS be a failure point between the current progress and 100%.

I have not seen that, can you show me where this happens in the code?

Kelenius commented 3 years ago

I have not seen that, can you show me where this happens in the code?

activity_actor.cpp, craft_activity_actor::do_turn, near the end, checks if the craft is over a failure point. If it is, it triggers a failure, which calls craft.handle_craft_failure, which calls set_next_failure_point. And because of how set_next_failure_point works, unless crafting_success_roll returns at least 1, the next failure point will always be between the current progress and 100%, so you'll hit it again. #48924 is currently attempting to resolve that.

Triggering a failure also always sets you back by some percent, but even if it didn't, you'd still be triggering failures over and over as you approach 100%.

pehamm commented 3 years ago

I meant that part:

because there will ALWAYS be a failure point between the current progress and 100%.

This sounds like you are guaranteed at least one fail. But you can get a success_roll above one immediately, in which case you will succeed without any fails.

Kelenius commented 3 years ago

Yes, if you roll above 1 at any point, you will finish the craft. If that's the first check, then there will be no failures at all.

But you cannot finish the craft without rolling at least a 1.
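With the placement rule described earlier (progress p, success roll r, next failure point n = p + r(1 - p)), that is one line of algebra:

```math
n \ge 1 \iff p + r(1 - p) \ge 1 \iff r(1 - p) \ge 1 - p \iff r \ge 1 \quad (p < 1)
```

So for any roll below 1, the next failure point lands strictly before 100%.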

ghost commented 3 years ago

It's just part of the requirements; the x2 time just generates a "x1 more likely to incur failure", or x2, proficiency modifier. The difficulty is the same as the previous craft if you count purely on success rate per tick (but you are awarded 1/2 progress per tick). So, before the craft is finished you have failed twice as much as before, even when you succeed twice as much (the failure point counter will never reset until item completion or being triggered). Each tick of progress is a calculated value of the previous one in the current crafting system, so the results should still be the same if you purely look at successes compared to failures (but with an increased frequency of failures).

In the end, you have 1 item. You did the work of creating 2 items but got 1. All in all, everything seems consistent; it should be working towards "doing more work" and getting less out of it. It's what you expect out of batch crafting 2 of the same item but getting 1 in the end; that's why it's considered "x1 more likely to incur failure" with an x2 multiplier. You hit the fail point 1x more time, or an x2 multiplier.

They aren't getting consistent results because they aren't going through how they're used for base addition and multiplication. The recipe and interface are also spelling it out: the "x2 more likely to incur failure" is add 2 to base failure (1). It's a x3 "as likely as" to be treated for multiplication and division. They've been predefined this way due to the way success and failure outcomes are defined at base (1), in order to link the related calculations together.

Failure multipliers seem to skew the difficulty at the higher portions towards that direction instead of preserving the frequency of events and both will define and multiply each other differently.

The only way to cancel out the failures is by going over the requirements by the same multiplier for success.

"x more likely to incur failure" must be canceled out with an equal amount of "x more likely to succeed" since they are both applied for base addition and multiplication. If you standardize success rolls to values 1 and have a proficiency multiplier of 3, you need to calculate for base (3/3) success_roll 1.

If you are being segmented during the craft and trigger a failure point early, it means you never had the ability to roll for that 1. Failure is a requirement to finish the item, if you failed early it means your failure point counter already reached its limit to trigger the failure point.

You compensate for the modifiers by doing more work and failing enough to fill the average amount expected before it will let you complete the item. Rolling in each portion of the craft per tick is a form of canceling out the time modifiers at each segment; if it's making you lose progress at a certain point, it means you just need more requirements there to cancel out the modifiers.

If you are expected to trigger the failure point at the end of the craft before and after the modifiers, you can expect the modifier itself to just be a multiple of it.

If the value of the failure counter is (1) for a craft and then you apply a proficiency modifier of 20, you treat it as you can't go over (1/20) of that counter at the end if you can have a success_roll of 1 from the original craft without it. You still need to succeed 20 times more to cancel it out.

If you want to compress the previous value into a die, you can't inflate the amount that it contains, otherwise it just represents something else.

They're all just base calculations that are meant to preserve a frequency of events; as long as no variable is broken on the way, each tick of progress you make on the item isn't going to make a difference to the outcome. You can just convert your expected results of doing the recipe, compute a value between the base numbers for success and failure outcomes that are predefined, and expect a frequency of results tied to it.

You can even go by the UI on the crafting menu, when you mitigate everything to x0 as long as normal and x0 more likely to incur failure. That's when success=failure, everything else is scaled to that.

If you started with a x3 failure multiplier, by default you have a 0% success chance of completely clearing the craft in one roll if you haven't had enough to completely negate it. So I have no clue why you guys are trying to interpret positive success chances off that.

ghost commented 3 years ago

I forgot to mention, people are usually a degree off when they use the multipliers, x5 failure literally means 500% failure rate (different from chance) on the crafting recipe. The failure rate is also different from a failure multiplier, the difference is that rates are a frequency or measure of something already defined.

If you had a success roll at 1 at 1x success rate and 1x failure rate then you go to a new craft with a 10x failure rate. The success roll you now have is .1 when you need 1.

The base value of your success rate has to be scaled up to the new difficulty. I'm not really sure why you want to treat the tick-by-tick messages you are being spammed with during crafting as a die, when different portions of progress are associated with different varying degrees of difficulty and rates. When you already know frequency has to be preserved by the definition of what simulating probability means, you just go by frequency of events if you want to generate an expectation.

If you want to go by the crafting system, the associated bonus tied to having all the proficiencies tied to the craft is a time multiplier of (1/2) and a "x1 more likely to succeed" but that is only compared to the "normal" value. It means add 1x success rate (different from chance).

This is the opposite of the effect of having a x2 time multiplier and an "x1 more likely to incur failure" if you noticed. You'll see the difference of a proficiency multiplier of 2 with and without proficiencies.

What's normal? The normalization point is between them, where the calculations are tied from. The normal point is a multiple of 0x or 1x or 2x, etc., and every value that ties to it. Success chance = failure chance.

The proficiency mitigation system and values are consistent; it's just that people are off by an order of magnitude that they aren't expecting.

Just look at all the message ticks you're spammed with; those are depicting your current success rolls and will add to your progress tick by tick, each outcome is generated from the previous, and each failure you incur is counted per tick. It is rigged to preserve a frequency of events that will be skewed towards one direction; if you already know this, then the outcome generated is much easier to accept. Because you know the probability is now real, it's just a frequency of events being generated.

The compression of the event into a die destroys the original meaning of these outcomes unless you create a base (1) die that scales accordingly.

If you batch craft 100 items and know your success chance is 75% and you lose 25 of them you wouldn't be upset.

I'm legitimately confused why people are also treating the 75% success chance as a success roll 1 when doing the original craft. That is success roll .75, where you clear the item 75% of the time, being treated as success roll 1. Success roll 1 is the requirement for clearing it 100% of the time if you want to consider it a binary event. The effectiveness of each in adding to the degree of progression in an item is completely different; don't treat them as having the same effect. You're inflating the values of your rolls to mean something different.

When the failure counter reaches (1), you'll trigger a failure, you can tie this to failure roll 1 if you want to lump it into a condition. If the value of the success roll depletes to 0, you are forced to do a failure roll 1 by default as it is the only occurrence of events that can happen. Success and failure rolls are linked together by the modifiers that affect them and scaled to the rates.

Success roll (1) by default means you cleared that entire segment because it contains no degree of failures. Allowing you to roll again but now influenced by a fail roll. No one should have a real success roll (1), it does not exist unless you are counting by ticks, if you have any degree of a chance to trigger a failure, it is not a real success roll (1). You can trigger a success with a success roll (.25) and inflate that into a success roll (1).

The item progress is counted in finite units of ticks, as is the failure point counter; it legitimately never stops or resets unless you trigger it or finish the craft. If you trigger it, it goes back to 0 for you to trigger again. There is no probability of going over it; one tick of failure is going to increment it by 1. The probability of not triggering doesn't actually exist; you guys are ignoring the huge amounts of "you lose progress" messages that are being mitigated by your skills (which don't actually remove progress) that you are being spammed with. Those are a +1 to the failure point counter; after it fills up, you trigger its condition. Then it resets after you trigger it.

How is this avoidable?

I think it's better to go by how the game displays it; I'd rather trust the guy's code. Filling the requirements for the recipe with everything will never get you to success roll (1) by ticks. I'm simply telling you now, the value stored in the failure point counter determines where the failure threshold point is; once it's reached, penalties are applied, and triggering it multiple times will roll back your progress.

If you end on "x5 more likely to incur failure" and have to go through 0, 1x, 2x, 3x, 4x, 5x until the end, triggering it multiple times already and being rolled back, it's for a reason. I'd rather trust this guy's code to translate missing modifiers to compensate for your reduced roll values because, by nature, it needs to be self-defining to tie it all together.

Every failure point threshold is just there to normalize you to another value, you might not even have the ability to make progress in it if it's high enough.

If you count by "work" towards item completion it's still linear, based off a frequency of events to be expected; it's just normalizing values by pushing them into other things that contain them in order to keep track of them. A time multiplier by itself is treated as more rolls (ticks). Progression rollback is 100% necessary when progress is contained in a limited integer value that cannot be exceeded. Progression lost and worked up again is supposed to generate the compensation necessary for actual item completion, along with the increased chance for item destruction; the end result will be the same.

The game's numbers are its own, tied to its own variables; if you use your own, it'll generate a different result. The game is forcing normalization on its own values. I would say frequency-wise nothing is wrong with it at all.

If you jam in a bunch of failure rolls into a success roll (1) then compare it to the game's success roll (1) naturally things will be different from expectations.

Technically speaking, since the last segment is by nature the hardest, repeatedly triggering progression lost (on the display) will generate the most amount of work efficiency to the actual completion of the item once all modifiers are in place and while it almost contains the highest difficulty, the failure point itself will mitigate your reduced success rate by bumping it up because it is an expected event that must occur, by nature the rate of the other must increase to preserve frequency in probability.

You're forced to fail to complete the work towards the actual completion of the item using the unit conversion on the game's own standards, all of which are self-defining and already consistent.

You break that consistency if you try to normalize off other things. It will force you to do failure roll (1) if you miss a proficiency without enough mitigation, you are predefined to do the failure roll (1) multiple times.

Every success roll you were considering to be a success roll (1) is jamming a higher amount of failures into your failure roll (1) throughout the craft.

ghost commented 3 years ago

Normalization of a zero value is associated with the event of success and failure. It is simultaneously determining it as a (1), or whole, while treating it as a point of reference for other values.

Photons have no mass, but there is negative mass, the high energy collisions of photons create a mass: an electron-positron pair. Photons are a normalization of defining mass to zero value.

The observation of creating success and failures that tie to a value of "negative success" and "negative failure" is occurring due to the normalization of values associated.

All things are consistent and tie into one complete system that requires a normalization of values to create the effect requiring "negative progress" to be made in order to successfully finish the item. Time and "negative time" normalization occurs between the point associated with the normal point, it has defined what time is and generated a defined effect tied to the crafting system.

Since values are tied to base addition and multiplication, the necessity for these values to be defined must occur from the normal value to preserve the direction choice.

Crafting with success, and crafting with failure.

samuelmattjohnston commented 3 years ago

Why not revert the "fix" ( #48673 ) of the formula for now? Fix all these issues in a future version; that way you are not blocked with so many balance-changing questions last minute for this release.

ghost commented 3 years ago

It's a feature of the code itself, the amount of time it takes to craft the item is predetermined by its time multiplier, even if it's fluctuating up and down, it's technically doing a linear amount of work. Just treat finishing the task when the time is up, if it isn't up, it isn't ready. The item counter will keep track of how things should be scaled to fluctuate based off of the degree of progression.

It's based on the normal values that it's being scaled by; how it goes up and down is supposed to provide the illusion of probability while preserving the frequency of events. The guy who coded it is doing it right. If you fill the failure quota you're done. You should associate the property of a "negative success roll" with crafting with failures; it gets translated into being more likely to fail with progress lost in order to be an actual success roll. Eventually, you can get a crafted item at the cost of more effort to create.

If you have the proficiencies you actually have the bonus of a (.5) time associated with the normal value with an increased chance to succeed compared to it.

But technically it's also treated as the base outcome for 75% success and 25% failure. Which is normal.

You could just show them the supposed bonuses to having the proficiencies, technically it still is true. You are 1x more likely to succeed than normal.

Mitigation seems to only work towards going to a 0x multiplier towards the normal values. It's why the calculations are so wonky, and the recipes as well: because of how they can be used both as multipliers and for base addition and subtraction for it to work, as well as how it's limited in being stored.

If you start 1% of a craft and it doesn't generate the same set of outcomes after if you compare reloads then something is broken.

There is a contradiction in the display due to that, to some degree, I understand why he didn't include the bonuses from having the proficiencies because it might screw up anyone trying to see what's happening but it might just be better for players to understand. The way it's being operated, showing everything as 0x is exactly 1x normal, it's forced to be scaled to a multiplier of the code and be higher than 1, but the value it stores is always higher than 1.

It's still true that the rates can increment them by 25% in either direction with base addition and subtraction but also be treated for multiplication. The values end up being the same, in the end, it doesn't matter. It's simply the properties they have.

Maybe for players to accept the proficiency system it's better to show them a bonus tied to the normal value from full mitigation. That way they don't get confused with what mitigation actually does for proficiencies.

EDIT: It did look like he might have messed up the code depending on how it's supposed to be operated from its original intention. But it could be fine as well; the game state is going to constantly change anyway. The top is always going to be the rates; the bottom should be values that move to 0x with mitigation, but are supposed to be working towards a value of 1 to be 1x normal of the operation. The values are supposed to be defining each other, and the display is by definition, not for division. The reversal of it affected the normal rates for everything tied to it.

Your base recipe roll maxes at success (.75) when it should have been (1) from this change. Everything is shifted several orders of magnitude higher because of it; when you don't even have success roll (1) for clearing the recipe, everything else scales accordingly, 2 orders higher than it should have been. A difficulty multiplier of 2 became a difficulty multiplier of 4.

I-am-Erk commented 3 years ago

After some discussion in discord, I think there are two primary changes to make for stable regarding fail rates.

  1. Dramatically reduce fail multipliers in proficiencies. I suggest changing them to 1 + (1/10 of the current value), so a fail mult of 5 would become 1.5. This may be a bit too far, it will require a little testing... at a guess it's going to lie somewhere between that and 1 + (1/5 of the current value), where the highest mult would now translate to 2.
  2. Consider having current stage of crafting impact chance to fail, by either adding current progress/2 or just straight up current progress to your crafting roll. The former will preserve some of the current effect of failing more at higher percent completion, which is perhaps more accurate to real life, while the latter should more or less eliminate it entirely. (A quick sketch of both changes follows.)
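A minimal sketch of both changes, with illustrative names (not a PR, just the arithmetic):

```cpp
// 1. Soften proficiency fail multipliers: f -> 1 + f / divisor,
//    so with divisor 10 a 5x multiplier becomes 1.5x (divisor 5 gives 2x).
double softened_fail_multiplier( double current, double divisor = 10.0 )
{
    return 1.0 + current / divisor;
}

// 2. Add current progress (0..1), or half of it, to the craft roll;
//    the roll still only counts as a success when it exceeds 1, so this
//    reduces the fail chance late in a craft without guaranteeing success.
double adjusted_craft_roll( double craft_roll, double progress,
                            bool full_progress_bonus )
{
    return craft_roll + ( full_progress_bonus ? progress : progress / 2.0 );
}
```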

For the raw problem of proficiency failure rates, 1 should solve almost everything and 2 should cover our bases on the rest. Then #46153, which is closer to complete than I honestly thought it was, will turn this all on its head again after stable.

Of note as it came up a few times: we should avoid any string changes for stable. Math changes to solve the bug are appropriate. This obviously doesn't touch the UI calculations or the effect of books, I haven't looked at those.

pehamm commented 3 years ago

I'll start a few simulation runs.

On 2: wouldn't current_progress/2 guarantee success after 67%? Or am I misunderstanding your intentions? On the other alternative, I made #48924 for just that.

I-am-Erk commented 3 years ago

Since it is only adding your current_progress/2 to your craft_roll, which is counted as a success if >1, it shouldn't guarantee success at any point. It just progressively reduces your fail chance the further along the craft is when you begin. I think some degree of failures clustering later on is OK behaviour.

I am OOTL on what #48924 is doing in the actual code, I would need to take a closer look at that.

EDIT: as I understand it, it now just picks the next failure point in a way that can, potentially, include a craft point you've already passed, and thus they won't cluster more towards the end?

pehamm commented 3 years ago

No, because we add the result to the current progress in line 1052. The PR makes the progress rolls additive: failing at 51% and rolling another 50% leads to success at 101% (ignoring lost progress).
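In other words (an illustrative reading of the PR, not its literal code):

```cpp
// Rolls now accumulate: each roll is added to the progress already made,
// so failing at 51% and then rolling 0.50 reaches 101% and finishes.
double next_failure_point_additive( double current_progress, double success_roll )
{
    return current_progress + success_roll;
}
```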

I-am-Erk commented 3 years ago

Aaah, I see how it's changed now.

That is also probably a perfectly fine fix.

pehamm commented 3 years ago

If the failure rates with missing proficiencies are adjusted down, this can be put off for pretty long. It still peeves me that my probability of failure depends on when I last failed (right now reaching 99% without having failed is much more likely to finish than reaching 99% after failing at 97%), but if people succeed eventually that is no more than a minor annoyance again.

The current system has some elegance in that the failure rates are geometrically distributed, and ignoring skill gain we can say that the expected number of failures equals the reciprocal of the (computable) failure probability.

The downside is that if we want to train proficiencies, it is better to start a new craft after failing with the current system, as the expected time until failure increases.

I-am-Erk commented 3 years ago

Yes, reducing the magnitude of the proficiency effect is by far the more important fix, but I would still like that bug to be managed: it is not one that is likely to get fixed by the normal_roll conversion unless that starts expanding outward, so since we know it is there we should do something about it.

ghost commented 3 years ago

The failure counter never resets unless it is triggered or until the item is complete, and by nature is a precondition for completion; you have to scale things off the counter as well, afterwards you'll end up with huge numbers of expected failures between each failure point threshold that can continually scale upward with a higher degree of progression. I kind of have to emphasize this again: failure is the only option to complete the craft.

Technically, you're still doing the craft, but at a much higher difficulty; to some degree, I have to call it failure crafting, because it's part of the quota system being applied to the item counter. The degree of progress and difficulty scaling is tied to the exact tick on the progress counter, each should be calculated from the last, and the item counter will influence the difficulty settings to skew it towards a certain direction.

The difficulty settings might be tweaked too high too, because it looks like the relationship between everything is now reversed in some respects between base addition and multiplication for how the bonuses are applied. If you ever stop crafting, you have triggered the counter effect; every "you lose progress" tick you're missing during a craft just adds to the counter by +1. When it's filled, the failure condition triggers, scaled to the difficulty of the craft and the number of times you trigger it. The problem is that it's triggered way too often for players to adjust to, I feel; with the item counter it's actually enforcing a higher quota you have to fill.

In my definition, it's a "negative success roll" that you guys are trying to do for a success roll. If you have to repeat the last segment, it's due to the item counter's effect pushing you down by skewing the results.

It looks like the proficiency multiplier that is supposed to be a multiplication is now a division from the last change, which influenced the order of operations on the math. I honestly don't think you can even do it through the success roll method, because it enforces failure by definition; your simulation has to fluctuate and lose progress to be an accurate depiction, with a chance of item destruction now. It's just part of actually crafting the item now.

pehamm commented 3 years ago
1. Dramatically reduce fail multipliers in proficiencies. I suggest changing them to 1 + (1/10 of the current value), so a fail mult of 5 would become 1.5. This may be a bit too far, it will require a little testing... at a guess it's going to lie somewhere between that and 1 + (1/5 of the current value), where the highest mult would now translate to 2.

Looking good. I simulated crafting an anvil because it is imo the most annoying recipe as of now, due to the recipe needing both the Principles of Metalworking and Blacksmithing proficiencies to craft, and both proficiencies being annoying to practice without one (I am drowning in steel frames as one of the few viable options for blacksmithing).

Anyway, the recipe needs 3 fabrication, Principles of Metalworking has a 5x fail_multiplier, and Blacksmithing a 2.5x fail_multiplier. With the old system this would have been a 5 * 2.5 = 12.5 divisor on the success roll; now it would only be 2.25. With average intelligence and 3 fabrication, we get a success probability of roughly 0.05% and the following densities (again, probability densities of both rolls left, density of the ratio right, probability to succeed equal to the orange area under the density starting at x >= 1): [plot: 3 skill]

With 6 skill, we make massive progress, getting 28% successes: [plot: 6 skill]

And with 9 skill, this grows to 95%: [plot: 9 skill]

For comparison, with the old system I did not roll a single success in a million tries, even with 9 fabrication. [plot: 9 skill, old system]

I-am-Erk commented 3 years ago

Thank you very much for that @pehamm, that's exactly the sort of data I was looking for, and a good choice of recipe. That suggests that, if anything, the 1 + 1/10 model might still be too tough, but it is definitely on a reasonable scale now.

This is an easy fix. I will set up a PR.

ghost commented 3 years ago

Part of the reason you guys are failing more is that the proficiency multiplier that was being used for multiplication is now used for division.

Basically, if you were playing on updates before the change, the failures were spread out much more evenly and your fail counter just incremented upwards by the same value. You just compensated with time and your failures were spread out much more evenly.

Now, for division, it's treated as an opposite effect: one failure is multiplied by "x more likely to incur failure" under the way it's now scaled.

Both directions work with the current code, they just scale differently to achieve the same thing; you see it as a high failure rate because you're supposed to see them in bigger bursts. You have a very small failure counter in comparison to the multiplier affecting it.

I don't really think there's anything wrong with the way it is now, overall as long as the frequency of failures is kept within the expected range it's the same to me.

I'm just saying the code itself is just a quota system and you have to fill it: success quota (progress bar), failure quota (between progress bar), item counter (between progress and item completion, to check for the failure quota being filled), work quota (scaled by time multiplier), etc.

Unless you pass all of them you're not allowed to complete the item, as long as you aren't removing core components of the code. The progression loss will always be necessary to scale things if your progress bar fills up too much. I don't know why you're going off skill rolls and probability when it's more of a checklist.

You'll see more failures because of the way the multipliers are being used; there was originally a time compensation that had to be paid in the original craft, which is being converted more into a failure rate compensation, so I think this is acceptable.

From the start to the end they must be normalized to values and offset each other until the item's true completion is allowed; there should be no elements of randomness, since it should spawn failure points closer to you if you haven't met one. They start to offset one another until you finish the craft, so technically the high failure chance isn't too bad.

Zireael07 commented 3 years ago

Whatever ends up done, the sheer amount of discussion and misinformation about this problem shows that this part of the code needs to be well documented/commented to avoid it getting broken again sometime in the future...

ghost commented 3 years ago

Well, technically time and failure rates have inverse properties tied to the recipe modifier by base manipulation; the solution and the problem itself are tied together.

If a solution compacts the time multiplier and ends up increasing the failure rate, you see it as a problem when it was originally a solution; you're limited to how the code works with the proficiency system in the ways you can manipulate it.

If you want to increase the recipe's "time multiplier" to a point where you never fail, technically that can happen but that's going to create more problems.

The progress bar you see (on the display) rolling you back is normalizing itself, in base manipulations of each other, towards higher values while you get rolled back.

Treat your progress bar on the display of 100% as a value of 1 for base manipulation. There is a reasoning behind your progression loss.

I hope you know the implications of the solutions you try to employ.

They're all tied and linked together, you shift one, you shift the other.

"2x time modifier" means "1x likely to incur failure" tied to "as long as normal".

There is also a reason why the recipe's values are wonky when you mitigate; the normal value is a base value of 1 but gets 0x degrees increased by a certain degree.

The normal value is the point of reference.

Normalization of values to 1.

There is a reason you can do a skill roll (1) when it was a prof_multiplier that multiplied when you had the proficiency.

There is a reason why you can do a skill roll of (.75) with proficiencies now that the prof_multiplier is divided; if you do not have the proficiency, shift 1 degree: skill roll (.25).

You can create a new normal point and increase your rate of success so high that it makes having no proficiencies the same as having them.

Normalizing factors: -5x, -4x, -3x, -2x, -1x,0x, 1x, 2x, 3x, etc

In the end, it doesn't matter; they all have to line up, and if you want to treat 1 as the item completion number, you get 1.

There is a -.5x time multiplier from having a proficiency from the normal point.

Giving yourself a -x "time multiplier" decreases the number of rolls to the lowest amount: a single tick of 1, success roll 1, item completion 1.

Every operation will ripple in effect with everything that is tied to it. You can call this the condition of fulfillment to be working towards quota 1 and tie this to item completion 1.

Shifting a normal point affects all calculations tied to it, look at what happened before and after the changes applied to prof_multiplier being multiplication and prof_multiplier being division, you never shift your normal point based on a 0x or 1x or 2x value because someone is interpreting it as 1=0x (which you should never do, do not use those).

Those are all values calculated based on the normal point, and for consistency's sake, let's try to keep a single normal point to calculate things off of and not shift it around. All your calculations before were off for skill roll (1), because you can never do skill roll (1) unless it's operated off the same normal point. Is it the same normal point as before?

We can either revert or keep this normal point. If you multiply you go up; if you divide you go down. By this basis, only failure crafting is allowed; success crafting does not exist.

The direction in which they are applied affects the direction of how things will progress.

I already adapted my values to shift to the normal point to be consistent for myself.

If you haven't noticed, success crafting is based on clearing the failure points before the condition for completion, failure crafting is allowing them to happen as a condition for completion.

Prof_multiplier is divided, which determines our direction to be pointed in that direction; it has a high failure rate, but it is also a feature. The failure conditions lead to item completion, enforced by rolling back your progression. If you aren't failing the last segments, you should never be allowed to complete it until the quota for item completion is 1, to allow for item completion 1.

Technically it's also a good thing since the more you fail the more completion you have. I'm not sure why you're trying to skip your rolls for item completion. You treat those rollbacks as normalizing and adding to other values you need to finish the item. It's still the same thing, The direction we're pointed at is the direction that it's going to be set towards, we're not pointed up in the positive direction, we're pointed down.

It is literally that simple, no need to pull out all the code, unless you broke the code it should stay consistent. You can look at the code and have no idea what it does if you don't look at how it all works together.

If you want to balance things, you also need to consider the direction you end up working towards, positive or negative.

Since they did not invert the rates, it seems very punishing, but it is actually in some ways the opposite: it helps you fill the quota to go towards completion. If you avoid it you end up decreasing your success rate; it is supposed to unavoidably respawn at a level you cannot clear, because clearance is the goal.

I-am-Erk commented 3 years ago

Evrze, I don't mean to be harsh here, but I'd really appreciate it if you'd back away from this topic. As best I can tell you don't have a really good grasp on it, and your enormous posts have dramatically slowed me down from fixing it as I try to figure out what you're saying and how it relates to anything. I know you mean well, but it's clear you don't have a deep understanding of the issue; this is getting in the way.

ghost commented 3 years ago

Alright, I've said all I need to say.

EDIT: For anyone who actually knows, log base 2 is a binary logarithm; all things should be working properly. Each normal point has its own "temperature" scaling with how the normalizing factor operates on it. What you have determined to be your skill roll is your "negative skill roll" on the new "temperature" scaling, with a "negative progression loss" in order to generate positive progression towards item completion. The original code stands to be consistent in both directions, with no probability of clearance until conversion factors are accounted for. Its consistency is seen as inconsistent depending on the normal point of reference; since there are infinitely many, its inconsistency is consistent. In the essence of Gödel's incompleteness theorem, there is a limit of probability in all things; only the code and game can complete themselves. Any simulated expression no longer becomes the complete expression of what the game and code are trying to do together.

The only way I can keep myself consistent is by being inconsistent.

If you think in dice, a 2d5 can have success but is a success roll (.9); there is a difference between it and a 1d10 success roll (1). The way it is stored is either incremented or can go through all forms of base operations to reflect a result that must be enforced. By definition, a die can only reflect the mathematical pattern that it allows.

You do not understand what a success roll (1) is until you understand what it is supposed to cover, it is supposed to cover everything within it and is allowed to roll further after being affected by success rate and failure rate modifiers.

There is no probability; it's just set to convert the values that way, and you even have the settings that it will skew you towards.

The progression loss you see (on the display) is, unfortunately, necessary because of the failure crafting settings treating your progress bar (display) as going towards the negative direction in some parts, the positive direction of progress is when it is lost from the failure trigger to offset your "negative progression". Until it is set to be converted to a positive value required, your progress bar (display) cannot be allowed to finish the craft, if you remove the triggers, you might as well remove the proficiency system penalties.

Because of the negative perception of the failure point, players and anyone who interprets the code may have no clue what they are doing. If success and failure are inverted in an equation, what does it mean?

Why haven't they inverted the meaning of the failure point to mean the opposite thing?

If you need to have a quota that needs to be enforced in either direction that needs to be met before item completion, one thing can mean the opposite in the other when the situation is reversed. It is a failure point but it is not a failure point. The failure point is the success point and you need a set amount of success points to be awarded.

You set it up for one direction and create a condition for the other event; everything in place should just be working fine, it's just that people might tweak it, which is fine, it shouldn't really matter to a certain point. The feature of failure crafting is the so-called high failure chance that people are complaining about, because it counts for 1 point of success on the item counter for the condition of clearance.

If you do not have enough, you are never allowed to be successfully cleared, once you have enough you're allowed to be cleared and you should not ever need more.

You can read the code and not understand how it works both ways. I have hardly read the code, but everything seems to be in place enough; I'm practically a beginner in coding, so I'm limited in what I can see. All in all, the setup seems simple enough that I could understand it that way, and it seems to be set up correctly.

If you define one, the other is defined, when conditions trigger the situation of events that allow for those occurrences to shift accordingly. You should never be allowed to finish a failure crafting recipe without clearing out every "negative progress" in your bar at the end of the item's completion simply by defining one side.

Failure crafting is not failure crafting; it is simply the enforcement of player expectation that they should be succeeding. It's still the same crafting event, but they need to compensate with what they see as "failures" to be awarded success and complete the item.

See how every time you are awarded a success point you can keep going further in the craft until you complete it; that is just how it should work.

Honestly, they're just numbers being converted to a value of 1, where 1 is considered complete. You can choose not to look at the progress bar or count the number of success points you need to be cleared for, it will always be the same for the same item.

Success crafting lets you avoid failures to succeed, and failure crafting lets you overcome them for success. Progression loss can never be avoided: you are set back, but always allowed to move further ahead.

If you see your progress bar stall, it is due to the conversion limit on how far you can move forward; as you go forward, it should always get more difficult.

If it isn't, then the code itself is broken.

If you can only accept that a fail point is a fail point, then the only option is going back to where you could get your success roll (1) by multiplying by prof_multiplier. That setting was not the intention of the original code, but it does let you define things that way if you just let the multiplier be the trigger.

By definition, a success roll can never operate the same way under the current settings when the order of operations isn't respected; instead it generates a "negative success roll" that forces an encounter rather than clearing it. I naturally had my own expectations of what it does because, in actual fact, the code is always broken.

It just leaves me confused about what is actually supposed to be done; the easiest fix seems to be reverting to being able to do a success roll, which is impossible in the current state. At this point, I'll step aside, let you all do your work, and be done with my rants.

I-am-Erk commented 3 years ago

Thanks for understanding. I still value your feedback; I just need a little less discussion at the moment.

ghost commented 3 years ago

A high failure chance looks necessary, since inverting the proficiency multiplier shifts the time compensation from craft duration into failure chance.

In the end, it's just rolling for successes and failures. If you have any failures, the craft cannot be completed, because it looks like it is recalculating for a success roll (1) against a progress bar roll (1). That makes it impossible to clear while you still owe a failure roll, which could be considered a feature; after clearing out your failure rolls, it lets you progress forward.

Players are just seeing their failures build up towards the end because the progress bar on the display can only show so much: it has to clear out all your failure rolls from the previous step before it can build up again, because your success die cannot fully cover all of the areas involved.

It looks like a nerf to your skills, but it compresses your success die: since it is the inversion of prof_multiplier, it creates an effect where your die is considered to have fewer failures but requires more failure rolls, because in the end you still need more rolls to complete the craft.

If it's working correctly, it should just be making crafting times shorter, but the time multiplier isn't really time.

It doesn't really need fixing; players should just adapt to it as a new kind of crafting calculation where you are forced to do failure rolls, because not a single spot of failure is supposed to exist in a success roll (1) for progression (1), and every spot must be covered before moving on to item completion.

A success roll (1) for the craft can never exist unless you overfill the requirements to the point where not a single spot is missing in the crafting progression, which is unrealistic.

If your success die is a 4d8, for example, you need to account for the areas it cannot cover in each roll; those are your failure rolls that add to the failure count everyone seems to be ignoring. The die has to compress itself to a 1d8 and cover every area in order to produce a true success roll (1).

As you move on, your success roll has to be skewed towards failure, if you simply look at the progress bar on the display.

It's a failure quota by default: you have to clear out all the areas you missed before, and backtrack if something isn't covered. You need the failure quota in order for the successes to be allowed.

Since people have been playing with this change for a month, it should be fine to keep. Overall it just creates a bigger material sink; I know it's frustrating to have your craft stop at 99.5% on the progress bar and get rolled back, but it's a feature of the current system.

Although the feature was a "fix" that didn't need to happen and sort of broke the game, just accept the broken state and treat it as the new norm, where you're enforcing complete coverage of a success roll (1).

Failure points are currently never allowed to be skipped, and we can keep it this way. All it's really doing is rolling everything in real time. I don't get why people don't want to do their failure rolls; it's because of them that they see their craft in segments. If you want to do everything in a skill roll, just revert to the old system.

This one makes you do fewer rolls, just at a higher failure chance. No matter what ends up happening, someone is going to break it with each further change made to fix what they think is broken.

You can break the guy's "fix" too. No one really seems to know what they're doing in the end, and it's actually great to see that, because then you can expect the game state to always be broken and set your expectations accordingly.

ghost commented 3 years ago

Do people not understand the number of rolls required to fill up the progress bar on the display?

Literally, you're rolling in real time, once per tick, while the bar fills up as time passes; the change does reduce the number of rolls it takes, so I see it as fine. If you treat it as binary, as two rolls, you won't see the change. It's not binary; it was never binary, even before this.

If you have a success roll (.90) and roll 10 times, the chance that every roll succeeds is 0.90^10 ≈ .35.

What happens in the other part, where your success roll isn't (1)? It's a failure roll, and then one of the failure outcomes is picked.
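For the record, a minimal sketch of that arithmetic (illustrative, not game code): chaining ten independent checks at a 0.90 success roll leaves roughly a 0.35 chance that no failure outcome is ever picked.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const double per_roll = 0.90; // a success roll (.90)
    const int rolls = 10;
    // Chance that all ten rolls succeed: 0.90^10 ~= 0.349.
    const double all_succeed = std::pow( per_roll, rolls );
    // The rest of the time at least one roll is a failure roll,
    // and one of the failure outcomes gets picked.
    printf( "all succeed: %.3f, at least one failure: %.3f\n",
            all_succeed, 1.0 - all_succeed );
    return 0;
}
```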

Increasing the failure rate while reducing the number of rolls has a similar overall effect, but it generates a higher perceived failure rate, since you'll see failures more frequently.

Since it seems like no one understands probability, math, and coding, the changes we make break the game state even more, even when it was originally correct.

I am legitimately confused. Fixing things in a way that breaks them seems to always be the trend; plugging in numbers when you don't know what they're tied to creates exactly that effect.

If you used to do 400 rolls, and now you only do 200 rolls but at a higher failure rate, isn't that fine?

It can achieve a similar effect even if it's more punishing. There is always a correlation between the number of rolls and the failure rate, because a success roll (1) can never stay a success roll (1) for every area as the rolling continues.
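That correlation can be shown directly. In this sketch (hypothetical numbers, not taken from the game code), the per-roll failure chance is solved so that 400 rolls and 200 rolls produce the same overall chance of a clean craft; halving the roll count roughly doubles the per-roll failure rate.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Target: the same overall chance of finishing with zero failures,
    // whether the craft is resolved in 400 rolls or 200 rolls.
    const double overall_success = 0.50; // hypothetical target
    for( int rolls : { 400, 200 } ) {
        // Solve p^rolls == overall_success for the per-roll success p.
        const double p = std::pow( overall_success, 1.0 / rolls );
        printf( "%3d rolls: per-roll failure %.4f%%\n",
                rolls, ( 1.0 - p ) * 100.0 );
    }
    return 0;
}
```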

You have to trigger a failure outcome at some point and choose from one of the effects.

This was the case even before the change; you were just incrementing it and spreading it out over a larger number of rolls. That is what the "time multiplier" actually affects: the number of rolls. It isn't the same "time" the game ties it to, but a standard of rolls per craft.

People who do not understand what the numbers are tied to will not be able to accept the change. If you want to do more rolling and reduce your failure chance, that's fine too: increase the number of rolls.

All I'm saying is that it does have the benefit of cutting back on "rolls" per craft, given that every tick of your progress bar, even by a single increment, is the simulation of a roll. Why are people trying to compress it into one or two rolls?

Trying to do that is the definition of insanity; it will never work, because you end up inflating your rolls into something they aren't. You can sit and wait for your 1d8 to roll a 100, and it will never happen. What you clear is what you work towards per tick; why can't people see that?

You want to split it into two rolls, and it will never work: do you roll once for 0-50% of the progress on the display and once for 50-100%?

That is never how it worked and never how it should work. Every time a segment stalls, it is picking a failure outcome and then continuing; what is wrong with that?

The code already allows it to work in both directions by inverting the values tied to everything. Perhaps it isn't balanced for that, but the difficulty increase seems intentional; at worst, all it does is increase the amount of material sunk into the craft while compensating on the "time".

If you want to go back to doing more rolls and spreading out your failures, that works too. I'm just saying that the change is fine, because the quota is still always enforced to skew you in a certain direction, by definition of probability. It is never allowed to do what is impossible. It is impossible for a 2d5 to roll a 1; yes, it can roll a 10, but you still need that 1, which goes into the failure counter so your failure roll can be expressed as one of a set of outcomes.

You never stop rolling; you always roll until the end of the craft. You already failed a lot of times at the beginning, but you didn't notice, because your success was so high; at the end, you need to fail more to finish the craft. Consecutive successes are always penalized by what it is now allowed to do: the more you succeed, the more it depletes, because probability has to preserve itself in frequency.

If there is no frequency tied to it, then it isn't a probability. If you have a 1% chance of rolling a success and it succeeds 500 rolls in a row, what does that 1% success chance even mean?

Do you then take that probability to be 100% and keep rolling with it?
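The frequency point can be checked with a simulation (illustrative only): whatever streaks appear early on, the observed frequency of a 1% success converges to 1% as the number of rolls grows.

```cpp
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng( 42 );
    std::bernoulli_distribution success( 0.01 ); // a 1% success chance
    const long rolls = 1000000;
    long hits = 0;
    for( long i = 0; i < rolls; i++ ) {
        hits += success( rng ) ? 1 : 0;
    }
    // Streaks happen, but over enough rolls the frequency settles at ~1%.
    printf( "observed frequency: %.4f%%\n", 100.0 * hits / rolls );
    return 0;
}
```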

The game is just recalculating values; the mathematical pattern is tied to the die and to the set of success and failure outcomes associated with it. Whenever you compress a whole set into a single probability or die for you to simulate, you have, by definition, destroyed the mathematical pattern that was being expressed.

Essentially, your numbers will mean nothing if you try to do that. No one had a success roll (1) before the change, because they already had increments of failure stored in the counter; they just treated it as a binary outcome.

It was never binary, and you never had a true success roll (1): if your failure counter was ever above 0 at any point, it was not a success roll (1), and you could finish the craft with the failure outcome triggering without you noticing.

Now that the number of rolls is reduced, the failure rate has to compensate in order to express the same mathematical pattern, i.e. the same frequency, as before. You are forced into failure outcomes more often because everything is "faster", taking place over fewer rolls.

So what's wrong with that, if your success rate at the end is lower because your success is now expressed in fewer rolls, where the roll count was an order of magnitude higher before? Yes, the difficulty is increased, but the same patterns have to be expressed in the same way, with the same method the game uses. You're skipping rolls this way, so you're forced into lower success rates and higher failure rates at the end.

That's simply how probability works: if you don't have enough to meet the quotas, the item counter has to be there to prevent you from finishing too early, when you're not cleared for the values they are normalized to.

Please tell me someone understands that probability has to be expressed that way. If you want to break probability, that's fine, because the probability is never real: a probability that is not expressed in the frequency of events is always fake.

If you want to go with fake numbers and set your expectations to that, that's fine too; you can shift everything around and invert the values to your liking. That is also how it works.

That's why in the end you should realize that there is NO probability.

There is NO probability.

That's what the counters are there for.

ghost commented 3 years ago

I have been emphasizing the differences between success chance and success rate, as well as between failure chance and failure rate.

People tend to get them confused with each other, but they are completely different.

This is why a 1% success chance can still produce consecutive successes across 100 rolls, and you can have a success rate that offsets it, but in the end you need more rolls to finish the item. What if there is a failure-rate modifier that offsets your success chance and increases your failure chance?

If you're sampling 100 rolls out of the 100,000 it takes to complete, what makes anyone think your success roll has an equivalent effect on every portion?

Your progress bar always had failures in it, but the way they incremented before meant you didn't care much about them. Now that everything is much more compressed, you have to watch the failures go up, since the rates have to change as the roll count scales down; otherwise it wouldn't express the same thing.

If you scale down to 100 rolls, are you going to expect the game and the code to give you a 100% success chance with a success roll (1) when you had a success chance of 1% at the start?

Of course not. You can tweak how it scales, but in the end it's going to affect things both ways: scaling down the number of rolls will always increase the failure rates in order to push the situation towards the expected results.

There are no particular benefits to what we have now, but at least you can see the portions you have completely cleared out; if the game pushes your progress back, you can always climb to higher completeness afterward.

ghost commented 3 years ago

https://medium.com/@tglaiel/how-to-roll-half-a-dice-21631c1fe694

This is the effect of cutting the number of rolls in order to represent the new values; what you had before was a different "shape" from what we have now.

The normalization factors were just holding you accountable to a certain factor away from the normal point. Being x.1 more likely to incur failure is a shift of roughly less than 1.1x from that normal point.

Please understand the values of your die; these are just the effects of holding you accountable for every outcome that occurs.

Your successes are built from your failures, and your failures are built from your successes; it must be forced to be that way. You can change the "shape" of the distribution if you want, but I'm saying that how it works now is absolutely fine and functional; it just requires a different interpretation.

The -0.x time multiplier was effectively making the base value lower than the normal point; you need that "negative time" to cut down rolls from the normal point. Now, since it's division, you must have a higher failure rate when you cut down the "time".

As you continue, your success rate must continually decrease to represent:

"function: roll one continuous dice, apply f(x) = x^(1/n)"

You can treat everything that is happening in your progress bar as the effects of that continuous die constantly being rolled.
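For reference, a minimal sketch of the transform from the linked article (illustrative, not the game's implementation): applying f(x) = x^(1/n) to a single uniform roll gives the same distribution as taking the best of n uniform rolls, and n may be fractional.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// "Roll one continuous dice, apply f(x) = x^(1/n)": one transformed
// uniform roll is distributed like the best of n uniform rolls.
double roll_continuous_die( std::mt19937 &rng, double n )
{
    std::uniform_real_distribution<double> uniform( 0.0, 1.0 );
    return std::pow( uniform( rng ), 1.0 / n );
}

int main()
{
    std::mt19937 rng( 42 );
    const int trials = 100000;
    for( double n : { 0.5, 1.0, 2.0 } ) {
        double sum = 0.0;
        for( int i = 0; i < trials; i++ ) {
            sum += roll_continuous_die( rng, n );
        }
        // The mean should approach n / (n + 1): ~0.33, 0.50, ~0.67.
        printf( "n = %.1f: mean %.3f\n", n, sum / trials );
    }
    return 0;
}
```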

Progression rollback is, by definition, necessary if you cannot fulfill the item's completion before that point.

It's not there intentionally to punish you; the shape of what you're rolling for must simply become the accepted new outcome of events. Technically, you will always fail more at the end. If you remove the item counter, it will stop spawning unclearable failure points and just let you do the segments normally, but then you no longer preserve the same probability of events in the frequency in which they must occur. The failure points mean you have filled up your success quota and are not allowed any more success rolls; you must now pick from a failure outcome if you want to continue. After picking a failure outcome, you accept it and continue rolling for successes.

The shape of "multiplication" of the prof_multipler and the shape of "division" of the prof_multiplier is already different from how things are rolled for.

If you want to distribute the failures more evenly, it can only be done by multiplying by prof_multiplier and reverting to how things were before.

Overall, if you have already adapted to the new "shape" of the failure distribution created by reducing the number of die rolls, you can just keep the change and consider it a feature.