Open pack7df opened 3 years ago
We have enough issues getting the correct data out of the KSP API, and we have enough new users, that adding noise to such data — which would require implementing filtering in kerboscript just to get usable data back — is not going to happen. To say nothing of how it would break existing scripts.
kOS operates by default at 200 opcodes per KSP physics tick; since KSP has a physics delta of around 0.02s, that makes for 10,000 opcodes per second. Also, how much a given IPU setting lags KSP depends completely on the code that is executing, as in kOS not all opcodes are created equal, so barring significant back-end changes to how kOS opcodes work, this is also unlikely to change. To give an example, I have had massive lag from badly written code at only 200 IPU, but no lag at 2000 IPU with much better-behaved code. As for the size/mass stuff, take a look at RP-1: part of that mod set includes tech-tree-based limits on kOS disk sizes.
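For reference, the IPU budget described above is exposed in-game through kOS's `CONFIG` structure. A minimal sketch (assuming the documented `CONFIG:IPU` setting; the exact lag you see will still depend on which opcodes your code executes):

```kerboscript
// Query the current instructions-per-update setting (default 200).
PRINT CONFIG:IPU.

// Raise the budget to 2000 opcodes per physics tick.
// At KSP's ~0.02 s physics delta that is roughly 100,000 opcodes/sec,
// but as noted above, actual lag depends on the code being run.
SET CONFIG:IPU TO 2000.
```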
If you want light lag, install the RemoteTech mod; kOS will mostly respect the imposed signal delay.
Adding "RAM" limits to kOS would break too many existing scripts, require significant back-end changes to how kOS operates, and would be extremely hard to quantify due to the significant layers of abstraction that exist in kOS.
OK, I understand. But I think I failed to explain the last point. I'm not proposing a "RAM" limit that ensures all variables in use fit within some actual RAM figure — that would be very hard for little benefit. Just that the code loaded into memory to run should be less than or equal to the disk size, for instance. Currently I can load code of any size and break the HD restriction.
Somewhat more reasonable, but still unlikely to go in, both for backwards-compatibility reasons and because the "hack", as you call it, is an intentional feature, not a bug: a significant number of kOS users don't use the file system and just run off the archive all the time. Imposing such a restriction would break too many people's existing scripts and craft.
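To illustrate the bypass being discussed, here is a sketch using the standard kOS volume commands (`0:` is the archive, `1:` is the first local volume; `bigmission.ks` is a hypothetical script larger than the local disk):

```kerboscript
// Copying a large script onto a small local disk fails when it
// exceeds the volume's capacity...
COPYPATH("0:/bigmission.ks", "1:/bigmission.ks").

// ...but running it straight from the archive works regardless of
// local disk size — the intentional behavior described above.
RUNPATH("0:/bigmission.ks").
```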
Speaking as someone who runs off the archive all the time, I would like having a difficulty setting (opt-in) that prevented me from running arbitrarily large scripts on low-tech CPUs. I run off the archive to avoid having to think about the dependency tree, but I view the irrelevance of the RAM size as an unfortunate consequence of this approach, not a desired feature.
Add options for realistic evolution; I think it would be very easy to implement. Some points: