trhr closed this issue 2 years ago
Base cost is only calculated on the main module, not any imports?
export.js

```js
export function test(ns) {
  ns.print("test");
}
```

import.js

```js
import { test } from '/export.js';

export async function main(ns) {
  test(ns);
}
```
> [home ~/]> mem import.js
> This script requires 1.60GiB of RAM to run for 1 thread(s)
> 1.60GiB | baseCost (misc)
Test 2:
export.js

```js
export function test(ns) { ns.print("test"); }

export async function main(ns) { ns.hack('n00dles'); }
```
import.js (no changes)
> [home ~/]> mem import.js
> This script requires 1.60GiB of RAM to run for 1 thread(s)
> 1.60GiB | baseCost (misc)
I'm aware of that. Two notes, however:
> Which is somewhat alien to real world software, which is typically performance costed by cycle time and not "module import size." If you never reference a function in a module, you never eat that performance hit.
I do not agree with that. Performance in cycle time / execution time is a real-world cost, just as the size of the executable, its libraries, and RAM usage are. Even if you never call a foreign function, as long as it is part of an imported library (or bound to a name, in interpreted languages), a RAM cost is paid. If there is no dynamic-loading feature, or if the module has to be run in order to import any function from it (I'm thinking of Python here), then an even higher RAM cost is paid. The RAM cost is never as high as the cost of actually running the function (because of dynamic allocation, though one could argue that the BB OS maps all reserved virtual memory to physical memory), and it is paid once for all instances of a program.
But we don't really care about the real world; this is a game mechanic to limit the amount and kind of scripts running, and it doesn't need to be completely realistic. As for the main topic, I am not sure whether running a script should be cheap even if the script uses no functions, or only 0GB functions. I got used to this constraint in the early game, and it goes away after upgrading memory a couple of times. My opinion is that such low-cost scripts would make the early BitNode less interesting and challenging. The lost challenge is the tradeoff between heavy convenience/automation decisions and trimming every RAM cost to fit the most instances of money-producing scripts into the starting 32/128GB.
> The lost challenge is the tradeoff between heavy convenience/automation decisions and trimming every RAM cost to fit the most instances of money-producing scripts into the starting 32/128GB.
We have different challenges, then. My challenge is that my monolithic scripts are physically lagging my computer. Heapsorts ain't free, especially in JavaScript. Excising some of the more intensive workloads into separate applications would mean they could have their own timers and scheduling, instead of being stuck inside an infinite loop which (I'm pretty sure) doesn't support setInterval.
ns.asleep may be helpful for you.
You can import individual functions from your massive import library, and will only incur the costs of those specific functions. (This allows you to have a single library, but also requires you to specifically name each function you want to use, which is fine if you only need a few, but can be a bother if you use a lot of them.)
Also, importing a script with no ns functions in it does not increase the size of the file importing it (my notns.js file does exactly this - it doesn't even HAVE a main or any references to ns in it - when I import it into other scripts, they do not increase in size).
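To make that concrete, here is a minimal sketch of such a zero-cost helper (the function name and file layout are hypothetical). Because it is pure JavaScript with no `ns` calls, Bitburner's static RAM parser adds nothing to any script that imports it:

```javascript
// Sketch of a hypothetical /notns.js helper library. In the game you would
// pull it in with:  import { formatMoney } from '/notns.js';
// Since it never touches ns, importing it adds 0GB to the importing script.
function formatMoney(n) {
  // Pure formatting helper: scale the number down and append a unit suffix.
  const units = ["", "k", "m", "b", "t"];
  let i = 0;
  while (Math.abs(n) >= 1000 && i < units.length - 1) {
    n /= 1000;
    i++;
  }
  return n.toFixed(2) + units[i];
}

console.log(formatMoney(1234567)); // → 1.23m
console.log(formatMoney(500));     // → 500.00
```

Only functions that call costed `ns` APIs (hack, grow, weaken, etc.) contribute to the RAM total; pure helpers like this are free to share across scripts.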
As far as "real-world" goes, the idea of threads taking up more RAM is also kind of weird. My assumption is that this is a "high performance" architecture, meaning that "RAM" is more like cache in real-world personal computers, i.e., it's the space where computations done by the processor are held. In essence, every script that is running is loaded onto the computer's "RAM drive" for faster access, which is why RAM is consumed as soon as you include the data (it's loaded to the RAM drive in case you need it).
But really, asleep is probably what you're looking for. Import the functions into a single script, then use asleep to run them on their own timers (I think that's how it's supposed to be used?).
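The "own timers inside one script" idea above can be sketched roughly like this. The task names and intervals are made up, and a small mock of `ns` stands in for the game object so the snippet runs outside Bitburner (in-game, `ns.asleep(ms)` returns a Promise that resolves after `ms` milliseconds, without the RAM cost that some `ns` calls carry):

```javascript
// Mock ns so this sketch runs outside the game; in Bitburner the real ns
// is passed into main for you.
const ns = {
  asleep: (ms) => new Promise((resolve) => setTimeout(resolve, ms)),
  print: (msg) => console.log(msg),
};

const log = [];

// Each task loops on its own interval. Awaiting asleep yields control back
// to the event loop, so the tasks interleave instead of blocking each other.
async function task(name, intervalMs, runs) {
  for (let i = 0; i < runs; i++) {
    log.push(name);
    await ns.asleep(intervalMs);
  }
}

export async function main(ns) {
  // One script, two independent timers: start both loops, wait for both.
  await Promise.all([task("fast", 10, 4), task("slow", 25, 2)]);
  ns.print(log.join(","));
}
```

This keeps everything in one process (one base cost) while still giving each workload its own schedule, which is roughly what setInterval would have bought you.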
+1 to this. Even an ns.sleep(1) will make a huge difference, though I find it needs to be a bit more to keep things smooth. A better way to look at it: you're effectively running an infinite loop. Most languages will bail out early based on depth (i.e., infinite recursion) before you lag out, but because most people's scripts only go maybe 10 functions deep before coming back out again (I assume), we have other logic somewhere to work out whether the user's script has caused things to become completely unresponsive.
As for the comment on RAM usage, I disagree. The idea is that you are being "charged" to start automating; if the issue is that you are doing lots of things in smaller scripts, just bundle them together into one. I would argue the idea is to encourage the user to be "clever" about how they distribute their tasks across scripts.
(Just be thankful there are any 0-cost functions at all.)
If it were up to me, everything would cost something and every call would be additive 😆
Alternatively, an arbitrarily low base cost, like .05GB.
Alternatively: extend this to scripts that exclusively use 0GB functions (print, tprint, read, write, etc.).
Modular design is a good pattern, and one we should encourage. As is, the game heavily penalizes modular design, since modules each carry a significant cost.