Closed: that-ben closed this 1 week ago
Infinite Mac normally does not make concurrent requests for disk chunks. In fact, it can't – requests are driven by what the emulated Mac is asking for, and it's blocked until a response is received.
The only exception is chunks that are prefetched - the disk definitions include them because they're needed during startup, and fetching them ahead of time is a performance optimization (example: https://github.com/mihaip/infinite-mac/blob/b1357d86d89073b8604f00191054ff0123d50b97/src/disks.ts#L255). But those only exist for built-in disks, so they're never used for archive.org or other URL-based disks. If you're generating prefetch lists for your custom server setup and running into problems, you may want to not prefetch anything.
You can see the sequential nature in action with something like https://infinitemac.org/run?cdrom=https://img.classicmacdemos.com/starcraft.dsk&machine=Power+Macintosh+9500&ram=32M&saved_hd=true - each chunk is requested only after the previous one finishes.
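The sequential behavior described above boils down to something like the following sketch (not Infinite Mac's actual code; `fetchChunk` is a hypothetical stand-in for the real chunk loader):

```typescript
// Sketch of the sequential chunk-request pattern: the next chunk is
// requested only after the previous one resolves, because the emulated
// Mac is blocked waiting on each read. fetchChunk is a hypothetical
// stand-in for whatever actually fetches a chunk over HTTP.
async function readChunksSequentially(
    fetchChunk: (index: number) => Promise<Uint8Array>,
    chunkCount: number
): Promise<Uint8Array[]> {
    const chunks: Uint8Array[] = [];
    for (let i = 0; i < chunkCount; i++) {
        // No parallelism: at most one request is in flight at a time.
        chunks.push(await fetchChunk(i));
    }
    return chunks;
}
```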
So I could simply add chunk #0 to the prefetch list and nothing else, then?
Yes. That's what I do for the generated disk definition for CD-ROMs: https://github.com/mihaip/infinite-mac/blob/b1357d86d89073b8604f00191054ff0123d50b97/src/emulator/emulator-worker-cdrom-disk.ts#L47
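In other words, a disk definition that prefetches only the first chunk might look roughly like this (field names follow the general shape of Infinite Mac's chunked-file specs but are illustrative, not the exact API):

```typescript
// Illustrative disk definition that prefetches only chunk 0, since
// that's all that's needed before the emulated Mac starts issuing its
// own sequential reads. Field names and values are examples, not the
// exact Infinite Mac schema.
const cdromSpec = {
    name: "Custom CD-ROM",
    baseUrl: "https://example.com/disk-chunks",
    totalSize: 650 * 1024 * 1024, // 650 MB image
    chunkSize: 256 * 1024, // 256 KB per chunk
    prefetchChunks: [0], // only the first chunk, nothing else
};
```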
OK, I'll try that. Thanks for replying! This still doesn't explain why archive.org errors out with a 500 when streaming ISOs, though. When I caught my IM implementation requesting 50 chunks all at once, I just assumed archive.org was seeing the same thing coming from IM.
Would it be possible to reduce the number of parallel connections Infinite Mac makes to the disk image server? I'm pretty sure this is why archive.org fails with a 500 error on ISO files. I tested on my own server and caught Infinite Mac establishing 50 individual connections at once, each requesting a file chunk over HTTP/1.1. This feels unnecessary: downloading 50 chunks in parallel won't go any faster than downloading them one at a time, and many web host firewalls now block large bursts of parallel connections because they look like abuse or a DDoS.
I feel like Infinite Mac should use some kind of queue that allows at most 4 connections at any given time, which would let more web hosts stream ISO files without denying access. I understand that HTTP/2 uses multiplexing (in other words, it reuses one established connection to request many chunks instead of opening a connection per chunk), but for those of us still on HTTP/1.1 it would be good to limit the number of parallel connections used to fetch chunks.
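The queue suggested above could be sketched like this (`ChunkFetchQueue` and `MAX_PARALLEL` are illustrative names, not part of the Infinite Mac codebase):

```typescript
// Sketch of a queue that caps concurrent chunk fetches at 4, as
// suggested above. Tasks beyond the cap wait until a slot frees up.
const MAX_PARALLEL = 4;

class ChunkFetchQueue {
    private active = 0;
    private waiting: Array<() => void> = [];

    async run<T>(task: () => Promise<T>): Promise<T> {
        // Wait for a free slot if the cap is reached.
        while (this.active >= MAX_PARALLEL) {
            await new Promise<void>((resolve) => this.waiting.push(resolve));
        }
        this.active++;
        try {
            return await task();
        } finally {
            this.active--;
            // Wake one waiter, if any.
            this.waiting.shift()?.();
        }
    }
}
```

Each chunk fetch would then go through `queue.run(() => fetch(chunkUrl))` instead of calling `fetch` directly, so no more than 4 HTTP/1.1 connections are ever open to the disk server at once.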