I realized too late that this is another version of #58. This one is more serious though, so I'm just going to link to this there and close that one.
Another crazy, but probably effective and educational, idea would be to have the system automatically batch calls in the background. So you can call the `remote_load` method from anywhere whenever you want, but it will block for some number of milliseconds N while it waits for more calls, since they usually come in big groups. After N ms, it will make the call, parse out all the outputs, and return. This will make some calls take N ms longer than they should, which is probably bad, although it's the network, so who cares about time. This will guarantee that we only call the server at 1/N kHz, which is a plus, and it will require no changes to any other part of the app, which is also a plus. And I think I might learn something. I don't know what, but something.
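A rough sketch of what I mean, in Python just to show the mechanics even though I'm talking about Go below, and assuming the callers are on separate threads so their requests can actually pile up. The names, the 50 ms window, and `_fetch_prices_batched` are all made up for illustration:

```python
import threading
import time

BATCH_WINDOW_MS = 50   # the "N" above; just a guess at a reasonable value

_lock = threading.Lock()
_pending = set()       # symbols waiting to go out in the next batch
_results = {}          # symbol -> last fetched price

def _fetch_prices_batched(symbols):
    # Stand-in for the single request that asks for every pending
    # symbol at once; the real call depends on the API.
    return {s: 0.0 for s in symbols}

def remote_load(symbol):
    """Block for up to BATCH_WINDOW_MS while other calls pile up,
    then make one combined request and hand each caller its price."""
    with _lock:
        _pending.add(symbol)
    time.sleep(BATCH_WINDOW_MS / 1000)
    with _lock:
        if _pending:                    # first caller to wake does the fetch
            _results.update(_fetch_prices_batched(list(_pending)))
            _pending.clear()
        return _results[symbol]
```

Calls that land within the same window come back from one combined request; a caller that is all alone just pays the extra 50 ms.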
More specific thoughts: I'd write an application in Go because that's fun (I would also try Rust, but this is just not what Rust is for). The app can either be a server that holds everything in memory or just a program that writes everything to disk all the time (as I think about it, the server seems like it would be easier to communicate with and get answers back from, and handling a bunch of connections is what Go does best). Whenever you want a price, you just ping the program and wait for it to come back. The program automatically takes care of all the batching and returning.
This sounds fun. And stupid. I'm excited for this now.
My last idea won't actually work. The unit tests will just block every call for some amount of time and return, having asked the API for only one stock, so the number of API calls won't change. I need a solution that either manually combines the price checks (boring, tedious, and sometimes impossible, I suspect) or uses some asynchronous stuff so that they can be properly batched. I need to do more research on what Python supports with regard to asynchronous web calls. And even with all this I might still need to build the separate batching program, because the JavaScript calls are all separate. Or I could rebuild those and then not need a separate program, I think. Decisions, decisions...
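Here's a rough sketch of the asyncio flavor of this, under the assumption that the price checks become coroutines that run concurrently; `PriceBatcher` and `_fake_fetch_all` are made-up names, not anything from the app:

```python
import asyncio

class PriceBatcher:
    """Collects symbols from concurrent callers and resolves them all
    from one combined request per flush."""

    def __init__(self, fetch_all):
        self._fetch_all = fetch_all   # coroutine: [symbols] -> {symbol: price}
        self._waiting = {}            # symbol -> Future shared by its callers
        self._flush_scheduled = False

    async def get_price(self, symbol):
        loop = asyncio.get_running_loop()
        fut = self._waiting.setdefault(symbol, loop.create_future())
        if not self._flush_scheduled:
            self._flush_scheduled = True
            # Flush once everything queued on this pass has had a
            # chance to register its symbol.
            loop.call_soon(lambda: asyncio.ensure_future(self._flush()))
        return await fut

    async def _flush(self):
        batch, self._waiting = self._waiting, {}
        self._flush_scheduled = False
        prices = await self._fetch_all(list(batch))
        for symbol, fut in batch.items():
            fut.set_result(prices[symbol])

async def _fake_fetch_all(symbols):
    # Stand-in for one real HTTP request covering every symbol.
    return {s: 1.0 for s in symbols}

async def main():
    batcher = PriceBatcher(_fake_fetch_all)
    symbols = ["AAPL", "MSFT", "GOOG"]
    prices = await asyncio.gather(*(batcher.get_price(s) for s in symbols))
    print(dict(zip(symbols, prices)))

asyncio.run(main())
```

The nice part is that the three `get_price` calls in `main` come back from a single `_fetch_all`, which is the whole point.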
Upon further research, I could use either `ThreadPoolExecutor`s or `asyncio` or any of the other options to do this. `ThreadPoolExecutor`s will probably take less refactoring, which is as good a reason as any to use them. Nonetheless, I think I will still need some batching middleware to sit in the middle and intercept everything, which isn't the end of the world. I want to make sure I've thought this all the way through before I do anything, though, because there will be a lot going on afterwards and I'd be frustrated to waste all the effort. But I think if I change more things to use `ThreadPoolExecutor`s (assuming they work how I expect them to) and write the middleware, I'll be in good shape. I think.
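For concreteness, this is roughly the shape I'm picturing, with the middleware reduced to a stub so the sketch stands on its own (the names are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

def remote_load(symbol):
    # In the real app this would be the batching middleware sketched
    # above: block briefly, pile up symbols, make one combined request.
    return 0.0  # stub so the sketch runs on its own

def load_portfolio(symbols):
    # Submitting every lookup at once means the calls overlap in time,
    # which is what gives the middleware something to merge.
    with ThreadPoolExecutor(max_workers=len(symbols)) as pool:
        return dict(zip(symbols, pool.map(remote_load, symbols)))

print(load_portfolio(["AAPL", "MSFT", "GOOG"]))
```

The catch is that `max_workers` has to be at least as big as the biggest batch I want, which is exactly the worry below.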
If, however, `ThreadPoolExecutor`s never stop a running thread to start another one, then only as many price checks as there are worker threads can be in flight at once: I'll either be limited to a small, finite number of requests per batched API call, or I'll have lots and lots of threads. But one of those things might not be true. We shall find out...
Or you could just use this https://iextrading.com/developer/docs/#getting-started
I just changed to a different API. Maybe I'll do this someday, but at this point it doesn't really make sense other than for learning, and I've got plenty to learn from the other parts of this app that really need to get done.
The new stock API (alphavantage.co) works pretty well, but it's slow and really sensitive to high call volume. I think my old approach of just hitting the API a million times whenever I felt like it won't work going forward. For testing, I can probably just sleep and call it a day until I find a different approach, but for production I think I need to re-architect how the system asks for stocks. Probably keep the async call, but have it ask for all the stocks at once. Can't be that hard. Famous last words...
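Something like this is what I have in mind, using aiohttp as one possible way to make the async call. The `BATCH_QUOTES` function name and `symbols` parameter are guesses, not something I've confirmed the API actually supports:

```python
import asyncio
import aiohttp

API_KEY = "demo"  # placeholder
BASE_URL = "https://www.alphavantage.co/query"

async def load_all_prices(symbols):
    # One request for the whole list instead of one request per symbol.
    # The function name and "symbols" parameter are hypothetical; the
    # real batch endpoint (if any) needs to be checked against the docs.
    params = {
        "function": "BATCH_QUOTES",        # hypothetical
        "symbols": ",".join(symbols),
        "apikey": API_KEY,
    }
    async with aiohttp.ClientSession() as session:
        async with session.get(BASE_URL, params=params) as resp:
            return await resp.json()

# asyncio.run(load_all_prices(["AAPL", "MSFT", "GOOG"]))
```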