The current implementation always reads the same data from the historical stock data files and then randomizes it to build the training/testing sets. Formatting the data on every run is somewhat time-consuming, so it would be nice to format it once, save the result, and then simply load the saved data for training/testing.
Potential problems:
The labels for training/testing are created dynamically based on how far into the future we want the network to make predictions
The size of the data sets is assigned dynamically based on how large a network input we want