zcakhaa / DeepLOB-Deep-Convolutional-Neural-Networks-for-Limit-Order-Books

This Jupyter notebook demonstrates our recent work, "DeepLOB: Deep Convolutional Neural Networks for Limit Order Books", published in IEEE Transactions on Signal Processing. We use the FI-2010 dataset and show how the model architecture is constructed. FI-2010 is publicly available, and interested readers can check out the original paper.

How can you train 6m of LSE data with the fit method? #4

Closed willinglion closed 3 years ago

willinglion commented 3 years ago

Hi, deeplob.fit(trainX_CNN, trainY_CNN, epochs=200, batch_size=64, verbose=2, validation_data=(testX_CNN, testY_CNN)) is the only training call I can find in the code. In the paper, you use 6m of LSE data for training and 3m for validation, but I cannot train on them with fit; the data is too large to hold in memory.

zcakhaa commented 3 years ago

It is usually a memory problem. You just need to split your data into chunks, train your model on one chunk at a time, and keep doing that until all chunks are used.
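For concreteness, a minimal sketch of that loop, assuming the training set has been pre-split into hypothetical files train_chunk_0.npz, train_chunk_1.npz, ... (each holding arrays "X" and "Y" small enough to fit in memory) and that deeplob is the compiled model from the notebook:

```python
import numpy as np

n_chunks = 10  # hypothetical: however many files the data was split into

# One pass over the training set, one chunk at a time. The model's weights
# are updated in place, so each fit() call continues from the previous one.
for i in range(n_chunks):
    chunk = np.load(f"train_chunk_{i}.npz")  # hypothetical file name
    deeplob.fit(chunk["X"], chunk["Y"], epochs=1, batch_size=64, verbose=2)
```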


willinglion commented 3 years ago

Do you only use fit(), and not fit_generator()? Do the chunks include training data and validation data together?

zcakhaa commented 3 years ago

You don't necessarily need to use fit_generator(). You can validate once you have trained your model on the full training data for one epoch. Essentially, you just split the training data into small parts, then randomly load each part and train on it.
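A sketch of that full training loop, under the same assumptions as above (hypothetical train_chunk_*.npz files, a separate hypothetical val.npz holding the validation arrays, and a deeplob model compiled with an accuracy metric as in the notebook):

```python
import random
import numpy as np

n_chunks, n_epochs = 10, 200  # hypothetical settings

val = np.load("val.npz")  # validation data is kept out of the training chunks

for epoch in range(n_epochs):
    order = list(range(n_chunks))
    random.shuffle(order)  # visit chunks in a random order each epoch
    for i in order:
        chunk = np.load(f"train_chunk_{i}.npz")
        deeplob.fit(chunk["X"], chunk["Y"], epochs=1, batch_size=64, verbose=0)
    # Validate once per epoch, after a full pass over all training chunks.
    loss, acc = deeplob.evaluate(val["X"], val["Y"], verbose=0)
    print(f"epoch {epoch}: val_loss={loss:.4f} val_acc={acc:.4f}")
```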


willinglion commented 3 years ago

Thank you.