@lifrordi Hi, the DeepStack paper says that when computing the preflop strategy, the last 20 iterations query the flop network. That means the input tensor to the network has size [22100 * 10, 2001] (10 nodes). If the network has 7 layers as the paper describes, the required memory is larger than 8 GB and the speed is slower than the paper reports. Do you use suit isomorphism or some other technique to reduce the number of public flop boards from 22100 to a smaller value?
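
For context, here is a minimal Python sketch (my own, not from the DeepStack code) of what I mean by suit isomorphism: canonicalizing each flop by taking the lexicographically smallest board over all suit relabelings collapses the 22100 flops to 1755 equivalence classes, which would shrink the batch dimension accordingly.

```python
from itertools import combinations, permutations

RANKS = range(13)   # 2..A
SUITS = range(4)    # c, d, h, s
DECK = [(r, s) for r in RANKS for s in SUITS]

def canonical(flop):
    # Canonical representative of a flop under suit relabeling:
    # the lexicographically smallest sorted board over all 4! suit permutations.
    best = None
    for perm in permutations(SUITS):
        relabeled = tuple(sorted((r, perm[s]) for r, s in flop))
        if best is None or relabeled < best:
            best = relabeled
    return best

all_flops = list(combinations(DECK, 3))
canon = {canonical(f) for f in all_flops}
print(len(all_flops))  # 22100 raw flops
print(len(canon))      # 1755 suit-isomorphic flop classes
```

Is something like this (or another abstraction) used to keep the flop-network queries within memory and time budgets?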