Once The Stack v1.2 is done (#24), assign a weight to each data source (PL and NL).
Some PLs like HTML and CSS should probably be down-sampled.
On the other hand, we might want to upsample some low-resource PLs.
For the final model, the goal would be to cap each data source at, for example, 5 epochs maximum, and check that every source stays under that limit. An estimated 600B training tokens can be used as the budget for this check.
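A minimal sketch of that sanity check, assuming the final weighting is expressed as per-source sampling probabilities. The token counts and weights below are placeholders for illustration, not actual Stack v1.2 statistics.

```python
# Hypothetical check: given sampling weights and a total training-token budget,
# compute how many epochs each source would see and flag any that exceed the cap.
# All numbers below are placeholder assumptions, not real Stack v1.2 figures.

TOTAL_TRAINING_TOKENS = 600e9  # estimated training budget
MAX_EPOCHS = 5                 # example per-source epoch cap

# tokens available per source (placeholder values)
source_tokens = {
    "python": 60e9,
    "html": 120e9,
    "css": 30e9,
    "nl_docs": 50e9,
    "low_resource_pl": 2e9,
}

# proposed sampling weights (should sum to 1); up-/down-sampling is encoded here
weights = {
    "python": 0.35,
    "html": 0.20,
    "css": 0.05,
    "nl_docs": 0.25,
    "low_resource_pl": 0.15,
}

for source, available in source_tokens.items():
    sampled = TOTAL_TRAINING_TOKENS * weights[source]  # tokens drawn from this source
    epochs = sampled / available                        # passes over the source's data
    status = "OK" if epochs <= MAX_EPOCHS else "OVER LIMIT"
    print(f"{source:>16}: {epochs:.2f} epochs  [{status}]")
```

With these placeholder numbers, a heavily upsampled low-resource PL can easily blow past the 5-epoch cap, which is exactly the case the check is meant to catch.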