<s>System: Answer the query using the context provided. Be succinct.\n</s> <s>Human: query: What is the default batch size for map_batches? context: batch_size.Note The default batch size depends on your resource type. If you’re using CPUs,the default batch size is 4096. If you’re using GPUs, you must specify an explicit batch size.The actual size of the batch provided to fn may be smaller than batch_size if batch_size doesn’t evenly divide the block(s) sent to a given map task. Default batch_size is 1024 with “default”. compute – This argument is deprecated. Use concurrency argument.# Specify that each input batch should be of size 2. ds.map_batches(assert_batch, batch_size=2) Caution The default batch_size of 4096 may be too large for datasets with large rows (for example, tables with many columns or a collection of large images).Configuring Batch Size# Configure the size of the input batch that’s passed to __call__ by setting the batch_size argument for ds.map_batches()batch_size=64, shuffle=True) </s> <s>Assistant:
I'd like to build a RAG system with the Atom model. Is this the correct way to set up the prompt format? Thanks!
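For reference, here is a minimal sketch of assembling that template programmatically. The `build_prompt` helper name is my own, and the exact turn markers (`<s>System: …</s>`, `<s>Human: …</s>`, `<s>Assistant:`) and whitespace are taken from the prompt pasted above, not from a confirmed Atom specification — check the model card for the format Atom was actually trained on:

```python
def build_prompt(query: str, context: str) -> str:
    """Assemble a RAG prompt in the <s>Role: ...</s> turn format shown above.

    Assumption: Atom accepts a System turn followed by a Human turn and an
    open Assistant turn. Verify the turn markers and the trailing newline
    handling against the model's documentation before relying on this.
    """
    system = "Answer the query using the context provided. Be succinct.\n"
    human = f"query: {query} context: {context}"
    # Leave the Assistant turn open so the model generates the answer.
    return f"<s>System: {system}</s> <s>Human: {human} </s> <s>Assistant:"


prompt = build_prompt(
    "What is the default batch size for map_batches?",
    "If you're using CPUs, the default batch size is 4096. "
    "If you're using GPUs, you must specify an explicit batch size.",
)
print(prompt)
```

The resulting string can then be passed directly to the model's `generate` call (or wrapped in whatever inference API you use), with retrieved passages concatenated into the `context` argument.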