Open wbeardall opened 1 year ago
As someone who needs repeatability I'd really like to see this merged. Do you have any insight into the performance implications? Clearly this should be faster than setting `workers_count=1` and slower than using the current unordered `ThreadPool`.
I would assume that any performance differences would only be noticeable for the first few items; once all queues are filled and model training becomes the bottleneck, there should be almost no difference compared to the unordered `ThreadPool`. If that is correct, then I think this should actually become the default.
The additional overhead appears to be negligible for most practical use cases where you are model-bottlenecked. @jrauch-pros, you are correct that there is a performance cost at initialization; see the benchmark below:
```python
import os
import shutil
import time

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

from petastorm import make_batch_reader
from petastorm.tests.test_common import create_test_scalar_dataset

sns.set()

tmp_pq = "/tmp/tmp.parquet"
url = "file://" + tmp_pq
file_counts = [2, 5, 10, 20]

results = []
for num_files in file_counts:
    if os.path.exists(tmp_pq):
        shutil.rmtree(tmp_pq)
    _ = create_test_scalar_dataset(url, max(file_counts), num_files=num_files,
                                   partition_by=['id'])
    for pool_type in ["thread", "orderedthread"]:
        # Time each row as it is yielded; the first interval captures reader
        # start-up cost, later intervals capture steady-state read latency.
        times = []
        with make_batch_reader(url, reader_pool_type=pool_type) as reader:
            times.append(time.time())
            for row in reader:
                times.append(time.time())
        times = np.asarray(times)
        durs = times[1:] - times[:-1]
        for row, d in enumerate(durs):
            results.append(dict(num_files=str(num_files), pool_type=pool_type,
                                row=row, time=d))
shutil.rmtree(tmp_pq)

results = pd.DataFrame(results)
f, ax = plt.subplots()
_ = sns.lineplot(results, x="row", y="time", hue="num_files",
                 style="pool_type", ax=ax)
plt.yscale('log')
plt.legend(loc='upper right')
f.savefig("ordered_thread_pool_performance.pdf")
```
This PR offers a solution for #551, where the standard `ThreadPool` implementation can return dataset pieces out of order.
### Contributions

- `OrderedThreadPool` implementation, which internally keeps track of results and the piece indexes returned by the ventilator, and only returns pieces in exact order. The only change to `WorkerThread` is that it returns indexed `OrderedVentilatedItemProcessedMessage` objects.
- Modified `make_reader` and `make_batch_reader` to allow for instantiation of `OrderedThreadPool` objects from string with `reader_pool_type="orderedthread"`.
- Extended the `make_reader` and `make_batch_reader` tests in `test_parquet_reader.py` to include the ordered option.
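The reordering idea in the first bullet can be sketched independently of petastorm: each worker result carries the index the ventilator assigned to its piece, and the consumer buffers any result that arrives ahead of the next expected index, releasing pieces strictly in index order. This is a simplified illustration, not the PR's actual implementation; `reorder_results` and the tagged-result tuples are hypothetical names for this sketch.

```python
def reorder_results(tagged_results):
    """Yield (index, value) pairs in index order, buffering any
    results that complete ahead of the next expected index."""
    pending = {}      # index -> value, held until its turn comes up
    next_index = 0
    for index, value in tagged_results:
        pending[index] = value
        # Drain the buffer while the next expected piece is available.
        while next_index in pending:
            yield next_index, pending.pop(next_index)
            next_index += 1

# Simulate workers finishing out of order: piece 2 completes first.
completed = [(2, "piece-2"), (0, "piece-0"), (1, "piece-1"), (3, "piece-3")]
ordered = [value for _, value in reorder_results(completed)]
print(ordered)  # ['piece-0', 'piece-1', 'piece-2', 'piece-3']
```

Note that the buffer only grows while pieces are missing, so memory overhead is bounded by how far ahead the fastest worker can run, which matches the intuition above that the cost is concentrated at start-up.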
### Worked Example

We provide a modified version of the minimal code example in #551, which can be used to verify the solution.