ServiceNow / N-BEATS

N-BEATS is a neural-network-based model for univariate time-series forecasting. It is a ServiceNow Research project that was started at Element AI.

Project dependencies may have API risk issues #17

Open PyDeps opened 1 year ago

Hi, in N-BEATS, inappropriate dependency version constraints can introduce risks.

Below are the dependencies and version constraints that the project currently uses:

gin-config
fire
matplotlib
numpy
pandas
patool
torch
tqdm
xlrd

The == version constraint risks dependency conflicts because the allowed scope is too strict. Constraints with no upper bound, or *, risk missing-API errors, because the latest version of a dependency may have removed some APIs.
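The trade-off between the two constraint styles can be illustrated with a minimal sketch (the helper names `parse` and `satisfies` below are illustrative only, not part of any real tool):

```python
def parse(v):
    """Turn a version string like '1.2.3' into a comparable tuple (1, 2, 3)."""
    return tuple(int(part) for part in v.split("."))

def satisfies(version, lower=None, upper=None, exact=None):
    """Check a version against an exact '==' pin or a [lower, upper] range."""
    v = parse(version)
    if exact is not None:            # strict '==' style: only one release allowed
        return v == parse(exact)
    if lower is not None and v < parse(lower):
        return False
    if upper is not None and v > parse(upper):
        return False
    return True

# '==0.23.4' rejects every other release, inviting conflicts with co-installed packages:
print(satisfies("0.25.0", exact="0.23.4"))                    # False
# A bounded range like '>=0.13.0,<=0.23.4' accepts the whole tested window:
print(satisfies("0.20.1", lower="0.13.0", upper="0.23.4"))    # True
# With no upper bound, releases that may have removed APIs are still accepted:
print(satisfies("2.1.0", lower="0.13.0"))                     # True
```

A real resolver compares versions per PEP 440, which this toy tuple comparison only approximates; it is enough to show why both extremes are risky.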

After further analysis of this project, the version constraint for the dependency pandas can be changed to >=0.13.0,<=0.23.4, and the constraint for tqdm can be changed to >=4.42.0,<=4.64.0.

These suggested modifications reduce the chance of dependency conflicts as much as possible, while admitting versions as recent as possible without triggering API errors in the project.
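Expressed as a requirements-file fragment, the two suggested bounds would read as follows (only the pandas and tqdm ranges come from the analysis above; bounds for the other dependencies are not proposed here):

```text
pandas>=0.13.0,<=0.23.4
tqdm>=4.42.0,<=4.64.0
```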

The current project invokes all of the following methods.

Methods called from pandas:
pandas.DataFrame.to_csv
pandas.read_csv
pandas.read_excel
pandas.concat
Methods called from tqdm:
itertools.product
tqdm.tqdm
All methods called across the project:
dates.s.datetime.strptime.strftime.map.list.np.unique.dump
layer
collections.OrderedDict
numpy.cos
pandas.concat
numpy.array
datasets.tourism.TourismDataset.download
i.permutations.np.where.raw_data.rstrip
os.stat
tqdm.tqdm
training_values.extend
group_by.forecast_file.summary_filter.experiment_path.os.path.join.glob.tqdm.file.file.pd.read_csv.pd.concat.set_index.groupby
self.build
join
x_mask.x.model.cpu.detach
cmd.write
experiments.model.interpretable
x_mask.x.model.cpu
torch.abs
self.snapshot
values.extend
models.nbeats.GenericBasis
min
os.fsync
numpy.sin
optimizer.state_dict
numpy.where
iter
summary.utils.group_values
enumerate
torch.no_grad
__loss_fn
URL_TEMPLATE.format
test.reset_index.reset_index
i.i.data.sum
model.to.parameters
numpy.mean
permutations.rstrip.split.np.array.astype.rstrip
os.chmod
patoolib.extract_archive
torch.device
value.str.replace
left_indices.append
torch.load
weighted_score.values
torch.save
super.__init__
datasets.tourism.TourismDataset.load
pandas.DataFrame.to_csv
experiments.trainer.trainer
snapshot_manager.register
float
common.sampler.TimeseriesSampler
dir_path.Path.mkdir
pandas.DataFrame
str
os.getenv
groups.extend
torch.mean
row_vector.split.np.array.astype
i.permutations.np.where.raw_data.rstrip.split
x.flip
models.nbeats.TrendBasis
numpy.random.randint
group.lower
datasets.traffic.TrafficDataset.download
common.metrics.mase
shutil.copy
torch.cuda.is_available
training_loss_fn.backward
metric
d.items
model.load_state_dict
datasets.m4.NAIVE2_FORECAST_FILE_PATH.pd.read_csv.values.astype
range
common.torch.losses.smape_2_loss
parsed_values.np.array.astype
numpy.array.dump
datetime.timedelta
i.timedelta.current_date.strftime
itertools.product
os.path.dirname
os.walk
time.time
torch.nn.ModuleList
optimizer.load_state_dict
row_vector.split
os.rename
common.http_utils.download
dataset.dump
urllib.request.urlretrieve
numpy.isnan
numpy.load
snapshot_manager.restore
Exception
self.basis_parameters
training_loss_fn
super
url.split
numpy.abs
snapshot_manager.enable_time_tracking
int
numpy.power
forecasts.extend
test_values.extend
datasets.m4.M4Dataset.download
pandas.read_csv.iterrows
common.sampler.TimeseriesSampler.last_insample_window
list
common.metrics.mape
success_flag.Path.touch
dict.items
pandas.read_csv.set_index
cfg.write
ids.extend
TourismDataset
model.to.to
datasets.traffic.TrafficDataset.load.split_by_date
file_path.os.path.dirname.pathlib.Path.mkdir
torch.nn.Linear
zip
numpy.concatenate
models.nbeats.NBeats
right_indices.append
fire.Fire
torch.optim.Adam
M3Dataset
logging.root.setLevel
raw_line.replace.strip.split
timeseries_dict.values.list.np.array.dump
collections.OrderedDict.values
gin.configurable
models.nbeats.NBeatsBlock
max
s.datetime.strptime.strftime
permutations.rstrip.split.np.array.astype
numpy.append
datasets.m3.M3Dataset.load
pandas.read_csv
len
pandas.read_excel
default_device
TrafficDataset
numpy.prod
test.iloc.astype
dict
datasets.electricity.ElectricityDataset.load
common.torch.ops.default_device
common.torch.ops.divide_no_nan
models.nbeats.SeasonalityBasis
numpy.zeros
datasets.electricity.ElectricityDataset.load.split_by_date
torch.load.items
torch.nn.Parameter
isinstance
torch.nn.utils.clip_grad_norm_
numpy.transpose
common.torch.snapshots.SnapshotManager
sys.stdout.flush
torch.load.keys
numpy.max
os.path.isdir
numpy.sum
input_mask.flip.flip
tempfile.NamedTemporaryFile
M4Dataset
torch.optim.Adam.zero_grad
datasets.m3.M3Dataset.download
numpy.round
train_meta.iloc.astype
numpy.unique
torch.tensor
dataclasses.dataclass
dates.np.array.dump
tempfile.NamedTemporaryFile.flush
self.instance
f.readlines
os.path.basename
open
permutations.rstrip.split
common.torch.losses.mape_loss
common.metrics.smape_1
forecast_file.summary_filter.experiment_path.os.path.join.glob.tqdm.file.file.pd.read_csv.pd.concat.set_index
raw_line.replace.strip
urllib.request.install_opener
sys.stdout.write
horizons.extend
group_by.group_by.forecast_file.summary_filter.experiment_path.os.path.join.glob.tqdm.file.file.pd.read_csv.pd.concat.set_index.groupby.median
round
group_count
logging.info
gin.parse_config_file
sorted
common.torch.losses.mase_loss
urllib.request.build_opener
torch.relu
common.http_utils.url_file_name
glob.glob
x_mask.x.model.cpu.detach.numpy
model.state_dict
round_all
torch.float32.array.t.tensor.to
format
datetime.datetime.strptime
os.path.join
numpy.save
datasets.m4.M4Dataset.load
self.summarize_groups.keys
dates.extend
torch.optim.Adam.step
self.summarize_groups
pathlib.Path
torch.einsum
instance_path.Path.mkdir
self.basis_function
numpy.ceil
map
datasets.traffic.TrafficDataset.load
os.path.isfile
model.to.train
block
experiments.model.generic
datasets.electricity.ElectricityDataset.download
experiments.trainer.trainer.eval
model
raw_line.replace
build_cache
splits.items
tempfile.NamedTemporaryFile.fileno
train.reset_index.reset_index
numpy.array.append
ElectricityDataset
common.metrics.smape_2
numpy.arange
next
numpy.sqrt
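A call inventory like the one above can be produced by statically walking a module's syntax tree. The sketch below is a hypothetical minimal version of such an extractor using only Python's standard `ast` module (the function name `called_names` is an assumption, not PyDeps' actual implementation):

```python
import ast

def called_names(source):
    """Collect the dotted name of every call expression in `source`."""
    def dotted(node):
        # Rebuild a dotted path like "pandas.read_csv" from nested attribute nodes.
        if isinstance(node, ast.Attribute):
            base = dotted(node.value)
            return f"{base}.{node.attr}" if base else node.attr
        if isinstance(node, ast.Name):
            return node.id
        return ""  # calls on subscripts, literals, etc. are skipped here

    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = dotted(node.func)
            if name:
                names.add(name)
    return names

code = "import pandas as pd\ndf = pd.read_csv('x.csv')\ndf.to_csv('y.csv')"
print(sorted(called_names(code)))  # ['df.to_csv', 'pd.read_csv']
```

Note that a purely syntactic pass reports receiver variables (`df.to_csv`) rather than resolved types (`pandas.DataFrame.to_csv`); mapping one to the other requires extra alias and type analysis, which is why some entries in the list above look like chained expressions.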

@developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.