NVIDIA / vid2vid

PyTorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.

Info regarding the dataset used to train model for CityScapes #100

Open ayooshkathuria opened 5 years ago

ayooshkathuria commented 5 years ago

I have a few questions about the dataset that was used to train the provided Cityscapes model.

  1. I see Cityscapes has 3 demo sequences (Stuttgart 00, 01, 02), containing about 4k frames in total (599, 1100, 1200). Were these the training sequences?

  2. How many sequences were used for training? (It's 6 in the repo.)

  3. How many frames per video? (30?)
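For question 1, a quick way to double-check the per-sequence frame counts is to count the files on disk. This is a hedged sketch, not from the repo: it assumes the standard Cityscapes `leftImg8bit/demoVideo` layout, where frames are PNGs named like `stuttgart_00_000000_000001_leftImg8bit.png`, and the `demo_dir` path is a placeholder.

```python
import os
from collections import Counter

def count_frames(demo_dir):
    """Count PNG frames per demo sequence, grouping by the
    first two underscore-separated filename fields
    (e.g. "stuttgart_00"). Assumes the standard Cityscapes
    demoVideo naming convention."""
    counts = Counter()
    for name in sorted(os.listdir(demo_dir)):
        if name.endswith(".png"):
            seq = "_".join(name.split("_")[:2])
            counts[seq] += 1
    return dict(counts)

# Example (path is hypothetical):
# count_frames("leftImg8bit/demoVideo/stuttgart_00")
```

If the counts printed for the three Stuttgart sequences match 599, 1100, and 1200, that would confirm the totals in the question above.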