hongzimao / pensieve

Neural Adaptive Video Streaming with Pensieve (SIGCOMM '17)
http://web.mit.edu/pensieve/
MIT License

Pensieve and real experiments. How to use my own video sequences. #36


AnatoliyZabrovskiy commented 6 years ago

Hi!

I have integrated the Pensieve ABR algorithm (pensieve/real_exp/) into our AdViSE: Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players (https://dl.acm.org/citation.cfm?id=3083221). I am currently using your pretrained model with the linear QoE (NN_MODEL = '../rl_server/results/pretrain_linear_reward.ckpt') and I get good results. Do you have any information about this model: how was it created, and for which network conditions was it designed?

How can I conduct tests with other DASH content? What should I change in rl_server_no_training.py? I want to run some experiments with the Big Buck Bunny sequence, which includes several representations.

Thank you!

hongzimao commented 6 years ago

For the evaluation and training of the model, please refer to sections 4.4 and 5.1 of the paper (https://dl.acm.org/citation.cfm?id=3098843).

You may want to retrain the learning agent with your video (since the video length might be different). Instructions for training are in https://github.com/hongzimao/pensieve/blob/master/sim/README.md. I think you also need to replace https://github.com/hongzimao/pensieve/tree/master/video_server with your own Big Buck Bunny video.
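In case it helps, the simulator's environment reads one chunk-size file per bitrate level (video_size_0, video_size_1, ..., one chunk size in bytes per line, via the VIDEO_SIZE_FILE path in sim/env.py). A script along these lines could generate those files from segmented Big Buck Bunny content; the directory layout and segment names below are assumptions, so adapt them to your encode:

```python
# Sketch only: write per-bitrate chunk-size files in the format the
# simulator expects (one chunk size in bytes per line). Segment paths
# and names below are hypothetical; adjust to your DASH packaging.
import os

BITRATE_LEVELS = 14   # representations in the Big Buck Bunny encode
TOTAL_CHUNKS = 48     # segments per representation (example value)

for bitrate in range(BITRATE_LEVELS):
    with open('video_size_{}'.format(bitrate), 'w') as out:
        for chunk in range(1, TOTAL_CHUNKS + 1):
            # hypothetical layout: video1/ ... video14/, lowest bitrate first
            seg = 'video{}/seg_{}.m4s'.format(bitrate + 1, chunk)
            out.write('{}\n'.format(os.path.getsize(seg)))
```

The stock setup ships six such files, so a 14-representation video would need fourteen of them (with the BITRATE_LEVELS constant in env.py raised to match).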

Hope this helps.

AnatoliyZabrovskiy commented 6 years ago

Thanks for your reply. Using your video, I can train the model, but I cannot train it using my own video (Big Buck Bunny with 14 representations). I change the following variables in ./sim/multi_agent.py:

```python
# default
A_DIM = 6
VIDEO_BIT_RATE = [300, 750, 1200, 1850, 2850, 4300]  # Kbps
HD_REWARD = [1, 2, 3, 12, 15, 20]

# Big Buck Bunny
A_DIM = 14
VIDEO_BIT_RATE = [100, 150, 200, 300, 500, 800, 1200, 1800, 2400, 2500, 2995, 3000, 4500, 8000]  # Kbps
HD_REWARD = [1, 2, 3, 12, 15, 20, 23, 26, 30, 33, 36, 39, 42, 45]
```
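(Aside: a quick length check makes the matching requirement explicit; this is just a sketch reusing the variable names above.)

```python
# Sketch: every per-bitrate list must provide exactly A_DIM entries,
# one per action, or the state assembly in multi_agent.py misaligns.
assert len(VIDEO_BIT_RATE) == A_DIM == 14
assert len(HD_REWARD) == A_DIM
```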

With these changes, I get the following error:

```
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "multi_agent.py", line 291, in agent
    state[4, :A_DIM] = np.array(next_video_chunk_sizes) / M_IN_K / M_IN_K  # mega byte
ValueError: could not broadcast input array from shape (6) into shape (8)
```

Do I need to add or change something else? Thanks!

hongzimao commented 6 years ago

I think you need to change the input data shape in a3c.py as well. Sorry, we didn't define a global parameter for this.
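For anyone hitting the same wall: the action dimension is threaded from multi_agent.py into the networks roughly as below (a sketch based on the stock constructor names; verify the signatures in your checkout):

```python
# Sketch, assuming the stock layout of sim/multi_agent.py and sim/a3c.py;
# run from the sim/ directory so that a3c is importable.
import tensorflow as tf
import a3c

S_INFO = 6    # rows of the state matrix (bitrate, buffer, throughput, ...)
S_LEN = 8     # columns: how many past measurements each row keeps
A_DIM = 14    # one action per bitrate level of the new video

with tf.Session() as sess:
    # action_dim sets the size of the policy's softmax output, so it must
    # equal len(VIDEO_BIT_RATE); a3c.py's own A_DIM default must agree too.
    actor = a3c.ActorNetwork(sess,
                             state_dim=[S_INFO, S_LEN],
                             action_dim=A_DIM,
                             learning_rate=0.0001)
```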

AnatoliyZabrovskiy commented 6 years ago

I changed the A_DIM value in a3c.py from 6 to 14, but I still get the same error. Does something else need to be changed? Thanks!

hongzimao commented 6 years ago

The error message (in your earliest post) says that the number of actions differs from the number of bitrate levels of your video. Notice that we embed the chunk sizes in the state space (see the Design section of the paper). Can you make sure these shapes match? As a sanity check, you can set the bitrate levels to the same six as ours (by ignoring some of yours) and see if the code runs without error.
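For the record, the specific numbers in the traceback are consistent with this reading (assuming the stock S_INFO = 6 and S_LEN = 8): the state matrix has only 8 columns, so the slice state[4, :14] silently clips to 8 slots, while the environment, still loading the original six video_size_* files, hands back only 6 chunk sizes. A minimal standalone reproduction:

```python
# Reproduces the reported ValueError; values assume the stock
# S_INFO/S_LEN constants in sim/multi_agent.py.
import numpy as np

S_INFO, S_LEN, A_DIM = 6, 8, 14
state = np.zeros((S_INFO, S_LEN))
next_video_chunk_sizes = np.zeros(6)       # env still serves 6 bitrate levels
state[4, :A_DIM] = next_video_chunk_sizes  # slice clips to 8 columns, so:
# ValueError: could not broadcast input array from shape (6) into shape (8)
```

If this reading is right, matching the shapes means regenerating the simulator's video_size_* files for all 14 representations, and, since 14 > S_LEN, it also suggests widening the state row that holds the next chunk sizes (or keeping A_DIM <= S_LEN).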