Netflix / vmaf

Perceptual video quality assessment based on multi-method fusion.

Open Source or Open Knowledge? #153

Closed · bernabe123 closed this issue 5 years ago

bernabe123 commented 6 years ago

Hi,

I am still very confused about this open source initiative, which is to be applauded, but there is no technical documentation about it.

In general, you would expect a truly open source initiative to include the technical details of the project. I don't see that here, and I don't see tech reports of the perceptual studies. We all know about data fusion and SVMs, so why not give the details?

The threads seem to be mostly about software engineering: whether the code compiles here or there, or error reports.

Do you have plans to release any kind of technical documentation? Otherwise I don't really consider it open source technology.

People debate whether a VMAF score of 97 is better than 95, when there are statistical uncertainties that are not at all easy to pin down.

So my request is: can you be more transparent? Are you ever going to release a 4K or 8K version? Even for your mobile version, you are unclear about which ITU protocols you are following.

Best,

yuhjay commented 6 years ago

Hi bernabe123,

First of all, it feels impolite to address a person by a codename, and I am sorry if my following words sound harsh, but I truly intend no harm.

For "Knowledge", everyone may have different definitions. If you are an academic researcher, there are related academic papers about VMAF are publicly accessible. Please feel free to cite the following works if you are going to publish related work.

"Challenges in cloud based ingest and encoding for high quality streaming media", A Aaron, Z Li, M Manohara, JY Lin, ECH Wu, CCJ Kuo "A fusion-based video quality assessment (FVQA) index", JY Lin, TJ Liu, ECH Wu, CCJ Kuo "MCL-V: A streaming video quality assessment database", JY Lin, R Song, CH Wu, TJ Liu, H Wang, CCJ Kuo "EVQA: An ensemble-learning-based video quality assessment index", JY Lin, CH Wu, I Katsavounidis, Z Li, A Aaron, CCJ Kuo

If you are in industry, you may want to follow Netflix's blog. I don't think Netflix wants to hide anything, but GitHub is usually not a good place for sharing technical details; you don't want to make your README so huge that it scares people away.

As for an ITU protocol for video quality assessment, the latest P.910 was published in 2008, and it is not even designed for current mobile phones. I would guess it is difficult for them to follow an appropriate ITU protocol when conducting the subjective test. However, I also hope @li-zhi will post an update on the experimental settings.

Anyway, the software discussion here is very important; I have seen many kind people contribute to this project because of those threads. If you want to start open discussions, the Wikipedia page https://en.wikipedia.org/wiki/Video_Multimethod_Assessment_Fusion might be a place to start.

Sincerely, Joe

li-zhi commented 6 years ago

@bernabe123 thanks for the great comment. We have definitely not been diligent enough about documenting details and keeping things up to date. As @yuhjay pointed out, there have been some materials here and there, but we haven't kept a central repo for everyone's easy access. I'll discuss with colleagues what the best way forward is.

But VMAF has also been evolving ever since its release, and I expect it will continue to evolve over the next couple of years. This adds another layer of complexity.

We'll be discussing some further development of VMAF, including a new 4K model, at the upcoming VQEG meeting in Madrid. If you are attending, we can discuss further there.

bernabe123 commented 6 years ago

Hi guys, it is not my intention to hide my identity; apparently my name was taken, so I kept trying variants until I found one close to mine. This is my LinkedIn profile:

https://www.linkedin.com/in/manuel-tiglio-a13b1b15/

We are based in San Diego, but I spend a lot of my time in the Bay Area. Li, would it be possible to set up a meeting? (I am assuming you are based in the Bay Area.)

Joe, I have no reason or intention to hide my identity. I just kept adding numbers to my nickname until I found one that wasn't taken. That's all.

Are you planning any meeting in the Bay Area, even an informal one? If not, do you hold Zoom calls, or could we set one up? My intention here is to help the whole community.

Manuel Tiglio. Email: tiglio@fastechmedia.com. Cell: 202 412-4088

Cheers-- Manuel

li-zhi commented 6 years ago

Hi all,

I've created a page to track references to VMAF. I hope this helps share knowledge of VMAF more easily. Let me know if you think anything is missing.

https://github.com/Netflix/vmaf/blob/master/resource/doc/references.md

Zhi

mht13 commented 6 years ago

Joe,

First, the codename was cleared up months ago - maybe you missed it.

I am not looking for general material on data fusion and basic machine learning, or for tech blogs, but for a technical paper.

What I am assuming is that VMAF uses data fusion to combine several computational metrics plus perceptual scores. Using which ITU protocol? If you look at the ITU references, they seem a rough average across disparate preferred viewing distances. And if you read the papers, some of them deal with a single observer, with images rather than video, and so on. Forget about statistics, confidence intervals, and anything else that would make VMAF a publishable paper in a serious, peer-reviewed journal.
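For concreteness, here is a minimal sketch of the fusion idea described above: regress subjective scores onto elementary quality metrics with a support-vector regressor. The feature names (VIF, DLM, motion) follow the published papers; the data is a random placeholder, not Netflix's training set or trained model.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.random((200, 3))          # per-clip features: [vif, dlm, motion]
y = 20 + 60 * X.mean(axis=1)      # stand-in for mean opinion scores (MOS)

scaler = StandardScaler().fit(X)  # normalize features before regression
model = NuSVR(kernel="rbf").fit(scaler.transform(X), y)

print(model.predict(scaler.transform(X[:5])))  # fused quality predictions
```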

Since 2016, I believe everything has been refactoring by the community and by Zhi (the only person working on this at Netflix?). No algorithmic changes. No technical explanations (please don't bring up tech blogs). Two years now.

Sure, I have to grant you the mobile model and the new 4K one. But details? Nothing. If VMAF is a truly open source project, everything should be documented, including the algorithms.

Some “experts” insist on a high VMAF score, say 97, when at that point everything is statistical error. Joe: have you tried to compute VMAF between a video and itself? You will see what I mean by statistical errors (which come from the data fusion and/or the perceptual scores).
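For anyone who wants to try the self-comparison experiment, a minimal sketch using FFmpeg's libvmaf filter follows (this assumes an FFmpeg build with libvmaf on the PATH; the file name is a placeholder):

```python
import subprocess

def vmaf_self_score(path: str) -> None:
    # libvmaf takes the distorted stream first and the reference second;
    # here they are the same file, so any deviation from a perfect score
    # reflects the model itself rather than coding artifacts.
    subprocess.run(
        ["ffmpeg", "-i", path, "-i", path, "-lavfi", "libvmaf", "-f", "null", "-"],
        check=True,
    )

vmaf_self_score("clip.mp4")
```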

Come on, people, write like scientists. I suspect (and hope) that many of you have PhDs.

Best,

Manuel

christosbampis commented 5 years ago

Hi Manuel,

With regard to your point on confidence intervals for VMAF, there is some recent work which you can learn more about here: https://github.com/Netflix/vmaf/blob/master/resource/doc/VQEG_SAM_2018_023_VMAF_Variability.pdf and here: https://github.com/Netflix/vmaf/blob/master/resource/doc/VQEG_SAM_2018_111_AnalysisToolsInVMAF.pdf
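For intuition, here is a generic sketch of the bootstrap idea behind that work: refit the fusion model on resampled training data and report the spread of the resulting predictions as a confidence interval. The data and model here are placeholders; the actual procedure and models are described in the PDFs above.

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.random((200, 3))          # placeholder elementary-metric features
y = 20 + 60 * X.mean(axis=1)      # placeholder subjective scores
x_test = X[:1]                    # one clip to score

preds = []
for _ in range(20):               # one model per bootstrap resample
    idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
    preds.append(NuSVR().fit(X[idx], y[idx]).predict(x_test)[0])

lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"score = {np.mean(preds):.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```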

Since this issue has not been active lately, it could be closed. However, I will leave it open in case we want to revisit this subject later, or find other resources to bring up in a comment.

Best,

Christos