spotDL / spotify-downloader

Download your Spotify playlists and songs along with album art and metadata (from YouTube if a match is found).
https://spotdl.readthedocs.io/en/latest/
MIT License
17.18k stars · 1.58k forks

Fix tests again #349

Closed: linusg closed this issue 5 years ago

linusg commented 6 years ago

What is the purpose of your issue?

Description

Any chance we can (finally) make the tests more stable? It's a bummer to see them fail because some YouTube video search didn't end up as expected; it makes it hard to quickly notice genuinely broken builds!

E.g. the latest: https://travis-ci.org/ritiek/spotify-downloader/jobs/426367358

=================================== FAILURES ===================================
____________ TestYouTubeTitle.test_single_download_with_youtube_api ____________
self = <test_without_metadata.TestYouTubeTitle object at 0x7fc115278c18>
    def test_single_download_with_youtube_api(self):
        global content
        global title
        expect_title = "Tony's Videos VERY SHORT VIDEO 28.10.2016"
        key = 'AIzaSyAnItl3udec-Q1d5bkjKJGL-RgrKO_vU90'
        const.args.youtube_api_key = key
        youtube_tools.set_api_key()
        content = youtube_tools.go_pafy(raw_song, metadata)
        title = youtube_tools.get_youtube_title(content)
>       assert title == expect_title
E       assert 'Very short video' == "Tony's Videos VERY SHORT VIDEO 28.10.2016"
E         - Very short video
E         + Tony's Videos VERY SHORT VIDEO 28.10.2016
test/test_without_metadata.py:98: AssertionError
----------------------------- Captured stderr call -----------------------------
DEBUG: query: {'maxResults': 50, 'part': 'snippet', 'type': 'video', 'q': "Tony's Videos VERY SHORT VIDEO 28.10.2016"}
DEBUG: query_results: {'maxResults': 50, 'part': 'contentDetails,snippet,statistics', 'id': 'j0VwlUkxJmw,fK4SLPmxeY4,OE4dDJw5wAI,uRasm7397MM,plCGs8PYreo,Twkb_jXEPuU,sulRwNHbjcc,5USR1Omo7f0,AkDye53pE0M,XV4LCjRERlM,t-1vaOy8oPU,ouWRtZhVQSU,f1GUj4dKYc0,mrOPFt9E0eQ,YXeLqhkBjyA,FDL4C8oiJJg,F2gEzNbPDWA,9JISEn8rXis,B2lOBiN_RD8,qBNmY7S0BG8,cpoj2thH7WI,TbUIaV6qm1U,FmWUx7Uf6so,ghhOWJlKwHo,WgOTH0b2HDc,3H2EUKPZkxA,_mogWYk6ivE,nD4JmSv_rGo,roaIAdnGEJQ,mmcztns2a3w,J8HdWBKe4eQ,N2QZ6eo8Yfs,VqxLe7dbR14,rMPmpQBeESg,SzwpRcK5xfY,zsVF9Y4P8ho,thXtwk2_Jbg,vICQsxfjw5U,UQBnBFvxo4M,NgO3Rr9o-Hk,_yHtPPYyoBM,uFqO7A89DAY,QsEBPWeSOJ0,ZCzXDp_Mtug,c9AcYskh_0U,ZNxRXQRg1JU,YY2l--Vje7o,aBbemsM9d28,2--NFvaVgL4,0zbPVwnyacg'}
DEBUG: Since no metadata found on Spotify, going with the first result
===================== 1 failed, 52 passed in 78.21 seconds =====================
ritiek commented 6 years ago

Omg, this is so weird. Just a few moments ago, all builds were passing without problems. I have no idea how to make these tests more stable while actually keeping them in.

For now, I am going to keep re-running the tests on Travis until they pass lol.

EDIT: They finally did pass.

linusg commented 6 years ago

I hope this is temporary: https://travis-ci.org/ritiek/spotify-downloader/jobs/439046788#L575

ritiek commented 6 years ago

Yep, this seems to be an issue with travis-ci itself.

vn-ki commented 6 years ago

I have no idea how to make these tests more stable while actually keeping them in.

The rule of thumb with tests (unit testing) is that when you have to make a network request (or something similar that you can't reliably control), you should mock it.

Mocking in our case would mean handing the function the HTML it requires, instead of fetching it over the network.

Mocking the tests will ensure that they won't fail when YouTube changes something. But the problem with mocked tests is that they won't fail when YouTube changes something. (See what I did there :smile_cat:)

So if we want our CI to not fail occasionally, we can mock the tests. If we want to catch when YouTube actually changes something, we have to live with these occasional failures.

What are your thoughts on this?

NOTE: I initially thought mocking tests were a joke. But I have grown to respect them more nowadays.
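For readers unfamiliar with the idea, here is a minimal sketch of the kind of mocking described above, using Python's standard `unittest.mock`. All names (`YouTubeClient`, `get_first_title`) are illustrative stand-ins, not spotdl's actual API: the point is that the network-facing object is replaced by a fake, so the test exercises only our own logic.

```python
from unittest import mock


class YouTubeClient:
    """Thin wrapper around the network; in tests it gets replaced by a mock."""

    def search(self, query):
        # In real code this would hit the YouTube API over the network.
        raise RuntimeError("real network call; never run in tests")


def get_first_title(client, query):
    """The logic we actually want to test: take the first result's title."""
    results = client.search(query)
    return results[0]["title"].strip()


# In a test, the client is a Mock with a canned response, so no network
# request happens and YouTube's real results can't break the build.
fake = mock.Mock()
fake.search.return_value = [{"title": "  Very short video  "}]
print(get_first_title(fake, "tony"))  # prints "Very short video"
```

The trade-off is exactly the one described above: this test stays green even if the real YouTube search starts returning something else.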

linusg commented 6 years ago

The rule of thumb with tests (unit testing) is that when you have to make a network request (or something similar that you can't reliably control), you should mock it.

That sounds like a neat idea indeed!

So if we want our CI to not fail occasionally, we can mock the tests. If we want to catch when YouTube actually changes something, we have to live with these occasional failures.

I'd prefer to see the CI passing when our code changes are correct, not failing because YouTube changed something - which we would notice quickly anyway (someone opens an issue, or we run into it ourselves).

So the way I see it, the tests should indicate whether our code is correct.

ritiek commented 5 years ago

I monkeypatched them in #448.
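For reference, a hedged sketch of what such monkeypatching can look like with pytest's `monkeypatch` fixture. The `youtube_tools`/`go_pafy` names mirror those in the traceback above, but this is an illustrative stand-in (a `SimpleNamespace`), not the actual code from the PR:

```python
import types

# Stand-in for spotdl's real youtube_tools module (illustrative only).
youtube_tools = types.SimpleNamespace()


def _real_go_pafy(raw_song, meta_tags=None):
    # The real function would query YouTube over the network.
    raise RuntimeError("would query YouTube over the network")


youtube_tools.go_pafy = _real_go_pafy
youtube_tools.get_youtube_title = lambda content: content.title


class FakeContent:
    """Minimal stand-in for a pafy video object with a fixed title."""
    title = "Tony's Videos VERY SHORT VIDEO 28.10.2016"


def test_single_download_title(monkeypatch):
    # pytest injects the monkeypatch fixture and undoes the setattr after
    # the test, so the fake never leaks into other tests.
    monkeypatch.setattr(youtube_tools, "go_pafy",
                        lambda raw_song, meta_tags=None: FakeContent())
    content = youtube_tools.go_pafy("some spotify track")
    assert youtube_tools.get_youtube_title(content) == FakeContent.title
```

This gives deterministic CI: the assertion compares against the canned `FakeContent.title` rather than whatever YouTube happens to return that day.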