oSumAtrIX / DownOnSpot

🎧 A Spotify music and playlist downloader working with free Spotify accounts written in Rust
https://osumatrix.me
GNU General Public License v3.0

Liked songs #56

Open AirOne01 opened 11 months ago

AirOne01 commented 11 months ago

Is your feature request related to a problem? Please describe.

AFAIK, it is not possible to download an account's liked songs in their entirety (edit: you can, but it requires the web/Electron client and creating a new playlist), as Spotify does not provide an option to share them. Would it be possible to have an option to download the liked songs?

Describe the solution you'd like

I see two solutions:

Additional context


Love your project, you are saving me a ton of time. I could help with a PR (not only for this issue) in about two weeks, when I'm back from vacation.

oSumAtrIX commented 11 months ago

I'd be up for collaborating. I currently have a big refactor/migration away from a deprecated dependency stashed locally to fix #53; it's unrelated to this issue, but I'd have to resolve merge conflicts after implementing this.

A new function could be added next to this one to fetch the liked songs and collect them into a Vec<Track>: https://github.com/oSumAtrIX/DownOnSpot/blob/main/src/spotify.rs#L116

Lastly, the CLI would need a way to be instructed to download the liked songs.
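
For illustration, here is a rough sketch of what such a function could look like. It talks to the Web API endpoint directly using reqwest (with the blocking and json features) and serde, rather than going through the crate's existing session handling; the Track struct below is a minimal stand-in for the crate's own type, and token acquisition is assumed to happen elsewhere:

use serde::Deserialize;

// Minimal stand-in for the crate's own Track type; only the fields we need.
#[derive(Debug, Deserialize)]
struct Track {
    id: String,
    name: String,
}

#[derive(Deserialize)]
struct SavedTrackItem {
    track: Track,
}

#[derive(Deserialize)]
struct SavedTracksPage {
    items: Vec<SavedTrackItem>,
    next: Option<String>,
}

/// Fetch all liked songs by following the `next` links, 50 tracks per page.
fn fetch_liked_tracks(token: &str) -> Result<Vec<Track>, reqwest::Error> {
    let client = reqwest::blocking::Client::new();
    let mut url = Some("https://api.spotify.com/v1/me/tracks?limit=50".to_string());
    let mut tracks = Vec::new();

    while let Some(next_url) = url {
        let page: SavedTracksPage = client
            .get(&next_url)
            .bearer_auth(token)
            .send()?
            .error_for_status()?
            .json()?;
        tracks.extend(page.items.into_iter().map(|item| item.track));
        url = page.next; // None once the last page is reached
    }

    Ok(tracks)
}

Following the `next` links keeps the requests sequential, but it guarantees every page is fetched exactly once regardless of how many liked songs the account has.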

alkeryn commented 2 months ago

I don't know if this has already been added, but I thought I could contribute. Here is the Spotify API reference: https://developer.spotify.com/documentation/web-api/reference/get-users-saved-tracks

Here is a basic Python example I wrote showing how to fetch the liked tracks; you can add whatever fields you want:

import requests
import json

# Your access token from Spotify (needs the user-library-read scope)
access_token = 'YOUR TOKEN'

def get_liked_tracks(access_token):
    headers = {
        'Authorization': f'Bearer {access_token}',
        'Content-Type': 'application/json',
    }

    # Spotify API endpoint for fetching liked tracks, 50 per page (the maximum)
    url = 'https://api.spotify.com/v1/me/tracks?limit=50'

    liked_tracks = []

    while url:
        response = requests.get(url, headers=headers)
        response.raise_for_status()  # fail loudly instead of silently returning None

        data = response.json()
        for item in data['items']:
            track = item['track']
            track_info = {
                'track_name': track['name'],
                # Collect all artists, not just the first one
                'artists': [artist['name'] for artist in track['artists']],
                'album': track['album']['name'],
                'added_at': item['added_at'],
                # external_urls is a dict; 'spotify' holds the shareable URL
                'url': track['external_urls']['spotify'],
            }
            liked_tracks.append(track_info)
            print(track_info)

        url = data.get('next')  # URL of the next page, or None when done

    return liked_tracks

def main():
    tracks = get_liked_tracks(access_token)
    with open('dump.json', 'w', encoding='utf-8') as f:
        json.dump(tracks, f, ensure_ascii=False, indent=4)

if __name__ == '__main__':
    main()

Also, you don't strictly need to follow the `next` URL: if you want to fetch in parallel, you could probably just issue one request per 50-track offset increment, as sketched below.
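
A rough sketch of that parallel variant in Rust, under the same assumptions as the earlier sketch (reqwest with the blocking and json features, serde_json, and a caller-supplied token): read `total` from the first page, then fan out one request per remaining 50-track offset.

use std::thread;

use serde_json::Value;

/// Fetch all liked songs by requesting every 50-track offset concurrently.
fn fetch_liked_parallel(token: &str) -> Result<Vec<Value>, reqwest::Error> {
    let client = reqwest::blocking::Client::new();

    // First page tells us the total number of saved tracks.
    let first: Value = client
        .get("https://api.spotify.com/v1/me/tracks?limit=50&offset=0")
        .bearer_auth(token)
        .send()?
        .error_for_status()?
        .json()?;
    let total = first["total"].as_u64().unwrap_or(0);

    // Keep the first page's items, then spawn one thread per remaining offset.
    let mut items: Vec<Value> = first["items"].as_array().cloned().unwrap_or_default();
    let handles: Vec<_> = (50..total)
        .step_by(50)
        .map(|offset| {
            let client = client.clone();
            let token = token.to_string();
            thread::spawn(move || -> Result<Vec<Value>, reqwest::Error> {
                let url = format!(
                    "https://api.spotify.com/v1/me/tracks?limit=50&offset={offset}"
                );
                let page: Value = client
                    .get(&url)
                    .bearer_auth(&token)
                    .send()?
                    .error_for_status()?
                    .json()?;
                Ok(page["items"].as_array().cloned().unwrap_or_default())
            })
        })
        .collect();

    // Joining the handles in order keeps the tracks in their original order.
    for handle in handles {
        items.extend(handle.join().expect("worker thread panicked")?);
    }
    Ok(items)
}

One thread per page is fine for a few hundred liked songs, but a very large library could trip Spotify's rate limiting (HTTP 429), so a bounded worker pool would be safer in practice.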