kylejginavan / youtube_it

An object-oriented Ruby wrapper for the YouTube GData API
http://groups.google.com/group/ruby-youtube-library

FeedParser exception handling #37

Closed tokland closed 12 years ago

tokland commented 12 years ago

While testing the gem I had a network problem, and I was surprised to see that videos_by kept "working", only returning empty objects. I checked the code and the problem seems to be here:

https://github.com/kylejginavan/youtube_it/blob/master/lib/youtube_it/parser.rb#L9

class FeedParser #:nodoc:
  def initialize(content)
    # Assume the argument is a URL and try to fetch it
    @content = open(content).read
  rescue OpenURI::HTTPError => e
    # Re-raise HTTP errors with the status code attached
    raise OpenURI::HTTPError.new(e.io.status[0], e)
  rescue
    # Any other failure: keep the argument as-is, treating it as raw XML
    @content = content
  end
  # ...
end

I don't understand the details of the first rescue; I guess there is some reason why it's important to re-raise it with different arguments. But the second rescue makes no sense to me: it assigns content (a URL, so why is it called content?) to @content (the expected response body). Why not let the method raise the exception and let the caller decide how to handle it?
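
To make the failure mode concrete (a hypothetical reproduction, not taken from the gem or the issue): with the network down, open typically raises something like SocketError, the bare rescue swallows it, and the URL string itself ends up being parsed as if it were the feed:

    # Hypothetical call with no network connectivity:
    parser = FeedParser.new("http://gdata.youtube.com/feeds/api/videos?q=ruby")
    # open(content).read raises (e.g. SocketError), the bare rescue assigns the
    # URL string to @content, and the downstream XML parsing quietly yields
    # empty objects instead of an error.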

chebyte commented 12 years ago

The thing is that @content can be either a URL or raw XML data, which is why there are two rescues. For example, if you pass the parser XML data instead of a URL, open(content).read raises an exception; that means the param is XML data rather than a URL, so execution falls to the last rescue, which assigns the XML content to @content. And if you pass a bad URL, the .read call hits the OpenURI::HTTPError rescue and re-raises the exception.
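
In other words, both of these calls are meant to work (illustrative only; the feed URL and XML here are made up):

    # Path 1: a URL string -- open(content).read succeeds, @content is the response body
    FeedParser.new("http://gdata.youtube.com/feeds/api/videos?q=ruby")

    # Path 2: raw XML -- open() fails (the string is neither a URL nor a readable
    # file), the bare rescue fires, and @content is just the string passed in
    FeedParser.new("<feed xmlns='http://www.w3.org/2005/Atom'><entry/></feed>")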

tokland commented 12 years ago

if you pass the parser XML data instead of a URL, open(content).read raises an exception; that means the param is XML data rather than a URL, so execution falls to the last rescue, which assigns the XML content to @content

I understand, but IMHO this is a very fragile approach; in the end these catch-all rescues mask a myriad of problems. A possible solution is to check whether content looks like a URL and act accordingly (a simple way to detect valid URLs: url =~ URI::regexp).
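
A minimal sketch of what that could look like (my own assumption, not the actual patch; the scheme whitelist is arbitrary):

    require 'open-uri'
    require 'uri'

    class FeedParser #:nodoc:
      def initialize(content)
        if content =~ URI::regexp(%w[http https])
          # Looks like a URL: fetch it, and let network errors propagate
          @content = open(content).read
        else
          # Anything else is treated as raw XML passed in directly
          @content = content
        end
      rescue OpenURI::HTTPError => e
        raise OpenURI::HTTPError.new(e.io.status[0], e)
      end
    end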

chebyte commented 12 years ago

Yeah, I agree, I was just doing that :) I'm going to push the changes soon, thanks for the advice.