
Crawly, a high-level web crawling & scraping framework for Elixir.
https://hexdocs.pm/crawly
Apache License 2.0

Unable to get up and running from the quick start #14

Closed Ziinc closed 4 years ago

Ziinc commented 5 years ago

Hi, I am unable to scrape the Erlang Solutions blog by following the quickstart guide here: https://github.com/oltarasenko/crawly#quickstart

Attempting to run the spider through iex results in:

iex(1)> Crawly.Engine.start_spider(MyCrawler.EslSpider)
[info] Starting the manager for Elixir.MyCrawler.EslSpider
[debug] Running spider init now.
[debug] Scraped ":title,:url"
[debug] Starting requests storage worker for Elixir.MyCrawler.EslSpider...
[debug] Started 2 workers for Elixir.MyCrawler.EslSpider
:ok
iex(2)> [info] Current crawl speed is: 0 items/min
[info] Stopping MyCrawler.EslSpider, itemcount timeout achieved

I'm quite lost, as there is no way for me to debug whether this is a network issue (highly unlikely, since I can access the ESL website through my browser) or an issue with the URLs being filtered out.

defmodule MyCrawler.EslSpider do
  @behaviour Crawly.Spider
  alias Crawly.Utils
  require Logger
  @impl Crawly.Spider
  def base_url(), do: "https://www.erlang-solutions.com"

  @impl Crawly.Spider
  def init() do
    Logger.debug("Running spider init now.")
    [start_urls: ["https://www.erlang-solutions.com/blog.html"]]
  end

  @impl Crawly.Spider
  def parse_item(response) do
    IO.inspect(response)
    hrefs = response.body |> Floki.find("a.more") |> Floki.attribute("href")

    requests =
      hrefs
      |> Utils.build_absolute_urls(base_url())
      |> Utils.requests_from_urls()

    # Modified this to be even more general, to rule out a selector problem
    title = response.body |> Floki.find("title") |> Floki.text()

    %{
      requests: requests,
      items: [%{title: title, url: response.request_url}]
    }
  end
end

Of note is that the spider does not even call the parse_item callback, since the IO.inspect of the response is never executed.
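
One way to narrow this down would be to fetch the page outside the engine and feed it to the callback by hand. A sketch, assuming Crawly.fetch/1 returns HTTPoison's {:ok, response} / {:error, reason} tuple (as the output later in this thread suggests):

iex(1)> {:ok, response} = Crawly.fetch("https://www.erlang-solutions.com/blog.html")
iex(2)> MyCrawler.EslSpider.parse_item(response)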

Config is as follows:

config :crawly,
  # stop the spider when the crawl speed stays below 10 items/min
  closespider_timeout: 10,
  concurrent_requests_per_domain: 2,
  follow_redirects: true,
  output_format: "csv",
  # fields every scraped item is expected to contain
  item: [:title, :url],
  # field used to deduplicate scraped items
  item_id: :title
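
(For reference, the "itemcount timeout achieved" line in the log above appears to be the closespider_timeout setting firing: the crawl speed was 0 items/min, below the configured threshold of 10 items/min, so the engine shut the spider down.)
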
oltarasenko commented 5 years ago

Hey @Ziinc, your code apparently works for me. Do you have any other details to share? Maybe I should try different Elixir/Erlang versions?

[screenshot: 2019-10-01_1724]
oltarasenko commented 5 years ago

Could you also try to do Crawly.fetch("https://www.erlang-solutions.com/blog.html") just to check if it's possible to make requests to the blog from your machine?

oltarasenko commented 5 years ago

@Ziinc, in any case, tell me what you're trying to achieve so I can try to help.

Ziinc commented 5 years ago

I'm trying to integrate Crawly into an existing Phoenix project.

My dependencies are as follows (though I doubt there are dependency conflicts):

defp deps do
    [
      {:phoenix, "~> 1.4.0"},
      {:phoenix_pubsub, "~> 1.1"},
      {:phoenix_ecto, "~> 4.0"},
      {:ecto_sql, "~> 3.0"},
      {:postgrex, ">= 0.0.0"},
      {:phoenix_html, "~> 2.11"},
      {:phoenix_live_reload, "~> 1.2", only: :dev},
      {:gettext, "~> 0.11"},
      {:jason, "~> 1.0"},
      {:plug_cowboy, "~> 2.0"},
      {:mix_test_watch, "~> 0.8", only: :dev, runtime: false},
      {:comeonin, "~> 4.1"},
      {:bcrypt_elixir, "~> 1.1"},
      {:distillery, "~> 2.0", runtime: false},
      {:httpoison, "~> 1.4"},
      {:ex_aws, "~> 2.0"},
      {:ex_aws_s3, "~> 2.0"},
      {:sweet_xml, "~> 0.6"},
      {:bureaucrat, "~> 0.2.5"},
      {:hound, "~> 1.0", only: [:dev, :test], runtime: false},
      {:mogrify, "~> 0.7.3"},
      {:honeydew, "~> 1.4.4"},
      {:crawly, "~> 0.5.0"}
    ]
  end

I was actually on Elixir 1.8.2 when I first encountered the issue, and I updated to see if it would help.

This is what happens with Crawly.fetch/1:

Interactive Elixir (1.9.1) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> Crawly.fetch("https://www.erlang-solutions.com/blog.html")
{:error,
 %HTTPoison.Error{id: nil, reason: {:option, :server_only, :honor_cipher_order}}}

I will try the quickstart on a fresh project and report back

Ziinc commented 5 years ago

@oltarasenko I've been able to get the quickstart to work in a new project, but it still does not work when I try to add it to the existing project. I think it's an issue with dependency conflicts, most notably with httpoison.

Ziinc commented 5 years ago

@oltarasenko Seems like it is an issue with hackney:

https://elixirforum.com/t/hackney-error-option-server-only-honor-cipher-order/25541

I'll recompile, update the deps, and try again.

Ziinc commented 5 years ago

Updating the httpoison dependency did the trick: it upgraded hackney to the latest version, where the SSL issue was fixed. Thanks!
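
For anyone who hits the same :honor_cipher_order error later: the fix amounts to pulling in a hackney release that contains the SSL patch. A minimal sketch of the dependency bump (the version requirement here is illustrative; any httpoison version that locks a fixed hackney should do):

# mix.exs
{:httpoison, "~> 1.6"},

# then re-resolve the locked versions from the shell:
#   mix deps.update httpoison hackney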

Ziinc commented 5 years ago

I think instead of letting the httpoison errors get swallowed up, it would be good to let them surface as debug logs.

oltarasenko commented 5 years ago

OK, I am re-opening this, as the thing you mentioned (https://github.com/oltarasenko/crawly/blob/master/lib/crawly/worker.ex#L43) requires a fix. I will process the error there and log the error message. Good catch!
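
For context, the shape of the change under discussion, as a sketch only (the actual patch is in PR #15, and the worker's real function and variable names may differ):

# Match on the HTTPoison result instead of letting errors pass silently
# (assumes `require Logger` in the module):
case HTTPoison.get(request.url, headers, options) do
  {:ok, %HTTPoison.Response{} = response} ->
    {:ok, response}

  {:error, %HTTPoison.Error{reason: reason}} ->
    Logger.debug("Could not fetch #{request.url}, reason: #{inspect(reason)}")
    {:error, reason}
end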

oltarasenko commented 5 years ago

@Ziinc Could you please have a glance at https://github.com/oltarasenko/crawly/pull/15? It is pretty trivial for now; however, I plan to extend the worker quite soon, as I am currently working on support for different user agents (aka webdriver support).

Ziinc commented 5 years ago

Looks good. A possible extension (to add to the backlog) for the error behaviour could be a configurable fallback module that is called when the backoff retries fail, allowing the engine to run some wrap-up function, possibly to alert on errors.

It could be specified at either the spider level or the config level.
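
To make the idea concrete, here is a hypothetical sketch of what such a fallback contract might look like (none of these module or callback names exist in Crawly; they are illustrative only):

# A hypothetical behaviour the engine could invoke once retries are exhausted:
defmodule SpiderFallback do
  @callback on_retries_exhausted(spider :: module(), request :: map()) :: any()
end

defmodule MyCrawler.ErrorHandler do
  @behaviour SpiderFallback

  @impl SpiderFallback
  def on_retries_exhausted(spider, request) do
    # wrap-up / alerting logic would go here
    IO.puts("#{inspect(spider)} gave up on #{request.url}")
  end
end

# wired in at the config level, e.g.:
#   config :crawly, fallback_module: MyCrawler.ErrorHandler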

oltarasenko commented 4 years ago

This is now fixed in 0.6.0.