Closed: kaiiyer closed this issue 4 years ago
I would like to work on this.
Okay!! I've assigned the issue to you @tinaoberoi
Hello, @tinaoberoi are you still working on this issue?
Yes @nis130
Please assign this issue to me.
Someone is working on it. Please wait for a few days @sanjanaagrawal
Could you please elaborate on what kind of information scraping we are looking at?
I guess this might help https://medium.com/@heavenraiza/web-scraping-with-python-170145fd90d3
Is this still active?
I am assigning @aayush1205. Please understand that, in fairness to the ongoing contest, we have to limit the time assigned to medium issues to three days.
Could you please elaborate on what kind of data to scrape?
Headings will do!
@kaiiyer I'm still working on the issue, sir.
Also, @kaiiyer, the script is really open-ended in the sense that the user might give any website to scrape, right? For instance, scraping YouTube is very different from, say, scraping Wikipedia. Hence, I'm leaving the script open for feature additions and right now am tackling the scraping of general info, like links, for any given website. Sounds alright?
Yeah cool
If I have cloned your repository, what command do I run (in terms of `python somefile.py -u google.com`) to simulate the symlinked command `webtech -u google.com`? @kaiiyer
Why do you want to simulate it in the first place? webtech as a whole is a library; you can't use a single Python file to do the same job.
@utkarsh-raj will help you out with your queries @aayush1205
@kaiiyer agreed. But then, if I make changes to some files (for reference, take a look at #46), how do I test them? @utkarsh-raj
@kaiiyer, now can this be closed?
Yeah
@kaiiyer, Thank you so much. Also, for some reason, my score is not updating.
@utkarsh-raj can help you with that !
@kaiiyer, @utkarsh-raj The score for this issue was not added to my profile. Please see to it.
@aayush1205 I've updated your score
@koderjoker , Thanks a lot. I also wanted to ask if more issues will be added for us to solve.
You'll have to refer to @kaiiyer regarding the roadmap of the project.
Use beautiful soup (bs4) to make a script so that the user can scrape information from web pages.
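The request above can be sketched roughly as follows. This is a minimal illustration, assuming the third-party `requests` and `beautifulsoup4` packages are installed; the function names `parse_page` and `scrape` are made up for this sketch and are not taken from the repository.

```python
# Minimal bs4 scraping sketch (illustrative only): pull headings and
# links from a page, per the "Headings will do" guidance in the thread.
import requests
from bs4 import BeautifulSoup


def parse_page(html):
    """Extract headings and links from raw HTML."""
    soup = BeautifulSoup(html, "html.parser")
    headings = [h.get_text(strip=True)
                for h in soup.find_all(["h1", "h2", "h3"])]
    links = [a["href"] for a in soup.find_all("a", href=True)]
    return headings, links


def scrape(url):
    """Fetch a page over HTTP and parse it (needs network access)."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()  # fail loudly on 4xx/5xx instead of parsing an error page
    return parse_page(resp.text)
```

Keeping the fetch (`scrape`) separate from the parsing (`parse_page`) leaves the script open for the feature additions discussed above, since per-site logic can be layered on top of the generic parser.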