The scraped articles have a very complicated and mixed list of paragraphs; some phrases have nothing to do with the articles, such as "Share this page !" or "Tools", and some text even works as links to other pages.
For example, the text in the CSV for the second article is:
"['Category', 'Share', 'Tools', 'BNP Paribas has received the notification by the European Central Bank of the outcome of the 2018 Supervisory Review and Evaluation Process (SREP) confirming the Common Equity Tier 1 (CET1) requirement that the Group has to respect on a consolidated basis.', 'The Common Equity Tier 1 (CET1) requirement that the Group has to respect on a consolidated basis is 9.83% since 1st January 2019 (excluding the Pillar 2 guidance, not public), which includes 1.50% for the G-SIB buffer, 2.50% for the Conservation buffer, 1.25% for the Pillar 2 requirement (unchanged vs. last year) and 0.08% of countercyclical buffer. The requirement for the Tier 1 Capital is 11.33% and that for the Total Capital is 13.33%.', 'With a CET1 ratio at 11.8%, a Tier 1 ratio at 13.1% and a Total Capital ratio at 15.0% as at 31 December 2018, the Group is well above the regulatory requirements.', '', 'BNP Paribas is a leading bank in Europe with an international reach. It has a presence in 73 countries, with more than 196,000 employees, including around 149,000 in Europe. The Group has key positions in its three main activities: Domestic Markets and International Financial Services (whose retail-banking networks and financial services are covered by Retail Banking & Services) and Corporate & Institutional Banking, which serves two client franchises: corporate clients and institutional investors. The Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporates and institutional clients) to realise their projects through solutions spanning financing, investment, savings and protection insurance. ', 'In Europe, the Group has four domestic markets (Belgium, France, Italy and Luxembourg) and BNP Paribas Personal Finance is the European leader in consumer lending.', 'BNP Paribas is rolling out its integrated retail-banking model in Mediterranean countries, in Turkey, in Eastern Europe and a large network in the western part of the United States. In its Corporate & Institutional Banking and International Financial Services activities, BNP Paribas also enjoys top positions in Europe, a strong presence in the Americas as well as a solid and fast-growing business in Asia-Pacific.', 'Share this page !', 'Tools']"
I understand that we should filter that out; however, do you expect us to do the analysis including these "trashy" pieces of text?
You are right, the dataset is not "clean". This is due to my current implementation of the data scraper. See https://github.com/fmikaelian/cdQA/blob/develop/cdqa/scrapper/bs4_scrapper.py
I think we could either build a better scraper or just post-process the scraped data and filter out irrelevant paragraphs. Maybe there are patterns we can detect and remove?
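For example, a first filtering pass could look something like this (a rough sketch; the blacklist, the CSV filename and the `paragraphs` column name are assumptions based on the example above):

```python
import ast

import pandas as pd

# Strings that look like navigation/UI boilerplate in the example above.
# This blacklist is a first guess; we should extend it as we find more patterns.
BOILERPLATE = {"Category", "Share", "Tools", "Share this page !", ""}

def clean_paragraphs(raw):
    """Parse the stringified list stored in a CSV cell and drop boilerplate."""
    paragraphs = ast.literal_eval(raw)  # the cell holds a Python-style list
    return [p.strip() for p in paragraphs if p.strip() not in BOILERPLATE]

# 'bnpp_articles.csv' and the 'paragraphs' column are hypothetical names
df = pd.read_csv("bnpp_articles.csv")
df["paragraphs"] = df["paragraphs"].apply(clean_paragraphs)
```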
Exploratory analysis of the raw dataset can help us decide whether to filter the data or rewrite the scraper, and once the dataset is clean we can do the KPI analysis.
What do you think?
Yes, I will try to find patterns in order to filter it.
If that doesn't work, I will try to improve the scraper.
I just noticed that sometimes the scraper does not detect some useful text.
For example, in this article the paragraph highlighted below is not found in the CSV.
I think we will have to try to improve the scraper.
Could be. What I did in the scraper is take all the paragraph tags <p>
inside the whole content div:
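Roughly like this (a simplified sketch; the actual selector in bs4_scrapper.py may differ, and the div class here is hypothetical):

```python
import requests
from bs4 import BeautifulSoup

def scrape_paragraphs(url):
    """Return the text of every <p> tag inside the main content div."""
    html = requests.get(url).text
    soup = BeautifulSoup(html, "html.parser")
    content = soup.find("div", class_="content")  # class name is a guess
    return [p.get_text() for p in content.find_all("p")]
```

Any text inside that div which is not wrapped in a <p> tag would be skipped, which could explain the missing paragraph you found.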
I just don't know whether this impacts a lot of articles or not.
I guess it will be hard for us to get a perfect dataset. Let's list the issues like you are doing, then decide what to fix and what to leave.
Also, keep in mind that this dataset is the "open data" dataset, since it consists of BNPP public articles. I have a similarly structured dataset with 20K+ internal articles that may also be imperfect... But this is data science IRL 😅
Yes, I completely agree that it will be pretty difficult to obtain a perfect dataset.
For now I am trying to filter the dataset we have as much as possible.
The good thing is that the internal dataset has exactly the same header, so your cleaning functions (if there are any) should be able to "scale".
@andrelmfarias @osans-tel @TNazon @mamrou
Can you share your progress/scripts on this issue?
Generate some KPIs on the dataset we collected. Ideas for KPIs:
- Before and after filtering.
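For instance, something minimal like this would already show the effect of the cleaning (a sketch; it assumes the `paragraphs` column has already been parsed into Python lists, as in the filtering snippet above):

```python
def dataset_kpis(df):
    """A few basic KPIs to compare the dataset before and after filtering."""
    paragraphs = [p for plist in df["paragraphs"] for p in plist]
    return {
        "articles": len(df),
        "paragraphs": len(paragraphs),
        "avg_paragraphs_per_article": len(paragraphs) / len(df),
        "avg_paragraph_chars": sum(len(p) for p in paragraphs) / len(paragraphs),
    }

# e.g. compare dataset_kpis(raw_df) with dataset_kpis(clean_df)
```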