allenai / mmc4

MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text.
MIT License

Duplicates and multiple versions of samples #10

Closed · pfischer-nvidia closed this issue 1 year ago

pfischer-nvidia commented 1 year ago

Dear authors, while processing the MMC4 dataset, we found some anomalies and we hope you can comment on or explain these.

Our Expectations

- Every document (i.e., every URL) appears exactly once in a given subset.
- The "core" sets are subsets of the corresponding "full" sets.
- The "fewer faces" sets are filtered versions of the corresponding sets with faces.

Our Findings

We found that none of these expectations holds exactly. The details follow.

Exact Duplicates

First, we matched samples by the MD5 hash of their JSON string to find exact duplicates.

For example, for mmc4-core-ff we found 5,598,117 total samples (i.e., JSON lines) across all shards, but only 5,506,430 unique samples. This means that 1.6% of that subset are exact duplicates.
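A minimal sketch of this check, assuming the shard naming seen later in this thread (not our exact script):

```python
import glob
import hashlib
import zipfile
from collections import Counter

# Count exact duplicates via MD5 hashes of the raw JSON lines.
hashes = Counter()
for path in sorted(glob.glob("mmc4_core_ff/docs_shard_*.jsonl.zip")):
    with zipfile.ZipFile(path) as zf:
        # Each archive holds a single .jsonl file.
        with zf.open(zf.namelist()[0]) as f:
            for line in f:
                hashes[hashlib.md5(line.strip()).hexdigest()] += 1

total, unique = sum(hashes.values()), len(hashes)
print(f"{total} total, {unique} unique -> "
      f"{100 * (total - unique) / total:.1f}% exact duplicates")
```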

Other Duplicates

If we instead match only on the document URL string, the duplicate rate is higher: for mmc4-core-ff we then obtain only 5,492,699 unique samples, so 1.9% are duplicates. Interestingly, some URLs appear not just twice but up to 88 times.
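The counting itself is a one-pass sketch under the same assumptions, using each document's url field:

```python
import glob
import json
import zipfile
from collections import Counter

# Count how often each document URL appears across all shards.
url_counts = Counter()
for path in sorted(glob.glob("mmc4_core_ff/docs_shard_*.jsonl.zip")):
    with zipfile.ZipFile(path) as zf:
        with zf.open(zf.namelist()[0]) as f:
            for line in f:
                url_counts[json.loads(line)["url"]] += 1

print(len(url_counts), "unique URLs")
print(url_counts.most_common(10))  # yields the top-ten list below
```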

Here are the top ten duplicate URLs with the number of appearances:

('https://site.clubrunner.ca/page/clubrunner-mobile-app-now-available', 88),
('https://www.amazon.com.au/All-New-Kindle-With-Front-Light-Black/dp/B07FQ4DJ83', 59),
('https://www.plentygram.com/blog/how-to-make-your-instagram-account-famous/', 46),
('http://www.fuelly.com/', 41),
('https://www.bhhsnv.com/', 39),
('https://www.kikocosmetics.com/en-us/', 34),
('http://www.manchesteruniversitypress.co.uk/articles/freedom-and-the-fifth-commandment-qa-with-brian-heffernan/', 31),
('http://www.manchesteruniversitypress.co.uk/articles/mup-advent-calendar-starts-thursday/', 31),
('https://emeraldcoastbyowner.com/', 29),
('https://www.ait.com/web-development/?typhon', 29)

We took a closer look at the URL with 88 appearances and found that 87 of them are exact duplicates, while 1 is slightly different. For that sample, the image similarities and the similarity matrix differ, although its text and images match those of the other 87 samples.

Faces vs. No Faces

We assumed that the fewer-faces sets are simply filtered versions of the sets with faces. We filtered the sets with faces ourselves, keeping only the samples with face_detections: None. However, this does not reproduce the published fewer-faces sets. The effect is related to the similar-but-slightly-different samples mentioned above. One example: compare mmc4_core_faces/docs_shard_4943_v3.jsonl.zip sample 113 with mmc4_full_faces/docs_shard_4943_v2.jsonl.zip sample 1523. Both have the same URL, and the core set should be a subset of the full set. However, the second sample contains an additional image with face detections, while all other images contain no face detections.

[Screenshot: side-by-side comparison of the two samples]
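For reference, a sketch of the filter we applied, under one reading of "face_detections: None" (keep a document only if none of its images carries face detections; field names follow the dataset schema):

```python
import json
import zipfile

def has_no_faces(doc: dict) -> bool:
    """Return True if no image in the document carries face detections."""
    return all(img.get("face_detections") is None for img in doc["image_info"])

def filter_shard(in_zip: str, out_jsonl: str) -> None:
    """Write only the face-free documents of one shard to a new .jsonl file."""
    with zipfile.ZipFile(in_zip) as zf, open(out_jsonl, "w") as out:
        with zf.open(zf.namelist()[0]) as f:
            for line in f:
                doc = json.loads(line)
                if has_no_faces(doc):
                    out.write(json.dumps(doc) + "\n")
```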

Questions

- Why are the "fewer faces" sets not exact filtered subsets of the corresponding sets with faces?
- Why are the "core" sets not strict subsets of the "full" sets?
- Are you aware of the duplicates, and will a future version remove them?

jmhessel commented 1 year ago

Hi @pfischer-nvidia! Thanks for your interest in the dataset, and for going through the corpus in detail! Our goal in releasing the corpus as a v1 was exactly to get community input about quality issues, so this input is super helpful. I will go through this in more detail soon, but I wanted to get back to you with some quick answers ASAP:

For a strict definition of "subset", they aren't true subsets, and they aren't intended to be --- my apologies for the confusing naming. Imagine a document with images, some of which contain faces and some of which don't. If you simply remove the images with detected faces, the resulting image-text alignment might not have as high a similarity as if you re-ran the assignment procedure. So we remove images with detected faces and then re-run the assignment algorithm, which might result in different assignments globally.
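As a toy illustration of why re-running changes things globally (made-up similarity numbers; a bipartite-assignment sketch, not our exact pipeline):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy similarity matrix: rows = images, columns = sentences.
sim = np.array([
    [0.9, 0.3, 0.1],
    [0.8, 0.4, 0.2],
    [0.1, 0.2, 0.6],
])

# Assign all three images, maximizing total similarity.
rows, cols = linear_sum_assignment(sim, maximize=True)
print({int(r): int(c) for r, c in zip(rows, cols)})  # {0: 0, 1: 1, 2: 2}

# Drop image 0 (say it contained a face) and re-run:
rows, cols = linear_sum_assignment(sim[1:], maximize=True)
print({int(r) + 1: int(c) for r, c in zip(rows, cols)})  # {1: 0, 2: 2}
# Image 1 is now assigned to sentence 0, not sentence 1 as before.
```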

These are also not true subsets under a strict definition. As described in the paper, we apply additional filters that can affect which images are available within each document: these include document-level thresholds (like a min/max number of images/sentences), but also steps that affect within-document properties, like stricter deduplication, which (as mentioned in the paper) can create some false positives that are discarded.

This is something we are aware of, and it is a concern with lots of pretraining datasets out there. Our assumption was that the deduplication efforts of c4 were sufficient for us to not run deduplication ourselves, but we have also recently noticed a small number of duplicate URLs. We removed a /ton/ of duplicate images from our original 1.4B set, but it looks like we missed these in v1 of the release. We'll check it against your findings.

jmhessel commented 1 year ago

Hi @pfischer-nvidia --- thanks for this report! Along with fixing some of the alignments mentioned in #11, we are now working on a v1.1 of the corpus which aims to address the ~1% duplicate-URL issue.

pfischer-nvidia commented 1 year ago

Thanks. Are you going to make the samples unique with respect to the URL? And how should we interpret the current _v2 and _v3 suffixes of the files?

pfischer-nvidia commented 1 year ago

Oh, and one more question: a large fraction of the images referenced in the dataset are no longer available online. Would it be possible to get these from you?

jmhessel commented 1 year ago

I am closing this issue as resolved by https://github.com/allenai/mmc4/pull/13 --- the update we made was to apply probabilistic deduplication such that, in expectation, each URL appears once. But if you want a more strictly URL-deduplicated set, you can discard any docs marked by the new could_have_url_duplicate field (see the main readme). Thanks for your help on this!
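To make the two options concrete, here is a sketch of both (the exact mechanics live in the PR; this only illustrates the behavior described above, assuming url and could_have_url_duplicate fields per the readme):

```python
import json
import random
from collections import Counter

def probabilistic_dedup(docs: list) -> list:
    """Keep each occurrence of a URL with probability 1/k, where k is the
    URL's total count, so every URL survives exactly once in expectation."""
    counts = Counter(d["url"] for d in docs)
    return [d for d in docs if random.random() < 1.0 / counts[d["url"]]]

def strict_url_dedup(jsonl_lines):
    """Yield only documents not flagged as possible URL duplicates."""
    for line in jsonl_lines:
        doc = json.loads(line)
        if not doc.get("could_have_url_duplicate", 0):
            yield doc
```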

To answer your questions: