zinggAI / zingg

Scalable identity resolution, entity resolution, data mastering and deduplication using ML
GNU Affero General Public License v3.0

zingg 0.4.0 in Docker consumes lots of disk space #893

Open iqoOopi opened 2 months ago

iqoOopi commented 2 months ago

I'm running Zingg in Docker with 2.7M records, matching on 10 columns. On my 6-core, 32 GB RAM machine, it has been running for 14 hours now and is still going. It has already used 230 GB of disk space.

For the past few hours, it looks like it has been writing to rdd_71_1 nonstop.

Is this normal?

[screenshots attached]

sonalgoyal commented 2 months ago

That should not happen. What is the labelDataSampleSize and number of matches you have labeled?

iqoOopi commented 2 months ago

So I trained the model with 60K records first and ran the match with 60K. Everything works and the results are accurate. labelDataSampleSize for the 60K records is 0.1, and the number of matches is roughly 60 records.

Then I wanted to try 2.7M records (same table schema, just more records), so I changed labelDataSampleSize to 0.001. The findAndLabel command works fine, but the match never finishes and has taken all my disk space (more than 300 GB).
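For reference, the only field changed between the two runs was this fragment of config.json (a sketch showing just that field; it was 0.1 for the 60K run):

```json
{
  "labelDataSampleSize": 0.001
}
```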

sonalgoyal commented 2 months ago

It is better to run train on a data size close to the one you want to run match on. Can you please run train with the bigger 2.7M dataset and try match after that?
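(As a sketch of that sequence, assuming the standard zingg.sh entry point with its --phase and --conf options:)

```bash
./scripts/zingg.sh --phase train --conf config.json
./scripts/zingg.sh --phase match --conf config.json
```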

iqoOopi commented 2 months ago

Oh, I forgot to mention: after I switched to 2.7M, I reran a few rounds of findAndLabel and retrained the model as well. Then I started the match.

sonalgoyal commented 2 months ago

OK. It seems the blocking model is not trained. Zingg jobs are more compute-intensive than memory-intensive, and the training samples help to learn the blocking model, which helps to parallelise the job across cores. Do you have a lot of null values in your dataset that may be getting clubbed together?

iqoOopi commented 2 months ago

Yes, each record has 15 fields, and all of them are nullable. I have quite a lot of records that only have values for 2 fields (firstName, phone), with the remaining 13 fields null.

sonalgoyal commented 2 months ago

Ah. That’s a tricky one. Are the null fields important to the match?

iqoOopi commented 2 months ago

Yes, they are important if they have a value, like SIN number, DOB, email address, etc.

sonalgoyal commented 2 months ago

I suspect you will need a lot more training data covering matches across combinations of these null values. Nulls are tough to block on as they carry zero signal. Maybe add more labeled pairs through trainingSamples and see if that changes things?
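As a sketch of what such an entry could look like in config.json (the file location and column list here are placeholders, assuming Zingg's trainingSamples pipe format where z_cluster groups a labeled pair and z_isMatch marks it as match (1) or non-match (0)):

```json
"trainingSamples": [{
  "name": "nullHeavyPairs",
  "format": "csv",
  "props": { "location": "/data/null_heavy_pairs.csv", "delimiter": ",", "header": "true" },
  "schema": "z_cluster string, z_isMatch integer, firstName string, phone string"
}]
```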

iqoOopi commented 2 months ago

> Seems it’s not trained the blocking model.

Will try training more. By the way, how did you figure out it is not using the blocking model?

sonalgoyal commented 2 months ago

I have seen it in the past on one dataset which did not have a lot of values populated. If there is no signal, it is hard to learn. I think we should build some way to let users know this while running Zingg.

iqoOopi commented 2 months ago

[screenshot of log warnings]

Also, when running Zingg, how can I tell whether it is analyzing something? Nothing shows in the log, so it is hard to tell whether it is working normally or whether I need to abort the current task.

sonalgoyal commented 2 months ago

These warnings are ok. If the logs are not moving at all, that may be a sign.

sonalgoyal commented 2 months ago

If you are familiar with Spark, you could look at the Spark GUI.
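(Assuming the default Spark driver UI port of 4040 and the zingg/zingg Docker image, the port needs to be published when starting the container, e.g.:)

```bash
# expose the Spark driver UI (default port 4040) from the container
docker run -it -p 4040:4040 zingg/zingg:0.4.0 bash
```

Then browse to http://localhost:4040 while the job is running.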

iqoOopi commented 2 months ago

By the way, in the config.json file I have "trainingSamples" and "data" sections, both pointing to SQL Server tables. Does the schema order matter? In trainingSamples I have "schema": "A string, B string, C string", but in the data section the "schema" is "A string, C string, B string". Since they are SQL tables, I think the order does not matter; just want to confirm.

iqoOopi commented 2 months ago

[Spark UI screenshots]

I just restarted the match after retraining with around 100 labeled matches. From the Spark GUI, it only has 1 active job with 3 tasks. Is this normal?

sonalgoyal commented 2 months ago

What is the numPartitions setting? For better parallelisation, you want it to be 4-5 times your number of cores.
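Applied to the machine described in the first post (6 cores, so roughly 4-5 × 6), that would be a config.json fragment like this sketch:

```json
{
  "numPartitions": 30
}
```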

iqoOopi commented 2 months ago

Ah, thanks @sonalgoyal for the quick reply. In the config.json file it is only 4. My CPU has 4 cores and 8 threads (the Docker interface shows 8 CPUs), so should I give it 4 × 5 or 8 × 5? Also, in the zingg.conf file I saw a "spark.default.parallelism" setting commented out; what should that value be?
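(For reference, spark.default.parallelism is a standard Spark property that sets the default number of partitions for RDD operations when none is given. Uncommented in zingg.conf it is a plain key-value line; the value here is a placeholder following the 4-5× guideline above:)

```properties
spark.default.parallelism=30
```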

iqoOopi commented 2 months ago

> Does the schema order matter between the "trainingSamples" and "data" sections?

@sonalgoyal how about this question?

iqoOopi commented 2 months ago

I set both numPartitions and spark.default.parallelism to 20. Does this look normal?

[Spark UI screenshots]

sonalgoyal commented 2 months ago

> Does the schema order matter between the "trainingSamples" and "data" sections?

I think the SQL Server dataframe should be read correctly, but I am not 100% sure as that's a case we have not tested. Are you seeing an issue?
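One way to double-check from your side (a sketch, assuming a PySpark session and your JDBC connection details; the URL, credentials, and table names below are placeholders):

```python
# PySpark sketch: read both tables the way a jdbc pipe would and confirm
# that columns line up by name, since Spark dataframes are name-addressed.
# Requires the SQL Server JDBC driver on the Spark classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def read_table(dbtable):
    return (spark.read.format("jdbc")
            .option("url", "jdbc:sqlserver://host:1433;databaseName=mydb")  # placeholder
            .option("dbtable", dbtable)                                     # placeholder
            .option("user", "user")                                         # placeholder
            .option("password", "pwd")                                      # placeholder
            .load())

data_df = read_table("dbo.records")               # "data" pipe table
training_df = read_table("dbo.training_samples")  # "trainingSamples" pipe table

# Ordering differences are harmless as long as names (and types) agree;
# the z_cluster / z_isMatch label columns exist only in the training table.
print(sorted(data_df.columns)
      == sorted(c for c in training_df.columns if not c.startswith("z_")))
```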

iqoOopi commented 2 months ago

> I think the SQL Server dataframe should be read correctly, but I am not 100% sure as that's a case we have not tested. Are you seeing an issue?

No, I have not seen any issues. From the findAndLabel command results, I can see the model is doing its job.

sonalgoyal commented 1 month ago

How did it go @iqoOopi?

iqoOopi commented 1 month ago

No success yet; it never finished the scan of our 2.7M records.

sonalgoyal commented 1 month ago

Can you share the complete logs?

sonalgoyal commented 1 month ago

This is likely the case of a poorly formed blocking model. This can happen due to too little training data, but the user has mentioned that they have labelled sufficiently. Hard to say more without logs or sample data. @iqoOopi, would you be open to a debug session on this?