megagonlabs / sato

Code and data for Sato https://arxiv.org/abs/1911.06311.
Apache License 2.0
108 stars 40 forks

Directory Error #11

Open Spruces opened 4 years ago

Spruces commented 4 years ago

Hello, I tried to extract the features with extract_features.py, and it fails with the error in the attached screenshot.

I can't figure out where open_data_type_78_header_valid.csv is supposed to be located.

It's too complicated to use, and there aren't many instructions.

jason022085 commented 3 years ago

Same error occurs to me. I think the solution is to understand the usage of Viznet, because there is similar code in it. https://github.com/mitmedialab/viznet

Spruces commented 3 years ago

> Same error occurs to me. I think the solution is to understand the usage of VizNet, because there is similar code in it. https://github.com/mitmedialab/viznet

Thanks, it's strange: to run it, you need certain .csv files (like open_data_type_78_header_valid.csv) that aren't provided by Sato.

From examining the code, you also need certain NLTK resources to generate open_data_type_78_header_valid.csv.

And even after I found those NLTK resources, I couldn't figure out which directory the NLTK data should be placed in.

The guidelines are poor.

Spruces commented 3 years ago

And how can they detect semantic columns with NLTK?

Take my ID and yours: they don't carry any semantic meaning, they're just nouns.

I think that to detect 'ID' columns from IDs like ours (Spruces, jason022085), they would need other signals, like the length of the values, how many blanks there are, or the number of separator commas (for addresses), because the names and addresses themselves have no meaning.
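The kind of surface features suggested above (value length, blank count, separator count) can be sketched in a few lines. This is purely illustrative of the idea, not Sato's actual feature set; the function and names are hypothetical:

```python
# Illustrative only: simple "shape" features for a column of strings,
# of the kind suggested above for spotting ID-like columns.
# This is NOT Sato's real feature extraction.

def surface_features(values):
    """Compute simple surface features for a column of string values."""
    non_blank = [v for v in values if v.strip()]
    n = max(len(non_blank), 1)  # avoid division by zero on all-blank columns
    return {
        "avg_length": sum(len(v) for v in non_blank) / n,
        "blank_count": len(values) - len(non_blank),
        "avg_commas": sum(v.count(",") for v in non_blank) / n,
    }

ids = ["Spruces", "jason022085", ""]
print(surface_features(ids))  # {'avg_length': 9.0, 'blank_count': 1, 'avg_commas': 0.0}
```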

jason022085 commented 3 years ago

p.s. Remember to download the VizNet data first (produced by retrieve_corpora.sh) and set RAW_DIR to the path of the raw data from VizNet. In my case, it is os.environ['RAW_DIR'] = r"D:\viznet-master\raw"
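A minimal sketch of the setup described above: point RAW_DIR at the VizNet raw-data directory before running the Sato scripts. The Windows path is the example from the comment; substitute your own location.

```python
# Set RAW_DIR before importing/running the Sato scripts, as suggested above.
import os

os.environ["RAW_DIR"] = r"D:\viznet-master\raw"  # adjust to where you put VizNet
print(os.environ["RAW_DIR"])
```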

Since the other datasets are too large, I am working on the "manyeyes" dataset.

And I found there is an extract_header.py. It may help.

I got it! You have to extract the headers first, and then extract the features.

RichaMax commented 3 years ago

Hello, from what I understand you can download open_data_type_78_header_valid.csv and the Sherlock features they used with their script ./download_data.sh

horseno commented 3 years ago

Sorry about the confusion. If you wish to extract the features, you'll first need to run extract_header.py to get the valid headers (in the 78 types after canonicalization) from the dataset.
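The two-step order confirmed here (extract headers, then extract features) can be sketched as below. The script names come from this thread, but the dataset argument ("manyeyes") is an assumption based on jason022085's example, so check each script's actual command-line interface before running:

```python
# Sketch of the pipeline order: headers first, then features.
# The "manyeyes" argument is an assumed example, not a verified CLI.
import subprocess

steps = [
    ["python", "extract_header.py", "manyeyes"],    # 1) collect valid headers
    ["python", "extract_features.py", "manyeyes"],  # 2) then extract features
]
for cmd in steps:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to run inside the sato repo
```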

iganand commented 3 years ago

I get empty header files when I run extract_headers.py for manyeyes. Can you please help? What might be the cause? Please let me know.