Closed mlbelobraydi closed 3 years ago
Good start on the formatting. The definitions for each section are being used to build correctly formatted dictionary results. Everything is captured in the following notebook: https://github.com/mlbelobraydi/TXRRC_data_harvest/blob/master/Notebooks/Testing%20data%20definitions.ipynb
QAQC of the formatting definitions is in progress. The fields need to be confirmed as splitting and formatting correctly before moving on to dataframes and SQL.
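The splitting step above can be sketched as a fixed-width slicer driven by a per-section layout. The field names and byte offsets below are illustrative placeholders, not the real dbf900 definitions:

```python
# Hypothetical layout: (field name, start offset, end offset).
# The real offsets come from the RRC data definition manual.
WELL_ROOT_LAYOUT = [
    ("RRC-TAPE-RECORD-ID", 0, 2),
    ("WELL-API-NUMBER", 2, 10),
    ("WELL-COUNTY-CODE", 10, 13),
]

def parse_section(record: str, layout) -> dict:
    """Slice a fixed-width record into a {field: value} dictionary."""
    return {name: record[start:end].strip() for name, start, end in layout}

row = parse_section("0142123456789012", WELL_ROOT_LAYOUT)
```

QAQC then amounts to checking each resulting dictionary value against the expected field widths and formats.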
Created .py files that work together so the formatting can be tested: https://github.com/mlbelobraydi/TXRRC_data_harvest/blob/master/WorkingFileForTesting.py
Same data parsing can be found in the notebook: https://github.com/mlbelobraydi/TXRRC_data_harvest/blob/master/Notebooks/Testing%20data%20definitions.ipynb
Starting to map out the dependencies of unique keys across the different sections. Most changes are being tracked in https://github.com/mlbelobraydi/TXRRC_data_harvest/blob/master/dbf900_layouts.py; the results will need to be moved to the .txt in the definition file, and the definitions in the Jupyter notebook updated.
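One way to sketch that key dependency: child sections inherit the unique key of the most recent root record as the file streams past. The section numbers and key name here are assumptions for illustration:

```python
# Sketch: carry the current well's root key down to its child sections.
# Assumes section "01" is the root record holding WELL-API-NUMBER.
def attach_keys(records):
    """Yield (section, row) pairs with the parent key attached."""
    current_api = None
    for section, row in records:
        if section == "01":                      # root record opens a new well
            current_api = row.get("WELL-API-NUMBER")
        else:
            row["WELL-API-NUMBER"] = current_api  # inherit the parent key
        yield section, row
```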
Sections 1, 4, 5, 7, 12, 13, 23, 24, 25, 26, and 27 have passed QAQC. Sections like 24 will need to be formatted into JSON and added to the previous record in section 23. Section 22 has a known byte error.
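Folding section 24 into the preceding section 23 record could look like the sketch below; the `"REMARKS"` key is a placeholder, not the actual field name:

```python
# Sketch: attach each section-24 row to the most recent section-23 record
# as a nested JSON-style list, instead of keeping it as a standalone row.
def fold_24_into_23(rows):
    out = []
    for section, row in rows:
        if section == "24" and out and out[-1]["section"] == "23":
            # "REMARKS" is a hypothetical nesting key for illustration
            out[-1]["row"].setdefault("REMARKS", []).append(row)
        else:
            out.append({"section": section, "row": row})
    return out
```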
Sections 2, 3, 6, 8, 9, 10, 11, 14, 15, 16, 17, 18, 19, 20, 21, 22, and 28 still need QAQC.
Sections 2, 14, 21, and 22 will need an additional subroutine to decode the 'WB-OIL-GAS-INFO' field into the appropriate oil or gas components.
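The dispatch pattern for such a subroutine might look like the following. The flag character and payload split are assumptions; the real sub-layout of 'WB-OIL-GAS-INFO' has to come from the RRC data definitions:

```python
# Hypothetical decoder: assume a leading flag character marks oil ('O')
# vs gas ('G'), with the remainder as the type-specific payload.
def decode_oil_gas_info(raw: str) -> dict:
    flag, payload = raw[0], raw[1:]
    if flag == "O":
        return {"type": "oil", "oil-info": payload}
    if flag == "G":
        return {"type": "gas", "gas-info": payload}
    return {"type": "unknown", "raw": raw}
```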
This is still ongoing, but the bytes rewrite is taking priority to capture the full decimal digits for lat-long and coordinates in section 13.
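Working at the byte level matters because COBOL-style packed-decimal (COMP-3) fields lose digits if decoded as text. A minimal sketch of unpacking such a field, assuming the standard encoding where every nibble but the last is a digit and the final nibble is the sign:

```python
def unpack_comp3(data: bytes, scale: int = 0) -> float:
    """Decode a packed-decimal (COMP-3) field.

    Each nibble except the last is a decimal digit; the final nibble is
    the sign (0xD or 0xB means negative). `scale` is the number of
    implied decimal places, e.g. 7 for a lat-long stored as an integer.
    """
    digits = ""
    for byte in data:
        digits += f"{byte >> 4}{byte & 0x0F:X}"   # high nibble, low nibble
    sign = -1 if digits[-1] in "DB" else 1
    value = int(digits[:-1])
    return sign * value / (10 ** scale)
```

With `scale` set from the field definition, the full decimal precision of the section 13 coordinates is preserved instead of being truncated.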
https://github.com/mlbelobraydi/TXRRC_data_harvest/blob/master/WorkingFileForTesting.py now works with:
https://github.com/mlbelobraydi/TXRRC_data_harvest/blob/master/dbf900_main_bytes.py
https://github.com/mlbelobraydi/TXRRC_data_harvest/blob/master/dbf900_layouts_bytes.py
https://github.com/mlbelobraydi/TXRRC_data_harvest/blob/master/dbf900_formats_bytes.py
WorkingFileForTesting.py also captures the unique keys and places the values in the appropriate dataframes.
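Grouping parsed rows into one table per section can be sketched without any dataframe library; each resulting list can then be handed to a dataframe constructor:

```python
from collections import defaultdict

# Sketch: accumulate parsed rows per section so each section becomes
# one table. Each tables[section] list of dicts can be passed directly
# to pandas.DataFrame(...) afterwards.
def build_tables(rows):
    tables = defaultdict(list)
    for section, row in rows:
        tables[section].append(row)
    return tables
```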
All sections are now being read into dataframes and output to CSV files. Format testing is still ongoing, so QAQC of the layouts and formats still needs to be completed.
Now that the definitions are complete, a notebook needs to be created to test the process of turning the .ebc file into usable data that can be formatted as JSON or SQL tables. This task is to create a prototype of that process in a notebook.
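The skeleton of that prototype could look like the sketch below: read fixed-length records from the .ebc file, decode the EBCDIC bytes (codepage cp037 here is an assumption), split on a layout, and emit JSON lines. The record length and layout are placeholders, not the real dbf900 values:

```python
import io
import json

# Illustrative constants; the real record length and per-section layouts
# come from the dbf900 definition files.
RECORD_LENGTH = 16
LAYOUT = [("RECORD-ID", 0, 2), ("API", 2, 10), ("COUNTY", 10, 13)]

def ebc_to_json(stream):
    """Read fixed-length EBCDIC records and return a list of JSON strings."""
    out = []
    while True:
        raw = stream.read(RECORD_LENGTH)
        if len(raw) < RECORD_LENGTH:          # end of file / partial record
            break
        text = raw.decode("cp037")            # EBCDIC -> str
        out.append(json.dumps({n: text[s:e].strip() for n, s, e in LAYOUT}))
    return out
```

The same loop could instead insert each decoded dictionary into SQL tables; JSON lines are used here only to keep the sketch self-contained.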