Closed cotton1234 closed 3 months ago
It looks like there are duplicate entries in your CSV. You can run defs/diagnostic.py to confirm the same.
Just follow the instructions to pull the latest eod2_data, and then run init.py.
If you're still running an older version of EOD2, make sure to update that as well. (Instructions are in the same link.)
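For illustration, a duplicate-date check like the one diagnostic.py performs could be sketched as follows — a minimal stdlib sketch, assuming each daily CSV has the date in its first column (the actual diagnostic.py may work differently):

```python
import csv
from collections import Counter

def duplicate_dates(csv_path):
    """Return dates that appear more than once in a daily EOD CSV.

    Assumes the first field of every row is the date (YYYY-MM-DD).
    """
    with open(csv_path, newline="") as fh:
        counts = Counter(row[0] for row in csv.reader(fh) if row)
    return [date for date, n in counts.items() if n > 1]
```

Run this over every file in eod2_data/daily and any non-empty result points at a file that needs resetting.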
Hello Benny,
Getting the following..
(.venv) saurabhgarg@MacBook-Pro defs % python3 diagnostic.py
File or Pandas exceptions
SETFNN50.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 2112, saw 17\n')
INTLCONV.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 703, saw 17\n')
DIAMONDYD.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 1677, saw 17\n')
AEROFLEX.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 218, saw 17\n')
BHARTIARTL.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 4479, saw 17\n')
SBIETFPB.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 924, saw 17\n')
INFINIUM_SME.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 309, saw 17\n')
BAWEJA_SME.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 111, saw 17\n')
HDFCNEXT50.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 477, saw 17\n')
PRIVISCL.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 2275, saw 17\n')
GABRIEL.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 5880, saw 17\n')
AVROIND.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 564, saw 17\n')
3PLAND.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 3210, saw 17\n')
ZENTEC.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 2300, saw 17\n')
TATASTEEL.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 4645, saw 17\n')
SENCO.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 251, saw 17\n')
OMFURN_SME.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 528, saw 17\n')
BLBLIMITED.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 5391, saw 17\n')
RELAXO.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 3235, saw 17\n')
LIQUIDADD.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 72, saw 17\n')
CELLO.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 174, saw 17\n')
HDFCVALUE.CSV: ParserError('Error tokenizing data. C error: Expected 9 fields in line 444, saw 17\n')
(.venv) saurabhgarg@MacBook-Pro defs %
Now, if I open the CSV for any of these files, I see the entry for the 16th has more than 9 columns.
2024-07-11,829.9,836.75,825.1,831.35,54298,6234,8.71,26475
2024-07-12,828.0,832.5,821.2,824.55,54348,5542,9.81,28417
2024-07-15,825.0,828.5,812.75,816.5,83440,6365,13.11,53597
2024-07-16,816.6,839.4,815.3,836.05,486371,8348,58.26,4165102024-07-18,836.1,843.55,827.1,830.25,87214,9256,9.42,42215
(.venv) saurabhgarg@MacBook-Pro src % tail -1 eod2_data/daily/relaxo.csv
2024-07-16,816.6,839.4,815.3,836.05,486371,8348,58.26,4165102024-07-18,836.1,843.55,827.1,830.25,87214,9256,9.42,42215
(.venv) saurabhgarg@MacBook-Pro src % tail -1 eod2_data/daily/cello.csv
2024-07-16,965.0,1014.35,965.0,990.3,546197,18912,28.88,2796572024-07-18,991.9,996.45,969.7,971.25,120273,7726,15.57,81641
(.venv) saurabhgarg@MacBook-Pro src %
I was thinking of removing all the entries for the 16th and then rolling forward. But the question is: how did this happen only for the 16th's data?
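The joined rows above can be found with a short scan — a minimal sketch, assuming the daily files live under eod2_data/daily and a valid row has exactly 9 comma-separated fields:

```python
import csv
from pathlib import Path

EXPECTED_FIELDS = 9  # assumed layout of the daily EOD CSVs

def find_bad_rows(daily_dir):
    """Yield (file, line_no, field_count) for rows without 9 fields."""
    for path in sorted(Path(daily_dir).glob("*.csv")):
        with path.open(newline="") as fh:
            for line_no, row in enumerate(csv.reader(fh), start=1):
                if row and len(row) != EXPECTED_FIELDS:
                    yield path.name, line_no, len(row)

if __name__ == "__main__":
    for name, line_no, count in find_bad_rows("eod2_data/daily"):
        print(f"{name}: line {line_no} has {count} fields (expected {EXPECTED_FIELDS})")
```

Two 9-field records fused on one line show up as 17 fields, which matches the "Expected 9 fields ... saw 17" errors from diagnostic.py.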
Thanks Saurabh
I'm not sure. What version are you running? py init.py -v
I'm not seeing this on my own files. If you're running the latest and this happened, I can recheck my code.
omfurn_sme.csv
2024-07-12,64.95,69.95,64.95,69.95,7200,3,2400.0,7200
2024-07-16,65.05,72.0,65.0,70.65,33600,14,2400.0,31200
2024-07-18,68.0,68.0,68.0,68.0,4800,2,2400.0,4800

hdfcvalue.csv
2024-07-15,140.28,141.93,140.08,141.44,9520,199,47.84,8146
2024-07-16,141.45,142.23,141.16,141.55,15779,178,88.65,9835
2024-07-18,143.49,143.7,140.83,143.34,17357,168,103.32,11973
Here is what you could do (assuming you have the latest version): run
git status
to check for accidental edits to the code. Either way, don't try to repair your files manually. Just reset as per the instructions I mentioned.
(.venv) saurabhgarg@MacBook-Pro src % python3 init.py -v
sh: color: command not found
EOD2 init.py: version 6.0.2
(.venv) saurabhgarg@MacBook-Pro src % git status
On branch main
Your branch is up to date with 'origin/main'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)

no changes added to commit (use "git add" and/or "git commit -a")
(.venv) saurabhgarg@MacBook-Pro src %
You mean I should delete the files flagged by diagnostic.py, and it will create new ones with all the data?
Thanks Saurabh
So you're running the latest EOD2 and still had this problem.

It's not an OS issue with the \n character, otherwise you'd have had this issue before as well. I can't see any immediate issue with the code.

If you look below, the two lines are joined but the dates are different. So the \n character was not written for some reason.
2024-07-16,816.6,839.4,815.3,836.05,486371,8348,58.26,4165102024-07-18,836.1,843.55,827.1,830.25,87214,9256,9.42,42215
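For illustration only, a fused record like the one above can be split mechanically wherever a new YYYY-MM-DD date starts mid-field — a hypothetical sketch (the fix recommended in this thread is still to reset eod2_data rather than patch files by hand):

```python
import re

# Matches the zero-width point where a date is glued to the end of the
# previous record's last numeric field, e.g. ",4165102024-07-18," —
# lookbehind: a digit; lookahead: "YYYY-MM-DD," starting a new record.
FUSED_DATE = re.compile(r"(?<=\d)(?=\d{4}-\d{2}-\d{2},)")

def split_fused(line: str) -> list[str]:
    """Split a CSV line wherever a new dated record begins without a newline."""
    return FUSED_DATE.split(line.rstrip("\n"))
```

This is heuristic: it cannot tell where the preceding numeric field truly ends if the data itself happens to embed a date-shaped digit run, which is another reason resetting the data is the safer route.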
You mean I should delete the files flagged by diagnostic.py, and it will create new ones with all the data?
Just reset eod2_data as I mentioned in my first post and run init.py to sync up the data.

Run diagnostic.py to check for errors. If the problem persists, send me a copy of defs.py. Send it over Telegram, or rename it to a .txt file and attach it here.
Thanks Benny, the issue is fixed now.

I noticed one thing yesterday with the NSE CSV files. Generally the file size is 40-45 KB, but yesterday it was 90 KB for a while, and it seems that when I updated, I got the 90 KB file. Right now it's back to a 45 KB file.
Saurabh
I usually sync at 7pm via crontab. Yesterday the sync failed as the report was not ready, so I ran it at 10pm after I saw your issue.
Maybe it was just a corrupted bhavcopy.
Yes, could be. Thanks for your response.
Hello Benny,
Getting an error today while syncing.
2024-07-18 20:12:30,872 - main - ERROR - Error while making adjustments. All adjustments have been discarded.
Traceback (most recent call last):
  File "/Users/saurabhgarg/shivshakti/eod2/src/init.py", line 123, in <module>
    defs.adjustNseStocks()
  File "/Users/saurabhgarg/shivshakti/eod2/src/defs/defs.py", line 846, in adjustNseStocks
    raise e
  File "/Users/saurabhgarg/shivshakti/eod2/src/defs/defs.py", line 804, in adjustNseStocks
    commit = makeAdjustment(
             ^^^^^^^^^^^^^^^
  File "/Users/saurabhgarg/shivshakti/eod2/src/defs/defs.py", line 681, in makeAdjustment
    last = df.iloc[idx:]
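The last frame of the traceback slices the price history at an index, which is the usual shape of a split/bonus adjustment: rows before the ex-date get rescaled, rows from the ex-date on stay unchanged. A hypothetical illustration of that pattern (makeAdjustment's real logic is not shown in this thread; the function name, tuple layout, and the 0.5 split ratio below are all invented for the example):

```python
def apply_adjustment(rows, ex_date, ratio):
    """Rescale OHLC values before ex_date by ratio, like a stock-split adjustment.

    rows: list of (date, open, high, low, close) tuples, ascending by date.
    """
    # Index of the first row on or after the ex-date (mirrors df.iloc[idx:]).
    idx = next(i for i, r in enumerate(rows) if r[0] >= ex_date)
    before = [(d, o * ratio, h * ratio, l * ratio, c * ratio)
              for d, o, h, l, c in rows[:idx]]
    return before + rows[idx:]  # rows from ex_date onward stay as-is
```

The point of the illustration: if corrupted rows (like the fused 16th/18th records above) shift where that index lands, the adjustment step can fail mid-slice, which is consistent with it aborting and discarding all adjustments.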