rachelHoman opened 7 months ago
I'm running into a similar problem where the files take too long to load and I just time out. Not sure how to proceed.
It's probably just taking a long time because the server is under a lot of load. Try running it in the background:
chmod u+x ./load_tweets_parallel.sh
nohup ./load_tweets_parallel.sh &
Then check the nohup.out file (nohup sends its output there by default) to see whether the same error pops up:
cat nohup.out
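If you'd rather keep a separate log per run instead of the default nohup.out, you can redirect explicitly (the log filename here is just an example):
nohup ./load_tweets_parallel.sh > load_tweets.log 2>&1 &   # send stdout and stderr to a named file
tail -f load_tweets.log   # follow the output live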
I'm getting the same error under my pg_denormalized, and at least for me it pops up immediately after I run the sh load_tweets_parallel.sh command (so it doesn't seem like a timeout or load-induced error...).
Under pg_normalized_batch, I'm also getting a very long error message that looks something like this:
Traceback (most recent call last):
File "/home/Liann.Bielicki.24/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 3371, in _wrap_pool_connect
return fn()
File "/home/Liann.Bielicki.24/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 327, in connect
return _ConnectionFairy._checkout(self)
File "/home/Liann.Bielicki.24/.local/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 894, in _checkout
fairy = _ConnectionRecord.checkout(pool)
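That part of the trace is SQLAlchemy trying to check out a database connection from its pool, so I've been trying to confirm whether Postgres is reachable at all. A quick direct check (the user, password, port, and database name below are placeholders; substitute whatever your docker-compose.yml maps):
# placeholders: swap in the credentials and host port from your docker-compose.yml
psql postgresql://postgres:pass@localhost:5432/postgres -c 'SELECT 1'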
I tried the nohup method, but when I ran cat nohup.txt it errored and said the file didn't exist, so maybe I'm missing something there. If any of you were able to get past this, I'd appreciate your advice!
Hi Mike,
I'm trying to run
sh load_tweets_parallel.sh
after modifying the docker-compose.yml file, and I am running into this error:

I've double-checked that the ports are correct and have brought down all the containers, removed them, and pruned. Is there something I can do to troubleshoot this? Thank you!
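For reference, the teardown cycle I ran was roughly this (a sketch; exact flags, and whether your install uses docker-compose or docker compose, will vary):
docker-compose down      # stop and remove this project's containers
docker ps -a             # confirm nothing from the project is still listed
docker system prune      # clear out stopped containers and dangling networks
docker-compose up -d     # bring the stack back up in the background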