Closed: arkaroy14 closed this issue 2 months ago.
I am experiencing the same issue. Once my upload has "finished", meaning all bytes were uploaded somewhere to Telegram's servers, the process just hangs indefinitely.
Same here; using Telethon solves the problem.
I've had no luck with Telethon. With no cryptg installed, the upload is slow enough that it works 90% of the time, but with cryptg installed it always fails if the file is big enough. (Granted you have a fast upload speed; I am using a 1 Gbit uplink on my server.)
Here's my code:

```bash
# Create temp dir, make python venv, install telethon and cryptg
mkdir telethon && \
cd telethon && \
python3 -m venv pyvenv && \
source pyvenv/bin/activate && \
pip3 install telethon cryptg

# Create a 2000 MiB test file
dd if=/dev/urandom of=blob bs=1024k count=2000 status=progress
```
This is a script I quickly put together to try to handle the flood wait errors, but as you can see when you test it, the upload always restarts from scratch after waiting out the timeout.
```python
from telethon import TelegramClient
from telethon.errors import *
import os
import time
import asyncio

api_id = ''
api_hash = ''

client = TelegramClient('telethon-custom', api_id, api_hash)

async def upload_file(file_path, resume_at=0):
    file_size = os.path.getsize(file_path)
    last_time = time.time()
    last_uploaded = resume_at
    print(f'Last uploaded: {last_uploaded}')

    def progress_callback(current, total):
        nonlocal last_time, last_uploaded
        current_time = time.time()
        elapsed_time = current_time - last_time
        bytes_uploaded_since_last = current - last_uploaded
        upload_speed = bytes_uploaded_since_last / elapsed_time / (1024 * 1024)  # MiB/s
        current_mib = current / (1024 * 1024)
        total_mib = total / (1024 * 1024)
        percent = current * 100 / total
        print(f'\rUploaded {file_path} - {current_mib:.2f} MiB out of {total_mib:.2f} MiB: {percent:.2f}% at {upload_speed:.2f} MiB/s', end='')
        last_time = current_time
        last_uploaded = current

    try:
        file = await client.upload_file(file_path, progress_callback=progress_callback, part_size_kb=512, file_size=file_size)
        print(file)
        await client.send_file('me', file)
    except FloodError as e:
        print(f'\n{e}')
        seconds = int(str(e).split('_')[-1].split()[0])
        print(f"\nGot FloodWaitError. Waiting for {seconds} seconds...")
        await asyncio.sleep(seconds)
        await upload_file(file_path, resume_at=last_uploaded)

async def main():
    await client.start()
    files_to_upload = [
        'blob',
    ]
    for file_path in files_to_upload:
        print(f'Uploading {file_path}...')
        await upload_file(file_path)
        print()  # Print an empty line because the upload progress won't print one after it has finished

with client:
    client.loop.run_until_complete(main())
    print()
```
- **Finally, execute it:**
```bash
python3 telethon-upload.py
```

Output (as you can see, the upload always starts from scratch after waiting):

```
Uploading blob...
Last uploaded: 0
Uploaded blob - 60.00 MiB out of 2000.00 MiB: 3.00% at 7.23 MiB/s
RPCError 420: FLOOD_PREMIUM_WAIT_7 (caused by SaveBigFilePartRequest)
Got FloodWaitError. Waiting for 7 seconds...
Last uploaded: 62914560
Uploaded blob - 13.00 MiB out of 2000.00 MiB: 0.65% at 4.11 MiB/s^CTraceback (most recent call last):
```
Just let Telethon handle the flood wait; this is my code:
```python
import sys

from telethon import TelegramClient

# tg_api_id and tg_api_hash are your own API credentials
vid_path = sys.argv[1]
client = TelegramClient('test_telethon', tg_api_id, tg_api_hash)
client.start()
client.send_file('me', vid_path)
client.disconnect()
```
It returns:
```
2024-06-07 08:58:55,492 | INFO | Using TgCrypto
2024-06-07 08:58:56,048 | INFO | libssl detected, it will be used for encryption
2024-06-07 08:58:57,764 | INFO | Connecting to 91.108.56.133:443/TcpFull...
2024-06-07 08:58:57,845 | INFO | Connection to 91.108.56.133:443/TcpFull complete!
2024-06-07 08:58:58,379 | INFO | Uploading file of 247711036 bytes in 945 chunks of 262144
2024-06-07 08:59:08,615 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait
2024-06-07 08:59:24,314 | INFO | Sleeping for 7s (0:00:07) on SaveBigFilePartRequest flood wait
2024-06-07 08:59:42,313 | INFO | Sleeping for 5s (0:00:05) on SaveBigFilePartRequest flood wait
2024-06-07 08:59:57,316 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait
2024-06-07 09:00:14,140 | INFO | Sleeping for 5s (0:00:05) on SaveBigFilePartRequest flood wait
2024-06-07 09:00:32,593 | INFO | Sleeping for 3s (0:00:03) on SaveBigFilePartRequest flood wait
2024-06-07 09:00:48,437 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait
2024-06-07 09:01:07,598 | INFO | Sleeping for 3s (0:00:03) on SaveBigFilePartRequest flood wait
2024-06-07 09:01:23,117 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait
2024-06-07 09:01:39,126 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait
2024-06-07 09:01:53,316 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait
2024-06-07 09:02:11,517 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait
2024-06-07 09:02:25,717 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait
2024-06-07 09:02:42,540 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait
2024-06-07 09:03:01,297 | INFO | Sleeping for 3s (0:00:03) on SaveBigFilePartRequest flood wait
2024-06-07 09:03:47,966 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait
2024-06-07 09:04:04,825 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait
2024-06-07 09:04:20,825 | INFO | Disconnecting from 91.108.56.133:443/TcpFull...
2024-06-07 09:04:20,825 | INFO | Disconnection from 91.108.56.133:443/TcpFull complete!
```
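For what it's worth, Telethon only sleeps through flood waits shorter than its `flood_sleep_threshold` (60 seconds by default), and only for flood-wait errors it recognizes, which is likely why the plain `send_file` call above just logs "Sleeping for Ns" instead of raising. Below is a minimal sketch of raising that threshold; the credentials are placeholders, and whether this also covers FLOOD_PREMIUM_WAIT depends on the installed Telethon version, so treat it as a sketch rather than a guaranteed fix:

```python
from telethon import TelegramClient

api_id = 123456          # placeholder: your API ID
api_hash = 'your_hash'   # placeholder: your API hash

# Auto-sleep on any flood wait shorter than 24 hours instead of raising.
client = TelegramClient('session', api_id, api_hash,
                        flood_sleep_threshold=24 * 60 * 60)

async def main():
    await client.send_file('me', 'blob')

with client:
    client.loop.run_until_complete(main())
```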
@Cosmos-Break My problems with Telethon are: 1) very slow speed compared to Pyrogram, which gives me around 7-15+ MB/s and sometimes even 40 MB/s; 2) I have already implemented a large portion of my code with Pyrogram, so switching everything over to Telethon looks like a lot of extra work; 3) I use TgCrypto with Pyrogram, so the speed is very good there, but I am not able to use it with Telethon. If you can share the crypto package you use with Telethon, I can test the speed and compare as well.
What I feel is that if Telethon can handle the SaveBigFilePartRequest flood properly, Pyrogram should also be able to handle it.
As this project seems a bit more up to date than the original Pyrogram, I thought someone here could help with a source-code fix for this issue so the FloodWait is handled properly.
Anyway, I will try to find out how Telethon handles this type of flood wait properly. Thanks for your detailed reply.
I checked the Pyrogram code before: it uses 4 workers to upload large files, which may be the reason for Pyrogram's high upload speed.
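If that's the case, each of those workers can hit its own SaveBigFilePart flood wait, so each one has to sleep and retry its part independently. This is not Pyrogram's actual code, just a rough, self-contained sketch of that worker-pool pattern with hypothetical names (`save_part`, `FloodWait`):

```python
import asyncio
import random

class FloodWait(Exception):
    """Hypothetical stand-in for the server's flood-wait error."""
    def __init__(self, value):
        self.value = value

async def save_part(part_no):
    # Stand-in for the real SaveBigFilePart request.
    if random.random() < 0.1:      # pretend the server occasionally throttles us
        raise FloodWait(value=1)
    await asyncio.sleep(0.05)      # pretend the part takes some time to upload

async def worker(queue):
    while True:
        part_no = await queue.get()
        try:
            while True:
                try:
                    await save_part(part_no)
                    break
                except FloodWait as e:          # sleep, then retry the same part
                    await asyncio.sleep(e.value + 1)
        finally:
            queue.task_done()

async def upload(parts, workers=4):
    queue = asyncio.Queue()
    tasks = [asyncio.create_task(worker(queue)) for _ in range(workers)]
    for part_no in range(parts):
        queue.put_nowait(part_no)
    await queue.join()                          # wait until every part is saved
    for t in tasks:
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)

asyncio.run(upload(parts=32))
```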
Just let telethon handle the flood wait, this is my code:
from telethon import TelegramClient vid_path = sys.argv[1] client = TelegramClient('test_telethon', tg_api_id, tg_api_hash) client.start() client.send_file('me', vid_path) client.disconnect()
It returns:
2024-06-07 08:58:55,492 | INFO | Using TgCrypto 2024-06-07 08:58:56,048 | INFO | libssl detected, it will be used for encryption 2024-06-07 08:58:57,764 | INFO | Connecting to 91.108.56.133:443/TcpFull... 2024-06-07 08:58:57,845 | INFO | Connection to 91.108.56.133:443/TcpFull complete! 2024-06-07 08:58:58,379 | INFO | Uploading file of 247711036 bytes in 945 chunks of 262144 2024-06-07 08:59:08,615 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait 2024-06-07 08:59:24,314 | INFO | Sleeping for 7s (0:00:07) on SaveBigFilePartRequest flood wait 2024-06-07 08:59:42,313 | INFO | Sleeping for 5s (0:00:05) on SaveBigFilePartRequest flood wait 2024-06-07 08:59:57,316 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait 2024-06-07 09:00:14,140 | INFO | Sleeping for 5s (0:00:05) on SaveBigFilePartRequest flood wait 2024-06-07 09:00:32,593 | INFO | Sleeping for 3s (0:00:03) on SaveBigFilePartRequest flood wait 2024-06-07 09:00:48,437 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait 2024-06-07 09:01:07,598 | INFO | Sleeping for 3s (0:00:03) on SaveBigFilePartRequest flood wait 2024-06-07 09:01:23,117 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait 2024-06-07 09:01:39,126 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait 2024-06-07 09:01:53,316 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait 2024-06-07 09:02:11,517 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait 2024-06-07 09:02:25,717 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait 2024-06-07 09:02:42,540 | INFO | Sleeping for 6s (0:00:06) on SaveBigFilePartRequest flood wait 2024-06-07 09:03:01,297 | INFO | Sleeping for 3s (0:00:03) on SaveBigFilePartRequest flood wait 2024-06-07 09:03:47,966 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait 2024-06-07 09:04:04,825 | INFO | Sleeping for 4s (0:00:04) on SaveBigFilePartRequest flood wait 2024-06-07 09:04:20,825 | INFO | Disconnecting from 91.108.56.133:443/TcpFull... 2024-06-07 09:04:20,825 | INFO | Disconnection from 91.108.56.133:443/TcpFull complete!
Doesn't work for me. First of all, I had to make some changes to get this snippet to run properly. Here's my code:
```python
from telethon import TelegramClient
import sys

api_id = ''
api_hash = ''

async def send_video(vid_path):
    await client.start()
    await client.send_file('me', vid_path)
    await client.disconnect()

if __name__ == '__main__':
    client = TelegramClient('telethon-custom', api_id, api_hash)
    vid_path = sys.argv[1]
    client.loop.run_until_complete(send_video(vid_path))
```
vid_path is my 2000 MiB "blob" file again.
Here's the output:
```
Traceback (most recent call last):
  File "test.py", line 15, in <module>
    client.loop.run_until_complete(send_video(vid_path))
  File "/home/PunchEnergyFTW/.local/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "test.py", line 9, in send_video
    await client.send_file('me', vid_path)
  File "/telethon_venv/lib/python3.8/site-packages/telethon/client/uploads.py", line 408, in send_file
    file_handle, media, image = await self._file_to_media(
  File "/telethon_venv/lib/python3.8/site-packages/telethon/client/uploads.py", line 744, in _file_to_media
    file_handle = await self.upload_file(
  File "/telethon_venv/lib/python3.8/site-packages/telethon/client/uploads.py", line 678, in upload_file
    result = await self(request)
  File "/telethon_venv/lib/python3.8/site-packages/telethon/client/users.py", line 30, in __call__
    return await self._call(self._sender, request, ordered=ordered)
  File "/telethon_venv/lib/python3.8/site-packages/telethon/client/users.py", line 87, in _call
    result = await future
telethon.errors.rpcbaseerrors.FloodError: RPCError 420: FLOOD_PREMIUM_WAIT_7 (caused by SaveBigFilePartRequest)
```
Telegram premium solves everything.
> Telegram premium solves everything.

Which sadly is not available in Germany.
The issue is the new FLOOD_PREMIUM_WAIT error; there is nothing in the code that can handle the wait and then continue. I found a way to get it working with Telethon, but I need Pyrogram 🥲
> Telegram premium solves everything.
Absolutely, Telegram Premium effectively resolves all the mentioned issues.
Can anyone check by modifying the portion below? (I have already tested this with official Pyrogram; it works fine and is able to handle the FloodWait.)
Replace:

```python
try:
    await session.invoke(data)
except Exception as e:
    log.exception(e)
```
With:

```python
try:
    await session.invoke(data)
except pyrogram.errors.exceptions.flood_420.FloodPremiumWait as e:
    log.warning(f"FloodWait: Waiting for {e.value} seconds.")
    await asyncio.sleep(e.value + 1)  # Wait for the suggested time plus a buffer
    await session.invoke(data)  # Retry after waiting
except Exception as e:
    log.exception(f"An unexpected error occurred: {e}")
```
Tested and working: it automatically handles the flood and waits it out. Now use a logger in your code and set the level to ERROR so nothing gets printed.
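For the "set the level to ERROR" part, a minimal sketch using the standard logging module; the `"pyrogram"` logger name is the library's default namespace, so adjust it if your fork uses a different one:

```python
import logging

# Show only errors from Pyrogram so the per-part FLOOD_PREMIUM_WAIT warnings
# no longer flood the terminal; your own prints/progress_callback still work.
logging.basicConfig(level=logging.ERROR)
logging.getLogger("pyrogram").setLevel(logging.ERROR)
```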
Checklist

- I installed the latest development version with `pip3 install -U https://github.com/pyrogram/pyrogram/archive/master.zip` and reproduced the issue using the latest development version.

Description
I am trying to upload a video, but I am getting lots of [420 FLOOD_PREMIUM_WAIT_X] messages; the terminal fills up, so it is difficult to check the output.
Steps to reproduce
These spam messages only show up when I use progress_callback. I just want to hide the spam while still using progress_callback.
Code example
Logs