PipedreamHQ / pipedream

Connect APIs, remarkably fast. Free for developers.
https://pipedream.com
8.32k stars · 5.27k forks

[BUG] ECONNRESET error on python code steps #5086

Closed ctrlaltdylan closed 10 months ago

ctrlaltdylan commented 1 year ago

Describe the bug

Python code steps fail with an ECONNRESET error and no additional debugging information.

It appears to be widespread with no single known cause; there are potentially multiple causes.

Expected behavior

We expect to be able to run valid Python code, or to have more helpful error messages surfaced when a runtime error occurs.

Additional context

If you have a specific example, please include the code step code below to help us identify the cause.

Update Feb 22nd 2023

A new version of our Python execution environment for workflows has been released. I have tested the scenarios below where possible.

The error handling of various Python exceptions has been fixed.


If you continue to have issues, please share the Python code that throws this error. Code without authentication or 3rd party app integrations is preferable, so that we can all reproduce the bug.

NebularNerd commented 1 year ago

I was getting that and other errors until I indented my code under the initial def handler. As far as I could work out, this is a must do:

def handler(pd: "pipedream"):
    # Do all stuff under this indent
    string = 'Hello everybody'
    print (string)
    return string
shuvocasanova commented 1 year ago

Hi, here is the code I am having the read ECONNRESET problem with:

from dydx3 import Client
from dydx3 import constants
from dydx3 import epoch_seconds_to_iso
import time
from dydx3.constants import TIME_IN_FORCE_GTT
from dydx3.constants import TIME_IN_FORCE_FOK

def handler(pd: "pipedream"):
    # Reference data from previous steps
    market = pd.steps["trigger"]["event"]["body"]["market"]
    side = pd.steps["trigger"]["event"]["body"]["side"]
    order_type = pd.steps["trigger"]["event"]["body"]["order_type"]
    size = pd.steps["trigger"]["event"]["body"]["size"]
    price = pd.steps["trigger"]["event"]["body"]["price"]
    limit_fee = pd.steps["trigger"]["event"]["body"]["limit_fee"]
    client_id = pd.steps["trigger"]["event"]["body"]["client_id"]

    # Authentication
    ########################## YOU FILL THIS OUT #################
    _private_key = ''
    # _private_key is optional and may be set to ''
    # (hardware wallets do not generally provide this information)
    # If _private_key is set, you do not need to set
    # _api_key/_api_secret/_api_passphrase/_stark_private_key
    _api_key = '8a7d647a-f40d-b584-1aed-f91d30f3cdc0'
    _api_secret = 'FfeYDCn4d8U-YX0JR64V_u2zh7ocTNJYnP84sX1l'
    _api_passphrase = 'cpjbF72bf-pWoC-DComT'
    _stark_private_key = '0662e179af2bef34ba68ddda726d831f1ba619904e93da0f2bbcce9d54ef8611'
    _eth_address = '0x2ea8AEa8b848C60e4E387ddEdb3003020e825478'
    _network_id = str(constants.NETWORK_ID_GOERLI)
    # _network_id is set to either str(constants.NETWORK_ID_MAINNET)
    # or str(constants.NETWORK_ID_GOERLI)
    _api_host = constants.API_HOST_GOERLI
    # _api_host is set to either constants.API_HOST_MAINNET
    # or constants.API_HOST_GOERLI
    ##############################################################

    client = Client(
        host=_api_host,
        network_id=_network_id,
        api_key_credentials={
            'key': _api_key,
            'secret': _api_secret,
            'passphrase': _api_passphrase
        }
    )
    client.stark_private_key = _stark_private_key

    get_account_result = client.private.get_account(
        ethereum_address=_eth_address
    )
    account = get_account_result.data['account']
    one_minute_from_now_iso = epoch_seconds_to_iso(time.time() + 700)

    all_orders = client.private.get_orders(
        market="BTC-USD",
    )

    # Return data for use in future steps
    return all_orders.data

jc-tzn commented 1 year ago

I'm also getting this error on a Python step, and sometimes the following errors as well: "Error: connect ECONNREFUSED" and "connect ECONNREFUSED /tmp/nano--7-dnP1oL521oLZ-.sock". They all occur seemingly at random on the same step, so I assumed they might be related to the same underlying problem.

hwestphal commented 1 year ago

I've got problems with 2 different Python steps having not much in common. Both fail from time to time with ECONNRESET:

Step extract_data:

# pipedream add-package beautifulsoup4

from bs4 import BeautifulSoup
import re

def normalize(value):
    return re.sub(r'\s+', ' ', value.strip())

def extractValue(soup, label):
    fields = [n.parent for n in soup.select("td.field > span.label") if normalize(n.string) == label]
    if len(fields) == 1:
        values = [normalize(n.string) for n in fields[0].select("span.value")]
        if len(values) == 1:
            return values[0]
    return ""

def handler(pd: "pipedream"):
    soup = BeautifulSoup(pd.steps["trigger"]["event"]["body"]["html"])

    return {
        "fullName": extractValue(soup, "Ihr vollständiger Name"),
        "email": extractValue(soup, "Ihre E-Mail-Adresse"),
        "phone": extractValue(soup, "Ihre Telefonnummer"),
        "make": extractValue(soup, "Marke"),
        "model": extractValue(soup, "Modell"),
        "modelDescription": extractValue(soup, "Variante"),
        "vin": extractValue(soup, "Fahrgestellnummer / FIN (zu finden im Fahrzeugschein unter „Feld E“)"),
        "mileage": extractValue(soup, "Kilometerstand: [numeric]"),
        "priceSuggestion": extractValue(soup, "Haben Sie schon eine Preisvorstellung?")
    }

Step data_to_xml:

import base64
from xml.sax.saxutils import escape

def object_to_xml(data, root='dealerdeskLead'):
    if isinstance(data, dict):
        xml = f'<{root}>'
        for key, value in data.items():
            xml += object_to_xml(value, key)
        xml += f'</{root}>'
    elif isinstance(data, (list, tuple, set)):
        xml = ''
        for item in data:
            xml += object_to_xml(item, root)
    else:
        xml = f'<{root}>{escape(str(data))}</{root}>'
    return xml

def handler(pd: "pipedream"):
    data = pd.steps["extract_data"]["$return_value"]
    xml = object_to_xml(
        {
            "contact": {
                "fullName": data["fullName"],
                "email": data["email"],
                "phone": data["phone"]
            },
            "contactVehicle": {
                "make": data["make"],
                "model": data["model"],
                "modelDescription": data["modelDescription"],
                "vin": data["vin"],
                "mileage": data["mileage"],
                "priceSuggestion": data["priceSuggestion"]
            },
            "acquisition": {
                "tradeInRequested": "true"
            }
        })

    return {"attachment": base64.b64encode(xml.encode('utf-8')).decode('ascii')}
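
Side note: the data_to_xml helper behaves correctly outside Pipedream, which is consistent with the failures being intermittent and environmental rather than a code bug. A standalone check (the helper copied verbatim, exercised with toy input):

```python
from xml.sax.saxutils import escape

# Copy of the step's helper so this check runs on its own.
def object_to_xml(data, root='dealerdeskLead'):
    if isinstance(data, dict):
        xml = f'<{root}>'
        for key, value in data.items():
            xml += object_to_xml(value, key)
        xml += f'</{root}>'
    elif isinstance(data, (list, tuple, set)):
        xml = ''
        for item in data:
            xml += object_to_xml(item, root)
    else:
        xml = f'<{root}>{escape(str(data))}</{root}>'
    return xml

print(object_to_xml({"contact": {"fullName": "Jane & Co"}}))
# → <dealerdeskLead><contact><fullName>Jane &amp; Co</fullName></contact></dealerdeskLead>
```
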
vunguyenhung commented 1 year ago

Reported by another user here: https://pipedream-users.slack.com/archives/CPTJYRY5A/p1675373872775859

# pipedream add-package supabase
from supabase import create_client, Client

def handler(pd: "pipedream"):
  # Reference data from previous steps
  print(pd.steps["trigger"]["context"]["id"])
  # Return data for use in future steps
  return {"foo": {"test":True}}
iterlace commented 1 year ago

Same issue. Random ECONNRESET errors, even on a step which worked fine 5 minutes ago.

iterlace commented 1 year ago

I've also got some Unexpected EOF errors from your executor, which seem to have caused this error on a subsequent execution:

Command failed: pipreqs --force
ERROR: Failed on file: /tmp/__pdg__/dist/code/33569c80db70521534c15acdd8169f1b013d099674a178ea43a5d2077351498b/code.py
Traceback (most recent call last):
  File "/var/lang/bin/pipreqs", line 8, in <module>
    sys.exit(main())
  File "/var/lang/lib/python3.9/site-packages/pipreqs/pipreqs.py", line 488, in main
    init(args)
  File "/var/lang/lib/python3.9/site-packages/pipreqs/pipreqs.py", line 415, in init
    candidates = get_all_imports(input_path,
  File "/var/lang/lib/python3.9/site-packages/pipreqs/pipreqs.py", line 131, in get_all_imports
    raise exc
  File "/var/lang/lib/python3.9/site-packages/pipreqs/pipreqs.py", line 117, in get_all_imports
    tree = ast.parse(contents)
  File "/var/lang/lib/python3.9/ast.py", line 50, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 7
    return None
    ^
SyntaxError: invalid syntax

(pipreqs has failed to build an AST, because the source file is broken)

JofoJ commented 1 year ago

Any ideas on what's happening here? I'm still getting this error. Everything works great when testing, then when I deploy I get an error that says "read ECONNRESET" with no other context.

JofoJ commented 1 year ago

The only way I can recreate the error is to run the entire workflow at once. If I test the step individually I don't get this error. At least when testing the workflow in the edit environment it provides more context. Inserting my error details below.

Error: read ECONNRESET
    at __node_internal_captureLargerStackTrace (internal/errors.js:412:5)
    at __node_internal_errnoException (internal/errors.js:542:12)
    at Pipe.onStreamRead (internal/stream_base_commons.js:209:20)

iterlace commented 1 year ago

Any ideas on what's happening here? I'm still getting this error. Everything works great when testing, then when I deploy I get an error that says "read ECONNRESET" with no other context.

I have the opposite problem :) This error randomly occurs when I'm testing a pipeline, but once it's finally deployed, everything works fine.

mcgraf commented 1 year ago

I keep getting this error on my workflow. If I create a new workflow with just a scheduler as a trigger and the Python step, it works OK, but if I create a new workflow with something else, I get this error even on the example code.

ctrlaltdylan commented 1 year ago

def handler(pd: "pipedream"):
    # Do all stuff under this indent
    string = 'Hello everybody'
    print(string)
    return string

Resolved with the latest Python execution environment.
lachied522 commented 1 year ago

I have been getting this error repeatedly and seemingly at random. Workflows that worked fine the day before suddenly get the ECONNRESET error. I thought it might be related to the openai library I am using, but I get the same error across all my workflows that use Python, even without the openai library. I also saw a suggestion somewhere to increase workflow memory, which I maxed out, but I still got the error. My workflows sometimes work fine, then I get hit by this error repeatedly until I give up. When I come back later, it works fine.

I'm a complete noob at this, so I feel embarrassed sharing my code, but here is an example of something that gives me this error (again, it seems to happen across all my Python steps).

def handler(pipedream: "pipedream"):
  user_record_summary = pipedream.steps["get_user_record"]["$summary"]
  task_record_summary = pipedream.steps["get_task_record"]["$summary"]
  idea_record_summary = pipedream.steps["get_idea_record"]["$summary"]

  if "no data found" not in user_record_summary.lower():
    user_jobs_list = pipedream.steps["get_user_record"]["$return_value"]["jobs"]
    user_jobs = []
    for (index, job) in enumerate(user_jobs_list):
      job_samples = []
      for sample in [d for d in job["data"] if d["feedback"]=="user-upload"]:
        prompt = sample["prompt"]
        completion = sample["completion"]
        job_samples.append({"prompt": prompt, "completion": completion})
      user_jobs.append({"name": job["name"], "word_count": job["word_count"], "data": job_samples})
  else: 
    user_jobs = []

  if "no data found" not in task_record_summary.lower():
    task_record_list = pipedream.steps["get_task_record"]["$return_value"]
    if not isinstance(task_record_list, list):
      #if return value only contains one record, it is returned as a dict
      task_record_list = [task_record_list]
  else: 
    task_record_list = []

  if "no data found" not in idea_record_summary.lower():
    idea_record_list = pipedream.steps["get_idea_record"]["$return_value"]
    if not isinstance(idea_record_list, list):
      idea_record_list = [idea_record_list]
  else: 
    idea_record_list = []

  response_dict = {
    "user": user_jobs,
    "tasks": task_record_list[::-1],
    "ideas": idea_record_list[::-1]
  }

  return response_dict
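
Unrelated to the ECONNRESET itself: the repeated dict-vs-list normalization in the step above can be collapsed into one small helper. A sketch (ensure_list is a made-up name, not a Pipedream API):

```python
def ensure_list(value):
    # Steps that return exactly one record give back a dict instead of
    # a list; normalize both shapes (and None) to a list.
    if value is None:
        return []
    return value if isinstance(value, list) else [value]

print(ensure_list({"id": 1}))   # → [{'id': 1}]
print(ensure_list([1, 2, 3]))   # → [1, 2, 3]
```
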
GiladL-IVLead commented 1 year ago

Here is my issue:

import json
import requests
import re
import numpy as np

def handler(pd: "pipedream"):
    token = f'{pd.inputs["hubspot"]["$auth"]["oauth_access_token"]}'
    authorization = f'Bearer {token}'
    headers = {"Authorization": authorization}

    p = 0
    d = pd.steps["hubspot"]["$return_value"]["contacts"]
    cntc = pd.steps["hubspot"]["$return_value"]["contacts"]

    n = 8
    n = len(cntc)
    n -= 1
    lineid = pd.steps["extract_number"]["$return_value"]
    headline = pd.steps["get_values_in_range"]["$return_value"][0][2]
    lnk = pd.steps["get_values_in_range"]["$return_value"][0][6]
    heb = r'[\u0590-\u05FF]+'

    lnk = 'google.com'

    data = []
    while p < n:
        firstname = d[p]["properties"]["firstname"]["value"]
        lastname = d[p]["properties"]["lastname"]["value"]
        email = d[p]["identity-profiles"][0]["identities"][0]["value"]

        r = requests.get('https://api.hubapi.com/contacts/v1/contact/email/chen@iv-lead.com/profile?property=hebrew_first_name', headers=headers)
        r = requests.get('https://api.hubapi.com/contacts/v1/contact/email/{}/profile?property=hebrew_first_name'.format(email), headers=headers)
        json_data = r.json()
        hebrew_first_name = json_data['properties']['hebrew_first_name']['value']
        body = pd.steps["get_values_in_range"]["$return_value"][0][3]
        body = body.replace("\n\n", "</p><p>").replace("\n", "<br>")
        match = re.search(heb, body)
        if match is not None:
            firstname = hebrew_first_name
            body = '<div style="text-align: right;">' + body + '</div>'
        if firstname == "TBD":
            firstname = ""
        if "[NAME]" in body:
            body = body.replace("[NAME]", firstname)
        if "[שם]" in body:
            body = body.replace("[שם]", firstname)
        if "link" in body:
            body = body.replace("link", f"<a href='{lnk}'>link</a>")
        if "קישור" in body:
            body = body.replace("קישור", f"<a href='{lnk}'>קישור</a>")
        html = f"<p>{body}<p>"
        ar = [lineid, firstname, lastname, email, headline, html]
        data.append(ar)
        p += 1

    return data
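
As a quick sanity check, the Hebrew-detection regex used in that step behaves as intended in isolation (nothing Pipedream-specific here):

```python
import re

heb = r'[\u0590-\u05FF]+'  # Unicode block covering Hebrew letters

print(bool(re.search(heb, 'שלום [NAME]')))  # → True
print(bool(re.search(heb, 'hello [NAME]')))  # → False
```
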

nekitonn commented 1 year ago

I have the same issue with Python.

Error: read ECONNRESET
    at __node_internal_captureLargerStackTrace (internal/errors.js:412:5)
    at __node_internal_errnoException (internal/errors.js:542:12)
    at Pipe.onStreamRead (internal/stream_base_commons.js:209:20)

import re
import openai
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

def handler(pd: "pipedream"):
    try:
        results = pd.steps["Get_papers_CORE"]["$return_value"]["results"]
        if results is None:
            return {"sources": False}
        else:
            # Set OpenAI API key
            openai.api_key = f'{pd.inputs["openai"]["$auth"]["api_key"]}'

            # Filter sources with OpenAccessPDF and URL ends with .pdf
            text_sources = [entry for entry in results if 'fullText' in entry]

            # if there are no pdf sources, set no_sources to True
            if not text_sources:
                return {"sources": False}
            else:
                # take the first entry as the source to be used in the next steps
                source = text_sources[0]
                print(text_sources)
                print(source)

            # get the full text from the source
            text = source['fullText']
            text = ' '.join(text.split()[200:])

        # Tokenize text and divide to parts, so we don't exceed 2048 tokens limit

        tokenizer = Tokenizer(BPE(unk_token='[UNK]'))
        tokenizer.pre_tokenizer = Whitespace()
        trainer = BpeTrainer(special_tokens=['[UNK]', '[CLS]', '[SEP]', '[PAD]', '[MASK]'])
        tokenizer.train_from_iterator([text], trainer=trainer)
        tokens = tokenizer.encode(text)
        prompt=f'Give 5 key insights for essay about {pd.steps["trigger"]["event"]["query"]["topic"]} from this part of academic paper:"""\n\n"""\nInsight 1 - '
        max_tokens = 2040 - len(tokenizer.encode(prompt).ids)-1000
        print(max_tokens)
        num_parts = min(len(tokens.ids) // max_tokens + 1, 10)

        parts = []
        for i in range(num_parts):
            print(i)
            start = i * max_tokens
            end = min((i + 1) * max_tokens, len(tokens.ids))
            part_tokens = tokens.ids[start:end]
            part_text = tokenizer.decode(part_tokens).replace('[UNK]', '')
            parts.append(part_text)

        # Summarize each part using OpenAI API
        summaries = []
        for part in parts:
            result = openai.Completion.create(
                engine='text-curie-001',
                prompt=f'Give 5 key insights for essay about {pd.steps["trigger"]["event"]["query"]["topic"]} from this part of academic paper:"""\n{part}\n"""\nInsight 1 - ',
                max_tokens=120,
                temperature=0.5,
                n=1,
                stop=None,
                timeout=30
            )
            summary = result.choices[0].text.strip()
            summaries.append(summary)

        # Join all summaries into one string
        full_summary = ' '.join(summaries)
        # Remove Insight {number} instances
        full_summary = re.sub(r'Insight \d+ - ', '- ', full_summary)

        # Start every paragraph from dash "- "
        full_summary = re.sub(r'\n\n', '\n- ', full_summary)

        # Use full_summary in next steps
        return({
        "sources": True,
        "full_summary": full_summary
        })
    except Exception as e:
        print(f"An error occurred: {e}")
        return {"sources": False}

EDIT: It seems like it's working now. I copied the workflow this code was in, and it looks like it's working so far.

EDIT2: :man_facepalming: It's ridiculous; this error came back in 10 minutes. I also got an EPIPE error.

What I did after it worked without errors:

kyleschiess commented 1 year ago

I haven't touched this code since I started getting these errors. This is for a step that pulls Salesforce creds from a connection:

import json
import os
import requests
from simple_salesforce import Salesforce
import traceback

def alert_workflow_error(error):
    slack_bot_token = os.environ['SLACK_BOT_TOKEN']
    fa_webhook = os.environ['FA_WEBHOOK']
    headers = {
        'Content-type': 'application/json',
        'Authorization': f'Bearer {slack_bot_token}'
    }

    body = {
        "channel": "C03N5QK3LGY",
        "text":(
            f"<@U01ER53NR1B>" + "\n" +
            "*Stats:*" + "\n" +
            f"Where: `Astro Organization Creation Handler: sfdc_creds`" + "\n" +
            f"Error: {error}" + "\n"
        )
    }
    response = requests.post(fa_webhook,data=json.dumps(body),headers=headers)

def handler(pd: 'pipedream'):
    sfdc_creds = pd.inputs['salesforce_rest_api']['$auth']
    try:
        sf = Salesforce(instance_url=sfdc_creds['instance_url'], session_id=sfdc_creds['oauth_access_token'])
    except Exception as e:
        alert_workflow_error(traceback.format_exc())
        raise Exception(e)
    return sfdc_creds

This can give me any of these errors:

I've since switched to using a Node.js version of this step.

tkrunning commented 1 year ago

I'm also seeing this read ECONNRESET error occasionally (maybe every 20-30 runs or so) with Python code steps.

Here's the code:

Example code

I've tried to throttle the workflow to only allow one run every 5 seconds, without seeing any improvement.

persicom commented 1 year ago

I have tried to import the smartsheet module using "# pipedream add-package module" and then "import module", but received an error message.


antopolskiy commented 1 year ago

I get the same error with this code trying to process a message from telegram:

def handler(pd: "pipedream"):
  # Reference data from previous steps
  is_forwarded = "forward_from_chat" in pd.steps["trigger"]["event"]["message"].keys()

  if is_forwarded:
    source_chat_username = pd.steps["trigger"]["event"]["message"]["forward_from_chat"]["username"]
    forwarded_message_id = pd.steps["trigger"]["event"]["message"]["forward_from_message_id"]
    url = f"https://t.me/{source_chat_username}/{forwarded_message_id}"
  else:
    url = ""

  if is_forwarded:
    title = pd.steps["trigger"]["event"]["message"]["forward_from_chat"]["title"]
  else:
    title = pd.steps["trigger"]["event"]["message"]["text"][:30] + "..."

  content = (
    pd.steps["trigger"]["event"]["message"]["text"] + 
    "\n\n" + 
    f'Links: {str(pd.steps["trigger"]["event"]["message"].get("entities", []))}'
  )

  # Return data for use in future steps
  return {"title": title, "content": content, "url": url}

Error:

Error
read ECONNRESET

DETAILS
Error: read ECONNRESET
    at __node_internal_captureLargerStackTrace (internal/errors.js:412:5)
    at __node_internal_errnoException (internal/errors.js:542:12)
    at Pipe.onStreamRead (internal/stream_base_commons.js:209:20)

UPD:

I discovered that for me this error depends on the input events.

Right now it works for this event:

{"context":{"id":"2NCHiatRMHFmRgHVvptT5jR6eu3","ts":"2023-03-18T17:41:11.073Z","pipeline_id":null,"workflow_id":"p_7NCPpw1","deployment_id":"d_QAsbjBKz","source_type":"TRACE","verified":false,"hops":null,"test":true,"replay":false,"owner_id":"u_YJhkBxd","platform_version":"3.36.2","workflow_name":"New Message Updates (Instant) workflow","resume":null,"trace_id":"2NCHiatRMHFmRgHVvptT5jR6eu3"},"event":{"update_id":7426612,"message":{"message_id":39,"from":{"id":97390651,"is_bot":false,"first_name":"Sergey","last_name":"Antopolskiy","username":"antopolsky","language_code":"en"},"chat":{"id":97390651,"first_name":"Sergey","last_name":"Antopolskiy","username":"antopolsky","type":"private"},"date":1679161079,"text":"test this is a test"}}}

But it doesn't work for this event:

{"context":{"id":"2NCHohlv46BDDUBN4LUkLB4Fveb","ts":"2023-03-18T17:42:00.514Z","pipeline_id":null,"workflow_id":"p_7NCPpw1","deployment_id":"d_QAsbjBKz","source_type":"TRACE","verified":false,"hops":null,"test":true,"replay":false,"owner_id":"u_YJhkBxd","platform_version":"3.36.2","workflow_name":"New Message Updates (Instant) workflow","resume":null,"trace_id":"2NCHohlv46BDDUBN4LUkLB4Fveb"},"event":{"update_id":7426613,"message":{"message_id":40,"from":{"id":97390651,"is_bot":false,"first_name":"Sergey","last_name":"Antopolskiy","username":"antopolsky","language_code":"en"},"chat":{"id":97390651,"first_name":"Sergey","last_name":"Antopolskiy","username":"antopolsky","type":"private"},"date":1679161224,"forward_from_chat":{"id":-1001329188755,"title":"Инжиниринг Данных","username":"rockyourdata","type":"channel"},"forward_from_message_id":3682,"forward_signature":"Dmitry","forward_date":1659029593,"text":"Новости из мира аналитики:\n\nНесколько статей про Metrics Store:\nHow Airbnb Achieved Metric Consistency at Scale part 1\nHow Airbnb Standardized Metric Computation at Scale part 2\nMetrics Layer & Metadata | Drew Banin (dbt Labs), Nick Handel (Transform) & Prukalpa Sankar (Atlan)\nDBT: The Metrics System\n\nДругие новости:\nMeasuring downstream impact on social networks by using an attribution framework - вообще Donwstream Impact - это очень мощная штука, мы учимся понимать какое влияние окажет конкретный канал или действие на весь путь клиенты, это уже серьезный анализ. 
Такой подход очень популярен в Амазон, действительно важная задача для серьезного Аналитика, который анализирует бизнес и принимает важные решения.\n\nПро инструменты оркестарции:\nShould You Use Apache Airflow?\n\nСудя по отзывам неплохие и недорогие курсы на русском:\nApache Airflow 2.2: практический курс\nВведение в Data Engineering: дата-пайплайны (про Luigi)\n\n\nПро ML:\nA Chat with Andrew on MLOps: From Model-centric to Data-centric AI\n\nИнструменты:\nSoda - CLI утилита для проверки данных\nDataBathing -  библиотека которая трансформирует SQL в Dataframe\nSQL Fluff - linting для SQL, популярен для DBT","entities":[{"offset":64,"length":47,"type":"text_link","url":"https://medium.com/airbnb-engineering/how-airbnb-achieved-metric-consistency-at-scale-f23cc53dea70"},{"offset":119,"length":51,"type":"text_link","url":"https://medium.com/airbnb-engineering/airbnb-metric-computation-with-minerva-part-2-9afe6695b486"},{"offset":178,"length":100,"type":"text_link","url":"https://youtu.be/1DiY546dhek"},{"offset":278,"length":25,"type":"text_link","url":"https://www.getdbt.com/coalesce-2021/keynote-the-metrics-system/"},{"offset":319,"length":80,"type":"text_link","url":"https://engineering.linkedin.com/blog/2022/measuring-downstream-impact-on-social-networks"},{"offset":749,"length":32,"type":"text_link","url":"https://medium.com/coriers/should-you-use-apache-airflow-e71c6cf7c0c4"},{"offset":836,"length":38,"type":"text_link","url":"https://startdatajourney.com/ru/course/apache-airflow-2"},{"offset":874,"length":43,"type":"text_link","url":"https://startdatajourney.com/ru/course/luigi-data-pipelines"},{"offset":940,"length":68,"type":"text_link","url":"https://youtu.be/06-AZXmwHjo"},{"offset":1021,"length":4,"type":"text_link","url":"https://docs.soda.io/soda-core/overview-main.html"},{"offset":1060,"length":11,"type":"text_link","url":"https://medium.com/walmartglobaltech/databathing-a-framework-for-transferring-the-query-to-spark-code-484957a7e049"},{"offset":1125,
"length":9,"type":"text_link","url":"https://www.sqlfluff.com/"}]}}}

UPD2: For anyone interested in alternative solutions: I rewrote my code in JS using GPT-4 and it works flawlessly. I hope they'll fix the Python blocks in the future, though.

hacksman commented 1 year ago

Error: read ECONNRESET
    at __node_internal_captureLargerStackTrace (internal/errors.js:412:5)
    at __node_internal_errnoException (internal/errors.js:542:12)
    at Pipe.onStreamRead (internal/stream_base_commons.js:209:20)

kofygoxi commented 1 year ago

I have the same issue. The pipeline has run well for the past two months, but it had an error yesterday (I manually re-ran it and it still failed).

Today, it is back to running well automatically.


dylburger commented 1 year ago

FYI — we've added extra observability internally to catch the errors that yield these ECONNRESET issues, and we've already addressed a couple of bugs that were identified as a result.

Please make a trivial change to any affected workflows and re-deploy them. This will ship your workflow with the newest debugging code, and any ECONNRESET errors will be raised to our team and addressed.

Let us also know if these workflows appear to work after testing / running them on prod. It's possible the bugs we addressed will already resolve the issue for you.

tkrunning commented 1 year ago

I haven't seen any of these errors in the past 2-3 days. Prior to that I saw 1-3 errors a day. So quite possible that you've squashed these in my case now 👍

Thank you!

MekhrubonT commented 1 year ago

The code: https://pastebin.com/shHvAjxM receives a poll vote event from Telegram, reads a file from Google Cloud, updates the content, and writes the file back. It periodically fails with:

Error: read ECONNRESET
    at __node_internal_captureLargerStackTrace (internal/errors.js:412:5)
    at __node_internal_errnoException (internal/errors.js:542:12)
    at Pipe.onStreamRead (internal/stream_base_commons.js:209:20)

It usually succeeds after a restart, but there was one time when the restarts failed as well.

There is one user, at whose vote I get this error more often: https://pastebin.com/ucEEKqAF

Failures usually come in batches: everything is OK, then I get 5-10 failures in a row, usually within a couple of minutes, and then it's back to OK again.

tlpriest commented 1 year ago

Code that was working on April 1, 2023 is now failing as of April 25, 2023. OpenAI is up.

import requests

def handler(pd: "pipedream"):
  token = f'{pd.inputs["openai"]["$auth"]["api_key"]}'
  authorization = f'Bearer {token}'
  headers = {"Authorization": authorization, "Content-type": "application/json"}

  content = pd.steps["trigger"]["event"]["content"]

  chunk_size = 3800
  output = []
  for i in range(0, len(content), chunk_size):
    chunk = content[i:i+chunk_size]
    # Do something with the current 4k chunk, for example:
    r = requests.post(
      'https://api.openai.com/v1/chat/completions', 
      json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": chunk + "\n\nCreate an approximatley 150 word summary of this content"}],
        "temperature": 0.7,
      },
      headers=headers,
      )
    print(r.json())
    output.append(r.json()["choices"][0]["message"]["content"])

  # Export the data for use in future steps
  return output
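
The chunking in that step is the standard slice-by-stride pattern and works independently of the API call; a minimal sketch with a toy chunk size:

```python
def chunk_text(content, chunk_size):
    # Slices past the end never raise IndexError,
    # so the last chunk is simply shorter.
    return [content[i:i + chunk_size]
            for i in range(0, len(content), chunk_size)]

print(chunk_text("abcdefghij", 4))  # → ['abcd', 'efgh', 'ij']
```
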
dylburger commented 10 months ago

@MekhrubonT @tlpriest Apologies for the delayed response. Are y'all still seeing ECONNRESET errors? I'm closing this general ticket since we haven't observed them since, but if you're still seeing issues, let me know and I can open up a new ticket to track and let the team know.