astral-sh / ruff

An extremely fast Python linter and code formatter, written in Rust.
https://docs.astral.sh/ruff
MIT License

Support `__builtins__.pyi` mechanism to provide additional builtins #8852

Open hassec opened 10 months ago

hassec commented 10 months ago

Sometimes one has to work in a Python environment that exposes additional builtins. In Pyright, one can provide such information via a `__builtins__.pyi` file; see their docs.

I was wondering what you would think about enabling ruff to read such a file too and then, e.g., not throw an undefined-name error for these special builtins?
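
For context, such a file is an ordinary stub placed at the project root, and Pyright treats every name it defines as if it were a builtin, available in any module without an import. A minimal sketch, using purely hypothetical names:

```python
# __builtins__.pyi, placed at the project root. Pyright treats every
# name defined here as a builtin, available in any module without an
# import. The names below are hypothetical, for illustration only.
from logging import Logger

log: Logger

def run_task(name: str) -> None: ...
```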

zanieb commented 10 months ago

This seems reasonable, but I'm unsure of the performance cost and feasibility with our current setup. cc @charliermarsh

charliermarsh commented 10 months ago

We do support defining additional builtins via the `builtins` setting, which is also consistent with Flake8. Does that help here?
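
For reference, that setting is a plain list of extra names that ruff will treat as defined; a minimal sketch in pyproject.toml, with placeholder names:

```toml
[tool.ruff]
# Extra names ruff should treat as builtins (the two below are placeholders).
builtins = ["injected_client", "runtime_context"]
```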

hassec commented 10 months ago

Thanks @charliermarsh, I currently use that option, but it requires that I keep the list in sync with the `__builtins__.pyi`, and if there is a long list of additional builtins, it clutters the pyproject.toml file quite a bit.

For the specific project I'm working on, the `__builtins__.pyi` is generated, so not having to worry about keeping the two in sync would be nice.

cpbotha commented 1 week ago

Just FYI, below is an example `__builtins__.pyi` supplied by the Databricks VSCode extension to facilitate local development against Databricks. Note that the built-in values are also typed (with the types imported) so that one gets autocomplete in the IDE.

I am able to re-add the globals to the ruff `builtins` setting, and it seems reasonably happy. It would of course be much more convenient if ruff could somehow extract and apply the list of globals from the same file (a sketch of such extraction follows the stub below).

```python
# Typings for Pylance in Visual Studio Code
# see https://github.com/microsoft/pyright/blob/main/docs/builtins.md
from databricks.sdk.runtime import *

from pyspark.sql.session import SparkSession
from pyspark.sql.functions import udf as U
from pyspark.sql.context import SQLContext

udf = U
spark: SparkSession
sc = spark.sparkContext
sqlContext: SQLContext
sql = sqlContext.sql
table = sqlContext.table
getArgument = dbutils.widgets.getArgument

def displayHTML(html): ...

def display(input=None, *args, **kwargs): ...
```
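
One way to get that convenience today (purely a sketch, not a ruff feature) is to regenerate the `builtins` list from the stub itself. Assuming the stub defines its extra names only via top-level assignments, annotated declarations, and function defs, as the file above does, something like this would extract them:

```python
"""Sketch: extract the top-level names a __builtins__.pyi defines, so a
build step can write them into ruff's `builtins` setting. Star imports
are skipped, since their names are not listed in the file itself."""
import ast
from pathlib import Path


def stub_names(path: str = "__builtins__.pyi") -> list[str]:
    tree = ast.parse(Path(path).read_text())
    names: list[str] = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            names.append(node.name)
        elif isinstance(node, ast.AnnAssign) and isinstance(node.target, ast.Name):
            names.append(node.target.id)  # e.g. `spark: SparkSession`
        elif isinstance(node, ast.Assign):
            names.extend(t.id for t in node.targets if isinstance(t, ast.Name))
    return names


print(stub_names())
```

Run against the stub above, this prints the list one would put in `builtins`: `['udf', 'spark', 'sc', 'sqlContext', 'sql', 'table', 'getArgument', 'displayHTML', 'display']`.
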
MichaReiser commented 6 days ago

Our work on red-knot (our type checker) lays the foundation for supporting this in the future.