modelscope / data-juicer

A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
Apache License 2.0

[Bug]: The quality classifier fails to run correctly #176

Closed BiqiangWang closed 8 months ago

BiqiangWang commented 9 months ago

Before Reporting

Search before reporting

OS

win11

Installation Method

from source

Data-Juicer Version

v0.1.3

Python Version

python3.9

Describe the bug

Following the quality classifier documentation, I used predict.py to predict a document's "quality" score by running python .\tools\quality_classifier\predict.py .\demos\data\demo-dataset-deduplication.jsonl .\outputs\demo-quality\demo-quality.jsonl. The full output is as follows:

24/01/12 15:47:02 WARN Shell: Did not find winutils.exe: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
24/01/12 15:47:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2024-01-12 15:47:03.388 | INFO     | tools.quality_classifier.qc_utils:init_spark:45 - Spark initialization done.
2024-01-12 15:47:03.389 | INFO     | tools.quality_classifier.qc_utils:prepare_model:65 - Preparing scorer model in [C:\Users\biqia/.cache\data_juicer\models\gpt3_quality_model]...
2024-01-12 15:47:03.792 | ERROR    | fire.core:_CallAndUpdateTrace:691 - An error has been caught in function '_CallAndUpdateTrace', process 'MainProcess' (23956), thread 'MainThread' (12776):
Traceback (most recent call last):

  File "C:\Dev\project\sources\data-juicer\tools\quality_classifier\predict.py", line 135, in <module>
    fire.Fire(predict_score)
    │    │    └ <function predict_score at 0x0000028B7E824430>
    │    └ <function Fire at 0x0000028B284BC280>
    └ <module 'fire' from 'C:\\Software\\anaconda3\\envs\\py-data-juicer\\lib\\site-packages\\fire\\__init__.py'>

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\fire\core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
                      │     │          │     │                 │        └ 'predict.py'
                      │     │          │     │                 └ {}
                      │     │          │     └ Namespace(verbose=False, interactive=False, separator='-', completion=None, help=False, trace=False)
                      │     │          └ ['.\\demos\\data\\demo-dataset-deduplication.jsonl', '.\\outputs\\demo-quality\\demo-quality.jsonl']
                      │     └ <function predict_score at 0x0000028B7E824430>
                      └ <function _Fire at 0x0000028B2A5FD700>

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\fire\core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
    │                           └ <function _CallAndUpdateTrace at 0x0000028B2A5FD820>
    └ <function predict_score at 0x0000028B7E824430>

> File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\fire\core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
                │   │          └ {}
                │   └ ['.\\demos\\data\\demo-dataset-deduplication.jsonl', '.\\outputs\\demo-quality\\demo-quality.jsonl', 'gpt3', None, 'gpt3', 't...
                └ <function predict_score at 0x0000028B7E824430>

  File "C:\Dev\project\sources\data-juicer\tools\quality_classifier\predict.py", line 116, in predict_score
    model = prepare_model(model_name=model)
            │                        └ 'gpt3'
            └ <function prepare_model at 0x0000028B7E81DE50>

  File "c:\dev\project\sources\data-juicer\tools\quality_classifier\qc_utils.py", line 67, in prepare_model
    return PipelineModel.load(real_model_path)
           │             │    └ 'C:\\Users\\biqia/.cache\\data_juicer\\models\\gpt3_quality_model'
           │             └ <classmethod object at 0x0000028B7E1E2AF0>
           └ <class 'pyspark.ml.pipeline.PipelineModel'>

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\pyspark\ml\util.py", line 369, in load
    return cls.read().load(path)
           │   │           └ 'C:\\Users\\biqia/.cache\\data_juicer\\models\\gpt3_quality_model'
           │   └ <classmethod object at 0x0000028B7E232CA0>
           └ <class 'pyspark.ml.pipeline.PipelineModel'>

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\pyspark\ml\pipeline.py", line 282, in load
    metadata = DefaultParamsReader.loadMetadata(path, self.sc)
               │                   │            │     │    └ <property object at 0x0000028B7E1E1D60>
               │                   │            │     └ <pyspark.ml.pipeline.PipelineModelReader object at 0x0000028B7E8F5730>
               │                   │            └ 'C:\\Users\\biqia/.cache\\data_juicer\\models\\gpt3_quality_model'
               │                   └ <staticmethod object at 0x0000028B7E1EF250>
               └ <class 'pyspark.ml.util.DefaultParamsReader'>

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\pyspark\ml\util.py", line 579, in loadMetadata
    metadataStr = sc.textFile(metadataPath, 1).first()
                  │  │        └ 'C:\\Users\\biqia/.cache\\data_juicer\\models\\gpt3_quality_model\\metadata'
                  │  └ <function SparkContext.textFile at 0x0000028B59AB1C10>
                  └ <SparkContext master=local[*] appName=pyspark-shell>

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\pyspark\rdd.py", line 2888, in first
    rs = self.take(1)
         │    └ <function RDD.take at 0x0000028B59A275E0>
         └ C:\Users\biqia/.cache\data_juicer\models\gpt3_quality_model\metadata MapPartitionsRDD[1] at textFile at NativeMethodAccessorI...

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\pyspark\rdd.py", line 2822, in take
    totalParts = self.getNumPartitions()
                 │    └ <function RDD.getNumPartitions at 0x0000028B59A17D30>
                 └ C:\Users\biqia/.cache\data_juicer\models\gpt3_quality_model\metadata MapPartitionsRDD[1] at textFile at NativeMethodAccessorI...

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\pyspark\rdd.py", line 952, in getNumPartitions
    return self._jrdd.partitions().size()
           │    └ JavaObject id=o44
           └ C:\Users\biqia/.cache\data_juicer\models\gpt3_quality_model\metadata MapPartitionsRDD[1] at textFile at NativeMethodAccessorI...

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\py4j\java_gateway.py", line 1322, in __call__
    return_value = get_return_value(
                   └ <function capture_sql_exception.<locals>.deco at 0x0000028B7E8DA700>

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\pyspark\errors\exceptions\captured.py", line 179, in deco
    return f(*a, **kw)
           │  │    └ {}
           │  └ ('xro45', <py4j.clientserver.JavaClient object at 0x0000028B7E825370>, 'o44', 'partitions')
           └ <function get_return_value at 0x0000028B598DA3A0>

  File "C:\Software\anaconda3\envs\py-data-juicer\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
          └ <class 'py4j.protocol.Py4JJavaError'>

py4j.protocol.Py4JJavaError: An error occurred while calling o44.partitions.
: java.lang.UnsatisfiedLinkError: 'boolean org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(java.lang.String, int)'
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:793)
        at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:1249)
        at org.apache.hadoop.fs.FileUtil.list(FileUtil.java:1454)
        at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:601)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1972)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:2014)
        at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:2180)
        at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2179)
        at org.apache.hadoop.fs.ChecksumFileSystem.listLocatedStatus(ChecksumFileSystem.java:783)
        at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:244)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:332)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:208)
        at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:291)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:287)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49)
        at org.apache.spark.rdd.RDD.$anonfun$partitions$2(RDD.scala:291)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:287)
        at org.apache.spark.api.java.JavaRDDLike.partitions(JavaRDDLike.scala:61)
        at org.apache.spark.api.java.JavaRDDLike.partitions$(JavaRDDLike.scala:61)
        at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:568)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
        at java.base/java.lang.Thread.run(Thread.java:833)

To Reproduce

No code edits were made; the run uses the dataset in the demos/data folder.

Configs

No response

Logs

(Identical to the traceback quoted in the bug description above.)

Screenshots

No response

Additional

No response

HYLcool commented 9 months ago

Hi, and thanks for your interest in and use of Data-Juicer!

We have not yet run complete tests on Windows. From the error output you posted, two lines stand out:

24/01/12 15:47:02 WARN Shell: Did not find winutils.exe: java.io.FileNotFoundException: java.io.FileNotFoundException: HADOOP_HOME and hadoop.home.dir are unset. -see https://wiki.apache.org/hadoop/WindowsProblems
......
java.lang.UnsatisfiedLinkError: 'boolean org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(java.lang.String, int)'
......

Searching on these two messages suggests the failure comes from known compatibility issues between Spark and Windows. Here are two references that may help; please take a look and try them first:

  1. https://stackoverflow.com/questions/41851066/exception-in-thread-main-java-lang-unsatisfiedlinkerror-org-apache-hadoop-io
  2. https://stackoverflow.com/questions/35652665/java-io-ioexception-could-not-locate-executable-null-bin-winutils-exe-in-the-ha

If you run into further problems, feel free to contact us. We also plan to improve our testing on Windows.
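For reference, the workaround described in those threads usually amounts to downloading a prebuilt winutils.exe (and hadoop.dll) matching your Hadoop version and pointing HADOOP_HOME at it before Spark starts. A minimal sketch, assuming the binaries were placed in a hypothetical C:\hadoop\bin:

```python
import os

# Assumption: winutils.exe and hadoop.dll for a matching Hadoop version
# have been downloaded into C:\hadoop\bin (a hypothetical location).
os.environ["HADOOP_HOME"] = r"C:\hadoop"
os.environ["PATH"] = r"C:\hadoop\bin" + os.pathsep + os.environ["PATH"]

# The variables must be set before the JVM starts, i.e. before any
# SparkSession or SparkContext is created.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
print(spark.version)
```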

BiqiangWang commented 9 months ago

Following those two threads and a few other articles, I downloaded the missing files separately, configured the environment variables, and added them to the system's dynamic-link library path; after that, the WARN Shell: Did not find winutils.exe error was resolved. But a new problem appeared: Python worker failed to connect back. Details below:

py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (WBQ executor driver): org.apache.spark.SparkException: Python worker failed to connect back.

Setting the PYSPARK_PYTHON environment variable as described in https://stackoverflow.com/questions/70571389/py4jjavaerror-an-error-occurred-while-calling-zorg-apache-spark-api-python-pyt still does not fix this; the other answers I can find seem to focus on code-level causes.
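For readers hitting the same wall: that workaround boils down to pinning both the driver and the Python workers to one interpreter before Spark starts. A minimal sketch (using sys.executable as a stand-in for the conda env's python.exe; note that it did not resolve the issue in this case):

```python
import os
import sys

# Pin the driver and the Python workers to the same interpreter.
# A driver/worker interpreter mismatch is a common cause of
# "Python worker failed to connect back." on Windows.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable
```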

HYLcool commented 9 months ago

Hi, we are currently running comprehensive tests of data-juicer on Windows and working to resolve these compatibility issues. We will update you as soon as we have further news or new findings on this problem. Thanks for your patience and understanding!

github-actions[bot] commented 8 months ago

This issue is marked as stale because there has been no activity for 21 days. Remove the stale label or add new comments, or this issue will be closed in 3 days.

github-actions[bot] commented 8 months ago

Close this stale issue.