scalameta / metals

Scala language server with rich IDE features 🚀
https://scalameta.org/metals/
Apache License 2.0

method not found during metals initialization #2132

rickyninja closed this issue 4 years ago

rickyninja commented 4 years ago

Describe the bug

I'm getting a "method not found" error during metals initialization when vim starts. It's possible vim-lsp really is sending an invalid method, but I'm unable to make that judgment.

To Reproduce

Steps to reproduce the behavior:

  1. git clone https://github.com/apache/spark.git
  2. tail -f .metals/metals.log
  3. vim project/MimaExcludes.scala
  4. See error in metals.log

Expected behavior

I expected initialization to complete without errors, and for goto-definition and similar features to work.

Additional context

The error returned via jsonrpc doesn't specify which method metals is unable to find, so I wrote a wrapper program that captures stdin and stdout to determine which method call triggers the error.
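For anyone who wants to reproduce the capture, here is a minimal sketch of such a wrapper. The `metals-vim` launcher name and the capture file path are assumptions; substitute whatever command vim-lsp is actually configured to run:

```python
"""Sketch of a stdio wrapper that records jsonrpc traffic between the
editor and the real language server. Command name and capture path are
assumptions; adjust for your setup."""
import subprocess
import sys
import threading


def pump(src, dst, log):
    """Copy bytes from src to dst, appending a copy of everything to log."""
    while True:
        # read1 returns whatever is available instead of blocking for a
        # full buffer, so headers are relayed promptly
        chunk = src.read1(4096)
        if not chunk:
            break
        dst.write(chunk)
        dst.flush()
        log.write(chunk)
        log.flush()


def wrap(cmd, capture_path):
    """Run cmd, relaying our stdin/stdout to it while recording both sides."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    with open(capture_path, "ab") as log:
        # server -> editor, relayed on a background thread
        out = threading.Thread(
            target=pump, args=(proc.stdout, sys.stdout.buffer, log), daemon=True
        )
        out.start()
        # editor -> server, relayed on the main thread
        pump(sys.stdin.buffer, proc.stdin, log)
        proc.stdin.close()
        out.join()
    return proc.wait()
```

Pointing vim-lsp at a small script that calls `wrap(["metals-vim"], "metals-vim-rpc.txt")` instead of the real launcher is enough to produce a capture like the one below (both directions interleaved in one file).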

metals.log

2020.10.10 10:37:35 INFO  started: Metals version 0.9.4 in workspace '/home/jeremys/git/spark' for client vim-lsp.
2020.10.10 10:37:35 INFO  time: initialize in 0.3s
2020.10.10 10:37:37 INFO  no build target: using presentation compiler with only scala-library
2020.10.10 10:37:37 WARN  no build target for: /home/jeremys/git/spark/project/MimaExcludes.scala
2020.10.10 10:37:37 ERROR Unexpected error initializing server
org.eclipse.lsp4j.jsonrpc.ResponseErrorException: Method not found
        at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.handleResponse(RemoteEndpoint.java:209)
        at org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.consume(RemoteEndpoint.java:193)
        at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.handleMessage(StreamMessageProducer.java:194)
        at org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.listen(StreamMessageProducer.java:94)
        at org.eclipse.lsp4j.jsonrpc.json.ConcurrentMessageProcessor.run(ConcurrentMessageProcessor.java:113)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
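For context, "Method not found" is jsonrpc error code -32601, and lsp4j raises `ResponseErrorException` when a request the server sent to the client comes back with an error reply of roughly this shape (the `id` here is made up; neither the log nor the exception says which request failed):

```json
{"jsonrpc": "2.0", "id": 2, "error": {"code": -32601, "message": "Method not found"}}
```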

jsonrpc capture

jeremys@skynet> cat metals-vim-rpc.txt
Content-Length: 1153

{"id":1,"jsonrpc":"2.0","method":"initialize","params":{"rootUri":"file:///home/jeremys/git/spark","initializationOptions":{"rootPatterns":"build.sbt"},"capabilities":{"workspace":{"configuration":true,"applyEdit":true},"textDocument":{"implementation":{"linkSupport":true},"documentSymbol":{"symbolKind":{"valueSet":[10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,1,2,3,4,5,6,7,8,9]},"hierarchicalDocumentSymbolSupport":false},"semanticHighlightingCapabilities":{"semanticHighlighting":false},"codeAction":{"codeActionLiteralSupport":{"codeActionKind":{"valueSet":["","quickfix","refactor","refactor.extract","refactor.inline","refactor.rewrite","source","source.organizeImports"]}},"dynamicRegistration":false},"completion":{"completionItem":{"snippetSupport":false,"documentationFormat":["plaintext"]},"completionItemKind":{"valueSet":[10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,1,2,3,4,5,6,7,8,9]}},"foldingRange":{"lineFoldingOnly":true},"typeDefinition":{"linkSupport":true},"typeHierarchy":false,"declaration":{"linkSupport":true},"definition":{"linkSupport":true}}},"rootPath":"/home/jeremys/git/spark","processId":32217,"trace":"off"}}Content-Length: 163

{"jsonrpc":"2.0","method":"window/logMessage","params":{"type":4,"message":"2020.10.09 17:44:14 INFO  logging to file /home/jeremys/git/spark/.metals/metals.log"}}Content-Length: 203

{"jsonrpc":"2.0","method":"window/logMessage","params":{"type":4,"message":"2020.10.09 17:44:14 INFO  started: Metals version 0.9.4 in workspace \u0027/home/jeremys/git/spark\u0027 for client vim-lsp."}}Content-Length: 130

{"jsonrpc":"2.0","method":"window/logMessage","params":{"type":4,"message":"2020.10.09 17:44:14 INFO  time: initialize in 0.34s"}}Content-Length: 1235

{"jsonrpc":"2.0","id":1,"result":{"capabilities":{"textDocumentSync":{"openClose":true,"change":1,"save":{"includeText":true}},"hoverProvider":true,"completionProvider":{"resolveProvider":true,"triggerCharacters":[".","*"]},"signatureHelpProvider":{"triggerCharacters":["(","[",","]},"definitionProvider":true,"implementationProvider":true,"referencesProvider":true,"documentHighlightProvider":true,"documentSymbolProvider":true,"workspaceSymbolProvider":true,"codeActionProvider":{"codeActionKinds":["quickfix","refactor"]},"codeLensProvider":{"resolveProvider":false},"documentFormattingProvider":true,"documentRangeFormattingProvider":true,"documentOnTypeFormattingProvider":{"firstTriggerCharacter":"\n","moreTriggerCharacter":["\""]},"renameProvider":{"prepareProvider":true},"foldingRangeProvider":true,"executeCommandProvider":{"commands":["build-import","build-restart","build-connect","sources-scan","doctor-run","compile-cascade","compile-cancel","compile-clean","bsp-switch","debug-adapter-start","goto","goto-position","new-scala-file","new-scala-project","goto-super-method","analyze-stacktrace","super-method-hierarchy","reset-choice","ammonite-start","ammonite-stop"]}},"serverInfo":{"name":"Metals","version":"0.9.4"}}}Content-Length: 52

{"method":"initialized","jsonrpc":"2.0","params":{}}Content-Length: 170416

{"method":"textDocument/didOpen","jsonrpc":"2.0","params":{"textDocument":{"uri":"file:///home/jeremys/git/spark/project/MimaExcludes.scala","version":1,"languageId":"scala","text":"/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOTICE file distributed with\n * this work for additional information regarding copyright ownership.\n * The ASF licenses this file to You under the Apache License, Version 2.0\n * (the \"License\"); you may not use this file except in compliance with\n * the License.  You may obtain a copy of the License at\n *\n *    http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\nimport com.typesafe.tools.mima.core._\nimport com.typesafe.tools.mima.core.ProblemFilters._\n\n/**\n * Additional excludes for checking of Spark's binary compatibility.\n *\n * This acts as an official audit of cases where we excluded other classes. Please use the narrowest\n * possible exclude here. MIMA will usually tell you what exclude to use, e.g.:\n *\n * ProblemFilters.exclude[MissingMethodProblem](\"org.apache.spark.rdd.RDD.take\")\n *\n * It is also possible to exclude Spark classes and packages. 
This should be used sparingly:\n *\n * MimaBuild.excludeSparkClass(\"graphx.util.collection.GraphXPrimitiveKeyOpenHashMap\")\n *\n * For a new Spark version, please update MimaBuild.scala to reflect the previous version.\n */\nobject MimaExcludes {\n\n  // Exclude rules for 3.1.x\n  lazy val v31excludes = v30excludes ++ Seq(\n    // mima plugin update caused new incompatibilities to be detected\n    // core module\n    ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.shuffle.sort.io.LocalDiskShuffleMapOutputWriter.commitAllPartitions\"),\n    ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.shuffle.api.ShuffleMapOutputWriter.commitAllPartitions\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.shuffle.api.ShuffleMapOutputWriter.commitAllPartitions\"),\n    // mllib module\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionTrainingSummary.totalIterations\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionTrainingSummary.$init$\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.labels\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.truePositiveRateByLabel\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.falsePositiveRateByLabel\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.precisionByLabel\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.recallByLabel\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.fMeasureByLabel\"),\n    
ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.fMeasureByLabel\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.accuracy\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedTruePositiveRate\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedFalsePositiveRate\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedRecall\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedPrecision\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedFMeasure\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedFMeasure\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.roc\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.areaUnderROC\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.pr\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.fMeasureByThreshold\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.precisionByThreshold\"),\n    ProblemFilters.exclude[NewMixinForwarderProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.recallByThreshold\"),\n    
ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.FMClassifier.trainImpl\"),\n    ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.regression.FMRegressor.trainImpl\"),\n    // [SPARK-31077] Remove ChiSqSelector dependency on mllib.ChiSqSelectorModel\n    // private constructor\n    ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.ChiSqSelectorModel.this\"),\n\n    // [SPARK-31127] Implement abstract Selector\n    // org.apache.spark.ml.feature.ChiSqSelectorModel type hierarchy change\n    // before: class ChiSqSelector extends Estimator with ChiSqSelectorParams\n    // after: class ChiSqSelector extends PSelector\n    // false positive, no binary incompatibility\n    ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.feature.ChiSqSelectorModel\"),\n    ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.feature.ChiSqSelector\"),\n\n    // [SPARK-24634] Add a new metric regarding number of inputs later than watermark plus allowed delay\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StateOperatorProgress.<init>$default$4\"),\n\n    //[SPARK-31893] Add a generic ClassificationSummary trait\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionTrainingSummary.org$apache$spark$ml$classification$ClassificationSummary$_setter_$org$apache$spark$ml$classification$ClassificationSummary$$multiclassMetrics_=\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionTrainingSummary.org$apache$spark$ml$classification$ClassificationSummary$$multiclassMetrics\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionTrainingSummary.weightCol\"),\n    
ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummary.org$apache$spark$ml$classification$BinaryClassificationSummary$_setter_$org$apache$spark$ml$classification$BinaryClassificationSummary$$sparkSession_=\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummary.org$apache$spark$ml$classification$BinaryClassificationSummary$_setter_$org$apache$spark$ml$classification$BinaryClassificationSummary$$binaryMetrics_=\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummary.org$apache$spark$ml$classification$BinaryClassificationSummary$$binaryMetrics\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummary.org$apache$spark$ml$classification$BinaryClassificationSummary$$sparkSession\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummary.org$apache$spark$ml$classification$ClassificationSummary$_setter_$org$apache$spark$ml$classification$ClassificationSummary$$multiclassMetrics_=\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummary.org$apache$spark$ml$classification$ClassificationSummary$$multiclassMetrics\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummary.weightCol\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.org$apache$spark$ml$classification$ClassificationSummary$_setter_$org$apache$spark$ml$classification$ClassificationSummary$$multiclassMetrics_=\"),\n    
ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.org$apache$spark$ml$classification$ClassificationSummary$$multiclassMetrics\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightCol\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.org$apache$spark$ml$classification$BinaryClassificationSummary$_setter_$org$apache$spark$ml$classification$BinaryClassificationSummary$$sparkSession_=\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.org$apache$spark$ml$classification$BinaryClassificationSummary$_setter_$org$apache$spark$ml$classification$BinaryClassificationSummary$$binaryMetrics_=\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.org$apache$spark$ml$classification$BinaryClassificationSummary$$binaryMetrics\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.org$apache$spark$ml$classification$BinaryClassificationSummary$$sparkSession\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.org$apache$spark$ml$classification$ClassificationSummary$_setter_$org$apache$spark$ml$classification$ClassificationSummary$$multiclassMetrics_=\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.org$apache$spark$ml$classification$ClassificationSummary$$multiclassMetrics\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.weightCol\"),\n\n    // 
[SPARK-32879] Pass SparkSession.Builder options explicitly to SparkSession\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.SparkSession.this\")\n  )\n\n  // Exclude rules for 3.0.x\n  lazy val v30excludes = v24excludes ++ Seq(\n    // [SPARK-29306] Add support for Stage level scheduling for executors\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#RetrieveSparkAppConfig.productElement\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#RetrieveSparkAppConfig.productArity\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#RetrieveSparkAppConfig.canEqual\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#RetrieveSparkAppConfig.productIterator\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#RetrieveSparkAppConfig.productPrefix\"),\n    ProblemFilters.exclude[FinalMethodProblem](\"org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages#RetrieveSparkAppConfig.toString\"),\n\n    // [SPARK-29399][core] Remove old ExecutorPlugin interface.\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.ExecutorPlugin\"),\n\n    // [SPARK-28980][SQL][CORE][MLLIB] Remove more old deprecated items in Spark 3\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.KMeans.train\"),\n    ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.mllib.clustering.KMeans.train\"),\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.mllib.classification.LogisticRegressionWithSGD$\"),\n    
ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.classification.LogisticRegressionWithSGD.this\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.feature.ChiSqSelectorModel.isSorted\"),\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.mllib.regression.RidgeRegressionWithSGD$\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.regression.RidgeRegressionWithSGD.this\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.regression.LassoWithSGD.this\"),\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.mllib.regression.LassoWithSGD$\"),\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.mllib.regression.LinearRegressionWithSGD$\"),\n\n    // [SPARK-28486][CORE][PYTHON] Map PythonBroadcast's data file to a BroadcastBlock to avoid delete by GC\n    ProblemFilters.exclude[InaccessibleMethodProblem](\"java.lang.Object.finalize\"),\n\n    // [SPARK-27366][CORE] Support GPU Resources in Spark job scheduling\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.TaskContext.resources\"),\n\n    // [SPARK-29417][CORE] Resource Scheduling - add TaskContext.resource java api\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.TaskContext.resourcesJMap\"),\n\n    // [SPARK-27410][MLLIB] Remove deprecated / no-op mllib.KMeans getRuns, setRuns\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.KMeans.getRuns\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.KMeans.setRuns\"),\n\n    // [SPARK-26580][SQL][ML][FOLLOW-UP] Throw exception when use untyped UDF by default\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.UnaryTransformer.this\"),\n\n    // [SPARK-27090][CORE] Removing old LEGACY_DRIVER_IDENTIFIER (\"<driver>\")\n    
ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.SparkContext.LEGACY_DRIVER_IDENTIFIER\"),\n\n    // [SPARK-25838] Remove formatVersion from Saveable\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.DistributedLDAModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.LocalLDAModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.BisectingKMeansModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.KMeansModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.PowerIterationClusteringModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.GaussianMixtureModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.recommendation.MatrixFactorizationModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.feature.ChiSqSelectorModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.feature.Word2VecModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.classification.SVMModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.classification.LogisticRegressionModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.classification.NaiveBayesModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.util.Saveable.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.fpm.FPGrowthModel.formatVersion\"),\n    
ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.fpm.PrefixSpanModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.regression.IsotonicRegressionModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.regression.RidgeRegressionModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.regression.LassoModel.formatVersion\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.regression.LinearRegressionModel.formatVersion\"),\n\n    // [SPARK-26132] Remove support for Scala 2.11 in Spark 3.0.0\n    ProblemFilters.exclude[DirectAbstractMethodProblem](\"scala.concurrent.Future.transformWith\"),\n    ProblemFilters.exclude[DirectAbstractMethodProblem](\"scala.concurrent.Future.transform\"),\n\n    // [SPARK-26254][CORE] Extract Hive + Kafka dependencies from Core.\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.deploy.security.HiveDelegationTokenProvider\"),\n\n    // [SPARK-26329][CORE] Faster polling of executor memory metrics.\n    ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.scheduler.SparkListenerTaskEnd$\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerTaskEnd.apply\"),\n    ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.scheduler.SparkListenerTaskEnd.copy$default$6\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerTaskEnd.copy\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerTaskEnd.this\"),\n\n    // [SPARK-26311][CORE]New feature: apply custom log URL pattern for executor log URLs\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerApplicationStart.apply\"),\n    
ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerApplicationStart.copy\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerApplicationStart.this\"),\n    ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.scheduler.SparkListenerApplicationStart$\"),\n\n    // [SPARK-27630][CORE] Properly handle task end events from completed stages\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted.apply\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted.copy\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted.this\"),\n    ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.scheduler.SparkListenerSpeculativeTaskSubmitted$\"),\n\n    // [SPARK-26632][Core] Separate Thread Configurations of Driver and Executor\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.network.netty.SparkTransportConf.fromSparkConf\"),\n\n    // [SPARK-16872][ML][PYSPARK] Impl Gaussian Naive Bayes Classifier\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.NaiveBayesModel.this\"),\n\n    // [SPARK-25765][ML] Add training cost to BisectingKMeans summary\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.BisectingKMeansModel.this\"),\n\n    // [SPARK-24243][CORE] Expose exceptions from InProcessAppHandle\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.launcher.SparkAppHandle.getError\"),\n\n    // [SPARK-25867] Remove KMeans computeCost\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.clustering.KMeansModel.computeCost\"),\n\n    // [SPARK-26127] Remove deprecated setters from tree regression and classification 
models\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.DecisionTreeClassificationModel.setSeed\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.DecisionTreeClassificationModel.setMinInfoGain\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.DecisionTreeClassificationModel.setCacheNodeIds\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.DecisionTreeClassificationModel.setCheckpointInterval\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.DecisionTreeClassificationModel.setMaxDepth\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.DecisionTreeClassificationModel.setImpurity\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.DecisionTreeClassificationModel.setMaxMemoryInMB\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.DecisionTreeClassificationModel.setMaxBins\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.DecisionTreeClassificationModel.setMinInstancesPerNode\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.GBTClassificationModel.setSeed\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.GBTClassificationModel.setMinInfoGain\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.GBTClassificationModel.setSubsamplingRate\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.GBTClassificationModel.setMaxIter\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.GBTClassificationModel.setCacheNodeIds\"),\n    
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.GBTClassificationModel.setCheckpointInterval"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.GBTClassificationModel.setMaxDepth"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.GBTClassificationModel.setImpurity"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.GBTClassificationModel.setMaxMemoryInMB"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.GBTClassificationModel.setStepSize"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.GBTClassificationModel.setMaxBins"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.GBTClassificationModel.setMinInstancesPerNode"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.GBTClassificationModel.setFeatureSubsetStrategy"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setSeed"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setMinInfoGain"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setSubsamplingRate"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setCacheNodeIds"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setCheckpointInterval"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setMaxDepth"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setImpurity"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setMaxMemoryInMB"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setFeatureSubsetStrategy"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setMaxBins"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setMinInstancesPerNode"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.RandomForestClassificationModel.setNumTrees"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.DecisionTreeRegressionModel.setSeed"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.DecisionTreeRegressionModel.setMinInfoGain"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.DecisionTreeRegressionModel.setCacheNodeIds"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.DecisionTreeRegressionModel.setCheckpointInterval"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.DecisionTreeRegressionModel.setMaxDepth"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.DecisionTreeRegressionModel.setImpurity"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.DecisionTreeRegressionModel.setMaxMemoryInMB"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.DecisionTreeRegressionModel.setMaxBins"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.DecisionTreeRegressionModel.setMinInstancesPerNode"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setSeed"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setMinInfoGain"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setSubsamplingRate"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setMaxIter"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setCacheNodeIds"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setCheckpointInterval"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setMaxDepth"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setImpurity"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setMaxMemoryInMB"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setStepSize"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setMaxBins"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setMinInstancesPerNode"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.setFeatureSubsetStrategy"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setSeed"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setMinInfoGain"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setSubsamplingRate"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setCacheNodeIds"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setCheckpointInterval"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setMaxDepth"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setImpurity"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setMaxMemoryInMB"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setFeatureSubsetStrategy"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setMaxBins"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setMinInstancesPerNode"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.RandomForestRegressionModel.setNumTrees"),

    // [SPARK-26090] Resolve most miscellaneous deprecation and build warnings for Spark 3
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.mllib.stat.test.BinarySampleBeanInfo"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.mllib.regression.LabeledPointBeanInfo"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.ml.feature.LabeledPointBeanInfo"),

    // [SPARK-28780][ML] Delete the incorrect setWeightCol method in LinearSVCModel
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.LinearSVCModel.setWeightCol"),

    // [SPARK-29645][ML][PYSPARK] ML add param RelativeError
    ProblemFilters.exclude[FinalMethodProblem]("org.apache.spark.ml.feature.QuantileDiscretizer.relativeError"),
    ProblemFilters.exclude[FinalMethodProblem]("org.apache.spark.ml.feature.QuantileDiscretizer.getRelativeError"),

    // [SPARK-28968][ML] Add HasNumFeatures in the scala side
    ProblemFilters.exclude[FinalMethodProblem]("org.apache.spark.ml.feature.FeatureHasher.getNumFeatures"),
    ProblemFilters.exclude[FinalMethodProblem]("org.apache.spark.ml.feature.FeatureHasher.numFeatures"),
    ProblemFilters.exclude[FinalMethodProblem]("org.apache.spark.ml.feature.HashingTF.getNumFeatures"),
    ProblemFilters.exclude[FinalMethodProblem]("org.apache.spark.ml.feature.HashingTF.numFeatures"),

    // [SPARK-25908][CORE][SQL] Remove old deprecated items in Spark 3
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.BarrierTaskContext.isRunningLocally"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.TaskContext.isRunningLocally"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.executor.ShuffleWriteMetrics.shuffleBytesWritten"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.executor.ShuffleWriteMetrics.shuffleWriteTime"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.executor.ShuffleWriteMetrics.shuffleRecordsWritten"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.scheduler.AccumulableInfo.apply"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.mllib.evaluation.MulticlassMetrics.fMeasure"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.mllib.evaluation.MulticlassMetrics.recall"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.mllib.evaluation.MulticlassMetrics.precision"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.util.MLWriter.context"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.util.MLReader.context"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.util.GeneralMLWriter.context"),

    // [SPARK-25737] Remove JavaSparkContextVarargsWorkaround
    ProblemFilters.exclude[MissingTypesProblem]("org.apache.spark.api.java.JavaSparkContext"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.api.java.JavaSparkContext.union"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.streaming.api.java.JavaStreamingContext.union"),

    // [SPARK-16775] Remove deprecated accumulator v1 APIs
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.Accumulable"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.AccumulatorParam"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.Accumulator"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.Accumulator$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.AccumulableParam"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.AccumulatorParam$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.AccumulatorParam$FloatAccumulatorParam$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.AccumulatorParam$DoubleAccumulatorParam$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.AccumulatorParam$LongAccumulatorParam$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.AccumulatorParam$IntAccumulatorParam$"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.SparkContext.accumulable"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.SparkContext.accumulableCollection"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.SparkContext.accumulator"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.util.LegacyAccumulatorWrapper"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.api.java.JavaSparkContext.intAccumulator"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.api.java.JavaSparkContext.accumulable"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.api.java.JavaSparkContext.doubleAccumulator"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.api.java.JavaSparkContext.accumulator"),

    // [SPARK-24109] Remove class SnappyOutputStreamWrapper
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.io.SnappyCompressionCodec.version"),

    // [SPARK-19287] JavaPairRDD flatMapValues requires function returning Iterable, not Iterator
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.api.java.JavaPairRDD.flatMapValues"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.streaming.api.java.JavaPairDStream.flatMapValues"),

    // [SPARK-25680] SQL execution listener shouldn't happen on execution thread
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.util.ExecutionListenerManager.clone"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.util.ExecutionListenerManager.this"),

    // [SPARK-25862][SQL] Remove rangeBetween APIs introduced in SPARK-21608
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.functions.unboundedFollowing"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.functions.unboundedPreceding"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.functions.currentRow"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.expressions.Window.rangeBetween"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.expressions.WindowSpec.rangeBetween"),

    // [SPARK-23781][CORE] Merge token renewer functionality into HadoopDelegationTokenManager
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkHadoopUtil.nextCredentialRenewalTime"),

    // [SPARK-26133][ML] Remove deprecated OneHotEncoder and rename OneHotEncoderEstimator to OneHotEncoder
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.ml.feature.OneHotEncoderEstimator"),
    ProblemFilters.exclude[MissingTypesProblem]("org.apache.spark.ml.feature.OneHotEncoder"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.feature.OneHotEncoder.transform"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.ml.feature.OneHotEncoderEstimator$"),

    // [SPARK-30329][ML] add iterator/foreach methods for Vectors
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.ml.linalg.Vector.activeIterator"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.mllib.linalg.Vector.activeIterator"),

    // [SPARK-26141] Enable custom metrics implementation in shuffle write
    // Following are Java private classes
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.shuffle.sort.UnsafeShuffleWriter.this"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.TimeTrackingOutputStream.this"),

    // [SPARK-26139] Implement shuffle write metrics in SQL
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ShuffleDependency.this"),

    // [SPARK-26362][CORE] Remove 'spark.driver.allowMultipleContexts' to disallow multiple creation of SparkContexts
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.SparkContext.setActiveContext"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.SparkContext.markPartiallyConstructed"),

    // [SPARK-26457] Show hadoop configurations in HistoryServer environment tab
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.status.api.v1.ApplicationEnvironmentInfo.this"),

    // [SPARK-30144][ML] Make MultilayerPerceptronClassificationModel extend MultilayerPerceptronParams
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.layers"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.this"),

    // [SPARK-30630][ML] Remove numTrees in GBT
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.classification.GBTClassificationModel.numTrees"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.regression.GBTRegressionModel.numTrees"),

    // Data Source V2 API changes
    (problem: Problem) => problem match {
      case MissingClassProblem(cls) =>
        !cls.fullName.startsWith("org.apache.spark.sql.sources.v2")
      case _ => true
    },

    // [SPARK-27521][SQL] Move data source v2 to catalyst module
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.vectorized.ColumnarBatch"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.vectorized.ArrowColumnVector"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.vectorized.ColumnarRow"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.vectorized.ColumnarArray"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.vectorized.ColumnarMap"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.vectorized.ColumnVector"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.GreaterThanOrEqual"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.StringEndsWith"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.LessThanOrEqual$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.In$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.Not"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.IsNotNull"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.LessThan"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.LessThanOrEqual"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.EqualNullSafe$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.GreaterThan$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.In"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.And"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.StringStartsWith$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.EqualNullSafe"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.StringEndsWith$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.GreaterThanOrEqual$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.Not$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.IsNull$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.LessThan$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.IsNotNull$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.Or"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.EqualTo$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.GreaterThan"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.StringContains"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.Filter"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.IsNull"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.EqualTo"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.And$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.Or$"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.StringStartsWith"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.StringContains$"),

    // [SPARK-26216][SQL] Do not use case class as public API (UserDefinedFunction)
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.expressions.UserDefinedFunction$"),
    ProblemFilters.exclude[AbstractClassProblem]("org.apache.spark.sql.expressions.UserDefinedFunction"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.inputTypes"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.nullableTypes_="),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.dataType"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.f"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.this"),
    ProblemFilters.exclude[DirectAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.asNonNullable"),
    ProblemFilters.exclude[ReversedAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.asNonNullable"),
    ProblemFilters.exclude[DirectAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.nullable"),
    ProblemFilters.exclude[ReversedAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.nullable"),
    ProblemFilters.exclude[DirectAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.asNondeterministic"),
    ProblemFilters.exclude[ReversedAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.asNondeterministic"),
    ProblemFilters.exclude[DirectAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.deterministic"),
    ProblemFilters.exclude[ReversedAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.deterministic"),
    ProblemFilters.exclude[DirectAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.apply"),
    ProblemFilters.exclude[ReversedAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.apply"),
    ProblemFilters.exclude[DirectAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.withName"),
    ProblemFilters.exclude[ReversedAbstractMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.withName"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.productElement"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.productArity"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.copy$default$2"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.canEqual"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.copy"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.copy$default$1"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.productIterator"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.productPrefix"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.expressions.UserDefinedFunction.copy$default$3"),

    // [SPARK-11215][ML] Add multiple columns support to StringIndexer
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.feature.StringIndexer.validateAndTransformSchema"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.feature.StringIndexerModel.validateAndTransformSchema"),

    // [SPARK-26616][MLlib] Expose document frequency in IDFModel
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.mllib.feature.IDFModel.this"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.mllib.feature.IDF#DocumentFrequencyAggregator.idf"),

    // [SPARK-28199][SS] Remove deprecated ProcessingTime
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.streaming.ProcessingTime"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.streaming.ProcessingTime$"),

    // [SPARK-25382][SQL][PYSPARK] Remove ImageSchema.readImages in 3.0
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.image.ImageSchema.readImages"),

    // [SPARK-25341][CORE] Support rolling back a shuffle map stage and re-generate the shuffle files
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.shuffle.sort.UnsafeShuffleWriter.this"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.storage.ShuffleIndexBlockId.copy$default$2"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.ShuffleIndexBlockId.copy"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.ShuffleIndexBlockId.this"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.storage.ShuffleDataBlockId.copy$default$2"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.ShuffleDataBlockId.copy"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.ShuffleDataBlockId.this"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.storage.ShuffleBlockId.copy$default$2"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.ShuffleBlockId.copy"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.ShuffleBlockId.this"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.ShuffleIndexBlockId.apply"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.ShuffleDataBlockId.apply"),
    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.storage.ShuffleBlockId.apply"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.storage.ShuffleIndexBlockId.mapId"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.storage.ShuffleDataBlockId.mapId"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.storage.ShuffleBlockId.mapId"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.FetchFailed.mapId"),
    ProblemFilters.exclude[MissingTypesProblem]("org.apache.spark.FetchFailed$"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.FetchFailed.apply"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.FetchFailed.copy$default$5"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.FetchFailed.copy"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.FetchFailed.copy$default$3"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.FetchFailed.this"),

    // [SPARK-28957][SQL] Copy any "spark.hive.foo=bar" spark properties into hadoop conf as "hive.foo=bar"
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkHadoopUtil.appendS3AndSparkHadoopConfigurations"),

    // [SPARK-29348] Add observable metrics.
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.streaming.StreamingQueryProgress.this"),

    // [SPARK-30377][ML] Make AFTSurvivalRegression extend Regressor
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.ml.regression.AFTSurvivalRegression.fit"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.ml.regression.AFTSurvivalRegressionModel.setFeaturesCol"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.ml.regression.AFTSurvivalRegressionModel.setPredictionCol"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.ml.regression.AFTSurvivalRegression.setFeaturesCol"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.ml.regression.AFTSurvivalRegression.setLabelCol"),
    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.ml.regression.AFTSurvivalRegression.setPredictionCol"),

    // [SPARK-29543][SS][UI] Init structured streaming ui
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.streaming.StreamingQueryListener#QueryStartedEvent.this"),

    // [SPARK-30667][CORE] Add allGather method to BarrierTaskContext
    ProblemFilters.exclude[IncompatibleTemplateDefProblem]("org.apache.spark.RequestToSync")
  )

  // Exclude rules for 2.4.x
  lazy val v24excludes = v23excludes ++ Seq(
    // [SPARK-23429][CORE] Add executor memory metrics to heartbeat and expose in executors REST API
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate.apply"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate.copy"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate.this"),
    ProblemFilters.exclude[MissingTypesProblem]("org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate$"),

    // [SPARK-25248] add package private methods to TaskContext
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.TaskContext.markTaskFailed"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.TaskContext.markInterrupted"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.TaskContext.fetchFailed"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.TaskContext.markTaskCompleted"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.TaskContext.getLocalProperties"),

    // [SPARK-10697][ML] Add lift to Association rules
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.fpm.FPGrowthModel.this"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.mllib.fpm.AssociationRules#Rule.this"),

    // [SPARK-24296][CORE] Replicate large blocks as a stream.
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.network.netty.NettyBlockRpcServer.this"),
    // [SPARK-23528] Add numIter to ClusteringSummary
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.clustering.ClusteringSummary.this"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.clustering.KMeansSummary.this"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.ml.clustering.BisectingKMeansSummary.this"),
    // [SPARK-6237][NETWORK] Network-layer changes to allow stream upload
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.network.netty.NettyBlockRpcServer.receive"),

    // [SPARK-20087][CORE] Attach accumulators / metrics to 'TaskKilled' end reason
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.TaskKilled.apply"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.TaskKilled.copy"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.TaskKilled.this"),

    // [SPARK-22941][core] Do not exit JVM when submit fails with in-process launcher.
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkSubmit.printWarning"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkSubmit.parseSparkConfProperty"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkSubmit.printVersionAndExit"),

    // [SPARK-23412][ML] Add cosine distance measure to BisectingKmeans
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasDistanceMeasure.org$apache$spark$ml$param$shared$HasDistanceMeasure$_setter_$distanceMeasure_="),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasDistanceMeasure.getDistanceMeasure"),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasDistanceMeasure.distanceMeasure"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.mllib.clustering.BisectingKMeansModel#SaveLoadV1_0.load"),

    // [SPARK-20659] Remove StorageStatus, or make it private
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.SparkExecutorInfo.totalOffHeapStorageMemory"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.SparkExecutorInfo.usedOffHeapStorageMemory"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.SparkExecutorInfo.usedOnHeapStorageMemory"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.SparkExecutorInfo.totalOnHeapStorageMemory"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.SparkContext.getExecutorStorageStatus"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.storage.StorageStatus.numBlocks"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.storage.StorageStatus.numRddBlocks"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.storage.StorageStatus.containsBlock"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.storage.StorageStatus.rddBlocksById"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.storage.StorageStatus.numRddBlocksById"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.storage.StorageStatus.memUsedByRdd"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.storage.StorageStatus.cacheSize"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.storage.StorageStatus.rddStorageLevel"),

    // [SPARK-23455][ML] Default Params in ML should be saved separately in metadata
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.ml.param.Params.paramMap"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.ml.param.Params.org$apache$spark$ml$param$Params$_setter_$paramMap_="),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.ml.param.Params.defaultParamMap"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.ml.param.Params.org$apache$spark$ml$param$Params$_setter_$defaultParamMap_="),

    // [SPARK-7132][ML] Add fit with validation set to spark.ml GBT
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.getValidationIndicatorCol"),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.org$apache$spark$ml$param$shared$HasValidationIndicatorCol$_setter_$validationIndicatorCol_="),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.validationIndicatorCol"),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.getValidationIndicatorCol"),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.org$apache$spark$ml$param$shared$HasValidationIndicatorCol$_setter_$validationIndicatorCol_="),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.validationIndicatorCol"),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.getValidationIndicatorCol"),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.org$apache$spark$ml$param$shared$HasValidationIndicatorCol$_setter_$validationIndicatorCol_="),
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasValidationIndicatorCol.validationIndicatorCol"),

    // [SPARK-23042] Use OneHotEncoderModel to encode labels in MultilayerPerceptronClassifier
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.ml.classification.LabelConverter"),

    // [SPARK-21842][MESOS] Support Kerberos ticket renewal and creation in Mesos
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkHadoopUtil.getDateOfNextUpdate"),

    // [SPARK-23366] Improve hot reading path in ReadAheadInputStream
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.io.ReadAheadInputStream.this"),

    // [SPARK-22941][CORE] Do not exit JVM when submit fails with in-process launcher.
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkSubmit.addJarToClasspath"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkSubmit.mergeFileLists"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment$default$2"),

    // Data Source V2 API changes
    // TODO: they are unstable APIs and should not be tracked by mima.
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.ReadSupportWithSchema"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.SupportsScanColumnarBatch.createDataReaderFactories"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.SupportsScanColumnarBatch.createBatchDataReaderFactories"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.SupportsScanColumnarBatch.planBatchInputPartitions"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.reader.SupportsScanUnsafeRow"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.DataSourceReader.createDataReaderFactories"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.DataSourceReader.planInputPartitions"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.reader.SupportsPushDownCatalystFilters"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.reader.DataReader"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.SupportsReportStatistics.getStatistics"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.sources.v2.reader.SupportsReportStatistics.estimateStatistics"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.reader.DataReaderFactory"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.reader.streaming.ContinuousDataReader"),
    ProblemFilters.exclude[MissingClassProblem]("org.apache.spark.sql.sources.v2.writer.SupportsWriteInternalRow"),
    ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.v2.writer.DataWriterFactory.createDataWriter"),
    ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.sources.v2.writer.DataWriterFactory.createDataWriter"),

    // Changes to HasRawPredictionCol.
    ProblemFilters.exclude[InheritedNewAbstractMethodProblem]("org.apache.spark.ml.param.shared.HasRawPredictionCol.rawPredictionCol"),
ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.param.shared.HasRawPredictionCol.org$apache$spark$ml$param$shared$HasRawPredictionCol$_setter_$rawPredictionCol_=\"),\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.param.shared.HasRawPredictionCol.getRawPredictionCol\"),\n\n    // [SPARK-15526][ML][FOLLOWUP] Make JPMML provided scope to avoid including unshaded JARs\n    (problem: Problem) => problem match {\n      case MissingClassProblem(cls) =>\n        !cls.fullName.startsWith(\"org.sparkproject.jpmml\") &&\n          !cls.fullName.startsWith(\"org.sparkproject.dmg.pmml\") &&\n          !cls.fullName.startsWith(\"org.spark_project.jpmml\") &&\n          !cls.fullName.startsWith(\"org.spark_project.dmg.pmml\")\n      case _ => true\n    }\n  )\n\n  // Exclude rules for 2.3.x\n  lazy val v23excludes = v22excludes ++ Seq(\n    // [SPARK-22897] Expose stageAttemptId in TaskContext\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.TaskContext.stageAttemptNumber\"),\n\n    // SPARK-22789: Map-only continuous processing execution\n    ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryManager.startQuery$default$8\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryManager.startQuery$default$6\"),\n    ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryManager.startQuery$default$9\"),\n\n    // SPARK-22372: Make cluster submission use SparkApplication.\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.SparkHadoopUtil.getSecretKeyFromUserCredentials\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.SparkHadoopUtil.isYarnMode\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.SparkHadoopUtil.getCurrentUserCredentials\"),\n   
 ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.SparkHadoopUtil.addSecretKeyToUserCredentials\"),\n\n    // SPARK-18085: Better History Server scalability for many / large applications\n    ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.ExecutorSummary.executorLogs\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.history.HistoryServer.getSparkUI\"),\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.ui.env.EnvironmentListener\"),\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.ui.exec.ExecutorsListener\"),\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.ui.storage.StorageListener\"),\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.storage.StorageStatusListener\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.ExecutorStageSummary.this\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.JobData.this\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.SparkStatusTracker.this\"),\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.ui.jobs.JobProgressListener\"),\n\n    // [SPARK-20495][SQL] Add StorageLevel to cacheTable API\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.cacheTable\"),\n\n    // [SPARK-19937] Add remote bytes read to disk.\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.ShuffleReadMetrics.this\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.ShuffleReadMetricDistributions.this\"),\n\n    // [SPARK-21276] Update lz4-java to the latest (v1.4.0)\n    ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.io.LZ4BlockInputStream\"),\n\n    // [SPARK-17139] Add model summary for MultinomialLogisticRegression\n   
 ProblemFilters.exclude[IncompatibleTemplateDefProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionTrainingSummary\"),\n    ProblemFilters.exclude[IncompatibleTemplateDefProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.predictionCol\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.labels\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.truePositiveRateByLabel\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.falsePositiveRateByLabel\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.precisionByLabel\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.recallByLabel\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.fMeasureByLabel\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.accuracy\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedTruePositiveRate\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedFalsePositiveRate\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedRecall\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedPrecision\"),\n    
ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.weightedFMeasure\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.asBinary\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.org$apache$spark$ml$classification$LogisticRegressionSummary$$multiclassMetrics\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionSummary.org$apache$spark$ml$classification$LogisticRegressionSummary$_setter_$org$apache$spark$ml$classification$LogisticRegressionSummary$$multiclassMetrics_=\"),\n\n    // [SPARK-14280] Support Scala 2.12\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.FutureAction.transformWith\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.FutureAction.transform\"),\n\n    // [SPARK-21087] CrossValidator, TrainValidationSplit expose sub models after fitting: Scala\n    ProblemFilters.exclude[FinalClassProblem](\"org.apache.spark.ml.tuning.CrossValidatorModel$CrossValidatorModelWriter\"),\n    ProblemFilters.exclude[FinalClassProblem](\"org.apache.spark.ml.tuning.TrainValidationSplitModel$TrainValidationSplitModelWriter\"),\n\n    // [SPARK-21728][CORE] Allow SparkSubmit to use Logging\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.SparkSubmit.downloadFileList\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.SparkSubmit.downloadFile\"),\n\n    // [SPARK-21714][CORE][YARN] Avoiding re-uploading remote resources in yarn client mode\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment\"),\n\n    // [SPARK-22324][SQL][PYTHON] Upgrade Arrow to 0.8.0\n    
ProblemFilters.exclude[FinalMethodProblem](\"org.apache.spark.network.util.AbstractFileRegion.transfered\"),\n\n    // [SPARK-20643][CORE] Add listener implementation to collect app state\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.TaskData.<init>$default$5\"),\n\n    // [SPARK-20648][CORE] Port JobsTab and StageTab to the new UI backend\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.TaskData.<init>$default$12\"),\n\n    // [SPARK-21462][SS] Added batchId to StreamingQueryProgress.json\n    // [SPARK-21409][SS] Expose state store memory usage in SQL metrics and progress updates\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StateOperatorProgress.this\"),\n\n    // [SPARK-22278][SS] Expose current event time watermark and current processing time in GroupState\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.GroupState.getCurrentWatermarkMs\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.GroupState.getCurrentProcessingTimeMs\"),\n\n    // [SPARK-20542][ML][SQL] Add an API to Bucketizer that can bin multiple columns\n    ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.param.shared.HasOutputCols.org$apache$spark$ml$param$shared$HasOutputCols$_setter_$outputCols_=\"),\n\n    // [SPARK-18619][ML] Make QuantileDiscretizer/Bucketizer/StringIndexer/RFormula inherit from HasHandleInvalid\n    ProblemFilters.exclude[FinalMethodProblem](\"org.apache.spark.ml.feature.Bucketizer.getHandleInvalid\"),\n    ProblemFilters.exclude[FinalMethodProblem](\"org.apache.spark.ml.feature.StringIndexer.getHandleInvalid\"),\n    ProblemFilters.exclude[FinalMethodProblem](\"org.apache.spark.ml.feature.QuantileDiscretizer.getHandleInvalid\"),\n    
ProblemFilters.exclude[FinalMethodProblem](\"org.apache.spark.ml.feature.StringIndexerModel.getHandleInvalid\")\n  )\n\n  // Exclude rules for 2.2.x\n  lazy val v22excludes = v21excludes ++ Seq(\n    // [SPARK-20355] Add per application spark version on the history server headerpage\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.ApplicationAttemptInfo.this\"),\n\n    // [SPARK-19652][UI] Do auth checks for REST API access.\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.history.HistoryServer.withSparkUI\"),\n    ProblemFilters.exclude[IncompatibleTemplateDefProblem](\"org.apache.spark.status.api.v1.UIRootFromServletContext\"),\n\n    // [SPARK-18663][SQL] Simplify CountMinSketch aggregate implementation\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.util.sketch.CountMinSketch.toByteArray\"),\n\n    // [SPARK-18949] [SQL] Add repairTable API to Catalog\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.recoverPartitions\"),\n\n    // [SPARK-18537] Add a REST api to spark streaming\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.streaming.scheduler.StreamingListener.onStreamingStarted\"),\n\n    // [SPARK-19148][SQL] do not expose the external table concept in Catalog\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.createTable\"),\n\n    // [SPARK-14272][ML] Add logLikelihood in GaussianMixtureSummary\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.clustering.GaussianMixtureSummary.this\"),\n\n    // [SPARK-19267] Fetch Failure handling robust to user error handling\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.TaskContext.setFetchFailed\"),\n\n    // [SPARK-19069] [CORE] Expose task 'status' and 'duration' in spark history server REST API.\n    
ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.TaskData.this\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.TaskData.<init>$default$10\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.TaskData.<init>$default$11\"),\n\n    // [SPARK-17161] Removing Python-friendly constructors not needed\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.OneVsRestModel.this\"),\n\n    // [SPARK-19820] Allow reason to be specified to task kill\n    ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.TaskKilled$\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.TaskKilled.productElement\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.TaskKilled.productArity\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.TaskKilled.canEqual\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.TaskKilled.productIterator\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.TaskKilled.countTowardsTaskFailures\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.TaskKilled.productPrefix\"),\n    ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.TaskKilled.toErrorString\"),\n    ProblemFilters.exclude[FinalMethodProblem](\"org.apache.spark.TaskKilled.toString\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.TaskContext.killTaskIfInterrupted\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.TaskContext.getKillReason\"),\n\n    // [SPARK-19876] Add one time trigger, and improve Trigger APIs\n    ProblemFilters.exclude[IncompatibleTemplateDefProblem](\"org.apache.spark.sql.streaming.Trigger\"),\n    
ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.sql.streaming.ProcessingTime\"),\n\n    // [SPARK-17471][ML] Add compressed method to ML matrices\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.compressed\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.compressedColMajor\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.compressedRowMajor\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.isRowMajor\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.isColMajor\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.getSparseSizeInBytes\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.toDense\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.toSparse\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.toDenseRowMajor\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.toSparseRowMajor\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.toSparseColMajor\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.getDenseSizeInBytes\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.toDenseColMajor\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.toDenseMatrix\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.toSparseMatrix\"),\n    ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Matrix.getSizeInBytes\"),\n\n    // [SPARK-18693] Added 
weightSum to trait MultivariateStatisticalSummary\n    ProblemFilters.exclude[MissingMethodProblem](\"org.apache.spark.mllib.stat.MultivariateStatisticalSummary.weightSum\")\n  ) ++ Seq(\n      // [SPARK-17019] Expose on-heap and off-heap memory usage in various places\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerBlockManagerAdded.copy\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerBlockManagerAdded.this\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.scheduler.SparkListenerBlockManagerAdded$\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerBlockManagerAdded.apply\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.storage.StorageStatus.this\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.storage.StorageStatus.this\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.RDDDataDistribution.this\")\n    )\n\n  // Exclude rules for 2.1.x\n  lazy val v21excludes = v20excludes ++ {\n    Seq(\n      // [SPARK-17671] Spark 2.0 history server summary page is slow even set spark.history.ui.maxApplications\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.deploy.history.HistoryServer.getApplicationList\"),\n      // [SPARK-14743] Improve delegation token handling in secure cluster\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.deploy.SparkHadoopUtil.getTimeFromNowToRenewal\"),\n      // [SPARK-16199][SQL] Add a method to list the referenced columns in data source Filter\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.sources.Filter.references\"),\n      // [SPARK-16853][SQL] Fixes encoder error in DataSet typed select\n      
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.Dataset.select\"),\n      // [SPARK-16967] Move Mesos to Module\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.SparkMasterRegex.MESOS_REGEX\"),\n      // [SPARK-16240] ML persistence backward compatibility for LDA\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.clustering.LDA$\"),\n      // [SPARK-17717] Add Find and Exists method to Catalog.\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.getDatabase\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.getTable\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.getFunction\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.databaseExists\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.tableExists\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.functionExists\"),\n\n      // [SPARK-17731][SQL][Streaming] Metrics for structured streaming\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.SourceStatus.this\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.streaming.SourceStatus.offsetDesc\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQuery.status\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.SinkStatus.this\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.sql.streaming.StreamingQueryInfo\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener#QueryStarted.this\"),\n      
ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener#QueryStarted.queryInfo\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener#QueryProgress.this\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener#QueryProgress.queryInfo\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener#QueryTerminated.queryInfo\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener$QueryStarted\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener$QueryProgress\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener$QueryTerminated\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener.onQueryStarted\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener.onQueryStarted\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener.onQueryProgress\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener.onQueryProgress\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener.onQueryTerminated\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener.onQueryTerminated\"),\n\n      // [SPARK-18516][SQL] Split state and progress in streaming\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.sql.streaming.SourceStatus\"),\n      
ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.sql.streaming.SinkStatus\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQuery.sinkStatus\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQuery.sourceStatuses\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.streaming.StreamingQuery.id\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQuery.lastProgress\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQuery.recentProgress\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQuery.id\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryManager.get\"),\n\n      // [SPARK-17338][SQL] add global temp view\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.dropGlobalTempView\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.catalog.Catalog.dropTempView\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.catalog.Catalog.dropTempView\"),\n\n      // [SPARK-18034] Upgrade to MiMa 0.1.11 to fix flakiness.\n      ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.param.shared.HasAggregationDepth.aggregationDepth\"),\n      ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.param.shared.HasAggregationDepth.getAggregationDepth\"),\n      ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.param.shared.HasAggregationDepth.org$apache$spark$ml$param$shared$HasAggregationDepth$_setter_$aggregationDepth_=\"),\n\n      // [SPARK-18236] Reduce duplicate objects in Spark UI and HistoryServer\n      
ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.scheduler.TaskInfo.accumulables\"),\n\n      // [SPARK-18657] Add StreamingQuery.runId\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQuery.runId\"),\n\n      // [SPARK-18694] Add StreamingQuery.explain and exception to Python and fix StreamingQueryException\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.sql.streaming.StreamingQueryException$\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryException.startOffset\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryException.endOffset\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.streaming.StreamingQueryException.this\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryException.query\")\n    )\n  }\n\n  // Exclude rules for 2.0.x\n  lazy val v20excludes = {\n    Seq(\n      ProblemFilters.exclude[Problem](\"org.apache.spark.rpc.*\"),\n      ProblemFilters.exclude[Problem](\"org.spark-project.jetty.*\"),\n      ProblemFilters.exclude[Problem](\"org.spark_project.jetty.*\"),\n      ProblemFilters.exclude[Problem](\"org.sparkproject.jetty.*\"),\n      ProblemFilters.exclude[Problem](\"org.apache.spark.internal.*\"),\n      ProblemFilters.exclude[Problem](\"org.apache.spark.unused.*\"),\n      ProblemFilters.exclude[Problem](\"org.apache.spark.unsafe.*\"),\n      ProblemFilters.exclude[Problem](\"org.apache.spark.memory.*\"),\n      ProblemFilters.exclude[Problem](\"org.apache.spark.util.collection.unsafe.*\"),\n      ProblemFilters.exclude[Problem](\"org.apache.spark.sql.catalyst.*\"),\n      ProblemFilters.exclude[Problem](\"org.apache.spark.sql.execution.*\"),\n      ProblemFilters.exclude[Problem](\"org.apache.spark.sql.internal.*\"),\n      
ProblemFilters.exclude[MissingMethodProblem](\"org.apache.spark.mllib.feature.PCAModel.this\"),\n      ...
[several hundred escaped lines of MimaExcludes.scala exclusion rules trimmed; mid-stream, metals interleaves the following JSON-RPC request with the file content]
...ProblemFilters.exclude[MissingMethodContent-Length: 202

{"jsonrpc":"2.0","id":"1","method":"window/showMessageRequest","params":{"actions":[{"title":"sbt"},{"title":"mvn"}],"type":3,"message":"Multiple build definitions found. Which would you like to use?"}}Problem](\"org.apache.spark.SparkContext.rddToSequenceFileRDDFunctions\"),\n      ...
[remaining escaped file content trimmed]
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.CountVectorizerModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.HashingTF.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.IDF.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.IDFModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.IndexToString.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.Interaction.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.MinMaxScaler.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.MinMaxScalerModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.OneHotEncoder.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.PCA.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.PCAModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.QuantileDiscretizer.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.RFormula.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.RFormulaModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.SQLTransformer.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.StandardScaler.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.StandardScalerModel.transform\"),\n      
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.StopWordsRemover.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.StringIndexer.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.StringIndexerModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.VectorAssembler.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.VectorIndexer.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.VectorIndexerModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.VectorSlicer.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.Word2Vec.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.Word2VecModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.recommendation.ALS.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.recommendation.ALSModel.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.recommendation.ALSModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.AFTSurvivalRegression.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.AFTSurvivalRegressionModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.GBTRegressor.train\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.IsotonicRegression.extractWeightedLabeledPoints\"),\n      
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.IsotonicRegression.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.IsotonicRegressionModel.extractWeightedLabeledPoints\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.IsotonicRegressionModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.LinearRegression.train\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.LinearRegressionSummary.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.LinearRegressionTrainingSummary.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.RandomForestRegressor.train\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.tuning.CrossValidator.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.tuning.CrossValidatorModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.tuning.TrainValidationSplit.fit\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.tuning.TrainValidationSplitModel.transform\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.mllib.evaluation.BinaryClassificationMetrics.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.mllib.evaluation.MulticlassMetrics.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.mllib.evaluation.RegressionMetrics.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.DataFrameNaFunctions.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.DataFrameStatFunctions.this\"),\n      
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.DataFrameWriter.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.functions.broadcast\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.functions.callUDF\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.sources.CreatableRelationProvider.createRelation\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.sources.InsertableRelation.insert\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.fMeasureByThreshold\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.pr\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.precisionByThreshold\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.predictions\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.recallByThreshold\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.BinaryLogisticRegressionSummary.roc\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.clustering.LDAModel.describeTopics\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.feature.Word2VecModel.findSynonyms\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.feature.Word2VecModel.getVectors\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.recommendation.ALSModel.itemFactors\"),\n      
ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.recommendation.ALSModel.userFactors\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.regression.LinearRegressionSummary.predictions\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.regression.LinearRegressionSummary.residuals\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.scheduler.AccumulableInfo.name\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.scheduler.AccumulableInfo.value\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameNaFunctions.drop\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameNaFunctions.fill\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameNaFunctions.replace\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameReader.jdbc\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameReader.json\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameReader.load\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameReader.orc\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameReader.parquet\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameReader.table\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameReader.text\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameStatFunctions.crosstab\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameStatFunctions.freqItems\"),\n      
ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.DataFrameStatFunctions.sampleBy\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.SQLContext.createExternalTable\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.SQLContext.emptyDataFrame\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.SQLContext.range\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.functions.udf\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.scheduler.JobLogger\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.streaming.receiver.ActorHelper\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.streaming.receiver.ActorSupervisorStrategy\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.streaming.receiver.ActorSupervisorStrategy$\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.streaming.receiver.Statistics\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.streaming.receiver.Statistics$\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.executor.InputMetrics\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.executor.InputMetrics$\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.executor.OutputMetrics\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.executor.OutputMetrics$\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.sql.functions$\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.Estimator.fit\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.Predictor.train\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.Transformer.transform\"),\n      
ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.evaluation.Evaluator.evaluate\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.scheduler.SparkListener.onOtherEvent\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.sources.CreatableRelationProvider.createRelation\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.sources.InsertableRelation.insert\")\n    ) ++ Seq(\n      // [SPARK-13926] Automatically use Kryo serializer when shuffling RDDs with simple types\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ShuffleDependency.this\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ShuffleDependency.serializer\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.serializer.Serializer$\")\n    ) ++ Seq(\n      // SPARK-13927: add row/column iterator to local matrices\n      ProblemFilters.exclude[MissingMethodProblem](\"org.apache.spark.mllib.linalg.Matrix.rowIter\"),\n      ProblemFilters.exclude[MissingMethodProblem](\"org.apache.spark.mllib.linalg.Matrix.colIter\")\n    ) ++ Seq(\n      // SPARK-13948: MiMa Check should catch if the visibility change to `private`\n      // TODO(josh): Some of these may be legitimate incompatibilities; we should follow up before the 2.0.0 release\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.Dataset.toDS\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.sources.OutputWriterFactory.newInstance\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.util.RpcUtils.askTimeout\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.util.RpcUtils.lookupTimeout\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.UnaryTransformer.transform\"),\n      
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.DecisionTreeClassifier.train\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.LogisticRegression.train\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.DecisionTreeRegressor.train\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.sql.Dataset.groupBy\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.Dataset.groupBy\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.Dataset.select\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.Dataset.toDF\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.Logging.initializeLogIfNecessary\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.scheduler.SparkListenerEvent.logEvent\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.sql.sources.OutputWriterFactory.newInstance\")\n    ) ++ Seq(\n      // [SPARK-14014] Replace existing analysis.Catalog with SessionCatalog\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.SQLContext.this\")\n    ) ++ Seq(\n      // [SPARK-13928] Move org.apache.spark.Logging into org.apache.spark.internal.Logging\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.Logging\"),\n      (problem: Problem) => problem match {\n        case MissingTypesProblem(_, missing)\n          if missing.map(_.fullName).sameElements(Seq(\"org.apache.spark.Logging\")) => false\n        case _ => true\n      }\n    ) ++ Seq(\n      // [SPARK-13990] Automatically pick serializer when caching RDDs\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.network.netty.NettyBlockTransferService.uploadBlock\")\n    ) ++ Seq(\n      // 
[SPARK-14089][CORE][MLLIB] Remove methods that has been deprecated since 1.1, 1.2, 1.3, 1.4, and 1.5\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.SparkEnv.getThreadLocal\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.rdd.RDDFunctions.treeReduce\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.rdd.RDDFunctions.treeAggregate\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.tree.configuration.Strategy.defaultStategy\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.mllib.util.MLUtils.loadLibSVMFile\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.mllib.util.MLUtils.loadLibSVMFile\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.util.MLUtils.loadLibSVMFile\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.util.MLUtils.saveLabeledData\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.util.MLUtils.loadLabeledData\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.optimization.LBFGS.setMaxNumIterations\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.evaluation.BinaryClassificationEvaluator.setScoreCol\")\n    ) ++ Seq(\n      // [SPARK-14205][SQL] remove trait Queryable\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.sql.Dataset\")\n    ) ++ Seq(\n      // [SPARK-11262][ML] Unit test for gradient, loss layers, memory management\n      // for multilayer perceptron.\n      // This class is marked as `private`.\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.ml.ann.SoftmaxFunction\")\n    ) ++ Seq(\n      // [SPARK-13674][SQL] Add wholestage codegen support to Sample\n      
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.util.random.PoissonSampler.this\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.util.random.PoissonSampler.this\")\n    ) ++ Seq(\n      // [SPARK-13430][ML] moved featureCol from LinearRegressionModelSummary to LinearRegressionSummary\n      ProblemFilters.exclude[MissingMethodProblem](\"org.apache.spark.ml.regression.LinearRegressionSummary.this\")\n    ) ++ Seq(\n      // [SPARK-14437][Core] Use the address that NettyBlockTransferService listens to create BlockManagerId\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.network.netty.NettyBlockTransferService.this\")\n    ) ++ Seq(\n      // [SPARK-13048][ML][MLLIB] keepLastCheckpoint option for LDA EM optimizer\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.mllib.clustering.DistributedLDAModel.this\")\n    ) ++ Seq(\n      // [SPARK-14475] Propagate user-defined context from driver to executors\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.TaskContext.getLocalProperty\"),\n      // [SPARK-14617] Remove deprecated APIs in TaskMetrics\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.executor.InputMetrics$\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.executor.OutputMetrics$\"),\n      // [SPARK-14628] Simplify task metrics by always tracking read/write metrics\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.executor.InputMetrics.readMethod\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.executor.OutputMetrics.writeMethod\")\n    ) ++ Seq(\n      // SPARK-14628: Always track input/output/shuffle metrics\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.ShuffleReadMetrics.totalBlocksFetched\"),\n      
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.status.api.v1.ShuffleReadMetrics.this\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.TaskMetrics.inputMetrics\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.TaskMetrics.outputMetrics\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.TaskMetrics.shuffleWriteMetrics\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.TaskMetrics.shuffleReadMetrics\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.status.api.v1.TaskMetrics.this\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.TaskMetricDistributions.inputMetrics\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.TaskMetricDistributions.outputMetrics\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.TaskMetricDistributions.shuffleWriteMetrics\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.TaskMetricDistributions.shuffleReadMetrics\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.status.api.v1.TaskMetricDistributions.this\")\n    ) ++ Seq(\n      // SPARK-13643: Move functionality from SQLContext to SparkSession\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.SQLContext.getSchema\")\n    ) ++ Seq(\n      // [SPARK-14407] Hides HadoopFsRelation related data source API into execution package\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.sql.sources.OutputWriter\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.sql.sources.OutputWriterFactory\")\n    ) ++ Seq(\n      // SPARK-14734: Add conversions between mllib and ml Vector, 
Matrix types\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.mllib.linalg.Vector.asML\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.mllib.linalg.Matrix.asML\")\n    ) ++ Seq(\n      // SPARK-14704: Create accumulators in TaskMetrics\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.executor.InputMetrics.this\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.executor.OutputMetrics.this\")\n    ) ++ Seq(\n      // SPARK-14861: Replace internal usages of SQLContext with SparkSession\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\n        \"org.apache.spark.ml.clustering.LocalLDAModel.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\n        \"org.apache.spark.ml.clustering.DistributedLDAModel.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\n        \"org.apache.spark.ml.clustering.LDAModel.this\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\n        \"org.apache.spark.ml.clustering.LDAModel.sqlContext\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\n        \"org.apache.spark.sql.Dataset.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\n        \"org.apache.spark.sql.DataFrameReader.this\")\n    ) ++ Seq(\n      // SPARK-14542 configurable buffer size for pipe RDD\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.rdd.RDD.pipe\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.api.java.JavaRDDLike.pipe\")\n    ) ++ Seq(\n      // [SPARK-4452][Core]Shuffle data structures can starve others on the same thread for memory\n      ProblemFilters.exclude[IncompatibleTemplateDefProblem](\"org.apache.spark.util.collection.Spillable\")\n    ) ++ Seq(\n      // [SPARK-14952][Core][ML] Remove methods deprecated in 1.6\n      
ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.input.PortableDataStream.close\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.weights\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.regression.LinearRegressionModel.weights\")\n    ) ++ Seq(\n      // [SPARK-10653] [Core] Remove unnecessary things from SparkEnv\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.SparkEnv.sparkFilesDir\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.SparkEnv.blockTransferService\")\n    ) ++ Seq(\n      // SPARK-14654: New accumulator API\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ExceptionFailure$\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ExceptionFailure.apply\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ExceptionFailure.metrics\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ExceptionFailure.copy\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ExceptionFailure.this\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.executor.ShuffleReadMetrics.remoteBlocksFetched\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.executor.ShuffleReadMetrics.totalBlocksFetched\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.executor.ShuffleReadMetrics.localBlocksFetched\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.ShuffleReadMetrics.remoteBlocksFetched\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.status.api.v1.ShuffleReadMetrics.localBlocksFetched\")\n    ) ++ Seq(\n      // [SPARK-14615][ML] Use the new ML Vector and Matrix in the ML pipeline based 
algorithms\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.clustering.LDAModel.getOldDocConcentration\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.clustering.LDAModel.estimatedDocConcentration\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.clustering.LDAModel.topicsMatrix\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.clustering.KMeansModel.clusterCenters\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.LabelConverter.decodeLabel\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.LabelConverter.encodeLabeledPoint\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.weights\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.predict\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.MultilayerPerceptronClassificationModel.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.NaiveBayesModel.predictRaw\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.NaiveBayesModel.raw2probabilityInPlace\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.NaiveBayesModel.theta\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.NaiveBayesModel.pi\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.NaiveBayesModel.this\"),\n      
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.probability2prediction\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.predictRaw\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.raw2prediction\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.raw2probabilityInPlace\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.predict\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.coefficients\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.classification.ClassificationModel.raw2prediction\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.ClassificationModel.predictRaw\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.classification.ClassificationModel.predictRaw\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.feature.ElementwiseProduct.getScalingVec\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.ElementwiseProduct.setScalingVec\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.feature.PCAModel.pc\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.feature.MinMaxScalerModel.originalMax\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.feature.MinMaxScalerModel.originalMin\"),\n      
ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.MinMaxScalerModel.this\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.Word2VecModel.findSynonyms\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.feature.IDFModel.idf\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.feature.StandardScalerModel.mean\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.feature.StandardScalerModel.this\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.feature.StandardScalerModel.std\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.AFTSurvivalRegressionModel.predict\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.regression.AFTSurvivalRegressionModel.coefficients\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.AFTSurvivalRegressionModel.predictQuantiles\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.AFTSurvivalRegressionModel.this\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.regression.IsotonicRegressionModel.predictions\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.regression.IsotonicRegressionModel.boundaries\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.LinearRegressionModel.predict\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.regression.LinearRegressionModel.coefficients\"),\n      ProblemFilters.exclude[IncompatibleMethTypeProblem](\"org.apache.spark.ml.regression.LinearRegressionModel.this\")\n    ) ++ Seq(\n      // [SPARK-15290] Move annotations, like @Since / @DeveloperApi, into spark-tags\n      
ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.annotation.package$\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.annotation.package\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.annotation.Private\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.annotation.AlphaComponent\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.annotation.Experimental\"),\n      ProblemFilters.exclude[MissingClassProblem](\"org.apache.spark.annotation.DeveloperApi\")\n    ) ++ Seq(\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.mllib.linalg.Vector.asBreeze\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.mllib.linalg.Matrix.asBreeze\")\n    ) ++ Seq(\n      // [SPARK-15914] Binary compatibility is broken since consolidation of Dataset and DataFrame\n      // in Spark 2.0. However, source level compatibility is still maintained.\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.SQLContext.load\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.SQLContext.jsonRDD\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.SQLContext.jsonFile\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.SQLContext.jdbc\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.SQLContext.parquetFile\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.sql.SQLContext.applySchema\")\n    ) ++ Seq(\n      // SPARK-17096: Improve exception string reported through the StreamingQueryListener\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener#QueryTerminated.stackTrace\"),\n      
ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.sql.streaming.StreamingQueryListener#QueryTerminated.this\")\n    ) ++ Seq(\n      // SPARK-17406 limit timeline executor events\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorIdToData\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToTasksActive\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToTasksComplete\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToInputRecords\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToShuffleRead\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToTasksFailed\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToShuffleWrite\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToDuration\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToInputBytes\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToLogUrls\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToOutputBytes\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToOutputRecords\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToTotalCores\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToTasksMax\"),\n     
 ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ui.exec.ExecutorsListener.executorToJvmGCTime\")\n    ) ++ Seq(\n      // [SPARK-17163] Unify logistic regression interface. Private constructor has new signature.\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.this\")\n    ) ++ Seq(\n      // [SPARK-17498] StringIndexer enhancement for handling unseen labels\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.feature.StringIndexer\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.feature.StringIndexerModel\")\n    ) ++ Seq(\n      // [SPARK-17365][Core] Remove/Kill multiple executors together to reduce RPC call time\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.SparkContext\")\n    ) ++ Seq(\n      // [SPARK-12221] Add CPU time to metrics\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.TaskMetrics.this\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.status.api.v1.TaskMetricDistributions.this\")\n    ) ++ Seq(\n      // [SPARK-18481] ML 2.1 QA: Remove deprecated methods for ML\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.PipelineStage.validateParams\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.param.JavaParams.validateParams\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.param.Params.validateParams\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.GBTClassificationModel.validateParams\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegression.validateParams\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.GBTClassifier.validateParams\"),\n      
ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.LogisticRegressionModel.validateParams\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.classification.RandomForestClassificationModel.numTrees\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.feature.ChiSqSelectorModel.setLabelCol\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.evaluation.Evaluator.validateParams\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.regression.GBTRegressor.validateParams\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.regression.GBTRegressionModel.validateParams\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.regression.LinearRegressionSummary.model\"),\n      ProblemFilters.exclude[DirectMissingMethodProblem](\"org.apache.spark.ml.regression.RandomForestRegressionModel.numTrees\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.classification.RandomForestClassifier\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.classification.RandomForestClassificationModel\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.classification.GBTClassifier\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.classification.GBTClassificationModel\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.regression.RandomForestRegressor\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.regression.RandomForestRegressionModel\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.regression.GBTRegressor\"),\n      ProblemFilters.exclude[MissingTypesProblem](\"org.apache.spark.ml.regression.GBTRegressionModel\"),\n      
ProblemFilters.exclude[FinalMethodProblem](\"org.apache.spark.ml.classification.RandomForestClassificationModel.getNumTrees\"),\n      ProblemFilters.exclude[FinalMethodProblem](\"org.apache.spark.ml.regression.RandomForestRegressionModel.getNumTrees\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.RandomForestClassificationModel.numTrees\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.classification.RandomForestClassificationModel.setFeatureSubsetStrategy\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.regression.RandomForestRegressionModel.numTrees\"),\n      ProblemFilters.exclude[IncompatibleResultTypeProblem](\"org.apache.spark.ml.regression.RandomForestRegressionModel.setFeatureSubsetStrategy\")\n    ) ++ Seq(\n      // [SPARK-21680][ML][MLLIB]optimzie Vector coompress\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.mllib.linalg.Vector.toSparseWithSize\"),\n      ProblemFilters.exclude[ReversedMissingMethodProblem](\"org.apache.spark.ml.linalg.Vector.toSparseWithSize\")\n    ) ++ Seq(\n      // [SPARK-3181][ML]Implement huber loss for LinearRegression.\n      ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.param.shared.HasLoss.org$apache$spark$ml$param$shared$HasLoss$_setter_$loss_=\"),\n      ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.param.shared.HasLoss.getLoss\"),\n      ProblemFilters.exclude[InheritedNewAbstractMethodProblem](\"org.apache.spark.ml.param.shared.HasLoss.loss\")\n    )\n  }\n\n  def excludes(version: String) = version match {\n    case v if v.startsWith(\"3.1\") => v31excludes\n    case v if v.startsWith(\"3.0\") => v30excludes\n    case v if v.startsWith(\"2.4\") => v24excludes\n    case v if v.startsWith(\"2.3\") => v23excludes\n    case v if v.startsWith(\"2.2\") => v22excludes\n    case v if 
v.startsWith(\"2.1\") => v21excludes\n    case v if v.startsWith(\"2.0\") => v20excludes\n    case _ => Seq()\n  }\n}\n"}}}Content-Length: 155

{"id":2,"jsonrpc":"2.0","method":"textDocument/foldingRange","params":{"textDocument":{"uri":"file:///home/jeremys/git/spark/project/MimaExcludes.scala"}}}Content-Length: 79

{"id":"1","jsonrpc":"2.0","error":{"code":-32601,"message":"Method not found"}}Content-Length: 173

{"jsonrpc":"2.0","method":"window/logMessage","params":{"type":4,"message":"2020.10.09 17:44:16 INFO  no build target: using presentation compiler with only scala-library"}}Content-Length: 176

{"jsonrpc":"2.0","method":"window/logMessage","params":{"type":4,"message":"2020.10.09 17:44:16 WARN  no build target for: /home/jeremys/git/spark/project/MimaExcludes.scala"}}Content-Length: 1042

{"jsonrpc":"2.0","method":"window/logMessage","params":{"type":4,"message":"2020.10.09 17:44:16 ERROR Unexpected error initializing server\norg.eclipse.lsp4j.jsonrpc.ResponseErrorException: Method not found\n\tat org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.handleResponse(RemoteEndpoint.java:209)\n\tat org.eclipse.lsp4j.jsonrpc.RemoteEndpoint.consume(RemoteEndpoint.java:193)\n\tat org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.handleMessage(StreamMessageProducer.java:194)\n\tat org.eclipse.lsp4j.jsonrpc.json.StreamMessageProducer.listen(StreamMessageProducer.java:94)\n\tat org.eclipse.lsp4j.jsonrpc.json.ConcurrentMessageProcessor.run(ConcurrentMessageProcessor.java:113)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n"}}Content-Length: 2083

{"jsonrpc":"2.0","id":2,"result":[{"startLine":0,"endLine":14,"startCharacter":0,"endCharacter":3,"kind":"comment"},{"startLine":20,"endLine":32,"startCharacter":0,"endCharacter":3,"kind":"comment"},{"startLine":38,"endLine":39,"startCharacter":4,"endCharacter":18,"kind":"comment"},{"startLine":68,"endLine":69,"startCharacter":4,"endCharacter":26,"kind":"comment"},{"startLine":72,"endLine":76,"startCharacter":4,"endCharacter":48,"kind":"comment"},{"startLine":373,"endLine":374,"startCharacter":4,"endCharacter":41,"kind":"comment"},{"startLine":623,"endLine":624,"startCharacter":4,"endCharacter":70,"kind":"comment"},{"startLine":740,"endLine":741,"startCharacter":4,"endCharacter":92,"kind":"comment"},{"startLine":1454,"endLine":1455,"startCharacter":6,"endCharacter":116,"kind":"comment"},{"startLine":1502,"endLine":1504,"startCharacter":6,"endCharacter":43,"kind":"comment"},{"startLine":1657,"endLine":1658,"startCharacter":6,"endCharacter":79,"kind":"comment"},{"startLine":17,"endLine":18,"startCharacter":6,"endCharacter":52,"kind":"imports"},{"startLine":34,"endLine":1751,"startCharacter":20,"endCharacter":1,"kind":"region"},{"startLine":37,"endLine":106,"startCharacter":25,"endCharacter":3,"kind":"region"},{"startLine":110,"endLine":530,"startCharacter":25,"endCharacter":3,"kind":"region"},{"startLine":397,"endLine":400,"startCharacter":39,"endCharacter":5,"kind":"region"},{"startLine":534,"endLine":655,"startCharacter":25,"endCharacter":3,"kind":"region"},{"startLine":648,"endLine":654,"startCharacter":39,"endCharacter":5,"kind":"region"},{"startLine":659,"endLine":755,"startCharacter":25,"endCharacter":3,"kind":"region"},{"startLine":759,"endLine":838,"startCharacter":25,"endCharacter":5,"kind":"region"},{"startLine":842,"endLine":918,"startCharacter":25,"endCharacter":3,"kind":"region"},{"startLine":922,"endLine":1739,"startCharacter":25,"endCharacter":3,"kind":"region"},{"startLine":1477,"endLine":1480,"startCharacter":41,"endCharacter":7,"kind":"region"},{"sta
rtLine":1742,"endLine":1750,"startCharacter":47,"endCharacter":3,"kind":"region"}]}

Search terms

vim metals vim-lsp method not found initializing server

ckipp01 commented 4 years ago

Thanks for the report @rickyninja!

It's possible vim-lsp really is sending an invalid method, but I'm unable to make that judgement.

So it's actually the other way around: Metals is sending something to vim-lsp that vim-lsp doesn't implement yet. The spark repo you linked is a Maven project, but it also has sbt-specific files in project/, so Metals recognizes that there are two possible build definitions. It then sends a window/showMessageRequest asking you to choose which one you'd like. You can see this in the logs you posted here:

"method":"window/showMessageRequest","params":{"actions":[{"title":"sbt"},{"title":"mvn"}]

However, that method isn't implemented yet in vim-lsp. Ironically, there is a long-standing PR, still open, that would add support for it and even mentions Metals. It used to be that you could ignore the message request and manually trigger a build import instead, but in this case you can't: even if you trigger the import manually, Metals will still send the message request asking you to choose the build tool. It's good you brought this up, because I honestly hadn't thought about this scenario before. Part of me wants to say there isn't much we can do, since window/showMessageRequest is part of the spec and needs to be implemented by the client, but it's also not great for vim-lsp users. I'll leave this open for a bit while I think about whether it makes sense for Metals to offer a way around this situation.
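For reference, window/showMessageRequest is a server-to-client request, so a spec-compliant client must send back a response whose result is the chosen MessageActionItem. The sketch below (in Python, purely illustrative; the request id and message wording are assumptions, and the helper names are made up) shows what a minimal client-side answer and its LSP Content-Length framing look like. A client that instead replies with error -32601 "Method not found", as in the log above, aborts Metals' initialization.

```python
import json

def frame(message: dict) -> bytes:
    """Serialize a JSON-RPC message with the LSP Content-Length header."""
    body = json.dumps(message).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def answer_show_message_request(request: dict, chosen_title: str) -> dict:
    """Build the client response to a window/showMessageRequest.

    The result must be the selected action item, or null if the user
    dismissed the prompt without choosing.
    """
    actions = request["params"]["actions"]
    chosen = next((a for a in actions if a["title"] == chosen_title), None)
    return {"jsonrpc": "2.0", "id": request["id"], "result": chosen}

# The kind of request Metals sends when it finds both Maven and sbt build
# definitions (id and message text here are illustrative, not from the log):
request = {
    "jsonrpc": "2.0",
    "id": "1",
    "method": "window/showMessageRequest",
    "params": {
        "type": 3,
        "message": "Multiple build definitions found. Which would you like to use?",
        "actions": [{"title": "sbt"}, {"title": "mvn"}],
    },
}

response = answer_show_message_request(request, "sbt")
print(frame(response).decode("utf-8"))
```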

As an aside, it also looks like the vim-lsp settings here are pretty out of date. I haven't played around with vim-lsp in quite some time, so I'll see if I can update them. In the meantime, coc-metals does implement the full LSP spec, if you want to keep using Vim for this project and are considering another LSP client. If not, let's see what we can do to solve this.

rickyninja commented 4 years ago

@ckipp01 I appreciate the details and context you have provided. I'd prefer to keep using vim-lsp for now, since I already have it working with pyls and clangd, and it has lighter dependencies. I'll keep an eye on things to see where it ends up in the future, but feel free to close this issue at your discretion.

ckipp01 commented 4 years ago

So I added a couple of comments in various places in vim-lsp about features that would really help Metals users on vim-lsp, and I've also sent in a PR to update the settings in the vim-lsp-settings repo. I'm going to go ahead and close this, since it doesn't make much sense for Metals to work around the issue: window/showMessageRequest is part of the spec and really needs to be implemented client-side. Just as importantly, I see that vim-lsp supports neither window/showMessage nor window/logMessage, which makes it even harder to surface information to Metals users of vim-lsp. Hopefully some of these missing features can be added; I think they would greatly improve your experience. If you get stuck with anything else, don't hesitate to create another issue.
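Unlike window/showMessageRequest, window/showMessage and window/logMessage are one-way notifications, so surfacing them only takes a small dispatcher on the client side. A minimal sketch in Python (the function name and dispatch shape are illustrative; the MessageType values 1=Error, 2=Warning, 3=Info, 4=Log come from the LSP spec):

```python
import json

# LSP MessageType values per the spec: 1=Error, 2=Warning, 3=Info, 4=Log
LEVELS = {1: "ERROR", 2: "WARN", 3: "INFO", 4: "LOG"}

def handle_notification(raw: str, sink: list) -> None:
    """Route server->client message notifications into a displayable sink."""
    msg = json.loads(raw)
    if msg.get("method") in ("window/logMessage", "window/showMessage"):
        params = msg["params"]
        level = LEVELS.get(params["type"], "?")
        sink.append(f'{level}: {params["message"]}')

# Feeding it one of the notifications from the captured log above:
shown: list = []
handle_notification(
    '{"jsonrpc":"2.0","method":"window/logMessage",'
    '"params":{"type":4,"message":"no build target"}}',
    shown,
)
print(shown[0])  # → LOG: no build target
```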