Local PySpark .show() fails with SparkException: Job aborted due to stage failure


Post by Anonymous »

I am trying to run PySpark locally after installing it with pip install pyspark. All the env paths are set as well. The notebook runs on port 8889, but the problem is in the .show() method. Session setup:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("PracticeSpark")
    .config("spark.sql.execution.pyspark.udf.faulthandler.enabled", "true")
    .config("spark.python.worker.faulthandler.enabled", "true")
    .getOrCreate()
)

The session itself comes up fine:

SparkSession - hive
SparkContext
Spark UI
Version   v4.0.0
Master    local[*]
AppName   PySparkShell
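One thing I have since read (I am not sure it applies here) is that on Windows this kind of worker crash can happen when Spark launches a different Python for the workers than the one running the notebook. A sketch of the variant I would try, pinning both sides to the notebook's interpreter through the standard PYSPARK_PYTHON / PYSPARK_DRIVER_PYTHON environment variables before creating the session:

import os
import sys

from pyspark.sql import SparkSession

# Pin the driver and the worker processes to the exact interpreter that
# runs this notebook, so Spark cannot spawn a different Python.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

spark = SparkSession.builder.appName("PracticeSpark").getOrCreate()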
The snippet that fails:

data = [
    ("nitin", 23, "jaipur"),
    ("piyush", 23, "jaipur"),
]

column = ["name", "age", "city"]

emp = spark.createDataFrame(data=data, schema=column)

emp.show()
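As far as I understand, spark.range builds its rows entirely on the JVM, while createDataFrame over local Python objects has to round-trip through a Python worker, so comparing the two should tell whether only the Python worker path is broken. A small diagnostic sketch (not part of my original code):

# Rows generated on the JVM: no Python worker involved, so this should
# succeed even while the Python workers are crashing.
spark.range(5).show()

# Local Python data is shipped through a Python worker; this is the
# path that fails with the traceback below.
spark.createDataFrame([("nitin", 23)], ["name", "age"]).show()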

Full error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
Cell In[4], line 1
----> 1 emp.show()

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pyspark\sql\classic\dataframe.py:285, in DataFrame.show(self, n, truncate, vertical)
284 def show(self, n: int = 20, truncate: Union[bool, int] = True, vertical: bool = False) -> None:
--> 285 print(self._show_string(n, truncate, vertical))

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pyspark\sql\classic\dataframe.py:303, in DataFrame._show_string(self, n, truncate, vertical)
297 raise PySparkTypeError(
298 errorClass="NOT_BOOL",
299 messageParameters={"arg_name": "vertical", "arg_type": type(vertical).__name__},
300 )
302 if isinstance(truncate, bool) and truncate:
--> 303 return self._jdf.showString(n, 20, vertical)
304 else:
305 try:

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\py4j\java_gateway.py:1362, in JavaMember.__call__(self, *args)
1356 command = proto.CALL_COMMAND_NAME +\
1357 self.command_header +\
1358 args_command +\
1359 proto.END_COMMAND_PART
1361 answer = self.gateway_client.send_command(command)
-> 1362 return_value = get_return_value(
1363 answer, self.gateway_client, self.target_id, self.name)
1365 for temp_arg in temp_args:
1366 if hasattr(temp_arg, "_detach"):

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\pyspark\errors\exceptions\captured.py:282, in capture_sql_exception..deco(*a, **kw)
279 from py4j.protocol import Py4JJavaError
281 try:
--> 282 return f(*a, **kw)
283 except Py4JJavaError as e:
284 converted = convert_exception(e.java_exception)

File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\py4j\protocol.py:327, in get_return_value(answer, gateway_client, target_id, name)
325 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
326 if answer[1] == REFERENCE_TYPE:
--> 327 raise Py4JJavaError(
328 "An error occurred while calling {0}{1}{2}.\n".
329 format(target_id, ".", name), value)
330 else:
331 raise Py4JError(
332 "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
333 format(target_id, ".", name, value))

Py4JJavaError: An error occurred while calling o54.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (host.docker.internal executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:624)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:35)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:945)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:925)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:532)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:601)
at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)
at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:50)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:402)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:901)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:901)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:338)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:171)
at org.apache.spark.scheduler.Task.run(Task.scala:147)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$5(Executor.scala:647)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:80)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:77)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:650)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: java.io.EOFException
at java.base/java.io.DataInputStream.readFully(DataInputStream.java:210)
at java.base/java.io.DataInputStream.readInt(DataInputStream.java:385)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:933)
... 26 more

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$3(DAGScheduler.scala:2935)
at scala.Option.getOrElse(Option.scala:201)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2935)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2927)
at scala.collection.immutable.List.foreach(List.scala:334)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2927)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1295)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1295)
at scala.Option.foreach(Option.scala:437)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1295)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3207)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3141)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3130)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:50)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1009)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2484)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2505)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2524)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:544)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:497)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:58)
at org.apache.spark.sql.classic.Dataset.collectFromPlan(Dataset.scala:2244)
at org.apache.spark.sql.classic.Dataset.$anonfun$head$1(Dataset.scala:1379)
at org.apache.spark.sql.classic.Dataset.$anonfun$withAction$2(Dataset.scala:2234)
at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:654)
at org.apache.spark.sql.classic.Dataset.$anonfun$withAction$1(Dataset.scala:2232)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$8(SQLExecution.scala:162)
at org.apache.spark.sql.execution.SQLExecution$.withSessionTagsApplied(SQLExecution.scala:268)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$7(SQLExecution.scala:124)
at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:112)
at org.apache.spark.sql.artifact.ArtifactManager.withClassLoaderIfNeeded(ArtifactManager.scala:106)
at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:111)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:124)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:291)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:123)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:804)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:77)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:233)
at org.apache.spark.sql.classic.Dataset.withAction(Dataset.scala:2232)
at org.apache.spark.sql.classic.Dataset.head(Dataset.scala:1379)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2810)
at org.apache.spark.sql.classic.Dataset.getRows(Dataset.scala:339)
at org.apache.spark.sql.classic.Dataset.showString(Dataset.scala:375)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:75)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:52)
at java.base/java.lang.reflect.Method.invoke(Method.java:580)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:184)
at py4j.ClientServerConnection.run(ClientServerConnection.java:108)
at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:624)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:599)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:35)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:945)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:925)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:532)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:601)
at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)
at scala.collection.Iterator$$anon$9.hasNext(Iterator.scala:583)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:50)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:402)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:901)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:901)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:338)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:171)
at org.apache.spark.scheduler.Task.run(Task.scala:147)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$5(Executor.scala:647)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:80)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:77)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:650)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
... 1 more
Caused by: java.io.EOFException
at java.base/java.io.DataInputStream.readFully(DataInputStream.java:210)
at java.base/java.io.DataInputStream.readInt(DataInputStream.java:385)
at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:933)
... 26 more
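One more detail from the trace that I cannot explain: the lost task ran on host.docker.internal, so my hostname apparently resolves through Docker Desktop even in local mode. In case that matters, a sketch of the session variant I would try next, forcing the driver onto localhost with the standard spark.driver.* settings:

spark = (
    SparkSession.builder
    .appName("PracticeSpark")
    # Bind and advertise the driver on localhost instead of the hostname
    # that resolved to host.docker.internal in the trace above.
    .config("spark.driver.bindAddress", "127.0.0.1")
    .config("spark.driver.host", "127.0.0.1")
    .getOrCreate()
)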
I have already tried to fix this by changing the JDK version (I read somewhere that the JDK version can affect this). I downgraded to JDK 11, which made things worse: not even a single cell executed (which would make sense if, as I understand it, Spark 4.0 requires Java 17 or newer). Then I upgraded back to JDK 17 and still get the error above. I do not know much more than that, since I only started yesterday.

What could the possible causes be? Am I missing something?
Python - 3.12.10
Java - 17
Spark - 4.0.0
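For reference, the versions above can be double-checked from inside the notebook like this (a small sketch; the comments note what each line should correspond to in my setup):

import os
import sys

print(sys.version)                       # Python 3.12.10
print(sys.executable)                    # the Microsoft Store path from the traceback
print(os.environ.get("JAVA_HOME"))       # should point at the JDK 17 install
print(os.environ.get("PYSPARK_PYTHON"))  # None unless set explicitly
print(spark.version)                     # 4.0.0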
