I'm getting an error while trying to save a PySpark DataFrame to a Parquet file. The target directory is on an external volume attached to the workspace I'm working in, and Spark does create the empty test_2.parquet folder itself, but then throws the error. I'm running Spark locally.
Code I'm running (paths are obfuscated):
df = spark.read.parquet(different_path)
df.write.mode("overwrite").parquet("/PATH/test_2.parquet")
ls -l for this test_2.parquet directory, indicating that all other users should have write permission as well:
drwxrwxrwx 2 root ubuntu 0 Feb 17 07:56 test_2.parquet
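Since the mode bits look permissive, a quick sanity check outside Spark might show whether the mount accepts chmod at all (minimal sketch, assuming the same environment; "/PATH" is the obfuscated path from above):

```python
import os

# Sketch: does this external volume accept chmod at all?
# This is roughly the same operation Hadoop's RawLocalFileSystem performs.
target = "/PATH/test_2.parquet"
try:
    os.chmod(target, 0o777)
    print("chmod succeeded")
except PermissionError as e:
    print("chmod rejected by the volume:", e)
```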
Error:
Py4JJavaError Traceback (most recent call last)
/SCRIPT_PATH/script.py in line 2
----> 69 df.write.mode("overwrite").parquet("/PATH/test_2.parquet")
File /opt/conda/envs/default/lib/python3.9/site-packages/pyspark/sql/readwriter.py:1721, in DataFrameWriter.parquet(self, path, mode, partitionBy, compression)
1719 self.partitionBy(partitionBy)
1720 self._set_opts(compression=compression)
-> 1721 self._jwrite.parquet(path)
File /opt/conda/envs/default/lib/python3.9/site-packages/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
1316 command = proto.CALL_COMMAND_NAME +\
1317 self.command_header +\
1318 args_command +\
1319 proto.END_COMMAND_PART
1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
1323 answer, self.gateway_client, self.target_id, self.name)
1325 for temp_arg in temp_args:
1326 if hasattr(temp_arg, "_detach"):
File /opt/conda/envs/default/lib/python3.9/site-packages/pyspark/errors/exceptions/captured.py:179, in capture_sql_exception.<locals>.deco(*a, **kw)
177 def deco(*a: Any, **kw: Any) -> Any:
178 try:
--> 179 return f(*a, **kw)
180 except Py4JJavaError as e:
181 converted = convert_exception(e.java_exception)
File /opt/conda/envs/default/lib/python3.9/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
331 "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
332 format(target_id, ".", name, value))
Py4JJavaError: An error occurred while calling o55.parquet.
: ExitCodeException exitCode=1: chmod: changing permissions of '/PATH/test_2.parquet': Operation not permitted
at org.apache.hadoop.util.Shell.runCommand(Shell.java:1007)
at org.apache.hadoop.util.Shell.run(Shell.java:900)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1212)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1306)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1288)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:978)
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:660)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:700)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:672)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:699)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:672)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:699)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:672)
at org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:788)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:356)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupJob(HadoopMapReduceCommitProtocol.scala:188)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.writeAndCommit(FileFormatWriter.scala:269)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeWrite(FileFormatWriter.scala:304)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:190)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:190)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:107)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:461)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:461)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:437)
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:98)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:85)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:83)
at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:142)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:869)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:391)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:364)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:243)
at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:802)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:829)
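The failing frame is RawLocalFileSystem.setPermission, i.e. Hadoop trying to chmod the directory it has just created, which the external volume apparently rejects despite the open mode bits. If that turns out to be a limitation of the mount, one workaround I'm considering (untested sketch; assumes the data fits in driver memory and pyarrow is installed) is to bypass Hadoop's local filesystem entirely and write the Parquet file via pandas:

```python
# Untested workaround sketch: avoid Hadoop's RawLocalFileSystem (and its chmod call)
# by collecting to the driver and writing Parquet with pandas/pyarrow.
# "/PATH" is the obfuscated mount path from above.
pdf = df.toPandas()
pdf.to_parquet("/PATH/test_2.parquet", index=False)
```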