spark.conf.set("spark.sql.parquet.compression.codec", "brotli")
df.write.format("delta").mode("overwrite").saveAsTable(table_name, path=delta_table_path)
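For context, the snippet above is not self-contained. A minimal runnable version might look like the following sketch; `table_name` and `delta_table_path` are hypothetical placeholders, and an active Spark session with Delta Lake (as on a Databricks cluster) is assumed.

# Minimal repro sketch (assumptions: Delta Lake is available on the cluster;
# table_name and delta_table_path are hypothetical placeholders).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

table_name = "demo_brotli_table"             # hypothetical table name
delta_table_path = "/tmp/delta/demo_brotli"  # hypothetical storage path

# Small demo DataFrame to write.
df = spark.range(100).toDF("value")

# Delta tables store their data as Parquet files, so this setting selects
# the Parquet compression codec used for the write.
spark.conf.set("spark.sql.parquet.compression.codec", "brotli")

# Fails at write time if org.apache.hadoop.io.compress.BrotliCodec is not
# on the cluster classpath.
df.write.format("delta").mode("overwrite").saveAsTable(table_name, path=delta_table_path)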
The error message reads:
spark.conf.set("Spark.sql.parquet.compression.codec "," brotli ")
df.write.format("delta").Mode("overwite").saVeastable(Table -( - trat beim Aufrufen von O432.Saveastable auf. org.apache.hadoop.io.compress.brotlicodec wurde nicht unter org.apache.parquet.hadoop.Codecfactory.getCodec (Codecfactory.java:254) bei org.apache.parquet.hadoop.Codecfactory $ $ $ $ $ $ $ $ $ $ & sresceTesCressory. org.apache.parquet.hadoop.codecfactory.createCompressor (codEcfactory.java:219) at org.apache.parquet.hadoop.Codecfactory.getCompressor (codecfactory.java:202)
at org.apache.parquet.hadoop.parqueTrecordwriter. (Parquetrecordwriter.java:152)
at org.apache.parquet.hadoop.parquetoutputformat.getRecordWriter (Parquetoutputformat.java:565)
ATR /> AT AT ATREI. org.apache.parquet.hadoop.parquetoutputformat.getRecordwriter (Parquetoutputformat.java:473)
at org.apache.parquet.hadoop.parquetputformat.getRecordwriter (Parquetoutputformat.java:462)
at AT AT AT AT org.apache.spark.sql.execution.datasources.parquet.ParquetoutputWriter. (Parquetoutputwriter.scala: 36)
bei org.apache.spark.sql.execution.datasources.Parquet.ScalaTils $$ $ $ $ $ 1.Newinstance (parquetils /> at org.apache.spark.sql.execution.datasources.singledirectoryDatawriter.NeWoutputWriter (FileFormatDatawriter.scala: 205)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDatawriter. (FileFormatDatawriter.scala: 187)
bei org.apache.spark.sql.execution.dataSources.Fileformatriter $ .ExecUTETASK (Dateiformattriter.ScalaSources. org.apache.spark.sql.execution.datasources.writeFileSexec. org.apache.spark.rdd.rdd. spark.conf.set ("spark.sql.parquet.compression.codec", "brotli")
---> 10 df.write.format ("Delta"). modus ("overwrite").>
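The bottom of the Java stack shows Parquet's CodecFactory failing to load the codec class by name. As a quick check, one can ask the driver JVM directly whether that class is resolvable; this is a diagnostic sketch, not part of the original post, and `spark._jvm` is py4j's internal (but commonly used) gateway to the driver JVM.

# Diagnostic sketch: probe the driver JVM for the codec class that
# CodecFactory.getCodec tries to load, using py4j reflection via spark._jvm.
try:
    spark._jvm.java.lang.Class.forName("org.apache.hadoop.io.compress.BrotliCodec")
    print("BrotliCodec is on the driver classpath")
except Exception as exc:
    # On a runtime that ships no Brotli codec jar this surfaces a
    # ClassNotFoundException, matching the stack trace above. Note that the
    # write itself fails on the executors, so this is indicative, not proof.
    print("BrotliCodec not found:", exc)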
Brotli compression cannot be used in Azure Databricks Runtime 15.4 LTS