I am trying to convert a dictionary into a Spark DataFrame, but all of my values end up appended to a single row. For my end result I want a Spark DataFrame with 3 rows, one corresponding to each unique_survey_id. Please write PySpark code for this.
inferenced_df = {
    'unique_survey_id': ['0001', '0002', '0003'],
    'verbatim': ["My name is John", "I am 23 yrs old", "I live in US"],
    'classification_critical_process_fg': [0, 0, 0],
    'reason_critical_process_fg': [{"Customer's Issue": "I wish there were more providers ", 'Status of Resolution': 'Unresolved', "Verbatim chunk explaining customer's efforts": '', 'Reason for classification': "Although the issue is unresolved, So flag is 0"},
    {"Customer's Issue": 'I am trying to make a payment', 'Status of Resolution': 'Unresolved', "Verbatim chunk explaining customer's efforts": '', 'Reason for classification': "Although the issue is unresolved So flag is 0"},
    {"Customer's Issue": '', 'Status of Resolution': '', "Verbatim chunk explaining customer's efforts": '', 'Reason for classification': 'The review does not mention any issue or negative experience. So the flag is 0'}],
    'classification_critical_technical_fg': ['No', 'No', 'No'],
    'reason_critical_technical_fg': ['The review mentions difficulty in finding provider.', 'The review mentions an unresolved issue ', 'The review does not mention any technical issues'],
    'classification_critical_crc_escalation_fg': ['Yes', 'Yes', 'No'],
    'reason_critical_crc_escalation_fg': ['The customer is expressing frustration.', 'The customer is expressing dissatisfaction', 'The review does not mention any unresolved issues.'],
    'classification_insight_experience_fg': ['Yes', 'No', 'Yes'],
    'reason_insight_experience_fg': ["The review mentions a suggestion", 'The review mentions an unresolved', "The review explicitly mentions positive feedback"],
    'classification_insight_process_fg': [0, 0, 0],
    'reason_insight_process_fg': [{"Customer's Issue": "I need a diabetic eye exam ", 'Status of Resolution': 'Unresolved', "Verbatim chunk explaining customer's efforts": '', 'Reason for classification': 'Customer has just stated the issue.'}, {"Customer's Issue": 'I am trying to make a payment ', 'Status of Resolution': 'Unresolved', "Verbatim chunk explaining customer's efforts": '', 'Reason for classification': 'Customer has just stated the issue.'}, {"Customer's Issue": '', 'Status of Resolution': '', "Verbatim chunk explaining customer's efforts": '', 'Reason for classification': "The customer review does not mention any issue or negative experience."}]
}
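Since the dict is column-oriented (each key maps to a list of three values), one way to get three rows is to transpose it into a list of per-row records before calling createDataFrame. A minimal sketch, assuming an existing Spark session and that the nested dicts are acceptable as map columns:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Transpose the column-oriented dict into one record per survey id:
# zip(*values) pairs up the i-th element of every column list.
columns = list(inferenced_df.keys())
records = [dict(zip(columns, row)) for row in zip(*inferenced_df.values())]

# Spark infers the schema from the records; the nested dicts
# (e.g. reason_critical_process_fg) are inferred as map<string,string> columns.
df = spark.createDataFrame(records)

df.show(truncate=False)   # should print 3 rows, one per unique_survey_id

Passing the whole dict directly to createDataFrame treats it as a single record, which is why everything lands in one row. On older Spark versions the list-of-dicts form prints a deprecation warning; wrapping each record in pyspark.sql.Row (Row(**record)) behaves the same way.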