Code: Select all
| Text_Dialog        | joy | anger | sad | happy |
|--------------------|-----|-------|-----|-------|
| -Tom:"" -Jerry:""  |  10 |    50 |  10 |    30 |
| ...                |  46 |     0 |   0 |    54 |
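Each row of the table can be read as an emotion distribution over a dialog, so a ground-truth label per row is just the column with the highest score. A minimal sketch (the row values are copied from the table above; the helper name `dominant_emotion` is my own, not from the dataset):

```python
# Hypothetical sketch: derive a dominant-emotion label per dialog row,
# assuming the percentage columns (joy/anger/sad/happy) sum to 100.
rows = [
    {"Text_Dialog": '-Tom:"" -Jerry:""', "joy": 10, "anger": 50, "sad": 10, "happy": 30},
    {"Text_Dialog": "...", "joy": 46, "anger": 0, "sad": 0, "happy": 54},
]

EMOTIONS = ("joy", "anger", "sad", "happy")

def dominant_emotion(row):
    """Return the emotion column with the highest score for one row."""
    return max(EMOTIONS, key=lambda e: row[e])

for row in rows:
    print(dominant_emotion(row))  # first row: anger, second row: happy
```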
Using this dataset, [url=viewtopic.php?t=14917]I want[/url] to build emotion sentiment detection on text data with LLMs, FAISS, and Python 3.10. The output currently looks like this:

Code: Select all
enter your question:hello
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
System: give sentiment on the emotion of the user question Context: ['Hello ? Hello ? ' ' ( No response ; Silence ) '
' Hello ? Who is calling , please ? ' ' ( No response ) '
' Listen , I know who you are . If you call this number again , I ’ ll call the police . You ’ ll be arrested . I ’ Ve got your number . ']
['Hi.Mike . ' ' Hi , Peter ! How are you ? ' ' Fine , thanks.And you ? '
' Very well , thanks . ']
['Hello , Mary . ' ' Hello , Brian . ' ' Here is my friend Bob . '
' Hello , Bob . ']
Human: hello! Welcome to my friend. [ Hello ('Hello? Hello? Hi! Hi? '! Hello??. Hello?') Hello Hello Hello ('Hello? Hi? '! Hello?.Hello?') Hello Hello Hello Hello
The relevant code is the following:

Code: Select all
# code for creating a vector store and indexing within the dataset
...
retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 3})

# prompt code for emotion sentiment analysis
from transformers import pipeline
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.llms import HuggingFacePipeline
from langchain_core.prompts import ChatPromptTemplate

# wrap a GPT-2 text-generation pipeline as a LangChain LLM
text_generator = pipeline("text-generation", model="gpt2", max_new_tokens=50)
llm = HuggingFacePipeline(pipeline=text_generator)

system_prompt = (
    "give sentiment on the emotion of the user question "
    "Context: {context}"
)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        ("human", "{input}"),
    ]
)

# stuff the retrieved documents into the prompt, then run the RAG chain
question_answer_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)

query = input("enter your question:")
response = rag_chain.invoke({"input": query})
print(response["answer"])
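Conceptually, `as_retriever(search_type="similarity", search_kwargs={"k": 3})` embeds the query and returns the k documents whose embeddings are nearest to it. A toy sketch of that idea using a bag-of-words embedding and cosine similarity instead of the real FAISS index (all names here are illustrative, not LangChain APIs):

```python
# Toy sketch of similarity retrieval: embed texts as word-count vectors,
# then return the k documents most cosine-similar to the query.
import math
import re
from collections import Counter

def embed(text):
    """Bag-of-words 'embedding': lowercase word counts."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=3):
    """Return the k docs with the highest similarity to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Hello? Who is calling, please?",
    "Very well, thanks.",
    "Here is my friend Bob.",
]
print(retrieve("hello", docs, k=1))  # the dialog containing "Hello" ranks first
```

The real retriever works the same way, only with dense transformer embeddings and a FAISS index for the nearest-neighbour search.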