I can't download models from Hugging Face because of SSL certificate errors; it may be my company's firewall. Instead, I'm trying to load the model from disk with TFPreTrainedModel.from_pretrained() and AutoTokenizer.from_pretrained(). From the docs this appears to be a valid option. Any help is appreciated!

Code
from transformers import pipeline, TFPreTrainedModel, AutoTokenizer
import os
dir = "./models/twitter-roberta-base-sentiment-latest/"
print(os.listdir(dir)) # confirm the folder contents
model = TFPreTrainedModel.from_pretrained(dir)
tokenizer = AutoTokenizer.from_pretrained(dir)
analyze = pipeline(task="sentiment-analysis", model=model, tokenizer=tokenizer)
print(analyze("this is good"))
print(analyze("this is bad"))
Output
2025-02-21 16:40:05.896448: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-02-21 16:40:06.653841: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From C:\Users\xxxxx\.pyenv\pyenv-win\versions\3.12.8\Lib\site-packages\tf_keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.
['config.json', 'gitattributes', 'merges.txt', 'pytorch_model.bin', 'README.md', 'special_tokens_map.json', 'tf_model.h5', 'vocab.json']
Traceback (most recent call last):
File "C:\Users\xxxxx\OneDrive - DuPont\Python Projects\huggingface\sentiment.py", line 8, in
model = TFPreTrainedModel.from_pretrained(dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxx\.pyenv\pyenv-win\versions\3.12.8\Lib\site-packages\transformers\modeling_tf_utils.py", line 2726, in from_pretrained
config, model_kwargs = cls.config_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'from_pretrained'
docs
https://huggingface.co/docs/transformers/v4.49.0/en/main_classes/model#transformers.TFPreTrainedModel
pretrained_model_name_or_path (str, optional) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
- *A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.*
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
- None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).
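For context, a likely cause suggested by the traceback: TFPreTrainedModel is the abstract base class, whose config_class attribute is None, which is exactly the 'NoneType' object in the AttributeError. A task-specific auto class resolves the concrete architecture from the local config.json instead. A minimal sketch, assuming the goal is sentiment classification and the folder contains the tf_model.h5 shown in the directory listing above:

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline


def load_local_pipeline(model_dir: str):
    # The auto class reads config.json from the local directory and picks
    # the concrete model class for this checkpoint; the abstract base class
    # TFPreTrainedModel cannot, because its config_class is None.
    model = TFAutoModelForSequenceClassification.from_pretrained(model_dir)
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    return pipeline(task="sentiment-analysis", model=model, tokenizer=tokenizer)


if __name__ == "__main__":
    analyze = load_local_pipeline("./models/twitter-roberta-base-sentiment-latest/")
    print(analyze("this is good"))
    print(analyze("this is bad"))
```

If the library still tries to reach the network, setting the environment variable HF_HUB_OFFLINE=1 should force fully offline loading from the local files.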