How can I load a pretrained transformers model that was downloaded manually?

Python-Programme

Post by Guest »

I can't download models from Hugging Face because of SSL certificate errors, probably caused by my company firewall. So I'm trying to load the model from disk with TFPreTrainedModel.from_pretrained() and AutoTokenizer.from_pretrained(). From the docs this looks like a valid option. Appreciate any help!
Code


from transformers import pipeline, TFPreTrainedModel, AutoTokenizer
import os

dir = "./models/twitter-roberta-base-sentiment-latest/"
print(os.listdir(dir)) # confirm the folder contents

model = TFPreTrainedModel.from_pretrained(dir)
tokenizer = AutoTokenizer.from_pretrained(dir)

analyze = pipeline(task="sentiment-analysis", model=model, tokenizer=tokenizer)
print(analyze("this is good"))
print(analyze("this is bad"))
Output


2025-02-21 16:40:05.896448: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-02-21 16:40:06.653841: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From C:\Users\xxxxx\.pyenv\pyenv-win\versions\3.12.8\Lib\site-packages\tf_keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

['config.json', 'gitattributes', 'merges.txt', 'pytorch_model.bin', 'README.md', 'special_tokens_map.json', 'tf_model.h5', 'vocab.json']
Traceback (most recent call last):
File "C:\Users\xxxxx\OneDrive - DuPont\Python Projects\huggingface\sentiment.py", line 8, in <module>
model = TFPreTrainedModel.from_pretrained(dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxxxx\.pyenv\pyenv-win\versions\3.12.8\Lib\site-packages\transformers\modeling_tf_utils.py", line 2726, in from_pretrained
config, model_kwargs = cls.config_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'from_pretrained'
docs
https://huggingface.co/docs/transformer ... n_classes/model#transformers.TFPreTrainedModel


pretrained_model_name_or_path (str, optional) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
*A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.*
A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).
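For what it's worth, the traceback points at the class, not the path: TFPreTrainedModel is the abstract base class (its config_class is None), so from_pretrained() has to be called on a concrete class that can resolve the architecture from config.json, such as TFAutoModelForSequenceClassification. A minimal sketch under that assumption, using the same local folder as above (the snippet skips loading when the folder is absent, so it degrades gracefully):

```python
import os

model_dir = "./models/twitter-roberta-base-sentiment-latest/"

if os.path.isdir(model_dir):
    # Use a concrete Auto class instead of the TFPreTrainedModel base class;
    # it picks the right model architecture from config.json in the folder.
    from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline

    model = TFAutoModelForSequenceClassification.from_pretrained(model_dir)
    tokenizer = AutoTokenizer.from_pretrained(model_dir)

    analyze = pipeline(task="sentiment-analysis", model=model, tokenizer=tokenizer)
    print(analyze("this is good"))
    print(analyze("this is bad"))
else:
    print(f"model directory not found: {model_dir}")
```

Since the folder listing above contains tf_model.h5, the TF class should load the weights directly; with only pytorch_model.bin present, passing from_pt=True to from_pretrained() would be needed instead.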
