Runtime error when trying to train RTDetrv2 with Transformers


Post by Anonymous »

I am trying to train RTDetrv2 to detect the digits on water meters. I am using an ipynb notebook from here: https://colab.research.google.com/githu ... kzr8otgkio. But when I run this code:
```python
model = AutoModelForObjectDetection.from_pretrained(
    CHECKPOINT,
    id2label=id2label,
    label2id=label2id,
    anchor_image_size=None,
    ignore_mismatched_sizes=True,
)
```
this error appears:
```text
RuntimeError: Error(s) in loading state_dict for RTDetrV2ForObjectDetection:
size mismatch for model.denoising_class_embed.weight: copying a param with shape torch.Size([81, 256]) from checkpoint, the shape in current model is torch.Size([12, 256]).
size mismatch for model.enc_score_head.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.enc_score_head.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.0.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.0.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.1.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.1.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.2.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.2.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.3.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.3.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.4.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.4.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.5.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.5.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
```
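For context on the shapes in that log: a checkpoint pretrained on COCO ships classification heads sized for 80 classes (`[80, 256]`), while a model configured from an 11-entry `id2label` expects `[11, 256]` heads. Here is a minimal pure-Python sketch of what `ignore_mismatched_sizes=True` is meant to do (shapes taken from the error message; the dict keys are illustrative, not the full RTDetrV2 state dict):

```python
# Shapes copied from the error message; keys are illustrative only.
checkpoint = {
    "model.enc_score_head.weight": (80, 256),    # COCO head: 80 classes
    "model.backbone.conv1.weight": (64, 3, 7, 7),
}
current_model = {
    "model.enc_score_head.weight": (11, 256),    # custom head: 11 labels
    "model.backbone.conv1.weight": (64, 3, 7, 7),
}

# Keep only checkpoint tensors whose shape matches the new model;
# mismatched heads are skipped and left freshly initialized.
loadable = {k: v for k, v in checkpoint.items()
            if current_model.get(k) == v}
print(sorted(loadable))  # only the backbone weight survives the filter
```

So the size mismatches themselves are expected when fine-tuning on a dataset with a different number of classes; the question is why they surface as a hard `RuntimeError` instead of a warning in one environment but not the other.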
I am running this ipynb in PyCharm. There it does not show the error, and when I try to train just one epoch, it still trains. I don't know how to check whether it actually worked or not.
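One generic way to check whether a training step actually did anything (independent of RTDetrv2; the tiny model and optimizer below are placeholders, not the notebook's) is to snapshot the parameters before a step and compare afterwards:

```python
import torch

# Placeholder model/optimizer; the technique applies to any nn.Module.
model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Snapshot every parameter before the training step.
before = {n: p.detach().clone() for n, p in model.named_parameters()}

x, y = torch.randn(8, 4), torch.randn(8, 2)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()

# List the parameters that changed; an empty list would mean the
# step was a no-op (e.g. frozen weights or zero gradients).
changed = [n for n, p in model.named_parameters()
           if not torch.equal(before[n], p.detach())]
print(changed)
```

Applying the same snapshot-and-compare check around one epoch of the notebook's training loop would tell you whether the weights are really being updated.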
