Runtime error when trying to train RTDetrv2 with Transformers

Topic review

by Anonymous » 28 Feb 2025, 05:37

I am trying to train RTDetrv2 to detect the digits on water meters. I am using an IPYNB notebook from here: https://colab.research.google.com/githu ... kzr8otgkio. But when I run this code:
from transformers import AutoModelForObjectDetection

# CHECKPOINT, id2label, and label2id are defined earlier in the notebook
model = AutoModelForObjectDetection.from_pretrained(
    CHECKPOINT,
    id2label=id2label,
    label2id=label2id,
    anchor_image_size=None,
    ignore_mismatched_sizes=True,
)
this error appears:
`RuntimeError: Error(s) in loading state_dict for RTDetrV2ForObjectDetection:
size mismatch for model.denoising_class_embed.weight: copying a param with shape torch.Size([81, 256]) from checkpoint, the shape in current model is torch.Size([12, 256]).
size mismatch for model.enc_score_head.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.enc_score_head.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.0.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.0.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.1.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.1.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.2.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.2.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.3.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.3.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.4.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.4.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
size mismatch for model.decoder.class_embed.5.weight: copying a param with shape torch.Size([80, 256]) from checkpoint, the shape in current model is torch.Size([11, 256]).
size mismatch for model.decoder.class_embed.5.bias: copying a param with shape torch.Size([80]) from checkpoint, the shape in current model is torch.Size([11]).
`
I am running this ipynb in PyCharm. There it does not show the error, and when I try to train for just one epoch, it does train. I don't know how to check whether it actually worked or not.
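For context, the per-class head sizes in the mismatch message come from the label maps: the checkpoint was trained on 80 COCO classes, while the new head is sized from `id2label` (11 classes here, plus one extra background slot in `denoising_class_embed`). A small, hypothetical sanity check for the maps (`check_label_maps` is not part of the notebook, just a sketch):

```python
# Hypothetical helper (not from the notebook): checks that id2label and
# label2id are mutually consistent before passing them to from_pretrained.
# The "current model" shapes in the mismatch message, e.g. torch.Size([11, 256]),
# are derived from len(id2label), so a wrong map silently resizes the head.
def check_label_maps(id2label, label2id):
    if len(id2label) != len(label2id):
        raise ValueError("id2label and label2id have different sizes")
    for idx, name in id2label.items():
        if label2id.get(name) != idx:
            raise ValueError(f"inconsistent entry for {name!r}")
    return len(id2label)  # this count becomes model.config.num_labels
```

If this count matches the number of digit classes you expect, the resized head (and the size-mismatch messages) are normal for fine-tuning with `ignore_mismatched_sizes=True`.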
