Does a custom Keras layer need from_config if it internally instantiates another custom layer?


Post by Anonymous »

I am working with Keras 3 and have two custom layers: SingleConv and DoubleConv. Both are registered with @keras.saving.register_keras_serializable.

DoubleConv does not accept a layer instance as an argument. Instead, it takes basic types (int, str) and creates SingleConv instances inside its __init__.
According to the Keras documentation:

"Für komplexere Objekte wie Ebenen oder Modelle, die an __init__ übergeben werden, muss die Deserialisierung explizit gehandhabt werden..."

I only pass int and str values to DoubleConv, but those values are used to build custom SingleConv instances internally. Do I have to implement from_config for DoubleConv because SingleConv is used internally?
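As far as I understand it, that passage describes the case sketched below (a hypothetical WrapperExample layer, not one of my layers), where an actual layer instance is passed to __init__, so the config holds a serialized layer and from_config has to rebuild it explicitly:

Code: Select all

import keras

# Hypothetical illustration of the documented case: a layer that takes
# another layer instance as a constructor argument.
@keras.saving.register_keras_serializable(package="MyLayers", name="WrapperExample")
class WrapperExample(keras.layers.Layer):
    def __init__(self, inner_layer, **kwargs):
        super().__init__(**kwargs)
        self.inner_layer = inner_layer

    def call(self, inputs):
        return self.inner_layer(inputs)

    def get_config(self):
        config = super().get_config()
        # The nested layer instance has to be serialized into the config explicitly.
        config.update({"inner_layer": keras.saving.serialize_keras_object(self.inner_layer)})
        return config

    @classmethod
    def from_config(cls, config):
        # ...and deserialized again before being passed to __init__.
        config["inner_layer"] = keras.saving.deserialize_keras_object(config["inner_layer"])
        return cls(**config)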

Code: Select all

import keras

# SingleConv is another custom layer, registered with
# @keras.saving.register_keras_serializable in the same way (definition not shown here).

@keras.saving.register_keras_serializable(package="MyLayers", name="DoubleConv")
class DoubleConv(keras.layers.Layer):
    """
    A module consisting of two consecutive convolution layers (e.g. BatchNorm3d+ReLU+Conv3d).
    We use (Conv3d+ReLU) by default.
    This can be changed however by providing the 'order' argument, e.g. in order
    to change to Conv3d+BatchNorm3d+ELU use order='cbe'.

    Args:
        in_channels (int): number of input channels
        out_channels (int): number of output channels
        encoder (bool): if True we're in the encoder path, otherwise we're in the decoder
        kernel_size (int or tuple): size of the convolving kernel
        order (string): determines the order of layers, e.g.
            'cr' -> conv + ReLU
            'crb' -> conv + ReLU + batchnorm
            'cl' -> conv + LeakyReLU
            'ce' -> conv + ELU
        num_groups (int): number of groups for the GroupNorm
        upscale (int): number of the convolution to upscale in encoder if DoubleConv, default: 2
        dropout_prob (float or tuple): dropout probability for each convolution, default 0.1
        is3d (bool): if True use Conv3d instead of Conv2d layers
    """

    def __init__(self, in_channels, out_channels, encoder=True, padding='same', kernel_size=3, order='cr',
                 dropout_prob=0.1, upscale=2, is3d=True, **kwargs):
        super().__init__(**kwargs)

        self.in_channels = in_channels
        self.out_channels = out_channels
        self.encoder = encoder
        self.padding = padding
        self.kernel_size = kernel_size
        self.order = order
        self.dropout_prob = dropout_prob
        self.upscale = upscale
        self.is3d = is3d

        if self.encoder:
            # we're in the encoder path
            conv1_in_channels = self.in_channels
            if upscale == 1:
                conv1_out_channels = self.out_channels
            else:
                conv1_out_channels = self.out_channels // 2
            if conv1_out_channels < self.in_channels:
                conv1_out_channels = self.in_channels

            conv2_in_channels = conv1_out_channels
            conv2_out_channels = self.out_channels
        else:
            # we're in the decoder path, decrease the number of channels in the 1st convolution
            conv1_in_channels, conv1_out_channels = self.in_channels, self.out_channels
            conv2_in_channels, conv2_out_channels = self.out_channels, self.out_channels

        # if dropout_prob is a list/tuple, split it into separate
        # dropout probabilities for each convolution
        if isinstance(self.dropout_prob, (list, tuple)):
            dropout_prob1 = self.dropout_prob[0]
            dropout_prob2 = self.dropout_prob[1]
        else:
            dropout_prob1 = dropout_prob2 = self.dropout_prob

        self.conv1 = SingleConv(out_channels=conv1_out_channels, padding=self.padding,
                                kernel_size=self.kernel_size, order=self.order,
                                dropout_prob=dropout_prob1, is3d=self.is3d)

        self.conv2 = SingleConv(out_channels=conv2_out_channels, padding=self.padding,
                                kernel_size=self.kernel_size, order=self.order,
                                dropout_prob=dropout_prob2, is3d=self.is3d)

    def call(self, inputs, training=None):
        x = self.conv1(inputs, training=training)
        return self.conv2(x, training=training)

    def get_config(self):
        config = super().get_config()
        # Update the config with the custom layer's parameters
        config.update(
            {
                "in_channels": self.in_channels,
                "out_channels": self.out_channels,
                "encoder": self.encoder,
                "padding": self.padding,
                "kernel_size": self.kernel_size,
                "order": self.order,
                "dropout_prob": self.dropout_prob,
                "upscale": self.upscale,
                "is3d": self.is3d,
            }
        )
        return config
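To check whether the default from_config (which just calls cls(**config)) is enough here, I was thinking of a round-trip test along these lines. This is only a sketch; it assumes SingleConv is defined and registered as described above and that the layer sits inside a small functional model with a channels-last 3D input:

Code: Select all

import numpy as np
import keras

# Build a tiny model around DoubleConv.
inputs = keras.Input(shape=(16, 16, 16, 4))
outputs = DoubleConv(in_channels=4, out_channels=8)(inputs)
model = keras.Model(inputs, outputs)

# Save and reload; since both custom layers are registered,
# no custom_objects argument should be needed.
model.save("double_conv_test.keras")
reloaded = keras.models.load_model("double_conv_test.keras")

# The reloaded model should reproduce the original outputs.
x = np.random.rand(1, 16, 16, 16, 4).astype("float32")
np.testing.assert_allclose(model.predict(x), reloaded.predict(x), rtol=1e-5)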
