Functional layers

class dynn.layers.functional_layers.AdditionLayer(layer1, layer2)

Bases: dynn.layers.functional_layers.BinaryOpLayer

Addition of two layers.

This is the layer returned by the addition syntax:

AdditionLayer(layer1, layer2)(x) == layer1(x) + layer2(x)
# is the same thing as
add_1_2 = layer1 + layer2
add_1_2(x) == layer1(x) + layer2(x)
Parameters:
  • layer1 (base_layers.BaseLayer) – First layer
  • layer2 (base_layers.BaseLayer) – Second layer
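The `+` syntax works because the base layer class overloads addition to return an `AdditionLayer`. A minimal stand-in sketch of that pattern, using plain Python callables and numbers in place of dynn layers and dynet.Expression objects (all names here are hypothetical, not dynn's API):

```python
# Sketch of the "+" overloading pattern behind AdditionLayer.
# ToyLayer is a hypothetical stand-in; floats replace dynet.Expression.

class ToyLayer:
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, x):
        return self.fn(x)

    def __add__(self, other):
        # Mirrors how layer1 + layer2 yields an addition layer
        return ToyLayer(lambda x: self(x) + other(x))

double = ToyLayer(lambda x: 2 * x)
square = ToyLayer(lambda x: x * x)
add_1_2 = double + square
print(add_1_2(3))  # 2*3 + 3*3 = 15
```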
__init__(layer1, layer2)

Initialize self. See help(type(self)) for accurate signature.

class dynn.layers.functional_layers.BinaryOpLayer(layer1, layer2, binary_operation)

Bases: dynn.layers.base_layers.BaseLayer

This layer wraps two layers with a binary operation.

BinaryOpLayer(layer1, layer2, op)(x) == op(layer1(x), layer2(x))

This is useful for expressing the combination of two layers as another layer.

Parameters:
  • layer1 (base_layers.BaseLayer) – First layer
  • layer2 (base_layers.BaseLayer) – Second layer
  • binary_operation (function) – A binary operation on dynet.Expression objects
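The wrapper amounts to deferred application of the binary operation to both layers' outputs. A sketch with plain callables (assuming floats stand in for dynet.Expression objects; these toy names are not part of dynn):

```python
import operator

class ToyBinaryOpLayer:
    """Sketch of the BinaryOpLayer pattern with plain callables."""

    def __init__(self, layer1, layer2, binary_operation):
        self.layer1 = layer1
        self.layer2 = layer2
        self.binary_operation = binary_operation

    def __call__(self, x):
        # op(layer1(x), layer2(x))
        return self.binary_operation(self.layer1(x), self.layer2(x))

plus_one = ToyBinaryOpLayer.__call__  # noqa: just illustrating below
product = ToyBinaryOpLayer(lambda x: x + 1, lambda x: x * 2, operator.mul)
print(product(3))  # (3 + 1) * (3 * 2) = 24
```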
__call__(*args, **kwargs)

Execute forward pass

__init__(layer1, layer2, binary_operation)

Initialize self. See help(type(self)) for accurate signature.

class dynn.layers.functional_layers.CmultLayer(layer1, layer2)

Bases: dynn.layers.functional_layers.BinaryOpLayer

Coordinate-wise multiplication of two layers.

CmultLayer(layer1, layer2)(x) == dy.cmult(layer1(x), layer2(x))
Parameters:
  • layer1 (base_layers.BaseLayer) – First layer
  • layer2 (base_layers.BaseLayer) – Second layer
__init__(layer1, layer2)

Initialize self. See help(type(self)) for accurate signature.

class dynn.layers.functional_layers.ConstantLayer(constant)

Bases: dynn.layers.base_layers.BaseLayer

This is the “zero-ary” layer: it takes no input and always returns the same constant expression.

# Takes in numbers
ConstantLayer(5)() == dy.inputTensor([5])
# Or lists
ConstantLayer([5, 6])() == dy.inputTensor([5, 6])
# Or numpy arrays
ConstantLayer(np.ones((10, 12)))() == dy.inputTensor(np.ones((10, 12)))
Parameters: constant (number, np.ndarray) – The constant. It must be of a type that can be turned into a dynet.Expression
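As a sketch, the zero-ary behavior is just a callable that takes no arguments and returns the stored value (a plain-Python stand-in, not dynn's implementation, which wraps the value in a dynet.Expression):

```python
class ToyConstantLayer:
    """Hypothetical stand-in for ConstantLayer."""

    def __init__(self, constant):
        self.constant = constant

    def __call__(self):
        # Zero-ary: no input, always the same output
        return self.constant

bias = ToyConstantLayer([5, 6])
print(bias())  # [5, 6]
```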
__call__(*args, **kwargs)

Execute forward pass

__init__(constant)

Initialize self. See help(type(self)) for accurate signature.

init_layer(test=True, update=False)

Initializes only this layer’s parameters (not recursive). This needs to be implemented for each layer

class dynn.layers.functional_layers.IdentityLayer

Bases: dynn.layers.functional_layers.Lambda

The identity layer does literally nothing:

IdentityLayer()(x) == x

It passes its input directly as the output. Still, it can be useful to express more complicated layers like residual connections.
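For instance, a residual connection output(x) = x + f(x) can be expressed by adding an identity layer to another layer. A toy sketch of that composition (plain callables with a `+` overload standing in for dynn layers; names are hypothetical):

```python
# Residual connection sketch: output(x) = x + f(x).

class ToyLayer:
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, x):
        return self.fn(x)

    def __add__(self, other):
        return ToyLayer(lambda x: self(x) + other(x))

identity = ToyLayer(lambda x: x)   # plays the role of IdentityLayer()
f = ToyLayer(lambda x: 3 * x)      # some transformation layer
residual = identity + f            # output(x) = x + f(x)
print(residual(2))  # 2 + 3*2 = 8
```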

__init__()

Initialize self. See help(type(self)) for accurate signature.

class dynn.layers.functional_layers.Lambda(function)

Bases: dynn.layers.base_layers.BaseLayer

This layer applies an arbitrary function to its input.

Lambda(f)(x) == f(x)

This is useful if you want to wrap activation functions as layers. The unary operation should be a function from dynet.Expression to dynet.Expression.

You shouldn’t use this to stack layers, though: the wrapped function should not itself be a layer. If you want to stack layers, use combination_layers.Sequential instead.

Parameters: function (function) – The unary function to apply, taking dynet.Expression objects to dynet.Expression objects
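A sketch of the Lambda pattern wrapping an activation function; here `math.tanh` on a float stands in for a dynet activation on a dynet.Expression, and `ToyLambda` is a hypothetical stand-in, not dynn's class:

```python
import math

class ToyLambda:
    """Sketch of the Lambda pattern: a callable wrapping a function."""

    def __init__(self, function):
        self.function = function

    def __call__(self, *args, **kwargs):
        # Returns function(*args, **kwargs)
        return self.function(*args, **kwargs)

tanh_layer = ToyLambda(math.tanh)
print(tanh_layer(0.0))  # 0.0
```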
__call__(*args, **kwargs)

Returns function(*args, **kwargs)

__init__(function)

Initialize self. See help(type(self)) for accurate signature.

class dynn.layers.functional_layers.NegationLayer(layer)

Bases: dynn.layers.functional_layers.UnaryOpLayer

Negates the output of another layer:

NegationLayer(layer)(x) == - layer(x)

It can also be used with the - syntax directly:

negated_layer = - layer
# is the same as
negated_layer = NegationLayer(layer)
Parameters: layer (base_layers.BaseLayer) – The layer to whose output you want to apply the negation.
__init__(layer)

Initialize self. See help(type(self)) for accurate signature.

class dynn.layers.functional_layers.SubstractionLayer(layer1, layer2)

Bases: dynn.layers.functional_layers.BinaryOpLayer

Subtraction of two layers.

This is the layer returned by the subtraction syntax:

SubstractionLayer(layer1, layer2)(x) == layer1(x) - layer2(x)
# is the same thing as
sub_1_2 = layer1 - layer2
sub_1_2(x) == layer1(x) - layer2(x)
Parameters:
  • layer1 (base_layers.BaseLayer) – First layer
  • layer2 (base_layers.BaseLayer) – Second layer
__init__(layer1, layer2)

Initialize self. See help(type(self)) for accurate signature.

class dynn.layers.functional_layers.UnaryOpLayer(layer, unary_operation)

Bases: dynn.layers.base_layers.BaseLayer

This layer wraps a unary operation on another layer.

UnaryOpLayer(layer, op)(x) == op(layer(x))

This is a shorter way of writing:

UnaryOpLayer(layer, op)(x) == Sequential(layer, Lambda(op))(x)

You shouldn’t use this to stack layers, though: op should not itself be a layer. If you want to stack layers, use combination_layers.Sequential instead.

Parameters:
  • layer (base_layers.BaseLayer) – The layer to whose output you want to apply the unary operation
  • unary_operation (function) – A unary operation on dynet.Expression objects
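A sketch of the same pattern with plain callables, where a toy relu on floats stands in for an operation on dynet.Expression objects (all names here are hypothetical, not dynn's API):

```python
class ToyUnaryOpLayer:
    """Sketch of the UnaryOpLayer pattern with plain callables."""

    def __init__(self, layer, unary_operation):
        self.layer = layer
        self.unary_operation = unary_operation

    def __call__(self, *args, **kwargs):
        # unary_operation(layer(*args, **kwargs))
        return self.unary_operation(self.layer(*args, **kwargs))

negate = lambda x: -x
relu = lambda v: max(0.0, v)
negate_then_relu = ToyUnaryOpLayer(negate, relu)
print(negate_then_relu(-2.0))  # relu(2.0) = 2.0
print(negate_then_relu(3.0))   # relu(-3.0) = 0.0
```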
__call__(*args, **kwargs)

Returns unary_operation(layer(*args, **kwargs))

__init__(layer, unary_operation)

Initialize self. See help(type(self)) for accurate signature.