The ReLU activation function in Python

Layer activation functions

Activations can either be used through an Activation layer, or through the activation argument supported by all forward layers:

from tensorflow.keras import layers
from tensorflow.keras import activations

model.add(layers.Dense(64, activation=activations.relu))

This is equivalent to:

model.add(layers.Dense(64))
model.add(layers.Activation(activations.relu))

All built-in activations may also be passed via their string identifier:

model.add(layers.Dense(64, activation='relu')) 

Available activations

relu function

tf.keras.activations.relu(x, alpha=0.0, max_value=None, threshold=0.0) 

Applies the rectified linear unit activation function.

With default values, this returns the standard ReLU activation: max(x, 0), the element-wise maximum of 0 and the input tensor.

Modifying default parameters allows you to use non-zero thresholds, change the max value of the activation, and to use a non-zero multiple of the input for values below the threshold.

>>> foo = tf.constant([-10, -5, 0.0, 5, 10], dtype=tf.float32)
>>> tf.keras.activations.relu(foo).numpy()
array([ 0.,  0.,  0.,  5., 10.], dtype=float32)
>>> tf.keras.activations.relu(foo, alpha=0.5).numpy()
array([-5. , -2.5,  0. ,  5. , 10. ], dtype=float32)
>>> tf.keras.activations.relu(foo, max_value=5.).numpy()
array([0., 0., 0., 5., 5.], dtype=float32)
>>> tf.keras.activations.relu(foo, threshold=5.).numpy()
array([-0., -0.,  0.,  0., 10.], dtype=float32)
  • x: Input tensor or variable.
  • alpha: A float that governs the slope for values lower than the threshold.
  • max_value: A float that sets the saturation threshold (the largest value the function will return).
  • threshold: A float giving the threshold value of the activation function below which values will be damped or set to zero.

A Tensor representing the input tensor, transformed by the relu activation function. The tensor has the same shape and dtype as the input x.
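The parameters can also be combined. The following is a minimal sketch (the parameter values are arbitrary, chosen only to illustrate the behaviour described above; assumes TensorFlow 2.x imported as tf):

import tensorflow as tf

foo = tf.constant([-10, -5, 0.0, 5, 10], dtype=tf.float32)
# Keep a small slope of 0.1 below the threshold of 1 and cap outputs at 5.
out = tf.keras.activations.relu(foo, alpha=0.1, max_value=5.0, threshold=1.0)
print(out.numpy())  # values below the threshold follow 0.1 * (x - 1); large values saturate at 5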

sigmoid function

tf.keras.activations.sigmoid(x) 

Sigmoid activation function, sigmoid(x) = 1 / (1 + exp(-x)) .

Applies the sigmoid activation function. For small values (<-5), sigmoid returns a value close to zero, and for large values (>5) the result of the function gets close to 1.

Sigmoid is equivalent to a 2-element Softmax, where the second element is assumed to be zero. The sigmoid function always returns a value between 0 and 1.
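A minimal sketch verifying this equivalence numerically (assuming TensorFlow 2.x imported as tf; the input values are arbitrary):

import tensorflow as tf

x = tf.constant([-1.0, 0.0, 2.0])
# Build 2-element vectors [x, 0]; softmax over the last axis yields
# [sigmoid(x), 1 - sigmoid(x)] for each row.
pairs = tf.stack([x, tf.zeros_like(x)], axis=-1)
print(tf.keras.activations.softmax(pairs)[:, 0].numpy())
print(tf.keras.activations.sigmoid(x).numpy())  # should match the first column above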

>>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype=tf.float32)
>>> b = tf.keras.activations.sigmoid(a)
>>> b.numpy()
array([2.0611537e-09, 2.6894143e-01, 5.0000000e-01, 7.3105860e-01,
       1.0000000e+00], dtype=float32)

softmax function

tf.keras.activations.softmax(x, axis=-1) 

Softmax converts a vector of values to a probability distribution.

The elements of the output vector are in range (0, 1) and sum to 1.

Each vector is handled independently. The axis argument sets which axis of the input the function is applied along.

Softmax is often used as the activation for the last layer of a classification network because the result could be interpreted as a probability distribution.

The softmax of each vector x is computed as exp(x) / tf.reduce_sum(exp(x)) .

The input values are interpreted as the log-odds of the resulting probability.
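A minimal sketch comparing the formula above with the built-in function (assuming TensorFlow 2.x imported as tf):

import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0]])
manual = tf.exp(x) / tf.reduce_sum(tf.exp(x), axis=-1, keepdims=True)
builtin = tf.keras.activations.softmax(x, axis=-1)
print(manual.numpy())   # roughly [[0.09 0.245 0.665]]
print(builtin.numpy())  # should match the manual computation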

  • x : Input tensor.
  • axis: Integer, axis along which the softmax normalization is applied.

Tensor, output of softmax transformation (all values are non-negative and sum to 1).

Example 1: standalone usage

>>> inputs = tf.random.normal(shape=(32, 10))
>>> outputs = tf.keras.activations.softmax(inputs)
>>> tf.reduce_sum(outputs[0, :])  # Each sample in the batch now sums to 1
<tf.Tensor: shape=(), dtype=float32, numpy=1.0000001>

Example 2: usage in a Dense layer

>>> layer = tf.keras.layers.Dense(32,
...                               activation=tf.keras.activations.softmax)

softplus function

tf.keras.activations.softplus(x) 

Softplus activation function, softplus(x) = log(exp(x) + 1) .

>>> a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype=tf.float32)
>>> b = tf.keras.activations.softplus(a)
>>> b.numpy()
array([2.0611537e-09, 3.1326166e-01, 6.9314718e-01, 1.3132616e+00,
       2.0000000e+01], dtype=float32)

softsign function

tf.keras.activations.softsign(x) 

Softsign activation function, softsign(x) = x / (abs(x) + 1) .

>>> a = tf.constant([-1.0, 0.0, 1.0], dtype=tf.float32)
>>> b = tf.keras.activations.softsign(a)
>>> b.numpy()
array([-0.5,  0. ,  0.5], dtype=float32)

tanh function

tf.keras.activations.tanh(x)

Hyperbolic tangent activation function.

>>> a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
>>> b = tf.keras.activations.tanh(a)
>>> b.numpy()
array([-0.9950547, -0.7615942,  0.,  0.7615942,  0.9950547], dtype=float32)
  • Tensor of the same shape and dtype as the input x, with tanh activation: tanh(x) = sinh(x)/cosh(x) = ((exp(x) - exp(-x))/(exp(x) + exp(-x))).

selu function

tf.keras.activations.selu(x)

Scaled Exponential Linear Unit (SELU).

The Scaled Exponential Linear Unit (SELU) activation function is defined as:

if x > 0: return scale * x
if x < 0: return scale * alpha * (exp(x) - 1)

where alpha and scale are pre-defined constants (alpha=1.67326324 and scale=1.05070098).

Basically, the SELU activation function multiplies scale (> 1) with the output of the tf.keras.activations.elu function to ensure a slope larger than one for positive inputs.

The values of alpha and scale are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see tf.keras.initializers.LecunNormal initializer) and the number of input units is "large enough" (see reference paper for more information).
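A minimal sketch checking the relationship to elu described above, using the constants from the definition (assumes TensorFlow 2.x imported as tf):

import tensorflow as tf

alpha, scale = 1.67326324, 1.05070098
x = tf.constant([-2.0, -0.5, 0.0, 1.5], dtype=tf.float32)
selu_out = tf.keras.activations.selu(x)
elu_scaled = scale * tf.keras.activations.elu(x, alpha=alpha)
# The two results are expected to match up to floating-point precision.
print(selu_out.numpy())
print(elu_scaled.numpy())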

>>> num_classes = 10  # 10-class problem
>>> model = tf.keras.Sequential()
>>> model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',
...                                 activation='selu'))
>>> model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal',
...                                 activation='selu'))
>>> model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal',
...                                 activation='selu'))
>>> model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))

Notes:
  • To be used together with the tf.keras.initializers.LecunNormal initializer.
  • To be used together with the dropout variant tf.keras.layers.AlphaDropout (not regular dropout).
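A minimal sketch putting both recommendations together (the layer sizes and the AlphaDropout rate of 0.1 are arbitrary choices for illustration; assumes a TF 2.x version where tf.keras.layers.AlphaDropout is available, as referenced above):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',
                          activation='selu', input_shape=(20,)),
    # AlphaDropout preserves the self-normalizing property, unlike regular Dropout.
    tf.keras.layers.AlphaDropout(0.1),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.summary()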

elu function

tf.keras.activations.elu(x, alpha=1.0) 

ELUs have negative values which pushes the mean of the activations closer to zero. Mean activations that are closer to zero enable faster learning as they bring the gradient closer to the natural gradient. ELUs saturate to a negative value when the argument gets smaller. Saturation means a small derivative which decreases the variation and the information that is propagated to the next layer.

>>> import tensorflow as tf
>>> model = tf.keras.Sequential()
>>> model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='elu',
...                                  input_shape=(28, 28, 1)))
>>> model.add(tf.keras.layers.MaxPooling2D((2, 2)))
>>> model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))
>>> model.add(tf.keras.layers.MaxPooling2D((2, 2)))
>>> model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))
  • x: Input tensor.
  • alpha: A scalar, slope of negative section. alpha controls the value to which an ELU saturates for negative net inputs.
  • The exponential linear unit (ELU) activation function: x if x > 0 and alpha * (exp(x) - 1) if x < 0 (a small numeric check follows below).
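A small standalone check of that formula (assumes TensorFlow 2.x imported as tf; printed values are approximate):

import tensorflow as tf

a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
# Negative inputs follow alpha * (exp(x) - 1), e.g. elu(-1) is roughly -0.632;
# non-negative inputs pass through unchanged.
print(tf.keras.activations.elu(a).numpy())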

exponential function

tf.keras.activations.exponential(x) 

Exponential activation function.

>>> a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
>>> b = tf.keras.activations.exponential(a)
>>> b.numpy()
array([ 0.04978707,  0.36787945,  1.,  2.7182817 , 20.085537  ], dtype=float32)

Creating custom activations

You can also use a TensorFlow callable as an activation (in this case it should take a tensor and return a tensor of the same shape and dtype):

model.add(layers.Dense(64, activation=tf.nn.tanh)) 
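For instance, a plain function built from TensorFlow ops can be passed as well. The snippet below is a sketch; scaled_tanh is a hypothetical activation, not part of Keras:

import tensorflow as tf
from tensorflow.keras import layers

def scaled_tanh(x):
    # Hypothetical custom activation: returns a tensor of the same shape and dtype as its input.
    return 0.5 * tf.nn.tanh(x)

model = tf.keras.Sequential([
    layers.Dense(64, activation=scaled_tanh, input_shape=(10,)),
])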

About «advanced activation» layers

Activations that are more complex than a simple TensorFlow function (e.g. learnable activations, which maintain a state) are available as Advanced Activation layers and can be found in the module tf.keras.layers.advanced_activations. These include PReLU and LeakyReLU. If you need a custom activation that requires a state, you should implement it as a custom layer.

Note that you should not pass activation layer instances as the activation argument of a layer. They're meant to be used just like regular layers, e.g.:

x = layers.Dense(10)(x)
x = layers.LeakyReLU()(x)
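A minimal sketch using PReLU the same way inside a Sequential model (the layer sizes are arbitrary):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Dense(10, input_shape=(4,)),
    # PReLU is a learnable activation: its negative slope is trained with the model.
    layers.PReLU(),
    layers.Dense(1),
])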


The ReLU activation function in Python

The ReLU (Rectified Linear Unit) activation function is the most common choice of activation function in the world of deep learning. ReLU provides state-of-the-art results and, at the same time, is very computationally efficient.

The basic concept of the ReLU activation function is as follows:

Return 0 if the input is negative; otherwise, return the input as it is.

Mathematically, we can represent it as follows:

relu(x) = max(0, x)

The pseudocode for ReLU is as follows:

if input > 0:
    return input
else:
    return 0

In this tutorial, we will learn how to implement our own ReLU function, learn about some of its drawbacks, and get to know an improved version.

Implementing the ReLU function

Let's write our own implementation of ReLU in Python. We will use the built-in max function to implement it:

def relu(x):
    return max(0.0, x)

To test the function, let's run it on a few inputs.

x = 1.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -10.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 0.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 15.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -20.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))

Full code

def relu(x):
    return max(0.0, x)

x = 1.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -10.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 0.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 15.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -20.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
Applying Relu on (1.0) gives 1.0
Applying Relu on (-10.0) gives 0.0
Applying Relu on (0.0) gives 0.0
Applying Relu on (15.0) gives 15.0
Applying Relu on (-20.0) gives 0.0
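The scalar implementation above works on one number at a time. A possible vectorized variant, assuming NumPy is available, handles whole arrays at once:

import numpy as np

def relu_np(x):
    # np.maximum broadcasts, so x can be a scalar or an array.
    return np.maximum(0.0, x)

print(relu_np(np.array([-20.0, -10.0, 0.0, 1.0, 15.0])))
# [ 0.  0.  0.  1. 15.]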

The derivative of ReLU

Let's see what the gradient (derivative) of the ReLU function looks like. After differentiating, we get the following function:

f'(x) = 1, if x > 0
f'(x) = 0, if x < 0

We can see that for values of x less than zero, the gradient is 0. This means that the weights and biases of some neurons are never updated. This can be a problem during training.
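A minimal sketch of this derivative in plain Python (the value at exactly x = 0 is set to 0 here by convention):

def relu_derivative(x):
    # Gradient of ReLU: 1 for positive inputs, 0 otherwise.
    return 1.0 if x > 0 else 0.0

print(relu_derivative(5.0))   # 1.0
print(relu_derivative(-3.0))  # 0.0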

To solve this problem, we have the Leaky ReLU function.

The Leaky ReLU function

The Leaky ReLU function is an improved version of the regular ReLU function. To solve the problem of the zero gradient for negative values, Leaky ReLU gives negative inputs an extremely small linear component of x.

Mathematically, we can express Leaky ReLU as:

f(x) = x, if x > 0
f(x) = a * x, if x <= 0

Here a is a small constant such as 0.01, which is the value we will use in the implementation below.

Graphically, it can be represented as follows:

(Figure: the Leaky ReLU function in Python)

The gradient of Leaky ReLU

Let's calculate the gradient of the Leaky ReLU function. The gradient looks like this:

f'(x) = 1, if x > 0
f'(x) = a, if x <= 0

In this case, the gradient for negative inputs is non-zero. This means that all of the neurons get updated.
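A minimal sketch of this gradient, using the same a = 0.01 as the implementation below:

def leaky_relu_derivative(x, a=0.01):
    # Gradient is 1 for positive inputs and the small constant a otherwise.
    return 1.0 if x > 0 else a

print(leaky_relu_derivative(5.0))   # 1.0
print(leaky_relu_derivative(-3.0))  # 0.01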

Implementing Leaky ReLU

The implementation of Leaky ReLU is given below:

def leaky_relu(x):
    if x > 0:
        return x
    else:
        return 0.01 * x

Let's try it out on a few inputs.

x = 1.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -10.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 0.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 15.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -20.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))

Full code

The full code for Leaky ReLU is given below:

def leaky_relu(x):
    if x > 0:
        return x
    else:
        return 0.01 * x

x = 1.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -10.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 0.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 15.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -20.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
Applying Leaky Relu on (1.0) gives 1.0
Applying Leaky Relu on (-10.0) gives -0.1
Applying Leaky Relu on (0.0) gives 0.0
Applying Leaky Relu on (15.0) gives 15.0
Applying Leaky Relu on (-20.0) gives -0.2

Conclusion

This tutorial was devoted to the ReLU function in Python. We also looked at the improved Leaky ReLU function, which solves the problem of zero gradients for negative values.

