ReLU
Rectified linear units, or ReLUs, are used as activation functions in deep neural networks. They are defined as f(x) = max(x, 0), which means (a short code sketch follows the two cases below):
If the input is negative or zero, the output is 0.
If the input is positive, the output equals the input.
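A minimal sketch of this definition in NumPy (the function name relu and the sample inputs are illustrative, not from the original text):

```python
import numpy as np

def relu(x):
    """Elementwise ReLU: returns max(x, 0) for each element."""
    return np.maximum(x, 0)

# Negative and zero inputs map to 0; positive inputs pass through unchanged.
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))  # [0.  0.  0.  0.5 2. ]
```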
The advantages of ReLUs over functions like tanh or sigmoid are that they produce sparse activations (units with non-positive input output exactly 0) and that they mitigate the vanishing gradient problem, since the gradient is 1 for every positive input rather than shrinking toward 0.
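To make these two advantages concrete, here is a small comparison, assuming NumPy (the helper names and sample inputs are illustrative). The ReLU gradient is exactly 1 for active units and exactly 0 for inactive ones, while the sigmoid gradient is at most 0.25 and decays toward 0 for large inputs, which is what drives gradients to vanish in deep networks:

```python
import numpy as np

def relu_grad(x):
    """Derivative of ReLU: 1 where x > 0, 0 elsewhere."""
    return (x > 0).astype(float)

def sigmoid_grad(x):
    """Derivative of the sigmoid: s * (1 - s), at most 0.25."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

x = np.array([-5.0, -1.0, 0.5, 5.0])
print(relu_grad(x))     # [0. 0. 1. 1.] -- stays 1 for all active units
print(sigmoid_grad(x))  # ~[0.0066 0.1966 0.2350 0.0066] -- shrinks toward 0
```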