dynamic MyNET
► SmoothStep function....
There are many cases in which we need normalized values (computer graphics, machine learning, technical analysis), and there are several ways to produce them; SmoothStep is one of them. It belongs to the family of sigmoidal functions (used here for clamping, since this indicator, as it is, is not used for interpolation), and it produces a subset of what the built-in Stochastic produces, except that it stays in its original range of 0 to 1 and filters out some of the values the Stochastic would emit. This makes SmoothStep more suitable for applications such as clamping, where you do not want values above or below the desired range. SmoothStep is also computationally efficient, making it a good choice where speed is a priority.
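As a minimal sketch of the idea, here is the classic cubic SmoothStep, assuming the usual 3t² − 2t³ Hermite form; the `edge0`/`edge1` parameter names are illustrative, not taken from the indicator itself:

```python
def smoothstep(x, edge0=0.0, edge1=1.0):
    # Normalize x into [0, 1] relative to the edges, then clamp,
    # so the output can never leave the 0..1 range.
    t = (x - edge0) / (edge1 - edge0)
    t = max(0.0, min(1.0, t))
    # Hermite cubic: zero first derivative at both edges.
    return t * t * (3.0 - 2.0 * t)

print(smoothstep(-0.5))  # 0.0 -- clamped below the range
print(smoothstep(0.5))   # 0.5
print(smoothstep(1.5))   # 1.0 -- clamped above the range
```

The clamping step is what makes it suitable for normalization: any input outside the edge interval is pinned to 0 or 1 instead of overshooting.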
This version uses a property of SmoothStep: it can be smoothed further, an idea due to Kenneth H. Perlin (a professor in the Department of Computer Science at New York University). His work centered on the sigmoid function, which has a characteristic "S"-shaped curve that eventually levels off.
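Perlin's "further smoothed" variant is commonly known as smootherstep, a quintic polynomial; the sketch below assumes that standard 6t⁵ − 15t⁴ + 10t³ form:

```python
def smootherstep(x):
    # Perlin's quintic variant: zero first AND second derivatives
    # at both ends, so the curve levels off even more gently.
    t = max(0.0, min(1.0, x))
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)

print(smootherstep(0.0))  # 0.0
print(smootherstep(0.5))  # 0.5
print(smootherstep(1.0))  # 1.0
```

Like the cubic version, it maps the clamped input through an S-shaped curve, but the flatter ends make transitions near the range boundaries even smoother.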
The sigmoid function is a special case of the logistic function with an S-shaped characteristic.
Properties of Sigmoid Function
• Domain: (−∞, +∞)
• Range: (0, 1)
• σ(0) = ½ = 0.5
• The sigmoid function is continuous and monotonically increasing everywhere.
• The function is differentiable everywhere.
• Maps feature space into probability function:
– As x → +∞, the value of the sigmoid function approaches 1.
– As x → −∞, the value of the sigmoid function approaches 0.
– At x = 0, the value of the sigmoid function is ½.
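The properties above can be checked with a small sketch of the logistic sigmoid, 1 / (1 + e⁻ˣ):

```python
import math

def sigmoid(x):
    # Logistic sigmoid: monotonically increasing on (-inf, +inf),
    # with range (0, 1) and value 0.5 at x = 0.
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))    # 0.5, as stated above
print(sigmoid(10.0))   # close to 1
print(sigmoid(-10.0))  # close to 0
```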
Advantages and Limitations of Sigmoid Function
• It gives a smooth gradient that prevents jumps in output values.
• One of the best-normalized functions.
• When used with a linear function, it returns a value between 0 and 1, keeping the activation values bounded.
• Gradient values are only significant for the range -3 to 3.
– The graph will have minimal gradients for values greater than 3 or smaller than -3.
• The main disadvantage of the sigmoid function is that it suffers from the vanishing gradient problem.
– As the gradient value approaches zero, the network stops learning; this is the vanishing gradient problem.
• When the sigmoid is used and the inputs are not zero-centered, saturation is a concern:
– Saturation implies the gradient will be (close to) zero.
• Extremely large or small values are mapped to the extremes, 0 and 1.
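The saturation and vanishing-gradient points above can be made concrete with the sigmoid's derivative, σ'(x) = σ(x)(1 − σ(x)), which peaks at 0.25 at x = 0 and collapses toward zero outside roughly (−3, 3):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid: s * (1 - s).
    # Maximum value is 0.25 at x = 0; nearly zero for |x| > 3.
    s = sigmoid(x)
    return s * (1.0 - s)

for x in (0.0, 3.0, 6.0):
    print(x, sigmoid_grad(x))
# The gradient shrinks rapidly as |x| grows: this is the
# saturation / vanishing-gradient effect described in the list above.
```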
► UPDATES:
• A button has been added to the SmoothStep, and its calculation has been optimized.
• The button for the Vertical Horizontal Filter (VHF) has been restored as well
• New users MUST read and follow these STEPS
• New templates (a must)
• You can download and read more about the system by clicking »» HERE
Just as Frank Sinatra sang it his own way, my charts sing... ♪ I did it, My... Way... ♬ ; )─