I find the VQZL + Z Score combination and its assumptions interesting, not to mention that it is one of the Editors' Picks.
Code: Select all
//@version=4
//Indicator created by Invsto
//Adapted from user @sarangab
study(title="VQZL Z Score + MA Options", shorttitle="ZVQZL")
PriceSmoothing = input(15, title="Price Smoothing")
Mode = input(title = "MA Mode", defval="SMMA", options=["TMA","ALMA", "EMA", "DEMA", "TEMA", "WMA", "VWMA", "SMA", "SMMA", "HMA", "LSMA", "JMA", "VAMA", "FRAMA", "ZLEMA", "KAMA", "Kijun", "Kijun v2", "McGinley", "IDWMA", "FLMSA", "PEMA", "HCF", "TIF", "MF", "ARMA", "DAF","WRMA", "RMA", "RAF", "GF", "A2RMA", "EDSMA"])
vq0_th = input(1.64, title="Threshold 1")
vq0_th2 = input(1.96, title="Threshold 2")
vq0_th3 = input(2.58, title="Threshold 3")
vq0_mid = 0
zlength = input(title="Z Length", defval=100, minval=1)
lsma_offset = input(defval=0, title="* Least Squares (LSMA) Only - Offset Value", minval=0)
alma_offset = input(defval=0.85, title="* Arnaud Legoux (ALMA) Only - Offset Value", minval=0, step=0.01) //0.85
alma_sigma = input(defval=6, title="* Arnaud Legoux (ALMA) Only - Sigma Value", minval=0) //6
jurik_phase = input(title="* Jurik (JMA) Only - Phase", type=input.integer, defval=50)
jurik_power = input(title="* Jurik (JMA) Only - Power", type=input.integer, defval=2)
volatility_lookback = input(51, title="* Volatility Adjusted (VAMA) Only - Volatility lookback length")
frama_FC = input(defval=1,minval=1, title="* Fractal Adjusted (FRAMA) Only - FC")
frama_SC = input(defval=200,minval=1, title="* Fractal Adjusted (FRAMA) Only - SC")
kama_fast_len = input(title="* Kaufman (KAMA) Only - Fast EMA Length", type=input.integer, defval=2)
kama_slow_len = input(title="* Kaufman (KAMA) Only - Slow EMA Length", type=input.integer, defval=30)
center = input(defval=10, title="* Trend Impulse Filter Only - Center")
//MF
beta = input(0.8,minval=0,maxval=1, title="Modular Filter, General Filter Only - Beta")
feedback = input(false, title="Modular Filter Only - Feedback")
z = input(0.5,title="Modular Filter Only - Feedback Weighting",minval=0,maxval=1)
//ARMA
gamma = input(3.,type=input.float, title="ARMA, A2RMA, General Filter Gamma")
zl = input(false,title="ARMA, DAF, WRMA, Zero-Lag")
//GF
zeta = input(1,maxval=1,type=input.float,title="General Filter Zeta")
//EDSMA
ssfLength = input(title="EDSMA - Super Smoother Filter Length", type=input.integer, minval=1, defval=20)
ssfPoles = input(title="EDSMA - Super Smoother Filter Poles", type=input.integer, defval=2, options=[2, 3])
//IDWMA
calcWeight(src, length, i) =>
distanceSum = 0.0
for j = 0 to length - 1
distanceSum := distanceSum + abs(nz(src[i]) - nz(src[j]))
distanceSum
//HCF
//study("Hybrid Convolution Filter",overlay=true)
//length = input(14)
//----
f(x) =>
.5*(1 - cos(x*3.14159))
d(x,length) =>
f(x/length) - f((x-1)/length)
//----
filter(a,b,length) =>
sum = 0.
for i = 1 to length
sgn = f(i/length)
sum := sum + (sgn*b + (1 - sgn)*a[i-1]) * d(i,length)
sum
//----
//A2RMA
ama(er,x)=>
a = 0.
a := er*x+(1-er)*nz(a[1],x)
//EDSMA
get2PoleSSF(src, length) =>
PI = 2 * asin(1)
arg = sqrt(2) * PI / length
a1 = exp(-arg)
b1 = 2 * a1 * cos(arg)
c2 = b1
c3 = -pow(a1, 2)
c1 = 1 - c2 - c3
ssf = 0.0
ssf := c1 * src + c2 * nz(ssf[1]) + c3 * nz(ssf[2])
get3PoleSSF(src, length) =>
PI = 2 * asin(1)
arg = PI / length
a1 = exp(-arg)
b1 = 2 * a1 * cos(1.738 * arg)
c1 = pow(a1, 2)
coef2 = b1 + c1
coef3 = -(c1 + b1 * c1)
coef4 = pow(c1, 2)
coef1 = 1 - coef2 - coef3 - coef4
ssf = 0.0
ssf := coef1 * src + coef2 * nz(ssf[1]) + coef3 * nz(ssf[2]) + coef4 * nz(ssf[3])
ma(type, src, len) =>
float result = 0
if type=="TMA"
result := sma(sma(src, ceil(len / 2)), floor(len / 2) + 1)
if type=="SMA" // Simple
result := sma(src, len)
if type=="EMA" // Exponential
result := ema(src, len)
if type=="DEMA" // Double Exponential
e = ema(src, len)
result := 2 * e - ema(e, len)
if type=="TEMA" // Triple Exponential
e = ema(src, len)
result := 3 * (e - ema(e, len)) + ema(ema(e, len), len)
if type=="WMA" // Weighted
result := wma(src, len)
if type=="VWMA" // Volume Weighted
result := vwma(src, len)
if type=="SMMA" // Smoothed
w = wma(src, len)
result := na(w[1]) ? sma(src, len) : (w[1] * (len - 1) + src) / len
if type=="HMA" // Hull
result := wma(2 * wma(src, len / 2) - wma(src, len), round(sqrt(len)))
if type=="LSMA" // Least Squares
result := linreg(src, len, lsma_offset)
if type=="ALMA" // Arnaud Legoux
result := alma(src, len, alma_offset, alma_sigma)
if type=="JMA" // Jurik
/// Copyright © 2018 Alex Orekhov (everget)
/// Copyright © 2017 Jurik Research and Consulting.
phaseRatio = jurik_phase < -100 ? 0.5 : jurik_phase > 100 ? 2.5 : jurik_phase / 100 + 1.5
beta = 0.45 * (len - 1) / (0.45 * (len - 1) + 2)
alpha = pow(beta, jurik_power)
jma = 0.0
e0 = 0.0
e0 := (1 - alpha) * src + alpha * nz(e0[1])
e1 = 0.0
e1 := (src - e0) * (1 - beta) + beta * nz(e1[1])
e2 = 0.0
e2 := (e0 + phaseRatio * e1 - nz(jma[1])) * pow(1 - alpha, 2) + pow(alpha, 2) * nz(e2[1])
jma := e2 + nz(jma[1])
result := jma
if type=="VAMA" // Volatility Adjusted
/// Copyright © 2019 to present, Joris Duyck (JD)
mid=ema(src,len)
dev=src-mid
vol_up=highest(dev,volatility_lookback)
vol_down=lowest(dev,volatility_lookback)
result := mid+avg(vol_up,vol_down)
if type=="FRAMA" // Fractal Adaptive
int len1 = len/2
e = 2.7182818284590452353602874713527
w = log(2/(frama_SC+1)) / log(e) // Natural logarithm (ln(2/(SC+1))) workaround
H1 = highest(high,len1)
L1 = lowest(low,len1)
N1 = (H1-L1)/len1
H2_ = highest(high,len1)
H2 = H2_[len1]
L2_ = lowest(low,len1)
L2 = L2_[len1]
N2 = (H2-L2)/len1
H3 = highest(high,len)
L3 = lowest(low,len)
N3 = (H3-L3)/len
dimen1 = (log(N1+N2)-log(N3))/log(2)
dimen = iff(N1>0 and N2>0 and N3>0,dimen1,nz(dimen1[1]))
alpha1 = exp(w*(dimen-1))
oldalpha = alpha1>1?1:(alpha1<0.01?0.01:alpha1)
oldN = (2-oldalpha)/oldalpha
N = (((frama_SC-frama_FC)*(oldN-1))/(frama_SC-1))+frama_FC
alpha_ = 2/(N+1)
alpha = alpha_<2/(frama_SC+1)?2/(frama_SC+1):(alpha_>1?1:alpha_)
frama = 0.0
frama :=(1-alpha)*nz(frama[1]) + alpha*src
result := frama
if type=="ZLEMA" // Zero-Lag EMA
f_lag = (len - 1) / 2
f_data = src + (src - src[f_lag])
result := ema(f_data, len)
if type=="KAMA" // Kaufman Adaptive
mom = abs(change(src, len))
volatility = sum(abs(change(src)), len)
er = volatility != 0 ? mom / volatility : 0
fastAlpha = 2 / (kama_fast_len + 1)
slowAlpha = 2 / (kama_slow_len + 1)
sc = pow((er * (fastAlpha - slowAlpha)) + slowAlpha, 2)
kama = 0.0
kama := sc * src + (1 - sc) * nz(kama[1])
result := kama
if type=="Kijun" //Kijun-sen
kijun = avg(lowest(len), highest(len))
result :=kijun
if type=="Kijun v2"
kijun = avg(lowest(len), highest(len))
conversionLine = avg(lowest(len/2), highest(len/2))
delta = (kijun + conversionLine)/2
result :=delta
if type=="McGinley"
mg = 0.0
mg := na(mg[1]) ? ema(src, len) : mg[1] + (src - mg[1]) / (len * pow(src/mg[1], 4))
result :=mg
if type=="IDWMA"
sum = 0.0
weightSum = 0.0
for i = 0 to len - 1
weight = calcWeight(src, len, i)
sum := sum + nz(src[i]) * weight
weightSum := weightSum + weight
idwma = sum / weightSum
result := idwma
if type=="FLMSA"
n = bar_index
b = 0.
e = sma(abs(src - nz(b[1])),len)
z = sma(src - nz(b[1],src),len)/e
r = (exp(2*z) - 1)/(exp(2*z) + 1)
a = (n - sma(n,len))/stdev(n,len) * r
b := sma(src,len) + a*stdev(src,len)
result := b
if type=="PEMA"
// Copyright (c) 2010-present, Bruno Pio
// Copyright (c) 2019-present, Alex Orekhov (everget)
// Pentuple Exponential Moving Average script may be freely distributed under the MIT license.
ema1 = ema(src, len)
ema2 = ema(ema1, len)
ema3 = ema(ema2, len)
ema4 = ema(ema3, len)
ema5 = ema(ema4, len)
ema6 = ema(ema5, len)
ema7 = ema(ema6, len)
ema8 = ema(ema7, len)
pema = 8 * ema1 - 28 * ema2 + 56 * ema3 - 70 * ema4 + 56 * ema5 - 28 * ema6 + 8 * ema7 - ema8
result := pema
if type=="HCF"
output = 0.
output := filter(src, nz(output[1],src), len)
result := output
if type=="TIF"
b = 0.0
a = rising(src,len) or falling(src,len) ? 1 : 0
b := ema(a*src+(1-a)*nz(b[1],src),center)
result := b
if type=="MF"
ts=0.,b=0.,c=0.,os=0.
//----
alpha = 2/(len+1)
a = feedback ? z*src + (1-z)*nz(ts[1],src) : src
//----
b := a > alpha*a+(1-alpha)*nz(b[1],a) ? a : alpha*a+(1-alpha)*nz(b[1],a)
c := a < alpha*a+(1-alpha)*nz(c[1],a) ? a : alpha*a+(1-alpha)*nz(c[1],a)
os := a == b ? 1 : a == c ? 0 : os[1]
//----
upper = beta*b+(1-beta)*c
lower = beta*c+(1-beta)*b
ts := os*upper+(1-os)*lower
result := ts
if type=="ARMA"
//----
ma = 0.
mad = 0.
//----
src2 = zl ? src + change(src,len/2) : src
ma := nz(mad[1],src2)
d = cum(abs(src2[len] - ma))/bar_index * gamma
mad := sma(sma(src2 > nz(mad[1],src2) + d ? src2 + d : src2 < nz(mad[1],src) - d ? src2 - d : nz(mad[1],src2),len),len)
result := mad
if type=="DAF"
AC = zl ? 1 : 0
out = 0.
K = 0.
//----
src2 = src + (src - nz(out[1],src))
out := nz(out[1],src2) + nz(K[1])*(src2 - nz(out[1],src2)) + AC*(nz(K[1])*(src2 - sma(src2,len)))
K := abs(src2 - out)/(abs(src2 - out) + stdev(src2,len)*len)
result := out
if type=="WRMA"
//----
alpha = 2/(len+1)
p1 = zl ? len/4 : 1
p2 = zl ? len/4 : len/2
//----
a = 0.0
b = 0.0
A = 0.0
B = 0.0
a := nz(a[1]) + alpha*nz(A[1])
b := nz(b[1]) + alpha*nz(B[1])
y = ema(a + b,p1)
A := src - y
B := src - ema(y,p2)
result := y
//----
if type=="RMA"
ma = sma(src,len*3) + sma(src,len*2) - sma(src,len)
result := ma
if type=="RAF"
altma = 0.0
AR = 2*(highest(len) - lowest(len))
BR = 2*(highest(len*2) - lowest(len*2))
k1 = (1 - AR)/AR
k2 = (1 - BR)/BR
//
alpha = k2/k1
R1 = sqrt(highest(len))/4 * ((alpha - 1)/alpha) * (k2/(k2 + 1))
R2 = sqrt(highest(len*2))/4 * (alpha - 1) * (k1/(k1 + 1))
Factor = R2/R1
//
AltK = fixnan(pow(Factor >= 1 ? 1 : Factor,sqrt(len)))*(1/len)
altma := AltK*src+(1-AltK)*nz(altma[1],src)
result := altma
if type=="GF"
////////////////////////////////////////////////////////////////
//Coefficients Table :
//
//MA : beta = 2/gamma = 0.5
//EMA : beta = 3/gamma = 0.4
//HMA = beta = 4/gamma = 0.85
//LSMA : beta = 3.5/gamma = 0.9
//QLSMA : beta = 5.25/gamma = 1
//JMA : beta = pow*2/gamma = 0.5
//3 Poles Butterworth Filter : beta = 5.5/gamma = 0.5/zeta = 0
//
////////////////////////////////////////////////////////////////
b = 0.0
d = 0.0
p = len/beta
a = src - nz(b[p],src)
b := nz(b[1],src) + a/p*gamma
c = b - nz(d[p],b)
d := nz(d[1],src) + (zeta*a+(1-zeta)*c)/p*gamma
result := d
if type=="A2RMA"
er = abs(change(src,len))/sum(abs(change(src)),len)
//----
ma = 0.
d = cum(abs(src - nz(ma[1],src)))/bar_index * gamma
ma := ama(er,ama(er,src > nz(ma[1],src) + d ? src + d : src < nz(ma[1],src) - d ? src - d : nz(ma[1],src)))
//----
result := ma
if type=="EDSMA"
zeros = src - nz(src[2])
avgZeros = (zeros + zeros[1]) / 2
// Ehlers Super Smoother Filter
ssf = ssfPoles == 2
? get2PoleSSF(avgZeros, ssfLength)
: get3PoleSSF(avgZeros, ssfLength)
// Rescale filter in terms of Standard Deviations
stdev = stdev(ssf, len)
scaledFilter = stdev != 0
? ssf / stdev
: 0
alpha = 5 * abs(scaledFilter) / len
edsma = 0.0
edsma := alpha * src + (1 - alpha) * nz(edsma[1])
result := edsma
result
//--- VQZL calculation: smooth OHLC with the selected MA, then build the volatility quality index
cHigh = ma(Mode,high, PriceSmoothing)
cLow = ma(Mode,low, PriceSmoothing)
cOpen = ma(Mode,open, PriceSmoothing)
cClose = ma(Mode,close, PriceSmoothing)
pClose = cClose[1]
trueRange = max(cHigh, cClose) - min(cLow, pClose)
rrange = cHigh - cLow
vqi = 0.0
vqi := rrange != 0 and trueRange != 0 ?
((cClose - pClose) / trueRange + (cClose - cOpen) / rrange) * 0.5 : 0 //vqi[1]
vqi_abs = abs(vqi) * (cClose - pClose + cClose - cOpen) * 0.5
//sumVqi = vqi_abs //v1.0
sumVqi = abs(vqi_abs) > 1 ? vqi_abs : abs(vqi_abs) * 10 > 1 ? vqi_abs * 10 : vqi_abs
//plot(sumVqi, color=secolor, title="VQI", linewidth=2)
hline(0, title = "Zero", color=color.white, linestyle=hline.style_dotted, linewidth=1)
hline(1.64, title = "90% Line - Threshold 1", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(-1.64, title = "90% Line - Threshold 1", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(1.96, title = "95% Line - Threshold 2", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(-1.96, title = "95% Line - Threshold 2", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(2.58, title = "99% Line - Threshold 3", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(-2.58, title = "99% Line - Threshold 3", color=color.black, linestyle=hline.style_dotted, linewidth=1)
///// Z SCORE CALCULATION
VQI = sumVqi
xStdDev = stdev(VQI, zlength)
xMA = sma(VQI, zlength)
z2=(VQI-xMA)/xStdDev
PAL = (z2 > 0)
PAS = (z2 < 0)
chgPAL = PAL and not PAL[1]
chgPAS = PAS and not PAS[1]
secolor = z2 > (vq0_mid + vq0_th) and z2 < (vq0_mid + vq0_th2) ? color.yellow : z2 > (vq0_mid + vq0_th2) and z2 < (vq0_mid + vq0_th3) ? color.orange : z2 > (vq0_mid + vq0_th3) ? color.green : z2 < (vq0_mid - vq0_th) and z2 > (vq0_mid - vq0_th2) ? color.yellow : z2 < (vq0_mid - vq0_th2) and z2 > (vq0_mid - vq0_th3) ? color.orange : z2 < (vq0_mid - vq0_th3) ? color.red : color.gray
plot(z2, title = "Z Score ATR", color=secolor,style=plot.style_area, transp=0)
//bgcolor(chgPAL ? color.green : chgPAS ? color.red : na, transp=80, title="Change in Price Action")
//plotshape(PAL, title="Price Action Long", location=location.bottom, style=shape.square, size=size.auto, color=color.green, transp=20)
//plotshape(PAS, title="Price Action Short", location=location.bottom, style=shape.square, size=size.auto, color=color.red, transp=20)
Volatility Quality Zero Line (VQZL) attempts to keep a trader out of ranging markets, but the original calculation on TradingView had to be adjusted for each instrument. To avoid this issue, I have applied a Z-Score calculation to the VQZL so the result is standardized across all instruments. A Z-Score is simply a value's relationship to the mean (average) of a group of values, measured in standard deviations from the mean.
This calculation lets us compare current volatility to the mean (moving average) of the population over the Z Length. The closer the VQZL Z-Score is to the mean, the closer it sits to the zero line, which suggests price is consolidating and choppy. The farther the VQZL Z-Score is from the mean, the more likely price is trending.
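For anyone who wants the Z-Score step in isolation, here is a minimal Pine v4 sketch (my own illustrative names, not part of the published script) that standardizes any series against its rolling mean and standard deviation, the same way the indicator standardizes VQZL:
Code: Select all
//@version=4
// Minimal Z-Score sketch (illustrative only; names like f_zscore and zlen are mine)
study(title="Z-Score Sketch", shorttitle="ZSKETCH")
zlen = input(100, title="Z Length", minval=1)
// Z-Score = (value - rolling mean) / rolling standard deviation
f_zscore(src, length) => (src - sma(src, length)) / stdev(src, length)
plot(f_zscore(close, zlen), title="Z-Score of close")
hline(0, title="Zero", linestyle=hline.style_dotted)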
The MA Mode determines the Moving Average used to calculate VQZL itself. The Z-Score is ALWAYS calculated with a simple moving average (as that is the standard calculation for Z-Score).
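Concretely, the Mode input only feeds the ma() calls that smooth price into VQZL; the Z-Score block further down is hard-wired to sma() and stdev(). The relevant lines from the script, with comments added:
Code: Select all
// Mode selects the moving average used to smooth price for VQZL...
cClose = ma(Mode,close, PriceSmoothing)
// ...while the Z-Score is always taken against a simple moving average:
xStdDev = stdev(VQI, zlength)
xMA = sma(VQI, zlength)
z2=(VQI-xMA)/xStdDev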
The Threshold Levels are the levels at which the VQZL Z-Score changes from gray to yellow, orange, green (bullish), or red (bearish). These levels can be adjusted, but if you change them you should also adjust the Threshold Lines (in the Style section) so they line up with your new values; one way to keep them in sync automatically is sketched below.
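If you would rather not keep the Style-panel lines in sync by hand, one possible tweak (an assumption on my part, not something the published script does) is to drive the hline() calls from the Threshold inputs; hline() accepts input-qualified values in Pine v4, so the dotted lines then follow whatever thresholds you type in:
Code: Select all
// Possible replacement for the hard-coded hline() levels (a tweak, not part of the original script)
hline(vq0_th, title="Threshold 1", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(-vq0_th, title="-Threshold 1", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(vq0_th2, title="Threshold 2", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(-vq0_th2, title="-Threshold 2", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(vq0_th3, title="Threshold 3", color=color.black, linestyle=hline.style_dotted, linewidth=1)
hline(-vq0_th3, title="-Threshold 3", color=color.black, linestyle=hline.style_dotted, linewidth=1)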
Statistically speaking, the built-in Threshold Levels are the positive and negative Z-Score values for the 90%, 95%, and 99% confidence levels (±1.64, ±1.96, and ±2.58). When volatility pushes the Z-Score beyond these values, it is out of the ordinary relative to its recent range. You may wish to adjust these levels to make the VQZL Z-Score more or less responsive to your trading needs.
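For reference, assuming the standardized VQZL behaves roughly like a standard normal variable (real market data only approximates this), the default cutoffs correspond to two-tailed coverage of the distribution: P(|Z| <= 1.64) ≈ 0.90, P(|Z| <= 1.96) ≈ 0.95, and P(|Z| <= 2.58) ≈ 0.99, so readings beyond a threshold sit in the outer 10%, 5%, or 1% of the distribution respectively.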
As always, trade at your own risk.