Re: Priestley-Chao-Kernel / Gasser–Müller-Kernel / Priestley–Chao-Kernel Regressions

24
ZigZag wrote: Mon May 12, 2025 7:29 pm I think I got it now. Actual regressions between degrees dev. can start if so as next.
it's still not good.

you need to pass it through the strategy tester, and then put another instance on the backtested chart and compare both

both instances have to be exactly identical
once you got that it's all good

by the way, are you sure the regression is causal? because it's the same type of non-causal regression as Nadaraya-Watson and whatnot
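to make "causal vs non-causal" concrete, here's a minimal numpy sketch of my own (an illustration, not code from any specific indicator): the two-sided Gaussian kernel also weights future bars, so the curve repaints as new bars arrive, while the one-sided version freezes once a bar closes.

Code: Select all

import numpy as np

def nw_smooth(y, bandwidth=5.0, causal=False):
    # Nadaraya-Watson smoother with a Gaussian kernel.
    # causal=False weights past AND future bars, so the curve repaints;
    # causal=True only weights bars up to i, so closed bars never move.
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        w = np.exp(-0.5 * ((t - i) / bandwidth) ** 2)  # Gaussian weights
        if causal:
            w[t > i] = 0.0                             # drop future bars
        out[i] = w @ y / w.sum()
    return out

that repainting is exactly why two instances will only match on a frozen backtested chart: the tail of the non-causal curve moves every time a new bar arrives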
Scalping the Century TimeFrame since 1999

Re: Priestley-Chao-Kernel / Gasser–Müller-Kernel / Priestley–Chao-Kernel Regressions

25
Meanwhile, something to read.

https://trendspider.com/learning-center ... works-cnn/

I actually realized today that MetaTrader does not have matrix support, but for machine learning there is a way: use a .CSV file to train a kernel regression in Python with the pandas module, and MetaTrader can read this .CSV file back.
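A rough sketch of that round trip (the file names prices.csv and smoothed.csv are hypothetical; MetaTrader would export the first and read the second back with FileOpen / FileReadString):

Code: Select all

import numpy as np
import pandas as pd

df = pd.read_csv("prices.csv")               # expects a 'close' column
y = df["close"].to_numpy()
t = np.arange(len(y), dtype=float)

bandwidth = 8.0                              # in bars; tune to taste
d = t[:, None] - t[None, :]                  # pairwise bar distances
w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel matrix
smoothed = (w @ y) / w.sum(axis=1)           # Nadaraya-Watson estimate

out = pd.DataFrame({"bar": t.astype(int), "smoothed": smoothed})
out.to_csv("smoothed.csv", index=False)      # for MetaTrader to read back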



Difference between Kernel and Linear Regression Modelling

26
Kernel Smoothing Regression is a flexible technique used to model complex, non-linear relationships in data. Unlike linear regression, which assumes a straight-line relationship, kernel smoothing adapts to the underlying patterns, making it ideal for data that doesn't fit simple models.

Opportunities:

✔️ Captures Complex Patterns: It identifies intricate relationships in data, providing a more accurate fit for non-linear trends.
✔️ Adaptable: Adjusts to data variations without being overly restricted by assumptions, making it useful for exploratory analysis.
✔️ Handles Noise Effectively: Smooths out random variations, which can help reveal the true signal in noisy data.

Challenges:

❌ Computationally Intensive: Requires significant computational power, especially with large data sets, which might slow down analysis.
❌ Sensitive to Parameters: The choice of kernel and bandwidth can significantly impact results. A poor choice may lead to overfitting or underfitting.
❌ Less Interpretable: Compared to simpler models like linear regression, the results of kernel smoothing can be harder to interpret and explain.

To handle Kernel Smoothing Regression in practice:

🔹 R: Use the ggplot2 package for visualization and the geom_smooth() function with method = "loess" to apply kernel smoothing.
🔹 Python: Use the seaborn library for visualization, specifically the sns.lmplot() function with the lowess=True parameter to perform kernel smoothing.

The visualization above shows the difference between linear regression (dashed red line) and kernel smoothing (solid green line). The kernel smoothing line adjusts to the data's natural curvature, providing a better fit for non-linear relationships.
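For anyone who wants to reproduce that kind of picture, here is a small self-contained sketch on synthetic data (seaborn needs statsmodels installed for lowess=True):

Code: Select all

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# synthetic non-linear data, for illustration only
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.3 * rng.standard_normal(len(x))
df = pd.DataFrame({"x": x, "y": y})

ax = plt.gca()
# dashed red: ordinary linear fit
sns.regplot(data=df, x="x", y="y", ci=None, ax=ax,
            line_kws={"color": "red", "linestyle": "--"})
# solid green: lowess, a kernel-weighted local regression
sns.regplot(data=df, x="x", y="y", scatter=False, lowess=True, ax=ax,
            line_kws={"color": "green"})
plt.show()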

Polynomial Regression vs. Linear Regression

27
Polynomial regression might seem different from linear regression at first glance, but it’s still considered a linear model. Why? It all comes down to how the model parameters are used.

✔️ Linear in Parameters: In polynomial regression, the model remains linear in terms of its coefficients. Even if we include terms like squared or cubic versions of the input variable, the relationship with the parameters stays linear.

✔️ Transformed Features: Instead of treating the data as simple inputs, polynomial regression transforms the original input features (e.g., turning an input into its squared or cubic form). However, the relationship between the coefficients and the target variable remains linear.

✔️ Optimization Stays Linear: The method for estimating the coefficients, such as Ordinary Least Squares, remains the same because the relationship with the parameters does not become non-linear.

❌ Non-Linear Model? If a regression model involves coefficients in non-linear ways, such as multiplying them together or applying complex functions to them, it’s no longer considered linear regression.
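A quick numpy sketch of that point: the cubic terms transform the inputs, but the fit itself is still ordinary least squares, linear in the coefficients.

Code: Select all

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 100)
y = 0.5 * x**3 - 2.0 * x + rng.standard_normal(len(x))

# design matrix with columns 1, x, x^2, x^3: the features are
# non-linear in x, but the model is linear in the coefficients
X = np.vander(x, N=4, increasing=True)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
y_hat = X @ coef                              # prediction: X times coef

print(np.round(coef, 3))  # roughly [0, -2, 0, 0.5]

Contrast this with something like y = a * exp(b * x): there b enters non-linearly, plain OLS no longer applies, and the model stops being "linear regression".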