Haohan's Blog
Questions of R Square and Corr Coef

$y\sim(1,\boldsymbol{x})$: regress $y$ on $x$ with an intercept; $y\sim(\boldsymbol{x})$: regress $y$ on $x$ without an intercept. In the context of Statistics, SSE (Sum of Squares due to Error) and SSR (Sum of Squares due to Regression) are used more frequently, but in Econometrics, ESS (Explained Sum of Squares) and RSS (Residual Sum of Squares) are preferred.

0.1 Bivariate Regression

Denote the $R^2$ of $y\sim(1,x_1)$ as $R_1^2$, of $y\sim(1,x_2)$ as $R_2^2$, and of $y\sim(1,x_1,x_2)$ as $R_3^2$, and suppose $corr(x_1,x_2)=\rho$.
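For reference, with $r_1 = corr(y,x_1)$ and $r_2 = corr(y,x_2)$ (so that $R_1^2 = r_1^2$ and $R_2^2 = r_2^2$), the two-regressor $R^2$ obeys the standard identity

$$ R_3^2 = \frac{r_1^2 + r_2^2 - 2\rho r_1 r_2}{1-\rho^2} $$

which reduces to $R_1^2 + R_2^2$ when $\rho = 0$, and in general satisfies $R_3^2 \ge \max(R_1^2, R_2^2)$ since adding a regressor never decreases $R^2$.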

  • Linear Model
  • Linear Regression
  • Quant
Tuesday, November 26, 2024 | 3 minutes Read
Questions of R Square

$y\sim(1,\boldsymbol{x})$: regress $y$ on $x$ with an intercept; $y\sim(\boldsymbol{x})$: regress $y$ on $x$ without an intercept. In the context of Statistics, SSE (Sum of Squares due to Error) and SSR (Sum of Squares due to Regression) are used more frequently, but in Econometrics, ESS (Explained Sum of Squares) and RSS (Residual Sum of Squares) are preferred.

0.1 Definition

$$ R^2 = \frac{SSR}{SST} = \frac{||\hat{Y} - \overline{Y}||^2}{||Y - \overline{Y}||^2} = 1 - \frac{SSE}{SST} = 1 - \frac{||\hat{\epsilon}||^2}{||Y - \overline{Y}||^2} $$
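A minimal numerical check of the two equivalent forms (a sketch assuming numpy is available; the simulated data are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Fit y ~ (1, x) by least squares.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

sst = np.sum((y - y.mean()) ** 2)      # total sum of squares
ssr = np.sum((y_hat - y.mean()) ** 2)  # regression (explained) sum of squares
sse = np.sum((y - y_hat) ** 2)         # error (residual) sum of squares

# With an intercept, SST = SSR + SSE, so both forms of R^2 agree.
print(ssr / sst, 1 - sse / sst)
```

The decomposition $SST = SSR + SSE$, and hence the agreement of the two forms, relies on the model containing an intercept, which makes the residuals orthogonal to the constant column.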

  • Linear Model
  • Linear Regression
  • Quant
Tuesday, November 26, 2024 | 3 minutes Read
Questions of Assumptions

$y\sim(1,\boldsymbol{x})$: regress $y$ on $x$ with an intercept; $y\sim(\boldsymbol{x})$: regress $y$ on $x$ without an intercept. In the context of Statistics, SSE (Sum of Squares due to Error) and SSR (Sum of Squares due to Regression) are used more frequently, but in Econometrics, ESS (Explained Sum of Squares) and RSS (Residual Sum of Squares) are preferred.

0.1 Heteroskedasticity and Autocorrelation

If the residuals ($\epsilon$) in a linear regression model exhibit heteroskedasticity (non-constant variance) or autocorrelation (correlation between residuals across observations), how will this affect the estimation and inference of $\beta$? How can these problems be tested for and addressed?
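A minimal sketch of how one might test for both problems and compute robust standard errors, assuming statsmodels is installed (the data-generating process here is invented for illustration):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
# Heteroskedastic noise: variance grows with |x|.
y = 1.0 + 2.0 * x + rng.normal(size=n) * (1 + np.abs(x))

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()

# Breusch-Pagan: tests whether residual variance depends on the regressors.
lm_stat, lm_pval, _, _ = het_breuschpagan(ols.resid, X)
print("Breusch-Pagan p-value:", lm_pval)

# Durbin-Watson: a value near 2 indicates no first-order autocorrelation.
print("Durbin-Watson:", durbin_watson(ols.resid))

# HAC (Newey-West) standard errors remain valid under both problems.
robust = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(robust.bse)
```

Under these violations OLS $\hat{\beta}$ remains unbiased, but the usual standard errors are wrong; HAC standard errors, or GLS-type corrections such as WLS, restore valid inference.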

  • Linear Model
  • Linear Regression
  • Quant
Tuesday, November 26, 2024 | 7 minutes Read
Linear Regression and Stats

This post focuses on Ordinary Linear Regression.

0.1 Simple Linear Regression

The most basic version of a linear model is Simple Linear Regression, which can be expressed by this formula: $$ y = \alpha + \beta \times x + \epsilon $$ where $\alpha$ is called the intercept, $\beta$ the slope, and $\epsilon$ the residual. The coefficients of Simple Linear Regression can be solved with the Least Squares method, by minimizing $\sum_{i=1}^{n}(y_i-\hat{y}_i)^2$.
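Setting the partial derivatives of $\sum_{i=1}^{n}(y_i-\hat{y}_i)^2$ with respect to $\alpha$ and $\beta$ to zero yields the familiar closed form:

$$ \hat{\beta} = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^2} = \frac{Cov(x,y)}{Var(x)}, \qquad \hat{\alpha} = \bar{y} - \hat{\beta}\bar{x} $$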

  • Linear Model
  • Linear Regression
Sunday, November 17, 2024 | 2 minutes Read
Linear Regression

0.1 General Expression

$$y_{i}=\beta_{0}+\beta_{1}\times x_{i1}+\cdots+\beta_{p}\times x_{ip}+\epsilon_{i},\quad i=1,2,\cdots,n$$

$$ \begin{align*} \mathbf{y}&=(y_{1},y_{2},\cdots,y_{n})^{T} \cr \mathbf{X}&=\begin{bmatrix}1 & x_{11} & x_{12} & \cdots & x_{1p} \cr 1 & x_{21} & x_{22} & \cdots & x_{2p} \cr \vdots & \vdots & \vdots & \vdots & \vdots \cr 1 & x_{n1} & x_{n2} & \cdots & x_{np} \end{bmatrix} \cr \mathbf{\beta}&=(\beta_{0},\beta_{1},\cdots,\beta_{p})^{T} \cr \mathbf{\epsilon}&=(\epsilon_{1}, \epsilon_{2},\cdots,\epsilon_{n})^{T} \end{align*} $$

0.2 OLS Assumptions

  • The regression model is parametric and linear.
  • ${x_{i1},x_{i2},\cdots,x_{ip}}$ are nonstochastic variables.
  • $E(\epsilon_{i})=0$.
  • $Var(\epsilon_{i})=\sigma^{2}$ (homoskedasticity).
  • ${\epsilon_{i}}$ are independent random variables, i.e. no autocorrelation: $cov(\epsilon_{i},\epsilon_{j})=0,\ i\neq j$.
  • The regression model is correctly specified, with no specification bias.

0.3 OLS Estimators

0.3.1 Estimator of $\beta$

Formally, the OLS estimator of $\beta$ is defined as the minimizer of the residual sum of squares (RSS): $$\hat{\mathbf{\beta}}=\arg\min_{\beta}\ S(\mathbf{\beta})$$ $$S(\mathbf{\beta})=(\mathbf{y}-\mathbf{X\beta})^{T}(\mathbf{y}-\mathbf{X\beta})=\sum\limits_{i=1}^{n}(y_{i}-\beta_{0}-\beta_{1}\times x_{i1}-\cdots-\beta_{p}\times x_{ip})^{2}$$ Differentiating and setting the gradient to zero, we get: $$\hat{\mathbf{\beta}}=(\mathbf{X^{T}X})^{-1}\mathbf{X^{T}y}$$
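A minimal numerical illustration of this estimator (a sketch assuming numpy; names are illustrative). Solving the normal equations directly matches the library least-squares routine:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
# Design matrix with an intercept column: n x (p+1).
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(size=n)

# Normal equations (X'X) beta = X'y; solve, rather than explicitly invert.
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically more stable equivalent.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_normal, beta_lstsq))  # True
```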

  • Linear Model
  • Linear Regression
Sunday, November 17, 2024 | 5 minutes Read
Questions of Coefficients

$y\sim(1,\boldsymbol{x})$: regress $y$ on $x$ with an intercept; $y\sim(\boldsymbol{x})$: regress $y$ on $x$ without an intercept. In the context of Statistics, SSE (Sum of Squares due to Error) and SSR (Sum of Squares due to Regression) are used more frequently, but in Econometrics, ESS (Explained Sum of Squares) and RSS (Residual Sum of Squares) are preferred.

0.1 Product of $\beta$

Denote $\beta_1$ as the least squares solution of $y=\beta x+\epsilon$, and $\beta_2$ as the least squares solution of $x=\beta y+\epsilon$. Find the min and max values of $\beta_1\beta_2$. $$ \beta_1 = \frac{Cov(X,Y)}{Var(X)},\quad \beta_2 = \frac{Cov(X,Y)}{Var(Y)}\Rightarrow \beta_1\beta_2 = \rho_{XY}^2 \in [0,1] $$ Since the product is a squared correlation, the minimum 0 is attained when $X$ and $Y$ are uncorrelated, and the maximum 1 when they are perfectly linearly related.
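A quick numerical check of the identity (a sketch assuming numpy; the simulated data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.7 * x + rng.normal(size=1000)

cov = np.cov(x, y)[0, 1]
beta1 = cov / np.var(x, ddof=1)  # slope of y ~ x
beta2 = cov / np.var(y, ddof=1)  # slope of x ~ y

rho = np.corrcoef(x, y)[0, 1]
print(beta1 * beta2, rho**2)  # equal up to floating-point error
```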

  • Linear Model
  • Linear Regression
  • Quant
Saturday, November 16, 2024 | 4 minutes Read
Questions of Conceptions

With reference to donggua, I have completed the answers to these questions.

0.1 Notations

$y\sim(1,\boldsymbol{x})$: regress $y$ on $x$ with an intercept; $y\sim(\boldsymbol{x})$: regress $y$ on $x$ without an intercept. In the context of Statistics, SSE (Sum of Squares due to Error) and SSR (Sum of Squares due to Regression) are used more frequently, but in Econometrics, ESS (Explained Sum of Squares) and RSS (Residual Sum of Squares) are preferred.

0.2 Conceptions and Basic Definitions

0.2.1 The assumptions of LR

Gauss-Markov Theorem: under the assumptions of classical linear regression, the ordinary least squares (OLS) estimator is the linear unbiased estimator with the minimum variance, i.e. the Best Linear Unbiased Estimator (BLUE).
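Formally, for any other linear unbiased estimator $\tilde{\beta}=\mathbf{C}\mathbf{y}$ with $E(\tilde{\beta})=\beta$, the theorem asserts that

$$ Var(\tilde{\beta}) - Var(\hat{\beta}_{OLS}) \succeq 0 $$

i.e. the difference of the two covariance matrices is positive semidefinite, so every linear combination $a^{T}\beta$ is estimated with weakly smaller variance by OLS.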

  • Linear Model
  • Linear Regression
  • Quant
Saturday, November 16, 2024 | 5 minutes Read