
## LEAST SQUARES ESTIMATION

Let $Y$ be a random variable that fluctuates about an unknown parameter $\eta$; that is, $Y=\eta+\varepsilon$, where $\varepsilon$ is the fluctuation or error. For example, $\varepsilon$ may be a "natural" fluctuation inherent in the experiment which gives rise to $\eta$, or it may represent the error in measuring $\eta$, so that $\eta$ is the true response and $Y$ is the observed response. As noted in Chapter 1, our focus is on linear models, so we assume that $\eta$ can be expressed in the form
$$\eta=\beta_0+\beta_1 x_1+\cdots+\beta_{p-1} x_{p-1},$$
where the explanatory variables $x_1, x_2, \ldots, x_{p-1}$ are known constants (e.g., experimental variables that are controlled by the experimenter and are measured with negligible error), and the $\beta_j(j=0,1, \ldots, p-1)$ are unknown parameters to be estimated. If the $x_j$ are varied and $n$ values, $Y_1, Y_2, \ldots, Y_n$, of $Y$ are observed, then
$$Y_i=\beta_0+\beta_1 x_{i 1}+\cdots+\beta_{p-1} x_{i, p-1}+\varepsilon_i \quad(i=1,2, \ldots, n),$$
where $x_{i j}$ is the $i$ th value of $x_j$. Writing these $n$ equations in matrix form, we have
$$\left(\begin{array}{c} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{array}\right)=\left(\begin{array}{ccccc} x_{10} & x_{11} & x_{12} & \cdots & x_{1, p-1} \\ x_{20} & x_{21} & x_{22} & \cdots & x_{2, p-1} \\ \vdots & \vdots & \vdots & & \vdots \\ x_{n 0} & x_{n 1} & x_{n 2} & \cdots & x_{n, p-1} \end{array}\right)\left(\begin{array}{c} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_{p-1} \end{array}\right)+\left(\begin{array}{c} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{array}\right),$$
or
$$\mathbf{Y}=\mathbf{X} \boldsymbol{\beta}+\boldsymbol{\varepsilon},$$
where $x_{10}=x_{20}=\cdots=x_{n 0}=1$. The $n \times p$ matrix $\mathbf{X}$ will be called the regression matrix, and the $x_{i j}$'s are generally chosen so that the columns of $\mathbf{X}$ are linearly independent; that is, $\mathbf{X}$ has rank $p$, and we say that $\mathbf{X}$ has full rank. However, in some experimental design situations the elements of $\mathbf{X}$ are chosen to be 0 or 1, and the columns of $\mathbf{X}$ may be linearly dependent. In this case $\mathbf{X}$ is commonly called the design matrix, and we say that $\mathbf{X}$ has less than full rank.
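As a small illustration (not from the text, using synthetic data and NumPy), the construction above can be carried out directly: build a full-rank regression matrix $\mathbf{X}$ with a leading column of ones, generate $\mathbf{Y}=\mathbf{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon}$, and compute the least squares estimate $(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: n = 50 observations, p = 3 parameters (intercept + 2 predictors)
n, p = 50, 3
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 10, n)

# Regression matrix X: first column of ones (x_{i0} = 1), then the predictors
X = np.column_stack([np.ones(n), x1, x2])
assert np.linalg.matrix_rank(X) == p  # full rank: columns linearly independent

beta_true = np.array([2.0, 0.5, -1.0])
eps = rng.normal(0, 1.0, n)           # random errors
Y = X @ beta_true + eps

# Least squares estimate: solve (X'X) beta_hat = X'Y rather than inverting X'X
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_hat)  # close to beta_true
```

Solving the normal equations with `np.linalg.solve` (or using `np.linalg.lstsq`) is numerically preferable to forming $(\mathbf{X}'\mathbf{X})^{-1}$ explicitly.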

## PROPERTIES OF LEAST SQUARES ESTIMATES

If we assume that the errors are unbiased (i.e., $E[\boldsymbol{\varepsilon}]=\mathbf{0}$) and the columns of $\mathbf{X}$ are linearly independent, then the least squares estimate $\hat{\boldsymbol{\beta}}=\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{Y}$ satisfies
$$\begin{aligned} E[\hat{\boldsymbol{\beta}}] &=\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} E[\mathbf{Y}] \\ &=\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{X} \boldsymbol{\beta} \\ &=\boldsymbol{\beta}, \end{aligned}$$
and $\hat{\boldsymbol{\beta}}$ is an unbiased estimate of $\boldsymbol{\beta}$. If we assume further that the $\varepsilon_i$ are uncorrelated and have the same variance, that is, $\operatorname{cov}\left[\varepsilon_i, \varepsilon_j\right]=\delta_{i j} \sigma^2$, then $\operatorname{Var}[\boldsymbol{\varepsilon}]=\sigma^2 \mathbf{I}_n$ and
$$\operatorname{Var}[\mathbf{Y}]=\operatorname{Var}[\mathbf{Y}-\mathbf{X} \boldsymbol{\beta}]=\operatorname{Var}[\boldsymbol{\varepsilon}].$$
Hence, by (1.7),
$$\begin{aligned} \operatorname{Var}[\hat{\boldsymbol{\beta}}] &=\operatorname{Var}\left[\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \mathbf{Y}\right] \\ &=\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \mathbf{X}^{\prime} \operatorname{Var}[\mathbf{Y}] \mathbf{X}\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \\ &=\sigma^2\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1}\left(\mathbf{X}^{\prime} \mathbf{X}\right)\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1} \\ &=\sigma^2\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1}. \end{aligned}$$
The question now arises as to why we chose $\hat{\boldsymbol{\beta}}$ as our estimate of $\boldsymbol{\beta}$ and not some other estimate. We show below that for a reasonable class of estimates, $\hat{\beta}_j$ is the estimate of $\beta_j$ with the smallest variance. Here $\hat{\beta}_j$ can be extracted from $\hat{\boldsymbol{\beta}}=\left(\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_{p-1}\right)^{\prime}$ simply by premultiplying by the row vector $\mathbf{c}^{\prime}$, which contains unity in the $(j+1)$th position and zeros elsewhere. It transpires that this special property of $\hat{\beta}_j$ can be generalized to the case of any linear combination $\mathbf{a}^{\prime} \hat{\boldsymbol{\beta}}$ using the following theorem.
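Both properties, $E[\hat{\boldsymbol{\beta}}]=\boldsymbol{\beta}$ and $\operatorname{Var}[\hat{\boldsymbol{\beta}}]=\sigma^2(\mathbf{X}'\mathbf{X})^{-1}$, can be checked numerically by Monte Carlo. The sketch below (synthetic setup, not from the text) redraws the errors many times, recomputes $\hat{\boldsymbol{\beta}}$ each time, and compares the empirical mean and covariance of the estimates with the theoretical values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: intercept plus one predictor on [0, 1]
n, p, sigma = 30, 2, 1.0
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
beta = np.array([1.0, 3.0])
XtX_inv = np.linalg.inv(X.T @ X)

# Monte Carlo: fresh errors each replication, same X and beta
reps = 10000
estimates = np.empty((reps, p))
for r in range(reps):
    Y = X @ beta + rng.normal(0.0, sigma, n)
    estimates[r] = XtX_inv @ X.T @ Y

print(estimates.mean(axis=0))   # close to beta: unbiasedness
print(np.cov(estimates.T))      # close to sigma^2 (X'X)^{-1}
print(sigma**2 * XtX_inv)
```

With 10,000 replications the sample mean and sample covariance of the $\hat{\boldsymbol{\beta}}$'s agree with $\boldsymbol{\beta}$ and $\sigma^2(\mathbf{X}'\mathbf{X})^{-1}$ to within a few percent.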

THEOREM 3.2 Let $\hat{\boldsymbol{\theta}}$ be the least squares estimate of $\boldsymbol{\theta}=\mathbf{X} \boldsymbol{\beta}$, where $\boldsymbol{\theta} \in \Omega=\mathcal{C}(\mathbf{X})$ and $\mathbf{X}$ may not have full rank. Then among the class of linear unbiased estimates of $\mathbf{c}^{\prime} \boldsymbol{\theta}$, $\mathbf{c}^{\prime} \hat{\boldsymbol{\theta}}$ is the unique estimate with minimum variance. [We say that $\mathbf{c}^{\prime} \hat{\boldsymbol{\theta}}$ is the best linear unbiased estimate (BLUE) of $\mathbf{c}^{\prime} \boldsymbol{\theta}$.]
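The BLUE property can be made concrete with a small comparison (an illustration I have added, not part of the text). Any linear estimator $\mathbf{a}'\mathbf{Y}$ of the slope is unbiased exactly when $\mathbf{a}'\mathbf{X}=\mathbf{c}'$, and its variance is $\sigma^2\mathbf{a}'\mathbf{a}$ when $\operatorname{Var}[\mathbf{Y}]=\sigma^2\mathbf{I}_n$. Below, the OLS weights are compared against an alternative unbiased estimator, the endpoint difference $(Y_n-Y_1)/(x_n-x_1)$; the theorem says OLS must have the smaller variance:

```python
import numpy as np

n = 20
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
sigma2 = 1.0

# OLS weight vector for the slope beta_1: c'(X'X)^{-1}X' with c = (0, 1)'
c = np.array([0.0, 1.0])
a_ols = np.linalg.solve(X.T @ X, c) @ X.T   # a_ols @ Y is the OLS slope estimate

# Alternative linear unbiased estimator of the slope:
# the endpoint difference (Y_n - Y_1) / (x_n - x_1)
a_alt = np.zeros(n)
a_alt[0], a_alt[-1] = -1.0 / (n - 1), 1.0 / (n - 1)

# Unbiasedness of a'Y for c'beta requires a'X = c'
assert np.allclose(a_ols @ X, c)
assert np.allclose(a_alt @ X, c)

# Variance of a'Y is sigma^2 * a'a when Var[Y] = sigma^2 I
var_ols = sigma2 * a_ols @ a_ols
var_alt = sigma2 * a_alt @ a_alt
print(var_ols, var_alt)   # the OLS variance is strictly smaller
```

The endpoint estimator discards the interior observations, and the gap in variance quantifies the cost of doing so.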
