

## Euclidean Separation, Repeated Observations

Assume that $X_{1}, X_{2}$ and $\mathcal{P}_{\gamma}^{d}$ are as in the premise of Proposition 2.5 and $K$-repeated observations are allowed, $K>1$. An immediate attempt to reduce the situation to the single-observation case, by calling the $K$-repeated observation $\omega^{K}=\left(\omega_{1}, \ldots, \omega_{K}\right)$ our new observation and thus reducing testing via repeated observations to the single-observation case, seemingly fails: already in the simplest case of stationary $K$-repeated observations, this reduction would require replacing the family $\mathcal{P}_{\gamma}^{d}$ with the family of product distributions $\underbrace{P \times \ldots \times P}_{K}$ stemming from $P \in \mathcal{P}_{\gamma}^{d}$, and it is unclear how to apply our machinery, based on Euclidean separation, to the resulting single-observation testing problem. Instead, we will use the $K$-step majority test.

### 2.2.3.1 Preliminaries: Repeated observations in the “signal plus noise” observation model

We are in the situation where our inference should be based on observations
$$\omega^{K}=\left(\omega_{1}, \omega_{2}, \ldots, \omega_{K}\right)$$
and should decide on hypotheses $\mathcal{H}_{1}, \mathcal{H}_{2}$ about the distribution $Q^{K}$ of $\omega^{K}$. We are interested in the following three cases:

S [stationary $K$-repeated observations, cf. Section 2.1.3.1]: $\omega_{1}, \ldots, \omega_{K}$ are drawn independently of each other from the same distribution $Q$; that is, $Q^{K}$ is the product distribution $Q \times \ldots \times Q$. Further, under hypothesis $\mathcal{H}_{\chi}$, $\chi=1,2$, $Q$ is the distribution of the random variable $\omega=x+\xi$, where $x \in X_{\chi}$ is deterministic and the distribution $P$ of $\xi$ belongs to the family $\mathcal{P}_{\gamma}^{d}$;

SS [semi-stationary $K$-repeated observations, cf. Section 2.1.3.2]: there are two deterministic sequences, one of signals $\left\{x_{k}\right\}_{k=1}^{K}$, another of distributions $\left\{P_{k} \in \mathcal{P}_{\gamma}^{d}\right\}_{k=1}^{K}$, and $\omega_{k}=x_{k}+\xi_{k}$, $1 \leq k \leq K$, with $\xi_{k} \sim P_{k}$ independent across $k$. Under hypothesis $\mathcal{H}_{\chi}$, all signals $x_{k}$, $k \leq K$, belong to $X_{\chi}$.

QS [quasi-stationary $K$-repeated observations, cf. Section 2.1.3.3]: “in nature” there exists a random sequence of driving factors $\zeta^{K}=\left(\zeta_{1}, \ldots, \zeta_{K}\right)$ such that the observation $\omega_{k}$, for every $k$, is a deterministic function of $\zeta^{k}=\left(\zeta_{1}, \ldots, \zeta_{k}\right)$: $\omega_{k}=\theta_{k}\left(\zeta^{k}\right)$. On top of that, under the $\ell$-th hypothesis $\mathcal{H}_{\ell}$, for all $k \leq K$ and all $\zeta^{k-1}$, the conditional distribution of $\omega_{k}$ given $\zeta^{k-1}$ belongs to the family $\mathcal{P}_{\ell}$ of distributions of all random vectors of the form $x+\xi$, where $x \in X_{\ell}$ is deterministic and $\xi$ is random noise with distribution from $\mathcal{P}_{\gamma}^{d}$.
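Under the stationary model S, the $K$-step majority test mentioned above runs the single-observation Euclidean-separation test on each $\omega_k$ and accepts the hypothesis that collects the majority of votes. Below is a minimal Python sketch, assuming the simplest case where $X_1$, $X_2$ are singletons $\{x_1\}$, $\{x_2\}$ and the noise distribution is Gaussian; all names and numerical values are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: X_1 = {x1}, X_2 = {x2} are singletons, and the
# noise distribution P is N(0, sigma^2 I_2) (a member of P_gamma^d).
x1 = np.array([0.0, 0.0])
x2 = np.array([3.0, 0.0])
K = 9
sigma = 2.0

def single_obs_test(omega):
    """Euclidean-separation test: accept H_1 iff omega lies on the x1
    side of the hyperplane bisecting the segment [x1, x2]."""
    h = x1 - x2
    c = (x1 + x2) / 2.0
    return float(h @ (omega - c)) >= 0.0  # True -> accept H_1

def majority_test(omega_K):
    """K-step majority test: run the single-observation test on each
    omega_k and accept the hypothesis accepted by the majority."""
    votes_for_H1 = sum(single_obs_test(w) for w in omega_K)
    return 1 if votes_for_H1 > len(omega_K) / 2 else 2

def risk_estimate(x_true, true_label, trials=2000):
    """Monte Carlo estimate of the probability that the majority test
    rejects the true hypothesis under stationary repeated observations."""
    errors = 0
    for _ in range(trials):
        omega_K = x_true + sigma * rng.standard_normal((K, 2))  # omega_k = x + xi_k
        if majority_test(omega_K) != true_label:
            errors += 1
    return errors / trials

print("empirical risk under H_1:", risk_estimate(x1, 1))
print("empirical risk under H_2:", risk_estimate(x2, 2))
```

The single-observation test errs with probability about $\Phi(-0.75)\approx 0.23$ in this setup, while the 9-step majority vote drives the risk down to a few percent, illustrating why repeated observations help.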

## From Pairwise to Multiple Hypotheses Testing

Assume we are given $L$ families of probability distributions $\mathcal{P}_{\ell}$, $1 \leq \ell \leq L$, on an observation space $\Omega$, and observe a realization of a random variable $\omega \sim P$ taking values in $\Omega$. Given $\omega$, we want to decide on the $L$ hypotheses $$H_{\ell}: P \in \mathcal{P}_{\ell}, \quad 1 \leq \ell \leq L .$$

Our ideal goal would be to find a low-risk simple test deciding on the hypotheses. However, this “ideal goal” may be unachievable, for example, when some pairs of families $\mathcal{P}_{\ell}$ have nonempty intersections: when $\mathcal{P}_{\ell} \cap \mathcal{P}_{\ell^{\prime}} \neq \emptyset$ for some $\ell \neq \ell^{\prime}$, there is no way to decide on the hypotheses with risk $<1 / 2$.

But: Impossibility to decide reliably on all $L$ hypotheses “individually” does not mean that no meaningful inferences can be made. For example, consider three rectangles on the plane, with rectangles $A$ and $B$ adjacent to each other and rectangle $C$ distant from both, and three hypotheses $H_{\ell}$, $\ell \in \{A, B, C\}$, stating that our observation is $\omega=x+\xi$ with deterministic “signal” $x$ belonging to rectangle $\ell$ and $\xi \sim \mathcal{N}\left(0, \sigma^{2} I_{2}\right)$. However small $\sigma$ is, no test can decide on the three hypotheses with risk $<1 / 2$; e.g., there is no way to decide reliably on $H_{A}$ vs. $H_{B}$. However, we may hope that when $\sigma$ is small (or when repeated observations are allowed), observations allow us to reliably discard at least some of the hypotheses. For instance, when the signal belongs to rectangle $A$ (i.e., $H_{A}$ holds true), we can hardly discard reliably the hypothesis $H_{B}$ stating that the signal belongs to rectangle $B$, but hopefully can reliably discard $H_{C}$ (that is, infer that the signal is not in rectangle $C$).
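This can be quantified with a small sketch. The rectangles and noise level below are hypothetical, and we use the standard bound $\Phi(-\delta/(2\sigma))$ on the pairwise risk of the Euclidean-separation test for Gaussian noise, where $\delta$ is the distance between the two sets being separated.

```python
import numpy as np
from math import erfc, sqrt

# Hypothetical axis-aligned rectangles on the plane: A and B share an
# edge, C is far from both; noise is N(0, sigma^2 I_2).
rects = {
    "A": (np.array([0.0, 0.0]), np.array([1.0, 1.0])),
    "B": (np.array([1.0, 0.0]), np.array([2.0, 1.0])),
    "C": (np.array([6.0, 0.0]), np.array([7.0, 1.0])),
}
sigma = 0.5

def box_distance(r1, r2):
    """Euclidean distance between two axis-aligned boxes:
    norm of the coordinate-wise gap (zero along overlapping axes)."""
    (lo1, hi1), (lo2, hi2) = r1, r2
    gap = np.maximum(np.maximum(lo1 - hi2, lo2 - hi1), 0.0)
    return float(np.linalg.norm(gap))

def gauss_cdf(t):
    """Standard normal CDF Phi(t) via the complementary error function."""
    return 0.5 * erfc(-t / sqrt(2.0))

def pairwise_risk_bound(name1, name2):
    """Risk bound of the Euclidean-separation test deciding rectangle
    name1 vs. name2: Phi(-delta / (2 sigma))."""
    delta = box_distance(rects[name1], rects[name2])
    return gauss_cdf(-delta / (2.0 * sigma))

print("A vs B:", pairwise_risk_bound("A", "B"))  # 0.5: cannot separate
print("A vs C:", pairwise_risk_bound("A", "C"))  # tiny: C can be discarded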

When handling multiple hypotheses which cannot be reliably decided upon “as they are,” it makes sense to speak about testing the hypotheses “up to closeness.”

## Detectors and their risks

Let $\Omega$ be an observation space, and $\mathcal{P}_{\chi}$, $\chi=1,2$, be two families of probability distributions on $\Omega$. By definition, a detector associated with $\Omega$ is a real-valued function $\phi(\omega)$ on $\Omega$. We associate with a detector $\phi$ and families $\mathcal{P}_{\chi}$, $\chi=1,2$, the risks defined as follows:
$$\operatorname{Risk}_{-}\left[\phi \mid \mathcal{P}_{1}\right]=\sup _{P \in \mathcal{P}_{1}} \int_{\Omega} \mathrm{e}^{-\phi(\omega)} P(d \omega), \qquad (2.45.a)$$
$$\operatorname{Risk}_{+}\left[\phi \mid \mathcal{P}_{2}\right]=\sup _{P \in \mathcal{P}_{2}} \int_{\Omega} \mathrm{e}^{\phi(\omega)} P(d \omega). \qquad (2.45.b)$$

Given a detector $\phi$, we can associate with it a simple test $\mathcal{T}_{\phi}$ deciding, via observation $\omega \sim P$, on the hypotheses $$H_{1}: P \in \mathcal{P}_{1}, \quad H_{2}: P \in \mathcal{P}_{2} .$$ Namely, given observation $\omega \in \Omega$, the test $\mathcal{T}_{\phi}$ accepts $H_{1}$ (and rejects $H_{2}$) whenever $\phi(\omega) \geq 0$, and accepts $H_{2}$ and rejects $H_{1}$ otherwise.
Let us make the following immediate observation:
Proposition 2.14. Let $\Omega$ be an observation space, $\mathcal{P}_{\chi}$, $\chi=1,2$, be two families of probability distributions on $\Omega$, and $\phi$ be a detector. The risks of the test $\mathcal{T}_{\phi}$ associated with this detector satisfy
$$\begin{aligned} \operatorname{Risk}_{1}\left(\mathcal{T}_{\phi} \mid H_{1}, H_{2}\right) &\leq \operatorname{Risk}_{-}\left[\phi \mid \mathcal{P}_{1}\right], \\ \operatorname{Risk}_{2}\left(\mathcal{T}_{\phi} \mid H_{1}, H_{2}\right) &\leq \operatorname{Risk}_{+}\left[\phi \mid \mathcal{P}_{2}\right] \end{aligned} \qquad (2.47)$$
Proof. Let $\omega \sim P \in \mathcal{P}_{1}$. Then the $P$-probability of the event $\{\omega: \phi(\omega)<0\}$ does not exceed $\operatorname{Risk}_{-}\left[\phi \mid \mathcal{P}_{1}\right]$, since on the set $\{\omega: \phi(\omega)<0\}$ the integrand in (2.45.a) is $>1$, and this integrand is nonnegative everywhere, so that the integral in (2.45.a) is $\geq P\{\omega: \phi(\omega)<0\}$. Recalling what $\mathcal{T}_{\phi}$ is, we see that the $P$-probability to reject $H_{1}$ is at most $\operatorname{Risk}_{-}\left[\phi \mid \mathcal{P}_{1}\right]$, implying the first relation in (2.47). By a similar argument, with (2.45.b) in the role of (2.45.a), when $\omega \sim P \in \mathcal{P}_{2}$, the $P$-probability of the event $\{\omega: \phi(\omega) \geq 0\}$ is upper-bounded by $\operatorname{Risk}_{+}\left[\phi \mid \mathcal{P}_{2}\right]$, implying the second relation in (2.47). $\square$
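Proposition 2.14 is easy to check numerically in the simplest Gaussian case. The setup below is hypothetical: each family consists of a single Gaussian on $\Omega=\mathbf{R}$, and the detector is taken as half the log-likelihood ratio, which is affine in $\omega$; the empirical error probabilities of $\mathcal{T}_{\phi}$ should fall below the Monte Carlo estimates of the risks (2.45).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-distribution families on Omega = R:
# P_1 = {N(mu1, sigma^2)}, P_2 = {N(mu2, sigma^2)}, mu1 > mu2.
mu1, mu2, sigma = 1.0, -1.0, 1.0
delta, c = mu1 - mu2, (mu1 + mu2) / 2.0

def phi(omega):
    """Detector: half the log-likelihood ratio (1/2) ln(dP1/dP2)(omega),
    which is affine in omega for two Gaussians with common variance."""
    return (delta / (2.0 * sigma**2)) * (omega - c)

n = 200_000
w1 = rng.normal(mu1, sigma, n)  # observations omega ~ P, P in P_1
w2 = rng.normal(mu2, sigma, n)  # observations omega ~ P, P in P_2

risk_minus = np.mean(np.exp(-phi(w1)))  # estimates Risk_-[phi | P_1], (2.45.a)
risk_plus  = np.mean(np.exp(phi(w2)))   # estimates Risk_+[phi | P_2], (2.45.b)
err1 = np.mean(phi(w1) < 0)   # P_1-probability that T_phi rejects H_1
err2 = np.mean(phi(w2) >= 0)  # P_2-probability that T_phi rejects H_2

# Proposition 2.14: error probabilities of T_phi are bounded by the risks.
print(err1, "<=", risk_minus)
print(err2, "<=", risk_plus)
```

For this detector both risks equal $e^{-\delta^{2}/(8\sigma^{2})}\approx 0.61$, while the actual error probabilities are $\Phi(-\delta/(2\sigma))\approx 0.16$: the bound of the proposition holds, though it need not be tight.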

