
## Simple observation schemes—Motivation

A natural conclusion one can extract from the previous section is that it makes sense, to say the least, to learn how to build detector-based tests with minimal risk. Thus, we arrive at the following design problem:
Given an observation space $\Omega$ and two families, $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$, of probability distributions on $\Omega$, solve the optimization problem
$$\mathrm{Opt}=\min_{\phi:\Omega \rightarrow \mathbf{R}} \max \left[\underbrace{\sup_{P \in \mathcal{P}_{1}} \int_{\Omega} \mathrm{e}^{-\phi(\omega)} P(d \omega)}_{F[\phi]},\ \underbrace{\sup_{P \in \mathcal{P}_{2}} \int_{\Omega} \mathrm{e}^{\phi(\omega)} P(d \omega)}_{G[\phi]}\right] \tag{2.53}$$
While being convex, problem (2.53) is typically computationally intractable. First, it is infinite-dimensional: candidate solutions are multivariate functions; how do we represent them on a computer, not to mention optimize over them? Besides, the objective to be optimized is expressed in terms of suprema of infinitely many (provided $\mathcal{P}_{1}$ and/or $\mathcal{P}_{2}$ are infinite) expectations, and computing just a single expectation can be a difficult task. We are about to consider "favorable" cases, simple observation schemes, where (2.53) is efficiently solvable.

To arrive at the notion of a simple observation scheme, consider the case when all distributions from $\mathcal{P}_{1}, \mathcal{P}_{2}$ admit densities taken w.r.t. some reference measure $\Pi$ on $\Omega$, and these densities are parameterized by a "parameter" $\mu$ running through some parameter space $\mathcal{M}$. In other words, $\mathcal{P}_{1}$ is comprised of all distributions with densities $p_{\mu}(\cdot)$ and $\mu$ belonging to some subset $M_{1}$ of $\mathcal{M}$, while $\mathcal{P}_{2}$ is comprised of distributions with densities $p_{\mu}(\cdot)$ and $\mu$ belonging to another subset, $M_{2}$, of $\mathcal{M}$. To save words, we shall identify distributions with their densities taken w.r.t. $\Pi$, so that
$$\mathcal{P}_{\chi}=\left\{p_{\mu}: \mu \in M_{\chi}\right\}, \quad \chi=1,2,$$
where $\left\{p_{\mu}(\cdot): \mu \in \mathcal{M}\right\}$ is a given "parametric" family of probability densities. The quotation marks in "parametric" reflect the fact that at this point the "parameter" $\mu$ can be infinite-dimensional (e.g., we can parameterize a density by itself), so that assuming a "parametric" representation of the distributions from $\mathcal{P}_{1}$, $\mathcal{P}_{2}$ in fact does not restrict the generality.

Our first observation is that in our "parametric" setup, we can rewrite problem (2.53) equivalently as
$$\ln (\mathrm{Opt})=\min_{\phi:\Omega \rightarrow \mathbf{R}}\ \sup_{\mu \in M_{1},\, \nu \in M_{2}} \underbrace{\frac{1}{2}\left[\ln \left(\int_{\Omega} \mathrm{e}^{-\phi(\omega)} p_{\mu}(\omega) \Pi(d \omega)\right)+\ln \left(\int_{\Omega} \mathrm{e}^{\phi(\omega)} p_{\nu}(\omega) \Pi(d \omega)\right)\right]}_{\Phi(\phi ; \mu, \nu)}$$
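This equivalence rests on a shift argument that the passage leaves implicit; a sketch, in the notation of (2.53):

```latex
% Shifting a detector by a constant c scales the two risks in opposite directions:
F[\phi - c] = e^{c}\,F[\phi], \qquad G[\phi - c] = e^{-c}\,G[\phi];
% choosing c = \tfrac{1}{2}\ln\big(G[\phi]/F[\phi]\big) balances the two terms, so that
\min_{c \in \mathbf{R}} \max\big[F[\phi - c],\; G[\phi - c]\big] = \sqrt{F[\phi]\,G[\phi]};
% since the candidate class is closed under adding constants, minimizing over \phi gives
\ln(\mathrm{Opt}) = \min_{\phi} \tfrac{1}{2}\big[\ln F[\phi] + \ln G[\phi]\big].
```

In the parametric setup the suprema over $P \in \mathcal{P}_1$, $P \in \mathcal{P}_2$ become suprema over $\mu \in M_1$, $\nu \in M_2$ of the two logarithmic terms separately, which is exactly the displayed saddle-point form.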

## Simple observation schemes—The definition

Consider the situation in which we are given

1. A Polish (complete separable metric) observation space $\Omega$ equipped with a $\sigma$-finite $\sigma$-additive Borel reference measure $\Pi$ such that the support of $\Pi$ is the entire $\Omega$.

Those not fully comfortable with some of the notions from the previous sentence can be assured that the only observation spaces we indeed shall deal with are pretty simple:

• $\Omega=\mathbf{R}^{d}$ equipped with the Lebesgue measure $\Pi$, and
• a finite or countable set $\Omega$ which is discrete (distances between distinct points are equal to 1) and is equipped with the counting measure $\Pi$.
2. A parametric family $\left\{p_{\mu}(\cdot): \mu \in \mathcal{M}\right\}$ of probability densities, taken w.r.t. $\Pi$, such that
• the space $\mathcal{M}$ of parameters is a convex set in some $\mathbf{R}^{n}$ which coincides with its relative interior,
• the function $p_{\mu}(\omega): \mathcal{M} \times \Omega \rightarrow \mathbf{R}$ is continuous in $(\mu, \omega)$ and positive everywhere.
3. A finite-dimensional linear subspace $\mathcal{F}$ of the space of continuous functions on $\Omega$ such that
• $\mathcal{F}$ contains constants,
• all functions of the form $\ln \left(p_{\mu}(\omega) / p_{\nu}(\omega)\right)$ with $\mu, \nu \in \mathcal{M}$ are contained in $\mathcal{F}$,
• for every $\phi(\cdot) \in \mathcal{F}$, the function
$$\ln \left(\int_{\Omega} \mathrm{e}^{\phi(\omega)} p_{\mu}(\omega) \Pi(d \omega)\right)$$
is real-valued and concave on $\mathcal{M}$.

## Simple observation schemes—Examples

In Gaussian o.s.,

• the observation space $(\Omega, \Pi)$ is the space $\mathbf{R}^{d}$ with Lebesgue measure;
• the family $\left\{p_{\mu}(\cdot): \mu \in \mathcal{M}\right\}$ is the family of Gaussian densities $\mathcal{N}(\mu, \Theta)$, with fixed positive definite covariance matrix $\Theta$; distributions from the family are parameterized by their expectations $\mu$. Thus,
$$\mathcal{M}=\mathbf{R}^{d}, \quad p_{\mu}(\omega)=\frac{1}{(2 \pi)^{d / 2} \sqrt{\operatorname{Det}(\Theta)}} \exp \left\{-\frac{1}{2}(\omega-\mu)^{T} \Theta^{-1}(\omega-\mu)\right\}$$
• the family $\mathcal{F}$ is the family of all affine functions on $\mathbf{R}^{d}$.
It is immediately seen that Gaussian o.s. meets all requirements imposed on a simple o.s. For example,
$$\ln \left(p_{\mu}(\omega) / p_{\nu}(\omega)\right)=(\mu-\nu)^{T} \Theta^{-1} \omega+\frac{1}{2}\left[\nu^{T} \Theta^{-1} \nu-\mu^{T} \Theta^{-1} \mu\right]$$
is an affine function of $\omega$ and thus belongs to $\mathcal{F}$. Besides this, a function $\phi(\cdot) \in \mathcal{F}$ is affine: $\phi(\omega)=a^{T} \omega+b$, implying that
$$\begin{aligned} f(\mu) &:=\ln \left(\int_{\mathbf{R}^{d}} \mathrm{e}^{\phi(\omega)} p_{\mu}(\omega)\, d \omega\right)=\ln \left(\mathbf{E}_{\xi \sim \mathcal{N}(0, I_{d})}\left\{\exp \left\{a^{T}\left(\Theta^{1 / 2} \xi+\mu\right)+b\right\}\right\}\right) \\ &=a^{T} \mu+b+\text{const}, \\ \text{const} &=\ln \left(\mathbf{E}_{\xi \sim \mathcal{N}(0, I_{d})}\left\{\exp \left\{a^{T} \Theta^{1 / 2} \xi\right\}\right\}\right)=\frac{1}{2} a^{T} \Theta a \end{aligned}$$
is an affine (and thus a concave) function of $\mu$.
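The closed form $f(\mu)=a^{T}\mu+b+\frac{1}{2}a^{T}\Theta a$ is easy to sanity-check numerically. The sketch below (all numbers are arbitrary illustrative choices, not from the text) compares it against a Monte Carlo estimate of $\ln \int \mathrm{e}^{\phi(\omega)} p_{\mu}(\omega)\, d\omega$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data, chosen only for illustration.
Theta = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.2],
                  [0.0, 0.2, 1.5]])   # positive definite covariance
mu = np.array([0.3, -0.1, 0.8])       # mean parameter
a = np.array([0.4, -0.2, 0.3])        # affine detector phi(w) = a^T w + b
b = 0.7

# Closed form derived in the text: f(mu) = a^T mu + b + (1/2) a^T Theta a
closed = a @ mu + b + 0.5 * a @ Theta @ a

# Monte Carlo estimate of ln E_{w ~ N(mu, Theta)} exp(phi(w))
w = rng.multivariate_normal(mu, Theta, size=500_000)
mc = float(np.log(np.mean(np.exp(w @ a + b))))

print(f"closed form: {closed:.4f}   Monte Carlo: {mc:.4f}")
```

The two values agree up to Monte Carlo error, illustrating that the log-moment function of an affine detector is indeed affine in $\mu$ in the Gaussian o.s.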
As we remember from Chapter 1, Gaussian o.s. is responsible for the standard signal processing model where one is given a noisy observation
$$\omega=A x+\xi \quad[\xi \sim \mathcal{N}(0, \Theta)]$$
of the image $A x$ of an unknown signal $x \in \mathbf{R}^{n}$ under linear transformation with known $d \times n$ sensing matrix $A$, and the goal is to infer from this observation some knowledge about $x$. In this situation, a hypothesis that $x$ belongs to some set $X$ translates into the hypothesis that the observation $\omega$ is drawn from a Gaussian distribution with known covariance matrix $\Theta$ and expectation known to belong to the set $M=\{\mu=A x: x \in X\}$. Therefore, deciding upon various hypotheses on where $x$ is located reduces to deciding upon hypotheses on the distribution of observations in Gaussian o.s.
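To make this reduction concrete, here is a small simulation sketch (signal values, sensing matrix, and noise level are hypothetical choices for illustration). Two single-point signal hypotheses are mapped through $A$ to Gaussian means $\mu_1, \mu_2$, and the affine likelihood-ratio detector $\phi(\omega)=\frac{1}{2}\ln\left(p_{\mu_1}(\omega)/p_{\mu_2}(\omega)\right)$, which lies in $\mathcal{F}$, decides between them; its risks $F[\phi]=G[\phi]$ equal $\exp\{-\delta^2/8\}$ with $\delta^2=(\mu_1-\mu_2)^T\Theta^{-1}(\mu_1-\mu_2)$, and the error probabilities of the sign test are bounded by these risks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensing matrix A (d=4, n=3), noise covariance, and two signals.
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, -0.5],
              [0.5, 0.5, 1.0],
              [-0.5, 1.0, 0.0]])
Theta = 0.5 * np.eye(4)                 # known noise covariance
x1 = np.array([1.0, 0.0, -0.5])         # hypothesis 1: x = x1
x2 = np.array([0.2, 0.6, 0.4])          # hypothesis 2: x = x2
mu1, mu2 = A @ x1, A @ x2               # induced Gaussian means

# Affine likelihood-ratio detector phi(w) = (1/2) ln(p_mu1(w) / p_mu2(w))
Tinv = np.linalg.inv(Theta)
a = 0.5 * Tinv @ (mu1 - mu2)
b = 0.25 * (mu2 @ Tinv @ mu2 - mu1 @ Tinv @ mu1)

def phi(w):
    return w @ a + b

# Risk of this detector: F = G = exp(-delta^2 / 8)
delta2 = (mu1 - mu2) @ Tinv @ (mu1 - mu2)
risk = np.exp(-delta2 / 8)

# Decide by the sign of phi; estimate error rates under each hypothesis.
N = 50_000
w1 = rng.multivariate_normal(mu1, Theta, size=N)
w2 = rng.multivariate_normal(mu2, Theta, size=N)
err1 = np.mean(phi(w1) <= 0)            # accepted H2 while H1 is true
err2 = np.mean(phi(w2) >= 0)            # accepted H1 while H2 is true
print(f"risk bound {risk:.4f}, empirical errors {err1:.4f}, {err2:.4f}")
```

The empirical error rates fall below the risk bound $\exp\{-\delta^2/8\}$, consistent with the role of detector risks as upper bounds on testing errors.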

