
## Metropolis–Hastings Algorithm

In the original Metropolis algorithm, the test probability (jump probability) is symmetric in $\omega$ and $\omega^{\prime}$. Most common is a normal distribution centered at $\omega$, whose variance must be tuned to obtain a reasonable acceptance rate for new configurations. A generalization due to Hastings allows for asymmetric test distributions. Typically one takes a distribution $T\left(\omega, \omega^{\prime}\right)$ that is the same for all configurations $\omega^{\prime}$ that can be reached from $\omega$ [6, 7]. The test probability of the remaining configurations is set to zero. Thus, if $N$ is the number of accessible configurations, then
$$T\left(\omega, \omega^{\prime}\right)= \begin{cases}1 / N & \text { if } \omega \rightarrow \omega^{\prime} \text { is possible } \\ 0 & \text { otherwise }\end{cases}$$
We choose the acceptance rate
$$A\left(\omega, \omega^{\prime}\right)=\min \left(\frac{P_{\omega^{\prime}} T\left(\omega^{\prime}, \omega\right)}{P_{\omega} T\left(\omega, \omega^{\prime}\right)}, 1\right)$$
for which $W$ in (4.23) fulfills the condition for detailed balance. In fact, the condition
$$P_{\omega} T\left(\omega, \omega^{\prime}\right) \times \min \left(\frac{P_{\omega^{\prime}} T\left(\omega^{\prime}, \omega\right)}{P_{\omega} T\left(\omega, \omega^{\prime}\right)}, 1\right)=P_{\omega^{\prime}} T\left(\omega^{\prime}, \omega\right) \times \min \left(\frac{P_{\omega} T\left(\omega, \omega^{\prime}\right)}{P_{\omega^{\prime}} T\left(\omega^{\prime}, \omega\right)}, 1\right)$$
is fulfilled both when $P_{\omega^{\prime}} T\left(\omega^{\prime}, \omega\right)$ is larger and when it is smaller than $P_{\omega} T\left(\omega, \omega^{\prime}\right)$.
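This condition can be checked numerically for a toy two-state system; the particular values of $P$ and the asymmetric test distribution $T$ below are illustrative choices, not from the text:

```python
# Toy two-state check of detailed balance for the acceptance rate
# A(w, w') = min(P_{w'} T(w', w) / (P_w T(w, w')), 1).
# The values of P and T are illustrative choices.
P = {"a": 0.2, "b": 0.8}                    # equilibrium probabilities
T = {("a", "b"): 0.7, ("b", "a"): 0.3}      # asymmetric test distribution

def accept(w, wp):
    """Metropolis-Hastings acceptance rate A(w, wp)."""
    return min(P[wp] * T[(wp, w)] / (P[w] * T[(w, wp)]), 1.0)

# probability flow a -> b equals the reverse flow b -> a
flow_ab = P["a"] * T[("a", "b")] * accept("a", "b")
flow_ba = P["b"] * T[("b", "a")] * accept("b", "a")
assert abs(flow_ab - flow_ba) < 1e-12
```

One of the two acceptance rates is always $1$, so in each direction only one factor of $\min(\cdot,1)$ is nontrivial, which is why the two flows match.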
A good choice of the initial configuration $\omega$ may save computing time. For example, at high temperatures the degrees of freedom are uncorrelated, and we choose the variables at random; at low temperatures, in contrast, they are strongly correlated.

We now discuss a particular implementation of the algorithm for a one-dimensional quantum mechanical system discretized on a time lattice with $n$ points. We choose an initial configuration $q=\left(q_{1}, \ldots, q_{n}\right)$. The first lattice variable $q_{1}$ is altered or remains unchanged according to the following rules:

1. Suggest a provisional change of $q_{1}$ to a randomly chosen $q_{1}^{\prime}$.
2. If the action decreases, that is, $\Delta S<0$, then permanently replace $q_{1}$ by $q_{1}^{\prime}$.
3. If the action increases, choose a uniformly distributed random number $r \in[0,1]$. The suggestion $q_{1}^{\prime}$ is accepted if $\exp (-\Delta S)>r$. Otherwise the lattice variable $q_{1}$ remains unaltered.
4. Proceed with the variables $q_{2}, q_{3}, \ldots$ in the same way until all variables have been tested.
5. When the last lattice point is reached, one "sweep through the lattice" (one Monte Carlo iteration) is finished, and one starts again with the first lattice point.
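The five steps above can be sketched in Python. The uniform proposal window `delta` and the periodic boundary conditions are implementation choices of mine; note that only the terms of $S$ containing site $j$ are needed to compute $\Delta S$:

```python
import math
import random

def local_action(q, j, qj, eps, m, V):
    """Terms of the lattice action containing site j, with qj substituted
    for q[j].  Periodic boundary conditions are assumed here."""
    n = len(q)
    qm, qp = q[(j - 1) % n], q[(j + 1) % n]
    kinetic = m / (2 * eps) * ((qp - qj) ** 2 + (qj - qm) ** 2)
    return kinetic + eps * V(qj)

def sweep(q, eps, m, V, delta=0.5):
    """One Metropolis sweep: visit every lattice variable once (steps 1-5)."""
    for j in range(len(q)):
        qj_new = q[j] + random.uniform(-delta, delta)   # step 1: propose
        dS = (local_action(q, j, qj_new, eps, m, V)
              - local_action(q, j, q[j], eps, m, V))
        # steps 2-3: accept if dS < 0, otherwise with probability exp(-dS)
        if dS < 0 or math.exp(-dS) > random.random():
            q[j] = qj_new
    return q

# usage with the quartic potential V(q) = mu q^2 + lambda q^4
random.seed(0)
V = lambda x: 1.0 * x**2 + 0.1 * x**4
q = [0.0] * 32
for _ in range(100):
    sweep(q, eps=0.1, m=1.0, V=V)
```

A cold start (`q = [0.0] * 32`) is used here; per the remark above, a random ("hot") start would suit high temperatures better.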

## Heat Bath Algorithm

For the heat bath algorithm, the transition probability $W\left(\omega, \omega^{\prime}\right)$ depends only on the final state $\omega^{\prime}$ such that the condition of detailed balance (4.21) implies $W\left(\omega, \omega^{\prime}\right) \propto$ $P_{\omega^{\prime}}$. The normalization conditions for $P$ and $W$ lead to
$$W\left(\omega, \omega^{\prime}\right)=P_{\omega^{\prime}}$$
The algorithm is particularly useful when the equilibrium distribution $P$ can be integrated or summed up easily. Let us first apply the heat bath algorithm to estimate one-dimensional integrals of the form $\langle O\rangle=\int O(x) P(x)\, \mathrm{d} x$ with fixed $P$ and varying $O$. Thus we need random numbers distributed according to $P(x)$. To this end we first generate uniformly distributed random numbers $y_{i}$ on the unit interval and consider the preimages $\left\{F^{-1}\left(y_{i}\right)\right\}$ of these numbers. Here $F$ denotes the monotonically increasing anti-derivative of the probability density,
$$F(x)=\int_{-\infty}^{x} P(u) \mathrm{d} u \in[0,1]$$
Because of the identity
$$y_{2}-y_{1}=\int_{F^{-1}\left(y_{1}\right)}^{F^{-1}\left(y_{2}\right)} P(u) \mathrm{d} u,$$
these preimages are distributed according to $P$. This is made clear in Fig. 4.2.
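A minimal sketch of this inverse-transform recipe, using the exponential density $P(x)=\mathrm{e}^{-x}$ on $x \geq 0$ (my choice of example, since its anti-derivative $F(x)=1-\mathrm{e}^{-x}$ inverts in closed form):

```python
import math
import random

def sample_exponential():
    """Draw y uniform on [0,1) and return the preimage F^{-1}(y)
    for P(x) = exp(-x), where F(x) = 1 - exp(-x)."""
    y = random.random()
    return -math.log(1.0 - y)      # F^{-1}(y)

random.seed(0)
xs = [sample_exponential() for _ in range(100_000)]
mean = sum(xs) / len(xs)           # close to 1, the mean of P(x) = exp(-x)
```

The same pattern works for any $P$ whose anti-derivative can be inverted, analytically or numerically.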
### Box–Muller Method
This method was introduced by George Box and Mervin Muller [8] and nicely illustrates how to extend the previous algorithm to higher dimensions. Uniformly distributed random points $\boldsymbol{y}$ in the unit square are mapped into normally distributed random numbers $\boldsymbol{x} \in \mathbb{R}^{2}$ with mean $\overline{\boldsymbol{x}}$ and variance $\sigma^{2}$. Thus we demand
$$\mathrm{d}^{2} y=\operatorname{det}\left(\frac{\partial y_{i}}{\partial x_{j}}\right) \mathrm{d}^{2} x=P(\boldsymbol{x})\, \mathrm{d}^{2} x, \quad P(\boldsymbol{x})=\frac{1}{2 \pi \sigma^{2}}\, \mathrm{e}^{-(\boldsymbol{x}-\overline{\boldsymbol{x}})^{2} / 2 \sigma^{2}}$$
We introduce polar coordinates $x_{1}-\bar{x}_{1}=r \cos \varphi$ and $x_{2}-\bar{x}_{2}=r \sin \varphi$ and set $\varphi=2 \pi y_{2}$. Assuming that $r$ only depends on $y_{1}$, we arrive at
$$\mathrm{d}^{2} y=\frac{\mathrm{d} y_{1}}{\mathrm{~d} r} \mathrm{~d} r \frac{\mathrm{d} \varphi}{2 \pi}=\frac{1}{2 \pi \sigma^{2}} \mathrm{e}^{-r^{2} / 2 \sigma^{2}} r \mathrm{~d} r \mathrm{~d} \varphi \quad \text { or } \quad \frac{\mathrm{d} y_{1}}{\mathrm{~d} r}=\frac{r}{\sigma^{2}} \mathrm{e}^{-r^{2} / 2 \sigma^{2}}$$
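Integrating the last relation from $0$ to $r$ gives $y_{1}=1-\mathrm{e}^{-r^{2} / 2 \sigma^{2}}$, i.e. $r=\sigma \sqrt{-2 \ln \left(1-y_{1}\right)}$, which yields the following sketch (the function name is mine):

```python
import math
import random

def box_muller(mean=(0.0, 0.0), sigma=1.0):
    """Map a uniform point (y1, y2) in the unit square to a pair of
    normally distributed numbers with the given mean and variance sigma^2.
    Uses r = sigma * sqrt(-2 ln(1 - y1)) and phi = 2 pi y2."""
    y1, y2 = random.random(), random.random()
    r = sigma * math.sqrt(-2.0 * math.log(1.0 - y1))
    phi = 2.0 * math.pi * y2
    return mean[0] + r * math.cos(phi), mean[1] + r * math.sin(phi)

random.seed(1)
pairs = [box_muller() for _ in range(100_000)]
m1 = sum(p[0] for p in pairs) / len(pairs)        # sample mean, close to 0
v1 = sum(p[0] ** 2 for p in pairs) / len(pairs)   # sample variance, close to 1
```

Since $y_{1}$ is uniform on $[0,1)$, one may equivalently replace $1-y_{1}$ by $y_{1}$, as is often done in the literature.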

## The Anharmonic Oscillator

We return to one-dimensional quantum mechanical systems at imaginary time, discretized on a time lattice. They are characterized by their Euclidean lattice action
$$S(q)=\varepsilon \sum_{j=0}^{n-1}\left\{\frac{m}{2} \frac{\left(q_{j+1}-q_{j}\right)^{2}}{\varepsilon^{2}}+V\left(q_{j}\right)\right\}$$
In particular we shall consider the anharmonic oscillator with quartic potential
$$V(q)=\mu q^{2}+\lambda q^{4}$$
in more detail. The choice of the number $n$ of lattice points and of the lattice constant $\varepsilon$ is limited mainly by two aspects:

• $\varepsilon$ should be sufficiently small to be near the continuum limit $\varepsilon \rightarrow 0$.
• The quantities of interest should fit into the interval of length $n \varepsilon$. For instance, the width of the ground state should be less than $n \varepsilon$.

If $\lambda_{0}$ is a typical length scale of the system at hand, then the quantities $n$ and $\varepsilon$ should satisfy constraints of the type
$$\varepsilon \lesssim \frac{\lambda_{0}}{10} \quad \text { and } \quad n \varepsilon \gtrsim 10 \lambda_{0}$$
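The lattice action for the quartic potential can be evaluated directly; below is a minimal sketch assuming periodic boundary conditions $q_{n}=q_{0}$ (an assumption of mine, since the text does not fix the boundary):

```python
def lattice_action(q, eps, m, mu, lam):
    """Euclidean lattice action S(q) for V(q) = mu q^2 + lambda q^4,
    assuming periodic boundary conditions q_n = q_0."""
    n = len(q)
    S = 0.0
    for j in range(n):
        dq = q[(j + 1) % n] - q[j]
        S += eps * (m / 2 * (dq / eps) ** 2 + mu * q[j] ** 2 + lam * q[j] ** 4)
    return S

# e.g. a constant configuration has vanishing kinetic term:
S0 = lattice_action([0.5] * 10, eps=0.1, m=1.0, mu=1.0, lam=0.1)
# S0 = n * eps * (mu * 0.25 + lam * 0.0625) = 0.25625
```

With the constraints above, $n$ and $\varepsilon$ would be chosen so that $\varepsilon \lesssim \lambda_{0} / 10$ and $n \varepsilon \gtrsim 10 \lambda_{0}$ for the relevant length scale $\lambda_{0}$.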

Another problem concerns the size of statistical fluctuations in any Monte Carlo simulation. The relative standard deviation of a random variable $O$ is
$$\Delta_{O}=\sqrt{\frac{\left\langle O^{2}\right\rangle-\langle O\rangle^{2}}{\langle O\rangle^{2}}} \propto(\text {number of lattice points})^{-1 / 2}$$
As an estimate for the expectation value $\langle O\rangle$, we take
$$\bar{O}=\frac{1}{M} \sum_{\mu=1}^{M} O\left(q_{\mu}\right)$$
with Boltzmann-distributed configurations $\boldsymbol{q}_{\mu}$. Depending on the initial configuration, the Markov chain may need some "time" to equilibrate. In the simulations of the anharmonic oscillator presented below, equilibrium is reached after approximately $10-100$ sweeps through the lattice. In addition, since configurations of successive sweeps are correlated, only every $M_{A}$'th sweep is used to estimate expectation values. The number $M_{A}$ should be larger than the relevant autocorrelation time – the time over which the values $O\left(\boldsymbol{q}_{\mu}\right)$ are correlated. Different random variables may have vastly different autocorrelation times. As a general rule, they are large for spatially averaged quantities.
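The equilibration discard and the thinning by every $M_{A}$'th sweep can be sketched as follows; the measurement series here is synthetic, purely to illustrate the bookkeeping:

```python
def thinned_mean(measurements, n_equil=100, m_a=10):
    """Estimate <O> from per-sweep measurements O(q_mu): discard the first
    n_equil sweeps (equilibration), then keep only every m_a'th sweep to
    reduce the effect of autocorrelation."""
    kept = measurements[n_equil::m_a]
    return sum(kept) / len(kept)

# toy data: a correlated sequence relaxing toward 1.0 (illustrative only)
data = [1.0 - 0.9 ** t for t in range(1000)]
est = thinned_mean(data, n_equil=100, m_a=10)   # close to 1.0
```

In a real simulation `measurements` would hold one value of $O\left(\boldsymbol{q}_{\mu}\right)$ per sweep, and $M_{A}$ would be chosen larger than the measured autocorrelation time of $O$.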
