I tried the following:
\documentclass{article}
\usepackage{amsmath}%amssymb,
\usepackage{bm}
\begin{document}
Outputs to machine learning models are also often represented as vectors. For instance, consider an object recognition model that takes an image as input and emits a set of numbers indicating the probabilities that the image contains a dog, human, or cat, respectively. The output of such a model is a three element vector $\vec{y} = \begin{bmatrix}y_{0}\\y_{1}\\y_{2}\\\dfrac{1}{2}\end{bmatrix}$, where the number $y_{0}$ denotes the probability that the image contains a dog, $y_{1}$ denotes the~probability that the image contains a human, and $y_{2}$ denotes the probability that the image contains a cat. Figure~\ref{fig:vec_out} shows some possible input images and corresponding output vectors.
\begin{align*}
p\left( x \right)
&= \overbrace{ \pi_{1}}^{0.33}\mathcal{N}\left( \vec{x}; \, \overbrace{ \vec{\mu}_{1} }^{\begin{bmatrix}
152\\55
\end{bmatrix}}, \overbrace{ \bm{\Sigma}_{1}}^{ \begin{bmatrix}
20 &0\\0 &28
\end{bmatrix} } \right)
+ \overbrace{ \pi_{2} }^{0.33} \mathcal{N}\left(\vec{x}; \, \overbrace{ \vec{\mu}_{2} }^{ \begin{bmatrix}
175\\70
\end{bmatrix} }, \overbrace{ \bm{\Sigma}_{2}}^{ \begin{bmatrix}
35 & 39\\39 & 51
\end{bmatrix} } \right)\\
&+ \overbrace{ \pi_{3} }^{0.33} \mathcal{N}\left(\vec{x}; \, \overbrace{ \vec{\mu}_{3} }^{ \begin{bmatrix}
135\\40
\end{bmatrix} }, \overbrace{ \bm{\Sigma}_{3}}^{ \begin{bmatrix}
10 & 0\\0 & 10
\end{bmatrix} } \right)
\end{align*}
\end{document}
The output looks like this:
How can I get even spacing above and below the matrices? Any advice would be appreciated.
It would also help if someone could explain why this happens in the first place.
Answer 1
I'm not sure whether this is exactly what you are after, but you can wrap the contents of each pair of parentheses in \vcenter{\hbox{$ ... $}}:
\documentclass{article}
\usepackage{amsmath}%amssymb,
\usepackage{bm}
\begin{document}
\begin{align*}
p\left( x \right)
&= \overbrace{ \pi_{1}}^{0.33}\mathcal{N}\left(\vcenter{\hbox{$ \vec{x}; \, \overbrace{ \vec{\mu}_{1} }^{\begin{bmatrix}
152\\55
\end{bmatrix}}, \overbrace{ \bm{\Sigma}_{1}}^{ \begin{bmatrix}
20 &0\\0 &28
\end{bmatrix} } $}}\right)
+ \overbrace{ \pi_{2} }^{0.33} \mathcal{N}\left(\vcenter{\hbox{$ \vec{x}; \, \overbrace{ \vec{\mu}_{2} }^{ \begin{bmatrix}
175\\70
\end{bmatrix} }, \overbrace{ \bm{\Sigma}_{2}}^{ \begin{bmatrix}
35 & 39\\39 & 51
\end{bmatrix} } $}}\right)\\
&+ \overbrace{ \pi_{3} }^{0.33} \mathcal{N}\left(\vcenter{\hbox{$ \vec{x}; \, \overbrace{ \vec{\mu}_{3} }^{ \begin{bmatrix}
135\\40
\end{bmatrix} }, \overbrace{ \bm{\Sigma}_{3}}^{ \begin{bmatrix}
10 & 0\\0 & 10
\end{bmatrix} } $}}\right)
\end{align*}
\end{document}
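As to why the original output is lopsided: \left( ... \right) picks delimiters tall enough to cover everything between them and centers them on the math axis. The \overbrace labels add height only above the expression, so the parentheses must extend just as far below the axis to stay centered, which leaves the empty space under the matrices. \vcenter re-centers its contents on the math axis, so the delimiters hug the material evenly. A minimal sketch of the two behaviors:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
% Left: the overbrace label adds height above only, so the symmetric
% \left( \right) delimiters open up blank space below.
\left( \overbrace{x}^{\begin{matrix}1\\2\end{matrix}} \right)
\qquad
% Right: \vcenter re-centers the material on the math axis.
\left( \vcenter{\hbox{$ \overbrace{x}^{\begin{matrix}1\\2\end{matrix}} $}} \right)
\]
\end{document}
```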
Answer 2
You have already received an excellent answer, but I would heartily recommend doing away with the \overbrace constructions altogether and stating the parameter values separately:
\documentclass{article}
\usepackage{amsmath}
\usepackage{bm}
\begin{document}
Outputs to machine learning models are also often represented as vectors.
For instance, consider an object recognition model that takes an image as
input and emits a set of numbers indicating the probabilities that the
image contains a dog, human, or cat, respectively. The output of such
a model is a three element vector
$\vec{y} = [\begin{matrix}y_{0} & y_{1} & y_{2} & \frac{1}{2}\end{matrix}]^T$,
where the number $y_{0}$ denotes the probability that the image contains a dog,
$y_{1}$ denotes the~probability that the image contains a human, and $y_{2}$
denotes the probability that the image contains a cat. Figure~\ref{fig:vec_out}
shows some possible input images and corresponding output vectors.
\begin{gather*}
p(x) = \pi_{1} \mathcal{N} ( \vec{x}; \, \vec{\mu}_{1}, \bm{\Sigma}_{1})
+ \pi_{2} \mathcal{N} ( \vec{x}; \, \vec{\mu}_{2}, \bm{\Sigma}_{2})
+ \pi_{3} \mathcal{N} ( \vec{x}; \, \vec{\mu}_{3}, \bm{\Sigma}_{3})
\\[1ex]
\begin{aligned}
\pi_1&=0.33 & \pi_2&=0.33 & \pi_3&=0.33
\\
\vec{\mu}_{1}&=\begin{bmatrix} 152 \\ 55 \end{bmatrix}, &
\vec{\mu}_{2}&=\begin{bmatrix} 175 \\ 70 \end{bmatrix}, &
\vec{\mu}_{3}&=\begin{bmatrix} 135 \\ 40 \end{bmatrix}
\\
\bm{\Sigma}_{1}&=\begin{bmatrix} 20 & 0 \\ 0 & 28 \end{bmatrix}, &
\bm{\Sigma}_{2}&=\begin{bmatrix} 35 & 39 \\ 39 & 51 \end{bmatrix}, &
\bm{\Sigma}_{3}&=\begin{bmatrix} 10 & 0 \\ 0 & 10 \end{bmatrix}
\end{aligned}
\end{gather*}
\end{document}
Note that writing the column vector as the transpose of a row vector avoids opening up a gap between the lines of the surrounding paragraph.
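The difference can be seen in isolation (a minimal sketch, reusing the $\vec{y}$ notation from the question):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A true inline column vector towers over the text line and forces
% extra leading above and below it in the paragraph.
Inline column: $\vec{y} = \begin{bmatrix} y_0 \\ y_1 \\ y_2 \end{bmatrix}$.

% The transposed row vector is no taller than the surrounding text.
Transposed row: $\vec{y} = [\begin{matrix} y_0 & y_1 & y_2 \end{matrix}]^T$.
\end{document}
```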
An alternative arrangement:
\documentclass{article}
\usepackage{amsmath}
\usepackage{bm}
\begin{document}
Outputs to machine learning models are also often represented as vectors.
For instance, consider an object recognition model that takes an image as
input and emits a set of numbers indicating the probabilities that the
image contains a dog, human, or cat, respectively. The output of such
a model is a three element vector
$\vec{y} = [\begin{matrix}y_{0} & y_{1} & y_{2} & \frac{1}{2}\end{matrix}]^T$,
where the number $y_{0}$ denotes the probability that the image contains a dog,
$y_{1}$ denotes the~probability that the image contains a human, and $y_{2}$
denotes the probability that the image contains a cat. Figure~\ref{fig:vec_out}
shows some possible input images and corresponding output vectors.
\begin{alignat*}{3}
p(x) = \pi_{1} &\mathcal{N} ( \vec{x}; \, \vec{\mu}_{1}, \bm{\Sigma}_{1})
&{}+ \pi_{2} &\mathcal{N} ( \vec{x}; \, \vec{\mu}_{2}, \bm{\Sigma}_{2})
&{}+ \pi_{3} &\mathcal{N} ( \vec{x}; \, \vec{\mu}_{3}, \bm{\Sigma}_{3})
\\[1ex]
\pi_1&=0.33 & \pi_2&=0.33 & \pi_3&=0.33
\\
\vec{\mu}_{1}&=\begin{bmatrix} 152 \\ 55 \end{bmatrix}, &
\vec{\mu}_{2}&=\begin{bmatrix} 175 \\ 70 \end{bmatrix}, &
\vec{\mu}_{3}&=\begin{bmatrix} 135 \\ 40 \end{bmatrix}
\\
\bm{\Sigma}_{1}&=\begin{bmatrix} 20 & 0 \\ 0 & 28 \end{bmatrix}, &
\bm{\Sigma}_{2}&=\begin{bmatrix} 35 & 39 \\ 39 & 51 \end{bmatrix}, &
\bm{\Sigma}_{3}&=\begin{bmatrix} 10 & 0 \\ 0 & 10 \end{bmatrix}
\end{alignat*}
\end{document}
For completeness, here is a way to carry out the task as originally proposed. I have kept the big inline column vector to show why it really is a bad idea; a comparison of the outputs leaves no room for doubt.
\documentclass{article}
\usepackage{amsmath}
\usepackage{bm}
\usepackage{delarray}
\begin{document}
Outputs to machine learning models are also often represented as vectors.
For instance, consider an object recognition model that takes an image as
input and emits a set of numbers indicating the probabilities that the
image contains a dog, human, or cat, respectively. The output of such
a model is a three element vector
$\vec{y} = \begin{bmatrix}y_{0} \\ y_{1} \\ y_{2} \\ \dfrac{1}{2}\end{bmatrix}$,
where the number $y_{0}$ denotes the probability that the image contains a dog,
$y_{1}$ denotes the~probability that the image contains a human, and $y_{2}$
denotes the probability that the image contains a cat. Figure~\ref{fig:vec_out}
shows some possible input images and corresponding output vectors.
\begin{equation*}
\begin{aligned}
p(x)
&= \overbrace{ \pi_{1}}^{0.33}\mathcal{N}
\begin{array}[b]({c})
\vec{x}; \, \overbrace{ \vec{\mu}_{1} }^{\begin{bmatrix}
152\\55
\end{bmatrix}}, \overbrace{ \bm{\Sigma}_{1}}^{ \begin{bmatrix}
20 &0\\0 &28
\end{bmatrix} }
\end{array}
+ \overbrace{ \pi_{2} }^{0.33} \mathcal{N}
\begin{array}[b]({c})
\vec{x}; \, \overbrace{ \vec{\mu}_{2} }^{ \begin{bmatrix}
175\\70
\end{bmatrix} }, \overbrace{ \bm{\Sigma}_{2}}^{ \begin{bmatrix}
35 & 39\\39 & 51
\end{bmatrix} }
\end{array}\\
&+ \overbrace{ \pi_{3} }^{0.33} \mathcal{N}
\begin{array}[b]({c})\vec{x}; \, \overbrace{ \vec{\mu}_{3} }^{ \begin{bmatrix}
135\\40
\end{bmatrix} }, \overbrace{ \bm{\Sigma}_{3}}^{ \begin{bmatrix}
10 & 0\\0 & 10
\end{bmatrix} }
\end{array}
\end{aligned}
\end{equation*}
\end{document}
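The trick in the example above is the delarray package's extended array syntax, \begin{array}[pos](<preamble>): the delimiters are written around the column preamble and are attached to the array itself, while the [b] option aligns the array on its bottom row instead of its vertical center. A minimal sketch of the two alignments:

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{delarray}
\begin{document}
\[
% delarray: parentheses attached to the array, bottom row on the baseline
f\begin{array}[b]({c}) x \\ y \end{array}
\quad\text{vs.}\quad
% plain amsmath: the array is centered on the math axis
f\left(\begin{array}{c} x \\ y \end{array}\right)
\]
\end{document}
```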
Answer 3
It suffices to replace the three instances of \left( ... \right) with \bigl( ... \bigr). Note that since the second arguments of the large \overbrace constructs are explanatory rather than definitional, they need not be encompassed by the (not-that-tall) parentheses.
Oh, and unless you want to draw a great deal of attention to the definition of \vec{y} in the paragraph that precedes the align* environment, I would write it as a row vector rather than a column vector.
\documentclass{article}
\usepackage{amsmath,amssymb,bm}
\begin{document}
Outputs to machine learning models are also often represented as vectors. For instance, consider an object recognition model that takes an image as input and emits a set of numbers indicating the probabilities that the image contains a dog, human, or cat, respectively. The output of such a model is a three element vector
$\vec{y} = \begin{bmatrix} y_{0} & y_{1} & y_{2} \end{bmatrix}'$,
where the number $y_{0}$ denotes the probability that the image contains a dog, $y_{1}$ denotes the~probability that the image contains a human, and $y_{2}$ denotes the probability that the image contains a cat. Figure~\ref{fig:vec_out} shows some possible input images and corresponding output vectors.
\begin{align*}
p(x)
&=\overbrace{ \pi_{1}\mathstrut}^{0.33}\mathcal{N}
\bigl( \vec{x};
\overbrace{ \vec{\mu}_{1} }^{
\begin{bmatrix} 152\\55 \end{bmatrix}} ,
\overbrace{ \bm{\Sigma}_{1}}^{
\begin{bmatrix} 20 &0\\0 &28 \end{bmatrix} }
\bigr)
+\overbrace{ \pi_{2}\mathstrut}^{0.33} \mathcal{N}
\bigl(\vec{x};
\overbrace{ \vec{\mu}_{2} }^{
\begin{bmatrix} 175\\70 \end{bmatrix} },
\overbrace{ \bm{\Sigma}_{2}}^{
\begin{bmatrix} 35 & 39\\39 & 51 \end{bmatrix} }
\bigr)
\\[2\jot] % insert a bit more vertical whitespace
&\quad+\overbrace{ \pi_{3}\mathstrut}^{0.33} \mathcal{N}
\bigl(\vec{x};
\overbrace{ \vec{\mu}_{3} }^{
\begin{bmatrix} 135\\40 \end{bmatrix} },
\overbrace{ \bm{\Sigma}_{3}}^{
\begin{bmatrix} 10 & 0\\0 & 10 \end{bmatrix} }
\bigr)
\end{align*}
\end{document}