1. displaymath: a single-line displayed-math environment, without an equation number.
\begin{displaymath}
This\ is\ the\ displaymath\ environment.\ I\ don't\ have\ a\ number
\end{displaymath}
2. equation: a single-line displayed-math environment, numbered sequentially through the document.
\begin{equation}
This\ is\ the\ equation\ environment.\ I\ have\ a\ number
\end{equation}
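Since equation numbers are assigned automatically, they are normally referenced with \label and \ref rather than typed by hand; a minimal sketch (the label name eq:emc is made up):
\begin{equation}
E = mc^2 \label{eq:emc}
\end{equation}
Equation~\ref{eq:emc} refers back to the number above.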
3. itemize: an unordered-list environment; items are marked with bullets.
\begin{itemize}
\item This is
\item itemize environment
\end{itemize}
4. enumerate: an ordered-list environment; items are numbered.
\begin{enumerate}
\item This is
\item enumerate environment
\end{enumerate}
5. quotation: a block-quotation environment with a wide indent; unlike verbatim, commands inside are still interpreted.
\begin{quotation}
This is the quotation environment. I am set with a big indent.
\end{quotation}
6. verbatim: a verbatim environment; the input is printed literally in a typewriter font, with no commands interpreted.
\begin{verbatim}
This is the verbatim environment. I am printed exactly as typed.
\end{verbatim}
7. tabular: the basic table environment.
\begin{tabular}{l|c|c}
Aloha & This is the tabular environment & I can have many rows\\
\hline
I am BJ & Hello World & I love you\\
& I can have more rows & I can even contain $\int$
\end{tabular}
The column specification in braces, l|c|c, declares three columns: l makes a column left-aligned, c centered, and r right-aligned, while | draws a vertical rule between columns. & separates cells, \\ ends a row, and \hline draws a horizontal rule.
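The r specifier does not appear in the example above; a small sketch with made-up contents (r is handy for aligning numbers):
\begin{tabular}{r|l}
1.5 & left-aligned text\\
123.45 & more text
\end{tabular}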
8. description: a labeled-list environment; each \item takes its label in square brackets.
\begin{description}
\item[This is the description environment.]
\item[It seems cool.]
\end{description}
9. matrix: a matrix environment; it requires the amsmath package and must be enclosed in math delimiters. The compiler treats the whole matrix as a single math symbol.
$$
\begin{matrix}
I& am& a\\
Matrix& I& am\\
seen& as& a\ symbol
\end{matrix}
$$
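amsmath also provides delimited variants of matrix that add the brackets automatically, e.g. pmatrix for parentheses and bmatrix for square brackets:
$$
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
\qquad
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
$$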
10. table: a floating-table environment; as a float, its placement on the page is more flexible. The optional argument [hbt] suggests placing it here, at the bottom, or at the top of a page.
\begin{table}[hbt]
\begin{tabular}{l|cc}
& I & just\\
\hline
& like & tabular\\
& environment & but more complete
\end{tabular}
\caption{This is a floating table.}
\end{table}
11. Title matter. \title, \author, and \date record the document metadata (typically in the preamble); \maketitle, placed in the document body, prints the title block.
\title{This is a preamble}
\author{Chester}
\date{\today}
\maketitle
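For context, in a complete document these commands are split between the preamble and the body; a minimal sketch:
\documentclass{article}  % the preamble runs from here to \begin{document}
\title{This is a preamble}
\author{Chester}
\date{\today}
\begin{document}
\maketitle  % placed in the body; prints the title block from the data above
Body text goes here.
\end{document}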
12. figure: a floating-figure environment (\includegraphics requires the graphicx package).
\begin{figure}[hbt]
\centering
\includegraphics{lenna.png}
\caption{lenna}
\end{figure}
13. A more direct way to place an image, which can also center and scale it; requires the graphicx package.
{\centering\includegraphics[scale=0.85]{test.png}
}\\ Note that a blank line must follow here; this is related to how TeX handles paragraph alignment, and is left for later.
14. A table example with more detail, including merged cells via \multicolumn and partial rules via \cline.
\begin{tabular}{cc|c|c|}
\multicolumn{2}{c}{} & \multicolumn{2}{|c|}{Predicted Classes}\\ \cline{3-4}
\multicolumn{2}{c}{} & \multicolumn{1}{|c|}{zero} & \multicolumn{1}{c|}{nonzero}\\
\hline
Real & zero & 975 & 5\\ \cline{2-4}
Class & nonzero & 53 & 927\\ \hline
\end{tabular}\\
15. A pseudocode example (Naive Bayes); requires the algorithm and algorithmic packages.
\begin{algorithm}
\caption{Training Naive Bayes Classifier}
\label{alg:train_bayes}
\textbf{Input:} The training set with labels, $\mathcal{D}=\{(\mathbf{x}_i,y_i)\}$.
\begin{algorithmic}[1]
\STATE $\mathcal{V}\leftarrow$ the set of distinct words and other tokens found in $\mathcal{D}$
\FOR{each target value $c$ in the labels set $\mathcal{C}$}
\STATE $\mathcal{D}_c\leftarrow$ the training samples whose labels are $c$
\STATE $P(c)\leftarrow\frac{|\mathcal{D}_c|}{|\mathcal{D}|}$
\STATE $T_c\leftarrow$ a single document formed by concatenating all training samples in $\mathcal{D}_c$
\STATE $n_c\leftarrow |T_c|$
\FOR{each word $w_k$ in the vocabulary $\mathcal{V}$}
\STATE $n_{c,k}\leftarrow$ the number of times the word $w_k$ occurs in $T_c$
\STATE $P(w_k|c)\leftarrow\frac{n_{c,k}+1}{n_c+|\mathcal{V}|}$
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}



