Maximum Likelihood

Statistics with Prof. Liu, Sept 6, 2024

Statistics has two main streams: frequentist and Bayesian.

Maximum likelihood is a frequentist method.

The likelihood function is powerful: it contains all the information the data carry. We don't need anything else, just this function.

The likelihood function is the joint probability of all the data. Likelihood is simply the probability of the data, and all the information in the data is contained in this likelihood function!

We assume the data are iid. In statistics an iid sample is called a "random sample". It means, e.g., that the probabilities are the same on every draw of a ball.

That joint probability is just the product of the individual P(data | parameter) terms.
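In symbols, writing $x_1, \dots, x_n$ for the data points and $\theta$ for the parameter (notation added here for clarity):

$$L(\theta) = P(x_1, \dots, x_n \mid \theta) = \prod_{i=1}^{n} P(x_i \mid \theta)$$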

It is the data's probability, not the parameters' probability!

Example: there are two boxes. One holds 5 black balls and 5 white balls; the other holds 9 black balls and 1 white ball. We draw a ball 4 times, with replacement, and all 4 draws come up black. Which box were we most likely drawing from?

For box 1: P(4 black) = P(black)^4 = 0.5^4 = 0.0625.

For box 2: P(4 black) = P(black)^4 = 0.9^4 = 0.6561. The iid assumption is exactly what lets us multiply the per-draw probabilities like this.

The latter probability is larger, so we choose box 2.

In binomial terms, each box corresponds to a parameter setting: box 1 is Binomial(n = 4, p = 0.5); box 2 is Binomial(n = 4, p = 0.9).
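A minimal sketch of this comparison in Python (the box probabilities are as above; variable names are illustrative):

```python
# Likelihood of observing 4 black balls in 4 draws with replacement,
# under each box's per-draw probability of black.
p_black = {"box1": 0.5, "box2": 0.9}
n_draws = 4

# iid draws: the joint probability is the product of per-draw probabilities.
likelihood = {box: p ** n_draws for box, p in p_black.items()}
print(likelihood)  # box1: 0.0625, box2: ~0.6561

# Maximum likelihood: pick the box under which the data are most probable.
print(max(likelihood, key=likelihood.get))  # box2
```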

Example: now there are infinitely many boxes, whose proportions of black balls range from 0 to 1. We draw 4 times and all 4 are black. Which box did we draw from?

Now n = 4 as before, but we don't know p. We want to estimate p; once we know p, we know which box.

**We still choose the box that gives the data the highest probability: we choose the p that maximizes P(data | p), i.e., P(data | box).**
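Concretely, for 4 black draws out of 4 the likelihood as a function of p is

$$L(p) = p^4, \qquad 0 \le p \le 1,$$

which is increasing in p, so it is maximized at $\hat{p} = 1$: the all-black box. More generally, for k black balls in n draws, maximizing $L(p) = p^k (1-p)^{n-k}$ gives the familiar estimate $\hat{p} = k/n$.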

But what is the probability that box 2 is actually the box I drew from? And what is that probability for box 1?

These likelihood values are not the probabilities of the two boxes! (Indeed, 0.0625 + 0.6561 = 0.7186, so they don't even sum to 1.) It would feel more intuitive to decide based on the boxes' probabilities, the way we compare a 40% probability of rain against a 60% probability of no rain.

Still, the logic holds: we are choosing the parameter that makes the observed data most likely.

A likelihood value here is just a probability, the probability of the data, so it lies between 0 and 1.

**Applied to scientific methodology: we can measure the distance between a theory and real-world data, i.e., judge whether a theory is good or bad, using likelihood.**
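A toy illustration of that idea (two hypothetical "theories" about a coin, scored by log-likelihood on made-up data):

```python
import numpy as np

flips = np.array([1, 1, 0, 1, 1, 0, 1, 1])  # made-up data: 1 = heads

def log_likelihood(p, flips):
    # log P(data | theory): sum of per-flip log-probabilities under iid.
    return np.sum(np.where(flips == 1, np.log(p), np.log(1 - p)))

# Theory A says the coin is fair; theory B says P(heads) = 0.75.
for name, p in [("fair coin", 0.5), ("p = 0.75", 0.75)]:
    print(name, log_likelihood(p, flips))  # higher = closer to the data
```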


Statistics does inference: estimation and prediction.

Estimation has two categories: 1. parametric, where we assume we know the form of the population distribution and only estimate its parameters; 2. nonparametric, where we don't even know the population distribution.
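For category 1, a minimal sketch of parametric estimation by maximum likelihood (the normal population and all numbers here are illustrative assumptions, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=1000)  # pretend this is observed data

# Assuming the population is Normal(mu, sigma), the maximum likelihood
# estimates are the sample mean and the 1/n sample standard deviation.
mu_hat = data.mean()
sigma_hat = data.std()  # ddof=0 by default, which is the MLE
print(mu_hat, sigma_hat)  # should be close to the true 5.0 and 2.0
```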

If we then use the fitted model on new data, that is prediction.

**Prediction error is larger than estimation error.** Estimation error is just the RSS, the residual sum of squares, on the data used to fit the model. For prediction, a new dataset brings its own new noise on top of the model's original RSS.
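A minimal simulation of this claim (ordinary least squares on synthetic data; the data-generating line and all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    x = rng.uniform(0, 10, n)
    y = 2.0 * x + 1.0 + rng.normal(0, 1.0, n)  # linear signal plus noise
    return x, y

x_train, y_train = make_data(100)
x_new, y_new = make_data(100)  # fresh data brings fresh noise

a, b = np.polyfit(x_train, y_train, 1)  # fit y = a*x + b by least squares

rss_train = np.sum((y_train - (a * x_train + b)) ** 2)  # estimation error
rss_new = np.sum((y_new - (a * x_new + b)) ** 2)        # prediction error
print(rss_train, rss_new)  # the prediction RSS is typically the larger one
```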


In logistic regression, linear regression (on class indicators), and linear discriminant analysis, the conditional class probabilities sum to 1, so they act as posterior probabilities P(class | data). We can just compare them directly and use the Bayes optimal classifier. This is not a likelihood comparison.
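A sketch of that direct comparison for a binary logistic model (the weights and the input are hypothetical, just to show the decision rule):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted logistic-regression weights and one new observation.
w, b = np.array([0.8, -1.2]), 0.3
x = np.array([1.5, 0.4])

p_class1 = sigmoid(w @ x + b)  # P(class = 1 | x)
p_class0 = 1.0 - p_class1      # the two posteriors sum to 1

# Bayes optimal rule: pick the class with the larger posterior probability.
prediction = 1 if p_class1 > p_class0 else 0
print(p_class0, p_class1, prediction)
```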
