Earlier, the LeNet with combined macro and fine features broke 70 points; today we broke 71. Not easy at all!
The initialization bug in the 70-point version:
if (prelayer_map_count == 12 && map_count == 16 && (is_pooling == false)) // c3 layer
{ init_kernel新16和1(layer->kernel[valued_index].weight, 5 * 5); } else ...
Changed to: init_kernel新121(layer->kernel[valued_index].weight, 5 * 5);
After the change it still could not get past 70 points, which was puzzling!
In VGG we kept the macro-fine combination but corrected this bug, and still reached 71!
So would leaving it unfixed score even higher? No! Hence the puzzlement!
OK, for this macro-fine VGG version, let's look at the log first:
17:06 2025/3/11
Three-color scheme plus opening 3*3 convolutions in place of 5*5 works well; 50k training iterations, 39.7
250k training iterations, 61.8, 57.4; 150k iterations, above 57.7
Saved a copy; planning to try bringing in contours.
20:42 2025/3/11 150k iterations, above 59.6
Contours are in and the effect is good; out of memory, so no test; 50k training iterations, 48.3
250k training iterations, 61.3
Trained 350k iterations, 64.4 points
Saved a copy. Best score today.
11:13 2025/3/14 Yesterday's score was 66.11; using x64, memory problem solved, testing normal, 60.69
11:14 2025/3/14
Today changed dataG to contour mirroring; worse than yesterday by 58.76 - 57.91 = 0.85 or so.
Saving a copy first; planning to add one more image, bringing the inputs to five.
11:12 2025/3/16
Base inputs changed to 5, rg361 plus a blank board; training reached 66.4, test reached 60.46, then dropped to 59.99
Good enough, saving a copy
16:59 2025/7/14
This random noise made history: 250k training iterations, test 60.5; 500k, test 65. Saved a copy
Is buying a computer a reward to myself?
7:26 2025/7/15
This is a very good version; starting to try out the idea of momentum
13:52 2025/7/22
This is the successful VGG experiment version; saved a copy
14:01 2025/11/26
After the changes, let's see whether it can top 70 points
14:42 2025/11/27
Ran into a momentum failure; with c31 present things are normal. Planning to change to 4 inputs and aim squarely at 70 points
14:46 2025/11/27
Also, with macro features added and still using xi*1/255, contours as 0/1 still seem some distance from 70 points!
14:47 2025/11/27
Also considering letting lr decay more slowly
12:24 2025/11/28
Everything fixed; best so far 69.30 at lr = 0.0003
14:22 2025/11/29
Kept at it: the best VGG version emerged, lr = 0.00005, test = 71.01; lr = 0.00001, test = 71.03
Isn't life's hardship just like a program's log? Every tiny bit of progress comes so hard!
Here we cover only VGG's 3*3 convolution: how a 32*32 map stays 32*32, how the original function is reworked in forward, and how it is reworked in backward. The other training techniques of the 70-point version are kept as-is!
First, a look at our network structure, which is basically the same:
// initialize the VGG layers; start by implementing the simplest possible VGG
init_layer(&input_layer, 0, 4, 0, 0, 32, 32, false);
init_layer(&c1_convolution_layer, 4, 12, 3, 3, 32, 32, false);
init_layer(&c11_convolution_layer, 12, 12, 3, 3, 32, 32, false);
init_layer(&s2_pooling_layer, 1, 12, 1, 1, 16, 16, true);
init_layer(&c3_convolution_layer, 12, 16, 3, 3, 16, 16, false);
init_layer(&c31_convolution_layer, 16, 16, 3, 3, 16, 16, false);
init_layer(&s4_pooling_layer, 1, 16, 1, 1, 8, 8, true);
init_layer(&c5_convolution_layer, 16, 120, 8, 8, 1, 1, false);
init_layer(&output_layer, 120, 10, 1, 1, 1, 1, false);
The four layers most critical to our VGG, the c1/c11 and c3/c31 pairs, were marked in red above; they also lay the groundwork for the simplest ResNet!
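Why can stacked 3*3 convolutions (the c1/c11 and c3/c31 pairs above) stand in for the old 5*5 kernels? Two 3*3 layers see the same 5*5 window while using 2*9 = 18 weights per map pair instead of 25. A minimal sketch of the receptive-field arithmetic (the helper name is mine, not from the project):

```cpp
#include <cassert>

// Receptive field of `layers` stacked kernel x kernel convolutions
// (stride 1): each layer widens the field by kernel - 1, so two 3x3
// layers see 5x5, and three see 7x7.
int stacked_receptive_field(int kernel, int layers)
{
    int rf = 1;
    for (int i = 0; i < layers; i++)
        rf += kernel - 1;
    return rf;
}
```

So the c1/c11 pair covers the same 5*5 window as the single 5*5 kernel it replaced in the log entry above.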
When we first implemented VGG, the best test score was around 65, so I knew the rework had worked! There was also a failed version: the idea was good, it was implemented, there were no errors, and it could train, but it scored around 10 points every time, so I knew something was wrong. To this day I still have not gone back to track it down!
Only this new VGG approach succeeded, so it earned a post-mortem, and look: it is over 71 points now!
OK, let's start from forward:
void Ccifar10交叉熵lenet5mfcDlg::forward_propagation()
{
// log_exec("forward_propagation");
for(int i=0;i<10;i++)
{
softm[i]=0;
}
convolution_forward_propagation(&input_layer, &c1_convolution_layer, NULL);
convolution_forward_propagation(&c1_convolution_layer, &c11_convolution_layer, NULL);
max_pooling_forward_propagation(&c11_convolution_layer, &s2_pooling_layer);
convolution_forward_propagation(&s2_pooling_layer, &c3_convolution_layer, NULL);
convolution_forward_propagation(&c3_convolution_layer, &c31_convolution_layer, NULL);
max_pooling_forward_propagation(&c31_convolution_layer, &s4_pooling_layer);
convolution_forward_propagation(&s4_pooling_layer, &c5_convolution_layer);// original valid convolution: 8*8 kernel on 8*8 map -> 1*1
fully_connection_forward_propagation(&c5_convolution_layer, &output_layer);
double maxtemp=*(output_layer.map[0].data);
for (int i = 1; i < 10; i++)
{
/* if (yOcnn[i] > maxtemp)
maxtemp = yOcnn[i];*/
if (output_layer.map[i].data[0] > maxtemp)
maxtemp =output_layer.map[i].data[0];
}
double sum = 0;
// first find the max of the outputs, then softmax
for (int i = 0; i < 10; i++)
{
sum += ::exp(output_layer.map[i].data[0] - maxtemp);
}
for (int i = 0; i < 10; i++)
{
softm[i] = ::exp(output_layer.map[i].data[0] - maxtemp) / sum;// probabilities computed
}
}
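The max-subtraction in the loops above is the standard numerically stable softmax: the common factor exp(-maxtemp) cancels between numerator and denominator, so the probabilities are unchanged while exp() is kept from overflowing. A standalone sketch of the same computation (the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Numerically stable softmax over n logits, mirroring forward_propagation:
// subtract the max logit before exponentiating; the shift cancels out
// in the ratio, so only the numerical range changes.
void stable_softmax(const double* logits, double* probs, int n)
{
    double maxv = logits[0];
    for (int i = 1; i < n; i++)
        if (logits[i] > maxv) maxv = logits[i];
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += std::exp(logits[i] - maxv);
    for (int i = 0; i < n; i++)
        probs[i] = std::exp(logits[i] - maxv) / sum;
}
```

Without the shift, logits around 1000 would overflow exp() to infinity; with it, they are handled exactly.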
The convolution calls marked in red above actually go through one and the same function; you can compare it with the earlier version side by side:
void Ccifar10交叉熵lenet5mfcDlg::convolution_forward_propagation(Layer* pre_layer, Layer* cur_layer, bool* connection_table) //checked and fixed /checked 2.0
{
// log_exec("convolution_forward_propagation");//24*8*8->120
int index_layer = 0;
int layer_size = cur_layer->map_height * cur_layer->map_width;
for (int i = 0; i < cur_layer->map_count; i++)
{
memset(cur_layer->map_common, 0, layer_size * sizeof(double)); // clear the shared map's scratch buffer
for (int j = 0; j < pre_layer->map_count; j++)
{
index_layer = j * cur_layer->map_count + i;
///* if (connection_table != NULL && !connection_table[index_layer])
// {
// continue;
// }*/
// //fix para 3 map height
// convolution_calcu(
// pre_layer->map[j].data, pre_layer->map_width, pre_layer->map_height,//32*32
// cur_layer->kernel[index_layer].weight, cur_layer->kernel_width, cur_layer->kernel_height,//3*3
// cur_layer->map_common, cur_layer->map_width, cur_layer->map_height//32*32
// );
convolution_calcuVgg(
pre_layer->map[j].data, pre_layer->map_width, pre_layer->map_height,//32*32
cur_layer->kernel[index_layer].weight, cur_layer->kernel_width, cur_layer->kernel_height,//3*3
cur_layer->map_common, cur_layer->map_width, cur_layer->map_height//32*32
);
}
for (int k = 0; k < layer_size; k++)//32*32
{
cur_layer->map[i].data[k] = activation_function::tan_h(cur_layer->map_common[k] + cur_layer->map[i].bias);
}
}
}
The differences were marked in red; the commented-out code is the original version. This is the function that worked after my VGG rework:
void Ccifar10交叉熵lenet5mfcDlg::convolution_calcuVgg(double* input_map_data, int input_map_width, int input_map_height, double* kernel_data, int kernel_width,
int kernel_height, double* result_map_data, int result_map_width, int result_map_height)
{
// log_exec("convolution_calcu");
double sum = 0.0;
//for (int i = 0; i < result_map_height; i++)//32;i:=h,n
//{
// for (int j = 0; j < result_map_width; j++)//32;j:=w;m
// {
for (int i = 0; i < result_map_height-kernel_height+1; i++)//32;i:=h,n,vgg
{
for (int j = 0; j < result_map_width-kernel_width+1; j++)//32;j:=w;m,vgg
{
sum = 0.0;
for (int n = 0; n < kernel_height; n++)//3
{
for (int m = 0; m < kernel_width; m++)//3
{
int index_input_reshuffle = (i + n) * input_map_width + j + m;//32,i:=h
int index_kernel_reshuffle = n * kernel_width + m;//n:=h
sum += input_map_data[index_input_reshuffle] * kernel_data[index_kernel_reshuffle];
}
}
/* int index_result_reshuffle = i * result_map_width + j;*/
int index_result_reshuffle = (i+1) * result_map_width + j+1;//w=32
result_map_data[index_result_reshuffle] += sum;
}
}
}
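To restate the trick that keeps 32*32 at 32*32: run an ordinary valid convolution (30*30 output for a 3*3 kernel on a 32*32 map) and write each result one row and one column inward, at (i+1, j+1). The untouched one-pixel border stays zero, which is exactly what a zero-padded "same" convolution would produce. A standalone sketch of the same scheme (the helper name is mine):

```cpp
#include <cassert>
#include <cstring>

// Same-size convolution the way convolution_calcuVgg does it: compute the
// valid convolution (output (h-2) x (w-2) for a 3x3 kernel) and shift it
// into the centre of the h x w result at (i+1, j+1). The one-pixel border
// stays zero, as if the input had been zero-padded by one pixel per side.
void conv_same_via_shift(const double* in, int w, int h,
                         const double* k, int kw, int kh,
                         double* out /* w*h, caller-zeroed */)
{
    for (int i = 0; i < h - kh + 1; i++)
        for (int j = 0; j < w - kw + 1; j++)
        {
            double sum = 0.0;
            for (int n = 0; n < kh; n++)
                for (int m = 0; m < kw; m++)
                    sum += in[(i + n) * w + (j + m)] * k[n * kw + m];
            out[(i + 1) * w + (j + 1)] += sum;  // centre shift
        }
}
```

Calling it on a 4*4 input with an all-ones 3*3 kernel leaves the border at zero and places the four valid window sums in the centre.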
That wraps up forward propagation. Now let's look at how backward handles VGG:
void Ccifar10交叉熵lenet5mfcDlg::backward_propagation(double* label) //checked
{
for (int i = 0; i < output_layer.map_count; i++)
{
output_layer.map[i].error[0] =softm[i] * activation_function::d_tan_h(output_layer.map[i].data[0]);
}
fully_connection_backward_propagation(&output_layer, &c5_convolution_layer);//10-120
convolution_backward_propagation(&c5_convolution_layer, &s4_pooling_layer, NULL);//120 -> 16*8*8
max_pooling_backward_propagation(&s4_pooling_layer, &c31_convolution_layer);//16*8*8 -> 16*16*16
convolution_backward_propagation(&c31_convolution_layer, &c3_convolution_layer);//16*16*16 -> 16*16*16
convolution_backward_propagation(&c3_convolution_layer, &s2_pooling_layer);//16*16*16 -> 12*16*16
max_pooling_backward_propagation(&s2_pooling_layer, &c11_convolution_layer);//12*16*16 -> 12*32*32
convolution_backward_propagation(&c11_convolution_layer, &c1_convolution_layer);//12*32*32 -> 12*32*32
convolution_backward_propagation(&c1_convolution_layer, &input_layer);//12*32*32 -> 4*32*32
}
Two different backward convolution functions appear above. The original one, unchanged, handles c5 -> s4; here we only discuss the VGG version:
void Ccifar10交叉熵lenet5mfcDlg::convolution_backward_propagation(Layer* cur_layer, Layer* pre_layer) //checked checked 2.0 fixed
{
//16*16*16 -> 16*16*16, c31 -> c3
// log_exec("convolution_backward_propagation");
int connected_index = 0;
int pre_layer_mapsize = pre_layer->map_height * pre_layer->map_width;//16*16
// update pre_layer's error
for (int i = 0; i < pre_layer->map_count; i++)
{
memset(pre_layer->map_common, 0, sizeof(double) * pre_layer_mapsize);
for (int j = 0; j < cur_layer->map_count; j++)
{
connected_index = i * cur_layer->map_count + j;
/* if (connection_table != NULL && !connection_table[connected_index])
{
continue;
}*/
for (int n = 1; n < cur_layer->map_height-1; n++)
{
for (int m = 1; m < cur_layer->map_width-1; m++)
{
int valued_index = n * cur_layer->map_width + m;// take the central 14*14 of c31
double error = cur_layer->map[j].error[valued_index];
for (int kernel_y = 0; kernel_y < cur_layer->kernel_height; kernel_y++)//3
{
for (int kernel_x = 0; kernel_x < cur_layer->kernel_width; kernel_x++)//3
{
int index_convoltion_map = (n-1 + kernel_y) * pre_layer->map_width + m-1 + kernel_x;//16
int index_kernel = connected_index;
int index_kernel_weight = kernel_y * cur_layer->kernel_width + kernel_x;
pre_layer->map_common[index_convoltion_map] += error * cur_layer->kernel[index_kernel].weight[index_kernel_weight];
}
}
}
}
}
for (int k = 0; k < pre_layer_mapsize; k++)// the central 14*14 of c31 (cur) reaches the 16*16 of c3 (pre)
{
pre_layer->map[i].error[k] = pre_layer->map_common[k] * activation_function::d_tan_h(pre_layer->map[i].data[k]);
//pre_layer->map[i].error[k] = pre_layer->map_common[k] * activation_func::dtan_h(prev_layer->map[i].data[k]); source
}
}
// update the delta_weight of the S_x ->> C_x kernels
for (int i = 0; i < pre_layer->map_count; i++)//16 maps of 16*16
{
for (int j = 0; j < cur_layer->map_count; j++)//16 maps of 16*16
{
connected_index = i * cur_layer->map_count + j;
/* if (connection_table != NULL && !connection_table[connected_index])
{
continue;
}*/
//fixed cur_layer->map[i] -->> cur_layer->map[j]
convolution_calcuVggBP(
pre_layer->map[i].data, pre_layer->map_width, pre_layer->map_height,//16 16
cur_layer->map[j].error, cur_layer->map_width, cur_layer->map_height,//16 16
cur_layer->kernel[connected_index].delta_weight, cur_layer->kernel_width, cur_layer->kernel_height//3 3
);
}
}
// update C_x's delta_bias
int cur_layer_mapsize = cur_layer->map_height * cur_layer->map_width;
for (int i = 0; i < cur_layer->map_count; i++)
{
double delta_sum = 0.0;
for (int j = 0; j < cur_layer_mapsize; j++)
{
delta_sum += cur_layer->map[i].error[j];
}
cur_layer->map[i].delta_bias += delta_sum;
}
}
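The error loop above is the transpose of the forward pass: only the central error entries (the positions the forward valid convolution actually wrote) carry gradient, and each one is scattered back through the 3*3 kernel onto the input patch it was computed from. A standalone sketch of that scatter (the helper name is mine):

```cpp
#include <cassert>
#include <cstring>

// Input-gradient scatter mirroring the error loop of
// convolution_backward_propagation: each central error entry err[n][m]
// (n, m in [1, h-2]) distributes error * weight back onto the kh x kw
// input window starting at (n-1, m-1), the transpose of the forward gather.
void conv_same_input_grad(const double* err, int w, int h,
                          const double* k, int kw, int kh,
                          double* din /* w*h, caller-zeroed */)
{
    for (int n = 1; n < h - 1; n++)
        for (int m = 1; m < w - 1; m++)
            for (int ky = 0; ky < kh; ky++)
                for (int kx = 0; kx < kw; kx++)
                    din[(n - 1 + ky) * w + (m - 1 + kx)] +=
                        err[n * w + m] * k[ky * kw + kx];
}
```

With a single unit of error at a central position, the input gradient is just a copy of the kernel stamped onto the matching 3*3 input window.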
Without enormous passion, you cannot walk this far! Do you know why PyTorch succeeded?
However VGG walks forward in forward, that is exactly how it returns in backward:
void Ccifar10交叉熵lenet5mfcDlg::convolution_calcuVggBP(double* input_map_data, int input_map_width, int input_map_height, double* kernel_data, int kernel_width,
int kernel_height, double* result_map_data, int result_map_width, int result_map_height)
{
// log_exec("convolution_calcu");
double sum = 0.0;
for (int i = 0; i < result_map_height; i++)//3
{
for (int j = 0; j < result_map_width; j++)//3
{
sum = 0.0;
for (int n = 1; n < kernel_height-1; n++)//14
{
for (int m = 1; m < kernel_width-1; m++)//14
{
int index_input_reshuffle = (i-1 + n) * input_map_width + j-1 + m;//
int index_kernel_reshuffle = n * kernel_width + m;//14*14
sum += input_map_data[index_input_reshuffle] * kernel_data[index_kernel_reshuffle];//16*16
}
}
/* int index_result_reshuffle = i * result_map_width + j;*/
int index_result_reshuffle = (i) * result_map_width + j;//3*3
result_map_data[index_result_reshuffle] += sum;
}
}
}
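Restating the rule behind convolution_calcuVggBP on its own: each 3*3 delta-weight entry (i, j) correlates the input with the central region of the error map, again skipping the zero border that the forward pass never wrote. A standalone sketch (the helper name is mine):

```cpp
#include <cassert>

// Kernel-gradient sketch mirroring convolution_calcuVggBP: delta-weight
// entry (i, j) accumulates input[(i-1+n)][(j-1+m)] * err[n][m] over the
// central error entries only (n, m in [1, h-2]), matching the centre
// shift used in the forward pass.
void conv_same_weight_grad(const double* in, int w, int h,
                           const double* err /* w*h error map */,
                           double* dW /* kw*kh, caller-zeroed */,
                           int kw, int kh)
{
    for (int i = 0; i < kh; i++)
        for (int j = 0; j < kw; j++)
        {
            double sum = 0.0;
            for (int n = 1; n < h - 1; n++)
                for (int m = 1; m < w - 1; m++)
                    sum += in[(i - 1 + n) * w + (j - 1 + m)] * err[n * w + m];
            dW[i * kw + j] += sum;
        }
}
```

Note how the (i-1, j-1) offset is the mirror image of the (i+1, j+1) shift in the forward function: whichever input pixels a weight touched going forward are exactly the ones it collects gradient from coming back.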
This is the most important thing here, and I hope you truly learn it. No patent has been filed, so use it boldly and with confidence!