Training gnina's Caffe Models in Python

    train.py [-h] -m MODEL -p PREFIX [-d DATA_ROOT] [-n FOLDNUMS] [-a]
             [-i ITERATIONS] [-s SEED] [-t TEST_INTERVAL] [-o OUTPREFIX]
             [-g GPU] [-c CONT] [-k] [-r] [--avg_rotations] [--keep_best]
             [--dynamic] [--cyclic] [--solver SOLVER] [--lr_policy LR_POLICY]
             [--step_reduce STEP_REDUCE] [--step_end STEP_END]
             [--step_when STEP_WHEN] [--base_lr BASE_LR]
             [--momentum MOMENTUM] [--weight_decay WEIGHT_DECAY]
             [--gamma GAMMA] [--power POWER] [--weights WEIGHTS]
             [-p2 PREFIX2] [-d2 DATA_ROOT2] [--data_ratio DATA_RATIO]
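Per the usage above, only the model template (`-m MODEL`) and the prefix of the training data files (`-p PREFIX`) are required; the remaining options have defaults. A typical invocation might therefore look like `python train.py -m model.model -p data/crossval -g 0`, where the file names are illustrative placeholders rather than paths from this article.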

# Database Layer

* Layer type: `Data`

* [Doxygen Documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1DataLayer.html)

* Header: [`./include/caffe/layers/data_layer.hpp`]

* CPU implementation: [`./src/caffe/layers/data_layer.cpp`]

* Parameters (`DataParameter data_param`)

  • Required

    • `source`: the name of the directory containing the database

    • `batch_size`: the number of inputs to process at one time

  • Optional

    • `rand_skip`: skip up to this number of inputs at the beginning; useful for asynchronous sgd

    • `backend` [default `LEVELDB`]: choose whether to use a `LEVELDB` or `LMDB`
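As a hedged sketch of how these parameters appear in practice, the same layer can be generated from Python with `caffe.NetSpec`; the LMDB path below is a placeholder.

```python
import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
# Data layer reading (image, label) pairs from an LMDB; the path is a placeholder
n.data, n.label = L.Data(source='path/to/train_lmdb',   # directory containing the database
                         backend=P.Data.LMDB,           # LEVELDB (default) or LMDB
                         batch_size=64,                 # number of inputs processed at one time
                         ntop=2)                        # two top blobs: data and label
print(n.to_proto())  # prints the equivalent prototxt definition
```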

# Absolute Value Layer

* Layer type: `AbsVal`

* Sample

        layer {
          name: "layer"
          bottom: "in"
          top: "out"
          type: "AbsVal"
        }

The `AbsVal` layer computes the output as abs(x) for each input element x.

# Accuracy and Top-k

`Accuracy` scores the output as the accuracy of output with respect to target -- it is not actually a loss and has no backward step.

* Layer type: `Accuracy`

* Header: [`./include/caffe/layers/accuracy_layer.hpp`]

* CPU implementation: [`./src/caffe/layers/accuracy_layer.cpp`]

* CUDA GPU implementation: [`./src/caffe/layers/accuracy_layer.cu`]

Parameters

* Parameters (`AccuracyParameter accuracy_param`)

* From [`./src/caffe/proto/caffe.proto`]

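As a hedged illustration of `accuracy_param` (the blob names and shapes below are placeholders), the most commonly used field is `top_k`; with `top_k: 5` a prediction counts as correct if the ground-truth label is among the five highest-scoring classes:

```python
import caffe
from caffe import layers as L

n = caffe.NetSpec()
# Dummy score/label inputs standing in for a classifier's output and the ground truth
n.score, n.label = L.Input(ntop=2, shape=[dict(dim=[32, 1000]), dict(dim=[32, 1])])
# Top-5 accuracy; in full networks this layer is usually restricted to the TEST phase
n.accuracy_top5 = L.Accuracy(n.score, n.label, accuracy_param=dict(top_k=5))
print(n.to_proto())
```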

# Bias Layer

* Layer type: `Bias`

# BNLL Layer

* Layer type: `BNLL`

The `BNLL` (binomial normal log likelihood) layer computes the output as log(1 + exp(x)) for each input element x.

Parameters

No parameters.

Sample

    layer {
      name: "layer"
      bottom: "in"
      top: "out"
      type: "BNLL"
    }

# Concat Layer

* Layer type: `Concat`

* Input

  • `n_i * c_i * h * w` for each input blob i from 1 to K.

* Output

  • if `axis = 0`: `(n_1 + n_2 + ... + n_K) * c_1 * h * w`, and all input `c_i` should be the same.

  • if `axis = 1`: `n_1 * (c_1 + c_2 + ... + c_K) * h * w`, and all input `n_i` should be the same.

* Sample

        layer {
          name: "concat"
          bottom: "in1"
          bottom: "in2"
          top: "out"
          type: "Concat"
          concat_param {
            axis: 1
          }
        }

The `Concat` layer is a utility layer that concatenates its multiple input blobs to one single output blob.
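To make the shape rules above concrete, here is a small NumPy sketch (independent of Caffe) of the two concatenation modes:

```python
import numpy as np

a = np.zeros((2, 3, 4, 4))   # n_1 = 2, c_1 = 3
b = np.zeros((5, 3, 4, 4))   # n_2 = 5, c_2 = 3

# axis = 0: batch sizes add up, channel counts must match
print(np.concatenate([a, b], axis=0).shape)   # (7, 3, 4, 4)

c = np.zeros((2, 6, 4, 4))   # same n, different c
# axis = 1: channel counts add up, batch sizes must match
print(np.concatenate([a, c], axis=1).shape)   # (2, 9, 4, 4)
```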

# Contrastive Loss Layer

* Layer type: `ContrastiveLoss`

Parameters

* Parameters (`ContrastiveLossParameter contrastive_loss_param`)
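The section above only names the parameter message, so as a hedged sketch: in a siamese-style setup the loss takes two feature blobs plus a binary similarity label, and `margin` is the main field of `contrastive_loss_param` (blob names and shapes below are placeholders):

```python
import caffe
from caffe import layers as L

n = caffe.NetSpec()
# Two embedding blobs and a binary similarity label (1 = similar pair, 0 = dissimilar)
n.feat_a, n.feat_b, n.sim = L.Input(ntop=3,
                                    shape=[dict(dim=[32, 2]),
                                           dict(dim=[32, 2]),
                                           dict(dim=[32, 1])])
# Dissimilar pairs are pushed apart by at least `margin` in the embedding space
n.loss = L.ContrastiveLoss(n.feat_a, n.feat_b, n.sim,
                           contrastive_loss_param=dict(margin=1.0))
print(n.to_proto())
```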

# Crop Layer

* Layer type: `Crop`

* [Doxygen Documentation]

# Deconvolution Layer

* Layer type: `Deconvolution`

Uses the same parameters as the Convolution layer.
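Since it reuses `convolution_param`, a hedged NetSpec sketch of a 2x upsampling deconvolution might look like this (blob names and sizes are illustrative):

```python
import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 16, 32, 32]))
# Deconvolution is configured through convolution_param, exactly like Convolution
n.upsampled = L.Deconvolution(n.data,
                              convolution_param=dict(num_output=16,
                                                     kernel_size=4,
                                                     stride=2,
                                                     pad=1,
                                                     bias_term=False))
print(n.to_proto())  # output spatial size is doubled (32 -> 64)
```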

# Dummy Data Layer

# Parameter Layer

# Threshold Layer

# Python Layer

* Layer type: `Python`

* Header: [`./include/caffe/layers/python_layer.hpp`]

The Python layer allows users to add customized layers without modifying the Caffe core code.
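A minimal sketch of such a customized layer, modeled on the Euclidean-loss example that ships with Caffe (`examples/pycaffe/layers/pyloss.py`); the class would be referenced from a prototxt via a `python_param` block naming its module and class:

```python
import caffe
import numpy as np

class EuclideanLossLayer(caffe.Layer):
    """Toy Python layer: Euclidean loss between two bottom blobs."""

    def setup(self, bottom, top):
        if len(bottom) != 2:
            raise Exception("Need two bottom blobs to compute the loss.")

    def reshape(self, bottom, top):
        # Difference buffer shaped like the inputs; the loss output is a single value
        self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
        top[0].reshape(1)

    def forward(self, bottom, top):
        self.diff[...] = bottom[0].data - bottom[1].data
        top[0].data[...] = np.sum(self.diff ** 2) / bottom[0].num / 2.

    def backward(self, top, propagate_down, bottom):
        for i in range(2):
            if not propagate_down[i]:
                continue
            sign = 1 if i == 0 else -1
            bottom[i].diff[...] = sign * self.diff / bottom[i].num
```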

# Appendix

How to use Caffe from Python:

Caffe provides a Python API that makes it more convenient to work with Caffe from Python. Through the Python API you can load and train models, run inference, operate on individual network layers, and so on.

Below are some commonly used features of the Caffe Python API, together with usage examples:

  1. Import the Caffe module:

```python

import caffe

```
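Right after the import it is common to select the computation device; these calls are part of the standard pycaffe API:

```python
caffe.set_mode_cpu()     # run on the CPU
# or, if a CUDA-capable GPU is available:
caffe.set_device(0)      # select GPU 0
caffe.set_mode_gpu()
```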

  2. Load a network and model:

```python

net = caffe.Net('path/to/deploy.prototxt', 'path/to/model.caffemodel', caffe.TEST)

```

  3. Run a forward pass (inference):

```python

input_data = ...  # input data as a NumPy array

net.blobs['data'].data[...] = input_data  # copy the input into the network's 'data' blob
net.forward()                             # run the forward pass
output_data = net.blobs['output'].data    # read the output blob

```
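The `input_data = ...` line above glosses over preprocessing. pycaffe provides a `caffe.io.Transformer` helper for this; the sketch below assumes an ImageNet-style model (the mean values, blob name `'data'`, and image path are placeholders):

```python
import numpy as np

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))              # H x W x C -> C x H x W
transformer.set_mean('data', np.array([104, 117, 123]))   # per-channel mean (BGR, assumed)
transformer.set_raw_scale('data', 255)                    # caffe.io.load_image returns [0, 1]
transformer.set_channel_swap('data', (2, 1, 0))           # RGB -> BGR

image = caffe.io.load_image('path/to/image.jpg')          # placeholder path
net.blobs['data'].data[...] = transformer.preprocess('data', image)
net.forward()
```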

  4. Run a backward pass (training):

```python

# In pycaffe, the backward/update cycle is normally driven by a Solver:
solver = caffe.get_solver('path/to/solver.prototxt')   # solver definition (path is a placeholder)
solver.step(1)                                         # one forward + backward pass plus a parameter update
loss = solver.net.blobs['loss'].data                   # read the loss value of the training net

# A standalone Net can also back-propagate gradients explicitly:
net.forward()
net.backward()                                         # fills the .diff arrays of blobs and parameters

```
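For completeness, a minimal hedged training loop built on the solver interface (the solver prototxt path, iteration count, and blob name `'loss'` are placeholders):

```python
solver = caffe.get_solver('path/to/solver.prototxt')

for it in range(1000):
    solver.step(1)     # one forward + backward pass plus a parameter update
    if it % 100 == 0:
        print('iteration', it, 'loss', float(solver.net.blobs['loss'].data))
```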

  5. Extract features:

```python

input_data = ...  # input data as a NumPy array

# Option 1: read the 'fc7' blob after a forward pass has already been run
feature = net.blobs['fc7'].data

# Option 2: run the forward pass only up to layer 'fc7' and take its output
feature = net.forward(data=input_data, end='fc7')['fc7']

```

  6. Transfer learning with a pretrained model:

```python

pretrained_net = caffe.Net('path/to/pretrained.prototxt', 'path/to/pretrained.caffemodel', caffe.TEST)
new_net = caffe.Net('path/to/new_net.prototxt', caffe.TRAIN)

# Copy the weights of pretrained_net into new_net, layer by layer
for layer_name, blob in pretrained_net.params.items():
    if layer_name in new_net.params:
        for i in range(len(blob)):
            new_net.params[layer_name][i].data[...] = blob[i].data

```
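The manual copy above matches layers by name; pycaffe also provides `Net.copy_from`, which performs the same name-based weight copy in one call:

```python
new_net = caffe.Net('path/to/new_net.prototxt', caffe.TRAIN)
new_net.copy_from('path/to/pretrained.caffemodel')  # copies weights for layers whose names match
```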
