Applying Greedy Algorithms in Java: Neural Network Pruning Explained

1. Overview of Neural Network Pruning

Neural Network Pruning is a model compression technique: it removes the connections, neurons, or weights that contribute least to a network's output, shrinking model size and computational cost while preserving as much of the model's performance as possible.

1.1 Basic Pruning Concepts

  • Weight pruning: remove individual unimportant weights from the network
  • Neuron/filter pruning: remove entire neurons or convolutional filters
  • Structured pruning: remove whole structural units (such as channels or filters)
  • Unstructured pruning: remove individual weights without following any particular pattern

1.2 Three Main Pruning Approaches

  1. Importance-based pruning: rank weights or neurons by an importance score and remove the least important (typically with a greedy algorithm)
  2. Regularization-based pruning: learn sparsity automatically through L1/L2 regularization
  3. Optimization-based pruning: treat pruning as part of the optimization problem itself

This article focuses on implementing greedy, importance-based pruning in Java.

2. Applying Greedy Algorithms to Neural Network Pruning

The greedy approach to pruning is simple: at every iteration, make the locally optimal pruning decision, i.e., remove the weight or neuron whose removal hurts the current model the least.

2.1 The Basic Greedy Pruning Workflow

  1. Train the original network to convergence
  2. Evaluate the importance of every parameter in the network
  3. Remove the least important parameters according to the chosen criterion
  4. Fine-tune / retrain the pruned network
  5. Repeat steps 2-4 until a stopping condition is met

2.2 Common Importance Criteria

  1. Weight magnitude: weights with small absolute values are considered unimportant
  2. Gradient information: weights with little influence on the loss function
  3. Taylor expansion: importance approximated by a first- or second-order Taylor expansion of the loss
  4. Activation values: neurons with small activations are considered unimportant (a short sketch of these criteria follows this list)
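
To make the criteria concrete, here is a minimal sketch scoring a single weight or neuron under the magnitude, first-order Taylor, and average-activation criteria. The class and method names, and the assumption that gradients and activations are available as plain arrays, are illustrative and not part of any library:

java
public class ImportanceCriteria {
    // Magnitude criterion: importance = |w|
    public static double magnitude(double w) {
        return Math.abs(w);
    }
    
    // First-order Taylor criterion: importance = |w * dL/dw|,
    // an estimate of how much the loss changes if w is set to zero
    public static double taylorFirstOrder(double w, double gradient) {
        return Math.abs(w * gradient);
    }
    
    // Average-activation criterion for a neuron: mean |activation|
    // over a batch of inputs (activations collected elsewhere)
    public static double averageActivation(double[] activations) {
        double sum = 0;
        for (double a : activations) {
            sum += Math.abs(a);
        }
        return sum / activations.length;
    }
}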

3. Implementing Neural Network Pruning in Java

Below, we build a complete greedy pruning pipeline in Java.

3.1 Basic Network Structure

First, define the network's basic components:

java
public class NeuralNetwork {
    private List<Layer> layers;
    private double learningRate;
    
    // Network initialization, forward pass, backpropagation, etc.
    // ...
}

public abstract class Layer {
    protected int numNeurons;
    // Convention used throughout this article: weights[neuron][input],
    // i.e., row n holds the incoming weights of neuron n
    protected double[][] weights;
    protected double[] biases;
    protected double[] outputs;
    
    public abstract void forward(double[] inputs);
    public abstract void backward(double[] errors, double learningRate);
}

public class DenseLayer extends Layer {
    // Concrete implementation of a fully connected layer
    // ...
}
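
As a reference point for the pruners below, here is a fuller sketch of what DenseLayer might look like under the weights[neuron][input] convention. The sigmoid activation is an arbitrary illustrative choice, and the getters are the accessors the later code assumes:

java
public class DenseLayer extends Layer {
    @Override
    public void forward(double[] inputs) {
        outputs = new double[numNeurons];
        for (int n = 0; n < numNeurons; n++) {
            double sum = biases[n];
            for (int k = 0; k < inputs.length; k++) {
                sum += weights[n][k] * inputs[k]; // row n = incoming weights of neuron n
            }
            outputs[n] = 1.0 / (1.0 + Math.exp(-sum)); // sigmoid, chosen arbitrarily
        }
    }
    
    @Override
    public void backward(double[] errors, double learningRate) {
        // Gradient computation and weight update omitted for brevity
    }
    
    public double[][] getWeights() { return weights; }
    public double[] getBiases() { return biases; }
    public int getNumNeurons() { return numNeurons; }
}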

3.2 Greedy Pruning Based on Weight Magnitude

java
import java.util.*;

public class WeightPruner {
    private NeuralNetwork network;
    protected double pruningRate; // fraction pruned per iteration; protected so GradualPruner (section 5.2) can adjust it
    private int fineTuneEpochs;   // fine-tuning epochs after each pruning step
    
    public WeightPruner(NeuralNetwork network, double pruningRate, int fineTuneEpochs) {
        this.network = network;
        this.pruningRate = pruningRate;
        this.fineTuneEpochs = fineTuneEpochs;
    }
    
    // Exposes the pruned network (used by the evaluation code in section 4)
    public NeuralNetwork getNetwork() {
        return network;
    }
    
    /**
     * Runs iterative greedy pruning.
     * @param targetSparsity target sparsity (0-1)
     * @param maxIterations maximum number of iterations
     */
    public void iterativePrune(double targetSparsity, int maxIterations) {
        double currentSparsity = 0;
        int iteration = 0;
        
        while (currentSparsity < targetSparsity && iteration < maxIterations) {
            // 1. Score weight importance
            Map<WeightPosition, Double> importanceMap = evaluateWeights();
            
            // 2. Greedily select the weights to prune
            List<WeightPosition> toPrune = selectWeightsToPrune(importanceMap);
            
            // 3. Prune them
            pruneWeights(toPrune);
            
            // 4. Measure the current sparsity
            currentSparsity = calculateSparsity();
            System.out.printf("Iteration %d: Sparsity = %.2f%%\n", 
                             iteration, currentSparsity * 100);
            
            // 5. Fine-tune the network
            fineTuneNetwork();
            
            iteration++;
        }
    }
    
    /**
     * Scores every weight's importance (here: its absolute value).
     * Weights that are already zero are skipped; if they were not, they would
     * be re-selected on every iteration and sparsity would stall at pruningRate.
     */
    private Map<WeightPosition, Double> evaluateWeights() {
        Map<WeightPosition, Double> importanceMap = new HashMap<>();
        
        for (int l = 0; l < network.getLayers().size(); l++) {
            Layer layer = network.getLayers().get(l);
            if (layer instanceof DenseLayer) {
                DenseLayer denseLayer = (DenseLayer) layer;
                double[][] weights = denseLayer.getWeights();
                
                for (int i = 0; i < weights.length; i++) {
                    for (int j = 0; j < weights[i].length; j++) {
                        if (weights[i][j] == 0) continue; // already pruned
                        WeightPosition pos = new WeightPosition(l, i, j);
                        importanceMap.put(pos, Math.abs(weights[i][j]));
                    }
                }
            }
        }
        
        return importanceMap;
    }
    
    /**
     * Greedily selects the weights with the smallest absolute values.
     */
    private List<WeightPosition> selectWeightsToPrune(Map<WeightPosition, Double> importanceMap) {
        // Sort weights by importance (absolute value) in ascending order
        List<Map.Entry<WeightPosition, Double>> sorted = new ArrayList<>(importanceMap.entrySet());
        sorted.sort(Map.Entry.comparingByValue());
        
        // Number of weights to prune this iteration
        int totalWeights = importanceMap.size();
        int toPruneCount = (int) (totalWeights * pruningRate);
        
        // Take the least important ones
        List<WeightPosition> toPrune = new ArrayList<>();
        for (int i = 0; i < toPruneCount && i < sorted.size(); i++) {
            toPrune.add(sorted.get(i).getKey());
        }
        
        return toPrune;
    }
    
    /**
     * Prunes the selected weights by setting them to zero.
     */
    private void pruneWeights(List<WeightPosition> toPrune) {
        for (WeightPosition pos : toPrune) {
            Layer layer = network.getLayers().get(pos.layerIndex);
            if (layer instanceof DenseLayer) {
                DenseLayer denseLayer = (DenseLayer) layer;
                denseLayer.getWeights()[pos.neuronIndex][pos.weightIndex] = 0;
            }
        }
    }
    
    /**
     * Computes the network's current sparsity (fraction of zero weights).
     * Protected so that GradualPruner (section 5.2) can reuse it.
     */
    protected double calculateSparsity() {
        int totalWeights = 0;
        int zeroWeights = 0;
        
        for (Layer layer : network.getLayers()) {
            if (layer instanceof DenseLayer) {
                DenseLayer denseLayer = (DenseLayer) layer;
                double[][] weights = denseLayer.getWeights();
                
                for (int i = 0; i < weights.length; i++) {
                    for (int j = 0; j < weights[i].length; j++) {
                        totalWeights++;
                        if (weights[i][j] == 0) {
                            zeroWeights++;
                        }
                    }
                }
            }
        }
        
        return (double) zeroWeights / totalWeights;
    }
    
    /**
     * Fine-tunes the network after pruning.
     */
    private void fineTuneNetwork() {
        // Simplified here; a real implementation would iterate over the training data
        for (int epoch = 0; epoch < fineTuneEpochs; epoch++) {
            // Run forward and backward passes over the training set
            // ...
        }
    }
    
    /**
     * Identifies a single weight by layer, neuron, and input index.
     * Note: TaylorPruner (section 3.4) reuses this type, so in a real project
     * it would live in its own top-level file.
     */
    private static class WeightPosition {
        int layerIndex;
        int neuronIndex;
        int weightIndex;
        
        public WeightPosition(int layerIndex, int neuronIndex, int weightIndex) {
            this.layerIndex = layerIndex;
            this.neuronIndex = neuronIndex;
            this.weightIndex = weightIndex;
        }
        
        // equals and hashCode are required for use as a HashMap key
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof WeightPosition)) return false;
            WeightPosition other = (WeightPosition) o;
            return layerIndex == other.layerIndex
                    && neuronIndex == other.neuronIndex
                    && weightIndex == other.weightIndex;
        }
        
        @Override
        public int hashCode() {
            return Objects.hash(layerIndex, neuronIndex, weightIndex);
        }
    }
}
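
Sorting the whole importance map costs O(N log N) per pruning step. For large networks, a bounded max-heap finds the toPruneCount least-important entries in O(N log k). A sketch of an alternative body for selectWeightsToPrune, reusing the same variable names as above:

java
// Keep the toPruneCount smallest entries in a max-heap:
// whenever the heap grows past toPruneCount, evict the current maximum
PriorityQueue<Map.Entry<WeightPosition, Double>> heap =
        new PriorityQueue<>((a, b) -> Double.compare(b.getValue(), a.getValue()));
for (Map.Entry<WeightPosition, Double> entry : importanceMap.entrySet()) {
    heap.offer(entry);
    if (heap.size() > toPruneCount) {
        heap.poll();
    }
}
List<WeightPosition> toPrune = new ArrayList<>();
for (Map.Entry<WeightPosition, Double> entry : heap) {
    toPrune.add(entry.getKey());
}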

3.3 Greedy Pruning Based on Neuron Importance

Besides individual weights, we can also prune whole neurons based on their importance:

java
import java.util.*;

public class NeuronPruner {
    private NeuralNetwork network;
    private double pruningRate;
    private int fineTuneEpochs;
    
    public NeuronPruner(NeuralNetwork network, double pruningRate, int fineTuneEpochs) {
        this.network = network;
        this.pruningRate = pruningRate;
        this.fineTuneEpochs = fineTuneEpochs;
    }
    
    // Exposes the pruned network (used by the evaluation code in section 4)
    public NeuralNetwork getNetwork() {
        return network;
    }
    
    /**
     * Runs iterative neuron pruning.
     */
    public void pruneNeurons(double targetSparsity, int maxIterations) {
        double currentSparsity = 0;
        int iteration = 0;
        
        while (currentSparsity < targetSparsity && iteration < maxIterations) {
            // 1. Score neuron importance (here: mean activation)
            Map<NeuronPosition, Double> importanceMap = evaluateNeurons();
            
            // 2. Greedily select the neurons to prune
            List<NeuronPosition> toPrune = selectNeuronsToPrune(importanceMap);
            
            // 3. Prune them
            pruneNeurons(toPrune);
            
            // 4. Measure the current sparsity
            currentSparsity = calculateNeuronSparsity();
            System.out.printf("Iteration %d: Neuron Sparsity = %.2f%%\n", 
                             iteration, currentSparsity * 100);
            
            // 5. Fine-tune the network
            fineTuneNetwork();
            
            iteration++;
        }
    }
    
    /**
     * Scores neuron importance using the mean activation.
     */
    private Map<NeuronPosition, Double> evaluateNeurons() {
        Map<NeuronPosition, Double> importanceMap = new HashMap<>();
        
        // Simplified here; a real implementation would average activations
        // over a validation set (see the ActivationCollector sketch below)
        
        for (int l = 0; l < network.getLayers().size() - 1; l++) { // never prune the output layer
            Layer layer = network.getLayers().get(l);
            if (layer instanceof DenseLayer) {
                DenseLayer denseLayer = (DenseLayer) layer;
                int numNeurons = denseLayer.getNumNeurons();
                
                for (int n = 0; n < numNeurons; n++) {
                    // Placeholder: substitute real average activations here
                    double avgActivation = Math.random();
                    importanceMap.put(new NeuronPosition(l, n), avgActivation);
                }
            }
        }
        
        return importanceMap;
    }
    
    /**
     * Greedily selects the neurons with the smallest mean activations.
     */
    private List<NeuronPosition> selectNeuronsToPrune(Map<NeuronPosition, Double> importanceMap) {
        List<Map.Entry<NeuronPosition, Double>> sorted = new ArrayList<>(importanceMap.entrySet());
        sorted.sort(Map.Entry.comparingByValue());
        
        int totalNeurons = importanceMap.size();
        int toPruneCount = (int) (totalNeurons * pruningRate);
        
        List<NeuronPosition> toPrune = new ArrayList<>();
        for (int i = 0; i < toPruneCount && i < sorted.size(); i++) {
            toPrune.add(sorted.get(i).getKey());
        }
        
        return toPrune;
    }
    
    /**
     * Prunes neurons by zeroing all of their incoming and outgoing weights.
     * Under the weights[neuron][input] convention, a neuron's incoming weights
     * form its row in the current layer, and its outgoing weights form its
     * column in the next layer's weight matrix.
     */
    private void pruneNeurons(List<NeuronPosition> toPrune) {
        for (NeuronPosition pos : toPrune) {
            // 1. Zero the neuron's incoming weights (its row in this layer)
            DenseLayer currentLayer = (DenseLayer) network.getLayers().get(pos.layerIndex);
            for (int j = 0; j < currentLayer.getWeights()[pos.neuronIndex].length; j++) {
                currentLayer.getWeights()[pos.neuronIndex][j] = 0;
            }
            
            // 2. Zero the neuron's outgoing weights (its column in the next layer)
            if (pos.layerIndex + 1 < network.getLayers().size()) {
                Layer next = network.getLayers().get(pos.layerIndex + 1);
                if (next instanceof DenseLayer) {
                    DenseLayer nextLayer = (DenseLayer) next;
                    for (int i = 0; i < nextLayer.getWeights().length; i++) {
                        nextLayer.getWeights()[i][pos.neuronIndex] = 0;
                    }
                }
            }
        }
    }
    
    /**
     * Computes neuron sparsity: the fraction of neurons whose incoming
     * weights are all zero, i.e., neurons that have been pruned.
     */
    private double calculateNeuronSparsity() {
        int totalNeurons = 0;
        int prunedNeurons = 0;
        
        for (Layer layer : network.getLayers()) {
            if (layer instanceof DenseLayer) {
                for (double[] incoming : ((DenseLayer) layer).getWeights()) {
                    totalNeurons++;
                    boolean allZero = true;
                    for (double w : incoming) {
                        if (w != 0) { allZero = false; break; }
                    }
                    if (allZero) prunedNeurons++;
                }
            }
        }
        
        return totalNeurons == 0 ? 0 : (double) prunedNeurons / totalNeurons;
    }
    
    /**
     * Fine-tunes the network after pruning (same simplified loop as in WeightPruner).
     */
    private void fineTuneNetwork() {
        for (int epoch = 0; epoch < fineTuneEpochs; epoch++) {
            // Run forward and backward passes over the training set
            // ...
        }
    }
    
    /**
     * Identifies a neuron by layer and index.
     */
    private static class NeuronPosition {
        int layerIndex;
        int neuronIndex;
        
        public NeuronPosition(int layerIndex, int neuronIndex) {
            this.layerIndex = layerIndex;
            this.neuronIndex = neuronIndex;
        }
        
        // equals and hashCode are required for use as a HashMap key
        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof NeuronPosition)) return false;
            NeuronPosition other = (NeuronPosition) o;
            return layerIndex == other.layerIndex && neuronIndex == other.neuronIndex;
        }
        
        @Override
        public int hashCode() {
            return Objects.hash(layerIndex, neuronIndex);
        }
    }
}
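
The evaluateNeurons placeholder above uses Math.random(); in practice the scores should be mean absolute activations measured on a validation set. A minimal sketch, assuming NeuronPosition is promoted to a top-level class and that Layer exposes its latest outputs through a getOutputs() accessor (neither exists in the skeleton above):

java
import java.util.HashMap;
import java.util.Map;

public class ActivationCollector {
    /**
     * Runs the network over the validation set and returns each neuron's mean
     * absolute activation -- a drop-in replacement for the Math.random()
     * placeholder in evaluateNeurons().
     */
    public static Map<NeuronPosition, Double> averageActivations(
            NeuralNetwork network, Dataset validationSet) {
        Map<NeuronPosition, Double> sums = new HashMap<>();
        int count = validationSet.getExamples().size();
        
        for (Example example : validationSet.getExamples()) {
            network.forward(example.getInput()); // populates each layer's outputs
            for (int l = 0; l < network.getLayers().size() - 1; l++) { // skip the output layer
                double[] outputs = network.getLayers().get(l).getOutputs(); // assumed accessor
                for (int n = 0; n < outputs.length; n++) {
                    sums.merge(new NeuronPosition(l, n), Math.abs(outputs[n]), Double::sum);
                }
            }
        }
        
        sums.replaceAll((pos, sum) -> sum / count);
        return sums;
    }
}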

3.4 A More Advanced Greedy Pruning Strategy

We can also implement a greedy strategy based on a Taylor expansion of the loss. Setting a weight w to zero changes the loss by approximately |∂L/∂w · w| (a first-order Taylor expansion around the trained weights), so this criterion accounts for each weight's actual effect on the loss function:

java
import java.util.*;

public class TaylorPruner {
    private NeuralNetwork network;
    private double pruningRate;
    private int fineTuneEpochs;
    private Dataset validationSet;
    
    public TaylorPruner(NeuralNetwork network, Dataset validationSet, 
                       double pruningRate, int fineTuneEpochs) {
        this.network = network;
        this.validationSet = validationSet;
        this.pruningRate = pruningRate;
        this.fineTuneEpochs = fineTuneEpochs;
    }
    
    // Exposes the pruned network (used by the evaluation code in section 4)
    public NeuralNetwork getNetwork() {
        return network;
    }
    
    /**
     * Greedy pruning driven by the first-order Taylor criterion.
     */
    public void taylorPrune(double targetSparsity, int maxIterations) {
        double currentSparsity = 0;
        int iteration = 0;
        
        while (currentSparsity < targetSparsity && iteration < maxIterations) {
            // 1. Compute each weight's Taylor importance
            Map<WeightPosition, Double> importanceMap = computeTaylorImportance();
            
            // 2. Select the weights to prune
            List<WeightPosition> toPrune = selectWeightsToPrune(importanceMap);
            
            // 3. Prune them
            pruneWeights(toPrune);
            
            // 4. Measure the sparsity
            currentSparsity = calculateSparsity();
            System.out.printf("Iteration %d: Sparsity = %.2f%%\n", 
                             iteration, currentSparsity * 100);
            
            // 5. Fine-tune the network
            fineTuneNetwork();
            
            iteration++;
        }
    }
    
    /**
     * Taylor importance: |weight * gradient|, averaged over the validation set.
     */
    private Map<WeightPosition, Double> computeTaylorImportance() {
        Map<WeightPosition, Double> importanceMap = new HashMap<>();
        
        // Accumulate gradients over the validation set
        for (Example example : validationSet.getExamples()) {
            // Forward pass
            double[] output = network.forward(example.getInput());
            
            // Output error (assuming a classification task; see the
            // computeError sketch after this class)
            double[] error = computeError(output, example.getTarget());
            
            // Backward pass to populate the layer gradients
            network.backward(error);
            
            // Accumulate |weight * gradient| for every position
            for (int l = 0; l < network.getLayers().size(); l++) {
                Layer layer = network.getLayers().get(l);
                if (layer instanceof DenseLayer) {
                    DenseLayer denseLayer = (DenseLayer) layer;
                    double[][] weights = denseLayer.getWeights();
                    double[][] gradients = denseLayer.getGradients();
                    
                    for (int i = 0; i < weights.length; i++) {
                        for (int j = 0; j < weights[i].length; j++) {
                            WeightPosition pos = new WeightPosition(l, i, j);
                            double importance = Math.abs(weights[i][j] * gradients[i][j]);
                            
                            // Sum contributions across examples
                            importanceMap.merge(pos, importance, Double::sum);
                        }
                    }
                }
            }
        }
        
        // Average over the number of examples
        int numExamples = validationSet.getExamples().size();
        importanceMap.replaceAll((pos, sum) -> sum / numExamples);
        
        return importanceMap;
    }
    
    // The remaining methods (selectWeightsToPrune, pruneWeights,
    // calculateSparsity, fineTuneNetwork) mirror WeightPruner
    // ...
}
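
computeError is called above but never defined. A minimal sketch of a method that could be added to TaylorPruner, assuming a softmax output layer trained with cross-entropy loss, for which the output-layer error reduces to the prediction minus the one-hot target; adapt it to the network's actual loss:

java
    /**
     * Output-layer error for softmax + cross-entropy: error_i = output_i - target_i.
     * This loss choice is an illustrative assumption, not fixed by the article.
     */
    private double[] computeError(double[] output, double[] target) {
        double[] error = new double[output.length];
        for (int i = 0; i < output.length; i++) {
            error[i] = output[i] - target[i];
        }
        return error;
    }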

4. Evaluating and Comparing Pruning Strategies

4.1 Evaluation Metrics

Once a pruning algorithm is implemented, we need to measure its effect:

java
public class PruningEvaluator {
    /**
     * Compares accuracy and model size before and after pruning.
     */
    public static void evaluate(NeuralNetwork original, NeuralNetwork pruned, Dataset testSet) {
        double originalAccuracy = computeAccuracy(original, testSet);
        double prunedAccuracy = computeAccuracy(pruned, testSet);
        
        int originalSize = computeModelSize(original);
        int prunedSize = computeModelSize(pruned);
        
        System.out.println("Original Model - Accuracy: " + originalAccuracy + 
                          "%, Size: " + originalSize + " parameters");
        System.out.println("Pruned Model - Accuracy: " + prunedAccuracy + 
                          "%, Size: " + prunedSize + " parameters");
        System.out.println("Reduction: " + 
                          (100 * (originalSize - prunedSize) / (double) originalSize) + 
                          "% size reduction");
    }
    
    private static double computeAccuracy(NeuralNetwork network, Dataset testSet) {
        int correct = 0;
        for (Example example : testSet.getExamples()) {
            double[] output = network.forward(example.getInput());
            if (argmax(output) == argmax(example.getTarget())) {
                correct++;
            }
        }
        return 100 * correct / (double) testSet.getExamples().size();
    }
    
    private static int computeModelSize(NeuralNetwork network) {
        // Count only nonzero weights: magnitude pruning zeroes weights in place,
        // so counting raw array sizes would report no size reduction at all
        int size = 0;
        for (Layer layer : network.getLayers()) {
            if (layer instanceof DenseLayer) {
                DenseLayer denseLayer = (DenseLayer) layer;
                for (double[] row : denseLayer.getWeights()) {
                    for (double w : row) {
                        if (w != 0) size++;
                    }
                }
                size += denseLayer.getBiases().length;
            }
        }
        return size;
    }
    
    private static int argmax(double[] array) {
        int maxIndex = 0;
        for (int i = 1; i < array.length; i++) {
            if (array[i] > array[maxIndex]) {
                maxIndex = i;
            }
        }
        return maxIndex;
    }
}

4.2 Comparing the Pruning Strategies

We can now compare the different greedy strategies side by side:

java
public class PruningComparison {
    public static void main(String[] args) {
        // 1. Load the data and train a baseline network
        Dataset trainSet = loadDataset("train.csv");
        Dataset testSet = loadDataset("test.csv");
        NeuralNetwork original = createNetwork();
        trainNetwork(original, trainSet);
        
        // 2. Instantiate the pruning strategies, each on its own copy of the trained network
        WeightPruner weightPruner = new WeightPruner(original.copy(), 0.1, 5);
        NeuronPruner neuronPruner = new NeuronPruner(original.copy(), 0.1, 5);
        TaylorPruner taylorPruner = new TaylorPruner(original.copy(), trainSet, 0.1, 5);
        
        // 3. Run each pruner (target sparsity 0.5, at most 10 iterations)
        weightPruner.iterativePrune(0.5, 10);
        neuronPruner.pruneNeurons(0.5, 10);
        taylorPruner.taylorPrune(0.5, 10);
        
        // 4. Evaluate the results
        System.out.println("=== Weight Pruning ===");
        PruningEvaluator.evaluate(original, weightPruner.getNetwork(), testSet);
        
        System.out.println("\n=== Neuron Pruning ===");
        PruningEvaluator.evaluate(original, neuronPruner.getNetwork(), testSet);
        
        System.out.println("\n=== Taylor Pruning ===");
        PruningEvaluator.evaluate(original, taylorPruner.getNetwork(), testSet);
    }
    
    // Helper methods (loadDataset, createNetwork, trainNetwork)...
}

5. Advanced Topics and Optimizations

5.1 Implementing Structured Pruning

Structured pruning is more involved than unstructured pruning because it removes entire structural units:

java
public class ChannelPruner {
    // Channel pruning for convolutional layers
    
    /**
     * Prunes channels of a convolutional layer.
     */
    public void pruneChannels(double targetSparsity) {
        // 1. Score channel importance (e.g., the L1 norm of each channel's filter)
        Map<ChannelPosition, Double> importanceMap = evaluateChannels();
        
        // 2. Greedily select the channels to prune
        List<ChannelPosition> toPrune = selectChannelsToPrune(importanceMap, targetSparsity);
        
        // 3. Rebuild the network without the selected channels
        reconstructNetwork(toPrune);
    }
    
    // Remaining implementation details...
}
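
The article defines no convolutional layer, so here is a hedged sketch of what the L1-norm channel scoring might look like inside ChannelPruner, assuming a hypothetical ConvLayer that stores its weights as [outChannel][inChannel][kernelH][kernelW] and a ChannelPosition class analogous to WeightPosition:

java
    /**
     * L1-norm importance of each output channel: channels whose filters have a
     * small L1 norm contribute little to the layer's output and are candidates
     * for removal. The 4-D weight layout is an assumption for illustration.
     */
    private Map<ChannelPosition, Double> evaluateChannels(int layerIndex, double[][][][] weights) {
        Map<ChannelPosition, Double> importance = new HashMap<>();
        for (int out = 0; out < weights.length; out++) {
            double l1 = 0;
            for (double[][] plane : weights[out]) {
                for (double[] row : plane) {
                    for (double w : row) {
                        l1 += Math.abs(w);
                    }
                }
            }
            importance.put(new ChannelPosition(layerIndex, out), l1);
        }
        return importance;
    }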

5.2 Gradual Pruning

A gradual pruning schedule often yields better results:

java
public class GradualPruner extends WeightPruner {
    private double initialSparsity;
    private double finalSparsity;
    private int totalIterations;
    
    public GradualPruner(NeuralNetwork network, double initialSparsity, 
                        double finalSparsity, int totalIterations, int fineTuneEpochs) {
        super(network, 0, fineTuneEpochs); // pruningRate is recomputed every iteration
        this.initialSparsity = initialSparsity;
        this.finalSparsity = finalSparsity;
        this.totalIterations = totalIterations;
    }
    
    @Override
    public void iterativePrune(double targetSparsity, int maxIterations) {
        double currentSparsity = initialSparsity;
        int iteration = 0;
        
        while (currentSparsity < finalSparsity && iteration < totalIterations) {
            // Target sparsity for this iteration (linear schedule)
            double target = initialSparsity + (finalSparsity - initialSparsity) * 
                          (iteration / (double) totalIterations);
            
            // Sparsity increment still needed
            double increment = target - currentSparsity;
            
            // Pruning rate relative to the weights that remain
            double requiredPruningRate = increment / (1 - currentSparsity);
            
            // Set the rate and run a single pruning step
            // (pruningRate and calculateSparsity are protected in WeightPruner)
            this.pruningRate = requiredPruningRate;
            super.iterativePrune(target, 1);
            
            // Re-measure the actual sparsity
            currentSparsity = calculateSparsity();
            
            iteration++;
        }
    }
}
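
The schedule above raises the target sparsity linearly. A widely used alternative is the cubic schedule from Zhu & Gupta's gradual magnitude pruning work ("To prune, or not to prune", 2017), which prunes aggressively while the network is still highly redundant and eases off near the final sparsity. Swapping it in only changes the target computation inside the loop:

java
// Cubic sparsity schedule: the target rises quickly at first, then levels off
double target = finalSparsity + (initialSparsity - finalSparsity)
        * Math.pow(1.0 - iteration / (double) totalIterations, 3);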

5.3 Combining Pruning with Quantization

Pruning can be combined with quantization for stronger compression:

java
public class PruningWithQuantization {
    public static NeuralNetwork pruneAndQuantize(NeuralNetwork original, 
                                               double pruningSparsity, 
                                               int quantizationBits) {
        // 1. Prune
        WeightPruner pruner = new WeightPruner(original, 0.1, 5);
        pruner.iterativePrune(pruningSparsity, 10);
        NeuralNetwork pruned = pruner.getNetwork();
        
        // 2. Quantize
        Quantizer quantizer = new Quantizer(quantizationBits);
        NeuralNetwork quantized = quantizer.quantize(pruned);
        
        return quantized;
    }
}

class Quantizer {
    private int bits;
    
    public Quantizer(int bits) {
        this.bits = bits;
    }
    
    public NeuralNetwork quantize(NeuralNetwork network) {
        // Convert floating-point weights to a low-precision representation
        // (see the uniform-quantization sketch below)
        // ...
        return network;
    }
}
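
The quantize stub can be completed with simple uniform (linear) quantization: snap every weight in a layer to the nearest of 2^bits evenly spaced levels between that layer's minimum and maximum weight. A minimal in-place sketch; it deliberately leaves pruned zeros untouched so quantization does not destroy the sparsity:

java
    public NeuralNetwork quantize(NeuralNetwork network) {
        int steps = (1 << bits) - 1; // number of intervals between the 2^bits levels
        for (Layer layer : network.getLayers()) {
            if (!(layer instanceof DenseLayer)) continue;
            double[][] weights = ((DenseLayer) layer).getWeights();
            
            // Per-layer weight range
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (double[] row : weights) {
                for (double w : row) {
                    min = Math.min(min, w);
                    max = Math.max(max, w);
                }
            }
            double step = (max - min) / steps;
            if (step == 0) continue; // all weights identical; nothing to quantize
            
            // Snap each weight to its nearest level, keeping pruned weights at zero
            for (double[] row : weights) {
                for (int j = 0; j < row.length; j++) {
                    if (row[j] == 0) continue; // preserve sparsity
                    row[j] = min + Math.round((row[j] - min) / step) * step;
                }
            }
        }
        return network;
    }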

6. Practical Considerations

6.1 Pruning Challenges and Remedies

  1. Accuracy loss

    • Remedy: prune gradually and combine with knowledge distillation
  2. Limited hardware acceleration for sparse models

    • Remedy: prefer structured pruning and use dedicated sparse-computation libraries
  3. Training instability after pruning

    • Remedy: fine-tune with a smaller learning rate and add regularization

6.2 Performance Optimization Tips

  1. Sparse matrix representation

    java
    public class SparseWeights {
        private Map<Integer, Double> nonZeroWeights; // key: encoded position, value: weight
        private int rows;
        private int cols;
        
        // Positions are encoded as row * cols + col
        public double get(int row, int col) {
            return nonZeroWeights.getOrDefault(row * cols + col, 0.0);
        }
        
        // Remaining sparse-matrix operations
        // ...
    }
  2. Parallelized importance evaluation

    java
    // Speed up importance scoring with Java parallel streams. Collecting into a
    // concurrent map avoids mutating a shared HashMap from worker threads,
    // which would not be thread-safe. 'positions' holds the entries to score.
    Map<WeightPosition, Double> importanceMap = positions.parallelStream()
        .collect(Collectors.toConcurrentMap(
            pos -> pos,
            pos -> computeImportance(pos)));
  3. A combined pruning-and-training pipeline

    java
    public class PruningPipeline {
        public static void pipeline(NeuralNetwork network, Dataset dataset, 
                                  int totalEpochs, double finalSparsity) {
            int pruneStart = totalEpochs / 3;
            int pruneEnd = 2 * totalEpochs / 3;
            
            for (int epoch = 0; epoch < totalEpochs; epoch++) {
                // Training phase
                trainOneEpoch(network, dataset);
                
                // Pruning phase (middle third of training)
                if (epoch >= pruneStart && epoch <= pruneEnd) {
                    double progress = (epoch - pruneStart) / (double)(pruneEnd - pruneStart);
                    double targetSparsity = finalSparsity * progress;
                    pruneWeights(network, targetSparsity);
                }
            }
        }
    }
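
The pipeline calls a one-shot pruneWeights(network, targetSparsity) helper that is not defined above. A minimal magnitude-based sketch: collect all weight magnitudes, take the k-th smallest as a threshold, and zero everything at or below it (ties may prune slightly more than k weights):

java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class OneShotPruner {
    /** Zeroes the smallest-magnitude weights until roughly targetSparsity is reached. */
    public static void pruneWeights(NeuralNetwork network, double targetSparsity) {
        // Collect the magnitudes of all dense-layer weights
        List<Double> magnitudes = new ArrayList<>();
        for (Layer layer : network.getLayers()) {
            if (layer instanceof DenseLayer) {
                for (double[] row : ((DenseLayer) layer).getWeights()) {
                    for (double w : row) {
                        magnitudes.add(Math.abs(w));
                    }
                }
            }
        }
        if (magnitudes.isEmpty()) return;
        
        // Threshold = magnitude of the k-th smallest weight
        Collections.sort(magnitudes);
        int k = (int) (magnitudes.size() * targetSparsity);
        double threshold = magnitudes.get(Math.min(k, magnitudes.size() - 1));
        
        // Zero every weight at or below the threshold
        for (Layer layer : network.getLayers()) {
            if (layer instanceof DenseLayer) {
                double[][] w = ((DenseLayer) layer).getWeights();
                for (int i = 0; i < w.length; i++) {
                    for (int j = 0; j < w[i].length; j++) {
                        if (Math.abs(w[i][j]) <= threshold) w[i][j] = 0;
                    }
                }
            }
        }
    }
}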

7. Summary

Greedy algorithms work well for neural network pruning because they:

  • remove the least important parameters step by step
  • keep every pruning decision locally optimal
  • interleave naturally with fine-tuning

With a sensible importance criterion and pruning schedule, model size and compute cost can be cut substantially while preserving accuracy. Java's ecosystem offers fewer ready-made frameworks than Python's, but with carefully designed data structures and algorithms, an efficient neural network pruning pipeline is entirely feasible.
