Python Implementation of a Decision Tree Model

Computing the entropy function (binary entropy)

python
import numpy as np

# UNQ_C1
# GRADED FUNCTION: compute_entropy

def compute_entropy(y):
    """
    Computes the entropy at a node
    
    Args:
       y (ndarray): Numpy array indicating whether each example at a node is
           edible (`1`) or poisonous (`0`)
       
    Returns:
        entropy (float): Entropy at that node
        
    """
    # You need to return the following variables correctly
    entropy = 0
    
    ### START CODE HERE ###
    if len(y) > 0:
        # Fraction of positive (edible) examples at this node
        p = len(y[y == 1]) / len(y)
        # A pure node (all 1s or all 0s) has zero entropy
        if p == 0 or p == 1:
            entropy = 0
        else:
            entropy = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    ### END CODE HERE ###        
    
    return entropy
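
A quick sanity check of this function: the binary entropy -p*log2(p) - (1-p)*log2(1-p) should equal 1 for a 50/50 node and 0 for a pure node. The sketch below is only an illustration, assuming numpy is imported as np; the label array y_toy is a made-up toy example, not part of the graded code.

python
import numpy as np

# Hypothetical toy labels: 3 edible (1) and 1 poisonous (0) samples
y_toy = np.array([1, 1, 1, 0])

print(compute_entropy(np.array([1, 0, 1, 0])))  # 50/50 split -> 1.0
print(compute_entropy(np.array([1, 1, 1, 1])))  # pure node   -> 0.0
print(compute_entropy(y_toy))                   # H(0.75) ≈ 0.8113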

Splitting a node on a feature

python
# UNQ_C2
# GRADED FUNCTION: split_dataset

def split_dataset(X, node_indices, feature):
    """
    Splits the data at the given node into
    left and right branches
    
    Args:
        X (ndarray):             Data matrix of shape(n_samples, n_features)
        node_indices (ndarray):  List containing the active indices. I.e, the samples being considered at this step.
        feature (int):           Index of feature to split on
    
    Returns:
        left_indices (ndarray): Indices with feature value == 1
        right_indices (ndarray): Indices with feature value == 0
    """
    
    # You need to return the following variables correctly
    left_indices = []
    right_indices = []
    
    ### START CODE HERE ###
    for i in node_indices:
        if X[i,feature]==1:
            left_indices.append(i)
        else:
            right_indices.append(i)
    ### END CODE HERE ###
        
    return left_indices, right_indices
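
A small usage sketch for the split. The features are assumed to be one-hot (0/1) encoded, which is what the check X[i, feature] == 1 relies on; X_toy and root_indices below are hypothetical values for illustration.

python
import numpy as np

# Hypothetical toy data: 4 samples, 2 binary features
X_toy = np.array([[1, 0],
                  [1, 1],
                  [0, 1],
                  [0, 0]])
root_indices = [0, 1, 2, 3]

left, right = split_dataset(X_toy, root_indices, feature=0)
print(left, right)   # samples with feature 0 == 1 go left -> [0, 1] [2, 3]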

Computing the information gain

python
# UNQ_C3
# GRADED FUNCTION: compute_information_gain

def compute_information_gain(X, y, node_indices, feature):
    
    """
    Compute the information gain of splitting the node on a given feature
    
    Args:
        X (ndarray):            Data matrix of shape(n_samples, n_features)
        y (array like):         list or ndarray with n_samples containing the target variable
        node_indices (ndarray): List containing the active indices. I.e, the samples being considered in this step.
        feature (int):          Index of feature to split on
   
    Returns:
        information_gain (float): Information gain of the given split
    
    """    
    # Split dataset
    left_indices, right_indices = split_dataset(X, node_indices, feature)
    
    # Some useful variables
    X_node, y_node = X[node_indices], y[node_indices]
    X_left, y_left = X[left_indices], y[left_indices]
    X_right, y_right = X[right_indices], y[right_indices]
    
    # You need to return the following variables correctly
    information_gain = 0
    
    ### START CODE HERE ###
    
    # Weights: fraction of the node's samples that go to each branch
    n_left = len(y_left)
    n_right = len(y_right)
    w_left = n_left / (n_left + n_right)
    w_right = n_right / (n_left + n_right)
    # Entropy of each branch
    H_left = compute_entropy(y_left)
    H_right = compute_entropy(y_right)
    # Information gain = node entropy minus the weighted entropy after the split
    H_node = compute_entropy(y_node)
    information_gain = H_node - (w_left * H_left + w_right * H_right)
    ### END CODE HERE ###  
    
    return information_gain
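
A quick check on the same hypothetical toy data: a feature that separates the two classes perfectly yields a gain equal to the node entropy, while an uninformative feature yields a gain of 0.

python
import numpy as np

X_toy = np.array([[1, 0],
                  [1, 1],
                  [0, 1],
                  [0, 0]])
y_toy = np.array([1, 1, 0, 0])
root_indices = [0, 1, 2, 3]

# Feature 0 separates the classes perfectly -> gain = node entropy = 1.0
print(compute_information_gain(X_toy, y_toy, root_indices, feature=0))
# Feature 1 carries no information about y -> gain = 0.0
print(compute_information_gain(X_toy, y_toy, root_indices, feature=1))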

Choosing the best feature to split on

python
# UNQ_C4
# GRADED FUNCTION: get_best_split

def get_best_split(X, y, node_indices):   
    """
    Returns the best feature to split the node data on
    (the features are binary, so no threshold is needed)
    
    Args:
        X (ndarray):            Data matrix of shape(n_samples, n_features)
        y (array like):         list or ndarray with n_samples containing the target variable
        node_indices (ndarray): List containing the active indices. I.e, the samples being considered in this step.

    Returns:
        best_feature (int):     The index of the best feature to split
    """    
    
    # Some useful variables
    num_features = X.shape[1]
    
    # You need to return the following variables correctly
    best_feature = -1

    best_info_gain = 0
    ### START CODE HERE ###
    for feature in range(num_features):
        info_gain = compute_information_gain(X, y, node_indices, feature)
        # Keep the feature with the largest (strictly positive) information gain
        if info_gain > best_info_gain:
            best_info_gain = info_gain
            best_feature = feature
    ### END CODE HERE ###
   
    return best_feature
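
Continuing with the same hypothetical toy data, get_best_split should return feature 0, since it has the larger information gain of the two features.

python
import numpy as np

X_toy = np.array([[1, 0],
                  [1, 1],
                  [0, 1],
                  [0, 0]])
y_toy = np.array([1, 1, 0, 0])
root_indices = [0, 1, 2, 3]

print(get_best_split(X_toy, y_toy, root_indices))   # -> 0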

Building the decision tree recursively

python
# Global list that records every split made during the recursive build
tree = []

def build_tree_recursive(X, y, node_indices, branch_name, max_depth, current_depth):
    """
    Build a tree using the recursive algorithm that splits the dataset into 2 subgroups at each node.
    This function prints the tree and appends each split to the global list `tree`.
    
    Args:
        X (ndarray):            Data matrix of shape(n_samples, n_features)
        y (array like):         list or ndarray with n_samples containing the target variable
        node_indices (ndarray): List containing the active indices. I.e, the samples being considered in this step.
        branch_name (string):   Name of the branch. ['Root', 'Left', 'Right']
        max_depth (int):        Max depth of the resulting tree. 
        current_depth (int):    Current depth. Parameter used during recursive call.
   
    """ 

    # Maximum depth reached - stop splitting
    if current_depth == max_depth:
        formatting = " "*current_depth + "-"*current_depth
        print(formatting, "%s leaf node with indices" % branch_name, node_indices)
        return
   
    # Otherwise, get best split and split the data
    # Get the best feature and threshold at this node
    best_feature = get_best_split(X, y, node_indices) 
    tree.append((current_depth, branch_name, best_feature, node_indices))
    
    formatting = "-"*current_depth
    print("%s Depth %d, %s: Split on feature: %d" % (formatting, current_depth, branch_name, best_feature))
    
    # Split the dataset at the best feature
    left_indices, right_indices = split_dataset(X, node_indices, best_feature)
    
    # continue splitting the left and the right child. Increment current depth
    build_tree_recursive(X, y, left_indices, "Left", max_depth, current_depth+1)
    build_tree_recursive(X, y, right_indices, "Right", max_depth, current_depth+1)
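
To run the recursive build end to end, call the function on the root node with current_depth=0 after initializing the global tree list. The call below reuses the same hypothetical toy data with max_depth=1, so it prints one split followed by two leaf nodes.

python
import numpy as np

X_toy = np.array([[1, 0],
                  [1, 1],
                  [0, 1],
                  [0, 0]])
y_toy = np.array([1, 1, 0, 0])
root_indices = [0, 1, 2, 3]

tree = []   # reset the recorded splits
build_tree_recursive(X_toy, y_toy, root_indices, "Root", max_depth=1, current_depth=0)
# Expected console output (roughly):
#  Depth 0, Root: Split on feature: 0
#  - Left leaf node with indices [0, 1]
#  - Right leaf node with indices [2, 3]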