第10章 符号推理与神经符号AI

第一部分:原理详解

10.1 经典符号推理

10.1.1 逻辑推理系统
10.1.1.1 一阶逻辑与归结

一阶逻辑构成了人工智能中知识表示与推理的基石,其通过常量、变量、谓词、函数及量词构建形式化语言,旨在精确描述世界的状态与规律。与命题逻辑不同,一阶逻辑具备表达对象间关系的能力,其核心在于量词的作用范围与项的替换机制。推理过程旨在判定给定公式集是否逻辑蕴涵某一目标公式,这一过程通常转化为证明公式集与目标公式否定的不可满足性。归结原理是自动化定理证明中最核心的推理规则,其基本思想是通过消解互补文字来简化子句集。若两个子句分别包含互补的文字 L 与 ¬L,则可以消去这两个文字,将剩余部分合并为新子句。为了应用归结,需将公式转化为子句范式,该过程涉及消除蕴含符、否定内移、斯科伦化以消除存在量词以及前束范式化等步骤。归结过程的完备性保证了若子句集不可满足,必能推导出空子句,从而为自动推理提供了坚实的理论基础。
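上述子句化步骤可用一个简短推导示意(示例公式为假设,仅用于说明流程):

```latex
\begin{aligned}
&\forall x\,\bigl(P(x) \rightarrow \exists y\, Q(x,y)\bigr) \\
\Rightarrow\;& \forall x\,\bigl(\neg P(x) \lor \exists y\, Q(x,y)\bigr) && \text{消除蕴含符} \\
\Rightarrow\;& \forall x\,\bigl(\neg P(x) \lor Q(x, f(x))\bigr) && \text{斯科伦化: } y \mapsto f(x) \\
\Rightarrow\;& \neg P(x) \lor Q(x, f(x)) && \text{省略全称量词, 得到子句}
\end{aligned}
```

其中 f 为斯科伦函数,其引入消去了依赖于 x 的存在量词。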

10.1.1.2 描述逻辑与OWL

描述逻辑是一簇形式化的知识表示语言,旨在提供结构化的领域知识建模能力,同时保证推理过程的可判定性。作为本体语言OWL的理论基础,描述逻辑通过定义概念与角色来构建术语框(TBox)和断言框(ABox)。概念描述对象的集合,角色描述对象间的二元关系。基本构造算子包括交集、并集、补集、存在量词和全称量词等,复杂的类表达式通过这些算子递归构建。例如,类表达式 ∃hasChild.Doctor 描述了所有拥有医生子女的个体集合。推理任务是描述逻辑系统的核心,主要包括概念的可满足性检测、包含关系判定以及实例检索。随着表达能力的增强,如引入传递角色、逆角色及数量限制,推理复杂度会显著上升。OWL语言在语义网上通过RDF语法序列化描述逻辑本体,使得异构系统间的知识共享与互操作成为可能,而其底层推理机则利用Tableau算法在有限步骤内验证模型的一致性。

10.1.2 规划与搜索
10.1.2.1 STRIPS与PDDL

自动规划研究如何在给定初始状态和目标状态的情况下,生成一系列能够转换状态的动作序列。STRIPS作为一种经典的规划域描述语言,通过状态变量集、初始状态、目标状态及动作集来形式化定义规划问题。每个动作由名称、参数、前提条件及效果构成,前提条件规定了动作执行前必须满足的状态约束,效果则描述动作执行后状态的变迁。PDDL作为规划领域定义语言,进一步扩展了STRIPS的表达能力,支持分层任务网络、时间约束及数值流等复杂特性。在STRIPS模型中,状态通常表示为基原子的集合,动作的应用实质上是状态的转移过程:若当前状态包含前提条件中的所有原子,则执行动作后,效果中的删除列表原子被移除,添加列表原子被加入。规划过程即是在状态空间中寻找一条从初始状态通往目标状态节点的路径,该路径上的动作序列即为问题的解。

10.1.2.2 启发式搜索算法

状态空间的爆炸性增长使得盲目搜索策略难以应对复杂规划问题,启发式搜索通过引入估价函数指导搜索方向,显著提升了求解效率。估价函数通常定义为 f(n) = g(n) + h(n),其中 g(n) 表示从初始状态到当前节点 n 的实际代价,而 h(n) 则是启发式函数,估计从 n 到目标状态的代价。A* 算法是最具代表性的启发式搜索算法,当启发式函数满足可采纳性,即对所有节点 n,h(n) 不超过其实际到达目标的代价时,A* 算法能够保证找到最优解。启发式函数的设计至关重要,松弛图规划方法通过忽略动作的删除效果来构建松弛规划问题,在该简化模型中计算目标可达的步数作为启发值。这种启发式信息虽然在计算上具有一定开销,但能提供比简单启发式更精准的距离估计,从而有效裁剪搜索空间。
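正文所述"忽略删除效果"的松弛启发式可用如下草图说明(动作与谓词的字符串表示均为示例性假设,非某个既定实现):逐层扩展可达事实集合,返回目标全部可达所需的层数。

```python
def relaxed_plan_heuristic(state, goal, actions):
    """删除松弛启发式:忽略动作的删除效果,
    逐层应用添加效果扩展可达事实集合,
    返回目标全部可达所需的层数;不可达时返回 float('inf')。
    actions: [(preconditions, add_effects), ...],元素均为事实集合。"""
    reachable = set(state)
    layers = 0
    while not goal.issubset(reachable):
        new_facts = set()
        for pre, add in actions:
            if pre.issubset(reachable):
                new_facts |= add  # 只应用添加效果,忽略删除列表
        if new_facts.issubset(reachable):  # 到达不动点:目标不可达
            return float('inf')
        reachable |= new_facts
        layers += 1
    return layers

if __name__ == "__main__":
    # 假设的积木世界片段:先拿起 A,再放到 B 上
    acts = [({"Clear(A)", "HandEmpty"}, {"Holding(A)"}),
            ({"Holding(A)", "Clear(B)"}, {"On(A, B)"})]
    h = relaxed_plan_heuristic({"Clear(A)", "Clear(B)", "HandEmpty"},
                               {"On(A, B)"}, acts)
    print(h)  # 2
```

该启发值可直接替换脚本 10.1.2.2 中的欧氏距离函数,用于规划问题的状态空间搜索。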

10.2 神经符号融合

10.2.1 神经定理证明
10.2.1.1 Neural Theorem Prover

神经定理证明旨在结合深度学习的模式识别能力与符号逻辑的严密推理框架。传统定理证明器依赖启发式策略在巨大的搜索空间中选择应用哪条推理规则,而神经定理证明器利用神经网络对当前证明状态与可用前提进行编码,预测下一步最优的推理步骤。该方法通常将证明过程建模为状态-动作空间的搜索问题,状态由当前待证目标与上下文假设构成,动作则对应于逻辑系统中特定的推理规则或前提选择。神经网络模型,如卷积神经网络或Transformer,被训练用于估计在特定状态下采取某动作的价值或概率,从而引导蒙特卡洛树搜索等探索策略。通过强化学习与监督学习的结合,系统能够从历史证明数据中学习隐含的逻辑关联,在处理大规模数学库或程序验证任务时展现出超越传统启发式算法的潜力。

10.2.1.2 可微分归纳逻辑

可微分归纳逻辑试图弥合符号逻辑的离散性与神经网络的连续性之间的鸿沟,实现端到端的可微分推理。传统的逻辑推理依赖于离散的真值集合 {0, 1} 与硬性的规则匹配,而可微分逻辑将逻辑连接词软化,引入连续的松弛表示。例如,逻辑合取 A∧B 可近似为 t(A)·t(B) 或 min(t(A), t(B)),逻辑析取 A∨B 近似为 t(A)+t(B)−t(A)t(B),其中 t(·) 表示命题的真值度。这种软化使得逻辑规则可以嵌入到神经网络的计算图中,逻辑推理的误差能够通过反向传播算法传递至底层的特征提取模块。在此框架下,谓词的真值不再由人工定义,而是由神经网络根据数据学习得出,规则的结构也可通过梯度优化进行搜索,从而实现从数据中自动归纳逻辑规则并进行模糊推理的能力。

10.2.2 神经程序合成
10.2.2.1 程序归纳

程序归纳研究如何从输入输出示例中自动推导出符合行为的计算机程序。该领域结合了程序分析与机器学习技术,旨在解决程序空间的不可判定性与稀疏性挑战。神经程序归纳通常采用编码器-解码器架构,编码器将输入输出示例映射为潜在的语义表示,解码器则基于该表示生成抽象语法树或代码序列。为了处理程序的组合性与长距离依赖,注意力机制与分层生成策略被广泛采用。与传统的搜索技术如演绎综合不同,神经方法通过学习大规模代码库中的统计规律,能够猜测用户意图并补全代码片段。该技术在自动代码补全、表格公式生成以及自动化脚本编写等场景中具有重要应用价值,其核心挑战在于保证生成程序的正确性与鲁棒性,通常需要引入执行引导的解码策略来过滤无效代码。
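程序归纳中"通过执行验证搜索正确程序"的思想可用一个不含神经打分的极简枚举草图说明(DSL 原语与函数名均为本文假设;神经方法的作用是替代这里的盲目枚举顺序):

```python
from itertools import product

# 假设的 DSL 原语:每个原语是 列表 -> 列表 的纯函数
DSL = {
    "reverse": lambda xs: xs[::-1],
    "sort":    lambda xs: sorted(xs),
    "double":  lambda xs: [x * 2 for x in xs],
    "drop1":   lambda xs: xs[1:],
}

def run(program, xs):
    """按顺序执行原语序列"""
    for op in program:
        xs = DSL[op](xs)
    return xs

def induce(examples, max_len=2):
    """枚举所有长度不超过 max_len 的原语序列,
    返回第一个与全部输入输出示例一致的程序(执行引导过滤)。"""
    for length in range(1, max_len + 1):
        for program in product(DSL, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return list(program)
    return None

if __name__ == "__main__":
    examples = [([3, 1, 2], [2, 4, 6]), ([5, 4], [8, 10])]
    print(induce(examples))  # ['sort', 'double']
```

神经程序归纳可视为用学习到的分布改写 `product` 的枚举顺序,使与用户意图一致的程序更早被执行验证。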

10.2.2.2 神经程序解释器

神经程序解释器构建了一个可微分的计算机架构模拟,通过神经网络模拟CPU、内存及控制流的运作。该架构通常由多个模块组成:核心控制器负责根据当前状态决定执行何种操作,记忆模块用于存储变量与中间结果,读写头负责交互内存。与直接生成源代码不同,神经程序解释器学习执行程序的轨迹,将程序分解为一系列子程序调用或原语操作。这种方法具备极强的泛化能力,能够处理算法中固有的循环与递归结构。训练过程中,模型通过观察程序的执行过程学习算子语义,推理时则能够像传统解释器一样逐步执行指令,同时保持整个过程的可微分性。这使得系统不仅能够模仿输入输出映射,还能显式地学习算法的控制逻辑,为具身智能体学习复杂技能提供了可解释的计算模型。
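该架构的一个高度简化的前向步骤示意如下(模块结构、维度与原语集合均为本文假设,并非某个既定实现):核心控制器在每一步根据子程序嵌入与环境状态输出下一原语操作的分布及终止概率,整个过程保持可微分。

```python
import torch
import torch.nn as nn

class TinyNPI(nn.Module):
    """核心控制器示意:GRUCell 维护执行状态,
    输出对原语操作(假设为 WRITE/MOVE/CALL/STOP 四种)的
    选择分布与终止概率。"""
    def __init__(self, n_ops=4, prog_dim=8, env_dim=6, hidden=16):
        super().__init__()
        self.core = nn.GRUCell(prog_dim + env_dim, hidden)
        self.op_head = nn.Linear(hidden, n_ops)    # 下一原语操作的 logits
        self.halt_head = nn.Linear(hidden, 1)      # 子程序终止概率

    def forward(self, prog_emb, env_state, h):
        h = self.core(torch.cat([prog_emb, env_state], dim=-1), h)
        return self.op_head(h), torch.sigmoid(self.halt_head(h)), h

if __name__ == "__main__":
    npi = TinyNPI()
    h = torch.zeros(1, 16)           # 控制器隐状态
    prog = torch.randn(1, 8)         # 子程序嵌入
    env = torch.randn(1, 6)          # 环境观测(如读写头内容)
    for step in range(3):            # 逐步执行,类似解释器的取指-执行循环
        op_logits, p_halt, h = npi(prog, env, h)
        print(step, op_logits.argmax(dim=-1).item(), round(p_halt.item(), 3))
```

训练时可对每一步的操作选择与终止决策施加监督(即模仿执行轨迹),这正是正文所述"学习执行程序的轨迹"的含义。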

10.3 知识图谱与推理

10.3.1 知识表示学习
10.3.1.1 图嵌入方法

知识图谱嵌入技术旨在将实体与关系映射到低维连续向量空间,同时保留图的拓扑结构与语义信息。基于翻译的方法如TransE假设关系表示从头实体到尾实体的翻译向量,即 h + r ≈ t,并通过能量函数最小化优化嵌入。该模型简单高效,但在处理一对多、多对一等复杂关系时存在局限性。后续模型如TransH与TransR引入了关系特定的投影空间,允许同一实体在不同关系下具有不同表示,从而增强了模型的表达能力。此外,基于张量分解的方法如DistMult与ComplEx利用双线性形式对三元组建模,能够更好地捕获对称与反对称关系。这些嵌入向量作为知识的稠密表示,不仅降低了存储开销,还为下游的链接预测与分类任务提供了高效的数值特征支持。
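TransE 的打分函数与训练更新可用如下 NumPy 草图说明(维度、学习率与三元组均为示例性假设,这里以梯度下降最小化正例的 ½‖h + r − t‖²,省略了负采样与 margin 损失):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
E = rng.normal(scale=0.1, size=(3, dim))  # 3 个实体的嵌入
R = rng.normal(scale=0.1, size=(1, dim))  # 1 个关系的嵌入

def score(h, r, t):
    """TransE 能量:||h + r - t||,越小表示三元组越可信。"""
    return np.linalg.norm(E[h] + R[r] - E[t])

# 对正例三元组 (0, 0, 1) 做梯度下降,使 h + r ≈ t
lr = 0.1
for _ in range(200):
    diff = E[0] + R[0] - E[1]   # ½||diff||² 对各嵌入的梯度方向
    E[0] -= lr * diff
    R[0] -= lr * diff
    E[1] += lr * diff

print(score(0, 0, 1))  # 趋近 0:正例能量被压低
print(score(2, 0, 1))  # 未参与训练的替换头实体,能量明显更高
```

完整实现还需对每个正例采样替换头或尾实体的负例,并用 margin-based ranking loss 拉开正负例的能量差距。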

10.3.1.2 关系推理

关系推理超越了简单的链接预测,侧重于在知识图谱上执行多跳查询与复杂逻辑演绎。给定一个查询向量,系统需在图结构中寻找符合逻辑约束的答案实体。这一过程可通过在嵌入空间中的几何操作实现,例如利用盒嵌入将查询表示为超矩形,使得包含关系的推理转化为空间包含判定。对于路径查询,可以通过向量运算的复合来模拟多跳关系,如 r1 ∘ r2 的嵌入近似于 r1 + r2 或其他组合函数。近年来,神经查询执行器结合了图神经网络与注意力机制,能够在潜空间中模拟符号图遍历,处理存在量词与合取查询。这种连续的推理机制使得系统能够应对知识图谱的不完备性,推断缺失的隐含关系,支持智能体的复杂决策过程。
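路径查询的向量复合可用如下草图说明(实体与关系嵌入为手工构造的二维示例,将 r1 ∘ r2 近似为向量相加):

```python
import numpy as np

# 手工构造的嵌入,使 Alice --friend--> Bob --mother--> Carol 成立
entities = {"Alice": np.array([0.0, 0.0]),
            "Bob":   np.array([1.0, 0.0]),
            "Carol": np.array([1.0, 1.0])}
relations = {"friend": np.array([1.0, 0.0]),
             "mother": np.array([0.0, 1.0])}

def answer_path_query(head, path):
    """多跳路径查询:head 的嵌入沿 path 中各关系依次平移,
    取嵌入空间中最近的实体作为答案(r1 ∘ r2 ≈ r1 + r2)。"""
    q = entities[head].copy()
    for r in path:
        q = q + relations[r]
    return min(entities, key=lambda e: np.linalg.norm(entities[e] - q))

print(answer_path_query("Alice", ["friend", "mother"]))  # Carol
```

即使图中没有显式的 Alice 到 Carol 的边,组合后的查询向量仍能落在正确答案附近,这正是嵌入推理应对不完备性的方式。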

10.3.2 知识引导学习
10.3.2.1 先验知识注入

将符号化的先验知识融入神经网络训练过程是提升模型数据效率与泛化能力的有效途径。知识可以以逻辑规则或约束的形式存在,通过正则化项或结构化损失函数约束神经网络的输出空间。例如,若领域知识规定"所有鸟都会飞"且"企鹅是鸟但不会飞",该矛盾或特例可通过逻辑一致性损失引导模型在特征空间中形成正确的分类边界。后验正则化框架将知识约束作为优化问题的后验项,利用对偶理论在训练过程中动态调整模型参数。此外,知识图谱嵌入可作为特征向量与原始输入拼接,为神经网络提供补充的语义背景。这种方法有效缓解了纯数据驱动模型在小样本场景下的过拟合问题,使得模型决策更加符合人类认知逻辑。
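上述逻辑一致性损失可用如下极简草图说明(规则软化方式采用 Łukasiewicz 蕴含,例外豁免的处理为本文假设的简化做法):将规则"鸟 → 会飞"转化为对模型输出概率的惩罚项,企鹅等特例样本不受约束。

```python
def rule_violation_loss(p_bird, p_flies, is_exception=False):
    """软化的逻辑约束 Bird(x) -> Flies(x):
    当模型认为 x 是鸟 (p_bird 高) 却不会飞 (p_flies 低) 时产生惩罚;
    标注为例外(如企鹅)的样本不受该规则约束。"""
    if is_exception:
        return 0.0
    # 规则真值的 Łukasiewicz 近似: min(1, 1 - p_bird + p_flies)
    truth = min(1.0, 1.0 - p_bird + p_flies)
    return 1.0 - truth   # 满足度越低,惩罚越大

print(rule_violation_loss(0.9, 0.95))                     # 0.0,符合规则
print(rule_violation_loss(0.9, 0.10))                     # ≈ 0.8,违反规则
print(rule_violation_loss(0.9, 0.10, is_exception=True))  # 0.0,企鹅被豁免
```

训练时将该项加权累加到任务损失(如交叉熵)上,即可在不改动网络结构的前提下注入先验知识。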

10.3.2.2 常识推理增强

常识推理要求智能体利用隐含的背景知识解释和理解观测现象,这是具身智能从感知迈向认知的关键。常识知识图谱如ConceptNet提供了大量关于物体属性、空间关系及因果效应的断言。神经符号系统通过将自然语言描述或视觉场景解析为图节点,检索关联的常识知识进行推理增强。例如,在视觉问答任务中,识别出物体"杯子"后,系统从知识库检索其功能为"盛水",属性为"易碎",从而推断出"掉落会摔碎"的后果。知识增强模块通常采用图注意力网络,根据当前上下文动态筛选相关的常识三元组,融合进任务表示中。这使得模型不仅依赖像素级特征,还能进行基于语义关联的逻辑推断,显著提升了开放世界场景下的鲁棒性与可解释性。
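常识三元组的检索与筛选可用如下草图说明(ConceptNet 风格的三元组为手工构造的示例,基于关键词重叠的相关性打分是对图注意力动态加权的假设性简化):

```python
# 手工构造的 ConceptNet 风格常识三元组
COMMONSENSE = [
    ("cup", "UsedFor", "holding water"),
    ("cup", "HasProperty", "fragile"),
    ("cup", "AtLocation", "kitchen"),
    ("ball", "CapableOf", "bounce"),
]

def retrieve(entity, context_keywords):
    """检索与实体相关的三元组,并按与当前上下文关键词的
    重叠度排序(代替图注意力网络的动态加权)。"""
    related = [t for t in COMMONSENSE if t[0] == entity]
    def relevance(triple):
        words = set(triple[1].lower().split()) | set(triple[2].split())
        return len(words & set(context_keywords))
    return sorted(related, key=relevance, reverse=True)

# 视觉问答场景:识别出物体 "cup",问题上下文涉及掉落与易碎
facts = retrieve("cup", {"fragile", "drop"})
print(facts[0])  # ('cup', 'HasProperty', 'fragile')
```

检索出的高相关三元组随后可编码后拼接进任务表示,支持"杯子易碎,掉落会摔碎"一类的语义推断。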


第二部分:代码实现

脚本 10.1.1.1:一阶逻辑归结证明器

内容与使用方式: 本脚本实现了一个基于归结原理的一阶逻辑定理证明器,包含简化的子句解析器、合一算法及核心的归结推理引擎。系统采用"支持集(Set of Support)"策略进行反证推理,通过推导空子句验证目标公式的可证性。运行脚本将演示经典的"苏格拉底三段论"("所有人都会死,苏格拉底是人,故苏格拉底会死")的自动证明过程,并输出详细的归结步骤与空子句生成结果。

import re
from typing import List, Set, Tuple, Dict, Optional

class Term:
    """基础项:变量或常量或函数项"""
    def __init__(self, name: str, args: List['Term'] = None):
        self.name = name
        self.args = args if args is not None else []
        self.is_var = name[0].islower() and not self.args # 变量约定为小写开头且非函数

    def __repr__(self):
        if not self.args:
            return self.name
        return f"{self.name}({', '.join(map(str, self.args))})"

    def __eq__(self, other):
        return isinstance(other, Term) and self.name == other.name and self.args == other.args

    def __hash__(self):
        return hash((self.name, tuple(self.args)))

class Literal:
    """文字:谓词或其否定"""
    def __init__(self, predicate: str, args: List[Term], negated: bool = False):
        self.predicate = predicate
        self.args = args
        self.negated = negated

    def __repr__(self):
        neg = "~" if self.negated else ""
        return f"{neg}{self.predicate}({', '.join(map(str, self.args))})"

    def __eq__(self, other):
        return (self.predicate == other.predicate and 
                self.negated == other.negated and 
                self.args == other.args)

    def __hash__(self):
        return hash((self.predicate, self.negated, tuple(self.args)))

    def complementary(self, other: 'Literal') -> bool:
        return (self.predicate == other.predicate and 
                self.negated != other.negated)

class Clause:
    """子句:文字的析取"""
    def __init__(self, literals: List[Literal]):
        self.literals = literals

    def __repr__(self):
        if not self.literals:
            return "[]" # 空子句
        return " V ".join(map(str, self.literals))

    def __eq__(self, other):
        # 按文字集合比较,使集合去重与成员判断有意义
        return isinstance(other, Clause) and set(self.literals) == set(other.literals)

    def __hash__(self):
        return hash(frozenset(self.literals))

    def is_empty(self):
        return len(self.literals) == 0

class Unifier:
    """合一算法实现"""
    @staticmethod
    def occurs_check(var: Term, term: Term, substitution: Dict[str, Term]) -> bool:
        """检查变量 var 是否出现在 term 中(沿替换链展开)"""
        if term.is_var and term.name in substitution:
            return Unifier.occurs_check(var, substitution[term.name], substitution)
        if var == term:
            return True
        if term.args:
            return any(Unifier.occurs_check(var, arg, substitution) for arg in term.args)
        return False

    @staticmethod
    def unify_terms(t1: Term, t2: Term, substitution: Dict[str, Term]) -> Optional[Dict[str, Term]]:
        if substitution is None: return None
        if t1 == t2: return substitution
        if t1.is_var: return Unifier.unify_var(t1, t2, substitution)
        if t2.is_var: return Unifier.unify_var(t2, t1, substitution)
        if t1.name != t2.name or len(t1.args) != len(t2.args): return None
        
        for sub1, sub2 in zip(t1.args, t2.args):
            substitution = Unifier.unify_terms(sub1, sub2, substitution)
            if substitution is None: return None
        return substitution

    @staticmethod
    def unify_var(var: Term, term: Term, substitution: Dict[str, Term]) -> Optional[Dict[str, Term]]:
        if var.name in substitution:
            return Unifier.unify_terms(substitution[var.name], term, substitution)
        if term.name in substitution: # Term might be a var
            return Unifier.unify_terms(var, substitution[term.name], substitution)
        if Unifier.occurs_check(var, term, substitution):
            return None
        new_sub = substitution.copy()
        new_sub[var.name] = term
        return new_sub

class ResolutionEngine:
    """归结推理引擎"""
    def __init__(self):
        self.clauses: Set[Clause] = set()
        self.index = 0 # New var generator

    def standardize_apart(self, clause: Clause) -> Clause:
        """变量标准化,避免命名冲突"""
        renaming = {}
        def rename_term(term: Term) -> Term:
            if term.is_var:
                if term.name not in renaming:
                    renaming[term.name] = Term(f"v_{self.index}")
                    self.index += 1
                return renaming[term.name]
            elif term.args:
                return Term(term.name, [rename_term(arg) for arg in term.args])
            return term
        
        new_literals = []
        for lit in clause.literals:
            new_args = [rename_term(arg) for arg in lit.args]
            new_literals.append(Literal(lit.predicate, new_args, lit.negated))
        return Clause(new_literals)

    def resolve(self, c1: Clause, c2: Clause) -> List[Clause]:
        """对两个子句进行归结"""
        resolvents = []
        c1 = self.standardize_apart(c1)
        c2 = self.standardize_apart(c2)

        for i, l1 in enumerate(c1.literals):
            for j, l2 in enumerate(c2.literals):
                if l1.complementary(l2):
                    # 尝试合一
                    sub = {}
                    # 仅支持一阶项合一,忽略谓词参数顺序差异简化版
                    if len(l1.args) == len(l2.args):
                        for a1, a2 in zip(l1.args, l2.args):
                            sub = Unifier.unify_terms(a1, a2, sub)
                            if sub is None: break
                        
                        if sub is not None:
                            # 生成归结式
                            new_literals = []
                            # 应用替换
                            def apply_sub(term: Term):
                                if term.is_var and term.name in sub:
                                    return sub[term.name]
                                if term.args:
                                    return Term(term.name, [apply_sub(a) for a in term.args])
                                return term

                            for k, lit in enumerate(c1.literals):
                                if k != i:
                                    new_args = [apply_sub(a) for a in lit.args]
                                    new_literals.append(Literal(lit.predicate, new_args, lit.negated))
                            
                            for k, lit in enumerate(c2.literals):
                                if k != j:
                                    new_args = [apply_sub(a) for a in lit.args]
                                    new_literals.append(Literal(lit.predicate, new_args, lit.negated))
                            
                            resolvents.append(Clause(new_literals))
        return resolvents

    def add_knowledge(self, clause_strs: List[str]):
        """解析并添加子句(简化解析逻辑)"""
        for s in clause_strs:
            # 简单解析: "Pred(A, b) V ~Pred2(C)"
            literals = []
            parts = s.split(" V ")
            for p in parts:
                p = p.strip()
                neg = False
                if p.startswith("~"):
                    neg = True
                    p = p[1:]
                
                match = re.match(r"(\w+)\((.*)\)", p)
                if match:
                    pred = match.group(1)
                    args_str = match.group(2).split(",")
                    args = [Term(a.strip()) for a in args_str]
                    literals.append(Literal(pred, args, neg))
            self.clauses.add(Clause(literals))

    def prove(self, goal_clause: Clause) -> bool:
        """反演归结证明"""
        sos = set([goal_clause]) # Set of Support
        all_clauses = self.clauses.union(sos)
        
        print(f"Initial Clauses: {len(all_clauses)}")
        print(f"Goal (Negated): {goal_clause}")
        
        iteration = 0
        while sos:
            iteration += 1
            if iteration > 50: break # Limit depth
            
            new_sos = set()
            current_clause = sos.pop()
            
            # 与现有子句集进行归结
            for old_clause in all_clauses:
                if old_clause == current_clause: continue
                resolvents = self.resolve(current_clause, old_clause)
                
                for r in resolvents:
                    print(f"  Resolving {current_clause} with {old_clause} -> {r}")
                    if r.is_empty():
                        print("\n[SUCCESS] Empty clause derived! Proof found.")
                        return True
                    if r not in all_clauses:
                        new_sos.add(r)
            
            all_clauses.add(current_clause)
            sos.update(new_sos)

        print("\n[FAIL] No proof found within limits.")
        return False

# Visualization / Execution
if __name__ == "__main__":
    engine = ResolutionEngine()
    
    # Example: "All humans are mortal" -> ~Human(x) V Mortal(x)
    # "Socrates is human" -> Human(Socrates)
    # Goal: "Socrates is mortal" -> ~Mortal(Socrates) (Negated for refutation)
    
    knowledge_base = [
        "~Human(x) V Mortal(x)", # Human(x) -> Mortal(x)
        "Human(Socrates)"
    ]
    engine.add_knowledge(knowledge_base)
    
    # Negation of Goal: Mortal(Socrates)
    negated_goal = Clause([Literal("Mortal", [Term("Socrates")], negated=True)])
    
    engine.prove(negated_goal)

脚本 10.1.1.2:描述逻辑推理机

内容与使用方式: 本脚本实现了一个基于Tableau算法的简化描述逻辑推理机,支持基本的类构造算子(交集与存在量词)。系统构建ABox(概念断言)与角色断言集,并通过Tableau扩展规则检查一致性;冲突检测部分仅为示意性占位,完整系统需先将概念转为否定范式。代码包含ABox与角色断言的可视化输出。运行脚本将验证一个具体的DL知识库的一致性,并输出推理过程。

from typing import List, Set, Dict, Tuple

class DLConcept:
    """描述逻辑概念表达式"""
    pass

class AtomicConcept(DLConcept):
    def __init__(self, name: str):
        self.name = name
    def __repr__(self):
        return self.name
    def __eq__(self, other):
        return isinstance(other, AtomicConcept) and self.name == other.name
    def __hash__(self):
        return hash(self.name)

class Intersection(DLConcept):
    def __init__(self, left: DLConcept, right: DLConcept):
        self.left = left
        self.right = right
    def __repr__(self):
        return f"({self.left} AND {self.right})"

class Existential(DLConcept):
    def __init__(self, role: str, filler: DLConcept):
        self.role = role
        self.filler = filler
    def __repr__(self):
        return f"(exists {self.role}.{self.filler})"

class Individual:
    def __init__(self, name: str):
        self.name = name
    def __repr__(self):
        return self.name
    def __hash__(self):
        return hash(self.name)
    def __eq__(self, other):
        return isinstance(other, Individual) and self.name == other.name

class TableauProver:
    """Tableau推理机"""
    def __init__(self):
        self.abox: Dict[Individual, Set[DLConcept]] = {} # Individual -> Concepts
        self.rbox: Set[Tuple[Individual, str, Individual]] = set() # (s, r, t)
        self.named_individuals = set()

    def add_assertion(self, ind: Individual, concept: DLConcept):
        if ind not in self.abox:
            self.abox[ind] = set()
        self.abox[ind].add(concept)
        self.named_individuals.add(ind)

    def add_role_assertion(self, s: Individual, role: str, t: Individual):
        self.rbox.add((s, role, t))
        self.named_individuals.add(s)
        self.named_individuals.add(t)

    def check_consistency(self) -> bool:
        print("Starting Tableau Consistency Check...")
        # Simplified expansion rules
        queue = list(self.named_individuals)
        
        while queue:
            ind = queue.pop(0)
            if ind not in self.abox: continue
            
            current_concepts = list(self.abox[ind])
            
            for concept in current_concepts:
                # Rule 1: Intersection (AND)
                if isinstance(concept, Intersection):
                    if concept.left not in self.abox[ind]:
                        self.abox[ind].add(concept.left)
                        print(f"Applying AND rule: {ind} -> {concept.left}")
                        queue.append(ind) # Re-add to process new concept
                    if concept.right not in self.abox[ind]:
                        self.abox[ind].add(concept.right)
                        print(f"Applying AND rule: {ind} -> {concept.right}")
                        queue.append(ind)

                # Rule 2: Existential (exists r.C)
                if isinstance(concept, Existential):
                    # Check if there exists a role assertion (ind, role, y) with C(y)
                    found_successor = False
                    for s, r, t in self.rbox:
                        if s == ind and r == concept.role:
                            if concept.filler in self.abox.get(t, set()):
                                found_successor = True
                                break
                    
                    if not found_successor:
                        # Generate new individual
                        new_ind_name = f"gen_{len(self.named_individuals)}"
                        new_ind = Individual(new_ind_name)
                        print(f"Applying EXISTS rule: Creating {new_ind} for {concept}")
                        
                        self.add_role_assertion(ind, concept.role, new_ind)
                        self.add_assertion(new_ind, concept.filler)
                        queue.append(new_ind)
                        queue.append(ind) # Re-check current
                        
                # Check Clash (Simple version: A and NOT A)
                # For simplicity, we assume a 'Not' class exists, here just checking basic atomic clashes
                # In a full system, we need negation normal form.
                if isinstance(concept, AtomicConcept):
                    # Check if negation exists (mockup)
                    pass 
        
        print("Consistency Check Finished. Model found.")
        return True

# Visualization
def visualize_abox(abox, rbox):
    print("\n--- ABox Visualization ---")
    for ind, concepts in abox.items():
        c_str = ", ".join(map(str, concepts))
        print(f"Individual {ind}: {{{c_str}}}")
    
    print("\n--- RBox (Relations) ---")
    for s, r, t in rbox:
        print(f"{s} --[{r}]--> {t}")

if __name__ == "__main__":
    prover = TableauProver()
    
    # Example: Person AND exists hasChild.Doctor
    person = AtomicConcept("Person")
    doctor = AtomicConcept("Doctor")
    has_child_doc = Existential("hasChild", doctor)
    complex_concept = Intersection(person, has_child_doc)
    
    alice = Individual("Alice")
    prover.add_assertion(alice, complex_concept)
    
    prover.check_consistency()
    visualize_abox(prover.abox, prover.rbox)

脚本 10.1.2.1:STRIPS规划器

内容与使用方式: 本脚本实现了一个完整的STRIPS前向搜索规划器,定义了Action、State与Planner类,支持前提条件检测、效果应用及状态空间搜索。规划器使用广度优先搜索寻找从初始状态到目标状态的最短动作序列。运行脚本将解决一个简化的"积木世界"问题,可视化输出每一步的状态变化及动作序列。

from typing import List, Set, FrozenSet, Tuple, Optional

class State:
    """状态:基原子的集合"""
    def __init__(self, facts: Set[str]):
        self.facts = frozenset(facts)
    
    def __repr__(self):
        return "{" + ", ".join(sorted(list(self.facts))) + "}"
    
    def __eq__(self, other):
        return self.facts == other.facts
    
    def __hash__(self):
        return hash(self.facts)

class Action:
    """STRIPS动作"""
    def __init__(self, name: str, preconditions: Set[str], add_effects: Set[str], del_effects: Set[str]):
        self.name = name
        self.preconditions = frozenset(preconditions)
        self.add = frozenset(add_effects)
        self.del_list = frozenset(del_effects)

    def __repr__(self):
        return self.name
    
    def is_applicable(self, state: State) -> bool:
        return self.preconditions.issubset(state.facts)
    
    def apply(self, state: State) -> State:
        if not self.is_applicable(state):
            raise ValueError("Action not applicable")
        new_facts = (state.facts - self.del_list) | self.add
        return State(new_facts)

class STRIPSPlanner:
    """前向状态空间搜索规划器"""
    def __init__(self, actions: List[Action]):
        self.actions = actions

    def plan(self, initial_state: State, goal_state: State) -> Optional[List[Action]]:
        # BFS Search
        frontier = [(initial_state, [])] # (state, path)
        explored = set()
        explored.add(initial_state)

        print(f"Planning from {initial_state} to {goal_state}...")

        while frontier:
            current_state, path = frontier.pop(0)
            
            if goal_state.facts.issubset(current_state.facts):
                print(f"Solution found! Length: {len(path)}")
                return path

            for action in self.actions:
                if action.is_applicable(current_state):
                    next_state = action.apply(current_state)
                    if next_state not in explored:
                        explored.add(next_state)
                        new_path = path + [action]
                        frontier.append((next_state, new_path))
        
        print("No plan found.")
        return None

def visualize_plan(plan: List[Action], initial_state: State):
    print("\n--- Plan Execution Trace ---")
    current = initial_state
    print(f"Start State: {current}")
    for i, action in enumerate(plan):
        print(f"Step {i+1}: Execute {action}")
        current = action.apply(current)
        print(f"  -> New State: {current}")

if __name__ == "__main__":
    # Domain: Blocks World Simplified
    # Predicates: On(A, B), Clear(B), Holding(A), HandEmpty
    # Actions: Pick(A), Put(A, B)
    
    pick_A = Action(
        name="Pick(A)",
        preconditions={"Clear(A)", "On(A, Table)", "HandEmpty"},
        add_effects={"Holding(A)"},
        del_effects={"Clear(A)", "On(A, Table)", "HandEmpty"}
    )
    
    put_A_on_B = Action(
        name="Put(A, B)",
        preconditions={"Holding(A)", "Clear(B)"},
        add_effects={"On(A, B)", "Clear(A)", "HandEmpty"},
        del_effects={"Holding(A)", "Clear(B)"}
    )

    actions = [pick_A, put_A_on_B]

    # Initial: A on Table, B on Table, A Clear, B Clear, HandEmpty
    init = State({"On(A, Table)", "On(B, Table)", "Clear(A)", "Clear(B)", "HandEmpty"})
    
    # Goal: A on B
    goal = State({"On(A, B)"})  # 目标只需给出必须成立的事实子集,用 issubset 检测

    planner = STRIPSPlanner(actions)
    plan = planner.plan(init, goal)
    
    if plan:
        visualize_plan(plan, init)

脚本 10.1.2.2:启发式搜索算法

内容与使用方式: 本脚本实现了A*算法用于网格地图路径规划,包含障碍物处理、移动代价计算及启发式函数设计。搜索结束后以字符网格可视化已探索区域与最终生成的最优路径。运行脚本将展示一个智能体如何在迷宫环境中寻找最短路径。

import heapq
import math
from typing import Tuple, List, Set, Optional

class Node:
    def __init__(self, pos: Tuple[int, int], g=0, h=0, parent=None):
        self.pos = pos
        self.g = g
        self.h = h
        self.f = g + h
        self.parent = parent

    def __lt__(self, other):
        return self.f < other.f

class AStarPlanner:
    def __init__(self, grid: List[List[int]]):
        self.grid = grid
        self.rows = len(grid)
        self.cols = len(grid[0])
        self.visited_visual = [] # For visualization

    def heuristic(self, a: Tuple[int, int], b: Tuple[int, int]) -> float:
        # Euclidean Distance
        return math.sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2)

    def get_neighbors(self, node: Node) -> List[Tuple[Tuple[int, int], float]]:
        neighbors = []
        # 8-connectivity
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
        for dr, dc in moves:
            r, c = node.pos[0] + dr, node.pos[1] + dc
            if 0 <= r < self.rows and 0 <= c < self.cols:
                if self.grid[r][c] == 0: # 0 is free space
                    # Cost: 1 for straight, sqrt(2) for diagonal
                    cost = 1.0 if abs(dr) + abs(dc) == 1 else 1.414
                    neighbors.append(((r, c), cost))
        return neighbors

    def plan(self, start: Tuple[int, int], goal: Tuple[int, int]) -> Optional[List[Tuple[int, int]]]:
        open_set = []
        closed_set = set()
        
        start_node = Node(start, g=0, h=self.heuristic(start, goal))
        heapq.heappush(open_set, start_node)
        
        open_dict = {start: start_node} # For fast lookup

        print(f"Starting A* Search from {start} to {goal}...")

        while open_set:
            current = heapq.heappop(open_set)
            del open_dict[current.pos]
            
            if current.pos == goal:
                return self.reconstruct_path(current)
            
            closed_set.add(current.pos)
            self.visited_visual.append(current.pos)

            for neighbor_pos, cost in self.get_neighbors(current):
                if neighbor_pos in closed_set:
                    continue
                
                tentative_g = current.g + cost
                
                if neighbor_pos in open_dict:
                    neighbor_node = open_dict[neighbor_pos]
                    if tentative_g < neighbor_node.g:
                        neighbor_node.g = tentative_g
                        neighbor_node.f = tentative_g + neighbor_node.h
                        neighbor_node.parent = current
                        heapq.heapify(open_set) # Re-heapify
                else:
                    neighbor_node = Node(
                        pos=neighbor_pos,
                        g=tentative_g,
                        h=self.heuristic(neighbor_pos, goal),
                        parent=current
                    )
                    heapq.heappush(open_set, neighbor_node)
                    open_dict[neighbor_pos] = neighbor_node
        
        return None

    def reconstruct_path(self, node: Node) -> List[Tuple[int, int]]:
        path = []
        while node:
            path.append(node.pos)
            node = node.parent
        return path[::-1]

    def visualize(self, path: List[Tuple[int, int]]):
        print("\n--- Map Visualization ---")
        # Create display grid
        display = [row[:] for row in self.grid] # Copy
        
        # Mark visited
        for r, c in self.visited_visual:
            if display[r][c] == 0:
                display[r][c] = '.' # Explored
        
        # Mark path
        for r, c in path:
            display[r][c] = 'x'
        
        # Mark start/end
        display[path[0][0]][path[0][1]] = 'S'
        display[path[-1][0]][path[-1][1]] = 'E'

        for row in display:
            print(" ".join(str(c) for c in row))

if __name__ == "__main__":
    # 0: Free, 1: Obstacle
    grid_map = [
        [0, 0, 0, 1, 0, 0],
        [0, 1, 0, 1, 0, 0],
        [0, 1, 0, 1, 0, 0],
        [0, 1, 0, 0, 0, 0],
        [0, 0, 0, 1, 1, 0],
    ]
    
    planner = AStarPlanner(grid_map)
    start_pos = (0, 0)
    goal_pos = (4, 5)
    
    path = planner.plan(start_pos, goal_pos)
    
    if path:
        print(f"Path found with length {len(path)}")
        planner.visualize(path)
    else:
        print("No path found.")

脚本 10.2.1.1:神经定理证明器

内容与使用方式: 本脚本实现了一个概念性的神经定理证明器,利用PyTorch构建神经网络模型,对逻辑子句进行嵌入编码并打分,以模拟定理证明中前提选择策略在大型搜索空间中的学习过程。运行脚本需安装PyTorch,将展示神经网络如何辅助选择前提进行推理。

import torch
import torch.nn as nn
import torch.optim as optim
from typing import List
import random

# Simplified representation for demo
class ClauseRepresentation:
    def __init__(self, text: str):
        self.text = text

class NeuralProverModel(nn.Module):
    """简单的子句编码与选择网络"""
    def __init__(self, vocab_size, embed_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, 1) # Score

    def forward(self, x):
        # x: (batch, seq_len)
        embedded = self.embedding(x) # (batch, seq_len, dim)
        # Global average pooling
        pooled = embedded.mean(dim=1) # (batch, dim)
        score = self.fc(pooled) # (batch, 1)
        return torch.sigmoid(score)

class MockTokenizer:
    def __init__(self):
        self.vocab = {" ": 0, "P": 1, "Q": 2, "R": 3, "~": 4, "V": 5}
    
    def encode(self, text: str) -> List[int]:
        # Very basic char-level encoding
        return [self.vocab.get(c, 0) for c in text]

class NeuralTheoremProver:
    def __init__(self):
        self.tokenizer = MockTokenizer()
        self.model = NeuralProverModel(vocab_size=10, embed_dim=16)
        self.optimizer = optim.Adam(self.model.parameters(), lr=0.01)
        print("Neural Theorem Prover initialized.")

    def train_step(self, goal: str, premises: List[str], correct_idx: int):
        """训练网络预测哪个前提最相关"""
        self.model.train()
        
        # Prepare inputs
        inputs = [goal] + premises
        tensors = [torch.tensor([self.tokenizer.encode(t)], dtype=torch.long) for t in inputs]
        
        # Forward pass
        scores = []
        for t in tensors:
            scores.append(self.model(t))
        
        scores_tensor = torch.cat(scores)
        
        # Target: One-hot like label
        target = torch.zeros(len(inputs))
        target[correct_idx + 1] = 1.0 # +1 because goal is at index 0
        
        # Loss (Binary Cross Entropy for simplicity in this demo)
        loss = nn.BCELoss()(scores_tensor.squeeze(), target)
        
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        
        return loss.item()

    def infer(self, goal: str, premises: List[str]) -> int:
        """推理:选择网络打分最高的前提"""
        self.model.eval()
        with torch.no_grad():
            best_score = -1
            best_idx = -1
            for i, p in enumerate(premises):
                p_t = torch.tensor([self.tokenizer.encode(p)], dtype=torch.long)
                score = self.model(p_t).item()  # 网络对该前提的相关性打分
                if score > best_score:
                    best_score = score
                    best_idx = i
        return best_idx

if __name__ == "__main__":
    prover = NeuralTheoremProver()
    
    # Mock Training Data
    # Goal: Q(x), Premises: [P(x), Q(x), R(x)] -> Correct: Q(x)
    print("Training...")
    for _ in range(100):
        loss = prover.train_step("Q", ["P", "Q", "R"], 1)
    
    # Inference
    goal = "Q"
    premises = ["~P", "Q", "R"]
    best_premise_idx = prover.infer(goal, premises)
    
    print(f"\nInference for Goal '{goal}':")
    print(f"Available Premises: {premises}")
    print(f"Neural Network selected premise: {premises[best_premise_idx]}")

脚本 10.2.1.2:可微分归纳逻辑

内容与使用方式: 本脚本实现了一个可微分的逻辑谓词系统,定义了软逻辑操作,使得逻辑推理过程能够通过梯度下降优化。系统学习谓词的真值度,并尝试满足模糊的逻辑规则。运行脚本将展示如何通过优化使得数据满足"如果X是A,则X是B"的模糊规则。

import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt

class FuzzyPredicate:
    """模糊谓词,真值在[0,1]之间"""
    def __init__(self, name: str):
        self.name = name
        # 随机初始化真值参数 (假设针对一组对象)
        self.values = nn.Parameter(torch.rand(10)) 

    def __call__(self, idx):
        return self.values[idx]

class DiffLogicSystem:
    def __init__(self):
        self.pred_A = FuzzyPredicate("A")
        self.pred_B = FuzzyPredicate("B")
        self.params = [self.pred_A.values, self.pred_B.values]
        self.optimizer = optim.Adam(self.params, lr=0.1)
        self.loss_history = []

    def fuzzy_and(self, a, b):
        return torch.min(a, b) # Product is also common: a * b
    
    def fuzzy_or(self, a, b):
        return torch.max(a, b) # Probabilistic sum: a + b - a*b
    
    def fuzzy_implies(self, a, b):
        # Lukasiewicz implication: min(1, 1 - a + b)
        # (Kleene-Dienes implication max(1-a, b) is another common choice)
        # 注意 torch.min 不接受 Python 标量,这里用 clamp 实现上界截断
        return torch.clamp(1.0 - a + b, max=1.0)

    def forward(self):
        """计算规则 A -> B 的满足度"""
        # 对所有样本计算 A(x) -> B(x)
        vals_a = self.pred_A.values
        vals_b = self.pred_B.values
        
        # 目标:最大化 (A -> B) 的真值
        implication_values = self.fuzzy_implies(vals_a, vals_b)
        
        # 损失:希望所有实例都满足规则,同时给A一些初始约束(可选)
        # 这里我们假设我们希望A的真值较高(观测事实),且规则成立
        # Loss = 1 - Satisfaction
        
        satisfaction = torch.mean(implication_values)
        loss = 1.0 - satisfaction
        
        # 添加一些正则化,防止全退化为0
        loss += 0.1 * torch.mean(1.0 - vals_a) 
        
        return loss

    def train(self, epochs=50):
        print("Training Differentiable Logic System...")
        for i in range(epochs):
            self.optimizer.zero_grad()
            loss = self.forward()
            loss.backward()
            self.optimizer.step()
            self.loss_history.append(loss.item())
            
            # Clamp values to [0, 1]
            self.pred_A.values.data.clamp_(0, 1)
            self.pred_B.values.data.clamp_(0, 1)
            
            if i % 10 == 0:
                print(f"Epoch {i}: Loss={loss.item():.4f}, Mean A={self.pred_A.values.mean():.2f}, Mean B={self.pred_B.values.mean():.2f}")

    def visualize(self):
        plt.figure(figsize=(8, 4))
        plt.subplot(1, 2, 1)
        plt.plot(self.loss_history)
        plt.title("Logic Constraint Loss")
        plt.xlabel("Epoch")
        
        plt.subplot(1, 2, 2)
        x = np.arange(10)
        width = 0.35
        plt.bar(x - width/2, self.pred_A.values.detach().numpy(), width, label='A')
        plt.bar(x + width/2, self.pred_B.values.detach().numpy(), width, label='B')
        plt.title("Predicate Truth Values")
        plt.legend()
        plt.show()

if __name__ == "__main__":
    system = DiffLogicSystem()
    system.train()
    system.visualize()
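作为补充示意(非原脚本内容),下面的小片段在一组示例真值上对比了 Gödel(min)与 Product(乘积)两种 t-范数,以及上文采用的 Łukasiewicz 蕴含,便于直观理解不同软逻辑算子的取值差异:

```python
import torch

# 两个模糊真值向量(示例数据,非原脚本的训练结果)
a = torch.tensor([0.2, 0.5, 0.9])
b = torch.tensor([0.6, 0.5, 0.3])

godel_and = torch.min(a, b)                        # Godel t-范数: min(a, b)
product_and = a * b                                # Product t-范数: a * b
luka_implies = torch.clamp(1.0 - a + b, max=1.0)   # Lukasiewicz 蕴含: min(1, 1-a+b)

print("Godel AND:   ", godel_and.tolist())
print("Product AND: ", product_and.tolist())
print("Implies:     ", luka_implies.tolist())
```

两种合取算子均可微(min 在分段点外可导),因此都能嵌入梯度下降流程;Product 形式梯度更平滑,min 形式则保持幂等性。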

脚本 10.2.2.1:程序归纳

本脚本实现了一个基于神经引导搜索的程序归纳原型。定义了一个简单的领域特定语言(DSL),包含基本的列表操作。系统使用神经网络模型预测每一步最可能的代码片段,并通过执行验证搜索正确的程序。运行脚本将尝试归纳出一个满足给定输入输出示例的列表处理程序。内容与使用方式:

import torch
import torch.nn as nn
import random
from typing import List, Tuple, Optional

# --- DSL Definition ---
class Program:
    def __init__(self, ops: List[str]):
        self.ops = ops
    
    def execute(self, input_list: List[int]) -> List[int]:
        # Extremely simplified interpreter for demo
        # Ops: "HEAD" (take first), "TAIL" (drop first), "INC" (increment all)
        data = list(input_list)
        for op in self.ops:
            if not data: return []
            if op == "HEAD":
                data = [data[0]]
            elif op == "TAIL":
                data = data[1:]
            elif op == "INC":
                data = [x+1 for x in data]
        return data

    def __repr__(self):
        return " -> ".join(self.ops)

class ProgramSynthesizer:
    def __init__(self):
        self.dsl_ops = ["HEAD", "TAIL", "INC"]
        # RNN Model to predict next operation
        self.embed = nn.Embedding(len(self.dsl_ops) + 1, 10)  # +1 为 SOS token 预留索引
        self.lstm = nn.LSTM(10, 20, batch_first=True)
        self.fc = nn.Linear(20, len(self.dsl_ops))
        self.optimizer = torch.optim.Adam(self.parameters(), lr=0.01)
        self.loss_fn = nn.CrossEntropyLoss()
        
    def parameters(self):
        return list(self.embed.parameters()) + list(self.lstm.parameters()) + list(self.fc.parameters())

    def train_model(self, correct_programs: List[List[int]]):
        """训练模型预测程序中的下一个token"""
        print("Training Synthesizer Model...")
        for prog_indices in correct_programs:
            self.optimizer.zero_grad()
            
            # Input: sequence of ops, Target: next op
            # Input: [SOS] + prog[:-1], Target: prog
            inputs = torch.tensor([[len(self.dsl_ops)] + prog_indices[:-1]]) # SOS token index = len
            targets = torch.tensor([prog_indices])
            
            emb = self.embed(inputs)
            out, _ = self.lstm(emb)
            logits = self.fc(out)
            
            loss = self.loss_fn(logits.view(-1, len(self.dsl_ops)), targets.view(-1))
            loss.backward()
            self.optimizer.step()

    def synthesize(self, examples: List[Tuple[List[int], List[int]]], max_depth=3) -> Optional[Program]:
        """基于输入输出示例搜索程序"""
        print(f"Synthesizing program for examples: {examples}...")
        
        # 1. Encode IO examples (Simplified: we don't use complex encoders here)
        # 2. Iterative Deepening Search guided by NN
        
        for d in range(1, max_depth + 1):
            # DFS with NN heuristics
            stack = [([], (torch.zeros(1, 1, 20), torch.zeros(1, 1, 20)))]  # (prefix_ops, LSTM 的 (h, c) 隐状态)
            
            while stack:
                current_ops, hidden = stack.pop()
                
                if len(current_ops) == d:
                    prog = Program(current_ops)
                    if all(prog.execute(inp) == outp for inp, outp in examples):
                        return prog
                    continue
                
                # Predict next ops distribution
                last_op_idx = len(self.dsl_ops) if not current_ops else self.dsl_ops.index(current_ops[-1])
                inp_tensor = torch.tensor([[last_op_idx]])
                emb = self.embed(inp_tensor)
                out, new_hidden = self.lstm(emb, hidden)
                logits = self.fc(out).squeeze()
                probs = torch.softmax(logits, dim=-1).detach().numpy()
                
                # Sort ops by probability
                sorted_ops = sorted(zip(self.dsl_ops, probs), key=lambda x: -x[1])
                
                for op, prob in sorted_ops:
                    if prob > 0.05: # Threshold
                        stack.append((current_ops + [op], new_hidden))
        return None

if __name__ == "__main__":
    synthesizer = ProgramSynthesizer()
    
    # Pre-train with some dummy programs to give it a prior
    # Suppose we have training data: [HEAD, TAIL] or [INC]
    dummy_training = [[0, 1], [2]] # indices
    synthesizer.train_model(dummy_training)
    
    # Synthesis Task: Get second element?
    # Input: [1, 2, 3] -> Output: [2]
    # Solution: HEAD -> TAIL ? Or TAIL -> HEAD?
    # TAIL([1,2,3]) -> [2,3]; HEAD([2,3]) -> [2]. Correct: TAIL, HEAD
    
    examples = [([1, 2, 3], [2]), ([10, 20], [20])]
    
    result = synthesizer.synthesize(examples)
    if result:
        print(f"Found Program: {result}")
        print(f"Verification: {result.execute([1,2,3])}")
    else:
        print("Failed to find program.")

脚本 10.2.2.2:神经程序解释器

本脚本构建了一个简化的神经程序解释器模型。使用RNN模拟CPU执行过程,包含内存读写机制。模型基于当前指令与内存状态生成下一步操作。脚本可视化了内部状态与内存的变化轨迹,展示神经网络如何模拟程序执行逻辑。内容与使用方式:

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt

class NeuralInterpreter(nn.Module):
    def __init__(self, mem_size=5, hidden_size=32):
        super().__init__()
        self.hidden_size = hidden_size
        self.mem_size = mem_size
        
        # Controller
        self.rnn = nn.GRUCell(2 + mem_size, hidden_size) # Input: (Instruction, Arg, Memory Read)
        
        # Heads
        self.read_head = nn.Linear(hidden_size, mem_size) # Soft attention over memory
        self.write_head = nn.Linear(hidden_size, mem_size) 
        
        # Output
        self.output_layer = nn.Linear(hidden_size, 2) # Predict next instruction id
        self.halt_layer = nn.Linear(hidden_size, 1) # Halt probability

    def forward(self, instruction, arg, memory, state):
        # Read phase
        read_weights = torch.softmax(self.read_head(state), dim=-1)
        read_vec = memory * read_weights  # 注意力加权后的内存向量(保持 mem_size 维,以匹配控制器输入维度)
        
        # Input to controller
        # Embed instruction and arg simply as one-hot or features
        inp = torch.cat([torch.tensor([instruction, arg], dtype=torch.float), read_vec]).unsqueeze(0)
        
        # Update state
        next_state = self.rnn(inp, state.unsqueeze(0))
        next_state = next_state.squeeze(0)
        
        # Write phase
        write_weights = torch.softmax(self.write_head(next_state), dim=-1)
        # Update memory (simplified additive update)
        memory = memory + write_weights * 0.1 # Small update
        
        # Output
        out_pred = self.output_layer(next_state)
        halt_prob = torch.sigmoid(self.halt_layer(next_state))
        
        return out_pred, halt_prob, next_state, memory

class InterpreterSystem:
    def __init__(self):
        self.model = NeuralInterpreter()
        self.trace = []

    def run(self, program: list, initial_mem: np.ndarray, steps=10):
        print(f"Executing Program: {program}")
        state = torch.zeros(self.model.hidden_size)
        mem = torch.tensor(initial_mem, dtype=torch.float)
        
        pc = 0 # Program counter
        
        for t in range(steps):
            # Fetch
            if pc < len(program):
                inst, arg = program[pc]
            else:
                inst, arg = 0, 0 # NOP
            
            # Execute Step
            out_pred, halt, state, mem = self.model(inst, arg, mem, state)
            
            # Record Trace
            self.trace.append({
                "t": t,
                "pc": pc,
                "inst": inst,
                "mem": mem.detach().numpy().copy(),
                "state_norm": state.norm().item()
            })
            
            # Simple hard logic for PC update for visualization, 
            # in real NPI it predicts next PC
            pc += 1 
            
            if halt > 0.9:
                print(f"Halted at step {t}")
                break
        
        self.visualize()

    def visualize(self):
        print("\n--- Execution Trace Visualization ---")
        times = [t['t'] for t in self.trace]
        mem_vals = np.array([t['mem'] for t in self.trace])
        state_norms = [t['state_norm'] for t in self.trace]

        plt.figure(figsize=(10, 6))
        
        # Plot Memory Evolution
        plt.subplot(2, 1, 1)
        for i in range(mem_vals.shape[1]):
            plt.plot(times, mem_vals[:, i], label=f'Mem[{i}]')
        plt.title("Memory State Evolution")
        plt.legend()
        
        # Plot Controller State Norm
        plt.subplot(2, 1, 2)
        plt.plot(times, state_norms, label='State Norm')
        plt.title("Controller State Magnitude")
        plt.tight_layout()
        plt.show()

if __name__ == "__main__":
    system = InterpreterSystem()
    # Program: (Instruction, Argument)
    # 1: Add to memory, 2: Multiply
    program = [(1, 10), (1, 5), (2, 0)]
    initial_memory = np.zeros(5)
    system.run(program, initial_memory)
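作为参照(非原脚本内容),下面的小片段给出同一程序格式的一个离散参考解释器。其中指令语义属于演示性假设(原脚本并未给出具体语义):指令 1 将参数累加到 Mem[0],指令 2 将 Mem[0] 乘以 2。它可用来对比神经解释器的软性内存更新与确定性的符号执行:

```python
from typing import List, Tuple

def symbolic_run(program: List[Tuple[int, int]], mem_size: int = 5) -> List[float]:
    # 演示用的离散语义(假设):inst 1 => Mem[0] += arg;inst 2 => Mem[0] *= 2
    mem = [0.0] * mem_size
    for inst, arg in program:
        if inst == 1:
            mem[0] += arg
        elif inst == 2:
            mem[0] *= 2
    return mem

program = [(1, 10), (1, 5), (2, 0)]
print(symbolic_run(program))  # Mem[0] = (10 + 5) * 2 = 30
```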

脚本 10.3.1.1:图嵌入方法

本脚本实现了经典的TransE知识图谱嵌入算法。包含数据预处理、负采样训练策略及向量计算模块。系统支持自定义实体与关系,通过训练优化实体与关系的向量表示。运行脚本将训练一个简单的知识图谱,并可视化实体在嵌入空间中的分布。内容与使用方式:

import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

class KGraph:
    def __init__(self):
        self.entities = {}
        self.relations = {}
        self.triples = []
        self.idx2ent = {}
        self.idx2rel = {}

    def add_triple(self, h, r, t):
        if h not in self.entities:
            self.entities[h] = len(self.entities)
        if t not in self.entities:
            self.entities[t] = len(self.entities)
        if r not in self.relations:
            self.relations[r] = len(self.relations)
        
        self.triples.append((self.entities[h], self.relations[r], self.entities[t]))
        self.idx2ent[self.entities[h]] = h
        self.idx2ent[self.entities[t]] = t
        self.idx2rel[self.relations[r]] = r

class TransE(nn.Module):
    def __init__(self, num_entities, num_relations, dim=50):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)
        self.rel_emb = nn.Embedding(num_relations, dim)
        # Initialize
        nn.init.xavier_uniform_(self.ent_emb.weight)
        nn.init.xavier_uniform_(self.rel_emb.weight)
        
    def forward(self, h_idx, r_idx, t_idx):
        h = self.ent_emb(h_idx)
        r = self.rel_emb(r_idx)
        t = self.ent_emb(t_idx)
        return torch.norm(h + r - t, p=2, dim=1)

class EmbeddingTrainer:
    def __init__(self, graph: KGraph, model: nn.Module, lr=0.01):
        self.graph = graph
        self.model = model
        self.optimizer = optim.Adam(model.parameters(), lr=lr)
        self.loss_history = []

    def train(self, epochs=100, batch_size=32):
        print("Training Embedding Model...")
        triples = torch.tensor(self.graph.triples)
        n_triples = len(triples)
        
        for epoch in range(epochs):
            # Sample batch
            indices = np.random.choice(n_triples, batch_size)
            batch = triples[indices]
            
            h = batch[:, 0]
            r = batch[:, 1]
            t = batch[:, 2]
            
            # Positive samples
            pos_score = self.model(h, r, t)
            
            # Negative sampling (Simple replacement)
            neg_h = h.clone()
            neg_t = t.clone()
            # Randomly replace head or tail
            for i in range(batch_size):
                if np.random.rand() < 0.5:
                    neg_h[i] = np.random.randint(len(self.graph.entities))
                else:
                    neg_t[i] = np.random.randint(len(self.graph.entities))
            
            neg_score = self.model(neg_h, r, neg_t)
            
            # Margin Loss
            loss = torch.mean(torch.relu(pos_score - neg_score + 1.0))
            
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()
            
            self.loss_history.append(loss.item())
            
            if epoch % 20 == 0:
                print(f"Epoch {epoch}: Loss = {loss.item():.4f}")

    def visualize(self):
        # Get embeddings
        ents = list(self.graph.idx2ent.values())
        emb_matrix = self.model.ent_emb.weight.detach().numpy()
        
        # PCA
        pca = PCA(n_components=2)
        result = pca.fit_transform(emb_matrix)
        
        plt.figure(figsize=(10, 5))
        plt.subplot(1, 2, 1)
        plt.plot(self.loss_history)
        plt.title("Training Loss")
        
        plt.subplot(1, 2, 2)
        plt.scatter(result[:, 0], result[:, 1])
        for i, txt in enumerate(ents):
            plt.annotate(txt, (result[i, 0], result[i, 1]))
        plt.title("Entity Embeddings (PCA)")
        plt.show()

if __name__ == "__main__":
    graph = KGraph()
    # Family Domain
    graph.add_triple("Alice", "husband", "Bob")
    graph.add_triple("Bob", "wife", "Alice")
    graph.add_triple("Charlie", "father", "Bob")
    graph.add_triple("Charlie", "mother", "Alice")
    
    model = TransE(len(graph.entities), len(graph.relations), dim=10)
    trainer = EmbeddingTrainer(graph, model)
    trainer.train(epochs=500)
    trainer.visualize()
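作为补充示意(非原脚本内容),TransH 在 TransE 的基础上为每个关系引入法向量 w_r,打分前先把头尾实体投影到关系特定的超平面上,从而缓解一对多、多对一关系的建模问题。下面是一个与上文 TransE 接口一致的最小草图:

```python
import torch
import torch.nn as nn

class TransH(nn.Module):
    """TransH 打分:先把 h、t 投影到关系超平面,再计算 ||h_p + d_r - t_p||"""
    def __init__(self, num_entities, num_relations, dim=50):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)
        self.rel_emb = nn.Embedding(num_relations, dim)   # 平移向量 d_r
        self.norm_emb = nn.Embedding(num_relations, dim)  # 超平面法向量 w_r
        for emb in (self.ent_emb, self.rel_emb, self.norm_emb):
            nn.init.xavier_uniform_(emb.weight)

    def forward(self, h_idx, r_idx, t_idx):
        h = self.ent_emb(h_idx)
        t = self.ent_emb(t_idx)
        d_r = self.rel_emb(r_idx)
        # 单位化法向量,保证投影定义良好
        w_r = nn.functional.normalize(self.norm_emb(r_idx), p=2, dim=1)
        # 投影: e_p = e - (w_r . e) w_r
        h_p = h - (h * w_r).sum(dim=1, keepdim=True) * w_r
        t_p = t - (t * w_r).sum(dim=1, keepdim=True) * w_r
        return torch.norm(h_p + d_r - t_p, p=2, dim=1)

model = TransH(num_entities=4, num_relations=2, dim=10)
score = model(torch.tensor([0]), torch.tensor([0]), torch.tensor([1]))
print(score.shape)  # torch.Size([1])
```

该类的 forward 签名与上文 TransE 相同,可直接替换进 EmbeddingTrainer 使用。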

脚本 10.3.1.2:关系推理

本脚本实现了基于嵌入空间的多跳关系推理查询引擎。在TransE模型训练的基础上,支持路径查询与链接预测。系统能够回答"谁是Alice的丈夫的朋友?"这类复合关系问题。运行脚本将展示向量空间中的推理过程。内容与使用方式:

import torch
from typing import List, Tuple
# 复用脚本 10.3.1.1 中定义的 KGraph、TransE 与 EmbeddingTrainer
# (需与该脚本在同一文件或会话中运行)

class RelationalReasoner:
    def __init__(self, embed_model, graph: KGraph):
        self.model = embed_model
        self.graph = graph

    def predict_tail(self, h_name: str, r_name: str, k=3):
        """链接预测:给定,预测t"""
        h_idx = self.graph.entities[h_name]
        r_idx = self.graph.relations[r_name]
        
        h_vec = self.model.ent_emb(torch.tensor([h_idx]))
        r_vec = self.model.rel_emb(torch.tensor([r_idx]))
        
        target_vec = h_vec + r_vec # TransE assumption
        
        # Calc distance to all entities
        all_ents = self.model.ent_emb.weight
        dists = torch.norm(all_ents - target_vec, p=2, dim=1)
        
        # Get top k
        top_k_indices = torch.argsort(dists)[:k]
        
        results = []
        for idx in top_k_indices:
            idx = idx.item()
            results.append((self.graph.idx2ent[idx], dists[idx].item()))
        return results

    def multi_hop_query(self, start_ent: str, path: List[str], k=3):
        """多跳查询:给定实体和关系路径,推断终点"""
        print(f"Query: Starting from {start_ent}, following path {path}...")
        
        # Vector calculation
        h_vec = self.model.ent_emb(torch.tensor([self.graph.entities[start_ent]]))
        current_vec = h_vec
        
        for rel in path:
            r_idx = self.graph.relations[rel]
            r_vec = self.model.rel_emb(torch.tensor([r_idx]))
            current_vec = current_vec + r_vec # Composition in TransE
        
        # Find nearest neighbors
        all_ents = self.model.ent_emb.weight
        dists = torch.norm(all_ents - current_vec, p=2, dim=1)
        top_k_indices = torch.argsort(dists)[:k]
        
        candidates = [self.graph.idx2ent[idx.item()] for idx in top_k_indices]
        print(f"Result Candidates: {candidates}")
        return candidates

# Visualization of Reasoning Chain
def visualize_reasoning_chain(chain: List[Tuple[str, str, str]]):
    print("\n--- Reasoning Chain ---")
    for h, r, t in chain:
        print(f"{h} --[{r}]--> {t}")

if __name__ == "__main__":
    # Setup mock graph and trained model for demo
    # In real usage, load from previous script's output
    graph = KGraph()
    graph.add_triple("A", "knows", "B")
    graph.add_triple("B", "knows", "C")
    graph.add_triple("C", "lives_in", "NY")
    
    model = TransE(len(graph.entities), len(graph.relations), dim=10)
    # Train briefly
    trainer = EmbeddingTrainer(graph, model, lr=0.5)
    trainer.train(epochs=200) # Quick train
    
    reasoner = RelationalReasoner(model, graph)
    
    # Query 1: Simple link prediction
    print("\n[Task 1] Link Prediction")
    res = reasoner.predict_tail("A", "knows")
    print(f"Predicted tail for (A, knows): {res}")
    
    # Query 2: Multi-hop
    print("\n[Task 2] Multi-hop Reasoning")
    # Who does A know that knows C?
    # Path: knows -> knows
    # Assuming inverse mapping or just testing logic
    candidates = reasoner.multi_hop_query("A", ["knows", "knows"])
    
    # Visualization
    trainer.visualize()
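作为对照(非原脚本内容),下面的小片段在同一组三元组上做离散的图遍历式多跳查询,可用来核对嵌入空间推理返回的候选实体是否与符号路径推理一致:

```python
from typing import List, Tuple

def symbolic_multi_hop(triples: List[Tuple[str, str, str]],
                       start: str, path: List[str]) -> set:
    """沿关系路径做离散遍历,返回可达实体集合"""
    frontier = {start}
    for rel in path:
        # 每一跳:取当前前沿实体沿关系 rel 能到达的所有尾实体
        frontier = {t for (h, r, t) in triples if r == rel and h in frontier}
    return frontier

triples = [("A", "knows", "B"), ("B", "knows", "C"), ("C", "lives_in", "NY")]
print(symbolic_multi_hop(triples, "A", ["knows", "knows"]))  # {'C'}
```

与向量加法式的 TransE 组合推理不同,离散遍历给出精确答案,但无法对图中缺失的边做泛化预测,这正是两类方法互补之处。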

脚本 10.3.2.1:先验知识注入

本脚本演示了如何将逻辑规则作为先验知识注入到神经网络训练中。实现了一个基于语义损失函数的分类器,该分类器在训练过程中同时最小化数据误差与逻辑约束违规惩罚。运行脚本将展示逻辑约束如何改善模型在少样本情况下的分类边界。内容与使用方式:

import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt

class LogicConstraintLayer(nn.Module):
    """将逻辑约束转化为损失项"""
    def __init__(self):
        super().__init__()
        
    def forward(self, probs, constraints):
        """
        probs: (Batch, Classes) 模型输出的概率分布
        constraints: List of functions defining logical rules
        """
        total_loss = 0
        # Example: If Class A is true, Class B must be false (Mutual Exclusion)
        # Rule: A -> ~B  <=>  A <= 1 - B
        # Loss: max(0, A + B - 1)
        
        # Assuming probs are probabilities for [A, B]
        p_A = probs[:, 0]
        p_B = probs[:, 1]
        
        # Fuzzy logic implication loss
        # Violation if (p_A > 0.5 and p_B > 0.5)
        # Soft violation: p_A * p_B (approx AND)
        violation = p_A * p_B
        total_loss = violation.mean()
        
        return total_loss

class InformedClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 10)
        self.fc2 = nn.Linear(10, 2) # Binary classification
        self.logic_layer = LogicConstraintLayer()

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        logits = self.fc2(x)
        probs = torch.softmax(logits, dim=1)
        return probs

def generate_data(n=100):
    # Class 0: x+y < 1
    # Class 1: x+y > 2
    # Gap in between
    X = torch.rand(n, 2) * 3
    y = torch.zeros(n, dtype=torch.long)
    for i in range(n):
        if X[i].sum() > 2:
            y[i] = 1
        elif X[i].sum() < 1:
            y[i] = 0
        else:
            y[i] = -1 # Ignore index
    mask = y >= 0
    return X[mask], y[mask]

if __name__ == "__main__":
    model = InformedClassifier()
    optimizer = optim.Adam(model.parameters(), lr=0.01)
    nll_loss = nn.NLLLoss()
    
    X, y = generate_data(200)
    
    loss_history = []
    
    for epoch in range(100):
        optimizer.zero_grad()
        
        probs = model(X)
        
        # Standard Supervised Loss
        # 模型输出的是概率而非 logits,故取对数后用 NLLLoss
        # (CrossEntropyLoss 会再做一次 softmax,导致重复归一化)
        l_data = nll_loss(torch.log(probs.clamp(1e-6, 1)), y)
        
        # Logic Loss (Prior Knowledge)
        # Constraint: Class 0 and Class 1 are mutually exclusive (already handled by softmax)
        # Let's try: If input is in region X, must be Class A.
        # Here: A simple constraint: Probability of both classes cannot be high simultaneously
        # (Redundant but illustrative)
        l_logic = model.logic_layer(probs, None)
        
        loss = l_data + 0.5 * l_logic
        loss.backward()
        optimizer.step()
        
        loss_history.append(loss.item())

    # Visualization
    plt.figure(figsize=(12, 5))
    plt.subplot(1, 2, 1)
    plt.plot(loss_history)
    plt.title("Total Loss (Data + Logic)")
    
    plt.subplot(1, 2, 2)
    # Plot decision boundary
    x_min, x_max = 0, 3
    y_min, y_max = 0, 3
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, 50), np.linspace(y_min, y_max, 50))
    grid = torch.tensor(np.c_[xx.ravel(), yy.ravel()], dtype=torch.float)
    with torch.no_grad():
        probs = model(grid)
        preds = torch.argmax(probs, dim=1).numpy()
    
    Z = preds.reshape(xx.shape)
    plt.contourf(xx, yy, Z, alpha=0.3)
    plt.scatter(X[:, 0], X[:, 1], c=y, edgecolors='k')
    plt.title("Decision Boundary with Logic Constraints")
    plt.show()

脚本 10.3.2.2:常识推理增强

本脚本构建了一个常识推理增强系统。模拟从知识库中检索常识知识,并将其作为上下文特征注入到任务模型中。系统包含知识检索模块、上下文融合层及下游任务预测器。运行脚本将展示知识增强如何帮助模型理解隐含的语义关系。内容与使用方式:

import torch
import torch.nn as nn
import numpy as np

# Mock Knowledge Base
COMMONSENSE_DB = {
    "cup": ["container", "fragile", "used_for_drinking"],
    "table": ["furniture", "flat_surface", "has_legs"],
    "water": ["liquid", "transparent", "wet"],
    "fall": ["motion", "downward", "accident"],
    "glass": ["material", "transparent", "fragile"]
}

class KnowledgeRetriever:
    """模拟知识检索"""
    def __init__(self):
        self.db = COMMONSENSE_DB
        self.all_concepts = list(set([c for attrs in self.db.values() for c in attrs] + list(self.db.keys())))
        self.concept_to_idx = {c: i for i, c in enumerate(self.all_concepts)}
        self.embedding_dim = len(self.all_concepts)

    def retrieve(self, entity: str) -> np.ndarray:
        """返回one-hot编码的相关概念向量"""
        vector = np.zeros(self.embedding_dim)
        if entity in self.db:
            # Include entity itself
            vector[self.concept_to_idx[entity]] = 1
            # Include attributes
            for attr in self.db[entity]:
                vector[self.concept_to_idx[attr]] = 1
        return vector

class CommonsenseModel(nn.Module):
    def __init__(self, input_dim, knowledge_dim, hidden_dim, output_dim):
        super().__init__()
        self.input_encoder = nn.Linear(input_dim, hidden_dim)
        self.knowledge_encoder = nn.Linear(knowledge_dim, hidden_dim)
        
        # Gating mechanism
        self.gate = nn.Linear(hidden_dim * 2, 1)
        
        self.decoder = nn.Linear(hidden_dim, output_dim)

    def forward(self, x, knowledge_vec):
        # Encode features
        x_feat = torch.relu(self.input_encoder(x))
        k_feat = torch.relu(self.knowledge_encoder(knowledge_vec))
        
        # Fusion with Gating
        # Decide how much knowledge to incorporate
        combined = torch.cat([x_feat, k_feat], dim=1)
        gate_weights = torch.sigmoid(self.gate(combined))
        
        # Weighted sum
        fused_feat = gate_weights * k_feat + (1 - gate_weights) * x_feat
        
        return self.decoder(fused_feat)

# --- System Execution ---
if __name__ == "__main__":
    print("Initializing Commonsense Reasoning System...")
    
    # Components
    retriever = KnowledgeRetriever()
    model = CommonsenseModel(
        input_dim=10,      # Raw visual features (mock)
        knowledge_dim=len(retriever.all_concepts),
        hidden_dim=20,
        output_dim=3       # Action classes: "Pick up", "Ignore", "Clean"
    )
    
    # Scenario: Detect a "cup" (feature vector mock)
    visual_features = torch.rand(1, 10) # Random feature
    detected_object = "cup"
    
    print(f"\nDetected Object: {detected_object}")
    
    # 1. Retrieve Knowledge
    knowledge_vec = retriever.retrieve(detected_object)
    print(f"Retrieved Knowledge: {[c for i, c in enumerate(retriever.all_concepts) if knowledge_vec[i] == 1]}")
    k_tensor = torch.tensor(knowledge_vec).unsqueeze(0).float()
    
    # 2. Inference
    output = model(visual_features, k_tensor)
    probs = torch.softmax(output, dim=1)
    
    actions = ["Pick up", "Ignore", "Clean"]
    best_action = actions[torch.argmax(probs).item()]
    
    print(f"Predicted Action: {best_action} (Confidence: {probs.max().item():.2f})")
    
    # Visualization of Knowledge Vector
    print("\nKnowledge Vector Activation:")
    for i, val in enumerate(knowledge_vec):
        if val > 0:
            print(f"  [{retriever.all_concepts[i]}] Active")