
Over the past few months I've been experimenting with LLM coding agents, and Claude Code has become my favorite.
It's not without problems, but it has let me write about 12 programs/projects in a relatively short time, and I don't think I could have finished all of them in the same time without it. Most of these are projects I wouldn't even have bothered to write without Claude Code, simply because they would have eaten up too much of my time. (A list is included at the end of this post.)
I'm still far from a Claude Code expert, and there is a pile of potentially useful blog posts and documentation I have yet to read. But, and this is the key point, you don't have to read all of that material to start seeing results. You don't even need to read this post; just type in some prompts and see what comes out.
That said, since I just wrote this up for a job application, here is how I get good results from Claude Code. I've added links to examples where appropriate.
- The key is to write a clear specification up front that gives the agent context for working in the codebase. (Examples: 1, 2, 3, 4)
- It helps to prepare a document for the agent that outlines the project's structure, how to run the build and check tools, and so on. (Examples: 1, 2, 3)
- Asking the agent to code-review its own work is surprisingly productive.
- Finally, I keep a personal "global" agent guide describing the best practices agents should follow, specifying things like the problem-solving approach, using TDD, and so on. (That file is listed near the end of this post.)
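As a rough illustration of the second point, a project-level agent doc of this kind might look something like the following. This is a hypothetical sketch; the layout, file names, and `make` targets are placeholders, not taken from any of my actual projects:

```markdown
# Project notes for the agent

## Layout
- `src/` – library code
- `tests/` – test suite
- `scripts/` – one-off maintenance scripts

## Common commands
- Build: `make build`
- Run tests: `make test`
- Lint and format: `make lint`

## Conventions
- New modules follow the structure of existing ones in `src/`
- Always run `make test` and `make lint` before committing
```

The point is less the specific contents than that the agent can find the build and check commands without guessing.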
Then there is the question of verifying LLM-written code.
AI-generated code really is often incorrect or inefficient.
For me, it's important to note that I believe I am ultimately responsible for the code in any PR that carries my name, regardless of how it was produced.
So, especially in any professional setting, I manually review all AI-written code and test cases. For anything I find missing or in need of improvement, I add test cases, either by hand or by asking the LLM to write them (and then reviewing those too).
Ultimately, manual review is necessary to verify that the behavior is correctly implemented and properly tested.
个人"全局"agent 指南
This file lives at `~/.claude/CLAUDE.md`:
# Development Guidelines
## Philosophy
### Core Beliefs
- **Incremental progress over big bangs** - Small changes that compile and pass tests
- **Learning from existing code** - Study and plan before implementing
- **Pragmatic over dogmatic** - Adapt to project reality
- **Clear intent over clever code** - Be boring and obvious
### Simplicity Means
- Single responsibility per function/class
- Avoid premature abstractions
- No clever tricks - choose the boring solution
- If you need to explain it, it's too complex
## Process
### 1. Planning & Staging
Break complex work into 3-5 stages. Document in `IMPLEMENTATION_PLAN.md`:
```markdown
## Stage N: [Name]
**Goal**: [Specific deliverable]
**Success Criteria**: [Testable outcomes]
**Tests**: [Specific test cases]
**Status**: [Not Started|In Progress|Complete]
```
* Update status as you progress
* Remove file when all stages are done
### 2. Implementation Flow
1. **Understand** - Study existing patterns in codebase
2. **Test** - Write test first (red)
3. **Implement** - Minimal code to pass (green)
4. **Refactor** - Clean up with tests passing
5. **Commit** - With clear message linking to plan
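The red-green-refactor cycle above can be sketched in miniature. The `slugify` function here is a made-up example, not from any real project:

```python
import re

# Step 2 (red): write the test first; it fails until slugify exists.
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"

# Step 3 (green): minimal code to make the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# Step 4 (refactor): clean up while the test keeps passing,
# here by collapsing runs of whitespace into a single hyphen.
def slugify(text: str) -> str:
    return re.sub(r"\s+", "-", text.strip().lower())

test_slugify_replaces_spaces_and_lowercases()  # still green after the refactor
```

The discipline is that the test exists and fails before any implementation code is written, so the implementation is always justified by a failing test.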
### 3. When Stuck (After 3 Attempts)
**CRITICAL**: Maximum 3 attempts per issue, then STOP.
1. **Document what failed**:
* What you tried
* Specific error messages
* Why you think it failed
2. **Research alternatives**:
* Find 2-3 similar implementations
* Note different approaches used
3. **Question fundamentals**:
* Is this the right abstraction level?
* Can this be split into smaller problems?
* Is there a simpler approach entirely?
4. **Try different angle**:
* Different library/framework feature?
* Different architectural pattern?
* Remove abstraction instead of adding?
## Technical Standards
### Architecture Principles
* **Composition over inheritance** - Use dependency injection
* **Interfaces over singletons** - Enable testing and flexibility
* **Explicit over implicit** - Clear data flow and dependencies
* **Test-driven when possible** - Never disable tests, fix them
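A minimal sketch of the "composition over inheritance / interfaces over singletons" idea in Python; the `Clock` and `RateLimiter` names are invented for illustration:

```python
from typing import Protocol
import time

class Clock(Protocol):
    """Interface instead of a global singleton: anything with now() qualifies."""
    def now(self) -> float: ...

class SystemClock:
    def now(self) -> float:
        return time.time()

class FixedClock:
    """Test double: makes time-dependent code deterministic in tests."""
    def __init__(self, t: float) -> None:
        self.t = t
    def now(self) -> float:
        return self.t

class RateLimiter:
    """Depends on the Clock interface via constructor injection,
    not on a concrete clock reached through a global."""
    def __init__(self, clock: Clock, interval: float) -> None:
        self.clock = clock
        self.interval = interval
        self.last = float("-inf")

    def allow(self) -> bool:
        t = self.clock.now()
        if t - self.last >= self.interval:
            self.last = t
            return True
        return False

# In tests, inject the fake clock; no monkey-patching of time.time needed.
limiter = RateLimiter(FixedClock(100.0), interval=10.0)
assert limiter.allow() is True    # first call always allowed
assert limiter.allow() is False   # same timestamp, still within the interval
```

Injecting the dependency through the constructor is what makes the class testable and flexible; production code passes `SystemClock()`, tests pass `FixedClock`.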
### Code Quality
* **Every commit must**:
* Compile successfully
* Pass all existing tests
* Include tests for new functionality
* Follow project formatting/linting
* **Before committing**:
* Run formatters/linters
* Self-review changes
* Ensure commit message explains "why"
### Error Handling
* Fail fast with descriptive messages
* Include context for debugging
* Handle errors at appropriate level
* Never silently swallow exceptions
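A small illustration of these error-handling rules in Python; the config-loading scenario is invented:

```python
import json

def load_port(config_text: str) -> int:
    """Fail fast with a descriptive message instead of silently
    falling back to a default."""
    try:
        config = json.loads(config_text)
    except json.JSONDecodeError as e:
        # Include context for debugging and chain the original
        # exception rather than swallowing it.
        raise ValueError(f"config is not valid JSON: {e}") from e

    port = config.get("port")
    if not isinstance(port, int) or not (1 <= port <= 65535):
        # Say what was expected and what was actually found.
        raise ValueError(f"'port' must be an integer in 1-65535, got {port!r}")
    return port

assert load_port('{"port": 8080}') == 8080
```

The `raise ... from e` form preserves the original traceback, so the error is handled at this level without losing the lower-level context.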
## Decision Framework
When multiple valid approaches exist, choose based on:
1. **Testability** - Can I easily test this?
2. **Readability** - Will someone understand this in 6 months?
3. **Consistency** - Does this match project patterns?
4. **Simplicity** - Is this the simplest solution that works?
5. **Reversibility** - How hard to change later?
## Project Integration
### Learning the Codebase
* Find 3 similar features/components
* Identify common patterns and conventions
* Use same libraries/utilities when possible
* Follow existing test patterns
### Tooling
* Use project's existing build system
* Use project's test framework
* Use project's formatter/linter settings
* Don't introduce new tools without strong justification
## Quality Gates
### Definition of Done
* [ ] Tests written and passing
* [ ] Code follows project conventions
* [ ] No linter/formatter warnings
* [ ] Commit messages are clear
* [ ] Implementation matches plan
* [ ] No TODOs without issue numbers
### Test Guidelines
* Test behavior, not implementation
* One assertion per test when possible
* Clear test names describing scenario
* Use existing test utilities/helpers
* Tests should be deterministic
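For instance, a behavior-focused, deterministic test might look like this; the `apply_discount` function is a made-up example:

```python
def apply_discount(price: float, percent: float) -> float:
    """Toy function under test."""
    return round(price * (1 - percent / 100), 2)

# Name describes the scenario; the assertion checks observable behavior,
# not how the discount is computed internally.
def test_ten_percent_discount_reduces_price():
    assert apply_discount(200.0, 10.0) == 180.0

# Deterministic, fixed edge case rather than random inputs.
def test_zero_percent_discount_leaves_price_unchanged():
    assert apply_discount(49.99, 0.0) == 49.99

test_ten_percent_discount_reduces_price()
test_zero_percent_discount_leaves_price_unchanged()
```

If `apply_discount` were later rewritten with a different internal formula, these tests would still pass as long as the behavior is unchanged, which is exactly the property we want.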
## Important Reminders
**NEVER**:
* Use `--no-verify` to bypass commit hooks
* Disable tests instead of fixing them
* Commit code that doesn't compile
* Make assumptions - verify with existing code
**ALWAYS**:
* Commit working code incrementally
* Update plan documentation as you go
* Learn from existing implementations
* Stop after 3 failed attempts and reassess