Hand-Rolling a User Behavior Tracking System in Java

Part 1: Overall Architecture and Designing for 10,000 Concurrent Users

🔄 Data processing flow and core architecture

The heart of a user behavior tracking system is a real-time architecture that can process a massive click stream. The whole system follows a "collect → transport → process → store → visualize" data flow; to serve 10,000 concurrent users it is built in layers so that performance, reliability, and scalability can each be addressed separately.

Core components and data flow:

Client (Web/App) → Load balancer (Nginx) → Collection cluster (Spring Boot) → Message queue (Kafka cluster) → Stream processing engine (Flink/Storm) → Distributed storage (ClickHouse/Redis)

🏗️ The five layers in detail

1. Collection layer: capturing user behavior precisely
  • Implementation: a lightweight JavaScript snippet embedded in the page, roughly 3 KB after gzip, with negligible performance impact

  • Data contract: a structured tracking schema with common fields (user_id, session_id, event_timestamp, page_url, etc.) in snake_case for consistency

  • Optimizations: batched reporting plus a cache-and-retry mechanism to cut request volume and ride out short network outages

2. Transport and ingestion layer: a highly available data hub
  • Technology: Apache Kafka as the distributed message queue, acting as a buffer that absorbs traffic spikes

  • Key strengths: high throughput, durable storage, and multiple consumers, so data is not lost in transit

  • Practical advice: split events into Kafka topics by business type (page_view_events, click_events, etc.) and size partition counts to raise parallelism

3. Real-time computation layer: the stream-processing engine
  • Technology: Apache Flink is the first choice thanks to true streaming, low latency, and high availability

  • Core capabilities (a windowed-aggregation sketch follows this list)

    • Cleansing and transformation: drop invalid records, back-fill missing fields, normalize formats

    • Real-time aggregation: compute PV, UV, and similar metrics over time windows (e.g. 5-minute tumbling windows)

    • Advanced analysis: real-time sessionization, funnel analysis, and path analysis

4. Storage layer: balancing speed and capacity
  • OLAP database: ClickHouse excels at sub-second ad-hoc queries over huge datasets, a good fit for raw event detail and flexible reporting

  • Cache and search: Redis holds hot data and the pre-aggregated results behind real-time dashboards, answering in milliseconds

  • Hot/cold tiering: store data differently according to access frequency to balance cost and performance

5. Visualization and application layer: turning data into value
  • Real-time dashboards: tools such as Grafana connect to the data sources and show core metrics as they change

  • Self-service analytics: BI tools such as Metabase or Superset let business users build drag-and-drop analyses on their own
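To make the real-time computation layer concrete, here is a minimal sketch of the 5-minute tumbling-window PV aggregation mentioned above, assuming Flink's DataStream API (1.17-era) and the flink-connector-kafka dependency; the job name, consumer group, and the extractPageUrl helper are illustrative placeholders, not code from this project.

// Minimal sketch: page views per URL over 5-minute tumbling windows (assumptions noted above)
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class PageViewJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read raw JSON events from the page_view_events topic
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("page_view_events")
                .setGroupId("pv-aggregator")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> raw = env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka");

        // Count page views per URL in 5-minute tumbling windows
        raw.map(json -> Tuple2.of(extractPageUrl(json), 1L))
           .returns(Types.TUPLE(Types.STRING, Types.LONG))
           .keyBy(t -> t.f0)
           .window(TumblingProcessingTimeWindows.of(Time.minutes(5)))
           .sum(1)
           .print();   // in production this would sink to ClickHouse / Redis instead

        env.execute("pv-per-url-5min");
    }

    // Illustrative helper: pull page_url out of the event JSON (implementation omitted)
    private static String extractPageUrl(String json) {
        return json; // placeholder
    }
}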

⚡ Optimizations for 10,000 concurrent users

High availability
  • Load balancing: Nginx fronts a cluster of collection nodes so a single failure does not lose data

  • Clustered components: the critical pieces (Kafka, Flink, Redis) all run as clusters to keep the system available

  • Data reliability: end-to-end exactly-once semantics so events are neither lost nor duplicated

Performance measures
  • Asynchronous handling: the collection service writes incoming data to Kafka asynchronously, smoothing out traffic spikes

  • Horizontal scaling: every component scales out independently; Kafka gains throughput by adding partitions and brokers

  • Connection pool tuning: size database connection pools sensibly so connection waits never become the bottleneck

Data governance and quality
  • A single tracking specification: a company-wide schema that defines each event's name, properties, and meaning

  • Time synchronization: use the server-side timestamp as the event time to avoid out-of-order time series

  • Privacy compliance: follow privacy-by-design principles and stay compliant with GDPR and similar regulations

📊 Capacity planning and monitoring

For 10,000 concurrent users the system needs end-to-end monitoring:

  • Key metrics: Kafka backlog, processing latency, and host resources (CPU, memory, disk I/O)

  • Performance baseline: set the parallelism of the compute jobs so that processing capacity matches the business load

  • Scalability tests: run regular load tests to verify that horizontal scaling actually works

This architecture, built on the Kafka + Flink + ClickHouse combination, is widely regarded as a go-to stack for high-concurrency behavior tracking: it offers high throughput, low latency, strong consistency guarantees, and powerful analytics.

Part 2: Front-End Instrumentation and the JavaScript SDK

🎯 SDK architecture

Following the overall architecture above, the front-end SDK is built from small modules so it stays fast and stable under 10,000 concurrent users. The compressed bundle is kept under 3 KB and supports asynchronous loading and batched reporting.

2.1 SDK skeleton
class UserBehaviorSDK {
    constructor(config = {}) {
        this.config = Object.assign({
            appId: '',
            endpoint: '/api/track',
            batchSize: 10,
            delay: 3000,
            maxRetries: 3,
            debug: false,
            enablePageView: true,
            enableClick: true,
            enableScroll: true
        }, config);
        
        this.queue = [];
        this.isSending = false;
        this.retryCount = 0;
        this.init();
    }
}

📊 Implementing the tracking schema

The SDK strictly follows the snake_case naming and structured tracking conventions defined in the architecture:

2.2 Event data model

// Base event structure
const BASE_EVENT = {
    user_id: null,           // user ID
    session_id: null,        // session ID
    event_timestamp: null,   // event timestamp
    page_url: null,          // page URL
    user_agent: null,        // user agent
    ip_address: null,        // IP address
    event_type: null,        // event type
    event_properties: {}     // event properties
};

// Concrete event types
const EVENT_TYPES = {
    PAGE_VIEW: 'page_view',
    CLICK: 'click',
    SCROLL: 'scroll',
    FORM_SUBMIT: 'form_submit',
    ERROR: 'error'
};

🖱️ Capturing user behavior

2.3 Page view tracking

trackPageView() {
    const event = {
        ...BASE_EVENT,
        event_type: EVENT_TYPES.PAGE_VIEW,
        page_url: window.location.href,
        referrer: document.referrer,
        page_title: document.title,
        viewport_size: `${window.innerWidth}x${window.innerHeight}`,
        load_time: performance.timing.loadEventEnd - performance.timing.navigationStart
    };
    
    this.addToQueue(event);
    
    // Hook SPA route changes so virtual page views are tracked as well
    this.setupSPATracking();
}

setupSPATracking() {
    // Only wrap the History API once, even though trackPageView() may run many times
    if (this.spaTrackingInstalled) return;
    this.spaTrackingInstalled = true;
    
    const sdk = this;
    const originalPushState = history.pushState;
    const originalReplaceState = history.replaceState;
    
    // pushState/replaceState must keep `history` as their receiver,
    // so capture the SDK instance instead of re-binding `this`
    history.pushState = function (...args) {
        originalPushState.apply(history, args);
        sdk.trackPageView();
    };
    history.replaceState = function (...args) {
        originalReplaceState.apply(history, args);
        sdk.trackPageView();
    };
    
    window.addEventListener('popstate', () => sdk.trackPageView());
}
2.4 Click tracking
trackClicks() {
    document.addEventListener('click', (e) => {
        const target = e.target;
        const event = {
            ...BASE_EVENT,
            event_type: EVENT_TYPES.CLICK,
            element_id: target.id,
            element_class: target.className,
            element_tag: target.tagName,
            element_text: target.textContent?.slice(0, 100),
            click_position: `${e.clientX},${e.clientY}`,
            target_url: target.href || null
        };
        
        this.addToQueue(event);
    }, { capture: true, passive: true });
}
2.5 Scroll-depth tracking
trackScroll() {
    let scrollDepth = 0;
    let lastReportedDepth = 0;
    
    window.addEventListener('scroll', () => {
        const scrollPosition = window.scrollY;
        const pageHeight = document.documentElement.scrollHeight;
        const viewportHeight = window.innerHeight;
        
        const currentDepth = Math.round((scrollPosition + viewportHeight) / pageHeight * 100);
        
        if (currentDepth > scrollDepth) {
            scrollDepth = currentDepth;
            
            // Report scroll depth in 25% increments
            if (scrollDepth - lastReportedDepth >= 25) {
                const event = {
                    ...BASE_EVENT,
                    event_type: EVENT_TYPES.SCROLL,
                    scroll_depth: scrollDepth,
                    scroll_position: scrollPosition,
                    max_scroll: pageHeight - viewportHeight
                };
                
                this.addToQueue(event);
                lastReportedDepth = scrollDepth;
            }
        }
    }, { passive: true });
}
2.6 Form submission tracking
trackFormSubmissions() {
    document.addEventListener('submit', (e) => {
        const form = e.target;
        const event = {
            ...BASE_EVENT,
            event_type: EVENT_TYPES.FORM_SUBMIT,
            form_id: form.id,
            form_class: form.className,
            form_action: form.action,
            form_method: form.method,
            field_count: form.elements.length
        };
        
        this.addToQueue(event);
    }, { capture: true });
}

🔄 Batched reporting and retries

2.7 Queue management and batch sending
addToQueue(event) {
    // Fill in the common base fields
    event.user_id = this.getUserId();
    event.session_id = this.getSessionId();
    event.event_timestamp = Date.now();
    event.user_agent = navigator.userAgent;
    
    this.queue.push(event);
    
    // Flush immediately when the batch is full, otherwise schedule a delayed send
    if (this.queue.length >= this.config.batchSize) {
        this.sendBatch();
    } else if (!this.sendTimer) {
        this.sendTimer = setTimeout(() => this.sendBatch(), this.config.delay);
    }
}
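// addToQueue() above relies on getUserId() and getSessionId(), which are not defined
// anywhere in this article. The following is a minimal sketch of one possible
// implementation; the storage keys and identifier format are assumptions.
getUserId() {
    let id = localStorage.getItem('tracking_user_id');
    if (!id) {
        id = 'u_' + Date.now().toString(36) + Math.random().toString(36).slice(2, 10);
        localStorage.setItem('tracking_user_id', id);
    }
    return id;
}

getSessionId() {
    // Kept in sessionStorage so the session naturally ends when the tab is closed
    let id = sessionStorage.getItem('tracking_session_id');
    if (!id) {
        id = 'sess_' + Date.now().toString(36) + Math.random().toString(36).slice(2, 10);
        sessionStorage.setItem('tracking_session_id', id);
    }
    return id;
}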

async sendBatch() {
    if (this.isSending || this.queue.length === 0) {
        return;
    }
    
    this.isSending = true;
    const batch = this.queue.splice(0, this.config.batchSize);
    
    try {
        // Route the batch to a Kafka topic according to its event types
        const topic = this.getTopicForBatch(batch);
        const response = await fetch(this.config.endpoint, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({
                topic: topic,
                events: batch
            })
        });
        
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        
        this.retryCount = 0;
        if (this.config.debug) {
            console.log(`Sent ${batch.length} events to topic: ${topic}`);
        }
    } catch (error) {
        this.retryCount++;
        
        if (this.retryCount <= this.config.maxRetries) {
            // Put the batch back and retry with linear back-off
            this.queue.unshift(...batch);
            setTimeout(() => this.sendBatch(), 1000 * this.retryCount);
        } else {
            console.error('Failed to send events: maximum retries reached');
            this.saveToStorage(batch); // persist locally so a later page load can resend
        }
    } finally {
        this.isSending = false;
        this.sendTimer = null;
    }
}

getTopicForBatch(batch) {
    const eventTypes = [...new Set(batch.map(event => event.event_type))];
    
    // Map event types to Kafka topics
    const topicMap = {
        'page_view': 'page_view_events',
        'click': 'click_events',
        'scroll': 'scroll_events',
        'form_submit': 'form_events'
    };
    
    return eventTypes.length === 1 ? topicMap[eventTypes[0]] : 'mixed_events';
}

💾 Local storage and offline support

2.8 Durable local persistence
saveToStorage(events) {
    try {
        const stored = JSON.parse(localStorage.getItem('tracking_queue') || '[]');
        const updated = [...stored, ...events].slice(-1000); // keep at most 1000 events
        
        localStorage.setItem('tracking_queue', JSON.stringify(updated));
    } catch (error) {
        console.warn('localStorage write failed; some events may be lost');
    }
}

restoreFromStorage() {
    try {
        const stored = JSON.parse(localStorage.getItem('tracking_queue') || '[]');
        if (stored.length > 0) {
            this.queue.unshift(...stored);
            localStorage.removeItem('tracking_queue');
            this.sendBatch();
        }
    } catch (error) {
        console.warn('Failed to restore events from localStorage');
    }
}

🔒 Privacy compliance and performance

2.9 GDPR consent support

// User consent control
setUserConsent(granted) {
    if (!granted) {
        this.disableTracking();
        this.clearUserData();
    }
    localStorage.setItem('tracking_consent', granted);
}

disableTracking() {
    this.queue = [];
    clearTimeout(this.sendTimer);
    
    // Remove all event listeners
    this.removeEventListeners();
}

// Anonymization
anonymizeData(event) {
    if (!this.config.enablePersonalization) {
        const { ip_address, user_id, ...anonymized } = event;
        return { ...anonymized, user_id: 'anonymous' };
    }
    return event;
}
2.10 Performance monitoring and self-checks

// Collect page performance metrics
monitorPerformance() {
    const observer = new PerformanceObserver((list) => {
        list.getEntries().forEach(entry => {
            if (entry.entryType === 'navigation') {
                this.trackPerformance(entry);
            }
        });
    });
    
    observer.observe({ entryTypes: ['navigation'] });
}

trackPerformance(navigationEntry) {
    const event = {
        ...BASE_EVENT,
        event_type: 'performance',
        performance_metrics: {
            dns_lookup: navigationEntry.domainLookupEnd - navigationEntry.domainLookupStart,
            tcp_connect: navigationEntry.connectEnd - navigationEntry.connectStart,
            ttfb: navigationEntry.responseStart - navigationEntry.requestStart,
            dom_ready: navigationEntry.domContentLoadedEventStart - navigationEntry.startTime,
            full_load: navigationEntry.loadEventStart - navigationEntry.startTime
        }
    };
    
    this.addToQueue(event);
}

🚀 SDK initialization and usage

2.11 Quick integration example

<script>
// Initialize on page load
window.trackingSDK = new UserBehaviorSDK({
    appId: 'your-app-id',
    endpoint: 'https://api.yourdomain.com/track',
    batchSize: 10,
    delay: 5000,
    debug: false   // set to true (or wire to a build-time flag) during development
});

// Track a custom event manually
function trackCustomEvent(eventName, properties) {
    window.trackingSDK.trackEvent({
        event_type: 'custom',
        custom_event_name: eventName,
        event_properties: properties
    });
}
</script>

📈 Monitoring and debugging support

2.12 Development debugging tools

// Debug-mode helpers
enableDebugMode() {
    window.trackingDebug = {
        getQueue: () => this.queue,
        getConfig: () => this.config,
        forceSend: () => this.sendBatch(),
        clearQueue: () => this.queue = []
    };
    
    console.log('Tracking SDK debug mode enabled');
}

// Health/status snapshot
getHealthStatus() {
    return {
        queueLength: this.queue.length,
        isSending: this.isSending,
        retryCount: this.retryCount,
        lastSendTime: this.lastSendTime,
        sessionId: this.getSessionId(),
        userId: this.getUserId()
    };
}

This front-end SDK follows the system-wide tracking conventions, keeps data collection fast under 10,000 concurrent users, and ships with full consent handling and debugging tools, laying a solid foundation for the processing and analysis stages that follow.

Part 3: Spring Boot Backend and MyBatis Plus Integration

3.1 Backend architecture

In line with the architecture above, the Spring Boot service is stateless and scales out behind an Nginx load balancer to support 10,000 concurrent users. Its core job is to accept the behavior data sent by the JavaScript SDK and forward it asynchronously into Kafka to absorb traffic spikes.

Core components

  • Unified ingestion endpoint: a POST /api/track interface that accepts the standardized JSON payload

  • Kafka producer: forwards data asynchronously, decoupling ingestion from the real-time compute layer

  • MyBatis Plus: manages metadata for the admin console, isolated from the high-volume event stream

3.2 The ingestion endpoint

3.2.1 Controller layer

@RestController
@RequestMapping("/api")
@Slf4j
public class TrackingController {
    
    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;
    
    @PostMapping("/track")
    public ResponseEntity<TrackingResponse> receiveTrackingData(
            @RequestBody TrackingRequest request,
            HttpServletRequest httpRequest) {
        
        // Privacy / consent check
        if (!isUserConsentValid(request)) {
            return ResponseEntity.status(HttpStatus.NO_CONTENT).build();
        }
        
        // Payload validation
        if (!validateRequest(request)) {
            return ResponseEntity.badRequest().body(
                TrackingResponse.error("Invalid request format"));
        }
        
        // Resolve the client IP on the request thread, then hand off to Kafka asynchronously
        // (the servlet request must not be used after the response has been returned)
        String clientIp = httpRequest.getRemoteAddr();
        CompletableFuture.runAsync(() -> sendToKafka(request, clientIp));
        
        return ResponseEntity.ok(TrackingResponse.success());
    }
    
    private boolean isUserConsentValid(TrackingRequest request) {
        return request.getUserConsent() != null && request.getUserConsent();
    }
}
3.2.2 Request and response models

Request body (following the snake_case convention defined earlier):
{
  "topic": "page_view_events",
  "user_consent": true,
  "events": [
    {
      "event_type": "click",
      "session_id": "sess_123456",
      "user_id": "user_789",
      "page_url": " https://example.com/home ",
      "element_id": "login_button",
      "event_timestamp": 1633046400000,
      "ip_address": "192.168.1.100"
    }
  ]
}
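The controller binds this payload to a TrackingRequest object that the article never shows. A minimal sketch of what it might look like, assuming Lombok and Jackson (the snake_case fields are mapped via @JsonProperty; this is an illustration, not the project's actual class):

// Hypothetical request DTOs for the payload above (not part of the original article)
import com.fasterxml.jackson.annotation.JsonProperty;
import lombok.Data;
import java.util.List;
import java.util.Map;

@Data
public class TrackingRequest {
    private String topic;

    @JsonProperty("user_consent")
    private Boolean userConsent;

    private List<BaseEvent> events;
}

@Data
class BaseEvent {
    @JsonProperty("event_type")
    private String eventType;

    @JsonProperty("session_id")
    private String sessionId;

    @JsonProperty("user_id")
    private String userId;

    @JsonProperty("page_url")
    private String pageUrl;

    @JsonProperty("event_timestamp")
    private Long eventTimestamp;

    @JsonProperty("ip_address")
    private String ipAddress;

    @JsonProperty("event_properties")
    private Map<String, Object> eventProperties;
}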

Response model
@Data
@AllArgsConstructor
public class TrackingResponse {
    private boolean success;
    private String message;
    private Long timestamp;
    
    public static TrackingResponse success() {
        return new TrackingResponse(true, "OK", System.currentTimeMillis());
    }
    
    public static TrackingResponse error(String message) {
        return new TrackingResponse(false, message, System.currentTimeMillis());
    }
}

3.3 Kafka integration

3.3.1 Kafka producer configuration

# application.yml Kafka settings
spring:
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      properties:
        enable.idempotence: true    # idempotent producer
        acks: all                   # strongest durability
        retries: 3                  # bounded retries
        max.block.ms: 5000          # max time send() may block when the buffer is full
        delivery.timeout.ms: 30000  # overall send timeout
3.3.2 Kafka producer service

@Service
@Slf4j
public class KafkaProducerService {
    
    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;
    
    public void sendTrackingEvent(String topic, List<BaseEvent> events) {
        try {
            // Mask personal data before it leaves the service (GDPR)
            events.forEach(this::anonymizeSensitiveData);
            
            kafkaTemplate.send(topic, events)
                .addCallback(
                    result -> log.debug("Successfully sent {} events to topic: {}", 
                                      events.size(), topic),
                    ex -> log.error("Failed to send events to Kafka topic: {}", topic, ex)
                );
                
        } catch (Exception e) {
            log.error("Kafka send error, topic: {}, cause: {}", topic, e.getMessage());
            // Log and move on; never block the front-end response
        }
    }
    
    private void anonymizeSensitiveData(BaseEvent event) {
        // IP anonymization: keep the /24 prefix and zero the last octet
        if (event.getIpAddress() != null) {
            String[] segments = event.getIpAddress().split("\\.");
            if (segments.length == 4) {
                event.setIpAddress(segments[0] + "." + segments[1] + "." + segments[2] + ".0");
            }
        }
    }
}

3.4 MyBatis Plus integration

3.4.1 Data source and configuration

MySQL data source (metadata for the admin console):
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/tracking_metadata?useUnicode=true&characterEncoding=utf8
    username: tracking_user
    password: ${METADATA_DB_PASSWORD}
    hikari:
      maximum-pool-size: 10
      minimum-idle: 5

mybatis-plus:
  configuration:
    map-underscore-to-camel-case: true
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
  global-config:
    db-config:
      id-type: ASSIGN_ID
      logic-delete-field: deleted
      logic-delete-value: 1
      logic-not-delete-value: 0
3.4.2 Metadata entities

Event definition table (managed from the admin console):
@Data
@TableName("event_definitions")
public class EventDefinition {
    @TableId(type = IdType.ASSIGN_ID)
    private Long id;
    
    @TableField("event_type")
    private String eventType;
    
    @TableField("event_name")
    private String eventName;
    
    @TableField("description")
    private String description;
    
    @TableField("topic_name")
    private String topicName;
    
    @TableField("is_active")
    private Boolean isActive;
    
    @TableField(value = "create_time", fill = FieldFill.INSERT)
    private LocalDateTime createTime;
}

Topic configuration table
@Data
@TableName("topic_configurations")
public class TopicConfiguration {
    @TableId(type = IdType.ASSIGN_ID)
    private Long id;
    
    @TableField("topic_name")
    private String topicName;
    
    @TableField("partition_count")
    private Integer partitionCount;
    
    @TableField("retention_days")
    private Integer retentionDays;
    
    @TableField("description")
    private String description;
}
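These stored topic settings only take effect if something applies them to the brokers. A minimal sketch of doing that with Kafka's AdminClient, based on the TopicConfiguration entity above; the class name, replication factor, and wiring are assumptions, not part of the original design:

// Hypothetical helper that creates topics from TopicConfiguration rows (illustrative only)
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

public class TopicProvisioner {

    private final AdminClient adminClient;

    public TopicProvisioner(String bootstrapServers) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        this.adminClient = AdminClient.create(props);
    }

    /** Creates each configured topic with its partition count and retention. */
    public void provision(List<TopicConfiguration> configs) throws Exception {
        List<NewTopic> topics = configs.stream()
            .map(cfg -> new NewTopic(cfg.getTopicName(), cfg.getPartitionCount(), (short) 2)
                .configs(Map.of(TopicConfig.RETENTION_MS_CONFIG,
                    String.valueOf(TimeUnit.DAYS.toMillis(cfg.getRetentionDays())))))
            .collect(Collectors.toList());

        // Topics that already exist fail individually with TopicExistsException; real code should ignore those
        adminClient.createTopics(topics).all().get(30, TimeUnit.SECONDS);
    }
}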
3.4.3 MyBatis Plus service

@Service
public class MetadataService extends ServiceImpl<EventDefinitionMapper, EventDefinition> {
    
    /**
     * List the active event definitions.
     */
    public List<EventDefinition> getActiveEvents() {
        LambdaQueryWrapper<EventDefinition> query = new LambdaQueryWrapper<>();
        query.eq(EventDefinition::getIsActive, true)
             .orderByAsc(EventDefinition::getEventType);
        return baseMapper.selectList(query);
    }
    
    /**
     * Resolve the Kafka topic for a given event type.
     */
    public String getTopicByEventType(String eventType) {
        EventDefinition definition = lambdaQuery()
            .eq(EventDefinition::getEventType, eventType)
            .eq(EventDefinition::getIsActive, true)
            .one();
        return definition != null ? definition.getTopicName() : "default_events";
    }
}
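MetadataService extends ServiceImpl over an EventDefinitionMapper, and the project layout in Part 5 lists a MybatisPlusConfig.java, but neither is shown in the article. A minimal sketch of both, with the pagination interceptor added as an assumption:

// Hypothetical mapper and MyBatis Plus configuration (illustrative only)
import com.baomidou.mybatisplus.annotation.DbType;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor;
import com.baomidou.mybatisplus.extension.plugins.inner.PaginationInnerInterceptor;
import org.apache.ibatis.annotations.Mapper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Mapper
public interface EventDefinitionMapper extends BaseMapper<EventDefinition> {
    // BaseMapper already provides the CRUD used by MetadataService; add custom SQL here if needed
}

@Configuration
class MybatisPlusConfig {

    // Pagination support for the admin console's list pages
    @Bean
    public MybatisPlusInterceptor mybatisPlusInterceptor() {
        MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
        interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.MYSQL));
        return interceptor;
    }
}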

3.5 Monitoring and health checks

3.5.1 Spring Boot Actuator

# Actuator endpoint exposure
management:
  endpoints:
    web:
      exposure:
        include: health,metrics,info,prometheus
  endpoint:
    health:
      show-details: always
    metrics:
      enabled: true
3.5.2 Custom health indicator

@Component
public class KafkaHealthIndicator implements HealthIndicator {
    
    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;
    
    @Override
    public Health health() {
        try {
            // Probe Kafka connectivity with a small synchronous send
            kafkaTemplate.send("health-check", "test").get(5, TimeUnit.SECONDS);
            return Health.up()
                .withDetail("bootstrapServers", kafkaTemplate.getProducerFactory()
                    .getConfigurationProperties()
                    .get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG))
                .build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}
3.5.3 Business metrics
@Component
public class TrackingMetrics {
    
    private final MeterRegistry meterRegistry;
    private final Counter receivedEventsCounter;
    private final Timer kafkaLatencyTimer;
    
    public TrackingMetrics(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
        this.receivedEventsCounter = Counter.builder("tracking.events.received")
            .description("Total number of received tracking events")
            .register(meterRegistry);
            
        this.kafkaLatencyTimer = Timer.builder("tracking.kafka.latency")
            .description("Kafka write latency")
            .register(meterRegistry);
    }
    
    public void recordEvent(String topic, int eventCount) {
        receivedEventsCounter.increment(eventCount);
        meterRegistry.counter("tracking.events.by.topic", "topic", topic)
                    .increment(eventCount);
    }
}

3.6 Error handling and retries

3.6.1 Global exception handling

@ControllerAdvice
@Slf4j
public class GlobalExceptionHandler {
    
    @ExceptionHandler(KafkaException.class)
    public ResponseEntity<ErrorResponse> handleKafkaException(KafkaException ex) {
        log.error("Kafka processing error", ex);
        // Return 429 so the front-end SDK backs off and retries
        return ResponseEntity.status(HttpStatus.TOO_MANY_REQUESTS)
            .body(ErrorResponse.of("SYSTEM_BUSY", "System busy, please retry later"));
    }
    
    @ExceptionHandler(Exception.class)
    public ResponseEntity<ErrorResponse> handleGenericException(Exception ex) {
        log.error("Unexpected error", ex);
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
            .body(ErrorResponse.of("INTERNAL_ERROR", "Internal server error"));
    }
}
3.6.2 Async thread pool
@Configuration
@EnableAsync
public class AsyncConfig {
    
    @Bean("trackingTaskExecutor")
    public ThreadPoolTaskExecutor trackingTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(20);
        executor.setQueueCapacity(1000);
        executor.setThreadNamePrefix("TrackingAsync-");
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setAwaitTerminationSeconds(60);
        return executor;
    }
}

3.7 Configuration and environments

3.7.1 Per-environment configuration

# application-prod.yml — production profile
spring:
  kafka:
    bootstrap-servers: kafka-cluster-1:9092,kafka-cluster-2:9092,kafka-cluster-3:9092
  datasource:
    url: jdbc:mysql://mysql-master:3306/tracking_metadata?useSSL=true

tracking:
  batch-size: 50    # larger batches in production
  timeout-ms: 10000 # longer timeout
3.7.2 Typed configuration class
@Configuration
@ConfigurationProperties(prefix = "tracking")
@Data
public class TrackingConfig {
    private Integer batchSize = 10;
    private Long timeoutMs = 5000L;
    private Boolean enableValidation = true;
    private List<String> allowedTopics = Arrays.asList(
        "page_view_events", "click_events", "scroll_events", "form_events");
}
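The controller's validateRequest body is never shown; one way the allowedTopics whitelist above could be enforced is sketched below (an assumption, not the article's implementation):

// Hypothetical validator using TrackingConfig (illustrative only)
@Component
public class TrackingRequestValidator {

    private final TrackingConfig config;

    public TrackingRequestValidator(TrackingConfig config) {
        this.config = config;
    }

    public boolean isValid(TrackingRequest request) {
        // Reject empty payloads and, when validation is enabled, unknown topics
        return request != null
            && request.getEvents() != null
            && !request.getEvents().isEmpty()
            && (!config.getEnableValidation() || config.getAllowedTopics().contains(request.getTopic()));
    }
}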

This Spring Boot service integrates cleanly with the JavaScript SDK: Kafka absorbs the write load from 10,000 concurrent users, while MyBatis Plus manages the admin metadata on a separate path, keeping the system both fast and reliable. Everything stays within the conventions and constraints laid out in Part 1.

Part 4: High-Concurrency Write Optimization and Connection Pools

🚀 Where the write bottleneck is

With the 10,000-concurrent-user target, the front-end SDK flushes a batch every 3 seconds or every 10 events, so the theoretical peak write rate is roughly:

  • 10,000 users × 10 events per batch ÷ 3 seconds ≈ 33,000 events/second

In this architecture MySQL only serves admin-console metadata queries; the real-time event stream goes through Kafka asynchronously. This part therefore focuses on the Kafka producer's concurrent write capacity.

🔧 Kafka producer tuning for high concurrency

Key producer parameters (recommended values)

  • Batching — batch.size = 32768 (32 KB): larger batches mean fewer network round trips

  • Batching — linger.ms = 50: wait a little longer so each batch carries more messages

  • Buffering — buffer.memory = 67108864 (64 MB): enough headroom for the ~33,000 events/second peak

  • Reliability — acks = all: no data loss (already configured)

  • Reliability — retries = 3: bounded retries, no infinite retry loops

  • Concurrency — max.in.flight.requests.per.connection = 5: allow more requests in flight per connection

Spring Boot configuration

spring:
  kafka:
    producer:
      bootstrap-servers: localhost:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      properties:
        # Throughput tuning
        batch.size: 32768
        linger.ms: 50
        buffer.memory: 67108864
        compression.type: snappy  # compress batches to save network bandwidth
        max.in.flight.requests.per.connection: 5
        # Reliability (as configured earlier)
        enable.idempotence: true
        acks: all
        retries: 3

⚡ Thread pool tuning

The trackingTaskExecutor defined earlier (core=5, max=20, queue=1000) can become a bottleneck at the ~33,000 events/second peak. Note also that CompletableFuture.runAsync only uses this pool when the executor is passed as its second argument; otherwise the work lands on the common ForkJoinPool.

Revised thread pool configuration

@Configuration
public class KafkaProducerConfig {
    
    @Bean("trackingTaskExecutor")
    public ThreadPoolTaskExecutor trackingTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);      // more core threads
        executor.setMaxPoolSize(30);       // max threads sized for the peak
        executor.setQueueCapacity(2000);   // larger queue
        executor.setThreadNamePrefix("kafka-producer-");
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setAwaitTerminationSeconds(30);
        executor.initialize();
        return executor;
    }
}
Multiple producer instances

For extreme loads, several KafkaProducer instances can share the work (see the shutdown sketch after this block):

@Component
public class MultiProducerManager {
    private final List<KafkaProducer<String, Object>> producers = new ArrayList<>();
    private final AtomicInteger counter = new AtomicInteger(0);
    
    @PostConstruct
    public void init() {
        // Three producer instances share the load
        for (int i = 0; i < 3; i++) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32768);
            // ... remaining producer configuration
            producers.add(new KafkaProducer<>(props));
        }
    }
    
    public KafkaProducer<String, Object> getProducer() {
        // Round-robin across producer instances (floorMod keeps the index valid after overflow)
        int index = Math.floorMod(counter.getAndIncrement(), producers.size());
        return producers.get(index);
    }
}
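One thing the manager above omits is releasing the producers on shutdown. A small addition along these lines (an assumption, not in the original) flushes buffered records and closes the clients cleanly:

// Hypothetical shutdown hook for MultiProducerManager (illustrative only)
@PreDestroy
public void shutdown() {
    for (KafkaProducer<String, Object> producer : producers) {
        try {
            // Flush buffered records, then close within a bounded time
            producer.flush();
            producer.close(Duration.ofSeconds(10));
        } catch (Exception e) {
            // Best effort: keep closing the remaining producers
        }
    }
}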

📊 Connection pool configuration (admin queries only)

Real-time events never hit MySQL directly, but the admin console's metadata queries still deserve a well-tuned pool.

Druid connection pool configuration
spring:
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    druid:
      # Pool sizing
      initial-size: 5
      min-idle: 5
      max-active: 20
      max-wait: 60000
      
      # Connection validation
      validation-query: SELECT 1
      test-while-idle: true
      test-on-borrow: false
      time-between-eviction-runs-millis: 60000
      min-evictable-idle-time-millis: 300000
      
      # Built-in monitoring
      filters: stat,wall,slf4j
      web-stat-filter:
        enabled: true
      stat-view-servlet:
        enabled: true
        url-pattern: /druid/*
        login-username: admin
        login-password: admin

🔍 Monitoring and validating the tuning

Extra producer metrics

On top of the existing monitoring, add key Kafka producer metrics:
@Component
public class KafkaProducerMetrics {
    
    private final MeterRegistry meterRegistry;
    private final Counter batchSizeCounter;
    private final Timer produceLatencyTimer;
    
    public KafkaProducerMetrics(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
        this.batchSizeCounter = Counter.builder("kafka.producer.batch.size")
                .description("Kafka生产者批次大小分布")
                .register(meterRegistry);
        this.produceLatencyTimer = Timer.builder("kafka.producer.latency")
                .description("Kafka生产者发送延迟")
                .register(meterRegistry);
    }
    
    public void recordProduceMetrics(int batchSize, long duration) {
        batchSizeCounter.increment(batchSize);
        produceLatencyTimer.record(duration, TimeUnit.MILLISECONDS);
    }
}
Load-test checklist
  1. Baseline: measure TPS before and after the tuning

  2. Soak test: run for an extended period and watch for memory leaks and GC pressure

  3. Peak test: replay the ~33,000 events/second peak and confirm the system stays stable

💡 Practical tuning advice

  1. Tune incrementally: adjust batch.size and linger.ms first, observe the effect, then touch the thread pool

  2. Let monitoring drive decisions: judge every change by its metrics rather than guessing

  3. Never trade safety for speed: keep acks=all; data reliability is not negotiable

  4. Leave headroom: keep 20-30% spare capacity in production for traffic spikes

With these changes combined, the system can sustain the real-time behavior data produced by 10,000 concurrent users and feed a clean, high-quality stream into ClickHouse for analysis.

Part 5: Complete Code Layout and Deployment

5.1 Project structure and core code assets

Full directory layout

user-tracking-system/
├── frontend-sdk/                 # front-end tracking SDK
│   ├── src/
│   │   ├── WebTracingSDK.js      # core SDK class (~3 KB gzipped)
│   │   ├── tracking.d.ts         # TypeScript type declarations
│   │   └── utils/                # helper functions
│   ├── package.json
│   └── webpack.config.js
├── backend/                      # Spring Boot backend service
│   ├── src/main/java/com/tracking/
│   │   ├── controller/
│   │   │   └── TrackingController.java    # /api/track ingestion endpoint
│   │   ├── service/
│   │   │   ├── KafkaProducerService.java  # Kafka send service
│   │   │   └── MetadataService.java       # metadata CRUD
│   │   ├── config/
│   │   │   ├── AsyncConfig.java           # async thread pool configuration
│   │   │   └── MybatisPlusConfig.java    # MyBatis Plus configuration
│   │   └── entity/               # data entities
│   ├── src/main/resources/
│   │   ├── application.yml       # main configuration (dev/prod profiles)
│   │   └── mapper/               # MyBatis mapper XML files
│   └── pom.xml
├── deployment/                   # deployment assets
│   ├── docker-compose.yml        # local development environment
│   ├── k8s/                      # Kubernetes production deployment
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   └── hpa.yaml              # horizontal pod autoscaling
│   └── scripts/
│       ├── init-db.sql           # database initialization script
│       └── start.sh              # one-command startup script
└── monitoring/                   # monitoring configuration
    ├── prometheus.yml            # scrape configuration
    └── grafana-dashboard.json    # pre-built dashboard

5.2 Core code files

Front-end SDK core (WebTracingSDK.js)
class WebTracingSDK {
    constructor(config) {
        this.config = Object.assign({
            appId: 'default',
            endpoint: '/api/track',
            batchSize: 10,
            sendInterval: 3000,
            maxRetries: 3,
            debug: false
        }, config);
        
        this.queue = [];
        this.init();
    }

    // Set up automatic tracking
    init() {
        this.autoTrackPageView();
        this.autoTrackClicks();
        this.autoTrackErrors();
        
        // Flush batches on a timer
        setInterval(() => this.sendBatch(), this.config.sendInterval);
        
        // Flush immediately before the page unloads
        window.addEventListener('beforeunload', () => this.sendImmediate());
    }

    track(eventType, properties = {}) {
        const event = {
            event_type: eventType,
            properties,
            timestamp: Date.now(),
            page_url: window.location.href,
            user_agent: navigator.userAgent,
            session_id: this.getSessionId()
        };
        
        this.queue.push(event);
        if (this.queue.length >= this.config.batchSize) {
            this.sendBatch();
        }
    }

    async sendBatch() {
        if (this.queue.length === 0) return;
        
        const batch = [...this.queue];
        this.queue = [];
        
        try {
            const response = await fetch(this.config.endpoint, {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify({ events: batch })
            });
            
            if (!response.ok) throw new Error(`HTTP ${response.status}`);
        } catch (error) {
            // Retry path
            this.retrySend(batch);
        }
    }
}

Ingestion endpoint (TrackingController.java)
@RestController
@RequestMapping("/api")
@Slf4j
public class TrackingController {
    
    @Autowired
    private KafkaProducerService kafkaService;
    
    @PostMapping("/track")
    public ResponseEntity<Map<String, Object>> trackUserBehavior(
            @RequestBody TrackRequest request) {
        
        // 1. Validate and clean the payload
        if (!validateTrackRequest(request)) {
            return ResponseEntity.badRequest().body(Map.of("error", "Invalid request"));
        }
        
        // 2. Hand off to Kafka asynchronously
        CompletableFuture.runAsync(() -> {
            try {
                kafkaService.sendTrackingEvent(request);
                log.debug("Tracking events sent to Kafka: {}", request.getSessionId());
            } catch (Exception e) {
                log.error("Kafka send failed: {}", e.getMessage());
            }
        });
        
        return ResponseEntity.ok(Map.of("status", "success", "timestamp", System.currentTimeMillis()));
    }
    
    private boolean validateTrackRequest(TrackRequest request) {
        return request != null && 
               request.getEvents() != null && 
               !request.getEvents().isEmpty() &&
               request.getUserConsent() != null &&
               request.getUserConsent();
    }
}

Kafka producer service (KafkaProducerService.java)

@Service
@Slf4j
public class KafkaProducerService {
    
    @Value("${kafka.topic.tracking-events}")
    private String trackingTopic;
    
    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;
    
    // Idempotent producer factory (normally declared in a @Configuration class rather than a @Service)
    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32768);  // 32 KB
        props.put(ProducerConfig.LINGER_MS_CONFIG, 50);
        return new DefaultKafkaProducerFactory<>(props);
    }
    
    public void sendTrackingEvent(TrackRequest request) {
        // Anonymize the client IP before the event leaves the service
        String anonymizedIp = anonymizeIp(request.getIpAddress());
        request.setIpAddress(anonymizedIp);
        
        kafkaTemplate.send(trackingTopic, request.getSessionId(), request)
                   .addCallback(
                       result -> log.debug("Kafka send succeeded"),
                       ex -> log.error("Kafka send failed: {}", ex.getMessage())
                   );
    }
    
    private String anonymizeIp(String ipAddress) {
        if (ipAddress == null) return null;
        return ipAddress.replaceAll("(\\.\\d+)$", ".0");
    }
}

5.3 Three deployment options

5.3.1 Local quick start with Docker Compose

docker-compose.yml
version: '3.8'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092  # advertise the service name so the app container can connect
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    ports: ["9092:9092"]
    
  clickhouse:
    image: clickhouse/clickhouse-server:23.3
    ports: ["8123:8123", "9000:9000"]
    volumes:
      - clickhouse_data:/var/lib/clickhouse
    
  redis:
    image: redis:7-alpine
    ports: ["6379:6379"]
    
  grafana:
    image: grafana/grafana:9.5.0
    ports: ["3000:3000"]
    environment:
      GF_SECURITY_ADMIN_PASSWORD: admin123
    
  app:
    build: ./backend
    ports: ["8080:8080"]
    depends_on: [kafka, clickhouse, redis]
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092
      CLICKHOUSE_URL: jdbc:clickhouse://clickhouse:8123/default

# Named volume referenced by the clickhouse service
volumes:
  clickhouse_data:

One-command startup script (start.sh)

#!/bin/bash
echo "🚀 Starting the user behavior tracking system..."

# Check that Docker is available
if ! command -v docker &> /dev/null; then
    echo "❌ Docker is not installed; please install Docker first"
    exit 1
fi

# Build and start the services
docker-compose down
docker-compose build --no-cache
docker-compose up -d

echo "⏳ Waiting for services to come up..."
sleep 30

# Health checks
echo "🔍 Checking service status..."
curl -f http://localhost:8080/actuator/health || echo "❌ Application service is unhealthy"
curl -f http://localhost:3000/api/health || echo "❌ Grafana is unhealthy"

echo "✅ System is up!"
echo "📊 Grafana dashboards: http://localhost:3000 (admin/admin123)"
echo "🔗 API docs: http://localhost:8080/swagger-ui.html"
5.3.2 Production deployment on Kubernetes

k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tracking-app
  namespace: tracking-system
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tracking-app
  template:
    metadata:
      labels:
        app: tracking-app
    spec:
      containers:
      - name: tracking-app
        image: your-registry/tracking-app:1.0.0
        ports:
        - containerPort: 8080
        env:
        - name: KAFKA_BOOTSTRAP_SERVERS
          value: "kafka-cluster:9092"
        - name: METADATA_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: tracking-service
spec:
  selector:
    app: tracking-app
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer

Horizontal Pod Autoscaler (hpa.yaml)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tracking-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tracking-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
5.3.3 Traditional server deployment

systemd unit (/etc/systemd/system/tracking.service)
[Unit]
Description=User Tracking System
After=network.target

[Service]
Type=simple
User=tracking
WorkingDirectory=/opt/tracking-system
ExecStart=/usr/bin/java $JAVA_OPTS -jar tracking-app.jar
Environment=SPRING_PROFILES_ACTIVE=prod
Environment=KAFKA_BOOTSTRAP_SERVERS=kafka1:9092,kafka2:9092,kafka3:9092
Environment="JAVA_OPTS=-Xmx2g -Xms1g -XX:+UseG1GC"
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

5.4 Database initialization and sample data

init-db.sql

-- Metadata tables
CREATE TABLE event_definitions (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    event_name VARCHAR(100) NOT NULL UNIQUE,
    event_description TEXT,
    properties_schema JSON COMMENT 'JSON Schema for the event properties',
    is_active BOOLEAN DEFAULT TRUE,
    created_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE TABLE topic_configurations (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    topic_name VARCHAR(100) NOT NULL UNIQUE,
    partition_count INT DEFAULT 3,
    retention_hours INT DEFAULT 168 COMMENT 'data retention in hours',
    created_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Sample data
INSERT INTO event_definitions (event_name, event_description, properties_schema) VALUES
('page_view', 'Page view event', '{"required":["page_url","referrer"],"properties":{"page_url":{"type":"string"},"referrer":{"type":"string"},"stay_duration":{"type":"integer"}}}'),
('button_click', 'Button click event', '{"required":["button_id","page_url"],"properties":{"button_id":{"type":"string"},"page_url":{"type":"string"},"text_content":{"type":"string"}}}');

INSERT INTO topic_configurations (topic_name, partition_count, retention_hours) VALUES
('page_view_events', 5, 720),
('click_events', 3, 168);

5.5 Monitoring and alerting

Prometheus scrape configuration (prometheus.yml)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'springboot-app'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['tracking-app:8080']
    
  - job_name: 'kafka'
    static_configs:
      - targets: ['kafka-exporter:9308']
    
  - job_name: 'clickhouse'
    static_configs:
      - targets: ['clickhouse-exporter:9116']

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']

rule_files:
  - "alerts.yml"

Key alert rules (alerts.yml)
groups:
- name: tracking-system
  rules:
  - alert: KafkaLagHigh
    expr: kafka_consumergroup_lag > 100000
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Kafka消费延迟过高"
      
  - alert: HighThreadPoolQueueUsage
    expr: thread_pool_queue_usage > 0.8
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: "线程池队列使用率超过80%"

5.6 Load testing

Locust test script (locustfile.py)
from locust import HttpUser, task, between
import random
import time

class TrackingUser(HttpUser):
    wait_time = between(0.5, 2)
    
    @task(3)
    def track_page_view(self):
        event_data = {
            "topic": "page_view_events",
            "user_consent": True,
            "events": [{
                "event_type": "page_view",
                "page_url": f" https://example.com/page/ {random.randint(1,100)}",
                "referrer": " https://google.com ",
                "timestamp": int(time.time() * 1000),
                "session_id": f"session_{random.randint(1000,9999)}"
            }]
        }
        self.client.post("/api/track", json=event_data)
    
    @task(1)
    def track_click_event(self):
        event_data = {
            "topic": "click_events", 
            "user_consent": True,
            "events": [{
                "event_type": "button_click",
                "button_id": f"btn_{random.randint(1,50)}",
                "page_url": " https://example.com/product/123 ",
                "timestamp": int(time.time() * 1000)
            }]
        }
        self.client.post("/api/track", json=event_data)

Running the test

# Ramp up to 10,000 concurrent users, adding 100 users per second
locust -f locustfile.py --host=http://localhost:8080 --users=10000 --spawn-rate=100

5.7 Post-deployment verification checklist

After deployment, run through the following checks:

  1. Basic functionality

    curl -X POST http://localhost:8080/api/track \
      -H "Content-Type: application/json" \
      -d '{"topic":"page_view_events","user_consent":true,"events":[{"event_type":"page_view","page_url":"https://test.com"}]}'

  2. Monitoring dashboards

  3. Data pipeline

    • Confirm events arriving in the Kafka topics

    • Verify rows landing in ClickHouse

    • Check that the Grafana charts keep updating

  4. Failure drills

    • Restart a single service instance and confirm automatic recovery

    • Simulate a network partition and verify the system degrades gracefully
