Backend Architecture for a Taoke Cashback App: From Data Consistency to High Availability

Published: 2025-09-15

Hi everyone, I'm Ake, founder of the 微赚淘客 system and the 省赚客 APP, the kind of programmer who skips long johns in winter because looking sharp beats staying warm!

1. Core Architecture of a Taoke Cashback System

A Taoke cashback APP backend has to handle the core business of product promotion, order tracking, and commission settlement, and its architecture has to deliver both data consistency and high availability. 聚娃科技's 省赚客 APP uses a "microservice cluster + distributed storage" architecture built around these core modules:

  • Promotion service: generates promotion links and tracks referral channels
  • Order service: integrates with e-commerce platform APIs and syncs order status
  • Commission service: calculates cashback amounts and processes withdrawal requests
  • Account service: manages user assets and commission ledgers

The foundation is the Spring Cloud Alibaba ecosystem: services communicate over Dubbo RPC, and the data layer combines MySQL, Redis, and MongoDB in a hybrid storage scheme, wired roughly as sketched below.
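
A minimal wiring sketch (the CommissionFacade contract, its method, and the Dubbo attributes are illustrative assumptions, not the production API): the commission service exposes a Dubbo interface that the order service references as a remote proxy.

package cn.juwatech.commission.api;

// Hypothetical RPC contract exposed by the commission service;
// the provider implementation would be annotated with @DubboService
public interface CommissionFacade {
    void settleCommission(String orderId);
}

package cn.juwatech.order.listener;

import org.apache.dubbo.config.annotation.DubboReference;
import org.springframework.stereotype.Component;

@Component
public class OrderSettledListener {

    // Dubbo injects a remote proxy of the commission service
    @DubboReference(timeout = 3000)
    private CommissionFacade commissionFacade;

    public void onOrderSettled(String orderId) {
        commissionFacade.settleCommission(orderId);
    }
}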

2. Data Consistency Design

2.1 Distributed Transaction Handling

Commission settlement has to keep the order status and the commission record consistent; this is implemented with Seata's TCC mode:

package cn.juwatech.commission.service;

import java.math.BigDecimal;

import io.seata.rm.tcc.api.BusinessActionContext;
import io.seata.rm.tcc.api.BusinessActionContextParameter;
import io.seata.rm.tcc.api.TwoPhaseBusinessAction;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// TccAction is assumed to be the project's own @LocalTCC-annotated TCC interface,
// declaring the three method signatures below
@Service
public class CommissionTccService implements TccAction {

    @Autowired
    private CommissionMapper commissionMapper;

    @Autowired
    private AccountService accountService;

    // Try phase: pre-freeze the commission amount
    @Override
    @TwoPhaseBusinessAction(name = "commissionTccAction", commitMethod = "commit", rollbackMethod = "rollback")
    public boolean prepare(BusinessActionContext context,
                           @BusinessActionContextParameter(paramName = "orderId") String orderId,
                           @BusinessActionContextParameter(paramName = "commissionDTO") CommissionDTO dto) {
        // Check the user's withdrawable balance
        BigDecimal available = accountService.getAvailableBalance(dto.getUserId());
        if (available.compareTo(dto.getAmount()) < 0) {
            throw new InsufficientBalanceException("Insufficient available commission");
        }

        // Freeze the commission
        return commissionMapper.freezeCommission(dto.getUserId(), dto.getAmount(), orderId) > 0;
    }

    // Confirm phase: deduct the frozen amount for good
    // (Seata may retry both phases, so the mapper updates must be idempotent)
    @Override
    public boolean commit(BusinessActionContext context) {
        String orderId = String.valueOf(context.getActionContext("orderId"));
        return commissionMapper.confirmFrozenCommission(orderId) > 0;
    }

    // Cancel phase: release the frozen amount
    @Override
    public boolean rollback(BusinessActionContext context) {
        String orderId = String.valueOf(context.getActionContext("orderId"));
        return commissionMapper.cancelFrozenCommission(orderId) > 0;
    }
}
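
How the action gets driven: the Try method is invoked inside a Seata global transaction. A hedged sketch (the WithdrawService class and the payment step are assumptions, not production code):

package cn.juwatech.commission.service;

import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class WithdrawService {

    @Autowired
    private CommissionTccService commissionTccService;

    // Opens the Seata global transaction: prepare() runs as the Try phase,
    // and Seata later calls commit() or rollback() based on the outcome
    @GlobalTransactional(name = "commission-withdraw", rollbackFor = Exception.class)
    public void withdraw(CommissionDTO dto) {
        // Pass null for the context; the Seata proxy fills it in
        commissionTccService.prepare(null, dto.getOrderId(), dto);
        // ... invoke the payment channel here; any exception triggers Cancel
    }
}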

2.2 Eventual Consistency for Order Status

Order status changes are propagated across systems with a local message table:

package cn.juwatech.order.service;

import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.apache.rocketmq.spring.core.RocketMQTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Slf4j
@Service
public class OrderSyncService {

    @Autowired
    private OrderMapper orderMapper;

    @Autowired
    private LocalMessageMapper messageMapper;

    @Autowired
    private RocketMQTemplate rocketMQTemplate;

    @Transactional
    public void updateOrderStatus(Long orderId, OrderStatus newStatus) {
        // 1. Update the local order status
        OrderDO order = new OrderDO();
        order.setId(orderId);
        order.setStatus(newStatus);
        orderMapper.updateById(order);

        // 2. Insert into the local message table, inside the same local transaction
        LocalMessageDO message = new LocalMessageDO();
        message.setBizType("ORDER_STATUS_CHANGE");
        message.setBizId(orderId.toString());
        message.setContent(JSON.toJSONString(order));
        message.setStatus(MessageStatus.PENDING);
        messageMapper.insert(message);

        // 3. Send the message (best effort; note this fires before the transaction
        // commits, so strictly the send belongs after commit or in the scanner)
        try {
            rocketMQTemplate.syncSend("order-status-topic",
                MessageBuilder.withPayload(message.getContent()).build());
            // 4. Mark the message as sent
            messageMapper.updateStatus(message.getId(), MessageStatus.SENT);
        } catch (Exception e) {
            log.error("Failed to send order status change message", e);
            // Left PENDING; a scheduled task retries delivery
        }
    }
}
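
The compensating scheduled task referenced in the catch block can be sketched as follows (selectPendingBefore is an assumed mapper helper, not an existing API):

package cn.juwatech.order.task;

import java.time.LocalDateTime;
import java.util.List;

import lombok.extern.slf4j.Slf4j;
import org.apache.rocketmq.spring.core.RocketMQTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Slf4j
@Component
public class LocalMessageRetryTask {

    @Autowired
    private LocalMessageMapper messageMapper;

    @Autowired
    private RocketMQTemplate rocketMQTemplate;

    // Re-deliver messages still PENDING one minute after creation
    @Scheduled(fixedDelay = 60_000)
    public void retryPendingMessages() {
        List<LocalMessageDO> pending =
                messageMapper.selectPendingBefore(LocalDateTime.now().minusMinutes(1));
        for (LocalMessageDO message : pending) {
            try {
                rocketMQTemplate.syncSend("order-status-topic",
                        MessageBuilder.withPayload(message.getContent()).build());
                messageMapper.updateStatus(message.getId(), MessageStatus.SENT);
            } catch (Exception e) {
                log.warn("Retry of local message {} failed, will try again later", message.getId(), e);
            }
        }
    }
}

Because retries mean a message can be delivered more than once, consumers of ORDER_STATUS_CHANGE must handle it idempotently.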

3. High-Availability Practices

3.1 Service Fault Tolerance

Sentinel provides circuit breaking and rate limiting:

package cn.juwatech.product.service;

import com.alibaba.csp.sentinel.annotation.SentinelResource;
import com.alibaba.csp.sentinel.slots.block.BlockException;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Slf4j
@Service
public class ProductServiceImpl implements ProductService {

    @Autowired
    private TaobaoProductClient taobaoClient;

    @Autowired
    private ProductCacheService productCacheService;

    @SentinelResource(value = "queryProductDetail",
                     fallback = "queryProductFallback",
                     blockHandler = "queryProductBlocked")
    @Override
    public ProductDTO queryProductDetail(String itemId) {
        return taobaoClient.getItemDetail(itemId);
    }

    // Fallback: the upstream call failed, e.g. the Taobao API is down
    public ProductDTO queryProductFallback(String itemId, Throwable e) {
        log.warn("Product detail query degraded", e);
        // Serve cached basic product info instead
        return productCacheService.getCachedProduct(itemId);
    }

    // Block handler: the flow rule rejected the call
    public ProductDTO queryProductBlocked(String itemId, BlockException e) {
        log.warn("Product detail query rate-limited", e);
        return new ProductDTO().setId(itemId).setName("Product info loading...");
    }
}
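
The limits behind queryProductBlocked still have to be defined somewhere. A minimal sketch registering a flow rule in code (in production the rules usually come from the Sentinel dashboard or Nacos; the 100 QPS threshold is illustrative):

package cn.juwatech.product.config;

import java.util.Collections;

import javax.annotation.PostConstruct;

import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SentinelRuleConfig {

    @PostConstruct
    public void initFlowRules() {
        FlowRule rule = new FlowRule();
        rule.setResource("queryProductDetail"); // matches the @SentinelResource value
        rule.setGrade(RuleConstant.FLOW_GRADE_QPS); // limit by QPS
        rule.setCount(100); // illustrative threshold
        FlowRuleManager.loadRules(Collections.singletonList(rule));
    }
}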

3.2 Multi-Level Cache Architecture

实现"本地缓存+Redis+数据库"三级缓存:

package cn.juwatech.cache.config;

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class CacheConfig {

    // Caffeine local cache: small and short-lived, absorbs hot keys
    @Bean
    public Cache<String, Object> localCache() {
        return Caffeine.newBuilder()
                .maximumSize(10_000)
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .recordStats()
                .build();
    }

    // Redis cache: shared across instances, JSON-serialized values
    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory factory) {
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofHours(1))
                .serializeKeysWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new GenericJackson2JsonRedisSerializer()));

        // Different caches get different TTLs
        Map<String, RedisCacheConfiguration> configMap = new HashMap<>();
        configMap.put("productCache", config.entryTtl(Duration.ofHours(6)));
        configMap.put("userCache", config.entryTtl(Duration.ofHours(24)));

        return RedisCacheManager.builder(factory)
                .cacheDefaults(config)
                .withInitialCacheConfigurations(configMap)
                .build();
    }
}
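
The read path across the three levels checks Caffeine first, Redis second, and the database last, back-filling each level on the way out. A minimal sketch, assuming a RedisTemplate<String, Object> bean and a ProductMapper DB accessor (both assumptions, not shown in the config above):

package cn.juwatech.cache.service;

import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class ProductCacheReader {

    @Autowired
    private Cache<String, Object> localCache;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private ProductMapper productMapper; // assumed DB accessor

    public ProductDTO getProduct(String itemId) {
        String key = "product:" + itemId;

        // Level 1: Caffeine local cache
        ProductDTO product = (ProductDTO) localCache.getIfPresent(key);
        if (product != null) {
            return product;
        }

        // Level 2: Redis
        product = (ProductDTO) redisTemplate.opsForValue().get(key);
        if (product == null) {
            // Level 3: database, then back-fill Redis
            product = productMapper.selectByItemId(itemId);
            if (product != null) {
                redisTemplate.opsForValue().set(key, product, 6, TimeUnit.HOURS);
            }
        }

        // Back-fill the local cache on any hit below level 1
        if (product != null) {
            localCache.put(key, product);
        }
        return product;
    }
}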

3.3 Database High Availability

The database layer runs primary-replica replication with read/write splitting, configured through Sharding-JDBC:

spring:
  shardingsphere:
    datasource:
      names: master,slave1,slave2
      master:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://master:3306/taoke_db
        username: root
        password: root
      slave1:
        # replica 1, same shape as master
      slave2:
        # replica 2, same shape as master
    rules:
      readwrite-splitting:
        data-sources:
          taoke-db:
            type: Static
            props:
              write-data-source-name: master
              read-data-source-names: slave1,slave2
            load-balancer-name: round_robin
        load-balancers:
          round_robin:
            type: ROUND_ROBIN
    props:
      sql-show: false
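
One consequence of read/write splitting is replication lag: a commission row just written to the master may not yet be visible on a replica. For read-your-own-write paths, ShardingSphere's hint API can force a query onto the master. A sketch (selectByOrderId is an assumed mapper method; verify the exact HintManager API against your ShardingSphere version):

package cn.juwatech.commission.service;

import org.apache.shardingsphere.infra.hint.HintManager;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class CommissionQueryService {

    @Autowired
    private CommissionMapper commissionMapper;

    // Read-your-own-write: route this query to the master to dodge replication lag
    public CommissionDO getFreshCommission(String orderId) {
        try (HintManager hintManager = HintManager.getInstance()) {
            hintManager.setWriteRouteOnly();
            return commissionMapper.selectByOrderId(orderId);
        }
    }
}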

4. Monitoring and Disaster Recovery

4.1 Full-Link Tracing

SkyWalking handles distributed tracing; alongside its agent, a servlet filter stamps each request with a traceId in the logging MDC so log lines can be correlated per request:

package cn.juwatech.trace.config;

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import cn.hutool.core.util.IdUtil;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.MDC;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.Ordered;

@Configuration
public class TraceConfig {

    @Bean
    public FilterRegistrationBean<TraceFilter> traceFilter() {
        FilterRegistrationBean<TraceFilter> registration = new FilterRegistrationBean<>();
        registration.setFilter(new TraceFilter());
        registration.addUrlPatterns("/*");
        registration.setName("traceFilter");
        // Run first so every downstream log line carries the traceId
        registration.setOrder(Ordered.HIGHEST_PRECEDENCE);
        return registration;
    }

    public static class TraceFilter implements Filter {
        @Override
        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            // Reuse an upstream traceId if present, otherwise generate one
            String traceId = MDC.get("traceId");
            if (StringUtils.isEmpty(traceId)) {
                traceId = IdUtil.fastSimpleUUID();
                MDC.put("traceId", traceId);
            }
            try {
                chain.doFilter(request, response);
            } finally {
                // Always clear, or the pooled worker thread leaks the id into the next request
                MDC.remove("traceId");
            }
        }
    }
}
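
Note that the traceId only shows up in output if the logging pattern references the MDC key, for example %X{traceId} in a Logback pattern.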

4.2 Automatic Service Scaling

Kubernetes HPA provides elastic scaling:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70

Copyright of this article belongs to the 聚娃科技省赚客 app development team; please credit the source when reposting!