1) Today's topics
Search results, search history, search suggestions
Environment setup
Indexing, storage, and analysis (tokenization)
Compound multi-condition queries
Highlighting matches in the results
Index data synchronization (create the index entry after an article is published, via Kafka)
Set up MongoDB; its storage model and performance fit this use case better than MySQL
Save search history asynchronously
View the search history list
Initialize the suggestion (associate) words, and where they come from
2) Elasticsearch environment setup
2.1) Pull the image
docker pull elasticsearch:7.4.0
2.2) Create the container
docker run -id --name elasticsearch -d --restart=always -p 9200:9200 -p 9300:9300 -v /usr/share/elasticsearch/plugins:/usr/share/elasticsearch/plugins -e "discovery.type=single-node" elasticsearch:7.4.0
2.3) Configure the Chinese (IK) analyzer
The plugins directory was mapped when the container was created, so it already exists on the host. Put the IK analyzer into the plugins directory; the plugin version must match the Elasticsearch version. Restart the container afterwards so the plugin is loaded.
#switch to the plugins directory
cd /usr/share/elasticsearch/plugins
#create a directory and enter it
mkdir analysis-ik && cd analysis-ik
#move the plugin archive here
mv /root/elasticsearch-analysis-ik-7.4.0.zip /usr/share/elasticsearch/plugins/analysis-ik
#unzip it
cd /usr/share/elasticsearch/plugins/analysis-ik && unzip elasticsearch-analysis-ik-7.4.0.zip
2.4) Test with Postman: POST http://192.168.233.136:9200/_analyze with the request body below
{
"analyzer":"ik_max_word",
"text":"欢迎来到黑马程序员学习"
}
3) App-side article search
3.1) Requirements
Show the results of a search, highlight the matched keywords, and open the article detail page when a result is clicked.
3.2) Approach
- When an article passes review, save it to ES
- A user search queries the ES index and the matching article list is displayed
3.3) Create the index and mapping
Fields to display
Title, layout, cover images, publish time, author (the author id and the static URL are stored but not displayed)
Fields to analyze (tokenize)
Title and content
3.3.1) Mapping
The mapping mirrors the Java entity class.
3.3.2) Create the mapping
PUT http://192.168.233.136:9200/app_info_article
Request body
{
  "mappings":{
    "properties":{
      "id":{
        "type":"long"
      },
      "publishTime":{
        "type":"date"
      },
      "layout":{
        "type":"integer"
      },
      "images":{
        "type":"keyword",
        "index": false
      },
      "staticUrl":{
        "type":"keyword",
        "index": false
      },
      "authorId": {
        "type": "long"
      },
      "authorName": {
        "type": "text"
      },
      "title":{
        "type":"text",
        "analyzer":"ik_smart"
      },
      "content":{
        "type":"text",
        "analyzer":"ik_smart"
      }
    }
  }
}
3.3.3) Query and manage the index
GET request to view the mapping: http://192.168.233.136:9200/app_info_article
DELETE request to drop the index and its mapping: http://192.168.233.136:9200/app_info_article
GET request to query all documents: http://192.168.233.136:9200/app_info_article/_search
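The index can also be created from code instead of Postman. Below is a minimal sketch (not part of the course project; the class and method names are made up for illustration) that uses the RestHighLevelClient configured in the es-init module and the mapping JSON from 3.3.2:
package com.heima.es;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.GetIndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

public class IndexInitDemo {

    /**
     * Create the app_info_article index if it does not exist yet.
     * mappingJson is expected to be the request body shown in 3.3.2.
     */
    public static void createIndexIfAbsent(RestHighLevelClient client, String mappingJson) throws Exception {
        // check whether the index already exists
        boolean exists = client.indices().exists(new GetIndexRequest("app_info_article"), RequestOptions.DEFAULT);
        if (!exists) {
            CreateIndexRequest request = new CreateIndexRequest("app_info_article");
            // the whole body ("mappings", optionally "settings") goes into source()
            request.source(mappingJson, XContentType.JSON);
            client.indices().create(request, RequestOptions.DEFAULT);
        }
    }
}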
3.4) Initialize the index data
3.4.1) Import the es-init module
Dependencies
The client version must match the version of the ES container image
<!--elasticsearch-->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.4.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.4.0</version>
</dependency>
Configuration class
The ES host and port are injected from the configuration file; if your IP is different, change it in the config (a sketch of such a class follows below).
mapper
Queries across the three article tables and only returns articles that are published (on shelf) and not deleted.
pojo
Mirrors every field of the ES mapping.
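For reference, a minimal sketch of what such a configuration class usually looks like (the package and class name here are assumptions, not the course source); it binds elasticsearch.host / elasticsearch.port from the configuration file and exposes a RestHighLevelClient bean:
package com.heima.es.config;

import lombok.Data;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Data
@Configuration
@ConfigurationProperties(prefix = "elasticsearch")
public class ElasticSearchConfig {

    private String host;
    private int port;

    @Bean
    public RestHighLevelClient restHighLevelClient() {
        // host and port are bound from the configuration file, so only the config needs changing
        return new RestHighLevelClient(RestClient.builder(new HttpHost(host, port, "http")));
    }
}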
3.4.2) Query all articles and bulk-import them into the ES index
package com.heima.es;
import com.alibaba.fastjson.JSON;
import com.heima.es.mapper.ApArticleMapper;
import com.heima.es.pojo.SearchArticleVo;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.List;
@SpringBootTest
@RunWith(SpringRunner.class)
public class ApArticleTest {
@Autowired
private ApArticleMapper apArticleMapper;
@Autowired
private RestHighLevelClient restHighLevelClient;
/**
* Note: if the data set is very large, import it page by page
* @throws Exception
*/
@Test
public void init() throws Exception {
//1. query all qualifying articles
List<SearchArticleVo> searchArticleVos = apArticleMapper.loadArticleList();
//2. bulk import into the ES index
BulkRequest bulkRequest = new BulkRequest("app_info_article");
for (SearchArticleVo searchArticleVo : searchArticleVos) {
IndexRequest indexRequest = new IndexRequest().id(searchArticleVo.getId().toString())
.source(JSON.toJSONString(searchArticleVo), XContentType.JSON);
//add the document to the bulk request
bulkRequest.add(indexRequest);
}
restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
}
}
Query all the articles.
The BulkRequest is created with the name of the target index.
Loop over the articles, building an IndexRequest for each one with its document source (the VO is serialized to JSON, and the Long article id is converted to a String for the document id).
Each request is then added to the bulk request (much like JDBC batching: several statements are collected and shipped in one round trip).
Finally call bulk() to send the whole batch to the ES index; the trailing RequestOptions.DEFAULT constant can stay as it is.
Query the index again to check the data.
40 documents were imported.
3.5) Create the search microservice
Steps
3.5.1) Import the project
Remember to change the IP addresses in the configuration files.
3.5.2) Add the service configuration in Nacos
spring:
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
elasticsearch:
  host: 192.168.233.136
  port: 9200
Search API definition
minBehotTime should work much like the minimum-time paging cursor used earlier for the article list.
dtos
package com.heima.model.search.dtos;
import lombok.Data;
import java.util.Date;
@Data
public class UserSearchDto {
/**
* search keyword
*/
String searchWords;
/**
* current page number
*/
int pageNum;
/**
* page size
*/
int pageSize;
/**
* minimum publish time (paging cursor)
*/
Date minBehotTime;
public int getFromIndex(){
if(this.pageNum<1)return 0;
if(this.pageSize<1) this.pageSize = 10;
return this.pageSize * (pageNum-1);
}
}
service
package com.heima.search.service;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.search.dtos.UserSearchDto;
import java.io.IOException;
public interface ArticleSearchService {
/**
* Paged article search
* @param dto
* @return
*/
public ResponseResult search(UserSearchDto dto) throws IOException;
}
impl
Implementation outline
- Parameter validation
- Build the query conditions
- Query on the analyzed keyword
- Filter for data earlier than minBehotTime
- Paged query
- Sort by time descending
- Highlight the title
- Assemble and return the result
package com.heima.search.service.impl;
import com.alibaba.fastjson.JSON;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.common.enums.AppHttpCodeEnum;
import com.heima.model.search.dtos.UserSearchDto;
import com.heima.model.user.pojos.ApUser;
import com.heima.search.service.ApUserSearchService;
import com.heima.search.service.ArticleSearchService;
import com.heima.utils.thread.AppThreadLocalUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.index.query.*;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.sort.SortOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
@Service
@Slf4j
public class ArticleSearchServiceImpl implements ArticleSearchService {
@Autowired
private RestHighLevelClient restHighLevelClient;
@Autowired
private ApUserSearchService apUserSearchService;
/**
* Paged article search against ES
*
* @param dto
* @return
*/
@Override
public ResponseResult search(UserSearchDto dto) throws IOException {
//1. validate the parameters
if(dto == null || StringUtils.isBlank(dto.getSearchWords())){
return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
}
ApUser user = AppThreadLocalUtil.getUser();
//asynchronously save the search record
if(user != null && dto.getFromIndex() == 0){
apUserSearchService.insert(dto.getSearchWords(), user.getId());
}
//2. build the query conditions
SearchRequest searchRequest = new SearchRequest("app_info_article");
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
//bool query
BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
//query on the analyzed keyword (title and content)
QueryStringQueryBuilder queryStringQueryBuilder = QueryBuilders.queryStringQuery(dto.getSearchWords()).field("title").field("content").defaultOperator(Operator.OR);
boolQueryBuilder.must(queryStringQueryBuilder);
//filter for documents published before minBehotTime
RangeQueryBuilder rangeQueryBuilder = QueryBuilders.rangeQuery("publishTime").lt(dto.getMinBehotTime().getTime());
boolQueryBuilder.filter(rangeQueryBuilder);
//paging
searchSourceBuilder.from(0);
searchSourceBuilder.size(dto.getPageSize());
//sort by publish time descending
searchSourceBuilder.sort("publishTime", SortOrder.DESC);
//highlight the title
HighlightBuilder highlightBuilder = new HighlightBuilder();
highlightBuilder.field("title");
highlightBuilder.preTags("<font style='color: red; font-size: inherit;'>");
highlightBuilder.postTags("</font>");
searchSourceBuilder.highlighter(highlightBuilder);
searchSourceBuilder.query(boolQueryBuilder);
searchRequest.source(searchSourceBuilder);
SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
//3. assemble and return the results
List<Map> list = new ArrayList<>();
SearchHit[] hits = searchResponse.getHits().getHits();
for (SearchHit hit : hits) {
String json = hit.getSourceAsString();
Map map = JSON.parseObject(json, Map.class);
//handle highlighting
if(hit.getHighlightFields() != null && hit.getHighlightFields().size() > 0){
Text[] titles = hit.getHighlightFields().get("title").getFragments();
String title = StringUtils.join(titles);
//highlighted title
map.put("h_title",title);
}else {
//original title
map.put("h_title",map.get("title"));
}
list.add(map);
}
return ResponseResult.okResult(list);
}
}
Summary
- The first part just sets up the query conditions.
- When the keyword hits analyzed data and the field is a highlight field, its fragments are read, wrapped in a font tag and returned to the front end as h_title.
controller
package com.heima.search.controller.v1;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.search.dtos.UserSearchDto;
import com.heima.search.service.ArticleSearchService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.io.IOException;
@RestController
@RequestMapping("/api/v1/article/search")
public class ArticleSearchController {
@Autowired
private ArticleSearchService articleSearchService;
@PostMapping("/search")
public ResponseResult search(@RequestBody UserSearchDto dto) throws IOException {
return articleSearchService.search(dto);
}
}
Nacos configuration for the app gateway
#search microservice
- id: leadnews-search
  uri: lb://leadnews-search
  predicates:
    - Path=/search/**
  filters:
    - StripPrefix= 1
Restart the microservices and test.
If you get a 503, the gateway is most likely failing to parse its route configuration;
in my case adding a space between the # and the comment text in the Nacos config and then restarting the gateway fixed it.
3.6) Index the title and content when a new article is published
3.6.1) Approach
When an article passes review, the article microservice sends a Kafka message (the article microservice is the producer).
The search microservice receives the message and adds the data to the index (the search microservice is the consumer).
So: an article is added and approved; while generating the static file we build the VO (content and static URL combined with the remaining fields such as the title),
then send the message; the search service receives it, targets the index and saves the document.
3.6.2) The article microservice sends the message
- Create the VO under model/search/vos
package com.heima.model.search.vos;
import lombok.Data;
import java.util.Date;
@Data
public class SearchArticleVo {
// article id
private Long id;
// article title
private String title;
// publish time
private Date publishTime;
// article layout
private Integer layout;
// cover images
private String images;
// author id
private Long authorId;
// author name
private String authorName;
//static url
private String staticUrl;
//article content
private String content;
}
- In the article microservice, collect the data and send the message inside the buildArticleToMinIO method of ArticleFreemarkerService (that method already has the content, the path and the apArticle object).
package com.heima.article.service.impl;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONArray;
import com.baomidou.mybatisplus.core.toolkit.Wrappers;
import com.heima.article.mapper.ApArticleContentMapper;
import com.heima.article.service.ApArticleService;
import com.heima.article.service.ArticleFreemarkerService;
import com.heima.common.constants.ArticleConstants;
import com.heima.file.service.FileStorageService;
import com.heima.model.article.pojos.ApArticle;
import com.heima.model.search.vos.SearchArticleVo;
import freemarker.template.Configuration;
import freemarker.template.Template;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;
@Service
@Slf4j
@Transactional
public class ArticleFreemarkerServiceImpl implements ArticleFreemarkerService {
@Autowired
private ApArticleContentMapper apArticleContentMapper;
@Autowired
private Configuration configuration;
@Autowired
private FileStorageService fileStorageService;
@Autowired
private ApArticleService apArticleService;
/**
* Generate the static html file and upload it to MinIO
* @param apArticle
* @param content
*/
@Async
@Override
public void buildArticleToMinIO(ApArticle apArticle, String content) {
//the article id is already known
//4.1 get the article content
if(StringUtils.isNotBlank(content)){
//4.2 render the article content into an html file with freemarker
Template template = null;
StringWriter out = new StringWriter();
try {
template = configuration.getTemplate("article.ftl");
//data model
Map<String,Object> contentDataModel = new HashMap<>();
contentDataModel.put("content", JSONArray.parseArray(content));
//render the template
template.process(contentDataModel,out);
} catch (Exception e) {
e.printStackTrace();
}
//4.3 upload the html file to minio
InputStream in = new ByteArrayInputStream(out.toString().getBytes());
String path = fileStorageService.uploadHtmlFile("", apArticle.getId() + ".html", in);
//4.4 update the ap_article table, saving the static_url field
apArticleService.update(Wrappers.<ApArticle>lambdaUpdate().eq(ApArticle::getId,apArticle.getId())
.set(ApArticle::getStaticUrl,path));
//send a message to create the index entry
createArticleESIndex(apArticle,content,path);
}
}
@Autowired
private KafkaTemplate<String,String> kafkaTemplate;
/**
* Send a message to create the index entry
* @param apArticle
* @param content
* @param path
*/
private void createArticleESIndex(ApArticle apArticle, String content, String path) {
SearchArticleVo vo = new SearchArticleVo();
BeanUtils.copyProperties(apArticle,vo);
vo.setContent(content);
vo.setStaticUrl(path);
kafkaTemplate.send(ArticleConstants.ARTICLE_ES_SYNC_TOPIC, JSON.toJSONString(vo));
}
}
Add the new constant to ArticleConstants; the full class is shown below.
package com.heima.common.constants;
public class ArticleConstants {
public static final Short LOADTYPE_LOAD_MORE = 1;
public static final Short LOADTYPE_LOAD_NEW = 2;
public static final String DEFAULT_TAG = "__all__";
public static final String ARTICLE_ES_SYNC_TOPIC = "article.es.sync.topic";
public static final Integer HOT_ARTICLE_LIKE_WEIGHT = 3;
public static final Integer HOT_ARTICLE_COMMENT_WEIGHT = 5;
public static final Integer HOT_ARTICLE_COLLECTION_WEIGHT = 8;
public static final String HOT_ARTICLE_FIRST_PAGE = "hot_article_first_page_";
}
ARTICLE_ES_SYNC_TOPIC is the topic used to sync articles to the index.
3. The article microservice integrates Kafka to send the message
Add to the article service configuration in the Nacos config center:
kafka:
  bootstrap-servers: 192.168.233.136:9092
  producer:
    retries: 10
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.apache.kafka.common.serialization.StringSerializer
3.6.3) The search microservice receives the message and creates the index entry
- Add the following to the search microservice configuration in Nacos
spring:
  kafka:
    bootstrap-servers: 192.168.233.136:9092
    consumer:
      group-id: ${spring.application.name}
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
2. Define a listener that receives the message and saves the index data
Create a listener package
package com.heima.search.listener;
import com.alibaba.fastjson.JSON;
import com.heima.common.constants.ArticleConstants;
import com.heima.model.search.vos.SearchArticleVo;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import java.io.IOException;
@Component
@Slf4j
public class SyncArticleListener {
@Autowired
private RestHighLevelClient restHighLevelClient;
@KafkaListener(topics = ArticleConstants.ARTICLE_ES_SYNC_TOPIC)
public void onMessage(String message){
if(StringUtils.isNotBlank(message)){
log.info("SyncArticleListener,message={}",message);
SearchArticleVo searchArticleVo = JSON.parseObject(message, SearchArticleVo.class);
IndexRequest indexRequest = new IndexRequest("app_info_article");
indexRequest.id(searchArticleVo.getId().toString());
indexRequest.source(message, XContentType.JSON);
try {
restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
} catch (IOException e) {
e.printStackTrace();
log.error("sync es error={}",e);
}
}
}
}
The listener specifies the topic, validates the message, sets the index name, the document id and the source, and then sends the index request.
3.6.4) Start the microservices and test
3.7) Search history
3.7.1) Requirements
Show the search records (up to ten) sorted by time descending; records can be deleted; only the 10 most recent records are kept and anything older is removed.
3.7.2) Spring Boot integration
Pull the image
docker pull mongo
Create the container
docker run -di --name mongo-service --restart=always -p 27017:27017 -v ~/data/mongodata:/data mongo
Import the demo module
Dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
Configuration file
server:
  port: 9998
spring:
  data:
    mongodb:
      host: 192.168.233.136
      port: 27017
      database: leadnews-history
Entity mapping
package com.itheima.mongo.pojo;
import lombok.Data;
import org.springframework.data.mongodb.core.mapping.Document;
import java.io.Serializable;
import java.util.Date;
/**
* <p>
* Associate words collection
* </p>
*
* @author itheima
*/
@Data
@Document("ap_associate_words")
public class ApAssociateWords implements Serializable {
private static final long serialVersionUID = 1L;
private String id;
/**
* associate word
*/
private String associateWords;
/**
* created time
*/
private Date createdTime;
}
With the database and the document (collection) specified, run a test insert.
3.7.3) Core methods
package com.itheima.mongo.test;
import com.itheima.mongo.MongoApplication;
import com.itheima.mongo.pojo.ApAssociateWords;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.test.context.junit4.SpringRunner;
import java.util.Date;
import java.util.List;
@SpringBootTest(classes = MongoApplication.class)
@RunWith(SpringRunner.class)
public class MongoTest {
@Autowired
private MongoTemplate mongoTemplate;
//save
@Test
public void saveTest(){
/*for (int i = 0; i < 10; i++) {
ApAssociateWords apAssociateWords = new ApAssociateWords();
apAssociateWords.setAssociateWords("黑马头条");
apAssociateWords.setCreatedTime(new Date());
mongoTemplate.save(apAssociateWords);
}*/
ApAssociateWords apAssociateWords = new ApAssociateWords();
apAssociateWords.setAssociateWords("黑马直播");
apAssociateWords.setCreatedTime(new Date());
mongoTemplate.save(apAssociateWords);
}
//find one
@Test
public void saveFindOne(){
ApAssociateWords apAssociateWords = mongoTemplate.findById("60bd973eb0c1d430a71a7928", ApAssociateWords.class);
System.out.println(apAssociateWords);
}
//query with criteria
@Test
public void testQuery(){
Query query = Query.query(Criteria.where("associateWords").is("黑马头条"))
.with(Sort.by(Sort.Direction.DESC,"createdTime"));
List<ApAssociateWords> apAssociateWordsList = mongoTemplate.find(query, ApAssociateWords.class);
System.out.println(apAssociateWordsList);
}
@Test
public void testDel(){
mongoTemplate.remove(Query.query(Criteria.where("associateWords").is("黑马头条")),ApAssociateWords.class);
}
}
Try the CRUD operations yourself (an update example is sketched below).
Dependency, configuration, entity mapping, then call the core MongoTemplate methods.
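For example, an update test could look like this (a sketch, not part of the course code; it needs an extra import of org.springframework.data.mongodb.core.query.Update, and the new value is purely illustrative):
//update: change the first matching document's associateWords value
@Test
public void testUpdate(){
    Query query = Query.query(Criteria.where("associateWords").is("黑马头条"));
    Update update = Update.update("associateWords", "黑马头条app");
    // updateFirst only touches the first match; use updateMulti to update all matches
    mongoTemplate.updateFirst(query, update, ApAssociateWords.class);
}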
3.8) Saving search history
When a user searches with a keyword, record the keyword asynchronously.
On each search, look the keyword up in the history: if it already exists, update its time to now; if not, check the total count and, once it reaches 10, remove the oldest record, so the list always holds the most recent searches.
3.8.1) Implementation steps
1. Integrate MongoDB into the search microservice
①: pom dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
②: Nacos configuration (add it under the search config)
spring:
  data:
    mongodb:
      host: 192.168.233.136
      port: 27017
      database: leadnews-history
③: Entity class
Put it under the search module's pojos package, not under model.
package com.heima.search.pojos;
import lombok.Data;
import org.springframework.data.mongodb.core.mapping.Document;
import java.io.Serializable;
import java.util.Date;
/**
* <p>
* APP user search history collection
* </p>
* @author itheima
*/
@Data
@Document("ap_user_search")
public class ApUserSearch implements Serializable {
private static final long serialVersionUID = 1L;
/**
* primary key
*/
private String id;
/**
* user id
*/
private Integer userId;
/**
* search keyword
*/
private String keyword;
/**
* created time
*/
private Date createdTime;
}
④: Import the data script
In the Mongo client, set the filter in the bottom-right corner to *, otherwise the collections are not shown.
2. Create ApUserSearchService and add an insert method
service
Parameters: 1. the search keyword, 2. who searched it (needed later when the history is displayed).
The method is called from the article search service; the keyword arrives inside the UserSearchDto.
public interface ApUserSearchService {
/**
* Save a user's search history record
* @param keyword
* @param userId
*/
public void insert(String keyword,Integer userId);
}
impl
Approach
These integrations generally rely on a Spring template for CRUD; Redis, Elasticsearch and Kafka follow the same pattern, and here it is MongoTemplate.
1. Check whether the record already exists
Build the query condition: current user + keyword
2. If it exists, update its time to now
3. If it does not exist, insert it, checking whether the total count has reached 10
Initialize the new record
Initialize the query for the user's existing records
package com.heima.search.service.impl;
import com.heima.search.pojos.ApUserSearch;
import com.heima.search.service.ApUserSearchService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import java.util.Date;
import java.util.List;
@Service
@Slf4j
public class ApUserSearchServiceImpl implements ApUserSearchService {
@Autowired
private MongoTemplate mongoTemplate;
/**
* Save a user's search history record
*
* @param keyword
* @param userId
*/
@Override
@Async
public void insert(String keyword, Integer userId) {
// 1. check whether the record already exists
// query condition: current user + keyword
Query query = Query.query(Criteria.where("userId").is(userId).and("keyword").is(keyword));
ApUserSearch apUserSearch = mongoTemplate.findOne(query, ApUserSearch.class);
// 2. if it exists, update its time to now
if (apUserSearch != null) {
apUserSearch.setCreatedTime(new Date());
mongoTemplate.save(apUserSearch);
return;
}
// 3. if it does not exist, insert it and check whether the count has reached 10
// initialize the new record
apUserSearch = new ApUserSearch();
apUserSearch.setUserId(userId);
apUserSearch.setCreatedTime(new Date());
apUserSearch.setKeyword(keyword);
// build the query for the user's existing records
Query query1 = Query.query(Criteria.where("userId").is(userId));
query1.with(Sort.by(Sort.Direction.DESC, "createdTime"));
List<ApUserSearch> apUserSearches = mongoTemplate.find(query1, ApUserSearch.class);
if (apUserSearches==null||apUserSearches.size() < 10) {
mongoTemplate.save(apUserSearch);
} else {
//already at 10, so replace instead of inserting
// take the last entry of the descending list (the oldest record) and replace it by id
ApUserSearch userSearch = apUserSearches.get(apUserSearches.size() - 1);
mongoTemplate.findAndReplace(Query.query(Criteria.where("id").is(userSearch.getId())),apUserSearch);
}
}
}
Done; now it just needs a caller.
3. Get the current user in the search microservice
This mirrors how the wemedia service obtains the user id
(we used the same approach earlier when saving materials and querying articles).
Concretely: the gateway puts the user id into a request header, a request interceptor in the target service reads it and stores it in a ThreadLocal, and the ThreadLocal is cleared when the request finishes.
In the same way, we set the header in the app gateway:
//get the user id
Object userId = claimsBody.get("id");
//put the user id into the request header
// request.mutate(): creates a copy of the request so it can be modified.
// headers(httpHeaders -> { ... }): a function that modifies the request headers; httpHeaders is an HttpHeaders object.
// httpHeaders.add("userId", userId.toString()): adds a header named userId whose value is the user id as a string.
// build(): builds the modified request.
ServerHttpRequest httpRequest = request.mutate().headers(httpHeaders -> {
httpHeaders.add("userId",userId.toString());
}).build();
// 1. update the request
// when processing the request you may need to add or change header information (the user id here);
// resetting the request on the exchange makes sure downstream filters, interceptors and controllers see the updated headers.
exchange.mutate().request(httpRequest);
The AppTokenInterceptor
package com.heima.search.interceptor;
import com.heima.model.user.pojos.ApUser;
import com.heima.utils.thread.AppThreadLocalUtil;
import org.springframework.web.servlet.HandlerInterceptor;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class AppTokenInterceptor implements HandlerInterceptor {
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
String userId = request.getHeader("userId");
if(userId != null){
//store it in the current thread's ThreadLocal
ApUser apUser = new ApUser();
apUser.setId(Integer.valueOf(userId));
AppThreadLocalUtil.setUser(apUser);
}
return true;
}
@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
AppThreadLocalUtil.clear();
}
}
The difference from the earlier WmTokenInterceptor is that postHandle is not called when the controller throws an exception, so the ThreadLocal would not be cleared,
whereas afterCompletion still runs after an exception; that is the mode we switch to here.
Register the interceptor in WebMvcConfig
package com.heima.search.config;
import com.heima.search.interceptor.AppTokenInterceptor;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
@Configuration
public class WebMvcConfig implements WebMvcConfigurer {
@Override
public void addInterceptors(InterceptorRegistry registry) {
registry.addInterceptor(new AppTokenInterceptor()).addPathPatterns("/**");
}
}
AppThreadLocalUtil
package com.heima.utils.thread;
import com.heima.model.user.pojos.ApUser;
public class AppThreadLocalUtil {
private final static ThreadLocal<ApUser> WM_USER_THREAD_LOCAL = new ThreadLocal<>();
//store into the thread local
public static void setUser(ApUser apUser){
WM_USER_THREAD_LOCAL.set(apUser);
}
//get from the thread local
public static ApUser getUser(){
return WM_USER_THREAD_LOCAL.get();
}
//clear
public static void clear(){
WM_USER_THREAD_LOCAL.remove();
}
}
4. Save the record inside the search method
Inject the service and call it:
@Autowired
private ApUserSearchService apUserSearchService;
ApUser user = AppThreadLocalUtil.getUser();
if (user!=null&&dto.getFromIndex()==0){
apUserSearchService.insert(dto.getSearchWords(), AppThreadLocalUtil.getUser().getId());
}
getFromIndex() == 0 means this is the first page, i.e. a fresh search, so only then is the keyword recorded.
Every later page change also calls search, but by then the user already has results, so those calls should not be recorded again.
5. Annotate the history-saving method with @Async and enable async on the bootstrap class (see the sketch below).
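A minimal sketch of what that looks like (the bootstrap class name here is an assumption; the point is only the @EnableAsync annotation):
package com.heima.search;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableAsync;

@SpringBootApplication
@EnableAsync  // without this, @Async on ApUserSearchServiceImpl.insert() has no effect and the method runs synchronously
public class SearchApplication {
    public static void main(String[] args) {
        SpringApplication.run(SearchApplication.class, args);
    }
}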
6. Test: search and check the result.
Only the search microservice and the app gateway were changed, so restarting those two is enough.
Search the same keyword again and check whether its time is updated.
Search more than 10 different keywords and check that the oldest record gets replaced.
3.9) Loading the search history
3.9.1) Approach
Query by user id, sorted by time descending.
3.9.2) Service layer
Add a new method to ApUserSearchService
/**
Query the search history
@return
*/
ResponseResult findUserSearch();
impl
/**
* Query the search history
*
* @return
*/
@Override
public ResponseResult findUserSearch() {
//get the current user
ApUser user = AppThreadLocalUtil.getUser();
if(user == null){
return ResponseResult.errorResult(AppHttpCodeEnum.NEED_LOGIN);
}
//query the user's records, sorted by time descending
List<ApUserSearch> apUserSearches = mongoTemplate.find(Query.query(Criteria.where("userId").is(user.getId())).with(Sort.by(Sort.Direction.DESC, "createdTime")), ApUserSearch.class);
return ResponseResult.okResult(apUserSearches);
}
The null check on user is there because the request may come from a guest who is not logged in.
3.9.3)controller
/**
* <p>
* APP user search history controller
* </p>
* @author itheima
*/
@Slf4j
@RestController
@RequestMapping("/api/v1/history")
public class ApUserSearchController{
@Autowired
private ApUserSearchService apUserSearchService;
@PostMapping("/load")
public ResponseResult findUserSearch() {
return apUserSearchService.findUserSearch();
}
}
3.9.4) Test
3.10) Deleting a search history record
dto
@Data
public class HistorySearchDto {
/**
* id of the search history record to delete
*/
String id;
}
Service layer
service
/**
Delete a search history record
@param historySearchDto
@return
*/
ResponseResult delUserSearch(HistorySearchDto historySearchDto);
impl
/**
* Delete a history record
*
* @param dto
* @return
*/
@Override
public ResponseResult delUserSearch(HistorySearchDto dto) {
//1. validate the parameters
if(dto.getId() == null){
return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
}
//2. check whether the user is logged in
ApUser user = AppThreadLocalUtil.getUser();
if(user == null){
return ResponseResult.errorResult(AppHttpCodeEnum.NEED_LOGIN);
}
//3. delete
mongoTemplate.remove(Query.query(Criteria.where("userId").is(user.getId()).and("id").is(dto.getId())),ApUserSearch.class);
return ResponseResult.okResult(AppHttpCodeEnum.SUCCESS);
}
controller
@PostMapping("/del")
public ResponseResult delUserSearch(@RequestBody HistorySearchDto historySearchDto) {
return apUserSearchService.delUserSearch(historySearchDto);
}
Quick question
Why can't this method read the user id from the ThreadLocal? Because it runs asynchronously on a new thread, and a new thread has no copy of the request thread's ThreadLocal, so the userId has to be passed in as a parameter.
Test it.
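A tiny standalone sketch (not project code) showing the behaviour: a value stored in a ThreadLocal on one thread is simply invisible to a newly started thread, which is what happens inside an @Async method:
public class ThreadLocalAsyncDemo {
    private static final ThreadLocal<Integer> USER_ID = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        USER_ID.set(1102);                                   // set on the "request" thread
        System.out.println(USER_ID.get());                   // 1102
        Thread worker = new Thread(() ->                     // stands in for the thread an @Async method runs on
                System.out.println(USER_ID.get()));          // null: the new thread has its own ThreadLocal slot
        worker.start();
        worker.join();
    }
}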
3.11) Keyword suggestions (associate words)
Comments from other learners
ES is only queried once the user actually submits a search; the associate words are shown automatically as suggestions of what the user may want to type.
"The teacher first queries MongoDB for the candidate terms and only afterwards sends the chosen term to the ES search endpoint for the analyzed query. There is also a way to do it directly in ES, which you can look into yourself."
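For reference, the ES-only alternative mentioned above is typically built on a completion suggester. A minimal sketch under assumptions (not the course implementation; it presumes a separate index, here called associate_words, with a field named words mapped as type completion):
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.suggest.SuggestBuilder;
import org.elasticsearch.search.suggest.SuggestBuilders;
import org.elasticsearch.search.suggest.completion.CompletionSuggestion;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class EsAssociateDemo {

    public static List<String> suggest(RestHighLevelClient client, String prefix) throws IOException {
        SearchRequest request = new SearchRequest("associate_words");
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
        SuggestBuilder suggestBuilder = new SuggestBuilder();
        // prefix suggestion on the completion-typed field "words"
        suggestBuilder.addSuggestion("word-suggest",
                SuggestBuilders.completionSuggestion("words").prefix(prefix).size(10));
        sourceBuilder.suggest(suggestBuilder);
        request.source(sourceBuilder);
        SearchResponse response = client.search(request, RequestOptions.DEFAULT);
        // collect the suggested texts
        List<String> result = new ArrayList<>();
        CompletionSuggestion suggestion = response.getSuggest().getSuggestion("word-suggest");
        suggestion.getEntries().forEach(entry ->
                entry.getOptions().forEach(option -> result.add(option.getText().string())));
        return result;
    }
}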
3.11.1) Requirements
3.11.2) Where the associate words come from
Here we use the long-tail words provided with the course material.
3.11.3) Required classes
Entity
package com.heima.search.pojos;
import lombok.Data;
import org.springframework.data.mongodb.core.mapping.Document;
import java.io.Serializable;
import java.util.Date;
/**
* <p>
* Associate words collection
* </p>
*
* @author itheima
*/
@Data
@Document("ap_associate_words")
public class ApAssociateWords implements Serializable {
private static final long serialVersionUID = 1L;
private String id;
/**
* associate word
*/
private String associateWords;
/**
* created time
*/
private Date createdTime;
}
Service layer
service
package com.heima.search.service;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.search.dtos.UserSearchDto;
/**
* <p>
* Associate words service
* </p>
*
* @author itheima
*/
public interface ApAssociateWordsService {
/**
Find associate words
@param userSearchDto
@return
*/
ResponseResult findAssociate(UserSearchDto userSearchDto);
}
impl
package com.heima.search.service.impl;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.common.enums.AppHttpCodeEnum;
import com.heima.model.search.dtos.UserSearchDto;
import com.heima.search.pojos.ApAssociateWords;
import com.heima.search.service.ApAssociateWordsService;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.stereotype.Service;
import java.util.List;
/**
* @Description:
* @Version: V1.0
*/
@Service
public class ApAssociateWordsServiceImpl implements ApAssociateWordsService {
@Autowired
MongoTemplate mongoTemplate;
/**
* Find associate words
* @param userSearchDto
* @return
*/
@Override
public ResponseResult findAssociate(UserSearchDto userSearchDto) {
//1 validate the parameters
if(userSearchDto == null || StringUtils.isBlank(userSearchDto.getSearchWords())){
return ResponseResult.errorResult(AppHttpCodeEnum.PARAM_INVALID);
}
//2 cap the page size
if (userSearchDto.getPageSize() > 20) {
userSearchDto.setPageSize(20);
}
//3 run the fuzzy (regex) query
Query query = Query.query(Criteria.where("associateWords").regex(".*?\\" + userSearchDto.getSearchWords() + ".*"));
query.limit(userSearchDto.getPageSize());
List<ApAssociateWords> wordsList = mongoTemplate.find(query, ApAssociateWords.class);
return ResponseResult.okResult(wordsList);
}
}
Breaking the query down:
- Query.query(...): creates the query object.
- Criteria.where("associateWords"): names the field to match against, i.e. associateWords.
- .regex(...): supplies a regular expression to match the field's content.
- ".*?\\" + userSearchDto.getSearchWords() + ".*": the constructed pattern. .*? matches any characters (the ? makes it non-greedy), userSearchDto.getSearchWords() is the user's search term, and the whole expression matches any value that contains the search term.
- The "\\" puts a literal backslash into the pattern so that a special character typed by the user is escaped and matched literally; note it only escapes the first character of the input (see the sketch below for a way to escape the whole term).
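A minimal sketch of a safer variant (not the course implementation): escape the whole user input with Pattern.quote so every regex metacharacter is matched literally. It is a drop-in replacement for the query-building lines in findAssociate above:
//escape the whole search term rather than only its first character
String escaped = java.util.regex.Pattern.quote(userSearchDto.getSearchWords());
Query query = Query.query(Criteria.where("associateWords").regex(".*" + escaped + ".*"));
query.limit(userSearchDto.getPageSize());
List<ApAssociateWords> wordsList = mongoTemplate.find(query, ApAssociateWords.class);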
controller
package com.heima.search.controller.v1;
import com.heima.model.common.dtos.ResponseResult;
import com.heima.model.search.dtos.UserSearchDto;
import com.heima.search.service.ApAssociateWordsService;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
/**
* <p>
* Associate words controller
* </p>
* @author itheima
*/
@Slf4j
@RestController
@RequestMapping("/api/v1/associate")
public class ApAssociateWordsController{
@Autowired
private ApAssociateWordsService apAssociateWordsService;
@PostMapping("/search")
public ResponseResult findAssociate(@RequestBody UserSearchDto userSearchDto) {
return apAssociateWordsService.findAssociate(userSearchDto);
}
}