Steps for Converting Microservices into a Monolith Project
Preparation
1. Clone the monolith project branch:
git clone -b hzinfo_data http://192.168.65.86:3000/platform-boot/hnac-framework-boot.git
Here hzinfo_data is the branch name; pick the branch that matches your project's needs:
| Branch name | Microservices included |
|---|---|
| base | blade-gateway, blade-auth, blade-system, blade-resource |
| hzinfo_data | Everything in the base branch, plus the data-platform services hzinfo-data-config, hzinfo-data-handler, hzinfo-data-socket, hzinfo-realmonitor |
The steps below use integrating the blade-visual microservice (the dashboard designer) as an example.
1. Adapt the microservice packaging
Add the following to the pom.xml of each microservice that will be integrated into the monolith:
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                    <configuration>
                        <classifier>exec</classifier>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
With this configuration the build publishes two jars:
1) blade-visual-4.5.0.RELEASE.jar
the plain dependency jar, used for consolidated deployment in the monolith
2) blade-visual-4.5.0.RELEASE-exec.jar
the executable jar, used to run the microservice standalone
2. Modify the monolith's pom.xml to pull in the service being integrated; blade-core-cloud is excluded so that the microservice's cloud-specific infrastructure does not leak into the monolith:
<dependency>
    <groupId>org.springblade</groupId>
    <artifactId>blade-visual</artifactId>
    <version>4.5.0.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>org.springblade</groupId>
            <artifactId>blade-core-cloud</artifactId>
        </exclusion>
    </exclusions>
</dependency>
3. Merge the configuration files
Merge the microservice's application.yml and application-{dev|test|prod}.yml into the corresponding files of the monolith project.
4. URL rewriting
In the microservice setup, every front-end API path carries the name of the target service; the monolith has no such service prefixes, so the URLs have to be rewritten.
Adding a rule like the following to the monolith's urlrewrite-boot.xml is enough:
<rule>
    <from>^/blade-visual/(.*)</from>
    <to>/$1</to>
</rule>
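The file name urlrewrite-boot.xml suggests the rules are applied by Tuckey's UrlRewriteFilter (an assumption; the monolith may already register it for you). For reference only, a minimal sketch of registering the filter in a Spring Boot application and loading the rule file from the classpath rather than /WEB-INF; the class names are illustrative:

import java.io.IOException;
import java.io.InputStream;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.tuckey.web.filters.urlrewrite.Conf;
import org.tuckey.web.filters.urlrewrite.UrlRewriteFilter;

@Configuration
public class UrlRewriteConfig {

    // An executable jar has no /WEB-INF directory, so load the rule file
    // from the classpath instead of the filter's default location.
    public static class ClasspathUrlRewriteFilter extends UrlRewriteFilter {
        @Override
        protected void loadUrlRewriter(FilterConfig filterConfig) throws ServletException {
            try (InputStream in = new ClassPathResource("urlrewrite-boot.xml").getInputStream()) {
                checkConf(new Conf(filterConfig.getServletContext(), in, "urlrewrite-boot.xml", ""));
            } catch (IOException e) {
                throw new ServletException(e);
            }
        }
    }

    @Bean
    public FilterRegistrationBean<ClasspathUrlRewriteFilter> urlRewriteFilter() {
        FilterRegistrationBean<ClasspathUrlRewriteFilter> bean =
                new FilterRegistrationBean<>(new ClasspathUrlRewriteFilter());
        bean.addUrlPatterns("/*");  // rewrite every incoming request path
        return bean;
    }
}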
5. Modify the startup class
Because the monolith bundles the microservices as jars, several startup classes end up on the classpath, and the extra ones must be excluded:
@SpringBootApplication(exclude = {
        FeignAutoConfiguration.class,
        DruidDataSourceAutoConfigure.class,
        ISmsClientFallback.class,
        LogClientFallback.class,
        IDictClientFallback.class,
        IStationClientFallBack.class
})
@ComponentScan(
        basePackages = {"org.springblade", "com.hnac.hzinfo", "com.xxl.job"},
        excludeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE,
                classes = {
                        SystemApplication.class,
                        LogClientFallback.class,
                        ISmsClientFallback.class,
                        FeignAutoConfiguration.class,
                        DataConfigApplication.class,
                        DataHandlerApplication.class,
                        WebSocketApplication.class,
                        ModuleController.class,
                        RealMonitorApplication.class,
                        VisualApplication.class,
                        DruidDataSourceAutoConfigure.class
                }),
        nameGenerator = UniqueNameGenerator.class)
@MapperScan(basePackages = {"com.hnac.**.mapper.**", "com.hnac.**.dao.**"},
        nameGenerator = UniqueNameGenerator.class)
// the annotations above go on the monolith's existing startup class, e.g.:
public class MonolithApplication {
    public static void main(String[] args) {
        SpringApplication.run(MonolithApplication.class, args);
    }
}
Notes on the configuration above:
- @SpringBootApplication must exclude the Fallback classes of the Feign interfaces
- @ComponentScan needs basePackages set to the root packages of every integrated microservice
- @ComponentScan must exclude the startup classes of the integrated microservices
- @MapperScan needs basePackages set to the packages containing the integrated microservices' MyBatis mapper interfaces
- both scans pass nameGenerator = UniqueNameGenerator.class so bean names stay unique across services; a sketch of such a generator follows
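The UniqueNameGenerator referenced above ships with the project, so its actual implementation may differ. As a reference only, a minimal sketch of a collision-free name generator built on Spring's standard BeanNameGenerator contract:

import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.context.annotation.AnnotationBeanNameGenerator;

// Uses the fully qualified class name as the bean name, so two integrated
// services can each declare, say, a DictService without the short names clashing.
public class UniqueNameGenerator extends AnnotationBeanNameGenerator {
    @Override
    public String generateBeanName(BeanDefinition definition, BeanDefinitionRegistry registry) {
        return definition.getBeanClassName();
    }
}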
Configure nginx for the front end
Deploying the front end is no different from deploying it against the microservices; just replace the gateway address with the monolith's service address:
server {
    listen 8081;
    server_name localhost;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Headers' 'DNT,web-token,app-token,Authorization,Accept,Origin,Keep-Alive,User-Agent,X-Mx-ReqToken,X-Data-Type,X-Auth-Token,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
    add_header X-Frame-Options SAMEORIGIN;

    location / {
        root html/single-port/hzinfo-main;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    location /subapp {
        alias html/single-port/subapp;
        try_files $uri $uri/ /index.html;
    }

    # monolith project address
    location /api/ {
        proxy_pass http://192.168.65.162:18000/;
    }
}
Notes
1. If multiple data sources are used, annotations like the one below must be removed entirely from the startup class:
@SpringBootApplication(exclude = DruidDataSourceAutoConfigure.class)
2. If the project uses @PostConstruct, remove it; do such initialization through an interface like CommandLineRunner instead, as in the sketch below.
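For example, an init method that previously ran in @PostConstruct can move into a CommandLineRunner, which Spring Boot calls once after the full context has started (the class and method names here are illustrative):

import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class CacheWarmUpRunner implements CommandLineRunner {
    @Override
    public void run(String... args) {
        // logic that used to live in a @PostConstruct method
        warmUpCaches();
    }

    private void warmUpCaches() {
        // hypothetical initialization work
    }
}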
3. If a service exposes a websocket endpoint, add the following to the nginx proxy:
location /api/hzinfo-data-socket/websocket/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header X-real-ip $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    # 192.168.65.162:18000 is the monolith's service address
    proxy_pass http://192.168.65.162:18000/websocket/;
    proxy_connect_timeout 7200;
    proxy_send_timeout 7200;
    proxy_read_timeout 7200;
}