
Documentation

Koupleless Product Documentation

1 - 1. Product Introduction

1.1 - 1.1 Introduction and Use Cases

Introduction to Koupleless and its Use Cases

Introduction

Koupleless is a modular Serverless technology solution that enables ordinary applications to evolve into the Serverless development mode at a relatively low cost. It decouples code from resources so that each can be maintained independently, and it supports capabilities such as second-level build and deployment, merged deployment, and dynamic scaling, giving users a streamlined development and operations experience and ultimately helping enterprises reduce costs and improve efficiency.

As digital transformation deepens across industries, enterprises face growing pain points in development efficiency, collaboration efficiency, resource costs, and service governance. Let's walk through these pain points one by one and see how Koupleless addresses them.

Use Cases

Pain Point 1: Slow Application Build and Deployment or Tedious SDK Upgrades

Traditional application image building usually takes 3-5 minutes, and deploying code to start completion also takes 3-5 minutes. Developers have to undergo multiple 6-10 minute build and deployment waits each time they verify or deploy code changes, severely impacting development efficiency. Additionally, each SDK upgrade (such as middleware frameworks, RPC, logging, JSON, etc.) requires modifying all application code and re-building and deploying, causing unnecessary disruption to developers.
By using the Koupleless universal base and its accompanying tools, you can split an application into a "base" and "modules" at low cost. The base encapsulates the common SDKs of the company or of a specific business department; base upgrades can be handled by dedicated owners without affecting business developers, who only need to write modules. In the currently supported Java technology stack, a module is a SpringBoot application code package (FatJar), but the SpringBoot framework itself and other enterprise dependencies are pre-loaded and warmed up by the base at runtime. Each time a module is deployed, it finds a base with a pre-warmed SpringBoot for hot deployment, similar to AppEngine, enabling 10-second build and deployment for applications and seamless SDK upgrades.

[Figure: application build and release speed]

Pain Point 2: High Resource Costs for Long-tail Applications

In enterprises, 80% of applications serve less than 20% of the traffic. Along with business changes, enterprises have many long-tail applications. These long-tail applications have CPU utilization rates of less than 10% for an extended period, leading to significant resource waste.
By using Koupleless merged deployment and accompanying tools, you can cost-effectively achieve the merged deployment of multiple applications, solving the problem of excessive fragmentation of enterprise applications and the resource waste caused by low-traffic businesses, thus saving costs.
[Figure: application machine cost]
Here, “Business A Application 1” is called a “module” in Koupleless terminology. Multiple module applications can be merged into the same base using SOFAArk technology. The base can be a completely empty SpringBoot application (in the Java technology stack), or it can sink some common SDKs into itself. Each time a module application is deployed, it restarts the base machine. In this way, module applications maximize the reuse of the base’s memory (Metaspace and Heap), their build artifacts shrink from hundreds of MB to tens of MB or even less, and CPU utilization improves accordingly.

Pain Point 3: Low Collaboration Efficiency in Enterprise R&D

In enterprises, some applications require multi-person development collaboration. In the traditional development mode, each person’s code change requires the entire application to be released, leading to a fire-fighting approach to application development iteration. Everyone needs to develop iteratively within a unified time window and release online at a unified time, resulting in a large number of demand release waits, environment machine preemptions, iteration conflicts, and other situations.
By using Koupleless, you can easily split an application into a base and multiple functional modules, where each functional module is a group of code files. Different functional modules can be developed and iterated on simultaneously and independently of each other, eliminating the traditional fire-fighting style of application iteration. Each module iterates independently, greatly improving the efficiency of requirement delivery. If you enable hot deployment for modules (rather than restarting the entire base each time a module is deployed), the single build + deployment time for a module also drops from the 6-10 minutes of a normal application to seconds.

[Figure: low collaboration efficiency]

Pain Point 4: Difficulties in Accumulating Business Assets to Improve Middle Platform Efficiency

In some medium and large enterprises, various business middle platforms are accumulated. Middle platforms generally encapsulate the common API implementation and SPI definitions of the business. The SPI definition allows plugins on the middle platform to implement their own business logic. After the traffic enters the middle platform application, it calls the corresponding SPI implementation component to complete the corresponding business logic. Components in the middle platform application have relatively simple business logic. Deploying them as independent applications will incur high resource and operation costs and slow build and deployment speeds, significantly increasing the development burden and affecting development efficiency.
With Koupleless, you can easily split the middle platform application into a base and multiple functional modules. The base can accumulate relatively thick business dependencies, common logic, API implementations, SPI definitions, etc. (referred to as business assets) and provide them to the modules above. The modules can use the capabilities of the base through direct calls between objects or beans, with almost no code modification. Moreover, multiple modules can be developed and iterated on simultaneously without affecting each other, greatly improving collaborative delivery efficiency. Additionally, for relatively simple modules, you can enable hot deployment, and the single build + deployment time drops from the 6-10 minutes of a normal application to within 30 seconds.

[Figure: improving middle platform efficiency]

Pain Point 5: High Cost of Microservice Evolution

Different businesses in enterprises have different development stages, so applications also have their own lifecycles.

Startup Phase: A startup application generally adopts a monolithic architecture.

Growth Phase: As the business grows, the number of application developers also increases. At this time, you may be uncertain about the future prospects of the business and do not want to split the business into multiple applications prematurely to avoid unnecessary maintenance, governance, and resource costs. Therefore, you can use Koupleless to cost-effectively split the application into a base and multiple functional modules, allowing different functional modules to be developed, operated, and iterated independently in parallel, thus improving the collaboration and demand delivery efficiency of the application in this stage.

Mature Phase: As the business further expands, you can use Koupleless to cost-effectively split some or all functional modules into independent applications for development and operation.

Long-tail Phase: Some businesses may gradually enter a low-activity or long-tail phase after experiencing growth or maturity phases. At this time, you can use Koupleless to easily convert these applications into modules, merge them, and deploy them together to achieve cost reduction and efficiency improvement.

It can be seen that Koupleless supports enterprises to smoothly transition between the startup, growth, mature, and long-tail phases of applications at a low cost and even switch back and forth, thereby easily keeping the application architecture synchronized with business development.
[Figure: application lifecycle evolution / microservice evolution cost]



1.2 - 1.2 Industry Background

Koupleless Background

Issues with Microservices

As application architectures evolved from monolithic to microservices, combined with the development of software engineering from waterfall models to the current DevOps model, various problems such as scalability, distribution, and collaborative work have been addressed, providing enterprises with better agility and execution efficiency, bringing significant value. However, despite solving some problems, the microservices model has gradually exposed some issues that are currently receiving continuous attention:

Complex Infrastructure

High Cognitive Load

To fulfill a business requirement, there are actually many dependencies, components, and platforms providing various capabilities behind the scenes. If any component below the business layer encounters an exception that is perceived by the business, it will impose a significant cognitive burden and corresponding time cost on the business development personnel.
[Figure: various types of exceptions]

Heavy Operations Burden

The dependencies included in the business application also undergo continuous iterative upgrades, such as frameworks, middleware, various SDKs, etc. When encountering situations such as:

  1. Major feature releases
  2. Urgent bug fixes
  3. Encountering significant security vulnerabilities

the business must upgrade to the new versions of these dependencies as quickly as possible. This leads to two problems:

For Business Development Personnel

If these dependency upgrades occur only once or twice, it’s not a problem. However, a business application relies on many frameworks, middleware, and various SDKs, and each dependency upgrade requires the involvement of business developers. Managing so many dependencies becomes a significant operational burden for business development personnel in the long term. Additionally, it’s important to note that the business’s common layer also imposes a significant burden on business developers.

For Infrastructure Personnel

For the developers of the various dependencies, each new release requires the business applications that use it to upgrade as quickly as possible. However, business developers are more focused on delivering business requirements, so pushing them to complete upgrades quickly is unrealistic, especially in enterprises with many developers.

Slow Startup

Each business application startup process involves many steps, resulting in long waiting times for functionality verification.

Low Release Efficiency

Due to the aforementioned slow startup and numerous exceptions, the deployment process takes a long time, and any exception that blocks it requires manual recovery and handling. Beyond platform exceptions, the probability of hitting a machine exception grows with the number of machines. For example, if the probability of a single machine completing a release without incident is 99.9%, then the probability that all machines succeed in one attempt is roughly 90% for 100 machines and only 36.7% for 1000 machines. Applications with many machines therefore often encounter deployment delays that require developer intervention, leading to low efficiency.
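As a quick back-of-the-envelope check of those numbers, this sketch computes the all-machines success probability from the 99.9% per-machine rate assumed above:

public class ReleaseSuccess {
    public static void main(String[] args) {
        double perMachine = 0.999; // a single machine releases cleanly 99.9% of the time
        for (int machines : new int[] { 1, 100, 1000 }) {
            // probability that every machine in the fleet completes the release without incident
            double allClean = Math.pow(perMachine, machines);
            System.out.printf("%4d machines: %.1f%%%n", machines, allClean * 100);
        }
        // prints roughly: 99.9%, 90.5%, 36.8%
    }
}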

High Collaboration and Resource Costs

Monolithic/Large Applications are too Big


Blockage in Multilateral Cooperation

As businesses continue to grow, applications become larger, mainly reflected in the increasing number of developers, resulting in blockages in multilateral cooperation.

Large Impact of Changes, High Risk

As businesses continue to grow, online traffic increases, and the number of machines grows. However, a single change can affect all code and machine traffic, resulting in a large impact and high risk from changes.

Too Many Small Applications

During the evolution of microservices, over time, due to factors such as excessive application splitting, some businesses shrinking, or organizational restructuring, there is a continuous accumulation of small or long-tail applications online, resulting in an increasing number of applications. For example, in the past three years, the number of applications at Ant Group has tripled.

High Resource Costs

These applications require several machines in each data center, but in reality, the traffic is not significant, and CPU usage is very low, resulting in resource waste.

High Long-Term Maintenance Costs

These applications also require personnel for maintenance tasks, such as upgrading SDKs and fixing security vulnerabilities, leading to high long-term maintenance costs.

Inevitability of the Problem

A microservices system is an ecosystem. After several years of evolution within a company, the 80/20 rule applies: a few large applications carry most of the traffic. Problems such as oversized large applications and too many small applications inevitably arise. However, there is no defined standard for what constitutes a large application or how many small applications are too many, so the pain these problems cause developers builds up gradually, and unless it crosses a certain threshold it rarely attracts the attention and action of company management.

How to Properly Decompose Microservices

The proper decomposition of microservices has always been a challenging problem, as there are no clear standards. This is also why the issues of oversized large applications and too many small applications exist. The root cause behind these problems is the flexibility of business and organization and the high cost of microservice decomposition, which results in inconsistent agility between the two.

Misalignment between Microservices Decomposition and Business/Organizational Agility

Business development is flexible, and organizational structures are constantly adjusting. However, microservice decomposition requires machine resources and incurs long-term maintenance costs. The misalignment in agility between the two leads to problems such as under-decomposition or over-decomposition, resulting in oversized large applications and too many small applications. If these problems are not fundamentally addressed, microservices governance will continue to encounter issues, causing developers to remain stuck in a cycle of low efficiency and governance challenges.

Problems Faced by Enterprises of Different Sizes

[Figure: problems faced by enterprises of different sizes]

Industry Attempts at Solutions

The industry has many good ideas and projects attempting to solve these problems, such as service meshes, application runtimes, platform engineering, Spring Modulith, and Google Service Weaver. These solutions have had some effect but also come with limitations:

  1. From the perspective of business developers, only part of the infrastructure is shielded, and the business’s common parts are not shielded.
  2. Only some of the problems are addressed.
  3. High cost of retrofitting existing applications.

Koupleless is evolving as a development framework and platform capability to address these issues.



1.3 - 1.3 Architecture Introduction

1.3.1 - 1.3.1 Architecture Principles

Koupleless Architecture Principles

Modular Application Architecture

To address these issues, we have performed both horizontal and vertical splits on applications. Firstly, the vertical split involves dividing the application into base and business layers, corresponding to two layers of organizational responsibilities. The base team, like traditional applications, is responsible for machine maintenance, common logic abstraction, business architecture governance, and providing runtime resources and environments for business. By separating concerns, all underlying infrastructure below the business layer is shielded, allowing the focus to remain on the business itself. Secondly, we horizontally partition the business into multiple modules, allowing independent parallel iteration among them without interference. Since modules exclude the base part, their build artifacts are lightweight, and their startup logic only encompasses the business itself, enabling rapid startup and providing sub-second verification capabilities, thus optimizing module development efficiency to the utmost extent.
Before the split, each developer may have perceived all the code and logic from the framework to middleware to business common parts to business itself. After the split, the collaboration within the team has changed, and developers are divided into two roles: base and module developers. Module developers are not concerned with resources and capacity but enjoy the ability to deploy and verify changes in seconds, focusing solely on business logic.

Here it’s crucial to understand how we perform these vertical and horizontal splits. The split is for isolation, and isolation is for independent iteration, stripping unnecessary dependencies. However, isolation alone, without sharing, is akin to merely relocating deployment positions, which may not yield desirable results. Thus, besides isolation, we also emphasize sharing capabilities. Therefore, it’s essential to focus on understanding the principles behind modular architecture in terms of isolation and sharing.

Module Definition

Before delving further, let’s clarify what modules are in this context. Modules are derived by subtracting the base part from the original application. This subtraction is achieved by setting the scope of dependencies in the module as provided.
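For example, a dependency that is already provided by the base can be marked as provided in the module's pom; it is then excluded from the module's build artifact and resolved from the base at runtime (a minimal sketch; the artifact shown is only an illustration):

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.83</version>
    <!-- provided: excluded from the module FatJar and delegated to the base class loader at runtime -->
    <scope>provided</scope>
</dependency>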
A module can be defined by these three points:

  1. A jar package generated by SpringBoot packaging
  2. A module: a SpringContext + a ClassLoader
  3. Hot deployment (no need to restart the process during upgrade)

Isolation and Sharing of Modules

Modules are isolated in terms of configuration and code through ClassLoader and SpringContext. They share configurations and code classes between modules and bases through SOFAArk and between multiple modules through SpringContext Manager.
Within the JVM, this is achieved through:

  1. Ark Container providing a multi-ClassLoader runtime environment
  2. Arklet managing module lifecycles
  3. Framework Adapter associating SpringBoot lifecycle with module lifecycle
  4. SOFAArk’s default delegation loading mechanism bridging module and base class delegation loading
  5. SpringContext Manager providing bean and service discovery mechanisms
  6. Bases essentially being modules with independent SpringContext and ClassLoader


However, modularization technologies in the Java domain have been developing for 20 years. Why has this one been able to scale within Ant Group? The core reason lies in the multi-module capabilities built on SOFAArk and SpringContext Manager, which provide a low-cost way to adopt it.

Isolation Aspect

Compared to other modularization technologies, from an isolation perspective, JPMS and Spring Modulith impose limitations through custom rules, with Spring Modulith requiring verification in unit tests. Their isolation capabilities are relatively weak and somewhat hacky, retrofitting costs for existing applications are significant, and for some legacy applications retrofitting is simply infeasible. Similar to OSGi, SOFAArk employs ClassLoader and SpringContext for configuration, code, bean, and service isolation, while staying consistent with the native application startup mode.

Sharing Aspect

While SOFAArk shares isolation mechanisms with OSGi, OSGi, JPMS, and Spring Modulith all require defining import/export lists or other configurations between source and target modules, resulting in high usage costs for business modules that need to understand and perceive multi-module technologies. SOFAArk defines a default class delegation loading mechanism and cross-module bean and service discovery mechanisms, enabling business usage of multi-module capabilities without modification.
Additionally, why can the modularization technology based on SOFAArk provide these default capabilities at low cost and emphasize low-cost usage? The main reason is that we have differentiated roles for modules, distinguishing between bases and modules. Based on this core reason, we have also attached importance to low-cost usage and made important design considerations and trade-offs. For specific design considerations and trade-offs, refer to the technical implementation article.

Inter-Module Communication

Inter-module communication relies primarily on the bean and service discovery mechanism provided by SpringContext Manager.
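As a sketch of what a cross-module call looks like in code (the bean and module names are illustrative; the SpringServiceFinder API is described in the user manual sections below):

// Look up a bean named "appServiceImpl" published by module "biz1" and call it
AppService service = SpringServiceFinder.getModuleService(
        "biz1", "0.0.1-SNAPSHOT", "appServiceImpl", AppService.class);
System.out.println(service.getAppName());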

Module Evolution

Looking back at the major issues mentioned earlier, it can be seen that the isolation and sharing capabilities of the modular architecture solve problems such as complex infrastructure, collaboration blocking, and high resource and long-term maintenance costs. However, the issue of inconsistent agility between microservice splitting and the business remains unresolved.
Here, we address this by reducing the cost of microservices splitting. So how do we reduce the cost of microservices splitting? The main approach is to introduce modular architecture between monolithic and microservices architectures.

  1. Modules do not occupy resources, so splitting incurs no resource costs.
  2. Modules do not include business common parts, frameworks, or middleware parts, so modules incur no long-term SDK upgrade and maintenance costs.
  3. Modules themselves are SpringBoot, and we provide tools to assist in the low-cost splitting of monolithic applications into modular applications.
  4. Modules have flexible deployment capabilities; they can be deployed together in one JVM or separately, allowing modules to evolve into microservices or revert to monolithic application modes at low cost.

[Figure: bidirectional evolution between monolithic, modular, and microservices architectures]
The arrows in the diagram are bidirectional. If there are too many microservices currently split, multiple microservices can also be cost-effectively transformed into modules and deployed together in one JVM. So, the essence here is to add a bidirectional transitional modular architecture between monolithic and microservices architectures, reducing the cost of transformation while allowing developers to evolve or roll back according to business needs. This can solve several problems of microservices.

Advantages of Modular Architecture

The main advantages of modular architecture are concentrated in four points: speed, savings, flexible deployment, and evolvability.

When compared to traditional applications, the data below show more than a 10x improvement in development, deployment, and runtime stages.
[Figure: comparison with traditional applications across the development, deployment, and runtime stages]

Platform Architecture

Just having application architecture is not enough. It is necessary to provide complete supporting capabilities from the development stage to the operation and runtime stages to truly realize the advantages of modular application architecture to developers.
During the development stage, it is necessary to provide capabilities for base access, module creation, and more importantly, local rapid build and debugging capabilities for modules; during the operation stage, provide fast module deployment capabilities, and on top of module deployment, provide A/B testing and second-level scaling capabilities; during the runtime stage, provide reliability capabilities for modules, fine-grained control of observability, traffic, scheduling, and scaling.

Component View

Within the entire platform, four components are needed:

  1. Development tool Arkctl, providing module creation, rapid debugging testing, and other capabilities.
  2. Runtime components SOFAArk, Arklet, providing module operation, module lifecycle management, and multi-module runtime environment.
  3. Control plane components ModuleController
    1. ModuleDeployment provides module deployment and operation capabilities.
    2. ModuleScheduler provides module scheduling capabilities.
    3. ModuleScaler provides module scaling capabilities.

2 - 2. Quick Start

Koupleless Quick Start

This quick start guide mainly introduces the dynamic merge deployment model, which is used to save resources and improve R&D efficiency. If you only want to save resources, you can use static merge deployment. This guide includes:

  1. Base Access
  2. Module Access
  3. Module Development Verification
  4. Module Deployment (not available yet, updates pending)

Video tutorials are also available, click here to view.

Prerequisites

Development Tools

  • JDK 8, JDK 17, JDK 21+
  • Maven v3.9.0+
  • arkctl v0.2.1+, installation instructions can be found here

Operation and Maintenance Tools (not required for static merge deployment)

  • Docker
  • Kubectl
  • K8s Cluster such as minikube v1.10+

Base Access

Refer to this link

Module Access

Refer to this link

Local Environment Development Verification

Check here

Module Deployment Example with Minikube Cluster (not available yet, updates pending)

Step 1: Deploy Operation and Maintenance Component ModuleController

kubectl apply -f xxx/xxx.yaml  

Step 2: Publish Using Sample Base

  1. Deploy the base to the K8s cluster, create a service for the base, exposing the port,
    you can reference here
  2. Execute minikube service base-web-single-host-service to access the base service

Step 3: Release the Module

There are two ways to release a module:

  1. Directly deploy the local module jar package to the K8s cluster:
arkctl deploy ${path to the jar package} --pod ${namespace}/${podname}  
  2. Deploy and release via a K8s module deployment:
    Create a module deployment and use kubectl apply to publish it:
kubectl apply -f xxx/xxxxx/xx.yaml  

Step 4: Test Verification

For More Experiments, Please View Sample Cases

Click here

2.1 - 2.1 Module Operations

Quick Start with Koupleless

This quick start guide mainly introduces Module operations based on Module Controller V2. It includes:

  1. Environment Preparation
  2. Module Controller V2 Deployment
  3. Test Base Preparation
  4. Module Deployment and Status Checking

Environment Preparation

K8S Environment Deployment

Module Controller V2 builds Module operation capabilities based on K8S, so a basic K8S environment is needed first.

Note: Module Controller currently only supports arm64 / amd64 environments.

If you already have a K8S cluster, skip this section.

For local testing, it is recommended to use Minikube to quickly set up K8S locally. Minikube is an open-source tool for local Kubernetes deployment, helping quickly deploy K8S components.

To install Minikube, first, install the Docker environment: Docker Official Website

After installing Docker and starting the Docker daemon, Minikube installation preparation is complete.

Refer to the official documentation for Minikube installation.

Module Controller V2 Deployment

Module Controller V2 can be deployed in two ways:

  1. Local execution (requires go environment, not recommended)
  2. Image deployment (recommended)

Next, we will use image deployment as an example.

First, prepare necessary RBAC configuration for Module Controller V2.

  1. Download Service Account YAML
  2. Download Cluster Role YAML
  3. Download Cluster Role Binding YAML

Then apply the above three YAML files to set permissions and bindings for the service account.
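Assuming the three files were saved locally under these names (placeholders; use whatever names you downloaded them as), applying them looks like:

kubectl apply -f service-account.yaml
kubectl apply -f cluster-role.yaml
kubectl apply -f cluster-role-binding.yaml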

Next, prepare the Pod Yaml for Module Controller deployment:

apiVersion: v1
kind: Pod
metadata:
  name: module-controller
  labels:
    app: module-controller
spec:
  serviceAccountName: virtual-kubelet # Service Account configured in the previous step
  containers:
    - name: module-controller
      image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/release/module-controller-v2:v2.1.2
      imagePullPolicy: Always
      resources:
        limits:
          cpu: "1000m"
          memory: "400Mi"
      ports:
        - name: http-tunnel # K8s port names must be lowercase alphanumerics and '-'
          containerPort: 7777
      env:
        - name: ENABLE_HTTP_TUNNEL
          value: "true"

Apply the above YAML to the K8S cluster, and wait for the Module Controller Pod to reach the Running state.
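For example, assuming you saved the YAML above as module-controller.yaml (the file name is a placeholder):

kubectl apply -f module-controller.yaml
kubectl get pod module-controller --watch   # wait until STATUS shows Running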

The Module operations capability is now set up. Next, prepare the test base and test Module.

Test Base Deployment

To facilitate onboarding, a Docker image of a test base is provided. First, download the Base Yaml:

apiVersion: v1
kind: Pod
metadata:
  name: base
  labels:
    app: base
spec:
  containers:
    - name: base
      image: serverless-registry.cn-shanghai.cr.aliyuncs.com/opensource/test/base-web:1.1.1 # Pre-packaged image
      imagePullPolicy: Always
      ports:
        - name: base
          containerPort: 8080
        - name: arklet
          containerPort: 1238
      env:
        - name: MODULE_CONTROLLER_ADDRESS
          value: {YOUR_MODULE_CONTROLLER_IP}

Replace {YOUR_MODULE_CONTROLLER_IP} with the actual Module Controller Pod IP in the YAML.
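One way to look up that IP, given that the Module Controller Pod created above is named module-controller:

kubectl get pod module-controller -o jsonpath='{.status.podIP}'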

Apply the modified YAML to the K8S cluster and wait for the Base Pod to reach the Running state.

Once the base has started, verify its successful mapping to a VNode with:

kubectl get nodes

If a node named vnode.test-base.dev appears and is Ready, the base is successfully started and mapped.

The UUID contained in the actual node name is generated at base startup and changes on each restart.

Next, use port-forward to expose the base container’s service for verification, using the command:

kubectl port-forward base 8080:8080

Visit http://localhost:8080 to verify that the base service is reachable.

Module Deployment and Status Checking

Module Deployment

First, verify the state before Module installation by visiting the base service: Module Test.

It should return an error page indicating the Module is not installed.
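For example, with the port-forward from the previous section still active, you can hit the module path directly (the /biz1 path is an assumption based on the sample module used below):

curl http://localhost:8080/biz1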

Next, deploy the Module using a Deployment. Apply the Module YAML to K8S for Module deployment. Here is an example for a single Module:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: biz1
  labels:
    virtual-kubelet.koupleless.io/component: module-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      module: biz1
  template:
    metadata:
      labels:
        module: biz1
        virtual-kubelet.koupleless.io/component: module
    spec:
      containers:
        - name: biz1
          image: https://serverless-opensource.oss-cn-shanghai.aliyuncs.com/module-packages/stable/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
          env:
            - name: BIZ_VERSION
              value: 0.0.1-SNAPSHOT
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: base.koupleless.io/version
                    operator: In
                    values:
                      - 1.0.0
                  - key: base.koupleless.io/name
                    operator: In
                    values:
                      - koupleless-sample
      tolerations:
        - key: "schedule.koupleless.io/virtual-node"
          operator: "Equal"
          value: "True"
          effect: "NoExecute"
        - key: "schedule.koupleless.io/node-env"
          operator: "Equal"
          value: "dev"
          effect: "NoExecute"

Once deployment is complete, use kubectl get pods to check the status of all Module pods.

When the pods created by the deployment reach the Running state, the Module installation is complete. Verify by visiting the base service again: Module Test.

You should see the content: hello to /biz1 deploy, indicating the Module installation is complete.

Module Deletion

Modules can be removed by deleting their Deployment with:

kubectl delete deployment biz1

Check the pod deletion success with kubectl get pods.

After deletion, visit the base service Module Test to verify Module uninstallation.

The page should revert to the state indicating the Module is uninstalled.

3 - 3. Video Tutorials

Koupleless Video Tutorials

Koupleless (SOFAServerless) Module Local Development and Deployment Video Tutorial

Quick Tip: Only takes two minutes



For detailed written instructions, please click here: Experiment One.

Koupleless (SOFAServerless) Platform and Development Framework Full Video Tutorial

This learning course is interactive and equipped with a complete sandbox environment for hands-on learning. Feel free to give it a try!

Step 1: Click here to register for an Open Source College account.

Step 2: After registration, if the page does not redirect automatically, go to the “SOFAServerless Interactive Lab” homepage and choose the following course:

Koupleless (SOFAServerless) 平台和研发框架完整视频教程

Click on “SOFAServerless Development Framework and Product Introduction” and then click “Start Learning”.

4 - 4. User Manual

Koupleless User Manual

4.1 - 4.1 Base Access

4.1.1 - 4.1.1 SpringBoot or SOFABoot Upgrade to Base

Upgrade SpringBoot or SOFABoot to Koupleless Base

This article introduces the operation and verification steps for upgrading an existing SpringBoot or SOFABoot application to a Koupleless base at low cost. It only requires adding the koupleless base dependencies and build plugin described below. With the same code branch, the application can still start independently like the original SpringBoot application, and it can also serve as a base onto which modules are merged and deployed.

Prerequisites

  1. SpringBoot version >= 2.1.9.RELEASE (for SpringBoot users)
  2. SOFABoot version >= 3.9.0 or SOFABoot >= 4.0.0 (for SOFABoot users)
  3. Install maven version >= 3.9.0 locally

Note: for SpringBoot version == 2.1.9.RELEASE, see “Upgrade SpringBoot 2.1.9 to Base” below.

Access Steps

Code and Configuration Modifications

Modify application.properties

# Need to define the application name
spring.application.name = ${Replace with actual base app name}

Modify the main pom.xml

<properties>
    <sofa.ark.version>2.2.16</sofa.ark.version>
    <koupleless.runtime.version>1.3.3</koupleless.runtime.version>
</properties>
<!-- Place this as the first dependency in your build pom -->
<dependency>
    <groupId>com.alipay.sofa.koupleless</groupId>
    <artifactId>koupleless-base-starter</artifactId>
    <version>${koupleless.runtime.version}</version>
</dependency>

<!-- If using Spring Boot web, add this dependency. For more details, see https://www.sofastack.tech/projects/sofa-boot/sofa-ark-multi-web-component-deploy/ -->
<dependency>
    <groupId>com.alipay.sofa</groupId>
    <artifactId>web-ark-plugin</artifactId>
</dependency>

<!-- To adapt third-party dependencies to the koupleless mode, the following build plugin must be added -->
<build>
    <plugins>
        <plugin>
            <groupId>com.alipay.sofa.koupleless</groupId>
            <artifactId>koupleless-base-build-plugin</artifactId>
            <version>${koupleless.runtime.version}</version>
            <executions>
                <execution>
                    <goals>
                        <goal>add-patch</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Integration for Other Versions

Upgrade SpringBoot 2.1.9 to Base

After modifying the above configurations, additional modifications are required:

Modify main pom.xml
<!-- Place this as the first dependency in your pom -->
<dependency>
    <groupId>com.alipay.sofa.koupleless</groupId>
    <artifactId>koupleless-base-starter</artifactId>
    <version>${koupleless.runtime.version}</version>
</dependency>

<!-- If using Spring Boot web, add this dependency. For more details, see https://www.sofastack.tech/projects/sofa-boot/sofa-ark-multi-web-component-deploy/ -->
<dependency>
    <groupId>com.alipay.sofa</groupId>
    <artifactId>web-ark-plugin</artifactId>
</dependency>
<dependency>
    <groupId>com.github.oshi</groupId>
    <artifactId>oshi-core</artifactId>
    <version>3.9.1</version>
</dependency>

        
<!-- To adapt third-party dependencies to the koupleless mode, the following build plugin must be added -->
<build>
    <plugins>
        <plugin>
            <groupId>com.alipay.sofa.koupleless</groupId>
            <artifactId>koupleless-base-build-plugin</artifactId>
            <version>${koupleless.runtime.version}</version>
            <executions>
                <execution>
                    <goals>
                        <goal>add-patch</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
Modify base startup class

If the koupleless version is 1.1.0 or higher, no change is needed.

If the koupleless version is lower than 1.1.0, exclude the HealthAutoConfiguration class in the @SpringBootApplication annotation of the base SpringBoot startup class, as shown below:

import com.alipay.sofa.koupleless.arklet.springboot.starter.health.HealthAutoConfiguration;
@SpringBootApplication(exclude = { HealthAutoConfiguration.class })
public class BaseApplication {
    public static void main(String[] args) {
        SpringApplication.run(BaseApplication.class, args);
    }
}

Startup Verification

If the base application starts normally, the verification is successful.



4.2 - 4.2 Module Access

Koupleless Module Access

4.2.1 - 4.2.1 Upgrade to Module from existing SpringBoot or SOFABoot

Upgrade to Module from existing SpringBoot or SOFABoot

We can create Biz Module in three ways, and this article introduces the second one:

  1. Splitting a large application into multiple modules
  2. Transforming an existing application into a single module
  3. Directly creating a module using a scaffold

This article introduces the operation and verification steps for upgrading existing SpringBoot or SOFABoot applications to modules at low cost. It requires only adding an ark packaging plugin and configuring module slimming to upgrade a conventional application to a module application with one click. The same code branch can still be started independently, just like a regular SpringBoot application, and can also be deployed together with other applications as a module.

Prerequisites

  1. SpringBoot version >= 2.3.0 (for SpringBoot users)
  2. SOFABoot >= 3.9.0 or SOFABoot >= 4.0.0 (for SOFABoot users)

Access Steps

Step 1: Modify application.properties

# Need to define the application name
spring.application.name = ${Replace with actual module app name}

Step 2: Add Dependencies and Packaging Plugins for the Module

Note: The sofa-ark plugin must be declared before the springboot packaging plugin.

<!-- Dependencies required for the module, mainly for inter-module communication --> 
<dependencies>
    <dependency>
        <groupId>com.alipay.sofa.koupleless</groupId>
        <artifactId>koupleless-app-starter</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>

<plugins>
<!-- Add the ark packaging plugin here -->
    <plugin>
        <groupId>com.alipay.sofa</groupId>
        <artifactId>sofa-ark-maven-plugin</artifactId>
        <version>${sofa.ark.version}</version>
        <executions>
            <execution>
                <id>default-cli</id>
                <goals>
                    <goal>repackage</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
            <skipArkExecutable>true</skipArkExecutable>
            <outputDirectory>./target</outputDirectory>
            <bizName>${Replace with module name}</bizName>
            <webContextPath>${Module's custom web context path}</webContextPath>
            <declaredMode>true</declaredMode>
        </configuration>
    </plugin>
<!-- Build a regular SpringBoot fat jar, used for independent deployment, can be removed if not needed -->
    <plugin>
        <!-- Original spring-boot packaging plugin -->
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
</plugins>

Step 3: Automate Module Slimming

You can leverage the automated slimming capability provided by the ark packaging plugin to slim down the Maven dependencies in your module application. This step is mandatory; otherwise, the resulting module JAR file will be very large and startup may fail. Extended reading: If the module does not slim its dependencies, what happens when the SpringBoot framework is imported independently?
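As a rough sketch, the slimming configuration is a list of dependency patterns to exclude from the module artifact. The file location and keys below follow the SOFAArk slimming convention and should be checked against the Module Slimming chapter of this manual; the patterns themselves are only examples:

# conf/ark/bootstrap.properties
# dependencies matching these patterns are stripped from the module jar and reused from the base
excludeGroupIds=org.springframework.*
excludeArtifactIds=commons-lang3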

Step 4: Build the Module Jar Package

Execute mvn clean package -DskipTests; you can find the packaged ark biz jar in the target directory, and the packaged regular springboot jar in the target/boot directory.

Tip: Full Middleware Compatibility List Supported in the Module

Experiment: Verifying that the module can be started independently and deployed as a combined module

After adding the module packaging plugin (sofa-ark-maven-plugin) for packaging, only the ark-biz.jar build artifact will be added, which does not conflict with or affect the executable Jar built by the native spring-boot-maven-plugin. When deploying on the server, if you want to start independently, use the executable Jar built by the native spring-boot-maven-plugin as the build artifact; if you want to deploy as an ark module to the base, use the ark-biz.jar built by the sofa-ark-maven-plugin as the build artifact.

Verification of Deployment to the Base

  1. Start the base from the previous step (verification of independent startup).
  2. Initiate module deployment
curl --location --request POST 'localhost:1238/installBiz' \
--header 'Content-Type: application/json' \
--data '{
    "bizName": "${Module Name}",
    "bizVersion": "${Module Version}",
    "bizUrl": "file:///path/to/ark/biz/jar/target/xx-xxxx-ark-biz.jar"
}'

If the following information is returned, it indicates that the module is installed successfully.
[Screenshot: installation success response]

  3. View Current Module Information: Besides the base “base,” there is also a module named “dynamic-provider.”

[Screenshot: current module list]

  4. Uninstall the module
curl --location --request POST 'localhost:1238/uninstallBiz' \
--header 'Content-Type: application/json' \
--data '{
    "bizName": "dynamic-provider",
    "bizVersion": "0.0.1-SNAPSHOT"
}'

If the following information is returned, it indicates that the uninstallation was successful.

{
    "code": "SUCCESS",
    "data": {
        "code": "SUCCESS",
        "message": "Uninstall biz: dynamic-provider:0.0.1-SNAPSHOT success."
    }
}

Verification of Independent Startup

After transforming a regular application into a module, it can still be started independently to verify some basic startup logic. Simply check the option to automatically add provided scope to the classpath in the startup configuration, and then use the same startup method as for regular applications. Modules transformed through automatic slimming can also be started directly using the SpringBoot jar package located in the target/boot directory. For more details, please refer to this link

4.2.2 - 4.2.2 Creating Modules Using Maven Archetype

We can create a Biz Module in four ways, and this article introduces the third one:

  1. Splitting a large application into multiple modules
  2. Transforming an existing application into a single module
  3. Directly creating a module using a scaffold
  4. Transform ordinary code fragments into a module

It’s easy to create a module from a Maven archetype: all you need to do is input the archetype’s Maven groupId and artifactId in IDEA.

<dependency>
    <groupId>com.alipay.sofa.koupleless</groupId>
    <artifactId>koupleless-common-module-archetype</artifactId>
    <version>${koupleless.runtime.version}</version>
</dependency>

The module created from this archetype has already integrated the module packaging plugin and automatic slimming configuration. It can be directly packaged as a module and installed on the base, or started independently locally.
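If you prefer the command line over IDEA, the same archetype can be used via Maven's standard archetype mechanism (a sketch; replace the version placeholder with a concrete koupleless release):

mvn archetype:generate \
  -DarchetypeGroupId=com.alipay.sofa.koupleless \
  -DarchetypeArtifactId=koupleless-common-module-archetype \
  -DarchetypeVersion=${koupleless.runtime.version}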

4.2.3 - 4.2.3 Java Code Fragment as Module

Module creation has four methods, and this article introduces the fourth method:

  1. Split multiple modules from a large application
  2. Transform existing applications into a single module
  3. Create a module directly using scaffolding
  4. Transform ordinary code fragments into a module

This article introduces the operation and verification steps for upgrading Java code fragments to modules. It only requires adding an ark packaging plugin and configuring module slimming to upgrade a plain Java code fragment into a module application with one click. The same code branch can still run independently like the original Java program, and can also be deployed and started together with other applications as a module.

Prerequisites

  • JDK 8
    • sofa.ark.version >= 2.2.14-SNAPSHOT
    • koupleless.runtime.version >= 1.3.1-SNAPSHOT
  • JDK 17/JDK 21
    • sofa.ark.version >= 3.1.7-SNAPSHOT
    • koupleless.runtime.version >= 2.1.6-SNAPSHOT

Integration Steps

Step 1: Add dependencies and packaging plugins required for the module

<properties>
    <sofa.ark.version>${see-prerequisites-above}</sofa.ark.version>
    <!-- Use different koupleless versions for different JDK versions, see: https://koupleless.io/docs/tutorials/module-development/runtime-compatibility-list/#%E6%A1%86%E6%9E%B6%E8%87%AA%E8%BA%AB%E5%90%84%E7%89%88%E6%9C%AC%E5%85%BC%E5%AE%B9%E6%80%A7%E5%85%B3%E7%B3%BB -->
    <koupleless.runtime.version>${see-prerequisites-above}</koupleless.runtime.version>
</properties>

<dependencies>
    <dependency>
        <groupId>com.alipay.sofa.koupleless</groupId>
        <artifactId>koupleless-app-starter</artifactId>
        <version>${koupleless.runtime.version}</version>
        <scope>provided</scope>
    </dependency>
</dependencies>
<plugins>
    <!-- Add the ark packaging plugin here -->
    <plugin>
        <groupId>com.alipay.sofa</groupId>
        <artifactId>sofa-ark-maven-plugin</artifactId>
        <version>${sofa.ark.version}</version>
        <executions>
            <execution>
                <id>default-cli</id>
                <goals>
                    <goal>repackage</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
            <skipArkExecutable>true</skipArkExecutable>
            <outputDirectory>./target</outputDirectory>
            <bizName>${replace-with-module-name}</bizName>
            <declaredMode>true</declaredMode>
        </configuration>
    </plugin>
    
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <version>3.2.0</version>
        <executions>
            <execution>
                <goals>
                    <goal>jar</goal>
                </goals>
                <phase>package</phase>
                <configuration>
                    <classifier>lib</classifier>
                    <!-- Ensure other necessary configuration here -->
                </configuration>
            </execution>
        </executions>
    </plugin>
</plugins>

Step 2: Add initialization logic

Add MainApplication.init() in the code snippet to initialize the container.

public static void main(String[] args) {
    // Initialize the module's instance container
    MainApplication.init();
    // ...
}

In terms of communication between modules and the base, the module registers instances in the container, and the base obtains module instances through SpringServiceFinder. Using biz3 as an example:

  1. biz3 provides two implementations of the AppService interface, Biz3AppServiceImpl and Biz3OtherAppServiceImpl:
public class Biz3OtherAppServiceImpl implements AppService {
    // Get the base bean
    private AppService baseAppService = SpringServiceFinder.getBaseService(AppService.class);
    @Override
    public String getAppName() {
        return "biz3OtherAppServiceImpl in the base: " + baseAppService.getAppName();
    }
}
public class Biz3AppServiceImpl implements AppService {
    // Get the base bean
    private AppService baseAppService = SpringServiceFinder.getBaseService(AppService.class);
    @Override
    public String getAppName() {
        return "biz3AppServiceImpl in the base: " + baseAppService.getAppName();
    }
}

In which, the module obtains the base bean using: SpringServiceFinder.getBaseService(XXX.class), details can be found in: Module and Base Communication under ‘Module calls the base approach two: programming API SpringServiceFinder’.

  2. biz3 registers instances of these two classes in the container:
public static void main(String[] args) {
    // Initialize the module's instance container
    MainApplication.init();
    // Register instances in the module container
    MainApplication.register("biz3AppServiceImpl", new Biz3AppServiceImpl());
    MainApplication.register("biz3OtherAppServiceImpl", new Biz3OtherAppServiceImpl());
}
  3. The base obtains instances from biz3:
@RestController
public class SampleController {
    // Get specific instances from biz3 through annotation
    @AutowiredFromBiz(bizName = "biz3", bizVersion = "0.0.1-SNAPSHOT", name = "biz3AppServiceImpl")
    private AppService biz3AppServiceImpl;
    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String hello() {
        System.out.println(biz3AppServiceImpl.getAppName());
        // Get specific instances from biz3 through an API
        AppService biz3OtherAppServiceImpl = SpringServiceFinder.getModuleService("biz3", "0.0.1-SNAPSHOT",
                "biz3OtherAppServiceImpl", AppService.class);
        System.out.println(biz3OtherAppServiceImpl.getAppName());
        // Get all instances of AppService class from biz3 through an API
        Map<String, AppService> appServiceMap = SpringServiceFinder.listModuleServices("biz3",
                "0.0.1-SNAPSHOT", AppService.class);
        for (AppService appService : appServiceMap.values()) {
            System.out.println(appService.getAppName());
        }
        return "hello to ark master biz";
    }
}

Where SpringBoot / SOFABoot base can obtain module instances through the @AutowiredFromBiz annotation or SpringServiceFinder.getModuleService() programming API, details can be found in: Module and Base Communication under ‘Base calls module’.

Step 3: Automate module slimming

Typically, module dependencies for code fragments are relatively simple. You can set the scope of dependencies in the module that are consistent with the base to “provided”, or use the automated slimming capability of the ark packaging plugin to automatically slim down the maven dependencies in the module. This step is mandatory, otherwise the module jar package will be very large and will result in startup errors.

Step 4: Build the module into a jar package

Execute mvn clean package -DskipTests, and you can find the packaged ark biz jar in the target directory.

Experiment: Verify the module can be deployed and merged

  1. Start the base from the previous step (verify independent start-up steps)
  2. Initiate module deployment. Refer to the sample module deployment of biz3: https://github.com/koupleless/samples/blob/main/springboot-samples/service/README-zh_CN.md

4.3 - 4.3 Module Development

Koupleless Module Development

4.3.1 - 4.3.1 Coding Standards

Koupleless Coding Standards

Basic Specifications

  1. The list of middleware clients officially verified and compatible in Koupleless modules can be found here. Any middleware client can be used in the base.

  2. If you need to use System.setProperties() and System.getProperties() in a module without sharing them with the base, add MultiBizProperties.initSystem() to the main method of the base. For details, refer to the samples.
  3. If the module hot unload capability is used, you can use the following API to decorate ExecutorService (typical for various thread pools), Timer, and Thread objects declared in the module code. When the module is unloaded, the Koupleless Arklet client will attempt to automatically clean up the decorated ExecutorService, Timer, and Thread:
    • In the module code, decorate the ExecutorService that needs to be automatically cleaned up. The underlying code will call the shutdownNow and awaitTermination interfaces of the ExecutorService object, attempting to gracefully release threads (not guaranteed to release 100%, such as when threads are waiting). The specific usage is:
      ShutdownExecutorServicesOnUninstallEventHandler.manageExecutorService(myExecutorService);
      
      Where myExecutorService needs to be a subtype of ExecutorService. You can also configure com.alipay.koupleless.executor.cleanup.timeout.seconds in the module’s SpringBoot or SOFABoot properties file to specify the graceful waiting time for thread pool awaitTermination.

    • In the module code, decorate the Timer that needs to be automatically cleaned up. The underlying code will call the cancel method of the Timer object. The specific usage is:
      CancelTimersOnUninstallEventHandler.manageTimer(myTimer);
      


    • In the module code, decorate the Thread that needs to be automatically cleaned up. The underlying code will forcibly call the stop method of the Thread object. The specific usage is:
      ForceStopThreadsOnUninstallEventHandler.manageThread(myThread);
      
      Note: JDK does not recommend forcibly stopping threads, as it may cause unexpected problems such as forcibly releasing locks on threads. Unless you are sure that forcibly closing threads will not cause any related issues, use it with caution.

  4. If the module hot unload capability is used and there are other resources or objects that need to be cleaned up, you can listen for the Spring ContextClosedEvent event and clean up the necessary resources and objects in the event handler function. You can also specify their destroy-method at the place where Beans are defined in Spring XML. When the module is unloaded, Spring will automatically execute the destroy-method.

  5. When the base is started, all modules will be deployed. Therefore, when coding the base, make sure to be compatible with all modules, otherwise the base deployment will fail. If there are incompatible changes that cannot be bypassed (usually there will be many incompatible changes between the base and modules during the module splitting process), please refer to Incompatible Base and Module Upgrade

Knowledge Points

Module Slimming (Important)
Module-to-Module and Module-to-Base Communication (Important)
Module Testing (Important)
Reuse Base Interceptors in Modules
Reuse Base Data Sources in Modules
Introduction to the Principle of Class Delegation Between Base and Modules
Multiple Configurations for Modules



4.3.2 - 4.3.2 Module Slimming

Koupleless Module Slimming

Why Slimming?

Using the underlying SOFAArk framework, Koupleless achieves class isolation between modules and between modules and the base. When the module starts, it initializes various objects and prioritizes using the module’s class loader to load classes, resources, and JAR files from the FatJar build artifact. Classes that cannot be found will be delegated to the base’s class loader for retrieval.

Based on this class delegation loading mechanism, the common classes, resources, and JAR files shared by the base and modules all sink into the base, allowing the module build artifact to be very small, resulting in very low memory consumption for the module and very fast startup.

Furthermore, after the module starts, many objects will be created in the Spring context. If module hot-unloading is enabled, complete recycling may not be possible, and excessive installations can cause high overhead in the Old generation and Metaspace, triggering frequent FullGC. Therefore, it is necessary to control the size of individual module packages to be < 5MB. In this way, the base can hot deploy and hot unload hundreds of times without replacement or restarting.

The so-called “module slimming” means that the JAR dependencies already present in the base do not participate in the module packaging and construction, thus achieving the two benefits mentioned above:

  • Increase the speed of module installation, reduce module package size, reduce startup dependencies, and control module installation time < 30 seconds, or even < 5 seconds.
  • In the hot deploy and hot unload scenario, the base can hot deploy and hot unload hundreds of times without replacement or restart.

Slimming Principles

The principle of building the ark-biz jar package is to place common packages such as frameworks and middleware in the base as much as possible while ensuring the functionality of the module, and reuse the base packages in the module, making the resulting ark-biz jar more lightweight.

In different scenarios, complex applications can choose different slimming methods.

Scenarios and Corresponding Slimming Methods

Scenario 1: The base and the module have close cooperation, such as the middle platform mode/shared library mode

In the case of close cooperation between the base and modules, the modules should perceive some facade classes of the base and the dependency versions currently used by the base during development, and import the required dependencies as needed. During module packaging, only two types of dependencies should be included: dependencies that the base does not have, and dependencies whose versions are inconsistent with those of the base.

Therefore, the base needs to:

  1. Unified control over module dependency versions to let module developers know which dependencies the base has during development, to mitigate risks, and allow module developers to import part of the dependencies as needed without specifying versions.

The module needs to:

  1. Only include dependencies that are not in the base and dependencies whose versions are inconsistent with those of the base during packaging to reduce the cost of slimming the module

Step 1: Packaging “base-dependencies-starter”

Objective

This step will produce “base-dependencies-starter” for unified control of module dependency versions.

Pom configuration for base bootstrap:

Note: The dependencyArtifactId in the following configuration needs to be modified, generally to ${baseAppName}-dependencies-starter

<build>
    <plugins>
        <plugin>
            <groupId>com.alipay.sofa.koupleless</groupId>
            <artifactId>koupleless-base-build-plugin</artifactId>
            <!-- koupleless.runtime.version >= 1.3.0 -->
            <version>${koupleless.runtime.version}</version>
            <configuration>
                <!-- artifactId of the generated starter (groupId is the same as the base); this needs to be modified!! -->
                <dependencyArtifactId>${baseAppName}-dependencies-starter</dependencyArtifactId>
                <!-- version number of the generated jar -->
                <dependencyVersion>0.0.1-SNAPSHOT</dependencyVersion>
                <!-- for debugging, change to true to see the intermediate products of the packaging -->
                <cleanAfterPackageDependencies>false</cleanAfterPackageDependencies>
            </configuration>
        </plugin>
    </plugins>
</build>

Local test

  1. Pack the base-dependencies-starter jar: execute the command in the root directory of the base:
mvn com.alipay.sofa.koupleless:koupleless-base-build-plugin::packageDependency -f ${Relative path of the base bootstrap pom to the root directory of the base} 

The constructed pom will be in the outputs directory and will be automatically installed in the local Maven repository.

Note: this step does not upload the "base-dependencies-starter" to a remote Maven repository; we welcome further discussion to supplement an "upload to the Maven repository" solution.
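If you need the starter in a shared repository in the meantime, one possible approach is the standard maven-deploy-plugin deploy-file goal (a sketch; the pom path, repository id, and URL are assumptions for your environment):

mvn deploy:deploy-file \
  -Dfile=outputs/pom.xml \
  -DpomFile=outputs/pom.xml \
  -Dpackaging=pom \
  -DrepositoryId=my-repo \
  -Durl=https://nexus.example.com/repository/maven-snapshots/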

Step 2: Module modification packaging plugin and parent

Objective

  1. When developing the module, use the “base-dependencies-starter” from Step 1 as the parent of the module project for unified management of dependency versions;
  2. Modify the module packaging plug-in to only include “dependencies not in the base” and “dependencies whose versions are inconsistent with those of the base” when packaging the module, eliminating the need to manually configure “provided” and achieving automatic slimming of the module.

In addition: For some dependencies, even if the module and base use the same dependency version, the dependency needs to be retained when the module is packaged, i.e., the module slimming dependency whitelist needs to be configured. This feature will be launched at the end of July.

Configure the parent in the module’s root directory pom:

<parent>
   <groupId>com.alipay</groupId>
   <artifactId>${baseAppName}-dependencies-starter</artifactId>
   <version>0.0.1-SNAPSHOT</version>
</parent>

Configure plugin in the module’s packaging pom:

<build>
   <plugins>
       <plugin>
           <groupId>com.alipay.sofa</groupId>
           <artifactId>sofa-ark-maven-plugin</artifactId>
           <!--           since ${sofa.ark.version} >= 2.2.13    -->
           <version>${sofa.ark.version}</version>
           <executions>
               <execution>
                   <id>default-cli</id>
                   <goals>
                       <goal>repackage</goal>
                   </goals>
               </execution>
           </executions>
           <configuration>
               <!-- Configure the identifier of "base-dependencies-starter", standardized as '${groupId}:${artifactId}':'version' -->
               <baseDependencyParentIdentity>com.alipay:${baseAppName}-dependencies-starter:0.0.1-SNAPSHOT</baseDependencyParentIdentity>
           </configuration>
       </plugin>
   </plugins>
</build>

Step 3: Configure Module Dependency Whitelist

For some dependencies, even if the module and base use the same version of the dependency, the dependency needs to be retained when the module is packaged. This requires configuring a module slimming dependency whitelist. This feature will be launched by the end of July.

Step 4: Package Building

Scenario 2: The base and the module have loose cooperation, such as resource saving in multi-application merge deployment

In the case of loose cooperation between the base and the module, the module should not perceive the dependency versions currently used by the base during development, so module slimming must be cheap to adopt: the dependencies to be excluded from module packaging can simply be configured.

Method 1: SOFAArk Configuration File Combining

Step 1

SOFAArk Module Slimming reads configuration from two places:

  • “Module Project Root Directory/conf/ark/bootstrap.properties”, such as: my-module/conf/ark/bootstrap.properties
  • “Module Project Root Directory/conf/ark/bootstrap.yml”, such as: my-module/conf/ark/bootstrap.yml

Configuration

Configure the common package of frameworks and middleware that need to be sunk to the base in “Module Project Root Directory/conf/ark/bootstrap.properties” in the following format, such as:

# excludes: configure ${groupId}:${artifactId}[:${version}], split by ','
excludes=org.apache.commons:commons-lang3,commons-beanutils:commons-beanutils
# excludeGroupIds: configure ${groupId}, split by ','
excludeGroupIds=org.springframework
# excludeArtifactIds: configure ${artifactId}, split by ','
excludeArtifactIds=sofa-ark-spi

Configure the common package of frameworks and middleware that need to be sunk to the base in “Module Project Root Directory/conf/ark/bootstrap.yml” in the following format, such as:

# excludes: configure ${groupId}:${artifactId}[:${version}] as YAML list items
# excludeGroupIds: configure ${groupId} as YAML list items
# excludeArtifactIds: configure ${artifactId} as YAML list items
excludes:
  - org.apache.commons:commons-lang3
  - commons-beanutils:commons-beanutils
excludeGroupIds:
  - org.springframework
excludeArtifactIds:
  - sofa-ark-spi

Step 2

Upgrade the module packaging plug-in sofa-ark-maven-plugin version >= 2.2.12

    <!-- Plugin 1: Packaging plug-in for sofa-ark biz to package as ark biz jar -->
    <plugin>
        <groupId>com.alipay.sofa</groupId>
        <artifactId>sofa-ark-maven-plugin</artifactId>
        <version>${sofa.ark.version}</version>
        <executions>
            <execution>
                <id>default-cli</id>
                <goals>
                    <goal>repackage</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
            <skipArkExecutable>true</skipArkExecutable>
            <outputDirectory>./target</outputDirectory>
            <bizName>biz1</bizName>
            <webContextPath>biz1</webContextPath>
            <declaredMode>true</declaredMode>
        </configuration>
    </plugin>

Step 3

Simply build the module ark-biz jar package, and you will see a significant difference in the size of the slimmed ark-biz jar package.

You can click here to view the complete example project for module slimming.

4.3.3 - 4.3.3 Module Startup

Module Startup

Module Startup Parameters

Modules can be deployed in two ways: static merged deployment and hot deployment.
Static merged deployment does not support configuring startup parameters. Most module startup parameters can instead be placed in the module configuration (application.properties); for example, to configure the profile, change the startup parameter --spring.profiles.active=dev into spring.profiles.active=dev in the application.properties file.
Hot deployment modules support configuration of startup parameters. For example, when using Arklet to install a module via a web request, you can configure startup parameters and environment variables:

curl --location --request POST 'localhost:1238/installBiz' \
--header 'Content-Type: application/json' \
--data '{
    "bizName": "${Module Name}",
    "bizVersion": "${Module Version}",
    "bizUrl": "file:///path/to/ark/biz/jar/target/xx-xxxx-ark-biz.jar",
    "args": ["--spring.profiles.active=dev"],
    "env": {
        "XXX": "YYY"
    }
}'

Module Startup Acceleration

Design Concept for Module Startup Acceleration

The overall idea for module startup acceleration is:

  1. The base starts common services in advance, which only requires the base to pre-import the corresponding dependencies.
  2. The module reuses the base's services. The ways of reusing base services include, but are not limited to, the following (analyze your specific use case, and feel free to discuss any questions in the community group):
    1. Reuse through shared static class variables.
    2. Reuse by having the base encapsulate service interface APIs that the module calls directly.
    3. Obtain proxy objects of base objects through annotations, using tools provided by Koupleless such as @AutowiredFromBase, @AutowiredFromBiz, and SpringServiceFinder, or the annotations supporting in-JVM service calls provided by Dubbo or SOFARpc.
    4. Look up base objects directly across modules, using tools such as SpringBeanFinder provided by Koupleless.

There is an implicit issue here: for a module to invoke base services successfully, it needs certain model classes, so it typically imports the dependencies those services belong to. These service configurations are then scanned during module startup, which may reinitialize the services, starting unnecessary ones, slowing down module startup, and increasing memory consumption. Thus, to accelerate module startup, three tasks must be completed:

  1. The base starts the services in advance.
  2. The module prohibits the startup of these services (the focus of this article).
  3. The module reuses the base's services.

How Modules Can Prohibit Startup of Certain Services

Starting from version 1.1.0, Koupleless provides the following configuration capability:

koupleless.module.autoconfigure.exclude # Services that do not need to start during module startup
koupleless.module.autoconfigure.include # Services that need to start during module startup. If a service is configured with both include and exclude, the service will start.

This configuration can be set in the base platform or in the module. If configured in the base platform, it applies to all modules. If configured in the module, it only applies to that module and will override the configuration in the base platform.
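For example, in the base's or the module's application.properties (the auto-configuration class names below are illustrative; use the ones your module actually pulls in):

# services the module should not start (reuse the base's instead)
koupleless.module.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration,org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration
# services the module should still start itself
koupleless.module.autoconfigure.include=org.springframework.boot.autoconfigure.web.servlet.WebMvcAutoConfiguration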

Benchmark

Detailed benchmark information is yet to be added.

4.3.4 - 4.3.4 Module Communication: Module-to-Module and Module-to-Base

Koupleless Module Communication: Module-to-Module and Module-to-Base

The Spring contexts of the base and modules, and of modules among themselves, are isolated: their beans do not conflict and are not visible to each other. However, in many scenarios, such as the middleware mode and the independent module mode, the base calls modules, modules call the base, and modules call each other. Three invocation methods are currently supported: the @AutowiredFromBiz and @AutowiredFromBase annotations and SpringServiceFinder method calls. Note that the usage of these three methods differs.

Spring Environment

Importing Dependencies in Modules

<dependency>
    <groupId>com.alipay.koupleless</groupId>
    <artifactId>koupleless-app-starter</artifactId>
    <version>0.5.6</version>
    <scope>provided</scope>
</dependency>

Base Calling Module

Only SpringServiceFinder can be used.

@RestController
public class SampleController {

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String hello() {

        Provider studentProvider = SpringServiceFinder.getModuleService("biz", "0.0.1-SNAPSHOT",
                "studentProvider", Provider.class);
        Result result = studentProvider.provide(new Param());

        Provider teacherProvider = SpringServiceFinder.getModuleService("biz", "0.0.1-SNAPSHOT",
                "teacherProvider", Provider.class);
        Result result1 = teacherProvider.provide(new Param());
        
        Map<String, Provider> providerMap = SpringServiceFinder.listModuleServices("biz", "0.0.1-SNAPSHOT",
                Provider.class);
        for (String beanName : providerMap.keySet()) {
            Result result2 = providerMap.get(beanName).provide(new Param());
        }

        return "hello to ark master biz";
    }
}

Module Calling Base

Method 1: Annotation @AutowiredFromBase

@RestController
public class SampleController {

    @AutowiredFromBase(name = "sampleServiceImplNew")
    private SampleService sampleServiceImplNew;

    @AutowiredFromBase(name = "sampleServiceImpl")
    private SampleService sampleServiceImpl;

    @AutowiredFromBase
    private List<SampleService> sampleServiceList;

    @AutowiredFromBase
    private Map<String, SampleService> sampleServiceMap;

    @AutowiredFromBase
    private AppService appService;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String hello() {

        sampleServiceImplNew.service();

        sampleServiceImpl.service();

        for (SampleService sampleService : sampleServiceList) {
            sampleService.service();
        }

        for (String beanName : sampleServiceMap.keySet()) {
            sampleServiceMap.get(beanName).service();
        }

        appService.getAppName();

        return "hello to ark2 dynamic deploy";
    }
}

Method 2: Programming API SpringServiceFinder

@RestController
public class SampleController {

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String hello() {

        SampleService sampleServiceImplFromFinder = SpringServiceFinder.getBaseService("sampleServiceImpl", SampleService.class);
        String result = sampleServiceImplFromFinder.service();
        System.out.println(result);

        Map<String, SampleService> sampleServiceMapFromFinder = SpringServiceFinder.listBaseServices(SampleService.class);
        for (String beanName : sampleServiceMapFromFinder.keySet()) {
            String result1 = sampleServiceMapFromFinder.get(beanName).service();
            System.out.println(result1);
        }

        return "hello to ark2 dynamic deploy";
    }
}

Module Calling Module

Similar to a module calling the base, module-to-module calls use the @AutowiredFromBiz annotation or the SpringServiceFinder programming API.

Method 1: Annotation @AutowiredFromBiz

@RestController
public class SampleController {

    @AutowiredFromBiz(bizName = "biz", bizVersion = "0.0.1-SNAPSHOT", name = "studentProvider")
    private Provider studentProvider;

    @AutowiredFromBiz(bizName = "biz", name = "teacherProvider")
    private Provider teacherProvider;

    @AutowiredFromBiz(bizName = "biz", bizVersion = "0.0.1-SNAPSHOT")
    private List<Provider> providers;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String hello() {

        Result provide = studentProvider.provide(new Param());

        Result provide1 = teacherProvider.provide(new Param());

        for (Provider provider : providers) {
            Result provide2 = provider.provide(new Param());
        }

        return "hello to ark2 dynamic deploy";
    }
}

Method 2: Programming API SpringServiceFinder

@RestController
public class SampleController {

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String hello() {

        Provider teacherProvider1 = SpringServiceFinder.getModuleService("biz", "0.0.1-SNAPSHOT", "teacherProvider", Provider.class);
        Result result1 = teacherProvider1.provide(new Param());

        Map<String, Provider> providerMap = SpringServiceFinder.listModuleServices("biz", "0.0.1-SNAPSHOT", Provider.class);
        for (String beanName : providerMap.keySet()) {
            Result result2 = providerMap.get(beanName).provide(new Param());
        }

        return "hello to ark2 dynamic deploy";
    }
}

Complete Example

SOFABoot Environment

Please refer to this documentation



4.3.5 - 4.3.5 Module Local Development and Debugging

Local development and debugging of the Koupleless module

Arkctl Tool Installation

Arkctl mainly provides automated packaging and deployment capabilities: it invokes the mvn command to build the module into a JAR file and calls the API provided by Arklet to complete the deployment. For installation, refer to the arkctl installation documentation in the Local Environment Development Verification section.

Installation Method 1: Using the Golang Toolchain

  1. Download the corresponding version of Golang from the Golang official website; the version must be above 1.21.
  2. Execute the command go install github.com/koupleless/arkctl@v0.2.1 to install the Arkctl tool.

Installation Method 2: Downloading Binary Files

  1. Download the Arkctl binary matching your operating system: Download Arkctl.
  2. Unzip the corresponding binary file and place it in a directory that is included in the system’s PATH variable.
  3. After the base and module have been modified and the base has been started, the Arkctl tool can be used to quickly complete the build and deployment of the module into the base.
     

How to Find the PATH Value on Linux/Mac?

Execute in the terminal:

echo $PATH  
# Choose a directory and place arkctl in that directory  

How to Find the PATH Value on Windows?

Press Windows + R, type cmd, and then press Enter to open the command prompt. In the command prompt window, enter the following command and press Enter:

echo %PATH%  

Note: In the Windows environment, if Windows Defender is enabled, it may falsely report issues when downloading the binary through the browser.

You can refer to the [Go official documentation](https://go.dev/doc/faq#virus) for the reason behind the error. This error can be ignored; feel free to download.

Since Arkctl deployment is actually completed by calling the API, if you prefer not to use the command-line tool, you can directly use the Arklet [API interface](/docs/contribution-guidelines/arklet/architecture) to complete the deployment operation. We also provide a telnet method for module deployment; [detailed instructions can be found here](https://www.sofastack.tech/projects/sofa-boot/sofa-ark-ark-telnet/).

Local Quick Deployment

You can use the Arkctl tool to quickly build and deploy modules, improving the efficiency of local debugging and development.

Scenario 1: Building a Module JAR and Deploying to a Locally Running Base.

Preparation:

  1. Start a base locally.
  2. Open a module project repository.
    Execute the command:
# This needs to be executed in the root directory of the repository.  
# For example, if it is a Maven project, execute it in the directory where the root pom.xml is located.  
arkctl deploy  

Once the command completes, it is successfully deployed, and the user can debug and validate the relevant module functionalities.

Scenario 2: Deploying a Locally Built JAR to a Locally Running Base.

Preparation:

  1. Start a base locally.
  2. Prepare a built JAR file.
    Execute the command:
arkctl deploy /path/to/your/pre/built/bundle-biz.jar  

Once the command completes, it is successfully deployed, and the user can debug and validate the relevant module functionalities.

Scenario 3: Deploying a Locally Unbuilt JAR to a Locally Running Base.

Preparation:

  1. Start a base locally.
    Execute the command:
arkctl deploy ./path/to/your/biz/  

Note: This command applies when the module can be built independently (e.g., mvn package succeeds in the biz directory); it will automatically build the module and deploy it to the base.

Scenario 4: Building and Deploying Submodule JARs in a Multi-Module Maven Project from the Root.

Preparation:

  1. Start a base locally.
  2. Open a multi-module Maven project repository.
    Execute the command:
# This needs to be executed in the root directory of the repository.  
# For example, if it is a Maven project, execute it in the directory where the root pom.xml is located.  
arkctl deploy --sub ./path/to/your/sub/module  

Once the command completes, it is successfully deployed, and the user can debug and validate the relevant module functionalities.

Scenario 5: Building a Module JAR and Deploying to a Remote Running K8s Base.

Preparation:

  1. Ensure that a base pod is already running remotely.
  2. Open a module project repository.
  3. You must have a K8s certificate with exec permissions and the kubectl command-line tool available locally.
    Execute the command:
# This needs to be executed in the root directory of the repository.  
# For example, if it is a Maven project, execute it in the directory where the root pom.xml is located.  
arkctl deploy --pod {namespace}/{podName}  

Once the command completes, it is successfully deployed, and the user can debug and validate the relevant module functionalities.

Scenario 6: How to Use This Command More Quickly

You can create a Shell Script in IDEA, set the running directory, and then enter the corresponding Arkctl command as shown in the image below.

Local Module Debugging

Module and Base in the Same IDEA Project

Since the IDEA project can see the module code, debugging the module is no different from normal debugging. Just set breakpoints in the module code and start the base in debug mode.

Module and Base in Different IDEA Projects

  1. Add the debug configuration to the base startup parameters: -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000, then start the base (see the example after this list).
  2. Add a Remote JVM Debug run configuration to the module project, with host set to localhost and port set to 8000, matching the address in the base's -agentlib:jdwp parameter.
  3. Set breakpoints in the module.
  4. After installing the module, you can begin debugging.
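A minimal sketch of step 1, starting the base with the debug agent enabled (the jar name is illustrative):

java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -jar base-web-0.0.1-SNAPSHOT.jar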

Checking Deployment Status

Scenario 1: Querying Modules Already Deployed in the Current Base.

Preparation:

  1. Start a base locally.
    Execute the command:
arkctl status  

Scenario 2: Querying Modules Already Deployed in the Remote K8s Environment Base.

Preparation:

  1. Start a base in the remote K8s environment.
  2. Ensure you have Kube certificates and the necessary permissions locally.
    Execute the command:
arkctl status --pod {namespace}/{name}  

Viewing Runtime Module Status and Information Using Arthas

Retrieve All Biz Information

vmtool -x 1 --action getInstances --className com.alipay.sofa.ark.container.model.BizModel --limit 100  


Retrieve Specific Biz Information

# Please replace ${bizName}  
vmtool -x 1 --action getInstances --className com.alipay.sofa.ark.container.model.BizModel --limit 100 | grep ${bizName} -A 4  


Retrieve Biz Information Corresponding to a Specific BizClassLoader

# Please replace ${BizClassLoaderHashCode}  
vmtool -x 1 --action getInstances --className com.alipay.sofa.ark.container.model.BizModel --limit 100 | grep ${BizClassLoaderHashCode} -B 1 -A 3  


4.3.6 - 4.3.6 Reusing Base Data Source

Koupleless Module Reusing Base Data Source

Recommendation

It is highly recommended to use the approach outlined in this document to reuse the base data source within the module whenever possible. Failing to do so may result in repeated creation and consumption of data source connections during module deployments, leading to slower module publishing and operations, as well as increased memory usage.

SpringBoot Solution

Simply create a MybatisConfig class in the module’s code. This way, the transaction template is reused from the base, and only the Mybatis SqlSessionFactoryBean needs to be newly created. Refer to the demo: /koupleless/samples/springboot-samples/db/mybatis/biz1

Use SpringBeanFinder.getBaseBean to obtain the base Bean object, and then register it as the module’s Bean:


@Configuration
@MapperScan(basePackages = "com.alipay.sofa.biz1.mapper", sqlSessionFactoryRef = "mysqlSqlFactory")
@EnableTransactionManagement
public class MybatisConfig {

    // Note: Do not initialize a base DataSource, as it will be destroyed when the module is uninstalled. 
    // However, resources such as transactionManager, transactionTemplate, and mysqlSqlFactory can be safely destroyed.

    @Bean(name = "transactionManager")
    public PlatformTransactionManager platformTransactionManager() {
        return (PlatformTransactionManager) getBaseBean("transactionManager");
    }

    @Bean(name = "transactionTemplate")
    public TransactionTemplate transactionTemplate() {
        return (TransactionTemplate) getBaseBean("transactionTemplate");
    }

    @Bean(name = "mysqlSqlFactory")
    public SqlSessionFactoryBean mysqlSqlFactory() throws IOException {
        // The data source cannot be declared as a bean in the module's Spring context, as it will be closed when the module is uninstalled.

        DataSource dataSource = (DataSource) getBaseBean("dataSource");
        SqlSessionFactoryBean mysqlSqlFactory = new SqlSessionFactoryBean();
        mysqlSqlFactory.setDataSource(dataSource);
        mysqlSqlFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources("classpath:mappers/*.xml"));
        return mysqlSqlFactory;
    }
}

SOFABoot Solution

If the SOFABoot base does not enable multi-bundle (there is no MANIFEST.MF file in the package), the solution is identical to the SpringBoot solution above. If there is a MANIFEST.MF file, you need to call BaseAppUtils.getBeanOfBundle to obtain the base Bean, where BASE_DAL_BUNDLE_NAME is the Module-Name in the MANIFEST.MF file.


@Configuration
@MapperScan(basePackages = "com.alipay.koupleless.dal.dao", sqlSessionFactoryRef = "mysqlSqlFactory")
@EnableTransactionManagement
public class MybatisConfig {

    // Note: Do not initialize a base DataSource, as it will be destroyed when the module is uninstalled. 
    // However, resources such as transactionManager, transactionTemplate, and mysqlSqlFactory can be safely destroyed

    private static final String BASE_DAL_BUNDLE_NAME = "com.alipay.koupleless.dal";

    @Bean(name = "transactionManager")
    public PlatformTransactionManager platformTransactionManager() {
        return (PlatformTransactionManager) BaseAppUtils.getBeanOfBundle("transactionManager",BASE_DAL_BUNDLE_NAME);
    }

    @Bean(name = "transactionTemplate")
    public TransactionTemplate transactionTemplate() {
        return (TransactionTemplate) BaseAppUtils.getBeanOfBundle("transactionTemplate",BASE_DAL_BUNDLE_NAME);
    }

    @Bean(name = "mysqlSqlFactory")
    public SqlSessionFactoryBean mysqlSqlFactory() throws IOException {
        // The data source cannot be declared as a bean in the module's Spring context, as it will be closed when the module is uninstalled.
        ZdalDataSource dataSource = (ZdalDataSource) BaseAppUtils.getBeanOfBundle("dataSource",BASE_DAL_BUNDLE_NAME);
        SqlSessionFactoryBean mysqlSqlFactory = new SqlSessionFactoryBean();
        mysqlSqlFactory.setDataSource(dataSource);
        mysqlSqlFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources("classpath:mapper/*.xml"));
        return mysqlSqlFactory;
    }
}


4.3.7 - 4.3.7 Reusing Base Interceptors

Koupleless Module Reusing Base Interceptors

Objective

In the base, many Aspect interceptors (Spring interceptors) are defined, and you may want to reuse them in the module. However, the Spring contexts of the module and the base are isolated, which means that Aspect interceptors will not take effect in the module.

Solution

Create a proxy object for the original interceptor class, allowing the module to invoke this proxy object. Then, the module initializes this proxy object through the AutoConfiguration annotation. The complete steps and example code are as follows:

Step 1:

The base code defines an interface that defines the execution method of the interceptor. This interface needs to be visible to the module (referenced in the module dependencies):

public interface AnnotionService {
    Object doAround(ProceedingJoinPoint joinPoint) throws Throwable;
}

Step 2:

Write the specific implementation of the interceptor in the base. This implementation class needs to be annotated with @SofaService (SOFABoot) or @SpringService (SpringBoot, under construction):

@Service
@SofaService(uniqueId = "facadeAroundHandler")
public class FacadeAroundHandler implements AnnotionService {

    private final static Logger LOG = LoggerConst.MY_LOGGER;

    @Override
    public Object doAround(ProceedingJoinPoint joinPoint) throws Throwable {
        LOG.info("Start execution");
        Object result = joinPoint.proceed();
        LOG.info("Execution completed");
        return result;
    }
}

Step 3:

In the module, use the @Aspect annotation to implement an Aspect. In SOFABoot, inject the base's FacadeAroundHandler via @SofaReference; in SpringBoot, inject it via @AutowiredFromBase.

Note: Do not declare this as a bean, do not add @Component or @Service annotation, only @Aspect annotation is needed.

// Note: Do not declare this as a bean, do not add @Component or @Service annotation
@Aspect
public class FacadeAroundAspect {

    // If it is SOFABoot, use @SofaReference; if it is SpringBoot, use @AutowiredFromBase.
    @SofaReference(uniqueId = "facadeAroundHandler")
    //@AutowiredFromBase
    private AnnotionService facadeAroundHandler;

    @Pointcut("@annotation(com.alipay.linglongmng.presentation.mvc.interceptor.FacadeAround)")
    public void facadeAroundPointcut() {
    }

    @Around("facadeAroundPointcut()")
    public Object doAround(ProceedingJoinPoint joinPoint) throws Throwable {
        return facadeAroundHandler.doAround(joinPoint);
    }
}

Step 4:

Use the @Configuration annotation to create a Configuration class, and declare the aspectj objects needed by the module as Spring Beans.
Note: This Configuration class needs to be visible to the module, and related Spring Jar dependencies need to be imported with provided.

@Configuration
public class MngAspectConfiguration {

    @Bean
    public FacadeAroundAspect facadeAroundAspect() {
        return new FacadeAroundAspect();
    }

    @Bean
    public EnvRouteAspect envRouteAspect() {
        return new EnvRouteAspect();
    }
}

Step 5:

Explicitly depend on the Configuration class MngAspectConfiguration created in step 4 in the module code.

@SpringBootApplication
@ImportResource("classpath*:META-INF/spring/*.xml")
@ImportAutoConfiguration(value = {MngAspectConfiguration.class})
public class ModuleBootstrapApplication {
    public static void main(String[] args) {
        SpringApplicationBuilder builder = new SpringApplicationBuilder(ModuleBootstrapApplication.class)
        	.web(WebApplicationType.NONE);
        builder.build().run(args);
    }
}


4.3.8 - 4.3.8 Thread Pool Usage

Koupleless Thread Pool Usage

Background

When multiple modules or a module and a base share the same thread pool, the Classloader used by the thread executing a task in the thread pool may differ from the Classloader that was used when the task was created. This can lead to a ClassNotFoundException when the thread pool executes the task.

As a result, when multiple modules or a module and a base share the same thread pool, in order to ensure consistency between the Classloader used during task execution and the Classloader used at the creation of the task, we need to make some modifications to the thread pool.

⚠️Note: There will be no such issue if each module uses its own thread pool.

There are 5 common ways to use thread pools in Java:

  1. Directly create thread tasks and submit them to a thread pool, such as: Runnable, Callable, ForkJoinTask
  2. Customize a ThreadPoolExecutor and submit tasks to it
  3. Use ThreadPoolExecutor or ScheduledThreadPoolExecutor from third-party libraries
  4. Create thread pools through Executors and submit tasks to ExecutorService, ScheduledExecutorService, ForkJoinPool
  5. For SpringBoot users, submit tasks to ThreadPoolTaskExecutor, SchedulerThreadPoolTaskExecutor

This article will introduce how each method is used on Koupleless.

How to Use

1. Directly create thread tasks and submit them to the thread pool

The original method:


threadPool.execute(new Runnable() {
    public void run() {
        // do something
    }
});

// Callable tasks go through submit(), not execute()
threadPool.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

If the threadPool remains unchanged, then it is necessary to wrap Runnable as KouplelessRunnable and Callable as KouplelessCallable, as follows:

// Runnable
// wrap with KouplelessRunnable.wrap:
threadPool.execute(KouplelessRunnable.wrap(new Runnable() {
    public void run() {
        // do something
    }
}));

// or create a KouplelessRunnable directly:
threadPool.execute(new KouplelessRunnable() {
    public void run() {
        // do something
    }
});

// Callable
// wrap with KouplelessCallable.wrap (Callable tasks go through submit()):
threadPool.submit(KouplelessCallable.wrap(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
}));

// or create a KouplelessCallable directly:
threadPool.submit(new KouplelessCallable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

2. Customize ThreadPoolExecutor

The original method:

ThreadPoolExecutor threadPool = new ThreadPoolExecutor(5, 5, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

threadPool.execute(new Runnable() {
    public void run() {
        // do something
    }
});

// Callable tasks go through submit(), not execute()
threadPool.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

To keep the Runnable and Callable code unchanged, there are two ways to modify the pool:

  1. Change threadPool to KouplelessThreadPoolExecutor.
  2. Or wrap the pool in a KouplelessExecutorService.

First, the first method: change threadPool to KouplelessThreadPoolExecutor, as follows:

// change ThreadPoolExecutor to KouplelessThreadPoolExecutor
ThreadPoolExecutor threadPool = new KouplelessThreadPoolExecutor(5, 5, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

threadPool.execute(new Runnable() {
    public void run() {
        // do something
    }
});

// Callable tasks go through submit(), not execute()
threadPool.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

Then, the second method: wrap the pool in a KouplelessExecutorService, as follows:

// wrap the pool in a KouplelessExecutorService
ExecutorService executor = new KouplelessExecutorService(new ThreadPoolExecutor(5, 5, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>()));

// use executor to execute tasks
executor.execute(new Runnable() {
    public void run() {
        // do something
    }
});

executor.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

3. Use ThreadPoolExecutor or ScheduledThreadPoolExecutor from third-party libraries

The original method:

ThreadPoolExecutorA executorService = new ThreadPoolExecutorA();

executorService.execute(new Runnable() {
    public void run() {
        // do something
    }
});

executorService.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

ScheduledThreadPoolExecutorA scheduledExecutorService = new ScheduledThreadPoolExecutorA();

scheduledExecutorService.execute(new Runnable() {
    public void run() {
        // do something
    }
});

scheduledExecutorService.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

To keep the Runnable and Callable code unchanged, use KouplelessExecutorService and KouplelessScheduledExecutorService, as follows:

// use KouplelessExecutorService
ExecutorService executor = new KouplelessExecutorService(new ThreadPoolExecutorA());

// use executor to execute tasks
executor.execute(new Runnable() {
    public void run() {
        // do something
    }
});

executor.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

// use KouplelessScheduledExecutorService
ScheduledExecutorService scheduledExecutor = new KouplelessScheduledExecutorService(new ScheduledThreadPoolExecutorA());

// use scheduledExecutor to execute tasks
scheduledExecutor.execute(new Runnable() {
    public void run() {
        // do something
    }
});

scheduledExecutor.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

4. Create thread pools through Executors

The original method:

ExecutorService executorService = Executors.newFixedThreadPool(6);

executorService.execute(new Runnable() {
    public void run() {
        // do something
    }
});

executorService.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

ScheduledExecutorService scheduledExecutorService = Executors.newSingleThreadScheduledExecutor();

scheduledExecutorService.execute(new Runnable() {
    public void run() {
        // do something
    }
});

scheduledExecutorService.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

To keep the Runnable and Callable code unchanged, use KouplelessExecutorService and KouplelessScheduledExecutorService, as follows:

// use KouplelessExecutorService
ExecutorService executor = new KouplelessExecutorService(Executors.newFixedThreadPool(6));

// use executor to execute tasks
executor.execute(new Runnable() {
    public void run() {
        // do something
    }
});

executor.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

// use KouplelessScheduledExecutorService
ScheduledExecutorService scheduledExecutor = new KouplelessScheduledExecutorService(Executors.newSingleThreadScheduledExecutor());

// use scheduledExecutor to execute tasks
scheduledExecutor.execute(new Runnable() {
    public void run() {
        // do something
    }
});

scheduledExecutor.submit(new Callable<String>() {
    public String call() {
        // do something
        return "mock";
    }
});

5. For SpringBoot users, submit tasks to ThreadPoolTaskExecutor, SchedulerThreadPoolTaskExecutor

Since Koupleless has already adapted ThreadPoolTaskExecutor and SchedulerThreadPoolTaskExecutor for SpringBoot (2.3.0-2.7.x), they can be used directly.

@Autowired
private ThreadPoolTaskExecutor threadPoolTaskExecutor;

@Autowired
private SchedulerThreadPoolTaskExecutor schedulerThreadPoolTaskExecutor;

threadPoolTaskExecutor.execute(new Runnable(){
    public void run() {
        //do something
    }
});

schedulerThreadPoolTaskExecutor.execute(new Runnable(){
    public void run() {
        //do something
    }
});

4.3.9 - 4.3.9 Multiple Configurations for Modules

Why Use Multiple Configurations

In different scenarios, a module’s code may be deployed to different applications but require different configurations.

How to Use

Step 1: When packaging a module’s code for different scenarios, configure different bizName, such as biz1, biz2.

<plugin>
    <groupId>com.alipay.sofa</groupId>
    <artifactId>sofa-ark-maven-plugin</artifactId>
    <version>${sofa.ark.version}</version>
    <executions>
        <execution>
            <id>default-cli</id>
            <goals>
                <goal>repackage</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <!-- Configure different bizName for different scenarios, such as biz1, biz2 -->
        <bizName>biz1</bizName>
        <!-- ... Other properties -->
    </configuration>
</plugin>

Step 2: In the resources directory of the module, add the following files, where config, biz1, and biz2 are folders:

  • config/biz1/application.properties
  • config/biz2/application.properties

Step 3: Package two different ark-biz files with different bizName values (biz1, biz2):

  • biz1-0.0.1-SNAPSHOT-ark-biz.jar
  • biz2-0.0.1-SNAPSHOT-ark-biz.jar

Step 4: Install the corresponding ark-biz module for each scenario. When the module starts, it reads the configuration file matching its bizName:

  • config/biz1/application.properties
  • config/biz2/application.properties

Principle

When the module starts, it reads the following files as property sources, based on the module name and the spring.profiles.active field:

  • config/${bizName}/application-${profile}.properties
  • config/${bizName}/application.properties

If spring.profiles.active is not set, only the following file is read as the property source:

  • config/${bizName}/application.properties
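A possible module resources layout for the example above (illustrative):

src/main/resources
└── config
    ├── biz1
    │   └── application.properties
    └── biz2
        └── application.properties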

4.3.10 - 4.3.10 Multimodule Integration Testing


Why Do We Need an Integration Testing Framework?

Without an integration testing framework, the steps for verifying koupleless module logic are cumbersome:

  1. Start a base process.
  2. Build the module JAR package.
  3. Install the module.
  4. Call the module's HTTP interface (or other methods) to validate the logic.

If the logic does not meet expectations, developers need to repeat the above steps, making such a validation process highly inefficient. To improve validation efficiency, we provide the koupleless integration testing framework, which allows developers to start both the base and the modules within a single process.

Integration Testing Framework

Principle

The integration testing framework simulates a multi-module deployment scenario by enhancing the class loading behavior of the base and the modules. The specific source code can be referenced in koupleless-test-suite.

How to Use

Take webflux-samples as an example. In this project, we create a new Maven module for integration testing:

First, this module needs to add the integration testing framework dependency:

<dependency>
    <groupId>com.alipay.sofa.koupleless</groupId>
    <artifactId>koupleless-test-suite</artifactId>
    <version>${koupleless.runtime.version}</version>
</dependency>

Next, we need to add the dependencies for the base and the module:

<!-- Base Dependency -->
<dependency>
    <groupId>com.alipay.sofa.web.webflux</groupId>
    <artifactId>demowebflux</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <classifier>lib</classifier>
</dependency>
<!-- Module Dependency -->
<dependency>
    <groupId>com.alipay.sofa.web.webflux</groupId>
    <artifactId>bizwebflux</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</dependency>

Then, we need to write the integration test case:

public static void setUp() {
    TestMultiSpringApplication multiApp = new TestMultiSpringApplication(
            MultiSpringTestConfig
                    .builder()
                    .baseConfig(
                            BaseSpringTestConfig
                                    .builder()
                                    // Pass in the base application's startup class.
                                    .mainClass(DemoWebfluxApplication.class)
                                    .build()
                    )
                    .bizConfigs(
                            Lists.newArrayList(
                                    BizSpringTestConfig
                                            .builder()
                                            .bizName("biz")
                                            // Pass in the module's startup class.
                                            .mainClass(BizWebfluxApplication.class)
                                            .build()))
                    .build());
    multiApp.run();
}

Finally, by starting the tests in IDEA, we will find that both the base and module’s Spring containers are up and running. This allows us to validate the multi-module logic within a single process.

Thus, we have completed an integration test case.

Summary

Through the above experiment, we have validated that the koupleless integration testing framework can quickly verify multi-module logic, improving developers’ validation efficiency.

4.3.11 - 4.3.11 Static Merge Deployment

Static Merge Deployment of Koupleless Module

Introduction

SOFAArk provides the capability of static merge deployment: when the base package (foundation application) starts, it can launch already-built Biz packages (module applications). By default, modules can be obtained from a local directory, a local file URL, or a remote URL.

In addition, SOFAArk also provides an extension interface for static merge deployment, where developers can customize the way of obtaining the Biz package (module application).

Usage

Step 1: Package Module Application into Ark Biz

If developers wish for their application’s Ark Biz package to be used as a Jar package dependency by other applications, running on the same SOFAArk container, they need to package and publish the Ark Biz package. For details, see Ark Biz Introduction. The Ark Biz package is generated using the Maven plugin sofa-ark-maven-plugin.

<build>
    <plugins>
        <plugin>
            <groupId>com.alipay.sofa</groupId>
            <artifactId>sofa-ark-maven-plugin</artifactId>
            <version>${sofa.ark.version}</version>
            <executions>
                <execution>
                    <id>default-cli</id>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <!-- Default is 100; larger values indicate later installation. Requires Koupleless runtime version >= 1.2.2 -->
                <priority>200</priority>
            </configuration>
        </plugin>
    </plugins>
</build>

Step 2: Base Fetching Ark Biz for Merge Deployment

Requirements:

  • JDK8
    • sofa.ark.version >= 2.2.12
    • koupleless.runtime.version >= 1.2.3
  • JDK17/JDK21
    • sofa.ark.version >= 3.1.5
    • koupleless.runtime.version >= 2.1.4

Method 1: Using Official Default Retrieval Method, Supporting Local Directory, Local File URL, Remote URL

1. Base Configuration of Local Directory, Local File URL, Remote URL

Developers need to specify the Ark Biz package that needs to be merged and deployed in the base’s ark configuration file (conf/ark/bootstrap.properties or conf/ark/bootstrap.yml), supporting:

  • Local directory
  • Local file URL (file:\\ on Windows, file:// on Linux)
  • Remote URL (supporting http:// and https://)

Local file URLs and remote URLs are configured in the integrateBizURLs field; local directories are configured in the integrateLocalDirs field. The configuration is as follows:
integrateBizURLs=file://${xxx}/koupleless_samples/springboot-samples/service/biz1/biz1-bootstrap/target/biz1-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar,\
  file://${xxx}/koupleless_samples/springboot-samples/service/biz2/biz2-bootstrap/target/biz2-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar,\
  https://oss.xxxxx/biz2-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar
integrateLocalDirs=/home/${xxx}/sofa-ark/biz,\
  /home/${xxx}/sofa-ark/biz2

or

integrateBizURLs:
  - file://${xxx}/springboot-samples/service/biz2/biz2-bootstrap/target/biz2-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar
  - file://${xxx}/koupleless_samples/springboot-samples/service/biz2/biz2-bootstrap/target/biz2-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar
integrateLocalDirs:
  - /home/${xxx}/sofa-ark/biz
  - /home/${xxx}/sofa-ark/biz2

2. Base Configuration of the Packaging Plugin Goal integrate-biz

Add the integrate-biz goal to the koupleless-base-build-plugin in the base's bootstrap pom, as shown below:

<plugin>
    <groupId>com.alipay.sofa.koupleless</groupId>
    <artifactId>koupleless-base-build-plugin</artifactId>
    <version>${koupleless.runtime.version}</version>
    <executions>
        <execution>
            <goals>
                <goal>add-patch</goal>
                <!-- Used for static merge deployment -->
                <goal>integrate-biz</goal>
            </goals>
        </execution>
    </executions>
</plugin>

After the build, if the packaged jar file is manually unpacked, the specified module ark-biz package can be seen in classPath/SOFA-ARK/biz.
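A quick way to verify this without unpacking (the base jar name is illustrative):

# list the module ark-biz packages embedded in the packaged base jar
jar tf target/base-app-0.0.1-SNAPSHOT.jar | grep 'SOFA-ARK/biz'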

Method 2: Using Custom Retrieval Method

1. Ark Extension Mechanism Principle

Refer to Ark Extension Mechanism Introduction

2. Implement AddBizToStaticDeployHook Interface

In the base/third-party package, implement the AddBizToStaticDeployHook interface, using AddBizInResourcesHook as an example, as shown below:

@Extension("add-biz-in-resources-to-deploy")
public class AddBizInResourcesHook implements AddBizToStaticDeployHook {
    @Override
    public List<BizArchive> getStaticBizToAdd() throws Exception {
        List<BizArchive> archives = new ArrayList<>();
        // ...
        archives.addAll(getBizArchiveFromResources());
        return archives;
    }
    protected List<BizArchive> getBizArchiveFromResources() throws Exception {
        // ... Read Ark Biz package in resources
        return archives;
    }
}

3. Configure SPI

Add the /META-INF/services/sofa-ark/ directory in the resources directory, then add a file named com.alipay.sofa.ark.spi.service.biz.AddBizToStaticDeployHook in /META-INF/services/sofa-ark/ directory, where the file contains the fully qualified name of the hook class:

com.alipay.sofa.ark.support.common.AddBizInResourcesHook

Rebuild the base.

Step 3: Start the Base

Add the JVM parameter configuration: -Dsofa.ark.embed.static.biz.enable=true
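For example (the executable jar name is illustrative):

java -Dsofa.ark.embed.static.biz.enable=true -jar base-app-0.0.1-SNAPSHOT.jar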

4.3.12 - 4.3.12 Officially Supported Middleware Clients in Modules

Koupleless Module Officially Supported Middleware Clients

Compatibility Relationships Across Different Versions of the Framework

Users can choose to import the Koupleless version as needed, based on actual JDK and SpringBoot versions.

| JDK     | SpringBoot      | SOFA-ARK                  | Koupleless                |
|---------|-----------------|---------------------------|---------------------------|
| 1.8     | 2.x             | 2.x.x                     | 1.x.x                     |
| 17      | 3.0.x, 3.1.x    | 3.0.7 (no longer updated) | 2.0.4 (no longer updated) |
| 17 & 21 | 3.2.x and above | 3.1.x                     | 2.1.x                     |

For Koupleless SDK latest versions, please refer to https://github.com/koupleless/runtime/releases

In Koupleless modules, the official support currently includes and is compatible with common middleware clients.
Note: "already supported" means the relevant client dependency must be imported in the base POM (the SpringBoot Starter method is strongly recommended) and also imported in the module POM with <scope>provided</scope>, so that loading of the dependency is delegated to the base.
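For example, a module that delegates a client such as kafka-clients to the base would declare it like this (the artifact is chosen for illustration; any client already present in the base works the same way):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <!-- loading is delegated to the base at runtime -->
    <scope>provided</scope>
</dependency>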


Compatibility Report for Various Components

| Middleware Client | Version | Remarks |
|---|---|---|
| JDK | 8.x, 17.x | Already supported |
| SpringBoot | >= 2.3.0 or 3.x | Already supported. A complete base and module usage example for JDK17 + SpringBoot 3.x can be seen here |
| SpringBoot Cloud | >= 2.7.x | Already supported. Complete usage examples for base and modules can be seen here |
| SOFABoot | >= 3.9.0 or 4.x | Already supported |
| JMX | N/A | Already supported. Requires adding the -Dspring.jmx.default-domain=${spring.application.name} startup parameter to the base |
| log4j2 | Any | Already supported. Import log4j2 in both the base and the module, and additionally import the dependency com.alipay.koupleless:koupleless-adapter-log4j2 (version: latest Koupleless version; the module needs <scope>provided</scope>). Complete usage examples for base and modules can be seen here |
| slf4j-api | 1.x and >= 1.7 | Already supported |
| tomcat | 7.x, 8.x, 9.x, 10.x and above | Already supported. Complete usage examples for base and modules can be seen here |
| netty | 4.x | Already supported. Complete usage examples for base and modules can be seen here |
| sofarpc | >= 5.8.6 | Already supported |
| dubbo | 3.x | Already supported. Complete usage examples and considerations for base and modules can be seen here |
| grpc | 1.x and >= 1.42 | Already supported. Complete usage examples and considerations for base and modules can be seen here |
| protobuf-java | 3.x and >= 3.17 | Already supported. Complete usage examples and considerations for base and modules can be seen here |
| apollo | 1.x and >= 1.6.0 | Already supported. Complete usage examples and considerations for base and modules can be seen here |
| nacos | 2.1.x | Already supported. Complete usage examples and considerations for base and modules can be seen here |
| kafka-client | >= 2.8.0 or >= 3.4.0 | Already supported. Complete usage examples for base and modules can be seen here |
| rocketmq | 4.x and >= 4.3.0 | Already supported. Complete usage examples for base and modules can be seen here |
| jedis | 3.x | Already supported. Complete usage examples for base and modules can be seen here |
| xxl-job | 2.x and >= 2.1.0 | Already supported. Needs to be declared as a compile dependency for use in modules |
| mybatis | >= 2.2.2 or >= 3.5.12 | Already supported. Complete usage examples for base and modules can be seen here |
| druid | 1.x | Already supported. Complete usage examples for base and modules can be seen here |
| mysql-connector-java | 8.x | Already supported. Complete usage examples for base and modules can be seen here |
| postgresql | 42.x and >= 42.3.8 | Already supported |
| mongodb | 4.6.1 | Already supported. Complete usage examples for base and modules can be seen here |
| hibernate | 5.x and >= 5.6.15 | Already supported |
| j2cache | Any | Already supported. Needs to be declared as a compile dependency for independent use in modules |
| opentracing | 0.x and >= 0.32.0 | Already supported |
| elasticsearch | 7.x and >= 7.6.2 | Already supported |
| jaspyt | 1.x and >= 1.9.3 | Already supported |
| OKHttp | - | Already supported. Needs to be placed in the base; please use the module automatic slimming capability |
| io.kubernetes:client | 10.x and >= 10.0.0 | Already supported |
| net.java.dev.jna | 5.x and >= 5.12.1 | Already supported |
| prometheus | - | Support to be verified |
| skywalking | - | The official agent does not support multiple service_names for one process; tracing isolation can only be achieved by having each module print logs to separate directories. Please refer to the logging samples |

Compatibility Relationships among Framework Versions

Note: Users can choose Koupleless versions according to their actual JDK and SpringBoot versions.

| JDK | SpringBoot      | SOFAARK | Koupleless |
|-----|-----------------|---------|------------|
| 1.8 | 2.x             | 2.2.10  | 1.x.x      |
| 17  | 3.0.x, 3.1.x    | 3.0.x   | 2.0.x      |
| 17  | 3.2.x and above | 3.1.x   | 2.1.x      |

4.3.13 - 4.3.13 Koupleless Configuration

Koupleless configuration

Packaging Phase

Base Packaging Plugin Configuration

Plugin Parameter Configuration

The complete koupleless-base-build-plugin plugin configuration template is as follows:

<plugin>
  <groupId>com.alipay.sofa.koupleless</groupId>
  <artifactId>koupleless-base-build-plugin</artifactId>
  <version>${koupleless.runtime.version}</version>
  <executions>
    <execution>
      <goals>
        <goal>add-patch</goal>
        <!-- Used for static merger deployment-->
        <goal>integrate-biz</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
      <!--Base packaging directory, default is the project build directory-->
      <outputDirectory>./target</outputDirectory>
      
      <!--The groupId of the starter to be packaged, which defaults to the groupId of the project-->
      <dependencyGroupId>${groupId}</dependencyGroupId>
      
      <!--ArtifactId of the starter to be packaged-->
      <dependencyArtifactId>${baseAppName}-dependencies-starter</dependencyArtifactId>
      
      <!--Version number of the starter to be packaged-->
      <dependencyVersion>0.0.1-SNAPSHOT</dependencyVersion>
      
      <!--For debugging; keep this false to retain the intermediate products of the packaged starter-->
      <cleanAfterPackageDependencies>false</cleanAfterPackageDependencies>
  </configuration>
</plugin>

Static Integration Deployment Configuration

Developers need to specify the Ark Biz package that needs to be integrated and deployed in the ark configuration file of the base (conf/ark/bootstrap.properties or conf/ark/bootstrap.yml), with support for:

  • Local directory
  • Local file URL (for Windows the file path is file:\\, for Linux it is file://)
  • Remote URL (http:// and https:// are supported)

The configuration is as follows:
integrateBizURLs=file://${xxx}/koupleless_samples/springboot-samples/service/biz1/biz1-bootstrap/target/biz1-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar,\
  file://${xxx}/koupleless_samples/springboot-samples/service/biz2/biz2-bootstrap/target/biz2-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar,\
  https://oss.xxxxx/biz2-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar
integrateLocalDirs=/home/${xxx}/sofa-ark/biz,\
  /home/${xxx}/sofa-ark/biz2

or

integrateBizURLs:
  - file://${xxx}/springboot-samples/service/biz2/biz2-bootstrap/target/biz2-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar
  - file://${xxx}/koupleless_samples/springboot-samples/service/biz2/biz2-bootstrap/target/biz2-bootstrap-0.0.1-SNAPSHOT-ark-biz.jar
integrateLocalDirs:
  - /home/${xxx}/sofa-ark/biz
  - /home/${xxx}/sofa-ark/biz2
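
With the integrate-biz goal bound as shown in the plugin configuration above, a plain Maven build of the base should produce the merged artifact; a minimal sketch (directory name and flags are illustrative):

# Build the base executable jar; the integrate-biz goal bundles the Ark Biz
# packages listed in conf/ark/bootstrap.properties or bootstrap.yml (sketch)
cd my-base
mvn clean package -DskipTests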

Module Packaging Plugin Configuration

Plugin Parameter Configuration

The complete sofa-ark-maven-plugin plugin configuration template is as follows:

<plugins>
    <plugin>
        <groupId>com.alipay.sofa</groupId>
        <artifactId>sofa-ark-maven-plugin</artifactId>
        <version>${sofa.ark.version}</version>
        <executions>
            <execution>
                <id>default-cli</id>
                <goals>
                    <goal>repackage</goal>
                </goals>
                <configuration>
                    <!--Ark package and ark biz packaging directory, default is the project build directory-->
                    <outputDirectory>./target</outputDirectory>
                    <!--Set the root directory of the application for reading the ${base.dir}/conf/ark/bootstrap.application configuration file, default to ${project.basedir}-->
                    <baseDir>./</baseDir>
                    <!--Generated ark package file name, default is ${artifactId}-->
                    <finalName>demo-ark</finalName>
                    <!--Whether to skip the goal:repackage execution, default to false-->
                    <skip>false</skip>
                    <!--Whether to package, install and publish ark biz, details please refer to the Ark Biz document, default to false-->
                    <attach>true</attach>
                    <!--Set the classifier of the ark package, default to empty-->
                    <arkClassifier>ark</arkClassifier>
                    <!--Set the classifier of the ark biz, default to ark-biz-->
                    <bizClassifier>ark-biz</bizClassifier>
                    <!--Set the biz name of the ark biz, default to ${artifactId}-->
                    <bizName>demo-ark</bizName>
                    <!--Set the biz version of the ark biz, default to ${project.version}-->
                    <bizVersion>0.0.1</bizVersion>
                    <!--Set the startup priority of the ark biz; a smaller value means higher priority, default to 100-->
                    <priority>100</priority>
                    <!--Set the startup entry of the ark biz, it will automatically search for the entry class that contains the main method and has the org.springframework.boot.autoconfigure.SpringBootApplication annotation-->
                    <mainClass>com.alipay.sofa.xx.xx.MainEntry</mainClass>
                    <!--Set whether to package dependencies with scope=provided, default to false-->
                    <packageProvided>false</packageProvided>
                    <!--Set whether to generate the Biz package, default to true-->
                    <keepArkBizJar>true</keepArkBizJar>
                    <!--For web applications, set the context path, default to /, each module should configure its own webContextPath, e.g.: biz1-->
                    <webContextPath>/</webContextPath>
                    <!--When packaging ark biz, exclude specified package dependencies; format: ${groupId:artifactId} or ${groupId:artifactId:classifier}-->
                    <excludes>
                        <exclude>org.apache.commons:commons-lang3</exclude>
                    </excludes>
                    <!--When packaging ark biz, exclude dependencies with the specified groupId-->
                    <excludeGroupIds>
                        <excludeGroupId>org.springframework</excludeGroupId>
                    </excludeGroupIds>
                    <!--When packaging ark biz, exclude dependencies with the specified artifactId-->
                    <excludeArtifactIds>
                        <excludeArtifactId>sofa-ark-spi</excludeArtifactId>
                    </excludeArtifactIds>
                    <!--When packaging ark biz, configure classes that should bypass the ark plugin class index; by default, ark biz prioritizes the index of all classes exported by ark plugins, and a class configured here is loaded locally by the biz instead of being delegated to the ark plugin-->
                    <denyImportClasses>
                        <class>com.alipay.sofa.SampleClass1</class>
                        <class>com.alipay.sofa.SampleClass2</class>
                    </denyImportClasses>
                    <!--Corresponding to the denyImportClasses configuration, package level can be configured-->
                    <denyImportPackages>
                        <package>com.alipay.sofa</package>
                        <package>org.springframework.*</package>
                    </denyImportPackages>
                    <!--When packaging ark biz, configure resources not covered by the ark plugin index; by default, ark biz will prioritize indexing all exported resources of the ark plugin, adding that configuration means that ark biz will only search for the resources internally without searching from the ark plugin-->
                    <denyImportResources>
                        <resource>META-INF/spring/test1.xml</resource>
                        <resource>META-INF/spring/test2.xml</resource>
                    </denyImportResources>
                  
                     <!--When enabled, the ark biz only delegates to the base dependencies it has declared in its own pom, default to false-->
                    <declaredMode>true</declaredMode>
                    <!--When packaging ark biz, only package dependencies that the base does not have, or dependencies of modules that are different from the base. This parameter specifies the "dependency management" identifier of the base, and is required to be a parent of module pom with the format ${groupId}:${artifactId}:${version}-->
                    <baseDependencyParentIdentity>${groupId}:${artifactId}:${version}</baseDependencyParentIdentity>
                </configuration>
            </execution>
        </executions>
    </plugin>
</plugins>
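
As a concrete illustration of baseDependencyParentIdentity: the module pom declares the base's dependency starter as its parent, and the identity string repeats those coordinates, so only dependencies that differ from the base end up in the ark biz. All names below are hypothetical:

<!-- Module pom (sketch; coordinates are placeholders) -->
<parent>
    <groupId>com.example</groupId>
    <artifactId>my-base-dependencies-starter</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</parent>

<!-- Matching sofa-ark-maven-plugin configuration -->
<baseDependencyParentIdentity>com.example:my-base-dependencies-starter:0.0.1-SNAPSHOT</baseDependencyParentIdentity>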

Module Slimming Configuration

SOFAArk module slimming reads configuration from two places:

  • module project root directory/conf/ark/bootstrap.properties, e.g.: my-module/conf/ark/bootstrap.properties
  • module project root directory/conf/ark/bootstrap.yml, e.g.: my-module/conf/ark/bootstrap.yml

bootstrap.properties: In the module project root directory/conf/ark/bootstrap.properties, configure the commonly used framework and middleware packages that should be sunk down to the base, as follows:
# excludes config ${groupId}:{artifactId}:{version}, split by ','
excludes=org.apache.commons:commons-lang3,commons-beanutils:commons-beanutils
# excludeGroupIds config ${groupId}, split by ','
excludeGroupIds=org.springframework
# excludeArtifactIds config ${artifactId}, split by ','
excludeArtifactIds=sofa-ark-spi

bootstrap.yml: In the module project root directory/conf/ark/bootstrap.yml, configure the commonly used framework and middleware packages that should be sunk down to the base, as follows:

# Configure ${groupId}:{artifactId}:{version} in excludes; list each dependency as a separate "-" item
# Configure ${groupId} in excludeGroupIds; list each entry as a separate "-" item
# Configure ${artifactId} in excludeArtifactIds; list each entry as a separate "-" item
excludes:
  - org.apache.commons:commons-lang3
  - commons-beanutils:commons-beanutils
excludeGroupIds:
  - org.springframework
excludeArtifactIds:
  - sofa-ark-spi

Development Phase

Arklet Configuration

Port Configuration

When starting the base, configure the port via a JVM parameter; the default is 1238:

-Dkoupleless.arklet.http.port=XXXX
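
For example, a sketch of overriding the port when launching the base (the jar name is illustrative):

java -Dkoupleless.arklet.http.port=1239 -jar my-base-0.0.1-SNAPSHOT.jar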

Module Runtime Configuration

Configuration of Health Check

Configuration in the application.properties of the base:

# Or do not configure management.endpoints.web.exposure.include
management.endpoints.web.exposure.include=health
# If all information needs to be displayed, configure the following content
management.endpoint.health.show-components=always
management.endpoint.health.show-details=always
# Do not ignore module startup status
koupleless.healthcheck.base.readiness.withAllBizReadiness=true
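
Once exposed, readiness can be probed over HTTP; a sketch, assuming the base listens on port 8080:

curl http://localhost:8080/actuator/health
# returns "status": "UP" only when the base (and, with withAllBizReadiness=true, every module) is ready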

Web Gateway Configuration

When a traditional application is split into modules, each module gets its own webContextPath, which would normally force upstream callers to change their request paths. To avoid this, you can configure Web Gateway forwarding rules in application.properties or application.yaml so that upstream callers do not need to change anything. Three matching strategies are supported:

  • Domain matching: requests matching HostA are forwarded to ModuleA
  • Path matching: requests matching PathA are forwarded to a specific PathB of ModuleA
  • Domain and path matching: requests matching both HostA and PathA are forwarded to a specific PathB of ModuleA

application.yaml configuration example:
koupleless:
  web:
    gateway:
      forwards:
        # host in [a.xxx,b.xxx,c.xxx] path /${anyPath} --forward to--> biz1/${anyPath}
        - contextPath: biz1
          hosts:
            - a
            - b
            - c
        # /idx2/** -> /biz2/**, /t2/** -> /biz2/timestamp/**
        - contextPath: biz2
          paths:
            - from: /idx2
              to: /
            - from: /t2
              to: /timestamp
        # /idx1/** -> /biz1/**, /t1/** -> /biz1/timestamp/**
        - contextPath: biz1
          paths:
            - from: /idx1
              to: /
            - from: /t1
              to: /timestamp

application.properties Configuration example:

# host in [a.xxx,b.xxx,c.xxx] path /${anyPath} --forward to--> biz1/${anyPath}
koupleless.web.gateway.forwards[0].contextPath=biz1
koupleless.web.gateway.forwards[0].hosts[0]=a
koupleless.web.gateway.forwards[0].hosts[1]=b
koupleless.web.gateway.forwards[0].hosts[2]=c
# /idx2/** -> /biz2/**, /t2/** -> /biz2/timestamp/**
koupleless.web.gateway.forwards[1].contextPath=biz2
koupleless.web.gateway.forwards[1].paths[0].from=/idx2
koupleless.web.gateway.forwards[1].paths[0].to=/
koupleless.web.gateway.forwards[1].paths[1].from=/t2
koupleless.web.gateway.forwards[1].paths[1].to=/timestamp
# /idx1/** -> /biz1/**, /t1/** -> /biz1/timestamp/**
koupleless.web.gateway.forwards[2].contextPath=biz1
koupleless.web.gateway.forwards[2].paths[0].from=/idx1
koupleless.web.gateway.forwards[2].paths[0].to=/
koupleless.web.gateway.forwards[2].paths[1].from=/t1
koupleless.web.gateway.forwards[2].paths[1].to=/timestamp
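
A sketch of how these rules behave at request time (host name and port are illustrative):

# Domain matching: Host a.xxx lands in module biz1
curl -H "Host: a.xxx" http://base-host:8080/index      # served as /biz1/index
# Path matching: /t2/** is rewritten to /biz2/timestamp/**
curl http://base-host:8080/t2/now                      # served as /biz2/timestamp/now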

4.3.14 - 4.3.14 SOFAArk Key User Documentation

Module Lifecycle

Ark Event Mechanism

Ark Logging



4.4 - 5. Module Controller V2 Operation and Maintenance

Operation and Maintenance of Modules under the Koupleless Module Controller V2 Architecture

4.4.1 - 5.1 Module Release

Koupleless Module Online and Offline Procedures

Note: The current ModuleController v2 has only been tested on Kubernetes (K8S) version 1.24, with no testing on other versions. ModuleController V2 relies on certain Kubernetes (K8S) features; thus, the K8S version must not be lower than V1.10.

Module Release

ModuleController V2 supports deploying modules using any Pod deployment method, including but not limited to bare Pod deployment, Deployments, DaemonSets, and StatefulSets. Below, we demonstrate the release process using Deployment as an example; configurations for other methods can refer to the template configuration in Deployment:

kubectl apply -f samples/module-deployment.yaml --namespace yournamespace

The complete content is as follows:

apiVersion: apps/v1  # Specifies the API version, which must be listed in `kubectl api-versions`
kind: Deployment  # Specifies the role/type of resource to create
metadata:  # Metadata/attributes of the resource
  name: test-module-deployment  # Name of the resource, must be unique within the same namespace
  namespace: default # Namespace where it will be deployed
spec:  # Specification field of the resource
  replicas: 1
  revisionHistoryLimit: 3 # Retains historical versions
  selector: # Selector
    matchLabels: # Matching labels
      app: test-module-deployment
  strategy: # Strategy
    rollingUpdate: # Rolling update
      maxSurge: 30% # Maximum additional replicas that can exist, can be a percentage or an integer
      maxUnavailable: 30% # Maximum number of Pods that can become unavailable during the update, can be a percentage or an integer
    type: RollingUpdate # Rolling update strategy
  template: # Template
    metadata: # Metadata/attributes of the resource
      labels: # Sets resource labels
        module-controller.koupleless.io/component: module # Required, declares Pod type for management by module controller
        # Unique ID for Deployment
        app: test-module-deployment-non-peer
    spec: # Specification field of the resource
      containers:
        - name: biz1 # Required, declares the module's bizName, must match the artifactId declared in pom.xml
          image: https://serverless-opensource.oss-cn-shanghai.aliyuncs.com/module-packages/stable/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar
          env:
            - name: BIZ_VERSION # Required, declares module's biz_version, value must match the version declared in pom.xml
              value: 0.0.1-SNAPSHOT
      affinity:
        nodeAffinity: # Required, declares the base selector to ensure modules are scheduled onto designated bases
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: base.koupleless.io/version
                    operator: In
                    values:
                      - 1.1.1 # Specified base version, mandatory, at least one required
                  - key: base.koupleless.io/name
                    operator: In
                    values:
                      - base  # Specified base bizName, mandatory, at least one required
      tolerations: # Required, allows pods to be scheduled onto base nodes
        - key: "schedule.koupleless.io/virtual-node"
          operator: "Equal"
          value: "True"
          effect: "NoExecute"

All configurations align with a regular Deployment, except for mandatory fields; additional Deployment configurations can be added for custom functionality.

Subsequent module updates can be achieved by updating the module Deployment’s Container Image and BIZ_VERSION, utilizing the Deployment’s RollingUpdate for phased updates.
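
For example, a hypothetical one-step patch that bumps both the module jar and BIZ_VERSION and lets the Deployment's RollingUpdate take over (the URL and versions are placeholders; env index 0 assumes BIZ_VERSION is the first env entry, as in the sample above):

kubectl patch deployment test-module-deployment -n default --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/image",
   "value": "https://example.com/module-packages/biz1-web-single-host-0.0.2-SNAPSHOT-ark-biz.jar"},
  {"op": "replace", "path": "/spec/template/spec/containers/0/env/0/value",
   "value": "0.0.2-SNAPSHOT"}
]'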

The Module Controller ensures lossless module traffic during rolling updates by controlling the Pod status update sequence on the same base. The process is as follows:

  1. After updating the Deployment, new version module Pods are created based on the update strategy.
  2. The K8S Scheduler schedules these Pods to the VNode, where the old module version is still installed.
  3. The Module Controller detects the successful scheduling of Pods and initiates the installation of the new module version.
  4. Once the installation is complete, the Module Controller checks the status of all modules on the current base, sorts the associated Pods by creation time, and updates their statuses in sequence. This causes the Pods corresponding to the old module version to become Not Ready first, followed by the new version Pods becoming Ready.
  5. The Deployment controller, upon detecting that the new Pods are Ready, begins cleaning up old version Pods. It prioritizes deleting Pods that are Not Ready. At this point, the old version Pods on the same base are already Not Ready and are deleted, preventing Ready state old version Pods on other bases from being deleted first.

Throughout this process, there is no instance where a base lacks a module, ensuring lossless traffic during the module update.

Checking Module Status

This requirement can be met by examining Pods with nodeName corresponding to the base’s node. First, understand the mapping between base services and nodes.

In the design of Module Controller V2, each base generates a globally unique UUID at startup as the identifier for the base service. The corresponding node’s Name includes this ID.

Additionally, the IP of the base service corresponds one-to-one with the node’s IP, allowing selection of the corresponding base Node via IP.

Therefore, you can use the following command to view all Pods (modules) installed on a specific base and their statuses:

kubectl get pod -n <namespace> --field-selector status.podIP=<baseIP>

Or

kubectl get pod -n <namespace> --field-selector spec.nodeName=virtual-node-<baseUUID>

Module Offline

Removing the module’s Pod or other controlling resources in the K8S cluster completes the module offline process. For instance, in a Deployment scenario, you can directly delete the corresponding Deployment to offline the module:

kubectl delete deployment yourmoduledeployment --namespace yournamespace

Replace yourmoduledeployment with your ModuleDeployment name and yournamespace with your namespace.

For customizing module release and operation strategies (such as grouping, Beta testing, pausing, etc.), refer to Module Operation and Scheduling Strategies.

The example above uses kubectl; deleting the Deployment by calling the K8S API Server directly also takes the module group offline.
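
A sketch of that direct API call (API server address and bearer token are placeholders):

curl -X DELETE \
  "https://<apiserver>/apis/apps/v1/namespaces/yournamespace/deployments/yourmoduledeployment" \
  -H "Authorization: Bearer $TOKEN"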

Module Scaling

Since ModuleController V2 fully leverages K8S’s Pod orchestration scheme, scaling only occurs on ReplicaSets, Deployments, StatefulSets, etc. Scaling can be implemented according to the respective scaling methods; below, we use Deployment as an example:

kubectl scale deployments/yourdeploymentname --namespace=yournamespace --replicas=3

Replace yourdeploymentname with your Deployment name, yournamespace with your namespace, and set the replicas parameter to the desired scaled quantity.

Scaling strategies can also be implemented through API calls.

Module Replacement

In ModuleController v2, modules are tightly bound to Containers. To replace a module, you need to execute an update logic, updating the module’s Image address on the Pod where the module resides.

The specific replacement method varies slightly depending on the module deployment method; for instance, directly updating Pod information replaces the module in-place, while Deployment executes the configured update strategy (e.g., rolling update, creating new version Pods before deleting old ones). DaemonSet also executes the configured update strategy but with a different logic – deleting before creating, which might cause traffic loss.

Module Rollback

Being compatible with native Deployments, rollback can be achieved using Deployment’s rollback method.

To view deployment history:

kubectl rollout history deployment yourdeploymentname

To rollback to a specific version:

kubectl rollout undo deployment yourdeploymentname --to-revision=<TARGET_REVISION>

Other Operational Issues

Module Traffic Service Implementation

A native Service can be created for the module, which can provide services only when the base and ModuleController are deployed within the same VPC.

As bases and ModuleController may not be deployed in the same VPC currently, interaction between them is realized through MQTT message queues. Base nodes integrate the IP of the Pod where the base resides, and module Pods integrate the IP of the base node. Therefore, when the base itself and ModuleController are not in the same VPC, the IP of the module is actually invalid, preventing external service provision.

A potential solution involves forwarding at the Load Balancer (LB) layer of the Service, redirecting the Service’s traffic to the base service on the corresponding IP of the K8S where the base resides. Further evaluation and optimization of this issue will be based on actual usage scenarios.

Incompatible Base and Module Release

  1. Deploy a module’s Deployment first, specifying the latest version of the module code package address in Container and the name and version information of the new version base in nodeAffinity. This Deployment will create corresponding Pods, but they won’t be scheduled until new version bases are created.
  2. Update the base Deployment to release the new version image, triggering the replacement and restart of the base. Upon startup, the base informs the ModuleController V2 controller, creating a corresponding version node.
  3. After the creation of the corresponding version base node, the K8S scheduler automatically triggers scheduling, deploying the Pods created in step 1 onto the base node for installation of the new version module, thus achieving simultaneous release.


4.4.2 - 5.2 Module Release Operations Strategy

Koupleless Module Release Operations Strategy

Operations Strategy

To achieve zero-downtime changes in the production environment, the module release operations leverage Kubernetes (K8S) native scheduling capabilities to provide secure and reliable update functionality. Users can deploy module Pods according to business requirements.

Scheduling Strategy

Dispersion Scheduling: achieved through native Deployment controls, with Pod anti-affinity configuration facilitating dispersion scheduling (see the example below).

Peer and Non-Peer Deployment

Peer and non-peer deployment strategies can be realized by selecting different deployment methods.

Peer Deployment

Two implementation methods are provided:

  1. Using DaemonSet: Modules can be deployed as DaemonSets, where a DaemonSet controller automatically creates a module Pod for each base node upon its addition, ensuring peer deployment.

    Note that DaemonSet rolling updates occur by uninstalling before reinstalling; choose based on actual business needs.

  2. Via Deployment: Unlike DaemonSet, an additional component is required to maintain module replica count equivalent to the number of base nodes (under development, expected in the next release). Supports install-before-uninstall, avoiding backend traffic loss in a microservices architecture.

    While Deployments strive for dispersion, they do not guarantee complete dispersion; modules might be deployed multiple times to the same base. For strong dispersion, add Pod anti-affinity settings in the Deployment, as shown below:

apiVersion: apps/v1
kind: Deployment
metadata:
    name: test-module-deployment
    namespace: default
    labels:
        module-controller.koupleless.io/component: module-deployment
spec:
    replicas: 1
    revisionHistoryLimit: 3
    selector:
        matchLabels:
            module.koupleless.io/name: biz1
            module.koupleless.io/version: 0.0.1
    strategy:
        rollingUpdate:
            maxSurge: 30%
            maxUnavailable: 30%
        type: RollingUpdate
    template:
        metadata:
            labels:
                module-controller.koupleless.io/component: module
                module.koupleless.io/name: biz1
                module.koupleless.io/version: 0.0.1
        spec:
            containers:
            - name: biz1
              image: https://serverless-opensource.oss-cn-shanghai.aliyuncs.com/module-packages/test_modules/biz1-0.0.1-ark-biz.jar
              env:
              - name: BIZ_VERSION
                value: 0.0.1
            affinity:
                nodeAffinity:
                    requiredDuringSchedulingIgnoredDuringExecution:
                        nodeSelectorTerms:
                        - matchExpressions:
                          - key: base.koupleless.io/version
                            operator: In
                            values: ["1.0.0"] # If modules can only be scheduled to specific node versions, this field is mandatory.
                          - key: base.koupleless.io/name
                            operator: In
                            values: ["base"]
                podAntiAffinity: # Core configuration for dispersion scheduling
                    requiredDuringSchedulingIgnoredDuringExecution:
                    - labelSelector:
                        matchLabels:
                            module.koupleless.io/name: biz1
                            module.koupleless.io/version: 0.0.1
                      topologyKey: topology.kubernetes.io/zone
            tolerations:
            - key: "schedule.koupleless.io/virtual-node"
              operator: "Equal"
              value: "True"
              effect: "NoExecute"

Non-Peer Deployment

Achieved by deploying modules as Deployments or ReplicaSets, with deployments based on the replica count setting.

Batch Updates

The strategy for batch updates requires custom control logic. ModuleController V2 introduces a capability where, when different versions of the same-named module are installed sequentially on a base, the Pod of the earlier-installed module enters BizDeactivate status and transitions to the Failed phase. Exploit this logic to implement batch update strategies.



4.4.3 - 5.3 Health Check

Background

The purpose of health checks is to obtain the status of an application throughout its lifecycle, including the operational and runtime phases, so that users can make decisions based on this status. For instance, if the application status is DOWN, it indicates a malfunction in the application, and the user may choose to restart or replace the machine.

In the case of a single application, health checks are relatively simple:

  • Operational phase status:
    • If it’s starting up, the status is UNKNOWN;
    • If startup fails, the status is DOWN;
    • If startup is successful, the status is UP.
  • Runtime phase status:
    • If all health checkpoints of the application are healthy, the status is UP;
    • If any health checkpoint of the application is not healthy, the status is DOWN.

In multi-application scenarios, the situation can be much more complex. We need to consider the impact of the multi-application’s status during both the operational phase and the runtime phase on the overall application health. When designing health checks, we need to consider the following two issues:

  • During the module operational phase, should the module start-up status affect the overall application health status?

    In different operational scenarios, users have different expectations. koupleless module operations have three scenarios:

    | Scenario | Impact of the Module on the Overall Application Health Status |
    |---|---|
    | Module Hot-Deployment | Provide configuration to let users decide whether the hot-deployment result should affect the overall application health status (default configuration: does not affect the original health status of the application) |
    | Static Merge Deployment | Module deployment occurs during the base startup, so the module startup status should directly affect the overall health status of the application |
    | Module Replay | Module replay occurs during the base startup, thus the module startup status should directly affect the overall health status of the application |
  • During the module runtime phase, should the module running status affect the overall application health status?

    The module runtime phase status should have a direct impact on the overall application health status.

Under this context, we have designed a health check approach for multi-application scenarios.

Usage

Requirements

  • Koupleless version >= 1.1.0
  • sofa-ark version >= 2.2.9

Obtain the overall health status of the application

There are 3 types of health status for the base:

| Status | Meaning |
|---|---|
| UP | Healthy, indicating readiness |
| UNKNOWN | Currently starting up |
| DOWN | Unhealthy (may be due to startup failure or an unhealthy running state) |

Since Koupleless supports hot deployment of modules, while obtaining the overall health status of the application, users may wish for the module deployment result to impact the overall application health status or not.

Module launch result does not affect the overall application health status (default)

  • Features: For a healthy base, if the module installation fails, it will not affect the overall application health status.
  • Usage: Same as the health check configuration for regular Spring Boot, configure in the base’s application.properties:
# or do not configure management.endpoints.web.exposure.include
management.endpoints.web.exposure.include=health
# If you need to display all information, configure the following content
management.endpoint.health.show-components=always
management.endpoint.health.show-details=always
  • Access: {baseIp:port}/actuator/health
  • Result:
{
    // Overall health status of the application
    "status": "UP",
    "components": {
        // Aggregated health status of the modules
        "arkBizAggregate": {
            "status": "UP",
            "details": {
                "biz1:0.0.1-SNAPSHOT": {
                    "status": "UP",
                    // Can see the health status of all active HealthIndicators in the modules
                    "details": {
                        "diskSpace": {
                          "status": "UP",
                          "details": {
                            "total": 494384795648,
                            "free": 272435396608,
                            "threshold": 10485760,
                            "exists": true
                            }
                        },
                        "pingHe": {
                          "status": "UP",
                          "details": {}
                        }
                    }
                }
            }
        },
        // Startup health status of base and modules
        "masterBizStartUp": {
            "status": "UP",
            // Including the startup status of each module.
            "details": {
                "base:1.0.0": {
                    "status": "UP"
                },
                "biz1:0.0.1-SNAPSHOT": {
                    "status": "UP"
                },
                "biz2:0.0.1-SNAPSHOT": {
                    "status": "DOWN"
                }
            }
        }
    }
}

Overall Health Status in Different Scenarios

Scenario 1: Start base

| Status | Meaning |
|---|---|
| UP | Base is healthy |
| UNKNOWN | Base is starting up |
| DOWN | Base is unhealthy |

Scenario 2: Start base and install modules with static merge deployment

| Status | Meaning |
|---|---|
| UP | Base and module are healthy |
| UNKNOWN | Base or module is starting up |
| DOWN | Base startup failed / base is unhealthy / module startup failed / module is unhealthy |

Scenario 3: After base starts, install modules with hot deployment

Provide configuration to let users decide whether the result of module hot deployment affects the overall health status of the application (The default configuration is: Does not affect the original health status of the application)

Default Configuration: In the scenario of hot deployment, whether or not a module is successfully installed does not affect the overall health status of the application, as follows:

| Status | Meaning |
|---|---|
| UP | Base and module are healthy |
| UNKNOWN | Base is starting up |
| DOWN | Base startup failed / base is unhealthy / module is unhealthy |

Scenario 4: Base running

| Status | Meaning |
|---|---|
| UP | Base and module are healthy |
| UNKNOWN | - |
| DOWN | Base is unhealthy or module is unhealthy |

Scenario 5: After base started, reinstall module

Reinstall module refers to automatically pulling the module baseline and installing the module after the base is started.

Reinstall module is not supported at the moment.

| Status | Meaning |
|---|---|
| UP | Base and module are healthy |
| UNKNOWN | Base or module is starting up |
| DOWN | Base is unhealthy or module startup failed or module is unhealthy |

Module launch result affects the overall application health status

  • Features: For a healthy base, if a module installation fails, the overall application health status will also fail.
  • Usage: In addition to the above configuration, you need to configure koupleless.healthcheck.base.readiness.withAllBizReadiness=true, that is, configure in the base’s application.properties:
# Alternatively, do not configure management.endpoints.web.exposure.include
management.endpoints.web.exposure.include=health
# If you need to display all information, configure the following content
management.endpoint.health.show-components=always
management.endpoint.health.show-details=always
# Do not ignore module startup status
koupleless.healthcheck.base.readiness.withAllBizReadiness=true
  • Access: {baseIp:port}/actuator/health
  • Result:
{
    // Overall health status of the application
    "status": "UP",
    "components": {
        // Aggregated health status of the modules
        "arkBizAggregate": {
            "status": "UP",
            "details": {
                "biz1:0.0.1-SNAPSHOT": {
                    "status": "UP",
                    // Can see the health status of all active HealthIndicators in the modules
                    "details": {
                        "diskSpace": {
                          "status": "UP",
                          "details": {
                            "total": 494384795648,
                            "free": 272435396608,
                            "threshold": 10485760,
                            "exists": true
                            }
                        },
                        "pingHe": {
                          "status": "UP",
                          "details": {}
                        }
                    }
                }
            }
        },
        // Startup health status of base and modules
        "masterBizStartUp": {
            "status": "UP",
            // Including the startup status of each module.
            "details": {
                "base:1.0.0": {
                    "status": "UP"
                },
                "biz1:0.0.1-SNAPSHOT": {
                    "status": "UP"
                }
            }
        }
    }
}

Overall Health Status in Different Scenarios

Scenario 1: Start base

| Status | Meaning |
|---|---|
| UP | Base is healthy |
| UNKNOWN | Base is starting up |
| DOWN | Base is unhealthy |

Scenario 2: Start base and install modules with static merge deployment

| Status | Meaning |
|---|---|
| UP | Base and module are healthy |
| UNKNOWN | Base or module is starting up |
| DOWN | Base startup failed / base is unhealthy / module startup failed / module is unhealthy |

Scenario 3: After base starts, install modules with hot deployment

Provide configuration to let users decide whether the result of module hot deployment affects the overall health status of the application (The default configuration is: Does not affect the original health status of the application)

When configuring as koupleless.healthcheck.base.readiness.withAllBizReadiness=true:

| Status | Meaning |
|---|---|
| UP | Base and module are healthy |
| UNKNOWN | Base or module is starting up |
| DOWN | Base startup failed / module startup failed / base is unhealthy / module is unhealthy |

Scenario 4: Base running

| Status | Meaning |
|---|---|
| UP | Base and module are healthy |
| UNKNOWN | - |
| DOWN | Base is unhealthy or module is unhealthy |

Scenario 5: After base started, reinstall module

Reinstall module refers to automatically pulling the module baseline and installing the module after the base is started.

Reinstall module is not supported at the moment.

Obtaining the Health Status of a Single Module

  • Usage: Consistent with the regular SpringBoot health check configuration, enable the health endpoint, i.e., configure in the module's application.properties:
# or do not configure management.endpoints.web.exposure.include
management.endpoints.web.exposure.include=health
  • Access: {baseIp:port}/{bizWebContextPath}/actuator/health
  • Result:
{
    "status": "UP",
    "components": {
        "diskSpace": {
            "status": "UP",
            "details": {
                "total": 494384795648,
                "free": 270828220416,
                "threshold": 10485760,
                "exists": true
            }
        },
        "ping": {
            "status": "UP"
        }
    }
}

Get information about base, modules, and plugins

  • Usage: Same as the regular springboot health check configuration, enable the info endpoint, i.e., configure in the base’s application.properties:
# Note: If the user configures management.endpoints.web.exposure.include on their own, they need to include the health endpoint, otherwise the health endpoint cannot be accessed
management.endpoints.web.exposure.include=health,info
  • Access: {baseIp:port}/actuator/info
  • Result:
{
    "arkBizInfo": [
      {
        "bizName": "biz1",
        "bizVersion": "0.0.1-SNAPSHOT",
        "bizState": "ACTIVATED",
        "webContextPath": "biz1"
      },
      {
        "bizName": "base",
        "bizVersion": "1.0.0",
        "bizState": "ACTIVATED",
        "webContextPath": "/"
      }
    ],
    "arkPluginInfo": [
        {
            "pluginName": "koupleless-adapter-log4j2",
            "groupId": "com.alipay.sofa.koupleless",
            "artifactId": "koupleless-adapter-log4j2",
            "pluginVersion": "1.0.1-SNAPSHOT",
            "pluginUrl": "file:/Users/lipeng/.m2/repository/com/alipay/sofa/koupleless/koupleless-adapter-log4j2/1.0.1-SNAPSHOT/koupleless-adapter-log4j2-1.0.1-SNAPSHOT.jar!/",
            "pluginActivator": "com.alipay.sofa.koupleless.adapter.Log4j2AdapterActivator"
        },
        {
            "pluginName": "web-ark-plugin",
            "groupId": "com.alipay.sofa",
            "artifactId": "web-ark-plugin",
            "pluginVersion": "2.2.9-SNAPSHOT",
            "pluginUrl": "file:/Users/lipeng/.m2/repository/com/alipay/sofa/web-ark-plugin/2.2.9-SNAPSHOT/web-ark-plugin-2.2.9-SNAPSHOT.jar!/",
            "pluginActivator": "com.alipay.sofa.ark.web.embed.WebPluginActivator"
        },
        {
            "pluginName": "koupleless-base-plugin",
            "groupId": "com.alipay.sofa.koupleless",
            "artifactId": "koupleless-base-plugin",
            "pluginVersion": "1.0.1-SNAPSHOT",
            "pluginUrl": "file:/Users/lipeng/.m2/repository/com/alipay/sofa/koupleless/koupleless-base-plugin/1.0.1-SNAPSHOT/koupleless-base-plugin-1.0.1-SNAPSHOT.jar!/",
            "pluginActivator": "com.alipay.sofa.koupleless.plugin.ServerlessRuntimeActivator"
        }
    ]
}

4.4.4 - 5.4 Deployment of Module Controller V2

Deployment methodology for Koupleless Module Controller V2

Note: ModuleController V2 has only been tested on K8S version 1.24 and relies on certain K8S features. Therefore, the K8S version should not be lower than V1.10.

Resource File Locations

  1. Role Definition
  2. RBAC Definition
  3. ServiceAccount Definition
  4. ModuleControllerV2 Deployment Definition

Deployment Method

Use the kubectl apply command to sequentially apply the above four resource files to complete the deployment of a single-instance ModuleController.

For using the Module Controller’s sharded cluster capability, modify the above deployment definition to a Deployment version, placing the Pod Spec content into the Deployment template.

To use load balancing in a sharded cluster, set the IS_CLUSTER parameter to true in the Module Controller ENV configuration.

Configurable Parameter Explanation

Environment Variable Configuration

Below are some configurable environment variables and their explanations:

  • ENABLE_MQTT_TUNNEL

    • Meaning: Flag to enable MQTT operations pipeline. Set to true to enable. If enabled, configure the related environment variables below.
  • MQTT_BROKER

    • Meaning: URL of the MQTT broker.
  • MQTT_PORT

    • Meaning: MQTT port number.
  • MQTT_USERNAME

    • Meaning: MQTT username.
  • MQTT_PASSWORD

    • Meaning: MQTT password.
  • MQTT_CLIENT_PREFIX

    • Meaning: MQTT client prefix.
  • ENABLE_HTTP_TUNNEL

    • Meaning: Flag to enable HTTP operations pipeline. Set to true to enable. Optionally configure the environment variables below.
  • HTTP_TUNNEL_LISTEN_PORT

    • Meaning: Module Controller HTTP operations pipeline listening port, default is 7777.
  • REPORT_HOOKS

    • Meaning: Error reporting links. Supports multiple links separated by ;. Currently only supports DingTalk robot webhooks.
  • ENV

    • Meaning: Module Controller environment, set as VNode label for operations environment isolation.
  • IS_CLUSTER

    • Meaning: Cluster flag. If true, Virtual Kubelet will start with cluster configuration.
  • WORKLOAD_MAX_LEVEL

    • Meaning: Cluster configuration indicating the maximum workload level for workload calculation in Virtual Kubelet. Default is 3. Refer to Module Controller architecture design for detailed calculation rules.
  • ENABLE_MODULE_DEPLOYMENT_CONTROLLER

    • Meaning: Flag to enable the Module Deployment Controller. If true, the deployment controller will start to modify Module deployment replicas and baselines.
  • VNODE_WORKER_NUM

    • Meaning: Number of concurrent processing threads for VNode Modules. Set to 1 for single-threaded.
  • CLIENT_ID

    • Meaning: Optional Module Controller instance ID; it must be unique within one environment. A random UUID is generated by default.
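
For reference, a sketch of wiring a few of these variables into the ModuleController container spec (values are illustrative):

# Fragment of the ModuleController Pod/Deployment container spec (sketch)
env:
  - name: ENABLE_HTTP_TUNNEL
    value: "true"
  - name: HTTP_TUNNEL_LISTEN_PORT
    value: "7777"
  - name: ENV
    value: "dev"
  - name: IS_CLUSTER
    value: "false"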

Documentation Reference

For detailed structure and implementation, refer to the documentation.

4.4.5 - 5.5 Module Information Retrieval

Koupleless Module Information Retrieval

View the names and statuses of all installed modules on a base instance

kubectl get module -n <namespace> -l koupleless.alipay.com/base-instance-ip=<pod-ip> -o custom-columns=NAME:.metadata.name,STATUS:.status.status

or

kubectl get module -n <namespace> -l koupleless.alipay.com/base-instance-name=<pod-name> -o custom-columns=NAME:.metadata.name,STATUS:.status.status

View detailed information of all installed modules on a base instance

kubectl describe module -n <namespace> -l koupleless.alipay.com/base-instance-ip=<pod-ip>

or

kubectl describe module -n <namespace> -l koupleless.alipay.com/base-instance-name=<pod-name>

Replace <pod-ip> with the IP of the base instance you want to view, <pod-name> with the name of the base instance you want to view, and <namespace> with the namespace of the resources you want to view.

4.4.6 - 5.6 Error Codes

This article mainly introduces the error codes of Arklet, ModuleController, and KouplelessBoard.

ErrorCode Rules

Error codes have two levels and support dynamic combination. They use PascalCase, and different levels can only be separated by ".",
e.g., Arklet.InstallModuleFailed.
Level 1: Error Source
Level 2: Error Type

Suggestion

Briefly explain the solution for upstream operations to refer to.

Arklet Error Codes

Level 1: Error Source

| Code | Meaning |
|---|---|
| User | Errors caused by the user |
| Arklet | Exceptions from Arklet itself |
| ModuleController | Exceptions caused by specific upstream components |
| OtherUpstream | Exceptions caused by unknown upstream |

Level 2: Error Type

| Business Type | Error Source | Error Type | Meaning | Solution |
|---|---|---|---|---|
| General | Arklet | UnknownError | Unknown error (default) | Please check |
| | ModuleController | InvalidParameter | Parameter validation failed | Please check the parameters |
| | ModuleController | InvalidRequest | Invalid operation type | Please check the request |
| | OtherUpstream | DecodeURLFailed | URL parsing failed | Please check if the URL is valid |
| Query Related | Arklet | NoMatchedBiz | Module query failed, no target biz exists | - |
| | Arklet | InvalidBizName | Module query failed, query parameter bizName cannot be empty | Please add the query parameter bizName |
| Installation Related | Arklet | InstallationRequirementNotMet | Module installation conditions are not met | Please check the necessary parameters for module installation |
| | Arklet | PullBizError | Package pulling failed | Please retry |
| | Arklet | PullBizTimeOut | Package pulling timed out | Please retry |
| | User | DiskFull | Disk full when pulling the package | Please replace the base |
| | User | MachineMalfunction | Machine malfunction | Please restart the base |
| | User | MetaspaceFull | Metaspace exceeds the threshold | Please restart the base |
| | Arklet | InstallBizExecuting | Module is being installed | Please retry |
| | Arklet | InstallBizTimedOut | Uninstalling old module failed during module installation | Please check |
| | Arklet | InstallBizFailed | New module installation failed during module installation | Please check |
| | User | InstallBizUserError | Module installation failed, business exception | Please check the business code |
| Uninstallation Related | Arklet | UninstallBizFailed | Uninstallation failed, current biz still exists in the container | Please check |
| | Arklet | UnInstallationRequirementNotMet | Module uninstallation conditions are not met | The current module has multiple versions, and the version to be uninstalled is in the active state; uninstalling it is not allowed |

ModuleController Error Codes

Level 1: Error Source

| Code | Meaning |
|---|---|
| User | Errors caused by the user |
| ModuleController | Exceptions from ModuleController itself |
| KouplelessBoard | Exceptions caused by specific upstream components |
| Arklet | Exceptions caused by specific downstream components |
| OtherUpstream | Exceptions caused by unknown upstream |
| OtherDownstream | Exceptions caused by unknown downstream |

Level 2: Error Type

| Business Type | Error Source | Error Type | Meaning | Solution |
|---|---|---|---|---|
| General | ModuleController | UnknownError | Unknown error (default) | Please check |
| | OtherUpstream | InvalidParameter | Parameter validation failed | Please check the parameters |
| | Arklet | ArkletServiceNotFound | Base service not found | Please ensure that the base has the Koupleless dependency |
| | Arklet | NetworkError | Network call exception | Please retry |
| | OtherUpstream | SecretAKError | Signature exception | Please confirm that there are operation permissions |
| | ModuleController | DBAccessError | Database read/write failed | Please retry |
| | OtherUpstream | DecodeURLFailed | URL parsing failed | Please check if the URL is valid |
| | ModuleController | RetryTimesExceeded | Multiple retries failed | Please check |
| | ModuleController | ProcessNodeMissed | Lack of available working nodes | Please retry later |
| | ModuleController | ServiceMissed | Service missing | Please check if the ModuleController version contains the template type |
| | ModuleController | ResourceConstraned | Resource limited (thread pool, queue, etc. full) | Please retry later |
| Installation Related | Arklet | InstallModuleTimedOut | Module installation timed out | Please retry |
| | Arklet / User | InstallModuleFailed | Module installation failed | Please check the failure reason |
| | Arklet | InstallModuleExecuting | Module is being installed | The same module is being installed, please retry later |
| | User | DiskFull | Disk full | Please replace |
| Uninstallation Related | OtherUpstream | EmptyIPList | IP list is empty | Please enter the IP to be uninstalled |
| | Arklet | UninstallBizTimedOut | Module uninstallation timed out | Please retry |
| | Arklet | UninstallBizFailed | Module uninstallation failed | Please check |
| Base Related | ModuleController | BaseInstanceNotFound | Base instance not found | Please ensure that the base instance exists |
| | KubeAPIServer | GetBaseInstanceFailed | Failed to query base information | Please ensure that the base instance exists |
| | ModuleController | BaseInstanceInOperation | Base instance is under operation | Please retry later |
| | ModuleController | BaseInstanceNotReady | Base data not read or base is not available | Please ensure that the base is available |
| | ModuleController | BaseInstanceHasBeenReplaced | Base instance has been replaced | Additional base instances will be added later, please wait |
| | ModuleController | InsufficientHealthyBaseInstance | Insufficient healthy base instances | Please scale out |
| Scaling Related | ModuleController | RescaleRequirementNotMet | Scaling conditions are not met | Please check if there are enough machines for scaling / check the scaling ratio |

⚠️ Note: The base runs on different base instances, such as pods. Therefore, BaseInstanceInOperation, BaseInstanceNotReady, BaseInstanceHasBeenReplaced, InsufficientHealthyBaseInstance error codes may refer to both the application status of the base and the status of the base instance.

DashBoard Error Codes

Level 1: Error Source

| Code | Meaning |
|---|---|
| KouplelessBoard | Exceptions from KouplelessBoard itself |
| ModuleController | Exceptions caused by specific downstream components |
| OtherUpstream | Exceptions caused by unknown upstream |
| OtherDownstream | Exceptions caused by unknown downstream |

Level 2: Error Type

| Business Type | Error Source | Error Type | Meaning | Solution |
|---|---|---|---|---|
| General | KouplelessBoard | UnknownError | Unknown error (default) | |
| | OtherUpstream | InvalidParameter | Parameter validation failed | Please check the parameters |
| Work Order | KouplelessBoard | OperationPlanNotFound | Work order not found | Please check |
| | KouplelessBoard | OperationPlanMutualExclusion | Work order mutual exclusion | Please retry |
| Internal Error | KouplelessBoard | InternalError | Internal system error | Please check |
| | KouplelessBoard | ThreadPoolError | Thread pool call exception | Please check |
| Operation and Maintenance | ModuleController | BaseInstanceOperationFailed | Operation failed | Please check |
| | ModuleController | BaseInstanceUnderOperation | Under operation | Please retry |
| | ModuleController | BaseInstanceOperationTimeOut | Operation timed out | Please retry |
| | ModuleController | OverFiftyPercentBaseInstancesUnavaliable | More than 50% of machine traffic is unreachable | Please check the base instances |
| | KouplelessBoard | BaselineInconsistency | Consistency check failed (inconsistent baseline) | Please check |
| External Service Call Error | OtherDownstream | ExternalError | External service call error | Please check |
| | KouplelessBoard | NetworkError | External service call timed out | Please retry |

5 - 6. Participate in the community

5.1 - 6.1 The philosophy of openness and inclusivity

The mission and vision of Koupleless

Core Values

The core values of the Koupleless community are “openness” and “inclusivity”. All users and developers in the community are treated as equals, as reflected in the following aspects:

  1. The community follows the operating model of Apache open-source projects. Anyone who contributes to the community, especially through non-code contributions (such as documentation, website, issue replies, advocacy, development suggestions, etc.), is considered a contributor and has the opportunity to become a committer or even a PMC (Project Management Committee) member of the community.

  2. All OKRs (Objectives and Key Results), roadmaps, discussions, meetings, technical solutions, etc., are completely open. Everyone can see and participate in them. The community listens and considers all suggestions and opinions, and once adopted, ensures execution and implementation. We encourage everyone to participate in the Koupleless community with an open mind and a spirit of seeking common ground while respecting differences.

  3. The community is not limited by geographical boundaries or nationality. All source code must be commented in English to ensure understanding by everyone, and the official website is bilingual in both Chinese and English. All WeChat groups, DingTalk groups, and GitHub issue discussions can be conducted in both Chinese and English. However, since we currently focus mainly on Chinese users, most of the documentation is only available in Chinese for now, but we plan to provide English versions in the future.

2023 OKRs

O1 Build a healthy and influential Serverless open-source product community

KR1 Add 10 new contributors, increase OpenRank index to > 15 (currently 5), and activity level to > 50 (currently 44)

KR1.1 Conduct 5 advocacy events and 5 article shares, reaching out to 200 enterprises and engaging in-depth with 30+ enterprises.
KR1.2 Establish a complete community collaboration mechanism (including issue management, documentation, problem response, training, and promotion mechanisms), release 2+ training courses and product manuals, and enable developers to onboard within a week with a development throughput of 20+ issues/week.

KR2 Onboard 5 new enterprises to production or complete pilot integration (currently 1), with 3 enterprises participating in the community

KR2.1 Produce initial industry analysis reports to identify key enterprise targets for different scenarios.
KR2.2 Onboard 5 enterprises to production or complete pilot integration, with 3 enterprises participating in the community, covering 3 scenarios and documenting 3+ user cases.

O2 Develop advanced and cost-effective solutions for cost reduction and efficiency improvement

KR1 Implement modular technology to reduce machines by 30%, deployment verification time to 30 seconds, and increase delivery efficiency by 50%

KR1.1 Establish a 1-minute rapid trial platform with comprehensive documentation, website, and support, enabling users to complete module decomposition in 10 minutes.
KR1.2 Complete governance of 20 middleware and third-party packages, and establish multi-application and hot-unloading evaluation and automatic detection standards.
KR1.3 Reduce hot deployment startup time to the 10-second level, reduce resource consumption by 30% for multiple modules, and increase user delivery efficiency by 50%.
KR1.4 Launch the open-source version of Arklet, supporting SOFABoot and SpringBoot, providing operations pipeline, metric collection, module lifecycle management, multi-module runtime environment, and bean and service discovery and invocation capabilities.
KR1.5 Launch the development tool ArkCtl, featuring rapid development verification, flexible deployment (merge and independent deployment), and low-cost module decomposition and transformation capabilities.

KR2 Launch Operations and Scheduling 1.0 version. Achieve a success rate of 99.9% for end-to-end testing of the entire chain and a P90 end-to-end time of < 500ms

KR2.1 Launch open-source Operations and Scheduling capabilities based on K8S Operator, with at least publishing, rollback, offline, scaling in/out, replacement, replica maintenance, 2+ scheduling strategies, module flow control, deployment strategy, peer-to-peer and non-peer-to-peer operations capabilities.
KR2.2 Establish an open-source CI and 25+ high-frequency end-to-end test cases, continuously polish and promote end-to-end P90 time < 500ms, all pre-rehearsal success rate > 99.9%, and single test coverage rate reach line > 80%, branch > 60% (pass rate 100%).

KR3 Preliminary release of open-source auto-scaling. Modules have the ability for manual profiling and time-based scaling.

RoadMap

  • Aug 2023 Complete deployment feature verification for SOFABoot, and establish compatibility benchmark baseline.
  • Sep 2023 Release ModuleController 0.5 version of basic operations and scheduling system.
  • Sep 2023 Release Arkctl and Arklet 0.5 versions of development and operations tools.
  • Sep 2023 Launch official website and complete user manual.
  • Oct 2023 Onboard 2+ companies for real use.
  • Nov 2023 Support full capabilities of SpringBoot and 5+ commonly used middleware in the community.
  • Nov 2023 Release Koupleless 0.8 version (ModuleController, Arkctl, Arklet, SpringBoot compatibility).
  • Dec 2023 Release Koupleless 0.9 version (including basic auto-scaling, module basic decomposition tool, compatibility with 20+ middleware and third-party packages).
  • Dec 2023 Onboard 5+ companies for real use, with 10+ contributors participating.


5.2 - 6.2 Communication channels

Koupleless Communication channels

Koupleless provides the following communication channels for collaboration and interaction. Feel free to join us to share, use, and benefit together:

Koupleless Community Communication and Collaboration DingTalk Group: 24970018417

If you are interested in Koupleless, have a preliminary intention to use Koupleless, are already a user of Koupleless / SOFAArk, or are interested in becoming a community contributor, you are welcome to join this DingTalk group to communicate, discuss, and contribute code together at any time.
Koupleless user DingTalk group QR code

Koupleless WeChat Group

Koupleless user WeChat group QR code
If you are interested in Koupleless, have preliminary intentions to use Koupleless, or are already a user of Koupleless / SOFAArk, you are welcome to join this WeChat group to discuss and communicate anytime, anywhere.

Community Bi-weekly Meeting

The community meeting is held every two weeks on Tuesday evening, from 19:30 to 20:30. The next bi-weekly meeting will be held on November 28, 2023, from 19:30 to 20:30. Everyone is welcome to participate, whether by listening in or joining the discussion. The meeting is conducted via DingTalk. Please find the joining details for the DingTalk meeting below:
Meeting Link: https://meeting.dingtalk.com/dialin/?corpId=dingd8e1123006514592
DingTalk Meeting ID: 90957500367
Dial-in Numbers: 057128095818 (Mainland China), 02162681677 (Mainland China)
You can also follow the community DingTalk collaboration group (Group ID: 24970018417) for specific meeting updates.


The community PMC component members’ iteration planning meeting will be held on the last Monday of each month, discussing and finalizing the requirements planning for the next month.



5.3 - 6.3 Community Contributions

5.3.1 - 6.3.1 Local Development Testing

Koupleless Local Development Testing

SOFAArk and Arklet

SOFAArk is a regular Java SDK project that uses Maven as its dependency management and build tool. You only need to install Maven 3.6 or higher locally to develop code and run unit tests normally, without any other environment preparation.
For details on code submission, please refer to: Completing the First PR Submission.

ModuleController

ModuleController is a standard K8S Golang Operator component, which includes ModuleDeployment Operator, ModuleReplicaSet Operator, and Module Operator. You can use minikube for local development testing. For details, please refer to Local Quick Start.
To compile and build, execute the following command in the module-controller directory:

go mod download   # needed when compiling module-controller for the first time
go build -a -o manager cmd/main.go  

To run unit tests, execute the following command in the module-controller directory:

make test

You can also use an IDE for compiling, building, debugging, and running unit tests.
The development approach for module-controller is exactly the same as that of a standard K8S Operator. You can refer to the official K8S Operator development documentation.

Arkctl

Arkctl is a regular Golang project: a command-line toolset that includes common tools for users to develop and maintain modules locally. You can refer here for details.


5.3.2 - 6.3.2 Completing the First PR Submission

Completing the First Koupleless PR Submission

Claim or Submit an Issue

Regardless of whether you’re fixing a bug, adding a new feature, or improving an existing one, before you submit your code, please claim an issue on Koupleless or SOFAArk GitHub and assign yourself as the Assignee (novices are encouraged to claim tasks tagged with good-first-issue). Alternatively, submit a new issue describing the problem you want to fix or the feature you want to add or improve. Doing so helps avoid duplicate work with others.

Obtaining the Source Code

To modify or add features, after claiming or creating an issue, click the fork button in the upper right corner to copy the Koupleless or SOFAArk mainline code into your own repository.

Creating a Branch

All modifications to Koupleless and SOFAArk are made on individual branches. After forking the source code, you need to:

  • Download the code to your local machine, either via git/https:
git clone https://github.com/your-username/koupleless.git
git clone https://github.com/your-username/sofa-ark.git
  • Create a branch to prepare for modifying the code:
git checkout -b add_xxx_feature


After executing the above command, your code repository will switch to the respective branch. You can verify your current branch by executing the following command:

  git branch -a

If you want to switch back to the mainline, execute the following command:

  git checkout master

If you want to switch to another branch, execute the following command:

  git checkout "branchName"

Modifying and Submitting Code Locally

After creating a branch, you can start modifying the code.

Things to Consider When Modifying Code

  • Maintain consistent code style. Koupleless arklet and sofa-ark use Maven plugins to ensure consistent code formatting. Before submitting the code, make sure to execute:
mvn clean compile

The formatting capability for module-controller and arkctl’s Golang code is still under development.

  • Include supplementary unit test code.
  • Ensure that new modifications pass all unit tests.
  • If it’s a bug fix, provide new unit tests to demonstrate that the previous code had bugs and that the new code fixes them. For arklet and sofa-ark, you can run all tests with the following command:
mvn clean test

For module-controller and arkctl, you can run all tests with the following command:

make test

You can also use an IDE to assist.

Other Considerations

  • Please keep the code you edit in the original style, especially spaces, line breaks, etc.
  • Delete unnecessary comments. Comments must be in English.
  • Add comments to logic and functionalities that are not easily understood.
  • Be sure to update the relevant documents in the docs/content/zh-cn/ directory, specifically in the docs and contribution-guidelines directories.

After modifying the code, commit all changes to your local repository using the following command:

git commit -am 'Add xx feature'

Submitting Code to Remote Repository

After committing the code locally, it’s time to synchronize the code with the remote repository. Submit your local modifications to GitHub with the following command:

git push origin "branchname"

If you forked the repository earlier, "origin" here refers to your own repository, not Koupleless's repository.

Requesting to Merge Code into Main Branch

After pushing the code to GitHub, you can request that your improvements be merged into the Koupleless or SOFAArk mainline. Go to your GitHub repository and click the pull request button in the upper right corner. Select the target branch, usually master, and choose the Maintainer or PMC of the corresponding component as the Code Reviewer. If both the PR pipeline check and the Code Review pass, your code will be merged into the mainline and become part of Koupleless.

PR Pipeline Check

The PR pipeline check includes:

  1. CLA signing. The first time you submit a PR, you must sign the CLA agreement. If you cannot open the CLA signing page, try using a proxy.
  2. Automatic appending of Apache 2.0 License declaration and author to each file.
  3. Execution of all unit tests, and all must pass.
  4. Checking if the coverage rate reaches line coverage >= 80% and branch coverage >= 60%.
  5. Detecting if the submitted code has security vulnerabilities.
  6. Checking if the submitted code complies with basic code standards.

All the above checks must pass for the PR pipeline to pass and enter the Code Review stage.

Code Review

If you chose the Maintainer or PMC of the corresponding component as the Code Reviewer and there is still no response after several days, leave a comment under the PR mentioning the relevant people, or mention them directly in the community DingTalk collaboration group (DingTalk group ID: 24970018417) to request a review. Code Review comments are recorded directly in the corresponding PR or Issue. If you find the suggestions reasonable, please update your code accordingly and resubmit the PR.

Merging Code into Main Branch

After the PR pipeline check and Code Review are both successful, Koupleless maintainers will merge the code into the mainline. After the code is merged, you will receive a notification of successful merging.


5.3.3 - 6.3.3 Document, Issue, and Process Contribution

Contribution to Koupleless documentation, issues, and processes

Documentation Contribution

Maintaining user documentation, technical documentation, and website content is a collective effort of every contributor in the community. Anyone who contributes to any document or website content is considered a contributor and, based on their activity, may have the opportunity to become a Committer or even a PMC member of Koupleless, jointly leading the technical evolution of Koupleless.

Issue Submission and Response Contribution

Any issues, bugs, new features, or improvements encountered during usage should be reported via GitHub Issues. Community members take turns each day to follow up on issues. Anyone who raises or responds to issues is a contributor to Koupleless. Contributors who actively respond to issues may be promoted to Committer status, and exceptionally active contributors may even be promoted to PMC members, jointly leading the technical evolution of Koupleless.

Issue Templates

There are two templates for Koupleless (including SOFAArk) Issues: “Question or Bug Report” and “Feature Request”.

Question or Bug Report

For any issues encountered during usage or suspected bugs, please select “Question or Bug Report” and provide detailed replication information as follows:

### Describe the question or bug

A clear and concise description of what the question or bug is.

### Expected behavior

A clear and concise description of what you expected to happen.

### Actual behavior

A clear and concise description of what actually happened.

### Steps to reproduce

Steps to reproduce the problem:

1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

### Screenshots

If applicable, add screenshots to help explain your problem.

### Minimal yet complete reproducer code (or GitHub URL to code)

### Environment

- SOFAArk version:
- JVM version (e.g. `java -version`):
- OS version (e.g. `uname -a`):
- Maven version:
- IDE version:

Feature Request

For new features, improvements to existing features, or other discussions, please select “Feature Request”.

Process Contribution

Currently, Koupleless has established various collaboration standards such as code conventions, PR processes, CI pipelines, iteration management, weekly meetings, and communication channels. You can suggest improvements to our collaboration standards and processes on GitHub to become our contributor.



5.3.4 - 6.3.4 Organizing Meetings and Promoting Operations

Organizing Meetings and Promoting Operations for Koupleless

We encourage everyone to promote and advocate for Koupleless and, through such operations work, become Koupleless contributors, committers, and even PMC members. Each promotional contribution is also rewarded with commemorative prizes. Operation methods include but are not limited to:

  1. Delivering speeches on the use or technical implementation of Koupleless at online or offline technical conferences and meetups.
  2. Sharing and exchanging Koupleless usage scenarios with other companies.
  3. Publishing articles or videos related to the use or technical implementation of Koupleless on various channels.
  4. Other operational methods.

5.4 - 6.4 Community Roles and Promotion

Koupleless Community Roles and Promotion

Roles and Promotion Mechanism in the Koupleless Community

The roles in the Koupleless community are based on the organization model of Apache open-source projects, with each component (SOFAArk, Arklet, ModuleController, ArkCtl) having its own set of roles. From lowest to highest, the roles are Contributor, Committer, PMC (Project Management Committee), and Maintainer.

| Role | Responsibilities and Permissions | Promotion Mechanism to Higher Roles |
| --- | --- | --- |
| Contributor | Everyone who submits Issues, answers Issues, promotes the project externally, submits documentation content, or submits any code is a Contributor of the corresponding component. Contributors may submit Issues, reply to Issues, submit content to the official website or documentation, submit code (excluding code review), and publish articles externally. | A Contributor who has had enough code or documentation content merged can be promoted to Committer by a vote of the component's PMC members. A Contributor who has answered enough Issues or taken part in enough operational activities can likewise be voted in as a Committer by the PMC members. |
| Committer | Members who actively answer Issues, promote the project externally, submit documentation content, or submit code may be voted in as Committers by PMC members based on their activity. Committers additionally have responsibilities and permissions for code review, technical proposal review, and Contributor cultivation. | Committers with long-term active involvement or outstanding contributions can be promoted to PMC members of the corresponding component by a vote of the PMC members. |
| PMC | Members who make sustained contributions and are particularly active in the corresponding component may be promoted to PMC members. PMC members additionally have responsibilities and permissions for roadmap formulation, technical proposal and code review, Issue and iteration management, and Contributor and Committer cultivation. | |
| Maintainer | Maintainers have additional permissions for key management and repository management. In all other respects, their responsibilities and permissions are equivalent to those of PMC members. | |

Community Role Member List

SOFAArk

Maintainer

yuanyuancin
lvjing2

PMC (Project Management Committee)

glmapper

Committer

zjulbj
gaosaroma
QilongZhang133
straybirdzls
caojie0911

Contributor

lylingzhen
khotyn
FlyAbner
alaneuler
sususama
ujjboy
JoeKerouac
Lunarscave
HzjNeverStop
AiWu4Damon
vchangpengfei
HuangDayu
shenchao45
DalianRollingKing
nobodyiam
lanicc
azhsmesos
wuqian0808
KangZhiDong
suntao4019
huangyunbin
jiangyunpeng
michalyao
rootsongjc
Zwl0113
tofdragon
lishiguang4
hionwi
343585776
g-stream
zkitcast
davidzj
zyclove
WindSearcher
lovejin52022
smalljunHw
sq1015
xwh1108
yuanChina
blysin
yuwenkai666
hadoop835
gitYupan
thirdparty-core
Estom
jijuanwang
DCLe-DA
linkoog
springcoco
zhaowwwjian
xingcici
ixufeng
jnan806
lizhi12q
kongqq
wangxiaotao00
Due to space limitations, Contributors who only submitted issues before 2023 are not listed here. We also thank everyone for using SOFAArk and for their questions and feedback.

Arklet

Maintainer

yuanyuancin
lvjing2

PMC (Project Management Committee)

TomorJM

Committer

None

Contributor

glmapper
Lunarscave
lylingzhen

ModuleController

Maintainer

CodeNoobKing

PMC (Project Management Committee)

None

Committer

None

Contributor

liu-657667
Charlie17Li
lylingzhen

Arkctl

Maintainer

yuanyuancin
lvjing2

PMC (Project Management Committee)

None

Committer

None

Contributor

None


5.5 - 6.5 Technical Principles

5.5.2 - 6.5.2 Arklet Architecture and API Design


Overview

Arklet provides an operational interface for delivering SofaArk bases and modules. With Arklet, Ark Biz can be released and operated easily and flexibly.

Arklet is composed internally of ArkletComponents:


  • ApiClient: the core component responsible for interacting with the outside world
  • CommandService: defines and extends the capability instructions that Arklet exposes
  • OperationService: encapsulates the basic capabilities for adding, deleting, and modifying Ark Biz through interaction with SofaArk
  • HealthService: computes health and stability indicators for the base, Biz, system, and more

The collaboration between them is shown in the overview figure.

Of course, you can also extend Arklet's component capabilities by implementing the ArkletComponent interface.

Command Extension

Arklet exposes instruction APIs externally and handles each instruction internally through the CommandHandler mapped from the corresponding API.

CommandHandler-related extensions are managed uniformly by the CommandService component.

You can customize extension commands by inheriting from AbstractCommandHandler.

Built-in Command API

All of the following instruction APIs are accessed via POST (application/json) requests.

The HTTP protocol is enabled, and the default port is 1238.

You can override the default port with the JVM startup parameter koupleless.arklet.http.port (i.e. -Dkoupleless.arklet.http.port=XXXX).
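
For instance, a command API can be called with a minimal JDK 11+ HttpClient sketch (this is only an illustration, assuming an Arklet running locally on the default port; see the input/output samples below for the actual payloads):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ArkletHelpDemo {
    public static void main(String[] args) throws Exception {
        // POST an empty JSON body to the "help" command API on the default port 1238
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:1238/help"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"code":"SUCCESS","data":[...]}
    }
}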

Query the supported commands

  • URL: 127.0.0.1:1238/help
  • input sample:
{}
  • output sample:
{
    "code":"SUCCESS",
    "data":[
        {
            "desc":"query all ark biz(including master biz)",
            "id":"queryAllBiz"
        },
        {
            "desc":"list all supported commands",
            "id":"help"
        },
        {
            "desc":"uninstall one ark biz",
            "id":"uninstallBiz"
        },
        {
            "desc":"switch one ark biz",
            "id":"switchBiz"
        },
        {
            "desc":"install one ark biz",
            "id":"installBiz"
        }
    ]
}

Install a biz

  • URL: 127.0.0.1:1238/installBiz
  • input sample:
{
    "bizName": "test",
    "bizVersion": "1.0.0",
    // a local path should start with file://; remote URLs that can be downloaded are also supported
    "bizUrl": "file:///Users/jaimezhang/workspace/github/sofa-ark-dynamic-guides/dynamic-provider/target/dynamic-provider-1.0.0-ark-biz.jar"
}
  • output sample(success):
{
  "code":"SUCCESS",
  "data":{
    "bizInfos":[
      {
        "bizName":"dynamic-provider",
        "bizState":"ACTIVATED",
        "bizVersion":"1.0.0",
        "declaredMode":true,
        "identity":"dynamic-provider:1.0.0",
        "mainClass":"io.sofastack.dynamic.provider.ProviderApplication",
        "priority":100,
        "webContextPath":"provider"
      }
    ],
    "code":"SUCCESS",
    "message":"Install Biz: dynamic-provider:1.0.0 success, cost: 1092 ms, started at: 16:07:47,769"
  }
}
  • output sample(failed):
{
  "code":"FAILED",
  "data":{
    "code":"REPEAT_BIZ",
    "message":"Biz: dynamic-provider:1.0.0 has been installed or registered."
  }
}

Uninstall a biz

  • URL: 127.0.0.1:1238/uninstallBiz
  • input sample:
{
    "bizName":"dynamic-provider",
    "bizVersion":"1.0.0"
}
  • output sample(success):
{
  "code":"SUCCESS"
}
  • output sample(failed):
{
  "code":"FAILED",
  "data":{
    "code":"NOT_FOUND_BIZ",
    "message":"Uninstall biz: test:1.0.0 not found."
  }
}

Switch a biz

  • URL: 127.0.0.1:1238/switchBiz
  • input sample:
{
    "bizName":"dynamic-provider",
    "bizVersion":"1.0.0"
}
  • output sample:
{
  "code":"SUCCESS"
}

Query all Biz

  • URL: 127.0.0.1:1238/queryAllBiz
  • input sample:
{}
  • output sample:
{
  "code":"SUCCESS",
  "data":[
    {
      "bizName":"dynamic-provider",
      "bizState":"ACTIVATED",
      "bizVersion":"1.0.0",
      "mainClass":"io.sofastack.dynamic.provider.ProviderApplication",
      "webContextPath":"provider"
    },
    {
      "bizName":"stock-mng",
      "bizState":"ACTIVATED",
      "bizVersion":"1.0.0",
      "mainClass":"embed main",
      "webContextPath":"/"
    }
  ]
}

Query Health

  • URL: 127.0.0.1:1238/health

Query All Health Info

  • input sample:
{}
  • output sample:
{
  "code": "SUCCESS",
  "data": {
    "healthData": {
      "jvm": {
        "max non heap memory(M)": -9.5367431640625E-7,
        "java version": "1.8.0_331",
        "max memory(M)": 885.5,
        "max heap memory(M)": 885.5,
        "used heap memory(M)": 137.14127349853516,
        "used non heap memory(M)": 62.54662322998047,
        "loaded class count": 10063,
        "init non heap memory(M)": 2.4375,
        "total memory(M)": 174.5,
        "free memory(M)": 37.358726501464844,
        "unload class count": 0,
        "total class count": 10063,
        "committed heap memory(M)": 174.5,
        "java home": "****\\jre",
        "init heap memory(M)": 64.0,
        "committed non heap memory(M)": 66.203125,
        "run time(s)": 34.432
      },
      "cpu": {
        "count": 4,
        "total used (%)": 131749.0,
        "type": "****",
        "user used (%)": 9.926451054656962,
        "free (%)": 81.46475495070172,
        "system used (%)": 6.249762806548817
      },
      "masterBizInfo": {
        "webContextPath": "/",
        "bizName": "bookstore-manager",
        "bizState": "ACTIVATED",
        "bizVersion": "1.0.0"
      },
      "pluginListInfo": [
        {
          "artifactId": "web-ark-plugin",
          "groupId": "com.alipay.sofa",
          "pluginActivator": "com.alipay.sofa.ark.web.embed.WebPluginActivator",
          "pluginName": "web-ark-plugin",
          "pluginUrl": "file:/****/2.2.3-SNAPSHOT/web-ark-plugin-2.2.3-20230901.090402-2.jar!/",
          "pluginVersion": "2.2.3-SNAPSHOT"
        },
        {
          "artifactId": "runtime-sofa-boot-plugin",
          "groupId": "com.alipay.sofa",
          "pluginActivator": "com.alipay.sofa.runtime.ark.plugin.SofaRuntimeActivator",
          "pluginName": "runtime-sofa-boot-plugin",
          "pluginUrl": "file:/****/runtime-sofa-boot-plugin-3.11.0.jar!/",
          "pluginVersion": "3.11.0"
        }
      ],
      "masterBizHealth": {
        "readinessState": "ACCEPTING_TRAFFIC"
      },
      "bizListInfo": [
        {
          "bizName": "bookstore-manager",
          "bizState": "ACTIVATED",
          "bizVersion": "1.0.0",
          "webContextPath": "/"
        }
      ]
    }
  }
}

Query System Health Info

  • input sample:
{
  "type": "system",
  // [OPTIONAL] if metrics is null -> query all system health info
  "metrics": ["cpu", "jvm"]
}
  • output sample:
{
  "code": "SUCCESS",
  "data": {
    "healthData": {
      "jvm": {...},
      "cpu": {...},
//      "masterBizHealth": {...}
    }
  }
}

Query Biz Health Info

  • input sample:
{
  "type": "biz",
  // [OPTIONAL] if moduleName is null and moduleVersion is null -> query all biz
  "moduleName": "bookstore-manager",
  // [OPTIONAL] if moduleVersion is null -> query all biz named moduleName
  "moduleVersion": "1.0.0"
}
  • output sample:
{
  "code": "SUCCESS",
  "data": {
    "healthData": {
      "bizInfo": {
        "bizName": "bookstore-manager",
        "bizState": "ACTIVATED",
        "bizVersion": "1.0.0",
        "webContextPath": "/"
      }
//      "bizListInfo": [
//        {
//          "bizName": "bookstore-manager",
//          "bizState": "ACTIVATED",
//          "bizVersion": "1.0.0",
//          "webContextPath": "/"
//        }
//      ]
    }
  }
}

Query Plugin Health Info

  • input sample:
{
  "type": "plugin",
  // [OPTIONAL] if moduleName is null -> query all biz
  "moduleName": "web-ark-plugin"
}
  • output sample:
{
  "code": "SUCCESS",
  "data": {
    "healthData": {
      "pluginListInfo": [
        {
          "artifactId": "web-ark-plugin",
          "groupId": "com.alipay.sofa",
          "pluginActivator": "com.alipay.sofa.ark.web.embed.WebPluginActivator",
          "pluginName": "web-ark-plugin",
          "pluginUrl": "file:/****/web-ark-plugin-2.2.3-20230901.090402-2.jar!/",
          "pluginVersion": "2.2.3-SNAPSHOT"
        }
      ]
    }
  }
}

Query Health Using Endpoint

Use the endpoint for K8S modules to get health info.

default config

  • endpoints exposure include: *
  • endpoints base path: /
  • endpoints server port: 8080

http code result

  • HEALTHY(200): returned when all health indicators are healthy
  • UNHEALTHY(400): returned as soon as any health indicator is unhealthy
  • ENDPOINT_NOT_FOUND(404): the endpoint path or params were not found
  • ENDPOINT_PROCESS_INTERNAL_ERROR(500): the health-check process threw an error

query all health info

  • url: 127.0.0.1:8080/arkletHealth
  • method: GET
  • output sample
{   
    "healthy": true,
    "code": 200,    
    "codeType": "HEALTHY",    
    "data": {        
        "jvm": {...},        
        "masterBizHealth": {...},        
        "cpu": {...},        
        "masterBizInfo": {...},        
        "bizListInfo": [...],        
        "pluginListInfo": [...]    
    }
}  
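
As an illustration, this GET endpoint can be polled with a minimal JDK 11+ HttpClient sketch (assuming the default endpoint port 8080 configured above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ArkletHealthDemo {
    public static void main(String[] args) throws Exception {
        // GET the aggregated health report; HTTP 200 means every indicator is healthy
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:8080/arkletHealth"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}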

query all biz/plugin health info

  • url: 127.0.0.1:8080/arkletHealth/{moduleType} (moduleType must be in [‘biz’, ‘plugin’])
  • method: GET
  • output sample
{   
   "healthy": true,
   "code": 200,    
   "codeType": "HEALTHY",    
   "data": {        
       "bizListInfo": [...],  
       // "pluginListInfo": [...]      
   }
}  

query single biz/plugin health info

  • url: 127.0.0.1:8080/arkletHealth/{moduleType}/{moduleName}/{moduleVersion} (moduleType must be in [‘biz’, ‘plugin’])
  • method: GET
  • output sample
{   
   "healthy": true,
   "code": 200,    
   "codeType": "HEALTHY",    
   "data": {        
       "bizInfo": {...},  
       // "pluginInfo": {...}      
   }
}  

5.5.3 - 6.5.3 Runtime Adaptation or Best Practices for Multi-Module Deployment

5.5.3.1 - 6.5.3.1 Koupleless Multi-Application Governance Patch Management

Koupleless Multi-Application Governance Patch Management

Why Does Koupleless Need Multi-Application Governance Patching?

Koupleless is a multi-application architecture, while traditional middleware may only consider single-application scenarios. In some cases this makes it incompatible with multi-application coexistence, leading to problems such as shared-variable contamination, classloader loading exceptions, and incorrect class checks. When using middleware with Koupleless, we therefore need to patch these potential issues, overriding the original middleware implementation so that open-source middleware becomes compatible with the multi-application mode.

Research on Multi-Application Governance Patching Solutions for Koupleless

In multi-application compatibility governance, we consider not only production deployment but also compatibility with local development (clicking Debug in the IDE) and with unit testing (e.g., @SpringBootTest).


Below is a comparison table of different solutions.

Solution Comparison

| Solution | Access Cost | Maintainability | Deployment Compatibility | IDE Compatibility | Unit Testing Compatibility |
| --- | --- | --- | --- | --- | --- |
| A: Place the patch package dependency at the beginning of the Maven dependencies so that the patch classes are loaded first by the classloader. | Low. Users only need to control the order of Maven dependencies. | Low. Users must ensure the relevant dependencies stay at the front and that the classpath is not passed manually at startup. | Compatible ✅ | Compatible ✅ | Compatible ✅ |
| B: Modify the index file order of Spring Boot build artifacts with a Maven plugin. | Low. Only a package-phase Maven plugin is needed; barely noticeable to users. | Medium. Users must ensure the classpath is not passed manually at startup. | Compatible ✅ | Not compatible ❌. JetBrains IDEs build the CLI command line themselves and pass the classpath according to the Maven dependency order, which may give the adapter a suboptimal loading order. | Not compatible ❌. Unit tests do not go through the repackage phase and do not depend on the classpath.idx file. |
| C: Add a custom Spring Boot JarLauncher starter that controls classloader loading behavior. | High. Users must modify their base startup logic to use Koupleless' custom JarLauncher. | High. The custom launcher can control the class loading order through hooks. | Compatible ✅ | Compatible ✅, but the IDE must be configured to use the custom launcher. | Not compatible ❌, because unit tests do not go through the launcher logic. |
| D: Enhance the base classloader to ensure patch classes are searched for and loaded first. | High. Users must initialize the enhancement code; this mode also affects sofa-ark's recognition logic for the master biz and requires refactoring to support. | High. The base classloader can programmatically control the loading order of dependencies. | Compatible ✅ | Compatible ✅ | Compatible ✅ |
| E: Configure a Maven plugin to copy the patch class code into the current project, whose files are loaded first. | High. Maven's copy plugin cannot use wildcards, so each new adapter requires extra configuration. | High. Once configured, the patch dependencies are guaranteed to load first (classes of the local project load first). | Compatible ✅ | Compatible ✅ | Not compatible ❌, because unit tests do not go through the package phase, where the Maven copy plugin takes effect. |

Conclusion

Overall, fully transparent access (zero user perception) is not achievable, and each method requires minor business refactoring. Among the solutions, A and D achieve full compatibility. However, solution A requires no business code changes and does not intrude into runtime logic; it only requires users to add the following dependency at the beginning of their Maven dependencies:

<dependency>
  <groupId>com.alipay.koupleless</groupId>
  <artifactId>koupleless-base-starter</artifactId>
  <version>${koupleless.runtime.version}</version>
  <type>pom</type>
</dependency>

Therefore, we will adopt solution A.
If you have more ideas or input, welcome to discuss them with the open-source community!

5.5.3.2 - 6.5.3.2 Introduction to Multi-Module Integration Testing Framework

This article focuses on the design concepts, implementation details, and usage of the multi-module integration testing framework.

Why Do We Need a Multi-Module Integration Testing Framework?

Assuming there is no integration testing framework, when developers want to verify whether the deployment process of multiple modules behaves correctly, they need to follow these steps:

  1. Build the base and JAR packages for all modules.
  2. Start the base process.
  3. Install the module JAR packages into the base.
  4. Invoke HTTP/RPC interfaces.
  5. Verify whether the returned results are correct.

Although the above workflow appears simple, developers face several challenges:

  1. Constantly switching back and forth between the command line and the code.
  2. If the validation results are incorrect, they need to repeatedly modify the code and rebuild + remote debug.
  3. If the app only provides internal methods, they must modify the code to expose interfaces via HTTP/RPC to validate the behavior of the multi-module deployment.

These challenges lead to low efficiency and an unfriendly experience for developers. Therefore, we need an integration testing framework to provide a one-stop validation experience.

What Problems Should the Integration Testing Framework Solve?

The integration testing framework needs to simulate the behavior of multi-module deployment in the same process with a single start. It should also allow developers to directly call code from the modules/base to verify module behavior.

The framework needs to solve the following technical problems:

  1. Simulate the startup of the base Spring Boot application.
  2. Simulate the startup of module Spring Boot applications, supporting loading modules directly from dependencies instead of JAR packages.
  3. Simulate the loading of Ark plugins.
  4. Ensure compatibility with Maven’s testing commands.

By default, Sofa-ark loads modules through executable JAR packages and Ark plugins. Therefore, developers would need to rebuild JAR packages or publish to repositories during each validation, reducing validation efficiency. The framework needs to intercept the corresponding loading behavior and load modules directly from Maven dependencies to simulate multi-module deployment.

The code that accomplishes these tasks includes:

  1. TestBizClassLoader: Simulates loading the biz module and is a derived class of the original BizClassLoader, solving the problem of loading classes on demand to different ClassLoaders within the same JAR package.
  2. TestBiz: Simulates starting the biz module and is a derived class of the original Biz, encapsulating the logic for initializing TestBizClassLoader.
  3. TestBootstrap: Initializes ArkContainer and loads Ark plugins.
  4. TestClassLoaderHook: Controls the loading order of resources via a hook mechanism. For instance, application.properties in the biz JAR package will be loaded first.
  5. BaseClassLoader: Simulates normal base ClassLoader behavior and is compatible with testing frameworks like Surefire.
  6. TestMultiSpringApplication: Simulates the startup behavior of multi-module Spring Boot applications.

How to Use the Integration Testing Framework?

Start Both Base and Module Spring Boot Applications in the Same Process

Sample code is as follows:

public void demo() {
    new TestMultiSpringApplication(MultiSpringTestConfig
            .builder()
            .baseConfig(BaseSpringTestConfig
                    .builder()
                    .mainClass(BaseApplication.class) // Base startup class
                    .build())
            .bizConfigs(Lists.newArrayList(
                    BizSpringTestConfig
                            .builder()
                            .bizName("biz1") // Name of module 1
                            .mainClass(Biz1Application.class) // Startup class of module 1
                            .build(),
                    BizSpringTestConfig
                            .builder()
                            .bizName("biz2") // Name of module 2
                            .mainClass(Biz2Application.class) // Startup class of module 2
                            .build()
            ))
            .build()
    ).run();
}

Write Assert Logic

You can retrieve module services using the following method:

public void getService() {
    StrategyService strategyService = SpringServiceFinder.
            getModuleService(
                    "biz1-web-single-host",
                    "0.0.1-SNAPSHOT",
                    "strategyServiceImpl",
                    StrategyService.class
            );
}

After obtaining the service, you can write assert logic.
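
For example, a minimal JUnit 5-style check (the service lookup mirrors the snippet above; any further assertions against StrategyService methods depend on your module's actual API):

import static org.junit.jupiter.api.Assertions.assertNotNull;

public void assertModuleService() {
    StrategyService strategyService = SpringServiceFinder.getModuleService(
            "biz1-web-single-host", "0.0.1-SNAPSHOT",
            "strategyServiceImpl", StrategyService.class);
    // The module should have exported the bean; extend this with assertions
    // against the service's real behavior as needed.
    assertNotNull(strategyService);
}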

Reference Use Cases

For more comprehensive use cases, you can refer to Tomcat Multi-Module Integration Testing Cases.

5.5.3.3 - 6.5.3.3 Adapting to Multi-Module with Dubbo 2.7

Why Adaptation is Needed

Native Dubbo 2.7 cannot support modules publishing their own Dubbo services in multi-module scenarios, which leads to a series of problems during invocation, such as serialization and class loading exceptions.

Multi-Module Adaptation Solutions

The Dubbo 2.7 multi-module adaptation SDK is included automatically when building with koupleless-base-build-plugin. The adapter mainly covers class loading, service publishing, service unloading, service isolation, module-level service management, configuration management, and serialization.

1. AnnotatedBeanDefinitionRegistryUtils Unable to Load Module Classes Using the Base Classloader

com.alibaba.spring.util.AnnotatedBeanDefinitionRegistryUtils#isPresentBean

public static boolean isPresentBean(BeanDefinitionRegistry registry, Class<?> annotatedClass) {
    ...

    //        ClassLoader classLoader = annotatedClass.getClassLoader(); // Original logic
    ClassLoader classLoader = Thread.currentThread().getContextClassLoader();   // Changed to use tccl to load classes

    for (String beanName : beanNames) {
        BeanDefinition beanDefinition = registry.getBeanDefinition(beanName);
        if (beanDefinition instanceof AnnotatedBeanDefinition) {
            ...
            String className = annotationMetadata.getClassName();
            Class<?> targetClass = resolveClassName(className, classLoader);
            ...
        }
    }

    return present;
}

2. Module-Level Service and Configuration Resource Management

  1. com.alipay.sofa.koupleless.support.dubbo.ServerlessServiceRepository Replaces the Native org.apache.dubbo.rpc.model.ServiceRepository

The native ServiceRepository uses the interfaceName as its cache key, so when the base and a module publish services with the same interface but different groups, they cannot be distinguished. The adapter replaces the native service caching model, using the interface Class type as one key and the group-qualified path as another, to support scenarios where the base and modules publish services with the same interface but different groups.

private static ConcurrentMap<Class<?>, ServiceDescriptor> globalClassServices = new ConcurrentHashMap<>();

private static ConcurrentMap<String, ServiceDescriptor>   globalPathServices  = new ConcurrentHashMap<>();
  2. com.alipay.sofa.koupleless.support.dubbo.ServerlessConfigManager Replaces the Native org.apache.dubbo.config.context.ConfigManager

    Adds a classloader-dimension key to the original config, isolating the configurations of different modules by their classloaders.

final Map<ClassLoader, Map<String, Map<String, AbstractConfig>>> globalConfigsCache = new HashMap<>();

public void addConfig(AbstractConfig config, boolean unique) {
    ...
    write(() -> {
        Map<String, AbstractConfig> configsMap = getCurrentConfigsCache().computeIfAbsent(getTagName(config.getClass()), type -> newMap());
        addIfAbsent(config, configsMap, unique);
    });
}
private Map<String, Map<String, AbstractConfig>> getCurrentConfigsCache() {
    ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();   // Based on the current thread classloader to isolate different configuration caches
    globalConfigsCache.computeIfAbsent(contextClassLoader, k -> new HashMap<>());
    return globalConfigsCache.get(contextClassLoader);
}

ServerlessServiceRepository and ServerlessConfigManager both depend on the dubbo ExtensionLoader’s extension mechanism to replace the original logic. For specific principles, please refer to org.apache.dubbo.common.extension.ExtensionLoader.createExtension.

3. Module-Level Service Install and Uninstall

Override DubboBootstrapApplicationListener to prevent the native Dubbo bootstrap behavior from being triggered when modules publish or uninstall services:

  • com.alipay.sofa.koupleless.support.dubbo.BizDubboBootstrapListener

Native Dubbo 2.7 only publishes Dubbo services after the base application has started, so it cannot support module-level service publishing in a multi-module setup. Ark listens for module startup events and manually calls Dubbo to publish module-level services.

private void onContextRefreshedEvent(ContextRefreshedEvent event) {
  try {
      ReflectionUtils.getMethod(DubboBootstrap.class, "exportServices")
          .invoke(dubboBootstrap);
      ReflectionUtils.getMethod(DubboBootstrap.class, "referServices").invoke(dubboBootstrap);
  } catch (Exception e) {
      
  }
}

Native Dubbo 2.7 unexports all services in the JVM when a module is uninstalled, which would also unexport the services of the base and other modules. Ark listens for the module's Spring context close event and manually unexports only the Dubbo services of the current module, retaining those of the base and other modules.

private void onContextClosedEvent(ContextClosedEvent event) {
        // DubboBootstrap.unexportServices unexports all services, only need to unexport services of the current biz
        Map<String, ServiceConfigBase<?>> exportedServices = ReflectionUtils.getField(dubboBootstrap, DubboBootstrap.class, "exportedServices");

        Set<String> bizUnexportServices = new HashSet<>();
        for (Map.Entry<String, ServiceConfigBase<?>> entry : exportedServices.entrySet()) {
            String serviceKey = entry.getKey();
            ServiceConfigBase<?> sc = entry.getValue();
            if (sc.getRef().getClass().getClassLoader() == Thread.currentThread().getContextClassLoader()) {   // Distinguish module services based on the classloader of ref service implementation
                bizUnexportServices.add(serviceKey);
                configManager.removeConfig(sc);   // Remove service configuration from configManager
                sc.unexport();   // Unexport service
                serviceRepository.unregisterService(sc.getUniqueServiceName());   // Remove from serviceRepository
            }
        }
        for (String service : bizUnexportServices) {
            exportedServices.remove(service);    // Remove service from DubboBootstrap
        }
    }

4. Service Routing

  • com.alipay.sofa.koupleless.support.dubbo.ConsumerRedefinePathFilter

When a Dubbo service is invoked, the service model (including the interface, parameter, and return types) is looked up from the ServiceRepository by path, in order to perform the invocation and serialize parameters and return values. Native Dubbo 2.7 uses the interfaceName as the path to find the service model, which cannot support the base and modules publishing services with the same interface. Ark adds group information to the path on the consumer side through a custom filter, so that the provider side can route the service correctly.

public Result invoke(Invoker<?> invoker, Invocation invocation) throws RpcException {
  if (invocation instanceof RpcInvocation) {
      RpcInvocation rpcInvocation = (RpcInvocation) invocation;
      // Original path is interfaceName, such as com.alipay.sofa.rpc.dubbo27.model.DemoService
      // Modified path is serviceUniqueName, such as masterBiz/com.alipay.sofa.rpc.dubbo27.model.DemoService
      rpcInvocation.setAttachment("interface", rpcInvocation.getTargetServiceUniqueName());
  }
  return invoker.invoke(invocation);
}

5. Serialization

  • org.apache.dubbo.common.serialize.java.JavaSerialization
  • org.apache.dubbo.common.serialize.java.ClassLoaderJavaObjectInput
  • org.apache.dubbo.common.serialize.java.ClassLoaderObjectInputStream

When the JavaSerialization serializer is obtained, ClassLoaderJavaObjectInput is used instead of the original JavaObjectInput to carry the provider-side service classloader information.

// org.apache.dubbo.common.serialize.java.JavaSerialization
public ObjectInput deserialize(URL url, InputStream is) throws IOException {
    return new ClassLoaderJavaObjectInput(new ClassLoaderObjectInputStream(null, is));   // Use ClassLoaderJavaObjectInput instead of the original JavaObjectInput, pass provider-side service classloader information
}

// org.apache.dubbo.common.serialize.java.ClassLoaderObjectInputStream
private ClassLoader classLoader;

public ClassLoaderObjectInputStream(final ClassLoader classLoader, final InputStream inputStream) {
  super(inputStream);
  this.classLoader = classLoader;
}
  • org.apache.dubbo.rpc.protocol.dubbo.DecodeableRpcInvocation Provider-side deserialization of invocation parameters
// patch begin
if (in instanceof ClassLoaderJavaObjectInput) {
   InputStream is = ((ClassLoaderJavaObjectInput) in).getInputStream();
   if (is instanceof ClassLoaderObjectInputStream) {
      ClassLoader cl = serviceDescriptor.getServiceInterfaceClass().getClassLoader();  // Set provider-side service classloader information to ClassLoaderObjectInputStream
      ((ClassLoaderObjectInputStream) is).setClassLoader(cl);
   }
}
// patch end
  • org.apache.dubbo.rpc.protocol.dubbo.DecodeableRpcResult Client-side deserialization of return values
// patch begin
if (in instanceof ClassLoaderJavaObjectInput) {
   InputStream is = ((ClassLoaderJavaObjectInput) in).getInputStream();
   if (is instanceof ClassLoaderObjectInputStream) {
       ClassLoader cl = invocation.getInvoker().getInterface().getClassLoader(); // Set consumer-side service classloader information to ClassLoaderObjectInputStream
       ((ClassLoaderObjectInputStream) is).setClassLoader(cl);
   }
}
// patch end

Example of Using Dubbo 2.7 in a Multi-Module Environment

Example of Using Dubbo 2.7 in a Multi-Module Environment

dubbo2.7 Multi-Module Adaptation SDK Source Code

5.5.3.4 - 6.5.3.4 Best Practices for Multi-Module with ehcache

Best practices for implementing multi-module architecture with ehcache in Koupleless.

Why Best Practices are Needed

CacheManager initialization involves shared static variables, which cause problems when multiple applications use the same Ehcache name: their caches overwrite one another.

Requirements for Best Practices

  1. The base must include Ehcache, and modules should reuse it from the base.

In Spring Boot, Ehcache initialization must go through the EhCacheCacheConfiguration defined by Spring, which belongs to Spring and is usually placed in the base.

During bean initialization, the condition check triggers class verification: if net.sf.ehcache.CacheManager is found, a Java native method searches for the net.sf.ehcache.CacheManager class in the ClassLoader to which the class belongs. The base must therefore include this dependency; otherwise a ClassNotFound error occurs.

  2. Modules should exclude their bundled Ehcache (set the scope to provided or use the automatic slimming capability).

When a module uses its own imported Ehcache, it should in theory avoid sharing the static variables of the base CacheManager class and thus prevent potential errors. In our actual testing, however, an issue arose during module installation when initializing the EhCacheCacheManager: creating the new object required obtaining the CacheManager associated with the object's class, which should be the base's CacheManager. Importantly, we cannot include the CacheManager dependency in the module's compilation, as a single class imported by multiple different ClassLoaders would cause conflicts.

Therefore, all loading should be delegated to the base module.

Best Practice Approach

  1. Delegate module Ehcache slimming to the base.
  2. If multiple modules have the same cacheName, modify cacheName to be different.
  3. If you don’t want to change the code to modify the cache name, you can dynamically replace the cacheName with a packaging plugin, as shown below.
 <plugin>
    <groupId>com.google.code.maven-replacer-plugin</groupId>
    <artifactId>replacer</artifactId>
    <version>1.5.3</version>
    <executions>
        <!-- Perform replacement before packaging -->
        <execution>
            <phase>prepare-package</phase>
            <goals>
                <goal>replace</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <!-- Automatically recognize the project's target folder -->
        <basedir>${build.directory}</basedir>
        <!-- Directory rules for replacement files -->
        <includes>
            <include>classes/j2cache/*.properties</include>
        </includes>
        <replacements>
            <replacement>
                <token>ehcache.ehcache.name=f6-cache</token>
                <value>ehcache.ehcache.name=f6-${parent.artifactId}-cache</value>
            </replacement>

        </replacements>
    </configuration>
</plugin>
  4. Set the shared property of the FactoryBean to false.
@Bean
public EhCacheManagerFactoryBean ehCacheManagerFactoryBean() {
    EhCacheManagerFactoryBean factoryBean = new EhCacheManagerFactoryBean();

    // Set the factoryBean's shared property to false so the module does not
    // reuse the static CacheManager instance held by the base
    factoryBean.setShared(false);
    factoryBean.setCacheManagerName("biz1EhcacheCacheManager");
    factoryBean.setConfigLocation(new ClassPathResource("ehcache.xml"));
    return factoryBean;
}

Otherwise, execution enters the logic that initializes the static instance variable of CacheManager. If that variable already has a value and shared is true in the module, the module will reuse the base CacheManager's instance, leading to errors.

Example of Best Practices

For an example project, please refer here.

5.5.3.5 - 6.5.3.5 Logback's adaptation for multi-module environments

Why Adaptation is Needed

The native logback framework only provides a default logging context, making it impossible to isolate log configurations between different modules. Consequently, in scenarios involving deploying multiple modules together, modules can only utilize the logging configuration of the base application, causing inconvenience when logging from individual modules.

Multi-Module Adaptation Solution

Logback supports native extension ch.qos.logback.classic.selector.ContextSelector, which allows for a custom context selector. Ark provides a default implementation of ContextSelector to isolate LoggerContext for multiple modules (refer to com.alipay.sofa.ark.common.adapter.ArkLogbackContextSelector). Each module uses its independent LoggerContext, ensuring log configuration isolation.

During startup, the log configuration and context initialization are handled by Spring’s log system LogbackLoggingSystem.

Specify the context selector as com.alipay.sofa.ark.common.adapter.ArkLogbackContextSelector and add the JVM startup parameter:

-Dlogback.ContextSelector=com.alipay.sofa.ark.common.adapter.ArkLogbackContextSelector

When using SLF4J as the logging facade with logback as the logging implementation framework, during the base application startup, when the SLF4J static binding is first performed, the specific ContextSelector is initialized. If no custom context selector is specified, the DefaultContextSelector will be used. However, when we specify a context selector, the ArkLogbackContextSelector will be initialized as the context selector.

ch.qos.logback.classic.util.ContextSelectorStaticBinder.init

public void init(LoggerContext defaultLoggerContext, Object key) {
  ...

  String contextSelectorStr = OptionHelper.getSystemProperty(ClassicConstants.LOGBACK_CONTEXT_SELECTOR);
  if (contextSelectorStr == null) {
      contextSelector = new DefaultContextSelector(defaultLoggerContext);
  } else if (contextSelectorStr.equals("JNDI")) {
      // if jndi is specified, let's use the appropriate class
      contextSelector = new ContextJNDISelector(defaultLoggerContext);
  } else {
      contextSelector = dynamicalContextSelector(defaultLoggerContext, contextSelectorStr);
  }
}

static ContextSelector dynamicalContextSelector(LoggerContext defaultLoggerContext, String contextSelectorStr) {
  Class<?> contextSelectorClass = Loader.loadClass(contextSelectorStr);
  Constructor cons = contextSelectorClass.getConstructor(new Class[] { LoggerContext.class });
  return (ContextSelector) cons.newInstance(defaultLoggerContext);
}

In the ArkLogbackContextSelector, we use the ClassLoader to differentiate modules and cache each module's LoggerContext with its ClassLoader as the key.

When the LoggerContext is obtained by ClassLoader during Spring startup, the logging context is initialized via Spring's logging system: org.springframework.boot.logging.logback.LogbackLoggingSystem.getLoggerContext is called, which returns each module's own LoggerContext through Ark's custom context selector, com.alipay.sofa.ark.common.adapter.ArkLogbackContextSelector.getLoggerContext().

public LoggerContext getLoggerContext() {
  ClassLoader classLoader = this.findClassLoader();
  if (classLoader == null) {
      return defaultLoggerContext;
  }
  return getContext(classLoader);
}

When obtaining the classloader, the thread context classloader is retrieved first; if it is a module's classloader, it is returned directly. If the TCCL (thread context classloader) is not a module classloader, the call stack of Class objects is traversed via the ClassContext, and when a module classloader is encountered in the call stack it is returned directly. This accommodates scenarios where the TCCL is not guaranteed to be the module's classloader: for example, when module code writes a log and the current class was loaded by the module's own classloader, traversing the ClassContext eventually yields the module's classloader, ensuring the module-specific LoggerContext is used.

private ClassLoader findClassLoader() {
  ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
  if (classLoader != null && CONTAINER_CLASS_LOADER.equals(classLoader.getClass().getName())) {
      return null;
  }
  if (classLoader != null && BIZ_CLASS_LOADER.equals(classLoader.getClass().getName())) {
      return classLoader;
  }

  Class<?>[] context = new SecurityManager() {
      @Override
      public Class<?>[] getClassContext() {
          return super.getClassContext();
      }
  }.getClassContext();
  if (context == null || context.length == 0) {
      return null;
  }
  for (Class<?> cls : context) {
      if (cls.getClassLoader() != null
          && BIZ_CLASS_LOADER.equals(cls.getClassLoader().getClass().getName())) {
          return cls.getClassLoader();
      }
  }

  return null;
}

Once the appropriate ClassLoader is obtained, different LoggerContext instances are selected. All module contexts are cached in com.alipay.sofa.ark.common.adapter.ArkLogbackContextSelector.CLASS_LOADER_LOGGER_CONTEXT with the ClassLoader as the key.

private LoggerContext getContext(ClassLoader cls) {
  LoggerContext loggerContext = CLASS_LOADER_LOGGER_CONTEXT.get(cls);
  if (null == loggerContext) {
      synchronized (ArkLogbackContextSelector.class) {
          loggerContext = CLASS_LOADER_LOGGER_CONTEXT.get(cls);
          if (null == loggerContext) {
              loggerContext = new LoggerContext();
              loggerContext.setName(Integer.toHexString(System.identityHashCode(cls)));
              CLASS_LOADER_LOGGER_CONTEXT.put(cls, loggerContext);
          }
      }
  }
  return loggerContext;
}

Sample Usage of Multi-Module Logback

Sample Usage of Multi-Module Logback

View Source Code of ArkLogbackContextSelector

5.5.3.6 - 6.5.3.6 log4j2 Multi-Module Adaptation

Koupleless log4j2 Multi-Module Adaptation

Why Adaptation is Needed

In its native state, log4j2 does not provide individual log directories for modules in a multi-module environment. Instead, it logs uniformly to the base directory, which makes it challenging to isolate logs and corresponding monitoring for each module. The purpose of this adaptation is to enable each module to have its own independent log directory.

Initialization of log4j2 in Regular Applications

Before Spring starts, log4j2 initializes various logContexts and configurations using default values. During the Spring startup process, it listens for Spring events to finalize initialization. This process involves invoking the Log4j2LoggingSystem.initialize method via org.springframework.boot.context.logging.LoggingApplicationListener.

The method determines whether it has already been initialized based on the loggerContext.

Here, a problem arises in a multi-module environment.

The getLoggerContext method retrieves the LoggerContext based on the classLoader of org.apache.logging.log4j.LogManager. Relying on the classLoader of a specific class to extract the LoggerContext can be unstable in a multi-module setup. This instability arises because some classes in modules can be configured to delegate loading to the base, so when a module starts, it might obtain the LoggerContext from the base. Consequently, if isAlreadyInitialized returns true, the log4j2 logging for the module cannot be further configured based on user configuration files.

If it hasn’t been initialized yet, it enters super.initialize, which involves two tasks:

  1. Retrieving the log configuration file.
  2. Parsing the variable values in the log configuration file.

Both of these tasks may encounter issues in a multi-module setup. Let's first examine how these two steps are completed in a regular application.

Retrieving the Log Configuration File

You can see that the location corresponding to the log configuration file’s URL is obtained through ResourceUtils.getURL. Here, the URL is obtained by retrieving the current thread’s context ClassLoader, which works fine in a multi-module environment (since each module’s startup thread context is already its own ClassLoader).

Parsing Log Configuration Values

The configuration file contains various variables, such as ${bundle:application:logging.file.path} and ${ctx:logging.file.path}:

These variables are parsed in the specific implementation of org.apache.logging.log4j.core.lookup.AbstractLookup, including:

| Variable Syntax | Implementation Class | Behavior |
| --- | --- | --- |
| ${bundle:application:logging.file.path} | org.apache.logging.log4j.core.lookup.ResourceBundleLookup | Locates application.properties based on the ClassLoader of ResourceBundleLookup and reads the values inside. |
| ${ctx:logging.file.path} | org.apache.logging.log4j.core.lookup.ContextMapLookup | Retrieves values stored in the LoggerContext's ThreadContext; the values from application.properties must first be set into the ThreadContext. |

Based on the above analysis, configuring via bundle method might not be feasible in a multi-module setup because ResourceBundleLookup might only exist in the base module, leading to always obtaining application.properties from the base module. Consequently, the logging configuration path of the modules would be the same as that of the base module, causing all module logs to be logged into the base module. Therefore, it needs to be modified to use ContextMapLookup.

Expected Logging in a Multi-Module Consolidation Scenario

Both the base module and individual modules should be able to use independent logging configurations and values, completely isolated from each other. However, due to the potential issues identified in the analysis above, which could prevent module initialization, additional adaptation of log4j2 is required.

Multi-Module Adaptation Points

  1. Ensure that getLoggerContext() retrieves the module’s own LoggerContext.

  2. Adjust the configuration to use ContextMapLookup, so that module logs can pick up the module application name and be written into the module’s own directory.

    a. Set the values from application.properties into the ThreadContext when the module starts.
    b. In the logging configuration, use only the ctx:xxx configuration format.
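A minimal sketch of point (a), assuming the module can reach its Spring Environment at startup (the helper class name is hypothetical; the actual Koupleless adaptation is in the source code referenced below):

import org.apache.logging.log4j.ThreadContext;
import org.springframework.core.env.ConfigurableEnvironment;

public final class ModuleLog4j2ContextFiller {

    // Copy module-level configuration values into log4j2's ThreadContext so that
    // ${ctx:...} lookups resolve against the module, not the base.
    public static void fill(ConfigurableEnvironment environment) {
        ThreadContext.put("spring.application.name",
            environment.getProperty("spring.application.name"));
        ThreadContext.put("logging.file.path",
            environment.getProperty("logging.file.path"));
    }
}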

Module Refactoring Approach

Check the source code for detailed information

5.5.3.7 - 6.5.3.7 Module Using BES

koupleless-adapter-bes

koupleless-adapter-bes is used to adapt the BaoLande (BES) container; the repository address is koupleless-adapter-bes (thanks to community contributor Chen Jian for the contribution).

The project has currently been verified only against BES 9.5.5.004; other versions need to be verified by users themselves, with any necessary adjustments made along the same lines.

If multiple Biz modules do not need to publish services on the same port, you only need to follow the precautions in the dependency installation section below, and there is no need to introduce this project’s dependencies.

Quick Start

1. Install Dependencies

First, make sure that the BES-related dependencies have been imported into your Maven repository. (One key point: because the package structure of BES’s dependency jars conflicts with the recognition mechanism of the Koupleless 2.2.9 project, users need to add the sofa-ark- prefix to BES’s dependency jars themselves; for the specific recognition mechanism, refer to Koupleless’s com.alipay.sofa.ark.container.model.BizModel class.)

The reference import script is as follows:

mv XXX/BES-EMBED/bes-lite-spring-boot-2.x-starter-9.5.5.004.jar XXX/BES-EMBED/sofa-ark-bes-lite-spring-boot-2.x-starter-9.5.5.004.jar
mvn install:install-file -Dfile=XXX/BES-EMBED/sofa-ark-bes-lite-spring-boot-2.x-starter-9.5.5.004.jar -DgroupId=com.bes.besstarter -DartifactId=sofa-ark-bes-lite-spring-boot-2.x-starter -Dversion=9.5.5.004 -Dpackaging=jar
mvn install:install-file -Dfile=XXX/BES-EMBED/bes-gmssl-9.5.5.004.jar -DgroupId=com.bes.besstarter -DartifactId=bes-gmssl -Dversion=9.5.5.004 -Dpackaging=jar
mvn install:install-file -Dfile=XXX/BES-EMBED/bes-jdbcra-9.5.5.004.jar -DgroupId=com.bes.besstarter -DartifactId=bes-jdbcra -Dversion=9.5.5.004 -Dpackaging=jar
mvn install:install-file -Dfile=XXX/BES-EMBED/bes-websocket-9.5.5.004.jar -DgroupId=com.bes.besstarter -DartifactId=bes-websocket -Dversion=9.5.5.004 -Dpackaging=jar

2. Compile and Install the Project Plugin

Enter the bes9-web-adapter directory of the project and execute the mvn install command.

This installs the two modules “bes-web-ark-plugin” and “bes-sofa-ark-springboot-starter”.

3. Use the Project Components

First, upgrade the project to a Koupleless base according to the Koupleless documentation.

Then, replace these dependency coordinates

<dependency>
    <groupId>com.alipay.sofa</groupId>
    <artifactId>web-ark-plugin</artifactId>
    <version>${sofa.ark.version}</version>
</dependency>

with the coordinates of this project

<dependency>
    <groupId>com.alipay.sofa</groupId>
    <artifactId>bes-web-ark-plugin</artifactId>
    <version>2.2.9</version>
</dependency>
<dependency>
   <groupId>com.alipay.sofa</groupId>
   <artifactId>bes-sofa-ark-springboot-starter</artifactId>
   <version>2.2.9</version>
</dependency>

Introduce the BES-related dependencies (the Tomcat dependency also needs to be excluded). A reference configuration is as follows:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.bes.besstarter</groupId>
    <artifactId>sofa-ark-bes-lite-spring-boot-2.x-starter</artifactId>
    <version>9.5.5.004</version>
</dependency>
<dependency>
    <groupId>com.bes.besstarter</groupId>
    <artifactId>bes-gmssl</artifactId>
    <version>9.5.5.004</version>
</dependency>
<dependency>
    <groupId>com.bes.besstarter</groupId>
    <artifactId>bes-jdbcra</artifactId>
    <version>9.5.5.004</version>
</dependency>
<dependency>
    <groupId>com.bes.besstarter</groupId>
    <artifactId>bes-websocket</artifactId>
    <version>9.5.5.004</version>
</dependency>

4. Finished

After completing the above steps, you can start the project in Koupleless using BES.

5.5.3.8 - 6.5.3.8 Module Using Dubbo

Module Interceptor (Filter)

A module can use interceptors defined within itself or those defined on the base.

⚠️Note: Avoid naming module interceptors the same as those on the base. If the names are identical, the interceptors from the base will be used.
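For reference, a module-defined Dubbo filter is registered through Dubbo’s standard SPI mechanism. A minimal sketch (the class and SPI names are hypothetical) that gives the filter a module-specific name to avoid the collision described in the note above:

// src/main/resources/META-INF/dubbo/org.apache.dubbo.rpc.Filter:
//   biz1AuthFilter=com.example.module.Biz1AuthFilter
package com.example.module;

import org.apache.dubbo.common.extension.Activate;
import org.apache.dubbo.rpc.Filter;
import org.apache.dubbo.rpc.Invocation;
import org.apache.dubbo.rpc.Invoker;
import org.apache.dubbo.rpc.Result;
import org.apache.dubbo.rpc.RpcException;

@Activate(group = "provider")
public class Biz1AuthFilter implements Filter {
    @Override
    public Result invoke(Invoker<?> invoker, Invocation invocation) throws RpcException {
        // module-specific logic can run here before the actual invocation
        return invoker.invoke(invocation);
    }
}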

5.5.3.9 - 6.5.3.10 Introduction to the Principle of Class Delegation Loading between Foundation and Modules

Introduction to the principle of class delegation loading between Koupleless foundation and modules

Class Delegation Loading between Multiple Modules

The SOFAArk framework is based on a multi-ClassLoader universal class isolation solution, providing class isolation and application merge deployment capabilities. This document does not introduce the principles and mechanisms of SOFAArk class isolation; instead, it mainly introduces the current best practices of multi-ClassLoader usage.
The ClassLoader model between the foundation and modules deployed on the JVM is shown below:
(Figure: ClassLoader model of the foundation and modules)

Current Class Delegation Loading Mechanism

The classes searched by a module during startup and runtime currently come from two sources: the module itself and the foundation. The ideal priority order of these two sources is to search from the module first, and if not found, then from the foundation. However, there are some exceptions currently:

  1. A whitelist is defined, and dependencies within the whitelist are forced to use dependencies in the foundation.
  2. The module can scan all classes in the foundation:
    • Advantage: The module can introduce fewer dependencies.
    • Disadvantage: The module will scan classes that do not exist in the module code, such as some AutoConfigurations. During initialization, errors may occur due to the inability to scan corresponding resources.
  3. The module cannot scan any resources in the foundation:
    • Advantage: It will not initialize the same beans as the foundation repeatedly.
    • Disadvantage: If the module needs resources from the foundation to start, errors will occur because those resources cannot be found, unless the dependency is explicitly introduced in the module (i.e., its Maven scope is not set to provided).
  4. When the module calls the foundation, some internal processing passes class names from the module to the foundation. If the foundation looks these classes up directly in the foundation ClassLoader, it will not find them, because delegation is one-way: the module can delegate to the foundation, but lookups initiated from the foundation do not search the module.

Points to Note When Using

When a module needs to upgrade the dependencies delegated to the foundation, the foundation needs to be upgraded first, and then the module can be upgraded.

Best Practices for Class Delegation

The principle of class delegation loading is that middleware-related dependencies need to be loaded and executed in the same ClassLoader. There are two best practices to achieve this:

Mandatory Delegation Loading

Since middleware-related dependencies generally need to be loaded and executed in the same ClassLoader, we specify a whitelist of middleware dependencies and force these dependencies to be delegated to the foundation for loading.

Usage

Add the configuration sofa.ark.plugin.export.class.enable=true to application.properties.

Advantages

Module developers do not need to be aware of which dependencies belong to the middleware that needs to be loaded by the same ClassLoader.

Disadvantages

The whitelist of dependencies to be forcibly delegated must be maintained. If an entry is missing, the foundation needs to be updated, and important upgrades require rolling the change out to all foundations.

Custom Delegation Loading

In the module’s pom, set the scope of a dependency to provided to actively specify which dependencies are delegated to the foundation for loading. By slimming down the module, dependencies duplicated with the foundation are delegated to the foundation, and middleware dependencies are pre-deployed in the foundation (optional: even if no module uses them yet, they can be introduced in advance so that later modules can use them without redeploying the foundation). Specifically:

  1. The foundation tries to precipitate common logic and dependencies, especially those related to middleware named xxx-alipay-sofa-boot-starter.
  2. Pre-deploy some common dependencies in the foundation (optional).
  3. If a dependency in the module is already defined in the foundation, delegate it to the foundation as much as possible; this makes the module lighter (tools for automatic module slimming are provided). There are two ways for the module to delegate to the foundation:
    1. Set the scope of the dependency to provided; use mvn dependency:tree to check whether the same dependency is still pulled in elsewhere with compile scope, and set every place that references it to provided.
    2. Set excludeGroupIds or excludeArtifactIds in the sofa-ark-maven-plugin biz packaging plugin.
            <plugin>
                <groupId>com.alipay.sofa</groupId>
                <artifactId>sofa-ark-maven-plugin</artifactId>
                <configuration> 
                    <excludeGroupIds>io.netty,org.apache.commons,......</excludeGroupIds>
                    <excludeArtifactIds>validation-api,fastjson,hessian,slf4j-api,junit,velocity,......</excludeArtifactIds>
                    <declaredMode>true</declaredMode>
                </configuration>
            </plugin>

Since it is hard to ensure that every declaration is set to provided scope, it is recommended to use method 2.b, where the exclusions only need to be specified once.

  4. Only dependencies declared by the module can be delegated to the foundation for loading.

During module startup, the Spring framework performs some scanning logic. If these scans are not restricted, they will search the resources of both the module and the foundation, causing some modules to attempt to initialize functionality they clearly do not need, resulting in errors. Since SOFAArk 2.0.3, the module declaredMode has been added to ensure that only dependencies declared within the module can be delegated to the foundation for loading. Simply add <declaredMode>true</declaredMode> to the module’s packaging plugin configuration.

Advantages

  • No need to maintain a forced-loading list for plugins. When a dependency that must be loaded by the same ClassLoader is not yet delegated uniformly, it can be fixed by modifying the module alone, without redeploying the foundation (unless the foundation genuinely lacks the dependency).

Disadvantages

  • Strong dependency on slimming down modules.

Comparison and Summary

| Approach | Dependency Missing Investigation Cost | Repair Cost | Module Refactoring Cost | Maintenance Cost |
| --- | --- | --- | --- | --- |
| Forced Loading | Moderate | Update the plugin and deploy the foundation: high | Low | High |
| Custom Delegation | Moderate | Update module dependencies; if the foundation lacks the dependency, update and deploy the foundation: moderate | High | Low |
| Custom Delegation + Foundation Preloaded Dependencies + Module Slimming | Moderate | Update module dependencies and set them to provided: low | Low | Low |

Conclusion: Recommend Custom Delegation Loading Method

  1. Module custom delegation loading + module slimming.
  2. Module enabling declaredMode.
  3. Preload dependencies in the base.

declaredMode Activation Procedure

Activation Conditions

The purpose of declaredMode is to allow the module to be deployed onto the foundation. Therefore, before enabling it, make sure the module starts successfully on its own locally.
If it is a SOFABoot application that calls foundation services, the corresponding checks can be skipped during local startup by adding the following two parameters to the module’s application.properties (plain SpringBoot applications do not need them):

# If it is SOFABoot, then:
# Configure health check to skip JVM service check
com.alipay.sofa.boot.skip-jvm-reference-health-check=true
# Ignore unresolved placeholders
com.alipay.sofa.ignore.unresolvable.placeholders=true

Activation Method

Add the following configuration to the module’s packaging plugin:
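As described in the class delegation section above, this is the declaredMode switch of the sofa-ark-maven-plugin:

<plugin>
    <groupId>com.alipay.sofa</groupId>
    <artifactId>sofa-ark-maven-plugin</artifactId>
    <configuration>
        <declaredMode>true</declaredMode>
    </configuration>
</plugin>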

Side Effects After Activation

If the dependencies delegated to the foundation by the module include published services, then the foundation and the module will publish two copies simultaneously.


5.5.3.10 - 6.3.5.11 What happens if a module independently introduces part of the SpringBoot framework?

What happens if a module independently introduces part of the SpringBoot framework in Koupleless?

The multi-module runtime logic, such as some Spring Listeners, is introduced and loaded in the base. If the module starts entirely with its own SpringBoot, some class conversion or assignability checks may fail. For example:

CreateSpringFactoriesInstances

Here name = 'com.alipay.sofa.ark.springboot.listener.ArkApplicationStartListener', and ClassUtils.forName obtains this class from the base ClassLoader.
However, the expected type is loaded when the module starts, which means it is loaded by the module’s BizClassLoader.
The isAssignable check performed at this point therefore fails with an error:

com.alipay.sofa.koupleless.plugin.spring.ServerlessApplicationListener is not assignable to interface org.springframework.context.ApplicationListener

So the module framework part needs to be delegated to the base to load.
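The failure mode can be reproduced outside Koupleless with a minimal sketch (assuming a directory mod/classes that contains the module’s own copy of Spring; the path and class names here are illustrative):

import java.net.URL;
import java.net.URLClassLoader;

public class ClassIdentityDemo {
    public static void main(String[] args) throws Exception {
        ClassLoader base = ClassIdentityDemo.class.getClassLoader();
        // parent = null: this loader does not delegate to the base, like a module
        // that ships its own SpringBoot instead of delegating the framework to the base
        URLClassLoader module = new URLClassLoader(new URL[] { new URL("file:mod/classes/") }, null);
        Class<?> fromBase = base.loadClass("org.springframework.context.ApplicationListener");
        Class<?> fromModule = module.loadClass("org.springframework.context.ApplicationListener");
        // false: same class name, but different defining ClassLoaders
        System.out.println(fromBase.isAssignableFrom(fromModule));
        module.close();
    }
}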



5.5.4 - 6.5.4 Module Split Tool

5.5.4.1 - 6.5.4.1 Semi-Automated Split Tool User Guide

Background

When extracting the Koupleless module from a large monolithic SpringBoot application, users face high learning and trial-and-error costs. Users need to analyze from the service entrance which classes to split into the module, then modify the module according to the Koupleless module coding method.

To reduce these learning and trial-and-error costs, the KouplelessIDE plugin provides semi-automated splitting capabilities: dependency analysis and automated modification.

Quick Start

1. Install Plugin

Install the KouplelessIDE plugin from the IDEA plugin marketplace:

2. Configure IDEA

Ensure that IDEA -> Preferences -> Builder -> Compiler’s “User-local build process heap size” is at least 4096

3. Select Module

Step one: Open the SpringBoot application that needs splitting with IDEA, on the right panel open ServerlessSplit

Step two: Select the splitting method as needed, click “Confirm and Collapse”

4. Dependency Analysis

During splitting, it is necessary to analyze the dependencies between classes and Beans. The plugin allows for the visualization of dependency relationships, and it is up to the business side to decide whether a class should be split into a module.

Step one: Click to activate

Step two: Drag the service entry to the module, supporting cross-level dragging

Dragging result:

Step three: Drag the “Files for Analysis”, click to analyse dependencies, view Class/Bean dependencies as shown below:

Where the icons represent:

| Icon | Meaning | Required Action |
| --- | --- | --- |
| in-module-icon.jpg | Already in module | No action required |
| can-move-to-module.jpg | Can be moved to module | Drag to module (can-move-to-module-action.jpg) |
| recommend-to-analyse-icon.jpg | Recommended to analyze dependency | Drag to analyze (recommend-to-analyse-action.jpg) |
| warning-icon.jpg | Should not be moved to module | Hover to view dependency details (warning-icon.jpg) |

Step four: Follow the prompts, through dragging, stepwise analyze, import the necessary module files

5. Detection

Click on “Preliminary Detection”, which will prompt the user about possible issues with this split, and which middleware might require manual intervention.

Open the lower sidebar in KouplelessIDE to view the prompts.

6. Splitting

Click to start the splitting.

Open the lower sidebar in KouplelessIDE to view the prompts.

5.5.4.2 - 6.5.4.2 Is it too difficult to collaborate on developing a monolithic application? Koupleless brings Split Plugin to help you streamline and improve the efficiency of collaborative development!

Background

Is the collaboration efficiency of your enterprise application low?

It takes ten minutes to compile and deploy the code even though only one line is changed;

When multiple developers work on one codebase, they frequently contend for resources and overwrite each other’s changes during joint debugging, ending up waiting on one another to deploy…

As the project code gradually expands and the business develops, the problems of code coupling, release coupling, and resource coupling are increasingly serious, and the development efficiency keeps decreasing.

How to solve this? Try splitting the single SpringBoot application into multiple SpringBoot applications! After the split, the applications can be developed in parallel without interfering with each other. In the Koupleless mode, the business can split a SpringBoot application into a base and multiple Koupleless modules (each Koupleless module is also a SpringBoot application).

split-example.jpg

🙌 Scroll down to the “Koupleless Split Plugin Solution” section to watch the demonstration video of splitting a monolithic application!

Key Challenges

There are three key challenges in splitting multiple Springboot applications from a single one:

First, before splitting, a complex monolithic application has high code coupling, intricate dependency relationships, and a complex project structure, which makes it difficult to analyze the coupling between files, let alone split out sub-applications. The problem of analyzing file dependencies within the complex monolith therefore has to be solved first.

Second, during the split itself, the operation is cumbersome and time-consuming, and users have to analyze dependency relationships as they split, which places high demands on them. The interaction cost of splitting therefore needs to be reduced.

Third, after the split, the monolith evolves into multiple coexisting applications, and its coding model changes: Bean invocation shifts from in-application calls to cross-application calls, and special multi-application coding patterns have to be adjusted according to the framework documentation. For example, in Koupleless, to reduce the number of module data source connections, modules reuse the base’s data source in a particular way; learning and applying such adjustments is costly. The evolution of the coding model in the split applications therefore also has to be addressed.

Koupleless Split Plugin Solution

In response to the above three key challenges, the Koupleless IntelliJ IDEA Plugin divides the solution into 3 parts: analysis, interaction, and automated splitting, providing dependency analysis, user-friendly interaction, and automated splitting capabilities, as shown in the following figure:

split-solution.jpg

  • In the analysis, analyze the dependency relationships in the project, including class dependencies and Bean dependencies, to solve the problem of analyzing file dependencies in the complex monolithic application before splitting;
  • In the interaction, visualize the dependency relationships between class files to help users sort out the relationships. At the same time, visualize the module directory structure, allowing users to decide which module files to split by dragging and dropping, thus reducing the user interaction cost during splitting;
  • In the automated splitting, the plugin will build the modules and modify the code according to the special multi-application coding, solving the problem of the evolution of coding modes in multiple applications after splitting.

🙌 Here is a demonstration video of the semi-automatic splitting with Koupleless, which will help you better understand how the plugin provides assistance in analysis, interaction, and automated splitting.

Example of Understanding the Advantages of Koupleless Solution

Suppose a business needs to split the system-related code into a module while keeping the common capabilities in the base. Here we take QuartzJobController, the system’s entry service, as an example.

Step 1: Analyze Project File Dependencies

First, we will analyze which classes and beans QuartzJobController depends on.

Method 1: Using IntelliJ IDEA Ultimate, perform bean and class analysis on the controller to obtain the following bean dependency diagram and class dependency diagram.

  • Advantage: Comprehensive analysis with the help of IntelliJ IDEA Ultimate
  • Disadvantage: Requires analysis of each class file, and the bean dependency diagram may not be very readable.

Method 2: Analyze manually

When class A depends on classes B, C, D, … N, separating it requires analyzing, for each dependency, whether it is also depended on by other classes and whether it can be moved into the module.

  • Advantage: Intuitive
  • Disadvantage: When class A has many dependencies, the analysis must be done recursively by hand.

Method 3: Use the Koupleless assistant tool for easy analysis! Select any class file you want to analyze, click “Analyze Dependencies,” and the plugin will help you analyze. It not only analyzes the classes and beans that the class file depends on, but also suggests which classes can be separated out and which cannot.

For example, when the selected module includes QuartzJobController, QuartzJobService, and QuartzJobServiceImpl, the dependency of QuartzJobController on classes and beans is as shown in the following diagram:

The dependent classes/beans of QuartzJobController are divided into four categories: already in the module, can be moved into the module, suggested to analyze the dependency relationship, and not recommended to be moved into the module.

  • If it is in the module, it is marked as green “already in the module,” such as QuartzJobService and QuartzJobServiceImpl.
  • If it is only depended on by module classes, then it is marked as blue “can be moved into the module,” such as JobQueryCriteria.
  • If it is only depended on by one non-module class, then it is marked as yellow “suggested to analyze the dependency relationship,” such as QuartLog.
  • If it is depended on by many non-module classes, then it is marked as red “not recommended to be moved into the module,” such as BadRequestException.

When using the plugin to analyze QuartzJobController and JobQueryCriteria, the dependency tree and the dependency by tree are as follows, corresponding to the analysis above:

  • Advantage: Intuitive, easy to use, and friendly prompts
  • Disadvantage: The plugin only supports the analysis of common bean definitions and class references

Step 2: Separate into Modules & Modify Single Application Coding to Multi-Application Coding Mode

Separate the relevant class files into modules.

Method 1: Copy and paste each file, mentally analyze the bean calls between all module and bases, and modify the code according to the multi-application coding mode.

When separating, questions may arise: Where did I just separate to? Is this file in the module? Do I need to refactor these package names? Are the bean calls cross-application? Where is the documentation for multi-application coding?

  • Advantage: Can handle multi-application coding modes that the plugin cannot handle
  • Disadvantage: Users not only need to analyze cross-application bean dependencies, but also need to learn the multi-application coding mode, resulting in high manual costs.

Method 2: Use the Koupleless assistant tool for easy separation!

Drag the files you want to separate into the panel according to the module directory structure. Click “Separate,” and the plugin will help you analyze and modify according to the Koupleless multi-application coding mode.


  • Advantage: Intuitive, convenient interaction, and the plugin automatically modifies the way cross-application bean calls are made and some special multi-application coding modes
  • Disadvantage: The plugin can only modify the code based on some multi-application coding modes, so users need to understand the capabilities of the plugin.

Technical Solution

The plugin divides the overall process into 3 stages: analysis stage, interaction stage, and automated separation stage, as shown in the following diagram:

  • In the analysis stage, it analyzes the dependencies in the project, including class dependencies, bean dependencies, and special multi-application coding analysis, such as MyBatis configuration dependencies.
  • In the interaction stage, it visualizes the dependencies between class files and the module directory structure.
  • In the automated separation stage, the plugin first builds the module and integrates the configuration, then refactors the package names according to the user’s needs, modifies the way module base bean calls are made, and modifies the code according to special multi-application coding modes, such as automatically reusing the base data source.

Next, we will briefly introduce the main technologies used in the analysis stage, interaction stage, and automated separation stage.

Analysis Phase

Plugins use JavaParser and commons-configuration2 to scan Java files and configuration files in the project.

Class Dependency Analysis

To accurately analyze the class dependency of the project, the plugin needs to fully analyze all the project classes used in a class file, that is: analyze each statement involving types in the code.

The plugin first scans all class information, then uses JavaParser to scan the code of each class, analyzes the types of project class files involved in the code, and finally records their relationships. The types of statements involved are as follows:

  1. Class definition analysis: Parsing the parent class type and implementing interface type as referenced types;
  2. Annotation analysis: Parsing the annotation type as referenced types;
  3. Field definition analysis: Parsing the field type as referenced types;
  4. Variable definition analysis: Parsing the variable type as referenced types;
  5. Method definition analysis: Parsing the return type of the method, parameter types, annotations, and thrown types as referenced types;
  6. Class object creation analysis: Parsing the object type of the class object creation statement as referenced types;
  7. Catch analysis: Parsing the object type of catch as referenced types;
  8. Foreach analysis: Parsing the object type of foreach as referenced types;
  9. For analysis: Parsing the object type of for as referenced types.

To parse object types quickly (direct parsing with JavaParser is slow), the plugin first tries to match types through the imports; only if that fails does it fall back to JavaParser.

Bean Dependency Analysis

To accurately analyze the project’s bean dependency, the plugin needs to scan all the bean definitions and dependency injection methods in the project, and then analyze all the project beans that the class file depends on through static code analysis.

There are three main ways to define beans: class annotations, method annotations, and XML. Different ways of defining beans call for different bean dependency injection analysis, and the beans ultimately depended on are determined by the dependency injection type. The overall process is as follows:

When scanning beans, the plugin parses and records bean information, dependency injection type, and dependent bean information.

  1. For beans defined with class annotations, it parses the field annotations and analyzes the dependency injection type and the dependent bean information of each field.
  2. For beans defined with methods, it parses the parameter information and analyzes the dependency injection type and the dependent bean information of each parameter.
  3. For beans defined in XML, it analyzes dependency injection by parsing both the XML and the class methods:
    1. Parse byName dependencies from the corresponding XML elements.
    2. Parse the dependency injection type and dependent bean information from the fields.
    3. If the XML’s dependency injection type is not ’no’, parse the dependency injection type and the corresponding dependent bean information from the setter methods.

Finally, according to the dependency injection type, find the dependent bean information in the project’s recorded bean definitions to analyze the bean dependency relationship.

Special Multi-Application Code Analysis

Here we take the MyBatis configuration dependency analysis as an example.

When splitting out the Mapper to a module, the module needs to reuse the base data source, so the plugin needs to analyze all MyBatis configuration classes associated with the Mapper. The overall relationship between the various MyBatis configuration classes and Mapper files is connected through the MapperScanner configuration, as shown in the figure below:

Therefore, the plugin records all Mapper class files and XML files, analyzes the associated MapperScanner, and parses all Mybatis configuration bean information associated with the MapperScanner configuration.

Interaction Phase

Here is a brief description of the implementation of dependency visualization and cross-level import.

  • Visualization of dependency relationships: The plugin recursively analyzes the dependency relationships between all class files (including class dependency relationships and bean dependency relationships). Since there may be cyclic dependencies between class files, a cache is used to record all class file nodes. When recursing, the plugin prioritizes taking the dependency nodes from the cache to avoid stack overflow problems when constructing tree nodes.
  • Cross-level import: Record all selected files. If folders and files within folders are selected, only import the marked files during import.

Automation Phase of Splitting

Here is a brief description of the implementation of package renaming, configuration integration, bean invocation, and special multi-application code modification (using “reusing the base data source” as an example).

  • Package renaming: When the user customizes the package name, the plugin will modify the class package name and, according to the class dependency relationship, modify its import field to the new package name.
  • Configuration integration: For each module of the sub-application, read all the original module configurations where the split files are located and integrate them into the new module; automatically extract bean nodes related to the sub-application from XML.
  • Bean invocation: Based on the bean dependency relationship analyzed earlier, the plugin filters out the bean calls between the module and the base, and modifies the field annotations (@Autowired @Resource @Qualifier) to @AutowiredFromBase or @AutowiredFromBiz.
  • Reuse of the base data source: Based on the user’s selection of Mapper files and MyBatis configuration dependency relationships, extract the MyBatis configuration information related to the Mapper. Then fill in the configuration information to the data source reuse template file and save it in the module.

Future Outlook

The above-mentioned features have been completed internally but have not been officially open-sourced. It is expected to be open-sourced in the first half of 2024. Stay tuned.

In addition, in terms of functionality, there are still more challenges to be addressed in the future: how to split the unit tests and how to verify the consistency of the split multi-application ability and single-application ability.

We welcome more interested students to pay attention to the construction of the Koupleless community together to build the Koupleless ecosystem.

5.6 - 6.6 ModuleControllerV2 Technical Documentation

5.6.1 - 6.6.1 ModuleControllerV2 Architecture

Koupleless ModuleControllerV2 Architecture

Brief Introduction

ModuleControllerV2 is a K8S control plane component based on the capabilities of Virtual Kubelet. It disguises the base as a node in the K8S system and maps the Module as a Container in the K8S system, thereby mapping Module operations to Pod operations. Utilizing K8S’s Pod lifecycle management, scheduling, and existing controllers like Deployment, DaemonSet, and Service, it achieves second-level Serverless Module operation scheduling and base interaction capabilities.

Background

The original Module Controller (hereafter referred to as MC) was designed based on K8S Operator technology.

In this mode, the original MC logically defines a separate Module control plane isolated from the base, handling base operations with K8S’s native capabilities and Module operations with Operator-encapsulated logic.

While this method logically distinguishes between Module and base concepts, it also presents certain limitations:

  1. Modules are abstracted differently from the base model. Therefore, the original MC not only needs to load/unload Modules on the base but also:

    1. Be aware of all current bases
    2. Maintain base status (online status, Module loading, Module load, etc.)
    3. Maintain Module status (online status, etc.)
    4. Implement appropriate Module scheduling logic as required

    This results in high development and maintenance costs. (High cost for Operator development per scenario)

  2. Horizontal expansion of Module capabilities and roles is difficult. This implementation method is logically incompatible with traditional microservices architectures, where roles among services are similar. However, in the Operator implementation, Module and base abstraction levels differ, hindering interoperability. For example, in Koupleless’s proposal: “Modules can either attach to the base or run independently as services.” In the Operator architecture, achieving the latter requires custom scheduling logic and specific resource maintenance, leading to high development and maintenance costs for each new capability/role.

  3. In this architecture, Module becomes a new concept, increasing learning costs for users from a product perspective.

Architecture

ModuleControllerV2 currently includes the Virtual Kubelet Manager control plane component and the Virtual Kubelet component. The Virtual Kubelet component is the core of Module Controller V2, responsible for mapping base services as nodes and maintaining Pod states on them. The Manager maintains base-related information, monitors base online/offline status, and maintains the basic runtime environment for the Virtual Kubelet component.

Virtual Kubelet

Virtual Kubelet is implemented with reference to the official documentation.

In summary, VK is a programmable Kubelet.

Like an interface in programming languages, VK defines a set of Kubelet standards; by implementing this interface, we can create our own Kubelet.

The Kubelet that normally runs on K8S nodes can be viewed as one implementation of VK: by implementing VK’s abstract methods, it enables the K8S control plane to utilize and monitor the physical resources on nodes.

Therefore, VK has the capability to masquerade as a Node. To distinguish traditional Nodes from VK-masqueraded ones, we refer to the latter as VNodes.

Logical Structure

In the Koupleless architecture, base services run in Pods, scheduled and maintained by K8S, and run on actual nodes.

Module scheduling needs align with base scheduling. Thus, in MC V2 design, VK is used to disguise base services as traditional K8S Nodes, becoming base VNodes, while Modules are disguised as Pods, becoming module VPods. This logically abstracts a second layer of K8S to manage VNodes and VPods.

In summary, the overall architecture includes two logical K8S:

  1. Base K8S: Maintains real Nodes (virtual/physical machines), responsible for scheduling base Pods to real Nodes.
  2. Module K8S: Maintains virtual VNodes (base Pods), responsible for scheduling module VPods to virtual VNodes.

These are called logical K8S because they do not necessarily need to be two separate K8S. With good isolation, the same K8S can perform both tasks.

This abstraction allows utilizing K8S’s native scheduling and management capabilities without extra framework development, achieving:

  1. Management of base VNodes (not a core capability since they are already Pods in the underlying K8S but contain more information as Nodes)
  2. Management of VPods (core capability: including Module operations, Module scheduling, Module lifecycle status maintenance, etc.)

Multi-Tenant VK Architecture

Native VK uses K8S’s Informer mechanism and ListWatch to monitor pod events on the current VNode. This means each VNode requires its own monitoring logic. As the number of bases increases, API Server pressure grows rapidly, hindering horizontal scaling.

To solve this, Module Controller V2 extracts the ListWatch part of Virtual Kubelet, monitors events of specific Pods (those with certain labels in implementation), and forwards them to logical VNodes through in-process communication, reusing Informer resources. This way, each VNode only maintains local context without separate monitoring, reducing API Server pressure.

In the multi-tenant architecture, Module Controller V2 includes two core Modules:

  1. Base Registration Center: Discovers base services via a specific operations pipeline and maintains VK context and data transmission.
  2. VK: Maintains mappings between a specific base and node/pod, maintains node/pod states, and translates pod operations into corresponding Module operations for the base.

Sharded Architecture

A single Module Controller lacks disaster recovery capabilities and has an obvious upper limit. Thus, Module Controller V2 requires a more stable architecture with disaster recovery and horizontal scaling capabilities.

In Module operations, the core concern is the stability of scheduling capabilities. Under the current Module Controller architecture, scheduling stability consists of two parts:

  1. Stability of the dependent K8S
  2. Base stability

The first point cannot be guaranteed at the Module Controller layer, so high availability of the Module Controller focuses only on base-level stability.

Additionally, Module Controller’s load mainly involves monitoring and processing various Pod events, related to the number of Pods and bases under control. Due to K8S API Server’s rate limits on a single client, a single Module Controller instance has an upper limit on simultaneous event processing, necessitating load sharding capabilities at the Module Controller level.

Thus, the sharded architecture of Module Controller addresses two core issues:

  1. High availability of the base
  2. Load balancing of Pod events

In the Module Controller scenario, Pod events are strongly bound to the base, making load balancing of Pod events equivalent to balancing the managed base.

To address the above issues, Module Controller builds native sharding capability on multi-tenant Virtual Kubelet. The logic is as follows:

  1. Each Module Controller instance listens to the online information of all bases.
  2. Upon detecting a base going online, each Module Controller creates corresponding VNode data and attempts to create a VNode node lease.
  3. Because resource names in K8S must be unique, only one Module Controller instance can successfully create the Lease. Its VNode becomes the primary instance, while the others become replicas that watch the Lease object and attempt to take over the primary role, achieving VNode high availability.
  4. Once VNode successfully starts, it listens to Pods scheduled on it for interaction, while unsuccessful VNodes ignore these events, achieving load sharding for the Module Controller.

The resulting architecture: multiple Module Controllers shard the VNode load through Leases, and the redundant VNode data held by each of them provides VNode high availability.

Furthermore, we aim for load balancing among Module Controllers, with approximately balanced numbers of bases for each.

To facilitate open-source users and reduce learning costs, we implemented a self-balancing capability based on K8S without introducing additional components:

  1. Each Module Controller instance maintains its current workload, calculated as (number of VNodes currently managed / total number of VNodes). For example, if a Module Controller manages 3 VNodes out of 10, the actual workload is 3/10 = 0.3.
  2. Upon starting, a Module Controller can specify a maximum workload level. The workload range is divided into segments based on this parameter: for example, with a maximum workload level of 10, each level covers 1/10 of the range, i.e., a workload of 0-0.1 is level 0, 0.1-0.2 is level 1, and so on.
  3. In a sharded cluster configuration, before attempting to create a Lease, a Module Controller calculates its current workload level and waits in proportion to it. Module Controllers with low workload thus attempt creation earlier and are more likely to succeed, which balances the load (see the sketch after this list).
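A minimal sketch of the level-based wait in step 3, under assumed values for the per-level delay (illustrative only; the real Module Controller implements this internally):

public class VNodeLeaseBackoff {
    public static void main(String[] args) throws InterruptedException {
        int managedVNodes = 3;          // vnodes this instance currently manages
        int totalVNodes = 10;           // vnodes known in the cluster
        int maxWorkloadLevel = 10;      // configured maximum workload level
        long delayPerLevelMillis = 100; // hypothetical per-level delay

        double workload = (double) managedVNodes / totalVNodes;                          // 0.3
        int level = Math.min((int) (workload * maxWorkloadLevel), maxWorkloadLevel - 1); // level 3
        Thread.sleep(level * delayPerLevelMillis); // lightly loaded instances wait less...
        // ...and therefore attempt to create the VNode Lease earlier, winning it more often
    }
}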

The process relies on K8S event broadcast mechanisms, with additional considerations depending on the operations pipeline selected during initial base onboarding:

  1. MQTT Operations Pipeline: Since MQTT inherently supports broadcasting, all Module Controller instances receive MQTT onboarding messages without additional configuration.
  2. HTTP Operations Pipeline: Due to HTTP’s nature, a base only interacts with a specific Module Controller instance during onboarding, requiring other capabilities to achieve initial load balancing. In actual deployment, multiple Module Controllers are served through a proxy (K8S Service/Nginx, etc.), allowing load balancing strategies to be configured at the proxy layer for initial onboarding balance.

5.6.2 - 6.6.2 ModuleControllerV2 Scheduling Principles

Koupleless ModuleControllerV2 Scheduling Principles

Brief Introduction

Module Controller V2 leverages the multi-tenant capabilities of Virtual Kubelet to map bases as Nodes in K8S. By defining modules as Pods, it reuses the K8S scheduler and various controllers to quickly build module operation and scheduling capabilities.

Base <-> VNode Mapping

Module Controller V2 implements base discovery through Tunnel, mapping it as a Node via Virtual Kubelet. Such Nodes are referred to as VNodes.

Upon base discovery, the configured Metadata and Network information are read. Metadata includes Name and Version, while Network includes IP and Hostname.

Metadata becomes Label information on the VNode to identify base details. Network information becomes the VNode’s network configuration. Future module pods scheduled onto the base will inherit the VNode’s IP for configuring Services, etc.

A VNode will also contain the following key information:

apiVersion: v1
kind: Node
metadata:
  labels:
    virtual-kubelet.koupleless.io/component: vnode # vnode marker
    virtual-kubelet.koupleless.io/env: dev # vnode environment marker
    base.koupleless.io/name: base # Name from base Metadata configuration
    vnode.koupleless.io/tunnel: mqtt_tunnel_provider # Current tunnel ownership of the base
    vnode.koupleless.io/version: 1.0.0 # Base version number
  name: vnode.2ce92dca-032e-4956-bc91-27b43406dad2 # vnode name, latter part is UUID from the base maintenance pipeline
spec:
  taints:
  - effect: NoExecute
    key: schedule.koupleless.io/virtual-node # vnode taint to prevent regular pod scheduling
    value: "True"
  - effect: NoExecute
    key: schedule.koupleless.io/node-env # node env taint to prevent non-current environment pod scheduling
    value: dev
status:
  addresses:
  - address: 127.0.0.1
    type: InternalIP
  - address: local
    type: Hostname

Module <-> Pod Mapping

Module Controller V2 defines a module as a Pod in the K8S system, allowing for rich scheduling capabilities through Pod YAML configuration.

A module Pod YAML configuration is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: test-single-module-biz1
  labels:
    virtual-kubelet.koupleless.io/component: module # Necessary to declare pod type for module controller management
spec:
  containers:
    - name: biz1 # Module name, must strictly match the artifactId in the module's pom
      image: https://serverless-opensource.oss-cn-shanghai.aliyuncs.com/module-packages/stable/biz1-web-single-host-0.0.1-SNAPSHOT-ark-biz.jar # jar package address, supports local file, http/https link
      env:
        - name: BIZ_VERSION # Module version configuration
          value: 0.0.1-SNAPSHOT # Must strictly match the version in the pom
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms: # Base node selection
          - matchExpressions:
              - key: vnode.koupleless.io/version # Base version filtering
                operator: In
                values:
                  - 1.0.0 # Module may only be schedulable to certain versions of bases; if restricted, this field is required.
              - key: base.koupleless.io/name # Base name filtering
                operator: In
                values:
                  - base # Module may only be schedulable to certain specific bases; if restricted, this field is required.
  tolerations:
    - key: "schedule.koupleless.io/virtual-node" # Ensure the module can be scheduled onto a base vnode
      operator: "Equal"
      value: "True"
      effect: "NoExecute"
    - key: "schedule.koupleless.io/node-env" # Ensure the module can be scheduled onto a base node in a specific environment
      operator: "Equal"
      value: "test"
      effect: "NoExecute"

The above example shows only the basic configuration. Additional configurations can be added to achieve richer scheduling capabilities, such as adding Pod AntiAffinity in Module Deployment scenarios to prevent duplicate module installations.
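For instance, a hedged sketch of such an anti-affinity rule in a module Deployment’s pod template (the module-name label is hypothetical; use whatever label your module pods actually carry):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: module-name              # hypothetical label identifying the module
              operator: In
              values:
                - biz1
        topologyKey: kubernetes.io/hostname # at most one such pod per (v)node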

Operations Workflow

Based on the above structure and mapping relationships, we can leverage Kubernetes (K8S) native control plane components to fulfill diverse and complex module operation requirements.

The following illustrates the entire module operations workflow using the Deployment module as an example, with the base already initialized and mapped:

  1. Create the Module Deployment (a native K8S Deployment whose template PodSpec defines the module information). The Deployment Controller in the K8S ControllerManager creates a virtual Pod (vPod) according to the Deployment configuration. At this point the vPod is not yet scheduled, and its status is Pending.
  2. The K8S Scheduler scans unscheduled vPods and schedules them onto appropriate virtual Nodes (vNodes) based on selector, affinity, and taint/toleration configurations.
  3. Upon detecting that vPod scheduling is complete, the Module Controller retrieves the module information defined in the vPod and sends installation commands to the base.
  4. After the base completes the module installation, it synchronizes the module installation status with the Module Controller, which translates the module status into a Container Status and syncs it to Kubernetes.
  5. Concurrently, the base continuously reports its health status. The Module Controller maps Metaspace capacity and usage to Node Memory and updates this information in Kubernetes.


5.6.3 - 6.6.3 Core Process Timeline

Koupleless ModuleController V2 Core Process Timeline

Base Lifecycle

Module Release and O&M


6 - 6.7 FAQ

6.1 - 6.7.1 FAQ List (please search for your problem on this page)

Usage Issues

arkctl Installation Issue

Symptom

Unable to install arkctl using go install command. The command executed is as follows:

go install koupleless.alipay.com/koupleless/v1/arkctl@latest

The error message received is as follows:

go: koupleless.alipay.com/koupleless/v1/arkctl@latest: module koupleless.alipay.com/koupleless/v1/arkctl: Get "https://proxy.golang.org/koupleless.alipay.com/koupleless/v1/arkctl/@v/list": dial tcp 142.251.42.241:443: i/o timeout

Solution

Since arkctl lives in a subdirectory of the koupleless repository, it cannot be installed directly with go install. You can download the executable from here and follow the instructions to install arkctl.

Module Building Issues

Maven version too low

Symptom

During the build, errors like the following are reported:

  • Error: Unable to parse configuration of mojo com.alipay.sofa:sofa-ark-maven-plugin:.*:repackage for parameter excludeArtifactIds
  • Error: com.google.inject.ProvisionException: Unable to provision, see the following errors:
  • Error: Error injecting: private org.eclipse.aether.spi.log.Logger org.apache.maven.repository.internal.DefaultVersionRangeResolver.logger
  • Error: Caused by: java.lang.IllegalArgumentException: Can not set org.eclipse.aether.spi.log.Logger field org.apache.maven.repository.internal.DefaultVersionRangeResolver.logger to org.eclipse.aether.internal.impl.slf4j.Slf4jLoggerFactory

Cause

Maven version is too low

Solution

Upgrade to version 3.6.1 or above

Configuration Issues

application.properties configuration

Symptom

spring.application.name must be configured

Cause

spring.application.name is not configured in application.properties

Solution

Configure spring.application.name in application.properties
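For example (the application name shown is hypothetical):

# application.properties
spring.application.name=biz1-module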

Failure of SOFABoot base or module startup due to AutoConfiguration

Symptom

An error is reported: “The following classes could not be excluded because they are not auto-configuration classes: org.springframework.boot.actuate.autoconfigure.startup.StartupEndpointAutoConfiguration”.

Cause

SOFABoot requires spring-boot-actuator-autoconfigure to be imported, because its code here defines “spring.exclude.autoconfiguration” as org.springframework.boot.actuate.autoconfigure.startup.StartupEndpointAutoConfiguration; an error is reported if that class cannot be found during startup.

Solution

Import spring-boot-actuator-autoconfigure in the base or module.

Runtime Issues

koupleless dependency missing

Symptom

  • When installing the module, it throws an error com.alipay.sofa.ark.exception.ArkLoaderException: [ArkBiz Loader] module1:1.0-SNAPSHOT: can not load class: com.alipay.sofa.koupleless.common.spring.KouplelessApplicationListener

Cause

koupleless dependency missing

Solution

Please add the following dependency in the module:

<dependency>
    <groupId>com.alipay.sofa.koupleless</groupId>
    <artifactId>koupleless-app-starter</artifactId>
    <version>${koupleless.runtime.version}</version>
</dependency>

Alternatively, upgrade koupleless to the latest version.

koupleless version is too low

Symptom

  • Module installation error: Master biz environment is null
  • Module static merge deployment cannot find module package from the specified directory

Solution

Upgrade koupleless version to the latest version

<dependency>
    <groupId>com.alipay.sofa.koupleless</groupId>
    <artifactId>koupleless-app-starter</artifactId>
    <version>${latest_version}</version>
</dependency>

Class missing

Symptom

  • Error: java.lang.ClassNotFoundException
  • Error: java.lang.NoClassDefFoundError

Cause

Module/Foundation cannot find the class

Solution

Investigate whether the missing class belongs to the module or the foundation (see the two items below).

Module class missing

Symptom

Error: ArkBiz Loader.*can not load class

Cause

The module is missing the dependency for the corresponding class!

Solution

Check if the module contains the dependency for the class. If not, add the corresponding dependency.

Foundation class missing

Symptom

Error: ArkLoaderException: Post find class .* occurs an error via biz ClassLoaderHook

Cause

The class delegated to the foundation for loading is not found in the foundation, or the dependency version is incorrect.

Solution

Add the corresponding dependency to the foundation or modify the dependency version.

Module depends on the class with multiple different sources

Symptom:

  • Error: java.lang.LinkageError
  • Error: java.lang.ClassCastException
  • Error: previously initiated loading for a different type with name

Cause

The same dependency is introduced in both the foundation and the module, so the loaded class may come from different ClassLoaders.

Solution

In the module’s main pom, set the artifact containing the class to provided scope (keep the module slim and manage dependencies between the foundation and the module).
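For example, if the conflicting class comes from a library the foundation already provides (the coordinates are hypothetical), declare it in the module as:

<dependency>
    <groupId>com.example</groupId>
    <artifactId>shared-lib</artifactId>
    <version>1.0.0</version>
    <scope>provided</scope>
</dependency>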

Method missing

Symptom

Error: java.lang.NoSuchMethodError

Cause

java.lang.NoSuchMethodError is thrown, indicating a possible jar conflict or unloaded dependency.

Solution

Check for jar conflicts or unloaded dependencies.

Module directly uses foundation’s datasource

Symptom

Error: No operation is allowed after dataSource is closed

Cause

The module directly uses the dataSource in the foundation, and the unloading of the module causes the foundation’s dataSource to close.

Solution

The dataSource has been closed. Check if the module directly uses the dataSource in the foundation through bean acquisition.

Module’s rest service webContextPath conflict

Symptom

Error: org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is java.lang.IllegalArgumentException: Child name xxx is not unique

Cause

webContextPath conflict

Solution

Check if other modules have set the same webContextPath

Incorrect JVM parameter configuration

Symptom

Error: Error occurred during initialization of VM

Cause

Error occurred during initialization of VM, generally indicating a problem with JVM parameter configuration.

Solution

Check JVM parameter configuration on the user side

Bean configuration issues

Symptom:

  • Error: org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Invalid bean definition with name
  • Error: java.lang.IllegalArgumentException: JVM Reference
  • Error creating bean with name
  • Error: BeanDefinitionStoreException: Invalid bean definition with name
  • Error: org.springframework.beans.FatalBeanException: Bean xx has more than one interface
  • Error: No qualifying bean of type

Cause

Bean configuration issues in the project

Solution

  1. Check if the bean is incorrectly configured in the XML or if there are dependency issues.
  2. Bean initialization/definition exception, please check the business logic.

Duplicate Spring Bean definition

Symptom

Error: There is already xxx bean

Cause

Business coding issue: duplicate bean definition

Solution

Check the business-side code

XML configuration issues

Symptom

Error: Error parsing XPath XXX #### Cause java.io.IOException: Could not find resource

Cause

XML file parsing failed, unable to find the corresponding dependency configuration

Solution

Investigate the parsing failure issue

JMX configuration issues

Symptom

Error: org.springframework.jmx.export.UnableToRegisterMBeanException: Unable to register MBean

Cause

JMX requires the application name (JMX domain) to be configured manually.

Solution

Add -Dspring.jmx.default-domain=${spring.application.name} to the base’s startup parameters.
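
For instance, a sketch of passing the flag when launching the base (the jar name is a placeholder; substitute the base’s actual application name for the placeholder value):

java -Dspring.jmx.default-domain=my-base-app -jar base-app.jar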

Dependency configuration issues

Symptom

Error: Dependency satisfaction failed XXX java.lang.NoClassDefFoundError

Cause

Jar dependency issue: the class cannot be found.

Solution

Check the jar dependencies; if the project depends on incorrect jar packages, correct them.

SOFA JVM Service lookup failure

Symptom:

  • Error: can not find corresponding jvm service
  • Error: JVM Reference XXX can not find the corresponding JVM service

Cause

The JVM service referenced by the JVM Reference cannot be found.

Solution

Check if the business code is correct and if the corresponding service exists.

Insufficient memory

Symptom:

  • Error: Insufficient space for shared memory
  • Error: java.lang.OutOfMemoryError: Metaspace

Cause

Insufficient memory or memory overflow

Solution

Increase the available memory (for Metaspace overflow, raise the Metaspace limit), or replace or restart the machine.
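
For example, a sketch of raising the limits via JVM options (the values are illustrative and should be tuned to your workload):

-Xmx4g -XX:MaxMetaspaceSize=512m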

Hessian version conflict

Symptom

Error: Illegal object reference

Cause

Hessian version conflict

Solution

Use mvn dependency:tree to inspect the dependency tree and resolve the conflicting dependencies.
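
For example, to show only Hessian artifacts in the tree (a standard dependency:tree filter):

mvn dependency:tree -Dincludes=com.caucho:hessian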

Guice version is too low

Symptom

Error: Caused by: java.lang.ClassNotFoundException: com.google.inject.multibindings.Multibinder

Cause

The Guice version in the user’s project is older than, and incompatible with, the version used by Koupleless.

Solution

Upgrade the Guice version to a newer version, such as:

<dependency>
    <groupId>com.google.inject</groupId>
    <artifactId>guice</artifactId>
    <version>6.0.0</version>
</dependency>

Need to slim down the module

Symptom

  • Error java.lang.IllegalArgumentException: Cannot instantiate interface org.springframework.context.ApplicationListener: com.alipay.sofa.koupleless.common.spring.KouplelessApplicationListener
  • Error Caused by: java.lang.IllegalArgumentException: class com.alipay.sofa.koupleless.plugin.spring.BizApplicationContextInitializer is not assignable to interface org.springframework.context.ApplicationContextInitializer

Cause

The module should declare the Spring Boot dependencies with provided scope.

Solution

Slim down the module; refer to Module Slimming.
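
A minimal sketch, assuming the base already supplies Spring Boot at runtime (the starter shown is illustrative):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <!-- provided: compiled against, but supplied by the base at runtime -->
    <scope>provided</scope>
</dependency>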

SOFABoot health check failure

Symptom

Error: HealthCheck Failed

Cause

SOFABoot project HealthCheck failure

Solution

Investigate the specific cause of the failure on the user side.

When the module and the base share a library, the module unexpectedly runs the base’s initialization logic

Symptom

For example, if the base introduces druid but the module does not, by design the module should not initialize a dataSource. If the module nevertheless initializes one, this behavior is unexpected and may cause errors.

Solution

  1. Ensure that the module can be built independently, i.e., mvn clean package executes without errors in the module’s directory.
  2. Upgrade Koupleless to version 0.5.7 or later.

Unable to Initialize EnvironmentPostProcessor on Module Startup

Symptom

During module startup, an error like the following is reported:

Unable to instantiate com.baomidou.mybatisplus.autoconfigure.SafetyEncryptProcessor [org.springframework.boot.environment.EnvironmentPostProcessor]

Cause

Spring Boot instantiates EnvironmentPostProcessor implementations through the ClassLoader of the application’s ResourceLoader; if the module does not pass its own ClassLoader, the lookup happens in the wrong ClassLoader and instantiation fails.

Solution

Specify the ClassLoader of the ResourceLoader when launching Spring Boot in the module’s main method:

import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.core.io.DefaultResourceLoader;
import org.springframework.core.io.ResourceLoader;

SpringApplicationBuilder builder = new SpringApplicationBuilder(Biz1Application.class);
// Bind the ResourceLoader to the module's own ClassLoader so Spring Boot
// resolves spring.factories entries (e.g. EnvironmentPostProcessor) from the module.
ResourceLoader resourceLoader = new DefaultResourceLoader(
    Biz1Application.class.getClassLoader());
builder.resourceLoader(resourceLoader);
builder.build().run(args);

Error occurred when closing the base and shutting down the Tomcat server

Symptom

When the base is shut down, a warning “Unable to stop embedded Tomcat” is logged.

Cause

When the base shuts down, Tomcat runs its own shutdown logic, but Koupleless adds additional shutdown logic, so the base attempts to shut Tomcat down a second time. The message is only a warning and does not affect the normal shutdown of the base.

Solution

No action is required.

Module compile includes Tomcat, causing startup error Caused by: java.lang.Error: factory already defined

Symptom

See the detailed error stack trace at https://github.com/sofastack/sofa-ark/issues/185.

Cause

The module introduces Tomcat at compile scope, so Tomcat is reinitialized when the module starts. TomcatURLStreamHandlerFactory then tries to register itself with URL::setURLStreamHandlerFactory, but the base has already registered a factory, and the duplicate registration throws an error. For more details, see https://github.com/spring-projects/spring-boot/issues/10529.

Solution

Call TomcatURLStreamHandlerFactory.disable() in the module’s code to skip the duplicate registration.

Module Startup Error in JdkDynamicAopProxy.getProxy Showing “xxx referenced from a method is not visible from class loader”

Cause

spring-core 6.0.9 contains a bug in this logic: even when a BizClassLoader is passed in, because BizClassLoader has no parent, the ClassLoader is forcibly switched to the base’s ClassLoader.

public Object getProxy(@Nullable ClassLoader classLoader) {
    if (logger.isDebugEnabled()) {
        logger.debug("Creating JDK dynamic proxy: " + this.advised.getTargetSource());
    }
    if (classLoader == null || classLoader.getParent() == null) {
        // JDK bootstrap loader or platform loader
        // Use a higher-level loader which can see spring infrastructure classes
        // NOTE: BizClassLoader has no parent, so it is wrongly treated like a
        // bootstrap/platform loader and replaced with the base ClassLoader.
        classLoader = getClass().getClassLoader();
    }
    }
    return Proxy.newProxyInstance(classLoader, this.proxiedInterfaces, this);
}

Solution

Update to a newer version of spring-core, such as 6.0.11.

Hessian Serialization/Deserialization Does Not Support Multiple Applications (ClassLoaders)

Solution

Refer to Issue #196: upgrade the Hessian version to 4.x or above, or customize a SerializerFactory that supports multiple ClassLoaders.
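
A minimal sketch of the second option, assuming Hessian 4.x (whose SerializerFactory accepts a ClassLoader); the stream setup is illustrative:

import com.caucho.hessian.io.Hessian2Output;
import com.caucho.hessian.io.SerializerFactory;

import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ModuleHessianSerializer {
    public static byte[] serialize(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        Hessian2Output out = new Hessian2Output(bos);
        // Bind the factory to the current module's ClassLoader instead of the
        // default, so classes are resolved inside the correct module.
        out.setSerializerFactory(
            new SerializerFactory(Thread.currentThread().getContextClassLoader()));
        out.writeObject(obj);
        out.close();
        return bos.toByteArray();
    }
}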

Hessian Serialization Error

Symptoms

  • java.lang.RuntimeException: can't get field offset on a hidden class: private final XXXXX$$Lambda$XXXX
  • java.lang.UnsupportedOperationException: can't get field offset on a hidden class: private final XXXXX$$Lambda$XXXX

Cause

Hessian serialization does not support hidden classes generated by lambda expressions.

Solution

  1. First, place the dependency package containing the serializable class in the base.
  2. Then depend on it in the module, but slim the dependency (mark it provided); refer to Module Slimming.

Error when starting the base: IllegalArgumentException: File must exist

Cause

Newer versions of Spring Boot changed the packaged file path format, which breaks certain file path logic. For details, see https://github.com/koupleless/koupleless/issues/161

Solution

Add the configuration <loaderImplementation>CLASSIC</loaderImplementation> to the Spring Boot Maven plugin to revert to the original path format:

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>repackage</goal>
          </goals>
          <configuration>
            <loaderImplementation>CLASSIC</loaderImplementation>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>