Hyperledger Caliper


Introduction to Caliper

Caliper is a blockchain performance benchmark framework that allows users to test different blockchain solutions with custom use cases and obtain a set of performance test results.

Caliper is a blockchain benchmark framework which allows users to measure the performance of a specific blockchain implementation with a set of predefined use cases. Caliper will produce reports containing a number of performance indicators, such as TPS (Transactions Per Second), transaction latency, resource utilisation etc. The intent is for Caliper results to be used as a reference in supporting the choice of a blockchain implementation suitable for the user-specific use-cases. Given the variety of blockchain configurations, network setup, as well as the specific use-cases in mind, it is not intended to be an authoritative performance assessment, nor to be used for simple comparative purposes (e.g. blockchain A does 5 TPS and blockchain B does 10 TPS, therefore B is better). The Caliper project references the definitions, metrics, and terminology as defined by the Performance & Scalability Working Group (PSWG).

Supported blockchain solutions

  • Hyperledger Besu
  • Hyperledger Burrow
  • Ethereum
  • Hyperledger Fabric
  • FISCO BCOS
  • Hyperledger Iroha
  • Hyperledger Sawtooth

Supported performance metrics

  • Transaction/read throughput
  • Transaction/read latency (minimum, maximum, average, percentile)
  • Resource consumption (CPU, Memory, Network IO, …)

See the PSWG white paper for the exact definitions and corresponding measurement methods.

Installing Caliper

Overview

Caliper is published as the @hyperledger/caliper-cli NPM package and the hyperledger/caliper Docker image, both containing the CLI binary. Refer to the Installing from NPM and Using the Docker image sections for the available versions and their intricacies.

Installing and running Caliper usually consists of the following steps, thoroughly detailed by the remaining sections:

  1. Acquire the Caliper CLI either from NPM or from DockerHub.
  2. Execute a bind command through the CLI. This step pulls the specified version of SDK packages for the selected platform.
  3. Start the benchmark through the CLI or by starting the Docker container.

The examples in the rest of the documentation use the caliper-benchmarks repository as the Caliper workspace since it contains many sample artifacts for benchmarking. Make sure you check out the appropriate tag/commit of the repository, matching the version of Caliper you use.

To clone the caliper-benchmarks repository, run:

git clone https://github.com/hyperledger/caliper-benchmarks.git
cd caliper-benchmarks
git checkout <your Caliper version>

Note: If you are running your custom benchmark, then change this directory path (and other related configurations) accordingly in the examples.

The Caliper CLI

Unless you are embedding the Caliper packages in your own application, you will probably use Caliper through its command line interface (CLI). The other sections will introduce the different ways of acquiring and calling the Caliper CLI. This section simply focuses on the API it provides.

Note: The following examples assume a locally installed CLI in the ~/caliper-benchmarks directory, hence the npx call before the caliper binary. Refer to the Local NPM install section for the specifics.

Installing the SDK dependencies

To install the Fabric SDK dependencies of the adapter, configure the binding command as follows:

  • Set the caliper-bind-sut setting key to fabric
  • Set the caliper-bind-sdk setting key to a supported SDK binding.

 

You can set the above keys either from the command line:

user@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric --caliper-bind-sdk 1.4.0

 

or from environment variables:

user@ubuntu:~/caliper-benchmarks$ export CALIPER_BIND_SUT=fabric
user@ubuntu:~/caliper-benchmarks$ export CALIPER_BIND_SDK=1.4.0
user@ubuntu:~/caliper-benchmarks$ npx caliper bind

 

or from various other sources.

The entry point of the CLI is the caliper binary. You can confirm whether the CLI is installed correctly by checking its version:

user@ubuntu:~/caliper-benchmarks$ npx caliper --version
v0.2.0

 

The CLI provides multiple commands to perform different tasks. To check the available commands and their descriptions, execute:

user@ubuntu:~/caliper-benchmarks$ npx caliper --help
caliper <command>

Commands:
  caliper bind [options]       Bind Caliper to a specific SUT and its SDK version
  caliper launch <subcommand>  Launch a Caliper process either in a master or worker role.
  caliper completion           generate completion script

Options:
  --help, -h  Show usage information  [boolean]
  --version   Show version information  [boolean]

Examples:
  caliper bind
  caliper launch master
  caliper launch worker
 For more information on Hyperledger Caliper: https://hyperledger.github.io/caliper/ 

You can also request the help page of a specific command, as demonstrated by the next subsections.

Note: the command options can be set either through the command line, or from various other sources supported by the configuration mechanism of Caliper. This flexibility makes it easy to embed the CLI in different environments.

The bind command

Acquiring Caliper is as easy as installing a single NPM package, or pulling a single Docker image. However, this single point of install necessitates an additional step of telling Caliper which platform to target and which platform SDK version to use. This step is called binding, provided by the bind CLI command.

To have a look at the help page of the command, execute:

user@ubuntu:~/caliper-benchmarks$ npx caliper bind --help
Usage:
  caliper bind --caliper-bind-sut fabric:1.4.1 --caliper-bind-cwd ./ --caliper-bind-args="-g"

Options:
  --help, -h           Show usage information  [boolean]
  --version            Show version information  [boolean]
  --caliper-bind-sut   The name and version of the platform and its SDK to bind to  [string]
  --caliper-bind-cwd   The working directory for performing the SDK install  [string]
  --caliper-bind-args  Additional arguments to pass to "npm install". Use the "=" notation when setting this parameter  [string]
  --caliper-bind-file  Yaml file to override default (supported) package versions when binding an SDK  [string]

 

The binding step technically consists of an extra npm install call with the appropriate packages and install settings, fully managed by the CLI. The following parameters can be set for the command:

  • SUT/platform name and SDK version: specifies the name of the target platform and the SDK version to install, e.g., fabric:1.4.1
  • Working directory: the directory from which the npm install command must be performed. Defaults to the current working directory
  • User arguments: additional arguments to pass to npm install, e.g., --save

The following SUT names and their supported SDK versions are available for binding:

  • besu: 1.3.2, latest
  • burrow: 0.23.0, latest
  • ethereum: 1.2.1, latest
  • fabric: 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.4.1, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, latest
  • fisco-bcos: 2.0.0, latest
  • iroha: 0.6.3, latest
  • sawtooth: 1.0.0, 1.0.1, 1.0.2, 1.0.4, 1.0.5, latest

Note: the latest value always points to the last explicit version in each list above. However, it is recommended to explicitly specify the SDK version to avoid any surprises between two benchmark runs.
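For example, to bind explicitly to the Fabric 1.4.7 SDK:

user@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:1.4.7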

The bind command is useful when you plan to run multiple benchmarks against the same SUT version. Bind once, then run different benchmarks without the need to bind again. As you will see in the next sections, the launcher commands for the master and worker processes can also perform the binding step if the required parameter is present.

Note: the built-in bindings can be overridden by setting the caliper-bind-file parameter to a YAML file path. The file must match the structure of the default binding file. This way you can use experimental SDK versions that are not (yet) officially supported by Caliper. This also means that we cannot provide help for such SDK versions!

The launch command

Caliper runs a benchmark by using worker processes to generate the workload, and by using a master process to coordinate the different benchmark rounds among the worker processes. Accordingly, the CLI provides commands for launching both master and worker processes.

To have a look at the help page of the command, execute:

user@ubuntu:~/caliper-benchmarks$ npx caliper launch --help
caliper launch <subcommand>

Launch a Caliper process either in a master or worker role.

Commands:
  caliper launch master [options]  Launch a Caliper master process to coordinate the benchmark run
  caliper launch worker [options]  Launch a Caliper worker process to generate the benchmark workload

Options:
  --help, -h  Show usage information  [boolean]
  --version   Show version information  [boolean] 

The launch master command

The Caliper master process can be considered as the entry point of a distributed benchmark run. It coordinates (and optionally spawns) the worker processes throughout the benchmark run.

To have a look at the help page of the command, execute:

user@ubuntu:~/caliper-benchmarks$ npx caliper launch master --help
Usage:
 caliper launch master --caliper-bind-sut fabric:1.4.1 [other options]

Options:
  --help, -h           Show usage information  [boolean]
  --version            Show version information  [boolean]
  --caliper-bind-sut   The name and version of the platform to bind to  [string]
  --caliper-bind-cwd   The working directory for performing the SDK install  [string]
  --caliper-bind-args  Additional arguments to pass to "npm install". Use the "=" notation when setting this parameter  [string]
  --caliper-bind-file  Yaml file to override default (supported) package versions when binding an SDK  [string]

As you can see, the launch master command can also process the parameters of the bind command, just in case you would like to perform the binding and the benchmark run in one step.

However, the command requires the following parameters to be set:

  • caliper-workspace: the directory serving as the root of your project. Every relative path in other configuration files or settings will be resolved from this directory. The workspace concept was introduced to make Caliper projects portable across different machines.
  • caliper-benchconfig: the path of the file containing the configuration of the test rounds, as detailed in the Architecture page. Should be relative to the workspace path.
  • caliper-networkconfig: the path of the file containing the network configuration/description for the selected SUT, detailed in the configuration pages of the respective adapters. Should be relative to the workspace path.
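Putting the mandatory parameters together, a typical master launch from the caliper-benchmarks workspace looks like the following (the artifact paths are examples from this documentation; adjust them to your own benchmark):

user@ubuntu:~/caliper-benchmarks$ npx caliper launch master \
    --caliper-workspace . \
    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \
    --caliper-networkconfig networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml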

The launch worker command

The Caliper worker processes are responsible for generating the workload during the benchmark run. Usually more than one worker process is running, coordinated by the single master process.

To have a look at the help page of the command, execute:

user@ubuntu:~/caliper-benchmarks$ npx caliper launch worker --help
Usage:
 caliper launch worker --caliper-bind-sut fabric:1.4.1 [other options]

Options:
  --help, -h           Show usage information  [boolean]
  --version            Show version information  [boolean]
  --caliper-bind-sut   The name and version of the platform to bind to  [string]
  --caliper-bind-cwd   The working directory for performing the SDK install  [string]
  --caliper-bind-args  Additional arguments to pass to "npm install". Use the "=" notation when setting this parameter  [string]
  --caliper-bind-file  Yaml file to override default (supported) package versions when binding an SDK  [string]

As you can see, you can configure the worker processes the same way as the master process, including the optional binding step and the three mandatory parameters mentioned in the previous section.
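For example, a worker process can be launched with the same mandatory parameters (illustrative paths, assuming the binding was performed beforehand):

user@ubuntu:~/caliper-benchmarks$ npx caliper launch worker \
    --caliper-workspace . \
    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \
    --caliper-networkconfig networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml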

Installing from NPM

Caliper is published as the @hyperledger/caliper-cli NPM package, providing a single point of install for every supported adapter.

Versioning semantics

Before explaining the steps for installing Caliper, let’s take a look at the Versions page of the CLI package. You will see a list of tags and versions. If you are new to NPM, think of versions as immutable pointers to a specific version (duh) of the source code, while tags are mutable pointers to a specific version. So tags can change where they point to. Easy, right?

But why is all this important to you? Because Caliper is still in its pre-release life-cycle (< v1.0.0), meaning that even minor version bumps are allowed to introduce breaking changes. And if you use Caliper in your project, you might run into some surprises depending on how you install Caliper from time to time.

Note: Until Caliper reaches v1.0.0, always use the explicit version numbers when installing from NPM. So let’s forget about the latest and unstable tags, as of now they are just a mandatory hindrance of NPM. As you will see, we deliberately do not provide such tags for the Docker images.

Now that we ignored the tags, let’s see the two types of version numbers you will encounter:

  • 0.2.0: Version numbers of this form denote releases deemed stable by the maintainers. Such versions have a corresponding GitHub tag, both in the caliper and caliper-benchmarks repositories. Moreover, the latest stable version is documented by the latest version of the documentation page. So make sure to align the different versions if you run into some issue.
  • 0.3.0-unstable-20200206065953: Such version “numbers” denote unstable releases that are published upon every merged pull request (hence the timestamp at the end), and eventually will become a stable version, e.g., 0.3.0. This way you always have access to the NPM (and Docker) artifacts pertaining to the master branch of the repository. Let’s find and fix the bugs of new features before they make it to the stable release!

Note: The newest unstable release always corresponds to the up-to-date version of the related repositories, and the vNext version of the documentation page!

Pre-requisites

The following tools are required to install the CLI from NPM:

  • node-gyp, python2, make, g++ and git (for fetching and compiling some packages during install)
  • Node.js v8.X LTS or v10.X LTS (for running Caliper)
  • Docker and Docker Compose (only needed when running local examples, or using Caliper through its Docker image)

Local NPM install

Note: this is the highly recommended way to install Caliper for your project. Keeping the project dependencies local makes it easier to set up multiple Caliper projects. Global dependencies would require re-binding every time before a new benchmark run (to ensure the correct global dependencies).

  1. Set your NPM project details with npm init (or just execute npm init -y) in your workspace directory (if you haven’t done this already, i.e., you don’t have a package.json file).
  2. Install the Caliper CLI as you would any other NPM package. It is highly recommended to explicitly specify the version number, e.g., @hyperledger/caliper-cli@0.2.0
  3. Bind the CLI to the required platform SDK (e.g., fabric with the 1.4.0 SDK).
  4. Invoke the local CLI binary (using npx) with the appropriate parameters. You can repeat this step for as many Fabric 1.4.0 benchmarks as you would like.

Putting it all together:

user@ubuntu:~/caliper-benchmarks$ npm init -y
user@ubuntu:~/caliper-benchmarks$ npm install --only=prod \
    @hyperledger/caliper-cli@0.2.0
user@ubuntu:~/caliper-benchmarks$ npx caliper bind \
    --caliper-bind-sut fabric:1.4.0
user@ubuntu:~/caliper-benchmarks$ npx caliper launch master \
    --caliper-workspace . \
    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \
    --caliper-networkconfig networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml

We could also perform the binding automatically when launching the master process (note the extra parameter for caliper launch master):

user@ubuntu:~/caliper-benchmarks$ npm init -y
user@ubuntu:~/caliper-benchmarks$ npm install --only=prod \
    @hyperledger/caliper-cli@0.2.0
user@ubuntu:~/caliper-benchmarks$ npx caliper launch master \
    --caliper-bind-sut fabric:1.4.0 \
    --caliper-workspace . \
    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \
    --caliper-networkconfig networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml

Note: specifying the --only=prod parameter in step 2 will ensure that the default latest SDK dependencies for every platform will not be installed. Since we perform an explicit binding anyway (and only for a single platform), this is the desired approach, while also saving some storage and time.

Note: always make sure that the versions of the SUT, the bound SDK and the used artifacts match!

Global NPM install

Note: make sure that you have a really good reason for installing the Caliper CLI globally. The recommended approach is the local install. That way your project is self-contained and you can easily set up multiple projects (in multiple directories), each targeting a different SUT (or just different SUT versions). Installing or re-binding dependencies globally can get tricky.

There are some minor differences compared to the local install:

  1. You don’t need a package.json file.
  2. You can perform the install, bind and run steps from anywhere (just specify the workspace accordingly).
  3. You need to install the CLI globally (-g flag).
  4. You need to tell the binding step to install the packages also globally (--caliper-bind-args parameter).
  5. You can omit the npx command, since caliper will be in your PATH.
user@ubuntu:~$ npm install -g --only=prod @hyperledger/caliper-cli@0.2.0
user@ubuntu:~$ caliper bind \
    --caliper-bind-sut fabric:1.4.0 \
    --caliper-bind-args=-g
user@ubuntu:~$ caliper launch master \
    --caliper-workspace ~/caliper-benchmarks \
    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \
    --caliper-networkconfig networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml

Note: for a global install you don't need to change to your workspace directory; you can simply specify --caliper-workspace ~/caliper-benchmarks. However, that way you can't use your command line's auto-completion for the relative paths of the artifacts.

Depending on your NPM settings, your user might need write access to directories outside of its home directory, which usually results in "Access denied" errors. The following pointers can help you circumvent the problem.
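One common workaround (taken from the general NPM documentation, not specific to Caliper) is to direct global installs to a user-writable prefix directory:

user@ubuntu:~$ mkdir ~/.npm-global
user@ubuntu:~$ npm config set prefix '~/.npm-global'
user@ubuntu:~$ export PATH=~/.npm-global/bin:$PATH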

Using the Docker image

Caliper is published as the hyperledger/caliper Docker image, providing a single point of usage for every supported adapter. The image builds upon the node:10.16-alpine image to keep the image size as low as possible.

The important properties of the image are the following:

  • Working directory: /hyperledger/caliper/workspace
  • The commands are executed by the node user (created in the base image)
  • The environment variable CALIPER_WORKSPACE is set to the /hyperledger/caliper/workspace directory
  • The entry point is the globally installed caliper binary
  • The environment variable CALIPER_BIND_ARGS is set to -g, so the binding step also occurs globally.
  • The default command is set to --version. This must be overridden when using the image.

This has the following implications:

  1. It is recommended to mount your local workspace to the /hyperledger/caliper/workspace container directory. The default CALIPER_WORKSPACE environment variable value points to this location, so you don't need to specify it explicitly; that's one less setting to modify.
  2. You need to choose a command to execute, either launch master or launch worker. Check the Docker and Docker-Compose examples for the exact syntax.
  3. The binding step is still necessary, similarly to the NPM install approach. Whether you use the launch master or launch worker command, you only need to set the required binding parameter. The easiest way to do this is through the CALIPER_BIND_SUT environment variable.
  4. You need to set the required parameters for the launched master or worker. The easiest way to do this is through the CALIPER_BENCHCONFIG and CALIPER_NETWORKCONFIG environment variables.

Starting a container

Starting a Caliper container (following the recommendations above) consists of the following steps:

  1. Pick the required image version
  2. Mount your local working directory to a container directory
  3. Set the required binding and run parameters

Note: the latest (or any other) tag is not supported, i.e., you explicitly have to specify the image version you want, e.g., hyperledger/caliper:0.2.0, just like the recommended approach for the NPM packages.

Putting it all together, split into multiple lines for clarity, and naming the container caliper:

user@ubuntu:~/caliper-benchmarks$ docker run \
    -v $PWD:/hyperledger/caliper/workspace \
    -e CALIPER_BIND_SUT=fabric:1.4.0 \
    -e CALIPER_BENCHCONFIG=benchmarks/scenario/simple/config.yaml \
    -e CALIPER_NETWORKCONFIG=networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml \
    --name caliper hyperledger/caliper:0.2.0 launch master

Note: the above network configuration file contains a start script to spin up a local Docker-based Fabric network, which will not work in this form. So make sure to remove the start (and end) script, and change the node endpoints to remote addresses.

Using docker-compose

The above command is more readable when converted to a docker-compose.yaml file:

version: '2'

services:
    caliper:
        container_name: caliper
        image: hyperledger/caliper:0.2.0
        command: launch master
        environment:
        - CALIPER_BIND_SUT=fabric:1.4.0
        - CALIPER_BENCHCONFIG=benchmarks/scenario/simple/config.yaml
        - CALIPER_NETWORKCONFIG=networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml
        volumes:
        - ~/caliper-benchmarks:/hyperledger/caliper/workspace

Once you navigate to the directory containing the docker-compose.yaml file, just execute:

docker-compose up

Note: if you would like to test a locally deployed SUT, then you also need to add the necessary SUT containers to the above file and make sure that Caliper starts last (using the depends_on attribute).
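A minimal sketch of such a setup, assuming a single, hypothetical SUT container named fabric-peer defined in the same file (the SUT image name is only a placeholder):

version: '2'

services:
    fabric-peer:
        image: hyperledger/fabric-peer:1.4.0
        # ... the rest of the SUT configuration ...
    caliper:
        container_name: caliper
        image: hyperledger/caliper:0.2.0
        command: launch master
        depends_on:
        - fabric-peer
        environment:
        - CALIPER_BIND_SUT=fabric:1.4.0
        - CALIPER_BENCHCONFIG=benchmarks/scenario/simple/config.yaml
        - CALIPER_NETWORKCONFIG=networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml
        volumes:
        - ~/caliper-benchmarks:/hyperledger/caliper/workspace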

Installing locally from source

Note: this section is intended only for developers who would like to modify the Caliper code-base and experiment with the changes locally before raising pull requests. You should perform the following steps every time you make a modification you want to test, to correctly propagate any changes.

The workflow of modifying the Caliper code-base usually consists of the following steps:

  1. Bootstrapping the repository
  2. Modifying and testing the code
  3. Publishing package changes locally
  4. Building the Docker image

Bootstrapping the Caliper repository

To install the basic dependencies of the repository, and to resolve the cross-references between the different packages in the repository, you must execute the following commands from the root of the repository directory:

  1. npm i: Installs development-time dependencies, such as Lerna and the license checking package.
  2. npm run repoclean: Cleans up the node_modules directory of all packages in the repository. Not needed for a freshly cloned repository.
  3. npm run bootstrap: Installs the dependencies of all packages in the repository and links any cross-dependencies between the packages. This will take some time to finish. If it is interrupted with Ctrl+C, restore the package.json file first and then run npm run bootstrap again.

Or as a one-liner:

user@ubuntu:~/caliper$ npm i && npm run repoclean -- --yes && npm run bootstrap 

Note: do not run any of the above commands with sudo, as it will cause the bootstrap process to fail.

Testing the code

The easiest way to test your changes is to run the CI process locally. Currently, the CI process runs benchmarks for specific adapters. You can trigger these tests by running the following script from the root directory of the repository, setting the BENCHMARK environment variable to the platform name:

user@ubuntu:~/caliper$ BENCHMARK=fabric ./.travis/benchmark-integration-test-direct.sh 

The following platform tests (i.e., valid BENCHMARK values) are available:

  • besu
  • ethereum
  • fabric
  • fisco-bcos
  • sawtooth

The script will perform the following tests (which are also necessary for a successful pull request):

  • Linting checks
  • Licence header checks
  • Unit tests
  • Running sample benchmarks

If you would like to run other examples, then you can directly access the CLI in the packages/caliper-cli directory, without publishing anything locally.

Note: the SDK dependencies in this case are fixed (the binding step is not supported with this approach), and you can check (and change) them in the package.json files of the corresponding packages. If you change them, the repository needs to be bootstrapped again.

user@ubuntu:~/caliper$ node ./packages/caliper-cli/caliper.js launch master \
    --caliper-workspace ~/caliper-benchmarks \
    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \
    --caliper-networkconfig networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml

Publishing to local NPM repository

The NPM publishing and installing steps for the modified code-base can be tested through a local NPM proxy server, Verdaccio. The steps to perform are the following:

  1. Start a local Verdaccio server to publish to
  2. Publish the packages from the local (and possibly modified) Caliper repository to the Verdaccio server
  3. Install and bind the CLI from the Verdaccio server
  4. Run the integration tests or any sample benchmark

The packages/caliper-publish directory contains an internal CLI that makes these steps easy to manage. Accordingly, the commands in the following sections must be executed from the packages/caliper-publish directory:

user@ubuntu:~/caliper$ cd ./packages/caliper-publish 

Note: use the --help flag for the following CLI commands and sub-commands to find out more details.

Starting Verdaccio

To set up and start a local Verdaccio server, simply run the following command:

user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js verdaccio start
...
[PM2] Spawning PM2 daemon with pm2_home=.pm2
[PM2] PM2 Successfully daemonized
[PM2] Starting /home/user/projects/caliper/packages/caliper-tests-integration/node_modules/.bin/verdaccio in fork_mode (1 instance)
[PM2] Done.
┌───────────┬────┬──────┬────────┬────────┬─────────┬────────┬─────┬───────────┬────────┬──────────┐
│ App name  │ id │ mode │ pid    │ status │ restart │ uptime │ cpu │ mem       │ user   │ watching │
├───────────┼────┼──────┼────────┼────────┼─────────┼────────┼─────┼───────────┼────────┼──────────┤
│ verdaccio │ 0  │ fork │ 115203 │ online │ 0       │ 0s     │ 3%  │ 25.8 MB   │ user   │ disabled │
└───────────┴────┴──────┴────────┴────────┴─────────┴────────┴─────┴───────────┴────────┴──────────┘
 Use `pm2 show <id|name>` to get more details about an app

 

The Verdaccio server is now listening on the following address: http://localhost:4873

Publishing the packages

Once Verdaccio is running, you can run the following command to publish every Caliper package locally:

user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js npm --registry "http://localhost:4873"
...
+ @hyperledger/caliper-core@0.3.0-unstable-20200206065953
[PUBLISH] Published package @hyperledger/caliper-core@0.3.0-unstable-20200206065953
...
+ @hyperledger/caliper-fabric@0.3.0-unstable-20200206065953
[PUBLISH] Published package @hyperledger/caliper-fabric@0.3.0-unstable-20200206065953
...
+ @hyperledger/caliper-cli@0.3.0-unstable-20200206065953
[PUBLISH] Published package @hyperledger/caliper-cli@0.3.0-unstable-20200206065953

 

Take note of the dynamic version number you see in the logs; you will need it to install your modified Caliper version from Verdaccio (the unstable tag is also present on NPM, so Verdaccio would probably pull that version instead of your local one).

Since the published packages include a second-precision timestamp in their versions, you can republish any changes immediately without restarting the Verdaccio server and without worrying about conflicting packages.

Running package-based tests

Once the packages are published to the local Verdaccio server, we can use the usual NPM install approach. The only difference is that now we specify the local Verdaccio registry as the install source instead of the default, public NPM registry:

user@ubuntu:~/caliper-benchmarks$ npm init -y
user@ubuntu:~/caliper-benchmarks$ npm install --registry=http://localhost:4873 --only=prod \
    @hyperledger/caliper-cli@0.3.0-unstable-20200206065953
user@ubuntu:~/caliper-benchmarks$ npx caliper bind --caliper-bind-sut fabric:1.4.0
user@ubuntu:~/caliper-benchmarks$ npx caliper launch master \
    --caliper-workspace . \
    --caliper-benchconfig benchmarks/scenario/simple/config.yaml \
    --caliper-networkconfig networks/fabric/fabric-v1.4.1/2org1peergoleveldb/fabric-go.yaml 

Note: we used the local registry only for the Caliper packages. The binding happens through the public NPM registry. Additionally, we performed the commands through npx and the newly installed CLI binary (i.e., not directly calling the CLI code file).

Building the Docker image

Once the modified packages are published to the local Verdaccio server, you can rebuild the Docker image. The Dockerfile is located in the packages/caliper-publish directory.

To rebuild the Docker image, execute the following:

user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js docker
...
Successfully tagged hyperledger/caliper:0.3.0-unstable-20200206065953
[BUILD] Built Docker image "hyperledger/caliper:0.3.0-unstable-20200206065953"

Now you can proceed with the Docker-based benchmarking as described in the previous sections.

Note: once you are done with the locally published packages, you can clean them up the following way:

user@ubuntu:~/caliper/packages/caliper-publish$ ./publish.js verdaccio stop

Simple Example

 

Architecture

(Architecture diagram)

Adaptation Layer

The adaptation layer integrates existing blockchain systems into the Caliper framework. Each adapter implements the 'Caliper Blockchain NBI' using the corresponding blockchain SDK or RESTful API. Hyperledger Fabric 1.0-1.4, Sawtooth, Iroha, and Burrow are currently supported; support for Ethereum and other blockchain systems will follow.

Interface & Core Layer

The interface and core layer provides the Blockchain NBI, resource monitoring, performance monitoring, and report generation modules, and exposes four corresponding northbound interfaces to upper-layer applications:

  • Blockchain operating interfaces: operations such as deploying smart contracts on the backend blockchain, invoking contracts, and querying state from the ledger.
  • Resource Monitor: operations to start/stop monitors and to fetch the resource consumption of the backend blockchain system, including CPU, memory, network IO, etc. Two monitors are currently provided: one watches local/remote Docker containers, the other watches local processes. More will be implemented in the future.
  • Performance Analyzer: operations to read predefined performance statistics (including TPS, latency, number of successful transactions, etc.) and to print benchmark results. While the blockchain northbound interfaces are being invoked, the key metrics of every transaction (such as the creation time, the submission time, and the returned result) are recorded, and are used to generate the final statistics for the predefined performance metrics.
  • Report Generator: generates the HTML-format test report.

Application Layer

The application layer contains tests implemented for typical blockchain scenarios. Each test comes with a configuration file that defines the backend blockchain network information and the test parameters. Based on these configurations, the performance test of a blockchain system can be carried out.

A default benchmark engine is provided to help developers understand the framework and quickly implement their own tests. The following sections describe how to use the benchmark engine. Of course, developers may also skip the test framework and use the NBIs directly to test their blockchain system.

Benchmark Engine

(Benchmark engine diagram)

Configuration File

Two kinds of configuration files are used. One is the benchmark configuration file, which defines the benchmark parameters, such as the workload. The other is the blockchain network configuration file, which specifies the information needed to interact with the system under test (SUT).

Below is an example benchmark configuration file:

test:
  name: simple
  description: This is an example benchmark for caliper
  clients:
    type: local
    number: 5
  rounds:
  - label: open
    txNumber:
    - 5000
    - 5000
    - 5000
    rateControl:
    - type: fixed-rate
      opts: 
        tps: 100
    - type: fixed-rate
      opts:
        tps: 200
    - type: fixed-rate
      opts:
        tps: 300
    arguments:
      money: 10000
    callback: benchmark/simple/open.js
  - label: query
    txNumber:
    - 5000
    - 5000
    rateControl:
    - type: fixed-rate
      opts:
        tps: 300
    - type: fixed-rate
      opts:
        tps: 400
    callback" : benchmark/simple/query.js
monitor:
  type:
  - docker
  - process
  docker:
    name:
    - peer0.org1.example.com
    - http://192.168.1.100:2375/orderer.example.com
  process:
  - command: node
    arguments: local-client.js
    multiOutput: avg
  interval: 1

 

  • test - defines the metadata of the test and the rounds to run under the specified workloads.
    • name & description: the test name and its description; this information is used by the report generator and shown in the test report.
    • clients: defines the client type and related parameters, where 'type' should be set to 'local'.
      • local: in this case, the Caliper master process forks multiple child processes, each acting as a client that submits transactions to the backend blockchain system. The number of clients is defined by 'number'.
    • label: the label of the current round. For example, you can use the purpose of the submitted transactions (such as opening accounts) as the label to indicate the transaction type under test. The value is also used as the context name passed to blockchain.getContext(). As another example, a developer may want to test the performance of different Fabric channels; in that case, rounds with different labels can be bound to different channels.
    • txNumber: defines an array of sub-rounds, each with a different number of transactions. For example, [5000, 400] means a total of 5000 transactions are generated in the first sub-round and 400 in the second.
    • txDuration: defines an array of time-based sub-rounds. For example, [150, 400] means two tests are run, the first for 150 seconds and the second for 400 seconds. If both txNumber and txDuration are specified in the configuration file, the txDuration setting takes precedence.
    • rateControl: defines the array of rate controllers used during the sub-rounds. If unspecified, it defaults to 'fixed-rate', which starts the test by sending transactions at 1 TPS. If defined, make sure the name of the selected rate control mechanism is correct and that the corresponding send rate and required parameters are provided. In each round, every txNumber or txDuration entry has a corresponding rate control entry in rateControl. See the Rate Control section for the available rate controllers and how to implement a custom one.
    • trim: performs a trim operation on the client results to eliminate the influence of the warm-up and cool-down phases. If a trim interval is specified, it is applied to the results of that round. For example, in txNumber mode a value of 30 means the results of the first and last 30 transactions sent by each client are trimmed; in txDuration mode the results of the transactions sent during the first and last 30 seconds by each client are discarded. (A configuration sketch is shown after this list.)
    • arguments: user-defined arguments, passed on to the user-defined test module.
    • callback: specifies the user-defined test module for this round. See User defined test module for details.
  • monitor - defines the types of resource monitors and the monitored objects, as well as the monitoring interval.
    • docker: the docker monitor watches specified Docker containers on local or remote hosts. The Docker Remote API is used to retrieve the statistics of remote containers. The reserved container name 'all' means that all containers on the host are monitored. In the example above, the monitor retrieves the statistics of two containers every second: a local container named 'peer0.org1.example.com' and a remote container named 'orderer.example.com' on host 192.168.1.100, where 2375 is the listening port of Docker on that host.
    • process: the process monitor watches specified local processes. For example, users can use this monitor to measure the resource consumption of the simulated blockchain clients. The 'command' and 'arguments' properties identify the process. If multiple matching processes are found, the 'multiOutput' property defines the meaning of the output: 'avg' means the output is the average resource consumption of those processes, while 'sum' means the total consumption.
    • others: to be added later.
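As referenced in the trim item above, a sketch of a round that trims the first and last 30 transaction results of each client (assuming trim is set at the round level, alongside the other round attributes):

  - label: open
    txNumber:
    - 5000
    trim: 30
    rateControl:
    - type: fixed-rate
      opts:
        tps: 100
    arguments:
      money: 10000
    callback: benchmark/simple/open.js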

Master

The master process implements the default test flow, which consists of three phases:

  • Preparation phase: the master process uses the blockchain configuration file to create and initialize an internal blockchain object, deploys smart contracts as specified in the configuration, and starts monitoring objects to watch the resource consumption of the backend blockchain system.

  • Test phase: the master process runs the tests according to the configuration file, generating tasks based on the defined workload and assigning them to the client child processes. The performance statistics returned by each client are stored for later analysis.

  • Reporting phase: the statistics of all clients are analyzed for each test round, and an HTML-format report is generated automatically. A sample report looks like this:

(Sample report)

Clients

Local Clients

In this mode, the master process uses the Node.js cluster module to fork multiple local clients (child processes) to do the actual testing work. Since Node.js is single-threaded by nature, a local cluster can be used to improve client performance on multi-core machines.

In this mode, the total workload is divided equally among the child processes. Each child process acts as a blockchain client with a temporarily generated context to interact with the backend blockchain system. A context usually contains the client's identity and cryptographic materials, and is released when the test ends.

  • For Hyperledger Fabric, the context is also bound to a specific channel; this binding is defined in the Fabric configuration file.

During the test, the clients invoke the user-defined test module containing the custom test logic. The test module is explained below.

Local clients are launched at the beginning of the first test round and are destroyed after all tests are finished.

User Defined Test Module

This module implements the logic for generating and submitting transactions. This way, developers can implement their own test logic and integrate it with the benchmark engine. A test module implements three functions, all of which should return a Promise object.

  • init - called by the client at the beginning of each test round. The arguments are the current blockchain object, the context, and the user-defined arguments read from the benchmark configuration file. The blockchain object and the context can be saved here for later use; other initialization work can also be done in this function.
  • run - the actual transactions should be generated and submitted here, using Caliper's blockchain API. The client calls this function repeatedly according to the workload. It is recommended to submit only one transaction per call; if multiple transactions are submitted per call, the actual workload may differ from the configured one. Make sure the function runs asynchronously.
  • end - called at the end of each test round; any cleanup should be done here.
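A minimal skeleton of such a test module (a complete example is given in the Workload Modules section later; 'mycontract' and its arguments are placeholders):

'use strict';

// objects saved during init for use in run()
let bc, contx;

module.exports.init = async (blockchain, context, args) => {
    bc = blockchain;   // the Caliper blockchain object
    contx = context;   // the adapter-specific context
};

module.exports.run = async () => {
    // submit a single transaction per call, as recommended above
    return bc.invokeSmartContract(contx, 'mycontract', 'v1', { /* TX arguments */ }, 30);
};

module.exports.end = async () => {
    // nothing to clean up in this example
};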

Benchmark Configuration

  • Burrow Configuration
  • Ethereum Configuration
  • FISCO BCOS Configuration
  • Fabric Configuration
  • Iroha Configuration
  • Sawtooth Configuration

Overview

The benchmark configuration file is one of the required configuration files necessary to run a Caliper benchmark. In contrast to the runtime configurations, used for tweaking the internal behavior of Caliper, the benchmark configuration pertains only to the execution of the benchmark workload and collection of the results.

Note: In theory, a benchmark configuration is independent of the system under test (SUT) and the internal configuration of Caliper. However, this independence might be limited by the implementation details of the benchmark workload module, which could target only a single SUT type.

The benchmark configuration consists of three main parts:

  1. Test settings
  2. Observer settings
  3. Monitoring settings

For a complete benchmark configuration example, refer to the last section.

Note: The configuration file can be either a YAML or JSON file, conforming to the format described below. The benchmark configuration file path can be specified for the master and worker processes using the caliper-benchconfig setting key.

Benchmark test settings

The settings related to the benchmark workload all reside under the root test attribute, which has some general child attributes, and the important rounds attribute.

  • test.name: Short name of the benchmark to display in the report.
  • test.description: Detailed description of the benchmark to display in the report.
  • test.clients: Object of worker-related configurations.
  • test.clients.type: Currently unused.
  • test.clients.number: Specifies the number of worker processes to use for executing the workload.
  • test.rounds: Array of objects, each describing the settings of a round.
  • test.rounds[i].label: A short name of the round, usually corresponding to the type of submitted TXs.
  • test.rounds[i].txNumber: The number of TXs Caliper should submit during the round.
  • test.rounds[i].txDuration: The length of the round in seconds during which Caliper will submit TXs.
  • test.rounds[i].rateControl: The object describing the rate controller to use for the round.
  • test.rounds[i].callback: The path to the benchmark workload module that will construct the TXs to submit.
  • test.rounds[i].arguments: Arbitrary object that will be passed to the workload module as configuration.

A benchmark configuration with the above structure will define a benchmark run that consists of multiple rounds. Each round is associated with a rate controller that is responsible for the scheduling of TXs, and a workload module that will generate the actual content of the scheduled TXs.

Observer settings

The observer configuration determines how the master process gathers progress information from the worker processes. The configuration resides under the observer attribute. Refer to the observer configuration page for the details.

Monitoring settings

The monitoring configuration determines what kind of metrics the master process can gather and from where. The configuration resides under the monitor attribute. Refer to the monitor configuration page for the details.

Example

The example configuration below says the following:

  • Perform the benchmark run using 5 worker processes.
  • There will be two rounds.
  • The first init round will submit 500 TXs at a fixed 25 TPS send rate.
  • The content of the TXs are determined by the init.js workload module.
  • The second query round will submit TXs for 60 seconds at a fixed 5 TPS send rate.
  • The content of the TXs are determined by the query.js workload module.
  • The master process will observe the progress of the worker processes through a separate Prometheus instance every 5 seconds.
  • The master process should include the predefined metrics of all local Docker containers in the report.
  • The master process should include the custom metric Endorse Time (s) based on the provided query for every available (peer) instance.

test:
  clients:
    type: local
    number: 5
  rounds:
  - label: init
    txNumber: 500
    rateControl:
      type: fixed-rate
      opts:
        tps: 25
    callback: benchmarks/samples/fabric/marbles/init.js
  - label: query
    txDuration: 60
    rateControl:
    - type: fixed-rate
      opts:
        tps: 5
    callback: benchmarks/samples/fabric/marbles/query.js
observer:
  type: prometheus
  interval: 5
monitor:
  interval: 1
  type: ['docker', 'prometheus']
  docker:
    containers: ['all']
  prometheus:
    url: "http://prometheus:9090"
    push_url: "http://pushGateway:9091"
    metrics:
      ignore: [prometheus, pushGateway, cadvisor, grafana, node-exporter]
      include:
        Endorse Time (s):
          query: rate(endorser_propsal_duration_sum{chaincode="marbles:v0"}[5m])/rate(endorser_propsal_duration_count{chaincode="marbles:v0"}[5m])
          step: 1
          label: instance
          statistic: avg 

Workload Modules

Overview

Workload modules are the essence of a Caliper benchmark since it is their responsibility to construct and submit TXs. Accordingly, workload modules implement the logic pertaining to your business, benchmark or user behavior. Think of the workload modules as the brain of an emulated SUT client, deciding what kind of TX to submit at the given moment.

Implementing the workload module

Workload modules are Node.JS modules that expose a certain API. There are no further restrictions on the implementation, thus arbitrary logic (using further arbitrary components) can be implemented.

The API

A workload module must export the following three asynchronous functions:

  • init(blockchain: BlockchainInterface, context: object, args: object)

    The init function is called before a round is started. It receives:

    • the SUT adapter instance in the blockchain parameter;
    • the adapter-specific context created by the adapter (usually containing additional data about the network);
    • and the user-provided settings object as args which is set in the benchmark configuration file’s test.rounds[i].arguments attribute (if the workload module is configurable).
  • run() => Promise<TxResult[]>

    The run function is called every time the set rate controller enables the next TX. The function must assemble the content of the next TX (using arbitrary logic) and call the invokeSmartContract or querySmartContract functions of the blockchain adapter instance. See the adapter configuration pages for the exact usage of the mentioned functions.

    At the end, the function must return the result of the invoke/query call!

  • end()

    The end function is called after the round has ended. The workload module can perform resource cleanup or any other maintenance activity at this point.

Example

A complete (albeit simple) example of a workload module implementation:

/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

'use strict';

const logger = require('@hyperledger/caliper-core').CaliperUtils.getLogger('my-module');

// save the objects during init
let bc, contx;

/**
* Initializes the workload module before the start of the round.
* @param {BlockchainInterface} blockchain The SUT adapter instance.
* @param {object} context The SUT-specific context for the round.
* @param {object} args The user-provided arguments for the workload module.
*/
module.exports.init = async (blockchain, context, args) => {
    bc = blockchain;
    contx = context;
    logger.debug('Initialized workload module');
};

module.exports.run = async () => {
    let txArgs = {
        // TX arguments for "mycontract"
    };
    
    return bc.invokeSmartContract(contx, 'mycontract', 'v1', txArgs, 30);
};

module.exports.end = async () => {
    // Noop
    logger.debug('Disposed of workload module');
};

 

Configuring the workload module

To use your workload module for a given round, you only need to reference it in the benchmark configuration file:

  1. Set the test.rounds[i].callback attribute to the path of your workload module file. The path can be either an absolute path, or a relative path to the configured workspace path.
  2. If your module supports different settings, set the test.rounds[i].arguments attribute object accordingly. It will be passed to your module upon initialization.
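For example, reusing the open round from the earlier sample configuration (abridged), the module reference and its arguments look like this:

  rounds:
  - label: open
    txNumber:
    - 5000
    rateControl:
    - type: fixed-rate
      opts:
        tps: 100
    arguments:
      money: 10000
    callback: benchmark/simple/open.js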

Tips & Tricks

The following advice might help you improve your workload module implementation.

  1. You can use (require) any Node.JS module in your code (including the core Caliper module). Modularization is important for keeping your implementation clean and manageable.
  2. If you use third-party modules, then it is your responsibility to make them available to your workload module. This usually requires an npm install call in your module directory before you start Caliper.
  3. Caliper provides some core utilities that might make your life easier, such as logging and runtime configuration. Use them, don’t reinvent the wheel!
  4. The run function is on the hot path of the worker's workload generation loop. Perform computation-intensive tasks with care; they might hurt the scheduling precision of TXs! You can do expensive pre-processing in the init function instead, as the sketch below shows.
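To illustrate the last tip, the sketch below moves the expensive argument generation into init, so that run only dequeues a prepared argument set (the txCount round argument and the argument structure are hypothetical):

'use strict';

let bc, contx;
let preparedArgs = [];

module.exports.init = async (blockchain, context, args) => {
    bc = blockchain;
    contx = context;
    // expensive pre-processing: prepare the TX arguments up front
    const txCount = args.txCount || 1000; // hypothetical round argument
    for (let i = 0; i < txCount; i++) {
        preparedArgs.push({ account: `acc_${i}`, money: 100 }); // placeholder arguments
    }
};

module.exports.run = async () => {
    // hot path: only dequeue a prepared argument set and submit it
    const txArgs = preparedArgs.pop();
    return bc.invokeSmartContract(contx, 'mycontract', 'v1', txArgs, 30);
};

module.exports.end = async () => {
    preparedArgs = [];
};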

References

doc:https://hyperledger.github.io/caliper/vLatest/getting-started/

github: https://github.com/hyperledger/caliper

wiki:

https://wiki.hyperledger.org/display/caliper 

https://hyperledger.github.io/caliper/

https://hyperledger.github.io/caliper/vLatest/getting-started/

Related posts:

https://www.codercto.com/a/33837.html

