This is an example of how to set up the project with the multichannel network on a fresh Ubuntu 18.04 / 16.04 virtual machine. The instructions should also work on macOS.
- Install Prerequisites
- Clone the repository
- Start / stop a Hyperledger Fabric network using the multichannel configuration (see the multichannel page for multi-channel configuration)
- Create users and transactions using the dummy application
- Start Elastic stack
- Fabricbeat agent
- Configuring Indices for the first time in Kibana
- Viewing dashboards that store data
- Starting more instances of fabricbeat agent
Please make sure that you have set up the environment for the project. Follow the steps listed in Prerequisites.
To get started with the project, clone the git repository. If you want to build Fabricbeat yourself, it is important that you place the project under $GOPATH/src/github.com. Otherwise, you can clone the repository anywhere you want (you do not need to install Go to use the pre-compiled executable or the Docker image).
$ mkdir -p $GOPATH/src/github.com
$ cd $GOPATH/src/github.com
$ git clone https://github.com/hyperledger-labs/blockchain-analyzer.git
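A quick sanity check for the workspace layout, sketched below; it assumes the default GOPATH of $HOME/go when the variable is unset:

```shell
# Default GOPATH to $HOME/go if it is not set (the Go toolchain default).
GOPATH="${GOPATH:-$HOME/go}"
REPO_DIR="$GOPATH/src/github.com/blockchain-analyzer"

if [ -d "$REPO_DIR/.git" ]; then
  echo "source checkout found at $REPO_DIR"
else
  # This is fine if you use the prebuilt executable or the Docker image.
  echo "no checkout under GOPATH (only needed when building Fabricbeat yourself)"
fi
```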
The multichannel network is a test network with four organizations, two peers per organization, a solo orderer communicating over TLS, and two channels:
fourchannel:
- members: all four organizations
- chaincode: dummycc, which writes deterministically generated hashes and (optionally) previous keys as values to the ledger
twochannel:
- members: only Org1 and Org3
- chaincode: fabcar, the classic fabcar example chaincode extended with a getHistoryForCar() chaincode function
The sample chaincode called dummycc (used in the basic network) also works with multichannel. It writes deterministically generated hashes and (optionally) previous keys as values to the ledger.
Issue the following command in the network/multichannel directory:
make start
Enter the Fabric CLI Docker container by issuing the command:
docker exec -it cli bash
Inside the CLI, the /scripts folder contains the scripts that can be used to install, instantiate and invoke chaincode (though the make start command takes care of installation and instantiation).
To stop the network and delete all the generated data (crypto material, channel artifacts and dummyapp wallet), run:
make destroy
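The start/stop cycle above can be scripted; a sketch, assuming Docker is running and the commands are issued from the repository root:

```shell
NETWORK_DIR="network/multichannel"   # relative to the repository root

if command -v docker >/dev/null 2>&1 && [ -d "$NETWORK_DIR" ]; then
  (cd "$NETWORK_DIR" && make start)                  # bring up the network
  docker ps --filter name=cli --format '{{.Names}}'  # the Fabric CLI container should be listed
  (cd "$NETWORK_DIR" && make destroy)                # tear down and delete generated data
else
  echo "Docker or $NETWORK_DIR not available here; skipping"
fi
```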
The dummyapp application is used to create users and generate transactions for different scenarios, so that the resulting transactions can be analyzed with the Elastic stack. Scenarios differ, for example, in which channels are used and which Fabric CA users invoke the transactions.
This application can connect to both the basic and the multichannel networks.
The commands in this section should be issued from the blockchain-analyzer/apps/dummyapp directory.
Before the first run, we have to install the necessary node modules:
npm install
Take a look at the config.json file. It contains the transactions that will be invoked on the network.
To enroll admins, register and enroll users, run the following command:
NETWORK=multichannel CHANNEL=fourchannel make users
To add key-value pairs, run:
NETWORK=multichannel CHANNEL=fourchannel make invoke
To query a specific key, run:
NETWORK=multichannel CHANNEL=fourchannel make query KEY=Key1
To query all key-value pairs, run:
NETWORK=multichannel CHANNEL=fourchannel make query-all
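Put together, a full dummyapp session against the multichannel network might look like the sketch below (run from blockchain-analyzer/apps/dummyapp; the guard skips the targets when issued elsewhere):

```shell
# Target network and channel, matching the examples above.
export NETWORK=multichannel CHANNEL=fourchannel

if [ -f Makefile ] && [ -f config.json ]; then
  npm install            # first run only
  make users             # enroll admins, register and enroll users
  make invoke            # add the key-value pairs defined in config.json
  make query KEY=Key1    # query a single key
  make query-all         # query every key-value pair
else
  echo "not in the dummyapp directory; skipping"
fi
```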
This project includes an Elasticsearch and Kibana setup to index and visualize blockchain data.
The commands in this section should be issued from the blockchain-analyzer/stack folder.
If you are working on a machine with low memory, the Elasticsearch container may not start. In this case, issue the following command:
sudo sysctl -w vm.max_map_count=262144
to set the vm.max_map_count kernel setting to 262144, then destroy and bring up the Elastic Stack again.
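A sketch that checks the current value before changing it (it reads /proc, so the check only applies on Linux):

```shell
REQUIRED=262144
# Read the current setting; fall back to 0 where /proc is unavailable (e.g. macOS).
CURRENT=$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo 0)

if [ "$CURRENT" -lt "$REQUIRED" ]; then
  echo "vm.max_map_count=$CURRENT is too low for Elasticsearch; raise it with:"
  echo "  sudo sysctl -w vm.max_map_count=$REQUIRED"
else
  echo "vm.max_map_count=$CURRENT is sufficient"
fi
```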
This setup is borrowed from https://github.com/maxyermayank/docker-compose-elasticsearch-kibana
To start the containers, navigate to blockchain-analyzer/stack directory and issue:
make start
To view Kibana in a browser, navigate to http://localhost:5601. Depending on your machine configuration, it can take some time (2-5 minutes) for Kibana to start.
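Instead of refreshing the browser, you can poll Kibana from a shell. This is a hypothetical helper, not part of the project; it requires curl, and the URL and wait time follow the text above:

```shell
# Poll the given URL until it answers, sleeping 10s between attempts.
wait_for_kibana() {
  url="${1:-http://localhost:5601}"
  tries="${2:-30}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -s -o /dev/null "$url"; then
      echo "Kibana is up at $url"
      return 0
    fi
    echo "waiting for Kibana ($i/$tries)..."
    sleep 10
    i=$((i+1))
  done
  echo "Kibana did not answer after $tries attempts"
  return 1
}
# wait_for_kibana   # uncomment to block until Kibana is reachable
```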
To stop the containers, issue
make destroy
To stop the containers and remove old data, run
make erase
The fabricbeat beats agent is responsible for connecting to a specified peer, periodically querying its ledger, processing the data and shipping it to Elasticsearch. Multiple instances can be run at the same time, each querying a different peer and sending its data to the Elasticsearch cluster.
Fabricbeat agent is also available as a Docker image (balazsprehoda/fabricbeat). You can use this image, or build it using the command
$ docker build -t <IMAGE NAME> .
from the project root directory.
To start the agent, you have to mount two configuration files, the necessary crypto materials and the folders that contain kibana dashboards and templates:
- fabricbeat.yml: configuration file for the agent (see blockchain-analyzer/agent/fabricbeat/fabricbeat.yml for reference)
- connection profile YAML file referenced from fabricbeat.yml
- crypto materials referenced from the connection profile and fabricbeat.yml
- Kibana dashboard and template directories referenced as dashboardDirectory and templateDirectory in the configuration file
If you use environment variables in the configuration file, do not forget to set these variables in the container!
For a sample Docker setup, see /blockchain-analyzer/docker-agent/.
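One possible docker run invocation with those mounts is sketched below. The host file names and container-side paths here are placeholders for illustration, not documented defaults; the maintained example lives in /blockchain-analyzer/docker-agent/:

```shell
IMAGE="balazsprehoda/fabricbeat"   # image name from the text above

# Guard: only attempt the run when Docker and a local fabricbeat.yml exist.
if command -v docker >/dev/null 2>&1 && [ -f fabricbeat.yml ]; then
  docker run --rm \
    -v "$PWD/fabricbeat.yml:/fabricbeat/fabricbeat.yml" \
    -v "$PWD/connection-profile.yml:/fabricbeat/connection-profile.yml" \
    -v "$PWD/crypto:/fabricbeat/crypto" \
    -v "$PWD/dashboards:/fabricbeat/dashboards" \
    -v "$PWD/templates:/fabricbeat/templates" \
    -e ORG_NUMBER=1 -e PEER_NUMBER=0 \
    "$IMAGE"
else
  echo "docker or fabricbeat.yml not found here; see docker-agent for a full setup"
fi
```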
The commands in this section should be issued from the blockchain-analyzer/agent/fabricbeat directory.
You can build the agent yourself, or you can use a pre-built one from the blockchain-analyzer/agent/fabricbeat/prebuilt directory. To use an executable from the prebuilt directory, choose the one appropriate for your system and copy it into the blockchain-analyzer/agent/fabricbeat folder.
Before configuring and building the fabricbeat agent, please make sure that the GOPATH variable is set correctly. Then, add $GOPATH/bin to the PATH:
export PATH=$PATH:$GOPATH/bin
Ensure that your Python version is 2.7.*.
Get module dependencies:
make go-get
Build the agent:
make update
make
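The environment checks and build steps above can be combined into one snippet; the make targets are only meaningful inside blockchain-analyzer/agent/fabricbeat, so they are left commented here:

```shell
# Default GOPATH to $HOME/go if unset, then add $GOPATH/bin to PATH.
export GOPATH="${GOPATH:-$HOME/go}"
export PATH="$PATH:$GOPATH/bin"

# Warn if Python 2.7 is not the default python (required for the build).
python --version 2>&1 | grep -q 'Python 2\.7' || echo "warning: Python 2.7.* expected"

# Inside blockchain-analyzer/agent/fabricbeat, uncomment to build:
# make go-get && make update && make
```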
To start the agent, issue the following command from the fabricbeat directory:
./fabricbeat -e -d "*"
To use the agent with the multichannel network from the blockchain-analyzer/network folder, you can start the agent using:
ORG_NUMBER=1 PEER_NUMBER=0 NETWORK=multichannel ./fabricbeat -e -d "*"
The variables passed are used in the configuration (fabricbeat.yml). To connect to another network or peer, change the configuration (and/or the passed variables) accordingly.
To stop the agent, press Ctrl+C.
Next, we navigate to http://localhost:5601.
Click the dashboards icon on the left:

Kibana prompts us to select a default index pattern. Click fabricbeat-*, then the star in the upper right corner:

Three Elasticsearch indices are set up per Fabric organization: one for blocks, one for transactions and one for single writes. If multiple agents are run for peers in the same organization, they send their data to the same indices; you can then select a peer on the dashboards to view only its data.
If multiple instances are run for peers in different organizations, you will see the data of different organizations on different dashboards.
The names of the indices can be customized in the fabricbeat configuration file (edit _meta/beat.yml and run make update, or edit fabricbeat.yml directly).
After that, we can click a dashboard and see an overview of our data on the Overview Dashboard (org1):
If the dashboards are empty, widen the time range!
We can go on exploring the dashboards by scrolling and clicking the link fields, or by selecting another dashboard from the Dashboards menu.
To start more instances of the fabricbeat agent, open another tab/terminal, make sure that the GOPATH variable is set (export GOPATH=$HOME/go), and run fabricbeat with different variables than in the previous run(s). For example,
ORG_NUMBER=2 PEER_NUMBER=0 NETWORK=multichannel ./fabricbeat -e -d "*"
will start an agent querying peer0.org2.el-network.com. If the started instance queries a peer from the same organization as a previous one, we can select the peer whose data we want to see from a dropdown on the dashboards. If the new peer ships data from a different organization, we can see its data on a different dashboard (click the Dashboards menu on the left and choose one).
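A sketch for starting one agent per organization in the background; it assumes the fabricbeat binary is in the current directory and the multichannel network is up:

```shell
AGENT_BIN="./fabricbeat"

if [ -x "$AGENT_BIN" ]; then
  for org in 1 2 3 4; do
    # Each agent queries peer0 of its organization and logs to its own file.
    ORG_NUMBER=$org PEER_NUMBER=0 NETWORK=multichannel \
      "$AGENT_BIN" -e -d "*" > "fabricbeat-org$org.log" 2>&1 &
    echo "started agent for org$org (pid $!)"
  done
  # later: kill $(jobs -p) to stop them all
else
  echo "$AGENT_BIN not found; build it or copy a prebuilt binary first"
fi
```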