What are the tools used by IBM Cloud to maintain high availability (HA) of the certificate authority (CA) in Hyperledger Fabric?
The IBM Cloud deployment of Hyperledger Fabric uses Kubernetes to maintain the CA replica set for high availability. According to the documentation, Kubernetes maintains all the replica sets, and a PostgreSQL database is used as the CA's database.
Which tool and technique are used to maintain HA?
For example, HAProxy?
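As a sketch of the Kubernetes-native pattern the documentation describes (no HAProxy required): several stateless Fabric CA replicas share one PostgreSQL database, and a Kubernetes Service load-balances across them. All names, images, and values below are illustrative assumptions, not taken from IBM's actual manifests:

```yaml
# Illustrative sketch only: two stateless Fabric CA replicas sharing one
# PostgreSQL database; the Service spreads incoming requests across them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fabric-ca
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fabric-ca
  template:
    metadata:
      labels:
        app: fabric-ca
    spec:
      containers:
        - name: fabric-ca
          image: hyperledger/fabric-ca:1.5
          args: ["fabric-ca-server", "start", "-b", "admin:adminpw"]
          env:
            # All replicas must point at the same DB so registered
            # identities and issued certificates stay consistent.
            - name: FABRIC_CA_SERVER_DB_TYPE
              value: postgres
            - name: FABRIC_CA_SERVER_DB_DATASOURCE
              value: "host=postgres port=5432 user=ca password=capw dbname=fabricca sslmode=verify-full"
          ports:
            - containerPort: 7054
---
apiVersion: v1
kind: Service
metadata:
  name: fabric-ca
spec:
  selector:
    app: fabric-ca
  ports:
    - port: 7054
      targetPort: 7054
```

The point is that Kubernetes itself provides the HA machinery here: the Service plays the load-balancing role that HAProxy would otherwise play, and the Deployment restarts or reschedules failed CA pods.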
See also questions close to this topic
-
Hyperledger Fabric 2.3 connection issue
Facing an issue while querying the ledger. Here is how our network is laid out: there are 2 orgs on a Kubernetes cluster inside the corp network, and one on a Docker swarm on an Azure VM, also inside the network. The Azure VM node and the k8s cluster nodes communicate with each other through an nginx server. The reason behind this elaborate setup is that our supply-chain use case requires partners from different companies to join our network, so to simulate an external partner outside the corp network we use the Azure VM. Since we plan to productionize the implementation, we couldn't use certificates generated by Fabric's crypto config and got new certificates issued using our company's intermediate and root certs. There are chaincodes installed on this network with an endorsement policy enabled that work perfectly on all 3 nodes. We are using Fabric 2.3.0.
The first issue that I faced was the TLS certificate to use in the connection.json file. That was solved by chaining the certificates as described in the SO post here. The current issue is that the Node.js code is able to connect to the orgs in the k8s cluster but not to the one on the Azure VM.
Here is my connection.json. All the 10.100.xx.xx IPs are of the pods in the k8s cluster, and the public.ip.address is that of the nginx server.
{
  "name": "byfn",
  "version": "1.0.0",
  "client": {
    "organization": "ORG2MSP",
    "connection": {
      "timeout": {
        "peer": { "endorser": "10000" },
        "orderer": "10000"
      }
    }
  },
  "channels": {
    "supplychain": {
      "orderers": [
        "ord1.orderers.org1.com",
        "ord2.orderers.org1.com",
        "ord3.orderers.org1.com"
      ],
      "peers": {
        "peer1.peers.org1.com": { "endorsingPeer": true, "chaincodeQuery": true, "ledgerQuery": true, "eventSource": true },
        "peer1.peers.org3.com": { "endorsingPeer": true, "chaincodeQuery": true, "ledgerQuery": true, "eventSource": true },
        "peer1.peers.org2.com": { "endorsingPeer": true, "chaincodeQuery": true, "ledgerQuery": true, "eventSource": true }
      }
    }
  },
  "organizations": {
    "ORG2MSP": {
      "mspid": "ORG2MSP",
      "peers": [ "peer1.peers.org2.com", "peer2.peers.org2.com" ]
    }
  },
  "orderers": {
    "ord1.orderers.org1.com": {
      "url": "grpcs://10.100.xxx.xxx:7050",
      "grpcOptions": { "ssl-target-name-override": "ord1.orderers.org1.com", "request-timeout": 12000 },
      "tlsCACerts": { "path": "temp.pem" }
    },
    "ord2.orderers.org1.com": {
      "url": "grpcs://10.100.xxx.xxx:7050",
      "grpcOptions": { "ssl-target-name-override": "ord2.orderers.org1.com", "request-timeout": 12000 },
      "tlsCACerts": { "path": "temp.pem" }
    },
    "ord3.orderers.org1.com": {
      "url": "grpcs://10.100.xxx.xxx:7050",
      "grpcOptions": { "ssl-target-name-override": "ord3.orderers.org1.com", "request-timeout": 12000 },
      "tlsCACerts": { "path": "temp.pem" }
    }
  },
  "peers": {
    "peer1.peers.org1.com": {
      "url": "grpcs://10.100.xxx.xxx:7051",
      "grpcOptions": { "ssl-target-name-override": "peer1.peers.org1.com", "request-timeout": 12000, "grpc.keepalive_time_ms": 600000 },
      "tlsCACerts": { "path": "temp.pem" }
    },
    "peer1.peers.org3.com": {
      "url": "grpcs://public.ip.address:7051",
      "grpcOptions": { "ssl-target-name-override": "peer1.peers.org3.com", "request-timeout": 12000, "grpc.keepalive_time_ms": 600000 },
      "tlsCACerts": { "path": "temp.pem" }
    },
    "peer1.peers.org2.com": {
      "url": "grpcs://10.100.xxx.xxx:7051",
      "grpcOptions": { "ssl-target-name-override": "peer1.peers.org2.com", "request-timeout": 12000, "grpc.keepalive_time_ms": 600000 },
      "tlsCACerts": { "path": "temp.pem" }
    }
  }
}
Here is my code
'use strict';

const { Wallets, Gateway } = require('fabric-network');
const fs = require('fs');
const path = require('path');

const ccpPath = path.resolve(__dirname, 'connection.json');
const ccpJSON = fs.readFileSync(ccpPath, 'utf8');
const ccp = JSON.parse(ccpJSON);

async function main() {
    try {
        // const walletPath = path.join(process.cwd(), 'wallet');
        const wallet = await Wallets.newFileSystemWallet('wallet');
        // console.log(`Wallet path: ${walletPath}`);

        // Check to see if we've already enrolled the user.
        const userExists = await wallet.get('usernew');
        const tlsExists = await wallet.get('tlsid');
        if (!userExists) {
            console.log('An identity for the user "usernew" does not exist in the wallet');
            return;
        }
        if (!tlsExists) {
            console.log('An identity for the user "tls" does not exist in the wallet');
            return;
        }
        console.log("Here");

        // Create a new gateway for connecting to our peer node.
        const gateway = new Gateway();
        await gateway.connect(ccp, {
            wallet,
            identity: 'usernew',
            discovery: { enabled: false, asLocalhost: false },
            clientTlsIdentity: 'tlsid'
        });
        console.log("Here1");

        // Get the network (channel) our contract is deployed to.
        const network = await gateway.getNetwork('supplychain');
        console.log("Here2");

        // Get the channel object to fetch our peers.
        const channel = network.getChannel();
        console.log("Here3");

        // Get peers for endorsement.
        // channel.getEndorsers();
        const org1Peer = channel.getPeer('peer1.peers.org1.com');
        // console.log(org1Peer);
        const org2Peer = channel.getPeer('peer1.peers.org2.com');
        // console.log(org2Peer);
        const org3Peer = channel.getPeer('peer1.peers.org3.com');
        // console.log(org3Peer);
        // All the above logs print correct information.

        // Get the contract from the network.
        const contract = network.getContract('mycontract');
        const result = await contract.evaluateTransaction('queryAllObjects');
        console.log(`Transaction has been evaluated, result is: ${result.toString()}`);
    } catch (error) {
        console.error(`Failed to evaluate transaction: ${error}`);
    }
}

main();
Here is the crypto folder tree
C:.
├───peers.org1.com
│   └───users
│       ├───Admin@peers.org1.com
│       │   ├───msp
│       │   │   ├───admincerts
│       │   │   ├───cacerts
│       │   │   ├───intermediatecerts
│       │   │   ├───keystore
│       │   │   ├───signcerts
│       │   │   ├───tlscacerts
│       │   │   └───tlsintermediatecerts
│       │   └───tls
│       └───User1@peers.org1.com
│           ├───msp
│           │   ├───admincerts
│           │   ├───cacerts
│           │   ├───intermediatecerts
│           │   ├───keystore
│           │   ├───signcerts
│           │   ├───tlscacerts
│           │   └───tlsintermediatecerts
│           └───tls
├───peers.org2.com
│   └───users
│       ├───Admin@peers.org2.com
│       │   ├───msp
│       │   │   ├───admincerts
│       │   │   ├───cacerts
│       │   │   ├───intermediatecerts
│       │   │   ├───keystore
│       │   │   ├───signcerts
│       │   │   ├───tlscacerts
│       │   │   └───tlsintermediatecerts
│       │   └───tls
│       └───User1@peers.org2.com
│           ├───msp
│           │   ├───admincerts
│           │   ├───cacerts
│           │   ├───intermediatecerts
│           │   ├───keystore
│           │   ├───signcerts
│           │   ├───tlscacerts
│           │   └───tlsintermediatecerts
│           └───tls
└───peers.org3.com
    └───users
        ├───Admin@peers.org3.com
        │   ├───msp
        │   │   ├───admincerts
        │   │   ├───cacerts
        │   │   ├───intermediatecerts
        │   │   ├───keystore
        │   │   ├───signcerts
        │   │   ├───tlscacerts
        │   │   └───tlsintermediatecerts
        │   └───tls
        └───User1@peers.org3.com
            ├───msp
            │   ├───admincerts
            │   ├───cacerts
            │   ├───intermediatecerts
            │   ├───keystore
            │   ├───signcerts
            │   ├───tlscacerts
            │   └───tlsintermediatecerts
            └───tls
The temp.pem used in the connection file above is prepared by appending the ica.pem and ca.pem shown below. Here is how the certificates look for Org2 (they look similar for the other 2 orgs).
msp/tlscacerts/ca.pem
Issuer: C=XX, ST=XXXX, L=XXXX, O=MyCompany, OU=Cybersecurity, CN=MyCompany Root Certificate Authority 2018
Validity
    Not Before: Jul 23 17:07:45 2018 GMT
    Not After : Jul 23 17:17:44 2043 GMT
Subject: C=XX, ST=XXXX, L=XXXX, O=MyCompany, OU=Cybersecurity, CN=MyCompany Root Certificate Authority
msp/tlsintermediatecerts/ica.pem
Issuer: C=XX, ST=XXXX, L=XXXX, O=MyCompany, OU=Cybersecurity, CN=MyCompany Root Certificate Authority 2018
Validity
    Not Before: Nov 14 21:26:35 2018 GMT
    Not After : Nov 14 21:36:35 2025 GMT
Subject: C=XX, ST=XXXX, L=XXXX, O=MyCompany, CN=MyCompany Issuing CA 101
tls/server.crt
Issuer: C=XX, ST=XXXX, L=XXXX, O=MyCompany, CN=MyCompany Issuing CA 101
Validity
    Not Before: Jan 18 20:30:30 2021 GMT
    Not After : Jan 18 20:30:30 2023 GMT
Subject: C=XX, ST=XXXX, L=XXXX, O=MyCompany Inc., OU=org2client, CN=*.peers.org2.com
. . .
X509v3 Subject Alternative Name: DNS:*.peers.org2.com
Org2 NodeJs log
2021-02-25T10:21:33.736Z - error: [Endorser]: sendProposal[peer1.peers.org2.com] - Received error response from: grpcs://10.100.xxx.xxx:7051 error: Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]
2021-02-25T10:21:33.738Z - error: [Endorser]: sendProposal[peer1.peers.org2.com] - rejecting with: Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]
2021-02-25T10:21:33.738Z - error: [SingleQueryHandler]: evaluate: message=Query failed. Errors: ["Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]"], stack=FabricError: Query failed. Errors: ["Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]"]
    at SingleQueryHandler.evaluate (/fabric23/node_modules/fabric-network/lib/impl/query/singlequeryhandler.js:47:23)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async Transaction.evaluate (/fabric23/node_modules/fabric-network/lib/transaction.js:276:25)
    at async main (/fabric23/test.js:67:25), name=FabricError
Failed to evaluate transaction: FabricError: Query failed. Errors: ["Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]"]
Org2 peer logs
2021-02-25 10:21:33.732 UTC [endorser] Validate -> WARN 08f access denied: creator's signature over the proposal is not valid: The signature is invalid channel=supplychain txID=01bde838 mspID=ORG2MSP
2021-02-25 10:21:33.732 UTC [comm.grpc.server] 1 -> INFO 090 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.23.238.200:40928 grpc.peer_subject="CN=*.peers.org3.com,OU=org3client,O=MyCompany Inc.,L=XXXX,ST=XXXX,C=XX" error="error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]" grpc.code=Unknown grpc.call_duration=12.335491ms
Org3 peer logs
2021-02-26 13:42:26.081 UTC [gossip.channel] publishStateInfo -> DEBU 6155d8 Empty membership, no one to publish state info to
2021-02-26 13:42:26.493 UTC [core.comm] ServerHandshake -> DEBU 6155d9 Server TLS handshake completed in 49.605106ms server=PeerServer remoteaddress=public.ip.address:29154
2021-02-26 13:42:26.597 UTC [grpc] InfoDepth -> DEBU 6155da [transport]transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2021-02-26 13:42:26.927 UTC [gossip.channel] publishStateInfo -> DEBU 6155db Empty membership, no one to publish state info to
I have also tried deploying the same code on the Docker swarm on the Azure VM, but there it gives the same error that I was getting when I was using the wrong certificate, as described in the SO post here.
-
Connect APP with Hyperledger Fabric network
Can I deploy a WebApp client to connect with the Hyperledger Fabric network running in a VM or GKE? I want to run the WebApp in Google Cloud Run.
I'm using fabric-sdk-go to build my app and contractapi to build my smart contract.
-
How to apply pagination to an array of objects in CouchDB using Hyperledger Fabric?
{
  "_id": "usq",
  "_rev": "5-f8e9a8853b15f0270df94c1ae71323216",
  "transactions": [
    {
      "admin_notification": [],
      "admin_status": "pending",
      "payment_amount_usd": "1",
      "sp_tx_datetime": "Feb 26, 2021, 12:22 PM",
      "sp_tx_hash": "pi_tx1",
      "sp_tx_status": "succeeded",
      "sp_tx_toAddress": "Admin",
      "tx_admin_dateTime": "-",
      "user_buyplan_days": "7 Days"
    },
    {
      "admin_notification": [],
      "admin_status": "pending",
      "payment_amount_usd": "2",
      "sp_tx_datetime": "Feb 26, 2021, 4:09 PM",
      "sp_tx_hash": "pi_tx2",
      "sp_tx_status": "succeeded",
      "sp_tx_toAddress": "Admin",
      "tx_admin_dateTime": "-",
      "user_buyplan_days": "7 Days"
    },
    {
      "admin_notification": [],
      "admin_status": "pending",
      "payment_amount_usd": "1",
      "sp_tx_datetime": "Feb 26, 2021, 12:22 PM",
      "sp_tx_hash": "pi_tx3",
      "sp_tx_status": "succeeded",
      "sp_tx_toAddress": "Admin",
      "tx_admin_dateTime": "-",
      "user_buyplan_days": "7 Days"
    }
  ],
  "user_email": "s@mail.com",
  "user_fName": "Sam",
  "user_id": "user_2304354",
  "user_lName": "Smith",
  "user_password": "Abc@123456",
  "user_type": "user",
  "~version": "CgMBFgA="
}
Here I want only 2 transactions the first time, then the next ones. I have used the getQueryResultWithPagination method, but it doesn't work on a single object, so I created a CouchDB view.
"views": {
  "tx-view": {
    "map": "function (doc) { if (doc.transactions.length > 0) { emit(doc.transactions); } }"
  },
  "tx-view-2": {
    "map": "function (doc) { if (doc.transactions.length > 0) { doc.transactions.forEach(function (tag) { emit(doc.user_id, tag); }); } }"
  }
},
Can I add this view into a chaincode query method and create a transaction for the same? How do I resolve this?
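For what it's worth, getQueryResultWithPagination pages over documents matching a rich query, not over elements of an array inside one document, so paging the embedded transactions array has to happen in the chaincode or client itself. A minimal, hypothetical sketch of that slicing logic (the function name and bookmark encoding are mine, not from the Fabric API):

```javascript
// Page through the transactions array of a single CouchDB document,
// using a numeric offset encoded as a string "bookmark" to mimic the
// shape of Fabric's pagination results.
function paginateTransactions(doc, pageSize, bookmark) {
  const start = bookmark ? Number(bookmark) : 0;
  const records = doc.transactions.slice(start, start + pageSize);
  const next = start + records.length;
  return {
    records,
    // A null bookmark signals that there are no further pages.
    bookmark: next < doc.transactions.length ? String(next) : null,
  };
}
```

For the document above, a first call with pageSize 2 and no bookmark returns the first two transactions and the bookmark "2"; passing "2" back returns the third transaction and a null bookmark.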
-
Openshift route AMQP passthrough
I want to create an Openshift route to a RabbitMQ.
As far as I understand the documentation at https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html, the "TLS with SNI" support should be able to inspect the SNI header and then route the traffic to the appropriate service.
Unfortunately, I'm having problems both with edge termination and with reencrypt termination. In both cases, HAProxy seems to inspect the traffic and throws an error because the content is not HTTP.
With a simple Java application which uses the AMQP Java library, I can see the following traffic:
javax.net.ssl|DEBUG|01|main|2021-02-26 13:35:47.001 CET|SSLSocketOutputRecord.java:331|WRITE: TLS12 application_data, length = 8
javax.net.ssl|DEBUG|01|main|2021-02-26 13:35:47.001 CET|SSLCipher.java:1770|Plaintext before ENCRYPTION (
  0000: 41 4D 51 50 00 00 09 01                            AMQP....
)
javax.net.ssl|DEBUG|0D|AMQP Connection 4.168.18.8:443|2021-02-26 13:35:47.044 CET|SSLSocketInputRecord.java:249|READ: TLSv1.2 application_data, length = 211
javax.net.ssl|DEBUG|0D|AMQP Connection 4.168.18.8:443|2021-02-26 13:35:47.045 CET|SSLCipher.java:1672|Plaintext after DECRYPTION (
  0000: 48 54 54 50 2F 31 2E 30 20 34 30 30 20 42 61 64   HTTP/1.0 400 Bad
  0010: 20 72 65 71 75 65 73 74 0D 0A 43 61 63 68 65 2D    request..Cache-
  0020: 43 6F 6E 74 72 6F 6C 3A 20 6E 6F 2D 63 61 63 68   Control: no-cach
  0030: 65 0D 0A 43 6F 6E 6E 65 63 74 69 6F 6E 3A 20 63   e..Connection: c
  0040: 6C 6F 73 65 0D 0A 43 6F 6E 74 65 6E 74 2D 54 79   lose..Content-Ty
  0050: 70 65 3A 20 74 65 78 74 2F 68 74 6D 6C 0D 0A 0D   pe: text/html...
  0060: 0A 3C 68 74 6D 6C 3E 3C 62 6F 64 79 3E 3C 68 31   .<html><body><h1
  0070: 3E 34 30 30 20 42 61 64 20 72 65 71 75 65 73 74   >400 Bad request
  0080: 3C 2F 68 31 3E 0A 59 6F 75 72 20 62 72 6F 77 73   </h1>.Your brows
  0090: 65 72 20 73 65 6E 74 20 61 6E 20 69 6E 76 61 6C   er sent an inval
  00A0: 69 64 20 72 65 71 75 65 73 74 2E 0A 3C 2F 62 6F   id request..</bo
  00B0: 64 79 3E 3C 2F 68 74 6D 6C 3E 0A                  dy></html>.
)
(The output is generated with
java -Djavax.net.debug=all -jar rabbitmqtest-1.0-SNAPSHOT-all.jar amqps://myroute:443 2> output.txt
) The traffic does not get routed to RabbitMQ. If I open the route in a web browser, RabbitMQ receives a connection request (but of course cannot understand it, because it is HTTP traffic).
The Helm route template is:
{{- if .Values.route.enabled }}
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: rabbitmq
spec:
  host: rabbitmq-{{ .Values.route.identifier}}.{{ .Values.route.host }}
  port:
    targetPort: amqp-ssl
  tls:
    termination: reencrypt
    destinationCACertificate: {{ .Values.tls.ca_crt | quote }}
  to:
    kind: Service
    weight: 100
    name: rabbitmq
status:
  ingress: []
{{- end }}
Is there any way to use the router as a raw TCP proxy? I would prefer not to use a NodePort, as I would have to manage SSL certs on the RabbitMQ side (currently I have installed a long-lived self-signed cert).
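One option consistent with the routes documentation cited above is passthrough termination, where the router matches only on the SNI header and forwards the raw TLS stream without inspecting it, so a non-HTTP payload like the AMQP preamble is not rejected. A hedged sketch of how the template above might look with passthrough (note that destinationCACertificate no longer applies, and RabbitMQ must then terminate TLS itself):

```yaml
{{- if .Values.route.enabled }}
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: rabbitmq
spec:
  host: rabbitmq-{{ .Values.route.identifier }}.{{ .Values.route.host }}
  port:
    targetPort: amqp-ssl
  tls:
    # The router matches on SNI and forwards the encrypted bytes as-is;
    # it never sees the plaintext, so the AMQP preamble is not rejected.
    termination: passthrough
  to:
    kind: Service
    weight: 100
    name: rabbitmq
status:
  ingress: []
{{- end }}
```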
-
Sending API requests through one public IP only
We are working on a third-party API integration.
For security reasons, the API hosting company has asked us to provide one public IP to whitelist; all requests will be allowed through this IP only.
But our users sit in remote locations with VPN connections, so I don't know how to send requests from the one public IP that is available only at the head office. I have read one thread about achieving this through a proxy server on Linux. I would really appreciate steps to do the same on Windows, as I'm completely new to proxies and Linux is not available to us at all. Thanks in advance for your help.
-
502 bad gateway error observed intermittently in Openshift 4.5
I am using OpenShift 4.5. I exposed the application using a service and a route, and it is externally accessible. The application works fine, but sometimes I get a 502 error. I am using the HAProxy ingress route. I'd appreciate your thoughts on resolving this issue.
-
HBase regions do not fail over correctly, stuck in "OPENING" RIT
I am using HBase 2.2.3 to set up a small cluster with 3 nodes (both Hadoop and HBase are in HA mode):
node1: NN, JN, ZKFC, ZK, HMaster, HRegionServer
node2: NN, JN, ZKFC, DN, ZK, Backup HMaster, HRegionServer
node3: DN, JN, ZK, HRegionServer
When I reboot node3, it causes regions-in-transition (some regions are stuck in OPENING). In the master log, I can see: master.HMaster: Not running balancer because 5 region(s) in transition
Does anyone know how to fix this issue? Many thanks!
-
Is there a Redis pub/sub replacement option with high availability and redundancy, or perhaps p2p messaging?
I have an app with hundreds of horizontally scaled servers which uses redis pub/sub, and it works just fine.
The Redis server is a central point of failure. Whenever Redis fails (and it happens sometimes), our application falls into an inconsistent state and has to follow a recovery process, which takes time. During this time the entire app is hardly usable.
Is there any messaging system/framework option, similar to redis pub/sub, but with redundancy and high availability so that if one instance fails, other will continue to deliver the messages exchanged between application hosts?
Or, better, is there any distributed messaging system in which app instances exchange the messages in a peer-to-peer manner, so that there is no single point of failure?
-
Apache flink high availability not working as expected
I tried to test high availability by bringing down the Task Manager along with the Job Manager and the YARN NodeManager at the same time. I thought YARN would automatically assign that application to another node, but it's not happening. How can this be achieved?
-
2-org RAFT setup, 1 Orderer log has logSendFailure: orderer certificate signed by unknown authority
I'm setting up a multi-host network based on a modified fabric-samples test-network (v. 2.2.1).
Currently I have two orgs ('OEM' and 'S11') on separate servers, each with 2 CAs, 1 orderer, and 1 peer (as Docker containers). The servers are joined by a Docker swarm overlay network.
After starting up the network, I get a logSendFailure with certificate signed by unknown authority in the 'OEM' orderer log:

2021-02-10 19:32:27.395 UTC [orderer.consensus.etcdraft] Step -> INFO 14da5 1 is starting a new election at term 1 channel=system-channel node=1
2021-02-10 19:32:27.396 UTC [orderer.consensus.etcdraft] becomePreCandidate -> INFO 14da6 1 became pre-candidate at term 1 channel=system-channel node=1
2021-02-10 19:32:27.396 UTC [orderer.consensus.etcdraft] poll -> INFO 14da7 1 received MsgPreVoteResp from 1 at term 1 channel=system-channel node=1
2021-02-10 19:32:27.397 UTC [orderer.consensus.etcdraft] campaign -> INFO 14da8 1 [logterm: 1, index: 2] sent MsgPreVote request to 2 at term 1 channel=system-channel node=1
2021-02-10 19:32:27.398 UTC [orderer.consensus.etcdraft] consensusSent -> DEBU 14da9 Sending msg of 28 bytes to 2 on channel system-channel took 931.836µs
2021-02-10 19:32:27.398 UTC [orderer.consensus.etcdraft] logSendFailure -> DEBU 14daa Failed to send StepRequest to 2, because: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"ca.scm.com\")" channel=system-channel node=1
2021-02-10 19:32:29.042 UTC [orderer.common.cluster.step] handleMessage -> DEBU 14dab Received message from orderer(10.0.2.69:52996): ConsensusRequest for channel system-channel with payload of size 28
However, I don't see the same error in the 'S11' Orderer log:
2021-02-10 19:32:06.456 UTC [orderer.consensus.etcdraft] Step -> INFO 63c 2 is starting a new election at term 1 channel=system-channel node=2
2021-02-10 19:32:06.457 UTC [orderer.consensus.etcdraft] becomePreCandidate -> INFO 63d 2 became pre-candidate at term 1 channel=system-channel node=2
2021-02-10 19:32:06.457 UTC [orderer.consensus.etcdraft] poll -> INFO 63e 2 received MsgPreVoteResp from 2 at term 1 channel=system-channel node=2
2021-02-10 19:32:06.457 UTC [orderer.consensus.etcdraft] campaign -> INFO 63f 2 [logterm: 1, index: 2] sent MsgPreVote request to 1 at term 1 channel=system-channel node=2
2021-02-10 19:32:06.457 UTC [orderer.consensus.etcdraft] consensusSent -> DEBU 640 Sending msg of 28 bytes to 1 on channel system-channel took 29.279µs
2021-02-10 19:32:06.457 UTC [orderer.common.cluster.step] sendMessage -> DEBU 641 Send of ConsensusRequest for channel system-channel with payload of size 28 to orderer(orderer.oem.scm.com:6000) took 327.037µs
Based on this, my first guess is that either:
- My TLS/CA server certs from the 'S11' org are not correctly formatted, or
- I am sharing the wrong/incomplete certs from 'S11' with the 'OEM' orderer.
After looking at the orderer certificates (cacert.pem, signcert.pem, tls-server.crt), I don't see any major differences other than org names, so I don't understand why I see this error on one org but not the other.
Assuming that I have a problem with my certificate, which of these cert fields would likely be the issue?
- Issuer: C = US, ST = New York, L = New York, O = scm.com, CN = ca.scm.com
- Subject: C = US, ST = New York, L = New York, O = scm.com, CN = ca.scm.com
- X509v3 Subject Alternative Name: DNS:orderer.oem.scm.com, DNS:localhost
- something else?

After generating certs (before creating the system channel), I share these 3 orderer certificates between organizations:
- msp/cacerts/localhost-${Org-orderer-port}-ca-orderer.pem
- msp/signcerts/cert.pem
- orderers/orderer.${org}.scm.com/tls/server.crt
I came across this similar post, although I already have 'localhost' listed on my server.crt cert. (Is there another file I should be looking at instead?)
Any tips on troubleshooting would be valued!
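As a troubleshooting aid, the chain question above can be checked locally with openssl verify before the orderers ever talk to each other. The snippet below is a self-contained demonstration with throwaway certs (all subjects and file names are made up); in the real network you would run the final verify against the S11 server.crt and whichever CA cert the OEM orderer has in its TLS root list:

```shell
# Create a throwaway TLS CA and a server cert signed by it, then verify
# that the server cert chains to the CA: the same check the orderer's
# TLS handshake performs.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.pem -subj "/CN=test-tls-ca"
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=orderer.test.example.com"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -out server.crt -days 1
# Prints "server.crt: OK" because server.crt was issued by ca.pem;
# a cert from a different CA would fail verification instead.
openssl verify -CAfile ca.pem server.crt
```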
-
How to migrate data from one Hyperledger Fabric network to another Hyperledger Fabric network?
Is there any way to migrate data from one Hyperledger Fabric network to another Hyperledger Fabric network with the same structure but different certs?
-
HAProxy binds the back-end port, not allowing the back-end server to start in TCP mode
Here, the problem is:
Case - I: I have started HAProxy in TCP mode. The HAProxy configuration is shown below:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend ca-client
    bind *:8056
    bind *:7054
    bind *:8054
    mode tcp
    default_backend ca-server

backend ca-server
    mode tcp
    balance roundrobin
    server org2-ica1 0.0.0.0:7054 check
    server org2-ica2 0.0.0.0:8054 check
And I started both servers (in separate Docker containers) that are defined in the backend of the HAProxy configuration. It gives an error of
Error starting userland proxy: listen tcp 0.0.0.0:8054: bind: address already in use
as below:

$ sudo docker-compose -f ca.yaml up
Creating network "ca_production-network" with the default driver
Creating org1-ica2 ...
Creating org1-ica1 ...

ERROR: for org1-ica1  a bytes-like object is required, not 'str'
ERROR: for org1-ica2  a bytes-like object is required, not 'str'
ERROR: for org1-ica1  a bytes-like object is required, not 'str'
ERROR: for org1-ica2  a bytes-like object is required, not 'str'
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.22/containers/8d54e16ecae0824c91c51b3c880d27ea96fc200574d4eb81d72173f4515ae386/start

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/compose/service.py", line 625, in start_container
    container.start()
  File "/usr/lib/python3/dist-packages/compose/container.py", line 241, in start
    return self.client.start(self.id, **options)
  File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/docker/api/container.py", line 1095, in start
    self._raise_for_status(res)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/lib/python3/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("b'driver failed programming external connectivity on endpoint org1-ica2 (ed8d425881e09eb0c21eb5a638e986791d19f3239ad7ece98a4478a5e53dc8be): Error starting userland proxy: listen tcp 0.0.0.0:8054: bind: address already in use'")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 11, in <module>
    load_entry_point('docker-compose==1.25.0', 'console_scripts', 'docker-compose')()
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 72, in main
    command()
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 128, in perform_command
    handler(command, command_options)
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1107, in up
    to_attach = up(False)
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1088, in up
    return self.project.up(
  File "/usr/lib/python3/dist-packages/compose/project.py", line 565, in up
    results, errors = parallel.parallel_execute(
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 112, in parallel_execute
    raise error_to_reraise
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 210, in producer
    result = func(obj)
  File "/usr/lib/python3/dist-packages/compose/project.py", line 548, in do
    return service.execute_convergence_plan(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 545, in execute_convergence_plan
    return self._execute_convergence_create(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 460, in _execute_convergence_create
    containers, errors = parallel_execute(
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 112, in parallel_execute
    raise error_to_reraise
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 210, in producer
    result = func(obj)
  File "/usr/lib/python3/dist-packages/compose/service.py", line 465, in <lambda>
    lambda service_name: create_and_start(self, service_name.number),
  File "/usr/lib/python3/dist-packages/compose/service.py", line 457, in create_and_start
    self.start_container(container)
  File "/usr/lib/python3/dist-packages/compose/service.py", line 627, in start_container
    if "driver failed programming external connectivity" in ex.explanation:
TypeError: a bytes-like object is required, not 'str'
Case - II: When I start the containers before enabling the HAProxy service, both servers work as expected. But when I then start the HAProxy service, it shows an error:
$sudo service haproxy restart Job for haproxy.service failed because the control process exited with error code. See "systemctl status haproxy.service" and "journalctl -xe" for details.
To see the internals, I issued:
haproxy -f /etc/haproxy/haproxy.cfg -db
haproxy -f /etc/haproxy/haproxy.cfg -db
[ALERT] 030/151042 (120902) : Starting frontend GLOBAL: cannot bind UNIX socket [/run/haproxy/here/admin.sock]
[ALERT] 030/151042 (120902) : Starting frontend ca-client: cannot bind socket [0.0.0.0:8056]
[ALERT] 030/151042 (120902) : Starting frontend ca-client: cannot bind socket [0.0.0.0:7054]
[ALERT] 030/151042 (120902) : Starting frontend ca-client: cannot bind socket [0.0.0.0:8054]
My questions are:
What is this cannot bind socket error in HAProxy?
What is the proper configuration of HAProxy in TCP mode for two replicated servers? Thank you!
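For what it's worth, both symptoms are consistent with a port collision: the frontend binds *:7054 and *:8054 on the host, which are the very ports the two CA containers try to publish, so whichever side starts second fails to bind. (Also, 0.0.0.0 is not a meaningful server address in a backend; the servers should be addressed by a reachable IP or hostname.) A sketch of a layout that avoids the overlap, with illustrative ports and addresses:

```
frontend ca-client
    bind *:7054                     # clients connect only here
    mode tcp
    default_backend ca-server

backend ca-server
    mode tcp
    balance roundrobin
    # Back-end CAs publish ports that the frontend does NOT bind:
    server org2-ica1 127.0.0.1:8054 check
    server org2-ica2 127.0.0.1:9054 check
```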