Replicated Certificate Authority (CA) server container is not working
I am trying to maintain high availability (HA) of a Certificate Authority (CA) server, without using a container orchestration technique like Kubernetes. To achieve that, I used YAML anchor and merge syntax. Both containers run and listen on their server ports. The problem is that only one server works as expected, as before; the other one, replicated using merge and anchor, does not. It throws an error when I send a request to the replicated server using the SDK. I performed the enrollAdmin operation using the enrollAdmin.js provided by fabcar (a sample provided by Hyperledger Fabric). The error output is below:
gopal@gopal:~/Dappdev/first/fabric-samples/fabcar/javascript$ node enrollAdmin.js
Wallet path: /home/gopal/Dappdev/first/fabric-samples/fabcar/javascript/wallet
Enroll the admin user, and import the new identity into the wallet
2021-01-12T08:42:03.572Z - error: [FabricCAClientService.js]: Failed to enroll admin, error:%o message=Calling enrollment endpoint failed with error [Error: write EPROTO 139961596319552:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
], stack=Error: Calling enrollment endpoint failed with error [Error: write EPROTO 139961596319552:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
]
at ClientRequest.request.on (/home/gopal/Dappdev/first/fabric-samples/fabcar/javascript/node_modules/fabric-ca-client/lib/FabricCAClient.js:487:12)
at ClientRequest.emit (events.js:198:13)
at TLSSocket.socketErrorListener (_http_client.js:401:9)
at TLSSocket.emit (events.js:198:13)
at errorOrDestroy (internal/streams/destroy.js:107:12)
at onwriteError (_stream_writable.js:436:5)
at onwrite (_stream_writable.js:461:5)
at _destroy (internal/streams/destroy.js:49:7)
at TLSSocket.Socket._destroy (net.js:614:3)
at TLSSocket.destroy (internal/streams/destroy.js:37:8)
Failed to enroll admin user "admin": Error: Calling enrollment endpoint failed with error [Error: write EPROTO 139961596319552:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
]
gopal@gopal:~/Dappdev/first/fabric-samples/fabcar/javascript$
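For what it's worth, OpenSSL's "wrong version number" error during a handshake usually means the client attempted TLS but the endpoint answered in plaintext. A quick way to probe what each CA port actually speaks (a sketch; the host ports are taken from the compose file below):

# Raw TLS handshake against the original and the replicated CA port
openssl s_client -connect localhost:7054 </dev/null
openssl s_client -connect localhost:8054 </dev/null

# A TLS-enabled endpoint prints a certificate chain; a plaintext endpoint fails
# with the same "wrong version number" record-layer error seen above
curl -vk https://localhost:8054/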
Additionally, to explain more, here is the CA docker-compose configuration:
version: '2'

networks:
  byfn:

services:
  ca0: &name-me
    image: hyperledger/fabric-ca:$IMAGE_TAG
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/key.pem
      - FABRIC_CA_SERVER_PORT=7054
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/key.pem -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_server01
    networks:
      - byfn

  # Replicated CA
  ca01:
    <<: *name-me # <- this is a merge (<<) with an alias (*name-me)
    # keys declared below the merge override those declared under the anchor,
    # so this:
    ports:
      - "8054:8054"
    container_name: ca_server02
    environment:
      - FABRIC_CA_SERVER_PORT=8054
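As background on the merge syntax used above: a YAML merge key (<<:) copies the anchor's top-level keys, and any key re-declared in the overriding mapping replaces the anchor's value wholesale; sequences such as environment are not merged element by element. A minimal illustration (service names are made up):

services:
  base: &base
    environment:
      - A=1
      - B=2
  child:
    <<: *base
    environment:   # replaces the anchor's whole list; A and B are NOT inherited
      - C=3

So in the compose file above, ca01's effective environment contains only FABRIC_CA_SERVER_PORT=8054, not the TLS and CA-name variables declared under the ca0 anchor.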
Furthermore, to confirm the configuration, here is the connection profile entry for this CA:
"certificateAuthorities": {
"ca.org1.example.com": {
"url": "https://localhost:8054",
"caName": "ca-org1",
"tlsCACerts": {
"pem": "-----BEGIN CERTIFICATE-----\nMIICUDCCAfegAwIBAgIQWmpv94Te6dBKBjMEJrZ/RDAKBggqhkjOPQQDAjBzMQsw\nCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy\nYW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEcMBoGA1UEAxMTY2Eu\nb3JnMS5leGFtcGxlLmNvbTAeFw0yMDEyMDQwODI1MDBaFw0zMDEyMDIwODI1MDBa\nMHMxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T\nYW4gRnJhbmNpc2NvMRkwFwYDVQQKExBvcmcxLmV4YW1wbGUuY29tMRwwGgYDVQQD\nExNjYS5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE\nvhAwa7BeZTdV+Sevx0LEg+dptt1GIaQpukOhiEGmstF7Re8okIQXhQw/WjTVWlv8\nGccHPcoUuVe6nBklpHEL/qNtMGswDgYDVR0PAQH/BAQDAgGmMB0GA1UdJQQWMBQG\nCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MCkGA1UdDgQiBCBJ\n/ICyRsXWQVxtcPI0+8+ZtAYHGXb0z4VBd5yvvmv64zAKBggqhkjOPQQDAgNHADBE\nAiBYadQuHePis5gPkEoLR3yVaYzEADap31XcSg9P1L6akAIgMoxWuq58zpQrIY0X\nh4zC6aHdSt2u4hJtXLB+8JNzVy8=\n-----END CERTIFICATE-----\n"
},
"httpOptions": {
"verify": false
}
}
How can I solve this issue of the replicated Docker container not working for CA server replication?
See also questions close to this topic
- How to run MongoDB with Docker on macOS with an existing db on the host
I have the following docker compose file:
version: '3.0'
services:
  mongo:
    image: mongo:latest
    ports:
      - 27017:27017
    volumes:
      - ./mongodb_data:/var/lib/mongodb
In ./mongodb_data I already have some database files (WiredTiger filesystem). When I run

docker stack deploy mongodb_1 -c docker-compose.yaml

MongoDB starts correctly, but there is nothing in the database. I've tried the following changes to the docker-compose file:
volumes:
  - ./mongodb_data:/data/db

volumes:
  - mongodb_data:/data/db

volumes:
  mongodb_data:
I've also tried to copy the folder into the container after it is running with:
docker cp ./mongodb_data/. 9efe160dd9dc:/var/lib/mongodb/

and

docker cp ./mongodb_data/. 9efe160dd9dc:/data/db

If I ssh into the container, the files are there, but if I log into mongo I cannot see the databases.
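One way to check what the containerized mongod actually sees (a sketch; replace the container ID with yours):

# List the databases mongod has loaded
docker exec -it 9efe160dd9dc mongo --quiet --eval "db.adminCommand('listDatabases')"

# Confirm the WiredTiger files are where mongod looks for them
# (the official image's data directory is /data/db)
docker exec -it 9efe160dd9dc ls -la /data/db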
I'm sure that mongo is running from the container, because if I stop the container I cannot connect to mongo anymore.
Any idea how to solve this on macOS? It seems impossible to me that I cannot copy or mount some existing data into the container, in other words, initialize the database with an existing folder containing the data.
Some help from docker experts would be really appreciated!
Thank you!
Versions:
- macOS: 10.15.7
- Docker Desktop: 2.3.0.4 stable
- Docker Engine: 19.03.12
- Compose: 1.26.2
- KafkaJS container cannot connect to the Kafka container on the same Docker stack
I have an ELKK docker compose stack and I'm trying to connect a Kafka client (KafkaJS) to Kafka, but with no luck.
docker-compose.yml
version: "3.7" services: zookeeper: image: confluentinc/cp-zookeeper:5.4.1 container_name: zookeeper restart: unless-stopped ports: - "2181:2181" environment: ZOOKEEPER_CLIENT_PORT: 2181 TZ: "${TZ-Europe/Berlin}" healthcheck: test: "echo stat | nc localhost $$ZOOKEEPER_CLIENT_PORT" start_period: 1m kafka: image: confluentinc/cp-kafka:5.4.1 container_name: kafka restart: unless-stopped depends_on: - filebeat - zookeeper ports: - "29092:29092" environment: KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 KAFKA_DELETE_TOPIC_ENABLE: "true" TZ: "${TZ-Europe/Berlin}" healthcheck: test: [ "CMD", "nc", "-z", "localhost", "9092" ] start_period: 1m kafka-rest-proxy: image: confluentinc/cp-kafka-rest:5.4.1 container_name: kafka-rest-proxy restart: unless-stopped depends_on: - zookeeper - kafka ports: - "8082:8082" environment: KAFKA_REST_BOOTSTRAP_SERVERS: PLAINTEXT://kafka:9092 KAFKA_REST_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_REST_HOST_NAME: kafka-rest-proxy KAFKA_REST_LISTENERS: http://0.0.0.0:8082 KAFKA_REST_SCHEMA_REGISTRY_URL: http://schema-registry:8081 KAFKA_REST_CONSUMER_REQUEST_TIMEOUT_MS: 30000 TZ: "${TZ-Europe/Berlin}" healthcheck: test: "curl -f http://localhost:8082 || exit 1" start_period: 1m kafka-topics-ui: image: landoop/kafka-topics-ui:0.9.4 container_name: kafka-topics-ui restart: unless-stopped depends_on: - kafka-rest-proxy ports: - "8085:8000" environment: KAFKA_REST_PROXY_URL: http://kafka-rest-proxy:8082 PROXY: "true" healthcheck: test: "wget --quiet --tries=1 --spider http://localhost:8000 || exit 1" start_period: 1m elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.2 container_name: elasticsearch restart: unless-stopped ports: - "9200:9200" - "9300:9300" environment: cluster.name: docker-es-cluster discovery.type: single-node bootstrap.memory_lock: "true" ES_JAVA_OPTS: "-Xms512m -Xmx512m" cap_add: - ALL privileged: true volumes: - esdata:/usr/share/elasticsearch/data ulimits: memlock: soft: -1 hard: -1 healthcheck: test: "curl -f http://localhost:9200 || exit 1" start_period: 1m depends_on: - logstash zipkin: container_name: zipkin image: openzipkin/zipkin:2.20.2 restart: unless-stopped ports: - "9411:9411" healthcheck: test: [ "CMD", "nc", "-z", "localhost", "9411" ] start_period: 1m kafka-manager: container_name: kafka-manager image: hlebalbau/kafka-manager:3.0.0.4 restart: unless-stopped depends_on: - zookeeper ports: - "9000:9000" environment: ZK_HOSTS: zookeeper:2181 APPLICATION_SECRET: "random-secret" command: -Dpidfile.path=/dev/null healthcheck: test: "curl -f http://localhost:9000 || exit 1" start_period: 1m logstash: image: docker.elastic.co/logstash/logstash:5.3.0 container_name: logstash volumes: - ./logstash-config/config/logstash.yml:/usr/share/logstash/config/logstash.yml - ./logstash-config/pipeline:/usr/share/logstash/pipeline depends_on: - kafka kibana: image: docker.elastic.co/kibana/kibana-oss:6.4.2 container_name: kibana environment: elasticsearch.url: http://localhost:9200 depends_on: - elasticsearch ports: - "5601:5601" ulimits: nproc: 65535 memlock: soft: -1 hard: -1 healthcheck: test: "curl -f http://localhost:5601 || exit 1" start_period: 1m cap_add: - ALL filebeat: hostname: filebeat user: root container_name: filebeat build: context: ./filebeat volumes: - filebeat_data:/usr/share/filebeat/data:rw - 
/var/lib/docker/containers:/usr/share/filebeat/dockerlogs/data:ro - /var/run/docker.sock:/var/run/docker.sock visual-app: container_name: visual-app build: context: ./ dockerfile: visual-dockerfile volumes: - './app:/app' - '/app/node_modules' ports: - '8080:8080' depends_on: - kafka volumes: filebeat_data: esdata:
consumer.js
const { Kafka, logLevel } = require('kafkajs')

const host = 'kafka:9092'
const kafka = new Kafka({
  logLevel: logLevel.INFO,
  brokers: [`${host}`],
  clientId: 'example-consumer',
})
const topic = 'log_stream'
const consumer = kafka.consumer({ groupId: 'test-group' })

const run = async () => {
  await consumer.connect()
  await consumer.subscribe({ topic, fromBeginning: true })
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      const prefix = `${topic}[${partition} | ${message.offset}] / ${message.timestamp}`
      console.log(`- ${prefix} ${message.key}#${message.value}`)
    },
  })
}

run().catch(e => console.error(`[example/consumer] ${e.message}`, e))
The error I get is the following.
{"level":"ERROR","timestamp":"2021-02-23T22:04:33.736Z","logger":"kafkajs","message":"[Connection] Connection error: getaddrinfo ENOTFOUND kafka","broker":"kafka:9092","clientId":"example-consumer","stack":"Error: getaddrinfo ENOTFOUND kafka\n at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)"} {"level":"ERROR","timestamp":"2021-02-23T22:04:33.740Z","logger":"kafkajs","message":"[BrokerPool] Failed to connect to seed broker, trying another broker from the list: Connection error: getaddrinfo ENOTFOUND kafka","retryCount":0,"retryTime":314} [example/consumer] Connection error: getaddrinfo ENOTFOUND kafka KafkaJSNonRetriableError Caused by: KafkaJSConnectionError: Connection error: getaddrinfo ENOTFOUND kafka at Socket.onError (/app/node_modules/kafkajs/src/network/connection.js:152:23) at Socket.emit (events.js:315:20) at emitErrorNT (internal/streams/destroy.js:106:8) at emitErrorCloseNT (internal/streams/destroy.js:74:3) at processTicksAndRejections (internal/process/task_queues.js:80:21) { name: 'KafkaJSNumberOfRetriesExceeded', retriable: false, helpUrl: undefined, originalError: KafkaJSConnectionError: Connection error: getaddrinfo ENOTFOUND kafka at Socket.onError (/app/node_modules/kafkajs/src/network/connection.js:152:23) at Socket.emit (events.js:315:20) at emitErrorNT (internal/streams/destroy.js:106:8) at emitErrorCloseNT (internal/streams/destroy.js:74:3) at processTicksAndRejections (internal/process/task_queues.js:80:21) { retriable: true, helpUrl: undefined, broker: 'kafka:9092', code: 'ENOTFOUND' }, retryCount: 5, retryTime: 8114 }
I have also tried adding a network, for example

networks:
  kafkanet:
    driver: bridge

that was assigned to all services, but nothing changed. I have also tried adding hostname: kafka to the kafka service, but that changed nothing either.
However, I can successfully connect to broker from an external client using localhost:29092.
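A quick way to confirm whether the client container can resolve the broker name at all (a sketch; container names are taken from the compose file above):

# Show which networks each container is attached to
docker inspect --format '{{json .NetworkSettings.Networks}}' visual-app
docker inspect --format '{{json .NetworkSettings.Networks}}' kafka

# DNS lookup of the broker name from inside the client container
docker exec visual-app getent hosts kafka

The kafka hostname only resolves inside containers that share a network with the broker, which is what ENOTFOUND suggests is missing here.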
So, what am I doing wrong?
- Pydio installation on Docker with Traefik, Ubuntu 20.10
I am trying to install Pydio with Traefik and a domain (duckdns.org).
https://pydio.com/en/docs/kb/deployment/running-cells-container-behind-traefik-reverse-proxy
I have already installed phpMyAdmin, ownCloud, etc. in the same way, but I have a problem with the Pydio installation: I cannot connect via web browser. Can anybody help me with this and help build the docker-compose?
I have tried a lot of different combinations, but without luck. The page won't load :(
Thanks.
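For reference, the general shape of a Compose service routed by Traefik v2 labels looks like the sketch below. The hostname, certificate resolver, and network names are placeholders, and the CELLS_* variables and port 8080 are assumptions based on the Pydio Cells guide linked above, so adjust them to your version:

services:
  cells:
    image: pydio/cells:latest
    environment:
      - CELLS_BIND=0.0.0.0:8080          # assumption: Cells listening address
      - CELLS_EXTERNAL=https://pydio.example.duckdns.org
      - CELLS_NO_TLS=1                   # assumption: let Traefik terminate TLS
    labels:
      - traefik.enable=true
      - traefik.http.routers.cells.rule=Host(`pydio.example.duckdns.org`)
      - traefik.http.routers.cells.entrypoints=websecure
      - traefik.http.routers.cells.tls.certresolver=myresolver
      - traefik.http.services.cells.loadbalancer.server.port=8080
    networks:
      - web

networks:
  web:
    external: true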
- Hyperledger Fabric 2.3 connection issue
Facing an issue while querying the ledger. Here is how our network is laid out: there are two orgs on a Kubernetes cluster inside the corp network, and one on a Docker swarm on an Azure VM, also inside the network. The Azure VM node and the k8s cluster nodes communicate with each other over an nginx server. The reason behind this elaborate setup is that our supply-chain use case requires partners from different companies to join our network, so to simulate an external partner outside the corp network we use the Azure VM. Since we plan to productionize the implementation, we couldn't use Fabric crypto-config-generated certificates and instead got new certificates issued using our company's intermediate and root certs. There are chaincodes installed on this network, with an endorsement policy enabled, that work perfectly on all 3 nodes. We are using Fabric 2.3.0.
Now, the first issue I faced was which TLS certificate to use in the connection.json file. That was solved by chaining the certificates as described in the SO post here. The current issue is that the Node.js code is able to connect to the orgs in the k8s cluster, but not to the one on the Azure VM.
Here is my connection.json. All the 10.100.xx.xx IPs belong to the pods in the k8s cluster, and public.ip.address is that of the nginx server.
{ "name": "byfn", "version": "1.0.0", "client": { "organization": "ORG2MSP", "connection": { "timeout": { "peer": { "endorser": "10000" }, "orderer": "10000" } } }, "channels": { "supplychain": { "orderers": [ "ord1.orderers.org1.com", "ord2.orderers.org1.com", "ord3.orderers.org1.com" ], "peers": { "peer1.peers.org1.com": { "endorsingPeer": true, "chaincodeQuery": true, "ledgerQuery": true, "eventSource": true }, "peer1.peers.org3.com": { "endorsingPeer": true, "chaincodeQuery": true, "ledgerQuery": true, "eventSource": true }, "peer1.peers.org2.com": { "endorsingPeer": true, "chaincodeQuery": true, "ledgerQuery": true, "eventSource": true } } } }, "organizations": { "ORG2MSP": { "mspid": "ORG2MSP", "peers": [ "peer1.peers.org2.com", "peer2.peers.org2.com" ] } }, "orderers": { "ord1.orderers.org1.com": { "url": "grpcs://10.100.xxx.xxx:7050", "grpcOptions": { "ssl-target-name-override": "ord1.orderers.org1.com", "request-timeout": 12000 }, "tlsCACerts": { "path": "temp.pem" } }, "ord2.orderers.org1.com": { "url": "grpcs://10.100.xxx.xxx:7050", "grpcOptions": { "ssl-target-name-override": "ord2.orderers.org1.com", "request-timeout": 12000 }, "tlsCACerts": { "path": "temp.pem" } }, "ord3.orderers.org1.com": { "url": "grpcs://10.100.xxx.xxx:7050", "grpcOptions": { "ssl-target-name-override": "ord3.orderers.org1.com", "request-timeout": 12000 }, "tlsCACerts": { "path": "temp.pem" } } }, "peers": { "peer1.peers.org1.com": { "url": "grpcs://10.100.xxx.xxx:7051", "grpcOptions": { "ssl-target-name-override": "peer1.peers.org1.com", "request-timeout": 12000, "grpc.keepalive_time_ms": 600000 }, "tlsCACerts": { "path": "temp.pem" } }, "peer1.peers.org3.com": { "url": "grpcs://public.ip.address:7051", "grpcOptions": { "ssl-target-name-override": "peer1.peers.org3.com", "request-timeout": 12000, "grpc.keepalive_time_ms": 600000 }, "tlsCACerts": { "path": "temp.pem" } }, "peer1.peers.org2.com": { "url": "grpcs://10.100.xxx.xxx:7051", "grpcOptions": { "ssl-target-name-override": "peer1.peers.org2.com", "request-timeout": 12000, "grpc.keepalive_time_ms": 600000 }, "tlsCACerts": { "path": "temp.pem" } } } }
Here is my code
'use strict';

const { Wallets, Gateway } = require('fabric-network');
const fs = require('fs');
const path = require('path');

const ccpPath = path.resolve(__dirname, 'connection.json');
const ccpJSON = fs.readFileSync(ccpPath, 'utf8');
const ccp = JSON.parse(ccpJSON);

async function main() {
    try {
        // const walletPath = path.join(process.cwd(), 'wallet');
        const wallet = await Wallets.newFileSystemWallet('wallet');
        // console.log(`Wallet path: ${walletPath}`);

        // Check to see if we've already enrolled the user.
        const userExists = await wallet.get('usernew');
        const tlsExists = await wallet.get('tlsid');
        if (!userExists) {
            console.log('An identity for the user "usernew" does not exist in the wallet');
            return;
        }
        if (!tlsExists) {
            console.log('An identity for the user "tls" does not exist in the wallet');
            return;
        }
        console.log("Here");

        // Create a new gateway for connecting to our peer node.
        const gateway = new Gateway();
        await gateway.connect(ccp, {
            wallet,
            identity: 'usernew',
            discovery: { enabled: false, asLocalhost: false },
            clientTlsIdentity: 'tlsid'
        });
        console.log("Here1");

        // Get the network (channel) our contract is deployed to.
        const network = await gateway.getNetwork('supplychain');
        console.log("Here2");

        // Get the channel object to fetch our peers
        const channel = network.getChannel();
        console.log("Here3");

        // Get peers for endorsement
        // channel.getEndorsers();
        const org1Peer = channel.getPeer('peer1.peers.org1.com');
        // console.log(org1Peer);
        const org2Peer = channel.getPeer('peer1.peers.org2.com');
        // console.log(org2Peer);
        const org3Peer = channel.getPeer('peer1.peers.org3.com');
        // console.log(org3Peer);
        // All the above logs print correct information

        // Get the contract from the network.
        const contract = network.getContract('mycontract');
        const result = await contract.evaluateTransaction('queryAllObjects');
        console.log(`Transaction has been evaluated, result is: ${result.toString()}`);
    } catch (error) {
        console.error(`Failed to evaluate transaction: ${error}`);
    }
}

main()
Here is the crypto folder tree
C:.
├───peers.org1.com
│   └───users
│       ├───Admin@peers.org1.com
│       │   ├───msp
│       │   │   ├───admincerts
│       │   │   ├───cacerts
│       │   │   ├───intermediatecerts
│       │   │   ├───keystore
│       │   │   ├───signcerts
│       │   │   ├───tlscacerts
│       │   │   └───tlsintermediatecerts
│       │   └───tls
│       └───User1@peers.org1.com
│           ├───msp
│           │   ├───admincerts
│           │   ├───cacerts
│           │   ├───intermediatecerts
│           │   ├───keystore
│           │   ├───signcerts
│           │   ├───tlscacerts
│           │   └───tlsintermediatecerts
│           └───tls
├───peers.org2.com
│   └───users
│       ├───Admin@peers.org2.com
│       │   ├───msp
│       │   │   ├───admincerts
│       │   │   ├───cacerts
│       │   │   ├───intermediatecerts
│       │   │   ├───keystore
│       │   │   ├───signcerts
│       │   │   ├───tlscacerts
│       │   │   └───tlsintermediatecerts
│       │   └───tls
│       └───User1@peers.org2.com
│           ├───msp
│           │   ├───admincerts
│           │   ├───cacerts
│           │   ├───intermediatecerts
│           │   ├───keystore
│           │   ├───signcerts
│           │   ├───tlscacerts
│           │   └───tlsintermediatecerts
│           └───tls
└───peers.org3.com
    └───users
        ├───Admin@peers.org3.com
        │   ├───msp
        │   │   ├───admincerts
        │   │   ├───cacerts
        │   │   ├───intermediatecerts
        │   │   ├───keystore
        │   │   ├───signcerts
        │   │   ├───tlscacerts
        │   │   └───tlsintermediatecerts
        │   └───tls
        └───User1@peers.org3.com
            ├───msp
            │   ├───admincerts
            │   ├───cacerts
            │   ├───intermediatecerts
            │   ├───keystore
            │   ├───signcerts
            │   ├───tlscacerts
            │   └───tlsintermediatecerts
            └───tls
The temp.pem used in the connection file above is prepared by appending the ica.pem and ca.pem shown below. Here is how the certificates look for Org2 (they look similar for the other 2 orgs).

msp/tlscacerts/ca.pem
Issuer: C=XX, ST=XXXX, L=XXXX, O=MyCompany, OU=Cybersecurity, CN=MyCompany Root Certificate Authority 2018
Validity
    Not Before: Jul 23 17:07:45 2018 GMT
    Not After : Jul 23 17:17:44 2043 GMT
Subject: C=XX, ST=XXXX, L=XXXX, O=MyCompany, OU=Cybersecurity, CN=MyCompany Root Certificate Authority
msp/tlsintermediatecerts/ica.pem
Issuer: C=XX, ST=XXXX, L=XXXX, O=MyCompany, OU=Cybersecurity, CN=MyCompany Root Certificate Authority 2018
Validity
    Not Before: Nov 14 21:26:35 2018 GMT
    Not After : Nov 14 21:36:35 2025 GMT
Subject: C=XX, ST=XXXX, L=XXXX, O=MyCompany, CN=MyCompany Issuing CA 101
tls/server.crt
Issuer: C=XX, ST=XXXX, L=XXXX, O=MyCompany, CN=MyCompany Issuing CA 101
Validity
    Not Before: Jan 18 20:30:30 2021 GMT
    Not After : Jan 18 20:30:30 2023 GMT
Subject: C=XX, ST=XXXX, L=XXXX, O=MyCompany Inc., OU=org2client, CN=*.peers.org2.com
. . .
X509v3 Subject Alternative Name:
    DNS:*.peers.org2.com
Org2 NodeJs log
2021-02-25T10:21:33.736Z - error: [Endorser]: sendProposal[peer1.peers.org2.com] - Received error response from: grpcs://10.100.xxx.xxx:7051 error: Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]
2021-02-25T10:21:33.738Z - error: [Endorser]: sendProposal[peer1.peers.org2.com] - rejecting with: Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]
2021-02-25T10:21:33.738Z - error: [SingleQueryHandler]: evaluate: message=Query failed. Errors: ["Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]"], stack=FabricError: Query failed. Errors: ["Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]"]
    at SingleQueryHandler.evaluate (/fabric23/node_modules/fabric-network/lib/impl/query/singlequeryhandler.js:47:23)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async Transaction.evaluate (/fabric23/node_modules/fabric-network/lib/transaction.js:276:25)
    at async main (/fabric23/test.js:67:25), name=FabricError
Failed to evaluate transaction: FabricError: Query failed. Errors: ["Error: 2 UNKNOWN: error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]"]
Org2 peer logs
2021-02-25 10:21:33.732 UTC [endorser] Validate -> WARN 08f access denied: creator's signature over the proposal is not valid: The signature is invalid channel=supplychain txID=01bde838 mspID=ORG2MSP
2021-02-25 10:21:33.732 UTC [comm.grpc.server] 1 -> INFO 090 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=172.23.238.200:40928 grpc.peer_subject="CN=*.peers.org3.com,OU=org3client,O=MyCompany Inc.,L=XXXX,ST=XXXX,C=XX" error="error validating proposal: access denied: channel [supplychain] creator org [ORG2MSP]" grpc.code=Unknown grpc.call_duration=12.335491ms
Org3 peer logs
2021-02-26 13:42:26.081 UTC [gossip.channel] publishStateInfo -> DEBU 6155d8 Empty membership, no one to publish state info to
2021-02-26 13:42:26.493 UTC [core.comm] ServerHandshake -> DEBU 6155d9 Server TLS handshake completed in 49.605106ms server=PeerServer remoteaddress=public.ip.address:29154
2021-02-26 13:42:26.597 UTC [grpc] InfoDepth -> DEBU 6155da [transport]transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2021-02-26 13:42:26.927 UTC [gossip.channel] publishStateInfo -> DEBU 6155db Empty membership, no one to publish state info to
I have also tried deploying the same code on the Docker swarm on the Azure VM, but there it gives the same error I was getting when I was using the wrong certificate, as described in the SO post here.
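One way to see which certificate chain the Org3 peer endpoint actually presents through nginx, compared to a direct connection, is a raw TLS probe (a sketch; substitute the real nginx address):

# Handshake through nginx, sending the expected SNI name, and dump the presented chain
openssl s_client -connect public.ip.address:7051 -servername peer1.peers.org3.com -showcerts </dev/null

If nginx terminates TLS itself instead of passing the gRPC connection through, the chain shown here will differ from the peer's own tls/server.crt.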
- Connect an app with a Hyperledger Fabric network
Can I deploy a WebApp client to connect with a Hyperledger Fabric network running in a VM or on GKE? I want to run the WebApp in Google Cloud Run.
I'm using the fabric-sdk for Go to build my app and contractapi to build my smart contract.
- How to apply pagination to an array of objects in CouchDB using Hyperledger Fabric?
{ "_id": "usq", "_rev": "5-f8e9a8853b15f0270df94c1ae71323216", "transactions": [ { "admin_notification": [], "admin_status": "pending", "payment_amount_usd": "1", "sp_tx_datetime": "Feb 26, 2021, 12:22 PM", "sp_tx_hash": "pi_tx1", "sp_tx_status": "succeeded", "sp_tx_toAddress": "Admin", "tx_admin_dateTime": "-", "user_buyplan_days": "7 Days" }, { "admin_notification": [], "admin_status": "pending", "payment_amount_usd": "2", "sp_tx_datetime": "Feb 26, 2021, 4:09 PM", "sp_tx_hash": "pi_tx2", "sp_tx_status": "succeeded", "sp_tx_toAddress": "Admin", "tx_admin_dateTime": "-", "user_buyplan_days": "7 Days" }, { "admin_notification": [], "admin_status": "pending", "payment_amount_usd": "1", "sp_tx_datetime": "Feb 26, 2021, 12:22 PM", "sp_tx_hash": "pi_tx3", "sp_tx_status": "succeeded", "sp_tx_toAddress": "Admin", "tx_admin_dateTime": "-", "user_buyplan_days": "7 Days" } ], "user_email": "s@mail.com", "user_fName": "Sam", "user_id": "user_2304354", "user_lName": "Smith", "user_password": "Abc@123456", "user_type": "user", "~version": "CgMBFgA=" }
Here I want only 2 transactions the first time, then the next ones. I have used the getQueryResultWithPagination method, but it doesn't work on a single object, so I created a CouchDB view.
"views": { "tx-view": { "map": "function (doc) {if(doc.transactions.length > 0) { emit(doc.transactions); }}" }, "tx-view-2": { "map": "function (doc) { if(doc.transactions.length > 0) { doc.transactions.forEach(function (tag) { emit(doc.user_id, tag); });}}" } },
Can I add this view into a chaincode query method and create a transaction for the same? How do I resolve this?
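For context, here is a sketch of how ledger-level pagination is normally written in Node chaincode with getQueryResultWithPagination. Note that it pages over whole documents (keys), which is why it cannot split the transactions array inside a single document; the function and parameter names below are illustrative:

async function queryWithPagination(ctx, queryString, pageSize, bookmark) {
    // Pages over matching *documents*, not over array elements within one document
    const { iterator, metadata } = await ctx.stub.getQueryResultWithPagination(
        queryString, parseInt(pageSize, 10), bookmark);
    const results = [];
    let res = await iterator.next();
    while (!res.done) {
        results.push(JSON.parse(res.value.value.toString('utf8')));
        res = await iterator.next();
    }
    await iterator.close();
    // The caller passes metadata.bookmark back in to fetch the next page
    return JSON.stringify({ results, bookmark: metadata.bookmark });
}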
- HBase regions do not fail over correctly, stuck in "OPENING" RIT
I am using hbase-2.2.3 to set up a small cluster with 3 nodes (both Hadoop and HBase are in HA mode):
- node1: NN, JN, ZKFC, ZK, HMaster, HRegionServer
- node2: NN, JN, ZKFC, DN, ZK, Backup HMaster, HRegionServer
- node3: DN, JN, ZK, HRegionServer
When I reboot node3, it causes regions-in-transition (some regions are stuck in OPENING). In the master log, I can see:

master.HMaster: Not running balancer because 5 region(s) in transition
Does anyone know how to fix this issue? Great thanks.
- Is there a Redis pub/sub replacement option with high availability and redundancy, or possibly p2p messaging?
I have an app with hundreds of horizontally scaled servers which uses redis pub/sub, and it works just fine.
The Redis server is a central point of failure. Whenever Redis fails (well, it happens sometimes), our application falls into an inconsistent state and has to follow a recovery process, which takes time. During this time the entire app is hardly usable.
Is there any messaging system/framework option, similar to Redis pub/sub, but with redundancy and high availability, so that if one instance fails, others will continue to deliver the messages exchanged between application hosts?
Or, better, is there any distributed messaging system in which app instances exchange messages in a peer-to-peer manner, so that there is no single point of failure?
- Apache Flink high availability not working as expected
I tried to test high availability by bringing down the TaskManager along with the JobManager and the YARN NodeManager at the same time. I thought YARN would automatically assign that application to another node, but it's not happening 😔. How can this be achieved?
- Explore the contents of files of an nginx container on my host machine
How can I see the contents of the files of all my nginx containers on my host machine? I want to see all the configuration files, for example.
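A few standard ways to do this (a sketch; replace the container name with yours; the paths are the nginx image defaults):

# List and read config files inside the running container
docker exec my-nginx ls -R /etc/nginx
docker exec my-nginx cat /etc/nginx/nginx.conf

# Copy the whole config directory out to the host for browsing
docker cp my-nginx:/etc/nginx ./nginx-config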
- Missing files from a Docker container created from an image through 'docker save'
A colleague is moving away and he's been trying to send me his work through Docker images. We tried docker commit and docker save, and then on my side docker load, but files that would normally be there (i.e., /blah_folder/blah_file.txt) do not show up; only /blah_folder does.
Are we doing something wrong?
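For reference, a minimal end-to-end sequence (a sketch; the image name is a placeholder). One caveat worth checking: docker commit captures the container's filesystem but not data stored in volumes, so files living on a volume would be missing after load:

# Sender
docker commit <container-id> colleague/work:latest
docker save colleague/work:latest -o work.tar

# Receiver
docker load -i work.tar
docker run --rm colleague/work:latest ls -la /blah_folder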
- Install MySQL in a Docker volume: is it possible? (Linux)
For educational reasons, my professor wants me to try to install the MySQL server inside a Docker volume associated with a certain container. Do you think it is possible? I don't know how to do it; I searched on the internet but found nothing.
The apt-get commands inside the volume don't work, even though I'm using a container with ubuntu:18.04.
Thanks in advance for the answers!
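For reference, the usual pattern is that the MySQL software lives in the image/container, while the volume holds the data directory; apt-get runs in a container, not in a volume. A minimal sketch with the official image (the password is a placeholder):

docker volume create mysql-data
docker run -d --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=change-me \
  -v mysql-data:/var/lib/mysql \
  mysql:8.0
docker volume inspect mysql-data   # shows where the data lives on the host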
- 2-org RAFT setup: one orderer log has logSendFailure, orderer certificate signed by unknown authority
I'm setting up a multi-host network based on a modified fabric-samples test-network (v. 2.2.1).
Currently I have two orgs ('OEM' and 'S11') on separate servers, each with 2 CAs, 1 orderer, and 1 peer (as Docker containers). The servers are joined by a Docker swarm overlay network.
After starting up the network, I get a logSendFailure with "certificate signed by unknown authority" in the 'OEM' orderer log:

2021-02-10 19:32:27.395 UTC [orderer.consensus.etcdraft] Step -> INFO 14da5 1 is starting a new election at term 1 channel=system-channel node=1
2021-02-10 19:32:27.396 UTC [orderer.consensus.etcdraft] becomePreCandidate -> INFO 14da6 1 became pre-candidate at term 1 channel=system-channel node=1
2021-02-10 19:32:27.396 UTC [orderer.consensus.etcdraft] poll -> INFO 14da7 1 received MsgPreVoteResp from 1 at term 1 channel=system-channel node=1
2021-02-10 19:32:27.397 UTC [orderer.consensus.etcdraft] campaign -> INFO 14da8 1 [logterm: 1, index: 2] sent MsgPreVote request to 2 at term 1 channel=system-channel node=1
2021-02-10 19:32:27.398 UTC [orderer.consensus.etcdraft] consensusSent -> DEBU 14da9 Sending msg of 28 bytes to 2 on channel system-channel took 931.836µs
2021-02-10 19:32:27.398 UTC [orderer.consensus.etcdraft] logSendFailure -> DEBU 14daa Failed to send StepRequest to 2, because: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\" while trying to verify candidate authority certificate \"ca.scm.com\")" channel=system-channel node=1
2021-02-10 19:32:29.042 UTC [orderer.common.cluster.step] handleMessage -> DEBU 14dab Received message from orderer(10.0.2.69:52996): ConsensusRequest for channel system-channel with payload of size 28
However, I don't see the same error in the 'S11' Orderer log:
2021-02-10 19:32:06.456 UTC [orderer.consensus.etcdraft] Step -> INFO 63c 2 is starting a new election at term 1 channel=system-channel node=2
2021-02-10 19:32:06.457 UTC [orderer.consensus.etcdraft] becomePreCandidate -> INFO 63d 2 became pre-candidate at term 1 channel=system-channel node=2
2021-02-10 19:32:06.457 UTC [orderer.consensus.etcdraft] poll -> INFO 63e 2 received MsgPreVoteResp from 2 at term 1 channel=system-channel node=2
2021-02-10 19:32:06.457 UTC [orderer.consensus.etcdraft] campaign -> INFO 63f 2 [logterm: 1, index: 2] sent MsgPreVote request to 1 at term 1 channel=system-channel node=2
2021-02-10 19:32:06.457 UTC [orderer.consensus.etcdraft] consensusSent -> DEBU 640 Sending msg of 28 bytes to 1 on channel system-channel took 29.279µs
2021-02-10 19:32:06.457 UTC [orderer.common.cluster.step] sendMessage -> DEBU 641 Send of ConsensusRequest for channel system-channel with payload of size 28 to orderer(orderer.oem.scm.com:6000) took 327.037µs
Based on this, my first guess is that either:
- My TLS/CA server certs from the 'S11' org are not correctly formatted, or
- I am sharing the wrong/incomplete certs from 'S11' with the 'OEM' orderer.
After looking at the orderer certificates (cacert.pem, signcert.pem, tls-server.crt), I don't see any major differences other than org names, so I don't understand why I see this error on one org but not the other.
Assuming that I have a problem with my certificate, which of these cert fields would likely be the issue?
-Issuer: C = US, ST = New York, L = New York, O = scm.com, CN = ca.scm.com
-Subject: C = US, ST = New York, L = New York, O = scm.com, CN = ca.scm.com
-X509v3 Subject Alternative Name: DNS:orderer.oem.scm.com, DNS:localhost
- something else?

After generating certs (before creating the system channel), I share these 3 orderer certificates between organizations:
- msp/cacerts/localhost-${Org-orderer-port}-ca-orderer.pem
- msp/signcerts/cert.pem
- orderers/orderer.${org}.scm.com/tls/server.crt
I came across this similar post, although I already have 'localhost' listed on my server.crt cert. (Is there another file I should be looking at instead?)
Any tips on troubleshooting would be valued!
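One low-level check that often helps here is comparing what each file actually asserts, and whether the TLS cert verifies against the CA being shared (a sketch, using the file paths listed above; add -untrusted if an intermediate CA is in play):

# Compare subject/issuer/SANs across the three shared files
openssl x509 -in msp/cacerts/localhost-${Org-orderer-port}-ca-orderer.pem -noout -subject -issuer
openssl x509 -in msp/signcerts/cert.pem -noout -subject -issuer
openssl x509 -in orderers/orderer.${org}.scm.com/tls/server.crt -noout -subject -issuer -ext subjectAltName

# Verify the TLS server cert against the CA cert you distribute to the other org
# (if TLS certs come from a separate TLS CA, point -CAfile at that certificate instead)
openssl verify -CAfile msp/cacerts/localhost-${Org-orderer-port}-ca-orderer.pem orderers/orderer.${org}.scm.com/tls/server.crt

A failure on the verify step for one org but not the other would line up with seeing "certificate signed by unknown authority" on only one side.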
- How to migrate data from one Hyperledger Fabric network to another Hyperledger Fabric network?
Is there any way to migrate data from one Hyperledger Fabric network to another Hyperledger Fabric network with the same structure but different certs?
- HAProxy binds the back-end port and does not allow the backend server to start in TCP mode
Here, the problem is:
Case I: I have started HAProxy in TCP mode. The HAProxy configuration is shown below:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend ca-client
    bind *:8056
    bind *:7054
    bind *:8054
    mode tcp
    default_backend ca-server

backend ca-server
    mode tcp
    balance roundrobin
    server org2-ica1 0.0.0.0:7054 check
    server org2-ica2 0.0.0.0:8054 check
And I started both servers (in separate Docker containers) that are defined in the backend of the HAProxy configuration. It gives an error of

Error starting userland proxy: listen tcp 0.0.0.0:8054: bind: address already in use
as below:

$ sudo docker-compose -f ca.yaml up
Creating network "ca_production-network" with the default driver
Creating org1-ica2 ...
Creating org1-ica1 ...

ERROR: for org1-ica1  a bytes-like object is required, not 'str'
ERROR: for org1-ica2  a bytes-like object is required, not 'str'
ERROR: for org1-ica1  a bytes-like object is required, not 'str'
ERROR: for org1-ica2  a bytes-like object is required, not 'str'
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 261, in _raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.22/containers/8d54e16ecae0824c91c51b3c880d27ea96fc200574d4eb81d72173f4515ae386/start

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/compose/service.py", line 625, in start_container
    container.start()
  File "/usr/lib/python3/dist-packages/compose/container.py", line 241, in start
    return self.client.start(self.id, **options)
  File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/docker/api/container.py", line 1095, in start
    self._raise_for_status(res)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 263, in _raise_for_status
    raise create_api_error_from_http_exception(e)
  File "/usr/lib/python3/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
    raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("b'driver failed programming external connectivity on endpoint org1-ica2 (ed8d425881e09eb0c21eb5a638e986791d19f3239ad7ece98a4478a5e53dc8be): Error starting userland proxy: listen tcp 0.0.0.0:8054: bind: address already in use'")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 11, in <module>
    load_entry_point('docker-compose==1.25.0', 'console_scripts', 'docker-compose')()
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 72, in main
    command()
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 128, in perform_command
    handler(command, command_options)
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1107, in up
    to_attach = up(False)
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1088, in up
    return self.project.up(
  File "/usr/lib/python3/dist-packages/compose/project.py", line 565, in up
    results, errors = parallel.parallel_execute(
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 112, in parallel_execute
    raise error_to_reraise
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 210, in producer
    result = func(obj)
  File "/usr/lib/python3/dist-packages/compose/project.py", line 548, in do
    return service.execute_convergence_plan(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 545, in execute_convergence_plan
    return self._execute_convergence_create(
  File "/usr/lib/python3/dist-packages/compose/service.py", line 460, in _execute_convergence_create
    containers, errors = parallel_execute(
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 112, in parallel_execute
    raise error_to_reraise
  File "/usr/lib/python3/dist-packages/compose/parallel.py", line 210, in producer
    result = func(obj)
  File "/usr/lib/python3/dist-packages/compose/service.py", line 465, in <lambda>
    lambda service_name: create_and_start(self, service_name.number),
  File "/usr/lib/python3/dist-packages/compose/service.py", line 457, in create_and_start
    self.start_container(container)
  File "/usr/lib/python3/dist-packages/compose/service.py", line 627, in start_container
    if "driver failed programming external connectivity" in ex.explanation:
TypeError: a bytes-like object is required, not 'str'
Case II: When I start the containers before enabling the HAProxy service, both servers work as expected. Then, when I start the HAProxy service, it shows an error:
$ sudo service haproxy restart
Job for haproxy.service failed because the control process exited with error code.
See "systemctl status haproxy.service" and "journalctl -xe" for details.
To see the internals, I ran:

haproxy -f /etc/haproxy/haproxy.cfg -db

[ALERT] 030/151042 (120902) : Starting frontend GLOBAL: cannot bind UNIX socket [/run/haproxy/here/admin.sock]
[ALERT] 030/151042 (120902) : Starting frontend ca-client: cannot bind socket [0.0.0.0:8056]
[ALERT] 030/151042 (120902) : Starting frontend ca-client: cannot bind socket [0.0.0.0:7054]
[ALERT] 030/151042 (120902) : Starting frontend ca-client: cannot bind socket [0.0.0.0:8054]
My questions are:
- What does this "cannot bind socket" error in HAProxy mean?
- What is the proper HAProxy configuration for TCP mode with two replicated servers?

Thank you!
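For reference, "cannot bind socket" means another process already holds that port; in Case I the frontend's bind *:7054 and bind *:8054 and the containers' published ports contend for the same host ports, which is exactly what both error messages report. One common shape for this kind of setup gives the front end its own port that neither backend occupies on the same host. A sketch (the 7056 client port and 127.0.0.1 addresses are assumptions to adapt; explicit addresses are clearer than 0.0.0.0 in server lines):

frontend ca-client
    bind *:7056
    mode tcp
    default_backend ca-server

backend ca-server
    mode tcp
    balance roundrobin
    server org2-ica1 127.0.0.1:7054 check
    server org2-ica2 127.0.0.1:8054 check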