Unable to connect to the database using SCRAM authentication
I changed the authentication method from md5 to scram-sha-256. I can connect to the database via the psql client, but not through the PHP module (PDO_PGSQL).
I am getting this error:
Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user "ubteam" FATAL: password authentication failed for user "ubteam" in D:\Users\Shiv\xammp\htdocs\vihs\index.php on line 9
Connection attempt failed.
Fatal error: Uncaught TypeError: pg_close(): Argument #1 ($connection) must be of type ?PgSql\Connection, bool given in D:\Users\Shiv\xammp\htdocs\vihs\index.php:16
Stack trace:
#0 D:\Users\Shiv\xammp\htdocs\vihs\index.php(16): pg_close(false)
#1 {main}
thrown in D:\Users\Shiv\xammp\htdocs\vihs\index.php on line 16
RDS logs:
2022-05-04 09:36:07 UTC:10.85.4.59(58167):ubteam@postgres:[7357]:FATAL: password authentication failed for user "ubteam"
2022-05-04 09:36:07 UTC:10.85.4.59(58167):ubteam@postgres:[7357]:DETAIL: Connection matched pg_hba.conf line 13: "host all all all md5"
2022-05-04 09:36:07 UTC:10.85.4.59(58171):ubteam@postgres:[7387]:FATAL: password authentication failed for user "ubteam"
2022-05-04 09:36:07 UTC:10.85.4.59(58171):ubteam@postgres:[7387]:DETAIL: Connection matched pg_hba.conf line 13: "host all all all md5"
I am using PostgreSQL 12 on Amazon RDS.
Can someone help here?
Thanks in advance.
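For context when reading the log above (the matched pg_hba.conf rule is "md5" even though the password was re-hashed): PostgreSQL stores a role's password as either a legacy md5 digest or a SCRAM-SHA-256 verifier, and the stored format has to line up with what the client library can negotiate. A minimal Python sketch of the legacy md5 format, using a placeholder password (not the poster's real one):

```python
import hashlib

def pg_md5_verifier(password: str, user: str) -> str:
    """Build PostgreSQL's legacy md5 password verifier:
    the literal prefix 'md5' followed by md5(password || username) in hex."""
    digest = hashlib.md5((password + user).encode("utf-8")).hexdigest()
    return "md5" + digest

# Placeholder credentials for illustration only
v = pg_md5_verifier("secret", "ubteam")
print(v)  # 'md5' followed by 32 hex characters

# A SCRAM verifier instead looks like:
#   SCRAM-SHA-256$<iterations>:<salt>$<StoredKey>:<ServerKey>
# so the prefix of the stored value tells you which hashing a role uses.
```

On a self-managed server you can inspect the stored format with `SELECT rolpassword FROM pg_authid WHERE rolname = 'ubteam';` (an md5 verifier starts with `md5`, a SCRAM one with `SCRAM-SHA-256$`); note that RDS may restrict direct access to pg_authid.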
See also questions close to this topic
-
How to upload a video using ACF in WordPress?
I am still in the learning phase with WordPress. I cannot figure out which ACF field type I should choose to upload a video file, and what code I should write to show it on the front end. Please look at the code below. I am using the_field('') but it isn't working. Can anyone help?
<video width="954" height="535" controls class="tm-mb-40">
  <source src="video/wheat-field.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>
-
Delete a table from a database using a Laravel command
I need to delete a database table using a Laravel Artisan command, but not with a command like php artisan migrate:rollback --step=5.
I need to do it from a route or controller, like this:
Route::get('/clear/database', function () {
    Artisan::call('cache:clear');
    return redirect('/');
});
I also tried:
public function dd()
{
    Schema::drop('table_name');
}
but it is not working. It gives me an error like this:
SQLSTATE[23000]: Integrity constraint violation: 1451 Cannot delete or update a parent row: a foreign key constraint fails (SQL: drop table table_name)
There is no foreign key for the table. What should I do?
Thanks in advance!
-
Creating Sticky Navbar with Drop Down Menu HTML
I am creating an HTML web page that contains a sticky navbar with a drop-down menu. However, when I created one, the drop-down menu does not work in the sticky navbar, and vice versa. Below are screenshots of the results of the two codes.
*image with dropdown menu but without sticky navbar
*image with sticky navbar but without dropdown menu
below is the code for "image with dropdown menu but without sticky navbar"
<!DOCTYPE html> <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font- awesome/4.7.0/css/font-awesome.min.css"> <style> body {margin:0;font-family:Arial} .topnav { overflow: hidden; background-color: #333; } .topnav a { list-style-type: none; float: left; display: block; color: #f2f2f2; text-align: center; padding: 14px 16px; text-decoration: none; font-size: 17px; position: sticky; } .active { background-color: #04AA6D; color: white; } .topnav .icon { display: none; } .dropdown { float: left; overflow: hidden; } .dropdown .dropbtn { font-size: 17px; border: none; outline: none; color: white; padding: 14px 16px; background-color: inherit; font-family: inherit; margin: 0; } .dropdown-content { display: none; position: absolute; background-color: #f9f9f9; min-width: 160px; box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2); z-index: 1; } .dropdown-content a { float: none; color: black; padding: 12px 16px; text-decoration: none; display: block; text-align: left; } .topnav a:hover, .dropdown:hover .dropbtn { background-color: #555; color: white; } .dropdown-content a:hover { background-color: #ddd; color: black; } .dropdown:hover .dropdown-content { display: block; } @media screen and (max-width: 600px) { .topnav a:not(:first-child), .dropdown .dropbtn { display: none; } .topnav a.icon { float: right; display: block; } } @media screen and (max-width: 600px) { .topnav.responsive {position: relative;} .topnav.responsive .icon { position: absolute; right: 0; top: 0; } .topnav.responsive a { float: none; display: block; text-align: left; } .topnav.responsive .dropdown {float: none;} .topnav.responsive .dropdown-content {position: relative;} .topnav.responsive .dropdown .dropbtn { display: block; width: 100%; text-align: left; } } </style> </head> <body> <div class="header"> <h2>Scroll Down</h2> <p>Scroll down to see the sticky effect.</p> </div> <div class="topnav" 
id="myTopnav"> <a href="#home" class="active">Home</a> <a href="#news">News</a> <a href="#contact">Contact</a> <div class="dropdown"> <button class="dropbtn">Dropdown <i class="fa fa-caret-down"></i> </button> <div class="dropdown-content"> <a href="#">Link 1</a> <a href="#">Link 2</a> <a href="#">Link 3</a> </div> </div> <a href="#about">About</a> <a href="javascript:void(0);" style="font-size:15px;" class="icon" onclick="myFunction()">☰</a> </div> <div style="padding-left:16px"> <h2>Responsive Topnav with Dropdown</h2> <p>Resize the browser window to see how it works.</p> <p>Hover over the dropdown button to open the dropdown menu.</p> </div> <h3>Sticky Navigation Bar Example</h3> <p>The navbar will <strong>stick</strong> to the top when you reach its scroll position.</p> <p><strong>Note:</strong> Internet Explorer do not support sticky positioning and Safari requires a -webkit- prefix.</p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <script> function myFunction() { var x = document.getElementById("myTopnav"); if (x.className === "topnav") { x.className += " responsive"; } else { x.className = "topnav"; } } </script> </body> </html>
below is the code for "image with sticky navbar but without dropdown menu"
<!DOCTYPE html> <html> <head> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font- awesome/4.7.0/css/font-awesome.min.css"> <style> body { font-size: 20px; } body {margin:0;} ul { list-style-type: none; margin: 0; padding: 0; overflow: hidden; background-color: #333; position: -webkit-sticky; /* Safari */ position: sticky; top: 0; } li { float: left; } li a { display: block; color: white; text-align: center; padding: 16px 20px; text-decoration: none; } li a:hover { background-color: #111; } /*======================================================================*/ body { background-color:white; } ul { list-style-type: none; margin: 0; padding: 0; overflow: hidden; background-color: #38444d; } li { float: left; } li a, .dropbtn { display: inline-block; color: white; text-align: center; padding: 14px 16px; text-decoration: none; } li a:hover, .dropdown:hover .dropbtn { background-color: red; } li.dropdown { display: inline-block; } .dropdown-content { display: none; position: absolute; background-color: #f9f9f9; min-width: 160px; box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2); z-index: 1; } .dropdown-content a { color: black; padding: 12px 16px; text-decoration: none; display: block; text-align: left; } .dropdown-content a:hover {background-color: #f1f1f1;} .dropdown:hover .dropdown-content { display: block; } footer { text-align: center; padding: 3px; background-color: DarkSalmon; color: white; } </style> </head> <body> <div class="header"> <h2>Scroll Down</h2> <p>Scroll down to see the sticky effect.</p> </div> <ul> <li><a href="#home">Home</a></li> <li><a href="#news">News</a></li> <li class="dropdown"> <a href="javascript:void(1)" class="dropbtn">Dropdown</a> <div class="dropdown-content"> <a href="#">Link 1</a> <a href="#">Link 2</a> <a href="#">Link 3</a> </div> </li> </ul> <h3>Sticky Navigation Bar Example</h3> <p>The navbar will <strong>stick</strong> to the top when you 
reach its scroll position.</p> <p><strong>Note:</strong> Internet Explorer do not support sticky positioning and Safari requires a -webkit- prefix.</p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <p>Some text to enable scrolling. </p> <footer> <p>Author: Hege Refsnes<br> <a href="mailto:hege@example.com">hege@example.com</a></p> </footer> </body> </html>
Please, I need some help with this, as I am new to HTML and CSS.
-
ERROR: invalid byte sequence for encoding WITH psql
I've seen numerous issues in other posts with the copy command and:
ERROR: invalid byte sequence for encoding "UTF8": 0xfc
The consensus in these posts appears to be to specify the encoding in the command you're doing the copy with. I have done so:
psql -h localhost -p 5432 -d BOBDB -U BOB -c "\COPY \"BOBTB01\" FROM 'C:\Temp\file.csv' with csv HEADER ENCODING 'WIN1252'";
Password for user BOB:
ERROR: character with byte sequence 0x81 in encoding "WIN1252" has no equivalent in encoding "UTF8"
CONTEXT: COPY BOBTB01, line 76589
That confused me, so I changed WIN1252 to UTF8. Having done so, I get a slightly different error: the failure is on a different line and the text is slightly different.
psql -h localhost -p 5432 -d BOBDB -U BOB -c "\COPY \"BOBTB01\" FROM 'C:\Temp\file.csv' with csv HEADER ENCODING 'UTF8'";
Password for user BOB:
ERROR: invalid byte sequence for encoding "UTF8": 0xfc
CONTEXT: COPY BOBTB01, line 163
This is the encoding shown in the database:
show client_encoding;
 client_encoding
-----------------
 UTF8
(1 row)
The file is from a reliable source, and I happen to have R installed, which also does CSV import. The file was pulled into R without issue, which makes me think it's not the file but something else. Is there another switch or syntax that can bypass these issues?
I'm not sure what is wrong.
Can you help?
Thanks.
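One way to see what is going on (a standalone sketch, not tied to the poster's actual file): byte 0xfc is 'ü' in Windows-1252 but is never valid on its own in UTF-8, while 0x81 is one of the few bytes Windows-1252 leaves undefined. The two errors together suggest the file mixes encodings. A small Python scan that reports which lines fail under each encoding, using an inline sample in place of the real file.csv:

```python
def bad_lines(data: bytes, encoding: str):
    """Return 1-based numbers of lines that fail to decode with `encoding`."""
    bad = []
    for lineno, line in enumerate(data.splitlines(), start=1):
        try:
            line.decode(encoding)
        except UnicodeDecodeError:
            bad.append(lineno)
    return bad

# Inline stand-in for the CSV: one ASCII line, one UTF-8 'ü' (0xc3 0xbc),
# one Windows-1252 'ü' (bare 0xfc), and a 0x81 byte that Windows-1252
# leaves undefined.
sample = b"plain ascii\nutf8 \xc3\xbc here\ncp1252 \xfc here\nundefined \x81 byte\n"

print(bad_lines(sample, "utf-8"))   # -> [3, 4]
print(bad_lines(sample, "cp1252"))  # -> [4]
```

If both passes report failures on different lines, no single `ENCODING` option will get the file through COPY; the file has to be transcoded (or the offending rows fixed) first. Note that latin-1 decodes every byte, so it never fails, but it silently mislabels the mixed rows.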
-
What's missing in my Ruby 'inverse_of' relationship?
I know this topic has been addressed, but I have been at this for two days and I'm just stuck. I know inverse_of does not create a new query, so should I use another method?
Question: how do I set up an inverse_of with a has_one / belongs_to situation on the same class?
Explanation: a user has_one :spouse and belongs_to :spouse_from. They are the inverse of each other. When a User signs up, they can invite their significant other. For example:
- user_a invites & creates user_b
- user_b.spouse_id is set to user_a.id
- In a separate method I want to be able to update like user_a.spouse_id = user_a.spouse.id
The only association that works at this point is user_b.spouse.
class User
  has_one :spouse, class_name: 'User', foreign_key: :spouse_id, dependent: :nullify, inverse_of: :spouse_from
  belongs_to :spouse_from, class_name: 'User', foreign_key: :spouse_id, inverse_of: :spouse, optional: true
end
-
Normalizing data in postgresql
This application will read an iTunes library in comma-separated values (CSV) and produce properly normalized tables as specified below. Once you have placed the proper data in the tables, press the button below to check your answer.
We will do some things differently in this assignment. We will not use a separate "raw" table, we will just use ALTER TABLE statements to remove columns after we don't need them (i.e. we converted them into foreign keys).
We will use the same CSV track data as in prior exercises - this time we will build a many-to-many relationship using a junction/through/join table between tracks and artists.
To grade this assignment, the program will run a query like this on your database and look for the data it expects to see:
SELECT track.title, album.title, artist.name
FROM track
JOIN album ON track.album_id = album.id
JOIN tracktoartist ON track.id = tracktoartist.track_id
JOIN artist ON tracktoartist.artist_id = artist.id
ORDER BY track.title LIMIT 3;
The expected result of this query on your database is:
 title                  | album                     | artist
 A Boy Named Sue (live) | The Legend Of Johnny Cash | Jo
DROP TABLE album CASCADE;
CREATE TABLE album (
  id SERIAL,
  title VARCHAR(128) UNIQUE,
  PRIMARY KEY(id)
);
DROP TABLE track CASCADE;
CREATE TABLE track (
  id SERIAL,
  title TEXT,
  artist TEXT,
  album TEXT,
  album_id INTEGER REFERENCES album(id) ON DELETE CASCADE,
  count INTEGER,
  rating INTEGER,
  len INTEGER,
  PRIMARY KEY(id)
);
DROP TABLE artist CASCADE;
CREATE TABLE artist (
  id SERIAL,
  name VARCHAR(128) UNIQUE,
  PRIMARY KEY(id)
);
DROP TABLE tracktoartist CASCADE;
CREATE TABLE tracktoartist (
  id SERIAL,
  track VARCHAR(128),
  track_id INTEGER REFERENCES track(id) ON DELETE CASCADE,
  artist VARCHAR(128),
  artist_id INTEGER REFERENCES artist(id) ON DELETE CASCADE,
  PRIMARY KEY(id)
);
\copy track(title,artist,album,count,rating,len) FROM 'library.csv' WITH DELIMITER ',' CSV;
INSERT INTO album (title) SELECT DISTINCT album FROM track;
UPDATE track SET album_id = (SELECT album.id FROM album WHERE album.title = track.album);
INSERT INTO tracktoartist (track, artist) SELECT DISTINCT ...
INSERT INTO artist (name) ...
UPDATE tracktoartist SET track_id = ...
UPDATE tracktoartist SET artist_id = ...
-- We are now done with these text fields
ALTER TABLE track DROP COLUMN album;
ALTER TABLE track ...
ALTER TABLE tracktoartist DROP COLUMN track;
ALTER TABLE tracktoartist ...
SELECT track.title, album.title, artist.name
FROM track
JOIN album ON track.album_id = album.id
JOIN tracktoartist ON track.id = tracktoartist.track_id
JOIN artist ON tracktoartist.artist_id = artist.id
LIMIT 3;
What am I doing wrong with the code?
-
Updating RDS snapshot export into S3
We have some data in our MySQL RDS which slows down our application, but it's no longer needed. So we want to remove old records but keep them somewhere our Data Engineers can run analytic queries.
I've backed up my MySQL RDS instance into S3 and connected it with Athena. It works great. But if I remove old records from my DB and we need to repeat the process in the next six months, our analytics team will need to query two DBs to get the data. Is there a way to make it simpler for them and update the S3 snapshot? Or maybe we should use something different, like Hive?
-
'Can't find the database or table.' On RDS ReadReplica database
I have a private RDS database, and I made a read replica that is public. I can access the read replica locally via mysql, but I can't access it from Google Data Studio. I get the following error:
Can't find the database or table.
Error ID: idnumber
I do have a database and a table inside.
Does anyone know why this is happening? It should be working.
-
How to run CLI migrations in a Continous Integration pipeline on a private database on AWS RDS
I am currently using a tool that allows you to apply database migrations only using a CLI (Prisma). My database is in a private network in AWS.
To do it manually, I currently do this:
ssh -i $SSH_PATH_TO__MY_IDENTITY_FILE ec2-user@${BASTION_HOSTNAME} \ -N -f -L $DB_PORT:${DB_HOSTNAME}:5432 &
A bastion, in AWS parlance, is just a VM that has public access but can also reach private networks. This ssh command creates a tunnel through the bastion so that I can reach the private machine on my local $DB_PORT. Then I apply the migrations locally, but since the database is listening on a local port, I can reach my production database.
Here is the question: how do I move this to a CI/CD pipeline?
I was thinking about doing this:
- Use a Docker image that has ssh and nodejs installed.
- Move the identity file to an env variable in the CI/CD.
- Install the migrations tool there.
- Create a tunnel as I did above.
- Modify the configuration file to point to the production database.
- And then, finally, apply the migrations.
I think this could work, but it seems like a lot of trouble, and I was wondering whether there is a better, standard way to approach this. Maybe triggering a Lambda function that runs inside the private network?
-
Why aren't knex migrations showing in the psql database?
I created knexfile.js:
module.exports = {
  development: {
    client: 'pg',
    connection: { filename: 'postgres://localhost/myDB' }
  },
};
Then I create a migration:
npx knex migrate:make items
And it has this data:
exports.up = function (knex) {
  return knex.schema.createTable("items", (table) => {
    table.increments("id").primary();
    table.string("title").notNullable();
    table.string("description");
    table.decimal("price").notNullable();
    table.decimal("quantity").unsigned().notNullable();
    table.string("image");
  });
};

exports.down = function (knex) {
  return knex.schema.dropTableIfExists("items");
};
Then I run
npx knex migrate:latest
And it indicates that one migration was run.
Then I go into psql myDB, run \dt, and nothing comes up!
-
Constant failure to import CSV file via COPY command in PostgreSQL
I have a file in my TEMP directory on a Windows server:
echo %TEMP%
C:\Users\BOB\AppData\Local\Temp\2
Below is the command to insert the file into the table:
psql -d BOBDB01 -c "\COPY "BOBTB01" FROM 'C:\Users\BOB\AppData\Local\Temp\2\file.csv' with csv";
This results in an error:
Password for user postgres: <pw_given>
ERROR: relation "bobtb01" does not exist
It does exist though:
\d
        List of relations
 Schema |  Name   | Type  | Owner
--------+---------+-------+-------
 public | BOBTB01 | table | BOB
Can anyone help me kick around some ideas as to why the COPY command fails, given that the table is there?
Thanks.
-
SCRAM authentication
I am trying SCRAM authentication, but I am not able to connect from the application.
error:
2022-05-03 08:45:16 UTC:10.85.4.59(61313):postgres@postgres:[21033]:FATAL: password authentication failed for user "postgres"
2022-05-03 08:45:16 UTC:10.85.4.59(61313):postgres@postgres:[21033]:DETAIL: Password does not match for user "postgres". Connection matched pg_hba.conf line 13: "host all all all md5"
2022-05-03 08:45:18 UTC::@:[5773]:LOG: checkpoint starting: time
2022-05-03 08:45:18 UTC::@:[5773]:LOG: checkpoint complete: wrote 2 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.202 s, sync=0.003 s, total=0.217 s; sync files=2, longest=0.003 s, average=0.002 s; distance=65536 kB, estimate=86101 kB
Since I am using RDS, I can't edit the pg_hba.conf file.
Can someone please tell me how to connect the application through a SCRAM verifier?
Database: RDS PostgreSQL (hosted by Amazon). I changed the parameter group (password_encryption='scram-sha-256'), generated a SCRAM verifier using the code below, and altered the password for the user with the SCRAM verifier. But when I tried to log in to the PHP application, it says password authentication failed.
Please help with this. Thanks in advance.
Could someone also please explain how SCRAM works with PostgreSQL?
""" Generate the password hashes / verifiers for use in PostgreSQL How to use this: pw = EncryptPassword( user="username", password="securepassword", algorithm="scram-sha-256", ) print(pw.encrypt()) The output of the ``encrypt`` function can be stored in PostgreSQL in the password clause, e.g. ALTER ROLE username PASSWORD {pw.encrypt()}; where you safely interpolate it in with a quoted literal, of course :) """ import base64 import hashlib import hmac import secrets import stringprep import unicodedata class EncryptPassword: ALGORITHMS = { # 'md5': { # 'encryptor': '_encrypt_md5', # 'digest': hashlib.md5, # 'defaults': {}, # }, 'scram-sha-256': { 'encryptor': '_encrypt_scram_sha_256', 'digest': hashlib.sha256, 'defaults': { 'salt_length': 16, 'iterations': 4096, }, } } # List of characters that are prohibited to be used per PostgreSQL-SASLprep SASLPREP_STEP3 = ( stringprep.in_table_a1, # PostgreSQL treats this as prohibited stringprep.in_table_c12, stringprep.in_table_c21_c22, stringprep.in_table_c3, stringprep.in_table_c4, stringprep.in_table_c5, stringprep.in_table_c6, stringprep.in_table_c7, stringprep.in_table_c8, stringprep.in_table_c9, ) def __init__(self, user, password, algorithm='scram-sha-256', **kwargs): self.user = user self.password = password self.algorithm = algorithm self.salt = None self.encrypted_password = None self.kwargs = kwargs def encrypt(self): try: algorithm = self.ALGORITHMS[self.algorithm] except KeyError: raise Exception('algorithm "{}" not supported'.format(self.algorithm)) kwargs = algorithm['defaults'].copy() kwargs.update(self.kwargs) return getattr(self, algorithm['encryptor'])(algorithm['digest'], **kwargs) def _bytes_xor(self, a, b): """XOR two bytestrings together""" return bytes(a_i ^ b_i for a_i, b_i in zip(a, b)) def _encrypt_md5(self, digest, **kwargs): self.encrypted_password = b"md5" + digest( self.password.encode('utf-8') + self.user.encode('utf-8')).hexdigest().encode('utf-8') return self.encrypted_password def 
_encrypt_scram_sha_256(self, digest, **kwargs): # requires SASL prep # password = SASLprep iterations = kwargs['iterations'] salt_length = kwargs['salt_length'] salted_password = self._scram_sha_256_generate_salted_password(self.password, salt_length, iterations, digest) client_key = hmac.HMAC(salted_password, b"Client Key", digest) stored_key = digest(client_key.digest()).digest() server_key = hmac.HMAC(salted_password, b"Server Key", digest) self.encrypted_password = self.algorithm.upper().encode("utf-8") + b"$" + \ ("{}".format(iterations)).encode("utf-8") + b":" + \ base64.b64encode(self.salt) + b"$" + \ base64.b64encode(stored_key) + b":" + base64.b64encode(server_key.digest()) return self.encrypted_password def _normalize_password(self, password): """Normalize the password using PostgreSQL-flavored SASLprep. For reference: https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/common/saslprep.c using the `pg_saslprep` function Implementation borrowed from asyncpg implementation: https://github.com/MagicStack/asyncpg/blob/master/asyncpg/protocol/scram.pyx#L263 """ normalized_password = password # if the password is an ASCII string or fails to encode as an UTF8 # string, we can return try: normalized_password.encode("ascii") except UnicodeEncodeError: pass else: return normalized_password # Step 1 of SASLPrep: Map. Per the algorithm, we map non-ascii space # characters to ASCII spaces (\x20 or \u0020, but we will use ' ') and # commonly mapped to nothing characters are removed # Table C.1.2 -- non-ASCII spaces # Table B.1 -- "Commonly mapped to nothing" normalized_password = u"".join( [' ' if stringprep.in_table_c12(c) else c for c in normalized_password if not stringprep.in_table_b1(c)]) # If at this point the password is empty, PostgreSQL uses the original # password if not normalized_password: return password # Step 2 of SASLPrep: Normalize. 
Normalize the password using the # Unicode normalization algorithm to NFKC form normalized_password = unicodedata.normalize('NFKC', normalized_password) # If the password is not empty, PostgreSQL uses the original password if not normalized_password: return password # Step 3 of SASLPrep: Prohobited characters. If PostgreSQL detects any # of the prohibited characters in SASLPrep, it will use the original # password # We also include "unassigned code points" in the prohibited character # category as PostgreSQL does the same for c in normalized_password: if any([in_prohibited_table(c) for in_prohibited_table in self.SASLPREP_STEP3]): return password # Step 4 of SASLPrep: Bi-directional characters. PostgreSQL follows the # rules for bi-directional characters laid on in RFC3454 Sec. 6 which # are: # 1. Characters in RFC 3454 Sec 5.8 are prohibited (C.8) # 2. If a string contains a RandALCat character, it cannot containy any # LCat character # 3. If the string contains any RandALCat character, an RandALCat # character must be the first and last character of the string # RandALCat characters are found in table D.1, whereas LCat are in D.2 if any([stringprep.in_table_d1(c) for c in normalized_password]): # if the first character or the last character are not in D.1, # return the original password if not (stringprep.in_table_d1(normalized_password[0]) and stringprep.in_table_d1(normalized_password[-1])): return password # if any characters are in D.2, use the original password if any([stringprep.in_table_d2(c) for c in normalized_password]): return password # return the normalized password return normalized_password def _scram_sha_256_generate_salted_password(self, password, salt_length, iterations, digest): """This follows the "Hi" algorithm specified in RFC5802""" # first, need to normalize the password using PostgreSQL-flavored SASLprep normalized_password = self._normalize_password(password) # convert the password to a binary string - UTF8 is safe for SASL (though there 
are SASLPrep rules) p = normalized_password.encode("utf8") # generate a salt self.salt = secrets.token_bytes(salt_length) # the initial signature is the salt with a terminator of a 32-bit string ending in 1 ui = hmac.new(p, self.salt + b'\x00\x00\x00\x01', digest) # grab the initial digest u = ui.digest() # for X number of iterations, recompute the HMAC signature against the password # and the latest iteration of the hash, and XOR it with the previous version for x in range(iterations - 1): ui = hmac.new(p, ui.digest(), hashlib.sha256) # this is a fancy way of XORing two byte strings together u = self._bytes_xor(u, ui.digest()) return u pw = EncryptPassword( user="username", password="password", algorithm="scram-sha-256", ) print(pw.encrypt()) )```
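As a sanity check on the generator above: the "Hi" function it implements is, per RFC 5802, exactly PBKDF2 with HMAC-SHA-256, so the same salted password can be produced with the standard library's hashlib.pbkdf2_hmac. A condensed sketch of the verifier construction under that equivalence, with a fixed example salt and password (SASLprep is skipped, so this only matches for plain-ASCII passwords):

```python
import base64
import hashlib
import hmac

def scram_sha_256_verifier(password: bytes, salt: bytes, iterations: int) -> str:
    """Build a SCRAM-SHA-256 verifier string in the layout PostgreSQL stores:
    SCRAM-SHA-256$<iterations>:<salt>$<StoredKey>:<ServerKey>."""
    # RFC 5802's Hi(password, salt, i) is PBKDF2-HMAC-SHA-256
    salted = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return "SCRAM-SHA-256${}:{}${}:{}".format(
        iterations,
        base64.b64encode(salt).decode(),
        base64.b64encode(stored_key).decode(),
        base64.b64encode(server_key).decode(),
    )

# Example password and a fixed 16-byte salt (made up for illustration)
v = scram_sha_256_verifier(b"password", b"0123456789abcdef", 4096)
print(v)
```

Because the salt is fixed here, the output is deterministic, which makes it easy to compare against the class above by temporarily injecting the same salt instead of secrets.token_bytes.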
-
SASL/SCRAM works with confluentinc images, but fails with wurstmeister
I have a SASL/SCRAM config working with the confluentinc Kafka/ZooKeeper images:
docker-compose.yml
# Based on: https://github.com/iwpnd/tile38-kafka-sasl version: "2" services: zookeeper: image: confluentinc/cp-zookeeper:6.0.1 hostname: zookeeper container_name: zookeeper environment: KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://zookeeper:2181 ZOOKEEPER_CLIENT_PORT: 2181 ZOOKEEPER_TICK_TIME: 2000 ZOOKEEPER_SERVER_ID: 3 KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/sasl/zookeeper_jaas.conf \ -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider \ -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider \ -Dquorum.auth.enableSasl=true \ -Dquorum.auth.learnerRequireSasl=true \ -Dquorum.auth.serverRequireSasl=true \ -Dquorum.auth.learner.saslLoginContext=QuorumLearner \ -Dquorum.auth.server.saslLoginContext=QuorumServer \ -Dquorum.cnxn.threads.size=20 \ -DrequireClientAuthScheme=sasl" volumes: - ./secrets:/etc/kafka/secrets/sasl zookeeper-add-kafka-users: image: confluentinc/cp-kafka:6.0.1 container_name: "zookeeper-add-kafka-users" depends_on: - zookeeper command: "bash -c 'echo Waiting for Zookeeper to be ready... 
&& \ cub zk-ready zookeeper:2181 120 && \ kafka-configs --zookeeper zookeeper:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name admin && \ kafka-configs --zookeeper zookeeper:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name client '" environment: KAFKA_BROKER_ID: ignored KAFKA_ZOOKEEPER_CONNECT: ignored KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf volumes: - ./secrets:/etc/kafka/secrets/sasl broker: image: confluentinc/cp-kafka:6.0.1 hostname: broker container_name: broker depends_on: - zookeeper ports: - "9091:9091" - "9101:9101" - "9092:9092" expose: - "29090" environment: KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf" KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181' KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:9092,SASL_PLAINHOST://:9091 KAFKA_ADVERTISED_LISTENERS: INSIDE://broker:29090,OUTSIDE://localhost:9092,SASL_PLAINHOST://broker:9091 KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 KAFKA_JMX_PORT: 9101 KAFKA_JMX_HOSTNAME: localhost KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512 KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT volumes: - ./secrets:/etc/kafka/secrets/sasl
secrets/kafka_server_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="password"
  user_admin="password"
  user_client="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password";
};
KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="client"
  password="password";
};
secrets/zk_server_jaas.conf
Server {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
};
secrets/zookeeper_jaas.conf
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  user_admin="password";
};
QuorumServer {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="password";
};
QuorumLearner {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="admin"
  password="password";
};
The above config works as I expected with the confluentinc/cp-zookeeper:6.0.1 image, but when I change the images to wurstmeister/zookeeper and wurstmeister/kafka:2.13-2.7.1,
I get below errors:[36mbroker |[0m [Configuring] 'security.inter.broker.protocal' in '/opt/kafka/config/server.properties' [36mbroker |[0m [Configuring] 'jmx.port' in '/opt/kafka/config/server.properties' [36mbroker |[0m [Configuring] 'advertised.listeners' in '/opt/kafka/config/server.properties' [36mbroker |[0m [Configuring] 'port' in '/opt/kafka/config/server.properties' [36mbroker |[0m [Configuring] 'inter.broker.listener.name' in '/opt/kafka/config/server.properties' [36mbroker |[0m Excluding KAFKA_OPTS from broker config [36mbroker |[0m Excluding KAFKA_HOME from broker config [36mbroker |[0m [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties' [36mbroker |[0m [Configuring] 'listeners' in '/opt/kafka/config/server.properties' [36mbroker |[0m Excluding KAFKA_VERSION from broker config [33mzookeeper |[0m ZooKeeper JMX enabled by default [33mzookeeper |[0m Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg [33mzookeeper |[0m 2021-12-04 13:17:55,364 [myid:] - INFO [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg [33mzookeeper |[0m 2021-12-04 13:17:55,370 [myid:] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 [33mzookeeper |[0m 2021-12-04 13:17:55,370 [myid:] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1 [33mzookeeper |[0m 2021-12-04 13:17:55,371 [myid:] - WARN [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running in standalone mode [33mzookeeper |[0m 2021-12-04 13:17:55,376 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started. [33mzookeeper |[0m 2021-12-04 13:17:55,396 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed. 
zookeeper | 2021-12-04 13:17:55,396 [myid:] - INFO [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
zookeeper | 2021-12-04 13:17:55,397 [myid:] - INFO [main:ZooKeeperServerMain@98] - Starting server
zookeeper | 2021-12-04 13:17:55,409 [myid:] - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
zookeeper | 2021-12-04 13:17:55,409 [myid:] - INFO [main:Environment@100] - Server environment:host.name=zookeeper
zookeeper | 2021-12-04 13:17:55,409 [myid:] - INFO [main:Environment@100] - Server environment:java.version=1.7.0_65
zookeeper | 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
zookeeper | 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre
zookeeper | 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment@100] - Server environment:java.class.path=/opt/zookeeper-3.4.13/bin/../build/classes:/opt/zookeeper-3.4.13/bin/../build/lib/*.jar:/opt/zookeeper-3.4.13/bin/../lib/slf4j-log4j12-1.7.25.jar:/opt/zookeeper-3.4.13/bin/../lib/slf4j-api-1.7.25.jar:/opt/zookeeper-3.4.13/bin/../lib/netty-3.10.6.Final.jar:/opt/zookeeper-3.4.13/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper-3.4.13/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.13/bin/../lib/audience-annotations-0.5.0.jar:/opt/zookeeper-3.4.13/bin/../zookeeper-3.4.13.jar:/opt/zookeeper-3.4.13/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.13/bin/../conf:
zookeeper | 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
zookeeper | 2021-12-04 13:17:55,410 [myid:] - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
zookeeper | 2021-12-04 13:17:55,413 [myid:] - INFO [main:Environment@100] - Server environment:java.compiler=<NA>
zookeeper | 2021-12-04 13:17:55,413 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux
zookeeper | 2021-12-04 13:17:55,414 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64
zookeeper | 2021-12-04 13:17:55,414 [myid:] - INFO [main:Environment@100] - Server environment:os.version=5.11.0-40-generic
zookeeper | 2021-12-04 13:17:55,414 [myid:] - INFO [main:Environment@100] - Server environment:user.name=root
zookeeper | 2021-12-04 13:17:55,414 [myid:] - INFO [main:Environment@100] - Server environment:user.home=/root
zookeeper | 2021-12-04 13:17:55,415 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/opt/zookeeper-3.4.13
zookeeper | 2021-12-04 13:17:55,422 [myid:] - INFO [main:ZooKeeperServer@836] - tickTime set to 2000
zookeeper | 2021-12-04 13:17:55,425 [myid:] - INFO [main:ZooKeeperServer@845] - minSessionTimeout set to -1
zookeeper | 2021-12-04 13:17:55,426 [myid:] - INFO [main:ZooKeeperServer@854] - maxSessionTimeout set to -1
zookeeper | 2021-12-04 13:17:55,443 [myid:] - INFO [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
zookeeper | 2021-12-04 13:17:55,453 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
zookeeper-add-kafka-users | Waiting for Zookeeper to be ready...
zookeeper-add-kafka-users | bash: line 1: cub: command not found
broker | [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
broker | [Configuring] 'sasl.mechanism.inter.broker.protocol' in '/opt/kafka/config/server.properties'
broker | [Configuring] 'offsets.topic.replication.factor' in '/opt/kafka/config/server.properties'
broker | [Configuring] 'listener.security.protocol.map' in '/opt/kafka/config/server.properties'
broker | [Configuring] 'jmx.hostname' in '/opt/kafka/config/server.properties'
broker | [Configuring] 'sasl.enabled.mechanisms' in '/opt/kafka/config/server.properties'
broker | [Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
zookeeper-add-kafka-users exited with code 127
broker | [2021-12-04 13:17:58,599] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
broker | [2021-12-04 13:17:59,195] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
broker | [2021-12-04 13:17:59,343] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
broker | [2021-12-04 13:17:59,357] INFO starting (kafka.server.KafkaServer)
broker | [2021-12-04 13:17:59,360] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
broker | [2021-12-04 13:17:59,398] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
broker | [2021-12-04 13:17:59,429] INFO Client environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,434] INFO Client environment:host.name=broker (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,435] INFO Client environment:java.version=1.8.0_292 (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,435] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,435] INFO Client environment:java.home=/usr/lib/jvm/zulu8-ca/jre (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,435] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.7.1.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.7.1.jar:/opt/kafka/bin/../libs/connect-file-2.7.1.jar:/opt/kafka/bin/../libs/connect-json-2.7.1.jar:/opt/kafka/bin/../libs/connect-mirror-2.7.1.jar:/opt/kafka/bin/../libs/connect-mirror-client-2.7.1.jar:/opt/kafka/bin/../libs/connect-runtime-2.7.1.jar:/opt/kafka/bin/../libs/connect-transforms-2.7.1.jar:/opt/kafka/bin/../libs/hk2-api-2.6.1.jar:/opt/kafka/bin/../libs/hk2-locator-2.6.1.jar:/opt/kafka/bin/../libs/hk2-utils-2.6.1.jar:/opt/kafka/bin/../libs/jackson-annotations-2.10.5.jar:/opt/kafka/bin/../libs/jackson-core-2.10.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.10.5.1.jar:/opt/kafka/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/opt/kafka/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-paranamer-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-scala_2.13-2.10.5.jar:/opt/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka/bin/../libs/jakarta.annotation-api-1.3.5.jar:/opt/kafka/bin/../libs/jakarta.inject-2.6.1.jar:/opt/kafka/bin/../libs/jakarta.validation-api-2.0.2.jar:/opt/kafka/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/opt/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka/bin/../libs/javassist-3.25.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.31.jar:/opt/kafka/bin/../libs/jersey-common-2.31.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.31.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.31.jar:/opt/kafka/bin/../libs/jersey-hk2-2.31.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.31.jar:/opt/kafka/bin/../libs/jersey-server-2.31.jar:/opt/kafka/bin/../libs/jetty-client-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-http-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-io-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-security-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-server-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-util-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jetty-util-ajax-9.4.38.v20210224.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.7.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.7.1.jar:/opt/kafka/bin/../libs/kafka-raft-2.7.1.jar:/opt/kafka/bin/../libs/kafka-streams-2.7.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.7.1.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.13-2.7.1.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.7.1.jar:/opt/kafka/bin/../libs/kafka-tools-2.7.1.jar:/opt/kafka/bin/../libs/kafka_2.13-2.7.1-sources.jar:/opt/kafka/bin/../libs/kafka_2.13-2.7.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/netty-buffer-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-codec-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-common-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-handler-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-resolver-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-transport-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-epoll-4.1.59.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-unix-common-4.1.59.Final.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.3.jar:/opt/kafka/bin/../libs/paranamer-2.8.jar:/opt/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/kafka/bin/../libs/reflections-0.9.12.jar:/opt/kafka/bin/../libs/rocksdbjni-5.18.4.jar:/opt/kafka/bin/../libs/scala-collection-compat_2.13-2.2.0.jar:/opt/kafka/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/opt/kafka/bin/../libs/scala-library-2.13.3.jar:/opt/kafka/bin/../libs/scala-logging_2.13-3.9.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.13.3.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.7.jar:/opt/kafka/bin/../libs/zookeeper-3.5.9.jar:/opt/kafka/bin/../libs/zookeeper-jute-3.5.9.jar:/opt/kafka/bin/../libs/zstd-jni-1.4.5-6.jar (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,437] INFO Client environment:java.library.path=/usr/lib/jvm/zulu8-ca/jre/lib/amd64/server:/usr/lib/jvm/zulu8-ca/jre/lib/amd64:/usr/lib/jvm/zulu8-ca/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,437] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,440] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,441] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,441] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,441] INFO Client environment:os.version=5.11.0-40-generic (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,442] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,442] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,442] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,443] INFO Client environment:os.memory.free=1014MB (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,443] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,443] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,447] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@4cc451f2 (org.apache.zookeeper.ZooKeeper)
broker | [2021-12-04 13:17:59,459] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
broker | [2021-12-04 13:17:59,469] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
broker | [2021-12-04 13:17:59,481] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
broker | [2021-12-04 13:17:59,580] INFO Client successfully logged in. (org.apache.zookeeper.Login)
broker | [2021-12-04 13:17:59,582] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
broker | [2021-12-04 13:17:59,595] INFO Opening socket connection to server zookeeper/172.20.0.2:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
zookeeper | 2021-12-04 13:17:59,605 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /172.20.0.3:57480
broker | [2021-12-04 13:17:59,609] INFO Socket connection established, initiating session, client: /172.20.0.3:57480, server: zookeeper/172.20.0.2:2181 (org.apache.zookeeper.ClientCnxn)
zookeeper | 2021-12-04 13:17:59,621 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /172.20.0.3:57480
zookeeper | 2021-12-04 13:17:59,624 [myid:] - INFO [SyncThread:0:FileTxnLog@213] - Creating new log file: log.1
broker | [2021-12-04 13:17:59,642] INFO Session establishment complete on server zookeeper/172.20.0.2:2181, sessionid = 0x100474bc7f70000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
zookeeper | 2021-12-04 13:17:59,642 [myid:] - INFO [SyncThread:0:ZooKeeperServer@694] - Established session 0x100474bc7f70000 with negotiated timeout 18000 for client /172.20.0.3:57480
broker | [2021-12-04 13:17:59,646] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
zookeeper | 2021-12-04 13:17:59,657 [myid:] - ERROR [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@1063] - cnxn.saslServer is null: cnxn object did not initialize its saslServer properly.
broker | [2021-12-04 13:17:59,660] ERROR SASL authentication failed using login context 'Client' with exception: {} (org.apache.zookeeper.client.ZooKeeperSaslClient)
broker | javax.security.sasl.SaslException: Error in authenticating with a Zookeeper Quorum member: the quorum member's saslToken is null.
broker |     at org.apache.zookeeper.client.ZooKeeperSaslClient.createSaslToken(ZooKeeperSaslClient.java:312)
broker |     at org.apache.zookeeper.client.ZooKeeperSaslClient.respondToServer(ZooKeeperSaslClient.java:275)
broker |     at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:882)
broker |     at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:103)
broker |     at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:365)
broker |     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
broker | [2021-12-04 13:17:59,669] ERROR [ZooKeeperClient Kafka server] Auth failed. (kafka.zookeeper.ZooKeeperClient)
broker | [2021-12-04 13:17:59,672] INFO EventThread shut down for session: 0x100474bc7f70000 (org.apache.zookeeper.ClientCnxn)
zookeeper | 2021-12-04 13:17:59,794 [myid:] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x100474bc7f70000, likely client has closed socket
zookeeper | 2021-12-04 13:17:59,795 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /172.20.0.3:57480 which had sessionid 0x100474bc7f70000
broker | [2021-12-04 13:17:59,823] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
broker | org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers
broker |     at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
broker |     at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
broker |     at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:564)
broker |     at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1662)
broker |     at kafka.zk.KafkaZkClient.makeSurePersistentPathExists(KafkaZkClient.scala:1560)
broker |     at kafka.zk.KafkaZkClient.$anonfun$createTopLevelPaths$1(KafkaZkClient.scala:1552)
broker |     at kafka.zk.KafkaZkClient.$anonfun$createTopLevelPaths$1$adapted(KafkaZkClient.scala:1552)
broker |     at scala.collection.immutable.List.foreach(List.scala:333)
broker |     at kafka.zk.KafkaZkClient.createTopLevelPaths(KafkaZkClient.scala:1552)
broker |     at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:467)
broker |     at kafka.server.KafkaServer.startup(KafkaServer.scala:233)
broker |     at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
broker |     at kafka.Kafka$.main(Kafka.scala:82)
broker |     at kafka.Kafka.main(Kafka.scala)
broker | [2021-12-04 13:17:59,825] INFO shutting down (kafka.server.KafkaServer)
broker | [2021-12-04 13:17:59,836] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
broker | [2021-12-04 13:17:59,845] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
broker | [2021-12-04 13:17:59,849] INFO App info kafka.server for -1 unregistered (org.apache.kafka.common.utils.AppInfoParser)
broker | [2021-12-04 13:17:59,854] INFO shut down completed (kafka.server.KafkaServer)
broker | [2021-12-04 13:17:59,855] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
broker | [2021-12-04 13:17:59,859] INFO shutting down (kafka.server.KafkaServer)
broker exited with code 1
Any tips on how to get this working with the wurstmeister images?