Performing a layer 7 health check with HAProxy and uWSGI in a Django application
I'm using this uwsgi.ini to run a Django application:
[uwsgi]
http-socket = :8000
enable-proxy-protocol = true
chdir = /usr/local/src/api
module = api.wsgi
uid = root
gid = root
pidfile = /var/run/api-uwsgi.pid
master = true
processes = 10
chmod-socket = 664
threaded-logger = true
logto = /var/log/api/uwsgi.log
log-maxsize = 10000000
logfile-chown = true
vacuum = true
die-on-term = true
I've added an API endpoint under the /health-check URL that performs database and cache health checks. It returns status code 200 if everything is fine. Now I want HAProxy to perform a layer 7 health check against this endpoint, but with option httpchk the response status code is 301, so the health check fails. Here is the backend part of my HAProxy config:
backend http_server
mode http
balance leastconn
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk
http-check send meth GET uri /health-check ver HTTP/1.1 hdr Accept application/json
http-check expect rstatus 200
server app1 192.168.0.11:8000 check inter 500 downinter 5s fall 2 rise 3
server app2 192.168.0.12:8000 check inter 500 downinter 5s fall 2 rise 3
Here is the result of running the Django apps with uWSGI and HAProxy. Note that a health check on layer 4 works as expected.
Server http_server/app2 is DOWN, reason: Layer7 wrong status, code: 301, info: "Moved Permanently", check duration: 54ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
So what is causing this problem (I'm guessing uWSGI), and is there a way to fix it?
See also questions close to this topic
-
Django Test Redirect to login with next parameter
I'm trying to test that a non-logged-in user is redirected to the login URL. Is there a way to pass a GET parameter (for example 'next') to reverse_lazy? I'm using class-based views.
class SecretIndexViewTest(TestCase):
    url = reverse_lazy('secret_index')
    login_url = reverse_lazy('login')
    client = Client()

    def test_http_status_code_302(self):
        response = self.client.get(self.url)
        self.assertEqual(response.status_code, 302)
        self.assertRedirects(response, self.login_url)
The above fails with:
Response redirected to '/login?next=/secret', expected '/login'
Expected '/login?next=/secret' to equal '/login'.
I tried:
class SecretIndexViewTest(TestCase):
    url = reverse_lazy('secret_index')
    login_url = reverse_lazy('login', kwargs={'next': url})
    client = Client()

    def test_http_status_code_302(self):
        response = self.client.get(self.url)
        self.assertEqual(response.status_code, 302)
        self.assertRedirects(response, self.login_url)
The result is a NoReverseMatch:
django.urls.exceptions.NoReverseMatch: Reverse for 'login' with keyword arguments '{'next': '/secret'}' not found. 1 pattern(s) tried: ['/login$']
Changing kwargs to args results in the same error.
I know I can handle it as below, but I'm looking for a better solution.
self.assertRedirects(response, self.login_url + f"?next={self.url}",)
or
login_url = reverse_lazy('login') + f"?next={url}"
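One slightly tidier variant of the string-concatenation approach is to build the expected redirect target with the standard library, which also percent-encodes the `next` value correctly if it ever contains spaces or query characters. This is only a sketch using the URL values from the question; the helper name is made up:

```python
from urllib.parse import urlencode


def login_redirect_target(login_url: str, next_url: str) -> str:
    # Build e.g. "/login?next=/secret"; safe='/' keeps slashes readable,
    # while other special characters still get encoded.
    return f"{login_url}?{urlencode({'next': next_url}, safe='/')}"


# Mirrors the test above (URLs assumed):
expected = login_redirect_target("/login", "/secret")
# then: self.assertRedirects(response, expected)
```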
-
How to get django-filter to not overwrite GET params
I am using django-tables2 along with django-filter. The URL to load the page looks like http://localhost:8000/pleasehelp/?param=abc. I need the param value to load the page. I am using a filter that looks like:

class MyFilter(django_filters.FilterSet):
    class Meta:
        model = MyModel
        fields = {
            'name': ['icontains'],
            'city': ['icontains'],
        }
which relates to the simple model:
class MyModel(models.Model):
    name = models.CharField(max_length=20)
    city = models.CharField(max_length=20)
I have also created a simple View which consists of:
class MyView(SingleTableMixin, FilterView):
    model = MyModel
    template_name = "pleasehelp/index.html"
    context_object_name = 'items'
    paginate_by = 25
    table_class = MyTable
    filterset_class = MyFilter
And a table consisting of:
class MyTable(tables.Table):
    extra_column = tables.Column(empty_values=())

    class Meta:
        model = MyModel
        template_name = "django_tables2/bootstrap4.html"
        fields = ('name', 'city', 'extra_column')

    def render_extra_column(self, record):
        """Render our custom distance column"""
        param = self.request.GET.get('param', '')
        return param
Lastly my index.html looks like:
{% load static %}
{% load render_table from django_tables2 %}
{% load bootstrap4 %}
{% block content %}
  {% if filter %}
    <form action="" method="get" class="form form-inline">
      {% bootstrap_form filter.form layout='inline' %}
      {% bootstrap_button 'filter' %}
    </form>
  {% endif %}
  {% if items %}
    {% render_table table %}
  {% else %}
    <p>No items were found</p>
  {% endif %}
  <script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>
  <script src={% static 'site/javascript_helpers.js' %}></script>
{% endblock content %}
My issue is that when I try to use the filter, it replaces the param in the URL, i.e. my URL becomes
http://localhost:8000/pleasehelp/?name__icontains=a&city__icontains=
Instead, I would like
http://localhost:8000/pleasehelp/?name__icontains=a&city__icontains=&param=abc
(notice it has the data django-filter needs but also the data I need).
-
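The desired URL above is just the filter's query string merged with the original params. The merging step can be sketched with the standard library alone (not django-filter's API; in a template the usual trick is re-emitting `param` as a hidden input inside the filter form):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit


def merge_query(url: str, extra: dict) -> str:
    """Return url with extra params added, keeping the existing ones."""
    parts = urlsplit(url)
    # keep_blank_values preserves empty filters like city__icontains=
    query = dict(parse_qsl(parts.query, keep_blank_values=True))
    query.update(extra)
    return urlunsplit(parts._replace(query=urlencode(query)))


merged = merge_query(
    "http://localhost:8000/pleasehelp/?name__icontains=a&city__icontains=",
    {"param": "abc"},
)
```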
How to get the local machine time zone in a Django application?
In Django you have a TIME_ZONE setting which, as I understand it, somehow patches the standard date and time packages at runtime, making them think the application is working in the specified time zone. As a result, generic Python methods for determining the local time zone do not work (they just show the configured time zone).
I can evaluate the /etc/localtime symlink like in this answer, or use another Linux-specific method, but I am concerned about portability, as some developers run the app on Windows. Can I find out, in a platform-independent way, what the original time zone on the machine was?
-
Bad Gateway (502) on nginx > uwsgi > django, but only on SOME devices
My site looks fine on my Mac and my Windows laptop (using Chrome to access the homepage https://inspidered.org), but from any other device I get a 502 gateway error from nginx. Nginx is the front-end server running as a reverse proxy on a DigitalOcean droplet, with uwsgi behind it running Django 3.1. Here are my relevant config files:
My debugging tips come from this page: https://digitalocean.com/community/questions/502-bad-gateway-nginx-2
sudo tail -30 /var/log/nginx/error.log
...
2021/03/01 17:45:47 [crit] 16656#0: *3420 connect() to unix:/run/uwsgi/inspidered.lock failed (2: No such file or directory) while connecting to upstream, client: 23.100.xxx.yyy, server: inspidered.org, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/run/uwsgi/inspidered.lock:", host: "inspidered.org"
nginx has no obvious errors:
sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Also, nginx serves two other servers on the droplet (a python2 cherrypy and a python3 cherrypy) without any problems. Only this uwsgi/django site is affected. And only some devices.
[~]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-03-01 05:05:21 UTC; 13h ago
  Process: 16653 ExecReload=/bin/kill -s HUP $MAINPID (code=exited, status=0/SUCCESS)
  Process: 16399 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 16395 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 16394 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 16402 (nginx)
   CGroup: /system.slice/nginx.service
           ├─16402 nginx: master process /usr/sbin/nginx
           ├─16655 nginx: worker process
           └─16656 nginx: worker process

Mar 01 05:05:21 centos-divining systemd[1]: Stopped The nginx HTTP and reverse proxy server.
Mar 01 05:05:21 centos-divining systemd[1]: Starting The nginx HTTP and reverse proxy server...
Mar 01 05:05:21 centos-divining nginx[16395]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Mar 01 05:05:21 centos-divining nginx[16395]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Mar 01 05:05:21 centos-divining systemd[1]: Started The nginx HTTP and reverse proxy server.
Mar 01 05:18:09 centos-divining systemd[1]: Reloading The nginx HTTP and reverse proxy server.
Mar 01 05:18:09 centos-divining systemd[1]: Reloaded The nginx HTTP and reverse proxy server.
I checked and I have no special domain filters on my Mac:
sudo nano /private/etc/hosts sudo nano /etc/hosts
My uwsgi .service settings:

[Unit]
Description=uWSGI Emperor service

[Service]
ExecStartPre=/bin/bash -c 'mkdir -p /run/uwsgi; chown nfsnobody:nfsnobody /run/uwsgi'
ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/sites --logger file:logfile=/var/log/uwsgi.log,maxsize=500000
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target
and inspidered.ini settings:

[uwsgi]
project = inspidered
uid = root
base = /root/webapps/insp
chdir = %(base)/%(project)
home = %(base)
module = %(project).wsgi:application
master = true
processes = 5
socket = /run/uwsgi/%(project).sock
chown-socket = %(uid):root
chmod-socket = 666
vacuum = true
logto = /var/log/uwsgi.log
and finally, the emperor is governing 0 vassals -- which seems like an error, but perhaps they're just living in an autonomous collective.

[root@centos-divining log]# systemctl status uwsgi.service
● uwsgi.service - uWSGI Emperor service
   Loaded: loaded (/etc/systemd/system/uwsgi.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-03-01 18:51:10 UTC; 5min ago
  Process: 30019 ExecStartPre=/bin/bash -c mkdir -p /run/uwsgi; chown nfsnobody:nfsnobody /run/uwsgi (code=exited, status=0/SUCCESS)
 Main PID: 30022 (uwsgi)
   Status: "The Emperor is governing 0 vassals"
   CGroup: /system.slice/uwsgi.service
           ├─30022 /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites --logger file:logfile=/var/log/uwsgi.log,maxsize=500000
           └─30050 /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites --logger file:logfile=/var/log/uwsgi.log,maxsize=500000

Mar 01 18:51:10 centos-divining systemd[1]: uwsgi.service holdoff time over, scheduling restart.
Mar 01 18:51:10 centos-divining systemd[1]: Stopped uWSGI Emperor service.
Mar 01 18:51:10 centos-divining systemd[1]: Starting uWSGI Emperor service...
Mar 01 18:51:10 centos-divining systemd[1]: Started uWSGI Emperor service.
Mar 01 18:51:24 centos-divining uwsgi[30022]: Mon Mar 1 18:51:24 2021 - logsize: 18446744073709551615, triggering rotation to /var/log/...24684...
Mar 01 18:51:38 centos-divining uwsgi[30022]: Mon Mar 1 18:51:38 2021 - logsize: 18446744073709551615, triggering rotation to /var/log/...24698...
Hint: Some lines were ellipsized, use -l to show in full.
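One detail worth isolating here: nginx's error log complains about unix:/run/uwsgi/inspidered.lock, while the ini builds socket = /run/uwsgi/%(project).sock. A small diagnostic sketch (my addition, not part of the question) can check from Python which path, if any, actually accepts connections:

```python
import socket


def unix_socket_alive(path: str) -> bool:
    """Return True if a Unix socket at `path` exists and accepts connections."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.settimeout(1.0)
        s.connect(path)
        return True
    except OSError:
        # Covers FileNotFoundError (no such socket) and ConnectionRefusedError
        # (file exists but nothing is listening on it).
        return False
    finally:
        s.close()


# Paths taken from the question: nginx expects .lock, the ini creates .sock
for p in ("/run/uwsgi/inspidered.lock", "/run/uwsgi/inspidered.sock"):
    print(p, unix_socket_alive(p))
```

If the .sock path answers and the .lock path doesn't, the nginx upstream and the uwsgi ini simply disagree on the socket name.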
-
Can't run UWSGI application
Running uWSGI with http :8000 and --socket /tmp/my_gless.sock from my ini doesn't work for me, but I don't know why. When I run in the terminal:
uwsgi --http :8000 --socket /tmp/my_gless.sock --module my_gless.wsgi:application
or
uwsgi --http :8000 --socket /tmp/my_gless.sock --wsgi-file /home/krzyzak21/venv/my_gless/my_gless/wsgi.py
it works, but when I put the same setup in my uwsgi.ini file it doesn't:
[uwsgi]
# variables
projectname = my_gless
base = /home/krzyzak21/venv
chdir = %(base)/%(projectname)
# configuration
master = true
virtualenv = %(base)
env = DJANGO_SETTINGS_MODULE=%(projectname).settings
module = %(projectname).wsgi:application
socket = /tmp/%(projectname).sock
logto = /tmp/%(projectname).log
chmod-socket = 666
I have a VPS with root access, and I created a sudo user krzyzak21, then logged in as that user in a VS Code terminal. I'm now working in /home/krzyzak21/, where I created a virtualenv and installed Django 3, Python 3, the newest pip, etc. My file tree:
home/krzyzak21/venv/:
bin
lib
my_gless
│   ├── config
│   │   ├── my_gless_nginx.conf
│   │   └── uwsgi.ini
│   ├── logs
│   │   └── error.log
│   ├── manage.py
│   ├── media
│   │   └── a.png
│   ├── my_gless
│   │   ├── asgi.py
│   │   ├── __init__.py
│   │   ├── __pycache__
│   │   │   ├── __init__.cpython-38.pyc
│   │   │   ├── pro.cpython-38.pyc
│   │   │   ├── settings.cpython-38.pyc
│   │   │   ├── urls.cpython-38.pyc
│   │   │   └── wsgi.cpython-38.pyc
│   │   ├── settings.py
│   │   ├── urls.py
│   │   └── wsgi.py
│   ├── static
│   ├── test.py
│   └── uwsgi_params
└── pyvenv.cfg
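One difference between the working commands and the ini stands out: both command lines pass --http :8000, but the ini defines only socket. If the goal is to reproduce the command line exactly, the equivalent ini option would be the following (a guess based on the flags shown in the question, not a verified fix):

```ini
[uwsgi]
# ...same options as the ini above, plus the HTTP router
# that the working CLI invocations enabled with --http :8000
http = :8000
```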
-
Can't fix error "RuntimeError: You need to use the gevent-websocket server." and "OSError: write error"
I am writing a website with Flask. I use a stack of uWSGI + NGINX + Flask-SocketIO, with gevent as the asynchronous module. These errors occur during operation:

RuntimeError: You need to use the gevent-websocket server.
uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306]
Feb 23 12:57:55 toaa uwsgi[558436]: OSError: write error
I tried different configurations and also removed async_mode='gevent' from the socketio initialization.

wsgi.py file:

from webapp import app, socketio

if __name__ == '__main__':
    socketio.run(app, use_reloader=False, debug=True, log_output=True)
project.ini:
[uwsgi]
module = wsgi:app
master = true
gevent = 1024
gevent-monkey-patch = true
buffer-size = 32768
# optionally
socket = /home/sammy/projectnew/projectnew.sock
socket-timeout = 240
chmod-socket = 664
vacuum = true
die-on-term = true
webapp/__init__.py for the application app:

from gevent import monkey
monkey.patch_all()

import grpc.experimental.gevent
grpc.experimental.gevent.init_gevent()

from flask import Flask, session, request
from config import DevelopConfig, MqttConfig, MailConfig, ProductionConfig
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from flask_mail import Mail
from flask_script import Manager
from flask_socketio import SocketIO
# from flask_mqtt import Mqtt
from flask_login import LoginManager
from flask_babel import Babel
from flask_babel_js import BabelJS
from flask_babel import lazy_gettext as _l
from apscheduler.schedulers.gevent import GeventScheduler
# from celery import Celery

app = Flask(__name__)
app.config.from_object(ProductionConfig)
app.config.from_object(MqttConfig)
app.config.from_object(MailConfig)

db = SQLAlchemy(app)
migrate = Migrate(app, db)
mail = Mail(app)
manager = Manager(app, db)

login_manager = LoginManager(app)
login_manager.login_view = 'auth'
login_manager.login_message = _l("You must log in to access this page")
login_manager.login_message_category = "error"

# celery = Celery(app.name, broker=Config.CELERY_BROKER_URL)
# celery.conf.update(app.config)

scheduler = GeventScheduler()
# socketio = SocketIO(app)  # Production Version
socketio = SocketIO(app, async_mode='gevent')
babel = Babel(app)
babeljs = BabelJS(app=app, view_path='/translations/')

import webapp.views

@babel.localeselector
def get_locale():
    # if the user has set up the language manually it will be stored in the session,
    # so we use the locale from the user settings
    try:
        language = session['language']
    except KeyError:
        language = None
    if language is not None:
        print(language)
        return language
    return request.accept_languages.best_match(app.config['LANGUAGES'].keys())

from webapp import models

if __name__ == "__main__":
    manager.run()
The class in which the socket itself is used (
mqtt.py
):from webapp import socketio, app from flask import request from flask_mqtt import Mqtt from flask_babel import lazy_gettext as _l from webapp.tasks import SchedulerTask from webapp import translate_state_gate as tr_msg import json import copy import logging mqtt = Mqtt(app) logger = logging.getLogger('flask.flask_mqtt') logger.disabled = True class MqttTOAA(object): type_topic = ["/Control", "/Data"] m_request_state = {"comm": "3"} m_start = {"Gate": "Start"} m_stop = {"Gate": "Stop"} qos_request = 1 qos_sub = 2 struct_state_devices = None POOL_TIME = 2 end_publish = None devices = None schedulers_list = list() sch_task = None sid_mqtt = None code_list = list() def __init__(self, devices, lang): mqtt._connect() self.devices = devices self.sch_task = SchedulerTask() if lang not in app.config['LANGUAGES'].keys(): lang = 'ru' self.dict_gate = {"dict_state_button": {'con_Clos': tr_msg.MessageGate.t_message[lang]["f_open"], 'con_Open': tr_msg.MessageGate.t_message[lang]["f_close"], "fl_OpenClos": (tr_msg.MessageGate.t_message[lang]["f_continue"], tr_msg.MessageGate.t_message[lang]["f_stop"], tr_msg.MessageGate.t_message[lang]["f_abort"])}, "dict_state_text": {tr_msg.MessageGate.t_message[lang]["f_open"]:\ tr_msg.MessageGate.t_message[lang]["ps_close"], tr_msg.MessageGate.t_message[lang]["f_close"]:\ tr_msg.MessageGate.t_message[lang]["ps_open"], tr_msg.MessageGate.t_message[lang]["f_continue"]:\ tr_msg.MessageGate.t_message[lang]["ps_stop"], tr_msg.MessageGate.t_message[lang]["f_abort"]:\ tr_msg.MessageGate.t_message[lang]["pr_close"], tr_msg.MessageGate.t_message[lang]["f_stop"]:\ (tr_msg.MessageGate.t_message[lang]["pr_open"], tr_msg.MessageGate.t_message[lang]["pr_close"], tr_msg.MessageGate.t_message[lang]["pr_move"])}, "dict_type_element": {"button": u'', "text": u'', "device_code": u'', }, "state_gate": {}, "position": {"state": u'', "stop": False}, "reverse": False, } self.close_msg = tr_msg.MessageGate.t_message[lang]["pr_close"] self.open_msg = 
tr_msg.MessageGate.t_message[lang]["pr_open"] self.create_devices_dict() self.handle_mqtt_connect() self.mqtt_onmessage = mqtt.on_message()(self._handle_mqtt_message) self.mqtt_onlog = mqtt.on_log()(self._handle_logging) self.socketio_error = socketio.on_error()(self._handle_error) self.handle_change_state = socketio.on('change_state')(self._handle_change_state) self.handle_on_connect = socketio.on('connect')(self._handle_on_connect) self.handle_unsubscribe_all = socketio.on('unsubscribe_all')(self._handle_unsubscribe_all) def _handle_on_connect(self): self.sid_mqtt = request.sid def handle_mqtt_connect(self): task = None for dev in self.devices: if dev.device_code not in self.code_list: mqtt.subscribe("BK" + dev.device_code + self.type_topic[1], self.qos_sub) self.code_list.append(dev.device_code) task = self.sch_task.add_scheduler_publish(dev.device_code, mqtt, "BK" + dev.device_code + self.type_topic[0], self.m_request_state, self.qos_request, self.POOL_TIME) if task is not None: self.schedulers_list.append(task) if len(self.schedulers_list) > 0: self.sch_task.start_schedulers() self.code_list.clear() @staticmethod def _handle_error(): print(request.event["message"]) # "my error event" print(request.event["args"]) # (data,) @staticmethod def _handle_unsubscribe_all(): mqtt.unsubscribe_all() def _handle_change_state(self, code): print(code) # print(self.struct_state_devices[code]) message = None if code is not None: try: type_g = self.struct_state_devices[code]["state_gate"] if type_g["fl_OpenClos"] == 1: message = self.m_stop else: if self.struct_state_devices[code]["reverse"] is True: if self.struct_state_devices[code]["position"]["state"] == self.close_msg: message = self.m_stop self.struct_state_devices[code]["position"]["state"] = self.open_msg else: message = self.m_start else: message = self.m_start print("Msg:" + str(message)) except Exception as ex: print(ex) if message is not None: mqtt.publish("BK" + code + self.type_topic[0], json.dumps(message), 
self.qos_request) else: print("Error change state " + code) def _handle_mqtt_message(self, client, userdata, message): # print("Get message") # print(self.struct_state_devices) data = dict( topic=message.topic, payload=message.payload.decode(), qos=message.qos, ) try: data = json.loads(data['payload']) self.gate_msg(data) except Exception as ex: print("Exception: " + str(ex)) @staticmethod def _handle_logging(self, client, userdata, level, buf): print(level, buf) pass def create_devices_dict(self): if self.struct_state_devices is None: self.struct_state_devices = dict() for dev in self.devices: self.struct_state_devices[dev.device_code] = self.dict_gate.copy() if dev.typedev.reverse: self.struct_state_devices[dev.device_code]['reverse'] = True def gate_msg(self, data): k = "" code = data["esp_id"][2:] dict_dev = copy.deepcopy(self.struct_state_devices[code]) dict_dev["state_gate"] = data.copy() try: if dict_dev["state_gate"]["con_Clos"] == 0: # ворота закрыты # print("1") k = "con_Clos" dict_dev["position"]["state"] = k dict_dev["position"]["stop"] = False elif dict_dev["state_gate"]["con_Open"] == 0: # ворота открыты # print("2") k = "con_Open" dict_dev["position"]["state"] = k dict_dev["position"]["stop"] = False elif dict_dev["state_gate"]["fl_OpenClos"] == 0: # print("3") k = "fl_OpenClos" # обратный ход ворот при закрытии if dict_dev["position"]["state"] == self.close_msg and dict_dev["reverse"] is True: # print("4") k1 = 1 k2 = 0 dict_dev["dict_type_element"]["text"] = \ dict_dev["dict_state_text"][dict_dev["dict_state_button"][k][k1]][k2] dict_dev["position"]["stop"] = False else: # print("5") k1 = 0 dict_dev["dict_type_element"]["text"] = \ dict_dev["dict_state_text"][dict_dev["dict_state_button"][k][k1]] dict_dev["position"]["stop"] = True elif dict_dev["state_gate"]["fl_OpenClos"] == 1: # print("6") k = "fl_OpenClos" if len(dict_dev["position"]["state"]) == 0: # print("7") k1 = 1 k2 = 2 dict_dev["dict_type_element"]["text"] = \ 
dict_dev["dict_state_text"][dict_dev["dict_state_button"][k][k1]][k2] elif dict_dev["position"]["state"] == "con_Clos" or \ dict_dev["position"]["state"] == self.open_msg: if dict_dev["position"]["stop"]: # print("8") k1 = 1 k2 = 1 dict_dev["position"]["stop"] = False dict_dev["dict_type_element"]["text"] = \ dict_dev["dict_state_text"][dict_dev["dict_state_button"][k][k1]][k2] else: # print("9") k1 = 1 k2 = 0 dict_dev["dict_type_element"]["text"] = \ dict_dev["dict_state_text"][dict_dev["dict_state_button"][k][k1]][k2] elif dict_dev["position"]["state"] == "con_Open" or \ dict_dev["position"]["state"] == self.close_msg: if dict_dev["reverse"]: # print("10") k1 = 2 dict_dev["dict_type_element"]["text"] = \ dict_dev["dict_state_text"][dict_dev["dict_state_button"][k][k1]] else: if dict_dev["position"]["stop"]: # print("11") k1 = 1 k2 = 0 dict_dev["position"]["stop"] = False dict_dev["dict_type_element"]["text"] = \ dict_dev["dict_state_text"][dict_dev["dict_state_button"][k][k1]][k2] else: # print("12") k1 = 1 k2 = 1 dict_dev["dict_type_element"]["text"] = \ dict_dev["dict_state_text"][dict_dev["dict_state_button"][k][k1]][k2] if dict_dev["position"]["state"] != dict_dev["dict_type_element"]["text"]: # print("13") dict_dev["position"]["state"] = dict_dev["dict_type_element"]["text"] if k == "fl_OpenClos": dict_dev["dict_type_element"]["button"] = dict_dev["dict_state_button"][k][k1] else: dict_dev["dict_type_element"]["button"] = dict_dev["dict_state_button"][k] dict_dev["dict_type_element"]["text"] = \ dict_dev["dict_state_text"][dict_dev["dict_state_button"][k]] except Exception as ex: print("Exception (gate_msg): " + str(ex)) dict_dev["dict_type_element"]["device_code"] = data["esp_id"][2:] dict_dev["dict_type_element"]["temp"] = data["temp_1"] dict_dev["dict_type_element"]["button"] = copy.deepcopy(str(dict_dev["dict_type_element"]["button"])) dict_dev["dict_type_element"]["text"] = copy.deepcopy(str(dict_dev["dict_type_element"]["text"])) 
self.struct_state_devices[code] = copy.deepcopy(dict_dev) socketio.emit('mqtt_message', data=dict_dev["dict_type_element"], room=self.sid_mqtt) # print(dict_dev["state_gate"]["esp_id"] + str(dict_dev["dict_type_element"]))
-
Keycloak docker Administration Console shows blank page with reverse proxy
I am using haproxy with Keycloak. The welcome page shows fine, but each time I enter the Administration Console it shows me a blank page with no info and status code 200. I am using a Let's Encrypt SSL certificate; here are my haproxy config and docker-compose.
Screenshot of the page:- link to screenshot
haproxy config:

global
    log stdout local0 debug
    daemon
    maxconn 4096

defaults
    mode http
    option httplog
    option dontlognull
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    log global
    log-format {"type":"haproxy","timestamp":%Ts,"http_status":%ST,"http_request":"%r","remote_addr":"%ci","bytes_read":%B,"upstream_addr":"%si","backend_name":"%b","retries":%rc,"bytes_uploaded":%U,"upstream_response_time":"%Tr","upstream_connect_time":"%Tc","session_duration":"%Tt","termination_state":"%ts"}

frontend public
    bind *:80
    bind *:443 ssl crt /usr/local/etc/haproxy/haproxy.pem alpn h2,http/1.1
    http-request redirect scheme https unless { ssl_fc }
    default_backend web_servers

backend web_servers
    option forwardfor
    server auth1 auth:8080
docker-compose.yaml:

version: "3"
networks:
  internal-network:
services:
  reverse-proxy:
    build: ./reverse-proxy/.
    image: reverseproxy:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      - internal-network
    depends_on:
      - auth
  auth:
    image: quay.io/keycloak/keycloak:latest
    networks:
      internal-network:
    environment:
      PROXY_ADDRESS_FORWARDING: "true"
      KEYCLOAK_USER: ***
      KEYCLOAK_PASSWORD: ***
      # Uncomment the line below if you want to specify JDBC parameters. The parameter below is just an example, and it shouldn't be used in production without knowledge. It is highly recommended that you read the PostgreSQL JDBC driver documentation in order to use it.
      #JDBC_PARAMS: "ssl=false"
The URL of the page I am trying to access is https:///auth/admin/master/console/
Note: when I remove SSL from haproxy, Keycloak opens a page with the error "HTTPS required".
-
While doing curl, getting REFUSED_STREAM, retrying a fresh connect in a loop
While running the below curl command, I am getting the following error in a loop.
curl -X POST -H "HOST:foo.bar.com" http://10.36.147.213:80/nlmf-loc/v1/determine-location -v -k --http2-prior-knowledge --header "Content-Type: application/json" --data "@./LMFsample.json"
Below is the error.
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
* REFUSED_STREAM, retrying a fresh connect
* Connection died, retrying a fresh connect
* Closing connection 150
* Issue another request to this URL: 'http://10.36.147.213:80/nlmf-loc/v1/determine-location'
* Hostname 10.36.147.213 was found in DNS cache
*   Trying 10.36.147.213...
* TCP_NODELAY set
* Connected to 10.36.147.213 (10.36.147.213) port 80 (#151)
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0xd43d50)
> POST /nlmf-loc/v1/determine-location HTTP/2
> Host:foo.bar.com
> User-Agent: curl/7.63.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 774
>
In my HAProxy container logs I am getting:

172.28.19.39:59447 [01/Mar/2021:13:23:04.443] https default-lmf-8443/SRV_2 0/0/0/-1/0 -1 0 - - SD-- 1/1/0/0/0 0/0 "POST foo.bar.com/nlmf-loc/v1/determine-location HTTP/2.0"
172.28.19.39:2071 [01/Mar/2021:13:23:04.463] http default-lmf-8443/SRV_1 0/0/1/-1/2 -1 0 - - CD-- 1/1/0/0/0 0/0 "POST foo.bar.com/nlmf-loc/v1/determine-location HTTP/2.0"
-
Haproxy seemingly substitutes brotli with gzip in "Accept-Encoding" header
I'm struggling to figure out why haproxy seemingly replaces br with gzip in the "Accept-Encoding" header as the request passes through haproxy.
My application currently structured like this:
HAPROXY(tls termination) -> varnish -> apache
So I test like this:
curl -I --http2 -H 'Accept-Encoding: br' -I https://mysite.dev:31753?tru
So I'm sending a single request to haproxy that strictly asks for brotli only (using curl)...
That's what I would expect to see coming into varnish, but what actually arrives at varnish is these 2 requests:
- a HEAD request with br
- a GET request with gzip instead...
I'm so confused: why are there 2 requests now? I did not configure compression in haproxy, so how can it be rewriting br to gzip?
Requests coming to varnish (I get this using tcpflow program):
172.030.000.035.41382-172.030.000.034.00080:
HEAD /?tru HTTP/1.1
user-agent: curl/7.68.0
accept: */*
accept-encoding: br
host: mysite.dev:31753
x-client-ip: 192.168.10.103
x-forwarded-port: 31753
x-forwarded-proto: https
x-forwarded-for: 192.168.10.103
connection: close

172.030.000.034.41882-172.030.000.033.00080:
GET /?tru HTTP/1.1
user-agent: curl/7.68.0
accept: */*
x-client-ip: 192.168.10.103
x-forwarded-port: 31753
x-forwarded-proto: https
X-Forwarded-For: 192.168.10.103, 172.30.0.35
host: mysite:31753
Accept-Encoding: gzip
X-Varnish: 328479
My haproxy config looks like so:

global
    maxconn 1024
    log stdout format raw local0
    ssl-default-bind-options ssl-min-ver TLSv1.2

defaults
    log global
    option httplog
    option http-server-close
    mode http
    option dontlognull
    timeout connect 5s
    timeout client 20s
    timeout server 45s

frontend fe-wp-combined
    mode tcp
    bind *:31753
    tcp-request inspect-delay 2s
    tcp-request content accept if HTTP
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend be-wp-recirc-http if HTTP
    default_backend be-wp-recirc-https

backend be-wp-recirc-http
    mode tcp
    server loopback-for-http abns@wp-haproxy-http send-proxy-v2

backend be-wp-recirc-https
    mode tcp
    server loopback-for-https abns@wp-haproxy-https send-proxy-v2

frontend fe-wp-https
    mode http
    bind abns@wp-haproxy-https accept-proxy ssl crt /certs/fullkeychain.pem alpn h2,http/1.1
    # whatever you need todo for HTTPS traffic
    default_backend be-wp-real

frontend fe-wp-http
    mode http
    bind abns@wp-haproxy-http accept-proxy
    # whatever you need todo for HTTP traffic
    redirect scheme https code 301 if !{ ssl_fc }

backend be-wp-real
    mode http
    balance roundrobin
    option forwardfor
    # Send these request to check health
    option httpchk
    http-check send meth HEAD uri / ver HTTP/1.1 hdr Host haproxy.local
    http-response del-header Server
    http-response del-header via
    server wp-backend1 proxy-varnish:80 check
    http-request set-header x-client-ip %[src]
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
Please help if anyone knows what's happening here; I'm extremely stumped.
-
How to have highly available Moodle in Kubernetes?
I want to set up highly available Moodle in K8s (on-prem). I'm using Bitnami Moodle with Helm charts.
After a successful Moodle installation, it works. But when a K8s node goes down, the Moodle web page redirects back to the Moodle installation page, like a loop.
Persistent storage is rook-ceph. The Moodle PVC is ReadWriteMany, while the MySQL PVC is ReadWriteOnce.
The following command was used to deploy Moodle.
helm install moodle --set global.storageClass=rook-cephfs,replicaCount=3,persistence.accessMode=ReadWriteMany,allowEmptyPassword=false,moodlePassword=Moodle123,mariadb.architecture=replication bitnami/moodle
Any help on this is appreciated.
Thanks.
-
High-Availability not working in Hadoop cluster
I am trying to move my non-HA namenode to HA. After setting up all the configuration for the JournalNodes by following the Apache Hadoop documentation, I was able to bring the namenodes up. However, the namenodes crash immediately, throwing the following error.
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.io.IOException: There appears to be a gap in the edit log. We expected txid 43891997, but got txid 45321534.
I tried to recover the edit logs, initialize the shared edits, etc., but nothing works. I am not sure how to fix this problem without formatting the namenode, since I do not want to lose any data.
Any help is greatly appreciated. Thanks in advance.
-
Apache Kafka Consume from Slave/ISR node
I understand the concept of master/slave and data replication in Kafka, but I don't understand why consumers and producers are always routed to a master node when writing to or reading from a partition, instead of being able to read from any ISR (in-sync replica)/slave.
The way I think about it, if all consumers are directed to one single master node, then more hardware is required to handle read/write operations from large consumer groups/producers.
Is it possible to read and write on slave nodes, or will consumers/producers always reach out to the master node of that partition?