Redis HA with Sentinel on Docker
I've been struggling with Redis HA on Docker for a week now, and I'm not quite convinced that my approach will even work. The documentation is understandable, but there are many examples out there that do not correspond to it.
What I want to do is set up Redis with 1 master, 2 replicas and 3 Sentinels, all hosted on 192.168.1.10. I want to access the cluster from an app running on 192.168.1.11. The Redis cluster works properly without Sentinel; replication works fine.
When I start the Sentinels, I get the following log entries on all 3 Redis containers (redis-0, redis-1 and redis-2):
1:S 22 Dec 2020 18:43:38.349 * Connecting to MASTER 172.20.0.2:6379
1:S 22 Dec 2020 18:43:38.350 * MASTER <-> REPLICA sync started
1:S 22 Dec 2020 18:43:38.350 * Non blocking connect for SYNC fired the event.
1:S 22 Dec 2020 18:43:38.350 * Master replied to PING, replication can continue...
1:S 22 Dec 2020 18:43:38.350 * Trying a partial resynchronization (request eac3aa540e767589e9673ae0ed844d985ed2abb2:1856).
1:S 22 Dec 2020 18:43:38.350 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
I tried to follow this tutorial, but it didn't work; same behavior as described above. These are my Docker commands:
# Redis (replication does not work with my custom redis.conf, so I keep it simple this way)
docker run --name redis-0 -d --network redis -p 6379:6379 redis redis-server
docker run --name redis-1 -d --network redis -p 6380:6379 redis redis-server --slaveof redis-0 6379
docker run --name redis-2 -d --network redis -p 6381:6379 redis redis-server --slaveof redis-0 6379
# Sentinel
docker run -d --name sentinel-0 --network redis -v ${PWD}/sentinel-0:/etc/redis/ redis redis-sentinel /etc/redis/sentinel.conf
docker run -d --name sentinel-1 --network redis -v ${PWD}/sentinel-1:/etc/redis/ redis redis-sentinel /etc/redis/sentinel.conf
docker run -d --name sentinel-2 --network redis -v ${PWD}/sentinel-2:/etc/redis/ redis redis-sentinel /etc/redis/sentinel.conf
This is the sentinel.conf:
port 5000
# sentinel monitor <master-group-name> <ip> <port> <quorum>
sentinel monitor mymaster 172.20.0.2 6379 2
sentinel down-after-milliseconds mymaster 1000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
The Sentinel containers have write access to sentinel.conf.
These are my iptables rules:
# Redis
/usr/sbin/iptables -t nat -A PREROUTING -p tcp --dport 6379 -j DNAT --to-destination 172.20.0.2:6379
/usr/sbin/iptables -t nat -A PREROUTING -p tcp --dport 6380 -j DNAT --to-destination 172.20.0.3:6379
/usr/sbin/iptables -t nat -A PREROUTING -p tcp --dport 6381 -j DNAT --to-destination 172.20.0.4:6379
# Sentinel
/usr/sbin/iptables -t nat -A PREROUTING -p tcp --dport 26379 -j DNAT --to-destination 172.20.0.5:6379
/usr/sbin/iptables -t nat -A PREROUTING -p tcp --dport 26380 -j DNAT --to-destination 172.20.0.6:6379
/usr/sbin/iptables -t nat -A PREROUTING -p tcp --dport 26381 -j DNAT --to-destination 172.20.0.7:6379
I'm well aware of this part of the documentation:
Since Sentinels auto detect replicas using masters INFO output information, the detected replicas will not be reachable, and Sentinel will never be able to failover the master, since there are no good replicas from the point of view of the system, so there is currently no way to monitor with Sentinel a set of master and replica instances deployed with Docker, unless you instruct Docker to map the port 1:1.
For the first problem, in case you want to run a set of Sentinel instances using Docker with forwarded ports (or any other NAT setup where ports are remapped), you can use the following two Sentinel configuration directives in order to force Sentinel to announce a specific set of IP and port:
sentinel announce-ip
sentinel announce-port
Note that Docker has the ability to run in host networking mode (check the --net=host option for more information). This should create no issues since ports are not remapped in this setup.
I just don't know where to place announce-ip and announce-port, or what their values have to be. Also note that --net=host will not work for me, because I have 3 containers that would each claim the same host port.
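For what it's worth, the announce directives belong in each Sentinel's own sentinel.conf, and their values should be the address at which clients and the other Sentinels can actually reach that instance. A sketch for sentinel-0, assuming the host IP 192.168.1.10 and the published host port 26379 (both placeholders for your setup):

```conf
port 5000
# Monitor the master at an address reachable from outside the Docker network
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 1000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
# Announce this Sentinel under the host's address instead of its container IP
sentinel announce-ip 192.168.1.10
sentinel announce-port 26379
```

sentinel-1 and sentinel-2 would announce ports 26380 and 26381 respectively. The replicas likely need the analogous replica-announce-ip / replica-announce-port directives in their redis.conf as well, so that the addresses Sentinel discovers via the master's INFO output are reachable too.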
How can I run Sentinel in a Docker environment so that it actually gives me Redis HA?
Thanks for the help!
EDIT:
I did a failover test with the following result (same result on sentinel-0, -1 and -2):
# docker exec -it sentinel-0 redis-cli -p 5000
127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
1) "172.20.0.2"
2) "6379"
# docker stop redis-0
redis-0
# docker exec -it sentinel-0 redis-cli -p 5000
127.0.0.1:5000> SENTINEL get-master-addr-by-name mymaster
1) "172.20.0.2"
2) "6379"
1 answer
-
answered 2020-12-23 03:40
Joe
Following your Docker commands and sentinel.conf, it works for me:
1:S 23 Dec 2020 03:14:59.370 * Connecting to MASTER redis-0:6379
1:S 23 Dec 2020 03:14:59.371 * MASTER <-> REPLICA sync started
1:S 23 Dec 2020 03:14:59.371 * Non blocking connect for SYNC fired the event.
1:S 23 Dec 2020 03:14:59.371 * Master replied to PING, replication can continue...
1:S 23 Dec 2020 03:14:59.372 * Trying a partial resynchronization (request 5c52aa10610b365f29fec2968e095c5b49eb6136:43).
1:S 23 Dec 2020 03:14:59.373 * Full resync from master: 1f843162cf808a500a5d57392baf585f6e1679a3:0
Maybe you can check the redis-0 logs: does it accept the replica's sync request?
See also questions close to this topic
-
Docker compose, apache2: Could not reliably determine the server's fully qualified domain name
I know a very similar question has already been dealt with here, BUT in my case I am using a pre-built Docker image (https://hub.docker.com/r/prestashop/prestashop/) and I don't think I can modify the Dockerfile.
I think I need to execute this command:
echo "ServerName localhost" >> /etc/apache2/apache2.conf
but I don't know how; ideally I would like to gather all the configuration in the docker-compose.yml file. Here it is:
version: "3.7"
services:
  app:
    image: prestashop/prestashop:1.7
    ports:
      - 8080:80
    working_dir: /var/www/html
    volumes:
      - ./:/var/www/html
    environment:
      DB_SERVER: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: mypass123
      MYSQL_DB: prestashop
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: mypass123
      MYSQL_DATABASE: prestashop
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - 8081:80
    environment:
      MYSQL_ROOT_PASSWORD: mypass123
      MYSQL_DATABASE: prestashop
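One approach that avoids touching the Dockerfile is to bind-mount a small Apache config fragment into the container. This is an untested sketch that assumes the prestashop image uses Debian's standard apache2 layout, where every *.conf file in /etc/apache2/conf-enabled/ is included at startup:

```yaml
# docker-compose.yml fragment for the app service (sketch, assumptions above).
# ./servername.conf on the host contains the single line:
#   ServerName localhost
services:
  app:
    image: prestashop/prestashop:1.7
    volumes:
      - ./:/var/www/html
      - ./servername.conf:/etc/apache2/conf-enabled/servername.conf:ro
```

This keeps the whole configuration in docker-compose.yml plus one small file next to it, rather than requiring a custom image.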
Any idea?
Thank you
Aymeric
-
How do you deploy aws-xray-daemon in Docker swarm?
I'm trying to deploy amazon/aws-xray-daemon to my docker swarm.
I didn't do much in terms of configuration, because there's not much to configure according to the README.md:
services:
  xrayd:
    image: amazon/aws-xray-daemon
    deploy:
      restart_policy:
        delay: 2m
I get the following in the logs
2021-02-27T04:50:38Z [Info] Initializing AWS X-Ray daemon 3.2.0
2021-02-27T04:50:38Z [Info] Using buffer memory limit of 78 MB
2021-02-27T04:50:38Z [Info] 1248 segment buffers allocated
2021-02-27T04:50:39Z [Error] Unable to retrieve the region from the EC2 instance EC2MetadataRequestError: failed to get EC2 instance identity document caused by: RequestError: send request failed caused by: Get http://169.254.169.254/latest/dynamic/instance-identity/document: dial tcp 169.254.169.254:80: connect: network is unreachable
2021-02-27T04:50:39Z [Error] Cannot fetch region variable from config file, environment variables and ec2 metadata.
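The error says the daemon cannot determine an AWS region: the EC2 metadata endpoint is unreachable from the swarm network, and nothing else supplies one. A hedged sketch of one workaround is to pass the region explicitly via the AWS_REGION environment variable, which the daemon checks before falling back to metadata:

```yaml
# Sketch: the region value below is a placeholder; use your instance's region.
services:
  xrayd:
    image: amazon/aws-xray-daemon
    environment:
      AWS_REGION: us-east-1
    deploy:
      restart_policy:
        delay: 2m
```

Credentials would still need to reach the container (instance role access depends on the metadata endpoint being reachable, which the log shows it is not).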
I gave the EC2 instance full
xray:*
in IAM as well.
-
Can't run Docker inside of Windows, which is inside of VirtualBox
I have a client who sent me instructions to connect to their VPN, but their VPN solution is only supported on Windows. Therefore, I'm trying to use Windows inside of VirtualBox to install Docker, so that I can pull down a container (Kali) to perform my assessment.
However, I've spent a little over 7 hours today troubleshooting, trying to figure out what's wrong with either Windows or my configuration settings. I've gone from installing Windows Server 2019 on Amazon EC2 and Lightsail, to Windows 10 in VMware Fusion, and now finally VirtualBox. No luck. Each of the solutions requires a lot of hacking and troubleshooting.
Host specs:
macOS Big Sur
2.9GHz Quad Core Intel Core i7
Memory: 16GB
Storage: 500GB flash storage
I've allocated 4GB of memory to the VM, along with 2 processor cores.
As you can see in the screenshot below, Windows 10 Pro is telling me that I need to enable something, which the Optional Features window on the left shows is enabled. As you can also see on the right hand side, I have hardware virtualization enabled in the VM.
It's literally the same thing documented here: https://www.configserverfirewall.com/windows-10/please-enable-the-virtual-machine-platform-windows-feature-and-ensure-virtualization-is-enabled-in-the-bios/
Here's all the things that I've tried thus far:
Attempt #1 (re: https://github.com/microsoft/WSL/issues/5363#issuecomment-640337948)
dism /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V
bcdedit /set hypervisorlaunchtype auto
Here's the output:
After rebooting and trying to fire up Docker again, the same error occurs.
Attempt 2: Checking BIOS
Several have suggested doing this, but I have found no way to boot into the Windows BIOS using VirtualBox. Nothing here.
Attempt 3: Re-enabling WSL (re: https://github.com/microsoft/WSL/issues/5363#issuecomment-675786863)
Per the instructions in this comment, I have disabled WSL from the Optional Features section in Windows 10, rebooted, got an error about WSL 2 not being enabled, re-enabled WSL, rebooted again, and just ran into the exact same error.
Attempt 4: Disabled Hyper-V
Disabled Hyper-V and got the same error.
Attempt 5: Modifying the .VMX file (re: https://communities.vmware.com/t5/VMware-Fusion-Discussions/VMware-Fusion-12-1-0-Big-Sur-Host-Windows-10-Guest-Running-Slow/td-p/2814913)
However, no luck here either. Instead of the previous error, it fails with "The operation timed out because a response was not received from the virtual machine or container". I guess it got even slower.
Any new suggestions or pointers on how to resolve this?
-
Get value using RedisTemplate in Spring Boot
I have a custom cache annotation: when the browser requests data, it should be fetched from the database the first time and from the cache the second time. But it looks like it is taken from the cache even on the first request.
@Aspect
@Configuration
public class CacheAspect {
    @Autowired
    RedisTemplate<String, Object> redisTemplate;

    @Around("@annotation(com.github.anno.Cache)")
    public Object cache(ProceedingJoinPoint joinPoint) throws Throwable {
        System.out.println(redisTemplate);
        MethodSignature signature = (MethodSignature) joinPoint.getSignature();
        System.out.println(redisTemplate);
        String methodName = signature.getName();
        Object cacheValue = redisTemplate.opsForValue().get(methodName);
        if (cacheValue != null) {
            System.out.println("Get value from Cache!");
            return cacheValue;
        } else {
            System.out.println("Get value from database!");
            Object realValue = joinPoint.proceed();
            redisTemplate.opsForValue().set(methodName, realValue);
            return realValue;
        }
    }
}
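For reference, the cache-aside pattern this aspect implements can be sketched in plain Python (an in-memory dict stands in for Redis; all names here are illustrative only):

```python
# Cache-aside sketch: return the cached value when present; otherwise compute
# the value, store it under the method name, and return it.
cache = {}

def cached_call(method_name, compute):
    cached = cache.get(method_name)
    if cached is not None:
        print("Get value from Cache!")
        return cached
    print("Get value from database!")
    value = compute()
    cache[method_name] = value
    return value

first = cached_call("getItems", lambda: [1, 2, 3])   # first call: miss
second = cached_call("getItems", lambda: [1, 2, 3])  # second call: hit
```

If the first request already reports a cache hit, a common cause is that the key survived an earlier run: unlike the dict above, Redis keeps its keys across application restarts, so a previous run of the app can leave the method-name key populated.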
Below is the console output after running:
:: Spring Boot :: (v2.2.2.RELEASE)
2021-02-27 07:09:44.516 INFO 11816 --- [ main] com.github.hcsp.Application : Starting Application on LAPTOP-LDST0LE4 with PID 11816 (D:\JAVApractice\spring-aop-redis-mysql\target\classes started by 96426 in D:\JAVApractice\spring-aop-redis-mysql)
2021-02-27 07:09:44.520 INFO 11816 --- [ main] com.github.hcsp.Application : No active profile set, falling back to default profiles: default
2021-02-27 07:09:45.505 INFO 11816 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode!
2021-02-27 07:09:45.508 INFO 11816 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data Redis repositories in DEFAULT mode.
2021-02-27 07:09:45.543 INFO 11816 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 8ms. Found 0 Redis repository interfaces.
2021-02-27 07:09:45.711 WARN 11816 --- [ main] o.m.s.mapper.ClassPathMapperScanner : No MyBatis mapper was found in '[com.github.hcsp]' package. Please check your configuration.
2021-02-27 07:09:46.148 INFO 11816 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2021-02-27 07:09:50.430 INFO 11816 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http) 2021-02-27 07:09:50.461 INFO 11816 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2021-02-27 07:09:50.461 INFO 11816 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.29] 2021-02-27 07:09:50.883 INFO 11816 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2021-02-27 07:09:50.883 INFO 11816 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 6275 ms 2021-02-27 07:09:52.675 INFO 11816 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor' 2021-02-27 07:09:52.768 INFO 11816 --- [ main] o.s.b.a.w.s.WelcomePageHandlerMapping : Adding welcome page template: index 2021-02-27 07:09:53.131 INFO 11816 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path '' 2021-02-27 07:09:53.134 INFO 11816 --- [ main] com.github.hcsp.Application : Started Application in 9.203 seconds (JVM running for 11.171) 2021-02-27 07:12:36.452 INFO 11816 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet' 2021-02-27 07:12:36.452 INFO 11816 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet' 2021-02-27 07:12:36.488 INFO 11816 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed 
initialization in 36 ms 2021-02-27 07:12:36.784 ERROR 11816 --- [nio-8080-exec-1] freemarker.runtime : Error executing FreeMarker template freemarker.core.InvalidReferenceException: The following has evaluated to null or missing: ==> items [in template "index.ftlh" at line 35, column 14] ---- Tip: If the failing expression is known to legally refer to something that's sometimes null or missing, either specify a default value like myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthesis: (myOptionalVar.foo)!myDefault, (myOptionalVar.foo)?? ---- ---- FTL stack trace ("~" means nesting-related): - Failed at: #list items as item [in template "index.ftlh" at line 35, column 7] ---- at freemarker.core.InvalidReferenceException.getInstance(InvalidReferenceException.java:134) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.Expression.assertNonNull(Expression.java:251) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.IteratorBlock.acceptWithResult(IteratorBlock.java:104) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.IteratorBlock.accept(IteratorBlock.java:94) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.Environment.visit(Environment.java:331) [freemarker-2.3.29.jar:2.3.29] at freemarker.core.Environment.visit(Environment.java:337) [freemarker-2.3.29.jar:2.3.29] at freemarker.core.Environment.process(Environment.java:310) [freemarker-2.3.29.jar:2.3.29] at freemarker.template.Template.process(Template.java:383) [freemarker-2.3.29.jar:2.3.29] at org.springframework.web.servlet.view.freemarker.FreeMarkerView.processTemplate(FreeMarkerView.java:391) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.view.freemarker.FreeMarkerView.doRender(FreeMarkerView.java:304) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at 
org.springframework.web.servlet.view.freemarker.FreeMarkerView.renderMergedTemplateModel(FreeMarkerView.java:255) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.view.AbstractTemplateView.renderMergedOutputModel(AbstractTemplateView.java:179) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:316) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1373) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1118) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1057) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) [spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:741) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) 
[tomcat-embed-websocket-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) [spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) [spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) [spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) [spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) [spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) [spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.29.jar:9.0.29] at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:526) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:367) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:860) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1591) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.29.jar:9.0.29] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_181] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.29.jar:9.0.29] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181] 2021-02-27 07:12:36.792 ERROR 11816 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context 
with path [] threw exception [Request processing failed; nested exception is freemarker.core.InvalidReferenceException: The following has evaluated to null or missing: ==> items [in template "index.ftlh" at line 35, column 14] ---- Tip: If the failing expression is known to legally refer to something that's sometimes null or missing, either specify a default value like myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthesis: (myOptionalVar.foo)!myDefault, (myOptionalVar.foo)?? ---- ---- FTL stack trace ("~" means nesting-related): - Failed at: #list items as item [in template "index.ftlh" at line 35, column 7] ----] with root cause freemarker.core.InvalidReferenceException: The following has evaluated to null or missing: ==> items [in template "index.ftlh" at line 35, column 14] ---- Tip: If the failing expression is known to legally refer to something that's sometimes null or missing, either specify a default value like myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthesis: (myOptionalVar.foo)!myDefault, (myOptionalVar.foo)?? 
---- ---- FTL stack trace ("~" means nesting-related): - Failed at: #list items as item [in template "index.ftlh" at line 35, column 7] ---- at freemarker.core.InvalidReferenceException.getInstance(InvalidReferenceException.java:134) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.Expression.assertNonNull(Expression.java:251) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.IteratorBlock.acceptWithResult(IteratorBlock.java:104) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.IteratorBlock.accept(IteratorBlock.java:94) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.Environment.visit(Environment.java:331) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.Environment.visit(Environment.java:337) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.core.Environment.process(Environment.java:310) ~[freemarker-2.3.29.jar:2.3.29] at freemarker.template.Template.process(Template.java:383) ~[freemarker-2.3.29.jar:2.3.29] at org.springframework.web.servlet.view.freemarker.FreeMarkerView.processTemplate(FreeMarkerView.java:391) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.view.freemarker.FreeMarkerView.doRender(FreeMarkerView.java:304) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.view.freemarker.FreeMarkerView.renderMergedTemplateModel(FreeMarkerView.java:255) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.view.AbstractTemplateView.renderMergedOutputModel(AbstractTemplateView.java:179) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:316) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1373) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1118) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at 
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1057) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:943) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:634) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.2.2.RELEASE.jar:5.2.2.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:741) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) ~[spring-web-5.2.2.RELEASE.jar:5.2.2.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) ~[tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:526) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat-embed-core-9.0.29.jar:9.0.29] at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:367) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:860) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1591) [tomcat-embed-core-9.0.29.jar:9.0.29] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.29.jar:9.0.29] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_181] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.29.jar:9.0.29] at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181] 2021-02-27 07:12:36.825 ERROR 11816 --- [nio-8080-exec-1] s.e.ErrorMvcAutoConfiguration$StaticView : Cannot render error page for request [/] and exception [The following has evaluated to null or missing: ==> items [in template "index.ftlh" at line 35, column 14] ---- Tip: If the failing expression is known to legally refer to something that's sometimes null or missing, either specify a default value like myOptionalVar!myDefault, or use <#if myOptionalVar??>when-present<#else>when-missing</#if>. (These only cover the last step of the expression; to cover the whole expression, use parenthesis: (myOptionalVar.foo)!myDefault, (myOptionalVar.foo)?? ---- ---- FTL stack trace ("~" means nesting-related): - Failed at: #list items as item [in template "index.ftlh" at line 35, column 7] ----] as the response has already been committed. As a result, the response may have the wrong status code. 
org.springframework.data.redis.core.RedisTemplate@6efffd20
org.springframework.data.redis.core.RedisTemplate@6efffd20
2021-02-27 07:12:39.846 INFO 11816 --- [nio-8080-exec-2] io.lettuce.core.EpollProvider : Starting without optional epoll library
2021-02-27 07:12:39.847 INFO 11816 --- [nio-8080-exec-2] io.lettuce.core.KqueueProvider : Starting without optional kqueue library
Get value from Cache!
You can see that at the end it does not print "Get value from database!"
Instead it prints "Get value from Cache!"
Please point out my mistake.
-
Save kafka stream dataframe to Redis in Databricks after data transformation
I am using pyspark to direct the Kafka streams to Redis after performing aggregations on the data. The final output is a streaming dataframe.
This is the code I use to connect to the Kafka streams. (You might find my code amateurish; please bear with me.)
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType

app_schema = StructType([
    StructField("applicationId", StringType(), True),
    StructField("applicationTimeStamp", StringType(), True)
])

# group_id = "mygroup"
topic = "com.mobile-v1"
bootstrap_servers = "server-1:9093,server-2:9093,server-3:9093"

options = {
    "kafka.sasl.jaas.config": 'org.apache.kafka.common.security.plain.PlainLoginModule required username="user@stream.com" password="xxxxx";',
    "kafka.ssl.ca.location": "/tmp/cert.crt",
    "kafka.sasl.mechanism": "PLAIN",
    "kafka.security.protocol": "SASL_SSL",
    "kafka.bootstrap.servers": bootstrap_servers,
    "failOnDataLoss": "false",
    "subscribe": topic,
    "startingOffsets": "latest",
    "enable.auto.commit": "false",
    "auto.offset.reset": "false",
    "enable.partition.eof": "true",
    "key.deserializer": "org.apache.kafka.common.serialization.StringDeserializer",
    "value.deserializer": "org.apache.kafka.common.serialization.StringDeserializer",
}

kafka_mobile_apps_df = spark.readStream.format("kafka").options(**options).load()
kafka_mobile_apps_df = kafka_mobile_apps_df \
    .select(from_json(col("value").cast("string"), app_schema).alias("mob_apps"))
Being subscribed to the broker, this gives me a streaming dataframe. After this I aggregate the data into count_df as shown:
count_df = kafka_mobile_apps_df \
    .withColumn("diff_days", ((col("TimeStamp_")) - (col("TimeStamp"))) / (60.0 * 60.0 * 24)) \
    .withColumn("within_7d_ind", when(col("diff_days") < 7.0, 1).otherwise(0)) \
    .groupBy("_applicationId") \
    .agg(sum(col("within_7d_ind")).alias(feature + "_7day_velocity"))
Now I am trying to write this count_df stream to Redis. After my research I found I can use "spark-redis_2.11" for Spark-Redis connectivity.
I don't know Scala; I only found a spark-redis GitHub example written in Scala. Could someone show the exact way in pyspark to writeStream this count_df to Redis, with authentication?
Please find the spark-redis GitHub repo here.
I have installed the required jar "com.redislabs:spark-redis_2.12:2.5.0" on the cluster.
Thanks.
Just found out they don't support Python yet. Please let me know if there is any other way to write this.
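Not an official spark-redis recipe, but a common workaround when a sink lacks a Python streaming API is Structured Streaming's foreachBatch, writing each micro-batch with plain redis-py instead. A minimal sketch; the host, port, password and the "velocity:" key prefix are made-up placeholders:

```python
# Sketch only: inside foreachBatch each micro-batch is a normal
# (non-streaming) DataFrame, so an ordinary Redis client can be used.
import json

def rows_to_redis_pairs(rows, key_field="_applicationId"):
    """Turn row dicts into (redis_key, json_value) pairs."""
    pairs = []
    for row in rows:
        key = "velocity:" + str(row[key_field])
        value = json.dumps({k: v for k, v in row.items() if k != key_field})
        pairs.append((key, value))
    return pairs

def write_batch_to_redis(batch_df, batch_id):
    import redis  # pip install redis
    r = redis.Redis(host="redis-host", port=6379, password="xxxxx")
    rows = [row.asDict() for row in batch_df.collect()]
    with r.pipeline() as pipe:  # batch the SETs into one round trip
        for key, value in rows_to_redis_pairs(rows):
            pipe.set(key, value)
        pipe.execute()

# Wiring it up (commented out; needs a live Spark session):
# query = count_df.writeStream.foreachBatch(write_batch_to_redis).start()
# query.awaitTermination()
```

Note that collect() pulls each micro-batch to the driver, which is only reasonable for small aggregated outputs like this count_df.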
-
How do I send key values from Redis to a websocket?
A node.js script collects a bunch of numbers from the internet and stores them in Redis like this:
Key1 Value1, Key2 Value2 and so on
The values get updated every 2 seconds; the key names stay the same. Is there a way to send the values to a websocket, so that a website can display them in near real time?
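One common pattern (a sketch, not taken from the question) is to poll Redis on the same 2-second cadence and broadcast a JSON snapshot to every connected websocket client. The question's stack is node.js, where ioredis plus ws would be the equivalent; the sketch below is Python, and redis_client, clients and the message shape are all assumptions:

```python
# Sketch: periodically read all keys in one MGET and push one JSON
# frame per cycle to every connected websocket client.
import asyncio
import json

def snapshot_message(pairs):
    """Serialize the current {key: value} snapshot into one websocket text frame."""
    return json.dumps({"type": "snapshot", "data": pairs})

async def broadcast_loop(redis_client, clients, keys, interval=2.0):
    """Every `interval` seconds, fetch all values at once and broadcast them."""
    while True:
        values = redis_client.mget(keys)      # one MGET instead of N GETs
        msg = snapshot_message(dict(zip(keys, values)))
        for ws in list(clients):              # clients: any set of websocket connections
            await ws.send(msg)                # e.g. the `websockets` library's send()
        await asyncio.sleep(interval)
```

An alternative to polling is Redis keyspace notifications (notify-keyspace-events), which push change events instead, at the cost of extra Redis configuration.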
-
HBase regions do not fail over correctly, stuck in "OPENING" RIT
I am using hbase-2.2.3 to set up a small cluster with 3 nodes (both Hadoop and HBase are in HA mode):
node1: NN, JN, ZKFC, ZK, HMaster, HRegionServer
node2: NN, JN, ZKFC, DN, ZK, Backup HMaster, HRegionServer
node3: DN, JN, ZK, HRegionServer
When I reboot node3, some regions get stuck in transition (in the OPENING state). In the master log, I can see: master.HMaster: Not running balancer because 5 region(s) in transition
Does anyone know how to fix this issue? Many thanks.
-
Is there a redis pub/sub replacement option, with high availability and redundancy, or, probably p2p messaging?
I have an app with hundreds of horizontally scaled servers which uses redis pub/sub, and it works just fine.
The Redis server is a central point of failure. Whenever Redis fails (and it does happen sometimes), our application falls into an inconsistent state and has to follow a recovery process, which takes time. During this time the entire app is hardly usable.
Is there any messaging system/framework, similar to redis pub/sub, but with redundancy and high availability, so that if one instance fails, the others continue to deliver the messages exchanged between application hosts?
Or, better, is there any distributed messaging system in which app instances exchange the messages in a peer-to-peer manner, so that there is no single point of failure?
-
Apache flink high availability not working as expected
I tried to test high availability by bringing down the Task Manager together with the Job Manager and the YARN NodeManager at the same time. I expected YARN to automatically reassign the application to another node, but that is not happening. How can this be achieved?
-
Terraform sentinel policy failed
My requirement is that the Sentinel policy should allow only the following types of persistent volumes in AKS: "azure_disk", "azure_file", "csi", "flex_volume".
The policy that I wrote:
import "tfplan-functions" as plan

aksstorage = plan.find_resources("kubernetes_persistent_volume")
allowed_storage = ["azure_disk", "azure_file", "csi", "flex_volume"]
violating_storage = plan.filter_attribute_not_in_list(aksstorage,
    "spec.0.persistent_volume_source", allowed_storage, true)

# Main rule
violations = length(violating_storage["messages"])
main = rule { violations is 0 }
I am getting the below error.
kubernetes_persistent_volume.example has spec.0.persistent_volume_source with value [{azure_disk: [], glusterfs: [], cinder: [], iscsi: [], flocker: [], local: [], nfs: [], photon_persistent_disk: [], csi: [], fc: [], ceph_fs: [], flex_volume: [], vsphere_volume: [{fs_type: null, volume_path: /absolute/path}], host_path: [], gce_persistent_disk: [], azure_file: [], rbd: [], quobyte: [], aws_elastic_block_store: []}] that is not in the allowed list: [azure_disk, azure_file, csi, flex_volume]
I am new to Terraform and cannot find a way to fulfil this requirement.
-
Azure Sentinel Contributor Role is not available in Administrative Roles on Azure
According to this link, there should be 3 built-in roles for Azure Sentinel. However, a global admin account is unable to see any of them under Administrative Roles on Azure.
-
(C++) How can I repeatedly prompt the user for the number of lines? It just stops running after one run
This program is supposed to print a V shape x rows tall, depending on the user input. It is supposed to keep running until the user inputs 0, but it stops after one run.
int main() {
    int size;
    while (size <= 0) {
        int rows, cols;
        cout << "How many rows tall to draw 'V'? (0 to quit): ";
        cin >> size;
        for (rows = size - 1; rows >= 0; rows--) {
            // outer gap loop
            for (cols = size - 1; cols > rows; cols--) {
                cout << " ";
            }
            cout << "*";
            // inner gap loop
            for (cols = 1; cols < (rows * 2); cols++)
                cout << " ";
            if (rows >= 1)
                cout << "*";
            cout << "\n";
        }
        return 0;
    }
}
-
Redis AOF fsync is taking too long and Sentinel sdown?
I am running 3 Sentinels and a Redis master/slave pair, version 2.6.10.
The Redis properties are the same on master and slave. Under otherwise normal circumstances, the Sentinels sometimes trigger an sdown alarm.
Everything is on the same private network segment, so traffic does not go through firewalls or L3 devices.
Apart from that, no special issues occurred on the communication side. What is suspicious is that an "AOF fsync is taking too long" message is logged during AOF writes in Redis.
Could this affect the ping-pong between Sentinel and Redis and cause the sdown alarms? redis conf:
81) "aof-rewrite-incremental-fsync"
82) "yes"
83) "appendonly"
84) "yes"
89) "appendfsync"
90) "everysec"
redis log
...
[38750] 24 Feb 09:48:49.097 * Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.
[38750] 24 Feb 09:48:57.006 * Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.
[38750] 24 Feb 09:49:04.012 * Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.
sentinel log
+sdown slave 192.168.0.101:6379 192.168.0.101 6379 @ mymaster 192.168.0.100 6379
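Not confirmed as the fix for this case, but slow disk fsync stalling the Redis event loop can indeed delay PING replies long enough to trip Sentinel's subjective-down check. Two commonly suggested mitigations are raising Sentinel's down-after-milliseconds for the monitored master and reducing fsync pressure during rewrites; a config sketch (the name mymaster and the values are assumptions):

```
# sentinel.conf -- tolerate longer unresponsive periods before declaring sdown
sentinel down-after-milliseconds mymaster 30000

# redis.conf -- do not fsync the AOF while a background rewrite/save is running
no-appendfsync-on-rewrite yes
```

Both directives exist in Redis 2.6; no-appendfsync-on-rewrite trades a small durability window for fewer fsync stalls.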
-
Redis Sentinel client-reconfig-script will not run systemctl stop
I am using Redis 5 with Sentinel.
I have set up a script at /var/redis/reconfig.sh:
#!/bin/bash
ROLE=$(redis-cli --no-auth-warning -a mysupersecretpass info | grep "role:master")
ROLE_CLEAN=$(echo "$ROLE" | tr -d "[:space:]")
if [ "$ROLE_CLEAN" != "role:master" ]
then
    echo "PasswordForRoot" | su -c "systemctl stop keepalived" root
else
    echo "PasswordForRoot" | su -c "systemctl start keepalived" root
fi
exit 0
Essentially, what I expect to happen is: when Redis Sentinel runs the reconfig script, the script checks what role the node has and either starts or stops keepalived.
I have verified that the script does get executed, but it does not stop/start keepalived, and I do not know what the issue is.
EDIT: The script does execute successfully when run as root, but when Sentinel runs it, it does not stop keepalived.
-
How to support Redis Sentinel in ASP .Net Core
I need to support Redis Sentinel for high availability. To test Sentinel easily, I created a simple .NET Core console app, connecting via the ConnectionMultiplexer of StackExchange.Redis.
What I'm doing on the console side is:
var conn = ConnectionMultiplexer.Connect("127.0.0.1:26379," +
    "serviceName=mymaster," +
    "allowAdmin=true," +
    "tiebreaker=\"\"," +
    "abortConnect=false");
IDatabase db = conn.GetDatabase();
...
After connecting, I kill the master; StackExchange.Redis correctly fails over to a slave and promotes it to master, and everything is good.
But when I apply the same scenario in an ASP.NET Core application, after killing the master manually I am unable to send any Redis-related requests.
Here's what I'm doing in Startup.cs:
var connectionMultiplexer = ConnectionMultiplexer.Connect($"127.0.0.1:26379," +
    "serviceName=mymaster," +
    "allowAdmin=true," +
    "tiebreaker=\"\"," +
    "abortConnect=false");
var database = connectionMultiplexer.GetDatabase(0);
services.AddScoped(_ => database);
services.AddMvc(options =>
{
    options.EnableEndpointRouting = false;
});
After killing the master and then trying to send some Redis-related requests, I get the error below.
StackExchange.Redis.RedisConnectionException: No connection is available to service this operation: SADD 100001; An existing connection was forcibly closed by the remote host.; IOCP: (Busy=0,Free=1000,Min=16,Max=1000), WORKER: (Busy=2,Free=32765,Min=16,Max=32767), Local-CPU: n/a
 ---> StackExchange.Redis.RedisConnectionException: SocketFailure (ReadSocketError/ConnectionReset, last-recv: 20) on 127.0.0.1:6379/Subscription, Idle/Faulted, last: PING, origin: ReadFromPipe, outstanding: 0, last-read: 13s ago, last-write: 13s ago, keep-alive: 60s, state: ConnectedEstablished, mgr: 8 of 10 available, in: 0, in-pipe: 0, out-pipe: 0, last-heartbeat: 0s ago, last-mbeat: 0s ago, global: 0s ago, v: 2.0.593.37019
 ---> Pipelines.Sockets.Unofficial.ConnectionResetException: An existing connection was forcibly closed by the remote host.
 ---> System.Net.Sockets.SocketException (10054): An existing connection was forcibly closed by the remote host.
   at Pipelines.Sockets.Unofficial.Internal.Throw.Socket(Int32 errorCode) in C:\code\Pipelines.Sockets.Unofficial\src\Pipelines.Sockets.Unofficial\Internal\Throw.cs:line 59
   at Pipelines.Sockets.Unofficial.SocketAwaitableEventArgs.GetResult() in C:\code\Pipelines.Sockets.Unofficial\src\Pipelines.Sockets.Unofficial\SocketAwaitableEventArgs.cs:line 74
   at Pipelines.Sockets.Unofficial.SocketConnection.DoReceiveAsync() in C:\code\Pipelines.Sockets.Unofficial\src\Pipelines.Sockets.Unofficial\SocketConnection.Receive.cs:line 64
What I expect is that after killing the master, a slave is successfully promoted to master, so I get no errors.
Any idea what I may be missing? Thanks.