Scrapy - proxy working only for robots.txt

I am currently trying to integrate a proxy service with my Scrapy project, which, unfortunately, is not an easy task. I have the following code in my settings.py:

ROTATING_PROXY_LIST = [
    'proxy_user:proxy_passwd@ip:port'
]

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    'rotating_proxies.middlewares.RotatingProxyMiddleware': 610,
    'rotating_proxies.middlewares.BanDetectionMiddleware': 620,
    'navigator.middlewares.SeleniumMiddleware': 700,
}

# Proxy mode
# 0 = Every request gets a different proxy
# 1 = Take only one proxy from the list and assign it to every request
# 2 = Use a custom proxy set in the settings
PROXY_MODE = 2

# If proxy mode is 2, uncomment this line:
CUSTOM_PROXY = 'proxy_user:proxy_passwd@ip:port'
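
As an aside, I know plain Scrapy can also pin a proxy per request through the built-in HttpProxyMiddleware, which reads request.meta['proxy']. A minimal sketch of that approach (the spider name and URL are just illustrative; the credentials are the same placeholders as above):

import scrapy

class SitemapSpider(scrapy.Spider):
    name = 'sitemap'

    def start_requests(self):
        # HttpProxyMiddleware picks the proxy up from request.meta
        yield scrapy.Request(
            'https://www.carrefour.com.br/mapa-do-site',
            meta={'proxy': 'http://proxy_user:proxy_passwd@ip:port'},
        )

    def parse(self, response):
        self.logger.info('Fetched %s via proxy', response.url)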

But when I run my spider, I get the following log output:

2018-10-11 19:22:45 [rotating_proxies.middlewares] INFO: Proxies(good: 0, dead: 0, unchecked: 1, reanimated: 0, mean backoff time: 0s)
2018-10-11 19:22:46 [rotating_proxies.expire] DEBUG: Proxy <http://proxy_user:proxy_passwd@ip:port> is GOOD
2018-10-11 19:22:46 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.carrefour.com.br/robots.txt> (referer: None)
2018-10-11 19:22:47 [rotating_proxies.expire] DEBUG: GOOD proxy became DEAD: <http://proxy_user:proxy_passwd@ip:port>
2018-10-11 19:22:47 [rotating_proxies.middlewares] DEBUG: Retrying <GET https://www.carrefour.com.br/mapa-do-site> with another proxy (failed 1 times, max retries: 5)
2018-10-11 19:22:47 [rotating_proxies.middlewares] WARNING: No proxies available; marking all proxies as unchecked
2018-10-11 19:22:49 [rotating_proxies.expire] DEBUG: Proxy <http://proxy_user:proxy_passwd@ip:port> is DEAD

To be clear, I am using a personal proxy running Squid.
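
To rule out the proxy itself, it can be exercised outside Scrapy with plain requests; a sketch of such a check (host, port and credentials are placeholders, as above):

import requests

proxy = 'http://proxy_user:proxy_passwd@ip:port'
proxies = {'http': proxy, 'https': proxy}

# Fetch the same page Scrapy fails on, through the same Squid proxy
response = requests.get(
    'https://www.carrefour.com.br/mapa-do-site',
    proxies=proxies,
    timeout=10,
)
print(response.status_code)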