Change DNS Manual to DNS API
Is there a way to change my DNS setting from manual to DNS API mode? I tried renewing an SSL certificate using acme.sh, but it says that my dns-01 challenge is in manual mode (which should not be run in production) and that I need to add the TXT value manually. I would like to use auto-renewal, but it seems my DNS provider needs to support an API. How can I do this? Can I switch my existing DNS setup over to using the API?
Honestly, I have zero idea what to do here. Please help me.
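For what it's worth, acme.sh switches from manual to automated dns-01 as soon as it is run with a --dns <provider> hook and the provider's API credentials in the environment; no DNS migration is needed, only API access to whichever provider already hosts the zone. A minimal sketch, assuming Cloudflare as the provider (the dns_cf hook); the CF_Token and CF_Account_ID variable names are acme.sh's documented ones, but the values here are placeholders:

```shell
# A sketch only: dns_cf (Cloudflare) is used as the example provider, and the
# CF_Token / CF_Account_ID values are placeholders for real API credentials.
export CF_Token="replace-with-cloudflare-api-token"
export CF_Account_ID="replace-with-cloudflare-account-id"

# --dns dns_<provider> makes acme.sh create and delete the _acme-challenge TXT
# record itself, so cron-driven renewals no longer need manual steps.
CMD="acme.sh --issue --dns dns_cf -d example.com -d *.example.com"

if command -v acme.sh >/dev/null 2>&1; then
  $CMD || echo "acme.sh exited nonzero (expected with placeholder credentials)"
else
  echo "acme.sh not found here; would run: $CMD"
fi
```

acme.sh ships hooks for many DNS providers under its dnsapi/ directory, so the same pattern applies with different environment variables per provider.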
See also questions close to this topic
-
Error occurred during build: Command start_install failed while creating Corda Enterprise on AWS
I am creating a single Corda node using the AWS Corda template with a new VPC configuration in the Ohio region. The build of CordaNodeStack fails; below are the system logs from the failure:
[   89.110614] cloud-init[1459]: Error occurred during build: Command start_install failed
[   89.124327] cloud-init[1459]: + cfn_fail
[   89.124653] cloud-init[1459]: + cfn-signal -e 1 --stack Corda-NotaryA-CordaStack-xxxx-CordaNodeStack-xxxx --region us-east-2 --resource CordaInstance
[   89.369313] cloud-init[1459]: + exit 1
[   89.370188] cloud-init[1459]: Cloud-init v. 18.2 running 'modules:final' at Mon, 18 Feb 2019 12:43:10 +0000. Up 11.40 seconds.
[   89.370632] cloud-init[1459]: 2019-02-18 12:44:28,352 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
[   89.381955] cloud-init[1459]: 2019-02-18 12:44:28,364 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
[   89.389311] cloud-init[1459]: 2019-02-18 12:44:28,365 - util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_user.py'>) failed
ci-info: +++++++++Authorized keys from /home/ubuntu/.ssh/authorized_keys for user ubuntu++++++++++
ci-info: +---------+-------------------+---------+---------+
ci-info: | Keytype | Fingerprint (md5) | Options | Comment |
ci-info: +---------+-------------------+---------+---------+
ci-info: | ssh-rsa | xxxxx             | -       | xxxx    |
ci-info: | ssh-rsa | xxxxx             | -       | xxxx    |
ci-info: +---------+-------------------+---------+---------+
<14>Feb 18 12:44:28 ec2: #############################################################
<14>Feb 18 12:44:28 ec2: -----BEGIN SSH HOST KEY FINGERPRINTS-----
<SSH Key>
<14>Feb 18 12:44:28 ec2: -----END SSH HOST KEY FINGERPRINTS-----
<14>Feb 18 12:44:28 ec2: #############################################################
-----BEGIN SSH HOST KEY KEYS-----
ecdsa-sha2-nistp256 <hash> root@ip-xx-x-xxx-xxx
ssh-ed25519 <hash> root@ip-xx-x-xxx-xxx
ssh-rsa <key> root@ip-xx-x-xxx-xxx
-----END SSH HOST KEY KEYS-----
[   89.530956] cloud-init[1459]: Cloud-init v. 18.2 finished at Mon, 18 Feb 2019 12:44:28 +0000. Datasource DataSourceEc2Local. Up 89.52 seconds
[FAILED] Failed to start Execute cloud user/final scripts.
See 'systemctl status cloud-final.service' for details.
[  OK  ] Reached target Cloud-init target.
Starting Daily apt download activities...
[  OK  ] Started Daily apt download activities.
Starting Daily apt upgrade and clean activities...
Stopped Daily apt upgrade and clean activities.
Can anyone help me with this error?
Thank you in advance.
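When a cloud-init user-data script fails like this, the real error is usually captured on the instance itself. A sketch of where to look, using the standard cloud-init locations (nothing here is Corda-specific):

```shell
# Standard cloud-init artifacts: the two logs, plus the user-data script
# that the WARNING lines above say failed.
LOGS="/var/log/cloud-init.log /var/log/cloud-init-output.log /var/lib/cloud/instance/scripts/part-001"
for f in $LOGS; do
  if [ -r "$f" ]; then
    echo "inspect: $f"
  else
    echo "not present on this machine: $f"
  fi
done
# Re-running the failed user-data script by hand usually surfaces the real error:
#   sudo bash -x /var/lib/cloud/instance/scripts/part-001
```

The cfn-signal -e 1 line only reports the failure back to CloudFormation; the cause is whatever start_install did inside part-001.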
-
S3 registerStreamWrapper and fseek
I am using PHP SDK for AWS and S3. This is part of my code:
// Assumes an Aws\S3\S3Client instance in $client; registerStreamWrapper() must
// be called before fopen("s3://...") works (the title suggests this step).
$client->registerStreamWrapper();

$context = stream_context_create(array('s3' => array('seekable' => true)));
$stream  = fopen("s3://{$bucketname}/{$key}", 'r', false, $context);

$length = $byte_to - $byte_from + 1;
$speed  = 1000 * 1024; // read in 1000 KB chunks

file_put_contents("test.txt", " before \n", FILE_APPEND);
$im = fseek($stream, $byte_from); // returns 0 on success, -1 on failure
file_put_contents("test.txt", " im=" . $im . " \n", FILE_APPEND);

while ($length > 0) {
    $read   = ($length > $speed) ? $speed : $length;
    $length = $length - $read;
    echo fread($stream, $read);
}
After checking test.txt I saw that only "before" had been saved; it seems that fseek doesn't work. How can I improve this code?
-
Iterate an S3 folder to get recent files by LastModifiedDate
We have an S3 bucket with the folder structure below (data arrives hourly):
s3://my_bucket/stream/yy/mm/dd/hh:
examples:
s3://my_bucket/stream/2019/01/01/00
s3://my_bucket/stream/2019/01/01/01
s3://my_bucket/stream/2019/01/01/02
:
:
s3://my_bucket/stream/2019/01/02/00
s3://my_bucket/stream/2019/01/02/01
I need to iterate the folders and pick up the most recently uploaded files, by last-modified date, in an AWS Lambda (Python 2.7).
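For prototyping the listing logic outside Lambda, an S3 listing can be sorted by LastModified with a JMESPath query on the CLI. A sketch, with the bucket and prefix taken from the question as placeholders and working AWS credentials assumed:

```shell
# my_bucket and stream/ are the question's placeholders; a configured AWS CLI
# is assumed. The JMESPath query sorts the listing by LastModified, newest first.
CMD='aws s3api list-objects-v2 --bucket my_bucket --prefix stream/ --query "reverse(sort_by(Contents, &LastModified))[:10].[Key,LastModified]" --output text'

if command -v aws >/dev/null 2>&1; then
  eval "$CMD" || echo "aws call failed (expected without credentials for my_bucket)"
else
  echo "AWS CLI not installed here; would run: $CMD"
fi
```

Inside the Lambda itself, the equivalent is boto3's list_objects_v2 paginator, sorting the returned Contents entries on their LastModified field.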
-
Strange MikroTik DNS interaction with a Firebird database
A company runs Windows Server 2008 hosting a Firebird 2 database. Clients use some software on local machines to connect to this database. The network runs on a few MikroTik routers. When I change the main gateway MikroTik router's DNS to the CleanBrowsing addresses (185.228.168.10 and 185.228.169.11), the software can no longer connect to the Firebird database. With 8.8.8.8 or 1.1.1.1 as DNS there are no such problems. The software does not depend on DNS; I know this because I wrote it in C#.
How is that possible, and why does it happen?
-
Changing URL Pathways when switching WebServer to another domain
So, I've got two domains that I own, and a web server purchased for the old domain. Since I don't need the old domain anymore, I asked my provider to transfer the server to the new domain. They did, and I received instructions saying I'll need to change the URL pathways on my own.
I have no idea what that is or how to do it. Also, my website on the new domain is not HTTPS-secured.
Can anyone help me solve these two issues, or are they, as I suspect, connected?
Thanks!
-
How can we make DNS entries using Route53 for a domain hosted with an external (third-party) domain provider?
We have purchased a domain, let's say "xyz.com", from a third-party domain provider. We have resources in two AWS regions and want to implement failover between them using Route53.
We have created a hosted zone with the same name as our domain, i.e. "xyz.com", and created record sets in it with failover as the routing policy.
But as our domain's DNS is hosted externally, the record sets are not taking effect.
Please suggest a way to achieve failover using Route53 for a domain hosted with an external provider, without moving the DNS to Route53.
-
Wildcard Let's Encrypt certificate on Apache
I'm trying to create an HTTPS wildcard certificate for all my subdomains, *.booda.me.
My server is hosted on Amazon Web Services, on an "Amazon Linux AMI".
When I run certbot with this command: letsencrypt certonly --manual --preferred-challenges dns --register -d booda.me -d *.booda.me
I'm asked to create an _acme-challenge "TXT" DNS record that contains a string. The certificates are validated with the confirmation message for "booda.me" and "*.booda.me".
I can also see my certificates by running "certbot certificates":
When I add the first "TXT" DNS record, I wait a few minutes for propagation. Then I update the record with the 2nd "TXT" value for the wildcard by modifying the first one, because AWS does not let me add a second "_acme-challenge.booda.me". But I do not think that could be a problem…
However, when I go to https://booda.me it works, but none of my subdomains pick up the Let's Encrypt certificate.
I get this error when I try to access a subdomain: https://formation.booda.me/logon.php
My httpd-le-ssl.conf configuration file looks like this:
<VirtualHost *:443>
    DocumentRoot "/var/www/html"
    ServerName booda.me
    ServerAlias www.booda.me
    SSLCertificateFile /etc/letsencrypt/live/booda.me-0001/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/booda.me-0001/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
I do not understand where this comes from. I made several attempts, choosing "(E)xpand" to update the certificates, but it does not work.
Where can it come from? I'm starting to despair…
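One detail worth knowing here: Route 53 does not need a second record set at all. A single TXT record set named _acme-challenge.booda.me can carry several values, one quoted string per line, which is exactly what validating booda.me and *.booda.me together requires. A sketch in zone-file notation, with placeholder challenge values:

```
; Route 53 stores several values inside ONE record set: put each challenge
; string on its own quoted line in the single _acme-challenge TXT record.
_acme-challenge.booda.me.  300  IN  TXT  "challenge-value-for-booda.me"
_acme-challenge.booda.me.  300  IN  TXT  "challenge-value-for-the-wildcard"
```

In the Route 53 console this means entering both strings in the same record's value box, each on its own line, rather than overwriting the first value with the second.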
- Let's Encrypt validation returns 403 Forbidden
-
Automatic lets encrypt SSL certificate for wildcard subdomain
I am using ISPConfig as the hosting panel on my CentOS VPS machine, and Cloudflare for DNS management.
I have added rewrite rule to my vhost settings which automatically diverts sub-folders to sub-domains.
Code:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.domain\.com
RewriteCond %{HTTP_HOST} ^(www\.)?(([^\.]+)\.){1}domain\.com$
RewriteCond /var/www/backoffice.ge/web/build/%3 -d
All works fine on HTTP (port 80), but I have a certificate issue on HTTPS (port 443). From what I can tell, the issue is that the Let's Encrypt certificate is generated for the main domain only (domain.com).
If possible, I want to create a universal wildcard certificate that automatically works for all subdomains, or create per-subdomain/directory certificates on the fly via PHP.
I found some articles about Certbot but I'm not sure how to make it work for the structure above.
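Since the DNS is already on Cloudflare, one commonly used approach is certbot's dns-cloudflare plugin, which answers the dns-01 challenge through the Cloudflare API (a requirement for wildcard certificates). A hedged sketch; the credentials path and domain.com are placeholders:

```shell
# Hypothetical paths/domains: the cloudflare.ini file must contain a
# "dns_cloudflare_api_token = ..." line, and domain.com stands in for the real zone.
CRED="/root/.secrets/cloudflare.ini"
CMD="certbot certonly --dns-cloudflare --dns-cloudflare-credentials $CRED -d domain.com -d *.domain.com"

if command -v certbot >/dev/null 2>&1 && [ -r "$CRED" ]; then
  $CMD || echo "certbot exited nonzero (check the credentials file)"
else
  echo "prerequisites missing here; would run: $CMD"
fi
```

With this in place, renewal is automatic (certbot renew), and one *.domain.com certificate covers every subdomain the rewrite rules route to, with no per-subdomain issuance needed.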
-
nginx redirecting even though no such file present
I am using nginx with certbot on DigitalOcean. I am new to nginx. I have only one file in sites-enabled, with the contents shown below. When I send a request to albahriacssacademy.com, I get a redirect to another domain which I was using previously.
I know this might not be linked, but I have also deleted all the certificates and renewed them. Please see the curl response below as well. What is happening here, and how do I avoid it?
server {
    root /var/www/html/;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name albahriacssacademy.com www.albahriacssacademy.com;

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/albahriacssacademy.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/albahriacssacademy.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.albahriacssacademy.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = albahriacssacademy.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name albahriacssacademy.com www.albahriacssacademy.com;
    listen 80;
    return 404; # managed by Certbot
}
but I get the following response:
curl -L albahriacssacademy.com -v
* Rebuilt URL to: albahriacssacademy.com/
*   Trying 206.189.36.41...
* Connected to albahriacssacademy.com (206.189.36.41) port 80 (#0)
> GET / HTTP/1.1
> Host: albahriacssacademy.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.10.3 (Ubuntu)
< Date: Thu, 14 Feb 2019 21:41:14 GMT
< Content-Type: text/html
< Content-Length: 194
< Connection: keep-alive
< Location: https://albahriacssacademy.com/
<
* Ignoring the response-body
* Connection #0 to host albahriacssacademy.com left intact
* Issue another request to this URL: 'https://albahriacssacademy.com/'
* Found bundle for host albahriacssacademy.com: 0x5556830c0330 [can pipeline]
*   Trying 206.189.36.41...
* Connected to albahriacssacademy.com (206.189.36.41) port 443 (#1)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 594 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: albahriacssacademy.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: CN=albahriacssacademy.com
* start date: Thu, 14 Feb 2019 20:40:22 GMT
* expire date: Wed, 15 May 2019 20:40:22 GMT
* issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET / HTTP/1.1
> Host: albahriacssacademy.com
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.10.3 (Ubuntu)
< Date: Thu, 14 Feb 2019 21:41:14 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Location: https://albahriainstitute.com/
<
* Ignoring the response-body
* Connection #1 to host albahriacssacademy.com left intact
* Issue another request to this URL: 'https://albahriainstitute.com/'
*   Trying 206.189.36.41...
* Connected to albahriainstitute.com (206.189.36.41) port 443 (#2)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 594 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification OK
* server certificate status verification SKIPPED
* SSL: certificate subject name (albahriacssacademy.com) does not match target host name 'albahriainstitute.com'
* Closing connection 2
curl: (51) SSL: certificate subject name (albahriacssacademy.com) does not match target host name 'albahriainstitute.com'
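A diagnostic hint: the second 301 (the one pointing at albahriainstitute.com) carries "Content-Type: text/html; charset=UTF-8" with chunked encoding, unlike nginx's own static 301 on port 80, which suggests the redirect comes from the PHP application (for example a CMS "site URL" setting) rather than from nginx. A quick check sketch, assuming shell access on the server:

```shell
# Count occurrences of the old domain in the *effective* nginx configuration.
# If it is zero, the redirect is coming from the application, not nginx.
if command -v nginx >/dev/null 2>&1; then
  MATCHES=$(nginx -T 2>/dev/null | grep -c "albahriainstitute" || true)
else
  MATCHES="unknown (nginx not installed on this machine)"
fi
echo "old-domain occurrences in nginx config: $MATCHES"
```

If the count is zero, look at the application layer next (e.g. a database-stored site URL or a PHP-level redirect left over from the old domain).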
-
How to setup redirect to non-www and use certbot at the same time?
I have a domain, mydomain.com. I'd like all traffic from www.mydomain.com to be redirected to mydomain.com. My DNS settings for this domain are something like:
- A-Record: mydomain.com -> some.ip.address.0
- C-Name: www -> @
Using Apache, I direct both mydomain.com and www.mydomain.com to an app like this:
ServerName mydomain.com ServerAlias www.mydomain.com <Directory /var/www/something> ... whatever...
Using Certbot, I set up an SSL cert for mydomain.com and, when prompted, specified a forced redirect from HTTP to HTTPS. Certbot adds the following redirects:
RewriteEngine on RewriteCond %{SERVER_NAME} =mydomain.com [OR] RewriteCond %{SERVER_NAME} =www.mydomain.com RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
When I visit www.mydomain.com, I am presented with the "Not secure" warning.
How do I redirect all "www" traffic to "non-www" domain while maintaining https?
Any help would be appreciated!
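One commonly used arrangement, sketched here under two assumptions: the certificate covers both names (e.g. it was issued with -d mydomain.com -d www.mydomain.com) and certbot's default live/ paths apply. Give www its own HTTPS vhost whose only job is the redirect:

```
<VirtualHost *:443>
    ServerName www.mydomain.com
    SSLEngine on
    # assumes the cert was issued for both names, e.g.
    #   certbot --apache -d mydomain.com -d www.mydomain.com
    SSLCertificateFile    /etc/letsencrypt/live/mydomain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.com/privkey.pem
    Redirect permanent / https://mydomain.com/
</VirtualHost>
```

The browser completes a TLS handshake for www.mydomain.com before it ever sees the redirect, so the "Not secure" warning only goes away if that name is actually on the certificate.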
-
LetsEncrypt Certbot rejects DNS TXT record for wildcard Certificate
Task: I want to create a wildcard certificate for both *.example.com and example.com in one go, using the DNS challenge method provided by the LetsEncrypt Certbot.
Reproduce: When trying to obtain the certificate files necessary to set up my SSL certificate, I run into a catch-22 with the Let's Encrypt Certbot.
I call the certbot command with these parameters
certbot certonly --agree-tos --manual --preferred-challenges dns --server https://acme-v02.api.letsencrypt.org/directory -d "*.example.com,example.com"
and am then asked to enter two DNS TXT records in the command's output.
So far, so good. But when I enter the two requested DNS TXT records for the given domains, I receive an error message:
IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: example.com
   Type: unauthorized
   Detail: Incorrect TXT record "[authentication snippet for example.com]" found at _acme-challenge.example.com

   To fix these errors, please make sure that your domain name was entered correctly and the DNS A/AAAA record(s) for that domain contain(s) the right IP address.

Problem: Certbot does not accept the very same DNS TXT records it has just prompted me to set.
It seems that Certbot cannot cope with the fact that I am requesting the certificate for both "*.example.com" and "example.com" at once, treating them as if they belonged to two different domain realms and not accepting the two TXT records as expected.
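One detail that trips this flow up: both challenge records share the same owner name, _acme-challenge.example.com, so both TXT values must be published there simultaneously (not one replacing the other) before letting certbot proceed. A pre-flight check sketch; example.com is the question's placeholder and the dig command is printed rather than executed, since it needs live DNS:

```shell
# Both values live at the SAME name. Before pressing Enter in certbot, query a
# public resolver until BOTH strings appear in the answer.
NAME="_acme-challenge.example.com"
CHECK="dig +short TXT $NAME @1.1.1.1"
echo "verify before continuing: $CHECK"
```

If the query returns only one string, the second record overwrote the first at the DNS provider; add it as an additional value under the same name instead.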