I've been trying to follow a few of the online guides to get SSL certs running through Let's Encrypt, but keep hitting brick walls.
I have a subdomain created through Google Domains, where I've enabled SSL and used redirection to point to my *.synology.me address; I've also tried pointing it directly at <<my IP address>>:5001.
When I follow Mike Tabor's guide, after step four, I get the following error:
"Failed to connect to Let's Encrypt. Please make sure the domain name is valid."
I can use the domain name to access the NAS directly, so I'm not sure how to make it any more valid. I definitely have port 80 forwarded; I can confirm that outside this process.
Is there something else I should be doing to get this all working? Anything else I can troubleshoot?
Thanks for any recommendations.
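For what it's worth, Let's Encrypt's HTTP-01 validation has to reach the hostname itself, so registrar-level "redirection"/forwarding usually isn't enough; the subdomain generally needs to resolve straight to the NAS (for example via a CNAME to the *.synology.me name). A quick, hedged way to check what the name currently resolves to (hostname is a placeholder):

dig +short yoursubdomain.example.com
dig +short CNAME yoursubdomain.example.com

If those answers point at the registrar's forwarding servers rather than your public IP, the validation request never reaches the NAS on port 80.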
Is it unprofessional-looking or insecure to use Let’s Encrypt certificates in production?
Does it lack functionality you would get from using one of the big name commercial CAs?
Hey everyone,
I've been reading up on creating a signed cert for OPNsense using Let's Encrypt and can't seem to find an answer. From what I've found, I'll need an FQDN.
Will I still be able to use the cert if I'm not exposing the web GUI to the public internet? I only access my OPNsense GUI while connected to the VPN, and the browser still tells me it's insecure. I'd like to encrypt the connection even though it's on the private network. I don't plan on pointing my FQDN at my router or making the GUI available on the public network.
Thanks!
I'm a bit confused about how to configure Nginx to act as a front end to a very simple Express.js app. I want Nginx to handle redirecting port 80 to port 443, add an SSL certificate using Let's Encrypt and then pass all requests back to the Express.js app running on 127.0.0.1:3000. Can anyone explain in simple terms what I need to do in Nginx? It has been so long since I used it that I've completely confused myself and I can't find any specific tutorials online because I'm not sure what to search for.
Thank you for any help.
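Not a definitive recipe, but a minimal sketch of the two Nginx server blocks involved (hostname and certificate paths are placeholders; certbot's --nginx plugin can fill in the SSL lines for you):

# Redirect all plain-HTTP requests to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# Terminate TLS and proxy everything to the Express.js app
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Running sudo certbot --nginx -d example.com against an existing port-80 server block will obtain the certificate and typically add the 443 block and redirect for you.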
Despite my best efforts, I have been unable to get a certificate from Let's Encrypt. The readiness tool (letsdebug) says everything is good. Ports and permissions are correct, afaik.
"letsencrypt": {
"__comment__": "Requires NodeJS 8.x or better, Go to https://letsdebug.net/ first before trying Let's Encrypt.",
"email": "[email protected]",
"names": "mesh.mydomain.com",
"rsaKeySize": 3072,
"production": false
My understanding is that the new cert will be downloaded to meshcentral-data/letsencrypt-certs/ and once that occurs, I can change it to production: true
This is a self-hosted system behind a pfSense firewall, and it is operating as expected except for the cert. No errors are showing in the log.
Any guidance would be appreciated.
I've installed a certificate through the Synology GUI on my NAS. I don't get all the warnings anymore when I try to log in, but once logged in the URL https:// is crossed out and it says 'not secure'.
When I click on the "not secure" message it still shows my old certificate, which I have deleted from the NAS.
What am I doing wrong?
Set-up:
Domain - Google Domains
OS - Raspbian
Server - Nginx
DNS forwarding
I am trying to run certbot. domain.com below has been switched to my actual domain, but the issue is that certbot creates the verification value while Google Domains takes 24-48 hours to update when you add a DNS TXT record. So when I run the command below it fails, because it does not see the new DNS verification record. Is there a way to run the verification against the last challenge and prevent certbot from creating a new DNS TXT challenge?
sudo certbot certonly --manual --preferred-challenges dns -d domain.com
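For what it's worth, the interactive --manual run pauses after printing the TXT value and only continues when you press Enter, so one workaround is simply to wait until the record is visible before letting certbot verify. A small sketch (domain and resolver are placeholders):

# In another terminal, poll until the TXT value certbot printed shows up
watch -n 30 "dig +short TXT _acme-challenge.domain.com @8.8.8.8"
# Only then press Enter at the certbot prompt so it verifies the existing challenge.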
Hi, I have a domain through 1and1 hosting and I would like to set it up so that I can connect to my Nextcloud by typing in mydomain.com directly, without using a subdomain. How would I set that up? I currently have a ddns.net address working with HTTPS, but for the life of me I can't figure out how to point my NC directly at my domain. Sorry, I am new to all of this. TIA
Hey All,
Last night I created the setup from the title using docker and docker-compose and have everything working beautifully. My question relates to expanding this setup and how exactly one would go about doing this. I have one public IP from which I want to host multiple websites. Currently I am hosting a dockerized nextcloud site using the various other containers mentioned in the title. So, how would I go about hosting a second website and integrating it into the docker containers I currently have?
I'm fairly sure I would use the NGINX reverse proxy container I already have and just add some new information to my docker-compose file... right?
Do I need to make any changes to the docker-compose information for the lets encrypt companion container?
Will the second website need its own accompanying database container?
I will also likely want to create a second Docker network for intra-container communication, right?
------
Please be as specific as possible regarding the changes I'll need, because I'm pretty green with Docker and web stuff lol. Here is my docker-compose.yml (a sketch of one way to add a second site follows it):
THANKS FOR READING!
-------------------------------------------------------------------------------------------------
version: '3'

services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./proxy/conf.d/client_max_body_size.conf:/etc/nginx/conf.d/client_max_body_size.conf:ro
    restart: unless-stopped

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped

  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=************
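The compose file is cut off above, but with the jwilder/nginx-proxy + letsencrypt-companion pattern it shows, a second website is usually just another service added under services: on the same network: the proxy routes by the VIRTUAL_HOST variable and the companion requests a certificate for LETSENCRYPT_HOST. A hedged sketch (image, hostnames and email are placeholders):

  secondsite:
    image: nginx:alpine            # whatever actually serves the second site
    container_name: secondsite
    networks:
      - nextcloud_network
    environment:
      - VIRTUAL_HOST=second.example.com
      - LETSENCRYPT_HOST=second.example.com
      - LETSENCRYPT_EMAIL=you@example.com
    restart: unless-stopped

The proxy and companion containers themselves usually need no changes. Whether the second site needs its own database container depends entirely on the app (a static site needs none, while another Nextcloud/WordPress-style app would want its own database or at least its own schema), and a separate Docker network is optional; the proxy only needs to share a network with each site it fronts.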
Do you want secure remote access to your Home Assistant from everywhere, but don't want to set up complicated VPNs or pay for a cloud service? If so, then this video is just for you.
Exposing a local server or service to the outside world has always been tricky. Exposing Home Assistant is not hard, but you have to do it the right way, with SSL encryption and IP banning enabled after multiple failed logins.
Here is how…
The Video 🔴 🎥 https://youtu.be/jkBcGl7Vq8s
Home Assistant Remote Access with DuckDNS and Let’s Encrypt (HOW-TO)
If you prefer to read, this is the full Article ✍️📜 https://peyanski.com/home-assistant-remote-access/
And Web Story specially optimised for mobile devices (insta like) 📲
➡️ https://peyanski.com/stories/home-assistant-remote-access/
Cheers,
Kiril
Hi all!
I'm looking for ways to automate and sync SSL certificates from let's encrypt and configure reverse proxies to use them.
Currently, I have Traefik set up on a home server using Docker Compose. It supports automatic HTTPS from LE, plus it has other neat features like defining domain names for each service in the docker-compose file (just like Ingress in Kubernetes). Now I want to move some services to a VPS to free up resources and get better uptime. At first I thought of just copying the same docker-compose file with the services I want, keeping the same configuration, over to the VPS and running it, but of course that means both Traefik instances might send requests to LE at the same time and exceed the rate limit.
I noticed that Traefik did have a solution for this, https://docs.traefik.io/v1.7/user-guide/cluster/, where I just configure something like etcd and point Traefik to it, then run the second instance with etcd configured and it syncs the SSL certificates, so that only one Traefik instance resolves and renews certificates while the other syncs. Only to realize that this has been removed in version 2 (https://github.com/containous/traefik/issues/6772) and is now only available in their enterprise solution.
So right now I'm just looking for a simple solution that creates SSL certs from LE and syncs them between servers easily. I know that I could use Kubernetes to solve almost everything with ingress controllers and certbot, but I just want to keep it simple (not to mention the complexity of taking the current bare-metal setup and adding Kubernetes on top of it).
Suggestions?
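One low-tech possibility, sketched under the assumption that both instances are Traefik v2 with a file-based acme.json store: let only the home instance talk to Let's Encrypt, and periodically push the store to the VPS (hostnames, paths and container name are placeholders):

#!/bin/sh
# Run from cron on the "primary" Traefik host.
# Copy the ACME store to the VPS and restart Traefik there so it
# picks up newly issued or renewed certificates.
rsync -a /opt/traefik/letsencrypt/acme.json vps.example.com:/opt/traefik/letsencrypt/acme.json
ssh vps.example.com 'chmod 600 /opt/traefik/letsencrypt/acme.json && docker restart traefik'

It is not as elegant as the old v1 KV-store clustering, but it keeps both proxies serving the same certificates without either one hitting the rate limits.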
From what I've read, the certs aren't stored as files but rather in /conf/config.xml. Is there a command line program I can use through SSH to add my certs then restart the web UI?
So I have a Windows server running Jellyfin, and I've gotten an SSL certificate from Let's Encrypt and added it to Jellyfin. But it's still not showing the site as secure. How can I make it work and not get that "not secure" error? Anyone know?
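If it helps, Jellyfin's custom-certificate setting expects a PKCS#12 (.pfx) bundle rather than the separate PEM files most Let's Encrypt clients produce, so a common step is converting the issued cert (a sketch; file names are placeholders):

# Combine the Let's Encrypt certificate chain and private key into a .pfx
# (empty export password here; a password-protected one works too)
openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -out jellyfin.pfx -passout pass:

Also worth double-checking that the server is being reached via the hostname on the certificate (not a bare IP or a different name), since the browser will flag any mismatch as not secure.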
Hey everyone,
I've had quite a time getting this server set up. I'm running Ubuntu 16.04 with snap and Nextcloud installed. After setup I created a self-signed cert so I could quickly get HTTPS up. After a lot of trial and error (couldn't get port 80 to open) I managed to generate a Let's Encrypt cert. Just one problem: I can't figure out how to get rid of the self-signed cert and get the system to use the new Let's Encrypt one. Any help?
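If this is the official Nextcloud snap, its bundled HTTPS helper can usually swap the self-signed cert for a Let's Encrypt one in a single step (a sketch, assuming ports 80/443 reach the box and the domain resolves to it):

# Replaces the currently enabled (self-signed) certificate with a Let's Encrypt one
sudo nextcloud.enable-https lets-encrypt

The same helper also has a custom mode for certificates obtained outside the snap, if that is how the Let's Encrypt cert was generated.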
I'm a bit of a noob when it comes to SSL certs, so my apologies in advance. I recently bought a domain and was able to get a Let's Encrypt cert installed via DSM for a subdomain (xx.myreallycooldomain.blah). I know that wildcard certs through Let's Encrypt require a DNS-level challenge, which isn't implemented in the DSM method of generating the cert, but it is easy to do using certbot on another Linux box. So here's the question: if I use certbot on a different device to generate the Let's Encrypt cert, can I then manually install that cert in DSM and use it?
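For reference, this is roughly the certbot run being described (interactive DNS-01 challenge; the domain is a placeholder):

sudo certbot certonly --manual --preferred-challenges dns \
  -d "*.myreallycooldomain.blah" -d myreallycooldomain.blah

The resulting privkey.pem and fullchain.pem can then be added through DSM's manual certificate import; the main caveat is that the 90-day renewal stays a manual copy-over unless the DNS challenge is automated with a plugin.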
So I recently got into Docker and decided to use it to host my personal website. Problem is, for some reason Traefik serves a self-signed cert instead of one generated by Let's Encrypt (and I have HSTS enabled so this makes the site completely inaccessible)
This is Traefik's stack.yml:
version: "3.8"
services:
traefik:
image: traefik:2.2
container_name: traefik
command:
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "--entrypoints.web-secure.address=:443"
- "--certificatesresolvers.resolver.acme.tlschallenge=true"
- "[email protected]"
- "--certificatesresolvers.resolver.acme.storage=/letsencrypt/acme.json"
- "--log.level=DEBUG"
ports:
- 80:80
- 443:443
- 8080:8080
volumes:
- ./letsencrypt:/letsencrypt
- /var/run/docker.sock:/var/run/docker.sock:ro
networks:
default:
name: traefik-net
And here's my webserver's thing:
version: "3.8"
services:
ghost:
image: ghost:3-alpine
environment:
- url=${URL}
networks:
- traefik-net
deploy:
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
- "traefik.http.routers.ghost.middlewares=redirect-to-https"
- "traefik.http.routers.ghost.rule=Host(`samvanderkris.xyz`)"
- "traefik.http.routers.ghost.entrypoints=ghost"
- "traefik.http.routers.ghost-secure.rule=Host(`samvanderkris.xyz`)"
- "traefik.http.routers.ghost-secure.tls=true"
- "traefik.http.routers.ghost-secure.tls.certresolver=resolver"
networks:
traefik-net:
external: true
My webserver's logs are here, but I couldn't really find anything interesting: https://pastebin.com/jaDAY77P
The weirdest thing is, I tried following Traefik's guide (https://docs.traefik.io/user-guides/docker-compose/acme-http/) and that didn't work either; I literally copy-pasted the docker-compose.yml. My server is a single-node Docker Swarm though, rather than plain docker-compose, so that might make a difference with networking or something?
Any help is very much appreciated!
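Not a definitive fix, but Traefik serves its built-in self-signed default certificate when no router with a certificate resolver matches the incoming HTTPS request, and one common cause is an HTTPS router that is not bound to the 443 entrypoint. With the entrypoint names defined in the stack above, the ghost labels would typically look something like this (a sketch; the port line is needed in Swarm mode because Traefik cannot auto-detect it there):

      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.ghost.rule=Host(`samvanderkris.xyz`)"
        - "traefik.http.routers.ghost.entrypoints=web"
        - "traefik.http.routers.ghost.middlewares=redirect-to-https"
        - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
        - "traefik.http.routers.ghost-secure.rule=Host(`samvanderkris.xyz`)"
        - "traefik.http.routers.ghost-secure.entrypoints=web-secure"
        - "traefik.http.routers.ghost-secure.tls.certresolver=resolver"
        - "traefik.http.services.ghost.loadbalancer.server.port=2368"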
Hi guys,
Would love to learn how to create SSL certificates to add to my domain. Thanks!
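A minimal sketch of the usual certbot route, assuming a Debian/Ubuntu box running Nginx in front of the domain (use the apache plugin on Apache servers; the domain is a placeholder):

# Install certbot plus the Nginx plugin, then obtain and install a certificate
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com
# Renewal is handled automatically by certbot's systemd timer / cron entry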
So, today, the Let's Encrypt add-on running on Hass.io is just completely... dead. It starts, presumably, but there are no entries about it in ANY logs. I don't get certs, I don't see traffic; any ideas? This has worked in the past and I'm coming up on renewal time soon, so I'm a bit stumped. Without logs, how can I diagnose?
I deployed cookiecutter-django to DigitalOcean and pointed my domain at it, and when I run the server I get this:
```traefik_1 | time="2020-08-10T04:07:32Z" level=error msg="Unable to obtain ACME certificate for domains \"example.com\": unable to generate a certificate for the domains [example.com]: acme: Error -> One or more domains had a problem:\n[example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://example.com/.well-known/acme-challenge/rLtIcAGYE4rBY3iUjbIm9p5XUqZBosTV_qJp8nPoBxw [2606:2800:220:1:248:1893:25c8:1946]: \"<!doctype html>\\n<html>\\n<head>\\n <title>Example Domain</title>\\n\\n <meta charset=\\\"utf-8\\\" />\\n <meta http-equiv=\\\"Content-type\", url: \n" routerName=flower-secure-router rule="Host(`example.com`)" providerName=letsencrypt.acme traefik_1 | time="2020-08-10T04:07:33Z" level=error msg="Unable to obtain ACME certificate for domains \"example.com,www.example.com\": unable to generate a certificate for the domains [example.com www.example.com]: acme: Error -> One or more domains had a problem:\n[example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://example.com/.well-known/acme-challenge/rLtIcAGYE4rBY3iUjbIm9p5XUqZBosTV_qJp8nPoBxw [2606:2800:220:1:248:1893:25c8:1946]: \"<!doctype html>\\n<html>\\n<head>\\n <title>Example Domain</title>\\n\\n <meta charset=\\\"utf-8\\\" />\\n <meta http-equiv=\\\"Content-type\", url: \n[www.example.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: Invalid response from http://www.example.com/.well-known/acme-challenge/dhSnRKK6iV_EdanHIF2HJ1xoGEf0MaAS-49R7zEmR0U [2606:2800:220:1:248:1893:25c8:1946]: \"<!doctype html>\\n<html>\\n<head>\\n <title>Example Domain</title>\\n\\n <meta charset=\\\"utf-8\\\" />\\n <meta http-equiv=\\\"Content-type\", url: \n" providerName=letsencrypt.acme route
```
I've installed Prosody 0.10 on my AWS Ubuntu 18.04 Bionic AMI EC2 instance. Now I'm trying to encrypt the server with a Let's Encrypt (Certbot client) certificate, so I ran the following command:
sudo certbot certonly --standalone --rsa-key-size 4096 -d example.com
I passed my virtualhost in place of example.com. The command ran, and I got the following output:
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for *my virtualhost*
Cleaning up challenges
Problem binding to port 80: Could not bind to IPv4 or IPv6.
IMPORTANT NOTES:
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should
make a secure backup of this folder now. This configuration
directory will also contain certificates and private keys
obtained by Certbot so making regular backups of this folder
is ideal.
According to the Prosody Documentation:
>Generally Prosody is unable to use certificates directly from the letsencrypt directory, because for security reasons the clients always ensure that the private key is only accessible by the root user. Meanwhile, also for security, Prosody does not run as root.
>
>There are a number of solutions, such as running a script to make the files readable by Prosody after every renewal. You can also change the groups of the Prosody user to give it access to the files that way, however this method can be tricky to get working on some systems.
>
>Our recommended method, if you have Prosody 0.10 or later, is to use prosodyctl cert import, as described on this page.
So, I ran sudo prosodyctl --root cert import /etc/letsencrypt/live as guided by the documentation, but it threw a few errors:
No certificate for host *my virtualhost* found :(
No certificate for host localhost found :(
No certificates imported :(
I wonder where I went wrong. Can you please help me solve the error?
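One observation, not a definitive diagnosis: the certbot output above shows "Problem binding to port 80", and with --standalone that means no certificate was actually issued, so there is nothing under /etc/letsencrypt/live for prosodyctl to import. A sketch of how that is often untangled, assuming some other web server is holding port 80:

# See what is currently listening on port 80
sudo ss -tlnp 'sport = :80'

# Either stop it briefly around the standalone run (replace nginx with whatever holds the port)...
sudo certbot certonly --standalone --rsa-key-size 4096 -d example.com \
  --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"

# ...or validate through the running web server instead
sudo certbot certonly --webroot -w /var/www/html -d example.com

Once a certificate actually lands in /etc/letsencrypt/live/<hostname>/, the prosodyctl cert import step should find it.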
Hey, wondering about transferring my domain (and subs) over to a 3rd party for Let's Encrypt certs connected to Namecheap hosting.
Anyone doing this? Tips or suggestions appreciated. I don't want to have to manually add new certs every three months if possible. Looking into an alternative to domain SSL via namecheap. Thanks for any and all thoughts.
Hi,
I wrote a short tutorial on how to start with cert-manager and Ambassador and how to request an SSL certificate from Let's Encrypt. You will need a cloud-managed cluster and a real domain name (I haven't found a good way to demonstrate this with a local cluster).
You can check it out here.
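For anyone skimming before clicking through: the heart of such a setup is usually a cert-manager ClusterIssuer pointing at Let's Encrypt, roughly like the sketch below (names, email and solver class are placeholders; the tutorial itself is the authoritative version):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: ambassador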
Hi there,
I'm a long-time user of Laravel Forge and there is something I've been trying to solve for a while: I'm having issues with renewing SSL certs on a redirection domain.
Here is an example. Let's say we have been running a site for "CompanyName" on Forge for x years, and they have the domain "companyname.com". They go through a rebranding process and rename to "SecondCompanyName", with a new URL, "secondcompanyname.com".
Within Forge, we set up a new site and get a fresh Let's Encrypt SSL cert provisioned for "secondcompanyname.com". Naturally we set up a permanent redirect on the old domain via the Forge UI:
https://preview.redd.it/gj9kw6g1kmb51.png?width=2074&format=png&auto=webp&s=1704794b5f7444483b865fa54d422c3bd200d63f
Everything is fine until the Let's Encrypt cert goes to renew on the original site "companyname.com" and the challenge fails because of the redirect.
From the searching I've done so far, I understand that I need to add a location block to Nginx to catch the acme-challenge before the redirect happens. Something like:
location /.well-known/acme-challenge/ {
    auth_basic off;
    allow all;
    alias /home/forge/.letsencrypt;
}
Sadly I'm not sure of the best place to add this via the Forge UI, mainly because I'm not actually sure at what point in the request lifecycle the Forge UI redirects are added.
Has anyone got a solution for this?
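Not Forge-specific advice, but the usual shape of the fix is to make sure that location sits inside the old domain's server block ahead of the catch-all redirect, so challenge requests are served locally while everything else still redirects. A hedged sketch (paths and domains are placeholders; where Forge lets you edit this depends on its Nginx template):

server {
    listen 80;
    server_name companyname.com www.companyname.com;

    # Serve ACME HTTP-01 challenges from disk instead of redirecting them
    location /.well-known/acme-challenge/ {
        auth_basic off;
        allow all;
        alias /home/forge/.letsencrypt/;
    }

    # Everything else goes to the new domain
    location / {
        return 301 https://secondcompanyname.com$request_uri;
    }
}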
Currently using Traefik 2 as my reverse proxy along with Cloudflare free account.
I'm using a wildcard in CNAME for some of my docker microservices and a few other CNAME entries for other things in my home network (pihole, etc.)
Cloudflare doesn't allow proxying of wildcards unless you're on their enterprise plan (as far as I'm aware).
I noticed when doing a reverse IP search that any subdomain that was reachable due to the wildcard, and thus not proxied, showed my home IP address, while any that had its own entry and was proxied showed Cloudflare's IP.
So, here's the question, in my use case where Traefik automatically fetches a Let's Encrypt certificate, is the use of the wildcard so that Traefik only has to fetch one certificate?
Is there a limit to how many subdomains I can have under the free plan in Cloudflare? And is there a limit to how many certificates I can produce in Let's Encrypt?
Just as an fyi, I'm running around 19 services via docker that are accessed through the wildcard CNAME.
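For context on the first question: yes, the usual point of the wildcard with Traefik is to issue a single wildcard certificate via the DNS-01 challenge instead of one certificate per subdomain. A hedged sketch of what that typically looks like in Traefik v2 with Cloudflare DNS (resolver name, domain and credentials are placeholders):

# Static configuration (Traefik command flags); Cloudflare credentials are
# supplied via environment variables such as CF_DNS_API_TOKEN.
--certificatesresolvers.le.acme.dnschallenge.provider=cloudflare
--certificatesresolvers.le.acme.email=you@example.com
--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json

# Router labels on one service, asking for the apex plus wildcard SANs
- "traefik.http.routers.whoami.tls.certresolver=le"
- "traefik.http.routers.whoami.tls.domains[0].main=example.com"
- "traefik.http.routers.whoami.tls.domains[0].sans=*.example.com"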
Hi everyone. I posted a tutorial yesterday on how to set up the UniFi controller on Ubuntu, and as promised here is the follow-up video on how to get a valid SSL certificate from Let's Encrypt for your controller, plus how to automate the renewal.
No-ads video:
https://youtu.be/VPFjtK9A6Zs
Edit spelling
Hi guys, I was following this guide, and when I did this:
sudo certbot --apache -d ampache.LinuXBuz.com
I got the following error:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for ampache.linuxbuz.com
Waiting for verification...
Challenge failed for domain ampache.linuxbuz.com
http-01 challenge for ampache.linuxbuz.com
Cleaning up challenges
Some challenges have failed.

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: ampache.linuxbuz.com
   Type:   dns
   Detail: DNS problem: NXDOMAIN looking up A for ampache.linuxbuz.com -
   check that a DNS record exists for this domain
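For reference, NXDOMAIN simply means the name does not exist in public DNS yet, so the first thing to check before re-running certbot:

dig +short A ampache.linuxbuz.com @8.8.8.8

If that returns nothing, an A (or CNAME) record for the subdomain still needs to be created at the DNS host, or it has not propagated yet.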
Since I had lots of problems with Let’s Encrypt on pfSense and ended up using OVH for my domain, I want to share the correct link to issue the OVH API credentials.
https://api.ovh.com/createToken/index.cgi?GET=/*&PUT=/*&POST=/*&DELETE=/*
If you try to create other domain API credentials, you may end up with no consumer key. No need to modify the link or any parameters, except setting the validity to unlimited.
Hey guys,
So I already have a webserver up and it is using SSL certs generated by Let's Encrypt. It is renewing automatically and working just fine.
I recently installed HMailer on the webserver and I'd like to use Let's Encrypt to secure HMailer SMTP connections as well.
I found a few documents about this online, mostly on the HMailer forums, but most seem incomplete or don't completely answer my questions.
Firstly: if I already have a cert being used by the webserver, is it possible to use the same cert for HMailer as well? I'm trying to avoid having to manually update the Let's Encrypt cert every 90 days within HMailer, and would rather just let the Let's Encrypt auto-renewal script run as it does now.
The problem I'm running into is that HMailer asks for the cert file and the private key of the cert. To get these I'd have to export the cert and then import it into HMailer, which would break the auto-renewal script, since the cert would then be standalone inside HMailer.
Any ideas?
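One possibility, heavily hedged since it depends on the ACME client in use and assumes "HMailer" here is hMailServer: hMailServer's SSL certificate settings can reference PEM certificate and key files directly, so instead of exporting anything you may be able to point it at the same files the renewal script already maintains (e.g. fullchain.pem and privkey.pem under the client's live directory) and just bounce the mail service after each renewal. With certbot on Windows, that could look like:

certbot renew --deploy-hook "net stop hMailServer && net start hMailServer"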
https://www.youtube.com/watch?v=J7ckr1Mzw_8
This video walks through a very simple way to create a TLS certificate with wildcard domain in a Docker Nginx container using CertCache in standalone mode.
This approach avoids having to use a DNS service with a supported Certbot plugin
EDIT: I GOT IT WORKING. I don't know how but I did.
While constantly editing the folder, trying to figure out what was going on, the Let's Encrypt proxy-conf folder somehow became corrupted. All files lost their ".sample" extension; or rather, all files became ".conf" (maybe corruption is the wrong word)
I tried re-installing Let's Encrypt but the files remained the same. I changed the name of the proxy-conf directory to proxy-conf1 and removed Let's Encrypt. I tried re-installing Let's Encrypt again, but the log gave me all sorts of errors: Let's Encrypt was looking for the previous folders that I had renamed.
Welp...I needed a CLEAN install, a NUCLEAR install. Clear everything out! So I followed: https://linuxize.com/post/how-to-remove-docker-images-containers-volumes-and-networks/
In the command line, I entered: $ docker system prune (BE CAREFUL WITH THIS COMMAND; READ ABOUT IT IN THE LINK BEFORE EXECUTING IT)
The command removed everything. Clean.
Once I nuked the docker, I went back to the proxy-conf folder and found all the files were still missing the .sample extension. I went into Krusader, found the file, clicked properties. The files were still sample files but it allowed me to change them to ".conf"
I changed nextcloud.subdomain.conf.sample to nextcloud.subdomain.conf, restarted Let's Encrypt, and IT FUCKING WORKED. I was able to connect to NextCloud via my subdomain. Sometimes a nuclear install corrects the problem, I guess : /
Thanks to /u/gumby420 for helping and pushing me to this resolution : )
The original problem is outlined below. I really hope this helps people in the future. Don't give up!
I've troubleshooted this to hell, and I'm ripping my hair out.
I followed SpaceInvader One's video tutorial and have gone back through it with a fine tooth comb.
After setting everything up, when I attempt to access NextCloud via URL or LAN address, I get: "Welcome to our server. The website is currently being setup under this address."
I've searched forums and found other people with the same problem but no fixes.
Prior to setting up the reverse proxy, I got NextCloud working with MariaDB, and it worked great on the LAN.
I created a Docker proxy-net and have NextCloud and Let's Encrypt running on it, but not MariaDB (placing MariaDB on the proxy-net changes nothing).
Initially, I had log syntax errors (I failed to put a comma on the line 'overwrite.cli.url') but have corrected the syntax and am now getting no ...
I'm looking at setting up a Jitsi server for "social" gatherings, because I'm a neckbeard with a freedom problem and don't want to use Zoom, GoToMeeting, or Google Hangouts.
ISTR that Comcast blocks ports 80 & 443 incoming. Is that still the case? Let's Encrypt's HTTP validation depends on them, and if I want my friends to use the mobile apps for Jitsi Meet, I can't use a self-signed certificate.