Suppose I update the kernel of a running Debian system using apt-get upgrade linux-image-amd64 to a higher patch level (e.g., 5.10.10 to 5.10.11). Do I have to reboot the Debian server in order for the update to take effect?
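A quick way to see whether the running kernel already matches the newest installed one (a sketch; Debian's /boot layout assumed — if the two values differ, the new kernel is installed but not yet booted):

```shell
# Compare the running kernel release with the newest installed kernel image.
running=$(uname -r)
latest=$(ls -1 /boot/vmlinuz-* 2>/dev/null | sed 's|.*/vmlinuz-||' | sort -V | tail -n 1)
echo "running: $running"
echo "newest installed: $latest"
```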
I'm running a Debian 10 server and I can't connect to other machines using Let's Encrypt certificates anymore, since LE's root CA (DST Root CA X3) expired a few days ago:
root#> curl -I https://example.com
curl: (60) SSL certificate problem: certificate has expired
What I've done so far:
- I updated the ca-certificates package
- I installed libgnutls-openssl27 and libgnutls30
- I ran the update-ca-certificates command.
Still, the server is not able to establish a trusted connection to the target host. The LE certificate on the target host is fine; there are no SSL errors when I trigger curl from any other host.
How can I solve this problem and establish a trusted SSL connection? Any help would be highly appreciated, thanks in advance!
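For completeness, the commonly suggested Debian fix for this expiry is to disable the expired root in the trust store and rebuild it. This is a sketch: the mozilla/DST_Root_CA_X3.crt entry and the paths are Debian defaults and should be verified locally before running it as root.

```shell
# Disable the expired DST Root CA X3 entry (prefixing a line in
# /etc/ca-certificates.conf with '!' deactivates that certificate),
# then rebuild the trust store.
conf=/etc/ca-certificates.conf
if [ -w "$conf" ] && command -v update-ca-certificates >/dev/null 2>&1; then
  sed -i 's|^mozilla/DST_Root_CA_X3.crt|!mozilla/DST_Root_CA_X3.crt|' "$conf"
  update-ca-certificates --fresh
fi
```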
We are running a mail server and keep seeing the same error in the logs (for a specific recipient):
Aug 23 05:39:17 Mailer postfix/smtp[13561]: warning: DANE
TLSA lookup problem: Host or domain name not found. Name
service error for name=_25._tcp.dhmx02.web.de type=TLSA:
Host not found, try again
Aug 23 05:39:17 Mailer postfix/smtp[13561]: warning: TLS
policy lookup for xyz.com/dhmx02.web.de: TLSA lookup
error for dhmx02.web.de:25
Aug 23 05:39:17 Mailer postfix/smtp[13561]: 9BEA23EC68:
to=<[email protected]>, relay=none, delay=4509,
delays=4236/0.05/272/0, dsn=4.7.5, status=deferred
(TLSA lookup error for dhmx02.web.de:25)
The emails come back with the following information:
This is the mail system at host mx00.unser-mail-server.com.
I'm sorry to have to inform you that your message could not
be delivered to one or more recipients. It's attached below.
For further assistance, please send mail to postmaster.
If you do so, please include this problem report. You can
delete your own text from the attached returned message.
The mail system
<[email protected]>: TLSA lookup error for dhmx02.web.de:25
Reporting-MTA: dns; mx00.unser-mail-server.com
X-Postfix-Queue-ID: 9BEA23EC68
X-Postfix-Sender: rfc822; [email protected]
Arrival-Date: Mon, 23 Aug 2021 04:24:08 +0200 (CEST)
Final-Recipient: rfc822; [email protected]
Original-Recipient: rfc822;[email protected]
Action: failed
Status: 4.7.5
Diagnostic-Code: X-Postfix; TLSA lookup error for dhmx02.web.de:25
The following software is used on the server (Debian 10):
root|mailer|/etc/rspamd|# dpkg --list | egrep -i -- "(unbound|postfix|rspam|dovecot)"
ii dovecot-core 1:2.3.4.1-5+deb10u5 amd64 secure POP3/IMAP server - core files
ii dovecot-imapd 1:2.3.4.1-5+deb10u5 amd64 secure POP3/IMAP server - IMAP daemon
ii dovecot-lmtpd 1:2.3.4.1-5+deb10u5 amd64 secure POP3/IMAP server - LMTP server
ii dovecot-managesieved 1:2.3.4.1-5+deb10u5 amd64 secure POP3/IMAP server - ManageSieve server
ii dovecot-mysql 1:2.3.4.1-5+deb10u5 amd64 secure POP3/IMAP server - MySQL support
ii dovecot-sieve 1:2.3.4.1-5+deb10u5 amd64 secure POP3/IMAP server - Sieve filters support
ii libunbound8:amd64 1.9.0-2+deb10u2 amd64 library implementing DNS resolution and validation
ii postfix 3.4.14-0+deb10u1 amd64 High-performance mail transport agent
ii postfix-mysql 3.4.14-0+deb10u1 amd64 MySQL map support for Postfix
ii rspamd 2.5-1~bpo10+1 amd64 Rapid spam filtering system
ii unbound 1.9.0-2+deb10u2 amd64 validating, recursive, caching DNS resolver
ii unbound-anchor 1.9.0-2+deb10u2 amd64 utility to securely fetch the root DNS trust anchor
Unbound is used as the resolver on the server; can this be the reason? How can I fix the TLSA lookup error for dhmx02.web.de:25? It seems to be related to the recipient's server, but he claims that our mail server is the only one he can't receive emails from.
Does anyone have a clue how to solve the problem?
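The failing query uses the standard DANE TLSA name, built from the relay's port and host, so it can be reproduced by hand against the local Unbound resolver (the dig invocation is commented out as a sketch; it needs dnsutils installed):

```shell
# Build the TLSA query name Postfix derives from the relay host and port
relay=dhmx02.web.de
port=25
qname="_${port}._tcp.${relay}"
echo "$qname"   # prints _25._tcp.dhmx02.web.de
# dig +dnssec "$qname" TLSA   # run this against the local Unbound to reproduce the lookup
```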
Is there any way to keep an email in the Postfix mail queue for investigation/debugging purposes? I want to delay the delivery and keep the email in the queue, which lets me check its contents. After I've done that, I would flush the queue and let Postfix deliver it. How can I do this?
If it's not possible for a single email - would it be possible to queue all emails for a while?
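The usual hold/inspect/release cycle uses Postfix's own queue tools; this is a sketch with the commands commented out and a hypothetical placeholder queue ID (take the real one from postqueue -p):

```shell
qid="REPLACE_WITH_QUEUE_ID"   # hypothetical placeholder; use the real ID from 'postqueue -p'
# postsuper -h "$qid"   # put the message on hold (it stays in the queue, undelivered)
# postcat  -q "$qid"    # inspect its envelope, headers and body
# postsuper -H "$qid"   # release it from hold back into the active queue
# postqueue -f          # flush: ask Postfix to attempt delivery now
# postsuper -h ALL      # variant: hold every queued message for a while
```

The ALL variant covers the "queue all emails for a while" case; postsuper -H ALL releases everything again.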
We are running a Postfix MTA (version: 3.4.14, OS: Debian 10) and since a short time emails sent to a certain provider (Web.de/GMX) are not being accepted anymore (the emails are sent by an ancient Perl program, which has always worked reliably so far):
Jan 26 10:21:51 hostname postfix/smtp[24531]: 5915310CD43: to=<[email protected]>, relay=mx-ha02.web.de[212.227.17.8]:25, delay=0.34, delays=0.01/0/0.18/0.15, dsn=5.0.0, status=bounced (host mx-ha02.web.de[212.227.17.8] said: 554-Transaction failed 554-Reject due to policy restrictions. 554 For explanation visit https://web.de/email/senderguidelines?ip=1.3.5.2&c=hi (in reply to end of DATA command))
I sent an email to the operator of Web.de and got the following answer:
E-mails will be rejected by our mail system if the information given in the e-mail header does not comply with the specifications in RFC 5321 and RFC 5322. This includes the following points:
- The following headers must be syntactically correct: Date, From, Sender, To
- The headers BCC, CC, Date, From, Sender, Subject and To must not appear more than once.
Therefore, please check the information provided by your system for correctness and contact the administrator of your system if necessary. Please also note our sender guidelines:
I checked the headers and they don't appear more than once (this is an email I sent to my private email address):
Return-Path: <[email protected]>
Delivered-To: [email protected]
Received: from mail.mypersonalmta.com
by mail.mypersonalmta.com with LMTP
id Mj4HKoIHEGATFAAA8lfkpQ
(envelope-from <[email protected]>)
for <[email protected]>; Tue, 26 Jan 2021 13:13:54 +0100
Received: from sender.com (sender.com [1.1.1.1])
by mail.mypersonalmta.com (Postfix) with ESMTPS id 311DC3EBDA
for <[email protected]>; Tue, 26 Jan 2021 13:13:54 +0100 (CET)
Received: by sender.com (Postfix, from userid 33)
id DFAF110CBE9; Tue, 26 Jan 2021 13:13:51 +0100 (CET)
X-Priority: 3 (Normal)
Reply-to: "My Name" <[email protected]>
From: "My Name" <[email protected]>
To: "My Name" <[email protected]>
Subject: =?UTF-8?B?U2VydmVya29uZmlndXJhdGlvbg==?=
Mime-Version: 1.0
Content-type: multipart/mixed;
boundary="==Serviceplaner==multipart/mixed==0=="
Message-Id: <[email protected]>
Date: Tue, 26 Jan 2021 13:13:51 +0100 (CET)
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com;
s=dk100; t=1611663234;
h=from:from:reply-to:reply-to:subject:subject:date:date:
message-id:message-id:to:to:cc:mime-version:mime-version:
content-type:content-type; bh=b1FPU+OUKvDQDm+TJNNJ4gnjC2tvP3esicGNZxlMRGU=;
b=NGDHmG4XaGZYCkDB4iT7MlzYTREHbpL5QrJZm1guR2CsL18B6efRf7SU+roM+p9vaY/8VI
5g77bu9XiQ1Uz9g2wqHfKQ45Kh7pPjlxxT9gugKBi+Wb0eo0oQQ/C+dLe/LdRRZqnY+4Gc
lnmpO6FXv9i7sfNXkcHUq62UPQIBT40=
ARC-Seal: i=1; s=dk100; d=mydomain.com; t=1611663234; a=rsa-sha256;
cv=none;
b=xMVJHET/VP+NQdzb2osJo1BVLMgCX60/0SL9ZSywsJiDEkUReK8wedi2Ahw+kSBypj+XWO
TKH7/OZjWxbzUlKeMqFo4kLpHj2ygIu2ThXpYXYbW/D+tNG7CK7f3byz+j8myaddressGj+g9hQ05
I0LnjAInYPniK8qGsFJG4sXvrUb/7CY=
ARC-Authentication-Results: i=1;
mail.mypersonalmta.com;
dkim=none;
spf=pass (mail.mypersonalmta.com: domain of [email protected] designates 1.1.1.1 as permitted sender) [email protected]
X-Spamd-Bar: ++++
X-Spam-Level: ****
Authentication-Results: mail.mypersonalmta.com;
dkim=none;
dmarc=none;
spf=pass (mail.mypersonalmta.com: domain of [email protected] designates 1.1.1.1 as permitted sender) [email protected]
I don't know how to check whether the email headers are syntactically correct and RFC 5321/RFC 5322 compliant. I have an idea, but I'm not quite sure: maybe it's the UTF-8 in the Subject header (UTF-8 encoded into ASCII as a MIME encoded-word; maybe that's a problem, I don't know)? How would you proceed in this case? Any ideas? :)
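The "must not appear more than once" rule at least can be checked mechanically. Below is a sketch that counts each of the headers web.de names, run against a small sample header block written to /tmp; in practice you would point the grep at a saved copy of the real message instead:

```shell
# Write a sample header block, then count each header that must be unique
cat > /tmp/headers.txt <<'EOF'
From: "My Name" <user@example.com>
To: "My Name" <user@example.com>
Date: Tue, 26 Jan 2021 13:13:51 +0100 (CET)
Subject: =?UTF-8?B?U2VydmVya29uZmlndXJhdGlvbg==?=
EOF
for h in From Sender To Cc Bcc Date Subject; do
  printf '%s: %s\n' "$h" "$(grep -ci "^${h}:" /tmp/headers.txt)"
done
```

Any count above 1 would violate the rule quoted in the provider's answer.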
We have a KVM hypervisor (L0, AMD, kernel: Linux level0kvmhypervisor 4.19.0-12-amd64 #1 SMP Debian 4.19.152-1 (2020-10-18) x86_64 GNU/Linux), which runs a virtual machine that I would like to use as a nested VirtualBox hypervisor (L1). So what I'm trying to do is run VirtualBox inside of KVM. Some details about the KVM hypervisor (L0):
# cat /proc/cpuinfo
processor : 0 ... 23
vendor_id : AuthenticAMD
cpu family : 23
model : 113
model name : AMD Ryzen 9 3900 12-Core Processor
stepping : 0
microcode : 0x8701021
cpu MHz : 2193.155
cache size : 512 KB
physical id : 0
siblings : 24
core id : 0
cpu cores : 12
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 16
wp : yes
flags : ... svm ...
bugs : sysret_ss_attrs spectre_v1 spectre_v2 spec_store_bypass
bogomips : 6188.58
TLB size : 3072 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 43 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
# dpkg --list | egrep -- "(kvm|libvirt)" | sed 's/amd64.*//g'
ii libsys-virt-perl 5.0.0-1
ii libvirt-clients 5.0.0-4+deb10u1
ii libvirt-daemon 5.0.0-4+deb10u1
ii libvirt-daemon-system 5.0.0-4+deb10u1
ii libvirt-glib-1.0-0:
ii libvirt0:
ii python3-libvirt 5.0.0-1
ii qemu-kvm 1:3.1+dfsg-8+deb10u8
The nested option is enabled as well:
# cat /sys/module/kvm_amd/parameters/nested
1
Inside the KVM virtual machine where VirtualBox is installed (L1), I'm trying to launch a VM created by Vagrant and get the following error message:
...
==> default: Booting VM...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["startvm", "357a07b4-7d81-4336-9ea6-0dbf0ab49d18", "--type", "headless"]
Stderr: VBoxManage: error: AMD-V is not available (VERR_SVM_NO_SVM)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole
These are the KVM XML CPU settings for the VirtualBox L1 hypervisor:
<vcpu placement='static'>1</vcpu>
<cpu mode='host-model' check='partial'>
<model fallback='allow'/>
</cpu>
I believe something must be changed here, but I'm not quite sure what. I can't find any resources on that topic, that's why I'm asking here. How can I get the VirtualBox hypervisor (L1) running inside a KVM virtual machine in order to start a VM?
Any help would be highly appreciated.
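Whatever the CPU XML ends up as, the decisive check is whether the svm flag is actually visible inside the L1 guest; if the count below is 0 there, VirtualBox cannot use AMD-V regardless of the VERR_SVM_NO_SVM wording:

```shell
# Count CPUs exposing the AMD-V (svm) flag inside the guest.
# '|| true' keeps the exit status clean when the count is 0.
grep -cw svm /proc/cpuinfo || true
```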
I'm setting up Postfix right now and it should run as a send-only solution: no emails will be received. But still, TLS should be supported for outgoing emails, so I enabled it using smtp_tls_security_level = may. Postfix has the smtpd_tls_cert_file and smtpd_tls_key_file parameters and, as far as I know, they concern incoming emails only. So I was just wondering: is it necessary to set up my own separate SSL/TLS certificate for outgoing TLS connections?
As far as I understand, Postfix will connect to the receiving server and will be provided a public key. Then my machine's OpenSSL will encrypt the email traffic using the receiver's public key, so no SSL/TLS certificate is needed on my side, am I right?
Just want to make sure, because I don't want my emails to be treated as SPAM just because I don't have a valid SSL/TLS certificate.
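For reference, the client side of opportunistic TLS needs no certificate at all; the smtpd_tls_* parameters only matter for the (unused) receiving side. A minimal sketch of the relevant main.cf fragment, using standard Postfix parameters:

```
# /etc/postfix/main.cf -- client (sending) side only
smtp_tls_security_level = may
smtp_tls_loglevel = 1
```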
I have the following problem: we have a dedicated (bare metal) hardware server (Debian 10) to which we have no direct physical access. Now I want to transfer all data and applications that are on this server to a VM and run it on a KVM host.
Why don't I install the application directly in the VM? The installation of this application (Perl stuff with an Apache web server, running on the same server for about 10 years) is so complex that you would more likely break something than get it right. So nobody dares to do it. But now we have to move, and for this reason we need some kind of smart workaround.
I thought about switching off all Perl and Apache services and transferring the hard disk via dd over the network, but the problem is that the target KVM host has less space than the size of the bare metal server's sda (in the end the data uses less space than is available; sda is just oversized).
The second option would be to install the same packages on the KVM guest with exactly the same version numbers (according to dpkg --list), disable all services on the bare metal server (to keep the data consistent), and put /etc, /var, /usr and everything else that is important from the bare metal server into a tarball and simply unpack it on the KVM guest. Of course I could also do this via rsync, but the principle is more or less the same.
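The tarball idea in miniature, demonstrated on a scratch directory (a sketch; the real run would archive /etc, /var and /usr with all services stopped, and -p preserves permissions):

```shell
# Simulate the source tree in a scratch directory
work=$(mktemp -d)
mkdir -p "$work/etc"
echo "config-data" > "$work/etc/app.conf"
# -C keeps paths relative so they unpack cleanly on the target; -p preserves permissions
tar -C "$work" -czpf "$work/migrate.tgz" etc
tar -tzf "$work/migrate.tgz"   # lists etc/ and etc/app.conf
```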
What do you think about the last idea?
Do you have any other ideas?
How would you proceed with such a task?
I just installed a new Debian 10 system and realized that both nftables and iptables are active; somehow my iptables rules get mixed up and don't work properly.
How can I completely disable nftables and use iptables only instead?
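On Debian 10 the iptables command is by default a shim on top of nftables; the usual way to get classic iptables behaviour is to switch the alternatives to the legacy backend. A sketch (run as root; verify the paths exist on your system):

```shell
# Point iptables/ip6tables at the legacy (xtables) backend instead of the nft shim
legacy=/usr/sbin/iptables-legacy
if [ -x "$legacy" ] && [ "$(id -u)" = "0" ]; then
  update-alternatives --set iptables  /usr/sbin/iptables-legacy
  update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
fi
```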
The problem occurs on my mail server, which apart from that works perfectly. Clients can connect via IMAP and Postfix receives and sends out emails without any hassle.
Installed software:
$> dpkg --list
ii postfix 3.1.12-0+deb9u1 amd64
ii dovecot-core 1:2.2.27-3+deb9u5 amd64
ii dovecot-imapd 1:2.2.27-3+deb9u5 amd64
I get the following error message every minute (and as I'm monitoring log files this is driving me crazy: my monitoring system raises an alert every time because of the syscall failed):
Nov 27 18:30:17 localhost dovecot: imap-login:
Disconnected (no auth attempts in 0 secs):
user=<>, rip=127.0.0.1, lip=127.0.0.1,
TLS handshaking: SSL_accept() syscall failed: Success,
session=<lMywWVeYJpZ/AAAB>
As we can see, the user field is empty and the request comes from localhost. Now I'm trying to debug my system and my questions are:
- Has someone experienced the same problem? Is it really some application residing on the same machine that is trying to connect, or is something else causing this log message?
- How can I find out which application is trying to connect?
Any help is much appreciated!
EDIT:
The log messages exactly before the aforementioned error message:
Nov 27 18:30:17 localhost postfix/postscreen[29370]: CONNECT from [127.0.0.1]:50844 to [127.0.0.1]:25
Nov 27 18:30:17 localhost postfix/postscreen[29370]: WHITELISTED [127.0.0.1]:50844
Nov 27 18:30:17 localhost postfix/smtpd[13455]: connect from localhost[127.0.0.1]
Nov 27 18:30:17 localhost postfix/smtpd[13455]: lost connection after CONNECT from localhost[127.0.0.1]
Nov 27 18:30:17 localhost postfix/smtpd[13455]: disconnect from localhost[127.0.0.1] commands=0/0
Nov 27 18:30:17 localhost postfix/submission/smtpd[15222]: connect from localhost[127.0.0.1]
Nov 27 18:30:17 localhost postfix/submission/smtpd[15222]: lost connection after CONNECT from localhost[127.0.0.1]
Nov 27 18:30:17 localhost postfix/submission/smtpd[15222]: disconnect from localhost[127.0.0.1] commands=0/0
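One way to catch the local client in the act is to watch connections on the IMAP ports and record the owning process; a sketch using ss (from iproute2, assumed installed; -p needs root to see other users' sockets):

```shell
imap_ports="143 993"   # IMAP and IMAPS
if command -v ss >/dev/null 2>&1; then
  # -t TCP only, -n numeric addresses, -p show the owning process
  ss -tnp '( dport = :143 or dport = :993 or sport = :143 or sport = :993 )'
fi
```

Running this in a loop (e.g. under watch) around the minute mark should reveal which PID opens the connection.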
I have the following DNS entry for one of my clients' email servers:
_dmarc IN TXT "v=DMARC1; p=none; rua=mailto:[email protected]"
This is the only email server I'm administering that has a DMARC DNS entry; in other cases SPF and DKIM were always sufficient for the email server to work fine.
The annoying thing is that I receive multiple DMARC reports from Gmail, Yahoo, etc. every day and I don't need them. How can I stop receiving those DMARC reports?
Should I just remove the rua part of the DNS entry?
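For reference: aggregate reports are only sent to addresses listed in the rua tag (ruf for failure reports), and rua is optional, so a record without it is still valid DMARC. A sketch of the reduced record:

```
_dmarc IN TXT "v=DMARC1; p=none"
```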
I have the following certificate:
# certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Found the following certs:
Certificate Name: domain.example
Domains: domain.example imap.domain.example mail.domain.example pop.domain.example smtp.domain.example www.domain.example
Expiry Date: 2019-09-09 03:34:20+00:00 (VALID: 62 days)
Certificate Path: /etc/letsencrypt/live/domain.example/fullchain.pem
Private Key Path: /etc/letsencrypt/live/domain.example/privkey.pem
Now what I want to do is remove domain.example and www.domain.example from the certificate, because the web server has moved to another instance. Since the DNS entries have been changed, the renewal process will fail if domain.example and www.domain.example are still part of the certificate, because they now point to another IP.
How can I remove certain host names from a Let's Encrypt certificate without deleting the certificate and creating a new one?
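certbot can re-issue an existing certificate with a reduced domain list by keeping the same --cert-name; this is a sketch using the names from the output above (certonly, --cert-name and -d are standard certbot options — the command is commented out so it can be reviewed before running):

```shell
# Re-issue under the same certificate name, listing only the names to keep
cert_name=domain.example
# certbot certonly --cert-name "$cert_name" \
#   -d imap.domain.example -d mail.domain.example \
#   -d pop.domain.example  -d smtp.domain.example
```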
I've created an IAM user (CLI only) with AmazonRDSReadOnlyAccess permissions. Now every time I try to list my db instances I get an empty JSON object, even though I have one active RDS instance:
aws rds describe-db-instances
{
"DBInstances": []
}
I also tried to specify my instance id (the instance identifier does exist for sure):
aws rds describe-db-instances --db-instance-identifier mydb
An error occurred (DBInstanceNotFound) when calling the \
DescribeDBInstances operation: DBInstance vipbilet-db not found.
What am I doing wrong here?
As the question implies, I want to change/set some HTTP headers for a single object in an S3 bucket via the CLI. This is what I have tried so far:
aws s3 sync --delete --acl public-read --cache-control \
max-age=31536000 --expires "Mon, 01 Oct 2035 20:30:00 GMT" \
js/jquery-blockui/jquery.blockUI.min.js \
s3://mybucket/js/jquery-blockui/jquery.blockUI.min.js --dryrun
The problem is that aws s3 sync tells me that it could not find the file /home/user/somedir/js/jquery-blockui/jquery.blockUI.min.js/ (note the trailing slash; the file DOES actually exist) on my file system:
warning: Skipping file
/home/user/somedir/js/jquery-blockui/jquery.blockUI.min.js/.
File does not exist.
So aws s3 sync only expects a directory to be passed as the source? Does anyone have an idea how to do that for a single file? I want to write a script and pass some files which need to be altered; that's why I am asking. Thanks.
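For a single object, aws s3 cp accepts the same metadata flags used above with sync (hedged: flag support should be confirmed against the installed CLI version; --dryrun only prints the operation):

```shell
src=js/jquery-blockui/jquery.blockUI.min.js
if command -v aws >/dev/null 2>&1; then
  aws s3 cp --acl public-read \
    --cache-control max-age=31536000 \
    --expires "Mon, 01 Oct 2035 20:30:00 GMT" \
    "$src" "s3://mybucket/$src" --dryrun || true   # tolerate missing credentials/file here
fi
```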
I have the following infrastructure:
80 -> Varnish -> Backend (NGINX, port 8080)
443 -> NGINX (SSL-Termination with HTTP/2 enabled) -> Varnish -> Backend (NGINX, port 8080)
I know that it is possible to enable the HTTP/2 protocol for frontend connections using the -p feature=+http2 parameter for Varnish (port 80), but what about the backend connections? varnishlog -b shows me that all of the backend communication is performed using HTTP/1.0 and HTTP/1.1.
I would be very pleased if someone could tell me what common practice is regarding Varnish and NGINX:
- Is it possible to enable HTTP/2 for the backend connections?
- Does it make any sense to do so regarding performance?
- Does it make sense, performance-wise, to keep the -p feature=+http2 parameter enabled for the 443 -> NGINX (SSL termination with HTTP/2 enabled) -> Varnish communication?
Regarding the backend communication (which is not encrypted): I know that HTTP/2 is bound to TLS encryption, but maybe there is some tweak I haven't heard about; that's why I think it's better to ask in order to be 100% sure. Thanks for your understanding.
I've configured systemd-timesyncd to get its time from an NTP server:
/etc/systemd/timesyncd.conf > NTP=ca.pool.ntp.org
systemctl restart systemd-timesyncd.service
timedatectl set-ntp true
The status is the following:
$ timedatectl status
...
Network time on: yes
NTP synchronized: no
As the output implies, the time is not synced yet. Can someone please help me out with the following questions?
- How long will it take for timesyncd to sync with the NTP server? At what intervals does it do that, and where can I check and alter them?
- In urgent cases: can I only set the time manually, or can I force timesyncd to sync with the NTP server immediately?
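The polling interval is configurable in the same file; these are documented systemd-timesyncd options, and the values below are its defaults (32 s minimum, doubling up to 2048 s maximum). A sketch:

```ini
# /etc/systemd/timesyncd.conf
[Time]
NTP=ca.pool.ntp.org
PollIntervalMinSec=32
PollIntervalMaxSec=2048
```

Restarting the systemd-timesyncd service triggers an immediate synchronization attempt.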
If Varnish is set as the default cache in front of my NGINX backend, how can I check the client's original IP in the NGINX backend and make a decision based on that?
I want to allow a certain directory only for certain IPs. Varnish being in front of NGINX means that every request comes from 127.0.0.1. I'm thinking about setting some custom HTTP header, but how could I check that in conjunction with a location ~ /folder/ {} section?
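One standard approach is nginx's realip module rather than a custom header: trust the local Varnish to supply the client address via X-Forwarded-For, after which allow/deny works normally. A sketch (set_real_ip_from and real_ip_header are ngx_http_realip_module directives; the permitted network is a placeholder):

```nginx
# Trust only the local Varnish instance to supply the client IP
set_real_ip_from 127.0.0.1;
real_ip_header   X-Forwarded-For;

location ~ /folder/ {
    allow 198.51.100.0/24;   # placeholder: the permitted network
    deny  all;
}
```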
I'm using Varnish and I'm not quite sure if I should also remove the Server: nginx HTTP header. Why does anyone need to know that I'm using NGINX? Is it OK to remove this HTTP header from the response, or is it needed somewhere? From a security perspective it's probably better to do so?
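For reference, stripping the header in Varnish is a one-liner in standard VCL; clients do not need the Server header to function:

```vcl
sub vcl_deliver {
    unset resp.http.Server;
}
```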
Every time I try to make a mysqldump I get the following error:
$> mysqldump --single-transaction --host host -u user -p db > db.sql
mysqldump: Couldn't execute 'SELECT COLUMN_NAME, JSON_EXTRACT(HISTOGRAM,
'$."number-of-buckets-specified"') FROM
information_schema.COLUMN_STATISTICS WHERE SCHEMA_NAME = 'db' AND
TABLE_NAME = 'Absence';':
Unknown table 'COLUMN_STATISTICS' in information_schema (1109)
The result is a dump which is not complete. The strange thing is that the same command, executed from another host, works without throwing any errors. Has anyone experienced the same problem?
I'm using mysql-client 8.0 to access a MySQL 5.7 server; maybe that is the reason?
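This symptom is typical for exactly that combination: mysqldump 8.0 queries information_schema.COLUMN_STATISTICS, which pre-8.0 servers don't have. The documented workaround is the --column-statistics=0 flag; a sketch with the command commented out (host/user/db as in the question):

```shell
# The flag that skips the COLUMN_STATISTICS query on the 8.0 client:
flag="--column-statistics=0"
# mysqldump $flag --single-transaction --host host -u user -p db > db.sql
```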
I'm writing a GUI where users can edit their NGINX vhost information. Before updating a vhost I would like to perform a syntax check. My idea was to copy the new contents to /tmp first and then run an NGINX syntax check on only that one file (instead of nginx -t, which checks all vhosts, and only in the appropriate nginx directories).
Is it somehow possible to syntax-check one certain file before copying it from /tmp/new_vhost_content to /etc/nginx/sites-enabled/vhost?
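nginx -t always checks a whole configuration, but it accepts an alternative root config via -c; wrapping the candidate vhost in a minimal config lets you test just that file. A sketch (the wrapper and the sample vhost content are placeholders):

```shell
# Sample candidate vhost (placeholder content; in the GUI this file
# would hold the user's submitted vhost)
cat > /tmp/new_vhost_content <<'EOF'
server {
    listen 8081;
    server_name example.test;
}
EOF
# Minimal wrapper config that includes only the candidate vhost
cat > /tmp/check.conf <<'EOF'
events {}
http {
    include /tmp/new_vhost_content;
}
EOF
if command -v nginx >/dev/null 2>&1; then
  nginx -t -c /tmp/check.conf || echo "vhost failed the syntax check"
fi
```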