BTRFS with compression enabled uses a heuristic to avoid compressing files that do not compress well. Does recompressing existing files with "btrfs filesystem defrag -c" also apply this heuristic, or does it compress all files even when they do not compress well?
I have automysqlbackup installed on Debian. I added USERNAME and PASSWORD to /etc/default/automysqlbackup, but when automysqlbackup runs, I get the output:
/etc/cron.daily/automysqlbackup:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
as if I had not configured any credentials.
The automysqlbackup config is the default configuration with two lines added at the end:
USERNAME=root
PASSWORD="the root password"
The default config tries to get the credentials from /etc/mysql/debian.cnf (using grep in the /etc/default/automysqlbackup bash snippet), which contains a warning that the file is deprecated and no longer contains an admin password on new installations.
I ran automysqlbackup under strace, and it at least reads /etc/default/automysqlbackup even when not started by cron. It does not use the password, though.
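To narrow down whether the variables are picked up at all, one hedged debugging step (assuming the cron wrapper is a plain shell script) is to trace it and to test the credentials directly:
# trace the cron wrapper to see which config values are actually used
sh -x /etc/cron.daily/automysqlbackup 2>&1 | grep -i -e USERNAME -e PASSWORD
# verify the credentials themselves work outside of automysqlbackup
mysql --user=root --password='the root password' -e 'SELECT 1;'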
I'd like to use an RBL in rspamd without using all of the preconfigured RBLs, but it seems that the configuration in /etc/rspamd/local.d/rbl.conf can only add new lists, not remove the default ones.
The documentation also describes a way to disable rules (I am not even sure whether this disables the check itself or just the points assigned on a match), but it looks like you have to do this for each default rule, and future updates may add new RBLs that are active by default.
How can I disable all default RBLs with a configuration file in /etc/rspamd/local.d, without changing the configuration installed by the rspamd package, which will be overwritten by future updates?
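For what it's worth, a hedged sketch: rspamd also reads /etc/rspamd/override.d, and unlike local.d those files replace the packaged section instead of being merged into it, so an empty rbls block there should drop all packaged defaults (assumption: the rbl module keeps its lists under an rbls section):
# /etc/rspamd/override.d/rbl.conf -- replaces the packaged rbls section entirely
rbls {
}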
When I use nginx as a reverse proxy in front of some other web application, it seems not to forward PUT requests, but shows an HTTP 405 generated by nginx (and not by the upstream server).
I tried the proxy_next_upstream directive with http_405, but it did not work. I wonder why nginx itself checks the HTTP method at all for a location block that has proxy_pass configured.
I have a server where I can log in using SSH but no longer get a shell. What can I do to log in to a minimal shell and debug the issue? Here is the log of ssh -vvvv:
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering ED25519 public key: user@myhost
debug1: Authentications that can continue: publickey,password
debug1: Offering RSA public key: /home/user/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 277
debug1: Authentication succeeded (publickey).
Authenticated to remotehost ([x.x.x.x]:22).
debug1: channel 0: new [client-session]
debug1: Requesting [email protected]
debug1: Entering interactive session.
debug1: pledge: network
Normally a session would continue with
debug3: receive packet: type 80
debug1: client_input_global_request: rtype [email protected] want_reply 0
debug3: receive packet: type 91
debug2: channel_input_open_confirmation: channel 0: callback start
But here ssh just hangs.
How can I get a login shell? I tried ssh -t user@host /bin/sh, but it did not work.
I am not sure whether something on the SSH server side is wrong (maybe it is waiting for rDNS resolution while there are network errors?) or whether some login shell script is blocking the shell.
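A few hedged things to try (assuming the server runs OpenSSH; whether they help depends on where exactly it hangs):
# run a command without a PTY; note the remote command is still executed
# via the login shell with -c, so a broken shell binary will still hang
ssh -T user@host 'echo alive'
# the sftp subsystem bypasses interactive shell startup files, though it
# still needs a working shell unless sshd is configured with internal-sftp
sftp user@host
# a trivial command with full client-side debugging
ssh -vvv user@host true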
I have some nginx config snippets which add locations, e.g. to send requests for a certain path to a FastCGI server.
For a clean deployment with ansible, I would like to use the /etc/nginx/conf.d folder and add them there.
The problem is that location /something belongs inside a server block, and the default server is already defined in /etc/nginx/sites-enabled/default. And when I want to deploy more different locations, they should not each need their own vhost.
Possibly even a site in sites-enabled would be useful, but it should still be composable: different snippets can be included in the same config without knowing whether other snippets are installed.
Is there a clean way to include location blocks from config snippets without modifying the default config?
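One hedged approach (the directory name and paths are made up for illustration): keep one server block that globs an include directory, and let each snippet ship only location blocks:
# /etc/nginx/sites-enabled/composed (hypothetical)
server {
    listen 80 default_server;
    server_name _;
    # every deployed snippet contributes only location blocks
    include /etc/nginx/locations.d/*.conf;
}
# /etc/nginx/locations.d/app1.conf (hypothetical snippet)
location /app1/ {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}
nginx expands the wildcard include at configuration load time, so snippets can be added or removed independently as long as their locations do not clash.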
With compatibility_level=2 in recent Postfix versions, the default for the Postfix daemons changed from chroot to non-chroot. While the documentation describes that it changed and what you can do to keep using chroot or to stop using it, no reasons are given.
Why did they change the default value? Is there any advantage in running it without chroot?
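For context, the setting in question is the fifth column of master.cf; a sketch of the relevant line (the other fields are typical values, not from the original question):
# service type  private unpriv  chroot  wakeup  maxproc command
smtp      inet  n       -       n       -       -       smtpd
A "-" in the chroot column takes the built-in default, which is what compatibility_level >= 2 changed from y to n.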
I want to restart a process monitored by monit when the checksum of a file changes. Currently I use:
check process prosody with pidfile /var/run/prosody/prosody.pid
    depends on certificate_file
    start program = "/etc/init.d/prosody start"
    stop program = "/etc/init.d/prosody stop"
    restart program = "/etc/init.d/prosody restart"
check file certificate_file with path /etc/prosody/certs/fullchain.pem
    if changed checksum then exec "/usr/bin/monit restart prosody"
But I would like to have a command like if changed checksum then restart prosody instead of calling the monit binary via exec.
The restart action seems to be limited to restarting the currently monitored service, so a restart action in a check file block doesn't do anything.
When using an encrypted drive in a virtual machine, the VM image starts out very small if you do not initialize the drive with random data. When you fill the drive it grows, but when you delete files it does not shrink.
With an unencrypted drive you can fill the free space with zeros (e.g. by creating a large file full of zeros) so the VM software can compress the image. But with an encrypted drive the zeros get encrypted, and the VM image cannot shrink.
Is there some kind of TRIM command, like SSDs have, which zeros out the unused space of a filesystem on the underlying encrypted block device?
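In case the volume is LUKS/dm-crypt, a hedged sketch of the discard route (device and mountpoint names are made up; note that allowing discards through dm-crypt leaks which blocks are unused):
# /etc/crypttab: the discard option lets dm-crypt pass TRIM down to the image
cryptdata  /dev/vda2  none  luks,discard
# after reopening the volume, trim the free space of the mounted filesystem
fstrim -v /mountpoint
Whether the discarded blocks actually shrink the image then depends on the VM storage format honoring the discard.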
As far as I understand, it should be sufficient to upgrade openssl (done a long time ago; now I installed all available updates again, and no openssl update was among them) and restart nginx.
I even tried to stop nginx completely (verified with ps) and start it again.
But ssllabs still tells me that I am vulnerable. What else do I need to do, or what can cause it to still be vulnerable?
versions:
ii nginx 1.9.10-1 all small, powerful, scalable web/proxy server
ii nginx-common 1.9.10-1 all small, powerful, scalable web/proxy server - common files
ii nginx-full 1.9.10-1 amd64 nginx web/proxy server (standard version)
ii openssl 1.0.1t-1+deb8u2 amd64 Secure Sockets Layer toolkit - cryptographic utility
ii libssl-dev:amd64 1.0.1t-1+deb8u2 amd64 Secure Sockets Layer toolkit - development files
ii libssl-doc 1.0.1t-1+deb8u2 all Secure Sockets Layer toolkit - development documentation
ii libssl1.0.0:amd64 1.0.1t-1+deb8u2 amd64 Secure Sockets Layer toolkit - shared libraries
ii libssl1.0.2:amd64 1.0.2f-2 amd64 Secure Sockets Layer toolkit - shared libraries
lsof output related to nginx:
lsof 2>/dev/null | grep -i libssl | grep nginx
nginx 17928 root mem REG 251,0 430560 2884885 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.2
nginx 17929 www-data mem REG 251,0 430560 2884885 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.2
nginx 17930 www-data mem REG 251,0 430560 2884885 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.2
nginx 17932 www-data mem REG 251,0 430560 2884885 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.2
nginx 17933 www-data mem REG 251,0 430560 2884885 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.2
I have a server with a gigabit uplink. Testing with iperf3 with 100 parallel connections, I get at least 600 Mbit/s, depending on the other server (I tried some public test servers).
But when I run iperf3 with one connection I get 10-15 Mbit/s, with two connections 20-30 Mbit/s, and so on.
I do not have very complicated iptables rules and no other idea why it is so slow. What can be the limiting factor for single TCP connections that makes them ten times slower than the available bandwidth?
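A hedged starting point for narrowing this down: per-flow throughput is often capped by the TCP window (buffer limits times round-trip time), so comparing the kernel limits and forcing a larger window in iperf3 can tell the cases apart (the server name is a placeholder):
# current buffer limits and congestion control algorithm
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_congestion_control
# retest a single flow with an explicitly larger window
iperf3 -c test.server.example -w 4M
If the single flow speeds up with -w, the limit is the window/buffer sizing rather than packet loss or filtering.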
I currently have three ansible tasks:
- create vhosts
- test config
- reload nginx
I now registered the last two as handlers, but Ansible's forward notifications feel wrong for what I am doing:
- create vhosts, notify test config (okay)
- test config, notify reload (why should a config test imply a reload?)
- reload nginx
I would like a structure like:
- create vhosts, notify nginx reload
- nginx reload: require config test
- config test: success
- nginx reload
This is just because the semantics seem more correct: it should neither be a plain sequence, nor should a config test notify a reload, because that just reimplements a sequence without the logic behind it (a reload requires a successful test first).
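One hedged way to encode "reload requires test" with stock Ansible (handler, file, and service names are made up): give both handlers the same listen topic; handlers run in definition order, and a failing test aborts the play before the reload runs:
handlers:
  - name: test nginx config
    command: nginx -t
    listen: nginx reload
  - name: reload nginx
    service:
      name: nginx
      state: reloaded
    listen: nginx reload
tasks:
  - name: create vhosts
    template:
      src: vhost.conf.j2
      dest: /etc/nginx/sites-enabled/myvhost
    notify: nginx reload
The template module's validate parameter is another option for the test half, but it only checks the single rendered file, not the combined configuration.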
Debian jessie prompts on every apt-get upgrade that a newer kernel was installed and a reboot is needed. How can this warning be disabled? I want to reboot when it fits, and I know that there is a newer kernel available.
A similar dialog is shown for "you need to restart these services because libraries were updated", and it is re-shown again and again, even when I have already decided that I want to restart these three and that the two others should not be restarted.
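Assuming the prompts come from the needrestart package (my guess for this kind of dialog), a sketch of turning them down in /etc/needrestart/needrestart.conf:
# /etc/needrestart/needrestart.conf (Perl syntax)
# do not hint about pending kernel upgrades
$nrconf{kernelhints} = -1;
# list affected services instead of prompting interactively
$nrconf{restart} = 'l';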
I am trying to configure vhosts with and without additional per-vhost config in nginx. I am thinking of something like this:
server {
listen 81;
server_name ~^(www\.)?(?<sname>.+?)$;
root /var/www/$sname;
include /etc/nginx/sites/$sname;
access_log /var/log/nginx/$sname/access.log;
error_log /var/log/nginx/$sname/error.log;
}
Then I could just touch /etc/nginx/sites/mysite.example.com to add a new site with static HTML, while I can edit the file for the vhost to add, for example, a reverse proxy directive or some rewrite rules.
The problem is that nginx seems to evaluate include only when it starts, so it cannot use a per-request variable. It would be cleaner to have something like
for $config in /etc/nginx/sites:
{
server_name $config
root /var/www/$config
include $config
[...]
}
which would run at startup and not on the first request to the vhost.
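Since nginx cannot loop in its config, a hedged workaround in the same spirit is to expand the loop at deploy time with a small script (paths as in the example above; it assumes the per-site log directories exist):
#!/bin/sh
# generate one server block per file in /etc/nginx/sites at deploy time
for site in /etc/nginx/sites/*; do
    name=$(basename "$site")
    cat > "/etc/nginx/conf.d/$name.conf" <<EOF
server {
    listen 81;
    server_name $name www.$name;
    root /var/www/$name;
    include $site;
    access_log /var/log/nginx/$name/access.log;
    error_log /var/log/nginx/$name/error.log;
}
EOF
done
nginx -t && nginx -s reload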
PAM allows the use of sufficient and required for some logic, like
auth sufficient pam_a.so
auth required pam_b.so
auth required pam_c.so
which would mean "either a succeeds, or both b and c must succeed".
Is it possible to express more complex operations, like "(a or b) and (c or d)" or "(a and b) or (c and d)"? Possibly with even more levels of nesting.
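For reference, PAM's bracketed control syntax can encode at least one level of this; a hedged sketch of "(a or b) and c", reusing the module names from the example above, where success=1 jumps over the next line:
# if pam_a succeeds, skip pam_b (the "or"); failures of pam_a are ignored
auth [success=1 default=ignore] pam_a.so
auth [success=ok default=bad]   pam_b.so
# the "and": c is always required
auth required                   pam_c.so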
What is the syntax to replace a Postfix hash: database, for example
domain.tld PREPEND my-header: foobar
with a static inline map?
The docs define it as: "inline:{ key=value, { key = text with whitespace or comma }}".
Several ways of writing it did not seem to work, and it is even unclear how many fields the hash: db really has. Is the domain the only key and the rest one string value? Should there be some list syntax for the three fields associated with the domain? And what about a key with multiple values (multiple lines starting with the same domain in the hash: db)?
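Based on the quoted definition, a hedged reading: the lookup key is the domain and everything after the first whitespace is a single value string, so the braced form protects the value's whitespace (where the map is hooked in depends on the original hash: usage; check_sender_access is only an example):
main.cf:
smtpd_sender_restrictions = check_sender_access
    inline:{ { domain.tld = PREPEND my-header: foobar } }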
I use a lighttpd reverse proxy to serve Django with gunicorn. This config worked:
proxy.server = ("" => ( "" => (
"host" => "127.0.0.1",
"port" => 8000,
)))
Then I moved gunicorn into a container and use:
proxy.server = ("" => ( "" => (
"host" => "192.168.1.2",
"port" => 8000,
)))
Now every request has the IP 192.168.1.1 as seen by gunicorn. I would understand if the reverse proxy obscured the real IP, but why did it work with localhost then?
For both setups I get:
X-Forwarded-For: client-ip
X-Host: the.domain
X-Forwarded-Proto: http
where the client-ip is a public IP address.
The requests come from:
host:
nc: connect to 127.0.0.1 8000 from localhost (127.0.0.1) 44953 [44953]
container:
nc: connect to 192.168.1.2 8000 from host (192.168.1.1) 60027 [60027]
The container itself has IP 192.168.1.2, the host bridge has 192.168.1.1, and the routes inside the container are:
default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.2
The host has:
192.168.1.0/24 dev bridge proto kernel scope link src 192.168.1.1
EDIT: The X-Forwarded-For header was the same for both requests (tested with nc -vlp 8000).
Is there a way to use a wildcard for the domain name, like
webmaster@*
Wildcards for all addresses in a domain work with "@domain", but "hostmaster@" does not work.
Current setup with a *@domain wildcard:
main.cf:
virtual_alias_maps = hash:/etc/postfix/virtual
virtual (wildcard for domain, one mail address for domain2):
domain anything
domain2 anything
@domain user@localhost
mail@domain2 user2@localhost
And now I want to have some standard addresses for every domain, like
webmaster@ user3@localhost
But this syntax does not work that way. One option would be to add the address to each (non-wildcard) domain by hand; another may be to use a pcre map for the virtual table. But using a pcre table for aliases seems too unclean, and adding them by hand is what I want to avoid.
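For completeness, a sketch of the pcre variant mentioned above (the file name is made up; note it would match webmaster@ on every virtual domain, including ones that should not have it):
/etc/postfix/virtual.pcre:
/^webmaster@/    user3@localhost
main.cf:
virtual_alias_maps = hash:/etc/postfix/virtual, pcre:/etc/postfix/virtual.pcre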
Is there a possibility to restrict a cgroup to a specific network interface? All packets from the cgroup should only be routed via a VPN connection, while other packets use the default route.
With unix users this is possible with iptables ("-m owner" plus "-j MARK --set-mark") and then routing with "ip rule".
Is it possible to match a cgroup? iptables seems to have no support for this.
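The per-user variant referred to above, spelled out as a sketch (user name, mark value, and interface are placeholders):
# mark all packets created by the VPN-only user
iptables -t mangle -A OUTPUT -m owner --uid-owner vpnuser -j MARK --set-mark 0x1
# route marked packets through a dedicated table whose default is the VPN
ip rule add fwmark 0x1 table 100
ip route add default dev tun0 table 100
The question is whether the "-m owner --uid-owner" part has a cgroup equivalent.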
What is better suited for a normal server:
- several partitions, bundled pairwise into several RAID1 devices (/dev/md0, /dev/md1, ...), so that every partition is mirrored
- one big /dev/md0, with partitions on top of that device
What are the biggest pros and cons of both approaches? Is there a big difference as to which one is the better choice for a normal server without frequent changes to the disk and partition setup?
I haven't found any sites giving actual advice on this decision. The only thing I frequently read was: DO NOT bundle the raw /dev/hda and /dev/hdb (without at least one partition) into a RAID, because this causes the kernel to detect the RAID partitions on the raw /dev/hdX devices, too.
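To make the two layouts concrete, a hedged sketch with mdadm (device names are examples):
# layout 1: partition first, then one RAID1 per partition pair
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# layout 2: one big RAID1 from a single large partition per disk,
# then partition the md device itself (md0p1, md0p2, ...)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
parted /dev/md0 mklabel gpt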