I would like to block outgoing IPv6 connections for specific users on a Linux machine; I don't want to disable IPv6 for the whole system. How can I do it? I can do it using ip6tables, rejecting OUTPUT connections with icmp6-adm-prohibited, icmp6-no-route, or icmp6-addr-unreachable, but for some reason that causes a delay of about 1 s on every connection made (IPv4 is tried only after waiting ~1 s). When multiple connections are made, this delay really stacks.
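For reference, the kind of rule I mean, as a minimal sketch (UID 1001 is a placeholder for the user to be blocked):

```shell
# Reject all outgoing IPv6 traffic for one user only (UID 1001 is hypothetical).
# The owner match is only valid for locally generated packets (OUTPUT chain),
# so the rest of the system keeps IPv6 connectivity.
ip6tables -A OUTPUT -m owner --uid-owner 1001 -j REJECT --reject-with icmp6-adm-prohibited
```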
ndd's questions
Since Sep 30 14:01:15 2021 GMT, any software using OpenSSL <= 1.0 (like curl, PHP, etc.) can't connect to hosts with Let's Encrypt certificates:
* SSL certificate problem: certificate has expired
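A minimal way to reproduce and inspect this, assuming a client built against the old OpenSSL (the Let's Encrypt test host below is just one convenient target):

```shell
# Fails with "certificate has expired" when curl is linked against openssl <= 1.0:
curl -v https://valid-isrgrootx1.letsencrypt.org/

# Inspect the chain the server actually sends:
openssl s_client -connect valid-isrgrootx1.letsencrypt.org:443 \
    -servername valid-isrgrootx1.letsencrypt.org </dev/null
```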
I set up a TUN device server on 1.2.3.4:
socat -v -v -v -v -d -d TCP-LISTEN:11443,reuseaddr,fork TUN:10.3.33.20/16,up
I set up client 1:
socat TCP:1.2.3.4:11443 TUN:10.3.33.21/16,up
I can ping 10.3.33.20 from client 1
I set up client 2:
socat TCP:1.2.3.4:11443 TUN:10.3.33.22/16,up
I can't ping 10.3.33.20 from client 2. I can ping it from client 2 only after I terminate (Ctrl+C) the connection from client 1 (the effect is immediate).
Why? Can a TUN device not handle multiple connections, or is this a socat limitation? How should I set up such simple tunneling so I don't have to run a separate listener for every client?
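This is roughly how I verify each client (the device name tun0 is an assumption; socat may allocate a different one):

```shell
# On a client, after socat has brought the tunnel up:
ip addr show tun0        # expect 10.3.33.21/16 (client 1) or 10.3.33.22/16 (client 2)
ping -c 3 10.3.33.20     # succeeds from client 1; from client 2 only once client 1 disconnects
```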
I would like a program that is bound to a network interface other than eth0 to use a different default gateway when making connections.
ip rule add oif tun0 table 11
ip route add default via 10.3.33.20 table 11
ip route flush cache
ip rule list
0: from all lookup local
32764: from all iif tun0 lookup 11
32765: from all oif tun0 lookup 11
32766: from all lookup main
32767: from all lookup default
ip route list table all
default via 10.3.33.20 dev tun0 table 11
default via 172.104.159.1 dev eth0 proto static metric 100
10.3.0.0/16 dev tun0 proto kernel scope link src 10.3.33.21
172.104.159.0/24 dev eth0 proto kernel scope link src 172.104.159.249 metric 100
...
When I bind a program to interface tun0, it does not transmit anything to gateway 10.3.33.20.
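One variant I've considered, sketched here under the assumption that the program binds tun0's address (10.3.33.21) rather than the device itself: select the routing table by source address instead of oif, since an oif rule only matches sockets explicitly bound to the device with SO_BINDTODEVICE.

```shell
# Route by source address instead of output interface (hedged alternative):
ip rule add from 10.3.33.21 table 11
ip route flush cache
```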
I've benchmarked about 10 different SSD devices with sysbench oltp_write_only.lua and found no correlation whatsoever with the devices' max sustained write IOPS (both from the specifications and from a fio --bs=4k --iodepth=64 benchmark). I've tested NVMe SSDs with sustained write IOPS of 90k, but those were much slower in the oltp_write_only.lua test than one particular 15k-write-IOPS SATA SSD, while similar to most other SATA SSDs. WHY? What makes some SSDs perform better in the sysbench oltp_write_only.lua test? Also, why does the oltp_write_only.lua test not perform significantly better when the MySQL datadir is on tmpfs? Why do RAID levels 0 (2 devices) and 10 (4 devices) not affect oltp_write_only results at all? It's madness. And no, it's not a bug in a specific MySQL/MariaDB version: I've tried many versions and the results were consistent. And yes, the devices were trimmed before each test.
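For completeness, the benchmark invocations, roughly as I run them (device path, credentials, table counts, and runtimes are placeholders, not the exact values used):

```shell
# fio raw-device baseline (as referenced above; filename is a placeholder):
fio --name=randwrite --filename=/dev/nvme0n1 --rw=randwrite --bs=4k \
    --iodepth=64 --ioengine=libaio --direct=1 --runtime=60 --time_based

# sysbench OLTP write-only test against a prepared database:
sysbench oltp_write_only --mysql-user=sbtest --mysql-password=secret \
    --tables=8 --table-size=1000000 prepare
sysbench oltp_write_only --mysql-user=sbtest --mysql-password=secret \
    --tables=8 --table-size=1000000 --threads=16 --time=300 run
```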
This code works perfectly in .htaccess or in a directory context:
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/dir/$ [NC]
RewriteRule ^(.*)$ /dir/ [END]
but when put into a VirtualHost it causes an infinite redirection loop.
I've read the documentation about the differences between REQUEST_URI in virtual-host scope and directory scope, but I'm still unable to produce code that works in virtual-host context. What should I change?
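The closest adaptation I've been able to put together for server context is the sketch below; the change is dropping the `$` anchor, so that URIs produced by later internal redirects (such as `/dir/index.html` added by DirectoryIndex) are also excluded from rewriting and cannot loop. I'm not sure this is the right approach:

```apache
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/dir/ [NC]
RewriteRule ^ /dir/ [L]
```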
I want to pipe Apache logs (in my custom format) to my script, while at the same time keeping standard-format logging to a separate file for every vhost.
When I put CustomLog "|/path/to/my/script" myformat in httpd.conf, it works perfectly, but only for vhosts that have no CustomLog /path/to/logs/vhostXX-logfile otherformat of their own.
Vhosts that already do their own logging don't log anything to |/path/to/my/script.
I could put CustomLog "|/path/to/my/script" myformat into every vhost, but then the script gets spawned in parallel for every single vhost, and that is not acceptable.
What can I do?
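To make the setup concrete, the configuration I'm describing looks roughly like this (paths, ServerName, and format strings are placeholders). Per-vhost CustomLog directives replace the main-server one for that vhost, which is exactly the behavior I'm fighting:

```apache
# httpd.conf (global) -- one pipe, one script instance for all vhosts
LogFormat "%v %h %l %u %t \"%r\" %>s %b" myformat
LogFormat "%h %l %u %t \"%r\" %>s %b" otherformat
CustomLog "|/path/to/my/script" myformat

<VirtualHost *:80>
    ServerName vhost01.example.com
    # This per-vhost CustomLog stops the global one above from applying here:
    CustomLog /path/to/logs/vhost01-logfile otherformat
</VirtualHost>
```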