How can I ensure that the deployment "foo" does not have the annotation "bar"?
I want to define this in a manifest so that Flux enforces my desired state.
Is that possible with the current Kubernetes Resource Model?
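A sketch of one possible approach (not a confirmed Flux feature): if Flux applies the manifests through a Kustomization, a strategic-merge patch can delete a key by setting it to null. The names below are taken from the question; the file layout is hypothetical.
# kustomization.yaml (hypothetical sketch)
patches:
  - target:
      kind: Deployment
      name: foo
    patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: foo
        annotations:
          bar: null    # null deletes the key in a strategic-merge patch
Note that this removes the annotation at apply time; it does not forbid something else from adding it back between reconciliations.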
I have found no way to remove an mdraid array from a server with one command.
I can stop it via mdadm --stop /dev/md0, but the superblock is still on the devices.
mdadm --zero-superblock DEVICE
needs to be called for every single device (like /dev/sdb1).
I know that I can run mdadm --detail /dev/md0 and see the devices there. I could write a fragile script to fetch the /dev/sd... strings from the output of mdadm --detail /dev/md0, but I would like to avoid this.
Is there a one-liner to remove the superblock from all devices of an mdraid? I would like to avoid parsing the output of mdadm --detail, since this feels fragile.
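One sketch that avoids parsing mdadm --detail: the kernel lists the members of an array under /sys/block/md0/slaves. Read them before stopping the array, because that directory disappears together with /dev/md0.
# collect the member device names first (e.g. "sdb1 sdc1")
members=$(ls /sys/block/md0/slaves)
mdadm --stop /dev/md0
for dev in $members; do
    mdadm --zero-superblock "/dev/$dev"
done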
Is there a way to sync a PC with a nextcloud server, but without the desktop GUI?
Syncing should start as soon as the PC has booted, even if no user has logged in yet.
I know about nextcloudcmd. I could run a cron job and call nextcloudcmd every N minutes.
But this is not nice. I would really prefer a solution where the sync happens immediately (for example via inotify).
How could this be done?
I think a shell script wrapping nextcloudcmd is just a work-around.
If nextcloud does not provide this, then I will use seafile which can do this. See: https://manual.seafile.com/deploy/start_seafile_at_system_bootup.html
I personally think this is very strange. The GUI can do this. I just want the same thing, but without a GUI. Yes, I could run the GUI in a "fake" framebuffer X environment ... but no, that's too dirty.
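A minimal sketch of the inotify idea, assuming inotify-tools is installed; the user name, paths and URL are made up, and inotifywait only sees local changes, so changes on the server side still need an occasional timed run:
# /etc/systemd/system/nextcloud-sync.service (sketch)
[Unit]
Description=Headless Nextcloud sync
After=network-online.target
Wants=network-online.target

[Service]
User=syncuser
ExecStart=/usr/local/bin/nextcloud-watch-sync
Restart=always

[Install]
WantedBy=multi-user.target

# /usr/local/bin/nextcloud-watch-sync (sketch)
#!/bin/sh
DIR=/home/syncuser/Nextcloud
URL=https://cloud.example.com
while true; do
    # -n reads the credentials from ~/.netrc
    nextcloudcmd --non-interactive -n "$DIR" "$URL"
    # block until something changes locally, then loop and sync again
    inotifywait -r -e modify,create,delete,move "$DIR"
done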
I received a public key like this and I should add it to .ssh/authorized_keys:
---- BEGIN SSH2 PUBLIC KEY ----
Comment: "rsa-key-20190107"
AAAAB3NzaC1yc2EAAAABJQAAAQEAucNIPbPoaEqyBAKtk3LTfM/hiZlWomTdQEf7
zUI4LGz91aZYIZNpWGTAUZKuFLdIEsktxQTNwEJNWMe2QocqQWyPGA+xL08ZP7Xk
VEbVOyH0nQ3ZHptgmyH4y4+bbAWXAROL3078h2iwtsCO343VQKg1iSNvemnLafA5
9/RtkcCR8SxH+NEXcc8MwGOE9gLX2pph4bxrFz9R6yyw3oRGVLt4uU9BlD3+LXg1
plUzc2KZXEt8Zr04I0Fd865zyiB8Q+2ZEPvHf7MMaW66FRe4BXCI7LMh/voXi0C8
H4NDIu1GZr7dNxgbEO05ZnASMofpLDU6cq7LFVl0BQG8gt1hOw==
---- END SSH2 PUBLIC KEY ----
I guess that I can edit this file in vi and create the corresponding one-liner required for .ssh/authorized_keys.
Is this true?
AFAIK this key was created according to this page: https://winscp.net/eng/docs/guide_public_key
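As far as I know there is no need for hand-editing: ssh-keygen can import this RFC4716/SSH2 format directly. Save the block to a file first (the file name is arbitrary):
ssh-keygen -i -f key-ssh2.pub >> ~/.ssh/authorized_keys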
I am not happy with the output of systemctl.
I have a script which parses the output of
systemctl list-units -t service --full --all
The beginning of the output looks like this:
UNIT                LOAD      ACTIVE   SUB     JOB DESCRIPTION
after-local.service loaded    inactive dead        /etc/init.d/after.local Compatibility
● amavis.service    not-found inactive dead        amavis.service
apparmor.service    loaded    active   exited      Load AppArmor profiles
auditd.service      loaded    active   running     Security Auditing Service
On a different systemd version, the column with the dot (before amavis.service) does not exist.
Is there a machine/script readable output of systemctl?
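Depending on the systemd version, two options seem cleaner than parsing the human-oriented table:
# --plain drops the "●" marker column, --no-legend the header/footer
systemctl list-units -t service --full --all --plain --no-legend
# newer systemd (around v246 and later) can emit JSON directly
systemctl list-units -t service --all --output=json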
I am searching for a reliable way to export journalctl logs.
I could use the --since=... option, but this is a bit fuzzy.
In my case a script would call journalctl --output=json every ten minutes.
I don't want to miss a single line and (if possible) I would like to avoid duplicate lines.
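Journal cursors look like the right mechanism for this: --show-cursor prints an opaque position marker after the output, and --after-cursor continues exactly there, which avoids both gaps and duplicates. A sketch (the cursor file path is made up):
# first run: export everything and note the final cursor
journalctl --output=json --show-cursor > batch-1.json
# the last line looks like: -- cursor: s=...;i=...;b=...

# later runs: continue exactly after the stored cursor
journalctl --output=json --after-cursor="$cursor"

# newer journalctl (v242 and later) can manage the cursor file itself
journalctl --output=json --cursor-file=/var/lib/log-export/cursor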
Some days after asking this question I came across RELP: https://en.wikipedia.org/wiki/Reliable_Event_Logging_Protocol
How can I disable all services except ssh on modern (systemd based) linux distributions?
I need to implement a maintenance mode.
All these services need to be down:
But ssh must not be shut down, since it is used to perform tasks during the maintenance mode.
Of course I could write a shell script which loops over a list of services to disable. But this feels like reinventing something which already exists and which I just don't know about yet.
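One idea, analogous to how rescue.target works: define a custom target that pulls in only sshd and isolate it. A sketch; the target name is made up and the dependencies probably need tuning per distribution:
# /etc/systemd/system/maintenance.target (sketch)
[Unit]
Description=Maintenance mode: only ssh
Requires=sysinit.target sshd.service
After=sysinit.target sshd.service
AllowIsolate=yes

# enter maintenance mode (stops everything not required by the target)
systemctl isolate maintenance.target
# leave it again
systemctl isolate multi-user.target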
I am running this command:
pg_dumpall | bzip2 > cluster-$(date --iso).sql.bz2
It takes too long. I looked at the processes with top: the bzip2 process uses about 95% and postgres about 5% of one core. The wa value is low, which means the disk is not the bottleneck.
What can I do to increase the performance?
Maybe let bzip2 use more cores? The server has 16 cores.
Or use an alternative to bzip2?
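Hedged sketches of both ideas, assuming the respective tools are installed; pbzip2 keeps the .bz2 format, pigz and zstd trade the format for speed:
# parallel bzip2, same .bz2 output format
pg_dumpall | pbzip2 -p8 > cluster-$(date --iso).sql.bz2
# parallel gzip: weaker compression, usually much faster
pg_dumpall | pigz -p 8 > cluster-$(date --iso).sql.gz
# zstd with several worker threads
pg_dumpall | zstd -T8 > cluster-$(date --iso).sql.zst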
I know how to enable/disable lingering with loginctl.
But up to now I have found no way to query the status of a user.
I want to know: is lingering enabled for user foo?
How can I access this information?
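Two ways that should work: loginctl exposes a Linger property, and lingering users leave marker files under /var/lib/systemd/linger:
# prints Linger=yes or Linger=no (the user must be known to logind)
loginctl show-user foo --property=Linger
# one file per user with lingering enabled
ls /var/lib/systemd/linger/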
I see this message in my logs:
systemd[1]: foo.service holdoff time over, scheduling restart.
I could not find the term "holdoff time" here:
https://www.freedesktop.org/software/systemd/man/systemd.service.html
Where can I change the holdoff time?
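As far as I can tell, the holdoff time corresponds to RestartSec= in the [Service] section (default 100ms), the delay between the service stopping and its scheduled restart:
[Service]
Restart=always
# the "holdoff" interval before the restart is attempted
RestartSec=5s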
I am running out of inodes. Only 11% available:
the-foo:~ # df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/system-home 9830400 8702297 1128103 89% /home
Is there a way to solve this without creating and copying to a new partition?
Details:
the-foo:~ # tune2fs -l /dev/mapper/system-home
tune2fs 1.42.6 (21-Sep-2012)
Filesystem volume name: <none>
Last mounted on: /home
Filesystem UUID: 55899b65-15af-437d-ac56-d323c702f305
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 9830400
Block count: 39321600
Reserved block count: 1966080
Free blocks: 22958937
Free inodes: 2706313
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1014
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Tue Jul 8 08:02:22 2014
Last mount time: Sun Apr 24 22:33:00 2016
Last write time: Thu Sep 8 09:18:01 2016
Mount count: 11
Maximum mount count: 10
Last checked: Tue Jul 8 08:02:22 2014
Check interval: 0 (<none>)
Lifetime writes: 349 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 2759586
Default directory hash: half_md4
Directory Hash Seed: e4402d28-9b15-46e2-9521-f0e25dfb58d0
Journal backup: inode blocks
Please let me know if more details are needed.
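Since ext filesystems fix the inode count at mkfs time, the usual way out (short of recreating the filesystem) is to find and remove or archive masses of small files. GNU du (coreutils 8.22+) can count inodes per directory:
# list the 20 directories consuming the most inodes on /home
du --inodes -x /home | sort -n | tail -n 20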
I am failing to call a single state from an sls file.
This works:
salt-ssh w123 state.sls monitoring
This works:
salt-ssh w123 state.show_sls monitoring
One item of the above output:
monitoring_packages:
----------
__env__:
base
__sls__:
monitoring.packages
pkg:
|_
----------
pkgs:
- python-psutil
- installed
|_
----------
order:
10000
Now I want to call only monitoring_packages, not the whole sls file:
Fails:
salt:/srv # salt-ssh w123 state.sls_id monitoring_packages monitoring
w123:
Data failed to compile:
----------
No matching sls found for 'monitoring' in env 'base'
Fails:
salt:/srv # salt-ssh w123 state.single monitoring.monitoring_packages
w123:
TypeError encountered executing state.single: single() takes at least 2 arguments (1 given)
How can I call my single state monitoring_packages?
salt:/srv # salt-ssh --version
salt-ssh 2015.8.3 (Beryllium)
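A guess based on the __sls__ value shown above: state.sls_id expects the concrete sls module path (monitoring.packages), not the top-level monitoring file that merely includes it:
salt-ssh w123 state.sls_id monitoring_packages monitoring.packages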
I am trying to create a reliable systemd service for autossh.
The service works, but if the host keys change, the service stays in state ok (running).
I want it to be in state "failed" if the tunnel does not work.
Here is my current systemd service file:
# Source is in srv/salt/tunnel/autossh@.service
# which is a git repo.
# Don't edit /etc/systemd/system/autossh@.service
[Unit]
Description=Tunnel For %i
After=network.target
[Service]
User=autossh
# https://serverfault.com/a/563401/90324
ExecStart=/usr/bin/autossh -M 0 -N -o "ExitOnForwardFailure yes" -o "ConnectTimeout=1" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 40443:installserver:40443 -R 8080:installserver:8080 tunnel@%i
Restart=always
[Install]
WantedBy=multi-user.target
Here is the output of systemctl status autossh@foo-work
salt:/srv # systemctl status autossh@foo-work
autossh@foo-work.service - Tunnel For foo-work
Loaded: loaded (/etc/systemd/system/autossh@.service; enabled)
Active: active (running) since Wed, 2016-02-10 14:35:01 CET; 2 months and 3 days ago
Main PID: 17995 (autossh)
CGroup: name=systemd:/system/autossh@.service/foo-work
└ 17995 /usr/bin/autossh -M 0 -N -o ExitOnForwardFailure yes -o ConnectTimeout=1 -o ServerAliveInterval 60 -o ServerAliveCountMax 3 -R 40443:installserver:40443 -R ...
Apr 14 12:35:43 salt autossh[17995]: Host key verification failed.
Apr 14 12:35:43 salt autossh[17995]: ssh exited with error status 255; restarting ssh
Apr 14 12:45:42 salt autossh[17995]: starting ssh (count 618)
Apr 14 12:45:42 salt autossh[17995]: ssh child pid is 22524
Apr 14 12:45:43 salt autossh[17995]: Host key verification failed.
Apr 14 12:45:43 salt autossh[17995]: ssh exited with error status 255; restarting ssh
My problem is not the changed host key; that is ok.
I just want the service to tell me the truth: if the tunnel is not working, then I want to see it.
How can I change the systemd service file so that it reports the correct status?
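One untested idea: let systemd do the restarting instead of autossh, by running plain ssh under Restart=. Then a persistent failure like a bad host key cycles the unit through failed/restart states instead of leaving it "active":
[Service]
User=autossh
# plain ssh instead of autossh; systemd supervises the restarts
ExecStart=/usr/bin/ssh -N -o "ExitOnForwardFailure yes" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 40443:installserver:40443 -R 8080:installserver:8080 tunnel@%i
Restart=always
RestartSec=10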
Update: I wrote a second follow-up question: How does systemd decide if a service is ok or not
Here is my unit file of a systemd service:
[Unit]
Description=Tunnel For %i
After=network.target
[Service]
User=autossh
ExecStart=/usr/bin/autossh -M 0 -N -o "ExitOnForwardFailure yes" -o "ConnectTimeout=1" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 40443:installserver:40443 -R 8080:installserver:8080 tunnel@%i
Restart=always
[Install]
WantedBy=multi-user.target
The unit failed 15 days ago and systemd did not restart it, although "Restart=always" is in the above unit file.
Here is the status output of this service:
salt:/srv # systemctl status autossh@eins-work
autossh@eins-work.service - Tunnel For eins-work
Loaded: loaded (/etc/systemd/system/autossh@.service; enabled)
Active: failed (Result: start-limit) since Wed, 2016-02-10 14:33:34 CET; 2 weeks and 1 days ago
Main PID: 17980 (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/autossh@.service/eins-work
Feb 10 14:33:34 salt systemd[1]: Stopping Tunnel For eins-work...
Feb 10 14:33:34 salt systemd[1]: Starting Tunnel For eins-work...
Feb 10 14:33:34 salt systemd[1]: Failed to start Tunnel For eins-work.
Feb 10 14:33:34 salt systemd[1]: Unit autossh@eins-work.service entered failed state
Related: https://serverfault.com/a/563401/90324
How to configure a systemd service to always restart if something fails?
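The "Result: start-limit" in the status output suggests the unit hit systemd's start rate limiter: with Restart=always and a service that dies instantly, the default burst limit is reached within seconds and systemd gives up. A sketch; note that the directive moved between versions (older systemd: StartLimitInterval= in [Service], newer: StartLimitIntervalSec= in [Unit]):
[Service]
Restart=always
# space the restarts out so the burst limit is never reached
RestartSec=30
# or disable the rate limit entirely (older systemd syntax)
StartLimitInterval=0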
We use rsync to backup servers.
Unfortunately the network to some servers is slow.
It takes up to five minutes for rsync to detect that nothing has changed in huge directories. These huge directory trees contain a lot of small files (about 80k files).
I guess that the rsync client sends data for each of the 80k files.
Since the network is slow, I would like to avoid sending information about each of the 80k files individually.
Is there a way to tell rsync to make a hash-sum of a sub directory tree?
This way the rsync client would send only a few bytes for a huge directory tree.
Update
Up to now my strategy is to use rsync. But if a different tool fits better here, I am able to switch. Both sides (server and client) are under my control.
Update2
There are 80k files in one directory tree. No single directory contains more than 2k files or sub-directories.
Update3
Details on the slowness of the network:
time ssh einswp 'cd attachments/200 && ls -lLR' >/tmp/list
real 0m2.645s
Size of the /tmp/list file: 2 MByte
time scp einswp:/tmp/list tmp/
real 0m2.821s
Conclusion: scp has the same speed (no surprise)
time scp einswp:tmp/100MB tmp/
real 1m24.049s
Speed: 1.2MB/s
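As far as I know rsync has no built-in per-directory tree hash. A hypothetical workaround under that assumption: compute a cheap metadata hash per subtree on both ends and only run rsync where the hashes differ. tree_hash is a made-up helper:
# hash names, sizes and mtimes of a subtree (GNU find)
tree_hash() {
    find "$1" -printf '%p %s %T@\n' | sort | md5sum | cut -d' ' -f1
}
local_sum=$(tree_hash attachments/200)
remote_sum=$(ssh einswp 'find attachments/200 -printf "%p %s %T@\n" | sort | md5sum | cut -d" " -f1')
# only pay for a full rsync pass when something actually changed
[ "$local_sum" = "$remote_sum" ] || rsync -a attachments/200/ einswp:attachments/200/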
There are several thousand blog posts about vsftpd and allow_writeable_chroot=YES
The common error message:
Fixing 500 OOPS: vsftpd: refusing to run with writable root inside chroot ()
I solved the problem on my server.
But one question remains:
Why is it advisable to use allow_writeable_chroot=NO?
Up to now I have only found nebulous arguments like "for security reasons".
What are these "security reasons"?
Is there a way to do port-forwarding for lxd containers, like docker does?
I heard some rumours that there is no easy way.
According to the homepage of lxd this is their goal:
Intuitive (simple, clear API and crisp command line experience)
For me port forwarding is an important part.
I am not in a hurry. If it is planned for a future release, this would be a valid answer.
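If I read the LXD documentation correctly, releases from 3.0 on ship a "proxy" device type that does exactly this; container and device names below are made up:
# forward host port 8080 to port 80 inside the container
lxc config device add mycontainer web proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80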
I use the nice feature of systemd: Instantiated Services.
Is there a simple way to reload all instantiated services with one call?
Example: I don't want to run them all like this:
systemctl restart autossh@foo
systemctl restart autossh@bar
systemctl restart autossh@blu
I tried this, but it does not work:
systemctl restart autossh@*
Related: Start N processes with one systemd service file
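On newer systemd versions glob patterns do work for commands that operate on multiple units, as long as the shell does not expand them first; a sketch with a fallback for older versions:
# quote the pattern so the shell passes it through to systemctl
systemctl restart 'autossh@*.service'
# fallback: enumerate the loaded instances explicitly
systemctl restart $(systemctl list-units 'autossh@*' --plain --no-legend | awk '{print $1}')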
First I was fascinated by Instantiated Services, but later I realized that running a configuration management tool like Ansible makes more sense. I learned: keep the tools simple. Many tools start to implement condition checking (if .. else ...) and loops, for example webserver or mailserver configuration. But this should be solved at a different (upper) level: configuration management. See: https://github.com/guettli/programming-guidelines#dont-use-systemd-instantiated-units
I found this systemd service file to start autossh to keep up a ssh tunnel: https://gist.github.com/thomasfr/9707568
[Unit]
Description=Keeps a tunnel to 'remote.example.com' open
After=network.target
[Service]
User=autossh
# -p [PORT]
# -l [user]
# -M 0 --> no monitoring
# -N Just open the connection and do nothing (not interactive)
# LOCALPORT:IP_ON_EXAMPLE_COM:PORT_ON_EXAMPLE_COM
ExecStart=/usr/bin/autossh -M 0 -N -q -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -p 22 -l autossh remote.example.com -L 7474:127.0.0.1:7474 -i /home/autossh/.ssh/id_rsa
[Install]
WantedBy=multi-user.target
Is there a way to configure systemd to start several tunnels in one service?
I don't want to create N system service files, since I want to avoid copy+paste.
All service files would be identical except that "remote.example.com" would be replaced with other host names.
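For reference, a template-unit sketch derived from the gist above: one file autossh@.service, with %i standing in for the remote host name (options copied from the gist, otherwise unverified):
# /etc/systemd/system/autossh@.service (sketch)
[Unit]
Description=Keeps a tunnel to '%i' open
After=network.target

[Service]
User=autossh
ExecStart=/usr/bin/autossh -M 0 -N -q -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -p 22 -l autossh %i -L 7474:127.0.0.1:7474 -i /home/autossh/.ssh/id_rsa
Restart=always

[Install]
WantedBy=multi-user.target

# one instance per remote host:
systemctl enable autossh@remote.example.com
systemctl start autossh@remote.example.com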
I asked this question roughly 1.5 years ago.
My mind has changed. Yes, it's nice that you can do this with systemd, but I will use configuration management in the future.
Why should systemd implement a template language and substitute %h? I think it makes no sense.
Several months later I think this looping and templating should be solved on a different level. I would use Ansible or Terraform for this now.
I use the dovecot IMAP server and want to delete big unneeded mails:
cd /var/spool/foouser; du -a | sort -rn > /var/tmp/du-mail-foouser.log
Now I see the big mails at the top, and after looking at them I want to remove them.
Is it safe to just call
rm ./foofolder/1318412893.M857530P4656.hz1,W=14463815,S=14268320:2,S
?
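Deleting maildir files behind dovecot's back can leave its index stale. A safer sketch uses doveadm with a size-based search query (folder name and size threshold are examples; assumes dovecot 2.x):
# preview which messages match
doveadm search -u foouser mailbox foofolder larger 10M
# expunge them; dovecot keeps its index consistent
doveadm expunge -u foouser mailbox foofolder larger 10M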