Is it possible to disable the restart throttling imposed by systemd? I've found some docs on setting StartLimitBurst, but I'm trying to disable all restart throttling.
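For concreteness, this is roughly the unit file change I'm experimenting with (a sketch; my understanding is that a zero interval disables the rate limiting entirely, which would make StartLimitBurst irrelevant):

[Unit]
# A zero interval disables start rate limiting altogether.
# Older systemd versions spell this StartLimitInterval and
# expect it in the [Service] section instead.
StartLimitIntervalSec=0

[Service]
Restart=always
RestartSec=1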
I purchased a dedicated Hetzner server with 2x small SSDs and 2x large HDDs, each pair behind hardware RAID. Running installimage from the Rescue OS shows this config:
# Adaptec RAID (LD 0): no name
DRIVE1 /dev/sda
# Adaptec RAID (LD 1): no name
DRIVE2 /dev/sdb
...
Lower down, it shows:
## your system has the following devices:
#
# Disk /dev/sda: 749 GB (=> 697 GiB).
# Disk /dev/sdb: 119 GB (=> 111 GiB).
I set SWRAID 0. How do I tell the script to mount the SSD pair as the primary drive for installing the OS?
My first attempt was to swap the DRIVEx statements around, which resulted in this on Debian:
root@Debian-90-stretch-64-minimal / # lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    1   698G  0 disk
|-sda1      8:1    1    12G  0 part
| `-md0     9:0    0    12G  0 raid1 [SWAP]
|-sda2      8:2    1   512M  0 part
| `-md1     9:1    0 511.4M  0 raid1 /boot
`-sda3      8:3    1  99.1G  0 part
  `-md2     9:2    0    99G  0 raid1 /
sdb         8:16   1 111.6G  0 disk
|-sdb1      8:17   1    12G  0 part
|-sdb2      8:18   1   512M  0 part
`-sdb3      8:19   1  99.1G  0 part
As you can see, the SSDs are on sdb and the larger 698G HDDs are on sda, but sda is only partially partitioned. It feels like the drives may be incorrectly paired.
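For reference, the swapped configuration looked roughly like this (reconstructed from memory; the PART layout and filesystems are the defaults that produced the lsblk output above):

DRIVE1 /dev/sdb    # Adaptec RAID (LD 1): the SSD pair, intended OS target
DRIVE2 /dev/sda    # Adaptec RAID (LD 0): the HDD pair
SWRAID 0
PART swap  swap 12G
PART /boot ext3 512M
PART /     ext4 all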
Is there an "apt-get install .NET-Framework" for Windows Server 2008+? Some way I can update to the latest .NET Framework 4+, preferably using Saltstack?
How can I install New Relic on my Windows minions via SaltStack without having to RDP into each Windows minion?
I found some New Relic documentation for installing via the command line on Windows, but would it be possible to use the Windows-based Salt package manager for this, without resorting to cmd.run?
I tried using this against salt-minion v0.17.5-52, but the minions don't return anything:
salt '*' cmd.run "msiexec /i 'C:/Install/New Relic/newrelic.msi'
/L*v install.log NR_LICENSE_KEY='<my licence key>'" -v
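What I'm imagining instead is a definition for Salt's Windows package repository along these lines (a sketch; the full_name, version, and install flags are guesses, and the licence key stays a placeholder):

newrelic:
  '1.0':
    full_name: 'New Relic Server Monitor'
    installer: 'C:\Install\New Relic\newrelic.msi'
    install_flags: '/qn /L*v install.log NR_LICENSE_KEY=<my licence key>'
    msiexec: True

which would let me run salt '*' pkg.install newrelic instead of hand-rolled msiexec commands.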
I am unable to connect via vSphere or SSH to an ESXi hypervisor box that manages two important virtual machines. I do not recall changing any passwords and the data centre does not have any credentials. I suspect we messed up sshd_config, since I have tried every password I know 50 times.
If I restart the hypervisor, e.g. to reinstall ESXi, the VMs will stop and not be restarted automatically, which would be disastrous if we cannot reclaim access to the hypervisor.
What are my password reset/recovery options to get root access to the same hypervisor configuration with no data loss and as little downtime as possible?
If there is no way to reset root password, what safe steps can I take to backup/snapshot the two VMs (one Ubuntu, one Windows 2008 Server) and move them to a fresh server, bearing in mind that I cannot SSH into the hypervisor?
Edit: Great feedback so far, thanks guys. More details: Local storage, RAID-1, pretty standard hardware otherwise. Yes, I can arrange physical access to the box or schedule maintenance at the data centre. AFAIK, vSphere uses SSH to talk to the hypervisor, but I could be wrong here.
How should Salt State Files and Pillar configurations be structured to enable smooth deployment of varying minion roles for staged environments like dev, qa and production as well as feature branches?
I have arranged my root and pillar state files as follows in a separate repository from my Python project's source code:
salt-states/
    pillar/
        web/
            init.sls
            production.sls
            qa.sls
            dev.sls
        db/
            init.sls
            production.sls
            qa.sls
            dev.sls
        top.sls
    roots/
        web/
            init.sls
            production.sls
            qa.sls
            dev.sls
        db/
            init.sls
            production.sls
            qa.sls
            dev.sls
        top.sls
How should my top.sls file look to take advantage of this structure and how can I target feature branches in this fashion?
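To illustrate what I'm after, this is the kind of roots/top.sls I've been sketching (the env and role grains are my own convention, not something Salt sets by default):

base:
  'G@env:dev and G@role:web':
    - match: compound
    - web
    - web.dev
  'G@env:production and G@role:db':
    - match: compound
    - db
    - db.production

but I don't see how feature branches fit this pattern without minting a new grain value per branch.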
Sometimes my salt master hangs for a while on salt '*' test.ping, waiting for downed minions to reply. Is there a way to see a list of connected minions, regardless of whether they respond to test.ping?
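Something like this is what I'm hoping exists (manage.up and manage.down are runners I've seen referenced; I haven't confirmed how they behave on my version):

salt-run manage.up      # minions that currently respond
salt-run manage.down    # accepted minions that don't
salt-key -L             # all known keys, for comparison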
How do I output the contents of a file on all my minions using SaltStack?
The only 'pull' functionality I can find is in this minion push commit, but this requires configuration changes on the master.
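For now the best I've come up with is shelling out, which is exactly what I'd like to avoid (the path is just an example):

salt '*' cmd.run 'cat /etc/hostname'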
I'm learning SaltStack to deploy my Python application to various stages of production on AWS. Right now I have all my source code and salt states in one big repository.
Are there any practical or security considerations in keeping minion state files with my source? Or should I split them up and why?
If I do move my state files into a separate salt-states repo, where should I keep my master and minion configuration files, or don't they belong in version control?
How do I clear a directory on a salt-minion using a state file? I want to delete all *.conf files in /etc/supervisord/conf.d/ before I set up other supervisor services.
The following top.sls configuration has no effect:
/etc/supervisor/conf.d/*:
  file.absent
Using file.remove instead fails because the function is unavailable.
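One alternative I'm considering is managing the directory itself and letting Salt purge anything it doesn't manage (a sketch; I haven't confirmed that clean: True is the right fit here):

/etc/supervisor/conf.d:
  file.directory:
    - clean: True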
Given multiple domain bindings, how do I configure my IIS site to redirect all requests to a primary domain without creating a second site to handle only the redirect?
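My best guess so far is a URL Rewrite rule on the existing site, something like this sketch (it assumes the URL Rewrite module is installed, and example.com stands in for my primary domain):

<system.webServer>
  <rewrite>
    <rules>
      <!-- redirect any host that is not the primary domain -->
      <rule name="Redirect to primary domain" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^example\.com$" negate="true" />
        </conditions>
        <action type="Redirect" url="http://example.com/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>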
After every few days of faultless operation, the following information event pops up in my database server's event log:
Process 0:0:0 (0x890) Worker 0x5C55A0D8 appears to be non-yielding on Scheduler 0.
Thread creation time: 12945965386972. Approx Thread CPU Used: kernel 0 ms, user 0 ms.
Process Utilization 0%%. System Idle 98%%. Interval: 70427 ms.
The non-yielding worker process, whatever it is, causes the following connection timeout exception for all my applications' databases:
Exception message: Timeout expired. The timeout period elapsed prior to completion of
the operation or the server is not responding.
This message is repeated ad infinitum until I restart the SQL Service. However, the service won't stop, so I have had to forcefully kill the SQLServr.exe process, which is bad. Everything then starts up as normal and the service recovers.
How do I diagnose the cause of this problem?
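Next time it happens, my plan is to connect over the dedicated admin connection (sqlcmd -S admin:<server> -E), which gets its own scheduler and so should still respond, and inspect scheduler state with something like this sketch:

-- user schedulers only; high ids are internal
SELECT scheduler_id, status, is_idle,
       active_workers_count, work_queue_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;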
I need to take a database offline immediately, but a latent connection is blocking the SET OFFLINE operation. I attempted to take it offline from SQL Server Management Studio, which would have issued ALTER DATABASE <database_name> SET OFFLINE, instead of ALTER DATABASE <database_name> SET OFFLINE WITH NO_WAIT like I should have.
Other attempts to access or take the database offline fail because a lock cannot be obtained due to the blocking operation.
How can I cancel the blocking SET OFFLINE operation without taking my entire instance offline?
Update: I ended up restarting my SQL Server instance, which quickly released the lock, but this was undesirable. I would still like to know how to kill the connections blocking a SET OFFLINE operation.
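For the record, this is the sequence I believe I should have used (a sketch; MyDatabase and the session id are placeholders):

-- find whoever is holding things up in the target database
SELECT session_id, blocking_session_id, wait_type, command
FROM sys.dm_exec_requests
WHERE database_id = DB_ID('MyDatabase');

KILL 53;  -- the blocking session_id found above

-- roll back any stragglers and go offline immediately
ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;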
I'm trying to schedule a file-sync between two dedicated servers on a LAN. The remote machine is running Windows Server 2003 and the local machine is running Windows Server 2008.
I mounted the remote folder as the J: network drive to overcome any permission issues, and when I run the command manually everything works as expected and the folder contents are mirrored:
robocopy J:\ C:\Files /MIR > c:\robocopy.log
But as soon as I put it in a scheduled task, it fails with return code 0x10 (16), which is a serious error. So I assumed a network permissions error and tried scheduling the action between two local folders. The same error occurred and no robocopy.log output file was created. I am running the action as an Administrator.
Why is my scheduled task failing?
Output from schtasks /query /v /fo LIST /s localhost for reference:
HostName: localhost
TaskName: \Sync Task
Next Run Time: 11/7/2010 3:00:00 AM
Status: Ready
Logon Mode: Interactive/Background
Last Run Time: 11/6/2010 2:49:21 PM
Last Result: 16
Author: HOST\Administrator
Task To Run: robocopy.exe "C:\LocalFolder" "C:\Destination" /MIR /LOG > c:\robocopy.log
Start In: N/A
Comment: N/A
Scheduled Task State: Enabled
Idle Time: Disabled
Power Management: Stop On Battery Mode
Run As User: HOST\Administrator
Delete Task If Not Rescheduled: Enabled
Stop Task If Runs X Hours and X Mins: Disabled
Schedule: Scheduling data is not available in this format.
Schedule Type: Daily
Start Time: 3:00:00 AM
Start Date: 8/6/2010
End Date: N/A
Days: Every 1 day(s)
Months: N/A
Repeat: Every: Disabled
Repeat: Until: Time: Disabled
Repeat: Until: Duration: Disabled
Repeat: Stop If Still Running: Disabled
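My next experiment rests on two hunches: the > redirection is a cmd.exe feature that Task Scheduler won't interpret, and the interactively mapped J: drive probably doesn't exist in the task's session. So I plan to try letting robocopy write its own log and using a UNC path (the share name is a placeholder):

robocopy.exe \\remoteserver\share C:\Files /MIR /LOG:c:\robocopy.log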
After a Windows update I cannot connect to a Windows Server 2008 machine via RDP. As an alternative, I remotely installed UltraVNC using PsExec.
The WinVNC service starts successfully but when I try to connect remotely, I receive the following error message:
This server does not have a valid password enabled.
Until a password is set, incoming connections cannot be enabled.
Since I don't have desktop access to the machine, how do I set the password?
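The approach I'm considering, since PsExec already works: set the desired password on a local UltraVNC install, then copy the resulting ultravnc.ini (which stores the hashed passwd value) over the server's copy and restart the service. A sketch, assuming a default install path and the standard uvnc_service service name:

copy ultravnc.ini "\\remotehost\c$\Program Files\UltraVNC\ultravnc.ini"
psexec \\remotehost net stop uvnc_service
psexec \\remotehost net start uvnc_service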