I'm running Amazon Linux 2 on EC2 instances in AWS. I want to be able to add my own iptables rules and have them survive reboots.
What is the correct way (or a correct way) to do this?
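For example, I might add a rule like this at runtime (the port and source range are purely illustrative), and after a reboot it is gone:
# illustrative rule only - the real rules will vary
iptables -A INPUT -p tcp --dport 8080 -s 10.0.0.0/8 -j ACCEPT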
I am creating a Windows "golden image" that will be rolled out to a network which does not have direct Internet access. Instead, HTTP/HTTPS traffic must be carried over a proxy server.
When running sysprep, I am adding this to unattend.xml to automatically configure the proxy settings in the golden image:
<component name="Microsoft-Windows-IE-ClientNetworkProtocolImplementation" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
    <POLICYProxySettingsPerUser>0</POLICYProxySettingsPerUser>
    <HKLMProxyEnable>true</HKLMProxyEnable>
    <HKLMProxyServer>outboundproxy.example.com:3128</HKLMProxyServer>
</component>
(This is derived from a post at https://blogs.technet.microsoft.com/chrad/2009/07/13/dynamic-provisioning-with-vmm-proxy-windows-updates-and-scripts/)
After running sysprep, creating an image, and then deploying a VM from the image, I was able to log in to the desktop, go to Settings -> Proxy Settings, and verify that the proxy was set correctly. IE and other apps worked as expected.
However, I later discovered that software processes that run on startup were not using the proxy settings and were therefore failing. After some experimentation I discovered that the proxy settings were not taking effect until a user had logged on to a desktop session. After a user had logged on, the same processes that did not work before started to use the HTTP proxy successfully. It therefore seems that sysprep was not configuring the proxy itself - instead, some user process invoked at logon was responsible for completing the configuration.
As this is an environment which relies heavily on automation, and as these are servers, not user desktops, it's important that they work correctly without ever having a user log on to the desktop.
Is there a way to configure HTTP proxy settings in a sysprep golden image that does not depend on a user logging in to the desktop?
In this configuration we are using Windows Server 2019, but I imagine this problem is common to quite a few Windows versions.
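For reference, my understanding - and it is an assumption on my part, not something I have verified against documentation - is that the unattend component above writes the equivalent of these machine-wide registry values:
rem assumed registry locations for the per-machine WinINET proxy settings
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxySettingsPerUser /t REG_DWORD /d 0 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "outboundproxy.example.com:3128" /f
If there is a way to make these (or equivalent) settings take effect for services before any interactive logon, that would solve my problem.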
I would like Samhain to monitor a file - say, /root/somefile. This file does not currently exist, but I would like to be notified if it gets created at any point.
I add this to samhainrc:
[ReadOnly]
file = /root/somefile
This causes Samhain to emit these log entries:
Oct 18 22:54:04 ip-172-31-24-115 Samhain[17123]: CRIT : [2018-10-18T22:54:04+0000] interface=<lstat>, msg=<No such file or directory>, userid=<0>, path=</root/somefile>
Oct 18 22:54:04 ip-172-31-24-115 Samhain[17123]: CRIT : [2018-10-18T22:54:04+0000] msg=<POLICY MISSING>, path=</root/somefile>
Oct 18 22:54:19 ip-172-31-24-115 Samhain[17157]: INFO : [2018-10-18T22:54:19+0000] msg=<Checking [ReadOnly]>, path=</root/somefile>
Oct 18 22:54:19 ip-172-31-24-115 Samhain[17157]: NOTICE : [2018-10-18T22:54:19+0000] msg=<Check failed>, path=</root/somefile>
And if I create this file with echo test > /root/somefile, then no policy violation is logged - the addition of the file goes unnoticed.
How can I configure Samhain to notify me if a previously non-existent file of interest gets created?
The IgnoreMissing configuration option would appear at first glance to be useful, but it is not. With IgnoreMissing = /root/somefile in samhainrc, there is no change in behaviour. It seems that this option is intended for files that are expected to go missing later - it suppresses the alert when a file that used to exist no longer does, for example when an automated process deletes files that are out of date.
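For completeness, this is how I set it - in the [Misc] section, which is where I believe the option belongs (treat the placement as my assumption):
[Misc]
# produced no change in behaviour for the never-existed file
IgnoreMissing = /root/somefile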
Although /root/somefile is obviously made up in this case, a realistic example of a non-existent file suddenly starting to exist is /home/someuser/.ssh/authorized_keys appearing where it did not exist before - this could be a malicious user who exploited something to drop a backdoor allowing them to log on as a shell user. This is something I would like to be alerted about.
It is possible to use dir = /home/someuser/.ssh to monitor all changes in the user's .ssh folder, but this is unhelpful: if it's normal for the user to use SSH in their account, their .ssh/known_hosts file may change, they may change their ssh_config, etc., and I do not want to be alerted about those. So rather than monitoring the whole directory with a few files whitelisted, I want to leave the directory unmonitored apart from specific, critical files.
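In other words, what I would like is for a policy like this to alert when the listed file appears, even though it does not exist when the daemon starts - a sketch of the intent, since this is exactly what does not work today:
[ReadOnly]
# desired: alert on creation of this currently non-existent file
file = /home/someuser/.ssh/authorized_keys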
I run an exim4 instance that is the primary MX for my domain and receives email from the public Internet. Mail for my users is forwarded on to other email addresses - I use the redirect router and alias files to achieve this.
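For context, the router looks roughly like this (the router name and alias file path are examples, not my exact configuration):
# sketch of my forwarding router: look up the local part in an alias file
forward_aliases:
  driver = redirect
  data = ${lookup{$local_part}lsearch{/etc/exim4/virtual-aliases}}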
Sometimes, the email server receiving the forwarded message rejects it. In this case, exim bounces the message back to the original sender.
I would prefer that, if the redirected delivery fails, exim does not cause the whole delivery to fail, but instead falls back to an alternative router, such as one that makes a local delivery.
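Conceptually, I am imagining something like a second router that an address falls through to only when the forwarded delivery fails - a sketch of the intent, and whether exim can actually be driven this way is exactly my question:
# hypothetical fallback router: accept the address and deliver locally
local_fallback:
  driver = accept
  transport = local_delivery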
Is this possible, and how can I configure this behaviour?