After an update to the latest binary version of Monit, Monit reports the following issue:
SSL server certificate verification error: self signed certificate in certificate chain
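If the self-signed certificate is expected and acceptable, newer Monit releases let a check allow it instead of failing verification; a minimal sketch, assuming Monit 5.20+ and a hypothetical check host entry:

check host myserver with address example.com
  if failed
    port 443
    protocol https
    with ssl options {selfsigned: allow}
  then alert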
I have a role with some variables that I use several times with different parameters, like below:
roles:
  - role: my_role
    vars:
      role_uuid: uuid_1
      first_param: first
  - role: my_role
    vars:
      role_uuid: uuid_2
      second_param: second
The problem is that when my role is executed:
To summarize, both instances have the parameters first_param and second_param set.
It seems that the parameters of the instances of the role my_role are merged, and only the part that differs (here role_uuid) is really different.
Is there a way to avoid this merge?
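For what it's worth, one workaround often suggested for this (vars: on classic roles: entries are play-scoped, so they bleed across invocations) is to call the role from tasks with include_role, whose vars: are scoped to that task; a minimal sketch, assuming Ansible 2.2+ and not verified against your exact version:

tasks:
  - name: First invocation with its own scoped vars
    include_role:
      name: my_role
    vars:
      role_uuid: uuid_1
      first_param: first
  - name: Second invocation, isolated from the first
    include_role:
      name: my_role
    vars:
      role_uuid: uuid_2
      second_param: second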
We currently define the following lines in the Global scope of our ProFTPd server:
# Allow max 3 unauthenticated connections per IP
MaxConnectionsPerHost 3 "Sorry, you may not connect more than (%m) times."
# Allow max 3 authenticated connections per IP
MaxClientsPerHost 3 "Sorry, the maximum number of clients (%m) from your host are already connected."
# Allow max 10 connections per user
###### MaxClientsPerUser 10 "Sorry, there are already (%m) other connections for this account."
It works as intended, but we would like to allow some specific (not all) authenticated users (or IPs as a fallback) to open more connections than the limits specified above.
Is that possible with ProFTPd?
Yes -> any help would be appreciated.
No -> Is there any other production-grade free FTP server, like PureFTP or vsftpd maybe, that fits these requirements?
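If mod_ifsession is compiled in (it is in most distribution packages), per-user overrides of the post-authentication limits should be possible; a sketch, assuming a hypothetical user poweruser. Note that a pre-authentication limit like MaxConnectionsPerHost cannot be raised this way, since it is enforced before the user is known:

<IfUser poweruser>
  MaxClientsPerHost 10 "Sorry, the maximum number of clients (%m) from your host are already connected."
</IfUser>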
I'm trying to create an alias to intercept some URLs and serve them directly from the filesystem with Apache 2.4.
In my virtualhost, I have: DocumentRoot /var/www/mysubroot
I have a Location block on "/" in order to send everything to the Apache balancer:
<Location / >
    ProxyPass balancer://my-cluster/
    ProxyPassReverse /
    # Add the unique id to the header
    RequestHeader set UNIQUE_ID %{UNIQUE_ID}e
</Location>
I tried to add an alias to serve some content from the filesystem, but it never takes effect:
Alias "/hidden/" "/var/www/hidden/"
<Location /hidden/ >
ExpiresActive On
ExpiresDefault "access plus 1 month"
</Location>
A call to http://myvirtualhost/hidden/mysecretfolder/test.txt is handled by the Location / proxy and not by the alias.
Any clue how to make it work (even with a solution other than Alias)? A possible approach is sketched after the last block below.
I also have other Location directives in the virtualhost and have no issue with them; they proxy as intended:
<Location /rainloop/ >
    ProxyPass http://10.14.1.103/rainloop/
    ProxyPassReverse /rainloop/
</Location>
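The usual culprit here is that the catch-all ProxyPass in <Location /> claims the URL before mod_alias ever sees it. mod_proxy supports an exclusion for exactly this case: a ProxyPass target of ! inside a more specific <Location> tells Apache not to proxy that subtree. A minimal sketch (the Directory grant is an assumption about your access setup):

Alias "/hidden/" "/var/www/hidden/"
<Location /hidden/ >
    # "!" tells mod_proxy not to forward this subtree to the balancer
    ProxyPass !
    ExpiresActive On
    ExpiresDefault "access plus 1 month"
</Location>
# Assumption: the on-disk directory also needs an explicit access grant in 2.4
<Directory "/var/www/hidden/">
    Require all granted
</Directory>

Because <Location /hidden/ > is more specific than <Location / >, its ProxyPass ! wins during section merging.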
I'm trying to load a hiera file according to a specific flag.
The hiera hierarchy configuration is:
:hierarchy:
  - "%{environment}/%{::fqdn}"
  - "%{environment}/%{nodetype}"
  - "%{environment}/%{calling_module}"
  - "%{environment}"
  - "common/%{calling_module}"
  - "common"
In fact I want to factor out some configuration at the "nodetype" level. The goal is to avoid putting the same hiera "block" inside each per-fqdn file,
and instead put the common part in the nodetype file (e.g. environment/test/nfs-server.yaml).
After that, each server would get its own specific values from its fqdn yaml file (this part is OK).
Currently, I don't know how to provide the "nodetype" data to the hiera context.
I tried to put it into the main manifest file like this (yes, I read the docs and I know it's a bad idea, but even as a desperate attempt it doesn't work anyway):
node 'nfs1.example.com', 'nfs2.example.com' {
  $nodetype = 'nfs-server'
}
but the file environment/test/nfs-server.yaml is not loaded by hiera.
I also tried to use a custom fact, with
modules/hosts/facts.d/host-fact-test.txt
The file is sent to the agent host, but again hiera does not use the dedicated file.
Notice: /File[/var/lib/puppet/facts.d/host-fact-test.txt]/ensure: defined content as '{md5}d7492faae1bfe55f65f9958a7a5f6df9'
If I use a Puppet notify resource, the value is correct:
if $nodetype == 'nfs-server' {
  notify { "Running with \$nodetype ${nodetype} ID defined":
    withpath => true,
  }
}
result:
Notice: /Stage[main]/attemps/Notify[Running with $nodetype nfs-server ID defined]/message: Running with $nodetype nfs-server ID defined
The stack is open source Puppet on Ubuntu 14, so the versions are:
Any idea or suggestion to make it work (or achieve a similar behavior)?
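One thing worth double-checking on the external-fact route is the file format: Facter only turns facts.d .txt files into facts when they contain key=value pairs, one per line. A minimal sketch, assuming Facter 1.7+ external facts (the file and fact names are illustrative):

# modules/hosts/facts.d/nodetype.txt -- plain key=value external fact
nodetype=nfs-server

With the fact in place, hiera should resolve %{nodetype} (or, more explicitly, %{::nodetype}) in the hierarchy. A variable assigned inside a node block is not a fact, which is consistent with the docs' warning against relying on non-fact variables in the hierarchy.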
I have a script containing about 420k "rm -rf" commands like the ones below, which were generated using a "find" statement. Each pdf folder contains between 1 and 30 files (no subfolders).
rm -rf /2012/128/211503/pdf
rm -rf /2012/128/212897/pdf
rm -rf /2012/128/211989/pdf
rm -rf /2012/128/211691/pdf
rm -rf /2012/128/212539/pdf
rm -rf /2012/218/358976/pdf
rm -rf /2012/218/358275/pdf
rm -rf /2012/218/358699/pdf
I'm looking for ways to increase the deletion speed of the script.
Currently, vmstat reports mostly (I/O) wait time.
The platform is RHEL 5, deleting files on a RAID5/6 array using ext3 and LVM.
I thought about splitting the script into smaller files (say, 10 of them) in order to run several scripts in parallel, but I suspect I'm hitting a hardware speed limit here.
Would that be a good idea if it's the journal commits for the deletions that take the time, and could it take advantage of a feature like NCQ?
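If you do try parallelism, there is no need to split the file by hand; a minimal sketch with xargs, assuming every line really has the form "rm -rf /path" and that delete.sh is the (hypothetical) name of the script:

# Pull the path out of each "rm -rf /path" line and fan the paths out to
# 4 parallel rm processes, 100 directories per invocation.
awk '{ print $3 }' delete.sh | xargs -n 100 -P 4 rm -rf

Whether going beyond -P 1 helps at all depends on the array: with vmstat already showing I/O wait, the deletions mostly serialize on the ext3 journal, and the main benefit of a deeper queue is giving the controller more requests to reorder, which is where NCQ-like behavior would come in.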