I can't seem to find a precompiled package that works for Arch. I believe I've tried all the Debian-style packages, to no avail. There is no longer an actively maintained mongodb package, and compiling from source apparently requires a couple hundred gigabytes of space. Is there a way to get a binary for MongoDB on Arch without compiling from source?
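For context, the closest I've come is grabbing the generic 64-bit Linux tarball from mongodb.org and unpacking it by hand; something along these lines (the version number is a placeholder for whatever is current):

```shell
# What I've tried: the generic prebuilt Linux binaries (version is a placeholder)
wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.6.6.tgz
tar xzf mongodb-linux-x86_64-2.6.6.tgz
./mongodb-linux-x86_64-2.6.6/bin/mongod --version
```

That works, but it leaves me managing updates by hand, which is what I was hoping a package would avoid.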
I've been struggling through some (to me) weird firewalld errors but am now seeing the firewall behavior I'd like. Baffling to me, though, what works seems to be a mix of both the drop zone and the trusted zone:
[root@douglasii ~]# firewall-cmd --get-active-zones
drop
interfaces: eth0 veth879317c vethaff7c39 vethb2fec6e
trusted
sources: 192.168.0.0/16
[root@douglasii ~]# firewall-cmd --zone=drop --list-all
drop (default, active)
interfaces: eth0 veth879317c vethaff7c39 vethb2fec6e
sources:
services: ssh
ports: 443/tcp 80/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
[root@douglasii ~]# firewall-cmd --zone=trusted --list-all
trusted
interfaces:
sources: 192.168.0.0/16
services: ssh
ports: 443/tcp 80/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
I was under the impression that you set zones one at a time using set-default-zone, and I see that whichever zone I do that for gets the "active" label. Is that not the case? Can multiple firewalld zones be active at any given time? Do they all apply at the same time? What is a default zone? It's not clear to me from reading the FirewallD docs.
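For reference, these are the commands I've been using to inspect the zone setup (the output above came from the second one); I'm assuming --get-default-zone is the right way to see which zone is the default:

```shell
# Commands I've been poking at zone state with (firewalld 0.3.9)
firewall-cmd --get-default-zone             # zone that unassigned interfaces fall into
firewall-cmd --get-active-zones             # zones with an interface or source bound
firewall-cmd --get-zone-of-interface=eth0   # which zone a specific interface uses
```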
I'd like to:
- Drop all incoming connections from the external Web except 80 and 443
- Allow internal machines on
192.168.0.0/16
to connect to :9000 :8080
Here's what I did to set up my drop zone via firewall-cmd:
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
systemctl start firewalld.service
systemctl enable firewalld
firewall-cmd --set-default-zone=drop
firewall-cmd --permanent --zone=drop --add-service=ssh
firewall-cmd --permanent --zone=drop --add-port=80/tcp
firewall-cmd --permanent --zone=drop --add-port=443/tcp
firewall-cmd --zone=drop --permanent --add-rich-rule='rule source address="192.168.0.0/16" port port="9000" protocol="tcp" accept'
firewall-cmd --zone=drop --permanent --add-rich-rule='rule source address="192.168.0.0/16" port port="8080" protocol="tcp" accept'
firewall-cmd --reload
Here's what the active drop zone looks like:
[root@machine ~]# firewall-cmd --zone=drop --list-all
drop (default, active)
interfaces: eth0 vethadc7c41 vethaef84e2 vethd53fa38
sources:
services: ssh
ports: 443/tcp 80/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="192.168.0.0/16" port port="9000" protocol="tcp" accept
rule family="ipv4" source address="192.168.0.0/16" port port="8080" protocol="tcp" accept
This appears OK; however, I run into issues after reload:
[root@machine ~]# systemctl status firewalld -l
firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
Active: active (running) since Sun 2014-12-21 19:48:53 UTC; 2s ago
Main PID: 21689 (firewalld)
CGroup: /system.slice/firewalld.service
└─21689 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
Dec 21 19:48:53 machine.hostname systemd[1]: Started firewalld - dynamic firewall daemon.
Dec 21 19:48:56 machine.hostname firewalld[21689]: 2014-12-21 19:48:56 ERROR: '/sbin/iptables -t filter -A DROP_allow -s 192.168.0.0/16 -m tcp -p tcp --dport 9000 -m conntrack --ctstate NEW -j ACCEPT' failed: iptables: No chain/target/match by that name.
Dec 21 19:48:56 machine.hostname firewalld[21689]: 2014-12-21 19:48:56 ERROR: '/sbin/iptables -t filter -A DROP_allow -s 192.168.0.0/16 -m tcp -p tcp --dport 9000 -m conntrack --ctstate NEW -j ACCEPT' failed: iptables: No chain/target/match by that name.
Dec 21 19:48:56 machine.hostname firewalld[21689]: 2014-12-21 19:48:56 ERROR: COMMAND_FAILED: '/sbin/iptables -t filter -A DROP_allow -s 192.168.0.0/16 -m tcp -p tcp --dport 9000 -m conntrack --ctstate NEW -j ACCEPT' failed: iptables: No chain/target/match by that name.
Dec 21 19:48:56 machine.hostname firewalld[21689]: 2014-12-21 19:48:56 ERROR: '/sbin/iptables -t filter -A DROP_allow -s 192.168.0.0/16 -m tcp -p tcp --dport 8080 -m conntrack --ctstate NEW -j ACCEPT' failed: iptables: No chain/target/match by that name.
Dec 21 19:48:56 machine.hostname firewalld[21689]: 2014-12-21 19:48:56 ERROR: '/sbin/iptables -t filter -A DROP_allow -s 192.168.0.0/16 -m tcp -p tcp --dport 8080 -m conntrack --ctstate NEW -j ACCEPT' failed: iptables: No chain/target/match by that name.
Dec 21 19:48:56 machine.hostname firewalld[21689]: 2014-12-21 19:48:56 ERROR: COMMAND_FAILED: '/sbin/iptables -t filter -A DROP_allow -s 192.168.0.0/16 -m tcp -p tcp --dport 8080 -m conntrack --ctstate NEW -j ACCEPT' failed: iptables: No chain/target/match by that name.
I'm a bit confused, as I believed firewall-cmd to be an abstraction over iptables and more or less mutually exclusive with it, the latter being something I shouldn't touch directly.
Here are my version vitals:
[machine@douglasii ~]# firewall-cmd -V
0.3.9
[machine@douglasii ~]# cat /proc/version
Linux version 3.16.7-x86_64-linode49 (maker@build) (gcc version 4.7.2 (Debian 4.7.2-5) ) #3 SMP Fri Nov 14 16:55:37 EST 2014
[machine@douglasii ~]# cat /etc/redhat-release
CentOS Linux release 7.0.1406 (Core)
[machine@douglasii ~]# iptables -v
iptables v1.4.21: no command specified
Try `iptables -h' or 'iptables --help' for more information.
Under CentOS 7, I understand we're moving from mysql-server to the implementation-compatible MariaDB. I'm using a Docker image of centos:latest, which puts me under the auspices of CentOS 7.
mysqld_safe runs blocking in the foreground. That makes it easy: I just need to 1) install the package, 2) change the root password, and 3) run the server from within a Dockerfile.
In the Docker paradigm, I need to be able to install MariaDB as if it were from a bash script. I've found various ways to do this using aptitude under Ubuntu but have yet to find an equivalent answer under yum: how do I install, configure, and run MariaDB on CentOS 7 as if it were being installed via a Bash script? mysql_secure_installation appears to require a TTY.
I've tried running the mysqladmin password command manually, but it complains that it can't connect to a running MySQL instance. Because the intermediate containers are thrown away between steps, I believe I need to somehow run mysqld and change the password in the same step.
Installing the initscripts package gets me /bin/service, but it tries to redirect me to systemctl start mariadb.service, which isn't usable because Docker containers get a fakesystemd and not systemd. Any ideas?
Here's my current Dockerfile variant (in this one, trying a tail -f to keep the process alive as the CMD):
FROM centos:latest
MAINTAINER Me ([email protected])
RUN yum -y install wget epel-release
RUN cd /usr/local/src && wget http://rpms.famillecollet.com/enterprise/remi-release-7.rpm && rpm -Uvh remi-*.rpm && rm remi-*.rpm
RUN sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/remi*.repo
RUN cd /usr/local/src && wget http://apt.sw.be/redhat/el7/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm && rpm -Uvh rpmforge-*.rpm && rm rpmforge-*.rpm
RUN rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
RUN sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/rpmforge*.repo
RUN yum -y update
RUN yum -y upgrade
# mysql
RUN yum -y install mariadb-server
RUN yum -y install initscripts
WORKDIR /usr
#RUN echo "bind-address=0.0.0.0" >> /etc/my.cnf
RUN /usr/bin/mysql_install_db --datadir="/var/lib/mysql" --user=mysql
RUN /usr/bin/mysqld_safe --datadir="/var/lib/mysql" --socket="/var/lib/mysql/mysql.sock" --user=mysql >/dev/null 2>&1 &
RUN /usr/bin/mysqladmin -u root password SOMEPASSWORD
CMD tail -f /var/log/mariadb/mariadb.log
EXPOSE 3306
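Since each RUN gets its own intermediate container, one variant I've been meaning to try chains everything that needs a live mysqld into a single RUN so the steps share state. This is an untested sketch; the sleep duration and SOMEPASSWORD are placeholders:

```dockerfile
# Hypothetical single-RUN variant: initialize, start mysqld_safe in the
# background, wait for it to come up, set the password, then shut down --
# all in one layer so the running server is visible to mysqladmin.
RUN /usr/bin/mysql_install_db --datadir=/var/lib/mysql --user=mysql && \
    (/usr/bin/mysqld_safe --datadir=/var/lib/mysql --user=mysql &) && \
    sleep 10 && \
    /usr/bin/mysqladmin -u root password 'SOMEPASSWORD' && \
    /usr/bin/mysqladmin -u root -p'SOMEPASSWORD' shutdown
```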
Related:
- Scripted install of MySQL on Ubuntu
- https://askubuntu.com/questions/79257/how-do-i-install-mysql-without-a-password-prompt
- https://stackoverflow.com/questions/1202347/how-can-i-pass-a-password-from-a-bash-script-to-aptitude-for-installing-mysql
- https://stackoverflow.com/questions/7739645/install-mysql-on-ubuntu-without-password-prompt
I've got a bash script I'm working to improve, and I've made great progress thanks to Dennis Williamson. Unfortunately, one of the lines no longer echoes into a variable I can manipulate; instead it dumps the output directly to the terminal. I'll be good to go once I fix this.
Why is this bash command not echoing into the $result variable, and what can I do to improve it?
result=$( time wget -q --output-document=/tmp/wget.$$.html http://domain.tomonitor.com 2>&1; );
EDIT: Various solutions I've tried
result=$( { time (/usr/local/bin/wget -q --output-document=/tmp/wget.$$.html --header="Host: blogs.forbes.com" http://$host) } &2>1 );
result=$( { time (/usr/local/bin/wget -q --output-document=/tmp/wget.$$.html --header="Host: blogs.forbes.com" http://$host) } );
result=$( ( time (/usr/local/bin/wget -q --output-document=/tmp/wget.$$.html --header="Host: blogs.forbes.com" http://$host) ) );
EDIT2:
I'm echoing out a line like this:
echo "$date, $host, $result"
Date and host are currently fine. $result is not.
I'm getting lines like this:
3.887
Tue Feb 15 08:39:53 PST 2011, 192.168.0.2,
3.910
Tue Feb 15 08:39:57 PST 2011, 192.168.0.3,
I'm expecting lines like this:
Tue Feb 15 08:39:53 PST 2011, 192.168.0.2, 3.887
Tue Feb 15 08:39:57 PST 2011, 192.168.0.3, 3.910
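For comparison, this minimal sketch (using sleep as a stand-in for the wget call) does capture the timing into the variable for me; the part I keep second-guessing is that the stderr redirection has to happen on the braced group, inside the command substitution:

```shell
#!/bin/bash
# Minimal sketch: `time` is a bash keyword that writes to the shell's stderr,
# so the 2>&1 must be applied to the { ...; } group inside the substitution.
# `sleep 1` stands in for the real wget request.
result=$( { time sleep 1; } 2>&1 | grep real | awk '{print $2}' )
echo "captured: $result"
```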
My application sits behind a load balancer, and every once in a while I like to do a status check on each machine to get an idea of the time it takes to return an index.html document on each machine.
The script looks like this:
for host in 192.168.0.7 192.168.0.8 192.168.0.9; do
result=$( ( time wget -q --header="Host: domain.tomonitor.com" http://$host/ ) 2>&1 | grep real | awk '{print $2}' )
date=$(date)
echo "$date, $host, $result"
done
Since the application thinks it's on domain.tomonitor.com, I set that manually in the wget request header. The script greps for the "real" time and awks out the time alone, dumping that into a $result variable. Empirically, it seems to work pretty well as a basic manual check: responses typically take 2-3 seconds across my various servers, unless there are some unbalanced connections going on. I run it directly from my Mac OS X laptop against our private network.
The other day I wondered if I could log the results over time using a cron job. I was amazed to find it reported subsecond responses, for example .003 seconds. I tried mounting the script results on my desktop with an OS X widget called GeekTool and saw similar sub-second times reported.
I suspect the difference is due to some user error: some reason the time wget command I'm running doesn't behave the same in each context. Can anyone tell me why the time this script reports differs so much between user (me running it by hand) and system (cron job or GeekTool), and how I might correct the discrepancy?
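To narrow down whether cron is seeing a different environment than my login shell (a different wget on PATH, or a different `time`), I've been thinking a debug job like this would help; the log path is arbitrary:

```shell
#!/bin/sh
# Debug sketch: record what the non-interactive environment actually looks
# like, so it can be diffed against an interactive shell. Log path is arbitrary.
{
  date
  echo "PATH=$PATH"
  command -v wget || echo "no wget on PATH"
  command -v time || echo "no standalone time; probably a shell keyword"
} >> /tmp/env-debug.log 2>&1
```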
I feel a little silly asking what would seem to be a Google-able question, but I'm trying to script out a repetitive task of (1) sshing into a remote server, (2) running script.sh in my home directory, and (3) copy/pasting the output.
It would be great if I could write a script to do this work for me. I've written some bash scripts that scp files from these machines, but never one that ran scripts on them.
Is it possible to have a script on machine 1 and log in and execute script.sh on machine 2 through machine n, dumping the output on machine 1? If so, how?
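I imagine something along these lines is what I'm after, assuming ssh will take a command as an argument the way I think it does (hostnames, user, and script path are placeholders):

```shell
#!/bin/sh
# Sketch: run a script on each remote machine and collect the output locally.
# Hostnames, the user, and the script path are placeholders.
for host in host2 host3 host4; do
  ssh user@"$host" 'bash ~/script.sh' > "output-$host.txt" 2>&1
done
```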
I'm working with an API that requires the machine's external IP. As far as I know, the PHP environment I'm using can only get our internal IP.
The option on the table is using an external service such as whatismyip.com to tell us:
wget -q -O - http://whatismyip.com/automation/n09230945.asp
My concern is what happens if that fails. Is there a bulletproof way of determining a machine's IP without relying on external services?
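The most robust version I can think of, short of asking our own infrastructure, is to try more than one service and fall back if the first fails; a sketch (the service URLs are examples, and the IPv4 sanity check is deliberately rough):

```shell
#!/bin/sh
# Sketch: query a couple of "what is my IP" services and take the first
# answer that looks roughly like an IPv4 address. URLs are illustrative.
for url in http://whatismyip.com/automation/n09230945.asp http://icanhazip.com; do
  ip=$(wget -q -O - "$url" | tr -d '[:space:]')
  case "$ip" in
    *[0-9].*[0-9].*[0-9].*[0-9]) echo "$ip"; exit 0 ;;
  esac
done
echo "could not determine external IP" >&2
exit 1
```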
Continuing from a Stack Overflow question, I've got a .sh that tries to (1) run a script that sets environment vars and then (2) run a php script.
The crontab entry (I've tried with and without the >> output redirection):
*/1 * * * * /home/user/public_html/domain.com/private/ec2-api-tools/php/doQueue.sh >> /home/user/output.txt
The script, doQueue.sh (running this by hand works):
#/bin/sh
. ./environment.sh
php process_queue.php
The environment.sh (again, works by hand):
#!/bin/sh
echo "running environment"
PATH=$EC2_HOME/bin:$PATH
EC2_HOME=/home/user/public_html/domain.com/private/ec2-api-tools
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/user/public_html/domain.com/private/ec2-api-tools/bin
MAIL=/var/mail/root
PWD=/home/user/public_html/domain.com/private/ec2-api-tools
JAVA_HOME=/usr/lib/jvm/java-6-sun/jre/
LANG=en_US.UTF-8
EC2_PRIVATE_KEY=/home/user/public_html/domain.com/private/ec2-api-tools/pk-hash.pem
EC2_CERT=/home/user/public_html/domain.com/private/ec2-api-tools/cert-hash.pem
I've tried variations using sh and . to no avail. What's the best way to troubleshoot this?
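My current suspicion is the working directory: cron runs jobs from $HOME (or /), so a relative `. ./environment.sh` can't find the file. This self-contained demo (throwaway temp paths only, stand-in file contents) shows what I mean by resolving the script's own directory before sourcing:

```shell
#!/bin/sh
# Demo: `. ./environment.sh` depends on the caller's working directory,
# which under cron is not the script's directory. Resolving the script's
# own directory first makes the source line location-independent.
# Everything here uses a throwaway temp dir with stand-in contents.
workdir=$(mktemp -d)
printf 'MY_VAR=from_environment_sh\n' > "$workdir/environment.sh"
cat > "$workdir/doQueue.sh" <<'EOF'
#!/bin/sh
dir=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
. "$dir/environment.sh"
echo "MY_VAR=$MY_VAR"
EOF
chmod +x "$workdir/doQueue.sh"
( cd / && "$workdir/doQueue.sh" )   # simulate cron's working directory
```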
I know this is tee-ball for veteran sysadmins, but I'm looking to search a directory tree for file contents that match a regex (here, the word "Keyword"). I've gotten that far, but now I'm having trouble ignoring files in a hidden (.svn) directory tree.
Here's what I'm working with. You can see that I'm fine searching for files that include ".svn" in the name, but I can't seem to invert the -iname test with a ! as I've seen in other docs.
find . -exec grep "Keyword" '{}' \; -iname .svn; -print
The above returns pretty much anything and everything.
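For what it's worth, what I've been reading suggests -prune rather than negating -iname; this self-contained sketch (it builds a throwaway demo tree) is what I understand the idiom to be:

```shell
#!/bin/sh
# Sketch: prune the .svn directory so find never descends into it, then
# grep the remaining files. A tiny demo tree is created in a temp dir.
tree=$(mktemp -d)
mkdir -p "$tree/src" "$tree/.svn"
echo "Keyword here" > "$tree/src/match.txt"
echo "Keyword here" > "$tree/.svn/ignored.txt"
find "$tree" -name .svn -prune -o -type f -exec grep -l "Keyword" {} +
```

Only src/match.txt should print; the copy inside .svn is never visited.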