I launched both and they appear to be identical. One says ebs and the other says gp2.
ami-b270a8cf amzn2-ami-hvm-2017.12.0.20180328.1-x86_64-ebs
ami-f973ab84 amzn2-ami-hvm-2017.12.0.20180328.1-x86_64-gp2
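So far the only way I know to compare them is to dump the image metadata, e.g. with the AWS CLI (assuming it's installed and configured):

aws ec2 describe-images --image-ids ami-b270a8cf ami-f973ab84 \
    --query 'Images[].[ImageId,Name,BlockDeviceMappings]' --output json

That at least shows the two block device mappings side by side.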
I've tried setting the following, but I'm not getting the desired state of rw-rw----.
drwxrwsr-x 2 clientname airflow 28 Sep 11 15:17 incoming
Match group sftpusers
ForceCommand internal-sftp -u 0002
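As a sanity check of the umask arithmetic itself (outside of sftp entirely), my understanding is that -u 0002 should yield 0664, not 0660:

( umask 0002; touch /tmp/umask-test && ls -l /tmp/umask-test; rm -f /tmp/umask-test )
# -rw-rw-r-- ... /tmp/umask-test   (0666 & ~0002 = 0664, not the rw-rw---- I'm after)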
I want my app to write its log entries to its own log file. I create the rule file and save it under /etc/rsyslog.d, but the designated log file doesn't get created or written to. However, once I reload rsyslog, log entries show up the next time my app runs.
I package my app as an RPM so I can add post-install scripts if necessary. Is handling this via a post-install script the proper way to do it?
if $programname == 'serf' then /var/log/serf.log
& ~
Sending pkill -HUP rsyslog works, but I wasn't sure whether that would cause any issues for other programs while they're in the middle of logging.
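What I had in mind for the post-install scriptlet is roughly this (the serf.conf file name is my own choice; the rule is the one above):

%post
# drop the rule into rsyslog's include directory and tell rsyslog to re-read its config
cat > /etc/rsyslog.d/serf.conf <<'EOF'
if $programname == 'serf' then /var/log/serf.log
& ~
EOF
pkill -HUP rsyslog || :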
Array info:
/dev/md0 -> /dev/sda1 and /dev/sdb1
/dev/md2 -> /dev/sda2 and /dev/sdb2
Partition info:
/boot -> /dev/md0
/ -> /dev/md1
I have two drives that are set up as RAID1 using software RAID on Red Hat. I added two additional drives (same size) and I would like to convert the RAID1 to a RAID10. The problem I'm having is adding the last drive to the array. I've gotten as far as creating a RAID10 with two missing devices, but as soon as I add the last drive, all hell breaks loose. It seems /dev/sda1 is the culprit.
What I'm not too sure about is how to create the RAID10. I've tried the following
mdadm --create /dev/md2 --level=raid10 --raid-devices=4 /dev/sdc1 missing /dev/sdd1 missing
I then proceeded to fail /dev/sdb1 from /dev/md0 and added that partition to /dev/md2. I installed the MBR on EACH drive, since /boot resides on /dev/sdx1 on each drive. As a test all is well: I'm able to boot back into the system after a quick reboot. Now, when I go to add the last drive, /dev/sda1, it breaks. I attempted to install grub on /dev/sda1 and I get the following ...
grub> root (hd0,0) /dev/sda
root (hd0,0) /dev/sda
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... no
Error 2: Bad file or directory type
At this point I believe the array is hosed. I rebooted the server and it refuses to boot.
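For clarity, the fail/add step was along these lines (a sketch from memory, not a verbatim transcript; device names as above):

mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md2 --add /dev/sdb1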
Let's say I want to use ONE tape a week for incremental. I would alternate two tapes per month so that a tape gets used every other week. I'll run a FULL every Saturday morning at 12:01am.
If I run an incremental Tues-Fri at 12:01am, would the following directives work?
Volume Retention = 5
Volume Use Duration = 5
I'm using Bacula but this would pertain to any backup solution.
I'm using LTO5 tapes (1.5TB/3TB), but my data is nowhere near that capacity for the time being. So if I set the retention for my incrementals to 4 weeks, what happens to the tape if it's not full? Does Bacula go to the next tape?
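For context, the Pool definition I have in mind would look roughly like this in bacula-dir.conf (the name is made up, and I'm assuming the retention values need a unit such as days):

Pool {
  Name = Weekly-Incremental
  Pool Type = Backup
  Volume Retention = 5 days
  Volume Use Duration = 5 days
  Recycle = yes
}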
We migrated our DNS servers from Linux to AD-integrated Windows 2003. Is it possible to replicate the reverse zones? We have about 6 DNS servers and it would suck if I had to go into each server to add a PTR record. I've been adding new A records on a local DNS server and things have been working as expected. However, the reverse mapping doesn't propagate between DNS servers.
Say I have a dual-core server with two physical processors, i.e. 4 cores total.
I've read numerous articles that state the dom0 should get one physical core to itself. By core, do they mean a single CPU core or one of the 4 logical cores? Ideally I would like to dedicate a single physical CPU (2 logical cores) to the dom0. Then I would split the other CPU between the 3 VMs. I've seen examples where people assign more than the available number of cores to a VM, and I don't know what good that would do. I mean, why would I want to assign 4 vCPUs to a single VM when I only have 2 available (if my math is correct)? I assume I only have 2 available from the one CPU since I've given dom0 a CPU to itself.
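For the dom0 side, what I have in mind is something like this (just a sketch; it assumes the xm toolstack and that dom0 is capped at 2 vCPUs on the hypervisor boot line):

# appended to the Xen line in grub: dom0_max_vcpus=2 dom0_vcpus_pin
# pin dom0's two vCPUs to the two cores of one physical CPU
xm vcpu-pin Domain-0 0 0
xm vcpu-pin Domain-0 1 1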
Given the following backup sets ...
Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Tue Jun 21 11:27:26 2011
Chain end time: Tue Jun 21 11:27:59 2011
Number of contained backup sets: 2
Total number of contained volumes: 2
Type of backup set: Time: Num volumes:
Full Tue Jun 21 11:27:26 2011 1
Incremental Tue Jun 21 11:27:59 2011 1
If I run the following command, it works (1308655646 was converted from Tue Jun 21 11:27:26 2011):
duplicity --no-encryption --restore-time 1308655646 --file-to-restore ORIG_FILE \
file:///storage/test/ restored-file.txt
However, if I run the following command, it restores from the latest set.
duplicity --no-encryption --restore-time 2011-06-21T11:27:26 --file-to-restore \
ORIG_FILE file:///storage/test/ restored-file.txt
What am I doing wrong with the time? I prefer the second form only because I don't want to have to do the conversion manually.
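For reference, the epoch value in the first command can be generated with GNU date, e.g.:

date -d '2011-06-21 11:27:26' +%s

(the result depends on the timezone the string is interpreted in, which may be part of my problem).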
Also, I noticed the filenames have a different timestamp...
duplicity-inc.20110621T172520Z.to.20110621T172537Z.manifest
duplicity-full.20110621T172520Z.vol1.difftar.gz
In the %post section of my kickstart script, I create multiple directories that are required before Puppet takes over. I noticed that 2 of my directories under /mnt do not get created. I'm wondering if this has to do with the way kickstart handles temporary images and whatnot. I know I'm able to create directories, since I also created something under / (mkdir -p /export/home) during the process. Upon reboot I see /export/home but not /mnt/volume1 and /mnt/volume2.
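A trimmed-down sketch of what the %post section does (other steps omitted; paths as above):

%post
mkdir -p /export/home
mkdir -p /mnt/volume1 /mnt/volume2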
I started to notice that my web interface hasn't updated the graphs in hours. Each time I restart the gmond process on my clients, the graphs work again. But when I come back an hour or so later, the graph is blank, just a white graph with nothing updated. If I restart gmond again, it works just fine. I'm not sure what's going on.
My setup is as follows.
Client -> gmond collector -> gmeta/web host
gmetad.conf
data_source "ENG1" 10.199.1.110
data_source "ENG2" 10.199.19.100
data_source "QA" 10.199.10.200
gmond.conf from 10.199.10.200
globals {
daemonize = yes
setuid = yes
user = nobody
debug_level = 0
max_udp_msg_len = 1472
mute = no
deaf = no
allow_extra_data = yes
host_dmax = 0 /*secs */
cleanup_threshold = 300 /*secs */
gexec = no
send_metadata_interval = 0 /*secs */
}
cluster {
name = "QA"
}
udp_send_channel {
host = 10.199.10.200
port = 8649
ttl = 1
}
udp_recv_channel {
port = 8649
}
The gmond.conf on my clients is the same as above, except it doesn't have the udp_recv_channel block defined. I forward the stats from my clients to a collector (such as 10.199.10.200), which then gets polled by the gmetad server (10.199.1.110). That server also collects data from a group of servers defined as "ENG1".
Let's say I control the example.com zone but not abc.example.com. abc.example.com is controlled by another admin, so I need to forward any requests for that subdomain to his BIND server.
example.com is running on Win2k3 while abc.example.com is running on BIND.
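In plain zone-file terms, what I'm picturing is a delegation in the parent zone along these lines (nameserver name and IP are placeholders), though I'd be doing the equivalent from the Windows DNS console:

; in the example.com zone
abc.example.com.       IN NS  ns1.abc.example.com.
ns1.abc.example.com.   IN A   192.0.2.53   ; glue for the delegated nameserver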
Yes, fdisk -l works, but what if the disks were set up as a hardware RAID?
I have 6 x 500GB SATA drives that I want to create a RAID 10 from, which gives me roughly 1.3TB. Would it benefit me to create two datastores (splitting the 1.3TB in half) or just create one large one? I need to accommodate 22 VMs.
I thought of creating 2 x RAID5 arrays (3 disks per array) but everything points to running a RAID 10 as opposed to a RAID 5.
I have a bunch of tools (nagios, munin, puppet, etc.) that get installed on all my servers. I'm in the process of building a local yum repository. I know most folks just dump all the rpms into a single folder (broken down into the correct path) and then run createrepo inside the directory. However, what would happen if you had to update the rpms?
I ask because I was going to give each piece of software its own folder.
Example one: put all packages inside one folder (custom_software)
/admin/software/custom_software/5.4/i386
/admin/software/custom_software/5.4/x86_64
/admin/software/custom_software/4.6/i386
/admin/software/custom_software/4.6/x86_64
What I'm thinking of ...
/admin/software/custom_software/nagios/5.4/i386
/admin/software/custom_software/nagios/5.4/x86_64
/admin/software/custom_software/nagios/4.6/i386
/admin/software/custom_software/nagios/4.6/x86_64
/admin/software/custom_software/puppet/5.4/i386
/admin/software/custom_software/puppet/5.4/x86_64
/admin/software/custom_software/puppet/4.6/i386
/admin/software/custom_software/puppet/4.6/x86_64
This way, if I had to update to the latest version of puppet, I can manage the files accordingly. I wouldn't know which rpms belong to which software if I threw them all into one big folder. Does that make sense?
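With the per-package layout, regenerating the metadata after an update would be something along these lines (paths as above; the loop is just a sketch):

for dir in /admin/software/custom_software/*/5.4/x86_64; do
    createrepo "$dir"
done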
I followed this link on how to create my own yum repository for base install and update purposes. If you notice, why would I need a 5 folder on top of 5.4? My installation at the moment is all 5.4, so when I ran "yum update" it wanted to go to http://domain.com/5/... instead of picking up the 5.4 directory. Is 5 basically 5.4 with the most updated packages for that tree? Meaning, if 5.5 comes out, 5 would be the latest and greatest, and if I wanted to track 5.4 I would still need a 5.4 folder to track changes with?
EDIT: I totally forgot about this thread. It turns out I had a bad hard disk. We had to redeploy this server for other needs so I finally got around to replacing the one bad disk and we're back in business.
For a few weeks now I haven't been able to figure out why I can't delete this one particular file. As root I can, but my shell script runs as a different user. So I run ls -la and the file isn't there. However, if I pass its name as a parameter, it shows up! Sure enough, the owner is root, hence I'm not able to delete it.
Notice, 6535 is missing ...
[root@server]# ls -la 653*
-rw-rw-r-- 1 svn svn 24002 Mar 26 01:00 653
-rw-rw-r-- 1 svn svn 7114 Mar 26 01:01 6530
-rw-rw-r-- 1 svn svn 8653 Mar 26 01:01 6531
-rw-rw-r-- 1 svn svn 6836 Mar 26 01:01 6532
-rw-rw-r-- 1 svn svn 3308 Mar 26 01:01 6533
-rw-rw-r-- 1 svn svn 3918 Mar 26 01:01 6534
-rw-rw-r-- 1 svn svn 3237 Mar 26 01:01 6536
-rw-rw-r-- 1 svn svn 3195 Mar 26 01:01 6537
-rw-rw-r-- 1 svn svn 27725 Mar 26 01:01 6538
-rw-rw-r-- 1 svn svn 263473 Mar 26 01:01 6539
Now it shows up if you call it directly.
[root@server]# ls -la 6535
-rw-rw-r-- 1 root root 3486 Mar 26 01:01 6535
Here's something interesting. I caught this issue because my shell script would fail to delete the directory, since 6535 is owned by root. The file actually shows up after I run "rm -rf .": I tried that earlier and it failed to remove the directory, telling me the directory isn't empty. I went in and looked, and sure enough, file "6535" finally shows up in the listing. No idea why it's doing this.
strace says the following
#strace ls -la 653* 2>&1 | grep ^open
open("/etc/ld.so.cache", O_RDONLY) = 3
open("/lib64/tls/librt.so.1", O_RDONLY) = 3
open("/lib64/libacl.so.1", O_RDONLY) = 3
open("/lib64/libselinux.so.1", O_RDONLY) = 3
open("/lib64/tls/libc.so.6", O_RDONLY) = 3
open("/lib64/tls/libpthread.so.0", O_RDONLY) = 3
open("/lib64/libattr.so.1", O_RDONLY) = 3
open("/etc/selinux/config", O_RDONLY) = 3
open("/proc/mounts", O_RDONLY) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
open("/proc/filesystems", O_RDONLY) = 3
open("/usr/share/locale/locale.alias", O_RDONLY) = 3
open("/usr/share/locale/en_US.UTF-8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US.utf8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_US/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_TIME/coreutils.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/etc/nsswitch.conf", O_RDONLY) = 3
open("/etc/ld.so.cache", O_RDONLY) = 3
open("/lib64/libnss_files.so.2", O_RDONLY) = 3
open("/etc/passwd", O_RDONLY) = 3
open("/etc/group", O_RDONLY) = 3
open("/etc/mtab", O_RDONLY) = 3
open("/proc/meminfo", O_RDONLY) = 3
open("/etc/localtime", O_RDONLY) = 3
Found out today that running screen as a different user that I sudo into won't work!
i.e.
ssh bob@server # ssh into server as bob
sudo su "monitor" -
screen # fails: Cannot open your terminal '/dev/pts/0'
I have a script that runs as the "monitor" user. We run it in a screen session in order to see the output on screen. The problem is, we have a number of users who log in with their own accounts (i.e. bob, james, susie, etc.) and then sudo into the "monitor" user. Giving them direct access to the "monitor" account is out of the question.
Don't ask why I'm doing it this way but I have to.
Say I have a user named "bob" and he needs to run a program as "monitor". I want to allow "bob" to sudo into the monitor account and run the process. Obviously I could just give "bob" sudo access to run the app, but I'm told it has to run as "monitor". Anyway, how can this be done?
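The sudoers side of that would presumably be something like this, added via visudo (the command path is a placeholder):

bob ALL = (monitor) NOPASSWD: /usr/local/bin/monitor_app

and then bob would run it with sudo -u monitor /usr/local/bin/monitor_app.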
Say I compiled and installed PHP and then wanted to add an additional extension a few months later.
Do I need to specify everything I included during my initial installation along with the new module I want?
What if I don't recall the exact command I used to compile my initial PHP installation?
Do I have to go through the whole ...
./configure ... make && make install
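(For the "don't recall the exact command" case, I gather the old flags can at least be recovered from the existing binary with something like

php -i | grep -i 'configure command'

assuming the original php is still installed, but that doesn't answer whether the full rebuild is required.)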