As the question states, what if I pass
kernel /vmlinuz audit=1 audit=0
Will auditing be enabled or disabled? Or will the kernel just freak out? Or is it undefined and dependent on the kernel build and how the arguments are parsed?
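I could obviously just try it and check after boot with something like this (assuming auditd/auditctl is installed):
# show the exact arguments the kernel was booted with
cat /proc/cmdline
# the "enabled" field shows whether auditing actually ended up on or off
auditctl -s
but I'd like to know the intended behaviour rather than what one particular build happens to do.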
So we have some workstations with identical hardware.
The Fedora 14 box has a couple of weeks of uptime and still gets good performance:
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 21766 MB in 2.00 seconds = 10902.12 MB/sec
Timing buffered disk reads: 586 MB in 3.00 seconds = 195.20 MB/sec
The CentOS 5.5 boxes, however, seem to be okay after a reboot:
/dev/sda:
Timing cached reads: 34636 MB in 2.00 seconds = 17354.64 MB/sec
Timing buffered disk reads: 498 MB in 3.01 seconds = 165.62 MB/sec
but some time later (unsure exactly when; tested at approximately 1 day of uptime) they drop to this:
/dev/sda:
Timing cached reads: 2132 MB in 2.00 seconds = 1064.96 MB/sec
Timing buffered disk reads: 160 MB in 3.01 seconds = 53.16 MB/sec
This is with very low load. I believe they all have the same BIOS settings. Any ideas what could cause this on CentOS? Ask if you need more info. It might also be worth noting that passing the --direct
flag causes the slow boxes to perform similarly to the fast ones for buffered disk reads.
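For reference, the comparison I'm running on a slow box is roughly:
# normal test, reads go through the page cache
hdparm -tT /dev/sda
# same test with O_DIRECT, bypassing the page cache
hdparm -tT --direct /dev/sda
and only the second one gives numbers comparable to the healthy boxes.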
Hey, I have to set up a file system with an encrypted partition on Ubuntu Server, something like:
Unencrypted:
/ - 10 GB
/home - 10GB
/var - 5GB
--------------
Encrypted:
/opt - 50GB
This I can figure out in the setup: just partition as normal and set up /opt as an encrypted volume with dm-crypt. However, I'm not sure how to mirror this entire drive so that if either disk failed I could still boot, and how that will affect the encrypted partition.
Any help would be appreciated.
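What I'm imagining (please correct me if this is wrong) is software RAID1 underneath everything, with dm-crypt layered on top of the md device that backs /opt, roughly along these lines (device names are just examples):
# mirror the partition intended for /opt across both disks
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
# put LUKS/dm-crypt on top of the mirror, then the filesystem
cryptsetup luksFormat /dev/md3
cryptsetup luksOpen /dev/md3 opt_crypt
mkfs.ext4 /dev/mapper/opt_crypt
with /, /home and /var on their own md mirrors as well, but I don't know whether the installer handles that cleanly or whether the encrypted layer cares what sits underneath it.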
Hey. I'm running CentOS 5.5 64-bit with one NIC, eth0. I enabled IPv4 forwarding via:
echo '1' > /proc/sys/net/ipv4/ip_forward
and the machine locks up: I can't switch TTYs and can't SSH into it, and I had to hard reboot. I did this on a similar box a week ago with no problems. I'd rather not take this machine down again needlessly, so, any ideas?
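For what it's worth, next time I'd probably check and set it via sysctl instead of echoing into /proc, which should be equivalent:
# show the current value
sysctl net.ipv4.ip_forward
# set it at runtime (same effect as the echo above)
sysctl -w net.ipv4.ip_forward=1
but I don't expect that to behave any differently.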
Hey, sorry if this is a dupe. I looked, but Google was spammed with registry-hack answers.
I need multiple users to be able to RDP into a computer on our LAN. Windows XP/7 seem to allow only one user logged in at a time, either locally or remotely. What about Windows Server? Can I have up to 10 simultaneous users logged in and active? Sorry, I come from the Unix side, where this is somewhat trivial.
Hey all, this is a repost of a question I asked on the Cisco forums but never got a useful reply to.
I'm trying to convert the FreeBSD servers at work from regular gigabit links to dual-gigabit lagg links. Our production servers are on a 3560; I have a small test environment on a 3550. I have achieved failover, but am having trouble achieving the speed increase. All servers are running gigabit Intel (em) cards. The configs for the servers are:
BSDServer:
#!/bin/sh
#bring up both interfaces
ifconfig em0 up media 1000baseTX mediaopt full-duplex
ifconfig em1 up media 1000baseTX mediaopt full-duplex
#create the lagg interface
ifconfig lagg0 create
#set lagg0's protocol to lacp, add both cards to the interface,
#and assign it em1's ip/netmask
ifconfig lagg0 laggproto lacp laggport em0 laggport em1 ***.***.***.*** netmask 255.255.255.0
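For completeness, the persistent equivalent of that script in /etc/rc.conf is roughly this (address masked the same way as above):
# bring up both NICs, create lagg0 at boot, and aggregate them with LACP
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 ***.***.***.*** netmask 255.255.255.0"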
The switches are configured as follows:
#clear out old junk
no int Po1
default int range GigabitEthernet 0/15 - 16
# config ports
interface range GigabitEthernet 0/15 - 16
description lagg-test
switchport
duplex full
speed 1000
switchport access vlan 192
spanning-tree portfast
channel-group 1 mode active
channel-protocol lacp
**** switchport trunk encapsulation dot1q ****
no shutdown
exit
interface Port-channel 1
description lagginterface
switchport access vlan 192
exit
port-channel load-balance src-mac
end
Obviously, change the 1000s to 100s and GigabitEthernet to FastEthernet for the 3550's config, as that switch only has 100 Mbit ports.
With this config on the 3550, I get failover and 92 Mbit/sec on both links simultaneously, connecting to 2 hosts (tested with iperf). Success. However, this only works with the "switchport trunk encapsulation dot1q" line.
First, I do not understand why I need this; I thought it was only for connecting switches. Is there some other setting this turns on that is actually responsible for the speed increase?
Second, this config does not work on the 3560. I get failover, but not the speed increase. Speeds drop from 1 Gbit/sec to 500 Mbit/sec when I make 2 simultaneous connections to the server, with or without the encapsulation line. I should mention that both switches are using source-MAC load balancing.
In my test I am using iperf. The lagg box is set up as the server (iperf -s), and the client computers run the client (iperf -c server-ip-address), so the source MAC (and IP) are different for both connections.
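Concretely, the test is roughly:
# on the lagg box
iperf -s
# on each of the two client machines, started at the same time
iperf -c server-ip-address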
Any ideas/corrections/questions would be helpful, as the gig switches are what I actually need the lagg links on. Ask if you need more information.
Hey, I am trying to set up link aggregation with LACP (well, anything that provides increased bandwidth and failover with my setup will work). I'm running FreeBSD 8.0 on 3 machines. M1 is running two 10/100 Ethernet cards set up for link aggregation using lagg. For reference:
ifconfig em0 up
ifconfig tx0 up
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport tx0 laggport em0 192.168.1.16 netmask 255.255.255.0
I plugged them into ports 1 and 2 of a Cisco 3550, then ran:
configure terminal
interface range Fa0/1 - 2
switchport mode access
switchport access vlan 1
channel-group 1 mode active
(Everything's in VLAN 1.) Now I'm able to connect the other computers to other ports on the switch, and failover works great: I can unplug cables in the middle of a transfer and the traffic gets rerouted. However, I'm not noticing any speed increase. My test setup: for load balancing I tried dst and src on the switch; neither seemed to give a speed increase. I am SCPing two 500 MB files from the lagg computer to two other computers (one file each), which are also running 10/100 full-duplex cards. I get transfer speeds of about 11.2-11.4 MB/s to a single host, and about half that (5.9-6.2 MB/s) when transferring to both at the same time. From what I understood, with destination load balancing the switch was supposed to send traffic headed for one computer over one port and traffic headed for the other computer over the other port. From the Cisco documentation:
With destination-MAC address forwarding, when packets are forwarded to an EtherChannel, the packets are distributed across the ports in the channel based on the destination host MAC address of the incoming packet. Therefore, packets to the same destination are forwarded over the same port, and packets to a different destination are sent on a different port in the channel. For the 3550 series switch, when source-MAC address forwarding is used, load distribution based on the source and destination IP address is also enabled for routed IP traffic. All routed IP traffic chooses a port based on the source and destination IP address. Packets between two IP hosts always use the same port in the channel, and traffic between any other pair of hosts can use a different port in the channel. (Link)
What am I doing wrong, and what would I need to do to see a speed increase beyond what I could get with just a single card?
EDIT: IPs/MACs: M1: 192.168.1.18 / 00e0291aba80, M2: 192.168.1.14 / 000e0c7739af, M3: 192.168.1.12 / 000874a627e5
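If it helps for diagnosing, I can check what LACP negotiated and which physical port the traffic actually leaves on with something like:
# lagg protocol and per-port state flags
# (expecting ACTIVE,COLLECTING,DISTRIBUTING on em0 and tx0)
ifconfig lagg0
# per-second interface counters during a transfer (run each in its own terminal)
netstat -w 1 -I em0
netstat -w 1 -I tx0
so let me know if that output would be useful.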