I'm in search of a nice web front end to a medium-sized Active Directory. The main use case is to automate tasks that can't be done with copy-from-template (things like assigning proper UNIX attributes for IDMU, creating contacts for email forwards, etc.) and to be able to create certain simple interfaces - i.e. stuff that one could give HR to work with, or certain special uses where user data is kept in AD and needs to be modified frequently by less privileged people for their custom apps that use AD as a credential database. Any recommendations?
pfo's questions
I'd like to know the exact math for calculating usable capacity on a NetApp filer's aggregates. From experience I've been using a magic factor of 0.65-0.7 times the net RG (RAID group) capacity of all the aggregate's RGs.
Just as a simple example: three shelves, each with 24 1 TB spindles, with 3 spares, form an aggregate with an RG size of 15, i.e. 3 RGs, one plex, 45 TB total capacity. The usable capacity from the RGs is (15-2)*3 = 39 TB. There are no volumes on this aggregate and the aggregate snap reserve is 5%.
The system reports a usable capacity of 27 TB on that aggregate, which is pretty much 39 TB times the magic factor. Can anyone provide some insight?
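For comparison, here is a minimal sketch of the deductions I'm aware of between raw RG capacity and what the filer reports. The right-sizing and WAFL-reserve percentages below are my assumptions, not vendor-confirmed figures; only the 5% snap reserve comes from the example above:

```python
# Rough model of NetApp aggregate usable capacity. All factors except the
# 5% aggregate snap reserve are assumptions for illustration: disk
# right-sizing (~15% loss assumed) and the WAFL filesystem reserve (10%).
def usable_tb(n_rgs, rg_size, parity_per_rg=2, disk_tb=1.0,
              rightsize=0.85, wafl_reserve=0.10, snap_reserve=0.05):
    """Usable TB for an aggregate of RAID-DP groups of rg_size disks each."""
    data_tb = n_rgs * (rg_size - parity_per_rg) * disk_tb
    return data_tb * rightsize * (1 - wafl_reserve) * (1 - snap_reserve)

# The example aggregate: 3 RGs of 15 disks -> 39 TB of data disks.
print(round(usable_tb(3, 15), 2))  # prints 28.34
```

With these assumed percentages the model lands at roughly 28 TB, in the same ballpark as the 27 TB the system reports, which suggests the "magic factor" is just the product of right-sizing, WAFL reserve and snap reserve.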
In my central syslog I can see some instances of the following error from LSI's RDAC multipath driver for Linux.
[RAIDarray.mpp]MY_NICE_STORAGE_ARRAY:1:0:7 Cmnd-failed try alt ctrl 0. vcmnd SN 2436 pdev H1:C0:T0:L7 0x05/0x94/0x01 0x08000002 mpp_status:1
also some instances of
[RAIDarray.mpp]MY_NICE_STORAGE_ARRAY:1:0:10 Illegal Request ASC/ASCQ 0x20/0x0, SKSBs 0x0/0x0/0x0
followed by
[RAIDarray.mpp]MY_NICE_STORAGE_ARRAY:1:0:10 IO FAILURE. vcmnd SN 887 pdev H2:C0:T0:L10 0x05/0x20/0x00 0x08000002 mpp_status:1
I get it from nearly all of my machines in the SAN during the day, but not from all of them at once - usually from one machine every five hours or so. All FC switches and all FC HBAs show no errors from today, and all paths to every LUN are up when I check them. Performance (IOPS and sequential access) is also fine. Has anyone seen this?
What's the most obvious way of achieving the following: the site has a working AD infrastructure, and certain parts of the infrastructure are tightly coupled GNU/Linux machines. People from the AD OU ou=linux-users,dc=example,dc=com
should be able to log onto the Linux part of the infrastructure using their AD credentials, but without involving the DC in the Linux machines' PAM stack, i.e. there should be some kind of synchronization from AD to slapd, plus augmentation with the POSIX attributes (uid, gid, homedir, password). The slapd on the Linux machines is OpenLDAP; the AD schema is from Windows 2003 and lacks the POSIX attributes.
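To make the augmentation step concrete, here is a sketch of what I have in mind: mapping a minimal AD user entry to a posixAccount record for import into slapd. The attribute choices, the uid-number assignment and the homedir layout are all my assumptions, not a finished sync tool:

```python
# Hypothetical mapper from an AD user entry (as a dict of attributes) to an
# LDIF posixAccount record for slapd. uidNumber/gidNumber assignment and the
# /home/<uid> layout are assumptions; a real sync job would also carry the
# password hash and run on a schedule or via a persistent search.
def ad_to_posix_ldif(ad_entry, uid_number, gid_number=1000,
                     base_dn="ou=people,dc=example,dc=com"):
    uid = ad_entry["sAMAccountName"]
    lines = [
        f"dn: uid={uid},{base_dn}",
        "objectClass: inetOrgPerson",
        "objectClass: posixAccount",
        f"uid: {uid}",
        f"cn: {ad_entry.get('displayName', uid)}",
        f"sn: {ad_entry.get('sn', uid)}",
        f"uidNumber: {uid_number}",
        f"gidNumber: {gid_number}",
        f"homeDirectory: /home/{uid}",
        "loginShell: /bin/bash",
    ]
    return "\n".join(lines) + "\n"

print(ad_to_posix_ldif({"sAMAccountName": "jdoe", "sn": "Doe"}, 10001))
```

The resulting LDIF could then be fed to ldapadd against the local slapd; the open question is really the transport and change-detection side, not this mapping.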
Is there a sane method to set uEFI settings (for machines with no OS) from a remote machine? I've just spent hours changing the boot order of a handful of machines. The extended boot-up times of uEFI machines are horrible!
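For what it's worth, if the boxes have IPMI-capable BMCs, setting the first boot device out-of-band is one candidate. The sketch below only prints the ipmitool invocations (a dry run); hostnames and credentials are placeholders, and it assumes the vendor firmware honours the IPMI boot flags (the `options=efiboot` modifier requests an EFI-mode boot on BMCs that support it):

```python
# Dry-run sketch: build and print the out-of-band ipmitool command that would
# set the first boot device on each machine's BMC. Hostnames and credentials
# are placeholders; swap print() for subprocess.run(cmd) to actually execute.
def ipmi_bootdev_cmd(host, device="pxe"):
    return ["ipmitool", "-I", "lanplus", "-H", f"{host}-bmc",
            "-U", "admin", "-P", "secret",
            "chassis", "bootdev", device, "options=efiboot"]

for host in ["node01", "node02", "node03"]:
    print(" ".join(ipmi_bootdev_cmd(host)))
```

That would at least avoid sitting through each machine's firmware setup screen just to reorder boot entries.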
I just inserted a 16-port 10GbE line card into a Cisco Catalyst 6509 series switch and nothing happens(™). The chassis recognizes the new line card but doesn't power it up. Output from show power
and show module
follows:
#show power
system power redundancy mode = redundant
system power total =     2771.16 Watts (65.98 Amps @ 42V)
system power used =      1837.50 Watts (43.75 Amps @ 42V)
system power available =  933.66 Watts (22.23 Amps @ 42V)
                        Power-Capacity PS-Fan Output Oper
PS   Type               Watts   A @42V Status Status State
---- ------------------ ------- ------ ------ ------ -----
1    WS-CAC-3000W       2771.16  65.98 OK     OK     on
2    WS-CAC-3000W       2771.16  65.98 OK     OK     on
                        Pwr-Allocated  Oper
Fan  Type               Watts   A @42V State
---- ------------------ ------- ------ -----
1    WS-C6509-E-FAN      150.36   3.58 OK
                        Pwr-Requested  Pwr-Allocated  Admin Oper
Slot Card-Type          Watts   A @42V Watts   A @42V State State
---- ------------------ ------- ------ ------- ------ ----- -----
5    (Redundant Sup)        -      -    282.24   6.72  -     -
6    WS-SUP720-3B        282.24   6.72  282.24   6.72 on    on
7    WS-X6716-10GE       457.80  10.90     -      -   on    off (not supported)
                        Inline         Inline         Inline          Inline
                        Pwr-Requested  Pwr-Allocated  Local-Pwr-Pool  Power
Slot Card-Type          Watts   A @42V Watts   A @42V Watts   A @42V  Status
---- ------------------ ------- ------ ------- ------ ------- ------  ----------
1    WS-F6K-VPWR-GE        4.62   0.11   99.54   2.37   34.44   0.82  On
2    WS-F6K-VPWR-GE         -      -      7.98   0.19   34.44   0.82  On
3    WS-F6K-VPWR-GE         -      -      7.98   0.19   34.44   0.82  On
4    WS-F6K-VPWR-GE         -      -      7.98   0.19   34.44   0.82  On
#show module
Mod Ports Card Type                              Model              Serial No.
--- ----- -------------------------------------- ------------------ -----------
  6    2  Supervisor Engine 720 (Active)         WS-SUP720-3B
  7   16  FRU type (0x6003, 0x403(1027))         WS-X6716-10GE

Mod  MAC addresses                      Hw     Fw           Sw           Status
---  ---------------------------------- ------ ------------ ------------ -------
  7  0026.cbb2.0ee0 to 0026.cbb2.0eef   1.1    Unknown      Unknown      PwrDown

Mod  Sub-Module                  Model              Hw      Status
---- --------------------------- ------------------ ------- -------
  7  Distributed Forwarding Card WS-F6700-DFC3C     1.4     PwrDown

Mod  Online Diag Status
---- -------------------
  7  Not Applicable
Note that I removed the output for modules that are irrelevant here.
Output from show version:
# show version
Cisco Internetwork Operating System Software
IOS (tm) s72033_rp Software (s72033_rp-ENTSERVICESK9_WAN-M), Version 12.2(18)SXF15a, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2008 by cisco Systems, Inc.
Compiled Tue 21-Oct-08 00:29 by kellythw
Image text-base: 0x40101040, data-base: 0x42DD4DF0

ROM: System Bootstrap, Version 12.2(17r)S2, RELEASE SOFTWARE (fc1)
BOOTLDR: s72033_rp Software (s72033_rp-ENTSERVICESK9_WAN-M), Version 12.2(18)SXF15a, RELEASE SOFTWARE (fc1)

System returned to ROM by s/w reset (SP by power-on)
System restarted at 10:27:34 MEZ Fri Dec 12 2008
System image file is "disk0:s72033-entservicesk9_wan-mz.122-18.SXF15a.bin"

cisco WS-C6509-E (R7000) processor (revision 1.2) with 458720K/65536K bytes of memory.
Processor board ID SMG0932NAAV
SR71000 CPU at 600Mhz, Implementation 0x504, Rev 1.2, 512KB L2 Cache
Last reset from power-on
SuperLAT software (copyright 1990 by Meridian Technology Corp).
X.25 software, Version 3.0.0.
Bridging software.
TN3270 Emulation software.
41 Virtual Ethernet/IEEE 802.3 interfaces
224 Gigabit Ethernet/IEEE 802.3 interfaces
1917K bytes of non-volatile configuration memory.
8192K bytes of packet buffer memory.
65536K bytes of Flash internal SIMM (Sector size 512K).

Configuration register is 0x2102
I've got a Sun Storage 7000 that I'd like to back up and restore via the built-in NDMP service. Since we've got a secondary SAN attached, available via two 10GbE links to the primary storage, I'd like to deploy it as a VTL solution. The secondary storage could easily be extended with a SCSI HBA connected to a (small) 28-slot library providing two LTO4 drives. It seems that currently no FOSS backup app can be used as an NDMP VTL (with copy to physical tape) solution. Any low-cost solutions?
I've got a brand new DS5100 SAN connected to multiple hosts (HS22 blades in a BladeCenter H) via two independent fabrics. The switch (Brocade 20-port for BladeCenter) is zoned properly, i.e. every host in the BladeCenter sees the LUNs via both fabrics. RHEL loads the qla2xxx driver for the built-in QLogic QMI2572 4Gb FC CIOv card for BladeCenter, and I can "see" the LUNs being presented in the dmesg output:
qla2xxx 0000:24:00.0: Found an ISP2532, irq 209, iobase 0xffffc20000022000
qla2xxx 0000:24:00.0: Configuring PCI space...
PCI: Setting latency timer of device 0000:24:00.0 to 64
qla2xxx 0000:24:00.0: Configure NVRAM parameters...
qla2xxx 0000:24:00.0: Verifying loaded RISC code...
qla2xxx 0000:24:00.0: Allocated (64 KB) for EFT...
qla2xxx 0000:24:00.0: Allocated (1414 KB) for firmware dump...
scsi4 : qla2xxx
qla2xxx 0000:24:00.0:
QLogic Fibre Channel HBA Driver: 8.03.00.10.05.04-k
QLogic QMI2572 - QLogic 4Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter
ISP2532: PCIe (5.0Gb/s x4) @ 0000:24:00.0 hdma+, host#=4, fw=4.04.09 (85)
PCI: Enabling device 0000:24:00.1 (0140 -> 0143)
ACPI: PCI Interrupt 0000:24:00.1[B] -> GSI 42 (level, low) -> IRQ 138
qla2xxx 0000:24:00.1: Found an ISP2532, irq 138, iobase 0xffffc20000024000
qla2xxx 0000:24:00.1: Configuring PCI space...
PCI: Setting latency timer of device 0000:24:00.1 to 64
qla2xxx 0000:24:00.1: Configure NVRAM parameters...
qla2xxx 0000:24:00.1: Verifying loaded RISC code...
qla2xxx 0000:24:00.1: Allocated (64 KB) for EFT...
qla2xxx 0000:24:00.1: Allocated (1414 KB) for firmware dump...
scsi5 : qla2xxx
qla2xxx 0000:24:00.1:
QLogic Fibre Channel HBA Driver: 8.03.00.10.05.04-k
QLogic QMI2572 - QLogic 4Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter
ISP2532: PCIe (5.0Gb/s x4) @ 0000:24:00.1 hdma+, host#=5, fw=4.04.09 (85)
qla2xxx 0000:24:00.0: LOOP UP detected (4 Gbps).
qla2xxx 0000:24:00.1: LOOP UP detected (4 Gbps).
Vendor: IBM Model: 1818 FAStT Rev: 0730
Type: Direct-Access ANSI SCSI revision: 05
scsi 4:0:0:0: Attached scsi generic sg1 type 0
Vendor: IBM Model: 1818 FAStT Rev: 0730
Type: Direct-Access ANSI SCSI revision: 05
scsi 4:0:1:0: Attached scsi generic sg2 type 0
Vendor: IBM Model: 1818 FAStT Rev: 0730
Type: Direct-Access ANSI SCSI revision: 05
scsi 5:0:0:0: Attached scsi generic sg3 type 0
Vendor: IBM Model: 1818 FAStT Rev: 0730
Type: Direct-Access ANSI SCSI revision: 05
scsi 5:0:1:0: Attached scsi generic sg4 type 0
The problem now is that they aren't recognized as SCSI disks, only as generic SCSI devices (/dev/sg{1-4}). Output from "sg_map -i -sd -x" displays:
/dev/sg1 4 0 0 0 0 IBM 1818 FAStT 0730
/dev/sg2 4 0 1 0 0 IBM 1818 FAStT 0730
/dev/sg3 5 0 0 0 0 IBM 1818 FAStT 0730
/dev/sg4 5 0 1 0 0 IBM 1818 FAStT 0730
My basic understanding is that even though this is a multipathed setup, I don't have to have multipathing enabled or actually use MPIO. I've tried a quick workaround via device-mapper multipathing but wasn't getting any output from multipathd. "sg_map" shows that these devices are disks (the -sd flag), but the LUNs are not being attached as /dev/sd*. Do I have to manually create the proper device nodes? Do I have to use IBM's RDAC or SDD driver to see them?
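In case device-mapper multipath turns out to be the way to go after all, a multipath.conf device stanza along these lines is what I'd try first. The vendor/product strings match the inquiry data above; the rdac checker/handler settings are an assumption based on the DS series being an LSI/Engenio array, and the exact keywords vary between multipath-tools versions:

```
# Assumed /etc/multipath.conf stanza for the DS5100 (inquiry: "IBM 1818").
# The rdac checker/prioritizer is a guess from the array's LSI heritage;
# older multipath-tools releases use prio_callout instead of prio.
devices {
    device {
        vendor                "IBM"
        product               "1818"
        hardware_handler      "1 rdac"
        path_checker          rdac
        prio                  rdac
        path_grouping_policy  group_by_prio
        failback              immediate
    }
}
```

If the sd devices still don't appear even with this in place, the problem is likely below multipath (the mid-layer never binding sd to the LUNs), which would point back at needing the vendor driver.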