We are using a SAN for our data storage. I recently got the SAN connected to a CentOS 5.3 server using the EMC PowerPath software along with the Navisphere Agent running on CentOS. However, I have now received the final production server and need to set this all up again. Getting PowerPath to work is a huge problem, and I am trying to determine what the best course of action would be.
- Should I use the built in multipathd that is already on CentOS?
- If I do, what is likely not to work?
- Is setting up multipath any harder than setting up PowerPath?
Notes
- The server needs to mount the SAN as an LVM volume at boot
- The server boots off its own internal drives, with only the software and data on the SAN
- CentOS 5.3 is loaded and up to date
- The server has 2 cards connected to the SAN, with Paths A and B set up in failover. I did not set this up; it is done by the networks team. I am only dealing with the OS side of things
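For reference, the native dm-multipath setup on CentOS 5 is only a handful of steps. This is a sketch, not a verified procedure for this particular array: the package and service names are the stock CentOS 5 ones, but the exact multipath.conf settings should come from EMC's documentation, and the lvm.conf filter shown is only an illustrative example.

```
# Install and enable the native multipath tools (stock CentOS 5 packages)
yum install device-mapper-multipath

# Edit /etc/multipath.conf: the shipped file blacklists everything by
# default (devnode "*") -- comment that blacklist out before starting.

modprobe dm-multipath
service multipathd start
chkconfig multipathd on

# The multipathed LUNs appear under /dev/mapper/; verify with:
multipath -ll

# For LVM at boot, create the PV/VG on the /dev/mapper device, and
# consider filtering the underlying /dev/sd* paths in /etc/lvm/lvm.conf
# so vgscan does not see each physical volume once per path, e.g.
# (illustrative filter, adjust to your internal disks):
#   filter = [ "a|/dev/mapper/.*|", "a|/dev/sda.*|", "r|.*|" ]
```

Since the volume lives on Fibre Channel rather than iSCSI, the normal boot-time LVM activation should find the volume group once multipathd starts before the volume is needed.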
Additional Information
dmesg | grep ql
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
PCI: If a device doesn't work, try "pci=routeirq". If it helps, post a report
VFS: Disk quotas dquot_6.5.1
Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
io scheduler cfq registered (default)
serial8250: ttyS0 at I/O 0x3f8 (irq = 0) is a 16550A
serial8250: ttyS1 at I/O 0x2f8 (irq = 0) is a 16550A
00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
serio: i8042 KBD port at 0x60,0x64 irq 1
serio: i8042 AUX port at 0x60,0x64 irq 12
ehci_hcd 0000:00:1a.7: irq 50, io mem 0xdf0ff800
ehci_hcd 0000:00:1d.7: irq 58, io mem 0xdf0ffc00
uhci_hcd 0000:00:1a.0: irq 66, io base 0x0000cc40
uhci_hcd 0000:00:1a.1: irq 74, io base 0x0000cc60
uhci_hcd 0000:00:1d.0: irq 58, io base 0x0000cc80
uhci_hcd 0000:00:1d.1: irq 82, io base 0x0000cca0
ata1: SATA max UDMA/133 cmd 0xcc10 ctl 0xcc08 bmdma 0xcc20 irq 98
ata2: SATA max UDMA/133 cmd 0xcc18 ctl 0xcc0c bmdma 0xcc28 irq 98
qla2xxx 0000:04:00.0: Found an ISP2532, irq 106, iobase 0xffffc2000000e000
qla2xxx 0000:04:00.0: Configuring PCI space...
qla2xxx 0000:04:00.0: Configure NVRAM parameters...
qla2xxx 0000:04:00.0: Verifying loaded RISC code...
qla2xxx 0000:04:00.0: Allocated (64 KB) for EFT...
qla2xxx 0000:04:00.0: Allocated (1414 KB) for firmware dump...
scsi3 : qla2xxx
qla2xxx 0000:04:00.0:
qla2xxx 0000:05:00.0: Found an ISP2532, irq 114, iobase 0xffffc20000022000
qla2xxx 0000:05:00.0: Configuring PCI space...
qla2xxx 0000:05:00.0: Configure NVRAM parameters...
qla2xxx 0000:05:00.0: Verifying loaded RISC code...
qla2xxx 0000:05:00.0: Allocated (64 KB) for EFT...
qla2xxx 0000:05:00.0: Allocated (1414 KB) for firmware dump...
scsi4 : qla2xxx
qla2xxx 0000:05:00.0:
qla2xxx 0000:04:00.0: LIP reset occured (f8f7).
qla2xxx 0000:04:00.0: LIP occured (f8f7).
qla2xxx 0000:04:00.0: LIP reset occured (f700).
qla2xxx 0000:04:00.0: LOOP UP detected (4 Gbps).
qla2xxx 0000:05:00.0: LIP reset occured (f8f7).
qla2xxx 0000:05:00.0: LIP occured (f8f7).
qla2xxx 0000:05:00.0: LIP reset occured (f700).
qla2xxx 0000:05:00.0: LOOP UP detected (4 Gbps).
SELinux: initialized (dev mqueue, type mqueue), uses transition SIDs
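The two ISP2532 entries above are the two QLogic FC HBAs (registered as scsi3 and scsi4). If it helps, their WWPNs — which are what the networks team uses for zoning — can be read from sysfs; the host3/host4 numbers here are assumed from the scsi3/scsi4 lines in the log:

```
# Each qla2xxx HBA registers an fc_host; the port name is the WWPN
# that appears in the switch zoning configuration.
cat /sys/class/fc_host/host3/port_name
cat /sys/class/fc_host/host4/port_name
```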
I'm using the same kind of SAN from Dell EMC. Can you provide the output of dmesg | grep ql, or run emcgrab and post the resulting .html? The other thing I need to know is whether your SAN box is connected through a SAN switch or directly. I ask because with a switch you will have two paths, A and B, and you need to find out which is which. Some useful tips below:
http://www.linuxquestions.org/questions/linux-enterprise-47/connect-debian-etch-to-ibm-san-meaning-of-sns-scan-failed-570598/
http://forums13.itrc.hp.com/service/forums/bizsupport/questionanswer.do?admit=109447627+1249019619056+28353475&threadId=1154098
http://forums.novell.com/novell-product-support-forums/suse-linux-enterprise-server-sles/sles-configure-administer/362473-lun-not-visible.html
Diego,
multipathd is easy to use, but I recommend checking EMC's best practices guide to get started. It works equally well over iSCSI or Fibre Channel and plugs directly into Linux's device-mapper.
Useful settings for multipath on Fibre Channel:
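As a starting point, a device stanza along these lines is what RHEL 5-era documentation commonly shows for CLARiiON arrays (vendor string "DGC"). Treat it as a sketch and verify every value against EMC's best practices guide for your array and CentOS release:

```
# /etc/multipath.conf -- illustrative CLARiiON stanza for RHEL/CentOS 5
devices {
    device {
        vendor                  "DGC"
        product                 "*"
        path_grouping_policy    group_by_prio
        prio_callout            "/sbin/mpath_prio_emc /dev/%n"
        hardware_handler        "1 emc"
        path_checker            emc_clariion
        features                "1 queue_if_no_path"
        failback                immediate
        no_path_retry           60
    }
}
```

With group_by_prio, the paths through the owning storage processor form the active group and the other SP's paths form the standby group, which matches the Path A/B failover setup described in the question.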
In short, I've used multipathd on CentOS over Fibre Channel with the qla2xxx driver successfully (albeit on a 3PAR storage array).
Diego,
We tried PowerPath before (1-2 years ago), with CentOS 4.x connected to an EMC CLARiiON via FC.
Setting up native multipath is easier IMO, but really it's not a huge difference in difficulty.
Mind you, this was an older version, but the difference we saw was that PowerPath crippled our disk I/O. We had EMC support swear up and down that our setup was fine, but the throughput was bad. The weird thing is, when we uninstalled PowerPath, local disk I/O (which we had been testing for comparison) got significantly better as well.
I'd be interested in finding out if it still has the same issues.
No surprise, then: when we hooked up a new EMC last month, we opted to stick with native multipathing. The servers and the EMC are still in testing, but so far, no problems.
Still, I'd be interested to hear your experience with and without PowerPath.
--Kyle
Looking at your dmesg | grep ql output, I see some issues. Have you created zoning for your SAN box, and are you able to see the new devices (e.g. /dev/sda, /dev/sdb) with fdisk -l for the virtual disks you created on the SAN? This usually comes down to the zoning being done the right way. I'll upload a screen shot for you; in the meantime, here is a Navisphere Server Utility session showing what you should see:
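If the zoning has just been changed, the HBAs can be rescanned for new LUNs without a reboot. This is a sketch using the standard sysfs rescan interface; adjust host3/host4 to match the scsi3/scsi4 HBAs in your dmesg:

```
# Force each QLogic HBA to rescan the fabric for newly zoned LUNs
echo "- - -" > /sys/class/scsi_host/host3/scan
echo "- - -" > /sys/class/scsi_host/host4/scan

# The new block devices should then show up:
fdisk -l 2>/dev/null | grep '^Disk /dev/sd'
```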
cd /opt/Navisphere/bin/
./naviserverutilcli
Welcome to Navisphere Server Utility - version : 6.28.20.1.40
Select one of the following options, or select '0' to exit the application:
1. Update Server Information - Select this option to send information about the server to all connected storage systems.
2. Snapshot Tasks (Navisphere Express only) - Select this option to perform Snapshot tasks on the source server or the secondary server.
3. Generate high availability report.
4. Display help for the application.
Scanning ...
Connected Storage Systems:
HBA/NIC Port  Storage System  SP  Port  SP IP Address
0             FCNPR063600473  B   0     10.5.1.82
0             FCNPR063600473  B   1     10.5.1.82
Virtual Disks on External Storage Systems:
Device Name  File System  Virtual Disk    SP IP Address  SP
sdf                       BCINICSMS001_1  10.5.1.82      B
sdk                       BCINICSMS001_1  10.5.1.82      B
sdb                       BCINICSMS001_2  10.5.1.82      B
sdg                       BCINICSMS001_2  10.5.1.82      B
sdj                       BCINICSQL001    10.5.1.82      B
sde          /VM/sql01    BCINICSQL001    10.5.1.82      B
sdc                       SMS1_XEN        10.5.1.82      B
sdh                       SMS1_XEN        10.5.1.82      B
sdd                       SMS2_XEN        10.5.1.82      B
sdi                       SMS2_XEN        10.5.1.82      B
Please verify the information above. If it is correct, you can update the server with the attached storage systems. If the information is incorrect you can scan again and then update.
Please select [u]pdate, [s]can, [c]ancel:
If you have done the zoning the right way, then this is what you should get after a restart of your server.
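To tie this back to the native-multipath suggestion: each virtual disk above is visible twice, once per path (e.g. sdf and sdk are the same LUN), and with multipathd running those pairs collapse into a single /dev/mapper device. Purely as an illustration of the RHEL/CentOS 5 multipath -ll output format — the WWID, size, and device numbers here are made up:

```
mpath0 (36006016...) dm-2 DGC,RAID 5
[size=100G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=1][active]
 \_ 3:0:0:0 sdf 8:80  [active][ready]
\_ round-robin 0 [prio=0][enabled]
 \_ 4:0:0:0 sdk 8:160 [active][ready]
```

The active group is the path through the LUN's owning storage processor; the enabled group is the standby path that takes over on failure.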