I am trying to boot a temporary Linux via UEFI network boot, and I need it to be 32-bit x86. The only 32-bit live distribution I know of is RescueCD, which seems to have no EFI stub in its kernel to boot from. I was trying to boot it via the iPXE EFI loader, which I am used to using. Is there a way to boot a non-EFI kernel from iPXE or from some other network-bootable bootloader? Or perhaps there is some 32-bit live Linux distribution I have missed?
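For reference, this is the kind of iPXE script I am used to (the server URL and file names here are just placeholders):

#!ipxe
dhcp
kernel http://boot.example.org/rescue32/vmlinuz
initrd http://boot.example.org/rescue32/initrd.img
boot

As far as I understand, under the EFI build of iPXE the kernel command starts the kernel through its EFI stub, so a kernel built without one fails to launch; that is the wall I am hitting.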
I have a SLES 11 SP4 VM that initially ran on Xen in PV mode. Now I am moving it to KVM. My usual approach is to netboot any Linux in the target VM, mount the root of the target OS, chroot, rebuild the initramfs, and then reboot the VM into the target OS.
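Concretely, the procedure looks roughly like this (device names and mount points are hypothetical):

# netboot any live Linux inside the target VM, then:
mount /dev/vda2 /mnt                      # root filesystem of the target OS
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt mkinitrd                      # rebuild the initramfs for the new platform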
SLES 11 SP4 seems to lack something, because after that the initramfs cannot find any vbd device to mount the root on. However, I have managed to run it via a direct qemu command on the KVM host:
qemu-kvm -m 32768 -smp 8 \
  -device virtio-net-pci,mac=42:5f:96:48:39:fa,netdev=vmnic \
  -netdev tap,id=vmnic,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown \
  -nographic -serial mon:stdio \
  -drive file=/dev/lvm/vm,if=none,id=drive0,format=raw \
  -device virtio-blk-pci,drive=drive0,scsi=off
and it works fine.
The disk-related part of the KVM (libvirt) config looks like this:
<devices>
  <emulator>/usr/bin/qemu-system-x86_64</emulator>
  <disk type="block" device="disk">
    <driver name="qemu" type="raw" cache="none" io="native"/>
    <source dev="/dev/lvm/vm"/>
    <target dev="vda" bus="virtio"/>
    <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
  </disk>
  <controller type="pci" index="3" model="pcie-root-port">
    <model name="pcie-root-port"/>
    <target chassis="3" port="0xa"/>
    <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x2"/>
  </controller>
and virt-manager does not allow me to make significant changes here.
I might be wrong, but I think the main difference is the PCI device structure, so the initramfs works in one case but not in the other. I have compared the PCI devices.
Device tree found on the VM that was run directly via the qemu command:
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Device 1234:1111 (rev 02)
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device
Device tree found on any other KVM VM (same host):
00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
00:01.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.4 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.5 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.6 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:01.7 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)
00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
02:00.0 USB controller: Red Hat, Inc. QEMU XHCI Host Controller (rev 01)
03:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01)
04:00.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon (rev 01)
05:00.0 Unclassified device [00ff]: Red Hat, Inc. Virtio RNG (rev 01)
08:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01)
Here I see the difference: qemu allows attaching storage directly to the root PCI host bridge, but under libvirt/KVM it is always attached to a QEMU PCIe Root port.
My questions are:
- Is it possible that SLES 11 is simply too old to support the QEMU PCIe Root port?
- Is it possible to adjust the VM configuration to attach the storage to the host bridge directly?
- I rebuild the initramfs in the target environment, adding nothing to the config files. Am I missing something (hooks or drivers) when rebuilding it? (Two ideas I am weighing are sketched below.)
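Sketch 1 explicitly lists the virtio modules for the SLES 11 initramfs; sketch 2 switches the libvirt machine type to the legacy i440FX ("pc") layout, which puts devices on the root PCI bus just like my plain qemu run. Both are assumptions on my part, not verified fixes:

# Idea 1: SLES 11 mkinitrd takes its module list from INITRD_MODULES in
# /etc/sysconfig/kernel; add the virtio drivers there (assuming that is
# what the current initramfs is missing), then rebuild inside the chroot:
#   INITRD_MODULES="... virtio_pci virtio_blk"
chroot /mnt mkinitrd

# Idea 2: via `virsh edit`, switch the machine type from Q35 to "pc" so
# the virtio disk lands on the root bus, as in the direct qemu run:
#   <os>
#     <type arch='x86_64' machine='pc'>hvm</type>
#   </os>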
I want to open a tape library programmatically via the command line. My HP MSL4048 has a 3-tape mailslot and 4 magazines, which I can open from the web interface only. From reading the manuals, I understand that the commands to open the mailslot or unlock the magazines are not standard, and every vendor may have its own way of doing this. Does anyone know the commands to eject the HPE MSL4048 mailslot or unlock its magazines? I also believe that the Quantum i40, Sun StorageTek SL48, and IBM TS3000 library series have similar hardware and may use similar commands for both actions.
I have tried:
mtx -f /dev/sg2 unlock (does nothing)
mtx -f /dev/sg4 eepos 1 transfer 32 32 (gives an error)
mtx -f /dev/sg4 eepos 0 transfer 32 32 (gives "Source Element Address 1032 is Empty")
as well as every other eepos value, as sketched below.
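To be precise about "every other eepos value", I looped over the whole range, roughly:

# every value produced the same error or "Source Element Address 1032 is Empty"
for e in 0 1 2 3; do
    mtx -f /dev/sg4 eepos $e transfer 32 32
done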
I have a server with two FC adapters connected to a Sun 6140 (IBM DS4700) array. Adapter 1 is connected to a Cisco MDS switch, and that switch is connected to both array controllers. Adapter 2 is connected directly to one of the array controllers.
The Sun 6140 is declared multipath-ready and I have multipathd working. However, multipath -ll shows three paths to my disks, and only the two paths on adapter 1 are working at the moment; the third (direct) one is in the ghost state. My best guess is that the array accepts multipath from a single point only, and my question is: which parameter does it rely on?
Is it the WWNN, and if so, how can I change the WWNN? Or is there something else I can do so that the array thinks it is dealing with one machine on all three links?
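For reference, here is how I read the identifiers each adapter presents, using the standard Linux fc_host sysfs attributes:

for h in /sys/class/fc_host/host*; do
    echo "$h: WWNN=$(cat $h/node_name) WWPN=$(cat $h/port_name)"
done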
I am trying to use the new VLAN-filter-capable bridge on a virtualization host running OEL 8.1. The host does not support OpenVSwitch out of the box, and I think a VLAN-aware bridge might work instead.
Now I have (configured via nmcli):
[root@nano ~]# bridge vlan
port    vlan ids
eno5     22 PVID untagged
         24
br0      22 PVID untagged
         24 untagged
Interface br0 has an IP address, which I think is not right, because there should be something like br0.22 holding that address.
So there is a bridge, and the main interface carries two VLANs:
1. 22 - the main VLAN for the host
2. 24 - the main VLAN for VMs
The question is: how exactly can KVM be configured so that a VM is attached to VLAN 24 only, and how exactly are these VLANs meant to be used?
The KVM documentation says you can define a network with portgroups for OpenVSwitch (and I have that working somewhere), or you can define a network for a plain bridge. The old technique is to make a separate bridge for each VLAN, and I might do that here, but there is no way to create sub-bridges of a VLAN-aware bridge.
It is not possible to create a vlan22 interface either (nmcli con add type vlan con-name vlan22 dev br0 id 22), because it will not work.
It appears I am confused by this technology and by the way it keeps VLAN tags between bridge slaves.
Can anyone point me in the right direction?
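For what it is worth, my current understanding (an assumption I have not verified) is that after a VM starts, I could tag its tap port into VLAN 24 by hand; vnet0 below stands for whichever tap device libvirt created:

bridge vlan del dev vnet0 vid 1                  # drop the default PVID 1
bridge vlan add dev vnet0 vid 24 pvid untagged   # the VM sees untagged VLAN 24

But I would like to know the proper, persistent way to express this in the KVM/libvirt network configuration.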