I need to take a box that is part of a 2-node DRBD cluster offline for hardware servicing.
How do I force the primary (i.e. the remaining?) DRBD node into StandAlone so that I can shut the other node down?
Or do I just turn it off? Is that safe?
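What I had in mind was roughly the following, but I'd like confirmation before doing it in production (untested sketch, resource name r0 assumed):
% drbdadm disconnect r0    # on the surviving primary - should this leave it StandAlone?
% drbdadm down r0          # on the node going down for servicing
% poweroff                 # on the node going down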
I removed an obsolete kernel after a new kernel was installed. Now I keep getting the following message from needrestart
over and over again:
The currently running kernel version is 4.9.0-16-amd64 and there is an ABI compatible upgrade pending.
Several reboots of the box did not help. How can I fix this?
OS: Debian 9.13
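If it helps with diagnosing this, the next thing I intend to do is run needrestart's kernel check by hand in verbose mode to see where it gets that version from (sketch; -k limits it to the kernel check and -v is verbose, as far as I understand the man page):
% needrestart -k -v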
This is "blast from the past" type question.
I need to make a backup of data stored on SCSI disk used in a large industrial machine.
That machine takes SCSI Ultra320 drives like Cheetah 10K.7 ST336807LC.
This drive has an 80-pin SCA-2 connector, and this is where my trouble starts. I don't have the controller or the cable yet, and I need to know exactly what to buy and how to connect it all to the drive. I would appreciate some advice here. Apart from an hour of googling, I'm not really familiar with SCSI, so here is what I need to know:
Most refurbished Ultra320 controllers that are available appear to have 68-pin external VHDCI connectors or what seems to my untrained eye like internal wide 68-pin connectors.
I could not find any pluggable controllers with 80-pin SCA-2 connectors. So can I somehow connect that specific disk to such a controller via an interposer like this? Will a 68-pin cable connected via an interposer have the proper electrical and other properties so that I don't damage either device?
UPDATE
Sorry, I did not specify explicitly that the machine takes only SCA-2 drives.
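Once I do manage to connect it, the backup itself should be the easy part; my plan is simply a raw image of the whole drive with GNU ddrescue, something like the following (/dev/sdX is a placeholder for whatever the drive shows up as):
% ddrescue /dev/sdX st336807lc.img st336807lc.map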
I'm using KVM on Debian 10 as the host, with (for now) two guests that are Debian 10 as well. The guests are "stuttering" frequently; what I mean by that is that at least several times an hour a guest becomes unresponsive for about 10 seconds. If I have an SSH session open and I'm typing, I can do nothing until the guest spontaneously "unfreezes". It seems that what I typed just before the "freeze" is still buffered, because once the guest unfreezes it all appears on the command line.
The host box does not suffer from anything like that.
The host is part of an active/passive cluster with the following configuration:
Two smaller partitions are joined into /dev/md0 for the root fs, and two bigger partitions are joined into /dev/md1 for guest data. /dev/md1 is used as the DRBD device between the two cluster hosts; protocol C (synchronous writes) is used between the hosts. I did not really change anything in either the KVM host or guest settings, if I remember correctly - I just used the defaults. Anyway, this is the configuration I have:
Guest domain XML definition:
% cat bind.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh edit bind
or other application using the libvirt API.
-->
<domain type='kvm'>
<name>bind</name>
<uuid>6fc751ea-2ce0-4e69-b098-48b8ea0fc78a</uuid>
<metadata>
<libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://debian.org/debian/10"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>1048576</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
<vcpu placement='static'>1</vcpu>
<os>
<type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
<bootmenu enable='yes'/>
</os>
<features>
<acpi/>
<apic/>
<vmport state='off'/>
</features>
<cpu mode='host-model' check='partial'>
<model fallback='allow'/>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/vgr0/bind'/>
<target dev='vda' bus='virtio'/>
<boot order='2'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='sda' bus='sata'/>
<readonly/>
<boot order='1'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</controller>
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0x15'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0x16'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:da:43:59'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='52:54:00:25:ea:03'/>
<source bridge='br1'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='spice' autoport='yes'>
<listen type='address'/>
<image compression='off'/>
</graphics>
<sound model='ich9'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='2'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='3'/>
</redirdev>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</rng>
</devices>
</domain>
QEMU/host capabilities:
% virsh -c qemu:///system capabilities
<capabilities>
<host>
<uuid>53f34900-9b09-11e2-98e4-6c3be51bf934</uuid>
<cpu>
<arch>x86_64</arch>
<model>IvyBridge-IBRS</model>
<vendor>Intel</vendor>
<microcode version='33'/>
<topology sockets='1' cores='4' threads='1'/>
<feature name='ds'/>
<feature name='acpi'/>
<feature name='ss'/>
<feature name='ht'/>
<feature name='tm'/>
<feature name='pbe'/>
<feature name='dtes64'/>
<feature name='monitor'/>
<feature name='ds_cpl'/>
<feature name='vmx'/>
<feature name='smx'/>
<feature name='est'/>
<feature name='tm2'/>
<feature name='xtpr'/>
<feature name='pdcm'/>
<feature name='pcid'/>
<feature name='osxsave'/>
<feature name='arat'/>
<feature name='md-clear'/>
<feature name='stibp'/>
<feature name='ssbd'/>
<feature name='xsaveopt'/>
<feature name='invtsc'/>
<pages unit='KiB' size='4'/>
<pages unit='KiB' size='2048'/>
</cpu>
<power_management>
<suspend_mem/>
</power_management>
<iommu support='no'/>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
<uri_transport>rdma</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num='1'>
<cell id='0'>
<memory unit='KiB'>16298904</memory>
<pages unit='KiB' size='4'>4074726</pages>
<pages unit='KiB' size='2048'>0</pages>
<distances>
<sibling id='0' value='10'/>
</distances>
<cpus num='4'>
<cpu id='0' socket_id='0' core_id='0' siblings='0'/>
<cpu id='1' socket_id='0' core_id='1' siblings='1'/>
<cpu id='2' socket_id='0' core_id='2' siblings='2'/>
<cpu id='3' socket_id='0' core_id='3' siblings='3'/>
</cpus>
</cell>
</cells>
</topology>
<cache>
<bank id='0' level='3' type='both' size='6' unit='MiB' cpus='0-3'/>
</cache>
<secmodel>
<model>apparmor</model>
<doi>0</doi>
</secmodel>
<secmodel>
<model>dac</model>
<doi>0</doi>
<baselabel type='kvm'>+64055:+64055</baselabel>
<baselabel type='qemu'>+64055:+64055</baselabel>
</secmodel>
</host>
<guest>
<os_type>hvm</os_type>
<arch name='i686'>
<wordsize>32</wordsize>
<emulator>/usr/bin/qemu-system-i386</emulator>
<machine maxCpus='255'>pc-i440fx-3.1</machine>
<machine canonical='pc-i440fx-3.1' maxCpus='255'>pc</machine>
<machine maxCpus='1'>isapc</machine>
<machine maxCpus='255'>pc-1.1</machine>
<machine maxCpus='255'>pc-1.2</machine>
<machine maxCpus='255'>pc-1.3</machine>
<machine maxCpus='255'>pc-i440fx-2.8</machine>
<machine maxCpus='255'>pc-1.0</machine>
<machine maxCpus='255'>pc-i440fx-2.9</machine>
<machine maxCpus='255'>pc-i440fx-2.6</machine>
<machine maxCpus='255'>pc-i440fx-2.7</machine>
<machine maxCpus='128'>xenfv</machine>
<machine maxCpus='255'>pc-i440fx-2.3</machine>
<machine maxCpus='255'>pc-i440fx-2.4</machine>
<machine maxCpus='255'>pc-i440fx-2.5</machine>
<machine maxCpus='255'>pc-i440fx-2.1</machine>
<machine maxCpus='255'>pc-i440fx-2.2</machine>
<machine maxCpus='288'>pc-q35-3.1</machine>
<machine canonical='pc-q35-3.1' maxCpus='288'>q35</machine>
<machine maxCpus='255'>pc-i440fx-2.0</machine>
<machine maxCpus='288'>pc-q35-2.11</machine>
<machine maxCpus='288'>pc-q35-2.12</machine>
<machine maxCpus='288'>pc-q35-3.0</machine>
<machine maxCpus='1'>xenpv</machine>
<machine maxCpus='288'>pc-q35-2.10</machine>
<machine maxCpus='255'>pc-i440fx-1.7</machine>
<machine maxCpus='288'>pc-q35-2.9</machine>
<machine maxCpus='255'>pc-0.15</machine>
<machine maxCpus='255'>pc-i440fx-1.5</machine>
<machine maxCpus='255'>pc-q35-2.7</machine>
<machine maxCpus='255'>pc-i440fx-1.6</machine>
<machine maxCpus='255'>pc-i440fx-2.11</machine>
<machine maxCpus='288'>pc-q35-2.8</machine>
<machine maxCpus='255'>pc-0.13</machine>
<machine maxCpus='255'>pc-0.14</machine>
<machine maxCpus='255'>pc-i440fx-3.0</machine>
<machine maxCpus='255'>pc-i440fx-2.12</machine>
<machine maxCpus='255'>pc-q35-2.4</machine>
<machine maxCpus='255'>pc-q35-2.5</machine>
<machine maxCpus='255'>pc-q35-2.6</machine>
<machine maxCpus='255'>pc-i440fx-1.4</machine>
<machine maxCpus='255'>pc-i440fx-2.10</machine>
<machine maxCpus='255'>pc-0.11</machine>
<machine maxCpus='255'>pc-0.12</machine>
<machine maxCpus='255'>pc-0.10</machine>
<domain type='qemu'/>
<domain type='kvm'/>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<disksnapshot default='on' toggle='no'/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
<pae/>
<nonpae/>
</features>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='x86_64'>
<wordsize>64</wordsize>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<machine maxCpus='255'>pc-i440fx-3.1</machine>
<machine canonical='pc-i440fx-3.1' maxCpus='255'>pc</machine>
<machine maxCpus='1'>isapc</machine>
<machine maxCpus='255'>pc-1.1</machine>
<machine maxCpus='255'>pc-1.2</machine>
<machine maxCpus='255'>pc-1.3</machine>
<machine maxCpus='255'>pc-i440fx-2.8</machine>
<machine maxCpus='255'>pc-1.0</machine>
<machine maxCpus='255'>pc-i440fx-2.9</machine>
<machine maxCpus='255'>pc-i440fx-2.6</machine>
<machine maxCpus='255'>pc-i440fx-2.7</machine>
<machine maxCpus='128'>xenfv</machine>
<machine maxCpus='255'>pc-i440fx-2.3</machine>
<machine maxCpus='255'>pc-i440fx-2.4</machine>
<machine maxCpus='255'>pc-i440fx-2.5</machine>
<machine maxCpus='255'>pc-i440fx-2.1</machine>
<machine maxCpus='255'>pc-i440fx-2.2</machine>
<machine maxCpus='288'>pc-q35-3.1</machine>
<machine canonical='pc-q35-3.1' maxCpus='288'>q35</machine>
<machine maxCpus='255'>pc-i440fx-2.0</machine>
<machine maxCpus='288'>pc-q35-2.11</machine>
<machine maxCpus='288'>pc-q35-2.12</machine>
<machine maxCpus='288'>pc-q35-3.0</machine>
<machine maxCpus='1'>xenpv</machine>
<machine maxCpus='288'>pc-q35-2.10</machine>
<machine maxCpus='255'>pc-i440fx-1.7</machine>
<machine maxCpus='288'>pc-q35-2.9</machine>
<machine maxCpus='255'>pc-0.15</machine>
<machine maxCpus='255'>pc-i440fx-1.5</machine>
<machine maxCpus='255'>pc-q35-2.7</machine>
<machine maxCpus='255'>pc-i440fx-1.6</machine>
<machine maxCpus='255'>pc-i440fx-2.11</machine>
<machine maxCpus='288'>pc-q35-2.8</machine>
<machine maxCpus='255'>pc-0.13</machine>
<machine maxCpus='255'>pc-i440fx-2.12</machine>
<machine maxCpus='255'>pc-0.14</machine>
<machine maxCpus='255'>pc-i440fx-3.0</machine>
<machine maxCpus='255'>pc-q35-2.4</machine>
<machine maxCpus='255'>pc-q35-2.5</machine>
<machine maxCpus='255'>pc-q35-2.6</machine>
<machine maxCpus='255'>pc-i440fx-1.4</machine>
<machine maxCpus='255'>pc-i440fx-2.10</machine>
<machine maxCpus='255'>pc-0.11</machine>
<machine maxCpus='255'>pc-0.12</machine>
<machine maxCpus='255'>pc-0.10</machine>
<domain type='qemu'/>
<domain type='kvm'/>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<disksnapshot default='on' toggle='no'/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
</features>
</guest>
</capabilities>
Hardware:
% lshw
description: Desktop Computer
product: HP Compaq Elite 8300 CMT (QV993AV)
vendor: Hewlett-Packard
serial:
width: 64 bits
capabilities: smbios-2.7 dmi-2.7 smp vsyscall32
configuration: administrator_password=disabled boot=normal chassis=desktop family=103C_53307F G=D frontpanel_password=disabled keyboard_password=disabled power-on_password=disabled sku=.. uuid=0049F353-099B-E211-98E4-6C3BE51BF934
*-core
description: Motherboard
product: 3396
vendor: Hewlett-Packard
physical id: 0
serial: ...
*-firmware
description: BIOS
vendor: Hewlett-Packard
physical id: 0
version: K01 v02.83
date: 10/29/2012
size: 64KiB
capacity: 16MiB
capabilities: pci pnp upgrade shadowing cdboot bootselect edd int5printscreen int9keyboard int14serial int17printer acpi usb biosbootspecification netboot uefi
*-cache:0
description: L1 cache
physical id: 4
slot: CPU Internal L1
size: 256KiB
capacity: 256KiB
capabilities: internal write-through unified
configuration: level=1
*-cache:1
description: L2 cache
physical id: 5
slot: CPU Internal L2
size: 1MiB
capacity: 1MiB
capabilities: internal write-through unified
configuration: level=2
*-cache:2
description: L3 cache
physical id: 6
slot: CPU Internal L3
size: 6MiB
capacity: 6MiB
capabilities: internal write-back unified
configuration: level=3
*-memory
description: System Memory
physical id: 7
slot: System board or motherboard
size: 16GiB
*-bank:0
description: DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
product: M378B5173EB0-YK0
vendor: Samsung
physical id: 0
serial:
slot: DIMM4
size: 4GiB
width: 64 bits
clock: 1600MHz (0.6ns)
*-bank:1
description: DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
product: M378B5173EB0-YK0
vendor: Samsung
physical id: 1
serial:
slot: DIMM3
size: 4GiB
width: 64 bits
clock: 1600MHz (0.6ns)
*-bank:2
description: DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
product: M378B5173EB0-YK0
vendor: Samsung
physical id: 2
serial:
slot: DIMM2
size: 4GiB
width: 64 bits
clock: 1600MHz (0.6ns)
*-bank:3
description: DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
product: M378B5173EB0-YK0
vendor: Samsung
physical id: 3
serial:
slot: DIMM1
size: 4GiB
width: 64 bits
clock: 1600MHz (0.6ns)
*-cpu
description: CPU
product: Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
vendor: Intel Corp.
physical id: e
bus info: cpu@0
version: Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
slot: Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
size: 1637MHz
capacity: 3800MHz
width: 64 bits
clock: 100MHz
capabilities: lm fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp x86-64 constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d cpufreq
configuration: cores=4 enabledcores=4 threads=4
*-pci
description: Host bridge
product: Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller
vendor: Intel Corporation
physical id: 100
bus info: pci@0000:00:00.0
version: 09
width: 32 bits
clock: 33MHz
configuration: driver=ivb_uncore
resources: irq:0
...
*-pci:0
description: PCI bridge
product: 7 Series/C216 Chipset Family PCI Express Root Port 1
vendor: Intel Corporation
physical id: 1c
bus info: pci@0000:00:1c.0
version: c4
width: 32 bits
clock: 33MHz
capabilities: pci pciexpress msi pm normal_decode bus_master cap_list
configuration: driver=pcieport
resources: irq:16 ioport:e000(size=4096) memory:f7c00000-f7cfffff
*-network:0
description: Ethernet interface
product: 82571EB Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 0
bus info: pci@0000:01:00.0
logical name: enp1s0f0
version: 06
serial: 68:05:ca:1a:a1:94
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.2.6-k duplex=full firmware=5.11-2 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:29 memory:f7ca0000-f7cbffff memory:f7c80000-f7c9ffff ioport:e020(size=32) memory:f7c60000-f7c7ffff
*-network:1
description: Ethernet interface
product: 82571EB Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 0.1
bus info: pci@0000:01:00.1
logical name: enp1s0f1
version: 06
serial: 68:05:ca:1a:a1:95
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.2.6-k duplex=full firmware=5.11-2 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:31 memory:f7c40000-f7c5ffff memory:f7c20000-f7c3ffff ioport:e000(size=32) memory:f7c00000-f7c1ffff
*-usb:2
description: USB controller
product: 7 Series/C216 Chipset Family USB Enhanced Host Controller #1
vendor: Intel Corporation
physical id: 1d
bus info: pci@0000:00:1d.0
version: 04
width: 32 bits
clock: 33MHz
capabilities: pm debug ehci bus_master cap_list
configuration: driver=ehci-pci latency=0
resources: irq:23 memory:f7d37000-f7d373ff
*-usbhost
product: EHCI Host Controller
vendor: Linux 4.19.0-13-amd64 ehci_hcd
physical id: 1
bus info: usb@4
logical name: usb4
version: 4.19
capabilities: usb-2.00
configuration: driver=hub slots=3 speed=480Mbit/s
*-usb
description: USB hub
product: Integrated Rate Matching Hub
vendor: Intel Corp.
physical id: 1
bus info: usb@4:1
version: 0.00
capabilities: usb-2.00
configuration: driver=hub slots=8 speed=480Mbit/s
*-pci:1
description: PCI bridge
product: 82801 PCI Bridge
vendor: Intel Corporation
physical id: 1e
bus info: pci@0000:00:1e.0
version: a4
width: 32 bits
clock: 33MHz
capabilities: pci subtractive_decode bus_master cap_list
*-isa
description: ISA bridge
product: Q77 Express Chipset LPC Controller
vendor: Intel Corporation
physical id: 1f
bus info: pci@0000:00:1f.0
version: 04
width: 32 bits
clock: 33MHz
capabilities: isa bus_master cap_list
configuration: driver=lpc_ich latency=0
resources: irq:0
*-sata
description: SATA controller
product: 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode]
vendor: Intel Corporation
physical id: 1f.2
bus info: pci@0000:00:1f.2
logical name: scsi0
logical name: scsi1
logical name: scsi2
version: 04
width: 32 bits
clock: 66MHz
capabilities: sata msi pm ahci_1.0 bus_master cap_list emulated
configuration: driver=ahci latency=0
resources: irq:28 ioport:f0d0(size=8) ioport:f0c0(size=4) ioport:f0b0(size=8) ioport:f0a0(size=4) ioport:f060(size=32) memory:f7d36000-f7d367ff
*-disk:0
description: ATA Disk
product: TOSHIBA HDWD120
vendor: Western Digital
physical id: 0
bus info: scsi@0:0.0.0
logical name: /dev/sda
version: ACF0
serial:
size: 1863GiB (2TB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096 signature=4de1036d
*-volume:0
description: EXT4 volume
vendor: Linux
physical id: 1
bus info: scsi@0:0.0.0,1
logical name: /dev/sda1
logical name: /boot
version: 1.0
serial: 340f09ca-7a41-472e-99e2-d72aecd7517f
size: 285MiB
capacity: 285MiB
capabilities: primary bootable journaled extended_attributes large_files huge_files dir_nlink 64bit extents ext4 ext2 initialized
configuration: created=2020-12-09 19:54:51 filesystem=ext4 lastmountpoint=/boot modified=2020-12-31 21:12:10 mount.fstype=ext4 mount.options=rw,relatime,stripe=4 mounted=2020-12-31 19:21:14 state=mounted
*-volume:1
description: Linux raid autodetect partition
physical id: 2
bus info: scsi@0:0.0.0,2
logical name: /dev/sda2
capacity: 37GiB
capabilities: primary multi
*-volume:2
description: Linux raid autodetect partition
physical id: 3
bus info: scsi@0:0.0.0,3
logical name: /dev/sda3
capacity: 1825GiB
capabilities: primary multi
*-disk:1
description: ATA Disk
product: TOSHIBA HDWD120
vendor: Western Digital
physical id: 1
bus info: scsi@1:0.0.0
logical name: /dev/sdb
version: ACF0
serial:
size: 1863GiB (2TB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096 signature=b3257d2e
*-volume:0
...
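Since the host itself seems fine, my plan for the next freeze is to watch host-side I/O and the guest's steal time at the same time; a rough sketch of what I intend to run (nothing more sophisticated than that):
On the host:
% vmstat 1
% iostat -x 1
% virsh domstats bind --vcpu --block
In the guest I'll keep top running and watch the st (steal) column.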
Recently I took up maintenance of another company network where BIND has been installed in a dedicated VM (ostensibly for security). The company uses Debian for its servers.
I have to say the concept intrigued me. Unless you have a dedicated box for BIND (I don't), installing it directly on the VM hosts (two, because they're in an active/passive cluster) is kind of a security risk given BIND's vulnerability history. I know it's chrooted in Debian (?), but still.
Do you think it's a good idea? Pros, cons? Is it really needed or is it basically pointless given current BIND versions?
I need to archive important incoming mail for a specific address in case it gets accidentally deleted from the email server, etc.
Either saving it locally or using a pipe to archive it on a backup machine is fine (I can rsync the backup automatically later, etc.).
However, I'm having trouble getting system_filter working. I have configured it this way so far:
In /etc/exim4/exim4.conf:
system_filter = /etc/exim4/system_filter
system_filter_user = Debian-exim
system_filter_group = Debian-exim
system_filter_directory_transport = local_copy_to_directory
# transport section
local_copy_to_directory:
driver = appendfile
delivery_date_add
envelope_to_add
return_path_add
group = Debian-exim
user = Debian-exim
mode = 0660
maildir_format = true
create_directory = true
In /etc/exim4/system_filter:
# Exim filter
if $local_part is "example"
then
unseen save /tmp/example_dir
endif
Nothing gets written to the logs and nothing gets saved (normal delivery occurs, of course).
When I change $local_part in the system filter file to root and test it like so:
% exim -bF /etc/exim4/system_filter -d-all+filter -f [email protected] <tfpmet
Exim version 4.89 uid=0 gid=0 pid=1261 D=200
...
Return-path taken from "Return-path:" header line
Return-path = [email protected]
Sender = [email protected]
Recipient = [email protected]
Testing Exim filter file "/etc/exim4/system_filter"
Condition is true: $local_part is root
Unseen save message to: /tmp/example_dir
Filtering did not set up a significant delivery.
Normal delivery will occur.
>>>>>>>>>>>>>>>> Exim pid=1261 terminating with rc=0 >>>>>>>>>>>>>>>>
It clearly says:
Condition is true: $local_part is root
Unseen save message to: /tmp/example_dir
However, nothing gets saved again.
OS: Debian 9.11 amd64.
I'd prefer to achieve this with the system filter, but really any good solution would do.
swaks -4 -S -t [email protected] -f [email protected] -ao -au user2 -ap ***** -tls -s example3 --header Subject: Q4mSShEEoYnAviWg
*** Error connecting to example3:25:
*** IO::Socket::INET6: getaddrinfo: Temporary failure in name resolution
I'm using the -4 option, so why does it complain about the hostname via IO::Socket::INET6 (IPv6)?
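For what it's worth, I'm going to double-check that the bare hostname resolves over IPv4 at all, e.g. with:
% getent ahostsv4 example3
but my main question is why swaks goes through IO::Socket::INET6 despite -4.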
I'm using DRBD (config below) and tried to test the reliability of the setup.
I rebooted the secondary node (host1) and noticed it went into this state:
host1:
0:r0/0 WFConnection Secondary/Unknown UpToDate/DUnknown
host2:
0:r0/0 StandAlone Primary/Unknown UpToDate/DUnknown lvm-pv: vgr0 1861.65g 40.00g
The drbd service was running on the primary, and it also started on the secondary. However, anything I tried on the secondary failed to reconnect it:
drbdadm adjust all
drbdadm disconnect r0
drbdadm connect all
All commands ended with:
Failure: (102) Local address (port) already in use.
Finally, I restarted the drbd service (service drbd restart) on the primary. Only that got the nodes connected again:
host1:
0:r0/0 Connected Secondary/Primary UpToDate/UpToDate
host2:
0:r0/0 Connected Primary/Secondary UpToDate/UpToDate lvm-pv: vgr0 1861.65g 40.00g
Why is that? Can I recover from WFConnection without restarting the service on the primary?
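For what it's worth, one thing I have not tried yet is tearing the resource fully down and back up on the secondary, i.e.:
% drbdadm down r0
% drbdadm up r0
Would that avoid the port-already-in-use error, or would it hit the same problem?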
Resource definition:
resource r0 {
protocol C;
startup {
wfc-timeout 15;
degr-wfc-timeout 60;
}
disk {
on-io-error detach;
c-fill-target 10M;
c-max-rate 700M;
c-plan-ahead 7;
c-min-rate 4M;
}
net {
# max-epoch-size 20000;
max-buffers 36k;
sndbuf-size 1024k;
rcvbuf-size 2048k;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
rr-conflict disconnect;
}
syncer {
rate 400M;
al-extents 6433;
}
on host1 {
device /dev/drbd0;
disk /dev/sdc;
address 10.0.0.2:7788;
meta-disk internal;
}
on host2 {
device /dev/drbd0;
disk /dev/sdc;
address 10.0.0.3:7788;
meta-disk internal;
}
}
I'm using /dev/drbd0 as an LVM physical volume:
% pvdisplay /dev/drbd0
--- Physical volume ---
PV Name /dev/drbd0
VG Name vgr0
PV Size 1.82 TiB / not usable 3.79 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 476583
Free PE 466343
Allocated PE 10240
PV UUID JC0Us5-jxC1-9u2F-Wsyp-toJy-E2J4-sXu8Id
I'm trying to get the SMART parameters of the disks in my array, but it seems the simple way does not work:
% smartctl -d sat --all /dev/sg0 -H
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-8-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Read Device Identity failed: scsi error unsupported scsi opcode
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
root@wrzos ~ % smartctl -d sat -T permissive -T permissive --all /dev/sg0 -H
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-8-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
Read Device Identity failed: scsi error unsupported scsi opcode
=== START OF INFORMATION SECTION ===
Device Model: [No Information Found]
Serial Number: [No Information Found]
Firmware Version: [No Information Found]
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: [No Information Found]
Local Time is: Tue Jan 29 15:32:21 2019 CET
SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 82-83 don't show if SMART supported.
SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 85-87 don't show if SMART is enabled.
Checking to be sure by trying SMART RETURN STATUS command.
SMART support is: Unknown - Try option -s with argument 'on' to enable it.
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
% smartctl -d scsi --all /dev/sg0 -H
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-8-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Vendor: HP
Product: P410
Revision: 6.40
Serial number: 111111111111
Device type: storage array
Local Time is: Tue Jan 29 15:29:34 2019 CET
SMART support is: Unavailable - device lacks SMART capability.
Is there any other way I can query the status of those (SATA) disks?
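I have seen mentions of a cciss device type in smartctl for HP Smart Array controllers, so perhaps addressing the individual drives behind the P410 goes something like this (N being the physical drive number - this is an assumption on my part, I haven't verified it applies to this controller):
% smartctl -d cciss,0 -a /dev/sg0
% smartctl -d cciss,1 -a /dev/sg0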
On a KVM host I have a few VMs with legacy OSes that do not listen to the ACPI shutdown event. As it happens, on a Debian 9.5 host libvirt-guests.service waits 5 minutes for each such VM on shutdown and then destroys it.
In order to avoid that and shut them down cleanly, I have created a custom VM shutdown service with a script that uses special methods to shut them down:
% cat /etc/systemd/system/multi-user.target.wants/vm_stop.service
[Unit]
Description=vm_shutdown
Before=libvirt-guests.service
[Service]
ExecStart=/bin/true
ExecStop=/usr/local/bin/vm_shutdown_all.sh
[Install]
WantedBy=multi-user.target
However, on shutdown the service appears to run after libvirt-guests.service in spite of the Before= setting in the above unit.
Now, I have tested that the custom service actually does run on shutdown - it touches a test file so I can verify it.
The problem: how do I ensure that it runs before libvirt-guests.service?
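In case it matters, this is the variant of the unit I was thinking about trying next (an untested sketch - I'm honestly unsure whether Before= or After= is the right directive for the stop-time ordering I want, which is part of my confusion):
[Unit]
Description=vm_shutdown
After=libvirt-guests.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/local/bin/vm_shutdown_all.sh
[Install]
WantedBy=multi-user.target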
The Joplin desktop app (https://joplin.cozic.net/) creates a directory in /tmp that I cannot get any info about as root.
While logged in as a regular user I can enter the dir, but I cannot even display its attributes:
regularuser@homehost /tmp % ls -ld .mount_JoplinHNsadS
drwxrwxr-x 4 root root 0 Sep 30 21:48 .mount_JoplinHNsadS
regularuser@homehost /tmp % lsattr .mount_JoplinHNsadS
lsattr: Function not implemented While reading flags on .mount_JoplinHNsadS/AppRun
lsattr: Inappropriate ioctl for device While reading flags on .mount_JoplinHNsadS/app
lsattr: Function not implemented While reading flags on .mount_JoplinHNsadS/joplin.desktop
lsattr: Operation not supported While reading flags on .mount_JoplinHNsadS/joplin.png
lsattr: Inappropriate ioctl for device While reading flags on .mount_JoplinHNsadS/usr
However, root cannot even enter this directory:
root@homehost /tmp % ls -al | grep mount
ls: cannot access '.mount_JoplinHNsadS': Permission denied
d????????? ? ? ? ? ? .mount_JoplinHNsadS
root@homehost /tmp % file .mount_JoplinHNsadS
.mount_JoplinHNsadS: cannot open `.mount_JoplinHNsadS' (Permission denied)
Why is that happening? I thought root could access any directory, even one with the sticky bit set like /tmp?
How can I diagnose such a directory as root? How was this directory created?
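Two things I plan to check next (just a sketch of the commands, in case someone can tell me in advance what to expect):
% findmnt /tmp/.mount_JoplinHNsadS
% grep Joplin /proc/mounts
I suspect it is some kind of mount set up by the application itself, but I don't understand why that would lock root out.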
At the end of .bashrc I added:
touch /tmp/bash_noninteractive_test
Run:
/usr/bin/ssh -v -C [email protected] 'ls'
On the remote host (where I had logged in interactively before):
% ls -l /tmp/bash_noninteractive_test
ls: cannot access /tmp/bash_noninteractive_test: No such file or directory
I thought ~/.bashrc is ALWAYS sourced in non-interactive shells, like over SSH? How do I fix that?
Systems affected:
% lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
% lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.11 (jessie)
Release: 8.11
Codename: jessie
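One more check I plan to do: move the touch to the very first line of ~/.bashrc, before anything the distribution ships in that file, to see whether the file is read at all in the non-interactive case (my current line sits at the very end):
# first line of ~/.bashrc, purely for testing
touch /tmp/bash_noninteractive_top_test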
I need to build a 2-node cluster(-like?) solution in active/passive mode, that is, one server is active while the other is a passive standby that continuously gets the data replicated from the active one. KVM-based virtual machines would be running on the active node.
If the active node becomes unavailable for any reason, I would like to manually switch over to the second node (it becomes active and the other one passive).
I've seen this tutorial: https://www.alteeve.com/w/AN!Cluster_Tutorial_2#Technologies_We_Will_Use
However, I'm not brave enough to build something that complex and trust it to fail over fully automatically and operate correctly. There is too much risk of a split-brain situation, the complexity failing somehow, data corruption, etc., while my maximum-downtime requirement is not so severe as to require immediate automatic failover.
I'm having trouble finding information on how to build this kind of configuration. If you have done this, please share the info / HOWTO in an answer.
Or maybe it is possible to build highly reliable automatic failover with Linux nodes? The trouble with Linux high availability is that there seems to have been a surge of interest in the concept about 8 years ago, and many tutorials are quite old by now. This suggests that there may have been substantial problems with HA in practice and that some/many sysadmins simply dropped it.
If that is possible, please share the info how to build it and your experiences with clusters running in production.
I have a problem with monit executing a script on success.
~/.monitrc:
check host example.com with address example.com
if failed url http://example.com/startpage and content == "mainBaner"
timeout 10 seconds
then exec "/usr/local/bin/monit_example_error.sh"
else if succeeded then exec "/usr/local/bin/monit_example_ok.sh"
It appears to run the script if there's an error, but not when there's no error.
The log shows that the tests are run and succeed; /var/log/monit.log:
[CEST Jun 8 12:24:52] debug : 'example.com' succeeded testing protocol [HTTP] at INET[example.com:80/startpage] via TCP
[CEST Jun 8 12:25:22] debug : 'example.com' succeeded connecting to INET[example.com:80/dlibra] via TCP
[CEST Jun 8 12:25:46] debug : HTTP: Regular expression matches
[CEST Jun 8 12:25:46] debug : 'example.com' succeeded testing protocol [HTTP] at INET[example.com:80/startpage] via TCP
[CEST Jun 8 12:26:16] debug : 'example.com' succeeded connecting to INET[example.com:80/dlibra] via TCP
[CEST Jun 8 12:26:39] debug : HTTP: Regular expression matches
I have checked that running /usr/local/bin/monit_example_ok.sh works as expected (it creates the relevant status file in the relevant dir).
OS:
% lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.5 LTS
Release: 14.04
Codename: trusty
% uname -a
Linux ql 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Monit:
ii monit 1:5.6-2 amd64 utility for monitoring and managing daemons or similar progra
virsh save vm_name memdump and then virsh restore memdump restores a (running) VM all right.
However, the VM is shut off after virsh save. I'm writing a "live" backup and restore script for KVM VMs, so in the backup part I obviously need the VM running after the backup. It's not a problem to do virsh restore memdump right after the backup, but it strikes me as essentially unnecessary - I "should" be able to pause a VM, save its memory to a file, and then simply resume/unsuspend it.
This is not really a problem for VMs with little memory, but if a VM has sizable working memory, it prolongs the backup unnecessarily.
Unfortunately the VM is shut off even if I do virsh suspend first, before virsh save.
Is there a way to do this? (i.e. suspend, save, unsuspend)
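For reference, the relevant part of my backup script currently looks roughly like this (simplified; the memdump path and the disk-backup step are placeholders):
virsh save vm_name memdump
# ... back up the VM's disk / LVM volume here ...
virsh restore memdump
What I would like instead is for the middle step to run with the VM merely paused rather than shut off.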
Yes, I have reduced vm.swappiness a lot:
% sysctl -a | grep swap
vm.swappiness = 1
Memory is mostly free:
% cat /proc/meminfo | head
MemTotal: 8070592 kB
MemFree: 2619580 kB
ps_mem.py (https://raw.github.com/pixelb/ps_mem/master/ps_mem.py) shows just 2.8 GB allocated, yet the system swaps a lot:
% vmstat 2
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 3 3673808 3034600 34924 723456 13 19 297 210 16 132 33 9 56 3
1 2 3671712 3027772 35984 721372 3248 0 4100 176 2185 3149 19 10 44 28
1 2 3671048 3008808 35984 721256 9436 0 9436 0 3256 3507 18 10 42 30
1 2 3670884 2994548 35984 721280 7120 0 7120 0 2734 2926 18 10 48 24
free output:
% free -m
total used free shared buffers cached
Mem: 7881 5881 1999 0 262 965
-/+ buffers/cache: 4653 3228
Swap: 7627 3023 4604
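To see which processes the swapped-out pages actually belong to, I was going to sum up VmSwap per process with something like:
% grep VmSwap /proc/[0-9]*/status | sort -k2 -n | tail
(only as a rough cross-check - ps_mem.py presumably already accounts for most of this).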
Somebody else has registered a name in the .ma TLD pointing to my webserver's IP address.
My domain foo.bar -> my ip address 1.2.3.4
Somebody defined:
suspiciousdomain.ma -> my ip address 1.2.3.4
So this looks like the reverse of typical DNS spoofing.
Questions:
Is this preparation for some other attack? For example, people log into the suspiciousdomain.ma website, then after some time suspiciousdomain.ma changes its IP address and redirects traffic to a "man in the middle" server used for stealing credentials?
What is the best way to prevent this?
I was thinking about blocking HTTP requests on the Host: header (that is, rejecting all HTTP requests that do not have the Host: foo.bar header set); a concrete sketch of what I mean is below. Would this be effective - that is, is there no reasonable way that attackers could abuse it? (Is that header set by the browser?)
Embedding JavaScript code in the page to prevent this would not necessarily be effective, since the attackers could, after all, strip that code out once they switch DNS to the "man in the middle" server's address.
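To make the Host: header idea concrete, this is the kind of thing I have in mind - a catch-all default vhost that rejects anything not addressed to my own name (nginx is used purely for illustration, I haven't settled on the exact mechanism):
server {
    listen 80 default_server;
    server_name _;
    return 444;    # nginx-specific: close the connection without a response
}
server {
    listen 80;
    server_name foo.bar;
    # ... normal site configuration ...
}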
I can't seem to find the MAXWAIT setting (bridge initialization time) anywhere in /etc on Debian 7.2 x64. Sometimes the default 20 s is too short for some reason, and I'd like to set it longer.
I have configured BIND and ISC DHCPD to work together (using keys for updates). Now, it's not that it does not work at all: forward maps etc. are usually added.
However, very often and for no apparent reason, the .jnl (journal) file for the zone is left there and the main zone file is not updated. This results in an infuriating lack of resolution for some hosts after a DHCP lease is acquired (if the host was not in the zone file in the first place, or it resolves to the old address).
Permissions look like this:
-rw-r--r-- 1 bind bind 691 Dec 10 11:06 myzone.zone
-rw-r--r-- 1 bind bind 765 Dec 10 12:17 myzone.zone.jnl
It should not be a permissions problem, though, since the zone does (often) get updated via DHCP/DDNS?
What is the source of this problem and a fix for it?
OS: Debian 7.2 x64, stable-release bind and isc-dhcp-server.
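Next time it happens I plan to dump the journal to see whether the DDNS updates actually made it in there, roughly like this (run in the zone directory; named-journalprint comes with the BIND utilities, if I'm not mistaken):
% named-journalprint myzone.zone.jnl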
I'm building a binary-only package:
dpkg-buildpackage -b -us -uc
The build actually runs successfully, but I have deleted the previous version of the package and now dpkg-genchanges complains:
dh_builddeb
dpkg-deb: building package `zzz' in `../zzz_01-4_amd64.deb'.
dpkg-genchanges -b >../zzz_01-4_amd64.changes
dpkg-genchanges: binary-only upload - not including any source code
dpkg-genchanges: error: cannot fstat file ../zzz_01-1_amd64.deb: No such file or directory
dpkg-buildpackage: error: dpkg-genchanges gave error exit status 2
Is there any way to skip this step? I really do not need it, as I'm building the deb package for local use and previous versions are unnecessary.
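One thing I suspect (but have not verified) is a stale debian/files left over from the earlier builds, since dpkg-genchanges reads it to decide which artifacts to list; if that's the case, something as crude as this before rebuilding might already be enough:
% rm -f debian/files
% dpkg-buildpackage -b -us -uc
Is that safe, or is there a proper flag to tell dpkg-genchanges not to look at the old .deb?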