The targetcli configuration was lost after the server was rebooted. I tried to restore the configuration from the backup files with targetcli restoreconfig <backupFile>, but the configuration is not restored; the command only prints storageobjects or targets present, not restoring.
Below are the outputs of targetcli ls and systemctl status -l target.
targetcli ls
o- / ................................................................................................ [...]
o- backstores ..................................................................................... [...]
| o- block ......................................................................... [Storage Objects: 0]
| o- fileio ........................................................................ [Storage Objects: 0]
| o- pscsi ......................................................................... [Storage Objects: 0]
| o- ramdisk ....................................................................... [Storage Objects: 0]
o- iscsi ................................................................................... [Targets: 1]
| o- iqn.2017-01.com.urgroup-tz:target ........................................................ [TPGs: 1]
| o- tpg1 ...................................................................... [no-gen-acls, no-auth]
| o- acls ................................................................................. [ACLs: 1]
| | o- iqn.2017-01.com.urgroup-tz:initiator ........................................ [Mapped LUNs: 0]
| o- luns ................................................................................. [LUNs: 0]
| o- portals ........................................................................... [Portals: 1]
| o- 0.0.0.0:3260 ............................................................................ [OK]
o- loopback ................................................................................ [Targets: 0]
# systemctl status -l target
● target.service - Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
Active: active (exited) since Ij 2017-03-10 17:18:43 EST; 1 day 18h ago
Main PID: 1342 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/target.service
Mac 10 17:18:43 server1 target[1342]: Could not create StorageObject tools_disk: Cannot configure StorageObject because device /dev/cl/tools_lv is already in use, skipped
Mac 10 17:18:43 server1 target[1342]: Could not create StorageObject bamboo_disk: Cannot configure StorageObject because device /dev/cl/bamboo_lv is already in use, skipped
Mac 10 17:18:43 server1 target[1342]: Could not create StorageObject metadata_disk: Cannot configure StorageObject because device /dev/cl/ovirt_domain_metadata is already in use, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching StorageObject for LUN 2, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching StorageObject for LUN 1, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching StorageObject for LUN 0, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching TPG LUN 0 for MappedLUN 0, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching TPG LUN 1 for MappedLUN 1, skipped
Mac 10 17:18:43 server1 target[1342]: Could not find matching TPG LUN 2 for MappedLUN 2, skipped
Mac 10 17:18:43 server1 systemd[1]: Started Restore LIO kernel target configuration.
Ensure the service is enabled before you reboot:
systemctl enable target
That should help here.
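You can check whether it will run at the next boot (target is the systemd unit shown in the status output above):
systemctl is-enabled target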
If you use an LVM-managed storage pool for the backstore devices, you should make certain that LVM/devicemapper discards the second layer VGs/LVs.
What I mean by second layer VGs/LVs, as an example: assume that the LV below (DISK_1) has another VG on it, initialized by the iSCSI client and used for the services within the client. There are then two different VG layers on one disk, one VG inside another.
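A rough sketch of that layering (all names besides DISK_1 are hypothetical, only to illustrate the idea):
/dev/sdb                    <- physical disk on the target server
└─ VG target_vg             <- first layer VG, created on the target server
   └─ LV DISK_1             <- exported to the client as an iSCSI LUN
      └─ VG client_vg       <- second layer VG, created by the initiator inside DISK_1
         └─ LV client_data  <- used by the services on the client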
If your LVM subsystem scans for VGs within the first layer LVs, the newly discovered second layer VGs, and the LVs inside them, will be mapped on the target server. Since those LVs are then mapped on the target server (by devicemapper), the lio_target modules will fail to load the first layer LVs as backstores.
LVM searches for VGs and LVs while the OS boots; that is why you didn't notice the issue in the first place.
You should set an LVM filter that controls which devices are scanned for VGs; see the lvm.conf manual for global_filter. With this configuration you can discard the second layer VGs. Below is a sample for the storage architecture above: it scans only your real PVs for VGs and rejects all other block devices.
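A minimal sketch of such a filter, assuming the only PV carrying the first layer VG is /dev/sda2 (adjust the pattern to your own PVs); it goes in the devices { } section of /etc/lvm/lvm.conf:
# /etc/lvm/lvm.conf, devices section:
# accept only the real PV, reject every other block device,
# so the second layer VGs inside the exported LVs are never activated
global_filter = [ "a|^/dev/sda2$|", "r|.*|" ]
You may also need to rebuild the initramfs (dracut -f on CentOS/RHEL) so the same filter is applied during early boot.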
You could simply use a script that runs "vgchange -an 2nd_layer_VG" after boot and then restores the LIO target configuration, as sketched below. However, I suggest using LVM's "global_filter" feature instead.
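A rough sketch of such a script (second_layer_vg is a hypothetical name for the nested VG; targetctl restore is the command that target.service itself runs to reload /etc/target/saveconfig.json):
#!/bin/bash
# deactivate the nested VG so the first layer LVs are no longer "in use",
# then reload the saved LIO configuration
vgchange -an second_layer_vg
targetctl restore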
Note: before CentOS 7/Red Hat 7 there was no problem with initializing the second layer LVs; targetd was still able to load them as LUNs. However, the new linux-iscsi (LIO) target fails in that situation. I didn't research the issue further.
Regards...
You should be running target.service at boot in order to restore the LIO configuration. Also ensure that iscsid.service is running to export your LIO devices, and that tgtd is not running, since it will conflict with the other LIO daemons.
The service states should look something like this:
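For example (a sketch using the unit names mentioned above; adjust if your distribution names them differently):
systemctl enable target
systemctl start target
systemctl enable iscsid
systemctl start iscsid
systemctl stop tgtd
systemctl disable tgtd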
You'll also want to clean up whatever you did before this. You've likely got volumes that were created outside of LIO, so when you go to manage them with targetcli later you'll have things that are not properly exported, and it will become confusing.
If you have the option, I'd recommend wiping the system and making a clean start. Getting the iSCSI subsystem set up correctly from the start is important, because it's dangerous to work with once it's running: there are a lot of potentially destructive actions you can take against your users' data.