If I do
btrfs fi defrag -rv /home
then I get a long list of files that need to be defragmented.
It seems as if it doesn't actually do anything.
Is it possible to see how far along in the defrag process it is?
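The closest I have come to a rough progress indicator (a sketch; /tmp/defrag.log is just a scratch file name I made up) is to capture the verbose output and compare its line count against the total number of files:
btrfs fi defrag -rv /home 2>&1 | tee /tmp/defrag.log
find /home -type f | wc -l     # total number of files under /home
wc -l < /tmp/defrag.log        # files processed so far (run from a second terminal)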
When using --dry-run
with rsync, it can list the files that will be backed up. But when I do the following with duplicity
duplicity --dry-run --name pchome --encrypt-sign-key xxx --include $HOME/Desktop --exclude '**' $HOME file:///mnt/backup
it just gives the statistics.
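For comparison, the rsync behaviour I am referring to looks roughly like this (a sketch with made-up paths), printing one line per file that would be transferred:
rsync -avn --itemize-changes $HOME/Desktop/ /mnt/backup/desktop/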
Question
How can I get duplicity to list the changes (copy/delete) it will perform?
I want to offline migrate the KVM guest e-devel
to another centos73 host using virsh
. So I do
# virsh -d 0 migrate --offline --persistent e-devel qemu+ssh://kvm2/system
migrate: offline(bool): (none)
migrate: persistent(bool): (none)
migrate: domain(optdata): e-devel
migrate: desturi(optdata): qemu+ssh://kvm2/system
migrate: found option <domain>: e-devel
migrate: <domain> trying as domain NAME
root@kvm2's password:
migrate: found option <domain>: e-devel
migrate: <domain> trying as domain NAME
#
After typing the root password I would have expected the guest to be migrated, but nothing happens.
The last debug line, migrate: <domain> trying as domain NAME,
seems to me to indicate that something is missing.
What does this line mean?
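For what it is worth, one check I can run (assuming kvm2 is reachable with the same ssh credentials) is to list the domains on the destination and see whether anything arrived:
virsh -c qemu+ssh://kvm2/system list --all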
When I do the below, the qcow2 file is less than 200KB.
# qemu-img create -f qcow2 /var/lib/libvirt/images/urb-dat0.qcow2 10G
# du -hsc /var/lib/libvirt/images/urb-dat0.qcow2
196K /var/lib/libvirt/images/urb-dat0.qcow2
If I attach it to a KVM guest and run fdisk -l, I get:
Disk /dev/vdb: 0 MB, 197120 bytes, 385 sectors
Question
How to make a 10GB qcow2 file that is not thin provisioned?
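One option that looks relevant (an assumption on my side; which preallocation modes are available depends on the qemu-img version) is to preallocate the image when creating it:
# qemu-img create -f qcow2 -o preallocation=full /var/lib/libvirt/images/urb-dat0.qcow2 10G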
I have made /dev/sdb
which is a 16 TB disk using hardware RAID, and I am tempted to put XFS directly on /dev/sdb
without making partitions. In the future I will need to expand this to double the size.
The hardware is an HP ProLiant DL380 Gen 9 with 12 SAS disk trays in the front.
One advantage of not making partitions is that a reboot isn't needed, but are things different on >2 TB disks?
Do I need to have a GPT, or can I run into trouble when expanding the RAID array and XFS without one?
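For reference, the expansion step I have in mind would be roughly this (a sketch; /data is a made-up mount point and the rescan path assumes a SCSI-attached device):
echo 1 > /sys/block/sdb/device/rescan   # let the kernel pick up the new device size
xfs_growfs /data                        # grow XFS to fill the now larger device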
I have the below in /etc/fstab:
ss@dat0: /home/ss/dat0 fuse.sshfs defaults,_netdev,identityfile=/home/ss/.ssh/id_rsa,uid=1000,gid=1000,allow_other 0 0
When I cd ~/dat0
and then do anything in there, e.g. tab complete, it takes 10 seconds every time. I.e. it doesn't cache anything.
Is there something that can be done to speed this up?
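One thing I am considering (a sketch; whether these options actually help depends on the sshfs/FUSE version) is adding caching options to the mount, e.g.:
ss@dat0: /home/ss/dat0 fuse.sshfs defaults,_netdev,identityfile=/home/ss/.ssh/id_rsa,uid=1000,gid=1000,allow_other,kernel_cache,cache_timeout=60 0 0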
On Solaris and variants, it makes a zfs clone of the current /
filesystem and installs the new kernel there. When rebooting you get the new /
.
In my case I have a /scripts
directory, so if I ever should go back to a previous kernel, then my /scripts
would also get rolled back, which to me should be independent of which kernel I am on.
Question
How can I avoid losing the changes made to the filesystem from the time the kernel upgrade is finished until the host is rebooted?
Is there a procedure I am not aware of? Even if you reboot very quickly after the kernel upgrade, log entries could easily have been made that you would never see.
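One direction I have been thinking about (a sketch; rpool is an assumed pool name) is to make /scripts its own dataset, so it sits outside the root filesystem that gets cloned and rolled back:
zfs create -o mountpoint=/scripts rpool/scripts
after first moving the existing content of /scripts into the new dataset.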
There should exist mainboards that support Self-Encrypting Drives (SED) in the BIOS, so that when an SSD with SED support is connected and SED has been enabled in the BIOS, it prompts for a password on bootup.
Searching for "mainboard sed support ssd bios" doesn't give me anything, so I suspect it is called something else.
Question
Does anyone know how to find mainboards that support SED, so the BIOS asks for the password for the SSD's SED?
A very good way to erase an SSD that has SED support is to change the password/key. But what about those that don't have SED support?
This article says
Fortunately it is possible to erase most SSDs, though this is closer to a “reset” than a wipe. The “ATA Secure Erase” command instructs the drive to flush all stored electrons, forcing the drive to “forget” all stored data. This command essentially resets all available blocks to the “erase” state, which is what TRIM uses for garbage collection purposes.
Question
I suppose it is something that can be done with hdparm
, so does anyone know which command does this?
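The sequence I have seen referenced for this (a sketch; /dev/sdX and the password p are placeholders, and the drive must not be in the "frozen" state) is:
hdparm -I /dev/sdX                                      # check the security section first
hdparm --user-master u --security-set-pass p /dev/sdX   # set a temporary password
hdparm --user-master u --security-erase p /dev/sdX      # issue ATA Secure Erase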
I have a zpool where I have just replaced a failed disk, and started a resilvering to the new disk.
What I don't understand is why zpool status
says it wants to scan 129TB, when the size of the vdev is ~30TB. When I look at iostat -nx 1
I can see the 5 disks in the vdev getting heavy reads, and equally heavy writes to the new disk. So ZFS doesn't seem to scan all the data it says it will.
# zpool status tank3 |head
pool: tank3
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Thu Apr 30 09:59:15 2015
61.2T scanned out of 129T at 3.03G/s, 6h23m to go
946G resilvered, 47.34% done
Question
I would say that each vdev is independent of the others, so a resilver of one should not require any scan of the others. Why does ZFS scan all used disk space when resilvering?
Is there an easy way to make a list of the dependencies a newly installed RPM package will install with yum
?
Example: If you do yum install ruby
then it will also install some rubygems.
But when I uninstall the ruby
package I also want to get rid of the dependencies it installed.
So my first idea was to make a list of those new packages, and then do an rpm -e
on those when I uninstall ruby
.
Question
How to make such list in an automated way?
Or is there an easier way than having to manage text files with RPM package names?
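One thing I am looking at (a sketch; I am not sure it covers every dependency case) is the yum transaction history, which records what a given install pulled in and can undo it:
yum install ruby
yum history info         # list the packages installed by the last transaction
yum history undo last    # remove them again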
Question at the bottom.
I have several ZFS file systems on which I would like to use ZFS compression, but since enabling compression will only affect new data written to the file system, I would like to write a script that can migrate file systems, so all data is compressed.
This is my test attempt
du -h /tmp/dump.txt
zfs create -p tank3/xtest1/fs
cp /tmp/dump.txt /tank3/xtest1/fs
zfs list | grep xtest
zfs create tank3/xtest2
zfs set compression=lzjb tank3/xtest2
zfs inherit compression tank3/xtest2
zfs snapshot tank3/xtest1/fs@snap
zfs send tank3/xtest1/fs@snap | zfs receive tank3/xtest2/fs
zfs get compression tank3/xtest2/fs
zfs list | grep xtest
zfs destroy -r tank3/xtest1
zfs destroy -r tank3/xtest2
echo "test 2"
zfs create tank3/xtest2
zfs set compression=lzjb tank3/xtest2
zfs list | grep xtest
cp /tmp/dump.txt /tank3/xtest2
zfs list | grep xtest
zfs get compressratio tank3/xtest2
zfs destroy -r tank3/xtest2
which gives
344M /tmp/dump.txt
tank3/xtest1 575K 6.38T 288K /tank3/xtest1
tank3/xtest1/fs 288K 6.38T 288K /tank3/xtest1/fs
NAME PROPERTY VALUE SOURCE
tank3/xtest2/fs compression off default
tank3/xtest1 344M 6.38T 304K /tank3/xtest1
tank3/xtest1/fs 344M 6.38T 344M /tank3/xtest1/fs
tank3/xtest2 344M 6.38T 288K /tank3/xtest2
tank3/xtest2/fs 344M 6.38T 344M /tank3/xtest2/fs
test 2
tank3/xtest2 288K 6.38T 288K /tank3/xtest2
tank3/xtest2 288K 6.38T 288K /tank3/xtest2
NAME PROPERTY VALUE SOURCE
tank3/xtest2 compressratio 1.00x -
In the first test I would have expected the replication to compress the data when creating tank3/xtest2/fs,
but it seems that newly created file systems do not inherit compression when using send/receive.
In test 2 I can't see that the 344MB file takes up any space.
From what I can tell, compression doesn't work.
Question
Why do I see these weird results?
And how should I migrate a not compressed file system to be compressed?
Update
Added compressratio
property, which shows that no compression has been done. dump.txt
can be compressed to 190MB.
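For completeness, the simplest variant of the same test would be a plain copy into a dataset that has compression set at creation time (a sketch; tank3/xtest3 is a scratch name I made up, and the sync plus short sleep only give ZFS a chance to flush asynchronous writes before reading the ratio):
zfs create -o compression=lzjb tank3/xtest3
cp /tmp/dump.txt /tank3/xtest3/
sync; sleep 5
zfs get compressratio tank3/xtest3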
Are there feature advantages, from a VMware point of view, to using a NetApp NAS instead of a ZFS-based NAS running e.g. OmniOS?
Another way to formulate the question: are there features unlocked when using a NetApp NAS compared to a ZFS-based NAS? E.g. provisioning, or perhaps performance, because VMware can send commands to the NetApp.
My setup would be as simple as possible. No clustering or HA, just one disk array in each case.
In the Wikipedia page for CPU time, it says
The CPU time is measured in clock ticks or seconds. Often, it is useful to measure CPU time as a percentage of the CPU's capacity, which is called the CPU usage.
I don't understand how a time duration can be replaced by a percentage. When I look at top
, doesn't %CPU
tell me that MATLAB
is using 2.17 of my cores?
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18118 jasl 20 0 9248400 261528 78676 S 217.2 0.1 8:14.75 MATLAB
Question
In order to better understand what CPU usage is, how do I calculate the CPU usage myself?
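A minimal way to calculate it by hand (a sketch, assuming Linux, a command name without spaces, and the PID 18118 from the top output above) is to sample the process's cumulative CPU time from /proc twice and divide the difference by the wall-clock interval:
pid=18118
hz=$(getconf CLK_TCK)                         # clock ticks per second, typically 100
t1=$(awk '{print $14+$15}' /proc/$pid/stat)   # utime + stime in ticks
sleep 1
t2=$(awk '{print $14+$15}' /proc/$pid/stat)
echo "CPU usage: $(( (t2 - t1) * 100 / hz ))%"   # percent of one core over the 1 s interval
A result around 217 would then correspond to the process keeping roughly 2.17 cores busy during that second.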
We have several hosts where we have an identical hot spare host, which is patched and updated so it is very close to having the same software and config. In case of failure the network cable is switched and the DHCP server is updated with the new MAC address. This is the best case, as there is usually a bit more that needs modification.
I feel it is a waste of electricity to have a hot spare host and waste of time to maintain it, and since config modifications are needed in case of failover, I'd like to ask the following:
Are hot spare hosts old school and there are better ways now?
Instead of having a hot spare host, would it make sense to make it a cold spare, take its hard drives, put them in the primary host, and change the RAID from 1 to 1+1? In case of failure all I would have to do is change network cables, update the DHCP server, take the hard drives, insert them in the cold spare and power on. The benefit, as I see it, is that the 2x2 disks are always in sync, so there is only one host to maintain and no config changes are needed when failing over.
Is that a good idea?
I have a couple of these LSI arrays
LSI Model 0834
LSI product name: 1932
LSI Product codename: Mary Jane
Enclosure name: Shea (DM1300)
End of Life: 31-Dec-2010
but I can't find any information about the Fibre Channel interface it uses to connect to the host. Right now it is connected to a BlueArc Mercury 50 filer, which has also reached EOL.
Question
Can I buy any FC card, or what should I look for, when I want to connect this array to a Linux host?
Are transceivers in general hot swappable, and if not, how do I find out if these are?
Using these adapters
665243-B21, HP Ethernet 10Gb 2-port 560FLR-SFP+
665249-B21, HP Ethernet 10Gb 2-port 560SFP+
and these transceivers
455883-B21, HP BLc 10GB SR SFP+ opt
PSFP10-2321SF, PEAKOPTICAL SFP+ 10Km 1310nm DFB w/DDMI
I have an HP D6000 (sometimes called MDS 600) storage array and an HP DL380p G8 with an LSI 9207-8e SAS adapter.
When I am in OmniOS (an illumos distribution), I can e.g. dd
to one of the disks, but I don't see the HDD LED flash. On the front of the storage array are the HDD LEDs with numbers,
and they never light up. I suspect they should be green according to the manual:
Green: The drive is online, but is not currently active.
Off: The drive is offline, a spare, or not configured as part of an array.
If I enter the LSI setup, then I can get the number and HDD LED to light up using the test feature.
Question
What does the offline message above mean, and how do I activate the drives, so I can use the disks as a JBOD for ZFS?
If I do
dd if=/dev/zero of=/tank/test/zpool bs=1M count=100
how can I treat the file /tank/test/zpool
as a vdev, so I can use it as a zpool?
It is for zfs testing purposes only.
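As far as I can tell, a plain file can be handed to zpool create directly as a vdev, which is the documented way to build throw-away test pools (filepool is just a name I made up):
zpool create filepool /tank/test/zpool
zpool status filepool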
Question
Is there a way to completely reset a PostgreSQL installation on Linux, so it is in the same state as when I installed it?
Idea
I have considered
rm -rf /var/lib/pgsql/*
rm -rf /var/lib/pgsql/backups/*
rm -rf /var/lib/pgsql/data/*
but perhaps that is not a recommended method.
Purpose
This would be handy to get rid of leftovers from previous programs that have used it.
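On a CentOS/RHEL-style install, the alternative I have been considering (a sketch; the exact initdb wrapper name varies by release, so treat the commands as an assumption) is:
systemctl stop postgresql          # or: service postgresql stop
rm -rf /var/lib/pgsql/data/*
postgresql-setup --initdb          # older releases: postgresql-setup initdb or service postgresql initdb
systemctl start postgresql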