Which temperature sensor shows the temperature of the hard drives in an HP ProLiant DL380 G7 server when using iLO 3? (The "Temperature" tab shows 30 temperature sensors, but which one is responsible for the disks?) Is there a legend that explains the temperature sensor readings for the HP ProLiant DL380 G7?
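If there is no such mapping in the iLO 3 web UI itself, I assume the sensors could also be listed from the OS to cross-reference the labels; a sketch only, assuming the hp-health package (for hpasmcli) and ipmitool are installed on the box:

hpasmcli -s "show temp"          # HP management agents: prints sensor number, location, and reading
ipmitool sdr type Temperature    # IPMI view; sensor names should match the iLO labels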
According to this Seagate presentation, there are some ongoing (?) efforts to modify the ext4 file system, introducing SMRFS-EXT4 with support for host-managed hard drives. The goal, I believe, is to provide a layer that hides the specifics of ZAC commands from applications. There is also this document, which claims that "As of kernel v 4.7... Host managed drives are exposed as SG node - No block device file". What does that mean? Maybe these documents are outdated and ext4 (or another common Linux file system) has since gained support for host-aware HDDs. Which Linux distros support host-managed HDDs at the file-system level? If such support exists, what steps are needed to get a host-managed HDD up and running without changes to applications (i.e., where the file system hides all the specifics)? General applications like databases are my concern, not log-style workloads.
There is also a video (SDC2020: Improve Distributed Storage System TCO with Host-Managed SMR HDDs) claiming that, starting with Linux kernel 4.10, f2fs already supports host-managed drives. Has anyone used that approach? Maybe f2fs is not the best match for random operations, but I hope it can handle such tasks with acceptable performance (where reads are dominant).
Update: there are solutions: f2fs starting with Linux 4.10, and device-mapper (dm-zoned) starting with 4.13. But I'm not sure whether they work in practice. Which distributions support host-managed drives better? A list of Linux distros with their level of support for zoned block devices would help.
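For reference, here is roughly what I understand the setup steps to be; a sketch only, assuming the host-managed drive appears as /dev/sdb (adjust for your system) and a kernel built with CONFIG_BLK_DEV_ZONED:

cat /sys/block/sdb/queue/zoned   # should print "host-managed" (or "host-aware"/"none")
blkzone report /dev/sdb | head   # zone layout; blkzone ships with util-linux

# Option A: f2fs directly on the zoned device (kernel >= 4.10); -m enables
# f2fs's zoned mode. Needs some conventional zones at the start of the drive
# for metadata, which most HM-SMR disks provide.
mkfs.f2fs -m /dev/sdb
mount /dev/sdb /mnt/smr

# Option B: dm-zoned (kernel >= 4.13) exposes a regular block device, so a
# conventional file system such as ext4 can sit on top. dmzadm comes from
# dm-zoned-tools (older versions use dmsetup create instead of --start).
dmzadm --format /dev/sdb
dmzadm --start /dev/sdb
mkfs.ext4 /dev/mapper/dmz-sdb    # mapper name is a guess; check dmsetup ls
mount /dev/mapper/dmz-sdb /mnt/smr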
I need to erase and then recycle a bunch of SAS HDDs. They are from servers (which I removed from a local ISP for a client) that require 20 amp power, which my house doesn't have, so I can't just erase them in the server using a bootable CD...
I have tried a SAS-to-SATA adapter, but it's passive, so it doesn't work. It seems there is no simple, inexpensive solution for this dilemma. I'm not going to pay hundreds of dollars for something I'll probably only need to use once.
Is there a simple solution for this, such as an external enclosure that converts SAS to USB? I can't find anything online at a reasonable price.
Maybe I'll just take out the platters and smash them with a hammer.
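(If I do find a cheap way to attach them, e.g. a used SAS HBA, I assume the erase step itself would just be sg3_utils, something like:

lsscsi -g                      # find each drive's SCSI generic node, e.g. /dev/sg2
sg_format --format /dev/sg2    # low-level format, overwriting everything

but the attachment is the part I'm stuck on.)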
I currently have a Debian-based Linux system that I'd like to optimize heavily. This machine has three different drives: an SLC SSD, a QLC SSD, and a 4 TB HDD. I want to know whether it is possible to create a multi-tier caching solution that leverages both of the SSDs for caching at different levels.
My utopian structure is this:
- SLC SSD (fastest, good reliability): hot cache for files that are written to and read often
- QLC SSD (fast, OK reliability): warm cache for (potentially larger) files that are written to and read from less often
- HDD (slow, high reliability): cold storage for files that aren't written to or read often
Unfortunately, I haven't found much in the way of multi-tier caching that allows this type of configuration in the most common Linux utilities for the job: lvmcache or bcache.
My question is whether it is possible to configure lvmcache or bcache to leverage these drives in such a way. And, if not, are there solutions out there that enable such a configuration?
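To make the idea concrete, the closest stacking I could come up with is bcache providing the warm tier and lvmcache the hot tier on top of it; a sketch only, with /dev/slc, /dev/qlc, and /dev/hdd as placeholder device names, and I haven't verified how well the two layers cooperate in practice:

# Warm tier: QLC SSD caches the HDD via bcache, producing /dev/bcache0
make-bcache -C /dev/qlc -B /dev/hdd

# Hot tier: SLC SSD caches the bcache device via lvmcache
pvcreate /dev/bcache0 /dev/slc
vgcreate tiered /dev/bcache0 /dev/slc
lvcreate -n data -l 100%PVS tiered /dev/bcache0
lvcreate --type cache-pool -n hot -l 95%PVS tiered /dev/slc   # leave room for pool metadata
lvconvert --type cache --cachepool tiered/hot tiered/data
mkfs.ext4 /dev/tiered/data

The obvious worry with this is that both layers make independent promotion decisions, so hot data may end up cached twice.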
# fio --name=random-write --directory=/mnt/test/ --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=4g --iodepth=1 --runtime=600 --time_based --end_fsync=1
random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
fio-3.7
Starting 1 process
random-write: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]
Any idea why it returned after 60 minutes rather than after the 600 seconds I set?
I checked dmesg; there were no errors:
[Mon Mar 1 20:53:36 2021] XFS (sda2): Mounting V5 Filesystem
[Mon Mar 1 20:53:37 2021] XFS (sda2): Starting recovery (logdev: internal)
[Mon Mar 1 20:53:45 2021] XFS (sda2): Ending recovery (logdev: internal)
I ran the same command on another drive (an SSD) on the same box at the same time, and it finished on time and returned.
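My only guess is that the 600 s runtime caps the I/O phase but not the final flush from --end_fsync=1, so the HDD might spend the remaining time writing back dirty pages. A way I could check on the next run (assumes sysstat for iostat; sda is the HDD from the log above):

watch -n 5 'grep -E "^(Dirty|Writeback):" /proc/meminfo'   # should stay high past the 600 s mark if so
iostat -x 5 sda                                            # confirms the disk is still busy writing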
Thanks in advance!