A Dell T420 bought in 2015 was originally configured with a single 550W non-redundant power supply. Replacement parts are cheap these days, and I'd like to replace the single P/S with dual redundant power supplies. Can I simply pull out the existing P/S and replace it with two others, or is there a motherboard (or other) configuration issue that prevents a T420 originally built with the non-redundant P/S from accepting redundant ones?
On a CentOS 7.8 server, I have a homegrown script that, among other things, starts another program whose output log I need to manage. I would like to use s6-log, multilog, tinylog, or a similar logging program, but cannot seem to find a distribution in the standard CentOS 7.8 or EPEL repositories. A yum search for any of them, or for the parent packages such as daemontools, perp, or s6, turns up empty.
Where does one find them? Or is it necessary to build from sources?
I don't need a particular one, just any robust piped logging tool that handles log rotation for output produced by a program or shell script. It can be one of the ones above or a similar one. I'm puzzled that none of them seem to be easily found, which makes me think I'm looking for the wrong thing or doing something dumb ...
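To make concrete the kind of usage I have in mind, here is roughly how I would expect to pipe a program's output through s6-log if I could install it (the program name, log directory, and rotation parameters are just placeholders):
# Hypothetical sketch, not something I can run yet: keep up to 10 rotated
# files of about 1 MB each in /var/log/myprogram.
myprogram 2>&1 | s6-log n10 s1000000 /var/log/myprogram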
On a CentOS Linux 7.8 system, if I create a systemd service configuration file and include LogsDirectory and/or CacheDirectory in the [Service] section, then do a systemctl daemon-reload, the following errors are printed in /var/log/messages:
Sep 8 13:14:50 model systemd: [/etc/systemd/system/hugo-sbml.service:18] Unknown lvalue 'CacheDirectory' in section 'Service'
Sep 8 13:14:50 model systemd: [/etc/systemd/system/hugo-sbml.service:19] Unknown lvalue 'LogsDirectory' in section 'Service'
The man page for systemd.exec is confusing: it does not document LogsDirectory and CacheDirectory, but it does list error codes associated with failures involving them. Googling around, those settings appear to be commonly used in systemd configuration files.
Is there a replacement for the LogsDirectory and CacheDirectory settings in a systemd service configuration file on CentOS 7? Or to put it another way, what are we supposed to do if we need log and cache directories to be created?
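For illustration, one workaround I gather might apply (though I have not confirmed it is the right approach for this systemd version) is to create the directories with ExecStartPre lines; the directory paths, user, and ExecStart below are placeholders for my real service:
[Service]
User=hugo
# PermissionsStartOnly lets the ExecStartPre steps run as root even though
# the service itself runs as an unprivileged user.
PermissionsStartOnly=true
ExecStartPre=/usr/bin/mkdir -p /var/log/hugo-sbml /var/cache/hugo-sbml
ExecStartPre=/usr/bin/chown hugo:hugo /var/log/hugo-sbml /var/cache/hugo-sbml
ExecStart=/usr/local/bin/hugo-sbml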
I have a Dell T410 which we're moving to a new location. It would be convenient to put it in a rack. However, I can't figure out if that's possible. Based on this 2013 reply by a Dell rep in a Dell forum, it would appear it's not possible, and yet searching reveals adapter kit PKCR1 which seems like it's designed to adapt a T410 to ReadyRails rails (but in a manner I can't quite figure out).
Thus, my questions: (1) is it possible to take a stock T410 in tower configuration and mount it on rails in a rack, and if so, (2) what parts would be necessary?
I have four servers connected to our organization's network. I've obtained a layer 3 switch (Cisco SG300). I've separately connected each server's management interface NIC to this switch. (The management interfaces are Dell iDRAC, in case it matters.) Now I want to isolate the management network on this switch for security reasons, connect the switch to our organization's network, and only allow outside connections from specific hosts such as my laptop.
,- server 1 management interface
,-------. +- server 2 management interface
external (open) network ---+ SG300 +-+- server 3 management interface
`-------' `- server 4 management interface
I think I can work out VLAN configuration for the management network on the right-hand side of the SG300, and if I understand ACLs on the Cisco switch correctly, I should be able to create an ACL that allows only a specific MAC address from the external network to connect through the SG300 to the VLAN on the right-hand side.
My problem is this: how can a connection from the outside network specify which destination (1-4) to connect to? Suppose the management NICs have IP addresses 192.1.1.1 through 192.1.1.4, and say I'm on the external network, and I want to connect to machine 3's management interface. How do I do that? The servers' management interfaces will not have IP addresses on the external network, so I can't connect to a specific IP address. How do I indicate the desired destination?
This is probably a basic networking question, and obviously I lack clue, but after beating my head against Google for quite some time now, I can't figure this out. What is the basic approach to achieving this configuration, and are there resources that explain how to make it happen?
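To make my confusion concrete: I imagine the answer involves giving the SG300 an address on the external side and routing through it to the management subnet, so that from my laptop I would do something like the following. All addresses here are hypothetical, and this may be exactly the part I have wrong:
# Suppose the SG300's external-facing interface is 10.0.0.2 and the
# management VLAN uses 192.1.1.0/24. On my laptop, add a route for the
# management subnet via the switch, then connect to a specific iDRAC:
ip route add 192.1.1.0/24 via 10.0.0.2
ssh root@192.1.1.3     # reach server 3's management interface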
We are investigating speeding up some machine learning code written using Theano and Keras, in particular by getting a GPU card. Does anyone have direct experience with this or a very similar combination? Specifically, we are interested in people's experiences about:
- Is it physically possible to install a card such as a GTX 1060 in a Dell R710 or R730xd?
- Is anything special required to get CentOS Linux to recognize the card, other than installing the necessary Nvidia drivers?
- Are there any issues with respect to power, cooling, etc., we should worry about?
A similar question has been asked, but for a different card and operating system. Discussions elsewhere such as here suggest it's possible for similar hardware, but a bit tricky. Before having our organization buy the hardware, it would be helpful to know whether there are serious issues.
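For what it's worth, by "recognize the card" I mean roughly that the following sanity checks succeed once the Nvidia drivers are installed, by whatever packaging route we end up using:
lspci | grep -i nvidia     # is the GPU visible on the PCI bus?
lsmod | grep nvidia        # did the kernel module load?
nvidia-smi                 # can the driver utility talk to the card?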
To improve the performance of a Dell R710 running CentOS 7.2 and being used as a MongoDB server, I'd like to add SSDs. The PNY Enterprise SSDs (specifically the 240GB EP7011) seem reasonably priced. But will those SSD drives work in an R710 with a Perc H700? Will the controller recognize them? I searched, but have not found information either for or against this combination.
(We don't have a service contract with Dell.)
I bought a refurbished R730xd without the rear 2.5" drives, and am now investigating how to add drives. The system does not appear to have come with the necessary mounting hardware to actually put drives in the two rear mounting locations (right now, they're really just empty spaces), so it appears I have to find mounting brackets to hold disk trays and also any associated cabling and maybe circuit boards that may be necessary. Googling around, I have not been able to identify what the part numbers might be. Can anyone point me in the right direction? What are the necessary parts to make the rear drives work? And also, where do they connect or plug into?
I created an XFS file system using default parameters at the time the system was set up. Now, looking at the output of xfs_info, it shows 0 for the values of sunit and swidth. I can't seem to find an explanation of what 0 means in this context. (The discussions of sunit and swidth that I have found are focused on setting the correct values for these parameters, not on setting them to 0.)
# xfs_info .
meta-data=/dev/mapper/centos-root isize=256 agcount=8, agsize=268435455 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=1927677952, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=521728, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
This must be an ignorant question, for which I apologize because I am an XFS newbie, but what is the meaning of 0 values for sunit and swidth? How can I find out what XFS is really using for those parameter values, and how those values relate to the values that would be appropriate for my RAID array? (This is an XFS system on top of LVM on top of hardware RAID.)
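For comparison, my understanding is that when people do set these values explicitly, it is done at mkfs time with something along these lines; the device, stripe-unit size, and disk count below are hypothetical and not my actual array:
# Hypothetical example: a RAID 6 array with a 256 KiB stripe element and
# 6 data-bearing disks, formatted with explicit geometry hints.
mkfs.xfs -d su=256k,sw=6 /dev/sdb1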
I have a half-dozen servers and workstations in an office. The systems are connected to a Cisco SG200 gigabit switch in the room. The room has two gigabit wall-port ethernet outlets. I have the switch connected to one of the wall ports. I'd like to make use of the other port to increase throughput to/from the systems in the room. However, our network organization does not support link teaming/aggregation between switches for switches they don't install and control themselves (for reasons that make sense for our institution), so I can't set up my switch to use link aggregation over two ethernet connections directly to the wall port.
How can I put the additional port to best use? One alternative is to directly connect it to another NIC on one chosen computer, so at least one system can take advantage of dual links. But perhaps there is a better configuration? I would ideally like to see the highest throughput possible between any system in the room and the outside world. It's a shame to bottleneck all network connections to a single gigabit line.
The servers mostly serve web pages (which includes some large content like videos) and REST-based network services that involve file transfers in the multi-megabyte range.
Bought a Perc H710 without battery for a Dell T420. Installed it, cabled it, etc. It's been running great for months. I later bought a backup battery (model #70K80) from a reasonably reputable seller (not Dell). Installed it, let the system run for several days. Yet it continued to claim (every time it was rebooted) that the battery was either discharged or faulty. Thinking I got a bum battery, I bought another. And the same thing is happening: despite having been in the system for days, if I reboot and watch the boot messages, the Perc controller prints the "The battery is currently discharged or disconnected. Verify the connection and allow 30 minutes for charging...." message every time. Going into the Perc controller menus, I can see that it thinks there is no battery.
I can't see a way to install the battery incorrectly, physically, in the H710: the cable connector plugs in only one way, and the mounting clip fits over the board only one way. So I'm down to 4 hypotheses: (1) I received a second bad battery in a row, (2) I have the wrong type of battery, (3) I'm not doing something else that needs to be done when installing a battery in the H710, or (4) something is wrong with the H710 card itself.
To help eliminate #3, can people explain whether there is any other step involved in adding a backup battery to the H710, besides mounting the battery on the card itself?
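In case it's relevant, the only other check I can think of is querying the battery status from the OS. If Dell's OpenManage tools were installed, I believe the command would be something like the following, though I have not verified it on this particular system:
# Assumes Dell OMSA (srvadmin) is installed and its services are running.
omreport storage battery controller=0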
I have a pair of Broadcom NetXtreme 57711 10GbE cards. I put one in a Dell R710; it boots with the card fine, the OS (CentOS 7) recognizes it, and all seems well. However, when I put the other card in an R730xd (also running CentOS), something unexpected happens: the R730xd's fans kick into high speed as soon as the system starts to boot the OS, and run continuously at high speed no matter what is happening. The fans do not run at full speed when interacting with the Lifecycle Controller or the BIOS screens. They only start spinning at full speed when the computer starts to boot the OS and before the OS comes up, so it doesn't seem to be a function of the OS.
I've updated the R730xd's firmware to the latest versions available, I've tried setting the CPU performance profiles in the BIOS, and I've tried setting the thermal profile in the iDRAC, but nothing seems to change the behavior; the system always goes into full-on jet-engine mode. Googling reveals at least one other person encountering similar fan behavior related to adding a PCI card to an R730xd (though it's unclear whether it's the same card – it doesn't appear to be).
What am I doing wrong? More importantly, can this behavior be changed, so that the fans do not stay stuck at full speed?
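One thing I have read about but not been able to confirm is whether the iDRAC's cooling response for third-party PCIe cards is involved. My understanding is that it can be inspected and changed with racadm roughly as follows; the attribute name is what I have seen cited for iDRAC8 and may not be exact for my system:
# Query and (possibly) disable the extra fan offset the iDRAC applies when
# it detects a PCIe card it does not recognize. Unverified on my hardware.
racadm get system.thermalsettings.ThirdPartyPCIfanresponse
racadm set system.thermalsettings.ThirdPartyPCIfanresponse 0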
On my CentOS 7 systems, I use tuned-adm to set a profile appropriate to the environment during configuration, but after that, I never subsequently change that profile. It seems that the tuned system spawns a process (/usr/bin/python -Es /usr/sbin/tuned -l -P) for dynamic monitoring and adjustment. This process uses noticeably more memory compared to other daemons on my system. I would like to reduce nonessential services on a certain memory-constrained server. If I do not use a profile that involves dynamically adjusting parameters such as power consumption, does the tuned process need to keep running? Can I safely stop the process and have the profile that I originally set up persist from that point on?
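Concretely, what I'm contemplating is something like the following; what I don't know is whether the settings applied by the profile remain in effect afterward:
tuned-adm active         # confirm which profile is currently applied
systemctl stop tuned     # stop the monitoring/adjustment daemon now
systemctl disable tuned  # keep it from starting again at boot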
Dell Perc RAID cards (among others) allow you to set the disk cache policy to be either on (meaning, the individual hard disks use their built-in caches) or off (meaning, the individual disk caches are disabled). In reading discussions on the net, I find conflicting information about which setting is best. Some people say to disable the disk caches because a power failure can cause corruption of data; others say you can leave the disk caches enabled if your computer is connected to an uninterruptible power supply, and that enabling the caches improves disk performance even in RAID configurations.
Is there a definitive conclusion to which way the disk caches should be set?
Note that this is not about the RAID card's cache and caching policy – this is about the disks used in the array, not the card cache or the battery backup on the card itself.
I'm considering the purchase of a refurbished Dell R730xd with Perc H730 controller. Does anyone know if that controller will recognize and work with Seagate Constellation 4TB NL-SAS drives (ST4000NM0023)? The configuration would be RAID 6.
I realize the drives are not officially listed by Dell as supported drives. Reading around the net, it is unclear to me to what extent the Dell controller will only work with specific (signed?) drives. On another system, with a Dell H710, I was able to buy 3rd-party disks and they worked fine, but I have no experience with the newer H730 controllers.
I'm trying to get omreport on a Dell R710 running CentOS 7, but failing. The instructions at http://linux.dell.com/repo/hardware/omsa.html include a case for CentOS. They involve using wget to set up a yum repo configuration for Dell OpenManage Repository, but the command
yum install srvadmin-all
produces
Loaded plugins: dellsysid, fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.unl.edu
* extras: mirror.hmc.edu
* rpmforge: mirror.hmc.edu
* updates: mirrors.unifiedlayer.com
No package srvadmin-all available
The same thing happens with package srvadmin-base. I checked the Dell OMSA FAQ and tried yum clean all, but still no joy. My yum.conf has plugins=1. In fact, here is my yum.conf file:
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
distroverpkg=centos-release
And here is the output of yum repolist:
# yum repolist
Loaded plugins: dellsysid, fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.unl.edu
* extras: mirror.hmc.edu
* rpmforge: mirror.hmc.edu
* updates: mirrors.unifiedlayer.com
repo id repo name status
base/7/x86_64 CentOS-7 - Base 8465
dell-omsa-indep/7/x86_64 Dell OMSA repository - Hardware independent 2812
dell-omsa-specific/7/x86_64 Dell OMSA repository - Hardware specific 2812
dsu_repository_dependent/7/x86_64 dsu_repository_dependent 0
dsu_repository_independent dsu_repository_independent 488+1
extras/7/x86_64 CentOS-7 - Extras 104
rpmforge RHEL 7 - RPMforge.net - dag 245
updates/7/x86_64 CentOS-7 - Updates 1721
repolist: 16647
The Dell pages do not mention any CentOS version higher than 5, so is the problem that the Dell repository materials are not compatible with the latest CentOS? In any case, how can I get omreport for CentOS 7?
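One thing I plan to try next, in case the package simply has a different name, is listing what the Dell repos actually contain, along these lines:
# List everything the Dell OMSA repos provide, looking for srvadmin packages
# under a different name.
yum --disablerepo='*' --enablerepo='dell-omsa-*' list available | grep -i srvadmin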
After having purchased a Dell T420 with SATA drives and without RAID, and discovering this was a mistake for performance reasons, I'm obtaining a Perc H710 PCI card and SAS drives. Now I'm stuck with a dumb question: what kind of cables are needed to go between the Perc H710 card and the SAS drives? I confess to being inexperienced with SAS and RAID cards, and I can't quite figure out the correct power and data cabling, despite having spent some time looking at descriptions and pictures on the web. (Also, I'm not currently next to the computer, making it more difficult to figure out the right parts that I need.)
The T420 was purchased with the cabled 4-drive, 3.5" configuration with embedded SATA. If someone could point me towards a description of the typical connectors needed by SAS drives, and what a Dell T420 might have or need to put SAS drives in it, I would very much appreciate it.
A number of vendors (e.g., on eBay) sell a Perc H710 card that they say is part number VM02C. However, I can't find this on Dell's website, either by looking at every H710 card listed in the parts area, or using the site-wide search, or even using Google search with site:dell.com. A chat with one of the vendors indicates they are certain it's a Dell part, and I have no reason to doubt them. I think the part numbers they have are simply from a different type of database, or else I'm looking in the wrong place.
Does anyone know what a "VM02C-HIGH P" card corresponds to, in terms of Dell part numbers? Here are the things that I can find that look plausibly like the pictures on Dell's website:
- Manufacturer Part# : 2YP62, Dell Part# : 342-3631
- Manufacturer Part# : 8PX3M, Dell Part# : 342-4203
- Manufacturer Part# : PCVT5, Dell Part# : 342-3536
(In case it matters, I'm trying to find a card that will work properly in a Dell T420, and was hoping to save some $$ by getting a used card.)
In attempting to make some Dell server BMCs more secure, I followed the recommendations given elsewhere and disabled cipher 0, using the following command (ipmitool running on the host OS, which is CentOS 6.5 – I'm root while doing this, of course):
> ipmitool lan set 1 cipher_privs XXXaXXXXXXXXXXX
Then I wanted to change it to something else, and discovered that, apparently, I can't:
> ipmitool lan set 1 cipher_privs Xaaaaaaaaaaaaaa
LAN Parameter Data does not match! Write may have failed.
In other respects, things look fine:
> ipmitool lan print 1
Set in Progress : Set In Progress
Auth Type Support : NONE MD2 MD5 PASSWORD
Auth Type Enable : Callback : MD5
: User : MD5
: Operator : MD5
: Admin : MD5
: OEM :
IP Address Source : Static Address
IP Address : ...omitted for this posting...
Subnet Mask : 255.255.255.0
MAC Address : ...omitted for this posting...
SNMP Community String : ...omitted for this posting...
IP Header : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP : ...omitted for this posting...
Default Gateway MAC : 00:00:00:00:00:00
Backup Gateway IP : 0.0.0.0
Backup Gateway MAC : 00:00:00:00:00:00
802.1q VLAN ID : Disabled
802.1q VLAN Priority : 0
RMCP+ Cipher Suites : 0,1,2,3,4,5,6,7,8,9,10,11,12,13
Cipher Suite Priv Max : XXXaXXXXXXXXXXX
: X=Cipher Suite Unused
: c=CALLBACK
: u=USER
: o=OPERATOR
: a=ADMIN
: O=OEM
Does anyone recognize this problem, and know how to solve it? Why does it appear impossible now to change the cipher_privs value? I'm probably doing something ignorant – apologies if so.
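The only other idea I have left is to reset the BMC and retry the change, roughly as follows, though I don't know whether that would clear the "Set in Progress" state shown in the output above:
# Cold-reset the BMC, give it a minute or two to come back, then retry.
ipmitool mc reset cold
ipmitool lan set 1 cipher_privs Xaaaaaaaaaaaaaa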
I'm stumped and I hope someone else will recognize the symptoms of this problem.
Hardware: new Dell T110 II, dual-core Pentium G850 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 1746018) + vSphere Client. 2.5 GB RAM allocated. The disk is how CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi patched up, latest VMware tools installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client.
Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM:
for i in 1 2 3 4 5 6 7 8 9 10; do
  # write a 2 GB file of zeros; conv=fdatasync forces the data to be flushed
  # to disk before dd reports its timing
  dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync
done
I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well:
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s
...
but after 7-8 of these, it then prints
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s
If I wait a significant amount of time, say 30-45 minutes, and run it again, it again goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+), it drops to ~20-25 MB/s again.
Based on a preliminary search for possible causes, in particular VMware KB 2011861, I changed the Linux I/O scheduler to "noop" instead of the default. cat /sys/block/sda/queue/scheduler shows that it is in effect. However, I cannot see that it has made any difference in this behavior.
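For reference, the runtime way to make and verify that change looks like this (a sketch of the standard method, not necessarily the exact commands I typed):
# Select the noop elevator for sda; this does not persist across reboots
# unless also set on the kernel command line or via a udev rule.
echo noop > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler    # the active scheduler is shown in brackets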
Plotting the disk latency in vSphere's interface, it shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.)
What could be causing this?
I'm comfortable that it is not due to the disk failing, because I also had configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out from /etc/fstab and rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction.
(P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question [unless it's actually a hardware problem].)
ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.)
ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all.
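(In other words, the test loop becomes something like this:)
for i in 1 2 3 4 5 6 7 8 9 10; do
  # flush dirty pages and drop the page cache before each dd run
  sync ; echo 3 > /proc/sys/vm/drop_caches
  dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync
done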
ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.
ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this:
...iostat output while performance is "good" (not reproduced here)...
When things go "bad", iostat -d -m -x 1 shows this:
...iostat output while performance is "bad" (not reproduced here)...
ADDENDUM #5: At the suggestion of @ewwhite, I tried using tuned with different profiles and also tried iozone. In this addendum, I report the results of experimenting with whether different tuned profiles had any effect on the dd behavior described above. I tried changing the profile to virtual-guest, latency-performance and throughput-performance, keeping everything else the same, rebooting after each change, and then each time running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. It did not affect the behavior: just as before, things start off fine and many repeated runs of dd show the same performance, but then at some point after 10-40 runs, performance drops by half. Next, I used iozone. Those results are more extensive, so I'm putting them in as addendum #6 below.
ADDENDUM #6: At the suggestion of @ewwhite, I installed and used iozone to test performance. I ran it under different tuned profiles, and used a very large maximum file size (4G) parameter to iozone. (The VM has 2.5 GB of RAM allocated, and the host has 4 GB total.) These test runs took quite some time. FWIW, the raw data files are available at the links below. In all cases, the command used to produce the files was iozone -g 4G -Rab filename.
- Profile latency-performance:
  - raw results: http://cl.ly/0o043W442W2r
  - Excel (OSX version) spreadsheet with plots: http://cl.ly/2M3r0U2z3b22
- Profile enterprise-storage:
  - raw results: http://cl.ly/333U002p2R1n
  - Excel (OSX version) spreadsheet with plots: http://cl.ly/3j0T2B1l0P46
The following is my summary.
In some cases I rebooted after a previous run, in other cases I didn't, and simply ran iozone again after changing the profile with tuned. This did not seem to make an obvious difference to the overall results.
Different tuned profiles did not seem (to my admittedly inexpert eyes) to affect the broad behavior reported by iozone, though the profiles did affect certain details. First, unsurprisingly, some profiles changed the threshold at which performance dropped off for writing very large files: plotting the iozone results, you can see a sheer cliff at 0.5 GB for profile latency-performance, but this drop manifests itself at 1 GB under profile enterprise-storage. Second, although all profiles exhibit weird variability for combinations of small file sizes and small record sizes, the precise pattern of variability differed between profiles. In other words, in the plots shown below, the craggy pattern in the left side exists for all profiles, but the locations of the pits and their depths are different in the different profiles. (However, I did not repeat runs of the same profiles to see if the pattern of variability changes noticeably between runs of iozone under the same profile, so it is possible that what looks like differences between profiles is really just random variability.)
The following are surface plots of the different iozone tests for the tuned profile latency-performance. The descriptions of the tests are copied from the documentation for iozone.
Read test: This test measures the performance of reading an existing file.
Write test: This test measures the performance of writing a new file.
Random read: This test measures the performance of reading a file with accesses being made to random locations within the file.
Random write: This test measures the performance of writing a file with accesses being made to random locations within the file.
Fread: This test measures the performance of reading a file using the library function fread(). This is a library routine that performs buffered & blocked read operations. The buffer is within the user’s address space. If an application were to read in very small size transfers then the buffered & blocked I/O functionality of fread() can enhance the performance of the application by reducing the number of actual operating system calls and increasing the size of the transfers when operating system calls are made.
Fwrite: This test measures the performance of writing a file using the library function fwrite(). This is a library routine that performs buffered write operations. The buffer is within the user’s address space. If an application were to write in very small size transfers then the buffered & blocked I/O functionality of fwrite() can enhance the performance of the application by reducing the number of actual operating system calls and increasing the size of the transfers when operating system calls are made. This test is writing a new file so again the overhead of the metadata is included in the measurement.
Finally, during the time that iozone was doing its thing, I also examined the performance graphs for the VM in vSphere 5's client interface. I switched back and forth between the real-time plots of the virtual disk and the datastore. The available plotting parameters for the datastore were greater than for the virtual disk, and the datastore performance plots seemed to mirror what the disk and virtual disk plots were doing, so here I enclose only a snapshot of the datastore graph taken after iozone finished (under tuned profile latency-performance). The colors are a little bit hard to read, but what is perhaps most notable are the sharp vertical spikes in read latency (e.g., at 4:25, then again slightly after 4:30, and again between 4:50-4:55). Note: the plot is unreadable when embedded here, so I've also uploaded it to http://cl.ly/image/0w2m1z2T1z2b
I must admit, I don't know what to make of all this. I especially don't understand the weird pothole profiles in the small record/small file size regions of the iozone plots.