Does ReFS use FPGA-based XOR engines for RAID parity calculations? If not, is it possible to get it to use such tech?
GregC's questions
By default, this SAS-12 controller runs SAS-12 SSDs at SAS-6 speeds. How can I unlock SAS-12 speeds?
The error states:
Caution: Memory conflict detected. You may face boot problem.
What steps would you recommend I take to troubleshoot this?
Details: Dual Xeon E5 v1, two LSI 9286-8e RAID-on-Chip (2208-based) controllers, one LSI 2308 SAS6 HBA, Mellanox EN-3 NIC, Highpoint 1144C 4-port USB 3.0, AMD Radeon HD 46xx graphics
Latest BIOS and Option BIOS on each card
Connecting the controller to any of the three PCIe x16 slots yields choppy read performance of around 750 MB/sec
The lowly PCIe x4 slot yields a steady 1.2 GB/sec read
Given the same files, same Windows Server 2008 R2 OS, same RAID6 24-disk Seagate ES.2 3TB array on the LSI 9286-8e, same Dell R7610 Precision Workstation with A03 BIOS, same W5000 graphics card (no other cards), same settings, etc., I see very low CPU utilization in both cases.
SiSoft Sandra reports x8 at 5GT/sec in x16 slot, and x4 at 5GT/sec in x4 slot, as expected.
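For reference, here is a back-of-the-envelope check (just a sketch using the PCIe 2.0 signaling rate and 8b/10b encoding; it ignores TLP and flow-control overhead, so real throughput is somewhat lower) showing that neither slot's raw bandwidth explains the numbers above:

    # Rough PCIe 2.0 bandwidth estimate: 5 GT/sec per lane, 8b/10b encoding.
    GT_PER_SEC = 5e9        # transfers per second per lane
    ENCODING = 8 / 10       # 8b/10b: 8 data bits per 10 bits on the wire

    per_lane = GT_PER_SEC * ENCODING / 8      # bytes/sec per lane (~500 MB/sec)
    for lanes in (4, 8):
        print(f"x{lanes}: {lanes * per_lane / 1e9:.1f} GB/sec theoretical")
    # x4: 2.0 GB/sec, x8: 4.0 GB/sec -- both far above the observed
    # 750 MB/sec (x16 slot) and 1.2 GB/sec (x4 slot) reads.

So the slot itself should not be the limit at these speeds, which is what makes the choppy x16 behavior so puzzling.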
I'd like to be able to rely on the sheer speed of x16 slots.
What gives? What can I try? Any ideas? Please assist
Cross-posted from http://en.community.dell.com/support-forums/desktop/f/3514/t/19526990.aspx
Follow-up information
We did some more performance testing, reading from 8 SSDs connected directly (without an expander chip). This means that both SAS cables were utilized. We saw nearly double the performance, but it varied from run to run: 2.0, 1.8, 1.6, and 1.4 GB/sec were observed, then performance jumped back up to 2.0 GB/sec.
The SSD RAID0 tests were conducted in a x16 PCIe slot, with all other variables kept the same. It seems to me that we were getting double the performance of the HDD-based RAID6 array.
Just for reference: the maximum possible read burst speed over a single channel of SAS 6 Gb/sec is 570 MB/sec due to 8b/10b encoding and protocol limitations (a SAS cable provides four such channels).
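To make that concrete, here is the arithmetic as a quick sketch (the ~5% protocol overhead is an assumed round figure chosen to land on the 570 MB/sec quoted above):

    # SAS 6 Gb/sec per-lane throughput after 8b/10b encoding and protocol overhead.
    LINE_RATE = 6e9         # bits/sec per SAS lane
    ENCODING = 8 / 10       # 8b/10b encoding
    PROTOCOL = 0.95         # assumed ~5% framing/protocol overhead

    per_lane = LINE_RATE * ENCODING * PROTOCOL / 8    # bytes/sec
    per_cable = 4 * per_lane                          # a SAS cable carries 4 lanes
    print(f"per lane:  {per_lane / 1e6:.0f} MB/sec")  # ~570 MB/sec
    print(f"per cable: {per_cable / 1e9:.2f} GB/sec") # ~2.28 GB/sec
    # Two cables (8 lanes) top out around 4.5 GB/sec, so the ~2.0 GB/sec
    # read from 8 directly attached SSDs fits comfortably within the links.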
After updating to latest controller firmware, I started receiving the following error messages:
LSI 2208 ROC: Temperature sensor below error threshold on enclosure 1 Sensors 5 thru 7
Is this something I should worry about, or is it a Red Herring?
Details: I have a Sans Digital NexentaSTOR 24-disk JBOD enclosure connected to LSI 9286-8e RAID-on-Chip controller with two SAS cables. Seagate ES.2 3TB SAS hard drives populate every bay in the enclosure.
Here's what I have done so far:
Increasing the Rx/Tx buffers beyond the defaults boosts performance the most. I set RSS Queues to 4 on each adapter, and set the starting RSS CPU on the second port to something other than 0 (it's 16 on the PC I use, which has 16 cores and 32 hardware threads).
From watching Process Explorer, I am limited by the CPU's ability to handle the large number of incoming interrupts, even with RSS enabled. I am using a PCIe x8 (electrical) slot in 2.x mode; each of the two adapters connects over a 5 GT/sec x8 link.
OS responsiveness does not matter; I/O throughput does. I am also limited by the clients' inability to process jumbo frames.
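To put the interrupt load in perspective, here is a rough sketch of the packet rates involved (assuming standard 1500-byte frames, since the clients cannot do jumbo, and ignoring protocol header overhead):

    # Approximate packet rate at 10 Gb/sec with standard-size frames.
    LINK_RATE = 10e9        # bits/sec per 10GbE port
    FRAME_BYTES = 1500      # standard MTU; the clients cannot use jumbo frames

    pps_per_port = LINK_RATE / (FRAME_BYTES * 8)
    print(f"~{pps_per_port:,.0f} packets/sec per port at line rate")
    # Roughly 833k packets/sec per port, ~1.67M packets/sec for the teamed pair.
    # Even with RSS and interrupt moderation spreading that across cores, it is
    # a lot of per-packet work; 9000-byte jumbo frames would cut the packet
    # rate by roughly a factor of six.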
What settings should I try next?
Details: dual Xeon E5-2665, 32 GB RAM, eight SSDs in RAID0 (a RAM drive was used for NIC performance validation), 1 TB of data to be moved via IIS/FTP from 400 clients, ASAP.
In response to comments:
Actual read throughput is 650 MB/sec over a teamed pair of 10 Gb/sec links, into a RAM drive
Antivirus and firewall are off, AFAICT. (I have fairly good control over what's installed on the PC, in this case. How can I be sure that no filters are reducing performance? I will have to follow up, good point.)
In Process Explorer, I see stretches of time where the CPU keeps going (red, i.e. kernel time) while network and disk I/O are stopped
Max RSS processors is at its default value, 16
Message-signaled interrupts are supported on both instances of the X520-DA2 device, with MessageNumberLimit set to 18. Here's what I see on my lowly desktop card
Once in a blue moon, I am seeing a blue screen of death on a shiny new Dell R7610 with a single 1100 Watt Dell-provided power supply on a beefy UPS. The bug check code is 101 (A clock interrupt was not received...), which some say is caused by under-volting a CPU.
Naturally, I would have to contact Dell support, and their natural reaction would be to replace a motherboard, a power supply, or CPU, or a mixture of the above components.
In synthetic benchmarks, system memory and CPU, as well as graphics memory and CPU perform admirably, staying up for hours and days.
My questions are:
- Is power supply good enough for the application? Does it provide clean enough power to VRMs on the motherboard?
- Are VRMs good enough for dual Xeon E5-2665?
- Does the C-state logic work correctly?
- Is there sufficient current provided to PCIe peripherals, such as disk controllers?
P.S. Recently, I've been through a similar ordeal with HP. They were nice and professional about it, but the root cause was not established, and the HP machine is still less than 100% healthy, giving me a blue screen of death every couple of months.
Here's what quick web-searching turns up: http://www.sevenforums.com/bsod-help-support/35427-win-7-clock-interrupt-bsod-101-error.html#post356791
It appears Dell has addressed the above issue by clocking the PCIe bus down to 5 GT/sec in the A03 BIOS. My disk controllers support PCIe 3.0, meaning I will have to re-validate stability. Early testing shows improvements.
Further testing shows a significant decrease in performance in each of the x16 slots on the Dell R7610 with the A03 BIOS, but it is now running stably.
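The bandwidth cost of that downclock is easy to estimate (a sketch using only the published signaling rates and encoding overheads; real throughput is lower once protocol overhead is counted):

    # Per-lane throughput at PCIe 2.0 vs PCIe 3.0 signaling rates.
    gen2 = 5e9 * (8 / 10) / 8       # 5 GT/sec with 8b/10b encoding
    gen3 = 8e9 * (128 / 130) / 8    # 8 GT/sec with 128b/130b encoding
    print(f"Gen2: {gen2 / 1e6:.0f} MB/sec per lane")   # ~500 MB/sec
    print(f"Gen3: {gen3 / 1e6:.0f} MB/sec per lane")   # ~985 MB/sec
    # Pinning a PCIe 3.0 controller at 5 GT/sec roughly halves the available
    # bus bandwidth on an x8 link (about 4 GB/sec instead of about 7.9 GB/sec).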
The HP machine received a microcode update in the September 2013 SUM (July BIOS) that makes it stable.
I am getting beeping from the LSI external RAID controller even after a proper shutdown, with no array re-initialization taking place.
Would I be required to use a battery backup unit (BBU) with the LSI controller if it's driving an external enclosure with an expander chip?
Details: firmware and drivers are all 5.5 for Windows
I have a RAID6 array managed by an LSI 9286-8e card. I also have a Sans Digital 24-bay NexentaSTOR JBOD enclosure with a built-in SAS expander. They are connected to separate UPS devices. Normally, I'd shut down the PC, leaving the RAID6 in a healthy state. But today the power to the JBOD enclosure was cut while the PC kept running.
After restarting the PC, all disks in the RAID6 lit up RED, and the only option in the LSI MegaRAID manager app was to reset each disk to unassigned, thereby losing all data on the RAID6 array. Thankfully, I am only testing, but how would I recover if this were to happen in production?
I am running a team of two 10 GigE ports on an Intel X520-DA2 network card. They work well in tandem and achieve the desired throughput. However, I see an intermittent issue whereby Wireshark and my own application (using WinPcap) only show the underlying ports, failing to recognize the team adapter.
Details: Intel 17.4 NIC drivers on Windows Server 2008 R2 with all patches. HP DL370 G6 server. RSS enabled on both underlying Intel NICs.
The exact error: Unable to open the adapter (rpcap://\Device\NPF_{401D5903-16E7-41DC-8484-5D96765B9692}). failed to set hardware filter to promiscuous mode
Cross-posted on the Wireshark site.
I have an HP ProLiant DL370 G6 server that I am using as a workstation. On both reboot and cold boot, it takes 60 seconds before the screen POSTs with a discrete Radeon HD 6xxx GPU. What can I do to make it boot faster?
I have had a chance to use an HP Gen8 server. It POSTs quickly and shows the various CPU/memory/QPI initialization steps. It still takes a long time, but at least I can see what's going on.
I am experiencing rare but real unrecoverable machine checks on an HP DL370 G6 dual-core Xeon server. I ran memtest86+ previously, and have run CPU-intensive operations, without any problems.
In your opinion, does this indicate a real problem, or is it normal and expected behavior?
How would you approach this problem?
EDIT: After some troubleshooting, it seems that these machine checks, as well as the problems when displaying Device Manager, can be traced back to the NC375i NICs. All is well when the NICs are out of the server.
Further improvements to the stability of the HP Gen6 with Intel Xeon came with the BIOS update on the September 2013 HP Update DVD. Intel's newer microcode makes these CPUs much more stable. We haven't seen hardware-related BSODs since the September update.
I would like to rely on a RAID-on-Chip solution to control 24 SAS hard drives in a direct-attached environment. How would you approach this to get the best bandwidth, given that I'd like to spend less than $10,000 on the interconnect?
I've read that the LSI 2208 chip can easily handle an 8-drive SSD RAID6 in hardware. I'd like to harness its power to control 24 SAS hard drives over an expander in an external enclosure.
Currently I use an Infortrend S24S-G2240 external enclosure, which provides its own controller and expander. I'd like to somehow use the LSI 2208 controller for RAID6 instead of the mystery controller in the enclosure.
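For rough sizing, this is the kind of arithmetic I'm working from (a sketch; the ~150 MB/sec per-drive sequential rate is my assumption for ES.2-class 3TB drives, and the per-lane figure is the 8b/10b-derived ~570 MB/sec mentioned earlier):

    # Rough check: can 24 spinning drives saturate two 4-lane SAS 6Gb cables?
    DRIVES = 24
    MB_PER_DRIVE = 150      # assumed sequential rate for ES.2-class 3TB drives
    LANES = 8               # two SAS cables x 4 lanes each
    MB_PER_LANE = 570       # 6 Gb/sec after 8b/10b encoding and protocol overhead

    drive_side = DRIVES * MB_PER_DRIVE
    link_side = LANES * MB_PER_LANE
    print(f"drives can supply ~{drive_side / 1000:.1f} GB/sec")  # ~3.6 GB/sec
    print(f"links can carry  ~{link_side / 1000:.1f} GB/sec")    # ~4.6 GB/sec
    # Under these assumptions the expander links are not the first bottleneck;
    # the RAID-on-Chip itself and its PCIe slot matter at least as much.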
P.S. I tried to create SAS-expander as a tag, but my rep on this site is low.
What would be the reasons for blocking traceroute functionality for clients on a corporate network?
I have an Infortrend EonStor A16F-G2430 RAID-6 array connected to a Windows Server 2008 R2 x64 machine via a QLogic 2Gb Fibre Channel PCI-X adapter. I am able to initialize the 8.5TB storage as GPT and to create a simple partition spanning the entire disk. I assigned the letter X: to the partition.
When I try to format from Explorer, the command line, or Disk Management, I always get a "Disk volume is write-protected" error.
Please assist with formatting.
Edit: I already saw http://support.microsoft.com/kb/971436 -- It does not apply.
I would like the image to detect and install storage controller and video drivers for a limited but varying set of machines, known ahead of time. Windows 7 x64 and Windows Server 2008 R2.
I would like to activate each restored instance separately.
I have a developer MSDN subscription, if that helps. We commonly use Acronis Server.
I'd like to limit server disk fragmentation when multiple clients upload to an FTP server simultaneously.
Is there a way to tell the FTP server to preallocate disk space for a big block of an incoming file, and to keep extending it in big blocks?
Alternatively, is there a way to tell the FTP server to preallocate the entire file's worth of space, and write out the incoming FTP data in a more sequential pattern?
Defragmenting files after the transfer is not acceptable: it would take too long.
I am using IIS 6.0, but other FTP server products would be a welcome replacement.
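To illustrate the behavior I'm after, here is a minimal sketch of the preallocation idea in Python (not an IIS feature that I know of; the 64 MB increment is a made-up number, and whether the filesystem actually keeps the runs contiguous depends on its allocator):

    CHUNK = 64 * 1024 * 1024   # hypothetical preallocation increment

    def write_preallocated(path, total_size, incoming_blocks):
        """Grow the file in large steps so the filesystem can hand out
        contiguous runs, then fill it sequentially as data arrives."""
        allocated, written = 0, 0
        with open(path, "wb") as f:
            for block in incoming_blocks:      # e.g. blocks read from a socket
                if written + len(block) > allocated:
                    # Extend end-of-file in big increments instead of per packet.
                    allocated = min(total_size, allocated + max(CHUNK, len(block)))
                    f.truncate(allocated)
                f.write(block)
                written += len(block)
            f.truncate(written)                # trim any unused tail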
I am considering different topologies in an attempt to improve FTP upload speeds from many devices into one PC with a quad-port NIC.
Right now we use 802.3ad on the PC (same IP address on all four ports). From the PC, two 1Gb Ethernet cables go to one switch and two more go to another switch. Two additional cables interconnect the two switches, creating a redundant path.
If the four uplink ports on each switch are configured with LACP, do you think this is the speediest configuration, or can I get a quicker connection by using a different topology? Keep in mind that a single IP address is a requirement.
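One constraint worth keeping in mind, sketched below with a toy hash (real 802.3ad implementations hash on MAC/IP/port fields and are vendor-specific): LACP balances traffic per flow, so a single client's transfer stays on one 1Gb member link regardless of how the uplinks are arranged.

    # Toy illustration of per-flow hashing in an 802.3ad/LACP team.
    LINKS = 4

    def pick_link(src_ip: str, dst_ip: str) -> int:
        # Stand-in for the NIC/switch transmit hash; any given flow always
        # maps to the same member link.
        return hash((src_ip, dst_ip)) % LINKS

    # Hypothetical client addresses standing in for the 6 x 24 devices.
    clients = [f"10.0.{i // 24}.{i % 24 + 1}" for i in range(6 * 24)]
    usage = [0] * LINKS
    for c in clients:
        usage[pick_link(c, "10.0.100.1")] += 1
    print(usage)   # roughly even spread across the 4 links, varying by run
    # Aggregate throughput can approach 4 Gb/sec, but any single upload
    # is capped at 1 Gb/sec.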
I have two 24-port switches that in turn connect to six 24-port switches. There's a device at each of the leaf ports (6 x 24 devices) trying to FTP into a PC, all at once.
On the PC end, I am trying to make sure that the bandwidth is adequate for the job. So I grabbed a quad-port 1000GT card by Intel and teamed the ports for performance.
Long story short, the kernel time intermittently goes to 25% on a quad-CPU system, locking up anything network-related. What would you recommend?
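For scale, here is a rough sketch of what the PC end has to absorb when every leaf device pushes at once (it assumes standard 1500-byte frames and ignores protocol overhead):

    # Aggregate load on the quad-port team when every device uploads at once.
    DEVICES = 6 * 24        # devices at the leaf ports
    TEAM_RATE = 4 * 1e9     # four 1 Gb/sec ports, in bits/sec
    FRAME_BYTES = 1500      # standard MTU

    per_device = TEAM_RATE / DEVICES / 8 / 1e6
    pps = TEAM_RATE / (FRAME_BYTES * 8)
    print(f"~{per_device:.1f} MB/sec per device if the team is saturated")  # ~3.5 MB/sec
    print(f"~{pps:,.0f} packets/sec hitting the PC at line rate")           # roughly 333k pps
    # A sustained packet rate in that range is a plausible source of the
    # 25% kernel-time spikes on a quad-CPU box.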
I am trying to create a software stripe across two physical disks (each is itself a hardware RAID5 set with a 128K stripe underneath). I've read that one can use diskpart, but I am unable to come up with a command that works. This is on Server 2003 SP2.
I was trying:
create volume stripe disk=2,3 align=1024
Diskpart errors out:
The arguments you specified for this command are not valid.
P.S. Tried successfully with a basic disk and a primary partition.
Please reply,
-Greg