I'm wondering whether OpenConnect server (ocserv) can, in any way, support authentication through an external script, like OpenVPN's auth-user-pass-verify option does.
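What I mean is the kind of setup OpenVPN allows, roughly like this (the script path and credentials below are only illustrative):
In the OpenVPN server config:
script-security 2
auth-user-pass-verify /etc/openvpn/verify.sh via-file
/etc/openvpn/verify.sh:
#!/bin/sh
# with via-file, $1 is a temporary file: line 1 = username, line 2 = password
username=$(sed -n 1p "$1")
password=$(sed -n 2p "$1")
# exit 0 accepts the login, anything else rejects it
[ "$username" = "demo" ] && [ "$password" = "secret" ] && exit 0
exit 1
I would like to plug in a similar external script on the ocserv side.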
I am trying to get statistics for subinterfaces on a Cisco Nexus 3500 (C3548P-10GX) switch.
The subinterface is configured for a port channel as shown below.
interface port-channel3.1802
encapsulation dot1q 1802
ip address 192.168.1.165/30
Neither the subinterface nor the VLAN interface shows any statistics:
NEXUS(config)# sh interface port-channel3.1802
port-channel3.1802 is up
admin state is up, [parent interface is port-channel3]
Hardware: Port-Channel, address: xxxx.xxxx.7641 (bia xxxx.xxxx.7608)
Description: vlan for internet
Internet Address is 192.168.1.165/30
MTU 1500 bytes, BW XXXX000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation 802.1Q Virtual LAN, Vlan ID 1802, medium is broadcast
Auto-mdix is turned off
Switchport monitor is off
EtherType is 0x8100
NEXUS(config)# sh interface vlan 1802
Vlan1802 is up, line protocol is up, autostate enabled
Hardware is EtherSVI, address is XXX.XXX.XXX
Description:
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not supported
ARP type: ARPA
Last clearing of "show interface" counters never
L3 in Switched:
ucast: 14385814 pkts, 1951146757 bytes
For other interfaces, I have these statistics:
RX
285716197370 unicast packets 1725784 multicast packets 1273 broadcast packets
285717924427 input packets 279284980293540 bytes
0 jumbo packets 0 storm suppression packets
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 0 input discard
0 Rx pause
TX
10240494209194 unicast packets 0 multicast packets 0 broadcast packets
10240494209298 output packets 14411966319769316 bytes
0 jumbo packets
0 output error 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 12372589734 output discard
0 Tx pause
show version:
Software
BIOS: version 5.3.1
NXOS: version 7.0(3)I7(7)
BIOS compile time: 06/04/2019
NXOS image file is: bootflash:///nxos.7.0.3.I7.7.bin
NXOS compile time: 8/28/2019 16:00:00 [08/29/2019 00:41:42]
Is there any way to enable statistics for subinterfaces?
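If there is no CLI counter for the subinterface, would polling it over SNMP be an alternative? A rough sketch of what I have in mind, assuming SNMP is enabled on the switch (the host name and community string are placeholders):
# find the ifIndex of the subinterface
snmpwalk -v2c -c public nexus-switch IF-MIB::ifDescr | grep port-channel3.1802
# read the 64-bit byte counters for that ifIndex
snmpget -v2c -c public nexus-switch IF-MIB::ifHCInOctets.<ifIndex> IF-MIB::ifHCOutOctets.<ifIndex>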
On an ESXi guest machine running Ubuntu 18.04, I have two NICs connected to two different vSwitches, each with its own separate 10Gb uplink.
I created a bonded NIC from these two links and tested both the balance-rr and balance-alb modes.
When I test the bandwidth, the bonded interface does not exceed 10 Gb/s (around 9.7 Gbps).
bwm-ng v0.6.1 (probing every 0.500s), press 'h' for help
input: /proc/net/dev type: rate
\ iface Rx Tx Total
==============================================================================
lo: 0.00 b/s 0.00 b/s 0.00 b/s
ens160: 3.82 kb/s 5.30 Gb/s 5.30 Gb/s
ens192: 15.33 kb/s 4.35 Gb/s 4.35 Gb/s
bond0: 19.16 kb/s 9.64 Gb/s 9.64 Gb/s
------------------------------------------------------------------------------
total: 38.31 kb/s 19.28 Gb/s 19.29 Gb/s
# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: ens192: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
link/ether 3a:f0:c2:xx:xx:xx brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 3a:f0:c2:xx:xx:xx brd ff:ff:ff:ff:ff:ff
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: ens192
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: ens192
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:xx:xx:xx
Slave queue ID: 0
Slave Interface: ens160
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:xx:xx:xx
Slave queue ID: 0
I already tested the same configuration without ESXi (Ubuntu on a bare-metal machine) and got an aggregated bandwidth of around 16 Gbps on the bond0 interface. Also, with a single NIC on an ESXi guest, I can saturate it and get around 10 Gbps.
Is there any limit on the ESXi vSwitch or on the guest machine?
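For reference, on Ubuntu 18.04 a bond like this would typically be defined with netplan roughly as follows (the address is illustrative; for the second test the mode was switched to balance-alb):
network:
  version: 2
  ethernets:
    ens160: {}
    ens192: {}
  bonds:
    bond0:
      interfaces: [ens160, ens192]
      parameters:
        mode: balance-rr
        mii-monitor-interval: 100
      addresses: [10.0.0.10/24]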
There is an HP DL380 G8 server with a P420i RAID controller containing a RAID 5 volume. It suddenly became slow, although all physical disks report OK.
[root@localhost ~]# ssacli ctrl all show config detail
Smart Array P420i in Slot 0 (Embedded)
Bus Interface: PCI
Slot: 0
Serial Number: XXXXXXXXXXXX
Cache Serial Number: XXXXXXXXXXXX
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 5.42-0
Rebuild Priority: Low
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Parallel Surface Scan Supported: No
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 10% Read / 90% Write
Drive Write Cache: Enabled
Total Cache Size: 1.0
Total Cache Memory Available: 0.8
No-Battery Write Cache: Enabled
SSD Caching RAID5 WriteBack Enabled: False
SSD Caching Version: 1
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 72
Cache Module Temperature (C): 41
Capacitor Temperature (C): 33
Number of Ports: 2 Internal only
Encryption: Not Set
Driver Name: hpsa
Driver Version: 3.4.20
Driver Supports SSD Smart Path: True
PCI Address (Domain:Bus:Device.Function): 0000:02:00.0
Port Max Phy Rate Limiting Supported: False
Host Serial Number: XXXXXXXXXXXX
Sanitize Erase Supported: False
Primary Boot Volume: None
Secondary Boot Volume: None
Internal Drive Cage at Port 1I, Box 1, OK
Power Supply Status: Not Redundant
Drive Bays: 4
Port: 1I
Box: 1
Location: Internal
Physical Drives
physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS HDD, 8 TB, OK)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS HDD, 8 TB, OK)
physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS HDD, 8 TB, OK)
physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS HDD, 8 TB, OK)
Internal Drive Cage at Port 2I, Box 1, OK
Power Supply Status: Not Redundant
Drive Bays: 4
Port: 2I
Box: 1
Location: Internal
Physical Drives
physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS HDD, 8 TB, OK)
physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS HDD, 8 TB, OK)
physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS HDD, 8 TB, OK)
physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS HDD, 8 TB, OK)
Port Name: 1I
Port ID: 0
Port Connection Number: 0
SAS Address: XXXXXXXXXXXX
Port Location: Internal
Port Name: 2I
Port ID: 1
Port Connection Number: 1
SAS Address: XXXXXXXXXXXX
Port Location: Internal
Array: A
Interface Type: SAS
Unused Space: 0 MB (0.00%)
Used Space: 58.22 TB (100.00%)
Status: OK
Array Type: Data
Smart Path: disable
Logical Drive: 1
Size: 50.94 TB
Fault Tolerance: 5
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 1792 KB
Status: OK
Unrecoverable Media Errors: None
Caching: Enabled
Parity Initialization Status: Initialization Completed
Unique Identifier: XXXXXXXXXXXX
Disk Name: /dev/sda
Mount Points: /boot 1023 MB Partition Number 2
OS Status: LOCKED
Logical Drive Label: XXXXXXXXXXXX
Drive Type: Data
LD Acceleration Method: Controller Cache
physicaldrive 1I:1:1
Port: 1I
Box: 1
Bay: 1
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 8 TB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Rotational Speed: 7200
Firmware Revision: E004
Serial Number: XXXXXXXXXXXX
WWID: XXXXXXXXXXXX
Model: SEAGATE ST8000NM0075
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
Sanitize Erase Supported: True
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
physicaldrive 1I:1:2
Port: 1I
Box: 1
Bay: 2
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 8 TB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Rotational Speed: 7200
Firmware Revision: E003
Serial Number: XXXXXXXXXXXX
WWID: XXXXXXXXXXXX
Model: SEAGATE ST8000NM0075
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
Sanitize Erase Supported: True
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
physicaldrive 1I:1:3
Port: 1I
Box: 1
Bay: 3
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 8 TB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Rotational Speed: 7200
Firmware Revision: E003
Serial Number: XXXXXXXXXXXX
WWID: XXXXXXXXXXXX
Model: SEAGATE ST8000NM0075
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
Sanitize Erase Supported: True
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
physicaldrive 1I:1:4
Port: 1I
Box: 1
Bay: 4
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 8 TB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Rotational Speed: 7200
Firmware Revision: E003
Serial Number: XXXXXXXXXXXX
WWID: XXXXXXXXXXXX
Model: SEAGATE ST8000NM0075
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
Sanitize Erase Supported: True
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
physicaldrive 2I:1:5
Port: 2I
Box: 1
Bay: 5
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 8 TB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Rotational Speed: 7200
Firmware Revision: E003
Serial Number: XXXXXXXXXXXX
WWID: XXXXXXXXXXXX
Model: SEAGATE ST8000NM0075
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
Sanitize Erase Supported: True
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
physicaldrive 2I:1:6
Port: 2I
Box: 1
Bay: 6
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 8 TB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Rotational Speed: 7200
Firmware Revision: E003
Serial Number: XXXXXXXXXXXX
WWID: XXXXXXXXXXXX
Model: SEAGATE ST8000NM0075
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
Sanitize Erase Supported: True
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
physicaldrive 2I:1:7
Port: 2I
Box: 1
Bay: 7
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 8 TB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Rotational Speed: 7200
Firmware Revision: E003
Serial Number: XXXXXXXXXXXX
WWID: XXXXXXXXXXXX
Model: SEAGATE ST8000NM0075
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
Sanitize Erase Supported: True
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
physicaldrive 2I:1:8
Port: 2I
Box: 1
Bay: 8
Status: OK
Drive Type: Data Drive
Interface Type: SAS
Size: 8 TB
Drive exposed to OS: False
Logical/Physical Block Size: 512/4096
Rotational Speed: 7200
Firmware Revision: E004
Serial Number: XXXXXXXXXXXX
WWID: XXXXXXXXXXXX
Model: SEAGATE ST8000NM0075
PHY Count: 2
PHY Transfer Rate: 6.0Gbps, Unknown
Drive Authentication Status: OK
Carrier Application Version: 11
Carrier Bootloader Version: 6
Sanitize Erase Supported: True
Unrestricted Sanitize Supported: True
Shingled Magnetic Recording Support: None
SEP (Vendor ID PMCSIERA, Model SRCv8x6G) 380
Device Number: 380
Firmware Version: RevB
WWID: XXXXXX
Vendor ID: PMCSIERA
Model: SRCv8x6G
All disks look good when I check their SMART info.
[root@localhost ~]# smartctl -a -d cciss,1 /dev/sda
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-957.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Vendor: SEAGATE
Product: ST8000NM0075
Revision: E003
Compliance: SPC-4
User Capacity: 8,001,563,222,016 bytes [8.00 TB]
Logical block size: 512 bytes
Physical block size: 4096 bytes
LU is fully provisioned
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
Logical Unit id: 0xXXXXXXXXXXXXXXX
Serial number: XXXXXXXXXXXXXXX
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Sat Jan 22 01:58:45 2022 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Temperature Warning: Enabled
=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK
Grown defects during certification <not available>
Total blocks reassigned during format <not available>
Total new blocks reassigned <not available>
Power on minutes since format <not available>
Current Drive Temperature: 41 C
Drive Trip Temperature: 60 C
Manufactured in week 16 of year 2018
Specified cycle count over device lifetime: 10000
Accumulated start-stop cycles: 98
Specified load-unload count over device lifetime: 300000
Accumulated load-unload cycles: 1411
Elements in grown defect list: 0
Vendor (Seagate Cache) information
Blocks sent to initiator = 238399504
Blocks received from initiator = 2085969384
Blocks read from cache and sent to initiator = 3750062329
Number of read and write commands whose size <= segment size = 45023273
Number of read and write commands whose size > segment size = 2781
Vendor (Seagate/Hitachi) factory information
number of hours powered up = 31417.37
number of minutes until next internal SMART test = 7
Error counter log:
Errors Corrected by Total Correction Gigabytes Total
ECC rereads/ errors algorithm processed uncorrected
fast | delayed rewrites corrected invocations [10^9 bytes] errors
read: 2200494275 0 0 2200494275 0 13316.232 0
write: 0 0 0 0 0 1068.584 0
Non-medium error count: 4083
[GLTSD (Global Logging Target Save Disable) set. Enable Save with '-S on']
No Self-tests have been logged
Is there any way to check the I/O statistics of a physical disk behind a hardware RAID controller?
How can I find the root cause of the problem?
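The closest I can get so far is to loop over the physical drives through the controller and compare their SMART error counters, and to watch the logical drive from the OS side (a rough sketch; the cciss indexes 0-7 correspond to the eight drives):
for i in $(seq 0 7); do
  echo "=== cciss,$i ==="
  # pull out the health, defect-list and error-count lines for each drive
  smartctl -a -d cciss,$i /dev/sda | grep -Ei 'health status|grown defect|non-medium|uncorrected'
done
# only shows the logical drive /dev/sda, not the individual disks (sysstat package)
iostat -x 5 /dev/sda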
I have two point-to-point OpenVPN tunnels, tun1 and tun2, from one host to two different servers (the host runs Ubuntu Server 18.04):
(Server 1: IP1) <--> (Host: IP0) <--> (Server 2: IP2)
For tun1 to server 1 I have:
proto udp
mode p2p
remote IP1
rport 4856
local IP0
lport 4856
dev-type tun
tun-ipv6
resolv-retry infinite
dev tun1
comp-lzo
persist-key
persist-tun
cipher aes-256-cbc
ifconfig 192.168.76.2 192.168.76.3
secret /etc/openvpn/key.key
For tun2 to server 2 I have:
proto udp
mode p2p
remote IP2
rport 4857
local IP0
lport 4857
dev-type tun
tun-ipv6
resolv-retry infinite
dev tun2
comp-lzo
persist-key
persist-tun
cipher aes-256-cbc
ifconfig 192.168.77.2 192.168.77.3
secret /etc/openvpn/key.key
I want to forward packets received on tun1 with dst=IP4 to tun2, so I added a static route:
ip route add IP4/32 via 192.168.77.3
IP forwarding is also enabled:
sysctl -w net.ipv4.ip_forward=1
No iptables rules exist; all chains have a default ACCEPT policy.
All tunnel interfaces are up and connected to their corresponding servers.
When I send packets with dst=IP4 from Server 1 into the tunnel, they are received on the host, but they are not forwarded to tun2 and I cannot see them on Server 2 with tcpdump.
Any ideas?
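A minimal way to see where the packets stop would be something like this (IP4 again stands for the real destination address):
ip route get IP4            # which interface the kernel actually picks for IP4
tcpdump -ni tun1 host IP4   # do the packets arrive from Server 1?
tcpdump -ni tun2 host IP4   # are they forwarded towards Server 2?
# reverse-path filtering can silently drop forwarded packets:
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.tun1.rp_filter net.ipv4.conf.tun2.rp_filter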
I'm trying to set up the Katran load balancer from https://github.com/tehnerd/katran on Ubuntu, but I am confused by the example in the documentation and by how packets are forwarded over the IPIP interface between the LB and the real servers. The example doc has some confusing tcpdumps and IPs, and I couldn't work out how many NICs or IPs I need.
I couldn't find any other documentation.
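My current understanding, taken from generic IPIP/DSR (LVS-TUN style) setups rather than from the Katran docs themselves, is that each real server has to decapsulate IPIP and own the VIP locally, roughly like this (VIP is a placeholder):
modprobe ipip                    # creates the tunl0 device that decapsulates incoming IPIP
ip link set up dev tunl0
ip addr add VIP/32 dev tunl0     # the real server must answer for the VIP itself
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sysctl -w net.ipv4.conf.all.rp_filter=0
Is that the kind of setup the Katran example assumes?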
Newly installed FreeBSD 12.1 and 11.4 freeze immediately when loading the oce driver with kldload if_oce. The whole system locks up at that point; Ctrl+C and Ctrl+Z don't work.
What is the starting point for finding the cause of the problem?
Why does this make the whole of FreeBSD unresponsive, leaving a reboot as the only option?
I have already run into similar hangs with mount and ZFS operations.
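Is something like the following a reasonable starting point, assuming the GENERIC kernel (which has the DDB debugger compiled in)?
# /boot/loader.conf - verbose boot, so the last probe messages before the hang are visible
boot_verbose="YES"
# /etc/rc.conf - save a kernel crash dump to swap if it panics instead of just hanging
dumpdev="AUTO"
# /etc/sysctl.conf - allow breaking into the in-kernel debugger from the console
# (then 'bt' at the db> prompt for a backtrace)
debug.kdb.break_to_debugger=1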
This is an nginx setup for serving large static files (100 MB to 16 GB) on CentOS 7.7 with a bonded 2x10 Gbps network. ZFS on Linux is used.
- Pool size 50TB on 8x8TB disks
- Max arc size 65GB
- L2ARC 1TB nvme
- Recordsize=16M
- ashift=12
- nginx: sendfile off
- nginx: aio on
- nginx: output_buffers 1 128k
The system has been up for some days. Too much CPU is being spent filling the ARC, the disks are busy at around 600 MB/s, yet nginx throughput stays under 2 Gbps and the L2ARC hit ratio is very low. Any ideas?
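For reference, the nginx side of the setup above corresponds roughly to this (the path and location are illustrative):
location /files/ {
    root            /tank/static;
    sendfile        off;
    aio             on;
    output_buffers  1 128k;
}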
Here is the zfs_arc_summary output and the perf report.
ZFS Subsystem Report Wed May 20 12:27:46 2020
ARC Summary: (HEALTHY)
Memory Throttle Count: 0
ARC Misc:
Deleted: 1.84m
Mutex Misses: 157.78k
Evict Skips: 157.78k
ARC Size: 102.54% 66.97 GiB
Target Size: (Adaptive) 100.00% 65.32 GiB
Min Size (Hard Limit): 92.87% 60.66 GiB
Max Size (High Water): 1:1 65.32 GiB
ARC Size Breakdown:
Recently Used Cache Size: 46.89% 31.40 GiB
Frequently Used Cache Size: 53.11% 35.57 GiB
ARC Hash Breakdown:
Elements Max: 159.31k
Elements Current: 97.44% 155.23k
Collisions: 11.76k
Chain Max: 2
Chains: 779
ARC Total accesses: 446.46m
Cache Hit Ratio: 99.29% 443.29m
Cache Miss Ratio: 0.71% 3.17m
Actual Hit Ratio: 99.29% 443.29m
Data Demand Efficiency: 99.28% 402.73m
CACHE HITS BY CACHE LIST:
Most Recently Used: 5.99% 26.57m
Most Frequently Used: 94.01% 416.71m
Most Recently Used Ghost: 0.00% 9.65k
Most Frequently Used Ghost: 0.28% 1.26m
CACHE HITS BY DATA TYPE:
Demand Data: 90.19% 399.81m
Prefetch Data: 0.00% 0
Demand Metadata: 9.81% 43.47m
Prefetch Metadata: 0.00% 1.82k
CACHE MISSES BY DATA TYPE:
Demand Data: 91.77% 2.91m
Prefetch Data: 0.00% 0
Demand Metadata: 7.85% 249.26k
Prefetch Metadata: 0.38% 12.12k
L2 ARC Summary: (HEALTHY)
Low Memory Aborts: 0
Free on Write: 3
R/W Clashes: 0
Bad Checksums: 0
IO Errors: 0
L2 ARC Size: (Adaptive) 458.07 GiB
Compressed: 99.60% 456.23 GiB
Header Size: 0.00% 5.34 MiB
L2 ARC Breakdown: 3.17m
Hit Ratio: 15.02% 476.70k
Miss Ratio: 84.98% 2.70m
Feeds: 55.31k
L2 ARC Writes:
Writes Sent: 100.00% 55.27k
ZFS Tunable:
metaslab_debug_load 0
zfs_multihost_interval 1000
zfs_vdev_default_ms_count 200
zfetch_max_streams 8
zfs_nopwrite_enabled 1
zfetch_min_sec_reap 2
zfs_dbgmsg_enable 1
zfs_dirty_data_max_max_percent 25
zfs_abd_scatter_enabled 1
zfs_remove_max_segment 16777216
zfs_deadman_ziotime_ms 300000
spa_load_verify_data 1
zfs_zevent_cols 80
zfs_obsolete_min_time_ms 500
zfs_dirty_data_max_percent 40
zfs_vdev_mirror_non_rotating_inc 0
zfs_resilver_disable_defer 0
zfs_sync_pass_dont_compress 8
zvol_volmode 1
l2arc_write_max 8388608
zfs_disable_ivset_guid_check 0
zfs_vdev_scrub_max_active 128
zfs_vdev_sync_write_min_active 64
zvol_prefetch_bytes 131072
zfs_send_unmodified_spill_blocks 1
metaslab_aliquot 524288
zfs_no_scrub_prefetch 0
zfs_abd_scatter_max_order 10
zfs_arc_shrink_shift 0
zfs_vdev_queue_depth_pct 1000
zfs_txg_history 100
zfs_vdev_removal_max_active 2
zil_maxblocksize 131072
metaslab_force_ganging 16777217
zfs_delay_scale 500000
zfs_free_bpobj_enabled 1
zfs_vdev_async_write_active_min_dirty_percent 30
metaslab_debug_unload 1
zfs_read_history 0
zfs_vdev_initializing_max_active 1
zvol_max_discard_blocks 16384
zfs_recover 0
zfs_scan_fill_weight 3
spa_load_print_vdev_tree 0
zfs_key_max_salt_uses 400000000
zfs_metaslab_segment_weight_enabled 1
zfs_dmu_offset_next_sync 0
l2arc_headroom 2
zfs_deadman_synctime_ms 600000
zfs_dirty_data_sync_percent 20
zfs_free_min_time_ms 1000
zfs_dirty_data_max 4294967296
zfs_vdev_async_read_min_active 64
dbuf_metadata_cache_max_bytes 314572800
zfs_mg_noalloc_threshold 0
zfs_dedup_prefetch 0
dbuf_cache_lowater_pct 10
zfs_slow_io_events_per_second 20
zfs_vdev_max_active 1000
l2arc_write_boost 8388608
zfs_resilver_min_time_ms 3000
zfs_max_missing_tvds 0
zfs_vdev_async_write_max_active 10
zvol_request_sync 0
zfs_async_block_max_blocks 100000
metaslab_df_max_search 16777216
zfs_prefetch_disable 1
metaslab_lba_weighting_enabled 1
zio_dva_throttle_enabled 1
metaslab_df_use_largest_segment 0
zfs_vdev_trim_max_active 2
zfs_unlink_suspend_progress 0
zfs_sync_taskq_batch_pct 75
zfs_arc_min_prescient_prefetch_ms 0
zfs_scan_max_ext_gap 2097152
zfs_initialize_value 16045690984833335022
zfs_mg_fragmentation_threshold 95
zil_nocacheflush 0
l2arc_feed_again 1
zfs_trim_metaslab_skip 0
zfs_zevent_console 0
zfs_immediate_write_sz 32768
zfs_condense_indirect_commit_entry_delay_ms 0
zfs_dbgmsg_maxsize 4194304
zfs_trim_extent_bytes_max 134217728
zfs_trim_extent_bytes_min 32768
zfs_user_indirect_is_special 1
zfs_lua_max_instrlimit 100000000
zfs_free_leak_on_eio 0
zfs_special_class_metadata_reserve_pct 25
zfs_deadman_enabled 1
dmu_object_alloc_chunk_shift 7
vdev_validate_skip 0
zfs_commit_timeout_pct 5
zfs_arc_meta_limit_percent 75
metaslab_bias_enabled 1
zfs_send_queue_length 16777216
zfs_arc_p_dampener_disable 1
zfs_object_mutex_size 64
zfs_metaslab_fragmentation_threshold 70
zfs_delete_blocks 20480
zfs_arc_dnode_limit_percent 10
zfs_no_scrub_io 0
zfs_dbuf_state_index 0
zio_deadman_log_all 0
zfs_vdev_sync_read_min_active 64
zfs_deadman_checktime_ms 60000
metaslab_fragmentation_factor_enabled 1
zfs_override_estimate_recordsize 0
zfs_multilist_num_sublists 0
zvol_inhibit_dev 0
zfs_scan_legacy 0
zfetch_max_distance 16777216
zap_iterate_prefetch 1
zfs_scan_strict_mem_lim 0
zfs_vdev_async_write_active_max_dirty_percent 60
zfs_scan_checkpoint_intval 7200
dmu_prefetch_max 134217728
zfs_recv_queue_length 16777216
zfs_vdev_mirror_rotating_seek_inc 5
dbuf_cache_shift 5
dbuf_metadata_cache_shift 6
zfs_condense_min_mapping_bytes 131072
zfs_vdev_cache_size 0
spa_config_path /etc/zfs/zpool.cache
zfs_dirty_data_max_max 4294967296
zfs_arc_lotsfree_percent 10
zfs_vdev_ms_count_limit 131072
zfs_zevent_len_max 1024
zfs_checksum_events_per_second 20
zfs_arc_sys_free 0
zfs_scan_issue_strategy 0
zfs_arc_meta_strategy 1
zfs_condense_max_obsolete_bytes 1073741824
zfs_vdev_cache_bshift 16
zfs_compressed_arc_enabled 1
zfs_arc_meta_adjust_restarts 4096
zfs_max_recordsize 16777216
zfs_vdev_scrub_min_active 48
zfs_zil_clean_taskq_maxalloc 1048576
zfs_lua_max_memlimit 104857600
zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3
zfs_per_txg_dirty_frees_percent 5
zfs_vdev_read_gap_limit 32768
zfs_scan_vdev_limit 4194304
zfs_zil_clean_taskq_minalloc 1024
zfs_multihost_history 0
zfs_scan_mem_lim_fact 20
zfs_arc_meta_limit 0
spa_load_verify_shift 4
zfs_vdev_sync_write_max_active 128
l2arc_norw 0
zfs_arc_meta_prune 10000
zfs_vdev_removal_min_active 1
metaslab_preload_enabled 1
dbuf_cache_max_bytes 629145600
zfs_vdev_mirror_non_rotating_seek_inc 1
zfs_spa_discard_memory_limit 16777216
zfs_vdev_initializing_min_active 1
zvol_major 230
zfs_vdev_aggregation_limit 1048576
zfs_flags 0
zfs_vdev_mirror_rotating_seek_offset 1048576
spa_asize_inflation 24
zfs_admin_snapshot 0
l2arc_feed_secs 1
vdev_removal_max_span 32768
zfs_trim_txg_batch 32
zfs_multihost_fail_intervals 10
zfs_abd_scatter_min_size 1536
zio_taskq_batch_pct 75
zfs_sync_pass_deferred_free 2
zfs_arc_min_prefetch_ms 0
zvol_threads 32
zfs_condense_indirect_vdevs_enable 1
zfs_arc_grow_retry 0
zfs_multihost_import_intervals 20
zfs_read_history_hits 0
zfs_vdev_min_ms_count 16
zfs_zil_clean_taskq_nthr_pct 100
zfs_vdev_async_write_min_active 2
zfs_vdev_async_read_max_active 128
zfs_vdev_aggregate_trim 0
zfs_delay_min_dirty_percent 60
zfs_vdev_cache_max 16384
zfs_removal_suspend_progress 0
zfs_vdev_trim_min_active 1
zfs_scan_mem_lim_soft_fact 20
ignore_hole_birth 1
spa_slop_shift 5
zfs_vdev_write_gap_limit 4096
dbuf_cache_hiwater_pct 10
spa_load_verify_metadata 1
l2arc_noprefetch 1
send_holes_without_birth_time 1
zfs_vdev_mirror_rotating_inc 0
zfs_arc_dnode_reduce_percent 10
zfs_arc_pc_percent 0
zfs_metaslab_switch_threshold 2
zfs_vdev_scheduler deadline
zil_slog_bulk 786432
zfs_expire_snapshot 300
zfs_sync_pass_rewrite 2
zil_replay_disable 0
zfs_nocacheflush 0
zfs_vdev_aggregation_limit_non_rotating 131072
zfs_arc_max 70132659200
zfs_arc_min 65132659200
zfs_read_chunk_size 1048576
zfs_txg_timeout 5
zfs_trim_queue_limit 10
zfs_arc_dnode_limit 0
zfs_scan_ignore_errors 0
zfs_pd_bytes_max 52428800
zfs_scrub_min_time_ms 1000
l2arc_headroom_boost 200
zfs_send_corrupt_data 0
l2arc_feed_min_ms 200
zfs_arc_meta_min 0
zfs_arc_average_blocksize 8192
zfetch_array_rd_sz 1048576
zfs_autoimport_disable 1
zio_slow_io_ms 30000
zfs_arc_p_min_shift 0
zio_requeue_io_start_cut_in_line 1
zfs_removal_ignore_errors 0
zfs_scan_suspend_progress 0
zfs_vdev_sync_read_max_active 128
zfs_deadman_failmode wait
zfs_reconstruct_indirect_combinations_max 4096
zfs_ddt_data_is_special 1