I am by no means a Windows person, but due to circumstances I have to configure a Sikuli test suite on Windows Server 2016. The problem I have is that I installed TightVNC on it, and when I connect to the running VNC server on that Windows Server 2016 instance from a Mac VNC viewer, I get the login prompt, and then after entering the password the screen goes black and just stays black. RDP does work. Is there anything I am missing in the setup? Thank you.
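For context, this is roughly how the VNC server can be checked on the Windows side to confirm it is at least running and listening; it assumes TightVNC's default service name (tvnserver) and the default VNC port (5900), so adjust if your install differs:
:: check that the TightVNC server service is running (default service name assumed)
sc query tvnserver
:: check that something is listening on the default VNC port (5900 assumed)
netstat -an | findstr 5900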
Hello fellow engineers.
I have an ESXi 5.0 cluster set up with 3 ESXi hosts. I now need to create a test case for networking hardware failure and perform the test in the datacenter.
My Setup:
1) 3 Dell R820 servers (all identical in configuration and hardware)
2) PHYSICAL: Pair of 1 Gb ports for the vSphere Management Network (active/standby)
VIRTUAL: 1 VMkernel port vmk0 on standard vSwitch0
3) PHYSICAL: Pair of 10 Gb ports for regular network communication between guests, MESH (active/active using IP Hash load balancing, connected to the redundant switches)
VIRTUAL: dvSwitch0 with the exposed and needed VLANs.
4) PHYSICAL: Pair of 10 Gb ports for storage NFS/VMDK (active/passive, Failover Only with "Link Status Only" network failure detection, connected to different switches)
VIRTUAL: 1 VMkernel port vmk1 connected to distributed switch dvSwitch01
5) PHYSICAL: Pair of 10 Gb ports for storage (guest-initiated) (active/active, load balancing based on Port ID, with "Link Status Only" network failure detection, connected to different switches)
HA and DRS are enabled.
I was planning to just do a regular pull-the-cable test, but I might be missing some factors. I would appreciate any suggestions and/or best practices for performing such a test.
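As an alternative to physically pulling cables, a software link-down from the ESXi shell could exercise the same failover paths. This is only a sketch: esxcli network nic down/up may not be available on every 5.x build, and vmnic2 is a placeholder uplink name:
# list uplinks and their current link state
esxcli network nic list
# take one uplink of a redundant pair offline (vmnic2 is a placeholder)
esxcli network nic down -n vmnic2
# bring it back once failover has been observed
esxcli network nic up -n vmnic2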
I installed the Dell OpenManage 7.3 VIB on an ESXi 5.1 host. I assumed the VIB would load all Dell-specific MIBs into the OS. I enabled snmpd on that host as well, but when I do an "snmpwalk" or "snmpget" I do not get information for the Dell-specific OIDs.
The source of the VIB:
The output I get:
snmpget -v2c -c public myesxi.domain.com 1.3.6.1.4.1.674.10892.1.300.10.1.8.1
SNMPv2-SMI::enterprises.674.10892.1.300.10.1.8.1 = No Such Object available on this agent at this OID
I do get OIDs from the VMware stack, but not the Dell ones. Eventually I want to use the Nagios plugin "check_openmanage", but it gives me an error because it cannot query the Dell OIDs.
Am I missing something?
EDIT: I see the package is installed:
# esxcli software vib list | grep -i "OpenManage"
OpenManage 7.3-0000 Dell PartnerSupported 2013-08-21
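For completeness, this is roughly how the host's SNMP agent can be inspected and (re)enabled from esxcli on 5.1; "public" is just the community string used in the snmpget above:
# show the current SNMP agent settings
esxcli system snmp get
# enable the agent with the community string used above ("public" assumed)
esxcli system snmp set --communities public --enable true
# restart the agent so the change takes effect
/etc/init.d/snmpd restart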
We use Nagios for monitoring. Is there a way to create hardware checks using SNMP MIBs for R820 servers running ESXi 5.x? Right now we are using this Python plugin:
But we can no longer use it due to security policies within the org. We are satisfied with the output of the current plugin, so it would be great if we could get a similar agentless check using SNMP. Thanks.
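To illustrate the kind of agentless check we have in mind, a walk of the Dell OpenManage enterprise subtree would be enough, assuming the Dell OIDs are actually exposed by the host; the hostname and community string below are placeholders:
# walk the Dell (enterprise 674) OpenManage subtree; host and community are placeholders
snmpwalk -v2c -c public myesxi.domain.com 1.3.6.1.4.1.674.10892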
I have an issue with 2 new ESXi hosts. OS: ESXi 5.0 Custom Dell ISO, patched up to build 1024429.
Dell R820s with a 1 Gb network: 2 ports from the integrated motherboard NIC (Intel) connected to 2 Juniper switches (one active, one standby) lose connectivity intermittently.
When they lose connectivity I can ping neighbor hosts on the same subnet connected to the same switch, but pinging the default gateway returns:
"sendto() failed (Host is down)"
From esxcli I see nothing in the ARP table for the default gateway IP when I issue the following command while connectivity is lost:
esxcli network ip neighbor list
Tcpdump from the ESXi shell shows that the host constantly sends ARP broadcasts for the default gateway IP but never gets a response. I actually have 6 new servers and only 2 of them have this problem: same hardware and same ESXi settings. These connections are for the 1 Gb management network.
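For reference, the capture can be reproduced on the management VMkernel interface with the built-in tcpdump-uw; vmk0 is assumed to be the management port here:
# capture only ARP traffic on the management vmkernel port (vmk0 assumed)
tcpdump-uw -i vmk0 arp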
Does anyone have any idea what could be wrong?