Hans Malherbe's questions
Where is the option to configure port mappings in the new Microsoft Azure portal (http://portal.azure.com)?
I integrated Openfiler with Active Directory. I configured an SMB/CIFS share as Controlled Access and set domain admins = PG and domain users = RO. This should give domain users read-only access to the share.
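For reference, a hand-written Samba share with that intent (both domain groups allowed to connect, domain admins read/write, domain users read-only) would look roughly like the snippet below. The share path, the DOMAIN name and the "+" winbind separator are assumptions, and Openfiler generates its own smb.conf, so this only illustrates the effect the GUI settings should have:

[raided.main.iso]
    # hypothetical path; Openfiler manages the real one itself
    path = /mnt/raided/main/iso
    # "Controlled Access": only these AD groups may connect at all
    valid users = @"DOMAIN+Domain Admins", @"DOMAIN+Domain Users"
    # domain admins get read/write, domain users read-only
    write list = @"DOMAIN+Domain Admins"
    read list = @"DOMAIN+Domain Users"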
When I open the share from a Vista machine on the domain, everything works.
When I try to open the share from a Vista machine that is not on the domain, I get the login prompt as expected, but no matter what I enter, I get the following errors:
\\192.168.1.51\raided.main.iso is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions.
Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again.
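The second message is Windows' standard complaint about an existing session to the same server under different credentials, so it is worth listing and clearing any cached connections on the non-domain client before retrying; this is plain net use, nothing Openfiler-specific:

rem list current connections and look for anything pointing at 192.168.1.51
net use
rem drop the cached connection to that share, or use * to drop them all
net use \\192.168.1.51\raided.main.iso /delete
net use * /delete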
When I configure the share as Public guest access, it works both ways. Both of these machines are on the same network.
What gives?
We have three VMware ESXi 4 hosts serving VMs from an OpenFiler NFS share. Every host has a direct gigabit connection to the NAS. Read performance has been great, but write performance inside the VM guests is suffering.
The recommended configuration for data integrity is to export NFS shares with the sync option and mount ext3 with data=journal.
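For reference, the integrity-oriented counterpart using the same paths as the performance configuration below would be the following export line (in /etc/exports) and fstab line; only async/sync and the data= mode differ, everything else is copied unchanged:

/mnt/raided/main/vm 10.0.0.0/255.255.0.0(rw,anonuid=96,anongid=96,secure,root_squash,wdelay,sync)
/dev/raided/main /mnt/raided/main ext3 defaults,usrquota,grpquota,acl,user_xattr,data=journal,noatime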
I'd like to compare the behaviour of the maximum integrity configuration with the maximum I/O performance configuration. To configure for performance I exported the NFS share as
/mnt/raided/main/vm 10.0.0.0/255.255.0.0(rw,anonuid=96,anongid=96,secure,root_squash,wdelay,async)
while ext3 is mounted with
/dev/raided/main /mnt/raided/main ext3 defaults,usrquota,grpquota,acl,user_xattr,data=writeback,noatime
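A crude way to compare the two setups from inside a guest is a sequential write that dd forces to disk with fdatasync, so the page cache does not flatter the result; this is only a rough check, and a benchmark such as iozone or bonnie++ gives a fuller picture:

# write a 1 GiB test file and include the final flush in the timing
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm /tmp/ddtest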
Will these configuration options give me optimal I/O performance? How about changing the file system? Will XFS improve performance significantly?
Other than the NAS crashing or power failures, what can cause data integrity issues with this configuration?
mdadm does not seem to support growing an array from level 1 to level 10.
I have two disks in RAID 1. I want to add two new disks and convert the array to a four disk RAID 10 array.
My current strategy (sketched with mdadm commands after the list):
1. Make a good backup.
2. Create a degraded 4-disk RAID 10 array with two missing disks.
3. rsync the RAID 1 array to the RAID 10 array.
4. Fail and remove one disk from the RAID 1 array.
5. Add the available disk to the RAID 10 array and wait for the resync to complete.
6. Destroy the RAID 1 array and add the last disk to the RAID 10 array.
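A rough mdadm sketch of steps 2 to 6, assuming the existing RAID 1 is /dev/md0 on sda1 and sdb1, the new disks are sdc1 and sdd1, and the new array becomes /dev/md1 (all device names hypothetical):

# step 2: degraded RAID 10, leaving one member of each mirror pair missing
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc1 missing /dev/sdd1 missing
# step 3: copy the data across (filesystem creation and mount points omitted)
rsync -aH /mnt/raid1/ /mnt/raid10/
# step 4: fail and remove one disk from the RAID 1 array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# step 5: add it to the RAID 10 array and wait for the resync
mdadm /dev/md1 --add /dev/sdb1
watch cat /proc/mdstat
# step 6: stop the RAID 1 array and add its remaining disk
mdadm --stop /dev/md0
mdadm /dev/md1 --add /dev/sda1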
The problem is the lack of redundancy at step 5.
Is there a better way?
We plan to run our test environment inside ESXi 4 hosts on a couple of Core i7 920s with X58 motherboards.
The hardware is decidedly not on the HCL, which also means we forfeit VMware support. No biggie for a test environment that has already been running two whitebox ESXi 3.5 hosts for almost a year without problems.
We don't need the onboard NICs or VMFS on SATA, although they would be a bonus. We just need to get ESXi 4 installed and load drivers for some dual-port Intel PT adapters. The PT adapters are on the HCL.
If anyone has this working on Core i7 (or not), I would be very interested.