I have a question about using HybridFox to manage my OpenStack cloud. I have successfully installed OpenStack and have AMIs up and running via the command-line euca2ools. Can I use HybridFox or ElasticFox to manage my AMIs? I am assuming that OpenStack is EC2-compatible and simply uses REST calls to interface.
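Since euca2ools already works from the command line, the same eucarc credentials should be everything a browser extension like HybridFox needs: the EC2 endpoint URL plus the access/secret key pair. A minimal sketch of pulling those three values out of the file (the contents below are a made-up example, not real keys, and the path is hypothetical):

```shell
# Hypothetical eucarc in the shape the cloud controller hands out:
cat > /tmp/eucarc <<'EOF'
export EC2_URL=http://192.168.1.1:8773/services/Cloud
export EC2_ACCESS_KEY='AKIAEXAMPLE'
export EC2_SECRET_KEY='examplesecret'
EOF

# The extension's credential/endpoint settings want exactly these three values:
grep -E 'EC2_(URL|ACCESS_KEY|SECRET_KEY)' /tmp/eucarc
```

If `euca-describe-images` works against that same `EC2_URL`, the EC2-compatibility layer is up and a Firefox extension speaking the EC2 API should in principle be able to use it too.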
Jeremy Hajek's questions
My department is implementing an Ubuntu Enterprise Cloud. I have done the testing and am quite comfortable with the four pieces: CC/SC, CLC, WS, NC. Looking at the various documents below, it appears that the Storage Controller and Cluster Controller (eucalyptus-sc and eucalyptus-cc) are always installed on the same system. My question is this: can I install the storage controller and the cluster controller on separate systems?
- http://open.eucalyptus.com/wiki/EucalyptusAdvanced_v2.0 : the picture indicates that the CC and SC are two different machines
- http://www.canonical.com/sites/default/files/active/Whitepaper-UbuntuEnterpriseCloudArchitecture-v1.pdf : p. 10, first paragraph, uses the word "machine(s)"
- http://software.intel.com/file/31966 : p. 8 indicates the same separate architecture
- BUT... https://help.ubuntu.com/community/UEC/PackageInstallSeparate indicates that the SC and CC are to be on the same system.
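For what it's worth, the UEC components ship as separate packages, so a fully split layout is at least expressible at install time. A sketch of what I have in mind, assuming the components can reach each other over the network (the host roles here are placeholders, not a tested topology):

```shell
# Hypothetical five-way split; each line runs on a different box.
# front-end host:   sudo apt-get install eucalyptus-cloud    # CLC
# walrus host:      sudo apt-get install eucalyptus-walrus   # WS
# cluster head:     sudo apt-get install eucalyptus-cc       # CC
# storage host:     sudo apt-get install eucalyptus-sc       # SC on its own machine
# compute node(s):  sudo apt-get install eucalyptus-nc       # NC
```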
I have a question about a non-optimal setup and its practical implications. Ideally you would place the ESXi server in the same room as the FreeNAS whitebox.
My situation is this: I have a run of ~125 ft of Cat 5e connecting an ESXi server to a FreeNAS whitebox in the server room. I know the cable run is within the maximum distance for Ethernet, but I have two questions...
- Can Cat 5e support gigabit speeds at that distance if the switch on the back end is a Linksys SRW2048?
- Should I be concerned about the distance causing data read and write timeouts in the SCSI portion (the disk operations of ESXi)?
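On the pure distance question, the arithmetic can be checked directly: 1000BASE-T over Cat 5e is specified for a 100 m channel, and 125 ft is well inside that. A quick sketch of the conversion:

```shell
# Cat 5e / 1000BASE-T channel limit is 100 m (~328 ft); check the 125 ft run.
ft=125
awk -v ft="$ft" 'BEGIN {
  m = ft * 0.3048                       # feet to meters
  printf "%.1f m of 100 m budget: %s\n", m, (m <= 100 ? "OK" : "too long")
}'
```

So at the physical layer the run is fine; any latency it adds is sub-microsecond, orders of magnitude below iSCSI timeout thresholds, so distance alone should not cause disk-operation timeouts.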
I am using an HP DL160 G6 server that, according to the specs, takes PC3 Registered or Unbuffered memory. When I combine the two types of memory below, the system will not POST. When I use just the first type of memory listed, the system will POST.
I have two pieces of HP memory that came with the server labeled
PC3-10600E-9-10-E0
and then I have some Crucial memory labeled
PC3-10600R-9-10-B0
I wager that the R means Registered memory and the E means ECC; if so, shouldn't the Crucial memory boot with the system according to the HP specs? Or does the E mean it is Unbuffered, and therefore I shouldn't mix and match, as this HP memory configuration doc suggests?
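For reference, in the JEDEC module naming the letter after the speed grade identifies the module type: R is registered, E is unbuffered ECC, and U is unbuffered non-ECC. So the two sticks really are different classes, and registered and unbuffered DIMMs generally cannot be mixed in one system. A small sketch decoding the two labels above:

```shell
# Decode the type letter in a JEDEC DDR3 DIMM label (letter after the speed grade).
decode() {
  case "$1" in
    *10600R*) echo "$1: Registered (RDIMM)" ;;
    *10600E*) echo "$1: Unbuffered ECC" ;;
    *10600U*) echo "$1: Unbuffered non-ECC" ;;
    *)        echo "$1: unknown" ;;
  esac
}
decode PC3-10600E-9-10-E0   # the HP sticks
decode PC3-10600R-9-10-B0   # the Crucial sticks
```

That would explain the POST failure when both kinds are installed together.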
I am using FreeNAS 0.7.1 (FreeBSD 7.2-RELEASE-p6). Recently I have been getting a series of these error messages on the console. I am using mirrored ZFS pools on the FreeNAS system, and they report as healthy and in good status.
I think this error message comes from the underlying FreeBSD. The ZFS pools are iSCSI targets for a VMware installation over a gigabit network. Does the delta_t they refer to indicate a potential timeout for packets over iSCSI? Has anyone experienced this error message? I have attached an image below.
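A couple of checks on the FreeNAS box itself might narrow down whether ZFS or the iSCSI/network layer is the one complaining; a sketch using standard FreeBSD tools (the log path is assumed to be the default):

```shell
zpool status -x                       # prints "all pools are healthy" when ZFS is happy
dmesg | tail -n 50                    # recent console/kernel messages for context
grep -i 'delta_t' /var/log/messages   # how often the message fires, and when
```

Correlating the timestamps of the messages with VMware I/O load would help confirm or rule out an iSCSI timing connection.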
I have a LaCie Ethernet Disk RAID NAS box. I configured the four disks in a RAID 10 configuration. The LaCie unit will not let me make CIFS connections to the box, stating "Shared Folders: Not Ready", but I can see that the data is still there. I have gone down the troubleshooting route of resetting the system. Now I am in data recovery mode.
If I take two disks (one mirrored stripe) out of the LaCie box, assuming the data is intact, how can I go about recovering the data since it is part of a stripe? Could I connect the two drives to a Windows system? Or a Linux system? Would they be seen as a stripe, or as individual drives?
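If the LaCie firmware uses Linux md software RAID internally (common in NAS boxes of this class, but an assumption worth verifying before buying recovery software), the pulled disks carry md metadata that any Linux machine can read directly; Windows would just see them as uninitialized disks. A sketch, with the device names hypothetical:

```shell
# /dev/sdb and /dev/sdc are placeholders for the two pulled drives.
sudo mdadm --examine /dev/sdb /dev/sdc     # RAID level, array UUID, each disk's role
sudo mdadm --assemble --scan --readonly    # try to assemble what it finds, read-only
```

One caveat on RAID 10: two disks from the *same* mirror pair hold only half of the stripe, so full recovery needs one working member from each mirror pair, not just one intact pair.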