Does anyone know what it means to have billing for EC2 instances that are labeled No Region?
Elastic Compute Cloud
-$8,888.50
No Region
No Instance Type
How do you add an existing key to a live EC2 instance that has no key pair?
I have tried using Session Manager to edit /.ssh/authorized_keys with vi
and add the public key of the pair, but I get this error:
"~/.ssh/authorized_keys"
"~/.ssh/authorized_keys" E212: Can't open file for writing
Is this something that has to be done using the console?
No associated key pair
This instance is not associated with a key pair. Without a key pair, you can't connect to the instance through SSH.
You can connect using EC2 Instance Connect with just a valid username. You can connect using Session Manager if you have been granted the necessary permissions.
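The E212 error is usually a path and permissions problem rather than something that needs the console: `/.ssh/authorized_keys` points at the filesystem root, and a Session Manager shell runs as `ssm-user`, who can't write into another user's home without sudo. A minimal sketch, assuming the SSH login user is `ec2-user` (adjust for your AMI's default user) and using a placeholder key:

```shell
# Sketch: run inside a Session Manager shell.
# PUBKEY and the ec2-user home path are assumptions - substitute your own.
PUBKEY="ssh-ed25519 AAAA... user@laptop"   # your public key (placeholder)

sudo mkdir -p /home/ec2-user/.ssh
echo "$PUBKEY" | sudo tee -a /home/ec2-user/.ssh/authorized_keys > /dev/null
sudo chown -R ec2-user:ec2-user /home/ec2-user/.ssh
sudo chmod 700 /home/ec2-user/.ssh
sudo chmod 600 /home/ec2-user/.ssh/authorized_keys
```

After this, `ssh -i <private_key> ec2-user@<public-ip>` should work; the console message only means no key pair was attached at launch, not that one can't be added afterwards.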
I realized that I only had one nameserver configured for an EC2 web server I set up last year, as the other elastic IP never got associated. Everything seems to work, though.
Is it necessary to have a working second nameserver for an EC2 web server?
Do you need separate elastic IP addresses for each nameserver - say NS1, NS2 - for an EC2 webserver?
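For context, a sketch of checking what the parent zone actually delegates. Registries generally expect at least two NS records, and RFC 2182 recommends secondaries on separate networks, so two names that both resolve to the same box (or one with no working address, like a never-associated elastic IP) give little real redundancy even when resolution appears to work. `yourdomain.com` is a placeholder:

```shell
# Sketch: list the nameservers delegated for a domain and resolve each -
# an NS name without a working address is dead weight in the delegation.
dig +short NS yourdomain.com @a.gtld-servers.net   # the .com registry's view
for ns in $(dig +short NS yourdomain.com); do
    echo "$ns -> $(dig +short A "$ns")"
done
```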
I tried
yum update python
and
yum upgrade python
both finished with the line: "No Packages marked for Update"
I currently have
Python 2.4.3 (#1, Sep 3 2009, 15:37:37) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
Apparently, the latest version of Python is already 3.x... what's the best way to upgrade?
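Python 2.4.3 puts this on CentOS 5, and yum will never jump major Python versions there, because the distribution itself (yum included) depends on the system `/usr/bin/python` staying at the vendor version. A sketch of the usual alternative: build a newer Python alongside the old one with `make altinstall`, which never overwrites the system binary. The version string is a placeholder; pick a current 3.x release:

```shell
# Sketch: install a newer Python alongside the system one, not over it.
# PY_VER is a placeholder - substitute a real 3.x version number.
PY_VER="3.x.y"
curl -O "https://www.python.org/ftp/python/${PY_VER}/Python-${PY_VER}.tgz"
tar xzf "Python-${PY_VER}.tgz"
cd "Python-${PY_VER}"
./configure --prefix=/usr/local
make
sudo make altinstall   # installs e.g. python3.x; /usr/bin/python is untouched
```

Scripts that need the new interpreter then call the versioned binary explicitly, while yum and the OS keep using 2.4.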
When trying to run yum, I get the following error:
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
I've manually killed each yum process ID, but the error keeps recurring ("The other application is: yum"), although the timestamps get more recent (it used to say 3 days ago, then 1 day ago).
Any idea what's wrong?
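A recurring lock whose timestamp keeps resetting usually means something is re-spawning yum (on CentOS 5, typically the yum-updatesd daemon or a nightly cron job) rather than one stuck process. A sketch for telling a live lock holder from a stale lock file, assuming the stock lock path:

```shell
# Sketch: is the yum lock held by a live process, or is the lock file stale?
# Assumes the stock lock file /var/run/yum.pid.
if [ -f /var/run/yum.pid ]; then
    pid=$(cat /var/run/yum.pid)
    if kill -0 "$pid" 2>/dev/null; then
        echo "lock held by live PID $pid:"
        ps -p "$pid" -o pid,etime,cmd
    else
        echo "stale lock; PID $pid is gone - safe to: sudo rm -f /var/run/yum.pid"
    fi
else
    echo "no lock file present"
fi
```

If a fresh yum keeps reappearing after you clear it, check `service yum-updatesd status` and `/etc/cron.daily/` for an automatic updater and disable it.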
Title says it all: I've migrated to a new dedicated server at a different IP. It's been more than 48 hours since the DNS change. Some MySQL inserts (from standard PHP scripts invoking MySQL) on at least one domain this server hosts still show up on the old server rather than the new one. What's happening?
Here's one domain to check - http://inacentaur.com
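A sketch for checking whether stale DNS is still steering some clients to the old box. Caches honor the TTL that was in effect before the change, which can exceed 48 hours if it was set high before the move:

```shell
# Sketch: compare what different resolvers return for the moved domain.
# Mismatches mean old records are still cached somewhere.
dig +short A inacentaur.com              # your local resolver's view
dig +short A inacentaur.com @8.8.8.8     # a public resolver's cached view
```

Also worth checking: any script or cron job running on the old box that connects to `localhost` will keep writing to the old database no matter what DNS says.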
After moving an old domain name from an old server to a new one (completely different IP and provider), I ran a dig and found that the listed nameserver does not match the one in the whois: the whois shows the new nameserver, but the dig shows the old one. Which is more accurate?
Also, the dig shows an additional FQDN that I've never seen before. The {bracketed} values have been replaced with generic identifiers.
@root: dig +nocmd {domain.com} any +multiline +noall +answer
{domain.com.} 14400 IN A {IP address}
{domain.com.} 86400 IN SOA {old name server.} {unknown, never-been-referenced domain} (
2010062900 ; serial
86400 ; refresh (1 day)
7200 ; retry (2 hours)
3600000 ; expire (5 weeks 6 days 16 hours)
86400 ; minimum (1 day)
)
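One note on the never-seen FQDN: the second name in an SOA record isn't a nameserver at all. It's the RNAME, the zone contact's email address with the `@` encoded as the first dot (and whois vs dig isn't a question of accuracy: whois shows what the registry has on file going forward, while dig shows what resolvers are still serving until cached records expire). A sketch of decoding an RNAME, with a placeholder value:

```shell
# Sketch: decode an SOA RNAME back into an email address.
# The first '.' stands in for '@'; the trailing '.' marks the DNS root.
rname="hostmaster.domain.com."   # example RNAME from a dig answer
email=$(echo "$rname" | sed 's/\.$//; s/\./@/')
echo "$email"   # -> hostmaster@domain.com
```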
After migrating between dedicated servers at completely different providers, I had to update nameservers on hundreds of domains across different registrars.
I notice that GoDaddy DNS updates typically complete flawlessly within 2 hours. Netfirms, so far, is jumping back and forth between the old nameservers and the new, but it's still within the 24-48 hour window, so I am hoping it will stabilize.
I am wondering if such a discrepancy among domain registrars is normal, or if there might be something I've misconfigured?
Migrated from Server A to Server B at different providers, with different IPs for hostname, nameservers, etc. I've updated the nameservers for a number of domains, and they seem to be propagating and resolving for a short period of time; then, later, it looks like some of the domains have "jumped" back to the old nameservers...
Any idea what might be causing this? Solutions?
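The "jumping back" is usually caches at different resolvers expiring at different times, not anything flipping at the registry. A sketch for separating the two views (`domain.com` is a placeholder for an affected domain):

```shell
# Sketch: compare the registry's delegation with what caches are serving.
registry=$(dig +short NS domain.com @a.gtld-servers.net | sort)
cached=$(dig +short NS domain.com @8.8.8.8 | sort)
if [ "$registry" = "$cached" ]; then
    echo "cache matches the registry delegation"
else
    echo "resolver is still serving old NS records - wait out the TTL"
fi
```

If the registry answer itself alternates between old and new, that is a real misconfiguration at the registrar; if only cached answers flip, it resolves itself once the old TTLs run out.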
For some reason, traceroute to hostname (and any domains hosted on this server) always times out on hop 5, right between the two shown below:
76.246.22.2
(Request timed out.)
ggr7.sffca.ip.att.net [12.122.114.17]
The tracert eventually completes at hop 16. The addresses are all unique, and the timeout only occurs on hop 5. After the att.net hop, the hops are all long hostnames at above.net, before resolving to another IP and then, finally, the hostname.
1) What does this timeout affect?
2) Is there anything I can do about this, or is it something I've misconfigured? Dedicated CentOS/WHM server.
(The other IPs assigned to the same machine work without this timeout...)
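A single starred hop while later hops answer normally almost always means that router drops or rate-limits the ICMP time-exceeded replies it generates about itself; your actual traffic passes through it fine, so there is nothing to fix on your end. A sketch for confirming from a saved traceroute report (`trace.txt` is a hypothetical filename):

```shell
# Sketch: flag hops that never answered in a saved traceroute report.
# Loss only matters if it starts at some hop and persists all the way to
# the final destination; an isolated silent hop mid-path is cosmetic.
awk '/\*/ { printf "hop %s gave no reply\n", $1 }' trace.txt
```

A tool like mtr (`mtr --report <host>`) shows per-hop loss over many probes and makes the same pattern obvious.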
Are MySQL databases always stored in /var on CentOS?
Specifically: if a new CentOS/cPanel server that needs to support MySQL doesn't have a separate /var partition, but only the ones listed below, does that mean MySQL data is somehow being stored in a temporary dump somewhere?
/dev/sda3 /
tmpfs /dev/shm
/dev/sda1 /boot
/usr/tmpDSK /tmp
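MySQL stores data wherever its `datadir` setting points (default `/var/lib/mysql`); nothing requires a separate `/var` partition. With the mount table above, that path simply lives on the root filesystem (`/dev/sda3`) as ordinary persistent storage, not a temporary dump. A sketch for confirming:

```shell
# Sketch: confirm where MySQL actually keeps its data.
mysql -e "SHOW VARIABLES LIKE 'datadir';"   # ask the running server
grep -i '^datadir' /etc/my.cnf              # unset means the built-in default
df -h /var/lib/mysql                        # which mount that path is on
```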
Is there a way to know for sure that your remotely hosted server is actually a dedicated machine, and not just a virtual one "faking" being dedicated? What are some possible shell commands to run to test this?
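No single command is conclusive, but a VM is hard to hide from all of these at once. A sketch (hardware strings vary by hypervisor):

```shell
# Sketch: common fingerprints of a virtual machine.
grep -q hypervisor /proc/cpuinfo && echo "CPU flags report a hypervisor"
sudo dmidecode -s system-product-name   # "KVM", "VMware Virtual Platform", ...
lspci 2>/dev/null | grep -iE 'vmware|virtio|xen|virtualbox'  # paravirtual devices
# The virt-what package bundles checks like these:
# sudo virt-what    # prints nothing when no virtualization is detected
```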
I am trying to migrate from my old server (Server A) at provider 1 to a new server (Server B) at provider 2, keeping the process as seamless as possible.
One of the first things I noticed in the test folder I migrated is that several PHP functions are not supported on Server B - apache_request_headers(), for example. This is supposedly because PHP was not compiled as an Apache module on Server B. There might be other differences that could cause fatal script errors that I haven't yet found. Both servers run CentOS with WHM. Is there a way to configure the new server to be exactly the same as the old one, without this ad hoc checking?
PHP, WHM, and several other services are already installed on a CentOS x64 server I am trying to migrate data to. Many of my existing PHP scripts depend on PHP's apache_request_headers() function, which the current server's PHP configuration does not support. Apparently, compiling PHP as an Apache module is one solution, but are there other ways to enable it (without uninstalling and reinstalling PHP and all dependent services), perhaps something as easy as modifying php.ini?
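Whether `apache_request_headers()` exists depends on the SAPI PHP runs under, not on php.ini: it is provided by the Apache module SAPI, so no ini switch turns it on under CGI/FastCGI on older PHP builds (later releases added a FastCGI version). A sketch for comparing the two servers, assuming a `php` CLI is on the path; note the CLI SAPI can differ from the web one, so confirm with a phpinfo() page served through Apache as well:

```shell
# Sketch: report the SAPI in use and whether the function is available.
php -r 'echo "SAPI: ", php_sapi_name(), "\n";'
php -r 'var_export(function_exists("apache_request_headers")); echo "\n";'
```

If recompiling is off the table, the usual workaround is a small userland fallback that rebuilds the headers from the `HTTP_*` keys in `$_SERVER`, which exist under every SAPI.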
What's the best way to check for HDD errors and early signs of failure on CentOS?
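A common approach on CentOS is smartmontools (`yum install smartmontools`), which reads the drive's own SMART counters. A sketch, assuming the disk is `/dev/sda`:

```shell
# Sketch: SMART health and the attributes that most often predict failure.
sudo smartctl -H /dev/sda   # overall PASSED/FAILED self-assessment
sudo smartctl -A /dev/sda | awk '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {print $2, "raw =", $NF}'
sudo smartctl -t short /dev/sda   # queue a self-test; read results with: smartctl -l selftest /dev/sda
```

Rising raw values on those three attributes are the classic early warning; smartd (same package) can watch them continuously and email on changes.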
Is there a way to migrate from an old dedicated server to a new one without losing any data in between, and with no downtime? In the past, I've lost MySQL data between the time when the new server goes up (i.e., all files transferred, system up and ready) and when I take the old server down (data is still being written to the old one until the new one takes over). There is also a short period where both are down while DNS, etc., refreshes.
Is there a way for MySQL/root to easily transfer all data that was updated/inserted between a certain time frame?
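The standard zero-loss answer to both questions is MySQL replication: seed the new server from a dump that records the binary-log position, let it replay everything the old server writes, then flip DNS and stop replication once traffic drains. No write window is lost and neither box needs downtime. A sketch (binlog file names and dates are placeholders):

```shell
# Sketch: replication-based cutover with no lost writes.
# 1. On the old server, enable binary logging in my.cnf:
#      [mysqld]
#      log-bin=mysql-bin
#      server-id=1
# 2. Seed the new server with a dump that notes the binlog coordinates:
mysqldump --all-databases --single-transaction --master-data=2 > seed.sql
# 3. Load seed.sql on the new server, run CHANGE MASTER TO with those
#    coordinates, start replication, and let it catch up; flip DNS when ready.

# For the narrower question - replaying only writes from a time window:
mysqlbinlog --start-datetime="2010-07-01 00:00:00" \
            --stop-datetime="2010-07-02 00:00:00" mysql-bin.000001 | mysql
```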