Like most companies, we have several environments: development, integration, staging, and production. We would like to keep the OS on these environments up to date, but in stages: run yum update on development first, then a couple of days later apply the exact list of packages that were updated there to integration, a couple of days after that to staging, and finally to production. The purpose is to avoid a new update popping up mid-process and reaching production before it has been through the earlier environments. As far as I have googled, there is nothing like 'yum update as of this timestamp'. Do you know of a way to handle this? I'm asking because we have to deal with mission-critical environments. Thanks.
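To make it concrete, this is roughly the workflow I have in mind; a rough sketch only, assuming a RHEL/CentOS-style box with plain yum (the file names are examples, and I have not verified how yum update-to copes if the repositories have moved on in the meantime):

    # 1. On the development box, before updating, record the installed packages:
    rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > before.txt

    # 2. Update development as usual:
    yum -y update

    # 3. Snapshot again and diff to get the exact set of new package versions:
    rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort > after.txt
    comm -13 before.txt after.txt > update-set.txt

    # 4. Days later, on the next environment, update only to those versions:
    yum -y update-to $(cat update-set.txt)

A local snapshot repository (reposync plus createrepo, refreshed only when development is updated) would presumably make this more robust, since old package versions can disappear from public mirrors between the development run and the production one.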
We bought some software from a smallish company; it's a 32-bit Windows video content workflow manager, with some customisation done by them.
We've been running this code fine for over a year in a VMware ESXi 4.1u2 VM on W2K3 EE 32-bit (that's what they support running it on).
Then they updated their code a month or so back and we started seeing one of the vCPUs periodically pegging at 100% while the second vCPU stays fairly idle, say 5-7%. We just assumed the code was badly threaded and contacted them about it.
They've now come back to us saying that their code doesn't work in a VM, that they've known about this requirement for 18 months or so, and that they want us to V2P it. They say they only see this problem when it's run inside VMs. I have a call with their senior programmer scheduled in a few hours to discuss it.
Luckily we have a few physical servers we can do this on; it's a bit time-consuming but doable.
My question, however: given that this VM doesn't touch any hardware directly, sits on a very modern host, and actually has very low requirements (2 vCPUs, 4 GB RAM, 20 GB boot vdisk, 100 GB data vdisk, a single vNIC and nothing else), what could possibly be the issue with running it in a VM, if there is one?
Obviously I'm pursuing this strongly with them, but I just wondered whether anyone else has found a regular application that somehow misbehaves inside a VM but not on physical hardware.
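For what it's worth, before the call I want to rule out host-side CPU contention, along these lines; this assumes SSH access to the ESXi host and only uses standard esxtop batch mode:

    # Capture CPU scheduling stats from the host (batch mode, 5 s interval,
    # 12 samples) while the vCPU is pegged:
    esxtop -b -d 5 -n 12 > cpu-stats.csv
    # High %RDY (vCPU waiting for a physical core) or %CSTP (co-scheduling
    # stalls on multi-vCPU VMs) would point at the host rather than the app.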
New here, greetings and such.
My problem is relatively simple, but the situation is a bit odd. I have just been hired to improve (essentially fix) the server room of a health-care dispatcher, and that server room is in shambles. Everything works, but it's a question of what is going to break first. They have just recovered from a 10-day data loss on their SQL server, the main server that holds everything they need for day-to-day operation. That server is custom built, runs Windows Server 2003 with SQL Server 2005, and badly needs an upgrade. Their backup system is also a problem: everything essential sits on 4 tapes, one of which has already failed (the reason for the crash and the 10 missing days). I have priced out both a new SQL server and a disk backup appliance from Dell; each comes to over $10K, and I know that for the moment I can only get away with buying one or the other.
After all that background, my question to you folks is: which should I get first, a new server that the whole company runs on, or a better backup solution to ensure nothing is lost? And how would I go about convincing management to buy one of these items? I have a solid case prepared showing how important each is to the operation; a few more points that I may not have thought of would help as well.
We are running a production server based on Ubuntu 9.10 Karmic Koala. The kernel is almost up to date (2.6.38.2-grsec-xxxx-grs-ipv6-64), but the Karmic package repository is now ridiculously outdated: Nginx, for example, is 0.7.62, which is really buggy, while the latest stable is 1.0.x!
In addition, Karmic just reached its end of life.
This question, "Best practices for keeping UNIX packages up to date?", looks similar, but it really only contains suggestions about package managers; not at all what I need!
So the options that I see are:
- Get a new machine, install it from scratch, migrate
- Distribution upgrade
- Use a different repository (launchpad/ppa / backport / pinning)
- Build your own
The disadvantages of #1 are quite obvious.
I do not dare take the dist-upgrade path, though, as the downtime and possible catastrophic consequences are impossible to predict for a production server, so at the moment I am mostly re-building the packages I need myself. But I'm sure I might be missing something.
It is not really clear to me what the risks (stability/compatibility) of using Ubuntu backports are, and in addition nothing is officially provided for 9.10 anymore. Launchpad PPAs are individual builds, so a similar question applies: how much better is that than compiling my own?
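To be concrete about what "compiling my own" looks like when done as a proper package rather than a bare make install, this is roughly my current approach; just a sketch, where the .dsc URL and version are placeholders and satisfying the build dependencies on Karmic is its own problem:

    # Rebuild a newer release's source package locally as a .deb, so dpkg
    # still tracks the files and dependencies.
    sudo apt-get install build-essential devscripts fakeroot
    dget http://archive.ubuntu.com/ubuntu/pool/main/n/nginx/nginx_1.0.x-1.dsc
    cd nginx-1.0.x
    dpkg-checkbuilddeps              # lists any missing build dependencies
    dpkg-buildpackage -us -uc -b     # unsigned, binary-only build
    sudo dpkg -i ../nginx_*.deb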
Building packages seems fine, but:
- Sometimes I have trouble reproducing the correct ./configure options in order to re-use my existing configuration files (see the sketch after this list)
- I am sure there are tons of packages and dependencies that are now pretty outdated and possible sources of bugs
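On the first point, at least for Nginx the installed binary will report the flags it was built with, which can then be reused for the newer source build; a rough example, where the version and paths are illustrative rather than what Karmic actually shipped:

    # The installed binary reports the flags it was configured with:
    nginx -V 2>&1 | grep 'configure arguments'

    # Reuse them when building the newer release (paste the real flags from above):
    wget http://nginx.org/download/nginx-1.0.15.tar.gz
    tar xzf nginx-1.0.15.tar.gz && cd nginx-1.0.15
    ./configure --conf-path=/etc/nginx/nginx.conf --sbin-path=/usr/sbin/nginx
    make && sudo make install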
Finally… what about "old" packages in a recent distribution? I guess there is no way around rebuilding those myself either? Is a combination of 2 and 4 the best path in the end?
Is there any objective consensus on what is the best way to do this, or reasons why some of my options are fine/not fine?
If there really isn't, I will accept the question being closed rather than starting an endless thread!
Our project is planning to migrate from SPARC to x86, and our HA requirement is 99.99%. Previously, on SPARC, we assumed the hardware was stable, with a failure perhaps every 4 months or even only once a year, and we have test data for our application, so we derived a requirement for each unplanned recovery (failover) that lets us achieve 99.99% (52.6 minutes of unplanned downtime per year).
But since we are going to use Intel x86, it seems the hardware stability is not as good as SPARC, though we don't have detailed data.
So, compared with SPARC, how stable is Intel x86 hardware? Should we assume more unplanned downtime? If so, how much more, double?
Where can I find more detailed data on these two types of hardware?
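For context, this is the rough arithmetic behind our numbers; it assumes the entire unplanned-downtime budget is consumed by hardware failovers:

    # Downtime budget at 99.99%, and what it allows per failover depending on
    # how often the hardware fails:
    awk 'BEGIN {
      budget = (1 - 0.9999) * 365.25 * 24 * 60   # ~52.6 minutes per year
      printf "budget: %.1f min/yr\n", budget
      printf "1 failure/yr  -> up to %.1f min per failover\n", budget / 1
      printf "3 failures/yr -> up to %.1f min per failover\n", budget / 3
      printf "6 failures/yr -> up to %.1f min per failover\n", budget / 6
    }'

So a doubled failure rate roughly halves the recovery time we can tolerate per incident, which is why the x86 failure data matters so much to us.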