Is there a definitive guide for PageFile size on the latest 64-bit servers with large amounts of RAM?
I've seen conflicting recommendations, and am wondering if the latest releases have changed the recommendations.
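To show where the conflict comes from, the two rules of thumb I keep running into give very different answers on a box with a lot of RAM. This is just a sketch of the arithmetic (installed_ram_gb is only an example value):

```python
# Rough comparison of the two pagefile rules of thumb I keep seeing.
# installed_ram_gb is just an example value for a large 64-bit server.
installed_ram_gb = 64

# Old NT-era rule: pagefile = 1.5x physical RAM.
old_rule_gb = installed_ram_gb * 1.5

# Crash-dump-driven rule: enough to hold a complete memory dump
# (physical RAM plus a small amount of header overhead).
dump_rule_gb = installed_ram_gb + 0.3  # ~257 MB of overhead, rounded up

print(f"1.5x RAM rule:  {old_rule_gb:.1f} GB")
print(f"Full-dump rule: {dump_rule_gb:.1f} GB")
print(f"Difference:     {old_rule_gb - dump_rule_gb:.1f} GB of disk")
```

On a 64 GB server that's a gap of roughly 30 GB of disk, which is why I'd like to know whether current guidance has actually settled on one approach.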
It's so difficult to track dozens of passwords across different locations. Syncing fails from time to time, and you end up spending more effort avoiding and correcting sync collisions than actually managing the passwords.
Is there a single source of safe, online, commercial password storage anywhere? One that will be around for years to come and one that is truly safe enough to ensure protection?
In the 'good ole days' of NT, the rule of thumb was simple: if you had installed a service pack on a server and then installed a piece of software that prompted you to insert the OS disc (which installed unpatched components), you simply re-installed the latest service pack immediately afterwards to ensure the new components got patched.
In today's auto-update world, when you have a fully patched server and you install a Windows component that requires the OS disc to add additional items, is automatic updating smart enough to ensure every component ends up properly patched? That seems like a pretty bold assumption.
To clarify further, let's pick an example: say you have IIS installed, but not the SMTP component, on a Windows Server 2003 box. Years go by, along with many, many updates to the system. Someone then installs something that requires the SMTP component, so it gets added. Any DLLs unique to that component, which were not previously on the system, are installed in their unpatched state from the OS disc.
Windows Update would have to recognize that, although IIS was fully patched before the SMTP component arrived, these newly added pieces still need updating.
In general, do you rely on Windows Update to handle this situation properly?
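For what it's worth, the only manual sanity check I've come up with is something along these lines (the path and cutoff date are purely hypothetical; it just flags binaries in a freshly added component's folder that look older than the box's last patch date, which is only a crude proxy for patch level):

```python
import os
from datetime import datetime, timezone

# Hypothetical example: flag files in a newly installed component's folder
# whose modification time predates the server's last known patch date.
COMPONENT_DIR = r"C:\WINDOWS\system32\inetsrv"              # example path only
LAST_PATCH_DATE = datetime(2009, 6, 1, tzinfo=timezone.utc)  # example cutoff

def stale_files(directory, cutoff):
    """Yield (path, mtime) for files last modified before the cutoff."""
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
            if mtime < cutoff:
                yield path, mtime

for path, mtime in stale_files(COMPONENT_DIR, LAST_PATCH_DATE):
    print(f"{mtime:%Y-%m-%d}  {path}")
```

Comparing actual file versions against the relevant KB articles would obviously be more reliable than modification times, but this is exactly the kind of manual verification I'd rather not have to do.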
Since increasing the size of a virtual disk is a fairly time-consuming operation, what's the downside of selecting a huge disk to start with? (Why not start at the largest size and never worry about running out of virtual space?)
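Part of what makes the huge-disk option tempting is that a dynamically expanding / thin-provisioned disk doesn't consume the space up front; on the host it behaves much like a sparse file. A quick illustration (the st_blocks reporting shown here assumes a Unix-like host filesystem, but the allocate-on-write principle is the same for dynamically expanding VHD/VMDK files):

```python
import os
import tempfile

# A dynamically expanding virtual disk behaves much like a sparse file:
# the guest sees the full size, but the host only allocates blocks that
# have actually been written.
path = os.path.join(tempfile.mkdtemp(), "huge-disk-demo.img")

with open(path, "wb") as f:
    f.truncate(100 * 1024**3)           # "create" a 100 GB disk image

apparent = os.path.getsize(path)        # size the guest would see
actual = os.stat(path).st_blocks * 512  # space really consumed on the host

print(f"Apparent size: {apparent / 1024**3:.0f} GB")
print(f"Actual usage:  {actual / 1024**2:.2f} MB")
os.remove(path)
```

So the space itself seems nearly free at creation time, which is why I'm asking whether the real downsides show up later (backups, snapshots, compacting, or the host datastore filling up if every guest eventually grows).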
I'd like to have dev environments 'baselined' with certain software and component packages installed. From time to time a new piece of software comes along, and I'd like to install it in a clean VM until I'm confident it belongs on the production dev box.
I have been playing around with a test system, trying to 'versionize' it by naming it something like DevUsr0105 (major version 01, sub-version 05, DevUsr being the username), but this gets difficult with activation.
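To make the scheme concrete, the naming itself is trivial to generate and parse; the hard part is purely the activation. A throwaway sketch (DevUsr and the 01/05 split are just the convention described above):

```python
import re

def vm_name(user: str, major: int, minor: int) -> str:
    """Build a VM name like DevUsr0105 (major 01, sub-version 05)."""
    return f"{user}{major:02d}{minor:02d}"

def parse_vm_name(name: str):
    """Split a name like DevUsr0105 back into (user, major, minor)."""
    m = re.fullmatch(r"([A-Za-z]+)(\d{2})(\d{2})", name)
    if not m:
        raise ValueError(f"not a versioned VM name: {name}")
    user, major, minor = m.groups()
    return user, int(major), int(minor)

print(vm_name("DevUsr", 1, 5))      # DevUsr0105
print(parse_vm_name("DevUsr0106"))  # ('DevUsr', 1, 6)
```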
I can imagine an overlap period between the release of a new version and the retirement of the previous one, so at times there may be two DevUsr-type VMs in use (new development happening on the latest release, while previous work is completed on the existing VM before it's retired). There's no problem paying for two OS licenses per developer, but I don't want the hassle of calling in to migrate an activation. (Also, isn't there a limit on the number of activation transfers per license?)
Each developer would grab the latest production developer VM whenever a new release is available. Any local customizations would be kept versioned in their SVN repository to ensure a clean migration without a lot of manual work.
So is the latest release sysprepped and ready for final activation by the developer? Do we really need to transfer the activation, or re-activate the OS, on every dev VM release?
The VMs aren't actively being used by developers yet; I'm just looking for feedback on how others handle something like this before I waste a lot of time going down the wrong path.