How should I go about installing Debian on a remote server that I have no physical access to?
Background on the server: existing Debian installation, no Xen or LVM, a single ext3 filesystem takes up the whole disk, with 11 GB of free space.
Here's my plan of action; please comment and suggest improvements.
- attempt to shrink the mounted partition to current_data_size + 2 GB.
- use fdisk/mkfs.ext3 to create a new partition in the space that was freed up
- install a temporary OS in the new partition (could be Debian) using some unattended/remote installation technique (any suggestions?)
- modify GRUB's menu.lst to boot a kernel from the new partition (is that enough to get the new OS running?)
- reboot.
- after getting into the new OS (how do I make sure it has an active sshd server?), use FS tools to wipe the old OS, then delete/recreate the partition and use resize2fs to grow the new filesystem into the space freed up by clearing the old one (rough sketch after this list).
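For the last step, here is roughly what I imagine doing (a sketch only; /dev/sda and /dev/sda3 are hypothetical device names, and I understand this only works for space that sits after the partition, since the start sector must not move):

    # from inside the new OS, after wiping the old root partition:
    fdisk /dev/sda          # delete the new OS's partition entry (d), then
                            # recreate it (n) with the same start but a larger end
    # the kernel may refuse to re-read the table while the disk is in use,
    # so a reboot may be needed before the next step
    resize2fs /dev/sda3     # grow the ext3 filesystem to fill the enlarged partition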
A random question: can I install the new OS using LVM and then, when additional space becomes available, extend the VGs/LVs to take it up?
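That is, once the old OS's partition is freed, do something like this (all names hypothetical):

    pvcreate /dev/sda1                   # turn the freed partition into a PV
    vgextend vg0 /dev/sda1               # add it to the volume group
    lvextend -l +100%FREE /dev/vg0/root  # grow the root LV into the new space
    resize2fs /dev/vg0/root              # grow the ext3 filesystem to match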
EDIT:
Am I correct in concluding that shrinking a mounted filesystem has pitfalls, but shrinking a mounted partition is impossible?
The system has an unused 2.5 GB swap partition; maybe I'll be able to swapoff it and format it for the new OS installation. What do you think?
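Something like this is what I have in mind (a sketch; /dev/sda2 as a hypothetical swap device):

    swapoff /dev/sda2                   # stop using the swap partition
    sed -i '/swap/s/^/#/' /etc/fstab    # comment out the swap entry so it isn't re-enabled on boot
    mkfs.ext3 /dev/sda2                 # put an ext3 filesystem on it
    mkdir /mnt/newroot
    mount /dev/sda2 /mnt/newroot        # this becomes the new OS's root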
Disclaimer: this isn't exactly an answer to your question.
There are so many ways this can go wrong. You need console access, if not for the install then for any one of the dozen reasons your server may not come up cleanly after future upgrades or patches. Enjoy those remote network card driver updates!
If this is a real server, you should purchase some form of lights-out management card (like the integrated iLOs that come with HPs). You can talk someone through the initial configuration remotely, and then never have a problem again.
http://en.wikipedia.org/wiki/Lights_out_management
Shrinking a partition while it is still mounted could be...um, exciting.
As for the "set up everything, reboot, and hope it boots and sshd starts" approach: you might not make a single mistake and succeed, but what I would do is get another machine that you do have access to and try all of this on that, without touching the remote box. Each time you mess up, fix it, try again, and take notes. After you get good at this, you'll have a better chance of succeeding on the remote machine.
Also, build in as many "ways out" as you can think of. It's far easier if you have some kind of headless remote access to the box's BIOS, but I'm sure you have no choice there.
Let us know how it goes.
Yes, shrinking a mounted partition is impossible. You can install on the swap partition; using a chroot would probably be easiest. Other options: you can install a kernel to do an NFS boot, or do a PXE boot if your network card supports it. You can then try to shrink the partition, or simply scrap it and install from the net.
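A minimal debootstrap sketch, assuming the old swap partition (/dev/sda2 here, hypothetical) has already been reformatted as ext3:

    mkdir -p /mnt/newroot
    mount /dev/sda2 /mnt/newroot
    apt-get install debootstrap
    debootstrap stable /mnt/newroot http://ftp.debian.org/debian
    chroot /mnt/newroot /bin/bash      # now configure the new system from inside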
It's doable, but being completely remote creates 'holes' where you lose control of the machine (at boot, usually). iLO or a serial console + remote power might fill these gaps for you, but they require some configuration done on site, so it comes down to having 'remote hands' (someone you can guide through the phone). If it's possible to set up remote BIOS access, do it; it will save plenty of time later.
Also, resizing mounted partitions is not possible (at least for most filesystems I know of). Use the swap partition for a debootstrap installation, or do a PXE install.
I dealt with a similar situation a couple of years ago by creating a custom CD that was fully preseeded, partitioned the disk, installed all the basics, and left me with a configured machine ready to SSH into and manage. I used http://linuxcoe.sourceforge.net/ as the basis for the image, then tweaked it extensively.
Sure, it took a while to get the image tweaked to the point of absolute automation, but it meant that I could have a DC monkey rack the box and install the OS without having to think at all. It came in handy primarily for large numbers of installs; in your situation I'd be more inclined to use an IP KVM or serial console (if you've got a good colo provider they should be able to get you hooked up) unless you've got IPMI/iLO/DRAC, which are the nicest ad-hoc remote admin tools.
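For flavour, here is the kind of preseed fragment such an image carries (the values are illustrative, not a working recipe):

    d-i debian-installer/locale string en_US
    d-i netcfg/choose_interface select auto
    d-i mirror/http/hostname string ftp.debian.org
    d-i mirror/http/directory string /debian
    d-i partman-auto/method string regular
    d-i passwd/root-password password changeme
    d-i passwd/root-password-again password changeme
    d-i pkgsel/include string openssh-server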
one thing puzzles me - if the remote system already has debian installed, why do you want to install it again? why not just upgrade it or install/re-install packages as required?
but, ignoring that, what you want to do is quite possible. I've done it several times (including converting a couple of remote HP/Compaq machines located in the UK from RHEL to Debian, while i was here in Australia). It tends to go smoother and with less risk if you have a remote management card (like iLO etc) in the server but it is possible (just riskier) without one.
the general idea is to install debian into a spare partition (the swap partition can be used for this if there's no other free space available), chroot into that partition, install sshd, configure grub and anything else that needs to be configured (fstab, for example). you say your system has 11GB free, so you can use that.
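a sketch of those chroot steps (device names and the kernel package are hypothetical; note that on debian, openssh-server is enabled on boot by default, which answers the "active sshd" question):

    # on the existing system, with the new root mounted at /mnt/newroot:
    mount --bind /dev  /mnt/newroot/dev
    mount --bind /proc /mnt/newroot/proc
    mount --bind /sys  /mnt/newroot/sys
    chroot /mnt/newroot /bin/bash

    # inside the chroot:
    apt-get install openssh-server       # so you can get back in after the reboot
    passwd                               # set a root password you know
    apt-get install linux-image-2.6-686  # a kernel to boot (package name varies)
    echo '/dev/sda2 / ext3 defaults 0 1' > /etc/fstab   # hypothetical device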
if the existing system is debian, you can use debootstrap or cdebootstrap to install debian into your spare partition.
if the existing system is not debian, use debootstrap (or even the standard debian installer) to install debian into a subdirectory (or a xen/kvm/virtualbox vm) on a local system and then tar it up. scp it to the existing system and untar it into the right location.
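the transplant itself is just tar plus scp, something like this (paths, host, and device are hypothetical):

    # on the local machine, where the new system lives in /tmp/newroot:
    tar -C /tmp/newroot -czf newroot.tar.gz .
    scp newroot.tar.gz remote:/tmp/

    # on the remote machine, with the spare partition mounted:
    mount /dev/sda2 /mnt/newroot
    tar -C /mnt/newroot -xzf /tmp/newroot.tar.gz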
as with any major system "surgery", make a plan of what you're going to do and the order you're going to do it BEFORE you start doing any work. the very process of writing down the plan will remind you of other things you need to do. then stop and re-read your plan and make any corrections or extra notes that you need to. do that a few times, until you're sure you haven't forgotten anything.
try to design your plan to put off the "moment of no-return" to the last possible moment in time. this generally means a lot of safe, boring preparatory steps, with one last step to activate all the previous steps....and, whenever possible, leave yourself a way to revert/undo each step. for example, set up grub so that the next reboot ONLY will boot into your new environment but subsequent reboots will boot into the old environment - that way if it doesn't come back up, you can just power cycle it. if it works, then you can manually change grub's default.
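grub legacy's documented "boot once-only" trick does exactly this. a menu.lst sketch (partition numbers and kernel versions are made up):

    default saved
    # if the saved default fails and the box is power-cycled, boot entry 1
    fallback 1

    title new debian (entry 0)
    root (hd0,1)
    kernel /boot/vmlinuz-2.6-686 root=/dev/sda2 ro
    initrd /boot/initrd.img-2.6-686
    # booting this entry saves the fallback (the old system) as the next
    # default, so an unattended power-cycle lands you back in the old OS
    savedefault fallback

    title old system (entry 1)
    root (hd0,0)
    kernel /boot/vmlinuz root=/dev/sda1 ro
    initrd /boot/initrd.img
    savedefault

then run grub-set-default 0 once to arm it, and again after a successful boot to make the new system the permanent default.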
if possible, practice the procedure on a local machine....with no keyboard or monitor, just as you will have with the remote machine.
at some point, though, you're going to have to gamble that you've done it right and reboot the machine. it's at this point that having a remote console is invaluable. if you don't have one, try to arrange a specific time for someone in the remote data center to be available to follow your instructions by phone/email/irc if necessary.
good luck.