jan bernlöhr's questions

Is there a way to start services.msc already attached to a remote system (e.g. from the command line)? I want to avoid clicking Action -> Connect to another computer, because I have to do it so often...
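Ideally there would be something along the lines of the hypothetical invocation below; I don't know whether services.msc actually accepts such a switch on every Windows version, so treat it as wishful thinking:

services.msc /computer=remotehost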
We have some Ubuntu clients here that are supposed to mount Kerberos-protected NFS home directories. The server works nicely with the existing clients, so we can assume that LDAP and Kerberos are fine.
We managed to configure LDAP on the Ubuntu clients, and kinit is able to get us tickets for LDAP users. When root obtains a root ticket with kinit, we can mount the NFS shares.
To allow users to mount their homes, we set up autofs. However, this does not work, since autofs seems to perform the mount as 'root', and root does not have any tickets, so the mount fails; see the attached log excerpt from rpc.gssd. Note that our Kerberos setup uses user principals, not machine principals. How can we get autofs to pass the correct uid to gssd?
handling gssd upcall (/var/lib/nfs/rpc_pipefs/nfs/clnt10)
handle_gssd_upcall: 'mech=krb5 uid=0 '
handling krb5 upcall (/var/lib/nfs/rpc_pipefs/nfs/clnt10)
process_krb5_upcall: service is '<null>'
getting credentials for client with uid 0 for server purple.physcip.uni-stuttgart.de
CC file '/tmp/krb5cc_554' being considered, with preferred realm 'PURPLE.PHYSCIP.UNI-STUTTGART.DE'
CC file '/tmp/krb5cc_554' owned by 554, not 0
WARNING: Failed to create krb5 context for user with uid 0 for server purple.physcip.uni-stuttgart.de
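For completeness: the usual workaround I know of would be to give the client machine credentials of its own, roughly as sketched below (principal name and client FQDN are hypothetical, since our setup deliberately avoids machine principals). rpc.gssd would then pick up the keytab to acquire a ticket for uid 0:

kadmin -q "addprinc -randkey nfs/ubuntuclnt01.physcip.uni-stuttgart.de"
kadmin -q "ktadd nfs/ubuntuclnt01.physcip.uni-stuttgart.de"    # written to /etc/krb5.keytab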
Environment: Ubuntu 11.10 Desktop, autofs 5
The output of klist (the user ID is 65678; realm and username have been altered for privacy):
usr01@ubuntuclnt01:/$ klist
Ticket cache: FILE:/tmp/krb5cc_65678_ed3816
Default principal: usr01@REALM
Valid starting Expires Service principal
11/18/11 17:18:57 11/19/11 03:18:57 krbtgt/REALM
renew until 11/19/11 17:18:57
Update: I found a 2.5-year-old bug report describing exactly this phenomenon: https://bugs.launchpad.net/ubuntu/+source/nfs-utils/+bug/368153#5
It seems that (Ubuntu) Linux cannot, by design, get along with Kerberos-secured home volumes, while virtually every other OS can, even Mac OS X!
Ganeti 2.4 rc3 setup on my Squeeze server went fine, and I can successfully create instances:
$ gnt-instance add -t plain -s 5G -o debootstrap+default -n obi-wan vm01
Wed Mar 2 17:12:51 2011 * disk 0, vg xenvg, name fdc7fa9e-19ac-405c-adad-f72da34d6682.disk0
Wed Mar 2 17:12:51 2011 * creating instance disks...
Wed Mar 2 17:12:51 2011 adding instance vm01.physcip.uni-stuttgart.de to cluster config
Wed Mar 2 17:12:51 2011 - INFO: Waiting for instance vm01.physcip.uni-stuttgart.de to sync disks.
Wed Mar 2 17:12:51 2011 - INFO: Instance vm01.physcip.uni-stuttgart.de's disks are in sync.
Wed Mar 2 17:12:51 2011 * running the instance OS create scripts...
Wed Mar 2 17:13:03 2011 * starting instance...
$
Ganeti tells me that the instance is running fine:
$ gnt-instance list
Instance Hypervisor OS Primary_node Status Memory
vm01.physcip.uni-stuttgart.de xen-pvm debootstrap+default obi-wan.physcip.uni-stuttgart.de running 128M
However, the freshly created instance fails to boot because it cannot mount its root device. This is a snapshot of the VM console:
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... [ 0.491828] device-mapper: uevent: version 1.0.3
[ 0.492487] device-mapper: ioctl: 4.15.0-ioctl (2009-04-01) initialised: dm-devel@redhat.com
done.
Begin: Waiting for root file system ... done.
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/sda1 does not exist. Dropping to a shell!
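From the BusyBox shell that the initramfs drops into, it should at least be possible to see which block devices the guest kernel actually created, e.g.:

cat /proc/partitions
ls /dev/sd* /dev/xvd*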
What's installed:
- Debian Squeeze with Xen 4 and LVM2
- kernel 2.6.32-5-xen-amd64
- Ganeti 2.4 rc3
Ganeti cluster info (a single-node cluster, named obi-wan):
gnt-cluster info
Cluster name: vmcluster.physcip.uni-stuttgart.de
Cluster UUID: 2ae3377c-8682-486d-9ac2-cdac43a136f7
Creation time: 2011-03-01 12:05:10
Modification time: 2011-03-02 14:12:48
Master node: obi-wan.physcip.uni-stuttgart.de
Architecture (this node): 64bit (x86_64)
Tags: (none)
Default hypervisor: xen-pvm
Enabled hypervisors: xen-pvm
Hypervisor parameters:
- xen-pvm:
blockdev_prefix: sd
bootloader_args:
bootloader_path:
initrd_path: /boot/initrd-2.6-xenU
kernel_args: ro
kernel_path: /boot/vmlinuz-2.6-xenU
migration_mode: live
migration_port: 8002
root_path: /dev/sda1
use_bootloader: False
OS-specific hypervisor parameters:
OS parameters:
Hidden OSes:
Blacklisted OSes:
Cluster parameters:
- candidate pool size: 10
- master netdev: xen-br0
- lvm volume group: xenvg
- lvm reserved volumes: (none)
- drbd usermode helper: /bin/true
- file storage path: /srv/ganeti/file-storage
- maintenance of node health: False
- uid pool:
- default instance allocator:
- primary ip version: 4
- preallocation wipe disks: False
Default node parameters:
oob_program: None
Default instance parameters:
- default:
auto_balance: True
memory: 128
vcpus: 1
Default nic parameters:
- default:
link: xen-br0
mode: bridged
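My suspicion (unconfirmed) is a device-naming mismatch: with blockdev_prefix "sd", the disk is handed to the guest under an sd* name (hence root_path: /dev/sda1), but the pvops xen-blkfront driver in the Squeeze 2.6.32 kernel registers such devices as xvd* instead, so the root partition would actually appear as /dev/xvda1. If that is the case, something like this should realign the hypervisor parameters:

gnt-cluster modify -H xen-pvm:root_path=/dev/xvda1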
How can block files be mounted on OS X? I tried
hdiutil attach filename
but it terminates with
hdiutil: attach failed - not recognized
hdiutil only seems to work with ISO/DMG images. On Ubuntu, a block file can easily be mounted with
mount -o loop filename mountpoint
Background: I used vdfuse (http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0) to mount virtual disk files (e.g. VHD). vdfuse itself works fine, and the partitions contained in the virtual disk appear as block files on the mount point:
VHD file -> /my/mountpoint/Partition1, Partition2, ...
On Ubuntu the block files can be mounted via
mount -o loop /my/mountpoint/Partition1
however, the -o loop option does not exist on OS X.
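The closest built-in equivalent I'm aware of is telling hdiutil to treat a file as a raw disk image, though I have not yet gotten this to work with the FUSE-exposed block files (path as above):

hdiutil attach -imagekey diskimage-class=CRawDiskImage /my/mountpoint/Partition1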
Is it possible to mount a VHD file created by Windows 7 in OS X, and if so, how?
I found some information about how to do this on Linux. There is a FUSE file system, "vdfuse", which uses the VirtualBox libraries to mount any file system supported by VirtualBox. However, I was unable to compile the package on OS X because nearly all headers are missing, and I doubt it would work anyway...
EDIT #2: Okay, I got my hands dirty and finally compiled vdfuse (http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0) on OS X. As a starting point I used MacFUSE (http://code.google.com/p/macfuse/) and looked at its example file systems.
This led me to the following build script:
#!/bin/sh
# Build vdfuse directly against the dylibs shipped inside the
# VirtualBox.app bundle, so VirtualBox itself never has to be compiled.
infile=vdfuse.c
outfile=vdfuse
incdir="your/path/to/vbox/headers"   # location of the VirtualBox SDK headers
INSTALL_DIR="/Applications/VirtualBox.app/Contents/MacOS"
CFLAGS="-pipe"

# Compile as 32-bit (-arch i386) and link directly against the bundled dylibs.
gcc -arch i386 "${infile}" \
  "${INSTALL_DIR}"/VBoxDD.dylib \
  "${INSTALL_DIR}"/VBoxDDU.dylib \
  "${INSTALL_DIR}"/VBoxVMM.dylib \
  "${INSTALL_DIR}"/VBoxRT.dylib \
  "${INSTALL_DIR}"/VBoxDD2.dylib \
  "${INSTALL_DIR}"/VBoxREM.dylib \
  -o "${outfile}" \
  -I"${incdir}" -I"/usr/local/include/fuse" \
  -Wl,-rpath,"${INSTALL_DIR}" \
  -lfuse_ino64 \
  -Wall ${CFLAGS}
You don't actually need to compile VirtualBox on your machine; just install a recent version of VirtualBox.
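With the resulting binary, the invocation looks roughly like this (paths are placeholders; -f names the disk image file):

./vdfuse -f /path/to/disk.vhd /my/mountpoint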
So now I can partially mount VHDs: the separate partitions appear as block files Partition1, Partition2, ... on my mount point. However, Mac OS X does not include a loopback file system, and MacFUSE's loopback fs does not work with block files, so we still need a loopback fs to mount the block files as actual partitions.
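As a crude workaround sketch (untested, and only practical for small partitions), one could copy a partition's block file out of the FUSE mount into an ordinary file and attach that copy as a raw image:

dd if=/my/mountpoint/Partition1 of=/tmp/part1.img bs=1m
hdiutil attach -imagekey diskimage-class=CRawDiskImage /tmp/part1.img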