I am currently in the process of migrating a server running several Linux containers to a server managed by Proxmox. In the past, when I moved a Linux container to a different host, I just used the LXD API and the simplestreams protocol and executed an lxc copy command - quite simple. But how is it done if the remote is managed by Proxmox, so that the migrated container is known to Proxmox afterwards?
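For context, the old LXD-to-LXD move was just (remote and container names are examples):

lxc remote add target https://target.example.com:8443
lxc copy web1 target:web1

The only Proxmox-side entry point I have found so far is pct. My guess -- completely unverified -- would be exporting on the LXD side and creating a CT from the tarball, roughly like this (VMID, storage and paths are examples):

# on the LXD host: export the container to a tarball
lxc export web1 web1.tar.gz
# on the Proxmox host: create a new CT from the archive
pct create 101 /root/web1.tar.gz --storage local-lvm --hostname web1

Is that the intended way, or is there a proper import path?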
When developing a shell script that should run on various Unix/Linux derivatives, I sometimes run into the problem that some tools have to be called differently than on other systems, for example because their arguments differ. I wonder what the best way is to solve this.
Should I use uname to check for the operating system name and rely on that to execute the tool in different ways, or are there "better" ways, some kind of "capability" check for shell commands and tools?
The systems in question are, for example, Linux, Mac OS X, Solaris and IRIX -- all quite different when it comes to the abilities of their tools and shell commands.
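To make this more concrete, here is the kind of capability check I have in mind instead of switching on uname -- a minimal sketch, with date just as an illustration:

# probe whether the GNU-style flag works, fall back if it does not
if date -d yesterday >/dev/null 2>&1; then
    y=$(date -d yesterday +%Y-%m-%d)   # GNU date (Linux)
elif date -v-1d >/dev/null 2>&1; then
    y=$(date -v-1d +%Y-%m-%d)          # BSD-style date (Mac OS X)
else
    echo "no known way to compute yesterday here" >&2
    exit 1
fi

Is probing like this considered good practice, or does it fall apart on the older systems?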
I wonder what the best way is to determine the size of a file using common Unix tools. I need to determine the size of a file in bytes in a shell script. The problem is that the shell script needs to be portable across different operating systems like OS X, IRIX and Linux -- that said, using the stat command may not work well, because the arguments required to get the result I want are different on almost every operating system.
I tried to use:
cat ... | wc -c
and while this seems to work quite well, I will probably get issues in a multibyte environment, won't I? So: what's a good way to do this?
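For completeness, the variant I am currently leaning towards -- assuming wc -c really counts bytes everywhere, which is what POSIX says (wc -m is the character-counting one):

# wc -c counts bytes, not characters, so multibyte locales should be safe;
# reading from stdin avoids both the extra cat and the filename in the output
size=$(wc -c < "$file")
# some wc implementations pad the number with spaces; normalize it
size=$((size + 0))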
I would like to monitor some servers with munin that are in a different network and are not directly reachable (e.g. via telnet). I wonder what the possibilities are:
- Can I install a central node in the remote network that collects all data from the other servers in that network?
- Or would I have to set up port forwarding for each server I want to monitor?
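One thing I spotted in the munin documentation is an ssh:// transport for the master, which would only require one reachable gateway host in the remote network -- if I read it correctly, something like this (hostnames are examples, untested):

# /etc/munin/munin.conf on the master
[remote;serverA]
    address ssh://munin@gateway.example.com/bin/nc serverA 4949
    use_node_name yes

Is that the recommended approach, or is there something better?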
I am currently looking for an open-source monitoring solution like Zabbix or Icinga. While both seem to be very powerful for monitoring generic states of hardware and software, they appear to be missing functionality that is important to me -- or I could not figure out from the documentation alone how it would work.
I would like to integrate a job queue into such a monitoring tool. On the one hand, I need summary information about the queue, like general availability, which would be no problem to integrate with either of these tools. On the other hand, I would like to have additional detailed information about what's going on in the queues.
I would like to develop a plugin which could return an arbitrary amount of detailed data -- like information about each job stored in the queue -- which I could fill into a custom view/template and integrate nicely into one of these monitoring tools.
Is this possible with Zabbix, Icinga or any other open-source monitoring solution?
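To illustrate: the summary part would be easy with the usual Nagios/Icinga plugin convention -- a sketch, where queue-stats stands in for a hypothetical CLI of my job queue:

#!/bin/sh
# Icinga/Nagios plugin convention: the exit code carries the state,
# the output line carries a summary plus optional perfdata
depth=$(queue-stats --depth) || { echo "UNKNOWN - queue not reachable"; exit 3; }
if [ "$depth" -gt 1000 ]; then
    echo "CRITICAL - queue depth is $depth | depth=$depth"
    exit 2
fi
echo "OK - queue depth is $depth | depth=$depth"
exit 0

What I cannot see is how to get from there to a per-job detail view inside the monitoring tool.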
I am a big fan of qemu-kvm; I have several instances running on servers running Ubuntu Linux. I now wonder whether it is at all possible to use such a virtual server image on a Mac OS X machine, either by running QEMU on OS X or by running some other virtual machine. Is there any possibility?
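What I imagine is something like the following -- assuming QEMU can be installed on OS X via MacPorts or Homebrew, and accepting that there is no KVM acceleration there, so it would run unaccelerated:

# run the existing KVM disk image with plain qemu on OS X
# (image name and memory size are examples)
qemu-system-x86_64 -m 1024 -hda server.qcow2 -net nic -net user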
Thanks,
I've used mysqlimport for MyISAM tables a lot in the past, without any problems. But now I want to import data into an InnoDB table and am facing the following problem: mysqlimport reports the following error and won't import anything:
mysqlimport: Error: 1062, Duplicate entry '1' for key 'PRIMARY', when using table: ...
... and I don't know how to resolve this error. The table I want to import the data into is freshly created, without any data. It looks like the following:
CREATE TABLE `member` (
`member_id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'ID of Member',
...
PRIMARY KEY (`member_id`),
...
) ENGINE=InnoDB;
The data I want to import includes the "member_id", which is defined as "auto_increment" in the table. Of course there are no duplicate 'member_id' values in the CSV file -- I've triple-checked this. Can this cause any errors when importing into MySQL ... and if so, how can I resolve it?
MySQL Server version is: 5.5.8
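Two things I dug up so far, in case they are relevant. First, a quick sanity check of the CSV for duplicate IDs (member_id is the first column in my file):

cut -d, -f1 member.txt | sort | uniq -d

Second, mysqlimport has documented flags for key collisions -- note that it derives the table name from the file's basename, here "member":

#   --ignore   keep existing rows, silently skip colliding input rows
#   --replace  overwrite existing rows that collide on a unique key
mysqlimport --ignore --fields-terminated-by=, mydb /path/to/member.txt

But since the table is empty and the file has no duplicates, neither flag should be necessary, which is exactly what puzzles me.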
I think I am missing some basic knowledge about network connectivity; maybe someone out there could explain to me why the following causes problems. The facts:
- We are connected to the "internet" with 100 Mbit/s, full duplex.
- The cable of our internet provider was connected to a security appliance (Cisco ASA): outside and inside port were both configured to 100 Mbit/s, full duplex.
- Behind the security appliance we had a 24-port 10/100/1000 Mbit/s switch. The port connected to the ASA was configured to 100 Mbit/s, full duplex ... the other ports were configured to 1000 Mbit/s.
Internal bandwidth between the machines connected to the switch was always very good. Incoming bandwidth was always OK. Outgoing bandwidth was also OK, until after two years it suddenly dropped to below 1 Mbit/s.
At first we thought we had a problem with our ASA, because we detected lots of CRC errors on the outside port of the appliance. We swapped the hardware, but the bandwidth problem was not solved.
We then changed the configuration of the inside port of the ASA to 1000 Mbit/s, full duplex, and the corresponding switch port to 1000 Mbit/s as well ... so that every port of the switch now runs at 1000 Mbit/s.
This not only solved our bandwidth problem, throughput is even better than before. Apparently we had some kind of speed/duplex mismatch because of the different port configurations on the switch ... but I am not really sure why. Is there some "easy" explanation for this kind of problem?
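For what it's worth, this is how we compared settings while debugging, at least on the Linux boxes (interface name is an example):

# show what speed/duplex the NIC actually negotiated
ethtool eth0 | egrep 'Speed|Duplex|Auto-negotiation'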
Thanks in advance!
For nginx there is a very nice module available to filter a response and search/replace content in it: http://wiki.nginx.org/HttpSubModule
I wonder if a similar possibility is available for lighttpd?
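For reference, this is the kind of configuration I mean on the nginx side (values are examples):

# nginx HttpSubModule: rewrite response bodies on the fly
location / {
    sub_filter 'http://old.example.com' 'http://new.example.com';
    sub_filter_once off;
}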
Thanks in advance,
I am currently trying to find a solution to synchronize two storage servers, both running Open-E DSS 6.
- I need only one-way synchronization (storage A -> storage B).
- My volume is 30 TB in size, so DSS's built-in replication will not work (as far as I know it only supports volumes up to 16 TB).
- There are about 1,000,000 files to synchronize.
I've tried rsync, but things got very, very slow with a million files.
I have no clue whether there are any better solutions for this problem, so any help would be very much appreciated!
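The direction I am experimenting with right now is to stop letting rsync scan the whole tree on every run and instead feed it only recently changed files -- a sketch, with example paths and GNU find assumed:

cd /mnt/storageA || exit 1
# collect files changed since the last run (a timestamp file marks the last sync)
find . -type f -newer /var/run/last-sync -print0 > /tmp/changed.list
# hand only those files to rsync instead of the full 30 TB tree
rsync -a --from0 --files-from=/tmp/changed.list . storageB:/mnt/volume/
touch /var/run/last-sync

But maybe there is a tool that does this properly?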
I'm currently thinking about a clean way to bring an FTP server down for maintenance. I wonder if anybody out there could give me some hints on how to solve this:
- I don't want to interrupt any running uploads, but I want to block any new connections/uploads and wait until the running uploads have finished before taking the FTP server down.
- Is there a way to dynamically prevent user logins and show a message, e.g. "ftp currently down for maintenance", when a user tries to log in?
Are my thoughts on this very uncommon, or how do others handle this? I feel that just halting the FTP server and killing any running uploads is not the right way to do it ...
I use proftpd (with an SQL backend), by the way; maybe there are some proftpd-specific solutions for this -- or are there generic tools to achieve it?
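One thing I stumbled over in the proftpd docs is ftpshut(8), which sounds close to what I want -- if I read the man page correctly, something like this (times are examples):

# schedule shutdown in 120 minutes, but refuse new logins starting now
# (-l 120 = disallow logins 120 minutes before the shutdown time);
# the message is shown to users who try to connect in the meantime
ftpshut -l 120 +120 "ftp currently down for maintenance"
# afterwards: remove the shutdown file and allow logins again
ftpshut -R

Has anybody used this in practice?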
Many thanks!
We have a storage server currently holding about 20 TB of media files, which we want to synchronize with a second storage server for backup and failover. The facts:
- We are currently storing about 9,000,000 files.
- File sizes range from several KB up to 1 GB.
- Only one-way synchronization is required.
- The files do not get updated and there are no deletes -- only new files to synchronize.
- The storage servers are running Open-E; they are mounted as NFS volumes in the network.
Currently we just use plain rsync on a third server to perform the synchronization.
I would like to know whether there are better tools for such an amount of files -- commercial or open-source?
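Since nothing is ever modified or deleted, one idea we are considering is to diff sorted file listings and hand only the new names to rsync, instead of a full-tree scan -- a sketch with example paths (touch /tmp/files.last once before the first run):

cd /mnt/storage1 || exit 1
find . -type f | sort > /tmp/files.now
# lines only in the new listing = files created since the last run
comm -13 /tmp/files.last /tmp/files.now > /tmp/files.new
rsync -a --files-from=/tmp/files.new . /mnt/storage2/
mv /tmp/files.now /tmp/files.last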
Thanks very much,
I'm having big trouble with my proftpd installation: the login is slow as hell. It takes about 10 seconds or more, and that's longer than the default timeout of most FTP clients. I'm using MySQL with mod_sql as the backend for the login.
I've already searched a bit and added the following to my proftpd configuration:
UseReverseDNS off
IdentLookups off
ServerIdent on "..."
But that did not help at all. MySQL is already configured not to do any DNS lookups.
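To be precise about the MySQL side, this is the setting I mean in my.cnf:

[mysqld]
skip-name-resolve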
At the moment I have no idea what to try next -- do you have any?
Thanks a lot!
I have problems connecting to an FTP server behind a Cisco ASA firewall using passive mode. FTP works using active and "extended passive" mode; however, when I turn off "extended passive" mode (epsv in the ftp console client), it no longer works -- all commands result in a timeout. But we need plain, non-"extended" passive mode for an application we use.
Any ideas?
Thanks, harald
UPDATE / SOLUTION
As it turns out, it was not exactly the ASA's fault -- or was it? I had to turn off masquerading in the proftpd configuration. I had the masquerade address in the proftpd config set to the IP address of the FTP server's domain, and that resulted in unexpected behavior when the traffic passed through the ASA. Now, without address masquerading, everything works very well.
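For anyone running into the same thing: the directive in question is MasqueradeAddress in proftpd.conf (hostname below is an example). Commenting it out was the actual fix:

# MasqueradeAddress makes proftpd advertise this address in PASV replies;
# with the ASA already inspecting/rewriting FTP traffic, that apparently
# resulted in a double translation that broke plain passive mode
#MasqueradeAddress ftp.example.com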