How to keep the current Nginx websites running intact when installing iRedMail?
I'm using a web app (AppSheet.com) that connects to my remote MySQL database and exports data into it. I can't change anything in the app (connection settings, queries, etc.), but I have full root access to the MySQL server.
The app connects successfully, creates the first table and inserts rows one by one. After about 5 minutes the app reports an error saying it can't find the table it was writing to, and the export stops.
I have tried playing with these MySQL config variables:
wait_timeout = 28800
interactive_timeout = 28800
max_execution_time = 28800
delayed_insert_timeout = 28800
net_read_timeout = 28800
net_write_timeout = 28800
connect_timeout = 10
but without any luck. The app's admins are not helpful either.
The MySQL error log has no new entries.
How can I debug what is really happening and why the connection is lost? How can I make sure there is no time limit?
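One way to see what actually happens server-side is to log everything for the duration of a single export run. A sketch, assuming root shell access on the MySQL server (the log path is a placeholder):

```shell
# Log every statement temporarily so the app's last query before the
# failure is visible:
mysql -u root -p -e "SET GLOBAL general_log_file = '/var/log/mysql/general.log';
                     SET GLOBAL general_log = 'ON';"

# Watch the aborted-connection counters while the export runs; if they
# climb, the server is dropping or losing the app's connection:
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Aborted%';"

# Make the server write the reason for aborted connections to the error log:
mysql -u root -p -e "SET GLOBAL log_warnings = 2;"

# Turn the general log off afterwards -- it grows very quickly:
mysql -u root -p -e "SET GLOBAL general_log = 'OFF';"
```

If the general log shows a `DROP TABLE` or a reconnect from a different session, the problem is on the app side rather than a timeout.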
How can I install opendmarc on Debian Wheezy?
I have tried:
~ $ echo 'deb http://ftp.debian.org/debian wheezy-backports main contrib' >> /etc/apt/sources.list
~ $ apt-get update
~ $ apt-get install opendmarc
But apt is unable to find the package.
Has it been removed from the repository?
Do I have to install it manually?
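One likely explanation: packages from wheezy-backports are pinned so low that apt ignores them unless you ask for the backports release explicitly with `-t`. A sketch:

```shell
# Backports packages are never installed by default; request the
# backports release explicitly:
apt-get update
apt-get -t wheezy-backports install opendmarc

# If apt still can't find it, check which repositories actually
# provide the package:
apt-cache policy opendmarc
```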
I know that there are some cron jobs (run every minute) scheduled on my Ubuntu machine.
How do I track down what is running them, when the crontab files (sudo su; crontab -e) are empty?
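`crontab -e` only shows one user's crontab; cron reads jobs from several other places, and it logs every job it starts. A sketch of where to look:

```shell
# System-wide crontab and the package drop-in directories:
cat /etc/crontab
ls /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly

# Every user's personal crontab (not just the one you are editing):
ls /var/spool/cron/crontabs

# cron logs each job it runs to syslog -- this shows which user and
# which command fire every minute:
grep CRON /var/log/syslog
```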
I'm using IMAPCopy 1.04.
During the copy process I get the following errors for many messages:
Bad or invalid system flag \RECENT
(messages with this error are not copied at all)
What does this mean, and how can I fix it?
How can I get domain expiration dates from the console for European domains like .eu, .de or .sk?
For .com domains I just use whois example.com, but for European domains I only get brief info without the date (e.g. NOT DISCLOSED! for .eu domains).
As an alternative I've found the paid web service www.whoisxmlapi.com, but it is limited too (and I'm looking for a solution for non-commercial projects).
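A caveat worth knowing: each European registry formats its whois output differently, and some (EURid for .eu among them) deliberately do not disclose the expiry date at all, so no client-side trick can recover it. For registries that do publish it, a rough sketch is to grep for the common field-name variants:

```shell
# Field names differ per registry, so match several common variants;
# an empty result usually means the registry does not disclose the date.
for d in example.de example.sk; do
    printf '%s: ' "$d"
    whois "$d" | grep -iE 'expir|valid|renewal' | head -n 1
done
```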
How can I continuously monitor a (small) file for changes?
E.g. when the file is updated (by an action from a web application), a script should be executed (if it is not already running).
Right now I do this every minute using cron, but that introduces a delay of up to one minute. I'd like the action to run immediately after the file changes.
Maybe I need to write some low-level process that runs in the background from the moment the server starts?
The reason I want to do this is to separate the web application from root actions (performed on demand, each time the file is updated).
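The kernel can deliver this event for you via inotify, so no polling loop is needed. A minimal sketch using inotifywait from the inotify-tools package (`apt-get install inotify-tools`); the file path and script name are placeholders:

```shell
# Block until the kernel reports that a writer closed the file, then run
# the action; the loop then goes back to waiting. Because inotifywait
# blocks, the action fires immediately instead of on the next cron tick.
while inotifywait -e close_write /path/to/watched-file; do
    ./action.sh
done
```

Started as root from an init script (or an `@reboot` crontab entry), this also gives the separation wanted here: the web app merely touches the file, and the root-owned watcher performs the privileged action.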
I have a hosting for my client on a shared server (website, domain and mailboxes) and I want to change the provider and move everything to another server. Moving the website is as easy as copying the content and changing the DNS settings.
What about the mailboxes?
I don't have root access to the server, but I want to keep all the data in the mailboxes. They are using both POP3 and IMAP.
What's the best flow in this case, regarding that:
- the clients have 10 computers using Thunderbird (they are non-tech-savvy; I'll need to update any settings myself, and I don't have remote access to those machines)
- they are also using webmail access (Roundcube)
- we want to keep all the data with minimum downtime (preferably the clients won't notice any change in their daily work)
Once I change the DNS to the new servers, the mailboxes will be empty. How do I carry over the data from the old mailboxes without having to worry about data loss?
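Since both servers speak IMAP and no root access is needed (only the mailbox credentials), one common approach is a server-to-server copy with imapsync. Host names and passwords below are placeholders:

```shell
# Run once BEFORE the DNS change to copy the bulk of the mail, then
# again right AFTER it to pick up messages delivered in between.
# imapsync only transfers messages missing on the destination, so the
# second run is fast.
imapsync \
    --host1 mail.oldprovider.example --user1 user@example.com --password1 'old-secret' \
    --host2 mail.newprovider.example --user2 user@example.com --password2 'new-secret'
```

Two things reduce the client-side pain: lower the DNS TTL a day or two in advance so the switch propagates quickly, and keep the same mail hostname (e.g. mail.example.com) on the new server so the Thunderbird account settings don't need touching at all.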
How would you set up an automated nightly backup of a whole disk (boot and data partitions) running Ubuntu Server?
I have a 1TB drive (current usage 200GB) and I want to have daily clone of this drive on another drive (the same capacity and model), on the same machine.
I was previously running dd to back up partitions. But now I'm looking for a bulletproof solution: a clone of the whole HDD that I can switch to in case of a crash.
RAID is not an option, because when something is broken or deleted on the first drive, the same happens on the other drive too (I know I should have RAID plus a backup solution).
Copying the whole 1TB would take some time, so I'm looking for a tool that can find the daily differences and update just those.
The other problem is that I need a clone of the whole disk (both partitions, ext2 and LVM, plus the boot record).
How would you set up this?
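A file-level incremental clone with rsync only touches what changed since the last run, which avoids copying the full 1TB nightly. A sketch, assuming the clone disk is /dev/sdb, partitioned like the source, with its root filesystem mounted at /mnt/clone (for the LVM partition the same idea applies to its mounted filesystem):

```shell
mount /dev/sdb1 /mnt/clone

# -aAXH preserves permissions, ACLs, xattrs and hard links; --delete
# removes files that are gone from the source. Pseudo-filesystems and
# the mount point itself are excluded.
rsync -aAXH --delete \
    --exclude="/proc/*" --exclude="/sys/*" --exclude="/dev/*" \
    --exclude="/run/*" --exclude="/tmp/*" --exclude="/mnt/*" \
    / /mnt/clone/

# The boot code in the MBR is tiny, so copy it wholesale (the first 446
# bytes only, so the clone disk keeps its own partition table):
dd if=/dev/sda of=/dev/sdb bs=446 count=1

umount /mnt/clone
```

Put in a nightly cron job, this keeps the second disk bootable-ish and current; the main caveat is that a file-level copy is not a block-level snapshot, so the LVM layout on the clone has to be set up once by hand first.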
I have web projects in /var/www/projects/some/long/path/strange-project-name
Now I want to type in terminal:
webs str{TAB}
It should autocomplete to webs strange-project-name
(based on ls /var/www/projects/some/long/path/), and after the command executes, pwd
should point to the project path. A kind of smart cd strange-project-name
with autocompletion.
How would you implement this feature? A smart alias? A function in .bashrc? A script?
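A plain alias can't take an argument and a script can't change the parent shell's directory, so this wants a shell function plus a completion function in ~/.bashrc. A minimal sketch (WEBS_ROOT is a placeholder for the projects directory):

```shell
# Point this at the directory that holds the projects.
WEBS_ROOT="/var/www/projects/some/long/path"

# "Smart cd": jump into a project by name.
webs() {
    cd "$WEBS_ROOT/$1" || return
}

# Tab completion: offer the directory names under WEBS_ROOT.
_webs_complete() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "$(ls "$WEBS_ROOT")" -- "$cur") )
}
complete -F _webs_complete webs
```

It has to be a function (not a script) precisely so that the `cd` affects the interactive shell; the `complete -F` registration is what makes `webs str{TAB}` expand.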
I have an Ubuntu Server up and running on LVM2 partition. My motherboard supports RAID 0 and 1.
I bought a new, second HDD, the same model as the system one. I want to set up RAID mirroring to keep my data safe.
How can I do this without reinstalling the whole system? Is software RAID better than hardware RAID from cheap motherboards?
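On cheap motherboards the "hardware" RAID is usually fakeRAID (BIOS-assisted software RAID), so Linux software RAID with mdadm is generally the better-supported choice. The usual no-reinstall path is to build a degraded RAID1 on the new disk first. A heavily abbreviated sketch; the device names are assumptions, so verify them with `fdisk -l` before running anything:

```shell
# Copy the partition layout from the system disk to the new disk:
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Create a RAID1 array with only the new disk; "missing" reserves the
# slot for the old disk:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

# ... create the filesystem / LVM PV on /dev/md0, copy the data across,
# ... update /etc/fstab and reinstall GRUB on both disks, then:

# Attach the original disk; mdadm resyncs the mirror in the background:
mdadm /dev/md0 --add /dev/sda1
```

This is a multi-step, data-destroying procedure if the wrong device is named, so it is worth rehearsing on a scratch machine or VM first.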
I have installed ircd-hybrid
on my Ubuntu Server (192.168.1.2, example.com).
We use #teamchannel
to communicate inside the team.
The question is: how can I send some short message from example.com
to #teamchannel
from the bash script? e.g. example.com: Alert! The server is rebooting now
Edit:
I have found a perl script which does exactly what I needed.
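For reference, a dependency-free sketch of the same idea in plain bash with netcat, since the IRC protocol for a one-shot notice is only a few lines (the nick and server address are assumptions; adjust to the ircd-hybrid setup):

```shell
# Build the raw IRC protocol lines for a one-shot message.
irc_payload() {  # $1 = channel, $2 = message
    printf 'NICK %s\r\n' "alertbot"
    printf 'USER alertbot 0 * :alert bot\r\n'
    sleep 2                                  # give the server time to register us
    printf 'PRIVMSG %s :%s\r\n' "$1" "$2"
    sleep 1
    printf 'QUIT\r\n'
}

# Pipe the payload to the IRC server.
irc_notify() {  # usage: irc_notify "#teamchannel" "Alert! The server is rebooting now"
    irc_payload "$1" "$2" | nc 192.168.1.2 6667
}
```

The sleeps are a crude substitute for waiting on the server's welcome reply, which is usually good enough for an alert bot.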
Sometimes, in desperation, to test whether my problem is a permission problem, I do:
sudo chmod -R 777 mydir/
In most cases it does not help, and now I have two problems ;)
Files inside mydir/
had different permissions and owners, and now I need to restore them to their original state.
Is there any smart way to restore the permissions recursively, other than creating a backup copy first? E.g.:
command_to_save_the_permissions_somewhere mydir/
chmod -R 777 mydir/
command_to_restore_the_permissions_from_somewhere mydir/
BTW, any tips on debugging permission issues?
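If the acl package is installed, `getfacl -R mydir/ > perms.acl` and `setfacl --restore=perms.acl` implement exactly this save/restore pair, ACLs included. A coreutils-only sketch of the same idea (GNU find/stat assumed):

```shell
# Record mode, owner and group for every entry, NUL-delimited so that
# paths with spaces survive the round trip.
save_perms() {    # usage: save_perms mydir/ > perms.txt
    find "$1" -printf '%m %u %g %p\0'
}

# Replay the recorded modes and ownership.
restore_perms() { # usage: restore_perms < perms.txt
    while IFS=' ' read -r -d '' mode user group path; do
        chmod "$mode" "$path"
        chown "$user:$group" "$path"
    done
}
```

For debugging permission issues without the 777 hammer: run the failing command as the affected user with `sudo -u thatuser`, and walk the path checking each directory's execute bit, since one non-traversable parent directory is the most common culprit.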
My command: git show --pretty="format:" --name-only
returns a list of files.
Then I use xargs to run a shell script on those files:
git show --pretty="format:" --name-only | xargs -i phpmd $dir/'{}' text codesize,unusedcode,naming
However, I'd like to run that xargs command only on files with the .php
extension. How do I filter out the unwanted files?
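One simple way is a grep stage between git and xargs, so that only paths ending in .php reach phpmd:

```shell
# grep '\.php$' passes through only paths whose last four characters
# are ".php"; everything else never reaches xargs/phpmd.
git show --pretty="format:" --name-only \
  | grep '\.php$' \
  | xargs -i phpmd "$dir"/'{}' text codesize,unusedcode,naming
```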
I use the following entry in ~/.bashrc
file to colorize the prompt and display current branch of git repository:
PS1='\[\e[1;32m\]\[\u@\h\]\[\e[m\] \[\w\]\[\e[1;1m\]\[$(__git_ps1 " (%s)")\] \[\e[1;1m\]\[$\] \[\e[m\]'
This works almost fine, except that when I use the bash history (pressing the up arrow key a few times), the command line becomes 'outdented' (only the first characters of the prompt remain untouched), and what becomes visible is:
usemmand
when my username is user
and the command is command
.
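This garbling is the classic symptom of misplaced \[ \] markers: bash treats everything between \[ and \] as zero-width, so wrapping the printable parts (\u@\h, \w, the git branch, $) makes it miscount the prompt length and redraw history lines at the wrong column. A corrected sketch where only the color escape sequences stay inside \[ \]:

```shell
# \[ ... \] wraps ONLY the non-printing escape sequences; the visible
# text (\u@\h, \w, the branch, \$) sits outside them, so bash computes
# the prompt width correctly when redrawing history lines.
PS1='\[\e[1;32m\]\u@\h\[\e[m\] \w\[\e[1;1m\]$(__git_ps1 " (%s)") \$ \[\e[m\]'
```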
While copying files over the LAN, my Ubuntu Server 10.04 machine hung and I had to reset the computer.
After reboot, I got: Grub error 17
, so I tried the rescue mode of the alternate CD,
but it reported that no partitions were found on the disk.
I used testdisk
to restore the partitions.
Using fdisk -l
shows the partitions now, but when the system boots up it does nothing but display:
L234:
When I plug the drive to another computer, it is not automatically mounted and I can't access the data.
What can I do now?
How do I restore GRUB and boot the system?
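Once the partition table is back, the usual repair is to reinstall GRUB from a live/rescue CD via chroot. A sketch; the device names (/dev/sda with the root filesystem on /dev/sda1) are assumptions, so check them against `fdisk -l` first:

```shell
# Mount the restored root filesystem and the pseudo-filesystems GRUB needs:
mount /dev/sda1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc

# Reinstall the boot loader into the MBR and regenerate its config:
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub
```

Note that "Error 17" comes from GRUB legacy, not GRUB 2; if the system still runs legacy GRUB, the equivalent is the `grub` shell with `root (hd0,0)` followed by `setup (hd0)`. With an LVM data partition, `vgchange -ay` in the rescue environment is also needed before its volumes can be mounted and checked.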
I have installed Hudson using apt-get, and the Hudson server is available on example.com:8080
.
For example.com
I use standard port *:80 and some virtual hosts set up this way:
# /etc/apache2/sites-enabled/subdomain.example.com
<Virtualhost *:80>
ServerName subdomain.example.com
...
</Virtualhost>
Here is info about Hudson process:
/usr/bin/daemon --name=hudson --inherit --env=HUDSON_HOME=/var/lib/hudson --output=/var/log/hudson/hudson.log --pidfile=/var/run/hudson/hudson.pid -- /usr/bin/java -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war
987 ? Sl 1:08 /usr/bin/java -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war
How should I forward:
http://example.com:8080
to:
http://hudson.example.com
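Since the other sites are already name-based Apache vhosts on port 80, the usual approach is one more vhost for hudson.example.com that reverse-proxies to the local port 8080. A sketch in the same style as the existing vhost files; it needs a DNS record for hudson.example.com and the proxy modules enabled first (`a2enmod proxy proxy_http`):

```
# /etc/apache2/sites-enabled/hudson.example.com
<VirtualHost *:80>
    ServerName hudson.example.com

    # Forward everything to the Hudson daemon listening on 8080 and
    # rewrite its redirects back to the public hostname:
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```

Hudson then stays bound to localhost:8080 while users only ever see http://hudson.example.com.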
I'm trying to set up password-less login with ssh on Ubuntu Server, but I keep getting:
Agent admitted failure to sign using the key
and a password prompt.
I have generated new RSA keys. Before the system reboot it worked just fine.
All the links lead me to this bug, but nothing works. The SSH agent is still not running.
How can I fix this? Maybe the key files need specific permissions?
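"Agent admitted failure to sign using the key" usually means the agent is running but does not have the (new) key loaded, or no agent survived the reboot at all. A sketch of the usual checks, in order:

```shell
# 1) The agent may simply not know about the newly generated key yet:
ssh-add ~/.ssh/id_rsa
ssh-add -l                      # list the keys the agent currently holds

# 2) If ssh-add reports it cannot connect, no agent is running; start one:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# 3) ssh silently ignores key files with permissions it considers unsafe:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub
```

Remember also that the new public key has to be re-appended to ~/.ssh/authorized_keys on the server after regenerating the key pair.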