I had to convert a .pfx certificate into a .pem certificate. However, following a bug I am working on, I am wondering whether the .pem's passphrase has been set properly.
How can I check this easily from a terminal/command line?
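For what it is worth, this is the kind of check I was thinking of, assuming the .pem contains an RSA private key (the file name below is just a placeholder):
# Attempt to parse the private key; openssl should prompt for the passphrase
# and fail if the wrong one is supplied. 'mykey.pem' is a placeholder.
openssl rsa -in mykey.pem -noout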
I have recently set up a new URL, http://www.mytechnotes.biz, for a blog I was maintaining on the subdomain http://technotes.tostaky.biz.
The redirection works in the following cases:
technotes.tostaky.biz -> www.mytechnotes.biz
www.technotes.tostaky.biz -> www.mytechnotes.biz
But it does not work in this case:
http://technotes.tostaky.biz/2012/11/introduction-to-css-concepts.html
Yet, the following page exists:
www.mytechnotes.biz/2012/11/introduction-to-css-concepts.html
The content of my .htaccess file is:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^technotes\.tostaky\.biz$ [OR]
RewriteCond %{HTTP_HOST} ^www\.technotes\.tostaky\.biz$
RewriteRule ^/?$ "http\:\/\/www\.mytechnotes\.biz\/" [R=301,L]
I am not a sysadmin. I am relying on the cPanel configuration of my host, but I can't resolve this issue. Does anyone know how to solve it? Thanks!
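In case it helps to show what I am aiming for, here is the kind of rule I suspect might be needed (pure guesswork on my part, untested), where the matched path would be carried over to the new domain:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?technotes\.tostaky\.biz$ [NC]
# Append the requested path ($1) to the new domain
RewriteRule ^(.*)$ http://www.mytechnotes.biz/$1 [R=301,L]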
I have just performed a fresh installation of Tomcat 7 (apache-tomcat-7.0.30) using the 32-bit/64-bit Windows Service Installer available here, on my local Windows 7 PC.
Yet, when I go into Services to start it manually, it starts and then stops immediately after displaying an error message.
I have noticed that each time I try, I get the following lines in my tomcat-7-stdout log:
2012-09-16 18:41:12 Commons Daemon procrun stdout initialized
Error occurred during initialization of VM
java/lang/NoClassDefFoundError: java/lang/Object
Does anyone know what is happening and how to solve it?
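For what it is worth, here is how I have been checking my local Java installation from the command prompt, in case a 32-bit/64-bit mismatch between the JVM and the service is relevant:
REM Show which Java is on the PATH and which JAVA_HOME the service might pick up
java -version
echo %JAVA_HOME%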
I am running a PHP script which inserts lines in a database every minute, using a cron job.
My provider says:
An email will be sent to this address ONLY if your cron produces output.
If no output is generated, then no email will be sent.
I only issue echo statements in my PHP script when there is a query error. But I don't have errors, and I can see the lines appearing in my DB.
Yet, I still get emails with (nearly) empty content even if I don't have errors:
Content-type: text/html
How can I prevent this? What is considered as output when running a PHP script cron job?
UPDATE
In order to get rid of the header, see: https://stackoverflow.com/questions/10723546/how-to-get-rid-of-content-type-text-html-in-php-script-output
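For completeness, this is the kind of crontab redirection I am now considering (the schedule and script path below are placeholders for my real ones), which I believe would suppress the email entirely by discarding all output:
# m h dom mon dow command
* * * * * php /path/to/myscript.php > /dev/null 2>&1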
I am trying to use symbolic links. I did some reading and found the following commands:
Creation -> ln -s {/path/to/file-name} {link-name}
Update -> ln -sfn {/path/to/file-name} {link-name}
Deletion -> rm {link-name}
Creations and deletions work fine, but updates do not. After running the update command, the symlink becomes invalid.
I have read here and there that it is not possible to update/overwrite a symlink, so there is contradictory information on the net. Who is right? If a symlink can be updated/overwritten, how can I achieve this?
Update
Here is my directory structure:
~/scripts/test/
~/scripts/test/remote_loc/
~/scripts/test/remote_loc/site1/
~/scripts/test/remote_loc/site1/stuff1.txt
~/scripts/test/remote_loc/site2/
~/scripts/test/remote_loc/site2/stuff2.txt
~/scripts/test/remote_loc/site3/
~/scripts/test/remote_loc/site3/stuff3.txt
From ~/scripts/test/, when I perform:
ln -s /remote_loc/site1 test_link
a test_link is created, and I can ls -l it, but it seems broken (contrary to what I said above in my question).
How can I create a link that spans multiple directory levels?
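For completeness, this is the variant I was planning to try next, on the assumption that my /remote_loc/site1 path is being interpreted from the filesystem root rather than from my test directory:
# Use the full path from my home directory instead of a root-relative one
ln -sfn ~/scripts/test/remote_loc/site1 test_link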
I have large static content that I have to deliver via a Linux-based webserver. It is a set of over one million small gzip files. 90% of the files are less than 1K and the remaining files are at most 50K. In the future, this could grow to over 10 million gzip files.
Should I put this content in a file structure or should I consider putting all this content in a database? If it is in a file structure, can I use large directories or should I consider smaller directories?
I was told a file structure would be faster for delivery, but on the other hand, I know that the files will take a lot of space on disk, since each file will occupy at least one block of more than 1K.
What is the best strategy regarding delivery performance?
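To make the question more concrete, here is the kind of bucketed layout I have in mind, purely as an illustration (the two-character prefixes below are hypothetical):
# Spread files over two directory levels based on the first characters
# of their names, e.g. content/ab/cd/abcd1234.gz
mkdir -p content/ab/cd
mv abcd1234.gz content/ab/cd/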
UPDATE
For the record, I have performed a test under Windows 7 with half a million files.
As part of re-installing NetBeans 7.0.1, Tomcat 7.0.14 was installed on my PC too. I created a manager role as well. I can access http://localhost:8084/manager/html successfully.
<role rolename="manager-gui" />
<user username="tomcat" password="tomcat" roles="manager-gui"/>
However, when I try to access the documentation, for example http://localhost:8084/docs/setup.html, I get an HTTP 404 resource not available. I checked in the installation directory and /webapps/docs/setup.html does exist.
What am I doing wrong? What am I missing? Thanks.
EDIT
Here is the Host content of server.xml:
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log." suffix=".txt"
pattern="%h %l %u %t "%r" %s %b" resolveHosts="false"/>
</Host>
I could not find any catalina.out file in the log directory (or elsewhere).
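For reference, this is how I have been looking around on disk (the installation path below is my guess at the installer's default; mine may differ):
REM Check that the docs webapp and the log files are where I expect them
dir "C:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\docs"
dir "C:\Program Files\Apache Software Foundation\Tomcat 7.0\logs"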
I am new to webmin. I have managed to install it on a node. I explored the PostgreSQL module and installed it too. The PostgreSQL version is 8.4.8 (released on 2011-04-18, so fairly recent).
I am interested in using PostgreSQL 9.1. My questions are:
Is it possible to install 9.1 with webmin yet? If yes, how?
If the answer to 1 is no, should I just be patient, given that 9.1 was released very recently, and expect it to be integrated into webmin soon?
Assuming that I start working with 8.4.8, will PostgreSQL or Webmin help with the migration from 8.4.8 to 9.1?
I could not find definitive answers with my googling. Thanks.
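For context, the migration path I keep reading about (independently of webmin, and assuming the old and new clusters can run side by side on different ports) is a dump and restore along these lines:
# Dump everything from the 8.4 cluster (port 5432 assumed) and load it
# into the 9.1 cluster (port 5433 assumed).
pg_dumpall -p 5432 > all.sql
psql -p 5433 -d postgres -f all.sql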
One technique to protect against DDoS attacks is to monitor the number of requests per second coming from a given IP address. Of course, IP addresses can be faked, but let's assume this is not an issue here.
A web application installed on Tomcat (for example) can be configured to use secure HTTP connections only (i.e., HTTPS). I am not a sysadmin expert, but I believe that in case of a DDoS attack, the high number of HTTPS connection attempts could create 100% CPU spikes.
My questions are:
Do DDoS attacks on HTTPS create long 100% CPU spikes?
Is it possible to implement a software filter to monitor requests per second before the SSL negotiation is started, in order to avoid long 100% CPU spikes?
If the answer to 2 is yes, can this be integrated into Tomcat? If yes, how? Or is there a better solution out there?
Thanks.
EDIT
If the answer to 2 is yes (but not in Tomcat), what solutions are available out there?
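To illustrate what I mean by a filter before the SSL negotiation, here is an iptables sketch of the idea (the thresholds are arbitrary examples, and I have no idea whether this is the right approach), acting at the TCP level before any handshake:
# Track new connections to port 443 per source IP and drop an IP that
# opens more than 20 new connections within one second.
iptables -A INPUT -p tcp --dport 443 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 443 -m state --state NEW -m recent --update --seconds 1 --hitcount 20 -j DROP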
This question follows a mod_jk configuration question asked earlier. I have managed to have http://mywebsite.com/MyTomcatApp/ go to Tomcat while having http://mywebsite.com/ go to Apache.
However, my requests to http://mywebsite.com:8080/ (to access Tomcat's manager, for example) no longer work. How can/should I update the 000-default file mentioned in my previous question to keep access to Tomcat's manager under this configuration?
P.S.: Yes, I am a newbie/not a sysadmin.
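For what it is worth, this is how I have been checking whether Tomcat is still listening on 8080 at all (in case the problem is not in 000-default):
# List listening TCP sockets and the owning process
sudo netstat -tlnp | grep 8080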
I am trying to set up a proper configuration to have Apache serve some static HTML pages and pass other requests for dynamic pages to Tomcat. So far, I have installed Apache2 and Tomcat6 successfully.
I am trying to follow the instructions available here. I am stuck at step 4. There is a 000-default file in my /etc/apache2/sites-enabled directory. Its content is:
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>
ErrorLog /var/log/apache2/error.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
CustomLog /var/log/apache2/access.log combined
Alias /doc/ "/usr/share/doc/"
<Directory "/usr/share/doc/">
Options Indexes MultiViews FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all
Allow from 127.0.0.0/255.0.0.0 ::1/128
</Directory>
</VirtualHost>
The instructions I am following say:
In your /etc/apache2/sites-enabled/ dir find the vhost you want to use tomcat and edit it, at the end of the vhost declaration put:
#Everything under root goes to tomcat
JkMount /* worker1
#html files should be served by apache2
JkUnMount /*.html worker1
I would like Tomcat to handle requests to http://mywebsite.com/MyTomcatApp1/ or http://mywebsite.com/MyTomcatApp2/ (dynamic content) and Apache to handle all other requests to http://mywebsite.com/ (static content).
How should I configure 000-default? I don't really understand the logic of JkMount and JkUnMount... Thanks.
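In case it clarifies what I am after, here is my untested guess at what might go inside the VirtualHost, mapping only the two applications to worker1 and leaving everything else to Apache:
# Send only the two Tomcat applications to worker1
JkMount /MyTomcatApp1/* worker1
JkMount /MyTomcatApp2/* worker1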
Say I have a registered URL, mywebsite.com, pointing to my server with a public IP address.
I want to run both Tomcat and Apache to serve pages (i.e., some static pages on Apache and some dynamic pages on Tomcat, like JSPs, etc.).
For the sake of simplicity, let's assume that apache is listening on 80 and tomcat on 8080.
I heard about mod_proxy. Is it possible to have requests to mywebsite.com go to Apache and requests to mywebsite.com/loggedin go to Tomcat? If yes, how should this be configured, and where? Thanks.
Sorry if this question makes no sense (I am no expert here), but I understand that Tomcat listens on port 8080 and that URLs are usually addressed to port 80. Is there a way to tell DNS that URLs should point to 8080? Or how should I solve this issue?
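To make the idea concrete, here is a rough sketch of the kind of mod_proxy configuration I am imagining (the /loggedin context path is hypothetical, and I am not sure this is how it should look):
<VirtualHost *:80>
    ServerName mywebsite.com
    # Hand the hypothetical /loggedin context over to Tomcat on 8080
    ProxyPass /loggedin http://localhost:8080/loggedin
    ProxyPassReverse /loggedin http://localhost:8080/loggedin
</VirtualHost>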
I had to reformat my PC from scratch. I restored my backup data, including a directory containing SVN repositories and a directory containing check-outs of those repositories. These check-outs contain work that has not been checked in yet.
When I try to check in, SVN dumps error messages. I could trace it to the fact that I have changed the location of these directories from:
C:/Users/J%C3%A9r%C3%B4me/Documents/Java/Repositories/...
C:/Users/J%C3%A9r%C3%B4me/Documents/Java/CheckOut/...
to:
C:/Users/JVerstry/Documents/Java/Repositories/...
C:/Users/JVerstry/Documents/Java/CheckOut/...
Yes, I chose a different user account name when re-installing my machine. And yes, SVN does not manage to land back on its feet.
How can I solve this issue? I checked the content of files located in the hidden .svn directories and it seems they contain lines such as:
file:///C:/Users/J%C3%A9r%C3%B4me/Documents/Java/Repositories/...
I was thinking maybe I could use a tool to scan those files and replace J%C3%A9r%C3%B4me with JVerstry. Is such a tool available for Windows 7? And is this a good idea?
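For reference, the command I have come across and am wondering about is svn switch --relocate, run from inside a working copy ('MyRepo' below is a placeholder for one of my actual repository names):
svn switch --relocate file:///C:/Users/J%C3%A9r%C3%B4me/Documents/Java/Repositories/MyRepo file:///C:/Users/JVerstry/Documents/Java/Repositories/MyRepo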
EDIT
It turns out that my issues are deeper than described above. Some of my .svn directories seem corrupted, some directories seem to be locked and cannot be unlocked (SVN dumps error messages...), and some directories are missing in the /db directory of every repository.
Some vendors provide commercial licenses of Linux, including support. What is the real benefit of these? Is it worth the money?
Considering the incredible amount of information and support available on the net, and considering that Ubuntu server is available for free (for example), why should one spend money on RHEL (for example)? What is the added value?
Shouldn't one spend money on a good system administrator instead of a commercial version of Linux? What is the tradeoff? Is it worth it?
I have a large set of data (100+ GB) which can be stored in files. Most of the files would be in the 5k-50k range (80%), then 50k-500k (15%) and >500k (5%). The maximum expected size of a file is 50 MB. If necessary, large files can be split into smaller pieces. Files can be organized in a directory structure too.
If some data must be modified, my application makes a copy, modifies it and, if successful, flags it as the latest version. Then the old version is removed. It is crash safe (so to speak).
I need to implement a failover system to keep this data available. One solution is to use a Master-Slave database system, but these are fragile and force a dependency on the database technology.
I am no sysadmin, but I have read about the rsync command. It looks very interesting. I am wondering whether setting up some failover nodes and using rsync from my master is a reasonable option. Has anyone tried this before successfully?
i) If yes, should I split my large files? Is rsync smart/efficient at detecting which files to copy/delete? Should I implement a specific directory structure to make this system efficient?
ii) If the master crashes and a slave takes over for an hour (for example), is making the master up-to-date again as simple as running rsync the other way round (slave to master)?
iii) Bonus question: Is there any possibility of implementing multi-master systems with rsync? Or is only master slave possible?
I am looking for advice, tips, experience, etc... Thanks !!!
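To make the question concrete, this is roughly what I picture running from the master (the host name and paths below are placeholders):
# Mirror the data directory to a failover node, deleting files on the
# destination that no longer exist on the master.
rsync -az --delete /data/files/ backupuser@failover-node:/data/files/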
I would like to forward the emails received by root to an external email address on an Ubuntu node. I have seen this post, but it does not explain much about the procedure to follow. There are some other posts available online, but they are often incomplete or unclear.
Does anyone have a complete procedure to share? Should a mail server be installed on my node? If yes, which one? What are the configuration steps on the node? I am working strictly with the command line (the node is a server).
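For context, the kind of configuration I keep seeing mentioned, assuming a local MTA such as Postfix is already installed, is an /etc/aliases entry (the address below is a placeholder):
# /etc/aliases
root: myaddress@example.com
followed, if I understand correctly, by running newaliases to rebuild the alias database.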
I am trying to learn about securing a Linux box (I am using Ubuntu). Auditd is recommended for monitoring activities on the node. I have managed to install it, but I can't find much information about the proper set-up to secure my node.
How should I set up auditd to make my node more secure? What should I monitor, and why? I am looking for set-up examples and recommendations from experienced administrators.
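To give an idea of what I have pieced together so far, here are a few watch rules of the kind I have seen in examples (I am not sure they are the right choice, which is exactly my question):
# /etc/audit/audit.rules
# Watch changes to the user and group databases
-w /etc/passwd -p wa -k identity
-w /etc/group -p wa -k identity
-w /etc/shadow -p wa -k identity
# Watch changes to sudo configuration
-w /etc/sudoers -p wa -k scope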
Thanks!
I am trying to secure a Linux (Ubuntu) box and I am no expert. I am following guidance available on the net.
Section 15.2 of that guidance discusses world-writable files. The following command
find / -type f -perm -o+w -exec ls -l {} \;
returns a long list of files all located under /proc.
My question is: is this a good or a bad thing regarding security? Should I do something about these files?
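If it turns out that /proc can safely be ignored, the variant I was planning to re-run (skipping the /proc pseudo-filesystem, on the assumption that its entries are virtual rather than real files on disk) is:
find / -path /proc -prune -o -type f -perm -o+w -exec ls -l {} \;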
Thanks!
I understand that URL patterns can be used to have some requests handled under HTTP and others under HTTPS.
Let's imagine a web application with two servlets, each accessed with a different URL pattern (for example .../myapp/servlet1 and .../myapp/servlet2). How can I have the first one handled over HTTP and the second over HTTPS?
Can you provide a configuration example?
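To show what I have pieced together so far, here is my rough guess at a web.xml fragment (the url-pattern matches the hypothetical servlet2 from my example; I am not sure this is complete):
<!-- Force HTTPS for the second servlet only; servlet1 stays on plain HTTP
     because no constraint applies to it. -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Servlet2</web-resource-name>
        <url-pattern>/servlet2</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>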
Thanks!