When a client is physically on the network, their IP address is 10.1.0.10. When they remote into the VPN server, they are on the 10.250.0.0 VPN network, so their IP address may be something like 10.250.0.20. Is it possible to configure a route such that a client is always accessible on 10.1.0.10, regardless of whether they are physically on the network or remotely connected to the VPN?
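For context, the rough direction I've been poking at (assuming OpenVPN; the client profile name and the gateway/VPN-server addresses below are placeholders) is to pin the client's VPN address and then route its LAN address toward the VPN server while it is remote:
# /etc/openvpn/ccd/laptop-client -- always hand this client the same VPN address
# (assumes "topology subnet")
ifconfig-push 10.250.0.20 255.255.255.0
# On the LAN gateway, while the client is remote
# (10.1.0.1 is a placeholder for the VPN server's LAN address):
ip route add 10.1.0.10/32 via 10.1.0.1
The part I can't figure out is how that route would be added and removed automatically as the client switches between being on the LAN and being on the VPN.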
Let's say you wanted to transfer a 1 GB file from a machine at site 1 to a machine at site 2. You can connect the two machines either via a VPN (L2TP) or via basic port forwarding. Which connection method would move the file faster (or would it be a tie?), and why?
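(One way I figured I could settle it empirically is with iperf; the addresses below are placeholders:)
# On the receiving machine at site 2:
iperf -s
# From the sending machine at site 1, once over the L2TP tunnel
# and once against the port-forwarded public address:
iperf -c 10.2.0.50
iperf -c site2.example.com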
I have two Ubiquiti Dream Machines (just upgraded from Netgear and I am blown away by the difference in quality). I have one of the Dream Machine routers at home and one at my office, each with its own Internet service provider. My goals are to:
- Create a single LAN (VLAN?) to which all of the devices, both home and office, belong, so that I can access any machine using its LAN address from either location. E.g., having a server at work (10.0.1.10) accessible from my PC at home (10.0.0.10). I would like to do this without using a VPN, just having it work.
- When traveling, be able to connect my laptop to my LAN (via VPN?) and access either the server at work (10.0.1.10) or the PC at home (10.0.0.10).
It seems like this should be possible as it is probably a common enterprise requirement, but I have never done it before. I Googled but have not found any examples of how to configure this. One core issue may also be how DHCP would be handled, especially when a new device joins the network.
So my questions are:
- Is this configuration possible?
- Can you point me in the right direction as far as terminology and what I would need to do to set it up?
- Bonus: can I add more routers and extend my LAN to 3+ locations?
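For what it's worth, my mental model of the end state is roughly this (subnets from above; the command is just how I'd verify it from a home machine):
# Home router:   owns 10.0.0.0/24, routes 10.0.1.0/24 over a site-to-site tunnel
# Office router: owns 10.0.1.0/24, routes 10.0.0.0/24 over the same tunnel
# From a home machine, traffic to the work server should then go via the tunnel:
ip route get 10.0.1.10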
Thanks!
I've got some code connecting to a MySQL database and running a simple query. I have instrumented everything and my timer is showing that the first query returns after 12 milliseconds. The code does this:
make database connection -> run query -> parse results -> return results
Maybe I'm insane but 12 milliseconds feels like an eternity in the world of computing. Is this the standard amount of time it takes to connect to and run a query on a vanilla install of MySQL?
If not, I need to reevaluate the library I am using to make connections to MySQL.
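To narrow it down, I'm planning to time the connect and the query separately; a minimal sketch with mysqli (credentials are placeholders):
<?php
// Rough timing sketch: separate connection setup from query execution
$t0 = microtime(true);
$db = new mysqli('127.0.0.1', 'user', 'password', 'dbname');
$t1 = microtime(true);
$result = $db->query('SELECT 1');
$t2 = microtime(true);
printf("connect: %.1f ms, query: %.1f ms\n", ($t1 - $t0) * 1000, ($t2 - $t1) * 1000);
If most of the 12 milliseconds turns out to be in the connect step rather than the query, then reusing connections would matter more than the library's query path.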
While developing I'm getting blank pages in my browser whenever I create a fatal error in PHP with a typo or just my bad programming ;). It's super annoying for me to have to view the raw nginx error log file to see the fatal errors and find the line numbers where they are. I can't seem to find how to make nginx display PHP fatal errors in the browser. Here is the relevant part of my nginx config:
location @fpm {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_NAME index.php;
fastcgi_param SCRIPT_FILENAME $document_root/index.php;
fastcgi_param PATH_INFO $path_info;
}
Here's an example error that shows up in my error log and then results in a blank browser page:
2014/01/04 14:53:52 [error] 20082#0: *403 FastCGI sent in stderr:
"PHP message: PHP Fatal error: Cannot redeclare class ClassName in FilePath on line 356"
while reading response header from upstream, client: 192.168.1.10,
server: servername, request: "GET URLPATH HTTP/1.1",
upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "host",
referrer: "referer"
I can also post my phpinfo output, PHP-FPM conf, PHP-FPM pool conf, and php.ini if that helps.
I would love if anyone could shed some light on what I could do to get these errors to show up!
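For what it's worth, the direction I keep reading about is a development-only PHP setting rather than an nginx one; something like this in php.ini (or the FPM pool), though I haven't confirmed it against this setup:
; development only: print errors in the response instead of serving a blank page
display_errors = On
display_startup_errors = On
error_reporting = E_ALL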
I'm a bit worried because it's been a couple of days and my access log file is already at 250 MB+. Does nginx roll log files over and delete old ones on its own? I'd like to keep the total size of the log files to 1 GB or less.
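If it doesn't rotate them on its own, I'm assuming the usual answer is a logrotate rule; a sketch of the kind of thing I mean (path, size, and count are guesses aimed at staying under 1 GB):
# /etc/logrotate.d/nginx (sketch)
/var/log/nginx/*.log {
        size 100M
        rotate 9
        compress
        missingok
        notifempty
        postrotate
                [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
        endscript
}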
I have a web server at yougetsignal.com that is getting around 25K visits a day. Sometimes the site feels a bit sluggish. I am hosting it on nginx with php5-fpm. Is there a way for me to see a list of all of the long-running requests coming to the site?
I'd love to have a real-time list of all of the active requests that PHP is handling and how long they have been running. Kind of like top, but just for the web server. This would let me know how long requests are taking and which script is the culprit.
Anyone have any ideas on how I can do this?
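The closest thing I've found so far is PHP-FPM's own status page and slow log; a sketch of what I think the pool conf needs (the path and threshold are placeholders), with the status page (e.g. /fpm-status?full) then listing each active request and how long it has been running:
; PHP-FPM pool conf sketch
pm.status_path = /fpm-status
slowlog = /var/log/php5-fpm.slow.log
request_slowlog_timeout = 5s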
I have a script that is being accessed via a URL like this:
/directory/file.php/methodName
I need nginx to handle this specific route and send it to PHP. Right now my config just catches anything ending in .php and sends it to PHP. How can I tweak it to handle the case above as well?
Here is my nginx block:
# Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# With php5-cgi alone:
#fastcgi_pass 127.0.0.1:9000;
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
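For reference, this is roughly the shape of the location block I think I need, based on the nginx PATH_INFO examples I've seen (not yet tested against my setup):
# Match ".php" even when extra path info follows, e.g. /directory/file.php/methodName
location ~ \.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_path_info;
}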
I'd like to build a box that I configure as the default gateway for client computers that:
- Captures all of their traffic
- Allows me to easily review all of their traffic
Anyone have some ideas on the best operating system and programs to use? Wireshark was the first that came to mind.
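As a baseline I was picturing a plain Linux gateway running tcpdump and then opening the capture in Wireshark afterwards; a sketch (interface name and path are guesses):
# On the gateway box, capture everything crossing the LAN-facing interface
tcpdump -i eth1 -s 0 -w /var/captures/clients.pcap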
Say I have this URL to an image:
www.domain.com/image.jpg
And I want to have image.jpg actually be served from another server entirely (different machine and IP), at
static.domain.com/image.jpg
Is it possible to configure Apache in such a way that any requests to www.domain.com/image.jpg are actually completely served by static.domain.com/image.jpg, but to the user it looks like it is coming from www.domain.com/image.jpg?
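What I'm imagining is something like a per-path reverse proxy, if Apache supports that; a sketch (assuming mod_proxy and mod_proxy_http are enabled):
ProxyPass        /image.jpg http://static.domain.com/image.jpg
ProxyPassReverse /image.jpg http://static.domain.com/image.jpg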
Will apt-get dist-upgrade do the exact same thing as running do-release-upgrade?
I'm currently running Ubuntu Server 11.04.
I'm getting a lot of traffic that is crushing my tiny server. Is there something I can install that will allow me to examine my Apache traffic in real time? Ideally a web interface. I'd like to see what the requests are for and which ones are taking the most resources.
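The closest built-in thing I know of is mod_status; a sketch of the kind of config I mean (the allowed address range is a placeholder):
# Apache config sketch: a /server-status page listing current requests
ExtendedStatus On
<Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 192.168.1.0/24
</Location>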
So I am really used to LAMP stacks and have an Apache and PHP setup on my Ubuntu server with about 30 PHP sites. I want to throw a new project up on the server that is built on Ruby on Rails. Is it possible to continue to use Apache and have the RoR project sit right next to all of the other projects in /var/www/?
Also, what's the best way to get RoR installed on a default Ubuntu server install?
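The approach I keep seeing mentioned is Phusion Passenger, which would let the Rails app live in its own vhost next to the PHP sites; a rough sketch (the hostname and paths are placeholders):
# Apache vhost sketch for the Rails app (assumes Phusion Passenger is installed)
<VirtualHost *:80>
        ServerName rails.mydomain.com
        DocumentRoot /var/www/railsapp/public
        <Directory /var/www/railsapp/public>
                Options -MultiViews
                AllowOverride all
        </Directory>
</VirtualHost>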
Thanks!
My company has about 25 cameras around our property and we need a server to store any motion events from each of the cameras. We are currently using software called "Digital Video Witness," which is fairly unimpressive. We are looking for good server software that can run all day long and easily output segments of video to a thumb drive or DVD.
Can anyone make recommendations on good software we can use, and possibly share your experience with your server hardware as well?
MySQL was not starting. I backed up /var/lib/mysql/*. I uninstalled MySQL and reinstalled it. It started again. I copied everything back to /var/lib/mysql. I restarted the service. I logged in and could see all of my tables. When I try to view their contents, I get:
Table 'tablename' doesn't exist
Is there something I need to do to make MySQL recognize the files I copied back into the /var/lib/mysql/ folder?
Any help is appreciated, I'm dying here.
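For the record, the two things I'm going to double-check are that the InnoDB files (ibdata1 and ib_logfile*) came back along with the per-table files, and that everything is owned by the mysql user:
# Restore ownership of the copied-back files, then restart (sketch)
chown -R mysql:mysql /var/lib/mysql
service mysql restart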
I have a client who uses Outlook to sync to an Exchange server. His "Sent" folder was renamed to the name of one of his contacts. I know, it is really weird. Unfortunately, Outlook does not give you the ability to rename the Sent folder. Does anyone have a clue as to how I can fix this?
Does anyone know if I can run a Linux-based VPN server on my Ubuntu server and connect to it using Windows 7’s native VPN connection wizard? If this is possible, which VPN server should I use?
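From the little reading I've done so far, Windows 7's built-in client speaks PPTP and L2TP/IPsec, so something like pptpd might be the simplest fit (just a guess, and I know PPTP has security caveats):
apt-get install pptpd
# then define the address range in /etc/pptpd.conf and user logins in /etc/ppp/chap-secrets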
Thanks!
I am on my Windows 7 laptop and want to connect to my home network and browse to \\192.168.1.10.
I have an Ubuntu Server running on my home network on 192.168.1.11.
So far I have done this on my Ubuntu Server:
apt-get install openvpn
What do I need to do now to allow my Windows 7 laptop to connect to the OpenVPN server?
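My understanding so far (please correct me) is that I need a basic server config plus keys, and that the laptop will need the OpenVPN client installed, since Windows 7's native VPN wizard doesn't speak OpenVPN. A rough sketch of /etc/openvpn/server.conf (key/cert paths assume easy-rsa output):
port 1194
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh1024.pem
# VPN address pool (assumption)
server 10.8.0.0 255.255.255.0
# Let VPN clients reach the home LAN
push "route 192.168.1.0 255.255.255.0"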
I was dreaming about the DNS system and how awesome it is. In my dream I realized that my cell phone had a somewhat similar system to DNS built in – when I browse to a contact and select someone to call, the phone automatically resolves the name to a phone number.
Then the idea came to me:
What if I could add a “T” (telephone) record to any of my existing domain names? So if someone tries to call kirkouimet.com, it resolves to my personal cell phone number? The business implications are really cool IMO: what if I could just call pizzahut.com? If they were smart they would have their system geolocate me, find the nearest Pizza Hut, and route the call there.
- How hard would it be to extend the DNS functionality to include this?
- Is the idea good enough to pursue beyond just thinking about it?
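As a poor man's version while no such record type exists, I suppose a TXT record could carry the number (the number below is a placeholder), though nothing would resolve it automatically:
; zone file sketch
kirkouimet.com.    IN    TXT    "tel:+1-555-555-0100"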
I have three separate web servers with three different internal IP addresses on a network with only one public IP address. Each web server has security restrictions such that I cannot just run all of my websites on a single web server. All are running Apache.
I want to set up subdomains that allow me to access each of the different web servers remotely, all on port 80. E.g.,
site1.domain.com
site2.domain.com
site3.domain.com
Where all three of those domains resolve to my single public IP address, but some type of service examines the request to see which subdomain is being requested and pulls the data from the appropriate server.
Is this type of thing (1) possible and (2) easy to implement? I'm running Ubuntu Server 9.04.
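From the terminology I've picked up so far, this sounds like name-based virtual hosts plus a reverse proxy on whichever box receives port 80 from the router; a sketch with Apache mod_proxy (the internal addresses are made up, and mod_proxy/mod_proxy_http need to be enabled):
NameVirtualHost *:80
<VirtualHost *:80>
        ServerName site1.domain.com
        ProxyPass / http://192.168.0.11/
        ProxyPassReverse / http://192.168.0.11/
</VirtualHost>
<VirtualHost *:80>
        ServerName site2.domain.com
        ProxyPass / http://192.168.0.12/
        ProxyPassReverse / http://192.168.0.12/
</VirtualHost>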