I'm working with an iSeries Power 7+ server, and the operating system is V7R2. Is this a version of AIX or a completely different OS?
ProfessionalAmateur's questions
I'm not sure what happened to my nginx install. Suddenly all page requests are being redirected to the 403 page.
Yesterday I tried to add a user agent to block and restarted the service; from that point everything was being sent to 403. I backed out that change and restarted nginx, and everything is still being directed to the 403 page. Even if I remove the $http_user_agent and $http_referer if statements, everything is still sent to 403.
I have even restored the entire nginx folder from a backup, and all my page requests continue to be directed to the 403 page...
Not sure how to troubleshoot this; the conf files come back clean. Is there a trace I can do for nginx when requests come in?
[root@soupcan nginx]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
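The only tracing idea I've had so far is turning up the error log verbosity (a sketch; as I understand it, the debug level only gives full detail if nginx was built with --with-debug):
error_log /var/log/nginx/website1/error.log debug;
Then tail the log while making a request:
tail -f /var/log/nginx/website1/error.log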
Here is the website conf:
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    access_log  /var/log/nginx/website1/access.log  main;
    error_log   /var/log/nginx/website1/error.log;
    root        /srv/www/website1;

    ## Block http user agent - morpheus fucking scanner ##
    if ($http_user_agent ~* "morfeus fucking scanner|ZmEu|Morfeus strikes again.|OpenWebSpider v0.1.4 (http://www.openwebspider.org/)") {
        return 403;
    }

    if ($http_referer ~* (semalt.com|WeSEE)) {
        return 403;
    }

    ## Only allow GET and HEAD request methods. By default Nginx blocks
    ## all request types other than GET and HEAD for static content.
    if ($request_method !~ ^(GET|HEAD)$ ) {
        return 405;
    }

    location / {
        index  index.html index.htm index.php;
        ssi    on;
    }

    location ~ \.php {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        #fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/website1/$fastcgi_script_name;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # Redirect server error pages to the static page
    error_page 403 404 /error403.html;
    location = /error403.html {
        root /usr/share/nginx/html;
    }
}
nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid       /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;

    gzip on;
    gzip_disable "msie6";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied any;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css application/json application/x-javascript
               text/xml application/xml application/rss+xml text/javascript
               image/svg+xml application/x-font-ttf font/opentype
               application/vnd.ms-fontobject;

    server_tokens off;

    include /etc/nginx/conf.d/*.conf;

    # Load virtual host configuration files.
    include /etc/nginx/sites-enabled/*;

    # BLOCK SPAMMERS IP ADDRESSES
    include /etc/nginx/conf.d/blockips.conf;
}
Permissions for webroot dir:
[root@soupcan nginx]# namei -om /srv/www/website1/
f: /srv/www/website1/
dr-xr-xr-x root root /
drwxr-xr-x root root srv
drwxrwxr-x brian nobody www
drwxr-x--x brian nobody website1
EDIT 2: Solution posted below.
I had to change my domain admin password today, and since then on several servers I randomly get booted/disconnected with the following error:
Disconnected from <server> (Error 2823)
I cannot find any information on error 2823; even net helpmsg gives me squat:
P:\>net helpmsg 2823
The system cannot find message text for message number 0xb07 in the message file for NETMSG.
Anyone have info on this error? The account keeps getting locked, but I have no services running under this ID.
I only get kicked from some servers too, not all.
EDIT: Solution posted below.
When I reset user passwords in Active Directory on Windows Server 2008 or Windows Server 2012 and check the option User must change password at next logon, it prevents users from being able to log in.
However, when I do not check this option, reset their password, and unlock their account, the users can log in successfully. This obviously presents a bit of a security issue.
I'm not versed enough in AD to know why this is occurring; has anyone seen this before?
We have a redirect set up on a web server inside our company. Does all traffic continue to go through the web server with the redirect as a sort of middleman? Or does the redirect occur only on the initial request, with all subsequent traffic going straight from the client to the second server?
Here is a picture to explain what I'm trying to ask. I think Option A is how it works, but I'd like verification of my hunch.
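Spelling my hunch out as the raw HTTP exchange (simplified; "server2" is just a placeholder name for the second server):
GET /app HTTP/1.1                <-- client asks the redirecting web server
HTTP/1.1 302 Found
Location: http://server2/app     <-- web server answers once with the new address

GET /app HTTP/1.1                <-- client then talks to server2 directly;
                                     the first web server is out of the loop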
We are trying to copy 3 files from our domain controller to a user PC and have it execute from a .bat file when a user logs in via Active Directory.
When we associate the .bat file to a user in AD for their logon, the PC copies down the 2 files, but the executable isn't launched.
Am I doing something wrong here? Here is the script in question:
rem Create a local working folder for the scan tools
mkdir c:\stinger_dl
rem Pull PsExec and Stinger down from the netlogon share
copy \\DC01\netlogon\install\PsExec.exe c:\stinger_dl\
copy \\DC91\netlogon\install\stinger.exe c:\stinger_dl\
rem Run Stinger through PsExec as the domain admin
rem (-d = don't wait for it to finish, -h = run with the elevated token)
c:\stinger_dl\psexec.exe /accepteula -u domain\admin -p MagicPassword -d -h c:\stinger_dl\stinger.exe --SILENT --ADL --GO --RPTALL --DELETE --REPORTPATH=c:\stinger_dl
We had a nasty virus outbreak last Friday (our current virus protection missed it) and are trying to force a scan on all user PCs with Stinger when they log in tomorrow.
For anyone interested, this is the virus that got us.
I have this in my .conf file for my website in an attempt to block 2 user agents from constantly probing my server.
## Block http user agent - morpheus fucking scanner ##
if ($http_user_agent ~* "morfeus fucking scanner|ZmEu") {
return 403;
}
I've also tried the following, with no luck:
if ($http_user_agent ~* ("morfeus fucking scanner|ZmEu"))
if ($http_user_agent ~* (morfeus fucking scanner|ZmEu))
if ($http_user_agent ~* ("morfeus fucking scanner"|"ZmEu"))
if ($http_user_agent ~* "morfeus fucking scanner|ZmEu")
if ($http_user_agent ~* morfeus fucking scanner|ZmEu)
It worked well when I only had 1 user agent, but after attempting to add a second, these user agents are still able to probe the server.
111.90.172.235 - - [17/Feb/2013:23:05:22 -0700] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 118 "-" "ZmEu" "-"
111.90.172.235 - - [17/Feb/2013:23:05:22 -0700] "GET /MyAdmin/scripts/setup.php HTTP/1.1" 404 118 "-" "ZmEu" "-"
111.90.172.235 - - [17/Feb/2013:23:05:22 -0700] "GET /pma/scripts/setup.php HTTP/1.1" 404 118 "-" "ZmEu" "-"
111.90.172.235 - - [17/Feb/2013:23:05:22 -0700] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 403 118 "-" "ZmEu" "-"
111.90.172.235 - - [17/Feb/2013:23:05:22 -0700] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 118 "-" "ZmEu" "-"
111.90.172.235 - - [17/Feb/2013:23:05:22 -0700] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 118 "-" "ZmEu" "-"
According to these two posts (#12: How Do I Deny Certain User-Agents?, HowTo: Nginx Block User Agent), I think I'm set up correctly, but it doesn't seem to be working.
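An alternative I've run across while searching (untested on my setup, so treat it as a sketch) is to classify the agent with a map in the http block and keep the if trivial:
# in the http block:
map $http_user_agent $bad_agent {
    default                       0;
    "~*morfeus fucking scanner"   1;
    "~*zmeu"                      1;
}
# then in the server block:
if ($bad_agent) {
    return 403;
}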
EDIT: Here is the nginx version and the whole conf file:
nginx version: nginx/1.2.7
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    access_log  /var/log/nginx/XXXXXX/access.log  main;
    error_log   /var/log/nginx/XXXXXX/error.log;
    root        /srv/www/XXXXXX;

    location / {
        index  index.html index.htm index.php;

        #5/22/2012 - Turn on Server Side Includes
        ssi on;

        ## Block http user agent - morpheus fucking scanner ##
        if ($http_user_agent ~* "morfeus fucking scanner|ZmEu") {
            return 403;
        }

        ## Only allow GET and HEAD request methods. By default Nginx blocks
        ## all request types other than GET and HEAD for static content.
        if ($request_method !~ ^(GET|HEAD)$ ) {
            return 405;
        }
    }

    location ~ \.php {
        try_files $uri =404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        #fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/XXXXXX/$fastcgi_script_name;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # Redirect server error pages to the static page
    error_page 403 404 /error403.html;
    location = /error403.html {
        root /usr/share/nginx/html;
    }
}
I'm looking for information on how data is shared/passed across a network between a Windows 7 client OS and a Windows Server 2008 server.
A little history of our setup (I apologize, as I'm not a network guy, so this may be overly vague):
We have servers in a data center in one state. Our corp HQ office is in a different state, and there is a VPN tunnel set up between us and the data center for access. We have several satellite offices in other states that have a VPN tunnel from their location to us at HQ (so everyone's traffic has to go through corporate to get to the data center).
We have a server in the data center with a shared folder that gets flat-file exports several times a day. This folder is shared to AD security groups. Users map a network drive to the folder for access.
We are seeing situations where a flat file is created on the server but is not visible to the users at the remote offices for several hours. It is visible to us in corp HQ immediately.
Is this normal? I thought this file would be instantly visible at all locations as long as the drive is mapped. Is there anything I can do to help this? A setting somewhere?
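One client-side setting I've run across while searching (purely an assumption on my part that it is related to this delay) is the Windows SMB client's metadata caching; a sketch of the tweak on a Windows 7 client:
# Hypothetical tweak: shorten the SMB client's metadata caches (values in seconds).
# Standard LanmanWorkstation registry location; reboot afterwards.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters'
Set-ItemProperty -Path $key -Name DirectoryCacheLifetime -Type DWord -Value 0
Set-ItemProperty -Path $key -Name FileInfoCacheLifetime  -Type DWord -Value 0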
My last, more general, question: how does this process work from a 60,000 ft level?
Thanks!
I'm trying to deny some user agents I constantly see probing my nginx web server.
If I have this in my .conf file:
## Block http user agent - morpheus fucking scanner ##
if ($http_user_agent ~* (morfeus fucking scanner|ZmEu)) {
return 403;
}
I get the following error when starting services:
nginx: [emerg] invalid condition "$http_user_agent" in /etc/nginx/sites-enabled/siteXXX:19
nginx: configuration file /etc/nginx/nginx.conf test failed
If I place quotation marks around it, nginx starts, but it doesn't deny requests as I would expect it to.
## Block http user agent - morpheus fucking scanner ##
if ($http_user_agent ~* "(morfeus fucking scanner|ZmEu)") {
return 403;
}
Any ideas? I'm looking for a case-insensitive user agent deny.
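For what it's worth, this is how I've been testing the block from the server itself (assuming curl's -A flag is the right way to spoof a user agent):
[root@soupcan nginx]# curl -I -A "ZmEu" http://localhost/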
We have a print server that hosts roughly 30-40 printers. In the past we have created a new printer name but directed it to the same IP/port, so we can have as many as 5 or 6 printer names all pointing to the same device.
On Windows Server 2003, each of these printer names would show up individually in the "Printers and Faxes" section regardless of how many shared the same IP/port. Windows Server 2008 functions differently: it 'bundles' these printers under one listing (see screenshot). This makes it very hard to find an individual printer if I need to check one and the name happens to be nested.
Is there a way to un-group or un-nest the printers in Windows Server 2008?
I have a Windows 2008 R2 server we are using as a print server. We are encountering issues with some print jobs, and I need to be able to look at a print job in one of the printers and then go find the associated .SPL file in the c:\Windows\system32\spool\PRINTERS directory.
Is there an easy way to find the spool job number tied to a printer record?
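The closest I've gotten on my own is listing jobs with their IDs via WMI (a sketch; my assumption, which needs verifying, is that the spool files are named after the job ID, e.g. job 12 -> 00012.SPL):
# List current print jobs with their IDs so they can be matched to .SPL files
Get-WmiObject Win32_PrintJob |
    Select-Object JobId, Name, Document, Owner, TotalPages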
We are migrating one of our legacy servers at work to a new data center (different domain, etc...). This server has a folder with MANY nested folders inside of it. Each of the nested folders may have different explicit permissions granted to different users. There are hundreds of folders.
Is there a way I can get all the permissions in a report, using PowerShell or something similar, for all the folders and nested folders? I'm looking for the easiest possible way to replicate the folder permissions, or at least some output from which I can manually recreate them.
Any ideas?
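Here's the rough sketch I had in mind, in case it helps frame the question (D:\LegacyShare and the output path are placeholders):
# Walk every folder under the root and dump each ACL entry to CSV
$root = 'D:\LegacyShare'
Get-ChildItem -Path $root -Recurse |
    Where-Object { $_.PSIsContainer } |
    ForEach-Object {
        $folder = $_.FullName
        (Get-Acl -Path $folder).Access | ForEach-Object {
            New-Object PSObject -Property @{
                Folder    = $folder
                Identity  = $_.IdentityReference
                Rights    = $_.FileSystemRights
                Type      = $_.AccessControlType
                Inherited = $_.IsInherited
            }
        }
    } | Export-Csv -Path 'C:\temp\folder_acls.csv' -NoTypeInformation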
Trying to do a fresh install of SharePoint 2010 (w/ SP1) and SQL Server 2012 PowerPivot for SharePoint.
The prerequisites clearly show that SharePoint 2010 SP1 is needed, which we have installed. However, when trying to install the SQL Server portion, we consistently fail the 'SharePoint version requirement for PowerPivot for SharePoint' validation rule in the SQL Server install process.
Here is the process we are following:
- install Sharepoint 2010
- install Sharepoint 2010 SP1
- install SQL Server 2012 PowerPivot for SharePoint
Here is a screenshot of the error and the log file error. We are completely stuck at this point; has anyone run into this before?
EDIT: This article mentions an issue with the eval version of SharePoint 2010 not correctly inserting the version into the registry, but it does not mention how to fix or work around it... in case this helps anyone.
I have an Nginx web server hosting two sites. I created a blockips.conf file to blacklist IP addresses that are constantly probing the server and included this file in the nginx.conf file. However, in the access logs for the sites I still see these IP addresses showing up. Do I need to include the blacklist in each site's conf instead of the global conf for Nginx?
Here is my nginx.conf:
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid       /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;

    # Load virtual host configuration files.
    include /etc/nginx/sites-enabled/*;

    # BLOCK SPAMMERS IP ADDRESSES
    include /etc/nginx/conf.d/blockips.conf;
}
blockips.conf
deny 58.218.199.250;
access.log still shows this IP address.
58.218.199.250 - - [27/Sep/2012:06:41:03 -0600] "GET http://59.53.91.9/proxy/judge.php HTTP/1.1" 403 570 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" "-"
What am I doing incorrectly?
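In case it helps, this is how I've been checking what status the server actually returns to that IP (a sketch; $9 is the status field in my main log format):
grep '58.218.199.250' /var/log/nginx/access.log | awk '{print $9}' | sort | uniq -c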
We have an internal IIS server that hosts several websites that redirect to externally hosted web applications. Occasionally people will do nutty things like restart servers, stop services, unplug cables; you name it, it happens... I want to verify that our internal IIS web server is up and running every 15 minutes, and if not, try to restart the service and send me an email.
I wrote this PowerShell script, but am not sure if this is the best way to accomplish this:
###################################################################
# Summary: Verify W3SVC service is running, send email if it's down
###################################################################

# Get service status
$serviceName = "W3SVC"
$serverName  = hostname
$status      = (Get-Service $serviceName).Status

if ($status -ne "Running") {
    sendMail "$serviceName" "$serverName"   # sendMail is a function (not shown)
    Restart-Service $serviceName
}
# else: service is running, do nothing
However, I'm not sure if W3SVC is the best service to check that IIS7 is up. Do I need to check WAS and IISADMIN as well? Is there one that will accomplish everything?
I also thought about checking the HTTP status code for these hosts, but even if the site is down while IIS is up, my return code comes back 200:
$url = "http://intranet_site"
$xHTTP = new-object -com msxml2.xmlhttp;
$xHTTP.open("GET",$url,$false);
$xHTTP.send();
$xHTTP.status # returns the status code
if ($xHTTP.status -ne "200"){
sendMail "$serviceName" "$serverName"
Restart-Service $ServiceName
}
This was more of a brainstorming idea; are there better "best practices" for accomplishing something like this? I want to verify that the site is up and that the redirects are working.
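One refinement I've been toying with (a sketch only; disabling auto-redirect should let the check validate the redirect response itself rather than whatever page it eventually lands on):
# Check the raw response without following redirects
$req = [System.Net.WebRequest]::Create("http://intranet_site")
$req.AllowAutoRedirect = $false
try {
    $resp   = $req.GetResponse()
    $status = [int]$resp.StatusCode    # expect 301/302 if the redirect is in place
    $resp.Close()
} catch [System.Net.WebException] {
    $status = 0                        # connection failure or an error status
}
if (@(200, 301, 302) -notcontains $status) {
    sendMail "$serviceName" "$serverName"
    Restart-Service $serviceName
}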
We host several web-based applications outside of our intranet. The URLs to these applications are long, complex, and overall not user friendly. Ex:
http://hostingsite:port/approot/folder/folder/login.aspx <-- (production)
http://hostingsite:port22/approot/folder/folder/login.aspx <-- (dev)
http://hostingsite:port33/approot/folder/folder/login.aspx <-- (test)
I'd like to create an internal DNS entry to allow users to access these sites with ease. Ex:
http://prod --> http://hostingsite:port/approot/folder/folder/login.aspx
http://dev --> http://hostingsite:port22/approot/folder/folder/login.aspx
I'm not familiar with the DNS process and setup; as far as I know, a DNS record can only point to an IP address, not to a port or a directory path as described above. Is this a correct assumption?
I am thinking of throwing up an internal web server that will answer to the internal DNS names and redirect to the external sites.
http://prod --> [internal webserver] --> redirect --> http://hostingsite:port/approot/folder/folder/login.aspx
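If I went that route, I imagine each internal name would need little more than a one-line redirect; for example, if the internal box ran nginx (a sketch, reusing the placeholder host/port from above):
server {
    listen      80;
    server_name prod;
    return 301 http://hostingsite:port/approot/folder/folder/login.aspx;
}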
Is there a better way to do this?
I'm running CentOS 6.2 and Nginx 1.2.3, following these Linode instructions to get Perl to work with Nginx.
I've done everything up to the point of testing an actual Perl file. When I do this the browser says:
The page you are looking for is temporarily unavailable.
Please try again later.
And my Nginx error log shows the following:
2012/09/02 22:09:58 [error] 20772#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.102, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:8999", host: "192.168.1.10:81"
I'm stuck at this point. I'm not sure if it matters, but I also have spawn-fcgi and php-fpm to serve up PHP files on this site; that should be 100% separate from the perl-fastcgi setup though (different port, etc.).
How can I troubleshoot this?
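My first instinct (a sketch) is to confirm that something is actually listening on the port nginx is proxying to, since the error is a connection refused on 8999:
netstat -tlnp | grep 8999
# nothing listed would mean the perl fastcgi wrapper never started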
We are in a tough spot. We have a hosted server with the following specs:
OS: Windows Server 2008 R2 Enterprise SP1 64bit
Processor: Intel Xeon X7550 @ 2GHz (8 processors)
RAM: 16GB
The file system is on a SAN or NAS (not sure).
We are seeing very odd issues where a user will open a 25MB .xlsb file and it sometimes takes literally 60-120 seconds. The server is just dog slow for Excel.
Resources are not being pegged: the CPU never jumps up, and there is plenty of RAM... it's just oddly slow.
Our host has been looking at the issue for several weeks without much to show for it. Is there a utility I can run myself that will help track down our issue?
I have found Server Performance Advisor v1.0. Does anyone have experience using it?
Our host is ultimately responsible for fixing this, but we are going on one month and our users are losing patience. Any tips would be helpful.
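Something I thought about collecting myself in the meantime (a sketch; these counters are my guess at what matters for SAN-backed storage latency):
# Sample disk latency for a minute; sustained values well above ~0.020 s
# would point at the storage path rather than CPU or RAM.
Get-Counter -Counter '\LogicalDisk(*)\Avg. Disk sec/Read',
                     '\LogicalDisk(*)\Avg. Disk sec/Write' `
    -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }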
We are going through an RFP process to change hosting companies for most of our servers (~10 fairly powerful workhorses and database servers).
When the existing company was picked I wasn't at the company, nor have I worked with hosting companies in the past (I'd always had hardware on site at previous companies). We will be doing site tours of each of the companies over the next few weeks. What types of things do you normally look for? Questions to ask their on-site staff, etc.? Anything that can help me evaluate and compare.
Most of the hosting companies maintain VMware farms with DR sites connected via fiber.
I have a weird issue: if I RDP into a Windows 2003 server and then log off (not reboot or restart), the application services are restarted.
I do use the /admin flag when I launch RDP, if it matters.
EDIT - It seems it only happens with the /admin flag.
I have no idea what is going on, but we have critical applications on this server, and I find I cannot log on to troubleshoot during the day because as soon as I log off the application restarts.
Has anyone seen something like this before?