My network serves DHCP through a Windows 2022 server. I would like to log the MAC addresses of all devices that acquire an IP address through this DHCP server. I have installed the IPAM role on the server, but I don't understand how to activate the logging. The server is standalone; it does not use Active Directory and is not joined to a domain.
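To make the goal concrete, here is the kind of thing I am trying to achieve, sketched with the DhcpServer PowerShell cmdlets (the CSV path is just a placeholder, and I am assuming the cmdlets behave the same on a standalone, non-domain server):

    # Sketch only, run on the DHCP server itself (assumes the DhcpServer module is present).
    # 1) The built-in audit log also records lease events including MACs, as I understand it:
    Set-DhcpServerAuditLog -Enable $true          # log files land under %windir%\System32\dhcp
    # 2) A point-in-time dump of current leases with their MAC addresses (ClientId):
    Get-DhcpServerv4Scope | ForEach-Object {
        Get-DhcpServerv4Lease -ScopeId $_.ScopeId
    } | Select-Object IPAddress, ClientId, HostName, LeaseExpiryTime |
        Export-Csv -Path C:\admin\dhcp-leases.csv -NoTypeInformation   # path is a placeholder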
I have an Nginx web server inside a LAN that is reached from the internet through NAT. The variable $server_addr contains the internal LAN address of the server. Is there a way to map the external (internet-facing) IP address of the web server to an Nginx variable?
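For illustration, this is the workaround I am considering if no built-in variable exists, since Nginx itself cannot see past the NAT. The addresses are placeholders for my real internal and external ones; the map goes in the http block:

    # Sketch only: statically map the LAN address to the public address.
    # 192.168.1.10 / 203.0.113.10 are placeholders for the real internal/external addresses.
    map $server_addr $external_addr {
        default        $server_addr;
        192.168.1.10   203.0.113.10;
    }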
The typical way of reverse-proxying different subdomains to different places with Nginx is to define a separate server block for each subdomain, like this:
server {
    server_name subdomain1.example.com;
    location / {
        proxy_pass http://hostname1:port1;
    }
}
server {
    server_name subdomain2.example.com;
    location / {
        proxy_pass http://hostname2:port2;
    }
}
Is it possible to achieve the same result within a single server block (e.g. server_name .example.com, without any specified subdomain), by specifying different locations within that server block?
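If locations alone cannot distinguish hosts, the alternative I am considering is a map on $host (a sketch only; as far as I understand, proxy_pass with a variable needs a resolver or matching upstream blocks, and both map and server live in the http block):

    # Sketch only: route by host name inside one server block via a map.
    map $host $backend {
        subdomain1.example.com   http://hostname1:port1;
        subdomain2.example.com   http://hostname2:port2;
    }
    server {
        server_name .example.com;
        location / {
            proxy_pass $backend;   # needs a resolver (or defined upstreams) to work
        }
    }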
I have set up a simple Linux mail server using iRedMail. Now I am worried that hackers may find a way to hijack it for sending spam. What are the recommended steps to harden the mail server? (I haven't enabled any relaying yet, but I have Icinga running on a separate server, and I am planning to use the iRedMail server as a relay for Icinga. Should I implement authentication?)
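For the relay part specifically, this is the shape of the Postfix settings I have in mind (iRedMail uses Postfix underneath); a sketch only, with the idea that the Icinga host authenticates via SASL rather than being whitelisted:

    # Sketch only (main.cf): relay only for local networks and authenticated clients.
    smtpd_sasl_auth_enable = yes
    smtpd_relay_restrictions = permit_mynetworks,
                               permit_sasl_authenticated,
                               reject_unauth_destination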
I have set up a VPN server on a Win2022 server. My Win10 clients (configured as "automatic VPN choice") can log onto the VPN without problems from within the intranet, by addressing either the public URL or the IP address of the server. Obviously, this is not useful in itself, but it proves that the VPN plumbing is properly configured and works.
However, when trying to reach the VPN from outside the LAN perimeter, the connection fails. All other connections work, and I can log into the WIN2022 server by RDP, indicating that RDP port forwarding works fine.
The client VPN log says "error 800", which means that the VPN server is generically unreachable. For testing purposes, I have switched off both the gateway firewall (pfSense) and the Win2022 server firewall, and I have directed all TCP/UDP traffic from the WAN to the server using 1:1 NAT translation, meaning that all ports are passed to the server. But even that doesn't work.
What might be the cause? I vaguely suspect a DNS-related issue, but I cannot pin it down and I may be wrong anyway.
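For what it is worth, these are the reachability checks I plan to run from an external client (vpn.example.com stands in for my public name). Note that Test-NetConnection probes TCP only, so the UDP 500/4500/1701 ports used by L2TP/IPsec cannot be checked this way:

    # Sketch only: probe the TCP-based VPN ports from outside the LAN.
    Test-NetConnection -ComputerName vpn.example.com -Port 443    # SSTP
    Test-NetConnection -ComputerName vpn.example.com -Port 1723   # PPTP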
I have installed a wildcard SSL certificate on a Win2022 server for use with both web serving (IIS) and VPN authentication. I let Windows choose the appropriate certificate store, but for good measure I also installed the certificate under LocalMachine. Web serving works just fine. However, the certificate does not appear in the Remote Access Server certificate selection and consequently cannot be chosen. Worse, RAS now complains that the default self-signed certificate is not identical to the IIS SSL certificate and refuses to start. My question is: how can I make the new certificate selectable on the RAS configuration page?
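For reference, this is how I have been checking what actually sits in the machine store (my understanding is that RRAS only offers certificates from LocalMachine\My that carry a private key):

    # Sketch only: list certificates in the computer's Personal store.
    Get-ChildItem Cert:\LocalMachine\My |
        Select-Object Subject, HasPrivateKey, NotAfter, Thumbprint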
I need to extract a single bit from an SNMP trap, which reports the state of eight digital sensors. The OIDs of the sensors are:
".1.3.6.1.4.1.42505.1.2.1.1.7.x"
where x can be 0 to 7.
The hex dump of the UDP packets that contain the traps is as follows:
30 3E 02 01 00 04 03 69 70 73 A4 34 06 09 2B 06 01 04 01 82 CC 09 01 40 04 0A 0A 0B 66 02 01 06 02 01 01 43 04 00 01 27 63 30 15 30 13 06 0E 2B 06 01 04 01 82 CC 09 01 02 01 01 07 01 02 01 00
Might somebody point me in the direction of how to parse these packets? I do not need a fully-fledged trap-parsing package, as that would be overkill. All I need is to parse the packets with PHP in order to extract the single relevant bit.
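To make the goal concrete, here is the kind of crude extraction I have in mind rather than a real BER parser: it searches the raw UDP payload for the encoded OID prefix of .1.3.6.1.4.1.42505.1.2.1.1.7 and reads the one-byte INTEGER that follows. $packet would come from a socket_recvfrom() loop on UDP port 162; treat it as a sketch only.

    <?php
    // Sketch only: not a full BER parser, just a targeted byte search.
    function extractSensorBit(string $packet): ?array
    {
        // BER encoding of 1.3.6.1.4.1.42505.1.2.1.1.7 (without the trailing sensor index x)
        $oidPrefix = hex2bin('2b0601040182cc090102010107');
        $pos = strpos($packet, $oidPrefix);
        if ($pos === false) {
            return null;                      // OID not present in this trap
        }
        $idx    = $pos + strlen($oidPrefix);
        $sensor = ord($packet[$idx]);         // the x in ...1.7.x
        // Expect the value encoded as INTEGER: 0x02, length 0x01, one value byte
        if (ord($packet[$idx + 1]) !== 0x02 || ord($packet[$idx + 2]) !== 0x01) {
            return null;                      // unexpected encoding; bail out
        }
        return ['sensor' => $sensor, 'value' => ord($packet[$idx + 3])];
    }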
I have several web servers in the LAN, addressable as http://serv1.lan/, http://serv2.lan/, etc.
To be addressed from outside the LAN, requests need to be passed through an authenticating reverse proxy, such that https://proxy.com/serv1/ is translated into http://serv1.lan/, and so on.
What are the regular-expression rules needed to effect this conversion? The authentication server is IIS, but the proxy could also be implemented in IsapiRewrite (which has a syntax similar to Apache's).
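To illustrate, these are the Apache/ISAPI_Rewrite-style rules I imagine (a sketch only; the [P] proxy flag is my assumption for the ISAPI_Rewrite side, and serv2, serv3, ... would follow the same pattern):

    RewriteEngine On
    # Sketch only: proxy /servN/ paths on the public host to the internal servers.
    RewriteRule ^/serv1/(.*)$ http://serv1.lan/$1 [P,L]
    RewriteRule ^/serv2/(.*)$ http://serv2.lan/$1 [P,L]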
I want to delete all DHCP leases and reservations from server WSKELLER. From the documentation I seem to understand that this would be accomplished with:
Get-DhcpServerv4Scope -ComputerName WSKELLER | Remove-DhcpServerv4Lease -ComputerName WSKELLER
The cmdlet does find all the leases, yet it throws for each lease the following error:
Remove-DhcpServerv4Lease : Failed to delete lease 10.10.12.32 from scope 10.10.0.0 on DHCP server WSKELLER.
At C:\admin\removeDhcpDns.ps1:8 char:48
+ ... mputerName WSKELLER | Remove-DhcpServerv4Lease -ComputerName WSKELLER
+                           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceExists: (10.10.12.32:root/Microsoft/...cpServerv4Lease) [Remove-DhcpServerv4Lease], CimException
+ FullyQualifiedErrorId : DHCP 20019,Remove-DhcpServerv4Lease
Obviously I am making some mistake, but which one?
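For reference, this is the per-scope variant I have also been considering, based on my reading of the documentation (a sketch only):

    # Sketch only: remove leases scope by scope, then handle reservations separately.
    Get-DhcpServerv4Scope -ComputerName WSKELLER | ForEach-Object {
        Remove-DhcpServerv4Lease -ComputerName WSKELLER -ScopeId $_.ScopeId
        Get-DhcpServerv4Reservation -ComputerName WSKELLER -ScopeId $_.ScopeId |
            Remove-DhcpServerv4Reservation -ComputerName WSKELLER
    }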
My site https://example.com should execute a given URL rewrite rule (and thus act as a reverse proxy to internalServer1) for user1, and a different rewrite rule for user2 sending him to internalServer2.
Is this scenario possible with IIS8?
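What I picture is something along these lines inside the site's rewrite rules, keying on the LOGON_USER server variable (a sketch only; user and server names are placeholders, and a mirror rule would cover user2):

    <!-- Sketch only: send user1 to internalServer1. -->
    <rule name="User1ToInternalServer1" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
            <add input="{LOGON_USER}" pattern="user1$" />
        </conditions>
        <action type="Rewrite" url="http://internalServer1/{R:1}" />
    </rule>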
Since yesterday, I have been seeing strange behavior with a Samba share (Arch Linux). The only trigger that I can think of is a system update (pacman -Syu). Ever since, the root share (/) is accessible and all directories are visible, but any attempt to access any of the directories triggers an "invalid handle" response in Windows. If, however, I share any of the directories (e.g. /data) as a separate share, it is fully accessible without trouble. Here is the share definition (the checks I run locally are sketched after it).
In the meantime, I have isolated the issue to the Samba server (rather than the Windows host). A second Arch Linux installation will mount the [data] share correctly, but will refuse access to the root [/data/root_ssd] share. Conversely, starting Samba on this new, virgin Arch Linux install again leads to no sharing of the root path.
Any ideas? It seems to me that this behavior is tied to a recent Samba upgrade.
[antergos1-festplatte]
    comment = 20 GB Festplatte
    path = /
    writeable = yes
    create mask = 0766
    directory mask = 0777
    guest ok = yes
    force user = aag
    browseable = yes

[data]
    comment = webserver directories
    path = /data
    writeable = yes
    create mask = 0777
    directory mask = 0777
    guest ok = yes
    force user = aag
    browseable = yes
    force group = admins
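For completeness, these are the checks I run on the Samba host itself to compare the two shares (a sketch; the guest listings assume guest ok = yes as above):

    testparm -s                                            # validate smb.conf
    smbclient //localhost/antergos1-festplatte -N -c 'ls'  # root share, guest access
    smbclient //localhost/data -N -c 'ls'                  # working share, for comparison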
Munin-limits seems to trigger notifications for every state change (OK->CRITICAL, OK->WARNING, WARNING->CRITICAL) in both directions. Is there a way to prevent Munin from sending notifications when a value returns to "OK"?
I cannot get the Munin dynamic zoom to work. I am rather certain that the problem has something to do with the Nginx configuration. Any attempt to generate a zoomed graph triggers the following error entry in the nginx log:
2015/02/22 13:26:01 [error] 4782#0: *2580 open() "/data/munin/usr/share/munin/cgi/munin-cgi-graph/bellaria/antergos1.bellaria/diskstats_latency/AntergosVG_AntergosRoot-pinpoint=1421756527,1424607727.png" failed (2: No such file or directory), client: 10.10.10.25, server: munin, request: "GET /usr/share/munin/cgi/munin-cgi-graph/bellaria/antergos1.bellaria/diskstats_latency/AntergosVG_AntergosRoot-pinpoint=1421756527,1424607727.png?&lower_limit=&upper_limit=&size_x=800&size_y=400 HTTP/1.1", host: "munin.bellaria", referrer: "http://munin.bellaria/static/dynazoom.html?cgiurl_graph=/usr/share/munin/cgi/munin-cgi-graph&plugin_name=bellaria/antergos1.bellaria/diskstats_latency/AntergosVG_AntergosRoot&size_x=800&size_y=400&start_epoch=1421756527&stop_epoch=1424607727"
Specifically, I suspect that something is wrong with the fastCGI parameters. May a good friendly soul take a look at my Munin virtual server (see below) and explain to me what's wrong? It's driving me crazy - yet I have a hunch that any expert will identify the problem in a fraction of a second...
# Munin server
server {
    listen 80;
    server_name munin munin.bellaria;
    root /data/munin;
    allow all;
    access_log logs/munin.access.log;
    error_log logs/munin.error.log;

    location / {
        index index.html index.htm index.php;
    }

    location ~ \.(php|html|html|cgi)$ {
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param AUTH_USER $remote_user;
        fastcgi_param REMOTE_USER $remote_user;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi.conf;
    }

    location ^~ /cgi-bin/munin-cgi-graph/ {
        access_log off;
        fastcgi_split_path_info ^(/cgi-bin/munin-cgi-graph)(.*);
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/munin/fcgi-graph.sock;
        include fastcgi_params;
    }
}
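For completeness, this is the location variant I am considering trying next inside the server block above, matching the path the dynazoom page actually requests (a sketch only; the fcgi-graph socket path is taken from my existing config):

    # Sketch only: match /usr/share/munin/cgi/munin-cgi-graph/... as requested by dynazoom.
    location ^~ /usr/share/munin/cgi/munin-cgi-graph/ {
        access_log off;
        fastcgi_split_path_info ^(/usr/share/munin/cgi/munin-cgi-graph)(.*);
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass unix:/var/run/munin/fcgi-graph.sock;
        include fastcgi_params;
    }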
Nagios is served by an nginx virtual server named "nagios" with the following configuration:
# nagios server
server {
    server_name nagios;
    root /usr/share/nagios/share;
    listen 80;
    index index.php index.html index.htm;
    access_log /etc/nginx/logs/nagios.access.log;
    allow 10.10.0.0/16;
    allow 127.0.0.1;

    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param AUTH_USER "nagios";
        fastcgi_param REMOTE_USER "nagios";
        fastcgi_index index.php;
        include fastcgi.conf;
    }

    location ~ \.cgi$ {
        root /usr/share/nagios/sbin;
        rewrite ^/nagios/cgi-bin/(.*)\.cgi /$1.cgi break;
        fastcgi_param AUTH_USER "nagios";
        fastcgi_param REMOTE_USER "nagios";
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi.conf;
        fastcgi_pass unix:/run/fcgiwrap.sock;
    }

    location /nagios {
        alias /usr/share/nagios/share;
    }
}
This works well from within the LAN. For access from external sites, I have a single public address ("newcompany.com"), and I would like to reverse-proxy the entire Nagios site (including the CGI location) to "https://newcompany.com/nagios". I have tried all kinds of rewrites and proxy_pass directives, none of which work. Can somebody show me what the "/nagios" location directive within the secured "newcompany.com" server should look like in order to properly reverse-proxy to the Nagios server? Here is the current (broken) version of the public-facing server:
server {
    server_name newcompany.com antergos1;
    listen 80 default_server;
    root /usr;
    index index.php index.html index.htm;
    access_log logs/default.access.log;
    error_log logs/default.error.log;

    location ~ \.(php|html|html|cgi)$ {
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param AUTH_USER $remote_user;
        fastcgi_param REMOTE_USER $remote_user;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        include fastcgi.conf;
    }

    location /nagios {
        index index.php index.html index.htm;
        proxy_pass http://nagios/;
    }
}
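For completeness, this is the shape I plan to try next for that location (a sketch only; it assumes the name "nagios" resolves from the proxy host to the LAN vhost above):

    # Sketch only: forward /nagios/ on newcompany.com to the internal nagios vhost.
    location /nagios/ {
        proxy_pass http://nagios/nagios/;
        proxy_set_header Host nagios;
        proxy_set_header X-Real-IP $remote_addr;
    }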
I have a bunch of non-server gear whose downtimes I would like to track, and therefore I have integrated those machines into my Nagios setup (so that I can generate availability reports). However, I don't want to be alerted by email if they are unreachable. Therefore, my entries for these machines look as follows:
define host{
        host_name               Moni_MacBook_Pro
        address                 10.10.10.27
        use                     generic-host
        notifications_enabled   0
        }
However, I still get notifications! I suspect that this is related to the fact that the template "generic-host" contains the following line:
check_command check-host-alive
and, in turn, the "generic-service" template looks as follows:
define service{
        name                            generic-service  ; The 'name' of this service template
        active_checks_enabled           1                ; Active service checks are enabled
        passive_checks_enabled          1                ; Passive service checks are enabled/accepted
        parallelize_check               1                ;
        obsess_over_service             1                ; We should obsess
        check_freshness                 0                ; Default is to NOT check service 'freshness'
        notifications_enabled           1                ; Service notifications are enabled
        event_handler_enabled           1                ; Service event handler is enabled
        flap_detection_enabled          1                ; Flap detection is enabled
        process_perf_data               1                ; Process performance data
        retain_status_information       1                ; Retain status information
        retain_nonstatus_information    1                ; Retain non-status information
        is_volatile                     0                ; The service is not volatile
        check_period                    24x7             ;
        max_check_attempts              3                ;
        normal_check_interval           10               ;
        retry_check_interval            2                ; Re-check the service every two minutes
        contact_groups                  admins           ; Notifications get sent out to everyone in the 'admins' contact group
        notification_options            u,c              ;
        notification_interval           1440             ; Re-notify about service problems every 1440 minutes (24 hours)
        notification_period             24x7             ; Notifications can be sent out at any time
        register                        0                ; DONT REGISTER THIS DEFINITION
        }
My diagnosis is that the line "notification_options u,c" in the service template somehow overrides my request (in the host definition) not to send notifications. Can this be fixed, and if so, how?
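In case it helps to see what I mean, this is the direction I am considering: switching notifications off on the services attached to that host as well, not only on the host object (a sketch; the PING service is just an example):

    define service{
            use                     generic-service
            host_name               Moni_MacBook_Pro
            service_description     PING
            check_command           check_ping!100.0,20%!500.0,60%
            notifications_enabled   0       ; suppress service notifications for this host
            }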
I am having trouble configuring Nagios on Arch Linux served by Nginx. The Nagios services run without a hitch, but serving via Nginx is broken. As you can see from the screenshot, the fonts are all messed up and the icons do not appear in the browser. I suspect that the paths to the CSS and image files are somehow broken and the files are therefore not served by Nginx.
Here is my Nginx virtual server conf. I assume that I have made some stupid error, but I cannot spot the problem.
The error log shows the following:
*334 open() "/usr/share/nagios/share/nagios/images/ndisabled.gif" failed (2: No such file or directory)
However, the GIF is actually located at /usr/share/nagios/share/images/ndisabled.gif, indicating that there is some confusion with the paths, which I do not know how best to fix.
Probably one of the Nginx/Nagios experts roaming this forum will find the issue in a microsecond!
server {
    server_name nagios.bellaria www.nagios.bellaria;
    root /usr/share/nagios/share;
    listen 80;
    index index.php index.html index.htm;
    access_log nagios.access.log;
    error_log nagios.error.log;
    auth_basic "Nagios Access";
    auth_basic_user_file /etc/nagios/htpasswd.users;

    location ~ \.php$ {
        try_files $uri = 404;
        fastcgi_index index.php;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        include fastcgi.conf;
    }

    location ~ \.cgi$ {
        root /usr/share/nagios/sbin;
        rewrite ^/nagios/cgi-bin/(.*)\.cgi /$1.cgi break;
        fastcgi_param AUTH_USER $remote_user;
        fastcgi_param REMOTE_USER $remote_user;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi.conf;
        fastcgi_pass unix:/run/fcgiwrap.sock;
    }

    location /stylesheets {
        alias /usr/share/nagios/share/stylesheets;
    }
}
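The fix I am considering (a sketch only, assuming the CGIs generate /nagios/images/... and /nagios/stylesheets/... URLs as in the error above) is an extra location that strips the /nagios prefix:

    # Sketch only: map /nagios/images/... and /nagios/stylesheets/... onto the real paths.
    location ~ ^/nagios/(images|stylesheets)/(.*)$ {
        alias /usr/share/nagios/share/$1/$2;
    }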
I am trying to fix an OwnCloud server. I have been stuck for a day and I am starting to despair. OwnCloud (set up on Arch Linux) says:
Data directory ( /data/ocdata) is invalid
Please check that the data directory contains a file ".ocdata" in its root.
Cannot create "data" directory ( /data/ocdata)
This can usually be fixed by giving the webserver write access to the root directory.
So I did the following (out of desperation):
sudo chown -R 777 /data/ocdata
An ls on ocdata gives:
ls /data/ocdata -a -l
total 12
drwxrwxrwx 2 http http 4096 Sep 14 20:33 .
drwxr-xr-x 4 root root 4096 Sep 14 20:18 ..
-rwxrwxrwx 1 http http 2 Sep 14 20:40 .ocdata
The config.php says:
<?php
$CONFIG = array (
    'instanceid' => 'ocac7c1e1b0a',
    'passwordsalt' => 'f30d85305490ef50994a3231be3017',
    'trusted_domains' =>
    array (
        0 => '10.10.10.5',
    ),
    'datadirectory' => ' /data/ocdata',
    'dbtype' => 'pgsql',
    'version' => '7.0.2.1',
    'dbname' => 'owncloud',
    'dbhost' => 'localhost',
    'dbtableprefix' => 'oc_',
    'dbuser' => '---',
    'dbpassword' => '---',
    'installed' => true,
);
What might be wrong here?
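For what it is worth, these are the checks I am running next (assuming the web server runs as the http user, as the ls output above suggests):

    sudo -u http ls -la /data/ocdata       # can the web server user enter and read the directory?
    sudo -u http touch /data/ocdata/probe  # ...and create a file in it?
    ls -ld / /data /data/ocdata            # parent directories need execute (x) permission too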
I have set up a web site (www.mysite.com) on IIS to provide SSL and Windows authentication, and to forward all HTTPS requests to another server (NetBIOS name: internalserver; port 2080). The rule specifying this setup reads:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <rewrite>
            <outboundRules>
                <preConditions>
                    <preCondition name="ResponseIsHtml1">
                        <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
                    </preCondition>
                </preConditions>
            </outboundRules>
            <rules>
                <rule name="ReverseProxyInboundRule1" stopProcessing="true">
                    <match url="(.*)" />
                    <action type="Rewrite" url="http://InternalServer:2080/{R:1}" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>
This leads to the following behavior (reproducible in both Internet Explorer and Firefox):
Clicking www.mysite.com results in Error 502 "Web server received an invalid response while acting as a gateway or proxy server"
Reloading the page causes the site to load - but unformatted, suggesting that the CSS files were not found.
Reloading the page a second time causes the site to load correctly, including CSS-specified formats.
The Error 502 reappears each time the browser is closed and relaunched.
On the other hand, if I use Basic Auth instead of Windows Auth, the reverse proxy works just fine, with none of the above issues coming up.
My questions are:
- is Error 502 a known issue in this type of configuration?
- does this represent a bug in IIS?
- if not, is any interaction with other software known to produce this issue?
I am working my way through establishing IIS as a reverse proxy. I have set up two internal sites (served by IIS) named Payroll and Webmail, and one external site (served by a different web server) named IPS. All three sites are on localhost, but Payroll binds to port 12084, Webmail to port 12085, and IPS to port 2080. I have then created three inbound rules which direct xyz.com/ips/, xyz.com/payroll/, and xyz.com/webmail/ to their respective servers. All of this works just fine.
<rules>
    <rule name="Reverse Proxy to ips" enabled="true" stopProcessing="true">
        <match url="^ips/(.*)" />
        <action type="Rewrite" url="http://localhost:2080/{R:1}" />
        <serverVariables>
            <set name="HTTP_ACCEPT_ENCODING" value="" />
        </serverVariables>
    </rule>
    <rule name="Reverse Proxy to webmail" stopProcessing="true">
        <match url="^webmail/(.*)" />
        <action type="Rewrite" url="http://localhost:12084/{R:1}" />
        <serverVariables>
            <set name="HTTP_ACCEPT_ENCODING" value="" />
        </serverVariables>
    </rule>
    <rule name="Reverse Proxy to payroll" stopProcessing="true">
        <match url="^payroll/(.*)" />
        <action type="Rewrite" url="http://localhost:12085/{R:1}" />
        <serverVariables>
            <set name="HTTP_ACCEPT_ENCODING" value="" />
        </serverVariables>
    </rule>
</rules>
I have then written one outbound rule which should patch the URLs so that they will work from external clients.
<rule name="Add application prefix" preCondition="IsHTML">
<match filterByTags="A, Area, Base, Form, Frame, Head, IFrame,
Img, Input, Link, Script" pattern="^/(.*)" />
<conditions>
<add input="{URL}" pattern="^/(webmail|payroll|ips)/.*" />
</conditions>
<action type="Rewrite" value="/{C:1}/{R:1}" />
</rule>
<preConditions>
<preCondition name="IsHTML">
<add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
</preCondition>
</preConditions>
And that’s where the trouble starts. Payroll and Webmail work just fine: they are test sites containing just a href to themselves, which gets patched correctly. IPS, however, is a different story. You see, IPS/index.html contains a redirect to /html/index.php.
<meta HTTP-EQUIV="REFRESH" content="0; url=/html/index.php">
Now comes my question. For reasons that I do not fully understand, this redirect does not get patched, and as a result the client ends up in the "eternal pastures".
My suspicion is that the outbound rule works only for sites served by IIS itself, but does not affect external servers. My further suspicion is that I could make this work by defining IPS (localhost:2080) as a server in the "Server Farm" and tweaking the outbound rule accordingly. I am wondering whether the above is correct and, if so, how the outbound rule should be patched.
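In case it clarifies what I am after, this is the kind of additional outbound rule I have been experimenting with to catch the META refresh. I am not at all sure the custom-tags attributes are right (they are my assumption from the URL Rewrite documentation), so treat it purely as a sketch of the idea:

    <!-- Sketch only: attempt to rewrite the META refresh URL via a custom tag collection;
         the customTags mechanism and attribute names are my assumption, not verified. -->
    <customTags>
        <tags name="MetaRefresh">
            <tag name="meta" attribute="content" />
        </tags>
    </customTags>
    <rule name="Patch META refresh" preCondition="IsHTML">
        <match filterByTags="CustomTags" customTags="MetaRefresh" pattern="(.*url=)/(.*)" />
        <action type="Rewrite" value="{R:1}/ips/{R:2}" />
    </rule>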
I am trying to set up a Windows 8 box as a "reverse proxy" web server using IIS.
My goal is as simple as it gets. I do not need subdomains to go to different servers; I simply want to forward all port-443 (SSL) traffic to localhost:2080. Basic auth should be performed by IIS.
I had this configuration working fine with Apache, but I need to migrate to IIS. I have been trying to get IIS to do this for a week now, without any success. I feel depressed and worthless.
Would a good soul out there be willing to explain to me, in a few steps, how the above can be accomplished? Many, many thanks in advance!
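To make the goal concrete, this is the minimal web.config I picture for the HTTPS site, reusing the rule syntax from my earlier attempts (a sketch only; as I understand it, ARR must be installed with its proxy mode enabled for the Rewrite action to reach another host, and Basic auth would be configured on the site itself):

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
        <system.webServer>
            <rewrite>
                <rules>
                    <!-- Sketch only: forward everything arriving on this (443-bound) site to localhost:2080 -->
                    <rule name="ProxyAllToLocalhost2080" stopProcessing="true">
                        <match url="(.*)" />
                        <action type="Rewrite" url="http://localhost:2080/{R:1}" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>
    </configuration>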