Is there an automated way to traverse a filesystem and remove any ACL entries that reference invalid SIDs in any version of Windows with NTFS?
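If nothing ships with Windows for this, the sort of thing I have in mind is the pywin32 sketch below. It treats "a SID that no longer resolves to an account" as "invalid", which I know is only an approximation (deleted accounts also fail to resolve), and the starting path is just a placeholder.

    import os
    import pywintypes
    import win32security

    def strip_dead_aces(path):
        sd = win32security.GetFileSecurity(
            path, win32security.DACL_SECURITY_INFORMATION)
        dacl = sd.GetSecurityDescriptorDacl()
        if dacl is None:
            return
        changed = False
        # Walk backwards so deleting an ACE does not shift the remaining indexes.
        for i in range(dacl.GetAceCount() - 1, -1, -1):
            sid = dacl.GetAce(i)[-1]
            try:
                win32security.LookupAccountSid(None, sid)
            except pywintypes.error:
                # The SID no longer resolves to a name; drop the ACE.
                dacl.DeleteAce(i)
                changed = True
        if changed:
            sd.SetSecurityDescriptorDacl(1, dacl, 0)
            win32security.SetFileSecurity(
                path, win32security.DACL_SECURITY_INFORMATION, sd)

    for root, dirs, files in os.walk(r"D:\inetpub"):   # placeholder starting point
        for name in dirs + files:
            strip_dead_aces(os.path.join(root, name))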
I have been investigating a problem that occurred on a Windows 2003 server a few days ago. There are about 15 app pools, and within a few minutes, they all produced the error below in the system log:
A process serving application pool 'Pool 31x' failed to respond to a ping. The process id was '7144'.
The pools were then restarted automatically, but timed out during startup, leaving all sites down.
My question is: what would cause a "ping timeout" to all of the app pools around the same time, and then why would they start up too slowly?
The app in each pool is a WCMS which uses the .NET 1.1 framework. It connects to a remote DB but is otherwise independent of other machines.
We have a web application that is not completely bulletproof, and on occasion the application pool will die off and not restart without user input. Once it is restarted, it will run just fine for days or even months. Is there a way to have it restart on its own? IIS seems quick to kill off app pools that it sees misbehaving. Ideally the web app would be improved but that is not up to me in this case.
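The workaround I keep coming back to, if nothing built in will do it, is a small watchdog along these lines (the pool name and health-check URL are placeholders, and I am assuming the IIsApplicationPool ADSI object is the right way to poke IIS 6); I would much rather use a proper IIS setting if one exists.

    import time
    import urllib.request
    import win32com.client

    SITE_URL = "http://localhost/"                          # placeholder health check
    POOL_PATH = "IIS://localhost/W3SVC/AppPools/MyAppPool"  # placeholder pool name

    def site_is_up():
        try:
            urllib.request.urlopen(SITE_URL, timeout=30)
            return True
        except Exception:
            return False

    while True:
        if not site_is_up():
            # The IIsApplicationPool ADSI object also exposes Stop() and Recycle().
            pool = win32com.client.GetObject(POOL_PATH)
            pool.Start()
        time.sleep(60)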
We have a WCMS that allows users to add domains they've purchased through us to their site manager, which is supposed to work instantly. For this to happen, the WCMS needs to be able to make changes to the DNS (add new domains, A records, etc.). The user will not have access to the dirty details, but the app itself must.
The WCMS is in .NET and runs on Windows 2003 or 2008. The DNS server is BIND9 on a separate, unixy machine. Is there an interface out there already for this, or do we need to roll our own? Note that using Microsoft DNS is not an option for this setup.
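To make the question more concrete, the mechanism I assume we would end up using is an RFC 2136 dynamic update against BIND 9, secured with a TSIG key. A rough sketch of that (in Python with dnspython rather than .NET, with the key, zone and addresses as placeholders; the zone on the BIND side would need allow-update for the key):

    import dns.query
    import dns.tsigkeyring
    import dns.update

    # Placeholder TSIG key; BIND would be configured with the same name and secret.
    keyring = dns.tsigkeyring.from_text({
        "wcms-key.": "c2VjcmV0LXNlY3JldC1zZWNyZXQ="
    })

    update = dns.update.Update("customersdomainname.com",
                               keyring=keyring, keyname="wcms-key.")
    update.add("www", 3600, "A", "203.0.113.20")
    update.add("customersdomainname.com.", 3600, "MX",
               "10 mail.customersdomainname.com.")

    response = dns.query.tcp(update, "198.51.100.5", timeout=10)  # the BIND box
    print(response.rcode())   # 0 (NOERROR) on success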
I am trying to use OpenVPN instead of PPTP for our VPN. For that to be possible, it needs to be "easy" to set up. Right now, I have to get people to rename the Tap-Win32 network interface to get the config to work properly.
Is there a way I can streamline client config of OpenVPN to reduce it to a Next-Next-Finish type of procedure?
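The best I have managed so far is scripting the rename itself so users never touch Network Connections. A rough sketch (the tap0901 service name is what my OpenVPN version appears to install and may differ for older Tap-Win32 drivers; the new adapter name is whatever the client config's dev-node line expects, assuming that is how dev-node works on Windows):

    import subprocess
    import win32com.client

    NEW_NAME = "OpenVPN-TAP"   # must match the dev-node line in the client config

    wmi = win32com.client.GetObject("winmgmts:root\\cimv2")
    adapters = wmi.ExecQuery(
        "SELECT NetConnectionID FROM Win32_NetworkAdapter "
        "WHERE ServiceName = 'tap0901'")   # TAP driver service name; may differ
    for adapter in adapters:
        old_name = adapter.NetConnectionID
        if old_name and old_name != NEW_NAME:
            subprocess.call(
                'netsh interface set interface name="%s" newname="%s"'
                % (old_name, NEW_NAME))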
We are trying to determine what we will need in terms of hardware and bandwidth to run a DNS server that will be authoritative for, say, 5,000 domains. The web servers get a few hits per second. (These are small sites, hence the lopsided numbers.)
We were planning on using BIND or some other unix-friendly DNS server daemon for this, probably on Linux or FreeBSD. I have no idea how many DNS queries this type of www load will generate, or how much bandwidth it would use, or how expensive the queries are in terms of processing and memory use.
Does anyone here have experience with DNS in the wild?
I have a Win 2003 / IIS 6 web server with thousands of configured sites. Two sites are independent, and the rest all share roughly the same configuration.
There is some common code in a directory, and then each site has its own classic ASP code which includes files from the common code. It is a grossly inefficient setup, because any change to the site-specific code has to be made on every site individually. It is also a waste of disk space.
Anyway, when I go to visit any one of these sites (besides the two odd ones mentioned above), I get "The system cannot find the file specified" in the browser window. This happens from any location.
There is nothing of interest in the Event Viewer. Process Monitor shows nothing about being unable to open files. I have even grepped the metabase for references to files and dirs, and have fixed the few that were in fact missing.
I can't ask for help on the specifics of these sites, but what I can ask is: where do I go from here in terms of hunting down the problem? Apache has nice error logging; IIS seems content to send invalid HTTP responses and give no useful information.
As part of a large site migration, I need to copy over numerous (about 50) SSL certificates for different sites.
I have tried to export them on the current server and then import them on the new server, with no success. I can get everything to load and work correctly, but if I run SSLDiag on the new server, I get the error "#WARNING: You DON'T have a private key that corresponds to this certificate".
I can't find any way to import the key along with the certificate. Do I need to renew each certificate, and if so, is this something that would cost money?
Edit: these servers do not have the connectivity required to use the "copy or move cert" option in the SSL wizards.
Edit2: Does renewing the cert invalidate the one on the old server?
I am trying to use xinetd (or inetd) with netcat to act as a TCP proxy. This setup works on Linux without issue.
Under Cygwin, either as a service or from a Cygwin command line, (x)inetd fails to launch netcat, with the error "no such file or directory".
I have tried specifying /usr/bin/nc, /usr/bin/nc.exe, /cygdrive/d/cygwin/usr/bin/nc.exe, d:\cygwin\bin\nc.exe, and a TON of other combinations of forward slashes, backslashes, Windows paths and Cygwin paths. No matter what, I get errno 2, no such file or directory.
Any ideas? I need this working ASAP.
Edit: I thought it might have to do with Cygwin being installed in d:\cygwin (lame hardcoding?), but I tested it on a machine with Cygwin on C:\ and the problem exists there too.
I am moving many sites from an old web server to a new one. It needs to be done as transparently as possible.
The sites are backed by a WCMS, so there is a possibility that clients can make changes to what is actually their old site. To avoid this, I was thinking of setting up either a TCP or HTTP proxy on the old machine which transparently forwards on to the new machine. This would greatly reduce pressure to deal with the DNS, which is going to be a colossal job in itself due to some poorly made decisions in the past.
Should I use an HTTP or TCP proxy, or is this just generally a bad call? Note that I am dealing with a few thousand sites basically on my own.
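For the TCP option, what I had in mind on the old machine is something as dumb as the forwarder below, one instance per listening port (the new server's address is a placeholder); I mention it mostly so people can tell me whether this is a terrible idea.

    import socket
    import threading

    NEW_SERVER = ("203.0.113.80", 80)   # placeholder for the new web server
    LISTEN_ON = ("0.0.0.0", 80)

    def pump(src, dst):
        # Copy bytes one way until that direction closes.
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass
        finally:
            # Closing both ends makes the opposite pump thread exit as well.
            src.close()
            dst.close()

    def handle(client):
        upstream = socket.create_connection(NEW_SERVER)
        threading.Thread(target=pump, args=(client, upstream)).start()
        threading.Thread(target=pump, args=(upstream, client)).start()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ON)
    listener.listen(50)
    while True:
        conn, _ = listener.accept()
        handle(conn)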
I am in the process of migrating a web server which has a few thousand small sites, and does its own DNS. Each site has a hostname in the form of "customer.ourcompany.com" and some also have "www.customersdomainname.com".
When we do the migration, the IP is going to change, so we need to update all DNS entries for all domains. Because this machine is also the authority for ourcompany.com, the IP for ns1.ourcompany.com must also be changed.
Here is the problem: for all of the client domains, we need to make sure that any glue records contain the correct IP.
Are glue records always used by registrars, even if they are not technically needed for a domain? We migrated another web server once, and I had to log in to the registrar's site (GoDaddy) and update EVERY nameserver entry by simply swapping ns1 for ns2 and vice versa. This forced GoDaddy to look up the new IPs for the nameservers and store them as glue records. I am afraid of having to do this again, but with 2,000 domains, not all at the same registrar.
Thoughts?
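To at least see where we stand, I was planning to audit what glue the TLD servers are currently handing out, roughly like this for the .com domains (dnspython assumed, the domain list is hypothetical, and 192.5.6.30 is a.gtld-servers.net as far as I know):

    import dns.message
    import dns.query

    TLD_SERVER = "192.5.6.30"                  # a.gtld-servers.net
    domains = ["example.com", "example2.com"]  # placeholder client domains

    for domain in domains:
        query = dns.message.make_query(domain, "NS")
        response = dns.query.udp(query, TLD_SERVER, timeout=5)
        print(domain)
        for rrset in response.authority:       # the delegation NS records
            print("  NS  ", rrset)
        for rrset in response.additional:      # glue A records, if any
            print("  GLUE", rrset)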
On FreeBSD systems, and presumably a bunch of others, there is an amount of space reserved for root which is MINFREE% of the total capacity. With multi-terabyte filesystems, the default of 8% is a staggering amount of space; on a 2 TB volume, for example, that is roughly 160 GB. Volumes that large could get by with less than 1% MINFREE, but of course, it must be an integer value.
Will this ever change? Does anyone even make use of MINFREE anymore?
We have a web server that runs a WCMS which allows users to add their domains to it. The DNS server is a separate machine. We are using Windows on all applicable machines.
How should the DNS records be managed remotely? We need to be able to add or remove domains from the DNS server. When adding, default records will be created, such as MX, a few A records and so forth. When removing a domain, the domain itself and all of its records would simply be pitched.
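What I am imagining is the WCMS shelling out to dnscmd against the DNS box, something like the sketch below (the server name, record data and default record set are all placeholders); I do not know if this is the sanctioned way to do it remotely.

    import subprocess

    DNS_SERVER = "dns01.example.internal"   # placeholder DNS server name

    def add_domain(domain, web_ip, mail_ip):
        # Create the zone, then a default set of records.
        subprocess.check_call(["dnscmd", DNS_SERVER, "/ZoneAdd", domain,
                               "/Primary", "/file", domain + ".dns"])
        subprocess.check_call(["dnscmd", DNS_SERVER, "/RecordAdd", domain,
                               "@", "A", web_ip])
        subprocess.check_call(["dnscmd", DNS_SERVER, "/RecordAdd", domain,
                               "www", "A", web_ip])
        subprocess.check_call(["dnscmd", DNS_SERVER, "/RecordAdd", domain,
                               "mail", "A", mail_ip])
        subprocess.check_call(["dnscmd", DNS_SERVER, "/RecordAdd", domain,
                               "@", "MX", "10", "mail." + domain])

    def remove_domain(domain):
        # Drop the zone and every record in it.
        subprocess.check_call(["dnscmd", DNS_SERVER, "/ZoneDelete", domain, "/f"])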
We deployed our new Linux/Exim/Spamassassin mail server on Friday (always a good idea to deploy the day before a long weekend when no admins are around). The load has been hovering around 1.3 on the 15-minute average.
The machine is responsive, and mails are delivered in reasonable time. Can we assume that this is acceptable?
How is a certain amount of load deemed acceptable or not acceptable? What metrics are used?
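The only rule of thumb I know is "keep the load below the number of cores", which on this box would look something like the check below; part of my question is whether that rule is even meaningful.

    import os

    # Read the 1-, 5- and 15-minute load averages from the kernel.
    with open("/proc/loadavg") as f:
        one, five, fifteen = (float(x) for x in f.read().split()[:3])

    cores = os.cpu_count() or 1
    verdict = "looks fine" if fifteen < cores else "worth investigating"
    print("15-min load %.2f on %d core(s): %s" % (fifteen, cores, verdict))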
I have often wondered why there is such a passion for partitioning drives, especially on Unixy OSes (/usr, /var, et al). This does not seem to be a common theme with Windows installations.
It seems that partitioning greatly increases the likelihood of filling one partition while others have a great deal of free space. Obviously this can be prevented by careful design and planning, but things can change. I've experienced this on machines many times, mostly on ones set up by others, or by the default install settings of the OS in question.
Another argument I've heard is that it simplifies backup. How does it simplify backup? I've also heard that it improves reliability. Again, how?
Almost 100% of the problems I have encountered with disk storage have been physical failures of the disk. Could it be argued that partitioning can potentially accelerate hardware failure, because of the thrashing a disk does when moving or copying data from one partition to another on the same disk?
I'm not trying to rock the boat too much, I would just like to see justification for an age-old admin practice.
We have two mailservers ("dubone" and "dubdeuce"), one does all the work while the other sits idle. We want to have a setup where if dubone is busy or down, dubdeuce can accept/reject/filter emails just as dubone would.
Then, once dubone is available again, dubdeuce would push all of the emails it has stored onto dubone, so that the emails are all in one place.
We are using Exim4 and Dovecot backed by MySQL, and accessible with RoundCube. Both machines run Ubuntu Linux.
How can I sync the databases between both machines so that dubdeuce is always current (which it needs to be to do its job correctly), and then how would I instruct dubdeuce to push all of its accumulated messages on to dubone?
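For the "push it back" half, my rough plan is a watchdog on dubdeuce that waits for dubone's SMTP port to reappear and then forces a queue run, roughly as below (the hostname is a placeholder, and I am assuming exim -qff is the right way to flush everything, including frozen messages). The database sync half is the part I really have no answer for.

    import socket
    import subprocess
    import time

    PRIMARY = ("dubone.example.com", 25)   # placeholder hostname

    def primary_is_up():
        try:
            socket.create_connection(PRIMARY, timeout=10).close()
            return True
        except socket.error:
            return False

    was_up = True
    while True:
        up = primary_is_up()
        if up and not was_up:
            # dubone just came back: attempt delivery of every queued message.
            subprocess.call(["exim", "-qff"])
        was_up = up
        time.sleep(60)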
I have a dedicated server with CentOS 4 on it. I want to put FreeBSD 7 on it, but I do not have console access nor physical access.
I was thinking of doing a generic install of FreeBSD on a local machine, making config tweaks such as setting up network cards, and then creating a disk image of the install. I would then write this image onto the server with netcat feeding dd. The only problem is that transferring 70 GB of data (the size of the disk) is not practical. I can't think of a good way to get around this while still being sure the system will boot FreeBSD properly when rebooted.
Of course, in order for this to work, I need to minimize reboots and try to do everything from the working Linux install. Has anyone pulled something like this off before? How did you do it?
I have a Vista machine which I connect to regularly over Remote Desktop, and it no longer works. No config changes have been made in quite some time. The only changes along those lines are regular updates with Windows Update; every time it pops up, I press Install. It is a legit copy of Vista. This problem started a week ago, after it had been working for months.
I have it set to listen on a non-standard port. If I run TCPView or netstat, I can see it listening on the port I specified. If I telnet to that port from the machine itself, I can start typing away, which proves that TCP connectivity is there.
Now, if I go to another machine on the LAN, any other machine, I cannot connect to it. I used netcat on a FreeBSD machine to perform the same test as above, and the connection simply times out. If I run netcat on the Vista machine to open some random port, I can reach that port from the FreeBSD machine without issue, so communication between the two machines is fine.
I do not have any firewalls set up on the Vista machine. There is Windows Defender, but I am never prompted to allow Remote Desktop, and given that Remote Desktop is handled by one of the core Windows executables, I don't see how Windows Defender could be affecting it, or how I would configure it if it were.
I am out of ideas. Why won't Remote Desktop accept incoming connections? I've tried rebooting, of course.
I have a DNS server (Win 2003) which handles a few hundred domains. If one of our clients changes their domain to use DNS servers which are not ours, we still have all of the records. Is there a way of automatically pruning these domains so that they don't accumulate?
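The sort of thing I am hoping for, if nothing exists, is a scheduled job that checks whether each hosted zone is still delegated to us and deletes it otherwise, along the lines of this sketch (dnspython and dnscmd assumed, names are placeholders, and I would want a dry-run mode before trusting it):

    import subprocess
    import dns.exception
    import dns.resolver

    DNS_SERVER = "dns01.example.internal"                   # placeholder server
    OUR_NS = {"ns1.ourcompany.com.", "ns2.ourcompany.com."}

    def still_delegated_to_us(domain):
        try:
            answers = dns.resolver.query(domain, "NS")      # resolve() in newer dnspython
        except dns.resolver.NXDOMAIN:
            return False        # domain is gone entirely
        except dns.exception.DNSException:
            return True         # cannot tell; keep the zone to be safe
        return any(str(rr.target).lower() in OUR_NS for rr in answers)

    # zones.txt: one zone name per line, e.g. dumped from "dnscmd /EnumZones"
    for domain in open("zones.txt").read().split():
        if not still_delegated_to_us(domain):
            subprocess.call(["dnscmd", DNS_SERVER, "/ZoneDelete", domain, "/f"])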
We are migrating just under 200 domains from a DNS server at another location (we have remote desktop access) to a server at our location. Is there a simple, scripted way of doing all of the zone transfers at once? Each server is Windows 2003, and the domains are internet domains, not Active Directory stuff.
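The closest thing to a script I have come up with is creating every zone on the new server as a secondary pulling from the old one, letting the DNS service do the transfers itself (assuming the old server permits zone transfers to the new IP), and flipping them to primaries afterwards with dnscmd /ZoneResetType ... /Primary. Something like this, if that is even a sane approach (the old server's IP and the zone list are placeholders):

    import subprocess

    OLD_SERVER_IP = "203.0.113.10"   # placeholder: the current DNS server

    # zones.txt: one domain per line, e.g. dumped from "dnscmd /EnumZones"
    for zone in open("zones.txt").read().split():
        subprocess.check_call(["dnscmd", ".", "/ZoneAdd", zone,
                               "/Secondary", OLD_SERVER_IP])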