I work in an environment with a variety of hardware, so we usually use automated installs and Group Policy instead of drive imaging to deploy Windows 7. One step we always have to do manually is to open Windows Update and tell it to upgrade to Microsoft Update. How can I deploy Microsoft Update automatically?
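For reference, the opt-in can be scripted through the Windows Update Agent COM API (Microsoft documents a VBScript version of this). A minimal Python equivalent, assuming pywin32 is installed and the script runs elevated, e.g. as a post-install task:

# Opt the machine in to Microsoft Update via the Windows Update Agent API.
# The GUID is Microsoft's published service ID for Microsoft Update.
import win32com.client

MU_SERVICE_ID = "7971f918-a847-4430-9279-0c7f9aa7e9d5"

mgr = win32com.client.Dispatch("Microsoft.Update.ServiceManager")
mgr.ClientApplicationID = "automated-install"  # any descriptive string
# 7 = asfAllowPendingRegistration | asfAllowOnlineRegistration
#     | asfRegisterServiceWithAU
mgr.AddService2(MU_SERVICE_ID, 7, "")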
After months of perfectly flat disk usage, my tempdb file suddenly grew by several gigs over the weekend. Nobody at the company is aware of anything that might have changed.
When I checked the tempdb database, it had only a few very small tables, whose names were strings of hex digits.
In searching for the cause, I found the following message repeated every few minutes for several days in the event log:
DBCC SHRINKDATABASE for database ID 2 is waiting for the snapshot transaction
with timestamp 51743762409 and other snapshot transactions linked to timestamp
51743762409 or with timestamps older than 51801253540 to finish.
I can't find any way that DBCC SHRINKDATABASE could have been run by anybody on tempdb (which is database ID 2). Microsoft's own documentation says that SHRINKDATABASE should never be run on tempdb while it's online, so I can't imagine that SQL Server is running it itself.
I'm trying to figure out:
- What could have caused such sudden rapid growth in the tempdb file? I'm not aware of any code that uses temporary tables or declares table variables on this server. What else uses the tempdb file? (See the query sketch after this list.)
- Why is DBCC SHRINKDATABASE running on tempdb at all, and why is it failing?
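A pair of queries like these (a sketch, assuming SQL Server 2005 or later, since it relies on the sys.dm_db_* DMVs) should show what is consuming tempdb while it grows:

-- Per-session tempdb use. "Internal objects" are sorts, hashes, and spools,
-- which consume tempdb even when no code creates temp tables or table
-- variables.
SELECT s.session_id, s.login_name, s.host_name,
       u.user_objects_alloc_page_count     AS user_pages,
       u.internal_objects_alloc_page_count AS internal_pages
FROM sys.dm_db_session_space_usage AS u
JOIN sys.dm_exec_sessions AS s ON s.session_id = u.session_id
ORDER BY u.internal_objects_alloc_page_count DESC;

-- The version store (behind snapshot isolation and triggers) also lives in
-- tempdb, and the "snapshot transaction" wording in the logged message makes
-- me suspect it. Pages are 8 KB, hence the arithmetic.
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM sys.dm_db_file_space_usage;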
OpenNMS's main dashboard is fantastic, showing all current outages, updated continuously. I'm also using OpenNMS to monitor thresholds - for example, to get notified whenever a disk is more than 90% full. However, I can't find any way to view all outstanding exceeded thresholds the same way I can view outstanding outages. Is this possible?
The thing is, OpenNMS can send lots of notifications (in some circumstances, one outage can generate dozens of notifications) and "threshold exceeded" notifications can get lost in the noise - if I don't catch one, then there's absolutely no indication anywhere in the OpenNMS GUI that something is currently wrong!
Is there some way to set this up? A list of outstanding issues seems like a pretty fundamental feature for an NMS.
EDIT: If it's not possible, what other tools might provide such functionality while also matching the strong graphing features of OpenNMS? (I suspect that Cacti+Nagios would do it, but I'd rather not have to manually configure two different monitoring systems for each new computer I want to monitor!)
My company has a number of different types of hardware, for servers as well as workstations. Also, we build new systems (especially servers) rarely enough that imaging is of limited utility, because the image is bound to be out of date by the time we want it again.
We're thinking about installing a virtualization tool (almost certainly VMware ESXi) on the hardware, but only installing one VM on each machine. This way, we get the benefits of hardware abstraction but still have consistent performance. We could "image" any existing machine just by copying the VM (or, the first time, using a P2V tool); load the image whenever we want in VMware Server for trivially easy updating, with the benefit of snapshots; and deploy easily to heterogeneous hardware.
The question is: what sort of performance hit can we expect? I've found lots of studies online comparing the performance of N virtual machines running on one computer for various values of N, but none that compares virtual to bare-metal. I seem to recall hearing once that Xen could cause a 20-40% degradation in I/O performance, which (if that's true, and if ESX is similar) would hit our SQL servers pretty hard.
Does anyone know about virtualized vs. bare-metal performance of Windows Server on ESX, when there's only one VM so it's not competing for resources?
Besides performance, can anyone think of other downsides to this sort of setup?
We use Windows Server 2003 for DNS on our network. The forward DNS entries ("A" records) for Windows machines on the domain are populated automatically. However, the reverse DNS entries ("PTR" records) are not. The reverse lookup zone exists, and I can add entries to it manually, but it doesn't populate automatically. Dynamic updates are enabled for both the forward and reverse zones. What am I doing wrong?
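To illustrate the symptom from a client (a minimal Python check; the host name is a hypothetical stand-in for one of our machines):

# Forward (A) lookup succeeds; reverse (PTR) lookup fails.
import socket

host = "workstation01.example.local"  # hypothetical machine name
ip = socket.gethostbyname(host)
print("A  :", host, "->", ip)
try:
    print("PTR:", ip, "->", socket.gethostbyaddr(ip)[0])
except socket.herror:
    print("PTR:", ip, "-> no reverse entry (the problem)")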
I manage a Cisco router acting as a SIP gateway. In order to get it to register with the SIP provider, the connection needs to come from the right IP address. This is done with the following lines in the router config:
voice service voip
sip
bind control source-interface FastEthernet0/1
bind media source-interface FastEthernet0/1
Recently, the router's main external interface went down briefly, and when it came back up nobody could make any phone calls. It took far too long to troubleshoot this before we discovered that those lines had vanished silently from the config, and the router was failing to register with our SIP provider, who expected the connection to come from a particular IP address!
Further testing revealed that those lines are automatically and silently removed from the config whenever the interface they refer to goes down, however briefly. I have to manually log into the router and re-enter those lines before the phones work.
How can I make sure that SIP service comes back up automatically when the network connection comes back up?
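One idea would be an Embedded Event Manager applet that re-applies the commands whenever the interface comes back up. A sketch, assuming the IOS image supports EEM and that the syslog pattern matches our actual link-up message:

event manager applet REAPPLY-SIP-BIND
 event syslog pattern "Interface FastEthernet0/1, changed state to up"
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "voice service voip"
 action 4.0 cli command "sip"
 action 5.0 cli command "bind control source-interface FastEthernet0/1"
 action 6.0 cli command "bind media source-interface FastEthernet0/1"
 action 7.0 cli command "end"

Is something like that reasonable, or is there a cleaner way to make the binding persistent?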
There is one particular Excel file that is used by a number of employees at my company. It is edited from both Excel 2003 and 2007, with the "Sharing" feature turned on to allow multiple writers at once.
The file has a decent amount of data on several sheets with some basic formatting, and used to be about 6MB, which seems reasonable for its content. But after a few weeks of editing, the file grew to 10, then 20 MB, and eventually skyrocketed to more than 150 MB, even though it still has about the same amount of data as before. It now takes 5-10 minutes to open it, and that much time again to save it.
The first time this happened, I copied the content of each sheet into a new, blank workbook, and saved the new workbook; this brought it back down to about 6MB. Now, it has blown up again.
The workbook uses the "Data Validation" feature to limit the values in certain columns to the contents of a few named ranges. Copying all the data into a new workbook means setting up all the data validation again, which is a pain and not something we want to do every month.
As a troubleshooting step, I tried saving the file in "XML Spreadsheet 2003" format, hoping to get some insight into what was being stored. Sure enough, the file was almost a gigabyte, and almost all of its 10 million lines looked like this:
<NamedCell ss:Name="Z_21D5114F_E50C_46AC_AA4F_C3FF540C717F_.wvu.FilterData"/>
<NamedCell ss:Name="Z_1EE2BA5E_3011_4F9A_8ACD_E58835250FC4_.wvu.FilterData"/>
<NamedCell ss:Name="Z_1E3BDCEA_6A72_4ECC_BF4F_7B03CC66181E_.wvu.FilterData"/>
I've seen a few VBScripts online to manage and enumerate named cells that are hidden in Excel's built-in interface, though I wonder how they'd handle my 10 million named cells. What I really need is an understanding of why this keeps happening. What actions in Excel could be causing this?
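In the meantime, here's a cleanup sketch via Excel COM automation (Python with pywin32; the path is hypothetical, sharing probably has to be turned off first, and with millions of names this may be very slow):

import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = False
wb = excel.Workbooks.Open(r"C:\path\to\shared_workbook.xls")  # hypothetical

# Snapshot the matches first; deleting while iterating a COM collection is
# unreliable. The ".wvu." infix marks the hidden per-user custom-view names.
doomed = [n for n in wb.Names if ".wvu." in str(n.Name)]
print(len(doomed), "hidden custom-view names found")
for n in doomed:
    n.Delete()

wb.Save()
excel.Quit()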
UPDATE:
Here's an experiment I tried that provides some more detail:
- I turned off sharing; the file remained huge.
- I saved the file as an .xlsx file, and it shrank to 5MB.
- Then I closed that file, opened it back up, and saved it as an .xls file, with sharing still turned off; it got huge again!
- When an '03 user tries to open that nice, compact .xlsx file, it takes several minutes to open it, even though '07 opens it fine.
So this seems to be an '03-specific issue, and saving the file in '03 format immediately recreated a bunch of junk that clearly had not been in the '07 file at all.
I'm on a Windows workstation, and I want a list of which files are open over the network on a Windows server. The Shared Folders MMC snap-in does this visually, and Sysinternals' PsFile does it from the command line, but by default only for admins. I want to let regular users do this, too. What permissions do I need to grant them?
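Under the hood, both of those use the NetFileEnum RPC; this Python sketch (assuming pywin32; the server name is hypothetical) shows the call, and the access-denied error a regular user currently gets:

import win32net

SERVER = r"\\fileserver01"  # hypothetical server name

try:
    # Level 3 returns id, permissions, lock count, path, and user name.
    files, total, resume = win32net.NetFileEnum(SERVER, None, None, 3)
    for f in files:
        print(f["user_name"], f["path_name"])
except win32net.error as e:
    print("NetFileEnum failed:", e)  # "Access is denied." for non-admins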
UPDATE: Running Sysinternals' AccessChk utility, I've found that there are lots of "weird" objects that have permissions but aren't in the filesystem, registry, or Active Directory. Run "accesschk -o" to see a list of object directories, then add the name of a directory ("accesschk -o \BaseNamedObjects", for example) to see its contents. Could the functionality I'm looking for relate to some permissions in here? If so, is there any way to edit the ACLs on these things? (Even if not, I'd still love to find out which specific object represents the ability to enumerate remotely opened files.)
On *nix, admins can use the setuid flag to allow non-admins to run certain programs that would otherwise require admin privileges. Is there any way to do something similar in Windows 7?
This question has been asked here before for Windows XP, and the answers were generally unsatisfying. I'm wondering if Windows 7 provides a better way.
One idea I can think of would be to use Microsoft's Subsystem for UNIX-based Applications, but I'd rather not install that on every user's system if I can avoid it.
Another idea I can think of (which would work on XP too, but which I haven't seen mentioned anywhere) would be to create a RunAsAdmin service that takes a whitelist of "safe" apps and can be asked (from a command line, batch file, or script) to run any program on the list as LocalSystem or whatever account the service uses. Is this possible?
Are there any solutions that aren't as clunky as those? Or, has anyone implemented either of the above techniques successfully?
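For concreteness, here is a minimal sketch of the second idea (hypothetical names and port throughout; Python, ignoring the caller authentication and Windows-service plumbing a real version would need):

import socket
import subprocess

# Hypothetical whitelist of programs the service may launch with its own
# (privileged) account.
WHITELIST = {
    "defrag": r"C:\Windows\System32\defrag.exe",
    "backup": r"C:\Tools\nightly_backup.bat",
}

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 9123))  # hypothetical port; local callers only
srv.listen(1)

while True:
    conn, _ = srv.accept()
    with conn:
        name = conn.recv(256).decode().strip()
        if name in WHITELIST:
            subprocess.Popen([WHITELIST[name]])  # inherits the service's token
            conn.sendall(b"started\n")
        else:
            conn.sendall(b"refused\n")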
Every once in a while we have occasion to change the name of a computer or user on our Active Directory domain. I've changed several computer names without running into any problems... so far. I haven't tried a username yet (to the disappointment of several users who have gotten married and changed their real names).
My question is: can you think of anything that might break if I change the logon name of a user, or the name of a computer (in System Properties > Computer Name) on a domain system? I'm thinking about domain access and authentication issues, but also things like software with draconian (and poorly-designed) license control. Of course I'm most worried about the things I haven't thought of.
My ideas: obviously, changing a server name will break any URLs or user-created references to the server -- mapped network drives, "recent files" links, bookmarks to hosted web pages, etc. Changing a domain controller name would certainly be a delicate process. I'm mostly interested in changing workstation names, though. Some users RDP into their workstations, and their saved RDP files would no longer work, but I'm not aware of any other places where a user would connect to a workstation using a stored computer name.
(I've got XP systems on a 2003 domain, but I'm also interested in Win 7 and 2008 domains.)
When logging in to my file server with Remote Desktop, I occasionally get a message saying, "Insufficient system resources exist to complete the requested service" and it fails to load my profile. I started getting this message a few months ago, at the same time that other weird and intermittent problems started occurring, like the occasional inability to open or download larger files from the server. Sometimes Remote Desktop can't connect at all, and I have to locally log into the server's console.
I've seen this message intermittently on several desktops before. Last year, half a dozen desktops in different departments, with little in common besides the hardware and the antivirus software, all started getting this message along with general instability and graphical glitches; the problem went away on its own after a couple of months. Every computer that had it, including the file server when the trouble first started, showed an unusually high handle count in Task Manager (over 100k, instead of the usual 20-30k). Most, though not all, were running some ancient, buggy software. But now it's happening on a server that is using almost no resources: both hard drives are less than half full, the commit charge is under 1.5 GB on a system with 4 GB of RAM, processor usage is below 5%, and the open handle count is under 20k.
What other resources are there that might be depleted? How might I find out, since the system doesn't seem inclined to tell me? Or is this a generic catch-all message meaning "I don't know what the $%^& is wrong"?
I do get an occasional event log message, a couple of times a week, saying, "The server was unable to allocate from the system paged pool because the pool was empty." It doesn't seem to correlate with the other symptoms, though. I have no idea what causes it, or what the system is trying to do at the moments this message appears. When I Google this message, I only find vague suggestions to make sure all my software and service packs are up to date; they are. This may be related or may be a red herring, but I'm not sure how to investigate it further since Windows gives no details.
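The standard tool for pool problems is poolmon (from the Windows support tools / Driver Kit), which breaks pool usage down by allocation tag, so a leaking tag would point at a specific driver. As a lighter-weight check, here is a sketch that reads the current pool sizes via GetPerformanceInfo (Python with ctypes, run on the server itself):

import ctypes
from ctypes import wintypes

# PERFORMANCE_INFORMATION from psapi.h.
class PERFORMANCE_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD),
        ("CommitTotal", ctypes.c_size_t),
        ("CommitLimit", ctypes.c_size_t),
        ("CommitPeak", ctypes.c_size_t),
        ("PhysicalTotal", ctypes.c_size_t),
        ("PhysicalAvailable", ctypes.c_size_t),
        ("SystemCache", ctypes.c_size_t),
        ("KernelTotal", ctypes.c_size_t),
        ("KernelPaged", ctypes.c_size_t),
        ("KernelNonpaged", ctypes.c_size_t),
        ("PageSize", ctypes.c_size_t),
        ("HandleCount", wintypes.DWORD),
        ("ProcessCount", wintypes.DWORD),
        ("ThreadCount", wintypes.DWORD),
    ]

pi = PERFORMANCE_INFORMATION()
pi.cb = ctypes.sizeof(pi)
ctypes.windll.psapi.GetPerformanceInfo(ctypes.byref(pi), pi.cb)
page = pi.PageSize
print("paged pool:   ", pi.KernelPaged * page // 2**20, "MB")
print("nonpaged pool:", pi.KernelNonpaged * page // 2**20, "MB")
print("handles:      ", pi.HandleCount)

On 32-bit Windows the paged pool is a small, fixed-size resource, and exhausting it is a classic cause of "Insufficient system resources" errors, so the event log message may not be a red herring after all.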