I am trying to move toward better Sysprep and image-management practices, and I am trying to get me and my team to use Audit Mode when modifying images, to get rid of some nasty image problems I discovered after getting started here. However, I cannot find any info on whether Audit Mode has an expiry period. When I run slmgr.vbs
it looks like it is licensed for good, theoretically (that is, until I generalize again). I have tried to find something official, but I cannot. Can someone tell me: if you run it for too long, will it eventually bomb out? I want to keep a reference computer running indefinitely and fork off prepped images every so often (say monthly or quarterly) to make things run more smoothly.
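For reference, this is roughly the cycle I have in mind on the reference machine (a sketch of my intended workflow; slmgr.vbs and sysprep themselves are standard, the sequencing is just my plan):

REM Check the licensing state and remaining rearm count on the reference box.
cscript //nologo C:\Windows\System32\slmgr.vbs /dlv
REM Drop back into Audit Mode after servicing, without generalizing.
C:\Windows\System32\sysprep\sysprep.exe /audit /reboot
REM Only when forking off a deployable image: generalize and shut down for capture.
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown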
I increasingly deal with users who complain their "computer is slow" or that "it takes forever to load" on the laptops we provide (a fairly standard build of Windows 7 32-bit with Office 2010 and a few other enterprise apps for calendaring and such). I wanted a more official way to diagnose the problem and find out whether it is BS or not, so I want to use Performance Monitor.
Outside the standard System Performance counters that are in the perfmon console by default, what do you guys use to tease out troublesome applications? Keep in mind I am very new to this, and the TechNet articles seem insufficient.
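For what it is worth, this is the kind of baseline collector I have been experimenting with so far (a sketch only; the counter set and the 15-second interval are my guesses at a reasonable starting point, not something I found recommended anywhere):

REM Create and start a counter log I can open later in the perfmon console.
logman create counter SlowLaptopBaseline -f bin -si 15 -o C:\PerfLogs\SlowLaptopBaseline -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\Memory\Pages/sec" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\Process(*)\% Processor Time"
logman start SlowLaptopBaseline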
I have read about the inverse of this on SF. To start, I know this is bad and less than optimal, but here is the situation. I assume my thinking on this is flawed and wanted to know whether I am right or wrong.
I have users authenticating to a webapp that is controlled by the shared hosting provider. It is not secured; it comes over plain HTTP on port 80. I do have control of my own secure services on 443, with a proper cert on my domain. I created a subdirectory (it is not a wildcard cert) that is just a full-page iframe pointing to the auth page of that shared hosting service. My rationale for loading an HTTP frame over an HTTPS connection is that it is loaded securely through the tunnel and routed from my server over their internal network instead of the public internet. In theory that is not as bad. Is that even a remotely safe assumption?
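In case it helps to see it, the wrapper page in that subdirectory is essentially just this (a sketch; the provider URL shown is made up):

<!DOCTYPE html>
<html>
  <head><title>Login</title></head>
  <body style="margin:0">
    <!-- Full-page frame pointing at the provider's plain-HTTP auth page. -->
    <iframe src="http://provider.example.com/auth/login" style="border:0;width:100%;height:100%"></iframe>
  </body>
</html>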
This is not a permanent thing, but I need some kludge in place until I can shift gears and get rid of this.
Can anyone show me a good example of how to match it? Maybe I have misread the documentation a bunch of times, but findstr does not even come close to how real tools like grep ought to work. The output of the following command, wmic /output:stdout csproduct get identifyingnumber, looks like so:
IdentifyingNumber
ABC1234
wmic csproduct get identifyingnumber | findstr with parameters to remove column header | clip
I am not sure what to do, because I cannot find an exact example of what I am looking for. My batch file needs just the serial number on its own. Any thoughts?
EDIT: I like PowerShell, but I really need to know how to do this in batch (must be possible) for XP machines as well. I am just surprised I could not figure this out. I mean, this should be simple!
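For what it is worth, the closest thing I have come up with so far is a pipeline along these lines (a sketch I have not verified on XP; the first findstr is meant to drop the column header and the second to drop the blank padding lines wmic emits):

wmic csproduct get identifyingnumber | findstr /v "IdentifyingNumber" | findstr /r "[0-9A-Za-z]" | clip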
Let's say I want to debug an issue where a few client computers were impacted by a long list of updates installed over the last few months. I run systeminfo | find /i "kb" > updatelist.log. Now, how can I quickly get a summary of what each update was for? There was a program that did something like this for the pre-NT6.x operating systems, but it does not really work anymore. Any useful websites that do this?
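The only kludge I have come up with in the meantime is turning the KB numbers from that log into support.microsoft.com lookup URLs so I can at least skim the descriptions by hand (a sketch; the delims trick that strips the "KB" prefix is my own hack, and the file names are just the ones I used above):

for /f "tokens=3 delims=:B " %%K in ('findstr /i /r "KB[0-9]" updatelist.log') do @echo http://support.microsoft.com/kb/%%K >> kburls.txt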
UPDATE: So maybe something like Windows Update Downloader or the MrJinje Update XML Tool is kind of what I am looking for, but neither is really scriptable or queryable.
UPDATE: As per my experience here, no one responded. If I do not hear back from TechNet forum members about it, I will post a bounty here, if it makes a difference.
I have banged my head against a wall for what seems like all week. I am going to explain my simple procedure, and how none of it, absolutely none of it, seems to work afterward, despite there being few alternatives and despite everyone on the internet insisting this is how to do it.
Diskpart Commands to Create FS Structure
REM Select the disk targeted for deployment.
REM
REM NOTE: Usually disk 0, but drive failure can make it external USB
REM media. This will erase the drive regardless!
select disk 0
REM Remove previous formatting.
clean
REM Create the System Reserved partition for the bootloader and boot files.
create partition primary size=100
REM Format the volume
format fs=ntfs label="System Reserved" quick override noerr
REM Assign the System Reserved partition the C: mount for now
assign letter=C
REM The main system partition, size not specified to occupy whole drive.
create partition primary
REM Format the volume
format fs=ntfs quick override noerr
REM Assign the OS partition the D: mount for now
assign letter=D
REM Make this the active/bootable partition.
sel disk 0
sel partition 1
active
REM Close out the diskpart session.
exit
Now, I thought this was madness, but it turns out the drive letters given to the System Reserved partition and the standard system partition (C:, commonly both the boot and system volume, where you find the Windows directory AND the bootmgr/ntldr boot files; this is where Windows 7 diverges) only apply within the Windows PE session where I run these commands, so they do not matter. See reference here. Since this needs to be BitLocker-ready, enter this crappy System Reserved partition, a separate 100 MB of awesome that sits in front of the regular boot volume. I do this, then I proceed to the next step.
Deploy System Reserved and Normal System Images
REM C is still the "System Reserved Partition", and the image is just like it sounds.
imagex /apply G:\images\systemreserved.wim 1 C:
REM D is now what will be the C: system partition on reboot, supposedly.
imagex /apply G:\images\testimage.wim 1 D:
Reboot the system
Now, the images I just captured should look good. This is not even sysprepped; I am reapplying the same fscking image I prepared on the same reference workstation hours before. The problem is I get 0xc000000e, could not detect the accessible boot device \Windows\system32\winload.exe, or different kinds of nonsense revolving around not being able to find the boot volume with all the right files. I have tried different variations of things, and none of them work. I have tried repairs with bcdboot (with and without a fresh System Reserved partition), with bootrec, and by manually editing the damn BCD store with bcdedit. I have tried finalizing the above process with and without bootsect /nt60 C: /force. I need to wrap up and automate this procedure. What am I doing wrong that makes the image not just unhappy, but downright miserable?
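For reference, this is the flavor of repair I keep retrying from the same Windows PE session right after applying the images, with the drive letters assigned above (a sketch of my attempts, not something I know to be the right incantation):

REM Rebuild the BCD store on the System Reserved volume, pointing at the applied OS image.
bcdboot D:\Windows /s C:
REM Rewrite the boot sector and MBR code for good measure.
bootsect /nt60 C: /force /mbr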
I know this is ridiculous, but our admin said he would beat me to death if I tried bridged mode, and he refuses to enable port security on our Cisco switches. Is there any way to get NAT traffic from vnet0
to go out the tun0
adapter? I cannot get traffic at all, host or guest, without being connected to the VPN anyway, so I do not need to worry about the case where it is not connected.
Below is my iptables dump, which I assume is what I will need to modify. I suspect I might also have to enable IPv4 forwarding, but I wanted more guidance than this post gave me.
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:bootps
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT udp -- anywhere anywhere state NEW udp dpt:ipsec-nat-t
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere 192.168.122.0/24 state RELATED,ESTABLISHED
ACCEPT all -- 192.168.122.0/24 anywhere
ACCEPT all -- anywhere anywhere
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
And my current adapter set: eth0, as you would assume, is my main adapter; tun0 comes from vpnc; vnet0, I assume, is for the NATing; and virbr0 is the bridging adapter that I do not and cannot use.
eth0      Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX
          inet addr:10.2.25.252  Bcast:10.2.25.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6993223 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6741080 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5811139414 (5.4 GiB)  TX bytes:3373995210 (3.1 GiB)
          Interrupt:21 Memory:fe9e0000-fea00000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:17912 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17912 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:11251659 (10.7 MiB)  TX bytes:11251659 (10.7 MiB)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.2.7.181  P-t-P:10.2.7.181  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1412  Metric:1
          RX packets:203913 errors:0 dropped:0 overruns:0 frame:0
          TX packets:215693 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:167581626 (159.8 MiB)  TX bytes:15541772 (14.8 MiB)

virbr0    Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2054 errors:0 dropped:0 overruns:0 frame:0
          TX packets:243 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:253861 (247.9 KiB)  TX bytes:36640 (35.7 KiB)

vnet0     Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2128 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42948 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:289277 (282.4 KiB)  TX bytes:2272356 (2.1 MiB)
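The closest I have gotten to a plan is something along these lines, run on the host (a sketch only, untested; it assumes the libvirt NAT subnet 192.168.122.0/24 shown above should simply be masqueraded out tun0 instead of eth0):

# Enable IPv4 forwarding for this boot.
sysctl -w net.ipv4.ip_forward=1
# Masquerade guest traffic out the VPN tunnel.
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o tun0 -j MASQUERADE
# Let forwarded traffic flow between the libvirt bridge and the tunnel.
iptables -A FORWARD -i virbr0 -o tun0 -j ACCEPT
iptables -A FORWARD -i tun0 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT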
Yes, I know the obvious ones. I am specifically looking for any doc from Microsoft/MSDN that indicates which Group Policy settings require a reboot and which do not. I love trial and error as much as the next guy, but I would love to have a document like this handy.
Installed Fedora.
# cat /etc/redhat-release
Fedora release 14 (Laughlin)
Installed offlineimap from yum, cuz I'm lazy these days.
# yum info offlineimap
Loaded plugins: langpacks, presto, refresh-packagekit
Adding en_US to language list
Installed Packages
Name : offlineimap
Arch : noarch
Version : 6.2.0
Release : 2.fc14
Size : 611 k
Repo : installed
From repo : fedora
Summary : Powerful IMAP/Maildir synchronization and reader support
URL : http://software.complete.org/offlineimap/
License : GPLv2+
Description : OfflineIMAP is a tool to simplify your e-mail reading. With
: OfflineIMAP, you can read the same mailbox from multiple
: computers. You get a current copy of your messages on each
: computer, and changes you make one place will be visible on all
: other systems. For instance, you can delete a message on your home
: computer, and it will appear deleted on your work computer as
: well. OfflineIMAP is also useful if you want to use a mail reader
: that does not have IMAP support, has poor IMAP support, or does
: not provide disconnected operation.
And, lo and behold, every time I run offlineimap from a crontab and try to redirect its output, it does not work. Below is my .offlineimaprc.
[general]
ui = TTY.TTYUI
accounts = Personal, Work
maxsyncaccounts = 3
[Account Personal]
localrepository = Local.Personal
remoterepository = Remote.Personal
[Account Work]
localrepository = Local.Work
remoterepository = Remote.Work
[Repository Local.Personal]
type = Maildir
localfolders = ~/mail/gmail
[Repository Local.Work]
type = Maildir
localfolders = ~/mail/companymail
[Repository Remote.Personal]
type = IMAP
remotehost = imap.gmail.com
remoteuser = [email protected]
remotepass = password
ssl = yes
maxconnections = 4
# Otherwise "deleting" a message will just remove any labels and
# retain the message in the All Mail folder.
realdelete = no
[Repository Remote.Work]
type = IMAP
remotehost = server.company.tld
remoteuser = username
remotepass = password
ssl = yes
maxconnections = 4
I have tried TTY.TTYUI, NonInteractive.Quiet, and NonInteractive.Basic in different variations. With or without redirection, the crontab entries I try cause problems.
$ crontab -l
*/5 * * * * offlineimap >> ~/mail/logs/offlineimap.log 2>&1
*/5 * * * * offlineimap
I always get the same damn error: ERROR: No UIs were found usable! What am I doing wrong?
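One variation I have not tried yet is forcing the UI from the command line instead of the config file, on the theory that cron's lack of a TTY is what breaks TTY.TTYUI (a sketch; my reading of the man page says -u overrides the ui setting and -o runs a single sync):

*/5 * * * * /usr/bin/offlineimap -o -u Noninteractive.Basic >> ~/mail/logs/offlineimap.log 2>&1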
So this is not a very specific question, but how do people reuse their SSH keys? I mean, I wanted to set up a GitHub account, and I also have a key pair for logging into a machine at home remotely. Now, maybe I did not massage my Google search terms correctly, but is it considered poor form to use the same key pair for the convenience factor? I know security people will probably yell "HELL NO" at me, but how do you sysadmins handle this in practice?
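For context, the alternative I am weighing is one key pair per service, kept straight with an ~/.ssh/config along these lines (a sketch; the host names and key file names are made up):

# Generated with: ssh-keygen -t rsa -f ~/.ssh/id_rsa_github
#                 ssh-keygen -t rsa -f ~/.ssh/id_rsa_home
Host github.com
    User git
    IdentityFile ~/.ssh/id_rsa_github
Host home.example.org
    IdentityFile ~/.ssh/id_rsa_home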
I have tried wget -m, wget -r, and a whole bunch of variations. I am getting some of the images on http://site.com, one of the scripts, and none of the CSS, even with the fscking -p parameter. The only HTML page is index.html, and there are several more referenced, so I am at a loss. curlmirror.pl from the cURL developers' website does not seem to get the job done either. Is there something I am missing? I have tried different levels of recursion with only this URL, but I get the feeling I am missing something. Long story short, a school allows its students to submit web projects, and they want to know how to collect everything for the instructor who will grade it, instead of him going to all the externally hosted sites.
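The fullest invocation I have tried so far looks roughly like this (site.com standing in for the real host; --adjust-extension and --convert-links are just my guesses at making the result browsable offline):

wget --mirror --page-requisites --convert-links --adjust-extension -e robots=off http://site.com/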
UPDATE: I think I figured out the issue. I thought the links to the other pages were in the index.html page that downloaded. I was way off. It turns out the footer of the page, which has all the navigation links, is handled by a JavaScript file, Include.js, which reads JLSSiteMap.js and some other JS files to do page navigation and the like. As a result, wget does not pick up the other dependencies, because a lot of this crap is handled outside the web pages themselves. How can I handle such a website? This is one of several problem cases. I assume little can be done if wget cannot parse JavaScript.
So the Sysinternals guys have that cool contig.exe utility that lets me ensure a file is contiguous. I need to copy over ISO files to a FAT32 USB flash key. Grub4DOS requires the files to be contiguous, but I do not have Windows access at the moment. Is there a way to copy a file so that it is contiguous on the target drive, or a tool like the aforementioned that will make an existing file contiguous? Again, I need it on FAT32, and therein lies the rub.
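The only workaround I have thought of so far is starting from a freshly formatted key and copying the ISOs one at a time, on the theory that a filesystem with no holes in it has nothing to fragment into (a guess on my part, not something I have verified; /dev/sdX1 is a placeholder for the real partition):

mkfs.vfat -F 32 -n BOOTKEY /dev/sdX1
mount /dev/sdX1 /mnt/usb
cp /path/to/image.iso /mnt/usb/
umount /mnt/usb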
I wanted to know what people do to test drives before installing them in a RAID. I have looked at badblocks. Would a write-mode pass, prior to creating the filesystem, be sufficient in your minds?
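To be concrete, this is the destructive write-mode pass I had in mind, run against each bare drive before it ever joins the array (standard badblocks flags; /dev/sdX is a placeholder):

badblocks -wsv /dev/sdX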
Is there any way to check this via VBScript or PowerShell? I have briefly looked at the SecurityCenter and SecurityCenter2 WMI classes, but neither of them looks especially useful. It appears the easiest way is to read the value of productState from the latter and translate it into some message that means the AV thinks it is OK. Any other thoughts?
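This is roughly what the productState route looks like in PowerShell (a sketch; decoding the middle byte of the bitmask as "enabled" is my interpretation pieced together from forum posts, not anything official I could find):

$products = Get-WmiObject -Namespace "root\SecurityCenter2" -Class AntiVirusProduct
foreach ($p in $products) {
    $hex = "{0:X6}" -f $p.productState
    # Middle byte 0x10 appears to mean the product is enabled.
    $enabled = $hex.Substring(2,2) -eq "10"
    "{0}: productState=0x{1} enabled={2}" -f $p.displayName, $hex, $enabled
}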
I was totally unaware of native SMB/CIFS support in ZFS. This wiki doc does not mention performance differences. What kind of performance differences exist between the two (the in-kernel SMB service versus Samba)?
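For clarity, the two setups I am comparing are along these lines (a sketch; the pool/dataset name is made up):

# Native in-kernel sharing, enabled per dataset:
zfs set sharesmb=on tank/share
# versus leaving sharesmb off and exporting the same mounted dataset through a [share] stanza in Samba's smb.conf.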
I am running out of ideas. After a long morning of testing, I cannot seem to get this to work, and I have no idea why. I want to mount a Windows SMB/CIFS share from a Debian 5.0.4 VM, and it is not cooperating.
This is the command I am using:
debianvm:/home/me# whoami
root
debianvm:/home/me# smbclient --version
Version 3.2.5
debianvm:/home/me# mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share --verbose -o user=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD/username
mount.cifs kernel mount options: unc=//hostname.domain.tld\share,ip=10.212.15.53,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,ver=1,rw,user=username,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,pass=*******************
mount error 5 = Input/output error
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
debianvm:/home/me#
The word on the nets has not been very specific, and unfortunately it is almost always environment-specific. I receive no authentication errors. I have tried mount -t smbfs and mount -t cifs, along with smbmount and such, and I get the same error as before. I doubt it is a problem with DNS resolution, because the logging shows the correct IP address. dmesg | tail -f no longer shows authentication errors now that I format the domain and username accordingly. I have played a little with iocharset=utf8, file_mode, and dir_mode as described here. That did not help either. I have also tried ntlm and ntlmv2, assuming it might be a minimum auth method problem, but even without forcing sec=ntlmv2 it still authenticates without errors. smbclient -L hostname.domain.tld -W SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD -U username correctly lists all the shares and shows the following:
Domain=[SUBADDOMAIN] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]
Sharename Type Comment
--------- ---- -------
IPC$ IPC Remote IPC
ETC$ Disk Remote Administration
C$ Disk Remote Administration
Share Disk
Connection to hostname.domain.tld failed (Error NT_STATUS_CONNECTION_REFUSED)
NetBIOS over TCP disabled -- no workgroup available
I find the last line intriguing/alarming. Does anyone have any pointers!? Maybe I misread the effin manual.
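For completeness, the exact variation I keep coming back to is this one, with the domain passed as its own option rather than with the slash syntax, and NTLMv2 forced explicitly (already tried; listed here just so it is on record):

mount -t cifs //hostname.domain.tld/share /mnt/hostname.domain.tld/share --verbose -o username=username,domain=SUBADDOMAIN.ADDOMAIN.DOMAIN.TLD,sec=ntlmv2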
I assume there is an easy solution to this, but I prefer to ask before mucking up our intranet server. During a support session, my co-worker and I realized we could log in with our Kerberos credentials over SSH, but not at the console (in this case the VI Client for ESXi, but that does not really matter). So, do I just modify the login PAM config?
Current state:
# PAM configuration for the "sshd" service
#
# auth
#auth sufficient pam_opie.so no_warn no_fake_prompts
#auth requisite pam_opieaccess.so no_warn allow_local
#auth sufficient pam_ssh.so no_warn try_first_pass
auth sufficient pam_krb5.so try_first_pass
auth required pam_unix.so try_first_pass
# account
#account required pam_nologin.so
#account required pam_login_access.so
account sufficient pam_krb5.so try_first_pass
account required pam_unix.so
# password
#password sufficient pam_krb5.so no_warn try_first_pass
#password required pam_unix.so no_warn try_first_pass
password required pam_permit.so
# session
#session optional pam_ssh.so
#session required pam_permit.so
session required pam_permit.so
#
#
# PAM configuration for the "login" service
#
# auth
auth sufficient pam_self.so no_warn
auth include system
# account
account requisite pam_securetty.so
account required pam_nologin.so
account include system
# session
session include system
# password
password include system
Any hints or tips welcome.
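The change I have in mind, modeled on the working sshd block above, would be to slip pam_krb5 into the "login" service's auth stack ahead of the include (a guess on my part; the alternative would be putting it in the shared "system" policy that login already includes):

# auth
auth            sufficient      pam_self.so             no_warn
auth            sufficient      pam_krb5.so             try_first_pass
auth            include         system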
We have a file share we want to roll out at work, and someone asked whether there is a way for OS X clients to see VSS previous versions on a network share they mount (to restate: an SMB/CIFS share on a server on the network, not a local HFS+ drive) in order to restore older copies of a file. Quick searches on Google seem to indicate that not many people have an interest in this or a business requirement for it, or they misunderstand the question (they assume it is asking whether Apple has an equivalent technology; I am not interested in that). Does anyone know? I am at the office right now and do not have access to a MacBook. I am only interested in newish OS X releases, so 10.5.x to 10.6.x.
UPDATE: Since this is really vendor-specific (in terms of the SMB/CIFS appliance/server), I will accept the answer specific to NetApp, since that is the most common scenario judging by how the Google results stack up.
So, I am trying to deploy several MSI packages via a GPO. All of the installers are on an external file share. Fortunately for me, this file share died and I was put in charge of salvaging what I could. The solution, given the indifference of many around here, was setting up a DFS consolidation root pointing \\ournewshareserver\sameshare to \\oldservername\sameshare, so the old paths could be maintained during the ongoing backup/transition to a new, more powerful file server. I hope it makes sense up to now.
So, after restoring and configuring everything, it looks good. Well, kind of, at least. A lot of computers are having trouble with the MSI packages. I am perplexed, because if I copy and paste the paths over RDP (I was meticulous about keeping them identical, to avoid having to ask Server Fault ;-)), the installers run fine interactively. It is only through GPO software deployment that this junk keeps failing. I see errors like the following all the time now in the System event log of the affected clients (running a relatively fresh Windows 7 image).
Log Name:      System
Source:        Application Management Group Policy
Date:          8/23/2010 8:28:12 AM
Event ID:      101
Task Category: None
Level:         Warning
Keywords:      Classic
User:          SYSTEM
Computer:      COMPNAME.fqdn
Description:
The description for Event ID 101 from source Application Management Group Policy cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Adobe Acrobat 9.3.3 Pro GPO Name Here 1274
The handle is invalid
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Application Management Group Policy" />
<EventID Qualifiers="0">101</EventID>
<Level>3</Level>
<Task>0</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2010-08-23T12:28:12.000000000Z" />
<EventRecordID>19556</EventRecordID>
<Channel>System</Channel>
<Computer>COMPNAME.fqdn</Computer>
<Security UserID="S-1-5-18" />
</System>
<EventData>
<Data>Adobe Acrobat 9.3.3 Pro</Data>
<Data>GPO Name Here</Data>
<Data>1274</Data>
</EventData>
</Event>
This is not the only package with this problem; a bunch of them fail the same way. Again, the paths are the same, just served through DFS, and permissions have not changed. Because of the 1274 errors, I have disabled logon optimization, as many people on the tubes suggest. Four, five, six reboots later, nothing special has changed and they still do not install. Anyone have a clue before I pull my own hair out?
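One check I have been meaning to run, on the theory that GPO software installation hits the share as the computer account rather than as me, is to open a SYSTEM shell with Sysinternals PsExec on an affected client and try the same DFS path from there (a sketch; the MSI sub-path shown is hypothetical):

REM Open an interactive shell running as SYSTEM.
psexec -i -s cmd.exe
REM Then, inside that shell, test reachability and a manual install from the DFS path.
dir \\ournewshareserver\sameshare
msiexec /i "\\ournewshareserver\sameshare\AcrobatPro\AcroPro.msi" /qb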