I'm trying to collect data on how much memory is being used on our Windows servers using the AWS CloudWatch Agent. To do this, I have to specify which performance counters to collect, but I cannot find a performance counter that corresponds to "In Use" under the Memory section in Task Manager. What am I missing?
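For context, here's how I've been hunting for it: listing the Memory counter set and sampling the closest candidates I can find. (A sketch; my working assumption is that Task Manager's "In Use" is roughly physical RAM minus "Available Bytes", but I haven't confirmed that.)

    # List every counter in the Memory set, looking for an "In Use" equivalent.
    (Get-Counter -ListSet Memory).Counter

    # Sample the closest candidates I've found so far.
    Get-Counter '\Memory\Available Bytes', '\Memory\Committed Bytes', '\Memory\% Committed Bytes In Use'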
We currently have a PPTP VPN, but we have a couple of people off site on a network which only allows outgoing http/https and ssh, and it appears they cannot connect to our VPN because of restrictions on that network.
I'm imagining software that could run on their laptops which sets up a virtual NIC on a private IP address and then forwards all traffic via ssh to a machine on our network.
Our users are running on Windows and on Macs.
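For illustration, the closest existing mechanism I know of is OpenSSH's tun support, which would look something like this on the Mac/Linux side (a sketch only: the gateway name and addresses are placeholders, it needs root on both ends plus PermitTunnel yes in sshd_config, and it doesn't cover the Windows laptops):

    # Open an ssh session that creates tun0 on both ends (-w local:remote).
    ssh -f -N -w 0:0 root@gateway.example.com

    # Give each end of the tunnel a private point-to-point address.
    ip addr add 10.99.0.2/30 dev tun0 && ip link set tun0 up
    # on the gateway: ip addr add 10.99.0.1/30 dev tun0 && ip link set tun0 up

    # Route office traffic over the tunnel (192.0.2.0/24 is a placeholder).
    ip route add 192.0.2.0/24 via 10.99.0.1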
We would like to be able to roll out new versions of an application in RemoteApp without having to kick off users. One idea was to publish the new version of the application in a different directory, say APP/VERSION_XXX, and then update the location. There doesn't appear to be a way to change the location via the management tools, but is there a way to do so via PowerShell?
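For what it's worth, on 2012 R2 the RemoteDesktop module looks like it has a cmdlet for exactly this; something along these lines (a sketch: the collection name, alias, and broker are my placeholders, and I haven't verified how it treats active sessions):

    Import-Module RemoteDesktop

    # Repoint the published RemoteApp at the newly published version.
    Set-RDRemoteApp -CollectionName "Apps" `
                    -Alias "MyApp" `
                    -FilePath "C:\APP\VERSION_XXX\MyApp.exe" `
                    -ConnectionBroker "broker.ourdomain.com"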
I'd like to script adding a share to an smb.conf file. My current script just appends it to the end of the file, but that's not ideal. I'd rather have something that adds a new share if it doesn't exist and replaces it if it does.
I'm currently scripting this on a CentOS 7 distro, but would ideally like something that would work across distros, though that's not a requirement.
Also, I'm using bash to do this because the script is run before other packages are added to the system. The script uses yum to install the samba packages, and then is supposed to configure them and add shares.
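To make the question concrete, this is the shape of what I'm after: a minimal sketch, assuming each share's section header sits on its own line as [name] with no leading whitespace (the function name and paths are mine):

    #!/usr/bin/env bash
    # Idempotently add or replace a share definition in smb.conf.
    set -euo pipefail

    SMB_CONF=/etc/samba/smb.conf

    add_or_replace_share() {
        local name=$1 path=$2

        # Strip any existing [name] section: skip from its header until
        # the next section header (or EOF), print everything else.
        awk -v section="[$name]" '
            $0 == section     { skipping = 1; next }
            skipping && /^\[/ { skipping = 0 }
            !skipping         { print }
        ' "$SMB_CONF" > "$SMB_CONF.tmp" && mv "$SMB_CONF.tmp" "$SMB_CONF"

        # Append the (re)defined share at the end of the file.
        cat >> "$SMB_CONF" <<EOF

    [$name]
       path = $path
       browseable = yes
       read only = no
    EOF
    }

    add_or_replace_share projects /srv/samba/projects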
I'm using Hyper-V without System Center and trying to create "templates" for Linux servers that I can then reuse as base installs. The way we have done it is by creating a machine, let's call it "Template_CentOS7", running the install, setting up what we need, etc. Then I shut down the machine and copy the vhdx file to our template directory: D:\Templates\Template_CentOS7.vhdx
When I need a new instance for a machine Machine_XXX, I do the following:
- Copy the template to a new directory, i.e. D:\Hyper-V\Machine_XXX\Virtual Hard Disks\Template_CentOS7.vhdx
- Rename the file to Machine_XXX.vhdx
- Run Hyper-V Manager to create the new machine via New -> Virtual Machine.
- When prompted to create a new drive, I point it to the new file.
Now here is the question: we've made some changes to the base template, and we took a checkpoint after the change. Now I shut down the Template_CentOS7 machine so I can copy the vhdx file into our templates directory, but there are several files: Template_CentOS7.vhdx and a bunch of Template_CentOS7GUID.avhdx files. I'm not sure what I should do next. The Template_CentOS7.vhdx file has a fairly old modify time, so I don't think it includes the changes I've made.
What do I need to do to use this new "template"?
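My current guess, from reading about checkpoints, is that the .avhdx files are differencing disks and that deleting the checkpoints should merge them back into the base .vhdx. Something like this is what I'd try (a sketch, untested; the disk path is an assumption):

    # Removing a checkpoint makes Hyper-V merge its .avhdx into the parent.
    Get-VMSnapshot -VMName "Template_CentOS7" | Remove-VMSnapshot

    # The merge continues in the background; wait until no .avhdx remains
    # before copying the .vhdx to the templates directory.
    $diskDir = "D:\Hyper-V\Template_CentOS7\Virtual Hard Disks"   # assumed location
    while (Get-ChildItem $diskDir -Filter *.avhdx -ErrorAction SilentlyContinue) {
        Start-Sleep -Seconds 5
    }

Is that the right approach, or is there a supported way to flatten a checkpointed disk for use as a template?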
I have an SMB share on a Linux box set up that I can view in Explorer:
\\XXX.YYY.ZZZ.QQQ\Share
In this share is a batch file:
\\XXX.YYY.ZZZ.QQQ\Share\Scripts\Script.bat
I can open the file in Notepad from Explorer, using right-click "Edit", and even edit the file.
If I double-click on the batch file in Explorer, or if I have a shortcut to the file on the desktop, I'm initially prompted with an "Open File - Security Warning", but when I click "Run" I get an error message:
Network Error
Windows cannot access \\XXX.YYY.ZZZ.QQQ\Share\Scripts\Script.bat
You do not have permission to access \\XXX.YYY.ZZZ.QQQ\Share\Scripts\Script.bat.
Contact your network administrator to request access.
The odd thing is that if I open a cmd window and simply type "\\XXX.YYY.ZZZ.QQQ\Share\Scripts\Script.bat", the script runs with no problems.
I've removed the HWADDR from ifcfg-eth0 and touched /.unconfigured, but when I restart the virtual machine, HWADDR is not added back to the ifcfg-eth0. Is there something else we should be doing?
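In case it matters, here is the full cleanup I'm planning, pieced together from various guides (a sketch; the paths are the CentOS defaults, and the udev rules file may not exist on CentOS 7):

    # Remove the MAC address and UUID pinned to this template's NIC.
    sed -i '/^HWADDR=/d;/^UUID=/d' /etc/sysconfig/network-scripts/ifcfg-eth0

    # Drop the cached MAC-to-interface mapping so a clone's new MAC
    # still comes up as eth0.
    rm -f /etc/udev/rules.d/70-persistent-net.rules

    # Trigger first-boot reconfiguration.
    touch /.unconfigured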
We've got a Hyper-V server set up, and the layout of the files is inconsistent because it was set up by several people. Here are the two different "templates" that were used:
Template 1
D:\Hyper-V\Virtual Machines\MACHINE_NAME_1\Virtual Hard Disks\MACHINE_NAME_1.vhdx
D:\Hyper-V\Virtual Machines\MACHINE_NAME_1\Virtual Machines\GUID_1
D:\Hyper-V\Virtual Machines\MACHINE_NAME_1\Virtual Machines\GUID_1.xml
D:\Hyper-V\Virtual Machines\MACHINE_NAME_2\Virtual Hard Disks\MACHINE_NAME_2.vhdx
D:\Hyper-V\Virtual Machines\MACHINE_NAME_2\Virtual Machines\GUID_2
D:\Hyper-V\Virtual Machines\MACHINE_NAME_2\Virtual Machines\GUID_2.xml
....
and
Template 2
D:\Hyper-V\Virtual Hard Disks\MACHINE_NAME_1.vhdx
D:\Hyper-V\Virtual Hard Disks\MACHINE_NAME_2.vhdx
D:\Hyper-V\Virtual Machines\GUID_1
D:\Hyper-V\Virtual Machines\GUID_1.xml
D:\Hyper-V\Virtual Machines\GUID_2
D:\Hyper-V\Virtual Machines\GUID_2.xml
Template 1
The argument made FOR Template 1 was that when you export a VM, the export creates a folder with the machine name and puts the disks and the VM definition in separate subfolders. You can then simply point to the machine directory when you run an import.
The argument AGAINST this template style is that it doesn't make sense for there to be a directory called Virtual Machines if there is only one file in it. The other argument against it is that the Hyper-V server itself seems to expect all hard disks to be in one folder and all virtual machines in a different folder, i.e. it doesn't create separate folders for each VM (except for the ones named by GUID in the Virtual Machines directory).
Template 2
The argument FOR Template 2 is that it matches the layout Hyper-V itself seems to expect.
The argument AGAINST Template 2 is that you can't tell which virtual machine files are associated with a specific machine unless you look inside the XML files.
I'd love to hear about any pitfalls to either layout.
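One data point for the "what Hyper-V expects" argument: the host keeps a single default folder for disks and a single one for machine configs, which you can check with (a sketch):

    # Host-wide defaults: one folder for all disks, one for all VM configs,
    # with no per-machine subfolders.
    Get-VMHost | Select-Object VirtualHardDiskPath, VirtualMachinePath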
I thought that Puppet was written in Ruby, so I'm not sure why the puppetmaster service can't run under Windows. Does anyone have an idea why? Note that I'm trying to figure out if there is a technical reason.
We have an SMB share on a Linux box which is used as a network share for Windows machines. We put an executable on it for everyone to use. The issue is that if anyone has the application running, we can't update the file on the share.
The strange thing is that if you delete the file from a Windows machine, the delete appears to complete successfully, but when you refresh the directory, the file appears again. Additionally, if you try to copy over the file, it simply hangs; it does not give a permission error.
I would expect either that a user is denied permission to delete a file because someone else has it open, or that the delete actually goes through. The weird thing is that if you delete the file and the other user then closes it, it suddenly disappears, which is not good.
Ideally there would be a way to tell smbd to not allow anyone to take a lock on a file for a particular share. If someone deletes a file, it should get deleted, even if another user has it open.
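The closest knobs I've found in smb.conf are the ones below, but I don't know whether they affect share modes and delete-on-close semantics or only byte-range locks and oplocks (a sketch; the share name is a placeholder):

    [apps]
       path = /srv/apps
       # Candidate settings; unverified whether they produce the
       # "deletes always win" behavior described above.
       oplocks = no
       level2 oplocks = no
       locking = no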
We have an application running as a RemoteApp on a 2008 RDP server. What is the proper way to push new versions of the app?
We don't currently have an msi for it, but we could make one.
Right now we just copy over the files, but that doesn't seem to be an ideal solution.
We installed RDS, all roles, on a machine in one domain, DomainA. We've decided to move the machine to a domain in a different forest, DomainB. Our admins simply joined the machine to the new domain, but now it appears that RDS has pointers to the old machine name, i.e. machine.DomainA.
We can remove the licenses and re-add them, but there seems to be information stored somewhere telling RDS that the old machine name is still associated with the "Deployment".
It doesn't appear that the information is stored in AD, because after we join the machine to the new domain and reboot, it still thinks that MachineA.DomainA is part of the RDS deployment.
The question is, how do you move a machine which is running RDS to a new domain, or is it impossible to do so?
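In case it helps diagnose, one place the stale reference should be visible is the deployment's server list (a sketch; requires the RemoteDesktop module, and the broker name is a placeholder):

    # Lists every server the deployment still references; we expect the
    # old MachineA.DomainA entry to show up here after the move.
    Get-RDServer -ConnectionBroker "machine.DomainB.com"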
I typically like to set up separate logins for myself: one with regular user permissions, and a separate one for administrative tasks. For example, if the domain were XXXX, I'd set up an XXXX\bpeikes and an XXXX\adminbp account. I've always done it because frankly I don't trust myself to be logged in as an administrator, but in every place I've worked, the system administrators seem to just add their usual accounts to the Domain Admins group.
Are there any best practices? I've seen an article from MS which does appear to say that you should use Run As and not log in as an admin, but they don't give an example of an implementation, and I've never seen anyone else do it.
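For concreteness, the workflow I have in mind is logging in as XXXX\bpeikes and elevating per task, e.g.:

    rem Run AD Users and Computers under the admin account only when needed.
    runas /user:XXXX\adminbp "mmc.exe dsa.msc"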
Let's say you are running domain X, but want your internal DNS server to handle requests for both domain X and domain Y, such that if machine1.X resolves to a.b.c.d, machine1.Y resolves to the same IP address.
Basically we would like to have a single master set of DNS records but a second domain with all the same records. We're in the midst of doing a migration, and it would be great if we didn't have to remember to update both sets of domain records all the time.
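If it helps frame answers: our internal server could be BIND, and there I believe you can point two zone statements at the same zone file, provided the records use @ and relative names rather than FQDNs (a sketch; the file name is a placeholder):

    // named.conf: serve one set of records as both domain X and domain Y.
    zone "X" {
        type master;
        file "db.shared";
    };
    zone "Y" {
        type master;
        file "db.shared";
    };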
We have an internal desktop application which we have deployed to a network share. In that directory are subdirectories for each version, i.e.:
z:\Apps\ApplicationX\1.0
z:\Apps\ApplicationX\2.0
z:\Apps\ApplicationX\2.1
I would like to put a shortcut on the user desktops which points to the newest version of the application. My current solution is to put the current version into a directory called z:\Apps\ApplicationX\Current and put a shortcut on their desktop to the exe there.
The problem is that when they are running the application, I can't update the application because the file is locked.
I tried changing the shortcut to point to a batch file which copies the file locally and then runs it from there, but we're all on Windows 7, and UAC is causing issues copying the file to the C: drive where I would expect it to be installed.
I suppose I could copy the executable to the user's home drive and run it from there, but I don't like the idea of having an executable in the user's home drive. It also means that there are multiple copies of the application on the network, which I'm not a fan of.
I also thought that I might be able to have a shortcut to a shortcut and I would just update the shortcut, but that doesn't work either.
My current solution is to have a batch file which has the start command in it pointing to the current version, and a shortcut to that batch file.
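i.e. a variant like this, combined with the copy-local idea (a sketch; ApplicationX.exe is a placeholder, and %LOCALAPPDATA% is my guess at a copy target that sidesteps the UAC problem, since it's per-user writable):

    @echo off
    rem Copy the current build to a per-user folder (no elevation needed,
    rem unlike C:\Program Files) and launch it from there.
    set DEST=%LOCALAPPDATA%\ApplicationX
    if not exist "%DEST%" mkdir "%DEST%"
    copy /y "z:\Apps\ApplicationX\Current\ApplicationX.exe" "%DEST%" >nul
    start "" "%DEST%\ApplicationX.exe"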
Does anyone else have possible solutions?
We have a Nexenta file server which is using domain authentication for users. All of the Windows 7 machines on our network can connect and use the shares on it without any issues using either \\XX.YY.ZZ.AA\share or \\fileserver\share.
We added a new Windows 7 machine to our domain, and for some reason I can't access the file server using either \\XX.YY.ZZ.AA\share or \\fileserver\share. I can ping and connect to the file server's web interface from the new machine, but cannot connect to shares even when logged onto this new machine with a user account which can access the share from other working Windows 7 machines.
When I attempt to connect by IP address I get the error:
Check the spelling of the name. Otherwise, there might be a problem with your network. To try to identify and resolve network problems, click Diagnose.
When I attempt to connect by machine name I get the error:
\\fileserver\share is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions.
The inability to connect to the share by IP number seems extremely odd to me.
New Info (1): Another tidbit of information. While connecting was working from my Windows 7 machine, I ran ipconfig /flushdns, and suddenly it stopped working. I can't connect to it by IP or by name now.
New Info (2): To clarify New Info (1), the file server has two IP numbers: one which it uses just to connect to its SAN, the other to connect to the general network. When I CAN NOT connect to it, I can ping it without any issue, i.e. I see:
ping fileserver
Pinging fileserver.domain.com XXX.XXX.XXX.XX with 32 bytes of data:
Reply from XXX.XXX.XXX.XXX: bytes=32 time<1ms TTL=254
If I run ipconfig /flushdns, it will occasionally pick up the SAN interface IP for that name. Now when I ping the fileserver, I can't reach it (as expected)
ping fileserver
Pinging fileserver.domain.com YYY.YYY.YYY.YYY with 32 bytes of data:
Timeout
BUT, and here is the weird thing, I now CAN connect to the share \\fileserver.
I really wish MS gave you a better way to turn on logging in the OS. I have a feeling that what is going on is that the client tries to look up the server name using DNS and attempts to connect, and that when it can't (because DNS is returning the IP of the SAN interface I can't reach), it falls back to NetBIOS, which for some reason makes it work.
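To gather evidence for that theory, this is what I've been running after each flushdns (plain command-line checks; fileserver and the addresses are as above):

    rem Which IP does DNS return for the name right now?
    nslookup fileserver

    rem What does NetBIOS have registered/cached for it?
    nbtstat -a fileserver
    nbtstat -c

    rem Does SMB work by name and by each of the two addresses?
    net view \\fileserver
    net view \\XXX.XXX.XXX.XXX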
We have a Windows server connected to a switch, processing a large quantity of data. We noticed that when we disabled flow control on the network adapter, we appeared to get much better performance. We occasionally get dropped packets, which we are OK with, but the rest of the time we appear to get much better throughput.
We would like to verify that when flow control is enabled, the server is indeed sending PAUSE frames. I was under the impression that Wireshark would not be able to see these packets because they don't get passed to the OS, but on Wikipedia's entry for flow control, http://en.wikipedia.org/wiki/Ethernet_flow_control, there is an image of a Wireshark screenshot of an Ethernet PAUSE frame.
In what scenarios is Wireshark able to see PAUSE frames?
We are using a third-party library in one of our applications. We would like to find a tool which will list all of the open sockets on the machine AND give us the ability to see the various TCP properties associated with each socket, such as:
- SendBufferSize
- ReceiveBufferSize
- NoDelay (Nagle)
- DontFragment
- TTL
We've spoken to the vendor, and although they have disabled Nagle on their server, we would like to know exactly how the sockets are being created with their library on our servers.
Any tools out there for this?
I'm trying to figure out what exactly happens when a machine is added to a domain. Once you type in the domain name:
1) What protocol does the machine use in order to figure out which domain controller to use?
2) How is the domain name looked up? Example: the domain is set up as dc=company,dc=com, but the "Windows" domain is COMPA. Somehow these names are mapped to each other.
I know that Active Directory and DNS are tightly integrated, but I don't quite understand the details. What is the best source of information on the technical details? Most of what I can find tells you HOW to get things done, but not what happens under the covers.
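For what it's worth, the one detail I have confirmed is that clients locate domain controllers through DNS SRV records, which you can query directly (substitute your own domain):

    nslookup -type=SRV _ldap._tcp.dc._msdcs.company.com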
We've been having some weird issues while migrating domains from one forest to another. We tried turning off the domain controllers for the old domain, and all of a sudden a share which we used to have access to, on yet another domain in its own forest, disappeared.
I'm having problems understanding how Windows clients resolve shares, i.e. when I type in \\XXXXXX\yyy\zzzz, what exactly does Windows do in order to figure out which server to connect to? Note that we used to have a DFS share on the old domain as well, so that plays a part.
Are there any tools that help you track down what Windows is doing? i.e. something that would log something like this if I gave it the path \\XXXXXX\yyy\zzzz:
Checking for name in local NETBIOS.
Did not find name in NETBIOS.
Checking DFS cache...
Found in DFS cache, using \\computer.domain.com\share for \\XXXXX\yyy\zzzz
Here is the whole story. We are migrating from one domain, D1.xxxx.com, to D2.yyyy.com in a different forest. We have a cross-domain trust in place, and everything has been moved except DFS. We've decided to stop using DFS in domain 2 because of all the headaches we've had. Instead we'll have a host in D2 called D1 which will serve the whole DFS. We've set it up and copied all the files. We then turned off all domain controllers for D1 and removed the trust. Now, I would expect machines on the D2 domain to go to the share called root on D1.yyyy.com when I type \\D1\root\things. For some reason that doesn't seem to happen, and I can't figure out why. I tried using dfsutil /pktflush on a client machine and still no deal.
What I would like to do now is see exactly what my client machine is doing when attempting to connect to \\D1\root\things. Seeing what happens on the network doesn't help. I know (or am fairly sure) it's still trying to go to the old DFS share.
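The closest I've gotten to a trace is dumping the client's DFS referral cache around a single attempt (a sketch; /pktflush and /pktinfo are the older dfsutil spellings, newer builds use dfsutil cache referral):

    rem Flush, attempt the path once, then inspect the referral cache.
    dfsutil /pktflush
    dir \\D1\root\things
    dfsutil /pktinfo

    rem Rule out a stale NetBIOS answer for the D1 name.
    nbtstat -c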