I helped out a friend and we are trying to clean up an AD with messed-up user data. The UPN, first name, last name and e-mail addresses are correct, but fields like "manager" or cost center are not filled in correctly.

My first thought was to hand out a form to everyone digitally, have people enter the right information, and insert it using PowerShell, and later make sure the info cannot change without the right approvals. But for some reason I feel there must be a better tool for that. Can anyone advise how this can be done in a smart way? Maybe there is a tool where "Mr. Bond" can enter his info and name his "Boss" as manager; later the boss presses a button to approve this info and the tool writes it back to AD. Whoever didn't fill in a boss gets emailed. Or do I have to build this on my own?

All help is appreciated. I realize this might sound opinion-based, but I am looking for the smartest way, with the fewest steps and the least manual work, to clean up the mess. I can get Azure AD connected in case that helps.
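Since the plan is to insert the data with PowerShell anyway, here is a minimal sketch of that write-back step. It assumes the form results land in a CSV with columns UserPrincipalName, ManagerUPN and CostCenter, and that extensionAttribute1 is where your AD keeps the cost center; both are hypothetical, so adjust to your schema:

```powershell
# Sketch only: CSV columns and the cost-center attribute are assumptions,
# not a fixed convention - map them to whatever your form and AD actually use.
Import-Module ActiveDirectory

Import-Csv .\cleaned-userdata.csv | ForEach-Object {
    $user    = Get-ADUser -Filter "UserPrincipalName -eq '$($_.UserPrincipalName)'"
    $manager = Get-ADUser -Filter "UserPrincipalName -eq '$($_.ManagerUPN)'"
    if ($user -and $manager) {
        # -Manager takes another AD user object (or DN); -Replace writes
        # arbitrary attributes such as a cost-center extension attribute.
        Set-ADUser $user -Manager $manager -Replace @{extensionAttribute1 = $_.CostCenter}
    }
}
```

Rows where the manager lookup fails are skipped here; collecting those users would also give you the "whoever didn't fill in the boss gets emailed" list.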
RayofCommand's questions
We usually install VMware Tools through SCCM, but now the newer version will not install through SCCM on all machines, so on some I get the following failure.
The error code seems straightforward, and after some research I found out what it means. Quote:
It means that the application is installed successfully; however, the Software Center shows the deployment as failed.
Check the SCCM detection rule => make sure the correct GUID is used in the detection clause
Most people fixed this by correcting incorrectly configured detection rules. But in our case the detection rule is correct: it checks the file and the version, and the application is NOT installed, so it's not a detection problem. Now I wonder what else might be blocking us here. Some servers did get the installation, others didn't. The AppEnforce.log shows that the app was not installed, and enforcement finishes after 8-12 seconds:
<![LOG[++++++ App enforcement completed (11 seconds) for App DT "VMWare_VMWareTools_10.3.5.10430147"
The successful installation took close to one minute on the other servers, so I guess the install gets aborted early or something. Where could I look for more info?

Additional info: I am using SCCM 2012 and my servers run Windows Server 2016 and 2012. I have failures and successes on both versions, so this should not be version-related.

Has anyone had issues like this? Any help is upvoted, thank you.
I am trying to create a server collection that collects all servers with SQL Server installed in a smart way. I created a collection and already added some servers using the criterion properties, as shown in the screenshot. Now I don't want to manually add the 100 SQL versions listed on the right of the screenshot to my criteria, so I thought I would just rework the query statement myself. But I didn't find a way to copy all the SQL versions listed... (Also, I don't have access to the DB behind SCCM.)
And if I use a query with "is like Microsoft SQL Server", I get plenty of Native Clients, which I don't need.
Can someone help out a rookie here?
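One way to avoid enumerating every version is to wildcard the display name and exclude the client tools in the WQL itself. A sketch, trimmed to the Name column (a real collection query selects the usual SMS_R_System fields) and assuming the Add/Remove Programs inventory class is enabled:

```
select SMS_R_System.Name
from SMS_R_System
inner join SMS_G_System_ADD_REMOVE_PROGRAMS
    on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
where SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName like "Microsoft SQL Server%"
  and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%Native Client%"
```

Further `not like` clauses (e.g. for Setup files or Management Studio) may be needed depending on what the wildcard still drags in.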
I am trying to clean up a disk on a specific server. The only thing installed on that server is ownCloud. Since I was not the guy who set up ownCloud, and there is no way to reach the actual administrator, I have to solve the issue myself.

First I thought about cleaning out old backups, but I saw that the backups are taken differentially, so I can't simply delete some older files. The log is already cleared, but that didn't free enough space, so I read the ownCloud manual to check what options I have.
In the end I would love to simply gather the size used per user, but through the ownCloud admin panel this is not possible, and the data structure won't let me gather this info easily either. Is there a nice way of gathering the needed info, so I can pick on some guys to clean up their ownCloud files ;)?

I am running Debian, using a MySQL DB for OC.
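Assuming the stock on-disk layout of data/&lt;user&gt;/files (the default for a plain install), plain du can sum each user's tree. The sketch below builds a throwaway copy of that layout so the command can be tried safely; on the real server you would point DATA_DIR at the actual data directory (often something like /var/www/owncloud/data):

```shell
# Throwaway stand-in for the real ownCloud data directory; the
# data/<user>/files layout is the default one.
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/alice/files" "$DATA_DIR/bob/files"
dd if=/dev/zero of="$DATA_DIR/alice/files/big" bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$DATA_DIR/bob/files/small" bs=1024 count=4 2>/dev/null

# Per-user usage in KiB, biggest consumer first.
du -sk "$DATA_DIR"/*/files | sort -rn
```

Note this counts only what is on disk for each user, not shares or trash-bin retention, but it is usually enough to know whom to pick on.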
I have a server with 2 CPU sockets, and so far I have populated only one of them. I have VMware ESXi 6.0.0 installed and running. I know that the free version supports 2 physical CPUs, so in order to upgrade my physical machine, do I have to buy the exact same CPU again, or can I buy a newer/better/different CPU? I understand that I won't be able to create a VM later that uses parts of both CPUs, but if I create only 2 VMs, each running on one physical CPU, will this be possible?
I am running ESXi 6.0.0 (managed with the vSphere Client), but with all the different documentation and changes I have trouble understanding my limitations. From the official documentation I see that my limit for physical CPUs should be unlimited, but I can only give a VM up to 8 vCPUs.

From other sources I read that I have a limit of 2 physical CPUs in the free version. I see that the memory limitation is gone, which I am happy about.

Is there any document that gives the actual limitations? It seems that VMware is hiding it a bit ;) , at least I couldn't use Google efficiently to gather the correct info here.

I am only interested in the hardware limitations, not the limits for creating failovers etc.
When it comes to hardware, I often read something like "Apple Mac computers, and other lower profile...". To me it sounds like a nicer word for the low-end hardware segment, but I am not sure about that. Google didn't help me answer this question. Is there more to it? I need it to fully understand some articles. The term is used here, for example: HBA H240
Storage controller - plug-in card - low profile
I have an HP ProLiant DL160 Gen9 server with an HP H240 Host Bus Adapter and 6x 1 TB Samsung SSDs configured in RAID 5, directly using the internal storage of the machine. After installing a VM on it using VMware (6.0), I ran a benchmark with the following result:
After some research I came to the following conclusion:
A controller without cache has trouble calculating the RAID 5 parity stripes, and I pay for that in write performance. But 630 MB/s read and 40 MB/s write seem a bit poor. Anyhow, I found others having the same problem.
Since I can't change the controller today, is there a way to test whether the controller is the bottleneck? Or do I really have to try a better one and compare the results? What are my options? I am pretty new to server hardware and installation, since in my previous company this was managed by an outsourced hosting provider.
EDIT UPDATE
Here is the performance with write cache enabled. The read went up even before I made the change; not sure what happened, I just played around in the BIOS settings of the Windows machine. Today I will update the firmware to the latest version; let's see what it gives us.

Here is a screenshot of a benchmark with the new P440 controller with its 4 GB cache activated. (Enabling HP SSD Smart Path didn't bring a performance improvement, btw.) With a cache we get much better results. Of course I tested with files > 4 GB, to make sure I was testing the disks and not the cache.
I am quite new to the topic of routing traffic, since so far it has been handled by our hosting provider.

I would like to set up some virtual machines and configure HTTPS and load balancing on Azure. I am picking virtual machines, not Websites as a service, so in general infrastructure only.
How can I do the endpoint configuration using a VM on Azure? When I read documents like this: http://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-load-balance/ it doesn't help me much :)

Is there any dummy explanation? Can I easily add my own SSL cert to my Azure VM? Or is that only possible if I select a Website instead of a VM?
I see that I can more or less easily create a custom second-level domain, aka contoso.cloudapp.net, on an Azure VM. But I would like to set up a real domain I purchased, such as contoso.com, without the cloudapp.net part.

Is that even possible with an Azure VM? Or do I need to purchase something else?
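For what it's worth, the usual pattern is to keep the cloudapp.net name and point the purchased domain at it in the registrar's DNS. A sketch with hypothetical values:

```
; hypothetical zone entries for contoso.com at the registrar
www.contoso.com.   IN  CNAME  contoso.cloudapp.net.
; the bare domain cannot be a CNAME, so it needs an A record pointing
; at the cloud service's public IP - reserve a static IP first, since
; a non-reserved VIP can change when the VM is deallocated.
contoso.com.       IN  A      <public-VIP-of-the-cloud-service>
```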
We rent our servers from a local hosting partner; they manage the setup and settings, and we just use them. We have admin rights as well, but management is on their side, so if I change important things I let them know beforehand. Recently I noticed that at least some servers have their power options set to the Balanced plan. Since this option is the recommended one in Windows Server 2012, I don't understand how this plan can be the worst one. Since we all want performance over energy savings on a server, I guess, why is that value still recommended?
Also, I don't see exactly what changes when I switch it to High performance. Does anyone have test results from a server that ran under the same circumstances once on Balanced and once on High performance?

For me it's clear that I should set it to High performance, but I would like to understand the details. To my understanding, the only negative effects are the electricity bill and maybe more wear on the hardware... correct?
If I go into the details of the power plan on my local machine, I see options for the CPU under Processor power management; on the server there is only System cooling policy under Processor power management. It seems that the CPU is not throttled in any case?! These settings appear to be the same under all plans.
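To at least compare the plans' settings side by side, powercfg can dump and switch them from an elevated prompt (the scheme aliases below are the built-in ones):

```
rem Show the available power plans and which one is active
powercfg /list
rem Dump every setting of the Balanced plan for comparison
powercfg /query SCHEME_BALANCED
rem Switch to High performance (SCHEME_MIN = minimal power saving)
powercfg /setactive SCHEME_MIN
```

Diffing the /query output of Balanced against High performance shows exactly which processor and disk settings actually differ on that server.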
I use winsat to see my disk read/write speed on Windows 8 and it works great:
winsat disk -drive c
Can this be used on Windows Server 2012?
Right now I use a PowerShell script to see the currently logged-in users, but I can't tell whether a session is idle, active or inactive; I can only see when the session was started. Is there an easy way to see how many users are currently logged in to the server I'm on, and to see their status? It should not be executed remotely, and I would like to avoid third-party tools if possible.
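One built-in command that reports exactly this is quser (query user), run locally on the server:

```
rem Lists each session on the local server, including its STATE
rem (Active/Disc) and IDLE TIME per user; "query session" is similar.
quser
```

Since it ships with Windows, no third-party tool is needed, and its text output can also be parsed from PowerShell if you want to keep your existing script.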
Our production setup is one web server, one SQL server and one application server. On the application server we have services installed whose files are located on the SQL server. We never had problems accessing the files, but since we recently switched to a new VDC we get this error once per day for any of the services:
Windows cannot access the file for one of the following reasons: there is a
problem with the network connection, the disk that the file is stored on,
or the storage drivers installed on this computer; or the disk is missing.
Windows closed the program.
Our hosting partner says it's because the SQL server is sometimes at 100% CPU, and that's why the file can't be accessed by another machine. I personally disagree, but I can't prove the opposite.

The service of course has a recovery rule set, so it restarts after that, but I would still like to avoid the error in the first place. How can I troubleshoot this?
I have multiple databases that are constantly growing, so from time to time I truncate the log and shrink the biggest databases. For the biggest one (>40 GB) it takes quite some time.

So I read about the option to turn auto_shrink on, which periodically shrinks the databases that have free space.

I have never tried that, and I first want to hear some opinions on it. How often does the shrink happen with that option on? Will it eat a lot of memory or CPU?
According to this page it runs in the background. Of course it does, but that doesn't answer the questions that matter: how often does it run, and how much memory does it take? Also, if a 100 GB database has only 1 GB free, please don't shrink it if it takes ages... What criteria does it use? More than 10% free space? Btw, I am not a professional yet, just learning.
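Whatever the internal cadence turns out to be, the flag itself can be inspected and set per database. A T-SQL sketch (MyDb is a placeholder name):

```sql
-- Which databases currently have auto-shrink enabled?
SELECT name, is_auto_shrink_on
FROM sys.databases;

-- Flip it for a single database (ON here only as an example; most
-- advice leans toward leaving it OFF and shrinking deliberately).
ALTER DATABASE MyDb SET AUTO_SHRINK ON;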
I am pretty new to managing servers, and right now I am having trouble finding the on-disk location of our database. I can see the DB in phpMyAdmin; is there a way to find its real location using phpMyAdmin?

I'm running Ubuntu 12.04 Server.
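You can ask MySQL itself from phpMyAdmin's SQL tab; the server reports the directory it stores its databases in:

```sql
-- Shows the server's data directory (each database is a subdirectory).
SHOW VARIABLES WHERE Variable_name = 'datadir';
```

On Ubuntu this is typically /var/lib/mysql/, but the query gives the authoritative answer for your install.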
Since this morning I have had trouble accessing my Confluence. It's on an Ubuntu 12.04 server.
The error itself is this:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /xxx/xx/xxxxx.
Reason: Error reading from remote server
I use Apache as a reverse proxy.
My apache2 error log shows this:
(70007)The timeout specified has expired: proxy: error reading status line from remote server
My Apache config is this:
<VirtualHost *:80>
    ServerName www.confluence.xxxxx.xxx
    ServerAlias confluence.xxxxx.xxx

    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    ErrorLog /var/log/apache2/error.log

    ProxyPass / http://xxx.xxx.xxx.xx:xxxx/ Keepalive=On
    ProxyPassReverse / http://xxx.xxx.xxx.xx:xxxx/

    <Location />
        Order allow,deny
        Allow from all
    </Location>
</VirtualHost>
I saw some people fix this by adding the Keepalive=On, but for me it didn't help. I restarted apache2, of course... no success.

Any ideas? Let me know if you need more information; I can give you whatever you need.
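Since the logged error (70007) is a proxy read timeout, one hedged tweak is to raise how long Apache waits for the Confluence backend before giving up; 300 seconds below is an arbitrary example value, and the backend address is left as in the config above:

```apache
# Wait up to 5 minutes for the backend before returning Proxy Error.
# The timeout= key on ProxyPass overrides the global ProxyTimeout
# for this one mapping.
ProxyTimeout 300
ProxyPass / http://xxx.xxx.xxx.xx:xxxx/ Keepalive=On timeout=300
```

This only masks a slow backend rather than fixing it, but it helps distinguish "Confluence is slow" from "Confluence is hung".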
EDIT :
I should add that the official Confluence site is offline or having trouble right now, and was when the problem started. Can that be related? I mean, it is somehow connected, since you install updates and add-ons through the interface.
EDIT 2 :
One of our users says that it crashed after he imported an XML document using the menu.
Scenario:

10 servers in the same datacenter, and I connect remotely, of course, using my German keyboard. My profile settings on the server side were always German, and I never had problems with it. Somehow the profiles on the servers now always switch back to US, and I have to change my keyboard settings after every login.

Does anyone know a possible reason for that? No one else is using my account. The servers run Windows Server 2008 / 2012.
Is it possible to upgrade PowerShell from 2.0 to 3.0 on Windows Server 2008 without a reboot?
I used rsync with the -av parameters to migrate a website from one server to another. There was no error displayed, but the folder sizes are not the same and the website does not work correctly, so something is missing. Is there a parameter that really copies everything? I have sudo accounts on both sides, ofc.
I check the folder size with du -s dir; the sizes are the following:

old server: 2554620
new server: 2547676

(That's in 1 KiB blocks; du reports kilobyte blocks by default, not bytes.) How do I manage to get an exact copy?
wc output (lines / words / bytes):
old server : 2663 3105 175534
new server : 2665 3107 175594
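To pin down *what* differs rather than just how much, a file-inventory diff helps. The sketch below uses throwaway directories in place of the two web roots (hypothetical stand-ins); in reality you would run the find on each server and compare the two lists:

```shell
# Throwaway stand-ins for the old and new web roots.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo a > "$SRC/kept.html"; echo b > "$SRC/only-on-old.html"
echo a > "$DST/kept.html"

# Inventory each side (on the real servers: run the find on each box).
OLD_LIST=$(mktemp); NEW_LIST=$(mktemp)
(cd "$SRC" && find . -type f | sort) > "$OLD_LIST"
(cd "$DST" && find . -type f | sort) > "$NEW_LIST"

# Files present on the old server but missing on the new one.
comm -23 "$OLD_LIST" "$NEW_LIST"
```

On the real migration, re-running rsync with -c (compare by checksum instead of size and mtime) and --delete (remove files on the target that no longer exist on the source) is the usual way to force the trees into sync. Also note that du counts directory blocks, so even a byte-identical tree can show slightly different du totals across filesystems; the wc byte counts are the more telling difference here.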