Through the management interface (served on port 5480 by default), you can enable SSH access to a vCenter Server Appliance. How can this be done programmatically using PowerCLI?
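One approach that might work is the appliance (CIS) API surface that PowerCLI exposes. The sketch below assumes the com.vmware.appliance.access.ssh service and its get/set methods are available on the appliance; the server name and credentials are placeholders:

# Minimal sketch, assuming PowerCLI with the CIS cmdlets is installed and that the
# appliance exposes the com.vmware.appliance.access.ssh service (an assumption).
Connect-CisServer -Server 'vcsa.example.com' -User 'administrator@vsphere.local' -Password 'P@ssw0rd'

# Retrieve the SSH access service of the appliance management API
$ssh = Get-CisService -Name 'com.vmware.appliance.access.ssh'

$ssh.get()        # current state: $true if SSH is enabled
$ssh.set($true)   # enable SSH access

Disconnect-CisServer -Confirm:$false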
In a development environment, I want to modify the 'password last set' date of my AD accounts so that they won't begin to expire during the development phase, but only once the environment becomes a production environment.
How can I change that date?
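For illustration, what I would expect to need is something along the lines of the pwdLastSet reset trick (setting the attribute to 0 and then to -1 stamps the current time rather than an arbitrary date). A minimal sketch, assuming the ActiveDirectory RSAT module and a hypothetical account name:

# Minimal sketch, assuming the ActiveDirectory module (RSAT) is available.
# pwdLastSet cannot be set to an arbitrary date; setting it to 0 and then to -1
# stamps the current date/time as "password last set".
Import-Module ActiveDirectory

$account = 'devuser01'   # hypothetical sAMAccountName

Set-ADUser -Identity $account -Replace @{pwdLastSet = 0}    # expire the password
Set-ADUser -Identity $account -Replace @{pwdLastSet = -1}   # reset "password last set" to now

Get-ADUser -Identity $account -Properties PasswordLastSet | Select-Object Name, PasswordLastSet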
On the DC of a single-AD forest, I am logged in as the default domain administrator Administrator (in this case also the enterprise administrator). In an elevated PowerShell, I try to get the Kerberos encryption types with the following command (as documented here):
ksetup /getenctypeattr my.example.com
But I get an error message instead:
Query of attributes on MY.EXAMPLE.COM failed with 0xc0000034
Failed /GetEncTypeAttr : 0xc0000034
Most probably as a consequence, I also get an error when trying to set the encryption types, as described in this question, which unfortunately does not have a serious answer at the moment.
This happens on Windows Server 2016 as well as on Windows Server 2019, both set up mostly with default settings. How can a simple get fail? The error code does not seem to be documented. Does anyone know how to troubleshoot or solve this problem?
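One thing that might help with troubleshooting is to look at the underlying msDS-SupportedEncryptionTypes attribute directly in the directory instead of going through ksetup. A hedged sketch of what I mean, assuming the ActiveDirectory module; it is an assumption on my part that ksetup maps to these objects:

# Hedged troubleshooting sketch, assuming the ActiveDirectory module is available.
Import-Module ActiveDirectory

$domainDN = (Get-ADDomain).DistinguishedName

# Check the attribute on the krbtgt account
Get-ADUser -Identity krbtgt -Properties 'msDS-SupportedEncryptionTypes' |
    Select-Object Name, 'msDS-SupportedEncryptionTypes'

# Check any trust objects (trustedDomain) under CN=System
Get-ADObject -SearchBase "CN=System,$domainDN" -LDAPFilter '(objectClass=trustedDomain)' -Properties 'msDS-SupportedEncryptionTypes' |
    Select-Object Name, 'msDS-SupportedEncryptionTypes'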
In the GUI (the Active Directory Domains and Trusts MMC snap-in, domain.msc), you can set the "The other domain supports Kerberos AES Encryption" setting for a trust relationship.
I am looking for a way to configure this setting programmatically. I have already reviewed the Install-ADDSDomain PowerShell cmdlet and the netdom TRUST tool, but neither seems to include an option for the Kerberos AES encryption setting.
Can someone tell me how I can set this setting programmatically?
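For illustration, the two candidate approaches I am considering are ksetup /setenctypeattr against the trusted domain, or writing msDS-SupportedEncryptionTypes on the trustedDomain object (TDO) directly. The domain name is a placeholder and I have not confirmed that either path toggles exactly the same flag as the GUI checkbox:

# Hedged sketch; the domain name is hypothetical and the mapping to the GUI
# checkbox is an assumption, not verified.

# Option 1: ksetup
ksetup /setenctypeattr other.example.com AES128-CTS-HMAC-SHA1-96 AES256-CTS-HMAC-SHA1-96

# Option 2: write the attribute on the trust object (TDO) directly
Import-Module ActiveDirectory
$domainDN = (Get-ADDomain).DistinguishedName
$tdo = Get-ADObject -SearchBase "CN=System,$domainDN" -LDAPFilter '(&(objectClass=trustedDomain)(name=other.example.com))'
# 0x18 = AES128 (0x08) + AES256 (0x10)
Set-ADObject -Identity $tdo -Replace @{'msDS-SupportedEncryptionTypes' = 0x18}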
On RHEL 8, are there ready-made functions, methods, processes or tools to implement administrator/operator and auditor roles in the following way:
- An administrator/operator should be able to do almost everything except modifying/deleting logs
- An auditor should be able to read everything, and to delete logs
In my research, I did not find any hints or best practices for this concept, but I imagine that this might be a common requirement for systems that must comply with ISO 27001. So I am wondering whether there are already maintainable solutions to implement such roles on RHEL, whether it can be accomplished at all, or whether this is (currently) just not feasible on RHEL.
I have a simple MS AD DS multi-domain forest with a parent domain and one sub-domain. I successfully joined a RHEL 8 server to the sub-domain by following this official documentation. All OSs have been set up using defaults as much as possible. I can successfully SSH into the RHEL server with an AD account of the sub-domain. But when I try to use an account of the parent domain, the login fails. As soon as I submit the username of the parent domain, journalctl reports the following error:
sssd_be[...]: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (KDC has no support for encryption type)
I checked the DCs of each domain and can confirm that all DCs support the same three default encryption types (which are stored in the msDS-SupportedEncryptionTypes attribute of each DC computer account):
- RC4_HMAC_MD5
- AES128_CTS_HMAC_SHA1_96
- AES256_CTS_HMAC_SHA1_96
I also confirmed that RHEL 8 offers suitable encryption types (/etc/crypto-policies/back-ends/krb5.config):
[libdefaults]
permitted_enctypes = aes256-cts-hmac-sha1-96 aes256-cts-hmac-sha384-192 camellia256-cts-cmac aes128-cts-hmac-sha1-96 aes128-cts-hmac-sha256-128 camellia128-cts-cmac
So, there should be two matches: aes128-cts-hmac-sha1-96 and aes256-cts-hmac-sha1-96. As I already stated, it works fine for the sub-domain. So why is there no suitable encryption type for the parent domain?
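For reference, a hedged sketch of how the attribute can be queried on the DCs of both domains from PowerShell (the domain names are placeholders):

# Hedged sketch, assuming the ActiveDirectory module; the domain names are
# placeholders for the parent domain and the sub-domain.
Import-Module ActiveDirectory

foreach ($domain in 'parent.example.com', 'sub.parent.example.com') {
    # primaryGroupID 516 = Domain Controllers
    Get-ADComputer -Server $domain -LDAPFilter '(primaryGroupID=516)' -Properties 'msDS-SupportedEncryptionTypes' |
        Select-Object DNSHostName, 'msDS-SupportedEncryptionTypes'
}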
According to Microsoft, Windows Server 2019 still does not support Windows Search on volumes with Data Deduplication enabled (source):
Windows Search doesn't support Data Deduplication. Data Deduplication uses reparse points, which Windows Search can't index, so Windows Search skips all deduplicated files, excluding them from the index. As a result, search results might be incomplete for deduplicated volumes. Vote for this item for Windows Server vNext on the Windows Server Storage UserVoice.
This has been a problem/challenge for a long time now (example).
I maintain a Windows Server 2019 file server that stores its data on a Data Deduplication-enabled ReFS volume, and I am also facing the problem of providing a working search functionality.
Before implementing a solution with a 3rd-party search engine, I'd like to know whether any workarounds are already available to make Windows Search work on Data Deduplication-enabled volumes using on-board tools.
So, if someone is aware of a valid workaround, I'd appreciate any information on a way to implement this without 3rd-party software.
From the HPE ProLiant System Utilities (BIOS), I booted into the HPE Smart Storage Administrator (SSA) to migrate a logical drive with RAID6 (and existing data) to RAID5. After starting the migration task, which would need several hours to complete, the only thing I could do was click the X in the top right corner. After that, I was stuck on a screen saying:
After completing the configuration - reboot the system.
What does that mean? Can I reboot the server through iLO (either Reset or Cold boot) or do I have to wait until the migration completes to finally boot into my OS?
I am using System Center 2019 DPM as a backup server running on its own hardware. Besides other backups, it backs up files from a file server cluster.
The file servers use a deduplicated volume to store the files. The backup job backs up a folder on that volume, not the entire drive. It stores the backup data on local MBS (HDDs). During scheduled synchronization, sometimes (not always), the following error occurs:
The DPM service was unable to communicate with the protection agent on <THE_BACKUP_SERVER_ITSELF>. (ID 65 Details: An existing connection was forcibly closed by the remote host (0x80072746))
The error resolves itself with the next scheduled synchronization on the next day, or with another synchronization initiated by the job itself.
How can I prevent this error from happening at all? It is strange that the error occurs while the backup server is performing a backup of another machine, yet it has problems communicating with the protection agent that is running on the backup server itself.
Similar problem, but on DPM 2012 (without a solution): https://social.technet.microsoft.com/Forums/windows/en-US/37c97307-db55-4fca-84e8-115de70ba93e/reccuring-dpm-erorr-0x80072746
What I tried
- I enabled throttling for both file servers, but that did not help.
- I set the following registry values on the DPM server and both file servers and restarted each DPMRA service (according to this and this):
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent" -Name "ConnectionNoActivityTimeout" -PropertyType DWord -Value 7200
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Agent" -Name "ConnectionNoActivityTimeoutForNonCCJobs" -PropertyType DWord -Value 7200
Restart-Service DPMRA
I have a virtual file server cluster of 2 VMs, where each VM is running with Windows Server 2019 (8 GB RAM, 8 vCPUs). The file servers store their data on a VHD set, which is formatted with NTFS and has deduplication enabled. There is about 14 TB of data, which only consumes around 6 TB due to deduplication.
I have a physical server (64 GB RAM, 2 CPUs à 8 cores with HT, so 32 logical cores in total) with Windows Server 2019 and Microsoft SC DPM 2019 installed. It uses local HDDs (RAID6) to store the backup data on Modern Backup Storage (MBS), formatted with ReFS. This server has a redundant 10 GbE link to the file servers in the same subnet. Protection groups that back up entire VMs or SQL databases perform well. But the protection group that backs up the data of the file server cluster has really bad performance.
A full synchronization of the 14 (or 6) TB takes around 70 hours! That's incredibly slow. When it starts, it achieves about 2-4 MBit/s throughput on the 10 GbE link (which reaches its full potential when I copy a large file manually). After a long phase of slowness (~1 day), it "speeds up" to 200-400 MBit/s, which is still quite slow.
CPUs and RAM do not seem to be a bottleneck on any server.
I found similar problems here and elsewhere:
- DPM for File Server - awfully slow?
- MS DPM - Slow performance of consistency check
- Slow Backup Speed with DPM after few minutes backup is started
- DPM 2016 MBS Performance downward spiral
- How can we improve SCDPM?
- DPM 2016 File Server Backup SLOW
But none of them has a working solution. How can I speed up the backup of this data?
I'm using several Windows Server 2019 clusters (e.g. Hyper-V, File Server). On all machines that have clustered roles, I get the following errors (with different hard disk numbers):
Log Name: System
Source: Disk
Event ID: 11
Level: Error
Message: The driver detected a controller error on \Device\Harddisk1\DR1.
From my observations, I can conclude that the error is always thrown for hard disks that are currently offline on one cluster member because they are online on another cluster member. So it happens on disks that are used by cluster roles for data and as the disk witness in the quorum.
I'm not sure whether this is just OK in this case and I can ignore those errors, or whether there is some misconfiguration and something has to be fixed.
Can someone confirm that this is normal behaviour, or that something might be broken?
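For reference, a hedged sketch of how the disk numbers from the event log can be correlated with the cluster-owned disks on each node (purely illustrative):

# Hedged sketch: list local disk numbers with their status and whether they are
# clustered, to correlate with the \Device\HarddiskN numbers from the event log.
Get-Disk | Sort-Object Number |
    Select-Object Number, FriendlyName, OperationalStatus, IsClustered

# Cluster disk resources and their current owner node
Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk' |
    Select-Object Name, State, OwnerNode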
In my infrastructure, I have two servers with Windows Server 2019 and Hyper-V installed. A SAN is directly connected to both servers via FC. The SAN provides three volumes to both servers: a volume for the quorum, a volume for VMs and a volume for data.
I plan to deploy a file service that is as highly available as my given infrastructure allows. Therefore, as I have two nodes, I want to deploy two virtual file servers. This way, I can tolerate the failure of one whole server (host) or the failure of one virtual file server. With just one virtual file server (with HA enabled), I would only tolerate the failure of one host, but not a failure of the VM itself.
I plan to use the data volume of my SAN to deploy a shared virtual hard disk that both virtual file servers will use to provide the file shares.
Furthermore, I don't want users to have to care which file server they use to access their files. \\FileSrv1\Data\README.md should be the same as \\FileSrv2\Data\README.md, but users should be able to access it as \\FS\Data\README.md. As far as I know, this is a typical use case for DFS. But I don't want two file servers that replicate their data, as I have shared storage.
So my question is: can I use both, shared storage for the virtual file servers AND DFS to abstract the file access, in my scenario?
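What I have in mind would be something like a domain-based DFS Namespace with both file servers as folder targets and no DFS Replication. A hedged sketch, assuming the DFSN module, with hypothetical namespace and share names:

# Hedged sketch, assuming the DFSN module (DFS Namespaces role/RSAT) is installed.
# Names and paths are hypothetical; no DFS Replication is configured.
New-DfsnRoot -Path '\\example.local\FS' -TargetPath '\\FileSrv1\FS' -Type DomainV2
New-DfsnRootTarget -Path '\\example.local\FS' -TargetPath '\\FileSrv2\FS'

# One folder with both file servers as targets for the same underlying share
New-DfsnFolder -Path '\\example.local\FS\Data' -TargetPath '\\FileSrv1\Data'
New-DfsnFolderTarget -Path '\\example.local\FS\Data' -TargetPath '\\FileSrv2\Data'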
Unfortunately, the Active Directory Domains and Trusts MMC snap-in (domain.msc) lets you create an outgoing trust to a Domain Controller (in other words: specifying the name of a Domain Controller as the name of the domain to trust). Even more unfortunately, you are not able to revert this change via the GUI. If you try to remove this trust, you will get a warning pop-up with the following message:
An internal error occurred.
Trying to delete it via netdom:
netdom trust my.domain.local /domain:dc1 /oneside:trusting /remove /force
also fails with the same message:
An internal error occurred.
So how can I delete such a trust object?
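For illustration, one option I am considering is removing the leftover trustedDomain object directly from CN=System with the ActiveDirectory module. This is only a hedged sketch ('dc1' being the bogus trust name from above), and deleting directory objects this way is obviously at your own risk:

# Hedged sketch, assuming the ActiveDirectory module; 'dc1' is the bogus trust
# name from the question. Deleting objects from CN=System is done at your own risk.
Import-Module ActiveDirectory

$domainDN = (Get-ADDomain).DistinguishedName

# Find the leftover trustedDomain object for the bogus trust
$tdo = Get-ADObject -SearchBase "CN=System,$domainDN" -LDAPFilter '(&(objectClass=trustedDomain)(name=dc1))'

$tdo | Remove-ADObject -Confirm:$false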