Is there a way to install Windows Updates on a Hyper-V host, e.g. Windows Server 2022 Datacenter, without downtime, other than using live migration? Any suggestions here would be greatly appreciated.
We have an Azure SQL elastic pool with a data size max GB setting. We would like to increase this setting to a larger value.
The question is: can you increase the size of the elastic pool max size with no downtime? Said differently, does changing this setting cause downtime on the elastic pool?
Note that we have Zone Redundancy turned on.
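For reference, this is roughly how we intend to apply the change (a sketch using the Az.Sql PowerShell module; the resource group, server, and pool names are placeholders, and the value is in MB):
# Increase the elastic pool's max data size (value is in MB; names are placeholders)
Set-AzSqlElasticPool -ResourceGroupName "my-rg" -ServerName "my-sqlserver" -ElasticPoolName "my-pool" -StorageMB 1048576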
Could not find clear documentation on this. Thanks in advance for any assistance.
I am using Hyper-V in Windows Server 2022. I installed Ubuntu Server 20.04 from ISO in Hyper-V. Now, all I want to do is use the "Clipboard > Type Clipboard Text" command in Hyper-V's Virtual Machine Connection window. Sadly, it does not work. This has actually been bothering me for years, and I have had to manually type long strings into Linux VMs as a result. Why Microsoft does not bother to make this "just work" out of the box for Linux like it does for Windows, I will never know.
It was also my understanding that this was supposed to just work in Ubuntu, but it doesn't. I have Enhanced Session Mode enabled in Hyper-V.
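For reference, the only host-side setting I have found to experiment with so far is the enhanced session transport type (a sketch; the VM name is a placeholder, and I am assuming the Ubuntu guest would also need xrdp/hv_sock support configured for enhanced sessions to actually work):
# On the Hyper-V host: switch the VM's enhanced session transport to Hyper-V sockets
Set-VM -VMName "UbuntuServer2004" -EnhancedSessionTransportType HvSocket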
Does anyone know how to get this simple thing to work? Any help would be greatly appreciated.
I am using this command in Windows Server 2022, latest updates:
Disable-TlsCipherSuite -Name "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"
It completes without error. I then tried restarting IIS (and also the server).
But this cipher suite still shows up in SSL Labs. Is this suite part of the suite named "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"? Is that why it can't be turned off?
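For what it's worth, this is how I am checking the local state after running the command (my understanding is that Get-TlsCipherSuite reflects what the server itself has enabled, independent of what SSL Labs reports):
# Should return nothing if the CBC suite is actually disabled on this server
Get-TlsCipherSuite -Name "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"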
Any guidance would be greatly appreciated.
Suppose I have an Azure SQL Database Elastic Pool and I am accessing it from an Azure Web App via Firewall rules and everything is working fine.
Now suppose I want to add a new Private Endpoint to the Azure SQL Database Elastic Pool. This would NOT block access via the existing Firewall rules / access outside of the newly created Private Endpoint - is that right?
In other words, adding a Private Endpoint is not saying that access should now be Exclusive through the Private Endpoint but that the Private Endpoint is accessible and existing access channels are also still accessible - is that right?
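To make the question concrete: my assumption is that public access is governed by the server-level public network access setting rather than by the existence of a private endpoint, and that is the setting I would check (a sketch; the resource group and server names are placeholders, and I am assuming the Az.Sql server object exposes this property):
# Check whether public (firewall-rule based) access is still enabled on the logical server
(Get-AzSqlServer -ResourceGroupName "my-rg" -ServerName "my-sqlserver").PublicNetworkAccess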
I have an NGINX server being used as a TCP load balancer. It defaults to round-robin load balancing, so my expectation is that a given client IP will get a different backend upstream server for each request. Instead, each client gets the same upstream server every time, and each distinct client IP is mapped to a distinct upstream server. This is bad because my clients generate a lot of traffic, and it causes hotspots since any given client can only utilize one upstream server. It seems to slowly rotate a given client IP across the upstream servers; again, I want it to assign an upstream randomly for each request.
How can I make NGINX randomly assign the upstream server for every request? I tried the random keyword and it had no effect. Any help would be greatly appreciated.
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

stream {
    upstream api_backend_http {
        server node1.mydomain.com:80;
        server node2.mydomain.com:80;
        server node6.mydomain.com:80;
        server node14.mydomain.com:80;
        server node18.mydomain.com:80;
        server node19.mydomain.com:80;
        server node21.mydomain.com:80;
        server node22.mydomain.com:80;
        server node24.mydomain.com:80;
    }

    upstream api_backend_https {
        server node1.mydomain.com:443;
        server node2.mydomain.com:443;
        server node6.mydomain.com:443;
        server node14.mydomain.com:443;
        server node18.mydomain.com:443;
        server node19.mydomain.com:443;
        server node21.mydomain.com:443;
        server node22.mydomain.com:443;
        server node24.mydomain.com:443;
    }

    server {
        listen 80;
        proxy_pass api_backend_http;
        proxy_buffer_size 16k;
        proxy_connect_timeout 1s;
    }

    server {
        listen 443;
        proxy_pass api_backend_https;
        proxy_buffer_size 16k;
        proxy_connect_timeout 1s;
    }
}
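In case it helps, this is where I put the random directive when I tried it (a sketch; as I understand it, the directive requires nginx 1.15.1 or later and goes inside the upstream block, and the stream module picks an upstream per TCP connection, not per HTTP request):
    upstream api_backend_http {
        random;
        server node1.mydomain.com:80;
        server node2.mydomain.com:80;
        # ... remaining nodes as above ...
    }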
Our goal is to have a health check continuously evaluate the health of an endpoint. When it becomes unhealthy, we want DNS to fail over to a different IP address. We have set this up, but we have now realized that it doesn't actually work (i.e. when the health check goes red, no failover happens). Here is our current configuration:
A Record
- Record name: www.mydomain.com
- Record type: A
- TTL: 30 seconds
- Routing policy: Failover
- Failover record type: Primary
- Healthcheck: www
- Record ID: www-1
- Value:
A Record
- Record name: www.mydomain.com
- Record type: A
- TTL: 30 seconds
- Routing policy: Failover
- Failover record type: Secondary
- Healthcheck: www
- Record ID: www-1
- Value:
In addition, we have a health check.
OK - so we recently had an issue where the health check turned red. We got notified via SNS as expected. However, an nslookup of www.mydomain.com was still returning the value for the primary. We fixed the issue in under 5 minutes.
Given the TTL and so on configured above, shouldn't we have seen the NSLookup update to show the Secondary? Is it possible it would take longer to failover? If so, why?
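To rule out caching in our own resolvers next time, we plan to query one of the zone's authoritative Route 53 name servers directly (the name server below is a placeholder for one of the four assigned to the hosted zone):
nslookup www.mydomain.com ns-123.awsdns-45.com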
Is there an error of some kind in the configuration above? If so any guidance would be greatly appreciated.
Here I have a 2-node SQL Server 2016 AlwaysOn Availability Group cluster, with 1 primary and 1 secondary.
The question is - what is the optimal way to install Microsoft Updates on the servers in the cluster? I have struggled to find good, clear recommendations on this.
Here is my current thinking:
- Install updates using Microsoft Update on the secondary
- Restart the secondary to finish the updates
- Perform a manual failover from the Primary to the Secondary
- Install updates using Microsoft Update on the new-secondary (former primary)
- Restart the new-secondary (former primary)
- Perform a manual failover back from the new primary to the new secondary, making the original primary the primary again
My understanding is that this will:
- Cause absolutely no application downtime
- Cause no syncing errors
- Cause no data corruption
- The cluster will not generate errors when half the nodes are updated and the other half are not
Is this correct? Is there a better way to do this?
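For reference, this is how I plan to perform the manual failover steps (a sketch in T-SQL, run while connected to the synchronized secondary replica that should become primary; the availability group name is a placeholder):
-- Run on the secondary replica that is about to become the primary
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;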
Thanks in advance - any help is greatly appreciated.
Suppose I have a virtual machine provisioned in Azure running the standard Windows Server 2016 Azure image.
Without installing anything onto the VM, how can I retrieve the details of the current VM using PowerShell from inside the VM?
Some properties I would like to retrieve are:
- VM name
- VM IP address
- Subscription ID
- VM location
I believe this is possible because the Windows Desktop image in Azure automatically shows most of the above information. So how can I get this information programmatically through PowerShell?
(As usual, the Microsoft documentation lives up to its reputation of being of no value.)
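For context, this is the kind of thing I was hoping exists - my understanding is that the Azure Instance Metadata Service is reachable from inside the VM without installing anything (a sketch; the api-version may need adjusting):
# Query the Azure Instance Metadata Service from inside the VM (no modules required)
$metadata = Invoke-RestMethod -Headers @{ Metadata = "true" } -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
$metadata.compute.name                                              # VM name
$metadata.compute.location                                          # VM location
$metadata.compute.subscriptionId                                    # subscription ID
$metadata.network.interface[0].ipv4.ipAddress[0].privateIpAddress   # VM IP address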
Any help would be greatly appreciated.
I have an installation PowerShell script that installs Docker and other components onto Windows Server 2016. I have learned (the hard way) that if the latest Windows Updates are not installed, Docker will sometimes get into a very weird state and not function as expected.
Therefore, I would like to programmatically check in PowerShell if all available/latest Windows Updates are installed on the server and then show a warning to the user if there are available updates that are not installed.
So the question is, how to programmatically check if all available/latest Windows Updates are installed?
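To illustrate the kind of check I mean, here is a sketch using the Windows Update Agent COM API, which I assume is available out of the box on Server 2016:
# Search for applicable updates that are not yet installed
$session  = New-Object -ComObject Microsoft.Update.Session
$searcher = $session.CreateUpdateSearcher()
$result   = $searcher.Search("IsInstalled=0 and IsHidden=0")
if ($result.Updates.Count -gt 0) {
    Write-Warning "$($result.Updates.Count) Windows update(s) are available but not installed."
}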
Any help would be greatly appreciated.
Currently there is some weird traffic on an HTTP server from lots of different IPs. I tried checking against known Tor exit nodes, but there were no matches.
They tend to be from countries in South America and Africa. However, none of the IPs are the same, so I'm not sure how the attacker is able to use so many different IPs, each one only once.
Does anyone know how an attacker might be able to get "single use IPs"? Perhaps they are from some sort of rented botnet? If so, is there an easy way I can check these IPs against a list of known threat IPs?
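To clarify what I mean by an "easy way": something along these lines is what I had in mind, comparing the observed source IPs against a downloaded blocklist/threat feed (a sketch; the file paths are placeholders, and which feed to trust is part of the question):
# Flag any observed client IPs that appear in a downloaded blocklist file
$blocklist = Get-Content "C:\lists\known-bad-ips.txt"
Get-Content "C:\logs\client-ips.txt" | Where-Object { $blocklist -contains $_ }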
Any help would be greatly appreciated.
In this scenario, we have users who use a primary user mailbox as well as a shared mailbox. They wish to have default signatures for each mailbox. However, when going into Outlook Options > Mail > Signatures, only the user mailbox appears as selectable for configuring a signature. The shared mailbox does not have a way to select a default signature.
Does anyone know how to configure a default signature for a Shared Mailbox?
Any help would be greatly appreciated.
We are trying to log in to AWS Elastic Container Registry (ECR) to pull Docker images from our private registry. The login is failing with this error:
Error response from Daemon: Get https://... EOF
We believe that the AWS CLI is making a call to some AWS service that is being blocked by our security software, but we cannot find documentation on what endpoints and ports we need to whitelist besides the docker registry URL itself.
Does anyone know what AWS endpoints need to be whitelisted in a secure environment for AWS ECR to authenticate correctly?
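For context, this is the login flow that fails (a sketch; the account ID and region are placeholders, and older AWS CLI versions use aws ecr get-login instead of get-login-password):
# Fetch an auth token from the ECR API, then log the Docker daemon in to the registry endpoint
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com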
Any help here is greatly appreciated
When I change haproxy.cfg, currently I am "applying" those changes by running this command on CentOS:
systemctl restart haproxy
Is there a better way to do this on CentOS that involves no downtime, or at least minimal downtime and no connection resets?
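For what it's worth, this is the alternative I am considering; my understanding is that a reload spawns new worker processes and lets the old ones finish their connections, unlike a full restart (a sketch):
# Validate the new config first, then reload without dropping established connections
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy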
Let's say I have a running Docker container X that is based on image foo.
If I pull a new version of foo, then stop and delete X, and then docker run foo
- will it start the new version of the image?
Said differently - do I need to stop X before I can pull the new version of foo and then start it?
Context: Docker-EE on Windows Server 2016
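To make the sequence concrete, this is what I am planning to run (a sketch, with foo and X as above):
docker pull foo          # fetch the new version of the image (the running container is unaffected)
docker stop X            # stop the old container
docker rm X              # remove the old container
docker run --name X foo  # the new container is created from the newly pulled image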
Background:
- Am using Windows Server 2016
- Have installed Containers
- Have installed Docker (not Docker Desktop / Docker for Windows): https://docs.docker.com/install/windows/docker-ee/
- Have multiple containers running on the same server
- I need container A to be able to connect to container B (on the same server)
What works:
- Hard-coding the public IP address: container A can call container B over HTTP
What doesn't work:
- Container A using localhost to call container B over HTTP
- Have also tried host.docker.internal, which absolutely does not work
What is the best way to do this? I cannot hard-code the IP address because the IP address of this server changes on restart, which is why I would like to use something close to localhost.
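In case it helps frame an answer, this is the kind of approach I was hoping exists; my understanding (unconfirmed on Server 2016) is that containers attached to the same user-defined NAT network can resolve each other by container name, so nothing needs to be hard-coded (the network, container, and image names below are placeholders):
docker network create -d nat appnet
docker run -d --network appnet --name containerb myorg/service-b
docker run -d --network appnet --name containera myorg/service-a
# inside container A, call container B by name instead of localhost or a public IP, e.g. http://containerb:80/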
Any help with this is greatly appreciated.
So I found this announcement from Amazon that they have Windows Server 2016 builds 1709 and 1803:
Great - but when I search AWS Marketplace for "Windows Server 1709" or anything "Windows Server" related, the 1709 / 1803 releases are NOT there anywhere. Note that 1709 is a major update to Windows Server 2016 and is not the same thing as Windows Server 2016.
Does anyone know where to find this AMI? I really need 1709 or later due to container compatibility issues.
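For reference, this is how I have been trying to locate the image outside the Marketplace search (a sketch; the name filter is my guess at Amazon's AMI naming pattern):
aws ec2 describe-images --owners amazon --filters "Name=name,Values=Windows_Server-1709*" --query "Images[].{Name:Name,Id:ImageId}" --output table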
Thanks in advance!
I have this setup:
- Azure VM (B-series)
- Static public IP
- SQL Server running outside of Azure, with IP whitelisting on the firewall set to the public static IP of the Azure VM
- Two ASP.NET apps that connect to the SQL Server on the same IP but with different usernames/passwords
Expected:
- Both apps can connect to the SQL Server and return data
Actual:
- Only one of them can connect to the SQL Server and return data
- Apps running elsewhere with static whitelisted IPs can connect just fine
This is really weird! Does Azure sometimes use some other outbound IP address?
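To narrow this down, I am planning to check which public IP each app actually egresses from, by calling a "what is my IP" service from each app's context (a sketch; ipify is just one such service):
# Returns the public IP address this machine egresses from
Invoke-RestMethod -Uri "https://api.ipify.org"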
I need to run this Docker command in Kubernetes:
docker run -p 8080:8080 sagemath/sagemath sage -notebook
I can map everything across except "-notebook" - does anyone know how to do that?
Here is what I have so far, and of course it doesn't work since "-notebook" is not translated over to kubectl correctly:
kubectl run --image=sagemath/sagemath sage --port=8080 --type=LoadBalancer -notebook
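For context, this is the shape of the command I have been experimenting with instead; my understanding is that everything after -- is passed to the container as its arguments (mirroring the trailing arguments of docker run), and that the LoadBalancer service is created separately with kubectl expose:
kubectl run sage --image=sagemath/sagemath --port=8080 -- sage -notebook
kubectl expose pod sage --port=8080 --target-port=8080 --type=LoadBalancer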
I resized an Azure VM and now the MySQL DB running in Windows inside the VM is dead. I am seeing the below fatal error on startup. I tried to run mysql_upgrade as it suggests but that just errors out saying it cannot connect. Does anyone have any ideas on how to fix this?
2016-07-12T09:59:48.426367Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
2016-07-12T09:59:48.426367Z 0 [Note] IPv6 is available.
2016-07-12T09:59:48.426367Z 0 [Note] - '::' resolves to '::';
2016-07-12T09:59:48.426367Z 0 [Note] Server socket created on IP: '::'.
2016-07-12T09:59:48.426367Z 0 [Note] Shared memory setting up listener
2016-07-12T09:59:48.438315Z 0 [Note] InnoDB: Loading buffer pool(s) from C:\ProgramData\MySQL\MySQL Server 5.7\Data\ib_buffer_pool
2016-07-12T09:59:48.455584Z 0 [Note] InnoDB: Buffer pool(s) load completed at 160712 9:59:48
2016-07-12T09:59:48.455584Z 0 [ERROR] Fatal error: mysql.user table is damaged. Please run mysql_upgrade.
2016-07-12T09:59:48.455584Z 0 [ERROR] Aborting
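For completeness, the direction I was going to try next is to start mysqld manually with the grant tables disabled so that mysql_upgrade can actually connect (a sketch; the Windows service name, paths, and whether this is safe for this particular corruption are all assumptions on my part):
# Stop the Windows service, then start mysqld manually with privilege checks disabled
net stop MySQL57
mysqld --skip-grant-tables --console
# In a second console, run the upgrade, then shut the manual instance down and restart the service
mysql_upgrade -u root
mysqladmin -u root shutdown
net start MySQL57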