Let's say I have a Kubernetes cluster with two nodes: a control plane and a worker.
If I use persistent volumes in my pods, will Kubernetes store my data on both nodes for redundancy? And is redundancy even the default?
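For reference, this is roughly how my pods request storage, via a PersistentVolumeClaim (the names here are just placeholders):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-claim            # placeholder name
    spec:
      accessModes:
        - ReadWriteOnce         # mounted read-write by a single node at a time
      resources:
        requests:
          storage: 1Gi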
I am aware of the consequences and issues with running a single-node cluster. However, I'm still curious if it's possible. I plan on setting everything up myself.
In other words, can I run the control plane and a worker node on the same physical machine?
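From what I've read, with kubeadm this mainly involves removing the control-plane taint so that regular pods can be scheduled on that node (on older versions the taint key is node-role.kubernetes.io/master instead):

    kubectl taint nodes --all node-role.kubernetes.io/control-plane-

The trailing dash removes the taint rather than adding it.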
I would like to make an IAM user which has access to the AWS Lightsail CreateInstances API, but only if they make a request where bundleId is nano_2_0.
I am aware of condition keys in AWS, but according to the documentation, only tag-related condition keys are available for Lightsail.
However, I was hoping that a more generic condition key, or something similar, was available that would allow for the above scenario, perhaps a condition key that works for any API call.
Is that possible in any way, perhaps with small modifications?
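To illustrate, this is roughly the shape of policy I have in mind; note that the lightsail:BundleId condition key below is hypothetical, something I made up to show what I'm after:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "lightsail:CreateInstances",
          "Resource": "*",
          "Condition": {
            "StringEquals": {
              "lightsail:BundleId": "nano_2_0"
            }
          }
        }
      ]
    }

(Again, lightsail:BundleId is made up; IAM policy JSON doesn't allow comments, so I'm flagging it here instead.)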
I know that pricing for AWS Lightsail instances is based on the number of hours they run. However, if I only run an instance for 10 minutes and then delete it, am I still billed for a one-hour minimum?
I know how to follow the migration guide to upgrade a classic load balancer to an application load balancer.
However, when I clone my Beanstalk environment and perform that migration on the clone, the load balancer still shows as "Classic" in the environment's "Configuration" section.
How can I migrate from Classic to Application when using Elastic Beanstalk?
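From what I can tell, the load balancer type can only be chosen when an environment is created, for example via an .ebextensions option setting like the following, which makes me unsure how cloning alone would ever change it:

    option_settings:
      aws:elasticbeanstalk:environment:
        LoadBalancerType: application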
I recently started considering Azure over my current VPS host PhotonVPS for Virtual Machines. I assume that Virtual Machines are the same as VPS servers (please correct me if I am wrong about that).
Now, I need a server that runs a desktop program and also hosts a website and an SQL server.
I assume that all of these will be able to run just fine in a virtual machine from Azure, but I am worried about pricing.
I have looked at the available options and prices, and it seems that it is also possible to host the website and the SQL server separately, instead of installing them on the virtual machine.
What would be the pros and cons of that versus running it all on the same virtual machine, in terms of cost and performance? Is it generally considered a bad idea to run my website and my SQL server on the virtual machine as well?
I'm using the MySQL Workbench tool to export a database. However, the export includes the triggers as well. How can I exclude the triggers from the generated SQL dump file?
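For comparison, I know the command-line equivalent would be mysqldump's --skip-triggers flag (the database name below is a placeholder), but I'd like to do the same from within Workbench:

    mysqldump --skip-triggers -u root -p mydatabase > dump.sql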
We're a company with access to only a limited amount of disk space. A full backup of our data is around 300 megabytes.
We would like some kind of system that uploads only files that have changed, in order to save disk space and use it more efficiently.
Furthermore (optionally), we would love it if the system had a feature to decrease the backup intensity as time passes, again to optimize efficiency. Here's an example of what we had in mind regarding this:

- For the past week: keep one backup per day
- For the past month: keep one backup per week
- For anything older: keep one backup per month

And so on. In other words, we would like (if possible) a system that starts out with high-frequency backups and later replaces older backups with newer ones, while still keeping some "points" of data back in time.
The operating system of our server is Windows Server 2008.
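To sketch the "only changed files" part: we could get partway there with robocopy (built into Windows Server 2008) on a scheduled task; the paths below are placeholders:

    rem /E  copies subdirectories, including empty ones
    rem /XO skips files that are older than the copy already in the destination
    robocopy C:\Data \\backupserver\backup /E /XO

But that doesn't cover the retention/thinning behavior described above, which is why we're looking for a proper system.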