I have a virtual mail server in house protected by Exchange Online Protection, and it has been running well for the last few months. Over the weekend, the hosts that run the VM will be taken down, as will most of the network, for a rewire and other upgrades. During this time, is there a way to tell EOP not to try to deliver email to my domain, but to hold it until I say it's safe? I think EOP will queue mail if the receiving server is offline, but I am not sure for how long. So, how do I pause this? I can see an option to disable the connector, but what happens to mail for the domain then?
I have 2 machines, local1 and cloud1. local1 has 3 NICs: 2 connected directly to cable modems with public IPs (call them eth0 and eth1) and 1 connected to my LAN (eth2). cloud1 has a single NIC (eth0) and is connected directly to the internet (1Gb/s link in a datacenter). eth0 and eth1 on local1 both have default gateways and send their traffic over either, depending on which IP is making the request. There are 2 OpenVPN point-to-point tunnels, each using one of the public IPs from the modems and connecting to cloud1 over its public IP. This creates 2 tunnels, tun0 and tun1, on each box. They get IPs 10.8.0.1/10.8.0.2 on tun0 and 10.8.0.3/10.8.0.4 on tun1; .1 and .3 are on cloud1, .2 and .4 are on local1.
The local box has its default route set to use both .1 and .3 (Debian 8.3 with whatever kernel is in the box) and that works "correctly" (traceroute shows me hitting both .1 and .3 at different times). But on cloud1, the route back to my local network (192.168.1.0/24) only goes through 10.8.0.2... nothing comes back over 10.8.0.4...
Watching traffic monitors, I can see traffic going out over tun1 in house, but nothing (or very little) coming back in, while tun0 has lots of traffic going both in and out...
I know how to set multiple default gateways in Linux, but how do I set multiple gateways for a non-default route? For example:
ip route add 192.168.1.0/24 via 10.8.0.2 (works)
ip route add 192.168.1.0/24 via 10.8.0.4 (tells me it already exists)
Network forwarding is enabled on both boxes, and I would like to do this without NAT or masquerading... Also, 192.168.1.0/24 is a DMZ network, so there are further upstream firewalls for other machines.
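In case it changes the answer: I suspect what I actually need on cloud1 is a single multipath route with two next hops, rather than two separate routes. A minimal sketch of what I mean (the weight values are guesses on my part):

```shell
# Replace the single route with one multipath route carrying two next hops.
# weight values are placeholders; adjust them to balance the two tunnels.
ip route del 192.168.1.0/24
ip route add 192.168.1.0/24 \
    nexthop via 10.8.0.2 dev tun0 weight 1 \
    nexthop via 10.8.0.4 dev tun1 weight 1
```

Is that the right approach here, or does equal-cost multipath not apply to this setup?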
I have a Dell T7600 workstation with a PERC H200 controller card (it did not come with the box; it was installed later, since the standard onboard controller only does 3Gb/s connections). Anyway, the machine has been running Linux for the last few months with no issues, but I need to move to Server 2012 R2, and I can't get it to see the drives on the controller.
There are 8 disks on the controller: 2 Samsung 850 Pros, which I have set up in a RAID array, four 2TB hard drives, a 1TB disk and a 128GB SSD. The latter six are left as-is and are not in RAID.
I have loaded the H200 drivers from an external USB key, and the installer does find the "correct" driver, but it still won't show any disks. I have made sure the RAID array is set to boot in the controller, and the BIOS can "see" the drives and array. I even tried removing the array fully and installing that way (no arrays, just disks), but no dice. The Windows installer still can't see any drives.
I know this is going to be something simple... I just can't seem to find the answer myself.
[Update] Tried some more stuff last night, still no luck. Things I tried:
- deleted the RAID array and re-created it
- tried setting the boot option to UEFI
- made sure the RAID array is set to boot from (it is)
- made sure the BIOS can actually see the RAID array (it sees all disks, including the RAID disk, and that disk is set to first priority, after USB and CD)
I am trying to build this with an ISO I got from microsoft.com and the latest Dell H200 driver... When the installer gets to the disk-selection screen, nothing is shown; I choose the option to load a driver and point at the USB key, it finds the H200, but it still won't find any disks...
I have a MikroTik-powered router in the house with a couple of internet connections (two 200/10Mb cable modems and a 100/20Mb VDSL line). I am using mangle rules to set routing marks and NAT rules to do some load balancing, and everything seems to be going grand... but it only works for traffic passing through the router, not traffic from the router itself... Let me explain:
I have 4 GigE ports on the machine: WAN1, WAN2 and WAN3, and a LAN port named LAN1. All traffic from LAN1 is getting mangled (as it should be), but traffic from the local router itself (proxy traffic, IPv6 tunnels, VPN connections) is not being mangled. It gets the first route to 0.0.0.0/0, which in my case is WAN2, and sticks with it.
So, how do I get traffic from the local router itself to be mangled? Originally it was proxy traffic that caused the problem, but now, with IPv6 and VPN, mangling them matters more... Last time I enabled IPv6 traffic, all traffic went through WAN2 only, and the rest were unused... Any ideas?
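My current guess, for what it's worth: router-originated traffic only passes through the output chain, so mangle rules in prerouting never see it. Something like this sketch is what I imagine, repeated per WAN (the mark names are my own invention, and I have not verified that PCC works in the output chain):

```shell
/ip firewall mangle
add chain=output connection-mark=no-mark per-connection-classifier=both-addresses:3/0 \
    action=mark-connection new-connection-mark=wan1_conn
add chain=output connection-mark=wan1_conn action=mark-routing new-routing-mark=to_wan1
```

Is that the right chain to be working in?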
I have 2 MikroTik RouterBoards, an RB1100 and an RB951G. The 951G is acting as my wireless box, and has Guest, Internal and Internet Only wireless networks. The RB1100 has 3 WAN connections (2x 150/10Mb cable modems and a 70/20Mb VDSL modem) and does load balancing, firewalling, etc., for the whole network.
The RB1100 is on network 192.168.0.0/24 and the 951 has 3 address ranges:
- Guest -> 192.168.87.0/24
- Internal -> 192.168.88.0/24
- Internet Only -> 192.168.89.0/24
The idea is as follows:
- Guest is firewalled big time (limited bandwidth, limited sites, etc.), which I have working with the help of the hotspot.
- Internet Only should only be routed to the internet, possibly with some ports limited, and should not see anything on the 192.168.0.0/24 network.
- Internal should have access to both the internet and the 192.168.0.0/24 network, and anything on the 192.168.0.0/24 network should be able to see the 192.168.88.0/24 network as well...
I had the Internet Only part working to an extent, but accidentally cleared my router config (doh), and I never managed to set up the Internal network correctly...
Currently I have NAT enabled, and that allows me to see all machines on the 192.168.0.0/24 network from the 88.0/24 network, but 0.0/24 cannot see the 88.0/24 network...
I know I need to do something with routes, but even when I had that, something was not allowing me to see machines (a laptop on WiFi could not see a desktop on wired).
So, where am I going wrong?
Again, sorry I can't post the exact config... lost it in a firewall rule screw-up...
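To show what I mean by "something with routes": my rough understanding is that the RB1100 needs a static route for the Internal range via the 951G, and the 951G needs a filter keeping Internet Only away from the LAN. A sketch (192.168.0.2, as the 951G's address on the 0.0/24 network, is my own example):

```shell
# On the RB1100: reach the Internal WiFi range via the 951G (no NAT needed then)
/ip route add dst-address=192.168.88.0/24 gateway=192.168.0.2
# On the RB951G: stop Internet Only clients reaching the wired LAN
/ip firewall filter add chain=forward src-address=192.168.89.0/24 \
    dst-address=192.168.0.0/24 action=drop
```

Is that roughly the shape of it?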
VMware has a tool called vFabric Data Director which allows you to automate building SQL Server, Postgres and Oracle servers on VMware. Is there something similar that will work for Hyper-V? I was thinking of using a base Windows 2012 Core install, sysprepping it and running a script to install the required components, but before re-inventing the wheel, is there an automated way of doing this already?
I do not have System Center in my infrastructure, and currently do not have VMM either, but if they are required, I can go down that route...
[Tweaking the question]
Just to clarify, I only need SQL Server (2008 and above) to be installed, plus the base OS. I was thinking a sysprepped image of 2008 R2 or 2012 Core would do that job; it's just the automating of the SQL install that I have the question about...
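To make the question concrete: I know SQL Server setup has an unattended mode, so what I picture is the sysprepped image running something like this on first boot (the account, domain and instance names here are examples of mine):

```shell
setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=MSSQLSERVER ^
    /SQLSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE" ^
    /SQLSYSADMINACCOUNTS="EXAMPLE\Administrator" ^
    /IACCEPTSQLSERVERLICENSETERMS
```

The bit I'm unsure about is orchestrating this across VMs without System Center.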
I have a machine in house which I am planning on migrating from Windows Server 2008 R2 with Hyper-V to 2012 with Hyper-V. I am trying to figure out the easiest way of doing the migration... Most of the VMs live on a small iSCSI NAS/SAN, with both VHDs and metadata stored there. Based on this MSDN Blogs post, I need to manually export each VM and then re-import it on the 2012 box. Is that the only option? Is there a quicker way? Also, since space is limited: when an export completes, does it delete the original VM, or should I do that manually?
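For context, the import side on the 2012 box is the part I assume I could script per VM; something like this is my reading of the Hyper-V PowerShell module, not something I have tested (the path is an example, and `<vm-guid>` is a placeholder for the exported VM's config file name):

```shell
# PowerShell on the 2012 host; -Copy keeps the exported files intact
Import-VM -Path "D:\Exports\MyVM\Virtual Machines\<vm-guid>.xml" -Copy
```

If that is sound, the remaining question is really just the export half on the 2008 R2 side.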
We have 2 Hyper-V servers running in a small office, and are looking at using SCVMM for managing VMs. I have used SCVMM before, but only in test environments, and I have always installed it on a VM. In a production environment, where should SCVMM be installed?
I have been reading up on WAN optimization for the last while, mostly out of interest of speeding up my own internet connections, but also to speed up the office internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load balances the connections. In the office, we have a single connection into a NetGear router.
Most of the WAN optimization products I have seen seem to be prohibitively expensive, and also seem to be based on the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:
- software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
- any connections going to, say, the US (we are in Europe, but our backups currently live in the US, which would be something important to speed up) would be "tunnelled" through the optimizer. If downloading or uploading large files, it would open multiple connections between "the cloud" and the optimizer... this is where a lot of speed could be gained.
- finally, items not already compressed would be compressed on the cloud side, and items already held on the optimizer would not be sent again... kind of like rsync or proxy servers...
So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
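On the "SSH, Squid, Linux and duct tape" front, the crudest version I can picture is a compressed SSH tunnel from the house to the cloud box, with a proxy on the far end (the host name is made up, and I'm assuming Squid is already listening on 3128 there):

```shell
# -C enables compression on the tunnel; -N means run no remote command.
# Forward local port 3128 to the Squid proxy on the cloud box.
ssh -C -N -L 3128:localhost:3128 user@cloud.example.com
# Then point the local browser/system proxy at localhost:3128.
```

That covers the compression part, but not the multiple-connection trick, which is why I'm asking about proper tools.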
[UPDATE] Just to be 100% clear, these are 2 separate connections and 2 separate systems... I am trialling this at home and may use it in the office also...
We bought a new firewall a couple of months back, a Netgear UTM9S, and one of the features we enabled is the PPTP VPN, to give remote users access to SQL, web and file servers. But now, after reading this Ars Technica article on MS-CHAPv2 being broken, I am wondering: should we worry about this, turn off PPTP and enable one of the other VPN options? SSL and IPsec options are available, but reading the manual will be required to move to them, plus configuring each user's machine. So should we spend time on this, or is this just FUD?
I have an existing Active Directory in house, a mix between a Win2K8R2 and Win2K3 domain, and I would like to test out the Windows Server 2012 Essentials beta on the network. When walking through the install, it gives me the option of a new domain, or migrating from an existing domain. When clicking existing, it tells me I can only have one SBS server running on a domain at a time... I don't have any existing SBS servers in house (both are full Standard or Enterprise editions), but I do plan on keeping at least one of these extra servers running... So, how do I get a 2012 Essentials server to join a domain without migrating the existing domain? Or, if I do migrate, can I still get one of the other boxes to act as a secondary controller?
I have a couple of different servers around the house, and a lot of storage on each of these machines. I also have a couple of dedicated servers, and some VMs on those boxes... Most of the boxes and VMs run Windows, but I have a Mac and a couple of Linux boxes.
I am looking for some sort of cross-platform blob storage system that can be installed on a machine easily, given a location to store files in, and told an amount of disk space to use... Self-organizing would be handy, but if I need to tell it how to find other nodes in the cluster, so be it... something like S3, but self-hosted...
I am just wondering, is there something with an easy enough API (or even SMB/NFS access) that will allow me to upload a file (or object, as it were) to a location, and let it replicate around the network? The OpenStack Storage system looks good, but it doesn't seem to support Windows as a server, only as a client... Any recommendations?
I have a dedicated server, and I have just started installing some VMs onto the box using Hyper-V. I am currently backing up the VMs using a Hyper-V backup tool, which seems to work quite well, and then the directory it backs up to is backed up using CrashPlan... Given CrashPlan is not a professional backup solution, I am wondering what the best way of backing up the VMs would be?
Some notes:
- I do not have access to the physical hardware on the box, so adding USB keys or external drives is not an option...
- The company does give me about 100GB of storage on a file share... not sure if 100GB will be enough, though...
- The backup software takes a snapshot every night at midnight, compares it with the last backup, and only backs up the changed files plus the differences... but there does not seem to be any compression, and given that VMs contain a lot of compressible data (or a good chunk, I would think), I think I could save some space...
- Finally, it does not do any sort of data de-duplication... given the VMs are copies of Windows (Win2k8R2 Standard + Web + Win 7, and probably at least one or two more Windows web boxes too...), I would think a lot of savings could be made...
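As a rough illustration of why I think compression alone would help: the unused space inside a VHD is mostly zeroes, and zeroes compress to almost nothing. A quick sketch of the idea (sizes are arbitrary):

```shell
# 10 MB of zeroes stands in for unused space inside a VHD image.
orig=$((10 * 1024 * 1024))
comp=$(head -c "$orig" /dev/zero | gzip -c | wc -c)
echo "gzip shrank $orig bytes to $comp bytes"
```

Real VM images obviously won't compress anywhere near that well, but even a fraction of that saving matters when I only have 100GB to play with.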
Any ideas?
As a follow-up question to How do I host multiple servers on Hyper-V with only a few public IP addresses, I am now trying to figure out where to put the ISA/TMG server. Should it be virtualized, listening on an external IP and passing data to an internal network, or should it be hosted on the host partition? The last time I played with ISA/TMG, it was a physical box with other machines behind it, which makes me lean towards the virtual option: give it 2 public IPs and let it sort out the rest, and give 1 IP to the box itself for management... Which way should it work?
I have a dedicated server, which is currently running Windows 2008 R2 Web edition and VMware Server. I am upgrading to Windows Server 2008 R2 Standard with Hyper-V. Once upgraded, it will have 24GB of RAM and a four-core Core i7 processor. At the moment, it runs IIS on the main system instance and has a Linux VM. The Linux VM has a dedicated IP, and the host has 2 IPs.
Once the upgrade is complete, I would like to move the web hosting to a VM on the machine under Hyper-V, and also move the Linux box to Hyper-V. This will mean that both VMs have a public-facing IP, as does the box itself. I am currently limited to 3 public-facing IPv4 addresses, but I have a /64 IPv6 block.
If I want multiple VMs, some public (web sites, etc.) and others not (development boxes), how do I do it? I know I could set up a reverse proxy, give it a public IP and give the resources behind it private IPs. Is that the best solution?
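As a variant on the reverse-proxy idea, I believe the host itself can forward individual ports from a public IP to VMs on an internal-only switch using netsh port proxying; a sketch with example addresses (203.0.113.10 and 192.168.10.5 are not my real ones):

```shell
:: Forward port 80 arriving on one public IP to a VM on an internal Hyper-V switch.
netsh interface portproxy add v4tov4 listenaddress=203.0.113.10 listenport=80 ^
    connectaddress=192.168.10.5 connectport=80
```

Would that scale, or is a proper reverse proxy still the better option?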
We have decided that it is time to build a domain in our smallish office, and I have been tasked with the job. The domain is set up, and we are in the process of creating usernames and passwords for users. Since the users all have existing machines (mostly laptops) and have been using them for some time now, when they join the domain their settings are going to go "missing": nothing will be on their desktop, and their documents folder will be somewhere else...
What I am looking for is a quick and easy way of moving their documents, settings, desktops, etc., from their old login account to their new domain-based account... At minimum, this should cover the files in their "My Documents" folder and on their Desktop, but moving settings and mail for Outlook, Visual Studio, IE/Firefox, etc., would be handy too.
I have read about the Microsoft User State Migration Tool, but unless I am reading it wrong, it seems to be set up for moving from one machine to another, not from one account to another... Any ideas?
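If no proper tool exists, the brute-force fallback I can picture is robocopy between the profile folders while logged in as an admin (the profile folder names here are examples of mine):

```shell
:: Copy Documents and Desktop from the old local profile to the new domain profile.
robocopy "C:\Users\olduser\Documents" "C:\Users\CORP.newuser\Documents" /E /COPY:DAT
robocopy "C:\Users\olduser\Desktop" "C:\Users\CORP.newuser\Desktop" /E /COPY:DAT
```

That obviously misses application settings, which is why I'm hoping there is something better.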
I work in a small company which is getting bigger all the time. We have outgrown our old backup system (a small NAS box and SugarSync) and would like to move to something better...
We currently have 3 servers: 2 Win2k3 boxes and a 2k8 box. One of our servers is running SVN with all our code on it, and this is the most important machine to get backed up. We also have SQL Server boxes, Oracle instances and MySQL installed...
I have been looking at offsite backup plans, and have been thinking about the following:
- Take all the machines we currently have and virtualize them using the P2V tools in System Center Virtual Machine Manager.
- Store the VHDs on a Nexenta or Solaris machine using ZFS and iSCSI.
- Using ZFS's snapshot tools, take snapshots of the instances while they are running and back them up to Amazon S3 or similar, then just back up the changes between nights.
- If a machine fails, just replace the physical box and add it to the Hyper-V pool, then copy the VMs on (copy is not the right word, given the files are stored on iSCSI, but hopefully you know what I mean).
- As long as the SAN is built correctly, we should be OK for a disk failure (RAID-Z or RAID-Z2).
- Since everything is backed up to S3, if we lose the office (fire, meteor strike, aliens, etc.) we can get our data back (as long as Amazon still exists).
What do you think? Is this a feasible solution?
PS: an advantage I just thought of for using ZFS: data de-duplication should (in theory) mean we store less on the iSCSI box. If we upgrade all our machines to 2k8R2, we only need to store one real copy of it... the rest is de-duplicated...
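The snapshot-and-ship step in the plan above is the part I'd script; roughly like this, with made-up pool/dataset names and dates:

```shell
# Nightly: snapshot the dataset holding the VHDs...
zfs snapshot tank/vms@2012-06-02
# ...then send only the delta since the previous night's snapshot, compressed.
zfs send -i tank/vms@2012-06-01 tank/vms@2012-06-02 | gzip > /backup/vms-delta.gz
```

Does anyone see a problem with snapshotting the VMs while they are running like this?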
Good morning.
We have a service with a SQL box on Amazon EC2 and, as suggested in this question, we are using EBS to store the data... The problem is: what is the best way to set up the storage?
At the moment, during the development phase, we have four 10GB "disks" (should be enough for the next while; might look into more later) in RAID 0 (this worries me...). We take backups of the SQL box itself every 3 hours, but I am worried that with RAID 0, losing an EBS volume would cause us to lose a couple of hours of data...
I'm just wondering, given that we are running Windows, what is the best practice for this? RAID 1? 10? 5? Something else?
Thanks.
Good morning all.
We are currently building a SQL box on Amazon (currently MSSQL, but moving to MySQL soon...), and I have set the firewall in Amazon to only allow connections from our main network IP address and 2 other security groups on Amazon (web servers and worker roles). Anyway, it seems that this firewall rule is not working as planned... Checking the SQL Server logs, I am getting a load of requests from other IP addresses trying to get into the instance (trying to guess the SA password). This seems to be accounting for quite a lot of traffic and CPU usage...
So, what should I be doing to lock down my instance? I thought that only allowing machines in my own security groups and my own network would lock down a lot of this at the network level... Am I missing something?
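As a belt-and-braces step, I assume I could also restrict port 1433 at the Windows firewall on the instance itself, something like this (the office IP here is an example, not our real one):

```shell
:: Only allow SQL Server (TCP 1433) inbound from the office IP.
netsh advfirewall firewall add rule name="SQL from office only" dir=in action=allow ^
    protocol=TCP localport=1433 remoteip=203.0.113.50
```

But that feels like treating the symptom if the security group should already be doing this at the network level.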
I currently have an Exchange 2010 SP1 server in house, and due to some changes, it looks like I will need multi-tenant support for a few extra domain names. The documentation I have found so far only mentions multi-tenant support when upgrading from 2010 RTM to SP1, not what you do if you already have 2010 SP1 installed.
So, from what I can gather, I have a few options:
- Install a new Exchange server with Multi Tenant support and migrate DBs over
- Back everything up and start again
- Something else...
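On the first option: my understanding is that the multi-tenant ("hosting") mode has to be chosen at install time on a fresh server via a setup switch, roughly like this (the role list is just an example, and I have not tested this):

```shell
:: Fresh Exchange 2010 SP1 install in hosting mode (my understanding; not tested)
setup.com /mode:Install /roles:HT,CA,MB /hosting
```

Which is what makes me think a new server plus a migration is the realistic path, unless there is a way to switch an existing SP1 install over.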
Any suggestions would be greatly appreciated...
Thanks.