I have sensitive data stored in both Azure DB and Azure SQL VM.
An authorised DBA can log on and query the database, but in theory could a random Microsoft employee do the same without asking permission?
I found this online which suggests the answer is 'no', but is it really?
Customer data ownership: Microsoft does not inspect, approve, or monitor applications that customers deploy to Azure. Moreover, Microsoft does not know what kind of data customers choose to store in Azure. Microsoft does not claim data ownership over the customer information that's entered into Azure.
Also found this on a site discussing the negatives of using a SQL Developer Licence:
Microsoft gets access to your data: it is mandatory with any non-commercial installation of SQL Server that all your usage data covering performance, errors, feature use, IP addresses, device identifiers and more, is sent to Microsoft. There are no exceptions. This will likely rule it out for any company that deals with particularly sensitive data.
I'm not proposing using a developer licence on Azure, but which is it - can Microsoft inspect my data or not, either legitimately or a rogue employee?
Legally speaking, they can't read your data or hand it over to law enforcement without a valid court order.
See Microsoft's transparency reports for the current figures on how many law-enforcement subpoenas they have answered.
For that reason you have to choose your Azure region wisely. For example, a HIPAA enterprise in Canada would have to host its data in Canada.
A rogue Microsoft employee could conceivably see your data. The process there is unknown, but that risk is the same with any hosting provider, or with a rogue employee inside your own corporation.
You are putting your data on Somebody Else's Computer, and the data can be accessed in some way. In other words, the answer to your exact question is almost surely: Yes, some Microsoft employees can see your data but make an active choice not to perform the tasks that would let them do so.
A wider question is how large the risk of leaks of such data actually is. My opinion is that the risk that a Microsoft employee would attempt to access your data (and leak it) is considerably lower than the risk that a configuration or software error made by you as a tenant would expose such data to third parties. The latter is what we usually see when it comes to data leaks that make the news.
I state this from experience because I used to work there.
Internally, Microsoft is very strict about protecting the data of users and customers, and unlike some other big, well-known web outfits, Microsoft explicitly does NOT scan the contents of users' private files (e.g. your Hotmail.com email, your VM's data files) for marketing or advertising purposes.
Any employee who breaks internal rules to access user data would be shown the door PDQ, and would likely face legal consequences. And only a restricted cadre even have the technical ability/access to do that.
Note that "meta data" falls under different rules, which Microsoft is upfront about, but is strict about who might actually see even that. Usually it gets anonymized en-mass and sorted into some internal company database so the operations folks can keep the systems running. Those folks care only about the overall statistics, not the actual user data (which they can't normally see).
The SQL Developer license data you mention is metadata (e.g. "usage data"), not the customer's SQL data.
In short, no human is going to read your files sitting on a Microsoft server unless there is a court order or some dire system repair problem requiring inspection of a specific file (extremely unlikely). And in either case it will be a limited number of eyeballs, and only after internal approvals are granted.
True story: in the very old days (1980s) two of the technicians would periodically take bunches of old hard drives out to the parking lot and drive a railroad spike through each with a sledge hammer. Very therapeutic. How's that for deleting files?
Can they? Yes, the data is on their servers, which they control.
Will they? Probably not, unless they have a reason (usually a legal one, and you have a nice answer about that above; also keep in mind that there are legal cases they cannot disclose). The probability depends on how interesting or problematic your data is.
Is what they get usable? That part depends on you: if you send them cleartext data then yes; if you encrypt it before sending then no (see the sketch below).
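To illustrate that last point, here is a minimal client-side encryption sketch in Python using the `cryptography` package's Fernet recipe. This is not Microsoft's mechanism (for Azure SQL specifically you would more likely reach for the built-in Always Encrypted feature, which keeps column encryption keys on the client side); it just shows the principle that only ciphertext ever reaches the cloud, so whoever holds the box sees opaque bytes. The sample value and the storage step are invented for illustration.

```python
# A minimal sketch, not Microsoft's mechanism: encrypt on your side,
# store only ciphertext in Azure. Requires 'pip install cryptography'.
from cryptography.fernet import Fernet

# The key must stay on-premises (your own vault/HSM); if it lives in
# Azure alongside the data, this protects nothing.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt before the value leaves your network. The sample value and
# the idea of storing it in a VARBINARY column are illustrative only.
ciphertext = f.encrypt(b"sensitive customer record")
# ... INSERT ciphertext into the Azure SQL table here (e.g. via pyodbc) ...

# Decrypt only after the ciphertext comes back to a machine you control.
plaintext = f.decrypt(ciphertext)
assert plaintext == b"sensitive customer record"
```

The design point is key custody: if the key is stored in Azure next to the data, a rogue employee with access to both gains nothing by the data being encrypted.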
I've not found exact details about Microsoft's internal access policies, but they do give general information in their brochure "Privacy Considerations in the Cloud" (PDF download, linked from their Privacy at Microsoft page).
Further, data appears to be properly deleted and/or destroyed when you request deletion. ("Request" here appears to include things like releasing virtual hard drives and similar actions.)
That said, some customer data appears not to fall under the above policies, and you as the customer need to understand what this is and be careful with data you upload that falls under it. Most of this appears pretty obvious; however, here is one example from Microsoft's data categories and definitions:
The primary document about security and safety of data within Azure appears to be "Protecting Data in Microsoft Azure" (PDF download, linked as "Azure Data protection" in the middle of Data management at Microsoft). This touches on MS staff access only on page 17, where it discusses how staff are trained, they have strict protocols that are audited¹, etc., but it's vague on the details. It does reiterate what we've already seen above, in some cases being a bit more explicit:
The next couple of paragraphs also make clear that anything removed from the data centre is wiped first, that "delete means delete," and that this is "instantly consistent."
That said, the document is still well worth reading in its entirety if you're using Azure for any security-sensitive information, since security problems are far more likely to come from within your organization than from Microsoft.
¹ Don't read too much into the "comprehensive audits" part, by the way. Many security frameworks, such as ISO/IEC 27001, audit not whether you're actually doing a good job at securing things, but whether you have documented specific security controls and have procedures for ensuring that you follow that documentation. Thus, if you document that passwords shall be no longer than 8 characters and consist only of lower-case letters, then so long as you can show that you're following that, you pass the audit.
I am addressing the "rogue employee" aspect only.
The vast majority of Microsoft employees do not have access to your data. The few that do still need to jump through some hoops to request access to it.
I am a former Microsoft employee. The few times I did get access to user data, it was with the knowledge and agreement of the customer.
The short answer: if it is cloud, it is YES!!! The first rule of IT (or any tech) security is 'the man who holds the box OWNS the box'.
'Safes are meant to keep people outside, not inside' that is how magicians get out of safes.
Now think of all the companies 'out-sourcing' their data/development/support to Asia and Europe, or the recent Capital One security breach. The CSR 'holds' the box! When you call your credit card company or bank, the CSR asks you to verify your information... now he/she knows a LOT about you!
A long time ago (over 20 years ago) I 'caught' a 'Big Blue' partner's tech reading/browsing a customer's hard disk on a computer that had come in for repairs.
I have had my emails read by my email service provider; since then I operate my own email servers. My Hotmail, Yahoo and Gmail accounts are 'public domain' as far as I am concerned!
As ex-president Jimmy Carter said 'The most secure way to communicate is by regular (U.S.) mail'.
I am confident that my answer will be down-voted and removed :)