It has always been my strong conviction that a large-scale multi-user DBMS should reside, stand-alone, on a dedicated server or cluster, with no other unnecessary apps, processes, or services that could steal resources from the DBMS. I also believe that the DBMS should be tightly integrated with an OS that has been tailored to give the DBMS the maximum possible performance! Proprietary systems such as Pick, Teradata, and others were designed with this goal. Would the latest Sun/Oracle system fall into this category? Would it make sense to pursue this kind of architecture with other DBMSes, like INFORMIX?
If you have the odd quarter of a million USD to spend, you can consider buying a slice of an Oracle Exadata machine. This is a database appliance built from carefully specified hardware, with Solaris tweaked until it squeaks to optimize Oracle's performance.
The integration between Oracle and Solaris is tighter than most. I can't confirm that it meets your list of requirements, though.
In general, there are certain attributes of the ACID RDBMS (transactions, joins, strong consistency) that combine to make performance degrade polynomially, rather than scale, when you put two or more computers together to serve one database: every transaction has to be coordinated across the participating nodes.
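To see where that coordination cost comes from, here is a minimal sketch of two-phase commit, the classic protocol for committing one transaction atomically across several nodes. The `Participant` class and the in-process "network" are invented for illustration; real implementations add write-ahead logging, timeouts, and recovery.

```python
# Toy two-phase commit: before a cross-node transaction is durable, the
# coordinator needs a prepare round and then a commit round with every
# participant. That is two network round trips per node per transaction,
# and the slowest node gates all the others.

class Participant:
    """Stand-in for one database node (no real networking here)."""

    def __init__(self, name):
        self.name = name
        self.staged = None

    def prepare(self, txn):
        # Phase 1: stage the change durably, then vote yes/no.
        self.staged = txn
        return True

    def commit(self):
        # Phase 2: make the staged change permanent.
        self.staged, committed = None, self.staged
        return committed

def two_phase_commit(participants, txn):
    # Phase 1: every participant must vote yes ...
    if not all(p.prepare(txn) for p in participants):
        return False  # a single "no" (or timeout) aborts everyone
    # Phase 2: ... and then every participant commits.
    for p in participants:
        p.commit()
    return True

nodes = [Participant(f"node{i}") for i in range(3)]
print(two_phase_commit(nodes, {"account": 42, "delta": -100}))  # True
```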
There are a number of attempts to solve this problem:
1. Optimize the database itself as much as possible: reduce transactionality, reduce joins, and so on.
2. Optimize one computer as much as possible to serve the database: tune the supporting operating system, the disks, etc.
3. Scale vertically by serving the database with a single extraordinarily powerful computer.
4. Distribute the RDBMS by sharding, or by putting different tables in different databases (see the routing sketch after this list).
5. Use a truly distributed database that drops some of the attributes of an ACID RDBMS but offers true distribution and its attendant performance, e.g. Cassandra and others. A truly distributed database can run on commodity hardware, because its performance depends principally on how many nodes there are rather than on the performance of any given node (see the ring sketch below).
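As a concrete illustration of approach 4, here is a minimal hash-sharding router (a sketch only: the shard names and routing function are made up for the example). Each key maps to exactly one database, so single-key queries hit a single server, and anything that spans shards becomes the application's problem:

```python
import zlib

# Hypothetical shard names; in practice these would be connection
# strings for four separate database servers.
SHARDS = ["users_db0", "users_db1", "users_db2", "users_db3"]

def shard_for(key: str) -> str:
    # Stable hash (not Python's randomized hash()) so a given key
    # always routes to the same shard across processes and restarts.
    return SHARDS[zlib.crc32(key.encode("utf-8")) % len(SHARDS)]

print(shard_for("alice"), shard_for("bob"))
```

Note the built-in pain: joins across shards need application-level plumbing, and adding a fifth shard changes the modulus, remapping most existing keys.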
There are hard limits to the first four methods. There are no comparable limits to the fifth.
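To sketch why the fifth approach is different, here is a toy consistent-hash ring of the kind Cassandra-style stores use for data placement (the node names and vnode count are invented; real systems add replication and gossip on top). Unlike the modulo router above, adding a node moves only the keys in the arcs it takes over, so capacity grows with node count without a global reshuffle:

```python
import bisect
import hashlib

def _h(value: str) -> int:
    # Stable, well-mixed hash onto the ring.
    return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

class Ring:
    """Toy consistent-hash ring; a sketch, not a production design."""

    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self.points = []  # sorted (hash, node) pairs
        for node in nodes:
            self.add(node)

    def add(self, node):
        # Each node owns many small arcs ("virtual nodes") to even out load.
        for i in range(self.vnodes):
            bisect.insort(self.points, (_h(f"{node}#{i}"), node))

    def node_for(self, key):
        # A key belongs to the first node clockwise from its hash.
        i = bisect.bisect(self.points, (_h(key),)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
keys = [f"user:{i}" for i in range(1000)]
before = {k: ring.node_for(k) for k in keys}
ring.add("node-d")
moved = sum(before[k] != ring.node_for(k) for k in keys)
print(f"{moved} of 1000 keys moved")  # roughly a quarter; modulo would move ~750
```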
As database needs expand far more rapidly than tweaks and hardware can keep up with, the inevitable solution is a distributed database. Sure, many people will try to tweak their database servers, and then be forced to upgrade to massive hardware, but that's just a stop-gap; when it stops being sufficient, they will be forced to shift to a distributed database.
Very subjective.
Firstly, while SQL Server may be dedicated to the Windows OS on x86 hardware, neither the hardware nor the software is specially designed to run a database platform. Furthermore, while SQL Server is surely designed to make the best of Windows, it doesn't follow that it is primarily designed for performance. In the case of SQL Server, I'd argue that it has been optimized for administration and integration rather than raw performance.
Secondly, hardware performance increases quickly over time, so what you buy as the optimum hardware this year will be past its peak in a year, and considerably dated in three years. Few buyers would be upgrading their hardware that frequently, so buying a database platform to gain an edge in hardware performance seems short-sighted.
Thirdly, if a database company is building an OS dedicated to its database, can it attract the best OS people? If not, it is a judgement call whether database software will run faster on a generic OS built by the best OS people or on a dedicated OS built by the 'lower division'.
Finally, to some degree you can often improve performance by throwing more (or more expensive) hardware at the bottlenecks. For a given budget, you may get a better deal putting the dollars toward grade 'A' components in a generic build than grade 'B' components from a specialist. Generic is going to be cheaper because the larger the customer base, the more widely the base costs can be spread (and the more price competition comes into play between suppliers).
PS. Obviously performance isn't the only criterion for purchase. Uptime, skills availability, etc. all come into play, as does the financial security of the supplier / line of business. I worked at a place that had decided, in the early 90s, that Macs were the best desktop machine. They were stuck with a bunch of machines with no upgrade path.
I'm assuming you want discussion rather than answers... but my answer is "No" anyway
In a big corporate shop, every Oracle, Sybase, and SQL Server installation may well use the lowest common denominator: the SAN, which may itself be transactionally replicated off-site. Ask any corporate DBA.
In any shop, code quality will be a bigger factor than the server/OS. No amount of optimization and integration will save you from poor indexing, for example.
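A quick demonstration with Python's built-in sqlite3 module (any engine shows the same shape; table and index names here are made up): the hardware and OS are identical across both runs, and only the index changes the plan from a full scan to an index search.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
db.executemany(
    "INSERT INTO orders (customer) VALUES (?)",
    [(f"cust{i % 1000}",) for i in range(10_000)],
)

query = "SELECT COUNT(*) FROM orders WHERE customer = 'cust42'"

# Without an index: the plan's detail column reads something like 'SCAN orders'.
for row in db.execute("EXPLAIN QUERY PLAN " + query):
    print(row)

db.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# With the index: something like 'SEARCH orders USING COVERING INDEX
# idx_orders_customer (customer=?)', touching only the matching rows.
for row in db.execute("EXPLAIN QUERY PLAN " + query):
    print(row)
```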
You should check out the IBM System i platform. It supports DB2 for i, which isn't exactly compatible with DB2 UDB, but it's close enough.
The database is very tightly integrated with the OS. I'm told that this platform grants a level of efficiency and performance beyond the wildest dreams of users of any of the "mainstream" architectures of Windows/Linux/Solaris/etc.
I don't have any experience with it personally, but a lot of users of the System i platform swear by it, and it seems to meet your desire for tight integration.