The plus sides:
you would be able to have concurrent writes, provided that the data hashed to different shards. MongoDB takes a write lock on the entire database on that instance, so even a non-sharded replica set can only process one write at a time
it's a good way of learning sharding/replica sets/admin, but if that is all you want, you can achieve it with fewer instances
replication would be very quick :)
The downsides:
you would need a powerful machine (multiple CPUs, as much memory as possible, and fast disks) to really benefit from the performance - Mongo craves RAM! For optimal performance your total index size should fit into RAM, which means that with 2 gigs of index per shard you would need 5 * 3 * 2 = 30 gigs of RAM, plus memory for the OS and everything else
you may not benefit much from the slaveOK query option, since all the replica-set members are competing for the same hardware
you would have no protection against hardware failure - if the box goes down, all of your shards and replica sets go with it
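The RAM sizing mentioned above can be worked through explicitly. This is just the arithmetic from the answer, with the 2 gigs of index per shard as a hypothetical figure:

```python
# Worked version of the sizing above: 5 shards, 3 replica-set members per
# shard, and a hypothetical 2 GB of index per shard.
shards = 5
members_per_shard = 3
index_gb_per_shard = 2

# On a single box, every member of every shard competes for the same RAM,
# so the totals add up across all 15 mongod processes.
total_index_gb = shards * members_per_shard * index_gb_per_shard
print(total_index_gb)  # 30 GB for indexes alone, before the OS and working set
```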
As I understand your question, you are asking if you can have a Master Server, on which you are running five instances of mongod, each of which is identified as a separate server to mongos via the config server, and then three similarly-configured servers as replica-set members of each shard. My answer is based on this interpretation.
Generally this is not a good idea. If you are running five separate mongod processes (each defined in the mongo config server), and each writes to a separate disk (spindles, not partitions), that may be acceptable and may actually gain you some Disk I/O benefits. However, you won't be using the Master's memory to its best potential, and you will have complicated your setup unnecessarily (especially when it comes to backup/recovery).
Usually you only shard when you are seeing delays due to I/O on disk writes. (I/O delays on reads can be mitigated by turning on slaveOK reads in the language driver.) Write delays are the signal that you need to scale your architecture out, and it's important to keep that in mind, because it means you should be buying more servers. Keeping a single master also makes life much simpler; sharding increases your complexity.
Start out with a Replica Set, and if Write operations are a problem, then you shard. It's a lot easier to add sharding than to remove it.
(This is all ignoring the potential to get faster disks and more memory. SSDs often make sharding unnecessary.)
What you are doing is fine if you are just playing with configuring Sharding and Replica Sets. (You should probably avoid doing this in Production.) Honestly though, I would recommend doing it the other way around: put one shard on each host, and spin up the replica-set members across the other hosts (avoiding placing a replica-set member on the same host as its shard's primary), like so:
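One way to picture that placement - with hypothetical host and replica-set names, since the answer doesn't specify any - is as a table of which member runs where, plus a couple of sanity checks on the layout:

```python
from collections import Counter

# Hypothetical three-host layout: each host runs one shard's primary plus
# secondaries from the other two shards, so losing any single box still
# leaves every replica set with a two-of-three majority.
layout = {
    "host1": ["rs_a primary", "rs_b secondary", "rs_c secondary"],
    "host2": ["rs_b primary", "rs_c secondary", "rs_a secondary"],
    "host3": ["rs_c primary", "rs_a secondary", "rs_b secondary"],
}

# Sanity check 1: exactly one primary per replica set.
primaries = Counter(m.split()[0] for procs in layout.values()
                    for m in procs if m.endswith("primary"))
assert all(count == 1 for count in primaries.values())

# Sanity check 2: no host holds two members of the same replica set.
for procs in layout.values():
    assert len({m.split()[0] for m in procs}) == len(procs)
```

The point of the spread is that no replica set lives entirely on one box, so a single hardware failure costs you at most one member of each set.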