Good routers usually cost more than a low-budget PC (~$200), which I could turn into a router by installing Zentyal, pfSense, ClearOS or something similar.
For example, my friend uses a D-Link DFL-860E in the office. It costs around $550. For that money I could buy a decent PC and make it do more work than this D-Link can.
What is the advantage of using such devices instead of PCs? Is it really just a matter of electricity costs?
For small offices, a full PC may be overkill if they don't have an administrator, or a dedicated outside company with a short response time, to administer it.
Dedicated units, once configured, draw less power, have no moving parts, can be reset by power cycling, and generally "just work" from that point on.
PCs are more flexible, but they have moving parts (and more points of failure) and lots of features that either aren't needed, aren't fully used, or are confusing to use without an in-house administrator. A PC uses more power, can be noisier, and takes up more space. It may also get re-appropriated by staff thinking it's just an unused computer, or simply powered off, unless someone is there to slap hands away or make sure a big note is taped to the front.
In the long run, unless there is staff to maintain it, the added cost of a dedicated unit is generally a peace-of-mind tax: you pay extra to keep the small office from calling and yelling about unknown failures or difficulties in using it.
Dedicated units also usually come with warranties and service support. Do-it-yourself routers, not so much.
Reliability. Your PC with a spinning hard disk and a fan won't be as reliable as a good router. Also, the ease of management will quickly pay for itself in saved labor.
Because your router is pretty much the definition of core infrastructure. Sure, you could build one, but you'll lose out in several ways.
In fact, I can almost guarantee that any mid- to high-range off-the-shelf router will have higher uptime/availability than any roll-your-own system.
Also, you'd never want to multi-role your router, even if it were a PC with a conventional operating system. So that point is fairly moot anyway.
Finally, the D-Link DFL-860E is far more than a router. I think you'd struggle to build a reliable PC that really does have all of its features for the same cost. You'd struggle even more if you factor in your time as a cost.
If you have a) the time and knowledge to build the PC with quality parts (a passively cooled Atom plus an SSD or other non-spinning drive is probably a reasonable choice) and b) the time and knowledge to install and configure the software that performs the routing functions you need, then you'll probably be much happier building than buying off the shelf. It really comes down to "how complex a solution do you need?" and "how do you prefer to solve this?"
The time and knowledge necessary to do this are probably the main requirements for the build-it approach. You would not want to do this as a professional (see the other answers) because it doesn't make sense to spend the time on it. But if you do it yourself, you'll learn a ton.
Software quality and completeness. The open source routing software offerings (Vyatta, Quagga, etc.) are really impressive in terms of how far they have come, but the basic fact is that their utility is judged in significant part by how they compare in features and stability to the commercial offerings. It's no accident that the configuration methodologies and compatibility tests for many of these tools tend to reference Cisco gear.
In the case of open source operating systems and mainline apps (databases, web servers, dev tools, etc.) there is a community of many hundreds of thousands of developers contributing, many with the backing of commercial organizations with substantial resources. In contrast, open source networking has an equally devoted, but much smaller, community. It is much harder to build a truly solid L2/L3 implementation than a lot of folks give it credit for. The speed of the hardware (to a point) is the easy part.
All of the above tends to translate into implementations that are simpler and operations that are more repeatable. I have run some very large networks over the years and have worked with most of the open source tools (dating back to the days of freely available gated) and have generally found that over the lifespan of network gear (3x that of most servers) the up-front cost of gear was actually one of the cheapest parts of the equation.
You probably want to have a look at this article by Jim Salter for OpenSource.com and ArsTechnica.com. From my point of view, if it is a bigger company, building a router on a very bare-bones CentOS, Ubuntu Server or Debian lets you put together gateway/firewall clusters cheaply on standard server hardware, using keepalived (which is basically VRRP) or e.g. pacemaker + corosync. I know, because I have done so in about 8 or 9 instances. You get serious security updates, and you can even use an enterprise distribution like RHEL, SLES or Ubuntu Server with support. With BIRD or OpenBGPd you can use BGP (and OSPF and others) for dynamic routing reliably. If you build the cluster for availability rather than performance, consider buying just one box and running the other node as a virtual machine, provided the hypervisor's network can be configured accordingly.
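As a rough sketch of what the keepalived part can look like (the interface name, virtual router ID and the shared 192.168.1.1 address below are placeholders I made up, adapt them to your network):

    # Minimal VRRP instance on the primary box; the standby box would use
    # "state BACKUP" and a lower priority so it only takes over on failure.
    cat > /etc/keepalived/keepalived.conf <<'EOF'
    vrrp_instance GW_LAN {
        state MASTER
        interface eth0            # LAN-facing interface (placeholder name)
        virtual_router_id 51      # must match on both cluster members
        priority 150              # higher value wins the master election
        advert_int 1              # send VRRP advertisements every second
        virtual_ipaddress {
            192.168.1.1/24        # floating gateway address clients use
        }
    }
    EOF
    systemctl restart keepalived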
You can do a lot of stuff, like keeping the traffic of the last few minutes or hours directly on the box and analyzing it with Snort or Suricata, or just watching what you need with tcpdump. You can build a proxy or a VPN (have a look at WireGuard), you can do traffic shaping with traffic control (tc), and you can actively help with debugging using nmap, netcat, ssh, traceroute, ping, the logs from the gateway itself, etc.
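A couple of concrete examples of that, as a sketch only (interface names, file sizes and rate limits are placeholders):

    # Keep a rolling capture of recent traffic on the box itself: 10 files of
    # about 100 MB each on the WAN interface (assumed here to be eth1),
    # overwriting the oldest file once the limit is reached.
    tcpdump -i eth1 -C 100 -W 10 -w /var/cap/ring.pcap

    # Very simple egress shaping with tc: cap upload on eth1 to ~50 Mbit/s.
    tc qdisc add dev eth1 root tbf rate 50mbit burst 64k latency 400ms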
Actually, considering the hardware, in my experience enterprise desktop PCs are not bad at all. They are maintainable, have spare parts available everywhere, can be extended with additional network cards, and support multiple hard drives/SSDs, so you can build a RAID 1 for the root filesystem; they can also restore their last power state if the power goes down (and the UPS runs empty). Two of those boxes in a cluster will most likely provide the availability you seek and let you run updates and reboots into new kernels without noticeable downtime (a few seconds' hiccup; TCP connections will survive if conntrackd runs properly on both boxes).
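If you rely on conntrackd for that, the conntrack-tools also give you a quick way to check that state replication is actually happening (the exact counters depend on your conntrackd.conf, so treat this as a sketch):

    # Show conntrackd's internal statistics, including replication counters.
    conntrackd -s

    # List the kernel's connection-tracking table on either box to compare.
    conntrack -L | head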
For the hardware, more cores with a higher clock are better. Disable hyper-threading; it reduces latency variance. Even 4 cores of something like a current Core i5 should be enough for almost 10-gigabit real-world performance. ECC memory is nice if the server supports it, and more memory channels will give you better bandwidth. Software RAID (md-raid in Linux) can prevent trouble on disk failure. Some servers now come with a microSD card for the basic system; this works out fine if you have a cluster or a good backup that you can copy back quickly (a 10 GB dd to a replacement microSD card should take only a few minutes plus a reboot...).
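For those last two points, a sketch with placeholder device and image names (note that the mdadm command wipes whatever is on the listed partitions):

    # Mirror two partitions into a RAID 1 array for the root filesystem.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # Restore a ~10 GB system image onto a replacement microSD card.
    dd if=/backup/router-system.img of=/dev/mmcblk0 bs=4M status=progress conv=fsync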