PPTP uses a relatively simple encapsulation mechanism and relies on the RC4 stream cipher, which is relatively cheap in terms of CPU load. L2TP has a much more complex encapsulation mechanism, with potentially up to six layers of encapsulation, and the encapsulated IPSec tunnel typically uses 3DES or (more recently) AES encryption. 3DES is relatively efficient when implemented in hardware, but in my experience software-only L2TP with 3DES has about double the overhead of simpler encapsulation protocols, although I don't have any significant experience running PPTP and L2TP in anger on the same hardware. With AES the CPU overhead should be lower, typically 20-30% below 3DES I believe, but I don't have any hard data to back that up.
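To make the encapsulation difference concrete, here is a rough per-packet header-overhead comparison. The byte counts are approximate textbook values for a common L2TP/IPsec configuration (transport-mode ESP with AES-CBC and truncated HMAC-SHA1, no NAT-T) and PPTP without optional GRE sequence/ack fields; real numbers vary with options and padding:

```python
# Back-of-envelope per-packet overhead for L2TP/IPsec vs PPTP.
# Header sizes are approximate and depend on options (NAT-T, HMAC
# truncation, ESP block padding), so treat these as illustrative only.

L2TP_IPSEC = {
    "outer IP": 20,    # outer IPv4 header
    "ESP header": 8,   # SPI + sequence number
    "ESP IV": 16,      # AES-CBC initialisation vector
    "UDP": 8,          # L2TP runs over UDP (port 1701)
    "L2TP": 8,         # L2TP data header, no optional fields
    "PPP": 2,          # PPP protocol field
    "ESP trailer": 2,  # pad length + next header (ignoring padding)
    "ESP ICV": 12,     # truncated HMAC-SHA1 integrity check value
}

PPTP = {
    "outer IP": 20,    # outer IPv4 header
    "GRE": 8,          # enhanced GRE header, no seq/ack fields
    "PPP": 2,          # PPP protocol field
}

l2tp_overhead = sum(L2TP_IPSEC.values())
pptp_overhead = sum(PPTP.values())
print(f"L2TP/IPsec overhead: ~{l2tp_overhead} bytes/packet")
print(f"PPTP overhead:       ~{pptp_overhead} bytes/packet")
```

Even ignoring the crypto cost, L2TP/IPsec carries more than double the per-packet header overhead, which hits small-packet workloads hardest.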
Way back in the depths of time (2002/2003) I had the fun task of supporting a VPN infrastructure made up of a large number of Intel/Shiva NetStructure 3120 and 3130 VPN gateways supporting 60k remote users. Much of the infrastructure was operating at or near its practical performance limits at the time. The devices themselves were (for the most part) standard x86 server hardware with a 733MHz Pentium III CPU and 512MB of RAM. The 3130 had dedicated crypto accelerator hardware (for DES/3DES) and easily handled 90-95Mbps of encrypted throughput and 10K simultaneous tunnels, but the 3120 was basically just a barebones server with no crypto acceleration and only managed about 20Mbps of throughput and 2K simultaneous tunnels. Those figures were based on a proprietary Shiva/Intel protocol called SST, which had the useful feature of only requiring a single UDP port; the same hardware was able to handle about 75% of that throughput with IPSec, and marginally less with L2TP, which was in the process of being ratified at the time. In practice the 3120 gateways still easily handled 1000 concurrent tunnels and 10Mbps or so of throughput with L2TP.
My point is that a software-only implementation of L2TP running on a single-core 733MHz Intel Coppermine CPU, with an architecture that supported no more than 1GByte/sec of memory bandwidth, was comfortably able to handle 10Mbps of encrypted throughput across a very large number of concurrent sessions. A modern multi-core, multi-socket server will have 20-50x the CPU power per socket and 20x or more the memory bandwidth, so I'd expect such a system to easily support 1Gbps of L2TP throughput with a software-only solution. With any crypto acceleration hardware at all, a modern system should be able to deliver line-speed L2TP on multiple Gigabit interfaces without any problem.
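The scaling argument above is just arithmetic, so it can be sketched directly. The multipliers are the estimates from the paragraph, not measurements:

```python
# Rough scaling estimate: a 733MHz Pentium III sustained ~10Mbps of
# software-only L2TP, so scale that by the estimated modern per-socket
# CPU gain. These multipliers are ballpark guesses, not benchmark data.

baseline_mbps = 10          # observed L2TP throughput on the 733MHz box
cpu_multiplier = (20, 50)   # estimated per-socket CPU power gain

low = baseline_mbps * cpu_multiplier[0]
high = baseline_mbps * cpu_multiplier[1]
print(f"Estimated software-only L2TP throughput: {low}-{high} Mbps per socket")
```

That puts a single modern socket somewhere in the 200-500Mbps range, so two sockets, or any crypto offload, should comfortably clear 1Gbps.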
No idea about PPTP, but in the case of IPSec your performance will vary depending on the choice of encryption cipher. If you are using a *nix box, run:
$ openssl speed
It will benchmark your system against the encryption ciphers it supports. To test just the ciphers relevant here (algorithm names vary slightly between OpenSSL versions):
$ openssl speed des-ede3 aes-128-cbc
In a nutshell, L2TP is used with IPSec, and PPTP is not. L2TP is more secure, PPTP is easier to set up.