From a purely physical standpoint, there's practically no limit to the number of interfaces you can have. You can get chassis with 12+ PCI slots, and PCI expansion chassis can push that to over 100.
From a bandwidth standpoint, a 64-bit, 66 MHz PCI bus has a peak bandwidth of 528 MB/s, which works out to about 42 saturated 100 Mbps links or four saturated 1 Gbps links. Most links won't be hammered non-stop, however, so in practice many more cards can be used.
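For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch of those numbers (theoretical peak only; a real PCI bus is shared and delivers less):

```python
# Theoretical peak of a 64-bit, 66 MHz PCI bus, and how many saturated
# Ethernet links it could feed. Real-world numbers will be lower.
bus_bits, bus_mhz = 64, 66
bus_mbit_per_s = bus_bits * bus_mhz        # 4224 Mbit/s
bus_mbyte_per_s = bus_mbit_per_s / 8       # 528 MB/s

print(f"{bus_mbyte_per_s:.0f} MB/s")                  # 528 MB/s
print(f"{bus_mbit_per_s // 100} x 100 Mbps links")    # 42
print(f"{bus_mbit_per_s // 1000} x 1 Gbps links")     # 4
```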
From an OS standpoint, an OS that uses an unsigned byte to index NICs is limited to 255 or 256 of them. Linux identifies interfaces by name (eth0, eth1, and so on) rather than by a fixed-width index, so there is no such hard limit. In short, 12 ports is nothing.
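To illustrate that naming point, here is a minimal sketch (assuming a Linux host and Python 3.3 or later) that enumerates interfaces exactly the way the kernel exposes them, by name:

```python
import socket

# The kernel hands back (index, name) pairs; names are just strings
# (eth0, enp5s0f1, eth0.100, ...), so nothing caps the count at 255.
for index, name in socket.if_nameindex():
    print(f"{index:4d}  {name}")
```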
Actual ports and the ability to serve those ports are two very different things.
If you simply wanted a silly number of ports, you could use multiple multi-port USB cards and hubs and have literally hundreds or thousands of USB-based Ethernet adapters, each with its own 10/100 port and MAC address, but their performance would be appalling (and I doubt their drivers support such numbers anyway).
In terms of 'proper' NICs, there's no reason why you couldn't have ten or more multi-port GigE cards, or even use HP's Virtual Connect Flex-10 adapters, which will happily provide up to 24 x 1 Gbps links over a few 10 Gbps trunks.
What matters, though, is matching the functional need to the available bandwidth. Personally I prefer to use fewer 10 Gbps NICs and then create VLAN-tagged virtual NICs from within the OS, but that's not for everyone.
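As a rough sketch of that VLAN approach (assuming a Linux host with iproute2, root privileges, a parent NIC named eth0, and example VLAN IDs 100 and 200 -- all of those are placeholders for your own setup):

```python
import subprocess

PARENT = "eth0"        # assumed parent NIC name; adjust for your system
VLAN_IDS = [100, 200]  # example VLAN IDs

def run(cmd):
    """Run a command, echoing it for visibility, and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for vid in VLAN_IDS:
    vlan_if = f"{PARENT}.{vid}"
    # Create a VLAN-tagged virtual NIC on top of the parent interface.
    run(["ip", "link", "add", "link", PARENT, "name", vlan_if,
         "type", "vlan", "id", str(vid)])
    # Bring the new interface up; addressing is left to your usual tooling.
    run(["ip", "link", "set", vlan_if, "up"])
```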
When using multi-port NICs or multiple NICs, performance scaling is far from linear. First, both the absolute performance and how it scales depend heavily on the NIC type and vendor. Second, CPU and memory-bus overhead grows faster than linearly as you add NICs, especially if you try to drive each one at its maximum possible throughput.
If you are looking for performance, look at 10 Gb Ethernet solutions, which have better CPU offloading capabilities at high speeds.
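If you want to see that scaling (or the lack of it) for yourself, here is a rough sketch that samples per-interface throughput while you apply load; it assumes the third-party psutil package, and the interface names are simply whatever your system reports:

```python
import time
import psutil  # third-party: pip install psutil

def snapshot():
    """Per-NIC (bytes received, bytes sent) counters."""
    return {nic: (c.bytes_recv, c.bytes_sent)
            for nic, c in psutil.net_io_counters(pernic=True).items()}

INTERVAL = 5  # seconds between samples

before = snapshot()
time.sleep(INTERVAL)
after = snapshot()

# Compare the aggregate against what each NIC does alone to see how
# (non-)linearly throughput scales as you add interfaces.
for nic in sorted(after):
    rx0, tx0 = before.get(nic, (0, 0))
    rx1, tx1 = after[nic]
    rx_mbps = (rx1 - rx0) * 8 / INTERVAL / 1e6
    tx_mbps = (tx1 - tx0) * 8 / INTERVAL / 1e6
    print(f"{nic:12s}  rx {rx_mbps:8.1f} Mbps   tx {tx_mbps:8.1f} Mbps")
```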
A friend of mine told me the most he could plug in was four cards with four interfaces each, in PCI slots (I don't remember whether they were PCIe).