I've read some explanations of nova-network and how to configure it, like this one from the wiki:
I'm confused about one detail. If all traffic from the instances must go through the nova controller node, then why do we still need a public interface on the nova-compute node? Is it necessary?
What happens when a request comes from outside to an instance? For example, I have a controller node and a nova-compute node. On the nova-compute node I run an instance hosting a WordPress website. Then someone connects to the public IP of this instance. Does the request go directly from the router to the nova-compute node, or from the router to the controller node and then to the nova-compute node?
In the case of nova-network running on the controller node, instances use that node as their gateway to the outside world. However, the public interface is configured on the compute nodes, and floating (public) IPs are configured directly on the compute nodes. So instance-bound public traffic goes to a public interface on the compute node rather than to the controller.
A better networking model to use is FlatDHCP + multi_host. See this blog post, which describes it in detail and explains why it's a better choice.
To summarize here: nova-network runs on every compute host. The physical flat_interface (specified in config, usually eth1) gets a bridge built on top of it, and that bridge is assigned an IP address on the private instance network. Using this bridge, nova-network serves DHCP, DNS and a gateway to all instances hosted on that compute node. The physical flat_interface (eth1) should be linked to a switch connecting the flat_interfaces of the other compute nodes. This enables inter-instance traffic on the internal, private network (e.g. 10.0.0.0/24).
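For reference, a rough sketch of the relevant nova.conf options for that layout might look like the following. Exact option names and file format vary somewhat between nova releases, and the interface names and subnets are just the examples used above:

    # /etc/nova/nova.conf (excerpt) -- legacy nova-network FlatDHCP + multi_host
    network_manager=nova.network.manager.FlatDHCPManager
    multi_host=True                  # run nova-network on every compute node
    flat_interface=eth1              # physical NIC carrying the private instance network
    flat_network_bridge=br100        # bridge nova-network builds on top of flat_interface
    public_interface=eth0            # NIC on the public network (192.168.25.0/24)
    fixed_range=10.0.0.0/24          # private instance network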
Each compute node also has a public_interface specified in config (eth0 by default). This is expected to be connected to the public-facing network, say 192.168.25.0/24. As a user, you can allocate an address and associate it with a running instance. The addresses are allocated from a pool that you configure when you initially set up nova's networking. Using the EC2 API, e.g.:
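With euca2ools it looks roughly like this (the pool range, instance ID and address below are illustrative, and nova-manage flag syntax varies by release):

    # create the floating IP pool once (admin)
    nova-manage floating create --ip_range=192.168.25.224/27

    # allocate an address from the pool and attach it to an instance
    euca-allocate-address                          # returns e.g. 192.168.25.241
    euca-associate-address -i i-00000001 192.168.25.241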
On the compute node, 192.168.25.241 is added to the public_interface as a secondary virtual IP address, and the compute node's iptables rules are configured to forward (DNAT) traffic destined for this address to the appropriate instance. However, the default security groups do not allow any inbound traffic to make it through. You'd need to authorize ports in the security group(s) that instance is assigned to (via euca-authorize). This translates to nova-network opening ports in the iptables rule set corresponding to that instance.
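For example, to let HTTP traffic through to that WordPress instance, something along these lines would do it (the group name and CIDR are illustrative):

    # allow inbound TCP port 80 from anywhere to instances in the 'default' group
    euca-authorize -P tcp -p 80 -s 0.0.0.0/0 default

    # nova-network then programs rules roughly of this shape on the compute node
    # (chain names and exact form vary):
    #   iptables -t nat -A nova-network-PREROUTING -d 192.168.25.241 \
    #       -j DNAT --to-destination 10.0.0.3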