Hyper-V 2012 R2 Environment Networking Recommendations
Hi,
I'm designing a clustered Hyper-V environment with 2 nodes. I'm using SAN storage (Dell EqualLogic) and Dell PE630 servers (8 NICs on each).
I'm planning to use a converged network for management, cluster, and live migration traffic. It will run on a 4×1 Gb NIC team with a virtual switch associated and individual vNICs for each of management, cluster, and migration. I'll be using bandwidth weights and VMQ weights to ensure QoS.
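For reference, this is roughly how I intend to build the converged side (adapter/team names are placeholders for my environment, and the weight values are just an example split):

```powershell
# Team the four LAN-facing NICs (adapter names are placeholders)
New-NetLbfoTeam -Name "LANTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Converged switch with weight-based QoS; host vNICs are added explicitly below
New-VMSwitch -Name "vSwitchLAN" -NetAdapterName "LANTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# One host vNIC per traffic class
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitchLAN" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitchLAN" -Name "Cluster"
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitchLAN" -Name "LiveMigration"

# Bandwidth weights (example values only)
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
```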
For storage traffic I'm planning to create two 2×1 Gb NIC teams and set a virtual switch on top of each. I'll create one vNIC on each switch.
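The storage side would look something like this (again, adapter/team names are placeholders):

```powershell
# Two separate 2-NIC teams, one per iSCSI path (adapter names are placeholders)
New-NetLbfoTeam -Name "SANTeam1" -TeamMembers "NIC5","NIC6" -TeamingMode SwitchIndependent
New-NetLbfoTeam -Name "SANTeam2" -TeamMembers "NIC7","NIC8" -TeamingMode SwitchIndependent

# One virtual switch and one host vNIC per team
New-VMSwitch -Name "vSwitchSAN1" -NetAdapterName "SANTeam1" -AllowManagementOS $false
New-VMSwitch -Name "vSwitchSAN2" -NetAdapterName "SANTeam2" -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitchSAN1" -Name "iSCSI1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitchSAN2" -Name "iSCSI2"
```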
The basic diagram looks like this:
clustered EqualLogic <----> clustered switches <----> clustered Hyper-V
Hyper-V node:
(4-NIC team)      ==> vSwitch LAN  ==> vNIC1 cluster, vNIC2 migration, vNIC3 management
(2× 2-NIC teams)  ==> vSwitch1 SAN1 ==> vNIC1 iSCSI1
                  ==> vSwitch2 SAN2 ==> vNIC2 iSCSI2
These are my questions about the plan:
- Based on EqualLogic's documentation, it's better for virtual machines to have access to at least two virtual switches so MPIO can be configured. That's the main reason I'm creating two virtual switches for storage traffic. Is this necessary if I'm teaming the NICs? I'd rather team all 4 NICs and use QoS to ensure equal traffic distribution. This blog post http://www.aidanfinn.com/?p=14509 makes the point that creating an individual virtual switch per NIC (or at least not driving all traffic through the same virtual switch) ensures full utilization of the available bandwidth. I'd appreciate recommendations in this respect.
- I've looked at numerous blogs and documentation. Some use the -AllowManagementOS $true option of the New-VMSwitch command for both the LAN and SAN virtual switches; others don't enable it at all. For the SAN virtual switches I've read that it's important to enable -AllowManagementOS so the Hyper-V host can store virtual machines, or other data, on iSCSI SAN volumes. I haven't found any consistency on whether this option should be enabled or not. Can someone point me in the right direction here?
- This question may sound a little silly, but I'm not sure which network public traffic will be going through. Should I expect it to pass through the LAN virtual switch?
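To make the second question concrete, my understanding (please correct me if I'm wrong) is that -AllowManagementOS $true simply auto-creates one host vNIC named after the switch, which ends up equivalent to creating the switch with $false and adding a named host vNIC yourself:

```powershell
# Option A: host vNIC auto-created, named after the switch
New-VMSwitch -Name "vSwitchSAN1" -NetAdapterName "SANTeam1" -AllowManagementOS $true

# Option B: same end state, but with an explicitly named host vNIC
New-VMSwitch -Name "vSwitchSAN1" -NetAdapterName "SANTeam1" -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitchSAN1" -Name "iSCSI1"
```

Either way the parent partition gets a vNIC on the switch, which I assume is what lets the host itself reach the iSCSI volumes.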