HPE Nimble

It's not clear from the setup as described: "9 NICs teamed" and "different NICs per VLAN" don't make sense together. Are the clients (the iSCSI initiator hosts) in the same subnet as the Nimble? Do not route iSCSI. What accesses the Nimble by iSCSI - just the hypervisor, or also guest VMs? Usually it would be just the hypervisor, except for special-case scenarios. The most basic setup for a VM to access iSCSI would be a physical NIC on an iSCSI virtual switch, connected to a network switch port in the iSCSI VLAN, with the VM given a second NIC (not the OS NIC) on that virtual switch. That should give near 10 Gbps performance to and from cache on a Nimble.

The Nimble and iSCSI are on their own subnet/VLAN. Only the hosts have access to iSCSI - we have tried the VMs connecting directly and it didn't make any difference. Jumbo frames are enabled on every 10 GbE NIC on both hosts, on the switches and on the Nimble. We are using the Nimble Connection Manager, which manages the MPIO - the iSCSI NICs aren't teamed. We'll definitely be looking into separating the teamed NICs for iSCSI, as that seems to be a common point everyone is getting across. I'm going to digest your response properly with my colleagues - thank you very much for your input, it is really appreciated.

We were advised on our VLAN/teaming config by our supplier, who is Nimble certified. Knowing that a Nimble isn't a normal SAN, we took their advice and assumed everything was done to best practice; we wanted the best failover we could get, as we host business-critical applications that need to run 24/7, 365. We have spent a lot of time with Nimble support and they haven't said anything is wrong - which may be because they don't want to advise outside their remit. We struggled to find anyone within Nimble support with any Hyper-V experience; they're all VMware experts!

I will also add that people usually don't use Netgear switches with enterprise iSCSI storage.
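
Since jumbo frames are said to be enabled end to end here, one quick sanity check is a don't-fragment ping with a jumbo-sized payload from each host to the Nimble's iSCSI data IPs. The sketch below is a minimal Python wrapper around the built-in Windows ping; the addresses are placeholders rather than this environment's real Nimble data IPs, and the 8972-byte payload assumes a 9000-byte MTU (9000 minus 20 bytes of IP header and 8 bytes of ICMP header).

```python
# Minimal end-to-end jumbo frame check for Windows hosts: send don't-fragment
# pings sized for a 9000-byte MTU to each Nimble iSCSI data IP.
# The IPs below are placeholders, not this environment's actual addresses.
import subprocess

NIMBLE_ISCSI_IPS = ["192.168.50.10", "192.168.50.11"]  # placeholder data IPs
PAYLOAD = 8972  # 9000-byte MTU minus 20 (IP header) and 8 (ICMP header)

for ip in NIMBLE_ISCSI_IPS:
    # Windows ping: -f = don't fragment, -l = payload size, -n = ping count
    result = subprocess.run(
        ["ping", "-f", "-l", str(PAYLOAD), "-n", "2", ip],
        capture_output=True, text=True,
    )
    # On English Windows a failure reads "Packet needs to be fragmented but
    # DF set."; the exact wording varies by locale, so also check the
    # return code (0 means echo replies were received).
    ok = result.returncode == 0 and "needs to be fragmented" not in result.stdout
    status = "jumbo frames pass end to end" if ok else "jumbo frames NOT passing"
    print(f"{ip}: {status}")
```

If this fails from a host NIC while a standard-size ping succeeds, the MTU mismatch is somewhere on the path (host NIC, vSwitch, switch port or array port) rather than on the Nimble itself.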

My Hyper-V hosts have two 10G NICs, one NIC to each switch. You don't need separate NICs for anything, but if you do use dedicated NICs for iSCSI traffic, they should not be teamed. They should be using MPIO, not LACP, and you should have the Nimble Windows Integration Toolkit installed and configured for MPIO.
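
One way to confirm that MPIO is actually claiming the Nimble volumes once the toolkit is installed is to list the MPIO-managed disks with the built-in mpclaim.exe. The sketch below (run from an elevated prompt) just prints the report for eyeballing, since the output format varies by Windows version; each Nimble volume should appear once, not once per path, and be owned by a DSM - after installing the toolkit that would typically be the Nimble DSM rather than the default Microsoft DSM.

```python
# Sketch: show which disks MPIO has claimed and which DSM owns them, so you
# can check the Nimble volumes are multipathed rather than showing up as
# duplicate disks. mpclaim.exe is built into Windows Server; run elevated.
import subprocess

result = subprocess.run(["mpclaim", "-s", "-d"], capture_output=True, text=True)
print(result.stdout or result.stderr)
```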

First of all, talk with the HPE Storage support team. The folks that support HPE Nimble are super awesome: you're normally speaking to an engineer in about 20 seconds, and they will help you with the full stack - storage, hypervisor and network. BUT it should be performing better than what you describe, and it will be hard to diagnose a switch bottleneck without comparing it to a similarly deployed Cisco/Fortinet/Brocade switch.

iSCSI should never be teamed - use MPIO instead. Does the Nimble Connection Manager see 4 paths to each volume? Are jumbo frames enabled on all NICs/VLANs? Installing the NCM and Windows Toolkit AFTER all the NICs are installed is important - it does some auto-configuration on them to help. I don't think you really need a separate network for CSV; most of my recent Hyper-V cluster deployments have just iSCSI (2x MPIO) and Server (2x team with vSwitch) networks. If 10G, then I run live migration over the server network - never the iSCSI network. Unless you are doing a ton of migrations there's no need to give it a separate VLAN; they are typically done very fast. And yeah, I'd never use Netgear switches with an enterprise-grade SAN like that (great choice of SAN, BTW).

Interesting to see Fortinet switches mentioned for storage - I was wondering why the OP wasn't using Aruba. I like our FortiGates, but I'd not yet considered using their switches!

There are 2 Netgear switches in between the hosts and the Nimble for failover; each VLAN has 2 ports per host, with one connection into each switch. We are experiencing such slow speeds within the VMs that it is almost unusable - we get transfer rates in MBs, not GBs. We put this solution in last summer and it has never performed well. All of the VLANs can communicate, there are no additional hops being made, and we can't see the traffic outside of the specific networks. Does anyone have any ideas of anything we can check, please? Or is anything wrong from my description? Literally any opinions or advice would be welcome.
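
One of the replies above asks whether the Nimble Connection Manager sees four paths to each volume. A rough cross-check from the Windows initiator side is to count iSCSI sessions per target, sketched below with PowerShell's Get-IscsiSession called from Python. The expected count of four (two iSCSI NICs per host times two array data IPs) is an assumption about this particular environment rather than a Nimble rule, and NCM's own path view remains the authoritative check.

```python
# Sketch: count iSCSI sessions per target IQN on a Hyper-V host, as a rough
# proxy for "does each volume have the expected number of paths".
# EXPECTED_SESSIONS is an assumption (2 host iSCSI NICs x 2 array data IPs).
import subprocess
from collections import Counter

EXPECTED_SESSIONS = 4  # assumed for this environment; adjust to your fabric

ps = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "(Get-IscsiSession).TargetNodeAddress"],
    capture_output=True, text=True,
)
counts = Counter(line.strip() for line in ps.stdout.splitlines() if line.strip())
for iqn, n in counts.items():
    note = "OK" if n >= EXPECTED_SESSIONS else "fewer sessions than expected"
    print(f"{iqn}: {n} session(s) -> {note}")
```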

For context: we have a Hyper-V 2019 Datacentre clustered environment, 10 GbE Netgear 4300 switches and an HPE hybrid Nimble (flash cache + spinning disks). There are 2 hosts (HP DL380 Gen10) of identical spec, with 9 ports across multiple teamed NICs (all 10 GbE apart from management), all on separate VLANs (iSCSI, VM client network, Cluster Shared Volume and migration).
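
With this many 10 GbE ports and VLANs in play, it is easy for a single interface - often a Hyper-V vNIC - to be left at the default 1500-byte MTU while everything else is set for jumbo frames. The sketch below dumps the per-interface MTU on a host using the built-in netsh command and flags anything below 9000; that threshold is the usual jumbo value and is an assumption to adjust to whatever the switches and Nimble are actually configured for.

```python
# Sketch: list the MTU of every interface on a Hyper-V host via the built-in
# "netsh interface ipv4 show subinterfaces" command and flag non-jumbo ones.
import subprocess

JUMBO_MTU = 9000  # assumed jumbo value; match your switch/array configuration

out = subprocess.run(
    ["netsh", "interface", "ipv4", "show", "subinterfaces"],
    capture_output=True, text=True,
).stdout

for line in out.splitlines():
    parts = line.split()
    # Data rows start with the numeric MTU; headers and separators do not.
    if parts and parts[0].isdigit():
        mtu, name = int(parts[0]), " ".join(parts[4:])
        flag = "" if mtu >= JUMBO_MTU else "  <-- not jumbo"
        print(f"{mtu:>6}  {name}{flag}")
```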