Month: September 2012

HP Exchange technology notes


I popped along to the HP Exchange technology conference in London and went to a talk on the future of networks. Here are some notes, along with some of the questions posed, from the Networking Future presentation with Mike Witkowski (ISS tech exchange).

  • Where will networking be going over the next 4-10 years?
  • What is the future?
  • What will the next gen DC require?
  • What technology will be used?
  • What will be the speeds and feeds?
  • Where do overlay networks fit in to this?
  • How will software-defined networking play into this?

The pressure points and demands on networking: applications will start to use more memcache-like technologies, so how can the network deal with this? Hadoop and Gluster storage systems will become more common, providing parallel storage instead of serialized (SAS) technologies: groups of nodes serving up data, with data mirroring functionality built in. Web 2.0 apps will be on more and more mobile devices, so how will applications handle the demand for more intensive streaming?

RDMA: how will RDMA technology affect the stress on networks? Windows Server 2008 already ships with RDMA. RDMA is direct memory-based access for storage blocks, i.e. addressable storage in memory. In today's environments, 10Gb will also bottleneck on current storage devices, be it SAS or SATA. SSD and memcache will improve storage speed and access, and there will be more persistent memory technologies coming out: fewer disks and more memory.
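
To make "memcache-like" concrete, here is a minimal cache-aside sketch. It assumes a memcached daemon on 127.0.0.1:11211 and the python-memcached client, and fetch_from_disk is a hypothetical stand-in for the slow storage path. The point for the network is that reads are served from RAM on another host, so what used to be local disk I/O becomes east-west network traffic:

```python
# Minimal cache-aside sketch (assumes memcached on 127.0.0.1:11211 and the
# python-memcached client; fetch_from_disk is a hypothetical placeholder).
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def fetch_from_disk(key):
    # Placeholder for a read from SAS/SATA-backed storage.
    return "value-for-%s" % key

def get(key):
    value = mc.get(key)                # served from RAM, over the network
    if value is None:                  # cache miss: fall back to disk ...
        value = fetch_from_disk(key)
        mc.set(key, value, time=300)   # ... and populate the cache
    return value

print(get("user:42"))
```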

There are also pressures on current L2 technologies. More and more people are extending layer 2 networks globally, and more and more people are talking about VLAN limitations. vMotion is currently only available across layer 2.
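
The VLAN limitation being talked about is the 12-bit VLAN ID field in the 802.1Q tag, which is where the usual "4k VLANs" ceiling comes from:

```python
# 802.1Q VLAN IDs are 12 bits wide; IDs 0 and 4095 are reserved.
vlan_id_bits = 12
usable_vlans = 2 ** vlan_id_bits - 2
print(usable_vlans)  # 4094
```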

So which of the technologies we currently use can be considered capable of dealing with these challenges?

A fabric could deal with:

  • converged networks
  • a pool of storage
  • a pool of fabric
  • a pool of servers

These all typically run on converged networks. A fabric is a network component with a high number of connections and high cross-sectional bandwidth, making it perfect for iSCSI and layer 2 networks. It is also used with InfiniBand and RDMA-based networks.

Another issue and admin task of current concern is the OSR, the oversubscription ratio.
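
As a worked example (the port counts and link speeds below are made-up figures for illustration, not numbers from the talk), the OSR of a top-of-rack switch is simply its downlink capacity divided by its uplink capacity:

```python
# Hypothetical top-of-rack switch: 48 x 10Gb server-facing ports
# and 4 x 40Gb uplinks towards the aggregation layer.
downlink_gbps = 48 * 10
uplink_gbps = 4 * 40
osr = downlink_gbps / float(uplink_gbps)
print("OSR = %.1f:1" % osr)  # 3.0:1 oversubscribed
```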

The aim is for Virtual Connect to be treated as a single management domain and to scale linearly across a set of racks. But for this to happen, the price of optical fiber needs to come down; the companies who make the optics need to make the solution more efficient.

40Gb is dreaming, but it is required to reduce the demands on infrastructure, so do not be short-sighted. In fact, faster pipes will mean less compute, which means fewer servers. It all comes back to disks being the bottleneck for most networks.
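
To put rough numbers on the disk bottleneck (the per-disk throughput here is an assumed ballpark for a spinning SAS drive, not a figure from the talk):

```python
# How many spinning disks does it take to fill a 40Gb pipe?
link_gbps = 40
link_mb_per_s = link_gbps * 1000 / 8.0   # ~5000 MB/s
disk_mb_per_s = 150.0                    # assumed sequential throughput per SAS disk
print("disks to saturate the link: %.0f" % (link_mb_per_s / disk_mb_per_s))
# roughly 33 spindles just to keep one 40Gb link busy
```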

There was a big discussion comparing a hierarchical topology with a Clos topology (a rough sketch of the Clos idea follows the links below).

Hierarchical networking

http://en.wikipedia.org/wiki/Hierarchical_internetworking_model

Clos

http://en.wikipedia.org/wiki/Clos_network

Pros and cons
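
One commonly cited pro of the Clos (leaf-spine) design is that capacity scales evenly: every leaf has the same uplink capacity and the same number of equal-cost paths to every other leaf, instead of funnelling inter-rack traffic through a small core. A rough sketch with made-up leaf/spine counts and link speeds (assumptions, not figures from the talk):

```python
# Two-tier leaf-spine (folded Clos) fabric with assumed port counts.
leaves = 8          # top-of-rack switches
spines = 4          # every leaf connects to every spine
uplink_gbps = 40    # per leaf-to-spine link
server_ports = 48   # 10Gb server-facing ports per leaf

leaf_uplink_capacity = spines * uplink_gbps     # 160 Gb/s of uplink per leaf
leaf_downlink_capacity = server_ports * 10      # 480 Gb/s of downlink per leaf
print("per-leaf OSR: %.1f:1" % (leaf_downlink_capacity / float(leaf_uplink_capacity)))
print("equal-cost paths between any two leaves: %d" % spines)
```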

Good technologies out there are QFabric and Quantum, which both use SDN.

Overlay networks

More information was requested, but overlay networks deal with going beyond the 4k VLAN limit, typically using Q-in-Q trunking. Service providers will start to use this more, and companies like VMware are looking at ways to bring it into their management tooling, i.e. vCenter.
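
For comparison, a rough look at the segment ID space each approach offers (a single 802.1Q tag, stacked Q-in-Q tags, and a 24-bit overlay segment ID such as the VXLAN VNI):

```python
single_tag = 2 ** 12 - 2        # ~4k usable VLANs (IDs 0 and 4095 reserved)
q_in_q = (2 ** 12 - 2) ** 2     # outer tag x inner tag combinations, ~16.8 million
vni_24_bit = 2 ** 24            # ~16.8 million overlay segments
print(single_tag, q_in_q, vni_24_bit)
```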

VXLAN is a good example, as is STT, which is a Nicira protocol; both are now owned by VMware. Where will these technologies go in the future?
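
As for what the encapsulation costs on the wire, VXLAN's per-frame overhead is just the sum of the outer headers it adds (standard header sizes, assuming an untagged outer Ethernet frame and an IPv4 underlay):

```python
# Per-frame overhead added by VXLAN encapsulation.
outer_ethernet = 14
outer_ipv4 = 20
outer_udp = 8
vxlan_header = 8
overhead = outer_ethernet + outer_ipv4 + outer_udp + vxlan_header
print("VXLAN overhead: %d bytes per frame" % overhead)  # 50 bytes
```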
