Network Design

The Power of Abstraction

I’ve noticed a trend at almost every company I’ve consulted for: most network engineering organizations do not use abstract design; instead, they provision each element individually, in a concrete manner. This is not a cost-effective approach, for many reasons.

Prevailing Paradigm

There is a paradigm associated with designing a network from COTS products that causes network engineering production centers to disregard the conventional engineering process. Consider your automobile. The last time you went to the repair shop, did the mechanic go through the entire car to ensure all the correct parts were installed? Of course not; the VIN told them which build was used, and all cars of that build were identical except for a few items that made them unique. Even the options were identical to those of the same model with the same options. This was not done solely for the benefit of the consumer, but because it is the most cost-effective way to manufacture and maintain the vehicle throughout its lifecycle. This principle can be seen in most industries, except IT. Why is compliance software for network systems so popular and valuable? Because network devices are seldom configured according to a standard. In the cases where they are, to some extent, they are configured using templates that must be applied manually, with the variables entered manually, so they still vary. This would be like automotive engineers assembling cars by hand. It isn’t cost-effective, for a number of reasons.

I was once demonstrating a proactive change validation process for a large enterprise customer. They provided me with the configlets and the change documentation for an upcoming change. I modeled the current network, applied the proposed changes in the simulation, and found several errors that would have left the modified network unable to route traffic. They had used templates to create the configlets for the change, but they had chosen one incorrect template and populated the templates with some incorrect variables. The change was an upgrade that had been performed at many locations and was standardized to some extent. Had it been implemented as-is, the implementation engineer would have made whatever modifications were necessary to get the system operational, introducing some degree of variance in the process.
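The cheapest place to catch that class of error is before the change window, with an automated check against a source of truth. Here is a minimal sketch of the idea in Python; the inventory record, field names, and template names are invented for illustration and are not the customer’s actual system:

    # Illustrative pre-change check: compare the template and variables used to
    # build a configlet against the site's record in a source-of-truth inventory.
    INVENTORY = {
        "rtr-dallas-01": {
            "template": "wan-upgrade-v2",
            "loopback": "10.255.1.1",
            "wan_vlan": "210",
        },
    }

    def validate_change(hostname, template_name, variables):
        """Return a list of discrepancies between the change package and inventory."""
        record = INVENTORY[hostname]
        errors = []
        if template_name != record["template"]:
            errors.append(f"wrong template: {template_name} (expected {record['template']})")
        for key, value in variables.items():
            if record.get(key) != value:
                errors.append(f"{key}={value} disagrees with inventory ({record.get(key)})")
        return errors

    # Catches both failure modes from the engagement described above:
    print(validate_change("rtr-dallas-01", "wan-upgrade-v1",
                          {"loopback": "10.255.1.1", "wan_vlan": "201"}))
    # ['wrong template: wan-upgrade-v1 (expected wan-upgrade-v2)',
    #  'wan_vlan=201 disagrees with inventory (210)']

A check like this doesn’t replace simulation, but it would have flagged both the wrong template and the bad variables before anyone modeled a single route.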

This paradigm exists in service operations as well as in design and provisioning. Tier III often changes the configuration of a device to resolve an incident. That would be like an auto repair shop changing your automobile’s design to fix a problem: the car isn’t running correctly, so they install spark plugs that differ from the manufacturer’s specification. A repair shop wouldn’t do that, so why is it commonplace in IT? This is essentially ad-hoc system redesign. If the configuration of a device needs to be changed to resolve a problem, then the system was designed wrong. There are only a few exceptions to this. For example, if a router at a remote site has a hot-spare interface, it is often configured and disabled. In the event of a failure, the spare interface is enabled, and possibly readdressed, to take the place of the failed one. This isn’t really a redesign; it is an operational procedure used in a failure scenario, as sketched below.
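To make the distinction concrete, here is a minimal sketch of that failover procedure expressed as a deterministic, pre-approved script rather than an ad-hoc fix. The interface names, addressing, and IOS-style syntax are hypothetical; in practice the commands would be pushed by the operations runbook or an automation tool:

    # Spare-interface failover: shut the failed port, then bring up the
    # pre-staged spare with the failed port's address. Nothing about the
    # design changes; the procedure only executes a designed-in contingency.
    def failover_commands(failed, spare, ip, mask):
        """Generate the IOS-style commands for the spare-interface procedure."""
        return [
            f"interface {failed}",
            " shutdown",
            f"interface {spare}",
            f" ip address {ip} {mask}",
            " no shutdown",
        ]

    print("\n".join(failover_commands(
        failed="GigabitEthernet0/1",
        spare="GigabitEthernet0/2",
        ip="192.0.2.1",
        mask="255.255.255.252",
    )))

Because every input comes from the design, two engineers running this procedure at two different sites produce the same result, which is exactly the property that ad-hoc fixes destroy.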

This problem has a snowball effect. Because there are so many variations in the network design, there is no feasible way to test design improvements. If there were standardization, each variation of the standard systems and sub-systems could be tested in the lab/QA environment. But because there is really no standard, and too many exceptions to any that do exist, the only way to adequately test anything would be to replicate the entire network in QA. As a result, the rate of unsuccessful changes and of unexpected change impacts is extremely high. Management attempts to mitigate this through more rigorous change management, which cannot solve the problem and only adds delay and effort to the change process. In the end, the organization pays in productivity lost to system downtime and in unnecessary labor spent managing changes and resolving the incidents those changes cause.

ITSM Framework

The ITIL Service Design process treats a service, such as a network service, much the way the automotive industry manages its products in the examples above. When the network is treated as a service that must be subject to the same rigorous engineering process, the result is improved efficiency and a high degree of predictability, which reduces service disruptions caused by unexpected problems encountered during changes. This requires a great deal more engineering effort during the design and release processes, but the ROI is improved availability and reduced effort during implementation. Implementing the release package becomes a turn-key operation that should be performed by the operations or provisioning team rather than by engineering. This paradigm shift often takes an organization some time to grasp and to function efficiently within, but it improves performance and efficiency and paves the way toward automated provisioning.

To accomplish this, the design must be abstracted in a manner that expresses the level of detail necessary to drive both physical assembly and logical provisioning: naming, addressing, routing configuration, policy, management configuration, VLAN assignment, and so on. This is most certainly possible, because all of these things follow a system of logic; they are not arbitrarily assigned.
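As a minimal sketch of what “following a system of logic” can look like (the naming convention, address plan, and VLAN scheme here are invented purely for illustration), a device’s entire identity can be derived from a few abstract inputs:

    import ipaddress

    # Hypothetical design rules: every value below is derived, never hand-assigned.
    SITE_SUPERNET = ipaddress.ip_network("10.0.0.0/8")
    MGMT_VLAN_BASE = 100

    def derive_device(region, site_id, role, unit):
        """Derive name, loopback address, and management VLAN from the abstract design."""
        site_net = list(SITE_SUPERNET.subnets(new_prefix=16))[site_id]   # one /16 per site
        loopbacks = list(site_net.subnets(new_prefix=24))[0]             # first /24 holds loopbacks
        return {
            "hostname": f"{role}-{region}{site_id:03d}-{unit:02d}",
            "loopback": str(list(loopbacks.hosts())[unit - 1]),
            "mgmt_vlan": MGMT_VLAN_BASE + unit,
        }

    print(derive_device("dal", 42, "rtr", 1))
    # {'hostname': 'rtr-dal042-01', 'loopback': '10.42.0.1', 'mgmt_vlan': 101}

Because the function is the design, two routers of the same role at two different sites differ only in their inputs, exactly as two cars of the same build differ only by VIN.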

An example of this can be seen in Windows system deployment and management. In the ’90s, if you wanted to install a Windows server, you inserted a disk into the server and stepped through an installation process. If you were really on your game, you could create an installer answer file that handled most of the questions the install utility would ask. Any custom configuration had to be done manually, one machine at a time. The advent of system images and Group Policy provided a means to abstract the system design so that an enterprise can provision new systems identically and manage them very efficiently.

Conclusion

While there is no out-of-the-box product that provides a mechanism to abstract the network design in the manner that Windows uses images and GPOs, it is certainly not out of reach. The mechanisms to design networks using abstract constructs can be developed or integrated, and in large environments they are worth the effort.

The larger problem is changing the paradigm. I worked on a project where we developed an Operational Support System (OSS) that provided automated provisioning. The customer entered a service order into a CRM system, which drove the downstream provisioning system to push out all of the configuration changes needed to provision the service on the network devices. The system took us seven years to develop, but it took just as long to change the organizational mindset to see network design in terms of abstract constructs. A simplified sketch of that order-to-provisioning flow follows.
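This is a deliberately tiny sketch of the flow just described; the order fields, service catalog, and generated commands are hypothetical stand-ins for what was a much larger system:

    # Order-driven provisioning: a CRM service order is mapped to a standard
    # service definition and rendered into the config changes for the device.
    SERVICE_CATALOG = {
        "internet-100M": {"bandwidth": "100m", "qos_profile": "business"},
    }

    def provision(order):
        """Turn a CRM service order into config lines for the customer-facing port."""
        svc = SERVICE_CATALOG[order["service"]]
        return [
            f"interface {order['port']}",
            f" description cust:{order['customer_id']} svc:{order['service']}",
            f" service-policy output {svc['qos_profile']}-{svc['bandwidth']}",
            " no shutdown",
        ]

    order = {"customer_id": "C1001", "service": "internet-100M", "port": "Gi0/0/3"}
    print("\n".join(provision(order)))

The essential property is that the order carries no design decisions at all; everything the device receives is derived from the standardized service definition.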

2013 Networking Trends

Comments on 2013 Networking Trend Predictions

Reference post below by Shamus McGillicuddy

Data Center Networks

It still boggles my mind that there is such a fascination with large bridged networks rather than relying on the proven ability of IP to manage path selection. Spanning Tree simply doesn’t have the features to ensure optimal path selection. Maybe it’s because the data center is often designed by people with a strong background in computers rather than by network engineers. I’ve seen many cases where data center traffic went over the wrong path and caused congestion because nobody could get Spanning Tree to place it on a more optimal path. Then add the trend of running Layer 2 over the WAN with VPLS. Sure, you don’t have to deal with IP addressing and route distribution, but the tradeoff is a large, geographically dispersed broadcast domain with little control over path selection and less ability to troubleshoot and monitor it. IP routing is a solution that shouldn’t be overlooked: it was designed for exactly this, and it’s easier to spell. SDN may prove to be a great solution, but it’s too young yet.

Network Security

Excellent insight.  New technologies and methods will provide more challenges for network security. That’s job security if you can keep pace.

Campus LAN

While 802.11ac may be of interest to those looking to give laptop and mobile users high-speed access, that’s just the access tier of the LAN. SDN has more potential to change the architecture dramatically, though adequate means to measure performance and monitor security in that environment remain an open question.

Network Management

Yes, visibility into the cloud has to take a more prominent role, and that will require innovative approaches. Are the three big NMS providers able to move fast enough to address this need? I’m looking to startups for the new approaches. And what of open source products, which have come a long way? Why invest three-quarters of a million dollars in COTS and then not fund the customization and integration needed to make it do everything you need in your environment? A better approach is to use open source and invest the money saved in the people who configure and integrate the tools; the added benefit is a top-notch support team that keeps the tooling in step with network changes.

SDN

Added complexity has its costs. Measuring the performance of a dynamically changing topology, measuring the performance of the SDN system itself, and the added complexity in network security are just a few of the challenges. Software-defined networking certainly has potential, but I’m still waiting to see whether it can deliver an ROI and a performance improvement given the additional complexity. I don’t think everyone is ready to jump on this bandwagon just yet.

Original is reposted below:


Shamus McGillicuddy

What does 2013 have in store for the networking industry? We asked five top industry analysts to predict networking trends for this year. Click on the links below to find out what will happen in data center networking, network security, campus LANs, network management and software-defined networking.

Eric Hanselman, 451 Research, on Data Center Networks

Data center networks will continue to wrestle with the limitations of Spanning Tree Protocol in 2013, but enterprises that move to alternatives like network fabrics will find roadblocks to scalability. Meanwhile, enterprises will use Ethernet exchanges to build hybrid cloud environments, and cutting-edge micro-electromechanical systems (MEMS)-based photonic switches will start to make some noise in the data center. Eric Hanselman, research director at London-based 451 Research, shares his predictions for how the data center networking industry will shake out in 2013.

Greg Young, Gartner, on Network Security

In 2013, network security vendors need to develop third-party ecosystems that help enterprises correlate data among the various components of their security architecture. Also, network security pros will need to sort through the software-defined networking (SDN) hype to figure out how secure these new technologies are. Meanwhile, enterprises will accelerate their adoption of next-generation firewalls and advanced threat protection systems. We asked Greg Young, research vice president at Stamford, Conn.-based Gartner Inc., to share his views on the changes we’ll see in network security this year.

Andre Kindness, Forrester Research, on Campus LANs

Campus networking has lacked innovation for a few years, but 2013 may switch things up a bit. While wireless LAN vendors will be pushing faster 802.11ac networks this year, the industry may also see some architectural changes that could finally deliver true unified wireless and wired campus LANs. We asked Andre Kindness, senior analyst at Forrester Research, to share his views on the changes we’ll see in campus LANs this year.

Jim Frey, Enterprise Management Associates, on Network Management

Emerging virtual overlay network technology will force network management vendors to develop tools to monitor these new environments in 2013. Meanwhile, enterprises will demand better visibility into their public cloud resources and virtual desktop infrastructure deployments. Enterprise Management Associates Research Director Jim Frey shares these and other predictions for how the network management market will evolve this year.

Brad Casemore, IDC, on Software-Defined Networking

What’s in store for software-defined networking? IDC analyst Brad Casemore predicts adoption will grow among service providers and cloud providers; vendors will battle each other in Layer 4-7 network services and SDN controllers; and OpenFlow may evolve, but very slowly.  In the longer term, IDC projects that the SDN market will reach $3.7 billion by 2016. Here’s more of what Casemore had to say about the SDN market in 2013.