GSA / modernization

Report to the President on IT Modernization
https://itmodernization.cio.gov

Comments on IPv6, Network Infrastructure Modernization supporting Cloud and Shared Services, FEA2 #20

Open Jllee9753 opened 6 years ago

Jllee9753 commented 6 years ago

JLL response to the POTUS ITM v2.docx
JLL response to the POTUS ITM v4.docx

Version 4 is the latest update. Please ignore version 2.

konklone commented 6 years ago

[Inlining a best-effort version of the attached comment below. Download the original attachment in the issue above to see the original comment.]


Response to the Request for Comments on the Report to the President on Federal IT Modernization (Draft)

Author: Mr. John L. Lee, CTO, Internet Associates, LLC

These comments reflect the personal opinion of Mr. Lee and are not the official position of any other group or organization, including Internet Associates, LLC.

Introduction

Recommendations in this response are numerically tied to the six questions in the Request for Comment, the sixth being the catch-all. The terminology used for substantive comments, in descending order of weight, is "Strong Recommendation", "Recommendation", "Suggestion" and "Comment". A short bio for John Lee appears at the end of the submission to give background for these comments.

Strong Recommendations:

  1. Complete the adoption of IPv6 (Internet Protocol version 6) by all federal agencies. IPv6 provides a common language across all network and attached IT infrastructure[^1], including relevant software/firmware, while reducing the cybersecurity attack vectors at all layers of the network.

Key questions: 1, 3 and 4

Rationale: IPv4-based Network Address Translation (NAT) and Carrier-Grade NAT are typically exploited by criminals involved in human trafficking and child exploitation, as well as by terrorists, to mask their identities and obstruct law-enforcement investigations. Wireless carriers and other countries have already adopted IPv6 because of their lack of available IPv4 address space. All modern operating systems ship with IPv6 enabled, and Microsoft OS configurations are not tested with IPv6 services disabled.
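As a concrete, minimal illustration of the dual-stack posture described above, the Python sketch below checks whether a host publishes AAAA records and accepts an IPv6 connection; the host name is a placeholder, not a specific federal service.

```python
# Minimal sketch: check whether a service publishes IPv6 (AAAA) records
# and accepts an IPv6 connection. "example.gov" is a placeholder host.
import socket

def ipv6_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if `host` resolves to an IPv6 address we can connect to."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record published
    for family, socktype, proto, _name, addr in infos:
        try:
            with socket.socket(family, socktype, proto) as sock:
                sock.settimeout(timeout)
                sock.connect(addr)
                return True
        except OSError:
            continue  # try the next advertised address
    return False

if __name__ == "__main__":
    print(ipv6_reachable("example.gov"))
```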

  2. Modernize building and building-access infrastructure by transitioning from aging narrowband, copper-based TDM/PDH services to high-speed, consolidated, fiber-optic-based, elastic, shared communications links. These old technologies do not adequately support existing requirements, let alone the much greater performance and elasticity demands of Cloud and shared-services interconnections. In addition, Commercial Service Providers[^2] are phasing these old technologies and their associated services out of their inventories.

Key questions: 1, 3 and 4

Rationale: A number and variety of Agency data communications circuits/services are based on "narrowband copper" TDM/PDH carriers at bandwidths below 64 kbps, at 64 kbps, at 1.544 Mbps, and occasionally at 45 Mbps[^3]. A large percentage of these circuits are discrete, point-to-point, fixed-function services acquired off standard contract vehicles by multiple programs within one agency, without economies of scale. For a number of federal buildings, copper network access is the only option, even though the inside-building infrastructure runs 10 and 100 Gbps fiber. It is suggested that Cloud interconnect circuits start at 10 Gbps today and move shortly to multiple 100 Gbps lambdas (a lambda being one color of laser light on a fiber or fiber pair).
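To make the bandwidth gap concrete, a back-of-the-envelope calculation follows, assuming ideal link utilization and ignoring protocol overhead; the 1 TB payload is an arbitrary illustration.

```python
# Back-of-the-envelope: time to move 1 TB over the circuit rates discussed
# above, assuming ideal utilization and no protocol overhead.
RATES_BPS = {
    "64 kbps channel": 64e3,
    "T1 (1.544 Mbps)": 1.544e6,
    "T3 (45 Mbps)": 45e6,
    "10 Gbps cloud interconnect": 10e9,
    "100 Gbps lambda": 100e9,
}

PAYLOAD_BITS = 1e12 * 8  # 1 TB expressed in bits

for name, rate in RATES_BPS.items():
    seconds = PAYLOAD_BITS / rate
    print(f"{name:28s} {seconds / 3600:12.2f} hours")
```

At T1 rates the transfer takes roughly two months; at 10 Gbps it takes minutes, which is why copper access circuits cannot serve as cloud interconnects.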

  3. In concert with Service Providers, architect and deploy an integrated, end-to-end, high-performance, elastic, consolidated and shared local, metro and wide area network. This network is the required foundation supporting Federal users' equal access to Cloud and Shared Services no matter their location, and it will replace the current legacy, aging infrastructure.

a. Implementing end-to-end, high-performance, multi-layer networks requires that all components of the network be specifically tuned to provide this capability, including:

i. Government leased or owned structured building, facility and campus fiber based distribution infrastructure

ii. Building, facility and campus communications access fiber facilities

iii. Interconnecting various Federal facilities with Computing, Cloud or Shared Service facilities using one of:

  1. Leased circuits from Service Providers

  2. Service Provider leased fiber facilities or multi-pairs in conduits

  3. Government leased or owned (IRU) fiber pairs or conduits with contractor managed equipment and services

iv. Computing, Cloud or Shared Services provider facilities including fiber based building communications access and infrastructure

b. Leading US communications vendors have developed the technology and equipment, now in production, used to implement these multi-layer networks, including:

i. Advanced fiber-optic cable, interconnect assemblies and active equipment, deployed over spans of up to 5,000 km with no signal repeaters.

ii. PIC[^4] (Photonic Integrated Circuit)[^5] based switching, routing and transmission gear including VDWDM (Very Dense Wave Division Multiplexing) transmission equipment

iii. A GMPLS (Generalized Multi-Protocol Lambda/Label Switching) based control plane with User-to-Network and Network-to-Network protocol interfaces, allowing Government on-demand turn-up and teardown of almost unlimited communications capacity.

iv. Optical G.709 digital framing, which consolidates multiple high-performance data circuits and SONET-framed narrowband TDM/PDH services over a single fiber pair, allowing an orderly transition away from legacy services.

v. Software-Defined Networking (SDN) software, which allows integration of Government ordering portals with multiple Service Provider portals, enabling an on-demand marketplace for bandwidth capacity analogous to on-demand Cloud services portals.

Key questions: 1, 3 and 4

Rationale: The Draft covers a number of relevant areas in Cloud, Shared Services and cybersecurity, but overlooks the current incompatibilities in the chain of communications links/services interconnecting the Agency user with the provider's Shared and Cloud services. My purpose in providing so much detail to this group is to illustrate that elastic network technology and products are available and being deployed in Service Provider networks today. Multi-layer elastic networks support switched optical circuits with the addition and subtraction of 100 Gbps increments of bandwidth, which is well suited to moving application data into and out of the Cloud for normal hybrid-cloud operations, as well as to satisfying COOP (Continuity of Operations Plan) requirements during abnormal conditions.

An analogy for PIC technology is how the application of VLSI technology (integrated circuits) to computers vastly increased both computing and storage performance, with a radical reduction in the size, power and cost of computing, leading to today's Cloud-based services implemented on virtualized assets over high-performance compute platforms. Current PIC technology, while just beginning down the analogous cost/performance curves for fiber equipment, has produced single chips that contain five to ten complete DWDM optical channels of 100 Gbps each, including transmit and receive functions as well as optical and electronic add, drop and multiplex capability. For example, a current PIC-based product from a US vendor occupies one Rack Unit (RU) of space and has 12 x 100 Gbps user-side interfaces and one network-side interface at 1.2 Tbps[^6]. A legacy narrowband T1 multiplexor/channel bank also occupies 1 RU of space but supports only 1.5 Mbps, versus 1.2 Tbps for the PIC-based box. By stacking these 1.2 Tbps boxes and interconnecting them to the same fiber pair, the total link capacity is 27.6 Tbps today, with current research taking overall link speed to 1 Pbps (1,000 Tbps).
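The capacity figures quoted in this rationale can be checked with a short worked calculation; the sketch below simply reproduces the arithmetic from the numbers above.

```python
# Worked numbers from the rationale above: one PIC-based 1 RU box exposes
# 12 x 100 Gbps client ports (a 1.2 Tbps line side); stacking such boxes
# on one fiber pair yields the quoted 27.6 Tbps aggregate.
CLIENT_PORTS = 12
PORT_GBPS = 100

box_tbps = CLIENT_PORTS * PORT_GBPS / 1000     # 1.2 Tbps per 1 RU box
boxes_per_pair = 27.6 / box_tbps               # boxes needed to fill the pair
t1_ratio = (box_tbps * 1e12) / 1.544e6         # vs. a 1 RU T1 channel bank

print(f"{box_tbps} Tbps per box; {boxes_per_pair:.0f} boxes fill a 27.6 Tbps pair")
print(f"one box carries about {t1_ratio:,.0f} times a 1 RU T1 mux")
```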

  4. Identify the advanced fiber-optic, multi-layer, switched and routed elastic network infrastructure described herein as a High Value Asset (HVA) and harden it accordingly by:

a. Increasing the number of geographically dispersed, hardened data/telecommunications entrances for buildings and facilities: two for larger buildings, and three to four, with independent local-loop providers, for buildings containing major data or communications centers.

b. Historically, Service Provider network infrastructure has been designed to withstand uncoordinated natural events such as backhoe fade, accidents with telephone poles, etc. Current threat vectors have expanded to include coordinated events that can diminish a single supplier's capabilities. This leads to the need to actively interconnect facilities, by fiber and the other methods discussed in this submission, so that the Government can seamlessly move data flows between Service Providers in real time with minimal loss of data.

Key questions: 1, 3 and 4

Rationale: The network has become the critical system supporting the mission; when it goes down or its effectiveness is degraded, mission delivery suffers.

Recommendations:

  1. Use "Cloud in a Box" in existing tiered Federal data center footprints, interconnected by fiber-based, multi-layer "elastic" network technology, to provide base computing and storage capability in a hybrid-cloud environment during normal operations and to support real-time COOP requirements. All elements of this recommendation should be consistent with the M-16-19 Data Center Optimization Initiative (DCOI).

Key questions: 1, 3 and 4

Rationale: Cloud in a Box (pod) technology can be leveraged by Federal Agencies to provide energy-efficient, Cloud-integrated computing platforms for continuous use, and in conjunction with external Cloud Service Providers for overflow or inflow depending on normal or abnormal conditions. If, for example, a hurricane affects a provider's facility, certain workloads may be transferred to Federal systems for a period of time.

Cloud and virtualization technology and business models have greatly improved the efficient and effective use of computing technologies in a number of vertical markets, including the Federal Government. Given the variety and number of mission requirements, computing and networking resources are needed 24x365, with periodic and sometimes large overflow demands, which only reinforces the case for Cloud and elastic network deployments for Federal Agencies. Commercial Cloud Service Providers continue to grow their data center footprints and geographic locations to better serve their customers, though some centers are positioned in rural areas with tenuous high-speed communications links.

For customers with large average compute needs, "Cloud in a Box" technology has been developed jointly by Cloud Service Providers (Amazon and Microsoft, to name two) and hardware providers such as Dell and HP. The result is a "pod": receive it, drop it in, plug it in, and you have a local Cloud that seamlessly interconnects with the associated Cloud Service Provider. (Note: use of these vendor names is for illustrative purposes only; no endorsement is intended or implied.) A pod is a rigidized, rectangular equipment chassis with card cages for processor, storage and auxiliary cards, power supplies, and common and high-speed communications cards and interfaces. While pods are built to be air-shipped, only the military versions are built to be air-dropped. They are built with HVAC[^7], though some of the more compute-intensive versions include water connections.
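As a rough illustration of the overflow/inflow pattern described above, the sketch below splits demand between a local pod and an external cloud; the capacity figure and the `place_workload` helper are hypothetical, not any vendor's interface.

```python
# Hypothetical sketch of the overflow/inflow pattern described above: run
# steady-state load on the local pod, burst excess to the external cloud,
# and pull work back in-house (COOP) when the provider region is impaired.
# Capacity numbers and names are illustrative only, not any vendor's API.
LOCAL_POD_CAPACITY = 80  # workload units the on-premises pod can serve

def place_workload(demand: int, provider_available: bool) -> dict:
    """Split `demand` (workload units) between the local pod and the cloud."""
    local = min(demand, LOCAL_POD_CAPACITY)
    overflow = demand - local
    if provider_available:
        return {"local": local, "cloud": overflow, "deferred": 0}
    # COOP case: provider region down; the pod absorbs what it can and the
    # remainder is deferred (queued, shed, or shifted to another region).
    return {"local": local, "cloud": 0, "deferred": overflow}

print(place_workload(100, provider_available=True))   # normal overflow burst
print(place_workload(100, provider_available=False))  # e.g. hurricane at provider
```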

Comments:

  1. I am unsure about the heading "Shared Services to Enable Future Network Architectures," and about the lack of any reference to the Federal Enterprise Architecture Version 2 (FEA2) in the Draft. Current references to the FEA2 are provided at the end of this submission. It is also my understanding that Dr. Scott Bernard is still the Federal Enterprise Architect at OMB. Under FEA2, Network and Applications are components, and Security is one of the cross-functional areas. In my opinion, application shared services are provided as one of the services based on the network's architecture and its deployment.

References:

Transition Planning for Internet Protocol Version 6 (IPv6)

https://georgewbush-whitehouse.archives.gov/omb/memoranda/fy2005/m05-22.pdf

September 28, 2010 OMB Transition to IPv6 Memo

https://s3.amazonaws.com/sitesusa/wp-content/uploads/sites/1151/downloads/2012/09/Transition-to-IPv6.pdf

Planning Guide/Roadmap Toward IPv6 Adoption within the U.S. Government

https://s3.amazonaws.com/sitesusa/wp-content/uploads/sites/1151/downloads/2012/09/2012_IPv6_Roadmap_FINAL_20120712.pdf

GSA’s Connections II SoW for Agency adoption of IPv6

https://www.gsa.gov/cdnstatic/IPv6_Migration_Connections_II_SOW_Template.docx

FEA documents

https://obamawhitehouse.archives.gov/omb/e-gov/FEA

Including:

The Common Approach to Federal Enterprise Architecture

https://obamawhitehouse.archives.gov/sites/default/files/omb/assets/egov_docs/common_approach_to_federal_ea.pdf

Federal Enterprise Architecture Framework, Version 2.0

https://obamawhitehouse.archives.gov/sites/default/files/omb/assets/egov_docs/fea_v2.pdf

Additional Material

Because of the intricate nature of this networking architecture and technology, and the critical role it plays in current and future networking, some additional material is provided to give a brief overview of the fundamental capabilities it brings to the table. We start with the fiber-optic components and the systems they are used in, including their control by real-time management and network control systems, i.e., SDN (Software-Defined Networking), which provide the dynamic, real-time, elastic functionality supporting current and future mission requirements.

Fiber Optic System Technologies

Photonic Integrated Circuits (PICs) – Several US-based companies, including Ciena, Cisco and Infinera, have developed equipment that exploits PICs. (Note: use of these companies' names is for illustration purposes only; no endorsement is expressed or implied.) This equipment has been deployed in a number of service providers' networks and customer sites. One vendor's production equipment, which takes up only 1 RU of rack space, supports 12 x 100 Gbps customer-facing interfaces and one 1.2 Tbps network interface (1 terabit per second is one million times one million bits per second). The current total capacity on a single fiber pair using several of these boxes is 27.6 Tbps, and work is underway to increase the per-link speed to 1 petabit per second (1,000 Tbps).

Dense Wave Division Multiplexing (DWDM) takes multiple lambdas, or laser colors, and transports them, typically on a pair of fiber strands. Today each color can carry 100 Gbps or less, and several colors can be packed together and treated optically as a Super Channel. Coherent optical Super Channels usually allow 10 or more 100 Gbps colors to be switched and routed as one channel. Colors (at 100 Gbps each) can be added to and removed from Super Channels, giving a large range of transport capacity and allowing bandwidth to be added instantly for shared or Cloud services. I am using the acronym VDWDM to indicate when multiple Super Channels are carried on one fiber pair. These systems have the equivalent of express lanes, where laser colors pass through the mux until they reach the on- or off-ramp they are dynamically provisioned for. Current DWDM and VDWDM systems have integral ROADM capability.
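A small model of the add/remove behavior described above follows, treating a Super Channel as a set of 100 Gbps colors; the class is illustrative only, not a vendor or standards interface.

```python
# Illustrative model of an elastic Super Channel: capacity grows and shrinks
# in 100 Gbps "color" increments, as described above. Not a vendor API.
class SuperChannel:
    COLOR_GBPS = 100

    def __init__(self, colors: int = 10):
        self.colors = colors

    def add_colors(self, n: int) -> None:
        self.colors += n

    def drop_colors(self, n: int) -> None:
        self.colors = max(0, self.colors - n)

    @property
    def capacity_gbps(self) -> int:
        return self.colors * self.COLOR_GBPS

sc = SuperChannel()          # 10 colors -> 1,000 Gbps baseline
sc.add_colors(2)             # burst for a bulk cloud transfer
print(sc.capacity_gbps)      # 1200
sc.drop_colors(2)            # return to baseline when the transfer ends
print(sc.capacity_gbps)      # 1000
```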

Optical Switches and Reconfigurable Optical Add/Drop Multiplexers (ROADMs)

In optical transport systems, 100 Gbps colors can either be switched all-optically or converted to electrical signals, switched or groomed in the electrical domain, and then converted back to light. Before the invention of PICs this conversion was very costly in chip count, board real estate and propagation delay, but it is now a minor set of on-chip elements. This has reduced the size of ROADMs, which are logically half an optical switch, so they are routinely added to IP/MPLS (Internet Protocol / Multi-Protocol Label Switching) routers and other types of switches, as well as to control-plane elements.

On-demand and real time networking Technologies

The optical hardware is necessary for elastic networks but not sufficient on its own: it lacks the real-time, on-demand orchestration capability required for mission success.

This combination of optical equipment, layer 1, 2 and 3 switches, and IP/MPLS/GMPLS routers enables both service providers and customers to deploy a multi-layer orchestrated network that provides a multitude of services that used to require separate networks.

A suggested architecture is a multi-layer network based on optical hardware, utilizing GMPLS (Generalized Multi-Protocol Lambda/Label Switching) as a message-based control protocol supporting NNI and UNI (Network-to-Network and User-to-Network) interfaces to the Agency and to different Service Providers. This control protocol is part of the SDN protocol stack used to provide the on-demand, elastic bandwidth necessary to support changing Agency and mission requirements. OTN (Optical Transport Network) utilizes a type of encoding (G.709) based on a digital frame that supports multiple services on an integrated, shared optical carrier conveyed over VDWDM transmission systems. Some of these industry-standard services include Carrier Ethernet, alongside consolidated links composed of optical channels, SONET/SDH, voice, low- and high-speed data and video, as well as legacy TDM/PDH circuits.

Because there are different flavors of implementation of these systems, it is suggested that NIST provide a more uniform set of APIs and protocol configurations, as it did for the Cloud environment.
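To suggest what such a uniform API might look like, a hypothetical bandwidth-on-demand request is sketched below; the portal URL, endpoint and payload fields are assumptions for illustration, not an existing NIST, carrier or vendor interface.

```python
# Hypothetical bandwidth-on-demand order against a uniform SDN/UNI API of
# the kind suggested above. The portal URL, endpoint and payload fields are
# invented for illustration; no standard or existing interface is implied.
import json
import urllib.request

ORDER = {
    "service": "elastic-wave",
    "a_end": "AGENCY-HQ-DC",         # placeholder site identifiers
    "z_end": "CLOUD-REGION-EAST",
    "bandwidth_gbps": 100,           # turn up one 100 Gbps color...
    "duration_hours": 4,             # ...then tear it down on schedule
}

req = urllib.request.Request(
    "https://sdn-portal.example.gov/v1/circuits",  # placeholder portal URL
    data=json.dumps(ORDER).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(json.dumps(ORDER, indent=2))  # show the order we would submit
# with urllib.request.urlopen(req) as resp:   # enable against a real portal
#     print(json.load(resp))
```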

[^1]: Including optical, electronic switching, routing, transmission, LAN and WAN, DNS, DHCP, Network and Security Management and monitoring as well as computing and storage resources.

[^2]: Including traditional and alternative carriers, information services, Cloud providers with integral network access, and others.

[^3]: GSA and Agencies did a complete inventory of circuits for the Enterprise Infrastructure Services (EIS) contract that was just awarded.

[^4]: These terms are further defined in the Additional Material section of this submission.

[^5]: PICs are composed of VLSI circuit components integrated on-chip with miniaturized optical components, producing very well-behaved and functional optical subsystems; they are produced by at least half a dozen US vendors using two to three different technologies.

[^6]: A terabit per second is 1 million x 1 million bits per second; a megabit per second is 1 million bits per second.

[^7]: HVAC stands for Heating, Ventilation and Air Conditioning.