Friday, 23 September 2016

Dense Wavelength Division Multiplexing and Optical Network Fundamentals: Practical Recommendations for Deployment

This article is aimed at readers who have not done optical networking or optical engineering, which is why we put it together from today's conference session.
  23 September 2016, 15.00 Hrs G.M.T.


Let's get started with a question: how many of us reading this article have deployed or engineered an optical link, or designed an optical network? Or let's re-phrase it like this: how many of us have designed or engineered an optical transmission link (SDH/WDM), or a multi-node optical transport network? OK! As we said, this article is designed for readers with little or no prior knowledge of optical networking.
We hear you say "hmmm". OK, not to worry. Now, speaking from the provider side: how many of us are considering leasing a wavelength, or multiple wavelengths, in the next 6-12 months to provide connectivity? One, two... OK, thank you!
Now, with this article's focus on dark fibre (sometimes called grey fibre in some regions): how many of you are considering leasing dark fibre or wavelength services in the coming months? OK, excellent! So, assuming our audience comes from an IP networking background, we will level-set for those of you who have not dealt with optical networking. Most of the time in data networking (routers, MPLS switches, layer 2 and above), what you see is a grey cloud: you have the customer edge device (CE), the core (P), and the provider edge at the core demarcation (PE).
Typically this can span countries, so you don't necessarily have to care about the underlying physical infrastructure. All we care about is the logical connectivity between the routers and switches. One often needs to know whether the routers have 10G, 40G or 100G interfaces and how many of those interfaces are available, although depending on the application you obviously sometimes do care. This is what we see in textbooks from Juniper or Cisco.
For optical networking, however, the actual physical fibre map is an essential element of optical link or network engineering. In any optical network design, we always need to know how the underlying topology is connected, the fibre distances and the fibre characteristics. Whether you are planning an optical network or individual optical links, whether the topology is ring, mesh, point-to-point or point-to-multipoint is always an important consideration.
At the optical transmission layer, one needs to know the underlying fibre topology, fibre details and characteristics exactly, so that the optical layer and equipment can be dimensioned accordingly.
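As a minimal sketch of what "knowing the underlying topology" means in practice, the fibre plant can be modelled as a weighted graph and the shortest fibre path used to estimate span loss. The site names, distances and per-km loss below are invented for illustration, not real plant data:

```python
import heapq

# Hypothetical fibre topology: span distances in km between sites
FIBRE_MAP = {
    "Lagos":  {"Ibadan": 130, "Benin": 320},
    "Ibadan": {"Lagos": 130, "Abuja": 530},
    "Benin":  {"Lagos": 320, "Abuja": 450},
    "Abuja":  {"Ibadan": 530, "Benin": 450},
}

LOSS_DB_PER_KM = 0.25  # assumed average fibre loss

def shortest_path_km(graph, src, dst):
    """Dijkstra over span distances; returns total fibre km."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, km in graph[node].items():
            nd = d + km
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

km = shortest_path_km(FIBRE_MAP, "Lagos", "Abuja")
print(km, "km,", km * LOSS_DB_PER_KM, "dB estimated fibre loss")
```

Real planning tools do far more (dispersion, amplifier placement, protection paths), but they all start from exactly this kind of fibre map.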
So let's walk through how an optical network or link can be designed.

In this article, we will look at this from the physics and physical-engineering point of view, and also from a networking point of view, so that we can see the bigger picture.
A little about our background as well: we have been fortunate to have the opportunity to design networks in this field for Tier-1 operators, whether in the US and the rest of North America, Asia-Pacific, and now Africa.
Let's start from the basics, a point-to-point link, and answer some common questions for any optical link or network plan. Since we are dealing with fibre optic networks, the condition, length and characteristics of the fibre are very important inputs before you start designing any optical link or network. First, the length: obviously you need to know the distance. Is it going to be rack to rack within a data centre? Data centre to data centre, e.g. T1 to T2 or E1 to E2 (i.e. the fibre distance or fibre topology map for the design)? Or city to city?
Secondly, how many fibre strands do you have? In the extreme case, a vendor or service provider will only lease or sell fibre one strand at a time; not a fibre pair, but a single strand, and they want you to do high-capacity transmission over that single fibre strand. A low-cost provider will do that where the business case justifies it. You may also get multiple fibre strands, as in parts of North America where providers are fibre-rich. With multiple fibre pairs you might not need WDM at all, because each fibre pair can carry 10G or 100G of capacity, and that might be enough where dark fibre is cheap or abundant between certain locations, countries or cities.
So, after you understand the connectivity from this point to that point: how many fibres can you get, lease, rent or borrow?
Thirdly, the type of fibre strand. If you are talking about metro, regional or long-haul spans of hundreds or thousands of kilometres, in all these cases it is likely to be single-mode fibre (SMF). This is the fibre type used for long-distance transmission, whereas multi-mode fibre is more for cross-shelf, rack-to-rack links. There are also multiple single-mode fibre types: the common standards are G.652 (e.g. SMF-28), G.653 (DSF) and G.655 (NZDSF). In Japan there are still links using G.653 fibre, which are quite challenging to design; there are transmission techniques that work over DSF fibre, depending on the distance and the condition of the fibre.
For example, Corning's SMF-28 is commonly used in today's networks.
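As a rough reference for why the fibre type matters, the common ITU-T single-mode types can be summarised in a small table. The dispersion figures are typical textbook values at 1550 nm, not guarantees for any specific cable; always check the manufacturer's datasheet and field measurements:

```python
# Typical (approximate) properties of common ITU-T single-mode fibre types.
# Dispersion values are at 1550 nm; real cables vary.
FIBRE_TYPES = {
    "G.652": {"name": "Standard SMF (e.g. SMF-28)",
              "dispersion_ps_nm_km": 17.0},
    "G.653": {"name": "Dispersion-Shifted Fibre (DSF)",
              "dispersion_ps_nm_km": 0.0},
    "G.655": {"name": "Non-Zero Dispersion-Shifted Fibre (NZDSF)",
              "dispersion_ps_nm_km": 4.0},
}

def accumulated_dispersion(fibre_type, length_km):
    """Total chromatic dispersion (ps/nm) accumulated over a span."""
    return FIBRE_TYPES[fibre_type]["dispersion_ps_nm_km"] * length_km

print(accumulated_dispersion("G.652", 80))  # 80 km span of standard SMF
```

The near-zero dispersion of G.653 at 1550 nm is precisely what makes DWDM over it challenging: with no dispersion to separate channels, nonlinear effects such as four-wave mixing become much stronger.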
Fourthly, the condition of the fibre strand: do the strands have many joints and connections, or excess loss due to macrobend or microbend effects? This affects the overall loss between point A and point B. Why do we care about high-loss fibre? Because loss drives the number of amplifiers in the link: more amplifiers may be required if the fibre loss is very high, so you will have to buy more equipment to transmit from point A to point B. Also consider the age of the fibre, and whether it is underground or aerial fibre. These are all very important and pose varying challenges, because the design for a fibre plant in Japan is different from one in India or in Chicago, Illinois. Also germane is the number of splices or connections on the fibre.
Lastly, your expectation of the bandwidth you will transport over this fibre link, meaning the End-of-Life (EOL) transmission capacity. This answers the question: how big should "the pipe" be? Do you need 10 Gb/s, 40 Gb/s, 100 Gb/s, or multiple 100 Gb/s per wavelength? How many wavelengths, or "highway lanes", do you need? End of Life refers to the life cycle of the network: what is your expectation? Depending on the capacity you need, vendors like NGT will design the network accordingly, picking different building blocks to put the design together for you. You can optimise for first cost, which is one approach, or optimise for flexibility, which is another.
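That dimensioning step can be sketched trivially; the capacities below are examples, not a vendor formula:

```python
import math

def wavelengths_needed(eol_capacity_gbps, rate_per_wavelength_gbps):
    """How many 'highway lanes' are needed to reach the EOL capacity."""
    return math.ceil(eol_capacity_gbps / rate_per_wavelength_gbps)

# e.g. 1 Tb/s End-of-Life capacity carried on 100G wavelengths
print(wavelengths_needed(1000, 100))  # 10 lanes

# The same EOL target if the platform can upgrade to 200G per wavelength
print(wavelengths_needed(1000, 200))  # 5 lanes
```

This is exactly why the "can this platform upgrade to 200G per wavelength?" question matters: halving the lane count halves the channel equipment needed at end of life.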
For our part, we deal with the whole spectrum of customers, down to the single fibre strand. One customer may only need 10G; another may want 96 channels of 100G, no questions asked: they know how to use the capacity, they know they will grow into it, and in fact they ask up front whether the platform can upgrade to 200G per wavelength as well. So we see a whole spectrum of requirements, and this is very important.
Today, as at the writing of this article, we can clearly state that terabit link capacity is not uncommon for connecting data centres in metro networks, e.g. in Tokyo, Japan, or Hong Kong. Customers directly ask for, say, 500G or 600G between two points. In other words, it is very common, especially for data centre interconnection (DCI).
One of the key things about fibre is attenuation, or loss, measured in decibels. As light is launched from point A towards point B, the power coming from the laser reduces due to scattering and absorption over the distance travelled. Depending on the condition and length of the fibre, typical fibre loss is 0.20 dB/km to 0.35 dB/km, although in some regions fibre loss may be as high as 0.5 dB/km, whether from ageing dark fibre or from all of the connectors. So, when the loss is averaged over the length, 0.5 dB/km to 0.6 dB/km can be observed.
From the link perspective, the basic link budget inputs are fibre loss, splice loss, connector loss and a safety margin. Their sum should be well below the power budget, i.e. the difference between the power launched at point A and the minimum power the receiver at point B can work with. The margin matters because most optical networks have to last for 5, 10 or 15 years.
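A minimal link-budget sketch for a point-to-point span; the numeric values are illustrative assumptions, not equipment specifications:

```python
def link_budget_ok(length_km, power_budget_db,
                   fibre_loss_db_per_km=0.25,  # assumed average fibre loss
                   splices=10, splice_loss_db=0.1,
                   connectors=2, connector_loss_db=0.5,
                   margin_db=3.0):
    """Return (total_loss_db, ok): total span loss and whether it fits
    within the power budget without an in-line amplifier."""
    total = (length_km * fibre_loss_db_per_km
             + splices * splice_loss_db
             + connectors * connector_loss_db
             + margin_db)
    return total, total <= power_budget_db

loss, ok = link_budget_ok(length_km=80, power_budget_db=28.0)
print(f"{loss:.1f} dB total loss; within budget: {ok}")
```

If `ok` comes out false, the span either needs amplification or a shorter route, which is exactly the "more equipment if the loss is too high" trade-off described above.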
On top of that, there is another concept to know: the transmission window, e.g. the 850 nm, 1310 nm and 1550 nm wavelength bands (in nanometres, nm). Each transmission window has a different loss profile because of the fibre's properties.
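The windows can be compared with ballpark attenuation figures. These are typical textbook values for standard silica fibre, not measurements of any particular plant:

```python
# Typical attenuation of standard silica fibre per transmission window.
# Ballpark textbook figures; always measure the actual plant.
WINDOWS_DB_PER_KM = {
    850:  3.0,   # first window, mostly short-reach multimode
    1310: 0.35,  # second window (O-band), zero dispersion for G.652
    1550: 0.22,  # third window (C-band), lowest loss; used for DWDM
}

def window_span_loss(wavelength_nm, length_km):
    """Fibre-only loss (dB) for a span at the given window."""
    return WINDOWS_DB_PER_KM[wavelength_nm] * length_km

# The same 40 km span costs far less loss at 1550 nm than at 1310 nm
print(window_span_loss(1310, 40), window_span_loss(1550, 40))
```

This loss difference is why long-haul DWDM systems live in the 1550 nm C-band, where erbium-doped amplifiers also happen to work.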


NGT is a Nigeria-based company. We offer services and optical products, and we have customers in Asia-Pacific, where we deliver software-defined networking infrastructure solutions that enable global service and content providers to scale their networks and their businesses in data centre interconnection. We also have customers in the service provider space that deploy WDM solutions.
Some of our audience may not have had the chance to meet NGT Systems at conferences and workshops. If you have specific questions about NGT optical products or our networking products, feel free to stop by our online chat with one of our representatives, or drop us a mail at corporatesales@ngittech.com.ng, and we will tell you more about the company.

Monday, 22 August 2016

Spectrum fees exorbitant, Airtel tells NCC

Everest Amaefule, Abuja 
A leading mobile telecommunications operator, Airtel, has asked the Nigerian Communications Commission to review its spectrum pricing template in line with the prevailing economic situation in the country.
Airtel made the call in response to the NCC's invitation to stakeholders to submit comments and observations on its licensing proposal for the 38GHz and 42GHz spectrum bands, as well as the re-planning of the 23GHz spectrum band.

http://punchng.com/spectrum-fees-exorbitant-airtel-tells-ncc/

Thursday, 4 August 2016

Bell and Nokia announce a “successful Canadian trial” of 5G mobile technology

Ryan Patrick - July 29, 2016


Canadian mobile users should expect a speed increase with today's news that Bell Canada has been working with Nokia Corp. to successfully demo 5G network technology. Just don't expect it anytime soon, according to the communications company. Conducted at Bell's Wireless Innovation Centre in Mississauga, Ont., the "pre-commercial 5G system" trial used spectrum in the 73 GHz range to attain sustained broadband data speeds more than six times faster than current 4G mobile speeds available in Canada, Bell claimed.

Sunday, 31 July 2016

SDN FOR OPTICAL TRANSPORT NETWORK SERVICES


By Kenny Ade: 30th July 2016 @ 24.00Hrs
 

There has been so much discussion and so much marketing around Software-Defined Networking (SDN).
It is important in this article to enumerate the problems that we are trying to solve.
SDN is really interesting for four major issues in the network. The first is efficiency. Right now, when carriers operate separate transport and packet networks, they tend to experience stranded capacity between the two layers, which results in excess cost (i.e. stranded transport and packet bandwidth builds excess cost into carrier networks).
The second is cost, through better multi-layer planning and provisioning. The objective is that some of the higher-cost equipment, specifically layer 3 routers as well as O/E/O conversion, can be reduced by sending traffic to ROADMs or express-routing it past the routers. The third is flexibility, which is the most important benefit of SDN: taking the network assets and making them more responsive, not just to your own internal operations but ideally to direct customer control. The fourth problem SDN can potentially solve is disaggregation. This allows a service provider to reduce dependence on a single vendor within a domain and, taken to its ultimate conclusion, to disaggregate the network right down to specific components.
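The multi-layer cost decision above can be sketched as a toy grooming rule; the wavelength size, threshold and demands below are invented for illustration, not a real carrier's policy:

```python
# Toy multi-layer grooming decision: demands large enough to fill most of
# a wavelength are express-routed on an optical bypass (ROADM to ROADM),
# avoiding per-hop layer 3 router ports; small demands stay on the routers.
WAVELENGTH_GBPS = 100
BYPASS_THRESHOLD = 0.6  # assumed fill factor that justifies a bypass

def place_demand(demand_gbps):
    """Return 'optical-bypass' or 'router-hop' for a single demand."""
    if demand_gbps >= BYPASS_THRESHOLD * WAVELENGTH_GBPS:
        return "optical-bypass"
    return "router-hop"

demands = {"A-B": 80, "A-C": 10, "B-C": 65}
placement = {pair: place_demand(gbps) for pair, gbps in demands.items()}
print(placement)
```

An SDN controller with visibility into both layers can apply this kind of rule automatically as demands change, which is the multi-layer efficiency argument in a nutshell.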

The key takeaway here is that SDN is really about automation, not central planning. It is about using automation tools to extract more value from the network.
One of the catalysts triggering service providers to look at deploying SDN is the transition of packet capability moving down into the transport layer.
As we go from now until 2017, there is a significant increase in the amount of traffic that service providers expect to handle using packet-optical equipment, and a decline in the use of dedicated layer 3 routers, so optical equipment will play a larger role in packet switching and aggregation. Routers are not going away, and packet-optical remains secondary to the routing layer, but this shift in deployment is really catalysing SDN in the transport layer.
Optimally, a management approach that balances the use of one layer versus the other is needed.
Bottom line: SDN is really just an enhancement of traditional routing models.
At the end of last year, Infinera released a report on the carrier SDN market, estimating direct spending on software, including NFV orchestration and controllers. This year's estimate is pretty small, around 200 million dollars, with growth forecast to roughly a US$1.2 billion market by 2018.
Also worth mentioning is how service providers are changing their operations in order to use SDN.
With respect to flexibility and automation, in our discussions with service providers, the rise of dynamic cloud services is driving the need to increase the speed of innovation and support faster service creation; providers need to move faster, transforming into a DevOps model of development and deployment while lowering costs.
The new DevOps model breaks down the silos between development, QA and operations to rapidly release new services and service enhancements as new features become available.
The traditional telco model is slow and rigid, with a siloed product development cycle: different teams specify and develop software, while others do software testing, integration and test before deployment, and this can take 18-24 months. The DevOps IT model combines the development and QA teams, collaborating in an agile way and quickly releasing features as they become available. This allows the carrier team to move much more quickly in terms of service creation, getting new services out of the door, much more customised to what users need.
There are basically two drivers for network renovation with SDN technology. The first is full-service information and communication technology (ICT) operation. What this means is that the boundary between IT and CT has blurred in recent years: we have seen the convergence of voice and data, and on top of that new applications driven by cloud computing, smart education, big data, smart home and M2M, and recently 2K/4K video.
Basically, these new applications push the need for application awareness in transport network services, providing differentiated quality of service per application.
The second driver for network renovation is the change in user behaviour among consumers of transport network services. In the past, large-bandwidth users would establish static connectivity, and these were the major consumers of the transport network. This is changing, as user applications now require much more: end-to-end agile services based on the premise of pay-per-use. In a nutshell, the transport network has to provide the faster, smarter services needed for a good user experience.
We have identified what is happening in transport networks. In this article, we focused on the Transport SDN problem statement, implementation and solutions as experienced in various service provider and carrier deployments, and discussed the recommended approach to these challenges.

In our next article, we will discuss why these changes are happening and the fundamentally changing market conditions...

Monday, 13 June 2016

Communication and Networking Technology For Low Carbon Smart Grid




Telecommunications have always played a vital role in the management of the modern grid system. Until the advent of the smart grid, this was limited to delivering connectivity for back-office systems and remote monitoring. Information flow, data management, and monitoring and control at the domestic level are now facilitated by information and communication technologies.

The demands of climate change and the 21st-century information-based society require the development of a smart grid built on advanced communication and networking technologies, with frameworks that deliver centralized, real-time monitoring, measurement and control across the entire power grid system. The concept of the smart grid is the combination of the power grid with communication technology. A smart grid is an electrical grid which includes a variety of operational and energy measures, including smart meters, smart appliances, renewable energy resources, and energy efficiency resources. Electronic power conditioning and control of the production and distribution of electricity are important aspects of the smart grid. The roll-out of smart grid technology also implies a fundamental re-engineering of the electricity services industry, although typical usage of the term focuses on the technical infrastructure.


All smart grid strategies and visions are founded upon the availability of telecommunication network capability. Energy efficiency and carbon footprint reduction, alongside leading information and communications technology solutions, radically new power systems architectures, and innovative market mechanisms to support increasing renewable energy deployment and the electrification of transportation and heating, are cutting across business and industrial sectors worldwide. It is widely agreed that the products and services of the information and communication technology (ICT) industry are significant enablers for reaching the desired sustainability.

Saturday, 21 May 2016

Arista steps outside the data center with Cloud Connect solution

The rise of virtualization has had a profound impact on the technology industry. In the networking industry, perhaps no vendor has ridden the wave of cloud more than Arista Networks. The company was founded a little over a decade ago, and today it is a publicly traded company with a market capitalization of over $4.6 billion.
 
However, almost all of Arista's revenues today come from selling products inside the data center. The company was one of the most aggressive vendors in pushing the concept of a spine/leaf architecture as a replacement for a traditional multi-tier network.
This week, Arista announced its first solution that is outside the data center. The Arista Cloud Connect solution connects public and private cloud data centers. Moving into the data center interconnect market is a logical extension for Arista and highlights just how far merchant silicon has come over the past decade.
Years ago, merchant silicon was used primarily in low-performance network devices like wiring closet or branch office switches. However, merchant has come a long way and is now widely used in both leaf and spine switches. Obviously, merchant alone hasn't caused this shift. Arista has made the most of off-the-shelf silicon by coupling it with its data center-class EOS operating system.
Cloud interconnects are ripe for change as well. Legacy data center connections remind me a lot of where the data center was five years ago. Active – standby, proprietary protocols, over-engineering, and little to no encryption is still the norm in many deployments.
Today's cloud environments need the same characteristics as what's found inside the data center, where multi-path, open standards, and multi-use platforms are used. In a sense, what Arista has done is extend its spine networking platform to extend outside the data center with the following cloud interconnect use cases.
  • Spine transit. Arista has added a long-haul, coherent DWDM 6-port 100 Gbps line card with 256-bit MACsec encryption to its 7500E spine switch. The line card supports 96 channels in the C-band with a total transmission capacity of 10 Tbps. This is the company's first layer 1 product and can connect data centers that are up to 5,000 km apart. Arista has an advantage over the optical pure plays in that it can offer a consolidated, single-box solution instead of requiring three separate boxes for switching, encryption, and DWDM. 
  • Spine interconnect using VXLAN for layer 2 fabric extension. Currently, if a cloud provider wanted to connect two data center fabrics, it would need to do so by manipulating MPLS, which can be a complicated process, or utilizing some kind of proprietary, vendor-specific protocol that can have limitations down the road when another product is introduced. The use of VXLAN introduces a simple, standards-based way of creating a layer 2 interconnection between clouds.
  • Spine peering. This is Arista's solution to connect clouds at layer 3 that doesn't require the purchase of an expensive Internet router. The primary reason organizations had to use routers at cloud interconnection points was the requirement to hold the full Internet routing table, which can exceed 1 million routes. Routers do provide a tremendous amount of value at certain points in the network, but they are overkill for cloud peering. Arista has developed a feature called Selective Route Download (SRD), where only the routes that are required (about 60,000) are carried into the hardware table. Arista estimates that it can cover about 90% of the traffic with these routes, creating an excellent, lower-cost alternative to a dedicated router.
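The Selective Route Download idea can be sketched roughly as follows. The prefixes, traffic shares and table size are invented for illustration, and this is only a conceptual sketch, not Arista's implementation:

```python
def selective_route_download(routes, hw_table_size):
    """Keep only the busiest prefixes that fit in the hardware FIB.

    routes: list of (prefix, traffic_share) pairs, where traffic_share
    is the fraction of total traffic destined to that prefix.
    Returns the selected prefixes and the traffic fraction they cover.
    """
    ranked = sorted(routes, key=lambda r: r[1], reverse=True)
    selected = ranked[:hw_table_size]
    coverage = sum(share for _, share in selected)
    return [prefix for prefix, _ in selected], coverage

# Toy routing table: a handful of prefixes instead of 1M+ internet routes
toy_rib = [("10.0.0.0/8", 0.50), ("192.0.2.0/24", 0.30),
           ("198.51.100.0/24", 0.15), ("203.0.113.0/24", 0.05)]

fib, coverage = selective_route_download(toy_rib, hw_table_size=2)
print(fib, coverage)
```

Scaled up, this is the same argument as Arista's roughly 60,000 routes covering about 90% of traffic: a heavily skewed traffic distribution lets a small hardware table do most of the work.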
Merchant silicon has come a long way over the past several years. Arista has taken advantage of advancements in this market and combined them with its highly flexible and programmatic operating system, EOS, to create competitive differentiation inside the data center. Its new Cloud Connect solution brings the same cost benefits to cloud providers looking to interconnect data centers at layer one, two or three.

Cisco updates CCIE, CCNA certifications: What you need to know

This week, Cisco announced some changes to its CCIE Data Center and CCNA Security certifications to prepare IT pros for the evolving IT landscape.

One of the ways to measure an engineer's value is by the number of certifications that he or she holds. In networking, the gold standard has always been Cisco certifications (disclosure: Cisco is a client of ZK Research). The company has a wide range of certifications, ranging from the entry-level Cisco Certified Network Associate (CCNA) and culminating with the difficult-to-achieve but highly valued Cisco Certified Internetwork Expert (CCIE). The perception of CCIEs is so high that the term has become part of networking vernacular. When describing difficult network challenges, it's common to say that a particularly complex issue was so complicated that "even a team of CCIEs couldn't solve it."
One of the reasons the certifications have been so highly valued for so long is that Cisco has done a great job of continually evolving the programs as times change. This week, Cisco announced some major changes to its CCIE Data Center and CCNA Security certifications to bring them in line with the digital era.
The changes to the CCIE framework are to ensure that the certification is aligned with the evolution of the role of IT and an engineer's ability to produce business outcomes. The technical aspects of CCIE will continue to live on, but Cisco is trying to raise the bar on those who carry a certification that indicates being a leader in the IT industry.
This week's news is centered around changes to CCIE Data Center, but in practicality Cisco is revising the charter for all of its expert-level CCIE and Cisco Certified Design Expert (CCDE) certifications to ensure that individuals carrying these titles can have meaningful business conversations about new technical areas that are causing organizations to rethink their business strategies.
Updates to the programs include a new way of assessing individuals to ensure certified individuals have working knowledge and skill in many emerging areas, such as the Internet of Things (IoT), network programmability, cloud, and business transformation. Also, there will be unified written and lab exam topics for candidates to demonstrate holistic knowledge of exam domain.
Regarding the certifications that Cisco announced are changing, the CCIE Data Center version 2.0 requires skills focused on advanced data center solutions needed to design, implement, and troubleshoot today's modern data center. This includes skills focused on end-to-end management of the environment, policy-based infrastructure, advanced virtualization, automation, and orchestration. Cisco has also added a requirement for building skills in IoT, software defined networking (SDN), cloud, and their impact on architectures and deployment models.
The new CCNA Security builds the skills required to deploy secure infrastructure, implement security controls, enforce policies, and assist in addressing security issues. One of the big changes in the refreshed CCNA is that it now expands the focus of security from just the network to a broader, end-to-end IT security purview. Exam topics will now include new but critical technologies, such as 802.1X, ISE, BYOD, web security, FirePOWER, FireSIGHT, cloud, virtualization, and advanced malware protection.
Digitization is changing business, and this is creating the requirement for new jobs, most of which didn't exist a few years ago. It's critical that IT professionals keep up with current technologies, or they'll see their careers go the way of the mainframe administrator and voice manager. Cisco's changes to its certification program ensure that the careers of certified professionals are aligned with the direction of digitization.