Challenges on the road to C-RAN adoption
Previously published on Light Reading
The great thing about the telecom industry is that people keep inventing new acronyms. Sometimes we lose track of what an acronym actually stands for, and sometimes the same acronym means different things to different people. Take C-RAN, for example. We all agree that RAN stands for Radio Access Network. But does the C stand for Centralized or Cloud? Perhaps China is more appropriate, as that is where the concept seems to have originated.
The reality is that C-RAN stands for both centralized and cloud, as this Heavy Reading report from 2013 explains: C-RAN & LTE Advanced: The Road to "True 4G" & Beyond. The initial focus of C-RAN was on centralization, but the end game is cloudification. The centralization phase is all about moving the baseband unit (BBU) from the foot of the cell tower to a common location that serves multiple towers. This gives economies of scale in land, power and cooling, which together can account for as much as two-thirds of a wireless network's operational costs (unless you're in Iceland, where cooling is less of an issue). Having a pool of BBUs in a secure, central location also reduces truck-roll costs for maintenance.
The next phase, cloudification, is when we replace the proprietary, hardware-based BBUs with software-based BBUs (still proprietary, of course) running in virtual machines (proprietary or open source) on commercial off-the-shelf (COTS) servers, typically built around Intel's proprietary x86 processor architecture.
Not all of the functions of a BBU can be handled by COTS servers, so there will still be a requirement for some proprietary hardware. A BBU fulfills several functions: some have strict real-time constraints that require a digital signal processor (DSP), while others can be handled by software running on standard CPUs. Non-real-time layer 2 and layer 3 functions may run as virtual network functions (VNFs) in the NFV cloud. However, real-time layer 1 functions (real-time digital RF processing, alarms and error handling, error correction) are much harder to virtualize and will thus continue to run on DSPs physically located with the remote radio head (RRH).
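To make the split concrete, here is a toy sketch of how BBU functions might be tagged for placement after cloudification. The function list and names are purely illustrative, not a complete BBU specification:

```python
# Hypothetical sketch of the BBU functional split described above:
# non-real-time layer 2/3 functions move to VNFs on COTS servers,
# while real-time layer 1 functions stay on DSPs near the RRH.
# The function list below is illustrative, not an exhaustive BBU spec.

from enum import Enum

class Placement(Enum):
    COTS_VNF = "VNF on COTS server (NFV cloud)"
    DSP_AT_RRH = "DSP co-located with the RRH"

BBU_FUNCTIONS = {
    # Layer 2/3, no hard real-time constraint -> virtualize
    "packet scheduling (L2)": Placement.COTS_VNF,
    "radio resource control (L3)": Placement.COTS_VNF,
    # Layer 1, strict real-time constraints -> stays on a DSP
    "digital RF processing (L1)": Placement.DSP_AT_RRH,
    "error correction (L1)": Placement.DSP_AT_RRH,
    "alarms and error handling": Placement.DSP_AT_RRH,
}

for func, where in BBU_FUNCTIONS.items():
    print(f"{func:32s} -> {where.value}")
```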
Nonetheless, a redesigned BBU can offload much of its routine processing to COTS servers, enabling the hardware consolidation dream of NFV. In theory this yields both capex and opex savings versus the traditional approach of a dedicated BBU at each cell tower. This article from 2015 cites capex savings of 30% and opex savings of 53% at China Mobile.
Sounds like a no-brainer, right? Well, meeting the stringent latency requirements of both TD-LTE and GSM turns out to be quite a challenge when the BBU and RRH are so far apart.
Fronthaul latency challenge
The optical fiber connecting the centralized BBU to the RRHs (power amplifiers, filters and the antenna) is known as fronthaul, a play on the more established term backhaul for the connection from BBU to the core network.
The protocol for transmission between the centralized BBU and the RRHs is either Common Public Radio Interface (CPRI) or Open Base Station Architecture Initiative (OBSAI). CPRI requires one optical link per sector, per carrier band, per technology. For example, a cell site with three sectors carrying 2G, 3G and two LTE bands would require 12 CPRI links in each direction: uplink and downlink. Several optical distribution technologies are available, including dedicated fibers, passive WDM, active WDM, NG-PON2 and, soon, Ethernet fronthaul.
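To see how the links add up, here is a quick back-of-the-envelope sketch in Python. The per-sector, per-carrier counting rule and the example site come from the paragraph above; the function and the site definition are mine, not part of any CPRI specification:

```python
# Illustrative sketch: counting CPRI links for a fronthaul plan.
# One CPRI link per sector, per carrier band, per technology,
# in each direction (uplink and downlink).

def cpri_links(sectors: int, carriers_per_sector: int) -> int:
    """Number of CPRI links needed in each direction."""
    return sectors * carriers_per_sector

# Example from the article: three sectors, each carrying 2G, 3G
# and two LTE bands (four carriers per sector in total).
links_per_direction = cpri_links(sectors=3, carriers_per_sector=4)
print(links_per_direction)  # 12 links each way: uplink and downlink
```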
The trouble is, CPRI was designed for an optical link between BBU and RRH under the old, distributed architecture, where the separation was typically less than 100m. With C-RAN the distance can be up to 25km, which imposes much tighter constraints on round-trip latency and optical power attenuation. This makes choosing the right optical distribution technology critical. For example, passive optical networks induce a significant power loss (5-10 dB) but have low latency. Conversely, active WDM networks regenerate the signal at each hop, which eliminates the power loss issue but adds latency.
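To get a feel for why 25km is roughly the limit, consider the propagation delay alone: light in optical fiber covers about 1km every 5 microseconds. Here is a rough sketch; the 250-microsecond round-trip budget is my own assumption, a figure commonly derived from LTE HARQ timing constraints rather than anything in the CPRI specification:

```python
# Back-of-the-envelope fronthaul latency check. The 25km distance is
# from the article; the ~5 us/km fiber propagation delay follows from
# a refractive index of ~1.5; the 250 us round-trip budget is an
# ASSUMED HARQ-derived allowance, not a figure from the article.

FIBER_DELAY_US_PER_KM = 5.0   # ~ speed of light / 1.5
RTT_BUDGET_US = 250.0         # assumed round-trip fronthaul allowance

def fronthaul_rtt_us(distance_km: float, equipment_delay_us: float = 0.0) -> float:
    """Round-trip propagation delay plus any active-equipment delay
    (e.g. active WDM regeneration adds latency at each hop)."""
    return 2 * distance_km * FIBER_DELAY_US_PER_KM + equipment_delay_us

for km in (0.1, 10, 25):
    rtt = fronthaul_rtt_us(km)
    status = "within" if rtt <= RTT_BUDGET_US else "exceeds"
    print(f"{km:>5} km -> {rtt:6.1f} us round trip ({status} budget)")
```

At 25km the propagation delay alone consumes the entire assumed budget, which is why every microsecond added by the optical distribution equipment matters.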
Poor FTTA install quality may come back to bite
Assuming you’ve solved the trade-off between power loss and latency in your optical network design, you still face the challenge of getting it to work in the field. For many operators, C-RAN will build upon an existing fiber-to-the-antenna (FTTA) deployment program, whereby the coaxial copper cables that traditionally connected a BBU in a cabinet at the base of a tower to the RRH at the top are replaced by optical fiber. If the FTTA deployment is not done with sufficient care, the operator may find, when upgrading from FTTA to C-RAN, that the last leg of optical fiber to the RRH suffers quality issues once it is spliced onto the longer optical link back to a centralized BBU. Returning to the cell site, climbing the antenna mast and troubleshooting the root cause of degraded radio performance adds significant cost to a C-RAN deployment, undermining its ROI. As my grandmother never said, "a stitch in time saves nine."