Policy-Based Routing (PBR) is a technique used to make routing decisions based on policies set by the network administrator. For example, when a router receives a packet, it normally decides where to forward it based on the destination address, which is used to look up an entry in a routing table. For Transparent Caching, the network administrator may choose instead to forward a packet based on its TCP port number. Since TCP port 80 is the default port for the Hypertext Transfer Protocol (HTTP), it is common to set up a router policy that forwards all port 80 traffic to the Intercept Server.
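As a sketch, such a PBR policy might look like the following on a Cisco IOS router; the interface name and the Intercept Server's next-hop address (10.1.1.10) are hypothetical:

```
! Match all TCP port 80 (HTTP) traffic.
access-list 100 permit tcp any any eq www
!
! Forward matched traffic to the Intercept Server (hypothetical address).
route-map INTERCEPT permit 10
 match ip address 100
 set ip next-hop 10.1.1.10
!
! Apply the policy to the client-facing interface.
interface GigabitEthernet0/1
 ip policy route-map INTERCEPT
```

All other traffic continues to follow the normal destination-based routing table.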
Web Cache Communication Protocol (WCCP), a Cisco-developed content-routing protocol that provides a mechanism to redirect traffic flows, may be used to send content to the Intercept Server. WCCP has built-in load balancing, scaling, fault tolerance, and service-assurance mechanisms. Cisco IOS Release 12.1 and later allows the use of either Version 1 (WCCPv1) or Version 2 (WCCPv2) of the protocol.
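On Cisco IOS Release 12.1 or later, enabling WCCPv2 redirection of web traffic might look like the following sketch (the interface name is hypothetical):

```
! Enable WCCP version 2 and the standard web-cache service.
ip wccp version 2
ip wccp web-cache
!
! Redirect inbound HTTP traffic on the client-facing interface.
interface GigabitEthernet0/1
 ip wccp web-cache redirect in
```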
The relationship between content delivery, service-layer technology, and the cloud is highly symbiotic: componentized, reusable assets are what allow providers to monetize new opportunities. Verivue offers a vision of content delivery that is both componentized for service-layer integration and virtualized for compatibility with cloud infrastructure, making it a critical resource in a critical market space.
OneVantage is the first and only solution that offers both carrier CDN and transparent caching in a single unified platform. The integration of these two functions helps streamline network operations while reducing deployment cost. Now intelligent caches deployed strategically throughout the CDN network can serve a dual purpose as Transparent Caches, reducing the network infrastructure and bandwidth costs associated with over-the-top (OTT) content.
Unlike traditional CDNs, which only store content based on business agreements, a Transparent Cache automatically intercepts, ingests and serves content as it becomes popular, without the need for ongoing operator intervention. Operators can cache and deliver popular content close to subscribers, thus reducing the amount of transit traffic across their networks.
Transparent Caching works by automatically intercepting popular OTT content at the network edge. Subsequent requests for the same content can then be retrieved from the cache (close to subscribers) instead of transiting across the network. By easing demand on transit bandwidth and reducing delays, operators can deliver better user performance, especially during peak periods. Here's how it works:
A client initiates a TCP connection directly to the Object Store (the origin web server).
A switch/router on the network recognizes HTTP packets and diverts them to the OneVantage Transparent Cache instead of forwarding them to their original destination. There are several ways to accomplish the interception, including policy-based routing (PBR) and the Web Cache Communication Protocol (WCCP).
The OneVantage Transparent Cache is a lightweight element tasked with determining the "cache-ability" of the content by checking the domain name or URL against an operator-defined whitelist used to target specific websites for caching. Upon a match, the Transparent Cache issues an HTTP redirect that induces the client to request the content from the CDN. This "decoupled" approach allows operators to configure the Transparent Cache in select locations and cache widely across the network for an optimal design (see figure 1 below).
If the Transparent Cache determines instead that the content is "uncache-able" or is not included on the whitelist, the request is passed to the original destination while preserving all the fields and headers from the original client request (see figure 2 below).
The redirected request from the client is sent to the best available OneVantage HyperCache using the services of the OneVantage Request Router. The HyperCache stores copies of content passing through it so that subsequent requests can be satisfied from the cache if available (see figure 3 below). At the HyperCache, the URL is used to retrieve and deliver a previously cached copy to the requesting client. If the content is not in the cache (i.e., a cache miss), a request is made to the Object Store to fill the cache; subsequent requests for the same content can then be retrieved and delivered to the client locally.
If URL transparency is desired, a signature of the content (instead of the URL) can be used to retrieve a previously cached copy for delivery to the requesting client. Under this option, the session always originates at the origin web server to facilitate signature computation. Once computed, the signature is used to search the cache, and the origin session/stream is intercepted and replaced by cached content if found.
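The interception and cache-fill logic described above can be sketched in Python; the whitelist entries, hostname, and function names are hypothetical illustrations of the decision flow, not the actual OneVantage implementation:

```python
import hashlib

# Hypothetical operator-defined whitelist and HyperCache hostname.
WHITELIST = {"videos.example.com", "cdn.example.net"}
HYPERCACHE_HOST = "hypercache.operator.example"

CACHE = {}  # in-memory stand-in for HyperCache storage


def intercept(host, path, cacheable=True):
    """Decide what to do with an intercepted HTTP request."""
    if cacheable and host in WHITELIST:
        # Whitelisted: issue an HTTP redirect that induces the client
        # to fetch the content from the CDN instead.
        return ("redirect", "http://%s%s" % (HYPERCACHE_HOST, path))
    # Uncacheable or not whitelisted: pass the request through to the
    # original destination, preserving it untouched.
    return ("pass_through", "http://%s%s" % (host, path))


def serve_from_hypercache(url, fetch_from_origin):
    """Serve a cached copy, filling from the Object Store on a miss."""
    key = hashlib.sha256(url.encode()).hexdigest()  # URL-based cache key
    if key not in CACHE:  # cache miss: fill from the origin
        CACHE[key] = fetch_from_origin(url)
    return CACHE[key]


action, target = intercept("videos.example.com", "/movie.mp4")
print(action, target)  # redirect http://hypercache.operator.example/movie.mp4
```

For the URL-transparency option, the cache key would be computed from a signature of the content bytes rather than the URL, which is why that mode requires the session to originate at the origin server first.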
High availability of the Transparent Cache is maintained through the use of the Virtual Router Redundancy Protocol (VRRP) when PBR is in use; WCCP-based configurations utilize the protocol's inherent redundancy features. If part of the Transparent Cache cluster fails, the remaining components seamlessly subsume the workload of the failed component, providing non-stop operation. Transparent Cache clusters can therefore be deployed in places where on-site support may not be regularly available.
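As an illustration of the VRRP approach, two cache-facing gateways might share the virtual next-hop address that the PBR policy points at; all addresses and interface names below are hypothetical Cisco IOS examples:

```
! Primary gateway: owns the virtual address 10.1.1.10 while healthy.
interface GigabitEthernet0/1
 ip address 10.1.1.11 255.255.255.0
 vrrp 1 ip 10.1.1.10
 vrrp 1 priority 110
 vrrp 1 preempt
```

If this node fails, the VRRP backup (configured with a lower priority) takes over 10.1.1.10, so the PBR next-hop keeps working without reconfiguration.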
Similar to the HyperCache, the Transparent Cache features a self load-balancing mechanism that distributes workload across all nodes in the cluster. This feature helps eliminate the need for external load balancers, saving cost and complexity while reducing energy consumption and rack space.