
In the Limelight Blog


Deep Dive on Rules at the Edge

Posted by wrotch Jul 19, 2016

In June, Limelight expanded self-service configuration for customers using Website and Application Acceleration to include the ability to apply Rules at the Edge to configurations. Customers can now benefit both from the speed of self-service and the customized power of using rules to accomplish specialized actions at the network edge.

 

Background

Limelight supports a large number of standard configurations for website and app acceleration content, which allows customers to tailor the delivery of content to their specific needs. But sometimes more powerful, customer-specific logic is needed.

 

At Limelight, this is accomplished via what we refer to as Rules at the Edge. Rules at the Edge are customer-specific and often content-specific rules that are executed in real time as  content is requested by users.

 

[Screenshot] Rules can now be applied to Website and Apps Acceleration Configs in the Self Service Portal

 

How Rules Work

Rules can be triggered when a request or response meets pre-defined conditions, such as a pattern match with:

  • The URL, file name or query term
  • The IP address
  • The value of a specified HTTP header
  • A cookie
  • The geographic location of a request (using the IP address)


Customers can also control when rules are executed and have 4 distinct choices of where in the flow of requests and responses a rule is applied:

  1. Rule on Edge Request
  2. Rule on Origin Request
  3. Rule on Origin Response
  4. Rule on Client Response

[Diagram: Rules at the Edge request and response flow]

 

In summary, think of rules as a series of if-this-then-that type logic (a brief sketch follows this list) that:

  • Has as its input all of the information contained in the request header or cookies.
  • Can further look up or transform information, e.g. convert an IP address into the country, state, or even ZIP code it represents.
  • Can write changes back to the header, such as appending text to the path of the requested object, directing the request to a different origin server, or writing a value into a cookie.
  • Can do one thing or a combination of these things and more based on the business needs.
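
To make this concrete, here is a minimal, purely illustrative sketch of that if-this-then-that flow. It is not Limelight's edge scripting language (which this post does not document); the request object, field names, and geo lookup below are hypothetical stand-ins.

    from dataclasses import dataclass, field

    @dataclass
    class EdgeRequest:
        path: str
        client_ip: str
        cookies: dict = field(default_factory=dict)
        origin: str = "origin.example.com"

    def lookup_country(ip: str) -> str:
        # Stand-in for a real IP-to-geo lookup.
        return "DE" if ip.startswith("85.") else "US"

    def on_edge_request(req: EdgeRequest) -> EdgeRequest:
        # Phase 1 of 4: runs when the request first reaches the edge.
        # Condition: pattern match on the URL path plus a cookie value.
        if req.path.startswith("/videos/") and req.cookies.get("beta") == "1":
            req.path = "/beta" + req.path            # transform the requested path
        # Condition: geographic lookup on the requester's IP address.
        if lookup_country(req.client_ip) == "DE":
            req.origin = "origin-eu.example.com"     # route to a regional origin
        return req

    req = on_edge_request(EdgeRequest("/videos/intro.mp4", "85.10.0.1", cookies={"beta": "1"}))
    print(req.path, req.origin)   # /beta/videos/intro.mp4 origin-eu.example.com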


Some Examples of Using Rules

 

All of this sounds pretty theoretical, so let's review some real-world examples of how rules are put to use.   

  1. Doing GEO lookups and using the results: Through basic configuration and a feature we call IP Access Control, customers can whitelist or blacklist requests based on the geography of the requester. However, sometimes, a customer wants to use the GEO information to accomplish more than simply allowing or blocking requests and this is where rules can be helpful. For example, say you had a global logistics company that had different content to display based on the country of the requester. Rather than directing the user to some landing page and requiring them to choose a country first, rules can be used to look up the country of the requester and return content specific to that country.
  2. Working with Cross Origin Resource Sharing (CORS) headers: CORS headers are used to manage and control what content can be sourced cross-origin. Rules at the edge can inspect the origin specified in a request and, for example, check it dynamically against a list of ‘approved’ origins. If the origin is approved, the response can return an Access-Control-Allow-Origin header set to that origin; if it is not on the approved list, the request can still be allowed but with the Access-Control-Allow-Origin value pointed at a different destination (see the sketch after this list). In short, rules at the edge provide custom logic, run at the edge, to set allow or deny values in CORS headers.
  3. Manipulating cache keys to optimize content delivery: Using rules to manipulate cache keys can reduce the number of copies of content the edge needs to hold, increasing cache efficiency, reducing storage requirements, and reducing the amount of traffic back to a customer’s origin. For example, let’s say a family of e-commerce sites are all selling the same item with associated photo and video content. Rules at the edge can be used to translate a series of requests, say for mystore.com/object, thestore.com/object, and bigstore.com/object, so that they all point to the same single object regardless of which domain is requested.
  4. Setting content expiration: Sometimes rules are used to help a customer manage content expiration times. Rules can be used by the edge server to insert a Time to Live (TTL) value for the content so that it does not have to be managed by the customer or at origin.
  5. Controlling whether or not cached content should be returned: Normally, if cookies are associated with a request, you might assume the content is dynamic and needs to come from origin. In some cases, however, you want to serve the object from cache regardless of the presence of a cookie, and rules can be used to override the default behavior.
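
As an illustration of example 2, here is a hedged sketch of the kind of origin allow-list check a rule could perform. The approved list, fallback value, and header dictionaries are hypothetical; this is not Limelight's actual implementation.

    # Hypothetical CORS allow-list check (example 2 above).
    APPROVED_ORIGINS = {"https://www.example.com", "https://shop.example.com"}
    FALLBACK_ORIGIN = "https://www.example.com"   # where unapproved origins are pointed

    def set_cors_header(request_headers: dict, response_headers: dict) -> None:
        origin = request_headers.get("Origin")
        if origin in APPROVED_ORIGINS:
            # Approved: echo the requesting origin back in the CORS header.
            response_headers["Access-Control-Allow-Origin"] = origin
        elif origin is not None:
            # Not approved: allow the request, but point the header at a
            # different destination, as described above.
            response_headers["Access-Control-Allow-Origin"] = FALLBACK_ORIGIN

    response: dict = {}
    set_cors_header({"Origin": "https://shop.example.com"}, response)
    print(response)   # {'Access-Control-Allow-Origin': 'https://shop.example.com'}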

 

These are just a few of the many possible uses for rules. With the use of a lightweight and efficient scripting language deployed on edge servers, many things are possible. If you think you may benefit from Rules at the Edge, or want more information on types of rules that can be created, please contact your Limelight Account Manager or Solutions Engineer.


IBC 2016: Let's Talk

Posted by nhoch Jul 19, 2016

In September, leading media companies will gather at the annual IBC show in Amsterdam. Planning on going? If so, make sure to stop by the Limelight team in Hall 3, Stand A.23 to say hello. It's a great opportunity for Limelight customers to find out what's new from our technical experts, and to connect with our senior management team as well. Our experts will be on-site throughout the Exhibition (9-13 Sept 2016) to answer your questions about what it takes to consistently deliver your video content at broadcast quality, everywhere in the world. Whether you want to talk about OTT, customizable cloud-based workflows, multi-format delivery for any device, security, or any other video delivery topic, we'll be ready!

 

Haven't signed up yet? Email ibc@llnw.com to book an appointment in one of our private meeting rooms at the show, or just come by at Hall 3, Stand A.23.

 

We look forward to seeing you!

At Limelight, we pride ourselves on delivering excellent service, including a superior cache hit rate. High cache hit rates improve response time, throughput, and availability, and protect customer origins from high requests per second and high traffic. They also help customers avoid costly buildouts of networks, servers, and locations, because Limelight provides that protection for them.

 

Through Limelight’s diligent and relentless efforts, we have made a phenomenal leap in cache at the edge. In fact, utilized cache at the edge is now 33 petabytes, which is 50% more than the 22 petabytes from just a few months ago.

 

What is the impact? One way to measure it is to look at the data from one of our customers, a multi-national software and device manufacturer. For this customer, we charted out cache hit ratio and requests per second from March 1-June 10, 2016.

 

Here is what we see:

 

In a time of significant CDN traffic growth for us and the industry, this customer had a cache hit rate of 98.45% on May 1, and by June 1 had moved to an incredible cache hit rate of 99.83%. Even better, the cache hit rate had very low variability (standard deviation of a mere 0.65%) even though their traffic was highly variable!
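
As a rough back-of-the-envelope reading of those numbers (assuming a comparable request volume on both dates), the cache miss rate fell from 1.55% to 0.17%, so roughly one ninth as many requests had to travel back to the customer's origin.

    # Back-of-the-envelope: what the hit-rate change means for origin offload.
    hit_before, hit_after = 98.45, 99.83          # cache hit rates cited above
    miss_before, miss_after = 100 - hit_before, 100 - hit_after
    print(round(miss_before, 2), round(miss_after, 2), round(miss_before / miss_after, 1))
    # 1.55 0.17 9.1  -> about nine times fewer requests reach the origin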

 

 

On any given day, we are managing an astonishing 20 to 25 billion objects in cache at the edge. We have added 50% more total cache at the edge while significantly reducing the number of total servers as we move to a denser, greener, and more power-efficient fleet. Proof that server count is an irrelevant metric, and is only a sign of inefficiency and an aging fleet!

 

We have the best CDN service, and now there is 50% more of it—with better performance, fewer total servers, less total electricity, and less total floor space. We’ve gone from great to incredible!

Earlier this week, we shared the exciting news that Limelight joined the Google Cloud Platform CDN Interconnect program. This collaboration brings some significant benefits to anyone using Google’s Cloud Platform with Limelight’s Content Delivery Network (CDN).

 

As an interconnect partner, Limelight has a number of direct interconnect links with Google’s edge network. So, if you are using Google’s Cloud Platform with Limelight’s CDN, your traffic will bypass the public Internet. That results in dramatic performance improvements and lower costs, since your traffic is subject to Google’s discounted egress pricing.

 

Adding CDN delivery to your Google Cloud Platform services brings other benefits as well. Digital content is cached in the Limelight CDN, offloading your Google Cloud Platform origin and further reducing network egress costs. In addition, you get to use all of the value-added services a leading CDN provider like Limelight offers, including security services like SSL encryption and DDoS protection.

 

We are excited to be partnering with Google’s Cloud Platform and we will continue to work on tighter integration in the future to benefit our joint customers. Stay tuned!

 

You can read more about the Google-Limelight relationship on our website.

Organizations interested in distributing live video streams need to make decisions about how much of the process to handle themselves vs. offloading to service providers. A starting point would be to examine what other companies have done to successfully transition to publishing their own content. As a hypothetical example we will look at a company that produces and packages on-demand and live streaming videos, and distributes them via YouTube.

The popularity of YouTube as a go-to site for on-demand and live video is unquestioned. Of the broad spectrum of content hosted on this site, a particularly useful genre is the workout-at-home video. For the organizations producing content in this space, leveraging YouTube’s infrastructure is an easy way to distribute exercise videos to viewers, but at the risk of having those viewers enticed by all the other on-page content and clicking away from your video. For video producers with a large and growing audience, protecting their brand and keeping users engaged is of paramount importance. A good way to accomplish this is to distribute the videos directly to subscribers, but most companies lack the infrastructure to broadly reach subscribers everywhere.

Introduction to MMD Live

Fortunately, Limelight’s Multi-device Media Delivery Live (MMD Live) provides the infrastructure to take in their live streaming feed and make it available for their website. We are going to explore new capabilities now available with MMD Live that enable live streaming with the flawless viewing experience demanded by audiences. We will limit the discussion to formatting streams for various viewing devices and distributing them globally.

MMD Live simplifies the workflow for delivering live streaming content in the face of many challenges. Among them is supporting the variety of devices users consume video on, from smart TVs to laptops, mobile screens, and game consoles, as well as a company’s mobile app, Roku channel, and syndication partners. To address this, MMD Live supports transcoding a single live stream to multiple bitrates and formats, and transmuxing into the popular formats HLS, HDS, MSS, RTMP, and RTSP. Offloading stream transcoding and transmuxing to cloud-based services eliminates capital expenditure for on-site compute resources and management of the workflow.

Easy Delivery

Production tools have become sophisticated and inexpensive, so our fitness video company would have a straightforward path to producing their own OTT channel. Mixing live streaming with pre-produced video to create a 24/7 channel is easily within a small company’s capacity. Distribution to a geographically and device diverse audience is where MMD Live provides a cost effective service that the company cannot easily reproduce.

To simplify the process of generating the correct stream format for every user device, MMD Live lets you send a single-bitrate RTMP stream to the Limelight ingest servers, where it is transcoded and transmuxed into multiple bitrates of the playback formats HLS, HDS, MSS, RTMP, and RTSP. The converted stream is sent to edge delivery servers located across our global network, allowing viewers to access the live stream on any device within seconds, no matter where they are. If viewers are watching with the Limelight live video player, their experience will be optimized as the player autoselects the format and bitrate for the device and the current connection speed. If you use your own video player, the playback URLs provided give access to the multiple bitrate streams so your player can optimize the viewing experience in the same way the Limelight video player does.
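
As a sketch of what the publishing side might look like, the following uses ffmpeg (a common open-source encoder, not a Limelight tool) to push a single-bitrate RTMP stream to an ingest URL. The URL, source file, and encoding settings here are placeholders; the real ingest host, application, and stream name come from your MMD Live slot configuration.

    # Illustrative only: push a single-bitrate RTMP stream to a hypothetical
    # ingest URL with ffmpeg. Replace the URL with the one from your slot setup.
    import subprocess

    INGEST_URL = "rtmp://ingest.example.com/app/my-stream"   # placeholder

    cmd = [
        "ffmpeg",
        "-re", "-i", "studio_feed.mp4",       # read the source at its native rate
        "-c:v", "libx264", "-b:v", "2500k",   # encode a single H.264 bitrate
        "-c:a", "aac", "-b:a", "128k",        # AAC audio
        "-f", "flv",                          # RTMP carries FLV
        INGEST_URL,
    ]
    subprocess.run(cmd, check=True)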

More Features

In addition to worldwide delivery to multiple devices, here’s what’s new in MMD Live:

 

    • A new MMD Live Transmux slot—Sophisticated content providers can publish multiple bitrates of their live stream to MMD Live for multi-format delivery.
    • Control Self Service User Interface—The Limelight Control Portal gives you direct access to configure slots, copy them, delete them, and view slot details and reports.
    • MediaVault—Server side authentication service secures live streams from unauthorized access.
    • Integrated Live Player—The SmartEmbed live video player for websites autoselects for device, browser, and connection speed.

Business benefits

The new capabilities make MMD Live a more powerful platform for delivering live video streams to global audiences, with numerous business benefits. Stream delivery is simplified by automatic on-the-fly format conversion during live streaming, with delivery directly to users. Content can be protected from unauthorized access to enforce licensing restrictions. MMD Live’s integration with the Limelight CDN provides instant global scale to get your content to audience devices anywhere.

If you have any questions about MMD Live or the new capabilities, please contact your account manager. They are, as always, here to help you succeed.

Neustar, based in Sterling, Virginia, is a trusted, neutral provider of real-time information services and security solutions to protect against the serious threat of expanding Distributed Denial of Service (DDoS) attacks.  DDoS attacks have become more prominent and Neustar’s DDoS service is an important component to protect an organization’s mission-critical digital infrastructure.  Working together, Neustar and Limelight are creating the largest global distributed DDoS mitigation network.

 

“As DDoS attacks have become increasingly powerful and prevalent, it is critical for organizations to invest in a solution that can outpace its attackers. We selected Limelight based on their impressive global infrastructure, technology and people. The relationship demonstrates our joint commitment to securely manage, distribute and protect digital content distribution around the world.”

- Lisa Hook, chief executive officer at Neustar.

 

“As a global leader in digital content delivery, Limelight is excited to be working with Neustar to help them build out the world’s largest DDoS mitigation network. The scale and scope of the Limelight platform and the technology of Neustar will produce a security solution to easily handle the world’s largest DDoS attacks. We believe with Neustar’s industry knowledge and Limelight’s network capabilities, we will be able to uniquely address the market needs.”

- Bob Lento, chief executive officer at Limelight

 

To read more details, here are links to the Limelight press release and Neustar’s press release.

SmartPurge has been specifically architected to execute purges rapidly, at global scale, so you can be confident that your end users are always receiving the most accurate and up-to-date content.

 

Here are just a few of the benefits SmartPurge offers:

  • Near real-time purging – As soon as a request for purge is received, the purged content will stop being served to the end user.
  • Cache eviction – Any and all copies of purged content are permanently deleted from the Limelight CDN. Subsequent requests are sent back to the origin to fetch fresh content.
  • Configurable purge parameters – Submitting and executing purge commands is simple. You can easily submit patterns instead of regular expressions, making the whole process intuitive. The real benefit of patterns is the flexibility they offer in targeting content.

 

Please review the SmartPurge documentation and FAQ in the Control portal to learn more about its advanced capabilities. For a quick overview of the new purge process, take two minutes now to watch this short how-to video demo.

 

IMPORTANT!
The legacy Purge tool and its API that you have been using until now will reach end-of-life status on June 30, 2016. For more information on migrating from Purge to SmartPurge, please see the “SmartPurge Migration FAQ” in the Control Support Documentation. We strongly suggest that you start your migration from legacy Purge to SmartPurge as soon as possible.

 

[Screenshot: SmartPurge in the Control portal]

Note: If you are a customer with a Control user account, log in to control.llnw.com to see the changes outlined below. If you don't yet have an account, simply ask your Limelight Account Manager to help you set one up.

 

There are a number of differences in the user interface between SmartPurge and the older Purge tool:

  • In the What should be purged section
    • The choices for the Where to purge from option have changed
    • Single Site, Exact URL and Regex are replaced by Published URL
    • All Sites across Account is replaced by Origin URL or Pattern
    • In the Protocol selector, a new option - Both HTTP and HTTPS - is now available
    • In the Published URL field, entering a URL that does not have a matching Origin URL generates only an error message. The option to attempt a purge in this case is no longer available.
    • The Directory field has been eliminated because a directory can now be specified by appending its path to the Published URL.
    • A new option, Include query string, is now available. When this option is selected, query strings are taken into account when matching the contents of the Origin URL or Pattern field.
    • The Add to Queue button is now labeled Add to Purge Request
  • In the Review URLs to Purge section
    • The Account column has been removed because all items in a single purge request are associated with the specified Account (shortname)
    • The What should be purged column is now labeled What needs to be purged
    • The Resulting Regex column is now labeled Resulting Pattern
    • A new column, Include Query String, indicates which items have the Include query string setting applied
  • In the Purge result email notification section
    • The option to choose between a Summary View and a Detail View has been removed
  • When using the SmartPurge REST API, purge objects are specified with Patterns instead of Regex
  • SmartPurge requests are processed atomically (instead of one URL/Regex at a time)
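
For customers scripting purges, a request to the SmartPurge REST API might look roughly like the sketch below. To be clear, the endpoint path, field names, and authentication shown here are illustrative placeholders only, not the documented API; consult the SmartPurge documentation and FAQ in the Control portal for the actual request format.

    # Placeholder sketch only: the endpoint, JSON fields, and auth below are NOT
    # the documented SmartPurge API. See the SmartPurge documentation in the
    # Control portal for the real request format. Note that purge targets are
    # expressed as patterns, not regular expressions.
    import requests

    API_URL = "https://purge.example.invalid/smartpurge/requests"   # hypothetical endpoint

    payload = {
        "shortname": "mycompany",                           # account shortname
        "patterns": ["https://www.example.com/images/*"],   # pattern, not regex
    }

    resp = requests.post(API_URL, json=payload, auth=("api-user", "api-key"), timeout=30)
    resp.raise_for_status()
    print(resp.json())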

 

If you have any questions about SmartPurge or would like to arrange a demo of the new capabilities, please contact your account manager. They are, as always, here to help you succeed.

Last quarter I wrote a blog post about how at Limelight, we pride ourselves on doing an excellent job for our customers while also doing our share in being green and reducing our carbon footprint.

 

As good corporate citizens, we have a responsibility to our customers, our investors, and the environment. As part of my role at Limelight, my team oversees more than 80 data centers, which produce almost 100% of our carbon footprint. We are also responsible for ensuring that we have enough physical infrastructure capacity to fulfill customer needs, while meeting our own internal efficiency goals.

 

Our team at Limelight has been working hard to improve in the following areas:

 

  • Increasing capacity so that our customers’ delivery requirements are met. Software enhancements and innovation have contributed significantly to this increase in capacity.
  • Ensuring reliability for our customers. We’ve seen record-breaking traffic and have achieved a new record for both peak bandwidth and petabytes delivered.
  • Refreshing our technology by acquiring new servers, lowering fan speeds, and consolidating server locations, improving internal efficiency and lessening our impact on the environment.

 

We are very proud to see the positive impact of our continuing efforts.

 

The following table is a sampling that shows how we’ve reduced our carbon footprint and increased our capacity in various locations around the globe:

 

Location        Carbon footprint    Capacity
Tokyo           ↓49%                ↑50%
Frankfurt       ↓50%                ↑40%
San Jose, CA    ↓39%                ↑30%

 

Stay tuned for more updates as I will be blogging again next quarter to share more of the great progress we’ve been making and will continue to make.

Rhapsody, based in Seattle, Washington, was the first paid online music subscription service.  For 15 years they have been delivering streaming music to subscribers and for the same amount of time Limelight has been working closely with them to make this happen flawlessly. By using Limelight’s content delivery network, Rhapsody consistently delivers music to its customers in milliseconds across a wide variety of connected devices, resulting in a high quality listening experience. In addition, by utilizing both the Orchestrate Delivery and Storage services, Rhapsody is able to place its vast library of more than 35 million songs closer to the end-user, resulting in improved speed of delivery globally.

 

“The team at Limelight has genuine interest in helping Rhapsody enhance its service, and is willing to work with us on innovative solutions. We’ve had a long and positive relationship with Limelight. They are easy to work with and very responsive. We look forward to continuing our partnership and working together in the future.”

 

- Paul Vandegrift, Senior Director, Vendor Relationship Management, Rhapsody

 

Want to learn more? Check out a full case study here.

Today we went live with a new generation of our Self Service portal called Control 3.  This is the result of nearly two years (and counting) of research, design, and development focused on improving the user experience of the site.


New Control 3 Customer Dashboard

Here's a look at some of the benefits being delivered in this new portal:

  • Fresh new look and navigation — Control 3 supports an adaptable screen layout, new navigation tabs representing activities, and better search capabilities.
  • Full redesign of configuration — An improved layout and workflow makes it easier and faster to create configuration changes.
  • Full redesign of SmartPurge — Our best-in-class SmartPurge product has gotten even better with completely redesigned screens featuring easier definition of templates and clearer display of purge statistics.
  • Improved reports — Numerous improvements to existing reports make them easier to use. Later in 2016 you can expect a full reports redesign featuring even more substantial improvements.

If you are a customer with a Control user account, you can go to control.llnw.com today and try out the new application. If you don't yet have an account, simply ask your Limelight Account Manager to help you set one up.

Beneath the multiple topic tracks at the 2016 Game Developer’s Conference—which ranged from AI to Esports to community management—a silent competition was being waged right in the center of the Expo floor. This year, game engine companies Epic, Unity, and Crytek returned to the center of the exhibition space, only to have to share it with a new arrival: Amazon’s Lumberyard.

 

In their effort to attract the best and brightest of the world’s game developers, the engine companies are borrowing from the phrase “if you build it, they will come” and betting on a new version: “if they build on it, they will stay.” This year there is more at stake than ever before, as two huge developments hit the gaming industry and developers need and want help with both of them. The first development is virtual reality—developers need and want help integrating the best player technology with the best rendering and design technology so they can build high-quality games that feature the best aspects of this rapidly growing phenomenon. The second is player-to-player connectivity—developers want tools that enable their gamers to connect with each other even more seamlessly than before. Not only is connectivity key to the competition that drives Esports, it’s key to integrating gaming with gamers’ social circles.

 

So let’s take a look at the turf staked out by each of these companies, as well as the economic model the companies have put in place to incentivize developers to 'build and stay'.

[Image: McLaren (Picture Source)]

Epic spent a morning session highlighting the advanced features of its Unreal Engine 4 and made it clear they see a future for their developers that spans beyond gaming into state-of-the-art product design, virtual reality applications, and film creation. Their wide-ranging talk included several stunning demonstrations to prove their point. In one, an actress’s every movement and emotion were incorporated in real time into a game world, and digitized for possible future use. The result was a powerful fusion of live human action and the fantastic world of a 3D game. In another demonstration spilling over into real life, Unreal Engine 4 was used to create realistic car designs, so detailed they could actually be used in custom building the McLaren automobile (one of which was on display at their booth). Epic is blurring the lines between cinematography and game making, as well as fully embracing virtual reality. Everything demonstrated in this talk showed they are serious about their intent to own the high end of visual production and design.

 

The revenue model for Epic reflects their confidence in the engine. Over a year ago they started giving away their engine for free, in return for a 5% cut of a developer’s product or game revenues once they hit a certain amount per month. By empowering high-end creativity, they position themselves to share in major successes, but they also take on the challenge of providing a highly sophisticated and extremely powerful solution.

[Image: Crytek (Picture Source)]

Crytek released its CRYENGINE V at GDC, which provides integration with an impressive range of virtual reality solutions and hardware: PlayStation VR, OSVR, HTC Vive, and Oculus Rift. Crytek also announced new partners in its VR initiative, aimed at supporting VR research and development at leading universities by providing hardware and funding. AMD, Leap Motion, OSVR, and Razer are now partners in this initiative. A Crytek business development manager was quoted as saying, “Now we are much closer to our goal of forming a global VR community.”

 

The business model for Crytek’s engine is based on offering developers a community, not just for marketing and selling their games, but for actual IP as well. Crytek gives away their engine for “whatever developers want to pay” and includes with it access to the CRYENGINE Marketplace. The Marketplace offers thousands of game assets created by CRYENGINE users, including those collected by the company over the years. Given the impressive set of games developed on CRYENGINE, this is no doubt a rich source of material for developers.

[Image: Unity (Picture Source)]

Unity, which physically dominated the entrance to the Expo, announced the release of Unity 5.3.4 and the 5.4 public beta. Its response to the AR/VR phenomenon has been extensive and year-long. As part of the show they announced support for Nvidia’s VRWorks, which includes APIs, sample code, and libraries for VR developers that speed up and improve device integration and graphics rendering. In addition, they have made manipulating VR scenes even easier with a “Chessboard” system that puts a miniature version of the VR scene into the larger screen, making the scene easier to manipulate as a whole. Like Crytek, Unity has a “Made for Unity” asset store where developers can download free assets to enhance their games.

 

The monthly user base for Unity is huge (over 1M) and adding significant new features while maintaining stability is not trivial. At the show they emphasized the many accomplishments of the past year, including adding AR/VR plugin optimization. As far as connectivity goes, they announced that Unity Multi-Player is out of beta and available. This new offering allows developers to create multiplayer games using Unity’s servers, makes it easy for gamers to connect with each other, and is extremely scalable.

 

Unity’s economic model is based on a monthly charge of $75 for the “Professional” version of its engine, plus a charge for using its servers for concurrent game players. The concurrent player charge scales up depending on how many gamers there are and how much messaging is taking place. Unity has a global infrastructure of servers in the US, Europe, and Asia that supports its multi-player games.

[Image: Lumberyard (Picture Source)]

Amazon’s Lumberyard game engine was the newcomer to the party, and they too showed up with a game engine they are giving away for free. Lumberyard is described as a “free, cross-platform, 3D game engine for you to create the highest-quality games, connect your games to the vast compute and storage of the AWS Cloud, and engage fans on Twitch.” Many game developers and publishers are already familiar with Amazon’s infrastructure offerings, including its storage (S3) and compute instances (EC2). Providing an engine that links seamlessly to this infrastructure (and generates revenue while doing so) is another way to tie developers in so they will stay. Lumberyard also provides two solutions for connecting with players: ChatPlay and JoinIn. ChatPlay allows Twitch viewers to directly influence and comment on game play, and JoinIn provides a one-click ability for a gamer to play against a broadcaster.

 

The revenue model for Lumberyard seems to be aimed at building usage of Amazon’s prodigious infrastructure, as well as the user-base for Twitch.  The engine is free, but developers pay for their use of servers, storage, and other infrastructure.

 

In their race to be the platform of choice, each engine has had to decide where it will optimize the developer’s experience. And for mature engines, the challenge to keep innovating while serving a huge installed base is tremendous. Putting themselves at the center of the GDC floor showed all four of these companies know what is at stake as a whole new era of opportunity hits the gaming world.

Today we released findings of our second annual ‘State of Digital Downloads’ report. This study is part of Limelight’s series of annual surveys exploring consumer perceptions and behavior around digital content. Key findings may surprise you.  They include:

  • The mobile phone is the most dominant device for downloading content
  • Beyond OS updates, Movies/TV Shows, Music, and Apps are the most popular downloads
  • Consumers tend to download most often at night
  • Download speed is critical to providing a great experience
  • When things go wrong with downloading, typically ISPs are blamed
  • Google is winning the content war but Apple isn’t far behind
  • Android is the most dominant smartphone platform for downloading content but comes in second on tablets

 

The survey was conducted by a third-party organization with access to U.S. and international consumer panels. In all, 1,136 consumers ranging in age, gender, and education completed the survey. A copy of the press release is posted on our website and can be found here, and the complete report is available here.

Would you like the opportunity to help your peers learn from your experiences? We’re seeking Limelight customers to profile how you’re leveraging our technology to innovate, grow your business, and improve your customers’ experience. Join Arsenal Football Club, OTT company Dailymotion, retailer Costume Supercenter, and many more diverse organizations around the world, from established enterprises to start-ups, and tell your story. Simply leave me a message here or email me at dhohler@llnw.com and I’ll contact you to discuss how you could be featured here.

Access control is more than a passing fancy for many Limelight customers. In April 2016, we will have many features in the Orchestrate Platform to help control who can access what, from where. We recently merged two access control features: ACLs (Access Control Lists) and Geo-Fencing. For quite a while, we have had support for Geo-Fencing and ACLs. Geo-Fencing enables customers to allow or deny access based on an end-user's geographic location. ACLs enable customers to allow or deny access based on the end-user's IP address or HTTP method. In the original implementation, Geo-Fencing and ACLs were separate processes and were difficult to use in concert. The new White/Black Listing IP and Geo-Fencing feature is greater than the sum of its parts.

 

In the new implementation, Geo-Fencing and IP ACLs are combined into a single set of access control rules. The new service allows IPs to be organized into "Groups". IP Groups and IP geo-location data are treated in the same manner. Access control rules are processed in the order in which they are written; the first rule in which an IP address matches determines how that IP will be treated. Mixing and matching IP Group and geo-location rules is considerably more flexible than the disparate legacy systems were.

 

Features of the new system include:

  • HTTP Method – Allow/deny access based on HTTP method (get/head/options/post/put/delete)
  • Geo-Fence – Allow/deny access based on the geographic location of the end-user's IP address
  • IP Groups – Allow/deny access for a named group of IP addresses or IP ranges
  • Anonymous Proxy – Allow/deny access for end-users who are routing their requests through an anonymous proxy
  • All – Allow/deny access to all requests

 

Example:

Sportsball_Live.com has licensed distribution of the World Championship of CalvinBall (WCCB). Their license limits them to European distribution. Advertising partners paid big bucks to bring WCCB to Europe. The partner offices are spread around the globe and must have access to the WCCB content. The license agreement is strict and requires blocking access from anonymous proxies.

 

An ordered set of access control rules can be constructed to enable Sportsball_Live.com to meet their license agreement and bring WCCB to Europe.

[Image: Calvin Ball]

  1. HTTP Method (Allow get/head/options): Because WCCB is a live video event, HTTP methods are restricted to get/head/options.
  2. Whitelist (Allow Advertising_Partners_List): Explicitly allows any IP found in the Advertising_Partners_List access to the WCCB event.
  3. Geo-fence (Deny Anonymous Proxies): Explicitly denies access to WCCB for any known anonymous proxy.
  4. Geo-fence (Allow Europe): Explicitly allows access to the WCCB event for any IP in Europe.
  5. ALL (Deny All): Denies access to any end-user who has not been given access by the above rules. ALL should always be the last rule.
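
To illustrate how first-match-wins evaluation of an ordered rule set like this behaves, here is a small sketch. The data structures and geo lookup are hypothetical stand-ins; the actual feature is configured through Limelight, not written as customer code.

    # Hypothetical sketch of first-match-wins evaluation of the ordered rule set
    # above. The partner list and geo lookup are placeholders, not real data.

    ADVERTISING_PARTNERS = {"203.0.113.10", "198.51.100.7"}   # example partner IPs

    def lookup_geo(ip: str) -> dict:
        # Stand-in for a real IP intelligence lookup.
        return {"continent": "EU", "anonymous_proxy": False}

    def allowed(ip: str, method: str) -> bool:
        geo = lookup_geo(ip)
        # Rule 1: only get/head/options are permitted for the live event.
        if method.lower() not in {"get", "head", "options"}:
            return False
        # Rule 2: whitelist - advertising partner IPs are always allowed.
        if ip in ADVERTISING_PARTNERS:
            return True
        # Rule 3: deny known anonymous proxies.
        if geo["anonymous_proxy"]:
            return False
        # Rule 4: allow any IP located in Europe.
        if geo["continent"] == "EU":
            return True
        # Rule 5: deny everyone not matched above (ALL should always be last).
        return False

    print(allowed("203.0.113.10", "GET"))   # True: matched the partner whitelist (rule 2)
    print(allowed("203.0.113.10", "POST"))  # False: blocked by the HTTP method rule (rule 1)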

One of the largest Esports events in the world just took place this weekend - the Intel Extreme Masters (IEM) World Championships in Katowice, Poland. IEM Katowice featured three games - Counter-Strike, League of Legends, and StarCraft II. Qualifying tournaments have been running all over the world ahead of the finals in Katowice, and each game’s championship match offered €500,000. And lots of people were watching the action—live and online. In fact, online viewership for this three-part tournament likely exceeded last year’s 2.3 million peak concurrent viewers and 4 million YouTube views.

[Photo: The crowd at IEM Katowice]

 

What a lot of people don’t know, though, is what goes on behind the scenes of these events. From player preparation to live-stream logistics, there’s a ton of activity happening. In a recent webinar (full disclosure: Limelight Networks hosted the webinar) with Fnatic’s CEO Wouter Sleijffers, we took attendees behind the scenes to look at how professional sports agencies like Fnatic prepare their teams for this intense competition and what’s required to host a world-class live streaming event. Here are a few highlights:

  • Player preparation happens on multiple levels. Players have to be prepared physically and mentally to hold up to the hours of intense live action on stage. Wouter shared that part of this preparation is being able to rely on teammates, and spending time together away from the game, even occasionally living together in training venues as a way to build trust and connection between teammates.
  • Developers have a role to play in preparation. Wouter had some advice for game developers—create training tools that let coaches and analysts improve game play.  Don’t hold back exciting game play for the top levels - make all levels exciting.
  • It’s not all about just playing the game. Are you a couch potato convinced hours of game playing will make you the best?  Not so, it appears. Fnatic puts a surprising amount of work into the physical fitness of their players, including healthy eating and sleeping habits. It’s all designed to keep the mind as sharp as possible.  And flexible too.
  • The competition is never over. Think that a professional gamer’s work is done after the event? Not according to Wouter who feels that data analytics (post-match analysis) plays a crucial role in future success. In fact, Fnatic now employs not one but two analysts to dig into game play data, competitors and match data.

 

Does all this preparation work?  Turns out it does - really well.  The results from Katowice are in:

[Photo: The winners at IEM Katowice]

 

 

Limelight and Cedexis were both on hand during the webinar to explain how a live event gets transmitted to millions of fans around the world who are watching from their PCs, phones, and tablets. Delivering broadcast-quality coverage to this audience, especially when they are watching from all around the globe on hundreds of different devices, is a huge challenge. Luckily, many of these challenges have solutions that have been tested and proven successful by other industries that deliver live events:

  • Planning is key. Preparing for a live event actually requires careful planning and an experienced team. For an online audience to receive broadcast quality requires that a broadcaster’s entire workflow, from the stage cameras to the encoding and transcoding to the content delivery, is architected to eliminate latency and handle sudden spikes in viewership.
  • The public Internet is not the right solution. If you want to create a high quality experience for your audience, relying on the public Internet is a poor choice. Among the reasons: Esports audiences are truly global, and the size of these audiences can be large and unpredictable. Congestion from other events on the public Internet can interfere with a smooth broadcast, or ruin the quality of a broadcast for an entire region.
  • Build in redundancy. By using two or more CDNs for your broadcast, you ensure capacity for every log-on. Two or more CDNs also allow traffic to be optimized between them, so you get the most out of each CDN investment.

For more on how to satisfy Esports live event viewers, and how the pros prepare for these amazing contests, you can listen to the whole webinar here.

 

 

Photos Courtesy of Edwin Kuss, March 2016.