
We are pleased to announce the launch of the Limelight Executive Forum program, which gives customers and company leaders an opportunity to discuss requirements, technology directions, and better ways for us to serve you. At our first event in New York City last week, we heard from leading companies in media and broadcasting, online retail, financial services, and other industries, who told us they are pleased with the improvements they are seeing in availability, service responsiveness, and account management. We also discussed needed improvements in reporting, training, and communication. We look forward to the future events being planned for Seattle, Los Angeles, London, and other global locations in 2015. Thank you to all our customers who participated in our inaugural event last week. Please contact your account manager for more details on future events.

 

Thanks to everyone who has helped to make Limelight Connect a destination for staying up to date with the latest product and industry news. We are working hard to provide you, our community members, with a valuable service.

 

In addition to the content we've been providing, we'd really like to hear more from you. In fact, this is our mission. How can we at Limelight help you to further your own business goals? What questions would you like to ask us, what conversations would you like to participate in?

 

To make it easier for you to engage with Limelight staff and your peers, we have moved to a new community platform that has some great features to encourage deeper engagement.

  • Discussions: The new platform makes it easy for you to contribute to, answer, and follow discussions.
  • In the Limelight Blog: Now you'll be able to follow our blog posts and comment right here on the community.
  • Idea generation: You can post ideas and others can vote on them. Limelight can tap into customer feedback at a larger scale and use the platform to ask for product feedback.
  • Game mechanics (gamification): Earn points and badges as you contribute ideas, questions, and other content. If this sort of thing motivates you, you're in luck! The new platform provides some great opportunities to engage you further through this feature.
  • Content: As in the past, we will continue to provide you with relevant, timely industry and product articles, blogs, technical papers, and updates.

 

Explore the new Limelight Connect now >

 

Let us know what you think! You can respond directly to this blog post or email me at eforan@llnw.com

Originally published July 29, 2014

 

As digital becomes more important in how organizations connect with their prospects and customers, it behooves us from time to time to take a barometer reading of user expectations and perceptions. That’s exactly what we did in our annual study, The State of the User Experience.

Based on a survey of over 1,000 end users, our report finds some startling and interesting conclusions:

 

  1. Performance is the most important expectation for digital experience and can directly affect revenue
  2. Mobile devices are becoming the primary web access point for consumers, who now expect similar performance from mobile devices and desktop browsers
  3. The value of web experience personalization remains to be seen

 

This infographic captures some of the data points behind the conclusions. Download the report today and learn more about how users really feel about your digital experiences.

 

 

Originally published September 24, 2014

 

Mike Hendrickson, Vice President for Content Strategy at O’Reilly Media, interviewed Steve Miller-Jones, Director of Product Management at Limelight Networks, during the 2014 Velocity conference in New York.

 

Topics they covered:

 

  • Who is Limelight Networks?
  • Why real user measurement (RUM) helps content distributors
  • Delivering dynamic website content faster, globally
  • How to optimize content delivery from origin to ISP: first mile, middle mile, and last mile acceleration

 

Catch highlights from their interview below! (Total length: 00:05:36.)



Originally published July 27, 2014

 

If you use a website to grow your business, then you need to know how online performance impacts your bottom line.

But you might be surprised to find out just how drastic the impact can be.

 

This infographic gives you the inside story.

 

Tweet this infographic: Get the facts: Web speed and your bottom line – an infographic from @LLNW

 

 


 

What makes a successful gaming launch?

 

On November 3, Activision pre-released the newest installment of “Call of Duty®: Advanced Warfare,” with much success (and double XP!). And with all those beautifully detailed graphics, it’s no wonder this 46 GB behemoth has taken off on its official international release today.

 

Whether it’s a blockbuster game or the next big thing, there is always something to learn from those who do it successfully. “Call of Duty: Advanced Warfare” (CoD:AW) employs several tactics and strategies that make for an effective game launch.

 

Below are five things that Activision did right with this release that game studios and publishers of any size can learn from.

 

1. New Title + DLC for continuous play:

 

Advanced Warfare is not only the newest title in the Call of Duty series (preceded by the underwhelming Ghosts release in 2013); it also differs from its predecessors by taking gamers on a divergent journey through a futuristic world heavy on sci-fi themes. Developed solely by Sledgehammer Games (you may remember them as a co-developer of Modern Warfare 3), this release includes cool new features such as the highly praised exoskeleton suit, new maps, and several new weapons with mix-and-match options.

 

Although it is a completely new title with a new journey, the upgrades and DLC within the game build on an already successful franchise. Activision can maintain the interest of die-hard fans while garnering interest from new gamers: the new virtual firing range provides a safe space to practice new weapons, and the Combat Readiness Program engages new gamers while screening out advanced players whose skill level might discourage newcomers. To meet the modern gamer’s expectations, there are endless possibilities to keep gamers coming back for more multiplayer action, even after completing the single-player campaign.

 

What’s more, DLC will be available for purchase in 2015 to boost interest after the holiday season and keep gamers consistently engaged.

 

2. Multi-Channel Promotion Strategy:

 

A game of this size and hype does merit a significant amount of promotion. While not all games will be able to recruit the likes of Kevin Spacey or launch a full-scale television advertising campaign, there is something to learn from the CoD:AW strategy:

 

  • Using video, the ads themselves showcase the remarkable quality of the game while simultaneously generating hype over the new storyline and action capabilities.
  • CoD:AW also does an excellent job spanning multiple channels for promotion, via television, paid YouTube advertisements, and an online video player that delivers HD quality content on a microsite.
  • And while we’re at it, let’s talk about their fantastic microsite. Gamers won’t have to navigate through other sites and sub-menus to find what they’re looking for: the site gives hardcore gamers a dedicated place to chat in forums, view videos and photos, or purchase season passes and special editions, all backed by a content delivery network to make sure the site is easily and quickly accessible worldwide.
  • Finally, there is a companion smartphone app to make sure gamers are always connected to their clans. While the app doesn’t seem to include a mobile version of the game, it does let players create and manage clans and clan wars. The hardcore CoD:AW gamer will always be just a touch screen away from the brand.

 

3. Responsive Support Team:

 

Although the pre-release initially saw some bugs that prevented downloads, a responsive support team quickly identified and fixed the issues before the main launch. While the pre-release arrived only a day early for those who preordered the game, it proved to be a great way to surface and resolve any bugs before the official launch date.

 

While beta testing and pre-releases may not always be an option, a game release should be treated as an event. As such, it requires 24/7 technical attention in those first few days to quickly send out patches and keep gamers satisfied. Nothing ruins the bliss of getting that new game like an issue that prevents users from downloading it! While mistakes and glitches happen, it’s important to prepare for the worst.

 

4. Global Distribution:

 

Gamers are everywhere. Being prepared to release your game to a global digital audience rather than just regionally is no longer optional; it is essential. Distributing games like CoD:AW via popular consoles and online stores can solve the delivery problem, but it doesn’t address the rest of the game experience—like making sure your videos are available worldwide, on any device, and your microsite is fast and responsive.

 

Not only did the game reach a geographically global audience, it catered to their multiplatform tastes. CoD:AW is available for Xbox One, Xbox 360, PS3, PS4, and PC.

 

5. And Zombies…

 

Literally, everything in CoD:AW turns into a zombie. Fun, right? Maybe zombies don’t fit into every gaming world, but it is important to note the power of capitalizing on pop culture trends. In 2015, gamers can pay $50 for the DLC Season Pass, covering the beloved Zombie mode from past CoD titles as well as other expansions, or $15 for each expansion on its own. Hey, if it works, it works. And zombies work. Think we’re kidding? Zombies have enjoyed a tenfold increase in popularity on Google since 2005—bottom-line proof that there is no shortage of clever monetization strategies available to game developers and publishers.

Originally published October 22, 2014

 


End-user bandwidth is continuing to increase. Whether it’s the migration from 3G to 4G to 5G on mobile devices, improved WiFi, or fiber to the home, people around the globe are increasingly finding themselves with more bandwidth. In fact, according to Akamai’s most recent State of the Internet Report, global broadband speeds are up 24%!

 

But there’s a problem with that.

 

Just as consumers are getting faster speeds to access digital experiences, the organizations providing those experiences may be tempted to take their foot off the proverbial gas—they may assume that more user bandwidth will improve the performance of those digital experiences (i.e., websites, online games, software downloads). Only that isn’t exactly the case. According to this study by Mike Belshe[1], “if users double their bandwidth without reducing their Round Trip Time (RTT), the effect on Web browsing will be a minimal improvement (approximately 5%). However, decreasing RTT, regardless of current bandwidth always helps make web browsing faster.” The study demonstrated, by varying bandwidth from 1 Mb/s to 10 Mb/s, that Page Load Time (PLT) saw diminishing returns as the bandwidth got higher (Table 1).

 

Bandwidth (Mb/s)    Page Load Time via HTTP (ms)
 1                  3106
 2                  1950
 3                  1632
 4                  1496
 5                  1443
 6                  1406
 7                  1388
 8                  1379
 9                  1368
10                  1360

(Table 1)

 

Although there is a considerable jump from the early bandwidth increases, the returns continue to diminish as the pipe gets bigger, until they are almost negligible (e.g., from 9 Mb/s to 10 Mb/s).

 

Bandwidth, it would seem, isn’t the answer to web performance woes! That’s because a bigger pipe between the end user and the Internet doesn’t mean that a website or web application will really download any faster. The measure of how well a digital experience performs isn’t just about the speed of download. It’s how quickly the first byte is accessed, how many round trips it takes to return the data, how fast the digital experience is rendered in the browser (i.e., time to paint), and more…all of which depend upon a variety of factors that are well outside the purview of broadband speed. A poorly tuned web origin will still have inherent latency issues whether it’s being accessed through a low-bandwidth edge connection or a super-fast 100 Mb/s fiber connection!

 

Sure, you can mitigate some of your Page Load Time latency when users have bigger connections (although, as Mike’s study points out, if they are already on a high-speed connection the gain will be severely diminished), but you can’t remove the rest of the latency. Consider the following:

 

  1. Your website loads in 10s on a 2 Mb/s connection.
  2. Using Table 1, users accessing your website through a 10 Mb/s connection would feasibly see your Page Load Time cut to about 7s (a ~30% improvement).
  3. That still leaves your website at 7s. Say your end-user connections improve to 20 Mb/s. Extrapolating from Table 1, you might see a further improvement of approximately 5%, feasibly cutting your Page Load Time to ~6.7s. (The sketch after this list works through the arithmetic.)
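
Here is that arithmetic as a quick Python sketch, using the Table 1 figures; the 10s starting point and the 20 Mb/s extrapolation are hypothetical, as above.

    # Table 1: bandwidth (Mb/s) -> Page Load Time (ms), from Belshe's study.
    plt_ms = {1: 3106, 2: 1950, 3: 1632, 4: 1496, 5: 1443,
              6: 1406, 7: 1388, 8: 1379, 9: 1368, 10: 1360}

    site_plt = 10.0  # seconds: our hypothetical site on a 2 Mb/s connection

    # Scale the measured load time by the ratio of the table's values.
    at_10 = site_plt * plt_ms[10] / plt_ms[2]
    print(f"at 10 Mb/s: {at_10:.1f}s")  # ~7.0s, a ~30% improvement

    # Beyond the table, assume roughly another 5% for doubling to 20 Mb/s.
    at_20 = at_10 * 0.95
    print(f"at 20 Mb/s: {at_20:.1f}s")  # ~6.6s (the ~6.7s above, modulo rounding)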

 

That’s still 6.7s of Page Load Time that cannot be mitigated through faster bandwidth. So even as users access your digital experiences through bigger pipes, your website may still perform subpar. And as we discovered in our own State of the User Experience report, almost 60% of users will abandon a website if it takes longer than five seconds to load! Uh oh. At some point, this latency must be addressed through alternative means, such as reducing the Round Trip Time and reducing the number of Round Trips. In fact, according to Mike’s study, decreasing Round Trip Time (regardless of what you reduce it to) has a steady, scaled effect on reducing Page Load Time (Figure 1).

 


(Figure 1)

 

What then should you focus on? What can you do?

 

  1. First, do what you can to reduce the number of Round Trips it takes to retrieve your website. The easiest solution is to enable persistent connections so that more data can be delivered over fewer connections (see the sketch after this list). Each time an end-user request has to return to your origin (or even a cache), latency is added to the transaction journey.
  2. Second, reduce the Round Trip Time. This is inherently harder, especially if you are using the public Internet. Get off the Internet! Use a CDN so that you can take advantage of objects being cached very close to the end user, significantly shortening the middle-mile journey and thereby the Round Trip Time.
  3. Third, tune the server. Make sure that your webserver is properly tuned and doing only one thing—serving your website. Many organizations serve websites on generic “all-purpose” boxes that also house other applications like databases. The extra computational load slows down your webserver’s ability to do what it needs to do: return content in response to a user’s request.
  4. Fourth, turn on the cache. Whether or not you are using a CDN, you should enable caching on your webserver. This helps popular content come back more quickly (especially when it’s fetched out of memory, e.g., via memcached, rather than from the hard disk).
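
As an illustration of the first and easiest step, here is a minimal sketch of persistent connections in Python using the third-party requests library; the URL is a hypothetical stand-in for your own site.

    import time
    import requests  # third-party HTTP client: pip install requests

    URL = "https://www.example.com/api/items"  # hypothetical endpoint

    # Without keep-alive: every request opens a fresh connection, so each
    # one pays the TCP (and TLS) setup round trips all over again.
    start = time.monotonic()
    for _ in range(10):
        requests.get(URL)
    print(f"fresh connection per request: {time.monotonic() - start:.2f}s")

    # With a persistent connection: the Session keeps the socket open, so
    # only the first request pays for connection setup.
    start = time.monotonic()
    with requests.Session() as session:
        for _ in range(10):
            session.get(URL)
    print(f"one persistent connection:    {time.monotonic() - start:.2f}s")

On a connection with any meaningful Round Trip Time, the second loop should finish noticeably faster, purely because it avoids repeated handshakes.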

 

Of course, there are lots of other ways to improve website experiences, such as optimizing images and compressing text files like JavaScript, all of which help reduce the number of Round Trips and the Round Trip Time. Whatever you decide to do, just don’t sit back and do nothing, hoping that your website will perform better as users get faster connections.

 

Because in the long run…it won’t.

 

[1] Belshe, Mike. “More Bandwidth Doesn’t Matter (Much)” April 8, 2010. https://docs.google.com/a/chromium.org/viewer?a=v&pid=sites&srcid=Y2hyb21pdW0ub3JnfGRldnxneDoxMzcyOWI1N2I4YzI3NzE2

Image courtesy of www.etny.net.

Originally published October 16, 2014

 


 

On Tuesday, October 14, 2014, Google researchers announced the discovery of a vulnerability that affects systems with SSL 3.0 enabled. This vulnerability has been named POODLE (Padding Oracle On Downgraded Legacy Encryption). Details are available at https://www.openssl.org/~bodo/ssl-poodle.pdf

To mitigate exposure to this vulnerability, it is recommended that the use of SSL 3.0 be avoided. SSL 3.0 is an outdated standard, but is still in use in support of legacy applications.

 

Because of the need to support legacy systems, the elimination of SSL 3.0 may not be practical. In that case, customers must weigh the need to support the older standard against the threat of security vulnerability.

 

Limelight strongly encourages discontinuing the use of SSL 3.0, and we are actively working with customers to implement the mitigation while minimizing disruption to their end users. Rather than shutting down SSL 3.0 support across the board, which might have the unintended consequence of disrupting customers’ businesses, we have chosen to work with customers to help them mitigate this vulnerability. However, we have published a proactive notification informing customers that they continue running SSL 3.0 at their own risk.

 

Regarding the potential use of a workaround known as TLS_FALLBACK_SCSV, Limelight’s position is that this particular mitigation may not fully address the vulnerability. We believe the only acceptable method to fully address the vulnerability is to discontinue the use of SSL v3. More information on TLS_FALLBACK_SCSV is available at https://tools.ietf.org/html/draft-ietf-tls-downgrade-scsv-00
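
For readers who terminate TLS on their own servers, here is a minimal sketch of the mitigation itself using Python's ssl module; this illustrates the general idea rather than Limelight's implementation, and the certificate paths are hypothetical.

    import ssl

    # Build a server-side context that negotiates the best mutually
    # supported TLS version...
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # ...then explicitly refuse the POODLE-vulnerable legacy protocols.
    ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3
    # On recent Python versions you can state the floor directly instead:
    # ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # hypothetical paths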

 

If a customer requests, Limelight can block/eliminate SSL 3.0 on their behalf. Again, we will do this FOR customers, not TO them as some other providers have chosen to do.

 

Customers requiring Limelight to mitigate SSL 3.0 on their behalf are encouraged to call or email Limelight support through the normal support process. Because disabling SSL 3.0 may affect the customer’s end users, it is imperative that the request to shut it off be generated by someone who is authorized to speak for the customer and who has considered the potential service issues.

 

We have a process in place to perform this mitigation quickly for customers, and believe typical mitigations should be completed within 72 hours of the request.

As always, the security and availability of your services is our highest priority.

 

Please direct further questions to Support@llnw.com

 

Image courtesy of www.dogbreedinfo.com.

Originally published October 1, 2014

 

Yes, it’s that time of the year again. The approach of the holiday season brings cheer to all faces – but if you are in charge of the operational side of a web application or website, it’s the time of year that you secretly dread. The 3 am support calls, the fire drills, the back-up server activation… we’ve all been there.

 

If this is you during the holiday season, you won’t want to miss this blog!


 

Most companies put code changes under a strict lock-down mode (typically by the end of Q3) and rarely make changes that affect the stability of their systems until well after the holidays are over.

 

However, that does not mean you can’t test the operational readiness of your systems to make sure that servers, application resources, databases, and the coffee machine are all ready for the holiday traffic. In this blog post, we will review the various readiness tests you generally run and provide a list of key questions to answer as you ramp up for holiday traffic.

 

Before we jump into the details of how to conduct a readiness test, let’s do a quick glossary check; it is important to understand what kind of readiness test is best suited for your environment and application as you prepare for the holiday onslaught. There are three types of tests to consider running.

 

  • Stress Test: Testing the limits of your resources by applying an artificial load over time, up to the point where things break
  • Load Test: Creating artificial traffic on your resources to make sure things work fine when anticipated traffic increases arrive
  • Spike Test: Creating a sudden traffic spike to assess how the system handles sudden peaks (see the sketch below)
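
To make the distinction concrete, here is a minimal spike-test sketch in Python against a hypothetical URL; a stress or load test would use the same skeleton, but with the concurrency ramped gradually instead of fired all at once.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://www.example.com/"  # hypothetical page under test

    def fetch(_):
        """Fetch the page once, returning (status, elapsed seconds)."""
        start = time.monotonic()
        try:
            with urlopen(URL, timeout=10) as resp:
                resp.read()
                return resp.status, time.monotonic() - start
        except OSError:
            return None, time.monotonic() - start

    def spike(concurrency, total):
        """Fire `total` requests with `concurrency` workers, all at once."""
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            results = list(pool.map(fetch, range(total)))
        ok = sorted(t for status, t in results if status == 200)
        print(f"availability: {len(ok) / len(results):.1%}")
        if ok:
            print(f"median response time: {ok[len(ok) // 2]:.3f}s")

    spike(concurrency=50, total=500)  # a sudden burst, not a gradual ramp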

 

Each of the tests above is considered a volumetric test; you can find more details about various testing types here. Now that we know what sort of volumetric tests might be useful, here are a few key questions that site reliability engineers should consider when planning a holiday readiness test for applications that use a content delivery network (CDN):

 

Am I testing the limits of my origin resources or the capacity of the CDN?

Your test should measure the capacity of your origin resources. Because Limelight has virtually unlimited delivery capacity and built-in intelligence to distribute and automatically manage the load, there are virtually no CDN capacity limits to test.

 

Am I testing how much the CDN is able to insulate me from a sudden spike by offloading requests?

While this is an important question, it is difficult to predict. There are many dependencies, like how much dynamic content your website has and how the cacheability headers are set. We at Limelight often see customers achieve over a 98% cache-hit ratio!
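
The biggest lever for that offload is usually origin-side cacheability headers. A minimal sketch, assuming a Flask origin (the route and the one-hour max-age are hypothetical):

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/product/<sku>")
    def product(sku):
        resp = make_response(f"<h1>Product {sku}</h1>")  # stand-in for a real page
        # Tell the CDN (and browsers) this response may be served from
        # cache for an hour; the higher the offload, the less holiday
        # traffic ever reaches the origin.
        resp.headers["Cache-Control"] = "public, max-age=3600"
        return resp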

 

How am I going to measure the impact of my tests?

Using third-party testing platforms like Gomez and Keynote for backbone testing, and Real User Measurement (RUM) providers like Cedexis to measure user impact, is the right way to go.

 

What are some of the metrics that I should look at?

Some of the key metrics that typically get impacted during holiday traffic are:

  • Availability
  • Response Time
  • Errors

Am I conducting a truly realistic load test to mimic the real world traffic pattern?

 

Most of the time, site reliability engineers are not conducting a true-to-life test. We may be sending traffic from one location (or even one server) in a specific city to the CDN. This is far from the real-world situation, where your customers pound the CDN and origin from multiple locations simultaneously.

 

And what about multiple devices? Most users don’t necessarily sit behind a laptop and access your websites. There is a complex mix of devices like tablets, smartphones and even gaming consoles now!

 

Have I set goals and success criteria for my tests?

 

Many companies conduct arbitrary volumetric tests to gauge readiness; they don’t set goals that are important to their business. For example:

 

“The pages should load in X seconds”

 

“The availability should not drop below X%”

 

The right approach is to set a goal that is closely aligned with your business metrics, like revenue goals.

 

Am I testing the right transactions that the typical users would execute?

 

When running tests, it is important to mimic what most real users on your website would do as closely as possible. For example:

 

“Are your customers able to view all the images on the catalogue—not just one?”

 

“Are they able to successfully add multiple items to the shopping cart without issues?”

 

If you are a retailer, testing just the home page may not be enough.
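
A minimal sketch of such a multi-step transaction check in Python, using the requests library; every URL, path, and form field below is a hypothetical stand-in for your own purchase flow.

    import requests  # pip install requests

    BASE = "https://shop.example.com"  # hypothetical storefront

    with requests.Session() as shopper:  # one session ~ one user, cookies included
        # Step 1: browse the catalogue, not just the home page.
        catalog = shopper.get(f"{BASE}/catalog")
        assert catalog.ok, "catalog page failed"

        # Step 2: confirm the product images actually load.
        for image in ("/img/sku-1.jpg", "/img/sku-2.jpg"):
            assert shopper.get(f"{BASE}{image}").ok, f"{image} failed"

        # Step 3: add multiple items to the cart, like a real customer.
        cart = shopper.post(f"{BASE}/cart/add", data={"sku": "sku-1", "qty": "2"})
        assert cart.ok, "add-to-cart failed"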

 

These are some of the key questions that you should consider before planning a holiday readiness test. Luckily we at Limelight have extensive experience and capacity to handle sudden traffic spikes and make sure the holiday season traffic translates into more revenue for you. We would be happy to speak with you and answer any specific questions that you may have about preparing for holiday traffic.

 

To learn more about how we can help you solve today’s complex application delivery and optimization challenges, please contact us or follow up with us on our online community!

Happy selling!


From IBC 2014: Wrap-up

Posted by jthibeault Dec 8, 2014

Originally published September 18, 2014

 

Well, another IBC is in the books. Thousands of business cards exchanged, tens of miles walked, and gallons of coffee consumed. But what was it all for? Besides the obvious answer of “drum up more business,” IBC serves a critical function in the media and broadcast business—to reveal, challenge, and validate the trends that are shaping the industry.

 

So what did this year’s show reveal?

 

  • OTT—clearly, OTT has come of age. The floor was dotted with OTT providers like SatLink Communications, Viaccess-Orca, and Zappware, all pitching end-to-end software for carrier- and broadcast-grade OTT solutions. It would seem that OTT is rapidly becoming the “gateway” for how consumers discover, consume, and play back content, regardless of the source.
  • Mobilization of Live—content acquisition of live events used to be handled solely by trucks, cables, and satellites. But a number of new technologies such as LiveU and Quicklink (IEC Telecom) provide backpack-based or other mobility camera solutions to enable truckless acquisition of content in the field.
  • Cloud-based Live Production—broadcast has traditionally been carried out in specialized “control rooms.” But as everything in that control room gets connected via IP, there’s no reason why production and broadcasting can’t happen through cloud-based services like Make.tv, which offers a complete virtualized production studio.
  • Complete workflows—the entire broadcasting paradigm is getting turned on its head as a result of intelligent and powerful software that liberates content publishers from their traditional workflows. These cloud-based solutions, like Avid Everywhere and Mediagenix, cover everything from content production to distribution.
  • Programmatic Advertising—obviously, monetization is on the tip of everyone’s tongue. Companies like Civolution are taking it to the next level by synchronizing TV-based advertising with media buying opportunities on social networks and the Web.

 

But IBC 2014 wasn’t just about technology trends. There were also fundamental themes woven throughout the show:

 

  • Legitimizing the cloud—not only are cloud-based services springing up to replace incumbent broadcaster technologies and processes, but more and more elements of the broadcaster workflow are becoming connected to the cloud to provide more accessibility and greater flexibility.
  • IP broadcasting—despite the usual prevalence of satellite vendors at the show, IP is making a deeper and more meaningful push into the broadcast industry with the ultimate goal of enabling the delivery of core content (i.e., broadcast television) to any device, anywhere in the world.
  • Broadcasting without boundaries—as I wrote in a previous post, many of the technology trends were about liberating broadcasting from the traditional, legacy processes, hardware, and technologies so that the acquisition, production, and delivery of content can happen anywhere, not just in a control room.

 

Deep down, though, IBC 2014 was all about showing that things are starting to work now. The media and broadcast industries are inundated with new technologies all the time. So much so, in fact, that we forget the bigger problem: making them all work together. Cloud services. Software-based encoding. Workflow solutions. All of them sound good, but while they are bright and shiny they are also unproven. And there’s a lot of risk in incorporating unproven technology into production environments. IBC 2014 seemed to forgo the bright-and-shiny for some tried-and-true.

 

That’s it from the show. Doe-doei until next year (that means bye-bye in Dutch…I think).

Originally published September 15, 2014

 

Clearly broadcasters have a challenge: get their content to end-users as quickly as possible in the most efficient manner. But that problem is obviously exacerbated by the proliferation of devices. Consumers are no longer chained to their desks, chairs, or couches. So those same broadcasters who have for so long distributed their content via closed, terrestrial networks are now facing the uphill battle of extending their infrastructures, workflows, and processes to push all that content over IP.

 

Which brings me to IBC 2014.

 

With the changing landscape of content consumption, it’s clear that broadcasters must evolve the way they publish. Not only must they deliver content to all those devices, but they must also do so quickly and efficiently. If you listen to Avid, a staple of many broadcaster publishing workflows, it’s because of an “accelerated digitization of the media value chain.” Countless elements of the content publishing workflow are now being offered as cloud-based services, enabling broadcasters and media companies to literally publish from anywhere. In fact, Avid announced Avid Everywhere, “the most fluid end-to-end, distributed media production environment in the industry, a comprehensive ecosystem that encompasses every aspect of the new digital media value chain.” Only Avid isn’t alone: AP, Ericsson, Verizon Digital Media Services, Microsoft Azure Media Services, and others have cloud-based platforms for media publishing as well.

 

What’s special about these services is that they promise to liberate content publishing just as multi-device has liberated content consumption. Broadcasters and media organizations are no longer tied to expensive equipment or desktop-based software. Through cloud-based services they are empowered to acquire, edit, and publish content from anywhere they can access the Internet.

 

But it’s only half the story. Creating the best content in the world doesn’t matter if you can’t get it to the viewers on whatever device they want to use, wherever in the world they are. And just as the IBC floor showcased some of these new, innovative services to publish content, it also addressed the other side of the workflow: delivery. Quickly and efficiently getting content to end users means being able to convert content into the necessary formats as well as being able to distribute it to multiple end points over an increasingly congested Internet. The Limelight Orchestrate for Media and Broadcasters solution tackles that exact problem: transforming and delivering content in the cloud.

 

It’s broadcasting without boundaries.

 

Together, these new cloud-based media production workflows coupled with delivery workflow services replicate what were once closed software/hardware ecosystems for content creation, publishing, and distribution. This enables content publishers to not only create what they want and where they want, but also to distribute to consumers so they can watch when they want, how they want, and where they want.

 

Of course, the system still isn’t perfect. There remains a tremendous amount of integration that needs to happen between these two different parts of the workflow so that the entire process is seamless for the content publisher. But IBC 2014 showed us a glimmer of a future in which content publishing becomes fluid, when nothing gets in the way of getting great content to end consumers.

Originally published September 3, 2014

 

Right now, media and broadcast outlets are confronted with a remarkable opportunity for growth. Consumer demand for online video content has been well documented; as Cisco has widely reported, it is set to double by 2018.

Limelight Orchestrate for Media and Broadcasters

 

Content owners seeking to leverage this market opportunity will do anything to meet the overwhelming demand… even at the expense of operational efficiency. They understand that however costly disjointed workflows and disconnected third party technology solutions may be, the implications of content being unavailable to viewers—even for an instant—are even costlier. The broadcast generation, a consumer base that expects online media to perform at the same level as traditional linear broadcast, simply assumes that content will be delivered flawlessly to any device.

 

Broadcasters and content owners need a unified solution to distribute media easily, quickly, and securely to this worldwide audience on a multitude of platforms and devices.

 

Today, Limelight Networks announced the launch of the Limelight Orchestrate™ solution for Media and Broadcasters (Orchestrate for Media and Broadcasters): a powerful, integrated, cloud-based workflow that enables media delivery to the broadcast generation.

 

Orchestrate for Media and Broadcasters integrates multiple pieces of a traditional workflow using cloud components connected across the private, global Limelight network. Content providers can reduce the complexity associated with traditional online media publishing, freeing management to focus on revenue-building activities and streamlining operations.

 

Combining the power of Limelight Orchestrate Content Delivery, Limelight Orchestrate Cloud Storage, and Limelight Orchestrate Video, the solution is divided into four key areas:

 

  • Ingest: Customers can upload content to the Orchestrate Cloud Storage platform from globally distributed ingest points. Through policy, files are automatically and instantaneously replicated to worldwide points of presence (POPs) strategically located at the edge of viewer access networks—effectively eliminating the manual steps associated with global distribution.
  • Convert: With just a single mezzanine file, Orchestrate for Media and Broadcasters automatically transforms content into the formats required for optimal performance on mobile, desktop, set-top, and other devices. Thanks to the proprietary Zero Time to Publish (ZTP) feature, content can be made available on demand before transcoding has even finished. DRM support and authentication services add additional security for protected content, while seamless ad integration facilitates monetization.
  • Deliver: Once content has been ingested and converted to the appropriate formats, the globally distributed Limelight content delivery network speeds time to market. Files are placed in cache across tens of thousands of streaming servers, according to business logic determined by the customer. An intelligent software layer detects the viewer’s requesting device to optimize delivery in real time. And with over 9 Tbps of egress capacity, the network can scale rapidly for planned or unplanned traffic.
  • Playback: Media and broadcast outlets are judged by the quality of the playback experience they provide. Orchestrate for Media and Broadcasters delivers a seamless multiscreen experience for live sport games, movies on demand, breaking news, and more. Built-in adaptive bitrate streaming adjusts playback for different bandwidth environments. Multi-device media delivery ensures that video is delivered flawlessly to mobile and ultra HD devices alike. Finally, built-in analytics give content owners the insight required to measure, and maximize, impact.

 

Limelight customers that take advantage of the solution enjoy a more efficient workflow, better performance, broader reach, greater security, and increased revenues. Stefano Flamia, CTO of Italian video service CHILI, stated, “At CHILI, we need to be able to efficiently deliver content to our customers around Europe, to any device, while managing the rapid growth of our content. That’s why we chose Limelight. Our use of the Limelight Orchestrate solution enables us to manage large amounts of objects and to offer fast content availability, while managing spikes in demand and controlling our delivery costs.”

Limelight Orchestrate for Media and Broadcasters is the easiest, fastest, and most economical solution for content providers that need to deliver broadcast quality at global scale. Learn more here or explore the related resources below.

 

Orchestrate for Media & Broadcaster Workflow

 

Related resources:

  • Press release: Limelight Announces New Digital Content Delivery Solution for Media and Broadcasters
  • Solution brief: Limelight Orchestrate for Media and Broadcasters
  • White paper: Delivering to the Broadcast Quality Generation
  • Case study: How StreamOn Revolutionized Media Delivery
  • Online community: Find your peers and broadcast experts at Limelight Connect

Originally published August 29, 2014

 

Wake up, folks. ZRT is here.

 

Facebook Zero. Wikipedia Zero. Google Freezone. All these initiatives share one thing in common—their traffic doesn’t cause the mobile subscriber to rack up usage charges against their data plan.

 

But it was the most recent entry into the zero-rating game, T-Mobile, that truly demonstrated how powerful and real zero-rating has become as a marketing tool. Early this year, T-Mobile launched the “Music Freedom” platform. This initiative enables subscribers to listen to as much audio as they want from their favorite audio streaming provider without burning through their monthly T-Mobile data allowance. At the time of launch, only 8 providers[1] were enabled although T-Mobile is now crowd-sourcing requests for future services to be added.

 

It is a great marketing campaign, targeting a specific, high-profit user segment. With its launch, T-Mobile opened the door to leveraging zero-rated traffic as a marketing vehicle, and that is some powerful mojo.

 

Why audio? Because it is easy. Even high quality audio at 128kbps makes it difficult for a large audience of simultaneous subscribers to suck up tons of bandwidth. But what’s next? A “Video Freedom” platform? Who knows…

 

Regardless of how ZRT as a feature plays out, it should be a major wake-up call to every content provider, because delivering zero-rated traffic is different. The key is that this kind of traffic is tied into the carrier’s billing system. The carrier “whitelists” one or more IP addresses, allowing the content from those IP addresses to be zero-rated when it’s consumed by the user, meaning there’s no charge against the user’s data allotment.

 

And when a content owner’s traffic is zero-rated, the impact can be dramatic. Suddenly, there are a lot more people wanting to consume that content. And, all that expanded traffic may well require the use of a CDN to deliver the scale and global reach necessary to satisfy user demands.

 

If you are considering ZRT, you will want to engage a CDN anyway. CDNs can provide the reserved IPs that the carrier needs to make ZRT work with their billing system. Be careful, though, because not all CDNs are created equal when it comes to reserved IPs. Most can only deliver them from a few specific points within their network. That kind of restriction forces a tradeoff between performance and capacity that obviates the benefit of using a CDN in the first place.

 

But this isn’t the case with Limelight Networks. Our network was architected to allow for reserved IPs at scale. That means when you reserve an IP address in our network, it is reserved in every POP, everywhere in the world. When you are using a CDN provider that can provide reserved, virtual IPs at scale, you can deliver zero rated traffic through virtually any mobile carrier with which you can sign a contract.

 

In addition to requiring reserved, dedicated IPs for whitelisting traffic, some zero-rated traffic also needs to be delivered over Secure Sockets Layer (SSL). This is especially true for content such as device updates but it is applicable for any content owner that fears their content may be ripped off in transit.

 

The issue here is similar to the dedicated IP issue. Many CDNs serve HTTPS (SSL) traffic only out of separate delivery pools that are significantly smaller and less geographically dispersed than their HTTP equivalents. Again, this forces unnecessary capacity and performance tradeoffs. But it’s not the case with Limelight Networks.

 

In early 2014, Limelight combined delivery pools enabling us to deliver secure HTTPS traffic at the same global scale as we traditionally deliver HTTP.

Although you may not be running out to your local mobile carrier to strike a zero-rated traffic deal tomorrow, T-Mobile’s initiative has set the bar for content owners. If you are considering ZRT, or have other needs for reserved IPs at scale, perhaps the best choice is to partner with a global CDN that’s already future proof. A CDN that doesn’t force you to make tradeoffs between scale, availability, and performance just to get your content delivered the way you need it. A CDN like Limelight Networks.

 

[1] http://www.t-mobile.com/offer/free-music-streaming.html

Image courtesy of e27.co.

Originally published August 5, 2014

 

This is blog post #8, the final installment in our blog series #OptimizeDigital, where we explore themes based on our newly released book Optimizing the Digital Experience (available for download here). You can catch up on the previous post here.

 

Have you followed along with our #OptimizeDigital blog series? Are you a master of general web performance principles? Have you been working hard to build a faster website?

 

Then you are ready to become the Performance Champion of your organization.

 

Performance Champions do more than talk about the benefits of great web performance; they know how to build a compelling case for it… one that will win buy-in from senior management and achieve tangible business benefits.

 

These four tips will help you do just that.

 

Tip 1: Benchmark Yourself

 

When trying to optimize a website, it is important to identify key performance indicators (KPIs). KPIs help you measure progress and remain aligned with the goals of your business. Once you know your KPIs (here’s a handy list), you can analyze your performance against them. There are plenty of free tools to help you do so.

 

  • HTTP Archive: Enter any URL to view data such as millisecond-level screen shots, or larger trends in transfer size and requests.
  • Gomez: Test your page download time with details by object type from various global regions.
  • Keynote: Download free apps (including mobile and Internet testing environments) with a focus on real user monitoring.
  • Webpagetest: Test your speed by browser and mobile device type. Create a video file of the filmstrip to include in presentations.

 

And while you’re at it, don’t forget about the competition. Knowing how your web content performs against theirs is important in making the case for performance to your executive leadership. Some of the performance tests you run on your own websites can be run on the competition, too. There are even tools to help you benchmark your performance against the standards of your industry as a whole. You can get started with Compuware APM’s benchmark tool and Alexa Internet, Inc.’s Top Sites list.
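
Hosted tools give the richest data, but even a rough script puts your pages and a competitor's side by side. A minimal sketch in Python; the URLs are hypothetical, and because this "TTFB" includes connection setup it should be read as a relative comparison, not an absolute measurement.

    import time
    from urllib.request import urlopen

    SITES = [
        "https://www.example.com/",         # your site (hypothetical)
        "https://www.competitor.example/",  # a competitor (hypothetical)
    ]

    for url in SITES:
        start = time.monotonic()
        with urlopen(url, timeout=15) as resp:
            resp.read(1)                    # the first byte has arrived
            ttfb = time.monotonic() - start
            resp.read()                     # drain the rest of the document
            total = time.monotonic() - start
        print(f"{url}: ~TTFB {ttfb * 1000:.0f} ms, full page {total * 1000:.0f} ms")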

 

Tip 2: Quantify Value

Executive management may not be interested in knowing that you shaved milliseconds off load times or deferred scripts. You have to build the case for performance in terms that are important to them, and that is not likely to be technical speeds and feeds.

No secret here: The key to securing investment in web optimization projects is to quantify the financial value. It may be a lower total cost of ownership (TCO) or a higher return on investment (ROI). It may be increased revenues or decreased capital expenses. Whatever the metric, there is a financial impact attached to your web performance.

Above all, management will want to know what that financial impact is.

 

Tip 3: Build Your Partner Ecosystem

Web optimization requires an entire technology ecosystem. The ecosystem has to integrate seamlessly with your existing technology, and automate workflows at every opportunity for a higher return on investment (ROI), a key theme for your executive leadership. Moreover, it has to scale to meet your unpredictable web traffic patterns; your executive team may not be inclined to build out to peak capacity.

A content delivery network (CDN) like Limelight Networks is a critical element of this ecosystem and can significantly improve web performance. When choosing a CDN or any ecosystem partner, pay attention to these qualities:

 

  • Integration
  • Manageability
  • High performance
  • Resiliency
  • Elasticity
  • Future proofing

 

Though it seems self-evident, this bears repeating: The performance of your web content is only as good as the networks that you choose to deliver it. Choose wisely.

 

Tip 4: Schedule System Checks

 

Performance Champions know that improving performance is not a one-time engagement.

 

Sure, some aspects of performance optimization can be automated in time. But the results need to be continually assessed and re-evaluated.

Look to real user measurement (RUM) for a handle on how users are benefitting from your efforts. Also, live reporting and analytics should be made available to you by your CDN on a geographical basis in real time or near real time. Finally, it is best practice to conduct a high-level performance review against your KPIs and re-audit your content on a monthly basis, to monitor changes or trends worthy of your attention.

 

But the executive team will want to know how you are progressing against company objectives. You will want to show results that speak to revenue, savings, business value, customer satisfaction, and other matters uniquely important to senior management.

 

The Final Word

 

Your company has invested a tremendous amount of work in developing great digital content. But unless it is delivered successfully—unless it performs in a way that allows end users to locate and interact with it in the way they want—what’s the point?

 

Fortunately, there is a wealth of tools at your disposal to overcome performance challenges. An architected and managed approach yields real performance gains along the entire delivery path. It maximizes internal resources. It future proofs your business. And most importantly, it creates a great digital experience for your end users.

 

As the company’s Performance Champion, you are in a position to make it happen… provided you have the right tools in hand.

 

Ready to get started? I welcome you to consult with our experts, participate in our online community, and download our book, Optimizing the Digital Experience, to learn more.

 

Thanks for reading the #OptimizeDigital series, and happy web performance!

Originally posted July 9, 2014

 

This year, like every year, the best minds in the web operations and performance arena came together to learn, exhibit, and exchange ideas that impact digital experiences at the Velocity Conference in Santa Clara, California.

 

The key theme this year was – Building a Faster and Stronger Web. With 2000 attendees, over 100 speaker sessions, and 75 exhibitors, it was indeed the perfect platform for Limelight Networks, Inc. (Limelight) to showcase its exceptional technology, speak with experts at the front lines of web performance, and educate customers about the latest web performance optimization techniques.

 

The below social media post conveys the show’s significance to people responsible for web performance:

 

“Velocity is the conference I always wanted. Instead of focusing on one particular product or technology, it focuses on the true problem of keeping websites fast and available, which a lot of us have to deal with.” —Peter Zaitsev, CEO Percona Inc., co-author of High Performance MySQL

 

What We Heard

 

The attendees at the conference were enthusiastic to learn and share real-world best practices for optimizing their web applications, focusing on the complex topics of Ajax, CSS, JavaScript, and image performance. (These are the core components that a typical website is made up of, and what make its pages dynamic and personalized.)

 

But how do you know if your applications are truly optimized? Performance testing was another widely discussed topic, with a focus on real user measurement (RUM).

 

Finally, attendees were hungry for best practices around optimizing performance for mobile and responsive design.

 

JavaScript Is the Assembly Language of the Cloud

 

Let’s focus on JavaScript for a moment.

 

In a keynote session titled Virtual Machines, JavaScript and Assembler, Scott Hanselman of Microsoft® said, “JavaScript is the assembly language of the cloud.” (You can watch this session on-demand here.)

 

We at Limelight Networks see, optimize, and enable delivery of some of the most complex web applications and sites every minute. While JavaScript has empowered developers to do things on the client side (your browser or device) that were not possible only a few years ago, it has also added more complexity and performance challenges to webpages.

 

Many Velocity attendees came in thinking that their applications and websites were already optimized, only to discover best practices and techniques they weren’t even aware of. They also found that applying some of those best practices in a cookie-cutter way could actually hurt the performance of their web applications. Many engineers were amazed to learn that websites they thought were already fast and optimized could be further fine-tuned for performance gains ranging from 30% to 100%. They were eager to dive deep into the methods and learn what they could do better.

 

An intelligent service like the Limelight Orchestrate® Performance platform empowers developers to create more client-side-heavy JavaScript code without having to worry about optimizations, resulting in the superior user experience that Velocity attendees are hungry for… especially for JavaScript-heavy content.

 

Getting Beyond Page Level Metrics

 

Modern browsers are updating at a pace that is almost impossible to keep up with. While it is great to optimize code and create a powerful experience on the users’ devices and browsers, how can we make sure that experience is consistent?

 

In a keynote session on Real User Metrics (RUM), Buddy Brewer of Soasta focused attention on the complexities of browsers and page components, and more importantly, on how the times at which these components load can point to which components are slowing pages down.

 

Real User Measurement (RUM) isn’t just about page-level metrics anymore. We can now collect real user data at the object level, find slow page components, and keep third parties honest. For example: if your users are complaining about poor performance, you have the ability to isolate it at a much more granular level than was possible before. Using the waterfall charts produced by a typical RUM test, you can pinpoint a small problem thumbnail image coming from an ad network as the source of slowness on a specific browser, device, or network.
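
Here is a minimal sketch of that kind of object-level digging in Python, assuming you have exported a waterfall as a HAR file from your RUM tool or browser; the filename is hypothetical.

    import json

    with open("session.har") as f:  # hypothetical capture exported earlier
        har = json.load(f)

    # Each HAR entry is one object on the page, with its total time in ms.
    entries = har["log"]["entries"]
    for entry in sorted(entries, key=lambda e: e["time"], reverse=True)[:5]:
        print(f'{entry["time"]:7.0f} ms  {entry["request"]["url"][:80]}')

The five slowest objects, with their URLs, usually make it obvious whether the culprit is your own content or a third party's.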

 

RUM is an important counterpart to synthetic testing, where nodes sit on the Internet backbone to measure raw speed but give an incomplete picture of the user experience.

 

The key questions that RUM can answer are: Which page components affect perceived latency? Is that JavaScript object a single point of failure? Or is it that external CSS stylesheet causing the slowdown?

 

Using RUM to measure your content is critical to your business.

 

Mobile

 

Web experts and prospects alike were keenly interested to know more about how to successfully solve the challenges of today’s complex websites, particularly mobile. Many speakers at this year’s conference focused on “Mobile First” and responsive design.

 

Sessions covered the challenges of mobile performance and the difficulty ensuring a great user experience on all devices.

 

One session covered building out a device lab and testing all the browsers. Tammy Everts (Radware) shared some interesting data on how users react to slow performance: 23% curse at their phone, 11% scream at their phone, and 4% throw their phone. (Our very own CMO has been known to bang his phone on the table as if hitting it like an old rabbit-ear TV set would reset it.)

 

Overall, these sessions were very well attended and it was clear that attendees see web performance on mobile devices as one of their key challenges.

 

Our Take from the Show Floor

 


 

Major high-tech companies like LinkedIn®, Facebook®, and Google® were showcasing their work and invited people to challenges and trivia contests about web performance problems. We at Limelight were honored to be among them.

 

Our CDN and performance experts had key conversations around mobile, performance, and web application best practices. Many attendees were amazed to learn that a CDN like Limelight can not only accelerate and cache static content, but also accelerate dynamic and personalized content. Front End Acceleration (FEA) techniques also caught the eye of many technologists. These web professionals were amazed to see the possibilities introduced by a more complete approach to web performance optimization, combining static object caching, dynamic content acceleration, and front end acceleration.

 

While JavaScript, RUM, and mobile conversations dominated the conference, one more thing drew long, inquisitive lines at our booth – the presence of a caricature artist. In a more lighthearted moment, one of our customers – a prominent social media network – brought in its whole team to have their caricatures drawn.

 


 

Conclusion

 

Velocity is about the people and technologies that keep the Web fast, scalable, resilient, and highly available. From ecommerce to mobile to the complexities of cloud, companies need to create a faster, consistent web experience globally. And companies like Limelight are dedicated to enabling that for you.

To learn more about how we can help you solve today’s complex application delivery and optimization challenges, please contact us and follow up with us on our online community!

Originally published July 2, 2014

 

This is blog post #7 in our blog series #OptimizeDigital, where we explore themes based on our newly released book Optimizing the Digital Experience (available for download here). You can catch up on the previous post here.

 

Tweet this post: Currently reading: Are You Getting the Most Out of Your CDN?

 

Looking to make your website faster? Then you have probably come across content delivery networks (CDNs). A CDN acts as your instant global infrastructure, distributing and caching your content on its network of servers, which are located closer to audience members everywhere than your infrastructure could ever be—at least without spending unthinkable capital and developing deep technical expertise. (Consider that Google is one of a very few companies that operates its own global CDN.)

 

Caching is a key performance strategy for any CDN. The general rule: the closer your end user is to your content, the better. Shorter distances minimize latency and cut down on unnecessary network hops that can lead to packet loss and retransmissions. Caching enables those short distances.

 

Once upon a time, that was enough.

 

No more.

 

Today’s CDNs must be able to handle today’s content. That means addressing trends in media, user consumption, and web development so you can superbly deliver:

 

  • Content to mobile and other connected devices
  • Content for every browser
  • Dynamic (personalized) content
  • Video, video, and more video

 

Do you have any of these content types? A quick content audit will tell.

 

This blog post introduces you to optimization techniques that traditional CDN providers cannot offer. These techniques accommodate the more complex delivery needs of organizations operating in today’s digital environment.

 

Dynamic Content Acceleration

 

This is important: most CDNs are capable of delivering dynamic content, but very few are capable of accelerating that delivery. The difference between the two has a real impact on your bottom line.

 

Unlike static content, which is the same for every user and can be efficiently cached in an edge server and counted on not to change for long periods of time, personalized content must be uniquely generated each time it is requested. That means every request for dynamic content must travel all the way back to the origin servers where you store it, and the object requested travels the same distance to your end user; a network round trip, at minimum. The network round trip can occur on the public Internet, or a private network. (See Chapter 5 of Optimizing the Digital Experience  to learn more about the difference between the two.)

 

Personalized content providers (anyone delivering content that has to be refreshed on request, like a real-time bank account balance, custom search results, or a personalized web experience) face a challenge: How do I achieve the kind of performance gains that static object caching can yield, given that my dynamic content can’t be cached in the same way?

 

Specialized CDNs can help you overcome the dynamic challenge with specific optimizations across the middle mile—in essence, opening up a bottleneck-free world where data can travel freely between your servers and your online audience. There are two main ways to get this done: route optimization and TCP acceleration.

 

Route optimization means selecting a delivery path that produces the best possible performance. This delivery path can be the public Internet, or a private network like Limelight’s. (In addition to controlling the flow of your data, privately operated CDNs control everyone else’s traffic on their network as well to avoid congestion.)

Transmission control protocol (TCP) optimization is really a combination of techniques to improve the performance of your dynamic content as it travels across the networks between you and your audience. Since TCP sets limitations on how your content can move across the Internet, specialized techniques are required to either respond to or overcome those limitations. But rather than hand-coding these optimizations into your content, a CDN can automate decisions and executions based on factors like your user’s browser, device type, packet loss, and timeouts, all in real time.
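To make the round-trip cost concrete, here is a rough back-of-the-envelope sketch (illustrative only, and not a description of Limelight’s implementation). Under loss-free TCP slow start, the sender’s congestion window doubles every round trip, so the number of round trips needed to deliver an object depends on its size and the initial window:

    // Rough estimate of round trips to deliver an object under loss-free
    // TCP slow start. Illustrative only; real TCP stacks are more complex.
    const MSS = 1460; // typical bytes per TCP segment

    function slowStartRoundTrips(objectBytes: number, initialWindowSegments: number): number {
      const segments = Math.ceil(objectBytes / MSS);
      // After n round trips, slow start has delivered initialWindow * (2^n - 1) segments.
      return Math.ceil(Math.log2(segments / initialWindowSegments + 1));
    }

    // A 100 KB response over a 100 ms round-trip path:
    console.log(slowStartRoundTrips(100 * 1024, 4));  // 5 round trips, roughly 500 ms
    console.log(slowStartRoundTrips(100 * 1024, 10)); // 4 round trips, roughly 400 ms

Every round trip an optimization removes is a full round-trip time saved, which is why middle-mile techniques like larger effective windows and persistent connections pay off.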

 

Front End Acceleration

 

Nobody can click, browse, search, or transact on your site until all of its components have loaded.

 

But components do not necessarily load in the order that the end user cares about. A browser does not know to load the “buy now” button before all scripts have run. JavaScript deserves special attention: your page may behave differently when some script loading is deferred. For example, if a script registers handlers for user actions such as onclick or onkeypress, those actions will not be handled until the page is completely rendered.
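As a minimal, hypothetical sketch of the developer-side precaution (the 'buy-now' element id and the handler body are stand-ins, not code from the book), handlers can be attached as soon as the DOM is parsed so that deferred script loading does not leave early clicks unhandled:

    // Attach event handlers once the DOM is ready, so deferred script
    // loading does not leave early user actions unhandled.
    function whenReady(fn: () => void): void {
      if (document.readyState !== 'loading') {
        fn(); // the DOM has already been parsed
      } else {
        document.addEventListener('DOMContentLoaded', fn);
      }
    }

    whenReady(() => {
      const buyNow = document.getElementById('buy-now'); // stand-in element id
      buyNow?.addEventListener('click', () => {
        // handle the purchase action here
      });
    });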

 

Think of it this way: If your audience could prioritize the loading of objects on your page, what would they put first? Probably the stuff they care about the most, like product images or headlines.

 

But even if your development team is already optimizing code to load content based on user preferences, the number and evolution of browsers on the market make that a self-defeating exercise. Shaving milliseconds off load times for the latest version of one browser may add milliseconds on the next release. There is no cookie-cutter model to apply; some optimizations can actually slow down your performance.

 

Front end acceleration (FEA), also known as front end optimization (FEO), analyzes and optimizes your code to load content more intelligently, based on user expectations. This is done with a variety of techniques that can be applied to your content as well as any third party content on your site, like advertisements.

By placing the responsibility for staying ahead of the most recent trends and changes in the browser market on your CDN, you free your development team to focus on primary business goals like developing new features and functionality. A mobile banking app developer, for instance, should be able to improve product usability without worrying whether downloading more client-side scripts will reduce application performance or introduce more latency.

 

Just as there is no cookie-cutter approach to optimizing code for every browser, compressing for all browser types is difficult. There are many browsers that are fairly old and do not support compression (for example, some variants of IE 6).

 

Your CDN should be capable of compressing content for the browsers your audience uses to free up bandwidth and storage; compression generally reduces the file size by about 70%. Anything text based can be compressed including XML and JSON; most websites compress their HTML documents as well. Compression reduces response times by reducing the size of the payload. It is also worthwhile to compress scripts and stylesheets, but many content providers miss this opportunity.

 

Currently, Gzip is the most popular and effective compression method. Approximately 90% of today’s Internet traffic travels through browsers that claim to support the Gzip approach. Note: Gzip compression should not be applied to image and PDF files because they are already compressed. Trying to apply it in these cases not only wastes CPU, but can also potentially increase file sizes. Still, compressing as many file types as possible is a simple way to reduce page weight and accelerate the user experience.
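For illustration, here is a minimal sketch of the decision a server (or CDN edge) makes, assuming a Node.js origin; in practice your CDN applies this logic for you:

    // Minimal sketch (Node.js + zlib): gzip text responses when the client
    // advertises support, and skip already-compressed types like images.
    import * as http from 'http';
    import * as zlib from 'zlib';

    const COMPRESSIBLE = ['text/html', 'text/css', 'application/javascript',
                          'application/json', 'application/xml'];

    http.createServer((req, res) => {
      const body = '<html><body>Hello, world</body></html>';
      const contentType = 'text/html';
      const acceptEncoding = String(req.headers['accept-encoding'] || '');

      if (/\bgzip\b/.test(acceptEncoding) && COMPRESSIBLE.includes(contentType)) {
        res.writeHead(200, { 'Content-Type': contentType, 'Content-Encoding': 'gzip' });
        res.end(zlib.gzipSync(body)); // often around 70% smaller for text
      } else {
        res.writeHead(200, { 'Content-Type': contentType });
        res.end(body);
      }
    }).listen(8080);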

 

In Sum

 

Traditional CDN services alone are just not enough to improve performance in today’s digital landscape. New techniques have emerged, and R&D teams are working as you read this to optimize delivery for the next generation of devices, browsers, and user behaviors.

 

As anyone who has been using the Internet for more than a few days knows, it is a living network that does not ever stop evolving.

 

Before web pages grew from an average of 100 KB in 2005 to 1,800 KB in 2014, in-house performance optimizations were enough for most organizations. Now, before things get any more complex, stop and think about what it will take to create a great digital experience for your audience.

Do you know what it takes?

 

If not, you can catch up on the #OptimizeDigital blog series or read the book Optimizing the Digital Experience.

 

In this blog series, we have covered the bases in terms of the optimizations required for content providers like you to create the best user experience. In the next and final post, we will talk about making the case for performance to others in your organization. Thanks for reading.

 

Want to chat? Drop me a note or join the #OptimizeDigital conversation on Twitter.

 

Want to learn more about web performance? Download the book Optimizing the Digital Experience for an in-depth look at the topics presented in this blog series.

[Book cover: Optimizing the Digital Experience (click to download)]

 

Tweet this post: Currently reading: Are You Getting the Most Out of Your CDN?

Originally published June 30, 2014

 

GOOOOOAAAAALLLLLLLLL!!!!

 

[Image: World Cup 2014]

 

It’s a battle cry that’s resonating not just from television sets around the world but from smartphones and tablets as well. More and more people are going online to get their World Cup fix while standing in line, waiting at a traffic light, or huddling over a laptop at their desk. What started with the 2012 Summer Olympics has only gained momentum—more people watching more video online. The World Cup, though, is blowing the Olympics out of the proverbial water! A global effort of multiple broadcasters and CDNs is delivering hundreds of terabytes of data each match to people around the world.

 

So what’s really happening? As of this posting, our network has seen massive utilization, peaking at well over 3.6 Tbps (terabits per second) during match play. Hundreds of thousands of concurrent users are tuning in online from iOS and Android devices and game consoles. In fact, during the United States vs. Germany final qualifying game, over 750,000 users connected at the same time for a flawless game-time experience. And one of our partners single-handedly hit close to 1 Tbps as the U.S. advanced into the knockout round!

 

Of course, this doesn’t just happen by itself. In fact, when you pull back the curtain, what you get is a massive engine of people and technology dedicated each match to ensuring the best possible end-user experiences. So what does it take exactly to deliver a World Cup match online? Check out the numbers:

 

  • Tens of thousands of servers to accept connections from end-user devices and deliver the video to them all around the globe
  • Hundreds of man hours of dedicated engineering and support resources during each match (these are literally people sitting behind monitors in our network operations center)
  • Software to capture analytics and provide real-time feedback on who’s watching what, when, where, and how.

 

What makes it all possible? That would be the Limelight Network—a massive global private network supporting over 11 Tbps of egress capacity, with 80+ locations in over 40 countries. It’s the private nature of the network that sets it apart from competitors and enables us to deliver flawlessly, for example, to 750,000 concurrent users. No Internet congestion with which to contend!

 

Of course, we are still only at the beginning of the World Cup. With the Round of 16 just underway, the elimination matches promise to yield even more traffic and even more concurrent users. And that’s the really telling story behind this year’s World Cup: it’s a game changer for the way we consume media, a snowball rolling downhill that promises to transform the landscape of rich media. But it also signals something else: the need for more capacity, more software, and more expertise to handle the World Cup of the future…something that we are tirelessly focused on providing to customers around the world.

Originally published June 11, 2014

 

This is blog post #6 in our blog series #OptimizeDigital, where we explore themes based on our newly released book Optimizing the Digital Experience (available for download here). You can catch up on the previous post here.

 

Tweet this post: Currently reading: Instant Global Infrastructure? What a CDN Does for You.

 

Though Internet service providers (ISPs) and mobile networks provide increasingly fast connection speeds, a host of variables along every mile of the content delivery path means that those speeds are not consistent in the real world. And even if they were, other factors—including availability, scalability, and user device—could hurt your online performance.

 

Fortunately, there are plenty of optimizations to consider: caching, strategic storage, dynamic content acceleration, and front-end acceleration… to name a few.

Have you performed a web content audit? A content audit is a quick and easy way to map your content to these different optimizations, identifying which ones are right for you. (The previous post in the #OptimizeDigital series is all about how to conduct your audit. Read it here.)

 

In this post, you will learn about optimizations for:

 

  • Delivering static content
  • Large and small objects
  • Rich media (like video)

 

We begin with an overview of how content delivery networks (CDNs) work. A CDN connects two points on the content delivery path: your origin (the servers where you store your content), and the edge of the networks that your users rely on to access content in the “last mile.” A CDN connects your content to user access networks either by an optimized public Internet path, or across a privately owned network.

 

Recall that this middle mile is full of performance-killing threats. A CDN routes your content across an optimized path to its own servers, which are located right next to your end user access networks. CDN servers are located in clusters referred to as points of presence (POPs). POPs interconnect with one another across the public Internet or a private network; they also connect with last-mile access networks in a specific region or around the world, depending on the scale of the CDN. Private networks (like the one we operate here at Limelight) help you avoid the hazardous public Internet. Basically, a CDN can extend your infrastructure as far and wide as required to reach your audience on any device.

 

Static Object Caching

 

For large and small object delivery

 

CDNs offer static object caching: storing copies of your most requested static content in cache servers across POPs near end users, refreshing it as needed, and ensuring availability through replication and backup. (Advanced CDNs, which we will discuss in the next blog post, can offer more than just static object caching, but consider it the point of entry.)
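For a feel of what that requires on your side, here is a hedged sketch of an origin response that a CDN can cache for a long time (the Node.js server, the versioned filename, and the one-year TTL are illustrative assumptions, not required values):

    // Illustrative Node.js origin: a versioned static asset served with a
    // long TTL, so edge caches can hold it almost indefinitely.
    import * as http from 'http';

    http.createServer((req, res) => {
      if (req.url === '/img/logo.v42.png') { // hypothetical versioned path
        res.writeHead(200, {
          'Content-Type': 'image/png',
          'Cache-Control': 'public, max-age=31536000', // cacheable for one year
        });
        res.end(); // the image bytes would be streamed here
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);

Versioning the filename (logo.v42.png) lets you publish a new logo immediately, without waiting for cached copies to expire.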

 

But the real value of caching is in how much of your content a CDN can retain at a global scale. Since static content, like the logo on your website, does not often change, it can stay in the CDN’s cache servers almost indefinitely. If the CDN has a high cache hit ratio, meaning that the content is available most of the time it is requested, then your content is within immediate reach 95%+ of the time. Unfortunately, some CDNs have a low cache hit ratio—so when a user requests your content in cache and it is not available, the request travels through to the origin until the content is found.

 

Cache hit ratio is a true measure of caching performance and depends heavily on the CDN’s architecture. Because of our densely architected metro POPs, we at Limelight Networks are able to maintain a cache hit ratio above 98% (one of the highest in the industry).
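The arithmetic behind that number matters more than it may appear: the miss rate is what still reaches your origin, so a few percentage points translate into millions of requests at scale. A quick illustration with assumed traffic figures:

    // Cache hit ratio = hits / (hits + misses); the miss rate is what
    // still reaches your origin. Traffic figures are illustrative.
    function originRequestsPerDay(totalRequests: number, hitRatio: number): number {
      return totalRequests * (1 - hitRatio);
    }

    console.log(originRequestsPerDay(100_000_000, 0.95)); // 5,000,000 requests reach the origin
    console.log(originRequestsPerDay(100_000_000, 0.98)); // 2,000,000 requests: 60% less origin load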

 

Static object caching with a high cache hit ratio gives you two important advantages. First, you avoid so-called round trips, or multiple requests traveling back to your origin for content, which conserves valuable bandwidth. Second, content is right there when a user requests it, reducing the latency associated with the distance between your content and the end user requesting it.

 

BOOM—faster website!

 

Both the small and large static objects that make up your website can be cached, from text to rich media on demand. “Whole site delivery” is a term used to refer to the delivery of both small and large objects as well as the containers of those files such as HTML. Delivering whole sites is more challenging because they present a heterogeneous mix of cacheable and non-cacheable content. The main HTML file could be non-cacheable, but the various components that make up a page (scripts, images, stylesheets) could be cacheable. This requires intelligent cache management and header parsing rules at the edge to differentiate among complex content needs.
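To give a feel for what such edge rules look like, here is a simplified, hypothetical sketch based on standard HTTP headers (real edge logic honors many more directives, plus CDN-specific configuration):

    // Simplified sketch of an edge-side cacheability decision based on
    // standard HTTP response headers. Real CDN logic is far richer.
    function isCacheable(headers: Record<string, string>): boolean {
      const cacheControl = (headers['cache-control'] || '').toLowerCase();
      if (/\b(no-store|private)\b/.test(cacheControl)) return false; // per-user content
      if (/\bmax-age=\d+/.test(cacheControl)) return true;           // explicit TTL
      return Boolean(headers['expires'] || headers['last-modified']); // heuristic fallback
    }

    isCacheable({ 'cache-control': 'private, no-store' });     // false: a personalized HTML page
    isCacheable({ 'cache-control': 'public, max-age=86400' }); // true: a logo or stylesheet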

 

Cloud Origin

 

For better delivery of media, software, and more

 

Recall that before a CDN can cache your content, the content has to be retrieved from your origin. If your origin is in Las Vegas and your users are in London, then of course you want to cache content in London. But the content has to get there from Las Vegas in the first place. Any time there is a cache miss—meaning your content is unavailable in cache for any reason, such as unpopular content that has fallen out of cache—the request travels back to Las Vegas. That distance comes with a performance penalty, especially for large files like rich media and video.

 

When it comes to performance, where you store your content matters.

 

While many CDNs offer traditional storage in their multiple POPs, cloud origin provides something different. Limelight’s purpose-built cloud storage, optimized for use as a cloud origin, lets content providers take advantage of our global content delivery network—storing origin content in strategically located POPs, ingesting data locally, and replicating it worldwide.

 

Limelight’s cloud storage infrastructure can ingest content from you locally through the nearest POP. And when origin data is requested (like the first byte of your website, or a repeat request on a cache miss), the request does not need to travel from London all the way back to your origin in Las Vegas: the content has been pre-positioned in the London POP, perhaps even in the same rack as the server where it will be cached for future requests, cutting down on round trip times and latency.
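Some round-trip arithmetic shows why pre-positioning matters. The RTT figures below are assumptions chosen for the Las Vegas/London example, not measurements:

    // Back-of-the-envelope cache-miss penalty for the example above.
    const rttToDistantOriginMs = 140; // London user to a Las Vegas origin (assumed)
    const rttWithinLondonPopMs = 2;   // origin content pre-positioned in the POP (assumed)
    const roundTripsNeeded = 3;       // e.g., TCP handshake plus request and response

    console.log(roundTripsNeeded * rttToDistantOriginMs); // 420 ms back to Las Vegas
    console.log(roundTripsNeeded * rttWithinLondonPopMs); // 6 ms from cloud origin in the same POP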

 

Like caching, cloud storage optimized for cloud origin protects your origin from repeated requests while also improving availability and performance. Whether you are distributing large video files or releasing new software, storage optimized for cloud origin is a key to better performance.

 

In Conclusion

 

A CDN provides access to global storage, delivery infrastructure, and optimization techniques that are otherwise inaccessible to most organizations. Moreover, a CDN overcomes a major delivery challenge: the inability to access or control the vast network that transports your content between you and your end users. Static object caching is a good solution to small and large object delivery challenges, including rich media; cloud storage that is optimized for cloud origin can further improve your user’s experience on a cache miss.

 

But there is more! Chapter 5 of Optimizing the Digital Experience goes into depth about how CDNs optimize your performance. And in the next blog post in this series, you will learn about how to optimize dynamic (non-cacheable) content, what front-end acceleration can do for user engagement, and more.

Until then, find me on Twitter, where the #OptimizeDigital conversation is ongoing, or drop me a note.

 

Want to learn more about web performance? Download the book Optimizing the Digital Experience for an in-depth look at the topics presented in this blog series.

 

[Book cover: Optimizing the Digital Experience (click to download)]

 

Tweet this post: Currently reading: Instant Global Infrastructure? What a CDN Does for You.

Originally published June 4, 2014

 

On June 4, Limelight Networks Solutions Engineer Adam Copeland presented the webinar 5 Things the Fastest Websites Did First (And You Can Do, Too)!

 

 

View it on demand to discover how leading organizations approach web performance.

 

Fair warning: This is not your usual web performance webinar. It’s all about the strategy you need to build before you start minifying, compressing, and prioritizing scripts.

 

Tweet this post: Webinar on demand: 5 Things the Fastest Websites Did First

 

[Webinar screenshot: click to view the webinar]

 

Want to know more about web performance as a strategy? Download the book Delivering the Digital Experience: A Step by Step Guide to High Performing Websites and Web Applications and join the #OptimizeDigital conversation on Twitter.

 

Tweet this post: Webinar on demand: 5 Things the Fastest Websites Did First

Originally published May 22, 2014



This year’s one-day show about content delivery and performance was all about the end user. Quality of Service (QoS) and Quality of Experience (QoE) took center stage, as real-user monitoring (RUM) and transparent caching seemed to be on everybody’s lips.

 

When it came to RUM, there was no better presentation than the session featuring Dan Rayburn (EVP, Streaming Media) and Pete Mastin (Market Strategy and Product Evangelist, Cedexis). This presentation on best practices in multi-CDN delivery stressed the value of RUM data in improving quality of service. For companies seeking to segment traffic based on end user performance, the Cedexis Radar community provides crowd-sourced data from 350 million global end users per day, and is considered one of the most accurate sources of CDN performance data on the market.

 

Our own performance benchmarking efforts validate these conclusions. After evaluating our dynamic content acceleration with internal and external synthetic testing (ESG Lab validation: http://resources.limelight.com/rs/limelight/images/ESGLabValidation-LLNWOrchesteratePerformance.pdf), we looked to Cedexis Radar data for validation. The results confirmed that our performance exceeded the competition by an average of 15% (details: http://blog.limelight.com/2014/02/real-user-data-reveals-startling-results-on-the-performance-of-dynamic-site-acceleration-services-limelight-is-1/).

 

The takeaway here? RUM data provides more transparency to CDN customers that want to optimize performance based on real-world KPIs beyond raw network speed. For more on RUM, read our recent post “Real Users: A Common Web Performance Blind Spot.”

 

As for transparent caching, it all comes down to QoS and QoE. In the quest to optimize Quality of Experience, content providers increasingly depend on transparent caching to accelerate streaming and downloads of their popular content by placing it within user access networks. That means they might put a transparent caching box in Time Warner’s network, for example. On the provider side, Qwilt and PeerApp presented on the benefits of transparent caching. And both Netflix and Google promoted placing their caches within operator networks to give users a better experience. Of course, this makes total sense—the closer you can get the content to the end user, the better the experience should be. And if you got content any closer to end users than transparent caching, you’d probably put a server in their lap!

 

Google and Netflix are large traffic pushers today, but we don’t know what the world will look like tomorrow, let alone in two years. (Just look at how fast Twitch emerged as a major player in live streaming.) Other content providers are looking to content delivery networks (CDNs) to implement the transparent caching strategies that will lend a competitive advantage as demand for their content grows.

 

Regardless of the content source, transparent caching was generally regarded by show attendees as a triple win. Network operators increase capacity with drastically less capex, and can control the devices placed in their network for better design, monitoring, and management practices. Content providers control the quality of experience they provide over the short haul. End users, of course, get what they want: better video streams.

 

What did you hear at the show? How do you think these strategies impact QoS and QoE? Drop me a note at jt@llnw.com, comment below, or catch me on twitter @_jasonthibeault.

Originally published May 20, 2014

 

This is blog post #5 in our blog series #OptimizeDigital, where we explore themes based on our newly released book Optimizing the Digital Experience (available for download here). You can catch up on the previous post here.

Tweet this post: Currently reading: How to audit content for better web performance

 

Let’s say you are a web performance rock star. You “get” performance, you know why it can be less than stellar, you have defined KPIs, and you put a monitoring system in place to track progress. (Hint for those of you who are not yet web performance rock stars: click those links for a crash-course-by-blog or read chapters 1-3 of Optimizing the Digital Experience.)

 

Congratulations! It’s optimization time. Do you know what your optimization strategy will look like? What kind of technology ecosystem is required to support your goals?

 

This simple exercise can pave the way.

 

For the sake of example, imagine that your objective is to improve speed and end user experience to drive online sales of a new product. Your website visitors are complaining about a slow experience. Bounce rates are high. But your tests show that your product page is loading quickly.

 

If your site isn’t slow, why are users so quick to leave?

 

Digging a little deeper into the analytics, you see that your video is buffering. Low quality network connections in your key market are to blame. As a result of poor video performance, your website’s time to interact (TTI)—the milliseconds users must wait to access key content, like product videos—is high. The page loads completely but the player window is blank, so visitors perceive your entire site as slow. And they leave.

 

Here is where a content audit comes in handy. It isn’t complicated and it can be done fairly quickly. Simply correlating content type with performance issues takes the guesswork out of finding the most effective optimization for your use case.

 

First, round up the content you want to optimize. Then, use this framework to guide you.

 

[Figure: web performance optimization audit framework]

 

The output of your audit can be as simple as a spreadsheet or summary report. Remember, the objective is to understand how your content type impacts performance so you can identify the most effective solution.

 

If static content is being bottlenecked in the middle mile between your origin server and your global audience, for example, then you may need to cache files closer to end users. Browser diversity in the last mile, on the other hand, may call for an acceleration solution that speeds up small-object loading on the front end. And your product video—the one that was buffering in the example above, even though the rest of your page was performing well—requires adaptive bitrate streaming.

Once you audit content, how do you optimize it?

 

Next week’s post presents a closer look at how to use the results of your content audit to your performance advantage. If you are an overachiever (or just really impatient) you can read ahead to chapters 5 and 6 of Optimizing the Digital Experience to find out how different performance solutions map to different content types.

And be sure to register for our June 4 webinar with Limelight performance expert and Solutions Engineer Adam Copeland. Rumor has it that he’s going to reveal what makes the fastest websites in the world so fast.

 

Meanwhile, tweet your questions and thoughts on how to #OptimizeDigital.

 

Questions? Drop me a note or find Limelight on Twitter at @LLNW.

 

Want to learn more about web performance? Download the book Optimizing the Digital Experience for an in-depth look at the topics presented in this blog series.

[Book cover: Optimizing the Digital Experience (click to download)]

Originally published May 13, 2014

 

This is blog post #4 in our blog series #OptimizeDigital, where we explore themes based on our newly released book Optimizing the Digital Experience (available for download here). You can catch up on the previous post here.

 

Tweet this post: Currently reading: Web performance blind spots and how to measure them

 

Whether you are fine-tuning your web optimization strategy or just curious about how fast your site loads, you need an accurate answer to the question, “How is my online performance?”

 

One common way to measure performance is to log on to the company website or portal and take note of the response time.

 

For many IT managers who do this, performance seems great. That’s why it is such a shock when you get the angry call from an executive on a business trip across the world demanding to know why the corporate website won’t load in Country X.

Gulp.

[Photo: Flickr/Marvin Lee]

In truth, you cannot really experience your website or app from corporate data centers the same way your audience experiences it. Of course delivery looks blazing fast when you are measuring performance on the server side, from within the network where content is served!

 

But for your end users, it can be a different story. Objects may be loading slowly in the browser because they are not optimized. Or the wrong things are loading in the wrong order. If nothing else, the latency resulting from the real-world distance between your servers and your audience adds to load times; even a few hundred milliseconds is enough to push user wait times into the noticeably unacceptable range.

 

The user experience is a surprisingly common blind spot when it comes to web performance testing. That vast, un-optimized stretch of public Internet between you and your end users is full of bottlenecks that can sabotage performance, and should be taken into account when looking at how your website or app is really performing. The best option to measure performance is with real user monitoring (RUM). RUM monitors actual user interaction with your website or application.

 

RUM functions by injecting a small piece of code, typically JavaScript, into the digital content you want to analyze (your site, for example). The code captures statistics like available bandwidth, CPU usage, time to action, and similar trends. It records and relays download times and task completion times and flags certain events if they are not within the normal threshold. With RUM, any page or transaction can be analyzed by geographical location, IP blocks, and regions.
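A minimal sketch of such a snippet, using the Navigation Timing API available in browsers of this era (the '/rum' collector endpoint and the metric names are placeholders):

    // Minimal RUM beacon using the Navigation Timing API. The '/rum'
    // endpoint stands in for your analytics collector.
    window.addEventListener('load', () => {
      setTimeout(() => { // wait one tick so loadEventEnd is populated
        const t = performance.timing;
        const metrics = {
          dns:      t.domainLookupEnd - t.domainLookupStart,
          connect:  t.connectEnd - t.connectStart,
          ttfb:     t.responseStart - t.navigationStart, // time to first byte
          pageLoad: t.loadEventEnd - t.navigationStart,
          page:     location.pathname,
        };
        const xhr = new XMLHttpRequest();
        xhr.open('POST', '/rum');
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.send(JSON.stringify(metrics));
      }, 0);
    });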

RUM tools are available for free or on a paid basis.

 

 

If you begin to monitor your performance with RUM, remember to combine metrics from every category of KPI: speed, availability, scalability, multi-device support, and end user experience. After all, improving performance is not just about making your website faster. It’s about enabling your end users to perform the tasks they need to perform to support your business objectives.

 

When you begin testing, you will probably uncover performance issues. Stick with us to find out how you can address them.

 

Meanwhile, the conversation goes on in Twitterland: #OptimizeDigital.

 

Questions? Drop me a note or find me on Twitter at @clarekirlin.

 

Want to learn more about web performance? Download the book Optimizing the Digital Experience for an in-depth look at the topics presented in this blog series.

[Book cover: Optimizing the Digital Experience (click to download)]

Originally published April 22, 2014

 

Welcome to the second post in our blog series #OptimizeDigital, where we explore themes based on our newly released ebook Optimizing the Digital Experience (available for download here). Feel free to join the ongoing discussion on Twitter® by using the hashtag #OptimizeDigital. And for those of you who missed last week’s post, catch up here.

 

There is no sense in sugarcoating this: Creating a great digital experience for your audience is really hard!

 

But the rewards of getting it right are yours to reap, provided that you effectively locate and remove the web performance bottlenecks that stand in your way. In this post, I will introduce you to those bottlenecks so you know what you are up against in the battle for a better digital experience.

 

Network Segments

 

The network that delivers your digital content is often discussed in three segments: first mile, middle mile, and last mile. These segments connect audiences from their devices to your valuable content. The quality of their experience results from the performance along all three segments.

 

[Diagram: the three network segments (first mile, middle mile, last mile)]

 

Each segment presents unique performance challenges. Let’s take a closer look.

 

Last mile: Beginning with your audience, the last mile is where content and requests travel between your user’s access network and their device. This is the zone most prone to performance issues that can go undetected; most content providers have very little insight into what happens in the last mile. Here is what to watch out for.

 

  1. Latency: This is the biggest performance killer in the last mile. Even as broadband penetration increases globally, end users increasingly access your content through multiple devices and over wireless connections. Wireless networks can introduce seconds of latency. In the face of high latency, Internet protocols reduce throughput—a vicious cycle. (A short sketch after this list illustrates the effect.)
  2. Congestion: Even with high bandwidth, traffic can exceed capacity in certain situations. Requests from hundreds of thousands or even millions of end users may all be converging on the same 1-gigabit-per-second link exiting a single region. Flash crowds converging on viral content constrain access to your files.
  3. Browser diversity: Trying to keep up with every type of browser is a losing proposition, even for the most talented developers. In fact, the mechanisms that your developers are putting in place to optimize performance for browsers today may actually hurt performance when the next version of the browser is released.
  4. Content complexity: Web 1.0 and Web 2.0 users guided their own digital experiences. Now content guides the user, creating personalized journeys based on individual profiles. This involves a huge amount of dynamic content, which in turn means more round trip requests traveling across the Internet, to more servers, from more users than ever before.
  5. Content structure: A simple webpage can contain dozens of scripts and styling libraries, which introduce round trip delays and have different execution times. They load in the order they are downloaded, forcing end users to wait before interacting with your content. If a high-resolution image loads first, for example, it rapidly consumes browser resources and bandwidth.
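Here is the sketch promised in item 1. A single TCP connection can never move data faster than its window size divided by the round-trip time, so added latency directly caps throughput (the numbers are illustrative):

    // Upper bound on single-connection TCP throughput:
    // throughput <= window size / round-trip time.
    function maxThroughputMbps(windowBytes: number, rttMs: number): number {
      return (windowBytes * 8) / (rttMs / 1000) / 1e6;
    }

    console.log(maxThroughputMbps(64 * 1024, 10));  // ~52 Mbps on a 10 ms wired path
    console.log(maxThroughputMbps(64 * 1024, 200)); // ~2.6 Mbps on a 200 ms wireless path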

 

Middle mile: This is the distance between your end user’s access network and the server where your content is stored. Whereas ISPs and mobile networks have an incentive to improve performance across the last mile of the delivery path, the middle mile is a different story. It can be a vast, un-optimized, and ungovernable stretch of public Internet over which you have absolutely no control. (It is also where HTTP chattiness and TCP latency pile up fast.) Every request that travels back to the origin must cross the middle mile. Look out!

 

  1. Latency: It’s not just a last mile problem. Requests and data moving from end user access networks to your origin and back can travel through dozens of networks in just one round trip. TCP only allows those requests to be transferred incrementally, and limits the amount of data transferred during each request or TCP window. Any amount of packet loss requires retransmission, which further decreases throughput.
  2. Lack of control over network types: The biggest portion of the Internet resides between your ISP and your end user’s access network. Many heterogeneous networks, Border Gateway Protocol (BGP) sessions, public routes, latency, packet loss and variance in quality of service (QoS) affect your content along the way.
  3. Content rules: Caching behavior can easily be manipulated by a malicious proxy sitting in the middle mile, causing performance issues for HTTP traffic. And poorly configured websites and applications may have many fragmented components, including scripts, stylesheets, and images, mostly in an HTML container. The way browsers request those objects, sequentially and in parallel, adds many round trips to render a webpage. Total latency thus comes down to two main factors: the latency between two nodes, and the number of round trips required. (See the short example after this list.)
  4. Network equipment configuration: Fewer hops do not necessarily mean that content is transferred more quickly. Two hops across a path that has high latency and packet loss are inferior to three hops across a faster route. Depending on how equipment is configured, content may not be moving efficiently across the delivery path.
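And here is the short example promised in item 3, with illustrative numbers:

    // Total network wait ≈ round-trip latency × number of round trips.
    const rttMs = 80;      // one round trip between user and origin (assumed)
    const roundTrips = 25; // DNS, TCP setup, then sequential object fetches
    console.log(rttMs * roundTrips); // 2,000 ms of network wait before the page renders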

 

First mile: It begins at your origin servers and extends to the point where you give up control to another party (such as a transit provider or CDN). Many organizations spend considerable capital on the development and maintenance of IT resources at the origin: custom developed applications, servers, data centers and networking equipment, to name a few. And every one of them is performance sensitive. Be on alert for these top performance killers.

 

  1. Complex technology ecosystem: Many IT resources are custom developed applications built on a mix of varied technologies. This not only adds complexity to your design and architecture, but protocol differences can also lead to performance degradation. Lack of software integration can mean that information is not shared among sales and marketing automation, CRM, content management, and other platforms—limiting insight into performance issues and creating a disjointed end user experience.
  2. Network device resource limitations: Servers, switches, and routing equipment have limited computing memory; if they are not optimally load balanced, they can quickly become overloaded by even a simple web application. Reducing the payload size forces a tradeoff between server consumption and performance.
  3. Web server resource limitations: Most webpages and applications hosted on your equipment have transaction-completion-time boundaries. Assets like scripts, images, and stylesheets have to be fetched and served over the wire in milliseconds. The cycle repeats millions of times per day (billions for popular applications). But when RAM and CPU capacity are limited, a database query, disk read/write, or cached response could add up to several seconds’ difference in response times.
  4. Content rules: A server has to set content rules on caching and how much time an object can be retained in the browser before sending a request to refresh it. Each refresh request adds a read/write load to all of the components of your origin: router, storage disk, bandwidth, and CPU.
  5. SSL processing: SSL transactions consume more server resources than normal transactions, and they are time sensitive. Using the same CPU for concurrent SSL transactions introduces latency in the overall system, due to the extra steps required in authentication, certificate handling, and digital handshakes.

 

The next question is obvious: With so many factors impacting performance and so much of what happens to your content seemingly out of your control, how do you go about formulating a strategy that supports better performance?

 

Identify your KPIs.

 

By the way, if you guessed that our next blog post is all about identifying KPIs, then you were correct. And if you want to talk web performance between now and then, join the #OptimizeDigital discussion with @LLNW.

 

Questions? Drop me a note or find me on Twitter at @clarekirlin.

 

Want to learn more about web performance? Download the book Optimizing the Digital Experience for an in-depth look at the topics presented in this blog series.

[Book cover: Optimizing the Digital Experience]

Originally published April 28, 2014

 

Welcome to post #3 in our blog series #OptimizeDigital, where we explore themes based on our newly released book Optimizing the Digital Experience (available for download here). The discussion is live on Twitter® (#OptimizeDigital) and you can catch the previous post in our series here.

 

Achieving great online performance is hard. If you’ve been following along with this blog series, you understand why.

 

If great performance would have a positive impact on your business (it would!), then you’ll need to define a starting point: your performance baseline. Your baseline allows you to monitor progress toward your goal of optimizing the digital experiences that your business delivers.

 

But ask ten people in your organization what “better web performance” means, and you’ll get at least ten different answers. Reduced site load times. Increased page speed. Improved conversion rates. Lower shopping cart abandonment rates. Faster rendering on mobile. And on and on.

 

All of these answers illustrate one critical point: performance is about more than speed alone. Choosing a broad set of performance KPIs to track against your baseline gives you more than just a faster website. It gives you a true digital experience optimization strategy.

 

Generally speaking, performance related KPIs can be grouped into five categories. (Chapter 3 of Optimizing the Digital Experience provides an in-depth list of specific KPIs that fall into each of these categories.)

 

  1. Speed: Yes, speed is the most commonly used indicator of performance. The primary ways to measure speed are a system’s responsiveness to a request, and the end user’s ability to interact with content once that response is completed.
  2. Availability: Availability is a given; your content must be consistently available and secure at all times. If visitors go to your website and cannot find the content they want, then they leave… possibly forever. There are too many alternatives out there. In this case, speed has little to do with it. You need to make certain that your website, content servers, and network links are always up and running.
  3. Scalability: Your ability to accommodate changing needs and traffic patterns is highly correlated to the quality of your users’ experiences over time. Distributing or removing content for audiences of highly variable behavior patterns, sizes, locations, and access devices demands scalability.
  4. Multi Device Support: It is not just the number of users accessing your content that creates performance challenges, but the proliferation of different types of devices, with different browsers and operating systems. If your audience is using multiple devices or platforms, remember to measure performance across each one. You will apply KPIs in the categories of speed, availability, and scalability to every device your audience uses.
  5. End User Experience: Ultimately, you work to improve performance in the interest of the end user. Machine metrics such as error rate or site load time are valuable, but they exist only as indicators of how easily your audience can locate and interact with your content. KPIs associated with the end user experience signal how usable, relevant, and valuable your content is.

 

Rather than just speed, performance is a combination of KPIs from all five of these categories: speed, availability, scalability, multi device support, and end user experience. The specific metrics you define in each category will be unique to your business, and they should reflect what you expect to achieve from your web performance optimization efforts.

 

Once you establish baseline KPIs, you will continually monitor and evaluate progress, diagnosing and re-diagnosing performance issues. In the next two posts, we will reveal the most effective way to measure your progress across these five KPI categories and how to build an infrastructure that supports it.

Until then, find @LLNW on Twitter where the #OptimizeDigital conversation never stops.

 

Questions? Drop me a note or find me on Twitter at @clarekirlin.

 

Want to learn more about web performance? Download the book Optimizing the Digital Experience for an in-depth look at the topics presented in this blog series.

[Book cover: Optimizing the Digital Experience]

Originally posted on April 15, 2014

 

Welcome to our blog series #OptimizeDigital. In this series we will examine online performance from every angle: What does performance mean to your business? Why is delivering consistent performance so hard? What can you do to ensure the performance of your websites and web applications meets your business needs?

The content in this series is based on our newly released book, Optimizing the Digital Experience (available for download here). You can also join the ongoing discussion on Twitter® by using the hashtag #OptimizeDigital.

 

All right. Let’s get started.

 

What Drives Your Online Performance?

 

When it comes to creating great digital experiences, there are two sides to the coin: creating/managing your content, and delivering/optimizing your content.

Many organizations focus their efforts on content creation and management. They strive to make website content more interactive, to integrate online video, and to optimize keywords. These initiatives do help improve search rankings; you know this because you are likely doing many of them right now.

 

That’s great, but did you realize that delivering and optimizing the user experience with that content is just as important? In fact, web responsiveness could be the deciding factor in your organization’s ability to achieve business results. Performance can make or break your ability to hit revenue targets or increase conversion rates.

 

Delivery and optimization of the user experience can also be the most difficult factors to solve for in the digital equation.

Ensuring an awesome and effective digital experience starts with a thorough understanding of why delivering and optimizing digital content is so hard. There are several answers to that question.

 

1. Performance is complex.

 

Performance is often discussed in terms of speed: How quickly did the system respond? How fast did my image render? How long did it take to download that file? And speed is an important aspect of performance, perhaps the most important. But issues like availability, multi device delivery, and security come into play too.

2. Performance is notoriously difficult to measure.

 

So you want to improve performance? That means locating the points of failure in a complex digital landscape. Once you find out what’s wrong, how will you measure progress? Response time? Average revenue per user? Time onsite? Choosing metrics that matter to your business can be daunting.

 

3. Improving performance requires new technologies and partnerships.

 

Business owners and marketers need to improve user experience when engaging digital content. The fulfillment of that need often falls to the technical team, which must now think about servicing both external and internal audiences in an entirely new way.

 

4. The public Internet is slow and invisible.

 

There is a vast, un-optimized stretch of public Internet between your origin servers and the access networks that connect you to your end users. Other than compressing files and optimizing scripts, how much control do you really have? Can you see clear across the path that your content takes to reach end users? Can you protect your content from security vulnerabilities along the way?

 

5. BOOM! Dynamic content is exploding.

 

Four out of five CMOs think that custom content is the future of marketing. Websites and apps have evolved to the point that much of what a visitor experiences is dynamically created and presented on the fly—just for them. Multiple technologies are required to support all of these dynamic websites and apps. Can you possibly master them all?

 

Building, managing, delivering, and optimizing your content to create a great digital experience for your audience is hard for a lot of reasons. But it’s no less critical. No business can afford to overlook the fact that today’s online audience will abandon slow-loading content in the blink of an eye. Your job is to figure out what makes it slow… and fix it.

 

In my next post, I’ll try to help you do just that, with information about common performance killers that can sneak up on your content during its long, complex journey from your servers to your audience. Stay tuned!

 

Enjoyed this post? Share it on Twitter.

 

Questions? Drop me a note or find me on Twitter at @clarekirlin.

 

Want to learn more about web performance? Download the book Optimizing the Digital Experience for an in-depth look at the topics presented in this blog series.

 

[Book cover: Optimizing the Digital Experience]