It’s been almost a year since Data Center Frontier published “Edge Computing: The Data Center Frontier Special Report.” If you missed the report, it’s a great primer, worth the read, and you can have it for free here.
So the industry has had a year to work on the implementation of the edge – what issues are we seeing so far?
- Just what is the “edge,” anyway?
We’ve said in other places that “slow is the new down.” That is, infrastructure operators must be as concerned about latency as they are about downtime – because both can be costly. For revenue-generating applications, it’s a matter of not having customers bail out of a purchase that takes too long. For Internet of Things applications, it’s simply a question of table stakes. Do the application, hardware and other enabling technology do what they’re supposed to do?
In light of that, we think it's best to define the edge in terms of latency. And since no one we know of has offered a precise definition, let's take a stab at one: the edge is where compute and content delivery happen, as a full request-and-response cycle, within 10 milliseconds or less.
So here's our stake in the ground: 10 milliseconds or less. We're willing to listen to alternatives.
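As a rough illustration, that definition reduces to a simple check on measured round-trip times. The function, the threshold constant, and the sample sites below are our own illustrative choices, not an industry standard:

```python
# Illustrative sketch: classify a site as "edge" by the round-trip
# latency of a full request/response cycle. The 10 ms threshold is
# the stake in the ground proposed above, not an industry standard.

EDGE_THRESHOLD_MS = 10.0

def is_edge(round_trip_ms: float, threshold_ms: float = EDGE_THRESHOLD_MS) -> bool:
    """True if a full request/response cycle completes within the threshold."""
    return round_trip_ms <= threshold_ms

# Hypothetical median round-trip times as measured from an end device
sites = {
    "on-prem micro data center": 4.2,
    "metro colocation facility": 9.1,
    "regional cloud zone": 38.0,
}

for site, rtt in sites.items():
    label = "edge" if is_edge(rtt) else "not edge"
    print(f"{site}: {rtt} ms -> {label}")
```

The point of the sketch is that "edge" becomes an empirical property of a deployment, measurable from the user's side, rather than a marketing label.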
- The emergence of 5G NIMBY
Columnist Christopher Mims of The Wall Street Journal recently wrote:
“Most cities want 5G, but they don’t want to be told how, when and at what cost. Rules the FCC has already passed, meant to expedite 5G’s rollout, might well be creating acrimony that serves to do the exact opposite.”
But while municipalities argue that 5G towers shouldn't be built in their backyards, edge computing is already being driven by an onslaught of data, the growing number of connected IoT devices and the need to stay competitive in a digital world.
So while the emergence of 5G will eventually make a major impact, organizations aren't waiting for the technology to arrive before pursuing the competitive advantage of the edge.
5G NIMBY is an emerging political issue that doesn’t fit our existing partisan fault lines. And as the world – particularly Asia – steals a technology march on the U.S. in 5G, the pressure to adopt and implement this technology will become enormous. Worth watching.
- Is anyone doing the math yet?
Data Center Frontier's report says: "Amazon's calculated that a page load slowdown of just one second could cost it $1.6 billion in sales each year." So latency certainly can be expensive. But what we aren't seeing is the connection between this asserted cost of latency and the massive shift some forecasters have predicted: 50% of compute capacity moving out of centralized data centers and the cloud to the edge. Who is doing the hard work of creating the analytical framework to determine precisely which applications should be moved to the edge, and why? And who is building the financial models that account for the relevant cost and VALUE factors that could justify the capital expenditures for this migration to the edge?
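To make the question concrete, here's a back-of-the-envelope sketch of the simplest possible version of such a model: annual latency cost versus the capital expense of an edge build-out. The $1.6 billion figure comes from the report's Amazon example; the 250 ms slowdown and the $50 million capex are hypothetical numbers of our own, and a real model would need far more than this linear approximation:

```python
# Back-of-the-envelope sketch of the kind of financial model the
# section asks for: weigh the annual cost of latency against the
# capital expense of moving a workload to the edge.

def annual_latency_cost(cost_per_second_of_slowdown: float,
                        slowdown_seconds: float) -> float:
    """Linear model: yearly revenue lost for a given page-load slowdown."""
    return cost_per_second_of_slowdown * slowdown_seconds

def simple_payback_years(edge_capex: float, annual_savings: float) -> float:
    """Years to recover the edge build-out from latency savings."""
    return edge_capex / annual_savings

# The report's Amazon figure: ~$1.6B in lost sales per 1-second slowdown.
cost = annual_latency_cost(1.6e9, 0.25)  # a hypothetical 250 ms slowdown
print(f"Annual cost of 250 ms of latency: ${cost:,.0f}")

# Hypothetical: a $50M edge build-out that eliminates that slowdown
years = simple_payback_years(50e6, cost)
print(f"Simple payback period: {years:.2f} years")
```

Even this toy version surfaces the real analytical questions: how linear is the latency-to-revenue relationship, and which applications actually sit on the steep part of that curve?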
A while back, in an excellent article, 451 Research set out an idea for an applications evaluation matrix that is at least a starting point for this type of analysis. Download the report and give us a shout if you'd like to discuss it. We'd also love to hear about firms that are doing this tough work well.
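To show the shape such a matrix might take, here is a minimal weighted-scoring sketch. The criteria, weights, example applications and scores are entirely our own placeholders, not 451 Research's actual framework:

```python
# Illustrative sketch of an applications evaluation matrix in the
# spirit of the one 451 Research proposed. Criteria, weights, and
# scores are placeholders of our own, not the firm's framework.

CRITERIA_WEIGHTS = {
    "latency_sensitivity": 0.5,    # how costly is each extra millisecond?
    "data_volume_at_source": 0.3,  # does backhauling the data get expensive?
    "autonomy_requirement": 0.2,   # must it keep working if the WAN fails?
}

def edge_score(app_scores: dict) -> float:
    """Weighted 0-10 score; higher suggests a stronger edge candidate."""
    return sum(CRITERIA_WEIGHTS[c] * app_scores[c] for c in CRITERIA_WEIGHTS)

apps = {
    "factory-floor vision inspection": {"latency_sensitivity": 9,
                                        "data_volume_at_source": 8,
                                        "autonomy_requirement": 9},
    "monthly financial reporting":     {"latency_sensitivity": 1,
                                        "data_volume_at_source": 2,
                                        "autonomy_requirement": 1},
}

for name, scores in apps.items():
    print(f"{name}: {edge_score(scores):.1f} / 10")
```

The hard part, of course, is not the arithmetic but agreeing on the criteria and weights, which is exactly the analytical framework the industry still seems to be missing.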