>>Live from Las Vegas, it’s theCUBE covering AWS re:Invent 2018. Brought to you by Amazon
Web Services, Intel, and their ecosystem partners.>>Hey everyone, I’m John Furrier, co-host of theCUBE, here live in Las Vegas at AWS re:Invent 2018. It’s our sixth year covering it, presented by Intel and AWS. Our next guest is Paul Savill, Senior Vice President of core network and technology for CenturyLink. Welcome to theCUBE, good to see you. Thanks for coming on.>>Thanks, really glad to be here.>>So one of the things
we’ve been covering on SiliconANGLE and theCUBE is that the holy trinity of infrastructure, storage, network, and compute, is never going away, but it’s evolving as the market evolves. You guys have been providing
connectivity and core network.>>Right.>>Really high availability bandwidth and connectivity for many, many years. Now you guys are in the
middle of a sea change. What are you guys doing at re:Invent? You guys are partners, your logo is everywhere. What’s your story? Why are you here? What are you talking about
at re:Invent this year?>>Sure, yeah. You know I really do
believe it is a sea change. We’ve actually been working
with AWS for many years and when AWS first started, we were one of the major
internet service providers for AWS and access into
AWS cloud services, but a few years ago we
really started seeing this sea change start to happen because enterprise customers started asking for weird things from us. They actually wanted to order dedicated 10-gigabit optical waves from their enterprise location
into the AWS platform, and we were thinking
everything should come through the public internet,
why are people doing this? And really, what was driving it was issues around performance and concerns around security, and so we’re starting to
see the network really start to play a major role in how cloud services and how the performance of cloud-based applications are delivered.>>We’re here on day two of re:Invent, with two more days to go. Andy Jassy, the CEO, has his big keynote tomorrow morning. We’re expecting latency to be a big part of his keynote. Specifically as Amazon
evolves their strategy from being public cloud,
where all the action is, to having a cloud version on premise.>>Right>>Because of latency and
heritage or legacy workloads on premise aren’t going away certainly. Maybe their footprint might be smaller, I’d buy that, but it’s not going away. But connectivity and latency is now at the front of the
conversation again because data and compute have that relationship. I don’t want to be moving data around, and if I do, it better be low latency, but I want to run
compute over the network, I want to send some compute to the edge. So latency is important. Talk about this, because
you gave a talk here around milliseconds
matter, I love that line, because they do matter now.>>They do, yeah.>>Talk about that concept.>>Sure, yeah. We’re absolutely seeing it, and the reason we came up with that tagline is because, more and more, as we’ve been working with enterprises on networking solutions, we’ve found it’s really true of how well their applications
perform in the cloud, and I really do applaud Jassy and AWS for working on that solution to deliver some of the AWS capabilities to the prem. But really we see the market evolving, where in the future
it’s a trade-off between latency and the amount of bandwidth and how the performance needs to be applied across the field. Because we believe that
some things will make a lot of sense to be hosted
out of the cloud core, where there’s major iron, major storage and compute, some things can be
distributed on the prem, but then other things make more sense to be hosted out of
somewhere on the far edge where it can serve multiple locations. It may be more efficient that way, because maybe you don’t want
to haul all the bandwidth, or huge amounts of data
very long distances; that becomes expensive.>>Well, bandwidth costs, it’s a cost to you.>>It’s still a cost, yeah.>>Latency, for one, is a performance overhead cost that can hurt the application, but there’s also an actual financial cost.>>Yes, there is.>>Talk about this concept
of latency in context to the new kinds of applications, because what’s going
on is that as compute, and as you mentioned, storage, start to get more functionality, specifically compute,>>Yes>>Things happen differently. I’ve been studying AI, I’ve been a computer science
major since the ’80s, and AI’s been around since the ’80s and earlier, but all those concepts just didn’t have the compute
capability and now they do, now machine learning is on
fire, that’s a renaissance. Compute can help connectivity, you just mentioned a huge case there, so this is powering new
software applications that no one has ever seen before.>>That’s right.>>How are these new network workloads and applications changing connectivity? Give some examples, what are some of the things you guys are seeing as use cases running over the connectivity?>>Sure. So we’re seeing a lot
of different use cases, and you’re right, it
really is transforming. An example of this is retail robotics. We’re seeing very real applications where large retail customers want to drive robotics in their many
retail store locations, but it’s just not affordable to put that whole hardware and software stack in every single store to run those robotics. Then, if you try to run those robotics from an application that’s hosted in a cloud somewhere a thousand miles away, it doesn’t have the latency performance it needs to accurately run those
robotics in the store. So we believe that what
we’re starting to see is this transformation where applications are going to be broken up
into these microservices where parts of it’s going
to run in the cloud core, part of it’s going to run on the prem, and part of it’s going
to run on the near edge where things are more efficient to run for certain types of applications.>>It’s kind of like a human. You got your brains and you got your arms and legs to move around. So the brains can be in the cloud, and then whatever is going on at the edge can have more compute. Give some other examples. You and I were talking
before we came on camera about video retail analytics.>>Right, uh huh.>>Pretty obvious when you think about it, but not obvious when you don’t have cloud. So talk about video analytics.>>Yeah, that’s another important driver. As AI advances, and as other technologies like machine learning advance, we want to apply AI to a whole new range of applications. So retail, like video
analytics for instance, what we’re starting to see
is the art of the possible. You may have a retail
store that has 30 different video cameras spread around it, and it’s constantly monitoring
people’s expressions, people’s moods as they come in, there’s an AI sitting
somewhere that’s analyzing how people feel when
they walk in the store versus how they feel when they walk out. Are they happier when they walk
out than when they walk in? Are they really mad when they’re waiting in line someplace? Is there a corner of the store where, in real time, there’s an AI detecting that, hey, there’s a problem in that corner because people look upset? That type of analysis,
you don’t want to feed all of that video, all of
those simultaneous video feeds to some AI that’s sitting
a thousand miles away. That’s just too much of a lift in terms of bandwidth and in terms of cost. So the answer is there’s
this distributed model where portions of the application and the AI are acting at different locations in the network, and the network is tying it all together.>>Microservices are going to create a whole new level of capabilities and change how they’re
implemented and deployed.>>Yes.>>And connectivity still feeds the beast called the application. Also, the other thing we’re seeing, as we expect to hear Amazon announce, is new kinds of connectivity, whether it be satellite
and/or bandwidth to edges. IoT, or the network edge as it’s called, where the edge network kind of ends with power and connectivity. Because without power and connectivity it’s not on the network, it’s not an edge.>>That’s right.>>There’s a trend to
push the boundary of the edge. Battery power is lasting longer, so now you need connectivity. How do you guys at CenturyLink look at this? So do you guys want to
push the boundaries, how are you guys just
pushing the boundaries?>>Sure.>>Yeah, IoT is another area that’s really changing the business. It’s opening up so many new opportunities. When you talk about the edge, it’s really funny, because people define the edge in so many different ways, and the truth is the edge can vary depending on what the application is. In IoT, if you have a bunch of battery-powered remote devices that are signaling back to some central application, well then that IoT, those physical devices, are the new edge, and they could be very deep into some kind of a market. But there’s a lot of different communications technologies
that can access those. There’s 5G wireless that’s
emerging, or regular wireless. There are technologies like LoRaWAN, which is a very low-bandwidth
but very cost-effective way for small IoT devices to
communicate small amounts of data back to a central application. And then there’s actual
fiber that can be used to serve locations where
IoT devices can be feeding
bandwidth back to applications.>>So it’s good for your business?>>It’s great for our business. We really see it opening up
so many other new avenues for us to serve our customers.>>So I’m going to put
you on the spot here. If I asked you a question, what has cloudification
done for CenturyLink? How has it changed your business? How would you respond to that question?>>I think that it’s made what we do even more critical to the future
of how enterprises operate. The reason for that is just the point that you made when we started, which is storage and
compute and networking, it’s all really coming back together; it boils down to those things. But networking is becoming
a much more important factor in all of this because of the
latency issues that are there and the amount of bandwidth that can be generated. We believe that it’s
creating an opportunity for us to play a more pivotal role in the whole evolving cloud ecosystem.>>I still think this is such an awesome new area because, again, it’s so early. And as storage, network, and
compute continues to morph, all of us networking geeks
and infrastructure geeks, software geeks are going to actually have an opportunity to reimagine
how to use those parts.>>It is, yeah.>>And with microservices
and custom silicon, you see what Amazon’s done with Annapurna. You can have data processing units, connectivity processing units, you can have all kinds
of new capabilities. It’s a whole new world.>>It is and, you know,
interestingly enough organizations are going to have to change. One of the things we see with enterprises is that many enterprises are organized so that those three areas
are still completely managed in separate departments. But in this new world of how cloud is crushing all of those things together, those departments are
going to have to start working in much closer alignment. I had a customer visit me
after our session yesterday who was saying, I get the
whole thing of how now when you deploy an
application in the cloud, you can’t just think
about the application. You got to think about the
network that ties it all together. But he says, I don’t know how to get my organization to do that. They’re still so segmented and separated. It’s a tough challenge.>>And silos are a critical problem. I just saw a presentation
with the FBI’s deputy director of counterterrorism, and they can’t put the
puzzle pieces together fast enough to evaluate threats
because of the databases. She gave an example around
the Las Vegas shooting here. Just going through the videotape of the hotel took 12 people, working 20 hours a day, a week. They did it in 20 minutes
with facial recognition. And they have all this data, so putting those puzzle
pieces together is critical. I think connectivity truly is going to be a new kind of backbone.>>Yes, uh huh.>>You guys are doing some good work. Okay, let’s get a plug in for you guys real quick. By the way, thanks for the insight. Great stuff here at re:Invent. It’s the one-year anniversary of CenturyLink and Level 3 coming together. Synergies, what are you guys doing? Give us the update as you come up on the one-year anniversary.>>Sure, uh huh.>>What are the synergies?>>Yeah, we’re getting
tremendous synergies. In fact, I think if you
listen to our analyst reports and our quarterly earnings calls, we’re really ahead of plan in that area. We’ve actually raised our
earnings guidance for the year beyond what was originally expected of us. We’re doing really well on that front. I’ll tell you, the thing that
excites me more than synergies is the combined opportunity that we have because of these two
companies coming together. Bringing the companies together surprised me with the new opportunities we found. For instance, when you take Level 3, which is a globally distributed network covering Europe, Latin America, North America, and
parts of the Pacific Rim with fiber and subsea systems, and combine it with
CenturyLink’s dense coverage of fiber in North America, it really creates a stronger ability for this company to reach enterprises with very high-performing network solutions. One of the main things that surprised me actually relates to this conference, and that is that CenturyLink was really focused on building out cloud services, working closely with companies like AWS on creating managed services around cloud, and building performance tools for managing cloud-based applications. Level 3 was really
focused on building out network connectivity
in a dynamic way, using the new software-defined networking technologies, to be the preferred provider of high-performance networking to cloud service providers.>>The timing was pretty
impeccable on the combination because you were kind
of cloudifying before cloud native was called cloud native. You were thinking about it
in kind of a DevOps mindset, and they were kind of thinking of it from a software agility perspective out of infrastructure. Kind of bringing those together. Did I get that right?>>That’s exactly right. Level 3 was thinking about how to make the network consumable on a dynamic, on-demand basis, the same way cloud is. When you combine CenturyLink’s
capabilities with that then it’s just opening up so
many new things for us to do, so many new ways that we can deliver value to our enterprise customers.>>Well I’m always hungry for
more bandwidth, so come on. You guys lighting up all that fiber? How’s all the fiber?>>Yeah, we’re expanding dramatically. We’re investing heavily
in that fiber network. We have around 160,000
enterprise buildings on our network today and we’re growing
that just as fast as we can.>>So Paul Savill, you’re the guy to call if I want to get some core network action, huh?>>That’s right.>>Alright. Thanks for the insight, great to have you. Good luck at the show here at re:Invent. CenturyLink here inside
theCUBE powering connectivity. Big part of the theme here at re:Invent this year
is powering the edge, getting connectivity to places that need low latency for those workloads. That’s the key theme. You guys are right on the trend line here. CenturyLink on theCUBE, I’m John Furrier. Stay with us for more
wall to wall coverage after this short break. (upbeat techno music)