Editor’s note: This interview has been recorded and published as part of a content project in collaboration with the Japan External Trade Organization (JETRO).
The idea of cloud gaming has been around for a long time, but only now, with the likes of Google’s Stadia and Nvidia’s GeForce Now, does it seem to be getting closer to the mainstream. Polystream, however, believes that the future of cloud gaming doesn’t look like any of these services, and offers a very different solution. Essentially, it places everything but graphics processing in the cloud, which is the opposite of what everybody else is doing. We sat down with the company’s co-founder and CEO Bruce Grove at Slush 2019 to learn more.
This interview has been edited for clarity and brevity.
Q: What is Polystream?
Polystream is a new way of streaming games and applications from the cloud. It’s a new way of running very high-end 3D content. We’re very focused on how we’re going to make this scale, and we do this with a technology never seen before.
Q: How does it work? How is it different from what we’ve got right now?
Everyone today has been taking your game console or your workstation and putting it into the cloud. They’re just moving the computing as one thing from one place to another. It’s not distributed computing, and it doesn’t scale very well. This leads to a problem: people don’t talk about concurrent users in cloud gaming, they don’t talk about how many users can actually play at the same time.
At Polystream, we’re looking at distributed computing in a distributed cloud. We run the game in the cloud — but you already have a GPU. We have billions of GPUs in people’s hands, from their smartphones to their game consoles, to their PCs. We stream the graphics commands to your machine, we draw locally, and now we have distributed compute. We can run out of any cloud, we can scale as much as we want, and we’re basically no longer limited by having to find one of those special cloud-gaming machines.
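The split Grove describes can be sketched in a few lines of Python. This is a toy illustration only — the names (`DrawCommand`, `server_tick`, `client_render`) are invented for this sketch, and Polystream’s actual protocol is proprietary. The point is the division of labour: the cloud side runs game logic on ordinary CPUs and emits abstract draw commands instead of pixels; the client replays those commands against its own local GPU.

```python
from dataclasses import dataclass

@dataclass
class DrawCommand:
    """One abstract graphics-API call (e.g. a clear or a draw)."""
    op: str      # operation name, e.g. "clear", "draw_indexed"
    args: tuple  # operands for the call

def server_tick(world_state: dict) -> list:
    """Cloud side: advance the simulation, return commands — not pixels."""
    world_state["frame"] += 1
    return [
        DrawCommand("clear", (0.1, 0.1, 0.1, 1.0)),
        DrawCommand("draw_indexed", ("player_mesh", world_state["frame"])),
    ]

def client_render(commands: list) -> int:
    """Client side: replay each command against the local graphics API."""
    executed = 0
    for cmd in commands:
        # A real client would dispatch here to Direct3D, Vulkan, or Metal.
        executed += 1
    return executed

state = {"frame": 0}
cmds = server_tick(state)       # CPU-only work, runs in any cloud
client_render(cmds)             # GPU work, runs on the player's device
```

Because the server never touches a GPU, it can be any general-purpose cloud instance — which is the scaling argument Grove makes above.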
Q: Then how is it different from just playing the game locally?
When you play the game locally, you’re bound by the compute that you have. Today, when we build a game, we build it for a game console, or for a PC. But what we really want to do is we want to build new experiences, we want to build with the power of the cloud. How do we build more AI? How do we build experiences that allow us — whatever device we’re on — to come into that world and experience different things, maybe take different viewpoints?
What we want to do is actually look forward for what we can do with the massive compute of the cloud. We can build ever bigger engines, more information, more data, more players, and put all of that together, and then just deliver it as a graphic stream.
Q: So, if I do this on my old PC, am I still limited by my graphics card?
You are indeed. Everyone thinks that we can solve problems just by magically moving them from one place to another. When you look at what most people are doing with cloud gaming today, they’re saying, we’re going to put all this graphics compute in the cloud and we’re going to stream it to a 10-year-old PC. And that works, but are we really going to put 200 million graphics processors in the cloud to stream to 200 million old PCs? We’re just moving the cost and the problem of doing that to somewhere else.
What we’re thinking about is actually the next generation of cloud, we’re thinking about what the next generation of devices is going to be. If I don’t need a device to be super powerful, but it just needs a graphics processor in it, I can still have a very thin client. It’s just a graphically capable client.
Q: It seems to me that even in newer gaming PCs the graphics card is one of the most expensive components, and the one that draws the most power. What does your solution change?
It changes the distribution of the compute; it changes how we think of cloud and how we think of dynamic workloads. Today, if I look at an Xbox — and the Xbox is really a $200 device, it’s not hugely expensive — and I look at the new generation of Intel integrated graphics, they actually have the same performance. That’s a graphically capable GPU.
What we’re getting to now is ultrabook-class devices, low-end devices, mid-range devices that are very graphically capable. But if I wanted a 4K device, I don’t need to build everything else around it — I just put the graphics processor in there. Now I can start to think about a set-top box or a small form-factor PC with a GPU in it. And suddenly I’m now at a much more graphically capable device, but I can run whatever size application I want from the cloud. I’m no longer bound by my size of compute, memory, or storage.
Q: So rather than moving graphics processing to the cloud, you move everything else to the cloud?
Exactly. Because everything else is very elastic, it’s very dynamic. We can scale compute where we need it, and I can spin up CPU resources whenever I need to, and I can move those and I can size them very well to the needs of each application. Whereas with a graphics processor, if you want to play a game, you need that graphics processor — and at whatever level of compute you need, you take that resource and no one else can use it. It’s not really very dynamic and it’s not flexible. Also, to overcome latency, it needs to be near you. So, we need to populate every data centre with enough of those graphics processors to meet the demand of all of these people all around the world.
Q: So, in a way, we’re looking at the same old vision of the external graphics adapter that can connect to a computer, except this external graphics adapter is now the actual computer.
Yes, and in fact, we talk about it in a similar way to how you talk about the internet of things. IoT has taken all of these devices right out to the hands of the consumer, right into your house, and they’ve made them part of the cloud, parts of that compute model. And we see the same future for visual cloud and visual compute.
It doesn’t matter whether you’re an architect or a gamer, now you can bring your visual device, your visual experience, and make your phone, or your PC, or your console part of the cloud. Now we get to tap into that enormous resource of those devices, but we balance the workloads properly. We put the compute in the right places.
Instead of compressing video and giving a subversion of what we actually started with, it’s rendered locally, it’s drawn perfectly, and it’s exactly what it should be.
Q: What sort of requirements do you have for the apps in this case? Do you require some special architecture or special ways the applications are created and built?
No, we actually make no changes to the applications whatsoever. We take the executable, but we don’t see the application; it’s a black box to us. What we see are the graphics commands that come out of it. Polystream takes that stream of graphics commands and compresses them, makes them so that we can send them over the network. At 10 Mbit/s, we can send a 4K stream because it’s just command data, telling your GPU what to draw.
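A rough sense of why a command stream can fit 4K into 10 Mbit/s: pixels are big, commands are small and repetitive. The sketch below is illustrative only — the command records and their sizes are invented, not Polystream’s real wire format — but it shows the gap in scale between one uncompressed 4K frame and a frame’s worth of structured, compressed draw commands.

```python
import json
import zlib

# One uncompressed 4K RGBA frame: width * height * 4 bytes per pixel.
raw_frame_bytes = 3840 * 2160 * 4  # ~33 MB before any video compression

# A hypothetical frame's worth of draw commands as plain records.
commands = [{"op": "set_shader", "args": ["pbr_v2"]}] + [
    {"op": "draw_indexed", "args": ["mesh_%d" % i, 0, 1200]}
    for i in range(200)
]

# Serialize and compress; repetitive command records compress very well.
wire = zlib.compress(json.dumps(commands).encode())

# The command stream is orders of magnitude smaller than the raw frame.
print(len(wire), "bytes on the wire vs", raw_frame_bytes, "bytes of pixels")
```

Video codecs of course shrink the pixel side dramatically too, but they do it lossily; the commands, once replayed, produce the exact frame the application asked for — which is the “drawn perfectly” point Grove makes below.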
Q: So, in a way, it’s similar to edge computing as it’s used in IoT?
Very much. Also, we look not just at bringing old applications to the cloud, but also at what the cloud is going to be. And when we talk about edge compute… There is going to be compute available at the edge, there are going to be CDNs with more powerful capabilities than there are today. There are huge hyperscale data centres all over the world. And then there are the devices that we’re not going to give up. What we want to do is actually think about the workload, and what is the compute that we’ll need at that time?
Q: What is your product after all?
Polystream is a B2B company. We are a platform that enables games, game engines, and 3D visual compute providers to deliver their applications to their audience and their customers. We sell directly to engines and game publishers. For example, a publisher might want to do instant marketing or demos of games, and this makes them easily available to people; you don’t have to wait for downloads.
What we’re looking at now is to develop the product fit, to get into these different spaces as people start to develop their applications for the cloud rather than just moving an application into the cloud, which is what we see today.
Q: Have you raised money along the way?
We’re well backed. We raised $12 million in our Series A earlier this year, led by Intel Capital. We also have people like Wargaming, Lauder Partners, London Venture Partners, and Initial Capital. So, we’re well down our path now. It’s deep tech; we’re doing something that nobody’s really ever done before. The team is very focused on bringing that technology, showing that you can stream using graphics command data over the internet, not just using video, and that it can scale dynamically.
Q: Why do you need so much money?
It’s a complicated problem to solve, we have a lot of engineers, the whole team is just over 30 people now. We’re getting to that point where we need to bring the product to market, to run and operate a live platform — and that really is just a headcount problem. We’ve also got to have enough people to look after the customers as we grow as a business. It takes an organisation to support the product.
Q: And who are your customers?
We don’t have any public customers yet, and that’s because we’re still pre-revenue. We have a lot of stuff in trial at the moment, and we hope to be announcing something in Q1 next year.
We’re not public-facing yet, and that’s intentional. We don’t need to be for where we are in our development cycle. What we really want to do is make sure that we get this right when we bring it out. We’re not going to just drop it in everywhere and say, right, here you go, everyone go nuts — because it requires thinking differently about how we bring compute to market.
Q: It’s interesting timing for this interview, a few days after the launch of Stadia by Google. What’s your take on that?
I was the first head of engineering at OnLive [a cloud gaming service that was eventually acquired by Sony], so I have been here before. There’s a reason Polystream exists, and it’s because, fundamentally, streaming with video is just too challenging to scale. We can stream with video, we know it works, but when we get to the point of wanting to replace everybody’s console, that’s a lot of infrastructure even for Google.
What we’ve seen this week are the challenges that come with setting an expectation that big. We’ve seen people saying, this is great, we’ve seen people say, this is terrible, and it all feels very 10 years ago if I’m honest.
Q: Do you think it will still exist in any shape or form in the future?
I think it will; Google has more than enough resources to develop this. These are the early humps that you experience with any technology, and particularly when you’re innovating and creating something new in a space. However, I don’t think that Google will forever be doing video streaming. I genuinely believe that at some point Google will have to look at command streaming, like Polystream’s, to actually scale to the kind of ambition they have. They’re a big company, they can keep developing it, and they also don’t need to solve every problem today — they can afford to do what they’re doing and then grow it and take it forward to the next level.
Q: And you don’t necessarily see Google and other bigger companies doing similar things as your competitors at the moment?
I see them more as an opportunity, so I see them as a place for Polystream to work with. Polystream could work on top of Google cloud today, we’ve tested that. It works on Amazon, it works on Azure, it works on UpCloud in Helsinki. We’ve demonstrated that we are a multicloud solution already, and that’s really the strength of command streaming in particular.
Q: What did you do before starting Polystream?
I started as a systems engineer, working on jet engines. I then moved into telecoms, and that actually took me to Silicon Valley in the early 2000s. I spent more than a decade in Silicon Valley at a mixture of large companies and startups. When I went to work for Tellme Networks, I really got to understand a very different dynamic for how companies can be and what you can create and grow if you start from scratch.
Then I went to OnLive — I was very early in there, and I rode that all the way through. I came back to Europe as general manager for all of our European business. In 2015, when we finally sold OnLive to Sony, I met my co-founder Adam Billyard, and he had this kind of crazy idea for a different way to do cloud gaming. It just felt right; it was the right time to try and start something from scratch. Five years later, I’m feeling pretty good about where we are.
Q: Are you a gamer yourself?
I am. I’m a big gamer, I’ve been a gamer all my life. I still play games, I have every console. I’ve been sucked into Assassin’s Creed Odyssey for a long time. I get home, and I can just sit and run around that world.
Q: What sort of games would work best with the Polystream engine?
I think we haven’t even seen these games yet. We’ve got very hung up on cloud gaming being a substitution for a console or a PC, and I think that what we really want to see is somebody thinking differently about what can be done if a game is built for the cloud first. There’s a company here in Helsinki called Mainframe, and they recently talked about building a game with a cloud-first mindset. We’ve been talking to them, and they’ve got some pretty amazing ideas.
One of the things that we haven’t had the opportunity to do yet is start developing with that cloud-first mindset. And if we do in the next five years, we’re going to see IP that we’ve never imagined before. It will just change our world and change our experiences.
Q: And when will we see the first applications written with this sort of mindset?
I think it’s going to take a couple of years. The only way that you can take that kind of platform forward and build that kind of audience is to create something you can’t get anywhere else. And so, when you build a game, when you build a massive IP, it does take two or three years to really get that out to market. I think we’ll see something within the next 18 months in early access, we’ll get people starting to see what these experiences can be.