

IronSource’s Supersonic launches LiveGames publishing service for indies





Mobile monetization firm IronSource said its Supersonic Studios division has launched LiveGames, a self-service way for indie game developers to manage mobile games and their live services (such as tournaments or updates).

This is part of the Supersonic publishing solution, which IronSource launched more than a year ago. The announcement comes after IronSource said it plans to go public via a special purpose acquisition company (SPAC) at an $11.1 billion valuation.

The product gives developers who publish their mobile games with Supersonic access to game management tools and full visibility into in-game metrics, enabling them to better manage and grow their published games.

Nadav Ashkenazy, the general manager of Supersonic Studios, said in an interview with GamesBeat that the goal is to make publishing tools accessible to indie developers so they can get their games off the ground. It helps with the “growth loop,” after a game reaches a large scale and then needs attention in terms of improving numbers, such as the average playtime per user.

“After you scale a game globally, everything gets more complicated,” Ashkenazy said. “For average playtime per user, we give you a snapshot for that.”

The idea is to support developers as independent companies by productizing what is otherwise a manual process. It also adds some important transparency for developers that they normally can’t get out of game publishers, said Omer Kaplan, the chief revenue officer at IronSource, in an interview with GamesBeat.

“Historically, publishing is a black box,” Kaplan said. “A developer’s game meets retention goals. Then a publisher handles growth and gives a revenue share. We make everything transparent. We have complete transparency for the developers using our publishing solution on the IronSource platform.”

Several rival products in the market help developers test the performance and marketability of their prototypes, with IronSource launching its self-serve testing product for Supersonic developers in 2020. However, one of the biggest challenges comes once a game has been published, since many of the insights relating to a game and its performance are not commonly visible to the developer, limiting the ability to understand, test, iterate and improve for the long term.

Above: IronSource’s LiveGames helps studios manage their game data.

Image Credit: IronSource

With Supersonic, IronSource has focused on helping game companies become better developers, rather than treat each game as a standalone unit.

Through LiveGames, developers will have access to data such as daily, monthly, and annual profit for each of their published games; advanced analytics including retention, playtime, lifetime value, and ad engagement for each region and user acquisition channel; rewarded video and interstitial ad analysis; and advanced analytics from A/B tests for test comparison.
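Several of these metrics are simple cohort computations. Day-N retention, for instance, can be derived from install and activity logs. A minimal sketch (the function and sample user IDs are hypothetical, not part of the LiveGames product):

```python
def day_n_retention(installers, active_on_day_n):
    """Fraction of users who installed on day 0 and were still active on day N.

    `installers` and `active_on_day_n` are sets of user IDs (hypothetical data).
    """
    if not installers:
        return 0.0
    return len(installers & active_on_day_n) / len(installers)


# Four installs on day 0; two of them are still playing on day 7.
installers = {"u1", "u2", "u3", "u4"}
active_day_7 = {"u2", "u4", "u9"}  # u9 installed earlier, so it doesn't count
print(day_n_retention(installers, active_day_7))  # 0.5
```

The set intersection excludes users active on day N who were not in the day-0 install cohort, which is what keeps the metric comparable across regions and acquisition channels.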

Stan Mettra, the CEO of game studio Born2play, is using LiveGames with the game StackyDash. He said in a statement that this is the first time the company has had so many insights into the game's performance, which removes blind spots and helps it take steps to increase revenue. About 25 studios used LiveGames in alpha testing, and they are now ready to start using the product.

“We’re encouraging the developers to remain independent,” Kaplan said.

Tel Aviv, Israel-based IronSource has previously said it would raise $2.3 billion in cash proceeds for both shareholders and the company itself through the transactions, which includes both the proceeds from the SPAC (a faster way of going public compared to an initial public offering) and an additional private investment known as a PIPE, or private investment in a public equity. SPACs have become a popular way for fast-moving companies to go public without all the hassle of a traditional IPO. Regulators have come up with more rules to govern SPACs, but the idea is to raise money faster.

IronSource said it recorded 2020 revenue of $332 million and adjusted earnings before interest, taxes, depreciation, and amortization (EBITDA) of $104 million. IronSource said its monetization platform is designed to enable any app or game developer to turn their app into a scalable, successful business by helping them monetize and analyze their app and grow and engage their users through multiple channels, including unique on-device distribution through partnerships with telecom operators such as Orange and device makers such as Samsung.

In 2020, IronSource said, 94% of its revenue came from 291 customers that each generated more than $100,000 in annual revenue, and it posted a dollar-based net expansion rate of 149%.
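That 149% figure is a cohort metric: revenue in the current period from last period's customers, divided by that cohort's revenue a period earlier. A minimal sketch of the calculation (studio names and figures are invented for illustration):

```python
def net_expansion_rate(prior, current):
    """Dollar-based net expansion rate.

    `prior` and `current` map customer -> revenue for two consecutive periods.
    Only the prior period's cohort counts in the numerator; brand-new
    customers are excluded by construction.
    """
    prior_total = sum(prior.values())
    cohort_current = sum(current.get(customer, 0) for customer in prior)
    return cohort_current / prior_total


prior = {"studio_a": 100_000, "studio_b": 200_000}
current = {"studio_a": 180_000, "studio_b": 267_000, "studio_c": 50_000}
print(f"{net_expansion_rate(prior, current):.0%}")  # 149%
```

Note that studio_c, a new customer, contributes nothing to the rate; a value above 100% means existing customers spent more than they did the year before, even before counting new business.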


GamesBeat’s creed when covering the game industry is “where passion meets business.” What does this mean? We want to tell you how the news matters to you — not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.

How will you do that? Membership includes access to:

  • Newsletters, such as DeanBeat
  • The wonderful, educational, and fun speakers at our events
  • Networking opportunities
  • Special members-only interviews, chats, and “open office” events with GamesBeat staff
  • Chatting with community members, GamesBeat staff, and other guests in our Discord
  • And maybe even a fun prize or two
  • Introductions to like-minded parties

Become a member



Disney interview: Big games coming with Avatar and Pirates of the Caribbean





Disney had a big week at the Electronic Entertainment Expo (E3) with the announcement of Ubisoft’s new open-world game, Avatar: Frontiers of Pandora. The title has cinematic graphics that replicate the imagery of the movie and the environments of the beautiful moon of Pandora.

Microsoft’s Rare studio also announced that characters from the Pirates of the Caribbean films, like Jack Sparrow and Davy Jones, will be integrated into Sea of Thieves. Both are examples of Disney’s return to triple-A games after changes to its strategy for games over the years.

Disney had triple-A games in the past when it had its own game studios. But it closed down or sold off the studios, and more recently it has been licensing its properties to outside companies, mostly mobile game publishers such as Glu and Jam City. And now it’s clear that Disney has been licensing its properties out for triple-A games as well.

I talked with Sean Shoptaw, senior vice president of Walt Disney Games, and Luigi Priore, vice president of Disney and Pixar Games, about Disney’s presence at E3 and the latest on its strategy for games.

Here’s an edited transcript of our interview.

Above: Pandora looks beautiful as an open world.

Image Credit: Disney

GamesBeat: What’s new for Disney Games?

Sean Shoptaw: I guess that’s a pretty loaded question. There’s a lot going on. It’s been a great week at E3 with some of the announcements you’ve seen. The business is doing well. We’re super excited about the products we’ve announced, and a lot of the products we have in the market already as well. We’re very excited about the status of games at Disney.

GamesBeat: What was announced altogether this week?

Shoptaw: The Avatar title and the Sea of Thieves integration were the two big ones so far.

GamesBeat: How long has Avatar been in the making now?

Priore: That predates us, obviously, because Disney didn’t acquire the 20th Century Fox properties until a couple of years ago. That started well before the acquisition. The great thing moving forward is we’ve been lucky enough to be able to work with Ubisoft and the team at Massive with Lightstorm, James Cameron’s production company, and Jon Landau, who worked on Avatar: Frontiers of Pandora. They announced Massive was working on it a while ago, but this is the name announcement and the first glimpse of what that game is going to be. We’ve gotten very good responses.

GamesBeat: The animation almost feels like it is the movie.

Shoptaw: Yeah, after the trailer, people are finding out that it’s very cinematic. The quality is extremely high. We’re super excited about that title.

GamesBeat: How does that relate to the movie releases, the next Avatar movies? Are they slated for particular dates yet?

Priore: Yeah, the next one is holiday 2022. Sean can get into our general strategy, but on licensing games like this these days — there was a time 15 or 20 years ago where playing the movie was something. You bought the game and played the movie. Things like the classic Aladdin game on Sega Genesis. You played the film. That was popular at the time, but gamers expect more now. They want to interact with their favorite characters and worlds, but they want to play new stories and do new things with those characters and worlds.

On Avatar it’s the same thing. What James Cameron and Jon Landau created is an amazing science fiction world. Pandora is awesome. They have great heroes. It’s a great playground to play in. This is a brand new story with new characters. It’s going to become part of the canon. The whole idea is to have it be part of the storyline of that giant franchise on Pandora, but it’s not a “play the movie” game. It’s an all new open world, new characters. That’s why it’s called Frontiers of Pandora. It takes place on another frontier, another area of the moon of Pandora.


Above: The environs of Pandora.

Image Credit: Disney

GamesBeat: How much will we recognize it? Is it a replication of the movie world, or is it more Ubisoft’s imagining of a new part of the world?

Priore: No, we’re working directly with the filmmakers. Jon Landau is involved almost every day on this. This is the same world. It’s just that you’re going to meet new characters, new clans of Na’vi, and your role is going to be different. I don’t want to go too much into it because we didn’t announce everything yet. But it’s a whole new story with new characters on the same planet, in the same canon. Jedi: Fallen Order was a new story about a new Jedi in the Star Wars canon. It’s the same idea here.

GamesBeat: On your level, how are you involved, compared to Ubisoft’s responsibility?

Priore: Massive is the developer. They’re one of the best in class at open world games. Division, Division II, amazing games. They’re working with the FoxNext team and Lightstorm, working directly with the filmmakers. Where we come in is we’ve brought our expertise in working on IP, working on games. We’ve talked about this a bit. We have a collection of producers, game designers, artists, writers that work together with our partners to get the best out of what they want to do.

Although we just joined this game production recently, since we acquired the 20th Century properties, we’re working directly with Massive and Lightstorm to help them make the best game possible. It’s our job to make sure that Massive has everything they need and that the brand is as authentic as possible working with Lightstorm.

GamesBeat: It still feels like there are so many opportunities for Disney in games. How do you approach which ones to take on, how many of them to do on what platforms?

Shoptaw: There’s no shortage of inbound interest to work with all of our franchises, thankfully. That’s something we’re grateful for. We try to take the approach that — we need to align our partnerships around people’s passions for IP. When we sit down and meet with a developer or publisher about an idea, a lot of that is driven by their passion to go make a specific game with a specific IP. Ideally we’re matching that up with a best-in-class partner. To the point about Massive, about EA, about the partnerships you see now and will see in future, it’s about matching that passion with best-in-class partners to go make what we hope are the best games we’ve made for whatever genre or IP it might be.

That, to us, is the recipe. It’s about working with high-quality partners that have passion for Disney IP, whatever it may be. It gets to be a much easier conversation once you’re in that world, where you see that passion. They have a track record of developing high-quality products. Then it’s about figuring out exactly what the execution is going to be, working closely with Luigi and our other teams internally to map to what ultimately is the final product. But that really is, at the top, our focus, to match people’s passions and the highest quality of partner we can find to go make a certain game.

GamesBeat: There’s a lot more coming than what we’ve seen here at E3, I’m sure.

Shoptaw: As I said yesterday, our slate has never been better. We’ve never been more excited about the slate we have. Some of that’s been announced and some hasn’t. But we feel like we’ve been fortunate to do some exciting partnerships with partners that have a high bar on quality and thankfully have a passion for our IP. We look at our pipeline of product and it’s never been healthier. The quality bar has never been higher.

GamesBeat: Star Wars: Hunters is another one of those coming.

Priore: Very excited about Hunters. Both mobile and Switch, which is very exciting for us. We’ve wanted to get more content on the Switch. We’re excited about what that game represents within the Star Wars universe. We think it’s a unique take, both creatively and from a genre perspective. It’s a very differentiated experience, one we haven’t seen so far in Star Wars.


Above: Avatar: Frontiers of Pandora is coming in 2022.

Image Credit: Disney

GamesBeat: Zynga is an interesting choice there. They haven’t done a console game before. When I was talking to them about their Harry Potter game, though, I was pretty stunned by how much work went into that. Several years, the biggest team they ever had. How much they put into all the animation and everything else that keeps players immersed in that universe was very interesting. It wasn’t as much of a surprise to see them do a Star Wars game.

Priore: They came to us with a good idea, with a team that we had a lot of respect for. They have a lot of passion for Star Wars. It made a lot of sense to us as we sat down and mapped out what a game could look like here. You’ll see that passion and quality in the final product. As I said, I think it’s a unique take on Star Wars, and knock on wood, our fans will agree. We’re pretty bullish on that game, excited for the world to see it.

GamesBeat: Is there anything else announced in Star Wars?

Shoptaw: We announced the Massive title as well not too long ago. We’ll do an open world Star Wars game with Massive. Similar to Zynga, we feel like it fits a need within the Star Wars universe that hasn’t been fulfilled, and we felt Massive was a perfect partner to execute on it. We’re huge fans of David [Polfeldt] and the team. We aligned quickly on a vision and an experience for Star Wars that, again, fans and gamers will flock to, hopefully. We feel good about the team making it, and we think the idea behind it is great.

GamesBeat: I take it that it’s just not the time to show a glimpse of that?

Shoptaw: We’re still a little ways off, but at the right time I think people will see why we’re so excited about it. We had Avatar to show this time. We didn’t want to show too much at once. With Star Wars, we’ve seen such a great response to Star Wars recently. Jedi: Fallen Order continues to perform. We just hit the 20 million user milestone recently. That title was another great example of telling a truly original story within that universe, something that hadn’t been told before. Allowing people to go be a Jedi and play a fun game like that has proven to work well and continues to resonate.

We’re not looking to flood the market and put one game on top of another. We want to be disciplined and focused on the best experiences. It’s not about making as many games as we can possibly make. It’s about making the right games with the right partners. When we do that, we see that we’re able to have a good amount of success. We feel fortunate about that. We’ll continue to do things that we think fans and gamers will be excited about with the right partners in the right genres on the right platforms. If we can keep that discipline I think we’ll continue to raise the bar on quality and continue to deliver products that will meet the moment, meet the level of quality that we want.

GamesBeat: What’s the strategy around platforms, especially mobile?


Above: The Avatar game has been years in the making.

Image Credit: Disney

Shoptaw: Mobile is a huge market globally. We’re always going to have more mobile products than we have console products, just by the nature of the platform. It’s pretty simple. We want to be where it makes sense for our IP to be, across genres, across markets. That might mean local products like Twisted Wonderland in Japan, which is a very unique, specific take on Disney in a market that is hugely passionate about Disney specifically. That execution is a great example of being very locally focused, an execution we know is going to resonate with a certain market. We certainly have regional looks as well, products that make sense in certain parts of the world. Asia is a good example. And then we have a fair amount of products that are global.

We look at it through a local, regional, and global lens. We want to make sure we match franchises and IP with markets in genres that resonate most powerfully. Twisted Wonderland is an incredible example of a local execution. A lot of our titles, obviously, are global, and they’ve been massive successes across markets. We’ll continue to look at big global opportunities like Galaxy of Heroes with EA. Obviously the Marvel portfolio has had a lot of incredible success across mobile.

We’re not one size fits all. We’ll focus on the right execution in the right market with the right partner and the right genre. We don’t want to flood the market, again, with a bunch of duplicative titles, or just put our brand on any title that we get some interest in. We’re going to be disciplined, and we’re going to make sure we apply that sort of strategic thought to every game we do, regardless of market. That approach over the last few years for us has shown that it works well, and we’ll continue to have that view of the world. It needs to make sense. It needs to be really high quality.

Even if we think we’re missing something, if there’s an opportunity for a genre or a certain IP is underserved, we’re not going to rush and just do a game because we think we need to. We will wait and make the right game with the right partner. That’s as important as getting any games out there. That’s something we’re focused on as much as we are getting products to market and satisfying the demand that we fortunately have for our IP. We’ll continue to be disciplined.

GamesBeat: Did the pandemic change your thinking in any ways?

Shoptaw: No. Fortunately the game industry overall, and certainly our business within Disney, had been doing very well prior to COVID. People’s perception was that video games benefited a lot from people staying home, working from home. There’s certainly some truth to that. But video games have been growing rapidly as an industry prior to COVID. It would have continued to grow rapidly if we never had COVID. So it hasn’t changed any strategic thinking for us. Fortunately our products and releases, nothing was impacted too dramatically by COVID. Again, strategically it hasn’t changed our view of the world.


Above: The humans are the enemy in Avatar.

Image Credit: Disney

GamesBeat: It seems like the video game opportunity is a lot more clear than it used to be in the wake of the pandemic. I’ve been writing all these stories about how much more money is coming into the game industry. I think it’s $49 billion in the first five months of this year in terms of investments and acquisitions and public offerings. That compares to $33 billion for all of last year. At the same time I know the movie industry is contracting. Does it make some sense to argue the case for games as a bigger slice of the pie going forward, a bigger opportunity? Is it time to double down on video games?

Shoptaw: We look at games as that pillar, regardless of what the model is. For us we feel like playing in the space where we’re playing gives us the highest quality products that we can scale across the world. When you look at internal development, obviously that comes with a considerable amount of investment and volume to go hit the aspirations that we have in this space. Again, that’s to work and deliver the best products across the world — console, mobile, PC.

Generally there’s no shortage of investment still happening on the linear side. To your point around film, streaming has taken a considerable bite out of that traditional film apple. But the investment in linear content is still extremely material. I don’t think that’s been diminished in any way. From a games perspective, again, our focus has been, and will continue to be, on quality, on being able to scale this business and meet the demand that exists in video games.

We feel like right now, that strategy is to go license and work with the best partners in the world to deliver on that demand. We’ll continue to do that as long as we can meet that bar of quality, of volume, and making sure that our reach is where we need it to be. Again, we’re fortunate to have the IP that we do. We owe it to consumers, fans, and gamers to make sure we’re delivering at that level. That will continue to be our focus.

We’re excited about where this business is and where it’s going. We think it is a pillar, regardless of model. As long as we’re delivering products like we are, games will continue to be a foundational part of the overall entertainment medium. Certainly from a Disney perspective we do that very thoughtfully. We’ve given a lot of attention and focus to it internally. You’re seeing those results in products today, and you’ll continue to see them in the future.

GamesBeat: Can you tell me a little about the Sea of Thieves integration?

Priore: We’re excited. The team at Rare — this goes back to what we were saying about best-in-class partners. They’ve made the best pirate game ever with Sea of Thieves. We’re excited to have A Pirate’s Life, something authentic to Pirates of the Caribbean that’s also authentic to Sea of Thieves. It lines up with what Sean was saying about doing it the right way, making it authentic to what we do at Disney. Just as we said about Avatar or Star Wars, we want to do that all the time, and we feel like we’re having success with that.

I’ve been here a long time. I’ve been at Disney in games for 25 years. I’ve been on the roller coaster, and I’ve never been more excited about the opportunities we have lined up. You’re seeing some of them, whether it’s Massive and Ubisoft with Avatar or Rare and Microsoft with Sea of Thieves and Pirates of the Caribbean. We’re excited about what’s coming next.

GamesBeat: Call of Duty has an interesting funnel these days, where they start with Call of Duty Mobile. They have 500 million people that way. They have Warzone, a free-to-play console and PC game, 100 million players. That feeds into Cold War, a $60 packaged game that sold 40% better than the previous game in the series. It seems like no accident. You widen that funnel and eventually you widen the market for the franchise’s premium games. It seems like only the biggest companies can do that. I don’t know if Disney has looked at that strategy as well, where there’s a purpose to each game in that funnel.

Shoptaw: People’s strategic view of a game and that game’s purpose are going to differ greatly. If you’re developing a game like Call of Duty, that’s a significant franchise and an incredibly successful one. There’s a lot of ways you can continue to funnel users and grow that pie across platforms.

From a Disney perspective it’s obviously different. We’re working with partners to create experiences. Our strategy is, again, to bring as high a quality of product as we can to market. It’s not about platform-building. We’re not doing this vertically, building out platforms and doing things that might be the strategy of a big game developer.

For us, we’re certainly open to playing in a space that creates these multiplatform experiences that drive audiences in meaningful ways across products. It’s something we’d be happy to engage on if that kind of execution made sense for a franchise of ours. But again, our focus is generally tied to working with partners that can go elevate the IP, that can bring it to consumers in new, unique, innovative ways. If that outcome happens, to your initial question, that’s great. But it’s not core to our strategy because we’re not a developer. We don’t think about it through that lens. If they can leverage our IP in a similar way to Call of Duty, sure, we’re happy to engage on that conversation.




Ambarella unveils two new AI chip families for 4K security cameras





Ambarella unveiled two new AI chip families for 4K security cameras today as it pushes further into computer vision.

The Santa Clara, California-based company is introducing its new CV5S and CV52S security chips as the latest in its system-on-chip portfolio based on the CVflow architecture. The chips use an advanced 5-nanometer manufacturing process, where the width between circuits is five-billionths of a meter.

With the combination of new designs and better miniaturization from the manufacturing process, the SoCs can support simultaneous 4K encoding and advanced AI processing in a single low-power design, which provides great AI SoC performance per watt of power consumed, said Fermi Wang, CEO of Ambarella, in an interview with VentureBeat.

“If you have a camera, you want to cover a very big space, and then you want to use a 4K camera to cover all 360 degrees,” Wang said. “You want to have a single chip to talk to the 4K sensors. You want to process all of the videos together and analyze the video.”

The CV5S family targets security camera applications that require multiple sensors for 360-degree coverage over a wide area and at long range, such as outdoor city environments or large buildings. Ambarella designed the CV52S family for single-sensor security cameras with advanced AI performance that need to identify individuals or objects in a scene more clearly, including faces and license plate numbers at long distances, as in intelligent transportation system (ITS) traffic cameras.

Camera applications

Above: Ambarella’s CV5S AI chips can power the latest security cameras.

Image Credit: Ambarella

The cameras can be used for detecting the faces of criminals in crowds or recognizing license plates. There are of course privacy concerns about that. They can also send an alert if a crowd is forming in a part of a city, and they can monitor packages left behind in stations, airports, and more. Applications also include managing traffic congestion, detecting vehicle accidents, automating speed control, locating missing or stolen vehicles, monitoring queues in retail environments, better managing retail product placement, enhancing warehouse tracking — generally providing more actionable intelligence at both the store and corporate levels. And these more invasive applications can be countered with the ability to set up privacy masks, like continuously obscuring certain portions of larger scenes, preserving privacy.
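A privacy mask of that kind is conceptually simple: a rectangular region of each frame is overwritten before the video is stored or analyzed. A toy sketch on a grayscale frame represented as nested lists (invented for illustration; real masking happens in the camera's on-chip image pipeline):

```python
def apply_privacy_mask(frame, x0, y0, x1, y1, fill=0):
    """Overwrite the region [x0:x1, y0:y1) of a frame in place.

    `frame` is a list of rows, each a list of pixel intensities.
    The masked pixels never leave the camera as image data.
    """
    for row in frame[y0:y1]:
        row[x0:x1] = [fill] * (x1 - x0)
    return frame


# A 4x6 grayscale frame, all pixels at intensity 9; mask a 2x2 window
# starting at (x=1, y=1).
frame = [[9] * 6 for _ in range(4)]
apply_privacy_mask(frame, 1, 1, 3, 3)
print(frame[1])  # [9, 0, 0, 9, 9, 9]
```

Because the mask is applied continuously per frame rather than in post-processing, the obscured region is unrecoverable downstream.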

“We are generating revenue based on all of the CVflow family today,” Wang said. “If you look at all the possible applications that we can address here — our current security camera market, automotive markets — you can see that there’s a lot more opportunities. We’re talking about smart homes, smart cities, smart retail, and also in the future robotics. There are many, many applications that we can address moving forward.”

John Lorenz, senior technology and market analyst at Yole Développement, said in a statement that security system designers want higher resolution cameras, more channels, and faster AI. He said Ambarella’s new chips are competitive in the security chip market, which is expected to exceed $4 billion by 2025, with two-thirds of that being chips with AI capabilities.

The new CV5S SoC family supports multi-imager camera designs and can simultaneously process and encode four imager channels of up to 8 megapixels (MP), or 4K resolution, each at 30 frames per second, while performing advanced AI on each 4K imager. These SoCs double the encoding resolution and memory bandwidth while consuming 30% less power than Ambarella’s prior generation.
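Those figures imply a substantial raw pixel throughput. A quick back-of-the-envelope check of the stated encoding load:

```python
channels = 4        # simultaneous imager channels
megapixels = 8      # per frame, per channel (~4K resolution)
fps = 30            # frames per second per channel

pixels_per_second = channels * megapixels * 1_000_000 * fps
print(f"{pixels_per_second / 1e9:.2f} billion pixels/s")  # 0.96 billion pixels/s
```

Nearly a billion pixels per second encoded while also running neural-network inference on each stream, which is why the performance-per-watt framing matters for always-on cameras.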


Above: Fermi Wang is CEO of Ambarella.

Image Credit: Ambarella

The new CV52S SoC family targets single-sensor security cameras and supports 4K resolution at 60fps, while providing four times the AI computer vision performance, two times the central processing unit (CPU) performance, and 50% more memory bandwidth than its predecessors. This increase in neural network (NN) performance enables more AI processing to be performed at the edge, instead of in the cloud.

“Because you do all the video analytics at the edge, the full video doesn’t need to leave the camera,” Wang said. “You only pass the data that you analyze along to the cloud.”
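The saving Wang describes comes from shipping compact detection metadata instead of pixels. A hypothetical sketch comparing the two payload sizes (the detection structure is invented for illustration):

```python
import json

# One uncompressed 8 MP frame: ~8 million pixels x 3 bytes (RGB).
frame_bytes = 8_000_000 * 3

# What an edge-analytics camera might actually send to the cloud
# (hypothetical schema, not a real Ambarella API):
detections = [
    {"label": "vehicle", "plate": "ABC123", "bbox": [412, 96, 620, 240], "conf": 0.97},
    {"label": "person", "bbox": [80, 300, 140, 460], "conf": 0.88},
]
payload_bytes = len(json.dumps(detections).encode("utf-8"))

print("raw frame:", frame_bytes, "bytes")
print("metadata :", payload_bytes, "bytes")
```

Even against a single uncompressed frame, the metadata is smaller by several orders of magnitude, and the gap only widens at 30 or 60 frames per second.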

That’s important, as you don’t want traffic from self-driving cars to clog up the wireless connections to datacenters.

“The biggest difference in our approach is something we’ve talked about in the past; we call it ‘algorithm first,’” Wang said. “When we do video compression, video or image processing, and then the computer vision for the deep neural network or AI processor, we look first at what kind of algorithm we want to implement, and use that to determine the hardware architecture. After we go through all the areas of study, we understand how an application works, which portions take the most computation, and where we can optimize without losing the performance or accuracy of the algorithm.”


Above: Ambarella is making AI chips for a spectrum of devices.

Image Credit: Ambarella

He added, “After going through all the tradeoffs, we create an architecture that tries to deliver not only the best performance but also the best power consumption. I think we are very differentiated and can compete.”

In addition to security, there are many other AI-based internet of things (IoT) applications that can take advantage of the high resolution and advanced AI processing provided by these new SoC families. For example, smart cities can leverage high edge AI performance and image resolution for improved traffic management, accident detection, and automated speed control, as well as the rapid location of missing and stolen vehicles.

“If there’s an accident where there is traffic congestion, or if you need to find a missing vehicle, then you can get enough information to monitor all of the smart city requirements,” Wang said. “You need to do real-time management.”

Likewise, smart retail operations can use this resolution and advanced AI to better manage product placement, adjust cashier staffing for real-time line management, enhance warehouse product tracking, and provide more actionable intelligence at both the store and corporate levels.

The chip families share features such as a software development kit (SDK) for the security camera market, CVflow development tools, and dual Arm A76 1.6GHz CPUs with 1MB of L3 cache memory and a two-times performance gain over prior generations. They also have enhanced image signal processing with high-dynamic range, low-light ISO performance, dewarping, and rotation.

The SoCs also include on-chip privacy masking to block out a portion of the captured scene, connector interfaces, and on-chip cybersecurity hardware with secure boot, one-time programmable (OTP) memory, and Arm TrustZone technology. They can support up to 14 cameras and a variety of memory types.

The CV5S and CV52S SoC families are expected to be available for sampling in October.



Above: Ambarella’s chip portfolio.

Image Credit: Ambarella

Ambarella was founded in 2005. It has evolved over the years from a video processor chip design firm to a designer of computer vision chips for a variety of markets. It started with video processors for still and video cameras. Then it transitioned to making AI-based chips for automobiles and security cameras.

“We went through many different markets, some good, some bad,” Wang said. “We did chips for camcorders and GoPro sports cameras, DJI drone cameras, and eventually there was no innovation in these markets.”

The markets that have lasted longer include security cameras, which require ever-increasing levels of resolution and quality, and automotive cameras, which started in 2011 and continue today as cars need more video sensors and AI processing to distinguish driving hazards.

“Over time, we believed that video analytics would become very important, where you can interpret what the computer vision shows,” Wang said.

Deep learning neural networks are necessary for that work, and that has put a lot of pressure on delivering better AI processing with greater efficiency, lower power consumption, and lower costs.


Above: Ambarella’s CVflow architecture is the basis for a lot of chip families.

Image Credit: Ambarella

“We start our CVflow family with 10-nanometer production, and today we go to five nanometers,” Wang said. “And also we build tons of different software, including a tool to convert any neural network designed by our customers.”

In the past several years, Ambarella has spent $500 million on research and development on computer vision, and it reported last year’s revenue for the segment at $25 million, Wang said. Analysts are expecting the company to hit $75 million in computer vision revenue in 2021.

“It’s not only just a product anymore,” Wang said. “It’s really a revenue generator for our customers. We proved that the investment was really important, and I’m glad we went through that.”

The company has about 40 mass-production customers now. And they are asking for better and better performance.

“If you use 8K performance to process multiple video streams at the same time, then it becomes a mainstream product,” Wang said. “In fact, I can say that in security cameras, people want to connect four to six to eight cameras to one single chip.”


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



Evolution, rewards, and artificial intelligence





Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that with a well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

The article and the paper triggered a heated debate on social media, with reactions going from full support of the idea to outright rejection. Of course, both sides make valid claims. But the truth lies somewhere in the middle. Natural evolution is proof that the reward hypothesis is scientifically valid. But implementing the pure reward approach to reach human-level intelligence has some very hefty requirements.

In this post, I’ll try to disambiguate in simple terms where the line between theory and practice stands.

Natural selection

In their paper, the DeepMind scientists present the following hypothesis: “Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.”

Scientific evidence supports this claim.

Humans and animals owe their intelligence to a very simple law: natural selection. I’m not an expert on the topic, but I suggest reading The Blind Watchmaker by biologist Richard Dawkins, which provides a very accessible account of how evolution has led to all forms of life and intelligence on our planet.

In a nutshell, nature gives preference to lifeforms that are better fit to survive in their environments. Those that can withstand challenges posed by the environment (weather, scarcity of food, etc.) and other lifeforms (predators, viruses, etc.) will survive, reproduce, and pass on their genes to the next generation. Those that don’t are eliminated.

According to Dawkins, “In nature, the usual selecting agent is direct, stark and simple. It is the grim reaper. Of course, the reasons for survival are anything but simple — that is why natural selection can build up animals and plants of such formidable complexity. But there is something very crude and simple about death itself. And nonrandom death is all it takes to select phenotypes, and hence the genes that they contain, in nature.”

But how do different lifeforms emerge? Every newly born organism inherits the genes of its parent(s). But unlike in the digital world, copying in organic life is not exact. Therefore, offspring often undergo mutations, small changes to their genes that can have a huge impact across generations. These mutations can have a simple effect, such as a small change in muscle texture or skin color. But they can also become the core for developing new organs (e.g., lungs, kidneys, eyes) or shedding old ones (e.g., tail, gills).

If these mutations help improve the chances of the organism’s survival (e.g., better camouflage or faster speed), they will be preserved and passed on to future generations, where further mutations might reinforce them. For example, the first organism that developed the ability to parse light information had an enormous advantage over all the others that didn’t, even though its ability to see was not comparable to that of animals and humans today. This advantage enabled it to better survive and reproduce. As its descendants reproduced, those whose mutations improved their sight outmatched and outlived their peers. Through thousands (or millions) of generations, these changes resulted in a complex organ such as the eye.

The simple mechanisms of mutation and natural selection have been enough to give rise to all the different lifeforms we see on Earth, from bacteria to plants, fish, birds, amphibians, and mammals.
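That mutate-and-select loop can be sketched as a toy program. This is purely illustrative: the bit-string genome, the fitness target, and the mutation rate and population size are arbitrary choices for the example, not anything from the article or from evolutionary biology.

```python
import random

def evolve(genome_len=20, pop_size=50, mutation_rate=0.05, generations=200):
    """Toy natural selection: evolve bit-string genomes toward a target."""
    # Fitness: how many bits match an arbitrary "well-adapted" genome.
    target = [1] * genome_len
    fitness = lambda g: sum(a == b for a, b in zip(g, target))

    # Start from a random population.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives and reproduces.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Reproduction with imperfect copying: each bit may flip (mutate).
        children = [[bit ^ (random.random() < mutation_rate) for bit in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(fitness(g) for g in pop)

best_fitness = evolve()
```

Even with random, undirected mutations, the selection step alone is enough to push the population toward the "fit" genome over a few hundred generations, which is the crude point the passage above makes about nonrandom death selecting phenotypes.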

The same self-reinforcing mechanism has also created the brain and its associated wonders. In her book Conscience: The Origin of Moral Intuition, scientist Patricia Churchland explores how natural selection led to the development of the cortex, the main part of the brain that gives mammals the ability to learn from their environment. The evolution of the cortex has enabled mammals to develop social behavior and learn to live in herds, prides, troops, and tribes. In humans, the evolution of the cortex has given rise to complex cognitive faculties, the capacity to develop rich languages, and the ability to establish social norms.

Therefore, if you consider survival as the ultimate reward, the main hypothesis that DeepMind’s scientists make is scientifically sound. However, when it comes to implementing this rule, things get very complicated.

Reinforcement learning and artificial general intelligence

In their paper, DeepMind’s scientists make the claim that the reward hypothesis can be implemented with reinforcement learning algorithms, a branch of AI in which an agent gradually develops its behavior by interacting with its environment. A reinforcement learning agent starts by making random actions. Based on how those actions align with the goals it is trying to achieve, the agent receives rewards. Across many episodes, the agent learns to develop sequences of actions that maximize its reward in its environment.

According to the DeepMind scientists, “A sufficiently powerful and general reinforcement learning agent may ultimately give rise to intelligence and its associated abilities. In other words, if an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent’s behaviour.”

In an online debate in December, computer scientist Richard Sutton, one of the paper’s co-authors, said, “Reinforcement learning is the first computational theory of intelligence… In reinforcement learning, the goal is to maximize an arbitrary reward signal.”

DeepMind has a lot of evidence to back this claim. The company has already developed reinforcement learning agents that can outmatch humans in Go, chess, Atari, StarCraft, and other games. It has also developed reinforcement learning models to make progress in some of the most complex problems of science.

The scientists further wrote in their paper, “According to our hypothesis, general intelligence can instead be understood as, and implemented by, maximising a singular reward in a single, complex environment [emphasis mine].”

This is where the hypothesis separates from practice. The keyword here is “complex.” The environments that DeepMind (and its quasi-rival OpenAI) have so far explored with reinforcement learning are not nearly as complex as the physical world. And they still required the financial backing and vast computational resources of very wealthy tech companies. In some cases, the researchers had to dumb down the environments to speed up the training of their reinforcement learning models and cut costs. In others, they had to redesign the reward to make sure the RL agents did not get stuck in the wrong local optimum.

(It is worth noting that the scientists do acknowledge in their paper that they can’t offer a “theoretical guarantee on the sample efficiency of reinforcement learning agents.”)

Now, imagine what it would take to use reinforcement learning to replicate evolution and reach human-level intelligence. First you would need a simulation of the world. But at what level would you simulate the world? My guess is that anything short of quantum scale would be inaccurate. And we don’t have a fraction of the compute power needed to create quantum-scale simulations of the world.

Let’s say we did have the compute power to create such a simulation. We could start at around 4 billion years ago, when the first lifeforms emerged. We would need an exact representation of the state of Earth at that time, and we still don’t have a definite theory on that initial state.

An alternative would be to take a shortcut and start from, say, 8 million years ago, when our monkey ancestors still lived on Earth. This would cut down the training time, but we would have a much more complex initial state to start from. At that time, there were millions of different lifeforms on Earth, and they were closely interrelated. They evolved together. Taking any of them out of the equation could have a huge impact on the course of the simulation.

Therefore, you basically have two key problems: compute power and initial state. The further you go back in time, the more compute power you’ll need to run the simulation. On the other hand, the further forward you move, the more complex your initial state will be. And evolution has created all sorts of intelligent and non-intelligent lifeforms; making sure we could reproduce the exact steps that led to human intelligence, without any guidance and only through reward, is a hard bet.

Above: A robot working in a kitchen.

Image Credit: Depositphotos

Many will say that you don’t need an exact simulation of the world, and that you only need to approximate the problem space in which your reinforcement learning agent operates.

For example, in their paper, the scientists mention the example of a house-cleaning robot: “In order for a kitchen robot to maximise cleanliness, it must presumably have abilities of perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue), and social intelligence (to encourage young children to make less mess). A behaviour that maximises cleanliness must therefore yield all these abilities in service of that singular goal.”

This statement is true, but it downplays the complexities of the environment. Kitchens were created by humans. The shape of drawer handles, doorknobs, floors, cupboards, walls, tables, and everything else you see in a kitchen has been optimized for the sensorimotor functions of humans. Therefore, a robot that wants to work in such an environment would need to develop sensorimotor skills similar to those of humans. You can create shortcuts, such as avoiding the complexities of bipedal walking or hands with fingers and joints. But then there would be incongruencies between the robot and the humans using the kitchen. Many scenarios that are easy for a human to handle (stepping over an overturned chair, for example) would become prohibitive for the robot.

Other skills, such as language, would require even more shared infrastructure between the robot and the humans in its environment. Intelligent agents must be able to develop abstract mental models of each other to cooperate or compete in a shared environment. Language omits many important details, such as sensory experience, goals, and needs. We fill in the gaps with our intuitive and conscious knowledge of our interlocutor’s mental state. We might make wrong assumptions, but those are the exception, not the norm.

And finally, developing a notion of “cleanliness” as a reward is very complicated because it is very tightly linked to human knowledge, life, and goals. For example, removing every piece of food from the kitchen would certainly make it cleaner, but would the humans using the kitchen be happy about it?

A robot that has been optimized for “cleanliness” would have a hard time co-existing and cooperating with living beings that have been optimized for survival.

Here, you can take shortcuts again by creating hierarchical goals, equipping the robot and its reinforcement learning models with prior knowledge, and using human feedback to steer it in the right direction. This would help a lot in making it easier for the robot to understand and interact with humans and human-designed environments. But then you would be cheating on the reward-only approach. And the mere fact that your robot agent starts with predesigned limbs and image-capturing and sound-emitting devices is itself the integration of prior knowledge.

In theory, reward alone is enough for any kind of intelligence. But in practice, there’s a tradeoff between environment complexity, reward design, and agent design.

In the future, we might achieve a level of computing power that makes it possible to reach general intelligence through pure reward and reinforcement learning. But for the time being, what works are hybrid approaches that combine learning with complex engineering of rewards and AI agent architectures.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on TechTalks. Copyright 2021

