Understanding the differences between biological and computer vision

Since the early years of artificial intelligence, scientists have dreamed of creating computers that can “see” the world. As vision plays a key role in many things we do every day, cracking the code of computer vision seemed to be one of the major steps toward developing artificial general intelligence.

But like many other goals in AI, computer vision has proven to be easier said than done. In 1966, scientists at MIT launched “The Summer Vision Project,” a two-month effort to create a computer system that could identify objects and background areas in images. But it took much more than a summer break to achieve those goals. In fact, it wasn’t until the early 2010s that image classifiers and object detectors were flexible and reliable enough to be used in mainstream applications.

In the past decades, advances in machine learning and neuroscience have helped make great strides in computer vision. But we still have a long way to go before we can build AI systems that see the world as we do.

Biological and Computer Vision, a book by Harvard Medical School professor Gabriel Kreiman, provides an accessible account of how humans and animals process visual data and how far we’ve come toward replicating these functions in computers.

Kreiman’s book helps us understand the differences between biological and computer vision. It details how billions of years of evolution have equipped us with a complicated visual processing system, and how studying that system has helped inspire better computer vision algorithms. Kreiman also discusses what separates contemporary computer vision systems from their biological counterparts.

While I would recommend a full read of Biological and Computer Vision to anyone who is interested in the field, I’ve tried here (with some help from Gabriel himself) to lay out some of my key takeaways from the book.

Hardware differences

In the introduction to Biological and Computer Vision, Kreiman writes, “I am particularly excited about connecting biological and computational circuits. Biological vision is the product of millions of years of evolution. There is no reason to reinvent the wheel when developing computational models. We can learn from how biology solves vision problems and use the solutions as inspiration to build better algorithms.”

And indeed, the study of the visual cortex has been a great source of inspiration for computer vision and AI. But before being able to digitize vision, scientists had to overcome the huge hardware gap between biological and computer vision. Biological vision runs on an interconnected network of cortical cells and organic neurons. Computer vision, on the other hand, runs on electronic chips composed of transistors.

Therefore, a theory of vision must be defined at a level that can be implemented in computers while remaining comparable to what living visual systems do. Kreiman calls this the “Goldilocks resolution,” a level of abstraction that is neither too detailed nor too simplified.

For instance, early efforts tried to tackle computer vision at a very abstract level, in a way that ignored how human and animal brains recognize visual patterns. Those approaches proved to be brittle and inefficient. Studying and simulating brains at the molecular level, on the other hand, would be computationally intractable.

“I am not a big fan of what I call ‘copying biology,’” Kreiman told TechTalks. “There are many aspects of biology that can and should be abstracted away. We probably do not need units with 20,000 proteins and a cytoplasm and complex dendritic geometries. That would be too much biological detail. On the other hand, we cannot merely study behavior—that is not enough detail.”

In Biological and Computer Vision, Kreiman defines the Goldilocks resolution for neocortical circuits as the activity of neurons measured at millisecond granularity. Advances in neuroscience and medical technology have made it possible to study individual neurons at that timescale.

And the results of those studies have helped develop different types of artificial neural networks, AI algorithms that loosely simulate the workings of cortical areas of the mammal brain. In recent years, neural networks have proven to be the most efficient algorithm for pattern recognition in visual data and have become the key component of many computer vision applications.

Architecture differences

Above: Biological and Computer Vision, by Gabriel Kreiman.

The recent decades have seen a slew of innovative work in the field of deep learning, which has helped computers mimic some of the functions of biological vision. Convolutional layers, inspired by studies made on the animal visual cortex, are very efficient at finding patterns in visual data. Pooling layers help generalize the output of a convolutional layer and make it less sensitive to the displacement of visual patterns. Stacked on top of each other, blocks of convolutional and pooling layers can go from finding small patterns (corners, edges, etc.) to complex objects (faces, chairs, cars, etc.).
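
To make the stacking idea concrete, here is a minimal sketch in PyTorch (a framework choice of mine; the article names none) of convolutional and pooling blocks that move from edge-scale patterns toward object-scale features:

```python
import torch
import torch.nn as nn

# A small stack of conv + pool blocks: early layers respond to local
# patterns (edges, corners); later layers see larger receptive fields
# and can respond to whole objects.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level patterns
    nn.ReLU(),
    nn.MaxPool2d(2),                              # tolerate small shifts
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # object-scale features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),  # e.g., scores for 10 object classes
)

x = torch.randn(1, 3, 64, 64)  # one 64x64 RGB image
print(model(x).shape)          # torch.Size([1, 10])
```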

But there’s still a mismatch between the high-level architecture of artificial neural networks and what we know about the mammal visual cortex.

“The word ‘layers’ is, unfortunately, a bit ambiguous,” Kreiman said. “In computer science, people use layers to connote the different processing stages (and a layer is mostly analogous to a brain area). In biology, each brain region contains six cortical layers (and subdivisions). My hunch is that the six-layer structure (the connectivity of which is sometimes referred to as a canonical microcircuit) is quite crucial. It remains unclear which aspects of this circuitry we should include in neural networks. Some may argue that aspects of the six-layer motif are already incorporated (e.g. normalization operations). But there is probably enormous richness missing.”

Also, as Kreiman highlights in Biological and Computer Vision, information in the brain moves in several directions. Light signals move from the retina through V1, V2, and other areas of the visual cortex up to the inferior temporal cortex. But each area also sends feedback to its predecessors. And within each area, neurons interact and pass information between each other. All these interactions and interconnections help the brain fill in the gaps in visual input and make inferences when it has incomplete information.

In contrast, in artificial neural networks, data usually moves in a single direction. Convolutional neural networks are “feedforward networks,” which means information flows only from the input layer through the higher layers to the output layer.

There’s a feedback mechanism called “backpropagation,” which helps correct mistakes and tune the parameters of neural networks. But backpropagation is computationally expensive and only used during the training of neural networks. And it’s not clear if backpropagation directly corresponds to the feedback mechanisms of cortical layers.
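
To illustrate the distinction, here is a hedged PyTorch sketch (again my framework choice): backpropagation runs only inside the training step, while inference is a single feedforward pass:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 28, 28)   # dummy batch of images
labels = torch.randint(0, 10, (8,))  # dummy class labels

# Training step: a forward pass, then the computationally expensive
# backward (backpropagation) pass that tunes the parameters.
opt.zero_grad()
loss = loss_fn(net(images), labels)
loss.backward()
opt.step()

# Inference: information flows strictly forward; no error signal
# travels back through the network.
with torch.no_grad():
    predictions = net(images).argmax(dim=1)
```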

On the other hand, recurrent neural networks, which feed the output of higher layers back into the input of lower layers, still have limited use in computer vision.

Above: In the visual cortex (right), information moves in several directions. In neural networks (left), information moves in one direction.

In our conversation, Kreiman suggested that lateral and top-down flow of information can be crucial to bringing artificial neural networks to their biological counterparts.

“Horizontal connections (i.e., connections for units within a layer) may be critical for certain computations such as pattern completion,” he said. “Top-down connections (i.e., connections from units in a layer to units in a layer below) are probably essential to make predictions, for attention, to incorporate contextual information, etc.”
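
As a toy illustration of the top-down idea (a sketch of my own, not a model from the book), a higher stage’s activity can be projected back and mixed into a lower stage’s input on the next timestep:

```python
import torch
import torch.nn as nn

class FeedbackNet(nn.Module):
    """Toy two-stage network with a top-down connection: the higher
    stage's activity is projected back and added to the lower stage's
    input on the next timestep."""

    def __init__(self, dim=32):
        super().__init__()
        self.lower = nn.Linear(dim, dim)    # 'early' stage
        self.higher = nn.Linear(dim, dim)   # 'late' stage
        self.topdown = nn.Linear(dim, dim)  # feedback projection

    def forward(self, x, steps=3):
        feedback = torch.zeros_like(x)
        for _ in range(steps):
            low = torch.relu(self.lower(x + feedback))
            high = torch.relu(self.higher(low))
            feedback = self.topdown(high)  # top-down signal for next step
        return high

out = FeedbackNet()(torch.randn(4, 32))
```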

He also pointed out that neurons have “complex temporal integrative properties that are missing in current networks.”

Goal differences

Evolution has managed to develop a neural architecture that can accomplish many tasks. Several studies have shown that our visual system can dynamically tune its sensitivities to common patterns in its environment. Creating computer vision systems that have this kind of flexibility remains a major challenge, however.

Current computer vision systems are designed to accomplish a single task. We have neural networks that can classify objects, localize objects, segment images into different objects, describe images, generate images, and more. But each neural network can accomplish a single task alone.

Above: Harvard Medical School professor Gabriel Kreiman, author of “Biological and Computer Vision.”

“A central issue is to understand ‘visual routines,’ a term coined by Shimon Ullman; how can we flexibly route visual information in a task-dependent manner?” Kreiman said. “You can essentially answer an infinite number of questions on an image. You don’t just label objects, you can count objects, you can describe their colors, their interactions, their sizes, etc. We can build networks to do each of these things, but we do not have networks that can do all of these things simultaneously. There are interesting approaches to this via question/answering systems, but these algorithms, exciting as they are, remain rather primitive, especially in comparison with human performance.”

Integration differences

In humans and animals, vision is closely tied to the senses of smell, touch, and hearing. The visual, auditory, somatosensory, and olfactory cortices interact and pick up cues from each other to adjust their inferences of the world. In AI systems, on the other hand, each of these modalities is handled separately.

Do we need this kind of integration to make better computer vision systems?

“As scientists, we often like to divide problems to conquer them,” Kreiman said. “I personally think that this is a reasonable way to start. We can see very well without smell or hearing. Consider a Chaplin movie (and remove all the minimal music and text). You can understand a lot. If a person is born deaf, they can still see very well. Sure, there are lots of examples of interesting interactions across modalities, but mostly I think that we will make lots of progress with this simplification.”

However, a more complicated matter is the integration of vision with more complex areas of the brain. In humans, vision is deeply integrated with other brain functions such as logic, reasoning, language, and common sense knowledge.

“Some (most?) visual problems may ‘cost’ more time and require integrating visual inputs with existing knowledge about the world,” Kreiman said.

He pointed to the following picture of former U.S. president Barack Obama as an example.

Above: Understanding what is going on in this picture requires world knowledge, social knowledge, and common sense.

To understand what is going on in this picture, an AI agent would need to know what the person on the scale is doing, what Obama is doing, who is laughing and why they are laughing, etc. Answering these questions requires a wealth of information, including world knowledge (scales measure weight), physics knowledge (a foot pressing on a scale exerts a force), psychological knowledge (many people are self-conscious about their weight and would be surprised if the reading came in well above usual), and social understanding (some people are in on the joke, some are not).

“No current architecture can do this. All of this will require dynamics (we do not appreciate all of this immediately and usually use many fixations to understand the image) and integration of top-down signals,” Kreiman said.

Areas such as language and common sense are themselves great challenges for the AI community. But it remains to be seen whether they can be solved separately and then integrated with vision, or whether integration itself is the key to solving all of them.

“At some point we need to get into all of these other aspects of cognition, and it is hard to imagine how to integrate cognition without any reference to language and logic,” Kreiman said. “I expect that there will be major exciting efforts in the years to come incorporating more of language and logic in vision models (and conversely incorporating vision into language models as well).”

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

Sea of Thieves: A Pirate’s Life — 5 details you’ll want to know


Rare revealed A Pirate’s Life during E3 last week. This new adventure for Sea of Thieves will give players a chance to adventure alongside Pirates of the Caribbean star Captain Jack Sparrow, and it arrives in the Xbox and PC game as a free expansion on June 22.

Still, there is a lot about the experience that we don’t know. I was part of a recent virtual preview for A Pirate’s Life, and it gave me an opportunity to learn a lot more about Rare’s Disney crossover.

And so, much unlike a pirate, I’ve decided to share this treasure of knowledge with you all. I’ve picked out five of the most interesting nuggets that I learned from Rare.

You can play it by yourself

Are you a Pirates of the Caribbean fan who is worried about finding a crew of other players before starting A Pirate’s Life? Don’t be! Rare has created the story so that it can be enjoyed by one, two, three, or four players. The difficulty and gameplay will scale based on the size of your party.

Playing by yourself may still sound like a lonely experience for a game that prides itself on co-op multiplayer, but you will have Jack Sparrow along to help you. He’ll assist you during fights by manning the cannons, and if you’re exploring the open seas you can find him looking at your map and commenting on the names of the game’s many islands.

Above: Jack Sparrow isn’t the only familiar character you’ll encounter.

Image Credit: Rare

You can start the story with a new character

Maybe you want to try A Pirate’s Life, but you aren’t a Sea of Thieves player. You’ll be fine. Even new characters can start the campaign.

Granted, you do have to at least complete the game’s opening tutorial, so A Pirate’s Life can’t be the literal first thing that you do. Still, that shouldn’t take you too long.

You can’t play as Jack Sparrow

Sorry, you won’t be able to play as the captain himself. However, you will be able to buy Pirates of the Caribbean-themed cosmetics for your character, including Jack Sparrow’s famous pirate outfit.

Sea of Thieves is still a game about being your own pirate. It’s also still, well, Sea of Thieves. A Pirate’s Life doesn’t turn the experience into a full-on Pirates of the Caribbean game. Rather, it brings the Disney characters into Sea of Thieves’ world.

Above: Jack acts as something of an AI crewmate during parts of the adventure.

Image Credit: Rare

No, that isn’t Johnny Depp

Jack Sparrow’s voice sounds convincing in the game, but it isn’t Johnny Depp you’re hearing. Rare hired Jared Butler for the role. The voice actor has voiced Jack before, including for 2019’s Kingdom Hearts III.

He also did the voice of the Mad Hatter in 2010’s Alice in Wonderland game (yes, they made a game based off of that movie), so he is an experienced Johnny Depp voice double.

It takes inspiration from the ride as well as the movies

I do like the Pirates of the Caribbean movies (some of them more than others), but my heart truly belongs to the original ride. So I was happy to hear that Rare is taking just as much inspiration from the Disneyland masterpiece.

You may have already noticed a tribute to the ride’s famous dog-and-key scene in the reveal trailer, but the game also has homages to other moments from the ride, including the mayor interrogation scene and the pirate ship attack. You’ll also hear a version of the eerie narration from before the ride’s second drop.

Oh, and you can also learn how to play the famous Pirates of the Caribbean song, “Yo Ho (A Pirate’s Life for Me).” That is reason enough for me to play Sea of Thieves’ new adventure.


Candy Shop Slaughter is a video game concept created by AI

It is possible for artificial intelligence to create a video game. Contrary to popular opinion and hopes for humanity, an AI came up with the basic design for a video game called Candy Shop Slaughter.

The game has all of the elements needed for success in the competitive mobile game industry. OnlineRoulette.com commissioned the project, which was created by Fractl, a South Florida growth marketing agency.

Games are thriving despite the pandemic and video game jobs are growing in spite of the competition from automation. Video games are a creative art, and it’s hard to believe that a machine can come up with the kind of creativity needed to make such a work. But we shouldn’t be too complacent about human ingenuity against the continuous improvement of AI.

That was part of the point of the project, said Kristin Tynski, cofounder of Fractl, in an interview with GamesBeat. The art for the game was created by Fractl’s artists. GPT-3 generated the text. With other AI projects, like JetPlay’s Ludo, AI is used to generate everything from the game art to the game characters and gameplay. It’s no longer the case that only humans can create games.

Joe Mercurio, creative strategy lead at Fractl, said in an interview with GamesBeat that he came up with the idea for the project and led its development, while Tynski worked on the AI outputs. Their company is an agency that works on growth campaigns for companies.

“A year or two ago, we received access to OpenAI technology, GPT-2, and then we got access to GPT-3,” said Mercurio. “We started fooling around with that. Kristin actually developed a full website that had a bunch of blog content that was completely AI-generated. We were just inspired to set up a bunch of different ideas. And for Online Roulette, we decided to explore a video game.”

Fractl’s creative team has always been interested in generative AI, and it saw GPT-2 and GPT-3 as a big advancement, Tynski said.

The agency created the game to see if people were interested in characters and gameplay created by the OpenAI program known as GPT-3, a text generator. Fractl used GPT-3 to create a hero character, bosses to battle, and friends to meet along the way in both story and arcade modes in Candy Shop Slaughter.
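
The article doesn’t publish Fractl’s prompts, but a minimal sketch of the general approach, using the Completion API that the openai Python library exposed for GPT-3 at the time (the prompt wording here is my own invention), could look like this:

```python
import openai  # pip install openai; the 2021-era Completion API

openai.api_key = "YOUR_API_KEY"

# Hypothetical prompt; Fractl's actual prompts were not published.
prompt = (
    "Invent a boss character for a fighting game set in a candy shop.\n"
    "Name:"
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base engine
    prompt=prompt,
    max_tokens=100,
    temperature=0.9,    # high temperature for more inventive output
)
print(response.choices[0].text)
```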

With the characters and gameplay created by GPT-3, OnlineRoulette then surveyed 1,000 gamers to find out if they would be willing to play it, how original they found the various aspects of the game, and whether they’d be willing to pay for it.

AI-developed story and arcade modes

Above: Candy Shop Slaughter characters were generated by AI.

Image Credit: Fractl

Using the OpenAI text generator GPT-3, Fractl created story, arcade, and multiplayer modes for the fictional video game.

In the synopsis, the AI created the main character, Freddy Skittle, and his best friend Ted. In story mode, the game uses a karma system where players accumulate experience points for the good actions they take along the way and lose experience points when they make poor choices. As they progress, players can unlock additional characters from the game’s universe, each with different strengths, who can aid in the boss battles they will encounter.
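
As a rough illustration of how such a karma system could be modeled (a hypothetical sketch; the article gives no thresholds or implementation details):

```python
class KarmaSystem:
    """Toy version of the described mechanic: good actions add XP,
    poor choices subtract it, and milestones unlock characters."""

    # Hypothetical unlock thresholds; the article gives no numbers.
    UNLOCKS = {100: "Ted", 250: "Cookie Sandwich"}

    def __init__(self):
        self.xp = 0
        self.unlocked = []

    def record_action(self, points):
        """Positive points for good actions, negative for poor choices."""
        self.xp = max(0, self.xp + points)
        for threshold, character in self.UNLOCKS.items():
            if self.xp >= threshold and character not in self.unlocked:
                self.unlocked.append(character)

player = KarmaSystem()
player.record_action(120)  # a good deed
print(player.unlocked)     # ['Ted']
```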

In arcade mode, Candy Shop Slaughter turns into a classic 3D fighting game, where blood and guts are transformed into candy and treats and players can experience plenty of food puns and jokes along the way. Players start by creating characters from a template and have the opportunity to unlock new costumes and weapons as they play.

AI-created video game characters

Above: Fractl’s team

Image Credit: Fractl

The AI also imagined 12 unique characters, bosses, and companions players could encounter in Candy Shop Slaughter.

The main protagonist Freddy Skittle throws knives and uses a retractable pocketknife in close combat. Bosses to fight in various levels include Pie Cake, who throws spiked pie slices in battle; Honey Bun, who evolves into a massive honey monster; and M&M’s Candy, the final boss who utilizes sweet soda bottles and candy worms in battle.

“GPT-3’s capabilities are pretty astounding, and it demonstrates a pretty fundamental shift in what generative AI is capable of,” Tynski said. “We’ve had a ton of fun doing this project and testing out the creative abilities of GPT-3 within the context of a specific idea.”

Will game developers lose their jobs to AI? Probably not real soon.

“AI is going to take a lot of jobs. And I think it’s going to transform all the other jobs,” said Tynski. “I think you’re always going to have to have a human that’s part of the creative process because I think other humans care who created it. What’s super cool about these technologies is they’ve democratized creativity in an amazing way. I think as a creator you can find something mutually beneficial in this technology.”

She added, “There are and will be a lot more companies that are basically packaging GPT-3 outputs of specific game styles, types, or use cases, and then using that to create some sort of service.”

Gamer impressions

Above: Gamer reactions to Candy Shop Slaughter.

Image Credit: Fractl

Seventy-seven percent of gamers indicated they would play Candy Shop Slaughter, and 65% of gamers would be willing to pay for the game.

When asked about its uniqueness, just 10% of gamers found it unoriginal or very unoriginal, while 54% said Candy Shop Slaughter was original, and 20% of gamers deemed it very original.

The most impressive part of Candy Shop Slaughter was the characters, which 67% of gamers ranked as high quality. Following the characters, more than half of gamers considered the overall game (58%), the storyline (55%), and the game title (53%) to be high quality.

Fifty-seven percent of gamers indicated Candy Shop Slaughter sounded more like a mobile game, while 43% believed it would be a console game. With the descriptions of gameplay in mind, 73% also said the story mode of the game sounded more appealing, compared to just 28% who felt more intrigued by the arcade mode.

With the descriptions and details of 12 different characters, 48% of gamers felt Freddy Skittle (the main character) sounded the most fun to play, followed by Cookie Sandwich (33%), Pie Cake (30%), and Honey Bun (30%).

Respondents were not informed that the video game, storylines, and characters were AI-generated.

“It wasn’t like we cherry-picked the results here,” Tynski said. “There were lots of other ones that we generally ended up generating later that were similarly good. It pulls from well-known tropes. It is pretty difficult for humans to differentiate the text that was generated by AI.”

OnlineRoulette.com got responses from 1,000 players, who were asked to rate the storylines and characters presented to them.

“As an agency, we see AI becoming a much more integrated piece of content generation and part of the creative process,” Tynski said. “I think we’re just starting to scratch the surface. And this is at the same time advancing very, very rapidly. So we just want to continue to explore what’s possible and help our clients to create cool things by integrating these new technologies.”


How Azur Games grew its hypercasual games to 1.5B downloads


Azur Games is one of those companies that has quietly become one of the top ten game publishers in the world, with more than 1.5 billion downloads. It did so by pivoting into the emerging market for hypercasual games on mobile devices.

Now the company has more than 300 employees in Cyprus and Eastern Europe, said Dmitry Yaminsky, CEO of Azur Games, in an interview with GamesBeat. And in the first quarter, Azur’s Stack Ball and Hit Master 3D games were among the top-20 most downloaded games worldwide, according to measurement firm Sensor Tower.

During this successful run, Yaminsky found that while many hypercasual games remain big hits for about a month, they quickly fall off after that. But the downloads don’t go to zero. Rather, they fall to maybe 10% or 20% of the early numbers. And then they stay at that level of perhaps 100,000 downloads a month for a long time, giving the games a longer life and a more predictable revenue stream. And with so many games in the library and upcoming pipeline, Azur Games has a pretty good business that can sustain the employee base, Yaminsky said. Stack Ball is the biggest hit, with more than 300 million downloads, while Worms Zone.io has more than 200 million downloads.

But the competition isn’t easy. There are rivals like Voodoo Games, SayGames, Rollic, and others.

Origins

Above: Modern Strike was the first hit by Azur Games.

Image Credit: Azur Games

Yaminsky previously worked in the advertising industry, but decided to move into games after a downturn struck in 2014 and 2015.

Yaminsky formally started the game publisher in 2016 in Moscow to publish midcore games, or titles with hardcore themes that can be played in short sessions on mobile devices. The company found a studio that was working on a title called Modern Strike Online, a mobile take on Counter-Strike, and at first helped it with the launch and marketing.

That game became a huge hit with more than 70 million downloads, and Azur Games acquired half of the development studio.

“It was very successful. And with the launch of the game, I decided to work on other games. And so the company started as a mobile publisher, and then we started our own development,” he said.

In 2017, hypercasual games — which take perhaps a minute to play — started taking off, thanks to new game companies like Voodoo Games. He moved the headquarters to Cyprus.

“We didn’t really know how to approach user acquisition for the new market at the time, so we decided to fund and conduct an experiment — two people from the team made a hypercasual game that became a hit,” Yaminsky said. “It turned out that two or three people can create a project with better metrics than a team of 60 in the midcore segment. That was our pivot.”

In the first experiment, one person worked on programming and another on art. It took about a month to finish the game. On its first day, the game generated $1,000 in advertising revenue. On the second day, it was $2,000.

“Then we started acquiring users,” he said. “Back then it was just so easy. There were so few competitors. A lot of people said it was impossible. I said it was my money. Let me waste it. In fact, the first game I launched was a real success.”

Growing the business

Above: Some of Azur Games’ employees in Cyprus.

Image Credit: Azur Games

Some companies started churning out games like factories. It was relatively easy to grow in hypercasual at that time since the market was small and the developers were very enthusiastic about presenting their games.

“That’s when we knew we needed to stand out from the other publishers, and we tried to see the teams for their potential, refine the prototypes, and accumulate expertise within the company,” Yaminsky said. “This was a breath of fresh air for the industry, since most publishers at that time just looked at the first metrics — if they were good, then they took the project; if not, they sent it back to the developer.”

Azur Games started to build an ecosystem that would be comfortable for the developers and grow projects within it. It shared its experience and actively helped budding studios and solo developers to enter the market. As a result, the marketing budgets grew, the studios learned to trust Azur Games, and the company began attracting a lot of new developers.

While the headquarters is in Cyprus, the team is spread out, with back offices in Dubai and Moscow. Most of the people work remotely, which helps the company grow quickly. About 50 people work in marketing and analytics, while a team of 30 motion designers work on creative ads that help the games spread. About 200 people work on midcore projects, which can have higher margins.

Above: Azur Games has lots of hypercasual titles.

Image Credit: Azur Games

“We’re trying to pave our own way,” Yaminsky said. “Many companies on the market are still waiting for finished projects with good metrics. But we at Azur Games believe in teams and improve the projects ourselves.”

While hypercasual games still provide most of the downloads, Azur Games has diversified into the casual and midcore segments. Those games will start coming out in the coming months and years.

The hypercasual department consists of several mini-teams, which include a producer, two or three product assistants, and two or three game designers. Each mini-team works with a limited number of studios.

“We prototype about 200 games a month, and after we test them, we launch about one or two games per month,” Yaminsky said. “In other words, to get a lot of downloads, you need to do a lot of work, which isn’t always visible from the outside.”

Staying ahead of competitors

Above: Worms Zone.io is one of Azur’s games.

Image Credit: Azur Games

Now that hypercasual is a big market, companies like Zynga have acquired hypercasual firms like Rollic, and the market is crowded.

“You can win as a company only if you share your expertise with developers more than the others, run tests faster, use your own analytics, and invest your skills and experience in development,” Yaminsky said. “We put the emphasis on communication and providing the necessary resources: for example, if the team doesn’t have motion designers, game designers or artists, we involve them as needed.”

In other words, the current strategy is to offer favorable conditions and development infrastructure within its ecosystem. This means that the company is willing to share anything that could help the developers make the right decision, trend-tracking data being one example. At the same time, the company never reworks the games for the studios and it only suggests the direction.

That means the company has to find the right teams to build long-term, mutually beneficial relationships. It has invested more than $10 million in developers to date. Many of the developers are in Eastern Europe, where companies have learned to move quickly and efficiently without running up high costs, Yaminsky said. There are also a lot of educated programmers in the region.

“First and foremost, we always assess the potential. If it’s there, we’re ready to invest our own efforts and substantial amounts of money,” Yaminsky said. “For instance, if there’s a studio with annual revenue of up to $5 million, we’re ready to invest up to $10 million for a 20% to 30% stake, even more in some cases. Meanwhile, the studio stays in control of the project, and we only help to grow it in all directions, including marketing.”

Above: Azur Games prototypes 200 games a month.

Image Credit: Azur Games

By 2019, the market had gotten a lot more competitive, and now it is even more heated. In May alone, the company spent more than $15 million on marketing. The company also tries to offer developers more favorable terms than others do, like paying well for each prototype. This covers development costs, so teams can feel comfortable and try more things than they would in a different setting, Yaminsky said.

“When it comes to the product strategy, we aim at increasing lifetime value of users and paying more attention to in-app monetization,” he said. “This means we’re planning to do deeper projects, but we always take the studio resources into account — if the developer doesn’t have a lot of experience, they work on simple mechanics.”

A hit game can get 300,000 to 500,000 downloads a day, but Yaminsky believes that the long term matters a lot. In its long tail, a hit can generate $100,000 to $400,000 a month. With 10 to 30 such hits, the long tail generates consistent revenue in the millions of dollars a month.
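
A quick back-of-the-envelope calculation with the article’s own figures shows the range that portfolio implies:

```python
# Back-of-the-envelope using the figures quoted above.
monthly_per_hit = (100_000, 400_000)  # long-tail revenue per hit, USD
portfolio = (10, 30)                  # number of such hits

low = portfolio[0] * monthly_per_hit[0]    # 10 hits x $100k
high = portfolio[1] * monthly_per_hit[1]   # 30 hits x $400k
print(f"${low:,} to ${high:,} per month")  # $1,000,000 to $12,000,000
```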

Now the company is looking for more game studios to invest in to keep generating more hits.

“The number of competitors keeps growing, and we have to stay competitive,” Yaminsky said.
