The four most common fallacies about AI

The history of artificial intelligence has been marked by repeated cycles of extreme optimism and promise followed by disillusionment and disappointment. Today’s AI systems can perform complicated tasks in a wide range of areas, such as mathematics, games, and photorealistic image generation. But some of the early goals of AI, like housekeeper robots and self-driving cars, continue to recede as we approach them.

Part of the continued cycle of missing these goals is due to incorrect assumptions about AI and natural intelligence, according to Melanie Mitchell, Davis Professor of Complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide For Thinking Humans.

In a new paper titled “Why AI is Harder Than We Think,” Mitchell lays out four common fallacies about AI that cause misunderstandings not only among the public and the media, but also among experts. These fallacies give a false sense of confidence about how close we are to achieving artificial general intelligence, AI systems that can match the cognitive and general problem-solving skills of humans.

Narrow AI and general AI are not on the same scale

The kind of AI we have today can be very good at solving narrowly defined problems. Such systems can outmatch humans at Go and chess, find cancerous patterns in X-ray images with remarkable accuracy, and convert audio data to text. But designing systems that can solve single problems does not necessarily get us closer to solving more complicated ones. Mitchell describes the first fallacy as “Narrow intelligence is on a continuum with general intelligence.”

“If people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI,” Mitchell writes in her paper.

For instance, today’s natural language processing systems have come a long way toward solving many different problems, such as translation, text generation, and question-answering on specific problems. At the same time, we have deep learning systems that can convert voice data to text in real-time. Behind each of these achievements are thousands of hours of research and development (and millions of dollars spent on computing and data). But the AI community still hasn’t solved the problem of creating agents that can engage in open-ended conversations without losing coherence over long stretches. Such a system requires more than just solving smaller problems; it requires common sense, one of the key unsolved challenges of AI.
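To see what that looks like in practice, here is a hedged sketch using an off-the-shelf text generator; the Hugging Face transformers library and the small GPT-2 model are my choices for illustration, not systems the paper evaluates. Sample a long continuation and read how quickly it drifts:

```python
# A hedged sketch of the coherence problem: sample a long continuation from a
# small general-purpose language model and watch it wander off topic.
from transformers import pipeline, set_seed

set_seed(0)  # make the sample reproducible

generator = pipeline("text-generation", model="gpt2")
prompt = "Customer: My package never arrived. Agent:"
print(generator(prompt, max_new_tokens=150)[0]["generated_text"])
```

Short replies from such models often look plausible; it is sustaining a goal-directed exchange over many turns, which requires common sense about the situation, that remains unsolved.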

The easy things are hard to automate

Above: Vision, one of the problems every living being solves without effort, remains a challenge for computers.

When it comes to humans, we would expect an intelligent person to do hard things that take years of study and practice. Examples might include tasks such as solving calculus and physics problems, playing chess at grandmaster level, or memorizing a lot of poems.

But decades of AI research have proven that the hard tasks, those that require conscious attention, are easier to automate. It is the easy tasks, the things that we take for granted, that are hard to automate. Mitchell describes the second fallacy as “Easy things are easy and hard things are hard.”

“The things that we humans do without much thought—looking out in the world and making sense of what we see, carrying on a conversation, walking down a crowded sidewalk without bumping into anyone—turn out to be the hardest challenges for machines,” Mitchell writes. “Conversely, it’s often easier to get machines to do things that are very hard for humans; for example, solving complex mathematical problems, mastering games like chess and Go, and translating sentences between hundreds of languages have all turned out to be relatively easier for machines.”

Consider vision, for example. Over billions of years, organisms have developed complex apparatuses for processing light signals. Animals use their eyes to take stock of the objects around them, navigate their surroundings, find food, detect threats, and accomplish many other tasks that are vital to their survival. We humans have inherited all those capabilities from our ancestors and use them without conscious thought. But the underlying machinery is in fact more complicated than the daunting mathematical formulas that frustrate us through high school and college.

Case in point: We still don’t have computer vision systems that are nearly as versatile as human vision. We have managed to create artificial neural networks that roughly mimic parts of the animal and human vision system, such as detecting objects and segmenting images. But they are brittle, sensitive to many different kinds of perturbations, and they can’t mimic the full scope of tasks that biological vision can accomplish. That’s why, for instance, the computer vision systems used in self-driving cars need to be complemented with advanced technology such as lidars and mapping data.
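That brittleness can be demonstrated in a few lines. Below is a minimal sketch of the well-known fast-gradient-sign technique, assuming PyTorch and torchvision are installed; the random tensor stands in for a real photo, and real use would normalize the input. The point is that nudging each pixel slightly in the direction that increases the model’s error can change the prediction.

```python
# Minimal sketch of a fast-gradient-sign perturbation, illustrating the
# brittleness discussed above. The random tensor is a placeholder for a
# correctly classified photo; with a real image, a perturbation this small
# routinely changes the predicted label.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()

image = torch.rand(1, 3, 224, 224)  # placeholder input in [0, 1]
image.requires_grad_(True)

logits = model(image)
label = logits.argmax(dim=1)        # the model's own prediction

# Gradient of the loss with respect to the *pixels*, not the weights.
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.005                     # an imperceptible per-pixel nudge
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print(label.item(), model(adversarial).argmax(dim=1).item())
```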

Another area that has proven to be very difficult is sensorimotor skills, which humans master without explicit training. Think of how you handle objects, walk, run, and jump. These are tasks you can do without conscious thought. In fact, while walking you can do other things, such as listen to a podcast or talk on the phone. But these kinds of skills remain a large and expensive challenge for current AI systems.

“AI is harder than we think, because we are largely unconscious of the complexity of our own thought processes,” Mitchell writes.

Anthropomorphizing AI doesn’t help

The field of AI is replete with vocabulary that puts software on the same level as human intelligence. We use terms such as “learn,” “understand,” “read,” and “think” to describe how AI algorithms work. While such anthropomorphic terms often serve as shorthand to help convey complex software mechanisms, they can mislead us to think that current AI systems work like the human mind.

Mitchell calls this fallacy “the lure of wishful mnemonics” and writes, “Such shorthand can be misleading to the public trying to understand these results (and to the media reporting on them), and can also unconsciously shape the way even AI experts think about their systems and how closely these systems resemble human intelligence.”
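It helps to remember what “learning” reduces to mechanically in these systems: adjusting numbers to shrink an error score. A minimal sketch, fitting a one-parameter line by gradient descent, with no assumptions beyond NumPy:

```python
# What machine "learning" usually means mechanically: iteratively adjusting
# parameters to reduce an error score. Here, fitting y = w * x.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                        # ground truth: the slope is 2

w = 0.0                            # the single "learned" parameter
for _ in range(100):
    error = w * x - y
    grad = 2 * np.mean(error * x)  # derivative of mean squared error w.r.t. w
    w -= 0.05 * grad               # nudge w downhill

print(w)                           # ~2.0: "learning" as numeric optimization
```

Deep learning scales this loop to billions of parameters, but the word “learn” is describing the same numeric fitting, not study or understanding in the human sense.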

The wishful mnemonics fallacy has also led the AI community to name algorithm-evaluation benchmarks in ways that are misleading. Consider, for example, the General Language Understanding Evaluation (GLUE) benchmark, developed by some of the most esteemed organizations and academic institutions in AI. GLUE provides a set of tasks that help evaluate how a language model can generalize its capabilities beyond the task it has been trained for. But contrary to what the media portray, if an AI agent gets a higher GLUE score than a human, it doesn’t mean that it is better at language understanding than humans.

“While machines can outperform humans on these particular benchmarks, AI systems are still far from matching the more general human abilities we associate with the benchmarks’ names,” Mitchell writes.
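For perspective, a benchmark score is simply agreement with labels on held-out examples. Here is a hedged sketch of evaluating a sentiment model on SST-2, one of the GLUE tasks, assuming the Hugging Face datasets and transformers libraries and network access to download both; the pipeline’s default SST-2 model is used for illustration:

```python
# A hedged sketch of what a GLUE-style score measures in practice:
# accuracy on a fixed labeled dataset, not "understanding."
from datasets import load_dataset
from transformers import pipeline

sst2 = load_dataset("glue", "sst2", split="validation[:100]")
classifier = pipeline("sentiment-analysis")  # a model fine-tuned on SST-2

predictions = classifier(sst2["sentence"])
label_map = {"NEGATIVE": 0, "POSITIVE": 1}   # map strings to SST-2 labels
correct = sum(
    label_map[pred["label"]] == gold
    for pred, gold in zip(predictions, sst2["label"])
)
print(f"accuracy on 100 held-out sentences: {correct / 100:.2f}")
```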

A stark example of wishful mnemonics is a 2017 project at Facebook Artificial Intelligence Research, in which scientists trained two AI agents to negotiate on tasks based on human conversations. In their blog post, the researchers noted that “updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating [emphasis mine].”

This led to a stream of clickbait articles that warned about AI systems that were becoming smarter than humans and were communicating in secret dialects. Four years later, the most advanced language models still struggle with understanding basic concepts that most humans learn at a very young age without being instructed.

AI without a body

Can intelligence exist in isolation from a rich physical experience of the world? This is a question that scientists and philosophers have puzzled over for centuries.

One school of thought believes that intelligence is all in the brain and can be separated from the body, also known as the “brain in a vat” theory. Mitchell calls it the “Intelligence is all in the brain” fallacy. With the right algorithms and data, the thinking goes, we can create AI that lives in servers and matches human intelligence. For the proponents of this way of thinking, especially those who support pure deep learning–based approaches, reaching general AI hinges on gathering the right amount of data and creating larger and larger neural networks.

Meanwhile, there’s growing evidence that this approach is doomed to fail. “A growing cadre of researchers is questioning the basis of the ‘all in the brain’ information processing model for understanding intelligence and for creating AI,” she writes.

Human and animal brains have evolved along with all other body organs with the ultimate goal of improving chances of survival. Our intelligence is tightly linked to the limits and capabilities of our bodies. And there is an expanding field of embodied AI that aims to create agents that develop intelligent skills by interacting with their environment through different sensory stimuli.

Mitchell notes that neuroscience research suggests that “neural structures controlling cognition are richly linked to those controlling sensory and motor systems, and that abstract thinking exploits body-based neural ‘maps.’” And in fact, there’s growing evidence and research that proves feedback from different sensory areas of the brain affects both our conscious and unconscious thoughts.

Mitchell supports the idea that emotions, feelings, subconscious biases, and physical experience are inseparable from intelligence. “Nothing in our knowledge of psychology or neuroscience supports the possibility that ‘pure rationality’ is separable from the emotions and cultural biases that shape our cognition and our objectives,” she writes. “Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world. It’s not at all clear that these attributes can be separated.”

Common sense in AI

Developing general AI requires adjusting our understanding of intelligence itself. We are still struggling to define what intelligence is and how to measure it in artificial and natural beings.

“It’s clear that to make and assess progress in AI more effectively, we will need to develop a better vocabulary for talking about what machines can do,” Mitchell writes. “And more generally, we will need a better scientific understanding of intelligence as it manifests in different systems in nature.”

Another challenge that Mitchell discusses in her paper is that of common sense, which she describes as “a kind of umbrella for what’s missing from today’s state-of-the-art AI systems.”

Common sense includes the knowledge that we acquire about the world and apply every day without much effort. We learn a great deal without explicit instruction, by exploring the world as children: concepts such as space, time, gravity, and the physical properties of objects. For example, a child learns at a very young age that when an object is occluded behind another, it has not disappeared and continues to exist, or that when a ball rolls across a table and reaches the ledge, it will fall off. We use this knowledge to build mental models of the world, make causal inferences, and predict future states with decent accuracy.
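The ball-and-ledge expectation can be written down as a few lines of explicit physics, which is exactly what a child never has to do; all the numbers below are illustrative:

```python
# A toy version of the child's mental model: a ball rolling toward a ledge
# keeps existing and falls once its support ends. All values are illustrative.
TABLE_LENGTH = 1.0   # meters
TABLE_HEIGHT = 0.75  # meters
DT, G = 0.01, 9.81   # time step (s) and gravity (m/s^2)

x, y = 0.0, TABLE_HEIGHT  # ball starts at the left edge of the tabletop
vx, vy = 0.5, 0.0         # rolling right at 0.5 m/s

while y > 0:
    x += vx * DT
    if x > TABLE_LENGTH:  # past the ledge: no more support
        vy -= G * DT
        y += vy * DT

print(f"the ball leaves the table and lands at x = {x:.2f} m")
```

A child predicts this effortlessly and without equations; getting a learning system to acquire the same expectation, and generalize it to cups, keys, and toys, is the hard part.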

This kind of knowledge is missing in today’s AI systems, which makes them unpredictable and data-hungry. In fact, housekeeping and driving, the two AI applications mentioned at the beginning of this article, are things that most humans learn through common sense and a little bit of practice.

Common sense also includes basic facts about human nature and life, things that we omit in our conversations and writing because we know our readers and listeners know them. For example, we know that if two people are “talking on the phone,” it means that they aren’t in the same room. We also know that if “John reached for the sugar,” it means that there was a container with sugar inside it somewhere near John. This kind of knowledge is crucial to areas such as natural language processing.
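One common way to probe a model for this kind of implicit knowledge is to ask it to fill a blank whose answer depends on it. A hedged sketch, assuming the Hugging Face transformers library; the model choice and the prompt are mine, for illustration:

```python
# Probe a masked language model for the "sugar is in a container" inference.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompt = "John reached for the sugar, which was in a [MASK] on the counter."
for candidate in fill(prompt)[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))
```

Plausible completions such as “bowl” or “jar” would show that the surface association is present; whether a model can reliably use that association for downstream inference is the open question.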

“No one yet knows how to capture such knowledge or abilities in machines. This is the current frontier of AI research, and one encouraging way forward is to tap into what’s known about the development of these abilities in young children,” Mitchell writes.

While we still don’t know the answers to many of these questions, a first step toward finding solutions is being aware of our own erroneous thoughts. “Understanding these fallacies and their subtle influences can point to directions for creating more robust, trustworthy, and perhaps actually intelligent AI systems,” Mitchell writes.

Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems.

This story originally appeared on Bdtechtalks.com. Copyright 2021

Incorta nabs $120M to power business data analytics


Incorta, an analytics platform designed to speed up data ingestion, this week announced that it raised $120 million in funding contributed by Prysm Capital, with participation from National Grid Ventures, GV, Kleiner Perkins, M12, Sorenson Capital, Telstra Ventures, Ron Wohl, and Silicon Valley Bank (in the form of a credit facility). CEO Scott Jones says that the capital, which brings Incorta’s total raised to $195 million, will be used to expand go-to-market operations and meet demand for Incorta’s analytics products.

According to a recent IDC study, 70% of CEOs acknowledge that their organization needs to become more data-driven, with 87% saying that becoming more agile and integrated is a top priority over the next five years. Meanwhile, new research from Ventana Research highlights where companies struggle most with data analytics: 55% of organizations report that the most time-consuming task in analytics is preparing the data. According to Ventana, 25% of organizations combine more than 20 data sources in their data preparation activities and 39% use more than 104.

Incorta, which was founded in 2014 by Oracle veterans Hichem Sellami, Klaus Fabian, Matthew Halliday, and Osama Elkady, offers a platform that aims to help companies acquire, enrich, analyze, and act upon business data. It can make upwards of tens of billions of rows of data “analytics-ready” without the need to pre-aggregate, reshape, or transform the data in any way, connecting to enterprise apps, data streams, and data files via over 240 integrations.

Above: Incorta’s management dashboard.

Image Credit: Incorta

“The unprecedented events of the past year highlight the importance of modern data analytics in today’s business environment — platforms and tools like Incorta that deliver data to users directly without costly systems and processes like data warehousing … severely limiting speed and agility,” Jones said in a press release. “After hitting a major inflection point in 2020, Incorta is now scaling fast to meet global demand for modern data analytics in the cloud.”

Data transformation

Ninety-five percent of businesses cite the need to manage unstructured data as a problem. Compounding the issue, 80% to 90% of the data companies generate today is unstructured, according to CIO.

Incorta addresses this by offering an enriched metadata map combined with smart query routing. The result is a repository for analytics and machine learning — one that can be run on-premises, hosted by a cloud provider, or delivered as a fully-managed cloud service. Incorta can run as a complete standalone data and analytics pipeline or as a component within a larger analytics and business intelligence tech portfolio, depending on an organization’s data analytics needs.

“Companies have an increasing need to gain insight and make decisions from data with speed and agility, and Incorta provides this mission-critical solution with a differentiated offering,” Muhammad Mian, cofounder and partner at Prysm Capital, said in a statement. “Prysm is excited to partner with an exceptional management team to support the growth of a product that is at the intersection of attractive long-term trends: the explosion of data, digital and cloud transformation, and business intelligence modernization.”

Incorta’s latest round of fundraising, a series D, comes after a year in which nearly 60% of the company’s new revenue came from organic expansion with existing customers across media and entertainment, social, high tech, ecommerce, and retail markets. Incorta recently launched Incorta Mobile, a data analytics experience for mobile devices, as well as partnerships with Microsoft Azure, Google Cloud, eCapital, and Tableau. And it has established a footprint in North America, the Middle East, the U.K., and Japan.

Magic: The Gathering’s Adventures in the Forgotten Realms delves into Dungeons


I once feared Magic: The Gathering would kill Dungeons & Dragons. Wizards of the Coast ended up saving it, and now, the granddaddy of trading card games is heading to the Forgotten Realms of Faerûn — and its Dungeons.

Today, Wizards of the Coast is showing off more cards from the Adventures in the Forgotten Realms set, which launches July 8 on Magic: The Gathering — Arena and July 23 in paper. In addition to bringing the likes of Drizzt Do’Urden, Tiamat, and Lolth the Spider Queen to Magic, Adventures in the Forgotten Realms introduces Dungeons to the card game.

Senior game designer James Wyatt (who also worked on two of my favorite D&D books, the 3rd Edition City of the Spider Queen and Draconomicon) and worldbuilding designer Meris Mullaley showed off a handful of the set’s cards to the press last week. And the three Dungeons and their Venture mechanic showed how the Magic team is approaching fitting Realmslore into the set.

Dungeon delving

Above: The dungeons of Adventures in the Forgotten Realms.

Image Credit: GamesBeat

The Dungeons are adaptations of existing D&D modules and campaigns that have appeared in 5th Edition (among others): the Lost Mine of Phandelver, the Tomb of Annihilation, and Dungeon of the Mad Mage.

“Whenever you have a card that tells you to Venture into the Dungeon, what you do is you pick one of these Dungeons, and you put a marker at the very top room. And every time you Venture, then you can move down a level — farther into the Dungeon — by one room,” Wyatt said in a video briefing.

Each player has their own Dungeons, so they could be exploring the Lost Mine of Phandelver at the same time (two people could be doing so in a 1-on-1 game, or three or four players could be in a Commander match). You can have one, two, or all three active at once. When you Venture, you can either go deeper into one or begin exploring another.
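As a rough mental model of the state the mechanic tracks, here is a toy sketch in Python, my simplification rather than Wizards’ rules text; real Dungeons branch into different rooms, and the room counts below are placeholders except for Undermountain’s seven Ventures, which Wyatt cites below.

```python
# Toy sketch of per-player Venture state. Real dungeons branch into different
# rooms; this simplification tracks only depth. Room counts are placeholders,
# except the seven Ventures cited for Dungeon of the Mad Mage.
class DungeonProgress:
    def __init__(self, name, rooms):
        self.name, self.rooms = name, rooms
        self.current = 0  # 0 = marker not yet placed at the top room

    def venture(self):
        """Move the marker one room deeper; True once the dungeon is done."""
        if self.current < self.rooms:
            self.current += 1
        return self.current == self.rooms

class Player:
    def __init__(self):
        self.dungeons = {}  # each player tracks their own markers

    def venture(self, name, rooms):
        progress = self.dungeons.setdefault(name, DungeonProgress(name, rooms))
        return progress.venture()

alice, bob = Player(), Player()
alice.venture("Lost Mine of Phandelver", rooms=5)  # placeholder room count
bob.venture("Lost Mine of Phandelver", rooms=5)    # both can explore it at once
for _ in range(7):
    finished = alice.venture("Dungeon of the Mad Mage", rooms=7)
print(finished)  # True: seven Ventures get you all the way through
```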

These Dungeons offer choices. You choose which one you want to delve into and which path you take. The Tomb of Annihilation has you sacrificing cards, artifacts, and life to gain a horrific benefit (which fits the theme of the lich Acererak’s deathtrap). I also find adding this dungeon interesting because Acererak was a card in Spellfire, which was D&D‘s failed answer to Magic back in the 1990s.

Halaster’s dungeon gives you more choices, but it takes longer to get through it (as befits the numerous levels of Undermountain).

“If you choose Dungeon of the Mad Mage, you’re really in this dungeon for a long time exploring the halls of Undermountain,” Wyatt said. “You need seven Ventures to get all the way through, but you have lots of choices to make as you go along the way.”

Dungeons are a neat way to capture the flavor of D&D within Magic. Undermountain has been a mainstay of the Realms since The Ruins of Undermountain boxed set in 1991; since then, TSR or Wizards of the Coast has published several campaign sets, adventures, game books, and even a board game about these halls.

The Magic team is using its existing combination of creatures, artifacts, and spells to take advantage of these Dungeons.


Above: These cards work with Dungeons, giving you benefits or helping you get through them.

Image Credit: GamesBeat

“There are a variety of cards that interact with Venture in interesting ways, including all the way down to Common [rarity] with things like Shortcut Seeker, hitting that classic trope of ‘look, there’s a trapdoor under the rug,’” Wyatt said. “Venture is a strong theme across all rarities, so there’s lots of opportunity for players to experience the thrill of exploring Dungeons.”

I asked if the Dungeons had special loot attached to them, such as a Sphere of Annihilation for the Tomb of Annihilation. A Wizards spokesperson on the call said we’d have to wait and see on that.

Give me land, lots of land

Another way to capture the flavor of the Forgotten Realms is with lands. The Basic lands all have some art or text reference to Faerûn, even if it’s not obvious at first glance.

What’s really interesting are some of the alternate land cards. One example is Evolving Wilds, a Magic staple. This treatment captures the style of classic D&D modules such as The Keep on the Borderlands (it even has the lavender-ish coloring).


Above: The Basic lands reference the Realms in their art and their text.

Image Credit: GamesBeat

The set will have nine of these lands, eight of them with new names.

“We’re calling this the Classic Module land frame. These are borderless module lands featuring art that is reminiscent of the cover art from classic Dungeons & Dragons adventure modules,” Mullaley said. “They’re all lands. There’s nine of them. This one is Evolving Wilds, but the other lands are new, with names that were created to sound like adventures.”

Seeing some of the Basic lands did raise a concern. The Forest doesn’t scream Forgotten Realms to me, and the text doesn’t add any flavor; it looks like it could fit into any other Magic set.

“We did a full concept push for this set, like we do for any Magic set. Obviously there’s already a ton of art exploring what the Forgotten Realms looks like. There’s not necessarily a ton of art or color art establishing the look of specific geographical regions like the Evermoors, or the Spine of the World, or the High Forest,” Wyatt said. “So all of these lands — almost all of these lands — do actually point to specific places that we developed in the world guide, though I think that forest right there is an example of elven architecture, rather than a specific place, so that was also one of the areas we explored in the world guide.

“If I’m remembering right, the cycles of lands include one of each land type in the Underdark, one that shows a settlement of various peoples of the Realms, one that is just a wilderness area, and one that includes some ruins of ancient civilizations. So there’s definitely a lot of Realms flavor, sometimes not obvious in there, but in there.”

Who’s the set for?


Above: Card treatments for Adventures in the Forgotten Realms include borderless art cards, special art cards that look like D&D stat blocks, and illustrations that hark back to 1st and 2nd Edition styles.

Image Credit: GamesBeat

As Mullaley and Wyatt showed off this batch of cards, I wondered (as did others on the briefing) who this set was for. Is it for Magic players, enticing them into something new? Is it for Realms fans who Wizards wants to push into Magic? Or folks like me, who enjoy both of Wizards’ big properties?

“I think that for someone who is familiar with Magic and not familiar with Dungeons & Dragons, it will be like encountering a completely new plane that we’ve created for the first time for a Magic set,” Mullaley said. “It’s for Standard play, so it’s built to work with all of the other sets in Standard. And while we created a few new mechanics that were kind of inspired by Dungeons & Dragons play for this set, for the most part, it plays like a Magic set, and it’s got the creature types you’ve come to expect and the standard exciting Magic gameplay, and the flavor of the world happens to be Dungeons & Dragons.

“So we’re hoping that, as you’re playing this, what might be a deep cut reference for a friend of yours might be something that sparks a bit of curiosity for you.”

One card that worries me is a Legendary character, the Dragonborn knight Nadaar, Selfless Paladin, a character created for this set. But why create new characters when you have official material going back to the “Grey Box” set of 1987, and Realms fans want characters they’ve come to love over the years, such as The Simbul, the dastardly wizard Manshoon, or even gods such as Bhaal?

“Hopefully, we can do both,” Wyatt said on mixing known and new characters together. “We have a lot of goals, putting Legends into a set, including hitting nostalgia, but also hitting various diversity milestones, trying to make sure that we’re reflecting our audience and the game as it is now, not as it was 25 years ago. So, yeah, we’re definitely trying to do both.”

Yesterday, Magic head designer Mark Rosewater published a blog post with a number of hints and teases that address my concerns. These include:

  • a Legendary creature that makes a Legendary Hamster creature token (this must be Minsc & Boo, the beloved duo from the Baldur’s Gate games)
  • a card that creates a Legendary creature token named Vecna (while Vecna is more associated with Greyhawk than the Realms, the lich is a popular figure in the D&D community and was part of Critical Role’s story)
  • a creature with a death trigger that makes an equipment token (this could be a Gelatinous Cube, with the remains of an adventurer inside it)
  • Spend this mana only to cast Dragon spells or activate abilities of Dragons (this could be from an Orb of Dragonkind)
  • Creature — Bird Bear (this must be an Owlbear)
  • Creature — Elf Spider (this must be a Drider, the drow that Lolth curses to be part elf, part spider, and all horror)
  • Legendary Creature — Devil God (this must be Asmodeus, who’s been playing with the Realms for some time now)
  • Legendary Creature — Beholder

Also yesterday, Wizards of the Coast put out a list of folks who will have card previews and the date they’re showing them off.

Tonkean raises $50M to expand its workflow automation platform


Tonkean, a software startup developing a no-code workflow automation platform, today announced that it nabbed $50 million in a series B round led by Accel with participation from Lightspeed Ventures and Foundation Capital. CEO Sagi Eliyahu says that the proceeds will be put toward scaling up the company’s hiring efforts across engineering and go-to-market teams.

San Francisco, California-based Tonkean was founded in 2015 by Eliyahu and Offir Talmor, who met at age 18 in the Israel Defense Forces (IDF), where they spent four years working on software technologies and challenges. Before founding Tonkean, Eliyahu was the VP of engineering at Jive Software, and many of Tonkean’s early R&D hires in Israel came from Eliyahu’s and Talmor’s IDF unit.

Eliyahu argues that the value proposition of Tonkean’s platform is twofold. It gives businesses and teams within those businesses the ability to tailor workflows to systems, employees, and processes. At the same time, it solves challenges in a way that doesn’t require many customizations.

“As [Jive] scaled, [we] encountered problems that large businesses often see as inevitable: a tech stack that balloons to include hundreds if not thousands of applications and inefficiency that ran rampant throughout the organization,” Eliyahu told VentureBeat via email. “Tonkean was built to solve the fundamental challenges of enterprise software to allow department and operational experts to actually deliver software with the flexibility to streamline business processes without introducing yet more apps.”

Workflow automation

Tonkean’s workflow designer features adaptive modules that can be added or removed in drag-and-drop fashion. Customers can use it to proactively reach out and follow up with people via email, Slack, or Microsoft Teams, delivering data and actions to them, or to track and manage performance across processes, people, and systems. They can also automate manual steps such as triaging finance requests, routing items to team members, and chasing status updates, or dive into the live details of individual jobs and see aggregate views of metrics and KPIs like turnaround time, turnover rate, and cycle times for tasks.
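To make that concrete, here is a generic sketch of one such follow-up rule, my illustration of the pattern rather than Tonkean’s actual API: watch a list of work items and chase the owner of anything that has gone stale.

```python
# A generic sketch of the automation pattern described above -- not Tonkean's
# API. A rule scans work items and performs a follow-up action when a
# condition holds (here: an open request older than a threshold).
import datetime as dt

def chase_stale_requests(requests, notify, max_age_days=3):
    """Chase status updates on open requests, as in the examples above."""
    today = dt.date.today()
    for req in requests:
        age = (today - req["opened"]).days
        if req["status"] == "open" and age > max_age_days:
            notify(req["owner"], f"Request {req['id']} is still open -- any update?")

requests = [
    {"id": 1, "owner": "dana@example.com", "status": "open",
     "opened": dt.date.today() - dt.timedelta(days=5)},
]
# In a real deployment, notify would post to email, Slack, or Teams.
chase_stale_requests(requests, notify=lambda to, msg: print(to, "<-", msg))
```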

“In many cases, Tonkean is reducing the need for internal custom development by IT and business technology teams or the need to purchase multiple packaged solutions to support needs from various business units,” Eliyahu said. “Tonkean operates at the cross-section of automation platforms like robotic process automation, integration platform as a service, and business process automation, often replacing but also often extending the value of these platforms by allowing enterprises to orchestrate more complex, human-centric processes and reducing the technical skill sets needed to leverage capabilities provided by technology platforms.”

Above: A screenshot of Tonkean’s workflow automation platform.

Image Credit: Tonkean

Tonkean says it already has “a few dozen” customers, mostly at the Fortune 1000 level — including Grubhub and Crypto.com.

“Tonkean’s AI-powered coordination engine can intelligently and proactively reach people by learning individual or team preferences, like what communication medium is preferred, and route alerts, data, or actions to the right place at the right time,” Eliyahu said. “Tonkean is the operating system for business operations, and as such can be used to deliver use cases in any business operations function including revenue operations, legal operations, HR operations, finance operations, IT operations, and more.”

With this latest funding round, which also drew contributions from Zoom CEO Eric Yuan, Atlassian co-CEO Scott Farquhar, former Google CEO Eric Schmidt, and executives from UiPath, Tonkean has raised $81 million in venture capital to date. The company, which has over 60 employees, plans to expand its workforce to over 100 within the next year.
