A simple model of the brain provides new directions for AI research



Last week, Google Research held an online workshop on the conceptual understanding of deep learning. The workshop, which featured presentations by award-winning computer scientists and neuroscientists, discussed how new findings in deep learning and neuroscience can help create better artificial intelligence systems.

While all the presentations and discussions were worth watching (and I might revisit them again in the coming weeks), one in particular stood out for me: A talk on word representations in the brain by Christos Papadimitriou, professor of computer science at Columbia University.

In his presentation, Papadimitriou, a recipient of the Gödel Prize and Knuth Prize, discussed how our growing understanding of information-processing mechanisms in the brain might help create algorithms that are more robust in understanding and engaging in conversations. Papadimitriou presented a simple and efficient model that explains how different areas of the brain inter-communicate to solve cognitive problems.

“What is happening now is perhaps one of the world’s greatest wonders,” Papadimitriou said, referring to how he was communicating with the audience. The speaker’s brain translates structured knowledge into sound waves, which travel across different mediums and reach the ears of the listener, where they are processed and turned back into structured knowledge by the listener’s brain.

“There’s little doubt that all of this happens with spikes, neurons, and synapses. But how? This is a huge question,” Papadimitriou said. “I believe that we are going to have a much better idea of the details of how this happens over the next decade.”

Assemblies of neurons in the brain

The cognitive and neuroscience communities are trying to make sense of how neural activity in the brain translates to language, mathematics, logic, reasoning, planning, and other functions. If scientists succeed at formulating the workings of the brain in terms of mathematical models, then they will open a new door to creating artificial intelligence systems that can emulate the human mind.

A lot of studies focus on activity at the level of single neurons. Until a few decades ago, scientists thought that single neurons corresponded to single thoughts. The most popular example is the “grandmother cell” theory, which claims there’s a single neuron in the brain that spikes every time you see your grandmother. More recent discoveries have refuted this claim and shown that each concept is associated with a large group of neurons, and that the groups of neurons linked to different concepts may overlap.

These groups of brain cells are called “assemblies,” which Papadimitriou describes as “a highly connected, stable set of neurons which represent something: a word, an idea, an object, etc.”

Award-winning neuroscientist György Buzsáki describes assemblies as “the alphabet of the brain.”

A mathematical model of the brain

To better understand the role of assemblies, Papadimitriou proposes a mathematical model of the brain called “interacting recurrent nets.” Under this model, the brain is divided into a finite number of areas, each of which contains several million neurons. There is recurrence within each area, meaning the neurons in an area interact with each other, and each area has connections to several other areas. These inter-area connections can be excited or inhibited.

The model rests on three properties: randomness, plasticity, and inhibition. Randomness means the neurons in each brain area are randomly connected to each other, and different areas have random connections between them. Plasticity enables the connections between neurons and areas to adjust through experience and training. And inhibition means that at any moment, only a limited number of neurons are firing.

Papadimitriou describes this as a very simple mathematical model that is based on “the three main forces of life.”
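For readers who want to see how these ingredients fit together, here is a minimal toy sketch in Python; the parameter values and variable names are illustrative assumptions, not the researchers’ simulation code. It sets up a single area of randomly connected neurons, caps firing at the k most strongly driven neurons (inhibition), and strengthens the synapses that just fired together (plasticity).

    import numpy as np

    # Toy illustration of the three ingredients above; not the authors' simulation code.
    rng = np.random.default_rng(0)
    n, k, p, beta = 1000, 50, 0.05, 0.10  # neurons per area, firing cap, connection prob., plasticity rate

    # Randomness: a directed random graph of synapses within one area.
    # W[i, j] is the weight of the synapse from neuron j to neuron i.
    W = (rng.random((n, n)) < p).astype(float)

    def step(active, W):
        # Each neuron sums the weights of synapses coming from the neurons that just fired.
        drive = W[:, active].sum(axis=1)
        # Inhibition: only the k most strongly driven neurons fire in the next round.
        winners = np.argsort(drive)[-k:]
        # Plasticity: synapses from the neurons that fired onto the new winners get stronger.
        W[np.ix_(winners, active)] *= 1.0 + beta
        return winners

    # Fire a random set of k neurons and let the area settle over a few rounds.
    active = rng.choice(n, size=k, replace=False)
    for _ in range(10):
        active = step(active, W)

In toy runs like this, the set of winning neurons tends to stop changing after a handful of rounds, which is the kind of stable, densely connected group the assembly picture describes.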

Along with a group of scientists from different academic institutions, Papadimitriou detailed this model in a paper published last year in the peer-reviewed scientific journal Proceedings of the National Academy of Sciences. Assemblies were the key component of the model and enabled what the scientists called “assembly calculus,” a set of operations that can enable the processing, storing, and retrieval of information.
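One of the paper’s basic operations is projection: an assembly firing repeatedly in one area causes a new, stable assembly to form in a downstream area. Below is a rough, self-contained sketch of that idea under the same toy assumptions as above; again, this is illustrative code, not the paper’s own.

    import numpy as np

    # Toy sketch of projection from area A to area B; illustrative only.
    rng = np.random.default_rng(1)
    n, k, p, beta = 1000, 50, 0.05, 0.10

    W_ab = (rng.random((n, n)) < p).astype(float)  # random synapses from A into B
    W_bb = (rng.random((n, n)) < p).astype(float)  # random recurrent synapses within B

    assembly_a = rng.choice(n, size=k, replace=False)  # a fixed assembly firing in area A
    active_b = np.array([], dtype=int)                 # nothing firing in B yet

    for _ in range(20):
        drive = W_ab[:, assembly_a].sum(axis=1)
        if active_b.size:
            drive += W_bb[:, active_b].sum(axis=1)
        winners = np.argsort(drive)[-k:]               # inhibition: cap firing at k neurons
        # Plasticity: strengthen synapses from the neurons that fired onto the winners.
        W_ab[np.ix_(winners, assembly_a)] *= 1.0 + beta
        if active_b.size:
            W_bb[np.ix_(winners, active_b)] *= 1.0 + beta
        overlap = len(set(winners.tolist()) & set(active_b.tolist())) / k
        active_b = winners

    # As rounds go on, the winners in B stop changing: a stable assembly has formed there.
    print(f"overlap between the last two rounds: {overlap:.2f}")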

“The operations are not just pulled out of thin air. I believe these operations are real,” Papadimitriou said. “We can prove mathematically and validate by simulations that these operations correspond to true behaviors… these operations correspond to behaviors that have been observed [in the brain].”

Papadimitriou and his colleagues hypothesize that assemblies and assembly calculus provide the correct model for explaining cognitive functions of the brain such as reasoning, planning, and language.

“Much of cognition could fit that,” Papadimitriou said in his talk at the Google deep learning conference.

Natural language processing with assembly calculus

To test their model of the mind, Papadimitriou and his colleagues tried implementing a natural language processing system that uses assembly calculus to parse English sentences. In effect, they were trying to create an artificial intelligence system that simulates areas of the brain that house the assemblies that correspond to lexicon and language understanding.


“What happens is that if a sequence of words excites these assemblies in lex, this engine is going to produce a parse of the sentence,” Papadimitriou said.

The system works exclusively through simulated neuron spikes (as the brain does), and these spikes are caused by assembly calculus operations. The assemblies correspond to areas in the medial temporal lobe, Wernicke’s area, and Broca’s area, three parts of the brain that are highly engaged in language processing. The model receives a sequence of words and produces a syntax tree. And their experiments show that in terms of speed and frequency of neuron spikes, their model’s activity corresponds very closely to what happens in the brain.
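To make the output concrete, a syntax tree is simply a nested structure that groups the words of a sentence into phrases. The toy below shows only that input/output shape, with a hand-written example sentence of my own choosing; the researchers’ system builds such trees through simulated spikes and assembly operations rather than hard-coded rules.

    from dataclasses import dataclass, field

    # Toy illustration of the output structure only; not the researchers' parser.
    @dataclass
    class Node:
        label: str                 # a phrase type ("S", "NP", "VP") or a word
        children: list = field(default_factory=list)

    # The kind of structure a parse of "the dog chased the cat" would produce.
    tree = Node("S", [
        Node("NP", [Node("the"), Node("dog")]),
        Node("VP", [Node("chased"),
                    Node("NP", [Node("the"), Node("cat")])]),
    ])

    def show(node, depth=0):
        print("  " * depth + node.label)
        for child in node.children:
            show(child, depth + 1)

    show(tree)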

The AI model is still very rudimentary and is missing many important parts of language, Papadimitriou acknowledges. The researchers are working on plans to fill the linguistic gaps that exist. But they believe that all these pieces can be added with assembly calculus, a hypothesis that will need to pass the test of time.


“Can this be the neural basis of language? Are we all born with such a thing in [the left hemisphere of our brain]?” Papadimitriou asked. There are still many questions about how language works in the human mind and how it relates to other cognitive functions. But Papadimitriou believes that the assembly model brings us closer to understanding these functions and answering the remaining questions.

Language parsing is just one way to test the assembly calculus theory. Papadimitriou and his collaborators are working on other applications, including learning and planning in the way that children do at a very young age.

“The hypothesis is that the assembly calculus—or something like it—fills the bill for access logic,” Papadimitriou said. “In other words, it is a useful abstraction of the way our brain does computation.”

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021

Lucidworks: Chatbots and recommendations boost online brand loyalty


Pandemic-related shutdowns led consumers to shift the bulk of their shopping online, and many of those shoppers are now hesitant to return to stores as businesses begin to open back up. A recent survey of 800 consumers conducted by cloud company Lucidworks found that 59% of shoppers plan to either avoid in-person shopping as much as possible or visit stores less often than before the pandemic.

Above: Shoppers across the U.S. and U.K. agree that high-quality products, personalized recommendations, and excellent customer service are the top three reasons they’re brand-loyal.

Image Credit: Lucidworks

As the world stabilizes, shoppers want brands to provide a multi-faceted shopping experience: expanded chatbot capabilities, diverse recommendations, and personalized experiences that take into account their preferences and history, Lucidworks found in its study. More than half of the shoppers surveyed (55%) said they use a site’s chatbot on every visit, and American shoppers use chatbots more than their counterparts in the United Kingdom, at 70%.

The majority of shoppers, 70%, use chatbots for customer service, and 53% said they want a chatbot to help them find specific products or check product compatibility. A little less than half, or 48%, said they use chatbots to find more information about a product, and 42% use chatbots to find policies such as shipping information and how to get refunds.

A quarter of shoppers will leave the website to seek information elsewhere if the chatbot doesn’t give them an answer. Brands that deploy chatbots capable of going beyond basic FAQs and performing product and content discovery will provide the well-rounded chatbot experience shoppers expect, Lucidworks said.

Respondents also pointed to the importance of content recommendations. The survey found that almost a third of shoppers find “suggested content” recommendations useful, and 61% like to do research via reviews on the website of the brand they’ll be purchasing from. A little over a third of shoppers (37%) use marketplaces such as Amazon, Google Shopping, and eBay for their research.

Brands should try to offer something for every step of the shopping journey, from research to purchase to support, to keep shoppers on their sites longer. How online shopping will look in the coming years is being defined at this very moment as the world reopens. Brands that can understand a shopper’s goal in the moment and deliver a connected experience tailored to who shoppers are and what they like are well-positioned for the future, Lucidworks said.

Lucidworks used a self-serve survey tool, Pollfish, in late May 2021 to survey 800 consumers over the age of 18 (400 in the U.K. and 400 in the U.S.) to understand how shoppers interact with chatbots and with product and content recommendations, where they prefer to do research, and what their plans are for future in-store shopping.

Read the full U.S./U.K. Consumer Survey Report from Lucidworks.

Breakroom teams up with High Fidelity to bring 3D audio to online meetings


Social meeting space Breakroom has integrated High Fidelity‘s 3D audio into its 3D virtual world for social and business events.

The deal is a convergence of virtual world pioneers who have made their mark on the development of virtual life. Philip Rosedale is the CEO of High Fidelity, and he also launched Second Life in 2003. And Sine Wave Entertainment, the creator of Breakroom, got its start as a content brand in Second Life before spinning out to create its own virtual meeting spaces for real-world events.

Adam Frisby, chief product officer and cofounder of Sine Wave, said in our interview conducted inside Breakroom that the High Fidelity spatial audio will help Breakroom create a triple-A quality experience in a virtual world.

“The real benefit of having 3D audio in a virtual world like this is you can have lots of conversations going on simultaneously,” said Frisby. “3D audio is the only way to replicate the real-world experience in an online environment. You can have a 150-person conference and end up with 10 groups of people talking at the same time. That has helped us with engagement.”

Above: Breakroom lets an event have dozens of simultaneous conversations where people don’t talk over each other, thanks to High Fidelity.

Image Credit: Sine Wave

Most online events get engagement times of 20 or 30 minutes. But Breakroom events, which typically draw 600 to 1,000 attendees, have average engagement times of an hour and 40 minutes, Frisby said.

Sine Wave’s Breakroom draws heavily on lessons learned in Second Life to create a frictionless, mass market, user-friendly virtual world.

“You can hear everything better with High Fidelity,” said Rosedale, in our interview in Breakroom. “Breakroom combines low-latency server-side video and spatial audio in a way that lets you hold an event like it’s in the real world.”

High Fidelity is a real-time communications company. Its mission is to build technologies that power more human experiences in today’s digital world. The company’s patented spatial audio technology, originally developed for its VR software platform, adds immersive, high-quality voice chat to any application — for groups of any size. You can really tell how close you are to someone in a High Fidelity space when they talk to you, as voices become fainter the farther away they are.
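As a rough illustration of how distance and direction can shape what each listener hears (a generic sketch, not High Fidelity’s actual algorithm; the function name, falloff curve, and parameters are assumptions):

    import math

    def spatialize(listener_xy, listener_facing_rad, speaker_xy, ref_dist=1.0):
        # Toy spatial-audio mix, not High Fidelity's implementation: gain falls off
        # with distance, and the voice is panned toward the side it comes from.
        dx = speaker_xy[0] - listener_xy[0]
        dy = speaker_xy[1] - listener_xy[1]
        dist = math.hypot(dx, dy)
        gain = min(1.0, ref_dist / max(dist, 1e-6))       # fainter the farther away
        angle = listener_facing_rad - math.atan2(dy, dx)  # direction relative to where the listener faces
        pan = math.sin(angle)                             # -1 = hard left, +1 = hard right
        left = gain * (1.0 - pan) / 2.0
        right = gain * (1.0 + pan) / 2.0
        return left, right

    # A speaker about two meters ahead of the listener and slightly to the right:
    print(spatialize((0.0, 0.0), math.pi / 2, (1.0, 2.0)))

Per-speaker gains along these lines are what let many conversations coexist in one room: distant groups fade to a murmur instead of competing with the person standing next to you.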

“We are super excited about this general direction and we wound up building the audio subsystem and extracting that first,” Rosedale said. “It works well where there is no possibility of face-to-face meetings.”

Above: I could hear Philip Rosedale’s voice clearly in this conversation in Breakroom.

Image Credit: Sine Wave

Spatial audio in a 3D virtual world helps channel spontaneous conversations into a fun, productive setting in a way that flatscreen video calls and webinars simply can’t match, Frisby said. It’s easy to tell in Breakroom who is speaking to you, and from what direction.

It took me a little while to figure out how to unmute my voice. Rosedale was jumping up and down while we were talking.

“It’s all remote rendered. And that means that we can bring people in on a variety of platforms,” Frisby said. “No matter what your target hardware is, you can actually get in here and still get good high fidelity. It’s a good quality 3D rendering experience here regardless of what device you’re on.”

I asked Rosedale if he could hear me chewing lettuce, which sounded very loud in my headset. But he said no. It definitely helps if you have a good headset with 3D audio.

Breakroom is being used by organizations like Stanford University, the United Nations, and The Economist. Breakroom runs on any device with a Chrome browser, offering good 3D graphics and audio quality, with no installation required.

Frisby said Breakroom is also a way for companies to let remote workers gather and meet each other in more relaxed environments, a sort of intermediate space between online-only tools and going back to work in offices.

Above: Breakroom and High Fidelity are enabling conferences with spatial audio.

Image Credit: Sine Wave

Its full suite of communication tools includes voice chat, instant messenger, and in-world email. It has video conferencing, media sharing, and desktop sharing tools. It has a diverse range of fully customizable avatars and scenes. You can get around just by pointing and clicking on the environment.

It also has event management tools to facilitate conversation and agenda flow, branded interactive exhibition stands, and private meeting rooms, available for rent by sponsors. It has environments including dance clubs, beach and mountain retreats, casual games, quiz shows, and live music/comedy shows. It has an integrated shop where brands can upload and sell their content to customers for real cash.

It gives you the ability to seamlessly license and import any item from the Unity Asset Store (Sine Wave is a verified partner of Unity). The iOS and Android versions of Breakroom are in closed beta, and versions for consoles and the Oculus Quest 2 are coming soon. It has LinkedIn and Eventbrite integration, including ticket sales. It also has a self-serve portal for customers to quickly customize and configure their organization’s Breakroom, as well as sub-licensing agreements that enable Breakroom customers to host and monetize events and experiences for their own customer base.

Frisby said keeping people from getting kicked out of the room has been a technical challenge, but his team managed to refine the technology during the pandemic. He thinks conferences are a great use case because so many people come together simultaneously and push the tech to its limit.

As for High Fidelity, Rosedale believes that the education market will come around, and the whole world will eventually move to better spatial experiences.

Moderne helps companies automate their code migration and fixes



While every company may well be a software company these days, the software development sphere has evolved greatly over the past decade to get to this stage, with developer operations (DevOps), agile, and cloud-native considerations at the forefront.

Moreover, with APIs and open source software now serving as critical components of most modern software stacks, tracking code changes and vulnerabilities introduced by external developers can be a major challenge. This is something fledgling startup Moderne is setting out to solve with a platform that promises to automatically “fix, upgrade, and secure code” in minutes, including offering support for framework or API migrations and applying CVE (common vulnerabilities and exposures) patches.

The Seattle-based company, which will remain in private beta for the foreseeable future, today announced a $4.7 million seed round of funding to bring its SaaS product to market. The investment was led by True Ventures, with participation from a slew of angel and VC backers, including GitHub CTO Jason Warner; Datadog cofounder and CEO Olivier Pomel; Coverity cofounder Andy Chou; Mango Capital; and Overtime.vc.

Version control

If a third-party API provider or open source framework is updated, with the older version no longer actively supported, companies need to ensure their software remains secure and compliant. “It requires revving dependencies [updating version numbers in configuration files] and changing all the call sites for the APIs that have changed — it’s tedious, repetitive, but hasn’t been automated,” Moderne CEO and cofounder Jonathan Schneider told VentureBeat.

Moderne is built on top of OpenRewrite, an open source automated code refactoring tool for Java that Schneider developed at Netflix several years ago. While developers can already use the built-in refactoring and semantic search features included in integrated development environments (IDEs), if they need to perform a migration or apply a CVE patch, they have to follow multiple manual steps. Moreover, they can only work on a single repository at a time.

“So if an organization has hundreds of microservices — which is not uncommon for even very small organizations, and larger ones have thousands — each repository needs to be loaded into [the] IDE and operated one by one,” Schneider said. “A developer can spend weeks or months doing this across the codebase.”

OpenRewrite, on the other hand, provides “building blocks” (individual search and refactoring operations) that can be composed into automated sequences, called recipes, that anyone can use. Moderne’s offering complements OpenRewrite and allows companies to apply these recipes in bulk to their codebases.
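Conceptually, a recipe is an ordered list of search-and-transform operations that can then be run across every repository in an organization. The sketch below illustrates that composition idea in Python only; OpenRewrite itself is a Java tool that rewrites syntax trees rather than raw text, and the operation names and version strings here are hypothetical.

    from dataclasses import dataclass
    from typing import Callable, List

    # Conceptual sketch only; OpenRewrite's real API differs and operates on syntax trees.
    Operation = Callable[[str], str]  # takes source text, returns rewritten source text

    @dataclass
    class Recipe:
        name: str
        operations: List[Operation]

        def apply(self, source: str) -> str:
            for op in self.operations:
                source = op(source)
            return source

    # Two toy operations: bump a dependency version and update a renamed call site.
    def bump_dependency(source: str) -> str:
        return source.replace("somelib:1.0", "somelib:2.0")  # hypothetical versions

    def rename_call_site(source: str) -> str:
        return source.replace("LegacyClient.fetch(", "NewClient.fetch(")  # hypothetical API rename

    migration = Recipe("example-migration", [bump_dependency, rename_call_site])

    # Applying the same recipe in bulk across many repositories:
    repos = {"service-a": "implementation 'com.example:somelib:1.0'"}
    rewritten = {name: migration.apply(src) for name, src in repos.items()}
    print(rewritten["service-a"])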

Above: Moderne screenshot

Enterprises, specifically, can accumulate vast amounts of code. One of Moderne’s early product design partners is a “large financial institution” that incorporates some 250 million lines of Java code — or “one-eighth of all GitHub Java code,” Schneider noted, adding that this is actually on the “low to medium” side for what a typical enterprise might have.

“Some of this code is obsolete (e.g. accrued through historical acquisitions), some is under rapid development (e.g. mobile apps) — but the majority represents super valuable business assets, such as ATM software and branch management software,” Schneider said.

Even when a company redeploys developers internally to work on rapid development projects, it still needs to maintain the core software components that underpin the business. Moderne automates the code migration and CVE patching process, freeing developers to work on other mission-critical projects.

When Moderne eventually goes to market, it will adopt an open core business model, with a free plan for the open source community and individual users, while the premium SaaS plan will support larger codebases and teams with additional features for collaboration.

The company said it will use its fresh cash injection to grow a “vibrant open source community for OpenRewrite,” expand its internal engineering team, and bolster its SaaS product ahead of launch.
