Why AI can’t solve unknown problems

Herbert Roitblat is the author of “Algorithms Are Not Enough”

When will we have artificial general intelligence, the kind of AI that can mimic the human mind in all aspects? Experts are divided on the topic, and estimates range anywhere from a few decades to never.

But what everyone agrees on is that current AI systems are a far cry from human intelligence. Humans can explore the world, discover unsolved problems, and think about their solutions. Meanwhile, the AI toolbox continues to grow with algorithms that can perform specific tasks but can’t generalize their capabilities beyond their narrow domains. We have programs that can beat world champions at StarCraft but can’t play a slightly different game at an amateur level. We have artificial neural networks that can find signs of breast cancer in mammograms but can’t tell the difference between a cat and a dog. And we have complex language models that can spin out thousands of seemingly coherent articles per hour but start to break down when you ask them simple logical questions about the world.

In short, each of our AI techniques manages to replicate some aspects of what we know about human intelligence. But putting it all together and filling the gaps remains a major challenge. In his book Algorithms Are Not Enough, data scientist Herbert Roitblat provides an in-depth review of different branches of AI and describes why each of them falls short of the dream of creating general intelligence.

The common shortcoming across all AI algorithms is the need for predefined representations, Roitblat asserts. Once we discover a problem and can represent it in a computable way, we can create AI algorithms that solve it, often more efficiently than we can ourselves. It is, however, the undiscovered and unrepresentable problems that continue to elude us.

Representations in symbolic AI

Throughout the history of artificial intelligence, scientists have regularly invented new ways to leverage advances in computers to solve problems in ingenious ways. The earlier decades of AI focused on symbolic systems.

Above: Herbert Roitblat, data scientist and author of Algorithms Are Not Enough.

Image Credit: Josiah Grandfield

This branch of AI assumes human thinking is based on the manipulation of symbols, and any system that can compute symbols is intelligent. Symbolic AI requires human developers to meticulously specify the rules, facts, and structures that define the behavior of a computer program. Symbolic systems can perform remarkable feats, such as memorizing information, computing complex mathematical formulas at ultra-fast speeds, and emulating expert decision-making. Popular programming languages and most applications we use every day have their roots in the work that has been done on symbolic AI.
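To make that dependence concrete, here is a minimal, hypothetical sketch (not an example from the book) of the symbolic approach: a toy forward-chaining rule engine in Python in which the developer specifies every fact and every rule by hand.

```python
# A toy forward-chaining rule engine. Every fact and rule below is
# hand-authored, which is exactly the human-provided representation
# that symbolic AI depends on.
facts = {"has_fever", "has_cough"}

rules = [
    # (premises, conclusion): if all premises hold, assert the conclusion.
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

# Apply rules repeatedly until no new facts can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# {'has_fever', 'has_cough', 'possible_flu', 'recommend_rest'}
```

The system can chain rules flawlessly, but it knows nothing the developer did not encode.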

But symbolic AI can only solve problems for which we can provide well-formed, step-by-step solutions. The problem is that most tasks humans and animals perform can’t be represented in clear-cut rules.

“The intellectual tasks, such as chess playing, chemical structure analysis, and calculus are relatively easy to perform with a computer. Much harder are the kinds of activities that even a one-year-old human or a rat could do,” Roitblat writes in Algorithms Are Not Enough.

This is called Moravec’s paradox, named after the scientist Hans Moravec, who stated that, in contrast to humans, computers can perform high-level reasoning tasks with very little effort but struggle at simple skills that humans and animals acquire naturally.

“Human brains have evolved mechanisms over millions of years that let us perform basic sensorimotor functions. We catch balls, we recognize faces, we judge distance, all seemingly without effort,” Roitblat writes. “On the other hand, intellectual activities are a very recent development. We can perform these tasks with much effort and often a lot of training, but we should be suspicious if we think that these capacities are what makes intelligence, rather than that intelligence makes those capacities possible.”

So, despite its remarkable reasoning capabilities, symbolic AI is strictly tied to representations provided by humans.

Representations in machine learning

Machine learning provides a different approach to AI. Instead of writing explicit rules, engineers “train” machine learning models through examples. “[Machine learning] systems could not only do what they had been specifically programmed to do but they could extend their capabilities to previously unseen events, at least those within a certain range,” Roitblat writes in Algorithms Are Not Enough.

The most popular form of machine learning is supervised learning, in which a model is trained on a set of input data (e.g., humidity and temperature) and expected outcomes (e.g., probability of rain). The machine learning model uses this information to tune a set of parameters that map the inputs to outputs. When presented with previously unseen input, a well-trained machine learning model can predict the outcome with remarkable accuracy. There’s no need for explicit if-then rules.
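As a minimal sketch of that idea (hypothetical code with fabricated weather data, not an example from the book), the rain predictor above might look like this with scikit-learn:

```python
# A minimal supervised learning sketch: map inputs (humidity, temperature)
# to an outcome (rain / no rain). All data here is fabricated.
from sklearn.linear_model import LogisticRegression

# Training examples: [humidity %, temperature in Celsius] -> rained? (1/0)
X_train = [[90, 18], [85, 20], [30, 28], [45, 31], [95, 16], [20, 33]]
y_train = [1, 1, 0, 0, 1, 0]

# fit() tunes the model's parameters to map inputs to outputs.
model = LogisticRegression().fit(X_train, y_train)

# Predict on a previously unseen input; no if-then rules were written.
print(model.predict_proba([[80, 19]])[0][1])  # estimated probability of rain
```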

But supervised machine learning still builds on representations provided by human intelligence, albeit looser ones than symbolic AI requires. Here’s how Roitblat describes supervised learning: “[M]achine learning involves a representation of the problem it is set to solve as three sets of numbers. One set of numbers represents the inputs that the system receives, one set of numbers represents the outputs that the system produces, and the third set of numbers represents the machine learning model.”

Therefore, while supervised machine learning is not tightly bound to rules like symbolic AI, it still requires strict representations created by human intelligence. Human operators must define a specific problem, curate a training dataset, and label the outcomes before they can create a machine learning model. Only once the problem has been strictly represented can the model start tuning its parameters.

“The representation is chosen by the designer of the system,” Roitblat writes. “In many ways, the representation is the most crucial part of designing a machine learning system.”

One branch of machine learning that has risen in popularity in the past decade is deep learning, which is often compared to the human brain. At the heart of deep learning is the deep neural network, which stacks layers upon layers of simple computational units to create machine learning models that can perform very complicated tasks such as classifying images or transcribing audio.

Above: Deep learning models can perform complicated tasks such as classifying images.

But again, deep learning is largely dependent on architecture and representation. Most deep learning models need labeled data, and there is no universal neural network architecture that can solve every possible problem. A machine learning engineer must first define the problem they want to solve, curate a large training dataset, and then figure out the deep learning architecture that can solve that problem. During training, the deep learning model will tune millions of parameters to map inputs to outputs. But it still needs machine learning engineers to decide the number and type of layers, the learning rate, the optimization function, the loss function, and other unlearnable aspects of the neural network.
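That division of labor is easy to see in code. In the hypothetical Keras sketch below, every structural decision (the number and size of layers, the optimizer, the learning rate, the loss function) is fixed by the engineer before a single parameter is learned:

```python
# A minimal deep learning sketch. The engineer fixes the representation:
# layer count, layer sizes, optimizer, learning rate, and loss function.
# Only the weights inside this fixed structure are learned during training.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 output classes
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # chosen, not learned
    loss="sparse_categorical_crossentropy",                  # chosen, not learned
    metrics=["accuracy"],
)

# model.fit(x_train, y_train, epochs=5)  # only the weights get tuned here
```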

“Like much of machine intelligence, the real genius [of deep learning] comes from how the system is designed, not from any autonomous intelligence of its own. Clever representations, including clever architecture, make clever machine intelligence,” Roitblat writes. “Deep learning networks are often described as learning their own representations, but this is incorrect. The structure of the network determines what representations it can derive from its inputs. How it represents inputs and how it represents the problem-solving process are just as determined for a deep learning network as for any other machine learning system.”

Other branches of machine learning follow the same rule. Unsupervised learning, for example, does not require labeled examples. But it still requires a well-defined goal such as anomaly detection in cybersecurity, customer segmentation in marketing, dimensionality reduction, or embedding representations.
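The same point holds in code. In this minimal, hypothetical sketch of customer segmentation (with fabricated data), no labels are needed, but a human still chooses the goal, the features, and the number of clusters:

```python
# A minimal unsupervised learning sketch: customer segmentation with
# k-means. No labels are required, but a human still chose the goal,
# the features, and the number of clusters.
import numpy as np
from sklearn.cluster import KMeans

# Customer features: [annual spend in dollars, store visits per month]
customers = np.array([[200, 1], [220, 2], [5000, 8], [4800, 9], [900, 4]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # cluster assignment for each customer
```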

Reinforcement learning, another popular branch of machine learning, is very similar to some aspects of human and animal intelligence. The AI agent doesn’t rely on labeled examples for training. Instead, it is given an environment (e.g., a chess or go board) and a set of actions it can perform (e.g., move pieces, place stones). At each step, the agent performs an action and receives feedback from its environment in the form of rewards and penalties. Through trial and error, the reinforcement learning agent finds sequences of actions that yield more rewards.
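Here is a minimal, hypothetical sketch of that loop: tabular Q-learning on a toy five-cell corridor. Notice how much is supplied by the designer rather than learned: the states, the actions, the reward signal, and the hyperparameters.

```python
# Tabular Q-learning on a toy five-cell corridor: the agent starts at
# cell 0 and earns a reward of 1 for reaching cell 4. The state space,
# action set, reward, and hyperparameters are all supplied by the designer.
import random

N_STATES = 5           # corridor cells 0..4; the goal is cell 4
ACTIONS = [-1, +1]     # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # designer-chosen hyperparameters

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the action-value estimate from the environment's feedback.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy moves right from every cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```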

Computer scientist Richard Sutton describes reinforcement learning as “the first computational theory of intelligence.” In recent years, it has become very popular for solving complicated problems such as mastering computer and board games and developing versatile robotic arms and hands.

Above: Reinforcement learning can solve complicated problems such as playing board and video games and performing robotic manipulations.

Image Credit: Tech Talks

But reinforcement learning environments are typically very complex, and the number of possible actions an agent can perform is very large. Therefore, reinforcement learning agents need a lot of help from human intelligence to design the right rewards, simplify the problem, and choose the right architecture. For instance, OpenAI Five, the reinforcement learning system that mastered the online video game Dota 2, relied on its designers simplifying the rules of the game, such as reducing the number of playable characters.

“It is impossible to check, in anything but trivial systems, all possible combinations of all possible actions that can lead to reward,” Roitblat writes. “As with other machine learning situations, heuristics are needed to simplify the problem into something more tractable, even if it cannot be guaranteed to produce the best possible answer.”

Here’s how Roitblat summarizes the shortcomings of current AI systems in Algorithms Are Not Enough: “Current approaches to artificial intelligence work because their designers have figured out how to structure and simplify problems so that existing computers and processes can address them. To have a truly general intelligence, computers will need the capability to define and structure their own problems.”

Is AI research headed in the right direction?

“Every classifier (in fact every machine learning system) can be described in terms of a representation, a method for measuring its success, and a method of updating,” Roitblat told TechTalks over email. “Learning is finding a path (a sequence of updates) through a space of parameter values. At this point, though, we don’t have any method for generating those representations, goals, and optimizations.”

There are various efforts to address the challenges of current AI systems. One popular idea is to continue to scale deep learning. The general reasoning is that bigger neural networks will eventually crack the code of general intelligence. After all, the human brain has more than 100 trillion synapses. The biggest neural network to date, developed by AI researchers at Google, has one trillion parameters. And the evidence shows that adding more layers and parameters to neural networks yields incremental improvements, especially in language models such as GPT-3.

But big neural networks do not address the fundamental problems of general intelligence.

“These language models are significant achievements, but they are not general intelligence,” Roitblat says. “Essentially, they model the sequence of words in a language. They are plagiarists with a layer of abstraction. Give it a prompt and it will create a text that has the statistical properties of the pages it has read, but no relation to anything other than the language. It solves a specific problem, like all current artificial intelligence applications. It is just what it is advertised to be — a language model. That’s not nothing, but it is not general intelligence.”

Other directions of research try to add structural improvements to current AI systems.

For instance, hybrid artificial intelligence brings symbolic AI and neural networks together to combine the reasoning power of the former with the pattern recognition capabilities of the latter. There are already several implementations of hybrid AI, also referred to as “neuro-symbolic systems,” which show that such systems require less training data and are more stable at reasoning tasks than pure neural network approaches.

System 2 deep learning, another direction of research proposed by deep learning pioneer Yoshua Bengio, tries to take neural networks beyond statistical learning. System 2 deep learning aims to enable neural networks to learn “high-level representations” without the need for explicit embedding of symbolic intelligence.

Another research effort is self-supervised learning, proposed by Yann LeCun, another deep learning pioneer and the inventor of convolutional neural networks. Self-supervised learning aims to learn tasks without the need for labeled data, by exploring the world the way a child does.

“I think that all of these make for more powerful problem solvers (for path problems), but none of them addresses the question of how these solutions are structured or generated,” Roitblat says. “They all still involve navigating within a pre-structured space. None of them addresses the question of where this space comes from. I think that these are really important ideas, just that they don’t address the specific needs of moving from narrow to general intelligence.”

In Algorithms Are Not Enough, Roitblat provides ideas on how to advance toward AI systems that can actively seek out and solve problems they have not been designed for. We still have a lot to learn from ourselves and from how we apply our intelligence to the world.

“Intelligent people can recognize the existence of a problem, define its nature, and represent it,” Roitblat writes. “They can recognize where knowledge is lacking and work to obtain that knowledge. Although intelligent people benefit from structured instructions, they are also capable of seeking out their own sources of information.”

But observing intelligent behavior is easier than creating it, and, as Roitblat told me in our correspondence, “Humans do not always solve their problems in the way that they say/think that they do.”

As we continue to explore artificial and human intelligence, we will keep moving toward AGI one step at a time.

“Artificial intelligence is a work in progress. Some tasks have advanced further than others. Some have a way to go. The flaws of artificial intelligence tend to be the flaws of its creator rather than inherent properties of computational decision making. I would expect them to improve over time,” Roitblat said.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021

Latest Edelman survey rates trust in tech at a 21-year low

Published

on

Social media is not a trusted source of information.

The technology sector plummeted from being the most trusted industry sector in 2020 to 9th place in 2021, according to the 21st annual analysis from communications firm Edelman. Lack of accountability and an unwillingness to self-govern are eroding the public’s trust in technology.

Trust in technology reached all-time lows in 17 of 27 countries over the past year, Edelman said in its recent 2021 Edelman Trust Barometer: Trust In Technology report. The report is based on a survey of more than 33,000 people from 28 countries, including both general population respondents and what the firm calls “informed public respondents” for a well-rounded picture.

Trust and fear have an inverse relationship: the faster one rises, the faster the other drops. Traditionally, the technology sector was adept at managing the two, but that is no longer the case. Edelman found that fear of technology is growing faster than trust in it, and it will take years for the industry to bounce back and regain the public’s trust.

Tech broke trust

Edelman’s survey results show respondents feel both betrayed by, and fearful of, technology. Job loss is the single greatest driver of societal fears, followed by the loss of civil liberties. The survey also recorded a 6% drop in the number of people willing to share their personal information online. Social media, traditional media, and search engines are likewise at record-low levels of trust.

Above: Respondents did not view many information sources favorably when asked to rate each one on how trustworthy they were for general news and information. Source: 2021 Edelman Trust Barometer: Trust in Technology.

Image Credit: Edelman

While the technology industry is full of entrepreneurs who believe in unleashing creativity, driving innovation, and pursuing moonshot ideas, it also includes those who monitor customers and invade their privacy. The tendency to use technology as an authoritarian tool to monitor dissent is a particular concern, which helps explain China’s 16% drop in trust. The steep drop is ironic, because China is also a global leader in tech R&D, innovation, and manufacturing.

Pandemic amplified fears

Edelman recorded one of the steepest declines in trust in the eight months between May 2020 and January 2021, when the public’s trust in technology dropped from 74% to 67%. People were increasingly concerned about AI and robots, and 53% of the respondents in Edelman’s survey worried the pandemic would accelerate the rate at which their employers would replace human workers with AI and robots. Cyberattackers capitalizing on the pandemic didn’t help matters, as 35% of respondents reported being fearful of attackers and breaches.

Edelman’s Trust in Technology study presents a paradox between tech employees and their employers. Employer trust is highest among tech sector employees, with 83% saying they trust their employers, and 62% believing they have the power to make corporations change. Yet the public’s trust in those employers is plummeting. The disconnect comes from the public perception that humans are not controlling technology, but that technology is trying to control them. There is a growing perception that technology — especially social media — is more capable of manipulating people than previously believed.

One way for the tech sector to regain some trust is for companies to re-evaluate how they handle customer data and to be transparent about what they do with the information.

Gain trust by guarding information quality

Businesses as a whole are still trusted in most of the countries surveyed, with 61% of all respondents trusting companies above nonprofit organizations, government, and media. The most effective step businesses can take to increase trust is to guard the quality of information. Additional factors include embracing sustainable practices, implementing a robust COVID-19 health and safety response, driving economic prosperity, and emphasizing long-term thinking over short-term profits.

However, just saying they will protect information isn’t enough. Businesses need to take a data-centric security approach to achieve greater resiliency and cybersecurity. They should also address the concerns employees have over job loss and automation, and be transparent and honest with employees if robotics and automation are part of the business plan. Investing in re-skilling employees for new jobs is a great way to bring the workforce along in a digital transformation.

In short, senior management teams should remember that lasting transformation starts with employees.

Google makes business process tool AppSheet Automation generally available

Last year, Google launched AppSheet Automation, an “intent-driven” experience in Google Cloud powered by AI that enabled enterprises to connect to a number of data sources to model automated business processes. After several months in early access, Google today announced that AppSheet Automation is generally available with new capabilities, including document processing, a monitoring app, and expanded eventing support.

According to Forrester, while automation has been a major force reshaping work since before the pandemic, it’s taking on a new urgency in the context of business risk and resilience. A McKinsey survey found that at least a third of activities could be automated in about 60% of occupations. And in its recent Trends in Workflow Automation report, Salesforce reported that 95% of IT leaders are prioritizing workflow automation, with 70% seeing the equivalent of more than 4 hours saved each week per employee.

AppSheet Automation, which arose from Google’s acquisition of AppSheet in January 2020, is an AI-enabled, no-code development platform designed to help automate existing business processes. The service offers an environment for building custom apps and pipelines, delivering governance capabilities and leveraging AI to understand goals and construct process artifacts.

One new feature in AppSheet Automation, Intelligent Document Processing, automatically extracts text from unstructured files like invoices and W-9s to eliminate the need for manual entry. Another, a monitoring app, allows customers to build AppSheet apps that can then monitor their automations.

Google also extended AppSheet Automation’s data source eventing, which previously supported Salesforce, to include Google Workspace Sheets and Drive in the general release. Looking ahead, the company says it’s building the ability to embed rich AppSheet views in Gmail to enable users to perform approvals on the go.

“Digital transformation has been an enterprise priority for years, but recent Google Cloud research reinforces that the mandate is more pressing today than ever, with most companies increasing their technology investments over the last year,” Prithpal Bhogill, product manager on AppSheet’s business application platform, wrote in a blog post. “While there are many dependencies shaping the future of work, the challenge is to leverage technology to support shifting work cultures. Automation is the rallying point for this goal.”

The launch of AppSheet Automation follows news that Google will collaborate with robotic process automation (RPA) startup Automation Anywhere to accelerate the adoption of RPA with enterprises “on a global scale.” As a part of its agreement with Automation Anywhere, Google plans to integrate the former company’s RPA technologies, including low- and no-code development tools, AI workflow builders, and API management, with Google Cloud services like Apigee, AppSheet, and AI Platform. Automation Anywhere and Google said they’ll also jointly develop solutions geared toward industry-specific use cases, with a focus on financial services, supply chains, health care and life sciences, telecommunications, retail, and the public sector.

1Password expands into secrets management to help enterprises secure their infrastructure

Password-management platform 1Password is expanding into the “secrets management” space, helping developer teams across the enterprise safeguard private credentials, such as API tokens, keys, certificates, passwords, and anything used to protect access to companies’ internal applications and infrastructure.

Alongside the launch, 1Password has also announced its first acquisition with the purchase of SecretHub, a Dutch startup founded in 2018 that claims to protect “nearly 5 million enterprise secrets” each month. Following the acquisition, SecretHub will be shuttered entirely, with its whole team — including CEO Marc Mackenbach — joining 1Password.

Secret sauce

Recent data from GitGuardian, a cybersecurity platform that helps companies find sensitive data hidden in public codebases, revealed a 20% rise in secrets inadvertently making their way into GitHub repositories. If this data falls into the wrong hands, it can be used to gain access to private internal systems. By way of example, Uber revealed a major breach back in 2017 that exposed millions of users’ personal data. The root cause was an AWS access key hackers discovered in a personal GitHub repository belonging to an Uber developer.
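To illustrate how such leaks are caught, here is a simplified, hypothetical sketch of pattern-based secret scanning (not GitGuardian’s actual detection logic): it looks for the telltale shape of AWS access key IDs.

```python
# A simplified sketch of pattern-based secret scanning. Illustrative
# only; real scanners combine many patterns with entropy checks to
# reduce false positives.
import re

# AWS access key IDs have a well-known shape: "AKIA" plus 16 characters.
AWS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_secrets(text):
    """Return suspected AWS access key IDs found in the given text."""
    return AWS_KEY_PATTERN.findall(text)

# AWS's documented example key: the kind of string that should never
# be committed to a public repository.
sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(scan_for_secrets(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```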

There has been a flurry of activity across the secrets management space of late. Israeli startup Spectral recently exited stealth with $6.2 million in funding to serve developer operations (DevOps) teams with an automated scanner that finds potentially costly security mistakes buried in code. San Francisco-based Doppler, meanwhile, last month raised $6.5 million in a round of funding led by Alphabet’s venture capital arm GV and launched a bunch of new enterprise-focused features.

1Password has built a solid reputation over its 16-year history, thanks to a platform that can store passwords securely and simplify log-in. It allows consumers and businesses to log into all their online services with a single click (rather than having to manually input passwords) and can also be used to store other private digital data, such as credit cards and software licenses. The Toronto-based company raised its first (and only) round of funding back in 2019, securing $200 million to help it push further beyond the consumer sphere and cement itself as an integral security tool for the enterprise.

Machine secrets

Today, 1Password claims some 80,000 business customers, including enterprise heavyweights such as IBM, Slack, Dropbox, PagerDuty, and GitLab. With its latest “secrets automation” product, the company is striving to make its platform stickier for existing and potential clients searching for an all-in-one platform that protects all their credentials — from employees’ email passwords to core backend business systems.

Above: 1Password: Secrets automation

While 1Password’s existing password-management toolset helps people securely access accounts without having to remember dozens of passwords, the “automation” facet of its new product name refers to machine-based system workflows that, for example, enable an application to securely “talk” to a database. “This means being able to roll secrets into your infrastructure directly from within 1Password,” chief product officer Akshay Bhargava told VentureBeat. “We are the first company encompassing human and machine secrets.”

Typically, infrastructure secrets are spread across countless cloud providers and services, and according to 1Password, it’s not uncommon for companies to cut corners, using a dubious combination of hacks and homegrown tools to manage the security of those secrets.

According to Bhargava, 1Password was working on a secrets management solution before it acquired SecretHub. In fact, many of 1Password’s customers were already storing their infrastructure secrets in its vaults.

“Our customers have raised this workflow as something they’d like 1Password to solve,” Bhargava said. “It’s fair to say our first version is homegrown, and we’ve been focused on solving this problem for a while.”

Secrets automation allows admins to define which people and services have access to secrets, as well as what level of access is granted. At launch, it integrates with HashiCorp Vault, Terraform, Kubernetes, and Ansible, with “more on the way.” However, 1Password is also announcing a deeper partnership with GitHub, which will see the duo collaborate to “solve problems for our shared customers and users,” according to Bhargava. “We plan to build a workflow to support customers in delivering secrets and configuration into their CI/CD pipelines on GitHub,” he said.

As for costs, all companies will receive three credits for free. The cost then rises to $29 per month for 25 credits, $99 for 100 credits, and $299 for 500 credits. “We prorate based on usage,” Bhargava added. “We will work with companies needing more than 500 credits a month on an individual basis.”

In terms of how credits are consumed, companies configure the 1Password vaults they want secrets automation to access and then stipulate the permissions for a development environment with tokens. “If an API client needs read and write access to data stored in a 1Password vault, that access is defined using a token,” Bhargava explained. “One token, accessing one vault, is what defines a credit. If that same API client needs to access two vaults, that then becomes two credits. And similarly, if a single token is created for read access to vault A and another for write access to vault B, that becomes two credits.”
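In other words, a credit corresponds to a unique (token, vault) access pair. Here is a hypothetical sketch of that rule as described:

```python
# A hypothetical sketch of the pricing rule described above: one credit
# per unique (token, vault) access pair, regardless of permission level.
def credits_used(grants):
    """grants: iterable of (token_id, vault_id) pairs."""
    return len(set(grants))

grants = [
    ("token-1", "vault-A"),  # one token, one vault: 1 credit
    ("token-1", "vault-B"),  # same token, a second vault: 1 more credit
    ("token-2", "vault-A"),  # a second token on vault A: 1 more credit
]
print(credits_used(grants))  # 3
```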
