
TrustArc launches PrivacyCentral to bring data privacy intelligence to enterprises



Data privacy management and compliance company TrustArc has announced a new data intelligence tool that delivers contextual, real-time insights on companies’ data privacy programs.

The tool, called PrivacyCentral, can generate automated, customizable reports to help privacy teams determine which regulations are most likely to apply to their data and prioritize each law by setting timelines, assigning owners, evaluating key performance indicators (KPIs), and providing detailed reporting.

“PrivacyCentral allows for organizations to understand laws and readiness against laws simultaneously,” Hilary Wandall, SVP of privacy intelligence and general counsel at TrustArc, told VentureBeat. “Unlike traditional approaches to privacy law readiness or gap assessments that typically look at one law at a time, PrivacyCentral’s intelligence engines evaluate your progress and effectiveness against all of them at the same time.”

PrivacyCentral continuously scans a company’s profile against the various privacy laws around the world to identify which ones apply to its business, and then it asks relevant assessment questions.

The platform leans on AI to conduct these automated assessments spanning company records, systems, and data to match against the ever-evolving privacy regulations for a specific jurisdiction.

Above: PrivacyCentral: Intelligent scan


Above: PrivacyCentral: Intelligent assessments

Data privacy landscape

TrustArc is one of the myriad data privacy startups capitalizing on the growing array of privacy regulations, such as GDPR in Europe and CCPA in California. The San Francisco-based company develops various data protection, certification, and compliance products for enterprises like Intuit, Johnson & Johnson, and Monster, helping them monitor risk around regulations and identify gaps spanning regulatory frameworks.

Other notable players in the space include Privacera and DataGrail, which just this week raised $50 million and $30 million, respectively; BigID, which is valued at $1 billion following a recent investment; and OneTrust, which recently attained a $5 billion valuation.

It’s clear that the data privacy market is hot, something TrustArc is looking to capitalize on with PrivacyCentral, which Wandall said solves many problems, including the “inefficient and time-consuming nature of managing privacy across an organization,” particularly when a new law or regulation is introduced.

PrivacyCentral is designed to serve as a “single source of truth,” one that removes the manual processes involved in assessing and complying with the various privacy laws and works across all the existing regulations and any new ones as they come into force.

“A privacy leader no longer needs to restart the cycle of working with business stakeholders to understand how a new law or regulation applies to their business,” Wandall said. “The privacy leader can automatically get an in-depth understanding of the gaps between where they are and where they need to be and engage the business in a meaningful way.”


Understanding dimensionality reduction in machine learning models



Machine learning algorithms have gained fame for being able to ferret out relevant information from datasets with many features, such as tables with dozens of columns and images with millions of pixels. Thanks to advances in cloud computing, you can often run very large machine learning models without noticing how much computational power works behind the scenes.

But every new feature you add to your problem also adds to its complexity, making it harder to solve with machine learning algorithms. That is why data scientists use dimensionality reduction, a set of techniques that remove excessive and irrelevant features from their machine learning models.

Dimensionality reduction slashes the costs of machine learning and sometimes makes it possible to solve complicated problems with simpler models.

The curse of dimensionality

Machine learning models map features to outcomes. For instance, say you want to create a model that predicts the amount of rainfall in one month. You have a dataset of different information collected from different cities in separate months. The data points include temperature, humidity, city population, traffic, number of concerts held in the city, wind speed, wind direction, air pressure, number of bus tickets purchased, and the amount of rainfall. Obviously, not all this information is relevant to rainfall prediction.

Some of the features might have nothing to do with the target variable. Evidently, population and number of bus tickets purchased do not affect rainfall. Other features might be correlated to the target variable, but not have a causal relation to it. For instance, the number of outdoor concerts might be correlated to the volume of rainfall, but it is not a good predictor for rain. In other cases, such as carbon emission, there might be a link between the feature and the target variable, but the effect will be negligible.

In this example, it is evident which features are valuable and which are useless. In other problems, the excessive features might not be obvious and might require further data analysis.

But why bother to remove the extra dimensions? When you have too many features, you’ll also need a more complex model. A more complex model means you’ll need a lot more training data and more compute power to train your model to an acceptable level.

And since machine learning has no understanding of causality, models try to map any feature included in their dataset to the target variable, even if there’s no causal relation. This can lead to models that are imprecise and erroneous.

On the other hand, reducing the number of features can make your machine learning model simpler, more efficient, and less data-hungry.

The problems caused by too many features are often referred to as the “curse of dimensionality,” and they’re not limited to tabular data. Consider a machine learning model that classifies images. If your dataset is composed of 100×100-pixel images, then your problem space has 10,000 features, one per pixel. However, even in image classification problems, some of the features are excessive and can be removed.

Dimensionality reduction identifies and removes the features that are hurting the machine learning model’s performance or aren’t contributing to its accuracy. There are several dimensionality reduction techniques, each of which is useful for certain situations.

Feature selection

A basic and very efficient dimensionality reduction method is to identify and select a subset of the features that are most relevant to the target variable. This technique is called “feature selection.” Feature selection is especially effective when you’re dealing with tabular data in which each column represents a specific kind of information.

When doing feature selection, data scientists do two things: keep features that are highly correlated with the target variable and contribute the most to the dataset’s variance. Libraries such as Python’s Scikit-learn have plenty of good functions to analyze, visualize, and select the right features for machine learning models.

For instance, a data scientist can use scatter plots and heatmaps to visualize the covariance of different features. If two features are highly correlated to each other, then they will have a similar effect on the target variable, and including both in the machine learning model will be unnecessary. Therefore, you can remove one of them without causing a negative impact on the model’s performance.


Above: Heatmaps illustrate the covariance between different features. They are a good guide to finding and culling features that are excessive.

The same tools can help visualize the correlations between the features and the target variable. This helps remove variables that do not affect the target. For instance, you might find out that out of 25 features in your dataset, seven of them account for 95 percent of the effect on the target variable. This will enable you to shave off 18 features and make your machine learning model a lot simpler without suffering a significant penalty to your model’s accuracy.
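
As a rough sketch of this workflow, you could do something like the following with pandas, seaborn, and scikit-learn. The file name, the column names, and the choice of keeping seven features are hypothetical stand-ins for the rainfall example above.

```python
import pandas as pd
import seaborn as sns
from sklearn.feature_selection import SelectKBest, f_regression

# Hypothetical rainfall dataset: each column is a feature, "rainfall" is the target.
df = pd.read_csv("rainfall.csv")
X, y = df.drop(columns=["rainfall"]), df["rainfall"]

# Visualize pairwise correlations. Two features that are highly correlated
# with each other are largely redundant, so one of them can usually be dropped.
sns.heatmap(X.corr(), cmap="coolwarm")

# Keep only the k features most strongly related to the target variable.
selector = SelectKBest(score_func=f_regression, k=7)
X_reduced = selector.fit_transform(X, y)
print(list(X.columns[selector.get_support()]))  # names of the retained features
```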

Projection techniques

Sometimes, you don’t have the option to remove individual features. But this doesn’t mean that you can’t simplify your machine learning model. Projection techniques, also known as “feature extraction,” simplify a model by compressing several features into a lower-dimensional space.

A common example used to represent projection techniques is the “swiss roll” (pictured below), a set of data points that swirl around a focal point in three dimensions. This dataset has three features. The value of each point (the target variable) is measured based on how close it is along the convoluted path to the center of the swiss roll. In the picture below, red points are closer to the center and the yellow points are farther along the roll.

Above: The swiss roll dataset.

In its current state, creating a machine learning model that maps the features of the swiss roll points to their value is a difficult task and would require a complex model with many parameters. But with the help of dimensionality reduction techniques, the points can be projected to a lower-dimension space that can be learned with a simple machine learning model.

There are various projection techniques. In the case of the above example, we used “locally-linear embedding” (LLE), an algorithm that reduces the dimension of the problem space while preserving the key elements that separate the values of data points. When our data is processed with LLE, the result looks like the following image, which is like an unrolled version of the swiss roll. As you can see, points of each color remain together. In fact, this problem can still be simplified into a single feature and modeled with linear regression, the simplest machine learning algorithm.

Above: The swiss roll, projected to a lower-dimensional space with locally-linear embedding.
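
A minimal sketch of this projection, using scikit-learn’s built-in swiss roll generator and its locally-linear embedding implementation; the sample size and neighbor count below are illustrative choices rather than values from the example.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Generate a swiss roll: 1,000 points, each described by three features.
X, color = make_swiss_roll(n_samples=1000, noise=0.1)

# Unroll it into two dimensions while preserving local neighborhoods.
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
X_unrolled = lle.fit_transform(X)

print(X.shape, "->", X_unrolled.shape)  # (1000, 3) -> (1000, 2)
```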

While this example is hypothetical, you’ll often face problems that can be simplified if you project the features to a lower-dimensional space. For instance, “principal component analysis” (PCA), a popular dimensionality reduction algorithm, has found many useful applications to simplify machine learning problems.

In the excellent book Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, data scientist Aurelien Geron shows how you can use PCA to reduce the MNIST dataset from 784 features (28×28 pixels) to 150 features while preserving 95 percent of the variance. This level of dimensionality reduction has a huge impact on the costs of training and running artificial neural networks.

Above: Dimensionality reduction applied to the MNIST dataset.

There are a few caveats to consider about projection techniques. Once you develop a projection technique, you must transform new data points to the lower-dimensional space before running them through your machine learning model. However, the costs of this preprocessing step are small compared to the gains of having a lighter model. A second consideration is that transformed data points are not directly representative of their original features, and transforming them back to the original space can be tricky and in some cases impossible. This might make it difficult to interpret the inferences made by your model.
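
A hedged sketch of that PCA workflow with scikit-learn is shown below; the train/new split is arbitrary, and the exact number of retained components will vary slightly from the figure cited above.

```python
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA

# MNIST: 70,000 images, each flattened into 784 pixel features (this downloads the dataset).
mnist = fetch_openml("mnist_784", version=1, as_frame=False)
X_train, X_new = mnist.data[:60000], mnist.data[60000:]

# Keep just enough principal components to preserve 95 percent of the variance.
pca = PCA(n_components=0.95)
X_train_reduced = pca.fit_transform(X_train)
print(pca.n_components_)  # on the order of 150 components

# New data points must be projected with the same fitted transform before
# being fed to a model trained on the reduced features.
X_new_reduced = pca.transform(X_new)

# inverse_transform only approximates the original pixels; some detail is lost.
X_new_restored = pca.inverse_transform(X_new_reduced)
```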

Dimensionality reduction in the machine learning toolbox

Having too many features will make your model inefficient. But removing too many features will not help either. Dimensionality reduction is one among many tools data scientists can use to make better machine learning models. And as with every tool, it must be used with caution and care.

Ben Dickson is a software engineer and the founder of TechTalks, a blog that explores the ways technology is solving and creating problems.

This story originally appeared on Bdtechtalks.com. Copyright 2021


SolarWinds breach exposes hybrid multicloud security weaknesses



A hybrid multicloud strategy can capitalize on legacy systems’ valuable data and insights while using the latest cloud-based platforms, apps, and tools. But getting hybrid multicloud security right isn’t easy.

Exposing severe security weaknesses in hybrid cloud, authentication, and least privileged access configurations, the high-profile SolarWinds breach laid bare just how vulnerable every business is. Clearly, enterprise leaders must see beyond the much-hyped baseline levels of identity and access management (IAM) and privileged access management (PAM) now offered by cloud providers.

In brief, advanced persistent threat (APT) actors penetrated the SolarWinds Orion software supply chain undetected, modified dynamically linked library (.dll) files, and propagated malware across SolarWinds’ customer base while taking special care to mimic legitimate traffic.

The bad actors methodically studied how persistence mechanisms worked during intrusions and learned which techniques could avert detection as they moved laterally across cloud and on-premises systems. They also learned how to compromise SAML signing certificates while using the escalated Active Directory privileges they had gained access to. The SolarWinds hack shows what happens when bad actors focus on finding unprotected threat surfaces and exploiting them for data using stolen privileged access credentials.

The incursion is particularly notable because SolarWinds Orion is used for managing and monitoring on-premises and hosted infrastructures in hybrid cloud configurations. That is what makes eradicating the SolarWinds code and malware problematic, as it has infected 18 different Orion platform products.

Cloud providers do their part — to a point

The SolarWinds hack occurred in an industry that relies considerably on cloud providers for security control.

A recent survey by CISO Magazine found 76.36% of security professionals believe their cloud service providers are responsible for securing their cloud instances. The State of Cloud Security Concerns, Challenges, and Incidents Study from the Cloud Security Alliance found that use of cloud providers’ additional security controls jumped from 58% in 2019 to 71% in 2021, and 74% of respondents are relying exclusively on cloud providers’ native security controls today.

Above: Cloud providers’ security controls are not enough for most organizations, according to the State of Cloud Security Concerns report.

Image Credit: Cloud Security Alliance

Taking the SolarWinds lessons into account, every organization needs to verify the extent of the coverage provided as baseline functionality for IAM and PAM by cloud vendors. While the concept of a shared responsibility model is useful, it’s vital to look beyond cloud platform providers’ promises based on the framework.

Amazon’s interpretation of its shared responsibility model is a prime example. It’s clear the company’s approach to IAM, while centralizing identity roles, policies, and configuration rules, does not go far enough to deliver a fully secure, scalable, zero trust-based approach.

The Amazon Shared Responsibility Model makes it clear the company takes care of AWS infrastructure, hardware, software, and facilities, while customers are responsible for securing their client-side data, server-side encryption, and network traffic protection — including encryption, operating systems, platforms, and customer data.

Like competitors Microsoft Azure and Google Cloud, AWS provides a baseline level of support for IAM optimized for just its environments. Any organization operating a hybrid multicloud and building out a hybrid IT architecture will have wide, unsecured gaps between cloud platforms because each platform provider only offers IAM and PAM for its own platform.


Above: The AWS Shared Responsibility Model is a useful framework for defining which areas of cloud deployment are customers’ responsibility.

Image Credit: Amazon Web Services

While a useful framework, the Shared Responsibility Model does not come close to providing the security hybrid cloud configurations need. It is also deficient in addressing machine-to-machine authentication and security, an area seeing rapid growth in organizations’ hybrid IT plans today. Organizations are also on their own when it comes to how they secure endpoints across all the public, private, and community cloud platforms they rely on.

There is currently no unified approach to solving these complex challenges, and every CIO and security team must figure it out on their own.

But there needs to be a single, unified security model that scales across on-premises, public, private, and community clouds without sacrificing security, speed, and scale. Averting the spread of a SolarWinds-level attack starts with a single security model across all on-premises and cloud-based systems, with IAM and PAM at the platform level.

Amid hybrid cloud and tool sprawl, security suffers

The SolarWinds attack came just as multicloud methods had started to gain traction. Cloud sprawl is defined as the unplanned and often uncontrolled growth of cloud instances across public, private, and community cloud platforms. The leading cause of cloud sprawl is a lack of control, governance, and visibility into how cloud computing instances and resources are acquired and used. Still, according to Flexera’s 2021 State of the Cloud Report, 92% of enterprises have a multicloud strategy and 82% have a hybrid cloud strategy.


Above: Cloud sprawl will become an increasing challenge, given organizations’ tendency to prioritize multicloud strategies.

Image Credit: Flexera

Cloud sprawl happens when an organization lacks visibility into or control over its cloud computing resources. Organizations are reducing the potential of cloud sprawl by having a well-defined, adaptive, and well-understood governance framework defining how cloud resources will be acquired and used. Without this, IT faces the challenge of keeping cloud sprawl in check while achieving business goals.

Overbuying security tools and overloading endpoints with multiple, often conflicting software clients weakens any network. Buying more tools could actually make a SolarWinds-level attack worse. Security teams need to consider how tool and endpoint agent sprawl is weakening their networks. According to IBM’s Cyber Resilient Organization Report, enterprises deploy an average of 45 cybersecurity-related tools on their networks today. The IBM study also found enterprises that deploy over 50 tools ranked themselves 8% lower in their ability to detect threats and 7% lower in their defensive capabilities than companies employing fewer toolsets.

Rebuilding on a zero trust foundation

The SolarWinds breach is particularly damaging from a PAM perspective. An integral component of the breach was compromising SAML signing certificates the bad actors gained by using their escalated Active Directory privileges. It was all undetectable to SolarWinds Orion, the hybrid cloud-monitoring platform hundreds of organizations use today. Apparently, a combination of hybrid cloud security gaps, lack of authentication on SolarWinds accounts, and lack of least privileged access made the breach undetectable for months, according to a Cybersecurity & Infrastructure Security Agency (CISA) alert. One of the most valuable lessons learned from the breach is the need to enforce least privileged access across every user and administrator account, endpoint, system access account, and cloud administrator account.

The bottom line is that the SolarWinds breach serves as a reminder to plan for and begin implementing zero trust frameworks that enable any organization to take a “never trust, always verify, enforce least privilege” strategy when it comes to their hybrid and multicloud strategies.

Giving users just enough privileges and resources to get their work done and providing least privileged access for a specific time is essential. Getting micro-segmentation right across IT infrastructures will eliminate bad actors’ ability to move laterally throughout a network. And logging and monitoring all activity on a network across all cloud platforms is critical.

Every public cloud platform provider has tools available for doing this. On AWS, for example, there’s AWS CloudTrail, which records all API activity, and Amazon CloudWatch for monitoring and alerting. Vaulting root accounts and applying multi-factor authentication across all accounts is a given.
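
As a small illustration rather than a complete hardening recipe, the boto3 sketch below turns on the kind of account-wide API activity logging described above. The trail and bucket names are hypothetical, and the S3 bucket must already exist with a policy that allows CloudTrail to write to it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Record API activity across every region, including global services such as IAM.
cloudtrail.create_trail(
    Name="org-wide-trail",                   # hypothetical trail name
    S3BucketName="example-cloudtrail-logs",  # hypothetical, pre-existing bucket
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
)

# Trails do not capture events until logging is explicitly started.
cloudtrail.start_logging(Name="org-wide-trail")
```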

Organizations need to move beyond the idea that the baseline levels of IAM and PAM delivered by cloud providers are enough. Then these organizations need to think about how they can use security to accelerate their business goals by providing the users they serve with least privileged access.

Adopting a zero trust mindset and framework is a given today, as every endpoint, system access point, administrative login, and cloud administrator console is at risk if nothing changes.

The long-held assumptions of interdomain trust were proven wrong with SolarWinds. Now it’s time for a new, more intensely focused era of security that centers on enforcing least privilege and zero-trust methods across an entire organization.


How Scopely tries to do game acquisitions the right way



Big game companies could probably use a manual for acquiring game companies at a time when such acquisitions have reached an all-time record. In the first quarter, the money for acquisitions, public offerings, and investments hit $39 billion, more than the $33 billion for all of last year, according to InvestGame.

So it seems timely for the whole game industry to step back and think about the right way to do acquisitions. We had a couple of executives from mobile game publisher Scopely talk about this subject at our recent GamesBeat Summit 2021 event.

Nick Tuosto, managing director of Liontree and cofounder of Griffin Gaming Partners, moderated the session with Tim O’Brien, chief revenue officer at Scopely, and Amir Rahimi, president of games at Scopely. I also interviewed Aaron Loeb, chief business officer at Scopely, about the company’s culture that enables acquisitions to succeed. In those conversations, we extracted some lessons that all companies could use in making mergers work and distilling the right culture for game companies.

Rahimi became a part of Scopely after the Los Angeles company acquired FoxNext Games, the game division of the entertainment company Fox. Disney acquired Fox and then chose to spin out its game division. That deal in January 2020 was the end of a long journey for Rahimi, who was part of TapZen, a studio that was started in 2013 and was first acquired by Kabam in 2015 to make mobile games. That was the first of six different transactions that Rahimi was part of over seven years as the studio was passed around from owner to owner and finally found its home at Scopely.


Above: Left to right: Nick Tuosto of Liontree/Griffin Gaming, Tim O’Brien, and Amir Rahimi.

Image Credit: GamesBeat

That involved a “process filled with twists and turns and drama,” Rahimi said during the session. Rahimi’s studio in Los Angeles had managed to launch Marvel Strike Force, a free-to-play fighting game, in 2018. And in 2020, under Scopely’s ownership, Marvel Strike Force grew its revenues more than 70% to $300 million. It was a big hit, and it became an even bigger one after the Scopely deal.

What’s amazing about that result is that the growth happened during lockdown and as Scopely was integrating FoxNext Games into the company, a task that O’Brien was heavily involved with as an executive on the buying side.

“We were sort of anti-remote work. When shelter-in-place hit, I was terrified and then was subsequently blown away by how well the team rallied and stayed focused and productive,” Rahimi said. “The deal closed in February, right before everything shut down. In terms of the transition from Disney to Scopely, it was a model for how to do M&A.”

It took about eight months to get the deal done. Rahimi said his team spent a lot of time getting to know O’Brien and the Scopely team. They knew their visions for the future were aligned.

“They created the optimal conditions for us to thrive,” Rahimi said. “We talked about how we could power up my studio.”

O’Brien said the whole plan to have gatherings and dinners was no longer possible in the pandemic. But the company had to make a lot of decisions to put the team members first, he said.

Creative freedom and user acquisition


Above: Scopely acquired Marvel Strike Force with its FoxNext games deal.

Image Credit: FoxNext Games

One of the things that Rahimi liked was that Scopely gave Rahimi’s studio creative freedom to make the right decisions for Marvel Strike Force and other games such as an upcoming Avatar title. Rahimi said Scopely’s leaders understood that the team was functioning well and it needed support, rather than a change in direction.

“I personally feel more empowered at Scopely than I have anywhere,” Rahimi said. “That allows me to empower my people and that benefits everyone. Unfortunately, there are companies that approach M&A in a very different way. They make changes in leaders or modify game concepts or swap out intellectual properties. And that proves to be disruptive.”

“You can’t do that,” Rahimi said. “The way you need to approach a team, especially a team that’s already successful, is really understanding what they do well, really understanding what you can bring to the table, and forming a partnership that is based on trust and transparency and candor,” and then accelerating the growth after the acquisition.

“The team went for it, and those decisions paid off,” Rahimi said.

Scopely did know where it had to make investments, and that was in marketing and acquiring new users, Loeb said in our fireside chat. It knew it had to do this in order to capture players for the long term. Scopely applied its talent in user acquisition, something it handles through its central technology services, and that really paid off, Rahimi said. That helped the game find new audiences around the globe. O’Brien said the combination of the two companies led to the growth acceleration for the game and all of its key performance indicators, such as how often players play the game each week. Scopely has also added a lot of the FoxNext Games team members to leadership positions at Scopely.

“We knew we could learn as much from them as they could from us,” O’Brien said.

How culture matters too

One of the new leaders at Scopely is Aaron Loeb, chief business officer, who was also president of FoxNext Games before the acquisition. I also spoke with Loeb in a session at the GamesBeat Summit 2021 event about how Scopely is gathering its learnings into what it calls the Scopely Operating System. It’s really more about all of the processes and culture at Scopely. That’s all in service of becoming the definitive free-to-play game company, Loeb said.

“Our tools, our technology, everything is built around enabling the vision of the game team to go and do what they’re seeking to do to grow the game based on their vision,” Loeb said. “Scopely is really focused on the core problem set of making a great game, which is going and finding the right talent, the right leadership, people with a vision for what they want, and then supporting them with both the right technology but also the right resources to go out and actually chase that vision.”

Loeb said that Scopely had a game that was lost in the wilderness. It decided to restart the effort and reconstitute the vision.

“It pushed through those dark hours of the soul when it looked like they couldn’t make it,” Loeb said. “That game is now one of our biggest hits.”

A learning machine


Above: Scopely has expanded to a 60,000-square-foot space in Culver City.

Image Credit: Scopely

Loeb said one of the things that defines the culture is humility.

“This culture starts with hiring the smartest people you can find who are also really humble, who are excited about learning from each other,” Loeb said. “Our culture is a learning culture. We often talk about the company as a learning machine.”

The lesson for other companies is that it’s easy to fall into a pattern of operating with blinders on, with a focus on execution. But Loeb believes it’s important to question the pattern and have a desire for learning new ways of thinking. It turns out this way of thinking is good for players too. It leads to changes in the experience for gamers, even in areas that were previously thought of as solved problems.

“Our people are incredibly dedicated to challenging their own assumptions,” Loeb said. “We are skeptical of opinions, particularly our own. And I think that’s a really critical factor to building a great learning culture.”
