
Combining edge computing and IoT to unlock autonomous and intelligent applications




The boom in Internet-connected devices means there is an unprecedented amount of data being collected, leaving enterprises with the challenge of storing, securing, and processing the data at scale. The sheer amount of data involved is driving the case for edge computing, even as enterprises continue with their digital transformation plans.

Edge computing refers to moving processing power to the network edge–where the devices are–instead of first transferring the data to a centralized location, whether that is to a data center or a cloud provider. Edge computing analyzes the data near where it is being collected, which reduces Internet bandwidth usage and addresses security and scalability concerns over where the data is stored and how it is being transferred. The main drivers are Internet-of-Things and real-time applications that demand instantaneous data processing. 5G deployments are accelerating this trend.
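To make that idea concrete, here is a minimal, hypothetical sketch (all names invented, Python used purely for illustration) of the kind of local aggregation an edge node might do so that only a compact summary, rather than the raw sensor stream, ever crosses the network:

```python
import statistics

def summarize_window(readings: list[float]) -> dict:
    """Reduce a window of raw sensor readings to a compact summary that
    can be forwarded upstream instead of the full stream."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# Hypothetical: 1,000 temperature samples collected locally at the edge...
raw_readings = [20.0 + (i % 7) * 0.1 for i in range(1000)]

# ...reduced to four numbers before anything touches the network.
print(summarize_window(raw_readings))
```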

Enterprises have been focused on moving their applications to the cloud over the past few years. Analysts estimate that 70 percent of organizations have at least one application in the cloud, and enterprise decision makers say digital transformation is one of their top priorities. However, there are some limits to an all-in cloud strategy as more data-hungry applications come online.

By 2025, 175 zettabytes (or 175 trillion gigabytes) of data will be generated around the globe, and more than 90 zettabytes of that data will be created by edge devices, according to IDC’s Data Age 2025 report. That is a lot of data that needs to be uploaded somewhere before anything can be done with it, and there may not always be enough bandwidth to do so. There is also the question of latency, since it takes time for data to travel from the device to where the analysis is being performed and back. And finally, there is the question of network reliability: if the network is unavailable for some reason, the application is essentially offline.

“You’re backhauling data to a cloud that’s far away, miles away,” said James Thomason, CTO of EDJX, which provides a platform that makes it easy for developers to write edge and IoT applications and secure edge data at the source. “That’s an insurmountable speed-of-light problem.”

Analysts estimate that 91% of today’s data is created and processed in centralized data centers. By 2022 about 75% of all data will need analysis and action at the edge.

“We knew when we started EDJX that the pendulum would have to swing from cloud and centralization back to decentralized,” Thomason said.

The case for edge in the enterprise

Edge computing isn’t limited to sensors and other Internet-of-Things devices; it can also involve traditional IT hardware such as laptops, servers, and handheld systems. Enterprise applications such as enterprise resource planning (ERP), financial software, and data management systems typically don’t need the real-time, instantaneous data processing most commonly associated with autonomous applications. In the world of enterprise software, edge computing is most relevant to application delivery: employees don’t need access to the whole application suite or all of the company’s data, and providing them just what they need results in better performance and a better user experience.

Edge computing also makes it possible to harness AI into enterprise applications, such as voice recognition. Voice recognition applications need to work locally for fast response, even if the algorithm is trained in the cloud.

“For the first time in history, computing is moving out of the realm of abstract stuff like spreadsheets, web browsers, video games, et cetera, and into the real world,” Thomason said. Devices are sensing things in the real world and making decisions based on that information.

Developing for the edge

Next-generation applications and services require a new computing infrastructure that delivers low-latency networks and high-performance computing at the extreme edge of the network. That is the idea behind the Public Infrastructure Network Node (PINN), an initiative out of the Autonomy Institute, a cooperative research consortium focused on advancing and accelerating autonomy and AI at the edge. PINN is a unified open standard supporting 5G wireless, edge computing, radar, lidar, enhanced GPS, and intelligent transportation systems (ITS). A PINN is designed to look no different from a light post, making it possible to deliver computing power without relying on a skyline of cell towers or heavy cables.

According to Thomason, PINN clusters in a city deployment could be positioned to collect information from the sensors and cameras at a street intersection. The devices can see things a driver can’t see–such as both directions of traffic, or a pedestrian about to enter the crosswalk–and know things the driver doesn’t know–such as that an emergency vehicle is on the way or that the traffic lights are about to change. Edge computing–using PINN–is what makes it possible to process all of this data and actually take action, whether that is sending a signal to the traffic lights or telling an autonomous vehicle to do something differently.
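As a toy illustration of that pattern (not EDJX’s or the Autonomy Institute’s actual software; every name here is invented), an edge node’s decision loop might fuse a few sensor-derived signals about an intersection and emit an action:

```python
from dataclasses import dataclass

@dataclass
class IntersectionState:
    """Hypothetical fused view of one intersection, built from roadside
    sensors and cameras processed at the edge."""
    pedestrian_in_crosswalk: bool
    emergency_vehicle_approaching: bool
    light_change_imminent: bool

def decide_action(state: IntersectionState) -> str:
    """Toy decision rule: turn the fused state into a signal for the
    traffic lights or for nearby (autonomous) vehicles."""
    if state.emergency_vehicle_approaching:
        return "preempt_lights_for_emergency_vehicle"
    if state.pedestrian_in_crosswalk:
        return "hold_red_and_warn_vehicles"
    if state.light_change_imminent:
        return "broadcast_light_change_to_vehicles"
    return "no_action"

print(decide_action(IntersectionState(False, True, False)))
# -> preempt_lights_for_emergency_vehicle
```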

Currently, only vetted developers are allowed in the PINN ecosystem, Thomason said. Developers write code, which is then compiled to WebAssembly–the code that actually runs on the PINN. Using WebAssembly keeps the attack surface very small and hardened, so it is harder for an adversary to break out of the application and get to the data on the PINN, Thomason said.
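EDJX’s toolchain isn’t detailed here, but the general pattern of compiling code to WebAssembly and running it inside a sandboxed host runtime can be sketched with the wasmtime Python bindings (the tiny module below is a stand-in for developer code, not PINN software):

```python
from wasmtime import Engine, Store, Module, Instance

# A trivial WebAssembly module (in text format) standing in for the
# compiled developer code that would actually run on a node.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)            # compile the module
instance = Instance(store, module, [])  # instantiate it in a sandbox

# The host decides exactly which functions are exposed; the module cannot
# reach outside the sandbox on its own.
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # -> 5
```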

PINN in the real world

The Autonomy Institute announced a pilot program for PINN at the Texas Military Department’s Camp Mabry location in Austin, Texas. The program will deploy PINNs one thousand feet apart along sidewalks across the 400-acre property. The pilot will focus on optimizing traffic management, autonomous cars, industrial robotics, autonomous delivery, drones that respond to 911 calls, and automated road and bridge inspection–all the things a smart city would care about.

Above: The Autonomy Institute partnered with Atrius Industries and EDJX for a pilot program to deliver autonomous solutions at the edge.

Image Credit: EDJX

The first PINNs are scheduled to come online in the second quarter of 2021, with tens of thousands of PINNs deployed by mid-2022. Eventually, the goal is to expand the program from Austin to other major cities in the United States and around the world, EDJX said.

While the pilot program is specifically for building out city infrastructure, Thomason said that PINN and similar approaches will be important in other contexts, as well. As developers start developing for the platform, there will be opportunities to build applications for other industry sectors and use cases where data needs to be aggregated from multiple sources and fused together. Real-world edge applications on PINN can cover a whole range of things, including industrial IoT, artificial intelligence, augmented reality, and robotics.

“That general pattern of sensor data, fusion, and things happening in the real world is happening across industries,” Thomason said. “It’s not just smart cities and vehicles.”

For specific industries, there are different ways PINNs can be used. The energy sector needs to monitor natural gas and oil pipelines for signs of leaks–both for financial reasons and because of environmental concerns–but deploying enough sniffer sensors to cover every pipeline and well may be impractical. An infrared camera or a spectrometer that can spot a leak and raise an alert could catch problems that would otherwise go unnoticed. In another example, a factory may use cameras or other sensors to detect the presence of a worker inside the assembly line before starting the machinery.

“If you can use computing and sensors to do that, you can reduce workplace accidents, significantly,” Thomason said.

What kinds of applications get built is up to the developers who come to the platform–the PINN had to exist first, said Jeffrey DeCoux, chairman of the Autonomy Institute. The pilot is an opportunity to explore and test real-world solutions. PINN deployments will also encourage more work around sensors, 5G, and the other technologies that depend on edge computing.

“Everybody came to the same realization: if we don’t do this, all of these Industry 4.0 applications will never happen,” DeCoux said.


Understanding the differences between biological and computer vision



Since the early years of artificial intelligence, scientists have dreamed of creating computers that can “see” the world. As vision plays a key role in many things we do every day, cracking the code of computer vision seemed to be one of the major steps toward developing artificial general intelligence.

But like many other goals in AI, computer vision has proven to be easier said than done. In 1966, scientists at MIT launched “The Summer Vision Project,” a two-month effort to create a computer system that could identify objects and background areas in images. But it took much more than a summer break to achieve those goals. In fact, it wasn’t until the early 2010s that image classifiers and object detectors were flexible and reliable enough to be used in mainstream applications.

In recent decades, advances in machine learning and neuroscience have helped the field make great strides in computer vision. But we still have a long way to go before we can build AI systems that see the world as we do.

Biological and Computer Vision, a book by Harvard Medical School professor Gabriel Kreiman, provides an accessible account of how humans and animals process visual data and how far we’ve come toward replicating these functions in computers.

Kreiman’s book helps the reader understand the differences between biological and computer vision. It details how billions of years of evolution have equipped us with a complicated visual processing system, and how studying it has helped inspire better computer vision algorithms. Kreiman also discusses what separates contemporary computer vision systems from their biological counterparts.

While I would recommend a full read of Biological and Computer Vision to anyone who is interested in the field, I’ve tried here (with some help from Gabriel himself) to lay out some of my key takeaways from the book.

Hardware differences

In the introduction to Biological and Computer Vision, Kreiman writes, “I am particularly excited about connecting biological and computational circuits. Biological vision is the product of millions of years of evolution. There is no reason to reinvent the wheel when developing computational models. We can learn from how biology solves vision problems and use the solutions as inspiration to build better algorithms.”

And indeed, the study of the visual cortex has been a great source of inspiration for computer vision and AI. But before being able to digitize vision, scientists had to overcome the huge hardware gap between biological and computer vision. Biological vision runs on an interconnected network of cortical cells and organic neurons. Computer vision, on the other hand, runs on electronic chips composed of transistors.

Therefore, a theory of vision must be defined at a level that can be implemented in computers in a way that is comparable to living beings. Kreiman calls this the “Goldilocks resolution,” a level of abstraction that is neither too detailed nor too simplified.

For instance, early efforts tried to tackle computer vision at a very abstract level, in a way that ignored how human and animal brains recognize visual patterns. Those approaches proved to be very brittle and inefficient. Studying and simulating brains at the molecular level, on the other hand, would be computationally inefficient.

“I am not a big fan of what I call ‘copying biology,’” Kreiman told TechTalks. “There are many aspects of biology that can and should be abstracted away. We probably do not need units with 20,000 proteins and a cytoplasm and complex dendritic geometries. That would be too much biological detail. On the other hand, we cannot merely study behavior—that is not enough detail.”

In Biological and Computer Vision, Kreiman defines the Goldilocks scale of neocortical circuits as neuronal activities per millisecond. Advances in neuroscience and medical technology have made it possible to study the activities of individual neurons at millisecond time granularity.

And the results of those studies have helped develop different types of artificial neural networks, AI algorithms that loosely simulate the workings of cortical areas of the mammal brain. In recent years, neural networks have proven to be the most efficient algorithm for pattern recognition in visual data and have become the key component of many computer vision applications.

Architecture differences

Above: Biological and Computer Vision, by Gabriel Kreiman.

The recent decades have seen a slew of innovative work in the field of deep learning, which has helped computers mimic some of the functions of biological vision. Convolutional layers, inspired by studies made on the animal visual cortex, are very efficient at finding patterns in visual data. Pooling layers help generalize the output of a convolutional layer and make it less sensitive to the displacement of visual patterns. Stacked on top of each other, blocks of convolutional and pooling layers can go from finding small patterns (corners, edges, etc.) to complex objects (faces, chairs, cars, etc.).
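As a rough sketch of that stacking idea (PyTorch used purely for illustration, not a model from the book), a couple of convolution-and-pooling blocks followed by a small classifier head might look like this:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level patterns: edges, corners
    nn.ReLU(),
    nn.MaxPool2d(2),                              # pooling adds tolerance to small shifts
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combines earlier patterns into larger motifs
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 10),                  # classifier head (assumes 224x224 RGB input)
)

x = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image
print(model(x).shape)            # torch.Size([1, 10])
```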

But there’s still a mismatch between the high-level architecture of artificial neural networks and what we know about the mammal visual cortex.

“The word ‘layers’ is, unfortunately, a bit ambiguous,” Kreiman said. “In computer science, people use layers to connote the different processing stages (and a layer is mostly analogous to a brain area). In biology, each brain region contains six cortical layers (and subdivisions). My hunch is that six-layer structure (the connectivity of which is sometimes referred to as a canonical microcircuit) is quite crucial. It remains unclear what aspects of this circuitry should we include in neural networks. Some may argue that aspects of the six-layer motif are already incorporated (e.g. normalization operations). But there is probably enormous richness missing.”

Also, as Kreiman highlights in Biological and Computer Vision, information in the brain moves in several directions. Light signals move from the retina to V1, on through V2 and the other areas of the visual cortex, and up to the inferior temporal cortex. But each area also provides feedback to its predecessors, and within each area, neurons interact and pass information between each other. All these interactions and interconnections help the brain fill in the gaps in visual input and make inferences when it has incomplete information.

In contrast, in artificial neural networks, data usually moves in a single direction. Convolutional neural networks are “feedforward networks,” which means information only goes from the input layer to the higher and output layers.

There’s a feedback mechanism called “backpropagation,” which helps correct mistakes and tune the parameters of neural networks. But backpropagation is computationally expensive and only used during the training of neural networks. And it’s not clear if backpropagation directly corresponds to the feedback mechanisms of cortical layers.

On the other hand, recurrent neural networks, which combine the output of higher layers into the input of their previous layers, still have limited use in computer vision.
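The contrast can be sketched in a few lines (again PyTorch, purely illustrative and not a model of cortical feedback): a feedforward layer maps input to output in one pass, while a recurrent layer’s hidden state feeds back into the computation at every time step.

```python
import torch
import torch.nn as nn

# Feedforward: information flows in one direction, input -> output.
feedforward = nn.Linear(8, 8)
x = torch.randn(1, 8)
y = feedforward(x)

# Recurrent: the hidden state computed at step t is fed back in at step t+1.
recurrent = nn.RNN(input_size=8, hidden_size=8, batch_first=True)
sequence = torch.randn(1, 5, 8)      # 5 time steps
outputs, hidden = recurrent(sequence)

print(y.shape, outputs.shape)  # torch.Size([1, 8]) torch.Size([1, 5, 8])
```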


Above: In the visual cortex (right), information moves in several directions. In neural networks (left), information moves in one direction.

In our conversation, Kreiman suggested that lateral and top-down flows of information could be crucial to bringing artificial neural networks closer to their biological counterparts.

“Horizontal connections (i.e., connections for units within a layer) may be critical for certain computations such as pattern completion,” he said. “Top-down connections (i.e., connections from units in a layer to units in a layer below) are probably essential to make predictions, for attention, to incorporate contextual information, etc.”

He also pointed out that neurons have “complex temporal integrative properties that are missing in current networks.”

Goal differences

Evolution has managed to develop a neural architecture that can accomplish many tasks. Several studies have shown that our visual system can dynamically tune its sensitivities. Creating computer vision systems with this kind of flexibility remains a major challenge, however.

Current computer vision systems are designed to accomplish a single task. We have neural networks that can classify objects, localize objects, segment images into different objects, describe images, generate images, and more. But each neural network can accomplish a single task alone.


Above: Harvard Medical School professor Gabriel Kreiman, author of Biological and Computer Vision.

“A central issue is to understand ‘visual routines,’ a term coined by Shimon Ullman; how can we flexibly route visual information in a task-dependent manner?” Kreiman said. “You can essentially answer an infinite number of questions on an image. You don’t just label objects, you can count objects, you can describe their colors, their interactions, their sizes, etc. We can build networks to do each of these things, but we do not have networks that can do all of these things simultaneously. There are interesting approaches to this via question/answering systems, but these algorithms, exciting as they are, remain rather primitive, especially in comparison with human performance.”

Integration differences

In humans and animals, vision is closely tied to the senses of smell, touch, and hearing. The visual, auditory, somatosensory, and olfactory cortices interact and pick up cues from each other to adjust their inferences about the world. In AI systems, on the other hand, each of these senses exists separately.

Do we need this kind of integration to make better computer vision systems?

“As scientists, we often like to divide problems to conquer them,” Kreiman said. “I personally think that this is a reasonable way to start. We can see very well without smell or hearing. Consider a Chaplin movie (and remove all the minimal music and text). You can understand a lot. If a person is born deaf, they can still see very well. Sure, there are lots of examples of interesting interactions across modalities, but mostly I think that we will make lots of progress with this simplification.”

However, a more complicated matter is the integration of vision with more complex areas of the brain. In humans, vision is deeply integrated with other brain functions such as logic, reasoning, language, and common sense knowledge.

“Some (most?) visual problems may ‘cost’ more time and require integrating visual inputs with existing knowledge about the world,” Kreiman said.

He pointed to the following picture of former U.S. president Barack Obama as an example.


Above: Understanding what is going on in this picture requires world knowledge, social knowledge, and common sense.

To understand what is going on in this picture, an AI agent would need to know what the person on the scale is doing, what Obama is doing, who is laughing and why they are laughing, etc. Answering these questions requires a wealth of information, including world knowledge (scales measure weight), physics knowledge (a foot on a scale exerts a force), psychological knowledge (many people are self-conscious about their weight and would be surprised if it read well above the usual), and social understanding (some people are in on the joke, some are not).

“No current architecture can do this. All of this will require dynamics (we do not appreciate all of this immediately and usually use many fixations to understand the image) and integration of top-down signals,” Kreiman said.

Areas such as language and common sense are themselves great challenges for the AI community. It remains to be seen whether they can be solved separately and then integrated with vision, or whether integration itself is the key to solving all of them.

“At some point we need to get into all of these other aspects of cognition, and it is hard to imagine how to integrate cognition without any reference to language and logic,” Kreiman said. “I expect that there will be major exciting efforts in the years to come incorporating more of language and logic in vision models (and conversely incorporating vision into language models as well).”

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.


How Legacy Games still has a good business selling CD games at Walmart



Legacy Games has been publishing and distributing casual PC games at retail since 1998. And believe it or not, it’s still in business and its founder Ariella Lehrer is back in charge of the company that targets women who are 40 years old or older.

Lehrer started the Los Angeles company 23 years ago to make games for women at retail. She left in 2017 to move on to augmented reality game maker Hitpoint. Legacy Games stayed small with just a handful of people, but it kept its relationships with key retailers such as Walmart. And it still has Walmart as a client. Meanwhile, most of its competitors have moved on to more attractive markets. So after three years at Hitpoint, Lehrer returned as CEO of Legacy Games in October and she has started a new indie publishing program.

Legacy has helped game developers find new casual game customers through Legacy’s unique distribution channels, such as Walmart. Now the company is diversifying its game portfolio by working with indie game developers. Lehrer said in an interview with GamesBeat that she is signing up a variety of indie developers who are making PC and mobile games that target casual gamers. Roughly 70% of the customers are older women, and about 30% are men.

“We are signing up cool indie game developers, and that’s overdue,” Lehrer said. “I came back and found it was still kicking, and maybe I can push it toward digital. I’m really focused on bringing Legacy Games into the digital age.”

Going digital and physical

Above: Legacy Games targets its games at women over 40.

Image Credit: Legacy Games

Since coming back, Lehrer has launched a digital store, and she expects the company to triple its digital sales in 2021.

She is signing up developers that have highly rated casual games on Steam, but have otherwise had limited distribution. Many developers have had a hard time in the pandemic. A survey by the Game Developers Conference found that 34% of game developers saw their business decline, and a report from Video Game Insights found more than 50% of indies never make more than $4,000.

“We found there are all these wonderful indie games on Steam, but our customers don’t go on Steam,” she said.

Lehrer distributes the games on the company’s website. If any do particularly well on the digital storefront, the company will see if they will sell at Walmart, where it sells around 3,000 units a week. Legacy can package the games together in a bundle on DVD discs, and successful digital bundles will then be sold at retail.

“It’s a lovely little business,” she said. “We have been profitable every year except for the Great Recession” in 2008.


Above: Legacy Games was started in 1998.

Image Credit: Legacy Games

It got started with a hit game called Emergency Room, originally created for IBM. Lehrer got the rights back and then sold it at retail at Walmart, and the title sold more than a million units. At its height, Legacy Games had about $5 million in revenues. That was never that exciting to investors, but the company stayed steady, and it did raise money once, a while ago, from Targus. The company made 20 different games based on television licenses like Law & Order, Criminal Minds, Murder She Wrote, Tarzan, and others.

Legacy has 18 of the 24 shelf spots for casual games at Walmart stores. All of its competitors have moved on to other markets. Lehrer said she values the relationship with Walmart, which is the last national retailer standing when it comes to selling casual game DVD bundles. Legacy Games also sells its games on retailers’ online sites, such as Walmart.com, Amazon.com, and Staples.com, and through the online distributors Arvato, Avanquest, and Synnex. Additionally, Legacy Games sells its games through other traditional outlets like Steam and Microsoft Windows, and wherever casual games can be sold profitably.

“Others have said it’s a shrinking market at retail and they are going somewhere else exciting,” said Lehrer. “I think there is an opportunity here. There’s still an opportunity to sell these kinds of games at retail. I had a feeling these women were underserved. They buy their products at Walmart. They love casual games like hidden object games, or match-3, or time management, and they want to play on the PC.”

While Lehrer was gone, three part-time employees ran the company. Then she came back and she has added three more full-time employees. And now the company’s revenues are close to $1 million.

New developers

Lehrer has signed up 15 new game studios this year. These include JumpGate (Project Blue Book), Thomas Bowker (Lyne), Joel McDonald (Prune), Flippfly (Evergarden) and Walkabout (Wanderlust: Travel Stories), Joybits (Doodle God), and BufoProject (Classic Card Games 3D), among others.

“We’re going to try out different genres, different ways of packaging, different pricing and we will see what resonates,” Lehrer said.

Legacy Games has a long history of working with established casual game developers such as Artifex Mundi, Brave Giant, Alawar, Microids, Jet Dogs, Crisp App Studios, and many more. Rivals include Big Fish Games. The company has publishing contracts with more than 50 game developers, and it sells more than 500 individual games. One of the regular hits is the Amazing Games bundle at Walmart, with titles including Supernatural Stories, Fantastic Fables, True Crime, Murder Mystery, Greatest Hits, and Magical Matches.

“There are many fewer retail and digital sites to purchase casual PC games than there were a few years ago,” Lehrer said. “Many of our competitors have switched their focus to mobile. Our customers find Steam overwhelming. I believe there is a significant revenue opportunity for indie developers to reach new customers and generate incremental revenue by partnering with Legacy.”

One of the developers using Legacy’s publishing services is Aaron San Filippo, co-owner of Flippfly, a three-person studio near Madison, Wisconsin. In an interview, he said Legacy reached out to him a couple of months ago to get his game Evergarden, which is a mysterious puzzle gardening title, onto its platform. It will be launching soon in the digital store and it has a chance for physical distribution, San Filippo said.

San Filippo said he launched the game on Steam a few years ago and it didn’t connect well with that audience. Steam was more about hardcore gamers, and so the casual gaming audience of Legacy seemed a lot more appealing. The game also debuted on Linux and iOS, and it did best on iOS.

“It goes to the target market for our games,” San Filippo said. “We’re always looking for more opportunities. This is all about diversifying our income streams. Additional revenue streams are worthwhile, even if it’s small. I’m hopeful this will do well.”


Colonial Pipeline paid a $5 million ransom—and kept a vicious cycle turning


Image Credit: Sean Rayford | Getty Images

Nearly a week after a ransomware attack led Colonial Pipeline to halt fuel distribution on the East Coast, reports emerged on Friday that the company paid a 75 bitcoin ransom—worth as much as $5 million, depending on the time of payment—in an attempt to restore service more quickly. And while the company was able to restart operations Wednesday night, the decision to give in to hackers’ demands will only embolden other groups going forward. Real progress against the ransomware epidemic, experts say, will require more companies to say no.

Not to say that doing so is easy. The FBI and other law enforcement groups have long discouraged ransomware victims from paying digital extortion fees, but in practice many organizations resort to paying. They either don’t have the backups and other infrastructure necessary to recover otherwise, can’t or don’t want to take the time to recover on their own, or decide that it’s cheaper to just quietly pay the ransom and move on. Ransomware groups increasingly vet their victims’ financials before springing their traps, allowing them to set the highest possible price that their victims can still potentially afford.


In the case of Colonial Pipeline, the DarkSide ransomware group attacked the company’s business network rather than the more sensitive operational technology networks that control the pipeline. But Colonial took down its OT network as well in an attempt to contain the damage, increasing the pressure to resolve the issue and resume the flow of fuel along the East Coast. Another potential factor in the decision, first reported by Zero Day, was that the company’s billing system had been infected with ransomware, so it had no way to track fuel distribution and bill customers.

Advocates of zero tolerance for ransom payments hoped that Colonial Pipeline’s proactive shutdown was a sign that the company would refuse to pay. Reports on Wednesday indicated that the company had a plan to hold out, but numerous subsequent reports on Thursday, led by Bloomberg, confirmed that the 75 bitcoin ransom had been paid. Colonial Pipeline did not return a request for comment from WIRED about the payment. It is still unclear whether the company paid the ransom soon after the attack or days later, as fuel prices rose and lines at gas stations grew.

“I can’t say I’m surprised, but it’s certainly disappointing,” says Brett Callow, a threat analyst at antivirus company Emsisoft. “Unfortunately, it’ll help keep United States critical infrastructure providers in the crosshairs. If a sector proves to be profitable, they’ll keep on hitting it.”

In a briefing on Thursday, White House press secretary Jen Psaki emphasized that, in general, the US government encourages victims not to pay. Others in the administration struck a more measured note. “Colonial is a private company and we’ll defer information regarding their decision on paying a ransom to them,” said Anne Neuberger, deputy national security adviser for cyber and emerging technologies, in a press briefing on Monday. She added that ransomware victims “face a very difficult situation and they [often] have to just balance the cost-benefit when they have no choice with regards to paying a ransom.”

Researchers and policymakers have struggled to produce comprehensive guidance about ransom payments. If every victim in the world suddenly stopped paying ransoms and held firm, the attacks would quickly stop, because there would be no incentive for criminals to continue. But coordinating a mandatory boycott seems impractical, researchers say, and likely would result in more payments happening in secret. When the ransomware gang Evil Corp attacked Garmin last summer, the company paid the ransom through an intermediary. It’s not unusual for large companies to use a middleman for payment, but Garmin’s situation was particularly noteworthy because Evil Corp had been sanctioned by the US government.

“For some organizations, their business could be completely destroyed if they don’t pay the ransom,” says Katie Nickels, director of intelligence at the security firm Red Canary. “If payments aren’t allowed you’ll just see people being quieter about making the payments.”

Prolonged shutdowns of hospitals, critical infrastructure, and municipal services also threaten more than just finances. When lives are literally at stake, a principled stand against hackers quickly drops off the priority list. Nickels herself recently participated in a public-private effort to establish comprehensive United States–based ransomware recommendations; the group could not agree on definitive guidance about if and when to pay.

“The Ransomware Task Force discussed this extensively,” she says. “There were a lot of important things that the group came to a consensus on and payment was one where there was no consensus.”

As part of a cybersecurity Executive Order signed by President Joseph Biden on Wednesday, the Department of Homeland Security will create a Cyber Safety Review Board to investigate and debrief “significant” cyberattacks. That could at least help more payments be made in the open, giving the general public a fuller sense of the scale of the ransomware problem. But while the board has incentives to entice private organizations to participate, it may still need expanded authority from Congress to demand total transparency. Meanwhile, the payments will continue, and so will the attacks.

“You shouldn’t pay, but if you don’t have a choice and you’ll be out of business forever, you’re gonna pay,” says Adam Meyers, vice president of intelligence at the security firm CrowdStrike. “In my mind, the only thing that’s going to really drive change is organizations not getting got in the first place. When the money disappears, these guys will find some other way to make money. And then we’ll have to deal with that.”

For now, though, ransomware remains an inveterate threat. And Colonial Pipeline’s $5 million payment will only egg on cybercriminals.

This story originally appeared on wired.com.
