
HoneyBook boosts contractor payment, booking, invoicing with $155M


Join Transform 2021 this July 12-16. Register for the AI event of the year.


Business management platform HoneyBook today announced that it raised $155 million at a $1.1 billion valuation post-money. The company says that it’ll use the funding to expand its platform and acquire new customers nationally.

The gig economy is alive and well. One estimate pegs the number of U.S. workers who have an “alternative work arrangement” as their primary job at 57 million, contributing to a global freelance and independent contractor market worth $3.7 trillion, according to Staffing Industry Analysts. Statista projects that the gross volume of the freelancer economy is expected to reach $455.2 billion by 2023.

HoneyBook, an alum of startup accelerator UpWest Labs, offers a financial and business management service for freelancers and “solopreneurs.” CEO Oz Alon, who founded the company in 2013 with his wife Naama Alon, Dror Shimoni, and Shadiah Sigala, initially launched HoneyBook as a wedding album service targeting gig economy photographers. The goal was to build a crowdsourced database of wedding vendors, but the company soon decided to broaden the platform’s focus.

Above: HoneyBook’s booking platform.

Image Credit: HoneyBook

Simplifying freelance booking

Today, HoneyBook offers tools to help facilitate freelancer booking, proposals, invoicing, and payments. HoneyBook users create profile pages using a set of preconfigured templates. After a client responds and agrees to terms, the parties draft a contract together using a module that automatically pulls in the relevant details. HoneyBook then highlights important fields and generates notifications, alerting all parties when the paperwork has been reviewed and signed.

HoneyBook’s apps make project files and documents shareable while collating text, email, and chat messages in a single view. The platform’s billing service handles recurring, scheduled, and one-off payments and walks customers through the invoicing, contract, and closure processes. Clients can sign digitally, freelancers can brand the workflow with banners and logos, and HoneyBook’s automation toolset can be programmed to send reminders via email.

There’s also a community component. HoneyBook hosts a curated classified ads board where hirers can post and solicit replies about opportunities and ping the company’s network of over 75,000 workers. A search tool lets clients drill down by location and expertise or view profile pages highlighting past projects and collaborations.

Growth in a crowded field

HoneyBook shares the crowded gig networking space with Fiverr, which recently acquired Phoenix-based ClearVoice, and dozens of others, including Upwork (which filed for an IPO in October), Freelancer.com, and Guru.com (which raised $25 million in December). But Oz Alon believes HoneyBook’s breadth of features — particularly its automation and contract management tools — put it a cut above the rest.

To date, HoneyBook has seen $3 billion in business booked on its platform, $1 billion of which occurred in 2020 alone.

Durable Capital Partners led HoneyBook’s latest funding round with participation from Tiger Global Management, Battery Ventures, Zeev Ventures, and O1 Advisors, bringing the company’s total raised to $241 million. Existing investors including Citi Ventures, Norwest Venture Partners, Aleph, Vintage Investment Partners, Hillsven Capital, and UpWest Labs also contributed.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



Understanding the differences between biological and computer vision



Since the early years of artificial intelligence, scientists have dreamed of creating computers that can “see” the world. As vision plays a key role in many things we do every day, cracking the code of computer vision seemed to be one of the major steps toward developing artificial general intelligence.

But like many other goals in AI, computer vision has proven to be easier said than done. In 1966, scientists at MIT launched “The Summer Vision Project,” a two-month effort to create a computer system that could identify objects and background areas in images. But it took much more than a summer break to achieve those goals. In fact, it wasn’t until the early 2010s that image classifiers and object detectors were flexible and reliable enough to be used in mainstream applications.

In the past decades, advances in machine learning and neuroscience have helped make great strides in computer vision. But we still have a long way to go before we can build AI systems that see the world as we do.

Biological and Computer Vision, a book by Harvard Medical School professor Gabriel Kreiman, provides an accessible account of how humans and animals process visual data and how far we’ve come toward replicating these functions in computers.

Kreiman’s book helps readers understand the differences between biological and computer vision. The book details how billions of years of evolution have equipped us with a complicated visual processing system, and how studying it has helped inspire better computer vision algorithms. Kreiman also discusses what separates contemporary computer vision systems from their biological counterparts.

While I would recommend a full read of Biological and Computer Vision to anyone who is interested in the field, I’ve tried here (with some help from Gabriel himself) to lay out some of my key takeaways from the book.

Hardware differences

In the introduction to Biological and Computer Vision, Kreiman writes, “I am particularly excited about connecting biological and computational circuits. Biological vision is the product of millions of years of evolution. There is no reason to reinvent the wheel when developing computational models. We can learn from how biology solves vision problems and use the solutions as inspiration to build better algorithms.”

And indeed, the study of the visual cortex has been a great source of inspiration for computer vision and AI. But before being able to digitize vision, scientists had to overcome the huge hardware gap between biological and computer vision. Biological vision runs on an interconnected network of cortical cells and organic neurons. Computer vision, on the other hand, runs on electronic chips composed of transistors.

Therefore, a theory of vision must be defined at a level that can be implemented in computers in a way that is comparable to living beings. Kreiman calls this the “Goldilocks resolution,” a level of abstraction that is neither too detailed nor too simplified.

For instance, early efforts in computer vision tackled the problem at a very abstract level, in a way that ignored how human and animal brains recognize visual patterns. Those approaches proved to be very brittle and inefficient. On the other hand, studying and simulating brains at the molecular level would be computationally inefficient.

“I am not a big fan of what I call ‘copying biology,’” Kreiman told TechTalks. “There are many aspects of biology that can and should be abstracted away. We probably do not need units with 20,000 proteins and a cytoplasm and complex dendritic geometries. That would be too much biological detail. On the other hand, we cannot merely study behavior—that is not enough detail.”

In Biological and Computer Vision, Kreiman defines the Goldilocks resolution for neocortical circuits as neuronal activity at the millisecond scale. Advances in neuroscience and medical technology have made it possible to study the activities of individual neurons at this time granularity.

And the results of those studies have helped develop different types of artificial neural networks, AI algorithms that loosely simulate the workings of cortical areas of the mammal brain. In recent years, neural networks have proven to be the most efficient algorithm for pattern recognition in visual data and have become the key component of many computer vision applications.

Architecture differences

Above: Biological and Computer Vision, by Gabriel Kreiman.

The recent decades have seen a slew of innovative work in the field of deep learning, which has helped computers mimic some of the functions of biological vision. Convolutional layers, inspired by studies made on the animal visual cortex, are very efficient at finding patterns in visual data. Pooling layers help generalize the output of a convolutional layer and make it less sensitive to the displacement of visual patterns. Stacked on top of each other, blocks of convolutional and pooling layers can go from finding small patterns (corners, edges, etc.) to complex objects (faces, chairs, cars, etc.).
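The convolution-then-pooling idea can be illustrated with a minimal NumPy sketch (the toy image and the edge-detector kernel below are illustrative choices, not taken from the book): a convolutional filter responds strongly where its pattern appears, and max pooling coarsens that response map so small shifts of the pattern don’t change the output.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest response per patch."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size  # trim to a multiple of the pool size
    return feature_map[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A tiny 6x6 "image" with a dark-to-bright vertical edge down the middle.
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)

# A Sobel-like vertical-edge detector.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

features = conv2d(image, kernel)  # strong positive responses along the edge
pooled = max_pool(features)       # coarser map, tolerant to small shifts
```

Stacking more such blocks lets later filters combine edge responses into corners, textures, and eventually whole objects.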

But there’s still a mismatch between the high-level architecture of artificial neural networks and what we know about the mammal visual cortex.

“The word ‘layers’ is, unfortunately, a bit ambiguous,” Kreiman said. “In computer science, people use layers to connote the different processing stages (and a layer is mostly analogous to a brain area). In biology, each brain region contains six cortical layers (and subdivisions). My hunch is that six-layer structure (the connectivity of which is sometimes referred to as a canonical microcircuit) is quite crucial. It remains unclear what aspects of this circuitry should we include in neural networks. Some may argue that aspects of the six-layer motif are already incorporated (e.g. normalization operations). But there is probably enormous richness missing.”

Also, as Kreiman highlights in Biological and Computer Vision, information in the brain moves in several directions. Light signals move from the retina through V1, V2, and higher areas of the visual cortex, up to the inferior temporal cortex. But each area also provides feedback to its predecessors. And within each area, neurons interact and pass information to one another. All these interactions and interconnections help the brain fill in the gaps in visual input and make inferences when it has incomplete information.

In contrast, in artificial neural networks, data usually moves in a single direction. Convolutional neural networks are “feedforward networks,” which means information only goes from the input layer to the higher and output layers.

There’s a feedback mechanism called “backpropagation,” which helps correct mistakes and tune the parameters of neural networks. But backpropagation is computationally expensive and only used during the training of neural networks. And it’s not clear if backpropagation directly corresponds to the feedback mechanisms of cortical layers.
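To make the training-versus-inference distinction concrete, here is a minimal sketch (my own illustration, not from the book) of a tiny feedforward network learning XOR: backpropagation carries error gradients backward only inside the training loop, while inference afterward is a single forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four XOR examples: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 tanh units, one sigmoid output unit.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.1

for _ in range(10000):                   # training: forward AND backward
    h = np.tanh(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                      # backward pass: gradients flow
    d_W2 = h.T @ d_out                   # from the output layer back
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # toward the input layer
    d_W1 = X.T @ d_h
    W2 -= lr * d_W2; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * d_W1; b1 -= lr * d_h.sum(axis=0)

# Inference is feedforward only: one pass from input to output,
# with no backward gradient flow.
pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
```

Once training ends, the backward pass disappears entirely, which is one reason backpropagation is a poor analogy for the continuous feedback connections in cortex.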

On the other hand, recurrent neural networks, which feed the output of higher layers back into the input of previous layers, still have limited use in computer vision.


Above: In the visual cortex (right), information moves in several directions. In neural networks (left), information moves in one direction.

In our conversation, Kreiman suggested that lateral and top-down flows of information can be crucial to bringing artificial neural networks closer to their biological counterparts.

“Horizontal connections (i.e., connections for units within a layer) may be critical for certain computations such as pattern completion,” he said. “Top-down connections (i.e., connections from units in a layer to units in a layer below) are probably essential to make predictions, for attention, to incorporate contextual information, etc.”

He also pointed out that neurons have “complex temporal integrative properties that are missing in current networks.”

Goal differences

Evolution has managed to develop a neural architecture that can accomplish many tasks. Several studies have shown that our visual system can dynamically tune its sensitivities to the task at hand. Creating computer vision systems that have this kind of flexibility remains a major challenge, however.

Current computer vision systems are designed to accomplish a single task. We have neural networks that can classify objects, localize objects, segment images into different objects, describe images, generate images, and more. But each neural network can accomplish a single task alone.


Above: Harvard Medical School professor Gabriel Kreiman, author of Biological and Computer Vision.

“A central issue is to understand ‘visual routines,’ a term coined by Shimon Ullman; how can we flexibly route visual information in a task-dependent manner?” Kreiman said. “You can essentially answer an infinite number of questions on an image. You don’t just label objects, you can count objects, you can describe their colors, their interactions, their sizes, etc. We can build networks to do each of these things, but we do not have networks that can do all of these things simultaneously. There are interesting approaches to this via question/answering systems, but these algorithms, exciting as they are, remain rather primitive, especially in comparison with human performance.”

Integration differences

In humans and animals, vision is closely tied to the senses of smell, touch, and hearing. The visual, auditory, somatosensory, and olfactory cortices interact and pick up cues from one another to adjust their inferences of the world. In AI systems, on the other hand, each of these senses exists separately.

Do we need this kind of integration to make better computer vision systems?

“As scientists, we often like to divide problems to conquer them,” Kreiman said. “I personally think that this is a reasonable way to start. We can see very well without smell or hearing. Consider a Chaplin movie (and remove all the minimal music and text). You can understand a lot. If a person is born deaf, they can still see very well. Sure, there are lots of examples of interesting interactions across modalities, but mostly I think that we will make lots of progress with this simplification.”

However, a more complicated matter is the integration of vision with more complex areas of the brain. In humans, vision is deeply integrated with other brain functions such as logic, reasoning, language, and common sense knowledge.

“Some (most?) visual problems may ‘cost’ more time and require integrating visual inputs with existing knowledge about the world,” Kreiman said.

He pointed to the following picture of former U.S. President Barack Obama as an example.


Above: Understanding what is going on in this picture requires world knowledge, social knowledge, and common sense.

To understand what is going on in this picture, an AI agent would need to know what the person on the scale is doing, what Obama is doing, who is laughing and why they are laughing, etc. Answering these questions requires a wealth of information, including world knowledge (scales measure weight), physics knowledge (a foot on a scale exerts a force), psychological knowledge (many people are self-conscious about their weight and would be surprised if their weight is well above the usual), and social understanding (some people are in on the joke, some are not).

“No current architecture can do this. All of this will require dynamics (we do not appreciate all of this immediately and usually use many fixations to understand the image) and integration of top-down signals,” Kreiman said.

Areas such as language and common sense are themselves great challenges for the AI community. But it remains to be seen whether they can be solved separately and integrated together along with vision, or whether integration itself is the key to solving all of them.

“At some point we need to get into all of these other aspects of cognition, and it is hard to imagine how to integrate cognition without any reference to language and logic,” Kreiman said. “I expect that there will be major exciting efforts in the years to come incorporating more of language and logic in vision models (and conversely incorporating vision into language models as well).”

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.



How Legacy Games still has a good business selling CD games at Walmart



Legacy Games has been publishing and distributing casual PC games at retail since 1998. And believe it or not, it’s still in business, and founder Ariella Lehrer is back in charge of the company, which targets women 40 and older.

Lehrer started the Los Angeles company 23 years ago to make games for women at retail. She left in 2017 to move on to augmented reality game maker Hitpoint. Legacy Games stayed small with just a handful of people, but it kept its relationships with key retailers such as Walmart. And it still has Walmart as a client. Meanwhile, most of its competitors have moved on to more attractive markets. So after three years at Hitpoint, Lehrer returned as CEO of Legacy Games in October and she has started a new indie publishing program.

Legacy has helped game developers find new casual game customers through Legacy’s unique distribution channels, such as Walmart. Now the company is diversifying its game portfolio by working with indie game developers. Lehrer said in an interview with GamesBeat that she is signing up a variety of indie developers who are making PC and mobile games that target casual gamers. Roughly 70% of the customers are older women, and about 30% are men.

“We are signing up cool indie game developers, and that’s overdue,” Lehrer said. “I came back and found it was still kicking, and maybe I can push it toward digital. I’m really focused on bringing Legacy Games into the digital age.”

Going digital and physical

Above: Legacy Games targets its games at women over 40.

Image Credit: Legacy Games

Since coming back, Lehrer has launched a digital store, and she expects the company to triple its digital sales in 2021.

She is signing up developers that have highly rated casual games on Steam, but have otherwise had limited distribution. Many developers have had a hard time in the pandemic. A survey by the Game Developers Conference found that 34% of game developers saw their business decline, and a report from Video Game Insights found more than 50% of indies never make more than $4,000.

“We found there are all these wonderful indie games on Steam, but our customers don’t go on Steam,” she said.

Lehrer distributes the games on the company’s website. And if any do particularly well on the digital storefront, the company will see if they will sell at Walmart, where the company sells around 3,000 units a week. Legacy can package the games together in a bundle on DVD discs. Successful digital bundles will then be sold at retail.

“It’s a lovely little business,” she said. “We have been profitable every year except for the Great Recession” in 2008.


Above: Legacy Games was started in 1998.

Image Credit: Legacy Games

It got started with a hit game called Emergency Room, originally created for IBM. Lehrer got the rights back and then sold it at retail at Walmart, and the title sold more than a million units. At its height, Legacy Games had about $5 million in revenues. That was never that exciting to investors, but the company stayed steady, and it did raise money once, a while ago, from Targus. The company made 20 different games based on television licenses like Law & Order, Criminal Minds, Murder She Wrote, Tarzan, and others. Lehrer kept it going.

Legacy has 18 of 24 spots on the shelf for casual games at Walmart stores. All of its competitors have moved on to other markets. Lehrer said she values the relationship with Walmart, which is the last national retailer standing when it comes to selling casual game DVD bundles. Legacy Games also sells its games on retailers’ online sites, such as Walmart.com, Amazon.com, and Staples.com, and through the following online distributors: Arvato, Avanquest, and Synnex. Additionally, Legacy Games sells its games through other outlets like Steam, Microsoft Windows, and wherever casual games can be sold profitably.

“Others have said it’s a shrinking market at retail and they are going somewhere else exciting,” said Lehrer. “I think there is an opportunity here. There’s still an opportunity to sell these kinds of games at retail. I had a feeling these women were underserved. They buy their products at Walmart. They love casual games like hidden object games, or match-3, or time management, and they want to play on the PC.”

While Lehrer was gone, three part-time employees ran the company. Since her return, she has added three more full-time employees, and the company’s revenues are now close to $1 million.

New developers

Lehrer has signed up 15 new game studios this year. These include JumpGate (Project Blue Book), Thomas Bowker (Lyne), Joel McDonald (Prune), Flippfly (Evergarden), Walkabout (Wanderlust: Travel Stories), Joybits (Doodle God), and BufoProject (Classic Card Games 3D), among others.

“We’re going to try out different genres, different ways of packaging, different pricing and we will see what resonates,” Lehrer said.

Legacy Games has a long history of working with established casual game developers such as Artifex Mundi, Brave Giant, Alawar, Microids, Jet Dogs, Crisp App Studios, and many more. Rivals include Big Fish Games. The company has publishing contracts with more than 50 game developers, and it sells more than 500 individual games. One of the regular hits is the Amazing Games bundle at Walmart, with titles including Supernatural Stories, Fantastic Fables, True Crime, Murder Mystery, Greatest Hits, and Magical Matches.

“There are many fewer retail and digital sites to purchase casual PC games than there were a few years ago,” Lehrer said. “Many of our competitors have switched their focus to mobile. Our customers find Steam overwhelming. I believe there is a significant revenue opportunity for indie developers to reach new customers and generate incremental revenue by partnering with Legacy.”

One of the developers using Legacy’s publishing services is Aaron San Filippo, co-owner of Flippfly, a three-person studio near Madison, Wisconsin. In an interview, he said Legacy reached out to him a couple of months ago to get his game Evergarden, which is a mysterious puzzle gardening title, onto its platform. It will be launching soon in the digital store and it has a chance for physical distribution, San Filippo said.

San Filippo said he launched the game on Steam a few years ago and it didn’t connect well with that audience. Steam was more about hardcore gamers, and so the casual gaming audience of Legacy seemed a lot more appealing. The game also debuted on Linux and iOS, and it did best on iOS.

“It goes to the target market for our games,” San Filippo said. “We’re always looking for more opportunities. This is all about diversifying our income streams. Additional revenue streams are worthwhile, even if it’s small. I’m hopeful this will do well.”

GamesBeat

GamesBeat’s creed when covering the game industry is “where passion meets business.” What does this mean? We want to tell you how the news matters to you — not just as a decision-maker at a game studio, but also as a fan of games. Whether you read our articles, listen to our podcasts, or watch our videos, GamesBeat will help you learn about the industry and enjoy engaging with it.

How will you do that? Membership includes access to:

  • Newsletters, such as DeanBeat
  • The wonderful, educational, and fun speakers at our events
  • Networking opportunities
  • Special members-only interviews, chats, and “open office” events with GamesBeat staff
  • Chatting with community members, GamesBeat staff, and other guests in our Discord
  • And maybe even a fun prize or two
  • Introductions to like-minded parties

Become a member



Colonial Pipeline paid a $5 million ransom—and kept a vicious cycle turning


Sean Rayford | Getty Images

Nearly a week after a ransomware attack led Colonial Pipeline to halt fuel distribution on the East Coast, reports emerged on Friday that the company paid a 75 bitcoin ransom—worth as much as $5 million, depending on the time of payment—in an attempt to restore service more quickly. And while the company was able to restart operations Wednesday night, the decision to give in to hackers’ demands will only embolden other groups going forward. Real progress against the ransomware epidemic, experts say, will require more companies to say no.

Not to say that doing so is easy. The FBI and other law enforcement groups have long discouraged ransomware victims from paying digital extortion fees, but in practice many organizations resort to paying. They either don’t have the backups and other infrastructure necessary to recover otherwise, can’t or don’t want to take the time to recover on their own, or decide that it’s cheaper to just quietly pay the ransom and move on. Ransomware groups increasingly vet their victims’ financials before springing their traps, allowing them to set the highest possible price that their victims can still potentially afford.


In the case of Colonial Pipeline, the DarkSide ransomware group attacked the company’s business network rather than the more sensitive operational technology networks that control the pipeline. But Colonial took down its OT network as well in an attempt to contain the damage, increasing the pressure to resolve the issue and resume the flow of fuel along the East Coast. Another potential factor in the decision, first reported by Zero Day, was that the company’s billing system had been infected with ransomware, so it had no way to track fuel distribution and bill customers.

Advocates of zero tolerance for ransom payments hoped that Colonial Pipeline’s proactive shutdown was a sign that the company would refuse to pay. Reports on Wednesday indicated that the company had a plan to hold out, but numerous subsequent reports on Thursday, led by Bloomberg, confirmed that the 75 bitcoin ransom had been paid. Colonial Pipeline did not return a request for comment from WIRED about the payment. It is still unclear whether the company paid the ransom soon after the attack or days later, as fuel prices rose and lines at gas stations grew.

“I can’t say I’m surprised, but it’s certainly disappointing,” says Brett Callow, a threat analyst at antivirus company Emsisoft. “Unfortunately, it’ll help keep United States critical infrastructure providers in the crosshairs. If a sector proves to be profitable, they’ll keep on hitting it.”

In a briefing on Thursday, White House press secretary Jen Psaki emphasized that, in general, the US government encourages victims not to pay. Others in the administration struck a more measured note. “Colonial is a private company and we’ll defer information regarding their decision on paying a ransom to them,” said Anne Neuberger, deputy national security adviser for cyber and emerging technologies, in a press briefing on Monday. She added that ransomware victims “face a very difficult situation and they [often] have to just balance the cost-benefit when they have no choice with regards to paying a ransom.”

Researchers and policymakers have struggled to produce comprehensive guidance about ransom payments. If every victim in the world suddenly stopped paying ransoms and held firm, the attacks would quickly stop, because there would be no incentive for criminals to continue. But coordinating a mandatory boycott seems impractical, researchers say, and likely would result in more payments happening in secret. When the ransomware gang Evil Corp attacked Garmin last summer, the company paid the ransom through an intermediary. It’s not unusual for large companies to use a middleman for payment, but Garmin’s situation was particularly noteworthy because Evil Corp had been sanctioned by the US government.

“For some organizations, their business could be completely destroyed if they don’t pay the ransom,” says Katie Nickels, director of intelligence at the security firm Red Canary. “If payments aren’t allowed you’ll just see people being quieter about making the payments.”

Prolonged shutdowns of hospitals, critical infrastructure, and municipal services also threaten more than just finances. When lives are literally at stake, a principled stand against hackers quickly drops off the priority list. Nickels herself recently participated in a public-private effort to establish comprehensive United States–based ransomware recommendations; the group could not agree on definitive guidance about if and when to pay.

“The Ransomware Task Force discussed this extensively,” she says. “There were a lot of important things that the group came to a consensus on and payment was one where there was no consensus.”

As part of a cybersecurity Executive Order signed by President Joseph Biden on Wednesday, the Department of Homeland Security will create a Cyber Safety Review Board to investigate and debrief “significant” cyberattacks. That could at least help more payments be made in the open, giving the general public a fuller sense of the scale of the ransomware problem. But while the board has incentives to entice private organizations to participate, it may still need expanded authority from Congress to demand total transparency. Meanwhile, the payments will continue, and so will the attacks.

“You shouldn’t pay, but if you don’t have a choice and you’ll be out of business forever, you’re gonna pay,” says Adam Meyers, vice president of intelligence at the security firm CrowdStrike. “In my mind, the only thing that’s going to really drive change is organizations not getting got in the first place. When the money disappears, these guys will find some other way to make money. And then we’ll have to deal with that.”

For now, though, ransomware remains an inveterate threat. And Colonial Pipeline’s $5 million payment will only egg on cybercriminals.

This story originally appeared on wired.com.
