In the rush to build, test and deploy AI systems, businesses often lack the resources and time to fully validate their systems and ensure they're bug-free. In a 2018 report, Gartner predicted that 85% of AI projects would deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them. Even Big Tech companies aren't immune to the pitfalls: IBM ultimately failed to deliver an AI-powered cancer diagnostics system for one client, a project that wound up costing $62 million over four years.
Inspired by “bug bounty” programs, Jeong-Suh Choi and Soohyun Bae founded Bobidi, a platform aimed at helping companies validate their AI systems by exposing the systems to the global data science community. With Bobidi, Bae and Choi sought to build a product that lets customers connect AI systems with the bug-hunting community in a “secure” way, via an API.
The idea is to let developers probe AI systems for biases, that is, the edge cases where the systems perform poorly, to reduce the time needed for validation, Choi explained in an email interview. Bae was previously a senior engineer at Google and led augmented reality mapping at Niantic, while Choi was a senior manager at eBay and headed the "people engineering" team at Facebook. The two met at a tech industry function about 10 years ago.
“By the time bias or flaws are revealed from the model, the damage is already irrevocable,” Choi said. “For example, natural language processing algorithms [like OpenAI’s GPT-3] are often found to be making problematic comments, or mis-responding to those comments, related to hate speech, discrimination, and insults. Using Bobidi, the community can ‘pre-test’ the algorithm and find those loopholes, which is actually very powerful as you can test the algorithm with a lot of people under certain conditions that represent social and political contexts that change constantly.”
To test models, the Bobidi “community” of developers builds a validation dataset for a given system. As developers attempt to find loopholes in the system, customers get an analysis that includes patterns of false negatives and positives and the metadata associated with them (e.g., the number of edge cases).
Exposing sensitive systems and models to the outside world might give some companies pause, but Choi asserts that Bobidi "auto-expires" models after a certain number of days so that they can't be reverse-engineered. Customers pay for the service based on the number of "legit" attempts made by the community, which works out to $0.99 per 10 attempts.
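To make the mechanics above concrete, here is a minimal sketch of how a report like the one described — a tally of false positives and false negatives from "legit" community attempts, plus a bill at $0.99 per 10 attempts — might be computed. The data model and function name are invented for illustration; Bobidi's actual API and schema are not public in this article.

```python
from collections import Counter

# Hypothetical sketch: the article describes pay-per-attempt pricing
# ($0.99 per 10 "legit" attempts) and analyses of false positives and
# false negatives. The attempt record format below is an assumption.

PRICE_PER_10_ATTEMPTS = 0.99

def summarize_attempts(attempts):
    """Tally community attempts and estimate the customer's bill.

    `attempts` is a list of dicts like
      {"legit": True, "outcome": "false_negative"}
    where `outcome` is "false_positive", "false_negative", or "pass".
    """
    legit = [a for a in attempts if a["legit"]]
    outcomes = Counter(a["outcome"] for a in legit)
    cost = len(legit) / 10 * PRICE_PER_10_ATTEMPTS
    return {
        "legit_attempts": len(legit),
        "false_positives": outcomes["false_positive"],
        "false_negatives": outcomes["false_negative"],
        "estimated_cost_usd": round(cost, 2),
    }

attempts = (
    [{"legit": True, "outcome": "false_negative"}] * 7
    + [{"legit": True, "outcome": "false_positive"}] * 3
    + [{"legit": False, "outcome": "pass"}] * 5
)
print(summarize_attempts(attempts))
# 10 legit attempts, so the estimated cost is $0.99
```

Only legit attempts count toward the bill in this sketch, matching the article's description that customers pay per legitimate attempt rather than per submission.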
Choi notes that the amount of money developers can make through Bobidi — $10 to $20 per hour — is substantially above the minimum wage in many regions around the world. Assuming Choi's estimates are rooted in fact, Bobidi bucks the trend in the data science industry, which tends to pay data validators and labelers poorly. The annotators of the widely used ImageNet computer vision dataset made a median wage of $2 per hour, one study found, with only 4% making more than $7.25 per hour.
Pay structure aside, crowd-powered validation isn’t a new idea. In 2017, the Computational Linguistics and Information Processing Laboratory at the University of Maryland launched a platform called Break It, Build It that let researchers submit models to users tasked with coming up with examples to defeat them. Elsewhere, Meta maintains a platform called Dynabench that has users “fool” models designed to analyze sentiment, answer questions, detect hate speech and more.
But Bae and Choi believe the “gamified” approach will help Bobidi stand out from the pack. While it’s early days, the vendor claims to have customers in augmented reality and computer vision startups, including Seerslab, Deepixel and Gunsens.
The traction was enough to convince several investors to pledge money toward the venture. Today, Bobidi closed a $5.5 million seed round with participation from Y Combinator, We Ventures, Hyundai Motor Group, Scrum Ventures, New Product Experimentation (NPE) at Meta, Lotte Ventures, Atlas Pac Capital and several undisclosed angel investors.
Of note, Bobidi is among the first investments for NPE, which shifted gears last year from building consumer-facing apps to making seed-stage investments in AI-focused startups. When contacted for comment, head of NPE investments Sunita Parasuraman said via email: “We’re thrilled to back the talented founders of Bobidi, who are helping companies better validate AI models with an innovative solution driven by people around the globe.”
“Bobidi is a mashup between community and AI, a unique combination of expertise that we share,” Choi added. “We believe that the era of big data is ending and we’re about to enter the new era of quality data. It means we are moving from the era — where the focus was to build the best model given with the datasets — to the new era, where people are tasked to find the best dataset given with the model — complete opposite approach.”
Choi said that the proceeds from the seed round will be put toward hiring — Bobidi currently has 12 employees — and building “customer insights experiences” and various “core machine learning technologies.” The company hopes to triple the size of its team by the end of the year despite economic headwinds.
This article was originally published on TechCrunch.com.