Y Combinator-backed Intrinsic is building infrastructure for trust and safety teams


    A few years ago, Karine Mellata and Michael Lin met while working on Apple’s fraud engineering and algorithmic risk team. Both engineers, Mellata and Lin helped address online abuse problems, including spam, botting, account security and developer fraud, for Apple’s growing customer base.

    Despite their efforts to develop new models to keep up with the evolving patterns of abuse, Mellata and Lin felt that they were falling behind — and stuck rebuilding core elements of their trust and safety infrastructure.

    “As regulation puts more scrutiny on teams to centralize their somewhat ad-hoc trust and safety responses, we saw a true opportunity for us to help modernize this industry and help build a safer internet for everyone,” Mellata told TechCrunch in an email interview. “We dreamt of a system that could magically adapt as quickly as the abuse itself.”

    So Mellata and Lin co-founded Intrinsic, a startup that aims to give safety teams the tools necessary to prevent abusive behavior on their products. Intrinsic recently raised $3.1 million in a seed round with participation from Urban Innovation Fund, Y Combinator, 645 Ventures and Okta.

    Intrinsic’s platform is designed for moderating both user- and AI-generated content, delivering infrastructure to enable customers — mainly social media companies and e-commerce marketplaces — to detect and take action on content that violates their policies. Intrinsic focuses on safety product integration, automatically orchestrating tasks like banning users and flagging content for review.

    “Intrinsic is a fully customizable AI content moderation platform,” Mellata said. “For instance, Intrinsic can help a publishing company that’s generating marketing materials avoid giving financial advice, which entails legal liabilities. Or we can help marketplaces detect listings such as brass knuckles, which are illegal in California but not Texas.”

    Mellata makes the case that there are no off-the-shelf classifiers for these types of nuanced categories, and that even a well-resourced trust and safety team would need several weeks — or even months — of engineering time to add new automated detection categories in-house.

    Asked about rival platforms like Spectrum Labs, Azure and Cinder (nearly a direct competitor), Mellata says Intrinsic stands apart in its explainability and its far broader tooling. The platform, Mellata explained, lets customers “ask” it about the mistakes it makes in content moderation decisions and offers explanations of its reasoning. It also hosts manual review and labeling tools that let customers fine-tune moderation models on their own data.

    “Most conventional trust and safety solutions aren’t flexible and weren’t built to evolve with abuse,” Mellata said. “Resource-constrained trust and safety teams are seeking vendor help now more than ever and looking to cut moderation costs while maintaining high safety standards.”

    Absent a third-party audit, it’s tough to say just how accurate a given vendor’s moderation models are — and whether they’re susceptible to the sorts of biases that plague content moderation models elsewhere. But Intrinsic, in any case, appears to be gaining traction thanks to “large, established” enterprise customers signing contracts in the “six-figure” range on average.

    Intrinsic’s near-term plans include expanding its three-person team and extending its moderation tech to cover not only text and images but also video and audio.

    “The broader slowdown in tech is driving more interest in automation for trust and safety, which places Intrinsic in a unique position,” Mellata said. “COOs care about cutting costs. Chief compliance officers care about reducing risk. Intrinsic helps with both. We’re cheaper and faster and catch way more abuse than existing vendors or equivalent in-house solutions.”


