It’s critical to regulate AI within the multi-trillion-dollar API economy


    Application programming interfaces (APIs) power the modern internet, including most of the websites, mobile apps, and IoT devices we use. And because the internet now reaches nearly every part of the planet, APIs give people the ability to connect to almost any functionality they want. This phenomenon, often referred to as the “API economy,” is projected to have a total market value of $14.2 trillion by 2027.

    Given the rising relevance of APIs in our daily lives, they have caught the attention of multiple authorities, who have introduced key regulations. The first level is defined by organizations such as the IEEE and W3C, which set the standards for technical capabilities and limitations that underpin the technology of the entire internet.

    Security and data privacy are covered by internationally recognized requirements such as ISO 27001 and the GDPR. Their main goal is to provide a framework for the areas that APIs underpin.

    But now, with AI in the picture, regulation has become much more complicated.

    How AI integration changed the API landscape


    Many AI companies use the benefits of API technologies to bring their products into every home and workplace. The most prominent example is OpenAI’s early release of its API to the public. This combination would not have been possible just two decades ago, when neither APIs nor AI had reached the level of maturity we began to see in 2022.

    Code creation or co-creation with AI has quickly become the norm in software development, especially in the complicated process of API creation and deployment. Tools like GitHub Copilot and ChatGPT can write the code to integrate with almost any API, and soon they will shape the patterns most software engineers use to create APIs, sometimes without those engineers understanding the generated code deeply enough.
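    To make this concrete, here is a minimal sketch of the kind of integration boilerplate such a tool might generate: a short Python script that calls OpenAI’s public chat completions endpoint. The model name, prompt, and environment variable are illustrative assumptions rather than recommendations.

    # A minimal sketch of AI-assistant-style API integration code.
    # It calls OpenAI's public chat completions endpoint; the model name,
    # prompt, and environment variable are illustrative only.
    import os
    import requests

    API_URL = "https://api.openai.com/v1/chat/completions"

    def ask_llm(prompt: str) -> str:
        """Send a single prompt to the LLM API and return the reply text."""
        response = requests.post(
            API_URL,
            headers={
                "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
                "Content-Type": "application/json",
            },
            json={
                "model": "gpt-4o-mini",  # illustrative model name
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(ask_llm("Write a function that pages through a REST API."))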

    We also see companies like Superface and Blobr innovating in the field of API integration, using AI to let users connect to almost any API the way they would talk to a chatbot.

    Various kinds of AI have been around for a while, but it is generative AI (and large language models [LLMs]) that completely changed the risk landscape. GenAI can create things in endless ways, and that creativity is either controlled by humans or, in the case of artificial general intelligence (AGI), may move beyond our current ability to control.


