Shriyash Upadhyay and Etan Ginsberg, AI researchers from the University of Pennsylvania, believe that many large AI companies are sacrificing basic research in the rush to develop competitive, powerful AI models. The duo blame market dynamics: when companies raise substantial funds, most of the money usually goes toward staying ahead of rivals rather than studying fundamentals.
“During our research on LLMs [at UPenn], we observed these concerning trends in the AI industry,” Upadhyay and Ginsberg told TechCrunch in an email interview. “The challenge is making AI research profitable.”
Upadhyay and Ginsberg thought that the best way to tackle this might be by founding a company of their own — a company whose products benefit from interpretability. The company’s mission would naturally align with furthering interpretability research rather than capabilities research, they hypothesized, leading to stronger research.
That company, Martian, today emerged from stealth with $9 million in funding from investors including NEA, Prosus Ventures, Carya Venture Partners and General Catalyst. The proceeds are being put toward product development, conducting research into models’ internal operations and growing Martian’s ten-employee team, Upadhyay and Ginsberg say.
Martian’s first product is a “model router,” a tool that takes in a prompt intended for a large language model (LLM) — say GPT-4 — and automatically routes it to the “best” LLM. By default, the model router chooses the LLM with the best uptime, skillset (e.g. math problem solving) and cost-to-performance ratio for the prompt in question.
“The way companies currently use LLMs is to pick a single LLM for each endpoint where they send all their requests to,” Upadhyay and Ginsberg said. “But within a task like creating a website, different models will be better suited to a specific request depending on the context the user specifies (what language, what features, how much they are willing to pay, etc.) … By using a team of models in an application, a company can achieve a higher performance and lower cost than any single LLM could achieve alone.”
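To make that concrete, here is a minimal sketch of what per-request, team-of-models routing can look like. Every model name, price and quality score below is an illustrative assumption, not Martian's actual system:

```python
# Minimal sketch of per-request "team of models" routing. All model
# names, prices and quality scores are illustrative assumptions;
# this is not Martian's implementation.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float    # assumed USD price
    skill: dict[str, float]      # assumed task -> quality in [0, 1]

MODELS = [
    Model("big-llm",   0.0300, {"code": 0.95, "summarize": 0.93}),
    Model("mid-llm",   0.0020, {"code": 0.80, "summarize": 0.88}),
    Model("small-llm", 0.0004, {"code": 0.55, "summarize": 0.75}),
]

def route(task: str, min_quality: float) -> Model:
    """Return the cheapest model whose estimated quality clears the bar."""
    eligible = [m for m in MODELS if m.skill.get(task, 0.0) >= min_quality]
    if not eligible:
        # Nothing clears the bar: fall back to the strongest model.
        return max(MODELS, key=lambda m: m.skill.get(task, 0.0))
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

print(route("summarize", 0.85).name)  # mid-llm, not the pricier big-llm
```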
There’s truth to that. Relying exclusively on a high-end LLM such as GPT-4 can be cost-prohibitive for some, if not most, companies. The CEO of Permutable.ai, a market intelligence firm, recently revealed it costs the firm over $1 million a year to process around 2 million articles per day using OpenAI’s high-end models.
Not every task needs a pricier model’s horsepower, but it can be difficult to build a system that switches intelligently on the fly. That’s where Martian — and its ability to estimate a model’s performance without actually running it — comes in.
“Martian can route to cheaper models on requests that perform similarly to the most expensive models, and only route to expensive models when necessary,” they added. “The model router indexes new models as they come out, incorporating them into applications with zero friction or manual work needed.”
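The harder part, and the one the founders are pointing at, is making that choice without running every model on the request. Here is a hedged sketch of that predict-then-dispatch pattern; the quality estimator below is a deliberately naive placeholder for the trained predictor a real router would need, and the prices are assumptions:

```python
# Sketch of the "predict, then dispatch" pattern: score each candidate
# with a cheap estimator instead of running the LLMs themselves.
# predict_quality() is a deliberately naive stand-in for a learned
# predictor; nothing here reflects Martian's actual method.

PRICES = {"big-llm": 0.0300, "mid-llm": 0.0020}  # assumed $/1k tokens

def predict_quality(model: str, prompt: str) -> float:
    """Naive heuristic: assume only long or code-bearing prompts
    benefit from the larger model."""
    hard = len(prompt) > 500 or "def " in prompt
    if model == "big-llm":
        return 0.95
    return 0.70 if hard else 0.92

def route_by_prediction(prompt: str, tolerance: float = 0.05) -> str:
    """Pick the cheapest model predicted to land within `tolerance`
    of the best predicted score."""
    scores = {m: predict_quality(m, prompt) for m in PRICES}
    best = max(scores.values())
    close_enough = [m for m, s in scores.items() if s >= best - tolerance]
    return min(close_enough, key=PRICES.get)

print(route_by_prediction("Summarize this press release."))  # mid-llm
```

In this toy version a fixed tolerance decides when “similar” performance is similar enough; making that prediction reliable across models and tasks is the research problem Martian says it has tackled.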
Now, Martian’s model router isn’t new tech. At least one other startup, Credal, provides an automatic model-switching tool. Martian’s uptake, then, will depend on the competitiveness of its pricing — and its ability to deliver in high-stakes commercial scenarios.
Upadhyay and Ginsberg claim that there’s already been some uptake, though, including among “multi-billion-dollar” companies.
“Building a truly effective model router is extremely difficult because it requires developing an understanding of how these models fundamentally work,” they said. “That’s the breakthrough we pioneered.”