Today at its first-ever developer conference, OpenAI unveiled GPT-4 Turbo, an improved version of its flagship text-generating AI model, GPT-4, that the company claims is both “more powerful” and less expensive.
GPT-4 Turbo comes in two versions: one that analyzes text only and a second that understands the context of both text and images. The text-only model is available in preview via an API starting today, and OpenAI says it plans to make both generally available “in the coming weeks.”
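For developers, access presumably runs through the same chat completions API used for GPT-4. Here’s a minimal sketch using OpenAI’s Python SDK; the model identifier “gpt-4-1106-preview” is the preview name OpenAI announced for the text-only model, and the prompt is purely illustrative.

```python
# A minimal sketch of calling the text-only GPT-4 Turbo preview with
# OpenAI's Python SDK (pip install openai). The model name is the
# preview identifier announced at launch; adjust if it changes.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the GPT-4 Turbo announcement in one sentence."},
    ],
)
print(response.choices[0].message.content)
```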
They’re priced at $0.01 per 1,000 input tokens (~750 words) and $0.03 per 1,000 output tokens, where “tokens” represent bits of raw text (e.g., the word “fantastic” split into “fan,” “tas” and “tic”). (Input tokens are tokens fed into the model, while output tokens are tokens that the model generates based on the input tokens.) The pricing of the image-processing GPT-4 Turbo will depend on the image size. For example, passing a 1080×1080-pixel image to GPT-4 Turbo will cost $0.00765, OpenAI says.
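To make the arithmetic concrete, here’s a quick sketch of estimating a request’s cost from the rates above; the token counts in the example are hypothetical.

```python
# Back-of-the-envelope cost estimate at the announced GPT-4 Turbo rates:
# $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens.
INPUT_RATE = 0.01 / 1000   # dollars per input token
OUTPUT_RATE = 0.03 / 1000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in dollars."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10,000-token prompt (~7,500 words) with a 1,000-token reply
# costs $0.10 + $0.03 = $0.13.
print(f"${estimate_cost(10_000, 1_000):.2f}")  # -> $0.13
```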
“We optimized performance so we’re able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4,” OpenAI writes in a blog post shared with TechCrunch this morning.
GPT-4 Turbo boasts several improvements over GPT-4 — one being a more recent knowledge base to draw on when responding to requests.
Like all language models, GPT-4 Turbo is essentially a statistical tool to predict words. Fed an enormous number of examples, mostly from the web, GPT-4 Turbo learned how likely words are to occur based on patterns, including the semantic context of surrounding text. For example, given a typical email ending in the fragment “Looking forward…” GPT-4 Turbo might complete it with “… to hearing back.”
GPT-4 was trained on web data up to September 2021, but GPT-4 Turbo’s knowledge cut-off is April 2023. That should mean questions about recent events — at least events that happened prior to the new cut-off date — will yield more accurate answers.
GPT-4 Turbo also has an expanded context window.
A context window, measured in tokens, refers to the text the model considers before generating any additional text. Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic, often in problematic ways.
GPT-4 Turbo offers a 128,000-token context window — four times the size of GPT-4’s and the largest context window of any commercially available model, surpassing even Anthropic’s Claude 2. (Claude 2 supports up to 100,000 tokens; Anthropic claims to be experimenting with a 200,000-token context window but has yet to publicly release it.) Indeed, 128,000 tokens translates to around 100,000 words or 300 pages, which for reference is around the length of “Wuthering Heights,” “Gulliver’s Travels” and “Harry Potter and the Prisoner of Azkaban.”
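For a sense of how that capacity gets consumed, developers can count tokens ahead of time with tiktoken, OpenAI’s open source tokenizer; the sketch below uses the GPT-4 encoding, which GPT-4 Turbo is assumed to share, and an arbitrary sample string.

```python
# Counting tokens with tiktoken (pip install tiktoken) to gauge how much
# text fits in a context window before sending a request.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # assumed to match GPT-4 Turbo

text = "Looking forward to hearing back."
tokens = enc.encode(text)
print(len(tokens), tokens)  # token count and the underlying integer IDs

# A prompt must encode to well under 128,000 tokens (leaving room for the
# model's reply) to fit in GPT-4 Turbo's context window.
```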
And GPT-4 Turbo supports a new “JSON mode,” which ensures that the model responds with valid JSON, the open standard file and data interchange format. That’s useful in web apps that transmit data, like those that send data from a server to a client so it can be displayed on a web page, OpenAI says. Other new parameters will allow developers to make the model return “consistent” completions more of the time and, for more niche applications, log probabilities for the most likely output tokens generated by GPT-4 Turbo.
“GPT-4 Turbo performs better than our previous models on tasks that require the careful following of instructions, such as generating specific formats (e.g. ‘always respond in XML’),” OpenAI writes. “And GPT-4 Turbo is more likely to return the right function parameters.”
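Here’s a rough sketch of what JSON mode looks like in practice, based on the parameters OpenAI described: response_format requests valid JSON, and seed appears to be the knob behind the “consistent” completions mentioned above. The model name and prompt are illustrative.

```python
# A sketch of the new "JSON mode": response_format asks the model to emit
# valid JSON, and the seed parameter nudges it toward reproducible,
# "consistent" completions across runs.
import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    seed=42,  # same seed + same inputs -> more consistent outputs
    messages=[
        {"role": "system", "content": "Respond only with a JSON object."},
        {"role": "user", "content": "Give three fields for this announcement: name, year and topic."},
    ],
)

# JSON mode ensures the reply parses as JSON.
data = json.loads(response.choices[0].message.content)
print(data)
```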
GPT-4 upgrades
OpenAI hasn’t neglected GPT-4 in rolling out GPT-4 Turbo.
Today, the company’s launching an experimental access program for fine-tuning GPT-4. Unlike the fine-tuning program for GPT-3.5, GPT-4’s predecessor, the GPT-4 program will involve more oversight and guidance from OpenAI teams, the company says, mainly due to technical hurdles.
“Preliminary results indicate that GPT-4 fine-tuning requires more work to achieve meaningful improvements over the base model compared to the substantial gains realized with GPT-3.5 fine-tuning,” OpenAI writes in the blog post.
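For reference, here’s what the fine-tuning flow looks like today for GPT-3.5 through OpenAI’s Python SDK; presumably developers admitted to the experimental program would follow a similar flow for GPT-4, though the exact GPT-4 model identifier and any additional gating are assumptions here.

```python
# A sketch of creating a fine-tuning job with OpenAI's Python SDK. This
# is the existing GPT-3.5 flow; whether GPT-4 jobs use the same endpoint
# and model name is an assumption, since access is gated behind the
# experimental program.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # swap in a GPT-4 identifier if/when access is granted
)
print(job.id, job.status)
```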
Elsewhere, OpenAI announced that it’s doubling the tokens-per-minute rate limit for all paying GPT-4 customers. But pricing will remain the same at $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens (for the GPT-4 model with an 8,000-token context window) or $0.06 per 1,000 input tokens and $0.12 per 1,000 output tokens (for GPT-4 with a 32,000-token context window).