OpenAI debuts GPT-4 Turbo and copyright shield

Video OpenAI, maker of ChatGPT and less memorably branded AI models, held its first developer conference on Monday in San Francisco, where it announced a new foundational model, more affordable pricing, customizable, low-code models called GPTs, and a store to distribute them.

CEO Sam Altman presided over the event, which featured a special guest, Microsoft CEO Satya Nadella, whose company has invested billions in OpenAI to enhance its own products.

Asked about how Microsoft views its partnership with OpenAI, Nadella gushed, “We love you guys. It’s been fantastic for us.” The Windows boss then went on to talk about how Azure, Microsoft’s cloud platform, has changed in light of the models OpenAI has been building.

Microsoft’s goal, he said, “ultimately, it’s about being able to get the benefits of AI broadly disseminated to everyone,” adding that safety is also a serious focus. If you want to sit through the whole thing yourself, it’s embedded below.

[YouTube video]

Altman touched on that concern as it applies to OpenAI customers just prior to Nadella’s arrival, when he talked up the company’s indemnification plan, called Copyright Shield, for API and enterprise customers who might be concerned about legal liability arising from the use of unvetted AI model output.

“Copyright Shield means that we will step in and defend our customers and pay the costs incurred if you face legal claims around copyright infringement,” said Altman. “And this applies both to ChatGPT Enterprise and the API. And let me be clear, this is a good time to remind people we do not train on data from the API or ChatGPT Enterprise ever.”

Microsoft and GitHub are presently fighting a copyright lawsuit from developers who contend GitHub’s Copilot code assistant, derived from OpenAI’s Codex model, reproduced their code without permission.

OpenAI introduced its application programming interface (API) back in 2020 and now claims over two million developers use it to integrate models like GPT-4, GPT-3.5, DALL·E and Whisper into their own applications.

Press the turbo button!

The new model is called GPT-4 Turbo, the successor to GPT-4, which debuted in March and saw public release in July. It can accept verbose prompts – up to 128,000 tokens, or about 300 pages of text – and suits more modest budgets: input tokens cost a third of GPT-4's price, and output tokens half as much.
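
To put those ratios in dollars, here's a back-of-the-envelope comparison using the per-1,000-token prices announced at DevDay – GPT-4 Turbo at $0.01 in / $0.03 out versus GPT-4 (8K) at $0.03 in / $0.06 out. The request sizes are invented for illustration; note GPT-4 itself couldn't actually accept a 100K-token prompt.

```python
def cost_usd(input_tokens, output_tokens, in_price, out_price):
    """Price one request given per-1,000-token rates in USD."""
    return (input_tokens / 1000) * in_price + (output_tokens / 1000) * out_price

# A prompt of 100K input tokens (within Turbo's 128K window) plus 1K of output:
turbo = cost_usd(100_000, 1_000, 0.01, 0.03)
gpt4 = cost_usd(100_000, 1_000, 0.03, 0.06)  # hypothetical: GPT-4 capped out at 8K/32K context

print(f"GPT-4 Turbo: ${turbo:.2f}")  # GPT-4 Turbo: $1.03
print(f"GPT-4:       ${gpt4:.2f}")   # GPT-4:       $3.06
```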

GPT-4 Turbo also has more current knowledge about the world, at least up to April 2023, Altman said. He promised more effort will be made to keep GPT-4 Turbo current – its predecessor was aware of recent events only until September 2021, and developers found that somewhat frustrating.

“We will try to never let it get that out of date again,” said Altman.

The updated model is also better at function calling – it can handle prompts that ask for multiple tasks (e.g. “open the pod bay doors then apologize and take yourself offline”) – and it’s more likely to return the right function parameters for such requests.
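
As a sketch of what such a request looks like in the Chat Completions "tools" format, here is the pod-bay-doors example expressed as two function definitions the model can call. The function names and parameter schemas are made up for illustration; only the payload shape and the launch-era model identifier follow the API.

```python
import json

# Two hypothetical callable functions, declared in JSON Schema form.
tools = [
    {
        "type": "function",
        "function": {
            "name": "open_pod_bay_doors",
            "description": "Open the pod bay doors",
            "parameters": {"type": "object", "properties": {}},
        },
    },
    {
        "type": "function",
        "function": {
            "name": "go_offline",
            "description": "Apologize, then take the assistant offline",
            "parameters": {
                "type": "object",
                "properties": {"apology": {"type": "string"}},
                "required": ["apology"],
            },
        },
    },
]

request = {
    "model": "gpt-4-1106-preview",  # GPT-4 Turbo's identifier at launch
    "messages": [{
        "role": "user",
        "content": "Open the pod bay doors, then apologize and take yourself offline.",
    }],
    "tools": tools,
}
print(json.dumps(request, indent=2))
```

A model that handles multi-task prompts well would respond with two tool calls, one per function, each with the right parameters filled in.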

What’s more, the API now has a parameter to ensure that models return properly formatted JSON data. And there’s a seed parameter that enables reproducible outputs from the model.
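
A minimal sketch of a request using both new parameters – "response_format" to force syntactically valid JSON output, and "seed" for best-effort reproducible sampling. The prompt is invented; the parameter shapes follow the launch announcement.

```python
# JSON mode requires the prompt itself to ask for JSON, per OpenAI's docs.
request = {
    "model": "gpt-4-1106-preview",
    "messages": [
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
    "response_format": {"type": "json_object"},  # guarantees well-formed JSON
    "seed": 42,  # same seed + same params -> (mostly) the same output
}
```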

Within a few weeks, OpenAI plans to support a feature to return the log probabilities for the most likely output tokens from GPT-4 Turbo and GPT-3.5 Turbo (also updated), which could prove useful for autocompletion applications.
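
To see why log probabilities matter for autocompletion, consider ranking candidate next tokens by likelihood. The candidate tokens and their logprob values below are invented, but the arithmetic – exponentiating a log probability to recover a probability – is standard.

```python
import math

# Hypothetical logprobs for three candidate continuations of "...over":
candidates = {" the": -0.12, " a": -2.3, " an": -4.1}

# Rank candidates from most to least likely (higher logprob = more likely).
ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)

# Convert log probabilities back to probabilities for display.
probs = {token: math.exp(lp) for token, lp in candidates.items()}
print(ranked[0][0])  # the top completion to suggest first
```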

GPT-4 Turbo can also accept images as input via the Chat Completions API, the Images API can generate visual output via DALL·E 3, and a new text-to-speech API for generating human-sounding speech from text is thrown in as well.
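
Image input uses a content-parts message format, sketched below with the vision-capable model identifier from launch. The image URL is a placeholder, not a real resource.

```python
# A Chat Completions message mixing text and an image reference.
request = {
    "model": "gpt-4-vision-preview",  # the vision-capable GPT-4 Turbo at launch
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
        ],
    }],
}
```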

Some assistance on assistants?

OpenAI introduced a new API called the Assistants API, which is intended to make it easier to build AI assistants.

“The Assistants API includes persistent threads, so they don’t have to figure out how to deal with a long conversation tree; built-in retrieval; Code Interpreter, a working Python interpreter in a sandboxed environment; [and improved function calling],” explained Altman.
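
The flow at launch, sketched as the payload a client would send and the sequence of calls around it. The endpoint paths and field names follow the launch docs, but the assistant's name and instructions are invented, and the details should be treated as illustrative rather than a reference.

```python
# Payload for creating an assistant with both built-in tools Altman mentioned.
create_assistant = {
    "model": "gpt-4-1106-preview",
    "name": "Data helper",                                  # hypothetical
    "instructions": "You analyze CSV files the user uploads.",  # hypothetical
    "tools": [{"type": "code_interpreter"}, {"type": "retrieval"}],
}

# The call sequence, roughly:
# 1) POST /v1/assistants               -> returns an assistant id
# 2) POST /v1/threads                  -> a persistent thread (the "conversation tree")
# 3) POST /v1/threads/{id}/messages    -> append the user's message
# 4) POST /v1/threads/{id}/runs        -> have the assistant process the thread
```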

The biz also demonstrated customizable, sharable AI models called GPTs (which, in case anyone asks, stands for generative pre-trained transformer).

“GPTs are tailored versions of ChatGPT for a specific purpose,” said Altman. “You can build a GPT, a customized version of ChatGPT, for almost anything. Add instructions, expanded knowledge and actions, and then you can publish it for others to use. And because they combine instructions, expanded knowledge and actions, they can be more helpful to you.”

GPTs, available to ChatGPT Plus and Enterprise users, can be created without coding, via conversational interaction. To demonstrate, Altman created one on stage to give founders and developers advice when starting new projects. The value of that advice, however, was left untested.

Later this month, OpenAI plans to launch the GPT Store. The company intends to promote certain GPTs that prove popular and will offer revenue sharing based on usage. “Revenue sharing is important to us,” said Altman. “We’re gonna pay people who build the most useful and the most used GPTs a portion of our revenue.”

Altman promised to provide further details shortly; a spokesperson for the company also kicked the can down the road when asked how the program will work. Absent the specific contractual terms, it’s too early to tell how OpenAI’s arrangement with developers will compare to the 15-30 percent regime enforced by Apple and Google in their respective app stores.

OpenAI also said it will make customized models for companies that can afford it.

“With custom models, our researchers will work closely with a company to help them make a great custom model, especially for them and their use case, using our tools,” said Altman.

“This includes modifying every step of the model training process, doing additional domain-specific pre-training or a custom post-training process tailored for a specific domain, and whatever else. We won’t be able to do this with many companies to start. It’ll take a lot of work and, in the interest of expectations, at least initially it won’t be cheap.” ®

More from The Register