
GPT-4 API Interface Call and Price Analysis

Posted on 2023-05-07 20:54



In the early morning of the 15th, OpenAI released the much-anticipated GPT-4! The new model supports multimodal input, has powerful image-recognition capabilities, and significantly improves reasoning ability and answer accuracy, matching or even surpassing human performance on a range of professional and academic benchmarks. No wonder OpenAI CEO Sam Altman called GPT-4 "our most powerful model to date!"


Regarding GPT-4's capabilities, I ran a test on the day of release; for the detailed results, see "OpenAI Releases GPT-4 - Early Access to the Whole Network".

For developers, the most exciting part of the GPT-4 launch is the simultaneous release of its API. Access currently requires an application: I joined the waitlist as soon as it opened and gained access today. This article shares how to call the GPT-4 API, along with the price analysis everyone cares about.


GPT-4 API

The interface and parameters of the GPT-4 API are consistent with the GPT-3.5 interface released earlier; the available model names are:

| Model name | Description | Max tokens | Training data |
| --- | --- | --- | --- |
| gpt-4 | More powerful than the GPT-3.5 models, able to perform more complex tasks, and optimized for chat scenarios. Will be updated iteratively. | 8,192 | Up to June 2021 |
| gpt-4-0314 | Snapshot of gpt-4 from March 14, 2023. Will not be updated; supported for three months, until June 14, 2023. | 8,192 | Up to October 2019 |
| gpt-4-32k | Same capabilities as gpt-4 but with 4 times the context length. Will be updated iteratively. | 32,768 | Up to June 2021 |
| gpt-4-32k-0314 | Snapshot of gpt-4-32k from March 14, 2023. Will not be updated; supported for three months, until June 14, 2023. | 32,768 | Up to October 2019 |
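
To make the call pattern concrete, here is a minimal sketch using the openai Python SDK as it existed at the time of writing (the 0.x ChatCompletion interface). The API key and prompt text are placeholders:

```python
# Minimal GPT-4 call with the openai 0.x SDK; the same code works for
# gpt-3.5-turbo by changing the model name.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from an environment variable in practice

response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-4-0314" / "gpt-4-32k" once you have access
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GPT-4 adds over GPT-3.5."},
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
# The usage field is what the price analysis below relies on.
print(response["usage"])  # prompt_tokens, completion_tokens, total_tokens
```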

Limits

Since GPT-4 is still in a limited beta, API calls are rate-limited:

  • 40k tokens / minute
  • 200 requests/minute

This frequency is sufficient for functional testing and proof of concept.

If you use ChatGPT Plus to try GPT-4 instead, there is a separate limit of 100 messages every 4 hours.
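
If you do hit the limits during testing, a simple retry with exponential backoff is usually enough to stay productive. This sketch assumes the 0.x openai SDK (where RateLimitError lives in openai.error); the retry count and delays are arbitrary choices:

```python
# Retry a chat request with exponential backoff when the rate limit is hit.
import time
import openai

def chat_with_retry(messages, model="gpt-4", max_retries=5):
    delay = 2.0
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
            delay *= 2  # back off: 2s, 4s, 8s, ...
```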

Pricing

The pricing strategy for the GPT-4 API differs from earlier models. Before GPT-4, API usage was billed at a single rate per token, regardless of whether the token belonged to the prompt or to the generated response. With GPT-4, prompt tokens and completion (response) tokens are priced separately:

  • $0.03 / 1K prompt tokens
  • $0.06 / 1K completion tokens

That is at least 15 times more expensive than gpt-3.5-turbo, which costs $0.002 / 1K tokens.

Since the GPT-4 API is so much more expensive, and prompts and completions are billed separately, it is worth analyzing its price in detail before using it at scale.
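
As a starting point, a small helper (hypothetical, not part of the OpenAI SDK) can turn the usage field returned by each response into a dollar figure using the prices above:

```python
# Convert a response's usage dict into a dollar cost at GPT-4 prices.
GPT4_PROMPT_PRICE = 0.03 / 1000       # $ per prompt token
GPT4_COMPLETION_PRICE = 0.06 / 1000   # $ per completion token

def gpt4_request_cost(usage):
    """usage is the dict returned by the API, e.g. response["usage"]."""
    return (usage["prompt_tokens"] * GPT4_PROMPT_PRICE
            + usage["completion_tokens"] * GPT4_COMPLETION_PRICE)

# Example: 1,000 prompt tokens plus 4,000 completion tokens cost $0.27.
print(gpt4_request_cost({"prompt_tokens": 1000, "completion_tokens": 4000}))
```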

Price analysis

Token count estimation

The hardest thing to estimate when using the GPT series of APIs is the correspondence between tokens and words. We think intuitively in words (or characters), while billing is based on the number of tokens after tokenization, so the count cannot be estimated directly. Fortunately, the API returns the number of prompt and completion tokens for every request, so we can derive a rough words-to-tokens ratio statistically.
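
If you want a local estimate before sending a request, the tiktoken library can tokenize text with the same encoding GPT-4 uses. This is an optional addition beyond the measurements below, and note that chat requests add a few tokens of per-message overhead on top of the raw text:

```python
# Optional: estimate token counts locally with tiktoken (pip install tiktoken).
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # gpt-4 uses the cl100k_base encoding
text = "GPT-4 API interface call and price analysis"
print(len(enc.encode(text)))  # tokens this text would consume as raw input
```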

I selected 8 articles, from short to long, and fed them to the GPT-4 API. To keep the results stable I used the frozen snapshot model gpt-4-0314, and then collected the prompt token counts reported by the API. The results are as follows:

| # | Word count | Token count | Words / tokens |
| --- | --- | --- | --- |
| 1 | 1,600 | 2,133 | 75.01% |
| 2 | 2,000 | 2,667 | 74.99% |
| 3 | 47,094 | 62,792 | 75.00% |
| 4 | 90,000 | 120,000 | 75.00% |
| 5 | 445,134 | 593,512 | 75.00% |
| 6 | 783,134 | 1,044,183 | 75.00% |
| 7 | 884,421 | 1,179,228 | 75.00% |
| 8 | 1,084,170 | 1,445,560 | 75.00% |

Through the above test results, we can draw an important conclusion:

Roughly every 750 words consume 1,000 tokens.

Price comparison

Let's first compare the unit prices of several models side by side. Each cell in the table shows how many times more expensive the row model is than the column model:

| | gpt-4 completion ($0.06) | gpt-4 prompt ($0.03) | gpt-3.5-turbo ($0.002) | davinci ($0.02) | curie ($0.002) | babbage ($0.0005) | ada ($0.0004) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| gpt-4 (completion) is this many times more expensive | 0 | 1 | 29 | 2 | 29 | 119 | 149 |
| gpt-4 (prompt) is this many times more expensive | -0.5 | 0 | 14 | 0.5 | 14 | 59 | 74 |

As the table shows, gpt-4 prompt tokens are 14 times more expensive than gpt-3.5-turbo, and gpt-4 completion tokens are 29 times more expensive! Assuming a 1:4 ratio of prompt tokens to completion tokens (in practice, completions are often longer than prompts), the overall cost of the gpt-4 API is about 27 times that of gpt-3.5-turbo.
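
The 27x figure follows directly from the prices; a quick check:

```python
# Blended cost ratio of gpt-4 vs gpt-3.5-turbo with a 1:4 prompt-to-completion split.
gpt4_cost = 1 * 0.03 + 4 * 0.06   # $ per 1K prompt tokens + 4K completion tokens
gpt35_cost = 5 * 0.002            # gpt-3.5-turbo charges both kinds at the same rate
print(gpt4_cost / gpt35_cost)     # 27.0
```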

The figure below gives a more intuitive view of how much it costs each model to process a given number of words:

[Figure: processing cost per model for a given word count]

As the figure shows, $20 buys about 7.5 million words of processing with gpt-3.5-turbo, while the same $20 covers only about 300,000 words with gpt-4.
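
Those figures can be reproduced from the roughly 750 words per 1,000 tokens ratio measured earlier and the 1:4 prompt-to-completion assumption (a rough sketch, not exact billing):

```python
# How many words a $20 budget buys on each model, as a rough estimate.
WORDS_PER_1K_TOKENS = 750
budget = 20.0  # dollars

# gpt-3.5-turbo: $0.002 per 1K tokens for both prompt and completion
print(budget / 0.002 * WORDS_PER_1K_TOKENS)                  # 7,500,000 words

# gpt-4: blended price with a 1:4 prompt-to-completion split
blended_price_per_1k = (1 * 0.03 + 4 * 0.06) / 5             # $0.054 per 1K tokens
print(budget / blended_price_per_1k * WORDS_PER_1K_TOKENS)   # ~277,778 words
```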

So the question is: is it worth spending more than 20 times as much to use gpt-4? In other words, does gpt-4 deliver a 20-fold improvement in capability over gpt-3.5-turbo?

Is GPT-4 worth it?

The answer depends heavily on your scenario. If the application demands high accuracy (for example, law or education), then GPT-4 is clearly a better choice than GPT-3.5. For all other use cases, I recommend in-depth testing to see whether the added cost delivers a corresponding benefit over the ChatGPT (gpt-3.5-turbo) API.

It is worth noting that gpt-4's maximum token count is twice that of gpt-3.5-turbo. For long-text generation, if gpt-3.5-turbo's maximum of 4,096 tokens is not enough, you can switch to gpt-4. GPT-4 also offers a 32K version that supports 32,768 tokens, at an even higher price:

  • $0.06 / 1K prompt tokens
  • $0.12 / 1K completion tokens
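
If context length rather than price is the deciding factor, a simple selection rule might look like the sketch below; the reply budget and the estimated prompt size are assumptions you would tune for your own application:

```python
# Pick the cheapest model whose context window fits the whole request.
def pick_model(estimated_prompt_tokens, reply_budget=1000):
    needed = estimated_prompt_tokens + reply_budget
    if needed <= 4096:
        return "gpt-3.5-turbo"
    if needed <= 8192:
        return "gpt-4"
    if needed <= 32768:
        return "gpt-4-32k"
    raise ValueError("request too large for any available context window")

print(pick_model(3000))   # gpt-3.5-turbo
print(pick_model(7000))   # gpt-4
print(pick_model(20000))  # gpt-4-32k
```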

In summary, the choice between the ChatGPT API and the GPT-4 API depends on the specific needs and constraints of your project. When you're standing at this crossroads of cutting-edge technology, consider what really matters:

  • Intended application
  • Required accuracy
  • Ethical considerations
  • Financial impact
  • Adaptability to future developments

Ultimately, your decision will be a testament to your vision and to the head start you gain by embracing the AI revolution.



Category: technical article > Blog

Author: kimi

Link: http://www.pythonblackhole.com/blog/article/355/f5f8357e9caaacefe146/

Source: python black hole net

Please credit the source when reprinting in any form. Any infringement discovered will be pursued under the law.
