Posted on 2023-05-07 20:54
In the early morning of March 15, OpenAI released the much-anticipated GPT-4. The new model supports multimodal input, has powerful image-recognition capabilities, and significantly improves reasoning ability and answer accuracy, matching or even surpassing human performance on a range of professional and academic benchmarks. No wonder OpenAI CEO Sam Altman called GPT-4 "our most powerful model to date"!
Regarding GPT-4's capabilities, I ran a test on release day; for the detailed results, see my earlier post "OpenAI Releases GPT-4 - Early Access to the Whole Network".
For developers, the most exciting part of the GPT-4 launch is that the API was released at the same time. Access currently requires an application: I joined the waitlist as soon as it opened and was granted access today. This article walks through how to use the GPT-4 API and analyzes the pricing that everyone cares about.
The GPT-4 API uses the same interface and parameters as the GPT-3.5 API released earlier; the available model names are:
Model | Description | Max tokens | Training data |
---|---|---|---|
gpt-4 | More capable than any GPT-3.5 model, able to handle more complex tasks, and optimized for chat. Will be updated iteratively. | 8,192 | Up to June 2021 |
gpt-4-0314 | Snapshot of gpt-4 from March 14, 2023. Will not be updated; supported for 3 months, until June 14, 2023. | 8,192 | Up to October 2019 |
gpt-4-32k | Same capabilities as the base gpt-4 model, but with 4 times the context length. Will be updated iteratively. | 32,768 | Up to June 2021 |
gpt-4-32k-0314 | Snapshot of gpt-4-32k from March 14, 2023. Will not be updated; supported for 3 months, until June 14, 2023. | 32,768 | Up to October 2019 |
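Because the calling convention is identical to gpt-3.5-turbo, switching to GPT-4 is mostly a matter of changing the model name. Here is a minimal sketch using the openai Python package's ChatCompletion interface (the version current when GPT-4 launched); the API key and prompt are placeholders:

```python
import openai

openai.api_key = "sk-..."  # placeholder: your own API key

# Same chat-completion interface as gpt-3.5-turbo; only the model name changes.
response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-4-0314" / "gpt-4-32k" if your account has access
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the differences between GPT-4 and GPT-3.5."},
    ],
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])

# The usage field reports prompt and completion tokens separately,
# which matters because GPT-4 bills the two at different rates.
print(response["usage"])
```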
Since GPT-4 is still in a limited beta, API calls are rate-limited. The allowed request frequency is sufficient for functional testing and proof-of-concept work.
If you use ChatGPT Plus to try GPT-4 instead, there is a limit of 100 messages every 4 hours.
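If batch tests bump into the API's beta rate limit, a simple retry with exponential backoff keeps the job going. A minimal sketch, using the openai package's RateLimitError exception; the chat_with_retry helper is illustrative, not part of the library:

```python
import time

import openai

def chat_with_retry(messages, model="gpt-4", max_retries=5):
    """Call the chat endpoint, backing off whenever the beta rate limit is hit."""
    delay = 2  # seconds; doubled after every rate-limit error
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after max_retries attempts
            time.sleep(delay)
            delay *= 2
```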
The pricing strategy for the GPT-4 API differs from earlier models. Before GPT-4, every interface charged a single per-token rate, regardless of whether the tokens belonged to the prompt or to the generated response. With GPT-4, prompt tokens and completion (response) tokens are priced separately:

Model | Prompt | Completion |
---|---|---|
gpt-4 (8K) | $0.03 / 1K tokens | $0.06 / 1K tokens |
gpt-4-32k | $0.06 / 1K tokens | $0.12 / 1K tokens |

This is at least 15 times more expensive than gpt-3.5-turbo, which charges $0.002 / 1K tokens.
Since the GPT-4 API is this expensive, and prompt and completion tokens are billed separately, it is worth analyzing its cost in detail before using the GPT-4 API at scale.
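As a starting point, the usage object returned with every response is enough to compute the exact cost of a single request. A minimal sketch with the 8K prices hard-coded from the table above (the request_cost helper is illustrative, not part of the openai package):

```python
# Prices in dollars per 1K tokens for the 8K gpt-4 model.
GPT4_PROMPT_PRICE = 0.03
GPT4_COMPLETION_PRICE = 0.06

def request_cost(usage):
    """Compute the dollar cost of one request from response["usage"]."""
    prompt_cost = usage["prompt_tokens"] / 1000 * GPT4_PROMPT_PRICE
    completion_cost = usage["completion_tokens"] / 1000 * GPT4_COMPLETION_PRICE
    return prompt_cost + completion_cost

# Example: a 1,200-token prompt with a 350-token answer costs about $0.057.
print(request_cost({"prompt_tokens": 1200, "completion_tokens": 350}))
```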
The hardest thing to evaluate with the GPT-series APIs is the correspondence between tokens and word count. We intuitively think in words, but billing is based on tokens produced by the tokenizer, so cost cannot be estimated directly. Fortunately, the API returns the number of prompt and completion tokens with every request, so we can derive an approximate token-to-word relationship statistically.
I took 8 articles, from short to long, and fed them to the GPT-4 API. To keep the results stable I used the frozen snapshot model gpt-4-0314, then tallied the prompt token counts reported by the API. The results are as follows:
# | Word count | Prompt tokens | Word-to-token ratio |
---|---|---|---|
1 | 1,600 | 2,133 | 75.01% |
2 | 2,000 | 2,667 | 74.99% |
3 | 47,094 | 62,792 | 75.00% |
4 | 90,000 | 120,000 | 75.00% |
5 | 445,134 | 593,512 | 75.00% |
6 | 783,134 | 1,044,183 | 75.00% |
7 | 884,421 | 1,179,228 | 75.00% |
8 | 1,084,170 | 1,445,560 | 75.00% |
From the above test results we can draw an important conclusion:
Roughly every 750 words consume 1,000 tokens (a word-to-token ratio of about 0.75).
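You can check this ratio on your own text without calling the API: OpenAI's tiktoken library tokenizes locally with the same cl100k_base encoding the chat models use. A rough sketch (the file name is a placeholder, and the count ignores the few extra tokens the chat format adds per message):

```python
import tiktoken

# gpt-4 and gpt-3.5-turbo share the cl100k_base encoding.
enc = tiktoken.encoding_for_model("gpt-4")

# Placeholder: any sample document you want to measure.
text = open("article.txt", encoding="utf-8").read()

tokens = len(enc.encode(text))
words = len(text.split())
print(f"{words} words -> {tokens} tokens (ratio {words / tokens:.2f})")
```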
Let's first compare the unit prices of several models side by side. Each cell in the table below shows how many times more expensive the row model is than the column model (prices are dollars per 1K tokens):

Model | gpt-4 completion ($0.06) | gpt-4 prompt ($0.03) | gpt-3.5-turbo ($0.002) | davinci ($0.02) | curie ($0.002) | babbage ($0.0005) | ada ($0.0004) |
---|---|---|---|---|---|---|---|
gpt-4 (completion) | 0 | 1 | 29 | 2 | 29 | 119 | 149 |
gpt-4 (prompt) | -0.5 | 0 | 14 | 0.5 | 14 | 59 | 74 |
As can be seen from the table, gpt-4 prompt tokens are 14 times more expensive than gpt-3.5-turbo, and gpt-4 completion tokens are 29 times more expensive! Assuming a prompt-to-completion word ratio of 1:4 (in practice the completion is often longer than the prompt), the overall cost of the gpt-4 API works out to 27 times that of gpt-3.5-turbo!
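The 27x figure follows directly from the separate prompt and completion prices; a quick sketch of the arithmetic:

```python
# Blended gpt-4 price per 1K tokens, assuming prompt : completion = 1 : 4.
PROMPT_PRICE = 0.03      # gpt-4 prompt, $ per 1K tokens
COMPLETION_PRICE = 0.06  # gpt-4 completion, $ per 1K tokens
TURBO_PRICE = 0.002      # gpt-3.5-turbo, $ per 1K tokens (same rate for both)

gpt4_blended = (1 * PROMPT_PRICE + 4 * COMPLETION_PRICE) / 5  # = $0.054
print(f"gpt-4 blended: ${gpt4_blended:.3f} / 1K tokens, "
      f"{gpt4_blended / TURBO_PRICE:.0f}x the price of gpt-3.5-turbo")  # 27x
```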
To put the cost in more intuitive terms: with a $20 budget, gpt-3.5-turbo can process roughly 7.5 million words, while gpt-4 can only process about 300,000 words.
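These figures can be reproduced from the prices above and the roughly 0.75 words-per-token ratio measured earlier; a small sketch (the blended gpt-4 price again assumes the 1:4 prompt-to-completion split):

```python
WORDS_PER_TOKEN = 0.75  # from the statistics above
BUDGET = 20.0           # dollars

# Price per 1K tokens; gpt-4 uses the blended 1:4 prompt/completion rate.
prices = {
    "gpt-3.5-turbo": 0.002,
    "gpt-4 (blended)": 0.054,
}

for model, price_per_1k in prices.items():
    tokens = BUDGET / price_per_1k * 1000
    print(f"{model:<16} ~{tokens * WORDS_PER_TOKEN:,.0f} words for ${BUDGET:.0f}")
# gpt-3.5-turbo    ~7,500,000 words for $20
# gpt-4 (blended)  ~277,778 words for $20
```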
So the question is: is it worth spending more than 20 times the cost to use gpt-4? In other words, does gpt-4 deliver a 20-fold improvement in capability over gpt-3.5-turbo?
The answer depends heavily on your scenario. If the scenario demands high accuracy (such as law or education), then GPT-4 is clearly a better choice than GPT-3.5. For all other use cases, I recommend in-depth testing to see whether the added cost actually delivers a corresponding benefit over the ChatGPT API.
It is also worth mentioning that gpt-4's maximum context length of 8,192 tokens is twice that of gpt-3.5-turbo. For long-text generation, if gpt-3.5-turbo's 4,096-token limit is not enough, you can switch to gpt-4. GPT-4 also offers a 32K version that supports 32,768 tokens, though at an even higher price, so in practice it makes sense to pick the smallest model whose context window still fits the request, as sketched below.
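A small, illustrative helper (the pick_model name and the 1,000-token reply budget are assumptions, not anything from the openai package) that uses tiktoken to choose the cheapest model whose context window fits:

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base, shared with gpt-3.5-turbo

def pick_model(prompt, reply_budget=1000):
    """Return the cheapest chat model whose context window fits prompt + expected reply."""
    needed = len(enc.encode(prompt)) + reply_budget
    if needed <= 4096:
        return "gpt-3.5-turbo"
    if needed <= 8192:
        return "gpt-4"
    if needed <= 32768:
        return "gpt-4-32k"
    raise ValueError("Too long even for gpt-4-32k; split the input into chunks.")
```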
In summary, the choice between the ChatGPT API and the GPT-4 API depends on the specific needs and constraints of your project. When you stand at the crossroads of cutting-edge technology, consider what really matters.
Ultimately, your decision will be a testament to your vision and to the head start gained by embracing the AI revolution.
Author: kimi
Link: http://www.pythonblackhole.com/blog/article/355/f5f8357e9caaacefe146/
Source: python black hole net