OpenAI has just introduced GPT-4, a new AI model that excels in comprehending both images and text. The company says that this achievement represents a major step forward in the quest to advance deep learning technology.
OpenAI’s paying subscribers can access GPT-4 through ChatGPT Plus, albeit with certain usage limitations. As for developers, they can join a waitlist to gain access to the API.
GPT-4 represents a significant improvement over its predecessor, GPT-3.5: it accepts both text and image inputs and generates text, whereas GPT-3.5 accepted text only.
Additionally, GPT-4 is deemed to perform at a “human level” on various professional and academic benchmarks. For instance, GPT-4 scored in the top 10% of simulated bar exam takers, whereas GPT-3.5’s score was in the bottom 10%.
One of the most striking features of GPT-4 is its ability to comprehend images as well as text. It can caption and interpret fairly intricate scenes, such as identifying a Lightning Cable adapter in a photo of it plugged into an iPhone.
However, a potentially more significant advancement in GPT-4 is its steerability tooling. OpenAI is introducing a new API capability called “system” messages, which lets developers prescribe the model’s style and task through explicit directions. System messages establish boundaries and set the tone for the AI’s subsequent interactions. Eventually, ChatGPT will gain this feature as well.
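In the chat-style request format OpenAI uses, a system message is simply the first entry in the message list. The sketch below assembles such a request body; the message shape follows OpenAI's chat format, but the helper name and example instructions are illustrative assumptions (an actual call would also need the API endpoint and a key).

```python
import json

def build_chat_request(system_instruction, user_prompt, model="gpt-4"):
    """Assemble a chat-completion request body with a steering system message."""
    return {
        "model": model,
        "messages": [
            # The system message sets tone and boundaries for later turns.
            {"role": "system", "content": system_instruction},
            # User turns then play out within those boundaries.
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request(
    "You are a Socratic tutor. Never give the answer outright; "
    "ask guiding questions instead.",
    "How do I solve 3x + 5 = 14?",
)
print(json.dumps(request, indent=2))
```

Because the instruction rides along in a privileged "system" role rather than being mixed into the user's prompt, the model treats it as standing guidance for the whole conversation.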
Despite the implementation of system messages and other enhancements, OpenAI recognizes that GPT-4 is still prone to imperfections. It occasionally generates false information and makes reasoning mistakes, often with a high degree of certainty. For instance, OpenAI cited an instance where GPT-4 incorrectly identified Elvis Presley as the “son of an actor,” a glaring error.
It also shares some of the same limitations as ChatGPT:
GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its data cuts off (September 2021), and does not learn from its experience. It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obvious false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces.
That said, GPT-4 does bring safety improvements. For instance, it is more likely to refuse requests for instructions on synthesizing dangerous chemicals: the company says it is 82% less likely than GPT-3.5 to respond to requests for “disallowed” content.
GPT-4’s pricing is $0.03 per 1,000 “prompt” tokens and $0.06 per 1,000 “completion” tokens, where 1,000 tokens corresponds to roughly 750 words.
The prompt tokens represent the portions of text that are fed into GPT-4, while the completion tokens are the output generated by the model.
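Those two rates make cost estimation a simple per-token calculation. A minimal sketch, using the prices quoted above (the function name and example token counts are illustrative):

```python
# Per-token rates derived from the published prices:
# $0.03 per 1,000 prompt tokens, $0.06 per 1,000 completion tokens.
PROMPT_RATE = 0.03 / 1000
COMPLETION_RATE = 0.06 / 1000

def estimate_cost(prompt_tokens, completion_tokens):
    """Estimate the dollar cost of one GPT-4 request."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# e.g. a 2,000-token prompt (~1,500 words) with a 500-token reply:
print(f"${estimate_cost(2000, 500):.2f}")  # 2000*0.00003 + 500*0.00006 = $0.09
```

Note that output tokens cost twice as much as input tokens, so long completions dominate the bill faster than long prompts.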
GPT-4 has apparently been hiding in plain sight this whole time. Microsoft has confirmed that Bing Chat, their chatbot technology co-developed with OpenAI, is powered by GPT-4.
Other organizations that have already embraced GPT-4 include Stripe, which employs it to scan business websites and generate a summary for customer service representatives. Duolingo has incorporated GPT-4 into a new language learning subscription level.
Meanwhile, Morgan Stanley is developing a GPT-4-based system that retrieves information from corporate documents and provides it to financial analysts. Additionally, Khan Academy is utilizing GPT-4 to build an automated tutor of some kind.