
OpenAI GPT-4: Features, Comparison with ChatGPT, and How to Use It

PocketNow logo PocketNow 15/03/2023 Sanuj Bhatia

OpenAI, the company behind the highly popular tool ChatGPT, has announced its latest AI model, GPT-4. GPT-4 builds upon the success of its predecessor, adding features such as image input, improved accuracy for results, enhanced creativity, and more. In this article, we will take a closer look at what's new in GPT-4, compare it with GPT-3.5 — the model that powers ChatGPT — and learn how you can use GPT-4 right now.

What is GPT-4?

OpenAI has been releasing GPT (Generative Pre-trained Transformer) models since 2018, and GPT-4 is the latest AI language model developed by the company. It comes with many significant improvements over GPT-3.5, the model behind the highly popular tool ChatGPT. OpenAI claims GPT-4 is the "most advanced system, producing safer and more useful responses."

The company claims that GPT-4 can solve more difficult and complex problems with better accuracy, thanks to its broader general knowledge and problem-solving abilities. Additionally, GPT-4 will be available as an API for developers to build applications and services. Many companies, such as Duolingo, Be My Eyes, Stripe, and Khan Academy have already integrated GPT-4 with their products. The company also live-streamed a demo of GPT-4 on YouTube, showcasing its capabilities.
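For developers, access runs through OpenAI's chat-completions API. As a rough illustration, here is a minimal sketch of how a GPT-4 request body is assembled, following the message format OpenAI's API documentation uses; the API key handling and the actual network call are omitted, and the prompt strings are just placeholders:

```python
# Illustrative sketch: assemble the JSON body for a GPT-4 chat request.
# The "model" name and role-based message list follow OpenAI's
# chat-completions convention; authentication and the HTTP call itself
# are left out of this sketch.

def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Return the request payload for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("Summarize this article in two sentences.")
print(request["model"])  # gpt-4
```

Services like Duolingo and Khan Academy would send payloads of this shape (with their own system prompts) to OpenAI's endpoint and render the model's reply inside their apps.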


GPT-4: New Features and Comparison with ChatGPT

According to OpenAI, GPT-4 is better than the previous models in three key areas: creativity, support for image input, and longer context. Let's take a look at each of these areas in detail:

Support for Image Input

One of the most noticeable changes over the previous generation is that GPT-4 is "multimodal," meaning it supports more than one form of input. ChatGPT, and all the services based on GPT-3, are limited to only text input. On the other hand, the newer GPT-4 supports both text and image inputs, as well as a combination of both.

So, you can ask GPT-4 to generate a caption for an image or describe what is happening in a scene, but its understanding extends beyond that. For instance, users can show it the ingredients they have in their kitchen and ask what they can make. In the live demo, Greg Brockman, President and Co-Founder of OpenAI, was also able to generate the whole HTML code for a website based just on a sketch.

Additionally, Be My Eyes, an app that connects blind and low-vision individuals with sighted volunteers, has also started integrating GPT-4 into its service. In a demonstration, the service showed off GPT-4's capabilities: it was able to correctly identify plants, describe patterns on clothing, translate labels, read maps, and do much more.
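To make the "combination of both" idea concrete, a combined text-plus-image request might be structured as below. Note this is a hypothetical sketch: OpenAI had not published the image-input API format at launch, so the field names here ("type", "text", "image_url") and the example URL are illustrative assumptions, not documented API:

```python
# Hypothetical sketch of a multimodal user message pairing a text
# question with an image reference. Field names are illustrative only;
# OpenAI had not documented the image-input format at GPT-4's launch.

def build_multimodal_message(question: str, image_url: str) -> dict:
    """Combine a text question and an image reference in one message."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = build_multimodal_message(
    "What can I cook with these ingredients?",
    "https://example.com/kitchen-photo.jpg",  # placeholder image URL
)
print(len(msg["content"]))  # 2: one text part, one image part
```

The kitchen-ingredients scenario from the demo maps directly onto this shape: the photo supplies the ingredients, and the text part carries the question.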

Longer Context

Another improvement over ChatGPT (or rather GPT-3) is that GPT-4 can keep more context "in mind." If you have used ChatGPT on your iPhone or laptop, you may have noticed that it "forgets" the conversation after a point. This is because it has a maximum capacity of 2048 tokens, which is roughly 1,500 words.

In comparison, GPT-4 has the ability to handle longer forms of input as it has a maximum token count of 32,768. This means you can feed up to 25,000 words to a model that is based on GPT-4. This will come in handy when you have a large document to summarize, or are creating long-form content using the GPT tool.
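The word counts above follow from the common rule of thumb that one token corresponds to roughly 0.75 English words. A quick back-of-the-envelope check (the 0.75 ratio is an approximation, not an exact conversion):

```python
# Rough arithmetic behind the context-window comparison: one token is
# commonly approximated as 0.75 English words.

WORDS_PER_TOKEN = 0.75  # approximate ratio for English text

def approx_words(max_tokens: int) -> int:
    """Convert a token limit to an approximate English word count."""
    return round(max_tokens * WORDS_PER_TOKEN)

print(approx_words(2048))   # 1536  -> the "roughly 1,500 words" figure
print(approx_words(32768))  # 24576 -> close to the quoted 25,000 words
```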

GPT-4 is More Creative

OpenAI claims that GPT-4 has significantly improved in terms of its ability to generate creative content and collaborate with users on creative projects. This includes a diverse range of applications such as music composition, screenplay writing, technical writing, and even adapting to a user's individual writing style.

Other Features

  • Advanced reasoning capabilities: GPT-4 has a broader general knowledge and deeper understanding of various domains than ChatGPT. When put through the Uniform Bar Exam and the Biology Olympiad, GPT-4 performs exceptionally well.
  • Multilingual: GPT-4 is multilingual and can answer multiple-choice questions with high accuracy across 26 languages, from Italian to Korean.
  • Safer and more aligned: GPT-4 is safer and more aligned according to OpenAI. It is 82% less likely to generate negative content in response to prompts and 40% more likely to generate desired information.

GPT-4: Limitations

While GPT-4 is undoubtedly a powerful language model, it still has certain limitations. OpenAI has acknowledged that social biases, hallucinations, and adversarial prompts remain issues that need to be tackled. The company says it continues to work on these issues with "transparency, user education, and wider AI literacy."

Additionally, GPT-4, like ChatGPT, still has limitations in terms of its knowledge base: it lacks knowledge of events that occurred after September 2021, and it still can't access online sources.

GPT-4: How to Use It Right Now

There are two ways you can use the GPT-4 model right now. The best way is through ChatGPT Plus — OpenAI's paid version of ChatGPT. The company says that ChatGPT Plus has been upgraded to the GPT-4 AI model. Access is still (sort of) limited, and OpenAI says it will adjust the usage cap "depending on demand and system performance." For now, paid subscribers are limited to 100 messages every 4 hours for GPT-4.

To upgrade to ChatGPT Plus, head over to the website. Click on the "Upgrade to Plus" button located in the left menu. Next, click on the "Upgrade Plan" button, which will take you to a Stripe checkout page. Fill in the required details and complete the checkout. Once you become a paid customer, you'll immediately have access to the new GPT model.

Another way to access the GPT-4 model is through Microsoft's Bing Chat. Microsoft has confirmed that its new AI-based Bing search engine is running on GPT-4, which has been customized for search. If you've used the new Bing Chat preview in the last five weeks, you've already experienced an early version of this powerful model.

There are some limitations to Bing Chat, however. For instance, it currently doesn't support image input — and thus no multimodal input — although this functionality may be added at a later stage. Moreover, it should be emphasized that Microsoft is using a combination of GPT-4 and its own Prometheus model to provide information and results, to ensure that the model doesn't go off the rails.

You can sign up for Bing Chat by using the official Bing app for iOS and Android. You can also access the service on the Edge web browser by going to this link. It's worth noting that there may be a waitlist, so you may not be able to use it immediately.

What are your thoughts on GPT-4? How is it going to improve your workflow? Do let us know in the comments section below!
