
OpenAI Announces Latest Milestone in Deep Learning Scaling

OpenAI revealed that it has been using GPT-4 internally to support sales, programming, customer support, and content moderation.

Shortly after launching ChatGPT, which drew global attention, artificial intelligence company OpenAI released a multimodal AI model dubbed GPT-4. The new model understands both images and text, and the company describes it as “the latest milestone in OpenAI’s effort in scaling up deep learning.” GPT-4 can accept a prompt of text and images, letting users specify any vision or language task. However, image inputs remain under research and are not yet available to the public.
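For context, the snippet below is an illustrative sketch of how such a combined vision-and-language prompt can be structured as a chat message in the format used by OpenAI’s Python client; the image URL and question are placeholders, and, as noted above, image inputs were not open to the public at the time of the announcement.

```python
# Illustrative only: image inputs were still gated at GPT-4's launch.
# This sketches how a combined vision-and-language prompt is structured as a
# single chat message, with the content given as a list that mixes text parts
# and an image reference. The URL and question are placeholders.
multimodal_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is unusual about this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
    ],
}
```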

OpenAI Announces GPT-4

Announcing GPT-4 on March 14, OpenAI noted that it may be less capable than humans in many real-world scenarios. However, it demonstrates human-level performance on various academic and professional benchmarks.

OpenAI is set to create another disruption in the tech space with the launch of GPT-4. The model passed a simulated bar exam with a score around the top 10% of test takers, whereas GPT-3.5 scored around the bottom 10% on the same exam. OpenAI explained that it spent six months applying lessons from its adversarial testing program and from ChatGPT to the new multimodal model. Notably, GPT-3.5 served as the first test run of the system, and it surfaced some bugs. The artificial intelligence company fixed those bugs and improved its theoretical foundations, resulting in GPT-4.

OpenAI said GPT-4 is its first large model whose training performance it was able to accurately predict ahead of time. Going forward, the company will focus on refining this methodology to prepare for future capabilities.

OpenAI gave more detail on GPT-4’s rollout, noting:

“We are releasing GPT-4’s text input capability via ChatGPT and the API (with a waitlist). To prepare the image input capability for wider availability, we’re collaborating closely with a single partner to start. We’re also open-sourcing OpenAI Evals, our framework for automated evaluation of AI model performance, to allow anyone to report shortcomings in our models to help guide further improvements.”
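For readers who want to try the text input path mentioned in the statement, the call below is a minimal sketch using OpenAI’s official Python client; it assumes an API key is set in the OPENAI_API_KEY environment variable and that the account has been granted access to the "gpt-4" model, which was gated behind a waitlist at launch. The prompt text is purely illustrative.

```python
# Minimal sketch of a text-only GPT-4 request via OpenAI's Python client.
# Assumes: `pip install openai`, OPENAI_API_KEY set in the environment,
# and waitlist-granted access to the "gpt-4" model.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the key differences between GPT-4 and GPT-3.5."},
    ],
)
print(response.choices[0].message.content)
```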

The differences between GPT-4 and GPT-3.5 become evident as task complexity increases: the newer model has proven to be more creative and reliable. OpenAI ran various tests on the two models to understand their capabilities.

“We also evaluated GPT-4 on traditional benchmarks designed for machine learning models. GPT-4 considerably outperforms existing large language models, alongside most state-of-the-art (SOTA) models which may include benchmark-specific crafting or additional training protocols.”

Internally, OpenAI said it has already been putting GPT-4 to work across sales, programming, customer support, and content moderation.





Ibukun is a crypto/finance writer interested in passing along relevant information in plain language to reach all kinds of audiences.
Apart from writing, she likes to watch movies, cook, and explore restaurants in the city of Lagos, where she resides.
