OpenAI may make GPT-4, the next iteration of its large language model (LLM), available as soon as next week. OpenAI also created the popular AI models ChatGPT and DALL-E.
While ChatGPT has so far only been able to respond to users’ questions in text form, the company’s upcoming language model may be able to produce AI-powered videos and other types of content.
Microsoft Germany’s Chief Technology Officer (CTO), Andreas Braun, told the German news website Heise, “We will introduce GPT-4 next week… We will have multimodal models that provide entirely new possibilities, like videos.”
A multimodal language model can process and generate information across several media types, meaning that developments based on GPT-4 could have the ability to reply to users’ queries in the form of images, music and video.
What do we know so far about GPT-4?
Apart from its multimodal abilities, GPT-4 could also address ChatGPT’s problem of responding slowly to user queries. The next-generation language model is expected to deliver answers more quickly and in a more human-like manner.
Reportedly, OpenAI could also be working on a mobile app powered by GPT-4. Notably, ChatGPT is currently a web-based service and does not yet have a mobile app.