SAN FRANCISCO, Jan 4 — Who said Apple isn’t investing in artificial intelligence? The US-based company is working on a multimodal language model that could soon provide a boost to many tools, from its intelligent assistant Siri to Photos. Called Ferret, it is the Cupertino firm’s answer to Google’s recently unveiled Gemini.

Artificial intelligence was undoubtedly a major theme in the tech realm in 2023. Yet one company, and by no means a marginal one, has been particularly discreet on the subject — Apple. As 2024 gets underway, the company has still made no official announcement, but specialist American media report that last autumn an open-source version of a multimodal language model, developed in collaboration with Cornell University in New York state, was quietly published on GitHub.

The model, currently referred to as Ferret, is built around the simultaneous detection and comprehension of text and visual elements: the idea is that the AI can identify the different parts of an image that need to be analyzed as part of any given query. For the moment, the Cupertino, California-based giant has made no concrete use of this technology, but it could soon be integrated into the Mac (macOS) and iPhone (iOS) environments for various practical purposes.
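For readers curious what "identifying elements within a query" means in practice, the minimal Python sketch below illustrates the general idea of a region-grounded query, in which a question about an image is tied to a specific bounding box. The `Region` helper and the prompt format are hypothetical illustrations of the concept, not Apple's actual Ferret interface.

```python
# Minimal sketch of a region-grounded multimodal query, the kind of task
# Ferret is reported to handle. Everything here (the Region helper, the
# prompt format) is a hypothetical illustration, not Apple's actual API.
from dataclasses import dataclass


@dataclass
class Region:
    """A rectangular image region in pixel coordinates."""
    x1: int
    y1: int
    x2: int
    y2: int


def grounded_prompt(question: str, region: Region) -> str:
    """Attach a specific image region to a text question, so the model
    knows which part of the picture the query refers to."""
    return (f"{question} "
            f"Refer to the region [{region.x1}, {region.y1}, "
            f"{region.x2}, {region.y2}].")


if __name__ == "__main__":
    # E.g., asking about an object in the top-left quarter of a 640x480 photo.
    prompt = grounded_prompt("What animal is shown here?", Region(0, 0, 320, 240))
    print(prompt)
    # A real pipeline would feed `prompt` together with the image to the
    # multimodal model, which could answer in text and, for grounding
    # tasks, return bounding boxes for the objects it mentions.
```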

Once it has been further refined, Ferret could, for example, help Siri answer more specific queries. It could also help recognize and sort photos, as well as create images or text. For the visually impaired, it could even describe images or landscapes.

Since Ferret is still in the development phase, it could be a while before Apple makes any official announcements. Some industry watchers hypothesize that the company may take advantage of its annual developer conference (WWDC), scheduled for next June, to unveil its plans for the technology. — ETX Studio