Last month, OpenAI unveiled GPT-4 with vision (GPT-4V), allowing the chatbot to read and answer questions about images. One of the many ways users are putting this new functionality to work is decoding redacted government documents on UFO sightings. "ChatGPT-4V Multimodal decodes redacted government document on UFO sighting released by NASA," one ecstatic tweet read. "Maybe the truth doesn't exist; it's here in GPT-4V."

Deciphering the reports is a natural fit: filling in the gaps in a string of text is essentially what LLMs do. To put GPT-4V's capabilities to the test, the user did the next best thing and had it guess parts of a text he had censored himself. "Nearly 100% intent accuracy," he reported. Of course, it's hard to verify whether its guesses about what is otherwise obscured are accurate; it's not like we can ask the CIA how successful the model was in peering through the black lines.

Other ways users are applying GPT-4V include: deciphering a doctor's handwriting; interpreting medical images such as X-rays and receiving analysis and information on specific medical cases; estimating the nutritional content of meals or food products; helping interior-design enthusiasts by offering suggestions based on personal preferences and photos of living spaces; and providing technical analysis of stocks and cryptocurrencies from chart screenshots.
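The gap-filling idea behind the redaction experiments can be illustrated with a deliberately tiny sketch: a bigram frequency model that guesses a censored word from the word before it. This is not how GPT-4V works internally (it is a far larger model with vision input); the toy corpus and the `[REDACTED]` marker here are invented for illustration only.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for a language model's training data
# (entirely made up for this illustration).
corpus = (
    "the object was observed over the base "
    "the object was tracked by radar "
    "the object was observed over the desert"
).split()

# Count bigram frequencies: word -> Counter of words that follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def fill_redaction(tokens):
    """Replace each [REDACTED] token with the statistically most
    likely word given the word immediately before it."""
    filled = []
    for tok in tokens:
        if tok == "[REDACTED]" and filled and bigrams[filled[-1]]:
            tok = bigrams[filled[-1]].most_common(1)[0][0]
        filled.append(tok)
    return filled

print(fill_redaction("the object was [REDACTED] by radar".split()))
```

Because "observed" follows "was" more often than any other word in the toy corpus, the model fills the blank with "observed"; a real LLM does the same kind of statistical guessing, just conditioned on vastly more context, which is why its reconstructions of blacked-out lines can sound plausible without being verifiable.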