Open-sourced Version of GPT-4 Vision 👁️
PLUS: AI Watermarking by Meta, Low Resource Languages Jailbreak GPT-4, OpenAI's AI Chips
Today’s top AI Highlights:
LLaVA-1.5: Enhanced Vision-Language AI
Stable Signature by Meta: Watermarking AI-Generated Images
OpenAI to Develop its own AI Chips
English to Low-Resource Languages for Jailbreaking GPT-4
& so much more!
Read time: 3 mins
Latest Developments 🌍
The Open-sourced Version of GPT-4 Vision 👁️
Researchers introduced LLaVA, a large multimodal model that connects CLIP's visual encoder to LLaMA for general-purpose visual and language understanding, instruction-tuned on GPT-4-generated data. An enhanced version, LLaVA-1.5, has now been released and open-sourced.
Key Highlights:
The new model makes simple modifications to LLaVA, including an MLP cross-modal connector (sketched after these highlights) and the integration of academic-task-oriented VQA data.
In contrast to InstructBLIP or Qwen-VL, which are trained on hundreds of millions of image-text pairs, LLaVA used merely 600K image-text pairs. LLaVA-1.5's training is significantly streamlined, reaching its final state in about one day on a single 8-A100 node.
LLaVA-1.5 performs comparably to GPT-4V, and with Vicuna 7B and 13B as its language backbone it outperforms other models on 11 out of 12 benchmarks, despite using far smaller pretraining and instruction-tuning datasets.
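For intuition, here is a minimal PyTorch sketch of the kind of MLP cross-modal connector described above. The module name, dimensions, and token counts are illustrative assumptions, not code from the LLaVA repository.

```python
import torch
import torch.nn as nn

# Minimal sketch of an MLP cross-modal connector in the spirit of LLaVA-1.5
# (names and dimensions are illustrative, not taken from the LLaVA codebase):
# patch features from the CLIP vision encoder are projected into the language
# model's embedding space by a two-layer MLP, then fed to the LLM alongside
# the embedded instruction tokens.

class MLPProjector(nn.Module):
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim) from the CLIP encoder
        return self.proj(image_features)  # (batch, num_patches, llm_dim)

# Usage: project CLIP patch features and prepend them to the text embeddings
clip_features = torch.randn(1, 576, 1024)       # e.g. 24x24 patches from a CLIP ViT-L encoder
visual_tokens = MLPProjector()(clip_features)   # (1, 576, 4096), ready for the LLM
text_embeds = torch.randn(1, 32, 4096)          # embedded instruction tokens
llm_input = torch.cat([visual_tokens, text_embeds], dim=1)
```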
No more AI-Generated Deception 🕵️
FAIR and Inria have introduced Stable Signature, a new invisible watermarking technique to identify images generated by generative AI models. This watermark can be detected by algorithms, even if the images are edited or manipulated.
Key Highlights:
The watermarking process involves two CNNs: an encoder that embeds a watermark into an image and an extractor that recovers it. The decoder of the generative model is fine-tuned to embed a fixed signature into every image it generates.
Stable Signature is designed to remain robust even if images are altered through cropping, compression, or color changes. The watermark can still be traced back to the original generative model.
The method can detect the origin of an image generated from a text prompt, even if it is cropped to 10% of its original size, with 90% accuracy and a false positive rate below 10⁻⁶ (see the detection sketch below).
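As a rough picture of how detection could work, here is a hedged Python sketch of a Stable-Signature-style check: a pre-trained extractor network recovers a k-bit message from an image, and the image is attributed to a model if enough bits match that model's fixed signature. The function name, shapes, and threshold logic are assumptions for illustration, not Meta's actual API.

```python
import torch

def detect(image: torch.Tensor, extractor: torch.nn.Module,
           signature: torch.Tensor, threshold_bits: int) -> bool:
    """Stable-Signature-style detection sketch (illustrative, not Meta's API).

    image: (1, 3, H, W) tensor in [0, 1]
    extractor: watermark extractor CNN returning k soft bit predictions
    signature: (k,) tensor of 0/1 bits fixed for one generative model
    threshold_bits: minimum number of matching bits to claim a detection
    """
    with torch.no_grad():
        logits = extractor(image).squeeze(0)       # (k,) soft bit predictions
    decoded = (logits > 0).long()                  # hard 0/1 decisions
    matches = (decoded == signature.long()).sum().item()
    # threshold_bits is chosen so an unwatermarked image passes only with
    # probability ~1e-6, matching the false positive rate quoted above
    return matches >= threshold_bits
```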
Low-Resource Languages to Jailbreak GPT-4 🔓
Researchers have released a report revealing that translating unsafe English inputs into low-resource languages can bypass safeguards in LLMs like GPT-4, raising concerns about the effectiveness of AI safety training.
Key Highlights:
The study exposes the unequal valuation and treatment of languages in AI safety training, with LLMs' ability to defend against attacks varying significantly between high-resource languages and low-resource ones like Zulu or Scots Gaelic.
On the AdvBench benchmark, GPT-4 engages with unsafe translated inputs and provides actionable items for harmful goals 79% of the time, comparable to or even surpassing SOTA jailbreaking attacks.
OpenAI’s Own AI Chips 🔬
OpenAI is reportedly exploring the development of its own AI chips amid a chip shortage for AI model training. Currently relying on GPUs, the company is now considering various strategies including acquisition or internal chip design.
This move follows similar efforts by tech giants like Google, Amazon, and Microsoft but entails significant challenges and costs.
Tools of the Trade ⚒️
AgentHub: A no-code visual builder to automate even the most niche or complex flows, just drag, drop, and connect modular components onto a canvas.
FL0: Simplifies backend application deployment with automatic scaling, GitHub integration, AI-powered debugging, and supports various databases.
Voxify: AI voice generation for creating high-quality, multilingual voice-overs with customizable options and emotion-rich features.
GradientJ: A platform for building and managing LLM applications, with GPT-4 powered API deployment, tools for creating prompts, and performance monitoring.
TalkNotes: Transforms spoken thoughts into organized notes with AI transcription that can be customized as per desired styles.
😍 Enjoying so far, TWEET NOW to share with your friends!
Hot Takes 🔥
2026 is gonna be one heck of a year. ~ Bojan Tunguz
Yeah, so open source AI is important & will be even more so in the coming years. Not your models, not your mind ✊ ~ Emad Mostaque
Meme of the Day 🤡
That’s all for today!
See you tomorrow with more such AI-filled content. Don’t forget to subscribe and give your feedback below 👇
Real-time AI Updates 🚨
⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!!
PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!