Google’s AI Overview Goes Rogue
PLUS: OpenAI partners with News Corp., Meta’s mixed-modal model
Today’s top AI Highlights:
OpenAI partners with News Corp. to bring reliable information to people
Google is in controversy (again) as its AI Overview gives misleading responses
Meta’s mixed-modal AI model that blends text and images seamlessly
Nvidia CEO says they are planning to launch new AI chips every year
Hide Google’s AI Overview and ads with this Chrome extension
& so much more!
Read time: 3 mins
Latest Developments 🌍
ChatGPT Answers Will Come from Wall Street Journal 📰
OpenAI is partnering with News Corp., the company behind big names like the Wall Street Journal, New York Post, Barron’s, and MarketWatch, to surface News Corp.’s content in answers to OpenAI users’ questions. The aim is to give users more reliable information and news from trusted sources while using OpenAI’s products.
Plus, News Corp. will “share journalistic expertise to help ensure the highest journalism standards are present across OpenAI’s offering.”
The internet is stuffed with garbage content, and it’s easy to come across misleading and false information, so getting reliable, verified responses matters more than ever. Take Google’s AI Overview, which summarizes top search results and presents them in response to user queries. Since there is no system to check the reliability of those results, Google has again landed in controversy as AI Overview surfaces inaccurate and absurd answers.
Simply saying that the search results themselves are filled with misinformation and that Google’s LLM is just summarizing them, not hallucinating, doesn’t absolve Google of its responsibility. AI should be leveraged to cut through the noise and deliver accurate information.
Mixed-Modal Early-Fusion Foundation Models
Current multimodal AI models struggle to truly blend information from different sources like text and images. They often treat them separately, which limits their ability to understand and create complex content that seamlessly combines both. Meta’s research team has developed Chameleon, a new AI model designed to overcome these limitations. Chameleon can understand and generate text and images together, interleaved in any order.
Key Highlights:
Unified Understanding: Chameleon can analyze and generate content that freely mixes images and text. It can answer questions about images, generate captions, write stories illustrated with images, and even create entirely new visual concepts based on textual descriptions.
Performance: Chameleon outperforms SOTA models like GPT-4V and Gemini on several tasks, including image captioning, visual QA, and text-based reasoning. Notably, it achieved these results as a single unified model, whereas the competing models had their outputs augmented with DALL-E 3-generated images for the comparison.
A new foundation model: Most existing models are bottlenecked by their “late fusion” approach – they process text and images separately and then combine the information later. This is like trying to understand a story by reading the text in one room and looking at the pictures in another. Chameleon’s early fusion architecture breaks down the walls between these modalities from the very beginning. It allows a single transformer model to learn representations that inherently blend information from both sources.
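To make the early-fusion idea concrete, here is a minimal, hypothetical sketch (not Meta’s code): images are assumed to be pre-quantized into discrete codes by a VQ-style image tokenizer, those codes are offset into the same vocabulary as the text tokens, and a single decoder-only transformer models the mixed sequence with one embedding table and one output head.

```python
# Illustrative early-fusion sketch, NOT Chameleon's actual implementation.
# Assumption: an image tokenizer has already turned each image into discrete
# codes; text uses a BPE vocabulary. Both live in ONE shared token space.

import torch
import torch.nn as nn

TEXT_VOCAB = 32_000    # hypothetical BPE vocabulary size
IMAGE_VOCAB = 8_192    # hypothetical VQ codebook size
VOCAB = TEXT_VOCAB + IMAGE_VOCAB  # unified vocabulary: image codes sit after text ids

class EarlyFusionLM(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_layers=6, max_len=4096):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)   # one embedding table for both modalities
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, VOCAB)    # one head predicts the next token, text OR image

    def forward(self, tokens):  # tokens: (batch, seq) of unified ids
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1)).to(tokens.device)
        x = self.blocks(x, mask=causal)             # causal self-attention over the mixed sequence
        return self.lm_head(x)

# A caption followed by its image codes, interleaved in a single sequence:
text_ids = torch.randint(0, TEXT_VOCAB, (1, 16))                  # stand-in for BPE-tokenized text
image_ids = torch.randint(0, IMAGE_VOCAB, (1, 32)) + TEXT_VOCAB   # offset image codes into shared space
mixed = torch.cat([text_ids, image_ids], dim=1)
logits = EarlyFusionLM()(mixed)                                   # (1, 48, VOCAB)
```

The illustrative point is that a caption and its image codes share one causal sequence, so the same attention layers can condition on either modality when predicting the next token, instead of fusing two separately processed streams at the end.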
Key Takeaways from Nvidia’s Earnings Call 🌐
Nvidia posted blockbuster fiscal Q1 2025 earnings this Wednesday, exceeding all expectations and estimates. The company reported record revenue of over $26 billion, fueled by insatiable demand for its AI-powering GPUs. The earnings call gave us a glimpse into Nvidia’s ambitious roadmap, with a clear emphasis on expanding its full-stack solutions and catering to the burgeoning demands of AI factories and a future brimming with AI applications.
Here are the key takeaways:
Automotive Sector Poised for Growth: The automotive sector is set to become Nvidia’s largest enterprise vertical within the Data Center market this year, as autonomous vehicle companies invest heavily in AI infrastructure to train and build next-gen vehicles. In another interview, Jensen Huang said that someday every single car will have to have autonomous capability.
No Brakes on Demand: Even with the introduction of the Blackwell architecture, demand for Hopper GPUs continues to surge. This highlights the insatiable appetite for AI computing power, and Nvidia is struggling to keep supply up with demand.
Generative AI is Fueling an Inference Explosion: The rise of generative AI, with its complex inference requirements to create text, images, and more, is driving incredible growth in demand for Nvidia’s products. Coupled with the emergence of 15,000-20,000 AI startups all needing processing power, this indicates we’re only at the beginning of a massive wave of AI adoption.
AI Factories: Large-scale AI deployments, what Nvidia calls “AI factories,” are being built by major players like Meta and Tesla. These are massive clusters of GPUs dedicated to training and deploying cutting-edge AI models.
Sovereign AI is on the Rise: Countries around the world are investing heavily in building their own AI infrastructure, and Nvidia sees this as a significant growth opportunity.
Blackwell is Shipping Soon: The Blackwell platform, boasting a massive performance boost over Hopper, is already in production. Shipments begin in Q2, with major customers like Amazon, Google, Meta, and Microsoft expected to have systems up and running by Q4.
Second Quarter Outlook: Nvidia projects Q2 revenue of $28 billion, indicating continued strong demand across all market platforms.
The Race for AI Supremacy is Heating Up: Cloud providers are vying for a piece of the AI pie, with companies like Google and Meta developing their own AI chips. But Nvidia remains confident in its full-stack approach.
A New Chip, Every Year: Nvidia is on an aggressive release schedule, planning to launch a brand new AI chip architecture every single year. This ‘one-year rhythm’ means customers can expect a constant stream of performance improvements and new capabilities.
😍 Enjoying so far, share it with your friends!
Tools of the Trade ⚒️
Unify: Dynamically routes each prompt to the best LLM based on your desired balance of quality, speed, and cost. It helps you efficiently manage different LLMs, ensuring optimal performance and cost-effectiveness for your applications (a rough sketch of this kind of routing follows the list below).
Bye Bye, Google AI: Turn off Google AI Overviews, ads, and discussions with this Chrome extension. Since there is no Chrome setting to disable them, the extension hides those areas of the page with CSS by setting them to display: none.
MusicGPT: Run the latest music generation AI models locally and performantly, on any platform, without installing heavy dependencies like Python or machine learning frameworks. Right now it only supports MusicGen by Meta.
Awesome LLM Apps: Build awesome LLM apps using RAG to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos through simple text prompts. These apps let you retrieve information, engage in chat, and extract insights directly from content on these platforms.
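As promised above, here is a hedged sketch of the kind of quality/speed/cost routing a tool like Unify performs. The model names and scores below are made up for illustration, and this is not Unify’s actual API:

```python
# Hypothetical prompt router: pick the model whose weighted quality/speed/cost
# trade-off best matches what you care about for a given workload.
from dataclasses import dataclass

@dataclass
class ModelStats:
    name: str
    quality: float  # normalized 0-1, higher is better
    speed: float    # normalized 0-1, higher is faster
    cost: float     # normalized 0-1, higher is MORE expensive

# Made-up candidates and scores, purely illustrative.
CANDIDATES = [
    ModelStats("large-model", quality=0.95, speed=0.40, cost=0.90),
    ModelStats("mid-model",   quality=0.80, speed=0.70, cost=0.40),
    ModelStats("small-model", quality=0.60, speed=0.95, cost=0.10),
]

def route(w_quality: float, w_speed: float, w_cost: float) -> ModelStats:
    """Return the candidate with the best weighted trade-off (cost counts against)."""
    score = lambda m: w_quality * m.quality + w_speed * m.speed - w_cost * m.cost
    return max(CANDIDATES, key=score)

# Favor quality for a hard reasoning prompt, cost for a bulk summarization job:
print(route(w_quality=0.8, w_speed=0.1, w_cost=0.1).name)  # -> large-model with these weights
print(route(w_quality=0.2, w_speed=0.2, w_cost=0.6).name)  # -> small-model with these weights
```

The design choice to expose the weights to the caller is what lets one routing layer serve both latency-sensitive and quality-sensitive applications without changing any application code.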
Hot Takes 🔥
I would urge parents to limit the amount of social media that children can see because they're being programmed by a dopamine-maximizing AI. ~ Elon Musk
You could argue that the point of programming is to produce bugs. Bugs show you where your model of a problem doesn't match the problem, and in a highly motivating form. ~ Paul Graham
Meme of the Day 🤡
LLMs being released in 2024 🔥
That’s all for today! See you tomorrow with more such AI-filled content.
Real-time AI Updates 🚨
⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!
PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!