Meta Unveils Next-gen Custom AI Chip 🔥
PLUS: Karpathy trains a GPT-2 model in C, LLMs are getting persuasive, create Taylor Swift-style songs from a text prompt
Today’s top AI Highlights:
Meta unveils the next generation of its custom AI chip, MTIA v2
Anthropic measures how persuasive LLMs can be compared to humans
Andrej Karpathy builds an entire LLM in ~1,000 lines of C
Archetype AI’s foundation model to understand the real world in real time
Generate complete songs with lyrics from just one simple text prompt
& so much more!
Read time: 3 mins
Latest Developments 🌍
Meta Doubles Down on AI Performance with New AI Chip 🔋
Meta has unveiled the next generation of its custom-designed AI chip, the Meta Training and Inference Accelerator (MTIA) v2. This powerful chip is specifically built to handle the demands of Meta’s AI workloads, particularly the ranking and recommendation models that power user experiences across its platforms. MTIA v2 boasts significant performance improvements over its predecessor, offering greater efficiency and capabilities to support Meta’s growing AI ambitions.
Key Highlights:
Double the Power: Delivers more than double the compute and memory bandwidth compared to MTIA v1. This translates to faster processing and smoother experiences for users interacting with AI-driven features.
Tasks: The chip architecture is specifically optimized for ranking and recommendation models, ensuring users receive high-quality and relevant suggestions.
Processing: Featuring an 8x8 grid of processing elements, MTIA v2 achieves a 3.5x increase in dense compute performance and a remarkable 7x improvement in sparse compute performance.
Memory: To keep up with the increased processing power, MTIA v2 boasts tripled local PE storage, doubled on-chip SRAM with 3.5x the bandwidth, and doubled LPDDR5 capacity. This ensures the chip has ample memory resources to handle complex AI tasks efficiently.
Efficiency: Early results show a 3x performance improvement over the first-generation chip and a 1.5x improvement in performance per watt at the platform level. This means MTIA v2 delivers much more performance while using energy more efficiently.
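Taken together, those two platform numbers imply a third: if performance is up 3x but performance per watt is up only 1.5x, platform power has roughly doubled. Here is a minimal back-of-envelope sketch of that arithmetic (the 3x and 1.5x come from Meta's announcement; the implied 2x power figure is a deduction, not a number Meta has published):

```c
// Back-of-envelope check of Meta's platform-level MTIA v2 numbers.
// Inputs (3x perf, 1.5x perf/W) are from the announcement; the implied
// power ratio is deduced here and is not a figure Meta has published.
#include <stdio.h>

int main(void) {
    double perf_gain = 3.0;          // platform performance vs. first-gen MTIA
    double perf_per_watt_gain = 1.5; // platform performance-per-watt vs. first-gen

    // perf/W = perf / power  =>  power ratio = perf ratio / (perf/W ratio)
    double implied_power_ratio = perf_gain / perf_per_watt_gain;

    printf("Implied platform power vs. v1: %.1fx\n", implied_power_ratio); // prints 2.0x
    return 0;
}
```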
“Because we control the whole stack, we can achieve greater efficiency compared to commercially available GPUs.” ~ Meta
How Convincing Can AI Be? 🙇‍♂️
Have you ever wondered if AI could write something so convincing it could sway your opinion? Turns out that LLMs are getting pretty good at that. Anthropic has been looking into just how persuasive these models can be and how we can measure this persuasiveness. They’ve even used human volunteers to see if they can tell the difference between human and LLM-written text.
Key Highlights:
Data used:
Real-world data from Reddit: This included persuasive writing like product reviews and opinion pieces which provided a real-world benchmark for comparison.
Synthetic data generated by GPT-3: This allowed for controlled experiments where the persuasiveness of the text could be specifically adjusted.
Rating: Human evaluators judged the persuasiveness of various text samples on a scale of 1-7, without knowing whether each piece was written by a human or an LLM, which kept the evaluation unbiased (a toy aggregation of such blind ratings is sketched after this list).
Results: People had difficulty distinguishing between text written by humans and LLMs, especially for shorter pieces like product reviews, raising concerns about misuse for manipulation or spreading misinformation.
Persuasive but error-prone: LLMs were found to be slightly more persuasive, but they also made more factual errors, which can undercut their effectiveness.
It’s all about the data: LLMs’ persuasiveness is heavily influenced by the training data. Models trained on datasets with more persuasive content are likely to generate more persuasive text themselves. It's like learning by example!
Longer pieces are easier to spot: While shorter LLM-written texts were hard to distinguish, longer opinion pieces were rated as slightly less persuasive. This might be due to issues like repetitiveness or lack of coherence that become more apparent in longer formats.
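To make the evaluation setup concrete, here is a minimal sketch of how blind 1-7 ratings could be aggregated and compared by source. Every number in it is invented for illustration; this is not Anthropic's data or code:

```c
// Toy aggregation of blind 1-7 persuasiveness ratings, grouped by source.
// All ratings below are invented for illustration -- not Anthropic's data.
#include <stdio.h>

static double mean(const int *ratings, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += ratings[i];
    return sum / n;
}

int main(void) {
    // Hypothetical scores raters gave without knowing who wrote each sample.
    int human_ratings[] = {5, 6, 4, 5, 6};
    int llm_ratings[]   = {5, 5, 6, 6, 5};
    int n = 5;

    printf("Human-written mean: %.2f / 7\n", mean(human_ratings, n));
    printf("LLM-written mean:   %.2f / 7\n", mean(llm_ratings, n));
    return 0;
}
```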
LLM training in simple, raw C/CUDA 🤌
Training an LLM doesn’t necessarily need a massive software library like PyTorch. Andrej Karpathy’s new project “llm.c” lets you train LLMs directly in C with CUDA acceleration, making the process incredibly lightweight and efficient. Forget about downloading hundreds of megabytes of software - this project does it all in a single, clean C file, just like having a mini-LLM right on your computer that you can tinker with and learn from.
Key Highlights:
Starting Point: Karpathy started with GPT-2 as it was the first time the modern stack was put together. He replicated its training process in just about 1,000 lines of code, showing how streamlined and straightforward LLM training can be.
Education: Learn how LLMs work. By exploring the code, you can gain insights into attention, transformers, and the other essential building blocks of AI language models (a toy attention example in plain C follows this list).
Technical Details: llm.c reproduces the 124M-parameter GPT-2 configuration: 12 decoder layers, 12 attention heads, a vocabulary of 50,257 tokens, and a context window of 1,024 tokens.
Customize: You can train the model on your own datasets to create LLMs tailored to your specific interests or needs.
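To give a feel for what one of those building blocks looks like in plain C, here is a toy single-head causal attention forward pass. It is an illustrative sketch written for this newsletter, not code from llm.c, and the tiny shapes (4 tokens, head dimension 2) are chosen purely for readability:

```c
// Minimal single-head causal self-attention forward pass in plain C.
// Illustrative sketch only -- not code taken from llm.c; toy sizes for readability.
#include <math.h>
#include <stdio.h>

#define T 4  // sequence length (toy)
#define D 2  // head dimension (toy)

void attention(float Q[T][D], float K[T][D], float V[T][D], float out[T][D]) {
    float scale = 1.0f / sqrtf((float)D);
    for (int t = 0; t < T; t++) {
        float scores[T], maxv = -1e30f, sum = 0.0f;
        // causal mask: token t may only attend to positions 0..t
        for (int s = 0; s <= t; s++) {
            float dot = 0.0f;
            for (int d = 0; d < D; d++) dot += Q[t][d] * K[s][d];
            scores[s] = dot * scale;
            if (scores[s] > maxv) maxv = scores[s];
        }
        // numerically stable softmax over the unmasked scores
        for (int s = 0; s <= t; s++) { scores[s] = expf(scores[s] - maxv); sum += scores[s]; }
        // output = softmax-weighted sum of value vectors
        for (int d = 0; d < D; d++) {
            float acc = 0.0f;
            for (int s = 0; s <= t; s++) acc += (scores[s] / sum) * V[s][d];
            out[t][d] = acc;
        }
    }
}

int main(void) {
    float Q[T][D] = {{1, 0}, {0, 1}, {1, 1}, {0.5f, 0.5f}};
    float K[T][D] = {{1, 0}, {0, 1}, {1, 1}, {0.5f, 0.5f}};
    float V[T][D] = {{1, 2}, {3, 4}, {5, 6}, {7, 8}};
    float out[T][D];
    attention(Q, K, V, out);
    for (int t = 0; t < T; t++) printf("token %d: %.3f %.3f\n", t, out[t][0], out[t][1]);
    return 0;
}
```

Compile it with any C compiler (e.g. gcc attention.c -lm; the filename is up to you) and you can watch each token's output blend the value vectors of earlier positions.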
AI Model for Real-World Comprehension 🌆
LLMs today are good at language and reasoning tasks, and even at analyzing images, videos, and data. But they don’t understand the complex physical world around us. Archetype AI is building a foundation model called Newton, which it terms a Large Behavior Model, that understands the physical world. It fuses natural language with complex sensory data from radar, chemical, and environmental sensors to make sense of physical phenomena that we as humans can't directly perceive (a hypothetical sketch of such fused input follows the highlights below).
Key Highlights:
Learning Approach: Models should learn not just about individual static objects but also about the relationships and interactions between them, like how a car interacts with traffic lights, pedestrians, and other vehicles on the road.
Reasoning and Planning: Beyond understanding, Archetype AI wants the model to reason, plan, and make decisions based on what it learns: to think ahead, anticipate problems, and come up with solutions, similar to how we use our own understanding of the world to navigate daily life.
Real-Time Adaptability: The emphasis is on building models that can understand the world in real-time, continuously updating and adapting to new situations as they arise.
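Newton's actual interface isn't public, but the rough shape of the problem (pairing a natural-language question with raw, timestamped sensor readings) can at least be sketched as a data structure. Everything below, from the names to the sensor types, is a hypothetical illustration rather than Archetype AI's API:

```c
// Hypothetical shape of the input a "Large Behavior Model" might fuse:
// a natural-language question plus raw, timestamped sensor readings.
// All names and fields here are invented for illustration -- Newton's
// real interface is not public.
#include <stdio.h>

typedef enum { SENSOR_RADAR, SENSOR_CHEMICAL, SENSOR_ENVIRONMENTAL } SensorType;

typedef struct {
    SensorType type;
    double     timestamp; // seconds since the stream started
    double     value;     // normalized reading from the sensor
} SensorSample;

typedef struct {
    const char         *question;  // natural-language query about the scene
    const SensorSample *samples;   // fused multi-sensor stream
    int                 n_samples;
} BehaviorQuery;

int main(void) {
    SensorSample stream[] = {
        { SENSOR_RADAR,         0.0, 0.82 },
        { SENSOR_ENVIRONMENTAL, 0.5, 0.31 },
        { SENSOR_CHEMICAL,      1.0, 0.07 },
    };
    BehaviorQuery q = { "Is anything moving toward the loading dock?", stream, 3 };
    printf("Query: \"%s\" over %d sensor samples\n", q.question, q.n_samples);
    return 0;
}
```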
😍 Enjoying it so far? Share it with your friends!
Tools of the Trade ⚒️
Udio: Create entire songs that have a proper intro and outro, with lyrics that have verses and hooks, in any style or genre, in any language, for any mood, all with just a simple text prompt! You can also give it lyrics yourself, tell it a famous singer’s style, and extend your clips forward and backward.
Double: Your AI copilot with ChatGPT-style chat with code context, tab-autocomplete, and custom key bindings. It addresses issues with GitHub Copilot like improper bracket handling, interruptive bad completions, and missing library auto-imports, while improving on variable naming, multi-cursor support, and model selection. Currently, the latest GPT-4 Turbo, Claude 3 Opus, and DBRX Instruct are the available models to choose from.
Captions: An AI-powered creative studio that simplifies the video production process, letting you generate studio-grade videos effortlessly. It provides tools for enhancing speech and audio in videos, automatic subtitles, eye contact correction, video trimming, compression, and more to enhance storytelling and video quality.
Hot Takes 🔥
The best AI companies are going to come from NYC — not SF. (with a few notable exceptions) ~ Matt Shumer
Actually, even AWS is just a bunch of guys in a warehouse somewhere overseas doing all the computing manually with a piece of paper and a slide rule. ~ Bojan Tunguz
Whoever can make a product that can RLHF a custom model for each user's inbox and rank/summarize emails in terms of a learned user-specific importance estimator function will be a billionaire ~ Beff Jezos — e/acc
Meme of the Day 🤡
How Mistral AI launches its models
That’s all for today! See you tomorrow with more such AI-filled content.
Real-time AI Updates 🚨
⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!
PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!