It was yet another thrilling week in AI, with advancements that further push the limits of what the technology can do.
Here are 10 AI breakthroughs that you can't afford to miss 🧵👇
Forget Adobe Photoshop, Canva Doubles Down on AI 🧑‍🎨
Canva has introduced a range of new features to make professional design accessible to everyone, regardless of experience. These updates, revealed at Canva Create 2024, include new products, enhanced editing tools, and resources for teams. The most significant update is the integration of AI, making content creation faster, easier, and more intuitive.
Magic Media: Generate images and videos from text prompts for unique icons, stickers, and illustrations.
Magic Design: Turn ideas or media uploads into professional social posts, presentations, and videos.
AI Photo Editor: Use Magic Grab to select and manipulate photo elements easily and create templates with Mockups.
Video Editor: Generate short clips from longer videos, and remove background noise with Enhance Voice.
Magic Write: Generate text in your unique tone for headlines, script rewrites, and summaries.
Multilingual AI Models by Cohere for AI
Cohere for AI has launched Aya 23, a new set of multilingual language models supporting 23 languages, prioritizing depth and per-language performance over sheer breadth of coverage. Aya 23 comes in two versions: an 8B model optimized for efficiency and accessibility, and a 35B model achieving top results across benchmarks. The Aya Dataset features 513 million instances of prompts and completions, incorporating instruction-style templates from fluent speakers.
Aya 23 models outperform previous versions and other models, with the 35B model showing a 41.6% increase in multilingual MMLU and a 6.6x improvement in multilingual mathematical reasoning.
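If you want to kick the tires yourself, here's a minimal sketch of loading the 8B model with Hugging Face transformers. The checkpoint ID and chat-template usage are assumptions on my part, so double-check the official model card (and license terms) before running it.

```python
# Minimal sketch: generating with Aya 23 (8B) via Hugging Face transformers.
# The checkpoint ID below is an assumption -- verify the exact name and license
# on Cohere for AI's Hugging Face page before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/aya-23-8B"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Aya 23 is instruction-tuned, so format the request with the model's chat template.
messages = [{"role": "user", "content": "Translate into Hindi: The weather is lovely today."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```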
xAI Raises $6 Billion to Build AGI 🔥
Elon Musk's xAI is making big moves. It just secured a massive $6 billion Series B investment at a pre-money valuation of $18 billion, one of the biggest funding rounds ever in the AI space. The cash comes from a group of top investors including Valor Equity Partners, Andreessen Horowitz, and Sequoia Capital. But the real news: Musk plans to build a supercomputer that could rival even the most powerful ones out there.
This supercomputer would be at least four times the size of today's biggest GPU clusters, powered by Nvidia H100 GPUs, and up and running by the end of 2025.
Apple with OpenAI is a Done Deal 🤝
Apple is expected to release new AI features at its upcoming WWDC, and the most anticipated is its partnership with OpenAI, whose models would power generative AI features in the upcoming iOS 18. The deal has reportedly been finalized at last, though the details are still under wraps.
But Apple isn't putting all its AI eggs in one basket: it's also working with Google to include the Gemini model as an option in iOS 18. You might have a choice of AI assistants on your iPhone soon!
GPT-4 Outperforms Human Financial Analysts 👨‍💼
A study from the University of Chicago shows that GPT-4 outperforms human financial analysts in analyzing financial statements and predicting earnings changes. Using Chain-of-Thought prompting, GPT-4 achieved 60.35% accuracy in predicting the direction of earnings changes, compared to 52.71% for human analysts. GPT-4 was particularly strong with smaller, less profitable companies, while human analysts provided broader contextual insights. Trading strategies based on GPT-4's predictions resulted in better risk-adjusted returns.
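To make the setup concrete, here's a hedged sketch of what a Chain-of-Thought prompt for earnings prediction might look like with the OpenAI Python SDK. The prompt wording, model name, and output convention are my own illustrative choices, not the paper's exact protocol.

```python
# Hedged sketch of Chain-of-Thought prompting for earnings-direction prediction.
# The prompt, model name, and output convention are illustrative assumptions,
# not the study's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COT_PROMPT = """You are a financial analyst. Below are a company's anonymized,
standardized balance sheet and income statement for the last few years.

{statements}

Work step by step:
1. Note the trends in key line items.
2. Compute useful ratios (margins, liquidity, leverage, turnover).
3. Reason about what these imply for next year's earnings.
Finish with exactly one word on its own line: INCREASE or DECREASE."""

def predict_earnings_direction(statements: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # the study used a GPT-4 variant; the exact snapshot may differ
        messages=[{"role": "user", "content": COT_PROMPT.format(statements=statements)}],
        temperature=0,
    )
    # Return only the final INCREASE/DECREASE verdict.
    return response.choices[0].message.content.strip().splitlines()[-1]

# Usage: print(predict_earnings_direction(open("statements.txt").read()))
```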
Reproduce GPT-2 (124M) in 90 Minutes for $20 🤯
90 minutes and $20 to train an entire LLM! Andrej Karpathy's llm.c repo on GitHub reproduces the GPT-2 (124M) model in a mind-blowing 90 minutes for a mere $20. It does this in about 4,000 lines of C/CUDA code, optimized for high performance with around 60% Model FLOPs Utilization. Training runs on a single 8x A100 80GB SXM node, making it cost-effective and efficient. The model, trained on 10 billion tokens from the FineWeb dataset, achieves a HellaSwag accuracy of 29.9, surpassing the original GPT-2.
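As a rough sanity check on that ~60% MFU figure, here's a back-of-the-envelope calculation using the common 6·N·D approximation for transformer training FLOPs and the numbers quoted above. The A100 peak throughput is NVIDIA's published dense bf16 figure, and ignoring attention FLOPs is why this estimate lands a touch under 60%.

```python
# Back-of-the-envelope Model FLOPs Utilization (MFU) check for the llm.c run,
# using the common "6 FLOPs per parameter per token" approximation.
# All inputs come from the figures quoted above except the A100 peak,
# which is NVIDIA's published dense bf16 number (~312 TFLOP/s).
PARAMS = 124e6                # GPT-2 (124M) parameters
TOKENS = 10e9                 # FineWeb tokens processed
WALL_CLOCK_S = 90 * 60        # the quoted 90-minute run
NUM_GPUS = 8                  # one 8x A100 80GB SXM node
PEAK_FLOPS_PER_GPU = 312e12   # A100 dense bf16 peak

tokens_per_sec = TOKENS / WALL_CLOCK_S
achieved_flops = 6 * PARAMS * tokens_per_sec
mfu = achieved_flops / (NUM_GPUS * PEAK_FLOPS_PER_GPU)
print(f"{tokens_per_sec:,.0f} tokens/s -> MFU ~ {mfu:.0%}")  # roughly 55%, in the right ballpark
```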
First-ever Code Generation Model by Mistral AI 👩‍💻
Mistral AI has released Codestral, a 22-billion-parameter code model designed for generating code in over 80 programming languages, including Python, Java, Swift, and Fortran. It features a fill-in-the-middle mechanism to complete partial code and write tests, saving developers time and reducing coding errors.
Codestral outperforms other coding models like Llama 3 70B and DeepSeek Coder 33B across all coding benchmarks. It is available on Hugging Face and integrated into tools like VSCode and JetBrains, with a free 8-week beta period via API.
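If you're curious what fill-in-the-middle looks like in practice, here's a hedged sketch of calling the beta API over plain HTTP. The endpoint path, payload fields, and model alias are assumptions based on Mistral's docs at the time of writing, so check the current API reference before relying on it.

```python
# Hedged sketch of a fill-in-the-middle (FIM) request to Codestral over HTTP.
# The endpoint path, field names, and model alias are assumptions -- consult
# Mistral's current API reference for the real contract.
import os
import requests

payload = {
    "model": "codestral-latest",                            # assumed model alias
    "prompt": "def is_palindrome(s: str) -> bool:\n    ",   # code before the gap
    "suffix": "\n\nassert is_palindrome('level')",          # code after the gap
    "max_tokens": 64,
    "temperature": 0,
}
resp = requests.post(
    "https://api.mistral.ai/v1/fim/completions",            # assumed FIM endpoint
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # the generated "middle" lives in this response body
```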
The Dark Secrets Behind Sam Altman's Ouster 😵‍💫
Since Sam Altman's ouster from OpenAI in November 2023, the company's credibility has taken a hit. On The TED AI Show, Helen Toner, who had to resign from OpenAI's board, revealed what happened inside the company when Altman was fired. She said Altman had withheld information, misled the board, and failed to disclose his financial interests, making it hard for the board to trust him. She also mentioned reports of "psychological abuse" from executives, describing a toxic work environment.
Apple's Secure Cloud AI with Black Box Processing
Apple has been pushing on-device AI for years, but now they're venturing into the cloud. To address privacy concerns, they're planning to use "confidential computing" techniques to keep user data private even while processing it on their servers. This "black box processing" approach keeps data encrypted throughout the entire process, meaning not even Apple can access it. They've been developing this system for three years to ensure strong privacy protections, even against subpoenas or government requests.
Private and Expert-Driven LLM Evaluations by Scale AI 🤫
Scale AI has introduced SEAL Leaderboards to provide a more trustworthy evaluation process for leading AI models like GPT-4, Gemini, Claude, and Mistral. These leaderboards assess models on coding, math, instruction following, and multilingual capabilities while reducing the risk of overfitting by using private evaluation datasets.
Notably, the GSM1k benchmark used for math evaluations mirrors the popular GSM8k without data contamination, revealing potential overfitting issues in some models. Expert-driven and continuously updated, SEAL Leaderboards offer a secure and objective measure of AI model capabilities.
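The core idea is easy to sketch: score a model on the public benchmark and on a private mirror of it, and treat a large gap as a contamination red flag. Here's a toy illustration in Python; the model interface and exact-match scoring are placeholders, not Scale AI's actual harness.

```python
# Toy illustration of why private eval sets matter: a model that memorized a public
# benchmark will show a large accuracy gap against a private mirror of it.
# The model interface and exact-match scoring are placeholders, not SEAL's harness.
from typing import Callable

Dataset = list[tuple[str, str]]  # (question, reference answer) pairs

def accuracy(model: Callable[[str], str], dataset: Dataset) -> float:
    """Fraction of questions where the model's answer matches the reference exactly."""
    correct = sum(model(q).strip() == a.strip() for q, a in dataset)
    return correct / len(dataset)

def contamination_gap(model: Callable[[str], str], public: Dataset, private: Dataset) -> float:
    """Positive gap = the model does suspiciously better on the public set."""
    return accuracy(model, public) - accuracy(model, private)

# Usage sketch (datasets here are hypothetical samples):
# gap = contamination_gap(my_model, gsm8k_sample, private_gsm1k_style_sample)
# print(f"public-vs-private accuracy gap: {gap:.1%}")
```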
Which of the above AI developments are you most excited about, and why?
Tell us in the comments below ⬇️
That's all for today!
Stay tuned for another week of innovation and discovery as AI continues to evolve at a staggering pace. Don't miss out on the developments: join us next week for more insights into the AI revolution!
Click on the subscribe button and be part of the future, today!
📣 Spread the Word: Think your friends and colleagues should be in the know? Click the "Share" button and let them join this exciting adventure into the world of AI. Sharing knowledge is the first step towards innovation!
Stay Connected: Follow us for AI updates, sneak peeks, and more. Your journey into the future of AI starts here!
Shubham Saboo - Twitter | LinkedIn • Unwind AI - Twitter | LinkedIn