Future is Long Context LLMs
PLUS: FBI Issues Warning on Hackers using AI, Cryptography for AI-Generated Content
Hey there 👋
Today we’re exploring the two extremes of generative AI: on one side, News Corp Australia is using the technology to generate thousands of articles weekly; on the other, the fear of deepfakes is fueling calls to watermark AI-generated content. And it seems the tech giants have found a solution! Keep scrolling to learn more.
This issue covers:
Latest Developments 🌍
Tools of the Trade ⚒️
Hot Takes 🔥
AI Meme of the Day 🤡
Read time: 3 mins
Latest Developments 🌍
Long Context LLMs for the Win 🦒
Abacus AI launched Giraffe, an open-source LLM with a long context window, fine-tuned from Llama-1. Forget the typical 2K context length constraint of most models: Giraffe extends this to 4K and 16K, opening new realms of AI application. Explore their models, evaluation data, and experiments on GitHub or Hugging Face.
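One family of techniques behind this kind of context extension is position interpolation: positions beyond the trained context are linearly compressed back into the range the model saw during training. Here's a minimal NumPy sketch of the idea for RoPE-style rotary angles (an illustration of the general technique, not Abacus AI's actual code):

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0, scale=1.0):
    """Rotary-embedding angles for each (position, frequency) pair.
    A scale < 1 compresses positions (linear position interpolation),
    so a model trained on a short context can attend over a longer one."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions * scale, inv_freq)

# Train-time context of 2048 tokens; to run at 16384 tokens,
# interpolate positions back into the trained range.
long_positions = np.arange(16384)
angles = rope_angles(long_positions, dim=64, scale=2048 / 16384)

# Every interpolated angle stays inside the range seen at train time,
# so the attention pattern degrades far less than naive extrapolation.
print(angles.shape)
```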
AI and the Future of Cybercrime 🕵️
The FBI has issued a warning about a growing trend of hackers exploiting AI tools, like ChatGPT and Meta's Llama 2, to launch complex phishing and malware attacks. These AI programs can even generate deepfake content, adding a new level of threat. The FBI's concerns highlight an urgent need for watermarking technology to distinguish real data from AI-generated material. With AI more accessible than ever, the race is on to secure the digital world.
AI News Corp 📰
Imagine a busy newsroom... but instead of journalists hunched over keyboards, AI bots are churning out thousands of articles. News Corp Australia is producing 3,000 articles weekly using generative AI, focusing on local news like weather, traffic, and fuel prices. The entire operation is run by a four-person team named Data Local, and all AI-generated content is reviewed by human journalists.
Cryptography to Detect AI-Generated Content 🔑
Big tech companies and the White House are pushing for AI-generated content to carry labels disclosing its origins, but it's tricky to identify such material accurately. Enter C2PA, an open-source protocol backed by tech giants like Adobe, Intel, Microsoft, and more. Like a digital "nutrition label," C2PA cryptographically binds origin details to a piece of content, enabling transparency in a world increasingly impacted by AI.
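The core idea is simple: hash the content, attach provenance claims, and sign the whole bundle so any tampering is detectable. Here's a toy Python sketch of that pattern (illustrative only: real C2PA manifests use X.509 certificates and COSE signatures, not the shared-secret HMAC used here, and all names are hypothetical):

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # hypothetical key, stand-in for a real cert

def make_manifest(content: bytes, claims: dict) -> dict:
    """Bind provenance claims to content by signing the content hash."""
    payload = {"content_sha256": hashlib.sha256(content).hexdigest(), **claims}
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any edit to the content or claims breaks it."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    if claims.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    blob = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"...generated pixels..."
manifest = make_manifest(image, {"generator": "example-model", "ai_generated": True})
```

Verification succeeds only for the exact bytes the manifest was signed over, which is what makes the "nutrition label" trustworthy rather than just a removable tag.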
Tools of the Trade ⚒️
Altos: Create high-performing ad campaigns, track results in real-time, and manage all aspects of your clients’ advertising efforts 10x better.
Supermanage: AI-powered tool for effortless preparation for 1-on-1 meetings, delivering customized briefs with insights from Slack channels.
Vocol: Turns voice into text with high accuracy, providing actionable insights from voice files, multilingual transcription, and real-time collaboration.
Hireguide: AI assistant for hiring teams that offers interview templates, automated interview notes, and a structured dashboard.
Datagran: Enterprise-grade platform for data analysis, modeling, and workflow automation, using LLMs.
😍 Enjoying so far, TWEET NOW to share with your friends!
Hot Takes 🔥
The worst thing about LLMs being so "safe" is that they are useless for generating truly hot takes. ~Bojan Tunguz
Trend: Religions are adopting AI. ~Jeremiah Owyang
AI Meme of the Day 🤡
That’s all for today!
See you tomorrow with more such AI-filled content. Don’t forget to subscribe and give your feedback below 👇
Real-time AI Updates 🚨
⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!!
PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!