AI tweets you missed in May 2024

Late-night Twitter scrolling sometimes gives you a hit of serendipity that makes the lack of sleep worthwhile.

Last night, after my 116th "one more scroll then I'll put my phone down", I saw this on my feed from Séb at Google DeepMind:

The good news for Séb, I said, is that I'd already started a draft of exactly this to revive this long-ignored blog. My feed is so well trained on my interest in AI that I have a perfect front-row seat to the bleeding edge, where the hackers, builders and pwnage lords hang out. In a bid to bookmark all the tools and toys I'd like to explore every Friday (my new R&D day, blocked off from running my ad agency the rest of the week), I'd started pasting the most interesting posts into a draft. Yet with so many niche tools and directions, I wasn't sure there was much appeal in a bunch of pasted tweets.

And then, validation in Séb's tweet. I replied with a deal: sign up to my newsletter, Séb, and I'll start publishing my AI news. And he did, and here we are.

With that context out of the way, it's time to start delivering on the '1-2 lines of commentary' promise. Here's everything I bookmarked in May, leaning as far as I can manage towards the alternative headlines you didn't hear elsewhere.

Creative

Klarna's CEO says the company has halved its in-house marketing team and reduced image-creation costs by $6 million with AI:

Multiple examples of video-to-anime style transfer that look great. I'll call it now: we'll have entire TV shows and YouTube channels with identities built around this approach.

Automatically cut down video clips for social media:

Tools

Yohei is a king of AI Twitter and tapped into the hive mind for its favourite AI-building tools. I'm yet to run through the list, but it looks like a goldmine. (Yohei also did a fantastic X livestream on his view of the future of AI agents, worth watching if you catch him live):

Looking for an alternative to Retool? Try this.

Build landing pages. So many AI apps are trying this, but when the output rides on pretty libraries like Tailwind you know it's going to look good enough.

Tako is not making as much noise as it should with this insanely powerful visualisation tool. I'm watching this like a hawk for our market research tools:

Loom making it easy to record SOPs is a game changer for my business. Producing systems is a huge bottleneck; this builds the documentation plus of course a video recording of the process for training:

Brainstorm rocket fuel:

Mindmap your tweets:

Code Interpreter

You can install libraries in Code Interpreter if they aren't already available? 🤯
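
For what it's worth, the trick I've seen people use (not necessarily the exact method in the tweet, and the wheel name below is a placeholder) is to upload a wheel file to the chat and install it from the local copy, since the sandbox has no internet access:

```python
# Inside Code Interpreter, uploaded files land in /mnt/data; pip can install
# from the local wheel with --no-index because the sandbox is offline.
# "some_package-1.0-py3-none-any.whl" is a placeholder for whatever you upload.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "pip", "install", "--no-index",
     "/mnt/data/some_package-1.0-py3-none-any.whl"],
    check=True,
)
```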

Talking of Code Interpreter, I've been running OpenWebUI as my company's AI chat platform (planning to write up a starter guide for others). Turns out I just needed to run an update.

While looking for an alternative to the above, I stumbled on e2b, and this account just pours out use cases for it. Here's one example.

Models

Never heard of Smaug, but beating Llama 3 70B deserves a shoutout.

Memory

Memory feels like it's waiting for innovation before we settle on the 'right' way to do memory for AI. One example that falls short for me so far: I want my AI to update its own system prompt. ChatGPT does this, but I'm not seeing how to do it locally yet. So it's cool to see LlamaIndex trying to move the needle on long-term memory:
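
To make the self-updating system prompt idea concrete, here's a toy sketch of how I imagine doing it locally against an OpenAI-compatible endpoint (Ollama's, in this case). The model name and prompts are placeholders, and this isn't LlamaIndex's approach or anything from the tweet:

```python
# Toy "let the model rewrite its own system prompt" loop against a local
# OpenAI-compatible endpoint. Model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
MODEL = "llama3"

system_prompt = "You are my assistant. Durable facts about the user: (none yet)."

def ask(message: str) -> str:
    global system_prompt
    # Normal turn, using whatever the system prompt has grown into so far.
    reply = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": message}],
    ).choices[0].message.content

    # Second call: fold anything worth remembering back into the prompt itself.
    system_prompt = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system",
                   "content": "Rewrite the assistant's system prompt so it keeps any new, "
                              "durable facts from this exchange. Return only the new prompt."},
                  {"role": "user",
                   "content": f"Current prompt:\n{system_prompt}\n\n"
                              f"User said: {message}\nAssistant replied: {reply}"}],
    ).choices[0].message.content
    return reply

print(ask("I'm Tom, I run an ad agency and Fridays are my R&D days."))
```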

New RAG methods pop up all the time; here's one that claims to emulate how the brain stores long-term memory. Hype phrases aside, it's reportedly up to 30x faster and 13x cheaper:

This looks incredible:

This looks cool for studying content strategy on websites:

CLI tools

I have folders on my computer called 'To Tidy' that also contain folders called 'To Tidy', some of which contain the same damn files as others in the hierarchy. It makes Inception look like a Peppa Pig plotline. Imagine my excitement at seeing this tool that self-organises your files privately with Ollama... then imagine my disappointment when I couldn't get FastAPI to work. If you want to try your luck, here's the link.

GitHub - iyaja/llama-fs: A self-organizing file system with llama 3

Not the first CLI for AI, but credit here for designing a more user-friendly approach to the interface:

Websites

Websim is the coolest experimental space in AI today. Infinite generative web. It deserves its own blog post, but as one example of the ingenuity of the hacking here: someone managed to get it to run an entire MS-DOS environment, including live game execution, by pulling live files from elsewhere. Incredible!

There are a lot of research assistants popping up. I'm building my own in Flowise, but some days I see something like this and give up:

Yoink

Kiri has a great feed and managed to pluck the new GPT-4o system prompt:

Prompt templates are gimmicky, but I always grab them just in case. The full paper might reveal some underlying patterns to learn from, too.

Talking of prompt optimisation:

And more on DSPy evaluations from James' colleague Mike. (These two are literally writing the book on prompt engineering, out this month from O'Reilly.)
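
If you haven't touched DSPy, the core idea is declaring what you want and letting the framework own (and later optimise) the prompt. A minimal sketch, assuming a recent DSPy release and with the model string as an example provider id:

```python
# Minimal DSPy sketch: declare a signature, let the framework build the prompt.
# The model id is an example; point it at whichever provider you use.
import dspy

lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

# "question -> answer" is a declarative signature; ChainOfThought adds a
# reasoning step before the final answer field.
qa = dspy.ChainOfThought("question -> answer")
result = qa(question="What does DSPy optimise instead of hand-written prompts?")
print(result.answer)
```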

Opinions

People saying stuff about AI that should stay on the record, whether they're right or wrong.

Musk pledges to always release an open-source model that beats the current open leaderboard champion:

Peter Thiel making me feel good about my amateur maths and coding skills:

Why did Roon delete this?

The British AI described here is my spirit animal.

This random post reminded me I should start building my next agency. Part of me can't bring myself to jump on the bandwagon commercially, but everything about AI now has the same vibe I got from Facebook ads in 2014.

Auto-agents go bananas with infinite task lists. There's a future for these agents but they're frankly out of control. How can we get them to a stage where we trust their autonomy? Some thoughts here:
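
My own low-tech answer so far, sketched as a toy below (this is my guardrail idea, not anything from the thread): cap the number of steps and put a human approval gate in front of every action.

```python
# Toy guardrails for an auto-agent: a hard step cap plus a human approval gate.
# plan_next_action / execute stand in for whatever your agent framework provides.
from typing import Callable, Optional

MAX_STEPS = 10

def run_agent(goal: str,
              plan_next_action: Callable[[str], Optional[str]],
              execute: Callable[[str], None]) -> None:
    for step in range(MAX_STEPS):
        action = plan_next_action(goal)
        if action is None:  # the agent believes it's finished
            break
        if input(f"Step {step}: run {action!r}? [y/N] ").strip().lower() != "y":
            print("Vetoed by human, stopping.")
            break
        execute(action)
    else:
        print("Hit the step cap without finishing.")
```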

Yes please to this: alt text is a pain and should be easy to automate away, with a huge market ready to lap it up for a plugin fee:
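
As a sense of how low the bar is, here's a hedged sketch of the sort of call a plugin like that would wrap (GPT-4o via the OpenAI SDK; I have no idea what the tweet's tool actually uses under the hood, and the filename and prompt are placeholders):

```python
# Generate alt text for a local image with GPT-4o via the Chat Completions API.
# Requires OPENAI_API_KEY in the environment; prompt and length limit are my guesses.
import base64
from openai import OpenAI

client = OpenAI()

def alt_text(image_path: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write concise alt text (under 125 characters) for this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(alt_text("hero-image.png"))  # placeholder filename
```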

Replies to this are real food for thought:

I'm a mix of excited and disheartened about AI. Most people like me don't have the resources to really move and win in this space. (Must be nice being at DeepMind, Séb?) This tweet captures my feelings before I ignore it and plough on:

Another reason I'm scared:

This thread breaks down how much data is actually available on the web for LLMs to train on. It makes the point that Google has an insane advantage with its Google Books data:

And that's just in May!

Feedback in the comments, and I'll post the same next week with just one week's worth of content. How's that, Séb?
