Well, it’s been a while since my last AI-centric link list. Let’s add some recent reads and watches to your to-do list.
Text:
- Brave just released an API for their search engine, with an emphasis on its use in training AI. That’s a bit surprising given the privacy- and security-centric ethos of the Brave ecosystem.
- [Paper] On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
- [Paper] Language Models are Few-Shot Learners
- [Paper] Smaller Language Models are Better Black-box Machine-Generated Text Detectors
- Well, this is very old news for everyone in the field, but maybe new to you… The paper that gave birth to modern large language models is already 8 years old and still a read worth your time: Attention Is All You Need
- This paper is a bit much for you? No worries: here is an article walking you through all the key ideas behind ChatGPT, Bard & Co.
- Eliezer Yudkowsky is convinced that AI will kill us all. Normally we would ignore him, but it turns out he’s a very accomplished researcher who kinda knows what he’s talking about.
- Quite a number of scientists warn about the risks of AI. Some may make you feel like you’re living in the end times (looking at you, Eliezer Yudkowsky). If you’re looking for a balanced view on some of their arguments, then Paul Christiano’s post “Where I agree and disagree with Eliezer” is a highly recommended read.
- The case for how and why AI might kill us all – I think the title says it all… And, yes, it is Eliezer again.
- A little less panicky, but no less concerning, is this piece in MIT Technology Review explaining why Geoffrey Hinton, one of the fathers of modern AI, decided to leave Google so he could tell us why he’s now scared of his creation.
- Even if you don’t believe it is quite that dangerous, there are still plenty of issues. Governments, for instance, are realizing these days that AI is far, far ahead of current laws and regulations. So politics is playing catch-up, and Europe has announced it will spin up an AI research hub to apply accountability rules.
- Not depressed already? Good. Ars Technica tells you all about “The mounting human and environmental costs of generative AI”. That might do the trick.
- Math is your cocaine? Then you’ll be thrilled to browse through this blog post explaining the math behind computation and memory usage in transformers. You’re welcome.
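For a taste of what the Attention Is All You Need paper describes, here is a minimal sketch of scaled dot-product attention, the core operation behind transformers. This is my own toy NumPy version with made-up dimensions, not the paper’s reference code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                              # weighted average of values

# Tiny example: 3 tokens, embedding dimension 4 (dimensions chosen arbitrarily)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per token
```

Each output row is a mixture of the value vectors, weighted by how strongly that token’s query matches each key — that single idea, stacked in layers, is most of a transformer.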
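And to preview the flavor of the transformer math in that last post, here is a back-of-the-envelope sketch of my own (simplified; ignores embeddings and biases, and the function names are mine): a decoder-only transformer has roughly 12 · n_layers · d_model² weights, and fp16 storage costs 2 bytes per weight.

```python
def transformer_params(n_layers: int, d_model: int) -> int:
    """Rough parameter count, ignoring embeddings and biases:
    per layer, ~4*d^2 for attention (Q, K, V, output projections)
    plus ~8*d^2 for the MLP (two matrices with a 4x hidden expansion)."""
    return 12 * n_layers * d_model ** 2

def weight_memory_bytes(n_params: int, bytes_per_param: int = 2) -> int:
    """Memory just to hold the weights (2 bytes per parameter in fp16)."""
    return n_params * bytes_per_param

# A GPT-2-small-like configuration: 12 layers, d_model = 768
p = transformer_params(12, 768)
print(f"~{p / 1e6:.0f}M parameters")                    # ~85M (embeddings excluded)
print(f"~{weight_memory_bytes(p) / 1e6:.0f} MB fp16")   # ~170 MB of weights
```

Crude as it is, this kind of estimate is how people quickly size up whether a model fits on a given GPU; the linked post does the real version with activations and KV caches.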
Videos:
- Yes, 67 minutes is not exactly short. But I watched it anyway, and then I shared it with everyone I knew. Be warned: you’ll be worried afterwards. It draws parallels between the current AI revolution and what happened when we introduced social media, then walks you through AI in general and how the situation today relates to society, politics and the moment we find ourselves in. Scary stuff. The A.I. Dilemma
- So you want to know how to build your own GPT? Here are 2 hours that take you through the entire process from start to finish.
- Eliezer Yudkowsky is a very respected scientist who truly believes that AI will kill us all. His belief comes from years of studying the problem and understanding just how hard it is. This 90-minute lecture he gave 6 years ago at Stanford University explains why.
- Another lecture, this time from Robert Miles. In about 18 minutes he walks you through the subject of AI safety.
- Lex Fridman talks on his podcast with high-caliber thinkers such as the philosopher Sam Harris. Here you find the 20-minute excerpt where they speak about AI specifically.
Image: Midjourney