While machine learning has been around for a while (my first startup 14 years ago was based on ML & natural language processing), this wave of AI (transformers & generative AI) feels like a tsunami. It is the culmination of multiple waves, listed below, that are crashing at the same time:
- Natural language as an interface (voice-to-text and text-to-speech)
- Generating human-like content & code (generative AI using existing patterns)
- Reasoning capabilities (the ability to break problems down into smaller steps)
- Hardware (GPUs)
Given how fast this space is moving, it is natural to feel daunted. So if you are looking for a starter kit, here are my recommendations. Feel free to tweet me if I am missing something.
Messiah's Corner
Why AI Will Save The World
By Marc Andreessen
pmarca.substack.com

Marc goes toe-to-toe with Ben Thompson
An Interview with Marc Andreessen about AI and How You Change the World
An interview with Marc Andreessen about COVID, AI opportunities and risk, and a16z's approach to the future.
stratechery.com
Elon's X.AI Twitter Space | annotated by AJ Ram
So, I'll just do a brief introduction of the company, and then the founding team will, I think, just say a few words about their background, things they've worked on, whatever they'd like to talk about, really, but I think it's helpful to hear from people in their own words, various things they've worked on, and what they want to do with X.
readwise.io
An Interview with Daniel Gross and Nat Friedman about the AI Hype Cycle
An interview with Daniel Gross and Nat Friedman about the AI hype cycle, what products are working, the current state of ChatGPT, the data constraint, and Nvidia.
stratechery.com
The Enablers
What OpenAI Really Wants
The young company sent shock waves around the world when it released ChatGPT. But that was just the start. The ultimate goal: Change everything. Yes. Everything.
www.wired.com
Nvidia On the Mountaintop
Nvidia has gone from the valley to the mountain-top in less than a year, thanks to ChatGPT and the frenzy it inspired; whether or not there is a cliff depends on developing new kinds of demand that…
stratechery.com
Llama 2: an incredible open LLM
Meta is continuing to deliver high-quality research artifacts and not backing down from pressure against open source.
www.interconnects.ai

Gambler's Paradise
BarbAIrians at the Gate: The Financial Opportunity of AI | Andreessen Horowitz
Generative AI is likely going to usher in a far more profound method of company transformation.
a16z.com
How to make history with LLMs & other generative models
Or, I'm getting tired of market maps and am ready for some hotter takes
leighmariebraswell.substack.com

AI's $200B Question
GPU capacity is getting overbuilt. Long-term, this is good for startups. Short-term, things could get messy. Follow the GPUs to find out why.
www.sequoiacap.com
AI: Startup Vs Incumbent Value
In each technology wave the value, revenue, market cap, profits and great people captured by startups versus incumbents differs. In some waves it all goes to startups, while in others it goes to incumbents or is split between them. Unexpectedly, the prior wave of…
blog.eladgil.com

Twitter thread from @GavinSBaker | annotated by AJ Ram
Foundation Models without significant RLHF *and* access to high quality proprietary datasets are likely the fastest depreciating assets in human history. @ericvishria I think only four are likely to have enduring value and transition into "Foundation Agents" over the next few years: ChatGPT, Gemini, Grok/Tesla/X and Llama. ChatGPT by virtue of RLHF and Microsoft's various datasets plus access to closed, internal data at most enterprises via CoPilot. If OpenAI ever separated from Microsoft then its value would asymptote to zero. OpenAI trying to make both a GPU competitor *and* a phone would be crazy bearish and epic strategic mistake. Azure OpenAI doing much better than standalone OpenAI on enterprise side. Enterprise is hard. Gemini by virtue of RLHF (via the SGE) and Google's many datasets (Youtube transcripts, gmail). Grok by virtue of RLHF via inclusion in X's premium tier and access to X's real-time data. Combination of Grok with the visual dataset and v12 algorithm from Tesla will likely c...
readwise.io
Apple Silicon and the Mac in the Age of AI | annotated by AJ Ram
Setting the Stage for AI On Device Perhaps unsurprisingly, the vast majority of inquiries I get from investors, founders, and the bulk of our client base from the tech ecosystem is on silicon and AI. For the better part of a year, I have been deeply engaged in these conversations, exploring the limits of current…
readwise.io
101
[1hr Talk] Intro to Large Language Models
This is a 1 hour general-audience introduction to Large Language Models: the core technical component behind systems like ChatGPT, Claude, and Bard. What they are, where they are headed, comparisons and analogies to present-day operating systems, and some of the security-related challenges of this new computing paradigm. As of November 2023 (this field moves fast!). Context: This video is based on the slides of a talk I gave recently at the AI Security Summit. The talk was not recorded but a lot of people came to me after and told me they liked it. Seeing as I had already put in one long weekend of work to make the slides, I decided to just tune them a bit, record this round 2 of the talk and upload it here on YouTube. Pardon the random background, that's my hotel room during the Thanksgiving break.
- Slides as PDF: https://drive.google.com/file/d/1pxx_ZI7O-Nwl7ZLNk5hI3WzAsTLwvNU7/view?usp=share_link (42MB)
- Slides as Keynote: https://drive.google.com/file/d/1FPUpFMiCkMRKPFjhi9MAhby68MHVqe8u/view?usp=share_link (140MB)
Few things I wish I said (I'll add items here as they come up):
- The dreams and hallucinations do not get fixed with finetuning. Finetuning just "directs" the dreams into "helpful assistant dreams". Always be careful with what LLMs tell you, especially if they are telling you something from memory alone. That said, similar to a human, if the LLM used browsing or retrieval and the answer made its way into the "working memory" of its context window, you can trust the LLM a bit more to process that information into the final answer. But TLDR: right now, do not trust what LLMs say or do. For example, in the tools section, I'd always recommend double-checking the math/code the LLM did.
- How does the LLM use a tool like the browser? It emits special words, e.g. |BROWSER|. When the code "above" that is inferencing the LLM detects these words, it captures the output that follows, sends it off to a tool, comes back with the result and continues the generation. How does the LLM know to emit these special words? Finetuning datasets teach it how and when to browse, by example. And/or the instructions for tool use can also be automatically placed in the context window (in the "system message").
- You might also enjoy my 2015 blog post "Unreasonable Effectiveness of Recurrent Neural Networks". The way we obtain base models today is pretty much identical on a high level, except the RNN is swapped for a Transformer. http://karpathy.github.io/2015/05/21/rnn-effectiveness/
- What is in the run.c file? A bit more full-featured 1000-line version here: https://github.com/karpathy/llama2.c/blob/master/run.c
Chapters:
Part 1: LLMs
00:00:00 Intro: Large Language Model (LLM) talk
00:00:20 LLM Inference
00:04:17 LLM Training
00:08:58 LLM dreams
00:11:22 How do they work?
00:14:14 Finetuning into an Assistant
00:17:52 Summary so far
00:21:05 Appendix: Comparisons, Labeling docs, RLHF, Synthetic data, Leaderboard
Part 2: Future of LLMs
00:25:43 LLM Scaling Laws
00:27:43 Tool Use (Browser, Calculator, Interpreter, DALL-E)
00:33:32 Multimodality (Vision, Audio)
00:35:00 Thinking, System 1/2
00:38:02 Self-improvement, LLM AlphaGo
00:40:45 LLM Customization, GPTs store
00:42:15 LLM OS
Part 3: LLM Security
00:45:43 LLM Security Intro
00:46:14 Jailbreaks
00:51:30 Prompt Injection
00:56:23 Data poisoning
00:58:37 LLM Security conclusions
End
00:59:23 Outro
www.youtube.com
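The tool-use loop Karpathy describes in the talk above is simple enough to sketch in a few lines: the code driving the model watches the generated stream for a special word, runs the tool, and splices the result back into the context before resuming generation. This is a toy illustration of that pattern, not any real inference stack; the `|CALCULATOR|` token, `toy_model`, and helper names are all hypothetical stand-ins.

```python
# Toy sketch of the harness "above" the LLM: watch for a special word,
# run the tool, splice the result into the context, keep generating.
# All names here are illustrative, not a real API.

def run_tool(name, query):
    # Stand-in for a real browser/calculator call.
    if name == "CALCULATOR":
        return str(eval(query))  # illustration only; never eval untrusted input
    raise ValueError(f"unknown tool: {name}")

def generate_with_tools(model_step, prompt):
    context = prompt
    while True:
        token = model_step(context)          # model emits one "token" (a string here)
        if token == "<END>":
            return context
        if token.startswith("|CALCULATOR|"): # special word signals a tool call
            query = token.split("|CALCULATOR|", 1)[1]
            result = run_tool("CALCULATOR", query)
            context += f" [tool result: {result}]"  # result lands in "working memory"
        else:
            context += token

# Toy "model" that asks the calculator once, then stops.
def toy_model(context):
    if "[tool result:" in context:
        return "<END>"
    return "|CALCULATOR|2+2"

print(generate_with_tools(toy_model, "What is 2+2?"))
# prints: What is 2+2? [tool result: 4]
```

In a real system the "special words" are dedicated vocabulary tokens the model learned to emit during finetuning, and the harness streams tokens rather than whole strings, but the control flow is the same shape.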
Large language models, explained with a minimum of math and jargon
Want to really understand how large language models work? Here's a gentle primer.
www.understandingai.org

How RLHF actually works
Why RLHF may still win out and why we haven't seen it yet in open-source.
www.interconnects.ai

Twitter thread from @yanatweets | annotated by AJ Ram
I've learned a ton playing and building with LLMs in the past 3+ years. Here are the 21 most important terms to know if you're just getting started 🧵
readwise.io
Builder's Toolkit
Every Company Needs an AI Strategy
Talk for Corporate Boards
sarahguo.com
What We Learned from a Year of Building with LLMs (Part I) | annotated by AJ Ram
It's an exciting time to build with large language models (LLMs). Over the past year, LLMs have become "good enough" for real-world applications.
readwise.io
What We Learned from a Year of Building with LLMs (Part III): Strategy | annotated by AJ Ram
We previously shared our insights on the tactics we have honed while operating LLM applications. Tactics are granular: they are the specific actions employed to achieve specific objectives.
readwise.io
Collapsing the Talent Stack, Persona-Led Growth & Designing Organizations for the Future
As we explore how organizational design and product building are evolving, it's clear we're overdue for change... Let's dive into Edition #8 covering the latest waves and the implications.
www.implications.com

Cheating is All You Need
There is something legendary and historic happening in software engineering, right now as we speak, and yet most of you don't realize at all how big it is.
about.sourcegraph.com
The State of AI Agents
Read the latest insights from our journey | E2B offers sandboxed cloud environments for AI agents & AI apps with a single line of code
e2b.dev
Numbers Every LLM Developer Should Know | Anyscale
For LLM developers, it's useful to know numbers for back-of-the-envelope calculations. Here are the numbers we use, plus how and why to use them.
www.anyscale.com
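The back-of-the-envelope habit the Anyscale piece recommends is easy to put into practice. Here is a minimal sketch of a monthly cost estimate for an LLM feature; the token-per-word ratio and per-1K-token prices are placeholder assumptions for illustration, so substitute your provider's actual pricing and your own measured token counts.

```python
# Back-of-the-envelope monthly cost for an LLM-powered feature.
# Every constant below is an ASSUMED placeholder, not real pricing.

TOKENS_PER_WORD = 1.3             # rough English average (assumption)
PRICE_PER_1K_PROMPT = 0.0005      # $ per 1K prompt tokens (assumption)
PRICE_PER_1K_COMPLETION = 0.0015  # $ per 1K completion tokens (assumption)

def monthly_cost(requests_per_day, prompt_words, completion_words):
    prompt_tokens = prompt_words * TOKENS_PER_WORD
    completion_tokens = completion_words * TOKENS_PER_WORD
    per_request = (prompt_tokens / 1000 * PRICE_PER_1K_PROMPT
                   + completion_tokens / 1000 * PRICE_PER_1K_COMPLETION)
    return per_request * requests_per_day * 30  # ~30 days/month

# e.g. 10k requests/day, 500-word prompt, 200-word answer
print(f"${monthly_cost(10_000, 500, 200):.2f}/month")
# prints: $214.50/month
```

Two minutes of this kind of arithmetic, before building, is often enough to rule a design in or out.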
Product Manager's Corner
Twitter thread from @clairevo | annotated by AJ Ram
I think PMs and product leaders are under-investing in generative AI fluency & building the hard skills necessary to future-proof their careers. Sure folks can mumble about semantic search or "agents" or chat as an interface, but could they sit down and really spec a great AI app? Product teams and leaders are responsible for understanding customer problems & goals, scoping out potential solutions, and providing enough detail that their partner design and dev teams can build a predictably high quality solution that can be tested in the market. But I think 9/10 PMs would flounder when asked to write a decent PRD for an "AI-powered" product feature. It's simply not enough to say something something LLM chat agent beep boop. Because these systems can be highly non-deterministic, there's a whole other set of requirements that need to be considered and outlined in order to go beyond demo apps to reliable production releases. Often when people present gen AI ideas to me in various contexts, I ask: "How would you ...
readwise.io
How to use ChatGPT in your PM work
Real-life examples (and actual prompts) of how PMs are already using ChatGPT day-to-day
www.lennysnewsletter.com

40 AI apps to streamline each stage of the product lifecycle | Product Hunt
Explore the use of AI apps to save time and resources while enhancing the success of product planning, development, launch, and growth strategies.
www.producthunt.com
Things to ponder
A.I. Could Soon Need as Much Electricity as an Entire Country
Behind the scenes, the technology relies on thousands of specialized computer chips.
www.nytimes.com
Artificial intelligence technology behind ChatGPT was built in Iowa — with a lot of water
As they race to capitalize on a craze for generative AI, leading tech developers including Microsoft, OpenAI and Google have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.
apnews.com
The End of High-School English
I've been teaching English for 12 years, and I'm astounded by what ChatGPT can produce.
www.theatlantic.com
Revealed: The Authors Whose Pirated Books Are Powering Generative AI | annotated by AJ Ram
Editor's note: This article is part of The Atlantic's series on Books3. Check out our searchable Books3 database to find specific authors and titles.
readwise.io
The Exploited Labor Behind Artificial Intelligence | NOEMA
Supporting transnational worker organizing should be at the center of the fight for "ethical AI."
www.noemamag.com
Reading Corner
Prediction Machines, Updated and Expanded: The Simple Economics of Artificial Intelligence
By Ajay Agrawal, Joshua Gans, and Avi Goldfarb
www.amazon.com
The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma
By Mustafa Suleyman and Michael Bhaskar
www.amazon.com
Staying up to date
twitter.com