

Fun with Embeddings
ShippedWith.AI is a weekly roundup of interesting things built atop AI, paired with ideas for entrepreneurs and investors in the space. The AI market moves fast. Our goal is to help you understand and find inspiration in it.
A lighter update this week. I wanted to focus on recent work that uses embeddings. For those unfamiliar, think of an embedding vector as an MRI scan of a neural network, capturing its representation of the input. This process has the useful property of mapping similar inputs to nearby regions of vector space, enabling semantic search among other things.
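The "nearby regions" idea can be made concrete with a few lines of NumPy. The vectors below are hand-picked toy numbers standing in for real model embeddings, purely to show how cosine similarity surfaces the semantically closest sentence:

```python
import numpy as np

# Toy embedding vectors for three sentences. In practice these would come
# from a model (e.g. an embeddings API); the numbers here are hand-picked
# so that the two cat sentences land near each other in vector space.
vectors = {
    "the cat sat on the mat":  np.array([0.9, 0.1, 0.0]),
    "a kitten rests on a rug": np.array([0.8, 0.2, 0.1]),
    "stock prices fell today": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embedding of a query like "cat on a mat".
query = np.array([0.95, 0.05, 0.0])
best = max(vectors, key=lambda s: cosine(query, vectors[s]))
print(best)  # the cat sentences score far above the finance one
```

Semantic search is just this loop run over a large corpus, usually with a vector index instead of a Python `max`.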
If you ask a handful of people, some will tell you embeddings are the future, and others will tell you to just wait until LLMs can solve your problem end-to-end. There’s something to both views. For now, embeddings are an interesting way to leverage LLMs without fully trusting them with complete control of the final answer. Or, framed another way, they’re a constrained approach to fine-tuning the experience of using an LLM without actually fine-tuning the model itself.
This Week’s Links
Ask my Book by Sahil Lavingia - “Hear the author answer questions in their voice.”
Very fun website in which you can ask questions that are matched to what appear to be pre-loaded answers, spoken to you through an AI-generated voice that sounds like the author. We’re not far off from the “clues from the dead” scene in I, Robot. I feel like this is either going to become completely ubiquitous or a passing homepage fad. Hard to tell, but I love it.
Notion QA Bot by Harrison Chase - “Embed & search across Notion pages”
GitHub project to embed Notion-exported site content with OpenAI embeddings and then search it with Facebook’s FAISS vector engine. I think the sweet spot for indie hackers in this space is taking a base project like this and then asking: which sentences should I embed, and under which circumstances should I retrieve them?
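The real project calls OpenAI’s embeddings API and indexes the results with FAISS; a dependency-light sketch of the same retrieval loop, with a keyword-vector stand-in for the embedder so it runs offline, might look like:

```python
import numpy as np

# Stand-in embedder: the Notion QA Bot calls OpenAI's embeddings API here.
# This keyword vector exists only so the example runs without an API key.
VOCAB = ["refund", "return", "engineering", "meet", "tuesday",
         "expense", "report", "month"]

def embed(text: str) -> np.ndarray:
    v = np.array([float(w in text.lower()) for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

# 1. Chunk the Notion export and embed each chunk
#    (the "which sentences should I embed?" question).
chunks = [
    "Our refund policy allows returns within 30 days.",
    "The engineering team meets every Tuesday at 10am.",
    "Expense reports are due by the fifth of each month.",
]
index = np.stack([embed(c) for c in chunks])  # stand-in for a FAISS index

# 2. Embed the query and retrieve the nearest chunk by L2 distance
#    (the "when should I retrieve?" question).
query = embed("When does the engineering team meet?")
nearest = int(np.argmin(np.linalg.norm(index - query, axis=1)))
print(chunks[nearest])  # -> the meeting-schedule chunk
```

Swapping the toy pieces for real ones means replacing `embed` with an API call and `index` with a FAISS `IndexFlatL2`; the shape of the loop stays the same.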
Raffle - “Search across your company’s content”
Whenever I find a GitHub project (like the Notion QA Bot above) that demonstrates a capability in a few lines of code, I think it’s useful to find the fully capitalized startup version of the same thing. It’s a study in the difference between a technology and a product. In this case: automatic content integrations, UI integrations, and insights into employee search behavior.
Metaphor - “LLMs as Search Engine”
Ok, this one isn’t strictly embeddings, but it’s important for you to see. Scroll to the bottom and play with the UI. I’m struck both by how enormous a leap over Google this is and by how challenging a UX problem it seems to present. Will LLM-based search create an unbundling of the “universal search box” (ironic, given the natural language interface of LLMs), or will we find a way to map queries to the nearest prompt that works well with them (hey, a task for embeddings after all)?
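That last parenthetical can be sketched too. Here’s a hypothetical “prompt router” that embeds an incoming query and dispatches it to whichever prompt template has worked well for that kind of ask; the templates, centroids, and 2-d vectors below are all invented for illustration:

```python
import numpy as np

# Hypothetical prompt templates, keyed by the kind of query they handle well.
TEMPLATES = {
    "summarize": "Summarize the following page in three bullet points:\n{query}",
    "compare":   "Compare the options below and recommend one:\n{query}",
}

# Toy 2-d "centroid embeddings" of queries each template handles well.
# In a real router these would be averaged embeddings of past queries.
centroids = {
    "summarize": np.array([1.0, 0.0]),
    "compare":   np.array([0.0, 1.0]),
}

def route(query_vec: np.ndarray) -> str:
    # Pick the template whose centroid is most aligned with the query.
    best = max(centroids, key=lambda k: float(np.dot(query_vec, centroids[k])))
    return TEMPLATES[best]

prompt = route(np.array([0.2, 0.9]))  # pretend embedding of a "compare" query
print(prompt.split("\n")[0])
```

The same trick generalizes: any time you have a fixed menu of good prompts, embeddings give you a cheap dispatcher in front of them.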