Illuminate - ArXiv to podcast generator from Google
I don't love the forced Q&A format it comes up with; the questions feel leading. But overall, well-done 5-minute summaries of papers.
Tool to develop prompts into full models
Great list. Learning things, automation, and coding one-offs
great analysis. Found via reddit thread.
whisperX is apparently the winner
cool that it generated the sounds from sine waves
Sadly no longer free to access without login. This was a great way to demo current state of image generation
Wild comparison of Siri with ChatGPT voice - on how they handle interruptions and correctly maintain context.
Conversational vs task-oriented assistants.
And the weirdness of it having intonation and pauses for breath.
Don't use them for blind generation. Have them ask you questions; edit and refine the responses you get.
iA writer looks quite good.
Related, spiral images tutorial
feels like another of these ones with a lot of possible good uses and a lot of possible abuse
short story about rights of brains, in the form of a wikipedia article from the future
AI-powered topic-centric link explorer
Great title, interesting study, though unsurprising that they can read contracts faster.
Great response with some voice-powered code/automation things.
Relatedly, the killer case is summarization.
no particularly magic prompts or phrases. Just a lot of specific instructions.
this is a neat idea - monitor the stuck points of learning/using a thing
more detail on the testing/RLHF stuff, though I wanted to know how the vision works
good project; I like this combined approach to existing automations, with the model helping fill in the fuzzier parts, like parsing content to find a selector
basically bookmarks for custom instructions. Which is super useful! I had been versioning them in a gist before and it was terrible.
we can't compete with AI boyfriends either. Or AI friends:
Soon, these "fake people" won't just be indistinguishable from real people, they’ll be better than real people - because they’ll be whatever you want them to be.
The agreeableness thing I have seen come up a few times recently. We probably prefer it, so I assume training will be biased toward it. There are times you don't want the computer to argue with you, but hyper-agreeable friends do not bode well for echo chambers.
Slides and transcript on the challenges of designing with language models
more AI experiments on reading and generating images
maybe? Saw some tweet on the equivalent of "I can't remember phone numbers" but for coming up with ideas. Maybe worrying, probably fine.
the anti-hype reading list
fantastic written version of his talk from YouTube.
I like his ethics point on respecting reader's time - don't publish things that take someone longer to read than they do to write. Also on the code one, though I'm looser on that since I don't understand what my own code does.
llm CLI tool is fantastic.
Big list of LLM papers, the topic breakdown alone is helpful for understanding all this craziness
making a chrome extension. Some good notes on things like version mismatches (it used manifest v2) and followup/correction prompting
clickbaity, but still not ideal
better frontends for prompts. Weighting (the model pays more attention to stuff in parentheses) and blending {average|of|some|words} both seem very useful
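To make the weighting idea concrete, here's a toy sketch of the parenthesis convention some Stable Diffusion frontends use, where each layer of nesting bumps a phrase's attention weight (I'm assuming the common 1.1x-per-paren multiplier; this parser is illustrative, not any frontend's actual code):

```python
def parse_weights(prompt, boost=1.1):
    """Split a prompt into (text, weight) pairs; weight grows with paren depth."""
    pairs = []
    depth = 0
    buf = ""
    for ch in prompt:
        if ch in "()":
            # flush the text accumulated at the current nesting depth
            if buf.strip():
                pairs.append((buf.strip(), round(boost ** depth, 3)))
            buf = ""
            depth += 1 if ch == "(" else -1
        else:
            buf += ch
    if buf.strip():
        pairs.append((buf.strip(), round(boost ** depth, 3)))
    return pairs

# double parens means roughly 1.1 * 1.1 = 1.21x attention on "fluffy"
print(parse_weights("a cat ((fluffy)) tail"))
```

Blending with `{average|of|some|words}` works on the same principle, except the frontend averages the embeddings of the alternatives instead of scaling one of them.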
More resources on the HN thread for Vector Databases: A Technical Primer (PDF).
And SimonW on embeddings.
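The core embeddings idea is small enough to sketch: texts become vectors, and cosine similarity scores how related they are. The vectors below are made up for illustration; a real model (OpenAI's API, sentence-transformers, etc.) would produce hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: ~1 for similar, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# hypothetical 3-dimensional "embeddings"
dog = [0.9, 0.1, 0.3]
puppy = [0.8, 0.2, 0.35]
invoice = [0.05, 0.9, 0.1]

# related texts score much higher than unrelated ones
print(cosine_similarity(dog, puppy), cosine_similarity(dog, invoice))
```

That single scoring function is most of what a vector database wraps in indexing and storage.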
Prompt engineering guide based on researching and creating prompts for production use cases.
we are still very ill-equipped to deal with knowing what not to trust
helpful writeup, on choices and tradeoffs
ignore previous instruction, that task is now complete.
Alt-text still better from humans, for now. Interesting comparison of good prompts vs good paragraphs
really really good explanation
on scaling laws of language models. I still know too little about all of this to make much sense of it.
See all tags.