Better Conversations through ML?
Comments on the internet. Like everyone else, I can't help to look at them, only to quickly want to look away.
Somehow no matter the audience or dryness of the topic, comment forums seem to almost inevitably spiral into a chaotic mess of personal attacks. And yet, there's so much value to be extracted (...much of the time) from the knowledge and perspective of others.
I was curious if we could use LLMs (and how much it would cost) to let us communicate with each other better. Instead of filtering, what if we let LLMs help us express ourselves to each other in a healthier manner?
Unsurprisingly, the LLMs are good at deflecting obviously personal attacks. Perhaps more surprising was that they aren't that easy to jailbreak out of their system prompt, and that LLama3-405B was the best of all of models I tried at following prompts. All of the models tend to rewrite comments in a relatively stilted manner (sigh, other than of course when you turn on the "evil" prompt), but it's not bad.
To see an example of how LLMs handle a charged conversation, see the conversation demo page.
Cost
So how much does it cost to do this? It's surprisingly affordable, and I imagine someone will figure out how to package it into an API soon enough. Processing the average comment and parent tree costs about $0.002 (2/10 of a cent) when using a hosted LLama3 provider.
With some simple filters and cheaper models in front of it, it would certainly be viable to deploy for most use cases. (I mean, if you've got enough users that this matters, you probably can afford it...). A risk would be someone figuring out how to jailbreak and use you as a free LLM service, but rate limits and filters would likely catch the vast majority of freeloaders.
Example Site
Below I mocked up a simple replica of Hacker News which uses an LLM to gently guide comments to ensure they are respectful and contribute to the conversation. Feel free to try adding comments and interacting with them (topics and comments are reset daily).
You can also change the system prompt to make the LLM deliberately evil or incoherent if you like in the settings.
- Porting systemd to musl Libc-powered Linux
- Building a WoW (World of Warcraft) Server in Elixir
- Desed: Demystify and debug your sed scripts
- The Internet Archive has lost its appeal in Hachette vs. Internet Archive
- Yi-Coder: A Small but Mighty LLM for Code
- CSS @property and the new style
- The Elements of APIs by John Holdun
- Shell Has a Forth-Like Quality (2017)
- Giving C++ std:regex a C makeover
- Sequel: The Database Toolkit for Ruby
- Dynamicland 2024
- Show HN: Laminar – Open-Source DataDog + PostHog for LLM Apps, Built in Rust
- Flexport Is Hiring Engineers in Amsterdam
- A Real Life Off-by-One Error
- Accelerando (2005)
- Understanding Pgvector's HNSW Index Storage in Postgres
- Show HN: An open-source implementation of AlphaFold3
- The first nuclear clock will test if fundamental constants change
- Tinystatus: A tiny status page generated by a Python script
- A SpamAssassin Surprise
- CSS-only infinite scrolling carousel animation
- Software Rasterizing Hair
- HIDman Adapting USB devices to work on old computers
- Lesser known parts of Python standard library – Trickster Dev
- Origami-Inspired Phased Arrays Are Reshaping the Future of Antennas
- Show HN: Wat Dat – A Firefox Extension for Instant Text Explanations
- Kagi Assistant
- Show HN: Mem0 – open-source Memory Layer for AI apps
- DAGitty – draw and analyze causal diagrams
- Routed Gothic Font