This is an interesting idea, and I wonder if it could be a response to some of the increasingly reported 'brain rot' experiences with AI usage. There are things that I opt not to use ChatGPT for because I know it will be harmful to my long/medium-term mental capacity. Being able to use it in a mode that is actively designed to facilitate thinking rather than bypass it could be useful.
Feels strange watching an organisation of this size and scale (monetarily speaking), one that seems to have no real clear idea of what product it wants to deliver or what a useful output would be, essentially vibe-producing based on incomplete assumptions about how its users currently engage with its widget.
Is this actually the best use of their skilled team’s time? I have little knowledge in this space, but the answer seems like no to me. Maybe low-hanging fruit?
Fascinating and wild
Reminds me of WW1 airplane use. For technical application, I guess we're at the stage of the pilot dropping grenades and shooting a pistol at the other pilot. Nobody really knows what to do, but everyone is sure trying.
I think it is good, and it fuels innovation. We need to try even strange ideas, because maybe some of them will work.
I have zero insight and rarely use their products. And yet, from the outside their pace and ideas amaze me. It is novel terrain, it would be a shame for a company in their position not to explore several avenues, disparate or surprising as they may seem.
I've never even remotely gotten that idea, except maybe with GPTs?
Studying is what I assume a very large chunk of their user base is using the service for, and they are trying to compete with the extremely successful NotebookLM.
Also, I think they are just "focus-grouping by fire-extinguishing", attempting to address the disruptive effect their golem is having on education.
Most of my students struggle to organise or bookmark the hundreds of pages they've created using ChatGPT. It gets messy. Some copy & paste everything into Google Docs. Others use custom apps or save the shareable URLs in their browser or text files, even though those links sometimes disappear over time.
The problem is, ChatGPT wasn't designed to be a proper writing or word processor tool. It treats messages as disposable, not like saved documents you can return to, structure, or modify easily.
At the end of my school time we had one of those live online word processors (Etherpad) where everyone could write in the same document at the same time.
Our notes from that time are amazing, flawless, and better than the teaching material itself.
Teachers didn't like it, as it benefited those who didn't listen at all, but for those of us who cared it was a major productivity boost.
Imagining an AI as an additional user is crazy.
The UX of chat is indeed wrong. And while there have been some integrations into word processors, those don't really work that well either.
I used Codex a few weeks ago to do some agentic coding. That's actually a model that could also work well for groups of documents. Treating documents as code and working to create coherent change sets against them would be a much better way to work with LLMs on writing. I haven't had a chance to try this yet, but using Codex to edit a wiki might be the power move here.
A good diff format is all you need.
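A minimal sketch of that idea, assuming the documents live in a git repository and the model has been prompted to reply with a unified diff (neither commenter describes having built this; the function and file names below are made up for illustration):
---
# Sketch only: apply an LLM-proposed unified diff to a folder of markdown
# docs tracked in git, so each edit lands as one reviewable change set.
import pathlib
import subprocess

def apply_model_diff(repo_dir: str, diff_text: str) -> None:
    """Write the model's proposed diff to disk and apply it as one change set."""
    patch = pathlib.Path(repo_dir) / "proposed.patch"
    patch.write_text(diff_text)
    # Dry run first: reject the change set unless it applies cleanly.
    subprocess.run(["git", "apply", "--check", patch.name], cwd=repo_dir, check=True)
    subprocess.run(["git", "apply", patch.name], cwd=repo_dir, check=True)
    # The result can then be reviewed with `git diff`, exactly like a code change.
---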
> Most of my students struggle to organise or bookmark the hundreds of pages they've created using ChatGPT
I'm struggling to imagine how you could possibly know this (sounds like projection to me), but assuming it's true, please share the strategies that the minority of your students use to manage this.
> It treats messages as disposable
IMHO this is something to lean into. Students should not save chats like references. They should document their findings from the conversations and make sure they can ask again, should the need arise.
I was saying the same thing a year ago, but now more students are using it and creating all sorts of documents: summaries of transcripts and articles, flashcards, quizzes, mock exams and interviews, research, role plays, study and fitness plans, tutorials, how-to guides, and more. They even turned homework into riddle-based mystery games.
The most tech-savvy students built their own publishing system using ChatGPT. It lets others turn their message into a JSON object, tag it, and add it to the system, which then gets published and shared with everyone using Google Docs.
The prompt they use to export the ChatGPT messages is:
---
Turn the following text into JSON format. The JSON should have three fields: "title", "prompt", and "data". The "title" should be self-generated, based on the main idea or theme of the input text. The "prompt" should be the exact prompt used to generate the "data" section. The "data" should be in markdown format, and it must include the original prompt as a section inside the markdown, followed by the input text. Keep any formatting like headings, bold, italics, and lists.
---
I asked them to include the prompt to preserve context so others can understand their thinking process and the goal behind the message.
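For illustration only, here is a rough sketch (a guess at what such a pipeline could look like, not their actual system; the directory layout and file names are assumptions) of how the exported JSON objects could be gathered and merged into one markdown document for sharing via Google Docs:
---
# Sketch: collect exported JSON objects with the fields "title", "prompt",
# and "data" defined by the prompt above, and merge their markdown into a
# single file.
import json
import pathlib

REQUIRED_FIELDS = {"title", "prompt", "data"}

def collect_exports(export_dir: str, out_file: str = "published.md") -> None:
    sections = []
    for path in sorted(pathlib.Path(export_dir).glob("*.json")):
        obj = json.loads(path.read_text())
        missing = REQUIRED_FIELDS - obj.keys()
        if missing:
            raise ValueError(f"{path.name} is missing fields: {sorted(missing)}")
        # "data" already embeds the original prompt as a markdown section,
        # so only the title needs to be added as a heading.
        sections.append(f"# {obj['title']}\n\n{obj['data']}")
    pathlib.Path(out_file).write_text("\n\n".join(sections))
---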
Or the AI tool, some AI tool, could sift through your conversation and document the findings, ignore the ones you challenged, and generate exactly that final revision with everything you learned in the conversation. No, I don't know how either, so for the time being I'm also writing down stuff in an external document.
You're pointing out something a lot of people feel but don’t always say out loud — ChatGPT just isn’t built for organizing or managing writing over time. It’s great for brainstorming, writing help, or getting ideas down fast, but once you’ve had a few long conversations, everything starts to blur together. Students end up with dozens or even hundreds of chats, none of which are easy to search, organize, or return to later. It’s like having a giant pile of sticky notes and no folders. Some try to deal with it by copying everything into Google Docs or Notion, but that gets messy fast, especially when it’s done after the fact and without much structure.
The core issue is that ChatGPT treats each conversation like a throwaway. Even if the content is valuable, it’s not saved like a document — there’s no real sense of “this is my project on climate change” or “this is the story I’m working on.” It’s just one long chat thread, and everything lives in a running timeline. You can rename a chat, sure, but that doesn’t make it feel like a proper file. There’s no version history, no folders, no tags, and no way to mark drafts or track progress. So students end up scrolling endlessly, trying to remember where they talked about that one idea or how they phrased something two weeks ago.
Some of them are getting creative with workarounds. They paste stuff into apps like Notion or Docs, or save the shareable ChatGPT links in bookmarks or files, but even then it’s easy to lose track. And sometimes those links break or just vanish from memory. It’s frustrating because the content is often really useful — but there’s no good system for keeping it organized.
What would really help is if ChatGPT had a proper “document mode”, or Artifacts like Anthropic has. Something where you could start a new piece of writing, name it, organize it into sections, come back to it later, and keep working on it like a real file. You could still get AI help inside the doc, but it would feel more like working in a word processor than chatting in a messaging app. Until that happens, the best move is probably to treat ChatGPT as the rough draft stage and move things into a better tool once the ideas are solid. Asking it to clean up or summarize a full session before copying it out can help a lot too. But yeah — it’s time tools like this started working more like long-term writing partners and less like disposable chat threads.
> there’s no real sense of “this is my project on climate change” or “this is the story I’m working on.” It’s just one long chat thread
perplexity.ai (a meta-ai) partly solves that problem by at least allowing you to group chats in "Spaces" (effectively folders). It lacks some of the other features you suggested though.
Wouldn't an "LLM-enabled word processor" be a better UX, and a better factoring of the resources? Rather than expecting the LLM companies to also provide the best UX for every LLM use case, surely it's better to have tailored applications and OSs (and all the domain expertise that comes with them) managed by people/companies/teams that want to specialise in those domains.
Copilot, but for essays & articles.
I wouldn't necessarily say "better" but as we can see, it definitely has its place. I would use it right now.
Potentially related to https://openai.com/index/estonia-schools-and-chatgpt/ / https://tihupe.ee/en to some extent.
It would be cool if it could generate practice questions based on a given assignment.
Isn't that just a sentence and document upload or screenshot away from any of the current leading LLM-powered chat products?
---
You're a brilliant expert at clear, effective, and responsive teaching and explanations of topic X, which I'm studying. Here's my assignment. Draft Y questions to test my understanding and assess my current level.
---
For math, that gives questions without a nice solution.
The review flood of AI-generated slop is going to burn out many teachers, just as it does programmers hit by PR floods from juniors.
I worry about how those who love to write will get through the system.
it really is amazing how little care or thought openai is putting into what they're doing - and everyone else is letting them do - to society.
it feels like they'll be the lead paint of the 2020s that the survivors in 2050 fixate on as a cause of human intelligence collapse.
I don’t know why people get so frustrated here. Studying with chatgpt changed my life. I can compare doing it in early 2010s and now. Cheating? Well yeah, everyone was doing it long before LLMs.
Perhaps, but it could just as easily be the feedback that finally breaks up the monopoly that slop has on the world at the moment: React, bullshit jobs, etc. could all go away. The latter in particular.
React?
it's 2025. As another article put it, this is "The Rise of 'Whatever'". Mediocrity flourishing, grifters exploiting the mediocrity, and people too tired or dejected to push back against the mediocrity. AI is a grifters' and corporations' wet dream; it was never about making quality technology to improve everyone's life.
There are many ways to address this, but most solutions fall far outside of tech.