
Today (December 2, 2025), AI Assist's conversational search and discovery experience is fully integrated into Stack Overflow. AI Assist continues to give learners ways to understand community-verified answers and get help instantly. With today's enhancements, logged-in users can use their saved chat history to pick up where they left off, or share conversations for collaborative problem-solving. This update also lets us explore further integrations in the future, as explained below.

The story so far

AI Assist was launched as a beta in June 2025 as a standalone experience at stackoverflow.ai. We learned a great deal through observing usage, having discussions with community members on Meta, getting thoughts from various types of users via interviews and surveys, and reviewing user feedback submitted within AI Assist. Based on this, the overall conversational experience was refined and focused on providing maximum value and instilling trust in the responses. Human-verified answers from Stack Overflow and the Stack Exchange network are provided first, then LLM answers fill in any knowledge gaps when necessary. Sources are presented at the top and expanded by default, with in-line citations and direct quotes from community contributions for additional clarity and trust.
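In code terms, the retrieval-first flow described above can be sketched roughly as follows. This is purely an illustrative toy, not Stack Overflow's implementation; `search_verified_answers`, `llm_generate`, and the corpus shape are invented names:

```python
def search_verified_answers(query, corpus):
    """Return community posts ranked by naive term overlap with the query."""
    terms = set(query.lower().split())
    hits = []
    for post in corpus:
        overlap = terms & set(post["text"].lower().split())
        if overlap:
            hits.append((len(overlap), post))
    hits.sort(key=lambda pair: -pair[0])
    return [post for _, post in hits]

def llm_generate(query):
    """Placeholder for the LLM fallback, used only when retrieval finds nothing."""
    return {"source": "llm", "text": "Generated answer for: " + query}

def answer(query, corpus):
    """Community-verified answers first; the LLM fills the gap otherwise."""
    hits = search_verified_answers(query, corpus)
    if hits:
        top = hits[0]
        # Human-verified content is surfaced with its citation attached.
        return {"source": top["url"], "text": top["text"]}
    return llm_generate(query)
```

With a one-post corpus, a query sharing terms with the post returns that post's URL as the source, while an unrelated query falls through to the LLM placeholder.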

Since our last updates in September, AI Assist's responses have been further improved in several ways:

  • Response speed improved by at least 35%
  • A more responsive UI
  • More relevant search results
  • Upgraded to newer models
  • Attribution on copied code
  • Recognizing that not all questions are the same
    • Depending on the query type, AI Assist now replies using one of four structures: solution-seeking, comparative/conceptual, methodology/process, or non-technical/career.
    • Every response also has a "Next Steps" or "Further Learning" section to give the user something actionable to do.
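As a rough illustration of that routing idea, a keyword-based stand-in might look like this (the rules and the `classify_query` name are invented for this sketch; the production classifier is presumably far more sophisticated):

```python
def classify_query(query: str) -> str:
    """Naive keyword router for the four response structures (illustrative only)."""
    q = query.lower()
    if any(w in q for w in ("error", "fix", "bug", "fails", "broken")):
        return "solution-seeking"
    if any(w in q for w in (" vs ", "versus", "difference", "compare")):
        return "comparative/conceptual"
    if any(w in q for w in ("how do i", "steps", "process", "workflow")):
        return "methodology/process"
    # Anything else falls through to the career/soft-skills structure.
    return "non-technical/career"
```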

What’s changed today?

While logged in to Stack Overflow, users of AI Assist can now:

  • Access past conversations as a reference or pick up where they left off;
  • Share conversations with others, to turn private insights into collective knowledge;
  • Access AI Assist’s conversational search and discovery experiences on the site's home page (Stack Overflow only).

Example responses from AI Assist

AI Assist at the top of the Stack Overflow homepage (logged-in)

Conversations can be shared with others

What’s next?

By showcasing a trusted human intelligence layer in the age of AI, we believe we can serve technologists in line with our mission to power learning, and affirm the value of human community and collaboration.

Research with multiple user types has shown that users see the value of AI Assist as a learning and time-saving tool. It aligns with how they already use AI tools, and they see value in deeper integrations. Transparency and trust remain key expectations.

Future opportunities we'll be exploring include:

  • Context awareness on question pages: adapting AI Assist to where the user is
  • Going further as a learning tool: helping users understand why an answer works, surfacing related concepts, and supporting long-term learning
  • Helping more users learn how to use Stack Overflow: guiding users on how to participate and helping them meet site standards

This is not the end of the work going into AI Assist, but the start of it on-platform. Expect to see iterations and improvements in the near future. We're looking forward to your feedback now and as we iterate.

  • 10
    The screenshot for sharing conversations is misleading: it suggests that the mechanism could be vulnerable to enumeration. The actual mechanism uses a UUID, though, not a plain incrementing number. Commented Dec 2 at 14:51
  • 78
    Where is the post where someone asked for this? Oh, AI hype, I get it, we must have it here because... well, no reason. I don't use "random" assistants here and there. I have a preferred one, which is way better than anything a website can offer me for free. Thanks. Commented Dec 2 at 15:33
  • 40
    I couldn't be less happy SE is getting in on the LLM game. Is there a way to opt out of being used for the hallucination engine or do I just need to delete all my answers? Commented Dec 2 at 15:39
  • 20
    @AshZade Will deleting chats also revoke SE's license to use the content of the chats? Commented Dec 2 at 15:57
  • 36
    So, how does this work if there's a ban on AI in questions and answers? Is it a ban on everyone else's AI, but SO AI is "fine"? That just sounds like more corporate hypocritical BS to me. Commented Dec 2 at 16:55
  • 21
    Nah. I like AI tools, and I don't think the integrations are bad from the outset, but I feel that adding yet another widget to the top of the home page which pushes more human questions out of view is a terrible compromise that is wholly unacceptable on a site that purports to value human contributions above AI ones. The focus just feels so wrong to me, and I think having no way to hide, relocate, or even minimize that pane is just ridiculous. Commented Dec 2 at 17:23
  • 9
    @AshZade I don't think that's unreasonable, I just still don't like the overall message it seems to send, it doesn't sit well with me. I'd also posit that if the clickthrough on those questions is poor, then the homepage has an identity crisis that should be tackled holistically... why is the clickthrough poor? Is it a relevancy problem? Would users prefer seeing something else other than questions? The AI box may very well be part of that solution, but I'd assert that it's not the whole picture. I haven't gotten the sense that Stack, or its users, really have a cohesive vision for that page. Commented Dec 2 at 17:50
  • 15
    Straight out of the "how can we make Stack Overflow crappier this quarter?" playbook. At least the Stack Exchange dictator is consistent. Commented Dec 2 at 23:56
  • 47
    It creeps and it creeps and it creeps. Why should human experts again spend their time answering questions here? Commented Dec 3 at 6:15
  • 8
    "our mission to power learning and affirm the value of human community and collaboration": since when? Did I miss some announcements? Commented Dec 3 at 7:06
  • 16
    @GSerg I'll answer in earnest: since we launched the Alpha, over 80% of AI Assist's usage has been technical questions. It's one of the strongest reasons we continued the project. I commit to sharing detailed data in the next few weeks as we collect it. I have mentioned this point in past Meta posts: the sentiment here does not reflect the usage of AI Assist overall. Commented Dec 3 at 12:56
  • 37
    Why, precisely, are we trying to get users to ask fewer questions? Commented Dec 3 at 16:41
  • 21
    Looking for information about how to disable and delete all the AI chat features from my view of the service. I'm here to interact with other developers, not an LLM. Commented Dec 3 at 17:37
  • 16
    It's not often that you see someone digging their own grave as eagerly as SO does. Commented Dec 4 at 16:50
  • 7
    Isn't the whole point of SE that it's a place where you can go to ask other people about your problems? Commented 2 days ago

46 Answers 46

5

About a year ago I asked this - "Is Stack Exchange planning to force users to ask their questions to an LLM before allowing posting on the site?":

Inspired by this question I was skimming through the recent blog post once again overhyping AI as The Answer To Life, The Universe And Everything Else.

Leaving aside the robo-review scenarios, I noticed one more weird implication in this picture:

Diagram of the "Stack Overflow Ecosystem": first ask our LLM model, and only after that be allowed to post

Here we can see what Stack Exchange describes as their "Ecosystem". In this scenario the company describes a quite specific flow.

  • A question is asked to the LLM.
  • If the LLM can answer (or can at least hallucinate an answer while tricking the user into thinking their problem is solved) then the process ends here.
  • Only if the LLM isn't able to provide an adequate answer is the question then posted for the community to answer.

Now, this looks quite similar to "Requiring users to check ChatGPT before posting their question", something that someone already tried to propose and was already met with quite a bit of backlash by the Meta community.

Is this the future the company is planning? Or is the image only meant to illustrate how some future additional tool will integrate with the site?

And obviously I got employee after employee swearing you would never dare, telling me I was the bad one for assuming bad intent, etc.

Fast forward to today...

(screenshot)

As Lundin pointed out, this new "layout" seems to purposely "hide" the Ask Question button, while giving plenty of space to the new no-one-asked-for AI-based feature.

I already know that some white knight will point out that "you can still ask questions" and "this is not the same" but... please, spare me the devil's advocacy here. You know perfectly well what that question meant, and you knew back then too.

So, I ask you again, more bluntly this time.

Do you still claim it is not your purpose to put your poor AI-based solution above human-provided answers? Do you claim that hiding the Ask button while giving your new AI toy the central focus was just a coincidence? Do you claim you would not prefer users to first use your AI parrot and, only after that proved ineffective, finally move to post an actual question?

And more importantly... do you think we are blind to your actual goals?

10
  • 7
    Like... I get disliking the tool but the Ask Question button hasn't moved. The box is obnoxious and patronizing in how it's addressing the user but it's not in any way requiring you to use the tool before asking and the company hasn't even moved, let alone hidden, the ask question button. Commented Dec 3 at 16:07
  • 1
    @Catija claiming that was not true back then, claiming it is not true now. Picture me not surprised. Riddle me this, Catija: how come Lundin came to the same conclusion - "the button must be hidden away out in the periphery along with the blog spam, to ensure that I ask my question to the AI rather than use the site as intended."? I suggest you go downvote that too. Commented Dec 3 at 16:12
  • 6
    I think people have a lot of strong feelings about this and that's valid. I think that claiming that they are "hiding the Ask Button" while it's clearly visible in your own screenshot essentially invalidates everything you're saying in this post, meaning you're making yourself out to be unreasonable. If that's the choice you want to make, go for it - but you're not doing yourself any favors. Lundin's answer makes an argument by showing before/after, even if it had the AI Assist do it. It's a fair question and analysis without making these hyperbolic statements. Commented Dec 3 at 16:23
  • 2
    see previous comment @Catija. Feel free to defend the "poor billionaire company" as you wish, but I think it is quite clear to anyone who doesn't refuse evidence what is given focus in the new design and what is pushed to the side. A small clue: users tend to look at the center of the screen first. Commented Dec 3 at 19:12
  • 1
    I'm not sure what we can say to convince you that we don't want fewer questions posted. We have a prompt after every response in AI Assist to post one if the response doesn't help. We have an Ask Question button at the top right of the screen at all times, even if you scroll. Commented Dec 3 at 23:24
  • 1
    @AshZade The Ask button doesn't scroll. Maybe that's in a design y'all are developing but I don't see it hovering as I scroll the page down. Commented Dec 4 at 0:01
  • @Catija as far as I can tell, it does indeed not scroll, but only because the whole page doesn't scroll... only the middle section does. Once below a certain window width, the Ask Question button goes away entirely (unless I'm missing it somewhere), but otherwise it sits stuck at the top right in a column by itself. If I wasn't looking for it, I wouldn't have recognized it as a button. Commented Dec 4 at 0:31
  • @user400654 Maybe I'm misunderstanding. The position of the Ask button (on SO) seems to be fixed in the upper right corner (full screen layout) such that when the page scrolls down, the button scrolls away, out of view. This contrasts with the top bar, which floats so that it's always visible. Ash's comment seems to state that the ask button is always there at the top right of the screen and it never scrolls away, much like the top bar. But as far as I can tell, that's not the case. When I scroll down, there is no ask button on the page any more. Commented Dec 4 at 17:37
  • Are we talking about the same page? I'm talking about the page you get when you click on AI Assist. Commented Dec 4 at 17:48
  • @user400654 I'm talking about the homepage of SO, which is the page the answer's screenshot shows. Commented 2 days ago
5

My mobile experience in Android Chrome when the keyboard is up: the page top disappears, and background text floats behind the text input field.

There is also a space below the text input, see second screenshot.

(screenshots)

4

Could you please increase the size of the output window where the AI writes the response? Currently, the output window (red rectangle in the screenshot below) occupies only a small portion of the screen:

(screenshot)

1
  • 6
    Yup! We're working on making the input box smaller and expand as you enter content, and tightening up the padding above and below it. Thanks for sharing. Commented Dec 2 at 21:50
4

Clicking on "share" too quickly, before AI Assist shows any response, sometimes results in AI Assist never showing any response and this error message:

(screenshot)

Tested on Chrome+Windows.

1
  • 4
    Another great catch, we're on it. Thank you, Franck. Commented Dec 3 at 13:35
4

Quotes in AI Assist responses sometimes mess up the font size, e.g. see https://stackoverflow.com/ai-assist/shared/0895f6cb-2b02-4832-b965-0d2568d060a2:

[meta-tag:feature-request]

In that case, it is caused by quoting code containing comments starting with #: Configure tmux scroll speed

(screenshot)

1
  • 3
    Thank you for submitting all of these, I've seen the response do this a few times as well. Working on it! Commented Dec 3 at 13:09
4

Useful feature for some, surely. But to echo requests in other comments, some of us would like a way to turn this feature off or remove it from the top of our Q&A site pages.

But I mainly came here to point out this "clever" use of Terms and Conditions:

You retain full ownership of your AI Inputs, as applicable, but you grant to Stack Overflow a worldwide, perpetual, irrevocable, non-exclusive, sublicensable, right to use, copy, modify, distribute, and store the AI Inputs.

It really sounds like you own it and we only get to pretend we do. There should be a way to opt out of granting you full license to use our inputs.

At what point do you decide to tell it like it is, rather than hide behind clever manipulations of terms of service?

"You pay your mortgage and retain the deed to your home and full ownership, but you grant us a worldwide, perpetual, irrevocable, non-exclusive, right to use, cohabitate, modify, lease, and use for storage". Who really owns it?

3
  • 2
    The way to opt-out of granting the license to use your input is to not use the tool. If you use their tool, you agree to the terms. Similarly, there is no way to contribute content to the Q&A sites without licensing it to the company. This is pretty much boilerplate for online services these days. Not saying that’s good; it’s just how it is. Commented yesterday
  • "Who really owns it?" Legally, you. But you let them use the data, so they are on the same level as you in some regards. Something they could not do is give others the same conditions. You could, because you are the owner. And ColleenV is right: the only way out is not using the service. Commented 14 hours ago
  • @NoDataDumpNoContribution only they can grant others, because of the sublicense clause. The only things they can't do are transfer the ownership to someone else, or sue someone for noncompliance with the license, unless it's a direct license from them. (If they license to someone, and then that person licenses it to someone who is noncompliant, they can't sue.) Commented 11 hours ago
3

I've reported the moving Copy button bug on the official announcement:

Copy button should not move on scroll

This was fixed but apparently not for answers that are being quoted by the AI Assistant:

(screenshot)

5
  • Filing this one as well, thank you! Can you share that chat as well? It should really wrap the code. Commented Dec 3 at 15:20
  • 2
    It shouldn't wrap, the original answer also does not wrap the code. Here is the link to the chat : stackoverflow.com/ai-assist/chat/… with the quote to this answer: stackoverflow.com/questions/70293924/… Commented Dec 3 at 15:24
  • 1
    I don't think you enabled sharing on the chat. Commented Dec 3 at 15:26
  • 1
    Oops, here is another one from the share menu: stackoverflow.com/ai-assist/shared/… Commented Dec 3 at 15:32
  • 1
    Thank you! Working on it. Commented Dec 3 at 19:05
3

Allow AI Assist to display LaTeX, tables, and keys. Currently this isn't supported, e.g. see https://stackoverflow.com/ai-assist/shared/8ce17a9a-b269-4765-9697-b1804ed0ad57:

(screenshot)

1
  • 4
    Thank you for submitting this as well, Franck! We're on it. We have a lot of users come through MathOverflow. Commented Dec 3 at 13:32
2

The success of the alpha and beta versions of AI Assist is statistically misleading and may not reflect the reality of use in the open world.

Put it this way: a farm near where I live makes its own ice cream from goat's milk. They put a fair amount of work into promoting it on social media and invited people to try it, and by and large had good feedback.

The problem here is that the only people who engaged with this new product were people who already liked goat ice cream or wanted to explore the idea further.

Putting this new product into larger outlets and supermarkets didn't result in the same feedback; the product didn't sell as much as the focused promotional events implied it would.

The reason is self-evident: the people who investigated the alpha and beta tests were already engaged enough to look and interact. The true context is entirely different.

It'll be interesting over the next couple of months to see what kinds of users use AI Assist, the quality of those inquiries and whether the responses met the expectations of the users.

2
  • 4
    I don't think we can say definitively that "The success of the alpha and beta versions of AI Assist is statistically misleading and don't reflect reality." yet given the last sentence in your post. I agree now that we're in the "supermarket", it'll truly test its success. Commented Dec 3 at 15:49
  • Usage will surely go up as the feature is now available from more places so it should see a higher share of the traffic. However, who really knows what the expected future usage is from the previous runs to compare with? More interesting in my view is what it will take away. How many people will still use the ask or answer buttons and generate human content? We will see. Numbers can hardly go down further, but maybe they can (or maybe not). Commented Dec 3 at 19:05
2

Can AI Assist match human experience, ingenuity and mentorship?

Relevant case where humans do stupid things because they cherry-pick AI's information. Title of the video: "A man asked AI for health advice and it cooked every brain cell."

What security measures is the company employing to prevent anything like this from happening when someone inexperienced, possibly a self-taught developer, has to solve a problem that could negatively affect people's lives?

Is every answer going to have a warning telling visitors that AI Assist can make mistakes, for example because it lacks up-to-date data?

2
  • 3
    "What security measures are the company employing to prevent anything like this from happening" They did this: stackoverflow.com/legal/terms-of-service/public#ai-addendum As in: "not our problem". You find a link to the policy below the AI prompt. Commented Dec 4 at 15:04
  • 4
    We have a few mechanisms in place for this, including guardrails we've implemented for what AI Assist can be used for and moderation API for inputs and outputs. That's not to say it provides full protection today but we're set up for a baseline, monitoring, and improving these mechanisms over time. Commented Dec 4 at 16:31
2

Leaving aside the hypocrisy, lack of reliable attribution, etc. that everyone else has mentioned...

Unless I'm mistaken, any LLM is only as good as its training data. At best, it synthesizes information from well-written human answers, preventing a human from having to go digging for those answers themselves. Assuming it's not being trained on classified or private documents (which, honestly, wouldn't surprise me if it was), LLMs don't have any relevant training data that a well-educated person couldn't find for themselves given a bit of digging.

I think SE/SO could absolutely make good use of "AI-powered" search: you enter a search query as if you were chatting with an LLM, a TNN in the backend advertised as AI churns away and produces a list of SE/SO questions that might give you an answer and a list of other search terms you might want to try if those don't help. You could probably even take the current LLM and make some modifications to turn it into that kind of search engine if you wanted to.
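A minimal sketch of that kind of retrieval-only "AI search", using bag-of-words cosine similarity as a stand-in for the trained network (the function names and ranking scheme here are illustrative assumptions, not any real backend):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, questions: list[str], k: int = 3) -> list[str]:
    """Rank existing question titles against a natural-language query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(t.lower().split())), t) for t in questions]
    # Return only titles with some overlap, best match first.
    return [t for score, t in sorted(scored, reverse=True) if score > 0][:k]
```

A conversational query such as "reverse a python list" would then surface the list-reversal question first; alternative search terms could be suggested from the vocabularies of the top hits.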

However, what's the point of having an AI synthesizing all the posts themselves when you can just ask the experts that wrote the training data? SE/SO is already probably the single largest Q&A corpus on the internet, so if you can't find the answer here, what's the odds that "AI" is going to be able to find it and correctly repeat it to you without plagiarizing the original answer?

Correct me if I'm wrong, but I think those odds are almost exactly 0.

2
  • Single largest corpus doesn't mean it covers everything. At some point we tried duplicating documentation (the famous warlords of documentation project) and gave up. I could imagine that somebody who had read everything on SO, plus all the documentation out there, all the sources on GitHub, and Wikipedia, would indeed know more than someone reading only here. On the other hand, some answers on SO are really bad; users would always need to filter them out. Somehow the combination shown here isn't working well, but it's not clear what would be optimal. Commented yesterday
  • 1
    You make some great points here that actually align with how we think about AI Assist: help users find what they're looking for faster and better. The underlying search tech is different from the on-site search (purpose-built for conversational and natural-language inputs). "Search & discovery" means finding the thing you're looking for, but also helping you find other useful things. The LLM part is meant to augment, using its training data for content we may not have on SO&SE. It's not meant to repeat anything we find on SO&SE; we show those verbatim. Commented yesterday
1

Having received an inbox notification about AI Assist I asked it:

Is AI Assist a solution in search of a problem?

The response was:

Short answer

AI Assist can be either — it is useful when it augments search, debugging, or learning workflows; it becomes a solution in search of a problem when it’s used without clear goals or human oversight.


Why it helps

  • Speeds discovery of likely-relevant posts, commands, and patterns.
  • Reduces repetition by summarizing known fixes or pointing to canonical answers.
  • Supports learning by highlighting concepts and next steps.

When it’s a problem

  • Overtrusting output — accepting suggestions without verification.
  • Noise amplification — surfacing many low-value matches instead of precise answers.
  • Workflow mismatch — forcing AI into processes that need human judgment or context.

Key trade-offs

  • Productivity vs. accuracy: faster triage but risk of incorrect or outdated suggestions.
  • Convenience vs. learning: can shortcut learning if relied on uncritically.

Practical guidance

  • Use AI Assist for triage and pointers, not as the final arbiter.
  • Verify commands, configurations, and code snippets before applying.
  • Provide clear context (error messages, environment, what you’ve tried) to get useful suggestions.
  • Prefer small, testable changes when following AI-suggested steps.

Based upon the above, I don't yet feel inclined to use it further.

-5

It would be convenient to be able to pass an image as input, as well as for the AI assistant to include an image in its response. Currently, neither of these features is available.

10
  • If it wants to be a real contender for the LLMs from the big companies, then probably it would need this. Commented Dec 3 at 7:48
  • 1
    @NoDataDumpNoContribution totally. No idea why so many DVs on that one. Commented Dec 3 at 7:50
  • 4
    Multi-modal support is on our list. Thanks for adding your voice to this. Commented Dec 3 at 13:36
  • Can you provide a use case or example? What image would you want to share with the bot? Commented Dec 3 at 15:34
  • @A.L i.sstatic.net/M9ZAcipB.jpg Commented Dec 3 at 18:49
  • @A.L that would be when you are programming? Commented Dec 4 at 15:38
  • @FranckDernoncourt That’s a weird example since it includes an error code and a QR code. And the question is about Stack Overflow, do you have another idea that would fit with SO topics? Commented Dec 4 at 16:38
  • 1
    @A.L AI Assist only addresses SU questions and it's easier to upload an image than typing error code or follow QR code+read page. Anyway example of image for SO: stackoverflow.com/q/79664199/395857 ; stackoverflow.com/q/79589019/395857 ; stackoverflow.com/q/79544040/395857 ; stackoverflow.com/q/30228069/395857 Commented Dec 4 at 17:50
  • @FranckDernoncourt I suggest you to add the last example in your answer, I think that it’s the most interesting use-case. Commented Dec 4 at 17:58
  • @A.L sounds good done Commented Dec 4 at 19:06
-5

(Omitting ten thousand words here describing how the AI answers are unreliable, how SE doesn't respect answer contributors, and complaining about the boring notifications wasting my time.)

In short: I have no motivation to use this in the foreseeable future, until the product fails and is abandoned.

Meanwhile, writing this answer to express my dissatisfaction with AI Assist also wastes my time.

-9

Allow AI Assist to give sources other than Stack Exchange sources. That would improve attribution (pointing to sources instead of having the LLM rephrase their content from memory) and make it easier for users to check whether AI Assist is correct (and sometimes get extra information by reading the sources).

9
  • 8
    It's AI Assist for Stack Overflow, not [third party site]. Why would SE want to pull from external sources? Commented Dec 3 at 1:50
  • @Otakuwu more properly attributing non-SE sources, more accurate answers, and more easily verifiable answers. Commented Dec 3 at 4:02
  • 3
    @Otakuwu It does "pull from other sources" when it doesn't find anything, via the AI-generated answer. I rather see the problem that attribution cannot be reliably given by LLMs. Franck requests a feature that is hard, next to impossible, to achieve. And if it only gives some attribution, one could argue how much value that has. There is no alternative to a real search if you want to check for correctness. Commented Dec 3 at 7:47
  • @NoDataDumpNoContribution it can provide sources easily if web search is used, e.g. i.sstatic.net/jvyda7Fd.png Commented Dec 3 at 7:49
  • 1
    @FranckDernoncourt Some sources, but are those all the sources it used? I'm actually fine with a tool that searches for me, then summarizes and clusters the search results. But I thought that LLMs can do more than that. Commented Dec 3 at 7:55
  • 1
    @NoDataDumpNoContribution definitely can't cite all sources used for pretraining and keeping track of the provenance of the main knowledge/sources used to generate a given output is tough but can list sources for RAG. Commented Dec 3 at 8:05
    Currently it does search and drops in a few posts that may or may not be vaguely related to the topic, then hallucinates something not coming from the posts, while avoiding answering the question. I've gotten this every time I've tried to ask an advanced programming question - it even quotes answers written by me myself, but the AI clearly doesn't understand them or uses them in the wrong context. It is simply too dumb to answer advanced questions. Commented Dec 3 at 10:20
  • 1
    This is interesting and something we've thought about: we're not the only trusted source on the internet, what if we partnered with a few, selective additional sources, like official docs (e.g. I often see python docs cited in answers)? Commented Dec 3 at 13:27
  • 1
    @AshZade thanks, that makes sense. The choice of additional sources depends on your targetted recall vs precision trade-off. Official documentations sounds like a great start Commented Dec 3 at 18:51
-9

It would be great if clicking on "ask your question directly to the community." auto-populated the title and body based on the discussion with AI Assist. That may need some amendment to the genAI policy.

(screenshot)

Currently it doesn't auto-populate them:

(screenshot)

Aside from the convenience, that'd give SE some edge over simply using https://gemini.google.com/ or https://chatgpt.com/ directly, as currently I don't see the point of using SE's AI Assist instead of GPT/Gemini/Grok/Claude/etc., except if one wants to focus on SE's content (which is rather limiting).

16
  • 10
    This is literally helping/encouraging the user to post AI generated content to SO, which is explicitly forbidden. I am strongly against any kind of population of any post content field by any form of LLM. Commented Dec 3 at 3:53
  • @WeijunZhou yes indeed, I'm in favor of AI-generated content (and also in favor of harshly suspending users posting repeatedly incorrect information). Commented Dec 3 at 4:03
  • 2
    Every site has its own GenAI policy. Suggesting something that encourages the user to actively violate existing policy (it doesn't matter whether you agree with the policy or not) is not acceptable. You should modify your suggestion to make sure that 1. it doesn't apply to sites where GenAI is banned, and 2. for sites where GenAI is not banned, the prepopulated content matches the attribution requirement. Note that posting GenAI content without disclosure/attribution is banned network-wide. Commented Dec 3 at 4:10
  • @WeijunZhou Policies aren’t set in stone; adding that feature would obviously require amending the policy. Commented Dec 3 at 4:19
  • It is not at all clear from your current version of answer. If you are also proposing the GenAI policy be modified, at least say that. Commented Dec 3 at 4:23
  • 1
    @WeijunZhou pretty obvious but done Commented Dec 3 at 4:26
  • While "adding that feature would obviously require amending the policy" is factually true, @FranckDernoncourt, this doesn't strike me as an area in which "let's create the capability, and then adjust the policies accordingly" would go over particularly well, as those policies are set by the communities. I think given the current state of affairs your actual feature request would at most be something communities can request to be enabled for them if it aligned with their policies on AI-generated content. Commented Dec 3 at 9:36
  • To add to what JNat said from the product side, it's been a feature request since our Alpha phase to help pass the conversation over to posting. Technically, it's not too difficult but we'll need to work with communities to do it right. Commented Dec 3 at 13:48
  • 3
    @AshZade it's very simple from a community standpoint as well: the SO community has overwhelmingly told you that it does not want LLM generated questions on the site (5519 up- vs 159 downvotes). So the only way to do it "right" would be not to do it at all. And before you dismiss that as "oh but there's feedback from other channels that people want this" - it does not matter if thousands of people want to ask via a LLM if the experts (you know, the people who answer questions) are not at all interested in dealing with LLM slop. Commented Dec 3 at 14:24
  • @l4mpi I understand because we (me, you, other Meta users) have had this conversation for months. I have more faith in us at Stack and know that it's on us to prove that we can do this well. Commented Dec 3 at 14:28
  • 2
    @AshZade I have exactly zero faith in that, especially because all of the LLM stuff was entirely pushed by SE in clear violation of earlier promises, and designed without any community input (I'm not talking about asking for feedback or fixing glaring issues after a release; the community only learned about this as it went live). I forsee this going the same way and waking up one day to an announcement that there is now some sort of LLM support / "agent" integration for asking questions, community wishes be damned, because some committee at SE decided that whatever they came up with was fine. Commented Dec 3 at 14:40
  • @l4mpi If SE users don't adapt to AI, the future of SE is bleak. SE inc. and their investors know that, hence all these new AI features while trying to avoid pissing off the anti-AI SE users. Commented Dec 3 at 18:43
  • 3
    @FranckDernoncourt I’m not quite sure where you get the idea that they are avoiding pissing off anti-ai users, they’ve been releasing ai powered features non-stop for nearly 3 years now, without a single one having any measurable positive impact that couldn’t have been achieved with improvements that didn’t require throwing ai at it. Commented Dec 3 at 18:55
  • @l4mpi I agree that the policy forbids generated questions and myself agree with many of the reasons for that, but on a day-to-day basis I think I've been seeing the site let it slide in questions more than enforce it. (It would be a lot of work to enforce it more strictly, and I'm unsure how much of a benefit that might be.) Commented Dec 3 at 18:55
  • @DanGetz probably because if a post is good, it doesn't matter how much of it was generated by AI. The point of SE is to learn. Commented Dec 3 at 19:02