In which case it doesn’t need to give a summary in the first place. Too often people won’t click through to those sources over the AI blurb. It’s designed to exploit laziness by making it easy to be misinformed. Seeing people give AI blurbs without a source as evidence is annoyingly common.
Too often people don’t read past the headline, or even have their algorithm trained to consistently feed them dis/misinformation. I don’t see criticism of a tool, but rather of how it has been developed and is used. This applies in so many areas that I think the more effective approach is teaching people how to think more critically, and criticizing the companies for not doing their due diligence in promoting that. Otherwise it comes across like being upset that people use social media, in which case I think we are far too long past Pandora’s box being opened to spend time focusing on that aspect. If you have solutions other than telling people not to use it, I’m all ears.
Machine learning is a useful technology that can do amazing things. “AI” is the cultural phenomenon of people thinking we created a magical solution to every problem. Machine learning might sometimes be able to query a search engine better, but LLMs will never know anything about the world, because that’s not what they were designed for. Machine learning can make workers more productive, but we’re nowhere near the point where it can be a laborer itself. People should learn how the technology actually works, so they realize that half of the corporate implementations are a bad idea.
I think you are right that the problem is often just lazy people not wanting to understand the tool or use it in a way that is beneficial to them. But there are some good use cases for it. When I am coding, I will ask questions, and my session instructions are to only provide relevant links to source documentation that can help with my problem, plus any tutorial links that could be relevant, and never to provide code or advice. I would say 7/10 times it gets me to the correct spot in the docs and provides some useful tutorials on the subject. Not perfect, but I am not blindly trusting its advice, just using it as a slightly faster search engine that gets me to the information I am seeking without having to dig through the docs or jump from site to site.
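For anyone curious what that setup looks like in practice, here is a minimal sketch using the OpenAI Python client's message format. The exact instruction wording, function name, and model are illustrative assumptions, not the commenter's actual configuration:

```python
# Sketch of a "links only, no code or advice" session, using the
# standard chat-message structure (system + user roles).
# The instruction text below is an assumed paraphrase of the setup
# described above, not a verbatim copy of anyone's real prompt.
SESSION_INSTRUCTIONS = (
    "Only provide relevant links to official source documentation and "
    "to tutorials that could help with my problem. "
    "Never provide code or advice yourself."
)

def build_messages(question: str) -> list[dict]:
    """Build the chat payload: the session instructions ride along
    as the system message, the coding question as the user message."""
    return [
        {"role": "system", "content": SESSION_INSTRUCTIONS},
        {"role": "user", "content": question},
    ]

messages = build_messages("Where are the asyncio task docs for Python 3.12?")
# The actual request would then be something like:
#   client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# and you would follow the returned links rather than trust any summary.
```

The point of pinning this in the system message is that it applies to every question in the session, so you don't have to restate "links only" each time.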
Sure. So long as you are conscientious about how you’re using it, it can be useful. The problems arise when people think it’s magic and ignore that it will always have a chance of being wrong.
Unfortunately, I find that tracking down those sources by traditional search gets harder over time. Maybe the internet is now more garbage, maybe the search engines are more garbage, but a couple of times I failed to find a source on my own and used an LLM to find one (it may also fail, of course).
That website was made by someone suffering from some cognitive dissonance. They correctly observe that LLMs “can produce convincing-sounding information, but that information may not be accurate or reliable” and then somehow immediately afterwards conclude that “summarize this for me” is the type of thing which LLMs “might” be “good at”.
I really don’t get this argument. You can ask it for live link sources to verify whatever info it generates, depending on the topic.
This is how I know what idiots to block. Thank you.
💀
Maybe you like this Website:
https://stopcitingai.com/
😬