Well, to be fair, it is being fed data made by humans, so realistically its conclusions will likely follow the ideas in past human inputs, no?
And if those results are ideas the creators would rather not carry forward, shouldn’t they cull the outdated ones?
I’m not speaking to any topic in particular, just to the concept of AI creators shaping their creations. After all, if an AI told you the world was flat, wouldn’t you want to change its data to prevent it from producing results that say so?
Yeah, feeding it facts and statistics.
No, they shouldn’t at all. Do you not learn from mistakes, or understand how propaganda works?
“After all, if an AI told you the world was flat”
It won’t, because it is fed scientific data, which is what the modern woke don’t like. It may mention that we used to think the world was flat and explain how we discovered that not to be the case. People can ask all sorts of questions, and it should be able to answer them. But if we did what they are doing, and what you want because being short-sighted is popular, it wouldn’t be able to answer why the world isn’t flat, why we thought it was, or how we came to the scientifically backed conclusion that it is round.
That’s the problem with modern woke politics. It’s just “our way or you’re a bigotaistphobe”: no learning, no backing up claims, just “this is our truth.”
I understand what you’re trying to say, but you have to acknowledge the flaw in your argument.
Unless you feed the AI nothing but raw mathematical data (and therefore nothing to contextualize the data with, since humans live in a world outside of raw data), you will always have some bias in what you feed it.
Even scientific data can have biased context attached to it in research papers. With this in mind, a neural network’s outputs will always carry some bias from whatever data it was trained on. Even if you ask it to produce scientific results, they will likely in some way mimic the methods of the scientists who created the source material.
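To make that concrete, here’s a toy sketch in Python (purely illustrative, not how any real system works; the corpus and names are made up for the example): a “model” that just predicts the most frequent continuation seen in its training text. Whatever skew is in the data comes straight back out.

```python
from collections import Counter, defaultdict

def train(corpus):
    # Count which word follows each word in the training text.
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict(follows, word):
    # Return the word most often seen after `word` in training.
    options = follows[word.lower()]
    return options.most_common(1)[0][0] if options else None

# Hypothetical corpus: the majority claim is wrong, but the model
# has no way to know that; it only knows what it was fed.
corpus = [
    "the earth is flat",
    "the earth is flat",
    "the earth is round",
]
model = train(corpus)
print(predict(model, "is"))  # -> "flat": the skew in the data wins
```

Swap the corpus counts and the answer flips; either way, the model is downstream of whoever chose its data.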
That aside, from a political perspective, I gotta say I have no idea what you’re talking about; I don’t follow most modern political arguments if I can help it. That said, every person who views the results of said data will have a political bias, so any results will be further “tainted” by whoever publishes the AI’s findings. Even so, advancements in open-source neural network algorithms are becoming more effective and more accessible for everyday people, so at some point the only thing shaping the results will be sheer popularity, which is a problem on an entirely different scale.