I built a Python script that uses a local Ollama LLM to automatically find and add movies to Radarr.
It picks random films from your library, asks Ollama for similar suggestions based on theme and atmosphere, validates against OMDb, scores with plot embeddings, then adds the top results to Radarr automatically.
Examples:
- Whiplash → La La Land, Birdman, All That Jazz
- The Thing → In the Mouth of Madness, It Follows, The Descent
- In Bruges → Seven Psychopaths, Dead Man’s Shoes
Features:
- 100% local, no external AI API
- --auto mode for daily cron/Task Scheduler
- --genre "Horror" for themed movie nights
- Persistent blacklist, configurable quality profile
- Works on Windows, Linux, Mac
GitHub: https://github.com/nikodindon/radarr-movie-recommender
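For anyone curious how the moving parts fit together, here’s a rough sketch of the two request bodies the pipeline needs. Everything here is an assumption for illustration (model name, prompt wording, field choices); check the repo for the real implementation. No network calls are made, the functions only build the JSON payloads.

```python
# Hypothetical sketch, not the repo's actual code. Ollama exposes
# POST /api/generate and Radarr v3 exposes POST /api/v3/movie; the
# exact prompt and config values below are made up for illustration.

def ollama_suggest_payload(seed_title, model="llama3"):
    """Body for POST http://localhost:11434/api/generate (Ollama)."""
    prompt = (
        f"Suggest 5 movies similar in theme and atmosphere to '{seed_title}'. "
        "Reply with one title per line."
    )
    return {"model": model, "prompt": prompt, "stream": False}


def radarr_add_payload(tmdb_id, title, quality_profile_id, root_folder):
    """Body for POST {radarr_url}/api/v3/movie (sent with an X-Api-Key header)."""
    return {
        "tmdbId": tmdb_id,
        "title": title,
        "qualityProfileId": quality_profile_id,  # configurable, per the feature list
        "rootFolderPath": root_folder,
        "monitored": True,
        "addOptions": {"searchForMovie": True},
    }
```

Between those two calls sit the OMDb validation step (confirming each suggested title actually exists and fetching its plot) and the embedding-based scoring described below.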


Since no one is leaving critical comments that might explain all the downvotes, I’m going to assume they’re reflexively anti-AI, which, frankly, is a position I’m sympathetic to.
But one of the benign, useful things I already use AI for is giving it criteria for shows and asking it to generate lists.
So I think your project is pretty neat and well within the scope of actually useful things that AI models, especially local ones, can offer users.
LLMs are not the tool for a recommender job
Huh? There are other ways to find similarities between movies without using an LLM. You may use AI to find similar movies, but it’s nonsense that everyone has to ask an LLM to link movies.
no one is saying everyone has to ask an LLM for movie recommendations
OP wrote a Python script that calls an LLM to ask for a recommendation.
But you are right, OP doesn’t say that everyone has to do it.
No, it also doesn’t do that. It gets embeddings from an LLM and uses that to rank candidates.
Are you a trollm?
If not, I’m just too stupid to understand op.
If that’s not the same, I don’t know what is. Gotta go back to school, I guess.
It’s not; I read the code. It’s not merely asking the LLM for recommendations, it’s using embeddings to compute similarity scores.
It’s a lot closer to traditional natural language processing than to how my dad would use GPT to discuss philosophy.
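To make the embedding-scoring idea concrete, here’s a minimal, self-contained sketch. The embed() below is a toy stand-in (a bag-of-words vector over a made-up vocabulary) so the example runs offline; in the actual script the vectors would come from the local Ollama model, but the ranking logic is the same: score each candidate’s plot by cosine similarity against the seed film’s plot.

```python
import math

# Toy sketch of embedding-based ranking. The vocabulary and plots are
# invented for illustration; only the cosine-ranking structure matters.

def embed(text):
    # Stand-in for a real embedding model: counts of a few fixed words.
    vocab = ["jazz", "drummer", "horror", "ice", "paranoia", "music"]
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

seed = "a young drummer chases jazz greatness"
candidates = {
    "All That Jazz": "a driven choreographer consumed by music and jazz",
    "The Thing": "paranoia spreads at an ice station horror",
}
# Rank candidates by similarity of their plot embedding to the seed's.
ranked = sorted(candidates,
                key=lambda t: cosine(embed(seed), embed(candidates[t])),
                reverse=True)
print(ranked[0])  # prints: All That Jazz
```

The point of the thread above: the LLM proposes candidates and supplies embeddings, but the final ordering is plain vector math, not a chat reply.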
No LLM use is benign. The effects on the environment, the internet, and society are real, and that cannot be ignored.
You can make the argument that in some cases it is justified, e.g., for scientific research.
Saw someone already commented about CO2, so I thought I’d counter your environment claim on the water-usage side (since that’s something I’ve seen a lot of too).
The ISSA had a call to action due to the AI water use “crisis”: https://www.issa.com/industry-news/ai-data-center-water-consumption-is-creating-an-unprecedented-crisis-in-the-united-states/
68 billion gallons of water by 2028! That’s a lot…right? Well, what I found is that this is somewhat of a bad-faith argument. 68 billion gallons annually is a lot for one town, but those are national-level numbers, and they aren’t compared to usage from anything else. So, let’s look at US agriculture (something that’s tracked very well by the USDA): https://www.nass.usda.gov/Publications/Highlights/2024/Census22_HL_Irrigation_4.pdf
That’s 26.4 trillion gallons of water annually. So, AI data centers would represent about 0.26% of agricultural consumption. If AI data center consumption is a crisis, why is agricultural consumption not a crisis? You could argue that agriculture produces “something useful”, but usefulness doesn’t factor into the scarcity of a resource. So, either it’s not a crisis, or you are cherry-picking something that has no meaningful bearing on solving the problem.
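For the skeptical, the percentage is easy to verify from the two linked figures:

```python
# 68 billion gal/yr (projected AI data center use, ISSA) vs.
# 26.4 trillion gal/yr (US irrigated agriculture, USDA census).
ai_gallons = 68e9
ag_gallons = 26.4e12
share = ai_gallons / ag_gallons * 100
print(f"{share:.2f}%")  # prints: 0.26%
```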
yeah, I think the whole “water” argument really dilutes the case against data centers.
On a serious note, the argument works for areas that already struggle to supply enough water to consumers. Otherwise, we should be focusing more on the stress on the power grid, and on the domino effect of hardware cost increases rippling through supply chains across many industries. It started with GPUs; now it’s CPUs, storage, networking equipment, and other components.
If these prices are too high for a couple of years, we’ll start seeing generalized price increases as companies need to pass along the costs to consumers.
I think the supply chain issue is probably the most pressing out of all of them. The other points people have are either non-issues or a result of dropping usage hogs into existing electrical infrastructure. Infrastructure can be updated, though.
Supply chain is different. There isn’t a supply shortage of chips; it’s that profitability dictates you should sell them to data centers or adjacent industries. Unlike infrastructure, where you can just build out more, adding more chip supply just means you have more to sell to data centers. Since the demand is there, at the end of the day profits will always win.
Didn’t downvote you. I hear this line of complaint in conjunction with AI, especially if the person saying it is anti-AI. Without even factoring in AI, streaming and content consumption produce some 25 million metric tons of CO2 emissions annually. Computers, smartphones, and tablets can emit around 200 million metric tons of CO2 per year through their electricity consumption. Take data centers: if they are powered by fossil fuels, that can add about 100 million metric tons of CO2 emissions. Infrastructure contributes around 50 million metric tons of CO2 per year.
Now…who wants to turn off their servers and computers? Volunteers? While it is true that AI does contribute, we’re already pumping out significant CO2 without it. Until we start switching to renewable energy globally, this will continue to climb with or without AI. It seems, though, that we will have to deplete the global fossil fuel supply before renewables become the de facto standard.
chill, this is extracting text embeddings from a local model, not generating feature-length films
So running a local model is unforgivable, but “scientific research” running on hyperscalers can be justified?