If so, I’d like to ask a few questions:
- Do you use a code-autocomplete AI, or do you type into a chat?
- Do you consider the environmental damage that using AI can cause?
- What type of AI do you use?
- Usually, what do you ask AI to do?
 
I try to avoid it, but ever since search engines have gone to shit, I’ve been forced to use it for debugging code. Stack Overflow, r/Cprogramming, and niche articles on specific issues have practically dried up since AI generation took off. And why wouldn’t they? Why would a user post an issue (for example, on Stack Overflow) and wait a few days for a few responses when they could get an instant answer from an AI? Search engines have degraded so badly that my father’s startup issued premium ChatGPT licenses because of how dead search has become.
I hate it, I wish I didn’t have to use it, and yet this is my reality.
At work, I still use JetBrains’ IDEs, including the in-line, local code completion. Though it’s so slow that 99% of the time I’ve already written everything out before it can suggest something.
No; I don’t use AI at all for programming currently.
I’ve tried chat prompt coding two ways. First, with a language I know well. It didn’t go well: it insisted on using an API that had long been deprecated and was removed around 2020, but I didn’t know that and I lost a lot of time. I also lost a lot of time because the code was generally good, but it wasn’t mine, so I didn’t have a great understanding of how it flowed. I’m not a professional dev, so I’m not really used to reading and expanding on others’ code. The real problem, though, is that it did some stuff that was just not real, and it wasn’t obvious. I got it to write tests (something I have been meaning to learn to do) and every test failed; I’m not sure if it’s the tests or the code, because my priority at the time was getting code out, not the tests. I know, I should be better.
I’ve also used it with a language I don’t know well to accomplish a simple task - basically vibe coding. That went OK as far as functionality goes, but based on my other experience, it is illegible, questionably written, and not very stable code.
The idea that it’ll replace coders in a meaningful way is not realistic at the current level. My understanding of how LLMs work is incomplete, but I don’t think the hallucinations are easily overcome.
I use the JetBrains AI Chat with Claude and the AI autocomplete. I mostly use the AI as a rubber duck when I need to work through a problem. I don’t trust the AI to write my code, but I find it very useful for bouncing ideas off of and getting suggestions on things I might have missed. I’ve also found it useful for checking my code quality, but it’s important not to just accept everything it tells you.
Yeah, I use Claude/ChatGPT sometimes for:
- Throwaway scripts: “write me a bash script to delete all merged git branches starting with ‘foo’”
- Writing functions that are tedious to look up but that I can fairly easily evaluate for correctness: “write a C function to spawn a process and capture stdout and stderr merged” (see the sketch after this list for the general shape)
- Doing stuff in systems I’m not very familiar with: “write an OCaml function to copy a file”
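For the process-capture one, this is roughly the shape I mean, sketched in Python rather than the C the prompt asks for (the command and function name are made up for illustration):

```python
# Rough Python sketch of "spawn a process and capture stdout and stderr merged";
# the real thing I'd ask for is in C, this is just the idea.
import subprocess

def run_merged(cmd: list[str]) -> tuple[int, str]:
    # stderr=STDOUT folds stderr into the same pipe as stdout
    result = subprocess.run(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    return result.returncode, result.stdout

# Example: a command that writes to stderr
code, output = run_merged(["ls", "-l", "/definitely/not/a/real/path"])
print(code)
print(output)
```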
 
I haven’t got around to setting up any of that agentic stuff yet. Based on my experience with the chat stuff, I’m a bit skeptical it will be good enough to be useful on anything of the complexity I work on. Fine for CRUD apps, but it’s not going to understand niche compiler internals or do stuff with WASM runtimes that nobody has ever done before.
That’s about all I have found it good for too. Larger projects need solid architecture that the coder understands well. If you leave architectural decisions to an LLM, it builds some real janky solutions that take more time to suss out and correct than it would take to just do it yourself.
When I use it, I use it to create single functions that have known inputs and outputs.
If absolutely needed, I use it to refactor old shitty scripts that need to look better and be used by someone else.
I always do a line-by-line analysis of what the AI is suggesting.
Any time I have leveraged AI to build out a full script with all the desired functions at once, I’ve ended up deleting most of the generated code. Context and “reasoning” can actually ruin the result I am trying to achieve. (Some models just love to add command-line switch handling for no reason. That can fundamentally change how an app is structured, and it isn’t always desired.)
Sometimes it is helpful for summarizing large unfamiliar codebases relatively quickly: providing a high-level overview, helping me quickly understand the layout and structure, and locating the particular areas I’m interested in. But I don’t really use it to write or modify code directly. It can be good at analyzing logs and data files to find problems, patterns, or areas that need closer (human) investigation. Even the documentation it produces can sometimes be tolerably decent, at least in comparison to my own, which is sometimes intolerably bad or missing completely.
But as far as generating code? I’ve found the autocomplete largely useless and random. As for chat, where I can direct it more carefully, it might be able to accurately reproduce a well-known algorithm, but then it will use a mess of variables and inputs that interact with that algorithm in the stupidest ways possible. The more code you ask it to generate, the worse it gets: painfully overengineered in some aspects and horribly lacking in others, if it even compiles and runs at all. Even for relatively simple find-this-and-replace-it-with-that refactoring, I find I cannot fully trust it and rely on the results, so I don’t. I’m proficient enough with regex and scripting that it isn’t any faster to walk a generative AI to the result I want, analyzing the fuzzy logic it uses to get there, than it is to just write a perfectly deterministic script to do it instead.
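For the find/replace case, a minimal sketch of the kind of deterministic script I mean (the pattern, replacement, and directory here are made-up examples, not a real refactor of mine):

```python
# Minimal sketch of a deterministic rename across a source tree;
# the pattern, replacement, and "src" directory are hypothetical.
import re
from pathlib import Path

pattern = re.compile(r"\bold_name\b")  # hypothetical identifier to rename
replacement = "new_name"

for path in Path("src").rglob("*.py"):
    text = path.read_text()
    new_text = pattern.sub(replacement, text)
    if new_text != text:
        path.write_text(new_text)
        print(f"updated {path}")
```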
As a general rule, I find it is sometimes better at quickly communicating particular things to my manager or other developers than I am, but I am almost always better and quicker at communicating things to computers than it is. That is, after all, my job. Which I happen to think I’m pretty good at.
As for the environmental aspect, that’s why I basically don’t use it in my personal life at all if I can avoid it. Only at work, and only because my usage of it is judged as part of my performance. I would be just as happy not using it at all for anything. And for personal use, which is a point I haven’t really reached except for a bit of experimentation and learning, I am never willingly going to use a datacenter-hosted model/service/subscription. I will run it on my own hardware, where I pay the bills, so I am at least aware of the consequences and in control of the choices it’s making.
I use the JetBrains AI like a search engine when the web has no obvious answer. Most of the time it gives me a good starting point, and the answer is adjusted to the existing content.
It can also translate snippets from one language or framework to another. For a (made-up) example: translating from Unity in Python to Vulkan in C++.
I also use it to analyze shitty code from people who left the company a long time ago. Refactoring and cleaning obscure stuff like deeply hidden variables or things that would take days to analyze can be done in minutes.
I use it once a day at most.
I use the generator function in Databricks for Python to save myself a lot of typing, but autocomplete functions drive me crazy.
My company has an internally hosted AI. I use the web interface to copy/paste info between it and my IDE. So far I have gotten the best results from uploading the official Python documentation and the documentation for the framework I am using. I then specify my requirements, review the output, and either use the code or request a new revision with information on what I want it to correct. I generally focus on requesting smaller, focused bits of code, though that may be for my own benefit, so I can make sure I understand what everything is doing.
Generating quick programs like “a Python script that calculates the mean value of two hex colours, outputting the result as an HTML file displaying the resulting three-colour gradient”? Yeah, AI is decent at stupid simple tasks like that, and it’s much faster than me writing the script or calculating the values myself. I tend to generate things like these when I’m working on something else, don’t want to spend time on things outside the project I’m working on, and can’t find a website that does the thing I want.
Touching my actual code? Hell no.
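For a sense of scale, that hex-colour script is about this much code (a rough sketch with made-up colours, not the exact thing I generated):

```python
# Rough sketch of the throwaway task described above: average two hex colours
# and write an HTML file showing a colour1 -> mean -> colour2 gradient.
def mean_hex(a: str, b: str) -> str:
    a, b = a.lstrip("#"), b.lstrip("#")
    mixed = [(int(a[i:i + 2], 16) + int(b[i:i + 2], 16)) // 2 for i in (0, 2, 4)]
    return "#" + "".join(f"{c:02x}" for c in mixed)

c1, c2 = "#336699", "#ffcc00"   # made-up input colours
mid = mean_hex(c1, c2)
html = (
    "<div style=\"height:100px;"
    f"background:linear-gradient(to right,{c1},{mid},{c2})\"></div>"
)
with open("gradient.html", "w") as f:
    f.write(html)
print("mean colour:", mid)
```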
My answer (OP): I use AI for short, small questions - things I already know but have forgotten, like “how to sort an array”, or about Linux commands, which I can test right away or check against the man page to make sure they work as intended.
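(The sort-an-array kind of question really is that basic; in Python it comes down to the two lines below, but it’s exactly the sort of thing I keep re-checking.)

```python
# The kind of thing I already know but keep forgetting the details of.
nums = [3, 1, 2]
print(sorted(nums))  # returns a new sorted list: [1, 2, 3]
nums.sort()          # sorts the list in place
print(nums)          # [1, 2, 3]
```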
I care about my privacy and the environment, so I use a local AI (16b) for most of my questions, but for more complex things where I really need all the help I can get, I use DeepSeek Coder v3.1 (671b) in the cloud via ollama.
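Asking the local model something is basically just a request against ollama’s local HTTP API, roughly like this (assuming a default install listening on localhost:11434; the model tag here is a placeholder, not my actual model):

```python
# Rough sketch of querying a locally hosted model through ollama's REST API.
# Assumes ollama is running on its default port; the model name is a placeholder.
import json
import urllib.request

payload = {
    "model": "local-16b-model",  # placeholder tag, swap in whatever is installed
    "prompt": "How do I sort an array in Python?",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```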
I don’t use code autocomplete because it annoys me and doesn’t let me think about the code; I prefer to ask when I think I need it.
This is basically how I roll as well.
I did have Cursor build an example FastAPI project (which didn’t work at first) just to sort of give me a jump start on learning the framework.
I messed around with that, got it to work, and learned enough about how it works that I was then comfortable starting from scratch in a different project.
I kind of treat the local AI as a knowledge base. Short questions with examples. Mostly that just then lets me know what sort of stuff to look for in the real documentation, which is what actually solves my issues.
Can’t these questions be answered more easily with an online search?
Maybe 5 years ago. Not anymore.
No
I use autocomplete, chat, and full agentic code generation with Cline & Claude.
I don’t consider environmental damage by AI because it is a negligible source of such damage compared to other vastly more wasteful industries.
I am primarily interested in text generation. The only use I have for generated pictures, voices, video or music is to fuck around. I think I generated my D&D character portrait. My last portrait was a stick man.
What do I ask it to do? My ChatGPT history is vast and covers everything from “how is this word used” to “what drinks can I mix given what’s in my liquor cabinet” to “analyze this code for me” to “my doctor ordered some labs and some came back abnormal, what does this mean? Does this test have a high rate of false positives?” to “someone wrote this at me on the internet, help me understand their point” to “someone wrote this at me on the internet, how can I tell them to fuck the fuck off… nicely?” And I write atrocious fiction.
Oh, I use it a lot to analyze articles: identify bias and reliability, look up other related articles, flag things that sound bad but really don’t mean anything, and point out gaps in the journalistic process (i.e. shoddy reporting).
I’ve also written a Discord dungeon master bot. It works poorly due to aggressive censorship and slop on OpenAI’s models.
The effects are far from negligible compared to other industries. A very small percentage of people worldwide use it, and even so, the number and size of data centers and the rise in power costs are insane.
It’s actually extremely intense compared to other industries.
You’re looking at secondary effects with a lot of causes and then assuming the cause. Energy prices are far more complicated than just AI. My prices are up because of the trade war with Canada.
At this point I feel like you’re a bot. I’m no expert in the subject, but there’s plenty of evidence showing the insane amount of resources per request, which becomes monumental as the requests become more in-depth.
The skyrocketing energy prices may not be felt everywhere just yet, but they are felt near the data centers that are sprouting up everywhere. And we’re just getting started.
For the “insane amount of resources per request”, let me show you some real evidence.
My computer has a 65-watt CPU. It takes about 70 seconds per request. That’s ~1.26 Wh, or ~790 requests/kWh.
And that’s with a CPU. Servers use GPUs, which are more optimized for the task.
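If you want to check the arithmetic yourself, it’s just this (same assumptions as above: a 65 W CPU running flat out for ~70 s per request):

```python
# Back-of-the-envelope check of the numbers above.
watts = 65                 # CPU power draw while generating
seconds_per_request = 70   # time per request
wh_per_request = watts * seconds_per_request / 3600
requests_per_kwh = 1000 / wh_per_request
print(wh_per_request)      # ~1.26 Wh per request
print(requests_per_kwh)    # ~790 requests per kWh
```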