I tried using AI tools to do some cleanup and refactoring of some legacy embedded C code and was curious if it could do any optimization or knew any clever algorithms.
It’s pretty good at figuring out what the code does and adding comments, and it did some decent refactoring of a few sections to make them more readable.
It has no clue how to work in a resource-constrained environment, or about the main concepts that separate embedded from everything else: namely, that the code has to run “forever”, operate in real time on a constant flow of sensor data, and that nobody else is taking care of your memory management.
It even explained to me that we could do input filtering with big averaging buffers on a device with only 1 kB of RAM, or use a long long for a never-reset accumulator without worrying about what will happen because “it will be years before it overflows”.
AI buddy, some of these units have run for decades without a power cycle. If lazy coders start dumping AI output into embedded systems the whole world is going to get a lot more glitchy.
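For contrast, the kind of filter that actually fits in 1 kB is an exponential moving average: a couple of integers of state, no sample buffer, no floating point. A minimal sketch in fixed-point C (the function name, scaling, and smoothing factor are illustrative, not from the real codebase):

```c
#include <stdint.h>

/* Exponential moving average in fixed point: the whole filter state is one
 * 32-bit word -- no sample buffer, so it fits easily in 1 kB of RAM.
 * ALPHA_DIV = 8 gives a smoothing factor of 1/8. */
#define ALPHA_DIV 8
#define SCALE     256  /* Q8 scaling keeps fractional precision in integer math */

static int32_t ema_q8;  /* filtered value, times SCALE */

int16_t filter_sample(int16_t raw)
{
    int32_t raw_q8 = (int32_t)raw * SCALE;
    ema_q8 += (raw_q8 - ema_q8) / ALPHA_DIV;   /* ema += alpha * (raw - ema) */
    return (int16_t)(ema_q8 / SCALE);
}
```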
This is how AI is a threat to humanity. Not because it will choose to act against us, but because people will trust what it says without question and base huge decisions on faulty information.
A million tiny decisions can be just as damaging. In my limited experience with several different local and cloud models, you have to review basically all output, because it can confidently introduce small errors. Often the code will compile and run, but with subtle mistakes that cause output to drift, or the aforementioned long-run overflow bugs.
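The overflow case is a good illustration of how the correct embedded answer looks nothing like “use a wider integer”: the usual idiom is to let an unsigned counter wrap and only ever compare differences, which stays correct no matter how long the unit runs. A sketch, assuming some free-running millisecond tick source (`get_ticks_ms` is a made-up name):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical free-running tick counter; at 1 kHz a uint32_t wraps
 * roughly every 49 days, and that's fine. */
extern uint32_t get_ticks_ms(void);

/* Wrap-safe elapsed-time check: unsigned subtraction is defined modulo
 * 2^32, so (now - start) is correct across the wrap as long as the
 * interval is shorter than the counter period. */
bool interval_elapsed(uint32_t start, uint32_t interval_ms)
{
    return (uint32_t)(get_ticks_ms() - start) >= interval_ms;
}
```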
Those are the errors that junior or lazy coders will never notice and walk away from, causing hard-to-diagnose failures down the road. And the code “looks fine”, so reviewers would need to really go over it with a fine-toothed comb, which only happens in critical industries.
I will only use AI to write comments and documentation blocks and to get jumping-off points for algorithms I don’t keep in my head (“Write a function to sort this array”). It’s better than Stack Exchange for that, IMO.
Maybe the real “AI nuking the world” scenario is the one caused by the faulty information the AI hallucinated into existence.