In my experience it refers to overengineering a problem.
As an example, a coworker of mine built a system meant to implement a flexible set of security policies for a web API, and it was almost impossible to debug. The design let you define a chain of actions by string identifiers in a config file; the actions would do things like pull values out of a web request, compare them to other values, and then either make a go/no-go decision or restrict which fields an update was allowed to touch.
It might look something like this:
const policy = [
['forRole:All Internal', 'allow'],
['forRole:Agent', 'editFields:name,phone_number,email_address', 'allow'],
['forRole:External', 'getId:id', 'ifIsManagerForRecord:id:id', 'allow'],
['deny']
];
Some of the policy chains were 15+ lines long, and identifiers like getId and editFields mapped to functions that got called against the web request. They mutated the request in place, which made them dangerous to chain: steps would stomp on each other's state. And every time we needed a new corner case, like allowing conditional edits on fields based on a secondary role, we had to bolt on more functionality that modified the request state in the same unsafe way.
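To make that concrete, here's a minimal sketch of how an interpreter like that tends to work. None of this is the real system; the action names, the shared ctx object, and the dispatch shape are all invented for illustration. The core hazard is that every handler reads and writes the same mutable context, so one step can silently clobber what another step set up.

const actions = {
  // Guard: mark the chain as failed if the user lacks the role.
  forRole: (ctx, role) => { ctx.pass = ctx.user.roles.includes(role); },
  // Pull a request param into a shared slot, overwriting whatever was there.
  getId: (ctx, field) => { ctx.value = ctx.request.params[field]; },
  // Check ownership against whatever getId stashed earlier (hypothetical lookup).
  ifIsManagerForRecord: (ctx) => { ctx.pass = ctx.user.managedRecordIds.includes(ctx.value); },
  // Restrict an update by deleting disallowed fields from the request body.
  editFields: (ctx, fields) => {
    const allowed = fields.split(',');
    for (const key of Object.keys(ctx.request.body)) {
      if (!allowed.includes(key)) delete ctx.request.body[key]; // destructive
    }
  },
  allow: (ctx) => { ctx.decision = 'allow'; },
  deny: (ctx) => { ctx.decision = 'deny'; }
};

function evaluate(policy, ctx) {
  for (const chain of policy) {
    ctx.pass = true; // reset per chain, but ctx.value and the mutated request leak across chains
    for (const step of chain) {
      const [name, ...rest] = step.split(':');
      actions[name](ctx, rest.join(':')); // dispatch by string identifier
      if (!ctx.pass) break;               // guard failed, try the next chain
      if (ctx.decision) return ctx.decision;
    }
  }
  return 'deny'; // nothing matched
}

Note how editFields deletes keys from ctx.request.body in place: every later step, and the request handler itself, sees the filtered body. That's exactly the stomping problem.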
It was a disaster, and when we rebuilt that system I said, "You know, we basically built a programming language for the security policies. Let's just use JavaScript in the first place."
Now it's trivial to set a breakpoint in the code and know precisely where you are in the policy and what order things will happen in.
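For contrast, a rough sketch of what the rewrite can look like as plain JavaScript. The names here are invented too (isManagerForRecord is a hypothetical stand-in for whatever ownership lookup the real code did): the rules become ordinary if statements with no shared mutable state.

// Hypothetical helper: whatever lookup decides record ownership.
const isManagerForRecord = (user, id) => user.managedRecordIds.includes(id);

function checkPolicy(user, request) {
  // Internal users can do anything.
  if (user.roles.includes('All Internal')) return { allow: true };

  // Agents may edit only a fixed set of fields.
  if (user.roles.includes('Agent')) {
    return { allow: true, editableFields: ['name', 'phone_number', 'email_address'] };
  }

  // External users may edit records they manage.
  if (user.roles.includes('External')) {
    return { allow: isManagerForRecord(user, request.params.id) };
  }

  return { allow: false };
}

Returning the list of editable fields instead of mutating the request keeps the check side-effect free, and a breakpoint on any line tells you exactly which rule you're in.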