Personally, I think it's more a difficulty in decisively determining where the split lies. How do we determine that something is distressed rather than simply acting distressed? As humans, we receive inputs from our senses that get converted into electrochemical impulses that trigger specific responses, and we evolved this system because it is biologically advantageous to avoid most of the situations those impulses are associated with. Something sharp presses against us, we feel pain as a negative sensation, and we attempt to remove ourselves from the situation.
At what point, specifically, does this stop being purely "Cause -- Effect" and turn into "a living experience"?
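To make the "pure cause -- effect" reading concrete, here's a minimal sketch. Everything in it is hypothetical (the function name, the threshold, the labels are all made up for illustration); the point is just that the whole chain reduces to a stateless branch:

```python
# A purely mechanical "cause -> effect" chain: stimulus in, reaction out.
# No inner state, no memory, no "experience" -- just a conditional.

def reflex(stimulus_intensity: float, pain_threshold: float = 0.5) -> str:
    """Map a stimulus directly to a reaction, with nothing in between."""
    if stimulus_intensity > pain_threshold:
        return "withdraw"   # the "negative sensation" reduced to a branch
    return "ignore"

print(reflex(0.9))  # -> withdraw
print(reflex(0.1))  # -> ignore
```

Nobody would call that distress. The question is what has to be added, and how much of it, before the word starts to apply.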
Generally, that point lies with the "sentience" of the target. A program or machine can't actually feel pain, because it lacks the parts of our system that have developed to feel pain.
But the question then is... Can you teach it? Can you build a machine suitably detailed and complex enough to properly model the full chain from sensation to reaction? And if so... At that point, what really is the difference?
This was something of a controversy back in the '90s when the first Creatures game came out: an incredibly complex (for the time) emulation of DNA, genetics, and learning behavior in simulated critters. Norns would learn about the world, experience things that elicited "pain", and try to steer clear of those things to avoid further negative reinforcement.
The system was complex enough that Norns would begin displaying (something resembling) neurotic, traumatized behavior if repeatedly subjected to negative stimuli. Potentially even to the point of stressing themselves to death.
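The loop those Norns were running can be caricatured in a few lines. To be clear, this is a hypothetical toy, not the actual Creatures biochemistry or genome model; `ToyNorn`, its thresholds, and its increments are all invented for illustration. It just shows how "learned avoidance" and "stressed to death" can both fall out of the same simple update rule:

```python
class ToyNorn:
    """A loose caricature of negative reinforcement (not the real
    Creatures engine): each painful encounter raises both a learned
    avoidance tendency and a stress level that can become lethal."""

    def __init__(self) -> None:
        self.avoidance = 0.0   # learned tendency to avoid the stimulus
        self.stress = 0.0      # accumulated distress
        self.alive = True

    def encounter_pain(self, inescapable: bool = False) -> str:
        if not self.alive:
            return "dead"
        if self.avoidance >= 1.0 and not inescapable:
            return "avoided"                        # learning pays off
        self.avoidance = min(1.0, self.avoidance + 0.25)  # reinforcement
        self.stress += 0.2                          # distress accumulates
        if self.stress >= 1.0:
            self.alive = False                      # "stressed to death"
            return "dead"
        return "hurt"

learner = ToyNorn()
results = [learner.encounter_pain() for _ in range(6)]
print(results)  # a few "hurt"s, then "avoided" -- it learned

trapped = ToyNorn()
fates = [trapped.encounter_pain(inescapable=True) for _ in range(6)]
print(fates)    # no escape possible: stress climbs until it is lethal
```

Run it and one critter learns its way out while the other dies of accumulated stress, and the whole thing is transparently just arithmetic. Which is exactly the rub.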
Even though they were quite advanced for their time, Norns and their genes and personalities could still be reduced easily enough back down to ones and zeroes... So, obviously, they were unfeeling at their core. Just playing the part, as it were. But, of course, the question remains: could a suitably complex machine perfectly model a living creature's systems of distress? And if so, how do we determine whether it's "real" or not?
If it can be modeled, then it's a question of degree of technology rather than a binary yes/no, which shifts the weight onto deciding where exactly on the spectrum of advancement that line gets drawn: somewhere between "this is very, very close to real, but not quite" and "this is just barely real enough".