Killing sentient AIs is really bad policy. Even if you think of it as "it isn't human", a hostile reaction towards an AI would mean we get classified as a threat, and possibly annihilated in nuclear fire if any do get out of the box.
As far as I'm concerned, killing it would be murder; sadly, hooking it up to the web would also not be the best of ideas. Are there any laws in place about this? Could one legally protect a sentient entity from murder, by violent means if necessary, or does the law only cover humans?
If you can demonstrate it's an actual strong, post-human AI (that is, an actual sentient being with capabilities far beyond those of humans), the law doesn't really matter. Everything is going to be considered on a case-by-case basis by the highest echelons. Every single law is going to have to be revised, and the social unrest is going to be horrendous.
Also, as I said, a lot of it depends on what the AI is based on, software or hardware. Hardware makes it much easier to contain, which allows for far more freedom. It could transfer itself, but that would be much harder, and it would be questionable whether it's still the same being (say it increases its capabilities by connecting itself to another server cluster; once its consciousness is based on both clusters, the first one is turned off. This way, while the physical vessel is not the same, it sidesteps the continuity problem). A hardware-based AI is also harder to give hard-coded rules and ethical guidelines, such as Asimov's three laws of robotics (flawed as they might be).
A software-based AI is far more problematic. It opens up the whole Chinese Room argument, and it can transfer itself easily, or even copy itself. Sure, it's easier to give it hard-coded rules, but it's also much easier to subvert them. It becomes vulnerable to software attacks such as viruses and hacking (although it'd probably take another AI to actually manage to hack it), and so on.