(Apologies for the nonsense argument)
Part of the issue is that personhood is a social definition, as contrasted with the scientific definition of human (or animal). So a cardboard cut-out fails inasmuch as it cannot fulfil the social function of a person. And again with the 'familiar' chimps: although in some respects they can indeed pass sufficiently, in other respects they fail - say in the understanding presupposed by voting.

There have certainly been times when human beings have been regarded as not being persons. Take the European colonization of Australia: the indigenous inhabitants were regarded as no better than animals (and, by the by, it wasn't until the 1960s that they got the vote). There are other, more chilling examples. This is one thing theories of human rights attempt to address.

And flipping it around, governments are considered to be 'legal persons' inasmuch as they enter into contracts, sign treaties, and are held responsible beyond the tenure/existence of the natural persons who sign on their behalf. If they were not 'persons' then they would not be able to engage with the social world like this. As such, legal definitions do indeed capture something about persons which extends beyond any naive notion of solipsistic sovereignty.
A social definition for personhood implies that we decide what humanity is.
Yes, some of us have tried to define humanity more narrowly than humanity itself. But we saw that that was wrong - in many cultures or, I'd argue, intrinsically, in our hearts. We rejected that idea.
If something is, by some miracle of science, able to operate among us as well as the least of us... Must it not be granted rights? How well must it understand the Rains in Spain before it is one of us?
Firstly, I think part of your confusion arises from using the term 'humanity' ambiguously. Yes, sometimes it is used as a synonym for 'human beings', and at other times for a set of virtues that should be extended by (and to) members of the human species.
Person (personhood) as a concept is distinct from human being/humanity/Homo sapiens, even if it largely overlaps and gets used in common parlance as equivalent. It's not - we would never refer to the American Government as a human being even though it is indeed a person.
But no, we do not decide what human beings are. That is a species definition (Homo sapiens) which has a scientific rendering. Specifically, it is this that the Universal Declaration of Human Rights references, firstly in the preamble:
Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world,
and then later in the first article:
All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
Personhood, on the other hand, is a state that, although commonly attributed to human beings, is not something possessed inherently and uniquely by human beings. If you want to say that all human beings should be regarded as persons, I'm largely in agreement (there are troubling border cases like abortion, long-term vegetative comas, etc. which make me hesitate on immediate absolute agreement), but that is not the way it has been historically constituted. Similar in this regard is 'citizen' (oh, Glitterhoof). And there are 'persons' who are not individual human beings at all.
In fact it is the UDHR which does the heavy lifting of the 'We rejected that idea'*, and it does so with reference to human beings rather than persons. (Yes, 'person' is used in that document at times as a synonym for individual human being, as is 'people(s)' for collective groups of human beings, but look to the passages above, which precede any invocation of person - the person talk falls back upon the statements about human beings.)
Returning to the topic of AI rights, the import is that the substrate of human rights is just being human, which AIs never will be (they are - presumably - machines and not Homo sapiens). Really, the Turing test has so much to answer for in shaping this discourse into 'being able to pass as human'. If the requirement is 'able to operate among us as well as the least of us', then there are already a multitude of machines that do more than the (long-term, irrecoverable) vegetative coma patient. So that seems insufficient. And how would we apply parts of the UDHR to machines anyway? For example (Article 25):
Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.
In other words, applying the human-centred standards of 'human rights' to 'AI rights' just isn't going to work easily. And on the other hand, stating what 'operating as well as the least of us' consists of is non-trivial. Should an AI need to possess practical wisdom (phronesis) in addition to instrumental rationality? What about the ability to experience emotions? Mortality? Should they be able to love?
That said, I'm not against the notion of AI rights. But I do think that the particular rights given to AIs must be suited to their circumstances and abilities. And until we get to the point where machine self-awareness arises, we are not going to have a solid basis to say what the capabilities and limitations of such a consciousness are.
* More like - we are (slowly) rejecting that idea.