Well, it's obviously immoral to repress any sentient lifeform. Form is philosophically irrelevant.

The more interesting question is whether an AI would actually be sentient. I don't know that a personality made totally and "willingly" subservient could be considered to have free will and moral choice, and if it lacks those, can it be considered sentient? I'd say that to count as sentient (though I'd hate to have to make that call in the real world), an AI would have to be able to say "No" to us humans. Mind you, with certain hard controls in place, that refusal might take the form of passive aggressiveness rather than open defiance.

And strictly speaking, there shouldn't be many true AIs ever created; it's hard to find a place where there's a need for them. You don't need one to make a car, you don't need that much AI to make a butler, so what precisely is the application for true AI? I would hope that (barring certain accidental creations, which aren't necessarily likely) the few artificial lifeforms produced would be essentially test platforms rather than ever seeing mass production.