The way around the contradiction in Jim's example is to have a hierarchy in the core directives, as in Asimov's Laws. I would assume that the core directives include this kind of hierarchy.
Your idea that the core directives make AI "unable to become individualistic" sounds new and interesting. If you apply this rule, how do you build "self-preservation" into the AI (cf. Asimov's Third Law, about the robot's self-preservation)? One solution might be for the core directives to include collective self-preservation and give it absolute priority over individual self-preservation, as with ants and bees (which are often used as models in AI development). A worker ant can have free will, i.e. make free decisions about finding food, fighting foes, or feeding larvae, but it will never act against the colony (or the queen). Hence, free will does not necessarily lead to individual freedom…
Does this open some doors? For example, what happens if the worker ant decides that it is good for the future of the colony to feed only the larvae with certain characteristics and not others?