by Carmen Bostic St. Clair and John Grinder
Reviewed by Steve Andreas
This book states, “Our intention is to provoke a professional high quality public dialogue among the practitioners of NLP, as an integral part of these developments,” (p. 348) and it is in this spirit that I write. I hope this can begin a fruitful and ongoing exchange to which others in the field will contribute their thoughts and insights. While I found much of the book interesting and thought-provoking, raising many issues that are important to the field, I also found a lot of it difficult to understand, and often contradictory in both form and content. I want to begin by mentioning the sections that I found particularly useful.
The discussion using the example of modeling spelling (pp. 80-92) is a very clear and much needed presentation of an important aspect of modeling, and the scientific method in general–how to use counterexamples to a generalization to enrich and differentiate the generalization, rather than to falsify it.
Scope and Category
I also appreciated the distinction made in this book between:
1. Hierarchies of wholeness or inclusion (what I have been describing as change in scope), in which the change is one of increased and/or different information in sensory-based experience (what this book calls “First Access,” or FA), and
2. Hierarchies of logical levels which are created as a result of categories of experience–and categories of categories, etc. (which this book refers to as f2 verbal descriptions).
Bateson, Keeney, and many other illuminati of systems theory have completely missed this difference, which is quite significant and useful in tracking how a person responds in a given situation, and also as a guide to changing that response.
Changing scope is the basis for context reframing, which changes the kind and amount of “sensory” information in someone’s representation. Expanding scope is sometimes called “seeing the big picture.” Change of scope is the underlying basis for a number of the content reframing or “sleight of mouth” patterns (justification, consequences, etc.).
In contrast, a category is a group or set of experiences. When someone thinks of a category, they focus on the criteria used for creating the category, and tend to ignore most of the sensory-based information in the individual experiences in the group. This is the difference between FA (perception of a specific dog) and the f2 category “dog.” The category typically elicits a representation of an “average” or generic dog, rather than a specific dog. Categories provide the underlying basis for a different set of content reframing patterns, such as “redescription,” “model of the world,” “apply to self,” etc.
Charles Faulkner and I have been exploring the many ramifications of this important distinction between scope and category for the last couple of years. We continue to find uses for it, and have been teaching what we have learned so far in a seminar titled “Changing Levels of Meaning and Experience.”
Deletion, Distortion and Generalization
I appreciated the reexamination of this fundamental topic. (pp. 274-275) Deletion creates distortion, and this is the process of generalization. In the example of “dog” above, all the qualities of different individual dogs are deleted, leaving only the common characteristics of the generic dog that represents the category.
I came to similar conclusions some time ago. However, I would also add another word, amplification. While simple deletion results in distorting what is left, study of the neural nets active in perception clearly shows that all of them also amplify certain signals at the same time that they delete others.
In my new book, Transforming Your Self: Becoming who you want to be, I model this process of generalization in considerable detail. (The contents, introduction, two chapters and the Appendix are on this web site, SteveAndreas.com)
Robert Dilts’ Neurological Levels
Dilts’ “neurological levels” is another model that Charles and I have been reexamining. As the book points out, Dilts’ “levels” do not consistently utilize criteria for either scope or category inclusion, and sometimes the hierarchical relationship is reversed. For instance, Dilts places “identity” at a higher (larger) level than “beliefs.” I think that he probably placed identity higher because it recursively describes itself, and this makes it much more powerful in influencing behavior. Curiously, this recursion actually bridges between logical levels (violating Russell’s “Theory of Logical Types”–more on this later).
However, since identity is composed of the beliefs that we have about ourselves, it is a smaller subset of the much more general term “beliefs,” so it should be at a lower (smaller) level in a hierarchy than beliefs.
The book also critiques Michael Hall’s “Meta-States” work, in which levels of experience also figure prominently. Like most others in the field, Hall misses the difference between scope and category that has been hidden for years in the experiences described by the words “meta,” “chunk,” “frame,” and “outcome” in the earlier NLP model. Since Hall places great value on the use of large categories, his approach is essentially a conscious mind, top-down, large-chunk, approach, often omitting FA, the unconscious, and the ecology of the larger system.
History and Epistemology
I and others also found the historical anecdotes of the early development of the field interesting and enjoyable–information that only Grinder and Bandler and a few others can provide.
The book includes a fairly large section on epistemology–how we perceive and know what we know. Although in many ways this is a pretty standard review of psychophysics and perception, I have talked with several people who found it very useful.
The entire book is about modeling, and it repeatedly bemoans the lack of modeling in the field. “In particular we refer to the lack of modeling, the very activity that defines the core of this discipline NLP.” (p. vii) “The vast majority of the actual activity at present in what is loosely referred to as the field of NLP is application and training.” (p. 55) “It is regrettable that creating variations on such themes seem to be the principle focus of much activity in NLP as opposed to modeling of new patterns itself.” (p. 225) There is a great deal of discussion about the difference between NLP modeling, application, design, variations, and training (pp. 50-56), and in particular the difference between a new model and an application of an old model.
However, after reading the entire book very carefully several times, I’m still unclear what the distinctions are, and which patterns the book would place in each category.
The meta model is described as both the first model in NLP (pp. 142-163) and also as an application of the model already existing in transformational grammar. “The meta model can, for example, be usefully understood to be an application of the modeling of linguistic patterning inspired by Transformational Grammar” (p. 51).
The Milton Model is described as the third model in NLP (pp. 173-183) and elsewhere as the inverse of the meta model–in other words, the distinctions are (mostly) the same, only the uses are different, which seems to make it also an application of an existing model, rather than a new model. In short, no criteria are provided that would clearly distinguish between these different categories of modeling.
The book states that “The new code is an excellent example of pure design, a pure manipulation of these variables.” (p. 51)
OK, let’s take a look.
“New Code” NLP
The “new code” is a general model for change work, presented as an advance over the “classic code.”
“. . . in the new code, the so-called resource states are created directly through the participation of the client in an activity–often a game–that itself creates a high performance state but one, curiously enough, that has neither history nor content to it. It is simply a game but a game that activates neurological circuits that serve as the base for changes in the content selected previously by the client. The structure of the game itself is designed to ensure that certain characteristics that are typical of high performance states are present. But once again this occurs without any particular content and without reference to any historically experienced states.” (p. 233)
A “high performance state” is characterized by the Chain of Excellence: Respiration –> Physiology –> State –> Performance. (p. 233)
The Change Format for the New Code
“1. Select from 3rd position some context in which you experience some behavior you wish to change/influence.
2. Localize physically this hallucinated context and the image and sounds of yourself in that context performing the behavior you wish to change/influence and step into the position of the image of yourself (1st position) without attempting to change anything–self-calibrate. This is also the opportunity for the coach to calibrate your present state response to the context in question.
3. Play the game (1st position) or equivalently, enter into the content free, high performance state (e. g. The Alphabet Game, the NASA game…)
Spend 15 minutes playing to allow full activation of the circuits underlying the performance in the game.
4. At the end of the play (15 minutes or until the circuits are fully activated), the player (1st position) without hesitation and most importantly without attempting consciously to influence in any way his experience steps back (into 1st position) into the physical space where in step 2 occurred–that is, the physical space (on the floor) where he had located the hallucinated context in which he wanted to change something.” (p. 240)
Comparison of New Code and Old Code
In the classic format for integrating a resource state with a problem state or context, a specific resource is chosen by the client’s or the therapist’s conscious mind, and then accessed by a verbal request to recall a past state “Think of a time when. . .” or an “as if” state “What would it be like if. . .” followed by associating into this past or potential representation, and then integrating it with the problem state or context.
In contrast, the new code format accesses a nonverbal high performance state as a resource, to eliminate or minimize verbalization and irrelevant historical content. Then while in this high performance state, the client steps back into the context chosen for change, unconsciously integrating the high performance state into the chosen context. In a variation discussed elsewhere in the book, the client asks their unconscious mind to choose a context to change/improve, so that the entire process is unconscious, including the results of the integration. This prevents any possibility of content intervention or interference by the conscious mind of the client or coach.
In my understanding, all states have content, so there is really no such thing as a “content-free high performance state” (p. 239) any more than there is such a thing as “pure awareness.” Awareness is always awareness of something, so it always has some content, and so does a state. A “content-free high performance state” is simply a resource state in which the content is so different from the problem state as to seem irrelevant, as in the “alphabet game.” (pp. 242-245)
This process presupposes that a high performance state will be effective for any kind of problem, skill, response, behavior, or context, and I think that this “one-size-fits-all” assumption is patently false. A high performance state for watchmaking is quite different from one for football. The resource for resolving a phobia, dissociation, is exactly the opposite of the resource for resolving grief, association.
In most fields, development of methodology results in further differentiation of more specific methods for specific applications. At one point in the development of medicine, blood-letting was considered a cure for all sorts of ailments; now it is used only in very restricted cases in which someone has too many red blood cells.
In the modeling that Connirae and I have done, we have determined the characteristics of specific difficulties and/or skills, and then characterized the resource states that are appropriate for them (just as Bandler and Grinder did for spelling). Primarily these have been content-free submodality interventions, but some include specific content shifts. For instance, in grief people often recall the unpleasant ending of the relationship through death or other loss, rather than the valuable relationship that was lost. When this is the case, it is vitally important to ask the client to change the content of their representation; resolution of grief is impossible without this content shift.
Prior to NLP, most change work was focused on content. One of NLP’s earliest and greatest contributions was to refocus attention on process (while presupposing content). However, content interventions are also useful; what is important is to distinguish between content and process, and determine which is appropriate to change in a given situation.
On pp. 244 and 265 the book mentions spontaneous submodality shifts as evidence of the effectiveness of the new code format. However, there is absolutely no mention of the many very effective patterns involving direct and rapid change of submodalities that were developed by Richard Bandler, such as the Swish, the Compulsion Blowout, the Last Straw Threshold Pattern, the Decision Destroyer–not to mention the classic submodality phobia cure utilizing double dissociation.
There is also no mention whatsoever of the submodality patterns that Connirae and I developed for anger/forgiveness, shame, guilt, adjusting criteria, responding to criticism, internal/external reference, and aligning perceptual positions.
Since the book proposes a single change format for all change work as an improvement over other NLP methods, I would have appreciated at least some examples or discussion comparing the results of the new code approach with more specific patterns, with specific follow-up reports of the resulting changes. Is the new code format actually more effective with phobias than the classic V-K dissociation? Does it work better to teach someone how to spell? Is it really more effective for a compulsion, or grief, or shame than the specific submodality methods? Personally, I doubt it, but I’m quite willing to be shown.
Perceptual Positions (Triple Description)
Perceptual positions are first on the “partial list of new code patterning” (p. 239), essential ingredients of the new code.
I want to ask the reader to pause briefly to respond to a hypothetical proposal from someone that the five representational systems be relabeled 1, 2, 3, 4, 5 (instead of “visual,” “auditory,” etc.). I doubt that you would consider that an improvement, because numbers are a much more abstract and general verbal coding than words like “visual” that already have a simple meaning that is reasonably close to sensory-based experience.
The perceptual positions were probably numbered by following the terms “first person,” “second person,” “third person” familiar to linguists, but I think this is a carryover from the early modeling that ought to be revised to make learning easier. For over 15 years we have been using “self,” “observer” and “other.” This makes learning much easier, and also avoids other errors that are more likely with the 1st, 2nd, 3rd coding.
In order to take other position, you have to spatially leave self position and then “step into” (p. 251) the other person’s position. As you do this you have to pass through observer position as you make the transition between self and other.
Since one has to go from 1st position into 3rd position in order to get into 2nd position, the number sequence is misleading. The 1, 2, 3 coding imposes a sequence which is not only functionally inaccurate in terms of the process that one has to go through in order to change positions, it is also different from the sequence in which the positions are usually taught and learned.
Kinesthetics in Observer Position?
The book is very emphatic about this:
“It is important to make explicit that 3rd position is not a dissociated position in the sense that there are no kinesthetics involved in 3rd position. A well-formed 3rd position always involves strong kinesthetics.” (p. 255)
“We have been astonished to discover with alarming frequency an interpretation of 3rd position in which participants in training programs are being instructed that 3rd has no kinesthetics. Little wonder those participants find it difficult to operate effectively from their so-called 3rd position.” (p. 266)
In my view, “observer position” does just that, it observes–a dispassionate observer and nothing more. In Heinlein’s classic science fiction book, Stranger in a Strange Land, there is the concept of the “fair witness” who reports only what s/he observes, without conclusions or evaluations. A fair witness would describe a brown horse as “A horse which appears to be brown on this side.” In the crime novels and movies of the 30s and 40s the detective would often say, “Just give me the facts, ma’am, just the facts” (no interpretations). In a carefully aligned observer position, the person feels the perceptual kinesthetics of being in that position, but no evaluative emotional feelings about the events being witnessed, except perhaps a soft feeling of compassion for the people being observed if they are involved in a difficult interchange. One possible explanation for our difference of opinion may lie in the book’s description of other position.
Other (2nd) Position
The book describes 2nd position as:
“2. adopting the characteristics and perceptions of some identifiable group. As an example to give the reader a taste of this, imagine what a well-aged hunk of cheese represents from the point of view of:
a. a mouse
b. a cow
c. a starving student
d. a lactose intolerant patient
e. a marketing executive
f. a lawyer
g. an accountant
3. systematically shifting perceptual position from one to another of the three privileged perceptual positions specified by Triple Description. We would like to note here that number 2 above could be classified as a generalized 2nd.” (p. 248)
I completely agree that taking the role of an executive, a lawyer, or an accountant is a “generalized 2nd” (other position). However, that means that taking the role of a “consultant” or “director” illustrated in the Angela/Geraldo exchange (pp. 250-256) is also a “generalized 2nd,” and not 3rd position, as stated.
I understand the great usefulness of taking on an other position (generalized or not), particularly if that person has great skill or expertise. However, any such other position will introduce its content biases, presuppositions, and emphases. These may be very useful, just as content reframing or other content interventions can be. However, it inevitably introduces content, as can be clearly seen in the Angela/Geraldo example, in which the consultant does much more than simply observe–offering future possibilities, scornful humor, intention, and asking specific and very directive questions. Every description and example of “3rd position” in this book is actually 2nd, because they all specify a person, role, or position other than that of a dispassionate observer, so of course they will have evaluative feelings.
This is totally inconsistent with the otherwise clear distinction made in this book between process and content, and the book’s emphasis on interventions that are based on process rather than content.
Clean Perceptual Positions
The book mentions the importance of a “very clean” 3rd position (pp. 234, 255), and “clearly,” “clean,” and “cleanly” are used repeatedly (pp. 250, 253, 256, 257), but nowhere does it define operationally how to achieve a clean position.
Many years ago Connirae modeled a systematic way to teach clean access to all three positions with her “Aligning Perceptual Positions” process (published in Anchor Point in February 1991). This process uses only the (content-free) submodality of location, and it makes a huge difference in how useful, informative, and resourceful all three positions are, yet the book makes no mention whatsoever of this pattern, which provides an operational definition of clean positions.
The book’s lack of an operational definition for “clean” positions might be excused, but for the fact that it criticizes Eric Robbie for using terms that he does not define:
“c. Robbie introduces and uses terminology without definition thereby removing all possibility of a serious attempt to appreciate whatever insights he is attempting to express–such minimal operational definitions are a prerequisite for opening a professional and interesting dialogue publicly within the field of NLP.” (p. 106)
Besides “clean” positions, this book uses many other terms that are not defined, including “stalking,” “shunts,” “characterological adjectives,” “automatic movement to privileged states” (p. 239), “NASA,” “trampoline” (p. 263), and “mental spaces” (p. 296). (“Characterological adjectives” and “mental spaces” may have accepted definitions in linguistics, but most of the book’s readers will not be linguists.)
The Presuppositions of NLP
This book takes an extreme and extraordinary position with regard to the NLP presuppositions:
“. . .we find the so-called presuppositions of NLP are, at best, a pedagogical device to assist people new to the adventure called NLP in making the required transitions in their thinking to the new forms of perception and thought implicit in the technology. Unfortunately presuppositions, like beliefs, are ultimately filters that reduce the ongoing experiences of their possessors. We personally do not find any value in the enumeration of such rationalizations (the so-called presuppositions of NLP).” (p. 202)
Yet the book also states:
“Many students of NLP, especially in their initial enthusiasm for the effective use of the patterning, seize upon an epistemologically peculiar (and impossible) goal. The task they set about to accomplish is to free themselves from all perceptual filters, often stating that thereby they will appreciate the world without distortion. Such a naive project is surely incoherent.” (p. 247)
Like it or not, we all do have presuppositions. Knowing what they are, and that they are arbitrary choices, allows us to choose to change them contextually, in order to create multiple perspectives and understandings. Attempting to dismiss them would not eliminate them, but only blind us to the perceptual and behavioral biases that result from them!
The book also says:
“Allow us to offer an extended example of one of these differentiators: specifically, the fourth;
4. neither the agent of change nor the client is required to believe any set of assumptions to utilize NLP patterning effectively.
In particular, for example, there is no need to subscribe to the so-called presuppositions of NLP in order to benefit from an effective application of the pattern to some problem or challenge. Normally, these presuppositions include statements such as,
Having choice is better than not having choice.
All the resources necessary to make the change the client desires are already available within the client at the unconscious level.”(pp. 201-202)
Yet elsewhere the book states:
“Further, these patterns [The “chain of excellence”] had in common a deep trust that unconscious processes when properly organized and constrained would produce deep, long term ecological changes in spite of, for example, a client’s declared conscious beliefs that such changes were impossible. . . . the ability of the unconscious to assess the longer-term consequences and then, based on this assessment, to make such selections (desired state, resource or replacement behavior) greatly exceeds that of the conscious mind.”(p. 236)
This statement (and the new code format itself) certainly appears to assume that “All the resources necessary to make the change the client desires are already available within the client at the unconscious level.” There are many other places where the book presupposes that “having choice is better than not having choice.” (pp. 231, 247-248)
Moreover, the presupposition of unconscious positive intent, which is included in every list of the NLP presuppositions I have ever seen, is a fundamental basis for six step reframing, which the book describes as the bridge to the “new code.”
“. . . the Six Step Reframing format that we are proposing creates the bridge from the classic code to the new code. In the new code, we find that: . . .
3. There are precise constraints placed upon the selection of new behavior(s); more specifically, the new behavior(s) must satisfy the original positive intention(s) of the behavior(s) to be changed;” (p. 236)
So while the “presuppositions of NLP” are dismissed summarily, several of them are used as essential parts of the “new code!” These contradictions cast a long shadow of doubt over the rest of the book–does the right brain know what the left brain is doing?
Presentation of Patterning
On pp. 53 and 351 the book (redundantly) offers specific and useful suggestions for presenting new models to the field of NLP, including 1. description of the pattern, 2. consequences of using the pattern, 3. selection criteria for the use of the pattern, and making a video available. (p. 352) However, in the description of new code patterning and format (pp. 239-240) many terms are not defined (as mentioned earlier), the consequences are not specified, no criteria for selection are offered, and no video is made available! This is only one of many, many discrepancies in this book between what is talked and what is walked.
The suggestion of having a library of videotaped examples of patterns available to the field for study is a particularly good one, to provide sensory-based examples of the actual use of patterns. People’s descriptions of their work are often very different from what they actually do (e.g. Virginia Satir). This is one of the reasons that Connirae and I have been producing videotapes for the last 21 years–so that people could see and hear samples of exactly what we were doing and writing about. Years ago we produced some videotapes of Grinder’s training, but the last Grinder video that I am aware of was produced about 18 years ago.
Russell’s “Theory of Logical Types”
This book is very clear that logical levels are created by inclusion of logical categories within larger, more encompassing categories. However, I found the discussion of logical types confusing. It defines two sets as being of the same logical type if there is isomorphic mapping between them. (pp. 295-301) But since no use whatsoever is given for the term “logical type,” I have no idea how this proposed distinction is of any use.
Bertrand Russell used the term “logical types” interchangeably with logical levels, in declaring that a class (at one logical level) cannot also be a member of itself (at a lower level).
G. Spencer Brown in the preface to Laws of Form (1974) has shown that Russell’s theory of logical types is not only unnecessary, but if accepted, would deprive us of the branch of mathematics dealing with imaginary numbers, which is very useful in electronics and in calculations involving sine waves and other trigonometric functions! The theory of logical types would make impossible the many useful self-referential (and sometimes paradoxical) messages which people do, in fact, make and respond to. It would also outlaw important and interesting phenomena such as the self-concept, which describes itself recursively, including itself in its description.
The theory of logical types (and any conclusions derived from it by Bateson, Dilts, Hall and others) was declared “brain-dead” by Bertrand Russell himself in 1967, as reported by G. Spencer Brown (also in the preface to Laws of Form): “The theory was, he said, the most arbitrary thing he and Whitehead had ever had to do, not really a theory but a stopgap, and he was glad to have lived long enough to see the matter resolved.” Please let us hear no more about the “Theory of Logical Types.”
Language and Organization of the Book
Whispering would be a great challenge to an editor in language, punctuation, and organization of topics. For example, the fifteen pages of notes (pp. 105-119) for Chapter 3, Part I, could all have been integrated into the chapter instead of being dumped at the end. The chapter is not quite 3 times as long as the notes! Many topics are broken up and scattered throughout the book, and as already noted, often contradictory statements are made about the same topic.
The language used in the book is usually very academic and overly complex, for instance, the following sample. (I challenge the reader to read it once, and then summarize its meaning!):
“1. the meta model is a set of epistemological operations designed to verbally challenge (e.g. through specification) the mapping (f2) between FA and our mental maps as well as the internal logic of the language system itself (e.g. cause-effect relations) as it forms a base for the generation of linguistically mediated mental maps that guide behavior. A systematic application of this set of verbal patterns leads precisely and efficiently to the identification of the FA events (the reference experiences) that are the source of the representations to be changed to achieve the client’s goals.” (p. 198)
I would probably translate this paragraph into ordinary English something like the following:
“People respond to events based on their internal pictures, sounds and feelings. They also collect these experiences into groups or categories that are labeled with words. The meta-model is a method for helping someone go from the information-poor word maps back to the specific sensory-based experiences they are based on. It is here in the information-rich specific experiences that useful changes can be made that will result in changes in behavior.”
Since epistemology is how we know and test knowledge, it seems to me to be an error to think of the meta-model as “a set of epistemological operations” as mentioned above. The meta model simply translates from f2 into FA (just as one can translate a sentence from French into English) without making any epistemological statement about the truth of either one. The meta-model can be (and often is) taught as a set of practical techniques, without even a methodology to guide its use, much less an epistemology.
The book has very harsh words for teachers who are incongruent:
“1. There were a number of extremely well-trained practitioners of NLP who were themselves clearly capable of miracles (relative to the capabilities of other systems of change work) with clients; however it apparently had never occurred to them (or perhaps they simply had chosen not) to apply the patterning to themselves–that is, self-application of the patterning. Thus, my perception was that many of them were incongruent in significant contexts in their lives–there were portions of their personal and professional lives that showed absolutely no presence of the choices they busily assisted others in creating in their lives. I was not happy with this situation.” (p. 231)
And later we find: “CAVEAT: Messengers incongruent with the message they purport to bear are not listened to, nor should they be!” (p. 366) Given the incongruencies that appear in this book, this statement becomes self-referential (violating the theory of logical types), paradoxically telling us not to listen to what it says!
I agree that self-application of the methods we teach is vitally important for all of us, and one result of this is congruence, a worthy goal. However over the last 67 years I have learned a great deal from people who were incongruent, and a degree of incongruence may be inevitable for us mere mortals. We are all still learning, and all of us have a long way to go.
Learning a new skill or understanding always results in incongruence, at least initially. So if someone were always congruent, that would mean that they could never learn anything new! I do agree that incongruence is an important signal that indicates an opportunity for further learning, change, development and discovery, and this book provides more such opportunities than most of us have time to pursue.
Format of the book
As a publisher, I notice that the book is priced about 2 1/2 times higher than other comparable books. At that price it should at least have a color cover and an index! The small discounts offered make it extremely unlikely that a bookstore will stock it, since a bookstore needs at least a 40% discount to break even (They may “special order” it as a courtesy to a customer). The paragraphing and indentation could be greatly improved, and the bold-faced type used throughout the book is the equivalent of shouting (not whispering).
The foregoing is only a small sample of the topics in this book, but it is already more than this article has space for. The topics raised are important, and deserving of further discussion and clarification. I wish that they were presented in a form that is more user-friendly, so that more people would be willing to read them, and therefore be able to think about them carefully and respond to them.
Given the many kinds of difficulties I had in reading and understanding this book, I find it difficult to imagine that the authors took other position with the reader. I think that few people will buy Whispering. Of those who do, few will manage to read it. And of those who read it, very few will be able to understand it.
I hope that this review can be a contribution to the kind of friendly and respectful professional dialogue that the book calls for, and that can strengthen and further develop the field. There is some nourishing meat here, but also a lot of gristle that needs to be cooked and chewed very thoroughly before swallowing, and quite a lot of much less nourishing skin, feathers, teeth, claws, and bone that is best set aside.
* “Whispering in the Wind“, Carmen Bostic St. Clair and John Grinder book review, Anchor Point, Vol. 17, No. 3 p. 3 March 2003