Chapter 3
How We Eat and Its Consequences
In this chapter, we will
examine the history of the current food supply using the new food groups
defined in Chapter 2. We will indicate in general terms the consequences
of accepting these foods into the diet. There are some surprises: many
foods that we think of as being traditional and acceptable are in fact
recent and sometimes harmful. Many foods, although newcomers to the
human diet, are perfectly acceptable and in conformity with the Savanna
Model. To improve our health, we have to confront some incorrect yet
ingrained ideas about how we should be feeding ourselves.
GRAINS GROUP: BREAD, RICE, AND PASTA
Wherever we look, we find that farming was initially based on the
cultivation of grains of some sort. The reason was simple: it was
possible to grow, harvest, and store grains. Grains were the first major
new food to enter the food supply since the origins of the human
species. None of the world’s major civilizations could have gotten
started without them. It is not surprising, therefore, that we think of
grains as a normal, even essential part of our food supply. We are
taught by our parents and teachers at an early age that eating grains
helps build our bodies. This accepted belief has led most government
authorities to give farmers incentives to grow this crop and to
recommend grains as the staple (principal component) of their
population’s nutrition. But such advice is mistaken, even for
unrefined grains.
Nature has equipped many creatures to eat grains. For example, the
chicken has a hard, ridged palate to husk the seed and a powerful,
muscular gizzard to grind the grain into flour. It even swallows gravel
to help the grinding process. However, nature did not so equip humans.
Let us look at the processing required to turn grains into something
that will feed us. The hard, outer husk of the grain is inedible and
difficult to remove just by chewing, so the first farmers had to think
up new mechanical techniques to achieve what nature alone could not
provide. First, they had to split the edible part of the grain
(“wheat”) from the inedible husk (“chaff”) by a process known as
threshing. They did this with a flail (two long rods joined by a leather
thong) and beat the
wheat until the grains were
separated from the chaff. It took a man one day to thresh the amount of
wheat that grows on about 100 square yards. Second, the wheat is
“winnowed” (separated from the chaff) by tossing the mixture of
wheat and chaff into the air; the wind then blows away the lighter
chaff. Even then, the food processing is not finished: humans do not
have teeth designed to chew the grain, so the farmers had to find
mechanical ways to break down the seeds into something the body can
handle. The solution is grindstones: with a lot of physical effort, they
could mill the grain into a coarse or fine powder called flour. Finally,
nature did not equip the human body to digest flour in its raw state.
Real grain eaters, like chickens, have special enzymes for the digestion
of raw flour. Their pancreas, the chief organ for secreting
starch-digesting enzymes, has several ducts, (1) while the human
pancreas has only one.
The only way the human
digestive system can handle flour is by cooking it first. Those first
farmers had to take the flour, make it into patties, and roast them in
the embers of a fire. In this way, humans were already moving from a
natural diet to one based on a rudimentary technology. Rudimentary, yet
quite impractical for the average hunter-gatherer. In making these
changes, those first farmers were smart enough to grow foods that tasted
good and provided a level of nourishment. However, although these new
foods filled their stomachs, they were not necessarily helpful to their
general health.
Those early farmers were eating flour cooked without yeast—in
other words, unleavened bread. It took another 5,000 years before
someone in the Egyptian civilization discovered the use of yeast to
“raise” bread and give it a more agreeable texture. Modern breads
still owe their basic recipe to an inventive Egyptian baker who lived
around 4500 B.C.
The Problems with Eating Grains
Grains, as a class of food, were never part of our ancestral diet.
We are speaking of all types of grains—wheat, rye, rice, barley, oats,
quinoa, and so on—and all forms of these grains, including bread,
pastry, breakfast cereals, pasta, pizza, oatmeal, and cookies.
Consumption of all these grains is linked to a range of conditions such as heart disease, high cholesterol, cancers, osteoporosis, obesity, depressed immune system, premature aging, and diabetes. There is a common thread to some of these conditions: they are, in part, provoked by abnormal surges in blood sugar. These surges in turn disrupt hormones that control other processes, such as bone building, immune function, cell renewal, and cholesterol control.
Hormones
Hormones are potent chemical messengers. Thousands of them are in continual movement, whizzing around the body, instructing organs to do something or other. Tiny amounts of hormone have powerful effects: for example, they turn caterpillars into butterflies. In humans, they regulate every function of the body, including the immune system, sexual functions, pregnancy, digestion, blood-clotting, fat control, kidney function, bone building, growth, blood pressure, and even mood and behavior.
Grain consumption leads to micronutrient deficiency. Even whole
grains have poor concentrations of the multitude of vital substances
that are essential to human health: vitamins, minerals, carotenes,
flavonoids, and many more. Grains are basically bulk fillers
that displace more nutritious foods from the diet. The situation is even
worse with refined grains, because with mechanization, the millers strip
out the most nutritious part of the grain. Now we know why governments
try to compensate for this shortfall by insisting on the
“fortification” of breakfast cereals and many other grain products.
Of course, these efforts are only a crude and inadequate substitute for
the real thing—the marvelous cocktail of thousands of compounds
working together as a team, which are provided by plants conforming to
the Savanna Model.
From anthropological evidence, we know that the earliest farmers
suffered a sharply reduced quality of life: reduction of stature, (2)
increase in infant deaths, (3) reduction of life span, (4) increase in
infectious diseases, (5) increase in anemia, (6) diseased bones, (7) and
tooth decay. (8) Today, we can also link grain consumption to many other
conditions that cannot be preserved in the archaeological record,
including brain disorders, such as autism, (9) schizophrenia, (10) and
epilepsy, (11) and immune system disorders, such as multiple sclerosis,
(12) rheumatoid arthritis, (13) eczema, (14) and allergies. There is
even a common occupational ailment in the baking industry, “baker’s
asthma,” a debilitating allergic reaction to cereal flours. We are
only recently beginning to discover a host of microscopic substances,
known as antinutrients, that are common in grains and are secretly
gnawing at the foundations of our health in many unsuspected ways.
Antinutrients
Antinutrients are undesirable substances in food that work against
the good nutrients and often disrupt the inner workings of the body.
They are usually secreted by plants to kill predators such as germs,
fungi, and insects. In other words, antinutrients are often naturally
occurring germicides, fungicides, and insecticides.
Grains are also linked to colon disorders, including irritable
bowel, colitis, colon cancer, and celiac disease. Full-blown celiac
disease has symptoms of diarrhea, depression, vitamin deficiency,
mineral deficiency, epilepsy, stunted
growth, and osteoporosis. These
conditions had been observed for centuries. It is astounding to think
that it was only in the 1960s that a substance in grains known as gluten
was found to be the cause.
Gluten is more properly called “the gluten complex,” because it
is not a single compound but a cocktail of many similar proteins. The
human system is particularly irritated by the cocktail found in wheat,
followed by rye, barley, and oats. However, in Asia, sensitivity to the
gluten cocktail found in rice is also known. Indeed, all grains contain
gluten in some form or another and all of them cause trouble in the
human system.
VEGETABLES AND SALADS
“Plant food” or vegetation has been the major component
of the human food supply since our origins. Some creatures, like our
cousin the gorilla, are designed to eat tough vegetation like twigs,
bark, stringy leaves, and fibrous stalks. However, humans are not able
to digest these plant parts. Moreover, our ancestors did not cook their
plant food either, so they focused on the young and succulent plant
parts. When we think of vegetables, we do not think of them as a
botanist does, as distinct parts of a plant with different functions.
However, each part has its own nutritional profile and a role to play in
our diet. Even today, we eat from a wide variety of plant parts,
sometimes raw in salads and sometimes cooked.
Above ground, the edible part can be the stem, bud, leafstalk,
leaf, bean pod, or the immature flower. In addition, there are some
fruits, such as the avocado and tomato, which are included in the
vegetable category. Indeed, most people think of them and use them as
vegetables, so they are surprised to hear that, botanically, avocado and
tomato are fruits. A large percentage of our ancestors’ food supply
came from vegetation that was levered out of the ground with a digging
stick. Today, we still eat many foods that grow underground—roots,
tubers, bulbs, and corms (solid bulbs).
Most of the vegetables we use today have been known since ancient
times. Merchants, traders, and empire builders spread them around the
Old World. The Romans in particular moved plants around their
territories wherever they would flourish. Later, the Spanish,
Portuguese, Dutch, and British spread the vegetables that they found
among the Inca, Aztec, and Maya to the rest of the world.
During all this time, gardeners were hybridizing and
“improving” the species, so that it is often uncertain just what the
original, wild species was like. The U.S. Department of Agriculture
(USDA) does not subdivide its Vegetable Group: it classifies french
fries and ketchup as vegetables just like lettuce and broccoli. As this
example shows, it does indeed make a difference just what kind of
vegetable we are eating—not all “vegetables” conform to the type
of plant food to which we are naturally adapted. It is also true that
our Pleistocene ancestors in East Africa would not be familiar with a
single vegetable
species in our present food supply. For reasons that will become
clear later, we have divided vegetables into two new groups: “starchy”
vegetables and “non-starchy” vegetables.
Starchy Vegetables
Certain plants have evolved the
ability to store food during times of plenty to see them through times
of hardship. Some of them store the food in the form of starch. In most
cases, the roots are pressed into service as storage organs. Examples
are Old World vegetables such as beets from southern Europe, parsnips
from temperate Europe, and carrots from Afghanistan. An aboveground
example is the chestnut. This might come as a surprise, for the chestnut
is usually lumped in with all the other tree-nuts. However, tree-nuts
typically are rich in oil (around 50%), rich in protein (up to 25%), and
low in starch. The chestnut is very starchy and very low in protein and
oils (both around 1.5%). Its nutrient profile is like that of other starchy
vegetables, and we therefore class it as such.
However, it is a tuber from the New World that has relegated all
Old World starchy root vegetables to minor players—the potato. The
Spanish conquistadors first brought it back to Europe from Incan Peru in
the 16th century. A relative of the tomato plant, it was a small,
wrinkled tuber, rather like a walnut.
For a long time, Europeans did not know what to do with it; some
farmers grew it to fatten their pigs. Then, in the 1800s, the British
blockaded France during the Napoleonic Wars. With their regular
foods in short supply, the French developed ways to incorporate potatoes
into their daily diet.
Potatoes are not even edible in their raw state, as the human
digestive system can only cope with them if they are cooked—they
require processing. So, it is only in the last 200 years that the potato
entered the diet. But its success was immediate, widespread, and rapid.
It has relegated every other root vegetable to the sidelines. However,
this has not been a beneficial development.
We all love the potato: it is the most commonly consumed vegetable,
served up in dozens of tasty and imaginative ways. Unfortunately for us,
its consumption is linked to readily observed conditions, such as
obesity, diabetes, high cholesterol, heart disease, and cancers, because
of abnormal surges in blood sugar. There are potential difficulties as
well with some of the other “starchy” root vegetables, such as the
aforementioned parsnip, beets, and carrots.
We think of the potato as a safe food to eat—even if it might be
fattening—but very few people are aware that the potato is also
mildly toxic. Potato consumption is directly linked to allergies, bowel
disorders, confusion, and depression. Every year, dozens of people are
hospitalized with potato poisoning, and many more cases go undiagnosed.
These problems are directly linked to antinutrients in the potato that
our bodies can’t cope with. We will deal with the science behind these
startling assertions in Chapter 4.
Non-Starchy Vegetables
Not all underground vegetables
are starchy. For example, turnip and radish, which both originated in
Asia, are non-starchy, as are bulbs such as onion and garlic from Asia
and the leek from the Middle East. Corms such as Chinese water chestnut
are also non-starchy. Unlike the starchy roots, they mostly get their
bulk from another compound called “inulin.” We will reveal the
significance of this in Chapter 4 when we look at the science behind our
food supply.
The vegetables from above ground cover a huge range of plant parts:
stems, such as asparagus from the Mediterranean and kohlrabi from
Europe; buds, such as Brussels sprouts from Belgium; leafstalks, such as
celery from the Mediterranean and rhubarb from Asia; leaves, such as
Europe’s cabbage, lettuce, and spinach; immature flowers, such as
cauliflower from Europe, broccoli from Turkey, and artichoke from the
western Mediterranean; immature fruits, such as eggplant from southern
Asia and cucumber from northern India; mature “vegetable- fruits,”
such as tomato from Peru, avocado from Central America, and bell pepper
from the Andes; edible bean pods, such as runner beans from tropical
America; and edible fungi (mushrooms) from just about everywhere. Of
course, today, these plants are grown all over the world, wherever
farmers can produce them economically.
The tomato is an unusual case. First known to the Incas, it was
brought back to Spain from Peru by the conquistadors 500 years ago.
The tomato comes from the same family as deadly nightshade,
so for a long time, Europeans, warned off by the bright red color,
thought the tomato was drop-dead poisonous. Finally, some brave souls
tried it and survived the experience without any ill-effects. About 200
years ago, the tomato made it into the food supply. Like the potato, it
has now eclipsed all other Old World vegetables and conquered cuisines
around the world. It is not without its drawbacks: it does indeed
contain low levels of plant poisons (15) and some people react to them,
with arthritic symptoms, for example. (16)
It is hard to believe, but
true, that the tomato was unknown to Italian cuisine just 200 years ago.
The chili pepper, which gives Asian cooking and curries their fiery
properties, was unknown before the Spanish introduced it (from Mexico)
to India and Malaya 400 years ago.
We have seen just how many new non-starchy vegetable foods have been
introduced into the human diet all around the world relatively recently.
Remarkably, with the exception of the chili pepper, they are all beneficial
entries to the diet—none of them seems to have a major adverse effect
on human health. The chili pepper, however, irritates the lining of
every part of the digestive tract: it causes the colon to become more
porous, allowing germs, fungi, and food particles to enter the
bloodstream. This can lead to a whole range of conditions from allergies
to migraines to a depressed immune system.
FRUIT GROUP
Imagine that you are one of
our ancient ancestors rummaging for food on the African savanna 60,000
years ago. You see a familiar ripe fruit and pounce on it—you know it
is going to taste good! Fruit and humans have evolved together over eons
to help each other. The fruit wants its seeds dispersed, while humans
want nutritional gratification. The fruit immediately rewards you with
its gratifying, jazzy, sweetish taste, which is known as the “sugar
reward.” Moreover, since fruit was a rare commodity on the African
savanna, our brains are programmed to continue eating that sweetish
thing until the supply runs out.
Our early ancestors of the African savannas would not recognize the
fruits available in our modern supermarkets. First, our fruit selections
are vastly different: apples, cherries, and plums originated in the
Middle East, pears in Europe, grapes in the Caucasus, strawberries in
America, oranges in China, and bananas in Malaya. Second, gardeners,
through selective planting techniques, have heavily modified these
different species from their original state since the farming
revolution. One has to admire the persistence and foresight of those
early New Stone Age farmers. They took the sour-sweet, woody crab apple
of the region and patiently bred it over many generations so that it
became a tasty apple. They did the same with many other fruits that are
familiar to us today, such as the plum, pear, and cherry. However, in
the last century, the process has accelerated: agro-industrialists have
selectively bred modern fruits to have an attractive appearance, long
shelf life, few seeds, less fiber, and a powerfully sweet taste.
Ancient farmers developed most of these fruits in temperate regions.
More recently, with the immense growth in global shipping during the age
of European exploration, many tropical fruits became popular. The most
common is the banana, originally from the jungles of Malaya, along with
the pineapple from the Caribbean, the mango from India, and the papaya
from Central America.
The watermelon is from tropical Africa and it is just about the only
plant food that our Pleistocene ancestors would have recognized. The one
we eat today is a sweet-tasting descendant of the bitter-juiced tsama
melon, still used by the San as a water source. As recently as the 1970s,
enterprising New Zealanders provided the most recent addition to
mass-market fruits: the kiwi fruit. They bred it from the Chinese
gooseberry, whose origins lie in subtropical parts of China.
So, today’s common fruits are, in many respects, not like the fruits
in our Savanna Model. There are potential snags related to the massive
increase in sweetness from various kinds of natural sugars, some of
which are relatively harmless while others may pose problems. Fruits rich
in the wrong sugars can aggravate pre-existing ailments such as
diabetes, allergies, high cholesterol, and cancers. There is a massive
rise in indigestion in the U.S. and one major reason is eating fruits at
the wrong point in a meal: our bodies were not built to handle the
mixing up of unfamiliar foods. Different fruits have different
proportions of
each kind of sugar. Later in
the book, we will discuss what fruits to choose and how much and when to
eat them.
PROTEIN-RICH FOODS OF ANIMAL ORIGIN
At the U.S. Department of
Agriculture (USDA), meat is the term applied to the flesh of
domesticated mammals, such as cattle, pigs, and sheep. More
conventionally, this is known as “red meat,” which is the
designation used here. Similarly, “game” refers to the flesh of any
wild land animal, such as wild boar or pheasant. “White meat” refers
to flesh taken from domesticated birds, such as chickens, and
“seafood” refers to fish and shellfish. We will look at both wild
and domesticated sources of animal products. The USDA does not include
certain classes of animal foods that were common in our ancestors’
diet—the “exotic” categories of reptiles, worms, insects, and
gastropods (snails and slugs). This is fair enough as these foods are
not commonly eaten in developed countries, although there are many
societies around the world that still make use of them.
Red Meat and Game Mammals
We saw with the San how
mammals such as springhare (a kind of rodent), porcupine, and warthog
were part of our ancestral diet. Less commonly, there would be big game
such as antelope and, occasionally, giraffe and even leopard. We now
look in detail at sources of meat in our food supply, starting first
with farmed meat and then wild meat.
Within about 1,000 years of
learning to farm plants, the first cultivators turned their attention to
farming animals. They were fortunate that, still in the same location of
the Fertile Crescent, there were several species of animal that were capable
of
being tamed and raised in captivity (a process known as
“domestication”).
This is an important point: as biologist and historian Jared Diamond
shows, the absence of farmable plants and suitable animals in their
locality held back many other societies around the world in the
development of farming.
These early farmers, about 8000 B.C., found three creatures that lent themselves to
taming and breeding in captivity: the “mouflon,” the “pasang,”
and the wild boar. In 6000 B.C., this same ingenious people domesticated the massive aurochs,
an ox-like creature that stood six feet high at the shoulder. All four
species of animal had body compositions very similar to the wild game
eaten by our ancestors of the Savanna Model. So far, so good.
Ever inventive, these New Stone Age farmers bred these animals to
improve their value and usefulness. However, in doing so over the past
10,000 years they, and all farmers since, changed the breed. The mouflon
has been transformed into the sheep, the wild boar’s descendant is the
pig, the aurochs became the smaller cow, and the pasang became today’s
goat. As we shall see, with the exception of the goat, the changes were
not beneficial.
In discussing meat, we tend
to think of the muscle flesh—beef steaks, lamb chops, and pork
spareribs. However, our ancestors would eat just about every part of the
animal, from the brains, heart, and liver to the guts and the trotters.
A few regional cuisines still make use of these so-called variety meats
or offal. However, most of us get to eat them in another form. Ever
since antiquity, these animal parts have been processed into sausages, pâtés,
hamburgers, luncheon meats, and meat pies. The manufacturers of these
products mostly have free license to mix-and-match all the animal parts
as they see fit, add fat to “extend” them, and bulk them up with
low-cost ingredients. In no way can these products be compared favorably
to the offal eaten by our ancient ancestors: they are from the wrong
kind of creature and they are adulterated in many unknown ways. Worse,
unlike our ancestral diet, we eat these processed meats in vast
quantities on a daily basis rather than when there is the occasional
kill. In addition, many meats, both generic and manufactured, are
preserved by drying, salting, or smoking, such as bacon, salami, and
bologna.
These processes certainly avoid sudden death from some nasty disease
contracted from decaying meat. However, the very things done to
preserve the meat bring problems of their own. For example, some meats
(like bacon and salami) are soaked in salt. That keeps harmful bacteria under control, but the
salt is detrimental to the human body. Most are fatty (which is not good
in itself) and the fats and oils have to be converted into more stable
varieties that do not go rancid—saturated fats. These are heart-harmful
and disrupt many other workings of the body. The amount of wild
meat that the average person in the developed world consumes in a year
is close to zero. However, both in North America and in parts of Europe,
the hunting of wild animals is still possible on a controlled,
recreational basis. In this way, the meat of bear, moose, caribou, deer,
wild boar, elk, and similar creatures enters the diets of some
hunters’ families and the diners at specialist restaurants. This meat
corresponds quite closely to the hunted big game of the Savanna Model.
The same applies to small game such as the squirrel, hare, and rabbit.
We are beginning to see the introduction of some “managed” wild
animals on the market, such as venison (from deer), kangaroo, antelope,
and bison (Plains buffalo). These creatures are not strictly speaking
domesticated—they breed according to their own inclinations and are
allowed to roam relatively freely on a range that closely resembles
their natural habitat. Their numbers are culled in a sustainable way and
their meat is introduced into the food chain. The American researcher
Loren Cordain considers that the meat from these animals is similar to
the Savanna Model, with the proviso that they browse the naturally
occurring vegetation and are not given commercial feed. (17)
White Meat and Game Birds (Fowl)
We saw how the San would
catch various wild birds in traps and snares and
even hunt the ostrich. Our
lakeshore-inhabiting ancestors would have caught waterfowl too. Not
surprisingly, fowl (by definition any wild bird) are relatively hard to
catch and so they did not form a huge part of our ancestral diet. On the
other hand, the USDA applies the term poultry to birds that are farmed.
Chicken, Turkey, Duck, Goose (Farmed)
It took quite a while before
any farming community discovered how to tame and raise birds in
captivity. The first was the chicken, which was domesticated from the
red jungle fowl by the civilization in India around 4,000 years ago.
Since then, chickens have become a familiar sight, ranging freely in
farmyards all over the Old World.
Chicken. After World War I, intense efforts were made
to industrialize the process of raising chickens. It was found that the
chicken could survive being cooped up in batteries of tiny cages under
controlled conditions of nutrition, light, heat, and humidity. Britain
developed the first “battery farms” in the 1920s. In the United
States, mass production of chicken meat took off after World War II.
American consumption quadrupled from 14 pounds (boneless) per person
annually in 1946 to 59 pounds annually in 2004. (18)
Today, the vast majority of chicken eaten in the developed world is from
intensively reared, caged birds; only a tiny proportion comes from a
“free range” farmyard lifestyle.
Turkey. Turkeys are native to large parts of North
America. The Aztec of Mexico and the Zuni Indians of the American
Southwest were the first to domesticate them. In 1519, the Spanish
brought the Mexican species back to Europe. In 1621, the Pilgrims were
able to put hunted wild turkey on the Thanksgiving table in New England.
It was not until after World War II that turkeys were raised for meat on
a wide scale. They, like chickens, are raised intensively in large
covered sheds where they are crammed in so closely that they hardly have
room to fall over. Their meat is now almost as cheap as chicken and
American turkey consumption has quadrupled too, going from 3.5 pounds
(boneless) per person annually in 1946 to 14 pounds annually in 2004. (19)
Duck and Goose. Duck
and goose consumption is minimal compared to chicken and turkey.
Domestic ducks are descended from two separate domestications: the
Muscovy duck, domesticated by the Incas in Peru, and the mallard,
domesticated by the Chinese some 2,000 years ago. Duck raising is practiced on a limited
scale in most countries, usually as a small-farm enterprise, although
large flocks of ducks are bred in some areas of England, the Netherlands,
and the United States. Geese are described as domesticated in the
Egyptian and biblical writings of 3,000 years ago, but modern breeds are
descended from the greylag, a wild goose of northern Eurasia. Geese have
not attracted the attention of intensive farmers on the same scale as
chickens and turkeys. Goose raising is a minor farm enterprise in
practically all countries, but in central Europe and parts of France
there is important commercial
goose production. Notably in France, these birds are raised specially to
make the fatty delicacy “pâté de foie gras,” made from the
diseased livers of force-fed geese.
Game Fowl (Wild)
The early civilizations
carried on the old traditions of hunting, trapping, and snaring fowl.
The ancient Egyptians caught and ate ostrich, bustard, crane, dove,
pigeon, duck, quail, partridge, pheasant, and goose. Birds associated
with the gods were taboo, notably the falcon, the ibis (a heron-like
wading bird), and the vulture.
The Greeks and Romans did not eat much fowl, although at feasts peacock,
thrushes, and ring-dove might be served. However, we must remember that
the food of the ordinary citizen was extremely frugal; banquets and
feasts were for the few, the wealthy gentry.
Managed Game Birds. Wild bird flesh corresponds
closely to the Savanna Model. In addition, there is a large production
of “managed” game to provide sport for shooting parties. These are
predominantly pheasant, grouse, pigeon, partridge, and quail. (The
partridge is related to the francolin hunted by the San.) However, often
the managing techniques involve intensive feeding and the production of
slow-flying birds. Their meat might well be closer to battery-chicken
quality than to that of their wild counterparts.
Ostrich and Emu. We are beginning to see some
ranching of large flightless birds, notably ostrich and emu. The ostrich
is the same species as the ostrich of our African homeland that was hunted by
the San; it can stand up to 8 feet high. The emu, from the savannas of
Australia, is a slightly smaller bird, but still stands up to 6 feet
high; it has flesh similar to the ostrich. Provided the farming of these
creatures does not intensify (as it has for the chicken), their meat
is in conformity with the Savanna Model.
Eggs
Eggs formed a regular part of our ancestors’ diet whenever they could
find them. Of course, they were not restricted in the species of
bird—anything from guinea fowl eggs to ostrich eggs would do just fine.
In the tropics, the seasons did not vary much throughout the year, so
there was usually the egg of some bird or another available to the San
most of the time.
Farmed. The first farmers had to go looking for wild eggs.
The Fertile Crescent is outside the tropics (it is about the same
latitude as Washington, D.C.) and eggs mostly came along only in spring.
It was not until chickens were domesticated that eggs were “farmed”:
wherever the chicken arrived, the hen’s egg arrived too. In due
course, as duck, goose, and turkey were domesticated, these creatures
were bred for their eggs as well. Today, with the enormous advantage of
price and the massive volume of battery-hen production, it is the
hen’s egg that totally dominates the food supply. Does this matter?
Are there
significant differences
between battery-farmed hen’s eggs and wild eggs from a variety of
birds? We will see later that there are differences, but not necessarily
the ones we think.
Wild. The gathering of wild eggs today is greatly restricted by
government regulation in most developed countries. However, the eggs of
many species are available in small quantities as a by-product of the
management of game birds.
In this way, eggs from quail, pigeons, gulls, lapwings, plovers,
pheasants, and ostriches are available to culinary enthusiasts. We must
also mention eggs from reptiles: eggs from crocodiles and turtles would
have been quite common in the diet of our African Pleistocene ancestors.
Turtles lay eggs in prodigious numbers on sandy shorelines, and
collecting and commercializing them has become a major industry in
Malaysia. Wild eggs in general form a tiny part of consumption in the
developed world and, with the possible exception of quail eggs, most
people have never even seen one.
Seafood (Fish and Shellfish)
Our ancient ancestors
certainly consumed fish and shellfish on a modest scale—up to 12% of
calories according to Michael Crawford, professor of nutrition at London
Metropolitan University. (20) As we saw in Chapter 1, fish were speared and trapped as the
occasion presented itself. Pleistocene man (or, more likely, woman) easily
collected shellfish along the shoreline of African lakes and rivers.
Farmed. Early civilizations took a long time to learn to
farm fish. Carp originated in China and have been raised in ponds and
rice paddies there for 3,000 years. From about 500 B.C.,
the ancient Egyptians raised fish in specially built ponds. The main
species was the Nile tilapia, which is still commonly
available today. Carp cultivation has spread all over the world, notably
central Europe, but it was always on the scale of the village pond or
its equivalent.
It was not until the 1960s that fish farming or “aquaculture” came
of age. Since then, salmon, trout, catfish, and tilapia have been farmed
on an industrial scale. They have almost completely displaced their wild
counterparts from our tables. Less commonly farmed are carp, mullet,
redfish, and sea bass. Efforts are already under way to farm tuna, cod,
sea bream, and turbot in vast enclosed offshore pens.
The farming of shellfish, mainly mussels, oysters, shrimps, and prawns,
has been carried out on a minor scale for centuries in Europe and Japan.
Again, since the 1970s, rapid advances in technology have allowed the
farm production of shrimp and prawns to explode. They have elbowed out
the wild variety. The farming of clams, crayfish, oysters, and mussels
is also growing fast.
The fish and shellfish consumed in our ancestral diet were entirely of
freshwater varieties. On the other hand, modern fish farming is
concentrated mostly on seafood. It appears that this is not an important
distinction—if there is a
problem with aquaculture, it
is with the way the creatures are often fed and the pollutants that get
into their bodies.
Wild. Up until the 1970s, virtually the only fish on our plates were
ones caught in the wild. Now, we have seen the huge volume of fish,
notably salmon and trout, that are produced by fish farms. Even so, most
other species that we find in our supermarkets (fresh, frozen, or
canned) are still wild. Cod, halibut, tuna, sardine, plaice, mackerel,
pollock, herring, and many others, for the time being at least, are all
caught in the wild. We can say that many of them conform to the Savanna
Model while the others, if not conforming, are certainly not harmful.
Exotic Animal Foods
Reptile foods, including
crocodile, alligator, and turtle, although uncommon in the Western diet,
are still readily available to the enthusiast. In addition, many
societies make use of snakes, such as python and boa constrictor, and
the French have made a delicacy of frog’s legs. All of these foods, as
they are currently available, readily fit the Savanna Model.
There are many gatherer societies around the world, such as the Yanomamo
Indians of the Amazon and the Cahuilla Indians of California, that eat
(or used to eat) worms of all kinds. Curiously, there is little evidence
that the San ate worms, and we can only surmise whether they were a common
component of the Pleistocene diet. It is likely that they were—worms
are easy to unearth at certain times of the year by wetting the ground
and drumming to bring them to the surface.
Italian biologist Dr. Maurizio Paoletti, from Padua University, has made
a study of “mini-livestock” eaten by forager tribes today and finds
that earthworms are an excellent food source, (21) which
we can confirm conforms to the Savanna Model.
Hunter-gatherers around the world still eat insects of all kinds and
anything is fair game. They collect the immature and adult forms of
grasshoppers and crickets; the caterpillars of silk moths; and the
larvae and pupae of beetles, bees, ants, flies and hornets. Dr. Paoletti
has found that the larvae of palm weevils, as raised by certain
Amazonian tribes, have an excellent nutritional profile and no
drawbacks. (22) The Australian Aborigines prize the witchetty grub, a kind of
large caterpillar up to 3 inches long and 1/2 inch
in diameter. It is relatively fatty (19%) and, when toasted in the
embers of a fire, tastes a bit like roasted sweet corn.
Many primitive societies eat snails and their shell-less cousins, the
slugs. The idea seems grotesque to some minds, yet they are a valuable,
easily collected source of food. In fact, snails have been commonly
raised and eaten in the Middle East and Europe for thousands of years.
The French, of course, have made a national dish out of snails:
“escargots” cooked in garlic and butter are even considered a
delicacy. Snail and slug flesh conforms to the Savanna Model, although
the French recipe is not ideal nutritionally.
The Consequences of Eating Animal Foods
We have seen how the New
Stone Age farmers “improved” the breed of the pig, cow, and sheep.
Quite inadvertently, these improvements changed the nutritional
qualities. The flesh became much fatter, increasing from just 4% fat to
25% fat. Also, the type of fat changed from certain kinds of
polyunsaturated fat to various types of saturated fat. We now associate
the consumption of beef, pork, and lamb with cancers, heart disease,
high cholesterol, and cardiovascular diseases.
In the next chapter we will examine this link. The goat, which has
remained popular with many simpler farming cultures, has not been
subjected to the same processes of intensive breeding and has largely
escaped this unhealthy transformation. Its meat is low in fat (just 2%),
half of which is harmless monounsaturated fat. Most meats of wild origin
have a similar fatty acid composition, in conformity with the Savanna
Model.
Similarly, wildfowl and wild fish are just fine. Poultry, particularly
chicken and turkey, tend to be fattier and contain more of the unhealthy
fats. The breast (white meat) of the bird is best when the skin and fat
are removed, and free-range chickens tend to be leaner and
healthier. Duck and goose are also fatty birds, but their fats are
semi-liquid at room temperature, indicating a low saturated fat content.
Eggs have more “good” fats if they come from chickens that have
ranged freely and eaten a diet natural to their species. Fish have more
“good” oils if they are wild or have at least been fed correctly on
the fish farms.
PROTEIN-RICH FOODS OF PLANT ORIGIN
Protein-rich
plant foods fall into two broad classes, nuts and legumes. Their protein
content is comparable to that of lean beef steak—20% to 25% and
sometimes more. In contrast, an egg is only around 13% protein. Nuts are
often called “tree-nuts” to distinguish them from the peanut, which
grows underground and is a legume.
Nuts
In Chapter 1, we saw how the
mongongo nut was a great standby for the San. There were many other nuts
too, including those of the baobab tree, the ochna, and the soapberry
tree. However, the nuts that we know today have come from all over the
world. Almonds, walnuts, pistachios, and chestnuts are all native to the
Fertile Crescent and were domesticated early during the farming
revolution.
The Brazil nut and the cashew nut are native to South America, the pecan
to North America, and the macadamia to Queensland in Australia, and all
of these nuts have become familiar to us in the West. They are often
processed in various ways, notably by roasting and salting, which
improve shelf life and taste but are not a nutritional improvement.
The Coconut
The coconut is native to Malaya, but the first European to see one
was the Venetian adventurer, Marco Polo, in his travels to China in
the 13th century.
Conventionally, the U.S. Department of Agriculture classifies the
coconut as a tree-nut. However, the nutritional profile of coconut meat
is nothing like that of other nuts: its predominant constituent is in fact
water, around 45%; the rest is oil (35%) and a high percentage of
dietary fiber (9%). There is some sugar (5%) and very little protein
(3%). The oil content is the determining nutritional characteristic of
coconut meat and for this reason we group coconuts with fats and oils.
Legumes
We saw too that the San
consumed foods called “beans,” notably the tsin bean. These are
podded seeds that belong to the pea family—that is, legumes.
However, the class of legumes known as “dry beans” first entered the
food supply of humans only 11,000 years ago with the Farming Revolution.
Lentils and chickpeas are indigenous to the Kurdistan area and their
cultivation spread rapidly to other civilizations in Egypt, India, and
China. Those peoples then developed local varieties—for example, the
soybean in China, the fava (or broad) bean in Egypt, and the mung bean in
India. Across the Pacific, the new civilizations in Central and South
America were developing the native kidney bean, pinto bean, haricot
bean, and lima bean. These New World beans come from the genus (a
grouping of species) Phaseolus and, together with the fava bean and mung
bean, form the class of legumes that we think of as “beans.”
Unlike the case with grains, consumers in the developed world have not
taken up the use of beans (Phaseolus) with enthusiasm: in the U.S., consumption is
around 7 pounds per person annually; in Europe, it is 5 pounds annually.
We will see that this is not a bad thing.
Soy comes from a different genus of legumes called Glycine.
Even
though soy originated in China, consumption there was minimal. According
to K. C. Chang, editor of Food in Chinese Culture, the total soy protein intake
in 1930s China was no more than 5 grams per person weekly. In Japan,
consumption has increased slowly since those days, but even now soy
protein intake is still only a modest 8 grams per day, according to
Chisato Nagata, a researcher at Gifu University School of Medicine, in
Japan.(23)
In
America, soy was unknown until about 80 years ago, when it was
introduced to feed cows. Then, in a promotional campaign reminiscent of
Kellogg’s breakfast cereal marketing wonder (see Chapter 2), Americans
have been taught to eat soy just since 1970. Consumption has been
doubling every 12 years. The publicity touted soy as a meat substitute
with supposed health benefits and vegetarians and vegans have
enthusiastically adopted soy in all its forms—tofu, soy burgers, soy
yogurt, soy milk, soy cheeses, and so on. Their consumption can reach a
massive 70 grams per person daily. Even the average consumer is
unwittingly consuming soy, as soy flour is added to all kinds of
processed foods.
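A side note on that growth rate, which is our own arithmetic rather than a figure from the text: a quantity that doubles every 12 years grows at roughly 6% per year, since

\[ (1+r)^{12} = 2 \quad\Longrightarrow\quad r = 2^{1/12} - 1 \approx 0.059 \approx 6\%\ \text{per year}. \]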
When we buy a pack of dried beans or lentils, the label warns that the
contents must be thoroughly boiled. This tells us that, in their raw
natural state, legumes are poisonous. Our savanna ancestors could not
even boil water, let alone cook legumes, so humans never developed
resistance to the poisons in them. However, even after boiling, legumes
still contain harmful substances, slow-acting poisons that disrupt the
harmonious working of the body. Depending on the variety, beans and
lentils can provoke immune depression, malignant tumors, red blood cell
disruption, pancreatic problems, intestinal disease, and allergies. Soy
contains at least 15 allergens, of which three are considered
“major” by researcher Hideaki Tsuji of Okayama Prefectural
University, in Japan. (24) Soy is also strongly linked to cancers, (25) senile dementia, (26) thyroid disorders, (27) pancreatic problems, (28) and disrupted hormone function.
MILK GROUP
The San hunter would track an antelope for several days to get close
enough to shoot it with poisoned arrows. We can be certain that neither
the San, nor our Pleistocene ancestors, ever got close enough to a
mother antelope to suckle its teats. Such a feat only became possible
after the farming revolution with the domestication of farm animals.
Even so, not many societies made much use of this unusual idea.
It took the special circumstances encountered by the nomads of the
Russian Steppes to change that. They were early Europeans who lived in
the treeless plains of what is now the eastern Ukraine. By 4000 B.C.,
these people had learned to keep herds of horses, cattle, sheep, and
goats. However, under the sparse conditions of the steppe, a migratory
way of life became necessary. The animals consumed the grass faster than
it could grow, so the herders had to keep their animals moving in search
of new pastures and, as a consequence, abandon planting. This was the
first time that human beings learned to live largely from their animals.
In practice, this meant consuming the only renewable resource: milk,
cheese, and other dairy products. To do that, they had to tame mother
animals that had just given birth to a calf to allow milking by human
hand. By about 2000 B.C.,
the herders had mastered their techniques and, constantly in search of
new pastures, these nomads infiltrated much of northwest Europe,
carrying the practice of dairy farming with them.
In this way, Slavs, Germans, Scandinavians, and Anglo-Saxons became
dairy farmers too, focusing on the cow. Some parts of southern Europe
adopted, in a minor way, sheep’s milk and goat’s milk. Roquefort
cheese is made from sheep’s milk in southern France, and the Greeks
use goat’s milk to make feta cheese. To the east, the Mongols took up
the practice of dairying with the yak (a kind of massive ox).
Other nomadic tribes stumbled
upon the use of milk too. About the time the Ukrainians were carrying
dairy farming to Europe (4,000 years ago), another herder, Abraham, was
setting out from present-day Iraq for his “land of milk and honey”
in Palestine. However, neither the Israelites nor for that matter the
Egyptians, Greeks, or Romans made an industry out of dairying.
Just 500 years ago, Mongol invaders (the descendants of Genghis Khan)
brought dairying to the fringes of their empire in northern India and
Persia. A little later, the English, Germans, and Scandinavians brought
dairy farming to North America, Australia, and New Zealand.
Nevertheless, it comes as a surprise to us in the West to discover that,
as dairy consumers, we are in a small minority. A large majority of the
world’s population (some 5 billion out of 6 billion people) had no
idea about dairy until the last 50 years. These non-milk drinkers lived
in vast swathes of territory, from Africa to southern India, from China
to Japan, and from Latin America to Polynesia. The regular consumption
of dairy foods, even today, only applies to a minority of people on the
planet—those mostly living in the industrialized West.
Interestingly, when in recent years Western dairymen entered these
untapped markets, they hit upon an unexpected difficulty. The new,
potential consumers thought that dairy consumption was a strange
practice and found that it often disagreed with them. We now understand
that dairy products can be a problem.
For example, the San are uniformly intolerant of the lactose in milk and
this applies in some degree to everyone on the planet. Lactose
intolerance gives rise to allergies, headaches, bloating, colon
diseases, and many other disorders.
The unhealthy properties of milk fat are now mostly accepted. We are
told that fat-free milk is good for us and it is even better to stay
away from cream, butter, and ice cream. For many years now, the
connection between these foods and high cholesterol, heart disease,
strokes, and hardening of the arteries has been well known. Scientific
findings show that dairy consumption from any source (cow, goat, sheep)
and in any form (including skimmed milk, cheese, and yogurt) is
associated with a number of serious, slow-acting diseases, including
osteoporosis, high cholesterol, cancers, allergies, heart disease, and
obesity. The notion that dairy products cause osteoporosis
is so contrary to conventional nutritional dogma that it needs solid
justification. In Chapter 4, we will look at the scientific background
to these assertions.
It has been noted that the Germanic peoples, the ones who adopted dairy
farming early, seem to tolerate milk quite well in their early years. We
find, however, that childhood tolerance to milk wears off. Germanic
senior citizens are just as vulnerable to milk intolerance as everybody
else. This is one of the few instances that we know of where a human
tribe has evolved an adaptation to a new food. We now suspect that early
dairy herders must have suffered a very high percentage of weanlings
dying from a bad reaction to milk. The ones that survived had a genetic
makeup that allowed them to live through the
experience and pass their
genes on to their descendants. Even so, such people still suffer, like
the rest of the population, from the slower-acting diseases caused by
dairy foods.
FATS AND OILS
The terms fat and oil mean essentially the same thing. A fat is simply an oil that is
solid at room temperature. Fats (oils) fall into three classes:
saturated, polyunsaturated, and monounsaturated. In nature, any
particular fat (oil) is a cocktail of all three classes. As a rule of
thumb, if it is solid (fat) at room temperature, then the chief
component is saturated fat.
We have seen that the food supply of the African savanna was very low in
fat. It was never available on its own and the foods themselves did not
contain much. The San really loved to eat the warthog, which had a
relatively high fat content of around 10% (but still a lot lower than
red meat’s 25%). The other major source of fat was the mongongo nut.
The situation remained much the same throughout history until well after
the farming revolution. It was not until a few thousand years ago that
domesticated animals, notably the pig, were bred porky enough to yield a
fat that could be separated out. This kind of fat is lard, whereas fat
from cows and sheep is known as tallow. Even so, it was only in certain
places and certain levels of prosperity that farming peoples had the
luxury of free animal fat in cooking. Traditionally, Chinese, Indian,
and Japanese cooking is done with water, not fat.
Butter is also an animal fat, so the first dairy farmers were among the
first to have fat as a separate entity. Several thousand years later, it
was the same people (mostly northern Europeans) who, in the Middle Ages,
discovered more efficient ways to raise livestock. This was the first
time that a large group of humans had an abundance of meat and fat
throughout the year. Fatty cuisine, utilizing cream, lard, and butter,
became the norm in Germany, Central Europe, and England.
These same peoples then brought the animal fat habit to North America,
Australia, and New Zealand. Animal fat consumption in the U.S. was already
strong in 1909 at 34 pounds per person per year; by 2000, consumption
had accelerated to 42 pounds annually.
Meanwhile, in the southern parts of Europe and in the Near East, early
farmers had domesticated the olive. The earliest recorded occurrence is
from the Greek island of Crete around 3500 B.C. (29) Its
cultivation was important to the ancient Greeks and Romans and they
spread it to all the countries bordering the Mediterranean.
Fresh olives are extremely bitter and must be treated with lye (a strong
alkali leached from wood ash) before they can be eaten. Today, olives
are grown primarily for olive oil. The Greeks first extracted the oil
simply by heaping the olives on the ground in the sunshine and
collecting the oil as it dribbled out of the ripe fruit. Now it is
pressed out, but in the first pressing not a lot of pressure is used so
that the bitterness stays behind; this is known as “extra virgin
oil.” Greece remains
the biggest consumer at about 42 pounds per person per year, while the
tiny consumption in the U.S. has risen from 10 ounces to 1.5 pounds per
person annually. Similar figures are seen in England, France, and
Germany.
It is difficult to imagine, but just 100 years ago corn oil, peanut oil,
sunflower oil, rapeseed oil (Canola oil), safflower oil, cottonseed oil,
and other “vegetable” oils were virtually unknown to the ordinary
consumer. They existed, of course, but only as an unwanted by-product of
agricultural processes. The U.S. cooked with solid animal fats as did
northern Europe, including Britain and Germany. Then, in 1910, the first
process was developed by the food giant Procter and Gamble, in
Cincinnati, Ohio, for turning these waste vegetable oils into something
useful—cooking fat. The process was “hydrogenation.” Thus, Crisco® vegetable shortening was born
and swiftly commercialized as a replacement for lard. It was cheaper,
more convenient, and the quality more predictable than the animal fat
alternatives.
Gradually, vegetable fat became popular until, by World War II, farmers
grew plants specifically to supply oil to the new vegetable fat
industry. Beginning in the 1950s, the budding fast food industry
discovered and liked these fats: they had a long shelf life and could be
reheated and reused repeatedly without producing “off” flavors.
Similar qualities endeared vegetable fats to the rapidly expanding snack
food industry. It is remarkable to think that fast foods and snack foods
have only been commonplace since the mid-1960s.
However, in the 1970s researchers made the connection between saturated
fat and heart disease and the spotlight was put on the practice of
hydrogenation—yes, it was turning a relatively harmless plant oil
into a health-threatening saturated fat. The solution was
straightforward: just use the oil in its original, unhydrogenated state.
Supermarket shelves filled with a wide range of vegetable cooking oils.
By this time, the extraction technology had become more sophisticated.
Today, high temperatures and pressures double the yield and petroleum
solvents, such as hexane, extract the last drop out of the crushed oil
seed. The raw oil is then bleached, deodorized, de-gummed, de-waxed, and
refined with caustic soda. This produces vegetable oils that are clear,
heat stable, bland, and odorless (some varieties can be used as engine
oil).
Meanwhile, the fast food industry, expanding rapidly, continued using
solid hydrogenated vegetable fat (commonly known as “shortening”)
for its french fries until the 1990s. Recently, the concerns about
hydrogenation encouraged them to convert to the original, liquid,
unhydrogenated vegetable oil. This is a step in the right direction, but
not the whole story, as we shall see. The net result of the enthusiastic
adoption of vegetable oils is a dramatic, 24-fold increase in U.S.
consumption, from 1.5 pounds per person per year in 1909 to 36 pounds
per person annually in 2000. Overall consumption of all fats and oils
combined has more than doubled since 1909, when it was 35 pounds per person per year.
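Those multiples can be checked against the figures quoted above; the combined totals are our own sums of the animal-fat and vegetable-oil numbers, not figures from the text:

\[ \frac{36\ \text{lb}}{1.5\ \text{lb}} = 24 \ \text{(vegetable oils)};\qquad \underbrace{34 + 1.5 \approx 35}_{1909}\ \text{lb} \;\longrightarrow\; \underbrace{42 + 36 = 78}_{2000}\ \text{lb} \;>\; 2 \times 35\ \text{lb}. \]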
We saw in Chapter 1 that humans are not designed to consume much fat and
oil, and what little they do consume has to be of a certain kind. Today, we are consuming very
high quantities of oils and fats—40% of calories for the average
American—and these fats and oils are different from those found in our
ancestral homeland. We can trace a range of diseases to this departure
from the Savanna Model: artery plaque, thrombosis, osteoporosis, high
blood pressure, arthritis, allergies, cancers, obesity, diabetes,
asthma, menstrual cramps, and many more. What is going on? We’ve all
heard the slogan “fat makes you fat,” but how can fat (oil) possibly
be responsible for such a wide range of other illnesses? The answer lies
in our hormones: many fats manipulate our hormones, others do nothing,
and yet others block hormones altogether. In other words, like bulls in
a china shop, we are blundering about, knocking over our hormones,
blissfully unaware of how the fats and oils we eat are disrupting the
fine balance of our bodies’ workings. This is a crucial but neglected aspect of what we eat: it can affect our body in subtle, unseen, yet harmful ways.
SUGAR GROUP
In Chapter 2, we split the
USDA’s “sweets” section from the Fats, Oils, and Sweets group and
renamed it the “Sugar Group.” What the USDA means by “sweets” is
sugar and foods with a high sugar content, such as candies, soft drinks,
and some desserts. They are mainly thinking of the familiar sugar that
we know as “table sugar,” although they also mention other sources
of sugar, including honey, maple syrup, and corn syrup.
There are, in fact, several types of sugar. Fructose is a sugar that is
commonly found concentrated in many fruits (from which it gets its
name); another common sugar is glucose. Frequently, the two combine
equally to form a new type of sugar called sucrose. Table sugar is 99%
sucrose and comes either from sugar cane or sugar beets. As we have
seen, sweet foods were a rare commodity in the ancestral diet. The main
source was honey, which is composed of several different sugars, with
glucose and fructose as the major components.
Honey
Even though most people today
do not eat much honey, it has become a byword for innate goodness,
sweetness, and even love. Winnie the Pooh said that “eating honey”
was his favorite pastime. Shakespeare mentions honey 47 times: as
endearments (“honey-love”), as flattery (“honeyed words”), as a
sugar-coating for something unpleasant, as a delicacy, as something
healing, and, by its association with bees, with industry and chasteness.
Our Pleistocene ancestors gave priority to finding honey, but they would
not have found much. Australian anthropologist Betty Meehan lived for a
year with the native Anbarra aboriginals of Northern Australia and she
recorded an average honey consumption of around 4 pounds per person per
year. (30)
That
contrasts with the current average consumption of sugar in the U.S. of about 160 pounds per person per year—40 times as much.
The situation would have remained much the same up until the first
farmers learned how to “farm” bees. The first recorded instance of
beekeeping is in Ancient Egypt around 2400 B.C. From that time on, it is clear that, for the ancient Egyptians
at least, honey became more available. Even so, it is certain that honey
consumption was limited to the affluent classes: in 2100 B.C.,
the 1,000 manual workers building a monument ate “bread, vegetables,
and meat”, whereas the king’s messenger received in addition “oil,
fat, wine, figs, and honey.” (31) A marriage contract of around 1200 B.C.
provides the bride with “12 jars of honey per year” (around 20
pounds), so honey is still precious and rare enough to form part of a
marriage bargain. The boy-Pharaoh, Tutankhamen, had jars of honey buried
with him. On the other hand, it seems that the ordinary populace had to
make do with other sources of sweetness, which archaeologists have
identified as syrups made from the juices of figs, dates, and grapes.(32)
The practice of beekeeping spread to ancient Greece and Rome,
while the ancient Chinese imported honey from the Mediterranean area. In
A.D.
500, one retired Peking bureaucrat was paid a quart of honey per month
as pension. In late Bronze Age Britain (around 1000 B.C.),
the production of beeswax was vital for the casting of bronze objects.
We can suppose that the Ancient Britons enjoyed eating the honey that
came with the wax.
In Europe’s Middle Ages, there are many records of honey production.
In England, Dame Alice de Bryene recorded in her household accounts for
the year 1412 to 1413 a consumption of 6-1/2 quarts of honey. In her
40-strong household, this works out at less than half a pound per person
per year. By Shakespeare’s time, at the turn of the 1600s, just about
every smallholder and cottager would have had a hive or two. Honey was
commonplace but not available in large quantities, perhaps not even the
4 pounds per person annually that the Australian aboriginal was able to
find by foraging. Even today, honey consumption in the U.S. languishes
at around 1 pound per person per year, but that is because of the
arrival of a powerful competitor—sugar.
Table Sugar
Common sugar (or table sugar)
comes chiefly from either sugar cane or sugar beets. Sugar cane is
native to New Guinea and, several thousand years ago, its cultivation spread throughout tropical Asia, notably to India. Alexander the Great, in his campaign in northwest India during the 4th century B.C.,
was one of the first Europeans to come into contact with sugar cane. He
reported the existence of a “stiff grass yielding a kind of honey.”
Mostly, Indians just chewed the cane, but around this time, in 400 B.C.,
they were trying to develop ways to extract the juice. The methods were
rudimentary, but they were the first examples of sugar presses or
“mills.”
During the Dark Ages (around A.D.
500 to A.D.
1000), all contact with India was lost, so the crusaders in the 11th
century became the first Europeans for over a millennium to come into
contact with sugar. This was in Arabia and by this time extraction and
refining had improved. Sugar came as a solid lump or “loaf” and it
was as rare and expensive as spices. The source of sugar was a mystery,
one that was closely guarded by the Arab merchants, but the returning
crusaders were sufficiently enthusiastic (and entrepreneurial) to start
trading sugar with the Arabs. In Europe, as is the way with rare and
expensive commodities, wealthy households started to replace “cheap”
honey with extravagant sugar. Then, in the 1390s, sugar cane was planted
in southern Spain and Portugal by the Arab occupiers. The secret was out
and sugar cane was carried to the Canaries, the recently discovered
islands off the coast of Africa under Spanish control. In 1493, on his
second voyage, Columbus stopped in the Canaries and took the first sugar
cane cuttings to the New World. In the 1550s, the Portuguese already had
a strongly developed sugar industry in Brazil, with 2,000 sugar mills
along the northeast coast. (33)
Even
so, until the 1750s, sugar was still worth its weight in gold. Big
profits could be made by those who could find new sugar-growing areas
and more efficient means to extract the sugar. Speculators,
entrepreneurs, and planters hastened to cultivate sugar cane in all
suitable parts of the tropics and subtropics. During the 18th century, sugar plantations sprang up all over the Caribbean—in
Haiti, Barbados, Cuba, Jamaica, the Virgin Islands, and Guadeloupe. In
the century from 1700 to 1800, British consumption trebled from 4 pounds
per person per year to 12 pounds annually. By the end of that century,
sugar was readily available in rural areas as well as towns and was
within the reach of all classes in society. At first, most sugar in
Britain was used in tea, but later candies and chocolates became
extremely popular. Planting increased during the 19th century, expanding to Fiji, Hawaii, Australia, India, Thailand,
and southern Africa. During the 20th century,
Florida became a world-scale producer.
An Elizabethan Overindulgence
Sugar was still beyond the
means of the common folk, but it seems that the wealthy were already
overindulging. Queen Elizabeth I of England in the 16th century
received regal presents of loaf sugar from the King of Morocco. In
1598, a foreign visitor remarked of Elizabeth that “her teeth were
black, a defect to which the English gentry seem subject from their
great use of sugar.” (34) Perhaps unwittingly, the King of Morocco’s generosity was the cause.
That was cane sugar, but in the middle of the 18th century,
a German scientist devised a method of extracting sugar from another
plant, mangel-wurzel, a
type of beet. Fifty years later, another German improved the
mangel-wurzel into the plant now known as “sugar beet” and erected the
first beet-sugar factory in 1802. In 1811, Napoleon was worried about
the British blockade of sugar imports from the West Indies (the same
blockade that drove the French to eat potatoes), so he set up sugar-beet
schools, factories, and plantations. Sugar-beet grows easily in
temperate climates and most European countries quickly set up their own
sugar-beet industry. The same techniques were adopted in North America,
Russia, China, Japan, and other temperate zones of the world. Now,
production of sugar from sugar beets rivals that from sugar cane.
Just in the last century, sugar has moved from being a luxury item to a
cheap commodity. Annual consumption in America of sugar from these two
sources rose to 61.5 pounds per person in 2004. Even so, supply
outstrips demand and competition is intense. Farm prices have been
driven down, and each country protects its sugar industry by holding consumer prices high.
This has led to yet another development: the extraction of sugar from
corn (maize) starch. It might surprise you to know that sugar can be
made from corn, but the marvels of modern technology have performed such
a feat. This product is called “high-fructose corn syrup” (HFCS),
although the name is a bit misleading, since it contains almost the same proportions of fructose and glucose as table sugar. It is a lot cheaper
than the artificially high price of cane sugar. Particularly in the
U.S., HFCS has replaced table sugar in a great many foods. High fructose
corn syrup mixes well in many foods, is cheap to produce, tastes sweet,
and is easy to store. It is used in everything from bread and pasta
sauces to bacon and beer as well as in “health products” like
protein bars. However, by far its greatest use is in carbonated soft
drinks—the American soft drinks industry switched from sugar to HFCS
in the 1970s. As a result, American annual consumption of HFCS has
soared from zero in 1969 to 59.2 pounds per person in 2004.
The Problems with Eating Sugar
When we add all the sugar
sources together (including minor sources such as maple syrup, molasses,
and so on), annual sugar consumption in the U.S. has shot up, just in 300
years, from around 4 pounds per person (as in the Savanna Model) to
141.0 pounds per person.(35) We might suppose that such a dramatic move away from the Savanna
Model in sugar consumption has consequences, and indeed it does. As is
now commonly accepted, sugar intake is not healthy: it disturbs blood
sugar control, which, as with grains and potatoes, is linked to the
tremendous increase in heart disease, high cholesterol, diabetes,
obesity, cancers, bone disease, allergies, and many more conditions.
This constellation of diseases is sometimes called “Syndrome X” or
“sugar disease.” In addition, sugar is devoid of any other nutrients
and works yet more harm by displacing more nutritious foods from the diet.
SALT
Salt is a compound made up of
two elements, sodium and chlorine. As a rule of thumb, 6 grams of salt
contain 2.5 grams of sodium and 3.5 grams of chlorine.(36)
Put
another way, 2.5 grams of sodium make 6 grams of salt. Often
nutritionists talk about the “sodium content” of food rather than
the “salt content,” because the body recognizes sodium in all its
forms and sodium, not chlorine, is what has such a decisive effect on
our health. Most of the sodium we consume comes in the form of table
salt, although some people get additional sodium, for example, from the
sodium bicarbonate in antacids.
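To make this rule of thumb concrete, here is a minimal Python sketch of the sodium-salt conversion, using the approximate atomic masses of sodium (23.0) and chlorine (35.5); the function names are ours, purely for illustration:

# Fraction of table salt's weight that is sodium: 23.0 / (23.0 + 35.5), roughly 0.39
SODIUM_FRACTION = 23.0 / (23.0 + 35.5)

def sodium_from_salt(salt_grams):
    # grams of sodium contained in a given weight of salt
    return salt_grams * SODIUM_FRACTION

def salt_from_sodium(sodium_grams):
    # grams of salt that carry a given weight of sodium
    return sodium_grams / SODIUM_FRACTION

print(sodium_from_salt(6.0))   # about 2.4 g - the "2.5 grams" of the rule of thumb
print(salt_from_sodium(2.5))   # about 6.4 g - roughly the "6 grams" quoted above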
We saw in Chapter 1 how the San’s diet was very low in salt, about 650
mg per day (Americans on average consume ten times this amount). The San
have no sources of salt and the only sodium comes from what is naturally
present in the plants they eat. Our understanding of this ancestral diet
suggests that the situation was identical for the whole of our
evolutionary past, as our ancestors lived inland and had no access to
naturally occurring salt. The Savanna Model diet is very low in sodium
and, significantly, rich in another mineral called potassium.
The USDA, both in its pyramid and dietary guidelines for Americans,
subtly but insistently encourages people to reduce salt consumption.
They point out that most salt is ingested from prepared and processed
foods, so that much of the salt is so disguised that we do not realize
it is there. Did you know that cornflakes are saltier than seawater?
But salt was not always so freely available. Homer related in The
Odyssey that Odysseus should look for a people who had no knowledge of
salt—these were the Epeirotes who, even after the capture of Troy,
knew nothing of the sea.
The Greeks themselves came late to the use of salt and they might have
had a taboo against it. Early Indo-Europeans and Sanskrit-speaking
peoples (early Hindus) had no word for salt. To the Romans, salt was a
scarce commodity and they even paid their soldiers with it (our word salary
comes
from the Latin salarium meaning “salt-payment”). The same goes for
many other civilizations: salt was a form of money and was treated with
respect. Many Central American tribes knew nothing of salt until the
Spanish conquest and the same was true of central Africa before European
contact.
Of course, many peoples who lived close to the sea had access to salt.
They created salt-drying beds along the shoreline and harvested the salt
for consumption and trade. Nevertheless, this was a cottage industry
until recent times, when salt production was put on an industrialized
footing. In some areas, salt beds deep under the Earth were discovered.
The Ancient Egyptians, Romans, and Greeks sent unfortunate wretches
underground to mine salt by hand.
Nowadays, it is either excavated by huge mining machines or extracted
through boreholes using high-pressure steam. Suddenly, salt moved from
being a rare, tradable product to a freely available, cheap commodity.
Salt consumption rocketed in the U.S. from around 1 gram per person per
day to 10 grams per day.
Researcher Boyd Eaton
estimates that the typical daily consumption of sodium in Pleistocene
times was no more than 0.7 gram per person.(37) It
was obtained purely from what was intrinsic to the foods they ate. The
average American consumes 4 grams of sodium (10 grams of salt) per day,
nearly six times as much. This heavy salt load poses a problem for the
body: it is linked to problems such as high blood pressure,
osteoporosis,(38)
and
blocked arteries. We tend to think of our arteries as being like inert
plastic plumbing, but in reality they are living tissue and high salt
levels irritate and scar them. The blood pressure specialist Professor
Louis Tobian has shown that salt damages arteries even if your blood
pressure is normal.(39)
Also,
over-consumption of salt drains calcium out of the bones, and high salt levels cause our kidneys to malfunction, provoking abnormally high blood pressure.
Nutritionists have demonized salt often enough, so what has been said so
far is not a surprise. However, there is another factor that is
important—the consumption of the mineral potassium. Sodium and
potassium work as a team in tiny, yet vital, quantities in the
electrical circuitry of body cells. They need to be consumed in a ratio
of about 1 part of sodium to 5 parts potassium. Boyd Eaton finds that
this is exactly the ratio consumed, quite naturally and without
forethought, by humans in Pleistocene times.(40)
Potassium
is abundant in fruits, salads, and vegetables. In the average American
diet today, the see-saw is unbalanced the other way—1 part potassium
to 2.5 parts sodium—and this has repercussions on the efficient
working of every cell in our bodies.
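A minimal Python sketch (using only the ratios quoted above; the variable names are ours) shows how far the see-saw has tipped:

# Ratios expressed as sodium divided by potassium
ancestral = 1 / 5      # Pleistocene diet: 1 part sodium to 5 parts potassium
modern = 2.5 / 1       # average American diet: 2.5 parts sodium to 1 part potassium
print(modern / ancestral)   # 12.5 - the modern diet is about 12.5 times more sodium-heavy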
BEVERAGES
In Chapter 2, we introduced a
new food group, Beverages. The reason is that the USDA in its dietary
guidelines for Americans only makes passing reference to alcohol, sodas,
fruit juice, and water, and no mention at all is made of tea or coffee.
Nevertheless, beverages are an important factor in our food intake and
we need to know how they fit into the scheme of things. We therefore
make a food group of beverages and single out the ones that dominate our
Western consumption pattern.
The main beverage for our Pleistocene ancestors was water, plain and
simple. Or perhaps not so plain—often it came from a waterhole used by
the other creatures of the savanna, containing all kinds of bugs, germs,
and sediment. In addition, fluid was obtained from vegetation such as
the tsama melon, roots and tubers, and even from rainwater collected in
the hollow trunks of trees. Finally, some liquid was obtained from the
mammals that were killed on occasion; the San would drink the blood and
stomach contents of antelope, for example.
Alcoholic Beverages
It is an interesting thought
that in ancient times, no one had a means of boiling water. It was not
until the invention of kiln-fired pottery in Egypt around 6000 B.C.
that water could be heated and infused with herbs to give it
flavor. Within a heartbeat of learning how to make pots, these inventive
people also discovered how to ferment beverages to make forms of beer
and wine. In short order, most civilizations adopted, or discovered for
themselves, local variations on these basic beverages. In 2100 B.C.,
Sumerian doctors prescribed beer for many ailments; Egyptian doctors in
1500 B.C.
included beer or wine in 15% of their prescriptions. Around 1750 B.C., Hammurabi of Babylon, in his code of laws, regulated drinking houses, and pre-biblical Canaanites had a multitude of uses for intoxicating fluids.
Meanwhile, Indians and Chinese made intoxicating beverages from barley
and rice. The 3,000-year-old Hindu Ayurvedic medicine teaches both the
beneficial uses of alcoholic beverages and the consequences of
intoxication and the diseases of alcoholism. Most of the peoples in
India, as well as Sri Lanka, the Philippines, China, and Japan, have
continued to ferment a portion of their crops. Japanese sake is a
well-known drink made from fermented rice.
In Africa, maize, millet, bananas, honey, the saps of the palm and the
bamboo, and many fruits have been used to ferment beers and wines, the
best known being kaffir beer and palm wines. The Tarahumara of northern
Mexico made beers from corn and agave, and the Papago Indians made a
cactus wine.
Throughout Central and South America, the Indians made alcoholic
beverages from maize, tubers, fruits, flowers, and saps. In contrast,
the San, the Eskimo, the Australian aboriginal, the North American
Indian, and the Polynesian never discovered fermentation.
Today, the choice of fermented drinks has narrowed down to two main
types, wine and beer. Wine is made from grapes and can have an alcoholic
strength up to 13%. Beer is made from malted barley and has strengths
between 4% and 6% alcohol; most varieties of beer are flavored with hops to give them a bitter taste. Consumption of wine in the U.S. has increased
from 1.3 gallons per person per year in 1970 to 2.2 gallons annually in
2002. For beer, the figures show an increase from 18.5 gallons per
person per year to 22.0 gallons annually. (These are figures covering
the whole population, not just those of drinking age.)
Fermentation produces a
drink with a maximum alcohol content of only about 13%, but usually it
is much less. By about 2,800 years ago, the Chinese had worked out a
method to make the alcohol content much stronger—distillation.
Around the same time, the Javanese discovered how to distill a potion they called “arrack” from fermented sugar cane and rice. The Greeks
and Romans also made crude distilled products. However, it took the Arab
alchemists in the 8th
century
to develop the equipment and techniques to put distillation on a
predictable, economic, and palatable footing. By the late Middle Ages,
distilled spirits were widespread in Europe. The beverages could now
have an alcohol content ranging up to 80%. (Nowadays, most governments
restrict the alcohol
content to 45% maximum.) In
the 19th century, Western entrepreneurs industrialized the production of
spirits and actively sold to global markets. In this way, Scottish
whisky, Dutch gin, English rum, French brandy, American bourbon, and
Russian vodka beat out local brews to become world brands. Consumption
of spirits has declined in America from 1.8 gallons per person per year
in 1970 to 1.1 gallons annually in 2002. (Again, these figures cover the
whole population, not just those of drinking age.)
Back in the Middle Ages,
monks were experimenting with making alcoholic “elixirs” designed
for medicinal purposes, with closely guarded recipes using fruits,
sugar, herbs, and spices. We know these elixirs today as “liqueurs.”
Benedictine was among the first liqueurs in 1510. Chartreuse came in
1607 and was swiftly followed by Cointreau, Grand Marnier, Curacao, and
many more. They have an alcohol content ranging from 25% to 60%.
Tea and Coffee
Earlier, we mentioned heated
water and infused herbs—one of them, tea, found by the Chinese around
350 B.C.,
has come to dominate the market. But tea did not come to Europe until
the English East India Company, trading with the secretive Chinese in
the 1660s, introduced tea leaves to London’s coffee houses. This
ushered in the picturesque age of the famous sailing clippers: these
graceful, high-speed ships raced across the oceans to be the first with
their precious cargo in the capitals of Europe. However, for almost two
more centuries, no European knew what a tea plant looked like. Then, in
1827, a young Dutch tea taster, J.I.L.L. Jacobson, risked his life to
penetrate China’s forbidden tea gardens and bring back tea seeds to
cultivate the tea plant in the Dutch East Indies. In 1823,
coincidentally, a variety of tea had been discovered growing wild in
Assam, India. Under British government encouragement, tea plantations
were developed using plants from both Assam and China, and India became
a major producer and consumer of tea. Most tea in the world today is
so-called “black tea”: it comes from the same plant as green tea; only the drying and fermentation process is different. Annual tea consumption in the U.S. is not as high as in other countries and has been stable since 1970 at around 7 gallons per person.
Coffee rivals tea in worldwide consumption. It is thought to have its
birthplace in southern Ethiopia and to take its name from the province
of Kaffa. It was as recently as the 15th century
that the plant was discovered and transplanted to southern Arabia. From
there, it swiftly became popular all over the Arab world. By the early
1600s, major European cities could boast of their coffeehouses, which
became centers of political, social, literary, and eventually business
influence. By the late 1600s, coffeehouses became popular in North
American cities such as Boston, New York, and Philadelphia. Annual
consumption of coffee has been falling in the U.S. in recent times, from
33.4 gallons per person in 1970 to 22 gallons in 2002.
Cocoa
Cocoa has its origins in Central America, where the Maya and
Aztecs held it in great esteem. At the court of Montezuma, the Spanish
conquistador Hernando Cortes was served a bitter cocoa-bean drink. He
brought the bean to Europe, where the cocoa drink was sweetened,
flavored with cinnamon and vanilla, and served hot. The beverage
remained a Spanish secret for almost 100 years.
In 1657, a Frenchman opened a shop in London, at which solid chocolate
for making the beverage could be purchased at 15 shillings a pound. At
this price, only the wealthy could afford to drink it, and fashionable
chocolate houses appeared in London, Amsterdam, and other European
capitals. It was not until the mid-19th century that cocoa became affordable for all levels of society.
Today, the market for cocoa is dominated by “chocolate drink” powders that contain only a small percentage of cocoa, adulterated with sweeteners, fillers, and artificial flavors.
Soft Drinks
The first marketed soft drinks appeared in
17th-century France as a mixture of water and lemon juice, sweetened
with honey. But the race was on to carbonate water—the idea was to
produce cheap versions of naturally occurring health spa mineral waters.
In 1772, the English scientist Joseph Priestley demonstrated a small
carbonating apparatus to the College of Physicians in London. For this
invention, he is nicknamed “the father of the soft drinks industry.”
Using Priestley’s apparatus, Thomas Henry, an apothecary in
Manchester, England, produced the first commercial quantities of
carbonated water. Jacob Schweppe, a jeweler in Geneva, read
Priestley’s papers and, by 1794, was selling highly carbonated waters
to his friends. He added other mineral salts and flavors, such as
ginger, lemon, and quinine (to make tonic water). Schweppe moved to
London and built a worldwide soft drinks empire.
In 1886, Dr. John Pemberton, an Atlanta chemist, developed what he
called an “esteemed brain tonic and intellectual beverage, a cure for
all nervous affections, sick headache, neuralgia, hysteria, and
melancholy.” Pemberton’s product contained carbonated water, sugar
syrup, cocaine from coca leaves, caffeine from kola nuts, and other
secret flavors. It was later marketed under a telling name, Coca-Cola.
Because Pemberton was ill, he sold two-thirds of his business in 1888 to
cover expenses. He died later that year, never knowing how successful
the product would become. Asa Candler, an Atlanta druggist, bought the
entire business in 1891 for $2,300. The Coca-Cola Company removed the cocaine by 1929. Even so, consumption has soared. Americans in 1940
consumed an average of one 6.5 ounce bottle per week, or 2.6 gallons per
year.(41)
This
has increased ten times to 25.8 gallons per person for the year 2003.
Another carbonated cola beverage has been around almost as long—Pepsi Cola, which accounts for a further 22.0 gallons per person annually. Thus, consumption of just these two beverages
combined is nearly 48 gallons per person per year, or over a pint
a day. Other carbonated soft drinks account for a further 7 gallons per
year.
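As a quick check on the “over a pint a day” arithmetic, here is a minimal Python sketch of the unit conversion (8 U.S. pints to the gallon; the consumption figures are those quoted above, and the variable names are ours):

# Annual per-capita cola consumption in U.S. gallons, as quoted above
coke = 25.8
pepsi = 22.0
PINTS_PER_GALLON = 8                          # U.S. liquid measure
pints_per_day = (coke + pepsi) * PINTS_PER_GALLON / 365
print(round(pints_per_day, 2))                # about 1.05 - "over a pint a day"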
Milk
We examined milk in the Milk
Group, but here we look at it as a beverage. Milk in its raw form can be
dangerously contaminated with unhealthy microbes. These used to cause a
lot of sickness until Victorian times. Then, inspired by the work of
Louis Pasteur, it was found that milk could be made safe by heating it
to 162°F (72°C) for 15 seconds. This “pasteurized” milk was the
form in which milk was commercialized until the 1960s. In those days,
milk used to have the cream float to the surface (some may remember
bottles of milk with a plug of rich cream at the top). Today, milk is
usually “homogenized” as well: the milk is heated and squirted by
pressure pumps through nozzles so that the cream stays evenly
distributed throughout the milk.
Since the 1960s, there has been an awakening to the dangers of milk
fat—nutritionists have been advising the use of skimmed or
semi-skimmed milk over whole milk. Skim milk is made in a machine that
centrifuges the milk at 6,000 rpm to separate the fat from the skimmed
milk. Consumption of whole milk has declined dramatically from 25.5
gallons per person per year in 1970 to 8.0 gallons annually in 2002.
Meanwhile, skimmed milk consumption in its various forms has increased
from 5.8 gallons to 15.5 gallons annually. Overall, annual milk
consumption per person in America has declined from 31.3 gallons in 1970
to 22.2 gallons in 2002.(42) And
the unhealthy milk fat? That is recycled back to American consumers as
cream, butter, and ice cream.
Juices
From the time when it was
learned how to preserve fruit juices in reasonable condition (using
pasteurization) in the 19th century, bottlers have canned and packaged various juice
products. They press the juices from the fruit, then strain, clarify, filter, pectinize, and pasteurize them; some juices are concentrated by evaporation. Today, by far the most popular juice is from oranges; it
is followed by apple, pineapple, and so on. Total fruit juice
consumption has been rising steadily from 5.7 gallons per person per
year in 1970 to 10 gallons annually in 2002.
Water
Our Pleistocene ancestors’
water came from rivers, lakes, and waterholes. Often, they had to
compete with lions, crocodiles, and hyenas for a sip from a muddy,
excrement-infested water source. It is probable that they picked up many
nasty parasites and diseases from their water supply. Water supplies in
the early civilizations were even worse: the high concentrations of
population not only took water out of the river, but put sewage back in.
The major cities would be located on a good river, and mostly the population had to get drinking
water from it as best they could. There were outbreaks of various
waterborne diseases, but usually the gods were blamed rather than
unsanitary practices.
This changed dramatically in Victorian times: there were particularly
bad outbreaks of cholera, typhoid, and typhus in London and scientists
had discovered that sewage-contaminated water was the cause. In
reaction, the authorities undertook immense construction projects from
1850 to 1875 to build elaborate networks of pipes and tunnels to collect
raw sewage and carry it to treatment works outside the city. In
parallel, pumping stations, reservoirs, treatment stations, and pipe
networks were constructed to bring safe drinking water to every
household. It is said that this new science of public health engineering
has done more to prevent and cure disease than any conventional medical
treatment.
Quickly, public health engineering spread to America and continental
Europe. Overseas, the public works department became one of the most
important development arms of British and French colonial governments.
Water for municipal supplies comes from two chief sources: surface water
from rivers and lakes, and groundwater from water-bearing layers
underground. Surface water is usually dirtier and needs several stages
of treatment. It is first filtered and then “flocculated,” a process
whereby certain chemicals are added to the water to make the fine
particles clump together and sink to the bottom where they can be
strained off. Other chemicals are sometimes added to reduce acidity and
to bring hardness to acceptable levels. Both surface water and
groundwater need to be disinfected to kill harmful bacteria. Most
commonly, this is done by injecting chlorine gas; excess chlorine is
removed when it has done its work. The gas ozone is sometimes used
instead of chlorine because it leaves less odor, but it is more
expensive.
In this way, municipal water contains traces of the chemicals that have
been added. They are mostly harmless substances like slaked lime, baking
soda, and alum (aluminum sulfate). Chlorine is potentially more
aggressive, but the active quantities that remain are usually harmless
too, certainly a lot less than in the average swimming pool.
There is some evidence that a chemical called fluoride helps fight tooth
decay so, more controversially, some municipalities voluntarily dose
their water supply with fluoride. Now it happens that the waters of our
African homeland were quite rich in fluoride, certainly no less than the
concentrations deliberately put there by some municipal authorities.
Nevertheless, many consumers object to being forcibly medicated in this
way. A great many of the water treatment plants and distribution
networks were built over 100 years ago. Not only have they reached the
end of their useful lives, they suffer a chronic lack of investment. In
consequence, they are vulnerable to mistakes in chemical dosage and to
contamination through leaky pipework.
Everybody was happy drinking municipal water until the 1980s,
when the public became more concerned about the aging equipment, the
added chemicals, and the forced fluoridation. The bottled water company
Perrier brilliantly exploited this disquiet. They initiated a marketing
coup on a scale similar to Kellogg with breakfast cereals (see Chapter
2) and persuaded Americans and Europeans to abandon drinking the water
they could get for free out of a tap and buy water in a bottle.
The mineral water companies latched on to another alarm—that we are
all dehydrating from lack of water. Remarkably, they persuaded us to not
only switch from tap water to bottled water but also to drink much more
of it. Such was their success that consumption of bottled water has
soared from virtually zero in 1970 to 21.2 gallons per person per year
in 2002. Curiously, consumer watchdogs estimate that 60% of the bottled
water sold on the market is simply municipal water put into bottles
(sometimes with further treatment). Most of the remaining 40% of bottled
water does indeed come from natural springs and wells, but it still has
to be sterilized, conditioned, and carbonated.
The Health Consequences of Our Beverage Choices
Our species, like most on the planet, is designed to get most of its liquid intake from water.
Until recent times, that was still the case for us, even in the West.
But we have seen the rise of alternative drinks, which have come along
just in the average grandparent’s lifetime. Setting aside wine,
distilled spirits, and liqueurs, which are not thirst quenchers, what
are we now consuming instead of water? When we add up the figures for
beer, tea, coffee, cocoa, soft drinks, juices, and
milk, we find that the average American is consuming, in a year, 150
gallons of liquid that is not plain water—that comes to 3.25 pints per
day! The average farm laborer in 1900 consumed a half pint of beer on a
Saturday night, and that was it for alcohol for the week. Today,
Americans of drinking age are consuming, on average, 5 pints a week,
much of it concentrated into one or two binges. Beer drinking on a large
scale is linked to obesity (beer gut), heart disease, high blood
pressure, high cholesterol, allergies, poor bone health, and cancers.
The connection is the same as for sugars: beer contains a hyperactive
sugar, maltose, which creates abnormal blood sugar surges. In addition,
some people are allergic to the barley gluten in beer. The alcoholic
content is also a problem (see sidebar), but beer is relatively dilute
in alcohol, so this factor is of secondary importance to the sugar
diseases.
The Problem with Alcohol
Alcohol occurs frequently in nature, especially where ripe fruits
ferment of their own accord. There are stories of elephants gorging on
overripe, fermenting mangoes and rampaging around in a drunken stupor.
The human body handles alcohol perfectly well in these modest,
naturally occurring circumstances. However, with our cleverness, we
have made alcohol much more readily available and in greater
concentrations. Greater consumption interferes with fat metabolism,
brain chemistry, and many other bodily functions. The liver, the organ
responsible for detoxifying alcohol from the blood, can develop the
fatal condition of cirrhosis. Sometimes, the one-way valve into the
stomach becomes a two-way valve, leading to acid reflux, when the
contents of the stomach rise back up the esophagus and burn the
lining. Plus, alcohol is empty calories: at best, it just adds to the
waistline, at worst, it displaces more nutritive foods from the diet.
Chronic alcoholics frequently suffer vitamin and mineral deficiency
diseases and their life span is shortened by 10 to 12 years because of
this.
Tea, whether black or green,
seems to be mostly positive in its health effects. It is rich in certain
micronutrients that are in short supply in the average Western diet. The
body gratefully seizes these and uses them to reinforce the immune
system, so that tea drinkers are less likely to suffer certain cancers
and infectious diseases. And the caffeine content is moderate: a cup of
tea contains about the same as a 12 oz can of cola.
Although consumed by the large mug, the classic American coffee
is weakly brewed and relatively benign. The trend now is for
coffeehouses to serve much stronger brews but still in large portions,
which is getting us into the territory where caffeine overdose (see
sidebar) may undermine our health. Coffee in these concentrations is
associated with raised blood pressure, increased heart rate, strokes,
and heart disease. On the other hand, coffee does have some protective
effect against some types of cancer, Parkinson’s disease, and
diabetes. However, the balance of advantage stays with keeping the
coffee weak. Cocoa also contains caffeine at low levels, but it also
contains a rich variety of micronutrients that are heart healthy and
protective against many cancers. The warning is the same: use the
genuine cocoa powder, not the artificial confections that masquerade as
“chocolate drinks.”
Caffeine Overdose
Caffeine is found to a greater
or lesser degree throughout the plant kingdom. The human body is
clearly well adapted to handle it. Today, we tend to focus on the
plants with a high content, particularly coffee, for its stimulative
properties. On the whole, caffeine is quite benign and does not have
many drawbacks. However, used consistently and in large doses, it
interferes with blood sugar control and with bone health, and it
reduces elasticity of the arteries. Caffeine addicts who try to stop
often find that they suffer classic drug withdrawal symptoms:
headaches, sleeplessness, irritability, tiredness, and so on.
Soft drinks, chief among them colas by volume, dominate the market and have a number of problems. Their sugar
content is directly associated with childhood obesity and heart disease.
By adulthood, we see diabetes, cancers, raised blood pressure, high
cholesterol, and all the usual sugar diseases. Colas, because of certain
ingredients, are also associated with poor bone-building in children and
osteoporosis in adults.
Fruit juices also have their problems. Fruits lose their fibrous
structure in the juicing process. In addition, pasteurization knocks out
many micronutrients, dramatically reducing their nutritional value.
Finally, juice processing brings out the sugar content, which hits the
bloodstream hard—fruit juices too are associated with the sugar
diseases and, in particular, obesity and diabetes.
We have dealt with milk at length in the Milk Group and it is associated
with all the problems of that group: heart disease, poor bone-building,
allergies, obesity, and many more. Milk consumption has been dropping in
spite of increasingly desperate promotions by the dairy industry.
Studies suggest that milk is losing out to carbonated soft drinks, which simply replaces one problem with another.
Let us now turn to the other alcoholic beverages—wine, spirits, and
liqueurs. Wine, particularly red wine, contains a number of
micronutrients that appear to be helpful to health, especially against cardiovascular conditions and cancers. The proviso is that you should drink no more than a couple of glasses per day. After that, the alcohol content takes over and starts to dominate the consequences.
Wine, particularly dry wine, does not have the catastrophic effect on
health that beer can have—wine drinkers on the whole suffer less from
beer belly and the sugar diseases. Spirits have higher concentrations of
alcohol, so the limit is reached more quickly and this is their main
danger. But they do not provoke the sugar diseases like beer does. There
is some evidence that high alcohol concentrations irritate the mouth,
throat, and esophagus linings to the point where cancers develop.
Spirits do not have any worthwhile concentrations of nutrients. Liqueurs
suffer the same drawbacks and have an additional one—high sugar content. Liqueurs are doubly fattening (sugar and alcohol) and have nothing worthwhile to contribute nutritionally.
Finally, back to water: on the big scale, this is the least of our
worries. Municipal water supplies are still far healthier than the
fetid, polluted, and disease-ridden waters that our ancient ancestors were obliged to drink. Bottled waters are a harmless diversion. The alarms about dehydration are largely overdone, simply marketing manipulation to get us to drink far more bottled water than we need.
OUR CHANGED FOOD SUPPLY
We have examined how various
foods have entered the food supply. Not all newcomers to the diet are
unwelcome—many are fine alternatives to the foods
our ancestors were adapted to
in the Savanna Model. In recent history, new foods have arrived from all
over the world. Some of them, such as the potato, have colonized our
food supply so thoroughly that we cannot imagine life without them. In a
similar vein, most cuisines around the world have accepted that fine
addition to our diet, the tomato, which was unknown to Shakespeare just
400 years ago.
However, things are not always what they seem. For example, just in the
past few centuries, the carrot has gone from purple to bright orange and
now it is going back to purple again. In changing the colors, we keep
changing the nutrients. The strawberry used to be just a little fruit
about the size of a pea. In this continuous hybridization process, what
nutrients have changed? In the industrialized production of the modern
world, generic foods can change out of recognition in just a generation.
Today, we see a host of new diseases afflicting our populations: autism,
allergies, asthma, heart disease, cancer, arthritis, bone disease,
obesity, diabetes, Alzheimer’s, and many more. These diseases have
become so pervasive that we think of them as part of the normal human
condition. We simply cannot imagine that there is a direct connection
between our lifestyle, notably our eating habits, and these diseases.
We have catalogued, food group by food group, the major divergences of
these foods from our ancestral foods of the African savanna and looked
at some of the consequences. We see that there are problems with grains,
milk products, potatoes, and dry beans. Less surprisingly, we find that
sugar creates havoc with our health. We should never have accepted
certain types of vegetable oil in bulk quantities and we have done
certain things to red meat that make it unhealthy to humans and
non-conforming to the Savanna Model. We come to the startling
realization that nature never intended us to eat some very familiar
foodstuffs, which are making us sick.
Much of this new knowledge has not yet percolated into the schools, the
nutritionist creed, and the medical community. And many of these
revelations are daunting—they call into question many of our sincerely
held beliefs and make us realize how much our upbringing, our schools,
and the health industry have indoctrinated us. In the next chapter, we
provide the scientific background to these astonishing conclusions, and
then we will pull all the strands together to build the ideal eating
plan in modern terms.