Sunday, March 27, 2011

The role of mistakes in language learning


Of all the advice on Antimoon, "Do not make mistakes" is by far the most controversial. Hardly a month goes by without an e-mail or forum post from an angered English teacher, letting me know how stupid I am for telling learners to avoid mistakes. Don't I realize that mistakes are a necessary element of all learning? Haven't I heard the phrase "learn from your mistakes"? And why am I scaring learners into silence?
With this article, I hope to clear up the confusion once and for all.

Are mistakes good or bad?

Let me say something obvious first: mistakes are bad. Given the choice between saying a sentence correctly and saying it with a mistake, nobody in their right mind would choose the mistake — and no teacher would recommend it. Similarly, no teacher would give a higher grade to a student who made more mistakes than another.
The reason teachers sometimes say that mistakes shouldn't be avoided is that they believe one of two things (or both):
  1. That mistakes are an inseparable part of the learning process; therefore, the only way to avoid language mistakes would be to avoid speaking and writing in a foreign language, and that would be bad. (the "Mistakes are inevitable" group)
  2. That making mistakes, and having them corrected, is a good way to learn a language. (the "Mistakes are good" group)
Let's take a look at each of these beliefs:

1. “Mistakes are an inseparable part of the language learning process”

One of the most frequently repeated pieces of business advice is that to succeed in business, you have to try many different things, until one of them finally works. Even huge corporations often release unprofitable products despite spending enormous amounts of money on market research. That's because it is impossible to tell with 100% certainty whether customers will buy a product until it hits the shelves.
These examples teach us that in "unknown" areas like scientific research and marketing, mistakes are truly inevitable. Where no one can predict what will work, people who try and fail have a higher probability of success than those who do nothing for fear of failure. Think of Thomas Edison, who had to test over 6,000 unsuitable materials for light bulb filaments until he found one that worked.
Are there any other situations where mistakes are necessary? Let's take learning to swim. Swimming is not an "unknown" area. There is no mystery in it. Everybody can watch slow-motion videos of Michael Phelps and see how it should be done.
The problem here is that there is a long way between knowing how to swim and doing it yourself. It takes hours of training to develop the right reflexes: putting your head under water at just the right moment, synchronizing your legs and your arms, breathing in and breathing out, etc. The process involves many mistakes; in fact, as a beginner, almost everything you do is wrong.
Are mistakes costly? You bet. Every swimming coach knows that mistakes can easily turn into bad habits, which are difficult to unlearn. When you fail in business, you lose money. These are the costs of mistakes. But you have to accept them, because there is no other way to succeed in business and in swimming.

What about language learning?

Now let's take a look at language learning. It's obvious that learning a language is not an "unknown" area like research or marketing. No one is asking learners to invent anything (indeed, they should not invent their own grammar and vocabulary) — they just need to do things exactly the way native speakers do.
Language learning is not like learning to swim, either. In swimming, you can remember very well how your coach moves his arms, but still be unable to re-create that motion. Knowing the right way is not enough. You need to train your muscles to react in a certain way, which is a long, error-prone process. First you make a lot of mistakes, then fewer and fewer, until you are error-free.
By contrast, in languages, knowing the right way is enough. If you can remember and understand a sentence in a foreign language, you can repeat it without any mistakes. If you remember two correct sentences, you can transform and combine them into another correct sentence. There is little room for mistakes.
Now of course things are a bit more complicated than that:
· If you are just beginning to write or speak in a foreign language, it will perhaps take you 30 seconds to produce each sentence (you will make a lot of edits or, if you're speaking, you will stutter and start over). That's because it can take a long time to recall things, especially if they are not in your "active memory". But even if you're slow, your sentence can be error-free or almost error-free if you are careful.
· As noted before, the key to producing correct sentences is remembering correct examples. To build your own sentences in a foreign language, you need thousands of words, hundreds of grammatical structures and dozens of idioms. Furthermore, you must know which words are used in what contexts. For example, end can mean stop (e.g. We must end the project), but you cannot say I have decided to end seeing her (you must say: I have decided to stop seeing her).
There are thousands of "exceptions" like that in any language. So, although there is only a short way between theory and practice, the theory is huge. However — and this is crucial — the theory can be acquired without producing mistakes (by reading and listening).
· Although following correct examples is not hard, there can be some problems. One is that certain information is difficult to notice. For example, if you don't read English texts carefully, you may ignore the use of articles (a/the), as they are not necessary to understand the meaning. As a result, information on article usage will be missing from your memory and your correctness will suffer.
A smaller problem is related to "unexpected exceptions". Some usage patterns are so rare that it is very difficult to learn them, even with massive input. For instance, "I'd like to see The Simpsons" is correct if you mean the movie, but wrong if you mean the TV show (in that case, you're supposed to say "I'd like to watch The Simpsons"). Differences like that are really hard to learn, even if you read a lot.
· There is one area of language learning where "knowing the right way" does not immediately translate into "doing it the right way": pronunciation. You can spend hours listening to a native speaker saying a word, and still be unable to repeat it properly. In fact, learning to pronounce the sounds of a new language is very much like learning to swim — your brain and your muscles have to get used to new movements. The process takes time and involves many mistakes.
What have we learned so far? First of all, that not all learning is trial-and-error learning. In particular, language learning is possible with few mistakes because:
  1. It does not require innovation or creativity. All you need to do is copy other people (native speakers).
  2. The gap between "knowing how it should be done" and "doing the right thing" is small. Once you have enough correct examples in your memory, it is relatively easy to transform and combine them into your own sentences.
  3. You need a lot of correct examples to speak a foreign language, but they can be acquired without mistakes.

2. “Making mistakes is a good way to learn a language”

We have seen that language learning can happen with few mistakes. But why should it? Isn't speaking and writing with mistakes an effective way to learn a foreign language? After all, "practice makes perfect", doesn't it?
Those who believe mistakes are a good way to learn probably picture the following model:
The learner says a sentence, makes some mistakes, the teacher corrects them, and the learner memorizes the proper way to say the sentence. Next time around, the learner will hopefully make fewer mistakes. The process is repeated until the learner is error-free.
Now, why doesn't this model work well?

It's too slow

Language learning is a very memory-intensive task. There is a staggering number of words, phrases, structures, and subtle differences in usage to memorize. Some verbs take the infinitive (e.g. cease to do something), some take the gerund (e.g. stop doing something). We get in the car, but on the bus. My copy of Michael Swan's Practical English Usage has 654 pages of examples like these and it does not cover everything I know about English. (In contrast, all the information you need to swim perfectly can probably be written on a few pages; the rest is a matter of practice.)
It is obvious that an efficient language learning method must somehow supply all this information to the learner. In other words, the learner needs a huge number of correct examples of the target language. In the feedback-based method, he gets correct examples when the teacher corrects him (and tells him how to say something) and when he listens to his own speech (to the extent that his sentences are correct).
The problem is that the flow of information is too slow. When you speak in a language you don't know well, you speak slowly, with many pauses. Before you open your mouth, you have to think what to say. Often, in the middle of the sentence, you start over when you realize you've made a mistake. There are also delays due to teacher-student interaction. All this means that the amount of correct language that reaches the learner's ears is quite small.
I did an informal experiment to measure this. First, I read a simplified German book for 15 minutes, using the "pause and think" method (paying attention to grammar and usage). I managed to read 55 sentences (476 words) before the time ran out. Then, I recorded myself talking about the story for 15 minutes. I constructed 21 German sentences (196 words).
In terms of the sheer amount of input you get, 4 language classes per week (45 minutes each) would be equivalent to reading a book for 74 minutes. That's about 10 minutes a day, 7 days a week. A whole school year (9 months) of classes (4 classes per week) provides about the same amount of input as a single 250-page book (84,000 words). Of course, these rough estimates assume classes where students speak all the time — in real classes, a great deal of time is simply wasted; a little time is also spent on pure input (reading a short article in the coursebook, listening to a short recording).
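For the curious, the arithmetic behind these estimates is easy to check. Here is a rough back-of-the-envelope sketch in Python; the 36-week school year and the ~336 words per page of a 250-page book are assumptions inferred from the figures above:

    # Input rates from the informal 15-minute experiment above
    reading_wpm = 476 / 15     # ~31.7 words of correct input per minute (reading)
    speaking_wpm = 196 / 15    # ~13.1 words per minute (speaking)

    # Four 45-minute classes per week, generously assuming nonstop speaking
    class_minutes_per_week = 4 * 45                           # 180 minutes
    words_per_week = class_minutes_per_week * speaking_wpm    # ~2,350 words

    # Minutes of reading needed to get the same number of words
    equivalent_reading = words_per_week / reading_wpm         # ~74 minutes/week,
                                                              # i.e. ~10 min/day

    # A 9-month school year (~36 weeks of classes) vs. a 250-page book
    school_year_words = 36 * words_per_week                   # ~84,700 words
    book_words = 250 * 336                                    # 84,000 words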

It requires a competent teacher

As the method is based on errors and feedback, it requires a competent feedback-giver. It is not clear where the learner is supposed to find such a person outside of the classroom. If he tries to interact with native speakers in real life or on the Internet, he will find that native speakers generally won't correct his mistakes, as long as they can understand him. So grammar like this (example by Johnny) is likely to go unchallenged outside of the classroom:
that not truthful really, i now am answer and express my thinking, and you understand myself on this replying, i am no doubt about it, but if you don't correct my false words in this post then i keep practice false english and means i am learn this crap english.
The learner can of course "stick to the classroom", but making any real progress will require more than 8 classes a week. Such intensive courses are quite expensive and impractical for someone who has to go to work, school or college. It seems that a method based on "free" reading and listening will be both cheaper and more flexible.
Finally, there is also the problem of finding good teachers. While I have so far assumed that teacher feedback is perfect, the reality is that teachers will ignore many mistakes due to insufficient attention, knowledge or time. Non-native teachers will sometimes object to correct sentences, or their "corrections" may be wrong themselves. My experience in the Polish educational system shows that very few non-native teachers are a reliable source of correct input (not to mention pronunciation, which is a typical Achilles' heel).

It can reinforce mistakes

Every time you say an incorrect sentence, your brain gets used to it, and it becomes more likely that you will make the same mistake again. We are all familiar with the "fossilized" English learner who always says things like He make, She work, and no amount of corrections seems to help. This is what happens when you've been listening to your own version of English grammar for a long time.
Therefore, the output-based learning method actually contains two feedback loops:
  1. When the teacher corrects you and you try to remember that a given phrase is wrong (this correction will not always occur).
  2. Whenever you say an incorrect sentence.
The second effect (reinforcement) is probably weaker than the first one (correction), but it further slows down learning methods that are based on speaking. If a learner spends most of his time listening to bad grammar (produced by himself and other students), and he gets little other input, one cannot expect good results.

Final word

Language learning does not have to be based on speaking, mistakes, and repeated correction. Indeed, if your goal is good English — that is, if you want to be able to speak and write in English with few mistakes and understand English-language television — the feedback-based method is the wrong way to do it. It builds your knowledge very slowly and depends on a good instructor. As a result, only intensive, long-term courses with competent teachers can give satisfactory results, but these are very expensive and very impractical.
The alternative — input-based learning (more specifically, the Antimoon Method) — does not rely on mistakes and corrections. It gives you more information in less time and enables you to build your English whenever you want to, for as long as you want. On the other hand, it requires that you enjoy reading books in English or watching English-language programs, and that you apply the principles of careful reading and writing.

 http://www.antimoon.com/how/mistakes-in-learning.htm

Monday, March 21, 2011

Why You Should Celebrate Your Mistakes


When you make a mistake, big or small, cherish it like it’s the most precious thing in the world. Because in some ways, it is.
Most of us feel bad when we make mistakes, beat ourselves up about it, feel like failures, get mad at ourselves.
And that’s only natural: most of us have been taught from a young age that mistakes are bad, that we should try to avoid mistakes. We’ve been scolded when we make mistakes — at home, school and work. Maybe not always, but probably enough times to make feeling bad about mistakes an unconscious reaction.
Yet without mistakes, we could not learn or grow.
If you think about it that way, mistakes should be cherished and celebrated for being one of the most amazing things in the world: they make learning possible, they make growth and improvement possible.
By trial and error — trying things, making mistakes, and learning from those mistakes — we have figured out how to make electric light, to paint the ceiling of the Sistine Chapel, to fly.
Mistakes make walking possible for the smallest toddler, make speech possible, make works of genius possible.
Think about how we learn: we don’t just consume information about something and instantly know it or know how to do it. You don’t just read about painting, or writing, or computer programming, or baking, or playing the piano, and know how to do them right away.
Instead, you get information about something, from reading or from another person or from observing usually … then you construct a model in your mind … then you test it out by trying it in the real world … then you make mistakes … then you revise the model based on the results of your real-world experimentation … and repeat, making mistakes, learning from those mistakes, until you’ve pretty much learned how to do something.
That’s how we learn as babies and toddlers, and how we learn as adults. Trial and error, learning something new from each error.
Mistakes are how we learn to do something new — because if you succeed at something, it’s probably something you already knew how to do. You haven’t really grown much from that success — at most it’s the last step on your journey, not the whole journey. Most of the journey was made up of mistakes, if it’s a good journey.
So if you value learning, if you value growing and improving, then you should value mistakes. They are amazing things that make a world of brilliance possible.
Celebrate your mistakes. Cherish them. Smile.

Thursday, March 17, 2011

7 Business Mistakes That Nearly Broke Me… Literally


Over the past 6 years I have founded 9 .com companies. Most of the companies failed miserably and lost me a million dollars or so, but luckily a few of them did well enough to cover my losses. The main reason I had a lot of unsuccessful ventures is that I made some really big mistakes. Hopefully you can learn from my mistakes and not make them.

Don’t spread yourself too thin

A lot of good opportunities will come your way and your gut reaction will be to take them all, but don't. I made this mistake, and all of my businesses suffered, even the ones that were doing well. Within 3 to 6 months of spreading myself too thin, every one of them was struggling because I hadn't spent enough time on each, even though I had employees and business partners who were helping me out. Remember, no matter how big or small your business is, you have to spend all of your time on it.

Pick the right type of incorporation

Not only is it important for you to incorporate your business, but it is important to get the right type of incorporation. It may not seem important when you are starting your company, but once you start making money it becomes a huge deal. For example, my accountant tells me that if your company makes under $200,000 in profit a year, a C corporation is good for you due to tax benefits. If you make more than that, consider getting an S corporation or a limited liability company.
I found this out the hard way: when my company started making seven figures a year in profit, I had the wrong type of corporation. I ended up getting taxed twice; the company paid taxes on the profit, and then I paid taxes on the dividends I received from the company.
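To see why this matters at that scale, here is a simplified sketch of the double-taxation effect described above. The rates are hypothetical round numbers chosen for illustration, not actual tax rates:

    # Hypothetical illustration of double taxation (all rates are made up)
    profit = 1_000_000
    corporate_rate = 0.35    # assumed corporate income tax rate
    dividend_rate = 0.15     # assumed tax rate on dividends
    personal_rate = 0.35     # assumed personal income tax rate

    # C corporation: profit is taxed, then the dividend is taxed again
    after_corp_tax = profit * (1 - corporate_rate)          # 650,000
    owner_keeps_c = after_corp_tax * (1 - dividend_rate)    # 552,500

    # Pass-through (S corp / LLC): profit is taxed once, at the personal rate
    owner_keeps_passthrough = profit * (1 - personal_rate)  # 650,000

    # The ~97,500 gap is the cost of the double tax in this example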

Be careful whom you trust

I had been working with a few developers and engineers for 3 months when they came to me with a business proposition. Those first 3 months had gone well, so I decided to hear them out. The business opportunity they presented was supposed to revolutionize the hosting industry, but the catch was, they were broke. After hearing them out I ended up giving them some money for living expenses, I bought them a house to live in, and I was dumping 4 to 5 figures into the company every week.
To make a long story short, they screwed me out of my money, stole stuff from the company, and ruined the house I bought. No matter how well you think you know someone, be careful, because you don't know who is going to screw you over. One stupid mistake can cost you thousands of dollars.

Have a thorough hiring process

If you really want to grow your company, you will need employees. You’ll probably post some job openings to find these employees or ask a few friends if they know anyone that you could hire. No matter how you get your applicants, go through each person with a fine-tooth comb. If a friend recommends someone, it doesn’t mean that they’re a good hire. I made this mistake multiple times by hiring developers, designers, and sales people that friends recommended. Just because someone did well at their last job, it doesn’t mean they will do well working for you.

Make your employees accountable

If you aren’t strict with your employees they will start slacking off after a while. And if you travel frequently like I did, they will really start messing around when you aren’t there. The first day a new employee starts, you need to be strict with them because it is hard to get employees out of bad habits.
During the first few years of being an entrepreneur, I didn’t hold any of my employees accountable. Every time they told me something, I took their word for it. On top of that I wasn’t strict when people came in 30 minutes late and after a while it became a daily habit.
To solve this I started using project management software, like Liquid Planner, and I made each one of my employees upload what they completed at the end of each day. This way I could keep track of what each employee did.
On top of that I made everyone clock in and out. This allowed me to see who came in on time and who didn't. At first I didn't think it was a big deal that a few of my employees came in 30 minutes late each day, but over the course of a year it added up to 3 missed weeks.
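That "3 missed weeks" figure is simple arithmetic. Here is the calculation, assuming roughly 250 working days a year and a 40-hour work week (both assumptions, since the post doesn't state them):

    # How 30 minutes of daily lateness compounds over a year
    minutes_late_per_day = 30
    working_days_per_year = 250    # assumption
    lost_hours = minutes_late_per_day * working_days_per_year / 60  # 125 hours
    lost_weeks = lost_hours / 40   # in 40-hour weeks -> ~3.1 weeks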

Collect your money on time

Making money may seem like a hard thing, but collecting money from people can be even harder. In 2008 I worked for 8 companies that never ended up paying me. This wasn't a few dollars either; a few companies still owe me over $100,000, and sadly I don't think they will ever pay.
With your business, collect the money before you provide anything. No matter how large a company may be, it can still go bankrupt. Things like contracts and debt collectors won't help you much if the company doesn't have money.
And if for some reason someone owes you money, whine to them until you get paid. If you act like you don't need the money, you'll never get paid. But if you whine and act like you are in financial trouble, hopefully they will feel sorry for you and pay.

Time is not on your side

Especially with your first company, you will want everything to be perfect. The reality is, there will always be problems and nothing will ever be perfect. So instead of trying to make everything perfect, just launch your company before someone else beats you to the punch.
With my first software company, I wanted the software to be perfect before I launched it. I took so long in trying to make the software perfect that Google launched a competing product before I did. After that I had no chance of succeeding because Google’s product was free and mine wasn’t.

Conclusion

No matter what, you are going to make mistakes in life. Even if I told you every mistake that I made, you would still make more. If you want to succeed you can't give up! Sooner or later you will do well, but like anything worthwhile, it takes time.
http://www.quicksprout.com/2009/01/14/7-business-mistakes-that-nearly-made-me-go-broke/

Friday, March 11, 2011

The Worst Mistake in the History of the Human Race


To science we owe dramatic changes in our smug self-image. Astronomy taught us that our earth isn't the center of the universe but merely one of billions of heavenly bodies. From biology we learned that we weren't specially created by God but evolved along with millions of other species. Now archaeology is demolishing another sacred belief: that human history over the past million years has been a long tale of progress. In particular, recent discoveries suggest that the adoption of agriculture, supposedly our most decisive step toward a better life, was in many ways a catastrophe from which we have never recovered. With agriculture came the gross social and sexual inequality, the disease and despotism, that curse our existence.


At first, the evidence against this revisionist interpretation will strike twentieth century Americans as irrefutable. We're better off in almost every respect than people of the Middle Ages, who in turn had it easier than cavemen, who in turn were better off than apes. Just count our advantages. We enjoy the most abundant and varied foods, the best tools and material goods, some of the longest and healthiest lives, in history. Most of us are safe from starvation and predators. We get our energy from oil and machines, not from our sweat. What neo-Luddite among us would trade his life for that of a medieval peasant, a caveman, or an ape?
For most of our history we supported ourselves by hunting and gathering: we hunted wild animals and foraged for wild plants. It's a life that philosophers have traditionally regarded as nasty, brutish, and short. Since no food is grown and little is stored, there is (in this view) no respite from the struggle that starts anew each day to find wild foods and avoid starving. Our escape from this misery was facilitated only 10,000 years ago, when in different parts of the world people began to domesticate plants and animals. The agricultural revolution spread until today it's nearly universal and few tribes of hunter-gatherers survive.
From the progressivist perspective on which I was brought up, to ask "Why did almost all our hunter-gatherer ancestors adopt agriculture?" is silly. Of course they adopted it because agriculture is an efficient way to get more food for less work. Planted crops yield far more tons per acre than roots and berries. Just imagine a band of savages, exhausted from searching for nuts or chasing wild animals, suddenly grazing for the first time at a fruit-laden orchard or a pasture full of sheep. How many milliseconds do you think it would take them to appreciate the advantages of agriculture?
The progressivist party line sometimes even goes so far as to credit agriculture with the remarkable flowering of art that has taken place over the past few thousand years. Since crops can be stored, and since it takes less time to pick food from a garden than to find it in the wild, agriculture gave us free time that hunter-gatherers never had. Thus it was agriculture that enabled us to build the Parthenon and compose the B-minor Mass.
While the case for the progressivist view seems overwhelming, it's hard to prove. How do you show that the lives of people 10,000 years ago got better when they abandoned hunting and gathering for farming? Until recently, archaeologists had to resort to indirect tests, whose results (surprisingly) failed to support the progressivist view. Here's one example of an indirect test: Are twentieth century hunter-gatherers really worse off than farmers? Scattered throughout the world, several dozen groups of so-called primitive people, like the Kalahari bushmen, continue to support themselves that way. It turns out that these people have plenty of leisure time, sleep a good deal, and work less hard than their farming neighbors. For instance, the average time devoted each week to obtaining food is only 12 to 19 hours for one group of Bushmen, 14 hours or less for the Hadza nomads of Tanzania. One Bushman, when asked why he hadn't emulated neighboring tribes by adopting agriculture, replied, "Why should we, when there are so many mongongo nuts in the world?"
While farmers concentrate on high-carbohydrate crops like rice and potatoes, the mix of wild plants and animals in the diets of surviving hunter-gatherers provides more protein and a better balance of other nutrients. In one study, the Bushmen's average daily food intake (during a month when food was plentiful) was 2,140 calories and 93 grams of protein, considerably greater than the recommended daily allowance for people of their size. It's almost inconceivable that Bushmen, who eat 75 or so wild plants, could die of starvation the way hundreds of thousands of Irish farmers and their families did during the potato famine of the 1840s.
So the lives of at least the surviving hunter-gatherers aren't nasty and brutish, even though farmers have pushed them into some of the world's worst real estate. But modern hunter-gatherer societies that have rubbed shoulders with farming societies for thousands of years don't tell us about conditions before the agricultural revolution. The progressivist view is really making a claim about the distant past: that the lives of primitive people improved when they switched from gathering to farming. Archaeologists can date that switch by distinguishing remains of wild plants and animals from those of domesticated ones in prehistoric garbage dumps.
How can one deduce the health of the prehistoric garbage makers, and thereby directly test the progressivist view? That question has become answerable only in recent years, in part through the newly emerging techniques of paleopathology, the study of signs of disease in the remains of ancient peoples.
In some lucky situations, the paleopathologist has almost as much material to study as a pathologist today. For example, archaeologists in the Chilean deserts found well preserved mummies whose medical conditions at time of death could be determined by autopsy (Discover, October). And feces of long-dead Indians who lived in dry caves in Nevada remain sufficiently well preserved to be examined for hookworm and other parasites.
Usually the only human remains available for study are skeletons, but they permit a surprising number of deductions. To begin with, a skeleton reveals its owner's sex, weight, and approximate age. In the few cases where there are many skeletons, one can construct mortality tables like the ones life insurance companies use to calculate expected life span and risk of death at any given age. Paleopathologists can also calculate growth rates by measuring bones of people of different ages, examine teeth for enamel defects (signs of childhood malnutrition), and recognize scars left on bones by anemia, tuberculosis, leprosy, and other diseases.
One straightforward example of what paleopathologists have learned from skeletons concerns historical changes in height. Skeletons from Greece and Turkey show that the average height of hunter-gatherers toward the end of the ice ages was a generous 5' 9'' for men, 5' 5'' for women. With the adoption of agriculture, height crashed, and by 3000 B. C. had reached a low of only 5' 3'' for men, 5' for women. By classical times heights were very slowly on the rise again, but modern Greeks and Turks have still not regained the average height of their distant ancestors.
Another example of paleopathology at work is the study of Indian skeletons from burial mounds in the Illinois and Ohio river valleys. At Dickson Mounds, located near the confluence of the Spoon and Illinois rivers, archaeologists have excavated some 800 skeletons that paint a picture of the health changes that occurred when a hunter-gatherer culture gave way to intensive maize farming around A. D. 1150. Studies by George Armelagos and his colleagues then at the University of Massachusetts show these early farmers paid a price for their new-found livelihood. Compared to the hunter-gatherers who preceded them, the farmers had a nearly 50 per cent increase in enamel defects indicative of malnutrition, a fourfold increase in iron-deficiency anemia (evidenced by a bone condition called porotic hyperostosis), a threefold rise in bone lesions reflecting infectious disease in general, and an increase in degenerative conditions of the spine, probably reflecting a lot of hard physical labor. "Life expectancy at birth in the pre-agricultural community was about twenty-six years," says Armelagos, "but in the post-agricultural community it was nineteen years. So these episodes of nutritional stress and infectious disease were seriously affecting their ability to survive."
The evidence suggests that the Indians at Dickson Mounds, like many other primitive peoples, took up farming not by choice but from necessity in order to feed their constantly growing numbers. "I don't think most hunter-gatherers farmed until they had to, and when they switched to farming they traded quality for quantity," says Mark Cohen of the State University of New York at Plattsburgh, co-editor, with Armelagos, of one of the seminal books in the field, Paleopathology at the Origins of Agriculture. "When I first started making that argument ten years ago, not many people agreed with me. Now it's become a respectable, albeit controversial, side of the debate."
There are at least three sets of reasons to explain the findings that agriculture was bad for health. First, hunter-gatherers enjoyed a varied diet, while early farmers obtained most of their food from one or a few starchy crops. The farmers gained cheap calories at the cost of poor nutrition. (Today just three high-carbohydrate plants -- wheat, rice, and corn -- provide the bulk of the calories consumed by the human species, yet each one is deficient in certain vitamins or amino acids essential to life.) Second, because of dependence on a limited number of crops, farmers ran the risk of starvation if one crop failed. Finally, the mere fact that agriculture encouraged people to clump together in crowded societies, many of which then carried on trade with other crowded societies, led to the spread of parasites and infectious disease. (Some archaeologists think it was the crowding, rather than agriculture, that promoted disease, but this is a chicken-and-egg argument, because crowding encourages agriculture and vice versa.) Epidemics couldn't take hold when populations were scattered in small bands that constantly shifted camp. Tuberculosis and diarrheal disease had to await the rise of farming, measles and bubonic plague the appearance of large cities.
Besides malnutrition, starvation, and epidemic diseases, farming helped bring another curse upon humanity: deep class divisions. Hunter-gatherers have little or no stored food, and no concentrated food sources, like an orchard or a herd of cows: they live off the wild plants and animals they obtain each day. Therefore, there can be no kings, no class of social parasites who grow fat on food seized from others. Only in a farming population could a healthy, non-producing elite set itself above the disease-ridden masses. Skeletons from Greek tombs at Mycenae c. 1500 B. C. suggest that royals enjoyed a better diet than commoners, since the royal skeletons were two or three inches taller and had better teeth (on the average, one instead of six cavities or missing teeth). Among Chilean mummies from c. A. D. 1000, the elite were distinguished not only by ornaments and gold hair clips but also by a fourfold lower rate of bone lesions caused by disease.
Similar contrasts in nutrition and health persist on a global scale today. To people in rich countries like the U. S., it sounds ridiculous to extol the virtues of hunting and gathering. But Americans are an elite, dependent on oil and minerals that must often be imported from countries with poorer health and nutrition. If one could choose between being a peasant farmer in Ethiopia or a bushman gatherer in the Kalahari, which do you think would be the better choice?
Farming may have encouraged inequality between the sexes, as well. Freed from the need to transport their babies during a nomadic existence, and under pressure to produce more hands to till the fields, farming women tended to have more frequent pregnancies than their hunter-gatherer counterparts -- with consequent drains on their health. Among the Chilean mummies for example, more women than men had bone lesions from infectious disease.
Women in agricultural societies were sometimes made beasts of burden. In New Guinea farming communities today I often see women staggering under loads of vegetables and firewood while the men walk empty-handed. Once while on a field trip there studying birds, I offered to pay some villagers to carry supplies from an airstrip to my mountain camp. The heaviest item was a 110-pound bag of rice, which I lashed to a pole and assigned to a team of four men to shoulder together. When I eventually caught up with the villagers, the men were carrying light loads, while one small woman weighing less than the bag of rice was bent under it, supporting its weight by a cord across her temples.
As for the claim that agriculture encouraged the flowering of art by providing us with leisure time, modern hunter-gatherers have at least as much free time as do farmers. The whole emphasis on leisure time as a critical factor seems to me misguided. Gorillas have had ample free time to build their own Parthenon, had they wanted to. While post-agricultural technological advances did make new art forms possible and preservation of art easier, great paintings and sculptures were already being produced by hunter-gatherers 15,000 years ago, and were still being produced as recently as the last century by such hunter-gatherers as some Eskimos and the Indians of the Pacific Northwest.
Thus with the advent of agriculture an elite became better off, but most people became worse off. Instead of swallowing the progressivist party line that we chose agriculture because it was good for us, we must ask how we got trapped by it despite its pitfalls.
One answer boils down to the adage "Might makes right." Farming could support many more people than hunting, albeit with a poorer quality of life. (Population densities of hunter-gatherers are rarely over one person per ten square miles, while farmers average 100 times that.) Partly, this is because a field planted entirely in edible crops lets one feed far more mouths than a forest with scattered edible plants. Partly, too, it's because nomadic hunter-gatherers have to keep their children spaced at four-year intervals by infanticide and other means, since a mother must carry her toddler until it's old enough to keep up with the adults. Because farm women don't have that burden, they can and often do bear a child every two years.
As population densities of hunter-gatherers slowly rose at the end of the ice ages, bands had to choose between feeding more mouths by taking the first steps toward agriculture, or else finding ways to limit growth. Some bands chose the former solution, unable to anticipate the evils of farming, and seduced by the transient abundance they enjoyed until population growth caught up with increased food production. Such bands outbred and then drove off or killed the bands that chose to remain hunter-gatherers, because a hundred malnourished farmers can still outfight one healthy hunter. It's not that hunter-gatherers abandoned their life style, but that those sensible enough not to abandon it were forced out of all areas except the ones farmers didn't want.
At this point it's instructive to recall the common complaint that archaeology is a luxury, concerned with the remote past, and offering no lessons for the present. Archaeologists studying the rise of farming have reconstructed a crucial stage at which we made the worst mistake in human history. Forced to choose between limiting population or trying to increase food production, we chose the latter and ended up with starvation, warfare, and tyranny.
Hunter-gatherers practiced the most successful and longest-lasting life style in human history. In contrast, we're still struggling with the mess into which agriculture has tumbled us, and it's unclear whether we can solve it. Suppose that an archaeologist who had visited from outer space were trying to explain human history to his fellow spacelings. He might illustrate the results of his digs by a 24-hour clock on which one hour represents 100,000 years of real past time. If the history of the human race began at midnight, then we would now be almost at the end of our first day. We lived as hunter-gatherers for nearly the whole of that day, from midnight through dawn, noon, and sunset. Finally, at 11:54 p. m. we adopted agriculture. As our second midnight approaches, will the plight of famine-stricken peasants gradually spread to engulf us all? Or will we somehow achieve those seductive blessings that we imagine behind agriculture's glittering facade, and that have so far eluded us?

Tuesday, March 8, 2011

Learning From Mistakes Only Works After Age 12, Study Suggests


Eight-year-old children have a radically different learning strategy from twelve-year-olds and adults. Eight-year-olds learn primarily from positive feedback ('Well done!'), whereas negative feedback ('Got it wrong this time') scarcely causes any alarm bells to ring.  Twelve-year-olds are better able to process negative feedback, and use it to learn from their mistakes.  Adults do the same, but more efficiently.


Brain areas for cognitive control
The switch in learning strategy has been demonstrated in behavioural research, which shows that eight-year-olds respond disproportionately inaccurately to negative feedback. But the switch can also be seen in the brain, as developmental psychologist Dr Eveline Crone and her colleagues from the Leiden Brain and Cognition Lab discovered using fMRI research.  The difference can be observed particularly in the areas of the brain responsible for cognitive control. These areas are located in the cerebral cortex.
Opposite case
In children of eight and nine, these areas of the brain react strongly to positive feedback and scarcely respond at all to negative feedback.  But in children of 12 and 13, and also in adults, the opposite is the case.  Their 'control centres' in the brain are more strongly activated by negative feedback and much less by positive feedback.
Three-way division
Crone and her colleagues used fMRI research to compare the brains of three different age groups: children of eight to nine years, children of eleven to twelve years, and adults aged between 18 and 25 years.  This three-way division had never been made before; the comparison is generally made between children and adults. 
Unexpected
Crone herself was surprised at the outcome: 'We had expected that the brains of eight-year-olds would function in exactly the same way as the brains of twelve-year-olds, but maybe not quite so well.  Children learn the whole time, so this new knowledge can have major consequences for people wanting to teach children: how can you best relay instructions to eight- and twelve-year-olds?'
Ticks and crosses
The researchers gave children of both age groups and adults aged 18 to 25 a computer task while they lay in the MRI scanner.  The task required them to discover rules.  If they did this correctly, a tick appeared on the screen, otherwise a cross appeared.  MRI scans showed which parts of the brain were activated.
Learning in a different way
These surprising results set Crone thinking. 'You start to think less in terms of 'good' and 'not so good'.  Children of eight may well be able to learn extremely efficiently, only they do it in a different way.'
Learning from mistakes is complicated
She is able to place her fMRI results within the existing knowledge about child development. 'From the literature, it appears that young children respond better to reward than to punishment.' She can also imagine how this comes about: 'The information that you have not done something well is more complicated than the information that you have done something well.  Learning from mistakes is more complex than carrying on in the same way as before. You have to ask yourself what precisely went wrong and how it was possible.'
Is it experience?
Is that difference between eight- and twelve-year-olds the result of experience, or does it have to do with the way the brain develops?  As yet, nobody has the answer.  'This kind of brain research has only been possible for the last ten years or so,' says Crone, 'and there are a lot more questions which have to be answered. But it is probably a combination of the brain maturing and experience.'
Brain area for positive feedback
There is also an area of the brain that responds strongly to positive feedback: the basal ganglia, just outside the cerebral cortex.  The activity of this area of the brain does not change.  It remains active in all age groups: in adults, but also in children, both eight-year-olds and twelve-year-olds.

http://www.sciencedaily.com/releases/2008/09/080925104309.htm
