"There is no absolute knowledge. And those who claim it, whether they are scientists or dogmatists, open the door to tragedy. All information is imperfect. We have to treat it with humility. That is the human condition." (Jacob Bronowski)
It is this fact that makes my work, and the work of all scientists, so incomprehensible to those with little or no sympathy for science, and with good reason. Why should we remain unsure of anything? Is such uncertainty not merely a perpetuation of our ignorance? Why is there no One Truth of God, no single paragon to which Plato aspired, no perfect yardstick to measure all things? We all want answers now; we want a solution to a problem that will work immediately and without chance of error. The long and difficult road of experimentation, fraught with failure at every turn and often ending in a graveyard of disproved notions, is for many simply too frustrating to contend with. So they turn instead to figures of far greater danger, those who claim concrete and absolute knowledge: politicians, dogmatists. But science doesn't work like that. Like the process of evolution itself, we as scientists must build upon what is already available and improve upon it as our knowledge grows. We are not here to accept all of what we know, but to question it in the hope of making it better.
Let me give an amusing example from my own experience to illustrate this point. In high school biology, I did a lab exercise that explored the effects of allelopathic chemicals, or plant toxins, on germinating seeds. The class had a good idea of what to expect, for we had been told that chemicals exuded by black walnuts or pines will kill plants that try to grow around them. Each group designed an experiment that would test the effects of the chemicals on shoot growth of sprouting beans. I will ignore the fact that our protocol was crude by necessity; the qualitative "chemical bath" for each separate experimental condition was simply the water from boiled pine needles, black walnuts, and wild-onion grass. Let's just say we didn't exactly have the best materials or methods of measurement. But in any event, we watered each tray of beans and sat back to wait for the experimental plants to not grow. And yet, obstinate things that they were, all the beans grew to nearly equal, healthy heights.
This was rather disquieting. Obviously I must have done something wrong. Anxious for my grade, I squirmed to justify my "error," squashing the information into a standard answer that might explain the anomaly: the data must be lying; only human error can account for the difference I am seeing, and so on. But it wasn't until I began to dump out the "failed" trays of seedlings in discouragement that I noticed a curious and exciting difference: in comparison with the controls, the experimental seedlings all had spindly, weak roots. In our experiment, at least, the chemical baths did appear to have a significant effect, not on shoot growth, but on root growth! Of course, by then it was too late to rewrite my lab report, nor did I have time to design a whole new experiment to test a revised hypothesis, but the experience did teach me one thing: I must never make excuses for discrepancies between expectation and results. Being wrong in my original assumption led me in another direction entirely, and was my first step toward being less wrong, based on my new information.
This is all well and good. But having to be uncertain, having to, as Bronowski said, view all information as imperfect, is a concept that not everyone is willing to accept. A week or two after I drew that little cartoon you have seen, someone else drew a big X through the quote "there is no such thing as right" and added at the bottom, "but is there such a thing as wrong?" I shortsightedly erased the whole drawing then, thinking at the time that defacing my picture was a rather rude thing for the person to do. But on reflection, I wish I had left it up, because I can understand why they did it. I do not blame anyone (nor do I discourage anyone) for disagreeing with the quote, for the simple reason that nobody likes to be wrong. Humans hate to think that they are indeed fallible, that they are often mistaken. It's an unpleasant thing to have to admit. As I mentioned before, if we can know when we're being less wrong, then why can't we set a measure of perfection for ourselves; why not hold some one standard as a perfect yardstick?
I remember Dr. Grobstein at his Neurobiology lectures, waiting for a response from a nervous class that was either unwilling to answer his questions or afraid to, with that ancient, deep-seated fear of making a mistake. The good professor would look at all of us callow students in turn and remind us gently that this was our chance for the week to be wrong. Such an idea was new to me, that it was okay, even accepted, that I should be wrong. It has taken me years of practice, and years of unlearning, to admit that it is not my business to be right, but that I must instead strive to be less wrong, for as things are, there is no better method. To assume the opposite, to believe that what we know now is the only immutable Truth, is to invite catastrophe. In science there are few hard truths, and we would never learn anything more if we ceased to question what we know.
It is this constant flux, this nature of ceaseless observation, which fascinates me. Science is not mindless data collecting, writing down numbers hour after hour and never questioning what they mean. It is being able to look at these data, to take them apart and put them together again in new ways that may lead to something beautiful. For there is beauty in all things, in the smallest atom, in the largest star, in knowing that at this moment, the cellular cascades that you have learned about in lecture, many of which are not yet well understood, are quickening in your body right now. There is a strange satisfaction in meditating on all that is unknown, and thinking of what questions we must ask in order to come to a better understanding.