Review: “Bugs” in programs and “errors” in texts

    Vlad Borkus, in his blog on ITBlogs.ru (itblogs.ru/blogs/borkus/default.aspx), wrote a note titled "Review: 'Bugs' in programs and 'errors' in texts", which describes how journalists worry about their typos and errors, how they polish their articles, and how ashamed they are when mistakes do slip through. Then he gives some interesting numbers:

    About the press: At the same time, let's look at the facts. An issue of 40 to 116 pages, with 12,000 characters per page (on average about 450,000 characters, 100,000 words, roughly 7-10 thousand lines, 10,000 sentences), was produced in a week by about 30 people. In total, we made approximately 0.3 serious errors per 1,000 lines. Multiply or divide by a factor of 2.

    About software: Now let's look at the results of the IT sector, about which there were complaints. Programs are created over years, and according to some estimates (see examples at www.osp.ru/os/2005/04/185558 ) the number of errors in commercial products (final release) is up to 0.5 per 1,000 lines.
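The arithmetic behind the comparison can be sketched quickly. This is my own rough illustration using only the figures quoted above (midpoint line counts, the cited per-1,000-line rates); none of the numbers beyond those are from the post:

```python
# Back-of-the-envelope check of the two error rates quoted above.
# All inputs come from the figures cited in the post; midpoints are my assumption.

lines_per_issue = 8_500          # "about 7-10 thousand lines" per weekly issue (midpoint)
issues_per_year = 52
press_errors_per_kloc = 0.3      # serious errors per 1,000 lines ("multiply or divide by 2")

software_kloc = 300              # "hundreds of thousands of lines" for a typical program
software_errors_per_kloc = 0.5   # commercial final-release estimate cited in the post

press_lines_per_year = lines_per_issue * issues_per_year
press_errors_per_year = press_lines_per_year / 1000 * press_errors_per_kloc
software_errors = software_kloc * software_errors_per_kloc

print(f"press: ~{press_lines_per_year} lines/year, ~{press_errors_per_year:.0f} serious errors")
print(f"software: ~{software_kloc * 1000} lines, ~{software_errors:.0f} errors at release")
```

On these inputs a year's worth of magazine text and a typical program land within a factor of two of each other, both in volume and in error count, which is exactly why the comparison looks tempting.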

    ... And although the volume of a typical program is comparable to a year's stack of magazines (hundreds of thousands of lines), ... the errors still remain. And yet hardly anyone tells programmers: be ashamed of your work, how could you! It's simply: fix the bug, guys.


    Of course, his article is really about a somewhat different question, but the idea of such a comparison seemed to me worth commenting on. So:

    It seems to me that it is wrong to compare errors in printed materials and programs.

    The typos Vlad talks about, in programming... it's not just that they don't occur; nobody even gets punished for them, since almost all of them are caught at the compilation stage.
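This point can be shown mechanically. A minimal sketch (my own illustration, not from the original post), using Python's built-in `compile()` as the "compilation stage": a syntax-level typo is rejected before the code ever runs, unlike a typo in prose, which sails past the reader.

```python
# A syntax-level typo (missing closing parenthesis) never reaches execution:
# the parser rejects it at the "compilation" stage.
source_with_typo = "print('hello'"   # deliberate typo: unclosed call

try:
    compile(source_with_typo, "<example>", "exec")
    caught = False
except SyntaxError:
    caught = True

print(caught)  # True: the typo is caught mechanically, before the program runs
```

Of course, only syntactic typos are caught this cheaply; a typo that still parses (say, `+` instead of `-`) survives compilation just as a semantic error survives proofreading.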

    The next line of defense is the compensatory mechanism of the human brain, which, while reading, automatically converts typos back into the intended concepts, often without informing the conscious mind about it.

    Then there is the fact that a person absorbs, on average, only 5-10 percent of any article; well, 50, if reading very carefully. So even semantic errors, as a rule, slip past the reader's attention and affect nothing.

    And finally, most importantly: imagine that everyone reading an article in a newspaper or magazine began, like a mindless machine, to follow what they read to the letter. Can you imagine what a nightmare that would be? Goebbels's fondest dream, if, of course, humanity had survived it.

    Human language is, by definition, designed for noisy channels with not-entirely-trustworthy sources of information. Computer languages are designed for secure, reliable channels with signal sources you trust as you trust yourself. A huge difference, you'll agree. I suspect that if human language were used the same way, even the most polished article in a reputable magazine would still produce more "blue screens" than all of Windows, which, by the way, already has very few of them.

    Which leaves the natural question: can programs be made as "tolerant" of noisy channels as people are? They can. Probably. Someday. But then using programs will be as hard as managing people. Anyone who has managed people knows.

    The original, as always, is on the blog "Thoughts that could not be kept in my head"...
