


  A Golden Mean in Your Mouth

  Eddy Levin of Harley Street in London puts a golden ratio, not just golden teeth, into people’s mouths. Dr Levin has been at this for a while. It was he who wrote a study called ‘Dental Esthetics and the Golden Proportion’, which graced pages 244–252 of the September 1978 issue of the Journal of Prosthetic Dentistry.

  The golden ratio is a special number that has caught the eye and imagination of mathematicians, artists, and now, thanks to Levin, dentists. Some call it the ‘golden mean’ (though philosophers use that phrase to mean something else); some call it the ‘golden section’. Some Germans call it, evocatively, the ‘Goldener Schnitt’. Most everyone calls it beautiful.

  The golden ratio is the number you get when you compare the lengths of certain parts of certain perfectly beautiful things (among them: snail shell spirals, the Parthenon in Athens, and Da Vinci’s ‘The Last Supper’). You’ll find that the ratio of the bigger part to the smaller equals the ratio of the combined length to the bigger. That ratio, that number, is always the same, ever so slightly bigger than 1.6180339.
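
  For readers who do want the sum, here is the standard derivation, restated from the definition above (nothing in it is specific to Levin’s paper):

```latex
% Let the bigger part have length a, the smaller b, and write phi = a/b.
% The defining condition of the golden ratio:
\frac{a}{b} = \frac{a+b}{a}
\quad\Longrightarrow\quad
\varphi = 1 + \frac{1}{\varphi}
\quad\Longrightarrow\quad
\varphi^{2} - \varphi - 1 = 0
\quad\Longrightarrow\quad
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.6180339887\ldots
% Only the positive root is kept, since lengths are positive.
```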

  If doing sums causes you pain, just go find someone who has perfect teeth and who won’t mind you staring into his or her mouth.

  Levin explains that many years ago he was both studying maths and trying to find out what made teeth look beautiful. ‘It was at a moment,’ he says, ‘like when Archimedes got into his bath, that I suddenly realized that the two were connected – the Golden Proportion and the beauty of teeth. I began to put this into practice and started testing my ideas on my patients. My first case was a young girl in a hospital, where I was teaching, whose front teeth were in a terrible state and needed crowning. Despite the scepticism of the other members of staff and the unenthusiastic technicians with whom I had to work and whose co-operation I depended upon, I crowned all her front teeth, using the principles of the Golden Proportion. Everybody, including the young lady herself, agreed that her teeth now looked magnificent.’

  Most important, in Levin’s reckoning, is the simple tooth-to-tooth ratio: ‘The four front teeth, from central incisor to premolar, are the most significant part of the smile and they are in the Golden Proportion to each other.’

  Levin created an instrument called the ‘golden mean gauge’. Made of stainless steel 1.5 millimetres thick, and retailing for £85 (about $135), it shows whether the numerous major dental landmarks ‘are in the Golden Proportion’, and it is suitable for autoclaving. He also offers a larger version that is ‘useful for full face measurements’ and ‘useful to measure larger objects or bigger pictures or furniture etc.’
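
  In software, the gauge’s job reduces to a single comparison. The sketch below is merely illustrative: the 2 per cent tolerance and the tooth widths are my assumptions, not figures from Levin.

```python
# A minimal software analogue of a golden mean gauge: check whether two
# lengths stand in the golden proportion, within a chosen tolerance.
# The 2% default tolerance is an illustrative assumption, not Levin's figure.

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio, ~1.6180339887


def in_golden_proportion(larger: float, smaller: float, tol: float = 0.02) -> bool:
    """Return True if larger/smaller is within tol (relative) of phi."""
    if larger <= 0 or smaller <= 0:
        raise ValueError("lengths must be positive")
    return abs(larger / smaller - PHI) / PHI <= tol


# Hypothetical example: a central incisor with an apparent width of
# 8.5 mm beside a lateral incisor with an apparent width of 5.3 mm.
print(in_golden_proportion(8.5, 5.3))  # True: 8.5 / 5.3 is about 1.604
```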

  Levin, E. I. (1978). ‘Dental Esthetics and the Golden Proportion.’ Journal of Prosthetic Dentistry 40 (3): 244–52.

  In Brief

  ‘Discovering Interesting Holes in Data’

  by Bing Liu, Liang-Ping Ku, and Wynne Hsu (Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, Nagoya, Japan, 1997)

  The authors, at the National University of Singapore, explain: ‘Clearly, not every hole is interesting … However, in some situations, empty regions do carry important information’.
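
  Liu, Ku, and Hsu hunt for empty regions in many dimensions; the one-dimensional case already conveys the idea. The sketch below is my illustration of that idea, not the authors’ algorithm.

```python
# Illustrative one-dimensional 'hole' finder: report the empty intervals
# between consecutive data points that are wider than min_width.
# (Liu, Ku, and Hsu work in higher dimensions; this shows only the idea.)

def find_holes(values, min_width):
    """Return (start, end) gaps between sorted values wider than min_width."""
    xs = sorted(values)
    return [(a, b) for a, b in zip(xs, xs[1:]) if b - a > min_width]


data = [1.0, 1.2, 1.3, 4.8, 5.0, 9.7, 10.1]
print(find_holes(data, min_width=2.0))  # [(1.3, 4.8), (5.0, 9.7)]
```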

  Cheese String Theory

  January 1995 was a signal month for the understanding of cheese. Maria N. Charalambides and two colleagues, J. G. Williams and S. Chakrabarti, published their masterwork: ‘A Study of the Influence of Ageing on the Mechanical Properties of Cheddar Cheese’. It showed a refined way to do mathematical calculations about cheese.

  Charalambides is a senior lecturer in the department of mechanical engineering at Imperial College, London. Her report begins with a two-page review of certain incisive cheese studies of the past. The aim of those studies, generally, was to compress a hunk of cheese between two plates, to see what the cheese would do.

  This is painstakingly technical work. In 1976, researchers named Culioli and Sherman ‘reported a change in the stress-strain behaviour of Gouda cheese when plates were lubricated with oil as opposed to when they were covered with emery paper’. Two years later, Sherman and a different collaborator did similar work with Leicester cheese. Subsequently, other scientists performed related experiments on mozzarella cheese, cheddar cheese, and processed cheese.

  The plates and the cheese rub and stick against each other. Their friction leads the cheese to warp – to bow outwards or flex inwards – when it’s under pressure. And this warp drives scientists half-mad. Frictionless cheese would be easier to study ... but frictionless cheese does not exist.

  ‘It is obvious,’ Charalambides writes, ‘that quantifying frictional effects in compression tests of cheese is a complicated matter.’ Complicated, yes – but Charalambides et al. managed to do it.

  They compressed cheese cylinders of various heights, calculated the stresses and strains in each of them, and then plotted a mathematical family of cheese stress-strain curves. Some further, almost mundane, calculations yielded up a delicious holy grail of cheese data: a way to estimate how cheese, minus the effects of friction, behaves under pressure.
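
  The friction-removing step can be sketched in a few lines. The sketch below only illustrates the extrapolation idea; the numbers are invented, and the straight-line fit is my assumption rather than the exact regression in the paper. Squat specimens feel the plates more than tall ones do, so one fits stress against the diameter-to-height ratio and reads off the intercept where that ratio, and with it the friction, vanishes.

```python
# Sketch of the extrapolation idea: at one fixed strain, measured stress
# rises with the specimen's diameter-to-height ratio because of plate
# friction. Fitting stress against d/h and taking the intercept at
# d/h = 0 estimates the friction-free stress. All numbers below are
# invented for illustration; they are not data from Charalambides et al.

import numpy as np

d_over_h = np.array([2.0, 1.0, 0.5, 0.25])       # squat to tall cylinders
stress_kpa = np.array([68.0, 61.5, 58.2, 56.6])  # stress at one fixed strain

slope, intercept = np.polyfit(d_over_h, stress_kpa, 1)
print(f"friction-free stress estimate: {intercept:.1f} kPa")
```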

  Then came the main event: measuring how cheese behaviour changes as the cheese goes from infancy to old age. It would be a happy cheese manufacturer who could reliably gauge a cheese’s age by doing a simple mechanical test.

  Charalambides and her team performed fracture tests on the cheese, too. Those and the compression tests, done on cheeses young and old, produced a numerical portrait of cheese behaviour from birth through the ripe age of seven months.

  The Charalambides report is a deeply pleasurable read for anyone who lives and breathes cheese and has a modest working knowledge of materials science. But those who care deeply about their cheese noted that the study looked at merely three varieties: mild cheddar, sharp cheddar, and Monterey jack.

  The next year, mozzarella enthusiasts must have scrambled to buy copies of the May/June 1996 issue of the Journal of Food Science, where they could read M. Mehmet Ak and Sundaram Gunasekaran’s ‘Dynamic Rheological Properties of Mozzarella Cheese During Refrigerated Storage’.

  Since then many scientists have compressed and fractured many kinds of cheese, delving even into the realm of soft cheeses. Mathematics-based mechanical cheese testing is no longer just a romantic dream.

  Charalambides, M. N., J. G. Williams, and S. Chakrabarti (1995). ‘A Study of the Influence of Ageing on the Mechanical Properties of Cheddar Cheese.’ Journal of Materials Science 30: 3959–67.

  Ak, M. Mehmet, and Sundaram Gunasekaran (1996). ‘Dynamic Rheological Properties of Mozzarella Cheese During Refrigerated Storage.’ Journal of Food Science 61 (3): 566–69.

  Lee, Siew Kim, Skelte Anema, and Henning Klostermeyer (2004). ‘The Influence of Moisture Content on the Rheological Properties of Processed Cheese Spreads.’ International Journal of Food Science & Technology 39 (7): 763–71.

  An Improbable Innovation

  ‘A Tittle Obliquity Measurer’

  by Zhengcai Li (International patent application no. PCT/CN2007/003282, filed 2007)

  Li, from Tianjin, China, dotted all his i’s in the filing: ‘When the tittle obliquity measurer is tilted, the gravity pendulum rotates all the gears, and the bottom surface of the housing may be normal to the central line of a vertical shaft which is to be measured, the bottom surface of the housing may be normal to and crossed with the approximation plumb surface at the horizontal line which is parallel to the power shaft, the indication device can indicate the obliquity.’

  The tittle obliquity measurer

  Measuring Up Rulers

  Complimentary small plastic rulers, being imprecise, inaccurate, flimsy, and defaced with advertising, draw only a measured amount of respect from metrologists. In 1994, two metrologists took measures to see exactly how much respect the rulers deserve.

  Metrologists are the people who come up with more accurate, more precise ways to measure things.

  The metrology community incessantly tussles over new definitions for the intimidatingly important, never-quite-as-good-as-they-ideally-could-be standards – most famously, the kilogram, the metre, and the second.

  The father-and-son team of T. D. Doiron and D. T. Doiron looked, briefly, at a neglected standard. Their report, called ‘Length Metrology of Complimentary Small Plastic Rulers’, drew some measure of interest when it was presented at the Measurement Science Conference in Pasadena, California, in 1994.

  Theodore Doiron was a member of the Dimensional Metrology Group at the US National Institute of Standards and Technology. Daniel was, at the time, a teenager at school.

  The Doiron/Doiron report implies two simultaneous and opposite truths. Metrologists sometimes express contempt for small plastic rulers (known in the trade as SPRs), because they are made of cheap polystyrene and manufactured to loose tolerances. But metrologists also, down deep, harbour respect for these stylish, useful, slim, flat-bottomed objects, with the four straight-edge working surfaces and the top that boasts a sufficiency of both inked markings and raised graduations, said graduations being located at the outer edges of the bevelled top sides.

  The Doirons explain this ambivalent attitude: ‘There are virtually no active scientists or engineers who do not have a number of SPRs in their desks which are used continually for developing the earliest and most basic designs of virtually every object manufactured. A quick survey of engineers will show that these early sketches, the very basis of our manufacturing economy, are largely dependent on the use of SPRs. While there is a national standard for plastic rulers, Federal Specification GG-R-001200-1967 and the newer A-A-563 (1981), there has never been a systematic study of the metrology of this basic tool of the national measurement system.’

  Doiron and Doiron studied fifty rulers they had ‘collected over a long period of time at conferences and from colleagues’. They discovered that the government specification was itself so shockingly poor that they could point to a key passage and say: ‘We cannot figure out what this statement means.’

  After measuring things as best they could (and being good metrologists, they could measure things well indeed), the Doirons reached a pair of conclusions. First, that most of the complimentary small plastic rulers ‘quite easily’ met the official (albeit murky) standard. Second, that ‘the older the ruler’ was, the more accurate it was likely to be.
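
  In the spirit of the Doirons’ tests, though not taken from their report, a ruler can be scored by comparing where each graduation sits with where it ought to sit. The positions and the half-millimetre tolerance below are placeholders of my own, not figures from the federal specification.

```python
# Crude ruler-scoring sketch: compare measured graduation positions (mm)
# against nominal ones and report the worst deviation. The data and the
# 0.5 mm tolerance are illustrative placeholders, not values from the
# Doirons' report or from Federal Specification GG-R-001200.

nominal = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]    # where the marks belong
measured = [0.0, 10.1, 20.3, 30.2, 40.4, 50.6]   # where they actually are

worst = max(abs(m - n) for m, n in zip(measured, nominal))
print(f"worst deviation: {worst:.2f} mm")
print("within tolerance" if worst <= 0.5 else "out of tolerance")
```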

  The National Institute of Standards and Technology (NIST) itself, an official told me, once ordered a batch of complimentary small plastic rulers that turned out, upon arrival, to be wretchedly calibrated. As a measure of caution (they have a reputation to protect), and perhaps with some umbrage and embarrassment, NIST returned them to the manufacturer.

  Doiron, Daniel T., and Theodore D. Doiron (1994). ‘Length Metrology of Complimentary Small Plastic Rulers.’ Proceedings of the Measurement Science Conference, Pasadena, Calif.

  The Lazy Bureaucrat Problem

  The lazy bureaucrat problem is ancient, as old as bureaucracy itself. In the 1990s, mathematicians decided to look at the problem. They have since made progress that, depending on your point of view, is either impressive or irrelevant.

  Four scientists at the State University of New York, Stony Brook, issued the first formal report. ‘The Lazy Bureaucrat Scheduling Problem’, by Esther Arkin, Michael Bender, Joseph Mitchell, and Steven Skiena, appeared in the proceedings volume Algorithms and Data Structures. The study describes a prototypically lazy bureaucrat, transforming this annoying person into a collection of mathematical formulas, theorems, proofs, and algorithms.

  ‘Objective Functions’ of ‘The Lazy Bureaucrat Scheduling Problem’

  This bureaucrat has a one-track mind. His objective, as Arkin, Bender, Mitchell, and Skiena describe it, is: ‘to minimize the amount of work he does (he is “lazy”). He is subject to a constraint that he must be busy when there is work that he can do; we make this notion precise ... The resulting class of “perverse” scheduling problems, which we term “Lazy Bureaucrat Problems”, gives rise to a rich set of new questions.’
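
  A drastically simplified instance can even be brute-forced in a few lines. The sketch below is my reduction, not the authors’ full model: every job is available at time zero with a processing time and a hard deadline, a job may be started only if it can still finish on time, the bureaucrat must keep working while any startable job remains, and he wants to minimize the total time he spends working.

```python
# Brute-force sketch of a drastically simplified Lazy Bureaucrat
# instance (my reduction, not the paper's full model). Jobs are
# (processing_time, deadline) pairs, all available at time 0.

def min_work(jobs, t=0, done=frozenset()):
    """Least total work, starting at time t with `done` already finished."""
    startable = [i for i, (p, d) in enumerate(jobs)
                 if i not in done and t + p <= d]
    if not startable:            # nothing he *can* do, so he may go home
        return t
    return min(min_work(jobs, t + jobs[i][0], done | {i}) for i in startable)


# Doing the 1-hour job first makes the 10-hour job miss its deadline,
# so it can never be started: the lazy optimum is 1 hour of work.
jobs = [(1, 10), (10, 10)]
print(min_work(jobs))  # 1
```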

  Other mathematicians and computer scientists took their own whacks at managing lazy bureaucrats.

  Arash Farzan and Mohammad Ghodsi at Sharif University of Technology in Tehran presented a paper in 2002 at the 7th CSI Computer Conference, held at the Iran Telecommunication Research Center. Titling it ‘New Results for Lazy Bureaucrat Scheduling Problem’, they announced that, in a mathematical sense, lazy bureaucrats are nearly impossible to manage well. A good solution, they said, is even ‘hard to approximate’. What they meant: not only is finding the laziest possible schedule computationally hopeless, but so, in all likelihood, is finding one even close to it – no matter how long anyone works at the search.

  In 2003, Ghodsi and two other colleagues presented a new study. What would happen, they asked, if one imposed some tighter constraints on the lazy bureaucrat? The answer: the problem would be only slightly less nearly impossible to manage, even in theory.

  These and other studies at least demonstrate that annoying people, some of them, can be described mathematically. And that on paper (or in a computer), there might be better – although not necessarily good – ways to manage them.

  Managing a problem, though, does not necessarily solve it.

  The mathematicians who tackle these lazy bureaucrat problems take the lazy approach. None does the hard work necessary to actually solve the problem – they give no advice about getting rid of the lazy bureaucrats. Like most non-mathematicians, they let the lazy bureaucrats career on, forever clogging the system.

  For a hard worker, to read these studies is to descend into maddeningness.

  But not everyone feels that way. The Royal Economic Society issued a press release in 2008 bearing the headline ‘Lazy Bureaucrats: A Blessing in Disguise’. Touting a study by Josse Delfgaauw and Robert Dur at Erasmus University, Rotterdam, the society says: ‘Hiring lazy people into the civil service helps to keep the cost of public services down’. The study itself is, as the saying goes, more nuanced.

  Arkin, Esther M., Michael A. Bender, Joseph S. B. Mitchell, and Steven S. Skiena (1999). ‘The Lazy Bureaucrat Scheduling Problem.’ Algorithms and Data Structures 1663: 773–85.

  Farzan, Arash, and Mohammad Ghodsi (2002). ‘New Results for Lazy Bureaucrat Scheduling Problem.’ Proceedings of the 7th CSI Computer Conference, Iran Telecommunication Research Center, Tehran, 3–5 March 2002: 66–71.

  Esfahbod, Behdad, Mohammad Ghodsi, and Ali Sharifi (2003). ‘Common-Deadline Lazy Bureaucrat Scheduling Problems.’ Algorithms and Data Structures: Proceedings of the 8th International Workshop, WADS, Ottawa, Canada, 30 July–1 August: 59–66.

  Gai, L., and G. Zhang (2008). ‘On Lazy Bureaucrat Scheduling with Common Deadlines.’ Journal of Combinatorial Optimization 15 (2): 191–99.

  Random Promotion Discoveries

  Three Italian researchers were awarded the 2010 Ig Nobel Prize in management for demonstrating mathematically that organizations would become more efficient if they promoted people at random. But their research was neither the beginning nor the end of the story of how bureaucracies try – and fail – to find a good promotion method.

  Alessandro Pluchino, Andrea Rapisarda, and Cesare Garofalo of the University of Catania, Sicily, calculated how a pick-at-random promotion scheme compares with other, more enshrined methods. They gave details in a report published in the journal Physica A: Statistical Mechanics and its Applications.

  Pluchino, Rapisarda, and Garofalo based their work on the Peter Principle – the notion that many people are promoted, sooner or later, to positions that overmatch their competence.

  The centre point – the intersection of common sense and the Peter Principle – gives ‘the most convenient strategy to adopt if one does not know which mechanism of competence transmission is operative in the organization’. Adapted from ‘The Peter Principle Revisited: A Computational Study’.

  The three cite the works of other researchers who had taken tentative, exploratory steps in the same direction. They fail, however, to mention an unintentionally daring 2001 study by Steven E. Phelan and Zhiang Lin, at the University of Texas at Dallas, that was published in the journal Computational & Mathematical Organization Theory.

  Phelan and Lin wanted to see whether, over the long haul, it pays best to promote people on supposed merit (we try, one way or another, to measure how good you are), or on an ‘up or out’ basis (either you get promoted quickly or you get the boot), or by seniority (live long and, by that measure alone, you will prosper). As a benchmark, a this-is-as-bad-as-it-could-possibly-get alternative, they also looked at what happens when you promote people at random. They got a surprise: random promotion, they admitted, ‘actually performed better’ than almost every alternative. Phelan and Lin seemed (at least in my reading of their paper) almost shocked, even intimidated, by what they found.
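
  A toy simulation conveys the flavour of such comparisons. The sketch below is my own simplification, inspired by, but not reproducing, either team’s model: each step, someone at a random upper level retires, the vacancy is filled from the level below either by merit or at random, and, under the Peter hypothesis, the promoted person’s competence in the new job is drawn afresh. Efficiency is average competence, weighted so that upper levels count for more.

```python
# Toy promotion simulation (a simplification inspired by Pluchino,
# Rapisarda, and Garofalo, and by Phelan and Lin's benchmark; it
# reproduces neither model). Under the 'Peter hypothesis', competence
# in the new job is unrelated to competence in the old one.

import random

LEVELS, SIZE, STEPS = 4, 40, 2000
WEIGHTS = [1, 2, 3, 4]        # bottom level counts least, top counts most

def efficiency(strategy, peter_hypothesis=True, seed=1):
    rng = random.Random(seed)
    org = [[rng.random() for _ in range(SIZE)] for _ in range(LEVELS)]
    for _ in range(STEPS):
        lvl = rng.randrange(1, LEVELS)     # someone retires at this level
        org[lvl].pop(rng.randrange(len(org[lvl])))
        below = org[lvl - 1]
        i = (max(range(len(below)), key=below.__getitem__)
             if strategy == "best" else rng.randrange(len(below)))
        promoted = below.pop(i)
        org[lvl].append(rng.random() if peter_hypothesis else promoted)
        below.append(rng.random())         # a fresh hire fills the sub-vacancy
    total = sum(WEIGHTS[lvl] * sum(org[lvl]) for lvl in range(LEVELS))
    return total / sum(w * SIZE for w in WEIGHTS)

for strategy in ("best", "random"):
    print(strategy, round(efficiency(strategy), 3))
```

  In runs of this toy model, random promotion tends to do at least as well as merit promotion under the Peter hypothesis, for the structural reason the papers point to: promoting the best workers merely strips the lower levels of their best people, while the upper levels receive unpredictable competence either way.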

  But where Pluchino, Rapisarda, and Garofalo would later, independently, hone and raise this discovery for the world to admire, Phelan and Lin merely muttered, ever so quietly in the middle of a long paragraph, that ‘this needs to be further investigated in our future studies’. Then, by and large, they moved on to other things.

  Human beings, many of them, are clever. Always there is potential to devise a new, perhaps better method of choosing which individuals to promote in an organization. More recently, Phedon Nicolaides, of the European Institute of Public Administration in Maastricht, the Netherlands, suggested what he sees as an improvement on random promotion: randomly choose the people who will make the promotion decisions. Professor Nicolaides published his scheme in the Cyprus Mail newspaper.