In The Papers: Inventive Activity

Naomi Lamoreaux, Kenneth Sokoloff, and Dhanoos Sutthiphisal have a working paper entitled, The Reorganization of Inventive Activity in the United States during the Early Twentieth Century.


The standard view of U.S. technological history is that the locus of invention shifted during the early twentieth century to large firms whose in-house research laboratories were superior sites for advancing the complex technologies of the second industrial revolution. In recent years this view has been subject to increasing criticism. At the same time, new research on equity markets during the early twentieth century suggests that smaller, more entrepreneurial enterprises were finding it easier to gain financial backing for technological discovery. We use data on the assignment (sale or transfer) of patents to explore the extent to which, and how, inventive activity was reorganized during this period. We find that two alternative modes of technological discovery developed in parallel during the early twentieth century. The first, concentrated in the Middle Atlantic region, centered on large firms with in-house R&D labs and superior access to the region’s rapidly growing equity markets. The other, located mainly in the East North Central region, consisted of smaller, more entrepreneurial enterprises that drew primarily on local sources of funds. Both modes seem to have made roughly equivalent contributions to technological change through the 1920s. The subsequent dominance of large firms seems to have been propelled by a differential access to capital during the Great Depression that was subsequently reinforced by the regulatory and military procurement policies of the federal government.

The standard view of inventive activity is that individuals dominated technological discovery until the early 1900s, when large, in-house R&D departments took over (2).  The big R&D departments had advantages such as big budgets and more resources, and as it became more and more expensive (such as requiring more equipment) to perform scientific experiments, naturally, the big firm became the generator of inventive value.

Using the assignment (either sale or transfer) of patents as a measure (3), the authors argue that this is not really the case.  Instead, in-house departments had both advantages and disadvantages:  yes, they had more resources, more concentrated resources, access to manufacturing, and easier internal sale (4-5), but big R&D departments also had information and contracting problems, as well as little real connection to the “real world” (5).  The most valuable patents acquired by large firms in the 1920s actually started outside the firms’ R&D departments (7).

This isn’t to say that large firms weren’t increasing their share during this time:  in 1870-71, the assignment at issue was 16.1%, whereas by 1928-29, it was 56.1%.  That’s an undeniably large increase.  But there were legitimate regional and curb (high-tech) markets which existed to finance smaller firms (11), and smaller firms were still productive during this time—they were responsible for 13-22% of patents (15).

When you break things down regionally, you can see two completely different stories.  The mid-Atlantic and East North Central regions were each responsible for approximately 1/3 of patents during this time (16).  The East North Central region (which includes Illinois, Indiana, Michigan, Ohio, and Wisconsin) was heavily small firm/individual, whereas the mid-Atlantic specialized in R&D and large firms (16-17).  But large firm patents were not more important or valuable, and the authors argue that they may even have been less valuable (19-20).  And some of these labs existed mostly for vetting outside ideas (23-24) rather than coming up with new ones.

So why did large R&D firms take over?  There were a couple of reasons which relate closely to one another.  The Great Depression hit the ENC region much harder than the mid-Atlantic (31).  Furthermore, government spending, especially during World War II, tended to focus on large firms with R&D departments.  SEC filing requirements also became tougher, which dried up those curb markets (33).  So once again, government policy had unintended(?) long-term consequences which led to corporatism.

In The Papers: IQ and Economic Growth

Garett Jones, R.W. Hafer, and Bradley K. Hobbs are getting awfully close to thoughtcrime with their paper, IQ and the Economic Growth of U.S. States.


In the cross-country literature, cognitive skills are robust predictors of economic growth. We investigate claims by psychologists that the same is true at the state level. In a variety of specifications using four proxies for average state IQ used in the psychology literature, little evidence is found for a robust IQ-growth relationship at the state level.

The authors point out that IQ matters in cross-country surveys of economic growth—maybe even more than economic freedom (2).  In addition, IQ matters for individual earnings (4).  Both of these are close to heresy in polite society, but they’re true.  So, the authors say, let’s see whether this holds for US states as well.

They find that there are a few proxies for IQ in the US, but that none of them are really all that good.  The first one is SAT scores.  This is a problem because smarter people on the coasts tend to take the SAT, whereas smarter people in the midwest and mountain west tend to take the ACT.  So other studies look at the ACT and a composite of SAT and ACT.  Finally, there are NAEP federal tests (5-6).

The authors found that, using a simple correlation test, IQ did correlate with economic growth in three of the four studies; the only exception was the ACT-only survey (11).  Using a more detailed correlation method, the SAT and NAEP surveys showed a positive correlation, the ACT survey a negative correlation, and the composite no correlation (12).  And even for the SAT and NAEP surveys, the results were significantly weaker than national-level results:  for cross-country surveys, a 1 IQ point increase leads to a 0.1% increase in national economic growth.  On the cross-state level, a 1-point increase leads to a 0.05% increase in growth (NAEP), or even less than 0.025% (SAT) (13).
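To get a feel for these magnitudes, here is a back-of-envelope sketch (my own illustrative numbers, not the paper’s) compounding the reported per-IQ-point growth effects over a long horizon:

```python
# Hypothetical illustration: compound the reported growth effects of an
# assumed 5-point gap in average IQ over 50 years.  The per-point effects
# (0.1, 0.05, 0.025 percentage points) are from the text; the 5-point gap
# and 50-year horizon are my own assumptions.
def compounded_output(growth_rate_pct: float, years: int) -> float:
    """Relative output level after `years` of annual growth at growth_rate_pct."""
    return (1 + growth_rate_pct / 100) ** years

iq_gap = 5  # assumed difference in average IQ between two economies
for label, effect_per_point in [("cross-country", 0.1), ("NAEP", 0.05), ("SAT", 0.025)]:
    extra_growth = iq_gap * effect_per_point  # extra percentage points of growth per year
    ratio = compounded_output(extra_growth, 50)
    print(f"{label}: +{extra_growth:.3f} pp/yr -> {ratio:.2f}x relative output after 50 years")
```

Even the weaker state-level coefficients compound into non-trivial differences over decades, which is why the gap between 0.1 and 0.025 matters.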

The big problem is that the IQ differences as measured are vast and flaky:  one survey has the IQ of north central states (Minnesota, Wisconsin, North Dakota, South Dakota, etc.) as 102.2, whereas another survey had them at 83.7 (27).  Considering that one survey (27) had North Dakota at 74.5 and Minnesota at 88.5 (note that these are two of the highest-performing states in terms of student achievement), I would chalk this up to a measurement problem rather than evidence that something which is true on the individual and national levels is untrue at the state level.

In The Papers: Those Dishonest Economists

I have a lot of respect for George Selgin’s academic research, and his paper entitled Those Dishonest Goldsmiths is a good reason why.


Modern accounts of the origins of fractional-reserve banking, in economics textbooks and elsewhere, often assert that London goldsmiths came up with the idea around the middle of the 17th century, and first implemented it by clandestinely lending coin that they were supposed to keep locked away in their vaults. I assess the veracity of this claim by examining contemporary, circumstantial evidence bearing upon it, and also by considering the circumstances under which, according to English legal doctrines at the time in question, goldsmiths were entitled to lend coin that had been surrendered to them. I conclude that the goldsmiths were almost certainly innocent of the crime for which they are so frequently accused, and that the accusation may well have taken shape through later writers’ confusion of (1) crimes other than embezzlement of which goldsmiths were accused by their contemporaries and (2) documented embezzlement of stored coin, not by goldsmiths but either by the British crown or by merchants’ servants.

Goldsmiths are believed to have pioneered fractional reserve banking in the 17th century (1).  This is a myth, but a very popular one (2-4), including among many Austrians (4-5).  The basic idea is that jewelers and goldsmiths would hold gold (in coin and non-coin forms) in safes.  They realized that the gold was just sitting there, waiting to be collected.  So what’s the harm in lending out this gold in the meantime, earning a bit of interest back at low risk? (2)  People who tell this story talk of “warehouses” and goldsmiths who charged “storage fees,” implying that the smiths were embezzling or at least performing shady activities.

In reality, that story is almost entirely backwards.  Fractional reserve banking dates back at least to medieval northern Italy, and possibly even to ancient Rome or Greece (6).  Furthermore, there are no facts supporting the embezzlement claim (6).  There are, however, facts which go against the claim.  For example, instead of charging “storage fees,” goldsmiths paid out interest, or at least did not charge fees to holders.  This implies that people believed that the goldsmiths were getting something out of the arrangement.  Moreover, there is no court testimony on the books for claims of embezzlement (6-7).  Considering that most goldsmiths at that time were Jewish, and considering that Screw the Jew was a European pastime, it would seem that if this line of legal argument held any validity, somebody certainly would have used it.  But as far as we know, nobody did.

In fact, in Samuel Pepys’s diary, he described interest payments and was shocked that the Amsterdam markets didn’t pay out interest (8).  So why would people like Pepys give their gold to smiths?  Because consumers prefer the convenience of bank deposit notes over carrying relatively heavy specie (8).  Also, government coins during the 1600s were awful—it was hard to find a coin which was not clipped, shaven, or otherwise degraded.  Bank notes did not have those problems (10).  And bearer notes, written by goldsmiths, were “intended to pass anonymously from hand to hand” (11).  This belies the idea that whatever you put in the vault was supposed to remain yours in perpetuity, with ownership never shifting.

Selgin also discusses bailment laws which were in place in England during the 1500s and possibly even earlier.  These laws are based on Talmudic principles (no great surprise).  In the Talmud, if specie is tied in a package, it is meant to be stored for safekeeping; on the other hand, if the coins are “loose,” they may be lent out (14).  In English law, there was no recourse to demand specific coins back unless they were presented to the holder in a sealed bag (15-16).

So where do these charges come from?  Selgin argues that the charges gained traction due to actual misconduct charges and actual embezzlement against smiths (18).  Goldsmiths were charged with clipping and melting down coins for sale as bullion, as well as usury (20).

In The Papers: Public Sector Unionism

Eileen Norcross has a working paper out entitled, Public Sector Unionism:  A Review.

In 2009, for the first time, the number of public sector union employees was larger than the number of private sector union employees (1).  Norcross argues that we need to understand private-sector and public-sector unions as two separate phenomena.  Private unions are a “labor cartel within the market economy,” which affects labor supply, prices, and economic growth (1).  They re-distribute earnings within industries, but have no way of forcing revenues up, so they fight over the share of a fixed-sized pie.  In contrast, public unions are a “monopoly provider of labor within a bureaucratic-political realm” (1).  Unlike private-sector unions, public-sector unions have the ability to require people to pay more for their services, and so they (may) control, at least in part, the size of the pie as well as how it will be distributed.

Norcross argues that corporatist policies lead to private-sector union increases (2).  This is why, at the height of American corporatism, more than 30% of workers were in private-sector unions; in contrast, that number was 6.9% as of 2010 (3).  Right-to-work laws undercut the corporatist underpinnings of labor unions, and union-heavy states became the rust belt as jobs moved to the south.  “In effect,” writes Norcross, “the laws instituting unionism in the private sector were ‘repealed’ by market forces” (4).

So how about public-sector unions?  Well, if you go back to the Roosevelt administration, you would get warnings that government agents should not be unionized—they understood the fundamental difference between private-sector and public-sector unions.  By the 1950s, though, certain writers believed that public sector unions “would not result in fiscal abuses because the absence of profits in the public sector was ‘compensated by constant pressures for governmental economy'” (4).  Sadly, these people were not invited to write for I Love Lucy, regardless of how funny their claim may be.

As we can see, what really happened is that “local politicians encouraged public sector militancy in order to redirect federal dollars” (11).  An example of this is funding for inner-city schools earmarked for science being converted to increasing teachers’ wages.

In both cases, we see that a union is nothing more than a cartel:  it increases wages for members at the expense of non-members, shareholders, and consumers (11).  What’s different about public-sector unions, as I mentioned above, is that public-sector unions may increase their own demand for labor by electing friendly politicians (12).  These public-sector unions push and push until they bankrupt their localities—“the ultimate check on the growth of public sector unionism is municipal insolvency” (15).  Even then, politicians try to find ways around it:  “‘Politicians with short time horizons should be especially willing to pay fringe benefits’” (16).  As we have seen, fringe benefits (especially retirement accounts and health benefits) grew significantly faster than wages.

In the economic literature, there are a few theories on how effective public-sector unions are.  Zax and Ichniowski argue that unions are able to increase labor costs, but do not increase general expenditures (17)—in other words, like private-sector unions, public-sector unions still fight over a fixed-sized pie.  Valletta, however, theorizes that unions influence the budgetary process—without corresponding cuts, total municipal spending goes up (17).  But his results show that expenditures do not actually go up.  This does undercut the argument that public-sector unions directly increase government expenditures rather than simply redistributing them.

Perhaps people have maximum acceptable tax rates and politicians try to find ways to get there—that way, politicians maximize their power and influence.

In The Papers: Human Action

Gene Callahan has a relatively old paper that I just recently found, entitled Oakeshott and Mises on Understanding Human Action.


Although Michael Oakeshott and Ludwig von Mises were arguably two of the more profound theorists of human activity in the twentieth century, there has been remarkably little comparative study of their ideas. That is especially surprising when one considers how compatible those ideas were in a number of areas, such as the a priori nature of the postulates of human action, the nature of historical thought, the fundamental dichotomy between explaining not-intelligent goings-on and intelligent activity, the ambiguous character of the statistical social sciences, and the importance of meaning in theorizing human conduct. Comparing their formulations of common concepts permits new, illuminating perspectives into each thinker's work.

Despite such compatibility, their ideas also contain interesting and important differences: on the modality or lack thereof in human thought, the nature of rationality and its relationship to tradition, and the character of economics as a science. This paper will explore both the similarities and differences between the ideas of Mises and Oakeshott. Because a full consideration of all of the areas mentioned above would likely result in a book rather than a paper, I will restrict myself here to examining their views on the general principles of human action and how those principles relate to the character of the social sciences.

Callahan starts by elaborating upon Ludwig von Mises’s idea of praxeology:  from the fact that humans engage in purposeful activity, we may derive a number of things (1).  Mises spent much of his career diving deep into the science of human action—science in the old European sense of an “organized body of knowledge” rather than the modern notion of a quantitative natural science.  Although Oakeshott never directly engaged with Mises’s ideas, Callahan argues that the two share a number of similarities.  For example, both find vital the notion that actors understand their own circumstances and assign their own meaning to these circumstances.  A person acts because something “as he understands it, must appear to be unsatisfactory to him.”  In addition, there is an expectation for both that this action will improve the individual’s circumstances (2).

For both philosophers, values are not “given” and certainly neither believed that human action was nothing more than maximization subject to constraints (3).  Beyond that, though, there are certain strands which are compatible even though the two were not in contact.  Callahan argues that Mises understands what praxeology implies, whereas Oakeshott “puts them on the broader philosophical basis” (4).  Neither had a required ontology regarding how postulates come to be—they could come from G-d, biological evolution, or even random guesses (4).  What is important is simply that they exist, not necessarily how we get there.

What’s interesting is that Callahan argues that Mises was a methodological dualist:  Mises subscribed to the idea that intelligent action needs completely different theories compared to “non-intelligent goings-on” and that these are “fundamentally different activities” (5).  Oakeshott seems a little fuzzier, but you could make an argument that his thoughts are compatible with that notion as well.  Both believed, as Nardin wrote on Oakeshott, that “the social sciences, like the natural sciences, are explanatory, not prescriptive” (7).  In other words, we may use the social sciences to explain how people behave, not necessarily how people should behave.  This is where normative individualism comes into play.  Both also reject the notion of social holism, placing them squarely in the methodological individualist tradition (7-8).

I’ll end this with a quotation from Mises which I like a lot and wish would get broadcast in every econometrics course:

If a statistician determines that a rise of 10 per cent in the supply of potatoes in Atlantis at a definite time was followed by a fall of 8 per cent in the price, he does not establish anything about what happened or may happen with a change in the supply of potatoes in another country or at another time. He has not “measured” the “elasticity of demand” of potatoes. He has established a unique and individual historical fact. No intelligent man can doubt that the behavior of men with regard to potatoes, and every other commodity is variable. Different individuals value the same things in a different way, and valuations change with the same individuals with changing conditions. (11)

In The Papers: Trial By Battle

Pete Leeson continues to amaze me.  First, he defends ordeals.  Now, he defends trial by battle.


For over a century England’s judicial system decided land disputes by ordering disputants’ legal representatives to bludgeon one another before an arena of spectating citizens. The victor won the property right for his principal. The vanquished lost his cause and, if he were unlucky, his life. People called these combats trials by battle. This paper investigates the law and economics of trial by battle. In a feudal world where high transaction costs confounded the Coase theorem, I argue that trial by battle allocated disputed property rights efficiently. It did this by allocating contested property to the higher bidder in an all-pay auction. Trial by battle’s “auctions” permitted rent seeking. But they encouraged less rent seeking than the obvious alternative: a first-price ascending-bid auction.

To understand trial by battle, Leeson argues, we must understand the concept of Coasean efficiency, which is that willingness to pay implies a higher value for a good (2).  In early post-invasion England, there were abysmal records of land ownership, and thus there were many legitimate (and illegitimate) claims for courts to go through.  Unfortunately, in most cases, it wasn’t possible to tell who was the real legitimate landholder.  What we saw instead of legal battles were, well, legal battles:  legal representatives fighting (literally) over property rights (2).  Going back to Coasean efficiency, the people who valued land more would hire better champions, which would increase their odds of winning the legal proceeding (2).

But why would courts allow such a thing?  Leeson points to two arguments (aside from the basic one that legitimate land titles were rather hard to come by at this point in time, so a modern court system would not work to arbitrate these disputes):  there is less rent-seeking here than in a first-price ascending-bid auction, and there are social benefits for spectators (3).  Champions were armed with quarterstaffs and bucklers and would fight in an open area without assistance.  Considering the time period, this was about as tame as you could get, indicating that there were sport-related motives here.  Leeson points to evidence that, in certain cases, legal settlements occurred after the crowd had already gathered, so the judges told the champions to go out and bludgeon each other a bit, and then they would call it off with neither officially losing.

Up until the late 1200s, there were hardly any reputable land titles in England, meaning that those could not be used to adjudicate disputes.  In addition, witnesses lie and charters can be forged (4).  So how could trial by battle actually work out as a solution to this problem?  The theory here, according to Leeson, is that a champion should have been a witness (or his dead father was a witness) of the claimant’s or defendant’s right to own a particular parcel of land.  In reality, champions tended to be hired gladiators with reputations (5-6).  The demandant’s champion must kill or force the tenant’s champion into submission; the tenant’s champion may kill, force submission, or hold the demandant’s champion off until nightfall (7).  The losing champion (if still living at the end of a battle) paid a fine of 3 pounds and was never allowed to bear witness in a trial again (7).  This gave an incentive for champions not to take on others who were significantly better.

The existence of high transaction costs makes it very important to get initial allocations right (8).  Unlike today, it was much more difficult at that time to figure out who owned a particular bit of land, and it was also much more difficult to buy or sell land, due to the rights of feudal lords and the owner’s descendants.  Essentially, trial by battle became a “violent auction to reveal the higher-valuing user’s identity and to allocate the contested land to him” (12).

Most trials by battle were settled after the champions were chosen and before fighting began (19).  In roughly 38% of cases, battle between the two contestants was pledged, but champions only fought roughly 20% of the time, meaning that 80% of those cases were settled before battle commenced.  Even during these battles, settlement could occur, and Leeson has a good example of such an instance.

Trial by battle slowed down with Henry II’s Angevin reforms and the birth of common law (23).  And by 1179, the grand assize (a 12-knight jury) began to replace trial by battle as a means of adjudicating claims.  This could happen only as a result of better information regarding property ownership and the loosening of feudal/descendant rights (the latter mainly through primogeniture, which reduced sibling in-fighting over property by assigning it all to the firstborn son).

In The Papers: Keeping It In The Family

Alberto Alesina and Paola Giuliano have a paper out entitled Family Ties and Political Participation.


We establish an inverse relationship between family ties and political participation, such that the more individuals rely on the family as a provider of services, insurance, transfer of resources, the lower is one’s civic engagement and political participation. We also show that strong family ties appear to be a substitute for generalized trust, rather than a complement to it. These three constructs (civic engagement, political participation, and trust) are part of what is known as social capital; therefore, in this paper, we contribute to the investigation of the origin and evolution of social capital. We establish these results using within-country evidence and looking at the behavior of immigrants from various countries in 32 different destination places.

They define amoral familism as caring about and trusting only family members (2).  This reminds me of Stanley Kurtz’s phrase “I and my brother against my cousin,” or, more specifically, Steve Sailer’s “I against my brother. My brother and I against my cousin. My cousin and we against the world.”

For Alesina and Giuliano, close family ties lead to less civic interest and thus less political participation (2).  They speculate (though somewhat indirectly) that this is perhaps why Latinos tend to be low-affinity voters.

Interesting point which seems backed up by regular experience:  “Men are always more interested in politics and more active in political activity” (9).

One thing that I noted was that the questions used to determine political participation involve asking people how much political information they get through TV, radio, or newspapers—there is no mention of the Internet here (11).  Because we’re dealing with second-generation immigrants, I don’t believe that this is a minor problem.

My takeaway:  family replaces government, and there is a competition between tribe and State.  Governments have incentives to destroy family ties and other parts of society:  then they can hook people on their own ties and gain power at the expense of these social institutions.

In The Papers: Not Biting The Hand That Feeds You

Suppose that you run a newspaper and one of your primary advertisers is the government.  You then get wind of a corruption scandal involving members of said government.  Do you alienate your sponsor or quash the story?  This is the real question Rafael Di Tella and Ignacio Franceschelli ask in Government Advertising and Media Coverage of Corruption Scandals.


We construct measures of the extent to which the 4 main newspapers in Argentina report government corruption in their front page during the period 1998-2007 and correlate them with the extent to which each newspaper is a recipient of government advertising. The correlation is negative. The size is considerable: a one standard deviation increase in monthly government advertising (0.26 million pesos of 2000) is associated with a reduction in the coverage of the government’s corruption scandals by 0.31 of a front page per month, or 25% of a standard deviation in our measure of coverage. The results are robust to the inclusion of newspaper, month, newspaper*president and individual-corruption scandal fixed effects as well as newspaper*president specific time trends.

The authors survey the four main newspapers in Argentina over the time period 1998-2007, and focus on corruption scandals involving government officials (2).  Their theory is that adverse coverage correlates negatively with government funding (2).  Because the government tends to finance the media to a great extent in Argentina, the results are different from those in someplace like the US, where the (often left-leaning) media tend to have partisan papers, and in which party affiliation affects coverage (3-4), regardless of who is currently in power.

This difference is not simply academic.  In the Argentinian case, 200 tax inspectors were sent to investigate one newspaper the day after a report of corruption within the tax agency was published (5, footnote 6).  Aside from direct threats, there are more indirect methods:  much of the “private” advertising is actually advertising by government-affiliated firms (6-7).  As the authors note, “One of the characteristics of small developing countries is the relatively large influence of the government on business” (7).  I would re-phrase that to say that “one of the characteristics keeping countries underdeveloped is the relatively large influence of government on business.”

At any rate, the authors have a data set of 254 scandals, of which more than 150 were reported on only one paper’s front page (8).  They looked at front-page offerings, and consider a scandal “buried” if a paper does not print an article on a front page regarding that story.  They also found that 256,000 pesos (at 2000 values) led to a drop of half a cover (that is, a 37% drop) in corruption reporting for one month (13).  Non-government corruption coverage, meanwhile, was not affected by government payments (15-16).  On the other hand, there is a positive correlation between corruption coverage and circulation (17).  So this leads newspaper companies to come to a financial decision:  the drop of one front-page corruption story leads to 560,000 pesos from the government.  But each front-page story leads to 1.7 million pesos from subscribers (1.48 million more subscribers, each paying roughly 1.15 pesos apiece).  It would seem as though this should be a no-brainer, but if the marginal cost of a newspaper is 0.77 pesos, the authors note that this would be a break-even point:  0.77 = (1.7 – 0.56) / 1.48.  Actually, the authors have 0.75 pesos, but they use 0.58 in their calculation rather than 0.56, which I believe to be in error.
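The break-even arithmetic above is easy to check directly (using the figures as reported in the text, in millions of pesos):

```python
# Checking the break-even arithmetic from the text.
govt_loss_per_story = 0.56        # millions of pesos of government advertising lost per front-page story
subscriber_gain_per_story = 1.7   # millions of pesos gained from subscribers per front-page story
extra_copies = 1.48               # millions of additional copies sold per front-page story

break_even_marginal_cost = (subscriber_gain_per_story - govt_loss_per_story) / extra_copies
print(round(break_even_marginal_cost, 2))  # 0.77 pesos per copy, matching the text

# The authors' own version uses 0.58 rather than 0.56:
authors_version = (1.7 - 0.58) / 1.48
print(round(authors_version, 2))  # ~0.76, close to the 0.75 they report
```

A paper buries the story only if printing a marginal copy costs more than the break-even figure; with a marginal cost near 0.77 pesos, the government’s money and the subscribers’ money roughly cancel.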

My quick takeaway:  the best way to reduce corruption is to have a free press and a small, limited government which can neither afford nor be allowed to influence the media.

In The Papers: Mere Quibbles

Note to self:  never make George Selgin mad.  Selgin, in Mere Quibbles, lays the smack down on Philipp Bagus and David Howden.


Despite its title, Philipp Bagus and David Howden’s critique of The Theory of Free Banking does more than merely “quibble” with that book’s arguments: their criticisms of those arguments are such as to suggest that the very foundation upon which my defense of free banking rests is deeply flawed. Here I defend my work against Bagus and Howden’s criticisms, by showing that they rest upon careless or disingenuous readings of my arguments and a poor grasp of basic monetary economics.

I don’t have many notes from this paper, but it was kind of like watching Mike Tyson pummel some poor schmuck.  You want to see Bagus and Howden’s manager throw in the towel and call the fight off, it’s so bad.

The big thing to understand, and something which Bagus and Howden occasionally did but then forgot about, is that free banking focuses on balancing MV* based on money demand (1-2).  You need to stabilize MV, not just P (17), as the monetarists try to do.  Stabilizing P alone is something akin to Friedman’s k-percent rule (under which a central bank grows the money supply at a fixed rate each year), and often leads to more trouble than gain.

Another thing of note is that banks hold reserves because rivals will dump notes for redeemability (3).  This is something which was very common in the Scottish free banking example (about which Selgin has written in a great book):  the new kid on the block starts collecting deposits and issuing notes.  All of the other, established, banks gladly accept the new bank’s notes from their customers, and then they wheel in large numbers of the notes, hoping to bleed dry that bank’s reserves.  Banks which are not prepared fail, and new bank owners quickly learn to defend against competitors.

Later on, Selgin notes that “A bank borrower contributes no more to the demand for money than a ticket agent contributes to the demand for plays and concerts” (10).  This is a great point, and something which typically gets forgotten in the “fractional reserves are evil” camp.

The final note I have from this paper is that “a persistent divergence of the actual from the natural rate requires a persistent divergence of the actual from the equilibrium purchasing power of money” (19).  If the quantity of money supplied does not match the quantity of money demanded, interest rates will be out of alignment, leading potentially to an Austrian boom and bust cycle.

* – MV = Supply of Money × Velocity of Money.  In the classical equation of exchange, MV = PY.  In other words, the supply of money times its “velocity” (how often it changes hands) equals the price level times the total amount of output in an economy.
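A quick bit of toy arithmetic (my own numbers) makes the footnote concrete: if MV is held stable while real output Y grows, the equation of exchange forces the price level P to fall, which is the mild, benign deflation Selgin’s framework allows for.

```python
# Toy arithmetic for the equation of exchange, MV = PY
# (illustrative numbers only).

def price_level(m, v, y):
    """Solve MV = PY for the price level P."""
    return (m * v) / y

p0 = price_level(100.0, 2.0, 100.0)  # baseline: MV = 200, Y = 100
p1 = price_level(100.0, 2.0, 110.0)  # MV held constant, Y grows 10%

print(p0, round(p1, 2))  # the price level must fall as output grows
```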

In The Papers: The Repeated Failure Of Keynesianism

John Taylor has a new paper out, entitled An Empirical Analysis of the Revival of Fiscal Activism in the 2000s.


Macroeconomic data indicate that the three American discretionary countercyclical stimulus packages of the 2000s had little if any direct impact on consumption or government purchases, and thus did not stimulate the economy as Keynesian models would predict. Households largely saved the transfers and tax rebates. The federal government only increased purchases by a very small amount. State and local governments saved their stimulus grants and shifted expenditures from purchases toward transfers. Counterfactual simulations of the 2009-10 period show that a stimulus-induced decline in state and local government purchases was larger than the increase at the federal level. Counterfactual simulations also show that a larger stimulus package—with the proportions going to grants, federal purchases, and net transfers to households as in 2009-10—would not have increased government purchases or consumption by a larger amount. These results from the 2000s experience raise doubts about the efficacy of such packages, adding weight to similar assessments reached more than 30 years ago.

Very basic Keynesian theory holds that a drop in investment can be countered by an increase in government spending, temporary tax refunds, or handouts (2).  Keynesian economists use this theory to build models showing that an increase in government spending of $X will decrease unemployment by y%.  The most famous of these is the oft-derided ARRA unemployment chart published by Christina Romer.  When it comes to checking those projections, Taylor argues that you cannot use the model itself to verify whether the model worked:  the model will just spit out the same prediction over and over (2).  So if you take the ARRA unemployment reduction model, plug in the actual numbers, and look at the results, you’ll say, “Boy, it’s a good thing we had ARRA, because we would have been really beaten up otherwise!”  Unfortunately, this does not follow:  all you’re doing is using the model to verify the model.  Instead, you need a different technique:  actually study the data rather than plugging it back into the model.

Taylor focuses on three Keynesian stimulus periods:  the tax rebates of 2001 and 2008, and ARRA (5-6).  He notes that the 2008 and 2009 “stimuli” failed to increase personal consumption (7), and that the temporary stimulus impact is not statistically different from zero (8).  In other words, these Keynesian stimuli failed.  Even better, ARRA failed to operate how it was supposed to:  the change in federal purchases as a percentage of GDP, or even as a percentage of total ARRA spending, was tiny (12), far too small to “turn around the economy.”  State and local money was supposed to be used for infrastructure (you know—those “shovel-ready” jobs that we’re supposed to joke about now) and the purchase of other goods and services.  Instead, Taylor notes that state spending leveled off from 2009-2011, meaning that states used temporary ARRA funds to replace net borrowing over that period (12-13).  So the crowding-out effect was almost 100% for those funds.

And for the part that didn’t crowd out other spending, states tended to use ARRA for Medicaid and TANF (welfare), so these were transfer programs rather than new purchases (28).  Even within the confines of the Keynesian model, this is a major failure.  On the other hand, it does give credence to the permanent income hypothesis:  just as people saved the one-off tax rebates in 2001 and 2008, states “saved” their ARRA funds as well, using them to reduce borrowing (29).

As a final note, the last decade shows that crony capitalism is not limited to one party.  George W. Bush and Barack Obama are very similar when it comes down to action (even though Obama spends half of his time tactlessly complaining about Bush).  They both bought into ideas of Keynesian “stimulus” and both have hurt us as a result.