36 Chambers – The Legendary Journeys: Execution to the max!

September 24, 2011

Review You Can Use [tm]: Heavy Rain (PS 3)

Filed under: Reviews you can use [tm]!, Video Games — Tony Demchak @ 2:04 am

Heavy Rain

Quantic Dream/Sony Computer Entertainment Europe (PS 3)

Interactive Movie

Pros

– The story is gripping

– A fair bit of replay value (18 endings and a bunch of minor variations)

– It’s hilarious to hear European actors acting American (like the Boston FBI Agent, Norman Jayden, who uses a ridiculous New England accent for his name and nothing else)

Cons

– SIXAXIS!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

– Occasionally nude males (not full frontal, but bad enough)

– The prologue is the worst part of the game (seriously, by a lot)

Recommendation: Another awesome PS 3 exclusive. It’s amazingly well done, very rich, and you never feel entirely out of control (or in control), which is excellent. I bet the Move version is actually better, since Sixaxis remains an idiotic idea.

September 23, 2011

In The Papers: Inventive Activity

Filed under: Economics, In The Papers — Kevin Feasel @ 1:27 pm

Naomi Lamoreaux, Kenneth Sokoloff, and Dhanoos Sutthiphisal have a working paper entitled, The Reorganization of Inventive Activity in the United States during the Early Twentieth Century.

Abstract:

The standard view of U.S. technological history is that the locus of invention shifted during the early twentieth century to large firms whose in-house research laboratories were superior sites for advancing the complex technologies of the second industrial revolution. In recent years this view has been subject to increasing criticism. At the same time, new research on equity markets during the early twentieth century suggests that smaller, more entrepreneurial enterprises were finding it easier to gain financial backing for technological discovery. We use data on the assignment (sale or transfer) of patents to explore the extent to which, and how, inventive activity was reorganized during this period. We find that two alternative modes of technological discovery developed in parallel during the early twentieth century. The first, concentrated in the Middle Atlantic region, centered on large firms with in-house R&D labs and superior access to the region’s rapidly growing equity markets. The other, located mainly in the East North Central region, consisted of smaller, more entrepreneurial enterprises that drew primarily on local sources of funds. Both modes seem to have made roughly equivalent contributions to technological change through the 1920s. The subsequent dominance of large firms seems to have been propelled by a differential access to capital during the Great Depression that was subsequently reinforced by the regulatory and military procurement policies of the federal government.

The standard view of inventive activity is that individuals dominated technological discovery until the early 1900s, when large, in-house R&D departments took over (2).  The big R&D departments had advantages such as larger budgets and more resources, and as scientific experiments became more and more expensive to perform (requiring more equipment, for example), the big firm naturally became the generator of inventive value.

Using the assignment (either sale or transfer) of patents as a measure (3), the authors argue that this is not really the case.  Instead, in-house departments had both advantages and disadvantages:  yes, they had more resources, more concentrated resources, access to manufacturing, and easier internal sale (4-5), but big R&D departments also had information and contracting problems, as well as little real connection to the “real world” (5).  The most valuable patents acquired by large firms in the 1920s actually started outside the firms’ R&D departments (7).

This isn’t to say that large firms weren’t increasing their share during this time:  in 1870-71, the assignment-at-issue rate was 16.1%, whereas by 1928-29, it was 56.1%.  That’s an undeniably large increase.  But there were legitimate regional and curb (high-tech) markets which existed to finance smaller firms (11), and smaller firms were still productive during this time—they were responsible for 13-22% of patents (15).

When you break things down regionally, you can see two completely different stories.  The mid-Atlantic and East North Central regions were each responsible for approximately one-third of patents during this time (16).  The East North Central region (which includes Illinois, Indiana, Michigan, Ohio, and Wisconsin) was heavily small-firm/individual, whereas the mid-Atlantic specialized in R&D and large firms (16-17).  But large-firm patents were not more important or valuable, and the authors argue that they may even have been less valuable (19-20).  And some of these labs existed mostly to vet outside ideas (23-24) rather than to generate new ones.

So why did large R&D firms take over?  There were a few closely related reasons.  The Great Depression hit the ENC region much harder than the mid-Atlantic (31).  Furthermore, government spending, especially during World War II, tended to focus on large firms with R&D departments.  SEC filing requirements also became tougher, which dried up those curb markets (33).  So once again, government policy had unintended(?) long-term consequences which led to corporatism.

September 21, 2011

New Books Purchased

Filed under: (In)Security, Computinating, Database Administration — Kevin Feasel @ 6:21 pm

Lately, I have been on a security kick, and I have turned that into the purchase of three books on the topic.  The first two come from recommendations on pauldotcom.  The first is Metasploit: The Penetration Tester’s Guide.  It’s already a little out of date (it doesn’t cover version 4), but it sounds like a great introduction to the framework.

The second book is Hacking:  The Art of Exploitation.  It sounds like a graduate-level course in computer security, going deep into the material rather than skimming the surface.

Finally, going back to database administration—that is, after all, my day job—I picked up Denny Cherry’s Securing SQL Server:  Protecting Your Database From Attackers.  Just seeing his name on the book led me to believe that it would be good, and I have heard quite positive reviews from other people I trust.

All three of these books sound like they should be great, and I’ll need to make some time to start reading them after they arrive.

September 20, 2011

Free Currency Competition

Filed under: Catallactics — Kevin Feasel @ 6:14 pm

Larry White has a great write-up of the Free Competition in Currency Act of 2011.  It’s important that he notes that it would not eliminate the USD, or even the Federal Reserve’s role in currency markets.  Rather, it would re-create the open competition which existed for over a century in the United States.  The best part about this act is that it does not mandate anything either way.  If people decide that they want to use only Federal Reserve USD, they are free to.  But if there are better alternatives, people are free to switch.

One problem I could see is that eliminating the USD’s status as legal tender for “all debts” could mean higher transaction costs in negotiating currencies.  This would also require that the Supreme Court not try to re-establish Juilliard v. Greenman.

September 19, 2011

Windows 8 In A Nutshell

Filed under: Computinating — Kevin Feasel @ 5:28 pm

Microsoft is going back to their roots with Windows 8.

September 18, 2011

Some Thoughts On Madden 12 Drafting

Filed under: Sports, Video Games — Kevin Feasel @ 10:37 am

Last year, I wrote a post on Madden 11 drafting.  The punch line is that, if you have a system, it’s easy to break Madden drafting and consistently get late-round gems.  They changed this in Madden 12, to the point where I have had success, but haven’t yet figured out a way to bust the system.

With Madden 10 and 11, I considered it a disappointment to draft a guy with C potential any earlier than, say, the 6th or 7th round.  With this edition, I have drafted guys with C and D potential in the first round, so I don’t have a system down yet.

So far, it seems that Pro Days aren’t very important for most positions.  They provide some information on “intangibles” (play recognition, stamina, injury), but the caption—which specifically mentions “catching”—doesn’t apply for wide receivers or tight ends:  Pro Days don’t unlock catching skills for those positions.  The combine is important for gathering information on physical attributes:  speed, strength, that kind of thing.

So far, my strategy has almost mirrored my Madden 10 and 11 strategy:  deep dives on specific positions.  If I want a wide receiver, I’ll scout lots of receivers.  Unfortunately, this leaves me guessing further down in the draft.  Sometimes I get lucky—I have drafted two outstanding linebackers in the 4th round, despite knowing next to nothing about them—but that doesn’t always work.

Offensive linemen are pretty easy to draft:  in the in-season scouting, you can get pass and run blocking, as well as impact run blocking.  From this, you know which linemen are likely to be good and which are likely to suck, so you can narrow down your search pretty quickly.  There aren’t any other positions that are as easy to draft around, but this is part of how I’ve been able to draft a number of A- and B-potential offensive linemen without scouting them too hard.

Starting next season, I think I am going to focus more on a targeted-risk approach.  I have already started to do this with the individual workouts, since you only get five of them.  My basic plan with individual workouts is as follows:  figure out five positions that you really need to improve.  In my last draft, they were WR, TE, CB, MLB, and DT.  I also needed some HB talent, but had run a guy through the combine and pro day, so I knew I was going to draft him.  This draft has led me to an embryonic strategy:  for each of those five positions, pick two guys, both of whom you expect you could draft.  At that point, there is some probability that at least one of them will be a good player.  In the case above, I definitely had two wide receivers and two tight ends in mind; for the other three positions, I had not applied this and was kind of guessing.  Naturally, if you only have one first-round pick, you shouldn’t be scouting ten first-round guys:  probably 7-8 of them will be gone before you get to your next pick, which is a huge waste of effort.  Instead, look at some later-round guys as well, especially 2nd-4th round folks.

Anyhow, once you have the pairs, pick one of the two at each position and scout him.  If the workout turns out really well, slot him in for drafting; otherwise, pick the other guy.  What you are doing here is some basic Bayesian inference: updating your priors.  Ideally, you would hit upon five gems, slot them so you are sure to draft them, and improve your team considerably.  Just as importantly, it helps you avoid busts.  In my case, I hit upon an awesome WR and a crappy TE (who had amazing physical stats, but turned out to be an oaf who couldn’t catch a ball with superglue on his hands and blocked like his superglued hands got stuck on his helmet).  Thus, I slotted in my WR and picked the other TE in my pair.  Unfortunately, my WR got chosen #1 overall (whoops…  this is the risk when scouting awesome skill position guys), so I needed to choose the other guy in that pair.  I was, however, able to get that other TE, who turned out to be awesome.  Unfortunately, my #2 choice at WR had a 58 catch rating and is limited to kick returns.

This strategy is not nearly as effective at gaming the system as last year’s was, but I am going to give it a full workout next season.  The concept of Bayesian inference is solid, and the idea behind it is that there are X talented players at a position in a draft, where X is some number ≥ 0.  At thin positions like kicker or punter, X may actually equal 0, but at most spots, there will be at least a couple B-level guys, and probably at least one or two A-level guys.  Let’s say that there are 2 A, 3 B, 4 C, 3 D, and 2 F players at a position.  Basic scouting will help you ferret out some of the F and D guys:  players whose visible skills are really far off the mark are probably safe to skip.  Part of this is based on the assumption that “non-important” stats (like stiff arm for wide receivers) are correlated, at least to some extent, with overall player talent.  Again, this is not 100% accurate, but I believe the rate is higher than 50%, so it beats simply guessing.  Doing this means that your effective pool will probably look closer to 2-3-2-1-0, as you’ve filtered out the obviously crappy players.  Thus, when you choose two players, you’re choosing from a higher-quality subset of the entire player distribution, where your highest-likelihood choice is a B and the likelihood of a bust is pretty low (but not 0%).  If you scout one of the two players, you learn important information about him, including exactly where in the distribution he lies.  If he’s a C or D, it becomes that much less likely that the other guy you’ve chosen is a C or D, so you can feel better about drafting the unknown.  But if the scouted player is an A or even a B, you probably should draft him, as the probability of the unknown being a better choice is relatively lower and the probability of his being a bust relatively higher.
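For the probability-minded, the update above can be worked out exactly.  Here is a minimal sketch in Python using the filtered 2-3-2-1-0 pool from that example (the pool numbers and the without-replacement treatment are my own illustration, not anything the game exposes):

```python
from fractions import Fraction
from collections import Counter

# Post-filter talent pool at one position: 2 A, 3 B, 2 C, 1 D, 0 F.
pool = Counter({"A": 2, "B": 3, "C": 2, "D": 1})

def p_partner_good(scouted_grade, pool, good=("A", "B")):
    """P(the unscouted partner is A or B | the scouted player's grade),
    treating both picks as draws without replacement from the pool."""
    remaining = pool.copy()
    remaining[scouted_grade] -= 1  # scouting reveals and removes one player
    total = sum(remaining.values())
    return Fraction(sum(remaining[g] for g in good), total)

prior = Fraction(pool["A"] + pool["B"], sum(pool.values()))
print(prior)                      # 5/8: chance a random pick is A or B
print(p_partner_good("C", pool))  # 5/7: a bad scout raises the partner's odds
print(p_partner_good("A", pool))  # 4/7: a good scout lowers them
```

This matches the intuition in the paragraph above: scouting a C or D bumps the unknown partner’s odds of being good from 5/8 to 5/7, while scouting an A drops them to 4/7, which is why you draft the known A and save the unknown for when the scout disappoints.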

This particular strategy reduces the risk of drafting a bust, but it does not mean that you will draft the highest-quality guy out there.  There is simply too much uncertainty now, so the best you can do is try to mitigate it by playing percentages.  So it is possible that you’ve chosen a guy with 99 potential and a guy with 85 potential, scouted the 85 guy, and then decided to draft him instead because he was good enough.  But doing it this way spreads out your most precious draft chits (the individual workouts) and makes it likely that you will get up to 5 quality players in each draft before you start guessing in the later rounds.

September 17, 2011

Portal Free Until September 20th

Filed under: Video Games — Kevin Feasel @ 11:40 pm

Steam is allowing people to download the original Portal for free until September 20th.  I’ve only played Portal 2 (in cooperative mode), so I am interested in checking out the original.

Hat tip to our resident Penguatroll for the heads up.

The Collapse Of A Nobel-Worthy Career

Filed under: Economics — Kevin Feasel @ 10:37 pm

When the mighty fall, they fall hard.  Bruno Frey’s self-plagiarism charges appear to be worse than he first let on.

September 16, 2011

In The Papers: IQ and Economic Growth

Filed under: Economics, In The Papers — Kevin Feasel @ 1:17 pm

Garret Jones, R.W. Hafer, and Bradley K. Hobbs are getting awfully close to thoughtcrime with their paper, IQ and the Economic Growth of U.S. States.

Abstract:

In the cross-country literature, cognitive skills are robust predictors of economic growth. We investigate claims by psychologists that the same is true at the state level. In a variety of specifications using four proxies for average state IQ used in the psychology literature, little evidence is found for a robust IQ-growth relationship at the state level.

The authors point out that IQ matters in cross-country surveys of economic growth—maybe even more than economic freedom (2).  In addition, IQ matters for individual earnings (4).  Both of these are close to heresy in polite society, but they’re true.  So, the authors say, let’s check out if this holds for US states as well.

They find that there are a few proxies for IQ in the US, but that none of them are really all that good.  The first is SAT scores.  These are a problem because smarter people on the coasts tend to take the SAT, whereas smarter people in the midwest and mountain west tend to take the ACT, so neither test evenly samples each state’s population.  Other studies look at the ACT alone and at a composite of SAT and ACT scores.  Finally, there are the federal NAEP tests (5-6).

The authors found that, using a simple correlation test, IQ did correlate with economic growth in three of the four studies; the only exception was the ACT-only survey (11).  Using a more detailed correlation method, the SAT and NAEP surveys showed a positive correlation, the ACT survey a negative correlation, and the composite no correlation (12).  And even for the SAT and NAEP surveys, the results were significantly weaker than national-level results:  for cross-country surveys, a 1 IQ point increase leads to a 0.1% increase in national economic growth.  On the cross-state level, a 1-point increase leads to a 0.05% increase in growth (NAEP), or even less than 0.025% (SAT) (13).
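To put those coefficients in perspective, here is a back-of-the-envelope sketch (my own illustration, not from the paper) compounding each estimate over 30 years for a hypothetical 5-point IQ gap between two states:

```python
# Compound the per-IQ-point growth coefficients over 30 years
# for a hypothetical 5-point IQ gap between two states.
def extra_output(iq_points, coef_pct_per_point, years):
    """Cumulative extra output from an IQ advantage, where each point
    adds coef_pct_per_point percent to the annual growth rate."""
    annual = iq_points * coef_pct_per_point / 100.0
    return (1.0 + annual) ** years - 1.0

# Cross-country estimate (0.1% per point) vs. state-level NAEP (0.05%)
print(f"cross-country: {extra_output(5, 0.10, 30):.1%}")
print(f"state (NAEP):  {extra_output(5, 0.05, 30):.1%}")
```

Under the cross-country coefficient, the gap compounds to roughly 16% more output after 30 years, versus about 8% under the state-level NAEP estimate, so even the halved coefficient is economically meaningful if it is real.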

The big problem is that the IQ differences as measured are vast and flaky:  one survey has the IQ of north central states (Minnesota, Wisconsin, North Dakota, South Dakota, etc.) as 102.2, whereas another survey had them at 83.7 (27).  Considering that one survey (27) had North Dakota at 74.5 and Minnesota at 88.5 (note that these are two of the highest-performing states in terms of student achievement), I would chalk this up to a measurement problem rather than evidence that something which is true on the individual and national levels is untrue at the state level.

September 15, 2011

Congratulations To Exceptional DBA Jeff Moden

Filed under: Database Administration — Kevin Feasel @ 7:31 am

Jeff Moden is the Exceptional DBA of 2011. Considering how much I have learned and appropriated (with attribution) from him, there’s no way I couldn’t vote for him, and I’m sure that my vote pushed him over the top. I’m glad that he’s getting the recognition he deserves; the next stop is MVP…
